netdev.vger.kernel.org archive mirror
From: "Arinzon, David" <darinzon@amazon.com>
To: "Nelson, Shannon" <shannon.nelson@amd.com>,
	David Miller <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>
Cc: "Woodhouse, David" <dwmw@amazon.co.uk>,
	"Machulsky, Zorik" <zorik@amazon.com>,
	"Matushevsky, Alexander" <matua@amazon.com>,
	"Bshara, Saeed" <saeedb@amazon.com>,
	"Wilson, Matt" <msw@amazon.com>,
	"Liguori, Anthony" <aliguori@amazon.com>,
	"Bshara, Nafea" <nafea@amazon.com>,
	"Belgazal, Netanel" <netanel@amazon.com>,
	"Saidi, Ali" <alisaidi@amazon.com>,
	"Herrenschmidt, Benjamin" <benh@amazon.com>,
	"Kiyanovski, Arthur" <akiyano@amazon.com>,
	"Dagan, Noam" <ndagan@amazon.com>,
	"Agroskin, Shay" <shayagr@amazon.com>,
	"Itzko, Shahar" <itzko@amazon.com>,
	"Abboud, Osama" <osamaabb@amazon.com>,
	"Ostrovsky, Evgeny" <evostrov@amazon.com>,
	"Tabachnik, Ofir" <ofirt@amazon.com>,
	"Koler, Nati" <nkolder@amazon.com>
Subject: RE: [PATCH v1 net-next 11/11] net: ena: Reduce lines with longer column width boundary
Date: Tue, 30 Jan 2024 09:39:36 +0000	[thread overview]
Message-ID: <c72880e38eb640f2bc8896f2d1113b73@amazon.com> (raw)
In-Reply-To: <df3df051-9dd8-4797-b402-db1a019e902b@amd.com>

> On 1/29/2024 12:55 AM, darinzon@amazon.com wrote:
> >
> > From: David Arinzon <darinzon@amazon.com>
> >
> > This patch reduces some of the lines by removing newlines
> > where more variables or print strings can be pushed back
> > to the previous line while still adhering to the styling
> > guidelines.
> >
> > Signed-off-by: David Arinzon <darinzon@amazon.com>
> > ---
> >   drivers/net/ethernet/amazon/ena/ena_com.c     | 315 +++++++-----------
> >   drivers/net/ethernet/amazon/ena/ena_eth_com.c |  49 ++-
> >   drivers/net/ethernet/amazon/ena/ena_eth_com.h |  15 +-
> >   drivers/net/ethernet/amazon/ena/ena_netdev.c  |  32 +-
> >   4 files changed, 151 insertions(+), 260 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c
> > index 675ee72..9e9e4a0 100644
> > --- a/drivers/net/ethernet/amazon/ena/ena_com.c
> > +++ b/drivers/net/ethernet/amazon/ena/ena_com.c
> > @@ -90,8 +90,7 @@ static int ena_com_admin_init_sq(struct ena_com_admin_queue *admin_queue)
> >          struct ena_com_admin_sq *sq = &admin_queue->sq;
> >          u16 size = ADMIN_SQ_SIZE(admin_queue->q_depth);
> >
> > -       sq->entries = dma_alloc_coherent(admin_queue->q_dmadev, size,
> > -                                        &sq->dma_addr, GFP_KERNEL);
> > +       sq->entries = dma_alloc_coherent(admin_queue->q_dmadev, size, &sq->dma_addr, GFP_KERNEL);
> >
> >          if (!sq->entries) {
> >                  netdev_err(ena_dev->net_device, "Memory allocation failed\n");
> > @@ -113,8 +112,7 @@ static int ena_com_admin_init_cq(struct ena_com_admin_queue *admin_queue)
> >          struct ena_com_admin_cq *cq = &admin_queue->cq;
> >          u16 size = ADMIN_CQ_SIZE(admin_queue->q_depth);
> >
> > -       cq->entries = dma_alloc_coherent(admin_queue->q_dmadev, size,
> > -                                        &cq->dma_addr, GFP_KERNEL);
> > +       cq->entries = dma_alloc_coherent(admin_queue->q_dmadev, size, &cq->dma_addr, GFP_KERNEL);
> >
> >          if (!cq->entries) {
> >                  netdev_err(ena_dev->net_device, "Memory allocation failed\n");
> > @@ -136,8 +134,7 @@ static int ena_com_admin_init_aenq(struct ena_com_dev *ena_dev,
> >
> >          ena_dev->aenq.q_depth = ENA_ASYNC_QUEUE_DEPTH;
> >          size = ADMIN_AENQ_SIZE(ENA_ASYNC_QUEUE_DEPTH);
> > -       aenq->entries = dma_alloc_coherent(ena_dev->dmadev, size,
> > -                                          &aenq->dma_addr, GFP_KERNEL);
> > +       aenq->entries = dma_alloc_coherent(ena_dev->dmadev, size, &aenq->dma_addr, GFP_KERNEL);
> >
> >          if (!aenq->entries) {
> >                  netdev_err(ena_dev->net_device, "Memory allocation failed\n");
> > @@ -155,14 +152,13 @@ static int ena_com_admin_init_aenq(struct ena_com_dev *ena_dev,
> >
> >          aenq_caps = 0;
> >          aenq_caps |= ena_dev->aenq.q_depth & ENA_REGS_AENQ_CAPS_AENQ_DEPTH_MASK;
> > -       aenq_caps |= (sizeof(struct ena_admin_aenq_entry)
> > -                     << ENA_REGS_AENQ_CAPS_AENQ_ENTRY_SIZE_SHIFT) &
> > -                    ENA_REGS_AENQ_CAPS_AENQ_ENTRY_SIZE_MASK;
> > +       aenq_caps |=
> > +               (sizeof(struct ena_admin_aenq_entry) << ENA_REGS_AENQ_CAPS_AENQ_ENTRY_SIZE_SHIFT) &
> > +               ENA_REGS_AENQ_CAPS_AENQ_ENTRY_SIZE_MASK;
> 
> This might be better served by using the FIELD_PREP macro
> 

I agree. We plan to utilize the FIELD_GET and FIELD_PREP macros
throughout the code, as there are multiple places that look similar
to this one and would benefit from the change.
I'd prefer to address it in a separate patchset at a later stage,
if that's OK with you.

> >          writel(aenq_caps, ena_dev->reg_bar + ENA_REGS_AENQ_CAPS_OFF);
> >
> >          if (unlikely(!aenq_handlers)) {
> > -               netdev_err(ena_dev->net_device,
> > -                          "AENQ handlers pointer is NULL\n");
> > +               netdev_err(ena_dev->net_device, "AENQ handlers pointer is NULL\n");
> >                  return -EINVAL;
> >          }
> >
> > @@ -189,14 +185,12 @@ static struct ena_comp_ctx *get_comp_ctxt(struct ena_com_admin_queue *admin_queu
> >          }
> >
> >          if (unlikely(!admin_queue->comp_ctx)) {
> > -               netdev_err(admin_queue->ena_dev->net_device,
> > -                          "Completion context is NULL\n");
> > +               netdev_err(admin_queue->ena_dev->net_device, "Completion context is NULL\n");
> >                  return NULL;
> >          }
> >
> >          if (unlikely(admin_queue->comp_ctx[command_id].occupied && capture)) {
> > -               netdev_err(admin_queue->ena_dev->net_device,
> > -                          "Completion context is occupied\n");
> > +               netdev_err(admin_queue->ena_dev->net_device, "Completion context is occupied\n");
> >                  return NULL;
> >          }
> >
> > @@ -226,8 +220,7 @@ static struct ena_comp_ctx *__ena_com_submit_admin_cmd(struct ena_com_admin_queu
> >          /* In case of queue FULL */
> >          cnt = (u16)atomic_read(&admin_queue->outstanding_cmds);
> >          if (cnt >= admin_queue->q_depth) {
> > -               netdev_dbg(admin_queue->ena_dev->net_device,
> > -                          "Admin queue is full.\n");
> > +               netdev_dbg(admin_queue->ena_dev->net_device, "Admin queue is full.\n");
> >                  admin_queue->stats.out_of_space++;
> >                  return ERR_PTR(-ENOSPC);
> >          }
> > @@ -274,8 +267,7 @@ static int ena_com_init_comp_ctxt(struct ena_com_admin_queue *admin_queue)
> >          struct ena_comp_ctx *comp_ctx;
> >          u16 i;
> >
> > -       admin_queue->comp_ctx =
> > -               devm_kzalloc(admin_queue->q_dmadev, size, GFP_KERNEL);
> > +       admin_queue->comp_ctx = devm_kzalloc(admin_queue->q_dmadev, size, GFP_KERNEL);
> >          if (unlikely(!admin_queue->comp_ctx)) {
> >                  netdev_err(ena_dev->net_device, "Memory allocation failed\n");
> >                  return -ENOMEM;
> > @@ -336,20 +328,17 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_dev,
> >                  dev_node = dev_to_node(ena_dev->dmadev);
> >                  set_dev_node(ena_dev->dmadev, ctx->numa_node);
> >                  io_sq->desc_addr.virt_addr =
> > -                       dma_alloc_coherent(ena_dev->dmadev, size,
> > -                                          &io_sq->desc_addr.phys_addr,
> > +                       dma_alloc_coherent(ena_dev->dmadev, size, &io_sq->desc_addr.phys_addr,
> >                                             GFP_KERNEL);
> >                  set_dev_node(ena_dev->dmadev, dev_node);
> >                  if (!io_sq->desc_addr.virt_addr) {
> >                          io_sq->desc_addr.virt_addr =
> >                                  dma_alloc_coherent(ena_dev->dmadev, size,
> > -                                                  &io_sq->desc_addr.phys_addr,
> > -                                                  GFP_KERNEL);
> > +                                                  &io_sq->desc_addr.phys_addr, GFP_KERNEL);
> >                  }
> >
> >                  if (!io_sq->desc_addr.virt_addr) {
> > -                       netdev_err(ena_dev->net_device,
> > -                                  "Memory allocation failed\n");
> > +                       netdev_err(ena_dev->net_device, "Memory allocation failed\n");
> >                          return -ENOMEM;
> >                  }
> >          }
> > @@ -367,16 +356,14 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_dev,
> >
> >                  dev_node = dev_to_node(ena_dev->dmadev);
> >                  set_dev_node(ena_dev->dmadev, ctx->numa_node);
> > -               io_sq->bounce_buf_ctrl.base_buffer =
> > -                       devm_kzalloc(ena_dev->dmadev, size, GFP_KERNEL);
> > +               io_sq->bounce_buf_ctrl.base_buffer = devm_kzalloc(ena_dev->dmadev, size, GFP_KERNEL);
> >                  set_dev_node(ena_dev->dmadev, dev_node);
> >                  if (!io_sq->bounce_buf_ctrl.base_buffer)
> >                          io_sq->bounce_buf_ctrl.base_buffer =
> >                                  devm_kzalloc(ena_dev->dmadev, size, GFP_KERNEL);
> >
> >                  if (!io_sq->bounce_buf_ctrl.base_buffer) {
> > -                       netdev_err(ena_dev->net_device,
> > -                                  "Bounce buffer memory allocation failed\n");
> > +                       netdev_err(ena_dev->net_device, "Bounce buffer memory allocation failed\n");
> >                          return -ENOMEM;
> >                  }
> >
> > @@ -425,13 +412,11 @@ static int ena_com_init_io_cq(struct ena_com_dev *ena_dev,
> >          prev_node = dev_to_node(ena_dev->dmadev);
> >          set_dev_node(ena_dev->dmadev, ctx->numa_node);
> >          io_cq->cdesc_addr.virt_addr =
> > -               dma_alloc_coherent(ena_dev->dmadev, size,
> > -                                  &io_cq->cdesc_addr.phys_addr, GFP_KERNEL);
> > +               dma_alloc_coherent(ena_dev->dmadev, size, &io_cq->cdesc_addr.phys_addr, GFP_KERNEL);
> >          set_dev_node(ena_dev->dmadev, prev_node);
> >          if (!io_cq->cdesc_addr.virt_addr) {
> >                  io_cq->cdesc_addr.virt_addr =
> > -                       dma_alloc_coherent(ena_dev->dmadev, size,
> > -                                          &io_cq->cdesc_addr.phys_addr,
> > +                       dma_alloc_coherent(ena_dev->dmadev, size, &io_cq->cdesc_addr.phys_addr,
> >                                             GFP_KERNEL);
> >          }
> >
> > @@ -514,8 +499,8 @@ static int ena_com_comp_status_to_errno(struct ena_com_admin_queue *admin_queue,
> >                                          u8 comp_status)
> >   {
> >          if (unlikely(comp_status != 0))
> > -               netdev_err(admin_queue->ena_dev->net_device,
> > -                          "Admin command failed[%u]\n", comp_status);
> > +               netdev_err(admin_queue->ena_dev->net_device, "Admin command failed[%u]\n",
> > +                          comp_status);
> >
> >          switch (comp_status) {
> >          case ENA_ADMIN_SUCCESS:
> > @@ -580,8 +565,7 @@ static int ena_com_wait_and_process_admin_cq_polling(struct ena_comp_ctx *comp_c
> >          }
> >
> >          if (unlikely(comp_ctx->status == ENA_CMD_ABORTED)) {
> > -               netdev_err(admin_queue->ena_dev->net_device,
> > -                          "Command was aborted\n");
> > +               netdev_err(admin_queue->ena_dev->net_device, "Command was aborted\n");
> >                  spin_lock_irqsave(&admin_queue->q_lock, flags);
> >                  admin_queue->stats.aborted_cmd++;
> >                  spin_unlock_irqrestore(&admin_queue->q_lock, flags);
> > @@ -589,8 +573,7 @@ static int ena_com_wait_and_process_admin_cq_polling(struct ena_comp_ctx *comp_c
> >                  goto err;
> >          }
> >
> > -       WARN(comp_ctx->status != ENA_CMD_COMPLETED, "Invalid comp status %d\n",
> > -            comp_ctx->status);
> > +       WARN(comp_ctx->status != ENA_CMD_COMPLETED, "Invalid comp status %d\n", comp_ctx->status);
> >
> >          ret = ena_com_comp_status_to_errno(admin_queue, comp_ctx->comp_status);
> >   err:
> > @@ -634,8 +617,7 @@ static int ena_com_set_llq(struct ena_com_dev *ena_dev)
> >                                              sizeof(resp));
> >
> >          if (unlikely(ret))
> > -               netdev_err(ena_dev->net_device,
> > -                          "Failed to set LLQ configurations: %d\n", ret);
> > +               netdev_err(ena_dev->net_device, "Failed to set LLQ configurations: %d\n", ret);
> >
> >          return ret;
> >   }
> > @@ -658,8 +640,7 @@ static int ena_com_config_llq_info(struct ena_com_dev *ena_dev,
> >                          llq_default_cfg->llq_header_location;
> >          } else {
> >                  netdev_err(ena_dev->net_device,
> > -                          "Invalid header location control, supported: 0x%x\n",
> > -                          supported_feat);
> > +                          "Invalid header location control, supported: 0x%x\n", supported_feat);
> >                  return -EINVAL;
> >          }
> >
> > @@ -681,8 +662,8 @@ static int ena_com_config_llq_info(struct ena_com_dev *ena_dev,
> >
> >                          netdev_err(ena_dev->net_device,
> >                                     "Default llq stride ctrl is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n",
> > -                                  llq_default_cfg->llq_stride_ctrl,
> > -                                  supported_feat, llq_info->desc_stride_ctrl);
> > +                                  llq_default_cfg->llq_stride_ctrl, supported_feat,
> > +                                  llq_info->desc_stride_ctrl);
> 
> Most of these changes make sense, but seem less useful in cases like
> this where the line count doesn't change.
> 
> sln
> 

I understand your point. This is a uniform application of the clang-format
tool with its penalty settings, which involves some compromises.
Most of the changes make the code better and more readable, but there are
a few corner cases where the result is merely acceptable, and I preferred
not to override the tool's output with manual changes.
Let me know what you think.
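For context, the trade-off being discussed is driven by clang-format's
penalty system. A kernel-style configuration along the following lines is
what decides whether arguments are pulled back onto the previous line or
the line is broken after the open parenthesis. This is an illustrative
sketch only; the option names are real clang-format options, but the
values are assumptions, not the configuration actually used for this
series.

```yaml
# Illustrative kernel-style .clang-format sketch (values are
# assumptions, not the settings used for this patchset).
BasedOnStyle: LLVM
IndentWidth: 8
ContinuationIndentWidth: 8
UseTab: Always
ColumnLimit: 100
# Relative penalties: a low first-parameter penalty favors keeping
# arguments on the call line; a high excess-character penalty still
# forces a break once the column limit would be exceeded.
PenaltyBreakBeforeFirstCallParameter: 30
PenaltyExcessCharacter: 100
PenaltyBreakString: 10
```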

Thanks!
David

> >                  }
> >          } else {
> >                  llq_info->desc_stride_ctrl = 0;
> > @@ -704,8 +685,7 @@ static int ena_com_config_llq_info(struct ena_com_dev *ena_dev,
> >                          llq_info->desc_list_entry_size = 256;
> >                  } else {
> >                          netdev_err(ena_dev->net_device,
> > -                                  "Invalid entry_size_ctrl, supported: 0x%x\n",
> > -                                  supported_feat);
> > +                                  "Invalid entry_size_ctrl, supported: 0x%x\n", supported_feat);
> >                          return -EINVAL;
> >                  }
> >
> > @@ -750,8 +730,8 @@ static int ena_com_config_llq_info(struct ena_com_dev *ena_dev,
> >
> >                  netdev_err(ena_dev->net_device,
> >                             "Default llq num descs before header is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n",
> > -                          llq_default_cfg->llq_num_decs_before_header,
> > -                          supported_feat, llq_info->descs_num_before_header);
> > +                          llq_default_cfg->llq_num_decs_before_header, supported_feat,
> > +                          llq_info->descs_num_before_header);
> >          }
> >          /* Check for accelerated queue supported */
> >          llq_accel_mode_get = llq_features->accel_mode.u.get;
> > @@ -767,8 +747,7 @@ static int ena_com_config_llq_info(struct ena_com_dev *ena_dev,
> >
> >          rc = ena_com_set_llq(ena_dev);
> >          if (rc)
> > -               netdev_err(ena_dev->net_device,
> > -                          "Cannot set LLQ configuration: %d\n", rc);
> > +               netdev_err(ena_dev->net_device, "Cannot set LLQ configuration: %d\n", rc);
> >
> >          return rc;
> >   }
> > @@ -780,8 +759,7 @@ static int ena_com_wait_and_process_admin_cq_interrupts(struct ena_comp_ctx *com
> >          int ret;
> >
> >          wait_for_completion_timeout(&comp_ctx->wait_event,
> > -                                   usecs_to_jiffies(
> > -                                           admin_queue->completion_timeout));
> > +                                   usecs_to_jiffies(admin_queue->completion_timeout));
> >
> >          /* In case the command wasn't completed find out the root cause.
> >           * There might be 2 kinds of errors
> > @@ -797,8 +775,7 @@ static int ena_com_wait_and_process_admin_cq_interrupts(struct ena_comp_ctx *com
> >                  if (comp_ctx->status == ENA_CMD_COMPLETED) {
> >                          netdev_err(admin_queue->ena_dev->net_device,
> >                                     "The ena device sent a completion but the driver didn't receive a MSI-X interrupt (cmd %d), autopolling mode is %s\n",
> > -                                  comp_ctx->cmd_opcode,
> > -                                  admin_queue->auto_polling ? "ON" : "OFF");
> > +                                  comp_ctx->cmd_opcode, admin_queue->auto_polling ? "ON" : "OFF");
> >                          /* Check if fallback to polling is enabled */
> >                          if (admin_queue->auto_polling)
> >                                  admin_queue->polling = true;
> > @@ -867,15 +844,13 @@ static u32 ena_com_reg_bar_read32(struct ena_com_dev *ena_dev, u16 offset)
> >          if (unlikely(i == timeout)) {
> >                  netdev_err(ena_dev->net_device,
> >                             "Reading reg failed for timeout. expected: req id[%u] offset[%u] actual: req id[%u] offset[%u]\n",
> > -                          mmio_read->seq_num, offset, read_resp->req_id,
> > -                          read_resp->reg_off);
> > +                          mmio_read->seq_num, offset, read_resp->req_id, read_resp->reg_off);
> >                  ret = ENA_MMIO_READ_TIMEOUT;
> >                  goto err;
> >          }
> >
> >          if (read_resp->reg_off != offset) {
> > -               netdev_err(ena_dev->net_device,
> > -                          "Read failure: wrong offset provided\n");
> > +               netdev_err(ena_dev->net_device, "Read failure: wrong offset provided\n");
> >                  ret = ENA_MMIO_READ_TIMEOUT;
> >          } else {
> >                  ret = read_resp->reg_val;
> > @@ -934,8 +909,7 @@ static int ena_com_destroy_io_sq(struct ena_com_dev *ena_dev,
> >                                              sizeof(destroy_resp));
> >
> >          if (unlikely(ret && (ret != -ENODEV)))
> > -               netdev_err(ena_dev->net_device,
> > -                          "Failed to destroy io sq error: %d\n", ret);
> > +               netdev_err(ena_dev->net_device, "Failed to destroy io sq error: %d\n", ret);
> >
> >          return ret;
> >   }
> > @@ -949,8 +923,7 @@ static void ena_com_io_queue_free(struct ena_com_dev *ena_dev,
> >          if (io_cq->cdesc_addr.virt_addr) {
> >                  size = io_cq->cdesc_entry_size_in_bytes * io_cq->q_depth;
> >
> > -               dma_free_coherent(ena_dev->dmadev, size,
> > -                                 io_cq->cdesc_addr.virt_addr,
> > +               dma_free_coherent(ena_dev->dmadev, size, io_cq->cdesc_addr.virt_addr,
> >                                    io_cq->cdesc_addr.phys_addr);
> >
> >                  io_cq->cdesc_addr.virt_addr = NULL;
> > @@ -959,8 +932,7 @@ static void ena_com_io_queue_free(struct ena_com_dev *ena_dev,
> >          if (io_sq->desc_addr.virt_addr) {
> >                  size = io_sq->desc_entry_size * io_sq->q_depth;
> >
> > -               dma_free_coherent(ena_dev->dmadev, size,
> > -                                 io_sq->desc_addr.virt_addr,
> > +               dma_free_coherent(ena_dev->dmadev, size, io_sq->desc_addr.virt_addr,
> >                                    io_sq->desc_addr.phys_addr);
> >
> >                  io_sq->desc_addr.virt_addr = NULL;
> > @@ -985,8 +957,7 @@ static int wait_for_reset_state(struct ena_com_dev *ena_dev, u32 timeout,
> >                  val = ena_com_reg_bar_read32(ena_dev, ENA_REGS_DEV_STS_OFF);
> >
> >                  if (unlikely(val == ENA_MMIO_READ_TIMEOUT)) {
> > -                       netdev_err(ena_dev->net_device,
> > -                                  "Reg read timeout occurred\n");
> > +                       netdev_err(ena_dev->net_device, "Reg read timeout occurred\n");
> >                          return -ETIME;
> >                  }
> >
> > @@ -1026,8 +997,7 @@ static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev,
> >          int ret;
> >
> >          if (!ena_com_check_supported_feature_id(ena_dev, feature_id)) {
> > -               netdev_dbg(ena_dev->net_device, "Feature %d isn't supported\n",
> > -                          feature_id);
> > +               netdev_dbg(ena_dev->net_device, "Feature %d isn't supported\n", feature_id);
> >                  return -EOPNOTSUPP;
> >          }
> >
> > @@ -1064,8 +1034,7 @@ static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev,
> >
> >          if (unlikely(ret))
> >                  netdev_err(ena_dev->net_device,
> > -                          "Failed to submit get_feature command %d error: %d\n",
> > -                          feature_id, ret);
> > +                          "Failed to submit get_feature command %d error: %d\n", feature_id, ret);
> >
> >          return ret;
> >   }
> > @@ -1104,13 +1073,11 @@ static int ena_com_hash_key_allocate(struct ena_com_dev *ena_dev)
> >   {
> >          struct ena_rss *rss = &ena_dev->rss;
> >
> > -       if (!ena_com_check_supported_feature_id(ena_dev,
> > -                                               ENA_ADMIN_RSS_HASH_FUNCTION))
> > +       if (!ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_RSS_HASH_FUNCTION))
> >                  return -EOPNOTSUPP;
> >
> > -       rss->hash_key =
> > -               dma_alloc_coherent(ena_dev->dmadev, sizeof(*rss->hash_key),
> > -                                  &rss->hash_key_dma_addr, GFP_KERNEL);
> > +       rss->hash_key = dma_alloc_coherent(ena_dev->dmadev, sizeof(*rss->hash_key),
> > +                                          &rss->hash_key_dma_addr, GFP_KERNEL);
> >
> >          if (unlikely(!rss->hash_key))
> >                  return -ENOMEM;
> > @@ -1123,8 +1090,8 @@ static void ena_com_hash_key_destroy(struct ena_com_dev *ena_dev)
> >          struct ena_rss *rss = &ena_dev->rss;
> >
> >          if (rss->hash_key)
> > -               dma_free_coherent(ena_dev->dmadev, sizeof(*rss->hash_key),
> > -                                 rss->hash_key, rss->hash_key_dma_addr);
> > +               dma_free_coherent(ena_dev->dmadev, sizeof(*rss->hash_key), rss->hash_key,
> > +                                 rss->hash_key_dma_addr);
> >          rss->hash_key = NULL;
> >   }
> >
> > @@ -1132,9 +1099,8 @@ static int ena_com_hash_ctrl_init(struct ena_com_dev *ena_dev)
> >   {
> >          struct ena_rss *rss = &ena_dev->rss;
> >
> > -       rss->hash_ctrl =
> > -               dma_alloc_coherent(ena_dev->dmadev, sizeof(*rss->hash_ctrl),
> > -                                  &rss->hash_ctrl_dma_addr, GFP_KERNEL);
> > +       rss->hash_ctrl = dma_alloc_coherent(ena_dev->dmadev, sizeof(*rss->hash_ctrl),
> > +                                           &rss->hash_ctrl_dma_addr, GFP_KERNEL);
> >
> >          if (unlikely(!rss->hash_ctrl))
> >                  return -ENOMEM;
> > @@ -1147,8 +1113,8 @@ static void ena_com_hash_ctrl_destroy(struct ena_com_dev *ena_dev)
> >          struct ena_rss *rss = &ena_dev->rss;
> >
> >          if (rss->hash_ctrl)
> > -               dma_free_coherent(ena_dev->dmadev, sizeof(*rss->hash_ctrl),
> > -                                 rss->hash_ctrl, rss->hash_ctrl_dma_addr);
> > +               dma_free_coherent(ena_dev->dmadev, sizeof(*rss->hash_ctrl), rss->hash_ctrl,
> > +                                 rss->hash_ctrl_dma_addr);
> >          rss->hash_ctrl = NULL;
> >   }
> >
> > @@ -1177,15 +1143,13 @@ static int ena_com_indirect_table_allocate(struct ena_com_dev *ena_dev,
> >          tbl_size = (1ULL << log_size) *
> >                  sizeof(struct ena_admin_rss_ind_table_entry);
> >
> > -       rss->rss_ind_tbl =
> > -               dma_alloc_coherent(ena_dev->dmadev, tbl_size,
> > -                                  &rss->rss_ind_tbl_dma_addr, GFP_KERNEL);
> > +       rss->rss_ind_tbl = dma_alloc_coherent(ena_dev->dmadev, tbl_size, &rss->rss_ind_tbl_dma_addr,
> > +                                             GFP_KERNEL);
> >          if (unlikely(!rss->rss_ind_tbl))
> >                  goto mem_err1;
> >
> >          tbl_size = (1ULL << log_size) * sizeof(u16);
> > -       rss->host_rss_ind_tbl =
> > -               devm_kzalloc(ena_dev->dmadev, tbl_size, GFP_KERNEL);
> > +       rss->host_rss_ind_tbl = devm_kzalloc(ena_dev->dmadev, tbl_size, GFP_KERNEL);
> >          if (unlikely(!rss->host_rss_ind_tbl))
> >                  goto mem_err2;
> >
> > @@ -1197,8 +1161,7 @@ mem_err2:
> >          tbl_size = (1ULL << log_size) *
> >                  sizeof(struct ena_admin_rss_ind_table_entry);
> >
> > -       dma_free_coherent(ena_dev->dmadev, tbl_size, rss->rss_ind_tbl,
> > -                         rss->rss_ind_tbl_dma_addr);
> > +       dma_free_coherent(ena_dev->dmadev, tbl_size, rss->rss_ind_tbl, rss->rss_ind_tbl_dma_addr);
> >          rss->rss_ind_tbl = NULL;
> >   mem_err1:
> >          rss->tbl_log_size = 0;
> > @@ -1261,8 +1224,7 @@ static int ena_com_create_io_sq(struct ena_com_dev *ena_dev,
> >                                             &create_cmd.sq_ba,
> >                                             io_sq->desc_addr.phys_addr);
> >                  if (unlikely(ret)) {
> > -                       netdev_err(ena_dev->net_device,
> > -                                  "Memory address set failed\n");
> > +                       netdev_err(ena_dev->net_device, "Memory address set failed\n");
> >                          return ret;
> >                  }
> >          }
> > @@ -1273,8 +1235,7 @@ static int ena_com_create_io_sq(struct ena_com_dev *ena_dev,
> >                                              (struct ena_admin_acq_entry *)&cmd_completion,
> >                                              sizeof(cmd_completion));
> >          if (unlikely(ret)) {
> > -               netdev_err(ena_dev->net_device,
> > -                          "Failed to create IO SQ. error: %d\n", ret);
> > +               netdev_err(ena_dev->net_device, "Failed to create IO SQ. error: %d\n", ret);
> >                  return ret;
> >          }
> >
> > @@ -1289,8 +1250,7 @@ static int ena_com_create_io_sq(struct ena_com_dev *ena_dev,
> >                          cmd_completion.llq_descriptors_offset);
> >          }
> >
> > -       netdev_dbg(ena_dev->net_device, "Created sq[%u], depth[%u]\n",
> > -                  io_sq->idx, io_sq->q_depth);
> > +       netdev_dbg(ena_dev->net_device, "Created sq[%u], depth[%u]\n", io_sq->idx, io_sq->q_depth);
> >
> >          return ret;
> >   }
> > @@ -1417,8 +1377,7 @@ int ena_com_create_io_cq(struct ena_com_dev *ena_dev,
> >                                              (struct ena_admin_acq_entry *)&cmd_completion,
> >                                              sizeof(cmd_completion));
> >          if (unlikely(ret)) {
> > -               netdev_err(ena_dev->net_device,
> > -                          "Failed to create IO CQ. error: %d\n", ret);
> > +               netdev_err(ena_dev->net_device, "Failed to create IO CQ. error: %d\n", ret);
> >                  return ret;
> >          }
> >
> > @@ -1432,8 +1391,7 @@ int ena_com_create_io_cq(struct ena_com_dev *ena_dev,
> >                          (u32 __iomem *)((uintptr_t)ena_dev->reg_bar +
> >                          cmd_completion.numa_node_register_offset);
> >
> > -       netdev_dbg(ena_dev->net_device, "Created cq[%u], depth[%u]\n",
> > -                  io_cq->idx, io_cq->q_depth);
> > +       netdev_dbg(ena_dev->net_device, "Created cq[%u], depth[%u]\n", io_cq->idx, io_cq->q_depth);
> >
> >          return ret;
> >   }
> > @@ -1443,8 +1401,7 @@ int ena_com_get_io_handlers(struct ena_com_dev *ena_dev, u16 qid,
> >                              struct ena_com_io_cq **io_cq)
> >   {
> >          if (qid >= ENA_TOTAL_NUM_QUEUES) {
> > -               netdev_err(ena_dev->net_device,
> > -                          "Invalid queue number %d but the max is %d\n", qid,
> > +               netdev_err(ena_dev->net_device, "Invalid queue number %d but the max is %d\n", qid,
> >                             ENA_TOTAL_NUM_QUEUES);
> >                  return -EINVAL;
> >          }
> > @@ -1484,8 +1441,7 @@ void ena_com_wait_for_abort_completion(struct ena_com_dev *ena_dev)
> >          spin_lock_irqsave(&admin_queue->q_lock, flags);
> >          while (atomic_read(&admin_queue->outstanding_cmds) != 0) {
> >                  spin_unlock_irqrestore(&admin_queue->q_lock, flags);
> > -               ena_delay_exponential_backoff_us(exp++,
> > -                                                ena_dev->ena_min_poll_delay_us);
> > +               ena_delay_exponential_backoff_us(exp++, ena_dev->ena_min_poll_delay_us);
> >                  spin_lock_irqsave(&admin_queue->q_lock, flags);
> >          }
> >          spin_unlock_irqrestore(&admin_queue->q_lock, flags);
> > @@ -1511,8 +1467,7 @@ int ena_com_destroy_io_cq(struct ena_com_dev *ena_dev,
> >                                              sizeof(destroy_resp));
> >
> >          if (unlikely(ret && (ret != -ENODEV)))
> > -               netdev_err(ena_dev->net_device,
> > -                          "Failed to destroy IO CQ. error: %d\n", ret);
> > +               netdev_err(ena_dev->net_device, "Failed to destroy IO CQ. error: %d\n", ret);
> >
> >          return ret;
> >   }
> > @@ -1580,8 +1535,7 @@ int ena_com_set_aenq_config(struct ena_com_dev *ena_dev, u32 groups_flag)
> >                                              sizeof(resp));
> >
> >          if (unlikely(ret))
> > -               netdev_err(ena_dev->net_device,
> > -                          "Failed to config AENQ ret: %d\n", ret);
> > +               netdev_err(ena_dev->net_device, "Failed to config AENQ ret: %d\n", ret);
> >
> >          return ret;
> >   }
> > @@ -1602,8 +1556,7 @@ int ena_com_get_dma_width(struct ena_com_dev *ena_dev)
> >          netdev_dbg(ena_dev->net_device, "ENA dma width: %d\n", width);
> >
> >          if ((width < 32) || width > ENA_MAX_PHYS_ADDR_SIZE_BITS) {
> > -               netdev_err(ena_dev->net_device, "DMA width illegal value: %d\n",
> > -                          width);
> > +               netdev_err(ena_dev->net_device, "DMA width illegal value: %d\n", width);
> >                  return -EINVAL;
> >          }
> >
> > @@ -1625,19 +1578,16 @@ int ena_com_validate_version(struct ena_com_dev *ena_dev)
> >          ctrl_ver = ena_com_reg_bar_read32(ena_dev,
> >                                            ENA_REGS_CONTROLLER_VERSION_OFF);
> >
> > -       if (unlikely((ver == ENA_MMIO_READ_TIMEOUT) ||
> > -                    (ctrl_ver == ENA_MMIO_READ_TIMEOUT))) {
> > +       if (unlikely((ver == ENA_MMIO_READ_TIMEOUT) || (ctrl_ver == ENA_MMIO_READ_TIMEOUT))) {
> >                  netdev_err(ena_dev->net_device, "Reg read timeout occurred\n");
> >                  return -ETIME;
> >          }
> >
> >          dev_info(ena_dev->dmadev, "ENA device version: %d.%d\n",
> > -                (ver & ENA_REGS_VERSION_MAJOR_VERSION_MASK) >>
> > -                        ENA_REGS_VERSION_MAJOR_VERSION_SHIFT,
> > +                (ver & ENA_REGS_VERSION_MAJOR_VERSION_MASK) >> ENA_REGS_VERSION_MAJOR_VERSION_SHIFT,
> >                   ver & ENA_REGS_VERSION_MINOR_VERSION_MASK);
> >
> > -       dev_info(ena_dev->dmadev,
> > -                "ENA controller version: %d.%d.%d implementation version %d\n",
> > +       dev_info(ena_dev->dmadev, "ENA controller version: %d.%d.%d implementation version %d\n",
> >                   (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_MASK) >>
> >                           ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_SHIFT,
> >                   (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_MASK) >>
> > @@ -1686,20 +1636,17 @@ void ena_com_admin_destroy(struct ena_com_dev *ena_dev)
> >
> >          size = ADMIN_SQ_SIZE(admin_queue->q_depth);
> >          if (sq->entries)
> > -               dma_free_coherent(ena_dev->dmadev, size, sq->entries,
> > -                                 sq->dma_addr);
> > +               dma_free_coherent(ena_dev->dmadev, size, sq->entries, sq->dma_addr);
> >          sq->entries = NULL;
> >
> >          size = ADMIN_CQ_SIZE(admin_queue->q_depth);
> >          if (cq->entries)
> > -               dma_free_coherent(ena_dev->dmadev, size, cq->entries,
> > -                                 cq->dma_addr);
> > +               dma_free_coherent(ena_dev->dmadev, size, cq->entries, cq->dma_addr);
> >          cq->entries = NULL;
> >
> >          size = ADMIN_AENQ_SIZE(aenq->q_depth);
> >          if (ena_dev->aenq.entries)
> > -               dma_free_coherent(ena_dev->dmadev, size, aenq->entries,
> > -                                 aenq->dma_addr);
> > +               dma_free_coherent(ena_dev->dmadev, size, aenq->entries, aenq->dma_addr);
> >          aenq->entries = NULL;
> >   }
> >
> > @@ -1725,10 +1672,8 @@ int ena_com_mmio_reg_read_request_init(struct ena_com_dev *ena_dev)
> >          struct ena_com_mmio_read *mmio_read = &ena_dev->mmio_read;
> >
> >          spin_lock_init(&mmio_read->lock);
> > -       mmio_read->read_resp =
> > -               dma_alloc_coherent(ena_dev->dmadev,
> > -                                  sizeof(*mmio_read->read_resp),
> > -                                  &mmio_read->read_resp_dma_addr, GFP_KERNEL);
> > +       mmio_read->read_resp = dma_alloc_coherent(ena_dev->dmadev, sizeof(*mmio_read->read_resp),
> > +                                                 &mmio_read->read_resp_dma_addr, GFP_KERNEL);
> >          if (unlikely(!mmio_read->read_resp))
> >                  goto err;
> >
> > @@ -1759,8 +1704,8 @@ void ena_com_mmio_reg_read_request_destroy(struct ena_com_dev *ena_dev)
> >          writel(0x0, ena_dev->reg_bar + ENA_REGS_MMIO_RESP_LO_OFF);
> >          writel(0x0, ena_dev->reg_bar + ENA_REGS_MMIO_RESP_HI_OFF);
> >
> > -       dma_free_coherent(ena_dev->dmadev, sizeof(*mmio_read->read_resp),
> > -                         mmio_read->read_resp, mmio_read->read_resp_dma_addr);
> > +       dma_free_coherent(ena_dev->dmadev, sizeof(*mmio_read->read_resp), mmio_read->read_resp,
> > +                         mmio_read->read_resp_dma_addr);
> >
> >          mmio_read->read_resp = NULL;
> >   }
> > @@ -1792,8 +1737,7 @@ int ena_com_admin_init(struct ena_com_dev *ena_dev,
> >          }
> >
> >          if (!(dev_sts & ENA_REGS_DEV_STS_READY_MASK)) {
> > -               netdev_err(ena_dev->net_device,
> > -                          "Device isn't ready, abort com init\n");
> > +               netdev_err(ena_dev->net_device, "Device isn't ready, abort com init\n");
> >                  return -ENODEV;
> >          }
> >
> > @@ -1870,8 +1814,7 @@ int ena_com_create_io_queue(struct ena_com_dev *ena_dev,
> >          int ret;
> >
> >          if (ctx->qid >= ENA_TOTAL_NUM_QUEUES) {
> > -               netdev_err(ena_dev->net_device,
> > -                          "Qid (%d) is bigger than max num of queues (%d)\n",
> > +               netdev_err(ena_dev->net_device, "Qid (%d) is bigger than max num of queues (%d)\n",
> >                             ctx->qid, ENA_TOTAL_NUM_QUEUES);
> >                  return -EINVAL;
> >          }
> > @@ -1897,8 +1840,7 @@ int ena_com_create_io_queue(struct ena_com_dev *ena_dev,
> >
> >          if (ctx->direction == ENA_COM_IO_QUEUE_DIRECTION_TX)
> >                  /* header length is limited to 8 bits */
> > -               io_sq->tx_max_header_size =
> > -                       min_t(u32, ena_dev->tx_max_header_size, SZ_256);
> > +               io_sq->tx_max_header_size = min_t(u32, ena_dev->tx_max_header_size, SZ_256);
> >
> >          ret = ena_com_init_io_sq(ena_dev, ctx, io_sq);
> >          if (ret)
> > @@ -1930,8 +1872,7 @@ void ena_com_destroy_io_queue(struct ena_com_dev *ena_dev, u16 qid)
> >          struct ena_com_io_cq *io_cq;
> >
> >          if (qid >= ENA_TOTAL_NUM_QUEUES) {
> > -               netdev_err(ena_dev->net_device,
> > -                          "Qid (%d) is bigger than max num of queues (%d)\n",
> > +               netdev_err(ena_dev->net_device, "Qid (%d) is bigger than max num of queues (%d)\n",
> >                             qid, ENA_TOTAL_NUM_QUEUES);
> >                  return;
> >          }
> > @@ -1975,8 +1916,7 @@ int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev,
> >                  if (rc)
> >                          return rc;
> >
> > -               if (get_resp.u.max_queue_ext.version !=
> > -                   ENA_FEATURE_MAX_QUEUE_EXT_VER)
> > +               if (get_resp.u.max_queue_ext.version != ENA_FEATURE_MAX_QUEUE_EXT_VER)
> >                          return -EINVAL;
> >
> >                  memcpy(&get_feat_ctx->max_queue_ext, &get_resp.u.max_queue_ext,
> > @@ -2017,18 +1957,15 @@ int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev,
> >          rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_HW_HINTS, 0);
> >
> >          if (!rc)
> > -               memcpy(&get_feat_ctx->hw_hints, &get_resp.u.hw_hints,
> > -                      sizeof(get_resp.u.hw_hints));
> > +               memcpy(&get_feat_ctx->hw_hints, &get_resp.u.hw_hints, sizeof(get_resp.u.hw_hints));
> >          else if (rc == -EOPNOTSUPP)
> > -               memset(&get_feat_ctx->hw_hints, 0x0,
> > -                      sizeof(get_feat_ctx->hw_hints));
> > +               memset(&get_feat_ctx->hw_hints, 0x0, sizeof(get_feat_ctx->hw_hints));
> >          else
> >                  return rc;
> >
> >          rc = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_LLQ, 0);
> >          if (!rc)
> > -               memcpy(&get_feat_ctx->llq, &get_resp.u.llq,
> > -                      sizeof(get_resp.u.llq));
> > +               memcpy(&get_feat_ctx->llq, &get_resp.u.llq, sizeof(get_resp.u.llq));
> >          else if (rc == -EOPNOTSUPP)
> >                  memset(&get_feat_ctx->llq, 0x0, sizeof(get_feat_ctx->llq));
> >          else
> > @@ -2076,8 +2013,7 @@ void ena_com_aenq_intr_handler(struct ena_com_dev *ena_dev, void *data)
> >          aenq_common = &aenq_e->aenq_common_desc;
> >
> >          /* Go over all the events */
> > -       while ((READ_ONCE(aenq_common->flags) &
> > -               ENA_ADMIN_AENQ_COMMON_DESC_PHASE_MASK) == phase) {
> > +       while ((READ_ONCE(aenq_common->flags) & ENA_ADMIN_AENQ_COMMON_DESC_PHASE_MASK) == phase) {
> >                  /* Make sure the phase bit (ownership) is as expected before
> >                   * reading the rest of the descriptor.
> >                   */
> > @@ -2086,8 +2022,7 @@ void ena_com_aenq_intr_handler(struct ena_com_dev *ena_dev, void *data)
> >                  timestamp = (u64)aenq_common->timestamp_low |
> >                          ((u64)aenq_common->timestamp_high << 32);
> >
> > -               netdev_dbg(ena_dev->net_device,
> > -                          "AENQ! Group[%x] Syndrome[%x] timestamp: [%llus]\n",
> > +               netdev_dbg(ena_dev->net_device, "AENQ! Group[%x] Syndrome[%x] timestamp: [%llus]\n",
> >                             aenq_common->group, aenq_common->syndrome, timestamp);
> >
> >                  /* Handle specific event*/
> > @@ -2116,8 +2051,7 @@ void ena_com_aenq_intr_handler(struct ena_com_dev *ena_dev, void *data)
> >
> >          /* write the aenq doorbell after all AENQ descriptors were read */
> >          mb();
> > -       writel_relaxed((u32)aenq->head,
> > -                      ena_dev->reg_bar + ENA_REGS_AENQ_HEAD_DB_OFF);
> > +       writel_relaxed((u32)aenq->head, ena_dev->reg_bar + ENA_REGS_AENQ_HEAD_DB_OFF);
> >   }
> >
> >   int ena_com_dev_reset(struct ena_com_dev *ena_dev,
> > @@ -2129,15 +2063,13 @@ int ena_com_dev_reset(struct ena_com_dev *ena_dev,
> >          stat = ena_com_reg_bar_read32(ena_dev, ENA_REGS_DEV_STS_OFF);
> >          cap = ena_com_reg_bar_read32(ena_dev, ENA_REGS_CAPS_OFF);
> >
> > -       if (unlikely((stat == ENA_MMIO_READ_TIMEOUT) ||
> > -                    (cap == ENA_MMIO_READ_TIMEOUT))) {
> > +       if (unlikely((stat == ENA_MMIO_READ_TIMEOUT) || (cap == ENA_MMIO_READ_TIMEOUT))) {
> >                  netdev_err(ena_dev->net_device, "Reg read32 timeout occurred\n");
> >                  return -ETIME;
> >          }
> >
> >          if ((stat & ENA_REGS_DEV_STS_READY_MASK) == 0) {
> > -               netdev_err(ena_dev->net_device,
> > -                          "Device isn't ready, can't reset device\n");
> > +               netdev_err(ena_dev->net_device, "Device isn't ready, can't reset device\n");
> >                  return -EINVAL;
> >          }
> >
> > @@ -2160,8 +2092,7 @@ int ena_com_dev_reset(struct ena_com_dev *ena_dev,
> >          rc = wait_for_reset_state(ena_dev, timeout,
> >                                    ENA_REGS_DEV_STS_RESET_IN_PROGRESS_MASK);
> >          if (rc != 0) {
> > -               netdev_err(ena_dev->net_device,
> > -                          "Reset indication didn't turn on\n");
> > +               netdev_err(ena_dev->net_device, "Reset indication didn't turn on\n");
> >                  return rc;
> >          }
> >
> > @@ -2169,8 +2100,7 @@ int ena_com_dev_reset(struct ena_com_dev *ena_dev,
> >          writel(0, ena_dev->reg_bar + ENA_REGS_DEV_CTL_OFF);
> >          rc = wait_for_reset_state(ena_dev, timeout, 0);
> >          if (rc != 0) {
> > -               netdev_err(ena_dev->net_device,
> > -                          "Reset indication didn't turn off\n");
> > +               netdev_err(ena_dev->net_device, "Reset indication didn't turn off\n");
> >                  return rc;
> >          }
> >
> > @@ -2207,8 +2137,7 @@ static int ena_get_dev_stats(struct ena_com_dev *ena_dev,
> >                                               sizeof(*get_resp));
> >
> >          if (unlikely(ret))
> > -               netdev_err(ena_dev->net_device,
> > -                          "Failed to get stats. error: %d\n", ret);
> > +               netdev_err(ena_dev->net_device, "Failed to get stats. error: %d\n", ret);
> >
> >          return ret;
> >   }
> > @@ -2220,8 +2149,7 @@ int ena_com_get_eni_stats(struct ena_com_dev *ena_dev,
> >          int ret;
> >
> >          if (!ena_com_get_cap(ena_dev, ENA_ADMIN_ENI_STATS)) {
> > -               netdev_err(ena_dev->net_device,
> > -                          "Capability %d isn't supported\n",
> > +               netdev_err(ena_dev->net_device, "Capability %d isn't supported\n",
> >                             ENA_ADMIN_ENI_STATS);
> >                  return -EOPNOTSUPP;
> >          }
> > @@ -2258,8 +2186,7 @@ int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, u32 mtu)
> >          int ret;
> >
> >          if (!ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_MTU)) {
> > -               netdev_dbg(ena_dev->net_device, "Feature %d isn't supported\n",
> > -                          ENA_ADMIN_MTU);
> > +               netdev_dbg(ena_dev->net_device, "Feature %d isn't supported\n", ENA_ADMIN_MTU);
> >                  return -EOPNOTSUPP;
> >          }
> >
> > @@ -2278,8 +2205,7 @@ int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, u32 mtu)
> >                                              sizeof(resp));
> >
> >          if (unlikely(ret))
> > -               netdev_err(ena_dev->net_device,
> > -                          "Failed to set mtu %d. error: %d\n", mtu, ret);
> > +               netdev_err(ena_dev->net_device, "Failed to set mtu %d. error: %d\n", mtu, ret);
> >
> >          return ret;
> >   }
> > @@ -2293,8 +2219,7 @@ int ena_com_get_offload_settings(struct ena_com_dev *ena_dev,
> >          ret = ena_com_get_feature(ena_dev, &resp,
> >                                    ENA_ADMIN_STATELESS_OFFLOAD_CONFIG, 0);
> >          if (unlikely(ret)) {
> > -               netdev_err(ena_dev->net_device,
> > -                          "Failed to get offload capabilities %d\n", ret);
> > +               netdev_err(ena_dev->net_device, "Failed to get offload capabilities %d\n", ret);
> >                  return ret;
> >          }
> >
> > @@ -2312,8 +2237,7 @@ int ena_com_set_hash_function(struct ena_com_dev *ena_dev)
> >          struct ena_admin_get_feat_resp get_resp;
> >          int ret;
> >
> > -       if (!ena_com_check_supported_feature_id(ena_dev,
> > -                                               ENA_ADMIN_RSS_HASH_FUNCTION)) {
> > +       if (!ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_RSS_HASH_FUNCTION)) {
> >                  netdev_dbg(ena_dev->net_device, "Feature %d isn't supported\n",
> >                             ENA_ADMIN_RSS_HASH_FUNCTION);
> >                  return -EOPNOTSUPP;
> > @@ -2326,8 +2250,7 @@ int ena_com_set_hash_function(struct ena_com_dev *ena_dev)
> >                  return ret;
> >
> >          if (!(get_resp.u.flow_hash_func.supported_func & BIT(rss->hash_func))) {
> > -               netdev_err(ena_dev->net_device,
> > -                          "Func hash %d isn't supported by device, abort\n",
> > +               netdev_err(ena_dev->net_device, "Func hash %d isn't supported by device, abort\n",
> >                             rss->hash_func);
> >                  return -EOPNOTSUPP;
> >          }
> > @@ -2357,8 +2280,7 @@ int ena_com_set_hash_function(struct ena_com_dev *ena_dev)
> >                                              (struct ena_admin_acq_entry *)&resp,
> >                                              sizeof(resp));
> >          if (unlikely(ret)) {
> > -               netdev_err(ena_dev->net_device,
> > -                          "Failed to set hash function %d. error: %d\n",
> > +               netdev_err(ena_dev->net_device, "Failed to set hash function %d. error: %d\n",
> >                             rss->hash_func, ret);
> >                  return -EINVAL;
> >          }
> > @@ -2390,16 +2312,15 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev,
> >                  return rc;
> >
> >          if (!(BIT(func) & get_resp.u.flow_hash_func.supported_func)) {
> > -               netdev_err(ena_dev->net_device,
> > -                          "Flow hash function %d isn't supported\n", func);
> > +               netdev_err(ena_dev->net_device, "Flow hash function %d isn't supported\n", func);
> >                  return -EOPNOTSUPP;
> >          }
> >
> >          if ((func == ENA_ADMIN_TOEPLITZ) && key) {
> >                  if (key_len != sizeof(hash_key->key)) {
> >                          netdev_err(ena_dev->net_device,
> > -                                  "key len (%u) doesn't equal the supported size (%zu)\n",
> > -                                  key_len, sizeof(hash_key->key));
> > +                                  "key len (%u) doesn't equal the supported size (%zu)\n", key_len,
> > +                                  sizeof(hash_key->key));
> >                          return -EINVAL;
> >                  }
> >                  memcpy(hash_key->key, key, key_len);
> > @@ -2487,8 +2408,7 @@ int ena_com_set_hash_ctrl(struct ena_com_dev *ena_dev)
> >          struct ena_admin_set_feat_resp resp;
> >          int ret;
> >
> > -       if (!ena_com_check_supported_feature_id(ena_dev,
> > -                                               ENA_ADMIN_RSS_HASH_INPUT)) {
> > +       if (!ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_RSS_HASH_INPUT)) {
> >                  netdev_dbg(ena_dev->net_device, "Feature %d isn't supported\n",
> >                             ENA_ADMIN_RSS_HASH_INPUT);
> >                  return -EOPNOTSUPP;
> > @@ -2519,8 +2439,7 @@ int ena_com_set_hash_ctrl(struct ena_com_dev *ena_dev)
> >                                              (struct ena_admin_acq_entry *)&resp,
> >                                              sizeof(resp));
> >          if (unlikely(ret))
> > -               netdev_err(ena_dev->net_device,
> > -                          "Failed to set hash input. error: %d\n", ret);
> > +               netdev_err(ena_dev->net_device, "Failed to set hash input. error: %d\n", ret);
> >
> >          return ret;
> >   }
> > @@ -2597,8 +2516,7 @@ int ena_com_fill_hash_ctrl(struct ena_com_dev *ena_dev,
> >          int rc;
> >
> >          if (proto >= ENA_ADMIN_RSS_PROTO_NUM) {
> > -               netdev_err(ena_dev->net_device, "Invalid proto num (%u)\n",
> > -                          proto);
> > +               netdev_err(ena_dev->net_device, "Invalid proto num (%u)\n", proto);
> >                  return -EINVAL;
> >          }
> >
> > @@ -2650,8 +2568,7 @@ int ena_com_indirect_table_set(struct ena_com_dev *ena_dev)
> >          struct ena_admin_set_feat_resp resp;
> >          int ret;
> >
> > -       if (!ena_com_check_supported_feature_id(
> > -                   ena_dev, ENA_ADMIN_RSS_INDIRECTION_TABLE_CONFIG)) {
> > +       if (!ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_RSS_INDIRECTION_TABLE_CONFIG)) {
> >                  netdev_dbg(ena_dev->net_device, "Feature %d isn't supported\n",
> >                             ENA_ADMIN_RSS_INDIRECTION_TABLE_CONFIG);
> >                  return -EOPNOTSUPP;
> > @@ -2691,8 +2608,7 @@ int ena_com_indirect_table_set(struct ena_com_dev *ena_dev)
> >                                              sizeof(resp));
> >
> >          if (unlikely(ret))
> > -               netdev_err(ena_dev->net_device,
> > -                          "Failed to set indirect table. error: %d\n", ret);
> > +               netdev_err(ena_dev->net_device, "Failed to set indirect table. error: %d\n", ret);
> >
> >          return ret;
> >   }
> > @@ -2771,9 +2687,8 @@ int ena_com_allocate_host_info(struct ena_com_dev *ena_dev)
> >   {
> >          struct ena_host_attribute *host_attr = &ena_dev->host_attr;
> >
> > -       host_attr->host_info =
> > -               dma_alloc_coherent(ena_dev->dmadev, SZ_4K,
> > -                                  &host_attr->host_info_dma_addr, GFP_KERNEL);
> > +       host_attr->host_info = dma_alloc_coherent(ena_dev->dmadev, SZ_4K,
> > +                                                 &host_attr->host_info_dma_addr, GFP_KERNEL);
> >          if (unlikely(!host_attr->host_info))
> >                  return -ENOMEM;
> >
> > @@ -2819,8 +2734,7 @@ void ena_com_delete_debug_area(struct ena_com_dev *ena_dev)
> >
> >          if (host_attr->debug_area_virt_addr) {
> >                  dma_free_coherent(ena_dev->dmadev, host_attr->debug_area_size,
> > -                                 host_attr->debug_area_virt_addr,
> > -                                 host_attr->debug_area_dma_addr);
> > +                                 host_attr->debug_area_virt_addr, host_attr->debug_area_dma_addr);
> >                  host_attr->debug_area_virt_addr = NULL;
> >          }
> >   }
> > @@ -2869,8 +2783,7 @@ int ena_com_set_host_attributes(struct ena_com_dev *ena_dev)
> >                                              sizeof(resp));
> >
> >          if (unlikely(ret))
> > -               netdev_err(ena_dev->net_device,
> > -                          "Failed to set host attributes: %d\n", ret);
> > +               netdev_err(ena_dev->net_device, "Failed to set host attributes: %d\n", ret);
> >
> >          return ret;
> >   }
> > @@ -2888,8 +2801,7 @@ static int ena_com_update_nonadaptive_moderation_interval(struct ena_com_dev *en
> >                                                            u32 *intr_moder_interval)
> >   {
> >          if (!intr_delay_resolution) {
> > -               netdev_err(ena_dev->net_device,
> > -                          "Illegal interrupt delay granularity value\n");
> > +               netdev_err(ena_dev->net_device, "Illegal interrupt delay granularity value\n");
> >                  return -EFAULT;
> >          }
> >
> > @@ -2927,14 +2839,12 @@ int ena_com_init_interrupt_moderation(struct ena_com_dev *ena_dev)
> >
> >          if (rc) {
> >                  if (rc == -EOPNOTSUPP) {
> > -                       netdev_dbg(ena_dev->net_device,
> > -                                  "Feature %d isn't supported\n",
> > +                       netdev_dbg(ena_dev->net_device, "Feature %d isn't supported\n",
> >                                     ENA_ADMIN_INTERRUPT_MODERATION);
> >                          rc = 0;
> >                  } else {
> >                          netdev_err(ena_dev->net_device,
> > -                                  "Failed to get interrupt moderation admin cmd. rc: %d\n",
> > -                                  rc);
> > +                                  "Failed to get interrupt moderation admin cmd. rc: %d\n", rc);
> >                  }
> >
> >                  /* no moderation supported, disable adaptive support */
> > @@ -2982,8 +2892,7 @@ int ena_com_config_dev_mode(struct ena_com_dev *ena_dev,
> >                  (llq_info->descs_num_before_header * sizeof(struct ena_eth_io_tx_desc));
> >
> >          if (unlikely(ena_dev->tx_max_header_size == 0)) {
> > -               netdev_err(ena_dev->net_device,
> > -                          "The size of the LLQ entry is smaller than needed\n");
> > +               netdev_err(ena_dev->net_device, "The size of the LLQ entry is smaller than needed\n");
> >                  return -EINVAL;
> >          }
> >
> > diff --git a/drivers/net/ethernet/amazon/ena/ena_eth_com.c b/drivers/net/ethernet/amazon/ena/ena_eth_com.c
> > index f9f8862..933e619 100644
> > --- a/drivers/net/ethernet/amazon/ena/ena_eth_com.c
> > +++ b/drivers/net/ethernet/amazon/ena/ena_eth_com.c
> > @@ -18,8 +18,7 @@ static struct ena_eth_io_rx_cdesc_base *ena_com_get_next_rx_cdesc(
> >          cdesc = (struct ena_eth_io_rx_cdesc_base *)(io_cq->cdesc_addr.virt_addr
> >                          + (head_masked * io_cq->cdesc_entry_size_in_bytes));
> >
> > -       desc_phase = (READ_ONCE(cdesc->status) &
> > -                     ENA_ETH_IO_RX_CDESC_BASE_PHASE_MASK) >>
> > +       desc_phase = (READ_ONCE(cdesc->status) & ENA_ETH_IO_RX_CDESC_BASE_PHASE_MASK) >>
> >                       ENA_ETH_IO_RX_CDESC_BASE_PHASE_SHIFT;
> >
> >          if (desc_phase != expected_phase)
> > @@ -65,8 +64,8 @@ static int ena_com_write_bounce_buffer_to_dev(struct ena_com_io_sq *io_sq,
> >
> >                  io_sq->entries_in_tx_burst_left--;
> >                  netdev_dbg(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
> > -                          "Decreasing entries_in_tx_burst_left of queue %d to %d\n",
> > -                          io_sq->qid, io_sq->entries_in_tx_burst_left);
> > +                          "Decreasing entries_in_tx_burst_left of queue %d to %d\n", io_sq->qid,
> > +                          io_sq->entries_in_tx_burst_left);
> >          }
> >
> >          /* Make sure everything was written into the bounce buffer before
> > @@ -75,8 +74,8 @@ static int ena_com_write_bounce_buffer_to_dev(struct ena_com_io_sq *io_sq,
> >          wmb();
> >
> >          /* The line is completed. Copy it to dev */
> > -       __iowrite64_copy(io_sq->desc_addr.pbuf_dev_addr + dst_offset,
> > -                        bounce_buffer, (llq_info->desc_list_entry_size) / 8);
> > +       __iowrite64_copy(io_sq->desc_addr.pbuf_dev_addr + dst_offset, bounce_buffer,
> > +                        (llq_info->desc_list_entry_size) / 8);
> >
> >          io_sq->tail++;
> >
> > @@ -102,16 +101,14 @@ static int ena_com_write_header_to_bounce(struct ena_com_io_sq *io_sq,
> >          header_offset =
> >                  llq_info->descs_num_before_header * io_sq->desc_entry_size;
> >
> > -       if (unlikely((header_offset + header_len) >
> > -                    llq_info->desc_list_entry_size)) {
> > +       if (unlikely((header_offset + header_len) > llq_info->desc_list_entry_size)) {
> >                  netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
> >                             "Trying to write header larger than llq entry can accommodate\n");
> >                  return -EFAULT;
> >          }
> >
> >          if (unlikely(!bounce_buffer)) {
> > -               netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
> > -                          "Bounce buffer is NULL\n");
> > +               netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device, "Bounce buffer is NULL\n");
> >                  return -EFAULT;
> >          }
> >
> > @@ -129,8 +126,7 @@ static void *get_sq_desc_llq(struct ena_com_io_sq *io_sq)
> >          bounce_buffer = pkt_ctrl->curr_bounce_buf;
> >
> >          if (unlikely(!bounce_buffer)) {
> > -               netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
> > -                          "Bounce buffer is NULL\n");
> > +               netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device, "Bounce buffer is NULL\n");
> >                  return NULL;
> >          }
> >
> > @@ -247,8 +243,7 @@ static u16 ena_com_cdesc_rx_pkt_get(struct ena_com_io_cq *io_cq,
> >
> >                  ena_com_cq_inc_head(io_cq);
> >                  count++;
> > -               last = (READ_ONCE(cdesc->status) &
> > -                       ENA_ETH_IO_RX_CDESC_BASE_LAST_MASK) >>
> > +               last = (READ_ONCE(cdesc->status) & ENA_ETH_IO_RX_CDESC_BASE_LAST_MASK) >>
> >                         ENA_ETH_IO_RX_CDESC_BASE_LAST_SHIFT;
> >          } while (!last);
> >
> > @@ -369,9 +364,8 @@ static void ena_com_rx_set_flags(struct ena_com_io_cq *io_cq,
> >
> >          netdev_dbg(ena_com_io_cq_to_ena_dev(io_cq)->net_device,
> >                     "l3_proto %d l4_proto %d l3_csum_err %d l4_csum_err %d hash
> %d frag %d cdesc_status %x\n",
> > -                  ena_rx_ctx->l3_proto, ena_rx_ctx->l4_proto,
> > -                  ena_rx_ctx->l3_csum_err, ena_rx_ctx->l4_csum_err,
> > -                  ena_rx_ctx->hash, ena_rx_ctx->frag, cdesc->status);
> > +                  ena_rx_ctx->l3_proto, ena_rx_ctx->l4_proto, ena_rx_ctx->l3_csum_err,
> > +                  ena_rx_ctx->l4_csum_err, ena_rx_ctx->hash, ena_rx_ctx->frag, cdesc->status);
> >   }
> >
> >
> > /*****************************************************************************/
> > @@ -403,13 +397,12 @@ int ena_com_prepare_tx(struct ena_com_io_sq *io_sq,
> >
> >          if (unlikely(header_len > io_sq->tx_max_header_size)) {
> >                  netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
> > -                          "Header size is too large %d max header: %d\n",
> > -                          header_len, io_sq->tx_max_header_size);
> > +                          "Header size is too large %d max header: %d\n", header_len,
> > +                          io_sq->tx_max_header_size);
> >                  return -EINVAL;
> >          }
> >
> > -       if (unlikely(io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV &&
> > -                    !buffer_to_push)) {
> > +       if (unlikely(io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV && !buffer_to_push)) {
> >                  netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
> >                             "Push header wasn't provided in LLQ mode\n");
> >                  return -EINVAL;
> > @@ -556,13 +549,11 @@ int ena_com_rx_pkt(struct ena_com_io_cq *io_cq,
> >          }
> >
> >          netdev_dbg(ena_com_io_cq_to_ena_dev(io_cq)->net_device,
> > -                  "Fetch rx packet: queue %d completed desc: %d\n", io_cq->qid,
> > -                  nb_hw_desc);
> > +                  "Fetch rx packet: queue %d completed desc: %d\n", io_cq->qid, nb_hw_desc);
> >
> >          if (unlikely(nb_hw_desc > ena_rx_ctx->max_bufs)) {
> >                  netdev_err(ena_com_io_cq_to_ena_dev(io_cq)->net_device,
> > -                          "Too many RX cdescs (%d) > MAX(%d)\n", nb_hw_desc,
> > -                          ena_rx_ctx->max_bufs);
> > +                          "Too many RX cdescs (%d) > MAX(%d)\n", nb_hw_desc, ena_rx_ctx->max_bufs);
> >                  return -ENOSPC;
> >          }
> >
> > @@ -586,8 +577,8 @@ int ena_com_rx_pkt(struct ena_com_io_cq *io_cq,
> >          io_sq->next_to_comp += nb_hw_desc;
> >
> >          netdev_dbg(ena_com_io_cq_to_ena_dev(io_cq)->net_device,
> > -                  "[%s][QID#%d] Updating SQ head to: %d\n", __func__,
> > -                  io_sq->qid, io_sq->next_to_comp);
> > +                  "[%s][QID#%d] Updating SQ head to: %d\n", __func__, io_sq->qid,
> > +                  io_sq->next_to_comp);
> >
> >          /* Get rx flags from the last pkt */
> >          ena_com_rx_set_flags(io_cq, ena_rx_ctx, cdesc);
> > @@ -624,8 +615,8 @@ int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq,
> >          desc->req_id = req_id;
> >
> >          netdev_dbg(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
> > -                  "[%s] Adding single RX desc, Queue: %u, req_id: %u\n",
> > -                  __func__, io_sq->qid, req_id);
> > +                  "[%s] Adding single RX desc, Queue: %u, req_id: %u\n", __func__, io_sq->qid,
> > +                  req_id);
> >
> >          desc->buff_addr_lo = (u32)ena_buf->paddr;
> >          desc->buff_addr_hi =
> > diff --git a/drivers/net/ethernet/amazon/ena/ena_eth_com.h b/drivers/net/ethernet/amazon/ena/ena_eth_com.h
> > index 4d65d82..72b0197 100644
> > --- a/drivers/net/ethernet/amazon/ena/ena_eth_com.h
> > +++ b/drivers/net/ethernet/amazon/ena/ena_eth_com.h
> > @@ -143,8 +143,8 @@ static inline bool ena_com_is_doorbell_needed(struct ena_com_io_sq *io_sq,
> >          }
> >
> >          netdev_dbg(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
> > -                  "Queue: %d num_descs: %d num_entries_needed: %d\n",
> > -                  io_sq->qid, num_descs, num_entries_needed);
> > +                  "Queue: %d num_descs: %d num_entries_needed: %d\n", io_sq->qid, num_descs,
> > +                  num_entries_needed);
> >
> >          return num_entries_needed > io_sq->entries_in_tx_burst_left;
> >   }
> > @@ -155,15 +155,14 @@ static inline int ena_com_write_sq_doorbell(struct ena_com_io_sq *io_sq)
> >          u16 tail = io_sq->tail;
> >
> >          netdev_dbg(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
> > -                  "Write submission queue doorbell for queue: %d tail: %d\n",
> > -                  io_sq->qid, tail);
> > +                  "Write submission queue doorbell for queue: %d tail: %d\n", io_sq->qid, tail);
> >
> >          writel(tail, io_sq->db_addr);
> >
> >          if (is_llq_max_tx_burst_exists(io_sq)) {
> >                  netdev_dbg(ena_com_io_sq_to_ena_dev(io_sq)->net_device,
> > -                          "Reset available entries in tx burst for queue %d to %d\n",
> > -                          io_sq->qid, max_entries_in_tx_burst);
> > +                          "Reset available entries in tx burst for queue %d to %d\n", io_sq->qid,
> > +                          max_entries_in_tx_burst);
> >                  io_sq->entries_in_tx_burst_left = max_entries_in_tx_burst;
> >          }
> >
> > @@ -224,8 +223,8 @@ static inline int ena_com_tx_comp_req_id_get(struct ena_com_io_cq *io_cq,
> >
> >          *req_id = READ_ONCE(cdesc->req_id);
> >          if (unlikely(*req_id >= io_cq->q_depth)) {
> > -               netdev_err(ena_com_io_cq_to_ena_dev(io_cq)->net_device,
> > -                          "Invalid req id %d\n", cdesc->req_id);
> > +               netdev_err(ena_com_io_cq_to_ena_dev(io_cq)->net_device, "Invalid req id %d\n",
> > +                          cdesc->req_id);
> >                  return -EINVAL;
> >          }
> >
> > diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
> > index ca56dff..526ab3e 100644
> > --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
> > +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
> > @@ -141,11 +141,9 @@ int ena_xmit_common(struct ena_adapter *adapter,
> >          if (unlikely(rc)) {
> >                  netif_err(adapter, tx_queued, adapter->netdev,
> >                            "Failed to prepare tx bufs\n");
> > -               ena_increase_stat(&ring->tx_stats.prepare_ctx_err, 1,
> > -                                 &ring->syncp);
> > +               ena_increase_stat(&ring->tx_stats.prepare_ctx_err, 1, &ring->syncp);
> >                  if (rc != -ENOMEM)
> > -                       ena_reset_device(adapter,
> > -                                        ENA_REGS_RESET_DRIVER_INVALID_STATE);
> > +                       ena_reset_device(adapter, ENA_REGS_RESET_DRIVER_INVALID_STATE);
> >                  return rc;
> >          }
> >
> > @@ -510,8 +508,7 @@ static struct page *ena_alloc_map_page(struct ena_ring *rx_ring,
> >           */
> >          page = dev_alloc_page();
> >          if (!page) {
> > -               ena_increase_stat(&rx_ring->rx_stats.page_alloc_fail, 1,
> > -                                 &rx_ring->syncp);
> > +               ena_increase_stat(&rx_ring->rx_stats.page_alloc_fail, 1, &rx_ring->syncp);
> >                  return ERR_PTR(-ENOSPC);
> >          }
> >
> > @@ -570,8 +567,8 @@ static void ena_unmap_rx_buff_attrs(struct ena_ring *rx_ring,
> >                                      struct ena_rx_buffer *rx_info,
> >                                      unsigned long attrs)
> >   {
> > -       dma_unmap_page_attrs(rx_ring->dev, rx_info->dma_addr, ENA_PAGE_SIZE,
> > -                            DMA_BIDIRECTIONAL, attrs);
> > +       dma_unmap_page_attrs(rx_ring->dev, rx_info->dma_addr, ENA_PAGE_SIZE, DMA_BIDIRECTIONAL,
> > +                            attrs);
> >   }
> >
> >   static void ena_free_rx_page(struct ena_ring *rx_ring,
> > @@ -844,8 +841,7 @@ static int ena_clean_tx_irq(struct ena_ring *tx_ring, u32 budget)
> >                                                  &req_id);
> >                  if (rc) {
> >                          if (unlikely(rc == -EINVAL))
> > -                               handle_invalid_req_id(tx_ring, req_id, NULL,
> > -                                                     false);
> > +                               handle_invalid_req_id(tx_ring, req_id, NULL, false);
> >                          break;
> >                  }
> >
> > @@ -1070,8 +1066,7 @@ static struct sk_buff *ena_rx_skb(struct ena_ring *rx_ring,
> >                                          DMA_FROM_DEVICE);
> >
> >                  if (!reuse_rx_buf_page)
> > -                       ena_unmap_rx_buff_attrs(rx_ring, rx_info,
> > -                                               DMA_ATTR_SKIP_CPU_SYNC);
> > +                       ena_unmap_rx_buff_attrs(rx_ring, rx_info, DMA_ATTR_SKIP_CPU_SYNC);
> >
> >                  skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_info->page,
> >                                  page_offset + buf_offset, len, buf_len);
> > @@ -1342,8 +1337,7 @@ error:
> >          adapter = netdev_priv(rx_ring->netdev);
> >
> >          if (rc == -ENOSPC) {
> > -               ena_increase_stat(&rx_ring->rx_stats.bad_desc_num, 1,
> > -                                 &rx_ring->syncp);
> > +               ena_increase_stat(&rx_ring->rx_stats.bad_desc_num, 1, &rx_ring->syncp);
> >                  ena_reset_device(adapter, ENA_REGS_RESET_TOO_MANY_RX_DESCS);
> >          } else {
> >                  ena_increase_stat(&rx_ring->rx_stats.bad_req_id, 1,
> > @@ -1833,8 +1827,7 @@ static int ena_rss_configure(struct ena_adapter *adapter)
> >          if (!ena_dev->rss.tbl_log_size) {
> >                  rc = ena_rss_init_default(adapter);
> >                  if (rc && (rc != -EOPNOTSUPP)) {
> > -                       netif_err(adapter, ifup, adapter->netdev,
> > -                                 "Failed to init RSS rc: %d\n", rc);
> > +                       netif_err(adapter, ifup, adapter->netdev, "Failed to init RSS rc: %d\n", rc);
> >                          return rc;
> >                  }
> >          }
> > @@ -2790,8 +2783,7 @@ static void ena_config_debug_area(struct ena_adapter *adapter)
> >          rc = ena_com_set_host_attributes(adapter->ena_dev);
> >          if (rc) {
> >                  if (rc == -EOPNOTSUPP)
> > -                       netif_warn(adapter, drv, adapter->netdev,
> > -                                  "Cannot set host attributes\n");
> > +                       netif_warn(adapter, drv, adapter->netdev, "Cannot set host attributes\n");
> >                  else
> >                          netif_err(adapter, drv, adapter->netdev,
> >                                    "Cannot set host attributes\n");
> > @@ -3831,8 +3823,8 @@ static int ena_rss_init_default(struct ena_adapter *adapter)
> >                  }
> >          }
> >
> > -       rc = ena_com_fill_hash_function(ena_dev, ENA_ADMIN_TOEPLITZ, NULL,
> > -                                       ENA_HASH_KEY_SIZE, 0xFFFFFFFF);
> > +       rc = ena_com_fill_hash_function(ena_dev, ENA_ADMIN_TOEPLITZ, NULL, ENA_HASH_KEY_SIZE,
> > +                                       0xFFFFFFFF);
> >          if (unlikely(rc && (rc != -EOPNOTSUPP))) {
> >                  dev_err(dev, "Cannot fill hash function\n");
> >                  goto err_fill_indir;
> > --
> > 2.40.1
> >
> >


Thread overview: 21+ messages
2024-01-29  8:55 [PATCH v1 net-next 00/11] ENA driver changes darinzon
2024-01-29  8:55 ` [PATCH v1 net-next 01/11] net: ena: Remove an unused field darinzon
2024-01-29  8:55 ` [PATCH v1 net-next 02/11] net: ena: Add more documentation for RX copybreak darinzon
2024-01-29  8:55 ` [PATCH v1 net-next 03/11] net: ena: Minor cosmetic changes darinzon
2024-01-29  8:55 ` [PATCH v1 net-next 04/11] net: ena: Enable DIM by default darinzon
2024-01-29  8:55 ` [PATCH v1 net-next 05/11] net: ena: Remove CQ tail pointer update darinzon
2024-01-30  1:16   ` Nelson, Shannon
2024-01-30  9:39     ` Arinzon, David
2024-01-29  8:55 ` [PATCH v1 net-next 06/11] net: ena: Change error print during ena_device_init() darinzon
2024-01-29  8:55 ` [PATCH v1 net-next 07/11] net: ena: Add more information on TX timeouts darinzon
2024-01-29  8:55 ` [PATCH v1 net-next 08/11] net: ena: Relocate skb_tx_timestamp() to improve time stamping accuracy darinzon
2024-01-29  8:55 ` [PATCH v1 net-next 09/11] net: ena: Change default print level for netif_ prints darinzon
2024-01-29  8:55 ` [PATCH v1 net-next 10/11] net: ena: handle ena_calc_io_queue_size() possible errors darinzon
2024-01-30  1:16   ` Nelson, Shannon
2024-01-30  9:39     ` Arinzon, David
2024-01-29  8:55 ` [PATCH v1 net-next 11/11] net: ena: Reduce lines with longer column width boundary darinzon
2024-01-30  1:16   ` Nelson, Shannon
2024-01-30  9:39     ` Arinzon, David [this message]
2024-01-30  1:20 ` [PATCH v1 net-next 00/11] ENA driver changes Nelson, Shannon
2024-01-30  9:39   ` Arinzon, David
2024-01-30 21:07     ` Nelson, Shannon
