From: Umang Jain <umang.jain@ideasonboard.com>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
Broadcom internal kernel review list
<bcm-kernel-feedback-list@broadcom.com>
Cc: linux-rpi-kernel@lists.infradead.org,
linux-arm-kernel@lists.infradead.org,
linux-staging@lists.linux.dev, linux-kernel@vger.kernel.org,
Kieran Bingham <kieran.bingham@ideasonboard.com>,
Dan Carpenter <dan.carpenter@linaro.org>,
Stefan Wahren <wahrenst@gmx.net>,
Laurent Pinchart <laurent.pinchart@ideasonboard.com>,
Umang Jain <umang.jain@ideasonboard.com>
Subject: [PATCH 5/5] staging: vchiq_core: Locally cache cache_line_size information
Date: Thu, 10 Oct 2024 15:52:49 +0530
Message-ID: <20241010102250.236545-6-umang.jain@ideasonboard.com>
In-Reply-To: <20241010102250.236545-1-umang.jain@ideasonboard.com>
Cache the 'cache_line_size' value in a local variable instead of
repeatedly dereferencing it through drv_mgmt->info. The shorter
expression allows the affected lines to be reflowed to fit within 80
columns.

No functional change intended in this patch.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
---
.../interface/vchiq_arm/vchiq_core.c | 19 +++++++++++--------
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
index d03b67f9cdb7..19c24dd9d1b3 100644
--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
@@ -1516,6 +1516,7 @@ create_pagelist(struct vchiq_instance *instance, char *buf, char __user *ubuf,
size_t pagelist_size;
struct scatterlist *scatterlist, *sg;
int dma_buffers;
+ unsigned int cache_line_size;
dma_addr_t dma_addr;
if (count >= INT_MAX - PAGE_SIZE)
@@ -1666,10 +1667,10 @@ create_pagelist(struct vchiq_instance *instance, char *buf, char __user *ubuf,
}
/* Partial cache lines (fragments) require special measures */
+ cache_line_size = drv_mgmt->info->cache_line_size;
if ((type == PAGELIST_READ) &&
- ((pagelist->offset & (drv_mgmt->info->cache_line_size - 1)) ||
- ((pagelist->offset + pagelist->length) &
- (drv_mgmt->info->cache_line_size - 1)))) {
+ ((pagelist->offset & (cache_line_size - 1)) ||
+ ((pagelist->offset + pagelist->length) & (cache_line_size - 1)))) {
char *fragments;
if (down_interruptible(&drv_mgmt->free_fragments_sema)) {
@@ -1699,6 +1700,7 @@ free_pagelist(struct vchiq_instance *instance,
struct pagelist *pagelist = pagelistinfo->pagelist;
struct page **pages = pagelistinfo->pages;
unsigned int num_pages = pagelistinfo->num_pages;
+ unsigned int cache_line_size;
dev_dbg(instance->state->dev, "arm: %pK, %d\n",
pagelistinfo->pagelist, actual);
@@ -1714,6 +1716,7 @@ free_pagelist(struct vchiq_instance *instance,
pagelistinfo->scatterlist_mapped = 0;
/* Deal with any partial cache lines (fragments) */
+ cache_line_size = drv_mgmt->info->cache_line_size;
if (pagelist->type >= PAGELIST_READ_WITH_FRAGMENTS &&
drv_mgmt->fragments_base) {
char *fragments = drv_mgmt->fragments_base +
@@ -1721,10 +1724,10 @@ free_pagelist(struct vchiq_instance *instance,
drv_mgmt->fragments_size;
int head_bytes, tail_bytes;
- head_bytes = (drv_mgmt->info->cache_line_size - pagelist->offset) &
- (drv_mgmt->info->cache_line_size - 1);
+ head_bytes = (cache_line_size - pagelist->offset) &
+ (cache_line_size - 1);
tail_bytes = (pagelist->offset + actual) &
- (drv_mgmt->info->cache_line_size - 1);
+ (cache_line_size - 1);
if ((actual >= 0) && (head_bytes != 0)) {
if (head_bytes > actual)
@@ -1737,8 +1740,8 @@ free_pagelist(struct vchiq_instance *instance,
(tail_bytes != 0))
memcpy_to_page(pages[num_pages - 1],
(pagelist->offset + actual) &
- (PAGE_SIZE - 1) & ~(drv_mgmt->info->cache_line_size - 1),
- fragments + drv_mgmt->info->cache_line_size,
+ (PAGE_SIZE - 1) & ~(cache_line_size - 1),
+ fragments + cache_line_size,
tail_bytes);
down(&drv_mgmt->free_fragments_mutex);
--
2.45.2
Thread overview: 13+ messages
2024-10-10 10:22 [PATCH 0/5] staging: vchiq_core: Improve indentation Umang Jain
2024-10-10 10:22 ` [PATCH 1/5] staging: vchiq_core: Fix white space indentation error Umang Jain
2024-10-10 16:54 ` Stefan Wahren
2024-10-10 10:22 ` [PATCH 2/5] staging: vchiq_core: Indent static_assert on single line Umang Jain
2024-10-10 16:55 ` Stefan Wahren
2024-10-10 10:22 ` [PATCH 3/5] staging: vchiq_core: Reflow long lines to 80 columns Umang Jain
2024-10-11 4:36 ` Greg Kroah-Hartman
2024-10-10 10:22 ` [PATCH 4/5] staging: vchiq_core: Macros indentation fix Umang Jain
2024-10-10 16:50 ` Stefan Wahren
2024-10-11 4:36 ` Greg Kroah-Hartman
2024-10-10 10:22 ` Umang Jain [this message]
2024-10-10 16:52 ` [PATCH 5/5] staging: vchiq_core: Locally cache cache_line_size information Stefan Wahren
2024-10-11 4:39 ` Greg Kroah-Hartman