From: Marek Vasut <marex@denx.de>
To: u-boot@lists.denx.de
Subject: [U-Boot] [PATCH 3/3] MIPS: Abstract cache op loops with a macro
Date: Thu, 26 May 2016 18:13:17 +0200 [thread overview]
Message-ID: <5747209D.10801@denx.de> (raw)
In-Reply-To: <20160526155850.25412-4-paul.burton@imgtec.com>
On 05/26/2016 05:58 PM, Paul Burton wrote:
> The various cache maintenance routines perform a number of loops over
> cache lines. Rather than duplicate the code for performing such loops,
> abstract it out into a new cache_loop macro which performs an arbitrary
> number of cache ops on a range of addresses. This reduces duplication in
> the existing L1 cache maintenance code & will allow for not adding
> further duplication when introducing L2 cache support.
>
> Signed-off-by: Paul Burton <paul.burton@imgtec.com>
>
> ---
>
> arch/mips/lib/cache.c | 59 ++++++++++++++++-----------------------------------
> 1 file changed, 18 insertions(+), 41 deletions(-)
>
> diff --git a/arch/mips/lib/cache.c b/arch/mips/lib/cache.c
> index 2bb91c6..320335c 100644
> --- a/arch/mips/lib/cache.c
> +++ b/arch/mips/lib/cache.c
> @@ -37,82 +37,59 @@ static inline unsigned long dcache_line_size(void)
> return 2 << dl;
> }
>
> +#define cache_loop(start, end, lsize, ops...) do { \
> + const void *addr = (const void *)(start & ~(lsize - 1)); \
> + const void *aend = (const void *)((end - 1) & ~(lsize - 1)); \
> + const unsigned int cache_ops[] = { ops }; \
Can't you turn this into a function instead, passing flags to
select the ops?
> + unsigned int i; \
> + \
> + for (; addr <= aend; addr += lsize) { \
> + for (i = 0; i < ARRAY_SIZE(cache_ops); i++) \
> + mips_cache(cache_ops[i], addr); \
> + } \
> +} while (0)
> +
> void flush_cache(ulong start_addr, ulong size)
> {
> unsigned long ilsize = icache_line_size();
> unsigned long dlsize = dcache_line_size();
> - const void *addr, *aend;
>
> /* aend will be miscalculated when size is zero, so we return here */
> if (size == 0)
> return;
>
> - addr = (const void *)(start_addr & ~(dlsize - 1));
> - aend = (const void *)((start_addr + size - 1) & ~(dlsize - 1));
> -
> if (ilsize == dlsize) {
> /* flush I-cache & D-cache simultaneously */
> - while (1) {
> - mips_cache(HIT_WRITEBACK_INV_D, addr);
> - mips_cache(HIT_INVALIDATE_I, addr);
> - if (addr == aend)
> - break;
> - addr += dlsize;
> - }
> + cache_loop(start_addr, start_addr + size, ilsize,
> + HIT_WRITEBACK_INV_D, HIT_INVALIDATE_I);
> return;
> }
>
> /* flush D-cache */
> - while (1) {
> - mips_cache(HIT_WRITEBACK_INV_D, addr);
> - if (addr == aend)
> - break;
> - addr += dlsize;
> - }
> + cache_loop(start_addr, start_addr + size, dlsize, HIT_WRITEBACK_INV_D);
>
> /* flush I-cache */
> - addr = (const void *)(start_addr & ~(ilsize - 1));
> - aend = (const void *)((start_addr + size - 1) & ~(ilsize - 1));
> - while (1) {
> - mips_cache(HIT_INVALIDATE_I, addr);
> - if (addr == aend)
> - break;
> - addr += ilsize;
> - }
> + cache_loop(start_addr, start_addr + size, ilsize, HIT_INVALIDATE_I);
> }
>
> void flush_dcache_range(ulong start_addr, ulong stop)
> {
> unsigned long lsize = dcache_line_size();
> - const void *addr = (const void *)(start_addr & ~(lsize - 1));
> - const void *aend = (const void *)((stop - 1) & ~(lsize - 1));
>
> /* aend will be miscalculated when size is zero, so we return here */
> if (start_addr == stop)
> return;
>
> - while (1) {
> - mips_cache(HIT_WRITEBACK_INV_D, addr);
> - if (addr == aend)
> - break;
> - addr += lsize;
> - }
> + cache_loop(start_addr, stop, lsize, HIT_WRITEBACK_INV_D);
> }
>
> void invalidate_dcache_range(ulong start_addr, ulong stop)
> {
> unsigned long lsize = dcache_line_size();
> - const void *addr = (const void *)(start_addr & ~(lsize - 1));
> - const void *aend = (const void *)((stop - 1) & ~(lsize - 1));
>
> /* aend will be miscalculated when size is zero, so we return here */
> if (start_addr == stop)
> return;
>
> - while (1) {
> - mips_cache(HIT_INVALIDATE_D, addr);
> - if (addr == aend)
> - break;
> - addr += lsize;
> - }
> +	cache_loop(start_addr, stop, lsize, HIT_INVALIDATE_D);
> }
>
--
Best regards,
Marek Vasut