From mboxrd@z Thu Jan  1 00:00:00 1970
From: mark.rutland@arm.com (Mark Rutland)
Date: Thu, 19 Mar 2015 10:41:01 +0000
Subject: [PATCH v5 8/8] arm64: enforce x1|x2|x3 == 0 upon kernel entry as per boot protocol
In-Reply-To:
References: <1426690527-14258-9-git-send-email-ard.biesheuvel@linaro.org>
 <20150318181315.GH19814@leverpostej>
 <20150318185737.GJ19814@leverpostej>
 <20150318202430.GA17417@leverpostej>
 <20150319103551.GA18473@leverpostej>
Message-ID: <20150319104100.GB18473@leverpostej>
To: linux-arm-kernel@lists.infradead.org
List-Id: linux-arm-kernel.lists.infradead.org

> >> Does it matter at all that __inval_cache_range() will mostly end up
> >> doing a civac for the whole array, since it uses civac rather than
> >> ivac for both non-cacheline-aligned ends of the region, and the
> >> typical cacheline size is larger than the size of the array? Couldn't
> >> that also clobber what we just wrote with a stale cacheline?
> >
> > Yes, though only if the memory were outside the footprint of the loaded
> > Image (which per the boot protocol should be clean to the PoC).
> >
> > So I guess we should move the boot_regs structure back into head.S so
> > it doesn't fall outside
>
> OK, that means .data should be fine too. __cacheline_aligned variables
> are put into the .data section, so let me use that instead (.bss gets
> cleared afterwards anyway, which is why I added __read_mostly initially).

Great. Using __cacheline_aligned sounds good to me.

Mark.