From: Pali Rohár <pali@kernel.org>
To: Stefan Roese, Marek Behún, Chris Packham
Cc: u-boot@lists.denx.de
Subject: [PATCH u-boot-marvell v2 12/20] tools: kwbimage: Enforce 128-bit boundary alignment only for Sheeva CPU
Date: Wed, 12 Jan 2022 18:20:46 +0100
Message-Id: <20220112172054.5961-13-pali@kernel.org>
In-Reply-To: <20220112172054.5961-1-pali@kernel.org>
References: <20211221155416.8557-1-pali@kernel.org> <20220112172054.5961-1-pali@kernel.org>

This alignment is required only for platforms based on the Sheeva CPU core,
which are A370 and AXP. Now that the U-Boot build system correctly propagates
LOAD_ADDRESS, there is no need to enforce 128-bit boundary alignment on
platforms which do not need it. Previously it was required because the load
address was implicitly rounded up to a 128-bit boundary, and the U-Boot build
system expected and misused this behavior. With LOAD_ADDRESS now set
explicitly, there is no more guessing of the load address.
Signed-off-by: Pali Rohár <pali@kernel.org>
---
 tools/kwbimage.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/tools/kwbimage.c b/tools/kwbimage.c
index ce053a4a5a78..7c2106006ad7 100644
--- a/tools/kwbimage.c
+++ b/tools/kwbimage.c
@@ -1101,8 +1101,10 @@ static size_t image_headersz_v1(int *hasext)
 			return 0;
 		}
 		headersz = e->binary.loadaddr - base_addr;
-	} else {
+	} else if (cpu_sheeva) {
 		headersz = ALIGN(headersz, 16);
+	} else {
+		headersz = ALIGN(headersz, 4);
 	}
 
 	headersz += ALIGN(s.st_size, 4) + sizeof(uint32_t);
@@ -1158,8 +1160,8 @@ static int add_binary_header_v1(uint8_t **cur, uint8_t **next_ext,
 	*cur += (binarye->binary.nargs + 1) * sizeof(uint32_t);
 
 	/*
-	 * ARM executable code inside the BIN header on some mvebu platforms
-	 * (e.g. A370, AXP) must always be aligned with the 128-bit boundary.
+	 * ARM executable code inside the BIN header on platforms with Sheeva
+	 * CPU (A370 and AXP) must always be aligned with the 128-bit boundary.
 	 * In the case when this code is not position independent (e.g. ARM
 	 * SPL), it must be placed at fixed load and execute address.
 	 * This requirement can be met by inserting dummy arguments into
@@ -1170,8 +1172,10 @@ static int add_binary_header_v1(uint8_t **cur, uint8_t **next_ext,
 	offset = *cur - (uint8_t *)main_hdr;
 	if (binarye->binary.loadaddr)
 		add_args = (binarye->binary.loadaddr - base_addr - offset) / sizeof(uint32_t);
-	else
+	else if (cpu_sheeva)
 		add_args = ((16 - offset % 16) % 16) / sizeof(uint32_t);
+	else
+		add_args = 0;
 	if (add_args) {
 		*(args - 1) = cpu_to_le32(binarye->binary.nargs + add_args);
 		*cur += add_args * sizeof(uint32_t);
-- 
2.20.1