public inbox for linux-ia64@vger.kernel.org
From: Alex Williamson <alex.williamson@hp.com>
To: linux-ia64@vger.kernel.org
Subject: [PATCH] fix mem= & max_addr=
Date: Thu, 07 Oct 2004 22:46:27 +0000	[thread overview]
Message-ID: <1097189187.4491.63.camel@tdi> (raw)


   This should hopefully fix all the strange behavior seen when using
mem= or max_addr= to limit memory usage.  The current code has several
problems with splitting granules and removing the dangling pieces on
subsequent passes.  This could happen when mem_limit hit total_mem, and
any time we reduced the page count of an entry without updating
first_non_wb_addr.  There was also an off-by-one in max_addr that
sometimes caused an extra granule to get dropped.

   This change introduces some extra fuzz: a max_addr= specification is
now rounded down to a granule boundary, and with mem= the resulting
memory quantity will be within one granule size of the requested
amount.  Let me know if anyone finds more problems with it.
Thanks,

	Alex

-- 
Signed-off-by: Alex Williamson <alex.williamson@hp.com>

=== arch/ia64/kernel/efi.c 1.36 vs edited ===
--- 1.36/arch/ia64/kernel/efi.c	2004-08-25 11:50:37 -06:00
+++ edited/arch/ia64/kernel/efi.c	2004-10-07 15:59:50 -06:00
@@ -348,19 +348,31 @@
 			trim_top(md, last_granule_addr);
 
 		if (is_available_memory(md)) {
-			if (md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT) > max_addr) {
-				if (md->phys_addr > max_addr)
+			if (md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT) >= max_addr) {
+				if (md->phys_addr >= max_addr)
 					continue;
 				md->num_pages = (max_addr - md->phys_addr) >> EFI_PAGE_SHIFT;
+				first_non_wb_addr = max_addr;
 			}
 
 			if (total_mem >= mem_limit)
 				continue;
-			total_mem += (md->num_pages << EFI_PAGE_SHIFT);
-			if (total_mem > mem_limit) {
-				md->num_pages -= ((total_mem - mem_limit) >> EFI_PAGE_SHIFT);
-				max_addr = md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT);
+
+			if (total_mem + (md->num_pages << EFI_PAGE_SHIFT) > mem_limit) {
+				unsigned long limit_addr = md->phys_addr;
+
+				limit_addr += mem_limit - total_mem;
+				limit_addr &= ~(IA64_GRANULE_SIZE - 1);
+
+				if (md->phys_addr > limit_addr)
+					continue;
+
+				md->num_pages = (limit_addr - md->phys_addr) >>
+				                EFI_PAGE_SHIFT;
+				first_non_wb_addr = max_addr = md->phys_addr +
+				              (md->num_pages << EFI_PAGE_SHIFT);
 			}
+			total_mem += (md->num_pages << EFI_PAGE_SHIFT);
 
 			if (md->num_pages == 0)
 				continue;
@@ -495,13 +507,14 @@
 	for (cp = saved_command_line; *cp; ) {
 		if (memcmp(cp, "mem=", 4) == 0) {
 			cp += 4;
-			mem_limit = memparse(cp, &end) - 2;
+			mem_limit = memparse(cp, &end);
 			if (end != cp)
 				break;
 			cp = end;
 		} else if (memcmp(cp, "max_addr=", 9) == 0) {
 			cp += 9;
-			max_addr = memparse(cp, &end) - 1;
+			max_addr = (memparse(cp, &end) &
+			            ~(IA64_GRANULE_SIZE - 1));
 			if (end != cp)
 				break;
 			cp = end;


