public inbox for linux-kernel@vger.kernel.org
From: venkatesh.pallipadi@intel.com
To: mingo@elte.hu, tglx@linutronix.de, hpa@zytor.com, airlied@redhat.com
Cc: arjan@infradead.org, eric@anholt.net,
	linux-kernel@vger.kernel.org,
	Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Subject: [patch 3/3] x86, CPA: Add set_pages_array_uc and set_pages_array_wb
Date: Thu, 19 Mar 2009 14:51:15 -0700
Message-ID: <20090319215358.901545000@intel.com>
In-Reply-To: <20090319215112.636641000@intel.com>


Add new interfaces
set_pages_array_uc()
set_pages_array_wb()
that change the page attributes of an array of pages, with the cache and TLB
flushes done once at the end of all the changes rather than once per page.
These interfaces parallel the existing set_memory_array_uc() and
set_memory_array_wb(), but take struct page pointers instead of linear
addresses. A usage sketch follows below.
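
For illustration only (not part of the patch), a minimal sketch of how a
caller might use the pair; the helper names buf_set_uncached() and
buf_set_cached() are hypothetical:

	#include <linux/mm.h>
	#include <asm/cacheflush.h>

	/* Map a batch of pages UC-minus with a single CPA pass and flush. */
	static int buf_set_uncached(struct page **pages, int npages)
	{
		/*
		 * On failure the per-page memtype reservations are rolled
		 * back inside set_pages_array_uc(), so no cleanup here.
		 */
		return set_pages_array_uc(pages, npages);
	}

	/* Restore write-back caching and free the memtype entries. */
	static void buf_set_cached(struct page **pages, int npages)
	{
		set_pages_array_wb(pages, npages);
	}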

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>

---
 arch/x86/include/asm/cacheflush.h |    3 +
 arch/x86/mm/pageattr.c            |   63 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 66 insertions(+)

Index: tip/arch/x86/mm/pageattr.c
===================================================================
--- tip.orig/arch/x86/mm/pageattr.c	2009-03-17 11:03:51.000000000 -0700
+++ tip/arch/x86/mm/pageattr.c	2009-03-17 11:04:08.000000000 -0700
@@ -919,6 +919,20 @@ static inline int change_page_attr_clear
 		(array ? CPA_ARRAY : 0), NULL);
 }
 
+static inline int cpa_set_pages_array(struct page **pages, int numpages,
+				       pgprot_t mask)
+{
+	return change_page_attr_set_clr(NULL, numpages, mask, __pgprot(0), 0,
+		CPA_PAGES_ARRAY, pages);
+}
+
+static inline int cpa_clear_pages_array(struct page **pages, int numpages,
+					 pgprot_t mask)
+{
+	return change_page_attr_set_clr(NULL, numpages, __pgprot(0), mask, 0,
+		CPA_PAGES_ARRAY, pages);
+}
+
 int _set_memory_uc(unsigned long addr, int numpages)
 {
 	/*
@@ -1075,6 +1089,35 @@ int set_pages_uc(struct page *page, int 
 }
 EXPORT_SYMBOL(set_pages_uc);
 
+int set_pages_array_uc(struct page **pages, int addrinarray)
+{
+	unsigned long start;
+	unsigned long end;
+	int i;
+	int free_idx;
+
+	for (i = 0; i < addrinarray; i++) {
+		start = (unsigned long)page_address(pages[i]);
+		end = start + PAGE_SIZE;
+		if (reserve_memtype(start, end, _PAGE_CACHE_UC_MINUS, NULL))
+			goto err_out;
+	}
+
+	if (cpa_set_pages_array(pages, addrinarray,
+			__pgprot(_PAGE_CACHE_UC_MINUS)) == 0) {
+		return 0; /* Success */
+	}
+err_out:
+	free_idx = i;
+	for (i = 0; i < free_idx; i++) {
+		start = (unsigned long)page_address(pages[i]);
+		end = start + PAGE_SIZE;
+		free_memtype(start, end);
+	}
+	return -EINVAL;
+}
+EXPORT_SYMBOL(set_pages_array_uc);
+
 int set_pages_wb(struct page *page, int numpages)
 {
 	unsigned long addr = (unsigned long)page_address(page);
@@ -1083,6 +1126,26 @@ int set_pages_wb(struct page *page, int 
 }
 EXPORT_SYMBOL(set_pages_wb);
 
+int set_pages_array_wb(struct page **pages, int addrinarray)
+{
+	int retval;
+	unsigned long start;
+	unsigned long end;
+	int i;
+
+	retval = cpa_clear_pages_array(pages, addrinarray,
+			__pgprot(_PAGE_CACHE_MASK));
+
+	for (i = 0; i < addrinarray; i++) {
+		start = (unsigned long)page_address(pages[i]);
+		end = start + PAGE_SIZE;
+		free_memtype(start, end);
+	}
+
+	return retval;
+}
+EXPORT_SYMBOL(set_pages_array_wb);
+
 int set_pages_x(struct page *page, int numpages)
 {
 	unsigned long addr = (unsigned long)page_address(page);
Index: tip/arch/x86/include/asm/cacheflush.h
===================================================================
--- tip.orig/arch/x86/include/asm/cacheflush.h	2009-03-13 11:51:31.000000000 -0700
+++ tip/arch/x86/include/asm/cacheflush.h	2009-03-17 11:04:08.000000000 -0700
@@ -90,6 +90,9 @@ int set_memory_4k(unsigned long addr, in
 int set_memory_array_uc(unsigned long *addr, int addrinarray);
 int set_memory_array_wb(unsigned long *addr, int addrinarray);
 
+int set_pages_array_uc(struct page **pages, int addrinarray);
+int set_pages_array_wb(struct page **pages, int addrinarray);
+
 /*
  * For legacy compatibility with the old APIs, a few functions
  * are provided that work on a "struct page".
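
For comparison (again not part of the patch), the per-page pattern a caller
would otherwise have to use: each set_pages_uc()/set_pages_wb() call performs
its own flush, which is the cost the array variants amortize across the whole
batch. The unwind loop below is a hypothetical illustration:

	#include <linux/mm.h>
	#include <asm/cacheflush.h>

	/* Per-page conversion, as callers had to do before this patch. */
	static int buf_set_uncached_perpage(struct page **pages, int npages)
	{
		int i, ret;

		for (i = 0; i < npages; i++) {
			/* One CPA walk and one flush per iteration. */
			ret = set_pages_uc(pages[i], 1);
			if (ret)
				goto unwind;
		}
		return 0;

	unwind:
		while (--i >= 0)
			set_pages_wb(pages[i], 1);
		return ret;
	}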

-- 


Thread overview: 7+ messages
2009-03-19 21:51 [patch 0/3] x86, CPA: Introduce new APIs set_pages_array[uc|wb] venkatesh.pallipadi
2009-03-19 21:51 ` [patch 1/3] x86, CPA: Add a flag parameter to cpa set_clr venkatesh.pallipadi
2009-03-20 10:24   ` [tip:x86/mm] x86, CPA: Add a flag parameter to cpa set_clr() venkatesh.pallipadi
2009-03-19 21:51 ` [patch 2/3] x86, PAT: Add support for struct page pointer array in cpa set_clr venkatesh.pallipadi
2009-03-20 10:24   ` [tip:x86/mm] " venkatesh.pallipadi
2009-03-19 21:51 ` venkatesh.pallipadi [this message]
2009-03-20 10:24   ` [tip:x86/mm] x86, CPA: Add set_pages_array_uc and set_pages_array_wb venkatesh.pallipadi
