public inbox for ltp@lists.linux.it
 help / color / mirror / Atom feed
* [LTP] [PATCH 0/2] use system_specific_hugepages_info.h/.c
@ 2011-04-21  7:13 Caspar Zhang
  2011-04-21  7:13 ` [LTP] [PATCH 1/2] add get_no_of_free_hugepages function Caspar Zhang
  2011-04-22  3:26 ` [LTP] [PATCH 0/2] use system_specific_hugepages_info.h/.c Garrett Cooper
  0 siblings, 2 replies; 4+ messages in thread
From: Caspar Zhang @ 2011-04-21  7:13 UTC (permalink / raw)
  To: LTP List


I found that some tests in hugemmap use
$(topdir)/include/system_specific_hugepages_info.h while others do not.
I also found some overlapping functions in the hugemmap tests, such as
getting the huge page size or the number of free huge pages. I believe we
could move all of these functions into system_specific_hugepages_info.c and
make the code simpler and cleaner.

_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list

^ permalink raw reply	[flat|nested] 4+ messages in thread

* [LTP] [PATCH 1/2] add get_no_of_free_hugepages function
  2011-04-21  7:13 [LTP] [PATCH 0/2] use system_specific_hugepages_info.h/.c Caspar Zhang
@ 2011-04-21  7:13 ` Caspar Zhang
  2011-04-21  7:13   ` [LTP] [PATCH 2/2] hugemmap: use common header files Caspar Zhang
  2011-04-22  3:26 ` [LTP] [PATCH 0/2] use system_specific_hugepages_info.h/.c Garrett Cooper
  1 sibling, 1 reply; 4+ messages in thread
From: Caspar Zhang @ 2011-04-21  7:13 UTC (permalink / raw)
  To: LTP List

[-- Attachment #1: Type: text/plain, Size: 393 bytes --]


I want to enable the use of the hugepage-related functions in
$(topdir)/lib/system_specific_hugepages_info.c, so I added a function to
get the number of free huge pages.

Signed-off-by: Caspar Zhang <czhang@redhat.com>
---
 include/system_specific_hugepages_info.h |    4 ++-
 lib/system_specific_hugepages_info.c     |   44 ++++++++++++++++++++---------
 2 files changed, 33 insertions(+), 15 deletions(-)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: 0001-add-get_no_of_free_hugepages-function.patch --]
[-- Type: text/x-patch; name="0001-add-get_no_of_free_hugepages-function.patch", Size: 3158 bytes --]

diff --git a/include/system_specific_hugepages_info.h b/include/system_specific_hugepages_info.h
index 43c5ddc..f5e3c46 100644
--- a/include/system_specific_hugepages_info.h
+++ b/include/system_specific_hugepages_info.h
@@ -22,7 +22,9 @@
 
 /*Returns Total No. of available Hugepages in the system from /proc/meminfo*/
 int get_no_of_hugepages(void);
+/*Returns No. of Hugepages_Free  from /proc/meminfo*/
+int get_no_of_free_hugepages(void);
 /*Returns Hugepages Size from /proc/meminfo*/
 int hugepages_size(void);
-#endif
 
+#endif
diff --git a/lib/system_specific_hugepages_info.c b/lib/system_specific_hugepages_info.c
index 7e9be3b..2344add 100644
--- a/lib/system_specific_hugepages_info.c
+++ b/lib/system_specific_hugepages_info.c
@@ -29,22 +29,19 @@
 #include <sys/types.h>
 #include <test.h>
 
-#define BUFSIZE 512
-
 int get_no_of_hugepages() {
  #ifdef __linux__
        FILE *f;
        char buf[BUFSIZ];
 
        f = popen("grep 'HugePages_Total' /proc/meminfo | cut -d ':' -f2 | tr -d ' \n'", "r");
-       if (!f) {
-               tst_resm(TBROK, "Could not get info about Total_Hugepages from /proc/meminfo");
-               tst_exit();
-       }
+       if (!f)
+              tst_brkm(TBROK, NULL,
+                     "Could not get info about Total_Hugepages from /proc/meminfo");
        if (!fgets(buf, 10, f)) {
                fclose(f);
-               tst_resm(TBROK, "Could not read Total_Hugepages from /proc/meminfo");
-               tst_exit();
+               tst_brkm(TBROK, NULL,
+                     "Could not read Total_Hugepages from /proc/meminfo");
        }
        pclose(f);
        return(atoi(buf));
@@ -53,6 +50,26 @@ int get_no_of_hugepages() {
  #endif
 }
 
+int get_no_of_free_hugepages() {
+ #ifdef __linux__
+       FILE *f;
+       char buf[BUFSIZ];
+
+       f = popen("grep 'HugePages_Free' /proc/meminfo | cut -d ':' -f2 | tr -d ' \n'", "r");
+       if (!f)
+               tst_brkm(TBROK, NULL,
+                     "Could not get info about HugePages_Free from /proc/meminfo");
+       if (!fgets(buf, 10, f)) {
+               fclose(f);
+               tst_brkm(TBROK, NULL,
+                     "Could not read HugePages_Free from /proc/meminfo");
+       }
+       pclose(f);
+       return(atoi(buf));
+ #else
+        return -1;
+ #endif
+}
 
 int hugepages_size() {
  #ifdef __linux__
@@ -60,14 +77,13 @@ int hugepages_size() {
        char buf[BUFSIZ];
 
        f = popen("grep 'Hugepagesize' /proc/meminfo | cut -d ':' -f2 | tr -d 'kB \n'", "r");
-       if (!f) {
-               tst_resm(TBROK, "Could not get info about HugePages_Size from /proc/meminfo");
-               tst_exit();
-       }
+       if (!f)
+               tst_brkm(TBROK, NULL,
+                     "Could not get info about HugePages_Size from /proc/meminfo");
        if (!fgets(buf, 10, f)) {
                fclose(f);
-               tst_resm(TBROK, "Could not read HugePages_Size from /proc/meminfo");
-               tst_exit();
+               tst_brkm(TBROK, NULL,
+                     "Could not read HugePages_Size from /proc/meminfo");
        }
        pclose(f);
        return(atoi(buf));


^ permalink raw reply related	[flat|nested] 4+ messages in thread

* [LTP] [PATCH 2/2] hugemmap: use common header files
  2011-04-21  7:13 ` [LTP] [PATCH 1/2] add get_no_of_free_hugepages function Caspar Zhang
@ 2011-04-21  7:13   ` Caspar Zhang
  0 siblings, 0 replies; 4+ messages in thread
From: Caspar Zhang @ 2011-04-21  7:13 UTC (permalink / raw)
  To: LTP List

[-- Attachment #1: Type: text/plain, Size: 601 bytes --]


There are some hugetlb-related functions in
$(topdir)/include/system_specific_hugepages_info.h. This patch makes the
hugemmap tests use this common header, so the code becomes simpler and
less duplicated.

Signed-off-by: Caspar Zhang <czhang@redhat.com>
---
 testcases/kernel/mem/hugetlb/hugemmap/hugemmap01.c |   65 +------------------
 testcases/kernel/mem/hugetlb/hugemmap/hugemmap02.c |   18 +++---
 testcases/kernel/mem/hugetlb/hugemmap/hugemmap03.c |   18 +++---
 testcases/kernel/mem/hugetlb/hugemmap/hugemmap04.c |   64 +------------------
 4 files changed, 26 insertions(+), 139 deletions(-)


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: 0002-hugemmap-use-common-header-files.patch --]
[-- Type: text/x-patch; name="0002-hugemmap-use-common-header-files.patch", Size: 9938 bytes --]

diff --git a/testcases/kernel/mem/hugetlb/hugemmap/hugemmap01.c b/testcases/kernel/mem/hugetlb/hugemmap/hugemmap01.c
index 4b2cd30..874f736 100644
--- a/testcases/kernel/mem/hugetlb/hugemmap/hugemmap01.c
+++ b/testcases/kernel/mem/hugetlb/hugemmap/hugemmap01.c
@@ -72,6 +72,7 @@
 
 #include "test.h"
 #include "usctest.h"
+#include "system_specific_hugepages_info.h"
 
 #define BUFFER_SIZE  256
 
@@ -87,8 +88,6 @@ int aftertest=0;		/* Amount of free huge pages after testing */
 int hugepagesmapped=0;		/* Amount of huge pages mapped after testing */
 
 void setup();			/* Main setup function of test */
-int getfreehugepages();		/* Reads free huge pages */
-int get_huge_pagesize();        /* Reads huge page size */
 void cleanup();			/* cleanup function for the test */
 
 void help()
@@ -135,10 +134,10 @@ main(int ac, char **av)
 		Tst_count=0;
 
 		/* Note the number of free huge pages BEFORE testing */
-		beforetest = getfreehugepages();
+		beforetest = get_no_of_free_hugepages();
 
 		/* Note the size of huge page size BEFORE testing */
-		page_sz = get_huge_pagesize();
+		page_sz = hugepages_size();
 
 		/*
 		 * Call mmap
@@ -160,7 +159,7 @@ main(int ac, char **av)
 		}
 
 		/* Make sure the number of free huge pages AFTER testing decreased */
-		aftertest = getfreehugepages();
+		aftertest = get_no_of_free_hugepages();
 		hugepagesmapped = beforetest - aftertest;
 		if (hugepagesmapped < 1) {
 			tst_resm(TWARN,"Number of HUGEPAGES_FREE stayed the same. Okay if");
@@ -205,62 +204,6 @@ setup()
 }
 
 /*
- * getfreehugepages() - Reads the number of free huge pages from /proc/meminfo
- */
-int
-getfreehugepages()
-{
-	int hugefree;
-	FILE* f;
-	int retcode=0;
-	char buff[BUFFER_SIZE];
-
-        f = fopen("/proc/meminfo", "r");
-	if (!f)
-     		tst_brkm(TFAIL, cleanup, "Could not open /proc/meminfo for reading");
-
-	while (fgets(buff,BUFFER_SIZE, f) != NULL) {
-		if ((retcode = sscanf(buff, "HugePages_Free: %d ", &hugefree)) == 1)
-	  		break;
-	}
-
-        if (retcode != 1) {
-        	fclose(f);
-       		tst_brkm(TFAIL, cleanup, "Failed reading number of huge pages free.");
-     	}
-	fclose(f);
-	return(hugefree);
-}
-
-/*
- * get_huge_pagesize() - Reads the size of huge page size from /proc/meminfo
- */
-int
-get_huge_pagesize()
-{
-        int hugesize;
-        FILE* f;
-        int retcode=0;
-        char buff[BUFFER_SIZE];
-
-        f = fopen("/proc/meminfo", "r");
-        if (!f)
-                tst_brkm(TFAIL, cleanup, "Could not open /proc/meminfo for reading");
-
-        while (fgets(buff,BUFFER_SIZE, f) != NULL) {
-                if ((retcode = sscanf(buff, "Hugepagesize: %d ", &hugesize)) == 1)
-                        break;
-        }
-
-        if (retcode != 1) {
-                fclose(f);
-                tst_brkm(TFAIL, cleanup, "Failed reading size of huge page.");
-        }
-        fclose(f);
-        return(hugesize);
-}
-
-/*
  * cleanup() - performs all ONE TIME cleanup for this test at
  *             completion or premature exit.
  * 	       Remove the temporary directory created.
diff --git a/testcases/kernel/mem/hugetlb/hugemmap/hugemmap02.c b/testcases/kernel/mem/hugetlb/hugemmap/hugemmap02.c
index ce4cdac..45cddf7 100644
--- a/testcases/kernel/mem/hugetlb/hugemmap/hugemmap02.c
+++ b/testcases/kernel/mem/hugetlb/hugemmap/hugemmap02.c
@@ -62,10 +62,8 @@
 
 #include "test.h"
 #include "usctest.h"
+#include "system_specific_hugepages_info.h"
 
-#define PAGE_SIZE      ((1UL) << 12) 	/* Normal page size */
-#define HPAGE_SIZE     ((1UL) << 24) 	/* Huge page size */
-#define MAP_SIZE       (2*HPAGE_SIZE) 	/* Huge map page size */
 #define LOW_ADDR       (void *)(0x80000000)
 #define LOW_ADDR2      (void *)(0x90000000)
 
@@ -95,6 +93,7 @@ main(int ac, char **av)
 	int lc;
 	char *msg;
         int Hflag = 0;
+	int page_sz, map_sz;
 
        	option_t options[] = {
         	{ "H:",   &Hflag, &Hopt },    /* Required for location of hugetlbfs */
@@ -110,6 +109,9 @@ main(int ac, char **av)
 		    "-H option is REQUIRED for this test, use -h for options help");
 	}
 
+	page_sz = getpagesize();
+	map_sz = 2 * 1024 * hugepages_size();
+
 	setup();
 
 	for (lc = 0; TEST_LOOPING(lc); lc++) {
@@ -138,14 +140,14 @@ main(int ac, char **av)
 
 		/* mmap using normal pages and a low memory address */
 		errno = 0;
-		addr = mmap(LOW_ADDR, PAGE_SIZE, PROT_READ,
+		addr = mmap(LOW_ADDR, page_sz, PROT_READ,
 			    MAP_SHARED | MAP_FIXED, nfildes, 0);
 		if (addr == MAP_FAILED)
 			tst_brkm(TBROK, cleanup,"mmap failed on nfildes");
 
 		/* Attempt to mmap a huge page into a low memory address */
 		errno = 0;
-		addr2 = mmap(LOW_ADDR2, MAP_SIZE, PROT_READ | PROT_WRITE,
+		addr2 = mmap(LOW_ADDR2, map_sz, PROT_READ | PROT_WRITE,
 			    MAP_SHARED, fildes, 0);
 
 #if __WORDSIZE == 64 /* 64-bit process */
@@ -177,11 +179,11 @@ main(int ac, char **av)
         	}
 
 #if __WORDSIZE == 64
-		if (munmap(addr2, MAP_SIZE) == -1) {
+		if (munmap(addr2, map_sz) == -1) {
 			tst_brkm(TFAIL|TERRNO, NULL, "huge munmap failed");
 		}
 #endif
-		if (munmap(addr, PAGE_SIZE) == -1) {
+		if (munmap(addr, page_sz) == -1) {
 			tst_brkm(TFAIL|TERRNO, NULL, "munmap failed");
 		}
 
@@ -231,4 +233,4 @@ cleanup()
 	unlink(TEMPFILE);
 
 	tst_rmdir();
-}
\ No newline at end of file
+}
diff --git a/testcases/kernel/mem/hugetlb/hugemmap/hugemmap03.c b/testcases/kernel/mem/hugetlb/hugemmap/hugemmap03.c
index c1479fe..cac94b0 100644
--- a/testcases/kernel/mem/hugetlb/hugemmap/hugemmap03.c
+++ b/testcases/kernel/mem/hugetlb/hugemmap/hugemmap03.c
@@ -53,8 +53,8 @@
 
 #include "test.h"
 #include "usctest.h"
+#include "system_specific_hugepages_info.h"
 
-#define PAGE_SIZE      ((1UL) << 12) 	/* Normal page size */
 #define HIGH_ADDR      (void *)(0x1000000000000)
 
 char* TEMPFILE="mmapfile";
@@ -76,15 +76,14 @@ void help()
 int
 main(int ac, char **av)
 {
-#if __WORDSIZE==32  /* 32-bit compiled */
-      	tst_resm(TCONF,"This test is only for 64bit");
-	tst_exit();
-
-       	return 1;
-#else	/* 64-bit compiled */
 	int lc;			/* loop counter */
 	char *msg;		/* message returned from parse_opts */
         int Hflag=0;              /* binary flag: opt or not */
+	int page_sz;
+
+#if __WORDSIZE==32  /* 32-bit compiled */
+	tst_brkm(TCONF, NULL, "This test is only for 64bit");
+#endif
 
        	option_t options[] = {
         	{ "H:",   &Hflag, &Hopt },    /* Required for location of hugetlbfs */
@@ -103,6 +102,8 @@ main(int ac, char **av)
 		tst_exit();
 	}
 
+	page_sz = getpagesize();
+
 	setup();
 
 	for (lc = 0; TEST_LOOPING(lc); lc++) {
@@ -118,7 +119,7 @@ main(int ac, char **av)
 
 		/* Attempt to mmap using normal pages and a high memory address */
 		errno = 0;
-		addr = mmap(HIGH_ADDR, PAGE_SIZE, PROT_READ,
+		addr = mmap(HIGH_ADDR, page_sz, PROT_READ,
 			    MAP_SHARED | MAP_FIXED, fildes, 0);
 		if (addr != MAP_FAILED) {
 			tst_resm(TFAIL, "Normal mmap() into high region unexpectedly succeeded on %s, errno=%d : %s",
@@ -135,7 +136,6 @@ main(int ac, char **av)
 	cleanup();
 
 	tst_exit();
-#endif
 }
 
 /*
diff --git a/testcases/kernel/mem/hugetlb/hugemmap/hugemmap04.c b/testcases/kernel/mem/hugetlb/hugemmap/hugemmap04.c
index b6c6988..4cc6ed4 100644
--- a/testcases/kernel/mem/hugetlb/hugemmap/hugemmap04.c
+++ b/testcases/kernel/mem/hugetlb/hugemmap/hugemmap04.c
@@ -90,8 +90,6 @@ int hugepagesmapped=0;		/* Amount of huge pages mapped after testing */
 char *Hopt;                     /* location of hugetlbfs */
 
 void setup();			/* Main setup function of test */
-int getfreehugepages();		/* Reads free huge pages */
-int get_huge_pagesize();        /* Reads huge page size */
 void cleanup();			/* cleanup function for the test */
 
 void help()
@@ -142,11 +140,11 @@ main(int ac, char **av)
 		Tst_count=0;
 
 		/* Note the number of free huge pages BEFORE testing */
-		freepages = getfreehugepages();
+		freepages = get_no_of_free_hugepages();
 		beforetest = freepages;
 
 		/* Note the size of huge page size BEFORE testing */
-		huge_pagesize = get_huge_pagesize();
+		huge_pagesize = hugepages_size();
 		tst_resm(TINFO,"Size of huge pages is %d KB",huge_pagesize);
 
 #if __WORDSIZE==32
@@ -176,7 +174,7 @@ main(int ac, char **av)
 		}
 
 		/* Make sure the number of free huge pages AFTER testing decreased */
-		aftertest = getfreehugepages();
+		aftertest = get_no_of_free_hugepages();
 		hugepagesmapped = beforetest - aftertest;
 		if (hugepagesmapped < 1) {
 			tst_resm(TWARN,"Number of HUGEPAGES_FREE stayed the same. Okay if");
@@ -221,62 +219,6 @@ setup()
 }
 
 /*
- * getfreehugepages() - Reads the number of free huge pages from /proc/meminfo
- */
-int
-getfreehugepages()
-{
-	int hugefree;
-	FILE* f;
-	int retcode=0;
-	char buff[BUFFER_SIZE];
-
-        f = fopen("/proc/meminfo", "r");
-	if (!f)
-     		tst_brkm(TFAIL, cleanup, "Could not open /proc/meminfo for reading");
-
-	while (fgets(buff,BUFFER_SIZE, f) != NULL) {
-		if ((retcode = sscanf(buff, "HugePages_Free: %d ", &hugefree)) == 1)
-			break;
-	}
-
-        if (retcode != 1) {
-        	fclose(f);
-       		tst_brkm(TFAIL, cleanup, "Failed reading number of huge pages free.");
-     	}
-	fclose(f);
-	return(hugefree);
-}
-
-/*
- * get_huge_pagesize() - Reads the size of huge page size from /proc/meminfo
-*/
-int
-get_huge_pagesize()
-{
-	int hugesize;
-	FILE* f;
-	int retcode=0;
-	char buff[BUFFER_SIZE];
-
-        f = fopen("/proc/meminfo", "r");
-	if (!f)
-     		tst_brkm(TFAIL, cleanup, "Could not open /proc/meminfo for reading");
-
-	while (fgets(buff,BUFFER_SIZE, f) != NULL) {
-		if ((retcode = sscanf(buff, "Hugepagesize: %d ", &hugesize)) == 1)
-			break;
-	}
-
-        if (retcode != 1) {
-        	fclose(f);
-       		tst_brkm(TFAIL, cleanup, "Failed reading size of huge page.");
-     	}
-	fclose(f);
-	return(hugesize);
-}
-
-/*
  * cleanup() - performs all ONE TIME cleanup for this test at
  *             completion or premature exit.
  * 	       Remove the temporary directory created.


^ permalink raw reply related	[flat|nested] 4+ messages in thread

* Re: [LTP] [PATCH 0/2] use system_specific_hugepages_info.h/.c
  2011-04-21  7:13 [LTP] [PATCH 0/2] use system_specific_hugepages_info.h/.c Caspar Zhang
  2011-04-21  7:13 ` [LTP] [PATCH 1/2] add get_no_of_free_hugepages function Caspar Zhang
@ 2011-04-22  3:26 ` Garrett Cooper
  1 sibling, 0 replies; 4+ messages in thread
From: Garrett Cooper @ 2011-04-22  3:26 UTC (permalink / raw)
  To: Caspar Zhang; +Cc: LTP List

On Thu, Apr 21, 2011 at 12:13 AM, Caspar Zhang <czhang@redhat.com> wrote:
>
> I found that some tests in hugemmap use
> $(topdir)/include/system_specific_hugepages_info.h while others do not.
> I also found some overlapping functions in the hugemmap tests, such as
> getting the huge page size or the number of free huge pages. I believe we
> could move all of these functions into system_specific_hugepages_info.c and
> make the code simpler and cleaner.

Committed -- thanks!
-Garrett


^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2011-04-22  3:27 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-04-21  7:13 [LTP] [PATCH 0/2] use system_specific_hugepages_info.h/.c Caspar Zhang
2011-04-21  7:13 ` [LTP] [PATCH 1/2] add get_no_of_free_hugepages function Caspar Zhang
2011-04-21  7:13   ` [LTP] [PATCH 2/2] hugemmap: use common header files Caspar Zhang
2011-04-22  3:26 ` [LTP] [PATCH 0/2] use system_specific_hugepages_info.h/.c Garrett Cooper

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox