* [LTP] [PATCH V2 1/3] Added ALIGN, __ALIGN_MASK macros
[not found] <52122141.3020005@oracle.com>
@ 2013-08-21 11:54 ` Stanislav Kholmanskikh
2013-08-21 11:54 ` [LTP] [PATCH V2 2/3] syscalls/migrate_pages: fix nodemask memory allocation Stanislav Kholmanskikh
` (4 subsequent siblings)
5 siblings, 0 replies; 18+ messages in thread
From: Stanislav Kholmanskikh @ 2013-08-21 11:54 UTC (permalink / raw)
To: ltp-list; +Cc: vasily.isaenko
Signed-off-by: Stanislav Kholmanskikh <stanislav.kholmanskikh@oracle.com>
---
include/test.h | 6 ++++++
1 files changed, 6 insertions(+), 0 deletions(-)
diff --git a/include/test.h b/include/test.h
index d9eba31..a517ff9 100644
--- a/include/test.h
+++ b/include/test.h
@@ -105,6 +105,12 @@
#define MAP_PRIVATE_EXCEPT_UCLINUX MAP_PRIVATE
#endif
+/* Round x up to the next multiple of a.
+ * a must be a power of 2.
+ */
+#define ALIGN(x, a) __ALIGN_MASK(x, (typeof(x))(a) - 1)
+#define __ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
+
/*
* lib/forker.c
*/
--
1.7.1
------------------------------------------------------------------------------
Introducing Performance Central, a new site from SourceForge and
AppDynamics. Performance Central is your source for news, insights,
analysis and resources for efficient Application Performance Management.
Visit us today!
http://pubads.g.doubleclick.net/gampad/clk?id=48897511&iu=/4140/ostg.clktrk
_______________________________________________
Ltp-list mailing list
Ltp-list@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ltp-list
* [LTP] [PATCH V2 2/3] syscalls/migrate_pages: fix nodemask memory allocation
[not found] <52122141.3020005@oracle.com>
2013-08-21 11:54 ` [LTP] [PATCH V2 1/3] Added ALIGN, __ALIGN_MASK macros Stanislav Kholmanskikh
@ 2013-08-21 11:54 ` Stanislav Kholmanskikh
2013-08-21 14:32 ` Jan Stancek
2013-08-21 11:54 ` [LTP] [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size Stanislav Kholmanskikh
` (3 subsequent siblings)
5 siblings, 1 reply; 18+ messages in thread
From: Stanislav Kholmanskikh @ 2013-08-21 11:54 UTC (permalink / raw)
To: ltp-list; +Cc: vasily.isaenko
According to the migrate_pages() and mbind() man pages, the bit mask size
should be rounded up to the next multiple of sizeof(unsigned long).
Signed-off-by: Stanislav Kholmanskikh <stanislav.kholmanskikh@oracle.com>
---
.../syscalls/migrate_pages/migrate_pages01.c | 2 +-
.../syscalls/migrate_pages/migrate_pages02.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/testcases/kernel/syscalls/migrate_pages/migrate_pages01.c b/testcases/kernel/syscalls/migrate_pages/migrate_pages01.c
index c23e8b0..6361e9f 100644
--- a/testcases/kernel/syscalls/migrate_pages/migrate_pages01.c
+++ b/testcases/kernel/syscalls/migrate_pages/migrate_pages01.c
@@ -248,7 +248,7 @@ static void setup(void)
ret);
sane_max_node = get_max_node();
- sane_nodemask_size = sane_max_node / 8 + 1;
+ sane_nodemask_size = ALIGN(sane_max_node, sizeof(unsigned long)*8) / 8;
sane_old_nodes = SAFE_MALLOC(NULL, sane_nodemask_size);
sane_new_nodes = SAFE_MALLOC(NULL, sane_nodemask_size);
memset(sane_old_nodes, 0, sane_nodemask_size);
diff --git a/testcases/kernel/syscalls/migrate_pages/migrate_pages02.c b/testcases/kernel/syscalls/migrate_pages/migrate_pages02.c
index 49129e0..5790abd 100644
--- a/testcases/kernel/syscalls/migrate_pages/migrate_pages02.c
+++ b/testcases/kernel/syscalls/migrate_pages/migrate_pages02.c
@@ -110,7 +110,7 @@ static int migrate_to_node(pid_t pid, int node)
tst_resm(TINFO, "pid(%d) migrate pid %d to node -> %d",
getpid(), pid, node);
max_node = get_max_node();
- nodemask_size = max_node / 8 + 1;
+ nodemask_size = ALIGN(max_node, sizeof(unsigned long)*8) / 8;
old_nodes = SAFE_MALLOC(NULL, nodemask_size);
new_nodes = SAFE_MALLOC(NULL, nodemask_size);
--
1.7.1
* [LTP] [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size
[not found] <52122141.3020005@oracle.com>
2013-08-21 11:54 ` [LTP] [PATCH V2 1/3] Added ALIGN, __ALIGN_MASK macros Stanislav Kholmanskikh
2013-08-21 11:54 ` [LTP] [PATCH V2 2/3] syscalls/migrate_pages: fix nodemask memory allocation Stanislav Kholmanskikh
@ 2013-08-21 11:54 ` Stanislav Kholmanskikh
2013-08-21 12:29 ` Jan Stancek
2013-08-22 7:41 ` [LTP] [PATCH V3 1/3] Added ALIGN, __ALIGN_MASK macros Stanislav Kholmanskikh
` (2 subsequent siblings)
5 siblings, 1 reply; 18+ messages in thread
From: Stanislav Kholmanskikh @ 2013-08-21 11:54 UTC (permalink / raw)
To: ltp-list; +Cc: vasily.isaenko
Now nodemask_size is rounded up to the next multiple
of sizeof(nodemask_t).
Signed-off-by: Stanislav Kholmanskikh <stanislav.kholmanskikh@oracle.com>
---
testcases/kernel/lib/numa_helper.c | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/testcases/kernel/lib/numa_helper.c b/testcases/kernel/lib/numa_helper.c
index 4157816..9151583 100644
--- a/testcases/kernel/lib/numa_helper.c
+++ b/testcases/kernel/lib/numa_helper.c
@@ -60,7 +60,7 @@ unsigned long get_max_node(void)
#if HAVE_NUMA_H
static void get_nodemask_allnodes(nodemask_t * nodemask, unsigned long max_node)
{
- unsigned long nodemask_size = max_node / 8 + 1;
+ unsigned long nodemask_size = ALIGN(max_node, sizeof(nodemask_t)*8) / 8;
int i;
char fn[64];
struct stat st;
@@ -76,7 +76,7 @@ static void get_nodemask_allnodes(nodemask_t * nodemask, unsigned long max_node)
static int filter_nodemask_mem(nodemask_t * nodemask, unsigned long max_node)
{
#if MPOL_F_MEMS_ALLOWED
- unsigned long nodemask_size = max_node / 8 + 1;
+ unsigned long nodemask_size = ALIGN(max_node, sizeof(nodemask_t)*8) / 8;
memset(nodemask, 0, nodemask_size);
/*
* avoid numa_get_mems_allowed(), because of bug in getpol()
@@ -165,7 +165,7 @@ int get_allowed_nodes_arr(int flag, int *num_nodes, int **nodes)
#if HAVE_NUMA_H
unsigned long max_node = get_max_node();
- unsigned long nodemask_size = max_node / 8 + 1;
+ unsigned long nodemask_size = ALIGN(max_node, sizeof(nodemask_t)*8) / 8;
nodemask = malloc(nodemask_size);
if (nodes)
--
1.7.1
* Re: [LTP] [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size
2013-08-21 11:54 ` [LTP] [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size Stanislav Kholmanskikh
@ 2013-08-21 12:29 ` Jan Stancek
2013-08-21 13:22 ` Stanislav Kholmanskikh
0 siblings, 1 reply; 18+ messages in thread
From: Jan Stancek @ 2013-08-21 12:29 UTC (permalink / raw)
To: Stanislav Kholmanskikh; +Cc: vasily isaenko, ltp-list
----- Original Message -----
> From: "Stanislav Kholmanskikh" <stanislav.kholmanskikh@oracle.com>
> To: ltp-list@lists.sourceforge.net
> Cc: "vasily isaenko" <vasily.isaenko@oracle.com>, jstancek@redhat.com
> Sent: Wednesday, 21 August, 2013 1:54:58 PM
> Subject: [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size
>
> Now nodemask_size is rounded up to the next multiple
> of sizeof(nodemask_t).
Hi,
Why multiple of nodemask_t? It can be quite large.
>
> Signed-off-by: Stanislav Kholmanskikh <stanislav.kholmanskikh@oracle.com>
> ---
> testcases/kernel/lib/numa_helper.c | 6 +++---
> 1 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/testcases/kernel/lib/numa_helper.c
> b/testcases/kernel/lib/numa_helper.c
> index 4157816..9151583 100644
> --- a/testcases/kernel/lib/numa_helper.c
> +++ b/testcases/kernel/lib/numa_helper.c
> @@ -60,7 +60,7 @@ unsigned long get_max_node(void)
> #if HAVE_NUMA_H
> static void get_nodemask_allnodes(nodemask_t * nodemask, unsigned long
> max_node)
> {
> - unsigned long nodemask_size = max_node / 8 + 1;
> + unsigned long nodemask_size = ALIGN(max_node, sizeof(nodemask_t)*8) / 8;
Because the mask is passed in as a parameter, we should respect max_node and
clear only up to the byte which holds max_node. So I think we should align
to the next byte only:
unsigned long nodemask_size = ALIGN(max_node, 8) / 8;
> int i;
> char fn[64];
> struct stat st;
> @@ -76,7 +76,7 @@ static void get_nodemask_allnodes(nodemask_t * nodemask,
> unsigned long max_node)
> static int filter_nodemask_mem(nodemask_t * nodemask, unsigned long
> max_node)
> {
> #if MPOL_F_MEMS_ALLOWED
> - unsigned long nodemask_size = max_node / 8 + 1;
> + unsigned long nodemask_size = ALIGN(max_node, sizeof(nodemask_t)*8) / 8;
Same as above:
unsigned long nodemask_size = ALIGN(max_node, 8) / 8;
> memset(nodemask, 0, nodemask_size);
> /*
> * avoid numa_get_mems_allowed(), because of bug in getpol()
> @@ -165,7 +165,7 @@ int get_allowed_nodes_arr(int flag, int *num_nodes, int
> **nodes)
>
> #if HAVE_NUMA_H
> unsigned long max_node = get_max_node();
> - unsigned long nodemask_size = max_node / 8 + 1;
> + unsigned long nodemask_size = ALIGN(max_node, sizeof(nodemask_t)*8) / 8;
This function allocates the nodemask, so we can align to as much as we need.
I'd expect this to be same as in migrate_pages, align to next long:
unsigned long nodemask_size = ALIGN(max_node / 8, sizeof(long));
Regards,
Jan
>
> nodemask = malloc(nodemask_size);
> if (nodes)
> --
> 1.7.1
>
>
* Re: [LTP] [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size
2013-08-21 12:29 ` Jan Stancek
@ 2013-08-21 13:22 ` Stanislav Kholmanskikh
2013-08-21 14:28 ` Jan Stancek
0 siblings, 1 reply; 18+ messages in thread
From: Stanislav Kholmanskikh @ 2013-08-21 13:22 UTC (permalink / raw)
To: Jan Stancek; +Cc: vasily isaenko, ltp-list
On 08/21/2013 04:29 PM, Jan Stancek wrote:
>
>
>
> ----- Original Message -----
>> From: "Stanislav Kholmanskikh" <stanislav.kholmanskikh@oracle.com>
>> To: ltp-list@lists.sourceforge.net
>> Cc: "vasily isaenko" <vasily.isaenko@oracle.com>, jstancek@redhat.com
>> Sent: Wednesday, 21 August, 2013 1:54:58 PM
>> Subject: [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size
>>
>> Now nodemask_size is rounded up to the next multiple
>> of sizeof(nodemask_t).
> Hi,
>
> Why multiple of nodemask_t? It can be quite large.
Hi.
Since nodemask is a pointer to nodemask_t, it should point to a memory
area whose size is a multiple of sizeof(nodemask_t).
Shouldn't it?
>
>> Signed-off-by: Stanislav Kholmanskikh <stanislav.kholmanskikh@oracle.com>
>> ---
>> testcases/kernel/lib/numa_helper.c | 6 +++---
>> 1 files changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/testcases/kernel/lib/numa_helper.c
>> b/testcases/kernel/lib/numa_helper.c
>> index 4157816..9151583 100644
>> --- a/testcases/kernel/lib/numa_helper.c
>> +++ b/testcases/kernel/lib/numa_helper.c
>> @@ -60,7 +60,7 @@ unsigned long get_max_node(void)
>> #if HAVE_NUMA_H
>> static void get_nodemask_allnodes(nodemask_t * nodemask, unsigned long
>> max_node)
>> {
>> - unsigned long nodemask_size = max_node / 8 + 1;
>> + unsigned long nodemask_size = ALIGN(max_node, sizeof(nodemask_t)*8) / 8;
> Because the mask is passed in as a parameter, we should respect max_node and
> clear only up to the byte which holds max_node. So I think we should align
> to the next byte only:
>
> unsigned long nodemask_size = ALIGN(max_node, 8) / 8;
I agree, but I'm not sure how the bytes comprising nodemask_t are handled.
If they are handled in an endianness-dependent way, then your approach will
work only on little-endian systems.
So I decided to clear the entire region. The same goes for filter_nodemask_mem.
>
>> int i;
>> char fn[64];
>> struct stat st;
>> @@ -76,7 +76,7 @@ static void get_nodemask_allnodes(nodemask_t * nodemask,
>> unsigned long max_node)
>> static int filter_nodemask_mem(nodemask_t * nodemask, unsigned long
>> max_node)
>> {
>> #if MPOL_F_MEMS_ALLOWED
>> - unsigned long nodemask_size = max_node / 8 + 1;
>> + unsigned long nodemask_size = ALIGN(max_node, sizeof(nodemask_t)*8) / 8;
> Same as above:
> unsigned long nodemask_size = ALIGN(max_node, 8) / 8;
>
>> memset(nodemask, 0, nodemask_size);
>> /*
>> * avoid numa_get_mems_allowed(), because of bug in getpol()
>> @@ -165,7 +165,7 @@ int get_allowed_nodes_arr(int flag, int *num_nodes, int
>> **nodes)
>>
>> #if HAVE_NUMA_H
>> unsigned long max_node = get_max_node();
>> - unsigned long nodemask_size = max_node / 8 + 1;
>> + unsigned long nodemask_size = ALIGN(max_node, sizeof(nodemask_t)*8) / 8;
> This function allocates the nodemask, so we can align to as much as we need.
> I'd expect this to be same as in migrate_pages, align to next long:
>
> unsigned long nodemask_size = ALIGN(max_node / 8, sizeof(long));
This formula may give incorrect results. For example, if max_node = 66
and sizeof(long) = 8, then
ALIGN(max_node / 8, sizeof(long)) yields 8 and we lose 2 bits.
The correct result is 16.
Since max_node is a number of bits, I think we should align it to a
sizeof(long)*8 boundary and only then divide the result by 8.
> Regards,
> Jan
>
>>
>> nodemask = malloc(nodemask_size);
>> if (nodes)
>> --
>> 1.7.1
>>
>>
* Re: [LTP] [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size
2013-08-21 13:22 ` Stanislav Kholmanskikh
@ 2013-08-21 14:28 ` Jan Stancek
2013-08-22 6:13 ` Stanislav Kholmanskikh
0 siblings, 1 reply; 18+ messages in thread
From: Jan Stancek @ 2013-08-21 14:28 UTC (permalink / raw)
To: Stanislav Kholmanskikh; +Cc: vasily isaenko, ltp-list
----- Original Message -----
> From: "Stanislav Kholmanskikh" <stanislav.kholmanskikh@oracle.com>
> To: "Jan Stancek" <jstancek@redhat.com>
> Cc: ltp-list@lists.sourceforge.net, "vasily isaenko" <vasily.isaenko@oracle.com>
> Sent: Wednesday, 21 August, 2013 3:22:04 PM
> Subject: Re: [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size
>
>
> On 08/21/2013 04:29 PM, Jan Stancek wrote:
> >
> >
> >
> > ----- Original Message -----
> >> From: "Stanislav Kholmanskikh" <stanislav.kholmanskikh@oracle.com>
> >> To: ltp-list@lists.sourceforge.net
> >> Cc: "vasily isaenko" <vasily.isaenko@oracle.com>, jstancek@redhat.com
> >> Sent: Wednesday, 21 August, 2013 1:54:58 PM
> >> Subject: [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size
> >>
> >> Now nodemask_size is rounded up to the next multiple
> >> of sizeof(nodemask_t).
> > Hi,
> >
> > Why multiple of nodemask_t? It can be quite large.
> Hi.
>
> Since nodemask is a pointer to nodemask_t, it should point to a memory
> area whose size is a multiple of sizeof(nodemask_t).
>
> Shouldn't it?
typedef struct {
unsigned long n[NUMA_NUM_NODES/(sizeof(unsigned long)*8)];
} nodemask_t;
It's used more like a trailing array in this case, because NUMA_NUM_NODES is not
always correct (I think it was versions < 2.0 that had this issue).
I kept the type so I can reuse some trivial functions from numa.h,
and the kernel gets the 'n' field directly, so it doesn't care about nodemask_t.
> >
> >> Signed-off-by: Stanislav Kholmanskikh <stanislav.kholmanskikh@oracle.com>
> >> ---
> >> testcases/kernel/lib/numa_helper.c | 6 +++---
> >> 1 files changed, 3 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/testcases/kernel/lib/numa_helper.c
> >> b/testcases/kernel/lib/numa_helper.c
> >> index 4157816..9151583 100644
> >> --- a/testcases/kernel/lib/numa_helper.c
> >> +++ b/testcases/kernel/lib/numa_helper.c
> >> @@ -60,7 +60,7 @@ unsigned long get_max_node(void)
> >> #if HAVE_NUMA_H
> >> static void get_nodemask_allnodes(nodemask_t * nodemask, unsigned long
> >> max_node)
> >> {
> >> - unsigned long nodemask_size = max_node / 8 + 1;
> >> + unsigned long nodemask_size = ALIGN(max_node, sizeof(nodemask_t)*8) / 8;
> > Because the mask is passed in as a parameter, we should respect max_node and
> > clear only up to the byte which holds max_node. So I think we should align
> > to the next byte only:
> >
> > unsigned long nodemask_size = ALIGN(max_node, 8) / 8;
> I agree, but I'm not sure how the bytes comprising nodemask_t are handled.
> If they are handled in an endianness-dependent way, then your approach will
> work only on little-endian systems.
>
> So I decided to clear the entire region. The same goes for filter_nodemask_mem.
>
> >
> >> int i;
> >> char fn[64];
> >> struct stat st;
> >> @@ -76,7 +76,7 @@ static void get_nodemask_allnodes(nodemask_t * nodemask,
> >> unsigned long max_node)
> >> static int filter_nodemask_mem(nodemask_t * nodemask, unsigned long
> >> max_node)
> >> {
> >> #if MPOL_F_MEMS_ALLOWED
> >> - unsigned long nodemask_size = max_node / 8 + 1;
> >> + unsigned long nodemask_size = ALIGN(max_node, sizeof(nodemask_t)*8) / 8;
> > Same as above:
> > unsigned long nodemask_size = ALIGN(max_node, 8) / 8;
> >
> >> memset(nodemask, 0, nodemask_size);
> >> /*
> >> * avoid numa_get_mems_allowed(), because of bug in getpol()
> >> @@ -165,7 +165,7 @@ int get_allowed_nodes_arr(int flag, int *num_nodes,
> >> int
> >> **nodes)
> >>
> >> #if HAVE_NUMA_H
> >> unsigned long max_node = get_max_node();
> >> - unsigned long nodemask_size = max_node / 8 + 1;
> >> + unsigned long nodemask_size = ALIGN(max_node, sizeof(nodemask_t)*8) / 8;
> > This function allocates the nodemask, so we can align to as much as we
> > need.
> > I'd expect this to be same as in migrate_pages, align to next long:
> >
> > unsigned long nodemask_size = ALIGN(max_node / 8, sizeof(long));
> This formula may give incorrect results. For example, if max_node = 66
> and sizeof(long) = 8, then
> ALIGN(max_node / 8, sizeof(long)) yields 8 and we lose 2 bits.
> The correct result is 16.
>
> Since max_node is a number of bits, I think we should align it to a
> sizeof(long)*8 boundary and only then divide the result by 8.
Agreed, we should align on bits then divide.
What if we align max_node? Then we can be sure that nodemask_size
in all functions is also aligned:
diff --git a/testcases/kernel/lib/numa_helper.c b/testcases/kernel/lib/numa_helper.c
index 4157816..a2b6b4a 100644
--- a/testcases/kernel/lib/numa_helper.c
+++ b/testcases/kernel/lib/numa_helper.c
@@ -60,7 +60,7 @@ unsigned long get_max_node(void)
#if HAVE_NUMA_H
static void get_nodemask_allnodes(nodemask_t * nodemask, unsigned long max_node)
{
- unsigned long nodemask_size = max_node / 8 + 1;
+ unsigned long nodemask_size = max_node / 8;
int i;
char fn[64];
struct stat st;
@@ -76,7 +76,7 @@ static void get_nodemask_allnodes(nodemask_t * nodemask, unsigned long max_node)
static int filter_nodemask_mem(nodemask_t * nodemask, unsigned long max_node)
{
#if MPOL_F_MEMS_ALLOWED
- unsigned long nodemask_size = max_node / 8 + 1;
+ unsigned long nodemask_size = max_node / 8;
memset(nodemask, 0, nodemask_size);
/*
* avoid numa_get_mems_allowed(), because of bug in getpol()
@@ -164,8 +164,8 @@ int get_allowed_nodes_arr(int flag, int *num_nodes, int **nodes)
*nodes = NULL;
#if HAVE_NUMA_H
- unsigned long max_node = get_max_node();
- unsigned long nodemask_size = max_node / 8 + 1;
+ unsigned long max_node = ALIGN(get_max_node(), sizeof(long)*8);
+ unsigned long nodemask_size = max_node / 8;
nodemask = malloc(nodemask_size);
if (nodes)
Regards,
Jan
* Re: [LTP] [PATCH V2 2/3] syscalls/migrate_pages: fix nodemask memory allocation
2013-08-21 11:54 ` [LTP] [PATCH V2 2/3] syscalls/migrate_pages: fix nodemask memory allocation Stanislav Kholmanskikh
@ 2013-08-21 14:32 ` Jan Stancek
0 siblings, 0 replies; 18+ messages in thread
From: Jan Stancek @ 2013-08-21 14:32 UTC (permalink / raw)
To: Stanislav Kholmanskikh; +Cc: vasily isaenko, ltp-list
----- Original Message -----
> From: "Stanislav Kholmanskikh" <stanislav.kholmanskikh@oracle.com>
> To: ltp-list@lists.sourceforge.net
> Cc: "vasily isaenko" <vasily.isaenko@oracle.com>, jstancek@redhat.com
> Sent: Wednesday, 21 August, 2013 1:54:57 PM
> Subject: [PATCH V2 2/3] syscalls/migrate_pages: fix nodemask memory allocation
>
> According to the migrate_pages() and mbind() man pages, the bit mask size
> should be rounded up to the next multiple of sizeof(unsigned long).
>
> Signed-off-by: Stanislav Kholmanskikh <stanislav.kholmanskikh@oracle.com>
As noted in 3/3, we need to align on bits; looks good to me.
Reviewed-by: Jan Stancek <jstancek@redhat.com>
> ---
> .../syscalls/migrate_pages/migrate_pages01.c | 2 +-
> .../syscalls/migrate_pages/migrate_pages02.c | 2 +-
> 2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/testcases/kernel/syscalls/migrate_pages/migrate_pages01.c
> b/testcases/kernel/syscalls/migrate_pages/migrate_pages01.c
> index c23e8b0..6361e9f 100644
> --- a/testcases/kernel/syscalls/migrate_pages/migrate_pages01.c
> +++ b/testcases/kernel/syscalls/migrate_pages/migrate_pages01.c
> @@ -248,7 +248,7 @@ static void setup(void)
> ret);
>
> sane_max_node = get_max_node();
> - sane_nodemask_size = sane_max_node / 8 + 1;
> + sane_nodemask_size = ALIGN(sane_max_node, sizeof(unsigned long)*8) / 8;
> sane_old_nodes = SAFE_MALLOC(NULL, sane_nodemask_size);
> sane_new_nodes = SAFE_MALLOC(NULL, sane_nodemask_size);
> memset(sane_old_nodes, 0, sane_nodemask_size);
> diff --git a/testcases/kernel/syscalls/migrate_pages/migrate_pages02.c
> b/testcases/kernel/syscalls/migrate_pages/migrate_pages02.c
> index 49129e0..5790abd 100644
> --- a/testcases/kernel/syscalls/migrate_pages/migrate_pages02.c
> +++ b/testcases/kernel/syscalls/migrate_pages/migrate_pages02.c
> @@ -110,7 +110,7 @@ static int migrate_to_node(pid_t pid, int node)
> tst_resm(TINFO, "pid(%d) migrate pid %d to node -> %d",
> getpid(), pid, node);
> max_node = get_max_node();
> - nodemask_size = max_node / 8 + 1;
> + nodemask_size = ALIGN(max_node, sizeof(unsigned long)*8) / 8;
> old_nodes = SAFE_MALLOC(NULL, nodemask_size);
> new_nodes = SAFE_MALLOC(NULL, nodemask_size);
>
> --
> 1.7.1
>
>
* Re: [LTP] [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size
2013-08-21 14:28 ` Jan Stancek
@ 2013-08-22 6:13 ` Stanislav Kholmanskikh
2013-08-22 7:01 ` Jan Stancek
0 siblings, 1 reply; 18+ messages in thread
From: Stanislav Kholmanskikh @ 2013-08-22 6:13 UTC (permalink / raw)
To: Jan Stancek; +Cc: vasily isaenko, ltp-list
On 08/21/2013 06:28 PM, Jan Stancek wrote:
>
> ----- Original Message -----
>> From: "Stanislav Kholmanskikh" <stanislav.kholmanskikh@oracle.com>
>> To: "Jan Stancek" <jstancek@redhat.com>
>> Cc: ltp-list@lists.sourceforge.net, "vasily isaenko" <vasily.isaenko@oracle.com>
>> Sent: Wednesday, 21 August, 2013 3:22:04 PM
>> Subject: Re: [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size
>>
>>
>> On 08/21/2013 04:29 PM, Jan Stancek wrote:
>>>
>>>
>>> ----- Original Message -----
>>>> From: "Stanislav Kholmanskikh" <stanislav.kholmanskikh@oracle.com>
>>>> To: ltp-list@lists.sourceforge.net
>>>> Cc: "vasily isaenko" <vasily.isaenko@oracle.com>, jstancek@redhat.com
>>>> Sent: Wednesday, 21 August, 2013 1:54:58 PM
>>>> Subject: [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size
>>>>
>>>> Now nodemask_size is rounded up to the next multiple
>>>> of sizeof(nodemask_t).
>>> Hi,
>>>
>>> Why multiple of nodemask_t? It can be quite large.
>> Hi.
>>
>> Since nodemask is a pointer to nodemask_t, it should point to a memory
>> area whose size is a multiple of sizeof(nodemask_t).
>>
>> Shouldn't it?
> typedef struct {
> unsigned long n[NUMA_NUM_NODES/(sizeof(unsigned long)*8)];
> } nodemask_t;
>
> It's used more like trailing array in this case, because NUMA_NUM_NODES is not
> always correct (I think it was version < 2.0 that had this issue).
>
> I kept the type so I can reuse some trivial functions from numa.h
> and kernel gets 'n' field directly so it doesn't care about nodemask_t.
Thank you. Now it's clear.
>>>> Signed-off-by: Stanislav Kholmanskikh <stanislav.kholmanskikh@oracle.com>
>>>> ---
>>>> testcases/kernel/lib/numa_helper.c | 6 +++---
>>>> 1 files changed, 3 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/testcases/kernel/lib/numa_helper.c
>>>> b/testcases/kernel/lib/numa_helper.c
>>>> index 4157816..9151583 100644
>>>> --- a/testcases/kernel/lib/numa_helper.c
>>>> +++ b/testcases/kernel/lib/numa_helper.c
>>>> @@ -60,7 +60,7 @@ unsigned long get_max_node(void)
>>>> #if HAVE_NUMA_H
>>>> static void get_nodemask_allnodes(nodemask_t * nodemask, unsigned long
>>>> max_node)
>>>> {
>>>> - unsigned long nodemask_size = max_node / 8 + 1;
>>>> + unsigned long nodemask_size = ALIGN(max_node, sizeof(nodemask_t)*8) / 8;
>>> Because the mask is passed in as a parameter, we should respect max_node and
>>> clear only up to the byte which holds max_node. So I think we should align
>>> to the next byte only:
>>>
>>> unsigned long nodemask_size = ALIGN(max_node, 8) / 8;
>> I agree, but I'm not sure how the bytes comprising nodemask_t are handled.
>> If they are handled in an endianness-dependent way, then your approach will
>> work only on little-endian systems.
>>
>> So I decided to clear the entire region. The same goes for filter_nodemask_mem.
Given the definitions of the _setbit()/_getbit() functions (from
numactl-2.0.8/libnuma.c) used by the nodemask_t (bitmask)
handling functions, here is _setbit():
static void
_setbit(struct bitmask *bmp, unsigned int n, unsigned int v)
{
	if (n < bmp->size) {
		if (v)
			bmp->maskp[n/bitsperlong] |= 1UL << (n % bitsperlong);
		else
			bmp->maskp[n/bitsperlong] &= ~(1UL << (n % bitsperlong));
	}
}
I think we cannot clear the area by zeroing only a few bytes of a long.
Every byte of a long should be zeroed.
>>
>>>> int i;
>>>> char fn[64];
>>>> struct stat st;
>>>> @@ -76,7 +76,7 @@ static void get_nodemask_allnodes(nodemask_t * nodemask,
>>>> unsigned long max_node)
>>>> static int filter_nodemask_mem(nodemask_t * nodemask, unsigned long
>>>> max_node)
>>>> {
>>>> #if MPOL_F_MEMS_ALLOWED
>>>> - unsigned long nodemask_size = max_node / 8 + 1;
>>>> + unsigned long nodemask_size = ALIGN(max_node, sizeof(nodemask_t)*8) / 8;
>>> Same as above:
>>> unsigned long nodemask_size = ALIGN(max_node, 8) / 8;
>>>
>>>> memset(nodemask, 0, nodemask_size);
>>>> /*
>>>> * avoid numa_get_mems_allowed(), because of bug in getpol()
>>>> @@ -165,7 +165,7 @@ int get_allowed_nodes_arr(int flag, int *num_nodes,
>>>> int
>>>> **nodes)
>>>>
>>>> #if HAVE_NUMA_H
>>>> unsigned long max_node = get_max_node();
>>>> - unsigned long nodemask_size = max_node / 8 + 1;
>>>> + unsigned long nodemask_size = ALIGN(max_node, sizeof(nodemask_t)*8) / 8;
>>> This function allocates the nodemask, so we can align to as much as we
>>> need.
>>> I'd expect this to be same as in migrate_pages, align to next long:
>>>
>>> unsigned long nodemask_size = ALIGN(max_node / 8, sizeof(long));
>> This formula may give incorrect results. For example, if max_node = 66
>> and sizeof(long) = 8, then
>> ALIGN(max_node / 8, sizeof(long)) yields 8 and we lose 2 bits.
>> The correct result is 16.
>>
>> Since max_node is a number of bits, I think we should align it to a
>> sizeof(long)*8 boundary and only then divide the result by 8.
> Agreed, we should align on bits then divide.
>
> What if we align max_node? Then we can be sure that nodemask_size
> in all functions is also aligned:
Ok. And for convenience, the same for the migrate_pages fix?
>
> diff --git a/testcases/kernel/lib/numa_helper.c b/testcases/kernel/lib/numa_helper.c
> index 4157816..a2b6b4a 100644
> --- a/testcases/kernel/lib/numa_helper.c
> +++ b/testcases/kernel/lib/numa_helper.c
> @@ -60,7 +60,7 @@ unsigned long get_max_node(void)
> #if HAVE_NUMA_H
> static void get_nodemask_allnodes(nodemask_t * nodemask, unsigned long max_node)
> {
> - unsigned long nodemask_size = max_node / 8 + 1;
> + unsigned long nodemask_size = max_node / 8;
> int i;
> char fn[64];
> struct stat st;
> @@ -76,7 +76,7 @@ static void get_nodemask_allnodes(nodemask_t * nodemask, unsigned long max_node)
> static int filter_nodemask_mem(nodemask_t * nodemask, unsigned long max_node)
> {
> #if MPOL_F_MEMS_ALLOWED
> - unsigned long nodemask_size = max_node / 8 + 1;
> + unsigned long nodemask_size = max_node / 8;
> memset(nodemask, 0, nodemask_size);
> /*
> * avoid numa_get_mems_allowed(), because of bug in getpol()
> @@ -164,8 +164,8 @@ int get_allowed_nodes_arr(int flag, int *num_nodes, int **nodes)
> *nodes = NULL;
>
> #if HAVE_NUMA_H
> - unsigned long max_node = get_max_node();
> - unsigned long nodemask_size = max_node / 8 + 1;
> + unsigned long max_node = ALIGN(get_max_node(), sizeof(long)*8);
> + unsigned long nodemask_size = max_node / 8;
>
> nodemask = malloc(nodemask_size);
> if (nodes)
>
> Regards,
> Jan
* Re: [LTP] [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size
2013-08-22 6:13 ` Stanislav Kholmanskikh
@ 2013-08-22 7:01 ` Jan Stancek
0 siblings, 0 replies; 18+ messages in thread
From: Jan Stancek @ 2013-08-22 7:01 UTC (permalink / raw)
To: Stanislav Kholmanskikh; +Cc: vasily isaenko, ltp-list
----- Original Message -----
> From: "Stanislav Kholmanskikh" <stanislav.kholmanskikh@oracle.com>
> To: "Jan Stancek" <jstancek@redhat.com>
> Cc: ltp-list@lists.sourceforge.net, "vasily isaenko" <vasily.isaenko@oracle.com>
> Sent: Thursday, 22 August, 2013 8:13:34 AM
> Subject: Re: [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size
<snip>
> I think we cannot clear the area by zeroing only a few bytes of a
> long. Each byte of a long should be zeroed.
Agreed, that was a good point.
> >> This formula may give incorrect results. For example, if max_node = 66
> >> and sizeof(long) = 8, then
> >> ALIGN(max_node / 8, sizeof(long)) will output 8 and we will lose 2 bits.
> >> The correct output should be 16.
> >>
> >> I think since max_node contains a number of bits, we should align it on
> >> a sizeof(long)*8 boundary and after that divide the final result by 8.
> > Agreed, we should align on bits then divide.
> >
> > What if we align max_node? Then we can be sure that nodemask_size
> > in all functions is also aligned:
> Ok. And for convenience the same for migrate_pages fix ?
I'll leave it up to you.
The man page says that "bits beyond those specified by maxnode are ignored",
so both ways should work equally well in those testcases.
Regards,
Jan
>
>
> >
> > diff --git a/testcases/kernel/lib/numa_helper.c
> > b/testcases/kernel/lib/numa_helper.c
> > index 4157816..a2b6b4a 100644
> > --- a/testcases/kernel/lib/numa_helper.c
> > +++ b/testcases/kernel/lib/numa_helper.c
> > @@ -60,7 +60,7 @@ unsigned long get_max_node(void)
> > #if HAVE_NUMA_H
> > static void get_nodemask_allnodes(nodemask_t * nodemask, unsigned long
> > max_node)
> > {
> > - unsigned long nodemask_size = max_node / 8 + 1;
> > + unsigned long nodemask_size = max_node / 8;
> > int i;
> > char fn[64];
> > struct stat st;
> > @@ -76,7 +76,7 @@ static void get_nodemask_allnodes(nodemask_t * nodemask,
> > unsigned long max_node)
> > static int filter_nodemask_mem(nodemask_t * nodemask, unsigned long
> > max_node)
> > {
> > #if MPOL_F_MEMS_ALLOWED
> > - unsigned long nodemask_size = max_node / 8 + 1;
> > + unsigned long nodemask_size = max_node / 8;
> > memset(nodemask, 0, nodemask_size);
> > /*
> > * avoid numa_get_mems_allowed(), because of bug in getpol()
> > @@ -164,8 +164,8 @@ int get_allowed_nodes_arr(int flag, int *num_nodes, int
> > **nodes)
> > *nodes = NULL;
> >
> > #if HAVE_NUMA_H
> > - unsigned long max_node = get_max_node();
> > - unsigned long nodemask_size = max_node / 8 + 1;
> > + unsigned long max_node = ALIGN(get_max_node(), sizeof(long)*8);
> > + unsigned long nodemask_size = max_node / 8;
> >
> > nodemask = malloc(nodemask_size);
> > if (nodes)
> >
> > Regards,
> > Jan
>
>
^ permalink raw reply [flat|nested] 18+ messages in thread
* [LTP] [PATCH V3 1/3] Added ALIGN, __ALIGN_MASK macroses
[not found] <52122141.3020005@oracle.com>
` (2 preceding siblings ...)
2013-08-21 11:54 ` [LTP] [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size Stanislav Kholmanskikh
@ 2013-08-22 7:41 ` Stanislav Kholmanskikh
2013-08-23 7:57 ` Jan Stancek
2013-08-22 7:41 ` [LTP] [PATCH V3 2/3] syscalls/migrate_pages: fix nodemask memory allocation Stanislav Kholmanskikh
2013-08-22 7:41 ` [LTP] [PATCH V3 3/3] lib/numa_helper.c: fix nodemask_size Stanislav Kholmanskikh
5 siblings, 1 reply; 18+ messages in thread
From: Stanislav Kholmanskikh @ 2013-08-22 7:41 UTC (permalink / raw)
To: ltp-list; +Cc: vasily.isaenko
Signed-off-by: Stanislav Kholmanskikh <stanislav.kholmanskikh@oracle.com>
---
include/test.h | 6 ++++++
1 files changed, 6 insertions(+), 0 deletions(-)
diff --git a/include/test.h b/include/test.h
index d9eba31..a517ff9 100644
--- a/include/test.h
+++ b/include/test.h
@@ -105,6 +105,12 @@
#define MAP_PRIVATE_EXCEPT_UCLINUX MAP_PRIVATE
#endif
+/* Round x to the next multiple of a.
+ * a should be a power of 2.
+ */
+#define ALIGN(x, a) __ALIGN_MASK(x, (typeof(x))(a) - 1)
+#define __ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
+
/*
* lib/forker.c
*/
--
1.7.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [LTP] [PATCH V3 2/3] syscalls/migrate_pages: fix nodemask memory allocation
[not found] <52122141.3020005@oracle.com>
` (3 preceding siblings ...)
2013-08-22 7:41 ` [LTP] [PATCH V3 1/3] Added ALIGN, __ALIGN_MASK macroses Stanislav Kholmanskikh
@ 2013-08-22 7:41 ` Stanislav Kholmanskikh
2013-08-27 12:04 ` chrubis
2013-08-22 7:41 ` [LTP] [PATCH V3 3/3] lib/numa_helper.c: fix nodemask_size Stanislav Kholmanskikh
5 siblings, 1 reply; 18+ messages in thread
From: Stanislav Kholmanskikh @ 2013-08-22 7:41 UTC (permalink / raw)
To: ltp-list; +Cc: vasily.isaenko
According to the migrate_pages() and mbind() man pages, the bit mask size
should be rounded up to the next multiple of sizeof(unsigned long).
Signed-off-by: Stanislav Kholmanskikh <stanislav.kholmanskikh@oracle.com>
---
.../syscalls/migrate_pages/migrate_pages01.c | 4 ++--
.../syscalls/migrate_pages/migrate_pages02.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/testcases/kernel/syscalls/migrate_pages/migrate_pages01.c b/testcases/kernel/syscalls/migrate_pages/migrate_pages01.c
index c23e8b0..9af4f14 100644
--- a/testcases/kernel/syscalls/migrate_pages/migrate_pages01.c
+++ b/testcases/kernel/syscalls/migrate_pages/migrate_pages01.c
@@ -247,8 +247,8 @@ static void setup(void)
tst_brkm(TBROK | TERRNO, NULL, "get_allowed_nodes_arr: %d",
ret);
- sane_max_node = get_max_node();
- sane_nodemask_size = sane_max_node / 8 + 1;
+ sane_max_node = ALIGN(get_max_node(), sizeof(unsigned long)*8);
+ sane_nodemask_size = sane_max_node / 8;
sane_old_nodes = SAFE_MALLOC(NULL, sane_nodemask_size);
sane_new_nodes = SAFE_MALLOC(NULL, sane_nodemask_size);
memset(sane_old_nodes, 0, sane_nodemask_size);
diff --git a/testcases/kernel/syscalls/migrate_pages/migrate_pages02.c b/testcases/kernel/syscalls/migrate_pages/migrate_pages02.c
index 49129e0..ba0e625 100644
--- a/testcases/kernel/syscalls/migrate_pages/migrate_pages02.c
+++ b/testcases/kernel/syscalls/migrate_pages/migrate_pages02.c
@@ -109,8 +109,8 @@ static int migrate_to_node(pid_t pid, int node)
tst_resm(TINFO, "pid(%d) migrate pid %d to node -> %d",
getpid(), pid, node);
- max_node = get_max_node();
- nodemask_size = max_node / 8 + 1;
+ max_node = ALIGN(get_max_node(), sizeof(unsigned long)*8);
+ nodemask_size = max_node / 8;
old_nodes = SAFE_MALLOC(NULL, nodemask_size);
new_nodes = SAFE_MALLOC(NULL, nodemask_size);
--
1.7.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [LTP] [PATCH V3 3/3] lib/numa_helper.c: fix nodemask_size
[not found] <52122141.3020005@oracle.com>
` (4 preceding siblings ...)
2013-08-22 7:41 ` [LTP] [PATCH V3 2/3] syscalls/migrate_pages: fix nodemask memory allocation Stanislav Kholmanskikh
@ 2013-08-22 7:41 ` Stanislav Kholmanskikh
5 siblings, 0 replies; 18+ messages in thread
From: Stanislav Kholmanskikh @ 2013-08-22 7:41 UTC (permalink / raw)
To: ltp-list; +Cc: vasily.isaenko
Now nodemask_size is rounded up to the next
multiple of sizeof(unsigned long).
Signed-off-by: Stanislav Kholmanskikh <stanislav.kholmanskikh@oracle.com>
---
testcases/kernel/lib/numa_helper.c | 8 ++++----
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/testcases/kernel/lib/numa_helper.c b/testcases/kernel/lib/numa_helper.c
index 4157816..ab294f0 100644
--- a/testcases/kernel/lib/numa_helper.c
+++ b/testcases/kernel/lib/numa_helper.c
@@ -60,7 +60,7 @@ unsigned long get_max_node(void)
#if HAVE_NUMA_H
static void get_nodemask_allnodes(nodemask_t * nodemask, unsigned long max_node)
{
- unsigned long nodemask_size = max_node / 8 + 1;
+ unsigned long nodemask_size = max_node / 8;
int i;
char fn[64];
struct stat st;
@@ -76,7 +76,7 @@ static void get_nodemask_allnodes(nodemask_t * nodemask, unsigned long max_node)
static int filter_nodemask_mem(nodemask_t * nodemask, unsigned long max_node)
{
#if MPOL_F_MEMS_ALLOWED
- unsigned long nodemask_size = max_node / 8 + 1;
+ unsigned long nodemask_size = max_node / 8;
memset(nodemask, 0, nodemask_size);
/*
* avoid numa_get_mems_allowed(), because of bug in getpol()
@@ -164,8 +164,8 @@ int get_allowed_nodes_arr(int flag, int *num_nodes, int **nodes)
*nodes = NULL;
#if HAVE_NUMA_H
- unsigned long max_node = get_max_node();
- unsigned long nodemask_size = max_node / 8 + 1;
+ unsigned long max_node = ALIGN(get_max_node(), sizeof(unsigned long)*8);
+ unsigned long nodemask_size = max_node / 8;
nodemask = malloc(nodemask_size);
if (nodes)
--
1.7.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [LTP] [PATCH V3 1/3] Added ALIGN, __ALIGN_MASK macroses
2013-08-22 7:41 ` [LTP] [PATCH V3 1/3] Added ALIGN, __ALIGN_MASK macroses Stanislav Kholmanskikh
@ 2013-08-23 7:57 ` Jan Stancek
2013-08-23 8:26 ` [LTP] [PATCH V3.1 1/3] Added ALIGN, __ALIGN_MASK macros Stanislav Kholmanskikh
2013-08-23 8:28 ` [LTP] [PATCH V3 1/3] Added ALIGN, __ALIGN_MASK macroses Stanislav Kholmanskikh
0 siblings, 2 replies; 18+ messages in thread
From: Jan Stancek @ 2013-08-23 7:57 UTC (permalink / raw)
To: Stanislav Kholmanskikh; +Cc: vasily isaenko, ltp-list
----- Original Message -----
> From: "Stanislav Kholmanskikh" <stanislav.kholmanskikh@oracle.com>
> To: ltp-list@lists.sourceforge.net
> Cc: "vasily isaenko" <vasily.isaenko@oracle.com>, jstancek@redhat.com
> Sent: Thursday, 22 August, 2013 9:41:47 AM
> Subject: [PATCH V3 1/3] Added ALIGN, __ALIGN_MASK macroses
>
> Signed-off-by: Stanislav Kholmanskikh <stanislav.kholmanskikh@oracle.com>
The series looks good to me. I also ran it on small NUMA system, it worked fine.
My only nit is this patch's subject, I believe plural of macro is macros :-).
Reviewed-by: Jan Stancek <jstancek@redhat.com>
> ---
> include/test.h | 6 ++++++
> 1 files changed, 6 insertions(+), 0 deletions(-)
>
> diff --git a/include/test.h b/include/test.h
> index d9eba31..a517ff9 100644
> --- a/include/test.h
> +++ b/include/test.h
> @@ -105,6 +105,12 @@
> #define MAP_PRIVATE_EXCEPT_UCLINUX MAP_PRIVATE
> #endif
>
> +/* Round x to the next multiple of a.
> + * a should be a power of 2.
> + */
> +#define ALIGN(x, a) __ALIGN_MASK(x, (typeof(x))(a) - 1)
> +#define __ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
> +
> /*
> * lib/forker.c
> */
> --
> 1.7.1
>
>
^ permalink raw reply [flat|nested] 18+ messages in thread
* [LTP] [PATCH V3.1 1/3] Added ALIGN, __ALIGN_MASK macros
2013-08-23 7:57 ` Jan Stancek
@ 2013-08-23 8:26 ` Stanislav Kholmanskikh
2013-08-27 12:01 ` chrubis
2013-08-23 8:28 ` [LTP] [PATCH V3 1/3] Added ALIGN, __ALIGN_MASK macroses Stanislav Kholmanskikh
1 sibling, 1 reply; 18+ messages in thread
From: Stanislav Kholmanskikh @ 2013-08-23 8:26 UTC (permalink / raw)
To: ltp-list; +Cc: vasily.isaenko
Signed-off-by: Stanislav Kholmanskikh <stanislav.kholmanskikh@oracle.com>
---
include/test.h | 6 ++++++
1 files changed, 6 insertions(+), 0 deletions(-)
diff --git a/include/test.h b/include/test.h
index d9eba31..a517ff9 100644
--- a/include/test.h
+++ b/include/test.h
@@ -105,6 +105,12 @@
#define MAP_PRIVATE_EXCEPT_UCLINUX MAP_PRIVATE
#endif
+/* Round x to the next multiple of a.
+ * a should be a power of 2.
+ */
+#define ALIGN(x, a) __ALIGN_MASK(x, (typeof(x))(a) - 1)
+#define __ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
+
/*
* lib/forker.c
*/
--
1.7.1
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [LTP] [PATCH V3 1/3] Added ALIGN, __ALIGN_MASK macroses
2013-08-23 7:57 ` Jan Stancek
2013-08-23 8:26 ` [LTP] [PATCH V3.1 1/3] Added ALIGN, __ALIGN_MASK macros Stanislav Kholmanskikh
@ 2013-08-23 8:28 ` Stanislav Kholmanskikh
1 sibling, 0 replies; 18+ messages in thread
From: Stanislav Kholmanskikh @ 2013-08-23 8:28 UTC (permalink / raw)
To: Jan Stancek; +Cc: vasily isaenko, ltp-list
On 08/23/2013 11:57 AM, Jan Stancek wrote:
>
> ----- Original Message -----
>> From: "Stanislav Kholmanskikh" <stanislav.kholmanskikh@oracle.com>
>> To: ltp-list@lists.sourceforge.net
>> Cc: "vasily isaenko" <vasily.isaenko@oracle.com>, jstancek@redhat.com
>> Sent: Thursday, 22 August, 2013 9:41:47 AM
>> Subject: [PATCH V3 1/3] Added ALIGN, __ALIGN_MASK macroses
>>
>> Signed-off-by: Stanislav Kholmanskikh <stanislav.kholmanskikh@oracle.com>
> The series looks good to me. I also ran it on small NUMA system, it worked fine.
> My only nit is this patch's subject, I believe plural of macro is macros :-).
Shame on me :) Sent V3.1 of this patch.
>
> Reviewed-by: Jan Stancek <jstancek@redhat.com>
>
>> ---
>> include/test.h | 6 ++++++
>> 1 files changed, 6 insertions(+), 0 deletions(-)
>>
>> diff --git a/include/test.h b/include/test.h
>> index d9eba31..a517ff9 100644
>> --- a/include/test.h
>> +++ b/include/test.h
>> @@ -105,6 +105,12 @@
>> #define MAP_PRIVATE_EXCEPT_UCLINUX MAP_PRIVATE
>> #endif
>>
>> +/* Round x to the next multiple of a.
>> + * a should be a power of 2.
>> + */
>> +#define ALIGN(x, a) __ALIGN_MASK(x, (typeof(x))(a) - 1)
>> +#define __ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
>> +
>> /*
>> * lib/forker.c
>> */
>> --
>> 1.7.1
>>
>>
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [LTP] [PATCH V3.1 1/3] Added ALIGN, __ALIGN_MASK macros
2013-08-23 8:26 ` [LTP] [PATCH V3.1 1/3] Added ALIGN, __ALIGN_MASK macros Stanislav Kholmanskikh
@ 2013-08-27 12:01 ` chrubis
0 siblings, 0 replies; 18+ messages in thread
From: chrubis @ 2013-08-27 12:01 UTC (permalink / raw)
To: Stanislav Kholmanskikh; +Cc: vasily.isaenko, ltp-list
Hi!
> Signed-off-by: Stanislav Kholmanskikh <stanislav.kholmanskikh@oracle.com>
> ---
> include/test.h | 6 ++++++
> 1 files changed, 6 insertions(+), 0 deletions(-)
>
> diff --git a/include/test.h b/include/test.h
> index d9eba31..a517ff9 100644
> --- a/include/test.h
> +++ b/include/test.h
> @@ -105,6 +105,12 @@
> #define MAP_PRIVATE_EXCEPT_UCLINUX MAP_PRIVATE
> #endif
>
> +/* Round x to the next multiple of a.
> + * a should be a power of 2.
> + */
> +#define ALIGN(x, a) __ALIGN_MASK(x, (typeof(x))(a) - 1)
> +#define __ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
> +
Generally looks good to me. I'm a bit concerned that macros named
ALIGN() and __ALIGN_MASK() may eventually collide with some system
definition. What about prefixing them with LTP_?
Also, test.h is a bit crowded; what about adding them to compiler.h
instead?
--
Cyril Hrubis
chrubis@suse.cz
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [LTP] [PATCH V3 2/3] syscalls/migrate_pages: fix nodemask memory allocation
2013-08-22 7:41 ` [LTP] [PATCH V3 2/3] syscalls/migrate_pages: fix nodemask memory allocation Stanislav Kholmanskikh
@ 2013-08-27 12:04 ` chrubis
[not found] ` <1377607792-5731-1-git-send-email-stanislav.kholmanskikh@oracle.com>
0 siblings, 1 reply; 18+ messages in thread
From: chrubis @ 2013-08-27 12:04 UTC (permalink / raw)
To: Stanislav Kholmanskikh; +Cc: vasily.isaenko, ltp-list
Hi!
> According to the migrate_pages() and mbind() man pages, the bit mask size
> should be rounded up to the next multiple of sizeof(unsigned long).
>
> Signed-off-by: Stanislav Kholmanskikh <stanislav.kholmanskikh@oracle.com>
The 2/3 and 3/3 looks good. Acked.
--
Cyril Hrubis
chrubis@suse.cz
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [LTP] [PATCH V4] syscalls/migrate_pages: fix nodemask_memory_allocation
[not found] ` <1377607792-5731-1-git-send-email-stanislav.kholmanskikh@oracle.com>
@ 2013-08-28 13:43 ` chrubis
0 siblings, 0 replies; 18+ messages in thread
From: chrubis @ 2013-08-28 13:43 UTC (permalink / raw)
To: Stanislav Kholmanskikh; +Cc: vasily.isaenko, ltp-list
Hi!
> Changes since V3:
> * Macros ALIGN, __ALIGN_MASK renamed to LTP_ALIGN, __LTP_ALIGN_MASK
> * Both macros are placed in compiler.h (instead of test.h)
Pushed, thanks.
--
Cyril Hrubis
chrubis@suse.cz
^ permalink raw reply [flat|nested] 18+ messages in thread
end of thread, other threads:[~2013-08-28 13:42 UTC | newest]
Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <52122141.3020005@oracle.com>
2013-08-21 11:54 ` [LTP] [PATCH V2 1/3] Added ALIGN, __ALIGN_MASK macroses Stanislav Kholmanskikh
2013-08-21 11:54 ` [LTP] [PATCH V2 2/3] syscalls/migrate_pages: fix nodemask memory allocation Stanislav Kholmanskikh
2013-08-21 14:32 ` Jan Stancek
2013-08-21 11:54 ` [LTP] [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size Stanislav Kholmanskikh
2013-08-21 12:29 ` Jan Stancek
2013-08-21 13:22 ` Stanislav Kholmanskikh
2013-08-21 14:28 ` Jan Stancek
2013-08-22 6:13 ` Stanislav Kholmanskikh
2013-08-22 7:01 ` Jan Stancek
2013-08-22 7:41 ` [LTP] [PATCH V3 1/3] Added ALIGN, __ALIGN_MASK macroses Stanislav Kholmanskikh
2013-08-23 7:57 ` Jan Stancek
2013-08-23 8:26 ` [LTP] [PATCH V3.1 1/3] Added ALIGN, __ALIGN_MASK macros Stanislav Kholmanskikh
2013-08-27 12:01 ` chrubis
2013-08-23 8:28 ` [LTP] [PATCH V3 1/3] Added ALIGN, __ALIGN_MASK macroses Stanislav Kholmanskikh
2013-08-22 7:41 ` [LTP] [PATCH V3 2/3] syscalls/migrate_pages: fix nodemask memory allocation Stanislav Kholmanskikh
2013-08-27 12:04 ` chrubis
[not found] ` <1377607792-5731-1-git-send-email-stanislav.kholmanskikh@oracle.com>
2013-08-28 13:43 ` [LTP] [PATCH V4] syscalls/migrate_pages: fix nodemask_memory_allocation chrubis
2013-08-22 7:41 ` [LTP] [PATCH V3 3/3] lib/numa_helper.c: fix nodemask_size Stanislav Kholmanskikh