From: Stanislav Kholmanskikh <stanislav.kholmanskikh@oracle.com>
To: Jan Stancek <jstancek@redhat.com>
Cc: vasily isaenko <vasily.isaenko@oracle.com>,
ltp-list@lists.sourceforge.net
Subject: Re: [LTP] [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size
Date: Thu, 22 Aug 2013 10:13:34 +0400 [thread overview]
Message-ID: <5215AC0E.6000205@oracle.com> (raw)
In-Reply-To: <1748281355.1930512.1377095285040.JavaMail.root@redhat.com>
On 08/21/2013 06:28 PM, Jan Stancek wrote:
>
> ----- Original Message -----
>> From: "Stanislav Kholmanskikh" <stanislav.kholmanskikh@oracle.com>
>> To: "Jan Stancek" <jstancek@redhat.com>
>> Cc: ltp-list@lists.sourceforge.net, "vasily isaenko" <vasily.isaenko@oracle.com>
>> Sent: Wednesday, 21 August, 2013 3:22:04 PM
>> Subject: Re: [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size
>>
>>
>> On 08/21/2013 04:29 PM, Jan Stancek wrote:
>>>
>>>
>>> ----- Original Message -----
>>>> From: "Stanislav Kholmanskikh" <stanislav.kholmanskikh@oracle.com>
>>>> To: ltp-list@lists.sourceforge.net
>>>> Cc: "vasily isaenko" <vasily.isaenko@oracle.com>, jstancek@redhat.com
>>>> Sent: Wednesday, 21 August, 2013 1:54:58 PM
>>>> Subject: [PATCH V2 3/3] lib/numa_helper.c: fix nodemask_size
>>>>
>>>> Now nodemask_size is rounded up to the next multiple
>>>> of sizeof(nodemask_t).
>>> Hi,
>>>
>>> Why multiple of nodemask_t? It can be quite large.
>> Hi.
>>
>> Since nodemask is a pointer to nodemask_t, shouldn't it point to a
>> memory area whose size is a multiple of sizeof(nodemask_t)?
> typedef struct {
>         unsigned long n[NUMA_NUM_NODES/(sizeof(unsigned long)*8)];
> } nodemask_t;
>
> It's used more like a trailing array in this case, because NUMA_NUM_NODES
> is not always correct (I think it was version < 2.0 that had this issue).
>
> I kept the type so I can reuse some trivial functions from numa.h,
> and the kernel gets the 'n' field directly, so it doesn't care about
> nodemask_t.
Thank you. Now it's clear.
>>>> Signed-off-by: Stanislav Kholmanskikh <stanislav.kholmanskikh@oracle.com>
>>>> ---
>>>> testcases/kernel/lib/numa_helper.c | 6 +++---
>>>> 1 files changed, 3 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/testcases/kernel/lib/numa_helper.c
>>>> b/testcases/kernel/lib/numa_helper.c
>>>> index 4157816..9151583 100644
>>>> --- a/testcases/kernel/lib/numa_helper.c
>>>> +++ b/testcases/kernel/lib/numa_helper.c
>>>> @@ -60,7 +60,7 @@ unsigned long get_max_node(void)
>>>> #if HAVE_NUMA_H
>>>> static void get_nodemask_allnodes(nodemask_t * nodemask, unsigned long
>>>> max_node)
>>>> {
>>>> - unsigned long nodemask_size = max_node / 8 + 1;
>>>> + unsigned long nodemask_size = ALIGN(max_node, sizeof(nodemask_t)*8) / 8;
>>> Because mask is passed as parameter, we should respect max_node and
>>> clear only up to byte which holds max_node. So I think we should align
>>> to next byte only:
>>>
>>> unsigned long nodemask_size = ALIGN(max_node, 8) / 8;
>> I agree, but I'm not sure how the bytes comprising nodemask_t are handled.
>> If they are handled in an endianness-dependent way, then your approach
>> will work only on little-endian systems.
>>
>> So I decided to clear the entire region. The same goes for
>> filter_nodemask_mem.
Given the definition of the _setbit()/_getbit() functions (from
numactl-2.0.8/libnuma.c) used by the nodemask_t (bitmask) handling
functions:

static void
_setbit(struct bitmask *bmp, unsigned int n, unsigned int v)
{
        if (n < bmp->size) {
                if (v)
                        bmp->maskp[n/bitsperlong] |= 1UL << (n % bitsperlong);
                else
                        bmp->maskp[n/bitsperlong] &= ~(1UL << (n % bitsperlong));
        }
}

I think we cannot clear the area by zeroing only some of the bytes of a
long; each long should be zeroed in full.
>>
>>>> int i;
>>>> char fn[64];
>>>> struct stat st;
>>>> @@ -76,7 +76,7 @@ static void get_nodemask_allnodes(nodemask_t * nodemask,
>>>> unsigned long max_node)
>>>> static int filter_nodemask_mem(nodemask_t * nodemask, unsigned long
>>>> max_node)
>>>> {
>>>> #if MPOL_F_MEMS_ALLOWED
>>>> - unsigned long nodemask_size = max_node / 8 + 1;
>>>> + unsigned long nodemask_size = ALIGN(max_node, sizeof(nodemask_t)*8) / 8;
>>> Same as above:
>>> unsigned long nodemask_size = ALIGN(max_node, 8) / 8;
>>>
>>>> memset(nodemask, 0, nodemask_size);
>>>> /*
>>>> * avoid numa_get_mems_allowed(), because of bug in getpol()
>>>> @@ -165,7 +165,7 @@ int get_allowed_nodes_arr(int flag, int *num_nodes,
>>>> int
>>>> **nodes)
>>>>
>>>> #if HAVE_NUMA_H
>>>> unsigned long max_node = get_max_node();
>>>> - unsigned long nodemask_size = max_node / 8 + 1;
>>>> + unsigned long nodemask_size = ALIGN(max_node, sizeof(nodemask_t)*8) / 8;
>>> This function allocates the nodemask, so we can align to as much as we
>>> need.
>>> I'd expect this to be same as in migrate_pages, align to next long:
>>>
>>> unsigned long nodemask_size = ALIGN(max_node / 8, sizeof(long));
>> This formula may give incorrect results. For example, if max_node = 66
>> and sizeof(long) = 8, then ALIGN(max_node / 8, sizeof(long)) yields 8,
>> and we lose 2 bits. The correct result is 16.
>>
>> Since max_node is a number of bits, I think we should align it to a
>> sizeof(long)*8 boundary and only then divide the result by 8.
> Agreed, we should align on bits then divide.
>
> What if we align max_node? Then we can be sure that nodemask_size
> in all functions is also aligned:
OK. And, for consistency, should we do the same for the migrate_pages fix?
>
> diff --git a/testcases/kernel/lib/numa_helper.c b/testcases/kernel/lib/numa_helper.c
> index 4157816..a2b6b4a 100644
> --- a/testcases/kernel/lib/numa_helper.c
> +++ b/testcases/kernel/lib/numa_helper.c
> @@ -60,7 +60,7 @@ unsigned long get_max_node(void)
> #if HAVE_NUMA_H
> static void get_nodemask_allnodes(nodemask_t * nodemask, unsigned long max_node)
> {
> - unsigned long nodemask_size = max_node / 8 + 1;
> + unsigned long nodemask_size = max_node / 8;
> int i;
> char fn[64];
> struct stat st;
> @@ -76,7 +76,7 @@ static void get_nodemask_allnodes(nodemask_t * nodemask, unsigned long max_node)
> static int filter_nodemask_mem(nodemask_t * nodemask, unsigned long max_node)
> {
> #if MPOL_F_MEMS_ALLOWED
> - unsigned long nodemask_size = max_node / 8 + 1;
> + unsigned long nodemask_size = max_node / 8;
> memset(nodemask, 0, nodemask_size);
> /*
> * avoid numa_get_mems_allowed(), because of bug in getpol()
> @@ -164,8 +164,8 @@ int get_allowed_nodes_arr(int flag, int *num_nodes, int **nodes)
> *nodes = NULL;
>
> #if HAVE_NUMA_H
> - unsigned long max_node = get_max_node();
> - unsigned long nodemask_size = max_node / 8 + 1;
> + unsigned long max_node = ALIGN(get_max_node(), sizeof(long)*8);
> + unsigned long nodemask_size = max_node / 8;
>
> nodemask = malloc(nodemask_size);
> if (nodes)
>
> Regards,
> Jan