* [PATCH] tests/functional/x86_64: Accept a few locked pages in test_memlock.py
@ 2025-09-15 18:55 Richard Henderson
2025-09-15 20:16 ` Thomas Huth
2025-09-16 1:38 ` Richard Henderson
0 siblings, 2 replies; 6+ messages in thread
From: Richard Henderson @ 2025-09-15 18:55 UTC (permalink / raw)
To: qemu-devel; +Cc: thuth
Startup of libgcrypt locks a small pool of pages -- by default 16k.
Testing for zero locked pages isn't correct, while testing for
32k is a decent compromise.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tests/functional/x86_64/test_memlock.py | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/tests/functional/x86_64/test_memlock.py b/tests/functional/x86_64/test_memlock.py
index 2b515ff979..81bce80b0c 100755
--- a/tests/functional/x86_64/test_memlock.py
+++ b/tests/functional/x86_64/test_memlock.py
@@ -37,7 +37,8 @@ def test_memlock_off(self):
status = self.get_process_status_values(self.vm.get_pid())
- self.assertTrue(status['VmLck'] == 0)
+ # libgcrypt may mlock a few pages
+ self.assertTrue(status['VmLck'] < 32)
def test_memlock_on(self):
self.common_vm_setup_with_memlock('on')
--
2.43.0
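For reference, the kind of parsing that get_process_status_values performs can be sketched as follows. This is an assumed, minimal reimplementation for illustration only; the real helper lives in QEMU's functional-test framework, and the sample text stands in for a real /proc/&lt;pid&gt;/status file:

```python
def parse_status_values(text):
    """Parse '/proc/<pid>/status'-style lines ('VmLck:  16 kB') into
    a dict of ints; values are in kB, non-numeric fields are skipped."""
    values = {}
    for line in text.splitlines():
        if ':' not in line:
            continue
        key, _, rest = line.partition(':')
        fields = rest.split()
        if fields and fields[0].isdigit():
            values[key.strip()] = int(fields[0])
    return values

# Illustrative sample of the relevant status fields
sample = """\
Name:\tqemu-system-x86
VmLck:\t      16 kB
VmPin:\t       0 kB
"""
status = parse_status_values(sample)
# libgcrypt may mlock a small pool (16 kB by default),
# so the test allows anything under 32 kB
assert status['VmLck'] < 32
```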
^ permalink raw reply related [flat|nested] 6+ messages in thread
* Re: [PATCH] tests/functional/x86_64: Accept a few locked pages in test_memlock.py
2025-09-15 18:55 [PATCH] tests/functional/x86_64: Accept a few locked pages in test_memlock.py Richard Henderson
@ 2025-09-15 20:16 ` Thomas Huth
2025-09-16 1:38 ` Richard Henderson
1 sibling, 0 replies; 6+ messages in thread
From: Thomas Huth @ 2025-09-15 20:16 UTC (permalink / raw)
To: Richard Henderson, qemu-devel
On 15/09/2025 20.55, Richard Henderson wrote:
> Startup of libgcrypt locks a small pool of pages -- by default 16k.
> Testing for zero locked pages isn't correct, while testing for
> 32k is a decent compromise.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> tests/functional/x86_64/test_memlock.py | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/tests/functional/x86_64/test_memlock.py b/tests/functional/x86_64/test_memlock.py
> index 2b515ff979..81bce80b0c 100755
> --- a/tests/functional/x86_64/test_memlock.py
> +++ b/tests/functional/x86_64/test_memlock.py
> @@ -37,7 +37,8 @@ def test_memlock_off(self):
>
> status = self.get_process_status_values(self.vm.get_pid())
>
> - self.assertTrue(status['VmLck'] == 0)
> + # libgcrypt may mlock a few pages
> + self.assertTrue(status['VmLck'] < 32)
>
> def test_memlock_on(self):
> self.common_vm_setup_with_memlock('on')
Reviewed-by: Thomas Huth <thuth@redhat.com>
* Re: [PATCH] tests/functional/x86_64: Accept a few locked pages in test_memlock.py
2025-09-15 18:55 [PATCH] tests/functional/x86_64: Accept a few locked pages in test_memlock.py Richard Henderson
2025-09-15 20:16 ` Thomas Huth
@ 2025-09-16 1:38 ` Richard Henderson
2025-09-16 5:18 ` Thomas Huth
1 sibling, 1 reply; 6+ messages in thread
From: Richard Henderson @ 2025-09-16 1:38 UTC (permalink / raw)
To: qemu-devel; +Cc: thuth
On 9/15/25 11:55, Richard Henderson wrote:
> Startup of libgcrypt locks a small pool of pages -- by default 16k.
> Testing for zero locked pages isn't correct, while testing for
> 32k is a decent compromise.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> tests/functional/x86_64/test_memlock.py | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/tests/functional/x86_64/test_memlock.py b/tests/functional/x86_64/test_memlock.py
> index 2b515ff979..81bce80b0c 100755
> --- a/tests/functional/x86_64/test_memlock.py
> +++ b/tests/functional/x86_64/test_memlock.py
> @@ -37,7 +37,8 @@ def test_memlock_off(self):
>
> status = self.get_process_status_values(self.vm.get_pid())
>
> - self.assertTrue(status['VmLck'] == 0)
> + # libgcrypt may mlock a few pages
> + self.assertTrue(status['VmLck'] < 32)
>
> def test_memlock_on(self):
> self.common_vm_setup_with_memlock('on')
I wonder if I should have chosen 64k, which might be one 64k page...
r~
* Re: [PATCH] tests/functional/x86_64: Accept a few locked pages in test_memlock.py
2025-09-16 1:38 ` Richard Henderson
@ 2025-09-16 5:18 ` Thomas Huth
2025-09-16 16:55 ` Richard Henderson
0 siblings, 1 reply; 6+ messages in thread
From: Thomas Huth @ 2025-09-16 5:18 UTC (permalink / raw)
To: Richard Henderson, qemu-devel
On 16/09/2025 03.38, Richard Henderson wrote:
> On 9/15/25 11:55, Richard Henderson wrote:
>> Startup of libgcrypt locks a small pool of pages -- by default 16k.
>> Testing for zero locked pages isn't correct, while testing for
>> 32k is a decent compromise.
>>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>> tests/functional/x86_64/test_memlock.py | 3 ++-
>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/tests/functional/x86_64/test_memlock.py b/tests/functional/x86_64/test_memlock.py
>> index 2b515ff979..81bce80b0c 100755
>> --- a/tests/functional/x86_64/test_memlock.py
>> +++ b/tests/functional/x86_64/test_memlock.py
>> @@ -37,7 +37,8 @@ def test_memlock_off(self):
>> status = self.get_process_status_values(self.vm.get_pid())
>> - self.assertTrue(status['VmLck'] == 0)
>> + # libgcrypt may mlock a few pages
>> + self.assertTrue(status['VmLck'] < 32)
>> def test_memlock_on(self):
>> self.common_vm_setup_with_memlock('on')
>
> I wonder if I should have chosen 64k, which might be one 64k page...
It's an x86 test, so we should not have to worry about 64k pages there, I hope?
Thomas
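If a host-portable threshold were ever wanted, it could be derived from the runtime page size rather than hard-coded. A sketch: os.sysconf is standard Python and the 16 kB libgcrypt default comes from the thread, but the 2x-page headroom factor is an assumption, not anything the patch does:

```python
import os

# Host page size in kB: 4 on x86, but possibly 64 on some arm64/ppc64 hosts
page_kb = os.sysconf('SC_PAGESIZE') // 1024

# Allow libgcrypt's default 16 kB pool, rounded up so that at least
# two host pages (pool page plus guard/rounding) stay under the limit
limit_kb = max(32, 2 * page_kb)
```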
* Re: [PATCH] tests/functional/x86_64: Accept a few locked pages in test_memlock.py
2025-09-16 5:18 ` Thomas Huth
@ 2025-09-16 16:55 ` Richard Henderson
2025-09-16 18:39 ` Thomas Huth
0 siblings, 1 reply; 6+ messages in thread
From: Richard Henderson @ 2025-09-16 16:55 UTC (permalink / raw)
To: Thomas Huth, qemu-devel
On 9/15/25 22:18, Thomas Huth wrote:
> On 16/09/2025 03.38, Richard Henderson wrote:
>> On 9/15/25 11:55, Richard Henderson wrote:
>>> Startup of libgcrypt locks a small pool of pages -- by default 16k.
>>> Testing for zero locked pages isn't correct, while testing for
>>> 32k is a decent compromise.
>>>
>>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>>> ---
>>> tests/functional/x86_64/test_memlock.py | 3 ++-
>>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/tests/functional/x86_64/test_memlock.py b/tests/functional/x86_64/test_memlock.py
>>> index 2b515ff979..81bce80b0c 100755
>>> --- a/tests/functional/x86_64/test_memlock.py
>>> +++ b/tests/functional/x86_64/test_memlock.py
>>> @@ -37,7 +37,8 @@ def test_memlock_off(self):
>>> status = self.get_process_status_values(self.vm.get_pid())
>>> - self.assertTrue(status['VmLck'] == 0)
>>> + # libgcrypt may mlock a few pages
>>> + self.assertTrue(status['VmLck'] < 32)
>>> def test_memlock_on(self):
>>> self.common_vm_setup_with_memlock('on')
>>
>> I wonder if I should have chosen 64k, which might be one 64k page...
>
> It's a x86 test, so we should not have to worry about 64k pages there, I hope?
Fair enough, though it does raise the question of why it's an x86-specific test. Don't all
host architectures support memory locking?
r~
* Re: [PATCH] tests/functional/x86_64: Accept a few locked pages in test_memlock.py
2025-09-16 16:55 ` Richard Henderson
@ 2025-09-16 18:39 ` Thomas Huth
0 siblings, 0 replies; 6+ messages in thread
From: Thomas Huth @ 2025-09-16 18:39 UTC (permalink / raw)
To: Richard Henderson, qemu-devel; +Cc: Alexandr Moshkov
On 16/09/2025 18.55, Richard Henderson wrote:
> On 9/15/25 22:18, Thomas Huth wrote:
>> On 16/09/2025 03.38, Richard Henderson wrote:
>>> On 9/15/25 11:55, Richard Henderson wrote:
>>>> Startup of libgcrypt locks a small pool of pages -- by default 16k.
>>>> Testing for zero locked pages isn't correct, while testing for
>>>> 32k is a decent compromise.
>>>>
>>>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>>>> ---
>>>> tests/functional/x86_64/test_memlock.py | 3 ++-
>>>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/tests/functional/x86_64/test_memlock.py b/tests/functional/x86_64/test_memlock.py
>>>> index 2b515ff979..81bce80b0c 100755
>>>> --- a/tests/functional/x86_64/test_memlock.py
>>>> +++ b/tests/functional/x86_64/test_memlock.py
>>>> @@ -37,7 +37,8 @@ def test_memlock_off(self):
>>>> status = self.get_process_status_values(self.vm.get_pid())
>>>> - self.assertTrue(status['VmLck'] == 0)
>>>> + # libgcrypt may mlock a few pages
>>>> + self.assertTrue(status['VmLck'] < 32)
>>>> def test_memlock_on(self):
>>>> self.common_vm_setup_with_memlock('on')
>>>
>>> I wonder if I should have chosen 64k, which might be one 64k page...
>>
>> It's a x86 test, so we should not have to worry about 64k pages there, I
>> hope?
>
> Fair enough, though it does beg the question of why it's an x86-specific
> test. Don't all host architectures support memory locking?
I guess you need at least a target machine that runs a firmware by default,
since this test does not download any assets...?
Thomas
end of thread, other threads:[~2025-09-16 18:40 UTC | newest]
Thread overview: 6+ messages
2025-09-15 18:55 [PATCH] tests/functional/x86_64: Accept a few locked pages in test_memlock.py Richard Henderson
2025-09-15 20:16 ` Thomas Huth
2025-09-16 1:38 ` Richard Henderson
2025-09-16 5:18 ` Thomas Huth
2025-09-16 16:55 ` Richard Henderson
2025-09-16 18:39 ` Thomas Huth