* [Qemu-devel] [PATCH 0/4] throttle: Race condition fixes and test cases
@ 2018-08-02 14:50 Alberto Garcia
2018-08-02 14:50 ` [Qemu-devel] [PATCH 1/4] qemu-iotests: Test removing a throttle group member with a pending timer Alberto Garcia
` (5 more replies)
0 siblings, 6 replies; 7+ messages in thread
From: Alberto Garcia @ 2018-08-02 14:50 UTC (permalink / raw)
To: qemu-devel
Cc: Alberto Garcia, qemu-block, Stefan Hajnoczi, Kevin Wolf,
Max Reitz
Hi all,
here are the patches that I promised yesterday.
I was originally planning to propose this for the v3.0 release, but
after debugging and fixing the problem I don't think it's essential
(details below).
The important patch is the second one. The first and the third are
just test cases and the last is an alternative solution for the bug
that Stefan fixed in 6fccbb475bc6effc313ee9481726a1748b6dae57.
There are details in the patches themselves, but here's an explanation
of the problem: consider a scenario with two drives A and B that are
part of the same throttle group. Both of them have throttled requests
and they're waiting for a timer that is set on drive A.
(timer here) --> [A] --- req1, req2
[B] --- req3
If we drain drive [A] (e.g. by disabling its I/O limits) then its
queue is restarted. req1 is processed immediately, and before
finishing it calls schedule_next_request(). This follows the
round-robin algorithm, selects req3 and puts a timer in [B].
But we're still not done with draining [A], and now we have a
BDRV_POLL_WHILE() loop at the end of bdrv_do_drained_begin() waiting
for req2 to finish. That won't happen until the timer in [B] fires and
req3 is done. If there are more drives in the group and more requests
in the queue this can take a while. That's why disabling a drive's I/O
limits can be noticeably slow: we disabled the I/O limits but they're
still being enforced in practice.
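To make the hand-off concrete, here is a tiny self-contained Python
model of the round-robin token passing (this is not QEMU code; the
names and data structures are made up purely for illustration):

    from collections import deque

    class Member:
        def __init__(self, name):
            self.name = name
            self.queue = deque()     # throttled requests waiting on this member
            self.timer_armed = False

    def schedule_next(members, current):
        # Round-robin: arm a timer on the next member that has pending requests
        i = members.index(current)
        for m in members[i + 1:] + members[:i + 1]:
            if m.queue:
                m.timer_armed = True
                return m
        return None

    a, b = Member("A"), Member("B")
    a.queue.extend(["req1", "req2"])
    b.queue.append("req3")
    a.timer_armed = True             # the group's only timer is on A

    # Draining A restarts its queue: req1 runs and then hands the token over
    a.timer_armed = False
    a.queue.popleft()                # req1 completes
    nxt = schedule_next([a, b], a)   # round-robin picks B (req3), not A (req2)
    print(nxt.name, list(a.queue))   # -> B ['req2']

The drain of A now has to wait until B's timer fires before req2 can
run, which is exactly the situation described above.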
The QEMU I/O tests run in qtest mode (with QEMU_CLOCK_VIRTUAL). The
clock must be advanced manually, which means that the scenario that I
just described hangs QEMU because BDRV_POLL_WHILE() loops forever (you
can reproduce this with patch 3). In a real world scenario this only
results in the aforementioned slowdown (probably negligible in
practice), which is not a critical thing, and that's why I think it's
safe to keep the current code for QEMU 3.
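As a rough illustration of the real-world cost (simple arithmetic, not
a measurement, and the numbers are hypothetical): with a 4 KB/s read
limit like the one used in iotest 093, every 4 KiB request still
queued on other group members adds roughly a second before the drain
can complete.

    bps_rd = 4096                    # group read limit in bytes/s, as in iotest 093
    queued_elsewhere = [4096, 4096]  # hypothetical reads still pending on other members
    delay = sum(queued_elsewhere) / float(bps_rd)
    print("drain may be delayed by ~%.1f s" % delay)   # ~2.0 s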
I think that's all. Questions and comments are welcome.
Berto
Alberto Garcia (4):
qemu-iotests: Test removing a throttle group member with a pending
timer
throttle-groups: Skip the round-robin if a member is being drained
qemu-iotests: Update 093 to improve the draining test
throttle-groups: Don't allow timers without throttled requests
block/throttle-groups.c | 41 +++++++++++++++++++++++++---------
tests/qemu-iotests/093 | 55 ++++++++++++++++++++++++++++++++++++++++++++++
tests/qemu-iotests/093.out | 4 ++--
3 files changed, 88 insertions(+), 12 deletions(-)
--
2.11.0
* [Qemu-devel] [PATCH 1/4] qemu-iotests: Test removing a throttle group member with a pending timer
2018-08-02 14:50 [Qemu-devel] [PATCH 0/4] throttle: Race condition fixes and test cases Alberto Garcia
@ 2018-08-02 14:50 ` Alberto Garcia
2018-08-02 14:50 ` [Qemu-devel] [PATCH 2/4] throttle-groups: Skip the round-robin if a member is being drained Alberto Garcia
` (4 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Alberto Garcia @ 2018-08-02 14:50 UTC (permalink / raw)
To: qemu-devel
Cc: Alberto Garcia, qemu-block, Stefan Hajnoczi, Kevin Wolf,
Max Reitz
A throttle group can have several members, and each one of them can
have several pending requests in the queue.
The requests are processed in a round-robin fashion, so the algorithm
decides which drive is going to run the next request and sets a timer
on it. Once the timer fires and the throttled request is run, the next
drive from the group is selected and a new timer is set.
If the user tried to remove a drive from a group while that drive had
a timer set, the code did not take care of setting up a new timer on
one of the remaining members of the group, freezing their I/O.
This problem was fixed in 6fccbb475bc6effc313ee9481726a1748b6dae57,
and this patch adds a new test case that reproduces this exact
scenario.
Signed-off-by: Alberto Garcia <berto@igalia.com>
---
tests/qemu-iotests/093 | 52 ++++++++++++++++++++++++++++++++++++++++++++++
tests/qemu-iotests/093.out | 4 ++--
2 files changed, 54 insertions(+), 2 deletions(-)
diff --git a/tests/qemu-iotests/093 b/tests/qemu-iotests/093
index 68e344f8c1..b26cd34e32 100755
--- a/tests/qemu-iotests/093
+++ b/tests/qemu-iotests/093
@@ -208,6 +208,58 @@ class ThrottleTestCase(iotests.QMPTestCase):
limits[tk] = rate
self.do_test_throttle(ndrives, 5, limits)
+ # Test that removing a drive from a throttle group should not
+ # affect the remaining members of the group.
+ # https://bugzilla.redhat.com/show_bug.cgi?id=1535914
+ def test_remove_group_member(self):
+ # Create a throttle group with two drives
+ # and set a 4 KB/s read limit.
+ params = {"bps": 0,
+ "bps_rd": 4096,
+ "bps_wr": 0,
+ "iops": 0,
+ "iops_rd": 0,
+ "iops_wr": 0 }
+ self.configure_throttle(2, params)
+
+ # Read 4KB from drive0. This is performed immediately.
+ self.vm.hmp_qemu_io("drive0", "aio_read 0 4096")
+
+ # Read 4KB again. The I/O limit has been exceeded so this
+ # request is throttled and a timer is set to wake it up.
+ self.vm.hmp_qemu_io("drive0", "aio_read 0 4096")
+
+ # Read from drive1. We're still over the I/O limit so this
+ # request is also throttled. There's no timer set in drive1
+ # because there's already one in drive0. Once the timer in
+ # drive0 fires and its throttled request is processed then the
+ # next request in the queue will be scheduled: this one.
+ self.vm.hmp_qemu_io("drive1", "aio_read 0 4096")
+
+ # At this point only the first 4KB have been read from drive0.
+ # The other requests are throttled.
+ self.assertEqual(self.blockstats('drive0')[0], 4096)
+ self.assertEqual(self.blockstats('drive1')[0], 0)
+
+ # Remove drive0 from the throttle group and disable its I/O limits.
+ # drive1 remains in the group with a throttled request.
+ params['bps_rd'] = 0
+ params['device'] = 'drive0'
+ result = self.vm.qmp("block_set_io_throttle", conv_keys=False, **params)
+ self.assert_qmp(result, 'return', {})
+
+ # Removing the I/O limits from drive0 drains its pending request.
+ # The read request in drive1 is still throttled.
+ self.assertEqual(self.blockstats('drive0')[0], 8192)
+ self.assertEqual(self.blockstats('drive1')[0], 0)
+
+ # Advance the clock 5 seconds. This completes the request in drive1
+ self.vm.qtest("clock_step %d" % (5 * nsec_per_sec))
+
+ # Now all requests have been processed.
+ self.assertEqual(self.blockstats('drive0')[0], 8192)
+ self.assertEqual(self.blockstats('drive1')[0], 4096)
+
class ThrottleTestCoroutine(ThrottleTestCase):
test_img = "null-co://"
diff --git a/tests/qemu-iotests/093.out b/tests/qemu-iotests/093.out
index 594c16f49f..36376bed87 100644
--- a/tests/qemu-iotests/093.out
+++ b/tests/qemu-iotests/093.out
@@ -1,5 +1,5 @@
-........
+..........
----------------------------------------------------------------------
-Ran 8 tests
+Ran 10 tests
OK
--
2.11.0
* [Qemu-devel] [PATCH 2/4] throttle-groups: Skip the round-robin if a member is being drained
2018-08-02 14:50 [Qemu-devel] [PATCH 0/4] throttle: Race condition fixes and test cases Alberto Garcia
2018-08-02 14:50 ` [Qemu-devel] [PATCH 1/4] qemu-iotests: Test removing a throttle group member with a pending timer Alberto Garcia
@ 2018-08-02 14:50 ` Alberto Garcia
2018-08-02 14:50 ` [Qemu-devel] [PATCH 3/4] qemu-iotests: Update 093 to improve the draining test Alberto Garcia
` (3 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Alberto Garcia @ 2018-08-02 14:50 UTC (permalink / raw)
To: qemu-devel
Cc: Alberto Garcia, qemu-block, Stefan Hajnoczi, Kevin Wolf,
Max Reitz
In the throttling code, after an I/O request has been completed, the
next one is selected from a different member using a round-robin
algorithm. This ensures that all members get a chance to finish their
pending I/O requests.
However, if a group member has its I/O limits disabled (because it's
being drained) then we should always give it priority in order to have
all its pending requests finished as soon as possible.
If we don't do this we could have a member in the process of being
drained waiting for the throttled requests of other members, for which
the I/O limits still apply.
This can have additional consequences: if we're running in qtest mode
(with QEMU_CLOCK_VIRTUAL) then timers can only fire if we advance the
clock manually, so attempting to drain a block device can hang QEMU in
the BDRV_POLL_WHILE() loop at the end of bdrv_do_drained_begin().
Signed-off-by: Alberto Garcia <berto@igalia.com>
---
block/throttle-groups.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/block/throttle-groups.c b/block/throttle-groups.c
index e297b04e17..d46c56b31e 100644
--- a/block/throttle-groups.c
+++ b/block/throttle-groups.c
@@ -221,6 +221,15 @@ static ThrottleGroupMember *next_throttle_token(ThrottleGroupMember *tgm,
ThrottleGroup *tg = container_of(ts, ThrottleGroup, ts);
ThrottleGroupMember *token, *start;
+ /* If this member has its I/O limits disabled then it means that
+ * it's being drained. Skip the round-robin search and return tgm
+ * immediately if it has pending requests. Otherwise we could be
+ * forcing it to wait for other members' throttled requests. */
+ if (tgm_has_pending_reqs(tgm, is_write) &&
+ atomic_read(&tgm->io_limits_disabled)) {
+ return tgm;
+ }
+
start = token = tg->tokens[is_write];
/* get next bs round in round robin style */
--
2.11.0
* [Qemu-devel] [PATCH 3/4] qemu-iotests: Update 093 to improve the draining test
2018-08-02 14:50 [Qemu-devel] [PATCH 0/4] throttle: Race condition fixes and test cases Alberto Garcia
2018-08-02 14:50 ` [Qemu-devel] [PATCH 1/4] qemu-iotests: Test removing a throttle group member with a pending timer Alberto Garcia
2018-08-02 14:50 ` [Qemu-devel] [PATCH 2/4] throttle-groups: Skip the round-robin if a member is being drained Alberto Garcia
@ 2018-08-02 14:50 ` Alberto Garcia
2018-08-02 14:50 ` [Qemu-devel] [PATCH 4/4] throttle-groups: Don't allow timers without throttled requests Alberto Garcia
` (2 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Alberto Garcia @ 2018-08-02 14:50 UTC (permalink / raw)
To: qemu-devel
Cc: Alberto Garcia, qemu-block, Stefan Hajnoczi, Kevin Wolf,
Max Reitz
The previous patch fixes a problem in which draining a block device
with more than one throttled request can make it wait first for the
completion of requests in other members of the same group.
This patch updates test_remove_group_member() in iotest 093 to
reproduce that scenario. This updated test would hang QEMU without the
fix from the previous patch.
Signed-off-by: Alberto Garcia <berto@igalia.com>
---
tests/qemu-iotests/093 | 19 +++++++++++--------
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git a/tests/qemu-iotests/093 b/tests/qemu-iotests/093
index b26cd34e32..9d1971a56c 100755
--- a/tests/qemu-iotests/093
+++ b/tests/qemu-iotests/093
@@ -225,15 +225,18 @@ class ThrottleTestCase(iotests.QMPTestCase):
# Read 4KB from drive0. This is performed immediately.
self.vm.hmp_qemu_io("drive0", "aio_read 0 4096")
- # Read 4KB again. The I/O limit has been exceeded so this
+ # Read 2KB. The I/O limit has been exceeded so this
# request is throttled and a timer is set to wake it up.
- self.vm.hmp_qemu_io("drive0", "aio_read 0 4096")
+ self.vm.hmp_qemu_io("drive0", "aio_read 0 2048")
- # Read from drive1. We're still over the I/O limit so this
- # request is also throttled. There's no timer set in drive1
- # because there's already one in drive0. Once the timer in
- # drive0 fires and its throttled request is processed then the
- # next request in the queue will be scheduled: this one.
+ # Read 2KB again. We're still over the I/O limit so this
+ # request is also throttled, but no new timer is set since
+ # there's already one.
+ self.vm.hmp_qemu_io("drive0", "aio_read 0 2048")
+
+ # Read from drive1. This request is also throttled, and no
+ # timer is set in drive1 because there's already one in
+ # drive0.
self.vm.hmp_qemu_io("drive1", "aio_read 0 4096")
# At this point only the first 4KB have been read from drive0.
@@ -248,7 +251,7 @@ class ThrottleTestCase(iotests.QMPTestCase):
result = self.vm.qmp("block_set_io_throttle", conv_keys=False, **params)
self.assert_qmp(result, 'return', {})
- # Removing the I/O limits from drive0 drains its pending request.
+ # Removing the I/O limits from drive0 drains its two pending requests.
# The read request in drive1 is still throttled.
self.assertEqual(self.blockstats('drive0')[0], 8192)
self.assertEqual(self.blockstats('drive1')[0], 0)
--
2.11.0
* [Qemu-devel] [PATCH 4/4] throttle-groups: Don't allow timers without throttled requests
2018-08-02 14:50 [Qemu-devel] [PATCH 0/4] throttle: Race condition fixes and test cases Alberto Garcia
` (2 preceding siblings ...)
2018-08-02 14:50 ` [Qemu-devel] [PATCH 3/4] qemu-iotests: Update 093 to improve the draining test Alberto Garcia
@ 2018-08-02 14:50 ` Alberto Garcia
2018-08-13 11:42 ` [Qemu-devel] [PATCH 0/4] throttle: Race condition fixes and test cases Alberto Garcia
2018-08-13 15:37 ` Kevin Wolf
5 siblings, 0 replies; 7+ messages in thread
From: Alberto Garcia @ 2018-08-02 14:50 UTC (permalink / raw)
To: qemu-devel
Cc: Alberto Garcia, qemu-block, Stefan Hajnoczi, Kevin Wolf,
Max Reitz
Commit 6fccbb475bc6effc313ee9481726a1748b6dae57 fixed a bug caused by
QEMU attempting to remove a throttle group member with no pending
requests but an active timer set. This was the result of a previous
bdrv_drained_begin() call processing the throttled requests but
leaving the timer untouched.
Although the commit does solve the problem, the situation shouldn't
happen in the first place. If we try to drain a throttle group member
which has a timer set, we should cancel the timer instead of ignoring
it.
Signed-off-by: Alberto Garcia <berto@igalia.com>
---
block/throttle-groups.c | 32 ++++++++++++++++++++++----------
1 file changed, 22 insertions(+), 10 deletions(-)
diff --git a/block/throttle-groups.c b/block/throttle-groups.c
index d46c56b31e..5d8213a443 100644
--- a/block/throttle-groups.c
+++ b/block/throttle-groups.c
@@ -36,6 +36,7 @@
static void throttle_group_obj_init(Object *obj);
static void throttle_group_obj_complete(UserCreatable *obj, Error **errp);
+static void timer_cb(ThrottleGroupMember *tgm, bool is_write);
/* The ThrottleGroup structure (with its ThrottleState) is shared
* among different ThrottleGroupMembers and it's independent from
@@ -424,15 +425,31 @@ static void throttle_group_restart_queue(ThrottleGroupMember *tgm, bool is_write
rd->tgm = tgm;
rd->is_write = is_write;
+ /* This function is called when a timer is fired or when
+ * throttle_group_restart_tgm() is called. Either way, there can
+ * be no timer pending on this tgm at this point */
+ assert(!timer_pending(tgm->throttle_timers.timers[is_write]));
+
co = qemu_coroutine_create(throttle_group_restart_queue_entry, rd);
aio_co_enter(tgm->aio_context, co);
}
void throttle_group_restart_tgm(ThrottleGroupMember *tgm)
{
+ int i;
+
if (tgm->throttle_state) {
- throttle_group_restart_queue(tgm, 0);
- throttle_group_restart_queue(tgm, 1);
+ for (i = 0; i < 2; i++) {
+ QEMUTimer *t = tgm->throttle_timers.timers[i];
+ if (timer_pending(t)) {
+ /* If there's a pending timer on this tgm, fire it now */
+ timer_del(t);
+ timer_cb(tgm, i);
+ } else {
+ /* Else run the next request from the queue manually */
+ throttle_group_restart_queue(tgm, i);
+ }
+ }
}
}
@@ -567,16 +584,11 @@ void throttle_group_unregister_tgm(ThrottleGroupMember *tgm)
return;
}
- assert(tgm->pending_reqs[0] == 0 && tgm->pending_reqs[1] == 0);
- assert(qemu_co_queue_empty(&tgm->throttled_reqs[0]));
- assert(qemu_co_queue_empty(&tgm->throttled_reqs[1]));
-
qemu_mutex_lock(&tg->lock);
for (i = 0; i < 2; i++) {
- if (timer_pending(tgm->throttle_timers.timers[i])) {
- tg->any_timer_armed[i] = false;
- schedule_next_request(tgm, i);
- }
+ assert(tgm->pending_reqs[i] == 0);
+ assert(qemu_co_queue_empty(&tgm->throttled_reqs[i]));
+ assert(!timer_pending(tgm->throttle_timers.timers[i]));
if (tg->tokens[i] == tgm) {
token = throttle_group_next_tgm(tgm);
/* Take care of the case where this is the last tgm in the group */
--
2.11.0
* Re: [Qemu-devel] [PATCH 0/4] throttle: Race condition fixes and test cases
2018-08-02 14:50 [Qemu-devel] [PATCH 0/4] throttle: Race condition fixes and test cases Alberto Garcia
` (3 preceding siblings ...)
2018-08-02 14:50 ` [Qemu-devel] [PATCH 4/4] throttle-groups: Don't allow timers without throttled requests Alberto Garcia
@ 2018-08-13 11:42 ` Alberto Garcia
2018-08-13 15:37 ` Kevin Wolf
5 siblings, 0 replies; 7+ messages in thread
From: Alberto Garcia @ 2018-08-13 11:42 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-block, Stefan Hajnoczi, Kevin Wolf, Max Reitz
ping
On Thu 02 Aug 2018 04:50:22 PM CEST, Alberto Garcia wrote:
> Hi all,
>
> here are the patches that I promised yesterday.
>
> I was originally planning to propose this for the v3.0 release, but
> after debugging and fixing the problem I don't think it's essential
> (details below).
>
> The important patch is the second one. The first and the third are
> just test cases and the last is an alternative solution for the bug
> that Stefan fixed in 6fccbb475bc6effc313ee9481726a1748b6dae57.
>
> There are details in the patches themselves, but here's an explanation
> of the problem: consider a scenario with two drives A and B that are
> part of the same throttle group. Both of them have throttled requests
> and they're waiting for a timer that is set on drive A.
>
> (timer here) --> [A] --- req1, req2
> [B] --- req3
>
> If we drain drive [A] (e.g. by disabling its I/O limits) then its
> queue is restarted. req1 is processed immediately, and before
> finishing it calls schedule_next_request(). This follows the
> round-robin algorithm, selects req3 and puts a timer in [B].
>
> But we're still not done with draining [A], and now we have a
> BDRV_POLL_WHILE() loop at the end of bdrv_do_drained_begin() waiting
> for req2 to finish. That won't happen until the timer in [B] fires and
> req3 is done. If there are more drives in the group and more requests
> in the queue this can take a while. That's why disabling a drive's I/O
> limits can be noticeably slow: we disabled the I/O limits but they're
> still being enforced in practice.
>
> The QEMU I/O tests run in qtest mode (with QEMU_CLOCK_VIRTUAL). The
> clock must be advanced manually, which means that the scenario that I
> just described hangs QEMU because BDRV_POLL_WHILE() loops forever (you
> can reproduce this with patch 3). In a real world scenario this only
> results in the aforementioned slowdown (probably negligible in
> practice), which is not a critical thing, and that's why I think it's
> safe to keep the current code for QEMU 3.
>
> I think that's all. Questions and comments are welcome.
>
> Berto
>
> Alberto Garcia (4):
> qemu-iotests: Test removing a throttle group member with a pending
> timer
> throttle-groups: Skip the round-robin if a member is being drained
> qemu-iotests: Update 093 to improve the draining test
> throttle-groups: Don't allow timers without throttled requests
>
> block/throttle-groups.c | 41 +++++++++++++++++++++++++---------
> tests/qemu-iotests/093 | 55 ++++++++++++++++++++++++++++++++++++++++++++++
> tests/qemu-iotests/093.out | 4 ++--
> 3 files changed, 88 insertions(+), 12 deletions(-)
>
> --
> 2.11.0
* Re: [Qemu-devel] [PATCH 0/4] throttle: Race condition fixes and test cases
2018-08-02 14:50 [Qemu-devel] [PATCH 0/4] throttle: Race condition fixes and test cases Alberto Garcia
` (4 preceding siblings ...)
2018-08-13 11:42 ` [Qemu-devel] [PATCH 0/4] throttle: Race condition fixes and test cases Alberto Garcia
@ 2018-08-13 15:37 ` Kevin Wolf
5 siblings, 0 replies; 7+ messages in thread
From: Kevin Wolf @ 2018-08-13 15:37 UTC (permalink / raw)
To: Alberto Garcia; +Cc: qemu-devel, qemu-block, Stefan Hajnoczi, Max Reitz
On 02.08.2018 at 16:50, Alberto Garcia wrote:
> Hi all,
>
> here are the patches that I promised yesterday.
>
> I was originally planning to propose this for the v3.0 release, but
> after debugging and fixing the problem I don't think it's essential
> (details below).
>
> The important patch is the second one. The first and the third are
> just test cases and the last is an alternative solution for the bug
> that Stefan fixed in 6fccbb475bc6effc313ee9481726a1748b6dae57.
>
> There are details in the patches themselves, but here's an explanation
> of the problem: consider a scenario with two drives A and B that are
> part of the same throttle group. Both of them have throttled requests
> and they're waiting for a timer that is set on drive A.
>
> (timer here) --> [A] --- req1, req2
> [B] --- req3
>
> If we drain drive [A] (e.g. by disabling its I/O limits) then its
> queue is restarted. req1 is processed immediately, and before
> finishing it calls schedule_next_request(). This follows the
> round-robin algorithm, selects req3 and puts a timer in [B].
>
> But we're still not done with draining [A], and now we have a
> BDRV_POLL_WHILE() loop at the end of bdrv_do_drained_begin() waiting
> for req2 to finish. That won't happen until the timer in [B] fires and
> req3 is done. If there are more drives in the group and more requests
> in the queue this can take a while. That's why disabling a drive's I/O
> limits can be noticeably slow: we disabled the I/O limits but they're
> still being enforced in practice.
>
> The QEMU I/O tests run in qtest mode (with QEMU_CLOCK_VIRTUAL). The
> clock must be advanced manually, which means that the scenario that I
> just described hangs QEMU because BDRV_POLL_WHILE() loops forever (you
> can reproduce this with patch 3). In a real world scenario this only
> results in the aforementioned slowdown (probably negligible in
> practice), which is not a critical thing, and that's why I think it's
> safe to keep the current code for QEMU 3.
>
> I think that's all. Questions and comments are welcome.
Thanks, applied to the block branch.
Kevin