* [Drbd-dev] [PATCH] drbd: fix a bug of got_peer_ack
@ 2022-09-21 9:07 Rui Xu
2022-09-29 9:39 ` Joel Colledge
0 siblings, 1 reply; 2+ messages in thread
From: Rui Xu @ 2022-09-21 9:07 UTC (permalink / raw)
To: philipp.reisner, drbd-dev, joel.colledge; +Cc: Rui Xu, dongsheng.yang
Consider a scenario where I/O is ongoing and the backing disk of
the secondary DRBD node suddenly breaks. Some requests from the
primary node will not be processed in receive_Data since there is
no ldev. The primary node will still send a peer_ack to the
secondary node for those requests, but the secondary node will not
find them in got_peer_ack.
The first problem caused by this bug is that the two nodes will be
disconnected, and the second problem is that some peer requests
can't be destroyed.
Fix it by finding the last peer request on the peer_requests list,
so that the remaining requests on the list can be destroyed.
Signed-off-by: Rui Xu <rui.xu@easystack.cn>
---
drbd/drbd_receiver.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drbd/drbd_receiver.c b/drbd/drbd_receiver.c
index 656d4eced..b63e83f52 100644
--- a/drbd/drbd_receiver.c
+++ b/drbd/drbd_receiver.c
@@ -9343,6 +9343,13 @@ static int got_peer_ack(struct drbd_connection *connection, struct packet_info *
if (dagtag == peer_req->dagtag_sector)
goto found;
}
+
+ list_for_each_entry(peer_req, &connection->peer_requests, recv_order) {
+ if (peer_req->dagtag_sector == atomic64_read(&connection->last_dagtag_sector)) {
+ drbd_info(connection, "peer request with last dagtag sector %llu found\n", peer_req->dagtag_sector);
+ goto found;
+ }
+ }
spin_unlock_irq(&connection->peer_reqs_lock);
drbd_err(connection, "peer request with dagtag %llu not found\n", dagtag);
--
2.25.1
* Re: [Drbd-dev] [PATCH] drbd: fix a bug of got_peer_ack
2022-09-21 9:07 [Drbd-dev] [PATCH] drbd: fix a bug of got_peer_ack Rui Xu
@ 2022-09-29 9:39 ` Joel Colledge
0 siblings, 0 replies; 2+ messages in thread
From: Joel Colledge @ 2022-09-29 9:39 UTC (permalink / raw)
To: Rui Xu; +Cc: philipp.reisner, dongsheng.yang, drbd-dev
Hi Xu,
> Consider a scenario where I/O is ongoing and the backing disk of
> the secondary DRBD node suddenly breaks. Some requests from the
> primary node will not be processed in receive_Data since there is
> no ldev. The primary node will still send a peer_ack to the
> secondary node for those requests, but the secondary node will not
> find them in got_peer_ack.
>
> The first problem caused by this bug is that the two nodes will be
> disconnected, and the second problem is that some peer requests
> can't be destroyed.
I can confirm this issue. Thanks for reporting it.
> Fix it by finding the last peer request on the peer_requests list,
> so that the remaining requests on the list can be destroyed.
I believe this is a valid solution. It is missing the case where
another peer ack is sent afterwards too, so that got_peer_ack() is
called with connection->peer_requests empty. But don't worry about
that for now.
The question is - do we need to send peer acks to peers that responded
with P_NEG_ACK at all? At the point when the write fails on the
secondary, we could set the bitmap bits and free the request. Then we
don't need the peer-ack from the primary. This may lead to a simpler
and more robust solution. I'll try it.
Best regards,
Joel