public inbox for linux-block@vger.kernel.org
* [PATCH 00/20] DRBD 9 rework
@ 2026-03-27 22:38 Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 01/20] drbd: mark as BROKEN during " Christoph Böhmwalder
                   ` (19 more replies)
  0 siblings, 20 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder

As discussed (context: [0]), here is the first version of our DRBD 9
rework series, intended for for-next via for-7.1/drbd.

This replays about 10-15 years of active out-of-tree development work
[1], depending on how you count. The out-of-tree module has
severely diverged from the in-tree version over the years, which is
what we are aiming to fix now.

Hopefully that somewhat excuses (or at least explains) the massive
diffs -- we've tried to come up with a way to group the changes by
topic, but I realize it's still not exactly trivial to review.

We've been polishing this series for a while now, and we have taken
great care to make it as "upstream-presentable" as possible. That said,
there are still probably imperfections. It's a start -- feedback welcome!

The main remaining blocker is that this technically breaks
userspace: some ancient versions of the DRBD userspace utilities will
not be able to talk to this version of the driver (v8 and v9 genetlink
families are completely incompatible).
We will fix that by introducing a completely new genetlink family (think
"drbd2") that follows all modern conventions. Then we can register both
families, going through a compat layer for the old family.

A prerequisite for that is converting the genl_magic macro
infrastructure we use now to YNL. That is already in the pipeline; we
expect to have it ready by the 7.2 merge window.

The plan is to submit one new version of this series for every merge
window, which should end up in linux-next. Within a few kernel
releases, we will hopefully be close enough to get this over the line
and submitted for real.

Thanks,
Christoph

[0] https://lore.kernel.org/linux-next/899e0337-9642-4ca6-9050-aeab14fa22ef@kernel.dk/
[1] https://github.com/LINBIT/drbd

Christoph Böhmwalder (20):
  drbd: mark as BROKEN during DRBD 9 rework
  drbd: extend wire protocol definitions for DRBD 9
  drbd: introduce DRBD 9 on-disk metadata format
  drbd: add transport layer abstraction
  drbd: add TCP transport implementation
  drbd: add RDMA transport implementation
  drbd: add load-balancing TCP transport
  drbd: add DAX/PMEM support for metadata access
  drbd: add optional compatibility layer for DRBD 8.4
  drbd: rename drbd_worker.c to drbd_sender.c
  drbd: rework sender for DRBD 9 multi-peer
  drbd: replace per-device state model with multi-peer data structures
  drbd: rewrite state machine for DRBD 9 multi-peer clusters
  drbd: rework activity log and bitmap for multi-peer replication
  drbd: rework request processing for DRBD 9 multi-peer IO
  drbd: rework module core for DRBD 9 transport and multi-peer
  drbd: rework receiver for DRBD 9 transport and multi-peer protocol
  drbd: rework netlink management interface for DRBD 9
  drbd: update monitoring interfaces for multi-peer topology
  drbd: remove BROKEN for DRBD

 drivers/block/drbd/Kconfig                    |    58 +
 drivers/block/drbd/Makefile                   |     9 +-
 drivers/block/drbd/drbd_actlog.c              |  1122 +-
 drivers/block/drbd/drbd_bitmap.c              |  1824 +--
 drivers/block/drbd/drbd_buildtag.c            |     2 +-
 drivers/block/drbd/drbd_config.h              |    38 +
 drivers/block/drbd/drbd_dax_pmem.c            |   158 +
 drivers/block/drbd/drbd_dax_pmem.h            |    40 +
 drivers/block/drbd/drbd_debugfs.c             |  1657 ++-
 drivers/block/drbd/drbd_debugfs.h             |     2 +
 .../block/drbd}/drbd_genl_api.h               |    19 +-
 drivers/block/drbd/drbd_int.h                 |  3278 +++--
 drivers/block/drbd/drbd_interval.c            |    35 +-
 drivers/block/drbd/drbd_interval.h            |   156 +-
 drivers/block/drbd/drbd_legacy_84.c           |   564 +
 drivers/block/drbd/drbd_legacy_84.h           |    27 +
 drivers/block/drbd/drbd_main.c                |  6008 +++++---
 drivers/block/drbd/drbd_meta_data.h           |   126 +
 drivers/block/drbd/drbd_nl.c                  |  7248 ++++++---
 drivers/block/drbd/drbd_nla.c                 |     2 +-
 drivers/block/drbd/drbd_nla.h                 |     7 +-
 drivers/block/drbd/drbd_polymorph_printk.h    |   265 +-
 drivers/block/drbd/drbd_proc.c                |   320 +-
 drivers/block/drbd/drbd_protocol.h            |   519 +-
 drivers/block/drbd/drbd_receiver.c            | 12258 +++++++++++-----
 drivers/block/drbd/drbd_req.c                 |  2990 ++--
 drivers/block/drbd/drbd_req.h                 |   303 +-
 drivers/block/drbd/drbd_sender.c              |  3871 +++++
 drivers/block/drbd/drbd_state.c               |  7724 +++++++---
 drivers/block/drbd/drbd_state.h               |   298 +-
 drivers/block/drbd/drbd_state_change.h        |    66 +-
 drivers/block/drbd/drbd_strings.c             |   219 +-
 drivers/block/drbd/drbd_strings.h             |    25 +-
 drivers/block/drbd/drbd_transport.c           |   403 +
 drivers/block/drbd/drbd_transport.h           |   340 +
 drivers/block/drbd/drbd_transport_lb-tcp.c    |  1905 +++
 drivers/block/drbd/drbd_transport_rdma.c      |  3496 +++++
 drivers/block/drbd/drbd_transport_tcp.c       |  1670 +++
 drivers/block/drbd/drbd_transport_template.c  |   160 +
 drivers/block/drbd/drbd_worker.c              |  2223 ---
 include/linux/drbd.h                          |   190 +-
 include/linux/drbd_config.h                   |    16 -
 include/linux/drbd_genl.h                     |   352 +-
 include/linux/drbd_limits.h                   |   112 +-
 include/linux/genl_magic_func.h               |    50 +-
 45 files changed, 45891 insertions(+), 16264 deletions(-)
 create mode 100644 drivers/block/drbd/drbd_config.h
 create mode 100644 drivers/block/drbd/drbd_dax_pmem.c
 create mode 100644 drivers/block/drbd/drbd_dax_pmem.h
 rename {include/linux => drivers/block/drbd}/drbd_genl_api.h (68%)
 create mode 100644 drivers/block/drbd/drbd_legacy_84.c
 create mode 100644 drivers/block/drbd/drbd_legacy_84.h
 create mode 100644 drivers/block/drbd/drbd_meta_data.h
 create mode 100644 drivers/block/drbd/drbd_sender.c
 create mode 100644 drivers/block/drbd/drbd_transport.c
 create mode 100644 drivers/block/drbd/drbd_transport.h
 create mode 100644 drivers/block/drbd/drbd_transport_lb-tcp.c
 create mode 100644 drivers/block/drbd/drbd_transport_rdma.c
 create mode 100644 drivers/block/drbd/drbd_transport_tcp.c
 create mode 100644 drivers/block/drbd/drbd_transport_template.c
 delete mode 100644 drivers/block/drbd/drbd_worker.c
 delete mode 100644 include/linux/drbd_config.h


base-commit: 67807fbaf12719fca46a622d759484652b79c7c3
-- 
2.53.0


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH 01/20] drbd: mark as BROKEN during DRBD 9 rework
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 02/20] drbd: extend wire protocol definitions for DRBD 9 Christoph Böhmwalder
                   ` (18 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder

Mark DRBD as BROKEN while the driver is being reworked for DRBD 9
multi-peer support. The following commits restructure the driver
extensively, and intermediate states do not compile. DRBD will be
re-enabled once the rework is complete.

Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/block/drbd/Kconfig b/drivers/block/drbd/Kconfig
index 495a72da04c6..b907b07468bb 100644
--- a/drivers/block/drbd/Kconfig
+++ b/drivers/block/drbd/Kconfig
@@ -8,6 +8,7 @@ comment "DRBD disabled because PROC_FS or INET not selected"
 
 config BLK_DEV_DRBD
 	tristate "DRBD Distributed Replicated Block Device support"
+	depends on BROKEN
 	depends on PROC_FS && INET
 	select LRU_CACHE
 	select CRC32
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 02/20] drbd: extend wire protocol definitions for DRBD 9
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 01/20] drbd: mark as BROKEN during " Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-28 14:13   ` kernel test robot
  2026-03-27 22:38 ` [PATCH 03/20] drbd: introduce DRBD 9 on-disk metadata format Christoph Böhmwalder
                   ` (17 subsequent siblings)
  19 siblings, 1 reply; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder, Joel Colledge

Extend drbd_protocol.h with the packet types and structures needed for
multi-peer operation.

Two-phase commit (2PC) messages coordinate distributed state changes
across all peers in a cluster.
Data-generation-tag (dagtag) messages order application writes relative
to resync IO, preventing stale overwrites during concurrent resync.
Peer-acknowledgement packets carry a node bitmask so each primary can
track which peers have persisted a write.

The connection-features handshake now carries sender and receiver node
IDs, establishing peer identity at the wire level.
New feature-flag bits advertise these capabilities during negotiation,
allowing DRBD to remain wire-compatible with 8.4 peers while enabling
the full DRBD 9 feature set when both ends support it.
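
To illustrate how the negotiation composes, here is a minimal sketch
(the agree_on_features() helper and its caller are made up for
illustration; the actual handshake handling lands with the receiver
rework later in this series):

  /* Both sides advertise their capabilities in
   * p_connection_features.feature_flags (network byte order on the
   * wire); what a connection may actually use is the intersection. */
  static u32 agree_on_features(u32 local_features,
                               const struct p_connection_features *p)
  {
          return local_features & be32_to_cpu(p->feature_flags);
  }

  static bool can_use_dagtag_resync(u32 agreed_features)
  {
          /* P_RS_DAGTAG_REQ and friends may only be sent when both
           * ends advertised DRBD_FF_RESYNC_DAGTAG. */
          return agreed_features & DRBD_FF_RESYNC_DAGTAG;
  }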

Co-developed-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Co-developed-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Co-developed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Co-developed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/drbd_protocol.h | 519 ++++++++++++++++++++++-------
 1 file changed, 403 insertions(+), 116 deletions(-)

diff --git a/drivers/block/drbd/drbd_protocol.h b/drivers/block/drbd/drbd_protocol.h
index 56bbca9d7700..886686f8cd1d 100644
--- a/drivers/block/drbd/drbd_protocol.h
+++ b/drivers/block/drbd/drbd_protocol.h
@@ -2,6 +2,9 @@
 #ifndef __DRBD_PROTOCOL_H
 #define __DRBD_PROTOCOL_H
 
+#include <linux/types.h>
+#include <linux/drbd.h>
+
 enum drbd_packet {
 	/* receiver (data socket) */
 	P_DATA		      = 0x00,
@@ -24,12 +27,12 @@ enum drbd_packet {
 	P_AUTH_RESPONSE	      = 0x11,
 	P_STATE_CHG_REQ	      = 0x12,
 
-	/* (meta socket) */
+	/* asender (meta socket) */
 	P_PING		      = 0x13,
 	P_PING_ACK	      = 0x14,
 	P_RECV_ACK	      = 0x15, /* Used in protocol B */
 	P_WRITE_ACK	      = 0x16, /* Used in protocol C */
-	P_RS_WRITE_ACK	      = 0x17, /* Is a P_WRITE_ACK, additionally call set_in_sync(). */
+	P_RS_WRITE_ACK	      = 0x17, /* Write ack for resync reply. */
 	P_SUPERSEDED	      = 0x18, /* Used in proto C, two-primaries conflict detection */
 	P_NEG_ACK	      = 0x19, /* Sent if local disk is unusable */
 	P_NEG_DREPLY	      = 0x1a, /* Local disk is broken... */
@@ -41,7 +44,7 @@ enum drbd_packet {
 
 	P_OV_REQUEST	      = 0x1e, /* data socket */
 	P_OV_REPLY	      = 0x1f,
-	P_OV_RESULT	      = 0x20, /* meta socket */
+	P_OV_RESULT	      = 0x20, /* meta sock: Protocol < 122 version of P_OV_RESULT_ID */
 	P_CSUM_RS_REQUEST     = 0x21, /* data socket */
 	P_RS_IS_IN_SYNC	      = 0x22, /* meta socket */
 	P_SYNC_PARAM89	      = 0x23, /* data socket, protocol version 89 replacement for P_SYNC_PARAM */
@@ -51,32 +54,69 @@ enum drbd_packet {
 	P_DELAY_PROBE         = 0x27, /* is used on BOTH sockets */
 	P_OUT_OF_SYNC         = 0x28, /* Mark as out of sync (Outrunning), data socket */
 	P_RS_CANCEL           = 0x29, /* meta: Used to cancel RS_DATA_REQUEST packet by SyncSource */
-	P_CONN_ST_CHG_REQ     = 0x2a, /* data sock: Connection wide state request */
-	P_CONN_ST_CHG_REPLY   = 0x2b, /* meta sock: Connection side state req reply */
+	P_CONN_ST_CHG_REQ     = 0x2a, /* data sock: state change request */
+	P_CONN_ST_CHG_REPLY   = 0x2b, /* meta sock: state change reply */
 	P_RETRY_WRITE	      = 0x2c, /* Protocol C: retry conflicting write request */
 	P_PROTOCOL_UPDATE     = 0x2d, /* data sock: is used in established connections */
-        /* 0x2e to 0x30 reserved, used in drbd 9 */
+	P_TWOPC_PREPARE       = 0x2e, /* data sock: prepare state change */
+	P_TWOPC_ABORT         = 0x2f, /* data sock: abort state change */
+
+	P_DAGTAG	      = 0x30, /* data sock: set the current dagtag */
 
-	/* REQ_OP_DISCARD. We used "discard" in different contexts before,
+	/* REQ_DISCARD. We used "discard" in different contexts before,
 	 * which is why I chose TRIM here, to disambiguate. */
 	P_TRIM                = 0x31,
 
 	/* Only use these two if both support FF_THIN_RESYNC */
 	P_RS_THIN_REQ         = 0x32, /* Request a block for resync or reply P_RS_DEALLOCATED */
-	P_RS_DEALLOCATED      = 0x33, /* Contains only zeros on sync source node */
+	P_RS_DEALLOCATED      = 0x33, /* Protocol < 122 version of P_RS_DEALLOCATED_ID */
 
 	/* REQ_WRITE_SAME.
 	 * On a receiving side without REQ_WRITE_SAME,
 	 * we may fall back to an opencoded loop instead. */
 	P_WSAME               = 0x34,
-
-	/* 0x35 already claimed in DRBD 9 */
+	P_TWOPC_PREP_RSZ      = 0x35, /* PREPARE a 2PC resize operation */
 	P_ZEROES              = 0x36, /* data sock: zero-out, WRITE_ZEROES */
 
-	/* 0x40 .. 0x48 already claimed in DRBD 9 */
+	/* place new packets for both 8.4 and 9 here,
+	 * place new packets for 9-only in the next gap. */
+
+	P_PEER_ACK            = 0x40, /* meta sock: tell which nodes have acked a request */
+	P_PEERS_IN_SYNC       = 0x41, /* data sock: Mark area as in sync */
+
+	P_UUIDS110	      = 0x42, /* data socket */
+	P_PEER_DAGTAG         = 0x43, /* data socket, used to trigger reconciliation resync */
+	P_CURRENT_UUID	      = 0x44, /* data socket */
+
+	P_TWOPC_YES           = 0x45, /* meta sock: allow two-phase commit */
+	P_TWOPC_NO            = 0x46, /* meta sock: reject two-phase commit */
+	P_TWOPC_COMMIT        = 0x47, /* data sock: commit state change */
+	P_TWOPC_RETRY         = 0x48, /* meta sock: retry two-phase commit */
+
+	P_CONFIRM_STABLE      = 0x49, /* meta sock: similar to an unsolicited partial barrier ack */
+	P_RS_CANCEL_AHEAD     = 0x4a, /* protocol version 115,
+		 * meta: cancel RS_DATA_REQUEST packet if already Ahead again,
+		 *       tell peer to stop sending resync requests... */
+	P_DISCONNECT          = 0x4b, /* data sock: Disconnect and stop connection attempts */
+
+	P_RS_DAGTAG_REQ       = 0x4c, /* data sock: Request a block for resync, with dagtag dependency */
+	P_RS_CSUM_DAGTAG_REQ  = 0x4d, /* data sock: Request a block for resync if checksum differs, with dagtag dependency */
+	P_RS_THIN_DAGTAG_REQ  = 0x4e, /* data sock: Request a block for resync or reply P_RS_DEALLOCATED, with dagtag dependency */
+	P_OV_DAGTAG_REQ       = 0x4f, /* data sock: Request a checksum for online verify, with dagtag dependency */
+	P_OV_DAGTAG_REPLY     = 0x50, /* data sock: Reply with a checksum for online verify, with dagtag dependency */
+
+	P_WRITE_ACK_IN_SYNC   = 0x51, /* meta sock: Application write ack setting bits in sync. */
+	P_RS_NEG_ACK          = 0x52, /* meta sock: Local disk is unusable writing resync reply. */
+	P_OV_RESULT_ID        = 0x53, /* meta sock: Online verify result with block ID. */
+	P_RS_DEALLOCATED_ID   = 0x54, /* data sock: Contains only zeros on sync source node. */
+
+	P_FLUSH_REQUESTS      = 0x55, /* data sock: Flush prior requests then send ack and/or forward */
+	P_FLUSH_FORWARD       = 0x56, /* meta sock: Send ack after sending P_OUT_OF_SYNC for prior P_PEER_ACK */
+	P_FLUSH_REQUESTS_ACK  = 0x57, /* data sock: Response to initiator of P_FLUSH_REQUESTS */
+	P_ENABLE_REPLICATION_NEXT = 0x58, /* data sock: whether to start replication on next resync start */
+	P_ENABLE_REPLICATION  = 0x59, /* data sock: enable or disable replication during resync */
 
 	P_MAY_IGNORE	      = 0x100, /* Flag to test if (cmd > P_MAY_IGNORE) ... */
-	P_MAX_OPT_CMD	      = 0x101,
 
 	/* special command ids for handshake */
 
@@ -86,9 +126,6 @@ enum drbd_packet {
 	P_CONNECTION_FEATURES = 0xfffe	/* FIXED for the next century! */
 };
 
-#ifndef __packed
-#define __packed __attribute__((packed))
-#endif
 
 /* This is the layout for a packet on the wire.
  * The byteorder is the network byte order.
@@ -101,24 +138,24 @@ enum drbd_packet {
  * regardless of 32 or 64 bit arch!
  */
 struct p_header80 {
-	u32	  magic;
-	u16	  command;
-	u16	  length;	/* bytes of data after this header */
+	uint32_t magic;
+	uint16_t command;
+	uint16_t length;	/* bytes of data after this header */
 } __packed;
 
 /* Header for big packets, Used for data packets exceeding 64kB */
 struct p_header95 {
-	u16	  magic;	/* use DRBD_MAGIC_BIG here */
-	u16	  command;
-	u32	  length;
+	uint16_t magic;	/* use DRBD_MAGIC_BIG here */
+	uint16_t command;
+	uint32_t length;
 } __packed;
 
 struct p_header100 {
-	u32	  magic;
-	u16	  volume;
-	u16	  command;
-	u32	  length;
-	u32	  pad;
+	uint32_t magic;
+	uint16_t volume;
+	uint16_t command;
+	uint32_t length;
+	uint32_t pad;
 } __packed;
 
 /* These defines must not be changed without changing the protocol version.
@@ -128,10 +165,10 @@ struct p_header100 {
 #define DP_HARDBARRIER	      1 /* no longer used */
 #define DP_RW_SYNC	      2 /* equals REQ_SYNC    */
 #define DP_MAY_SET_IN_SYNC    4
-#define DP_UNPLUG             8 /* not used anymore   */
+#define DP_UNPLUG             8 /* equals REQ_UNPLUG (compat) */
 #define DP_FUA               16 /* equals REQ_FUA     */
 #define DP_FLUSH             32 /* equals REQ_PREFLUSH   */
-#define DP_DISCARD           64 /* equals REQ_OP_DISCARD */
+#define DP_DISCARD           64 /* equals REQ_DISCARD */
 #define DP_SEND_RECEIVE_ACK 128 /* This is a proto B write request */
 #define DP_SEND_WRITE_ACK   256 /* This is a proto C write request */
 #define DP_WSAME            512 /* equiv. REQ_WRITE_SAME */
@@ -143,52 +180,103 @@ struct p_header100 {
  */
 
 struct p_data {
-	u64	    sector;    /* 64 bits sector number */
-	u64	    block_id;  /* to identify the request in protocol B&C */
-	u32	    seq_num;
-	u32	    dp_flags;
+	uint64_t sector;    /* 64 bits sector number */
+	uint64_t block_id;  /* to identify the request in protocol B&C */
+	uint32_t seq_num;
+	uint32_t dp_flags;
 } __packed;
 
 struct p_trim {
 	struct p_data p_data;
-	u32	    size;	/* == bio->bi_size */
+	uint32_t size;	/* == bio->bi_size */
 } __packed;
 
 struct p_wsame {
 	struct p_data p_data;
-	u32           size;     /* == bio->bi_size */
+	uint32_t size;     /* == bio->bi_size */
 } __packed;
 
 /*
- * commands which share a struct:
- *  p_block_ack:
- *   P_RECV_ACK (proto B), P_WRITE_ACK (proto C),
+ * struct p_block_ack shared by commands:
+ *   P_RECV_ACK (proto B)
+ *   P_WRITE_ACK (proto C),
+ *   P_WRITE_ACK_IN_SYNC,
  *   P_SUPERSEDED (proto C, two-primaries conflict detection)
- *  p_block_req:
- *   P_DATA_REQUEST, P_RS_DATA_REQUEST
+ *   P_RS_WRITE_ACK
+ *   P_NEG_ACK
+ *   P_NEG_DREPLY
+ *   P_NEG_RS_DREPLY
+ *   P_RS_NEG_ACK
+ *   P_OV_RESULT
+ *   P_RS_IS_IN_SYNC
+ *   P_RS_CANCEL
+ *   P_RS_DEALLOCATED_ID
+ *   P_RS_CANCEL_AHEAD
  */
 struct p_block_ack {
-	u64	    sector;
-	u64	    block_id;
-	u32	    blksize;
-	u32	    seq_num;
+	uint64_t sector;
+	uint64_t block_id;
+	uint32_t blksize;
+	uint32_t seq_num;
+} __packed;
+
+/* For P_OV_RESULT_ID. */
+struct p_ov_result {
+	uint64_t sector;
+	uint64_t block_id;
+	uint32_t blksize;
+	uint32_t seq_num;
+	uint32_t result;
+	uint32_t pad;
+} __packed;
+
+enum ov_result {
+	OV_RESULT_SKIP = 0,
+	OV_RESULT_IN_SYNC = 1,
+	OV_RESULT_OUT_OF_SYNC = 2,
+};
+
+struct p_block_req_common {
+	uint64_t sector;
+	uint64_t block_id;
+	uint32_t blksize;
 } __packed;
 
+/*
+ * struct p_block_req shared by commands:
+ *   P_DATA_REQUEST
+ *   P_RS_DATA_REQUEST
+ *   P_OV_REQUEST
+ *   P_OV_REPLY
+ *   P_CSUM_RS_REQUEST
+ *   P_RS_THIN_REQ
+ */
 struct p_block_req {
-	u64 sector;
-	u64 block_id;
-	u32 blksize;
-	u32 pad;	/* to multiple of 8 Byte */
+	/* Allow fields to be addressed directly or via req_common. */
+	union {
+		struct {
+			uint64_t sector;
+			uint64_t block_id;
+			uint32_t blksize;
+		} __packed;
+		struct p_block_req_common req_common;
+	};
+	uint32_t pad;	/* to multiple of 8 Byte */
 } __packed;
 
 /*
- * commands with their own struct for additional fields:
- *   P_CONNECTION_FEATURES
- *   P_BARRIER
- *   P_BARRIER_ACK
- *   P_SYNC_PARAM
- *   ReportParams
+ * struct p_rs_req shared by commands:
+ *   P_RS_DAGTAG_REQ
+ *   P_RS_CSUM_DAGTAG_REQ
+ *   P_RS_THIN_DAGTAG_REQ
+ *   P_OV_DAGTAG_REQ
+ *   P_OV_DAGTAG_REPLY
  */
+struct p_rs_req {
+	struct p_block_req_common req_common;
+	uint32_t dagtag_node_id;
+	uint64_t dagtag;
+} __packed;
 
 /* supports TRIM/DISCARD on the "wire" protocol */
 #define DRBD_FF_TRIM 1
@@ -243,54 +331,98 @@ struct p_block_req {
  */
 #define DRBD_FF_WZEROES 8
 
+/* Supports synchronization of application and resync IO using data generation
+ * tags (dagtags). See Documentation/application-resync-synchronization.rst for
+ * details.
+ */
+#define DRBD_FF_RESYNC_DAGTAG 16
+
+/* V2 of p_twopc_request has a 32 bit flags field, and the two node ID
+ * fields are reduced to 8 bits instead of 32 bits.
+ *
+ * The flag TWOPC_HAS_REACHABLE indicates that in the commit phase
+ * (P_TWOPC_COMMIT) the reachable_nodes mask is set.
+ *
+ * The old behavior sends the primary_nodes mask, mask, and val in
+ * phase 2 (P_TWOPC_COMMIT), where mask and val are the same values as
+ * in phase 1 (P_TWOPC_PREPARE).
+ */
+#define DRBD_FF_2PC_V2 32
+
+/* Starting with drbd-9.1.15, a node with a backing disk sends the new
+ * current-uuid also to diskless nodes when the initial resync is
+ * skipped.
+ *
+ * The peer needs to know about this detail to apply the necessary
+ * strictness regarding downgrading its view of the partner's disk
+ * state.
+ */
+#define DRBD_FF_RS_SKIP_UUID 64
+
+/* Support for resync_without_replication.
+ */
+#define DRBD_FF_RESYNC_WITHOUT_REPLICATION 128
+
+/* Support for bitmap block size != 4k. If you connect peers with
+ * different bitmap block sizes, the resync becomes more
+ * interesting, and we need to communicate the bitmap block size.
+ */
+#define DRBD_FF_BM_BLOCK_SHIFT 256
 
 struct p_connection_features {
-	u32 protocol_min;
-	u32 feature_flags;
-	u32 protocol_max;
+	uint32_t protocol_min;
+	uint32_t feature_flags;
+	uint32_t protocol_max;
+	uint32_t sender_node_id;
+	uint32_t receiver_node_id;
 
 	/* should be more than enough for future enhancements
 	 * for now, feature_flags and the reserved array shall be zero.
 	 */
 
-	u32 _pad;
-	u64 reserved[7];
+	uint32_t _pad;
+	uint64_t reserved[6];
 } __packed;
 
 struct p_barrier {
-	u32 barrier;	/* barrier number _handle_ only */
-	u32 pad;	/* to multiple of 8 Byte */
+	uint32_t barrier;	/* barrier number _handle_ only */
+	uint32_t pad;	/* to multiple of 8 Byte */
 } __packed;
 
 struct p_barrier_ack {
-	u32 barrier;
-	u32 set_size;
+	uint32_t barrier;
+	uint32_t set_size;
+} __packed;
+
+struct p_confirm_stable {
+	uint64_t oldest_block_id;
+	uint64_t youngest_block_id;
+	uint32_t set_size;
+	uint32_t pad; /* to multiple of 8 Byte */
 } __packed;
 
 struct p_rs_param {
-	u32 resync_rate;
+	uint32_t resync_rate;
 
-	      /* Since protocol version 88 and higher. */
+	/* Since protocol version 88 and higher. */
 	char verify_alg[];
 } __packed;
 
 struct p_rs_param_89 {
-	u32 resync_rate;
+	uint32_t resync_rate;
 	/* protocol version 89: */
 	char verify_alg[SHARED_SECRET_MAX];
 	char csums_alg[SHARED_SECRET_MAX];
 } __packed;
 
 struct p_rs_param_95 {
-	u32 resync_rate;
-	struct_group(algs,
-		char verify_alg[SHARED_SECRET_MAX];
-		char csums_alg[SHARED_SECRET_MAX];
-	);
-	u32 c_plan_ahead;
-	u32 c_delay_target;
-	u32 c_fill_target;
-	u32 c_max_rate;
+	uint32_t resync_rate;
+	char verify_alg[SHARED_SECRET_MAX];
+	char csums_alg[SHARED_SECRET_MAX];
+	uint32_t c_plan_ahead;
+	uint32_t c_delay_target;
+	uint32_t c_fill_target;
+	uint32_t c_max_rate;
 } __packed;
 
 enum drbd_conn_flags {
@@ -299,35 +431,81 @@ enum drbd_conn_flags {
 };
 
 struct p_protocol {
-	u32 protocol;
-	u32 after_sb_0p;
-	u32 after_sb_1p;
-	u32 after_sb_2p;
-	u32 conn_flags;
-	u32 two_primaries;
+	uint32_t protocol;
+	uint32_t after_sb_0p;
+	uint32_t after_sb_1p;
+	uint32_t after_sb_2p;
+	uint32_t conn_flags;
+	uint32_t two_primaries;
 
 	/* Since protocol version 87 and higher. */
 	char integrity_alg[];
 
 } __packed;
 
+#define UUID_FLAG_DISCARD_MY_DATA     ((u64)1 << 0)
+#define UUID_FLAG_CRASHED_PRIMARY     ((u64)1 << 1)
+#define UUID_FLAG_INCONSISTENT        ((u64)1 << 2)
+#define UUID_FLAG_SKIP_INITIAL_SYNC   ((u64)1 << 3)
+
+#define UUID_FLAG_MASK_COMPAT_84 \
+	(UUID_FLAG_DISCARD_MY_DATA|\
+	 UUID_FLAG_CRASHED_PRIMARY|\
+	 UUID_FLAG_INCONSISTENT|\
+	 UUID_FLAG_SKIP_INITIAL_SYNC)
+
+#define UUID_FLAG_NEW_DATAGEN         ((u64)1 << 4)
+#define UUID_FLAG_STABLE              ((u64)1 << 5)
+#define UUID_FLAG_GOT_STABLE          ((u64)1 << 6) /* send UUIDs */
+#define UUID_FLAG_RESYNC              ((u64)1 << 7) /* compare UUIDs and eventually start resync */
+#define UUID_FLAG_RECONNECT           ((u64)1 << 8)
+#define UUID_FLAG_DISKLESS_PRIMARY    ((u64)1 << 9) /* Use with UUID_FLAG_RESYNC if a diskless primary is the reason */
+#define UUID_FLAG_PRIMARY_LOST_QUORUM ((u64)1 << 10)
+#define UUID_FLAG_SYNC_TARGET         ((u64)1 << 11) /* currently L_SYNC_TARGET to some peer */
+#define UUID_FLAG_HAS_UNALLOC         ((u64)1 << 12) /* highest byte contains index of not allocated bitmap uuid */
+
+#define UUID_FLAG_UNALLOC_SHIFT       56
+#define UUID_FLAG_UNALLOC_MASK        ((u64)0xff << UUID_FLAG_UNALLOC_SHIFT)
+
 struct p_uuids {
-	u64 uuid[UI_EXTENDED_SIZE];
+	uint64_t current_uuid;
+	uint64_t bitmap_uuid;
+	uint64_t history_uuids[HISTORY_UUIDS_V08];
+	uint64_t dirty_bits;
+	uint64_t uuid_flags;
+} __packed;
+
+struct p_uuids110 {
+	uint64_t current_uuid;
+	uint64_t dirty_bits;
+	uint64_t uuid_flags;
+	uint64_t node_mask; /* weak_nodes when UUID_FLAG_NEW_DATAGEN is set;
+			       authoritative nodes when UUID_FLAG_STABLE not set */
+
+	uint64_t bitmap_uuids_mask; /* non-zero bitmap UUIDs for these nodes */
+	uint64_t other_uuids[]; /* the first hweight(bitmap_uuids_mask) slots carry bitmap uuids.
+				    The node with the lowest node_id first.
+				    The remaining slots carry history uuids */
 } __packed;
 
-struct p_rs_uuid {
-	u64	    uuid;
+struct p_current_uuid {
+	uint64_t uuid;
+	uint64_t weak_nodes;
+} __packed;
+
+struct p_uuid {
+	uint64_t uuid;
 } __packed;
 
 /* optional queue_limits if (agreed_features & DRBD_FF_WSAME)
  * see also struct queue_limits, as of late 2015 */
 struct o_qlim {
 	/* we don't need it yet, but we may as well communicate it now */
-	u32 physical_block_size;
+	uint32_t physical_block_size;
 
 	/* so the original in struct queue_limits is unsigned short,
 	 * but I'd have to put in padding anyways. */
-	u32 logical_block_size;
+	uint32_t logical_block_size;
 
 	/* One incoming bio becomes one DRBD request,
 	 * which may be translated to several bio on the receiving side.
@@ -335,9 +513,9 @@ struct o_qlim {
 	 */
 
 	/* various IO hints may be useful with "diskless client" setups */
-	u32 alignment_offset;
-	u32 io_min;
-	u32 io_opt;
+	uint32_t alignment_offset;
+	uint32_t io_min;
+	uint32_t io_opt;
 
 	/* We may need to communicate integrity stuff at some point,
 	 * but let's not get ahead of ourselves. */
@@ -347,51 +525,119 @@ struct o_qlim {
 	 * more specifics.  If the backend cannot do discards, the DRBD peer
 	 * may fall back to blkdev_issue_zeroout().
 	 */
-	u8 discard_enabled;
-	u8 discard_zeroes_data;
-	u8 write_same_capable;
-	u8 _pad;
+	uint8_t discard_enabled;
+	uint8_t discard_zeroes_data;
+	uint8_t write_same_capable;
+
+	/* Bitmap block shift relative to 4k. If peers have different bitmap
+	 * granularity, any resync related request needs to be aligned to the
+	 * larger granularity: we can not clear partial bits.
+	 * 0 to 8 to represent 4k to 1M.
+	 * If DRBD_FF_BM_BLOCK_SHIFT is agreed on.
+	 */
+	uint8_t bm_block_shift_minus_12;
 } __packed;
 
 struct p_sizes {
-	u64	    d_size;  /* size of disk */
-	u64	    u_size;  /* user requested size */
-	u64	    c_size;  /* current exported size */
-	u32	    max_bio_size;  /* Maximal size of a BIO */
-	u16	    queue_order_type;  /* not yet implemented in DRBD*/
-	u16	    dds_flags; /* use enum dds_flags here. */
+	uint64_t d_size;  /* size of disk */
+	uint64_t u_size;  /* user requested size */
+	uint64_t c_size;  /* current exported size */
+	uint32_t max_bio_size;  /* Maximal size of a BIO */
+	uint16_t queue_order_type;  /* not yet implemented in DRBD */
+	uint16_t dds_flags; /* use enum dds_flags here. */
 
 	/* optional queue_limits if (agreed_features & DRBD_FF_WSAME) */
 	struct o_qlim qlim[];
 } __packed;
 
 struct p_state {
-	u32	    state;
+	uint32_t state;
 } __packed;
 
 struct p_req_state {
-	u32	    mask;
-	u32	    val;
+	uint32_t mask;
+	uint32_t val;
 } __packed;
 
 struct p_req_state_reply {
-	u32	    retcode;
+	uint32_t retcode;
+} __packed;
+
+struct p_twopc_request {
+	uint32_t tid;  /* transaction identifier */
+	union {
+		struct { /* when DRBD_FF_2PC_V2 is set */
+			uint32_t flags;
+			uint16_t _pad;
+			int8_t  s8_initiator_node_id;  /* initiator of the transaction */
+			int8_t  s8_target_node_id;  /* target of the transaction (or -1) */
+		};
+		struct { /* original packet version */
+			uint32_t u32_initiator_node_id;  /* initiator of the transaction */
+			uint32_t u32_target_node_id;  /* target of the transaction (or -1) */
+		};
+	};
+	uint64_t nodes_to_reach;
+	union {
+		union { /* TWOPC_STATE_CHANGE */
+			struct {    /* P_TWOPC_PREPARE */
+				uint64_t _compat_pad;
+				uint32_t mask;
+				uint32_t val;
+			};
+			struct { /* P_TWOPC_COMMIT */
+				uint64_t primary_nodes;
+			uint64_t reachable_nodes; /* when TWOPC_HAS_REACHABLE flag is set */
+			};
+		};
+		union {	 /* TWOPC_RESIZE */
+			struct {    /* P_TWOPC_PREP_RSZ */
+				uint64_t user_size;
+				uint16_t dds_flags;
+			};
+			struct {    /* P_TWOPC_COMMIT	*/
+				uint64_t diskful_primary_nodes;
+				uint64_t exposed_size;
+			};
+		};
+	};
+} __packed;
+
+#define TWOPC_HAS_FLAGS     0x80000000 /* For packet dissectors */
+#define TWOPC_HAS_REACHABLE 0x40000000 /* The reachable_nodes field is valid */
+#define TWOPC_PRI_INCAPABLE 0x20000000 /* The primary has no access to data */
+
+struct p_twopc_reply {
+	uint32_t tid;  /* transaction identifier */
+	uint32_t initiator_node_id;  /* initiator of the transaction */
+	uint64_t reachable_nodes;
+
+	union {
+		struct { /* TWOPC_STATE_CHANGE */
+			uint64_t primary_nodes;
+			uint64_t weak_nodes;
+		};
+		struct { /* TWOPC_RESIZE */
+			uint64_t diskful_primary_nodes;
+			uint64_t max_possible_size;
+		};
+	};
 } __packed;
 
 struct p_drbd06_param {
-	u64	  size;
-	u32	  state;
-	u32	  blksize;
-	u32	  protocol;
-	u32	  version;
-	u32	  gen_cnt[5];
-	u32	  bit_map_gen[5];
+	uint64_t size;
+	uint32_t state;
+	uint32_t blksize;
+	uint32_t protocol;
+	uint32_t version;
+	uint32_t gen_cnt[5];
+	uint32_t bit_map_gen[5];
 } __packed;
 
 struct p_block_desc {
-	u64 sector;
-	u32 blksize;
-	u32 pad;	/* to multiple of 8 Byte */
+	uint64_t sector;
+	uint32_t blksize;
+	uint32_t pad;	/* to multiple of 8 Byte */
 } __packed;
 
 /* Valid values for the encoding field.
@@ -409,14 +655,55 @@ struct p_compressed_bm {
 	 * ((encoding >> 4) & 0x07): pad_bits, number of trailing zero bits
 	 * used to pad up to head.length bytes
 	 */
-	u8 encoding;
+	uint8_t encoding;
 
-	u8 code[];
+	uint8_t code[];
 } __packed;
 
 struct p_delay_probe93 {
-	u32     seq_num; /* sequence number to match the two probe packets */
-	u32     offset;  /* usecs the probe got sent after the reference time point */
+	uint32_t seq_num; /* sequence number to match the two probe packets */
+	uint32_t offset;  /* usecs the probe got sent after the reference time point */
+} __packed;
+
+struct p_dagtag {
+	uint64_t dagtag;
+} __packed;
+
+struct p_peer_ack {
+	uint64_t mask;
+	uint64_t dagtag;
+} __packed;
+
+struct p_peer_block_desc {
+	uint64_t sector;
+	uint64_t mask;
+	uint32_t size;
+	uint32_t pad;	/* to multiple of 8 Byte */
+} __packed;
+
+struct p_peer_dagtag {
+	uint64_t dagtag;
+	uint32_t node_id;
+} __packed;
+
+struct p_flush_requests {
+	uint64_t flush_sequence;
+} __packed;
+
+struct p_flush_forward {
+	uint64_t flush_sequence;
+	uint32_t initiator_node_id;
+} __packed;
+
+struct p_flush_ack {
+	uint64_t flush_sequence;
+	uint32_t primary_node_id;
+} __packed;
+
+struct p_enable_replication {
+	uint8_t enable;
+	uint8_t _pad1;
+	uint16_t _pad2;
 } __packed;
 
 /*
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 03/20] drbd: introduce DRBD 9 on-disk metadata format
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 01/20] drbd: mark as BROKEN during " Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 02/20] drbd: extend wire protocol definitions for DRBD 9 Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 04/20] drbd: add transport layer abstraction Christoph Böhmwalder
                   ` (16 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder, Joel Colledge

Add a new header that captures the DRBD 9 on-disk metadata layout,
enabling state tracking for multiple peers.
It includes the per-device superblock and per-peer slot structures
needed to track bitmap UUIDs and sync state for each peer.
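
Since every field in this layout is stored big-endian, readers convert
on access. A rough sketch of what a reader might check (the function
name is invented and MDF_HAVE_MEMBERS_MASK is defined elsewhere; the
real validation is more thorough):

  static int sanity_check_md_9(const struct meta_data_on_disk_9 *disk_md)
  {
          u32 max_peers = be32_to_cpu(disk_md->bm_max_peers);

          if (max_peers == 0 || max_peers > DRBD_PEERS_MAX)
                  return -EINVAL;

          /* The members mask is only valid when the flag says so. */
          if (be32_to_cpu(disk_md->flags) & MDF_HAVE_MEMBERS_MASK) {
                  u64 members = be64_to_cpu(disk_md->members);

                  /* ... compare members against the configured peers ... */
          }
          return 0;
  }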

Co-developed-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Co-developed-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Co-developed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Co-developed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/drbd_meta_data.h | 126 ++++++++++++++++++++++++++++
 1 file changed, 126 insertions(+)
 create mode 100644 drivers/block/drbd/drbd_meta_data.h

diff --git a/drivers/block/drbd/drbd_meta_data.h b/drivers/block/drbd/drbd_meta_data.h
new file mode 100644
index 000000000000..af77e8d53f02
--- /dev/null
+++ b/drivers/block/drbd/drbd_meta_data.h
@@ -0,0 +1,126 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef DRBD_META_DATA_H
+#define DRBD_META_DATA_H
+
+/* How did I come up with this magic?
+ * base64 decode "actlog==" ;) */
+#define DRBD_AL_MAGIC 0x69cb65a2
+
+#define BM_BLOCK_SHIFT_4k	12			 /* 4k per bit */
+#define BM_BLOCK_SHIFT_MIN	BM_BLOCK_SHIFT_4k
+#define BM_BLOCK_SHIFT_MAX	20
+#define BM_BLOCK_SIZE_4k	4096
+#define BM_BLOCK_SIZE_MIN	(1<<BM_BLOCK_SHIFT_MIN)
+#define BM_BLOCK_SIZE_MAX	(1<<BM_BLOCK_SHIFT_MAX)
+
+struct peer_dev_md_on_disk_9 {
+	__be64 bitmap_uuid;
+	__be64 bitmap_dagtag;
+	__be32 flags;
+	__be32 bitmap_index;
+	__be32 reserved_u32[2];
+} __packed;
+
+struct meta_data_on_disk_9 {
+	__be64 effective_size;    /* last agreed size */
+	__be64 current_uuid;
+	__be64 members;		  /* only if MDF_HAVE_MEMBERS_MASK is in the flags */
+	__be64 reserved_u64[3];   /* to have the magic at the same position as in v07 and v08 */
+	__be64 device_uuid;
+	__be32 flags;             /* MDF */
+	__be32 magic;
+	__be32 md_size_sect;
+	__be32 al_offset;         /* offset to this block */
+	__be32 al_nr_extents;     /* important for restoring the AL */
+	__be32 bm_offset;         /* offset to the bitmap, from here */
+	__be32 bm_bytes_per_bit;  /* BM_BLOCK_SIZE */
+	__be32 la_peer_max_bio_size;   /* last peer max_bio_size */
+	__be32 bm_max_peers;
+	__be32 node_id;
+
+	/* see al_tr_number_to_on_disk_sector() */
+	__be32 al_stripes;
+	__be32 al_stripe_size_4k;
+
+	__be32 reserved_u32[2];
+
+	struct peer_dev_md_on_disk_9 peers[DRBD_PEERS_MAX];
+	__be64 history_uuids[HISTORY_UUIDS];
+
+	unsigned char padding_start[0];
+	unsigned char padding_end[0] __aligned(4096);
+} __packed;
+
+/* Attention, these two are defined in drbd_int.h as well! */
+#define AL_UPDATES_PER_TRANSACTION 64
+#define AL_CONTEXT_PER_TRANSACTION 919
+
+enum al_transaction_types {
+	AL_TR_UPDATE = 0,
+	AL_TR_INITIALIZED = 0xffff
+};
+/* all fields on disc in big endian */
+struct __packed al_transaction_on_disk {
+	/* don't we all like magic */
+	__be32	magic;
+
+	/* to identify the most recent transaction block
+	 * in the on disk ring buffer */
+	__be32	tr_number;
+
+	/* checksum on the full 4k block, with this field set to 0. */
+	__be32	crc32c;
+
+	/* type of transaction, special transaction types like:
+	 * purge-all, set-all-idle, set-all-active, ... to-be-defined
+	 * see also enum al_transaction_types */
+	__be16	transaction_type;
+
+	/* we currently allow only a few thousand extents,
+	 * so 16bit will be enough for the slot number. */
+
+	/* how many updates in this transaction */
+	__be16	n_updates;
+
+	/* maximum slot number, "al-extents" in drbd.conf speak.
+	 * Having this in each transaction should make reconfiguration
+	 * of that parameter easier. */
+	__be16	context_size;
+
+	/* slot number the context starts with */
+	__be16	context_start_slot_nr;
+
+	/* Some reserved bytes.  Expected usage is a 64bit counter of
+	 * sectors-written since device creation, and other data generation tag
+	 * supporting usage */
+	__be32	__reserved[4];
+
+	/* --- 36 bytes used --- */
+
+	/* Reserve space for up to AL_UPDATES_PER_TRANSACTION changes
+	 * in one transaction, then use the remaining bytes in the 4k block for
+	 * context information.  A "flexible" number of updates per transaction
+	 * does not help, as we have to account for the case when all update
+	 * slots are used anyway, so it would only complicate code without
+	 * additional benefit.
+	 */
+	__be16	update_slot_nr[AL_UPDATES_PER_TRANSACTION];
+
+	/* but the extent number is 32bit, which at an extent size of 4 MiB
+	 * allows covering device sizes of up to 2**54 bytes (16 PiB) */
+	__be32	update_extent_nr[AL_UPDATES_PER_TRANSACTION];
+
+	/* --- 420 bytes used (36 + 64*6) --- */
+
+	/* 4096 - 420 = 3676 = 919 * 4 */
+	__be32	context[AL_CONTEXT_PER_TRANSACTION];
+};
+
+#define DRBD_AL_PMEM_MAGIC 0x6aa667a6 /* "al==pmem" */
+
+struct __packed al_on_pmem {
+	__be32 magic;
+	__be32 slots[];
+};
+
+#endif
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 04/20] drbd: add transport layer abstraction
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
                   ` (2 preceding siblings ...)
  2026-03-27 22:38 ` [PATCH 03/20] drbd: introduce DRBD 9 on-disk metadata format Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 05/20] drbd: add TCP transport implementation Christoph Böhmwalder
                   ` (15 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder, Joel Colledge

DRBD 9 decouples all network I/O from the core driver by introducing a
transport abstraction layer. The core driver interacts with the network
only through a well-defined ops table, and concrete implementations
live in separate kernel modules that register themselves at load time.

The abstraction models connections as a set of paths, each representing
a local/remote address pair. A shared listener mechanism allows multiple
paths on the same resource to reuse a single listening socket.
The core exports callbacks which the transports can call back into,
keeping protocol logic in the core and wire details in the
transports.

This commit adds the header defining the interface and some supporting
infrastructure. Actual transport implementations follow later in the
series.
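
For a sense of what a transport module looks like against this API,
here is a minimal registration sketch (all "xyz" names are
placeholders; drbd_transport_template.c, added by this patch, carries
the full skeleton):

  static struct drbd_transport_class xyz_transport_class = {
          .name = "xyz",
          .module = THIS_MODULE,
          .listener_instance_size = sizeof(struct drbd_xyz_listener),
          .ops = {
                  .init = xyz_init,
                  .free = xyz_free,
                  /* ... the remaining drbd_transport_ops ... */
          },
  };

  static int __init xyz_initialize(void)
  {
          /* The API version and the size of struct drbd_transport are
           * checked, so a transport built against a stale header
           * refuses to load. */
          return drbd_register_transport_class(&xyz_transport_class,
                                               DRBD_TRANSPORT_API_VERSION,
                                               sizeof(struct drbd_transport));
  }

  static void __exit xyz_cleanup(void)
  {
          drbd_unregister_transport_class(&xyz_transport_class);
  }

  module_init(xyz_initialize);
  module_exit(xyz_cleanup);
  /* lets drbd_get_transport_class() demand-load the module by name */
  MODULE_ALIAS("drbd_transport_xyz");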

Co-developed-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Co-developed-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Co-developed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Co-developed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/Makefile                  |   1 +
 drivers/block/drbd/drbd_transport.c          | 379 ++++++++++++++++
 drivers/block/drbd/drbd_transport.h          | 443 +++++++++++++++++++
 drivers/block/drbd/drbd_transport_template.c | 160 +++++++
 4 files changed, 983 insertions(+)
 create mode 100644 drivers/block/drbd/drbd_transport.c
 create mode 100644 drivers/block/drbd/drbd_transport.h
 create mode 100644 drivers/block/drbd/drbd_transport_template.c

diff --git a/drivers/block/drbd/Makefile b/drivers/block/drbd/Makefile
index 67a8b352a1d5..4929bd423472 100644
--- a/drivers/block/drbd/Makefile
+++ b/drivers/block/drbd/Makefile
@@ -4,6 +4,7 @@ drbd-y += drbd_worker.o drbd_receiver.o drbd_req.o drbd_actlog.o
 drbd-y += drbd_main.o drbd_strings.o drbd_nl.o
 drbd-y += drbd_interval.o drbd_state.o
 drbd-y += drbd_nla.o
+drbd-y += drbd_transport.o
 drbd-$(CONFIG_DEBUG_FS) += drbd_debugfs.o
 
 obj-$(CONFIG_BLK_DEV_DRBD)     += drbd.o
diff --git a/drivers/block/drbd/drbd_transport.c b/drivers/block/drbd/drbd_transport.c
new file mode 100644
index 000000000000..7c6128cbb8bc
--- /dev/null
+++ b/drivers/block/drbd/drbd_transport.c
@@ -0,0 +1,379 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#define pr_fmt(fmt)	KBUILD_MODNAME ": " fmt
+
+#include <linux/spinlock.h>
+#include <linux/module.h>
+#include <net/ipv6.h>
+#include "drbd_transport.h"
+#include "drbd_int.h"
+
+static LIST_HEAD(transport_classes);
+static DECLARE_RWSEM(transport_classes_lock);
+
+static struct drbd_transport_class *__find_transport_class(const char *transport_name)
+{
+	struct drbd_transport_class *transport_class;
+
+	list_for_each_entry(transport_class, &transport_classes, list)
+		if (!strcmp(transport_class->name, transport_name))
+			return transport_class;
+
+	return NULL;
+}
+
+int drbd_register_transport_class(struct drbd_transport_class *transport_class, int version,
+				  int drbd_transport_size)
+{
+	int rv = 0;
+	if (version != DRBD_TRANSPORT_API_VERSION) {
+		pr_err("DRBD_TRANSPORT_API_VERSION not compatible\n");
+		return -EINVAL;
+	}
+
+	if (drbd_transport_size != sizeof(struct drbd_transport)) {
+		pr_err("sizeof(drbd_transport) not compatible\n");
+		return -EINVAL;
+	}
+
+	down_write(&transport_classes_lock);
+	if (__find_transport_class(transport_class->name)) {
+		pr_err("transport class '%s' already registered\n", transport_class->name);
+		rv = -EEXIST;
+	} else {
+		list_add_tail(&transport_class->list, &transport_classes);
+		pr_info("registered transport class '%s' (version:%s)\n",
+			transport_class->name,
+			transport_class->module->version ?: "N/A");
+	}
+	up_write(&transport_classes_lock);
+	return rv;
+}
+
+void drbd_unregister_transport_class(struct drbd_transport_class *transport_class)
+{
+	down_write(&transport_classes_lock);
+	if (!__find_transport_class(transport_class->name)) {
+		pr_crit("unregistering unknown transport class '%s'\n",
+			transport_class->name);
+		BUG();
+	}
+	list_del_init(&transport_class->list);
+	pr_info("unregistered transport class '%s'\n", transport_class->name);
+	up_write(&transport_classes_lock);
+}
+
+static struct drbd_transport_class *get_transport_class(const char *name)
+{
+	struct drbd_transport_class *tc;
+
+	down_read(&transport_classes_lock);
+	tc = __find_transport_class(name);
+	if (tc && !try_module_get(tc->module))
+		tc = NULL;
+	up_read(&transport_classes_lock);
+	return tc;
+}
+
+struct drbd_transport_class *drbd_get_transport_class(const char *name)
+{
+	struct drbd_transport_class *tc = get_transport_class(name);
+
+	if (!tc) {
+		request_module("drbd_transport_%s", name);
+		tc = get_transport_class(name);
+	}
+
+	return tc;
+}
+
+void drbd_put_transport_class(struct drbd_transport_class *tc)
+{
+	/* convenient in the error cleanup path */
+	if (!tc)
+		return;
+	down_read(&transport_classes_lock);
+	module_put(tc->module);
+	up_read(&transport_classes_lock);
+}
+
+void drbd_print_transports_loaded(struct seq_file *seq)
+{
+	struct drbd_transport_class *tc;
+
+	down_read(&transport_classes_lock);
+
+	seq_puts(seq, "Transports (api:" __stringify(DRBD_TRANSPORT_API_VERSION) "):");
+	list_for_each_entry(tc, &transport_classes, list) {
+		seq_printf(seq, " %s (%s)", tc->name,
+				tc->module->version ? tc->module->version : "NONE");
+	}
+	seq_putc(seq, '\n');
+
+	up_read(&transport_classes_lock);
+}
+
+static bool addr_equal(const struct sockaddr_storage *addr1, const struct sockaddr_storage *addr2)
+{
+	if (addr1->ss_family != addr2->ss_family)
+		return false;
+
+	if (addr1->ss_family == AF_INET6) {
+		const struct sockaddr_in6 *v6a1 = (const struct sockaddr_in6 *)addr1;
+		const struct sockaddr_in6 *v6a2 = (const struct sockaddr_in6 *)addr2;
+
+		if (!ipv6_addr_equal(&v6a1->sin6_addr, &v6a2->sin6_addr))
+			return false;
+		else if (ipv6_addr_type(&v6a1->sin6_addr) & IPV6_ADDR_LINKLOCAL)
+			return v6a1->sin6_scope_id == v6a2->sin6_scope_id;
+		return true;
+	} else /* AF_INET, AF_SSOCKS, AF_SDP */ {
+		const struct sockaddr_in *v4a1 = (const struct sockaddr_in *)addr1;
+		const struct sockaddr_in *v4a2 = (const struct sockaddr_in *)addr2;
+
+		return v4a1->sin_addr.s_addr == v4a2->sin_addr.s_addr;
+	}
+}
+
+static bool addr_and_port_equal(const struct sockaddr_storage *addr1, const struct sockaddr_storage *addr2)
+{
+	if (!addr_equal(addr1, addr2))
+		return false;
+
+	if (addr1->ss_family == AF_INET6) {
+		const struct sockaddr_in6 *v6a1 = (const struct sockaddr_in6 *)addr1;
+		const struct sockaddr_in6 *v6a2 = (const struct sockaddr_in6 *)addr2;
+
+		return v6a1->sin6_port == v6a2->sin6_port;
+	} else /* AF_INET, AF_SSOCKS, AF_SDP */ {
+		const struct sockaddr_in *v4a1 = (const struct sockaddr_in *)addr1;
+		const struct sockaddr_in *v4a2 = (const struct sockaddr_in *)addr2;
+
+		return v4a1->sin_port == v4a2->sin_port;
+	}
+
+	return false;
+}
+
+static struct drbd_listener *find_listener(struct drbd_connection *connection,
+					   const struct sockaddr_storage *addr)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_listener *listener;
+
+	list_for_each_entry(listener, &resource->listeners, list) {
+		if (addr_and_port_equal(&listener->listen_addr, addr)) {
+			if (kref_get_unless_zero(&listener->kref))
+				return listener;
+		}
+	}
+	return NULL;
+}
+
+int drbd_get_listener(struct drbd_path *path)
+{
+	struct drbd_transport *transport = path->transport;
+	struct drbd_connection *connection =
+		container_of(transport, struct drbd_connection, transport);
+	struct sockaddr *addr = (struct sockaddr *)&path->my_addr;
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_transport_class *tc = transport->class;
+	struct drbd_listener *listener;
+	bool needs_init = false;
+	int err;
+
+	spin_lock_bh(&resource->listeners_lock);
+	listener = find_listener(connection, (struct sockaddr_storage *)addr);
+	if (!listener) {
+		listener = kzalloc(tc->listener_instance_size, GFP_ATOMIC);
+		if (!listener) {
+			spin_unlock_bh(&resource->listeners_lock);
+			return -ENOMEM;
+		}
+		kref_init(&listener->kref);
+		INIT_LIST_HEAD(&listener->waiters);
+		listener->resource = resource;
+		listener->pending_accepts = 0;
+		spin_lock_init(&listener->waiters_lock);
+		init_completion(&listener->ready);
+		listener->listen_addr = *(struct sockaddr_storage *)addr;
+		listener->transport_class = NULL;
+
+		list_add(&listener->list, &resource->listeners);
+		needs_init = true;
+	}
+	spin_unlock_bh(&resource->listeners_lock);
+
+	if (needs_init) {
+		if (try_module_get(tc->module)) {
+			listener->transport_class = tc;
+			err = tc->ops.init_listener(transport, addr, path->net, listener);
+		} else {
+			err = -ENODEV;
+		}
+		listener->err = err;
+		complete_all(&listener->ready);
+	} else {
+		wait_for_completion(&listener->ready);
+		err = listener->err;
+	}
+
+	if (err) {
+		kref_put(&listener->kref, drbd_listener_destroy);
+		return err;
+	}
+
+	spin_lock_bh(&listener->waiters_lock);
+	kref_get(&path->kref);
+	list_add(&path->listener_link, &listener->waiters);
+	path->listener = listener;
+	spin_unlock_bh(&listener->waiters_lock);
+	/* After exposing the listener on a path, drbd_put_listener() can destroy it. */
+
+	return 0;
+}
+
+void drbd_listener_destroy(struct kref *kref)
+{
+	struct drbd_listener *listener = container_of(kref, struct drbd_listener, kref);
+	struct drbd_transport_class *tc = listener->transport_class;
+	struct drbd_resource *resource = listener->resource;
+
+	spin_lock_bh(&resource->listeners_lock);
+	list_del(&listener->list);
+	spin_unlock_bh(&resource->listeners_lock);
+
+	if (tc) {
+		tc->ops.release_listener(listener);
+		module_put(tc->module);
+	}
+	kfree(listener);
+}
+
+void drbd_put_listener(struct drbd_path *path)
+{
+	struct drbd_listener *listener;
+
+	listener = xchg(&path->listener, NULL);
+	if (!listener)
+		return;
+
+	spin_lock_bh(&listener->waiters_lock);
+	list_del(&path->listener_link);
+	kref_put(&path->kref, drbd_destroy_path);
+	spin_unlock_bh(&listener->waiters_lock);
+	kref_put(&listener->kref, drbd_listener_destroy);
+}
+
+struct drbd_path *drbd_find_path_by_addr(struct drbd_listener *listener, struct sockaddr_storage *addr)
+{
+	struct drbd_path *path;
+
+	list_for_each_entry(path, &listener->waiters, listener_link) {
+		if (addr_equal(&path->peer_addr, addr))
+			return path;
+	}
+
+	return NULL;
+}
+
+/**
+ * drbd_stream_send_timed_out() - Tells transport if the connection should stay alive
+ * @transport:	DRBD transport to operate on.
+ * @stream:     DATA_STREAM or CONTROL_STREAM
+ *
+ * When it returns true, the transport should return -EAGAIN from its
+ * send function. When it returns false, the transport should keep
+ * trying to get the packet through.
+ */
+bool drbd_stream_send_timed_out(struct drbd_transport *transport, enum drbd_stream stream)
+{
+	struct drbd_connection *connection =
+		container_of(transport, struct drbd_connection, transport);
+	bool drop_it;
+
+	drop_it = stream == CONTROL_STREAM || connection->cstate[NOW] < C_CONNECTED;
+
+	if (drop_it)
+		return true;
+
+	drop_it = !--connection->transport.ko_count;
+	if (!drop_it) {
+		drbd_err(connection, "[%s/%d] sending time expired, ko = %u\n",
+			 current->comm, current->pid, connection->transport.ko_count);
+		schedule_work(&connection->send_ping_work);
+	}
+
+	return drop_it;
+}
+
+bool drbd_should_abort_listening(struct drbd_transport *transport)
+{
+	struct drbd_connection *connection =
+		container_of(transport, struct drbd_connection, transport);
+	bool abort = false;
+
+	if (connection->cstate[NOW] <= C_DISCONNECTING)
+		abort = true;
+	if (signal_pending(current)) {
+		flush_signals(current);
+		smp_rmb();
+		if (get_t_state(&connection->receiver) == EXITING)
+			abort = true;
+	}
+
+	return abort;
+}
+
+/* Called by a transport if a path was established / disconnected */
+void drbd_path_event(struct drbd_transport *transport, struct drbd_path *path)
+{
+	struct drbd_connection *connection =
+		container_of(transport, struct drbd_connection, transport);
+
+	notify_path(connection, path, NOTIFY_CHANGE);
+}
+
+struct drbd_path *__drbd_next_path_ref(struct drbd_path *drbd_path,
+					      struct drbd_transport *transport)
+{
+	rcu_read_lock();
+	if (!drbd_path) {
+		drbd_path = list_first_or_null_rcu(&transport->paths, struct drbd_path, list);
+	} else {
+		struct list_head *pos;
+		bool in_list;
+
+		pos = list_next_rcu(&drbd_path->list);
+		/* Ensure list head is read before flag. */
+		smp_rmb();
+		in_list = !test_bit(TR_UNREGISTERED, &drbd_path->flags);
+		kref_put(&drbd_path->kref, drbd_destroy_path);
+
+		if (pos == &transport->paths) {
+			drbd_path = NULL;
+		} else if (in_list) {
+			drbd_path = list_entry_rcu(pos, struct drbd_path, list);
+		} else {
+			/* No longer on the list, element might be freed, restart from the start */
+			drbd_path = list_first_or_null_rcu(&transport->paths,
+					struct drbd_path, list);
+		}
+	}
+	if (drbd_path)
+		kref_get(&drbd_path->kref);
+	rcu_read_unlock();
+
+	return drbd_path;
+}
+
+/* Network transport abstractions */
+EXPORT_SYMBOL_GPL(drbd_register_transport_class);
+EXPORT_SYMBOL_GPL(drbd_unregister_transport_class);
+EXPORT_SYMBOL_GPL(drbd_get_listener);
+EXPORT_SYMBOL_GPL(drbd_put_listener);
+EXPORT_SYMBOL_GPL(drbd_find_path_by_addr);
+EXPORT_SYMBOL_GPL(drbd_stream_send_timed_out);
+EXPORT_SYMBOL_GPL(drbd_should_abort_listening);
+EXPORT_SYMBOL_GPL(drbd_path_event);
+EXPORT_SYMBOL_GPL(drbd_listener_destroy);
+EXPORT_SYMBOL_GPL(__drbd_next_path_ref);
diff --git a/drivers/block/drbd/drbd_transport.h b/drivers/block/drbd/drbd_transport.h
new file mode 100644
index 000000000000..ff393e8d12dc
--- /dev/null
+++ b/drivers/block/drbd/drbd_transport.h
@@ -0,0 +1,443 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef DRBD_TRANSPORT_H
+#define DRBD_TRANSPORT_H
+
+#include <linux/kref.h>
+#include <linux/list.h>
+#include <linux/wait.h>
+#include <linux/socket.h>
+
+/* Whenever you touch this file in a non-trivial way, increase
+   DRBD_TRANSPORT_API_VERSION,
+   so that a transport compiled against an older version of this
+   header will no longer load in a module that assumes a newer
+   version. */
+#define DRBD_TRANSPORT_API_VERSION 21
+
+/* MSG_DONTROUTE and MSG_PROBE are not used by DRBD, i.e.
+   we can reuse these flags for our purposes */
+#define CALLER_BUFFER  MSG_DONTROUTE
+#define GROW_BUFFER    MSG_PROBE
+
+/*
+ * gfp_mask for allocating memory with no write-out.
+ *
+ * When drbd allocates memory on behalf of the peer, we prevent it from causing
+ * write-out because in a criss-cross setup, the write-out could lead to memory
+ * pressure on the peer, eventually leading to deadlock.
+ */
+#define GFP_TRY	(__GFP_HIGHMEM | __GFP_NOWARN | __GFP_RECLAIM)
+
+#define tr_printk(level, transport, fmt, args...)  ({		\
+	rcu_read_lock();					\
+	printk(level "drbd %s %s:%s: " fmt,			\
+	       (transport)->log_prefix,				\
+	       (transport)->class->name,			\
+	       rcu_dereference((transport)->net_conf)->name,	\
+	       ## args);					\
+	rcu_read_unlock();					\
+	})
+
+#define tr_err(transport, fmt, args...) \
+	tr_printk(KERN_ERR, transport, fmt, ## args)
+#define tr_warn(transport, fmt, args...) \
+	tr_printk(KERN_WARNING, transport, fmt, ## args)
+#define tr_notice(transport, fmt, args...) \
+	tr_printk(KERN_NOTICE, transport, fmt, ## args)
+#define tr_info(transport, fmt, args...) \
+	tr_printk(KERN_INFO, transport, fmt, ## args)
+
+#define TR_ASSERT(x, exp)							\
+	do {									\
+		if (!(exp))							\
+			tr_err(x, "ASSERTION %s FAILED in %s\n",	\
+				 #exp, __func__);				\
+	} while (0)
+
+struct drbd_resource;
+struct drbd_listener;
+struct drbd_transport;
+
+enum drbd_stream {
+	DATA_STREAM,
+	CONTROL_STREAM
+};
+
+enum drbd_tr_hints {
+	CORK,
+	UNCORK,
+	NODELAY,
+	NOSPACE,
+	QUICKACK
+};
+
+enum { /* bits in the flags word */
+	NET_CONGESTED,		/* The data socket is congested */
+	RESOLVE_CONFLICTS,	/* Set on one node, cleared on the peer! */
+};
+
+enum drbd_tr_free_op {
+	CLOSE_CONNECTION,
+	DESTROY_TRANSPORT
+};
+
+enum drbd_tr_event {
+	CLOSED_BY_PEER,
+	TIMEOUT,
+};
+
+enum drbd_tr_path_flag {
+	TR_ESTABLISHED, /* updated by the transport */
+	TR_UNREGISTERED,
+	TR_TRANSPORT_PRIVATE = 32, /* flags starting here are used exclusively by the transport */
+};
+
+/* A transport might wrap its own data structure around this, having
+   this base class as its first member. */
+struct drbd_path {
+	struct sockaddr_storage my_addr;
+	struct sockaddr_storage peer_addr;
+
+	struct kref kref;
+
+	struct net *net;
+	int my_addr_len;
+	int peer_addr_len;
+	unsigned long flags;
+
+	struct drbd_transport *transport;
+	struct list_head list; /* paths of a connection */
+	struct list_head listener_link; /* paths waiting for an incoming connection,
+					   head is in a drbd_listener */
+	struct drbd_listener *listener;
+
+	struct rcu_head rcu;
+};
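+
+/*
+ * Example (illustrative, names are hypothetical): a transport wraps
+ * struct drbd_path by embedding it as the first member, so that the
+ * core can allocate it according to path_instance_size and the
+ * transport can recover its wrapper with container_of():
+ *
+ *	struct my_tr_path {
+ *		struct drbd_path path;
+ *		struct socket *sock;
+ *	};
+ *
+ *	static struct my_tr_path *to_my_path(struct drbd_path *p)
+ *	{
+ *		return container_of(p, struct my_tr_path, path);
+ *	}
+ */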
+
+/* Each transport implementation should embed a struct drbd_transport
+   into its instance data structure. */
+struct drbd_transport {
+	struct drbd_transport_class *class;
+
+	struct list_head paths;
+
+	const char *log_prefix;		/* resource name */
+	struct net_conf __rcu *net_conf;	/* content protected by rcu */
+
+	/* These members are intended to be updated by the transport: */
+	unsigned int ko_count;
+	unsigned long flags;
+};
+
+struct drbd_transport_stats {
+	int unread_received;
+	int unacked_send;
+	int send_buffer_size;
+	int send_buffer_used;
+};
+
+/* argument to ->recv_pages() */
+struct drbd_page_chain_head {
+	struct page *head;
+	unsigned int nr_pages;
+};
+
+struct drbd_const_buffer {
+	const u8 *buffer;
+	unsigned int avail;
+};
+
+/**
+ * struct drbd_transport_ops - Operations implemented by the transport.
+ *
+ * The user of this API guarantees that all of the following will be exclusive
+ * with respect to each other for a given transport instance:
+ * * init()
+ * * free()
+ * * prepare_connect()
+ * * finish_connect()
+ * * add_path() and the subsequent list_add_tail_rcu() for the paths list
+ * * may_remove_path() and the subsequent list_del_rcu() for the paths list
+ *
+ * The connection sequence is as follows:
+ * 1. prepare_connect(), with the above exclusivity guarantee
+ * 2. connect(), this may take a long time
+ * 3. finish_connect(), with the above exclusivity guarantee
+ */
+struct drbd_transport_ops {
+	int (*init)(struct drbd_transport *);
+	void (*free)(struct drbd_transport *, enum drbd_tr_free_op free_op);
+	int (*init_listener)(struct drbd_transport *, const struct sockaddr *, struct net *net,
+			     struct drbd_listener *);
+	void (*release_listener)(struct drbd_listener *);
+	int (*prepare_connect)(struct drbd_transport *);
+	int (*connect)(struct drbd_transport *);
+	void (*finish_connect)(struct drbd_transport *);
+
+/**
+ * recv() - Receive data via the transport
+ * @transport:	The transport to use
+ * @stream:	The stream within the transport to use. Either DATA_STREAM or CONTROL_STREAM
+ * @buf:	The function will place here the pointer to the data area
+ * @size:	Number of bytes to receive
+ * @flags:	Bitmask of CALLER_BUFFER, GROW_BUFFER and MSG_DONTWAIT
+ *
+ * recv() returns the requested data in a buffer (owned by the transport).
+ * You may pass MSG_DONTWAIT in the flags. Usually with the next call to
+ * recv() or recv_pages() on the same stream, the buffer may no longer be
+ * accessed by the caller, i.e. it is reclaimed by the transport.
+ *
+ * If the transport was not capable of fulfilling the complete "wish" of the
+ * caller (that means it returned a size smaller than the requested size),
+ * the caller may call recv() again with the flag GROW_BUFFER, and *buf as
+ * returned by the previous call.
+ * Note1: This can happen if MSG_DONTWAIT was used, or if a receive timeout
+ *	was set with set_rcvtimeo().
+ * Note2: recv() is free to relocate the buffer in such a call, i.e. to
+ *	modify *buf. It then copies the content received so far to the new
+ *	memory location.
+ *
+ * Last but not least, the caller may also pass an arbitrary pointer in *buf
+ * with the CALLER_BUFFER flag. This is expected to be used for small
+ * amounts of data only.
+ *
+ * Upon success the function returns the number of bytes read. Upon error
+ * the return code is negative. A 0 indicates that the socket was closed by
+ * the remote side.
+ */
+	int (*recv)(struct drbd_transport *, enum drbd_stream, void **buf, size_t size, int flags);
+
+/**
+ * recv_pages() - Receive bulk data via the transport's DATA_STREAM
+ * @transport:	The transport to use
+ * @chain:	Here recv_pages() will place the page chain head and length
+ * @size:	Number of bytes to receive
+ *
+ * recv_pages() will return the requested amount of data from DATA_STREAM,
+ * and place it into pages allocated with drbd_alloc_pages().
+ *
+ * Upon success the function returns 0. Upon error the function returns a
+ * negative value.
+ */
+	int (*recv_pages)(struct drbd_transport *, struct drbd_page_chain_head *, size_t size);
+
+	void (*stats)(struct drbd_transport *, struct drbd_transport_stats *stats);
+/**
+ * net_conf_change() - Notify about changed network configuration on the transport.
+ * @new_net_conf: The new network configuration that should be applied.
+ *
+ * net_conf_change() is called in the context of either the initial creation of the connection,
+ * or when the net_conf is changed via netlink. Note that assignment of the net_conf to the
+ * transport object happens after this function is called.
+ *
+ * On a negative (error) return value, it is expected that any changes are reverted and
+ * the old net_conf (if any) is still in effect.
+ *
+ * Upon success the function returns 0. Upon error the function returns a negative value.
+ */
+	int (*net_conf_change)(struct drbd_transport *, struct net_conf *new_net_conf);
+	void (*set_rcvtimeo)(struct drbd_transport *, enum drbd_stream, long timeout);
+	long (*get_rcvtimeo)(struct drbd_transport *, enum drbd_stream);
+	int (*send_page)(struct drbd_transport *, enum drbd_stream, struct page *,
+			 int offset, size_t size, unsigned msg_flags);
+	int (*send_zc_bio)(struct drbd_transport *, struct bio *bio);
+	bool (*stream_ok)(struct drbd_transport *, enum drbd_stream);
+	bool (*hint)(struct drbd_transport *, enum drbd_stream, enum drbd_tr_hints hint);
+	void (*debugfs_show)(struct drbd_transport *, struct seq_file *m);
+
+/**
+ * add_path() - Prepare path to be added
+ * @path: The path that is being added
+ *
+ * Called before the path is added to the paths list.
+ *
+ * Return: 0 if path may be added, error code otherwise.
+ */
+	int (*add_path)(struct drbd_path *path);
+
+/**
+ * may_remove_path() - Query whether path may currently be removed
+ * @path: The path to be removed
+ *
+ * Return: true if path may be removed, false otherwise.
+ */
+	bool (*may_remove_path)(struct drbd_path *path);
+
+/**
+ * remove_path() - Clean up after path removal
+ * @path: The path that is being removed
+ *
+ * Clean up a path that is being removed. Called after the path has been
+ * removed from the list and all kref references have been put.
+ */
+	void (*remove_path)(struct drbd_path *path);
+};
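+
+/*
+ * Illustrative sketch (not part of the API) of the ->recv() buffer
+ * protocol documented above: request a header, then grow the buffer if
+ * the transport delivered less than requested, which can happen with
+ * MSG_DONTWAIT or after a receive timeout set via ->set_rcvtimeo():
+ *
+ *	struct drbd_transport_ops *ops = &transport->class->ops;
+ *	void *buf = NULL;
+ *	int got = ops->recv(transport, DATA_STREAM, &buf, size, MSG_DONTWAIT);
+ *
+ *	while (got > 0 && got < size) {
+ *		int more = ops->recv(transport, DATA_STREAM, &buf,
+ *				     size - got, GROW_BUFFER);
+ *		if (more <= 0)
+ *			break;
+ *		got += more;
+ *	}
+ */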
+
+struct drbd_transport_class {
+	const char *name;
+	const int instance_size;
+	const int path_instance_size;
+	const int listener_instance_size;
+	struct drbd_transport_ops ops;
+
+	struct module *module;
+
+	struct list_head list;
+};
+
+/* An "abstract base class" for transport implementations, i.e. it
+   should be embedded into a transport-specific representation of a
+   listening "socket". */
+struct drbd_listener {
+	struct kref kref;
+	struct drbd_resource *resource;
+	struct drbd_transport_class *transport_class;
+	struct list_head list; /* link for resource->listeners */
+	struct list_head waiters; /* list head for paths */
+	spinlock_t waiters_lock;
+	int pending_accepts;
+	struct sockaddr_storage listen_addr;
+	struct completion ready;
+	int err;
+};
+
+/* drbd_main.c */
+void drbd_destroy_path(struct kref *kref);
+
+/* drbd_transport.c */
+int drbd_register_transport_class(struct drbd_transport_class *transport_class,
+				  int version, int drbd_transport_size);
+void drbd_unregister_transport_class(struct drbd_transport_class *transport_class);
+struct drbd_transport_class *drbd_get_transport_class(const char *name);
+void drbd_put_transport_class(struct drbd_transport_class *tc);
+void drbd_print_transports_loaded(struct seq_file *seq);
+
+int drbd_get_listener(struct drbd_path *path);
+void drbd_put_listener(struct drbd_path *path);
+struct drbd_path *drbd_find_path_by_addr(struct drbd_listener *listener,
+					 struct sockaddr_storage *addr);
+bool drbd_stream_send_timed_out(struct drbd_transport *transport,
+				enum drbd_stream stream);
+bool drbd_should_abort_listening(struct drbd_transport *transport);
+void drbd_path_event(struct drbd_transport *transport, struct drbd_path *path);
+void drbd_listener_destroy(struct kref *kref);
+struct drbd_path *__drbd_next_path_ref(struct drbd_path *drbd_path,
+				       struct drbd_transport *transport);
+
+/* Might restart iteration, if current element is removed from list!! */
+#define for_each_path_ref(path, transport)			\
+	for (path = __drbd_next_path_ref(NULL, transport);	\
+	     path;						\
+	     path = __drbd_next_path_ref(path, transport))
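+
+/*
+ * Example (illustrative): walk all paths of a transport. The iterator
+ * holds a reference on the current element, and the iteration may
+ * restart from the head if that element is removed from the list:
+ *
+ *	struct drbd_path *path;
+ *
+ *	for_each_path_ref(path, transport)
+ *		tr_info(transport, "path established: %d\n",
+ *			!!test_bit(TR_ESTABLISHED, &path->flags));
+ */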
+
+/* drbd_receiver.c */
+struct page *drbd_alloc_pages(struct drbd_transport *transport,
+			      unsigned int number, gfp_t gfp_mask);
+void drbd_free_pages(struct drbd_transport *transport, struct page *page);
+void drbd_control_data_ready(struct drbd_transport *transport,
+			     struct drbd_const_buffer *pool);
+void drbd_control_event(struct drbd_transport *transport,
+			enum drbd_tr_event event);
+
+static inline void drbd_alloc_page_chain(struct drbd_transport *t,
+	struct drbd_page_chain_head *chain, unsigned int nr, gfp_t gfp_flags)
+{
+	chain->head = drbd_alloc_pages(t, nr, gfp_flags);
+	chain->nr_pages = chain->head ? nr : 0;
+}
+
+static inline void drbd_free_page_chain(struct drbd_transport *transport,
+					struct drbd_page_chain_head *chain)
+{
+	drbd_free_pages(transport, chain->head);
+	chain->head = NULL;
+	chain->nr_pages = 0;
+}
+
+/*
+ * Some helper functions to deal with our page chains.
+ */
+/* Our transports may sometimes need to only partially use a page.
+ * We need to express that somehow.  Use this struct, and "graft" it into
+ * struct page at page->lru.
+ *
+ * According to include/linux/mm.h:
+ *  | A page may be used by anyone else who does a __get_free_page().
+ *  | In this case, page_count still tracks the references, and should only
+ *  | be used through the normal accessor functions. The top bits of page->flags
+ *  | and page->virtual store page management information, but all other fields
+ *  | are unused and could be used privately, carefully. The management of this
+ *  | page is the responsibility of the one who allocated it, and those who have
+ *  | subsequently been given references to it.
+ * (we do alloc_page(), that is equivalent).
+ *
+ * Red Hat struct page is different from upstream (layout and members) :(
+ * So I am not too sure about the "all other fields", and it is not as easy to
+ * find a place where sizeof(struct drbd_page_chain) would fit on all archs and
+ * distribution-changed layouts.
+ *
+ * But (upstream) struct page also says:
+ *  | struct list_head lru;   * ...
+ *  |       * Can be used as a generic list
+ *  |       * by the page owner.
+ *
+ * On 32bit, use unsigned short for offset and size,
+ * to still fit in sizeof(page->lru).
+ */
+
+/* grafted over struct page.lru */
+struct drbd_page_chain {
+	struct page *next;	/* next page in chain, if any */
+#ifdef CONFIG_64BIT
+	unsigned int offset;	/* start offset of data within this page */
+	unsigned int size;	/* number of data bytes within this page */
+#else
+#if PAGE_SIZE > (1U<<16)
+#error "won't work."
+#endif
+	unsigned short offset;	/* start offset of data within this page */
+	unsigned short size;	/* number of data bytes within this page */
+#endif
+};
+
+static inline void dummy_for_buildbug(void)
+{
+	struct page *dummy;
+	BUILD_BUG_ON(sizeof(struct drbd_page_chain) > sizeof(dummy->lru));
+}
+
+#define page_chain_next(page) \
+	(((struct drbd_page_chain *)&(page)->lru)->next)
+#define page_chain_size(page) \
+	(((struct drbd_page_chain *)&(page)->lru)->size)
+#define page_chain_offset(page) \
+	(((struct drbd_page_chain *)&(page)->lru)->offset)
+#define set_page_chain_next(page, v) \
+	(((struct drbd_page_chain *)&(page)->lru)->next = (v))
+#define set_page_chain_size(page, v) \
+	(((struct drbd_page_chain *)&(page)->lru)->size = (v))
+#define set_page_chain_offset(page, v) \
+	(((struct drbd_page_chain *)&(page)->lru)->offset = (v))
+#define set_page_chain_next_offset_size(page, n, o, s)		\
+	(*((struct drbd_page_chain *)&(page)->lru) =		\
+	((struct drbd_page_chain) {				\
+		.next = (n),					\
+		.offset = (o),					\
+		.size = (s),					\
+	 }))
+
+#define page_chain_for_each(page) \
+	for (; page && ({ prefetch(page_chain_next(page)); 1; }); \
+			page = page_chain_next(page))
+#define page_chain_for_each_safe(page, n) \
+	for (; page && ({ n = page_chain_next(page); 1; }); page = n)
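+
+/*
+ * Example (illustrative): walk a received page chain and sum up the
+ * payload bytes, using the accessors above instead of touching
+ * page->lru directly:
+ *
+ *	struct page *page = chain->head;
+ *	size_t payload = 0;
+ *
+ *	page_chain_for_each(page)
+ *		payload += page_chain_size(page);
+ */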
+
+#ifndef SK_CAN_REUSE
+/* This constant was introduced by Pavel Emelyanov <xemul@parallels.com>
+   in commit 4a17fd52 ("sock: Introduce named constants for sk_reuse"),
+   first released in linux-3.5. */
+#define SK_CAN_REUSE   1
+#endif
+
+#endif
diff --git a/drivers/block/drbd/drbd_transport_template.c b/drivers/block/drbd/drbd_transport_template.c
new file mode 100644
index 000000000000..7a07dff0b5e8
--- /dev/null
+++ b/drivers/block/drbd/drbd_transport_template.c
@@ -0,0 +1,160 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/module.h>
+#include "drbd_transport.h"
+#include "drbd_int.h"
+
+MODULE_AUTHOR("xxx");
+MODULE_DESCRIPTION("xxx transport layer for DRBD");
+MODULE_LICENSE("GPL");
+
+struct drbd_xxx_transport {
+	struct drbd_transport transport; /* Must be first! */
+	/* xxx */
+};
+
+struct xxx_listener {
+	struct drbd_listener listener;
+	/* xxx */
+};
+
+struct xxx_path {
+	struct drbd_path path;
+	/* xxx */
+};
+
+static int xxx_init(struct drbd_transport *transport);
+static void xxx_free(struct drbd_transport *transport, enum drbd_tr_free_op free_op);
+static int xxx_connect(struct drbd_transport *transport);
+static int xxx_recv(struct drbd_transport *transport, enum drbd_stream stream, void **buf, size_t size, int flags);
+static void xxx_stats(struct drbd_transport *transport, struct drbd_transport_stats *stats);
+static void xxx_set_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream, long timeout);
+static long xxx_get_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream);
+static int xxx_send_page(struct drbd_transport *transport, enum drbd_stream stream, struct page *page,
+		    int offset, size_t size, unsigned msg_flags);
+static bool xxx_stream_ok(struct drbd_transport *transport, enum drbd_stream stream);
+static bool xxx_hint(struct drbd_transport *transport, enum drbd_stream stream, enum drbd_tr_hints hint);
+
+static struct drbd_transport_class xxx_transport_class = {
+	.name = "xxx",
+	.instance_size = sizeof(struct drbd_xxx_transport),
+	.path_instance_size = sizeof(struct xxx_path),
+	.listener_instance_size = sizeof(struct xxx_listener),
+	.ops = (struct drbd_transport_ops) {
+		.init = xxx_init,
+		.free = xxx_free,
+		.connect = xxx_connect,
+		.recv = xxx_recv,
+		.stats = xxx_stats,
+		.set_rcvtimeo = xxx_set_rcvtimeo,
+		.get_rcvtimeo = xxx_get_rcvtimeo,
+		.send_page = xxx_send_page,
+		.stream_ok = xxx_stream_ok,
+		.hint = xxx_hint,
+	},
+	.module = THIS_MODULE,
+	.list = LIST_HEAD_INIT(xxx_transport_class.list),
+};
+
+static int xxx_init(struct drbd_transport *transport)
+{
+	struct drbd_xxx_transport *xxx_transport =
+		container_of(transport, struct drbd_xxx_transport, transport);
+
+	xxx_transport->transport.class = &xxx_transport_class;
+
+	/* xxx */
+
+	return 0;
+}
+
+static void xxx_free(struct drbd_transport *transport, enum drbd_tr_free_op free_op)
+{
+	/* disconnect here */
+
+	if (free_op == DESTROY_TRANSPORT) {
+		/* xxx: release transport-private resources here; the
+		 * instance memory itself is owned by the DRBD core */
+	}
+}
+
+static int xxx_recv(struct drbd_transport *transport, enum drbd_stream stream, void **buf, size_t size, int flags)
+{
+	struct drbd_xxx_transport *xxx_transport =
+		container_of(transport, struct drbd_xxx_transport, transport);
+
+	return 0;
+}
+
+static void xxx_stats(struct drbd_transport *transport, struct drbd_transport_stats *stats)
+{
+}
+
+static int xxx_connect(struct drbd_transport *transport)
+{
+	struct drbd_xxx_transport *xxx_transport =
+		container_of(transport, struct drbd_xxx_transport, transport);
+
+	return 0;
+}
+
+static void xxx_set_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream, long timeout)
+{
+}
+
+static long xxx_get_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream)
+{
+	return 0;
+}
+
+static bool xxx_stream_ok(struct drbd_transport *transport, enum drbd_stream stream)
+{
+	return true;
+}
+
+static int xxx_send_page(struct drbd_transport *transport, enum drbd_stream stream, struct page *page,
+		    int offset, size_t size, unsigned msg_flags)
+{
+	return 0;
+}
+
+static bool xxx_hint(struct drbd_transport *transport, enum drbd_stream stream,
+		enum drbd_tr_hints hint)
+{
+	switch (hint) {
+	default: /* not implemented, but should not trigger error handling */
+		return true;
+	}
+}
+
+static int __init xxx_module_init(void)
+{
+	return drbd_register_transport_class(&xxx_transport_class,
+					     DRBD_TRANSPORT_API_VERSION,
+					     sizeof(struct drbd_xxx_transport));
+}
+
+static void __exit xxx_module_exit(void)
+{
+	drbd_unregister_transport_class(&xxx_transport_class);
+}
+
+module_init(xxx_module_init)
+module_exit(xxx_module_exit)
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 05/20] drbd: add TCP transport implementation
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
                   ` (3 preceding siblings ...)
  2026-03-27 22:38 ` [PATCH 04/20] drbd: add transport layer abstraction Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 06/20] drbd: add RDMA " Christoph Böhmwalder
                   ` (14 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder, Joel Colledge

Add the TCP transport implementation as a standalone kernel module
(drbd_transport_tcp) that registers itself with the DRBD transport
abstraction layer.
This moves the TCP replication details into a separate component
instead of hard-wiring them into the main module, so alternative
transports can coexist and be selected at runtime.

The implementation manages two independent sockets per peer connection
(data and control streams) and supports dynamic path add/remove while a
connection is live. This is the foundation for DRBD 9 multi-path
operation.
It also integrates kernel-native TLS via the in-kernel handshake API,
enabling encrypted replication.
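
With CONFIG_BLK_DEV_DRBD_TCP=m the transport builds as
drbd_transport_tcp.ko and registers under the class name "tcp"; the
core then finds it by that name (drbd_get_transport_class("tcp")) when
a connection is configured to use this transport.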

Co-developed-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Co-developed-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Co-developed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Co-developed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/Kconfig              |   11 +
 drivers/block/drbd/Makefile             |    2 +
 drivers/block/drbd/drbd_transport_tcp.c | 1669 +++++++++++++++++++++++
 3 files changed, 1682 insertions(+)
 create mode 100644 drivers/block/drbd/drbd_transport_tcp.c

diff --git a/drivers/block/drbd/Kconfig b/drivers/block/drbd/Kconfig
index b907b07468bb..f69e50be190e 100644
--- a/drivers/block/drbd/Kconfig
+++ b/drivers/block/drbd/Kconfig
@@ -72,3 +72,14 @@ config DRBD_FAULT_INJECTION
 		echo 5 > /sys/module/drbd/parameters/fault_rate
 
 	  If unsure, say N.
+
+config BLK_DEV_DRBD_TCP
+	tristate "DRBD TCP transport"
+	depends on BLK_DEV_DRBD
+	default BLK_DEV_DRBD
+	help
+	  TCP transport support for DRBD. This is the standard transport
+	  for DRBD replication over TCP/IP networks.
+
+	  If unsure, say Y.
diff --git a/drivers/block/drbd/Makefile b/drivers/block/drbd/Makefile
index 4929bd423472..35f1c60d4142 100644
--- a/drivers/block/drbd/Makefile
+++ b/drivers/block/drbd/Makefile
@@ -8,3 +8,5 @@ drbd-y += drbd_transport.o
 drbd-$(CONFIG_DEBUG_FS) += drbd_debugfs.o
 
 obj-$(CONFIG_BLK_DEV_DRBD)     += drbd.o
+
+obj-$(CONFIG_BLK_DEV_DRBD_TCP) += drbd_transport_tcp.o
diff --git a/drivers/block/drbd/drbd_transport_tcp.c b/drivers/block/drbd/drbd_transport_tcp.c
new file mode 100644
index 000000000000..31885ff9341f
--- /dev/null
+++ b/drivers/block/drbd/drbd_transport_tcp.c
@@ -0,0 +1,1669 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+   drbd_transport_tcp.c
+
+   This file is part of DRBD.
+
+   Copyright (C) 2014-2017, LINBIT HA-Solutions GmbH.
+
+
+*/
+
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/socket.h>
+#include <linux/pkt_sched.h>
+#include <linux/sched/signal.h>
+#include <linux/net.h>
+#include <linux/file.h>
+#include <linux/tcp.h>
+#include <linux/highmem.h>
+#include <linux/bio.h>
+#include <linux/drbd_genl_api.h>
+#include <linux/drbd_config.h>
+#include <linux/tls.h>
+#include <net/tcp.h>
+#include <net/handshake.h>
+#include <net/tls.h>
+#include <net/tls_prot.h>
+#include "drbd_protocol.h"
+#include "drbd_transport.h"
+
+
+MODULE_AUTHOR("Philipp Reisner <philipp.reisner@linbit.com>");
+MODULE_AUTHOR("Lars Ellenberg <lars.ellenberg@linbit.com>");
+MODULE_AUTHOR("Roland Kammerer <roland.kammerer@linbit.com>");
+MODULE_DESCRIPTION("TCP (SDP, SSOCKS) transport layer for DRBD");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(REL_VERSION);
+
+/* TCP keepalive has proven to be vital in many deployment scenarios.
+ * Without keepalive, after a device has seen a sufficiently long period of
+ * idle time, packets on our "bulk data" socket may be dropped because an
+ * overly "smart" network infrastructure decided that the TCP session was
+ * stale. Note that we don't try to use this to detect "broken" TCP
+ * sessions here; those are still handled by the DRBD effective network
+ * timeout via the timeout / ko-count settings.
+ * We use this to try to keep "idle" TCP sessions "alive".
+ * Default to sending a probe every 23 seconds.
+ */
+#define DRBD_KEEP_IDLE	23
+#define DRBD_KEEP_INTVL 23
+#define DRBD_KEEP_CNT	9
+static unsigned int drbd_keepcnt = DRBD_KEEP_CNT;
+module_param_named(keepcnt, drbd_keepcnt, uint, 0664);
+MODULE_PARM_DESC(keepcnt, "see tcp(7) tcp_keepalive_probes; set TCP_KEEPCNT for data sockets; default: 9");
+static unsigned int drbd_keepidle = DRBD_KEEP_IDLE;
+module_param_named(keepidle, drbd_keepidle, uint, 0664);
+MODULE_PARM_DESC(keepidle, "see tcp(7) tcp_keepalive_time; set TCP_KEEPIDLE for data sockets; default: 23s");
+static unsigned int drbd_keepintvl = DRBD_KEEP_INTVL;
+module_param_named(keepintvl, drbd_keepintvl, uint, 0664);
+MODULE_PARM_DESC(keepintvl, "see tcp(7) tcp_keepalive_intvl; set TCP_KEEPINTVL for data sockets; default: 23s");
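+
+/* Example (illustrative): these parameters can be given at load time,
+ *
+ *	modprobe drbd_transport_tcp keepidle=60 keepintvl=60 keepcnt=5
+ *
+ * or, since they are registered with mode 0664, adjusted at runtime
+ * under /sys/module/drbd_transport_tcp/parameters/.
+ */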
+
+static struct workqueue_struct *dtt_csocket_recv;
+
+struct buffer {
+	void *base;
+	void *pos;
+};
+
+#define DTT_CONNECTING 1
+#define DTT_DATA_READY_ARMED 2
+
+struct drbd_tcp_transport {
+	struct drbd_transport transport; /* Must be first! */
+	spinlock_t control_recv_lock;
+	unsigned long flags;
+	struct socket *stream[2];
+	struct buffer rbuf[2];
+	struct timer_list control_timer;
+	struct work_struct control_data_ready_work;
+	void (*original_control_sk_state_change)(struct sock *sk);
+	void (*original_control_sk_data_ready)(struct sock *sk);
+};
+
+struct dtt_listener {
+	struct drbd_listener listener;
+	void (*original_sk_state_change)(struct sock *sk);
+	struct socket *s_listen;
+
+	wait_queue_head_t wait; /* woken if a connection came in */
+};
+
+/* Since each path might have a different local IP address, each
+   path might need its own listener. Therefore the listener is
+   referenced from the drbd_path embedded in each dtt_path, and
+   _not_ from the transport itself. */
+
+struct dtt_socket_container {
+	struct list_head list;
+	struct socket *socket;
+};
+
+struct dtt_path {
+	struct drbd_path path;
+
+	struct list_head sockets; /* sockets passed to me by other receiver threads */
+};
+
+static int dtt_init(struct drbd_transport *transport);
+static void dtt_free(struct drbd_transport *transport, enum drbd_tr_free_op free_op);
+static void dtt_socket_free(struct socket **sock);
+static int dtt_init_listener(struct drbd_transport *transport, const struct sockaddr *addr,
+			     struct net *net, struct drbd_listener *drbd_listener);
+static void dtt_destroy_listener(struct drbd_listener *generic_listener);
+static int dtt_prepare_connect(struct drbd_transport *transport);
+static int dtt_connect(struct drbd_transport *transport);
+static void dtt_finish_connect(struct drbd_transport *transport);
+static int dtt_recv(struct drbd_transport *transport, enum drbd_stream stream, void **buf, size_t size, int flags);
+static int dtt_recv_pages(struct drbd_transport *transport, struct drbd_page_chain_head *chain, size_t size);
+static void dtt_stats(struct drbd_transport *transport, struct drbd_transport_stats *stats);
+static int dtt_net_conf_change(struct drbd_transport *transport, struct net_conf *new_net_conf);
+static void dtt_set_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream, long timeout);
+static long dtt_get_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream);
+static int dtt_send_page(struct drbd_transport *transport, enum drbd_stream, struct page *page,
+		int offset, size_t size, unsigned msg_flags);
+static int dtt_send_zc_bio(struct drbd_transport *, struct bio *bio);
+static bool dtt_stream_ok(struct drbd_transport *transport, enum drbd_stream stream);
+static bool dtt_hint(struct drbd_transport *transport, enum drbd_stream stream, enum drbd_tr_hints hint);
+static void dtt_debugfs_show(struct drbd_transport *transport, struct seq_file *m);
+static void dtt_update_congested(struct drbd_tcp_transport *tcp_transport);
+static int dtt_add_path(struct drbd_path *path);
+static bool dtt_may_remove_path(struct drbd_path *);
+static void dtt_remove_path(struct drbd_path *);
+static void dtt_control_timer_fn(struct timer_list *t);
+
+static struct drbd_transport_class tcp_transport_class = {
+	.name = "tcp",
+	.instance_size = sizeof(struct drbd_tcp_transport),
+	.path_instance_size = sizeof(struct dtt_path),
+	.listener_instance_size = sizeof(struct dtt_listener),
+	.ops = (struct drbd_transport_ops) {
+		.init = dtt_init,
+		.free = dtt_free,
+		.init_listener = dtt_init_listener,
+		.release_listener = dtt_destroy_listener,
+		.prepare_connect = dtt_prepare_connect,
+		.connect = dtt_connect,
+		.finish_connect = dtt_finish_connect,
+		.recv = dtt_recv,
+		.recv_pages = dtt_recv_pages,
+		.stats = dtt_stats,
+		.net_conf_change = dtt_net_conf_change,
+		.set_rcvtimeo = dtt_set_rcvtimeo,
+		.get_rcvtimeo = dtt_get_rcvtimeo,
+		.send_page = dtt_send_page,
+		.send_zc_bio = dtt_send_zc_bio,
+		.stream_ok = dtt_stream_ok,
+		.hint = dtt_hint,
+		.debugfs_show = dtt_debugfs_show,
+		.add_path = dtt_add_path,
+		.may_remove_path = dtt_may_remove_path,
+		.remove_path = dtt_remove_path,
+	},
+	.module = THIS_MODULE,
+	.list = LIST_HEAD_INIT(tcp_transport_class.list),
+};
+
+static int dtt_init(struct drbd_transport *transport)
+{
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(transport, struct drbd_tcp_transport, transport);
+	enum drbd_stream i;
+
+	spin_lock_init(&tcp_transport->control_recv_lock);
+	tcp_transport->transport.class = &tcp_transport_class;
+	for (i = DATA_STREAM; i <= CONTROL_STREAM; i++) {
+		void *buffer = (void *)__get_free_page(GFP_KERNEL);
+
+		if (!buffer)
+			goto fail;
+		tcp_transport->rbuf[i].base = buffer;
+		tcp_transport->rbuf[i].pos = buffer;
+	}
+	timer_setup(&tcp_transport->control_timer, dtt_control_timer_fn, 0);
+
+	return 0;
+fail:
+	free_page((unsigned long)tcp_transport->rbuf[0].base);
+	return -ENOMEM;
+}
+
+static void dtt_free(struct drbd_transport *transport, enum drbd_tr_free_op free_op)
+{
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(transport, struct drbd_tcp_transport, transport);
+	enum drbd_stream i;
+	struct drbd_path *drbd_path;
+	/* free the socket specific stuff,
+	 * mutexes are handled by caller */
+
+	clear_bit(DTT_DATA_READY_ARMED, &tcp_transport->flags);
+
+	if (tcp_transport->control_data_ready_work.func) {
+		cancel_work_sync(&tcp_transport->control_data_ready_work);
+		tcp_transport->control_data_ready_work.func = NULL;
+	}
+
+	if (tcp_transport->stream[CONTROL_STREAM] &&
+	    tcp_transport->original_control_sk_state_change) {
+		write_lock_bh(&tcp_transport->stream[CONTROL_STREAM]->sk->sk_callback_lock);
+		tcp_transport->stream[CONTROL_STREAM]->sk->sk_state_change =
+			tcp_transport->original_control_sk_state_change;
+		write_unlock_bh(&tcp_transport->stream[CONTROL_STREAM]->sk->sk_callback_lock);
+	}
+
+	synchronize_rcu();
+	for (i = DATA_STREAM; i <= CONTROL_STREAM; i++)
+		dtt_socket_free(&tcp_transport->stream[i]);
+
+	list_for_each_entry(drbd_path, &transport->paths, list) {
+		bool was_established = test_and_clear_bit(TR_ESTABLISHED, &drbd_path->flags);
+
+		if (free_op == CLOSE_CONNECTION && was_established)
+			drbd_path_event(transport, drbd_path);
+	}
+
+	timer_delete_sync(&tcp_transport->control_timer);
+
+	if (free_op == DESTROY_TRANSPORT) {
+		for (i = DATA_STREAM; i <= CONTROL_STREAM; i++) {
+			free_page((unsigned long)tcp_transport->rbuf[i].base);
+			tcp_transport->rbuf[i].base = NULL;
+		}
+	}
+}
+
+static int _dtt_send(struct drbd_tcp_transport *tcp_transport, struct socket *socket,
+		      void *buf, size_t size, unsigned msg_flags)
+{
+	struct kvec iov;
+	struct msghdr msg;
+	int rv, sent = 0;
+
+	/* THINK  if (signal_pending) return ... ? */
+
+	iov.iov_base = buf;
+	iov.iov_len  = size;
+
+	msg.msg_name       = NULL;
+	msg.msg_namelen    = 0;
+	msg.msg_control    = NULL;
+	msg.msg_controllen = 0;
+	msg.msg_flags      = msg_flags | MSG_NOSIGNAL;
+
+	do {
+		rv = kernel_sendmsg(socket, &msg, &iov, 1, iov.iov_len);
+		if (rv == -EAGAIN) {
+			struct drbd_transport *transport = &tcp_transport->transport;
+			enum drbd_stream stream =
+				tcp_transport->stream[DATA_STREAM] == socket ?
+					DATA_STREAM : CONTROL_STREAM;
+
+			if (drbd_stream_send_timed_out(transport, stream))
+				break;
+			else
+				continue;
+		}
+		if (rv == -EINTR) {
+			flush_signals(current);
+			rv = 0;
+		}
+		if (rv < 0)
+			break;
+		sent += rv;
+		iov.iov_base += rv;
+		iov.iov_len  -= rv;
+	} while (sent < size);
+
+	if (rv <= 0)
+		return rv;
+
+	return sent;
+}
+
+static int dtt_recv_short(struct socket *socket, void *buf, size_t size, int flags)
+{
+	struct kvec iov = {
+		.iov_base = buf,
+		.iov_len = size,
+	};
+	union {
+		struct cmsghdr cmsg;
+		u8 buf[CMSG_SPACE(sizeof(u8))];
+	} u;
+	struct msghdr msg = {
+		.msg_control = &u,
+		.msg_controllen = sizeof(u),
+	};
+	int ret;
+
+	flags = flags ? flags : MSG_WAITALL | MSG_NOSIGNAL;
+
+	ret = kernel_recvmsg(socket, &msg, &iov, 1, size, flags);
+
+	if (msg.msg_controllen != sizeof(u)) {
+		u8 level, description;
+
+		switch (tls_get_record_type(socket->sk, &u.cmsg)) {
+		case 0:
+			fallthrough;
+		case TLS_RECORD_TYPE_DATA:
+			break;
+		case TLS_RECORD_TYPE_ALERT:
+			tls_alert_recv(socket->sk, &msg, &level, &description);
+			ret = (level == TLS_ALERT_LEVEL_FATAL) ? -EACCES : -EAGAIN;
+			break;
+		default:
+			/* discard this record type */
+			ret = -EAGAIN;
+			break;
+		}
+	}
+
+	return ret;
+}
+
+static int dtt_recv(struct drbd_transport *transport, enum drbd_stream stream, void **buf, size_t size, int flags)
+{
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(transport, struct drbd_tcp_transport, transport);
+	struct socket *socket = tcp_transport->stream[stream];
+	void *buffer;
+	int rv;
+
+	if (!socket)
+		return -ENOTCONN;
+
+	if (flags & CALLER_BUFFER) {
+		buffer = *buf;
+		rv = dtt_recv_short(socket, buffer, size, flags & ~CALLER_BUFFER);
+	} else if (flags & GROW_BUFFER) {
+		TR_ASSERT(transport, *buf == tcp_transport->rbuf[stream].base);
+		buffer = tcp_transport->rbuf[stream].pos;
+		TR_ASSERT(transport, (buffer - *buf) + size <= PAGE_SIZE);
+
+		rv = dtt_recv_short(socket, buffer, size, flags & ~GROW_BUFFER);
+	} else {
+		buffer = tcp_transport->rbuf[stream].base;
+
+		rv = dtt_recv_short(socket, buffer, size, flags);
+		if (rv > 0)
+			*buf = buffer;
+	}
+
+	if (rv > 0)
+		tcp_transport->rbuf[stream].pos = buffer + rv;
+
+	return rv;
+}
+
+static int dtt_recv_pages(struct drbd_transport *transport, struct drbd_page_chain_head *chain, size_t size)
+{
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(transport, struct drbd_tcp_transport, transport);
+	struct socket *socket = tcp_transport->stream[DATA_STREAM];
+	struct page *page;
+	int err;
+
+	if (!socket)
+		return -ENOTCONN;
+
+	drbd_alloc_page_chain(transport, chain, DIV_ROUND_UP(size, PAGE_SIZE), GFP_TRY);
+	page = chain->head;
+	if (!page)
+		return -ENOMEM;
+
+	page_chain_for_each(page) {
+		size_t len = min_t(int, size, PAGE_SIZE);
+		void *data = kmap(page);
+		err = dtt_recv_short(socket, data, len, 0);
+		kunmap(page);
+		set_page_chain_offset(page, 0);
+		set_page_chain_size(page, len);
+		if (err < 0)
+			goto fail;
+		size -= err;
+	}
+	if (unlikely(size)) {
+		tr_warn(transport, "Not enough data received; missing %zu bytes\n", size);
+		err = -ENODATA;
+		goto fail;
+	}
+	return 0;
+fail:
+	drbd_free_page_chain(transport, chain);
+	return err;
+}
+
+static void dtt_stats(struct drbd_transport *transport, struct drbd_transport_stats *stats)
+{
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(transport, struct drbd_tcp_transport, transport);
+
+	struct socket *socket = tcp_transport->stream[DATA_STREAM];
+
+	if (socket) {
+		struct sock *sk = socket->sk;
+		struct tcp_sock *tp = tcp_sk(sk);
+
+		stats->unread_received = tp->rcv_nxt - tp->copied_seq;
+		stats->unacked_send = tp->write_seq - tp->snd_una;
+		stats->send_buffer_size = sk->sk_sndbuf;
+		stats->send_buffer_used = sk->sk_wmem_queued;
+	}
+}
+
+static void dtt_setbufsize(struct socket *socket, unsigned int snd,
+			   unsigned int rcv)
+{
+	struct sock *sk = socket->sk;
+
+	/* open coded SO_SNDBUF, SO_RCVBUF */
+	if (snd) {
+		sk->sk_sndbuf = snd;
+		sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
+		/* Wake up sending tasks if we upped the value. */
+		sk->sk_write_space(sk);
+	} else {
+		sk->sk_userlocks &= ~SOCK_SNDBUF_LOCK;
+	}
+
+	if (rcv) {
+		sk->sk_rcvbuf = rcv;
+		sk->sk_userlocks |= SOCK_RCVBUF_LOCK;
+	} else {
+		sk->sk_userlocks &= ~SOCK_RCVBUF_LOCK;
+	}
+}
+
+static bool dtt_path_cmp_addr(struct dtt_path *path)
+{
+	struct drbd_path *drbd_path = &path->path;
+	int addr_size;
+
+	addr_size = min(drbd_path->my_addr_len, drbd_path->peer_addr_len);
+	return memcmp(&drbd_path->my_addr, &drbd_path->peer_addr, addr_size) > 0;
+}
+
+static int dtt_try_connect(struct dtt_path *path, struct socket **ret_socket)
+{
+	struct drbd_transport *transport = path->path.transport;
+	const char *what;
+	struct socket *socket;
+	struct sockaddr_storage my_addr, peer_addr;
+	struct net_conf *nc;
+	int err;
+	int sndbuf_size, rcvbuf_size, connect_int;
+
+	rcu_read_lock();
+	nc = rcu_dereference(transport->net_conf);
+	if (!nc) {
+		rcu_read_unlock();
+		return -EIO;
+	}
+	sndbuf_size = nc->sndbuf_size;
+	rcvbuf_size = nc->rcvbuf_size;
+	connect_int = nc->connect_int;
+	rcu_read_unlock();
+
+	my_addr = path->path.my_addr;
+	if (my_addr.ss_family == AF_INET6)
+		((struct sockaddr_in6 *)&my_addr)->sin6_port = 0;
+	else
+		((struct sockaddr_in *)&my_addr)->sin_port = 0; /* AF_INET & AF_SCI */
+
+	/* In some cases, the network stack can end up overwriting
+	   peer_addr.ss_family, so use a copy here. */
+	peer_addr = path->path.peer_addr;
+
+	what = "sock_create_kern";
+	err = sock_create_kern(path->path.net, my_addr.ss_family, SOCK_STREAM, IPPROTO_TCP, &socket);
+	if (err < 0) {
+		socket = NULL;
+		goto out;
+	}
+
+	socket->sk->sk_rcvtimeo =
+	socket->sk->sk_sndtimeo = connect_int * HZ;
+	dtt_setbufsize(socket, sndbuf_size, rcvbuf_size);
+
+	/* Explicitly bind to the configured IP as source IP
+	 * for the outgoing connections.
+	 * This is needed for multihomed hosts and to be
+	 * able to use lo: interfaces for DRBD.
+	 * Make sure to use 0 as the port number, so Linux selects
+	 * a free one dynamically.
+	 */
+	what = "bind before connect";
+	err = socket->ops->bind(socket, (struct sockaddr_unsized *) &my_addr,
+			path->path.my_addr_len);
+	if (err < 0)
+		goto out;
+
+	/* connect may fail, peer not yet available.
+	 * stay C_CONNECTING, don't go Disconnecting! */
+	what = "connect";
+	err = socket->ops->connect(socket, (struct sockaddr_unsized *) &peer_addr,
+				   path->path.peer_addr_len, 0);
+	if (err < 0) {
+		switch (err) {
+		case -ETIMEDOUT:
+		case -EINPROGRESS:
+		case -EINTR:
+		case -ERESTARTSYS:
+		case -ECONNREFUSED:
+		case -ECONNRESET:
+		case -ENETUNREACH:
+		case -EHOSTDOWN:
+		case -EHOSTUNREACH:
+			err = -EAGAIN;
+			break;
+		case -EINVAL:
+			err = -EADDRNOTAVAIL;
+			break;
+		}
+	}
+
+out:
+	if (err < 0) {
+		if (socket)
+			sock_release(socket);
+		if (err != -EAGAIN && err != -EADDRNOTAVAIL)
+			tr_err(transport, "%s failed, err = %d\n", what, err);
+	} else {
+		*ret_socket = socket;
+	}
+
+	return err;
+}
+
+typedef int (*tls_hello_func)(const struct tls_handshake_args *, gfp_t);
+
+struct tls_handshake_wait {
+	struct completion done;
+	int status;
+};
+
+static void tls_handshake_done(void *data, int status, key_serial_t peerid)
+{
+	struct tls_handshake_wait *wait = data;
+
+	/* Normalize the error to be negative: while the error _should_ be
+	 * negative, it is not guaranteed: the netlink interface allows any
+	 * u32 value, which is then negated and cast to int, so who knows
+	 * what will be returned.
+	 */
+	if (status > 0)
+		status = -status;
+
+	wait->status = status;
+	complete(&wait->done);
+}
+
+static int tls_init_hello(struct socket *sock, const char *peername,
+			  key_serial_t keyring, key_serial_t privkey,
+			  key_serial_t certificate, tls_hello_func hello,
+			  struct tls_handshake_wait *tls_wait)
+{
+	int err;
+	struct tls_handshake_args tls_args = {
+			.ta_sock = sock,
+			.ta_done = tls_handshake_done,
+			.ta_data = tls_wait,
+			.ta_peername = peername,
+			.ta_keyring = keyring,
+			.ta_my_privkey = privkey,
+			.ta_my_cert = certificate,
+	};
+
+	if (IS_ERR(sock_alloc_file(sock, O_NONBLOCK, NULL)))
+		return -EIO;
+
+	do {
+		err = hello(&tls_args, GFP_KERNEL);
+	} while (err == -EAGAIN);
+
+	return err;
+}
+
+static int tls_wait_hello(struct tls_handshake_wait *csocket_tls_wait,
+			  struct tls_handshake_wait *dsocket_tls_wait,
+			  unsigned long timeout)
+{
+	unsigned long remaining = wait_for_completion_timeout(
+		&csocket_tls_wait->done, timeout);
+	if (!remaining)
+		return -ETIMEDOUT;
+
+	if (!wait_for_completion_timeout(&dsocket_tls_wait->done, remaining))
+		return -ETIMEDOUT;
+
+	if (csocket_tls_wait->status)
+		return csocket_tls_wait->status;
+
+	return dsocket_tls_wait->status;
+}
+
+
+static int dtt_send_first_packet(struct drbd_tcp_transport *tcp_transport, struct socket *socket,
+				 enum drbd_packet cmd)
+{
+	struct p_header80 h;
+
+	h.magic = cpu_to_be32(DRBD_MAGIC);
+	h.command = cpu_to_be16(cmd);
+	h.length = 0;
+
+	return _dtt_send(tcp_transport, socket, &h, sizeof(h), 0);
+}
+
+/**
+ * dtt_socket_free() - Free the socket
+ * @socket:	pointer to the pointer to the socket.
+ */
+static void dtt_socket_free(struct socket **socket)
+{
+	if (!*socket)
+		return;
+
+	tls_handshake_cancel((*socket)->sk);
+	kernel_sock_shutdown(*socket, SHUT_RDWR);
+
+	if ((*socket)->file)
+		sockfd_put(*socket);
+	else
+		sock_release(*socket);
+
+	*socket = NULL;
+}
+
+/**
+ * dtt_socket_ok_or_free() - Free the socket if its connection is not okay
+ * @socket:	pointer to the pointer to the socket.
+ */
+static bool dtt_socket_ok_or_free(struct socket **socket)
+{
+	if (!*socket)
+		return false;
+
+	if ((*socket)->sk->sk_state == TCP_ESTABLISHED)
+		return true;
+
+	dtt_socket_free(socket);
+	return false;
+}
+
+static bool dtt_connection_established(struct drbd_transport *transport,
+				       struct socket **socket1,
+				       struct socket **socket2,
+				       struct dtt_path **first_path)
+{
+	struct net_conf *nc;
+	int timeout, good = 0;
+
+	if (!*socket1 || !*socket2)
+		return false;
+
+	rcu_read_lock();
+	nc = rcu_dereference(transport->net_conf);
+	timeout = (nc->sock_check_timeo ?: nc->ping_timeo) * HZ / 10;
+	rcu_read_unlock();
+	schedule_timeout_interruptible(timeout);
+
+	good += dtt_socket_ok_or_free(socket1);
+	good += dtt_socket_ok_or_free(socket2);
+
+	if (good == 0) {
+		kref_put(&(*first_path)->path.kref, drbd_destroy_path);
+		*first_path = NULL;
+	}
+
+	return good == 2;
+}
+
+static struct dtt_path *dtt_wait_connect_cond(struct drbd_transport *transport)
+{
+	struct drbd_listener *listener;
+	struct drbd_path *drbd_path;
+	struct dtt_path *path = NULL;
+	bool rv = false;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(drbd_path, &transport->paths, list) {
+		path = container_of(drbd_path, struct dtt_path, path);
+		listener = drbd_path->listener;
+
+		spin_lock_bh(&listener->waiters_lock);
+		rv = listener->pending_accepts > 0 || !list_empty(&path->sockets);
+		spin_unlock_bh(&listener->waiters_lock);
+
+		if (rv)
+			break;
+	}
+	if (rv)
+		kref_get(&path->path.kref);
+	rcu_read_unlock();
+
+	return rv ? path : NULL;
+}
+
+static void unregister_state_change(struct sock *sock, struct dtt_listener *listener)
+{
+	write_lock_bh(&sock->sk_callback_lock);
+	sock->sk_state_change = listener->original_sk_state_change;
+	sock->sk_user_data = NULL;
+	write_unlock_bh(&sock->sk_callback_lock);
+}
+
+static int dtt_wait_for_connect(struct drbd_transport *transport,
+				struct drbd_listener *drbd_listener, struct socket **socket,
+				struct dtt_path **ret_path)
+{
+	struct dtt_socket_container *socket_c;
+	struct sockaddr_storage peer_addr;
+	int connect_int, err = 0;
+	long timeo;
+	struct socket *s_estab = NULL;
+	struct net_conf *nc;
+	struct drbd_path *drbd_path2;
+	struct dtt_listener *listener = container_of(drbd_listener, struct dtt_listener, listener);
+	struct dtt_path *path = NULL;
+
+	rcu_read_lock();
+	nc = rcu_dereference(transport->net_conf);
+	if (!nc) {
+		rcu_read_unlock();
+		return -EINVAL;
+	}
+	connect_int = nc->connect_int;
+	rcu_read_unlock();
+
+	timeo = connect_int * HZ;
+	timeo += get_random_u32_below(2) ? timeo / 7 : -timeo / 7; /* 28.5% random jitter */
+
+retry:
+	if (path)
+		kref_put(&path->path.kref, drbd_destroy_path);
+	timeo = wait_event_interruptible_timeout(listener->wait,
+			(path = dtt_wait_connect_cond(transport)),
+			timeo);
+	if (timeo <= 0)
+		return -EAGAIN;
+
+	spin_lock_bh(&listener->listener.waiters_lock);
+	socket_c = list_first_entry_or_null(&path->sockets, struct dtt_socket_container, list);
+	if (socket_c) {
+		s_estab = socket_c->socket;
+		list_del(&socket_c->list);
+		kfree(socket_c);
+	} else if (listener->listener.pending_accepts > 0) {
+		listener->listener.pending_accepts--;
+		spin_unlock_bh(&listener->listener.waiters_lock);
+
+		s_estab = NULL;
+		err = kernel_accept(listener->s_listen, &s_estab, O_NONBLOCK);
+		if (err < 0) {
+			kref_put(&path->path.kref, drbd_destroy_path);
+			return err;
+		}
+
+		/* The established socket inherits the sk_state_change callback
+		   from the listening socket. */
+		unregister_state_change(s_estab->sk, listener);
+
+		s_estab->ops->getname(s_estab, (struct sockaddr *)&peer_addr, 2);
+
+		spin_lock_bh(&listener->listener.waiters_lock);
+		drbd_path2 = drbd_find_path_by_addr(&listener->listener, &peer_addr);
+		if (!drbd_path2) {
+			struct sockaddr_in6 *from_sin6;
+			struct sockaddr_in *from_sin;
+
+			switch (peer_addr.ss_family) {
+			case AF_INET6:
+				from_sin6 = (struct sockaddr_in6 *)&peer_addr;
+				tr_notice(transport, "Closing unexpected connection from "
+				       "%pI6\n", &from_sin6->sin6_addr);
+				break;
+			default:
+				from_sin = (struct sockaddr_in *)&peer_addr;
+				tr_notice(transport, "Closing unexpected connection from "
+					 "%pI4\n", &from_sin->sin_addr);
+				break;
+			}
+
+			goto retry_locked;
+		}
+		if (drbd_path2 != &path->path) {
+			struct dtt_path *path2 =
+				container_of(drbd_path2, struct dtt_path, path);
+
+			socket_c = kmalloc_obj(*socket_c, GFP_ATOMIC);
+			if (!socket_c) {
+				tr_info(transport,
+					"No mem, dropped an incoming connection\n");
+				goto retry_locked;
+			}
+
+			socket_c->socket = s_estab;
+			s_estab = NULL;
+			list_add_tail(&socket_c->list, &path2->sockets);
+			wake_up(&listener->wait);
+			goto retry_locked;
+		}
+		if (s_estab->sk->sk_state != TCP_ESTABLISHED)
+			goto retry_locked;
+	}
+	spin_unlock_bh(&listener->listener.waiters_lock);
+	*socket = s_estab;
+	if (*ret_path)
+		kref_put(&(*ret_path)->path.kref, drbd_destroy_path);
+	*ret_path = path;
+	return 0;
+
+retry_locked:
+	spin_unlock_bh(&listener->listener.waiters_lock);
+	dtt_socket_free(&s_estab);
+	goto retry;
+}
+
+static int dtt_receive_first_packet(struct drbd_tcp_transport *tcp_transport, struct socket *socket)
+{
+	struct drbd_transport *transport = &tcp_transport->transport;
+	struct p_header80 *h = tcp_transport->rbuf[DATA_STREAM].base;
+	const unsigned int header_size = sizeof(*h);
+	struct net_conf *nc;
+	int err;
+
+	rcu_read_lock();
+	nc = rcu_dereference(transport->net_conf);
+	if (!nc) {
+		rcu_read_unlock();
+		return -EIO;
+	}
+	socket->sk->sk_rcvtimeo = nc->ping_timeo * 4 * HZ / 10;
+	rcu_read_unlock();
+
+	err = dtt_recv_short(socket, h, header_size, 0);
+	if (err != header_size) {
+		if (err >= 0)
+			err = -EIO;
+		return err;
+	}
+	if (h->magic != cpu_to_be32(DRBD_MAGIC)) {
+		tr_err(transport, "Wrong magic value 0x%08x in receive_first_packet\n",
+			 be32_to_cpu(h->magic));
+		return -EINVAL;
+	}
+	return be16_to_cpu(h->command);
+}
+
+
+static int dtt_control_tcp_input(read_descriptor_t *rd_desc, struct sk_buff *skb,
+				 unsigned int offset, size_t len)
+{
+	struct drbd_transport *transport = rd_desc->arg.data;
+	unsigned int avail, consumed = 0;
+	struct skb_seq_state seq;
+
+	skb_prepare_seq_read(skb, offset, offset + len, &seq);
+	do {
+		struct drbd_const_buffer buffer;
+
+		/*
+		 * skb_seq_read() returns the length of the block assigned to buffer. This might
+		 * be more than is actually ready, so we ensure we only mark as available what
+		 * is ready.
+		 */
+		avail = skb_seq_read(consumed, &buffer.buffer, &seq);
+		if (!avail)
+			break;
+		buffer.avail = min_t(unsigned int, avail, len - consumed);
+		consumed += buffer.avail;
+		drbd_control_data_ready(transport, &buffer);
+	} while (consumed < len);
+	skb_abort_seq_read(&seq);
+
+	return consumed;
+}
+
+static void dtt_control_data_ready_work(struct work_struct *item)
+{
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(item, struct drbd_tcp_transport, control_data_ready_work);
+	struct socket *csocket = tcp_transport->stream[CONTROL_STREAM];
+	struct drbd_const_buffer drbd_buffer;
+	int n;
+
+	while (true) {
+		n = dtt_recv_short(csocket, tcp_transport->rbuf[CONTROL_STREAM].base, PAGE_SIZE,
+				   MSG_DONTWAIT | MSG_NOSIGNAL);
+		if (n <= 0)
+			break;
+
+		drbd_buffer.buffer = tcp_transport->rbuf[CONTROL_STREAM].base;
+		drbd_buffer.avail = n;
+		drbd_control_data_ready(&tcp_transport->transport, &drbd_buffer);
+	}
+}
+
+static void dtt_control_data_ready(struct sock *sock)
+{
+	struct drbd_transport *transport = sock->sk_user_data;
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(transport, struct drbd_tcp_transport, transport);
+
+	read_descriptor_t rd_desc = {
+		.count = 1,
+		.arg = { .data = transport },
+	};
+
+	if (!test_bit(DTT_DATA_READY_ARMED, &tcp_transport->flags)
+	    && tcp_transport->original_control_sk_data_ready)
+		return tcp_transport->original_control_sk_data_ready(sock);
+
+	/* We have two different paths depending on whether TLS is enabled.
+	 * With TLS we can't use read_sock: firstly, it is not implemented
+	 * for the TLS protocol on most kernels; secondly, the
+	 * implementation that does exist is not safe to call from softirq
+	 * context. Instead, we schedule a work item that drains the
+	 * control socket in process context.
+	 *
+	 * In plain TCP mode, we can simply use tcp_read_sock(), as that is
+	 * safe to call from softirq context.
+	 */
+	mod_timer(&tcp_transport->control_timer, jiffies + sock->sk_rcvtimeo);
+	if (tcp_transport->control_data_ready_work.func) {
+		queue_work(dtt_csocket_recv, &tcp_transport->control_data_ready_work);
+	} else {
+		spin_lock_bh(&tcp_transport->control_recv_lock);
+		tcp_read_sock(sock, &rd_desc, dtt_control_tcp_input);
+		spin_unlock_bh(&tcp_transport->control_recv_lock);
+	}
+}
+
+static void dtt_control_state_change(struct sock *sock)
+{
+	struct drbd_transport *transport = sock->sk_user_data;
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(transport, struct drbd_tcp_transport, transport);
+
+	switch (sock->sk_state) {
+	case TCP_FIN_WAIT1:
+	case TCP_CLOSE_WAIT:
+	case TCP_CLOSE:
+	case TCP_LAST_ACK:
+	case TCP_CLOSING:
+		drbd_control_event(transport, CLOSED_BY_PEER);
+		break;
+	default:
+		tr_warn(transport, "unhandled state %d\n", sock->sk_state);
+	}
+
+	tcp_transport->original_control_sk_state_change(sock);
+}
+
+static void dtt_incoming_connection(struct sock *sock)
+{
+	struct dtt_listener *listener = sock->sk_user_data;
+	void (*state_change)(struct sock *sock);
+
+	state_change = listener->original_sk_state_change;
+	state_change(sock);
+
+	spin_lock(&listener->listener.waiters_lock);
+	listener->listener.pending_accepts++;
+	spin_unlock(&listener->listener.waiters_lock);
+	wake_up(&listener->wait);
+}
+
+static void dtt_control_timer_fn(struct timer_list *t)
+{
+	struct drbd_tcp_transport *tcp_transport = timer_container_of(tcp_transport, t,
+			control_timer);
+	struct drbd_transport *transport = &tcp_transport->transport;
+
+	drbd_control_event(transport, TIMEOUT);
+}
+
+static void dtt_destroy_listener(struct drbd_listener *generic_listener)
+{
+	struct dtt_listener *listener =
+		container_of(generic_listener, struct dtt_listener, listener);
+
+	if (!listener->s_listen)
+		return;
+	unregister_state_change(listener->s_listen->sk, listener);
+	sock_release(listener->s_listen);
+}
+
+static int dtt_init_listener(struct drbd_transport *transport,
+			     const struct sockaddr *addr,
+			     struct net *net,
+			     struct drbd_listener *drbd_listener)
+{
+	int err, sndbuf_size, rcvbuf_size, addr_len;
+	struct sockaddr_storage my_addr;
+	struct dtt_listener *listener = container_of(drbd_listener, struct dtt_listener, listener);
+	struct socket *s_listen;
+	struct net_conf *nc;
+	const char *what = "";
+
+	rcu_read_lock();
+	nc = rcu_dereference(transport->net_conf);
+	if (!nc) {
+		rcu_read_unlock();
+		return -EINVAL;
+	}
+	sndbuf_size = nc->sndbuf_size;
+	rcvbuf_size = nc->rcvbuf_size;
+	rcu_read_unlock();
+
+	my_addr = *(struct sockaddr_storage *)addr;
+
+	err = sock_create_kern(net, my_addr.ss_family, SOCK_STREAM, IPPROTO_TCP, &s_listen);
+	if (err < 0) {
+		s_listen = NULL;
+		what = "sock_create_kern";
+		goto out;
+	}
+
+	s_listen->sk->sk_reuse = SK_CAN_REUSE; /* SO_REUSEADDR */
+	dtt_setbufsize(s_listen, sndbuf_size, rcvbuf_size);
+
+	addr_len = addr->sa_family == AF_INET6 ? sizeof(struct sockaddr_in6)
+		: sizeof(struct sockaddr_in);
+
+	err = s_listen->ops->bind(s_listen, (struct sockaddr_unsized *)&my_addr, addr_len);
+	if (err < 0) {
+		what = "bind before listen";
+		goto out;
+	}
+
+	listener->s_listen = s_listen;
+	write_lock_bh(&s_listen->sk->sk_callback_lock);
+	listener->original_sk_state_change = s_listen->sk->sk_state_change;
+	s_listen->sk->sk_state_change = dtt_incoming_connection;
+	s_listen->sk->sk_user_data = listener;
+	write_unlock_bh(&s_listen->sk->sk_callback_lock);
+
+	err = s_listen->ops->listen(s_listen, DRBD_PEERS_MAX * 2);
+	if (err < 0) {
+		what = "listen";
+		goto out;
+	}
+
+	listener->listener.listen_addr = my_addr;
+	init_waitqueue_head(&listener->wait);
+
+	return 0;
+out:
+	if (s_listen)
+		sock_release(s_listen);
+
+	if (err < 0 &&
+	    err != -EAGAIN && err != -EINTR && err != -ERESTARTSYS && err != -EADDRINUSE &&
+	    err != -EADDRNOTAVAIL)
+		tr_err(transport, "%s failed, err = %d\n", what, err);
+
+	return err;
+}
+
+static void dtt_cleanup_accepted_sockets(struct dtt_path *path)
+{
+	while (!list_empty(&path->sockets)) {
+		struct dtt_socket_container *socket_c =
+			list_first_entry(&path->sockets, struct dtt_socket_container, list);
+
+		list_del(&socket_c->list);
+		dtt_socket_free(&socket_c->socket);
+		kfree(socket_c);
+	}
+}
+
+static void dtt_finish_connect(struct drbd_transport *transport)
+{
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(transport, struct drbd_tcp_transport, transport);
+	struct dtt_path *path;
+
+	clear_bit(DTT_CONNECTING, &tcp_transport->flags);
+
+	list_for_each_entry(path, &transport->paths, path.list) {
+		drbd_put_listener(&path->path);
+		dtt_cleanup_accepted_sockets(path);
+	}
+}
+
+static struct dtt_path *dtt_next_path(struct dtt_path *path, struct drbd_transport *transport)
+{
+	struct drbd_path *drbd_path;
+
+	drbd_path = __drbd_next_path_ref(path ? &path->path : NULL, transport);
+
+	/* Loop when we reach the end. */
+	if (!drbd_path)
+		drbd_path = __drbd_next_path_ref(NULL, transport);
+
+	return drbd_path ? container_of(drbd_path, struct dtt_path, path) : NULL;
+}
+
+static int dtt_prepare_connect(struct drbd_transport *transport)
+{
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(transport, struct drbd_tcp_transport, transport);
+	struct dtt_path *path;
+	struct drbd_path *drbd_path;
+
+	list_for_each_entry(path, &transport->paths, path.list)
+		dtt_cleanup_accepted_sockets(path);
+
+	set_bit(DTT_CONNECTING, &tcp_transport->flags);
+
+	list_for_each_entry(drbd_path, &transport->paths, list) {
+		if (!drbd_path->listener) {
+			int err = drbd_get_listener(drbd_path);
+
+			if (err)
+				return err;
+		}
+	}
+
+	return 0;
+}
+
+static int dtt_connect(struct drbd_transport *transport)
+{
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(transport, struct drbd_tcp_transport, transport);
+	struct dtt_path *connect_to_path, *first_path = NULL;
+	struct socket *dsocket, *csocket;
+	struct net_conf *nc;
+	bool tls, dsocket_is_server = false, csocket_is_server = false;
+	char peername[64];
+	key_serial_t tls_keyring, tls_privkey, tls_certificate;
+	int timeout, err;
+	bool ok;
+	struct tls_handshake_wait csocket_tls_wait = { .status = 0 };
+	struct tls_handshake_wait dsocket_tls_wait = { .status = 0 };
+
+	dsocket = NULL;
+	csocket = NULL;
+
+	connect_to_path = dtt_next_path(NULL, transport);
+	if (!connect_to_path) {
+		err = -EDESTADDRREQ;
+		goto out;
+	}
+
+	do {
+		struct socket *s = NULL;
+
+		err = dtt_try_connect(connect_to_path, &s);
+		if (err < 0 && err != -EAGAIN)
+			goto out_release_sockets;
+
+		if (s) {
+			bool use_for_data;
+
+			if (first_path) {
+				if (first_path != connect_to_path) {
+					tr_info(transport, "initial paths crossed A - fail over\n");
+					dtt_socket_free(&dsocket);
+					dtt_socket_free(&csocket);
+				}
+
+				kref_put(&first_path->path.kref, drbd_destroy_path);
+				first_path = NULL;
+			}
+
+			kref_get(&connect_to_path->path.kref);
+			first_path = connect_to_path;
+
+			if (!dsocket && !csocket) {
+				use_for_data = dtt_path_cmp_addr(first_path);
+			} else if (!dsocket) {
+				use_for_data = true;
+			} else {
+				if (csocket) {
+					tr_err(transport, "Logic error in conn_connect()\n");
+					goto out_eagain;
+				}
+				use_for_data = false;
+			}
+
+			if (!use_for_data)
+				clear_bit(RESOLVE_CONFLICTS, &transport->flags);
+
+			err = dtt_send_first_packet(tcp_transport,
+						    s,
+						    use_for_data ? P_INITIAL_DATA : P_INITIAL_META);
+
+			if (err < 0) {
+				tr_warn(transport, "Error sending initial packet: %d\n", err);
+				dtt_socket_free(&s);
+			} else if (use_for_data) {
+				dsocket = s;
+				dsocket_is_server = false;
+			} else {
+				csocket = s;
+				csocket_is_server = false;
+			}
+		} else if (!first_path) {
+			connect_to_path = dtt_next_path(connect_to_path, transport);
+
+			/*
+			 * The final path should not be removed while
+			 * connecting, but handle the case for robustness.
+			 */
+			err = -EDESTADDRREQ;
+			if (!connect_to_path)
+				goto out_release_sockets;
+		}
+
+		if (dtt_connection_established(transport, &dsocket, &csocket, &first_path))
+			break;
+
+retry:
+		s = NULL;
+		err = dtt_wait_for_connect(transport, connect_to_path->path.listener, &s, &connect_to_path);
+		if (err < 0 && err != -EAGAIN)
+			goto out_release_sockets;
+
+		if (s) {
+			int fp = dtt_receive_first_packet(tcp_transport, s);
+
+			if (first_path) {
+				if (first_path != connect_to_path) {
+					tr_info(transport, "initial paths crossed P - fail over\n");
+					dtt_socket_free(&dsocket);
+					dtt_socket_free(&csocket);
+				}
+
+				kref_put(&first_path->path.kref, drbd_destroy_path);
+				first_path = NULL;
+			}
+
+			kref_get(&connect_to_path->path.kref);
+			first_path = connect_to_path;
+
+			dtt_socket_ok_or_free(&dsocket);
+			dtt_socket_ok_or_free(&csocket);
+			switch (fp) {
+			case P_INITIAL_DATA:
+				if (dsocket) {
+					tr_warn(transport, "initial packet S crossed\n");
+					dtt_socket_free(&dsocket);
+					dsocket = s;
+					dsocket_is_server = true;
+					goto randomize;
+				}
+				dsocket = s;
+				dsocket_is_server = true;
+				break;
+			case P_INITIAL_META:
+				set_bit(RESOLVE_CONFLICTS, &transport->flags);
+				if (csocket) {
+					tr_warn(transport, "initial packet M crossed\n");
+					dtt_socket_free(&csocket);
+					csocket = s;
+					csocket_is_server = true;
+					goto randomize;
+				}
+				csocket = s;
+				csocket_is_server = true;
+				break;
+			default:
+				tr_warn(transport, "Error receiving initial packet: %d\n", fp);
+				dtt_socket_free(&s);
+randomize:
+				if (get_random_u32_below(2))
+					goto retry;
+			}
+		}
+
+		if (drbd_should_abort_listening(transport))
+			goto out_eagain;
+
+		ok = dtt_connection_established(transport, &dsocket, &csocket, &first_path);
+	} while (!ok);
+
+	rcu_read_lock();
+	nc = rcu_dereference(transport->net_conf);
+	timeout = nc->timeout * HZ / 10;
+	tls = nc->tls;
+	memcpy(peername, nc->name, 64);
+	tls_keyring = nc->tls_keyring;
+	tls_privkey = nc->tls_privkey;
+	tls_certificate = nc->tls_certificate;
+	rcu_read_unlock();
+
+	write_lock_bh(&csocket->sk->sk_callback_lock);
+	clear_bit(DTT_DATA_READY_ARMED, &tcp_transport->flags);
+	tcp_transport->original_control_sk_data_ready = csocket->sk->sk_data_ready;
+	csocket->sk->sk_user_data = transport;
+	csocket->sk->sk_data_ready = dtt_control_data_ready;
+	write_unlock_bh(&csocket->sk->sk_callback_lock);
+
+	if (tls) {
+		csocket_tls_wait.done = COMPLETION_INITIALIZER_ONSTACK(csocket_tls_wait.done);
+		dsocket_tls_wait.done = COMPLETION_INITIALIZER_ONSTACK(dsocket_tls_wait.done);
+
+		err = tls_init_hello(
+			csocket, peername, tls_keyring, tls_privkey, tls_certificate,
+			csocket_is_server ? tls_server_hello_x509 : tls_client_hello_x509,
+			&csocket_tls_wait);
+		if (err < 0) {
+			tr_warn(transport, "Error from control socket tls handshake: %d\n", err);
+			goto out_release_sockets;
+		}
+
+		err = tls_init_hello(
+			dsocket, peername, tls_keyring, tls_privkey, tls_certificate,
+			dsocket_is_server ? tls_server_hello_x509 : tls_client_hello_x509,
+			&dsocket_tls_wait);
+		if (err < 0) {
+			tr_warn(transport, "Error from data socket tls handshake: %d\n", err);
+			goto out_release_sockets;
+		}
+
+		err = tls_wait_hello(&csocket_tls_wait, &dsocket_tls_wait, timeout);
+		if (err < 0) {
+			tr_warn(transport, "Error from tls handshake: %d\n", err);
+			goto out_release_sockets;
+		}
+
+		INIT_WORK(&tcp_transport->control_data_ready_work, dtt_control_data_ready_work);
+	}
+
+	TR_ASSERT(transport, first_path == connect_to_path);
+	set_bit(TR_ESTABLISHED, &connect_to_path->path.flags);
+	drbd_path_event(transport, &connect_to_path->path);
+
+	dsocket->sk->sk_reuse = SK_CAN_REUSE; /* SO_REUSEADDR */
+	csocket->sk->sk_reuse = SK_CAN_REUSE; /* SO_REUSEADDR */
+
+	/* We are a block device, we are in the write-out path,
+	 * we may need memory to facilitate memory reclaim
+	 */
+	dsocket->sk->sk_allocation = GFP_ATOMIC;
+	csocket->sk->sk_allocation = GFP_ATOMIC;
+
+	dsocket->sk->sk_use_task_frag = false;
+	csocket->sk->sk_use_task_frag = false;
+
+	sk_set_memalloc(dsocket->sk);
+	sk_set_memalloc(csocket->sk);
+
+	dsocket->sk->sk_priority = TC_PRIO_INTERACTIVE_BULK;
+	csocket->sk->sk_priority = TC_PRIO_INTERACTIVE;
+
+	/* NOT YET ...
+	 * sock.socket->sk->sk_sndtimeo = transport->net_conf->timeout*HZ/10;
+	 * sock.socket->sk->sk_rcvtimeo = MAX_SCHEDULE_TIMEOUT;
+	 * first set it to the P_CONNECTION_FEATURES timeout,
+	 * which we set to 4x the configured ping_timeout. */
+
+	/* we don't want delays.
+	 * we use tcp_sock_set_cork where appropriate, though */
+	tcp_sock_set_nodelay(dsocket->sk);
+	tcp_sock_set_nodelay(csocket->sk);
+
+	tcp_transport->stream[DATA_STREAM] = dsocket;
+	tcp_transport->stream[CONTROL_STREAM] = csocket;
+
+	dsocket->sk->sk_sndtimeo = timeout;
+	csocket->sk->sk_sndtimeo = timeout;
+
+	sock_set_keepalive(dsocket->sk);
+
+	if (drbd_keepidle)
+		tcp_sock_set_keepidle(dsocket->sk, drbd_keepidle);
+	if (drbd_keepcnt)
+		tcp_sock_set_keepcnt(dsocket->sk, drbd_keepcnt);
+	if (drbd_keepintvl)
+		tcp_sock_set_keepintvl(dsocket->sk, drbd_keepintvl);
+
+	write_lock_bh(&csocket->sk->sk_callback_lock);
+	tcp_transport->original_control_sk_state_change = csocket->sk->sk_state_change;
+	csocket->sk->sk_state_change = dtt_control_state_change;
+	set_bit(DTT_DATA_READY_ARMED, &tcp_transport->flags);
+	write_unlock_bh(&csocket->sk->sk_callback_lock);
+
+	err = 0;
+	goto out;
+
+out_eagain:
+	err = -EAGAIN;
+
+out_release_sockets:
+	dtt_socket_free(&dsocket);
+	dtt_socket_free(&csocket);
+
+out:
+	if (first_path)
+		kref_put(&first_path->path.kref, drbd_destroy_path);
+	if (connect_to_path)
+		kref_put(&connect_to_path->path.kref, drbd_destroy_path);
+
+	return err;
+}
+
+static int dtt_net_conf_change(struct drbd_transport *transport, struct net_conf *new_net_conf)
+{
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(transport, struct drbd_tcp_transport, transport);
+	struct net_conf *old_net_conf;
+	struct socket *data_socket = tcp_transport->stream[DATA_STREAM];
+	struct socket *control_socket = tcp_transport->stream[CONTROL_STREAM];
+
+	rcu_read_lock();
+	old_net_conf = rcu_dereference(transport->net_conf);
+	rcu_read_unlock();
+
+	if (old_net_conf && old_net_conf->tls != new_net_conf->tls &&
+	    (data_socket || control_socket)) {
+		tr_warn(transport, "cannot switch tls (%s -> %s) while connected\n",
+			old_net_conf->tls ? "yes" : "no", new_net_conf->tls ? "yes" : "no");
+		return -EINVAL;
+	}
+
+	if (data_socket)
+		dtt_setbufsize(data_socket, new_net_conf->sndbuf_size, new_net_conf->rcvbuf_size);
+
+	if (control_socket)
+		dtt_setbufsize(control_socket, new_net_conf->sndbuf_size, new_net_conf->rcvbuf_size);
+
+	return 0;
+}
+
+static void dtt_set_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream, long timeout)
+{
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(transport, struct drbd_tcp_transport, transport);
+	struct socket *socket = tcp_transport->stream[stream];
+
+	if (!socket)
+		return;
+
+	socket->sk->sk_rcvtimeo = timeout;
+
+	if (stream == CONTROL_STREAM)
+		mod_timer(&tcp_transport->control_timer, jiffies + timeout);
+}
+
+static long dtt_get_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream)
+{
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(transport, struct drbd_tcp_transport, transport);
+	struct socket *socket = tcp_transport->stream[stream];
+
+	if (!socket)
+		return -ENOTCONN;
+
+	return socket->sk->sk_rcvtimeo;
+}
+
+static bool dtt_stream_ok(struct drbd_transport *transport, enum drbd_stream stream)
+{
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(transport, struct drbd_tcp_transport, transport);
+	struct socket *socket = tcp_transport->stream[stream];
+
+	return socket && socket->sk;
+}
+
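+/* Mark the transport congested once the data socket's send queue holds
+ * more than 4/5 of sk_sndbuf; e.g. with a 1 MiB send buffer the threshold
+ * is roughly 820 KiB. dtt_send_page() clears NET_CONGESTED again when it
+ * is done sending.
+ */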
+static void dtt_update_congested(struct drbd_tcp_transport *tcp_transport)
+{
+	struct socket *socket = tcp_transport->stream[DATA_STREAM];
+	struct sock *sock;
+
+	if (!socket)
+		return;
+
+	sock = socket->sk;
+	if (sock->sk_wmem_queued > sock->sk_sndbuf * 4 / 5)
+		set_bit(NET_CONGESTED, &tcp_transport->transport.flags);
+}
+
+static int dtt_send_page(struct drbd_transport *transport, enum drbd_stream stream,
+			 struct page *page, int offset, size_t size, unsigned msg_flags)
+{
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(transport, struct drbd_tcp_transport, transport);
+	struct socket *socket = tcp_transport->stream[stream];
+	struct msghdr msg = { .msg_flags = msg_flags | MSG_NOSIGNAL | MSG_SPLICE_PAGES };
+	struct bio_vec bvec;
+	int len = size;
+	int err = -EIO;
+
+	if (!socket)
+		return -ENOTCONN;
+
+	dtt_update_congested(tcp_transport);
+	do {
+		int sent;
+
+		bvec_set_page(&bvec, page, len, offset);
+		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, len);
+
+		sent = sock_sendmsg(socket, &msg);
+		if (sent <= 0) {
+			if (sent == -EAGAIN) {
+				if (drbd_stream_send_timed_out(transport, stream))
+					break;
+				continue;
+			}
+			tr_warn(transport, "%s: size=%d len=%d sent=%d\n",
+				__func__, (int)size, len, sent);
+			if (sent < 0)
+				err = sent;
+			break;
+		}
+		len    -= sent;
+		offset += sent;
+		/* NOTE: it may take up to twice the socket timeout to have it
+		 * return -EAGAIN, the first timeout will likely happen with a
+		 * partial send, masking the timeout.  Maybe we want to export
+		 * drbd_stream_should_continue_after_partial_send(transport, stream)
+		 * and add that to the while() condition below.
+		 */
+	} while (len > 0 /* THINK && peer_device->repl_state[NOW] >= L_ESTABLISHED */);
+	clear_bit(NET_CONGESTED, &tcp_transport->transport.flags);
+
+	if (len == 0)
+		err = 0;
+
+	return err;
+}
+
+static int dtt_send_zc_bio(struct drbd_transport *transport, struct bio *bio)
+{
+	struct bio_vec bvec;
+	struct bvec_iter iter;
+
+	bio_for_each_segment(bvec, bio, iter) {
+		int err;
+
+		err = dtt_send_page(transport, DATA_STREAM, bvec.bv_page,
+				      bvec.bv_offset, bvec.bv_len,
+				      bio_iter_last(bvec, iter) ? 0 : MSG_MORE);
+		if (err)
+			return err;
+	}
+	return 0;
+}
+
+static bool dtt_hint(struct drbd_transport *transport, enum drbd_stream stream,
+		enum drbd_tr_hints hint)
+{
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(transport, struct drbd_tcp_transport, transport);
+	struct socket *socket = tcp_transport->stream[stream];
+
+	if (!socket)
+		return false;
+
+	switch (hint) {
+	case CORK:
+		tcp_sock_set_cork(socket->sk, true);
+		break;
+	case UNCORK:
+		tcp_sock_set_cork(socket->sk, false);
+		break;
+	case NODELAY:
+		tcp_sock_set_nodelay(socket->sk);
+		break;
+	case NOSPACE:
+		if (socket->sk->sk_socket)
+			set_bit(SOCK_NOSPACE, &socket->sk->sk_socket->flags);
+		break;
+	case QUICKACK:
+		tcp_sock_set_quickack(socket->sk, 2);
+		break;
+	default: /* not implemented, but should not trigger error handling */
+		break;
+	}
+
+	return true;
+}
+
+static void dtt_debugfs_show_stream(struct seq_file *m, struct socket *socket)
+{
+	struct sock *sk = socket->sk;
+	struct tcp_sock *tp = tcp_sk(sk);
+
+	seq_printf(m, "unread receive buffer: %u Byte\n",
+		   tp->rcv_nxt - tp->copied_seq);
+	seq_printf(m, "unacked send buffer: %u Byte\n",
+		   tp->write_seq - tp->snd_una);
+	seq_printf(m, "send buffer size: %u Byte\n", sk->sk_sndbuf);
+	seq_printf(m, "send buffer used: %u Byte\n", sk->sk_wmem_queued);
+}
+
+static void dtt_debugfs_show(struct drbd_transport *transport, struct seq_file *m)
+{
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(transport, struct drbd_tcp_transport, transport);
+	enum drbd_stream i;
+
+	/* BUMP me if you change the file format/content/presentation */
+	seq_printf(m, "v: %u\n\n", 0);
+
+	for (i = DATA_STREAM; i <= CONTROL_STREAM; i++) {
+		struct socket *socket = tcp_transport->stream[i];
+
+		if (socket) {
+			seq_printf(m, "%s stream\n", i == DATA_STREAM ? "data" : "control");
+			dtt_debugfs_show_stream(m, socket);
+		}
+	}
+}
+
+static int dtt_add_path(struct drbd_path *drbd_path)
+{
+	struct drbd_transport *transport = drbd_path->transport;
+	struct drbd_tcp_transport *tcp_transport =
+		container_of(transport, struct drbd_tcp_transport, transport);
+	struct dtt_path *path = container_of(drbd_path, struct dtt_path, path);
+
+	clear_bit(TR_ESTABLISHED, &drbd_path->flags);
+	INIT_LIST_HEAD(&path->sockets);
+
+	if (!test_bit(DTT_CONNECTING, &tcp_transport->flags))
+		return 0;
+
+	return drbd_get_listener(drbd_path);
+}
+
+static bool dtt_may_remove_path(struct drbd_path *drbd_path)
+{
+	return !test_bit(TR_ESTABLISHED, &drbd_path->flags);
+}
+
+static void dtt_remove_path(struct drbd_path *drbd_path)
+{
+	drbd_put_listener(drbd_path);
+}
+
+static int __init dtt_initialize(void)
+{
+	int err;
+
+	dtt_csocket_recv = alloc_workqueue("dtt_csocket_recv",
+					   WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
+	if (!dtt_csocket_recv)
+		return -ENOMEM;
+
+	err = drbd_register_transport_class(&tcp_transport_class,
+					    DRBD_TRANSPORT_API_VERSION,
+					    sizeof(struct drbd_transport));
+	if (err)
+		destroy_workqueue(dtt_csocket_recv);
+	return err;
+}
+
+static void __exit dtt_cleanup(void)
+{
+	drbd_unregister_transport_class(&tcp_transport_class);
+	destroy_workqueue(dtt_csocket_recv);
+}
+
+module_init(dtt_initialize);
+module_exit(dtt_cleanup);
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 06/20] drbd: add RDMA transport implementation
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
                   ` (4 preceding siblings ...)
  2026-03-27 22:38 ` [PATCH 05/20] drbd: add TCP transport implementation Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 07/20] drbd: add load-balancing TCP transport Christoph Böhmwalder
                   ` (13 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder, Joel Colledge

Add a separate module implementing DRBD's transport abstraction
over InfiniBand/RDMA using the kernel's rdma_cm and IB verbs APIs.

The implementation uses send/receive semantics rather than RDMA WRITE
or READ, keeping the model compatible with the existing TCP transport.

The RDMA transport multiplexes DRBD's data and control streams over a
single RDMA connection using immediate data to tag and sequence
messages per stream.
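
For illustration, the receive side demultiplexes a completion carrying
immediate data roughly as follows (a sketch, not the literal completion
handler code; "wc" is the receive's ib_wc):

  union dtr_immediate imm = { .i = be32_to_cpu(wc->ex.imm_data) };

  switch (imm.stream) {
  case ST_DATA:      /* data stream, ordered by imm.sequence */
  case ST_CONTROL:   /* control stream, ordered by imm.sequence */
          /* queue the receive descriptor for the layer above */
          break;
  case ST_FLOW_CTRL: /* transport private, consumed by the transport */
          break;
  }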

Co-developed-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Co-developed-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Co-developed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Co-developed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/Kconfig               |   10 +
 drivers/block/drbd/Makefile              |    1 +
 drivers/block/drbd/drbd_transport_rdma.c | 3524 ++++++++++++++++++++++
 3 files changed, 3535 insertions(+)
 create mode 100644 drivers/block/drbd/drbd_transport_rdma.c

diff --git a/drivers/block/drbd/Kconfig b/drivers/block/drbd/Kconfig
index f69e50be190e..203cfa2bf228 100644
--- a/drivers/block/drbd/Kconfig
+++ b/drivers/block/drbd/Kconfig
@@ -83,3 +83,13 @@ config BLK_DEV_DRBD_TCP
 	  for DRBD replication over TCP/IP networks.
 
 	  If unsure, say Y.
+
+config BLK_DEV_DRBD_RDMA
+	tristate "DRBD RDMA transport"
+	depends on BLK_DEV_DRBD && INFINIBAND && INFINIBAND_ADDR_TRANS
+	help
+	  RDMA transport support for DRBD. This enables DRBD replication
+	  over RDMA-capable networks for lower latency and higher throughput.
+
+	  If unsure, say N.
diff --git a/drivers/block/drbd/Makefile b/drivers/block/drbd/Makefile
index 35f1c60d4142..d47d311f76ea 100644
--- a/drivers/block/drbd/Makefile
+++ b/drivers/block/drbd/Makefile
@@ -10,3 +10,4 @@ drbd-$(CONFIG_DEBUG_FS) += drbd_debugfs.o
 obj-$(CONFIG_BLK_DEV_DRBD)     += drbd.o
 
 obj-$(CONFIG_BLK_DEV_DRBD_TCP) += drbd_transport_tcp.o
+obj-$(CONFIG_BLK_DEV_DRBD_RDMA) += drbd_transport_rdma.o
diff --git a/drivers/block/drbd/drbd_transport_rdma.c b/drivers/block/drbd/drbd_transport_rdma.c
new file mode 100644
index 000000000000..21790a769d63
--- /dev/null
+++ b/drivers/block/drbd/drbd_transport_rdma.c
@@ -0,0 +1,3524 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+   drbd_transport_rdma.c
+
+   This file is part of DRBD.
+
+   Copyright (C) 2014-2021, LINBIT HA-Solutions GmbH.
+*/
+
+#undef pr_fmt
+#define pr_fmt(fmt)	"drbd_rdma: " fmt
+
+#ifndef SENDER_COMPACTS_BVECS
+/* My benchmarking shows a limit of 30 MB/s
+ * with the current implementation of this idea.
+ * cpu bound, perf top shows mainly get_page/put_page.
+ * Without this, using the plain send_page,
+ * I achieve > 400 MB/s on the same system.
+ * => disable for now, improve later.
+ */
+#define SENDER_COMPACTS_BVECS 0
+#endif
+
+#include <linux/module.h>
+#include <linux/sched/signal.h>
+#include <linux/bio.h>
+#include <rdma/ib_verbs.h>
+#include <rdma/rdma_cm.h>
+#include <rdma/ib_cm.h>
+#include <linux/interrupt.h>
+#include <linux/drbd_genl_api.h>
+#include "drbd_protocol.h"
+#include "drbd_transport.h"
+#include "linux/drbd_config.h" /* for REL_VERSION */
+
+/* Nearly all data transfer uses the send/receive semantics. No need to
+   actually use RDMA WRITE / READ.
+
+   Only for DRBD's remote read (P_DATA_REQUEST and P_DATA_REPLY) an
+   RDMA WRITE would make a lot of sense:
+     Right now the recv_dless_read() function in DRBD is one of the few
+     remaining callers of recv(,,CALLER_BUFFER). This in turn needs a
+     memcpy().
+
+   The block_id field (64 bit) could be re-labelled to be the RKEY for
+   an RDMA WRITE. The P_DATA_REPLY packet would then only deliver the
+   news that the RDMA WRITE was executed...
+
+
+   Flow Control
+   ============
+
+   If the receiving machine cannot keep up with the data rate, it needs to
+   slow down the sending machine. In order to do so we keep track of the
+   number of rx_descs the peer has posted (peer_rx_descs).
+
+   If one peer posts new rx_descs, it tells the other about them with a
+   dtr_flow_control packet. Those packets are never delivered to the
+   DRBD layer above us.
+*/
+
+MODULE_AUTHOR("Roland Kammerer <roland.kammerer@linbit.com>");
+MODULE_AUTHOR("Philipp Reisner <philipp.reisner@linbit.com>");
+MODULE_AUTHOR("Lars Ellenberg <lars.ellenberg@linbit.com>");
+MODULE_DESCRIPTION("RDMA transport layer for DRBD");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(REL_VERSION);
+
+static int allocation_size;
+/* module_param(allocation_size, int, 0664);
+   MODULE_PARM_DESC(allocation_size, "Allocation size for receive buffers (page size of peer)");
+
+   That needs to be implemented in dtr_create_rx_desc() and in dtr_recv() and dtr_recv_pages() */
+
+/* If no recvbuf_size or sendbuf_size is configured, use 1M plus two pages for the DATA_STREAM. */
+/* It is not actually a buffer, but the number of tx_descs or rx_descs we allow;
+   very comparable to the socket sendbuf and recvbuf sizes. */
+#define RDMA_DEF_BUFFER_SIZE (DRBD_MAX_BIO_SIZE + 2 * PAGE_SIZE)
+
+/* If we can send less than 8 packets, we consider the transport as congested. */
+#define DESCS_LOW_LEVEL 8
+
+/* Assuming that a single 4k write is scattered over at most 8 pages,
+   i.e. has no parts smaller than 512 bytes. Arbitrary assumption.
+   It seems that Mellanox hardware can do up to 29 SGEs.
+   Note: the ppc64 page size might be 64k. */
+#if (PAGE_SIZE / 512) > 28
+# define DTR_MAX_TX_SGES 28
+#else
+# define DTR_MAX_TX_SGES (PAGE_SIZE / 512)
+#endif
+
+#define DTR_MAGIC ((u32)0x5257494E)
+
+struct dtr_flow_control {
+	uint32_t magic;
+	uint32_t new_rx_descs[2];
+	uint32_t send_from_stream;
+} __packed;
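+
+/*
+ * Illustrative sketch (see dtr_send_flow_control_msg() for the real thing;
+ * wire byte order handling is omitted here): announcing 16 newly posted
+ * rx_descs on the data stream amounts to
+ *
+ *	struct dtr_flow_control fc = {
+ *		.magic = DTR_MAGIC,
+ *		.new_rx_descs[DATA_STREAM] = 16,
+ *	};
+ *	dtr_send(path, &fc, sizeof(fc), GFP_NOIO);
+ *
+ * dtr_send() tags the message with ST_FLOW_CTRL, so the receiving
+ * transport consumes it without handing it up to DRBD.
+ */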
+
+/* These numbers are sent within the immediate data value to identify
+   whether the packet belongs to the data stream, the control stream, or
+   is a (transport private) flow_control message */
+enum dtr_stream_nr {
+	ST_DATA = DATA_STREAM,
+	ST_CONTROL = CONTROL_STREAM,
+	ST_FLOW_CTRL
+};
+
+/* IB_WR_SEND_WITH_IMM and IB_WR_RDMA_WRITE_WITH_IMM
+
+   both transfer user data and a 32-bit value which is delivered on the
+   receiving side to the event handler of the completion queue. I.e. it can
+   be used to queue the incoming messages to different streams.
+
+   dtr_imm:
+   In order to support folding the data and the control stream into one RDMA
+   connection we use the stream field of dtr_imm: DATA_STREAM, CONTROL_STREAM
+   and FLOW_CONTROL.
+   To be able to order the messages on the receiving side before delivering them
+   to the upper layers we use a sequence number.
+
+   */
+#define SEQUENCE_BITS 30
+union dtr_immediate {
+	struct {
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+		unsigned int sequence:SEQUENCE_BITS;
+		unsigned int stream:2;
+#elif defined(__BIG_ENDIAN_BITFIELD)
+		unsigned int stream:2;
+		unsigned int sequence:SEQUENCE_BITS;
+#else
+# error "this endianness is not supported"
+#endif
+	};
+	unsigned int i;
+};
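+
+/*
+ * Example (illustrative): a control-stream message with sequence 7 is
+ * tagged as
+ *
+ *	union dtr_immediate imm = { .stream = ST_CONTROL, .sequence = 7 };
+ *
+ * and imm.i travels as the 32-bit immediate value of the work request.
+ * Sequence numbers wrap at 1 << SEQUENCE_BITS; see dtr_next_rx_desc().
+ */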
+
+
+enum dtr_state_bits {
+	DSB_CONNECT_REQ,
+	DSB_CONNECTING,
+	DSB_CONNECTED,
+	DSB_ERROR,
+};
+
+#define DSM_CONNECT_REQ   (1 << DSB_CONNECT_REQ)
+#define DSM_CONNECTING    (1 << DSB_CONNECTING)
+#define DSM_CONNECTED     (1 << DSB_CONNECTED)
+#define DSM_ERROR         (1 << DSB_ERROR)
+
+enum dtr_alloc_rdma_res_causes {
+	IB_ALLOC_PD,
+	IB_ALLOC_CQ_RX,
+	IB_ALLOC_CQ_TX,
+	RDMA_CREATE_QP,
+	IB_GET_DMA_MR
+};
+
+struct dtr_rx_desc {
+	struct page *page;
+	struct list_head list;
+	int size;
+	unsigned int sequence;
+	struct dtr_cm *cm;
+	struct ib_cqe cqe;
+	struct ib_sge sge;
+};
+
+struct dtr_tx_desc {
+	union {
+		struct page *page;
+		void *data;
+		struct bio *bio;
+	};
+	enum {
+		SEND_PAGE,
+		SEND_MSG,
+		SEND_BIO,
+	} type;
+	int nr_sges;
+	union dtr_immediate imm;
+	struct ib_cqe cqe;
+	struct ib_sge sge[]; /* must be last! */
+};
+
+struct dtr_flow {
+	struct dtr_path *path;
+
+	atomic_t tx_descs_posted;
+	int tx_descs_max; /* derived from net_conf->sndbuf_size. Do not change after alloc. */
+	atomic_t peer_rx_descs; /* peer's receive window in number of rx descs */
+
+	atomic_t rx_descs_posted;
+	int rx_descs_max;  /* derived from net_conf->rcvbuf_size. Do not change after alloc. */
+
+	atomic_t rx_descs_allocated;
+	int rx_descs_want_posted;
+	atomic_t rx_descs_known_to_peer;
+};
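+
+/*
+ * Sketch (illustrative; the real check may also take peer_rx_descs into
+ * account): with DESCS_LOW_LEVEL == 8, a flow counts as congested once
+ * fewer than 8 send slots remain:
+ *
+ *	if (flow->tx_descs_max - atomic_read(&flow->tx_descs_posted)
+ *			< DESCS_LOW_LEVEL)
+ *		set_bit(NET_CONGESTED, &flow->path->path.transport->flags);
+ */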
+
+enum connect_state_enum {
+	PCS_INACTIVE,
+	PCS_REQUEST_ABORT,
+	PCS_FINISHING = PCS_REQUEST_ABORT,
+	PCS_CONNECTING,
+};
+
+struct dtr_connect_state {
+	struct delayed_work retry_connect_work;
+	atomic_t active_state; /* trying to establish a connection */
+	atomic_t passive_state; /* listening for a connection */
+	wait_queue_head_t wq;
+	bool active; /* active = established by connect; !active = established by accept */
+};
+
+struct dtr_path {
+	struct drbd_path path;
+
+	struct dtr_connect_state cs;
+
+	struct dtr_cm *cm; /* RCU'd and kref in cm */
+
+	struct dtr_flow flow[2];
+	spinlock_t send_flow_control_lock;
+	struct tasklet_struct flow_control_tasklet;
+	struct work_struct refill_rx_descs_work;
+};
+
+struct dtr_stream {
+	wait_queue_head_t send_wq;
+	wait_queue_head_t recv_wq;
+
+	/* for recv() to keep track of the current rx_desc:
+	 * - whenever the bytes_left of the current rx_desc == 0, we know that all data
+	 *   is consumed, and get a new rx_desc from the completion queue, and set
+	 *   current rx_desc accordingly.
+	 */
+	struct {
+		struct dtr_rx_desc *desc;
+		void *pos;
+		int bytes_left;
+	} current_rx;
+
+	unsigned long unread; /* unread received; unit: bytes */
+	struct list_head rx_descs;
+	spinlock_t rx_descs_lock;
+
+	long send_timeout;
+	long recv_timeout;
+
+	unsigned int tx_sequence;
+	unsigned int rx_sequence;
+	struct dtr_transport *rdma_transport;
+};
+
+struct dtr_transport {
+	struct drbd_transport transport;
+	struct dtr_stream stream[2];
+	int rx_allocation_size;
+	int sges_max;
+	bool active; /* connect() returned no error. I.e. C_CONNECTING or C_CONNECTED */
+
+	/* per transport rate limit state for diagnostic messages.
+	 * maybe: one for debug, one for warning, one for error?
+	 * maybe: move into generic drbd_transport and tr_{warn,err,debug}().
+	 */
+	struct ratelimit_state rate_limit;
+
+	struct timer_list control_timer;
+	atomic_t first_path_connect_err;
+	struct completion connected;
+
+	struct tasklet_struct control_tasklet;
+};
+
+struct dtr_cm {
+	struct kref kref;
+	struct rdma_cm_id *id;
+	struct dtr_path *path;
+
+	struct ib_cq *recv_cq;
+	struct ib_cq *send_cq;
+	struct ib_pd *pd;
+
+	unsigned long state; /* DSB bits / DSM masks */
+	wait_queue_head_t state_wq;
+	unsigned long last_sent_jif;
+	atomic_t tx_descs_posted;
+	struct timer_list tx_timeout;
+
+	struct work_struct tx_timeout_work;
+	struct work_struct connect_work;
+	struct work_struct establish_work;
+	struct work_struct disconnect_work;
+
+	struct list_head error_rx_descs;
+	spinlock_t error_rx_descs_lock;
+	struct work_struct end_rx_work;
+	struct work_struct end_tx_work;
+
+	struct dtr_transport *rdma_transport;
+	struct rcu_head rcu;
+};
+
+struct dtr_listener {
+	struct drbd_listener listener;
+
+	struct dtr_cm cm;
+};
+
+static int dtr_init(struct drbd_transport *transport);
+static void dtr_free(struct drbd_transport *transport, enum drbd_tr_free_op free_op);
+static int dtr_prepare_connect(struct drbd_transport *transport);
+static int dtr_connect(struct drbd_transport *transport);
+static void dtr_finish_connect(struct drbd_transport *transport);
+static int dtr_recv(struct drbd_transport *transport, enum drbd_stream stream, void **buf, size_t size, int flags);
+static void dtr_stats(struct drbd_transport *transport, struct drbd_transport_stats *stats);
+static int dtr_net_conf_change(struct drbd_transport *transport, struct net_conf *new_net_conf);
+static void dtr_set_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream, long timeout);
+static long dtr_get_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream);
+static int dtr_send_page(struct drbd_transport *transport, enum drbd_stream stream, struct page *page,
+		int offset, size_t size, unsigned msg_flags);
+static int dtr_send_zc_bio(struct drbd_transport *transport, struct bio *bio);
+static int dtr_recv_pages(struct drbd_transport *transport, struct drbd_page_chain_head *chain, size_t size);
+static bool dtr_stream_ok(struct drbd_transport *transport, enum drbd_stream stream);
+static bool dtr_hint(struct drbd_transport *transport, enum drbd_stream stream, enum drbd_tr_hints hint);
+static void dtr_debugfs_show(struct drbd_transport *transport, struct seq_file *m);
+static int dtr_add_path(struct drbd_path *path);
+static bool dtr_may_remove_path(struct drbd_path *path);
+static void dtr_remove_path(struct drbd_path *path);
+
+static int dtr_create_cm_id(struct dtr_cm *cm_context, struct net *net);
+static bool dtr_path_ok(struct dtr_path *path);
+static bool dtr_transport_ok(struct drbd_transport *transport);
+static int __dtr_post_tx_desc(struct dtr_cm *cm, struct dtr_tx_desc *tx_desc);
+static int dtr_post_tx_desc(struct dtr_transport *rdma_transport, struct dtr_tx_desc *tx_desc);
+static int dtr_repost_tx_desc(struct dtr_cm *old_cm, struct dtr_tx_desc *tx_desc);
+static int dtr_repost_rx_desc(struct dtr_cm *cm, struct dtr_rx_desc *rx_desc);
+static bool dtr_receive_rx_desc(struct dtr_transport *rdma_transport, enum drbd_stream stream,
+				struct dtr_rx_desc **pp_rx_desc);
+static void dtr_recycle_rx_desc(struct drbd_transport *transport,
+				enum drbd_stream stream,
+				struct dtr_rx_desc **pp_rx_desc,
+				gfp_t gfp_mask);
+static void dtr_refill_rx_desc(struct dtr_transport *rdma_transport,
+			       enum drbd_stream stream);
+static void dtr_free_tx_desc(struct dtr_cm *cm, struct dtr_tx_desc *tx_desc);
+static void dtr_free_rx_desc(struct dtr_rx_desc *rx_desc);
+static void dtr_cma_disconnect_work_fn(struct work_struct *work);
+static void dtr_disconnect_path(struct dtr_path *path);
+static void __dtr_disconnect_path(struct dtr_path *path);
+static int dtr_init_flow(struct dtr_path *path, enum drbd_stream stream);
+static int dtr_cm_alloc_rdma_res(struct dtr_cm *cm);
+static void __dtr_refill_rx_desc(struct dtr_path *path, enum drbd_stream stream);
+static int dtr_send_flow_control_msg(struct dtr_path *path, gfp_t gfp_mask);
+static struct dtr_cm *dtr_path_get_cm_connected(struct dtr_path *path);
+static void dtr_destroy_cm(struct kref *kref);
+static void dtr_destroy_cm_keep_id(struct kref *kref);
+static int dtr_activate_path(struct dtr_path *path);
+static void dtr_end_tx_work_fn(struct work_struct *work);
+static void dtr_end_rx_work_fn(struct work_struct *work);
+static void dtr_cma_retry_connect(struct dtr_path *path, struct dtr_cm *failed_cm);
+static void dtr_tx_timeout_fn(struct timer_list *t);
+static void dtr_control_timer_fn(struct timer_list *t);
+static void dtr_tx_timeout_work_fn(struct work_struct *work);
+static void dtr_cma_connect_work_fn(struct work_struct *work);
+static struct dtr_rx_desc *dtr_next_rx_desc(struct dtr_stream *rdma_stream);
+static void dtr_control_tasklet_fn(struct tasklet_struct *t);
+static int dtr_init_listener(struct drbd_transport *transport, const struct sockaddr *addr,
+			     struct net *net, struct drbd_listener *drbd_listener);
+static void dtr_destroy_listener(struct drbd_listener *generic_listener);
+
+
+static struct drbd_transport_class rdma_transport_class = {
+	.name = "rdma",
+	.instance_size = sizeof(struct dtr_transport),
+	.path_instance_size = sizeof(struct dtr_path),
+	.listener_instance_size = sizeof(struct dtr_listener),
+	.ops = (struct drbd_transport_ops) {
+		.init = dtr_init,
+		.free = dtr_free,
+		.init_listener = dtr_init_listener,
+		.release_listener = dtr_destroy_listener,
+		.prepare_connect = dtr_prepare_connect,
+		.connect = dtr_connect,
+		.finish_connect = dtr_finish_connect,
+		.recv = dtr_recv,
+		.stats = dtr_stats,
+		.net_conf_change = dtr_net_conf_change,
+		.set_rcvtimeo = dtr_set_rcvtimeo,
+		.get_rcvtimeo = dtr_get_rcvtimeo,
+		.send_page = dtr_send_page,
+		.send_zc_bio = dtr_send_zc_bio,
+		.recv_pages = dtr_recv_pages,
+		.stream_ok = dtr_stream_ok,
+		.hint = dtr_hint,
+		.debugfs_show = dtr_debugfs_show,
+		.add_path = dtr_add_path,
+		.may_remove_path = dtr_may_remove_path,
+		.remove_path = dtr_remove_path,
+	},
+	.module = THIS_MODULE,
+	.list = LIST_HEAD_INIT(rdma_transport_class.list),
+};
+
+static struct rdma_conn_param dtr_conn_param = {
+	.responder_resources = 1,
+	.initiator_depth = 1,
+	.retry_count = 10,
+	.rnr_retry_count  = 7,
+};
+
+static u32 dtr_cm_to_lkey(struct dtr_cm *cm)
+{
+	return cm->pd->local_dma_lkey;
+}
+
+static void dtr_re_init_stream(struct dtr_stream *rdma_stream)
+{
+	struct drbd_transport *transport = &rdma_stream->rdma_transport->transport;
+
+	rdma_stream->current_rx.pos = NULL;
+	rdma_stream->current_rx.bytes_left = 0;
+
+	rdma_stream->tx_sequence = 1;
+	rdma_stream->rx_sequence = 1;
+	rdma_stream->unread = 0;
+
+	TR_ASSERT(transport, list_empty(&rdma_stream->rx_descs));
+	TR_ASSERT(transport, rdma_stream->current_rx.desc == NULL);
+}
+
+static void dtr_init_stream(struct dtr_stream *rdma_stream,
+			    struct drbd_transport *transport)
+{
+	rdma_stream->current_rx.desc = NULL;
+
+	rdma_stream->recv_timeout = MAX_SCHEDULE_TIMEOUT;
+	rdma_stream->send_timeout = MAX_SCHEDULE_TIMEOUT;
+
+	init_waitqueue_head(&rdma_stream->recv_wq);
+	init_waitqueue_head(&rdma_stream->send_wq);
+	rdma_stream->rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+
+	INIT_LIST_HEAD(&rdma_stream->rx_descs);
+	spin_lock_init(&rdma_stream->rx_descs_lock);
+
+	dtr_re_init_stream(rdma_stream);
+}
+
+static int dtr_init(struct drbd_transport *transport)
+{
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+	int i;
+
+	transport->class = &rdma_transport_class;
+
+	rdma_transport->rx_allocation_size = allocation_size;
+	rdma_transport->active = false;
+	rdma_transport->sges_max = DTR_MAX_TX_SGES;
+
+	ratelimit_state_init(&rdma_transport->rate_limit, 5 * HZ, 4);
+	timer_setup(&rdma_transport->control_timer, dtr_control_timer_fn, 0);
+
+	for (i = DATA_STREAM; i <= CONTROL_STREAM; i++)
+		dtr_init_stream(&rdma_transport->stream[i], transport);
+
+	tasklet_setup(&rdma_transport->control_tasklet, dtr_control_tasklet_fn);
+
+	return 0;
+}
+
+static void dtr_free(struct drbd_transport *transport, enum drbd_tr_free_op free_op)
+{
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+	struct drbd_path *drbd_path;
+	int i;
+
+	rdma_transport->active = false;
+
+	list_for_each_entry(drbd_path, &transport->paths, list) {
+		struct dtr_path *path = container_of(drbd_path, struct dtr_path, path);
+
+		__dtr_disconnect_path(path);
+	}
+
+	/* Free the rx_descs that were received and not consumed. */
+	for (i = DATA_STREAM; i <= CONTROL_STREAM; i++) {
+		struct dtr_stream *rdma_stream = &rdma_transport->stream[i];
+		struct dtr_rx_desc *rx_desc, *tmp;
+		LIST_HEAD(rx_descs);
+
+		dtr_free_rx_desc(rdma_stream->current_rx.desc);
+		rdma_stream->current_rx.desc = NULL;
+
+		spin_lock_irq(&rdma_stream->rx_descs_lock);
+		list_splice_init(&rdma_stream->rx_descs, &rx_descs);
+		spin_unlock_irq(&rdma_stream->rx_descs_lock);
+
+		list_for_each_entry_safe(rx_desc, tmp, &rx_descs, list)
+			dtr_free_rx_desc(rx_desc);
+	}
+
+	list_for_each_entry(drbd_path, &transport->paths, list) {
+		struct dtr_path *path = container_of(drbd_path, struct dtr_path, path);
+		struct dtr_cm *cm;
+
+		cm = xchg(&path->cm, NULL); // RCU xchg
+		if (cm)
+			kref_put(&cm->kref, dtr_destroy_cm);
+	}
+
+	timer_delete_sync(&rdma_transport->control_timer);
+
+	if (free_op == DESTROY_TRANSPORT) {
+		list_for_each_entry(drbd_path, &transport->paths, list) {
+			struct dtr_path *path = container_of(drbd_path, struct dtr_path, path);
+
+			cancel_work_sync(&path->refill_rx_descs_work);
+			flush_delayed_work(&path->cs.retry_connect_work);
+		}
+
+		/* The transport object itself is embedded into a connection.
+		   Do not free it here! This function would better be named
+		   uninit. */
+	}
+}
+
+static void dtr_control_timer_fn(struct timer_list *t)
+{
+	struct dtr_transport *rdma_transport = timer_container_of(rdma_transport, t, control_timer);
+	struct drbd_transport *transport = &rdma_transport->transport;
+
+	drbd_control_event(transport, TIMEOUT);
+}
+
+/* Increment @v, unless it has already reached @limit. */
+static bool atomic_inc_if_below(atomic_t *v, int limit)
+{
+	int cur = atomic_read(v);
+
+	do {
+		if (cur >= limit)
+			return false;
+	} while (!atomic_try_cmpxchg(v, &cur, cur + 1));
+
+	return true;
+}
+
+static int dtr_send(struct dtr_path *path, void *buf, size_t size, gfp_t gfp_mask)
+{
+	struct ib_device *device;
+	struct dtr_tx_desc *tx_desc;
+	struct dtr_cm *cm;
+	void *send_buffer;
+	int err = -ECONNRESET;
+
+	// pr_info("%s: dtr_send() size = %d data[0]:%lx\n", rdma_stream->name, (int)size, *(unsigned long*)buf);
+
+	cm = dtr_path_get_cm_connected(path);
+	if (!cm)
+		goto out;
+
+	err = -ENOMEM;
+	tx_desc = kzalloc(struct_size(tx_desc, sge, 1), gfp_mask);
+	if (!tx_desc)
+		goto out_put;
+
+	send_buffer = kmalloc(size, gfp_mask);
+	if (!send_buffer) {
+		kfree(tx_desc);
+		goto out_put;
+	}
+	memcpy(send_buffer, buf, size);
+
+	device = cm->id->device;
+	tx_desc->type = SEND_MSG;
+	tx_desc->data = send_buffer;
+	tx_desc->nr_sges = 1;
+	tx_desc->sge[0].addr = ib_dma_map_single(device, send_buffer, size, DMA_TO_DEVICE);
+	err = ib_dma_mapping_error(device, tx_desc->sge[0].addr);
+	if (err) {
+		kfree(tx_desc);
+		kfree(send_buffer);
+		goto out_put;
+	}
+
+	tx_desc->sge[0].lkey = dtr_cm_to_lkey(cm);
+	tx_desc->sge[0].length = size;
+	tx_desc->imm = (union dtr_immediate)
+		{ .stream = ST_FLOW_CTRL, .sequence = 0 };
+
+	err = __dtr_post_tx_desc(cm, tx_desc);
+	if (err)
+		dtr_free_tx_desc(cm, tx_desc);
+
+out_put:
+	kref_put(&cm->kref, dtr_destroy_cm);
+out:
+	return err;
+}
+
+
+static int dtr_recv_pages(struct drbd_transport *transport, struct drbd_page_chain_head *chain, size_t size)
+{
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+	struct dtr_stream *rdma_stream = &rdma_transport->stream[DATA_STREAM];
+	struct page *page, *head = NULL, *tail = NULL;
+	int i = 0;
+
+	if (!dtr_transport_ok(transport))
+		return -ECONNRESET;
+
+	// pr_info("%s: in recv_pages, size: %zu\n", rdma_stream->name, size);
+	TR_ASSERT(transport, rdma_stream->current_rx.bytes_left == 0);
+	dtr_recycle_rx_desc(transport, DATA_STREAM, &rdma_stream->current_rx.desc, GFP_NOIO);
+	dtr_refill_rx_desc(rdma_transport, DATA_STREAM);
+
+	while (size) {
+		struct dtr_rx_desc *rx_desc = NULL;
+		long t;
+
+		t = wait_event_interruptible_timeout(rdma_stream->recv_wq,
+					dtr_receive_rx_desc(rdma_transport, DATA_STREAM, &rx_desc),
+					rdma_stream->recv_timeout);
+
+		if (t <= 0) {
+			/*
+			 * Cannot give back pages that may still be in use!
+			 * (More reason why we only have one rx_desc per page,
+			 * and don't get_page() in dtr_create_rx_desc).
+			 */
+			drbd_free_pages(transport, head);
+			return t == 0 ? -EAGAIN : -EINTR;
+		}
+
+		page = rx_desc->page;
+		/* put_page() if we would get_page() in
+		 * dtr_create_rx_desc().  but we don't. We return the page
+		 * chain to the user, which is supposed to give it back to
+		 * drbd_free_pages() eventually. */
+		rx_desc->page = NULL;
+		size -= rx_desc->size;
+
+		/* If the sender did dtr_send_page every bvec of a bio with
+		 * unaligned bvecs (as xfs often creates), rx_desc->size and
+		 * offset may well be not the PAGE_SIZE and 0 we hope for.
+		 */
+		if (tail) {
+			/* See also dtr_create_rx_desc().
+			 * For PAGE_SIZE > 4k, we may create several RR per page.
+			 * We cannot link a page to itself, though.
+			 *
+			 * Adding to size would be easy enough.
+			 * But what do we do about possible holes?
+			 * FIXME
+			 */
+			BUG_ON(page == tail);
+
+			set_page_chain_next(tail, page);
+			tail = page;
+		} else
+			head = tail = page;
+
+		set_page_chain_offset(page, 0);
+		set_page_chain_size(page, rx_desc->size);
+
+		atomic_dec(&rx_desc->cm->path->flow[DATA_STREAM].rx_descs_allocated);
+		dtr_free_rx_desc(rx_desc);
+
+		i++;
+		dtr_refill_rx_desc(rdma_transport, DATA_STREAM);
+	}
+
+	// pr_info("%s: rcvd %d pages\n", rdma_stream->name, i);
+	chain->head = head;
+	chain->nr_pages = i;
+	return 0;
+}
+
+static int _dtr_recv(struct drbd_transport *transport, enum drbd_stream stream,
+		     void **buf, size_t size, int flags)
+{
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+	struct dtr_stream *rdma_stream = &rdma_transport->stream[stream];
+	struct dtr_rx_desc *rx_desc = NULL;
+	void *buffer;
+
+	if (flags & GROW_BUFFER) {
+		/* Since transport_rdma always returns the full, requested amount
+		   of data, DRBD should never call with GROW_BUFFER! */
+		tr_err(transport, "Called with GROW_BUFFER\n");
+		return -EINVAL;
+	} else if (rdma_stream->current_rx.bytes_left == 0) {
+		long t;
+
+		dtr_recycle_rx_desc(transport, stream, &rdma_stream->current_rx.desc, GFP_NOIO);
+		if (flags & MSG_DONTWAIT) {
+			t = dtr_receive_rx_desc(rdma_transport, stream, &rx_desc);
+		} else {
+			t = wait_event_interruptible_timeout(rdma_stream->recv_wq,
+						dtr_receive_rx_desc(rdma_transport, stream, &rx_desc),
+						rdma_stream->recv_timeout);
+		}
+
+		if (t <= 0)
+			return t == 0 ? -EAGAIN : -EINTR;
+
+		// pr_info("%s: got a new page with size: %d\n", rdma_stream->name, rx_desc->size);
+		buffer = page_address(rx_desc->page);
+		rdma_stream->current_rx.desc = rx_desc;
+		rdma_stream->current_rx.pos = buffer + size;
+		rdma_stream->current_rx.bytes_left = rx_desc->size - size;
+		if (rdma_stream->current_rx.bytes_left < 0)
+			tr_warn(transport,
+				"new, requesting more (%zu) than available (%d)\n", size, rx_desc->size);
+
+		if (flags & CALLER_BUFFER)
+			memcpy(*buf, buffer, size);
+		else
+			*buf = buffer;
+
+		// pr_info("%s: recv completely new fine, returning size on\n", rdma_stream->name);
+		// pr_info("%s: rx_count: %d\n", rdma_stream->name, rdma_stream->rx_descs_posted);
+
+		return size;
+	} else { /* return next part */
+		// pr_info("recv next part on %s\n", rdma_stream->name);
+		buffer = rdma_stream->current_rx.pos;
+		rdma_stream->current_rx.pos += size;
+
+		if (rdma_stream->current_rx.bytes_left < size) {
+			tr_err(transport,
+			       "requested more than left! bytes_left = %d, size = %zu\n",
+					rdma_stream->current_rx.bytes_left, size);
+			rdma_stream->current_rx.bytes_left = 0; /* 0 left == get new entry */
+		} else {
+			rdma_stream->current_rx.bytes_left -= size;
+			// pr_info("%s: old_rx left: %d\n", rdma_stream->name, rdma_stream->current_rx.bytes_left);
+		}
+
+		if (flags & CALLER_BUFFER)
+			memcpy(*buf, buffer, size);
+		else
+			*buf = buffer;
+
+		// pr_info("%s: recv next part fine, returning size\n", rdma_stream->name);
+		return size;
+	}
+}
+
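+/*
+ * Usage sketch for dtr_recv() (illustrative only): a fixed-size header
+ * can be received into a caller-provided buffer,
+ *
+ *	char hdr[16];
+ *	void *buf = hdr;
+ *	err = dtr_recv(transport, DATA_STREAM, &buf, sizeof(hdr), CALLER_BUFFER);
+ *
+ * while without CALLER_BUFFER the transport hands out a pointer into its
+ * own receive buffer,
+ *
+ *	void *p = NULL;
+ *	err = dtr_recv(transport, DATA_STREAM, &p, len, 0);
+ *
+ * which stays valid until the next recv on that stream recycles the
+ * current rx_desc.
+ */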
+static int dtr_recv(struct drbd_transport *transport, enum drbd_stream stream, void **buf, size_t size, int flags)
+{
+	struct dtr_transport *rdma_transport;
+	int err;
+
+	if (!transport)
+		return -ECONNRESET;
+
+	rdma_transport = container_of(transport, struct dtr_transport, transport);
+
+	if (!dtr_transport_ok(transport))
+		return -ECONNRESET;
+
+	err = _dtr_recv(transport, stream, buf, size, flags);
+
+	dtr_refill_rx_desc(rdma_transport, stream);
+	return err;
+}
+
+static void dtr_stats(struct drbd_transport *transport, struct drbd_transport_stats *stats)
+{
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+	struct dtr_path *path;
+	int sb_size = 0, sb_used = 0;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(path, &transport->paths, path.list) {
+		struct dtr_flow *flow = &path->flow[DATA_STREAM];
+
+		sb_size += flow->tx_descs_max;
+		sb_used += atomic_read(&flow->tx_descs_posted);
+	}
+	rcu_read_unlock();
+
+	/* these are used by the sender; guess we should get them right */
+	stats->send_buffer_size = sb_size * DRBD_SOCKET_BUFFER_SIZE;
+	stats->send_buffer_used = sb_used * DRBD_SOCKET_BUFFER_SIZE;
+
+	/* these two for debugfs */
+	stats->unread_received = rdma_transport->stream[DATA_STREAM].unread;
+	stats->unacked_send = stats->send_buffer_used;
+}
+
+/* The following functions (at least)
+   dtr_path_established_work_fn(),
+   dtr_cma_accept_work_fn(), dtr_cma_accept(),
+   dtr_cma_retry_connect_work_fn(),
+   dtr_cma_retry_connect(),
+   dtr_cma_connect_fail_work_fn(), dtr_cma_connect(),
+   dtr_cma_disconnect_work_fn(), dtr_cma_disconnect(),
+   dtr_cma_event_handler()
+
+   are called from worker context or are callbacks from rdma_cm's context.
+
+   We need to make sure the path does not go away in the meantime.
+ */
+
+static int dtr_path_prepare(struct dtr_path *path, struct dtr_cm *cm, bool active)
+{
+	struct dtr_cm *cm2;
+	int i;
+
+	cm2 = cmpxchg(&path->cm, NULL, cm); // RCU xchg
+	if (cm2) {
+		/*
+		 * The caller needs to hold a ref on cm. dtr_path_prepare()
+		 * gifts that reference to the path. If setting the pointer in
+		 * the path fails, we have to put one ref of cm.
+		 */
+		kref_put(&cm->kref, dtr_destroy_cm);
+		return -ENOENT;
+	}
+
+	path->cs.active = active;
+	for (i = DATA_STREAM; i <= CONTROL_STREAM; i++)
+		dtr_init_flow(path, i);
+
+	return dtr_cm_alloc_rdma_res(cm);
+}
+
+static struct dtr_cm *__dtr_path_get_cm(struct dtr_path *path)
+{
+	struct dtr_cm *cm;
+
+	cm = rcu_dereference(path->cm);
+	if (cm && !kref_get_unless_zero(&cm->kref))
+		cm = NULL;
+	return cm;
+}
+
+static struct dtr_cm *dtr_path_get_cm(struct dtr_path *path)
+{
+	struct dtr_cm *cm;
+
+	rcu_read_lock();
+	cm = __dtr_path_get_cm(path);
+	rcu_read_unlock();
+	return cm;
+}
+
+static struct dtr_cm *dtr_path_get_cm_connected(struct dtr_path *path)
+{
+	struct dtr_cm *cm;
+
+	cm = dtr_path_get_cm(path);
+	if (cm && cm->state != DSM_CONNECTED) {
+		kref_put(&cm->kref, dtr_destroy_cm);
+		cm = NULL;
+	}
+	return cm;
+}
+
+static void dtr_path_established_work_fn(struct work_struct *work)
+{
+	struct dtr_cm *cm = container_of(work, struct dtr_cm, establish_work);
+	struct dtr_path *path = cm->path;
+	struct drbd_transport *transport = path->path.transport;
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+	struct dtr_connect_state *cs = &path->cs;
+	int i, p, err;
+
+	err = cm != path->cm;
+	kref_put(&cm->kref, dtr_destroy_cm);
+	if (err)
+		return;
+
+	p = atomic_cmpxchg(&cs->passive_state, PCS_CONNECTING, PCS_FINISHING);
+	if (p < PCS_CONNECTING)
+		goto out;
+
+	path->cm->state = DSM_CONNECTED;
+
+	for (i = DATA_STREAM; i <= CONTROL_STREAM; i++)
+		__dtr_refill_rx_desc(path, i);
+	err = dtr_send_flow_control_msg(path, GFP_NOIO);
+	if (err > 0)
+		err = 0;
+	if (err)
+		tr_err(transport, "sending first flow_control_msg() failed\n");
+
+	/* plain schedule_timeout() without setting a task state does not sleep */
+	schedule_timeout_uninterruptible(HZ / 4);
+	if (!dtr_path_ok(path)) {
+		if (path->cs.active)
+			dtr_cma_retry_connect(path, path->cm);
+		return;
+	}
+
+	p = atomic_cmpxchg(&rdma_transport->first_path_connect_err, 1, err);
+	if (p == 1) {
+		if (cs->active)
+			set_bit(RESOLVE_CONFLICTS, &transport->flags);
+		else
+			clear_bit(RESOLVE_CONFLICTS, &transport->flags);
+		complete(&rdma_transport->connected);
+	}
+
+	set_bit(TR_ESTABLISHED, &path->path.flags);
+	drbd_path_event(transport, &path->path);
+
+out:
+	atomic_set(&cs->active_state, PCS_INACTIVE);
+	p = atomic_xchg(&cs->passive_state, PCS_INACTIVE);
+	if (p > PCS_INACTIVE)
+		drbd_put_listener(&path->path);
+
+	wake_up(&cs->wq);
+}
+
+static struct dtr_cm *dtr_alloc_cm(struct dtr_path *path)
+{
+	struct dtr_cm *cm;
+
+	cm = kzalloc(sizeof(*cm), GFP_KERNEL);
+	if (!cm)
+		return NULL;
+
+	kref_init(&cm->kref);
+	INIT_WORK(&cm->connect_work, dtr_cma_connect_work_fn);
+	INIT_WORK(&cm->establish_work, dtr_path_established_work_fn);
+	INIT_WORK(&cm->disconnect_work, dtr_cma_disconnect_work_fn);
+	INIT_WORK(&cm->end_rx_work, dtr_end_rx_work_fn);
+	INIT_WORK(&cm->end_tx_work, dtr_end_tx_work_fn);
+	INIT_WORK(&cm->tx_timeout_work, dtr_tx_timeout_work_fn);
+	INIT_LIST_HEAD(&cm->error_rx_descs);
+	spin_lock_init(&cm->error_rx_descs_lock);
+	timer_setup(&cm->tx_timeout, dtr_tx_timeout_fn, 0);
+
+	kref_get(&path->path.kref);
+	cm->path = path;
+	cm->rdma_transport = container_of(path->path.transport, struct dtr_transport, transport);
+
+	/*
+	 * We need this module in core as long as a dtr_tx_desc, a dtr_rx_desc
+	 * or a dtr_cm object exists because they might have a callback
+	 * registered in the RDMA code that will call back into this module. The
+	 * rx and tx descs have a reference to the dtr_cm object, so taking an
+	 * extra reference to the module for each dtr_cm object is sufficient.
+	 */
+	__module_get(THIS_MODULE);
+
+	return cm;
+}
+
+static int dtr_cma_accept(struct dtr_listener *listener, struct rdma_cm_id *new_cm_id, struct dtr_cm **ret_cm)
+{
+	struct sockaddr_storage *peer_addr;
+	struct dtr_connect_state *cs;
+	struct dtr_path *path;
+	struct drbd_path *drbd_path;
+	struct dtr_cm *cm;
+	int err;
+
+	*ret_cm = NULL;
+	peer_addr = &new_cm_id->route.addr.dst_addr;
+
+	spin_lock(&listener->listener.waiters_lock);
+	drbd_path = drbd_find_path_by_addr(&listener->listener, peer_addr);
+	if (drbd_path)
+		kref_get(&drbd_path->kref);
+	spin_unlock(&listener->listener.waiters_lock);
+
+	if (!drbd_path) {
+		struct sockaddr_in6 *from_sin6;
+		struct sockaddr_in *from_sin;
+
+		switch (peer_addr->ss_family) {
+		case AF_INET6:
+			from_sin6 = (struct sockaddr_in6 *)peer_addr;
+			pr_warn("Closing unexpected connection from %pI6\n",
+				&from_sin6->sin6_addr);
+			break;
+		case AF_INET:
+			from_sin = (struct sockaddr_in *)peer_addr;
+			pr_warn("Closing unexpected connection from %pI4\n",
+				&from_sin->sin_addr);
+			break;
+		default:
+			pr_warn("Closing unexpected connection family = %d\n",
+				peer_addr->ss_family);
+		}
+
+		rdma_reject(new_cm_id, NULL, 0, IB_CM_REJ_CONSUMER_DEFINED);
+		return -EAGAIN;
+	}
+
+	path = container_of(drbd_path, struct dtr_path, path);
+	cs = &path->cs;
+	if (atomic_read(&cs->passive_state) < PCS_CONNECTING)
+		goto reject;
+
+	cm = dtr_alloc_cm(path);
+	if (!cm) {
+		pr_err("rejecting connection since -ENOMEM for cm\n");
+		goto reject;
+	}
+
+	cm->state = DSM_CONNECT_REQ;
+	init_waitqueue_head(&cm->state_wq);
+	new_cm_id->context = cm;
+	cm->id = new_cm_id;
+	*ret_cm = cm;
+
+	/* Expecting RDMA_CM_EVENT_ESTABLISHED after rdma_accept(). Get
+	   the ref before dtr_path_prepare(), since that exposes the cm
+	   to the path, and the path might get destroyed, which would
+	   put the cm. */
+	kref_get(&cm->kref);
+
+	/* Gifting the initial kref to the path->cm pointer */
+	err = dtr_path_prepare(path, cm, false);
+	if (err) {
+		/* Returning the cm via ret_cm and an error causes the caller to put one ref */
+		goto reject;
+	}
+	kref_put(&drbd_path->kref, drbd_destroy_path);
+
+	err = rdma_accept(new_cm_id, &dtr_conn_param);
+	if (err)
+		kref_put(&cm->kref, dtr_destroy_cm);
+
+	return err;
+
+reject:
+	rdma_reject(new_cm_id, NULL, 0, IB_CM_REJ_CONSUMER_DEFINED);
+	kref_put(&drbd_path->kref, drbd_destroy_path);
+	return -EAGAIN;
+}
+
+static int dtr_start_try_connect(struct dtr_connect_state *cs)
+{
+	struct dtr_path *path = container_of(cs, struct dtr_path, cs);
+	struct drbd_transport *transport = path->path.transport;
+	struct dtr_cm *cm;
+	int err = -ENOMEM;
+
+	cm = dtr_alloc_cm(path);
+	if (!cm)
+		goto out;
+
+	err = dtr_create_cm_id(cm, path->path.net);
+	if (err) {
+		tr_err(transport, "rdma_create_id() failed %d\n", err);
+		goto out;
+	}
+
+	/* Holding the initial reference on cm, expecting RDMA_CM_EVENT_ADDR_RESOLVED */
+	err = rdma_resolve_addr(cm->id, NULL,
+				(struct sockaddr *)&path->path.peer_addr,
+				2000);
+	if (err) {
+		tr_err(transport, "rdma_resolve_addr error %d\n", err);
+		goto out;
+	}
+
+	return 0;
+out:
+	if (cm)
+		kref_put(&cm->kref, dtr_destroy_cm);
+	return err;
+}
+
+static void dtr_cma_retry_connect_work_fn(struct work_struct *work)
+{
+	struct dtr_connect_state *cs = container_of(work, struct dtr_connect_state, retry_connect_work.work);
+	enum connect_state_enum p;
+	int err;
+
+	p = atomic_cmpxchg(&cs->active_state, PCS_REQUEST_ABORT, PCS_INACTIVE);
+	if (p != PCS_CONNECTING) {
+		wake_up(&cs->wq);
+		return;
+	}
+
+	err = dtr_start_try_connect(cs);
+	if (err) {
+		struct dtr_path *path = container_of(cs, struct dtr_path, cs);
+		struct drbd_transport *transport = path->path.transport;
+
+		tr_err(transport, "dtr_start_try_connect failed %d\n", err);
+		schedule_delayed_work(&cs->retry_connect_work, HZ);
+	}
+}
+
+static void dtr_remove_cm_from_path(struct dtr_path *path, struct dtr_cm *failed_cm)
+{
+	struct dtr_cm *cm;
+
+	cm = cmpxchg(&path->cm, failed_cm, NULL); // RCU &path->cm
+	if (cm == failed_cm && cm->id && cm->id->qp) {
+		struct drbd_transport *transport = path->path.transport;
+		struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
+		int err;
+
+		err = ib_modify_qp(cm->id->qp, &attr, IB_QP_STATE);
+		if (err)
+			tr_err(transport, "ib_modify_qp failed %d\n", err);
+
+		kref_put(&cm->kref, dtr_destroy_cm);
+	}
+}
+
+static void dtr_cma_retry_connect(struct dtr_path *path, struct dtr_cm *failed_cm)
+{
+	struct drbd_transport *transport = path->path.transport;
+	struct dtr_connect_state *cs = &path->cs;
+	long connect_int = 10 * HZ;
+	struct net_conf *nc;
+	int a;
+
+	dtr_remove_cm_from_path(path, failed_cm);
+
+	a = atomic_read(&cs->active_state);
+	if (a == PCS_INACTIVE) {
+		return;
+	} else if (a == PCS_CONNECTING) {
+		rcu_read_lock();
+		nc = rcu_dereference(transport->net_conf);
+		if (nc)
+			connect_int = nc->connect_int * HZ;
+		rcu_read_unlock();
+	} else {
+		connect_int = 1;
+	}
+	schedule_delayed_work(&cs->retry_connect_work, connect_int);
+}
+
+static void dtr_cma_connect_work_fn(struct work_struct *work)
+{
+	struct dtr_cm *cm = container_of(work, struct dtr_cm, connect_work);
+	struct dtr_path *path = cm->path;
+	struct drbd_transport *transport = path->path.transport;
+	enum connect_state_enum p;
+	int err;
+
+	p = atomic_cmpxchg(&path->cs.active_state, PCS_REQUEST_ABORT, PCS_INACTIVE);
+	if (p != PCS_CONNECTING) {
+		wake_up(&path->cs.wq);
+		kref_put(&cm->kref, dtr_destroy_cm); /* for work */
+		return;
+	}
+
+	kref_get(&cm->kref); /* for the path->cm pointer */
+	err = dtr_path_prepare(path, cm, true);
+	if (err) {
+		tr_err(transport, "dtr_path_prepare() = %d\n", err);
+		goto out;
+	}
+
+	kref_get(&cm->kref); /* Expecting RDMA_CM_EVENT_ESTABLISHED */
+	set_bit(DSB_CONNECTING, &cm->state);
+	err = rdma_connect(cm->id, &dtr_conn_param);
+	if (err) {
+		if (test_and_clear_bit(DSB_CONNECTING, &cm->state))
+			kref_put(&cm->kref, dtr_destroy_cm); /* no _EVENT_ESTABLISHED */
+		tr_err(transport, "rdma_connect error %d\n", err);
+		goto out;
+	}
+
+	kref_put(&cm->kref, dtr_destroy_cm); /* for work */
+	return;
+out:
+	kref_put(&cm->kref, dtr_destroy_cm); /* for work */
+	dtr_cma_retry_connect(path, cm);
+}
+
+static void dtr_cma_disconnect_work_fn(struct work_struct *work)
+{
+	struct dtr_cm *cm = container_of(work, struct dtr_cm, disconnect_work);
+	struct dtr_path *path = cm->path;
+	struct drbd_transport *transport = path->path.transport;
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+	struct drbd_path *drbd_path = &path->path;
+	bool destroyed;
+	int err;
+
+	err = cm != path->cm;
+	kref_put(&cm->kref, dtr_destroy_cm);
+	if (err)
+		return;
+
+	destroyed = test_bit(TR_UNREGISTERED, &drbd_path->flags) || !rdma_transport->active;
+	if (test_and_clear_bit(TR_ESTABLISHED, &drbd_path->flags) && !destroyed)
+		drbd_path_event(transport, drbd_path);
+
+	if (!dtr_transport_ok(transport))
+		drbd_control_event(transport, CLOSED_BY_PEER);
+
+	if (destroyed)
+		return;
+
+	/* in dtr_disconnect_path() -> __dtr_uninit_path() we free the previous
+	   cm. That causes the reference on the path to be dropped.
+	   In dtr_activate_path() -> dtr_start_try_connect() we allocate a new
+	   cm, that holds a reference on the path again.
+
+	   Bridge the gap with a reference here!
+	*/
+
+	kref_get(&path->path.kref);
+	dtr_disconnect_path(path);
+
+	/* dtr_disconnect_path() may take time, recheck here... */
+	if (test_bit(TR_UNREGISTERED, &drbd_path->flags) || !rdma_transport->active)
+		goto abort;
+
+	if (!dtr_transport_ok(transport)) {
+		/* If there is no other connected path mark the connection as
+		   no longer active. Do not try to re-establish this path!! */
+		rdma_transport->active = false;
+		goto abort;
+	}
+
+	err = dtr_activate_path(path);
+	if (err)
+		tr_err(transport, "dtr_activate_path() = %d\n", err);
+abort:
+	kref_put(&path->path.kref, drbd_destroy_path);
+}
+
+static void dtr_cma_disconnect(struct dtr_cm *cm)
+{
+	kref_get(&cm->kref);
+	schedule_work(&cm->disconnect_work);
+}
+
+static int dtr_cma_event_handler(struct rdma_cm_id *cm_id, struct rdma_cm_event *event)
+{
+	int err;
+	/* context comes from rdma_create_id() */
+	struct dtr_cm *cm = cm_id->context;
+	struct dtr_listener *listener;
+	bool connecting;
+
+	if (!cm) {
+		pr_err("id %p event %d, but no context!\n", cm_id, event->event);
+		return 0;
+	}
+
+	switch (event->event) {
+	case RDMA_CM_EVENT_ADDR_RESOLVED:
+		// pr_info("%s: RDMA_CM_EVENT_ADDR_RESOLVED\n", cm->name);
+		kref_get(&cm->kref); /* Expecting RDMA_CM_EVENT_ROUTE_RESOLVED */
+		err = rdma_resolve_route(cm_id, 2000);
+		if (err) {
+			kref_put(&cm->kref, dtr_destroy_cm);
+			pr_err("rdma_resolve_route error %d\n", err);
+		}
+		break;
+
+	case RDMA_CM_EVENT_ROUTE_RESOLVED:
+		// pr_info("%s: RDMA_CM_EVENT_ROUTE_RESOLVED\n", cm->name);
+
+		kref_get(&cm->kref);
+		schedule_work(&cm->connect_work);
+		break;
+
+	case RDMA_CM_EVENT_CONNECT_REQUEST:
+		// pr_info("%s: RDMA_CM_EVENT_CONNECT_REQUEST\n", cm->name);
+		/* for listener */
+
+		listener = container_of(cm, struct dtr_listener, cm);
+		err = dtr_cma_accept(listener, cm_id, &cm);
+
+		/* I found this a bit confusing: when a new connection comes in,
+		   the callback gets called with a new rdma_cm_id. The new
+		   rdma_cm_id inherits its context pointer from the listening
+		   rdma_cm_id. The new context gets created in dtr_cma_accept()
+		   and is put into &cm here. cm now contains the accepted
+		   connection (no longer the listener). */
+		if (err) {
+			if (!cm)
+				return 1; /* caller destroys the cm_id */
+			break; /* drop the last ref of cm at function exit */
+		}
+		return 0; /* do not touch kref of the new connection */
+
+	case RDMA_CM_EVENT_CONNECT_RESPONSE:
+		// pr_info("%s: RDMA_CM_EVENT_CONNECT_RESPONSE\n", cm->name);
+		/*cm->path->cm = cm;
+		  dtr_path_established(cm->path); */
+		break;
+
+	case RDMA_CM_EVENT_ESTABLISHED:
+		// pr_info("%s: RDMA_CM_EVENT_ESTABLISHED\n", cm->name);
+		/* cm->state = DSM_CONNECTED; is set later in the work item */
+		/* This is called for active and passive connections */
+
+		connecting = test_and_clear_bit(DSB_CONNECTING, &cm->state) ||
+			test_and_clear_bit(DSB_CONNECT_REQ, &cm->state);
+		kref_get(&cm->kref); /* connected -> expect a disconnect in the future */
+		kref_get(&cm->kref); /* for the work */
+		schedule_work(&cm->establish_work);
+
+		if (!connecting)
+			return 0; /* keep ref; __dtr_disconnect_path() won */
+		break;
+
+	case RDMA_CM_EVENT_ADDR_ERROR:
+		// pr_info("%s: RDMA_CM_EVENT_ADDR_ERROR\n", cm->name);
+	case RDMA_CM_EVENT_ROUTE_ERROR:
+		// pr_info("%s: RDMA_CM_EVENT_ROUTE_ERROR\n", cm->name);
+		set_bit(DSB_ERROR, &cm->state);
+
+		dtr_cma_retry_connect(cm->path, cm);
+		break;
+
+	case RDMA_CM_EVENT_CONNECT_ERROR:
+		// pr_info("%s: RDMA_CM_EVENT_CONNECT_ERROR\n", cm->name);
+	case RDMA_CM_EVENT_UNREACHABLE:
+		// pr_info("%s: RDMA_CM_EVENT_UNREACHABLE\n", cm->name);
+	case RDMA_CM_EVENT_REJECTED:
+		// pr_info("%s: RDMA_CM_EVENT_REJECTED\n", cm->name);
+		// pr_info("event = %d, status = %d\n", event->event, event->status);
+		set_bit(DSB_ERROR, &cm->state);
+
+		dtr_cma_retry_connect(cm->path, cm);
+		connecting = test_and_clear_bit(DSB_CONNECTING, &cm->state) ||
+			test_and_clear_bit(DSB_CONNECT_REQ, &cm->state);
+		if (!connecting)
+			return 0; /* keep ref; __dtr_disconnect_path() won */
+		break;
+
+	case RDMA_CM_EVENT_DISCONNECTED:
+		// pr_info("%s: RDMA_CM_EVENT_DISCONNECTED\n", cm->name);
+		if (!test_and_clear_bit(DSB_CONNECTED, &cm->state))
+			return 0; /* keep ref on cm; probably a tx_timeout */
+
+		dtr_cma_disconnect(cm);
+		break;
+
+	case RDMA_CM_EVENT_DEVICE_REMOVAL:
+		// pr_info("%s: RDMA_CM_EVENT_DEVICE_REMOVAL\n", cm->name);
+		return 0;
+
+	case RDMA_CM_EVENT_TIMEWAIT_EXIT:
+		return 0;
+
+	default:
+		pr_warn("id %p context %p unexpected event %d!\n",
+				cm_id, cm, event->event);
+		return 0;
+	}
+	wake_up(&cm->state_wq);
+
+	/* by returning 1 we instruct the caller to destroy the cm_id. We
+	   are not allowed to free it within the callback, since that deadlocks! */
+	return kref_put(&cm->kref, dtr_destroy_cm_keep_id);
+}
+
+static int dtr_create_cm_id(struct dtr_cm *cm, struct net *net)
+{
+	struct rdma_cm_id *id;
+
+	cm->state = 0;
+	init_waitqueue_head(&cm->state_wq);
+
+	id = rdma_create_id(net, dtr_cma_event_handler, cm, RDMA_PS_TCP, IB_QPT_RC);
+	if (IS_ERR(id)) {
+		cm->id = NULL;
+		set_bit(DSB_ERROR, &cm->state);
+		return PTR_ERR(id);
+	}
+
+	cm->id = id;
+	return 0;
+}
+
+/* Number of posted rx_descs the peer does not know about yet */
+static int dtr_new_rx_descs(struct dtr_flow *flow)
+{
+	int posted, known;
+
+	posted = atomic_read(&flow->rx_descs_posted);
+	smp_rmb(); /* smp_wmb() is in dtr_rx_cqe_done() */
+	known = atomic_read(&flow->rx_descs_known_to_peer);
+
+	/* If the two decrements in dtr_rx_cqe_done() execute in
+	 * parallel, our result might be one too low; that does not matter.
+	 * Only make sure to never return -1, because that would matter! */
+	return max(posted - known, 0);
+}
+
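+/* Return the rx_desc with the next expected sequence number, or NULL if it
+ * has not arrived yet. */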
+static struct dtr_rx_desc *dtr_next_rx_desc(struct dtr_stream *rdma_stream)
+{
+	struct dtr_rx_desc *rx_desc;
+
+	spin_lock_irq(&rdma_stream->rx_descs_lock);
+	rx_desc = list_first_entry_or_null(&rdma_stream->rx_descs, struct dtr_rx_desc, list);
+	if (rx_desc) {
+		if (rx_desc->sequence == rdma_stream->rx_sequence) {
+			list_del(&rx_desc->list);
+			rdma_stream->rx_sequence =
+				(rdma_stream->rx_sequence + 1) & ((1UL << SEQUENCE_BITS) - 1);
+			rdma_stream->unread -= rx_desc->size;
+		} else {
+			rx_desc = NULL;
+		}
+	}
+	spin_unlock_irq(&rdma_stream->rx_descs_lock);
+
+	return rx_desc;
+}
+
+static bool dtr_receive_rx_desc(struct dtr_transport *rdma_transport,
+				enum drbd_stream stream,
+				struct dtr_rx_desc **ptr_rx_desc)
+{
+	struct dtr_stream *rdma_stream = &rdma_transport->stream[stream];
+	struct dtr_rx_desc *rx_desc;
+
+	rx_desc = dtr_next_rx_desc(rdma_stream);
+
+	if (rx_desc) {
+		struct dtr_cm *cm = rx_desc->cm;
+		struct dtr_transport *rdma_transport =
+			container_of(cm->path->path.transport, struct dtr_transport, transport);
+
+		INIT_LIST_HEAD(&rx_desc->list);
+		ib_dma_sync_single_for_cpu(cm->id->device, rx_desc->sge.addr,
+					   rdma_transport->rx_allocation_size, DMA_FROM_DEVICE);
+		*ptr_rx_desc = rx_desc;
+		return true;
+	} else {
+		/* The waiting thread gets woken up if a packet arrived, or if there is no
+		   new packet but we need to tell the peer about space in our receive window */
+		struct dtr_path *path;
+
+		rcu_read_lock();
+		list_for_each_entry_rcu(path, &rdma_transport->transport.paths, path.list) {
+			struct dtr_flow *flow = &path->flow[stream];
+
+			if (atomic_read(&flow->rx_descs_known_to_peer) <
+			    atomic_read(&flow->rx_descs_posted) / 8)
+				dtr_send_flow_control_msg(path, GFP_ATOMIC);
+		}
+		rcu_read_unlock();
+	}
+
+	return false;
+}
+
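+/* Credit-based flow control: rx_descs_posted counts the receive buffers
+ * posted locally, rx_descs_known_to_peer counts how many of those the peer
+ * has been told about, and peer_rx_descs is our view of the peer's remaining
+ * credits. A flow control message advertises newly posted buffers to the
+ * peer; sending one consumes a peer credit itself, taken from whichever
+ * stream can spare it. */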
+static int dtr_send_flow_control_msg(struct dtr_path *path, gfp_t gfp_mask)
+{
+	struct dtr_flow_control msg;
+	struct dtr_flow *flow;
+	enum drbd_stream i;
+	int err, n[2], send_from_stream = -1, rx_descs = 0;
+
+	msg.magic = cpu_to_be32(DTR_MAGIC);
+
+	spin_lock_bh(&path->send_flow_control_lock);
+	/* dtr_send_flow_control_msg() is called from the receiver thread as
+	   well as from the ack receiver and sender threads, i.e. from
+	   multiple threads concurrently.
+	   Determining the number of new rx_descs and adding this number
+	   to rx_descs_known_to_peer has to be atomic!
+	 */
+	for (i = DATA_STREAM; i <= CONTROL_STREAM; i++) {
+		flow = &path->flow[i];
+
+		n[i] = dtr_new_rx_descs(flow);
+		atomic_add(n[i], &flow->rx_descs_known_to_peer);
+		rx_descs += n[i];
+
+		msg.new_rx_descs[i] = cpu_to_be32(n[i]);
+		if (send_from_stream == -1 &&
+		    atomic_read(&flow->tx_descs_posted) < flow->tx_descs_max &&
+		    atomic_dec_if_positive(&flow->peer_rx_descs) >= 0)
+			send_from_stream = i;
+	}
+	spin_unlock_bh(&path->send_flow_control_lock);
+
+	if (send_from_stream == -1) {
+		struct drbd_transport *transport = path->path.transport;
+		struct dtr_transport *rdma_transport =
+			container_of(transport, struct dtr_transport, transport);
+
+		if (__ratelimit(&rdma_transport->rate_limit))
+			tr_err(transport, "Not sending flow_control msg, no receive window!\n");
+		err = -ENOBUFS;
+		goto out_undo;
+	}
+
+	flow = &path->flow[send_from_stream];
+	if (rx_descs == 0 || !atomic_inc_if_below(&flow->tx_descs_posted, flow->tx_descs_max)) {
+		atomic_inc(&flow->peer_rx_descs);
+		return 0;
+	}
+
+	msg.send_from_stream = cpu_to_be32(send_from_stream);
+	err = dtr_send(path, &msg, sizeof(msg), gfp_mask);
+	if (err) {
+		atomic_inc(&flow->peer_rx_descs);
+		atomic_dec(&flow->tx_descs_posted);
+out_undo:
+		for (i = DATA_STREAM; i <= CONTROL_STREAM; i++) {
+			flow = &path->flow[i];
+			atomic_sub(n[i], &flow->rx_descs_known_to_peer);
+		}
+	}
+	return err;
+}
+
+static void dtr_flow_control(struct dtr_flow *flow, gfp_t gfp_mask)
+{
+	int n, known_to_peer = atomic_read(&flow->rx_descs_known_to_peer);
+	int tx_descs_max = flow->tx_descs_max;
+
+	n = dtr_new_rx_descs(flow);
+	if (n > tx_descs_max / 8 || known_to_peer < tx_descs_max / 8)
+		dtr_send_flow_control_msg(flow->path, gfp_mask);
+}
+
+static int dtr_got_flow_control_msg(struct dtr_path *path,
+				     struct dtr_flow_control *msg)
+{
+	struct dtr_transport *rdma_transport =
+		container_of(path->path.transport, struct dtr_transport, transport);
+	struct dtr_flow *flow;
+	int i, n;
+
+	for (i = CONTROL_STREAM; i >= DATA_STREAM; i--) {
+		uint32_t new_rx_descs = be32_to_cpu(msg->new_rx_descs[i]);
+		flow = &path->flow[i];
+
+		n = atomic_add_return(new_rx_descs, &flow->peer_rx_descs);
+		wake_up_interruptible(&rdma_transport->stream[i].send_wq);
+	}
+
+	/* flow and n refer to the DATA_STREAM here (last loop iteration) */
+	if (n >= DESCS_LOW_LEVEL) {
+		int tx_descs_posted = atomic_read(&flow->tx_descs_posted);
+		if (flow->tx_descs_max - tx_descs_posted >= DESCS_LOW_LEVEL)
+			clear_bit(NET_CONGESTED, &rdma_transport->transport.flags);
+	}
+
+	return be32_to_cpu(msg->send_from_stream);
+}
+
+static void dtr_flow_control_tasklet_fn(struct tasklet_struct *t)
+{
+	struct dtr_path *path = from_tasklet(path, t, flow_control_tasklet);
+
+	dtr_send_flow_control_msg(path, GFP_ATOMIC);
+}
+
+static void dtr_maybe_trigger_flow_control_msg(struct dtr_path *path, int send_from_stream)
+{
+	struct dtr_flow *flow;
+	int n;
+
+	flow = &path->flow[send_from_stream];
+	n = atomic_dec_return(&flow->rx_descs_known_to_peer);
+	/* If we get a lot of flow control messages in, but no data on this
+	 * path, we need to tell the peer that we recycled all these buffers
+	 */
+	if (n < atomic_read(&flow->rx_descs_posted) / 8)
+		tasklet_schedule(&path->flow_control_tasklet);
+}
+
+static void dtr_tx_timeout_work_fn(struct work_struct *work)
+{
+	struct dtr_cm *cm = container_of(work, struct dtr_cm, tx_timeout_work);
+	struct drbd_transport *transport;
+	struct dtr_path *path = cm->path;
+
+	if (!test_and_clear_bit(DSB_CONNECTED, &cm->state) || !path)
+		goto out;
+
+	transport = path->path.transport;
+	tr_warn(transport, "%pI4 - %pI4: tx timeout\n",
+		&((struct sockaddr_in *)&path->path.my_addr)->sin_addr,
+		&((struct sockaddr_in *)&path->path.peer_addr)->sin_addr);
+
+	dtr_remove_cm_from_path(path, cm);
+
+	/* It is not certain that an RDMA_CM_EVENT_DISCONNECTED will be
+	 * delivered; drop the ref for it here. If it is delivered after all,
+	 * dtr_cma_event_handler() will not drop the ref again, because
+	 * DSB_CONNECTED has been cleared from cm->state */
+	kref_put(&cm->kref, dtr_destroy_cm);
+
+	clear_bit(TR_ESTABLISHED, &path->path.flags);
+	drbd_path_event(transport, &path->path);
+
+	if (!dtr_transport_ok(transport)) {
+		struct dtr_transport *rdma_transport =
+			container_of(transport, struct dtr_transport, transport);
+
+		drbd_control_event(transport, CLOSED_BY_PEER);
+		rdma_transport->active = false;
+	} else {
+		dtr_activate_path(path);
+	}
+
+out:
+	kref_put(&cm->kref, dtr_destroy_cm); /* for work (armed timer) */
+}
+
+static void dtr_tx_timeout_fn(struct timer_list *t)
+{
+	struct dtr_cm *cm = timer_container_of(cm, t, tx_timeout);
+
+	/* cm->kref for armed timer becomes a ref for the work */
+	schedule_work(&cm->tx_timeout_work);
+}
+
+static bool higher_in_sequence(unsigned int higher, unsigned int base)
+{
+	/*
+	  Sequence arithmetic: by looking at the most significant bit of
+	  the difference in the reduced word size, we find out whether the
+	  difference is positive. Using the difference is necessary to deal
+	  with wrap-around in the sequence number space.
+	 */
+	unsigned int diff = higher - base;
+
+	return !(diff & (1 << (SEQUENCE_BITS - 1)));
+}
+
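+/* Insert rx_desc into the sequence-ordered list. Descriptors usually arrive
+ * (nearly) in order, so scanning backwards from the tail finds the insertion
+ * point quickly. */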
+static void __dtr_order_rx_descs(struct dtr_stream *rdma_stream,
+				 struct dtr_rx_desc *rx_desc)
+{
+	struct dtr_rx_desc *pos;
+	unsigned int seq = rx_desc->sequence;
+
+	list_for_each_entry_reverse(pos, &rdma_stream->rx_descs, list) {
+		if (higher_in_sequence(seq, pos->sequence)) { /* think: seq > pos->sequence */
+			list_add(&rx_desc->list, &pos->list);
+			return;
+		}
+	}
+	list_add(&rx_desc->list, &rdma_stream->rx_descs);
+}
+
+static void dtr_order_rx_descs(struct dtr_stream *rdma_stream,
+			       struct dtr_rx_desc *rx_desc)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&rdma_stream->rx_descs_lock, flags);
+	__dtr_order_rx_descs(rdma_stream, rx_desc);
+	rdma_stream->unread += rx_desc->size;
+	spin_unlock_irqrestore(&rdma_stream->rx_descs_lock, flags);
+}
+
+static void dtr_dec_rx_descs(struct dtr_cm *cm)
+{
+	struct dtr_flow *flow = cm->path->flow;
+	struct dtr_transport *rdma_transport = cm->rdma_transport;
+
+	/* When we get the posted rx_descs back, we do not know whether they
+	 * were accounted to the data stream or the control stream...
+	 */
+	if (atomic_dec_if_positive(&flow[DATA_STREAM].rx_descs_posted) >= 0)
+		return;
+
+	if (atomic_dec_if_positive(&flow[CONTROL_STREAM].rx_descs_posted) >= 0)
+		return;
+
+	if (__ratelimit(&rdma_transport->rate_limit)) {
+		struct drbd_transport *transport = &rdma_transport->transport;
+
+		tr_warn(transport, "rx_descs_posted underflow avoided\n");
+	}
+}
+
+static void dtr_control_data_ready(struct dtr_stream *rdma_stream, struct dtr_rx_desc *rx_desc)
+{
+	struct dtr_transport *rdma_transport = rdma_stream->rdma_transport;
+	struct drbd_transport *transport = &rdma_transport->transport;
+	struct drbd_const_buffer buffer;
+	struct dtr_cm *cm = rx_desc->cm;
+	struct dtr_path *path = cm->path;
+	struct dtr_flow *flow = &path->flow[CONTROL_STREAM];
+
+	if (atomic_read(&flow->rx_descs_known_to_peer) < atomic_read(&flow->rx_descs_posted) / 8)
+		dtr_send_flow_control_msg(path, GFP_ATOMIC);
+
+	ib_dma_sync_single_for_cpu(cm->id->device, rx_desc->sge.addr,
+				   rdma_transport->rx_allocation_size, DMA_FROM_DEVICE);
+
+	buffer.buffer = page_address(rx_desc->page);
+	buffer.avail = rx_desc->size;
+	drbd_control_data_ready(transport, &buffer);
+
+	dtr_recycle_rx_desc(transport, CONTROL_STREAM, &rx_desc, GFP_ATOMIC);
+}
+
+static void __dtr_order_rx_descs_front(struct dtr_stream *rdma_stream,
+				       struct dtr_rx_desc *rx_desc)
+{
+	struct dtr_rx_desc *pos;
+	unsigned int seq = rx_desc->sequence;
+
+	list_for_each_entry(pos, &rdma_stream->rx_descs, list) {
+		if (higher_in_sequence(pos->sequence, seq)) { /* think: pos->sequence > seq */
+			list_add_tail(&rx_desc->list, &pos->list);
+			return;
+		}
+	}
+	list_add_tail(&rx_desc->list, &rdma_stream->rx_descs);
+}
+
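+/* Deliver control stream descriptors in sequence order. On a gap in the
+ * sequence numbers, put the remaining descriptors back and reschedule the
+ * tasklet. */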
+static void dtr_control_tasklet_fn(struct tasklet_struct *t)
+{
+	struct dtr_transport *rdma_transport =
+		from_tasklet(rdma_transport, t, control_tasklet);
+	struct dtr_stream *rdma_stream = &rdma_transport->stream[CONTROL_STREAM];
+	struct dtr_rx_desc *rx_desc, *tmp;
+	LIST_HEAD(rx_descs);
+
+	spin_lock_irq(&rdma_stream->rx_descs_lock);
+	list_splice_init(&rdma_stream->rx_descs, &rx_descs);
+	spin_unlock_irq(&rdma_stream->rx_descs_lock);
+
+	list_for_each_entry_safe(rx_desc, tmp, &rx_descs, list) {
+		if (rx_desc->sequence != rdma_stream->rx_sequence)
+			goto abort;
+		list_del(&rx_desc->list);
+		rdma_stream->rx_sequence =
+			(rdma_stream->rx_sequence + 1) & ((1UL << SEQUENCE_BITS) - 1);
+		rdma_stream->unread -= rx_desc->size;
+		dtr_control_data_ready(rdma_stream, rx_desc);
+	}
+	return;
+
+abort:
+	spin_lock_irq(&rdma_stream->rx_descs_lock);
+	list_for_each_entry_safe(rx_desc, tmp, &rx_descs, list) {
+		list_del(&rx_desc->list);
+		__dtr_order_rx_descs_front(rdma_stream, rx_desc);
+	}
+	spin_unlock_irq(&rdma_stream->rx_descs_lock);
+
+	tasklet_schedule(&rdma_transport->control_tasklet);
+}
+
+static void dtr_rx_cqe_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+	struct dtr_rx_desc *rx_desc = container_of(wc->wr_cqe, struct dtr_rx_desc, cqe);
+	struct dtr_cm *cm = rx_desc->cm;
+	struct dtr_path *path = cm->path;
+	struct dtr_transport *rdma_transport =
+		container_of(path->path.transport, struct dtr_transport, transport);
+	union dtr_immediate immediate;
+	int err;
+
+	if (wc->status != IB_WC_SUCCESS || !(wc->opcode & IB_WC_RECV)) {
+		struct drbd_transport *transport = &rdma_transport->transport;
+		unsigned long irq_flags;
+
+		switch (wc->status) {
+		case IB_WC_WR_FLUSH_ERR:
+			/* "Work Request Flushed Error: A Work Request was in
+			 * process or outstanding when the QP transitioned into
+			 * the Error State."
+			 *
+			 * Which is not entirely unexpected...
+			 */
+			break;
+
+		default:
+			if (__ratelimit(&rdma_transport->rate_limit)) {
+				tr_warn(transport,
+					"wc.status = %d (%s), wc.opcode = %d (%s)\n",
+					wc->status, wc->status == IB_WC_SUCCESS ? "ok" : "bad",
+					wc->opcode, wc->opcode & IB_WC_RECV ? "ok" : "bad");
+
+				tr_warn(transport,
+					"wc.vendor_err = %d, wc.byte_len = %d wc.imm_data = %d\n",
+					wc->vendor_err, wc->byte_len, wc->ex.imm_data);
+			}
+		}
+
+		/* dtr_free_rx_desc() will call drbd_free_page(), and that function
+		 * should not be called from softirq context.
+		 */
+		spin_lock_irqsave(&cm->error_rx_descs_lock, irq_flags);
+		list_add_tail(&rx_desc->list, &cm->error_rx_descs);
+		spin_unlock_irqrestore(&cm->error_rx_descs_lock, irq_flags);
+		dtr_dec_rx_descs(cm);
+		set_bit(DSB_ERROR, &cm->state);
+
+		kref_get(&cm->kref);
+		if (!schedule_work(&cm->end_rx_work))
+			kref_put(&cm->kref, dtr_destroy_cm);
+
+		return;
+	}
+
+	rx_desc->size = wc->byte_len;
+	immediate.i = be32_to_cpu(wc->ex.imm_data);
+	if (immediate.stream == ST_FLOW_CTRL) {
+		int send_from_stream;
+
+		ib_dma_sync_single_for_cpu(cm->id->device, rx_desc->sge.addr,
+					   rdma_transport->rx_allocation_size, DMA_FROM_DEVICE);
+		send_from_stream = dtr_got_flow_control_msg(path, page_address(rx_desc->page));
+		err = dtr_repost_rx_desc(cm, rx_desc);
+		if (err)
+			tr_err(&rdma_transport->transport, "dtr_repost_rx_desc() failed %d", err);
+		dtr_maybe_trigger_flow_control_msg(path, send_from_stream);
+	} else {
+		struct dtr_flow *flow = &path->flow[immediate.stream];
+		struct dtr_stream *rdma_stream = &rdma_transport->stream[immediate.stream];
+
+		atomic_dec(&flow->rx_descs_posted);
+		smp_wmb(); /* smp_rmb() is in dtr_new_rx_descs() */
+		atomic_dec(&flow->rx_descs_known_to_peer);
+
+		if (immediate.stream == ST_CONTROL)
+			mod_timer(&rdma_transport->control_timer, jiffies + rdma_stream->recv_timeout);
+
+		rx_desc->sequence = immediate.sequence;
+		dtr_order_rx_descs(rdma_stream, rx_desc);
+
+		if (immediate.stream == ST_CONTROL)
+			tasklet_schedule(&rdma_transport->control_tasklet);
+		else
+			wake_up_interruptible(&rdma_stream->recv_wq);
+	}
+
+	if (dtr_path_ok(path)) {
+		struct dtr_flow *flow = &path->flow[DATA_STREAM];
+
+		if (atomic_read(&flow->rx_descs_posted) < flow->rx_descs_want_posted / 2)
+			schedule_work(&path->refill_rx_descs_work);
+	}
+}
+
+static void dtr_free_tx_desc(struct dtr_cm *cm, struct dtr_tx_desc *tx_desc)
+{
+	struct ib_device *device = cm->id->device;
+	struct bio_vec bvec;
+	struct bvec_iter iter;
+	int i, nr_sges;
+
+	switch (tx_desc->type) {
+	case SEND_PAGE:
+		ib_dma_unmap_page(device, tx_desc->sge[0].addr, tx_desc->sge[0].length, DMA_TO_DEVICE);
+		put_page(tx_desc->page);
+		break;
+	case SEND_MSG:
+		ib_dma_unmap_single(device, tx_desc->sge[0].addr, tx_desc->sge[0].length, DMA_TO_DEVICE);
+		kfree(tx_desc->data);
+		break;
+	case SEND_BIO:
+		nr_sges = tx_desc->nr_sges;
+		for (i = 0; i < nr_sges; i++)
+			ib_dma_unmap_page(device, tx_desc->sge[i].addr, tx_desc->sge[i].length,
+					  DMA_TO_DEVICE);
+		bio_for_each_segment(bvec, tx_desc->bio, iter) {
+			put_page(bvec.bv_page);
+		}
+		break;
+	}
+	kfree(tx_desc);
+}
+
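+/* Send completion: release the tx credit and the tx_desc; on error, try to
+ * repost the descriptor on another connected path. */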
+static void dtr_tx_cqe_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+	struct dtr_tx_desc *tx_desc = container_of(wc->wr_cqe, struct dtr_tx_desc, cqe);
+	struct dtr_cm *cm = cq->cq_context;
+	struct dtr_path *path = cm->path;
+	struct dtr_transport *rdma_transport =
+		container_of(path->path.transport, struct dtr_transport, transport);
+	struct dtr_flow *flow;
+	struct dtr_stream *rdma_stream;
+	enum dtr_stream_nr stream_nr = tx_desc->imm.stream;
+	int err;
+
+	if (stream_nr != ST_FLOW_CTRL) {
+		flow = &path->flow[stream_nr];
+		rdma_stream = &rdma_transport->stream[stream_nr];
+	} else {
+		struct dtr_flow_control *msg = (struct dtr_flow_control *)tx_desc->data;
+		enum dtr_stream_nr send_from_stream = be32_to_cpu(msg->send_from_stream);
+
+		flow = &path->flow[send_from_stream];
+		rdma_stream = &rdma_transport->stream[send_from_stream];
+	}
+
+	if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_SEND) {
+		struct drbd_transport *transport = &rdma_transport->transport;
+
+		if (wc->status == IB_WC_RNR_RETRY_EXC_ERR) {
+			tr_err(transport, "tx_event: wc.status = IB_WC_RNR_RETRY_EXC_ERR\n");
+			tr_info(transport, "peer_rx_descs = %d", atomic_read(&flow->peer_rx_descs));
+		} else if (wc->status != IB_WC_WR_FLUSH_ERR) {
+			tr_err(transport, "tx_event: wc.status != IB_WC_SUCCESS %d\n", wc->status);
+			tr_err(transport, "wc.vendor_err = %d, wc.byte_len = %d wc.imm_data = %d\n",
+			       wc->vendor_err, wc->byte_len, wc->ex.imm_data);
+		}
+
+		atomic_inc(&flow->peer_rx_descs);
+		set_bit(DSB_ERROR, &cm->state);
+
+		if (stream_nr != ST_FLOW_CTRL) {
+			err = dtr_repost_tx_desc(cm, tx_desc);
+			if (!err)
+				tx_desc = NULL; /* it is in the air again! Fly! */
+			else if (__ratelimit(&rdma_transport->rate_limit)) {
+				tr_warn(transport, "repost of tx_desc failed! %d\n", err);
+				drbd_control_event(transport, CLOSED_BY_PEER);
+			}
+		}
+	}
+
+	atomic_dec(&flow->tx_descs_posted);
+	wake_up_interruptible(&rdma_stream->send_wq);
+
+	if (tx_desc)
+		dtr_free_tx_desc(cm, tx_desc);
+	if (atomic_dec_and_test(&cm->tx_descs_posted)) {
+		bool was_active = timer_delete(&cm->tx_timeout);
+
+		if (was_active)
+			kref_put(&cm->kref, dtr_destroy_cm);
+
+		if (cm->state == DSM_CONNECTED)
+			kref_put(&cm->kref, dtr_destroy_cm); /* this is _not_ the last ref */
+		else
+			schedule_work(&cm->end_tx_work); /* the last ref might be put in this work */
+	}
+}
+
+static int dtr_create_qp(struct dtr_cm *cm, int rx_descs_max, int tx_descs_max)
+{
+	struct dtr_transport *rdma_transport =
+		container_of(cm->path->path.transport, struct dtr_transport, transport);
+	int err;
+
+	struct ib_qp_init_attr init_attr = {
+		.cap.max_send_wr = tx_descs_max,
+		.cap.max_recv_wr = rx_descs_max,
+		.cap.max_recv_sge = 1, /* We only receive into single pages */
+		.cap.max_send_sge = rdma_transport->sges_max,
+		.qp_type = IB_QPT_RC,
+		.send_cq = cm->send_cq,
+		.recv_cq = cm->recv_cq,
+		.sq_sig_type = IB_SIGNAL_REQ_WR
+	};
+
+	err = rdma_create_qp(cm->id, cm->pd, &init_attr);
+
+	return err;
+}
+
+static int dtr_post_rx_desc(struct dtr_cm *cm, struct dtr_rx_desc *rx_desc)
+{
+	struct dtr_transport *rdma_transport =
+		container_of(cm->path->path.transport, struct dtr_transport, transport);
+	struct ib_recv_wr recv_wr;
+	const struct ib_recv_wr *recv_wr_failed;
+	int err = -EIO;
+
+	recv_wr.next = NULL;
+	rx_desc->cqe.done = dtr_rx_cqe_done;
+	recv_wr.wr_cqe = &rx_desc->cqe;
+	recv_wr.sg_list = &rx_desc->sge;
+	recv_wr.num_sge = 1;
+
+	ib_dma_sync_single_for_device(cm->id->device,
+				      rx_desc->sge.addr, rdma_transport->rx_allocation_size, DMA_FROM_DEVICE);
+
+	err = ib_post_recv(cm->id->qp, &recv_wr, &recv_wr_failed);
+	if (err)
+		tr_err(&rdma_transport->transport, "ib_post_recv error %d\n", err);
+
+	return err;
+}
+
+static void dtr_free_rx_desc(struct dtr_rx_desc *rx_desc)
+{
+	struct dtr_transport *rdma_transport;
+	struct dtr_path *path;
+	struct ib_device *device;
+	struct dtr_cm *cm;
+	int alloc_size;
+
+	if (!rx_desc)
+		return; /* Allow call with NULL */
+
+	cm = rx_desc->cm;
+	device = cm->id->device;
+	path = cm->path;
+	rdma_transport = container_of(path->path.transport, struct dtr_transport, transport);
+	alloc_size = rdma_transport->rx_allocation_size;
+	ib_dma_unmap_single(device, rx_desc->sge.addr, alloc_size, DMA_FROM_DEVICE);
+	kref_put(&cm->kref, dtr_destroy_cm);
+
+	if (rx_desc->page) {
+		struct drbd_transport *transport = &rdma_transport->transport;
+
+		/* put_page(), if we had more than one rx_desc per page,
+		 * but see comments in dtr_create_rx_desc */
+		drbd_free_pages(transport, rx_desc->page);
+	}
+	kfree(rx_desc);
+}
+
+static int dtr_create_rx_desc(struct dtr_flow *flow, gfp_t gfp_mask, bool connected_only)
+{
+	struct dtr_path *path = flow->path;
+	struct drbd_transport *transport = path->path.transport;
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+	struct dtr_rx_desc *rx_desc;
+	struct page *page;
+	int err, alloc_size = rdma_transport->rx_allocation_size;
+	int nr_pages = alloc_size / PAGE_SIZE;
+	struct dtr_cm *cm;
+
+	rx_desc = kzalloc_obj(*rx_desc, gfp_mask);
+	if (!rx_desc)
+		return -ENOMEM;
+
+	/* As of now, this MUST NEVER return a highmem page!
+	 * Which means no other user may ever have requested and then given
+	 * back a highmem page!
+	 */
+	page = drbd_alloc_pages(transport, nr_pages, gfp_mask);
+	if (!page) {
+		kfree(rx_desc);
+		return -ENOMEM;
+	}
+	BUG_ON(PageHighMem(page));
+
+	err = -ECONNRESET;
+	cm = dtr_path_get_cm(path);
+	if (!cm)
+		goto out;
+	if (connected_only && cm->state != DSM_CONNECTED)
+		goto out_put;
+
+	rx_desc->cm = cm;
+	rx_desc->page = page;
+	rx_desc->size = 0;
+	rx_desc->sge.lkey = dtr_cm_to_lkey(cm);
+	rx_desc->sge.addr = ib_dma_map_single(cm->id->device, page_address(page), alloc_size,
+					      DMA_FROM_DEVICE);
+	err = ib_dma_mapping_error(cm->id->device, rx_desc->sge.addr);
+	if (err) {
+		tr_err(transport, "ib_dma_map_single() failed %d\n", err);
+		goto out_put;
+	}
+	rx_desc->sge.length = alloc_size;
+
+	atomic_inc(&flow->rx_descs_allocated);
+	atomic_inc(&flow->rx_descs_posted);
+	err = dtr_post_rx_desc(cm, rx_desc);
+	if (err) {
+		tr_err(transport, "dtr_post_rx_desc() returned %d\n", err);
+		atomic_dec(&flow->rx_descs_posted);
+		atomic_dec(&flow->rx_descs_allocated);
+		dtr_free_rx_desc(rx_desc);
+	}
+	return err;
+
+out_put:
+	kref_put(&cm->kref, dtr_destroy_cm);
+out:
+	kfree(rx_desc);
+	drbd_free_pages(transport, page);
+	return err;
+}
+
+static void dtr_refill_rx_descs_work_fn(struct work_struct *work)
+{
+	struct dtr_path *path = container_of(work, struct dtr_path, refill_rx_descs_work);
+	int i;
+
+	if (!dtr_path_ok(path))
+		return;
+
+	for (i = DATA_STREAM; i <= CONTROL_STREAM ; i++) {
+		struct dtr_flow *flow = &path->flow[i];
+
+		if (atomic_read(&flow->rx_descs_posted) < flow->rx_descs_want_posted / 2)
+			__dtr_refill_rx_desc(path, i);
+		dtr_flow_control(flow, GFP_NOIO);
+	}
+}
+
+static void __dtr_refill_rx_desc(struct dtr_path *path, enum drbd_stream stream)
+{
+	struct drbd_transport *transport = path->path.transport;
+	struct dtr_flow *flow = &path->flow[stream];
+	int descs_want_posted, descs_max;
+
+	descs_max = flow->rx_descs_max;
+	descs_want_posted = flow->rx_descs_want_posted;
+
+	while (atomic_read(&flow->rx_descs_posted) < descs_want_posted &&
+	       atomic_read(&flow->rx_descs_allocated) < descs_max) {
+		int err;
+
+		err = dtr_create_rx_desc(flow, (GFP_NOIO & ~__GFP_RECLAIM) | __GFP_NOWARN, true);
+		/*
+		 * drbd_alloc_pages() goes over the configured max_buffers, but throttles the
+		 * caller with sleeping 100ms for each of those excess pages.  By calling
+		 * without __GFP_RECLAIM we request to get a -ENOMEM instead of sleeping.
+		 * We simply stop refilling then.
+		 */
+		if (err == -ENOMEM) {
+			break;
+		} else if (err) {
+			tr_err(transport, "dtr_create_rx_desc() = %d\n", err);
+			break;
+		}
+	}
+}
+
+static void dtr_refill_rx_desc(struct dtr_transport *rdma_transport,
+			       enum drbd_stream stream)
+{
+	struct drbd_transport *transport = &rdma_transport->transport;
+	struct drbd_path *drbd_path;
+
+	for_each_path_ref(drbd_path, transport) {
+		struct dtr_path *path = container_of(drbd_path, struct dtr_path, path);
+
+		schedule_work(&path->refill_rx_descs_work);
+	}
+}
+
+static int dtr_repost_rx_desc(struct dtr_cm *cm, struct dtr_rx_desc *rx_desc)
+{
+	int err;
+
+	rx_desc->size = 0;
+	rx_desc->sge.lkey = dtr_cm_to_lkey(cm);
+	/* rx_desc->sge.addr = rx_desc->dma_addr;
+	   rx_desc->sge.length = rx_desc->alloc_size; */
+
+	err = dtr_post_rx_desc(cm, rx_desc);
+	return err;
+}
+
+static void dtr_recycle_rx_desc(struct drbd_transport *transport,
+				enum drbd_stream stream,
+				struct dtr_rx_desc **pp_rx_desc,
+				gfp_t gfp_mask)
+{
+	struct dtr_rx_desc *rx_desc = *pp_rx_desc;
+	struct dtr_cm *cm;
+	struct dtr_path *path;
+	struct dtr_flow *flow;
+	int err;
+
+	if (!rx_desc)
+		return;
+
+	cm = rx_desc->cm;
+	path = cm->path;
+	flow = &path->flow[stream];
+
+	err = dtr_repost_rx_desc(cm, rx_desc);
+
+	if (err) {
+		dtr_free_rx_desc(rx_desc);
+	} else {
+		atomic_inc(&flow->rx_descs_posted);
+		dtr_flow_control(flow, gfp_mask);
+	}
+
+	*pp_rx_desc = NULL;
+}
+
+static int __dtr_post_tx_desc(struct dtr_cm *cm, struct dtr_tx_desc *tx_desc)
+{
+	struct dtr_transport *rdma_transport =
+		container_of(cm->path->path.transport, struct dtr_transport, transport);
+	struct drbd_transport *transport = &rdma_transport->transport;
+	struct ib_send_wr send_wr;
+	const struct ib_send_wr *send_wr_failed;
+	struct ib_device *device = cm->id->device;
+	unsigned long timeout;
+	struct net_conf *nc;
+	int i, err = -EIO;
+	bool was_active;
+
+	send_wr.next = NULL;
+	tx_desc->cqe.done = dtr_tx_cqe_done;
+	send_wr.wr_cqe = &tx_desc->cqe;
+	send_wr.sg_list = tx_desc->sge;
+	send_wr.num_sge = tx_desc->nr_sges;
+	send_wr.ex.imm_data = cpu_to_be32(tx_desc->imm.i);
+	send_wr.opcode = IB_WR_SEND_WITH_IMM;
+	send_wr.send_flags = IB_SEND_SIGNALED;
+
+	rcu_read_lock();
+	nc = rcu_dereference(transport->net_conf);
+	timeout = nc->ping_timeo;
+	rcu_read_unlock();
+
+	for (i = 0; i < tx_desc->nr_sges; i++)
+		ib_dma_sync_single_for_device(device, tx_desc->sge[i].addr,
+					      tx_desc->sge[i].length, DMA_TO_DEVICE);
+
+	if (atomic_inc_return(&cm->tx_descs_posted) == 1)
+		kref_get(&cm->kref); /* keep one extra ref as long as one tx is posted */
+
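+	/* The armed tx timeout timer holds a cm reference. If mod_timer()
+	 * says the timer was already pending, that reference exists already,
+	 * so drop the extra one taken here. */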
+	kref_get(&cm->kref);
+	was_active = mod_timer(&cm->tx_timeout, jiffies + timeout * HZ / 20);
+	if (was_active)
+		kref_put(&cm->kref, dtr_destroy_cm);
+
+	err = ib_post_send(cm->id->qp, &send_wr, &send_wr_failed);
+	if (err) {
+		tr_err(&rdma_transport->transport, "ib_post_send() failed %d\n", err);
+		was_active = timer_delete(&cm->tx_timeout);
+		if (!was_active)
+			was_active = cancel_work_sync(&cm->tx_timeout_work);
+		if (was_active)
+			kref_put(&cm->kref, dtr_destroy_cm);
+		if (atomic_dec_and_test(&cm->tx_descs_posted))
+			kref_put(&cm->kref, dtr_destroy_cm);
+	}
+
+	return err;
+}
+
+static struct dtr_cm *dtr_select_and_get_cm_for_tx(struct dtr_transport *rdma_transport,
+						     enum drbd_stream stream)
+{
+	struct drbd_transport *transport = &rdma_transport->transport;
+	struct dtr_path *path, *candidate = NULL;
+	unsigned long last_sent_jif = -1UL;
+	struct dtr_cm *cm;
+
+	/* Stick to one path within a 16 jiffies window; when switching to
+	   another path, pick the one that was used longest ago */
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(path, &transport->paths, path.list) {
+		struct dtr_flow *flow = &path->flow[stream];
+		unsigned long ls;
+
+		cm = rcu_dereference(path->cm);
+		if (!cm || cm->state != DSM_CONNECTED)
+			continue;
+
+		/* Normal packets are not allowed to consume all of the peer's rx_descs,
+		   the last one is reserved for flow-control messages. */
+		if (atomic_read(&flow->tx_descs_posted) >= flow->tx_descs_max ||
+		    atomic_read(&flow->peer_rx_descs) <= 1)
+			continue;
+
+		ls = cm->last_sent_jif;
+		if ((ls & ~0xfUL) == (jiffies & ~0xfUL) && kref_get_unless_zero(&cm->kref)) {
+			rcu_read_unlock();
+			return cm;
+		}
+		if (ls < last_sent_jif) {
+			last_sent_jif = ls;
+			candidate = path;
+		}
+	}
+
+	if (candidate) {
+		cm = __dtr_path_get_cm(candidate);
+		if (cm)
+			cm->last_sent_jif = jiffies;
+	} else {
+		cm = NULL;
+	}
+	rcu_read_unlock();
+
+	return cm;
+}
+
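+/* Move the DMA mappings of a tx_desc from one connection's device to
+ * another's. */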
+static int dtr_remap_tx_desc(struct dtr_cm *old_cm, struct dtr_cm *cm,
+			      struct dtr_tx_desc *tx_desc)
+{
+	struct ib_device *device = old_cm->id->device;
+	int i, nr_sges, err;
+	dma_addr_t a = 0;
+
+	switch (tx_desc->type) {
+	case SEND_PAGE:
+		ib_dma_unmap_page(device, tx_desc->sge[0].addr, tx_desc->sge[0].length, DMA_TO_DEVICE);
+		break;
+	case SEND_MSG:
+		ib_dma_unmap_single(device, tx_desc->sge[0].addr, tx_desc->sge[0].length, DMA_TO_DEVICE);
+		break;
+	case SEND_BIO:
+		nr_sges = tx_desc->nr_sges;
+		for (i = 0; i < nr_sges; i++)
+			ib_dma_unmap_page(device, tx_desc->sge[i].addr, tx_desc->sge[i].length,
+					  DMA_TO_DEVICE);
+		break;
+	}
+
+	device = cm->id->device;
+	switch (tx_desc->type) {
+	case SEND_PAGE:
+		a = ib_dma_map_page(device, tx_desc->page, tx_desc->sge[0].addr & ~PAGE_MASK,
+				    tx_desc->sge[0].length, DMA_TO_DEVICE);
+		break;
+	case SEND_MSG:
+		a = ib_dma_map_single(device, tx_desc->data, tx_desc->sge[0].length, DMA_TO_DEVICE);
+		break;
+	case SEND_BIO:
+#if SENDER_COMPACTS_BVECS
+		#error implement me
+#endif
+		break;
+	}
+	err = ib_dma_mapping_error(device, a);
+
+	tx_desc->sge[0].addr = a;
+	tx_desc->sge[0].lkey = dtr_cm_to_lkey(cm);
+
+	return err;
+}
+
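+/* A send failed on its original path; re-map the tx_desc to another
+ * connected path and post it there. */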
+static int dtr_repost_tx_desc(struct dtr_cm *old_cm, struct dtr_tx_desc *tx_desc)
+{
+	struct dtr_transport *rdma_transport =
+		container_of(old_cm->path->path.transport, struct dtr_transport, transport);
+	enum drbd_stream stream = tx_desc->imm.stream;
+	struct dtr_cm *cm;
+	struct dtr_flow *flow;
+	int err;
+
+	do {
+		cm = dtr_select_and_get_cm_for_tx(rdma_transport, stream);
+		if (!cm)
+			return -ECONNRESET;
+
+		err = dtr_remap_tx_desc(old_cm, cm, tx_desc);
+		if (err) {
+			tr_err(&rdma_transport->transport, "dtr_remap_tx_desc failed: %d\n", err);
+			kref_put(&cm->kref, dtr_destroy_cm);
+			continue;
+		}
+
+		flow = &cm->path->flow[stream];
+		if (atomic_dec_if_positive(&flow->peer_rx_descs) < 0) {
+			kref_put(&cm->kref, dtr_destroy_cm);
+			continue;
+		}
+		if (!atomic_inc_if_below(&flow->tx_descs_posted, flow->tx_descs_max)) {
+			atomic_inc(&flow->peer_rx_descs);
+			kref_put(&cm->kref, dtr_destroy_cm);
+			continue;
+		}
+
+		err = __dtr_post_tx_desc(cm, tx_desc);
+		if (err) {
+			atomic_inc(&flow->peer_rx_descs);
+			atomic_dec(&flow->tx_descs_posted);
+		}
+		kref_put(&cm->kref, dtr_destroy_cm);
+	} while (err);
+
+	return err;
+}
+
+static int dtr_post_tx_desc(struct dtr_transport *rdma_transport,
+			    struct dtr_tx_desc *tx_desc)
+{
+	enum drbd_stream stream = tx_desc->imm.stream;
+	struct dtr_stream *rdma_stream = &rdma_transport->stream[stream];
+	struct ib_device *device;
+	struct dtr_flow *flow;
+	struct dtr_cm *cm;
+	int offset, err;
+	long t;
+
+retry:
+	t = wait_event_interruptible_timeout(rdma_stream->send_wq,
+			(cm = dtr_select_and_get_cm_for_tx(rdma_transport, stream)),
+			rdma_stream->send_timeout);
+
+	if (t == 0) {
+		if (drbd_stream_send_timed_out(&rdma_transport->transport, stream))
+			return -EAGAIN;
+		goto retry;
+	} else if (t < 0) {
+		return -EINTR;
+	}
+
+	flow = &cm->path->flow[stream];
+	if (atomic_dec_if_positive(&flow->peer_rx_descs) < 0) {
+		kref_put(&cm->kref, dtr_destroy_cm);
+		goto retry;
+	}
+	if (!atomic_inc_if_below(&flow->tx_descs_posted, flow->tx_descs_max)) {
+		atomic_inc(&flow->peer_rx_descs);
+		kref_put(&cm->kref, dtr_destroy_cm);
+		goto retry;
+	}
+
+	device = cm->id->device;
+	switch (tx_desc->type) {
+	case SEND_PAGE:
+		offset = tx_desc->sge[0].lkey;
+		tx_desc->sge[0].addr = ib_dma_map_page(device, tx_desc->page, offset,
+						      tx_desc->sge[0].length, DMA_TO_DEVICE);
+		err = ib_dma_mapping_error(device, tx_desc->sge[0].addr);
+		if (err) {
+			atomic_inc(&flow->peer_rx_descs);
+			atomic_dec(&flow->tx_descs_posted);
+			goto out;
+		}
+
+		tx_desc->sge[0].lkey = dtr_cm_to_lkey(cm);
+		break;
+	case SEND_MSG:
+	case SEND_BIO:
+		BUG();
+	}
+
+	err = __dtr_post_tx_desc(cm, tx_desc);
+	if (err) {
+		atomic_inc(&flow->peer_rx_descs);
+		atomic_dec(&flow->tx_descs_posted);
+		ib_dma_unmap_page(device, tx_desc->sge[0].addr, tx_desc->sge[0].length, DMA_TO_DEVICE);
+	}
+
+out:
+	// pr_info("%s: Created send_wr (%p, %p): nr_sges=%u, first seg: lkey=%x, addr=%llx, length=%d\n", rdma_stream->name, tx_desc->page, tx_desc, tx_desc->nr_sges, tx_desc->sge[0].lkey, tx_desc->sge[0].addr, tx_desc->sge[0].length);
+	kref_put(&cm->kref, dtr_destroy_cm);
+	return err;
+}
+
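+/* Derive the descriptor budgets for one stream from the configured buffer
+ * sizes; the control stream gets a smaller, dedicated budget. */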
+static int dtr_init_flow(struct dtr_path *path, enum drbd_stream stream)
+{
+	struct drbd_transport *transport = path->path.transport;
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+	unsigned int alloc_size = rdma_transport->rx_allocation_size;
+	unsigned int rcvbuf_size = RDMA_DEF_BUFFER_SIZE;
+	unsigned int sndbuf_size = RDMA_DEF_BUFFER_SIZE;
+	struct dtr_flow *flow = &path->flow[stream];
+	struct net_conf *nc;
+	int err = 0;
+
+	rcu_read_lock();
+	nc = rcu_dereference(transport->net_conf);
+	if (!nc) {
+		rcu_read_unlock();
+		tr_err(transport, "need net_conf\n");
+		err = -EINVAL;
+		goto out;
+	}
+
+	if (nc->rcvbuf_size)
+		rcvbuf_size = nc->rcvbuf_size;
+	if (nc->sndbuf_size)
+		sndbuf_size = nc->sndbuf_size;
+
+	if (stream == CONTROL_STREAM) {
+		rcvbuf_size = nc->rdma_ctrl_rcvbuf_size ?: max(rcvbuf_size / 64, alloc_size * 8);
+		sndbuf_size = nc->rdma_ctrl_sndbuf_size ?: max(sndbuf_size / 64, alloc_size * 8);
+	}
+
+	if (rcvbuf_size / DRBD_SOCKET_BUFFER_SIZE > nc->max_buffers) {
+		tr_err(transport, "Set max-buffers at least to %d, (right now it is %d).\n",
+		       rcvbuf_size / DRBD_SOCKET_BUFFER_SIZE, nc->max_buffers);
+		tr_err(transport, "This is due to rcvbuf-size = %d.\n", rcvbuf_size);
+		rcu_read_unlock();
+		err = -EINVAL;
+		goto out;
+	}
+
+	rcu_read_unlock();
+
+	flow->path = path;
+	flow->tx_descs_max = sndbuf_size / DRBD_SOCKET_BUFFER_SIZE;
+	flow->rx_descs_max = rcvbuf_size / DRBD_SOCKET_BUFFER_SIZE;
+
+	atomic_set(&flow->tx_descs_posted, 0);
+	atomic_set(&flow->peer_rx_descs, stream == CONTROL_STREAM ? 1 : 0);
+	atomic_set(&flow->rx_descs_known_to_peer, stream == CONTROL_STREAM ? 1 : 0);
+
+	atomic_set(&flow->rx_descs_posted, 0);
+	atomic_set(&flow->rx_descs_allocated, 0);
+
+	flow->rx_descs_want_posted = flow->rx_descs_max / 2;
+
+ out:
+	return err;
+}
+
+static int _dtr_cm_alloc_rdma_res(struct dtr_cm *cm,
+				    enum dtr_alloc_rdma_res_causes *cause)
+{
+	int err, i, rx_descs_max = 0, tx_descs_max = 0;
+	struct dtr_path *path = cm->path;
+
+	/* Each path might be the sole path, therefore it must be able to
+	   support both streams */
+	for (i = DATA_STREAM; i <= CONTROL_STREAM ; i++) {
+		rx_descs_max += path->flow[i].rx_descs_max;
+		tx_descs_max += path->flow[i].tx_descs_max;
+	}
+
+	/* alloc protection domain (PD) */
+	/* in 4.9 ib_alloc_pd got the ability to specify flags as second param */
+	/* so far we don't use flags, but if we start using them, we have to be
+	 * aware that the compat layer removes this parameter for old kernels */
+	cm->pd = ib_alloc_pd(cm->id->device, 0);
+	if (IS_ERR(cm->pd)) {
+		*cause = IB_ALLOC_PD;
+		err = PTR_ERR(cm->pd);
+		goto pd_failed;
+	}
+
+	/* allocate recv completion queue (CQ) */
+	cm->recv_cq = ib_alloc_cq_any(cm->id->device, cm, rx_descs_max, IB_POLL_SOFTIRQ);
+	if (IS_ERR(cm->recv_cq)) {
+		*cause = IB_ALLOC_CQ_RX;
+		err = PTR_ERR(cm->recv_cq);
+		goto recv_cq_failed;
+	}
+
+	/* allocate send completion queue (CQ) */
+	cm->send_cq = ib_alloc_cq_any(cm->id->device, cm, tx_descs_max, IB_POLL_SOFTIRQ);
+	if (IS_ERR(cm->send_cq)) {
+		*cause = IB_ALLOC_CQ_TX;
+		err = PTR_ERR(cm->send_cq);
+		goto send_cq_failed;
+	}
+
+	/* create a queue pair (QP) */
+	err = dtr_create_qp(cm, rx_descs_max, tx_descs_max);
+	if (err) {
+		*cause = RDMA_CREATE_QP;
+		goto createqp_failed;
+	}
+
+	/* some RDMA transports need at least one rx desc for establishing a connection */
+	for (i = DATA_STREAM; i <= CONTROL_STREAM ; i++)
+		dtr_create_rx_desc(&path->flow[i], GFP_NOIO, false);
+
+	return 0;
+
+createqp_failed:
+	ib_free_cq(cm->send_cq);
+	cm->send_cq = NULL;
+send_cq_failed:
+	ib_free_cq(cm->recv_cq);
+	cm->recv_cq = NULL;
+recv_cq_failed:
+	ib_dealloc_pd(cm->pd);
+	cm->pd = NULL;
+pd_failed:
+	return err;
+}
+
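+/* Allocate PD, CQs and QP for this connection, shrinking the descriptor
+ * budgets until the HCA accepts them. */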
+static int dtr_cm_alloc_rdma_res(struct dtr_cm *cm)
+{
+	struct dtr_path *path = cm->path;
+	struct drbd_transport *transport = path->path.transport;
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+	enum dtr_alloc_rdma_res_causes cause;
+	struct ib_device_attr dev_attr;
+	struct ib_udata uhw = {.outlen = 0, .inlen = 0};
+	struct ib_device *device = cm->id->device;
+	int rx_descs_max = 0, tx_descs_max = 0;
+	bool reduced = false;
+	int i, hca_max, err, dev_sge;
+
+	static const char * const err_txt[] = {
+		[IB_ALLOC_PD] = "ib_alloc_pd()",
+		[IB_ALLOC_CQ_RX] = "ib_alloc_cq_any() rx",
+		[IB_ALLOC_CQ_TX] = "ib_alloc_cq_any() tx",
+		[RDMA_CREATE_QP] = "rdma_create_qp()",
+		[IB_GET_DMA_MR] = "ib_get_dma_mr()",
+	};
+
+	err = device->ops.query_device(device, &dev_attr, &uhw);
+	if (err) {
+		tr_err(transport, "ib_query_device: %d\n", err);
+		return err;
+	}
+
+	dev_sge = min(dev_attr.max_send_sge, dev_attr.max_recv_sge);
+	if (rdma_transport->sges_max > dev_sge)
+		rdma_transport->sges_max = dev_sge;
+
+	hca_max = min(dev_attr.max_qp_wr, dev_attr.max_cqe);
+
+	for (i = DATA_STREAM; i <= CONTROL_STREAM ; i++) {
+		rx_descs_max += path->flow[i].rx_descs_max;
+		tx_descs_max += path->flow[i].tx_descs_max;
+	}
+
+	if (tx_descs_max > hca_max || rx_descs_max > hca_max) {
+		int rx_correction = 0, tx_correction = 0;
+
+		reduced = true;
+
+		if (tx_descs_max > hca_max)
+			tx_correction = tx_descs_max - hca_max;
+
+		if (rx_descs_max > hca_max)
+			rx_correction = rx_descs_max - hca_max;
+
+		path->flow[DATA_STREAM].rx_descs_max -= rx_correction;
+		path->flow[DATA_STREAM].tx_descs_max -= tx_correction;
+
+		rx_descs_max -= rx_correction;
+		tx_descs_max -= tx_correction;
+	}
+
+	for (;;) {
+		err = _dtr_cm_alloc_rdma_res(cm, &cause);
+
+		if (err == 0 || cause != RDMA_CREATE_QP || err != -ENOMEM)
+			break;
+
+		reduced = true;
+		if (path->flow[DATA_STREAM].rx_descs_max <= 64)
+			break;
+		path->flow[DATA_STREAM].rx_descs_max -= 64;
+		if (path->flow[DATA_STREAM].tx_descs_max <= 64)
+			break;
+		path->flow[DATA_STREAM].tx_descs_max -= 64;
+		if (path->flow[CONTROL_STREAM].rx_descs_max > 8)
+			path->flow[CONTROL_STREAM].rx_descs_max -= 1;
+		if (path->flow[CONTROL_STREAM].tx_descs_max > 8)
+			path->flow[CONTROL_STREAM].tx_descs_max -= 1;
+	}
+
+	if (err) {
+		tr_err(transport, "%s failed with err = %d\n", err_txt[cause], err);
+	} else if (reduced) {
+		/* ib_create_qp() may return -ENOMEM if max_send_wr or max_recv_wr are
+		   too big. Unfortunately there is no way to query the working maxima;
+		   http://www.rdmamojo.com/2012/12/21/ibv_create_qp/
+		   suggests trial and error to find the maximal numbers. */
+
+		tr_warn(transport, "Needed to adjust buffer sizes for HCA\n");
+		tr_warn(transport, "rcvbuf = %d sndbuf = %d\n",
+			path->flow[DATA_STREAM].rx_descs_max * DRBD_SOCKET_BUFFER_SIZE,
+			path->flow[DATA_STREAM].tx_descs_max * DRBD_SOCKET_BUFFER_SIZE);
+		tr_warn(transport, "It is recommended to apply this change to the configuration\n");
+	}
+
+	return err;
+}
+
+static void dtr_end_rx_work_fn(struct work_struct *work)
+{
+	struct dtr_cm *cm = container_of(work, struct dtr_cm, end_rx_work);
+	struct dtr_rx_desc *rx_desc, *tmp;
+	unsigned long irq_flags;
+	LIST_HEAD(rx_descs);
+
+	spin_lock_irqsave(&cm->error_rx_descs_lock, irq_flags);
+	list_splice_init(&cm->error_rx_descs, &rx_descs);
+	spin_unlock_irqrestore(&cm->error_rx_descs_lock, irq_flags);
+	list_for_each_entry_safe(rx_desc, tmp, &rx_descs, list)
+		dtr_free_rx_desc(rx_desc);
+	kref_put(&cm->kref, dtr_destroy_cm);
+}
+
+static void dtr_end_tx_work_fn(struct work_struct *work)
+{
+	struct dtr_cm *cm = container_of(work, struct dtr_cm, end_tx_work);
+
+	kref_put(&cm->kref, dtr_destroy_cm);
+}
+
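+/* Tear down a path: stop the active and passive connect state machines,
+ * disconnect the RDMA connection, and move the QP to the error state so
+ * that all posted rx_descs get handed back. */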
+static void __dtr_disconnect_path(struct dtr_path *path)
+{
+	struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
+	struct drbd_transport *transport;
+	enum connect_state_enum a, p;
+	bool was_scheduled;
+	struct dtr_cm *cm;
+	long t;
+	int err;
+
+	if (!path)
+		return;
+
+	transport = path->path.transport;
+
+	a = atomic_cmpxchg(&path->cs.active_state, PCS_CONNECTING, PCS_REQUEST_ABORT);
+	p = atomic_cmpxchg(&path->cs.passive_state, PCS_CONNECTING, PCS_INACTIVE);
+
+	switch (p) {
+	case PCS_CONNECTING:
+		drbd_put_listener(&path->path);
+		break;
+	case PCS_FINISHING:
+		t = wait_event_timeout(path->cs.wq,
+				       atomic_read(&path->cs.passive_state) == PCS_INACTIVE,
+				       HZ * 60);
+		if (t == 0)
+			tr_warn(transport, "passive_state still %d\n", atomic_read(&path->cs.passive_state));
+		fallthrough;
+	case PCS_INACTIVE:
+		break;
+	}
+
+	switch (a) {
+	case PCS_CONNECTING:
+		was_scheduled = flush_delayed_work(&path->cs.retry_connect_work);
+		if (!was_scheduled) {
+			atomic_set(&path->cs.active_state, PCS_INACTIVE);
+			break;
+		}
+		fallthrough;
+	case PCS_REQUEST_ABORT:
+		t = wait_event_timeout(path->cs.wq,
+				       atomic_read(&path->cs.active_state) == PCS_INACTIVE,
+				       HZ * 60);
+		if (t == 0)
+			tr_warn(transport, "active_state still %d\n", atomic_read(&path->cs.active_state));
+		fallthrough;
+	case PCS_INACTIVE:
+		break;
+	}
+
+	cm = dtr_path_get_cm(path);
+	if (!cm)
+		return;
+
+	err = rdma_disconnect(cm->id);
+	if (err) {
+		tr_warn(transport, "failed to disconnect, id %p context %p err %d\n",
+			cm->id, cm->id->context, err);
+		/* We are ignoring errors here on purpose */
+		goto out;
+	}
+
+	/* A signal might be pending here, so do not use an interruptible wait */
+	wait_event_timeout(cm->state_wq,
+			   !test_bit(DSB_CONNECTED, &cm->state),
+			   HZ);
+
+	if (test_bit(DSB_CONNECTED, &cm->state))
+		tr_warn(transport, "WARN: not properly disconnected, state = %lu\n",
+			cm->state);
+
+ out:
+	/* between dtr_alloc_cm() and dtr_cm_alloc_rdma_res() cm->id->qp is NULL */
+	if (cm->id->qp) {
+		/* With putting the QP into error state, it has to hand back
+		   all posted rx_descs */
+		err = ib_modify_qp(cm->id->qp, &attr, IB_QP_STATE);
+		if (err)
+			tr_err(transport, "ib_modify_qp failed %d\n", err);
+	}
+
+	/*
+	 * We are expecting one of RDMA_CM_EVENT_ESTABLISHED, _UNREACHABLE,
+	 * _CONNECT_ERROR, or _REJECTED on this cm. Some RDMA drivers report
+	 * these error events after unexpectedly long timeouts, while others do
+	 * not report it at all. We are no longer interested in these
+	 * events. Destroy the cm and cm_id to avoid leaking it.
+	 * This is racing with the event delivery, which drops a reference.
+	 */
+	if (test_and_clear_bit(DSB_CONNECTING, &cm->state) ||
+	    test_and_clear_bit(DSB_CONNECT_REQ, &cm->state))
+		kref_put(&cm->kref, dtr_destroy_cm);
+
+	kref_put(&cm->kref, dtr_destroy_cm);
+}
+
+static void dtr_reclaim_cm(struct rcu_head *rcu_head)
+{
+	struct dtr_cm *cm = container_of(rcu_head, struct dtr_cm, rcu);
+
+	kfree(cm);
+	module_put(THIS_MODULE);
+}
+
+/* dtr_destroy_cm() might run after the transport was destroyed */
+static void __dtr_destroy_cm(struct kref *kref, bool destroy_id)
+{
+	struct dtr_cm *cm = container_of(kref, struct dtr_cm, kref);
+
+	if (cm->id) {
+		if (cm->id->qp)
+			rdma_destroy_qp(cm->id);
+		cm->id->qp = NULL;
+	}
+
+	if (cm->send_cq) {
+		ib_free_cq(cm->send_cq);
+		cm->send_cq = NULL;
+	}
+
+	if (cm->recv_cq) {
+		ib_free_cq(cm->recv_cq);
+		cm->recv_cq = NULL;
+	}
+
+	if (cm->pd) {
+		ib_dealloc_pd(cm->pd);
+		cm->pd = NULL;
+	}
+
+	if (cm->id) {
+		/* Just in case some callback is still triggered
+		 * after we kfree'd path. */
+		cm->id->context = NULL;
+		if (destroy_id)
+			rdma_destroy_id(cm->id);
+		cm->id = NULL;
+	}
+	if (cm->path) {
+		kref_put(&cm->path->path.kref, drbd_destroy_path);
+		cm->path = NULL;
+	}
+
+	call_rcu(&cm->rcu, dtr_reclaim_cm);
+}
+
+static void dtr_destroy_cm(struct kref *kref)
+{
+	__dtr_destroy_cm(kref, true);
+}
+
+static void dtr_destroy_cm_keep_id(struct kref *kref)
+{
+	__dtr_destroy_cm(kref, false);
+}
+
+static void dtr_disconnect_path(struct dtr_path *path)
+{
+	struct dtr_cm *cm;
+
+	if (!path)
+		return;
+
+	__dtr_disconnect_path(path);
+	cancel_work_sync(&path->refill_rx_descs_work);
+
+	cm = xchg(&path->cm, NULL); /* RCU xchg */
+	if (cm)
+		kref_put(&cm->kref, dtr_destroy_cm);
+}
+
+static void dtr_destroy_listener(struct drbd_listener *generic_listener)
+{
+	struct dtr_listener *listener =
+		container_of(generic_listener, struct dtr_listener, listener);
+
+	if (listener->cm.id)
+		rdma_destroy_id(listener->cm.id);
+}
+
+static int dtr_init_listener(struct drbd_transport *transport, const struct sockaddr *addr, struct net *net, struct drbd_listener *drbd_listener)
+{
+	struct dtr_listener *listener = container_of(drbd_listener, struct dtr_listener, listener);
+	struct sockaddr_storage my_addr;
+	int err = -ENOMEM;
+
+	my_addr = *(struct sockaddr_storage *)addr;
+
+	err = dtr_create_cm_id(&listener->cm, net);
+	if (err) {
+		tr_err(transport, "rdma_create_id() failed\n");
+		goto out;
+	}
+	listener->cm.state = 0; /* listening */
+
+	err = rdma_bind_addr(listener->cm.id, (struct sockaddr *)&my_addr);
+	if (err) {
+		tr_err(transport, "rdma_bind_addr error %d\n", err);
+		goto out;
+	}
+
+	err = rdma_listen(listener->cm.id, 1);
+	if (err) {
+		tr_err(transport, "rdma_listen error %d\n", err);
+		goto out;
+	}
+
+	listener->listener.listen_addr = *(struct sockaddr_storage *)addr;
+
+	return 0;
+out:
+	if (listener->cm.id) {
+		rdma_destroy_id(listener->cm.id);
+		listener->cm.id = NULL;
+	}
+
+	return err;
+}
+
+static int dtr_activate_path(struct dtr_path *path)
+{
+	struct drbd_transport *transport = path->path.transport;
+	struct dtr_connect_state *cs;
+	int err = -ENOMEM;
+
+	cs = &path->cs;
+
+	init_waitqueue_head(&cs->wq);
+
+	atomic_set(&cs->passive_state, PCS_CONNECTING);
+	atomic_set(&cs->active_state, PCS_CONNECTING);
+
+	if (path->path.listener) {
+		tr_warn(transport, "ASSERTION FAILED: in dtr_activate_path() found listener, dropping it\n");
+		drbd_put_listener(&path->path);
+	}
+	err = drbd_get_listener(&path->path);
+	if (err)
+		goto out_no_put;
+
+	/*
+	 * Check passive_state after drbd_get_listener() completed.
+	 * __dtr_disconnect_path() sets passive_state before calling
+	 * drbd_put_listener(). That drbd_put_listener() might return
+	 * before the drbd_get_listener() here started.
+	 */
+	if (atomic_read(&cs->passive_state) != PCS_CONNECTING ||
+	    atomic_read(&cs->active_state) != PCS_CONNECTING)
+		goto out;
+
+	err = dtr_start_try_connect(cs);
+	if (err)
+		goto out;
+
+	return 0;
+
+out:
+	drbd_put_listener(&path->path);
+out_no_put:
+	atomic_set(&cs->passive_state, PCS_INACTIVE);
+	atomic_set(&cs->active_state, PCS_INACTIVE);
+	wake_up(&cs->wq);
+
+	return err;
+}
+
+static int dtr_prepare_connect(struct drbd_transport *transport)
+{
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+
+	struct dtr_stream *data_stream = NULL, *control_stream = NULL;
+	struct dtr_path *path;
+	struct net_conf *nc;
+	int timeout, err = -ENOMEM;
+
+	flush_signals(current);
+
+	if (!list_first_or_null_rcu(&transport->paths, struct drbd_path, list))
+		return -EDESTADDRREQ;
+
+	data_stream = &rdma_transport->stream[DATA_STREAM];
+	dtr_re_init_stream(data_stream);
+
+	control_stream = &rdma_transport->stream[CONTROL_STREAM];
+	dtr_re_init_stream(control_stream);
+
+	rcu_read_lock();
+	nc = rcu_dereference(transport->net_conf);
+
+	timeout = nc->timeout * HZ / 10;
+	rcu_read_unlock();
+
+	data_stream->send_timeout = timeout;
+	control_stream->send_timeout = timeout;
+
+	atomic_set(&rdma_transport->first_path_connect_err, 1);
+	init_completion(&rdma_transport->connected);
+
+	rdma_transport->active = true;
+
+	list_for_each_entry(path, &transport->paths, path.list) {
+		err = dtr_activate_path(path);
+		if (err)
+			goto abort;
+	}
+
+	return 0;
+
+abort:
+	rdma_transport->active = false;
+	return err;
+}
+
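+/* Wait for the outcome of the first path's connect attempt;
+ * first_path_connect_err stays at 1 until some path reports a result. */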
+static int dtr_connect(struct drbd_transport *transport)
+{
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+	int i, err = -ENOMEM;
+
+	err = wait_for_completion_interruptible(&rdma_transport->connected);
+	if (err) {
+		flush_signals(current);
+		goto abort;
+	}
+
+	err = atomic_read(&rdma_transport->first_path_connect_err);
+	if (err == 1)
+		err = -EAGAIN;
+	if (err)
+		goto abort;
+
+	/* Make sure at least one path has rx_descs... */
+	for (i = DATA_STREAM; i <= CONTROL_STREAM ; i++)
+		dtr_refill_rx_desc(rdma_transport, i);
+
+	/* make sure the other side had time to create rx_descs */
+	schedule_timeout_uninterruptible(HZ / 4);
+
+	return 0;
+
+abort:
+	rdma_transport->active = false;
+
+	return err;
+}
+
+static void dtr_finish_connect(struct drbd_transport *transport)
+{
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+
+	if (!rdma_transport->active) {
+		struct dtr_path *path;
+
+		list_for_each_entry(path, &transport->paths, path.list)
+			dtr_disconnect_path(path);
+	}
+}
+
+static int dtr_net_conf_change(struct drbd_transport *transport, struct net_conf *new_net_conf)
+{
+	struct net_conf *old_net_conf;
+	struct dtr_transport *dtr_transport = container_of(transport,
+		struct dtr_transport, transport);
+	int ret = 0;
+
+	rcu_read_lock();
+	old_net_conf = rcu_dereference(transport->net_conf);
+	if (old_net_conf && dtr_transport->active) {
+		if (old_net_conf->sndbuf_size != new_net_conf->sndbuf_size) {
+			tr_warn(transport, "online change of sndbuf_size not supported\n");
+			ret = -EINVAL;
+		}
+		if (old_net_conf->rcvbuf_size != new_net_conf->rcvbuf_size) {
+			tr_warn(transport, "online change of rcvbuf_size not supported\n");
+			ret = -EINVAL;
+		}
+	}
+	rcu_read_unlock();
+
+	return ret;
+}
+
+static void dtr_set_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream, long timeout)
+{
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+
+	rdma_transport->stream[stream].recv_timeout = timeout;
+
+	if (stream == CONTROL_STREAM)
+		mod_timer(&rdma_transport->control_timer, jiffies + timeout);
+}
+
+static long dtr_get_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream)
+{
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+
+	return rdma_transport->stream[stream].recv_timeout;
+}
+
+static bool dtr_path_ok(struct dtr_path *path)
+{
+	bool r = false;
+	struct dtr_cm *cm;
+
+	rcu_read_lock();
+	cm = rcu_dereference(path->cm);
+	if (cm)
+		r = cm->id && cm->state == DSM_CONNECTED;
+	rcu_read_unlock();
+
+	return r;
+}
+
+static bool dtr_transport_ok(struct drbd_transport *transport)
+{
+	struct dtr_path *path;
+	bool r = false;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(path, &transport->paths, path.list) {
+		r = dtr_path_ok(path);
+		if (r)
+			break;
+	}
+	rcu_read_unlock();
+
+	return r;
+}
+
+static bool dtr_stream_ok(struct drbd_transport *transport, enum drbd_stream stream)
+{
+	return dtr_transport_ok(transport);
+}
+
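+/* The transport counts as congested only while every usable path is low on
+ * send credits. */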
+static void dtr_update_congested(struct drbd_transport *transport)
+{
+	struct dtr_path *path;
+	bool congested = true;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(path, &transport->paths, path.list) {
+		struct dtr_flow *flow = &path->flow[DATA_STREAM];
+		bool path_congested = false;
+		int tx_descs_posted;
+
+		if (!dtr_path_ok(path))
+			continue;
+
+		tx_descs_posted = atomic_read(&flow->tx_descs_posted);
+		path_congested |= flow->tx_descs_max - tx_descs_posted < DESCS_LOW_LEVEL;
+		path_congested |= atomic_read(&flow->peer_rx_descs) < DESCS_LOW_LEVEL;
+
+		if (!path_congested) {
+			congested = false;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	if (congested)
+		set_bit(NET_CONGESTED, &transport->flags);
+}
+
+static int dtr_send_page(struct drbd_transport *transport, enum drbd_stream stream,
+			 struct page *caller_page, int offset, size_t size, unsigned msg_flags)
+{
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+	struct dtr_tx_desc *tx_desc;
+	struct page *page;
+	int err;
+
+	// pr_info("%s: in send_page, size: %zu\n", rdma_stream->name, size);
+
+	if (!dtr_transport_ok(transport))
+		return -ECONNRESET;
+
+	tx_desc = kmalloc(sizeof(*tx_desc) + sizeof(struct ib_sge), GFP_NOIO);
+	if (!tx_desc)
+		return -ENOMEM;
+
+	if (msg_flags & MSG_SPLICE_PAGES) {
+		page = caller_page;
+		get_page(page); /* The put_page() is in dtr_tx_cqe_done() */
+	} else {
+		void *from;
+
+		page = drbd_alloc_pages(transport, 1, GFP_NOIO);
+		if (!page) {
+			kfree(tx_desc);
+			return -ENOMEM;
+		}
+		from = kmap_local_page(caller_page);
+		memcpy(page_address(page), from + offset, size);
+		kunmap_local(from);
+		offset = 0;
+	}
+
+	tx_desc->type = SEND_PAGE;
+	tx_desc->page = page;
+	tx_desc->nr_sges = 1;
+	tx_desc->imm = (union dtr_immediate)
+		{ .stream = stream,
+		  .sequence = rdma_transport->stream[stream].tx_sequence++
+		};
+	tx_desc->sge[0].length = size;
+	tx_desc->sge[0].lkey = offset; /* abusing the lkey field; see dtr_post_tx_desc() */
+
+	err = dtr_post_tx_desc(rdma_transport, tx_desc);
+	if (err) {
+		put_page(page);
+		kfree(tx_desc);
+
+		tr_err(transport, "dtr_post_tx_desc() failed %d\n", err);
+		drbd_control_event(transport, CLOSED_BY_PEER);
+	}
+
+	if (stream == DATA_STREAM)
+		dtr_update_congested(transport);
+
+	return err;
+}
+
+#if SENDER_COMPACTS_BVECS
+static int dtr_send_bio_part(struct dtr_transport *rdma_transport,
+			     struct bio *bio, int start, int size_tx_desc, int sges)
+{
+	struct dtr_stream *rdma_stream = &rdma_transport->stream[DATA_STREAM];
+	struct dtr_tx_desc *tx_desc;
+	struct ib_device *device;
+	struct dtr_path *path = NULL;
+	struct bio_vec bvec;
+	struct bvec_iter iter;
+	int i = 0, pos = 0, done = 0, err;
+
+	if (!size_tx_desc)
+		return 0;
+
+	//tr_info(&rdma_transport->transport,
+	//	"  dtr_send_bio_part(start = %d, size = %d, sges = %d)\n",
+	//	start, size_tx_desc, sges);
+
+	tx_desc = kmalloc(sizeof(*tx_desc) + sizeof(struct ib_sge) * sges, GFP_NOIO);
+	if (!tx_desc)
+		return -ENOMEM;
+
+	tx_desc->type = SEND_BIO;
+	tx_desc->bio = bio;
+	tx_desc->nr_sges = sges;
+	device = rdma_stream->cm.id->device;
+
+	bio_for_each_segment(bvec, tx_desc->bio, iter) {
+		struct page *page = bvec.bv_page;
+		int offset = bvec.bv_offset;
+		int size = bvec.bv_len;
+		int shift = 0;
+		get_page(page);
+
+		if (pos < start || done == size_tx_desc) {
+			if (done != size_tx_desc && pos + size > start) {
+				shift = (start - pos);
+			} else {
+				pos += size;
+				continue;
+			}
+		}
+
+		pos += size;
+		offset += shift;
+		size = min(size - shift, size_tx_desc - done);
+
+		//tr_info(&rdma_transport->transport,
+		//	"   sge (i = %d, offset = %d, size = %d)\n",
+		//	i, offset, size);
+
+		tx_desc->sge[i].addr = ib_dma_map_page(device, page, offset, size, DMA_TO_DEVICE);
+		err = ib_dma_mapping_error(device, tx_desc->sge[i].addr);
+		if (err)
+			return err; /* FIXME: leaks tx_desc and the page references taken above */
+		tx_desc->sge[i].lkey = dtr_path_to_lkey(path);
+		tx_desc->sge[i].length = size;
+		done += size;
+		i++;
+	}
+
+	TR_ASSERT(&rdma_transport->transport, done == size_tx_desc);
+	tx_desc->imm = (union dtr_immediate)
+		{ .stream = ST_DATA,
+		  .sequence = rdma_transport->stream[ST_DATA].tx_sequence++
+		};
+
+	err = dtr_post_tx_desc(rdma_stream, tx_desc, &path);
+	if (err) {
+		if (path) {
+			dtr_free_tx_desc(path, tx_desc);
+		} else {
+			bio_for_each_segment(bvec, tx_desc->bio, iter) {
+				put_page(bvec.bv_page);
+			}
+			kfree(tx_desc);
+		}
+
+		tr_err(&rdma_transport->transport, "dtr_post_tx_desc() failed %d\n", err);
+		drbd_control_event(&rdma_transport->transport, CLOSED_BY_PEER);
+	}
+
+	return err;
+}
+#endif
+
+static int dtr_send_zc_bio(struct drbd_transport *transport, struct bio *bio)
+{
+#if SENDER_COMPACTS_BVECS
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+	int start = 0, sges = 0, size_tx_desc = 0, remaining = 0;
+	int sges_max = rdma_transport->sges_max;
+#endif
+	int err = -EINVAL;
+	struct bio_vec bvec;
+	struct bvec_iter iter;
+
+	//tr_info(transport, "in send_zc_bio, size: %d\n", bio->bi_size);
+
+	if (!dtr_transport_ok(transport))
+		return -ECONNRESET;
+
+#if SENDER_COMPACTS_BVECS
+	bio_for_each_segment(bvec, bio, iter) {
+		size_tx_desc += bvec.bv_len;
+		//tr_info(transport, " bvec len = %d\n", bvec.bv_len);
+		if (size_tx_desc > DRBD_SOCKET_BUFFER_SIZE) {
+			remaining = size_tx_desc - DRBD_SOCKET_BUFFER_SIZE;
+			size_tx_desc = DRBD_SOCKET_BUFFER_SIZE;
+		}
+		sges++;
+		if (size_tx_desc == DRBD_SOCKET_BUFFER_SIZE || sges >= sges_max) {
+			err = dtr_send_bio_part(rdma_transport, bio, start, size_tx_desc, sges);
+			if (err)
+				goto out;
+			start += size_tx_desc;
+			sges = 0;
+			size_tx_desc = remaining;
+			if (remaining) {
+				sges++;
+				remaining = 0;
+			}
+		}
+	}
+	err = dtr_send_bio_part(rdma_transport, bio, start, size_tx_desc, sges);
+	start += size_tx_desc;
+
+	TR_ASSERT(transport, start == bio->bi_iter.bi_size);
+out:
+#else
+	bio_for_each_segment(bvec, bio, iter) {
+		err = dtr_send_page(transport, DATA_STREAM,
+			bvec.bv_page, bvec.bv_offset, bvec.bv_len,
+			0 /* flags currently unused by dtr_send_page */);
+		if (err)
+			break;
+	}
+#endif
+	/* zero-copy bios always travel on the DATA_STREAM */
+	dtr_update_congested(transport);
+
+	return err;
+}
+
+static bool dtr_hint(struct drbd_transport *transport, enum drbd_stream stream,
+		enum drbd_tr_hints hint)
+{
+	switch (hint) {
+	default: /* not implemented, but should not trigger error handling */
+		return true;
+	}
+}
+
+static void dtr_debugfs_show_flow(struct dtr_flow *flow, const char *name, struct seq_file *m)
+{
+	seq_printf(m,    " %-7s field:  posted\t alloc\tdesired\t  max\n", name);
+	seq_printf(m, "      tx_descs: %5d\t\t\t%5d\n", atomic_read(&flow->tx_descs_posted), flow->tx_descs_max);
+	seq_printf(m, " peer_rx_descs: %5d (receive window at peer)\n", atomic_read(&flow->peer_rx_descs));
+	seq_printf(m, "      rx_descs: %5d\t%5d\t%5d\t%5d\n", atomic_read(&flow->rx_descs_posted),
+		   atomic_read(&flow->rx_descs_allocated),
+		   flow->rx_descs_want_posted, flow->rx_descs_max);
+	seq_printf(m, " rx_peer_knows: %5d (what the peer knows about my receive window)\n\n",
+		   atomic_read(&flow->rx_descs_known_to_peer));
+}
+
+static void dtr_debugfs_show_path(struct dtr_path *path, struct seq_file *m)
+{
+	static const char * const stream_names[] = {
+		[ST_DATA] = "data",
+		[ST_CONTROL] = "control",
+	};
+	static const char * const state_names[] = {
+		[0] = "not connected",
+		[DSM_CONNECT_REQ] = "CONNECT_REQ",
+		[DSM_CONNECTING] = "CONNECTING",
+		[DSM_CONNECTING|DSM_CONNECT_REQ] = "CONNECTING|CONNECT_REQ",
+		[DSM_CONNECTED] = "CONNECTED",
+		[DSM_CONNECTED|DSM_CONNECT_REQ] = "CONNECTED|CONNECT_REQ",
+		[DSM_CONNECTED|DSM_CONNECTING] = "CONNECTED|CONNECTING",
+		[DSM_CONNECTED|DSM_CONNECTING|DSM_CONNECT_REQ] =
+			"CONNECTED|CONNECTING|CONNECT_REQ",
+		[DSM_ERROR] = "ERROR",
+		[DSM_ERROR|DSM_CONNECT_REQ] = "ERROR|CONNECT_REQ",
+		[DSM_ERROR|DSM_CONNECTING] = "ERROR|CONNECTING",
+		[DSM_ERROR|DSM_CONNECTING|DSM_CONNECT_REQ] = "ERROR|CONNECTING|CONNECT_REQ",
+		[DSM_ERROR|DSM_CONNECTED] = "ERROR|CONNECTED",
+		[DSM_ERROR|DSM_CONNECTED|DSM_CONNECT_REQ] = "ERROR|CONNECTED|CONNECT_REQ",
+		[DSM_ERROR|DSM_CONNECTED|DSM_CONNECTING] = "ERROR|CONNECTED|CONNECTING",
+		[DSM_ERROR|DSM_CONNECTED|DSM_CONNECTING|DSM_CONNECT_REQ] =
+			"ERROR|CONNECTED|CONNECTING|CONNECT_REQ",
+	};
+
+	enum drbd_stream i;
+	unsigned long s = 0;
+	struct dtr_cm *cm;
+
+	rcu_read_lock();
+	cm = rcu_dereference(path->cm);
+	if (cm)
+		s = cm->state;
+	rcu_read_unlock();
+
+	seq_printf(m, "%pI4 - %pI4: %s\n",
+		   &((struct sockaddr_in *)&path->path.my_addr)->sin_addr,
+		   &((struct sockaddr_in *)&path->path.peer_addr)->sin_addr,
+		   state_names[s]);
+
+	if (dtr_path_ok(path)) {
+		for (i = DATA_STREAM; i <= CONTROL_STREAM; i++)
+			dtr_debugfs_show_flow(&path->flow[i], stream_names[i], m);
+	}
+}
+
+static void dtr_debugfs_show(struct drbd_transport *transport, struct seq_file *m)
+{
+	struct dtr_path *path;
+
+	/* BUMP me if you change the file format/content/presentation */
+	seq_printf(m, "v: %u\n\n", 1);
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(path, &transport->paths, path.list)
+		dtr_debugfs_show_path(path, m);
+	rcu_read_unlock();
+}
+
+static int dtr_add_path(struct drbd_path *add_path)
+{
+	struct drbd_transport *transport = add_path->transport;
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+	struct dtr_path *path;
+
+	path = container_of(add_path, struct dtr_path, path);
+
+	/* initialize private parts of path */
+	atomic_set(&path->cs.passive_state, PCS_INACTIVE);
+	atomic_set(&path->cs.active_state, PCS_INACTIVE);
+	spin_lock_init(&path->send_flow_control_lock);
+	tasklet_setup(&path->flow_control_tasklet, dtr_flow_control_tasklet_fn);
+	INIT_WORK(&path->refill_rx_descs_work, dtr_refill_rx_descs_work_fn);
+	INIT_DELAYED_WORK(&path->cs.retry_connect_work, dtr_cma_retry_connect_work_fn);
+
+	if (!rdma_transport->active)
+		return 0;
+
+	return dtr_activate_path(path);
+}
+
+static bool dtr_may_remove_path(struct drbd_path *del_path)
+{
+	struct drbd_transport *transport = del_path->transport;
+	struct dtr_transport *rdma_transport =
+		container_of(transport, struct dtr_transport, transport);
+	struct drbd_path *drbd_path, *connected_path = NULL;
+	int connected = 0;
+
+	if (!rdma_transport->active)
+		return true;
+
+	list_for_each_entry(drbd_path, &transport->paths, list) {
+		struct dtr_path *path = container_of(drbd_path, struct dtr_path, path);
+
+		if (dtr_path_ok(path)) {
+			connected++;
+			connected_path = drbd_path;
+		}
+	}
+
+	return connected > 1 || connected_path != del_path;
+}
+
+static void dtr_remove_path(struct drbd_path *del_path)
+{
+	struct dtr_path *path = container_of(del_path, struct dtr_path, path);
+
+	dtr_disconnect_path(path);
+}
+
+static int __init dtr_initialize(void)
+{
+	allocation_size = PAGE_SIZE;
+
+	return drbd_register_transport_class(&rdma_transport_class,
+					     DRBD_TRANSPORT_API_VERSION,
+					     sizeof(struct drbd_transport));
+}
+
+static void __exit dtr_cleanup(void)
+{
+	drbd_unregister_transport_class(&rdma_transport_class);
+}
+
+module_init(dtr_initialize);
+module_exit(dtr_cleanup);
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 07/20] drbd: add load-balancing TCP transport
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
                   ` (5 preceding siblings ...)
  2026-03-27 22:38 ` [PATCH 06/20] drbd: add RDMA " Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 08/20] drbd: add DAX/PMEM support for metadata access Christoph Böhmwalder
                   ` (12 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder, Joel Colledge

Add a second TCP transport implementation (lb-tcp) that distributes
replication traffic across multiple network paths simultaneously.
Unlike the standard TCP transport which treats paths as failover
alternatives, this transport connects all configured paths in parallel
and always sends on whichever path has the shortest send queue.

To make out-of-order delivery across paths coherent, each chunk of data
is prefixed with a sequence number so the receiver can reassemble
chunks in the correct order regardless of which path delivered them
first.
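
For illustration, the receive-side rule is roughly the following (a
minimal sketch; dtl_header and the sequence fields are the ones
introduced in drbd_transport_lb-tcp.c below):

	struct dtl_header {		/* prefixed to every chunk */
		u32 sequence;		/* big endian on the wire */
		u32 bytes;		/* payload bytes that follow */
	} __packed;

	/* consume a chunk from any path only when it is next in order */
	if (be32_to_cpu(hdr.sequence) == stream->recv_sequence + 1) {
		stream->recv_sequence++;
		/* ... read hdr.bytes of payload from this path ... */
	}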

This transport is optional: it is built as a separate module and does
not affect the standard TCP transport.

Co-developed-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Co-developed-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Co-developed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Co-developed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/Kconfig                 |   11 +
 drivers/block/drbd/Makefile                |    1 +
 drivers/block/drbd/drbd_transport_lb-tcp.c | 1905 ++++++++++++++++++++
 3 files changed, 1917 insertions(+)
 create mode 100644 drivers/block/drbd/drbd_transport_lb-tcp.c

diff --git a/drivers/block/drbd/Kconfig b/drivers/block/drbd/Kconfig
index 203cfa2bf228..a214e92c32eb 100644
--- a/drivers/block/drbd/Kconfig
+++ b/drivers/block/drbd/Kconfig
@@ -84,6 +84,17 @@ config BLK_DEV_DRBD_TCP
 
 	  If unsure, say Y.
 
+config BLK_DEV_DRBD_LB_TCP
+	tristate "DRBD load-balanced TCP transport"
+	depends on BLK_DEV_DRBD
+	help
+
+	  Load-balanced TCP transport support for DRBD. This transport
+	  distributes DRBD replication traffic across multiple TCP
+	  connections for improved throughput.
+
+	  If unsure, say N.
+
 config BLK_DEV_DRBD_RDMA
 	tristate "DRBD RDMA transport"
 	depends on BLK_DEV_DRBD && INFINIBAND && INFINIBAND_ADDR_TRANS
diff --git a/drivers/block/drbd/Makefile b/drivers/block/drbd/Makefile
index d47d311f76ea..7f2655a206aa 100644
--- a/drivers/block/drbd/Makefile
+++ b/drivers/block/drbd/Makefile
@@ -10,4 +10,5 @@ drbd-$(CONFIG_DEBUG_FS) += drbd_debugfs.o
 obj-$(CONFIG_BLK_DEV_DRBD)     += drbd.o
 
 obj-$(CONFIG_BLK_DEV_DRBD_TCP) += drbd_transport_tcp.o
+obj-$(CONFIG_BLK_DEV_DRBD_LB_TCP) += drbd_transport_lb-tcp.o
 obj-$(CONFIG_BLK_DEV_DRBD_RDMA) += drbd_transport_rdma.o
diff --git a/drivers/block/drbd/drbd_transport_lb-tcp.c b/drivers/block/drbd/drbd_transport_lb-tcp.c
new file mode 100644
index 000000000000..497fca8c413c
--- /dev/null
+++ b/drivers/block/drbd/drbd_transport_lb-tcp.c
@@ -0,0 +1,1905 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * drbd_transport_lb-tcp.c
+ *
+ * This file is part of DRBD.
+ *
+ *  Copyright (C) 2014-2023, LINBIT HA-Solutions GmbH.
+ */
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/socket.h>
+#include <linux/pkt_sched.h>
+#include <linux/sched/signal.h>
+#include <linux/net.h>
+#include <linux/tcp.h>
+#include <linux/highmem.h>
+#include <linux/bio.h>
+#include <linux/drbd_genl_api.h>
+#include <linux/drbd_config.h>
+#include <net/tcp.h>
+#include "drbd_protocol.h"
+#include "drbd_transport.h"
+
+MODULE_AUTHOR("Philipp Reisner <philipp.reisner@linbit.com>");
+MODULE_AUTHOR("Lars Ellenberg <lars.ellenberg@linbit.com>");
+MODULE_AUTHOR("Roland Kammerer <roland.kammerer@linbit.com>");
+MODULE_DESCRIPTION("Load balancing TCP transport layer for DRBD");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(REL_VERSION);
+
+/* TCP keepalive has proven to be vital in many deployment scenarios.
+ * Without keepalive, after a device has seen a sufficiently long period of
+ * idle time, packets on our "bulk data" socket may be dropped because an
+ * overly "smart" network infrastructure decided that TCP session was stale.
+ * Note that we don't try to use this to detect "broken" tcp sessions here,
+ * these will still be handled by the DRBD effective network timeout via
+ * timeout / ko-count settings.
+ * We use this to try to keep "idle" TCP sessions "alive".
+ * Default to send a probe every 23 seconds.
+ */
+#define DRBD_KEEP_IDLE	23
+#define DRBD_KEEP_INTVL 23
+#define DRBD_KEEP_CNT	9
+static unsigned int drbd_keepcnt = DRBD_KEEP_CNT;
+module_param_named(keepcnt, drbd_keepcnt, uint, 0664);
+MODULE_PARM_DESC(keepcnt, "see tcp(7) tcp_keepalive_probes; set TCP_KEEPCNT for data sockets; default: 9");
+static unsigned int drbd_keepidle = DRBD_KEEP_IDLE;
+module_param_named(keepidle, drbd_keepidle, uint, 0664);
+MODULE_PARM_DESC(keepidle, "see tcp(7) tcp_keepalive_time; set TCP_KEEPIDLE for data sockets; default: 23s");
+static unsigned int drbd_keepintvl = DRBD_KEEP_INTVL;
+module_param_named(keepintvl, drbd_keepintvl, uint, 0664);
+MODULE_PARM_DESC(keepintvl, "see tcp(7) tcp_keepalive_intvl; set TCP_KEEPINTVL for data sockets; default: 23s");
+
+#define DTL_CONNECTING 1
+#define DTL_LOAD_BALANCE 2
+
+struct dtl_flow;
+
+struct dtl_header {
+	u32 sequence;
+	u32 bytes;
+} __packed;
+
+struct buffer {
+	void *base;
+	void *pos;
+};
+
+struct dtl_stream {
+	unsigned int send_sequence;
+	struct dtl_flow *recv_flow;
+	unsigned int recv_sequence;
+	long rcvtimeo;
+};
+
+struct dtl_transport {
+	struct drbd_transport transport; /* Must be first! */
+	spinlock_t control_recv_lock;
+	unsigned long flags;
+	struct timer_list control_timer;
+	struct delayed_work connect_work;
+	wait_queue_head_t data_ready;
+	wait_queue_head_t write_space;
+	struct dtl_stream streams[2];
+	struct buffer rbuf;
+	int connected_paths;
+	wait_queue_head_t connected_paths_change;
+	int err;
+};
+
+struct dtl_listener {
+	struct drbd_listener listener;
+
+	struct work_struct accept_work;
+	void (*original_sk_state_change)(struct sock *sk);
+	struct socket *s_listen;
+};
+
+struct dtl_flow {
+	struct socket *sock;
+	unsigned int recv_sequence;
+	int recv_bytes; /* The number of bytes to receive before the next dtl_header */
+	struct {
+		union {
+			struct dtl_header header;
+			u8 bytes[8];
+		};
+		int avail;
+	} control_reassemble;
+
+	void (*original_sk_state_change)(struct sock *sk);
+	void (*original_sk_data_ready)(struct sock *sk);
+	void (*original_sk_write_space)(struct sock *sk);
+
+	enum drbd_stream stream_nr;
+};
+
+struct dtl_path {
+	struct drbd_path path;
+	struct dtl_flow flow[2];
+};
+
+static int dtl_init(struct drbd_transport *transport);
+static void dtl_free(struct drbd_transport *transport, enum drbd_tr_free_op free_op);
+static void dtl_socket_free(struct drbd_transport *transport, struct socket **sock);
+static int dtl_prepare_connect(struct drbd_transport *transport);
+static int dtl_connect(struct drbd_transport *transport);
+static void dtl_finish_connect(struct drbd_transport *transport);
+static int dtl_recv(struct drbd_transport *transport, enum drbd_stream stream, void **buf,
+		    size_t size, int flags);
+static int dtl_recv_pages(struct drbd_transport *transport, struct drbd_page_chain_head *chain,
+			  size_t size);
+static void dtl_stats(struct drbd_transport *transport, struct drbd_transport_stats *stats);
+static int dtl_net_conf_change(struct drbd_transport *transport, struct net_conf *new_net_conf);
+static void dtl_set_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream,
+			     long timeout);
+static long dtl_get_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream);
+static int dtl_send_page(struct drbd_transport *transport, enum drbd_stream, struct page *page,
+		int offset, size_t size, unsigned int msg_flags);
+static int dtl_send_zc_bio(struct drbd_transport *, struct bio *bio);
+static bool dtl_stream_ok(struct drbd_transport *transport, enum drbd_stream stream);
+static bool dtl_hint(struct drbd_transport *transport, enum drbd_stream stream,
+		     enum drbd_tr_hints hint);
+static void dtl_debugfs_show(struct drbd_transport *transport, struct seq_file *m);
+static int dtl_add_path(struct drbd_path *path);
+static bool dtl_may_remove_path(struct drbd_path *);
+static void dtl_remove_path(struct drbd_path *);
+static void dtl_control_timer_fn(struct timer_list *t);
+static void dtl_write_space(struct sock *sk);
+static void dtl_connect_work_fn(struct work_struct *work);
+static void dtl_accept_work_fn(struct work_struct *work);
+static int dtl_set_active(struct drbd_transport *transport, bool active);
+static int dtl_path_adjust_listener(struct dtl_path *path, bool active);
+static int dtl_init_listener(struct drbd_transport *transport, const struct sockaddr *addr,
+			     struct net *net, struct drbd_listener *drbd_listener);
+static void dtl_destroy_listener(struct drbd_listener *generic_listener);
+static void dtl_set_socket_callbacks(struct dtl_transport *dtl_transport, struct dtl_flow *flow);
+
+static struct drbd_transport_class dtl_transport_class = {
+	.name = "lb-tcp",
+	.instance_size = sizeof(struct dtl_transport),
+	.path_instance_size = sizeof(struct dtl_path),
+	.listener_instance_size = sizeof(struct dtl_listener),
+	.ops = (struct drbd_transport_ops) {
+		.init = dtl_init,
+		.free = dtl_free,
+		.init_listener = dtl_init_listener,
+		.release_listener = dtl_destroy_listener,
+		.prepare_connect = dtl_prepare_connect,
+		.connect = dtl_connect,
+		.finish_connect = dtl_finish_connect,
+		.recv = dtl_recv,
+		.recv_pages = dtl_recv_pages,
+		.stats = dtl_stats,
+		.net_conf_change = dtl_net_conf_change,
+		.set_rcvtimeo = dtl_set_rcvtimeo,
+		.get_rcvtimeo = dtl_get_rcvtimeo,
+		.send_page = dtl_send_page,
+		.send_zc_bio = dtl_send_zc_bio,
+		.stream_ok = dtl_stream_ok,
+		.hint = dtl_hint,
+		.debugfs_show = dtl_debugfs_show,
+		.add_path = dtl_add_path,
+		.may_remove_path = dtl_may_remove_path,
+		.remove_path = dtl_remove_path,
+	},
+	.module = THIS_MODULE,
+	.list = LIST_HEAD_INIT(dtl_transport_class.list),
+};
+
+static int dtl_init(struct drbd_transport *transport)
+{
+	struct dtl_transport *dtl_transport =
+		container_of(transport, struct dtl_transport, transport);
+
+	spin_lock_init(&dtl_transport->control_recv_lock);
+
+	dtl_transport->transport.class = &dtl_transport_class;
+	timer_setup(&dtl_transport->control_timer, dtl_control_timer_fn, 0);
+
+	init_waitqueue_head(&dtl_transport->data_ready);
+	init_waitqueue_head(&dtl_transport->write_space);
+	INIT_DELAYED_WORK(&dtl_transport->connect_work, dtl_connect_work_fn);
+	dtl_transport->connected_paths = 0;
+	dtl_transport->flags = 0;
+	init_waitqueue_head(&dtl_transport->connected_paths_change);
+
+	dtl_transport->rbuf.base = (void *)__get_free_page(GFP_KERNEL);
+	dtl_transport->rbuf.pos = dtl_transport->rbuf.base;
+	if (!dtl_transport->rbuf.base)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static void dtl_free(struct drbd_transport *transport, enum drbd_tr_free_op free_op)
+{
+	struct dtl_transport *dtl_transport =
+		container_of(transport, struct dtl_transport, transport);
+	struct drbd_path *drbd_path;
+	/* free the socket specific stuff, mutexes are handled by caller */
+
+	dtl_set_active(transport, false);
+	list_for_each_entry(drbd_path, &transport->paths, list) {
+		bool was_established = test_and_clear_bit(TR_ESTABLISHED, &drbd_path->flags);
+
+		if (free_op == CLOSE_CONNECTION && was_established)
+			drbd_path_event(transport, drbd_path);
+	}
+
+	timer_delete_sync(&dtl_transport->control_timer);
+	cancel_delayed_work_sync(&dtl_transport->connect_work);
+
+	if (free_op == DESTROY_TRANSPORT) {
+		free_page((unsigned long)dtl_transport->rbuf.base);
+		dtl_transport->rbuf.base = NULL;
+	}
+}
+
+static int _dtl_send(struct dtl_transport *dtl_transport, struct dtl_flow *flow,
+		      void *buf, size_t size, unsigned int msg_flags)
+{
+	struct socket *sock = flow->sock;
+	struct kvec iov;
+	struct msghdr msg;
+	int rv, sent = 0;
+
+	/* THINK  if (signal_pending) return ... ? */
+
+	iov.iov_base = buf;
+	iov.iov_len  = size;
+
+	msg.msg_name       = NULL;
+	msg.msg_namelen    = 0;
+	msg.msg_control    = NULL;
+	msg.msg_controllen = 0;
+	msg.msg_flags      = msg_flags | MSG_NOSIGNAL;
+
+	do {
+		rv = kernel_sendmsg(sock, &msg, &iov, 1, iov.iov_len);
+		if (rv == -EAGAIN) {
+			struct drbd_transport *transport = &dtl_transport->transport;
+
+			if (drbd_stream_send_timed_out(transport, flow->stream_nr))
+				break;
+			continue;
+		}
+		if (rv == -EINTR) {
+			flush_signals(current);
+			rv = 0;
+		}
+		if (rv < 0)
+			break;
+		sent += rv;
+		iov.iov_base += rv;
+		iov.iov_len  -= rv;
+	} while (sent < size);
+
+	if (rv <= 0)
+		return rv;
+
+	return sent;
+}
+
+static int dtl_recv_short(struct socket *sock, void *buf, size_t size, int flags)
+{
+	struct kvec iov = {
+		.iov_base = buf,
+		.iov_len = size,
+	};
+	struct msghdr msg = {
+		.msg_flags = (flags ? flags : MSG_WAITALL | MSG_NOSIGNAL)
+	};
+
+	return kernel_recvmsg(sock, &msg, &iov, 1, size, msg.msg_flags);
+}
+
+static void dtl_data_ready(struct sock *sk)
+{
+	struct dtl_flow *flow = sk->sk_user_data;
+	struct dtl_path *path = container_of(flow, struct dtl_path, flow[flow->stream_nr]);
+	struct dtl_transport *dtl_transport =
+		container_of(path->path.transport, struct dtl_transport, transport);
+
+	wake_up(&dtl_transport->data_ready);
+
+	flow->original_sk_data_ready(sk);
+}
+
+static int dtl_wait_data_cond(struct dtl_transport *dtl_transport,
+			      enum drbd_stream st, struct dtl_flow **rh_fl)
+{
+	struct drbd_transport *transport = &dtl_transport->transport;
+	struct dtl_stream *stream = &dtl_transport->streams[st];
+	struct drbd_path *drbd_path;
+	struct dtl_flow *flow;
+	struct tcp_sock *tp;
+	struct sock *sk;
+	int err = -ENOTCONN;
+
+	for_each_path_ref(drbd_path, transport) {
+		struct dtl_path *path = container_of(drbd_path, struct dtl_path, path);
+
+		if (!test_bit(TR_ESTABLISHED, &drbd_path->flags))
+			continue;
+		flow = &path->flow[st];
+		if (!flow->sock)
+			continue;
+		sk = flow->sock->sk;
+		tp = tcp_sk(sk);
+		if (sk->sk_state != TCP_ESTABLISHED)
+			continue;
+		if (flow->recv_sequence == stream->recv_sequence + 1)
+			goto found;
+		err = -EAGAIN;
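+		/* is at least one complete dtl_header queued on this socket? */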
+		if (READ_ONCE(tp->rcv_nxt) - READ_ONCE(tp->copied_seq) < sizeof(struct dtl_header))
+			continue;
+		if (flow->recv_bytes)
+			continue;
+
+		*rh_fl = flow;
+		err = -EBFONT; /* Abusing strange errno to activate outer loop */
+		kref_put(&drbd_path->kref, drbd_destroy_path); /* aborting for_each_path_ref */
+		goto out;
+	}
+	if (err > 0)
+		err = -EAGAIN;
+
+	goto out;
+found:
+	kref_put(&drbd_path->kref, drbd_destroy_path); /* aborted for_each_path_ref */
+	stream->recv_sequence++;
+	stream->recv_flow = flow;
+	err = 0;
+out:
+	return err;
+}
+
+static int dtl_select_recv_flow(struct dtl_transport *dtl_transport, enum drbd_stream st,
+				struct dtl_flow **flow)
+{
+	struct drbd_transport *transport = &dtl_transport->transport;
+	struct dtl_stream *stream = &dtl_transport->streams[st];
+	long rem, timeout = stream->rcvtimeo;
+	int err;
+
+	if (stream->recv_flow) {
+		if (!stream->recv_flow->sock)
+			return -ENOTCONN;
+
+		*flow = stream->recv_flow;
+		return 0;
+	}
+
+	while (true) {
+		struct dtl_header header;
+		struct dtl_flow *rh_fl;
+
+		rem = wait_event_interruptible_timeout(dtl_transport->data_ready,
+			(err = dtl_wait_data_cond(dtl_transport, st, &rh_fl)) != -EAGAIN,
+			timeout);
+		if (rem < 0)
+			return rem;
+		if (!err)
+			break;
+		if (err != -EBFONT)
+			return err;
+
+		err = dtl_recv_short(rh_fl->sock, &header, sizeof(header), 0);
+		if (err < 0)
+			return err;
+		if (err < sizeof(header)) {
+			tr_warn(transport, "short read while receiving header: got %d bytes\n", err);
+			return -EIO;
+		}
+		rh_fl->recv_sequence = be32_to_cpu(header.sequence);
+		rh_fl->recv_bytes = be32_to_cpu(header.bytes);
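+		/*
+		 * Deliver only if this chunk is next in the stream-wide order;
+		 * otherwise remember its header and keep waiting for the path
+		 * that carries the next sequence number.
+		 */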
+		if (rh_fl->recv_sequence == stream->recv_sequence + 1) {
+			stream->recv_sequence++;
+			stream->recv_flow = rh_fl;
+			break;
+		}
+	}
+
+	*flow = stream->recv_flow;
+	return 0;
+}
+
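+/*
+ * Account consumed payload. Once a chunk is fully consumed, the stream may
+ * continue on whichever path carries the next sequence number.
+ */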
+static void dtl_received(struct dtl_transport *dtl_transport, struct dtl_flow *flow, int size)
+{
+	if (test_bit(DTL_LOAD_BALANCE, &dtl_transport->flags)) {
+		flow->recv_bytes -= size;
+		if (flow->recv_bytes == 0)
+			dtl_transport->streams[flow->stream_nr].recv_flow = NULL;
+	}
+}
+
+static int
+dtl_recv(struct drbd_transport *transport, enum drbd_stream st, void **buf, size_t size, int flags)
+{
+	struct dtl_transport *dtl_transport =
+		container_of(transport, struct dtl_transport, transport);
+	struct dtl_flow *flow;
+	void *buffer;
+	int err;
+
+	err = dtl_select_recv_flow(dtl_transport, st, &flow);
+	if (err)
+		return err;
+
+	if (flags & CALLER_BUFFER) {
+		buffer = *buf;
+		err = dtl_recv_short(flow->sock, buffer, size, flags & ~CALLER_BUFFER);
+	} else if (flags & GROW_BUFFER) {
+		TR_ASSERT(transport, *buf == dtl_transport->rbuf.base);
+		buffer = dtl_transport->rbuf.pos;
+		TR_ASSERT(transport, (buffer - *buf) + size <= PAGE_SIZE);
+
+		err = dtl_recv_short(flow->sock, buffer, size, flags & ~GROW_BUFFER);
+	} else {
+		buffer = dtl_transport->rbuf.base;
+
+		err = dtl_recv_short(flow->sock, buffer, size, flags);
+		if (err > 0)
+			*buf = buffer;
+	}
+
+	if (err > 0) {
+		dtl_received(dtl_transport, flow, err);
+		dtl_transport->rbuf.pos = buffer + err;
+	}
+
+	return err;
+}
+
+static int
+_dtl_recv_page(struct dtl_transport *dtl_transport, struct page *page, int size)
+{
+	void *data = kmap_local_page(page);
+	void *pos = data;
+	struct dtl_flow *flow;
+	int err;
+
+	while (size) {
+		err = dtl_select_recv_flow(dtl_transport, DATA_STREAM, &flow);
+		if (err)
+			goto out;
+
+		err = dtl_recv_short(flow->sock, pos, min(size, flow->recv_bytes), 0);
+		if (err < 0)
+			goto out;
+		size -= err;
+		pos += err;
+		dtl_received(dtl_transport, flow, err);
+	}
+	err = pos - data;
+out:
+	kunmap_local(data);
+	return err;
+}
+
+static int
+dtl_recv_pages(struct drbd_transport *transport, struct drbd_page_chain_head *chain, size_t size)
+{
+	struct dtl_transport *dtl_transport =
+		container_of(transport, struct dtl_transport, transport);
+	struct page *page;
+	int err;
+
+	drbd_alloc_page_chain(transport, chain, DIV_ROUND_UP(size, PAGE_SIZE), GFP_TRY);
+	page = chain->head;
+	if (!page)
+		return -ENOMEM;
+
+	page_chain_for_each(page) {
+		size_t len = min_t(int, size, PAGE_SIZE);
+
+		err = _dtl_recv_page(dtl_transport, page, len);
+		if (err < 0)
+			goto fail;
+		set_page_chain_offset(page, 0);
+		set_page_chain_size(page, len);
+		size -= err;
+	}
+	if (unlikely(size)) {
+		tr_warn(transport, "Not enough data received; missing %zu bytes\n", size);
+		err = -ENODATA;
+		goto fail;
+	}
+	return 0;
+fail:
+	drbd_free_page_chain(transport, chain);
+	return err;
+}
+
+static void dtl_stats(struct drbd_transport *transport, struct drbd_transport_stats *stats)
+{
+	struct drbd_transport_stats s = {};
+	struct drbd_path *drbd_path;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(drbd_path, &transport->paths, list) {
+		struct dtl_path *path = container_of(drbd_path, struct dtl_path, path);
+		struct dtl_flow *flow = &path->flow[DATA_STREAM];
+
+		if (flow->sock) {
+			struct sock *sk = flow->sock->sk;
+			struct tcp_sock *tp = tcp_sk(sk);
+
+			s.unread_received += tp->rcv_nxt - tp->copied_seq;
+			s.unacked_send += tp->write_seq - tp->snd_una;
+			s.send_buffer_size += sk->sk_sndbuf;
+			s.send_buffer_used += sk->sk_wmem_queued;
+		}
+	}
+	rcu_read_unlock();
+
+	*stats = s;
+}
+
+static void dtl_setbufsize(struct socket *sock, unsigned int snd, unsigned int rcv)
+{
+	struct sock *sk = sock->sk;
+
+	/* open coded SO_SNDBUF, SO_RCVBUF */
+	if (snd) {
+		sk->sk_sndbuf = snd;
+		sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
+		/* Wake up sending tasks if we upped the value. */
+		sk->sk_write_space(sk);
+	} else {
+		sk->sk_userlocks &= ~SOCK_SNDBUF_LOCK;
+	}
+
+	if (rcv) {
+		sk->sk_rcvbuf = rcv;
+		sk->sk_userlocks |= SOCK_RCVBUF_LOCK;
+	} else {
+		sk->sk_userlocks &= ~SOCK_RCVBUF_LOCK;
+	}
+}
+
+static bool dtl_path_cmp_addr(struct dtl_path *path)
+{
+	struct drbd_path *drbd_path = &path->path;
+	int addr_size;
+
+	addr_size = min(drbd_path->my_addr_len, drbd_path->peer_addr_len);
+	return memcmp(&drbd_path->my_addr, &drbd_path->peer_addr, addr_size) > 0;
+}
+
+static int
+dtl_try_connect(struct drbd_transport *transport, struct dtl_path *path, struct socket **ret_sock)
+{
+	const char *what;
+	struct socket *sock;
+	struct sockaddr_storage my_addr, peer_addr;
+	struct net_conf *nc;
+	int err;
+	int sndbuf_size, rcvbuf_size, connect_int;
+
+	rcu_read_lock();
+	nc = rcu_dereference(transport->net_conf);
+	if (!nc) {
+		rcu_read_unlock();
+		return -EIO;
+	}
+	sndbuf_size = nc->sndbuf_size;
+	rcvbuf_size = nc->rcvbuf_size;
+	connect_int = nc->connect_int;
+	rcu_read_unlock();
+
+	my_addr = path->path.my_addr;
+	if (my_addr.ss_family == AF_INET6)
+		((struct sockaddr_in6 *)&my_addr)->sin6_port = 0;
+	else
+		((struct sockaddr_in *)&my_addr)->sin_port = 0; /* AF_INET & AF_SCI */
+
+	/* The network stack might change peer_addr.ss_family, so use a copy here. */
+	peer_addr = path->path.peer_addr;
+
+	what = "sock_create_kern";
+	err = sock_create_kern(path->path.net, my_addr.ss_family, SOCK_STREAM, IPPROTO_TCP,
+			       &sock);
+	if (err < 0) {
+		sock = NULL;
+		goto out;
+	}
+
+	sock->sk->sk_rcvtimeo =
+	sock->sk->sk_sndtimeo = connect_int * HZ;
+	dtl_setbufsize(sock, sndbuf_size, rcvbuf_size);
+
+	/* explicitly bind to the configured IP as source IP
+	 * for the outgoing connections.
+	 * This is needed for multihomed hosts and to be
+	 * able to use lo: interfaces for drbd.
+	 * Make sure to use 0 as port number, so linux selects
+	 * a free one dynamically.
+	 */
+	what = "bind before connect";
+	err = sock->ops->bind(sock, (struct sockaddr_unsized *) &my_addr, path->path.my_addr_len);
+	if (err < 0)
+		goto out;
+
+	/* connect may fail, peer not yet available. stay C_CONNECTING */
+	what = "connect";
+	err = sock->ops->connect(sock, (struct sockaddr_unsized *) &peer_addr,
+				   path->path.peer_addr_len, 0);
+	if (err < 0) {
+		switch (err) {
+		case -ETIMEDOUT:
+		case -EINPROGRESS:
+		case -EINTR:
+		case -ERESTARTSYS:
+		case -ECONNREFUSED:
+		case -ECONNRESET:
+		case -ENETUNREACH:
+		case -EHOSTDOWN:
+		case -EHOSTUNREACH:
+			err = -EAGAIN;
+			break;
+		case -EINVAL:
+			err = -EADDRNOTAVAIL;
+			break;
+		}
+	}
+
+out:
+	if (err < 0) {
+		if (sock)
+			sock_release(sock);
+		if (err != -EAGAIN && err != -EADDRNOTAVAIL)
+			tr_err(transport, "%s failed, err = %d\n", what, err);
+	} else {
+		*ret_sock = sock;
+	}
+
+	return err;
+}
+
+static int dtl_send_first_packet(struct dtl_transport *dtl_transport,
+				 struct dtl_flow *flow, enum drbd_packet cmd)
+{
+	struct p_header80 h;
+	int msg_flags = 0;
+	int err;
+
+	if (!flow->sock)
+		return -EIO;
+
+	if (test_bit(DTL_LOAD_BALANCE, &dtl_transport->flags)) {
+		struct dtl_header hdr = { .sequence = 0, .bytes = cpu_to_be32(sizeof(h)) };
+
+		err = _dtl_send(dtl_transport, flow, &hdr, sizeof(hdr), msg_flags | MSG_MORE);
+		if (err < 0)
+			return err;
+	}
+
+	h.magic = cpu_to_be32(DRBD_MAGIC);
+	h.command = cpu_to_be16(cmd);
+	h.length = 0;
+
+	err = _dtl_send(dtl_transport, flow, &h, sizeof(h), msg_flags);
+
+	return err;
+}
+
+/**
+ * dtl_socket_free() - Free the socket
+ * @transport:	DRBD transport.
+ * @sock:	pointer to the pointer to the socket.
+ */
+static void dtl_socket_free(struct drbd_transport *transport, struct socket **sock)
+{
+	struct socket *s = xchg(sock, NULL);
+
+	if (!s)
+		return;
+
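+	/* wait for RCU readers that may still hold the old socket pointer */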
+	synchronize_rcu();
+	kernel_sock_shutdown(s, SHUT_RDWR);
+	sock_release(s);
+}
+
+/**
+ * dtl_socket_ok_or_free() - Free the socket if its connection is not okay
+ * @transport:	DRBD transport.
+ * @sock:	pointer to the pointer to the socket.
+ */
+static bool dtl_socket_ok_or_free(struct drbd_transport *transport, struct socket **sock)
+{
+	struct socket *s;
+	bool rv;
+
+	rcu_read_lock();
+	s = rcu_dereference(*sock);
+	rv = s && s->sk->sk_state == TCP_ESTABLISHED;
+	rcu_read_unlock();
+
+	if (s && !rv)
+		dtl_socket_free(transport, sock);
+
+	return rv;
+}
+
+static bool _dtl_path_established(struct drbd_transport *transport, struct dtl_path *path)
+{
+	return	dtl_socket_ok_or_free(transport, &path->flow[DATA_STREAM].sock) &&
+		dtl_socket_ok_or_free(transport, &path->flow[CONTROL_STREAM].sock);
+}
+
+static bool dtl_deactivate_other_paths(struct dtl_path *path)
+{
+	struct drbd_transport *transport = path->path.transport;
+	struct dtl_transport *dtl_transport =
+		container_of(transport, struct dtl_transport, transport);
+	bool active = test_and_clear_bit(DTL_CONNECTING, &dtl_transport->flags);
+	struct drbd_path *drbd_path;
+
+	if (active) {
+		for_each_path_ref(drbd_path, transport)
+			dtl_path_adjust_listener(path, false);
+	}
+
+	return active;
+}
+
+static bool dtl_path_established(struct drbd_transport *transport, struct dtl_path *path)
+{
+	struct dtl_transport *dtl_transport =
+		container_of(transport, struct dtl_transport, transport);
+	bool lb = test_bit(DTL_LOAD_BALANCE, &dtl_transport->flags);
+	struct drbd_path *drbd_path = &path->path;
+	struct net_conf *nc;
+	enum drbd_stream i;
+	bool established;
+	int timeout;
+
+	timeout = HZ;
+	rcu_read_lock();
+	nc = rcu_dereference(transport->net_conf);
+	if (nc)
+		timeout = (nc->sock_check_timeo ?: nc->ping_timeo) * HZ / 10;
+	rcu_read_unlock();
+	schedule_timeout_interruptible(timeout);
+
+	established = _dtl_path_established(transport, path);
+
+	if (established && !lb) {
+		established = dtl_deactivate_other_paths(path);
+
+		if (!established) {
+			dtl_socket_free(transport, &path->flow[DATA_STREAM].sock);
+			dtl_socket_free(transport, &path->flow[CONTROL_STREAM].sock);
+		}
+	}
+
+	if (!established) {
+		if (test_and_clear_bit(TR_ESTABLISHED, &drbd_path->flags)) {
+			dtl_transport->connected_paths--;
+			drbd_path_event(transport, drbd_path);
+		}
+	} else if (!test_and_set_bit(TR_ESTABLISHED, &drbd_path->flags)) {
+		dtl_transport->connected_paths++;
+
+		for (i = DATA_STREAM; i <= CONTROL_STREAM; i++) {
+			if (lb) {
+				path->flow[i].recv_sequence = 0;
+				path->flow[i].recv_bytes = 0;
+			} else {
+				path->flow[i].recv_sequence = 1;
+				path->flow[i].recv_bytes = INT_MAX;
+				dtl_transport->streams[i].recv_flow = &path->flow[i];
+			}
+		}
+		wake_up(&dtl_transport->data_ready);
+		drbd_put_listener(drbd_path);
+		dtl_set_socket_callbacks(dtl_transport, &path->flow[DATA_STREAM]);
+		dtl_set_socket_callbacks(dtl_transport, &path->flow[CONTROL_STREAM]);
+		drbd_path_event(transport, drbd_path);
+	}
+
+	return established;
+}
+
+static void unregister_state_change(struct sock *sk, struct dtl_listener *listener)
+{
+	write_lock_bh(&sk->sk_callback_lock);
+	sk->sk_state_change = listener->original_sk_state_change;
+	sk->sk_user_data = NULL;
+	write_unlock_bh(&sk->sk_callback_lock);
+}
+
+static int dtl_receive_first_packet(struct dtl_transport *dtl_transport, struct dtl_path *path,
+				    struct socket *sock)
+{
+	struct drbd_transport *transport = &dtl_transport->transport;
+	struct p_header80 header;
+	struct net_conf *nc;
+	int err;
+
+	rcu_read_lock();
+	nc = rcu_dereference(transport->net_conf);
+	if (!nc) {
+		rcu_read_unlock();
+		return -EIO;
+	}
+	sock->sk->sk_rcvtimeo = nc->ping_timeo * 4 * HZ / 10;
+	rcu_read_unlock();
+
+	if (test_bit(DTL_LOAD_BALANCE, &dtl_transport->flags)) {
+		struct dtl_header hdr;
+
+		err = dtl_recv_short(sock, &hdr, sizeof(hdr), 0);
+		if (err != sizeof(hdr)) {
+			if (err >= 0)
+				err = -EIO;
+			return err;
+		}
+	}
+	err = dtl_recv_short(sock, &header, sizeof(header), 0);
+	if (err != sizeof(header)) {
+		if (err >= 0)
+			err = -EIO;
+		return err;
+	}
+	if (header.magic != cpu_to_be32(DRBD_MAGIC)) {
+		tr_err(transport, "Wrong magic value 0x%08x in receive_first_packet\n",
+			 be32_to_cpu(header.magic));
+		return -EINVAL;
+	}
+	return be16_to_cpu(header.command);
+}
+
+static struct dtl_flow *dtl_control_next_flow_in_seq(struct dtl_transport *dtl_transport)
+{
+	struct dtl_stream *stream = &dtl_transport->streams[CONTROL_STREAM];
+	struct drbd_transport *transport = &dtl_transport->transport;
+	struct drbd_path *drbd_path;
+	struct dtl_flow *flow;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(drbd_path, &transport->paths, list) {
+		struct dtl_path *path = container_of(drbd_path, struct dtl_path, path);
+
+		flow = &path->flow[CONTROL_STREAM];
+		if (flow->sock &&
+		    flow->recv_sequence == stream->recv_sequence + 1 && flow->recv_bytes > 0) {
+			struct sock *sk = flow->sock->sk;
+			struct tcp_sock *tp = tcp_sk(sk);
+
+			if (READ_ONCE(tp->rcv_nxt) - READ_ONCE(tp->copied_seq))
+				goto found;
+		}
+	}
+	flow = NULL;
+found:
+	rcu_read_unlock();
+	return flow;
+}
+
+static int dtl_control_tcp_input(read_descriptor_t *rd_desc, struct sk_buff *skb,
+				 unsigned int offset, size_t len)
+{
+	struct dtl_flow *flow = rd_desc->arg.data;
+	struct dtl_path *path = container_of(flow, struct dtl_path, flow[flow->stream_nr]);
+	struct dtl_transport *dtl_transport =
+		container_of(path->path.transport, struct dtl_transport, transport);
+	struct dtl_stream *stream = &dtl_transport->streams[CONTROL_STREAM];
+	struct drbd_transport *transport = &dtl_transport->transport;
+	int overall_avail, avail, consumed = 0;
+	struct drbd_const_buffer buffer;
+	struct skb_seq_state seq;
+
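+	/* mid-chunk but not next in sequence: leave the data queued for now */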
+	if (flow->recv_bytes &&
+	    flow->recv_sequence != stream->recv_sequence + 1)
+		return 0;
+
+	skb_prepare_seq_read(skb, offset, skb->len, &seq);
+	do {
+		/*
+		 * skb_seq_read() returns the length of the block assigned to buffer. This might
+		 * be more than is actually ready, so we ensure we only mark as available what
+		 * is ready.
+		 */
+		overall_avail = skb_seq_read(consumed, &buffer.buffer, &seq);
+		if (!overall_avail)
+			break;
+		avail = min_t(int, overall_avail, len - consumed);
+		while (avail) {
+			if (flow->recv_bytes == 0) {
+				const struct dtl_header *hdr = (struct dtl_header *)buffer.buffer;
+				int size = sizeof(struct dtl_header);
+				bool hdr_frag = flow->control_reassemble.avail || avail < size;
+
+				if (hdr_frag) {
+					int cra = flow->control_reassemble.avail;
+
+					size = min(size - cra, avail);
+					memcpy(flow->control_reassemble.bytes + cra, hdr, size);
+					flow->control_reassemble.avail += size;
+				}
+				consumed += size;
+				avail -= size;
+				buffer.buffer += size;
+				if (hdr_frag) {
+					if (flow->control_reassemble.avail < sizeof(*hdr))
+						continue;
+					hdr = &flow->control_reassemble.header;
+					flow->control_reassemble.avail = 0;
+				}
+
+				flow->recv_sequence = be32_to_cpu(hdr->sequence);
+				flow->recv_bytes = be32_to_cpu(hdr->bytes);
+				if (flow->recv_sequence != stream->recv_sequence + 1)
+					goto out;
+			}
+			buffer.avail = min(flow->recv_bytes, avail);
+			if (!buffer.avail)
+				continue;
+			consumed += buffer.avail;
+			avail -= buffer.avail;
+			if (test_bit(DTL_LOAD_BALANCE, &dtl_transport->flags))
+				flow->recv_bytes -= buffer.avail;
+			drbd_control_data_ready(transport, &buffer);
+			if (flow->recv_bytes == 0)
+				stream->recv_sequence++;
+		}
+	} while (consumed < len);
+out:
+	skb_abort_seq_read(&seq);
+	return consumed;
+}
+
+static void dtl_control_data_ready(struct sock *sk)
+{
+	struct dtl_flow *flow = sk->sk_user_data;
+	struct dtl_path *path = container_of(flow, struct dtl_path, flow[flow->stream_nr]);
+	struct dtl_transport *dtl_transport =
+		container_of(path->path.transport, struct dtl_transport, transport);
+
+	read_descriptor_t rd_desc = {
+		.count = 1,
+		.arg = { .data = flow },
+	};
+	mod_timer(&dtl_transport->control_timer, jiffies + sk->sk_rcvtimeo);
+
+	spin_lock_bh(&dtl_transport->control_recv_lock);
+	tcp_read_sock(sk, &rd_desc, dtl_control_tcp_input);
+
+	/* in case another flow became the next in sequence */
+	while ((flow = dtl_control_next_flow_in_seq(dtl_transport))) {
+		sk = flow->sock->sk;
+		rd_desc.arg.data = flow;
+		tcp_read_sock(sk, &rd_desc, dtl_control_tcp_input);
+	}
+	spin_unlock_bh(&dtl_transport->control_recv_lock);
+}
+
+static void dtl_control_state_change(struct sock *sk)
+{
+	struct dtl_flow *flow = sk->sk_user_data;
+	struct dtl_path *path = container_of(flow, struct dtl_path, flow[flow->stream_nr]);
+	struct dtl_transport *dtl_transport =
+		container_of(path->path.transport, struct dtl_transport, transport);
+	struct drbd_transport *transport = &dtl_transport->transport;
+
+	switch (sk->sk_state) {
+	case TCP_FIN_WAIT1:
+	case TCP_CLOSE_WAIT:
+	case TCP_CLOSE:
+	case TCP_LAST_ACK:
+	case TCP_CLOSING:
+		drbd_control_event(transport, CLOSED_BY_PEER);
+		break;
+	default:
+		tr_warn(transport, "unhandled state %d\n", sk->sk_state);
+	}
+
+	flow->original_sk_state_change(sk);
+}
+
+static void dtl_incoming_connection(struct sock *sk)
+{
+	struct dtl_listener *listener = sk->sk_user_data;
+	void (*state_change)(struct sock *sk);
+
+	state_change = listener->original_sk_state_change;
+	state_change(sk);
+
+	spin_lock(&listener->listener.waiters_lock);
+	listener->listener.pending_accepts++;
+	spin_unlock(&listener->listener.waiters_lock);
+	kref_get(&listener->listener.kref);
+	if (!schedule_work(&listener->accept_work))
+		kref_put(&listener->listener.kref, drbd_listener_destroy);
+}
+
+static void dtl_control_timer_fn(struct timer_list *t)
+{
+	struct dtl_transport *dtl_transport = timer_container_of(dtl_transport, t, control_timer);
+	struct drbd_transport *transport = &dtl_transport->transport;
+
+	drbd_control_event(transport, TIMEOUT);
+}
+
+static void dtl_destroy_listener(struct drbd_listener *generic_listener)
+{
+	struct dtl_listener *listener =
+		container_of(generic_listener, struct dtl_listener, listener);
+
+	if (!listener->s_listen)
+		return;
+	unregister_state_change(listener->s_listen->sk, listener);
+	sock_release(listener->s_listen);
+}
+
+static int dtl_init_listener(struct drbd_transport *transport,
+			     const struct sockaddr *addr,
+			     struct net *net,
+			     struct drbd_listener *drbd_listener)
+{
+	int err, sndbuf_size, rcvbuf_size, addr_len;
+	struct sockaddr_storage my_addr;
+	struct dtl_listener *listener = container_of(drbd_listener, struct dtl_listener, listener);
+	struct socket *s_listen;
+	struct net_conf *nc;
+	const char *what = "";
+
+	INIT_WORK(&listener->accept_work, dtl_accept_work_fn);
+	rcu_read_lock();
+	nc = rcu_dereference(transport->net_conf);
+	if (!nc) {
+		rcu_read_unlock();
+		return -EINVAL;
+	}
+	sndbuf_size = nc->sndbuf_size;
+	rcvbuf_size = nc->rcvbuf_size;
+	rcu_read_unlock();
+
+	my_addr = *(struct sockaddr_storage *)addr;
+
+	err = sock_create_kern(net, my_addr.ss_family, SOCK_STREAM, IPPROTO_TCP, &s_listen);
+	if (err < 0) {
+		s_listen = NULL;
+		what = "sock_create_kern";
+		goto out;
+	}
+
+	s_listen->sk->sk_reuse = SK_CAN_REUSE; /* SO_REUSEADDR */
+	dtl_setbufsize(s_listen, sndbuf_size, rcvbuf_size);
+
+	addr_len = addr->sa_family == AF_INET6 ? sizeof(struct sockaddr_in6)
+		: sizeof(struct sockaddr_in);
+
+	err = s_listen->ops->bind(s_listen, (struct sockaddr_unsized *)&my_addr, addr_len);
+	if (err < 0) {
+		what = "bind before listen";
+		goto out;
+	}
+
+	listener->s_listen = s_listen;
+	write_lock_bh(&s_listen->sk->sk_callback_lock);
+	listener->original_sk_state_change = s_listen->sk->sk_state_change;
+	s_listen->sk->sk_state_change = dtl_incoming_connection;
+	s_listen->sk->sk_user_data = listener;
+	write_unlock_bh(&s_listen->sk->sk_callback_lock);
+
+	err = s_listen->ops->listen(s_listen, DRBD_PEERS_MAX * 2);
+	if (err < 0) {
+		what = "listen";
+		goto out;
+	}
+
+	listener->listener.listen_addr = my_addr;
+
+	return 0;
+out:
+	if (s_listen)
+		sock_release(s_listen);
+
+	if (err < 0 &&
+	    err != -EAGAIN && err != -EINTR && err != -ERESTARTSYS && err != -EADDRINUSE &&
+	    err != -EADDRNOTAVAIL)
+		tr_err(transport, "%s failed, err = %d\n", what, err);
+
+	return err;
+}
+
+static void dtl_setup_socket(struct dtl_transport *dtl_transport, struct socket *sock,
+			     struct dtl_flow *flow)
+{
+	struct drbd_transport *transport = &dtl_transport->transport;
+	bool use_for_data = flow->stream_nr == DATA_STREAM;
+	struct net_conf *nc;
+	long timeout = HZ;
+
+	sock->sk->sk_reuse = SK_CAN_REUSE; /* SO_REUSEADDR */
+	/* We are a block device, we are in the write-out path,
+	 * we may need memory to facilitate memory reclaim
+	 */
+	sock->sk->sk_use_task_frag = false;
+	sock->sk->sk_allocation = GFP_ATOMIC;
+	sk_set_memalloc(sock->sk);
+
+	sock->sk->sk_priority = use_for_data ? TC_PRIO_INTERACTIVE_BULK : TC_PRIO_INTERACTIVE;
+	tcp_sock_set_nodelay(sock->sk);
+
+	rcu_read_lock();
+	nc = rcu_dereference(transport->net_conf);
+	if (nc)
+		timeout = nc->timeout * HZ / 10;
+	rcu_read_unlock();
+
+	sock->sk->sk_sndtimeo = timeout;
+	sock_set_keepalive(sock->sk);
+
+	if (use_for_data) {
+		if (drbd_keepidle)
+			tcp_sock_set_keepidle(sock->sk, drbd_keepidle);
+		if (drbd_keepcnt)
+			tcp_sock_set_keepcnt(sock->sk, drbd_keepcnt);
+		if (drbd_keepintvl)
+			tcp_sock_set_keepintvl(sock->sk, drbd_keepintvl);
+	}
+	flow->sock = sock;
+}
+
+static void dtl_set_socket_callbacks(struct dtl_transport *dtl_transport, struct dtl_flow *flow)
+{
+	bool use_for_data = flow->stream_nr == DATA_STREAM;
+	struct socket *sock = flow->sock;
+
+	write_lock_bh(&sock->sk->sk_callback_lock);
+	if (sock->sk->sk_data_ready != dtl_data_ready &&
+	    sock->sk->sk_data_ready != dtl_control_data_ready) {
+		sock->sk->sk_user_data = flow;
+		flow->original_sk_data_ready = sock->sk->sk_data_ready;
+		if (use_for_data) {
+			flow->original_sk_write_space = sock->sk->sk_write_space;
+			sock->sk->sk_data_ready = dtl_data_ready;
+			sock->sk->sk_write_space = dtl_write_space;
+		} else {
+			flow->original_sk_state_change = sock->sk->sk_state_change;
+			sock->sk->sk_data_ready = dtl_control_data_ready;
+			sock->sk->sk_state_change = dtl_control_state_change;
+		}
+	}
+	write_unlock_bh(&sock->sk->sk_callback_lock);
+}
+
+static void dtl_do_first_packet(struct dtl_transport *dtl_transport, struct dtl_path *path,
+				struct socket *s)
+{
+	struct drbd_transport *transport = &dtl_transport->transport;
+	int fp;
+
+	fp = dtl_receive_first_packet(dtl_transport, path, s);
+
+	dtl_socket_ok_or_free(transport, &path->flow[DATA_STREAM].sock);
+	dtl_socket_ok_or_free(transport, &path->flow[CONTROL_STREAM].sock);
+
+	switch (fp) {
+	case P_INITIAL_DATA:
+		if (path->flow[DATA_STREAM].sock)
+			tr_warn(transport, "initial packet S crossed\n");
+		dtl_socket_free(transport, &path->flow[DATA_STREAM].sock);
+		dtl_setup_socket(dtl_transport, s, &path->flow[DATA_STREAM]);
+		break;
+	case P_INITIAL_META:
+		if (path->flow[CONTROL_STREAM].sock)
+			tr_warn(transport, "initial packet M crossed\n");
+		dtl_socket_free(transport, &path->flow[CONTROL_STREAM].sock);
+		dtl_setup_socket(dtl_transport, s, &path->flow[CONTROL_STREAM]);
+		break;
+	default:
+		tr_warn(transport, "Error receiving initial packet. err = %d\n", fp);
+		kernel_sock_shutdown(s, SHUT_RDWR);
+		sock_release(s);
+		return;
+	}
+
+	if (dtl_path_established(transport, path)) {
+		if (dtl_transport->connected_paths == 1 && fp == P_INITIAL_META)
+			set_bit(RESOLVE_CONFLICTS, &transport->flags);
+	} else {
+		/* successful accept, not yet both -> speed up next connect attempt */
+		if (test_bit(DTL_CONNECTING, &dtl_transport->flags))
+			mod_delayed_work(system_wq, &dtl_transport->connect_work, 1);
+	}
+
+	if (!dtl_transport->err && fp < 0)
+		dtl_transport->err = fp;
+
+	wake_up_all(&dtl_transport->connected_paths_change);
+}
+
+static void dtl_accept_work_fn(struct work_struct *work)
+{
+	struct dtl_listener *listener = container_of(work, struct dtl_listener, accept_work);
+	struct dtl_transport *dtl_transport;
+	struct drbd_path *drbd_path;
+	struct dtl_path *path;
+	struct socket *s;
+	int err, tries = 5;
+
+	while (listener->listener.pending_accepts && tries > 0) {
+		struct sockaddr_storage peer_addr;
+
+		s = NULL;
+		err = kernel_accept(listener->s_listen, &s, O_NONBLOCK);
+
+		tries--;
+		if (err || !s)
+			continue;
+
+		unregister_state_change(s->sk, listener);
+		s->ops->getname(s, (struct sockaddr *)&peer_addr, 2);
+
+		spin_lock_bh(&listener->listener.waiters_lock);
+		listener->listener.pending_accepts--;
+		drbd_path = drbd_find_path_by_addr(&listener->listener, &peer_addr);
+		if (drbd_path)
+			kref_get(&drbd_path->kref);
+		spin_unlock_bh(&listener->listener.waiters_lock);
+
+		if (!drbd_path) {
+			switch (peer_addr.ss_family) {
+				struct sockaddr_in6 *from_sin6;
+				struct sockaddr_in *from_sin;
+
+			case AF_INET6:
+				from_sin6 = (struct sockaddr_in6 *)&peer_addr;
+				pr_notice("drbd: Closing unexpected connection from %pI6\n",
+					  &from_sin6->sin6_addr);
+				break;
+			default:
+				from_sin = (struct sockaddr_in *)&peer_addr;
+				pr_notice("drbd: Closing unexpected connection from %pI4\n",
+					  &from_sin->sin_addr);
+				break;
+			}
+			kernel_sock_shutdown(s, SHUT_RDWR);
+			sock_release(s);
+			continue;
+		}
+
+		path = container_of(drbd_path, struct dtl_path, path);
+		dtl_transport = container_of(path->path.transport, struct dtl_transport, transport);
+
+		/* Do not add sockets to a path after DTL_CONNECTING was cleared! */
+		if (test_bit(DTL_CONNECTING, &dtl_transport->flags)) {
+			dtl_do_first_packet(dtl_transport, path, s);
+		} else {
+			kernel_sock_shutdown(s, SHUT_RDWR);
+			sock_release(s);
+		}
+		kref_put(&drbd_path->kref, drbd_destroy_path);
+	}
+	kref_put(&listener->listener.kref, drbd_listener_destroy);
+}
+
+static void dtl_connect_work_fn(struct work_struct *work)
+{
+	struct dtl_transport *dtl_transport =
+		container_of(work, struct dtl_transport, connect_work.work);
+	struct drbd_transport *transport = &dtl_transport->transport;
+	int connected_paths = dtl_transport->connected_paths;
+	int err, nr_paths = 0, to_connect = 0, err_ret = 0;
+	struct drbd_path *drbd_path;
+
+	for_each_path_ref(drbd_path, transport) {
+		struct dtl_path *path = container_of(drbd_path, struct dtl_path, path);
+		struct socket *s = NULL;
+		bool use_for_data;
+
+		nr_paths++;
+		if (_dtl_path_established(transport, path))
+			continue;
+
+		to_connect++;
+		err = dtl_try_connect(transport, path, &s);
+		if (err < 0) {
+			if (err != -EAGAIN && err != -EADDRNOTAVAIL && !err_ret)
+				err_ret = err;
+			continue;
+		}
+
+		dtl_socket_ok_or_free(transport, &path->flow[DATA_STREAM].sock);
+		dtl_socket_ok_or_free(transport, &path->flow[CONTROL_STREAM].sock);
+
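+		/*
+		 * Fill the data stream first. When neither socket exists yet,
+		 * compare addresses so that both peers agree on which of the
+		 * crossing connections becomes the data stream.
+		 */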
+		if (!path->flow[DATA_STREAM].sock && !path->flow[CONTROL_STREAM].sock) {
+			use_for_data = dtl_path_cmp_addr(path);
+		} else if (!path->flow[DATA_STREAM].sock) {
+			use_for_data = true;
+		} else {
+			if (path->flow[CONTROL_STREAM].sock) {
+				tr_err(transport, "Logic error in %s\n", __func__);
+				dtl_socket_free(transport, &s);
+				continue;
+			}
+			use_for_data = false;
+		}
+
+		if (use_for_data) {
+			struct dtl_flow tmp_flow = path->flow[DATA_STREAM];
+
+			tmp_flow.sock = s;
+			err = dtl_send_first_packet(dtl_transport, &tmp_flow, P_INITIAL_DATA);
+			dtl_setup_socket(dtl_transport, s, &path->flow[DATA_STREAM]);
+
+		} else {
+			struct dtl_flow tmp_flow = path->flow[CONTROL_STREAM];
+
+			tmp_flow.sock = s;
+			err = dtl_send_first_packet(dtl_transport, &tmp_flow, P_INITIAL_META);
+			dtl_setup_socket(dtl_transport, s, &path->flow[CONTROL_STREAM]);
+		}
+
+		if (dtl_path_established(transport, path)) {
+			if (dtl_transport->connected_paths == 1 && !use_for_data)
+				clear_bit(RESOLVE_CONFLICTS, &transport->flags);
+		}
+	}
+
+	if (to_connect && test_bit(DTL_CONNECTING, &dtl_transport->flags)) {
+		struct net_conf *nc;
+		int connect_int = HZ;
+
+		rcu_read_lock();
+		nc = rcu_dereference(transport->net_conf);
+		if (nc)
+			connect_int = nc->connect_int;
+		rcu_read_unlock();
+
+		schedule_delayed_work(&dtl_transport->connect_work, connect_int * HZ);
+	}
+
+	if (nr_paths == to_connect && err_ret && !dtl_transport->err)
+		dtl_transport->err = err_ret;
+
+	if (connected_paths != dtl_transport->connected_paths || err_ret)
+		wake_up_all(&dtl_transport->connected_paths_change);
+}
+
+static int dtl_path_adjust_listener(struct dtl_path *path, bool active)
+{
+	struct drbd_path *drbd_path = &path->path;
+	struct drbd_listener *listener = READ_ONCE(drbd_path->listener);
+	int err = 0;
+
+	if (!active && listener)
+		drbd_put_listener(drbd_path);
+	else if (active && !listener)
+		err = drbd_get_listener(drbd_path);
+
+	return err;
+}
+
+static int dtl_set_active(struct drbd_transport *transport, bool active)
+{
+	struct dtl_transport *dtl_transport =
+		container_of(transport, struct dtl_transport, transport);
+	struct drbd_path *drbd_path;
+
+	if (active)
+		set_bit(DTL_CONNECTING, &dtl_transport->flags);
+	else
+		clear_bit(DTL_CONNECTING, &dtl_transport->flags);
+
+	for_each_path_ref(drbd_path, transport) {
+		struct dtl_path *path = container_of(drbd_path, struct dtl_path, path);
+		enum drbd_stream i;
+		int err;
+
+		for (i = DATA_STREAM; i <= CONTROL_STREAM; i++) {
+			if (path->flow[i].sock && path->flow[i].original_sk_state_change) {
+				write_lock_bh(&path->flow[i].sock->sk->sk_callback_lock);
+				path->flow[i].sock->sk->sk_state_change =
+					path->flow[i].original_sk_state_change;
+				write_unlock_bh(&path->flow[i].sock->sk->sk_callback_lock);
+			}
+
+			dtl_socket_free(transport, &path->flow[i].sock);
+		}
+
+		err = dtl_path_adjust_listener(path, active);
+
+		if (err) {
+			kref_put(&drbd_path->kref, drbd_destroy_path);
+			return err;
+		}
+	}
+	return 0;
+}
+
+static int dtl_prepare_connect(struct drbd_transport *transport)
+{
+	struct dtl_transport *dtl_transport =
+		container_of(transport, struct dtl_transport, transport);
+
+	dtl_transport->connected_paths = 0;
+	dtl_transport->err = 0;
+	flush_signals(current);
+	timer_delete_sync(&dtl_transport->control_timer);
+	dtl_transport->err = dtl_set_active(transport, true);
+
+	return dtl_transport->err;
+}
+
+static int dtl_connect(struct drbd_transport *transport)
+{
+	struct dtl_transport *dtl_transport =
+		container_of(transport, struct dtl_transport, transport);
+	int err;
+
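+	/* run the first connect attempt right away; retries reschedule themselves */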
+	schedule_work(&dtl_transport->connect_work.work);
+	err = wait_event_interruptible(dtl_transport->connected_paths_change,
+				       dtl_transport->connected_paths > 0);
+
+	if (err < 0)
+		dtl_transport->err = err;
+
+	return dtl_transport->err;
+}
+
+static void dtl_finish_connect(struct drbd_transport *transport)
+{
+	struct dtl_transport *dtl_transport =
+		container_of(transport, struct dtl_transport, transport);
+	bool lb = test_bit(DTL_LOAD_BALANCE, &dtl_transport->flags);
+	enum drbd_stream i;
+
+	if (dtl_transport->err) {
+		dtl_set_active(transport, false);
+		cancel_delayed_work_sync(&dtl_transport->connect_work);
+	}
+
+	for (i = DATA_STREAM; i <= CONTROL_STREAM; i++) {
+		dtl_transport->streams[i].send_sequence = 1;
+		dtl_transport->streams[i].recv_sequence = 0;
+		if (lb)
+			dtl_transport->streams[i].recv_flow = NULL;
+	}
+}
+
+static int dtl_net_conf_change(struct drbd_transport *transport, struct net_conf *new_net_conf)
+{
+	struct dtl_transport *dtl_transport =
+		container_of(transport, struct dtl_transport, transport);
+	struct drbd_path *drbd_path;
+
+	if (new_net_conf->load_balance_paths)
+		__set_bit(DTL_LOAD_BALANCE, &dtl_transport->flags);
+
+	for_each_path_ref(drbd_path, transport) {
+		struct dtl_path *path = container_of(drbd_path, struct dtl_path, path);
+		struct socket *data_sock = path->flow[DATA_STREAM].sock;
+		struct socket *control_sock = path->flow[CONTROL_STREAM].sock;
+
+		if (data_sock)
+			dtl_setbufsize(data_sock, new_net_conf->sndbuf_size,
+				       new_net_conf->rcvbuf_size);
+
+		if (control_sock)
+			dtl_setbufsize(control_sock, new_net_conf->sndbuf_size,
+				       new_net_conf->rcvbuf_size);
+	}
+
+	return 0;
+}
+
+static void dtl_set_rcvtimeo(struct drbd_transport *transport, enum drbd_stream st, long timeout)
+{
+	struct dtl_transport *dtl_transport =
+		container_of(transport, struct dtl_transport, transport);
+	struct dtl_stream *stream = &dtl_transport->streams[st];
+	struct drbd_path *drbd_path;
+
+	stream->rcvtimeo = timeout;
+	for_each_path_ref(drbd_path, transport) {
+		struct dtl_path *path = container_of(drbd_path, struct dtl_path, path);
+		struct socket *sock = path->flow[st].sock;
+
+		if (!sock)
+			continue;
+
+		sock->sk->sk_rcvtimeo = timeout;
+
+		if (st == CONTROL_STREAM)
+			mod_timer(&dtl_transport->control_timer, jiffies + timeout);
+	}
+}
+
+static long dtl_get_rcvtimeo(struct drbd_transport *transport, enum drbd_stream st)
+{
+	struct dtl_transport *dtl_transport =
+		container_of(transport, struct dtl_transport, transport);
+	struct dtl_stream *stream = &dtl_transport->streams[st];
+
+	return stream->rcvtimeo;
+}
+
+static bool dtl_stream_ok(struct drbd_transport *transport, enum drbd_stream stream)
+{
+	struct drbd_path *drbd_path;
+	bool established = false;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(drbd_path, &transport->paths, list) {
+		struct dtl_path *path = container_of(drbd_path, struct dtl_path, path);
+		struct socket *sock = path->flow[stream].sock;
+
+		established = sock && sock->sk && sock->sk->sk_state == TCP_ESTABLISHED;
+		if (established)
+			break;
+	}
+	rcu_read_unlock();
+
+	return established;
+}
+
+static void dtl_write_space(struct sock *sk)
+{
+	struct dtl_flow *flow = sk->sk_user_data;
+	struct dtl_path *path = container_of(flow, struct dtl_path, flow[flow->stream_nr]);
+	struct dtl_transport *dtl_transport =
+		container_of(path->path.transport, struct dtl_transport, transport);
+
+	flow->original_sk_write_space(sk);
+	wake_up(&dtl_transport->write_space);
+}
+
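+/*
+ * Select the flow to send on. For the data stream, prefer the established
+ * path whose socket has the least data queued (i.e. the most free send
+ * buffer space); for the control stream, use only the first established
+ * path. Returns -EAGAIN when no flow can currently accept data, or
+ * -ENOTCONN when no paths are configured.
+ */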
+static int dtl_select_send_flow_cond(struct dtl_transport *dtl_transport,
+			      enum drbd_stream st, struct dtl_flow **result)
+{
+	struct drbd_transport *transport = &dtl_transport->transport;
+	int best_wmem = INT_MAX;
+	struct drbd_path *drbd_path;
+	struct dtl_flow *best = NULL;
+	bool empty;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(drbd_path, &transport->paths, list) {
+		struct dtl_path *path = container_of(drbd_path, struct dtl_path, path);
+		struct dtl_flow *flow = &path->flow[st];
+
+		if (!test_bit(TR_ESTABLISHED, &drbd_path->flags))
+			continue;
+
+		if (flow->sock) {
+			struct sock *sk = flow->sock->sk;
+			int wmem = sk_stream_min_wspace(sk);
+			/* int wmem_queued = READ_ONCE(sk->sk_wmem_queued); */
+
+			if (st == DATA_STREAM) {
+				if (wmem < best_wmem && wmem < sk->sk_sndbuf) {
+					best = flow;
+					best_wmem = wmem;
+				}
+			} else {
+				if (wmem < sk->sk_sndbuf)
+					best = flow;
+				/* Only use first established control flow. */
+				break;
+			}
+		}
+	}
+	empty = list_empty(&transport->paths);
+	rcu_read_unlock();
+
+	if (!best) {
+		if (empty)
+			return -ENOTCONN;
+
+		set_bit(NET_CONGESTED, &dtl_transport->transport.flags);
+		return -EAGAIN;
+	}
+	clear_bit(NET_CONGESTED, &dtl_transport->transport.flags);
+
+	*result = best;
+	return 0;
+}
+
+static int dtl_select_send_flow(struct dtl_transport *dtl_transport,
+				enum drbd_stream st, struct dtl_flow **result)
+{
+	struct drbd_transport *transport = &dtl_transport->transport;
+	struct net_conf *nc;
+	long rem, timeout = HZ;
+	int err;
+
+	rcu_read_lock();
+	nc = rcu_dereference(transport->net_conf);
+	if (nc)
+		timeout = nc->timeout * HZ / 10;
+	rcu_read_unlock();
+
+	rem = wait_event_interruptible_timeout(dtl_transport->write_space,
+		(err = dtl_select_send_flow_cond(dtl_transport, st, result)) != -EAGAIN,
+		timeout);
+
+	return rem < 0 ? rem : err;
+}
+
+static int _dtl_send_page(struct dtl_transport *dtl_transport, struct dtl_flow *flow,
+			  struct page *page, int offset, size_t size, unsigned int msg_flags)
+{
+	struct msghdr msg = { .msg_flags = msg_flags | MSG_NOSIGNAL | MSG_SPLICE_PAGES };
+	struct drbd_transport *transport = &dtl_transport->transport;
+	struct socket *sock = flow->sock;
+	struct bio_vec bvec;
+	int len = size;
+	int err = -EIO;
+
+	do {
+		int sent;
+
+		bvec_set_page(&bvec, page, len, offset);
+		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, len);
+
+		sent = sock_sendmsg(sock, &msg);
+		if (sent <= 0) {
+			if (sent == -EAGAIN) {
+				if (drbd_stream_send_timed_out(transport, flow->stream_nr))
+					break;
+				continue;
+			}
+			tr_warn(transport, "%s: size=%d len=%d sent=%d\n",
+			     __func__, (int)size, len, sent);
+			if (sent < 0)
+				err = sent;
+			break;
+		}
+		len    -= sent;
+		offset += sent;
+		/* NOTE: it may take up to twice the socket timeout to have it
+		 * return -EAGAIN, the first timeout will likely happen with a
+		 * partial send, masking the timeout.  Maybe we want to export
+		 * drbd_stream_should_continue_after_partial_send(transport, stream)
+		 * and add that to the while() condition below.
+		 */
+	} while (len > 0 /* THINK && peer_device->repl_state[NOW] >= L_ESTABLISHED */);
+
+	if (len == 0)
+		err = 0;
+
+	return err;
+}
+
+static int dtl_send_page(struct drbd_transport *transport, enum drbd_stream stream,
+			 struct page *page, int offset, size_t size, unsigned int msg_flags)
+{
+	struct dtl_transport *dtl_transport =
+		container_of(transport, struct dtl_transport, transport);
+	struct dtl_header header;
+	struct dtl_flow *flow;
+	int err;
+
+	err = dtl_select_send_flow(dtl_transport, stream, &flow);
+	if (err)
+		return err;
+
+	if (test_bit(DTL_LOAD_BALANCE, &dtl_transport->flags)) {
+		header.sequence = cpu_to_be32(dtl_transport->streams[stream].send_sequence++);
+		header.bytes = cpu_to_be32(size);
+
+		err = _dtl_send(dtl_transport, flow, &header, sizeof(header), msg_flags | MSG_MORE);
+		if (err < 0)
+			goto out;
+	}
+	err = _dtl_send_page(dtl_transport, flow, page, offset, size, msg_flags);
+
+out:
+	return err;
+}
+
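+/*
+ * Count how many bytes of the bio, starting at *iter_scan, fit into
+ * wmem_available bytes of socket send buffer. Whole bvecs are counted,
+ * so the result may slightly exceed wmem_available.
+ */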
+static int dtl_bio_chunk_size_available(struct bio *bio, int wmem_available,
+		struct bvec_iter *iter_scan)
+{
+	struct bio_vec bvec;
+	int chunk = 0;
+
+	while (chunk < wmem_available && iter_scan->bi_size) {
+		bvec = bio_iter_iovec(bio, *iter_scan);
+		chunk += bvec.bv_len;
+		bio_advance_iter_single(bio, iter_scan, bvec.bv_len);
+	}
+
+	return chunk;
+}
+
+static int dtl_send_bio_pages(struct dtl_transport *dtl_transport, struct dtl_flow *flow,
+		struct bio *bio, struct bvec_iter *iter, int chunk)
+{
+	struct bio_vec bvec;
+
+	while (chunk > 0 && iter->bi_size) {
+		int err;
+
+		bvec = bio_iter_iovec(bio, *iter);
+		err = _dtl_send_page(dtl_transport, flow, bvec.bv_page,
+				bvec.bv_offset, bvec.bv_len,
+				bio_iter_last(bvec, *iter) ? 0 : MSG_MORE);
+		if (err)
+			return err;
+		chunk -= bvec.bv_len;
+		bio_advance_iter_single(bio, iter, bvec.bv_len);
+	}
+
+	return 0;
+}
+
+static int dtl_send_zc_bio(struct drbd_transport *transport, struct bio *bio)
+{
+	struct dtl_transport *dtl_transport =
+		container_of(transport, struct dtl_transport, transport);
+	struct dtl_stream *stream = &dtl_transport->streams[DATA_STREAM];
+	bool lb = test_bit(DTL_LOAD_BALANCE, &dtl_transport->flags);
+	struct bvec_iter iter_scan = bio->bi_iter;
+	struct bvec_iter iter = bio->bi_iter;
+	int err;
+
+	if (!bio_has_data(bio)) /* e.g. REQ_OP_DISCARD */
+		return 0;
+
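+	/*
+	 * In load-balancing mode a bio may be split across paths: each
+	 * iteration picks the flow with the most free send-buffer space,
+	 * prefixes the chunk with a sequence header so the receiver can
+	 * restore ordering, and, when the bio exceeds the free space,
+	 * limits the chunk to roughly what fits.
+	 */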
+	do {
+		struct dtl_flow *flow;
+		struct sock *sk;
+		int chunk, wmem_available;
+
+		err = dtl_select_send_flow(dtl_transport, DATA_STREAM, &flow);
+		if (err)
+			goto out;
+
+		sk = flow->sock->sk;
+		wmem_available = READ_ONCE(sk->sk_sndbuf) - READ_ONCE(sk->sk_wmem_queued);
+
+		if (lb && iter.bi_size > wmem_available)
+			chunk = dtl_bio_chunk_size_available(bio, wmem_available, &iter_scan);
+		else
+			chunk = iter.bi_size;
+
+		if (lb) {
+			struct dtl_header header;
+
+			header.sequence = cpu_to_be32(stream->send_sequence++);
+			header.bytes = cpu_to_be32(chunk);
+			err = _dtl_send(dtl_transport, flow, &header, sizeof(header), MSG_MORE);
+			if (err < 0)
+				goto out;
+		}
+
+		err = dtl_send_bio_pages(dtl_transport, flow, bio, &iter, chunk);
+		if (err)
+			goto out;
+	} while (iter.bi_size);
+out:
+	return err;
+}
+
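+/*
+ * Apply a TCP tuning hint to all paths of the transport; with multiple
+ * active paths, any of them may carry the stream's next packet.
+ */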
+static bool dtl_hint(struct drbd_transport *transport, enum drbd_stream stream,
+		enum drbd_tr_hints hint)
+{
+	struct drbd_path *drbd_path;
+
+	for_each_path_ref(drbd_path, transport) {
+		struct dtl_path *path = container_of(drbd_path, struct dtl_path, path);
+		struct socket *sock = path->flow[stream].sock;
+
+		if (!sock)
+			continue;
+
+		switch (hint) {
+		case CORK:
+			tcp_sock_set_cork(sock->sk, true);
+			break;
+		case UNCORK:
+			tcp_sock_set_cork(sock->sk, false);
+			break;
+		case NODELAY:
+			tcp_sock_set_nodelay(sock->sk);
+			break;
+		case NOSPACE:
+			if (sock->sk->sk_socket)
+				set_bit(SOCK_NOSPACE, &sock->sk->sk_socket->flags);
+			break;
+		case QUICKACK:
+			tcp_sock_set_quickack(sock->sk, 2);
+			break;
+		}
+	}
+
+	return true;
+}
+
+static void dtl_debugfs_show_stream(struct seq_file *m, struct socket *sock)
+{
+	struct sock *sk = sock->sk;
+	struct tcp_sock *tp = tcp_sk(sk);
+
+	seq_printf(m, "unread receive buffer: %u Byte\n",
+		   tp->rcv_nxt - tp->copied_seq);
+	seq_printf(m, "unacked send buffer: %u Byte\n",
+		   tp->write_seq - tp->snd_una);
+	seq_printf(m, "send buffer size: %u Byte\n", sk->sk_sndbuf);
+	seq_printf(m, "send buffer used: %u Byte\n", sk->sk_wmem_queued);
+}
+
+static void dtl_debugfs_show(struct drbd_transport *transport, struct seq_file *m)
+{
+	struct drbd_path *drbd_path;
+
+	/* BUMP me if you change the file format/content/presentation */
+	seq_printf(m, "v: %u\n\n", 0);
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(drbd_path, &transport->paths, list) {
+		enum drbd_stream i;
+
+		seq_printf(m, "%pISpc - %pISpc:\n",
+			   &drbd_path->my_addr,
+			   &drbd_path->peer_addr);
+
+		for (i = DATA_STREAM; i <= CONTROL_STREAM; i++) {
+			struct dtl_path *path = container_of(drbd_path, struct dtl_path, path);
+			struct socket *sock = path->flow[i].sock;
+
+			if (!sock)
+				continue;
+			seq_printf(m, "%s stream\n", i == DATA_STREAM ? "data" : "control");
+			dtl_debugfs_show_stream(m, sock);
+		}
+		seq_puts(m, "\n");
+	}
+	rcu_read_unlock();
+}
+
+static int dtl_add_path(struct drbd_path *drbd_path)
+{
+	struct drbd_transport *transport = drbd_path->transport;
+	struct dtl_transport *dtl_transport =
+		container_of(transport, struct dtl_transport, transport);
+	struct dtl_path *path = container_of(drbd_path, struct dtl_path, path);
+	bool active = test_bit(DTL_CONNECTING, &dtl_transport->flags);
+	enum drbd_stream i;
+
+	for (i = DATA_STREAM; i <= CONTROL_STREAM; i++)
+		path->flow[i].stream_nr = i;
+
+	clear_bit(TR_ESTABLISHED, &drbd_path->flags);
+
+	return dtl_path_adjust_listener(path, active);
+}
+
+static bool dtl_may_remove_path(struct drbd_path *drbd_path)
+{
+	return !test_bit(TR_ESTABLISHED, &drbd_path->flags);
+}
+
+static void dtl_remove_path(struct drbd_path *drbd_path)
+{
+	drbd_put_listener(drbd_path);
+}
+
+static int __init dtl_initialize(void)
+{
+	return drbd_register_transport_class(&dtl_transport_class,
+					     DRBD_TRANSPORT_API_VERSION,
+					     sizeof(struct drbd_transport));
+}
+
+static void __exit dtl_cleanup(void)
+{
+	drbd_unregister_transport_class(&dtl_transport_class);
+}
+
+module_init(dtl_initialize)
+module_exit(dtl_cleanup)
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 08/20] drbd: add DAX/PMEM support for metadata access
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
                   ` (6 preceding siblings ...)
  2026-03-27 22:38 ` [PATCH 07/20] drbd: add load-balancing TCP transport Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 09/20] drbd: add optional compatibility layer for DRBD 8.4 Christoph Böhmwalder
                   ` (11 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder, Joel Colledge

When DRBD's metadata device resides on persistent memory (PMEM/NVDIMM),
accessing it by reading and writing full blocks is unnecessarily
costly.

Add a DAX-based metadata path that directly maps the metadata region,
enabling byte-granular, IRQ-safe access without having to go through
the block layer.

The PMEM path also introduces a more efficient activity log layout:
instead of writing journal transactions, the in-memory LRU-cache hash
table is stored directly in persistent memory and updated in-place.
Similarly, the resync bitmap is accessed directly from PMEM rather than
being loaded into and flushed from DRAM.

This is compiled in only when CONFIG_DEV_DAX_PMEM is enabled.
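
For illustration, the core DAX access pattern used below, as a
condensed sketch (names such as sector, want, dax_dev and slot are
illustrative; error handling is simplified -- see
map_superblock_for_dax() and drbd_dax_map() for the real code):

    pgoff_t pgoff = sector >> (PAGE_SHIFT - SECTOR_SHIFT);
    void *kaddr;
    long avail;
    int id;

    id = dax_read_lock();
    avail = dax_direct_access(dax_dev, pgoff, want, DAX_ACCESS, &kaddr, NULL);
    dax_read_unlock(id);
    if (avail < want)
        return -EIO;

    /* From here on, metadata updates are plain stores plus a cache
     * writeback for persistence: */
    *slot = cpu_to_be32(new_value);
    arch_wb_cache_pmem(slot, sizeof(*slot));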

Co-developed-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Co-developed-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Co-developed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Co-developed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/Makefile        |   1 +
 drivers/block/drbd/drbd_dax_pmem.c | 158 +++++++++++++++++++++++++++++
 drivers/block/drbd/drbd_dax_pmem.h |  40 ++++++++
 3 files changed, 199 insertions(+)
 create mode 100644 drivers/block/drbd/drbd_dax_pmem.c
 create mode 100644 drivers/block/drbd/drbd_dax_pmem.h

diff --git a/drivers/block/drbd/Makefile b/drivers/block/drbd/Makefile
index 7f2655a206aa..4b58eb83fc22 100644
--- a/drivers/block/drbd/Makefile
+++ b/drivers/block/drbd/Makefile
@@ -5,6 +5,7 @@ drbd-y += drbd_main.o drbd_strings.o drbd_nl.o
 drbd-y += drbd_interval.o drbd_state.o
 drbd-y += drbd_nla.o
 drbd-y += drbd_transport.o
+drbd-$(CONFIG_DEV_DAX_PMEM) += drbd_dax_pmem.o
 drbd-$(CONFIG_DEBUG_FS) += drbd_debugfs.o
 
 obj-$(CONFIG_BLK_DEV_DRBD)     += drbd.o
diff --git a/drivers/block/drbd/drbd_dax_pmem.c b/drivers/block/drbd/drbd_dax_pmem.c
new file mode 100644
index 000000000000..6f29dfd763a3
--- /dev/null
+++ b/drivers/block/drbd/drbd_dax_pmem.c
@@ -0,0 +1,158 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+   drbd_dax_pmem.c
+
+   This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+   Copyright (C) 2017, LINBIT HA-Solutions GmbH.
+ */
+
+/*
+ * In case DRBD's meta-data resides in persistent memory, do a few things
+ * differently:
+ *
+ * 1. Access the bitmap directly in place. Do not load it into DRAM, do
+ *    not write it back from DRAM.
+ * 2. Use a better-fitting format for the on-disk activity log: instead
+ *    of writing transactions, store the unmangled LRU-cache hash table
+ *    there and update it in place.
+ */
+
+#include <linux/vmalloc.h>
+#include <linux/slab.h>
+#include <linux/dax.h>
+#include <linux/libnvdimm.h>
+#include <linux/blkdev.h>
+#include "drbd_int.h"
+#include "drbd_dax_pmem.h"
+#include "drbd_meta_data.h"
+
+static int map_superblock_for_dax(struct drbd_backing_dev *bdev, struct dax_device *dax_dev)
+{
+	long want = 1;
+	pgoff_t pgoff = bdev->md.md_offset >> (PAGE_SHIFT - SECTOR_SHIFT);
+	void *kaddr;
+	long len;
+	int id;
+
+	id = dax_read_lock();
+	len = dax_direct_access(dax_dev, pgoff, want, DAX_ACCESS, &kaddr, NULL);
+	dax_read_unlock(id);
+
+	if (len < want)
+		return -EIO;
+
+	bdev->md_on_pmem = kaddr;
+
+	return 0;
+}
+
+/**
+ * drbd_dax_open() - Open device for dax and map metadata superblock
+ * @bdev: backing device to be opened
+ */
+int drbd_dax_open(struct drbd_backing_dev *bdev)
+{
+	struct dax_device *dax_dev;
+	int err;
+	u64 part_off;
+
+	dax_dev = fs_dax_get_by_bdev(bdev->md_bdev, &part_off, NULL, NULL);
+	if (!dax_dev)
+		return -ENODEV;
+
+	err = map_superblock_for_dax(bdev, dax_dev);
+	if (!err)
+		bdev->dax_dev = dax_dev;
+	else
+		put_dax(dax_dev);
+
+	return err;
+}
+
+void drbd_dax_close(struct drbd_backing_dev *bdev)
+{
+	put_dax(bdev->dax_dev);
+}
+
+/**
+ * drbd_dax_map() - Map metadata for dax
+ * @bdev: backing device whose metadata is to be mapped
+ */
+int drbd_dax_map(struct drbd_backing_dev *bdev)
+{
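+	/* Map the whole metadata area in one go; all md offsets below are
+	 * in 512-byte sectors, converted here to page and byte offsets
+	 * within the mapping. */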
+	struct dax_device *dax_dev = bdev->dax_dev;
+	sector_t first_sector = drbd_md_first_sector(bdev);
+	sector_t al_sector = bdev->md.md_offset + bdev->md.al_offset;
+	long want = (drbd_md_last_sector(bdev) + 1 - first_sector) >> (PAGE_SHIFT - SECTOR_SHIFT);
+	pgoff_t pgoff = first_sector >> (PAGE_SHIFT - SECTOR_SHIFT);
+	long md_offset_byte = (bdev->md.md_offset - first_sector) << SECTOR_SHIFT;
+	long al_offset_byte = (al_sector - first_sector) << SECTOR_SHIFT;
+	void *kaddr;
+	long len;
+	int id;
+
+	id = dax_read_lock();
+	len = dax_direct_access(dax_dev, pgoff, want, DAX_ACCESS, &kaddr, NULL);
+	dax_read_unlock(id);
+
+	if (len < want)
+		return -EIO;
+
+	bdev->md_on_pmem = kaddr + md_offset_byte;
+	bdev->al_on_pmem = kaddr + al_offset_byte;
+
+	return 0;
+}
+
+void drbd_dax_al_update(struct drbd_device *device, struct lc_element *al_ext)
+{
+	struct al_on_pmem *al_on_pmem = device->ldev->al_on_pmem;
+	__be32 *slot = &al_on_pmem->slots[al_ext->lc_index];
+
+	*slot = cpu_to_be32(al_ext->lc_new_number);
+	arch_wb_cache_pmem(slot, sizeof(*slot));
+}
+
+void drbd_dax_al_begin_io_commit(struct drbd_device *device)
+{
+	struct lc_element *e;
+
+	spin_lock_irq(&device->al_lock);
+
+	list_for_each_entry(e, &device->act_log->to_be_changed, list)
+		drbd_dax_al_update(device, e);
+
+	lc_committed(device->act_log);
+
+	spin_unlock_irq(&device->al_lock);
+}
+
+int drbd_dax_al_initialize(struct drbd_device *device)
+{
+	struct al_on_pmem *al_on_pmem = device->ldev->al_on_pmem;
+	__be32 *slots = al_on_pmem->slots;
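+	/* A 4KiB block holds 4096 / sizeof(__be32) = 1024 slots; one
+	 * slot's worth is reserved for the magic at the start. */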
+	int i, al_slots = (device->ldev->md.al_size_4k << (12 - 2)) - 1;
+
+	al_on_pmem->magic = cpu_to_be32(DRBD_AL_PMEM_MAGIC);
+	/* initialize all slots rather than just the configured number in case
+	 * the configuration is later changed */
+	for (i = 0; i < al_slots; i++) {
+		unsigned int extent_nr = i < device->act_log->nr_elements ?
+			lc_element_by_index(device->act_log, i)->lc_number :
+			LC_FREE;
+		slots[i] = cpu_to_be32(extent_nr);
+	}
+
+	return 0;
+}
+
+void *drbd_dax_bitmap(struct drbd_device *device, unsigned long want)
+{
+	struct drbd_backing_dev *bdev = device->ldev;
+	unsigned char *md_on_pmem = (unsigned char *)bdev->md_on_pmem;
+
+	return md_on_pmem + (long)bdev->md.bm_offset * SECTOR_SIZE;
+}
diff --git a/drivers/block/drbd/drbd_dax_pmem.h b/drivers/block/drbd/drbd_dax_pmem.h
new file mode 100644
index 000000000000..9a929969ff27
--- /dev/null
+++ b/drivers/block/drbd/drbd_dax_pmem.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef DRBD_DAX_H
+#define DRBD_DAX_H
+
+#include <linux/kconfig.h>
+
+#if IS_ENABLED(CONFIG_DEV_DAX_PMEM)
+
+int drbd_dax_open(struct drbd_backing_dev *bdev);
+void drbd_dax_close(struct drbd_backing_dev *bdev);
+int drbd_dax_map(struct drbd_backing_dev *bdev);
+void drbd_dax_al_update(struct drbd_device *device, struct lc_element *al_ext);
+void drbd_dax_al_begin_io_commit(struct drbd_device *device);
+int drbd_dax_al_initialize(struct drbd_device *device);
+void *drbd_dax_bitmap(struct drbd_device *device, unsigned long want);
+
+static inline bool drbd_md_dax_active(struct drbd_backing_dev *bdev)
+{
+	return bdev->dax_dev != NULL;
+}
+static inline struct meta_data_on_disk_9 *drbd_dax_md_addr(struct drbd_backing_dev *bdev)
+{
+	return bdev->md_on_pmem;
+}
+#else
+
+#define drbd_dax_open(B) do { } while (0)
+#define drbd_dax_close(B) do { } while (0)
+#define drbd_dax_map(B) (-ENOTSUPP)
+#define drbd_dax_al_update(D, E) do { } while (0)
+#define drbd_dax_al_begin_io_commit(D) do { } while (0)
+#define drbd_dax_al_initialize(D) (-EIO)
+#define drbd_dax_bitmap(D, L) (NULL)
+#define drbd_md_dax_active(B) (false)
+#define drbd_dax_md_addr(B) (NULL)
+
+#define arch_wb_cache_pmem(A, L) do { } while (0)
+
+#endif /* IS_ENABLED(CONFIG_DEV_DAX_PMEM) */
+
+#endif /* DRBD_DAX_H */
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 09/20] drbd: add optional compatibility layer for DRBD 8.4
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
                   ` (7 preceding siblings ...)
  2026-03-27 22:38 ` [PATCH 08/20] drbd: add DAX/PMEM support for metadata access Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 10/20] drbd: rename drbd_worker.c to drbd_sender.c Christoph Böhmwalder
                   ` (10 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder, Joel Colledge

Introduce a Kconfig option, DRBD_COMPAT_84, which enables a
self-contained source module encapsulating everything needed to
interoperate with the userspace and on-disk formats of DRBD 8.4 (the
version that has shipped with the kernel until now).

This ensures that compatibility with existing userspace tooling is
retained.

The main DRBD code is deliberately kept as clean as possible from any
backward compatibility hacks, so that this acts as the "main switch" for
enabling compatibility with old DRBD userspace utilities.
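
For illustration, the decode direction of the flag translation
condenses to roughly the following (see drbd_md_decode_84() below for
the full version, which also sets MDF_HAVE_BITMAP and the bitmap
index):

    u32 on_disk_flags = be32_to_cpu(on_disk->flags);

    md->flags = on_disk_flags & MDF_84_MASK;      /* bits shared 1:1 */
    peer_md->flags = on_disk_flags & MDF_84_PEER_MASK;
    if (on_disk_flags & MDF_84_PEER_OUTDATED)     /* 1 << 5 in 8.4 */
        peer_md->flags |= MDF_PEER_OUTDATED;      /* 1 << 1 in 9   */
    if (on_disk_flags & MDF_84_CONNECTED_IND)     /* 1 << 2 in 8.4 */
        peer_md->flags |= MDF_PEER_CONNECTED;     /* 1 << 0 in 9   */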

Co-developed-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Co-developed-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Co-developed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Co-developed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/Kconfig          |  26 ++
 drivers/block/drbd/Makefile         |   1 +
 drivers/block/drbd/drbd_legacy_84.c | 559 ++++++++++++++++++++++++++++
 drivers/block/drbd/drbd_legacy_84.h |  27 ++
 4 files changed, 613 insertions(+)
 create mode 100644 drivers/block/drbd/drbd_legacy_84.c
 create mode 100644 drivers/block/drbd/drbd_legacy_84.h

diff --git a/drivers/block/drbd/Kconfig b/drivers/block/drbd/Kconfig
index a214e92c32eb..d4975c21d4de 100644
--- a/drivers/block/drbd/Kconfig
+++ b/drivers/block/drbd/Kconfig
@@ -104,3 +104,29 @@ config BLK_DEV_DRBD_RDMA
 	  over RDMA-capable networks for lower latency and higher throughput.
 
 	  If unsure, say N.
+
+config DRBD_COMPAT_84
+	bool "Enable legacy (8.4) /proc/drbd and metadata compatibility"
+	depends on BLK_DEV_DRBD
+	default y
+	help
+	  This option enables the DRBD-8.4 style representation of
+	  DRBD devices in /proc/drbd. With DRBD 9.0, released in 2015,
+	  we deprecated this interface. The replacement is `drbdsetup
+	  status [--json] <resname>`. The new interface is optionally
+	  machine-readable, extensible, and scales to thousands of
+	  resources. The deprecated interface had issues in all three
+	  areas.
+
+	  This option also enables the DRBD driver to read and write
+	  the deprecated 8.4 version of the metadata.
+
+	  Only resources created the legacy way, with `drbdsetup
+	  new-resource <resname>` (no node-id positional argument), are
+	  put into legacy mode: they show up in /proc/drbd, can read
+	  and write 8.4 metadata, and can have only a single peer.
+	  Without this compile-time option, creating a resource without
+	  a node ID is not possible.
+
+	  If unsure, say Y.
diff --git a/drivers/block/drbd/Makefile b/drivers/block/drbd/Makefile
index 4b58eb83fc22..1f0776c65349 100644
--- a/drivers/block/drbd/Makefile
+++ b/drivers/block/drbd/Makefile
@@ -7,6 +7,7 @@ drbd-y += drbd_nla.o
 drbd-y += drbd_transport.o
 drbd-$(CONFIG_DEV_DAX_PMEM) += drbd_dax_pmem.o
 drbd-$(CONFIG_DEBUG_FS) += drbd_debugfs.o
+drbd-$(CONFIG_DRBD_COMPAT_84) += drbd_legacy_84.o
 
 obj-$(CONFIG_BLK_DEV_DRBD)     += drbd.o
 
diff --git a/drivers/block/drbd/drbd_legacy_84.c b/drivers/block/drbd/drbd_legacy_84.c
new file mode 100644
index 000000000000..5363dab31918
--- /dev/null
+++ b/drivers/block/drbd/drbd_legacy_84.c
@@ -0,0 +1,559 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include "drbd_legacy_84.h"
+#include "drbd_meta_data.h"
+
+/*
+ *   drbd-8.4                      drbd-9 md.flags                   drbd-9 peer-md.flags
+ * MDF_CONSISTENT      1 << 0  MDF_CONSISTENT =        1 << 0,   MDF_PEER_CONNECTED =    1 << 0,
+ * MDF_PRIMARY_IND     1 << 1  MDF_PRIMARY_IND =       1 << 1,   MDF_PEER_OUTDATED =     1 << 1,
+ * MDF_CONNECTED_IND   1 << 2                                    MDF_PEER_FENCING =      1 << 2,
+ * MDF_FULL_SYNC       1 << 3                                    MDF_PEER_FULL_SYNC =    1 << 3,
+ * MDF_WAS_UP_TO_DATE  1 << 4  MDF_WAS_UP_TO_DATE =    1 << 4,   MDF_PEER_DEVICE_SEEN =  1 << 4,
+ * MDF_PEER_OUT_DATED  1 << 5
+ * MDF_CRASHED_PRIMARY 1 << 6  MDF_CRASHED_PRIMARY =   1 << 6,
+ * MDF_AL_CLEAN        1 << 7  MDF_AL_CLEAN =          1 << 7,
+ * MDF_AL_DISABLED     1 << 8  MDF_AL_DISABLED =       1 << 8,
+ *                             MDF_PRIMARY_LOST_QUORUM = 1 << 9,
+ *                             MDF_HAVE_QUORUM =       1 << 10,
+ *                                                                MDF_NODE_EXISTS =      1 << 16,
+ */
+
+#define MDF_84_MASK (MDF_CONSISTENT | MDF_PRIMARY_IND | MDF_WAS_UP_TO_DATE | \
+		     MDF_CRASHED_PRIMARY | MDF_AL_CLEAN | MDF_AL_DISABLED)
+#define MDF_84_PEER_MASK (MDF_PEER_FULL_SYNC)
+#define MDF_84_CONNECTED_IND (1<<2)
+#define MDF_84_PEER_OUTDATED (1<<5)
+
+struct meta_data_on_disk_84 {
+	u64 la_size_sect;      /* last agreed size. */
+	u64 uuid[UI_SIZE];   /* UUIDs. */
+	u64 device_uuid;
+	u64 reserved_u64_1;
+	u32 flags;             /* MDF */
+	u32 magic;
+	u32 md_size_sect;
+	u32 al_offset;         /* offset to this block */
+	u32 al_nr_extents;     /* important for restoring the AL (userspace) */
+	      /* `-- act_log->nr_elements <-- ldev->dc.al_extents */
+	u32 bm_offset;         /* offset to the bitmap, from here */
+	u32 bm_bytes_per_bit;  /* 4k. Treat as magic number, must keep it compatible. */
+	u32 la_peer_max_bio_size;   /* last peer max_bio_size */
+
+	/* see al_tr_number_to_on_disk_sector() */
+	u32 al_stripes;
+	u32 al_stripe_size_4k;
+
+	u8 reserved_u8[4096 - (7*8 + 10*4)];
+} __packed;
+
+static const char * const drbd_conn_s_names[] = {
+	[C_STANDALONE]       = "StandAlone",
+	[C_DISCONNECTING]    = "Disconnecting",
+	[C_UNCONNECTED]      = "Unconnected",
+	[C_TIMEOUT]          = "Timeout",
+	[C_BROKEN_PIPE]      = "BrokenPipe",
+	[C_NETWORK_FAILURE]  = "NetworkFailure",
+	[C_PROTOCOL_ERROR]   = "ProtocolError",
+	[C_CONNECTING]       = "WFConnection",
+	/* [C_WF_REPORT_PARAMS] = "WFReportParams", */
+	[C_TEAR_DOWN]        = "TearDown",
+	[C_CONNECTED]        = "Connected",
+	[L_STARTING_SYNC_S]  = "StartingSyncS",
+	[L_STARTING_SYNC_T]  = "StartingSyncT",
+	[L_WF_BITMAP_S]      = "WFBitMapS",
+	[L_WF_BITMAP_T]      = "WFBitMapT",
+	[L_WF_SYNC_UUID]     = "WFSyncUUID",
+	[L_SYNC_SOURCE]      = "SyncSource",
+	[L_SYNC_TARGET]      = "SyncTarget",
+	[L_PAUSED_SYNC_S]    = "PausedSyncS",
+	[L_PAUSED_SYNC_T]    = "PausedSyncT",
+	[L_VERIFY_S]         = "VerifyS",
+	[L_VERIFY_T]         = "VerifyT",
+	[L_AHEAD]            = "Ahead",
+	[L_BEHIND]           = "Behind",
+};
+
+static const char write_ordering_chars[] = {
+	[WO_NONE] = 'n',
+	[WO_DRAIN_IO] = 'd',
+	[WO_BDEV_FLUSH] = 'f',
+	[WO_BIO_BARRIER] = 'b',
+};
+
+static int seq_print_device_proc_drbd(struct seq_file *m, struct drbd_device *device);
+
+atomic_t nr_drbd8_devices;
+
+void drbd_md_decode_84(struct meta_data_on_disk_84 *on_disk, struct drbd_md *md)
+{
+	struct drbd_peer_md *peer_md;
+	const int peer_node_id = 0; /* setup_node_ids_84() moves it later */
+	u32 on_disk_flags;
+	int i;
+
+	md->effective_size = be64_to_cpu(on_disk->la_size_sect);
+	md->current_uuid = be64_to_cpu(on_disk->uuid[UI_CURRENT]);
+	md->prev_members = 0;
+	md->device_uuid = be64_to_cpu(on_disk->device_uuid);
+	md->md_size_sect = be32_to_cpu(on_disk->md_size_sect);
+	md->al_offset = be32_to_cpu(on_disk->al_offset);
+
+	md->bm_offset = be32_to_cpu(on_disk->bm_offset);
+
+	on_disk_flags = be32_to_cpu(on_disk->flags);
+	md->flags = on_disk_flags & MDF_84_MASK;
+
+	md->max_peers = 1;
+	md->bm_block_size = be32_to_cpu(on_disk->bm_bytes_per_bit);
+	md->node_id = -1; /* no node_id in the drbd-8.4 meta-data */
+	md->al_stripes = be32_to_cpu(on_disk->al_stripes);
+	md->al_stripe_size_4k = be32_to_cpu(on_disk->al_stripe_size_4k);
+
+	for (i = 0; i < DRBD_NODE_ID_MAX; i++) {
+		peer_md = &md->peers[i];
+
+		peer_md->bitmap_uuid = 0;
+		peer_md->bitmap_dagtag = 0;
+		peer_md->flags = 0;
+		peer_md->bitmap_index = -1;
+	}
+	peer_md = &md->peers[peer_node_id];
+	peer_md->bitmap_uuid = be64_to_cpu(on_disk->uuid[UI_BITMAP]);
+	peer_md->bitmap_index = 0;
+	peer_md->flags = on_disk_flags & MDF_84_PEER_MASK;
+	peer_md->flags |= MDF_HAVE_BITMAP;
+	peer_md->flags |= on_disk_flags & MDF_84_PEER_OUTDATED ? MDF_PEER_OUTDATED : 0;
+	peer_md->flags |= on_disk_flags & MDF_84_CONNECTED_IND ? MDF_PEER_CONNECTED : 0;
+
+	for (i = UI_HISTORY_START; i < UI_HISTORY_END; i++)
+		md->history_uuids[i - UI_HISTORY_START] = be64_to_cpu(on_disk->uuid[i]);
+}
+
+void drbd_md_encode_84(struct drbd_device *device, struct meta_data_on_disk_84 *buffer)
+{
+	struct drbd_md *md = &device->ldev->md;
+	int peer_node_id = !md->node_id;
+	struct drbd_peer_md *peer_md = &md->peers[peer_node_id];
+	u32 flags = (md->flags & MDF_84_MASK) | (peer_md->flags & MDF_84_PEER_MASK);
+	int i;
+
+	if (!device->bitmap)
+		flags |= MDF_PEER_FULL_SYNC;
+
+	flags |= peer_md->flags & MDF_PEER_OUTDATED ? MDF_84_PEER_OUTDATED : 0;
+	flags |= peer_md->flags & MDF_PEER_CONNECTED ? MDF_84_CONNECTED_IND : 0;
+	buffer->la_size_sect = cpu_to_be64(md->effective_size);
+	buffer->device_uuid = cpu_to_be64(md->device_uuid);
+	buffer->uuid[UI_CURRENT] = cpu_to_be64(md->current_uuid);
+	buffer->uuid[UI_BITMAP] = cpu_to_be64(peer_md->bitmap_uuid);
+	for (i = UI_HISTORY_START; i < UI_HISTORY_END; i++)
+		buffer->uuid[i] = cpu_to_be64(md->history_uuids[i - UI_HISTORY_START]);
+	buffer->reserved_u64_1 = 0;
+	buffer->flags = cpu_to_be32(flags);
+	buffer->magic = cpu_to_be32(DRBD_MD_MAGIC_84_UNCLEAN);
+	buffer->md_size_sect = cpu_to_be32(md->md_size_sect);
+	buffer->al_offset = cpu_to_be32(md->al_offset);
+	buffer->al_nr_extents = cpu_to_be32(device->act_log->nr_elements);
+	buffer->bm_offset = cpu_to_be32(md->bm_offset);
+	buffer->bm_bytes_per_bit = cpu_to_be32(BM_BLOCK_SIZE_4k); /* treat as magic number */
+	buffer->la_peer_max_bio_size = cpu_to_be32(device->device_conf.max_bio_size);
+
+	buffer->al_stripes = cpu_to_be32(md->al_stripes);
+	buffer->al_stripe_size_4k = cpu_to_be32(md->al_stripe_size_4k);
+}
+
+/*
+ * This is DRBD 8 userspace compatibility mode, so we do not have a node ID
+ * yet. We derive our own node ID from the peer node ID. drbdsetup gives us the
+ * peer-node-id, which it determines by comparing the IP addresses.
+ */
+int drbd_setup_node_ids_84(struct drbd_connection *connection, struct drbd_path *path,
+			    unsigned int peer_node_id)
+{
+	int vnr, my_node_id, nr_legacy = 0, nr_v9 = 0;
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_peer_device *peer_device;
+	struct drbd_device *device;
+
+	my_node_id = !peer_node_id;
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		if (test_bit(LEGACY_84_MD, &device->flags)) {
+			nr_legacy++;
+		} else {
+			nr_v9++;
+			if (get_ldev(device)) {
+				int md_my_node_id = device->ldev->md.node_id;
+
+				put_ldev(device);
+				if (my_node_id != md_my_node_id) {
+					drbd_err(connection, "inconsistent node_ids %d %d\n",
+						 my_node_id, md_my_node_id);
+					return -ENOTUNIQ;
+				}
+			}
+		}
+	}
+
+	if (nr_legacy && nr_v9)
+		drbd_warn(connection, "legacy-84 and drbd-9 metadata in one resource\n");
+
+	drbd_info(connection, "drbd8 userspace compat mode: setting my node id to %d\n",
+		  my_node_id);
+
+	/* Set up all node ids. */
+	resource->res_opts.node_id = my_node_id;
+	connection->peer_node_id = peer_node_id;
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		peer_device = list_first_entry_or_null(&device->peer_devices,
+						       struct drbd_peer_device, peer_devices);
+		peer_device->node_id = peer_node_id;
+		peer_device->bitmap_index = 0;
+
+		if (get_ldev(device)) {
+			const struct drbd_peer_md clear = { .bitmap_index = -1 };
+			struct drbd_md *md = &device->ldev->md;
+			struct drbd_peer_md *to = &md->peers[peer_node_id];
+			int i;
+
+			md->node_id = my_node_id;
+
+			for (i = 0; i < DRBD_NODE_ID_MAX; i++) {
+				struct drbd_peer_md *from = &md->peers[i];
+
+				if (from->bitmap_index != -1) {
+					if (from != to) {
+						*to = *from;
+						*from = clear;
+					}
+					break;
+				}
+			}
+			put_ldev(device);
+		}
+	}
+
+	return 0;
+}
+
+/*
+ * Some resources may be operating in "DRBD 8 compatibility mode", where the
+ * user created the resource using the old drbd8-style drbdsetup command line
+ * syntax.
+ * This implies that the user probably also expects the old drbd8-style
+ * /proc/drbd output showing the device state.
+ * If the flag is set for a resource, we show the old-style output for that
+ * resource.
+ * If any resource is in DRBD 8 compatibility mode, this function returns true.
+ */
+bool drbd_show_legacy_device(struct seq_file *seq, void *v)
+{
+	struct drbd_device *device;
+	int i, prev_i = -1;
+
+	if (!atomic_read(&nr_drbd8_devices))
+		return false;
+
+	rcu_read_lock();
+	idr_for_each_entry(&drbd_devices, device, i) {
+		if (!device->resource->res_opts.drbd8_compat_mode)
+			continue;
+
+		if (prev_i != i - 1)
+			seq_putc(seq, '\n');
+		prev_i = i;
+
+		seq_print_device_proc_drbd(seq, device);
+	}
+	rcu_read_unlock();
+	return true;
+}
+
+static void seq_printf_with_thousands_grouping(struct seq_file *seq, long v)
+{
+	/* v is in kB/sec. We don't expect TiByte/sec yet. */
+	if (unlikely(v >= 1000000)) {
+		/* cool: > GiByte/s */
+		seq_printf(seq, "%ld,", v / 1000000);
+		v %= 1000000;
+		seq_printf(seq, "%03ld,%03ld", v / 1000, v % 1000);
+	} else if (likely(v >= 1000)) {
+		seq_printf(seq, "%ld,%03ld", v / 1000, v % 1000);
+	} else {
+		seq_printf(seq, "%ld", v);
+	}
+}
+
+static void drbd_get_syncer_progress(struct drbd_peer_device *pd,
+		enum drbd_repl_state repl_state, unsigned long *rs_total,
+		unsigned long *bits_left, unsigned int *per_mil_done)
+{
+	/* this is to break it at compile time when we change that, in case we
+	 * want to support more than (1<<32) bits on a 32bit arch.
+	 */
+	typecheck(unsigned long, pd->rs_total);
+	*rs_total = pd->rs_total;
+
+	/* note: both rs_total and rs_left are in bits, i.e. in
+	 * units of BM_BLOCK_SIZE.
+	 * for the percentage, we don't care.
+	 */
+
+	if (repl_state == L_VERIFY_S || repl_state == L_VERIFY_T)
+		*bits_left = atomic64_read(&pd->ov_left);
+	else
+		*bits_left = drbd_bm_total_weight(pd) - pd->rs_failed;
+	/* >> 10 to prevent overflow,
+	 * +1 to prevent division by zero
+	 */
+	if (*bits_left > *rs_total) {
+		/* D'oh. Maybe a logic bug somewhere.  More likely just a race
+		 * between state change and reset of rs_total.
+		 */
+		*bits_left = *rs_total;
+		*per_mil_done = *rs_total ? 0 : 1000;
+	} else {
+		/* Make sure the division happens in long context.
+		 * We allow up to one petabyte storage right now,
+		 * at a granularity of 4k per bit that is 2**38 bits.
+		 * After shift right and multiplication by 1000,
+		 * this should still fit easily into a 32bit long,
+		 * so we don't need a 64bit division on 32bit arch.
+		 * Note: currently we don't support such large bitmaps on 32bit
+		 * arch anyways, but no harm done to be prepared for it here.
+		 */
+		unsigned int shift = *rs_total > UINT_MAX ? 16 : 10;
+		unsigned long left = *bits_left >> shift;
+		unsigned long total = 1UL + (*rs_total >> shift);
+		unsigned long tmp = 1000UL - left * 1000UL / total;
+
+		*per_mil_done = tmp;
+	}
+}
+
+static void drbd_syncer_progress(struct drbd_peer_device *pd, struct seq_file *seq,
+		enum drbd_repl_state repl_state)
+{
+	unsigned long db, dt, dbdt, rt, rs_total, rs_left;
+	unsigned int res;
+	int i, x, y;
+	int stalled = 0;
+	unsigned int bm_block_shift = pd->device->last_bm_block_shift;
+
+	drbd_get_syncer_progress(pd, repl_state, &rs_total, &rs_left, &res);
+
+	x = res/50;
+	y = 20-x;
+	seq_puts(seq, "\t[");
+	for (i = 1; i < x; i++)
+		seq_putc(seq, '=');
+	seq_putc(seq, '>');
+	for (i = 0; i < y; i++)
+		seq_putc(seq, '.');
+	seq_puts(seq, "] ");
+
+	if (repl_state == L_VERIFY_S || repl_state == L_VERIFY_T)
+		seq_puts(seq, "verified:");
+	else
+		seq_puts(seq, "sync'ed:");
+	seq_printf(seq, "%3u.%u%% ", res / 10, res % 10);
+
+	/* if more than a few GB, display in MB */
+	if (rs_total > (4UL << (30 - bm_block_shift)))
+		seq_printf(seq, "(%llu/%llu)M",
+			    bit_to_kb(rs_left >> 10, bm_block_shift),
+			    bit_to_kb(rs_total >> 10, bm_block_shift));
+	else
+		seq_printf(seq, "(%llu/%llu)K",
+			    bit_to_kb(rs_left, bm_block_shift),
+			    bit_to_kb(rs_total, bm_block_shift));
+
+	seq_puts(seq, "\n\t");
+
+	/* see drivers/md/md.c
+	 * We do not want to overflow, so the order of operands and
+	 * the * 100 / 100 trick are important. We do a +1 to be
+	 * safe against division by zero. We only estimate anyway.
+	 *
+	 * dt: time from mark until now
+	 * db: blocks written from mark until now
+	 * rt: remaining time
+	 */
+	/* Rolling marks. last_mark+1 may just now be modified.  last_mark+2 is
+	 * at least (DRBD_SYNC_MARKS-2)*DRBD_SYNC_MARK_STEP old, and has at
+	 * least DRBD_SYNC_MARK_STEP time before it will be modified.
+	 */
+	/* ------------------------ ~18s average ------------------------ */
+	i = (pd->rs_last_mark + 2) % DRBD_SYNC_MARKS;
+	dt = (jiffies - pd->rs_mark_time[i]) / HZ;
+	if (dt > 180)
+		stalled = 1;
+
+	if (!dt)
+		dt++;
+	db = pd->rs_mark_left[i] - rs_left;
+	rt = (dt * (rs_left / (db/100+1)))/100; /* seconds */
+
+	seq_printf(seq, "finish: %lu:%02lu:%02lu",
+		rt / 3600, (rt % 3600) / 60, rt % 60);
+
+	dbdt = bit_to_kb(db/dt, bm_block_shift);
+	seq_puts(seq, " speed: ");
+	seq_printf_with_thousands_grouping(seq, dbdt);
+	seq_puts(seq, " (");
+	/* ------------------------- ~3s average ------------------------ */
+	if (1) {
+		/* this is what drbd_rs_should_slow_down() uses */
+		i = (pd->rs_last_mark + DRBD_SYNC_MARKS-1) % DRBD_SYNC_MARKS;
+		dt = (jiffies - pd->rs_mark_time[i]) / HZ;
+		if (!dt)
+			dt++;
+		db = pd->rs_mark_left[i] - rs_left;
+		dbdt = bit_to_kb(db/dt, bm_block_shift);
+		seq_printf_with_thousands_grouping(seq, dbdt);
+		seq_puts(seq, " -- ");
+	}
+
+	/* --------------------- long term average ---------------------- */
+	/* mean speed since syncer started we do account for PausedSync periods */
+	dt = (jiffies - pd->rs_start - pd->rs_paused) / HZ;
+	if (dt == 0)
+		dt = 1;
+	db = rs_total - rs_left;
+	dbdt = bit_to_kb(db/dt, bm_block_shift);
+	seq_printf_with_thousands_grouping(seq, dbdt);
+	seq_putc(seq, ')');
+
+	if (repl_state == L_SYNC_TARGET ||
+	    repl_state == L_VERIFY_S) {
+		seq_puts(seq, " want: ");
+		seq_printf_with_thousands_grouping(seq, pd->c_sync_rate);
+	}
+	seq_printf(seq, " K/sec%s\n", stalled ? " (stalled)" : "");
+
+	{
+		/* 64 bit: we convert to sectors in the display below. */
+		unsigned long bm_bits = drbd_bm_bits(pd->device);
+		unsigned long bit_pos;
+		unsigned long long stop_sector = 0;
+
+		if (repl_state == L_VERIFY_S ||
+		    repl_state == L_VERIFY_T) {
+			bit_pos = bm_bits - (unsigned long)atomic64_read(&pd->ov_left);
+			if (verify_can_do_stop_sector(pd))
+				stop_sector = pd->ov_stop_sector;
+		} else {
+			bit_pos = pd->resync_next_bit;
+		}
+		/* Total sectors may be slightly off for oddly sized devices. So what. */
+		seq_printf(seq,
+			"\t%3d%% sector pos: %llu/%llu",
+			(int)(bit_pos / (bm_bits/100+1)),
+			(unsigned long long)bit_pos * sect_per_bit(bm_block_shift),
+			(unsigned long long)bm_bits * sect_per_bit(bm_block_shift));
+		if (stop_sector != 0 && stop_sector != ULLONG_MAX)
+			seq_printf(seq, " stop sector: %llu", stop_sector);
+		seq_putc(seq, '\n');
+	}
+}
+
+static const char *drbd_conn_str_84(enum drbd_conn_state s)
+{
+	/* enums are unsigned... */
+	return (int)s > (int)L_BEHIND ? "TOO_LARGE" : drbd_conn_s_names[s];
+}
+
+static int seq_print_device_proc_drbd(struct seq_file *m, struct drbd_device *device)
+{
+	unsigned int send_kb, recv_kb, pending_cnt, unacked_cnt, epochs;
+	struct drbd_connection *connection = NULL;
+	struct drbd_peer_device *peer_device;
+	union drbd_state state;
+	const char *sn;
+	char wp;
+
+	peer_device = list_first_or_null_rcu(&device->peer_devices, struct drbd_peer_device,
+					     peer_devices);
+
+	if (peer_device) {
+		state = drbd_get_peer_device_state(peer_device, NOW);
+		connection = peer_device->connection;
+		send_kb = peer_device->send_cnt/2;
+		recv_kb = peer_device->recv_cnt/2;
+		pending_cnt = atomic_read(&peer_device->ap_pending_cnt) +
+			atomic_read(&peer_device->rs_pending_cnt);
+		unacked_cnt = atomic_read(&peer_device->unacked_cnt);
+	} else {
+		state = drbd_get_device_state(device, NOW);
+		connection = list_first_or_null_rcu(&device->resource->connections,
+						    struct drbd_connection, connections);
+		send_kb = 0;
+		recv_kb = 0;
+		pending_cnt = 0;
+		unacked_cnt = 0;
+	}
+	if (connection) {
+		struct net_conf *nc = rcu_dereference(connection->transport.net_conf);
+
+		wp = nc ? nc->wire_protocol - DRBD_PROT_A + 'A' : ' ';
+		epochs = connection->epochs;
+	} else {
+		wp = 'C';
+		epochs = 0;
+	}
+
+	sn = drbd_conn_str_84(state.conn);
+
+	if (state.conn == C_STANDALONE &&
+	    state.disk == D_DISKLESS &&
+	    state.role == R_SECONDARY) {
+		seq_printf(m, "%2d: cs:Unconfigured\n", device->minor);
+	} else {
+		seq_printf(m,
+			   "%2d: cs:%s ro:%s/%s ds:%s/%s %c %c%c%c%c%c%c\n"
+			   "    ns:%u nr:%u dw:%u dr:%u al:%u bm:%u "
+			   "lo:%d pe:%d ua:%d ap:%d ep:%d wo:%c",
+			   device->minor, sn,
+			   drbd_role_str(state.role),
+			   drbd_role_str(state.peer),
+			   drbd_disk_str(state.disk),
+			   drbd_disk_str(state.pdsk),
+			   wp,
+			   drbd_suspended(device) ? 's' : 'r',
+			   state.aftr_isp ? 'a' : '-',
+			   state.peer_isp ? 'p' : '-',
+			   state.user_isp ? 'u' : '-',
+			   '-' /* congestion reason... FIXME */,
+			   test_bit(AL_SUSPENDED, &device->flags) ? 's' : '-',
+			   send_kb,
+			   recv_kb,
+			   device->writ_cnt/2,
+			   device->read_cnt/2,
+			   device->al_writ_cnt,
+			   device->bm_writ_cnt,
+			   atomic_read(&device->local_cnt),
+			   pending_cnt,
+			   unacked_cnt,
+			   atomic_read(&device->ap_bio_cnt[WRITE]) +
+			   atomic_read(&device->ap_bio_cnt[READ]),
+			   epochs,
+			   write_ordering_chars[device->resource->write_ordering]
+			);
+		seq_printf(m, " oos:%llu\n",
+			   peer_device ?
+				device_bit_to_kb(device, drbd_bm_total_weight(peer_device)) : 0);
+	}
+	if (state.conn == L_SYNC_SOURCE ||
+	    state.conn == L_SYNC_TARGET ||
+	    state.conn == L_VERIFY_S ||
+	    state.conn == L_VERIFY_T)
+		drbd_syncer_progress(peer_device, m, state.conn);
+
+	/* drbd_proc_details 1 or 2 missing */
+
+	return 0;
+}
diff --git a/drivers/block/drbd/drbd_legacy_84.h b/drivers/block/drbd/drbd_legacy_84.h
new file mode 100644
index 000000000000..6642fda72d17
--- /dev/null
+++ b/drivers/block/drbd/drbd_legacy_84.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef __DRBD_LEGACY_84_H
+#define __DRBD_LEGACY_84_H
+
+#include "drbd_int.h"
+
+struct meta_data_on_disk_84;
+
+#ifdef CONFIG_DRBD_COMPAT_84
+extern atomic_t nr_drbd8_devices;
+
+void drbd_md_decode_84(struct meta_data_on_disk_84 *on_disk, struct drbd_md *md);
+void drbd_md_encode_84(struct drbd_device *device, struct meta_data_on_disk_84 *buffer);
+int drbd_setup_node_ids_84(struct drbd_connection *connection, struct drbd_path *path,
+			   unsigned int peer_node_id);
+bool drbd_show_legacy_device(struct seq_file *seq, void *v);
+#else
+static inline void drbd_md_decode_84(struct meta_data_on_disk_84 *on_disk, struct drbd_md *md) {}
+static inline void drbd_md_encode_84(struct drbd_device *device,
+	struct meta_data_on_disk_84 *buffer) {}
+static inline int drbd_setup_node_ids_84(struct drbd_connection *connection, struct drbd_path *path,
+			   unsigned int peer_node_id) { return 0; }
+static inline bool drbd_show_legacy_device(struct seq_file *seq, void *v) { return false; }
+#endif  /* CONFIG_DRBD_COMPAT_84 */
+
+#endif  /* __DRBD_LEGACY_84_H */
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 10/20] drbd: rename drbd_worker.c to drbd_sender.c
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
                   ` (8 preceding siblings ...)
  2026-03-27 22:38 ` [PATCH 09/20] drbd: add optional compatibility layer for DRBD 8.4 Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 11/20] drbd: rework sender for DRBD 9 multi-peer Christoph Böhmwalder
                   ` (9 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder, Joel Colledge

Pure rename in preparation for the DRBD 9 sender rework. The file is
renamed to reflect the architectural split where the sender thread
handles per-connection transfer log processing and replication data
transmission, while the worker thread (whose function remains in the
renamed file) handles per-resource background work.

Co-developed-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Co-developed-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Co-developed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Co-developed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/Makefile                         | 2 +-
 drivers/block/drbd/{drbd_worker.c => drbd_sender.c} | 0
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename drivers/block/drbd/{drbd_worker.c => drbd_sender.c} (100%)

diff --git a/drivers/block/drbd/Makefile b/drivers/block/drbd/Makefile
index 1f0776c65349..af482bea1af1 100644
--- a/drivers/block/drbd/Makefile
+++ b/drivers/block/drbd/Makefile
@@ -1,6 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0-only
 drbd-y := drbd_buildtag.o drbd_bitmap.o drbd_proc.o
-drbd-y += drbd_worker.o drbd_receiver.o drbd_req.o drbd_actlog.o
+drbd-y += drbd_sender.o drbd_receiver.o drbd_req.o drbd_actlog.o
 drbd-y += drbd_main.o drbd_strings.o drbd_nl.o
 drbd-y += drbd_interval.o drbd_state.o
 drbd-y += drbd_nla.o
diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_sender.c
similarity index 100%
rename from drivers/block/drbd/drbd_worker.c
rename to drivers/block/drbd/drbd_sender.c
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 11/20] drbd: rework sender for DRBD 9 multi-peer
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
                   ` (9 preceding siblings ...)
  2026-03-27 22:38 ` [PATCH 10/20] drbd: rename drbd_worker.c to drbd_sender.c Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 12/20] drbd: replace per-device state model with multi-peer data structures Christoph Böhmwalder
                   ` (8 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder, Joel Colledge

Split the single legacy worker thread into two: a per-connection
sender thread that drives the transfer log and replication data, and
a per-resource worker thread for background device tasks.
Resync and online-verify state moves entirely from the device to the
per-peer-device object.

Rewrite the resync request path around the interval tree instead of
the old resync-LRU extent locking, so conflict detection against
application I/O is precise rather than coarse-grained.
Resync requests carry DAG-tag ordering information if the peer supports
it, allowing the sync source to safely reorder replies.

Variable bitmap block sizes are handled correctly across peers with
different bitmap granularities.

Tighten locking throughout: replace the per-resource request spinlock
with a read-write state lock plus fine-grained per-connection and
per-device locks.
Move I/O completion tracking from per-device lists to per-connection
counters.
Switch the resync controller to nanosecond precision for better
throughput adaptation.

Co-developed-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Co-developed-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Co-developed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Co-developed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/drbd_sender.c | 3974 +++++++++++++++++++++---------
 1 file changed, 2811 insertions(+), 1163 deletions(-)

diff --git a/drivers/block/drbd/drbd_sender.c b/drivers/block/drbd/drbd_sender.c
index 0697f99fed18..bb854a3bc6b1 100644
--- a/drivers/block/drbd/drbd_sender.c
+++ b/drivers/block/drbd/drbd_sender.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
-   drbd_worker.c
+   drbd_sender.c
 
    This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
 
@@ -9,27 +9,32 @@
    Copyright (C) 2002-2008, Lars Ellenberg <lars.ellenberg@linbit.com>.
 
 
-*/
+ */
 
-#include <linux/module.h>
 #include <linux/drbd.h>
+#include <linux/sched.h>
 #include <linux/sched/signal.h>
 #include <linux/wait.h>
 #include <linux/mm.h>
-#include <linux/memcontrol.h>
+#include <linux/memcontrol.h> /* needed on kernels <4.3 */
 #include <linux/mm_inline.h>
 #include <linux/slab.h>
 #include <linux/random.h>
-#include <linux/string.h>
-#include <linux/scatterlist.h>
+#include <linux/overflow.h>
 #include <linux/part_stat.h>
 
 #include "drbd_int.h"
 #include "drbd_protocol.h"
 #include "drbd_req.h"
+#include "drbd_meta_data.h"
+
+void drbd_panic_after_delayed_completion_of_aborted_request(struct drbd_device *device);
 
 static int make_ov_request(struct drbd_peer_device *, int);
 static int make_resync_request(struct drbd_peer_device *, int);
+static bool should_send_barrier(struct drbd_connection *, unsigned int epoch);
+static void maybe_send_barrier(struct drbd_connection *, unsigned int);
+static unsigned long get_work_bits(const unsigned long mask, unsigned long *flags);
 
 /* endio handlers:
  *   drbd_md_endio (defined here)
@@ -51,10 +56,13 @@ void drbd_md_endio(struct bio *bio)
 {
 	struct drbd_device *device;
 
+	blk_status_t status = bio->bi_status;
+
 	device = bio->bi_private;
-	device->md_io.error = blk_status_to_errno(bio->bi_status);
+	device->md_io.error = blk_status_to_errno(status);
 
 	/* special case: drbd_md_read() during drbd_adm_attach() */
+	/* ldev_ref_transfer: ldev ref from bio submit in md I/O path */
 	if (device->ldev)
 		put_ldev(device);
 	bio_put(bio);
@@ -68,7 +76,7 @@ void drbd_md_endio(struct bio *bio)
 	 * Make sure we first drop the reference, and only then signal
 	 * completion, or we may (in drbd_al_read_log()) cycle so fast into the
 	 * next drbd_md_sync_page_io(), that we trigger the
-	 * ASSERT(atomic_read(&device->md_io_in_use) == 1) there.
+	 * ASSERT(atomic_read(&mdev->md_io_in_use) == 1) there.
 	 */
 	drbd_md_put_buffer(device);
 	device->md_io.done = 1;
@@ -78,89 +86,148 @@ void drbd_md_endio(struct bio *bio)
 /* reads on behalf of the partner,
  * "submitted" by the receiver
  */
-static void drbd_endio_read_sec_final(struct drbd_peer_request *peer_req) __releases(local)
+static void drbd_endio_read_sec_final(struct drbd_peer_request *peer_req)
 {
-	unsigned long flags = 0;
 	struct drbd_peer_device *peer_device = peer_req->peer_device;
 	struct drbd_device *device = peer_device->device;
+	struct drbd_connection *connection = peer_device->connection;
+	bool io_error;
 
-	spin_lock_irqsave(&device->resource->req_lock, flags);
 	device->read_cnt += peer_req->i.size >> 9;
-	list_del(&peer_req->w.list);
-	if (list_empty(&device->read_ee))
-		wake_up(&device->ee_wait);
-	if (test_bit(__EE_WAS_ERROR, &peer_req->flags))
-		__drbd_chk_io_error(device, DRBD_READ_ERROR);
-	spin_unlock_irqrestore(&device->resource->req_lock, flags);
-
-	drbd_queue_work(&peer_device->connection->sender_work, &peer_req->w);
+	io_error = test_bit(__EE_WAS_ERROR, &peer_req->flags);
+
+	drbd_queue_work(&connection->sender_work, &peer_req->w);
+	peer_req = NULL; /* peer_req may be freed. */
+
+	/*
+	 * Decrement counter after queuing work to avoid a moment where
+	 * backing_ee_cnt is zero and the sender work list is empty.
+	 */
+	if (atomic_dec_and_test(&connection->backing_ee_cnt))
+		wake_up(&connection->ee_wait);
+
+	if (io_error)
+		drbd_handle_io_error(device, DRBD_READ_ERROR);
+
 	put_ldev(device);
 }
 
+static int is_failed_barrier(int ee_flags)
+{
+	return (ee_flags & (EE_IS_BARRIER|EE_WAS_ERROR|EE_RESUBMITTED|EE_TRIM|EE_ZEROOUT))
+		== (EE_IS_BARRIER|EE_WAS_ERROR);
+}
+
+static bool drbd_peer_request_is_merged(struct drbd_peer_request *peer_req,
+		sector_t main_sector, sector_t main_sector_end)
+{
+	/*
+	 * We do not send overlapping resync requests. So any request which is
+	 * in the corresponding range and for which we have received a reply
+	 * must be a merged request. EE_TRIM implies that we have received a
+	 * reply.
+	 */
+	return peer_req->i.sector >= main_sector &&
+		peer_req->i.sector + (peer_req->i.size >> SECTOR_SHIFT) <= main_sector_end &&
+			peer_req->i.type == INTERVAL_RESYNC_WRITE &&
+			(peer_req->flags & EE_TRIM);
+}
+
+int drbd_unmerge_discard(struct drbd_peer_request *peer_req_main, struct list_head *list)
+{
+	struct drbd_peer_device *peer_device = peer_req_main->peer_device;
+	struct drbd_peer_request *peer_req = peer_req_main;
+	sector_t main_sector = peer_req_main->i.sector;
+	sector_t main_sector_end = main_sector + (peer_req_main->i.size >> SECTOR_SHIFT);
+	int merged_count = 0;
+
+	list_for_each_entry_continue(peer_req, &peer_device->resync_requests, recv_order) {
+		if (!drbd_peer_request_is_merged(peer_req, main_sector, main_sector_end))
+			break;
+
+		merged_count++;
+		list_add_tail(&peer_req->w.list, list);
+	}
+
+	return merged_count;
+}
+
 /* writes on behalf of the partner, or resync writes,
  * "submitted" by the receiver, final stage.  */
-void drbd_endio_write_sec_final(struct drbd_peer_request *peer_req) __releases(local)
+void drbd_endio_write_sec_final(struct drbd_peer_request *peer_req)
 {
 	unsigned long flags = 0;
 	struct drbd_peer_device *peer_device = peer_req->peer_device;
 	struct drbd_device *device = peer_device->device;
 	struct drbd_connection *connection = peer_device->connection;
-	struct drbd_interval i;
-	int do_wake;
-	u64 block_id;
-	int do_al_complete_io;
+	enum drbd_interval_type type;
+	bool do_wake;
+
+	/* if this is a failed barrier request, disable use of barriers,
+	 * and schedule for resubmission */
+	if (is_failed_barrier(peer_req->flags)) {
+		drbd_bump_write_ordering(device->resource, device->ldev, WO_BDEV_FLUSH);
+		spin_lock_irqsave(&connection->peer_reqs_lock, flags);
+		peer_req->flags = (peer_req->flags & ~EE_WAS_ERROR) | EE_RESUBMITTED;
+		peer_req->w.cb = w_e_reissue;
+		spin_unlock_irqrestore(&connection->peer_reqs_lock, flags);
+		drbd_queue_work(&connection->sender_work, &peer_req->w);
+		if (atomic_dec_and_test(&connection->active_ee_cnt))
+			wake_up(&connection->ee_wait);
+		return;
+	}
 
 	/* after we moved peer_req to done_ee,
 	 * we may no longer access it,
 	 * it may be freed/reused already!
-	 * (as soon as we release the req_lock) */
-	i = peer_req->i;
-	do_al_complete_io = peer_req->flags & EE_CALL_AL_COMPLETE_IO;
-	block_id = peer_req->block_id;
-	peer_req->flags &= ~EE_CALL_AL_COMPLETE_IO;
+	 * (as soon as we release the peer_reqs_lock) */
+	type = peer_req->i.type;
 
 	if (peer_req->flags & EE_WAS_ERROR) {
 		/* In protocol != C, we usually do not send write acks.
-		 * In case of a write error, send the neg ack anyways. */
-		if (!__test_and_set_bit(__EE_SEND_WRITE_ACK, &peer_req->flags))
-			inc_unacked(device);
+		 * In case of a write error, send the neg ack anyways.
+		 * This only applies to application writes, not to resync. */
+		if (peer_req->i.type == INTERVAL_PEER_WRITE) {
+			if (!__test_and_set_bit(__EE_SEND_WRITE_ACK, &peer_req->flags))
+				inc_unacked(peer_device);
+		}
 		drbd_set_out_of_sync(peer_device, peer_req->i.sector, peer_req->i.size);
+		drbd_handle_io_error(device, DRBD_WRITE_ERROR);
 	}
 
-	spin_lock_irqsave(&device->resource->req_lock, flags);
+	spin_lock_irqsave(&connection->peer_reqs_lock, flags);
 	device->writ_cnt += peer_req->i.size >> 9;
-	list_move_tail(&peer_req->w.list, &device->done_ee);
+	atomic_inc(&connection->done_ee_cnt);
+	list_add_tail(&peer_req->w.list, &connection->done_ee);
+	if (peer_req->i.type == INTERVAL_RESYNC_WRITE && peer_req->flags & EE_TRIM) {
+		LIST_HEAD(merged);
+		int merged_count;
+
+		merged_count = drbd_unmerge_discard(peer_req, &merged);
+		list_splice_tail(&merged, &connection->done_ee);
+		atomic_add(merged_count, &connection->done_ee_cnt);
+	}
+	peer_req = NULL; /* may be freed after unlock */
+	spin_unlock_irqrestore(&connection->peer_reqs_lock, flags);
 
 	/*
-	 * Do not remove from the write_requests tree here: we did not send the
-	 * Ack yet and did not wake possibly waiting conflicting requests.
-	 * Removed from the tree from "drbd_process_done_ee" within the
-	 * appropriate dw.cb (e_end_block/e_end_resync_block) or from
-	 * _drbd_clear_done_ee.
+	 * Do not remove from the requests tree here: we did not send the
+	 * Ack yet.
+	 * Removed from the tree from "drbd_finish_peer_reqs" within the
+	 * appropriate callback (e_end_block/e_end_resync_block) or from
+	 * cleanup functions if the connection is lost.
 	 */
 
-	do_wake = list_empty(block_id == ID_SYNCER ? &device->sync_ee : &device->active_ee);
-
-	/* FIXME do we want to detach for failed REQ_OP_DISCARD?
-	 * ((peer_req->flags & (EE_WAS_ERROR|EE_TRIM)) == EE_WAS_ERROR) */
-	if (peer_req->flags & EE_WAS_ERROR)
-		__drbd_chk_io_error(device, DRBD_WRITE_ERROR);
-
-	if (connection->cstate >= C_WF_REPORT_PARAMS) {
-		kref_get(&device->kref); /* put is in drbd_send_acks_wf() */
-		if (!queue_work(connection->ack_sender, &peer_device->send_acks_work))
-			kref_put(&device->kref, drbd_destroy_device);
-	}
-	spin_unlock_irqrestore(&device->resource->req_lock, flags);
+	if (connection->cstate[NOW] == C_CONNECTED)
+		queue_work(connection->ack_sender, &connection->send_acks_work);
 
-	if (block_id == ID_SYNCER)
-		drbd_rs_complete_io(device, i.sector);
+	if (type == INTERVAL_RESYNC_WRITE)
+		do_wake = atomic_dec_and_test(&connection->backing_ee_cnt);
+	else
+		do_wake = atomic_dec_and_test(&connection->active_ee_cnt);
 
 	if (do_wake)
-		wake_up(&device->ee_wait);
-
-	if (do_al_complete_io)
-		drbd_al_complete_io(device, &i);
+		wake_up(&connection->ee_wait);
 
 	put_ldev(device);
 }
@@ -175,33 +242,63 @@ void drbd_peer_request_endio(struct bio *bio)
 	bool is_write = bio_data_dir(bio) == WRITE;
 	bool is_discard = bio_op(bio) == REQ_OP_WRITE_ZEROES ||
 			  bio_op(bio) == REQ_OP_DISCARD;
+	blk_status_t status = bio->bi_status;
+	unsigned long flags;
+	struct page *page;
+	struct bio **pos;
 
-	if (bio->bi_status && drbd_ratelimit())
+	if (status && drbd_device_ratelimit(device, BACKEND))
 		drbd_warn(device, "%s: error=%d s=%llus\n",
 				is_write ? (is_discard ? "discard" : "write")
-					: "read", bio->bi_status,
+					: "read", status,
 				(unsigned long long)peer_req->i.sector);
 
-	if (bio->bi_status)
+	if (status)
 		set_bit(__EE_WAS_ERROR, &peer_req->flags);
 
-	bio_put(bio); /* no need for the bio anymore */
+	bio->bi_next = NULL; /* bi_next was used by the kernel during I/O; reinitialize */
+	/* Reset iter and restore sector and size for bio_for_each_segment(). */
+	page = bio->bi_io_vec[0].bv_page;
+	bio->bi_iter = (struct bvec_iter) {
+		.bi_sector = peer_req->i.sector + page->private,
+		.bi_size = (unsigned int)(unsigned long)page->lru.next,
+	};
+
+	spin_lock_irqsave(&device->peer_req_bio_completion_lock, flags);
+	if (bio_list_empty(&peer_req->bios)) {
+		bio_list_add(&peer_req->bios, bio);
+	} else {
+		/* Insert bio into the chain ordered by bi_sector */
+		for (pos = &peer_req->bios.head; *pos; pos = &(*pos)->bi_next) {
+			if (bio->bi_iter.bi_sector < (*pos)->bi_iter.bi_sector)
+				break;
+		}
+		bio->bi_next = *pos;
+		*pos = bio;
+		/* Update tail if we inserted at the end */
+		if (!bio->bi_next)
+			peer_req->bios.tail = bio;
+	}
+	spin_unlock_irqrestore(&device->peer_req_bio_completion_lock, flags);
+
 	if (atomic_dec_and_test(&peer_req->pending_bios)) {
 		if (is_write)
+			/* ldev_ref_transfer: ldev ref from bio submit in peer request I/O path */
 			drbd_endio_write_sec_final(peer_req);
 		else
 			drbd_endio_read_sec_final(peer_req);
 	}
 }
 
-static void
-drbd_panic_after_delayed_completion_of_aborted_request(struct drbd_device *device)
+/* Not static to increase the likelihood that it will show up in a stack trace */
+void drbd_panic_after_delayed_completion_of_aborted_request(struct drbd_device *device)
 {
 	panic("drbd%u %s/%u potential random memory corruption caused by delayed completion of aborted local request\n",
 		device->minor, device->resource->name, device->vnr);
 }
 
-/* read, readA or write requests on R_PRIMARY coming from drbd_make_request
+/* read, readA or write requests on R_PRIMARY coming from drbd_submit_bio
  */
 void drbd_request_endio(struct bio *bio)
 {
@@ -211,6 +308,8 @@ void drbd_request_endio(struct bio *bio)
 	struct bio_and_error m;
 	enum drbd_req_event what;
 
+	blk_status_t status = bio->bi_status;
+
 	/* If this request was aborted locally before,
 	 * but now was completed "successfully",
 	 * chances are that this caused arbitrary data corruption.
@@ -221,7 +320,7 @@ void drbd_request_endio(struct bio *bio)
 	 * situation, usually a hard-reset and failover is the only way out.
 	 *
 	 * By "aborting", basically faking a local error-completion,
-	 * we allow for a more graceful swichover by cleanly migrating services.
+	 * we allow for a more graceful switchover by cleanly migrating services.
 	 * Still the affected node has to be rebooted "soon".
 	 *
 	 * By completing these requests, we allow the upper layers to re-use
@@ -239,89 +338,198 @@ void drbd_request_endio(struct bio *bio)
 	 * We assume that a delayed *error* completion is OK,
 	 * though we still will complain noisily about it.
 	 */
-	if (unlikely(req->rq_state & RQ_LOCAL_ABORTED)) {
-		if (drbd_ratelimit())
+	if (unlikely(req->local_rq_state & RQ_LOCAL_ABORTED)) {
+		if (drbd_device_ratelimit(device, BACKEND))
 			drbd_emerg(device, "delayed completion of aborted local request; disk-timeout may be too aggressive\n");
 
-		if (!bio->bi_status)
+		if (!status)
 			drbd_panic_after_delayed_completion_of_aborted_request(device);
 	}
 
 	/* to avoid recursion in __req_mod */
-	if (unlikely(bio->bi_status)) {
-		switch (bio_op(bio)) {
-		case REQ_OP_WRITE_ZEROES:
-		case REQ_OP_DISCARD:
-			if (bio->bi_status == BLK_STS_NOTSUPP)
+	if (unlikely(status)) {
+		enum req_op op = bio_op(bio);
+		if (op == REQ_OP_DISCARD || op == REQ_OP_WRITE_ZEROES) {
+			if (status == BLK_STS_NOTSUPP)
 				what = DISCARD_COMPLETED_NOTSUPP;
 			else
 				what = DISCARD_COMPLETED_WITH_ERROR;
-			break;
-		case REQ_OP_READ:
+		} else if (op == REQ_OP_READ) {
 			if (bio->bi_opf & REQ_RAHEAD)
 				what = READ_AHEAD_COMPLETED_WITH_ERROR;
 			else
 				what = READ_COMPLETED_WITH_ERROR;
-			break;
-		default:
+		} else {
 			what = WRITE_COMPLETED_WITH_ERROR;
-			break;
 		}
 	} else {
 		what = COMPLETED_OK;
 	}
 
-	req->private_bio = ERR_PTR(blk_status_to_errno(bio->bi_status));
-	bio_put(bio);
+	bio_put(req->private_bio);
+	req->private_bio = ERR_PTR(blk_status_to_errno(status));
+
+	/* It is legal to fail read-ahead; no drbd_handle_io_error for READ_AHEAD_COMPLETED_WITH_ERROR */
+	if (what == WRITE_COMPLETED_WITH_ERROR)
+		drbd_handle_io_error(device, DRBD_WRITE_ERROR);
+	else if (what == READ_COMPLETED_WITH_ERROR)
+		drbd_handle_io_error(device, DRBD_READ_ERROR);
+
+	spin_lock_irqsave(&device->interval_lock, flags);
+	set_bit(INTERVAL_BACKING_COMPLETED, &req->i.flags);
+	if (req->local_rq_state & RQ_WRITE)
+		drbd_release_conflicts(device, &req->i);
+	spin_unlock_irqrestore(&device->interval_lock, flags);
 
 	/* not req_mod(), we need irqsave here! */
-	spin_lock_irqsave(&device->resource->req_lock, flags);
+	read_lock_irqsave(&device->resource->state_rwlock, flags);
+	/* ldev_safe: bio endio, ldev ref held since drbd_request_prepare(), put_ldev() follows */
 	__req_mod(req, what, NULL, &m);
-	spin_unlock_irqrestore(&device->resource->req_lock, flags);
+	read_unlock_irqrestore(&device->resource->state_rwlock, flags);
 	put_ldev(device);
 
 	if (m.bio)
 		complete_master_bio(device, &m);
 }
 
-void drbd_csum_ee(struct crypto_shash *tfm, struct drbd_peer_request *peer_req, void *digest)
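+/*
+ * Result of looking up the dagtag to attach to a resync or online
+ * verify request: the node id of the current primary and the latest
+ * dagtag known from it. err is set when multiple remote primaries
+ * were found.
+ */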
+struct dagtag_find_result {
+	int err;
+	unsigned int node_id;
+	u64 dagtag;
+};
+
+static struct dagtag_find_result find_current_dagtag(struct drbd_resource *resource)
 {
-	SHASH_DESC_ON_STACK(desc, tfm);
-	struct page *page = peer_req->pages;
-	struct page *tmp;
-	unsigned len;
-	void *src;
+	struct drbd_connection *connection;
+	struct dagtag_find_result ret = { 0 };
+
+	read_lock_irq(&resource->state_rwlock);
+
+	if (resource->role[NOW] == R_PRIMARY) {
+		/* Sending data and sending resync requests are not
+		 * synchronized with each other, so our peer may need to wait
+		 * until it has received more data before it can reply to this
+		 * request. */
+		ret.node_id = resource->res_opts.node_id;
+		ret.dagtag = resource->dagtag_sector;
+	} else {
+		for_each_connection(connection, resource) {
+			if (connection->peer_role[NOW] != R_PRIMARY)
+				continue;
 
-	desc->tfm = tfm;
+			/* Do not depend on a stale dagtag. */
+			if (!test_bit(RECEIVED_DAGTAG, &connection->flags))
+				continue;
 
-	crypto_shash_init(desc);
+			if (ret.dagtag) {
+				if (drbd_ratelimit())
+					drbd_err(resource, "Refusing to resync due to multiple remote primaries\n");
+				ret.err = 1;
+				break;
+			} else {
+				ret.node_id = connection->peer_node_id;
+				ret.dagtag = atomic64_read(&connection->last_dagtag_sector);
+			}
+		}
+	}
 
-	src = kmap_atomic(page);
-	while ((tmp = page_chain_next(page))) {
-		/* all but the last page will be fully used */
-		crypto_shash_update(desc, src, PAGE_SIZE);
-		kunmap_atomic(src);
-		page = tmp;
-		src = kmap_atomic(page);
+	read_unlock_irq(&resource->state_rwlock);
+
+	return ret;
+}
+
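+/* Send one resync request to the peer: a checksum-based request if
+ * w_e_send_csum attached a digest, otherwise a plain data (or thin)
+ * request. The current dagtag is included when the peer supports
+ * DRBD_FF_RESYNC_DAGTAG. */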
+static void send_resync_request(struct drbd_peer_request *peer_req)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_connection *connection = peer_device->connection;
+	struct dagtag_find_result dagtag_result;
+
+	if (!(connection->agreed_features & DRBD_FF_RESYNC_DAGTAG) &&
+			drbd_al_active(peer_device->device, peer_req->i.sector, peer_req->i.size)) {
+		dynamic_drbd_dbg(peer_device,
+				"Abort resync request at %llus+%u due to activity\n",
+				(unsigned long long) peer_req->i.sector, peer_req->i.size);
+
+		drbd_unsuccessful_resync_request(peer_req, false);
+		return;
 	}
-	/* and now the last, possibly only partially used page */
-	len = peer_req->i.size & (PAGE_SIZE - 1);
-	crypto_shash_update(desc, src, len ?: PAGE_SIZE);
-	kunmap_atomic(src);
 
-	crypto_shash_final(desc, digest);
-	shash_desc_zero(desc);
+	inc_rs_pending(peer_device);
+
+	dagtag_result = find_current_dagtag(peer_device->device->resource);
+	if (dagtag_result.err) {
+		change_cstate(peer_device->connection, C_DISCONNECTING, CS_HARD);
+		return;
+	}
+
+	if (peer_req->flags & EE_HAS_DIGEST) {
+		enum drbd_packet cmd = connection->agreed_features & DRBD_FF_RESYNC_DAGTAG ?
+			P_RS_CSUM_DAGTAG_REQ : P_CSUM_RS_REQUEST;
+
+		void *digest = drbd_prepare_drequest_csum(peer_req, cmd,
+				peer_req->digest->digest_size,
+				dagtag_result.node_id, dagtag_result.dagtag);
+		if (!digest)
+			return;
+
+		memcpy(digest, peer_req->digest->digest, peer_req->digest->digest_size);
+
+		/* We are now finished with the digest, so we can free it.
+		 * If we don't, the reference will be lost when the block_id
+		 * field of the union is used for the reply. */
+		peer_req->flags &= ~EE_HAS_DIGEST;
+		kfree(peer_req->digest);
+		peer_req->digest = NULL;
+
+		drbd_send_command(peer_device, cmd, DATA_STREAM);
+	} else {
+		enum drbd_packet cmd;
+		if (connection->agreed_features & DRBD_FF_RESYNC_DAGTAG)
+			cmd = peer_req->flags & EE_RS_THIN_REQ ? P_RS_THIN_DAGTAG_REQ : P_RS_DAGTAG_REQ;
+		else
+			cmd = peer_req->flags & EE_RS_THIN_REQ ? P_RS_THIN_REQ : P_RS_DATA_REQUEST;
+
+		drbd_send_rs_request(peer_device, cmd,
+				peer_req->i.sector, peer_req->i.size, peer_req->block_id,
+				dagtag_result.node_id, dagtag_result.dagtag);
+	}
 }
 
-void drbd_csum_bio(struct crypto_shash *tfm, struct bio *bio, void *digest)
+void drbd_conflict_send_resync_request(struct drbd_peer_request *peer_req)
 {
-	SHASH_DESC_ON_STACK(desc, tfm);
-	struct bio_vec bvec;
-	struct bvec_iter iter;
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_connection *connection = peer_req->peer_device->connection;
+	struct drbd_device *device = peer_device->device;
+	bool conflict;
+	bool canceled;
+
+	spin_lock_irq(&device->interval_lock);
+	clear_bit(INTERVAL_SUBMIT_CONFLICT_QUEUED, &peer_req->i.flags);
+	canceled = test_bit(INTERVAL_CANCELED, &peer_req->i.flags);
+	conflict = drbd_find_conflict(device, &peer_req->i, CONFLICT_FLAG_IGNORE_SAME_PEER);
+	if (drbd_interval_empty(&peer_req->i))
+		drbd_insert_interval(&device->requests, &peer_req->i);
+	if (!conflict)
+		set_bit(INTERVAL_READY_TO_SEND, &peer_req->i.flags);
+	spin_unlock_irq(&device->interval_lock);
+
+	if (!conflict) {
+		send_resync_request(peer_req);
+	} else if (canceled) {
+		drbd_remove_peer_req_interval(peer_req);
+		drbd_free_peer_req(peer_req);
+	}
 
-	desc->tfm = tfm;
+	if ((!conflict || canceled) && atomic_dec_and_test(&connection->backing_ee_cnt))
+		wake_up(&connection->ee_wait);
+}
 
-	crypto_shash_init(desc);
+static void __drbd_csum_bio(struct bio *bio, struct shash_desc *desc)
+{
+	struct bio_vec bvec;
+	struct bvec_iter iter;
 
 	bio_for_each_segment(bvec, bio, iter) {
 		u8 *src;
@@ -330,6 +538,33 @@ void drbd_csum_bio(struct crypto_shash *tfm, struct bio *bio, void *digest)
 		crypto_shash_update(desc, src, bvec.bv_len);
 		kunmap_local(src);
 	}
+}
+
+void drbd_csum_bios(struct crypto_shash *tfm, struct bio_list *bios, void *digest)
+{
+	struct bio *bio;
+	SHASH_DESC_ON_STACK(desc, tfm);
+
+	desc->tfm = tfm;
+	crypto_shash_init(desc);
+
+	bio_list_for_each(bio, bios)
+		__drbd_csum_bio(bio, desc);
+
+	crypto_shash_final(desc, digest);
+	shash_desc_zero(desc);
+}
+
+/* Checksum a single bio. This function needs to ignore bi_next, since
+ * the bio may be linked into a peer request's bio list. */
+void drbd_csum_bio(struct crypto_shash *tfm, struct bio *bio, void *digest)
+{
+	SHASH_DESC_ON_STACK(desc, tfm);
+
+	desc->tfm = tfm;
+	crypto_shash_init(desc);
+
+	__drbd_csum_bio(bio, desc);
+
 	crypto_shash_final(desc, digest);
 	shash_desc_zero(desc);
 }
@@ -339,73 +574,91 @@ static int w_e_send_csum(struct drbd_work *w, int cancel)
 {
 	struct drbd_peer_request *peer_req = container_of(w, struct drbd_peer_request, w);
 	struct drbd_peer_device *peer_device = peer_req->peer_device;
-	struct drbd_device *device = peer_device->device;
+	struct drbd_connection *connection = peer_device->connection;
 	int digest_size;
-	void *digest;
 	int err = 0;
+	struct digest_info *di;
 
 	if (unlikely(cancel))
 		goto out;
 
+	/* Do not add to interval tree if already disconnected or resync aborted */
+	if (!repl_is_sync_target(peer_device->repl_state[NOW]))
+		goto out;
+
 	if (unlikely((peer_req->flags & EE_WAS_ERROR) != 0))
 		goto out;
 
 	digest_size = crypto_shash_digestsize(peer_device->connection->csums_tfm);
-	digest = kmalloc(digest_size, GFP_NOIO);
-	if (digest) {
-		sector_t sector = peer_req->i.sector;
-		unsigned int size = peer_req->i.size;
-		drbd_csum_ee(peer_device->connection->csums_tfm, peer_req, digest);
-		/* Free peer_req and pages before send.
-		 * In case we block on congestion, we could otherwise run into
-		 * some distributed deadlock, if the other side blocks on
-		 * congestion as well, because our receiver blocks in
-		 * drbd_alloc_pages due to pp_in_use > max_buffers. */
-		drbd_free_peer_req(device, peer_req);
-		peer_req = NULL;
-		inc_rs_pending(peer_device);
-		err = drbd_send_drequest_csum(peer_device, sector, size,
-					      digest, digest_size,
-					      P_CSUM_RS_REQUEST);
-		kfree(digest);
-	} else {
-		drbd_err(device, "kmalloc() of digest failed.\n");
+
+	di = kmalloc(sizeof(*di) + digest_size, GFP_NOIO);
+	if (!di) {
 		err = -ENOMEM;
+		goto out;
 	}
 
+	di->digest_size = digest_size;
+	di->digest = (char *)di + sizeof(*di);
+
+	drbd_csum_bios(connection->csums_tfm, &peer_req->bios, di->digest);
+
+	/* Free pages before continuing.
+	 * In case we block on congestion, we could otherwise run into
+	 * some distributed deadlock, if the other side blocks on
+	 * congestion as well, because our receiver blocks in
+	 * drbd_alloc_pages due to pp_in_use > max_buffers. */
+	drbd_peer_req_strip_bio(peer_req);
+
+	/* Use the same drbd_peer_request for tracking resync request and for
+	 * writing, if that is necessary. */
+	peer_req->digest = di;
+	peer_req->flags |= EE_HAS_DIGEST;
+
+	atomic_inc(&connection->backing_ee_cnt);
+	drbd_conflict_send_resync_request(peer_req);
+	return 0;
+
 out:
-	if (peer_req)
-		drbd_free_peer_req(device, peer_req);
+	atomic_sub(peer_req->i.size >> SECTOR_SHIFT, &peer_device->device->rs_sect_ev);
+	drbd_free_peer_req(peer_req);
 
 	if (unlikely(err))
-		drbd_err(device, "drbd_send_drequest(..., csum) failed\n");
+		drbd_err(peer_device, "digest allocation failed\n");
 	return err;
 }
 
-#define GFP_TRY	(__GFP_HIGHMEM | __GFP_NOWARN)
-
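+/* Read a block from the local disk; once the read completes,
+ * w_e_send_csum checksums it and sends a checksum-based resync request
+ * to the peer. */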
 static int read_for_csum(struct drbd_peer_device *peer_device, sector_t sector, int size)
 {
+	struct drbd_connection *connection = peer_device->connection;
 	struct drbd_device *device = peer_device->device;
 	struct drbd_peer_request *peer_req;
 
 	if (!get_ldev(device))
 		return -EIO;
 
-	/* GFP_TRY, because if there is no memory available right now, this may
-	 * be rescheduled for later. It is "only" background resync, after all. */
-	peer_req = drbd_alloc_peer_req(peer_device, ID_SYNCER /* unused */, sector,
-				       size, size, GFP_TRY);
+	/* Do not wait if no memory is immediately available.  */
+	peer_req = drbd_alloc_peer_req(peer_device, GFP_TRY & ~__GFP_RECLAIM,
+				       size, REQ_OP_READ);
 	if (!peer_req)
 		goto defer;
 
+	spin_lock_irq(&connection->peer_reqs_lock);
+	list_add_tail(&peer_req->recv_order, &peer_device->resync_requests);
+	peer_req->flags |= EE_ON_RECV_ORDER;
+	spin_unlock_irq(&connection->peer_reqs_lock);
+
+	peer_req->i.size = size;
+	peer_req->i.sector = sector;
+	/* This will be a resync write once we receive the data back from the
+	 * peer, assuming the checksums differ. */
+	peer_req->i.type = INTERVAL_RESYNC_WRITE;
+	peer_req->requested_size = size;
+
 	peer_req->w.cb = w_e_send_csum;
-	peer_req->opf = REQ_OP_READ;
-	spin_lock_irq(&device->resource->req_lock);
-	list_add_tail(&peer_req->w.list, &device->read_ee);
-	spin_unlock_irq(&device->resource->req_lock);
 
+	atomic_inc(&connection->backing_ee_cnt);
 	atomic_add(size >> 9, &device->rs_sect_ev);
+	/* ldev_ref_transfer: put_ldev in peer_req endio */
 	if (drbd_submit_peer_request(peer_req) == 0)
 		return 0;
 
@@ -413,41 +666,143 @@ static int read_for_csum(struct drbd_peer_device *peer_device, sector_t sector,
 	 * because bio_add_page failed (probably broken lower level driver),
 	 * retry may or may not help.
 	 * If it does not, you may need to force disconnect. */
-	spin_lock_irq(&device->resource->req_lock);
-	list_del(&peer_req->w.list);
-	spin_unlock_irq(&device->resource->req_lock);
 
-	drbd_free_peer_req(device, peer_req);
 defer:
 	put_ldev(device);
 	return -EAGAIN;
 }
 
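+/* Queue one plain (non-checksum) resync request and hand it to the
+ * conflict resolution machinery for sending. A request that exactly
+ * matches the discard granularity is flagged as a candidate for thin
+ * resync. */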
+static int make_one_resync_request(struct drbd_peer_device *peer_device, int discard_granularity, sector_t sector, int size)
+{
+	struct drbd_device *device = peer_device->device;
+	struct drbd_connection *connection = peer_device->connection;
+	struct drbd_peer_request *peer_req;
+
+	/* Do not wait if no memory is immediately available.  */
+	peer_req = drbd_alloc_peer_req(peer_device, GFP_TRY & ~__GFP_RECLAIM,
+				       size, REQ_OP_WRITE);
+	if (!peer_req) {
+		drbd_err(device, "Could not allocate resync request\n");
+		put_ldev(device);
+		return -EAGAIN;
+	}
+
+	peer_req->i.size = size;
+	peer_req->i.sector = sector;
+	peer_req->i.type = INTERVAL_RESYNC_WRITE;
+	peer_req->requested_size = size;
+
+	if (size == discard_granularity)
+		peer_req->flags |= EE_RS_THIN_REQ;
+
+	spin_lock_irq(&connection->peer_reqs_lock);
+	list_add_tail(&peer_req->recv_order, &peer_device->resync_requests);
+	peer_req->flags |= EE_ON_RECV_ORDER;
+	spin_unlock_irq(&connection->peer_reqs_lock);
+
+	atomic_inc(&connection->backing_ee_cnt);
+	drbd_conflict_send_resync_request(peer_req);
+	return 0;
+}
+
 int w_resync_timer(struct drbd_work *w, int cancel)
 {
-	struct drbd_device *device =
-		container_of(w, struct drbd_device, resync_work);
+	struct drbd_peer_device *peer_device =
+		container_of(w, struct drbd_peer_device, resync_work);
 
-	switch (device->state.conn) {
-	case C_VERIFY_S:
-		make_ov_request(first_peer_device(device), cancel);
+	switch (peer_device->repl_state[NOW]) {
+	case L_VERIFY_S:
+		make_ov_request(peer_device, cancel);
+		break;
+	case L_SYNC_TARGET:
+		make_resync_request(peer_device, cancel);
 		break;
-	case C_SYNC_TARGET:
-		make_resync_request(first_peer_device(device), cancel);
+	default:
+		if (atomic_read(&peer_device->rs_sect_in) >= peer_device->rs_in_flight) {
+			struct drbd_resource *resource = peer_device->device->resource;
+			unsigned long irq_flags;
+			begin_state_change(resource, &irq_flags, 0);
+			peer_device->resync_active[NEW] = false;
+			end_state_change(resource, &irq_flags, "resync-inactive");
+		}
 		break;
 	}
 
 	return 0;
 }
 
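+/* Send the dagtag recorded at a state change, unless a request
+ * carrying a newer dagtag has been sent in the meantime. */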
+int w_send_dagtag(struct drbd_work *w, int cancel)
+{
+	struct drbd_connection *connection =
+		container_of(w, struct drbd_connection, send_dagtag_work);
+	struct drbd_resource *resource = connection->resource;
+	int err;
+	u64 dagtag_sector;
+
+	if (cancel)
+		return 0;
+
+	read_lock_irq(&resource->state_rwlock);
+	dagtag_sector = connection->send_dagtag;
+	/* It is OK to use the value outside the lock, because the work will be
+	 * queued again if it is changed. */
+	read_unlock_irq(&resource->state_rwlock);
+
+	/* Only send if no request with a newer dagtag has been sent. This can
+	 * occur if a write arrives after the state change and is processed
+	 * before this work item. */
+	if (dagtag_newer_eq(connection->send.current_dagtag_sector, dagtag_sector))
+		return 0;
+
+	err = drbd_send_dagtag(connection, dagtag_sector);
+	if (err)
+		return err;
+
+	connection->send.current_dagtag_sector = dagtag_sector;
+	return 0;
+}
+
+int w_send_uuids(struct drbd_work *w, int cancel)
+{
+	struct drbd_peer_device *peer_device =
+		container_of(w, struct drbd_peer_device, propagate_uuids_work);
+
+	if (peer_device->repl_state[NOW] < L_ESTABLISHED ||
+	    !test_bit(INITIAL_STATE_SENT, &peer_device->flags))
+		return 0;
+
+	drbd_send_uuids(peer_device, 0, 0);
+
+	return 0;
+}
+
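+/* Report whether any connection has flushes pending that we initiated;
+ * a resync pass must not be considered finished while this is true. */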
+bool drbd_any_flush_pending(struct drbd_resource *resource)
+{
+	unsigned long flags;
+	struct drbd_connection *primary_connection;
+	bool any_flush_pending = false;
+
+	spin_lock_irqsave(&resource->initiator_flush_lock, flags);
+	rcu_read_lock();
+	for_each_connection_rcu(primary_connection, resource) {
+		if (primary_connection->pending_flush_mask) {
+			any_flush_pending = true;
+			break;
+		}
+	}
+	rcu_read_unlock();
+	spin_unlock_irqrestore(&resource->initiator_flush_lock, flags);
+
+	return any_flush_pending;
+}
+
 void resync_timer_fn(struct timer_list *t)
 {
-	struct drbd_device *device = timer_container_of(device, t,
-							resync_timer);
+	struct drbd_peer_device *peer_device = timer_container_of(peer_device, t, resync_timer);
 
 	drbd_queue_work_if_unqueued(
-		&first_peer_device(device)->connection->sender_work,
-		&device->resync_work);
+		&peer_device->connection->sender_work,
+		&peer_device->resync_work);
 }
 
 static void fifo_set(struct fifo_buffer *fb, int value)
@@ -494,32 +849,54 @@ struct fifo_buffer *fifo_alloc(unsigned int fifo_size)
 	return fb;
 }
 
-static int drbd_rs_controller(struct drbd_peer_device *peer_device, unsigned int sect_in)
+/* FIXME by choosing to calculate in nanoseconds, we now have several do_div()
+ * calls in here, which I find very ugly.
+ */
+static int drbd_rs_controller(struct drbd_peer_device *peer_device, u64 sect_in, u64 duration_ns)
 {
-	struct drbd_device *device = peer_device->device;
-	struct disk_conf *dc;
+	const u64 max_duration_ns = RS_MAKE_REQS_INTV_NS * 10;
+	struct peer_device_conf *pdc;
 	unsigned int want;     /* The number of sectors we want in-flight */
 	int req_sect; /* Number of sectors to request in this turn */
 	int correction; /* Number of sectors more we need in-flight */
 	int cps; /* correction per invocation of drbd_rs_controller() */
 	int steps; /* Number of time steps to plan ahead */
 	int curr_corr;
-	int max_sect;
+	u64 max_sect;
 	struct fifo_buffer *plan;
+	u64 duration_ms;
+
+	if (duration_ns == 0)
+		duration_ns = 1;
+	else if (duration_ns > max_duration_ns)
+		duration_ns = max_duration_ns;
+
+	if (duration_ns < RS_MAKE_REQS_INTV_NS) {
+		/* Scale sect_in so that it represents the number of sectors which
+		 * would have arrived if the cycle had lasted the normal time
+		 * (RS_MAKE_REQS_INTV). */
+		sect_in = sect_in * RS_MAKE_REQS_INTV_NS;
+		do_div(sect_in, duration_ns);
+	}
 
-	dc = rcu_dereference(device->ldev->disk_conf);
-	plan = rcu_dereference(device->rs_plan_s);
+	pdc = rcu_dereference(peer_device->conf);
+	plan = rcu_dereference(peer_device->rs_plan_s);
 
-	steps = plan->size; /* (dc->c_plan_ahead * 10 * SLEEP_TIME) / HZ; */
+	steps = plan->size; /* (pdc->c_plan_ahead * 10 * RS_MAKE_REQS_INTV) / HZ; */
 
-	if (device->rs_in_flight + sect_in == 0) { /* At start of resync */
-		want = ((dc->resync_rate * 2 * SLEEP_TIME) / HZ) * steps;
+	if (peer_device->rs_in_flight + sect_in == 0) { /* At start of resync */
+		want = ((pdc->resync_rate * 2 * RS_MAKE_REQS_INTV) / HZ) * steps;
 	} else { /* normal path */
-		want = dc->c_fill_target ? dc->c_fill_target :
-			sect_in * dc->c_delay_target * HZ / (SLEEP_TIME * 10);
+		if (pdc->c_fill_target) {
+			want = pdc->c_fill_target;
+		} else {
+			u64 tmp = sect_in * pdc->c_delay_target * NSEC_PER_SEC;
+			do_div(tmp, (duration_ns * 10));
+			want = tmp;
+		}
 	}
 
-	correction = want - device->rs_in_flight - plan->total;
+	correction = want - peer_device->rs_in_flight - plan->total;
 
 	/* Plan ahead */
 	cps = correction / steps;
@@ -534,36 +911,62 @@ static int drbd_rs_controller(struct drbd_peer_device *peer_device, unsigned int
 	if (req_sect < 0)
 		req_sect = 0;
 
-	max_sect = (dc->c_max_rate * 2 * SLEEP_TIME) / HZ;
+	if (pdc->c_max_rate == 0) {
+		/* No rate limiting. */
+		max_sect = ~0ULL;
+	} else {
+		max_sect = (u64)pdc->c_max_rate * 2 * RS_MAKE_REQS_INTV_NS;
+		do_div(max_sect, NSEC_PER_SEC);
+	}
+
+	duration_ms = duration_ns;
+	do_div(duration_ms, NSEC_PER_MSEC);
+	dynamic_drbd_dbg(peer_device, "dur=%lluns (%llums) sect_in=%llu in_flight=%d wa=%u co=%d st=%d cps=%d cc=%d rs=%d mx=%llu\n",
+		 duration_ns, duration_ms, sect_in, peer_device->rs_in_flight, want, correction,
+		 steps, cps, curr_corr, req_sect, max_sect);
+
 	if (req_sect > max_sect)
 		req_sect = max_sect;
 
-	/*
-	drbd_warn(device, "si=%u if=%d wa=%u co=%d st=%d cps=%d pl=%d cc=%d rs=%d\n",
-		 sect_in, device->rs_in_flight, want, correction,
-		 steps, cps, device->rs_planed, curr_corr, req_sect);
-	*/
-
 	return req_sect;
 }
 
+/* Calculate how many 4k sized blocks we want to resync this time.
+ * Because peer nodes may have different bitmap granularity, and won't
+ * be able to clear "partial bits", make sure we try to request
+ * multiples of the bitmap block size (of myself or the peer, whichever
+ * is larger) in one go. Return value is scaled to our bm_block_size.
+ * If both peers operate at 4k granularity, or at identical granularity,
+ * this should not change behavior.
+ * If we operate at different granularities, we may need to improve our
+ * drbd_rs_controller() as well to get intended resync rates,
+ * especially if you configure for rather small c-max-rate.
+ */
 static int drbd_rs_number_requests(struct drbd_peer_device *peer_device)
 {
-	struct drbd_device *device = peer_device->device;
+	struct net_conf *nc;
+	ktime_t duration, now;
 	unsigned int sect_in;  /* Number of sectors that came in since the last turn */
 	int number, mxb;
+	int effective_resync_request_size;
+	struct drbd_bitmap *bm = peer_device->device->bitmap;
+
+	sect_in = atomic_xchg(&peer_device->rs_sect_in, 0);
+	peer_device->rs_in_flight -= sect_in;
 
-	sect_in = atomic_xchg(&device->rs_sect_in, 0);
-	device->rs_in_flight -= sect_in;
+	now = ktime_get();
+	duration = ktime_sub(now, peer_device->rs_last_mk_req_kt);
+	peer_device->rs_last_mk_req_kt = now;
 
 	rcu_read_lock();
-	mxb = drbd_get_max_buffers(device) / 2;
-	if (rcu_dereference(device->rs_plan_s)->size) {
-		number = drbd_rs_controller(peer_device, sect_in) >> (BM_BLOCK_SHIFT - 9);
-		device->c_sync_rate = number * HZ * (BM_BLOCK_SIZE / 1024) / SLEEP_TIME;
+	nc = rcu_dereference(peer_device->connection->transport.net_conf);
+	mxb = nc ? nc->max_buffers : 0;
+	if (rcu_dereference(peer_device->rs_plan_s)->size) {
+		number = drbd_rs_controller(peer_device, sect_in, ktime_to_ns(duration));
+		number = sect_to_bit(number, BM_BLOCK_SHIFT_4k);
 	} else {
-		device->c_sync_rate = rcu_dereference(device->ldev->disk_conf)->resync_rate;
-		number = SLEEP_TIME * device->c_sync_rate  / ((BM_BLOCK_SIZE / 1024) * HZ);
+		number = RS_MAKE_REQS_INTV * rcu_dereference(peer_device->conf)->resync_rate
+			/ ((BM_BLOCK_SIZE_4k/1024) * HZ);
 	}
 	rcu_read_unlock();
 
@@ -571,201 +974,603 @@ static int drbd_rs_number_requests(struct drbd_peer_device *peer_device)
 	 * Otherwise we may cause the remote site to stall on drbd_alloc_pages(),
 	 * potentially causing a distributed deadlock on congestion during
 	 * online-verify or (checksum-based) resync, if max-buffers,
-	 * socket buffer sizes and resync rate settings are mis-configured. */
-
-	/* note that "number" is in units of "BM_BLOCK_SIZE" (which is 4k),
+	 * socket buffer sizes and resync rate settings are mis-configured.
+	 * Note that "number" is in units of "bm_bytes_per_bit",
 	 * mxb (as used here, and in drbd_alloc_pages on the peer) is
-	 * "number of pages" (typically also 4k),
-	 * but "rs_in_flight" is in "sectors" (512 Byte). */
-	if (mxb - device->rs_in_flight/8 < number)
-		number = mxb - device->rs_in_flight/8;
+	 * "number of pages" (typically 4k), and "rs_in_flight" is in "sectors"
+	 * (512 Byte). Convert everything to sectors and back.
+	 */
+	{
+		int mxb_sect = mxb << (PAGE_SHIFT - 9);
+		int num_sect = bit_to_sect(number, BM_BLOCK_SHIFT_4k);
 
-	return number;
+		if (mxb_sect - peer_device->rs_in_flight < num_sect) {
+			num_sect = mxb_sect - peer_device->rs_in_flight;
+			number = sect_to_bit(num_sect, BM_BLOCK_SHIFT_4k);
+		}
+	}
+
+	/* BM_BLOCK_SIZE_MAX/BM_BLOCK_SIZE_4k? Maybe. But do not round up unless we have to. */
+	effective_resync_request_size =
+		1 << (max(bm->bm_block_shift, peer_device->bm_block_shift) - BM_BLOCK_SHIFT_4k);
+	number = ALIGN(number, effective_resync_request_size);
+	peer_device->c_sync_rate = number * HZ * (BM_BLOCK_SIZE_4k/1024) / RS_MAKE_REQS_INTV;
+	return number >> (bm->bm_block_shift - BM_BLOCK_SHIFT_4k);
 }
 
-static int make_resync_request(struct drbd_peer_device *const peer_device, int cancel)
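+/* Delay until the next make_resync_request() invocation: the standard
+ * interval, scaled down when fewer requests than planned were issued. */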
+static int resync_delay(bool request_ok, int number, int done)
 {
-	struct drbd_device *const device = peer_device->device;
-	struct drbd_connection *const connection = peer_device ? peer_device->connection : NULL;
-	unsigned long bit;
-	sector_t sector;
-	const sector_t capacity = get_capacity(device->vdisk);
-	int max_bio_size;
-	int number, rollback_i, size;
-	int align, requeue = 0;
-	int i = 0;
-	int discard_granularity = 0;
+	if (request_ok && number > 0 && done > 0) {
+		/* Requests are in flight. Adjust the standard delay to
+		 * mitigate rounding and other errors that cause 'done'
+		 * to differ from the optimal 'number' (the result is
+		 * usually in the range of 66ms to 133ms). */
+		return RS_MAKE_REQS_INTV * done / number;
+	}
 
-	if (unlikely(cancel))
-		return 0;
+	return RS_MAKE_REQS_INTV;
+}
 
-	if (device->rs_total == 0) {
-		/* empty resync? */
-		drbd_resync_finished(peer_device);
-		return 0;
-	}
+void drbd_rs_all_in_flight_came_back(struct drbd_peer_device *peer_device, int rs_sect_in)
+{
+	unsigned int max_bio_size_kb = DRBD_MAX_BIO_SIZE / 1024;
+	struct drbd_device *device = peer_device->device;
+	unsigned int c_max_rate, interval, latency, m, amount_kb;
+	unsigned int rs_kib_in = rs_sect_in / 2;
+	ktime_t latency_kt;
+	bool kickstart;
 
-	if (!get_ldev(device)) {
-		/* Since we only need to access device->rsync a
-		   get_ldev_if_state(device,D_FAILED) would be sufficient, but
-		   to continue resync with a broken disk makes no sense at
-		   all */
-		drbd_err(device, "Disk broke down during resync!\n");
-		return 0;
+	if (get_ldev(device)) {
+		max_bio_size_kb = queue_max_hw_sectors(device->rq_queue) / 2;
+		put_ldev(device);
 	}
 
-	if (connection->agreed_features & DRBD_FF_THIN_RESYNC) {
-		rcu_read_lock();
-		discard_granularity = rcu_dereference(device->ldev->disk_conf)->rs_discard_granularity;
-		rcu_read_unlock();
-	}
+	rcu_read_lock();
+	c_max_rate = rcu_dereference(peer_device->conf)->c_max_rate;
+	rcu_read_unlock();
 
-	max_bio_size = queue_max_hw_sectors(device->rq_queue) << 9;
-	number = drbd_rs_number_requests(peer_device);
-	if (number <= 0)
-		goto requeue;
+	latency_kt = ktime_sub(ktime_get(), peer_device->rs_last_mk_req_kt);
+	latency = nsecs_to_jiffies(ktime_to_ns(latency_kt));
 
-	for (i = 0; i < number; i++) {
-		/* Stop generating RS requests when half of the send buffer is filled,
-		 * but notify TCP that we'd like to have more space. */
-		mutex_lock(&connection->data.mutex);
-		if (connection->data.socket) {
-			struct sock *sk = connection->data.socket->sk;
-			int queued = sk->sk_wmem_queued;
-			int sndbuf = sk->sk_sndbuf;
-			if (queued > sndbuf / 2) {
-				requeue = 1;
-				if (sk->sk_socket)
-					set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
-			}
-		} else
-			requeue = 1;
-		mutex_unlock(&connection->data.mutex);
-		if (requeue)
-			goto requeue;
+	/* rs_kib_in == 0 is not expected here, but avoid a division by zero */
+	m = rs_kib_in && max_bio_size_kb > rs_kib_in ?
+		max_bio_size_kb / rs_kib_in : 1;
+	if (c_max_rate != 0)
+		interval = rs_kib_in * m * HZ / c_max_rate;
+	else
+		interval = 0;
+	/* interval holds the ideal pace at which we should request max_bio_size */
+
+	if (peer_device->repl_state[NOW] == L_SYNC_TARGET) {
+		/* Only run resync_work early if we are definitely making
+		 * progress. Otherwise we might continually lock a resync
+		 * extent even when all the requests are canceled. This can
+		 * cause application IO to be blocked for an indefinitely long
+		 * time. */
+		if (test_bit(RS_REQUEST_UNSUCCESSFUL, &peer_device->flags))
+			return;
+	}
 
-next_sector:
-		size = BM_BLOCK_SIZE;
-		bit  = drbd_bm_find_next(device, device->bm_resync_fo);
+	amount_kb = c_max_rate / (HZ / RS_MAKE_REQS_INTV);
+	kickstart = rs_kib_in < amount_kb / 2 && latency < RS_MAKE_REQS_INTV / 2;
+	/* In case the latency of the link and remote IO subsystem is small
+	   and the controller was clearly issuing too few requests,
+	   kickstart it by scheduling it immediately */
 
-		if (bit == DRBD_END_OF_BITMAP) {
-			device->bm_resync_fo = drbd_bm_bits(device);
-			put_ldev(device);
-			return 0;
-		}
+	if (kickstart || interval <= latency) {
+		drbd_queue_work_if_unqueued(
+			&peer_device->connection->sender_work,
+			&peer_device->resync_work);
+		return;
+	}
+
+	if (interval < RS_MAKE_REQS_INTV)
+		mod_timer(&peer_device->resync_timer, jiffies + (interval - latency));
+}
 
-		sector = BM_BIT_TO_SECT(bit);
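+/* Re-enable replication towards all peers once resync passes with
+ * replication disabled should no longer continue. */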
+static void drbd_enable_peer_replication(struct drbd_device *device)
+{
+	struct drbd_resource *resource = device->resource;
+	unsigned long irq_flags;
+	struct drbd_peer_device *peer_device;
 
-		if (drbd_try_rs_begin_io(peer_device, sector)) {
-			device->bm_resync_fo = bit;
-			goto requeue;
+	begin_state_change(resource, &irq_flags, CS_VERBOSE);
+	for_each_peer_device(peer_device, device)
+		peer_device->peer_replication[NEW] = true;
+	end_state_change(resource, &irq_flags, "enable-peer-replication");
+}
+
+/* Returns whether whole resync is finished. */
+static bool drbd_resync_check_finished(struct drbd_peer_device *peer_device)
+{
+	struct drbd_connection *connection = peer_device->connection;
+	struct drbd_device *device = peer_device->device;
+	struct drbd_resource *resource = connection->resource;
+	bool resync_requests_complete;
+	unsigned long bitmap_weight;
+	unsigned long last_resync_pass_bits;
+	bool peer_replication;
+
+	/* Test whether resync pass finished */
+	if (drbd_bm_find_next(peer_device, peer_device->resync_next_bit) < DRBD_END_OF_BITMAP)
+		return false;
+
+	if (drbd_any_flush_pending(resource))
+		return false;
+
+	spin_lock_irq(&connection->peer_reqs_lock);
+	resync_requests_complete = list_empty(&peer_device->resync_requests);
+	spin_unlock_irq(&connection->peer_reqs_lock);
+
+	if (!resync_requests_complete)
+		return false;
+
+	last_resync_pass_bits = peer_device->last_resync_pass_bits;
+	bitmap_weight = drbd_bm_total_weight(peer_device);
+	peer_device->last_resync_pass_bits = bitmap_weight;
+
+	peer_replication = drbd_all_peer_replication(device, NOW);
+	dynamic_drbd_dbg(peer_device, "Resync pass complete last:%lu out-of-sync:%lu failed:%lu replication:%s\n",
+			last_resync_pass_bits, bitmap_weight, peer_device->rs_failed,
+			peer_replication ? "enabled" : "disabled");
+
+	if (!peer_replication) {
+		if (peer_device->rs_failed == 0 && bitmap_weight > 0 &&
+				bitmap_weight < last_resync_pass_bits / 2) {
+			/* Start next pass with replication still disabled */
+			peer_device->resync_next_bit = 0;
+			return false;
 		}
-		device->bm_resync_fo = bit + 1;
 
-		if (unlikely(drbd_bm_test_bit(device, bit) == 0)) {
-			drbd_rs_complete_io(device, sector);
-			goto next_sector;
+		drbd_enable_peer_replication(device);
+		return false;
+	}
+
+	if (peer_device->rs_failed == 0 && bitmap_weight > 0) {
+		/* Start next pass. Replication is enabled. */
+		peer_device->resync_next_bit = 0;
+		return false;
+	}
+
+	drbd_resync_finished(peer_device, D_MASK);
+	return true;
+}
+
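+/* Stop generating resync requests when more than half of the send
+ * buffer is filled, but ask the transport to notify us when space
+ * becomes available again. */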
+static bool send_buffer_half_full(struct drbd_peer_device *peer_device)
+{
+	struct drbd_connection *connection = peer_device->connection;
+	struct drbd_transport *transport = &connection->transport;
+	bool half_full = false;
+
+	mutex_lock(&connection->mutex[DATA_STREAM]);
+	if (transport->class->ops.stream_ok(transport, DATA_STREAM)) {
+		struct drbd_transport_stats transport_stats;
+		int queued, sndbuf;
+
+		transport->class->ops.stats(transport, &transport_stats);
+		queued = transport_stats.send_buffer_used;
+		sndbuf = transport_stats.send_buffer_size;
+		if (queued > sndbuf / 2) {
+			half_full = true;
+			transport->class->ops.hint(transport, DATA_STREAM, NOSPACE);
 		}
+	} else {
+		half_full = true;
+	}
+	mutex_unlock(&connection->mutex[DATA_STREAM]);
 
-#if DRBD_MAX_BIO_SIZE > BM_BLOCK_SIZE
-		/* try to find some adjacent bits.
-		 * we stop if we have already the maximum req size.
-		 *
-		 * Additionally always align bigger requests, in order to
-		 * be prepared for all stripe sizes of software RAIDs.
+	return half_full;
+}
+
+static int optimal_bits_for_alignment(unsigned long bit, int bm_block_shift)
+{
+	int max_bio_bits = DRBD_MAX_BIO_SIZE >> bm_block_shift;
+
+	/* Under the assumption that we find a big block of out-of-sync blocks
+	   in the bitmap, calculate the optimal request size so that the
+	   request sizes get bigger and each request is "perfectly" aligned
+	   (in case the backing device is a RAID5): return the largest power
+	   of two that divides the bit offset, capped at max_bio_bits.
+	   For an odd offset this is 1; for an offset divisible by 2 but not
+	   by 4 it is 2; and so on. E.g. for offset 3 it returns 1, so that
+	   the next request can start at a multiple of 4.
+	*/
+
+	/* Only consider the lower order bits up to the size of max_bio_bits.
+	 * This prevents overflows when converting to int. */
+	bit = bit & (max_bio_bits - 1);
+
+	if (bit == 0)
+		return max_bio_bits;
+
+	return 1 << __ffs(bit);
+}
+
+static int round_to_powerof_2(int value)
+{
+	int l2, smaller, bigger;
+
+	/* Check for 0 first: fls(0) - 1 would lead to an undefined shift. */
+	if (value == 0)
+		return 0;
+
+	l2 = fls(value) - 1;
+	smaller = 1 << l2;
+	bigger = smaller << 1;
+
+	return value - smaller < bigger - value ? smaller : bigger;
+}
+
+static bool adjacent(sector_t sector1, int size, sector_t sector2)
+{
+	return sector1 + (size >> SECTOR_SHIFT) == sector2;
+}
+
+/* make_resync_request() - initiate resync requests as required
+ *
+ * Request handling flow:
+ *
+ *                     checksum resync
+ * make_resync_request --------+
+ *       |                     v
+ *       |               read_for_csum
+ *       |                     |
+ *       |                     v
+ *       |          drbd_submit_peer_request
+ *       |                     |
+ *       |                    ... backing device
+ *       |                     |
+ *       |                     v
+ *       |           drbd_peer_request_endio
+ *       |                     |
+ *       |                     v
+ *       |          drbd_endio_read_sec_final
+ *       |                     |
+ *       V                    ... sender_work
+ * make_one_resync_request     |
+ *       |                     v
+ *       +---------------- w_e_send_csum
+ *       |
+ *       v                             conflict
+ * drbd_conflict_send_resync_request -------+
+ *       |                ^                 |
+ *       |                |                ...
+ *       |                |                 |
+ *       |                |                 v
+ *       v                +---- drbd_do_submit_conflict
+ * send_resync_request
+ *       |
+ *      ... via peer
+ *       |
+ *       +----------------------------+
+ *       |                            |
+ *       v                            v
+ * receive_RSDataReply      receive_rs_deallocated
+ *       |                            |
+ *       |                           ... using list resync_requests
+ *       |                            |
+ *       v                            v
+ * recv_resync_read        drbd_process_rs_discards
+ *       |                            |
+ *       |                            v
+ *       +----------------- drbd_submit_rs_discard
+ *       |
+ *       v                             conflict
+ * drbd_conflict_submit_resync_request -----+
+ *       |                ^                 |
+ *       |                |                ...
+ *       |                |                 |
+ *       |                |                 v
+ *       v                +---- drbd_do_submit_conflict
+ * drbd_submit_peer_request
+ *       |
+ *      ... backing device
+ *       |
+ *       v
+ * drbd_peer_request_endio
+ *       |
+ *       v
+ * drbd_endio_write_sec_final
+ *       |
+ *      ... done_ee
+ *       |
+ *       v
+ * drbd_finish_peer_reqs
+ *       |
+ *       v
+ * e_end_resync_block
+ *       |
+ *       v
+ * drbd_resync_request_complete
+ */
+static int make_resync_request(struct drbd_peer_device *peer_device, int cancel)
+{
+	int optimal_bits_alignment, optimal_bits_rate, discard_granularity = 0;
+	int number = 0, rollback_i, size = 0, i = 0, optimal_bits;
+	struct drbd_connection *connection = peer_device->connection;
+	struct drbd_device *device = peer_device->device;
+	struct drbd_bitmap *bm;
+	const sector_t capacity = get_capacity(device->vdisk);
+	bool request_ok = true;
+	unsigned long bit;
+	sector_t sector, prev_sector = 0;
+	unsigned int peer_bm_block_shift = peer_device->bm_block_shift;
+	unsigned int bm_block_shift, bits_per_peer_bit;
+
+	if (unlikely(cancel))
+		return 0;
+
+	if (test_bit(SYNC_TARGET_TO_BEHIND, &peer_device->flags)) {
+		/* If a P_RS_CANCEL_AHEAD on control socket overtook the
+		 * already queued data and state change to Ahead/Behind,
+		 * don't add more resync requests, just wait it out. */
+		drbd_info_ratelimit(peer_device, "peer pulled ahead during resync\n");
+		return 0;
+	}
+	if (!get_ldev(device)) {
+		/* Since we only need to access device->rsync, a
+		   get_ldev_if_state(device, D_FAILED) would be sufficient;
+		   but continuing resync with a broken disk makes no sense
+		   at all */
+		drbd_err(device, "Disk broke down during resync!\n");
+		return 0;
+	}
+	bm = device->bitmap;
+
+	if (drbd_resync_check_finished(peer_device))
+		goto out_put_ldev;
+
+	if (send_buffer_half_full(peer_device)) {
+		/* We still want to reschedule ourselves, so do not return. */
+		goto skip_request;
+	}
+
+	if (connection->agreed_features & DRBD_FF_THIN_RESYNC) {
+		rcu_read_lock();
+		discard_granularity = rcu_dereference(device->ldev->disk_conf)->rs_discard_granularity;
+		rcu_read_unlock();
+	}
+
+	bm_block_shift = bm->bm_block_shift;
+	number = drbd_rs_number_requests(peer_device);
+	if (number < discard_granularity >> bm_block_shift)
+		number = discard_granularity >> bm_block_shift;
+
+	/*
+	 * Drain resync requests when we jump back to avoid conflicts that are
+	 * resolved in an arbitrary order, leading to an unexpected ordering of
+	 * requests being completed.
+	 */
+	if (test_bit(RS_REQUEST_UNSUCCESSFUL, &peer_device->flags) &&
+			peer_device->rs_in_flight > 0) {
+		/*
+		 * The rs_in_flight counter does not include discards waiting
+		 * to be merged. Hence we may jump back while there are
+		 * discards waiting to be merged. In this situation, we may
+		 * make a resync request that conflicts with a discard. Allow
+		 * the discard to be merged here so that the conflict is
+		 * resolved.
 		 */
-		align = 1;
-		rollback_i = i;
-		while (i < number) {
-			if (size + BM_BLOCK_SIZE > max_bio_size)
-				break;
+		drbd_process_rs_discards(peer_device, false);
+		goto skip_request;
+	}
 
-			/* Be always aligned */
-			if (sector & ((1<<(align+3))-1))
-				break;
+	/* don't let rs_sectors_came_in() re-schedule us "early"
+	 * just because the first reply came "fast", ... */
+	peer_device->rs_in_flight += bm_bit_to_sect(device->bitmap, number);
+
+	bits_per_peer_bit = peer_bm_block_shift > bm_block_shift ?
+		1 << (peer_bm_block_shift - bm_block_shift) : 1;
+
+	clear_bit(RS_REQUEST_UNSUCCESSFUL, &peer_device->flags);
+	for (; i < number; i += bits_per_peer_bit) {
+		int err;
+
+		/* If we are aborting the requests or the peer is canceling
+		 * them, there is no need to flood the connection with
+		 * requests. Back off now. */
+		if (i > 0 && test_bit(RS_REQUEST_UNSUCCESSFUL, &peer_device->flags)) {
+			request_ok = false;
+			goto request_done;
+		}
+
+		if ((number - i) < discard_granularity >> bm_block_shift)
+			goto request_done;
+
+		bit  = drbd_bm_find_next(peer_device, peer_device->resync_next_bit);
+		if (bit == DRBD_END_OF_BITMAP) {
+			peer_device->resync_next_bit = drbd_bm_bits(device);
+			goto request_done;
+		}
 
+		bit = ALIGN_DOWN(bit, bits_per_peer_bit);
+		sector = bm_bit_to_sect(bm, bit);
+
+		if (drbd_rs_c_min_rate_throttle(peer_device)) {
+			peer_device->resync_next_bit = bit;
+			goto request_done;
+		}
+
+		if (adjacent(prev_sector, size, sector) && (number - i) < size >> bm_block_shift) {
+			/* When making requests in an out-of-sync area, ensure that the size
+			   of successive requests does not decrease. This allows the next
+			   make_resync_request call to start with optimal alignment. */
+			goto request_done;
+		}
+
+		prev_sector = sector;
+		size = bm_block_size(bm) * bits_per_peer_bit;
+		optimal_bits_alignment = optimal_bits_for_alignment(bit,
+						max(peer_bm_block_shift, bm_block_shift));
+		optimal_bits_rate = round_to_powerof_2(number - i);
+		optimal_bits = min(optimal_bits_alignment, optimal_bits_rate) - 1;
+
+		/* try to find some adjacent bits. */
+		rollback_i = i;
+		while (optimal_bits-- > 0) {
 			if (discard_granularity && size == discard_granularity)
 				break;
 
-			/* do not cross extent boundaries */
-			if (((bit+1) & BM_BLOCKS_PER_BM_EXT_MASK) == 0)
+			if (drbd_bm_count_bits(device, peer_device->bitmap_index,
+					       bit + bits_per_peer_bit,
+					       bit + bits_per_peer_bit * 2 - 1) == 0)
 				break;
-			/* now, is it actually dirty, after all?
-			 * caution, drbd_bm_test_bit is tri-state for some
-			 * obscure reason; ( b == 0 ) would get the out-of-band
-			 * only accidentally right because of the "oddly sized"
-			 * adjustment below */
-			if (drbd_bm_test_bit(device, bit+1) != 1)
-				break;
-			bit++;
-			size += BM_BLOCK_SIZE;
-			if ((BM_BLOCK_SIZE << align) <= size)
-				align++;
-			i++;
+			size += bm_block_size(bm) * bits_per_peer_bit;
+			bit += bits_per_peer_bit;
+			i += bits_per_peer_bit;
 		}
-		/* if we merged some,
-		 * reset the offset to start the next drbd_bm_find_next from */
-		if (size > BM_BLOCK_SIZE)
-			device->bm_resync_fo = bit + 1;
-#endif
+
+		/* set the offset to start the next drbd_bm_find_next from */
+		peer_device->resync_next_bit = bit + bits_per_peer_bit;
 
 		/* adjust very last sectors, in case we are oddly sized */
 		if (sector + (size>>9) > capacity)
 			size = (capacity-sector)<<9;
 
-		if (device->use_csums) {
-			switch (read_for_csum(peer_device, sector, size)) {
-			case -EIO: /* Disk failure */
-				put_ldev(device);
-				return -EIO;
-			case -EAGAIN: /* allocation failed, or ldev busy */
-				drbd_rs_complete_io(device, sector);
-				device->bm_resync_fo = BM_SECT_TO_BIT(sector);
-				i = rollback_i;
-				goto requeue;
-			case 0:
-				/* everything ok */
-				break;
-			default:
-				BUG();
-			}
-		} else {
-			int err;
-
-			inc_rs_pending(peer_device);
-			err = drbd_send_drequest(peer_device,
-						 size == discard_granularity ? P_RS_THIN_REQ : P_RS_DATA_REQUEST,
-						 sector, size, ID_SYNCER);
-			if (err) {
-				drbd_err(device, "drbd_send_drequest() failed, aborting...\n");
-				dec_rs_pending(peer_device);
-				put_ldev(device);
-				return err;
-			}
+		if (peer_device->use_csums)
+			err = read_for_csum(peer_device, sector, size);
+		else
+			err = make_one_resync_request(peer_device, discard_granularity, sector, size);
+
+		switch (err) {
+		case -EIO: /* Disk failure */
+			put_ldev(device);
+			return -EIO;
+		case -EAGAIN: /* allocation failed, or ldev busy */
+			set_bit(RS_REQUEST_UNSUCCESSFUL, &peer_device->flags);
+			peer_device->resync_next_bit = bm_sect_to_bit(bm, sector);
+			i = rollback_i;
+			goto request_done;
+		case 0:
+			/* everything ok */
+			break;
+		default:
+			BUG();
 		}
 	}
 
-	if (device->bm_resync_fo >= drbd_bm_bits(device)) {
-		/* last syncer _request_ was sent,
-		 * but the P_RS_DATA_REPLY not yet received.  sync will end (and
-		 * next sync group will resume), as soon as we receive the last
-		 * resync data block, and the last bit is cleared.
-		 * until then resync "work" is "inactive" ...
+request_done:
+	/* ... but do a correction, in case we had to break/goto request_done; */
+	peer_device->rs_in_flight -= (number - i) * bm_sect_per_bit(bm);
+
+	if (peer_device->resync_next_bit >= drbd_bm_bits(device)) {
+		/*
+		 * Last resync request sent in this pass. There will be no
+		 * replies for subsequent sectors so discard merging should
+		 * stop here.
 		 */
-		put_ldev(device);
-		return 0;
+		drbd_last_resync_request(peer_device, false);
 	}
 
- requeue:
-	device->rs_in_flight += (i << (BM_BLOCK_SHIFT - 9));
-	mod_timer(&device->resync_timer, jiffies + SLEEP_TIME);
+skip_request:
+	/* Always reschedule ourselves as a form of polling to detect the end of a resync pass. */
+	mod_timer(&peer_device->resync_timer, jiffies + resync_delay(request_ok, number, i));
+
+	if (peer_device->rs_in_flight > 0 && request_ok) {
+		int rs_sect_in = atomic_read(&peer_device->rs_sect_in);
+
+		if (rs_sect_in >= peer_device->rs_in_flight) {
+			/*
+			 * In case replies were received before correction to
+			 * rs_in_flight, consider whether to schedule ourselves
+			 * early.
+			 */
+			drbd_rs_all_in_flight_came_back(peer_device, rs_sect_in);
+		}
+	}
+out_put_ldev:
 	put_ldev(device);
 	return 0;
 }
 
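+/* Send one online verify request for the block described by peer_req,
+ * using the dagtag-aware packet when the peer supports it. */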
+static void send_ov_request(struct drbd_peer_request *peer_req)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct dagtag_find_result dagtag_result;
+	enum drbd_packet cmd = peer_device->connection->agreed_features & DRBD_FF_RESYNC_DAGTAG ?
+		P_OV_DAGTAG_REQ : P_OV_REQUEST;
+
+	inc_rs_pending(peer_device);
+
+	dagtag_result = find_current_dagtag(peer_device->device->resource);
+	if (dagtag_result.err) {
+		change_cstate(peer_device->connection, C_DISCONNECTING, CS_HARD);
+		return;
+	}
+
+	drbd_send_rs_request(peer_device, cmd,
+			peer_req->i.sector, peer_req->i.size, peer_req->block_id,
+			dagtag_result.node_id, dagtag_result.dagtag);
+}
+
+static void drbd_conflict_send_ov_request(struct drbd_peer_request *peer_req)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_device *device = peer_device->device;
+
+	spin_lock_irq(&device->interval_lock);
+	if (drbd_find_conflict(device, &peer_req->i, 0))
+		set_bit(INTERVAL_CONFLICT, &peer_req->i.flags);
+	drbd_insert_interval(&device->requests, &peer_req->i);
+	set_bit(INTERVAL_READY_TO_SEND, &peer_req->i.flags);
+	/* Mark as submitted now, since OV requests do not have a second
+	 * conflict resolution stage when the reply is received. */
+	set_bit(INTERVAL_SUBMITTED, &peer_req->i.flags);
+	spin_unlock_irq(&device->interval_lock);
+
+	/* If there were conflicts we will skip the block. However, we send a
+	 * request anyway because the protocol doesn't include any way to mark
+	 * a block as skipped without having sent any request. */
+	send_ov_request(peer_req);
+}
+
+/* make_ov_request() - initiate online verify requests as required
+ *
+ * Request handling flow:
+ *
+ * make_ov_request
+ *        |
+ *        v
+ * drbd_conflict_send_ov_request
+ *        |
+ *        v
+ * send_ov_request
+ *        |
+ *       ... via peer
+ *        |
+ *        v
+ * receive_dagtag_ov_reply
+ *        |
+ *        v
+ * receive_common_ov_reply
+ *        |
+ *        v              dagtag waiting
+ * drbd_peer_resync_read --------------+
+ *        |                            |
+ *        |                           ... dagtag_wait_ee
+ *        |                            |
+ *        |                            v
+ *        +--------------- release_dagtag_wait
+ *        |
+ *        v
+ * drbd_conflict_submit_peer_read
+ *        |
+ *        v
+ * drbd_submit_peer_request
+ *        |
+ *       ... backing device
+ *        |
+ *        v
+ * drbd_peer_request_endio
+ *        |
+ *        v
+ * drbd_endio_read_sec_final
+ *        |
+ *       ... sender_work
+ *        |
+ *        v
+ * w_e_end_ov_reply
+ */
 static int make_ov_request(struct drbd_peer_device *peer_device, int cancel)
 {
 	struct drbd_device *device = peer_device->device;
+	struct drbd_bitmap *bm;
+	struct drbd_connection *connection = peer_device->connection;
 	int number, i, size;
 	sector_t sector;
 	const sector_t capacity = get_capacity(device->vdisk);
@@ -774,262 +1579,538 @@ static int make_ov_request(struct drbd_peer_device *peer_device, int cancel)
 	if (unlikely(cancel))
 		return 1;
 
+	if (!get_ldev(device))
+		return 0;
+
+	bm = device->bitmap;
 	number = drbd_rs_number_requests(peer_device);
+	sector = peer_device->ov_position;
 
-	sector = device->ov_position;
+	/* don't let rs_sectors_came_in() re-schedule us "early"
+	 * just because the first reply came "fast", ... */
+	peer_device->rs_in_flight += bm_bit_to_sect(bm, number);
 	for (i = 0; i < number; i++) {
+		struct drbd_peer_request *peer_req;
+
 		if (sector >= capacity)
-			return 1;
+			break;
 
 		/* We check for "finished" only in the reply path:
 		 * w_e_end_ov_reply().
 		 * We need to send at least one request out. */
-		stop_sector_reached = i > 0
-			&& verify_can_do_stop_sector(device)
-			&& sector >= device->ov_stop_sector;
+		stop_sector_reached = sector > peer_device->ov_start_sector
+			&& verify_can_do_stop_sector(peer_device)
+			&& sector >= peer_device->ov_stop_sector;
 		if (stop_sector_reached)
 			break;
 
-		size = BM_BLOCK_SIZE;
-
-		if (drbd_try_rs_begin_io(peer_device, sector)) {
-			device->ov_position = sector;
-			goto requeue;
-		}
+		if (drbd_rs_c_min_rate_throttle(peer_device))
+			break;
 
+		size = bm_block_size(bm);
 		if (sector + (size>>9) > capacity)
 			size = (capacity-sector)<<9;
 
-		inc_rs_pending(peer_device);
-		if (drbd_send_ov_request(first_peer_device(device), sector, size)) {
-			dec_rs_pending(peer_device);
+		/* Do not wait if no memory is immediately available.
+		 * Don't allocate pages yet - we only need them when we
+		 * receive the P_OV_REPLY and need to read local data.
+		 * That is important to not exhaust max_buffers prematurely.
+		 */
+		peer_req = drbd_alloc_peer_req(peer_device, GFP_TRY & ~__GFP_RECLAIM, size,
+					       REQ_NO_BIO);
+		if (!peer_req) {
+			drbd_err(device, "Could not allocate online verify request\n");
+			put_ldev(device);
 			return 0;
 		}
-		sector += BM_SECT_PER_BIT;
-	}
-	device->ov_position = sector;
 
- requeue:
-	device->rs_in_flight += (i << (BM_BLOCK_SHIFT - 9));
-	if (i == 0 || !stop_sector_reached)
-		mod_timer(&device->resync_timer, jiffies + SLEEP_TIME);
+		peer_req->i.size = size;
+		peer_req->i.sector = sector;
+		peer_req->i.type = INTERVAL_OV_READ_SOURCE;
+
+		spin_lock_irq(&connection->peer_reqs_lock);
+		list_add_tail(&peer_req->recv_order, &connection->peer_reads);
+		peer_req->flags |= EE_ON_RECV_ORDER;
+		spin_unlock_irq(&connection->peer_reqs_lock);
+
+		drbd_conflict_send_ov_request(peer_req);
+
+		sector += bm_sect_per_bit(bm);
+	}
+	/* ... but do a correction, in case we had to break; ... */
+	peer_device->rs_in_flight -= bm_bit_to_sect(bm, number-i);
+	peer_device->ov_position = sector;
+	if (stop_sector_reached)
+		goto out_ok;
+	/* ... and in case that raced with the receiver,
+	 * reschedule ourselves right now */
+	if (i > 0 && atomic_read(&peer_device->rs_sect_in) >= peer_device->rs_in_flight)
+		drbd_queue_work_if_unqueued(
+			&peer_device->connection->sender_work,
+			&peer_device->resync_work);
+	else
+		mod_timer(&peer_device->resync_timer, jiffies + resync_delay(true, number, i));
+out_ok:
+	put_ldev(device);
 	return 1;
 }
 
-int w_ov_finished(struct drbd_work *w, int cancel)
+struct resync_finished_work {
+	struct drbd_peer_device_work pdw;
+	enum drbd_disk_state new_peer_disk_state;
+};
+
+static int w_resync_finished(struct drbd_work *w, int cancel)
 {
-	struct drbd_device_work *dw =
-		container_of(w, struct drbd_device_work, w);
-	struct drbd_device *device = dw->device;
-	kfree(dw);
-	ov_out_of_sync_print(first_peer_device(device));
-	drbd_resync_finished(first_peer_device(device));
+	struct resync_finished_work *rfw = container_of(
+		container_of(w, struct drbd_peer_device_work, w),
+		struct resync_finished_work, pdw);
+
+	if (!cancel)
+		drbd_resync_finished(rfw->pdw.peer_device, rfw->new_peer_disk_state);
+	kfree(rfw);
 
 	return 0;
 }
 
-static int w_resync_finished(struct drbd_work *w, int cancel)
+static long ping_timeout(struct drbd_connection *connection)
 {
-	struct drbd_device_work *dw =
-		container_of(w, struct drbd_device_work, w);
-	struct drbd_device *device = dw->device;
-	kfree(dw);
+	struct net_conf *nc;
+	long timeout;
 
-	drbd_resync_finished(first_peer_device(device));
+	rcu_read_lock();
+	nc = rcu_dereference(connection->transport.net_conf);
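+	/* ping_timeo is configured in tenths of a second; convert to jiffies */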
+	timeout = nc->ping_timeo * HZ / 10;
+	rcu_read_unlock();
 
-	return 0;
+	return timeout;
+}
+
+static int send_ping_peer(struct drbd_connection *connection)
+{
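+	/* Allow only one ping in flight; PING_PENDING is cleared
+	 * when the PingAck is received. */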
+	bool was_pending = test_and_set_bit(PING_PENDING, &connection->flags);
+	int err = 0;
+
+	if (!was_pending) {
+		err = drbd_send_ping(connection);
+		if (err)
+			change_cstate(connection, C_NETWORK_FAILURE, CS_HARD);
+	}
+
+	return err;
+}
+
+void drbd_ping_peer(struct drbd_connection *connection)
+{
+	long r, timeout = ping_timeout(connection);
+	int err;
+
+	err = send_ping_peer(connection);
+	if (err)
+		return;
+
+	r = wait_event_timeout(connection->resource->state_wait,
+			       !test_bit(PING_PENDING, &connection->flags) ||
+			       connection->cstate[NOW] < C_CONNECTED,
+			       timeout);
+	if (r > 0)
+		return;
+
+	drbd_warn(connection, "PingAck did not arrive in time\n");
+	change_cstate(connection, C_NETWORK_FAILURE, CS_HARD);
+}
+
+/* caller needs to hold rcu_read_lock, state_rwlock, adm_mutex or conf_update */
+struct drbd_peer_device *peer_device_by_node_id(struct drbd_device *device, int node_id)
+{
+	struct drbd_peer_device *peer_device;
+
+	for_each_peer_device_rcu(peer_device, device) {
+		if (peer_device->node_id == node_id)
+			return peer_device;
+	}
+
+	return NULL;
+}
+
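+/* Set the disk state of each peer in @nodes to Outdated,
+ * if it is currently Consistent or better. */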
+static void __outdate_peer_disk_by_mask(struct drbd_device *device, u64 nodes)
+{
+	struct drbd_peer_device *peer_device;
+	int node_id;
+
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		if (!(nodes & NODE_MASK(node_id)))
+			continue;
+		peer_device = peer_device_by_node_id(device, node_id);
+		if (peer_device && peer_device->disk_state[NEW] >= D_CONSISTENT)
+			__change_peer_disk_state(peer_device, D_OUTDATED);
+	}
+}
+
+/* An annoying corner case arises when we are resync target towards several
+   nodes at once: one of the resyncs finishes as STABLE_RESYNC, the others
+   as UNSTABLE_RESYNC. */
+static bool was_resync_stable(struct drbd_peer_device *peer_device)
+{
+	struct drbd_device *device = peer_device->device;
+
+	if (test_bit(UNSTABLE_RESYNC, &peer_device->flags) &&
+	    !test_bit(STABLE_RESYNC, &device->flags))
+		return false;
+
+	set_bit(STABLE_RESYNC, &device->flags);
+	/* The STABLE_RESYNC bit gets cleared again if, in any other ongoing
+	   resync, we receive something from a resync source that is marked
+	   UNSTABLE_RESYNC. */
+
+	return true;
+}
+
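+/* Cancel paused resyncs from other peers where we are sync target;
+ * returns the node mask of the affected peers. */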
+static u64 __cancel_other_resyncs(struct drbd_device *device)
+{
+	struct drbd_peer_device *peer_device;
+	u64 target_m = 0;
+
+	for_each_peer_device(peer_device, device) {
+		if (peer_device->repl_state[NEW] == L_PAUSED_SYNC_T) {
+			target_m |= NODE_MASK(peer_device->node_id);
+			__change_repl_state(peer_device, L_ESTABLISHED);
+		}
+	}
+
+	return target_m;
 }
 
-static void ping_peer(struct drbd_device *device)
+static void resync_again(struct drbd_device *device, u64 source_m, u64 target_m)
 {
-	struct drbd_connection *connection = first_peer_device(device)->connection;
+	struct drbd_peer_device *peer_device;
+
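+	/* Restart resyncs that were requested while another resync was
+	 * still running on this device. */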
+	for_each_peer_device(peer_device, device) {
+		if (peer_device->resync_again) {
+			u64 m = NODE_MASK(peer_device->node_id);
+			enum drbd_repl_state new_repl_state =
+				source_m & m ? L_WF_BITMAP_S :
+				target_m & m ? L_WF_BITMAP_T :
+				L_ESTABLISHED;
+
+			if (new_repl_state != L_ESTABLISHED) {
+				peer_device->resync_again--;
+				begin_state_change_locked(device->resource, CS_VERBOSE);
+				__change_repl_state(peer_device, new_repl_state);
+				end_state_change_locked(device->resource, "resync-again");
+			}
+		}
+	}
+}
+
+static void init_resync_stable_bits(struct drbd_peer_device *first_target_pd)
+{
+	struct drbd_device *device = first_target_pd->device;
+	struct drbd_peer_device *peer_device;
 
-	clear_bit(GOT_PING_ACK, &connection->flags);
-	request_ping(connection);
-	wait_event(connection->ping_wait,
-		   test_bit(GOT_PING_ACK, &connection->flags) || device->state.conn < C_CONNECTED);
+	clear_bit(UNSTABLE_RESYNC, &first_target_pd->flags);
+
+	/* Clear the device-wide STABLE_RESYNC flag when becoming
+	   resync target on the first peer device. */
+	for_each_peer_device(peer_device, device) {
+		enum drbd_repl_state repl_state = peer_device->repl_state[NOW];
+		if (peer_device == first_target_pd)
+			continue;
+		if (repl_state == L_SYNC_TARGET || repl_state == L_PAUSED_SYNC_T)
+			return;
+	}
+	clear_bit(STABLE_RESYNC, &device->flags);
 }
 
-int drbd_resync_finished(struct drbd_peer_device *peer_device)
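+/* Record the last known dagtag of the lost peer on its (now disconnected)
+ * connection. */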
+static void after_reconciliation_resync(struct drbd_connection *connection)
+{
+	struct drbd_connection *lost_peer =
+		drbd_get_connection_by_node_id(connection->resource,
+					       connection->after_reconciliation.lost_node_id);
+
+	if (lost_peer) {
+		if (lost_peer->cstate[NOW] < C_CONNECTED)
+			atomic64_set(&lost_peer->last_dagtag_sector,
+				connection->after_reconciliation.dagtag_sector);
+
+		kref_put(&lost_peer->kref, drbd_destroy_connection);
+	}
+
+	connection->after_reconciliation.lost_node_id = -1;
+}
+
+static void try_to_get_resynced_from_primary(struct drbd_device *device)
+{
+	struct drbd_resource *resource = device->resource;
+	struct drbd_peer_device *peer_device;
+	struct drbd_connection *connection;
+
+	read_lock_irq(&resource->state_rwlock);
+	for_each_peer_device(peer_device, device) {
+		if (peer_device->connection->peer_role[NEW] == R_PRIMARY &&
+		    peer_device->disk_state[NEW] == D_UP_TO_DATE)
+			goto found;
+	}
+	peer_device = NULL;
+found:
+	read_unlock_irq(&resource->state_rwlock);
+
+	if (!peer_device)
+		return;
+
+	connection = peer_device->connection;
+	if (connection->agreed_pro_version < 118) {
+		drbd_warn(connection,
+			  "peer protocol version is lower than 118, reconnecting to get resynced\n");
+		change_cstate(connection, C_PROTOCOL_ERROR, CS_HARD);
+		return;
+	}
+
+	drbd_send_uuids(peer_device, 0, 0);
+	drbd_start_resync(peer_device, L_SYNC_TARGET, "resync-from-primary");
+}
+
+static void queue_resync_finished(struct drbd_peer_device *peer_device, enum drbd_disk_state new_peer_disk_state)
+{
+	struct drbd_connection *connection = peer_device->connection;
+	struct resync_finished_work *rfw;
+
+	rfw = kmalloc_obj(*rfw, GFP_ATOMIC);
+	if (!rfw) {
+		drbd_err(peer_device, "Failed to allocate resync finished work\n");
+		return;
+	}
+
+	rfw->pdw.w.cb = w_resync_finished;
+	rfw->pdw.peer_device = peer_device;
+	rfw->new_peer_disk_state = new_peer_disk_state;
+	drbd_queue_work(&connection->sender_work, &rfw->pdw.w);
+}
+
+static void drbd_queue_final_peers_in_sync(struct drbd_peer_device *peer_device)
+{
+	sector_t last_end = peer_device->last_in_sync_end;
+	sector_t last_step = last_end & ~PEERS_IN_SYNC_STEP_SECT_MASK;
+	sector_t last_step_end = min(get_capacity(peer_device->device->vdisk),
+			last_step + PEERS_IN_SYNC_STEP_SECT);
+
+	/* Send an update for the last request if it ended partway through a step */
+	if (last_end > last_step)
+		drbd_queue_update_peers(peer_device, last_step, last_step_end);
+}
+
+void drbd_resync_finished(struct drbd_peer_device *peer_device,
+			 enum drbd_disk_state new_peer_disk_state)
 {
 	struct drbd_device *device = peer_device->device;
 	struct drbd_connection *connection = peer_device->connection;
+	enum drbd_repl_state *repl_state = peer_device->repl_state;
+	enum drbd_repl_state old_repl_state = L_ESTABLISHED;
+	bool try_to_get_resynced_from_primary_flag = false;
+	u64 source_m = 0, target_m = 0;
 	unsigned long db, dt, dbdt;
 	unsigned long n_oos;
-	union drbd_state os, ns;
-	struct drbd_device_work *dw;
 	char *khelper_cmd = NULL;
 	int verify_done = 0;
+	bool aborted = false;
+	int bm_block_shift = device->last_bm_block_shift;
 
-	/* Remove all elements from the resync LRU. Since future actions
-	 * might set bits in the (main) bitmap, then the entries in the
-	 * resync LRU would be wrong. */
-	if (drbd_rs_del_all(device)) {
-		/* In case this is not possible now, most probably because
-		 * there are P_RS_DATA_REPLY Packets lingering on the worker's
-		 * queue (or even the read operations for those packets
-		 * is not finished by now).   Retry in 100ms. */
-
-		schedule_timeout_interruptible(HZ / 10);
-		dw = kmalloc_obj(struct drbd_device_work, GFP_ATOMIC);
-		if (dw) {
-			dw->w.cb = w_resync_finished;
-			dw->device = device;
-			drbd_queue_work(&connection->sender_work, &dw->w);
-			return 1;
+	if (repl_state[NOW] == L_SYNC_SOURCE || repl_state[NOW] == L_PAUSED_SYNC_S) {
+		/* Make sure all queued w_update_peers() executed. */
+		if (current == device->resource->worker.task) {
+			queue_resync_finished(peer_device, new_peer_disk_state);
+			return;
+		} else {
+			drbd_flush_workqueue(&device->resource->work);
 		}
-		drbd_err(device, "Warn failed to drbd_rs_del_all() and to kmalloc(dw).\n");
 	}
 
-	dt = (jiffies - device->rs_start - device->rs_paused) / HZ;
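+	/* Do not block the worker on uuid_sem; if it is contended,
+	 * requeue and let the work item retry. */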
+	if (!down_write_trylock(&device->uuid_sem)) {
+		if (current == device->resource->worker.task) {
+			queue_resync_finished(peer_device, new_peer_disk_state);
+			return;
+		} else {
+			down_write(&device->uuid_sem);
+		}
+	}
+
+	dt = (jiffies - peer_device->rs_start - peer_device->rs_paused) / HZ;
 	if (dt <= 0)
 		dt = 1;
-
-	db = device->rs_total;
+	db = peer_device->rs_total;
 	/* adjust for verify start and stop sectors, respective reached position */
-	if (device->state.conn == C_VERIFY_S || device->state.conn == C_VERIFY_T)
-		db -= device->ov_left;
+	if (repl_state[NOW] == L_VERIFY_S || repl_state[NOW] == L_VERIFY_T)
+		db -= atomic64_read(&peer_device->ov_left);
 
-	dbdt = Bit2KB(db/dt);
-	device->rs_paused /= HZ;
+	dbdt = bit_to_kb(db/dt, bm_block_shift);
+	peer_device->rs_paused /= HZ;
 
-	if (!get_ldev(device))
+	if (!get_ldev(device)) {
+		up_write(&device->uuid_sem);
 		goto out;
+	}
 
-	ping_peer(device);
+	drbd_ping_peer(connection);
 
-	spin_lock_irq(&device->resource->req_lock);
-	os = drbd_read_state(device);
+	write_lock_irq(&device->resource->state_rwlock);
+	begin_state_change_locked(device->resource, CS_VERBOSE);
+	old_repl_state = repl_state[NOW];
 
-	verify_done = (os.conn == C_VERIFY_S || os.conn == C_VERIFY_T);
+	verify_done = (repl_state[NOW] == L_VERIFY_S || repl_state[NOW] == L_VERIFY_T);
 
 	/* This protects us against multiple calls (that can happen in the presence
 	   of application IO), and against connectivity loss just before we arrive here. */
-	if (os.conn <= C_CONNECTED)
+	if (peer_device->repl_state[NOW] <= L_ESTABLISHED)
 		goto out_unlock;
 
-	ns = os;
-	ns.conn = C_CONNECTED;
+	/*
+	 * This protects us against a race with the peer when finishing a
+	 * resync at the same time as entering Ahead-Behind mode.
+	 */
+	if (peer_device->repl_state[NOW] == L_BEHIND)
+		goto out_unlock;
 
-	drbd_info(device, "%s done (total %lu sec; paused %lu sec; %lu K/sec)\n",
-	     verify_done ? "Online verify" : "Resync",
-	     dt + device->rs_paused, device->rs_paused, dbdt);
+	peer_device->resync_active[NEW] = false;
+	__change_repl_state(peer_device, L_ESTABLISHED);
+
+	aborted = device->disk_state[NOW] == D_OUTDATED && new_peer_disk_state == D_INCONSISTENT;
+	{
+	char tmp[sizeof(" but 01234567890123456789 4k blocks skipped")] = "";
+	if (verify_done && peer_device->ov_skipped)
+		snprintf(tmp, sizeof(tmp), " but %lu %lluk blocks skipped",
+			peer_device->ov_skipped, bit_to_kb(1, bm_block_shift));
+	drbd_info(peer_device, "%s %s%s (total %lu sec; paused %lu sec; %lu K/sec)\n",
+		  verify_done ? "Online verify" : "Resync",
+		  aborted ? "aborted" : "done", tmp,
+		  dt + peer_device->rs_paused, peer_device->rs_paused, dbdt);
+	}
 
-	n_oos = drbd_bm_total_weight(device);
+	n_oos = drbd_bm_total_weight(peer_device);
 
-	if (os.conn == C_VERIFY_S || os.conn == C_VERIFY_T) {
+	if (repl_state[NOW] == L_VERIFY_S || repl_state[NOW] == L_VERIFY_T) {
 		if (n_oos) {
-			drbd_alert(device, "Online verify found %lu %dk block out of sync!\n",
-			      n_oos, Bit2KB(1));
+			drbd_alert(peer_device, "Online verify found %lu %lluk blocks out of sync!\n",
+			      n_oos, bit_to_kb(1, bm_block_shift));
 			khelper_cmd = "out-of-sync";
 		}
 	} else {
-		D_ASSERT(device, (n_oos - device->rs_failed) == 0);
+		if (!aborted && peer_device->rs_failed == 0 && n_oos != 0)
+			drbd_warn(peer_device, "expected n_oos:%lu to be 0\n", n_oos);
 
-		if (os.conn == C_SYNC_TARGET || os.conn == C_PAUSED_SYNC_T)
+		if (repl_state[NOW] == L_SYNC_TARGET || repl_state[NOW] == L_PAUSED_SYNC_T)
 			khelper_cmd = "after-resync-target";
 
-		if (device->use_csums && device->rs_total) {
-			const unsigned long s = device->rs_same_csum;
-			const unsigned long t = device->rs_total;
+		if (peer_device->use_csums && peer_device->rs_total) {
+			const unsigned long s = peer_device->rs_same_csum;
+			const unsigned long t = peer_device->rs_total;
 			const int ratio =
 				(t == 0)     ? 0 :
 			(t < 100000) ? ((s*100)/t) : (s/(t/100));
-			drbd_info(device, "%u %% had equal checksums, eliminated: %luK; "
-			     "transferred %luK total %luK\n",
+			drbd_info(peer_device, "%u %% had equal checksums, eliminated: %lluK; "
+			     "transferred %lluK total %lluK\n",
 			     ratio,
-			     Bit2KB(device->rs_same_csum),
-			     Bit2KB(device->rs_total - device->rs_same_csum),
-			     Bit2KB(device->rs_total));
+			     bit_to_kb(peer_device->rs_same_csum, bm_block_shift),
+			     bit_to_kb(peer_device->rs_total - peer_device->rs_same_csum,
+					bm_block_shift),
+			     bit_to_kb(peer_device->rs_total, bm_block_shift));
 		}
 	}
 
-	if (device->rs_failed) {
-		drbd_info(device, "            %lu failed blocks\n", device->rs_failed);
+	if (peer_device->rs_failed) {
+		drbd_info(peer_device, "            %lu failed blocks\n", peer_device->rs_failed);
 
-		if (os.conn == C_SYNC_TARGET || os.conn == C_PAUSED_SYNC_T) {
-			ns.disk = D_INCONSISTENT;
-			ns.pdsk = D_UP_TO_DATE;
+		if (repl_state[NOW] == L_SYNC_TARGET || repl_state[NOW] == L_PAUSED_SYNC_T) {
+			__change_disk_state(device, D_INCONSISTENT);
+			__change_peer_disk_state(peer_device, D_UP_TO_DATE);
 		} else {
-			ns.disk = D_UP_TO_DATE;
-			ns.pdsk = D_INCONSISTENT;
+			__change_disk_state(device, D_UP_TO_DATE);
+			__change_peer_disk_state(peer_device, D_INCONSISTENT);
 		}
 	} else {
-		ns.disk = D_UP_TO_DATE;
-		ns.pdsk = D_UP_TO_DATE;
+		if (repl_state[NOW] == L_SYNC_TARGET || repl_state[NOW] == L_PAUSED_SYNC_T) {
+			bool stable_resync = was_resync_stable(peer_device);
+			if (stable_resync) {
+				enum drbd_disk_state new_disk_state = peer_device->disk_state[NOW];
+				if (new_disk_state < D_UP_TO_DATE &&
+				    test_bit(SYNC_SRC_CRASHED_PRI, &peer_device->flags)) {
+					try_to_get_resynced_from_primary_flag = true;
+					set_bit(CRASHED_PRIMARY, &device->flags);
+				}
+				__change_disk_state(device, new_disk_state);
+			}
 
-		if (os.conn == C_SYNC_TARGET || os.conn == C_PAUSED_SYNC_T) {
-			if (device->p_uuid) {
+			if (device->disk_state[NEW] == D_UP_TO_DATE)
+				target_m = __cancel_other_resyncs(device);
+
+			if (stable_resync && test_bit(UUIDS_RECEIVED, &peer_device->flags)) {
+				const int node_id = device->resource->res_opts.node_id;
 				int i;
-				for (i = UI_BITMAP ; i <= UI_HISTORY_END ; i++)
-					_drbd_uuid_set(device, i, device->p_uuid[i]);
-				drbd_uuid_set(device, UI_BITMAP, device->ldev->md.uuid[UI_CURRENT]);
-				_drbd_uuid_set(device, UI_CURRENT, device->p_uuid[UI_CURRENT]);
-			} else {
-				drbd_err(device, "device->p_uuid is NULL! BUG\n");
-			}
-		}
 
-		if (!(os.conn == C_VERIFY_S || os.conn == C_VERIFY_T)) {
-			/* for verify runs, we don't update uuids here,
-			 * so there would be nothing to report. */
-			drbd_uuid_set_bm(device, 0UL);
-			drbd_print_uuids(device, "updated UUIDs");
-			if (device->p_uuid) {
+				u64 newer = drbd_uuid_resync_finished(peer_device);
+				__outdate_peer_disk_by_mask(device, newer);
+				drbd_print_uuids(peer_device, "updated UUIDs");
+
 				/* Now the two UUID sets are equal, update what we
 				 * know of the peer. */
-				int i;
-				for (i = UI_CURRENT ; i <= UI_HISTORY_END ; i++)
-					device->p_uuid[i] = device->ldev->md.uuid[i];
+				peer_device->current_uuid = drbd_current_uuid(device);
+				peer_device->bitmap_uuids[node_id] = drbd_bitmap_uuid(peer_device);
+				for (i = 0; i < ARRAY_SIZE(peer_device->history_uuids); i++)
+					peer_device->history_uuids[i] =
+						drbd_history_uuid(device, i);
+			} else {
+				if (!test_bit(UUIDS_RECEIVED, &peer_device->flags))
+					drbd_err(peer_device, "BUG: uuids were not received!\n");
+
+				if (test_bit(UNSTABLE_RESYNC, &peer_device->flags))
+					drbd_info(peer_device, "Peer was unstable during resync\n");
+			}
+		} else if (repl_state[NOW] == L_SYNC_SOURCE || repl_state[NOW] == L_PAUSED_SYNC_S) {
+			if (new_peer_disk_state != D_MASK)
+				__change_peer_disk_state(peer_device, new_peer_disk_state);
+			if (peer_device->connection->agreed_pro_version < 110) {
+				drbd_uuid_set_bitmap(peer_device, 0UL);
+				drbd_print_uuids(peer_device, "updated UUIDs");
 			}
 		}
 	}
 
-	_drbd_set_state(device, ns, CS_VERBOSE, NULL);
 out_unlock:
-	spin_unlock_irq(&device->resource->req_lock);
+	end_state_change_locked(device->resource, "resync-finished");
 
-	/* If we have been sync source, and have an effective fencing-policy,
-	 * once *all* volumes are back in sync, call "unfence". */
-	if (os.conn == C_SYNC_SOURCE) {
-		enum drbd_disk_state disk_state = D_MASK;
-		enum drbd_disk_state pdsk_state = D_MASK;
-		enum drbd_fencing_p fp = FP_DONT_CARE;
+	put_ldev(device);
 
-		rcu_read_lock();
-		fp = rcu_dereference(device->ldev->disk_conf)->fencing;
-		if (fp != FP_DONT_CARE) {
-			struct drbd_peer_device *peer_device;
-			int vnr;
-			idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-				struct drbd_device *device = peer_device->device;
-				disk_state = min_t(enum drbd_disk_state, disk_state, device->state.disk);
-				pdsk_state = min_t(enum drbd_disk_state, pdsk_state, device->state.pdsk);
-			}
-		}
-		rcu_read_unlock();
-		if (disk_state == D_UP_TO_DATE && pdsk_state == D_UP_TO_DATE)
-			conn_khelper(connection, "unfence-peer");
-	}
+	peer_device->rs_total  = 0;
+	peer_device->rs_failed = 0;
+	peer_device->rs_paused = 0;
 
-	put_ldev(device);
-out:
-	device->rs_total  = 0;
-	device->rs_failed = 0;
-	device->rs_paused = 0;
+	if (old_repl_state == L_SYNC_TARGET || old_repl_state == L_PAUSED_SYNC_T)
+		target_m |= NODE_MASK(peer_device->node_id);
+	else if (old_repl_state == L_SYNC_SOURCE || old_repl_state == L_PAUSED_SYNC_S)
+		source_m |= NODE_MASK(peer_device->node_id);
 
+	resync_again(device, source_m, target_m);
+	write_unlock_irq(&device->resource->state_rwlock);
+	up_write(&device->uuid_sem);
+	if (connection->after_reconciliation.lost_node_id != -1)
+		after_reconciliation_resync(connection);
+
+	drbd_queue_final_peers_in_sync(peer_device);
+
+out:
 	/* reset start sector, if we reached end of device */
-	if (verify_done && device->ov_left == 0)
-		device->ov_start_sector = 0;
+	if (verify_done && atomic64_read(&peer_device->ov_left) == 0)
+		peer_device->ov_start_sector = 0;
 
-	drbd_md_sync(device);
+	drbd_md_sync_if_dirty(device);
 
 	if (khelper_cmd)
-		drbd_khelper(device, khelper_cmd);
+		drbd_maybe_khelper(device, connection, khelper_cmd);
 
-	return 1;
+	if (try_to_get_resynced_from_primary_flag)
+		try_to_get_resynced_from_primary(device);
 }
 
 /**
  * w_e_end_data_req() - Worker callback, to send a P_DATA_REPLY packet in response to a P_DATA_REQUEST
  * @w:		work object.
@@ -1039,7 +2120,6 @@ int w_e_end_data_req(struct drbd_work *w, int cancel)
 {
 	struct drbd_peer_request *peer_req = container_of(w, struct drbd_peer_request, w);
 	struct drbd_peer_device *peer_device = peer_req->peer_device;
-	struct drbd_device *device = peer_device->device;
 	int err;
 
 	if (unlikely(cancel)) {
@@ -1050,162 +2130,236 @@ int w_e_end_data_req(struct drbd_work *w, int cancel)
 	if (likely((peer_req->flags & EE_WAS_ERROR) == 0)) {
 		err = drbd_send_block(peer_device, P_DATA_REPLY, peer_req);
 	} else {
-		if (drbd_ratelimit())
-			drbd_err(device, "Sending NegDReply. sector=%llus.\n",
+		drbd_err_ratelimit(peer_device, "Sending NegDReply. sector=%llus.\n",
 			    (unsigned long long)peer_req->i.sector);
 
 		err = drbd_send_ack(peer_device, P_NEG_DREPLY, peer_req);
 	}
-
 	if (unlikely(err))
-		drbd_err(device, "drbd_send_block() failed\n");
+		drbd_err(peer_device, "drbd_send_block() failed\n");
+
 out:
-	dec_unacked(device);
-	drbd_free_peer_req(device, peer_req);
+	dec_unacked(peer_device);
+	drbd_free_peer_req(peer_req);
 
 	return err;
 }
 
+void
+drbd_resync_read_req_mod(struct drbd_peer_request *peer_req, enum drbd_interval_flags bit_to_set)
+{
+	const unsigned long done_mask = 1UL << INTERVAL_SENT | 1UL << INTERVAL_RECEIVED;
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	unsigned long nflags, oflags, new_flag;
+
+	new_flag = 1UL << bit_to_set;
+	if (!(new_flag & done_mask))
+		drbd_err(peer_device, "BUG: %s: Unexpected flag 0x%lx\n", __func__, new_flag);
+
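+	/* Lock-free flag update: whichever of INTERVAL_SENT and
+	 * INTERVAL_RECEIVED is set last frees the request below. */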
+	do {
+		oflags = READ_ONCE(peer_req->i.flags);
+		nflags = oflags | new_flag;
+	} while (cmpxchg(&peer_req->i.flags, oflags, nflags) != oflags);
+
+	if (new_flag & oflags)
+		drbd_err(peer_device, "BUG: %s: Flag 0x%lx already set\n", __func__, new_flag);
+
+	if ((nflags & done_mask) == done_mask)
+		drbd_free_peer_req(peer_req);
+}
+
 static bool all_zero(struct drbd_peer_request *peer_req)
 {
-	struct page *page = peer_req->pages;
-	unsigned int len = peer_req->i.size;
+	struct bvec_iter iter;
+	struct bio_vec bvec;
+	struct bio *bio;
 
-	page_chain_for_each(page) {
-		unsigned int l = min_t(unsigned int, len, PAGE_SIZE);
-		unsigned int i, words = l / sizeof(long);
-		unsigned long *d;
+	bio_list_for_each(bio, &peer_req->bios) {
+		bio_for_each_segment(bvec, bio, iter) {
+			unsigned long *d = bvec_virt(&bvec);
+			unsigned int i, words = bvec.bv_len / sizeof(*d);
 
-		d = kmap_atomic(page);
-		for (i = 0; i < words; i++) {
-			if (d[i]) {
-				kunmap_atomic(d);
-				return false;
+			for (i = 0; i < words; i++) {
+				if (d[i])
+					return false;
 			}
 		}
-		kunmap_atomic(d);
-		len -= l;
 	}
 
 	return true;
 }
 
-/**
- * w_e_end_rsdata_req() - Worker callback to send a P_RS_DATA_REPLY packet in response to a P_RS_DATA_REQUEST
- * @w:		work object.
- * @cancel:	The connection will be closed anyways
- */
-int w_e_end_rsdata_req(struct drbd_work *w, int cancel)
+static bool al_resync_extent_active(struct drbd_device *device, sector_t sector, unsigned int size)
 {
-	struct drbd_peer_request *peer_req = container_of(w, struct drbd_peer_request, w);
-	struct drbd_peer_device *peer_device = peer_req->peer_device;
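+	/* Round the interval out to legacy 128MiB "resync extent" boundaries
+	 * and check whether the activity log is active anywhere in them. */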
+	sector_t resync_extent_sector = sector & ~LEGACY_BM_EXT_SECT_MASK;
+	sector_t end_sector = sector + (size >> SECTOR_SHIFT);
+	sector_t resync_extent_end_sector =
+		(end_sector + LEGACY_BM_EXT_SECT_MASK) & ~LEGACY_BM_EXT_SECT_MASK;
+	return drbd_al_active(device,
+			resync_extent_sector,
+			(resync_extent_end_sector - resync_extent_sector) << SECTOR_SHIFT);
+}
+
+static int drbd_rs_reply(struct drbd_peer_device *peer_device, struct drbd_peer_request *peer_req, bool *expect_ack)
+{
+	struct drbd_connection *connection = peer_device->connection;
 	struct drbd_device *device = peer_device->device;
 	int err;
+	bool eq = false;
 
-	if (unlikely(cancel)) {
-		err = 0;
-		goto out;
-	}
+	if (peer_req->flags & EE_HAS_DIGEST) {
+		struct digest_info *di = peer_req->digest;
+		int digest_size;
+		void *digest = NULL;
 
-	if (get_ldev_if_state(device, D_FAILED)) {
-		drbd_rs_complete_io(device, peer_req->i.sector);
-		put_ldev(device);
+		/* quick hack to try to avoid a race against reconfiguration.
+		 * a real fix would be much more involved,
+		 * introducing more locking mechanisms */
+		if (connection->csums_tfm) {
+			digest_size = crypto_shash_digestsize(connection->csums_tfm);
+			D_ASSERT(device, digest_size == di->digest_size);
+			digest = kmalloc(digest_size, GFP_NOIO);
+			if (digest) {
+				drbd_csum_bios(connection->csums_tfm, &peer_req->bios, digest);
+				eq = !memcmp(digest, di->digest, digest_size);
+				kfree(digest);
+			}
+		}
+
+		peer_req->flags &= ~EE_HAS_DIGEST; /* This peer request no longer has a digest pointer */
+		kfree(di);
 	}
 
-	if (device->state.conn == C_AHEAD) {
-		err = drbd_send_ack(peer_device, P_RS_CANCEL, peer_req);
-	} else if (likely((peer_req->flags & EE_WAS_ERROR) == 0)) {
-		if (likely(device->state.pdsk >= D_INCONSISTENT)) {
-			inc_rs_pending(peer_device);
-			if (peer_req->flags & EE_RS_THIN_REQ && all_zero(peer_req))
-				err = drbd_send_rs_deallocated(peer_device, peer_req);
-			else
-				err = drbd_send_block(peer_device, P_RS_DATA_REPLY, peer_req);
+	if (eq) {
+		drbd_set_in_sync(peer_device, peer_req->i.sector, peer_req->i.size);
+		/* rs_same_csum is counted in units of the bitmap block size */
+		/* ldev_safe: a bio that holds a ldev ref exists */
+		peer_device->rs_same_csum += peer_req->i.size >> device->ldev->md.bm_block_shift;
+		err = drbd_send_ack(peer_device, P_RS_IS_IN_SYNC, peer_req);
+	} else {
+		inc_rs_pending(peer_device);
+		/*
+		 * If we send back as P_RS_DEALLOCATED,
+		 * this is overestimating "in-flight" accounting.
+		 * But needed to be properly balanced with
+		 * the atomic_sub() in got_RSWriteAck.
+		 */
+		atomic_add(peer_req->i.size >> 9, &connection->rs_in_flight);
+
+		/* After setting this, peer_req can be found by got_RSWriteAck. */
+		set_bit(INTERVAL_READY_TO_SEND, &peer_req->i.flags);
+
+		if (peer_req->flags & EE_RS_THIN_REQ && all_zero(peer_req)) {
+			err = drbd_send_rs_deallocated(peer_device, peer_req);
 		} else {
-			if (drbd_ratelimit())
-				drbd_err(device, "Not sending RSDataReply, "
-				    "partner DISKLESS!\n");
-			err = 0;
+			err = drbd_send_block(peer_device, P_RS_DATA_REPLY, peer_req);
 		}
-	} else {
-		if (drbd_ratelimit())
-			drbd_err(device, "Sending NegRSDReply. sector %llus.\n",
-			    (unsigned long long)peer_req->i.sector);
+		*expect_ack = true;
 
-		err = drbd_send_ack(peer_device, P_NEG_RS_DREPLY, peer_req);
+		drbd_peer_req_strip_bio(peer_req);
 
-		/* update resync data with failure */
-		drbd_rs_failed_io(peer_device, peer_req->i.sector, peer_req->i.size);
+		drbd_resync_read_req_mod(peer_req, INTERVAL_SENT);
+		peer_req = NULL;
 	}
-	if (unlikely(err))
-		drbd_err(device, "drbd_send_block() failed\n");
-out:
-	dec_unacked(device);
-	drbd_free_peer_req(device, peer_req);
 
 	return err;
 }
 
-int w_e_end_csum_rs_req(struct drbd_work *w, int cancel)
+/**
+ * w_e_end_rsdata_req() - Reply to a resync request.
+ * @w:		work object.
+ * @cancel:	The connection is being closed
+ *
+ * Worker callback to send P_RS_DATA_REPLY or a related packet after completing
+ * a resync read.
+ *
+ * Return: Error code or 0 on success.
+ */
+int w_e_end_rsdata_req(struct drbd_work *w, int cancel)
 {
 	struct drbd_peer_request *peer_req = container_of(w, struct drbd_peer_request, w);
 	struct drbd_peer_device *peer_device = peer_req->peer_device;
-	struct drbd_device *device = peer_device->device;
-	struct digest_info *di;
-	int digest_size;
-	void *digest = NULL;
-	int err, eq = 0;
-
-	if (unlikely(cancel)) {
-		err = 0;
-		goto out;
-	}
+	struct drbd_connection *connection = peer_device->connection;
+	int err;
+	bool expect_ack = false;
 
-	if (get_ldev(device)) {
-		drbd_rs_complete_io(device, peer_req->i.sector);
-		put_ldev(device);
+	if (unlikely(cancel) || connection->cstate[NOW] < C_CONNECTED) {
+		drbd_remove_peer_req_interval(peer_req);
+		drbd_free_peer_req(peer_req);
+		dec_unacked(peer_device);
+		return 0;
 	}
 
-	di = peer_req->digest;
-
-	if (likely((peer_req->flags & EE_WAS_ERROR) == 0)) {
-		/* quick hack to try to avoid a race against reconfiguration.
-		 * a real fix would be much more involved,
-		 * introducing more locking mechanisms */
-		if (peer_device->connection->csums_tfm) {
-			digest_size = crypto_shash_digestsize(peer_device->connection->csums_tfm);
-			D_ASSERT(device, digest_size == di->digest_size);
-			digest = kmalloc(digest_size, GFP_NOIO);
-		}
-		if (digest) {
-			drbd_csum_ee(peer_device->connection->csums_tfm, peer_req, digest);
-			eq = !memcmp(digest, di->digest, digest_size);
-			kfree(digest);
-		}
-
-		if (eq) {
-			drbd_set_in_sync(peer_device, peer_req->i.sector, peer_req->i.size);
-			/* rs_same_csums unit is BM_BLOCK_SIZE */
-			device->rs_same_csum += peer_req->i.size >> BM_BLOCK_SHIFT;
-			err = drbd_send_ack(peer_device, P_RS_IS_IN_SYNC, peer_req);
+	if (peer_device->repl_state[NOW] == L_AHEAD) {
+		err = drbd_send_ack(peer_device, P_RS_CANCEL, peer_req);
+	} else if (likely((peer_req->flags & EE_WAS_ERROR) == 0)) {
+		if (unlikely(peer_device->disk_state[NOW] < D_INCONSISTENT)) {
+			if (connection->agreed_features & DRBD_FF_RESYNC_DAGTAG) {
+				drbd_err_ratelimit(peer_device,
+						"Sending P_RS_CANCEL, partner DISKLESS!\n");
+				err = drbd_send_ack(peer_device, P_RS_CANCEL, peer_req);
+			} else {
+				/*
+				 * A peer that does not support DRBD_FF_RESYNC_DAGTAG does not
+				 * expect to receive P_RS_CANCEL after losing its disk.
+				 */
+				drbd_err_ratelimit(peer_device,
+						"Not sending resync reply, partner DISKLESS!\n");
+				err = 0;
+			}
+		} else if (connection->agreed_pro_version >= 110 &&
+				!(connection->agreed_features & DRBD_FF_RESYNC_DAGTAG) &&
+				al_resync_extent_active(peer_device->device,
+					peer_req->i.sector, peer_req->i.size)) {
+			/* DRBD versions without DRBD_FF_RESYNC_DAGTAG lock
+			 * 128MiB "resync extents" in the activity log whenever
+			 * they make resync requests. Some of these versions
+			 * also lock activity lock extents when receiving
+			 * also lock activity log extents when receiving
+			 * cause a deadlock if we send resync replies in these
+			 * extents as follows:
+			 * * Node is SyncTarget towards us
+			 * * Node locks a resync extent and sends P_RS_DATA_REQUEST
+			 * * Node receives P_DATA write in this extent; write
+			 *   waits for resync extent to be unlocked
+			 * * Node receives P_BARRIER (protocol A); receiver
+			 *   thread blocks waiting for write to complete
+			 * * We reply to P_RS_DATA_REQUEST, but it is never
+			 *   processed because receiver thread is blocked
+			 *
+			 * Break the deadlock by canceling instead. This is
+			 * sent on the control socket so it will be processed. */
+			dynamic_drbd_dbg(peer_device,
+					"Cancel resync request at %llus+%u due to activity",
+					(unsigned long long) peer_req->i.sector, peer_req->i.size);
+
+			err = drbd_send_ack(peer_device, P_RS_CANCEL, peer_req);
 		} else {
-			inc_rs_pending(peer_device);
-			peer_req->block_id = ID_SYNCER; /* By setting block_id, digest pointer becomes invalid! */
-			peer_req->flags &= ~EE_HAS_DIGEST; /* This peer request no longer has a digest pointer */
-			kfree(di);
-			err = drbd_send_block(peer_device, P_RS_DATA_REPLY, peer_req);
+			err = drbd_rs_reply(peer_device, peer_req, &expect_ack);
+
+			/* If expect_ack is true, peer_req may already have been freed. */
+			if (expect_ack)
+				peer_req = NULL;
 		}
 	} else {
+		drbd_err_ratelimit(peer_device, "Sending NegRSDReply. sector %llus.\n",
+		    (unsigned long long)peer_req->i.sector);
+
 		err = drbd_send_ack(peer_device, P_NEG_RS_DREPLY, peer_req);
-		if (drbd_ratelimit())
-			drbd_err(device, "Sending NegDReply. I guess it gets messy.\n");
+
+		/* update resync data with failure */
+		drbd_rs_failed_io(peer_device, peer_req->i.sector, peer_req->i.size);
+	}
+
+	dec_unacked(peer_device);
+
+	if (!expect_ack) {
+		drbd_remove_peer_req_interval(peer_req);
+		drbd_free_peer_req(peer_req);
 	}
-	if (unlikely(err))
-		drbd_err(device, "drbd_send_block/ack() failed\n");
-out:
-	dec_unacked(device);
-	drbd_free_peer_req(device, peer_req);
 
+	if (unlikely(err))
+		drbd_err(peer_device, "Sending resync reply failed\n");
 	return err;
 }
 
@@ -1214,97 +2368,204 @@ int w_e_end_ov_req(struct drbd_work *w, int cancel)
 	struct drbd_peer_request *peer_req = container_of(w, struct drbd_peer_request, w);
 	struct drbd_peer_device *peer_device = peer_req->peer_device;
 	struct drbd_device *device = peer_device->device;
-	sector_t sector = peer_req->i.sector;
-	unsigned int size = peer_req->i.size;
+	struct drbd_connection *connection = peer_device->connection;
 	int digest_size;
 	void *digest;
+	sector_t sector = peer_req->i.sector;
+	unsigned int size = peer_req->i.size;
+	struct dagtag_find_result dagtag_result;
 	int err = 0;
+	enum drbd_packet cmd = connection->agreed_features & DRBD_FF_RESYNC_DAGTAG ?
+		P_OV_DAGTAG_REPLY : P_OV_REPLY;
 
-	if (unlikely(cancel))
+	if (unlikely(cancel) || connection->cstate[NOW] < C_CONNECTED)
+		goto out;
+
+	if (!(connection->agreed_features & DRBD_FF_RESYNC_DAGTAG) &&
+		al_resync_extent_active(peer_device->device, peer_req->i.sector, peer_req->i.size)) {
+		/* A peer that does not support DRBD_FF_RESYNC_DAGTAG expects
+		 * online verify to be exclusive with 128MiB "resync extents"
+		 * in the activity log. If such a verify source sends a request
+		 * but we receive an overlapping write before the request then
+		 * we will read newer data for the verify transaction than the
+		 * source did. So we may detect spurious out-of-sync blocks.
+		 *
+		 * In addition, we may trigger a deadlock in such a peer by
+		 * sending a reply if it is waiting for writes to drain due to
+		 * a P_BARRIER packet. See w_e_end_rsdata_req for details.
+		 *
+		 * Prevent these issues by canceling instead.
+		 */
+		dynamic_drbd_dbg(peer_device,
+				"Cancel online verify request at %llus+%u due to activity",
+				(unsigned long long) peer_req->i.sector, peer_req->i.size);
+
+		spin_lock_irq(&device->interval_lock);
+		set_bit(INTERVAL_CONFLICT, &peer_req->i.flags);
+		spin_unlock_irq(&device->interval_lock);
+	}
+
+	if (test_bit(INTERVAL_CONFLICT, &peer_req->i.flags)) {
+		if (connection->agreed_pro_version < 110) {
+			if (drbd_ratelimit())
+				drbd_warn(peer_device, "Verify request conflicts but cannot cancel, "
+						"peer may report spurious out-of-sync\n");
+		} else {
+			drbd_verify_skipped_block(peer_device, sector, size);
+			verify_progress(peer_device, sector, size);
+			drbd_send_ack(peer_device, P_RS_CANCEL, peer_req);
+			goto out;
+		}
+	}
+
+	dagtag_result = find_current_dagtag(peer_device->device->resource);
+	if (dagtag_result.err)
 		goto out;
 
+	set_bit(INTERVAL_READY_TO_SEND, &peer_req->i.flags);
+
 	digest_size = crypto_shash_digestsize(peer_device->connection->verify_tfm);
-	digest = kmalloc(digest_size, GFP_NOIO);
+	/* FIXME if this allocation fails, online verify will not terminate! */
+	digest = drbd_prepare_drequest_csum(peer_req, cmd, digest_size,
+			dagtag_result.node_id, dagtag_result.dagtag);
 	if (!digest) {
-		err = 1;	/* terminate the connection in case the allocation failed */
+		err = -ENOMEM;
 		goto out;
 	}
 
-	if (likely(!(peer_req->flags & EE_WAS_ERROR)))
-		drbd_csum_ee(peer_device->connection->verify_tfm, peer_req, digest);
+	if (!(peer_req->flags & EE_WAS_ERROR))
+		drbd_csum_bios(peer_device->connection->verify_tfm, &peer_req->bios, digest);
 	else
 		memset(digest, 0, digest_size);
 
-	/* Free e and pages before send.
+	/* Free pages before send.
 	 * In case we block on congestion, we could otherwise run into
 	 * some distributed deadlock, if the other side blocks on
 	 * congestion as well, because our receiver blocks in
 	 * drbd_alloc_pages due to pp_in_use > max_buffers. */
-	drbd_free_peer_req(device, peer_req);
-	peer_req = NULL;
+	drbd_peer_req_strip_bio(peer_req);
+
 	inc_rs_pending(peer_device);
-	err = drbd_send_drequest_csum(peer_device, sector, size, digest, digest_size, P_OV_REPLY);
+
+	err = drbd_send_command(peer_device, cmd, DATA_STREAM);
 	if (err)
-		dec_rs_pending(peer_device);
-	kfree(digest);
+		goto out_rs_pending;
+
+	dec_unacked(peer_device);
+	return 0;
 
+out_rs_pending:
+	dec_rs_pending(peer_device);
 out:
-	if (peer_req)
-		drbd_free_peer_req(device, peer_req);
-	dec_unacked(device);
+	drbd_remove_peer_req_interval(peer_req);
+	drbd_free_peer_req(peer_req);
+	dec_unacked(peer_device);
 	return err;
 }
 
 void drbd_ov_out_of_sync_found(struct drbd_peer_device *peer_device, sector_t sector, int size)
 {
-	struct drbd_device *device = peer_device->device;
-	if (device->ov_last_oos_start + device->ov_last_oos_size == sector) {
-		device->ov_last_oos_size += size>>9;
+	if (peer_device->ov_last_oos_start + peer_device->ov_last_oos_size == sector) {
+		peer_device->ov_last_oos_size += size>>9;
 	} else {
-		device->ov_last_oos_start = sector;
-		device->ov_last_oos_size = size>>9;
+		ov_out_of_sync_print(peer_device);
+		peer_device->ov_last_oos_start = sector;
+		peer_device->ov_last_oos_size = size>>9;
 	}
 	drbd_set_out_of_sync(peer_device, sector, size);
 }
 
-int w_e_end_ov_reply(struct drbd_work *w, int cancel)
+void verify_progress(struct drbd_peer_device *peer_device,
+		const sector_t sector, const unsigned int size)
+{
+	bool stop_sector_reached =
+		(peer_device->repl_state[NOW] == L_VERIFY_S) &&
+		verify_can_do_stop_sector(peer_device) &&
+		(sector + (size>>9)) >= peer_device->ov_stop_sector;
+
+	unsigned long ov_left = atomic64_dec_return(&peer_device->ov_left);
+
+	/* let's advance progress step marks only for every other megabyte */
+	if ((ov_left & 0x1ff) == 0)
+		drbd_advance_rs_marks(peer_device, ov_left);
+
+	if (ov_left == 0 || stop_sector_reached)
+		drbd_peer_device_post_work(peer_device, RS_DONE);
+}
+
+static bool digest_equal(struct drbd_peer_request *peer_req)
 {
-	struct drbd_peer_request *peer_req = container_of(w, struct drbd_peer_request, w);
 	struct drbd_peer_device *peer_device = peer_req->peer_device;
 	struct drbd_device *device = peer_device->device;
 	struct digest_info *di;
 	void *digest;
+	int digest_size;
+	bool eq = false;
+
+	di = peer_req->digest;
+
+	digest_size = crypto_shash_digestsize(peer_device->connection->verify_tfm);
+	digest = kmalloc(digest_size, GFP_NOIO);
+	if (digest) {
+		drbd_csum_bios(peer_device->connection->verify_tfm, &peer_req->bios, digest);
+
+		D_ASSERT(device, digest_size == di->digest_size);
+		eq = !memcmp(digest, di->digest, digest_size);
+		kfree(digest);
+	}
+
+	return eq;
+}
+
+int w_e_end_ov_reply(struct drbd_work *w, int cancel)
+{
+	struct drbd_peer_request *peer_req = container_of(w, struct drbd_peer_request, w);
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_connection *connection = peer_device->connection;
 	sector_t sector = peer_req->i.sector;
 	unsigned int size = peer_req->i.size;
-	int digest_size;
-	int err, eq = 0;
-	bool stop_sector_reached = false;
+	u64 block_id = peer_req->block_id;
+	enum ov_result result;
+	bool al_conflict = false;
+	int err;
 
 	if (unlikely(cancel)) {
-		drbd_free_peer_req(device, peer_req);
-		dec_unacked(device);
+		drbd_remove_peer_req_interval(peer_req);
+		drbd_free_peer_req(peer_req);
+		dec_unacked(peer_device);
 		return 0;
 	}
 
-	/* after "cancel", because after drbd_disconnect/drbd_rs_cancel_all
-	 * the resync lru has been cleaned up already */
-	if (get_ldev(device)) {
-		drbd_rs_complete_io(device, peer_req->i.sector);
-		put_ldev(device);
-	}
-
-	di = peer_req->digest;
+	if (!(connection->agreed_features & DRBD_FF_RESYNC_DAGTAG) &&
+		al_resync_extent_active(peer_device->device, peer_req->i.sector, peer_req->i.size)) {
+		/* A peer that does not support DRBD_FF_RESYNC_DAGTAG expects
+		 * online verify to be exclusive with 128MiB "resync extents"
+		 * in the activity log. We may have received an overlapping
+		 * write before issuing this read, which the peer did not have
+		 * at the time of its read. So we may detect spurious
+		 * out-of-sync blocks.
+		 *
+		 * Prevent this by skipping instead.
+		 */
+		dynamic_drbd_dbg(peer_device,
+				"Skip online verify block at %llus+%u due to activity",
+				(unsigned long long) peer_req->i.sector, peer_req->i.size);
 
-	if (likely((peer_req->flags & EE_WAS_ERROR) == 0)) {
-		digest_size = crypto_shash_digestsize(peer_device->connection->verify_tfm);
-		digest = kmalloc(digest_size, GFP_NOIO);
-		if (digest) {
-			drbd_csum_ee(peer_device->connection->verify_tfm, peer_req, digest);
+		al_conflict = true;
+	}
 
-			D_ASSERT(device, digest_size == di->digest_size);
-			eq = !memcmp(digest, di->digest, digest_size);
-			kfree(digest);
-		}
+	if (test_bit(INTERVAL_CONFLICT, &peer_req->i.flags) || al_conflict) {
+		/* DRBD versions without DRBD_FF_RESYNC_DAGTAG do not know about
+		 * OV_RESULT_SKIP; they treat it the same as OV_RESULT_IN_SYNC,
+		 * which is the best we can do here anyway. */
+		result = OV_RESULT_SKIP;
+		drbd_verify_skipped_block(peer_device, sector, size);
+	} else if (likely((peer_req->flags & EE_WAS_ERROR) == 0) && digest_equal(peer_req)) {
+		result = OV_RESULT_IN_SYNC;
+		ov_out_of_sync_print(peer_device);
+	} else {
+		result = OV_RESULT_OUT_OF_SYNC;
+		drbd_ov_out_of_sync_found(peer_device, sector, size);
 	}
 
 	/* Free peer_req and pages before send.
@@ -1312,30 +2573,15 @@ int w_e_end_ov_reply(struct drbd_work *w, int cancel)
 	 * some distributed deadlock, if the other side blocks on
 	 * congestion as well, because our receiver blocks in
 	 * drbd_alloc_pages due to pp_in_use > max_buffers. */
-	drbd_free_peer_req(device, peer_req);
-	if (!eq)
-		drbd_ov_out_of_sync_found(peer_device, sector, size);
-	else
-		ov_out_of_sync_print(peer_device);
-
-	err = drbd_send_ack_ex(peer_device, P_OV_RESULT, sector, size,
-			       eq ? ID_IN_SYNC : ID_OUT_OF_SYNC);
-
-	dec_unacked(device);
-
-	--device->ov_left;
+	drbd_remove_peer_req_interval(peer_req);
+	drbd_free_peer_req(peer_req);
+	peer_req = NULL;
 
-	/* let's advance progress step marks only for every other megabyte */
-	if ((device->ov_left & 0x200) == 0x200)
-		drbd_advance_rs_marks(peer_device, device->ov_left);
+	err = drbd_send_ov_result(peer_device, sector, size, block_id, result);
 
-	stop_sector_reached = verify_can_do_stop_sector(device) &&
-		(sector + (size>>9)) >= device->ov_stop_sector;
+	dec_unacked(peer_device);
 
-	if (device->ov_left == 0 || stop_sector_reached) {
-		ov_out_of_sync_print(peer_device);
-		drbd_resync_finished(peer_device);
-	}
+	verify_progress(peer_device, sector, size);
 
 	return err;
 }
@@ -1348,194 +2594,92 @@ int w_e_end_ov_reply(struct drbd_work *w, int cancel)
 static int drbd_send_barrier(struct drbd_connection *connection)
 {
 	struct p_barrier *p;
-	struct drbd_socket *sock;
+	int err;
 
-	sock = &connection->data;
-	p = conn_prepare_command(connection, sock);
+	p = conn_prepare_command(connection, sizeof(*p), DATA_STREAM);
 	if (!p)
 		return -EIO;
+
 	p->barrier = connection->send.current_epoch_nr;
 	p->pad = 0;
+	connection->send.last_sent_epoch_nr = connection->send.current_epoch_nr;
 	connection->send.current_epoch_writes = 0;
 	connection->send.last_sent_barrier_jif = jiffies;
 
-	return conn_send_command(connection, sock, P_BARRIER, sizeof(*p), NULL, 0);
-}
-
-static int pd_send_unplug_remote(struct drbd_peer_device *pd)
-{
-	struct drbd_socket *sock = &pd->connection->data;
-	if (!drbd_prepare_command(pd, sock))
-		return -EIO;
-	return drbd_send_command(pd, sock, P_UNPLUG_REMOTE, 0, NULL, 0);
-}
-
-int w_send_write_hint(struct drbd_work *w, int cancel)
-{
-	struct drbd_device *device =
-		container_of(w, struct drbd_device, unplug_work);
-
-	if (cancel)
-		return 0;
-	return pd_send_unplug_remote(first_peer_device(device));
-}
-
-static void re_init_if_first_write(struct drbd_connection *connection, unsigned int epoch)
-{
-	if (!connection->send.seen_any_write_yet) {
-		connection->send.seen_any_write_yet = true;
-		connection->send.current_epoch_nr = epoch;
-		connection->send.current_epoch_writes = 0;
-		connection->send.last_sent_barrier_jif = jiffies;
-	}
-}
-
-static void maybe_send_barrier(struct drbd_connection *connection, unsigned int epoch)
-{
-	/* re-init if first write on this connection */
-	if (!connection->send.seen_any_write_yet)
-		return;
-	if (connection->send.current_epoch_nr != epoch) {
-		if (connection->send.current_epoch_writes)
-			drbd_send_barrier(connection);
-		connection->send.current_epoch_nr = epoch;
-	}
-}
-
-int w_send_out_of_sync(struct drbd_work *w, int cancel)
-{
-	struct drbd_request *req = container_of(w, struct drbd_request, w);
-	struct drbd_device *device = req->device;
-	struct drbd_peer_device *const peer_device = first_peer_device(device);
-	struct drbd_connection *const connection = peer_device->connection;
-	int err;
-
-	if (unlikely(cancel)) {
-		req_mod(req, SEND_CANCELED, peer_device);
-		return 0;
-	}
-	req->pre_send_jif = jiffies;
-
-	/* this time, no connection->send.current_epoch_writes++;
-	 * If it was sent, it was the closing barrier for the last
-	 * replicated epoch, before we went into AHEAD mode.
-	 * No more barriers will be sent, until we leave AHEAD mode again. */
-	maybe_send_barrier(connection, req->epoch);
-
-	err = drbd_send_out_of_sync(peer_device, req);
-	req_mod(req, OOS_HANDED_TO_NETWORK, peer_device);
-
-	return err;
-}
-
-/**
- * w_send_dblock() - Worker callback to send a P_DATA packet in order to mirror a write request
- * @w:		work object.
- * @cancel:	The connection will be closed anyways
- */
-int w_send_dblock(struct drbd_work *w, int cancel)
-{
-	struct drbd_request *req = container_of(w, struct drbd_request, w);
-	struct drbd_device *device = req->device;
-	struct drbd_peer_device *const peer_device = first_peer_device(device);
-	struct drbd_connection *connection = peer_device->connection;
-	bool do_send_unplug = req->rq_state & RQ_UNPLUG;
-	int err;
-
-	if (unlikely(cancel)) {
-		req_mod(req, SEND_CANCELED, peer_device);
-		return 0;
+	set_bit(BARRIER_ACK_PENDING, &connection->flags);
+	err = send_command(connection, -1, P_BARRIER, DATA_STREAM);
+	if (err) {
+		clear_bit(BARRIER_ACK_PENDING, &connection->flags);
+		wake_up(&connection->resource->barrier_wait);
 	}
-	req->pre_send_jif = jiffies;
-
-	re_init_if_first_write(connection, req->epoch);
-	maybe_send_barrier(connection, req->epoch);
-	connection->send.current_epoch_writes++;
-
-	err = drbd_send_dblock(peer_device, req);
-	req_mod(req, err ? SEND_FAILED : HANDED_OVER_TO_NETWORK, peer_device);
-
-	if (do_send_unplug && !err)
-		pd_send_unplug_remote(peer_device);
-
 	return err;
 }
 
-/**
- * w_send_read_req() - Worker callback to send a read request (P_DATA_REQUEST) packet
- * @w:		work object.
- * @cancel:	The connection will be closed anyways
- */
-int w_send_read_req(struct drbd_work *w, int cancel)
+static bool need_unplug(struct drbd_connection *connection)
 {
-	struct drbd_request *req = container_of(w, struct drbd_request, w);
-	struct drbd_device *device = req->device;
-	struct drbd_peer_device *const peer_device = first_peer_device(device);
-	struct drbd_connection *connection = peer_device->connection;
-	bool do_send_unplug = req->rq_state & RQ_UNPLUG;
-	int err;
-
-	if (unlikely(cancel)) {
-		req_mod(req, SEND_CANCELED, peer_device);
-		return 0;
-	}
-	req->pre_send_jif = jiffies;
-
-	/* Even read requests may close a write epoch,
-	 * if there was any yet. */
-	maybe_send_barrier(connection, req->epoch);
-
-	err = drbd_send_drequest(peer_device, P_DATA_REQUEST, req->i.sector, req->i.size,
-				 (unsigned long)req);
-
-	req_mod(req, err ? SEND_FAILED : HANDED_OVER_TO_NETWORK, peer_device);
-
-	if (do_send_unplug && !err)
-		pd_send_unplug_remote(peer_device);
-
-	return err;
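+	/* An unplug hint is due once the data stream has progressed to the
+	 * dagtag recorded when the hint was queued. */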
+	unsigned i = connection->todo.unplug_slot;
+	return dagtag_newer_eq(connection->send.current_dagtag_sector,
+			connection->todo.unplug_dagtag_sector[i]);
 }
 
-int w_restart_disk_io(struct drbd_work *w, int cancel)
+static void maybe_send_unplug_remote(struct drbd_connection *connection, bool send_anyways)
 {
-	struct drbd_request *req = container_of(w, struct drbd_request, w);
-	struct drbd_device *device = req->device;
+	if (need_unplug(connection)) {
+		/* Yes, this is non-atomic wrt. its use in drbd_unplug_fn.
+		 * We save a spin_lock_irq, and worst case
+		 * we occasionally miss an unplug event. */
+
+		/* Paranoia: to avoid a continuous stream of unplug-hints,
+		 * in case we never get any unplug events */
+		connection->todo.unplug_dagtag_sector[connection->todo.unplug_slot] =
+			connection->send.current_dagtag_sector + (1ULL << 63);
+		/* advance the current unplug slot */
+		connection->todo.unplug_slot ^= 1;
+	} else if (!send_anyways)
+		return;
 
-	if (bio_data_dir(req->master_bio) == WRITE && req->rq_state & RQ_IN_ACT_LOG)
-		drbd_al_begin_io(device, &req->i);
+	if (connection->cstate[NOW] < C_CONNECTED)
+		return;
 
-	req->private_bio = bio_alloc_clone(device->ldev->backing_bdev,
-					   req->master_bio, GFP_NOIO,
-					  &drbd_io_bio_set);
-	req->private_bio->bi_private = req;
-	req->private_bio->bi_end_io = drbd_request_endio;
-	submit_bio_noacct(req->private_bio);
+	if (!conn_prepare_command(connection, 0, DATA_STREAM))
+		return;
 
-	return 0;
+	send_command(connection, -1, P_UNPLUG_REMOTE, DATA_STREAM);
 }
 
-static int _drbd_may_sync_now(struct drbd_device *device)
+static bool __drbd_may_sync_now(struct drbd_peer_device *peer_device)
 {
-	struct drbd_device *odev = device;
-	int resync_after;
+	struct drbd_device *other_device = peer_device->device;
+	int ret = true;
 
+	rcu_read_lock();
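+	/* Walk the resync-after dependency chain; any ongoing or suspended
+	 * resync on a device we depend on means we must not sync now. */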
 	while (1) {
-		if (!odev->ldev || odev->state.disk == D_DISKLESS)
-			return 1;
-		rcu_read_lock();
-		resync_after = rcu_dereference(odev->ldev->disk_conf)->resync_after;
-		rcu_read_unlock();
+		struct drbd_peer_device *other_peer_device;
+		int resync_after;
+
+		if (!other_device->ldev || other_device->disk_state[NOW] == D_DISKLESS)
+			break;
+		resync_after = rcu_dereference(other_device->ldev->disk_conf)->resync_after;
 		if (resync_after == -1)
-			return 1;
-		odev = minor_to_device(resync_after);
-		if (!odev)
-			return 1;
-		if ((odev->state.conn >= C_SYNC_SOURCE &&
-		     odev->state.conn <= C_PAUSED_SYNC_T) ||
-		    odev->state.aftr_isp || odev->state.peer_isp ||
-		    odev->state.user_isp)
-			return 0;
+			break;
+		other_device = minor_to_device(resync_after);
+		if (!other_device)
+			break;
+		for_each_peer_device_rcu(other_peer_device, other_device) {
+			if ((other_peer_device->repl_state[NOW] >= L_SYNC_SOURCE &&
+			     other_peer_device->repl_state[NOW] <= L_PAUSED_SYNC_T) ||
+			    other_peer_device->resync_susp_dependency[NOW] ||
+			    other_peer_device->resync_susp_peer[NOW] ||
+			    other_peer_device->resync_susp_user[NOW]) {
+				ret = false;
+				goto break_unlock;
+			}
+		}
 	}
+break_unlock:
+	rcu_read_unlock();
+
+	return ret;
 }
 
 /**
@@ -1546,21 +2690,32 @@ static int _drbd_may_sync_now(struct drbd_device *device)
  */
 static bool drbd_pause_after(struct drbd_device *device)
 {
+	struct drbd_device *other_device;
 	bool changed = false;
-	struct drbd_device *odev;
-	int i;
+	int vnr;
 
+	/* FIXME seriously inefficient with many devices,
+	 * while also ignoring the input "device" argument :-( */
 	rcu_read_lock();
-	idr_for_each_entry(&drbd_devices, odev, i) {
-		if (odev->state.conn == C_STANDALONE && odev->state.disk == D_DISKLESS)
+	idr_for_each_entry(&drbd_devices, other_device, vnr) {
+		struct drbd_peer_device *other_peer_device;
+
+		begin_state_change_locked(other_device->resource, CS_HARD);
+		if (other_device->disk_state[NOW] == D_DISKLESS) {
+			abort_state_change_locked(other_device->resource);
 			continue;
-		if (!_drbd_may_sync_now(odev) &&
-		    _drbd_set_state(_NS(odev, aftr_isp, 1),
-				    CS_HARD, NULL) != SS_NOTHING_TO_DO)
+		}
+		for_each_peer_device_rcu(other_peer_device, other_device) {
+			if (other_peer_device->repl_state[NOW] == L_OFF)
+				continue;
+			if (!__drbd_may_sync_now(other_peer_device))
+				__change_resync_susp_dependency(other_peer_device, true);
+		}
+		if (end_state_change_locked(other_device->resource, "resync-after") !=
+				SS_NOTHING_TO_DO)
 			changed = true;
 	}
 	rcu_read_unlock();
-
 	return changed;
 }
 
@@ -1568,24 +2723,35 @@ static bool drbd_pause_after(struct drbd_device *device)
  * drbd_resume_next() - Resume resync on all devices that may resync now
  * @device:	DRBD device.
  *
- * Called from process context only (admin command and worker).
+ * Called from process context only (admin command and sender).
  */
 static bool drbd_resume_next(struct drbd_device *device)
 {
+	struct drbd_device *other_device;
 	bool changed = false;
-	struct drbd_device *odev;
-	int i;
+	int vnr;
 
+	/* FIXME seriously inefficient with many devices,
+	 * while also ignoring the input "device" argument :-( */
 	rcu_read_lock();
-	idr_for_each_entry(&drbd_devices, odev, i) {
-		if (odev->state.conn == C_STANDALONE && odev->state.disk == D_DISKLESS)
+	idr_for_each_entry(&drbd_devices, other_device, vnr) {
+		struct drbd_peer_device *other_peer_device;
+
+		begin_state_change_locked(other_device->resource, CS_HARD);
+		if (other_device->disk_state[NOW] == D_DISKLESS) {
+			abort_state_change_locked(other_device->resource);
 			continue;
-		if (odev->state.aftr_isp) {
-			if (_drbd_may_sync_now(odev) &&
-			    _drbd_set_state(_NS(odev, aftr_isp, 0),
-					    CS_HARD, NULL) != SS_NOTHING_TO_DO)
-				changed = true;
 		}
+		for_each_peer_device_rcu(other_peer_device, other_device) {
+			if (other_peer_device->repl_state[NOW] == L_OFF)
+				continue;
+			if (other_peer_device->resync_susp_dependency[NOW] &&
+			    __drbd_may_sync_now(other_peer_device))
+				__change_resync_susp_dependency(other_peer_device, false);
+		}
+		if (end_state_change_locked(other_device->resource, "resync-after") !=
+				SS_NOTHING_TO_DO)
+			changed = true;
 	}
 	rcu_read_unlock();
 	return changed;
@@ -1594,84 +2760,92 @@ static bool drbd_resume_next(struct drbd_device *device)
 void resume_next_sg(struct drbd_device *device)
 {
 	lock_all_resources();
-	drbd_resume_next(device);
+	while (drbd_resume_next(device))
+		; /* Iterate if some state changed. */
 	unlock_all_resources();
 }
 
 void suspend_other_sg(struct drbd_device *device)
 {
 	lock_all_resources();
-	drbd_pause_after(device);
+	while (drbd_pause_after(device))
+		; /* Iterate if some state changed. */
 	unlock_all_resources();
 }
 
-/* caller must lock_all_resources() */
-enum drbd_ret_code drbd_resync_after_valid(struct drbd_device *device, int o_minor)
+/* caller must hold resources_mutex */
+enum drbd_ret_code drbd_resync_after_valid(struct drbd_device *device, int resync_after)
 {
-	struct drbd_device *odev;
-	int resync_after;
+	struct drbd_device *other_device;
+	int rv = NO_ERROR;
 
-	if (o_minor == -1)
+	if (resync_after == -1)
 		return NO_ERROR;
-	if (o_minor < -1 || o_minor > MINORMASK)
+	if (resync_after < -1)
 		return ERR_RESYNC_AFTER;
+	other_device = minor_to_device(resync_after);
+
+	/* You are free to depend on diskless, non-existing,
+	 * or not yet/no longer existing minors.
+	 * We only reject dependency loops.
+	 * We cannot follow the dependency chain beyond a detached or
+	 * missing minor.
+	 */
+	if (!other_device)
+		return NO_ERROR;
 
 	/* check for loops */
-	odev = minor_to_device(o_minor);
+	rcu_read_lock();
 	while (1) {
-		if (odev == device)
-			return ERR_RESYNC_AFTER_CYCLE;
-
-		/* You are free to depend on diskless, non-existing,
-		 * or not yet/no longer existing minors.
-		 * We only reject dependency loops.
-		 * We cannot follow the dependency chain beyond a detached or
-		 * missing minor.
-		 */
-		if (!odev || !odev->ldev || odev->state.disk == D_DISKLESS)
-			return NO_ERROR;
+		if (other_device == device) {
+			rv = ERR_RESYNC_AFTER_CYCLE;
+			break;
+		}
+
+		if (!other_device)
+			break;
+
+		if (!get_ldev_if_state(other_device, D_NEGOTIATING))
+			break;
+		resync_after = rcu_dereference(other_device->ldev->disk_conf)->resync_after;
+		put_ldev(other_device);
 
-		rcu_read_lock();
-		resync_after = rcu_dereference(odev->ldev->disk_conf)->resync_after;
-		rcu_read_unlock();
 		/* dependency chain ends here, no cycles. */
 		if (resync_after == -1)
-			return NO_ERROR;
+			break;
 
 		/* follow the dependency chain */
-		odev = minor_to_device(resync_after);
+		other_device = minor_to_device(resync_after);
 	}
+	rcu_read_unlock();
+
+	return rv;
 }
 
-/* caller must lock_all_resources() */
+/* caller must hold resources_mutex */
 void drbd_resync_after_changed(struct drbd_device *device)
 {
-	int changed;
-
-	do {
-		changed  = drbd_pause_after(device);
-		changed |= drbd_resume_next(device);
-	} while (changed);
+	while (drbd_pause_after(device) || drbd_resume_next(device))
+		/* do nothing */ ;
 }
 
 void drbd_rs_controller_reset(struct drbd_peer_device *peer_device)
 {
-	struct drbd_device *device = peer_device->device;
-	struct gendisk *disk = device->ldev->backing_bdev->bd_disk;
+	struct gendisk *disk = peer_device->device->ldev->backing_bdev->bd_disk;
 	struct fifo_buffer *plan;
 
-	atomic_set(&device->rs_sect_in, 0);
-	atomic_set(&device->rs_sect_ev, 0);
-	device->rs_in_flight = 0;
-	device->rs_last_events =
-		(int)part_stat_read_accum(disk->part0, sectors);
+	atomic_set(&peer_device->rs_sect_in, 0);
+	atomic_set(&peer_device->device->rs_sect_ev, 0);  /* FIXME: ??? */
+	peer_device->rs_last_mk_req_kt = ktime_get();
+	peer_device->rs_in_flight = 0;
+	peer_device->rs_last_events = (int)part_stat_read_accum(disk->part0, sectors);
 
 	/* Updating the RCU protected object in place is necessary since
 	   this function gets called from atomic context.
 	   It is valid since all other updates also lead to a completely
 	   empty fifo */
 	rcu_read_lock();
-	plan = rcu_dereference(device->rs_plan_s);
+	plan = rcu_dereference(peer_device->rs_plan_s);
 	plan->total = 0;
 	fifo_set(plan, 0);
 	rcu_read_unlock();
@@ -1679,76 +2853,160 @@ void drbd_rs_controller_reset(struct drbd_peer_device *peer_device)
 
 void start_resync_timer_fn(struct timer_list *t)
 {
-	struct drbd_device *device = timer_container_of(device, t,
-							start_resync_timer);
-	drbd_device_post_work(device, RS_START);
+	struct drbd_peer_device *peer_device = timer_container_of(peer_device, t,
+			start_resync_timer);
+	drbd_peer_device_post_work(peer_device, RS_START);
+}
+
+bool drbd_stable_sync_source_present(struct drbd_peer_device *except_peer_device, enum which_state which)
+{
+	struct drbd_device *device = except_peer_device->device;
+	struct drbd_peer_device *peer_device;
+	u64 authoritative_nodes = 0;
+	bool rv = false;
+
+	if (!(except_peer_device->uuid_flags & UUID_FLAG_STABLE))
+		authoritative_nodes = except_peer_device->uuid_node_mask;
+
+	/* If a peer considers itself unstable and sees me as an authoritative
+	   node, then we have a stable resync source! */
+	if (authoritative_nodes & NODE_MASK(device->resource->res_opts.node_id))
+		return true;
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		enum drbd_repl_state repl_state;
+		struct net_conf *nc;
+
+		if (peer_device == except_peer_device)
+			continue;
+
+		repl_state = peer_device->repl_state[which];
+
+		if (repl_state == L_ESTABLISHED ||
+				repl_state == L_WF_BITMAP_S ||
+				(repl_state >= L_SYNC_SOURCE && repl_state < L_AHEAD)) {
+			if (authoritative_nodes & NODE_MASK(peer_device->node_id)) {
+				rv = true;
+				break;
+			}
+
+			nc = rcu_dereference(peer_device->connection->transport.net_conf);
+			/* Restrict this clause to the case where two_primaries is not
+			   allowed; otherwise we would need to ensure here that we are a
+			   neighbor of all primaries, and that is a lot more challenging. */
+
+			if ((!nc->two_primaries &&
+			     peer_device->connection->peer_role[which] == R_PRIMARY) ||
+			    ((repl_state == L_SYNC_TARGET || repl_state == L_PAUSED_SYNC_T) &&
+			     peer_device->uuid_flags & UUID_FLAG_STABLE)) {
+				rv = true;
+				break;
+			}
+		}
+	}
+	rcu_read_unlock();
+
+	return rv;
 }
 
-static void do_start_resync(struct drbd_device *device)
+static void do_start_resync(struct drbd_peer_device *peer_device)
 {
-	if (atomic_read(&device->unacked_cnt) || atomic_read(&device->rs_pending_cnt)) {
-		drbd_warn(device, "postponing start_resync ...\n");
-		device->start_resync_timer.expires = jiffies + HZ/10;
-		add_timer(&device->start_resync_timer);
+	/*
+	 * Postpone resync in any of these situations:
+	 * - There is still activity from a previous resync according to rs_pending_cnt.
+	 * - The resync as SyncTarget is still active.
+	 * - We are about to transition from Ahead to SyncSource and there is still activity on this peer device.
+	 */
+	if (atomic_read(&peer_device->rs_pending_cnt) ||
+			peer_device->resync_active[NOW] ||
+			(peer_device->repl_state[NOW] == L_AHEAD &&
+			 atomic_read(&peer_device->unacked_cnt))) {
+		drbd_warn(peer_device, "postponing start_resync ...\n");
+		mod_timer(&peer_device->start_resync_timer, jiffies + HZ/10);
 		return;
 	}
 
-	drbd_start_resync(device, C_SYNC_SOURCE);
-	clear_bit(AHEAD_TO_SYNC_SOURCE, &device->flags);
+	drbd_start_resync(peer_device, peer_device->start_resync_side, "postponed-resync");
+	clear_bit(AHEAD_TO_SYNC_SOURCE, &peer_device->flags);
 }
 
-static bool use_checksum_based_resync(struct drbd_connection *connection, struct drbd_device *device)
+static void handle_congestion(struct drbd_peer_device *peer_device)
 {
-	bool csums_after_crash_only;
+	struct drbd_resource *resource = peer_device->device->resource;
+	unsigned long irq_flags;
+	struct net_conf *nc;
+	enum drbd_on_congestion on_congestion;
+
 	rcu_read_lock();
-	csums_after_crash_only = rcu_dereference(connection->net_conf)->csums_after_crash_only;
+	nc = rcu_dereference(peer_device->connection->transport.net_conf);
+	if (nc) {
+		on_congestion = nc->on_congestion;
+
+		begin_state_change(resource, &irq_flags, CS_VERBOSE | CS_HARD);
+		/* congestion may have cleared since it was detected */
+		if (atomic_read(&peer_device->connection->ap_in_flight) > 0) {
+			if (on_congestion == OC_PULL_AHEAD)
+				__change_repl_state(peer_device, L_AHEAD);
+			else if (on_congestion == OC_DISCONNECT)
+				__change_cstate(peer_device->connection, C_DISCONNECTING);
+		}
+		end_state_change(resource, &irq_flags, "congestion");
+	}
 	rcu_read_unlock();
-	return connection->agreed_pro_version >= 89 &&		/* supported? */
-		connection->csums_tfm &&			/* configured? */
-		(csums_after_crash_only == false		/* use for each resync? */
-		 || test_bit(CRASHED_PRIMARY, &device->flags));	/* or only after Primary crash? */
+
+	clear_bit(HANDLING_CONGESTION, &peer_device->flags);
 }
 
 /**
  * drbd_start_resync() - Start the resync process
- * @device:	DRBD device.
- * @side:	Either C_SYNC_SOURCE or C_SYNC_TARGET
+ * @peer_device: The DRBD peer device to start the resync on.
+ * @side: Direction of the resync; which side am I? Either L_SYNC_SOURCE or
+ * 	  L_SYNC_TARGET.
+ * @tag: State change tag to print in status messages.
  *
  * This function might bring you directly into one of the
  * C_PAUSED_SYNC_* states.
  */
-void drbd_start_resync(struct drbd_device *device, enum drbd_conns side)
+void drbd_start_resync(struct drbd_peer_device *peer_device, enum drbd_repl_state side,
+		const char *tag)
 {
-	struct drbd_peer_device *peer_device = first_peer_device(device);
-	struct drbd_connection *connection = peer_device ? peer_device->connection : NULL;
-	union drbd_state ns;
+	struct drbd_device *device = peer_device->device;
+	struct drbd_connection *connection = peer_device->connection;
+	enum drbd_disk_state finished_resync_pdsk = D_UNKNOWN;
+	enum drbd_repl_state repl_state;
 	int r;
 
-	if (device->state.conn >= C_SYNC_SOURCE && device->state.conn < C_AHEAD) {
-		drbd_err(device, "Resync already running!\n");
+	read_lock_irq(&device->resource->state_rwlock);
+	repl_state = peer_device->repl_state[NOW];
+	read_unlock_irq(&device->resource->state_rwlock);
+	if (repl_state < L_ESTABLISHED) {
+		/* Connection closed meanwhile. */
 		return;
 	}
-
-	if (!connection) {
-		drbd_err(device, "No connection to peer, aborting!\n");
+	if (repl_state >= L_SYNC_SOURCE && repl_state < L_AHEAD) {
+		drbd_err(peer_device, "Resync already running!\n");
 		return;
 	}
 
-	if (!test_bit(B_RS_H_DONE, &device->flags)) {
-		if (side == C_SYNC_TARGET) {
-			/* Since application IO was locked out during C_WF_BITMAP_T and
-			   C_WF_SYNC_UUID we are still unmodified. Before going to C_SYNC_TARGET
-			   we check that we might make the data inconsistent. */
-			r = drbd_khelper(device, "before-resync-target");
+	if (!test_bit(B_RS_H_DONE, &peer_device->flags)) {
+		if (side == L_SYNC_TARGET) {
+			r = drbd_maybe_khelper(device, connection, "before-resync-target");
+			if (r == DRBD_UMH_DISABLED)
+				goto skip_helper;
+
 			r = (r >> 8) & 0xff;
 			if (r > 0) {
 				drbd_info(device, "before-resync-target handler returned %d, "
 					 "dropping connection.\n", r);
-				conn_request_state(connection, NS(conn, C_DISCONNECTING), CS_HARD);
+				change_cstate(connection, C_DISCONNECTING, CS_HARD);
 				return;
 			}
-		} else /* C_SYNC_SOURCE */ {
-			r = drbd_khelper(device, "before-resync-source");
+		} else /* L_SYNC_SOURCE */ {
+			r = drbd_maybe_khelper(device, connection, "before-resync-source");
+			if (r == DRBD_UMH_DISABLED)
+				goto skip_helper;
+
 			r = (r >> 8) & 0xff;
 			if (r > 0) {
 				if (r == 3) {
@@ -1757,185 +3015,101 @@ void drbd_start_resync(struct drbd_device *device, enum drbd_conns side)
 				} else {
 					drbd_info(device, "before-resync-source handler returned %d, "
 						 "dropping connection.\n", r);
-					conn_request_state(connection,
-							   NS(conn, C_DISCONNECTING), CS_HARD);
+					change_cstate(connection, C_DISCONNECTING, CS_HARD);
 					return;
 				}
 			}
 		}
 	}
 
-	if (current == connection->worker.task) {
-		/* The worker should not sleep waiting for state_mutex,
-		   that can take long */
-		if (!mutex_trylock(device->state_mutex)) {
-			set_bit(B_RS_H_DONE, &device->flags);
-			device->start_resync_timer.expires = jiffies + HZ/5;
-			add_timer(&device->start_resync_timer);
-			return;
+skip_helper:
+
+	if (side == L_SYNC_TARGET && drbd_current_uuid(device) == UUID_JUST_CREATED) {
+		/* prepare to continue an interrupted initial resync later */
+		if (get_ldev(device)) {
+			const int my_node_id = device->resource->res_opts.node_id;
+			u64 peer_bitmap_uuid = peer_device->bitmap_uuids[my_node_id];
+
+			if (peer_bitmap_uuid) {
+				down_write(&device->uuid_sem);
+				_drbd_uuid_set_current(device, peer_bitmap_uuid);
+				up_write(&device->uuid_sem);
+				drbd_print_uuids(peer_device, "setting UUIDs to");
+			}
+			put_ldev(device);
 		}
-	} else {
-		mutex_lock(device->state_mutex);
+	}
+
+	if (down_trylock(&device->resource->state_sem)) {
+		/* Retry later and let the worker make progress in the
+		 * meantime; two-phase commits depend on that.  */
+		set_bit(B_RS_H_DONE, &peer_device->flags);
+		peer_device->start_resync_side = side;
+		mod_timer(&peer_device->start_resync_timer, jiffies + HZ/5);
+		return;
 	}
 
 	lock_all_resources();
-	clear_bit(B_RS_H_DONE, &device->flags);
-	/* Did some connection breakage or IO error race with us? */
-	if (device->state.conn < C_CONNECTED
-	|| !get_ldev_if_state(device, D_NEGOTIATING)) {
+	clear_bit(B_RS_H_DONE, &peer_device->flags);
+	if (connection->cstate[NOW] < C_CONNECTED ||
+	    !get_ldev_if_state(device, D_NEGOTIATING)) {
 		unlock_all_resources();
 		goto out;
 	}
 
-	ns = drbd_read_state(device);
-
-	ns.aftr_isp = !_drbd_may_sync_now(device);
-
-	ns.conn = side;
-
-	if (side == C_SYNC_TARGET)
-		ns.disk = D_INCONSISTENT;
-	else /* side == C_SYNC_SOURCE */
-		ns.pdsk = D_INCONSISTENT;
-
-	r = _drbd_set_state(device, ns, CS_VERBOSE, NULL);
-	ns = drbd_read_state(device);
-
-	if (ns.conn < C_CONNECTED)
+	begin_state_change_locked(device->resource, CS_VERBOSE);
+	__change_resync_susp_dependency(peer_device, !__drbd_may_sync_now(peer_device));
+	__change_repl_state(peer_device, side);
+	if (side == L_SYNC_TARGET)
+		init_resync_stable_bits(peer_device);
+	finished_resync_pdsk = peer_device->resync_finished_pdsk;
+	peer_device->resync_finished_pdsk = D_UNKNOWN;
+	r = end_state_change_locked(device->resource, tag);
+	repl_state = peer_device->repl_state[NOW];
+
+	if (repl_state < L_ESTABLISHED)
 		r = SS_UNKNOWN_ERROR;
 
-	if (r == SS_SUCCESS) {
-		unsigned long tw = drbd_bm_total_weight(device);
-		unsigned long now = jiffies;
-		int i;
-
-		device->rs_failed    = 0;
-		device->rs_paused    = 0;
-		device->rs_same_csum = 0;
-		device->rs_last_sect_ev = 0;
-		device->rs_total     = tw;
-		device->rs_start     = now;
-		for (i = 0; i < DRBD_SYNC_MARKS; i++) {
-			device->rs_mark_left[i] = tw;
-			device->rs_mark_time[i] = now;
-		}
+	if (r == SS_SUCCESS)
 		drbd_pause_after(device);
-		/* Forget potentially stale cached per resync extent bit-counts.
-		 * Open coded drbd_rs_cancel_all(device), we already have IRQs
-		 * disabled, and know the disk state is ok. */
-		spin_lock(&device->al_lock);
-		lc_reset(device->resync);
-		device->resync_locked = 0;
-		device->resync_wenr = LC_FREE;
-		spin_unlock(&device->al_lock);
-	}
-	unlock_all_resources();
-
-	if (r == SS_SUCCESS) {
-		wake_up(&device->al_wait); /* for lc_reset() above */
-		/* reset rs_last_bcast when a resync or verify is started,
-		 * to deal with potential jiffies wrap. */
-		device->rs_last_bcast = jiffies - HZ;
-
-		drbd_info(device, "Began resync as %s (will sync %lu KB [%lu bits set]).\n",
-		     drbd_conn_str(ns.conn),
-		     (unsigned long) device->rs_total << (BM_BLOCK_SHIFT-10),
-		     (unsigned long) device->rs_total);
-		if (side == C_SYNC_TARGET) {
-			device->bm_resync_fo = 0;
-			device->use_csums = use_checksum_based_resync(connection, device);
-		} else {
-			device->use_csums = false;
-		}
 
-		/* Since protocol 96, we must serialize drbd_gen_and_send_sync_uuid
-		 * with w_send_oos, or the sync target will get confused as to
-		 * how much bits to resync.  We cannot do that always, because for an
-		 * empty resync and protocol < 95, we need to do it here, as we call
-		 * drbd_resync_finished from here in that case.
-		 * We drbd_gen_and_send_sync_uuid here for protocol < 96,
-		 * and from after_state_ch otherwise. */
-		if (side == C_SYNC_SOURCE && connection->agreed_pro_version < 96)
-			drbd_gen_and_send_sync_uuid(peer_device);
-
-		if (connection->agreed_pro_version < 95 && device->rs_total == 0) {
-			/* This still has a race (about when exactly the peers
-			 * detect connection loss) that can lead to a full sync
-			 * on next handshake. In 8.3.9 we fixed this with explicit
-			 * resync-finished notifications, but the fix
-			 * introduces a protocol change.  Sleeping for some
-			 * time longer than the ping interval + timeout on the
-			 * SyncSource, to give the SyncTarget the chance to
-			 * detect connection loss, then waiting for a ping
-			 * response (implicit in drbd_resync_finished) reduces
-			 * the race considerably, but does not solve it. */
-			if (side == C_SYNC_SOURCE) {
-				struct net_conf *nc;
-				int timeo;
-
-				rcu_read_lock();
-				nc = rcu_dereference(connection->net_conf);
-				timeo = nc->ping_int * HZ + nc->ping_timeo * HZ / 9;
-				rcu_read_unlock();
-				schedule_timeout_interruptible(timeo);
-			}
-			drbd_resync_finished(peer_device);
-		}
-
-		drbd_rs_controller_reset(peer_device);
-		/* ns.conn may already be != device->state.conn,
-		 * we may have been paused in between, or become paused until
-		 * the timer triggers.
-		 * No matter, that is handled in resync_timer_fn() */
-		if (ns.conn == C_SYNC_TARGET)
-			mod_timer(&device->resync_timer, jiffies);
-
-		drbd_md_sync(device);
-	}
+	unlock_all_resources();
 	put_ldev(device);
-out:
-	mutex_unlock(device->state_mutex);
+    out:
+	up(&device->resource->state_sem);
+	if (finished_resync_pdsk != D_UNKNOWN)
+		drbd_resync_finished(peer_device, finished_resync_pdsk);
 }
 
 static void update_on_disk_bitmap(struct drbd_peer_device *peer_device, bool resync_done)
 {
 	struct drbd_device *device = peer_device->device;
-	struct sib_info sib = { .sib_reason = SIB_SYNC_PROGRESS, };
-	device->rs_last_bcast = jiffies;
+	peer_device->rs_last_writeout = jiffies;
 
 	if (!get_ldev(device))
 		return;
 
-	drbd_bm_write_lazy(device, 0);
-	if (resync_done && is_sync_state(device->state.conn))
-		drbd_resync_finished(peer_device);
-
-	drbd_bcast_event(device, &sib);
-	/* update timestamp, in case it took a while to write out stuff */
-	device->rs_last_bcast = jiffies;
-	put_ldev(device);
-}
-
-static void drbd_ldev_destroy(struct drbd_device *device)
-{
-	lc_destroy(device->resync);
-	device->resync = NULL;
-	lc_destroy(device->act_log);
-	device->act_log = NULL;
+	drbd_bm_write_lazy(device, 0);
 
-	__acquire(local);
-	drbd_backing_dev_free(device, device->ldev);
-	device->ldev = NULL;
-	__release(local);
+	if (resync_done) {
+		if (is_verify_state(peer_device, NOW)) {
+			ov_out_of_sync_print(peer_device);
+			ov_skipped_print(peer_device);
+			drbd_resync_finished(peer_device, D_MASK);
+		} else if (is_sync_state(peer_device, NOW)) {
+			drbd_resync_finished(peer_device, D_MASK);
+		}
+	}
 
-	clear_bit(GOING_DISKLESS, &device->flags);
-	wake_up(&device->misc_wait);
+	/* update timestamp, in case it took a while to write out stuff */
+	peer_device->rs_last_writeout = jiffies;
+	put_ldev(device);
 }
 
 static void go_diskless(struct drbd_device *device)
 {
-	struct drbd_peer_device *peer_device = first_peer_device(device);
-	D_ASSERT(device, device->state.disk == D_FAILED);
+	D_ASSERT(device, device->disk_state[NOW] == D_FAILED ||
+			 device->disk_state[NOW] == D_DETACHING);
 	/* we cannot assert local_cnt == 0 here, as get_ldev_if_state will
 	 * inc/dec it frequently. Once we are D_DISKLESS, no one will touch
 	 * the protected members anymore, though, so once put_ldev reaches zero
@@ -1954,21 +3128,25 @@ static void go_diskless(struct drbd_device *device)
 	 * We still need to check if both bitmap and ldev are present, we may
 	 * end up here after a failed attach, before ldev was even assigned.
 	 */
+	/* ldev_safe: ldev is only destroyed after state change to D_DISKLESS below */
 	if (device->bitmap && device->ldev) {
-		/* An interrupted resync or similar is allowed to recounts bits
-		 * while we detach.
-		 * Any modifications would not be expected anymore, though.
-		 */
 		if (drbd_bitmap_io_from_worker(device, drbd_bm_write,
-					"detach", BM_LOCKED_TEST_ALLOWED, peer_device)) {
-			if (test_bit(WAS_READ_ERROR, &device->flags)) {
-				drbd_md_set_flag(device, MDF_FULL_SYNC);
-				drbd_md_sync(device);
+					       "detach",
+					       BM_LOCK_SET | BM_LOCK_CLEAR | BM_LOCK_BULK,
+					       NULL)) {
+			if (test_bit(CRASHED_PRIMARY, &device->flags)) {
+				struct drbd_peer_device *peer_device;
+
+				rcu_read_lock();
+				for_each_peer_device_rcu(peer_device, device)
+					drbd_md_set_peer_flag(peer_device, MDF_PEER_FULL_SYNC);
+				rcu_read_unlock();
 			}
 		}
 	}
 
-	drbd_force_state(device, NS(disk, D_DISKLESS));
+	drbd_md_sync_if_dirty(device);
+	change_disk_state(device, D_DISKLESS, CS_HARD, "go-diskless", NULL);
 }
 
 static int do_md_sync(struct drbd_device *device)
@@ -2001,41 +3179,148 @@ void __update_timing_details(
 	++(*cb_nr);
 }
 
+static bool all_responded(struct drbd_resource *resource)
+{
+	struct drbd_connection *connection;
+	bool all_responded = true;
+
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		if (!test_bit(CHECKING_PEER, &connection->flags))
+			continue;
+		if (connection->cstate[NOW] < C_CONNECTED) {
+			clear_bit(CHECKING_PEER, &connection->flags);
+			continue;
+		}
+		if (test_bit(PING_PENDING, &connection->flags)) {
+			all_responded = false;
+			continue;
+		}
+		clear_bit(CHECKING_PEER, &connection->flags);
+	}
+	rcu_read_unlock();
+
+	return all_responded;
+}
+
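+/*
+ * Ping every currently connected peer and wait until each of them has
+ * either answered or been moved to C_NETWORK_FAILURE.  If another
+ * caller already has a check in flight, wait for that one to finish
+ * instead of starting a new one.
+ */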
+void drbd_check_peers(struct drbd_resource *resource)
+{
+	struct drbd_connection *connection;
+	long t, timeo = LONG_MAX;
+	unsigned long start;
+	bool check_ongoing;
+	u64 im;
+
+	check_ongoing = test_and_set_bit(CHECKING_PEERS, &resource->flags);
+	if (check_ongoing) {
+		wait_event(resource->state_wait,
+			   !test_bit(CHECKING_PEERS, &resource->flags));
+		return;
+	}
+
+	start = jiffies;
+	for_each_connection_ref(connection, im, resource) {
+		if (connection->cstate[NOW] < C_CONNECTED)
+			continue;
+		set_bit(CHECKING_PEER, &connection->flags);
+		send_ping_peer(connection);
+		t = ping_timeout(connection);
+		if (t < timeo)
+			timeo = t;
+	}
+
+	while (!wait_event_timeout(resource->state_wait, all_responded(resource), timeo)) {
+		unsigned long waited = jiffies - start;
+
+		timeo = LONG_MAX;
+		rcu_read_lock();
+		for_each_connection_rcu(connection, resource) {
+			if (!test_bit(CHECKING_PEER, &connection->flags))
+				continue;
+			t = ping_timeout(connection);
+			if (waited >= t) {
+				drbd_warn(connection, "peer failed to send PingAck in time\n");
+				change_cstate(connection, C_NETWORK_FAILURE, CS_HARD);
+				clear_bit(CHECKING_PEER, &connection->flags);
+				continue;
+			}
+			if (t - waited < timeo)
+				timeo = t - waited;
+		}
+		rcu_read_unlock();
+	}
+
+	clear_bit(CHECKING_PEERS, &resource->flags);
+	wake_up_all(&resource->state_wait);
+}
+
+void drbd_check_peers_new_current_uuid(struct drbd_device *device)
+{
+	struct drbd_resource *resource = device->resource;
+
+	drbd_check_peers(resource);
+
+	if (device->have_quorum[NOW] && drbd_data_accessible(device, NOW))
+		drbd_uuid_new_current(device, false);
+}
+
+static void make_new_current_uuid(struct drbd_device *device)
+{
+	drbd_check_peers_new_current_uuid(device);
+
+	get_work_bits(1UL << NEW_CUR_UUID | 1UL << WRITING_NEW_CUR_UUID, &device->flags);
+	wake_up(&device->misc_wait);
+}
+
 static void do_device_work(struct drbd_device *device, const unsigned long todo)
 {
 	if (test_bit(MD_SYNC, &todo))
 		do_md_sync(device);
-	if (test_bit(RS_DONE, &todo) ||
-	    test_bit(RS_PROGRESS, &todo))
-		update_on_disk_bitmap(first_peer_device(device), test_bit(RS_DONE, &todo));
 	if (test_bit(GO_DISKLESS, &todo))
 		go_diskless(device);
-	if (test_bit(DESTROY_DISK, &todo))
-		drbd_ldev_destroy(device);
+	if (test_bit(MAKE_NEW_CUR_UUID, &todo))
+		make_new_current_uuid(device);
+}
+
+static void do_peer_device_work(struct drbd_peer_device *peer_device, const unsigned long todo)
+{
+	if (test_bit(RS_PROGRESS, &todo))
+		drbd_broadcast_peer_device_state(peer_device);
+	if (test_bit(RS_DONE, &todo) ||
+	    test_bit(RS_LAZY_BM_WRITE, &todo))
+		update_on_disk_bitmap(peer_device, test_bit(RS_DONE, &todo));
 	if (test_bit(RS_START, &todo))
-		do_start_resync(device);
+		do_start_resync(peer_device);
+	if (test_bit(HANDLE_CONGESTION, &todo))
+		handle_congestion(peer_device);
 }
 
 #define DRBD_DEVICE_WORK_MASK	\
 	((1UL << GO_DISKLESS)	\
-	|(1UL << DESTROY_DISK)	\
 	|(1UL << MD_SYNC)	\
-	|(1UL << RS_START)	\
-	|(1UL << RS_PROGRESS)	\
-	|(1UL << RS_DONE)	\
+	|(1UL << MAKE_NEW_CUR_UUID)\
+	)
+
+#define DRBD_PEER_DEVICE_WORK_MASK	\
+	((1UL << RS_START)		\
+	|(1UL << RS_LAZY_BM_WRITE)	\
+	|(1UL << RS_PROGRESS)		\
+	|(1UL << RS_DONE)		\
+	|(1UL << HANDLE_CONGESTION)     \
 	)
 
-static unsigned long get_work_bits(unsigned long *flags)
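+/*
+ * Atomically fetch and clear the flag bits selected by @mask, so that
+ * each pending work bit is claimed by exactly one caller.
+ */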
+static unsigned long get_work_bits(const unsigned long mask, unsigned long *flags)
 {
 	unsigned long old, new;
 	do {
 		old = *flags;
-		new = old & ~DRBD_DEVICE_WORK_MASK;
+		new = old & ~mask;
 	} while (cmpxchg(flags, old, new) != old);
-	return old & DRBD_DEVICE_WORK_MASK;
+	return old & mask;
 }
 
-static void do_unqueued_work(struct drbd_connection *connection)
+static void __do_unqueued_peer_device_work(struct drbd_connection *connection)
 {
 	struct drbd_peer_device *peer_device;
 	int vnr;
@@ -2043,7 +3328,36 @@ static void do_unqueued_work(struct drbd_connection *connection)
 	rcu_read_lock();
 	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
 		struct drbd_device *device = peer_device->device;
-		unsigned long todo = get_work_bits(&device->flags);
+		unsigned long todo = get_work_bits(DRBD_PEER_DEVICE_WORK_MASK, &peer_device->flags);
+		if (!todo)
+			continue;
+
+		kref_get(&device->kref);
+		rcu_read_unlock();
+		do_peer_device_work(peer_device, todo);
+		kref_put(&device->kref, drbd_destroy_device);
+		rcu_read_lock();
+	}
+	rcu_read_unlock();
+}
+
+static void do_unqueued_peer_device_work(struct drbd_resource *resource)
+{
+	struct drbd_connection *connection;
+	u64 im;
+
+	for_each_connection_ref(connection, im, resource)
+		__do_unqueued_peer_device_work(connection);
+}
+
+static void do_unqueued_device_work(struct drbd_resource *resource)
+{
+	struct drbd_device *device;
+	int vnr;
+
+	rcu_read_lock();
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		unsigned long todo = get_work_bits(DRBD_DEVICE_WORK_MASK, &device->flags);
 		if (!todo)
 			continue;
 
@@ -2064,14 +3378,105 @@ static bool dequeue_work_batch(struct drbd_work_queue *queue, struct list_head *
 	return !list_empty(work_list);
 }
 
-static void wait_for_work(struct drbd_connection *connection, struct list_head *work_list)
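+/*
+ * Find the first request in the transfer log that is still queued for
+ * sending to this peer.
+ */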
+static struct drbd_request *__next_request_for_connection(
+		struct drbd_connection *connection)
+{
+	struct drbd_request *req;
+
+	list_for_each_entry_rcu(req, &connection->resource->transfer_log, tl_requests) {
+		unsigned s = req->net_rq_state[connection->peer_node_id];
+
+		if (likely(s & RQ_NET_QUEUED))
+			return req;
+	}
+	return NULL;
+}
+
+static struct drbd_request *tl_next_request_for_connection(
+		struct drbd_connection *connection, bool wait_ready)
+{
+	if (connection->todo.req_next == NULL)
+		connection->todo.req_next = __next_request_for_connection(connection);
+
+	if (connection->todo.req_next == NULL) {
+		connection->todo.req = NULL;
+	} else {
+		unsigned int s = connection->todo.req_next->net_rq_state[connection->peer_node_id];
+
+		if (likely((s & RQ_NET_READY) || !wait_ready)) {
+			connection->todo.req = connection->todo.req_next;
+			connection->send.seen_dagtag_sector = connection->todo.req->dagtag_sector;
+		} else {
+			/* Leave the request in "req_next" until it is ready */
+			connection->todo.req = NULL;
+		}
+	}
+
+	/*
+	 * Advancement of todo.req_next happens in advance_conn_req_next(),
+	 * called from mod_rq_state()
+	 */
+
+	return connection->todo.req;
+}
+
+static void maybe_send_state_after_ahead(struct drbd_connection *connection)
+{
+	struct drbd_peer_device *peer_device;
+	int vnr;
+
+	rcu_read_lock();
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		if (test_and_clear_bit(SEND_STATE_AFTER_AHEAD, &peer_device->flags)) {
+			peer_device->todo.was_sending_out_of_sync = false;
+			rcu_read_unlock();
+			drbd_send_current_state(peer_device);
+			rcu_read_lock();
+		}
+	}
+	rcu_read_unlock();
+}
+
+/* This finds the next not yet processed request from
+ * connection->resource->transfer_log.
+ * It also moves all currently queued connection->sender_work
+ * to connection->todo.work_list.
+ */
+static bool check_sender_todo(struct drbd_connection *connection)
+{
+	rcu_read_lock();
+	tl_next_request_for_connection(connection, true);
+
+	/* FIXME can we get rid of this additional lock? */
+	spin_lock_irq(&connection->sender_work.q_lock);
+	list_splice_tail_init(&connection->sender_work.q, &connection->todo.work_list);
+	spin_unlock_irq(&connection->sender_work.q_lock);
+	rcu_read_unlock();
+
+	return connection->todo.req
+		|| need_unplug(connection)
+		|| !list_empty(&connection->todo.work_list);
+}
+
+static bool drbd_send_barrier_next_oos(struct drbd_connection *connection)
+{
+	if (!connection->todo.req_next)
+		return false;
+
+	return connection->todo.req_next->net_rq_state[connection->peer_node_id]
+		& RQ_NET_PENDING_OOS;
+}
+
+static void wait_for_sender_todo(struct drbd_connection *connection)
 {
+	struct drbd_resource *resource = connection->resource;
 	DEFINE_WAIT(wait);
 	struct net_conf *nc;
 	int uncork, cork;
+	bool got_something;
 
-	dequeue_work_batch(&connection->sender_work, work_list);
-	if (!list_empty(work_list))
+	got_something = check_sender_todo(connection);
+	if (got_something)
 		return;
 
 	/* Still nothing to do?
@@ -2081,26 +3486,19 @@ static void wait_for_work(struct drbd_connection *connection, struct list_head *
 	 * Also, poke TCP, just in case.
 	 * Then wait for new work (or signal). */
 	rcu_read_lock();
-	nc = rcu_dereference(connection->net_conf);
+	nc = rcu_dereference(connection->transport.net_conf);
 	uncork = nc ? nc->tcp_cork : 0;
 	rcu_read_unlock();
 	if (uncork) {
-		mutex_lock(&connection->data.mutex);
-		if (connection->data.socket)
-			tcp_sock_set_cork(connection->data.socket->sk, false);
-		mutex_unlock(&connection->data.mutex);
+		if (drbd_uncork(connection, DATA_STREAM))
+			return;
 	}
 
 	for (;;) {
 		int send_barrier;
-		prepare_to_wait(&connection->sender_work.q_wait, &wait, TASK_INTERRUPTIBLE);
-		spin_lock_irq(&connection->resource->req_lock);
-		spin_lock(&connection->sender_work.q_lock);	/* FIXME get rid of this one? */
-		if (!list_empty(&connection->sender_work.q))
-			list_splice_tail_init(&connection->sender_work.q, work_list);
-		spin_unlock(&connection->sender_work.q_lock);	/* FIXME get rid of this one? */
-		if (!list_empty(work_list) || signal_pending(current)) {
-			spin_unlock_irq(&connection->resource->req_lock);
+		prepare_to_wait(&connection->sender_work.q_wait, &wait,
+				TASK_INTERRUPTIBLE);
+		if (check_sender_todo(connection) || signal_pending(current)) {
 			break;
 		}
 
@@ -2108,23 +3506,46 @@ static void wait_for_work(struct drbd_connection *connection, struct list_head *
 		 * no other work item.  We may still need to close the last
 		 * epoch.  Next incoming request epoch will be connection ->
 		 * current transfer log epoch number.  If that is different
-		 * from the epoch of the last request we communicated, it is
-		 * safe to send the epoch separating barrier now.
+		 * from the epoch of the last request we communicated, we want
+		 * to send the epoch separating barrier now.
 		 */
-		send_barrier =
-			atomic_read(&connection->current_tle_nr) !=
-			connection->send.current_epoch_nr;
-		spin_unlock_irq(&connection->resource->req_lock);
-
-		if (send_barrier)
-			maybe_send_barrier(connection,
-					connection->send.current_epoch_nr + 1);
+		send_barrier = should_send_barrier(connection,
+					atomic_read(&resource->current_tle_nr));
+
+		if (send_barrier) {
+			/* Ensure that we read the most recent
+			 * resource->dagtag_sector value. */
+			smp_rmb();
+			/* If a request is currently being submitted it may not
+			 * have been picked up by this sender, even though it
+			 * belongs to the old epoch. Make sure we are up to
+			 * date with the most recently submitted dagtag so
+			 * that we do not send a barrier too early in that
+			 * case. If there is such a request then this
+			 * sender will be woken, so it is OK to schedule().
+			 *
+			 * If we have found a request that is
+			 * RQ_NET_PENDING_OOS, but not yet RQ_NET_READY, then
+			 * we also need to send a barrier.
+			 */
+			if (dagtag_newer_eq(connection->send.seen_dagtag_sector,
+						READ_ONCE(resource->dagtag_sector))
+					|| drbd_send_barrier_next_oos(connection)) {
+				finish_wait(&connection->sender_work.q_wait, &wait);
+				maybe_send_barrier(connection,
+						connection->send.current_epoch_nr + 1);
+				continue;
+			}
+		}
 
-		if (test_bit(DEVICE_WORK_PENDING, &connection->flags))
-			break;
+		if (test_and_clear_bit(SEND_STATE_AFTER_AHEAD_C, &connection->flags)) {
+			finish_wait(&connection->sender_work.q_wait, &wait);
+			maybe_send_state_after_ahead(connection);
+			continue;
+		}
 
 		/* drbd_send() may have called flush_signals() */
-		if (get_t_state(&connection->worker) != RUNNING)
+		if (get_t_state(&connection->sender) != RUNNING)
 			break;
 
 		schedule();
@@ -2136,44 +3557,279 @@ static void wait_for_work(struct drbd_connection *connection, struct list_head *
 
 	/* someone may have changed the config while we have been waiting above. */
 	rcu_read_lock();
-	nc = rcu_dereference(connection->net_conf);
+	nc = rcu_dereference(connection->transport.net_conf);
 	cork = nc ? nc->tcp_cork : 0;
 	rcu_read_unlock();
-	mutex_lock(&connection->data.mutex);
-	if (connection->data.socket) {
-		if (cork)
-			tcp_sock_set_cork(connection->data.socket->sk, true);
-		else if (!uncork)
-			tcp_sock_set_cork(connection->data.socket->sk, false);
+
+	if (cork)
+		drbd_cork(connection, DATA_STREAM);
+	else if (!uncork)
+		drbd_uncork(connection, DATA_STREAM);
+}
+
+static void re_init_if_first_write(struct drbd_connection *connection, unsigned int epoch)
+{
+	if (!connection->send.seen_any_write_yet) {
+		connection->send.seen_any_write_yet = true;
+		connection->send.current_epoch_nr = epoch;
+		connection->send.current_epoch_writes = 0;
+		connection->send.last_sent_barrier_jif = jiffies;
 	}
-	mutex_unlock(&connection->data.mutex);
 }
 
-int drbd_worker(struct drbd_thread *thi)
+static bool should_send_barrier(struct drbd_connection *connection, unsigned int epoch)
+{
+	if (!connection->send.seen_any_write_yet)
+		return false;
+	return connection->send.current_epoch_nr != epoch;
+}
+
+static void maybe_send_barrier(struct drbd_connection *connection, unsigned int epoch)
+{
+	/* If the epoch advanced, close the previous epoch (if it saw writes). */
+	if (should_send_barrier(connection, epoch)) {
+		if (connection->send.current_epoch_writes)
+			drbd_send_barrier(connection);
+		connection->send.current_epoch_nr = epoch;
+	}
+}
+
+static int process_one_request(struct drbd_connection *connection)
+{
+	struct bio_and_error m;
+	struct drbd_request *req = connection->todo.req;
+	struct drbd_device *device = req->device;
+	struct drbd_peer_device *peer_device =
+			conn_peer_device(connection, device->vnr);
+	unsigned s = req->net_rq_state[peer_device->node_id];
+	bool do_send_unplug = req->local_rq_state & RQ_UNPLUG;
+	int err = 0;
+	enum drbd_req_event what;
+
+	/* pre_send_jif[] is used in net_timeout_reached() */
+	req->pre_send_jif[peer_device->node_id] = jiffies;
+	ktime_get_accounting(req->pre_send_kt[peer_device->node_id]);
+	if (drbd_req_is_write(req)) {
+		/* If a WRITE does not expect a barrier ack,
+		 * we are supposed to only send an "out of sync" info packet */
+		if (s & RQ_EXP_BARR_ACK) {
+			u64 current_dagtag_sector =
+				req->dagtag_sector - (req->i.size >> 9);
+
+			re_init_if_first_write(connection, req->epoch);
+			maybe_send_barrier(connection, req->epoch);
+			if (current_dagtag_sector != connection->send.current_dagtag_sector)
+				drbd_send_dagtag(connection, current_dagtag_sector);
+
+			connection->send.current_epoch_writes++;
+			connection->send.current_dagtag_sector = req->dagtag_sector;
+
+			if (peer_device->todo.was_sending_out_of_sync) {
+				clear_bit(SEND_STATE_AFTER_AHEAD, &peer_device->flags);
+				peer_device->todo.was_sending_out_of_sync = false;
+				drbd_send_current_state(peer_device);
+			}
+
+			err = drbd_send_dblock(peer_device, req);
+			what = err ? SEND_FAILED : HANDED_OVER_TO_NETWORK;
+		} else {
+			/* this time, no connection->send.current_epoch_writes++;
+			 * If it was sent, it was the closing barrier for the last
+			 * replicated epoch, before we went into AHEAD mode.
+			 * No more barriers will be sent until we leave AHEAD mode again. */
+			maybe_send_barrier(connection, req->epoch);
+
+			/* make sure the state change to L_AHEAD/L_BEHIND
+			 * arrives before the first set-out-of-sync information */
+			if (!peer_device->todo.was_sending_out_of_sync) {
+				peer_device->todo.was_sending_out_of_sync = true;
+				drbd_send_current_state(peer_device);
+			}
+
+			/* When this flag is not set, sending OOS may be skipped */
+			if (s & RQ_NET_PENDING_OOS)
+				err = drbd_send_out_of_sync(peer_device,
+						req->i.sector, req->i.size);
+			/* This event has the appropriate effect even if OOS skipped or failed */
+			what = OOS_HANDED_TO_NETWORK;
+		}
+	} else {
+		maybe_send_barrier(connection, req->epoch);
+		err = drbd_send_drequest(peer_device,
+				req->i.sector, req->i.size, (unsigned long)req);
+		what = err ? SEND_FAILED : HANDED_OVER_TO_NETWORK;
+	}
+
+	read_lock_irq(&connection->resource->state_rwlock);
+	__req_mod(req, what, peer_device, &m);
+	read_unlock_irq(&connection->resource->state_rwlock);
+
+	check_sender_todo(connection);
+
+	if (m.bio)
+		complete_master_bio(device, &m);
+
+	do_send_unplug = do_send_unplug && what == HANDED_OVER_TO_NETWORK;
+	maybe_send_unplug_remote(connection, do_send_unplug);
+
+	return err;
+}
+
+static int process_sender_todo(struct drbd_connection *connection)
 {
-	struct drbd_connection *connection = thi->connection;
 	struct drbd_work *w = NULL;
+
+	/* Process all currently pending work items,
+	 * or requests from the transfer log.
+	 *
+	 * Right now, work items do not require any strict ordering wrt. the
+	 * request stream, so lets just do simple interleaved processing.
+	 *
+	 * Stop processing as soon as an error is encountered.
+	 */
+	if (!connection->todo.req) {
+		update_sender_timing_details(connection, maybe_send_unplug_remote);
+		maybe_send_unplug_remote(connection, false);
+	} else if (list_empty(&connection->todo.work_list)) {
+		update_sender_timing_details(connection, process_one_request);
+		/* ldev_safe: have connection->todo.req which holds its own ldev ref */
+		return process_one_request(connection);
+	}
+
+	while (!list_empty(&connection->todo.work_list)) {
+		int err;
+
+		w = list_first_entry(&connection->todo.work_list, struct drbd_work, list);
+		list_del_init(&w->list);
+		update_sender_timing_details(connection, w->cb);
+		err = w->cb(w, connection->cstate[NOW] < C_CONNECTED);
+		if (err)
+			return err;
+
+		if (connection->todo.req) {
+			update_sender_timing_details(connection, process_one_request);
+			/* ldev_safe: have connection->todo.req which holds its own ldev ref */
+			err = process_one_request(connection);
+		}
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+int drbd_sender(struct drbd_thread *thi)
+{
+	struct drbd_connection *connection = thi->connection;
+	struct drbd_work *w;
 	struct drbd_peer_device *peer_device;
-	LIST_HEAD(work_list);
 	int vnr;
+	int err;
+
+	/* Should we drop this? Or reset even more stuff? */
+	rcu_read_lock();
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		peer_device->send_cnt = 0;
+		peer_device->recv_cnt = 0;
+	}
+	rcu_read_unlock();
 
 	while (get_t_state(thi) == RUNNING) {
 		drbd_thread_current_set_cpu(thi);
 
-		if (list_empty(&work_list)) {
-			update_worker_timing_details(connection, wait_for_work);
-			wait_for_work(connection, &work_list);
+		if (list_empty(&connection->todo.work_list) &&
+		    connection->todo.req == NULL) {
+			update_sender_timing_details(connection, wait_for_sender_todo);
+			wait_for_sender_todo(connection);
+		}
+
+		if (signal_pending(current)) {
+			flush_signals(current);
+			if (get_t_state(thi) == RUNNING) {
+				drbd_warn(connection, "Sender got an unexpected signal\n");
+				continue;
+			}
+			break;
+		}
+
+		if (get_t_state(thi) != RUNNING)
+			break;
+
+		err = process_sender_todo(connection);
+		if (err)
+			change_cstate(connection, C_NETWORK_FAILURE, CS_HARD);
+	}
+
+	/* cleanup all currently unprocessed requests */
+	if (!connection->todo.req) {
+		rcu_read_lock();
+		tl_next_request_for_connection(connection, false);
+		rcu_read_unlock();
+	}
+	while (connection->todo.req) {
+		struct bio_and_error m;
+		struct drbd_request *req = connection->todo.req;
+		struct drbd_device *device = req->device;
+		peer_device = conn_peer_device(connection, device->vnr);
+
+		read_lock_irq(&connection->resource->state_rwlock);
+		/* ldev_safe: requests hold their own ldev refs */
+		__req_mod(req, SEND_CANCELED, peer_device, &m);
+		read_unlock_irq(&connection->resource->state_rwlock);
+		if (m.bio)
+			complete_master_bio(device, &m);
+
+		rcu_read_lock();
+		tl_next_request_for_connection(connection, false);
+		rcu_read_unlock();
+	}
+
+	/* cancel all still pending works */
+	do {
+		while (!list_empty(&connection->todo.work_list)) {
+			w = list_first_entry(&connection->todo.work_list, struct drbd_work, list);
+			list_del_init(&w->list);
+			w->cb(w, 1);
 		}
+		dequeue_work_batch(&connection->sender_work, &connection->todo.work_list);
+	} while (!list_empty(&connection->todo.work_list));
+
+	return 0;
+}
+
+int drbd_worker(struct drbd_thread *thi)
+{
+	LIST_HEAD(work_list);
+	struct drbd_resource *resource = thi->resource;
+	struct drbd_work *w;
+
+	while (get_t_state(thi) == RUNNING) {
+		drbd_thread_current_set_cpu(thi);
+
+		if (list_empty(&work_list)) {
+			bool got_work, d, p;
+
+			update_worker_timing_details(resource, dequeue_work_batch);
+			wait_event_interruptible(resource->work.q_wait,
+				(got_work = dequeue_work_batch(&resource->work, &work_list),
+				 d = test_and_clear_bit(DEVICE_WORK_PENDING, &resource->flags),
+				 p = test_and_clear_bit(PEER_DEVICE_WORK_PENDING, &resource->flags),
+				 got_work || d || p));
+
+			if (p) {
+				update_worker_timing_details(resource, do_unqueued_peer_device_work);
+				do_unqueued_peer_device_work(resource);
+			}
 
-		if (test_and_clear_bit(DEVICE_WORK_PENDING, &connection->flags)) {
-			update_worker_timing_details(connection, do_unqueued_work);
-			do_unqueued_work(connection);
+			if (d) {
+				update_worker_timing_details(resource, do_unqueued_device_work);
+				do_unqueued_device_work(resource);
+			}
 		}
 
 		if (signal_pending(current)) {
 			flush_signals(current);
 			if (get_t_state(thi) == RUNNING) {
-				drbd_warn(connection, "Worker got an unexpected signal\n");
+				drbd_warn(resource, "Worker got an unexpected signal\n");
 				continue;
 			}
 			break;
@@ -2182,42 +3838,34 @@ int drbd_worker(struct drbd_thread *thi)
 		if (get_t_state(thi) != RUNNING)
 			break;
 
-		if (!list_empty(&work_list)) {
+		while (!list_empty(&work_list)) {
 			w = list_first_entry(&work_list, struct drbd_work, list);
 			list_del_init(&w->list);
-			update_worker_timing_details(connection, w->cb);
-			if (w->cb(w, connection->cstate < C_WF_REPORT_PARAMS) == 0)
-				continue;
-			if (connection->cstate >= C_WF_REPORT_PARAMS)
-				conn_request_state(connection, NS(conn, C_NETWORK_FAILURE), CS_HARD);
+			update_worker_timing_details(resource, w->cb);
+			w->cb(w, 0);
 		}
 	}
 
 	do {
-		if (test_and_clear_bit(DEVICE_WORK_PENDING, &connection->flags)) {
-			update_worker_timing_details(connection, do_unqueued_work);
-			do_unqueued_work(connection);
+		if (test_and_clear_bit(DEVICE_WORK_PENDING, &resource->flags)) {
+			update_worker_timing_details(resource, do_unqueued_device_work);
+			do_unqueued_device_work(resource);
 		}
-		if (!list_empty(&work_list)) {
+		if (test_and_clear_bit(PEER_DEVICE_WORK_PENDING, &resource->flags)) {
+			update_worker_timing_details(resource, do_unqueued_peer_device_work);
+			do_unqueued_peer_device_work(resource);
+		}
+		while (!list_empty(&work_list)) {
 			w = list_first_entry(&work_list, struct drbd_work, list);
 			list_del_init(&w->list);
-			update_worker_timing_details(connection, w->cb);
+			update_worker_timing_details(resource, w->cb);
 			w->cb(w, 1);
-		} else
-			dequeue_work_batch(&connection->sender_work, &work_list);
-	} while (!list_empty(&work_list) || test_bit(DEVICE_WORK_PENDING, &connection->flags));
-
-	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-		struct drbd_device *device = peer_device->device;
-		D_ASSERT(device, device->state.disk == D_DISKLESS && device->state.conn == C_STANDALONE);
-		kref_get(&device->kref);
-		rcu_read_unlock();
-		drbd_device_cleanup(device);
-		kref_put(&device->kref, drbd_destroy_device);
-		rcu_read_lock();
-	}
-	rcu_read_unlock();
+		}
+		dequeue_work_batch(&resource->work, &work_list);
+	} while (!list_empty(&work_list) ||
+		 test_bit(DEVICE_WORK_PENDING, &resource->flags) ||
+		 test_bit(PEER_DEVICE_WORK_PENDING, &resource->flags));
 
 	return 0;
 }
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 12/20] drbd: replace per-device state model with multi-peer data structures
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
                   ` (10 preceding siblings ...)
  2026-03-27 22:38 ` [PATCH 11/20] drbd: rework sender for DRBD 9 multi-peer Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 13/20] drbd: rewrite state machine for DRBD 9 multi-peer clusters Christoph Böhmwalder
                   ` (7 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder, Joel Colledge

Overhaul the internal header definitions to support DRBD 9's
multi-peer replication model.
The fundamental shift is that per-peer state (replication progress,
UUIDs, resync bookkeeping) moves from per-device to per-peer-device
scope, and all mutable state is now tracked as a [NOW]/[NEW] pair
on each object to support atomic, cluster-visible state transitions.
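
To illustrate the [NOW]/[NEW] pairing, here is a simplified sketch
(toy names, not the driver's actual structures): every mutable field
becomes an array indexed by the "which" state; a state change stages
its values in [NEW], and committing copies [NEW] into [NOW] under the
state lock, while aborting simply discards [NEW].

	#include <stdbool.h>

	enum which_state { NOW, NEW, HOW_MANY };

	struct toy_peer_device {
		int repl_state[HOW_MANY];
		bool resync_susp_dependency[HOW_MANY];
	};

	/* begin: start the transaction from the committed values */
	void toy_begin_state_change(struct toy_peer_device *pd)
	{
		pd->repl_state[NEW] = pd->repl_state[NOW];
		pd->resync_susp_dependency[NEW] =
			pd->resync_susp_dependency[NOW];
	}

	/* commit: make the staged values visible to readers of [NOW] */
	void toy_end_state_change(struct toy_peer_device *pd)
	{
		pd->repl_state[NOW] = pd->repl_state[NEW];
		pd->resync_susp_dependency[NOW] =
			pd->resync_susp_dependency[NEW];
	}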

Redesign the locking model to match: remove the coarse per-resource
spinlock in favor of a resource-level rwlock for state, a
per-connection lock for peer request lists, and a per-device lock
for interval tree operations.

Replace direct socket members on the connection with the transport
abstraction.
Move the transfer log with its peer-ack machinery up to the resource
level so that writes can be serialized and acknowledged across all
peers consistently.

Move the state change API to a two-phase commit model at the
resource level, enabling cluster-wide coordinated transitions for
connect, disconnect, role change, and resize operations.
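
In terms of the new API, a single-resource transition then follows
this pattern (states and tag here are for illustration only):

	begin_state_change_locked(device->resource, CS_VERBOSE);
	__change_repl_state(peer_device, L_SYNC_TARGET);
	rv = end_state_change_locked(device->resource, "example-tag");
	/* or: abort_state_change_locked(device->resource); */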

Co-developed-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Co-developed-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Co-developed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Co-developed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/drbd_buildtag.c            |    2 +-
 drivers/block/drbd/drbd_config.h              |   38 +
 drivers/block/drbd/drbd_debugfs.h             |    2 +
 .../block/drbd}/drbd_genl_api.h               |   19 +-
 drivers/block/drbd/drbd_int.h                 | 3278 +++++++++++------
 drivers/block/drbd/drbd_interval.h            |  156 +-
 drivers/block/drbd/drbd_nl.c                  |    2 +-
 drivers/block/drbd/drbd_nla.c                 |    2 +-
 drivers/block/drbd/drbd_nla.h                 |    7 +-
 drivers/block/drbd/drbd_polymorph_printk.h    |  265 +-
 drivers/block/drbd/drbd_req.h                 |  303 +-
 drivers/block/drbd/drbd_state.h               |  298 +-
 drivers/block/drbd/drbd_state_change.h        |   66 +-
 drivers/block/drbd/drbd_strings.h             |   25 +-
 drivers/block/drbd/drbd_transport_lb-tcp.c    |    4 +-
 drivers/block/drbd/drbd_transport_rdma.c      |    4 +-
 drivers/block/drbd/drbd_transport_tcp.c       |    4 +-
 include/linux/drbd.h                          |  190 +-
 include/linux/drbd_config.h                   |   16 -
 include/linux/drbd_genl.h                     |  350 +-
 include/linux/drbd_limits.h                   |  105 +-
 include/linux/genl_magic_func.h               |   50 +-
 22 files changed, 3361 insertions(+), 1825 deletions(-)
 create mode 100644 drivers/block/drbd/drbd_config.h
 rename {include/linux => drivers/block/drbd}/drbd_genl_api.h (68%)
 delete mode 100644 include/linux/drbd_config.h

diff --git a/drivers/block/drbd/drbd_buildtag.c b/drivers/block/drbd/drbd_buildtag.c
index cb1aa66d7d5d..812f78070a0b 100644
--- a/drivers/block/drbd/drbd_buildtag.c
+++ b/drivers/block/drbd/drbd_buildtag.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
-#include <linux/drbd_config.h>
 #include <linux/module.h>
+#include "drbd_config.h"
 
 const char *drbd_buildtag(void)
 {
diff --git a/drivers/block/drbd/drbd_config.h b/drivers/block/drbd/drbd_config.h
new file mode 100644
index 000000000000..62fc91dc529a
--- /dev/null
+++ b/drivers/block/drbd/drbd_config.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+  drbd_config.h
+  DRBD's compile time configuration.
+*/
+
+#ifndef DRBD_CONFIG_H
+#define DRBD_CONFIG_H
+
+#include "drbd_protocol.h"
+
+const char *drbd_buildtag(void);
+
+#define REL_VERSION "9.3.0"
+#define PRO_VERSION_MIN 118 /* 9.0.26 */
+#define PRO_VERSION_MAX 124
+#define PRO_FEATURES (DRBD_FF_TRIM | DRBD_FF_THIN_RESYNC | DRBD_FF_WSAME | DRBD_FF_WZEROES | \
+		      DRBD_FF_RESYNC_DAGTAG | \
+		      DRBD_FF_2PC_V2 | DRBD_FF_RS_SKIP_UUID | \
+		      DRBD_FF_RESYNC_WITHOUT_REPLICATION)
+
+#define PRO_VERSION_8_MIN 86
+#define PRO_VERSION_8_MAX 101
+
+/* We support two ranges of DRBD protocol versions:
+ *  86-101: accepted DRBD 8 protocol versions as "rolling upgrade" path
+ * 102-109: never defined
+ * 110-117: _rejected_ because of bugs in the backward compat path
+ *	in more recent DRBD versions.  That is 9.0.0 to 9.0.25 inclusive.
+ *	"Rolling" upgrade path for those versions:
+ *	first upgrade to 9.0.latest, then connect to 9.1/9.2 or later.
+ * 118-PRO_VERSION_MAX: accepted DRBD 9 protocol versions.
+ *
+ * Note that we also reject connections with protocol version 121 and feature
+ * DRBD_FF_RESYNC_DAGTAG.
+ */
+
+#endif
diff --git a/drivers/block/drbd/drbd_debugfs.h b/drivers/block/drbd/drbd_debugfs.h
index ee3d66eb40c6..37037b196e4a 100644
--- a/drivers/block/drbd/drbd_debugfs.h
+++ b/drivers/block/drbd/drbd_debugfs.h
@@ -11,6 +11,7 @@ void drbd_debugfs_cleanup(void);
 
 void drbd_debugfs_resource_add(struct drbd_resource *resource);
 void drbd_debugfs_resource_cleanup(struct drbd_resource *resource);
+void drbd_debugfs_resource_rename(struct drbd_resource *resource, const char *new_name);
 
 void drbd_debugfs_connection_add(struct drbd_connection *connection);
 void drbd_debugfs_connection_cleanup(struct drbd_connection *connection);
@@ -27,6 +28,7 @@ static inline void drbd_debugfs_cleanup(void) { }
 
 static inline void drbd_debugfs_resource_add(struct drbd_resource *resource) { }
 static inline void drbd_debugfs_resource_cleanup(struct drbd_resource *resource) { }
+static inline void drbd_debugfs_resource_rename(struct drbd_resource *resource, const char *new_name) { }
 
 static inline void drbd_debugfs_connection_add(struct drbd_connection *connection) { }
 static inline void drbd_debugfs_connection_cleanup(struct drbd_connection *connection) { }
diff --git a/include/linux/drbd_genl_api.h b/drivers/block/drbd/drbd_genl_api.h
similarity index 68%
rename from include/linux/drbd_genl_api.h
rename to drivers/block/drbd/drbd_genl_api.h
index 70682c058027..7096b9c4f6dc 100644
--- a/include/linux/drbd_genl_api.h
+++ b/drivers/block/drbd/drbd_genl_api.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0 */
+/* SPDX-License-Identifier: GPL-2.0-only */
 #ifndef DRBD_GENL_STRUCT_H
 #define DRBD_GENL_STRUCT_H
 
@@ -13,12 +13,6 @@
  *     is used instead.
  * @flags: possible operation modifiers (relevant only for user->kernel):
  *     DRBD_GENL_F_SET_DEFAULTS
- * @volume:
- *     When creating a new minor (adding it to a resource), the resource needs
- *     to know which volume number within the resource this is supposed to be.
- *     The volume number corresponds to the same volume number on the remote side,
- *     whereas the minor number on the remote side may be different
- *     (union with flags).
  * @ret_code: kernel->userland unicast cfg reply return code (union with flags);
  */
 struct drbd_genlmsghdr {
@@ -34,20 +28,13 @@ enum {
 	DRBD_GENL_F_SET_DEFAULTS = 1,
 };
 
-enum drbd_state_info_bcast_reason {
-	SIB_GET_STATUS_REPLY = 1,
-	SIB_STATE_CHANGE = 2,
-	SIB_HELPER_PRE = 3,
-	SIB_HELPER_POST = 4,
-	SIB_SYNC_PROGRESS = 5,
-};
-
 /* hack around predefined gcc/cpp "linux=1",
  * we cannot possibly include <1/drbd_genl.h> */
 #undef linux
 
 #include <linux/drbd.h>
-#define GENL_MAGIC_VERSION	1
+#include "drbd_config.h"
+#define GENL_MAGIC_VERSION	2
 #define GENL_MAGIC_FAMILY	drbd
 #define GENL_MAGIC_FAMILY_HDRSZ	sizeof(struct drbd_genlmsghdr)
 #define GENL_MAGIC_INCLUDE_FILE <linux/drbd_genl.h>
diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h
index f6d6276974ee..b7dc630cf784 100644
--- a/drivers/block/drbd/drbd_int.h
+++ b/drivers/block/drbd/drbd_int.h
@@ -18,55 +18,101 @@
 #include <linux/compiler.h>
 #include <linux/types.h>
 #include <linux/list.h>
+#include <linux/sched.h>
 #include <linux/sched/signal.h>
 #include <linux/bitops.h>
 #include <linux/slab.h>
 #include <linux/ratelimit.h>
-#include <linux/tcp.h>
 #include <linux/mutex.h>
 #include <linux/major.h>
 #include <linux/blkdev.h>
 #include <linux/backing-dev.h>
 #include <linux/idr.h>
-#include <linux/dynamic_debug.h>
-#include <net/tcp.h>
 #include <linux/lru_cache.h>
 #include <linux/prefetch.h>
-#include <linux/drbd_genl_api.h>
+#include "drbd_genl_api.h"
 #include <linux/drbd.h>
-#include <linux/drbd_config.h>
+
+#include "drbd_config.h"
 #include "drbd_strings.h"
 #include "drbd_state.h"
+#include "drbd_state_change.h"
 #include "drbd_protocol.h"
+#include "drbd_transport.h"
 #include "drbd_polymorph_printk.h"
 
-/* shared module parameters, defined in drbd_main.c */
+/* module parameter, defined in drbd_main.c */
+extern unsigned int drbd_minor_count;
+extern unsigned int drbd_protocol_version_min;
+extern bool drbd_strict_names;
+
+static inline bool drbd_protocol_version_acceptable(unsigned int pv)
+{
+	return	/* DRBD 9 */ (pv >= PRO_VERSION_MIN && pv <= PRO_VERSION_MAX) ||
+		/* DRBD 8 */ (pv >= PRO_VERSION_8_MIN && pv <= PRO_VERSION_8_MAX);
+}
+
 #ifdef CONFIG_DRBD_FAULT_INJECTION
 extern int drbd_enable_faults;
 extern int drbd_fault_rate;
 #endif
 
-extern unsigned int drbd_minor_count;
 extern char drbd_usermode_helper[];
-extern int drbd_proc_details;
+enum {
+	/* drbd_khelper returns >= 0, we can use negative values as flags for drbd_maybe_khelper */
+	DRBD_UMH_DISABLED = INT_MIN,
+};
 
+#ifndef DRBD_MAJOR
+# define DRBD_MAJOR 147
+#endif
 
 /* This is used to stop/restart our threads.
  * Cannot use SIGTERM nor SIGKILL, since these
  * are sent out by init on runlevel changes
  * I choose SIGHUP for now.
+ *
+ * FIXME btw, we should register some reboot notifier.
  */
 #define DRBD_SIGKILL SIGHUP
 
+/* For compatibility with protocol < 122 */
+#define ID_SKIP         (4710ULL)
 #define ID_IN_SYNC      (4711ULL)
 #define ID_OUT_OF_SYNC  (4712ULL)
 #define ID_SYNCER (-1ULL)
 
+static inline enum ov_result drbd_block_id_to_ov_result(u64 block_id)
+{
+	switch (block_id) {
+	case ID_IN_SYNC:
+		return OV_RESULT_IN_SYNC;
+	case ID_OUT_OF_SYNC:
+		return OV_RESULT_OUT_OF_SYNC;
+	default:
+		return OV_RESULT_SKIP;
+	}
+}
+
+static inline u64 drbd_ov_result_to_block_id(enum ov_result result)
+{
+	switch (result) {
+	case OV_RESULT_IN_SYNC:
+		return ID_IN_SYNC;
+	case OV_RESULT_OUT_OF_SYNC:
+		return ID_OUT_OF_SYNC;
+	default:
+		return ID_SKIP;
+	}
+}
+
 #define UUID_NEW_BM_OFFSET ((u64)0x0001000000000000ULL)
 
 struct drbd_device;
 struct drbd_connection;
-struct drbd_peer_device;
+
+/* I want to be able to grep for "drbd $resource_name"
+ * and get all relevant log lines. */
 
 /* Defines to control fault insertion */
 enum {
@@ -80,11 +126,12 @@ enum {
 	DRBD_FAULT_BM_ALLOC = 7,	/* bitmap allocation */
 	DRBD_FAULT_AL_EE = 8,	/* alloc ee */
 	DRBD_FAULT_RECEIVE = 9, /* Changes some bytes upon receiving a [rs]data block */
+	DRBD_FAULT_BIO_TOO_SMALL = 10, /* Allocate smaller bios to trigger bio chaining */
 
 	DRBD_FAULT_MAX,
 };
 
-extern unsigned int
+unsigned int
 _drbd_insert_fault(struct drbd_device *device, unsigned int type);
 
 static inline int
@@ -98,28 +145,31 @@ drbd_insert_fault(struct drbd_device *device, unsigned int type) {
 #endif
 }
 
-/* integer division, round _UP_ to the next integer */
-#define div_ceil(A, B) ((A)/(B) + ((A)%(B) ? 1 : 0))
-/* usual integer division */
-#define div_floor(A, B) ((A)/(B))
-
-extern struct ratelimit_state drbd_ratelimit_state;
-extern struct idr drbd_devices; /* RCU, updates: genl_lock() */
-extern struct list_head drbd_resources; /* RCU, updates: genl_lock() */
+/*
+ * our structs
+ *************************/
 
-extern const char *cmdname(enum drbd_packet cmd);
+extern struct idr drbd_devices; /* RCU, updates: drbd_devices_lock */
+extern struct list_head drbd_resources; /* RCU, updates: resources_mutex */
+extern struct mutex resources_mutex;
 
 /* for sending/receiving the bitmap,
- * possibly in some encoding scheme */
+ * possibly in some encoding scheme.
+ * For compatibility, we transfer as if bm_block_size was 4k.
+ */
 struct bm_xfer_ctx {
 	/* "const"
 	 * stores total bits and long words
 	 * of the bitmap, so we don't need to
 	 * call the accessor functions over and again. */
+	unsigned long bm_bits_4k; /* unused on sending side */
 	unsigned long bm_bits;
 	unsigned long bm_words;
+	unsigned int scale; /* against BM_BLOCK_SHIFT_4k */
 	/* during xfer, current position within the bitmap */
 	unsigned long bit_offset;
+	/* receiving "partial" bits; unused on sending side. */
+	unsigned long bit_offset_4k;
 	unsigned long word_offset;
 
 	/* statistics; index: (h->command == P_BITMAP) */
@@ -127,8 +177,8 @@ struct bm_xfer_ctx {
 	unsigned bytes[2];
 };
 
-extern void INFO_bm_xfer_stats(struct drbd_peer_device *peer_device,
-			       const char *direction, struct bm_xfer_ctx *c);
+void INFO_bm_xfer_stats(struct drbd_peer_device *peer_device,
+			const char *direction, struct bm_xfer_ctx *c);
 
 static inline void bm_xfer_ctx_bit_to_word_offset(struct bm_xfer_ctx *c)
 {
@@ -149,7 +199,7 @@ static inline void bm_xfer_ctx_bit_to_word_offset(struct bm_xfer_ctx *c)
 #endif
 }
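Since the wire format always transfers bitmap data at 4k granularity, a node
whose local bm_block_size is larger has to scale between local bits and wire
bits via the scale field. A minimal sketch of that conversion, assuming scale
holds the local block shift minus BM_BLOCK_SHIFT_4k (these helpers are made
up for illustration):

/* Sketch, not part of the driver: one local bit covers 2^scale wire bits. */
static inline unsigned long bm_bit_to_bit_4k(struct bm_xfer_ctx *c,
					     unsigned long bit)
{
	return bit << c->scale;
}

static inline unsigned long bm_bit_4k_to_bit(struct bm_xfer_ctx *c,
					     unsigned long bit_4k)
{
	return bit_4k >> c->scale;
}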
 
-extern unsigned int drbd_header_size(struct drbd_connection *connection);
+unsigned int drbd_header_size(struct drbd_connection *connection);
 
 /**********************************************************************/
 enum drbd_thread_state {
@@ -164,7 +214,7 @@ struct drbd_thread {
 	struct task_struct *task;
 	struct completion stop;
 	enum drbd_thread_state t_state;
-	int (*function) (struct drbd_thread *);
+	int (*function)(struct drbd_thread *thi);
 	struct drbd_resource *resource;
 	struct drbd_connection *connection;
 	int reset_cpu_mask;
@@ -183,31 +233,61 @@ static inline enum drbd_thread_state get_t_state(struct drbd_thread *thi)
 
 struct drbd_work {
 	struct list_head list;
-	int (*cb)(struct drbd_work *, int cancel);
+	int (*cb)(struct drbd_work *w, int cancel);
 };
 
-struct drbd_device_work {
+struct drbd_peer_device_work {
 	struct drbd_work w;
-	struct drbd_device *device;
+	struct drbd_peer_device *peer_device;
 };
 
-#include "drbd_interval.h"
-
-extern int drbd_wait_misc(struct drbd_device *, struct drbd_interval *);
+enum drbd_stream;
 
-extern void lock_all_resources(void);
-extern void unlock_all_resources(void);
+#include "drbd_interval.h"
 
+void lock_all_resources(void);
+void unlock_all_resources(void);
+
+enum drbd_disk_state disk_state_from_md(struct drbd_device *device);
+bool want_bitmap(struct drbd_peer_device *peer_device);
+long twopc_timeout(struct drbd_resource *resource);
+long twopc_retry_timeout(struct drbd_resource *resource, int retries);
+void twopc_connection_down(struct drbd_connection *connection);
+u64 directly_connected_nodes(struct drbd_resource *resource,
+			     enum which_state which);
+
+/* sequence arithmetic for dagtag (data generation tag) sector numbers.
+ * dagtag_newer_eq: true, if a is newer than or equal to b */
+#define dagtag_newer_eq(a, b)      \
+	(typecheck(u64, a) && \
+	 typecheck(u64, b) && \
+	((s64)(a) - (s64)(b) >= 0))
+
+#define dagtag_newer(a, b)      \
+	(typecheck(u64, a) && \
+	 typecheck(u64, b) && \
+	((s64)(a) - (s64)(b) > 0))
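The signed-subtraction idiom is what makes these comparisons safe across u64
wraparound, much like the kernel's time_after() for jiffies. A small
self-contained illustration with hypothetical values:

/* Sketch, not part of the driver: dagtag comparison across u64 wraparound. */
static void dagtag_wrap_example(void)
{
	u64 older = (u64)-3;	/* shortly before the wrap */
	u64 newer = 2;		/* shortly after the wrap */

	/* (s64)(newer - older) == 5 > 0, so "newer" correctly compares as
	 * newer, even though newer < older as plain unsigned numbers. */
	WARN_ON(!dagtag_newer(newer, older));
}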
+
+/* An application I/O request.
+ *
+ * Fields marked as "immutable" may only be modified when the request is
+ * exclusively owned, e.g. when the request is created or is being retried.
+ */
 struct drbd_request {
-	struct drbd_work w;
+	/* "immutable" */
 	struct drbd_device *device;
 
 	/* if local IO is not allowed, will be NULL.
 	 * if local IO _is_ allowed, holds the locally submitted bio clone,
 	 * or, after local IO completion, the ERR_PTR(error).
-	 * see drbd_request_endio(). */
+	 * see drbd_request_endio().
+	 *
+	 * Only accessed by app/submitter/endio - strictly sequential,
+	 * no serialization required. */
 	struct bio *private_bio;
 
+	/* Fields sector and size are "immutable". Other fields protected
+	 * by interval_lock. */
 	struct drbd_interval i;
 
 	/* epoch: used to check on "completion" whether this req was in
@@ -217,96 +297,152 @@ struct drbd_request {
 	 * This corresponds to "barrier" in struct p_barrier[_ack],
 	 * and to "barrier_nr" in struct drbd_epoch (and various
 	 * comments/function parameters/local variable names).
+	 *
+	 * "immutable"
 	 */
 	unsigned int epoch;
 
-	struct list_head tl_requests; /* ring list in the transfer log */
-	struct bio *master_bio;       /* master bio pointer */
+	/* Position of this request in the serialized per-resource change
+	 * stream. Can be used to serialize with other events when
+	 * communicating the change stream via multiple connections.
+	 * Assigned from device->resource->dagtag_sector.
+	 *
+	 * Given that some IO backends write several GB per second meanwhile,
+	 * lets just use a 64bit sequence space.
+	 *
+	 * "immutable"
+	 */
+	u64 dagtag_sector;
+
+	/* list entry in transfer log (protected by RCU) */
+	struct list_head tl_requests;
+
+	/* list entry in submitter lists, peer ack list, or retry lists;
+	 * protected by the locks for those lists */
+	struct list_head list;
+
+	/* master bio pointer; "immutable" */
+	struct bio *master_bio;
 
 	/* see struct drbd_device */
 	struct list_head req_pending_master_completion;
 	struct list_head req_pending_local;
 
-	/* for generic IO accounting */
+	/* for generic IO accounting; "immutable" */
 	unsigned long start_jif;
 
-	/* for DRBD internal statistics */
+	/* for request_timer_fn() */
+	unsigned long pre_submit_jif;
+	unsigned long pre_send_jif[DRBD_PEERS_MAX];
 
-	/* Minimal set of time stamps to determine if we wait for activity log
-	 * transactions, local disk or peer.  32 bit "jiffies" are good enough,
-	 * we don't expect a DRBD request to be stalled for several month.
-	 */
+#ifdef CONFIG_DRBD_TIMING_STATS
+	/* for DRBD internal statistics */
+	ktime_t start_kt;
 
 	/* before actual request processing */
-	unsigned long in_actlog_jif;
+	ktime_t in_actlog_kt;
 
 	/* local disk */
-	unsigned long pre_submit_jif;
+	ktime_t pre_submit_kt;
 
 	/* per connection */
-	unsigned long pre_send_jif;
-	unsigned long acked_jif;
-	unsigned long net_done_jif;
-
+	ktime_t pre_send_kt[DRBD_PEERS_MAX];
+	ktime_t acked_kt[DRBD_PEERS_MAX];
+	ktime_t net_done_kt[DRBD_PEERS_MAX];
+#endif
 	/* Possibly even more detail to track each phase:
-	 *  master_completion_jif
+	 *  master_completion_kt
 	 *      how long did it take to complete the master bio
 	 *      (application visible latency)
-	 *  allocated_jif
+	 *  allocated_kt
 	 *      how long the master bio was blocked until we finally allocated
 	 *      a tracking struct
-	 *  in_actlog_jif
+	 *  in_actlog_kt
 	 *      how long did we wait for activity log transactions
 	 *
-	 *  net_queued_jif
+	 *  net_queued_kt
 	 *      when did we finally queue it for sending
-	 *  pre_send_jif
+	 *  pre_send_kt
 	 *      when did we start sending it
-	 *  post_send_jif
+	 *  post_send_kt
 	 *      how long did we block in the network stack trying to send it
-	 *  acked_jif
+	 *  acked_kt
 	 *      when did we receive (or fake, in protocol A) a remote ACK
-	 *  net_done_jif
+	 *  net_done_kt
 	 *      when did we receive final acknowledgement (P_BARRIER_ACK),
 	 *      or decide, e.g. on connection loss, that we do no longer expect
 	 *      anything from this peer for this request.
 	 *
-	 *  pre_submit_jif
-	 *  post_sub_jif
+	 *  pre_submit_kt
+	 *  post_sub_kt
 	 *      when did we start submiting to the lower level device,
 	 *      and how long did we block in that submit function
-	 *  local_completion_jif
+	 *  local_completion_kt
 	 *      how long did it take the lower level device to complete this request
 	 */
 
 
 	/* once it hits 0, we may complete the master_bio */
 	atomic_t completion_ref;
+	/* once it hits 0, we may remove the request from the interval tree and activity log */
+	refcount_t done_ref;
+	/* once it hits 0, we may remove from transfer log and send a corresponding peer ack */
+	refcount_t oos_send_ref;
 	/* once it hits 0, we may destroy this drbd_request object */
 	struct kref kref;
 
-	unsigned rq_state; /* see comments above _req_mod() */
+	/* Creates a dependency chain between writes so that we know that a
+	 * peer ack can be sent when done_ref reaches zero.
+	 *
+	 * If not NULL, when this drbd_request is done, one done_ref reference
+	 * of ->next_write will be put.
+	 *
+	 * "immutable" */
+	struct drbd_request *next_write;
+
+	/* lock to protect state flags */
+	spinlock_t rq_lock;
+	unsigned int local_rq_state;
+	u16 net_rq_state[DRBD_NODE_ID_MAX];
+
+	/* for reclaim from transfer log */
+	struct rcu_head rcu;
+};
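To make the dependency chain concrete: dropping the last done_ref of a request
hands one reference on to its successor, which may cascade down the chain. A
minimal sketch; queue_peer_ack_if_needed() is a made-up stand-in for the real
bookkeeping:

/* Sketch, not part of the driver: propagate completion along next_write. */
static void put_done_ref(struct drbd_request *req)
{
	while (req && refcount_dec_and_test(&req->done_ref)) {
		struct drbd_request *next = req->next_write;

		queue_peer_ack_if_needed(req);	/* hypothetical helper */
		req = next;	/* puts one done_ref of the successor */
	}
}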
+
+/* Used to multicast peer acks. */
+struct drbd_peer_ack {
+	struct drbd_resource *resource;
+	struct list_head list;
+	/*
+	 * Keeps track of which connections have not yet processed this peer
+	 * ack. Peer acks are queued for connections on which they are not sent
+	 * so that last_peer_ack_dagtag_seen is updated at the correct moment.
+	 */
+	u64 queued_mask;
+	u64 pending_mask; /* Peer ack is sent to these nodes */
+	u64 mask; /* Nodes which successfully wrote the requests covered by this peer ack */
+	u64 dagtag_sector;
 };
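All three masks are bitmasks over node ids. A sketch of the bookkeeping when
one connection has dealt with the peer ack, assuming the usual 1ULL << node_id
encoding (the helper is made up for illustration):

/* Sketch, not part of the driver: clear one node's bit; the peer ack can
 * be freed once no connection has it queued or pending anymore. */
static bool peer_ack_done_for_node(struct drbd_peer_ack *peer_ack, int node_id)
{
	u64 node_mask = 1ULL << node_id;

	peer_ack->queued_mask &= ~node_mask;
	peer_ack->pending_mask &= ~node_mask;
	return !(peer_ack->queued_mask | peer_ack->pending_mask);
}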
 
+/* Tracks received writes grouped in epochs. Protected by epoch_lock. */
 struct drbd_epoch {
 	struct drbd_connection *connection;
+	struct drbd_peer_request *oldest_unconfirmed_peer_req;
 	struct list_head list;
 	unsigned int barrier_nr;
 	atomic_t epoch_size; /* increased on every request added. */
 	atomic_t active;     /* increased on every req. added, and dec on every finished. */
+	atomic_t confirmed;  /* adjusted for every P_CONFIRM_STABLE */
 	unsigned long flags;
 };
 
 /* drbd_epoch flag bits */
 enum {
+	DE_BARRIER_IN_NEXT_EPOCH_ISSUED,
+	DE_BARRIER_IN_NEXT_EPOCH_DONE,
+	DE_CONTAINS_A_BARRIER,
 	DE_HAVE_BARRIER_NUMBER,
-};
-
-enum epoch_event {
-	EV_PUT,
-	EV_GOT_BARRIER_NR,
-	EV_BECAME_LAST,
-	EV_CLEANUP = 32, /* used as flag */
+	DE_IS_FINISHING,
 };
 
 struct digest_info {
@@ -317,23 +453,36 @@ struct digest_info {
 struct drbd_peer_request {
 	struct drbd_work w;
 	struct drbd_peer_device *peer_device;
-	struct drbd_epoch *epoch; /* for writes */
-	struct page *pages;
-	blk_opf_t opf;
+	struct list_head recv_order; /* see peer_requests, peer_reads, resync_requests */
+
+	union {
+		struct { /* read requests */
+			unsigned int depend_dagtag_node_id;
+			u64 depend_dagtag;
+		};
+		struct { /* resync target requests */
+			unsigned int requested_size;
+		};
+	};
+
+	struct bio_list bios;
 	atomic_t pending_bios;
 	struct drbd_interval i;
-	/* see comments on ee flag bits below */
-	unsigned long flags;
-	unsigned long submit_jif;
+	unsigned long flags;	/* see comments on ee flag bits below */
 	union {
-		u64 block_id;
-		struct digest_info *digest;
+		struct { /* regular peer_request */
+			struct drbd_epoch *epoch; /* for writes */
+			unsigned long submit_jif;
+			u64 block_id;
+			struct digest_info *digest;
+			u64 dagtag_sector;
+		};
+		struct { /* reused object for sending OOS to other nodes */
+			u64 send_oos_pending;
+		};
 	};
 };
 
-/* Equivalent to bio_op and req_op. */
-#define peer_req_op(peer_req) \
-	((peer_req)->opf & REQ_OP_MASK)
 
 /* ee flag bits.
  * While corresponding bios are in flight, the only modification will be
@@ -342,9 +491,19 @@ struct drbd_peer_request {
  * non-atomic modification to ee->flags is ok.
  */
 enum {
-	__EE_CALL_AL_COMPLETE_IO,
+	/* If successfully written,
+	 * we may clear the corresponding out-of-sync bits */
 	__EE_MAY_SET_IN_SYNC,
 
+	/* Peer did not write this one, we must set-out-of-sync
+	 * before actually submitting ourselves */
+	__EE_SET_OUT_OF_SYNC,
+
+	/* This peer request closes an epoch using a barrier.
+	 * On successful completion, the epoch is released,
+	 * and the P_BARRIER_ACK sent. */
+	__EE_IS_BARRIER,
+
 	/* is this a TRIM aka REQ_OP_DISCARD? */
 	__EE_TRIM,
 	/* explicit zero-out requested, or
@@ -364,125 +523,201 @@ enum {
 	/* This ee has a pointer to a digest instead of a block id */
 	__EE_HAS_DIGEST,
 
-	/* Conflicting local requests need to be restarted after this request */
-	__EE_RESTART_REQUESTS,
-
 	/* The peer wants a write ACK for this (wire proto C) */
 	__EE_SEND_WRITE_ACK,
 
-	/* Is set when net_conf had two_primaries set while creating this peer_req */
-	__EE_IN_INTERVAL_TREE,
-
-	/* for debugfs: */
-	/* has this been submitted, or does it still wait for something else? */
-	__EE_SUBMITTED,
-
-	/* this is/was a write request */
-	__EE_WRITE,
-
 	/* hand back using mempool_free(e, drbd_buffer_page_pool) */
 	__EE_RELEASE_TO_MEMPOOL,
 
 	/* this is/was a write same request */
 	__EE_WRITE_SAME,
 
-	/* this originates from application on peer
-	 * (not some resync or verify or other DRBD internal request) */
-	__EE_APPLICATION,
-
-	/* If it contains only 0 bytes, send back P_RS_DEALLOCATED */
+	/* On target: Send P_RS_THIN_REQ.
+	 * On source: If it contains only 0 bytes, send back P_RS_DEALLOCATED. */
 	__EE_RS_THIN_REQ,
+
+	/* Hold reference in activity log */
+	__EE_IN_ACTLOG,
+
+	/* SyncTarget: This is the last resync request. */
+	__EE_LAST_RESYNC_REQUEST,
+
+	/* This peer_req->recv_order is on some list */
+	__EE_ON_RECV_ORDER,
 };
-#define EE_CALL_AL_COMPLETE_IO (1<<__EE_CALL_AL_COMPLETE_IO)
 #define EE_MAY_SET_IN_SYNC     (1<<__EE_MAY_SET_IN_SYNC)
+#define EE_SET_OUT_OF_SYNC     (1<<__EE_SET_OUT_OF_SYNC)
+#define EE_IS_BARRIER          (1<<__EE_IS_BARRIER)
 #define EE_TRIM                (1<<__EE_TRIM)
 #define EE_ZEROOUT             (1<<__EE_ZEROOUT)
 #define EE_RESUBMITTED         (1<<__EE_RESUBMITTED)
 #define EE_WAS_ERROR           (1<<__EE_WAS_ERROR)
 #define EE_HAS_DIGEST          (1<<__EE_HAS_DIGEST)
-#define EE_RESTART_REQUESTS	(1<<__EE_RESTART_REQUESTS)
 #define EE_SEND_WRITE_ACK	(1<<__EE_SEND_WRITE_ACK)
-#define EE_IN_INTERVAL_TREE	(1<<__EE_IN_INTERVAL_TREE)
-#define EE_SUBMITTED		(1<<__EE_SUBMITTED)
-#define EE_WRITE		(1<<__EE_WRITE)
 #define EE_RELEASE_TO_MEMPOOL	(1<<__EE_RELEASE_TO_MEMPOOL)
 #define EE_WRITE_SAME		(1<<__EE_WRITE_SAME)
-#define EE_APPLICATION		(1<<__EE_APPLICATION)
 #define EE_RS_THIN_REQ		(1<<__EE_RS_THIN_REQ)
+#define EE_IN_ACTLOG		(1<<__EE_IN_ACTLOG)
+#define EE_LAST_RESYNC_REQUEST	(1<<__EE_LAST_RESYNC_REQUEST)
+#define EE_ON_RECV_ORDER	(1<<__EE_ON_RECV_ORDER)
+
+#define REQ_NO_BIO (REQ_OP_DRV_OUT) /* exception for drbd_alloc_peer_request(), DRBD private */
 
 /* flag bits per device */
-enum {
-	UNPLUG_REMOTE,		/* sending a "UnplugRemote" could help */
+enum device_flag {
 	MD_DIRTY,		/* current uuids and flags not yet on disk */
-	USE_DEGR_WFC_T,		/* degr-wfc-timeout instead of wfc-timeout. */
-	CL_ST_CHG_SUCCESS,
-	CL_ST_CHG_FAIL,
 	CRASHED_PRIMARY,	/* This node was a crashed primary.
 				 * Gets cleared when the state.conn
-				 * goes into C_CONNECTED state. */
-	CONSIDER_RESYNC,
-
-	MD_NO_FUA,		/* Users wants us to not use FUA/FLUSH on meta data dev */
-
-	BITMAP_IO,		/* suspend application io;
-				   once no more io in flight, start bitmap io */
-	BITMAP_IO_QUEUED,       /* Started bitmap IO */
-	WAS_IO_ERROR,		/* Local disk failed, returned IO error */
-	WAS_READ_ERROR,		/* Local disk READ failed (set additionally to the above) */
+				 * goes into L_ESTABLISHED state. */
+	MD_NO_FUA,		/* meta data device does not support barriers,
+				   so don't even try */
 	FORCE_DETACH,		/* Force-detach from local disk, aborting any pending local IO */
-	RESYNC_AFTER_NEG,       /* Resync after online grow after the attach&negotiate finished. */
-	RESIZE_PENDING,		/* Size change detected locally, waiting for the response from
-				 * the peer, if it changed there as well. */
-	NEW_CUR_UUID,		/* Create new current UUID when thawing IO */
+	ABORT_MDIO,		/* Interrupt ongoing meta-data I/O */
+	NEW_CUR_UUID,		/* Create new current UUID when thawing IO or issuing local IO */
+	__NEW_CUR_UUID,		/* Set NEW_CUR_UUID as soon as state change visible */
+	WRITING_NEW_CUR_UUID,	/* Set while the new current ID gets generated. */
 	AL_SUSPENDED,		/* Activity logging is currently suspended. */
-	AHEAD_TO_SYNC_SOURCE,   /* Ahead -> SyncSource queued */
-	B_RS_H_DONE,		/* Before resync handler done (already executed) */
-	DISCARD_MY_DATA,	/* discard_my_data flag per volume */
-	READ_BALANCE_RR,
-
+	UNREGISTERED,
 	FLUSH_PENDING,		/* if set, device->flush_jif is when we submitted that flush
 				 * from drbd_flush_after_epoch() */
 
 	/* cleared only after backing device related structures have been destroyed. */
-	GOING_DISKLESS,		/* Disk is being detached, because of io-error, or admin request. */
+	GOING_DISKLESS,         /* Disk is being detached, because of io-error, or admin request. */
 
 	/* to be used in drbd_device_post_work() */
-	GO_DISKLESS,		/* tell worker to schedule cleanup before detach */
-	DESTROY_DISK,		/* tell worker to close backing devices and destroy related structures. */
+	GO_DISKLESS,            /* tell worker to schedule cleanup before detach */
 	MD_SYNC,		/* tell worker to call drbd_md_sync() */
+	MAKE_NEW_CUR_UUID,	/* tell worker to ping peers and eventually write new current uuid */
+
+	STABLE_RESYNC,		/* One peer_device finished the resync stable! */
+	READ_BALANCE_RR,
+	PRIMARY_LOST_QUORUM,
+	TIEBREAKER_QUORUM,	/* Tiebreaker keeps quorum; used to avoid too verbose logging */
+	DESTROYING_DEV,
+	TRY_TO_GET_RESYNC,
+	OUTDATE_ON_2PC_COMMIT,
+	RESTORE_QUORUM,		/* Restore quorum when we have the same members as before */
+	RESTORING_QUORUM,	/* sanitize_state() -> finish_state_change() */
+	LEGACY_84_MD,
+	BDEV_FROZEN,		/* called bdev_freeze(), needs bdev_thaw() on resume-io */
+};
+
+/* flag bits per peer device */
+enum peer_device_flag {
+	CONSIDER_RESYNC,
+	RESYNC_AFTER_NEG,       /* Resync after online grow after the attach&negotiate finished. */
+	RESIZE_PENDING,		/* Size change detected locally, waiting for the response from
+				 * the peer, if it changed there as well. */
 	RS_START,		/* tell worker to start resync/OV */
 	RS_PROGRESS,		/* tell worker that resync made significant progress */
+	RS_LAZY_BM_WRITE,	/*  -"- and bitmap writeout should be efficient now */
 	RS_DONE,		/* tell worker that resync is done */
+	B_RS_H_DONE,		/* Before resync handler done (already executed) */
+	DISCARD_MY_DATA,	/* discard_my_data flag per volume */
+	USE_DEGR_WFC_T,		/* degr-wfc-timeout instead of wfc-timeout. */
+	INITIAL_STATE_SENT,
+	INITIAL_STATE_RECEIVED,
+	RECONCILIATION_RESYNC,
+	UNSTABLE_RESYNC,	/* Sync source went unstable during resync. */
+	SEND_STATE_AFTER_AHEAD,
+	GOT_NEG_ACK,		/* got a neg_ack while primary, wait until peer_disk is lower than
+				   D_UP_TO_DATE before becoming secondary! */
+	AHEAD_TO_SYNC_SOURCE,   /* Ahead -> SyncSource queued */
+	SYNC_TARGET_TO_BEHIND,  /* SyncTarget, wait for Behind */
+	HANDLING_CONGESTION,    /* Set while testing for congestion and handling it */
+	HANDLE_CONGESTION,      /* tell worker to change state due to congestion */
+	HOLDING_UUID_READ_LOCK, /* did a down_read(&device->uuid_sem) */
+	RS_SOURCE_MISSED_END,   /* SyncSource did not get P_UUIDS110 */
+	RS_PEER_MISSED_END,     /* Peer (which was SyncSource) did not get P_UUIDS110 after resync */
+	SYNC_SRC_CRASHED_PRI,   /* Source of this resync was a crashed primary */
+	HAVE_SIZES,		/* Cleared when connection gets lost; set when sizes received */
+	UUIDS_RECEIVED,		/* Have recent UUIDs from the peer */
+	CURRENT_UUID_RECEIVED,	/* Got a p_current_uuid packet */
+	PEER_QUORATE,		/* Peer has quorum */
+	RS_REQUEST_UNSUCCESSFUL, /* Some resync request was unsuccessful in current cycle */
+	REPLICATION_NEXT, /* If unset, do not replicate writes when next Inconsistent */
+	PEER_REPLICATION_NEXT, /* We have instructed peer not to replicate writes */
 };
 
-struct drbd_bitmap; /* opaque for drbd_device */
+/* We could make these currently hardcoded constants configurable
+ * variables at create-md time (or even re-configurable at runtime?).
+ * Which will require some more changes to the DRBD "super block"
+ * and attach code.
+ *
+ * updates per transaction:
+ *   This many changes to the active set can be logged with one transaction.
+ *   This number is arbitrary.
+ * context per transaction:
+ *   This many context extent numbers are logged with each transaction.
+ *   This number is resulting from the transaction block size (4k), the layout
+ *   of the transaction header, and the number of updates per transaction.
+ *   See drbd_actlog.c:struct al_transaction_on_disk
+ */
+#define AL_UPDATES_PER_TRANSACTION	 64	/* arbitrary */
+#define AL_CONTEXT_PER_TRANSACTION	919	/* (4096 - 36 - 6*64)/4 */
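The 919 follows mechanically from the layout: a 4k transaction block, minus
the 36 byte header, minus 6 bytes per update (assuming a u16 slot number plus
a u32 extent number each), leaves 3676 bytes, or 919 32-bit context extent
numbers. Expressed as a compile-time guard:

/* Sketch: (4096 - 36 - 64 * (2 + 4)) / 4 == 919 */
static_assert((4096 - 36 - AL_UPDATES_PER_TRANSACTION * (2 + 4)) / 4 ==
	      AL_CONTEXT_PER_TRANSACTION);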
 
 /* definition of bits in bm_flags to be used in drbd_bm_lock
  * and drbd_bitmap_io and friends. */
 enum bm_flag {
-	/* currently locked for bulk operation */
-	BM_LOCKED_MASK = 0xf,
-
-	/* in detail, that is: */
-	BM_DONT_CLEAR = 0x1,
-	BM_DONT_SET   = 0x2,
-	BM_DONT_TEST  = 0x4,
+	/*
+	 * The bitmap can be locked to prevent others from clearing, setting,
+	 * and/or testing bits.  The following combinations of lock flags make
+	 * sense:
+	 *
+	 *   BM_LOCK_CLEAR,
+	 *   BM_LOCK_SET | BM_LOCK_CLEAR,
+	 *   BM_LOCK_TEST | BM_LOCK_SET | BM_LOCK_CLEAR.
+	 */
 
-	/* so we can mark it locked for bulk operation,
-	 * and still allow all non-bulk operations */
-	BM_IS_LOCKED  = 0x8,
+	BM_LOCK_TEST = 0x1,
+	BM_LOCK_SET = 0x2,
+	BM_LOCK_CLEAR = 0x4,
+	BM_LOCK_BULK = 0x8, /* locked for bulk operation, allow all non-bulk operations */
 
-	/* (test bit, count bit) allowed (common case) */
-	BM_LOCKED_TEST_ALLOWED = BM_DONT_CLEAR | BM_DONT_SET | BM_IS_LOCKED,
+	BM_LOCK_ALL = BM_LOCK_TEST | BM_LOCK_SET | BM_LOCK_CLEAR | BM_LOCK_BULK,
 
-	/* testing bits, as well as setting new bits allowed, but clearing bits
-	 * would be unexpected.  Used during bitmap receive.  Setting new bits
-	 * requires sending of "out-of-sync" information, though. */
-	BM_LOCKED_SET_ALLOWED = BM_DONT_CLEAR | BM_IS_LOCKED,
+	BM_LOCK_SINGLE_SLOT = 0x10,
+	BM_ON_DAX_PMEM = 0x10000,
+};
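The valid combinations form a strict hierarchy: locking out set also locks out
clear, and locking out test locks out both. A validity check, for illustration
only:

/* Sketch, not part of the driver: accept exactly the combinations above. */
static inline bool bm_lock_flags_valid(enum bm_flag flags)
{
	unsigned int lock = flags & (BM_LOCK_TEST | BM_LOCK_SET | BM_LOCK_CLEAR);

	return lock == 0 ||
	       lock == BM_LOCK_CLEAR ||
	       lock == (BM_LOCK_SET | BM_LOCK_CLEAR) ||
	       lock == (BM_LOCK_TEST | BM_LOCK_SET | BM_LOCK_CLEAR);
}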
 
-	/* for drbd_bm_write_copy_pages, everything is allowed,
-	 * only concurrent bulk operations are locked out. */
-	BM_LOCKED_CHANGE_ALLOWED = BM_IS_LOCKED,
+struct drbd_bitmap {
+	union {
+		struct page **bm_pages;
+		void *bm_on_pmem;
+	};
+	spinlock_t bm_lock;		/* fine-grained lock (TODO: per slot) */
+	spinlock_t bm_all_slots_lock;	/* all bitmap slots lock */
+
+	unsigned long bm_set[DRBD_PEERS_MAX]; /* number of bits set */
+	unsigned long bm_bits;  /* bits per peer */
+	unsigned long bm_bits_4k;  /* bits per peer, if we had bm_block_size of 4k */
+	size_t   bm_words; /* platform specific word size; not 32bit!! */
+	size_t   bm_number_of_pages;
+	sector_t bm_dev_capacity;
+	struct mutex bm_change; /* serializes resize operations */
+
+	wait_queue_head_t bm_io_wait; /* used to serialize IO of single pages */
+
+	enum bm_flag bm_flags;
+	unsigned int bm_max_peers;
+	unsigned int bm_block_shift; /* log2 of bytes per bit for this bitmap */
+
+	/* exclusively to be used by __al_write_transaction(),
+	 * and drbd_bm_write_hinted() -> bm_rw() called from there.
+	 * One activity log extent represents 4MB of storage, which is 1024
+	 * bits (at 4k per bit), times at most DRBD_PEERS_MAX (currently 32).
+	 * The bitmap is created interleaved, with a potentially odd number
+	 * of peer slots determined at create-md time.  Which means that one
+	 * AL-extent may be associated with one or two bitmap pages.
+	 */
+	unsigned int n_bitmap_hints;
+	unsigned int al_bitmap_hints[2*AL_UPDATES_PER_TRANSACTION];
+
+	/* debugging aid, in case we are still racy somewhere */
+	const char    *bm_why;
+	char          bm_task_comm[TASK_COMM_LEN];
+	pid_t         bm_task_pid;
+	struct drbd_peer_device *bm_locked_peer;
 };
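For the sizing of al_bitmap_hints, the arithmetic in the comment above works
out as follows: per peer, one 4MB AL extent covers 4MB / 4k = 1024 bitmap
bits, i.e. 128 bytes. Interleaved across bm_max_peers slots this becomes a
contiguous run of 128 * bm_max_peers bytes, at most 4096 bytes for 32 peers,
so depending on alignment the run either fits in one 4k bitmap page or
straddles into a second one. Hence up to two page hints per update, giving
the 2*AL_UPDATES_PER_TRANSACTION bound.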
 
 struct drbd_work_queue {
@@ -491,29 +726,37 @@ struct drbd_work_queue {
 	wait_queue_head_t q_wait;
 };
 
-struct drbd_socket {
-	struct mutex mutex;
-	struct socket    *socket;
-	/* this way we get our
-	 * send/receive buffers off the stack */
-	void *sbuf;
-	void *rbuf;
+struct drbd_peer_md {
+	u64 bitmap_uuid;
+	u64 bitmap_dagtag;
+	u32 flags;
+	s32 bitmap_index;
 };
 
 struct drbd_md {
 	u64 md_offset;		/* sector offset to 'super' block */
 
-	u64 la_size_sect;	/* last agreed size, unit sectors */
+	u64 effective_size;	/* last agreed size (sectors) */
+	u64 prev_members;	/* read from the meta-data */
+	u64 members;		/* current member mask for writing meta-data */
 	spinlock_t uuid_lock;
-	u64 uuid[UI_SIZE];
+	u64 current_uuid;
 	u64 device_uuid;
 	u32 flags;
+	s32 node_id;
 	u32 md_size_sect;
 
 	s32 al_offset;	/* signed relative sector offset to activity log */
 	s32 bm_offset;	/* signed relative sector offset to bitmap */
 
-	/* cached value of bdev->disk_conf->meta_dev_idx (see below) */
+	u32 max_peers;
+	u32 bm_block_size;
+	u32 bm_block_shift; /* ilog2(bm_block_size) */
+
+	struct drbd_peer_md peers[DRBD_NODE_ID_MAX];
+	u64 history_uuids[HISTORY_UUIDS];
+
+	/* cached value of bdev->disk_conf->meta_dev_idx */
 	s32 meta_dev_idx;
 
 	/* see al_tr_number_to_on_disk_sector() */
@@ -528,8 +771,13 @@ struct drbd_backing_dev {
 	struct block_device *md_bdev;
 	struct file *f_md_bdev;
 	struct drbd_md md;
-	struct disk_conf *disk_conf; /* RCU, for updates: resource->conf_update */
+	struct disk_conf __rcu *disk_conf; /* RCU, for updates: resource->conf_update */
 	sector_t known_size; /* last known size of that backing device */
+#if IS_ENABLED(CONFIG_DEV_DAX_PMEM)
+	struct dax_device *dax_dev;
+	struct meta_data_on_disk_9 *md_on_pmem; /* address of md_offset */
+	struct al_on_pmem *al_on_pmem;
+#endif
 };
 
 struct drbd_md_io {
@@ -544,43 +792,151 @@ struct drbd_md_io {
 
 struct bm_io_work {
 	struct drbd_work w;
+	struct drbd_device *device;
 	struct drbd_peer_device *peer_device;
 	char *why;
 	enum bm_flag flags;
-	int (*io_fn)(struct drbd_device *device, struct drbd_peer_device *peer_device);
-	void (*done)(struct drbd_device *device, int rv);
+	int (*io_fn)(struct drbd_device *device,
+		     struct drbd_peer_device *peer_device);
+	void (*done)(struct drbd_device *device,
+		     struct drbd_peer_device *peer_device,
+		     int rv);
 };
 
 struct fifo_buffer {
+	/* singly linked list to accumulate multiple such struct fifo_buffers,
+	 * to be freed after a single synchronize_rcu(),
+	 * outside a critical section. */
+	struct fifo_buffer *next;
 	unsigned int head_index;
 	unsigned int size;
 	int total; /* sum of all values */
 	int values[] __counted_by(size);
 };
-extern struct fifo_buffer *fifo_alloc(unsigned int fifo_size);
+struct fifo_buffer *fifo_alloc(unsigned int fifo_size);
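The next pointer enables batching of frees: instead of paying one grace
period per replaced buffer, a caller can collect them into a chain and free
the whole chain after a single synchronize_rcu(). A minimal sketch:

/* Sketch, not part of the driver: free a chain of replaced fifo_buffers. */
static void fifo_free_chain(struct fifo_buffer *head)
{
	synchronize_rcu();
	while (head) {
		struct fifo_buffer *next = head->next;

		kfree(head);
		head = next;
	}
}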
 
 /* flag bits per connection */
-enum {
-	NET_CONGESTED,		/* The data socket is congested */
-	RESOLVE_CONFLICTS,	/* Set on one node, cleared on the peer! */
-	SEND_PING,
-	GOT_PING_ACK,		/* set when we receive a ping_ack packet, ping_wait gets woken */
-	CONN_WD_ST_CHG_REQ,	/* A cluster wide state change on the connection is active */
-	CONN_WD_ST_CHG_OKAY,
-	CONN_WD_ST_CHG_FAIL,
+enum connection_flag {
+	PING_PENDING,		/* cleared upon receiving a ping_ack packet, wakes state_wait */
+	TWOPC_PREPARED,
+	TWOPC_YES,
+	TWOPC_NO,
+	TWOPC_RETRY,
 	CONN_DRY_RUN,		/* Expect disconnect after resync handshake. */
-	CREATE_BARRIER,		/* next P_DATA is preceded by a P_BARRIER */
-	STATE_SENT,		/* Do not change state/UUIDs while this is set */
+	DISCONNECT_EXPECTED,
+	BARRIER_ACK_PENDING,
+	CORKED,
+	DATA_CORKED = CORKED,	/* used as computed value CORKED + DATA_STREAM */
+	CONTROL_CORKED,		/* used as computed value CORKED + CONTROL_STREAM */
+	C_UNREGISTERED,
+	RECONNECT,
+	CONN_DISCARD_MY_DATA,
+	SEND_STATE_AFTER_AHEAD_C,
+	NOTIFY_PEERS_LOST_PRIMARY,
+	CHECKING_PEER,		/* used by make_new_current_uuid() to check liveness */
+	CONN_CONGESTED,
+	CONN_HANDSHAKE_DISCONNECT,
+	CONN_HANDSHAKE_RETRY,
+	CONN_HANDSHAKE_READY,
+	RECEIVED_DAGTAG, /* Whether we received any write or dagtag since connecting. */
+	PING_TIMEOUT_ACTIVE,
+};
+
+/* flag bits per resource */
+enum resource_flag {
+	EXPLICIT_PRIMARY,
 	CALLBACK_PENDING,	/* Whether we have a call_usermodehelper(, UMH_WAIT_PROC)
 				 * pending, from drbd worker context.
 				 */
-	DISCONNECT_SENT,
+	TWOPC_ABORT_LOCAL,
+	TWOPC_WORK_PENDING,     /* Set while work for sending reply is scheduled */
+	TWOPC_EXECUTED,         /* Committed or aborted */
+	TWOPC_STATE_CHANGE_PENDING, /* set between sending commit and changing local state */
+
+	TRY_BECOME_UP_TO_DATE_PENDING,
 
 	DEVICE_WORK_PENDING,	/* tell worker that some device has pending work */
+	PEER_DEVICE_WORK_PENDING,/* tell worker that some peer_device has pending work */
+
+	/* to be used in drbd_post_work() */
+	R_UNREGISTERED,
+	DOWN_IN_PROGRESS,
+	CHECKING_PEERS,
+	WRONG_MDF_EXISTS,	/* Warned about MDF_EXISTS flag on all peer slots */
+	TWOPC_RECV_SIZES_ERR,	/* Error processing sizes packet during 2PC connect */
 };
 
 enum which_state { NOW, OLD = NOW, NEW };
 
+enum twopc_type {
+	TWOPC_STATE_CHANGE,
+	TWOPC_RESIZE,
+};
+
+struct twopc_reply {
+	int vnr;
+	unsigned int tid;  /* transaction identifier */
+	int initiator_node_id;  /* initiator of the transaction */
+	int target_node_id;  /* target of the transaction (or -1) */
+	u64 target_reachable_nodes;  /* behind the target node */
+	u64 reachable_nodes;  /* behind other nodes */
+	union {
+		struct { /* type == TWOPC_STATE_CHANGE */
+			u64 primary_nodes;
+			u64 weak_nodes;
+		};
+		struct { /* type == TWOPC_RESIZE */
+			u64 diskful_primary_nodes;
+			u64 max_possible_size;
+		};
+	};
+	unsigned int is_disconnect:1;
+	unsigned int is_connect:1;
+	unsigned int is_aborted:1;
+	/* Whether the state change on receiving the twopc failed. When this is
+	 * a twopc for transitioning to C_CONNECTED, we cannot immediately
+	 * reply with P_TWOPC_NO. The state handshake must complete first to
+	 * decide the appropriate reply. */
+	unsigned int state_change_failed:1;
+};
+
+struct twopc_request {
+	u64 nodes_to_reach;
+	enum drbd_packet cmd;
+	unsigned int tid;
+	int initiator_node_id;
+	int target_node_id;
+	int vnr;
+	u32 flags;
+};
+
+struct drbd_thread_timing_details {
+	unsigned long start_jif;
+	void *cb_addr;
+	const char *caller_fn;
+	unsigned int line;
+	unsigned int cb_nr;
+};
+#define DRBD_THREAD_DETAILS_HIST	16
+
+struct drbd_send_buffer {
+	struct page *page;  /* current buffer page for sending data */
+	char *unsent;  /* start of unsent area != pos if corked... */
+	char *pos; /* position within that page */
+	int allocated_size; /* currently allocated space */
+	int additional_size;  /* additional space to be added to next packet's size */
+};
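The unsent/pos split is what implements corking: while corked, packets keep
accumulating in the page and pos advances; uncorking hands the whole range
between unsent and pos to the transport in one go. A sketch of that flush
step, with transport_send_page() as a made-up stand-in for the transport call:

/* Sketch, not part of the driver: flush everything accumulated while corked. */
static void flush_send_buffer(struct drbd_send_buffer *sbuf)
{
	size_t len = sbuf->pos - sbuf->unsent;

	if (len) {
		transport_send_page(sbuf->page, sbuf->unsent, len); /* hypothetical */
		sbuf->unsent = sbuf->pos;
	}
}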
+
+struct drbd_mutable_buffer {
+	u8 *buffer;
+	unsigned int avail;
+};
+
+enum drbd_per_resource_ratelimit {
+	D_RL_R_NOLIMIT = -1,
+	D_RL_R_GENERIC,
+};
+
 struct drbd_resource {
 	char *name;
 #ifdef CONFIG_DEBUG_FS
@@ -588,32 +944,141 @@ struct drbd_resource {
 	struct dentry *debugfs_res_volumes;
 	struct dentry *debugfs_res_connections;
 	struct dentry *debugfs_res_in_flight_summary;
+	struct dentry *debugfs_res_state_twopc;
+	struct dentry *debugfs_res_worker_pid;
+	struct dentry *debugfs_res_members;
 #endif
 	struct kref kref;
-	struct idr devices;		/* volume number to device mapping */
+
+	/* Volume number to device mapping. Updates protected by conf_update. */
+	struct idr devices;
+
+	struct ratelimit_state ratelimit[1];
+
+	/* RCU list. Updates protected by adm_mutex, conf_update and state_rwlock. */
 	struct list_head connections;
-	struct list_head resources;
+
+	struct list_head resources;     /* list entry in global resources list */
 	struct res_opts res_opts;
-	struct mutex conf_update;	/* mutex for ready-copy-update of net_conf and disk_conf */
+	int max_node_id;
+	/*
+	 * For read-copy-update of net_conf and disk_conf and devices,
+	 * connections, peer_devices and paths lists.
+	 */
+	struct mutex conf_update;
 	struct mutex adm_mutex;		/* mutex to serialize administrative requests */
-	spinlock_t req_lock;
+	struct mutex open_release;	/* serialize open/release */
+	struct {
+		char comm[TASK_COMM_LEN];
+		unsigned int minor;
+		pid_t pid;
+		ktime_t opened;
+	} auto_promoted_by;
+
+	rwlock_t state_rwlock;          /* serialize state changes */
+	u64 dagtag_sector;		/* Protected by tl_update_lock.
+					 * See also dagtag_sector in
+					 * &drbd_request */
+	u64 dagtag_from_backing_dev;
+	u64 dagtag_before_attach;
+	u64 members;			/* mask of online nodes */
+	unsigned long flags;
+
+	/* Protects updates to the transfer log and related counters. */
+	spinlock_t tl_update_lock;
+	struct list_head transfer_log;	/* all requests not yet fully processed */
+	struct drbd_request *tl_previous_write;
+
+	spinlock_t peer_ack_lock;
+	struct list_head peer_ack_req_list;  /* requests to send peer acks for */
+	struct list_head peer_ack_list;  /* peer acks to send */
+	struct drbd_work peer_ack_work;
+	u64 last_peer_acked_dagtag;  /* dagtag of last PEER_ACK'ed request */
+	struct drbd_request *peer_ack_req;  /* last request not yet PEER_ACK'ed */
 
-	unsigned susp:1;		/* IO suspended by user */
-	unsigned susp_nod:1;		/* IO suspended because no data */
-	unsigned susp_fen:1;		/* IO suspended because fence peer handler runs */
+	/* Protects current_flush_sequence and pending_flush_mask (connection) */
+	spinlock_t initiator_flush_lock;
+	u64 current_flush_sequence;
+
+	struct semaphore state_sem;
+	wait_queue_head_t state_wait;  /* upon each state change. */
+	enum chg_state_flags state_change_flags;
+	const char **state_change_err_str;
+	bool remote_state_change;  /* remote state change in progress */
+	enum drbd_packet twopc_prepare_reply_cmd; /* this node's answer to the prepare phase or 0 */
+	u64 twopc_parent_nodes;
+	struct twopc_reply twopc_reply;
+	struct timer_list twopc_timer;
+	struct work_struct twopc_work;
+	wait_queue_head_t twopc_wait;
+	struct {
+		enum twopc_type type;
+		union {
+			struct twopc_resize {
+				int dds_flags;		   /* from prepare phase */
+				sector_t user_size;	   /* from prepare phase */
+				u64 diskful_primary_nodes; /* added in commit phase */
+				u64 new_size;		   /* added in commit phase */
+			} resize;
+			struct twopc_state_change {
+				union drbd_state mask;	/* from prepare phase */
+				union drbd_state val;	/* from prepare phase */
+				u64 primary_nodes;	/* added in commit phase */
+				u64 reachable_nodes;	/* added in commit phase */
+			} state_change;
+		};
+	} twopc;
+	enum drbd_role role[2];
+	bool susp_user[2];			/* IO suspended by user */
+	bool susp_nod[2];		/* IO suspended because no data */
+	bool susp_quorum[2];		/* IO suspended because no quorum */
+	bool susp_uuid[2];		/* IO suspended because waiting new current UUID */
+	bool fail_io[2];		/* Fail all IO requests because of a forced demote */
+	bool cached_susp;		/* cached result of looking at all different suspend bits */
+	bool cached_all_devices_have_quorum;
 
 	enum write_ordering_e write_ordering;
 
+	/* Protects the current transfer log (tle) fields. */
+	spinlock_t current_tle_lock;
+	atomic_t current_tle_nr;	/* transfer log epoch number */
+	unsigned current_tle_writes;	/* writes seen within this tl epoch */
+
+	unsigned cached_min_aggreed_protocol_version;
+
 	cpumask_var_t cpu_mask;
+
+	struct drbd_work_queue work;
+	struct drbd_thread worker;
+
+	struct list_head listeners;
+	spinlock_t listeners_lock;
+
+	struct timer_list peer_ack_timer; /* send a P_PEER_ACK after last completion */
+
+	unsigned int w_cb_nr; /* keeps counting up */
+	struct drbd_thread_timing_details w_timing_details[DRBD_THREAD_DETAILS_HIST];
+	wait_queue_head_t barrier_wait;  /* upon each state change. */
+	struct rcu_head rcu;
+
+	struct list_head suspended_reqs;
+	/*
+	 * The side effects of an empty state change two-phase commit are:
+	 *
+	 * * A local consistent disk can upgrade to up-to-date when no primary is reachable
+	 *   (or become outdated if the prepare packets reach a primary).
+	 *
+	 * * resource->members is updated
+	 *
+	 * * Faraway nodes might outdate themselves if they learn about the existence of a primary
+	 *   (with access to data) node.
+	 */
+	struct work_struct empty_twopc;
 };
 
-struct drbd_thread_timing_details
-{
-	unsigned long start_jif;
-	void *cb_addr;
-	const char *caller_fn;
-	unsigned int line;
-	unsigned int cb_nr;
+enum drbd_per_connection_ratelimit {
+	D_RL_C_NOLIMIT = -1,
+	D_RL_C_GENERIC,
 };
 
 struct drbd_connection {
@@ -623,36 +1088,49 @@ struct drbd_connection {
 	struct dentry *debugfs_conn;
 	struct dentry *debugfs_conn_callback_history;
 	struct dentry *debugfs_conn_oldest_requests;
+	struct dentry *debugfs_conn_transport;
+	struct dentry *debugfs_conn_debug;
+	struct dentry *debugfs_conn_receiver_pid;
+	struct dentry *debugfs_conn_sender_pid;
 #endif
 	struct kref kref;
 	struct idr peer_devices;	/* volume number to peer device mapping */
-	enum drbd_conns cstate;		/* Only C_STANDALONE to C_WF_REPORT_PARAMS */
-	struct mutex cstate_mutex;	/* Protects graceful disconnects */
-	unsigned int connect_cnt;	/* Inc each time a connection is established */
+	enum drbd_conn_state cstate[2];
+	enum drbd_role peer_role[2];
+	bool susp_fen[2];		/* IO suspended because fence peer handler runs */
+
+	struct ratelimit_state ratelimit[1];
 
 	unsigned long flags;
-	struct net_conf *net_conf;	/* content protected by rcu */
-	wait_queue_head_t ping_wait;	/* Woken upon reception of a ping, and a state change */
+	enum drbd_fencing_policy fencing_policy;
 
-	struct sockaddr_storage my_addr;
-	int my_addr_len;
-	struct sockaddr_storage peer_addr;
-	int peer_addr_len;
+	struct drbd_send_buffer send_buffer[2];
+	struct mutex mutex[2]; /* Protect assembling of new packet until sending it (in send_buffer) */
+	/* scratch buffers for use while "owning" the DATA_STREAM send_buffer,
+	 * to avoid larger on-stack temporary variables,
+	 * introduced for holding digests in drbd_send_dblock() */
+	union {
+		/* MAX_DIGEST_SIZE in the linux kernel at this point is 64 bytes, afaik */
+		struct {
+			char before[64];
+			char after[64];
+		} d;
+	} scratch_buffer;
 
-	struct drbd_socket data;	/* data/barrier/cstate/parameter packets */
-	struct drbd_socket meta;	/* ping/ack (metadata) packets */
 	int agreed_pro_version;		/* actually used protocol version */
 	u32 agreed_features;
-	unsigned long last_received;	/* in jiffies, either socket */
-	unsigned int ko_count;
+	atomic_t ap_in_flight; /* App sectors in flight (waiting for ack) */
+	atomic_t rs_in_flight; /* Resync sectors in flight */
 
-	struct list_head transfer_log;	/* all requests not yet fully processed */
+	struct drbd_work connect_timer_work;
+	struct timer_list connect_timer;
 
 	struct crypto_shash *cram_hmac_tfm;
-	struct crypto_shash *integrity_tfm;  /* checksums we compute, updates protected by connection->data->mutex */
+	struct crypto_shash *integrity_tfm;  /* checksums we compute, updates protected by connection->mutex[DATA_STREAM] */
 	struct crypto_shash *peer_integrity_tfm;  /* checksums we verify, only accessed from receiver thread  */
 	struct crypto_shash *csums_tfm;
 	struct crypto_shash *verify_tfm;
+
 	void *int_dig_in;
 	void *int_dig_vv;
 
@@ -660,35 +1138,137 @@ struct drbd_connection {
 	struct drbd_epoch *current_epoch;
 	spinlock_t epoch_lock;
 	unsigned int epochs;
-	atomic_t current_tle_nr;	/* transfer log epoch number */
-	unsigned current_tle_writes;	/* writes seen within this tl epoch */
 
 	unsigned long last_reconnect_jif;
 	/* empty member on older kernels without blk_start_plug() */
 	struct blk_plug receiver_plug;
 	struct drbd_thread receiver;
-	struct drbd_thread worker;
-	struct drbd_thread ack_receiver;
+	struct drbd_thread sender;
 	struct workqueue_struct *ack_sender;
+	struct work_struct peer_ack_work;
+
+	/* Work for sending P_OUT_OF_SYNC due to P_PEER_ACK */
+	struct drbd_work send_oos_work;
+	/*
+	 * These peers have sent us a P_PEER_ACK for which we need to send
+	 * P_OUT_OF_SYNC on this connection.
+	 */
+	unsigned long send_oos_from_mask;
+
+	atomic64_t last_dagtag_sector;
+	/* Record of last peer ack to determine whether we can ack flush */
+	u64 last_peer_ack_dagtag_seen;
+
+	/* Mask of nodes from which we are waiting for a flush ack corresponding to this Primary */
+	u64 pending_flush_mask;
+
+	/* Protects the flush members below for this connection */
+	spinlock_t primary_flush_lock;
+	/* For handling P_FLUSH_REQUESTS from this peer */
+	u64 flush_requests_dagtag;
+	u64 flush_sequence;
+	u64 flush_forward_sent_mask;
+
+	/* For handling forwarded flushes. On connection to initiator node. */
+	spinlock_t flush_ack_lock;
+	struct drbd_work flush_ack_work;
+	/* For forwarded flushes. On connection to initiator node. Indexed by primary node ID */
+	u64 flush_ack_sequence[DRBD_PEERS_MAX];
+
+	atomic_t active_ee_cnt; /* Peer write requests waiting for activity log or backing disk. */
+	atomic_t backing_ee_cnt; /* Other peer requests waiting for conflicts or backing disk. */
+	atomic_t done_ee_cnt;
+	spinlock_t peer_reqs_lock;
+	spinlock_t send_oos_lock; /* Protects send_oos list */
+
+	/* Lists using drbd_peer_request.recv_order (see also drbd_peer_device.resync_requests) */
+	struct list_head peer_requests; /* All peer writes in the order we received them */
+	struct list_head peer_reads; /* All reads in the order we received them */
+	/*
+	 * Peer writes for which we need to send some P_OUT_OF_SYNC. These peer
+	 * writes continue to be stored on the connection over which the writes
+	 * and the P_PEER_ACK are received. They are accessed by the sender for
+	 * each relevant peer. Protected by send_oos_lock on this connection.
+	 */
+	struct list_head send_oos;
+
+	/* Lists using drbd_peer_request.w.list */
+	struct list_head done_ee;   /* Need to send P_WRITE_ACK/P_RS_WRITE_ACK */
+	struct list_head dagtag_wait_ee; /* Resync read waiting for dagtag to be reached */
+
+	struct work_struct send_acks_work;
+	struct work_struct send_ping_ack_work;
+	struct work_struct send_ping_work;
+	wait_queue_head_t ee_wait;
+
+	atomic_t pp_in_use;		/* allocated from page pool */
+	atomic_t pp_in_use_by_net;	/* sendpage()d, still referenced by transport */
+	/* sender side */
+	struct drbd_work_queue sender_work;
+
+	struct drbd_work send_dagtag_work;
+	u64 send_dagtag;
+
+	struct sender_todo {
+		struct list_head work_list;
+
+		/* If upper layers trigger an unplug on this side, we want to
+		 * send and unplug hint over to the peer.  Sending it too
+		 * early, or missing it completely, causes a potential latency
+		 * penalty (requests idling too long in the remote queue).
+		 * There is no harm done if we occasionally send one too many
+		 * such unplug hints.
+		 *
+		 * We have two slots, which are used in an alternating fashion:
+		 * If a new unplug event happens while the current pending one
+		 * has not even been processed yet, we overwrite the next
+		 * pending slot: there is not much point in unplugging on the
+		 * remote side, if we have a full request queue to be send on
+		 * this side still, and not even reached the position in the
+		 * change stream when the previous local unplug happened.
+		 */
+		u64 unplug_dagtag_sector[2];
+		unsigned int unplug_slot; /* 0 or 1 */
+
+		/* the currently (or last) processed request,
+		 * see process_sender_todo() */
+		struct drbd_request *req;
+
+		/* Points to the next request on the resource->transfer_log,
+		 * which is RQ_NET_QUEUED for this connection, and so can
+		 * safely be used as next starting point for the list walk
+		 * in tl_next_request_for_connection().
+		 *
+		 * If it is NULL (we walked off the tail last time), it will be
+		 * set by __req_mod( QUEUE_FOR.* ), so fast connections don't
+		 * need to walk the full transfer_log list every time, even if
+		 * the list is kept long by some slow connections.
+		 *
+		 * req_next is only accessed by drbd_sender thread, in
+		 * case of a resend from some worker, but then regular IO
+		 * is suspended.
+		 */
+		struct drbd_request *req_next;
+	} todo;
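As the comment above describes, a new unplug event that arrives before the
pending one was processed simply overwrites the other slot. A sketch of the
producer side, assuming unplug_slot indexes the hint currently being
processed:

/* Sketch, not part of the driver: record an unplug hint at the current
 * position in the change stream; overwriting an unprocessed hint is fine. */
static void note_unplug(struct sender_todo *todo, u64 dagtag_sector)
{
	todo->unplug_dagtag_sector[todo->unplug_slot ^ 1] = dagtag_sector;
}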
 
 	/* cached pointers,
 	 * so we can look up the oldest pending requests more quickly.
-	 * protected by resource->req_lock */
-	struct drbd_request *req_next; /* DRBD 9: todo.req_next */
+	 * TODO: RCU */
 	struct drbd_request *req_ack_pending;
+	/* The oldest request that is or was queued for this peer, but is not
+	 * done towards it. */
 	struct drbd_request *req_not_net_done;
+	/* Protects the caching pointers from being advanced concurrently. */
+	spinlock_t advance_cache_ptr_lock;
 
-	/* sender side */
-	struct drbd_work_queue sender_work;
-
-#define DRBD_THREAD_DETAILS_HIST	16
-	unsigned int w_cb_nr; /* keeps counting up */
+	unsigned int s_cb_nr; /* keeps counting up */
 	unsigned int r_cb_nr; /* keeps counting up */
-	struct drbd_thread_timing_details w_timing_details[DRBD_THREAD_DETAILS_HIST];
+	struct drbd_thread_timing_details s_timing_details[DRBD_THREAD_DETAILS_HIST];
 	struct drbd_thread_timing_details r_timing_details[DRBD_THREAD_DETAILS_HIST];
 
 	struct {
 		unsigned long last_sent_barrier_jif;
+		int last_sent_epoch_nr;
 
 		/* whether this sender thread
 		 * has processed a single write yet. */
@@ -701,52 +1281,245 @@ struct drbd_connection {
 		 * with req->epoch == current_epoch_nr.
 		 * If none, no P_BARRIER will be sent. */
 		unsigned current_epoch_writes;
-	} send;
-};
 
-static inline bool has_net_conf(struct drbd_connection *connection)
-{
-	bool has_net_conf;
+		/* Position in change stream of last write sent. */
+		u64 current_dagtag_sector;
 
-	rcu_read_lock();
-	has_net_conf = rcu_dereference(connection->net_conf);
-	rcu_read_unlock();
+		/* Position in change stream of last ready request seen. */
+		u64 seen_dagtag_sector;
+	} send;
 
-	return has_net_conf;
-}
+	struct {
+		u64 dagtag_sector;
+		int lost_node_id;
+	} after_reconciliation;
 
-void __update_timing_details(
-		struct drbd_thread_timing_details *tdp,
-		unsigned int *cb_nr,
-		void *cb,
-		const char *fn, const unsigned int line);
+	unsigned int peer_node_id;
 
-#define update_worker_timing_details(c, cb) \
-	__update_timing_details(c->w_timing_details, &c->w_cb_nr, cb, __func__ , __LINE__ )
-#define update_receiver_timing_details(c, cb) \
-	__update_timing_details(c->r_timing_details, &c->r_cb_nr, cb, __func__ , __LINE__ )
+	struct drbd_mutable_buffer reassemble_buffer;
+	union {
+		u8 bytes[8];
+		struct p_block_ack block_ack;
+		struct p_barrier_ack barrier_ack;
+		struct p_confirm_stable confirm_stable;
+		struct p_peer_ack peer_ack;
+		struct p_peer_block_desc peer_block_desc;
+		struct p_twopc_reply twopc_reply;
+	} reassemble_buffer_bytes;
+
+	/* Used when a network namespace is removed to track all connections
+	 * that need disconnecting. */
+	struct list_head remove_net_list;
+
+	struct rcu_head rcu;
+
+	unsigned int ctl_packets;
+	unsigned int ctl_bytes;
+
+	struct drbd_transport transport; /* The transport needs to be the last member. The actual
+					    implementation might have more members than the
+					    abstract one. */
+};
 
-struct submit_worker {
-	struct workqueue_struct *wq;
-	struct work_struct worker;
+/* used to get the next lower or next higher peer_device depending on device node-id */
+enum drbd_neighbor {
+	NEXT_LOWER,
+	NEXT_HIGHER
+};
 
-	/* protected by ..->resource->req_lock */
-	struct list_head writes;
+enum drbd_per_peer_device_ratelimit {
+	D_RL_PD_NOLIMIT = -1,
+	D_RL_PD_GENERIC,
 };
 
 struct drbd_peer_device {
 	struct list_head peer_devices;
 	struct drbd_device *device;
 	struct drbd_connection *connection;
-	struct work_struct send_acks_work;
+	struct peer_device_conf __rcu *conf; /* RCU, for updates: resource->conf_update */
+	enum drbd_disk_state disk_state[2];
+	enum drbd_repl_state repl_state[2];
+	bool resync_susp_user[2];
+	bool resync_susp_peer[2];
+	bool resync_susp_dependency[2];
+	bool resync_susp_other_c[2];
+	bool resync_active[2];
+	bool replication[2]; /* Only while peer is Inconsistent: Is replication enabled? */
+	bool peer_replication[2]; /* Whether we have instructed peer to replicate to us */
+	enum drbd_repl_state negotiation_result; /* To find disk state after attach */
+	unsigned int send_cnt;
+	unsigned int recv_cnt;
+	atomic_t packet_seq;
+	unsigned int peer_seq;
+	spinlock_t peer_seq_lock;
+	uint64_t d_size;  /* size of disk */
+	uint64_t u_size;  /* user requested size */
+	uint64_t c_size;  /* current exported size */
+	uint64_t max_size;
+	int bitmap_index;
+	int node_id;
+
+	struct ratelimit_state ratelimit[1];
+
+	unsigned long flags;
+
+	enum drbd_repl_state start_resync_side;
+	enum drbd_repl_state last_repl_state; /* What we received from the peer */
+	struct timer_list start_resync_timer;
+	struct drbd_work resync_work;
+	struct timer_list resync_timer;
+	struct drbd_work propagate_uuids_work;
+
+	enum drbd_disk_state resync_finished_pdsk; /* Finished while starting resync */
+	int resync_again; /* decided to resync again while resync running */
+	sector_t last_in_sync_end; /* sector after end of last completed resync request */
+	unsigned long resync_next_bit; /* bitmap bit to search from for next resync request */
+	unsigned long last_resync_pass_bits; /* bitmap weight at end of previous pass */
+
+	atomic_t ap_pending_cnt; /* AP data packets on the wire, ack expected (RQ_NET_PENDING set) */
+	atomic_t unacked_cnt;	 /* Need to send replies for */
+	atomic_t rs_pending_cnt; /* RS request/data packets on the wire */
+
+	/* Protected by connection->peer_reqs_lock */
+	struct list_head resync_requests; /* Resync requests in the order we sent them */
+	/*
+	 * If not NULL, all requests in resync_requests until this one have
+	 * been received. Discards are only counted as "received" once merging
+	 * is complete.
+	 */
+	struct drbd_peer_request *received_last;
+	/*
+	 * If not NULL, all requests in resync_requests after received_last
+	 * until this one are discards.
+	 */
+	struct drbd_peer_request *discard_last;
+
+	/* use checksums for *this* resync */
+	bool use_csums;
+	/* blocks to resync in this run [unit BM_BLOCK_SIZE] */
+	unsigned long rs_total;
+	/* number of resync blocks that failed in this run */
+	unsigned long rs_failed;
+	/* Syncer's start time [unit jiffies] */
+	unsigned long rs_start;
+	/* cumulated time in PausedSyncX state [unit jiffies] */
+	unsigned long rs_paused;
+	/* skipped because csum was equal [unit BM_BLOCK_SIZE] */
+	unsigned long rs_same_csum;
+	unsigned long rs_last_progress_report_ts;
+#define DRBD_SYNC_MARKS 8
+#define DRBD_SYNC_MARK_STEP (3*HZ)
+	/* block not up-to-date at mark [unit BM_BLOCK_SIZE] */
+	unsigned long rs_mark_left[DRBD_SYNC_MARKS];
+	/* marks' time [unit jiffies] */
+	unsigned long rs_mark_time[DRBD_SYNC_MARKS];
+	/* current index into rs_mark_{left,time} */
+	int rs_last_mark;
+	unsigned long rs_last_writeout;
+
+	/* where does the admin want us to start? (sector) */
+	sector_t ov_start_sector;
+	sector_t ov_stop_sector;
+	/* where are we now? (sector) */
+	sector_t ov_position;
+	/* Start sector of out of sync range (to merge printk reporting). */
+	sector_t ov_last_oos_start;
+	/* size of out-of-sync range in sectors. */
+	sector_t ov_last_oos_size;
+	/* Start sector of skipped range (to merge printk reporting). */
+	sector_t ov_last_skipped_start;
+	/* size of skipped range in sectors. */
+	sector_t ov_last_skipped_size;
+	int c_sync_rate; /* current resync rate after syncer throttle magic */
+	struct fifo_buffer __rcu *rs_plan_s; /* correction values of resync planner (RCU, connection->conn_update) */
+	atomic_t rs_sect_in; /* for incoming resync data rate, SyncTarget */
+	int rs_last_events;  /* counter of read or write "events" (unit sectors)
+			      * on the lower level device when we last looked. */
+	int rs_in_flight; /* resync sectors in flight (to proxy, in proxy and from proxy) */
+	ktime_t rs_last_mk_req_kt;
+	atomic64_t ov_left; /* in bits */
+	unsigned long ov_skipped; /* in bits */
+	u64 rs_start_uuid;
+
+	u64 current_uuid;
+	u64 bitmap_uuids[DRBD_PEERS_MAX];
+	u64 history_uuids[HISTORY_UUIDS];
+	u64 dirty_bits;
+	u64 uuid_flags;
+	u64 uuid_node_mask; /* might be authoritative_nodes or weak_nodes */
+
+	unsigned long comm_bm_set; /* communicated number of set bits. */
+	u64 comm_current_uuid; /* communicated current UUID */
+	u64 comm_uuid_flags; /* communicated UUID flags */
+	u64 comm_bitmap_uuid;
+	union drbd_state comm_state;
+
 #ifdef CONFIG_DEBUG_FS
 	struct dentry *debugfs_peer_dev;
+	struct dentry *debugfs_peer_dev_proc_drbd;
 #endif
+	ktime_t pre_send_kt;
+	ktime_t acked_kt;
+	ktime_t net_done_kt;
+
+	struct {/* sender todo per peer_device */
+		bool was_sending_out_of_sync;
+	} todo;
+	union drbd_state connect_state;
+	struct {
+		unsigned int	physical_block_size;
+		unsigned int	logical_block_size;
+		unsigned int	alignment_offset;
+		unsigned int	io_min;
+		unsigned int	io_opt;
+		unsigned int	max_bio_size;
+	} q_limits;
+	/* communicated as part of o_qlim, if agreed on DRBD_FF_BM_BLOCK_SHIFT */
+	unsigned int bm_block_shift;
+};
+
+struct conflict_worker {
+	struct workqueue_struct *wq;
+	struct work_struct worker;
+
+	spinlock_t lock;
+	struct list_head resync_writes;
+	struct list_head resync_reads;
+	struct list_head writes;
+	struct list_head peer_writes;
+};
+
+struct submit_worker {
+	struct workqueue_struct *wq;
+	struct work_struct worker;
+
+	spinlock_t lock;
+	struct list_head writes;
+	struct list_head peer_writes;
+};
+
+struct opener {
+	struct list_head list;
+	char comm[TASK_COMM_LEN];
+	pid_t pid;
+	ktime_t opened;
+};
+
+enum drbd_per_device_ratelimit {
+	D_RL_D_NOLIMIT = -1,
+	D_RL_D_GENERIC,
+	D_RL_D_METADATA,
+	D_RL_D_BACKEND,
+	__D_RL_D_N
 };
 
 struct drbd_device {
 	struct drbd_resource *resource;
+
+	/* RCU list. Updates protected by adm_mutex, conf_update and state_rwlock. */
 	struct list_head peer_devices;
+
+	spinlock_t pending_bmio_lock;
 	struct list_head pending_bitmap_io;
 
 	unsigned long flush_jif;
@@ -755,12 +1528,22 @@ struct drbd_device {
 	struct dentry *debugfs_vol;
 	struct dentry *debugfs_vol_oldest_requests;
 	struct dentry *debugfs_vol_act_log_extents;
-	struct dentry *debugfs_vol_resync_extents;
+	struct dentry *debugfs_vol_act_log_histogram;
 	struct dentry *debugfs_vol_data_gen_id;
+	struct dentry *debugfs_vol_io_frozen;
 	struct dentry *debugfs_vol_ed_gen_id;
+	struct dentry *debugfs_vol_openers;
+	struct dentry *debugfs_vol_md_io;
+	struct dentry *debugfs_vol_interval_tree;
+	struct dentry *debugfs_vol_al_updates;
+	struct dentry *debugfs_vol_multi_bio_cnt;
+#ifdef CONFIG_DRBD_TIMING_STATS
+	struct dentry *debugfs_vol_req_timing;
+#endif
 #endif
+	struct ratelimit_state ratelimit[__D_RL_D_N];
 
-	unsigned int vnr;	/* volume number within the connection */
+	unsigned int vnr;	/* volume number within the resource */
 	unsigned int minor;	/* device minor number */
 
 	struct kref kref;
@@ -769,148 +1552,126 @@ struct drbd_device {
 	unsigned long flags;
 
 	/* configured by drbdsetup */
-	struct drbd_backing_dev *ldev;
+	struct drbd_backing_dev *ldev; /* enclose accessing code in get_ldev() / put_ldev() */
+
+	/* Used to close backing devices and destroy related structures. */
+	struct work_struct ldev_destroy_work;
 
-	sector_t p_size;     /* partner's disk size */
 	struct request_queue *rq_queue;
 	struct gendisk	    *vdisk;
 
 	unsigned long last_reattach_jif;
-	struct drbd_work resync_work;
-	struct drbd_work unplug_work;
-	struct timer_list resync_timer;
 	struct timer_list md_sync_timer;
-	struct timer_list start_resync_timer;
 	struct timer_list request_timer;
 
-	/* Used after attach while negotiating new disk state. */
-	union drbd_state new_state_tmp;
-
-	union drbd_dev_state state;
+	enum drbd_disk_state disk_state[2];
 	wait_queue_head_t misc_wait;
-	wait_queue_head_t state_wait;  /* upon each state change. */
-	unsigned int send_cnt;
-	unsigned int recv_cnt;
 	unsigned int read_cnt;
 	unsigned int writ_cnt;
 	unsigned int al_writ_cnt;
 	unsigned int bm_writ_cnt;
-	atomic_t ap_bio_cnt;	 /* Requests we need to complete */
-	atomic_t ap_actlog_cnt;  /* Requests waiting for activity log */
-	atomic_t ap_pending_cnt; /* AP data packets on the wire, ack expected */
-	atomic_t rs_pending_cnt; /* RS request/data packets on the wire */
-	atomic_t unacked_cnt;	 /* Need to send replies for */
+	unsigned int multi_bio_cnt; /* peer_reqs that needed multiple bios */
+	atomic_t ap_bio_cnt[2];	 /* Requests we need to complete. [READ] and [WRITE] */
 	atomic_t local_cnt;	 /* Waiting for local completion */
-	atomic_t suspend_cnt;
+	atomic_t ap_actlog_cnt;  /* Requests waiting for activity log */
+	atomic_t wait_for_actlog; /* Peer requests waiting for activity log */
+	/* worst case extent count needed to satisfy both requests and peer requests
+	 * currently waiting for the activity log */
+	atomic_t wait_for_actlog_ecnt;
+
+	atomic_t suspend_cnt;	/* recursive suspend counter; while non-zero, IO is blocked */
 
-	/* Interval tree of pending local requests */
-	struct rb_root read_requests;
-	struct rb_root write_requests;
+	/* Interval trees of pending requests */
+	spinlock_t interval_lock;
+	struct rb_root read_requests; /* Local reads */
+	struct rb_root requests; /* Local and peer writes, resync operations etc. */
 
 	/* for statistics and timeouts */
 	/* [0] read, [1] write */
+	spinlock_t pending_completion_lock;
 	struct list_head pending_master_completion[2];
 	struct list_head pending_completion[2];
 
-	/* use checksums for *this* resync */
-	bool use_csums;
-	/* blocks to resync in this run [unit BM_BLOCK_SIZE] */
-	unsigned long rs_total;
-	/* number of resync blocks that failed in this run */
-	unsigned long rs_failed;
-	/* Syncer's start time [unit jiffies] */
-	unsigned long rs_start;
-	/* cumulated time in PausedSyncX state [unit jiffies] */
-	unsigned long rs_paused;
-	/* skipped because csum was equal [unit BM_BLOCK_SIZE] */
-	unsigned long rs_same_csum;
-#define DRBD_SYNC_MARKS 8
-#define DRBD_SYNC_MARK_STEP (3*HZ)
-	/* block not up-to-date at mark [unit BM_BLOCK_SIZE] */
-	unsigned long rs_mark_left[DRBD_SYNC_MARKS];
-	/* marks's time [unit jiffies] */
-	unsigned long rs_mark_time[DRBD_SYNC_MARKS];
-	/* current index into rs_mark_{left,time} */
-	int rs_last_mark;
-	unsigned long rs_last_bcast; /* [unit jiffies] */
-
-	/* where does the admin want us to start? (sector) */
-	sector_t ov_start_sector;
-	sector_t ov_stop_sector;
-	/* where are we now? (sector) */
-	sector_t ov_position;
-	/* Start sector of out of sync range (to merge printk reporting). */
-	sector_t ov_last_oos_start;
-	/* size of out-of-sync range in sectors. */
-	sector_t ov_last_oos_size;
-	unsigned long ov_left; /* in bits */
-
-	struct drbd_bitmap *bitmap;
-	unsigned long bm_resync_fo; /* bit offset for drbd_bm_find_next */
-
-	/* Used to track operations of resync... */
-	struct lru_cache *resync;
-	/* Number of locked elements in resync LRU */
-	unsigned int resync_locked;
-	/* resync extent number waiting for application requests */
-	unsigned int resync_wenr;
+	struct drbd_bitmap *bitmap; /* enclose accessing code in get_ldev() / put_ldev() */
+	/* We may want to report on resync progress
+	 * even after we detached again (bitmap == NULL).
+	 * Cache the last bitmap block size here.
+	 */
+	unsigned int last_bm_block_shift;
 
 	int open_cnt;
-	u64 *p_uuid;
+	bool writable;
+	/* FIXME clean comments, restructure so it is more obvious which
+	 * members are protected by what */
 
-	struct list_head active_ee; /* IO in progress (P_DATA gets written to disk) */
-	struct list_head sync_ee;   /* IO in progress (P_RS_DATA_REPLY gets written to disk) */
-	struct list_head done_ee;   /* need to send P_WRITE_ACK */
-	struct list_head read_ee;   /* [RS]P_DATA_REQUEST being read */
-
-	struct list_head resync_reads;
-	atomic_t pp_in_use;		/* allocated from page pool */
-	atomic_t pp_in_use_by_net;	/* sendpage()d, still referenced by tcp */
-	wait_queue_head_t ee_wait;
 	struct drbd_md_io md_io;
 	spinlock_t al_lock;
 	wait_queue_head_t al_wait;
 	struct lru_cache *act_log;	/* activity log */
+	unsigned al_histogram[AL_UPDATES_PER_TRANSACTION+1];
 	unsigned int al_tr_number;
 	int al_tr_cycle;
 	wait_queue_head_t seq_wait;
-	atomic_t packet_seq;
-	unsigned int peer_seq;
-	spinlock_t peer_seq_lock;
-	unsigned long comm_bm_set; /* communicated number of set bits. */
-	struct bm_io_work bm_io_work;
-	u64 ed_uuid; /* UUID of the exposed data */
-	struct mutex own_state_mutex;
-	struct mutex *state_mutex; /* either own_state_mutex or first_peer_device(device)->connection->cstate_mutex */
-	char congestion_reason;  /* Why we where congested... */
-	atomic_t rs_sect_in; /* for incoming resync data rate, SyncTarget */
+	u64 exposed_data_uuid; /* UUID of the exposed data */
+	u64 next_exposed_data_uuid;
+	struct rw_semaphore uuid_sem;
 	atomic_t rs_sect_ev; /* for submitted resync data rate, both */
-	int rs_last_sect_ev; /* counter to compare with */
-	int rs_last_events;  /* counter of read or write "events" (unit sectors)
-			      * on the lower level device when we last looked. */
-	int c_sync_rate; /* current resync rate after syncer throttle magic */
-	struct fifo_buffer *rs_plan_s; /* correction values of resync planer (RCU, connection->conn_update) */
-	int rs_in_flight; /* resync sectors in flight (to proxy, in proxy and from proxy) */
-	atomic_t ap_in_flight; /* App sectors in flight (waiting for ack) */
-	unsigned int peer_max_bio_size;
-	unsigned int local_max_bio_size;
-
-	/* any requests that would block in drbd_make_request()
-	 * are deferred to this single-threaded work queue */
+	struct pending_bitmap_work_s {
+		atomic_t n;		/* inc when queued here, */
+		spinlock_t q_lock;	/* dec only once finished. */
+		struct list_head q;	/* n > 0 even if q already empty */
+	} pending_bitmap_work;
+	struct device_conf device_conf;
+
+	/* any requests that were blocked due to conflicts with other requests
+	 * or resync are submitted on this ordered work queue */
+	struct conflict_worker submit_conflict;
+	/* any requests that would block due to the activity log
+	 * are deferred to this ordered work queue */
 	struct submit_worker submit;
+	u64 read_nodes; /* used for balancing read requests among peers */
+	bool have_quorum[2];	/* no quorum -> suspend IO or error IO */
+	bool cached_state_unstable; /* updated on each state change */
+	bool cached_err_io; /* complete all IOs with error */
+
+#ifdef CONFIG_DRBD_TIMING_STATS
+	spinlock_t timing_lock;
+	unsigned long reqs;
+	ktime_t in_actlog_kt;
+	ktime_t pre_submit_kt; /* sum over all reqs */
+
+	ktime_t before_queue_kt; /* sum over all al_misses */
+	ktime_t before_al_begin_io_kt;
+
+	ktime_t al_before_bm_write_hinted_kt; /* sum over all al_writ_cnt */
+	ktime_t al_mid_kt;
+	ktime_t al_after_sync_page_kt;
+#endif
+	struct list_head openers;
+	spinlock_t openers_lock;
+	spinlock_t peer_req_bio_completion_lock;
+
+	struct rcu_head rcu;
+	struct work_struct finalize_work;
 };
 
 struct drbd_bm_aio_ctx {
 	struct drbd_device *device;
-	struct list_head list; /* on device->pending_bitmap_io */;
+	struct list_head list; /* on device->pending_bitmap_io */
 	unsigned long start_jif;
+	struct blk_plug bm_aio_plug;
 	atomic_t in_flight;
 	unsigned int done;
 	unsigned flags;
 #define BM_AIO_COPY_PAGES	1
 #define BM_AIO_WRITE_HINTED	2
 #define BM_AIO_WRITE_ALL_PAGES	4
 #define BM_AIO_READ		8
+#define BM_AIO_WRITE_LAZY	16
+	/* only report stats for global read, write, write all */
+#define BM_AIO_NO_STATS (BM_AIO_COPY_PAGES \
+			 | BM_AIO_WRITE_HINTED \
+			 | BM_AIO_WRITE_LAZY)
 	int error;
 	struct kref kref;
 };
@@ -921,12 +1682,14 @@ struct drbd_config_context {
 	/* assigned from request attributes, if present */
 	unsigned int volume;
 #define VOLUME_UNSPECIFIED		(-1U)
+	unsigned int peer_node_id;
+#define PEER_NODE_ID_UNSPECIFIED	(-1U)
 	/* pointer into the request skb,
 	 * limited lifetime! */
 	char *resource_name;
-	struct nlattr *my_addr;
-	struct nlattr *peer_addr;
 
+	/* network namespace of the sending socket */
+	struct net *net;
 	/* reply buffer */
 	struct sk_buff *reply_skb;
 	/* pointer into reply buffer */
@@ -935,6 +1698,7 @@ struct drbd_config_context {
 	struct drbd_device *device;
 	struct drbd_resource *resource;
 	struct drbd_connection *connection;
+	struct drbd_peer_device *peer_device;
 };
 
 static inline struct drbd_device *minor_to_device(unsigned int minor)
@@ -942,10 +1706,6 @@ static inline struct drbd_device *minor_to_device(unsigned int minor)
 	return (struct drbd_device *)idr_find(&drbd_devices, minor);
 }
 
-static inline struct drbd_peer_device *first_peer_device(struct drbd_device *device)
-{
-	return list_first_entry_or_null(&device->peer_devices, struct drbd_peer_device, peer_devices);
-}
 
 static inline struct drbd_peer_device *
 conn_peer_device(struct drbd_connection *connection, int volume_number)
@@ -959,18 +1719,19 @@ conn_peer_device(struct drbd_connection *connection, int volume_number)
 #define for_each_resource_rcu(resource, _resources) \
 	list_for_each_entry_rcu(resource, _resources, resources)
 
-#define for_each_resource_safe(resource, tmp, _resources) \
-	list_for_each_entry_safe(resource, tmp, _resources, resources)
-
+/* see drbd_resource.connections for locking requirements */
 #define for_each_connection(connection, resource) \
 	list_for_each_entry(connection, &resource->connections, connections)
 
 #define for_each_connection_rcu(connection, resource) \
 	list_for_each_entry_rcu(connection, &resource->connections, connections)
 
-#define for_each_connection_safe(connection, tmp, resource) \
-	list_for_each_entry_safe(connection, tmp, &resource->connections, connections)
+#define for_each_connection_ref(connection, m, resource)		\
+	for (connection = __drbd_next_connection_ref(&m, NULL, resource); \
+	     connection;						\
+	     connection = __drbd_next_connection_ref(&m, connection, resource))
 
+/* see drbd_device.peer_devices for locking requirements */
 #define for_each_peer_device(peer_device, device) \
 	list_for_each_entry(peer_device, &device->peer_devices, peer_devices)
 
@@ -980,10 +1741,10 @@ conn_peer_device(struct drbd_connection *connection, int volume_number)
 #define for_each_peer_device_safe(peer_device, tmp, device) \
 	list_for_each_entry_safe(peer_device, tmp, &device->peer_devices, peer_devices)
 
-static inline unsigned int device_to_minor(struct drbd_device *device)
-{
-	return device->minor;
-}
+#define for_each_peer_device_ref(peer_device, m, device)		\
+	for (peer_device = __drbd_next_peer_device_ref(&m, NULL, device); \
+	     peer_device;						\
+	     peer_device = __drbd_next_peer_device_ref(&m, peer_device, device))
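+
+/*
+ * Illustrative use of the _ref iterators (a sketch, not part of this
+ * patch): the u64 cookie records which peers have been visited, and the
+ * helper presumably holds a reference on the object it returns, so the
+ * caller may sleep inside the loop:
+ *
+ *	u64 im;
+ *	struct drbd_connection *connection;
+ *
+ *	for_each_connection_ref(connection, im, resource)
+ *		drbd_ping_peer(connection);
+ */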
 
 /*
  * function declarations
@@ -992,97 +1753,163 @@ static inline unsigned int device_to_minor(struct drbd_device *device)
 /* drbd_main.c */
 
 enum dds_flags {
-	DDSF_FORCED    = 1,
+	/* This enum is part of the wire protocol!
+	 * See P_SIZES, struct p_sizes; */
+	DDSF_ASSUME_UNCONNECTED_PEER_HAS_SPACE    = 1,
 	DDSF_NO_RESYNC = 2, /* Do not run a resync for the new space */
+	DDSF_IGNORE_PEER_CONSTRAINTS = 4, /* no longer used */
+	DDSF_2PC = 8, /* local only, not on the wire */
 };
+
+struct meta_data_on_disk_9;
 
-extern void drbd_init_set_defaults(struct drbd_device *device);
-extern int  drbd_thread_start(struct drbd_thread *thi);
-extern void _drbd_thread_stop(struct drbd_thread *thi, int restart, int wait);
+int drbd_thread_start(struct drbd_thread *thi);
+void _drbd_thread_stop(struct drbd_thread *thi, int restart, int wait);
 #ifdef CONFIG_SMP
-extern void drbd_thread_current_set_cpu(struct drbd_thread *thi);
+void drbd_thread_current_set_cpu(struct drbd_thread *thi);
 #else
 #define drbd_thread_current_set_cpu(A) ({})
 #endif
-extern void tl_release(struct drbd_connection *, unsigned int barrier_nr,
-		       unsigned int set_size);
-extern void tl_clear(struct drbd_connection *);
-extern void drbd_free_sock(struct drbd_connection *connection);
-extern int drbd_send(struct drbd_connection *connection, struct socket *sock,
-		     void *buf, size_t size, unsigned msg_flags);
-extern int drbd_send_all(struct drbd_connection *, struct socket *, void *, size_t,
-			 unsigned);
-
-extern int __drbd_send_protocol(struct drbd_connection *connection, enum drbd_packet cmd);
-extern int drbd_send_protocol(struct drbd_connection *connection);
-extern int drbd_send_uuids(struct drbd_peer_device *);
-extern int drbd_send_uuids_skip_initial_sync(struct drbd_peer_device *);
-extern void drbd_gen_and_send_sync_uuid(struct drbd_peer_device *);
-extern int drbd_send_sizes(struct drbd_peer_device *, int trigger_reply, enum dds_flags flags);
-extern int drbd_send_state(struct drbd_peer_device *, union drbd_state s);
-extern int drbd_send_current_state(struct drbd_peer_device *);
-extern int drbd_send_sync_param(struct drbd_peer_device *);
-extern void drbd_send_b_ack(struct drbd_connection *connection, u32 barrier_nr,
-			    u32 set_size);
-extern int drbd_send_ack(struct drbd_peer_device *, enum drbd_packet,
-			 struct drbd_peer_request *);
-extern void drbd_send_ack_rp(struct drbd_peer_device *, enum drbd_packet,
-			     struct p_block_req *rp);
-extern void drbd_send_ack_dp(struct drbd_peer_device *, enum drbd_packet,
-			     struct p_data *dp, int data_size);
-extern int drbd_send_ack_ex(struct drbd_peer_device *, enum drbd_packet,
-			    sector_t sector, int blksize, u64 block_id);
-extern int drbd_send_out_of_sync(struct drbd_peer_device *, struct drbd_request *);
-extern int drbd_send_block(struct drbd_peer_device *, enum drbd_packet,
-			   struct drbd_peer_request *);
-extern int drbd_send_dblock(struct drbd_peer_device *, struct drbd_request *req);
-extern int drbd_send_drequest(struct drbd_peer_device *, int cmd,
-			      sector_t sector, int size, u64 block_id);
-extern int drbd_send_drequest_csum(struct drbd_peer_device *, sector_t sector,
-				   int size, void *digest, int digest_size,
-				   enum drbd_packet cmd);
-extern int drbd_send_ov_request(struct drbd_peer_device *, sector_t sector, int size);
-
-extern int drbd_send_bitmap(struct drbd_device *device, struct drbd_peer_device *peer_device);
-extern void drbd_send_sr_reply(struct drbd_peer_device *, enum drbd_state_rv retcode);
-extern void conn_send_sr_reply(struct drbd_connection *connection, enum drbd_state_rv retcode);
-extern int drbd_send_rs_deallocated(struct drbd_peer_device *, struct drbd_peer_request *);
-extern void drbd_backing_dev_free(struct drbd_device *device, struct drbd_backing_dev *ldev);
-extern void drbd_device_cleanup(struct drbd_device *device);
-extern void drbd_print_uuids(struct drbd_device *device, const char *text);
-extern void drbd_queue_unplug(struct drbd_device *device);
-
-extern void conn_md_sync(struct drbd_connection *connection);
-extern void drbd_md_write(struct drbd_device *device, void *buffer);
-extern void drbd_md_sync(struct drbd_device *device);
-extern int  drbd_md_read(struct drbd_device *device, struct drbd_backing_dev *bdev);
-extern void drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(local);
-extern void _drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(local);
-extern void drbd_uuid_new_current(struct drbd_device *device) __must_hold(local);
-extern void drbd_uuid_set_bm(struct drbd_device *device, u64 val) __must_hold(local);
-extern void drbd_uuid_move_history(struct drbd_device *device) __must_hold(local);
-extern void __drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(local);
-extern void drbd_md_set_flag(struct drbd_device *device, int flags) __must_hold(local);
-extern void drbd_md_clear_flag(struct drbd_device *device, int flags)__must_hold(local);
-extern int drbd_md_test_flag(struct drbd_backing_dev *, int);
-extern void drbd_md_mark_dirty(struct drbd_device *device);
-extern void drbd_queue_bitmap_io(struct drbd_device *device,
-				 int (*io_fn)(struct drbd_device *, struct drbd_peer_device *),
-				 void (*done)(struct drbd_device *, int),
-				 char *why, enum bm_flag flags,
-				 struct drbd_peer_device *peer_device);
-extern int drbd_bitmap_io(struct drbd_device *device,
-		int (*io_fn)(struct drbd_device *, struct drbd_peer_device *),
-		char *why, enum bm_flag flags,
-		struct drbd_peer_device *peer_device);
-extern int drbd_bitmap_io_from_worker(struct drbd_device *device,
-		int (*io_fn)(struct drbd_device *, struct drbd_peer_device *),
-		char *why, enum bm_flag flags,
-		struct drbd_peer_device *peer_device);
-extern int drbd_bmio_set_n_write(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local);
-extern int drbd_bmio_clear_n_write(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local);
+int tl_release(struct drbd_connection *connection, uint64_t o_block_id,
+	       uint64_t y_block_id, unsigned int barrier_nr,
+	       unsigned int set_size);
+
+int __drbd_send_protocol(struct drbd_connection *connection,
+			 enum drbd_packet cmd);
+u64 drbd_collect_local_uuid_flags(struct drbd_peer_device *peer_device,
+				  u64 *authoritative_mask);
+u64 drbd_resolved_uuid(struct drbd_peer_device *peer_device_base,
+		       u64 *uuid_flags);
+int drbd_send_uuids(struct drbd_peer_device *peer_device, u64 uuid_flags,
+		    u64 node_mask);
+void drbd_gen_and_send_sync_uuid(struct drbd_peer_device *peer_device);
+int drbd_send_sizes(struct drbd_peer_device *peer_device,
+		    uint64_t u_size_diskless, enum dds_flags flags);
+int conn_send_state(struct drbd_connection *connection,
+		    union drbd_state state);
+int drbd_send_state(struct drbd_peer_device *peer_device,
+		    union drbd_state state);
+int drbd_send_current_state(struct drbd_peer_device *peer_device);
+int drbd_send_sync_param(struct drbd_peer_device *peer_device);
+int drbd_send_out_of_sync(struct drbd_peer_device *peer_device,
+			  sector_t sector, unsigned int size);
+int drbd_send_block(struct drbd_peer_device *peer_device,
+		    enum drbd_packet cmd, struct drbd_peer_request *peer_req);
+int drbd_send_dblock(struct drbd_peer_device *peer_device,
+		     struct drbd_request *req);
+int drbd_send_drequest(struct drbd_peer_device *peer_device, sector_t sector,
+		       int size, u64 block_id);
+int drbd_send_rs_request(struct drbd_peer_device *peer_device,
+			 enum drbd_packet cmd, sector_t sector, int size,
+			 u64 block_id, unsigned int dagtag_node_id,
+			 u64 dagtag);
+void *drbd_prepare_drequest_csum(struct drbd_peer_request *peer_req,
+				 enum drbd_packet cmd, int digest_size,
+				 unsigned int dagtag_node_id, u64 dagtag);
+
+int drbd_send_bitmap(struct drbd_device *device,
+		     struct drbd_peer_device *peer_device);
+int drbd_send_dagtag(struct drbd_connection *connection, u64 dagtag);
+void drbd_send_sr_reply(struct drbd_connection *connection, int vnr,
+			enum drbd_state_rv retcode);
+int drbd_send_rs_deallocated(struct drbd_peer_device *peer_device,
+			     struct drbd_peer_request *peer_req);
+void drbd_send_twopc_reply(struct drbd_connection *connection,
+			   enum drbd_packet cmd, struct twopc_reply *reply);
+void drbd_send_peers_in_sync(struct drbd_peer_device *peer_device, u64 mask,
+			     sector_t sector, int size);
+int drbd_send_peer_dagtag(struct drbd_connection *connection,
+			  struct drbd_connection *lost_peer);
+int drbd_send_flush_requests(struct drbd_connection *connection,
+			     u64 flush_sequence);
+int drbd_send_flush_forward(struct drbd_connection *connection,
+			    u64 flush_sequence, int initiator_node_id);
+int drbd_send_flush_requests_ack(struct drbd_connection *connection,
+				 u64 flush_sequence, int primary_node_id);
+int drbd_send_enable_replication_next(struct drbd_peer_device *peer_device);
+int drbd_send_enable_replication(struct drbd_peer_device *peer_device, bool enable);
+int drbd_send_current_uuid(struct drbd_peer_device *peer_device,
+			   u64 current_uuid, u64 weak_nodes);
+void drbd_backing_dev_free(struct drbd_device *device,
+			   struct drbd_backing_dev *ldev);
+void drbd_print_uuids(struct drbd_peer_device *peer_device, const char *text);
+void drbd_queue_unplug(struct drbd_device *device);
+
+u64 drbd_capacity_to_on_disk_bm_sect(u64 capacity_sect, const struct drbd_md *md);
+void drbd_md_set_sector_offsets(struct drbd_backing_dev *bdev);
+int drbd_md_write(struct drbd_device *device,
+		  struct meta_data_on_disk_9 *buffer);
+int drbd_md_sync(struct drbd_device *device);
+int drbd_md_sync_if_dirty(struct drbd_device *device);
+void drbd_uuid_received_new_current(struct drbd_peer_device *from_pd, u64 val,
+				    u64 weak_nodes);
+void drbd_uuid_set_bitmap(struct drbd_peer_device *peer_device, u64 uuid);
+void _drbd_uuid_set_bitmap(struct drbd_peer_device *peer_device, u64 val);
+void _drbd_uuid_set_current(struct drbd_device *device, u64 val);
+void drbd_uuid_new_current(struct drbd_device *device, bool forced);
+void drbd_uuid_new_current_by_user(struct drbd_device *device);
+void _drbd_uuid_push_history(struct drbd_device *device, u64 val);
+u64 _drbd_uuid_pull_history(struct drbd_peer_device *peer_device);
+void drbd_uuid_resync_starting(struct drbd_peer_device *peer_device);
+u64 drbd_uuid_resync_finished(struct drbd_peer_device *peer_device);
+void drbd_uuid_detect_finished_resyncs(struct drbd_peer_device *peer_device);
+bool drbd_uuid_set_exposed(struct drbd_device *device, u64 val, bool log);
+u64 drbd_weak_nodes_device(struct drbd_device *device);
+bool drbd_uuid_is_day0(struct drbd_device *device);
+int drbd_md_test_flag(struct drbd_backing_dev *bdev, enum mdf_flag flag);
+void drbd_md_set_peer_flag(struct drbd_peer_device *peer_device,
+			   enum mdf_peer_flag flag);
+void drbd_md_clear_peer_flag(struct drbd_peer_device *peer_device,
+			     enum mdf_peer_flag flag);
+bool drbd_md_test_peer_flag(struct drbd_peer_device *peer_device,
+			    enum mdf_peer_flag flag);
+void drbd_md_mark_dirty(struct drbd_device *device);
+void drbd_queue_bitmap_io(struct drbd_device *device,
+			  int (*io_fn)(struct drbd_device *device,
+				       struct drbd_peer_device *peer_device),
+			  void (*done)(struct drbd_device *device,
+				       struct drbd_peer_device *peer_device,
+				       int rv),
+			  char *why, enum bm_flag flags,
+			  struct drbd_peer_device *peer_device);
+int drbd_bitmap_io(struct drbd_device *device,
+		   int (*io_fn)(struct drbd_device *, struct drbd_peer_device *),
+		   char *why, enum bm_flag flags,
+		   struct drbd_peer_device *peer_device);
+int drbd_bitmap_io_from_worker(struct drbd_device *device,
+			       int (*io_fn)(struct drbd_device *, struct drbd_peer_device *),
+			       char *why, enum bm_flag flags,
+			       struct drbd_peer_device *peer_device);
+int drbd_bmio_set_n_write(struct drbd_device *device,
+			  struct drbd_peer_device *peer_device);
+int drbd_bmio_clear_all_n_write(struct drbd_device *device,
+				struct drbd_peer_device *peer_device);
+int drbd_bmio_set_all_n_write(struct drbd_device *device,
+			      struct drbd_peer_device *peer_device);
+int drbd_bmio_set_allocated_n_write(struct drbd_device *device,
+				    struct drbd_peer_device *peer_device);
+int drbd_bmio_clear_one_peer(struct drbd_device *device,
+			     struct drbd_peer_device *peer_device);
+bool drbd_device_stable(struct drbd_device *device, u64 *authoritative_ptr);
+void drbd_flush_peer_acks(struct drbd_resource *resource);
+void drbd_cork(struct drbd_connection *connection, enum drbd_stream stream);
+int drbd_uncork(struct drbd_connection *connection, enum drbd_stream stream);
+void drbd_open_counts(struct drbd_resource *resource, int *rw_count_ptr,
+		      int *ro_count_ptr);
+
+struct drbd_connection *
+__drbd_next_connection_ref(u64 *visited, struct drbd_connection *connection,
+			   struct drbd_resource *resource);
+
+struct drbd_peer_device *
+__drbd_next_peer_device_ref(u64 *visited,
+			    struct drbd_peer_device *peer_device,
+			    struct drbd_device *device);
+
+void tl_abort_disk_io(struct drbd_device *device);
+
+sector_t drbd_get_max_capacity(struct drbd_device *device,
+			       struct drbd_backing_dev *bdev, bool warn);
+sector_t drbd_partition_data_capacity(struct drbd_device *device);
 
 /* Meta data layout
  *
@@ -1114,59 +1941,10 @@ extern int drbd_bmio_clear_n_write(struct drbd_device *device,
  *  but is about to become configurable.
  */
 
-/* Our old fixed size meta data layout
- * allows up to about 3.8TB, so if you want more,
- * you need to use the "flexible" meta data format. */
-#define MD_128MB_SECT (128LLU << 11)  /* 128 MB, unit sectors */
-#define MD_4kB_SECT	 8
-#define MD_32kB_SECT	64
-
 /* One activity log extent represents 4M of storage */
 #define AL_EXTENT_SHIFT 22
 #define AL_EXTENT_SIZE (1<<AL_EXTENT_SHIFT)
 
-/* We could make these currently hardcoded constants configurable
- * variables at create-md time (or even re-configurable at runtime?).
- * Which will require some more changes to the DRBD "super block"
- * and attach code.
- *
- * updates per transaction:
- *   This many changes to the active set can be logged with one transaction.
- *   This number is arbitrary.
- * context per transaction:
- *   This many context extent numbers are logged with each transaction.
- *   This number is resulting from the transaction block size (4k), the layout
- *   of the transaction header, and the number of updates per transaction.
- *   See drbd_actlog.c:struct al_transaction_on_disk
- * */
-#define AL_UPDATES_PER_TRANSACTION	 64	// arbitrary
-#define AL_CONTEXT_PER_TRANSACTION	919	// (4096 - 36 - 6*64)/4
-
-#if BITS_PER_LONG == 32
-#define LN2_BPL 5
-#define cpu_to_lel(A) cpu_to_le32(A)
-#define lel_to_cpu(A) le32_to_cpu(A)
-#elif BITS_PER_LONG == 64
-#define LN2_BPL 6
-#define cpu_to_lel(A) cpu_to_le64(A)
-#define lel_to_cpu(A) le64_to_cpu(A)
-#else
-#error "LN2 of BITS_PER_LONG unknown!"
-#endif
-
-/* resync bitmap */
-/* 16MB sized 'bitmap extent' to track syncer usage */
-struct bm_extent {
-	int rs_left; /* number of bits set (out of sync) in this extent. */
-	int rs_failed; /* number of failed resync requests in this extent. */
-	unsigned long flags;
-	struct lc_element lce;
-};
-
-#define BME_NO_WRITES  0  /* bm_extent.flags: no more requests on this one! */
-#define BME_LOCKED     1  /* bm_extent.flags: syncer active on this one. */
-#define BME_PRIORITY   2  /* finish resync IO on this extent ASAP! App IO waiting! */
-
 /* drbd_bitmap.c */
 /*
  * We need to store one bit for a block.
@@ -1175,94 +1953,87 @@ struct bm_extent {
  * Bit 1 ==> local node thinks this block needs to be synced.
  */
 
-#define SLEEP_TIME (HZ/10)
+#define RS_MAKE_REQS_INTV    (HZ/10)
+#define RS_MAKE_REQS_INTV_NS (NSEC_PER_SEC/10)
 
-/* We do bitmap IO in units of 4k blocks.
- * We also still have a hardcoded 4k per bit relation. */
-#define BM_BLOCK_SHIFT	12			 /* 4k per bit */
-#define BM_BLOCK_SIZE	 (1<<BM_BLOCK_SHIFT)
-/* mostly arbitrarily set the represented size of one bitmap extent,
- * aka resync extent, to 16 MiB (which is also 512 Byte worth of bitmap
- * at 4k per bit resolution) */
-#define BM_EXT_SHIFT	 24	/* 16 MiB per resync extent */
-#define BM_EXT_SIZE	 (1<<BM_EXT_SHIFT)
-
-#if (BM_EXT_SHIFT != 24) || (BM_BLOCK_SHIFT != 12)
-#error "HAVE YOU FIXED drbdmeta AS WELL??"
-#endif
-
-/* thus many _storage_ sectors are described by one bit */
-#define BM_SECT_TO_BIT(x)   ((x)>>(BM_BLOCK_SHIFT-9))
-#define BM_BIT_TO_SECT(x)   ((sector_t)(x)<<(BM_BLOCK_SHIFT-9))
-#define BM_SECT_PER_BIT     BM_BIT_TO_SECT(1)
-
-/* bit to represented kilo byte conversion */
-#define Bit2KB(bits) ((bits)<<(BM_BLOCK_SHIFT-10))
-
-/* in which _bitmap_ extent (resp. sector) the bit for a certain
- * _storage_ sector is located in */
-#define BM_SECT_TO_EXT(x)   ((x)>>(BM_EXT_SHIFT-9))
-#define BM_BIT_TO_EXT(x)    ((x) >> (BM_EXT_SHIFT - BM_BLOCK_SHIFT))
-
-/* first storage sector a bitmap extent corresponds to */
-#define BM_EXT_TO_SECT(x)   ((sector_t)(x) << (BM_EXT_SHIFT-9))
-/* how much _storage_ sectors we have per bitmap extent */
-#define BM_SECT_PER_EXT     BM_EXT_TO_SECT(1)
-/* how many bits are covered by one bitmap extent (resync extent) */
-#define BM_BITS_PER_EXT     (1UL << (BM_EXT_SHIFT - BM_BLOCK_SHIFT))
-
-#define BM_BLOCKS_PER_BM_EXT_MASK  (BM_BITS_PER_EXT - 1)
+#define LEGACY_BM_EXT_SHIFT	 27	/* 128 MiB per resync extent */
+#define LEGACY_BM_EXT_SECT_MASK ((1UL << (LEGACY_BM_EXT_SHIFT - SECTOR_SHIFT)) - 1)
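+/* i.e. 1 << (27 - 9) == 262144 sectors (128 MiB) per legacy extent */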
 
+static inline unsigned int bm_block_size(const struct drbd_bitmap *bm)
+{
+	return 1 << bm->bm_block_shift;
+}
+
+static inline sector_t bm_bit_to_kb(const struct drbd_bitmap *bm, unsigned long bit)
+{
+	return (sector_t)bit << (bm->bm_block_shift - 10);
+}
+
+static inline unsigned long bm_sect_to_bit(const struct drbd_bitmap *bm, sector_t s)
+{
+	return s >> (bm->bm_block_shift - 9);
+}
+
+static inline sector_t bm_bit_to_sect(const struct drbd_bitmap *bm, unsigned long bit)
+{
+	return (sector_t)bit << (bm->bm_block_shift - 9);
+}
+
+static inline sector_t bm_sect_per_bit(const struct drbd_bitmap *bm)
+{
+	return (sector_t)1 << (bm->bm_block_shift - 9);
+}
 
-/* in one sector of the bitmap, we have this many activity_log extents. */
-#define AL_EXT_PER_BM_SECT  (1 << (BM_EXT_SHIFT - AL_EXTENT_SHIFT))
+static inline sector_t bit_to_kb(unsigned long bit, unsigned int bm_block_shift)
+{
+	return (sector_t)bit << (bm_block_shift - 10);
+}
+
+static inline unsigned long sect_to_bit(sector_t s, unsigned int bm_block_shift)
+{
+	return s >> (bm_block_shift - 9);
+}
+
+static inline sector_t bit_to_sect(unsigned long bit, unsigned int bm_block_shift)
+{
+	return (sector_t)bit << (bm_block_shift - 9);
+}
+
+static inline sector_t sect_per_bit(unsigned int bm_block_shift)
+{
+	return (sector_t)1 << (bm_block_shift - 9);
+}
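+
+/* Example (illustrative): with bm_block_shift == 12 (4 KiB per bit), one
+ * bit covers 1 << (12 - 9) == 8 sectors, so bit_to_sect(100, 12) == 800
+ * and bit_to_kb(100, 12) == 400 KiB. */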
 
-/* the extent in "PER_EXTENT" below is an activity log extent
- * we need that many (long words/bytes) to store the bitmap
- *		     of one AL_EXTENT_SIZE chunk of storage.
- * we can store the bitmap for that many AL_EXTENTS within
- * one sector of the _on_disk_ bitmap:
- * bit	 0	  bit 37   bit 38	     bit (512*8)-1
- *	     ...|........|........|.. // ..|........|
- * sect. 0	 `296	  `304			   ^(512*8*8)-1
- *
-#define BM_WORDS_PER_EXT    ( (AL_EXT_SIZE/BM_BLOCK_SIZE) / BITS_PER_LONG )
-#define BM_BYTES_PER_EXT    ( (AL_EXT_SIZE/BM_BLOCK_SIZE) / 8 )  // 128
-#define BM_EXT_PER_SECT	    ( 512 / BM_BYTES_PER_EXTENT )	 //   4
+/* We may have just lost our backing device, and with it ->ldev and ->bitmap.
+ * But we can still report sync progress and similar based on our last known
+ * bitmap block size.
  */
+static inline sector_t device_bit_to_kb(struct drbd_device *device, unsigned long bit)
+{
+	return bit_to_kb(bit, device->last_bm_block_shift);
+}
 
-#define DRBD_MAX_SECTORS_32 (0xffffffffLU)
-/* we have a certain meta data variant that has a fixed on-disk size of 128
- * MiB, of which 4k are our "superblock", and 32k are the fixed size activity
+/* Send P_PEERS_IN_SYNC in steps defined by this shift. Set to the activity log
+ * extent shift since the P_PEERS_IN_SYNC intervals are broken up based on
+ * activity log extents anyway. */
+#define PEERS_IN_SYNC_STEP_SHIFT AL_EXTENT_SHIFT
+#define PEERS_IN_SYNC_STEP_SECT      (1UL << (PEERS_IN_SYNC_STEP_SHIFT - SECTOR_SHIFT))
+#define PEERS_IN_SYNC_STEP_SECT_MASK (PEERS_IN_SYNC_STEP_SECT - 1)
+
+/* Indexed external meta data has a fixed on-disk size of 128MiB, of which
+ * 4KiB are our "superblock", and 32KiB are the fixed size activity
  * log, leaving this many sectors for the bitmap.
  */
+#define DRBD_BM_SECTORS_INDEXED \
+	  (((128 << 20) - (32 << 10) - (4 << 10)) >> SECTOR_SHIFT)
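+/* == (134217728 - 32768 - 4096) >> 9 == 262072 sectors */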
 
-#define DRBD_MAX_SECTORS_FIXED_BM \
-	  ((MD_128MB_SECT - MD_32kB_SECT - MD_4kB_SECT) * (1LL<<(BM_EXT_SHIFT-9)))
-#define DRBD_MAX_SECTORS      DRBD_MAX_SECTORS_FIXED_BM
-/* 16 TB in units of sectors */
 #if BITS_PER_LONG == 32
-/* adjust by one page worth of bitmap,
- * so we won't wrap around in drbd_bm_find_next_bit.
- * you should use 64bit OS for that much storage, anyways. */
-#define DRBD_MAX_SECTORS_FLEX BM_BIT_TO_SECT(0xffff7fff)
+#if !defined(CONFIG_LBDAF) && !defined(CONFIG_LBD)
+#define DRBD_MAX_SECTORS (0xffffffffLU)
 #else
-/* we allow up to 1 PiB now on 64bit architecture with "flexible" meta data */
-#define DRBD_MAX_SECTORS_FLEX (1UL << 51)
-/* corresponds to (1UL << 38) bits right now. */
+/* With large block device support, the size is limited by the fact that we
+ * want to be able to address bitmap bits with a long. Additionally adjust by
+ * one page worth of bitmap, so we don't wrap around when iterating. */
+#define DRBD_MAX_SECTORS BM_BIT_TO_SECT(0xffff7fff)
 #endif
-
-/* Estimate max bio size as 256 * PAGE_SIZE,
- * so for typical PAGE_SIZE of 4k, that is (1<<20) Byte.
- * Since we may live in a mixed-platform cluster,
- * we limit us to a platform agnostic constant here for now.
- * A followup commit may allow even bigger BIO sizes,
- * once we thought that through. */
-#define DRBD_MAX_BIO_SIZE (1U << 20)
-#if DRBD_MAX_BIO_SIZE > (BIO_MAX_VECS << PAGE_SHIFT)
-#error Architecture not supported: DRBD_MAX_BIO_SIZE > BIO_MAX_SIZE
+#else
+/* We allow up to 1 PiB on 64 bit architectures as long as our meta data
+ * is large enough. */
+#define DRBD_MAX_SECTORS (1UL << (50 - SECTOR_SHIFT))
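+/* 1UL << (50 - 9) sectors of 512 bytes each == 2^50 bytes == 1 PiB */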
 #endif
-#define DRBD_MAX_BIO_SIZE_SAFE (1U << 12)       /* Works always = 4k */
 
 #define DRBD_MAX_SIZE_H80_PACKET (1U << 15) /* Header 80 only allows packets up to 32KiB data */
 #define DRBD_MAX_BIO_SIZE_P95    (1U << 17) /* Protocol 95 to 99 allows bios up to 128KiB */
@@ -1273,61 +2044,91 @@ struct bm_extent {
 #define DRBD_MAX_BATCH_BIO_SIZE	 (AL_UPDATES_PER_TRANSACTION/2*AL_EXTENT_SIZE)
 #define DRBD_MAX_BBIO_SECTORS    (DRBD_MAX_BATCH_BIO_SIZE >> 9)
 
-extern int  drbd_bm_init(struct drbd_device *device);
-extern int  drbd_bm_resize(struct drbd_device *device, sector_t sectors, int set_new_bits);
-extern void drbd_bm_cleanup(struct drbd_device *device);
-extern void drbd_bm_set_all(struct drbd_device *device);
-extern void drbd_bm_clear_all(struct drbd_device *device);
+/* This gets ignored if the backing device has a larger discard granularity */
+#define DRBD_MAX_RS_DISCARD_SIZE (1U << 27) /* 128MiB; arbitrary */
+
+/* how many activity log extents are touched by this interval? */
+static inline int interval_to_al_extents(struct drbd_interval *i)
+{
+	unsigned int first = i->sector >> (AL_EXTENT_SHIFT - 9);
+	unsigned int last = i->size == 0 ? first :
+		(i->sector + (i->size >> 9) - 1) >> (AL_EXTENT_SHIFT - 9);
+	return 1 + last - first; /* worst case: all touched extents are cold. */
+}
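+
+/* Worked example (illustrative): with AL_EXTENT_SHIFT == 22, one extent
+ * covers 4 MiB == 8192 sectors; a 32-sector write starting at sector 8190
+ * gives first == 0 and last == (8190 + 31) >> 13 == 1, so two extents
+ * are touched. */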
+
+struct drbd_bitmap *drbd_bm_alloc(unsigned int max_peers, unsigned int bm_block_shift);
+int  drbd_bm_resize(struct drbd_device *device, sector_t capacity,
+		    bool set_new_bits);
+void drbd_bm_free(struct drbd_device *device);
+void drbd_bm_set_all(struct drbd_device *device);
+void drbd_bm_clear_all(struct drbd_device *device);
 /* set/clear/test only a few bits at a time */
-extern int  drbd_bm_set_bits(
-		struct drbd_device *device, unsigned long s, unsigned long e);
-extern int  drbd_bm_clear_bits(
-		struct drbd_device *device, unsigned long s, unsigned long e);
-extern int drbd_bm_count_bits(
-	struct drbd_device *device, const unsigned long s, const unsigned long e);
+unsigned int drbd_bm_set_bits(struct drbd_device *device,
+			      unsigned int bitmap_index, unsigned long start,
+			      unsigned long end);
+unsigned int drbd_bm_clear_bits(struct drbd_device *device,
+				unsigned int bitmap_index,
+				unsigned long start, unsigned long end);
+int drbd_bm_count_bits(struct drbd_device *device, unsigned int bitmap_index,
+		       unsigned long s, unsigned long e);
 /* bm_set_bits variant for use while holding drbd_bm_lock,
  * may process the whole bitmap in one go */
-extern void _drbd_bm_set_bits(struct drbd_device *device,
-		const unsigned long s, const unsigned long e);
-extern int  drbd_bm_test_bit(struct drbd_device *device, unsigned long bitnr);
-extern int  drbd_bm_e_weight(struct drbd_device *device, unsigned long enr);
-extern int  drbd_bm_read(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local);
-extern void drbd_bm_mark_for_writeout(struct drbd_device *device, int page_nr);
-extern int  drbd_bm_write(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local);
-extern void drbd_bm_reset_al_hints(struct drbd_device *device) __must_hold(local);
-extern int  drbd_bm_write_hinted(struct drbd_device *device) __must_hold(local);
-extern int  drbd_bm_write_lazy(struct drbd_device *device, unsigned upper_idx) __must_hold(local);
-extern int drbd_bm_write_all(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local);
-extern int  drbd_bm_write_copy_pages(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local);
-extern size_t	     drbd_bm_words(struct drbd_device *device);
-extern unsigned long drbd_bm_bits(struct drbd_device *device);
-extern sector_t      drbd_bm_capacity(struct drbd_device *device);
+void drbd_bm_set_many_bits(struct drbd_peer_device *peer_device,
+			   unsigned long start, unsigned long end);
+void drbd_bm_clear_many_bits(struct drbd_peer_device *peer_device,
+			     unsigned long start, unsigned long end);
+void _drbd_bm_clear_many_bits(struct drbd_device *device, int bitmap_index,
+			      unsigned long start, unsigned long end);
+void _drbd_bm_set_many_bits(struct drbd_device *device, int bitmap_index,
+			    unsigned long start, unsigned long end);
+int  drbd_bm_read(struct drbd_device *device,
+		  struct drbd_peer_device *peer_device);
+void drbd_bm_reset_al_hints(struct drbd_device *device);
+void drbd_bm_mark_range_for_writeout(struct drbd_device *device,
+				     unsigned long start, unsigned long end);
+int  drbd_bm_write(struct drbd_device *device,
+		   struct drbd_peer_device *peer_device);
+int  drbd_bm_write_hinted(struct drbd_device *device);
+int  drbd_bm_write_lazy(struct drbd_device *device, unsigned int upper_idx);
+int drbd_bm_write_all(struct drbd_device *device,
+		      struct drbd_peer_device *peer_device);
+int drbd_bm_write_copy_pages(struct drbd_device *device,
+			     struct drbd_peer_device *peer_device);
+size_t	     drbd_bm_words(struct drbd_device *device);
+unsigned long drbd_bm_bits(struct drbd_device *device);
+unsigned long drbd_bm_bits_4k(struct drbd_device *device);
+sector_t      drbd_bm_capacity(struct drbd_device *device);
 
 #define DRBD_END_OF_BITMAP	(~(unsigned long)0)
-extern unsigned long drbd_bm_find_next(struct drbd_device *device, unsigned long bm_fo);
+unsigned long drbd_bm_find_next(struct drbd_peer_device *peer_device,
+				unsigned long start);
 /* bm_find_next variants for use while you hold drbd_bm_lock() */
-extern unsigned long _drbd_bm_find_next(struct drbd_device *device, unsigned long bm_fo);
-extern unsigned long _drbd_bm_find_next_zero(struct drbd_device *device, unsigned long bm_fo);
-extern unsigned long _drbd_bm_total_weight(struct drbd_device *device);
-extern unsigned long drbd_bm_total_weight(struct drbd_device *device);
+unsigned long _drbd_bm_find_next(struct drbd_peer_device *peer_device,
+				 unsigned long start);
+unsigned long _drbd_bm_find_next_zero(struct drbd_peer_device *peer_device,
+				      unsigned long start);
+unsigned long _drbd_bm_total_weight(struct drbd_device *device,
+				    int bitmap_index);
+unsigned long drbd_bm_total_weight(struct drbd_peer_device *peer_device);
 /* for receive_bitmap */
-extern void drbd_bm_merge_lel(struct drbd_device *device, size_t offset,
-		size_t number, unsigned long *buffer);
+void drbd_bm_merge_lel(struct drbd_peer_device *peer_device, size_t offset,
+		       size_t number, unsigned long *buffer);
 /* for _drbd_send_bitmap */
-extern void drbd_bm_get_lel(struct drbd_device *device, size_t offset,
-		size_t number, unsigned long *buffer);
-
-extern void drbd_bm_lock(struct drbd_device *device, char *why, enum bm_flag flags);
-extern void drbd_bm_unlock(struct drbd_device *device);
+void drbd_bm_get_lel(struct drbd_peer_device *peer_device, size_t offset,
+		     size_t number, unsigned long *buffer);
+
+void drbd_bm_lock(struct drbd_device *device, const char *why,
+		  enum bm_flag flags);
+void drbd_bm_unlock(struct drbd_device *device);
+void drbd_bm_slot_lock(struct drbd_peer_device *peer_device, char *why,
+		       enum bm_flag flags);
+void drbd_bm_slot_unlock(struct drbd_peer_device *peer_device);
+void drbd_bm_copy_slot(struct drbd_device *device, unsigned int from_index,
+		       unsigned int to_index);
 /* drbd_main.c */
 
+extern struct workqueue_struct *ping_ack_sender;
 extern struct kmem_cache *drbd_request_cache;
 extern struct kmem_cache *drbd_ee_cache;	/* peer requests */
-extern struct kmem_cache *drbd_bm_ext_cache;	/* bitmap extents */
 extern struct kmem_cache *drbd_al_ext_cache;	/* activity log extents */
 extern mempool_t drbd_request_mempool;
 extern mempool_t drbd_ee_mempool;
@@ -1348,38 +2149,69 @@ extern struct bio_set drbd_md_io_bio_set;
 /* And a bio_set for cloning */
 extern struct bio_set drbd_io_bio_set;
 
-extern struct mutex resources_mutex;
-
-extern enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsigned int minor);
-extern void drbd_destroy_device(struct kref *kref);
-extern void drbd_delete_device(struct drbd_device *device);
-
-extern struct drbd_resource *drbd_create_resource(const char *name);
-extern void drbd_free_resource(struct drbd_resource *resource);
-
-extern int set_resource_options(struct drbd_resource *resource, struct res_opts *res_opts);
-extern struct drbd_connection *conn_create(const char *name, struct res_opts *res_opts);
-extern void drbd_destroy_connection(struct kref *kref);
-extern struct drbd_connection *conn_get_by_addrs(void *my_addr, int my_addr_len,
-					    void *peer_addr, int peer_addr_len);
-extern struct drbd_resource *drbd_find_resource(const char *name);
-extern void drbd_destroy_resource(struct kref *kref);
-extern void conn_free_crypto(struct drbd_connection *connection);
+struct drbd_peer_device *create_peer_device(struct drbd_device *device,
+					    struct drbd_connection *connection);
+enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx,
+				      unsigned int minor,
+				      struct device_conf *device_conf,
+				      struct drbd_device **p_device);
+void drbd_unregister_device(struct drbd_device *device);
+void drbd_reclaim_device(struct rcu_head *rp);
+void drbd_unregister_connection(struct drbd_connection *connection);
+void drbd_reclaim_connection(struct rcu_head *rp);
+void drbd_reclaim_path(struct rcu_head *rp);
+void del_connect_timer(struct drbd_connection *connection);
+
+struct drbd_resource *drbd_create_resource(const char *name,
+					   struct res_opts *res_opts);
+void drbd_reclaim_resource(struct rcu_head *rp);
+struct drbd_resource *drbd_find_resource(const char *name);
+void drbd_destroy_resource(struct kref *kref);
+
+void drbd_destroy_device(struct kref *kref);
+
+int set_resource_options(struct drbd_resource *resource,
+			 struct res_opts *res_opts, const char *tag);
+struct drbd_connection *drbd_create_connection(struct drbd_resource *resource,
+					       struct drbd_transport_class *tc);
+void drbd_transport_shutdown(struct drbd_connection *connection,
+			     enum drbd_tr_free_op op);
+void drbd_destroy_connection(struct kref *kref);
+void conn_free_crypto(struct drbd_connection *connection);
 
 /* drbd_req */
-extern void do_submit(struct work_struct *ws);
-extern void __drbd_make_request(struct drbd_device *, struct bio *);
+void drbd_do_submit_conflict(struct work_struct *ws);
+void do_submit(struct work_struct *ws);
+#ifndef CONFIG_DRBD_TIMING_STATS
+/* Function-like macros are not expanded recursively, so this rewrites the
+ * prototype below and every call site to drop the ktime argument when
+ * timing stats are compiled out. */
+#define __drbd_make_request(d, b, k, j) __drbd_make_request(d, b, j)
+#endif
+void __drbd_make_request(struct drbd_device *device, struct bio *bio,
+			 ktime_t start_kt, unsigned long start_jif);
 void drbd_submit_bio(struct bio *bio);
 
-/* drbd_nl.c */
-
-extern struct mutex notification_mutex;
+enum drbd_force_detach_flags {
+	DRBD_READ_ERROR,
+	DRBD_WRITE_ERROR,
+	DRBD_META_IO_ERROR,
+	DRBD_FORCE_DETACH,
+};
+#define drbd_handle_io_error(m, f) drbd_handle_io_error_(m, f, __func__)
+void drbd_handle_io_error_(struct drbd_device *device,
+			   enum drbd_force_detach_flags df, const char *where);
 
-extern void drbd_suspend_io(struct drbd_device *device);
-extern void drbd_resume_io(struct drbd_device *device);
-extern char *ppsize(char *buf, unsigned long long size);
-extern sector_t drbd_new_dev_size(struct drbd_device *, struct drbd_backing_dev *, sector_t, int);
+/* drbd_nl.c */
+enum suspend_scope {
+	READ_AND_WRITE,
+	WRITE_ONLY
+};
+void drbd_suspend_io(struct drbd_device *device, enum suspend_scope ss);
+void drbd_resume_io(struct drbd_device *device);
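+/* Typical pairing (illustrative, not from this patch): suspend IO around
+ * a reconfiguration step, then resume:
+ *
+ *	drbd_suspend_io(device, READ_AND_WRITE);
+ *	...
+ *	drbd_resume_io(device);
+ */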
+char *ppsize(char *buf, unsigned long long size);
+sector_t drbd_new_dev_size(struct drbd_device *device, sector_t current_size,
+			   sector_t user_capped_size, enum dds_flags flags);
 enum determine_dev_size {
+	DS_2PC_ERR = -5,
+	DS_2PC_NOT_SUPPORTED = -4,
 	DS_ERROR_SHRINK = -3,
 	DS_ERROR_SPACE_MD = -2,
 	DS_ERROR = -1,
@@ -1388,96 +2220,225 @@ enum determine_dev_size {
 	DS_GREW = 2,
 	DS_GREW_FROM_ZERO = 3,
 };
-extern enum determine_dev_size
-drbd_determine_dev_size(struct drbd_device *, enum dds_flags, struct resize_parms *) __must_hold(local);
-extern void resync_after_online_grow(struct drbd_device *);
-extern void drbd_reconsider_queue_parameters(struct drbd_device *device,
-			struct drbd_backing_dev *bdev, struct o_qlim *o);
-extern enum drbd_state_rv drbd_set_role(struct drbd_device *device,
-					enum drbd_role new_role,
-					int force);
-extern bool conn_try_outdate_peer(struct drbd_connection *connection);
-extern void conn_try_outdate_peer_async(struct drbd_connection *connection);
-extern enum drbd_peer_state conn_khelper(struct drbd_connection *connection, char *cmd);
-extern int drbd_khelper(struct drbd_device *device, char *cmd);
-
-/* drbd_worker.c */
-/* bi_end_io handlers */
-extern void drbd_md_endio(struct bio *bio);
-extern void drbd_peer_request_endio(struct bio *bio);
-extern void drbd_request_endio(struct bio *bio);
-extern int drbd_worker(struct drbd_thread *thi);
+enum determine_dev_size
+drbd_determine_dev_size(struct drbd_device *device,
+			sector_t peer_current_size, enum dds_flags flags,
+			struct resize_parms *rs);
+void resync_after_online_grow(struct drbd_peer_device *peer_device);
+void drbd_reconsider_queue_parameters(struct drbd_device *device,
+				      struct drbd_backing_dev *bdev);
+bool barrier_pending(struct drbd_resource *resource);
+enum drbd_state_rv
+drbd_set_role(struct drbd_resource *resource, enum drbd_role role, bool force,
+	      const char *tag, struct sk_buff *reply_skb);
+void conn_try_outdate_peer_async(struct drbd_connection *connection);
+int drbd_maybe_khelper(struct drbd_device *device,
+		       struct drbd_connection *connection, char *cmd);
+int drbd_create_peer_device_default_config(struct drbd_peer_device *peer_device);
+int drbd_unallocated_index(struct drbd_backing_dev *bdev);
+void youngest_and_oldest_opener_to_str(struct drbd_device *device, char *buf,
+				       size_t len);
+int param_set_drbd_strict_names(const char *val,
+				const struct kernel_param *kp);
+void drbd_enable_netns(void);
+
+/* drbd_sender.c */
+int drbd_sender(struct drbd_thread *thi);
+int drbd_worker(struct drbd_thread *thi);
 enum drbd_ret_code drbd_resync_after_valid(struct drbd_device *device, int o_minor);
 void drbd_resync_after_changed(struct drbd_device *device);
-extern void drbd_start_resync(struct drbd_device *device, enum drbd_conns side);
-extern void resume_next_sg(struct drbd_device *device);
-extern void suspend_other_sg(struct drbd_device *device);
-extern int drbd_resync_finished(struct drbd_peer_device *peer_device);
+bool drbd_stable_sync_source_present(struct drbd_peer_device *except_peer_device,
+				     enum which_state which);
+void drbd_start_resync(struct drbd_peer_device *peer_device,
+		       enum drbd_repl_state side, const char *tag);
+void resume_next_sg(struct drbd_device *device);
+void suspend_other_sg(struct drbd_device *device);
+void drbd_resync_finished(struct drbd_peer_device *peer_device,
+			  enum drbd_disk_state new_peer_disk_state);
+void verify_progress(struct drbd_peer_device *peer_device,
+		     const sector_t sector, const unsigned int size);
 /* maybe rather drbd_main.c ? */
-extern void *drbd_md_get_buffer(struct drbd_device *device, const char *intent);
-extern void drbd_md_put_buffer(struct drbd_device *device);
-extern int drbd_md_sync_page_io(struct drbd_device *device,
-		struct drbd_backing_dev *bdev, sector_t sector, enum req_op op);
-extern void drbd_ov_out_of_sync_found(struct drbd_peer_device *peer_device,
-		sector_t sector, int size);
-extern void wait_until_done_or_force_detached(struct drbd_device *device,
-		struct drbd_backing_dev *bdev, unsigned int *done);
-extern void drbd_rs_controller_reset(struct drbd_peer_device *peer_device);
+void *drbd_md_get_buffer(struct drbd_device *device, const char *intent);
+void drbd_md_put_buffer(struct drbd_device *device);
+int drbd_md_sync_page_io(struct drbd_device *device,
+			 struct drbd_backing_dev *bdev, sector_t sector,
+			 enum req_op op);
+bool drbd_al_active(struct drbd_device *device, sector_t sector,
+		    unsigned int size);
+void drbd_ov_out_of_sync_found(struct drbd_peer_device *peer_device,
+			       sector_t sector, int size);
+void wait_until_done_or_force_detached(struct drbd_device *device,
+				       struct drbd_backing_dev *bdev,
+				       unsigned int *done);
+void drbd_rs_controller_reset(struct drbd_peer_device *peer_device);
+void drbd_rs_all_in_flight_came_back(struct drbd_peer_device *peer_device,
+				     int rs_sect_in);
+void drbd_check_peers(struct drbd_resource *resource);
+void drbd_check_peers_new_current_uuid(struct drbd_device *device);
+void drbd_conflict_send_resync_request(struct drbd_peer_request *peer_req);
+void drbd_ping_peer(struct drbd_connection *connection);
+struct drbd_peer_device *peer_device_by_node_id(struct drbd_device *device,
+						int node_id);
+void drbd_update_mdf_al_disabled(struct drbd_device *device,
+				 enum which_state which);
 
 static inline void ov_out_of_sync_print(struct drbd_peer_device *peer_device)
 {
-	struct drbd_device *device = peer_device->device;
-
-	if (device->ov_last_oos_size) {
+	if (peer_device->ov_last_oos_size) {
 		drbd_err(peer_device, "Out of sync: start=%llu, size=%lu (sectors)\n",
-		     (unsigned long long)device->ov_last_oos_start,
-		     (unsigned long)device->ov_last_oos_size);
+		     (unsigned long long)peer_device->ov_last_oos_start,
+		     (unsigned long)peer_device->ov_last_oos_size);
 	}
-	device->ov_last_oos_size = 0;
+	peer_device->ov_last_oos_size = 0;
 }
 
+static inline void ov_skipped_print(struct drbd_peer_device *peer_device)
+{
+	if (peer_device->ov_last_skipped_size) {
+		drbd_info(peer_device, "Skipped verify, too busy: start=%llu, size=%lu (sectors)\n",
+		     (unsigned long long)peer_device->ov_last_skipped_start,
+		     (unsigned long)peer_device->ov_last_skipped_size);
+	}
+	peer_device->ov_last_skipped_size = 0;
+}
+
+void drbd_csum_bios(struct crypto_shash *tfm, struct bio_list *bios, void *digest);
+void drbd_csum_bio(struct crypto_shash *tfm, struct bio *bio, void *digest);
+void drbd_resync_read_req_mod(struct drbd_peer_request *peer_req,
+			      enum drbd_interval_flags bit_to_set);
 
-extern void drbd_csum_bio(struct crypto_shash *, struct bio *, void *);
-extern void drbd_csum_ee(struct crypto_shash *, struct drbd_peer_request *,
-			 void *);
 /* worker callbacks */
-extern int w_e_end_data_req(struct drbd_work *, int);
-extern int w_e_end_rsdata_req(struct drbd_work *, int);
-extern int w_e_end_csum_rs_req(struct drbd_work *, int);
-extern int w_e_end_ov_reply(struct drbd_work *, int);
-extern int w_e_end_ov_req(struct drbd_work *, int);
-extern int w_ov_finished(struct drbd_work *, int);
-extern int w_resync_timer(struct drbd_work *, int);
-extern int w_send_write_hint(struct drbd_work *, int);
-extern int w_send_dblock(struct drbd_work *, int);
-extern int w_send_read_req(struct drbd_work *, int);
-extern int w_restart_disk_io(struct drbd_work *, int);
-extern int w_send_out_of_sync(struct drbd_work *, int);
-
-extern void resync_timer_fn(struct timer_list *t);
-extern void start_resync_timer_fn(struct timer_list *t);
-
-extern void drbd_endio_write_sec_final(struct drbd_peer_request *peer_req);
+int w_e_end_data_req(struct drbd_work *w, int cancel);
+int w_e_end_rsdata_req(struct drbd_work *w, int cancel);
+int w_e_end_ov_reply(struct drbd_work *w, int cancel);
+int w_e_end_ov_req(struct drbd_work *w, int cancel);
+int w_resync_timer(struct drbd_work *w, int cancel);
+int w_e_reissue(struct drbd_work *w, int cancel);
+int w_send_dagtag(struct drbd_work *w, int cancel);
+int w_send_uuids(struct drbd_work *w, int cancel);
+
+bool drbd_any_flush_pending(struct drbd_resource *resource);
+void resync_timer_fn(struct timer_list *t);
+void start_resync_timer_fn(struct timer_list *t);
+
+int drbd_unmerge_discard(struct drbd_peer_request *peer_req_main,
+			 struct list_head *list);
+void drbd_endio_write_sec_final(struct drbd_peer_request *peer_req);
+
+/* bi_end_io handlers */
+void drbd_md_endio(struct bio *bio);
+void drbd_peer_request_endio(struct bio *bio);
+void drbd_request_endio(struct bio *bio);
+
+void __update_timing_details(
+		struct drbd_thread_timing_details *tdp,
+		unsigned int *cb_nr,
+		void *cb,
+		const char *fn, const unsigned int line);
+
+#define update_sender_timing_details(c, cb) \
+	__update_timing_details(c->s_timing_details, &c->s_cb_nr, cb, __func__, __LINE__)
+#define update_receiver_timing_details(c, cb) \
+	__update_timing_details(c->r_timing_details, &c->r_cb_nr, cb, __func__, __LINE__)
+#define update_worker_timing_details(r, cb) \
+	__update_timing_details(r->w_timing_details, &r->w_cb_nr, cb, __func__, __LINE__)
 
 /* drbd_receiver.c */
-extern int drbd_issue_discard_or_zero_out(struct drbd_device *device,
-		sector_t start, unsigned int nr_sectors, int flags);
-extern int drbd_receiver(struct drbd_thread *thi);
-extern int drbd_ack_receiver(struct drbd_thread *thi);
-extern void drbd_send_acks_wf(struct work_struct *ws);
-extern bool drbd_rs_c_min_rate_throttle(struct drbd_device *device);
-extern bool drbd_rs_should_slow_down(struct drbd_peer_device *peer_device, sector_t sector,
-		bool throttle_if_app_is_waiting);
-extern int drbd_submit_peer_request(struct drbd_peer_request *peer_req);
-extern int drbd_free_peer_reqs(struct drbd_device *, struct list_head *);
-extern struct drbd_peer_request *drbd_alloc_peer_req(struct drbd_peer_device *, u64,
-						     sector_t, unsigned int,
-						     unsigned int,
-						     gfp_t) __must_hold(local);
-extern void drbd_free_peer_req(struct drbd_device *device, struct drbd_peer_request *req);
-extern struct page *drbd_alloc_pages(struct drbd_peer_device *, unsigned int, bool);
-extern void _drbd_clear_done_ee(struct drbd_device *device, struct list_head *to_be_freed);
-extern int drbd_connected(struct drbd_peer_device *);
+struct packet_info {
+	enum drbd_packet cmd;
+	unsigned int size;
+	int vnr;
+	void *data;
+};
+
+/* packet_info->data is just a pointer into some temporary buffer
+ * owned by the transport. As soon as we call into the transport for
+ * any further receive operation, the data it points to is undefined.
+ * The buffer may be freed/recycled/re-used already.
+ * Convert and store the relevant information for any incoming data
+ * in struct drbd_peer_request_details.
+ */
+
+struct drbd_peer_request_details {
+	uint64_t sector;	/* be64_to_cpu(p_data.sector) */
+	uint64_t block_id;	/* unmodified p_data.block_id */
+	uint32_t peer_seq;	/* be32_to_cpu(p_data.seq_num) */
+	uint32_t dp_flags;	/* be32_to_cpu(p_data.dp_flags) */
+	uint32_t length;	/* endian converted p_head*.length */
+	uint32_t bi_size;	/* resulting bio size */
+	/* for non-discards: bi_size = length - digest_size */
+	uint32_t digest_size;
+};
+
+void drbd_queue_update_peers(struct drbd_peer_device *peer_device,
+			     sector_t sector_start, sector_t sector_end);
+int drbd_issue_discard_or_zero_out(struct drbd_device *device, sector_t start,
+				   unsigned int nr_sectors, int flags);
+int drbd_send_ack_be(struct drbd_peer_device *peer_device,
+		     enum drbd_packet cmd, sector_t sector, int size,
+		     u64 block_id);
+int drbd_send_ack(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
+		  struct drbd_peer_request *peer_req);
+int drbd_send_ov_result(struct drbd_peer_device *peer_device, sector_t sector,
+			int blksize, u64 block_id, enum ov_result result);
+int drbd_receiver(struct drbd_thread *thi);
+void drbd_unsuccessful_resync_request(struct drbd_peer_request *peer_req,
+				      bool failed);
+int drbd_send_out_of_sync_wf(struct drbd_work *w, int cancel);
+int drbd_flush_ack_wf(struct drbd_work *w, int unused);
+void drbd_send_ping_wf(struct work_struct *ws);
+void drbd_send_acks_wf(struct work_struct *ws);
+void drbd_send_peer_ack_wf(struct work_struct *ws);
+bool drbd_rs_c_min_rate_throttle(struct drbd_peer_device *peer_device);
+void drbd_verify_skipped_block(struct drbd_peer_device *peer_device,
+			       const sector_t sector, const unsigned int size);
+void drbd_conflict_submit_resync_request(struct drbd_peer_request *peer_req);
+void drbd_conflict_submit_peer_read(struct drbd_peer_request *peer_req);
+void drbd_conflict_submit_peer_write(struct drbd_peer_request *peer_req);
+int drbd_submit_peer_request(struct drbd_peer_request *peer_req);
+void drbd_cleanup_after_failed_submit_peer_write(struct drbd_peer_request *peer_req);
+void drbd_cleanup_peer_requests_wfa(struct drbd_device *device,
+				    struct list_head *cleanup);
+void drbd_remove_peer_req_interval(struct drbd_peer_request *peer_req);
+int drbd_free_peer_reqs(struct drbd_connection *connection,
+			struct list_head *list);
+struct drbd_peer_request *drbd_alloc_peer_req(struct drbd_peer_device *peer_device, gfp_t gfp_mask,
+					      size_t size, blk_opf_t opf);
+void drbd_free_peer_req(struct drbd_peer_request *peer_req);
+void drbd_peer_req_strip_bio(struct drbd_peer_request *peer_req);
+int drbd_connected(struct drbd_peer_device *peer_device);
+void conn_connect2(struct drbd_connection *connection);
+void wait_initial_states_received(struct drbd_connection *connection);
+void abort_connect(struct drbd_connection *connection);
+void drbd_print_cluster_wide_state_change(struct drbd_resource *resource,
+					  const char *message,
+					  unsigned int tid,
+					  unsigned int initiator_node_id,
+					  int target_node_id,
+					  union drbd_state mask,
+					  union drbd_state val);
+void apply_unacked_peer_requests(struct drbd_connection *connection);
+struct drbd_connection *drbd_connection_by_node_id(struct drbd_resource *resource,
+						   int node_id);
+struct drbd_connection *drbd_get_connection_by_node_id(struct drbd_resource *resource,
+						       int node_id);
+bool drbd_have_local_disk(struct drbd_resource *resource);
+enum drbd_state_rv drbd_support_2pc_resize(struct drbd_resource *resource);
+enum determine_dev_size
+drbd_commit_size_change(struct drbd_device *device, struct resize_parms *rs,
+			u64 nodes_to_reach);
+void drbd_try_to_get_resynced(struct drbd_device *device);
+void drbd_process_rs_discards(struct drbd_peer_device *peer_device,
+			      bool submit_all);
+void drbd_last_resync_request(struct drbd_peer_device *peer_device,
+			      bool submit_all);
+void drbd_init_connect_state(struct drbd_connection *connection);
+
+static inline sector_t drbd_get_capacity(struct block_device *bdev)
+{
+	return bdev ? bdev_nr_sectors(bdev) : 0;
+}
 
 /* sets the number of 512 byte sectors of our virtual device */
 void drbd_set_my_capacity(struct drbd_device *device, sector_t size);
@@ -1488,207 +2449,108 @@ void drbd_set_my_capacity(struct drbd_device *device, sector_t size);
 static inline void drbd_submit_bio_noacct(struct drbd_device *device,
 					     int fault_type, struct bio *bio)
 {
-	__release(local);
-	if (!bio->bi_bdev) {
-		drbd_err(device, "drbd_submit_bio_noacct: bio->bi_bdev == NULL\n");
+	if (drbd_insert_fault(device, fault_type)) {
 		bio->bi_status = BLK_STS_IOERR;
 		bio_endio(bio);
-		return;
-	}
-
-	if (drbd_insert_fault(device, fault_type))
-		bio_io_error(bio);
-	else
+	} else {
 		submit_bio_noacct(bio);
+	}
 }
 
 void drbd_bump_write_ordering(struct drbd_resource *resource, struct drbd_backing_dev *bdev,
 			      enum write_ordering_e wo);
 
+void twopc_timer_fn(struct timer_list *t);
+void connect_timer_fn(struct timer_list *t);
+
 /* drbd_proc.c */
 extern struct proc_dir_entry *drbd_proc;
 int drbd_seq_show(struct seq_file *seq, void *v);
 
 /* drbd_actlog.c */
-extern bool drbd_al_begin_io_prepare(struct drbd_device *device, struct drbd_interval *i);
-extern int drbd_al_begin_io_nonblock(struct drbd_device *device, struct drbd_interval *i);
-extern void drbd_al_begin_io_commit(struct drbd_device *device);
-extern bool drbd_al_begin_io_fastpath(struct drbd_device *device, struct drbd_interval *i);
-extern void drbd_al_begin_io(struct drbd_device *device, struct drbd_interval *i);
-extern void drbd_al_complete_io(struct drbd_device *device, struct drbd_interval *i);
-extern void drbd_rs_complete_io(struct drbd_device *device, sector_t sector);
-extern int drbd_rs_begin_io(struct drbd_device *device, sector_t sector);
-extern int drbd_try_rs_begin_io(struct drbd_peer_device *peer_device, sector_t sector);
-extern void drbd_rs_cancel_all(struct drbd_device *device);
-extern int drbd_rs_del_all(struct drbd_device *device);
-extern void drbd_rs_failed_io(struct drbd_peer_device *peer_device,
-		sector_t sector, int size);
-extern void drbd_advance_rs_marks(struct drbd_peer_device *peer_device, unsigned long still_to_go);
-
+bool drbd_al_try_lock(struct drbd_device *device);
+bool drbd_al_try_lock_for_transaction(struct drbd_device *device);
+int drbd_al_begin_io_nonblock(struct drbd_device *device,
+			      struct drbd_interval *i);
+void drbd_al_begin_io_commit(struct drbd_device *device);
+bool drbd_al_begin_io_fastpath(struct drbd_device *device,
+			       struct drbd_interval *i);
+bool drbd_al_complete_io(struct drbd_device *device, struct drbd_interval *i);
+void drbd_advance_rs_marks(struct drbd_peer_device *peer_device,
+			   unsigned long still_to_go);
+bool drbd_lazy_bitmap_update_due(struct drbd_peer_device *peer_device);
+unsigned long drbd_set_all_out_of_sync(struct drbd_device *device, sector_t sector,
+			     int size);
+unsigned long drbd_set_sync(struct drbd_device *device, sector_t sector, int size,
+		  unsigned long bits, unsigned long mask);
 enum update_sync_bits_mode { RECORD_RS_FAILED, SET_OUT_OF_SYNC, SET_IN_SYNC };
-extern int __drbd_change_sync(struct drbd_peer_device *peer_device, sector_t sector, int size,
-		enum update_sync_bits_mode mode);
+int __drbd_change_sync(struct drbd_peer_device *peer_device, sector_t sector,
+		       int size, enum update_sync_bits_mode mode);
 #define drbd_set_in_sync(peer_device, sector, size) \
 	__drbd_change_sync(peer_device, sector, size, SET_IN_SYNC)
 #define drbd_set_out_of_sync(peer_device, sector, size) \
 	__drbd_change_sync(peer_device, sector, size, SET_OUT_OF_SYNC)
 #define drbd_rs_failed_io(peer_device, sector, size) \
 	__drbd_change_sync(peer_device, sector, size, RECORD_RS_FAILED)
-extern void drbd_al_shrink(struct drbd_device *device);
-extern int drbd_al_initialize(struct drbd_device *, void *);
+void drbd_al_shrink(struct drbd_device *device);
+int drbd_al_initialize(struct drbd_device *device, void *buffer);
 
 /* drbd_nl.c */
-/* state info broadcast */
-struct sib_info {
-	enum drbd_state_info_bcast_reason sib_reason;
-	union {
-		struct {
-			char *helper_name;
-			unsigned helper_exit_code;
-		};
-		struct {
-			union drbd_state os;
-			union drbd_state ns;
-		};
-	};
-};
-void drbd_bcast_event(struct drbd_device *device, const struct sib_info *sib);
-
-extern int notify_resource_state(struct sk_buff *,
-				  unsigned int,
-				  struct drbd_resource *,
-				  struct resource_info *,
-				  enum drbd_notification_type);
-extern int notify_device_state(struct sk_buff *,
-				unsigned int,
-				struct drbd_device *,
-				struct device_info *,
-				enum drbd_notification_type);
-extern int notify_connection_state(struct sk_buff *,
-				    unsigned int,
-				    struct drbd_connection *,
-				    struct connection_info *,
-				    enum drbd_notification_type);
-extern int notify_peer_device_state(struct sk_buff *,
-				     unsigned int,
-				     struct drbd_peer_device *,
-				     struct peer_device_info *,
-				     enum drbd_notification_type);
-extern void notify_helper(enum drbd_notification_type, struct drbd_device *,
-			  struct drbd_connection *, const char *, int);
 
+extern struct mutex notification_mutex;
+extern atomic_t drbd_genl_seq;
+
+int notify_resource_state(struct sk_buff *skb, unsigned int seq,
+			  struct drbd_resource *resource,
+			  struct resource_info *resource_info,
+			  struct rename_resource_info *rename_resource_info,
+			  enum drbd_notification_type type);
+int notify_device_state(struct sk_buff *skb, unsigned int seq,
+			struct drbd_device *device,
+			struct device_info *device_info,
+			enum drbd_notification_type type);
+int notify_connection_state(struct sk_buff *skb, unsigned int seq,
+			    struct drbd_connection *connection,
+			    struct connection_info *connection_info,
+			    enum drbd_notification_type type);
+int notify_peer_device_state(struct sk_buff *skb, unsigned int seq,
+			     struct drbd_peer_device *peer_device,
+			     struct peer_device_info *peer_device_info,
+			     enum drbd_notification_type type);
+void notify_helper(enum drbd_notification_type type,
+		   struct drbd_device *device,
+		   struct drbd_connection *connection, const char *name,
+		   int status);
+int notify_path(struct drbd_connection *connection, struct drbd_path *path,
+		enum drbd_notification_type type);
+void drbd_broadcast_peer_device_state(struct drbd_peer_device *peer_device);
+
+sector_t drbd_local_max_size(struct drbd_device *device);
+int drbd_open_ro_count(struct drbd_resource *resource);
+
+void device_to_info(struct device_info *info, struct drbd_device *device);
+void device_state_change_to_info(struct device_info *info,
+				 struct drbd_device_state_change *state_change);
+void peer_device_state_change_to_info(struct peer_device_info *info,
+				      struct drbd_peer_device_state_change *state_change);
 /*
  * inline helper functions
  *************************/
 
-/* see also page_chain_add and friends in drbd_receiver.c */
-static inline struct page *page_chain_next(struct page *page)
-{
-	return (struct page *)page_private(page);
-}
-#define page_chain_for_each(page) \
-	for (; page && ({ prefetch(page_chain_next(page)); 1; }); \
-			page = page_chain_next(page))
-#define page_chain_for_each_safe(page, n) \
-	for (; page && ({ n = page_chain_next(page); 1; }); page = n)
-
-
-static inline union drbd_state drbd_read_state(struct drbd_device *device)
-{
-	struct drbd_resource *resource = device->resource;
-	union drbd_state rv;
-
-	rv.i = device->state.i;
-	rv.susp = resource->susp;
-	rv.susp_nod = resource->susp_nod;
-	rv.susp_fen = resource->susp_fen;
-
-	return rv;
-}
-
-enum drbd_force_detach_flags {
-	DRBD_READ_ERROR,
-	DRBD_WRITE_ERROR,
-	DRBD_META_IO_ERROR,
-	DRBD_FORCE_DETACH,
-};
-
-#define __drbd_chk_io_error(m,f) __drbd_chk_io_error_(m,f, __func__)
-static inline void __drbd_chk_io_error_(struct drbd_device *device,
-		enum drbd_force_detach_flags df,
-		const char *where)
-{
-	enum drbd_io_error_p ep;
-
-	rcu_read_lock();
-	ep = rcu_dereference(device->ldev->disk_conf)->on_io_error;
-	rcu_read_unlock();
-	switch (ep) {
-	case EP_PASS_ON: /* FIXME would this be better named "Ignore"? */
-		if (df == DRBD_READ_ERROR || df == DRBD_WRITE_ERROR) {
-			if (drbd_ratelimit())
-				drbd_err(device, "Local IO failed in %s.\n", where);
-			if (device->state.disk > D_INCONSISTENT)
-				_drbd_set_state(_NS(device, disk, D_INCONSISTENT), CS_HARD, NULL);
-			break;
-		}
-		fallthrough;	/* for DRBD_META_IO_ERROR or DRBD_FORCE_DETACH */
-	case EP_DETACH:
-	case EP_CALL_HELPER:
-		/* Remember whether we saw a READ or WRITE error.
-		 *
-		 * Recovery of the affected area for WRITE failure is covered
-		 * by the activity log.
-		 * READ errors may fall outside that area though. Certain READ
-		 * errors can be "healed" by writing good data to the affected
-		 * blocks, which triggers block re-allocation in lower layers.
-		 *
-		 * If we can not write the bitmap after a READ error,
-		 * we may need to trigger a full sync (see w_go_diskless()).
-		 *
-		 * Force-detach is not really an IO error, but rather a
-		 * desperate measure to try to deal with a completely
-		 * unresponsive lower level IO stack.
-		 * Still it should be treated as a WRITE error.
-		 *
-		 * Meta IO error is always WRITE error:
-		 * we read meta data only once during attach,
-		 * which will fail in case of errors.
-		 */
-		set_bit(WAS_IO_ERROR, &device->flags);
-		if (df == DRBD_READ_ERROR)
-			set_bit(WAS_READ_ERROR, &device->flags);
-		if (df == DRBD_FORCE_DETACH)
-			set_bit(FORCE_DETACH, &device->flags);
-		if (device->state.disk > D_FAILED) {
-			_drbd_set_state(_NS(device, disk, D_FAILED), CS_HARD, NULL);
-			drbd_err(device,
-				"Local IO failed in %s. Detaching...\n", where);
-		}
-		break;
-	}
-}
-
-/**
- * drbd_chk_io_error: Handle the on_io_error setting, should be called from all io completion handlers
- * @device:	 DRBD device.
- * @error:	 Error code passed to the IO completion callback
- * @forcedetach: Force detach. I.e. the error happened while accessing the meta data
- *
- * See also drbd_main.c:after_state_ch() if (os.disk > D_FAILED && ns.disk == D_FAILED)
+/*
+ * When a peer device has a replication state above L_OFF, it must be
+ * connected.  Otherwise, we report the connection state, which has values up
+ * to C_CONNECTED == L_OFF.
  */
-#define drbd_chk_io_error(m,e,f) drbd_chk_io_error_(m,e,f, __func__)
-static inline void drbd_chk_io_error_(struct drbd_device *device,
-	int error, enum drbd_force_detach_flags forcedetach, const char *where)
+static inline int combined_conn_state(struct drbd_peer_device *peer_device, enum which_state which)
 {
-	if (error) {
-		unsigned long flags;
-		spin_lock_irqsave(&device->resource->req_lock, flags);
-		__drbd_chk_io_error_(device, forcedetach, where);
-		spin_unlock_irqrestore(&device->resource->req_lock, flags);
-	}
-}
+	enum drbd_repl_state repl_state = peer_device->repl_state[which];
 
+	if (repl_state > L_OFF)
+		return repl_state;
+	else
+		return peer_device->connection->cstate[which];
+}
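+
+/* e.g. a peer device in L_SYNC_TARGET reports L_SYNC_TARGET; one with
+ * replication off reports its connection's cstate instead. */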
 
 /**
  * drbd_md_first_sector() - Returns the first sector number of the meta data area
@@ -1718,54 +2580,13 @@ static inline sector_t drbd_md_last_sector(struct drbd_backing_dev *bdev)
 	switch (bdev->md.meta_dev_idx) {
 	case DRBD_MD_INDEX_INTERNAL:
 	case DRBD_MD_INDEX_FLEX_INT:
-		return bdev->md.md_offset + MD_4kB_SECT -1;
+		return bdev->md.md_offset + (4096 >> 9) - 1;
 	case DRBD_MD_INDEX_FLEX_EXT:
 	default:
-		return bdev->md.md_offset + bdev->md.md_size_sect -1;
+		return bdev->md.md_offset + bdev->md.md_size_sect - 1;
 	}
 }
 
-/* Returns the number of 512 byte sectors of the device */
-static inline sector_t drbd_get_capacity(struct block_device *bdev)
-{
-	return bdev ? bdev_nr_sectors(bdev) : 0;
-}
-
-/**
- * drbd_get_max_capacity() - Returns the capacity we announce to out peer
- * @bdev:	Meta data block device.
- *
- * returns the capacity we announce to out peer.  we clip ourselves at the
- * various MAX_SECTORS, because if we don't, current implementation will
- * oops sooner or later
- */
-static inline sector_t drbd_get_max_capacity(struct drbd_backing_dev *bdev)
-{
-	sector_t s;
-
-	switch (bdev->md.meta_dev_idx) {
-	case DRBD_MD_INDEX_INTERNAL:
-	case DRBD_MD_INDEX_FLEX_INT:
-		s = drbd_get_capacity(bdev->backing_bdev)
-			? min_t(sector_t, DRBD_MAX_SECTORS_FLEX,
-				drbd_md_first_sector(bdev))
-			: 0;
-		break;
-	case DRBD_MD_INDEX_FLEX_EXT:
-		s = min_t(sector_t, DRBD_MAX_SECTORS_FLEX,
-				drbd_get_capacity(bdev->backing_bdev));
-		/* clip at maximum size the meta device can support */
-		s = min_t(sector_t, s,
-			BM_EXT_TO_SECT(bdev->md.md_size_sect
-				     - bdev->md.bm_offset));
-		break;
-	default:
-		s = min_t(sector_t, DRBD_MAX_SECTORS,
-				drbd_get_capacity(bdev->backing_bdev));
-	}
-	return s;
-}
-
 /**
  * drbd_md_ss() - Return the sector number of our meta data super block
  * @bdev:	Meta data block device.
@@ -1784,18 +2605,10 @@ static inline sector_t drbd_md_ss(struct drbd_backing_dev *bdev)
 		return (drbd_get_capacity(bdev->backing_bdev) & ~7ULL) - 8;
 
 	/* external, some index; this is the old fixed size layout */
-	return MD_128MB_SECT * bdev->md.meta_dev_idx;
+	return (128 << 20 >> 9) * bdev->md.meta_dev_idx;
 }
 
-static inline void
-drbd_queue_work(struct drbd_work_queue *q, struct drbd_work *w)
-{
-	unsigned long flags;
-	spin_lock_irqsave(&q->q_lock, flags);
-	list_add_tail(&w->list, &q->q);
-	spin_unlock_irqrestore(&q->q_lock, flags);
-	wake_up(&q->q_wait);
-}
+void drbd_queue_work(struct drbd_work_queue *, struct drbd_work *);
 
 static inline void
 drbd_queue_work_if_unqueued(struct drbd_work_queue *q, struct drbd_work *w)
@@ -1812,46 +2625,48 @@ static inline void
 drbd_device_post_work(struct drbd_device *device, int work_bit)
 {
 	if (!test_and_set_bit(work_bit, &device->flags)) {
-		struct drbd_connection *connection =
-			first_peer_device(device)->connection;
-		struct drbd_work_queue *q = &connection->sender_work;
-		if (!test_and_set_bit(DEVICE_WORK_PENDING, &connection->flags))
+		struct drbd_resource *resource = device->resource;
+		struct drbd_work_queue *q = &resource->work;
+		if (!test_and_set_bit(DEVICE_WORK_PENDING, &resource->flags))
 			wake_up(&q->q_wait);
 	}
 }
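+
+/* e.g. drbd_device_post_work(device, GO_DISKLESS) sets the work bit and
+ * wakes the resource work queue unless device work was already pending. */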
 
-extern void drbd_flush_workqueue(struct drbd_work_queue *work_queue);
-
-/* To get the ack_receiver out of the blocking network stack,
- * so it can change its sk_rcvtimeo from idle- to ping-timeout,
- * and send a ping, we need to send a signal.
- * Which signal we send is irrelevant. */
-static inline void wake_ack_receiver(struct drbd_connection *connection)
-{
-	struct task_struct *task = connection->ack_receiver.task;
-	if (task && get_t_state(&connection->ack_receiver) == RUNNING)
-		send_sig(SIGXCPU, task, 1);
-}
-
-static inline void request_ping(struct drbd_connection *connection)
+static inline void
+drbd_peer_device_post_work(struct drbd_peer_device *peer_device, int work_bit)
 {
-	set_bit(SEND_PING, &connection->flags);
-	wake_ack_receiver(connection);
+	if (!test_and_set_bit(work_bit, &peer_device->flags)) {
+		struct drbd_resource *resource = peer_device->device->resource;
+		struct drbd_work_queue *q = &resource->work;
+		if (!test_and_set_bit(PEER_DEVICE_WORK_PENDING, &resource->flags))
+			wake_up(&q->q_wait);
+	}
 }
 
-extern void *conn_prepare_command(struct drbd_connection *, struct drbd_socket *);
-extern void *drbd_prepare_command(struct drbd_peer_device *, struct drbd_socket *);
-extern int conn_send_command(struct drbd_connection *, struct drbd_socket *,
-			     enum drbd_packet, unsigned int, void *,
-			     unsigned int);
-extern int drbd_send_command(struct drbd_peer_device *, struct drbd_socket *,
-			     enum drbd_packet, unsigned int, void *,
-			     unsigned int);
-
-extern int drbd_send_ping(struct drbd_connection *connection);
-extern int drbd_send_ping_ack(struct drbd_connection *connection);
-extern int drbd_send_state_req(struct drbd_peer_device *, union drbd_state, union drbd_state);
-extern int conn_send_state_req(struct drbd_connection *, union drbd_state, union drbd_state);
+void drbd_flush_workqueue(struct drbd_work_queue *work_queue);
+void drbd_flush_workqueue_interruptible(struct drbd_device *device);
+
+void *__conn_prepare_command(struct drbd_connection *connection, int size,
+			     enum drbd_stream drbd_stream);
+void *conn_prepare_command(struct drbd_connection *connection, int size,
+			   enum drbd_stream drbd_stream);
+void *drbd_prepare_command(struct drbd_peer_device *peer_device, int size,
+			   enum drbd_stream drbd_stream);
+int __send_command(struct drbd_connection *connection, int vnr,
+		   enum drbd_packet cmd, int stream_and_flags);
+int send_command(struct drbd_connection *connection, int vnr,
+		 enum drbd_packet cmd, int stream_and_flags);
+int drbd_send_command(struct drbd_peer_device *peer_device,
+		      enum drbd_packet cmd, enum drbd_stream drbd_stream);
+
+int drbd_send_ping(struct drbd_connection *connection);
+int conn_send_state_req(struct drbd_connection *connection, int vnr,
+			enum drbd_packet cmd, union drbd_state mask,
+			union drbd_state val);
+int conn_send_twopc_request(struct drbd_connection *connection,
+			    struct twopc_request *request);
+int drbd_send_peer_ack(struct drbd_connection *connection, u64 mask,
+		       u64 dagtag_sector);
 
 static inline void drbd_thread_stop(struct drbd_thread *thi)
 {
@@ -1868,59 +2683,37 @@ static inline void drbd_thread_restart_nowait(struct drbd_thread *thi)
 	_drbd_thread_stop(thi, true, false);
 }
 
-/* counts how many answer packets packets we expect from our peer,
- * for either explicit application requests,
- * or implicit barrier packets as necessary.
- * increased:
- *  w_send_barrier
- *  _req_mod(req, QUEUE_FOR_NET_WRITE or QUEUE_FOR_NET_READ);
- *    it is much easier and equally valid to count what we queue for the
- *    worker, even before it actually was queued or send.
- *    (drbd_make_request_common; recovery path on read io-error)
- * decreased:
- *  got_BarrierAck (respective tl_clear, tl_clear_barrier)
- *  _req_mod(req, DATA_RECEIVED)
- *     [from receive_DataReply]
- *  _req_mod(req, WRITE_ACKED_BY_PEER or RECV_ACKED_BY_PEER or NEG_ACKED)
- *     [from got_BlockAck (P_WRITE_ACK, P_RECV_ACK)]
- *     for some reason it is NOT decreased in got_NegAck,
- *     but in the resulting cleanup code from report_params.
- *     we should try to remember the reason for that...
- *  _req_mod(req, SEND_FAILED or SEND_CANCELED)
- *  _req_mod(req, CONNECTION_LOST_WHILE_PENDING)
- *     [from tl_clear_barrier]
- */
-static inline void inc_ap_pending(struct drbd_device *device)
+static inline void inc_ap_pending(struct drbd_peer_device *peer_device)
 {
-	atomic_inc(&device->ap_pending_cnt);
+	atomic_inc(&peer_device->ap_pending_cnt);
 }
 
-#define dec_ap_pending(device) ((void)expect((device), __dec_ap_pending(device) >= 0))
-static inline int __dec_ap_pending(struct drbd_device *device)
+#define dec_ap_pending(peer_device) \
+	((void)expect((peer_device), __dec_ap_pending(peer_device) >= 0))
+static inline int __dec_ap_pending(struct drbd_peer_device *peer_device)
 {
-	int ap_pending_cnt = atomic_dec_return(&device->ap_pending_cnt);
-
+	int ap_pending_cnt = atomic_dec_return(&peer_device->ap_pending_cnt);
 	if (ap_pending_cnt == 0)
-		wake_up(&device->misc_wait);
+		wake_up(&peer_device->device->misc_wait);
 	return ap_pending_cnt;
 }
 
 /* counts how many resync-related answers we still expect from the peer
  *		     increase			decrease
- * C_SYNC_TARGET sends P_RS_DATA_REQUEST (and expects P_RS_DATA_REPLY)
- * C_SYNC_SOURCE sends P_RS_DATA_REPLY   (and expects P_WRITE_ACK with ID_SYNCER)
+ * L_SYNC_TARGET sends P_RS_DATA_REQUEST (and expects P_RS_DATA_REPLY)
+ * L_SYNC_SOURCE sends P_RS_DATA_REPLY   (and expects P_WRITE_ACK with ID_SYNCER)
  *					   (or P_NEG_ACK with ID_SYNCER)
  */
 static inline void inc_rs_pending(struct drbd_peer_device *peer_device)
 {
-	atomic_inc(&peer_device->device->rs_pending_cnt);
+	atomic_inc(&peer_device->rs_pending_cnt);
 }
 
 #define dec_rs_pending(peer_device) \
 	((void)expect((peer_device), __dec_rs_pending(peer_device) >= 0))
 static inline int __dec_rs_pending(struct drbd_peer_device *peer_device)
 {
-	return atomic_dec_return(&peer_device->device->rs_pending_cnt);
+	return atomic_dec_return(&peer_device->rs_pending_cnt);
 }
 
 /* counts how many answers we still need to send to the peer.
@@ -1929,42 +2722,82 @@ static inline int __dec_rs_pending(struct drbd_peer_device *peer_device)
  *			we need to send a P_RECV_ACK (proto B)
  *			or P_WRITE_ACK (proto C)
  *  receive_RSDataReply (recv_resync_read) we need to send a P_WRITE_ACK
- *  receive_DataRequest (receive_RSDataRequest) we need to send back P_DATA
+ *  receive_data_request etc we need to send back P_DATA
  *  receive_Barrier_*	we need to send a P_BARRIER_ACK
  */
-static inline void inc_unacked(struct drbd_device *device)
+static inline void inc_unacked(struct drbd_peer_device *peer_device)
+{
+	atomic_inc(&peer_device->unacked_cnt);
+}
+
+#define dec_unacked(peer_device) \
+	((void)expect(peer_device, __dec_unacked(peer_device) >= 0))
+static inline int __dec_unacked(struct drbd_peer_device *peer_device)
+{
+	return atomic_dec_return(&peer_device->unacked_cnt);
+}
+
+static inline bool repl_is_sync_target(enum drbd_repl_state repl_state)
 {
-	atomic_inc(&device->unacked_cnt);
+	return repl_state == L_SYNC_TARGET || repl_state == L_PAUSED_SYNC_T;
 }
 
-#define dec_unacked(device) ((void)expect(device, __dec_unacked(device) >= 0))
-static inline int __dec_unacked(struct drbd_device *device)
+static inline bool repl_is_sync_source(enum drbd_repl_state repl_state)
 {
-	return atomic_dec_return(&device->unacked_cnt);
+	return repl_state == L_SYNC_SOURCE || repl_state == L_PAUSED_SYNC_S;
 }
 
-#define sub_unacked(device, n) ((void)expect(device, __sub_unacked(device) >= 0))
-static inline int __sub_unacked(struct drbd_device *device, int n)
+static inline bool repl_is_sync(enum drbd_repl_state repl_state)
 {
-	return atomic_sub_return(n, &device->unacked_cnt);
+	return repl_is_sync_source(repl_state) ||
+		repl_is_sync_target(repl_state);
 }
 
-static inline bool is_sync_target_state(enum drbd_conns connection_state)
+static inline bool is_sync_target_state(struct drbd_peer_device *peer_device,
+					enum which_state which)
 {
-	return	connection_state == C_SYNC_TARGET ||
-		connection_state == C_PAUSED_SYNC_T;
+	return repl_is_sync_target(peer_device->repl_state[which]);
 }
 
-static inline bool is_sync_source_state(enum drbd_conns connection_state)
+static inline bool is_sync_source_state(struct drbd_peer_device *peer_device,
+					enum which_state which)
 {
-	return	connection_state == C_SYNC_SOURCE ||
-		connection_state == C_PAUSED_SYNC_S;
+	return repl_is_sync_source(peer_device->repl_state[which]);
 }
 
-static inline bool is_sync_state(enum drbd_conns connection_state)
+static inline bool is_sync_state(struct drbd_peer_device *peer_device,
+				 enum which_state which)
 {
-	return	is_sync_source_state(connection_state) ||
-		is_sync_target_state(connection_state);
+	return repl_is_sync(peer_device->repl_state[which]);
+}
+
+static inline bool is_verify_state(struct drbd_peer_device *peer_device,
+				   enum which_state which)
+{
+	enum drbd_repl_state repl_state = peer_device->repl_state[which];
+
+	return repl_state == L_VERIFY_S || repl_state == L_VERIFY_T;
+}
+
+static inline bool resync_susp_comb_dep(struct drbd_peer_device *peer_device, enum which_state which)
+{
+	struct drbd_device *device = peer_device->device;
+
+	return peer_device->resync_susp_dependency[which] || peer_device->resync_susp_other_c[which] ||
+		(is_sync_source_state(peer_device, which) && device->disk_state[which] <= D_INCONSISTENT);
+}
+
+static inline int
+drbd_insert_fault_conn(struct drbd_connection *connection, unsigned int type)
+{
+#ifdef CONFIG_DRBD_FAULT_INJECTION
+	int id = 0;
+	struct drbd_device *device = idr_get_next(&connection->resource->devices, &id);
+
+	return device && drbd_fault_rate &&
+		(drbd_enable_faults & (1 << type)) &&
+		_drbd_insert_fault(device, type);
+#else
+	return 0;
+#endif
 }
 
 /**
@@ -1974,14 +2807,11 @@ static inline bool is_sync_state(enum drbd_conns connection_state)
  *
  * You have to call put_ldev() when finished working with device->ldev.
  */
-#define get_ldev_if_state(_device, _min_state)				\
-	(_get_ldev_if_state((_device), (_min_state)) ?			\
-	 ({ __acquire(x); true; }) : false)
 #define get_ldev(_device) get_ldev_if_state(_device, D_INCONSISTENT)
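+
+/* Typical usage (an illustrative sketch):
+ *
+ *	if (get_ldev(device)) {
+ *		... work with device->ldev ...
+ *		put_ldev(device);
+ *	}
+ */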
 
 static inline void put_ldev(struct drbd_device *device)
 {
-	enum drbd_disk_state disk_state = device->state.disk;
+	enum drbd_disk_state disk_state = device->disk_state[NOW];
 	/* We must check the state *before* the atomic_dec becomes visible,
 	 * or we have a theoretical race where someone hitting zero,
 	 * while state still D_FAILED, will then see D_DISKLESS in the
@@ -1991,13 +2821,14 @@ static inline void put_ldev(struct drbd_device *device)
 	/* This may be called from some endio handler,
 	 * so we must not sleep here. */
 
-	__release(local);
 	D_ASSERT(device, i >= 0);
 	if (i == 0) {
-		if (disk_state == D_DISKLESS)
+		if (disk_state == D_DISKLESS) {
 			/* even internal references gone, safe to destroy */
-			drbd_device_post_work(device, DESTROY_DISK);
-		if (disk_state == D_FAILED)
+			kref_get(&device->kref);
+			schedule_work(&device->ldev_destroy_work);
+		}
+		if (disk_state == D_FAILED || disk_state == D_DETACHING)
 			/* all application IO references gone. */
 			if (!test_and_set_bit(GOING_DISKLESS, &device->flags))
 				drbd_device_post_work(device, GO_DISKLESS);
@@ -2005,122 +2836,53 @@ static inline void put_ldev(struct drbd_device *device)
 	}
 }
 
-#ifndef __CHECKER__
-static inline int _get_ldev_if_state(struct drbd_device *device, enum drbd_disk_state mins)
+static inline int get_ldev_if_state(struct drbd_device *device, enum drbd_disk_state mins)
 {
 	int io_allowed;
 
 	/* never get a reference while D_DISKLESS */
-	if (device->state.disk == D_DISKLESS)
+	if (device->disk_state[NOW] == D_DISKLESS)
 		return 0;
 
 	atomic_inc(&device->local_cnt);
-	io_allowed = (device->state.disk >= mins);
+	io_allowed = (device->disk_state[NOW] >= mins);
 	if (!io_allowed)
 		put_ldev(device);
 	return io_allowed;
 }
-#else
-extern int _get_ldev_if_state(struct drbd_device *device, enum drbd_disk_state mins);
-#endif
 
-/* this throttles on-the-fly application requests
- * according to max_buffers settings;
- * maybe re-implement using semaphores? */
-static inline int drbd_get_max_buffers(struct drbd_device *device)
-{
-	struct net_conf *nc;
-	int mxb;
+void drbd_queue_pending_bitmap_work(struct drbd_device *device);
 
-	rcu_read_lock();
-	nc = rcu_dereference(first_peer_device(device)->connection->net_conf);
-	mxb = nc ? nc->max_buffers : 1000000;  /* arbitrary limit on open requests */
-	rcu_read_unlock();
-
-	return mxb;
-}
-
-static inline int drbd_state_is_stable(struct drbd_device *device)
+/* rw = READ or WRITE (0 or 1); nothing else. */
+static inline void dec_ap_bio(struct drbd_device *device, int rw)
 {
-	union drbd_dev_state s = device->state;
-
-	/* DO NOT add a default clause, we want the compiler to warn us
-	 * for any newly introduced state we may have forgotten to add here */
-
-	switch ((enum drbd_conns)s.conn) {
-	/* new io only accepted when there is no connection, ... */
-	case C_STANDALONE:
-	case C_WF_CONNECTION:
-	/* ... or there is a well established connection. */
-	case C_CONNECTED:
-	case C_SYNC_SOURCE:
-	case C_SYNC_TARGET:
-	case C_VERIFY_S:
-	case C_VERIFY_T:
-	case C_PAUSED_SYNC_S:
-	case C_PAUSED_SYNC_T:
-	case C_AHEAD:
-	case C_BEHIND:
-		/* transitional states, IO allowed */
-	case C_DISCONNECTING:
-	case C_UNCONNECTED:
-	case C_TIMEOUT:
-	case C_BROKEN_PIPE:
-	case C_NETWORK_FAILURE:
-	case C_PROTOCOL_ERROR:
-	case C_TEAR_DOWN:
-	case C_WF_REPORT_PARAMS:
-	case C_STARTING_SYNC_S:
-	case C_STARTING_SYNC_T:
-		break;
-
-		/* Allow IO in BM exchange states with new protocols */
-	case C_WF_BITMAP_S:
-		if (first_peer_device(device)->connection->agreed_pro_version < 96)
-			return 0;
-		break;
+	unsigned int nr_requests = device->resource->res_opts.nr_requests;
+	int ap_bio = atomic_dec_return(&device->ap_bio_cnt[rw]);
 
-		/* no new io accepted in these states */
-	case C_WF_BITMAP_T:
-	case C_WF_SYNC_UUID:
-	case C_MASK:
-		/* not "stable" */
-		return 0;
-	}
-
-	switch ((enum drbd_disk_state)s.disk) {
-	case D_DISKLESS:
-	case D_INCONSISTENT:
-	case D_OUTDATED:
-	case D_CONSISTENT:
-	case D_UP_TO_DATE:
-	case D_FAILED:
-		/* disk state is stable as well. */
-		break;
+	D_ASSERT(device, ap_bio >= 0);
 
-	/* no new io accepted during transitional states */
-	case D_ATTACHING:
-	case D_NEGOTIATING:
-	case D_UNKNOWN:
-	case D_MASK:
-		/* not "stable" */
-		return 0;
-	}
+	/* Checking list_empty outside the lock is OK.  Worst case it queues
+	 * nothing because someone else just now did.  During list_add, a
+	 * refcount on ap_bio_cnt[WRITE] is held, so the bitmap work will be
+	 * queued when that is released if we miss it here.
+	 * Checking pending_bitmap_work.n instead would not be correct;
+	 * it has a different lifetime. */
+	if (ap_bio == 0 && rw == WRITE && !list_empty(&device->pending_bitmap_work.q))
+		drbd_queue_pending_bitmap_work(device);
 
-	return 1;
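+	/* Wake waiters when the last request completes, or when we have just
+	 * dropped back below the nr_requests limit. */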
+	if (ap_bio == 0 || ap_bio == nr_requests - 1)
+		wake_up(&device->misc_wait);
 }
 
-static inline int drbd_suspended(struct drbd_device *device)
+static inline bool drbd_suspended(struct drbd_device *device)
 {
-	struct drbd_resource *resource = device->resource;
-
-	return resource->susp || resource->susp_fen || resource->susp_nod;
+	return device->resource->cached_susp;
 }
 
 static inline bool may_inc_ap_bio(struct drbd_device *device)
 {
-	int mxb = drbd_get_max_buffers(device);
-
+	if (device->cached_err_io)
+		return true;
 	if (drbd_suspended(device))
 		return false;
 	if (atomic_read(&device->suspend_cnt))
@@ -2131,76 +2893,45 @@ static inline bool may_inc_ap_bio(struct drbd_device *device)
 	 * to start during "stable" states. */
 
 	/* no new io accepted when attaching or detaching the disk */
-	if (!drbd_state_is_stable(device))
+	if (device->cached_state_unstable)
 		return false;
 
-	/* since some older kernels don't have atomic_add_unless,
-	 * and we are within the spinlock anyways, we have this workaround.  */
-	if (atomic_read(&device->ap_bio_cnt) > mxb)
-		return false;
-	if (test_bit(BITMAP_IO, &device->flags))
+	if (atomic_read(&device->pending_bitmap_work.n))
 		return false;
 	return true;
 }
 
-static inline bool inc_ap_bio_cond(struct drbd_device *device)
+static inline u64 drbd_current_uuid(struct drbd_device *device)
 {
-	bool rv = false;
-
-	spin_lock_irq(&device->resource->req_lock);
-	rv = may_inc_ap_bio(device);
-	if (rv)
-		atomic_inc(&device->ap_bio_cnt);
-	spin_unlock_irq(&device->resource->req_lock);
-
-	return rv;
+	if (!device->ldev)
+		return 0;
+	return device->ldev->md.current_uuid;
 }
 
-static inline void inc_ap_bio(struct drbd_device *device)
+static inline bool verify_can_do_stop_sector(struct drbd_peer_device *peer_device)
 {
-	/* we wait here
-	 *    as long as the device is suspended
-	 *    until the bitmap is no longer on the fly during connection
-	 *    handshake as long as we would exceed the max_buffer limit.
-	 *
-	 * to avoid races with the reconnect code,
-	 * we need to atomic_inc within the spinlock. */
-
-	wait_event(device->misc_wait, inc_ap_bio_cond(device));
+	return peer_device->connection->agreed_pro_version >= 97 &&
+		peer_device->connection->agreed_pro_version != 100;
 }
 
-static inline void dec_ap_bio(struct drbd_device *device)
+static inline u64 drbd_bitmap_uuid(struct drbd_peer_device *peer_device)
 {
-	int mxb = drbd_get_max_buffers(device);
-	int ap_bio = atomic_dec_return(&device->ap_bio_cnt);
-
-	D_ASSERT(device, ap_bio >= 0);
+	struct drbd_device *device = peer_device->device;
+	struct drbd_peer_md *peer_md;
 
-	if (ap_bio == 0 && test_bit(BITMAP_IO, &device->flags)) {
-		if (!test_and_set_bit(BITMAP_IO_QUEUED, &device->flags))
-			drbd_queue_work(&first_peer_device(device)->
-				connection->sender_work,
-				&device->bm_io_work.w);
-	}
+	if (!device->ldev)
+		return 0;
 
-	/* this currently does wake_up for every dec_ap_bio!
-	 * maybe rather introduce some type of hysteresis?
-	 * e.g. (ap_bio == mxb/2 || ap_bio == 0) ? */
-	if (ap_bio < mxb)
-		wake_up(&device->misc_wait);
+	peer_md = &device->ldev->md.peers[peer_device->node_id];
+	return peer_md->bitmap_uuid;
 }
 
-static inline bool verify_can_do_stop_sector(struct drbd_device *device)
+static inline u64 drbd_history_uuid(struct drbd_device *device, int i)
 {
-	return first_peer_device(device)->connection->agreed_pro_version >= 97 &&
-		first_peer_device(device)->connection->agreed_pro_version != 100;
-}
+	if (!device->ldev || i >= ARRAY_SIZE(device->ldev->md.history_uuids))
+		return 0;
 
-static inline int drbd_set_ed_uuid(struct drbd_device *device, u64 val)
-{
-	int changed = device->ed_uuid != val;
-	device->ed_uuid = val;
-	return changed;
+	return device->ldev->md.history_uuids[i];
 }
 
 static inline int drbd_queue_order_type(struct drbd_device *device)
@@ -2219,4 +2950,215 @@ static inline struct drbd_connection *first_connection(struct drbd_resource *res
 				struct drbd_connection, connections);
 }
 
+static inline struct net *drbd_net_assigned_to_connection(struct drbd_connection *connection)
+{
+	struct drbd_path *path;
+	struct net *net;
+
+	rcu_read_lock();
+	path = list_first_or_null_rcu(&connection->transport.paths, struct drbd_path, list);
+	net = path ? path->net : NULL;
+	rcu_read_unlock();
+
+	return net;
+}
+
+#define NODE_MASK(id) ((u64)1 << (id))
+
+static inline void drbd_list_del_resync_request(struct drbd_peer_request *peer_req)
+{
+	peer_req->flags &= ~EE_ON_RECV_ORDER;
+	list_del(&peer_req->recv_order);
+
+	if (peer_req == peer_req->peer_device->received_last)
+		peer_req->peer_device->received_last = NULL;
+
+	if (peer_req == peer_req->peer_device->discard_last)
+		peer_req->peer_device->discard_last = NULL;
+}
+
+/*
+ * drbd_interval_same_peer - determine whether "interval" is for the same peer as "i"
+ *
+ * "i" must be an interval corresponding to a drbd_peer_request.
+ */
+static inline bool drbd_interval_same_peer(struct drbd_interval *interval, struct drbd_interval *i)
+{
+	struct drbd_peer_request *interval_peer_req, *i_peer_req;
+
+	/* Ensure we only call "container_of" if it is actually a peer request. */
+	if (interval->type == INTERVAL_LOCAL_WRITE ||
+			interval->type == INTERVAL_LOCAL_READ ||
+			interval->type == INTERVAL_PEERS_IN_SYNC_LOCK)
+		return false;
+
+	interval_peer_req = container_of(interval, struct drbd_peer_request, i);
+	i_peer_req = container_of(i, struct drbd_peer_request, i);
+	return interval_peer_req->peer_device == i_peer_req->peer_device;
+}
+
+/*
+ * drbd_should_defer_to_resync - determine whether "interval" should defer to
+ * "i" in order to ensure that resync makes progress
+ */
+static inline bool drbd_should_defer_to_resync(struct drbd_interval *interval, struct drbd_interval *i)
+{
+	if (!drbd_interval_is_resync(i))
+		return false;
+
+	/* Always defer to resync requests once the reply has been received.
+	 * These just need to wait for conflicting local I/O to complete. This
+	 * is necessary to ensure that resync replies received before
+	 * application writes are submitted first, so that the resync writes do
+	 * not overwrite newer data. */
+	if (test_bit(INTERVAL_RECEIVED, &i->flags))
+		return true;
+
+	/* If we are still waiting for a reply from the peer, only defer to the
+	 * request if it is towards a different peer. The exclusivity between
+	 * resync requests and application writes from another peer is
+	 * necessary to avoid overwriting newer data with older in the resync.
+	 * When the data in both cases is coming from the same peer, this is
+	 * not necessary. The peer ensures that the data stream is correctly
+	 * ordered. */
+	return !drbd_interval_same_peer(interval, i);
+}
+
+/*
+ * drbd_should_defer_to_interval - determine whether "interval" should defer to "i"
+ */
+static inline bool drbd_should_defer_to_interval(struct drbd_interval *interval,
+		struct drbd_interval *i, bool defer_to_resync)
+{
+	if (test_bit(INTERVAL_SUBMITTED, &i->flags))
+		return true;
+
+	if (defer_to_resync && drbd_should_defer_to_resync(interval, i))
+		return true;
+
+	/*
+	 * We do not send conflicting resync requests because that causes
+	 * difficulties associating the replies to the requests.
+	 */
+	if (interval->type == INTERVAL_RESYNC_WRITE &&
+			i->type == INTERVAL_RESYNC_WRITE &&
+			test_bit(INTERVAL_READY_TO_SEND, &i->flags))
+		return true;
+
+	return false;
+}
+
+/* Find conflicts at application level instead of at disk level. */
+#define CONFLICT_FLAG_APPLICATION_ONLY (1 << 0)
+
+/*
+ * Ignore peer writes from the peer that this request relates to. This is only
+ * used for determining whether to send a request. It must not be used for
+ * determining whether to submit a request, because that would allow concurrent
+ * writes to the backing disk.
+ */
+#define CONFLICT_FLAG_IGNORE_SAME_PEER (1 << 1)
+
+/*
+ * drbd_find_conflict - find conflicting interval, if any
+ */
+static inline struct drbd_interval *drbd_find_conflict(struct drbd_device *device,
+		struct drbd_interval *interval, unsigned long flags)
+{
+	struct drbd_interval *i;
+	sector_t sector = interval->sector;
+	int size = interval->size;
+	bool application_only = flags & CONFLICT_FLAG_APPLICATION_ONLY;
+	bool defer_to_resync =
+		(interval->type == INTERVAL_LOCAL_WRITE || interval->type == INTERVAL_PEER_WRITE) &&
+		!application_only;
+	bool exclusive_until_completed = interval->type == INTERVAL_LOCAL_WRITE || application_only;
+	bool ignore_same_peer = flags & CONFLICT_FLAG_IGNORE_SAME_PEER;
+
+	lockdep_assert_held(&device->interval_lock);
+
+	drbd_for_each_overlap(i, &device->requests, sector, size) {
+		/* Ignore the interval itself. */
+		if (i == interval)
+			continue;
+
+		if (exclusive_until_completed) {
+			/* Ignore, if already completed to upper layers. */
+			if (test_bit(INTERVAL_COMPLETED, &i->flags))
+				continue;
+		} else {
+			/* Ignore, if already completed by the backing disk. */
+			if (test_bit(INTERVAL_BACKING_COMPLETED, &i->flags))
+				continue;
+		}
+
+		/* Ignore, if there is no need to defer to it. */
+		if (!drbd_should_defer_to_interval(interval, i, defer_to_resync))
+			continue;
+
+		/*
+		 * Ignore peer writes from the peer that this request relates
+		 * to, if requested.
+		 */
+		if (ignore_same_peer && i->type == INTERVAL_PEER_WRITE && drbd_interval_same_peer(interval, i))
+			continue;
+
+		if (unlikely(application_only)) {
+			/* Ignore, if not an application request. */
+			if (!drbd_interval_is_application(i))
+				continue;
+		}
+
+		if (drbd_interval_is_write(interval)) {
+			/*
+			 * Mark verify requests as conflicting rather than
+			 * treating them as conflicts for us.
+			 */
+			if (drbd_interval_is_verify(i)) {
+				set_bit(INTERVAL_CONFLICT, &i->flags);
+				continue;
+			}
+		} else {
+			/* Ignore other resync reads. */
+			if (i->type == INTERVAL_RESYNC_READ)
+				continue;
+
+			/* Ignore verify requests, since they are always reads. */
+			if (drbd_interval_is_verify(i))
+				continue;
+
+			/* Ignore peers-in-sync intervals, since they are always reads. */
+			if (i->type == INTERVAL_PEERS_IN_SYNC_LOCK)
+				continue;
+		}
+
+		dynamic_drbd_dbg(device,
+				"%s at %llus+%u conflicts with %s at %llus+%u\n",
+				drbd_interval_type_str(interval),
+				(unsigned long long) sector, size,
+				drbd_interval_type_str(i),
+				(unsigned long long) i->sector, i->size);
+
+		break;
+	}
+
+	return i;
+}
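+
+/* Illustrative caller pattern (a sketch; assumes interval_lock is taken
+ * as an irq-disabling spinlock):
+ *
+ *	spin_lock_irq(&device->interval_lock);
+ *	conflict = drbd_find_conflict(device, &peer_req->i, 0);
+ *	if (!conflict)
+ *		set_bit(INTERVAL_SUBMITTED, &peer_req->i.flags);
+ *	spin_unlock_irq(&device->interval_lock);
+ */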
+
+#ifdef CONFIG_DRBD_TIMING_STATS
+#define ktime_aggregate_delta(D, ST, M) (D->M = ktime_add(D->M, ktime_sub(ktime_get(), ST)))
+#define ktime_aggregate(D, R, M) (D->M = ktime_add(D->M, ktime_sub(R->M, R->start_kt)))
+#define ktime_aggregate_pd(P, N, R, M) (P->M = ktime_add(P->M, ktime_sub(R->M[N], R->start_kt)))
+#define ktime_get_accounting(V) (V = ktime_get())
+#define ktime_get_accounting_assign(V, T) (V = T)
+#define ktime_var_for_accounting(V) ktime_t V = ktime_get()
+#else
+#define ktime_aggregate_delta(D, ST, M)
+#define ktime_aggregate(D, R, M)
+#define ktime_aggregate_pd(P, N, R, M)
+#define ktime_get_accounting(V)
+#define ktime_get_accounting_assign(V, T)
+#define ktime_var_for_accounting(V)
+#endif
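+
+/* Illustrative use of the accounting helpers (a sketch; start_kt and the
+ * aggregated ktime_t member are hypothetical):
+ *
+ *	ktime_var_for_accounting(start_kt);
+ *	... do the work being accounted ...
+ *	ktime_aggregate_delta(device, start_kt, some_kt_member);
+ *
+ * With CONFIG_DRBD_TIMING_STATS disabled, all of this compiles away.
+ */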
+
 #endif
diff --git a/drivers/block/drbd/drbd_interval.h b/drivers/block/drbd/drbd_interval.h
index 5d3213b81eed..a6ef04f89885 100644
--- a/drivers/block/drbd/drbd_interval.h
+++ b/drivers/block/drbd/drbd_interval.h
@@ -5,20 +5,149 @@
 #include <linux/types.h>
 #include <linux/rbtree.h>
 
+/* Interval types stored directly in drbd_interval so that we can handle
+ * conflicts without having to inspect the containing object. The value 0 is
+ * reserved for uninitialized intervals. */
+enum drbd_interval_type {
+	INTERVAL_LOCAL_WRITE = 1,
+	INTERVAL_PEER_WRITE,
+	INTERVAL_LOCAL_READ,
+	INTERVAL_PEER_READ,
+	INTERVAL_RESYNC_WRITE, /* L_SYNC_TARGET */
+	INTERVAL_RESYNC_READ, /* L_SYNC_SOURCE */
+	INTERVAL_OV_READ_SOURCE, /* L_VERIFY_S */
+	INTERVAL_OV_READ_TARGET, /* L_VERIFY_T */
+	INTERVAL_PEERS_IN_SYNC_LOCK,
+};
+
+#define INTERVAL_TYPE_MASK(type) (1 << (type))
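+
+/* e.g. INTERVAL_TYPE_MASK(INTERVAL_PEER_WRITE) selects peer writes when
+ * testing against a bitmask of interval types. */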
+
+enum drbd_interval_flags {
+	/* Whether this peer request may be sent. */
+	INTERVAL_READY_TO_SEND,
+
+	/*
+	 * Used for resync reads. This flag is set after sending and is used to
+	 * manage the lifetime of the request. When INTERVAL_SENT is not set,
+	 * the sending path still has a reference to the request.
+	 */
+	INTERVAL_SENT,
+
+	/*
+	 * Whether this peer request has been received yet.
+	 *
+	 * For resync reads, this flag is set when the corresponding ack has
+	 * been received and is used to manage the lifetime of the request.
+	 * When INTERVAL_RECEIVED is not set, the receiving path has a
+	 * reference to the request. This reference counting is protected by
+	 * peer_reqs_lock.
+	 */
+	INTERVAL_RECEIVED,
+
+	/* Whether this has been queued after conflict. */
+	INTERVAL_SUBMIT_CONFLICT_QUEUED,
+
+	/* Whether this has been submitted already. */
+	INTERVAL_SUBMITTED,
+
+	/* Whether the local backing device bio is complete. */
+	INTERVAL_BACKING_COMPLETED,
+
+	/* This has been completed already; ignore for conflict detection. */
+	INTERVAL_COMPLETED,
+
+	/* For verify requests: whether this has conflicts. */
+	INTERVAL_CONFLICT,
+
+	/* For resync requests: whether this was canceled while waiting for conflict resolution. */
+	INTERVAL_CANCELED,
+
+	/*
+	 * For local requests: whether this is done.
+	 *
+	 * Included here instead of in local_rq_state to allow access with
+	 * atomic bit operations instead of taking rq_lock.
+	 */
+	INTERVAL_DONE,
+
+	/*
+	 * For local requests: when we put the AL extent for this request, it
+	 * was the last in that extent.
+	 *
+	 * Included here instead of in local_rq_state to allow access with
+	 * atomic bit operations instead of taking rq_lock.
+	 */
+	INTERVAL_AL_EXTENT_LAST,
+};
+
+/* Intervals are used to manage conflicts between application requests and
+ * various internal requests, so that the disk content is deterministic.
+ *
+ * The requests progress through states indicated by successively setting the
+ * flags "INTERVAL_SUBMITTED", "INTERVAL_BACKING_COMPLETED" and
+ * "INTERVAL_COMPLETED".
+ *
+ * Application and resync requests wait to be submitted until any conflicts
+ * that are "INTERVAL_SUBMITTED" have reached "INTERVAL_BACKING_COMPLETED"
+ * state. Application requests also wait for conflicting application requests
+ * to ensure consistency between the replicated copies. In addition,
+ * application requests wait for resync requests that have not yet been
+ * submitted. Resync takes priority over application writes in this way because
+ * a resync locks each block at most once, so it will finish at some point,
+ * whereas the application may repeatedly write the same blocks, which would
+ * potentially lock out resync indefinitely.
+ *
+ * Resync read requests do not conflict with each other, but they are
+ * nevertheless mutually exclusive with writes, so that the bitmap can be
+ * updated reliably.
+ *
+ * Verify requests do not wait for other requests. If there are conflicts, they
+ * are simply cancelled. Furthermore, they do not lock out other requests;
+ * instead they are simply marked as having conflicts and ignored.
+ *
+ * Application write request intervals are retained even when they are
+ * "INTERVAL_COMPLETED", so that they can be used to look up remote replies
+ * that are still pending.
+ */
 struct drbd_interval {
 	struct rb_node rb;
 	sector_t sector;		/* start sector of the interval */
 	sector_t end;			/* highest interval end in subtree */
 	unsigned int size;		/* size in bytes */
-	unsigned int local:1		/* local or remote request? */;
-	unsigned int waiting:1;		/* someone is waiting for completion */
-	unsigned int completed:1;	/* this has been completed already;
-					 * ignore for conflict detection */
+	enum drbd_interval_type type;	/* what type of interval this is */
+	unsigned long flags;
 
 	/* to resume a partially successful drbd_al_begin_io_nonblock(); */
 	unsigned int partially_in_al_next_enr;
 };
 
+static inline bool drbd_interval_is_application(struct drbd_interval *i)
+{
+	return i->type == INTERVAL_LOCAL_WRITE || i->type == INTERVAL_PEER_WRITE ||
+		i->type == INTERVAL_LOCAL_READ || i->type == INTERVAL_PEER_READ;
+}
+
+static inline bool drbd_interval_is_write(struct drbd_interval *i)
+{
+	return i->type == INTERVAL_LOCAL_WRITE || i->type == INTERVAL_PEER_WRITE ||
+		i->type == INTERVAL_RESYNC_WRITE;
+}
+
+static inline bool drbd_interval_is_resync(struct drbd_interval *i)
+{
+	return i->type == INTERVAL_RESYNC_WRITE || i->type == INTERVAL_RESYNC_READ;
+}
+
+static inline bool drbd_interval_is_verify(struct drbd_interval *i)
+{
+	return i->type == INTERVAL_OV_READ_SOURCE || i->type == INTERVAL_OV_READ_TARGET;
+}
+
+static inline bool drbd_interval_is_local(struct drbd_interval *i)
+{
+	return i->type == INTERVAL_LOCAL_READ || i->type == INTERVAL_LOCAL_WRITE;
+}
+
 static inline void drbd_clear_interval(struct drbd_interval *i)
 {
 	RB_CLEAR_NODE(&i->rb);
@@ -29,14 +158,17 @@ static inline bool drbd_interval_empty(struct drbd_interval *i)
 	return RB_EMPTY_NODE(&i->rb);
 }
 
-extern bool drbd_insert_interval(struct rb_root *, struct drbd_interval *);
-extern bool drbd_contains_interval(struct rb_root *, sector_t,
-				   struct drbd_interval *);
-extern void drbd_remove_interval(struct rb_root *, struct drbd_interval *);
-extern struct drbd_interval *drbd_find_overlap(struct rb_root *, sector_t,
-					unsigned int);
-extern struct drbd_interval *drbd_next_overlap(struct drbd_interval *, sector_t,
-					unsigned int);
+const char *drbd_interval_type_str(struct drbd_interval *i);
+bool drbd_insert_interval(struct rb_root *root, struct drbd_interval *this);
+bool drbd_contains_interval(struct rb_root *root, sector_t sector,
+			    struct drbd_interval *interval);
+void drbd_remove_interval(struct rb_root *root, struct drbd_interval *this);
+struct drbd_interval *drbd_find_overlap(struct rb_root *root, sector_t sector,
+					unsigned int size);
+struct drbd_interval *drbd_next_overlap(struct drbd_interval *i,
+					sector_t sector, unsigned int size);
+void drbd_update_interval_size(struct drbd_interval *this,
+			       unsigned int new_size);
 
 #define drbd_for_each_overlap(i, root, sector, size)		\
 	for (i = drbd_find_overlap(root, sector, size);		\
diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
index e201f0087a0f..463f57d33204 100644
--- a/drivers/block/drbd/drbd_nl.c
+++ b/drivers/block/drbd/drbd_nl.c
@@ -73,7 +73,7 @@ int drbd_adm_dump_peer_devices(struct sk_buff *skb, struct netlink_callback *cb)
 int drbd_adm_dump_peer_devices_done(struct netlink_callback *cb);
 int drbd_adm_get_initial_state(struct sk_buff *skb, struct netlink_callback *cb);
 
-#include <linux/drbd_genl_api.h>
+#include "drbd_genl_api.h"
 #include "drbd_nla.h"
 #include <linux/genl_magic_func.h>
 
diff --git a/drivers/block/drbd/drbd_nla.c b/drivers/block/drbd/drbd_nla.c
index df0d241d3f6a..2dd6dc99823a 100644
--- a/drivers/block/drbd/drbd_nla.c
+++ b/drivers/block/drbd/drbd_nla.c
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0-only
 #include <linux/kernel.h>
 #include <net/netlink.h>
-#include <linux/drbd_genl_api.h>
+#include "drbd_genl_api.h"
 #include "drbd_nla.h"
 
 static int drbd_nla_check_mandatory(int maxtype, struct nlattr *nla)
diff --git a/drivers/block/drbd/drbd_nla.h b/drivers/block/drbd/drbd_nla.h
index d3555df0d353..4463657c020d 100644
--- a/drivers/block/drbd/drbd_nla.h
+++ b/drivers/block/drbd/drbd_nla.h
@@ -2,8 +2,9 @@
 #ifndef __DRBD_NLA_H
 #define __DRBD_NLA_H
 
-extern int drbd_nla_parse_nested(struct nlattr *tb[], int maxtype, struct nlattr *nla,
-				 const struct nla_policy *policy);
-extern struct nlattr *drbd_nla_find_nested(int maxtype, struct nlattr *nla, int attrtype);
+int drbd_nla_parse_nested(struct nlattr *tb[], int maxtype,
+			  struct nlattr *nla, const struct nla_policy *policy);
+struct nlattr *drbd_nla_find_nested(int maxtype, struct nlattr *nla,
+				    int attrtype);
 
 #endif  /* __DRBD_NLA_H */
diff --git a/drivers/block/drbd/drbd_polymorph_printk.h b/drivers/block/drbd/drbd_polymorph_printk.h
index 8e0082d139ba..7b0873d2980e 100644
--- a/drivers/block/drbd/drbd_polymorph_printk.h
+++ b/drivers/block/drbd/drbd_polymorph_printk.h
@@ -11,104 +11,188 @@
 #define DYNAMIC_DEBUG_BRANCH(D) false
 #endif
 
+#define __drbd_printk(level, fmt, args...)				\
+	printk(level fmt, ## args)
+#define __drbd_dyn_dbg(descriptor, fmt, args...)			\
+	__dynamic_pr_debug(descriptor, fmt, ## args)
+
+#define ___drbd_printk_device(prmacro, rlt, device, lvl_or_desc, fmt, args...)\
+({									\
+	const struct drbd_device *__d =					\
+		(const struct drbd_device *)(device);			\
+	const struct drbd_resource *__r = __d->resource;		\
+	const char *__unregistered = "";				\
+	if (test_bit(UNREGISTERED, &__d->flags))			\
+		__unregistered = "/unregistered/";			\
+	if (drbd_device_ratelimit(__d, rlt))				\
+		prmacro(lvl_or_desc, "drbd %s%s/%u drbd%u: " fmt,	\
+			__unregistered, __r->name, __d->vnr, __d->minor,\
+			## args);					\
+})
+
+#define ___drbd_printk_resource(prmacro, rlt, resource, lvl_or_desc, fmt, args...)\
+({									\
+	const struct drbd_resource *__r =				\
+		(const struct drbd_resource *)(resource);		\
+	const char *__unregistered = "";				\
+	if (test_bit(R_UNREGISTERED, &__r->flags))			\
+		__unregistered = "/unregistered/";			\
+	if (drbd_resource_ratelimit(__r, rlt))				\
+		prmacro(lvl_or_desc, "drbd %s%s: " fmt,			\
+			__unregistered, __r->name, ## args);		\
+})
+
+/* As long as the connection is still "registered", the resource cannot
+ * yet be "unregistered", so there is no need to test R_UNREGISTERED.
+ */
+#define ___drbd_printk_peer_device(prmacro, rlt, peer_device, lvl_or_desc, fmt, args...)\
+({									\
+	const struct drbd_peer_device *__pd;				\
+	const struct drbd_device *__d;					\
+	const struct drbd_connection *__c;				\
+	const struct drbd_resource *__r;				\
+	const char *__cn;						\
+	const char *__unregistered = "";				\
+	rcu_read_lock();						\
+	__pd = (const struct drbd_peer_device *)(peer_device);		\
+	__d = __pd->device;						\
+	__c = __pd->connection;						\
+	__r = __d->resource;						\
+	__cn = rcu_dereference(__c->transport.net_conf)->name;		\
+	if (test_bit(C_UNREGISTERED, &__c->flags))			\
+		__unregistered = "/unregistered/";			\
+	if (drbd_peer_device_ratelimit(__pd, rlt))			\
+		prmacro(lvl_or_desc, "drbd %s%s/%u drbd%u %s: " fmt,		\
+			__unregistered, __r->name, __d->vnr, __d->minor, __cn,	\
+			 ## args);					\
+	rcu_read_unlock();						\
+})
+
+#define ___drbd_printk_connection(prmacro, rlt, connection, lvl_or_desc, fmt, args...)	\
+({									\
+	const struct drbd_connection *__c =				\
+		(const struct drbd_connection *)(connection);		\
+	const struct drbd_resource *__r = __c->resource;		\
+	const char *__cn;						\
+	const char *__unregistered = "";				\
+	rcu_read_lock();						\
+	__cn = rcu_dereference(__c->transport.net_conf)->name;		\
+	if (test_bit(C_UNREGISTERED, &__c->flags))			\
+		__unregistered = "/unregistered/";			\
+	if (drbd_connection_ratelimit(__c, rlt))			\
+		prmacro(lvl_or_desc, "drbd %s%s %s: " fmt,		\
+			__unregistered, __r->name, __cn, ## args);	\
+	rcu_read_unlock();						\
+})
 
-#define __drbd_printk_drbd_device_prep(device)			\
-	const struct drbd_device *__d = (device);		\
-	const struct drbd_resource *__r = __d->resource
-#define __drbd_printk_drbd_device_fmt(fmt)	"drbd %s/%u drbd%u: " fmt
-#define __drbd_printk_drbd_device_args()	__r->name, __d->vnr, __d->minor
-#define __drbd_printk_drbd_device_unprep()
-
-#define __drbd_printk_drbd_peer_device_prep(peer_device)	\
-	const struct drbd_device *__d;				\
-	const struct drbd_resource *__r;			\
-	__d = (peer_device)->device;				\
-	__r = __d->resource
-#define __drbd_printk_drbd_peer_device_fmt(fmt) \
-	"drbd %s/%u drbd%u: " fmt
-#define __drbd_printk_drbd_peer_device_args() \
-	__r->name, __d->vnr, __d->minor
-#define __drbd_printk_drbd_peer_device_unprep()
-
-#define __drbd_printk_drbd_resource_prep(resource) \
-	const struct drbd_resource *__r = resource
-#define __drbd_printk_drbd_resource_fmt(fmt) "drbd %s: " fmt
-#define __drbd_printk_drbd_resource_args()	__r->name
-#define __drbd_printk_drbd_resource_unprep(resource)
-
-#define __drbd_printk_drbd_connection_prep(connection)		\
-	const struct drbd_connection *__c = (connection);	\
-	const struct drbd_resource *__r = __c->resource
-#define __drbd_printk_drbd_connection_fmt(fmt)			\
-	"drbd %s: " fmt
-#define __drbd_printk_drbd_connection_args()			\
-	__r->name
-#define __drbd_printk_drbd_connection_unprep()
+#define __drbd_printk_device(rlt, device, level, fmt, args...)\
+	___drbd_printk_device(__drbd_printk, rlt, device, level, fmt, ## args)
+#define __drbd_printk_resource(rlt, resource, level, fmt, args...)\
+	___drbd_printk_resource(__drbd_printk, rlt, resource, level, fmt, ## args)
+#define __drbd_printk_peer_device(rlt, peer_device, level, fmt, args...)\
+	___drbd_printk_peer_device(__drbd_printk, rlt, peer_device, level, fmt, ## args)
+#define __drbd_printk_connection(rlt, connection, level, fmt, args...)\
+	___drbd_printk_connection(__drbd_printk, rlt, connection, level, fmt, ## args)
 
 void drbd_printk_with_wrong_object_type(void);
 void drbd_dyn_dbg_with_wrong_object_type(void);
 
 #define __drbd_printk_choose_cond(obj, struct_name) \
-	(__builtin_types_compatible_p(typeof(obj), struct struct_name *) || \
-	 __builtin_types_compatible_p(typeof(obj), const struct struct_name *))
-#define __drbd_printk_if_same_type(obj, struct_name, level, fmt, args...) \
-	__drbd_printk_choose_cond(obj, struct_name), \
-({ \
-	__drbd_printk_ ## struct_name ## _prep((const struct struct_name *)(obj)); \
-	printk(level __drbd_printk_ ## struct_name ## _fmt(fmt), \
-		__drbd_printk_ ## struct_name ## _args(), ## args); \
-	__drbd_printk_ ## struct_name ## _unprep(); \
-})
-
-#define drbd_printk(level, obj, fmt, args...) \
-	__builtin_choose_expr( \
-	  __drbd_printk_if_same_type(obj, drbd_device, level, fmt, ## args), \
-	  __builtin_choose_expr( \
-	    __drbd_printk_if_same_type(obj, drbd_resource, level, fmt, ## args), \
-	    __builtin_choose_expr( \
-	      __drbd_printk_if_same_type(obj, drbd_connection, level, fmt, ## args), \
-	      __builtin_choose_expr( \
-		__drbd_printk_if_same_type(obj, drbd_peer_device, level, fmt, ## args), \
-		drbd_printk_with_wrong_object_type()))))
+	(__builtin_types_compatible_p(typeof(obj), struct drbd_ ## struct_name *) || \
+	 __builtin_types_compatible_p(typeof(obj), const struct drbd_ ## struct_name *))
+
+#define __drbd_obj_ratelimit(struct_name, obj, rlt)		\
+	({							\
+	int __rlt = (rlt);					\
+	BUILD_BUG_ON(!__drbd_printk_choose_cond(obj, struct_name)); \
+	BUILD_BUG_ON(__rlt < -1);				\
+	BUILD_BUG_ON(__rlt >= (int)ARRAY_SIZE(obj->ratelimit)); \
+	__rlt == -1 ? 1						\
+	: __ratelimit(/* unconst cast ratelimit state */	\
+		(struct ratelimit_state *)(unsigned long)	\
+		&obj->ratelimit[__rlt]);			\
+	})
+
+#define drbd_device_ratelimit(obj, rlt)		\
+	__drbd_obj_ratelimit(device, obj, D_RL_D_ ## rlt)
+#define drbd_resource_ratelimit(obj, rlt)	\
+	__drbd_obj_ratelimit(resource, obj, D_RL_R_ ## rlt)
+#define drbd_connection_ratelimit(obj, rlt)	\
+	__drbd_obj_ratelimit(connection, obj, D_RL_C_ ## rlt)
+#define drbd_peer_device_ratelimit(obj, rlt)	\
+	__drbd_obj_ratelimit(peer_device, obj, D_RL_PD_ ## rlt)
+
+#define drbd_printk(ratelimit_type, level, obj, fmt, args...) \
+	__builtin_choose_expr(__drbd_printk_choose_cond(obj, device), \
+	__drbd_printk_device(ratelimit_type, obj, level, fmt, ## args), \
+	\
+	__builtin_choose_expr(__drbd_printk_choose_cond(obj, resource), \
+	__drbd_printk_resource(ratelimit_type, obj, level, fmt, ## args), \
+	\
+	__builtin_choose_expr(__drbd_printk_choose_cond(obj, connection), \
+	__drbd_printk_connection(ratelimit_type, obj, level, fmt, ## args), \
+	\
+	__builtin_choose_expr(__drbd_printk_choose_cond(obj, peer_device), \
+	__drbd_printk_peer_device(ratelimit_type, obj, level, fmt, ## args), \
+	\
+	drbd_printk_with_wrong_object_type() \
+	))))
 
 #define __drbd_dyn_dbg_if_same_type(obj, struct_name, fmt, args...) \
-	__drbd_printk_choose_cond(obj, struct_name), \
 ({ \
 	DEFINE_DYNAMIC_DEBUG_METADATA(descriptor, fmt);		\
 	if (DYNAMIC_DEBUG_BRANCH(descriptor)) {			\
-		__drbd_printk_ ## struct_name ## _prep((const struct struct_name *)(obj)); \
-		__dynamic_pr_debug(&descriptor, __drbd_printk_ ## struct_name ## _fmt(fmt), \
-			__drbd_printk_ ## struct_name ## _args(), ## args); \
-		__drbd_printk_ ## struct_name ## _unprep();	\
+		___drbd_printk_ ## struct_name(			\
+			__drbd_dyn_dbg,				\
+				NOLIMIT, obj,			\
+				&descriptor, fmt, ## args);	\
 	}							\
 })
 
 #define dynamic_drbd_dbg(obj, fmt, args...) \
-	__builtin_choose_expr( \
-	  __drbd_dyn_dbg_if_same_type(obj, drbd_device, fmt, ## args), \
-	  __builtin_choose_expr( \
-	    __drbd_dyn_dbg_if_same_type(obj, drbd_resource, fmt, ## args), \
-	    __builtin_choose_expr( \
-	      __drbd_dyn_dbg_if_same_type(obj, drbd_connection, fmt, ## args), \
-	      __builtin_choose_expr( \
-		__drbd_dyn_dbg_if_same_type(obj, drbd_peer_device, fmt, ## args), \
-		drbd_dyn_dbg_with_wrong_object_type()))))
-
-#define drbd_emerg(device, fmt, args...) \
-	drbd_printk(KERN_EMERG, device, fmt, ## args)
-#define drbd_alert(device, fmt, args...) \
-	drbd_printk(KERN_ALERT, device, fmt, ## args)
-#define drbd_crit(device, fmt, args...) \
-	drbd_printk(KERN_CRIT, device, fmt, ## args)
-#define drbd_err(device, fmt, args...) \
-	drbd_printk(KERN_ERR, device, fmt, ## args)
-#define drbd_warn(device, fmt, args...) \
-	drbd_printk(KERN_WARNING, device, fmt, ## args)
-#define drbd_notice(device, fmt, args...) \
-	drbd_printk(KERN_NOTICE, device, fmt, ## args)
-#define drbd_info(device, fmt, args...) \
-	drbd_printk(KERN_INFO, device, fmt, ## args)
-
+	__builtin_choose_expr(__drbd_printk_choose_cond(obj, device), \
+	__drbd_dyn_dbg_if_same_type(obj, device, fmt, ## args), \
+	\
+	__builtin_choose_expr(__drbd_printk_choose_cond(obj, resource), \
+	__drbd_dyn_dbg_if_same_type(obj, resource, fmt, ## args), \
+	\
+	__builtin_choose_expr(__drbd_printk_choose_cond(obj, connection), \
+	__drbd_dyn_dbg_if_same_type(obj, connection, fmt, ## args), \
+	\
+	__builtin_choose_expr(__drbd_printk_choose_cond(obj, peer_device), \
+	__drbd_dyn_dbg_if_same_type(obj, peer_device, fmt, ## args), \
+	\
+	drbd_dyn_dbg_with_wrong_object_type() \
+	))))
+
+#define drbd_emerg_ratelimit(obj, fmt, args...) \
+	drbd_printk(GENERIC, KERN_EMERG, obj, fmt, ## args)
+#define drbd_alert_ratelimit(obj, fmt, args...) \
+	drbd_printk(GENERIC, KERN_ALERT, obj, fmt, ## args)
+#define drbd_crit_ratelimit(obj, fmt, args...) \
+	drbd_printk(GENERIC, KERN_CRIT, obj, fmt, ## args)
+#define drbd_err_ratelimit(obj, fmt, args...) \
+	drbd_printk(GENERIC, KERN_ERR, obj, fmt, ## args)
+#define drbd_warn_ratelimit(obj, fmt, args...) \
+	drbd_printk(GENERIC, KERN_WARNING, obj, fmt, ## args)
+#define drbd_notice_ratelimit(obj, fmt, args...) \
+	drbd_printk(GENERIC, KERN_NOTICE, obj, fmt, ## args)
+#define drbd_info_ratelimit(obj, fmt, args...) \
+	drbd_printk(GENERIC, KERN_INFO, obj, fmt, ## args)
+
+#define drbd_emerg(obj, fmt, args...) \
+	drbd_printk(NOLIMIT, KERN_EMERG, obj, fmt, ## args)
+#define drbd_alert(obj, fmt, args...) \
+	drbd_printk(NOLIMIT, KERN_ALERT, obj, fmt, ## args)
+#define drbd_crit(obj, fmt, args...) \
+	drbd_printk(NOLIMIT, KERN_CRIT, obj, fmt, ## args)
+#define drbd_err(obj, fmt, args...) \
+	drbd_printk(NOLIMIT, KERN_ERR, obj, fmt, ## args)
+#define drbd_warn(obj, fmt, args...) \
+	drbd_printk(NOLIMIT, KERN_WARNING, obj, fmt, ## args)
+#define drbd_notice(obj, fmt, args...) \
+	drbd_printk(NOLIMIT, KERN_NOTICE, obj, fmt, ## args)
+#define drbd_info(obj, fmt, args...) \
+	drbd_printk(NOLIMIT, KERN_INFO, obj, fmt, ## args)
 
 #define drbd_ratelimit() \
 ({						\
@@ -122,7 +206,7 @@ void drbd_dyn_dbg_with_wrong_object_type(void);
 	do {									\
 		if (!(exp))							\
 			drbd_err(x, "ASSERTION %s FAILED in %s\n",		\
-				#exp, __func__);				\
+				 #exp, __func__);				\
 	} while (0)
 
 /**
@@ -130,12 +214,13 @@ void drbd_dyn_dbg_with_wrong_object_type(void);
  *
  * Unlike the assert macro, this macro returns a boolean result.
  */
-#define expect(x, exp) ({							\
-		bool _bool = (exp);						\
-		if (!_bool && drbd_ratelimit())					\
-			drbd_err(x, "ASSERTION %s FAILED in %s\n",		\
-				#exp, __func__);				\
-		_bool;								\
+#define expect(x, exp) ({					\
+		bool _bool = (exp);				\
+		if (!_bool)					\
+			drbd_err_ratelimit(x,			\
+				"ASSERTION %s FAILED in %s\n",	\
+				#exp, __func__);		\
+		_bool;						\
 		})
 
 #endif
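
For readers new to the dispatch trick above: __builtin_types_compatible_p()
selects the branch and __builtin_choose_expr() discards the others at compile
time, so calling drbd_printk() with an unsupported object type ends up
referencing the intentionally undefined drbd_printk_with_wrong_object_type()
and fails the build. A minimal standalone sketch of the same technique (plain
GNU C; the struct and macro names here are made up for illustration):

#include <stdio.h>

struct foo { int id; };
struct bar { const char *name; };

/* intentionally never defined: referencing it is a link-time error */
void print_with_wrong_object_type(void);

#define IS_OBJ(obj, T)							\
	(__builtin_types_compatible_p(typeof(obj), struct T *) ||	\
	 __builtin_types_compatible_p(typeof(obj), const struct T *))

#define print_obj(obj)							\
	__builtin_choose_expr(IS_OBJ(obj, foo),				\
		printf("foo %d\n", ((const struct foo *)(obj))->id),	\
	__builtin_choose_expr(IS_OBJ(obj, bar),				\
		printf("bar %s\n", ((const struct bar *)(obj))->name),	\
	print_with_wrong_object_type()))

int main(void)
{
	struct foo f = { .id = 7 };
	const struct bar b = { .name = "b0" };

	print_obj(&f);	/* prints "foo 7" */
	print_obj(&b);	/* prints "bar b0" */
	return 0;
}
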
diff --git a/drivers/block/drbd/drbd_req.h b/drivers/block/drbd/drbd_req.h
index 9ae860e7591b..e5770401cb7a 100644
--- a/drivers/block/drbd/drbd_req.h
+++ b/drivers/block/drbd/drbd_req.h
@@ -64,36 +64,31 @@
  */
 
 enum drbd_req_event {
-	CREATED,
-	TO_BE_SENT,
 	TO_BE_SUBMITTED,
 
-	/* XXX yes, now I am inconsistent...
-	 * these are not "events" but "actions"
-	 * oh, well... */
-	QUEUE_FOR_NET_WRITE,
-	QUEUE_FOR_NET_READ,
-	QUEUE_FOR_SEND_OOS,
+	NEW_NET_READ,
+	NEW_NET_WRITE,
+	NEW_NET_OOS,
+	READY_FOR_NET,
+	SKIP_OOS,
 
-	/* An empty flush is queued as P_BARRIER,
-	 * which will cause it to complete "successfully",
-	 * even if the local disk flush failed.
+	/* For an empty flush, mark that a corresponding barrier has been sent
+	 * to this peer. This causes it to complete "successfully", even if the
+	 * local disk flush failed.
 	 *
 	 * Just like "real" requests, empty flushes (blkdev_issue_flush()) will
 	 * only see an error if neither local nor remote data is reachable. */
-	QUEUE_AS_DRBD_BARRIER,
+	BARRIER_SENT,
 
 	SEND_CANCELED,
 	SEND_FAILED,
 	HANDED_OVER_TO_NETWORK,
 	OOS_HANDED_TO_NETWORK,
-	CONNECTION_LOST_WHILE_PENDING,
-	READ_RETRY_REMOTE_CANCELED,
+	CONNECTION_LOST,
+	CONNECTION_LOST_WHILE_SUSPENDED,
 	RECV_ACKED_BY_PEER,
 	WRITE_ACKED_BY_PEER,
 	WRITE_ACKED_BY_PEER_AND_SIS, /* and set_in_sync */
-	CONFLICT_RESOLVED,
-	POSTPONE_WRITE,
 	NEG_ACKED,
 	BARRIER_ACKED, /* in protocol A and B */
 	DATA_RECEIVED, /* (remote read) */
@@ -107,82 +102,93 @@ enum drbd_req_event {
 
 	ABORT_DISK_IO,
 	RESEND,
-	FAIL_FROZEN_DISK_IO,
-	RESTART_FROZEN_DISK_IO,
+	CANCEL_SUSPENDED_IO,
+	COMPLETION_RESUMED,
 	NOTHING,
 };
 
-/* encoding of request states for now.  we don't actually need that many bits.
- * we don't need to do atomic bit operations either, since most of the time we
- * need to look at the connection state and/or manipulate some lists at the
- * same time, so we should hold the request lock anyways.
+/*
+ * Encoding of request states. Modifications are protected by rq_lock. We don't
+ * do atomic bit operations.
  */
 enum drbd_req_state_bits {
-	/* 3210
-	 * 0000: no local possible
-	 * 0001: to be submitted
-	 *    UNUSED, we could map: 011: submitted, completion still pending
-	 * 0110: completed ok
-	 * 0010: completed with error
-	 * 1001: Aborted (before completion)
-	 * 1x10: Aborted and completed -> free
-	 */
-	__RQ_LOCAL_PENDING,
-	__RQ_LOCAL_COMPLETED,
-	__RQ_LOCAL_OK,
-	__RQ_LOCAL_ABORTED,
-
-	/* 87654
-	 * 00000: no network possible
-	 * 00001: to be send
-	 * 00011: to be send, on worker queue
-	 * 00101: sent, expecting recv_ack (B) or write_ack (C)
-	 * 11101: sent,
-	 *        recv_ack (B) or implicit "ack" (A),
-	 *        still waiting for the barrier ack.
-	 *        master_bio may already be completed and invalidated.
-	 * 11100: write acked (C),
-	 *        data received (for remote read, any protocol)
-	 *        or finally the barrier ack has arrived (B,A)...
-	 *        request can be freed
-	 * 01100: neg-acked (write, protocol C)
-	 *        or neg-d-acked (read, any protocol)
-	 *        or killed from the transfer log
-	 *        during cleanup after connection loss
-	 *        request can be freed
-	 * 01000: canceled or send failed...
-	 *        request can be freed
+	/*
+	 * Here are the possible combinations of the core net flags pending, pending-oos,
+	 * queued, ready, sent, done, ok.
+	 *
+	 * <none>:
+	 *   No network required, or not yet processed.
+	 * pending,queued:
+	 *   To be sent, must not be processed yet.
+	 * pending,queued,ready:
+	 *   To be sent, processing allowed.
+	 * pending,ready,sent:
+	 *   Sent, expecting P_RECV_ACK (B) or P_WRITE_ACK (C).
+	 * queued,ready,ok:
+	 *   P_RECV_ACK (B) or P_WRITE_ACK (C) received before request marked
+	 *   as having been sent.
+	 * ready,sent,ok:
+	 *   Sent, implicit "ack" (A), P_RECV_ACK (B) or P_WRITE_ACK (C) received.
+	 *   Still waiting for the barrier ack.
+	 *   master_bio may already be completed and invalidated.
+	 * pending:
+	 *   Intended for this peer, but connection lost before processing
+	 *   allowed.
+	 * pending,ready:
+	 *   Intended for this peer, but connection lost. If
+	 *   IO is suspended, it will stay in this state until the connection
+	 *   is restored or IO is resumed.
+	 * ready,sent,done,ok:
+	 *   Data received (for remote read, any protocol),
+	 *   or finally the barrier ack has arrived.
+	 * ready,sent,done:
+	 *   Received P_NEG_ACK for write (protocol C, or we are SyncSource),
+	 *   or P_NEG_DREPLY for read (any protocol).
+	 *   Or cleaned up after connection loss after send.
+	 * pending-oos,queued,done:
+	 *   P_OUT_OF_SYNC to be sent, must not be processed yet.
+	 * pending-oos,queued,ready,done:
+	 *   P_OUT_OF_SYNC to be sent, processing allowed.
+	 * queued,ready,done:
+	 *   P_OUT_OF_SYNC was intended, but skipped.
+	 * done:
+	 *   P_OUT_OF_SYNC was intended, but connection lost before processing
+	 *   allowed.
+	 * ready,done:
+	 *   P_OUT_OF_SYNC sent.
+	 *   Or cleaned up after connection loss, either before send or when
+	 *   only P_OUT_OF_SYNC was intended.
 	 */
 
-	/* if "SENT" is not set, yet, this can still fail or be canceled.
-	 * if "SENT" is set already, we still wait for an Ack packet.
-	 * when cleared, the master_bio may be completed.
-	 * in (B,A) the request object may still linger on the transaction log
-	 * until the corresponding barrier ack comes in */
+	/* Pending some network interaction towards the peer apart from
+	 * barriers or P_OUT_OF_SYNC.
+	 * If "sent" is not yet set, this can still fail or be canceled.
+	 * While set, the master_bio may not be completed. */
 	__RQ_NET_PENDING,
 
-	/* If it is QUEUED, and it is a WRITE, it is also registered in the
-	 * transfer log. Currently we need this flag to avoid conflicts between
-	 * worker canceling the request and tl_clear_barrier killing it from
-	 * transfer log.  We should restructure the code so this conflict does
-	 * no longer occur. */
+	/* Pending send of P_OUT_OF_SYNC */
+	__RQ_NET_PENDING_OOS,
+
+	/* The sender might store pointers to it */
 	__RQ_NET_QUEUED,
 
-	/* well, actually only "handed over to the network stack".
-	 *
-	 * TODO can potentially be dropped because of the similar meaning
-	 * of RQ_NET_SENT and ~RQ_NET_QUEUED.
-	 * however it is not exactly the same. before we drop it
-	 * we must ensure that we can tell a request with network part
-	 * from a request without, regardless of what happens to it. */
+	/* Ready for processing by the sender */
+	__RQ_NET_READY,
+
+	/* Well, actually only "handed over to the network stack". */
 	__RQ_NET_SENT,
 
-	/* when set, the request may be freed (if RQ_NET_QUEUED is clear).
-	 * basically this means the corresponding P_BARRIER_ACK was received */
+	/* When set, the data stage is done, as far as interaction with this
+	 * peer is concerned. Basically this means the corresponding
+	 * P_BARRIER_ACK was received. */
 	__RQ_NET_DONE,
 
-	/* whether or not we know (C) or pretend (B,A) that the write
-	 * was successfully written on the peer.
+	/* Set when the request was successful. That is, the corresponding
+	 * condition is fulfilled:
+	 * - The write was sent (A)
+	 * - Receipt of the write was acknowledged (B)
+	 * - The write was successfully written on the peer (C)
+	 * - Read data was received
 	 */
 	__RQ_NET_OK,
 
@@ -192,6 +198,29 @@ enum drbd_req_state_bits {
 	/* keep this last, its for the RQ_NET_MASK */
 	__RQ_NET_MAX,
 
+	/* We expect a receive ACK (wire proto B) */
+	__RQ_EXP_RECEIVE_ACK,
+
+	/* We expect a write ACK (wire proto C) */
+	__RQ_EXP_WRITE_ACK,
+
+	/* waiting for a barrier ack, did an extra kref_get */
+	__RQ_EXP_BARR_ACK,
+
+	/* 4321
+	 * 0000: no local possible
+	 * 0001: to be submitted
+	 *    UNUSED, we could map: 011: submitted, completion still pending
+	 * 0110: completed ok
+	 * 0010: completed with error
+	 * 1001: Aborted (before completion)
+	 * 1x10: Aborted and completed -> free
+	 */
+	__RQ_LOCAL_PENDING,
+	__RQ_LOCAL_COMPLETED,
+	__RQ_LOCAL_OK,
+	__RQ_LOCAL_ABORTED,
+
 	/* Set when this is a write, clear for a read */
 	__RQ_WRITE,
 	__RQ_WSAME,
@@ -212,26 +241,11 @@ enum drbd_req_state_bits {
 	/* would have been completed,
 	 * but was not, because of drbd_suspended() */
 	__RQ_COMPLETION_SUSP,
-
-	/* We expect a receive ACK (wire proto B) */
-	__RQ_EXP_RECEIVE_ACK,
-
-	/* We expect a write ACK (wite proto C) */
-	__RQ_EXP_WRITE_ACK,
-
-	/* waiting for a barrier ack, did an extra kref_get */
-	__RQ_EXP_BARR_ACK,
 };
-
-#define RQ_LOCAL_PENDING   (1UL << __RQ_LOCAL_PENDING)
-#define RQ_LOCAL_COMPLETED (1UL << __RQ_LOCAL_COMPLETED)
-#define RQ_LOCAL_OK        (1UL << __RQ_LOCAL_OK)
-#define RQ_LOCAL_ABORTED   (1UL << __RQ_LOCAL_ABORTED)
-
-#define RQ_LOCAL_MASK      ((RQ_LOCAL_ABORTED << 1)-1)
-
 #define RQ_NET_PENDING     (1UL << __RQ_NET_PENDING)
+#define RQ_NET_PENDING_OOS (1UL << __RQ_NET_PENDING_OOS)
 #define RQ_NET_QUEUED      (1UL << __RQ_NET_QUEUED)
+#define RQ_NET_READY       (1UL << __RQ_NET_READY)
 #define RQ_NET_SENT        (1UL << __RQ_NET_SENT)
 #define RQ_NET_DONE        (1UL << __RQ_NET_DONE)
 #define RQ_NET_OK          (1UL << __RQ_NET_OK)
@@ -239,6 +253,18 @@ enum drbd_req_state_bits {
 
 #define RQ_NET_MASK        (((1UL << __RQ_NET_MAX)-1) & ~RQ_LOCAL_MASK)
 
+#define RQ_EXP_RECEIVE_ACK (1UL << __RQ_EXP_RECEIVE_ACK)
+#define RQ_EXP_WRITE_ACK   (1UL << __RQ_EXP_WRITE_ACK)
+#define RQ_EXP_BARR_ACK    (1UL << __RQ_EXP_BARR_ACK)
+
+#define RQ_LOCAL_PENDING   (1UL << __RQ_LOCAL_PENDING)
+#define RQ_LOCAL_COMPLETED (1UL << __RQ_LOCAL_COMPLETED)
+#define RQ_LOCAL_OK        (1UL << __RQ_LOCAL_OK)
+#define RQ_LOCAL_ABORTED   (1UL << __RQ_LOCAL_ABORTED)
+
+#define RQ_LOCAL_MASK      \
+	(RQ_LOCAL_ABORTED | RQ_LOCAL_OK | RQ_LOCAL_COMPLETED | RQ_LOCAL_PENDING)
+
 #define RQ_WRITE           (1UL << __RQ_WRITE)
 #define RQ_WSAME           (1UL << __RQ_WSAME)
 #define RQ_UNMAP           (1UL << __RQ_UNMAP)
@@ -247,14 +273,25 @@ enum drbd_req_state_bits {
 #define RQ_UNPLUG          (1UL << __RQ_UNPLUG)
 #define RQ_POSTPONED	   (1UL << __RQ_POSTPONED)
 #define RQ_COMPLETION_SUSP (1UL << __RQ_COMPLETION_SUSP)
-#define RQ_EXP_RECEIVE_ACK (1UL << __RQ_EXP_RECEIVE_ACK)
-#define RQ_EXP_WRITE_ACK   (1UL << __RQ_EXP_WRITE_ACK)
-#define RQ_EXP_BARR_ACK    (1UL << __RQ_EXP_BARR_ACK)
 
-/* For waking up the frozen transfer log mod_req() has to return if the request
-   should be counted in the epoch object*/
-#define MR_WRITE       1
-#define MR_READ        2
+
+/* these flags go into local_rq_state,
+ * other flags go into their respective net_rq_state[idx] */
+#define RQ_STATE_0_MASK	\
+	(RQ_LOCAL_MASK  |\
+	 RQ_WRITE       |\
+	 RQ_WSAME       |\
+	 RQ_UNMAP       |\
+	 RQ_ZEROES      |\
+	 RQ_IN_ACT_LOG  |\
+	 RQ_UNPLUG      |\
+	 RQ_POSTPONED   |\
+	 RQ_COMPLETION_SUSP)
+
+static inline bool drbd_req_is_write(struct drbd_request *req)
+{
+	return req->local_rq_state & RQ_WRITE;
+}
 
 /* Short lived temporary struct on the stack.
  * We could squirrel the error to be returned into
@@ -264,61 +301,63 @@ struct bio_and_error {
 	int error;
 };
 
-extern void start_new_tl_epoch(struct drbd_connection *connection);
-extern void drbd_req_destroy(struct kref *kref);
-extern int __req_mod(struct drbd_request *req, enum drbd_req_event what,
-		struct drbd_peer_device *peer_device,
-		struct bio_and_error *m);
-extern void complete_master_bio(struct drbd_device *device,
-		struct bio_and_error *m);
-extern void request_timer_fn(struct timer_list *t);
-extern void tl_restart(struct drbd_connection *connection, enum drbd_req_event what);
-extern void _tl_restart(struct drbd_connection *connection, enum drbd_req_event what);
-extern void tl_abort_disk_io(struct drbd_device *device);
+bool start_new_tl_epoch(struct drbd_resource *resource);
+void drbd_req_destroy(struct kref *kref);
+void __req_mod(struct drbd_request *req, enum drbd_req_event what,
+	       struct drbd_peer_device *peer_device, struct bio_and_error *m);
+void complete_master_bio(struct drbd_device *device, struct bio_and_error *m);
+void drbd_release_conflicts(struct drbd_device *device,
+			    struct drbd_interval *release_interval);
+void drbd_put_ref_tl_walk(struct drbd_request *req, int done_put, int oos_send_put);
+void drbd_set_pending_out_of_sync(struct drbd_peer_device *peer_device);
+void request_timer_fn(struct timer_list *t);
+void tl_walk(struct drbd_connection *connection,
+	     struct drbd_request **from_req, enum drbd_req_event what);
+void __tl_walk(struct drbd_resource * const resource,
+	       struct drbd_connection * const connection,
+	       struct drbd_request **from_req, const enum drbd_req_event what);
+void drbd_destroy_peer_ack_if_done(struct drbd_peer_ack *peer_ack);
+int w_queue_peer_ack(struct drbd_work *w, int cancel);
+void drbd_queue_peer_ack(struct drbd_resource *resource,
+			 struct drbd_request *req);
+bool drbd_should_do_remote(struct drbd_peer_device *peer_device,
+			   enum which_state which);
+void drbd_reclaim_req(struct rcu_head *rp);
 
 /* this is in drbd_main.c */
-extern void drbd_restart_request(struct drbd_request *req);
+void drbd_restart_request(struct drbd_request *req);
+void drbd_restart_suspended_reqs(struct drbd_resource *resource);
 
 /* use this if you don't want to deal with calling complete_master_bio()
  * outside the spinlock, e.g. when walking some list on cleanup. */
-static inline int _req_mod(struct drbd_request *req, enum drbd_req_event what,
+static inline void _req_mod(struct drbd_request *req, enum drbd_req_event what,
 		struct drbd_peer_device *peer_device)
 {
 	struct drbd_device *device = req->device;
 	struct bio_and_error m;
-	int rv;
 
 	/* __req_mod possibly frees req, do not touch req after that! */
-	rv = __req_mod(req, what, peer_device, &m);
+	__req_mod(req, what, peer_device, &m);
 	if (m.bio)
 		complete_master_bio(device, &m);
-
-	return rv;
 }
 
-/* completion of master bio is outside of our spinlock.
- * We still may or may not be inside some irqs disabled section
- * of the lower level driver completion callback, so we need to
- * spin_lock_irqsave here. */
-static inline int req_mod(struct drbd_request *req,
+/* completion of master bio is outside of the spinlock.
+ * If you need it irqsave, do it yourself!
+ * Which means: don't use this from a bio endio callback. */
+static inline void req_mod(struct drbd_request *req,
 		enum drbd_req_event what,
 		struct drbd_peer_device *peer_device)
 {
-	unsigned long flags;
 	struct drbd_device *device = req->device;
 	struct bio_and_error m;
-	int rv;
 
-	spin_lock_irqsave(&device->resource->req_lock, flags);
-	rv = __req_mod(req, what, peer_device, &m);
-	spin_unlock_irqrestore(&device->resource->req_lock, flags);
+	read_lock_irq(&device->resource->state_rwlock);
+	__req_mod(req, what, peer_device, &m);
+	read_unlock_irq(&device->resource->state_rwlock);
 
 	if (m.bio)
 		complete_master_bio(device, &m);
-
-	return rv;
 }
 
-extern bool drbd_should_do_remote(union drbd_dev_state);
-
 #endif
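
To make the flag-combination table above concrete, here is a hypothetical
trace of net_rq_state for a protocol C write that completes normally. The
event names come from enum drbd_req_event; the bit positions are illustrative
only:

#include <stdio.h>

#define RQ_NET_PENDING	(1UL << 0)	/* illustrative bit positions */
#define RQ_NET_QUEUED	(1UL << 1)
#define RQ_NET_READY	(1UL << 2)
#define RQ_NET_SENT	(1UL << 3)
#define RQ_NET_DONE	(1UL << 4)
#define RQ_NET_OK	(1UL << 5)

int main(void)
{
	/* one entry per step, matching the documented combinations */
	static const struct { const char *event; unsigned long state; } trace[] = {
		{ "NEW_NET_WRITE",          RQ_NET_PENDING | RQ_NET_QUEUED },
		{ "READY_FOR_NET",          RQ_NET_PENDING | RQ_NET_QUEUED | RQ_NET_READY },
		{ "HANDED_OVER_TO_NETWORK", RQ_NET_PENDING | RQ_NET_READY | RQ_NET_SENT },
		{ "WRITE_ACKED_BY_PEER",    RQ_NET_READY | RQ_NET_SENT | RQ_NET_OK },
		{ "BARRIER_ACKED",          RQ_NET_READY | RQ_NET_SENT | RQ_NET_DONE | RQ_NET_OK },
	};
	size_t i;

	for (i = 0; i < sizeof(trace) / sizeof(trace[0]); i++)
		printf("%-24s -> 0x%02lx\n", trace[i].event, trace[i].state);
	return 0;
}
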
diff --git a/drivers/block/drbd/drbd_state.h b/drivers/block/drbd/drbd_state.h
index cbaeb8018dbf..2ae525c1760e 100644
--- a/drivers/block/drbd/drbd_state.h
+++ b/drivers/block/drbd/drbd_state.h
@@ -2,26 +2,19 @@
 #ifndef DRBD_STATE_H
 #define DRBD_STATE_H
 
+#include "drbd_protocol.h"
+
+struct drbd_resource;
 struct drbd_device;
 struct drbd_connection;
+struct drbd_peer_device;
+struct drbd_work;
+struct twopc_request;
 
 /**
  * DOC: DRBD State macros
  *
  * These macros are used to express state changes in easily readable form.
- *
- * The NS macros expand to a mask and a value, that can be bit ored onto the
- * current state as soon as the spinlock (req_lock) was taken.
- *
- * The _NS macros are used for state functions that get called with the
- * spinlock. These macros expand directly to the new state value.
- *
- * Besides the basic forms NS() and _NS() additional _?NS[23] are defined
- * to express state changes that affect more than one aspect of the state.
- *
- * E.g. NS2(conn, C_CONNECTED, peer, R_SECONDARY)
- * Means that the network connection was established and that the peer
- * is in secondary role.
  */
 #define role_MASK R_MASK
 #define peer_MASK R_MASK
@@ -34,141 +27,168 @@ struct drbd_connection;
 #define susp_nod_MASK 1
 #define susp_fen_MASK 1
 
-#define NS(T, S) \
-	({ union drbd_state mask; mask.i = 0; mask.T = T##_MASK; mask; }), \
-	({ union drbd_state val; val.i = 0; val.T = (S); val; })
-#define NS2(T1, S1, T2, S2) \
-	({ union drbd_state mask; mask.i = 0; mask.T1 = T1##_MASK; \
-	  mask.T2 = T2##_MASK; mask; }), \
-	({ union drbd_state val; val.i = 0; val.T1 = (S1); \
-	  val.T2 = (S2); val; })
-#define NS3(T1, S1, T2, S2, T3, S3) \
-	({ union drbd_state mask; mask.i = 0; mask.T1 = T1##_MASK; \
-	  mask.T2 = T2##_MASK; mask.T3 = T3##_MASK; mask; }), \
-	({ union drbd_state val;  val.i = 0; val.T1 = (S1); \
-	  val.T2 = (S2); val.T3 = (S3); val; })
-
-#define _NS(D, T, S) \
-	D, ({ union drbd_state __ns; __ns = drbd_read_state(D); __ns.T = (S); __ns; })
-#define _NS2(D, T1, S1, T2, S2) \
-	D, ({ union drbd_state __ns; __ns = drbd_read_state(D); __ns.T1 = (S1); \
-	__ns.T2 = (S2); __ns; })
-#define _NS3(D, T1, S1, T2, S2, T3, S3) \
-	D, ({ union drbd_state __ns; __ns = drbd_read_state(D); __ns.T1 = (S1); \
-	__ns.T2 = (S2); __ns.T3 = (S3); __ns; })
-
 enum chg_state_flags {
-	CS_HARD	         = 1 << 0,
+	CS_HARD          = 1 << 0, /* Forced state change, such as a connection loss */
 	CS_VERBOSE       = 1 << 1,
 	CS_WAIT_COMPLETE = 1 << 2,
 	CS_SERIALIZE     = 1 << 3,
-	CS_ORDERED       = CS_WAIT_COMPLETE + CS_SERIALIZE,
-	CS_LOCAL_ONLY    = 1 << 4, /* Do not consider a device pair wide state change */
-	CS_DC_ROLE       = 1 << 5, /* DC = display as connection state change */
-	CS_DC_PEER       = 1 << 6,
-	CS_DC_CONN       = 1 << 7,
-	CS_DC_DISK       = 1 << 8,
-	CS_DC_PDSK       = 1 << 9,
-	CS_DC_SUSP       = 1 << 10,
-	CS_DC_MASK       = CS_DC_ROLE + CS_DC_PEER + CS_DC_CONN + CS_DC_DISK + CS_DC_PDSK,
-	CS_IGN_OUTD_FAIL = 1 << 11,
-
-	/* Make sure no meta data IO is in flight, by calling
-	 * drbd_md_get_buffer().  Used for graceful detach. */
-	CS_INHIBIT_MD_IO = 1 << 12,
+	CS_ALREADY_SERIALIZED = 1 << 4, /* resource->state_sem already taken */
+	CS_LOCAL_ONLY    = 1 << 5, /* Do not consider a cluster-wide state change */
+	CS_PREPARE	 = 1 << 6,
+	CS_PREPARED	 = 1 << 7,
+	CS_ABORT	 = 1 << 8,
+	CS_TWOPC	 = 1 << 9,
+	CS_IGN_OUTD_FAIL = 1 << 10,
+	CS_DONT_RETRY    = 1 << 11, /* Disable internal retry. Caller has a retry loop */
+	CS_FORCE_RECALC  = 1 << 13, /* Force re-evaluation of state logic */
+	CS_CLUSTER_WIDE  = 1 << 14, /* Make this a cluster wide state change! */
+	CS_FP_LOCAL_UP_TO_DATE = 1 << 15, /* force promotion by making local disk state up_to_date */
+	CS_FP_OUTDATE_PEERS = 1 << 16, /* force promotion by marking unknown peers as outdated */
+	CS_FS_IGN_OPENERS = 1 << 17, /* force demote, ignore openers */
 };
 
-/* drbd_dev_state and drbd_state are different types. This is to stress the
-   small difference. There is no suspended flag (.susp), and no suspended
-   while fence handler runs flas (susp_fen). */
-union drbd_dev_state {
-	struct {
-#if defined(__LITTLE_ENDIAN_BITFIELD)
-		unsigned role:2 ;   /* 3/4	 primary/secondary/unknown */
-		unsigned peer:2 ;   /* 3/4	 primary/secondary/unknown */
-		unsigned conn:5 ;   /* 17/32	 cstates */
-		unsigned disk:4 ;   /* 8/16	 from D_DISKLESS to D_UP_TO_DATE */
-		unsigned pdsk:4 ;   /* 8/16	 from D_DISKLESS to D_UP_TO_DATE */
-		unsigned _unused:1 ;
-		unsigned aftr_isp:1 ; /* isp .. imposed sync pause */
-		unsigned peer_isp:1 ;
-		unsigned user_isp:1 ;
-		unsigned _pad:11;   /* 0	 unused */
-#elif defined(__BIG_ENDIAN_BITFIELD)
-		unsigned _pad:11;
-		unsigned user_isp:1 ;
-		unsigned peer_isp:1 ;
-		unsigned aftr_isp:1 ; /* isp .. imposed sync pause */
-		unsigned _unused:1 ;
-		unsigned pdsk:4 ;   /* 8/16	 from D_DISKLESS to D_UP_TO_DATE */
-		unsigned disk:4 ;   /* 8/16	 from D_DISKLESS to D_UP_TO_DATE */
-		unsigned conn:5 ;   /* 17/32	 cstates */
-		unsigned peer:2 ;   /* 3/4	 primary/secondary/unknown */
-		unsigned role:2 ;   /* 3/4	 primary/secondary/unknown */
-#else
-# error "this endianess is not supported"
-#endif
-	};
-	unsigned int i;
-};
+void drbd_resume_al(struct drbd_device *device);
 
-extern enum drbd_state_rv drbd_change_state(struct drbd_device *device,
-					    enum chg_state_flags f,
-					    union drbd_state mask,
-					    union drbd_state val);
-extern void drbd_force_state(struct drbd_device *, union drbd_state,
-			union drbd_state);
-extern enum drbd_state_rv _drbd_request_state(struct drbd_device *,
-					      union drbd_state,
-					      union drbd_state,
-					      enum chg_state_flags);
-
-extern enum drbd_state_rv
-_drbd_request_state_holding_state_mutex(struct drbd_device *, union drbd_state,
-					union drbd_state, enum chg_state_flags);
-
-extern enum drbd_state_rv _drbd_set_state(struct drbd_device *, union drbd_state,
-					  enum chg_state_flags,
-					  struct completion *done);
-extern void print_st_err(struct drbd_device *, union drbd_state,
-			union drbd_state, enum drbd_state_rv);
-
-enum drbd_state_rv
-_conn_request_state(struct drbd_connection *connection, union drbd_state mask, union drbd_state val,
-		    enum chg_state_flags flags);
-
-enum drbd_state_rv
-conn_request_state(struct drbd_connection *connection, union drbd_state mask, union drbd_state val,
-		   enum chg_state_flags flags);
-
-extern void drbd_resume_al(struct drbd_device *device);
-extern bool conn_all_vols_unconf(struct drbd_connection *connection);
+enum drbd_disk_state conn_highest_disk(struct drbd_connection *connection);
+enum drbd_disk_state conn_highest_pdsk(struct drbd_connection *connection);
 
-/**
- * drbd_request_state() - Request a state change
- * @device:	DRBD device.
- * @mask:	mask of state bits to change.
- * @val:	value of new state bits.
- *
- * This is the most graceful way of requesting a state change. It is verbose
- * quite verbose in case the state change is not possible, and all those
- * state changes are globally serialized.
- */
-static inline int drbd_request_state(struct drbd_device *device,
-				     union drbd_state mask,
-				     union drbd_state val)
+void state_change_lock(struct drbd_resource *resource,
+		       unsigned long *irq_flags, enum chg_state_flags flags);
+void state_change_unlock(struct drbd_resource *resource,
+			 unsigned long *irq_flags);
+
+void begin_state_change(struct drbd_resource *resource,
+			unsigned long *irq_flags, enum chg_state_flags flags);
+enum drbd_state_rv end_state_change(struct drbd_resource *resource,
+				    unsigned long *irq_flags, const char *tag);
+void abort_state_change(struct drbd_resource *resource,
+			unsigned long *irq_flags);
+void abort_state_change_locked(struct drbd_resource *resource);
+
+void begin_state_change_locked(struct drbd_resource *resource,
+			       enum chg_state_flags flags);
+enum drbd_state_rv end_state_change_locked(struct drbd_resource *resource,
+					   const char *tag);
+
+void clear_remote_state_change(struct drbd_resource *resource);
+void __clear_remote_state_change(struct drbd_resource *resource);
+
+enum which_state;
+bool drbd_all_peer_replication(struct drbd_device *device, enum which_state which);
+union drbd_state drbd_get_device_state(struct drbd_device *device,
+				       enum which_state which);
+union drbd_state drbd_get_peer_device_state(struct drbd_peer_device *peer_device,
+					    enum which_state which);
+
+#define stable_state_change(resource, change_state) ({				\
+		enum drbd_state_rv rv;						\
+		int err;							\
+		err = wait_event_interruptible((resource)->state_wait,		\
+			(rv = (change_state)) != SS_IN_TRANSIENT_STATE);	\
+		if (err)							\
+			err = -SS_UNKNOWN_ERROR;				\
+		else								\
+			err = rv;						\
+		err;								\
+	})
+
+void nested_twopc_work(struct work_struct *work);
+void drbd_maybe_cluster_wide_reply(struct drbd_resource *resource);
+enum drbd_state_rv nested_twopc_request(struct drbd_resource *resource,
+					struct twopc_request *request);
+bool drbd_twopc_between_peer_and_me(struct drbd_connection *connection);
+bool cluster_wide_reply_ready(struct drbd_resource *resource);
+
+enum drbd_state_rv change_role(struct drbd_resource *resource,
+			       enum drbd_role role,
+			       enum chg_state_flags flags, const char *tag,
+			       const char **err_str);
+
+void __change_io_susp_user(struct drbd_resource *resource, bool value);
+enum drbd_state_rv change_io_susp_user(struct drbd_resource *resource,
+				       bool value, enum chg_state_flags flags);
+void __change_io_susp_no_data(struct drbd_resource *resource, bool value);
+void __change_io_susp_fencing(struct drbd_connection *connection, bool value);
+void __change_io_susp_quorum(struct drbd_resource *resource, bool value);
+
+void __change_disk_state(struct drbd_device *device,
+			 enum drbd_disk_state disk_state);
+void __downgrade_disk_states(struct drbd_resource *resource,
+			     enum drbd_disk_state disk_state);
+enum drbd_state_rv change_disk_state(struct drbd_device *device,
+				     enum drbd_disk_state disk_state,
+				     enum chg_state_flags flags,
+				     const char *tag, const char **err_str);
+
+void __change_cstate(struct drbd_connection *connection,
+		     enum drbd_conn_state cstate);
+enum drbd_state_rv change_cstate_tag(struct drbd_connection *connection,
+				     enum drbd_conn_state cstate,
+				     enum chg_state_flags flags,
+				     const char *tag, const char **err_str);
+static inline enum drbd_state_rv change_cstate(struct drbd_connection *connection,
+					       enum drbd_conn_state cstate,
+					       enum chg_state_flags flags)
 {
-	return _drbd_request_state(device, mask, val, CS_VERBOSE + CS_ORDERED);
+	return change_cstate_tag(connection, cstate, flags, NULL, NULL);
 }
 
-/* for use in adm_detach() (drbd_adm_detach(), drbd_adm_down()) */
-int drbd_request_detach_interruptible(struct drbd_device *device);
-
-enum drbd_role conn_highest_role(struct drbd_connection *connection);
-enum drbd_role conn_highest_peer(struct drbd_connection *connection);
-enum drbd_disk_state conn_highest_disk(struct drbd_connection *connection);
-enum drbd_disk_state conn_lowest_disk(struct drbd_connection *connection);
-enum drbd_disk_state conn_highest_pdsk(struct drbd_connection *connection);
-enum drbd_conns conn_lowest_conn(struct drbd_connection *connection);
-
+void __change_peer_role(struct drbd_connection *connection,
+			enum drbd_role peer_role);
+
+void __change_repl_state(struct drbd_peer_device *peer_device,
+			 enum drbd_repl_state repl_state);
+enum drbd_state_rv change_repl_state(struct drbd_peer_device *peer_device,
+				     enum drbd_repl_state new_repl_state,
+				     enum chg_state_flags flags,
+				     const char *tag);
+enum drbd_state_rv stable_change_repl_state(struct drbd_peer_device *peer_device,
+					    enum drbd_repl_state repl_state,
+					    enum chg_state_flags flags,
+					    const char *tag);
+
+void __change_peer_disk_state(struct drbd_peer_device *peer_device,
+			      enum drbd_disk_state disk_state);
+void __downgrade_peer_disk_states(struct drbd_connection *connection,
+				  enum drbd_disk_state disk_state);
+void __outdate_myself(struct drbd_resource *resource);
+enum drbd_state_rv change_peer_disk_state(struct drbd_peer_device *peer_device,
+					  enum drbd_disk_state disk_state,
+					  enum chg_state_flags flags,
+					  const char *tag);
+
+void __change_resync_susp_user(struct drbd_peer_device *peer_device,
+			       bool value);
+enum drbd_state_rv change_resync_susp_user(struct drbd_peer_device *peer_device,
+					   bool value,
+					   enum chg_state_flags flags);
+void __change_resync_susp_peer(struct drbd_peer_device *peer_device,
+			       bool value);
+void __change_resync_susp_dependency(struct drbd_peer_device *peer_device,
+				     bool value);
+void apply_connect(struct drbd_connection *connection, bool commit);
+
+struct drbd_work;
+
+bool resource_is_suspended(struct drbd_resource *resource,
+			   enum which_state which);
+bool is_suspended_fen(struct drbd_resource *resource, enum which_state which);
+
+enum dds_flags;
+enum determine_dev_size;
+struct resize_parms;
+
+enum determine_dev_size
+change_cluster_wide_device_size(struct drbd_device *device,
+				sector_t local_max_size,
+				uint64_t new_user_size,
+				enum dds_flags dds_flags,
+				struct resize_parms *rs);
+
+bool drbd_data_accessible(struct drbd_device *device, enum which_state which);
+bool drbd_res_data_accessible(struct drbd_resource *resource);
+
+void drbd_empty_twopc_work_fn(struct work_struct *work);
 #endif
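
The begin/end pairs above stage a state transition and commit (or abort) it
as a unit. A sketch of the intended calling pattern -- assumed usage; the
helper name and tag string are invented for illustration:

static enum drbd_state_rv example_outdate(struct drbd_resource *resource)
{
	unsigned long irq_flags;

	begin_state_change(resource, &irq_flags, CS_VERBOSE);
	__outdate_myself(resource);	/* stage the change */
	/* commit the staged transition, or report why it was refused */
	return end_state_change(resource, &irq_flags, "example-outdate");
}
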
diff --git a/drivers/block/drbd/drbd_state_change.h b/drivers/block/drbd/drbd_state_change.h
index a56a57d67686..bb68684a5fd3 100644
--- a/drivers/block/drbd/drbd_state_change.h
+++ b/drivers/block/drbd/drbd_state_change.h
@@ -7,58 +7,80 @@ struct drbd_resource_state_change {
 	enum drbd_role role[2];
 	bool susp[2];
 	bool susp_nod[2];
-	bool susp_fen[2];
+	bool susp_uuid[2];
+	bool fail_io[2];
 };
 
 struct drbd_device_state_change {
 	struct drbd_device *device;
 	enum drbd_disk_state disk_state[2];
+	bool have_quorum[2];
 };
 
 struct drbd_connection_state_change {
 	struct drbd_connection *connection;
-	enum drbd_conns cstate[2];  /* drbd9: enum drbd_conn_state */
+	enum drbd_conn_state cstate[2];
 	enum drbd_role peer_role[2];
+	bool susp_fen[2];
+};
+
+/* Exception: this stores state, not a change.
+ * Used for get_initial_state. */
+struct drbd_path_state {
+	struct drbd_connection *connection;
+	struct drbd_path *path;
+	/* not an array,
+	 * because it's not an array in struct drbd_path either */
+	bool path_established;
 };
 
 struct drbd_peer_device_state_change {
 	struct drbd_peer_device *peer_device;
 	enum drbd_disk_state disk_state[2];
-	enum drbd_conns repl_state[2];  /* drbd9: enum drbd_repl_state */
+	enum drbd_repl_state repl_state[2];
 	bool resync_susp_user[2];
 	bool resync_susp_peer[2];
 	bool resync_susp_dependency[2];
+	bool resync_susp_other_c[2];
+	bool resync_active[2];
+	bool replication[2];
+	bool peer_replication[2];
+};
+
+struct drbd_state_change_object_count {
+	unsigned int n_devices;
+	unsigned int n_connections;
+	unsigned int n_paths;
 };
 
 struct drbd_state_change {
 	struct list_head list;
 	unsigned int n_devices;
 	unsigned int n_connections;
+	unsigned int n_paths;
 	struct drbd_resource_state_change resource[1];
 	struct drbd_device_state_change *devices;
 	struct drbd_connection_state_change *connections;
 	struct drbd_peer_device_state_change *peer_devices;
+	struct drbd_path_state *paths;
 };
 
-extern struct drbd_state_change *remember_old_state(struct drbd_resource *, gfp_t);
-extern void copy_old_to_new_state_change(struct drbd_state_change *);
-extern void forget_state_change(struct drbd_state_change *);
+struct drbd_state_change *remember_state_change(struct drbd_resource *resource,
+						gfp_t gfp);
+void copy_old_to_new_state_change(struct drbd_state_change *state_change);
+void forget_state_change(struct drbd_state_change *state_change);
 
-extern int notify_resource_state_change(struct sk_buff *,
-					 unsigned int,
-					 void *,
-					 enum drbd_notification_type type);
-extern int notify_connection_state_change(struct sk_buff *,
-					   unsigned int,
-					   void *,
-					   enum drbd_notification_type type);
-extern int notify_device_state_change(struct sk_buff *,
-				       unsigned int,
-				       void *,
-				       enum drbd_notification_type type);
-extern int notify_peer_device_state_change(struct sk_buff *,
-					    unsigned int,
-					    void *,
-					    enum drbd_notification_type type);
+int notify_resource_state_change(struct sk_buff *skb, unsigned int seq,
+				 void *state_change,
+				 enum drbd_notification_type type);
+int notify_connection_state_change(struct sk_buff *skb, unsigned int seq,
+				   void *state_change,
+				   enum drbd_notification_type type);
+int notify_device_state_change(struct sk_buff *skb, unsigned int seq,
+			       void *state_change,
+			       enum drbd_notification_type type);
+int notify_peer_device_state_change(struct sk_buff *skb, unsigned int seq,
+				    void *state_change,
+				    enum drbd_notification_type type);
 
 #endif  /* DRBD_STATE_CHANGE_H */
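
All of the two-element arrays in these snapshot structs are before/after
pairs: index 0 holds the old value, index 1 the new one. A hypothetical
consumer (did_promote() is not part of this series):

static bool did_promote(const struct drbd_state_change *state_change)
{
	const struct drbd_resource_state_change *rsc = &state_change->resource[0];

	return rsc->role[0] != R_PRIMARY &&	/* old role */
	       rsc->role[1] == R_PRIMARY;	/* new role */
}
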
diff --git a/drivers/block/drbd/drbd_strings.h b/drivers/block/drbd/drbd_strings.h
index 0201f6590f6a..f376ce28a815 100644
--- a/drivers/block/drbd/drbd_strings.h
+++ b/drivers/block/drbd/drbd_strings.h
@@ -2,9 +2,26 @@
 #ifndef __DRBD_STRINGS_H
 #define __DRBD_STRINGS_H
 
-extern const char *drbd_conn_str(enum drbd_conns);
-extern const char *drbd_role_str(enum drbd_role);
-extern const char *drbd_disk_str(enum drbd_disk_state);
-extern const char *drbd_set_st_err_str(enum drbd_state_rv);
+struct state_names {
+	const char * const *names;
+	unsigned int size;
+};
+
+extern struct state_names drbd_conn_state_names;
+extern struct state_names drbd_repl_state_names;
+extern struct state_names drbd_role_state_names;
+extern struct state_names drbd_disk_state_names;
+extern struct state_names drbd_error_messages;
+extern struct state_names drbd_packet_names;
+
+enum drbd_packet;
+
+const char *drbd_repl_str(enum drbd_repl_state s);
+const char *drbd_conn_str(enum drbd_conn_state s);
+const char *drbd_role_str(enum drbd_role s);
+const char *drbd_disk_str(enum drbd_disk_state s);
+const char *drbd_set_st_err_str(enum drbd_state_rv err);
+const char *drbd_packet_name(enum drbd_packet cmd);
 
 #endif  /* __DRBD_STRINGS_H */
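
Each struct state_names pairs a name table with its size so that lookups can
be bounds-checked. A plausible shape for one of the accessors -- a sketch,
not the actual implementation from this series:

const char *drbd_conn_str(enum drbd_conn_state s)
{
	/* the fallback string is illustrative */
	return (unsigned int)s < drbd_conn_state_names.size ?
		drbd_conn_state_names.names[s] : "UNKNOWN";
}
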
diff --git a/drivers/block/drbd/drbd_transport_lb-tcp.c b/drivers/block/drbd/drbd_transport_lb-tcp.c
index 497fca8c413c..29f18df2be88 100644
--- a/drivers/block/drbd/drbd_transport_lb-tcp.c
+++ b/drivers/block/drbd/drbd_transport_lb-tcp.c
@@ -15,10 +15,10 @@
 #include <linux/tcp.h>
 #include <linux/highmem.h>
 #include <linux/bio.h>
-#include <linux/drbd_genl_api.h>
-#include <linux/drbd_config.h>
+#include "drbd_genl_api.h"
 #include <net/tcp.h>
 #include "drbd_protocol.h"
+#include "drbd_config.h"
 #include "drbd_transport.h"
 
 
diff --git a/drivers/block/drbd/drbd_transport_rdma.c b/drivers/block/drbd/drbd_transport_rdma.c
index 21790a769d63..fbdf6a4bcda9 100644
--- a/drivers/block/drbd/drbd_transport_rdma.c
+++ b/drivers/block/drbd/drbd_transport_rdma.c
@@ -28,10 +28,10 @@
 #include <rdma/rdma_cm.h>
 #include <rdma/ib_cm.h>
 #include <linux/interrupt.h>
-#include <linux/drbd_genl_api.h>
+#include "drbd_genl_api.h"
 #include "drbd_protocol.h"
 #include "drbd_transport.h"
-#include "linux/drbd_config.h" /* for REL_VERSION */
+#include "drbd_config.h" /* for REL_VERSION */
 
 /* Nearly all data transfer uses the send/receive semantics. No need to
    actually use RDMA WRITE / READ.
diff --git a/drivers/block/drbd/drbd_transport_tcp.c b/drivers/block/drbd/drbd_transport_tcp.c
index 31885ff9341f..5faa6b82c358 100644
--- a/drivers/block/drbd/drbd_transport_tcp.c
+++ b/drivers/block/drbd/drbd_transport_tcp.c
@@ -19,14 +19,14 @@
 #include <linux/tcp.h>
 #include <linux/highmem.h>
 #include <linux/bio.h>
-#include <linux/drbd_genl_api.h>
-#include <linux/drbd_config.h>
+#include "drbd_genl_api.h"
 #include <linux/tls.h>
 #include <net/tcp.h>
 #include <net/handshake.h>
 #include <net/tls.h>
 #include <net/tls_prot.h>
 #include "drbd_protocol.h"
+#include "drbd_config.h"
 #include "drbd_transport.h"
 
 
diff --git a/include/linux/drbd.h b/include/linux/drbd.h
index 5468a2399d48..ed408088a282 100644
--- a/include/linux/drbd.h
+++ b/include/linux/drbd.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
+/* SPDX-License-Identifier: GPL-2.0-only */
 /*
   drbd.h
   Kernel module for 2.6.x Kernels
@@ -9,10 +9,10 @@
   Copyright (C) 2001-2008, Philipp Reisner <philipp.reisner@linbit.com>.
   Copyright (C) 2001-2008, Lars Ellenberg <lars.ellenberg@linbit.com>.
 
-
 */
 #ifndef DRBD_H
 #define DRBD_H
+
 #include <asm/types.h>
 
 #ifdef __KERNEL__
@@ -44,8 +44,7 @@ enum drbd_io_error_p {
 	EP_DETACH
 };
 
-enum drbd_fencing_p {
-	FP_NOT_AVAIL = -1, /* Not a policy */
+enum drbd_fencing_policy {
 	FP_DONT_CARE = 0,
 	FP_RESOURCE,
 	FP_STONITH
@@ -68,7 +67,9 @@ enum drbd_after_sb_p {
 	ASB_CONSENSUS,
 	ASB_DISCARD_SECONDARY,
 	ASB_CALL_HELPER,
-	ASB_VIOLENTLY
+	ASB_VIOLENTLY,
+	ASB_RETRY_CONNECT,
+	ASB_AUTO_DISCARD,
 };
 
 enum drbd_on_no_data {
@@ -76,6 +77,16 @@ enum drbd_on_no_data {
 	OND_SUSPEND_IO
 };
 
+enum drbd_on_no_quorum {
+	ONQ_IO_ERROR = OND_IO_ERROR,
+	ONQ_SUSPEND_IO = OND_SUSPEND_IO
+};
+
+enum drbd_on_susp_primary_outdated {
+	SPO_DISCONNECT,
+	SPO_FORCE_SECONDARY,
+};
+
 enum drbd_on_congestion {
 	OC_BLOCK,
 	OC_PULL_AHEAD,
@@ -96,6 +107,11 @@ enum drbd_read_balancing {
 	RB_1M_STRIPING,
 };
 
+/* Windows km/dderror.h defines NO_ERROR as 0L */
+#ifdef NO_ERROR
+#undef NO_ERROR
+#endif
+
 /* KEEP the order, do not delete or insert. Only append. */
 enum drbd_ret_code {
 	ERR_CODE_BASE		= 100,
@@ -162,6 +178,12 @@ enum drbd_ret_code {
 	ERR_MD_LAYOUT_TOO_SMALL = 168,
 	ERR_MD_LAYOUT_NO_FIT    = 169,
 	ERR_IMPLICIT_SHRINK     = 170,
+	ERR_INVALID_PEER_NODE_ID = 171,
+	ERR_CREATE_TRANSPORT    = 172,
+	ERR_LOCAL_AND_PEER_ADDR = 173,
+	ERR_ALREADY_EXISTS	= 174,
+	ERR_APV_TOO_LOW         = 175,
+
 	/* insert new ones above this line */
 	AFTER_LAST_ERR_CODE
 };
@@ -178,17 +200,17 @@ enum drbd_role {
 };
 
 /* The order of these constants is important.
- * The lower ones (<C_WF_REPORT_PARAMS) indicate
+ * The lower ones (< C_CONNECTED) indicate
  * that there is no socket!
- * >=C_WF_REPORT_PARAMS ==> There is a socket
+ * >= C_CONNECTED ==> There is a socket
  */
-enum drbd_conns {
+enum drbd_conn_state {
 	C_STANDALONE,
-	C_DISCONNECTING,  /* Temporal state on the way to StandAlone. */
+	C_DISCONNECTING,  /* Temporary state on the way to C_STANDALONE. */
 	C_UNCONNECTED,    /* >= C_UNCONNECTED -> inc_net() succeeds */
 
-	/* These temporal states are all used on the way
-	 * from >= C_CONNECTED to Unconnected.
+	/* These temporary states are used on the way
+	 * from C_CONNECTED to C_UNCONNECTED.
 	 * The 'disconnect reason' states
 	 * I do not allow to change between them. */
 	C_TIMEOUT,
@@ -197,35 +219,44 @@ enum drbd_conns {
 	C_PROTOCOL_ERROR,
 	C_TEAR_DOWN,
 
-	C_WF_CONNECTION,
-	C_WF_REPORT_PARAMS, /* we have a socket */
-	C_CONNECTED,      /* we have introduced each other */
-	C_STARTING_SYNC_S,  /* starting full sync by admin request. */
-	C_STARTING_SYNC_T,  /* starting full sync by admin request. */
-	C_WF_BITMAP_S,
-	C_WF_BITMAP_T,
-	C_WF_SYNC_UUID,
+	C_CONNECTING,
+
+	C_CONNECTED, /* we have a socket */
+
+	C_MASK = 31,
+};
+
+enum drbd_repl_state {
+	L_NEGOTIATING = C_CONNECTED, /* used for peer_device->negotiation_result only */
+	L_OFF = C_CONNECTED,
+
+	L_ESTABLISHED,      /* we have introduced each other */
+	L_STARTING_SYNC_S,  /* starting full sync by admin request. */
+	L_STARTING_SYNC_T,  /* starting full sync by admin request. */
+	L_WF_BITMAP_S,
+	L_WF_BITMAP_T,
+	L_WF_SYNC_UUID,
 
 	/* All SyncStates are tested with this comparison
-	 * xx >= C_SYNC_SOURCE && xx <= C_PAUSED_SYNC_T */
-	C_SYNC_SOURCE,
-	C_SYNC_TARGET,
-	C_VERIFY_S,
-	C_VERIFY_T,
-	C_PAUSED_SYNC_S,
-	C_PAUSED_SYNC_T,
-
-	C_AHEAD,
-	C_BEHIND,
-
-	C_MASK = 31
+	 * xx >= L_SYNC_SOURCE && xx <= L_PAUSED_SYNC_T */
+	L_SYNC_SOURCE,
+	L_SYNC_TARGET,
+	L_VERIFY_S,
+	L_VERIFY_T,
+	L_PAUSED_SYNC_S,
+	L_PAUSED_SYNC_T,
+
+	L_AHEAD,
+	L_BEHIND,
+	L_NEG_NO_RESULT = L_BEHIND,  /* used for peer_device->negotiation_result only */
 };
 
 enum drbd_disk_state {
 	D_DISKLESS,
 	D_ATTACHING,      /* In the process of reading the meta-data */
+	D_DETACHING,      /* Added in protocol version 110 */
 	D_FAILED,         /* Becomes D_DISKLESS as soon as we told it the peer */
-			  /* when >= D_FAILED it is legal to access mdev->ldev */
+			  /* when >= D_FAILED it is legal to access device->ldev */
 	D_NEGOTIATING,    /* Late attaching state, we need to talk to the peer */
 	D_INCONSISTENT,
 	D_OUTDATED,
@@ -257,9 +288,11 @@ union drbd_state {
 		unsigned user_isp:1 ;
 		unsigned susp_nod:1 ; /* IO suspended because no data */
 		unsigned susp_fen:1 ; /* IO suspended because fence peer handler runs*/
-		unsigned _pad:9;   /* 0	 unused */
+		unsigned quorum:1;
+		unsigned _pad:8;   /* 0	 unused */
 #elif defined(__BIG_ENDIAN_BITFIELD)
-		unsigned _pad:9;
+		unsigned _pad:8;
+		unsigned quorum:1;
 		unsigned susp_fen:1 ;
 		unsigned susp_nod:1 ;
 		unsigned user_isp:1 ;
@@ -297,29 +330,48 @@ enum drbd_state_rv {
 	SS_DEVICE_IN_USE = -12,
 	SS_NO_NET_CONFIG = -13,
 	SS_NO_VERIFY_ALG = -14,       /* drbd-8.2 only */
-	SS_NEED_CONNECTION = -15,    /* drbd-8.2 only */
+	SS_NEED_CONNECTION = -15,
 	SS_LOWER_THAN_OUTDATED = -16,
-	SS_NOT_SUPPORTED = -17,      /* drbd-8.2 only */
+	SS_NOT_SUPPORTED = -17,
 	SS_IN_TRANSIENT_STATE = -18,  /* Retry after the next state change */
 	SS_CONCURRENT_ST_CHG = -19,   /* Concurrent cluster side state change! */
 	SS_O_VOL_PEER_PRI = -20,
-	SS_OUTDATE_WO_CONN = -21,
-	SS_AFTER_LAST_ERROR = -22,    /* Keep this at bottom */
+	SS_INTERRUPTED = -21,	/* interrupted in stable_state_change() */
+	SS_PRIMARY_READER = -22,
+	SS_TIMEOUT = -23,
+	SS_WEAKLY_CONNECTED = -24,
+	SS_NO_QUORUM = -25,
+	SS_ATTACH_NO_BITMAP = -26,
+	SS_HANDSHAKE_DISCONNECT = -27,
+	SS_HANDSHAKE_RETRY = -28,
+	SS_AFTER_LAST_ERROR = -29,    /* Keep this at bottom */
 };
 
 #define SHARED_SECRET_MAX 64
 
-#define MDF_CONSISTENT		(1 << 0)
-#define MDF_PRIMARY_IND		(1 << 1)
-#define MDF_CONNECTED_IND	(1 << 2)
-#define MDF_FULL_SYNC		(1 << 3)
-#define MDF_WAS_UP_TO_DATE	(1 << 4)
-#define MDF_PEER_OUT_DATED	(1 << 5)
-#define MDF_CRASHED_PRIMARY	(1 << 6)
-#define MDF_AL_CLEAN		(1 << 7)
-#define MDF_AL_DISABLED		(1 << 8)
+enum mdf_flag {
+	MDF_CONSISTENT =	1 << 0,
+	MDF_PRIMARY_IND =	1 << 1,
+	MDF_WAS_UP_TO_DATE =	1 << 4,
+	MDF_CRASHED_PRIMARY =	1 << 6,
+	MDF_AL_CLEAN =		1 << 7,
+	MDF_AL_DISABLED =       1 << 8,
+	MDF_PRIMARY_LOST_QUORUM = 1 << 9,
+	MDF_HAVE_QUORUM =       1 << 10,
+};
+
+enum mdf_peer_flag {
+	MDF_PEER_CONNECTED =	1 << 0,
+	MDF_PEER_OUTDATED =	1 << 1,
+	MDF_PEER_FENCING =	1 << 2,
+	MDF_PEER_FULL_SYNC =	1 << 3,
+	MDF_PEER_DEVICE_SEEN =	1 << 4,
+	MDF_NODE_EXISTS =       1 << 16,
+	MDF_HAVE_BITMAP =       1 << 31,  /* For in-core use; no meaning when persisted */
+};
 
-#define MAX_PEERS 32
+#define DRBD_PEERS_MAX 32
+#define DRBD_NODE_ID_MAX DRBD_PEERS_MAX
 
 enum drbd_uuid_index {
 	UI_CURRENT,
@@ -331,7 +383,8 @@ enum drbd_uuid_index {
 	UI_EXTENDED_SIZE   /* Everything. */
 };
 
-#define HISTORY_UUIDS MAX_PEERS
+#define HISTORY_UUIDS_V08 (UI_HISTORY_END - UI_HISTORY_START + 1)
+#define HISTORY_UUIDS DRBD_PEERS_MAX
 
 enum drbd_timeout_flag {
 	UT_DEFAULT      = 0,
@@ -339,6 +392,16 @@ enum drbd_timeout_flag {
 	UT_PEER_OUTDATED = 2,
 };
 
+#define UUID_JUST_CREATED ((__u64)4)
+#define UUID_PRIMARY ((__u64)1)
+
+enum write_ordering_e {
+	WO_NONE,
+	WO_DRAIN_IO,
+	WO_BDEV_FLUSH,
+	WO_BIO_BARRIER
+};
+
 enum drbd_notification_type {
 	NOTIFY_EXISTS,
 	NOTIFY_CREATE,
@@ -346,11 +409,13 @@ enum drbd_notification_type {
 	NOTIFY_DESTROY,
 	NOTIFY_CALL,
 	NOTIFY_RESPONSE,
+	NOTIFY_RENAME,
 
 	NOTIFY_CONTINUES = 0x8000,
 	NOTIFY_FLAGS = NOTIFY_CONTINUES,
 };
 
+/* These values are part of the ABI! */
 enum drbd_peer_state {
 	P_INCONSISTENT = 3,
 	P_OUTDATED = 4,
@@ -359,15 +424,6 @@ enum drbd_peer_state {
 	P_FENCING = 7,
 };
 
-#define UUID_JUST_CREATED ((__u64)4)
-
-enum write_ordering_e {
-	WO_NONE,
-	WO_DRAIN_IO,
-	WO_BDEV_FLUSH,
-	WO_BIO_BARRIER
-};
-
 /* magic numbers used in meta data and network packets */
 #define DRBD_MAGIC 0x83740267
 #define DRBD_MAGIC_BIG 0x835a
@@ -376,17 +432,23 @@ enum write_ordering_e {
 #define DRBD_MD_MAGIC_07   (DRBD_MAGIC+3)
 #define DRBD_MD_MAGIC_08   (DRBD_MAGIC+4)
 #define DRBD_MD_MAGIC_84_UNCLEAN	(DRBD_MAGIC+5)
-
-
-/* how I came up with this magic?
- * base64 decode "actlog==" ;) */
-#define DRBD_AL_MAGIC 0x69cb65a2
+#define DRBD_MD_MAGIC_09   (DRBD_MAGIC+6)
 
 /* these are of type "int" */
 #define DRBD_MD_INDEX_INTERNAL -1
 #define DRBD_MD_INDEX_FLEX_EXT -2
 #define DRBD_MD_INDEX_FLEX_INT -3
 
-#define DRBD_CPU_MASK_SIZE 32
+/*
+ * This is the maximum string length accepted by drbdadm.
+ * It allows a full mask for up to 908 CPUs.
+ */
+#define DRBD_CPU_MASK_SIZE 256
+
+#define DRBD_MAX_BIO_SIZE (1U << 20)
+
+#define QOU_OFF 0
+#define QOU_MAJORITY 1024
+#define QOU_ALL 1025
 
 #endif
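
As the comment in enum drbd_repl_state notes, all resync states form one
contiguous run, which keeps the usual check down to two comparisons. A sketch
mirroring that comment (not copied from this series):

static inline bool is_sync_state(enum drbd_repl_state repl_state)
{
	/* relies on L_SYNC_SOURCE..L_PAUSED_SYNC_T being contiguous */
	return repl_state >= L_SYNC_SOURCE && repl_state <= L_PAUSED_SYNC_T;
}
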
diff --git a/include/linux/drbd_config.h b/include/linux/drbd_config.h
deleted file mode 100644
index d215365c6bb1..000000000000
--- a/include/linux/drbd_config.h
+++ /dev/null
@@ -1,16 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * drbd_config.h
- * DRBD's compile time configuration.
- */
-
-#ifndef DRBD_CONFIG_H
-#define DRBD_CONFIG_H
-
-extern const char *drbd_buildtag(void);
-
-#define REL_VERSION "8.4.11"
-#define PRO_VERSION_MIN 86
-#define PRO_VERSION_MAX 101
-
-#endif
diff --git a/include/linux/drbd_genl.h b/include/linux/drbd_genl.h
index 53f44b8cd75f..75e671a3c5d1 100644
--- a/include/linux/drbd_genl.h
+++ b/include/linux/drbd_genl.h
@@ -1,4 +1,4 @@
-/* SPDX-License-Identifier: GPL-2.0 */
+/* SPDX-License-Identifier: GPL-2.0-only */
 /*
  * General overview:
  * full generic netlink message:
@@ -68,7 +68,7 @@
  *	genl_magic_func.h
  *		generates an entry in the static genl_ops array,
  *		and static register/unregister functions to
- *		genl_register_family().
+ *		genl_register_family_with_ops().
  *
  *	flags and handler:
  *		GENL_op_init( .doit = x, .dumpit = y, .flags = something)
@@ -96,10 +96,12 @@ GENL_struct(DRBD_NLA_CFG_REPLY, 1, drbd_cfg_reply,
  * and/or the replication group (aka resource) name,
  * and the volume id within the resource. */
 GENL_struct(DRBD_NLA_CFG_CONTEXT, 2, drbd_cfg_context,
+	__u32_field(6, DRBD_GENLA_F_MANDATORY,	ctx_peer_node_id)
 	__u32_field(1, DRBD_GENLA_F_MANDATORY,	ctx_volume)
 	__str_field(2, DRBD_GENLA_F_MANDATORY,	ctx_resource_name, 128)
 	__bin_field(3, DRBD_GENLA_F_MANDATORY,	ctx_my_addr, 128)
 	__bin_field(4, DRBD_GENLA_F_MANDATORY,	ctx_peer_addr, 128)
+	__str_field_def(5, 0, ctx_conn_name, SHARED_SECRET_MAX)
 )
 
 GENL_struct(DRBD_NLA_DISK_CONF, 3, disk_conf,
@@ -109,37 +111,45 @@ GENL_struct(DRBD_NLA_DISK_CONF, 3, disk_conf,
 
 	/* use the resize command to try and change the disk_size */
 	__u64_field(4, DRBD_GENLA_F_MANDATORY | DRBD_F_INVARIANT,	disk_size)
-	/* we could change the max_bio_bvecs,
-	 * but it won't propagate through the stack */
-	__u32_field(5, DRBD_GENLA_F_MANDATORY | DRBD_F_INVARIANT,	max_bio_bvecs)
+	/*__u32_field(5, DRBD_GENLA_F_MANDATORY | DRBD_F_INVARIANT,	max_bio_bvecs)*/
 
 	__u32_field_def(6, DRBD_GENLA_F_MANDATORY,	on_io_error, DRBD_ON_IO_ERROR_DEF)
-	__u32_field_def(7, DRBD_GENLA_F_MANDATORY,	fencing, DRBD_FENCING_DEF)
+	/*__u32_field_def(7, DRBD_GENLA_F_MANDATORY,	fencing_policy, DRBD_FENCING_DEF)*/
 
-	__u32_field_def(8,	DRBD_GENLA_F_MANDATORY,	resync_rate, DRBD_RESYNC_RATE_DEF)
 	__s32_field_def(9,	DRBD_GENLA_F_MANDATORY,	resync_after, DRBD_MINOR_NUMBER_DEF)
 	__u32_field_def(10,	DRBD_GENLA_F_MANDATORY,	al_extents, DRBD_AL_EXTENTS_DEF)
-	__u32_field_def(11,	DRBD_GENLA_F_MANDATORY,	c_plan_ahead, DRBD_C_PLAN_AHEAD_DEF)
-	__u32_field_def(12,	DRBD_GENLA_F_MANDATORY,	c_delay_target, DRBD_C_DELAY_TARGET_DEF)
-	__u32_field_def(13,	DRBD_GENLA_F_MANDATORY,	c_fill_target, DRBD_C_FILL_TARGET_DEF)
-	__u32_field_def(14,	DRBD_GENLA_F_MANDATORY,	c_max_rate, DRBD_C_MAX_RATE_DEF)
-	__u32_field_def(15,	DRBD_GENLA_F_MANDATORY,	c_min_rate, DRBD_C_MIN_RATE_DEF)
-	__u32_field_def(20,     DRBD_GENLA_F_MANDATORY, disk_timeout, DRBD_DISK_TIMEOUT_DEF)
-	__u32_field_def(21,     0 /* OPTIONAL */,       read_balancing, DRBD_READ_BALANCING_DEF)
-	__u32_field_def(25,     0 /* OPTIONAL */,       rs_discard_granularity, DRBD_RS_DISCARD_GRANULARITY_DEF)
 
 	__flg_field_def(16, DRBD_GENLA_F_MANDATORY,	disk_barrier, DRBD_DISK_BARRIER_DEF)
 	__flg_field_def(17, DRBD_GENLA_F_MANDATORY,	disk_flushes, DRBD_DISK_FLUSHES_DEF)
 	__flg_field_def(18, DRBD_GENLA_F_MANDATORY,	disk_drain, DRBD_DISK_DRAIN_DEF)
 	__flg_field_def(19, DRBD_GENLA_F_MANDATORY,	md_flushes, DRBD_MD_FLUSHES_DEF)
+	__u32_field_def(20,	DRBD_GENLA_F_MANDATORY,	disk_timeout, DRBD_DISK_TIMEOUT_DEF)
+	__u32_field_def(21, DRBD_GENLA_F_MANDATORY,     read_balancing, DRBD_READ_BALANCING_DEF)
+	__u32_field_def(22,	DRBD_GENLA_F_MANDATORY,	unplug_watermark, DRBD_UNPLUG_WATERMARK_DEF)
+	__u32_field_def(25, 0 /* OPTIONAL */,           rs_discard_granularity, DRBD_RS_DISCARD_GRANULARITY_DEF)
 	__flg_field_def(23,     0 /* OPTIONAL */,	al_updates, DRBD_AL_UPDATES_DEF)
-	__flg_field_def(24,     0 /* OPTIONAL */,	discard_zeroes_if_aligned, DRBD_DISCARD_ZEROES_IF_ALIGNED_DEF)
+	__flg_field_def(24,     0 /* OPTIONAL */,       discard_zeroes_if_aligned, DRBD_DISCARD_ZEROES_IF_ALIGNED_DEF)
 	__flg_field_def(26,     0 /* OPTIONAL */,	disable_write_same, DRBD_DISABLE_WRITE_SAME_DEF)
+	__flg_field_def(27, 0 /* OPTIONAL */,		d_bitmap, DRBD_BITMAP_DEF)
 )
 
 GENL_struct(DRBD_NLA_RESOURCE_OPTS, 4, res_opts,
 	__str_field_def(1,	DRBD_GENLA_F_MANDATORY,	cpu_mask,       DRBD_CPU_MASK_SIZE)
 	__u32_field_def(2,	DRBD_GENLA_F_MANDATORY,	on_no_data, DRBD_ON_NO_DATA_DEF)
+	__flg_field_def(3,	DRBD_GENLA_F_MANDATORY,	auto_promote, DRBD_AUTO_PROMOTE_DEF)
+	__u32_field(4,		DRBD_F_REQUIRED | DRBD_F_INVARIANT,	node_id)
+	__u32_field_def(5,	DRBD_GENLA_F_MANDATORY,	peer_ack_window, DRBD_PEER_ACK_WINDOW_DEF)
+	__u32_field_def(6,	DRBD_GENLA_F_MANDATORY,	twopc_timeout, DRBD_TWOPC_TIMEOUT_DEF)
+	__u32_field_def(7,	DRBD_GENLA_F_MANDATORY, twopc_retry_timeout, DRBD_TWOPC_RETRY_TIMEOUT_DEF)
+	__u32_field_def(8,	0 /* OPTIONAL */,	peer_ack_delay, DRBD_PEER_ACK_DELAY_DEF)
+	__u32_field_def(9,	0 /* OPTIONAL */,	auto_promote_timeout, DRBD_AUTO_PROMOTE_TIMEOUT_DEF)
+	__u32_field_def(10,	0 /* OPTIONAL */,	nr_requests, DRBD_NR_REQUESTS_DEF)
+	__s32_field_def(11,	0 /* OPTIONAL */,	quorum, DRBD_QUORUM_DEF)
+	__u32_field_def(12,     0 /* OPTIONAL */,	on_no_quorum, DRBD_ON_NO_QUORUM_DEF)
+	__s32_field_def(13,	0 /* OPTIONAL */,	quorum_min_redundancy, DRBD_QUORUM_DEF)
+	__u32_field_def(14,	0 /* OPTIONAL */,	on_susp_primary_outdated, DRBD_ON_SUSP_PRI_OUTD_DEF)
+	__flg_field_def(15, 0, drbd8_compat_mode, DRBD_DRBD8_COMPAT_MODE_DEF) /* hidden from drbdsetup show */
+	__flg_field_def(16,	0 /* OPTIONAL */,	explicit_drbd8_compat, DRBD_DRBD8_COMPAT_MODE_DEF)
 )
 
 GENL_struct(DRBD_NLA_NET_CONF, 5, net_conf,
@@ -157,9 +167,7 @@ GENL_struct(DRBD_NLA_NET_CONF, 5, net_conf,
 	__u32_field_def(11,	DRBD_GENLA_F_MANDATORY,	sndbuf_size, DRBD_SNDBUF_SIZE_DEF)
 	__u32_field_def(12,	DRBD_GENLA_F_MANDATORY,	rcvbuf_size, DRBD_RCVBUF_SIZE_DEF)
 	__u32_field_def(13,	DRBD_GENLA_F_MANDATORY,	ko_count, DRBD_KO_COUNT_DEF)
-	__u32_field_def(14,	DRBD_GENLA_F_MANDATORY,	max_buffers, DRBD_MAX_BUFFERS_DEF)
 	__u32_field_def(15,	DRBD_GENLA_F_MANDATORY,	max_epoch_size, DRBD_MAX_EPOCH_SIZE_DEF)
-	__u32_field_def(16,	DRBD_GENLA_F_MANDATORY,	unplug_watermark, DRBD_UNPLUG_WATERMARK_DEF)
 	__u32_field_def(17,	DRBD_GENLA_F_MANDATORY,	after_sb_0p, DRBD_AFTER_SB_0P_DEF)
 	__u32_field_def(18,	DRBD_GENLA_F_MANDATORY,	after_sb_1p, DRBD_AFTER_SB_1P_DEF)
 	__u32_field_def(19,	DRBD_GENLA_F_MANDATORY,	after_sb_2p, DRBD_AFTER_SB_2P_DEF)
@@ -168,20 +176,29 @@ GENL_struct(DRBD_NLA_NET_CONF, 5, net_conf,
 	__u32_field_def(22,	DRBD_GENLA_F_MANDATORY,	cong_fill, DRBD_CONG_FILL_DEF)
 	__u32_field_def(23,	DRBD_GENLA_F_MANDATORY,	cong_extents, DRBD_CONG_EXTENTS_DEF)
 	__flg_field_def(24, DRBD_GENLA_F_MANDATORY,	two_primaries, DRBD_ALLOW_TWO_PRIMARIES_DEF)
-	__flg_field(25, DRBD_GENLA_F_MANDATORY | DRBD_F_INVARIANT,	discard_my_data)
 	__flg_field_def(26, DRBD_GENLA_F_MANDATORY,	tcp_cork, DRBD_TCP_CORK_DEF)
 	__flg_field_def(27, DRBD_GENLA_F_MANDATORY,	always_asbp, DRBD_ALWAYS_ASBP_DEF)
-	__flg_field(28, DRBD_GENLA_F_MANDATORY | DRBD_F_INVARIANT,	tentative)
 	__flg_field_def(29,	DRBD_GENLA_F_MANDATORY,	use_rle, DRBD_USE_RLE_DEF)
-	/* 9: __u32_field_def(30,	DRBD_GENLA_F_MANDATORY,	fencing_policy, DRBD_FENCING_DEF) */
-	/* 9: __str_field_def(31,     DRBD_GENLA_F_MANDATORY, name, SHARED_SECRET_MAX) */
-	/* 9: __u32_field(32,         DRBD_F_REQUIRED | DRBD_F_INVARIANT,     peer_node_id) */
+	__u32_field_def(30,	DRBD_GENLA_F_MANDATORY,	fencing_policy, DRBD_FENCING_DEF)
+	__str_field_def(31,	DRBD_GENLA_F_MANDATORY, name, SHARED_SECRET_MAX)
+	/* moved into ctx_peer_node_id: __u32_field(32,		DRBD_F_REQUIRED | DRBD_F_INVARIANT,	peer_node_id) */
 	__flg_field_def(33, 0 /* OPTIONAL */,	csums_after_crash_only, DRBD_CSUMS_AFTER_CRASH_ONLY_DEF)
 	__u32_field_def(34, 0 /* OPTIONAL */, sock_check_timeo, DRBD_SOCKET_CHECK_TIMEO_DEF)
+	__str_field_def(35, DRBD_F_INVARIANT, transport_name, SHARED_SECRET_MAX)
+	__u32_field_def(36, 0 /* OPTIONAL */, max_buffers, DRBD_MAX_BUFFERS_DEF)
+	__flg_field_def(37, 0 /* OPTIONAL */, allow_remote_read, DRBD_ALLOW_REMOTE_READ_DEF)
+	__flg_field_def(38, 0 /* OPTIONAL */, tls, DRBD_TLS_DEF)
+	__s32_field_def(39, 0 /* OPTIONAL */, tls_privkey, DRBD_TLS_PRIVKEY_DEF)
+	__s32_field_def(40, 0 /* OPTIONAL */, tls_certificate, DRBD_TLS_CERTIFICATE_DEF)
+	__s32_field_def(41, 0 /* OPTIONAL */, tls_keyring, DRBD_TLS_KEYRING_DEF)
+	__flg_field_def(42, DRBD_F_INVARIANT, load_balance_paths, DRBD_LOAD_BALANCE_PATHS_DEF)
+	__u32_field_def(43, 0 /* OPTIONAL */, rdma_ctrl_rcvbuf_size, DRBD_RDMA_CTRL_RCVBUF_SIZE_DEF)
+	__u32_field_def(44, 0 /* OPTIONAL */, rdma_ctrl_sndbuf_size, DRBD_RDMA_CTRL_SNDBUF_SIZE_DEF)
+
 )
 
 GENL_struct(DRBD_NLA_SET_ROLE_PARMS, 6, set_role_parms,
-	__flg_field(1, DRBD_GENLA_F_MANDATORY,	assume_uptodate)
+	__flg_field(1, DRBD_GENLA_F_MANDATORY,	force)
 )
 
 GENL_struct(DRBD_NLA_RESIZE_PARMS, 7, resize_parms,
@@ -192,46 +209,6 @@ GENL_struct(DRBD_NLA_RESIZE_PARMS, 7, resize_parms,
 	__u32_field_def(5, 0 /* OPTIONAL */, al_stripe_size, DRBD_AL_STRIPE_SIZE_DEF)
 )
 
-GENL_struct(DRBD_NLA_STATE_INFO, 8, state_info,
-	/* the reason of the broadcast,
-	 * if this is an event triggered broadcast. */
-	__u32_field(1, DRBD_GENLA_F_MANDATORY,	sib_reason)
-	__u32_field(2, DRBD_F_REQUIRED,	current_state)
-	__u64_field(3, DRBD_GENLA_F_MANDATORY,	capacity)
-	__u64_field(4, DRBD_GENLA_F_MANDATORY,	ed_uuid)
-
-	/* These are for broadcast from after state change work.
-	 * prev_state and new_state are from the moment the state change took
-	 * place, new_state is not neccessarily the same as current_state,
-	 * there may have been more state changes since.  Which will be
-	 * broadcasted soon, in their respective after state change work.  */
-	__u32_field(5, DRBD_GENLA_F_MANDATORY,	prev_state)
-	__u32_field(6, DRBD_GENLA_F_MANDATORY,	new_state)
-
-	/* if we have a local disk: */
-	__bin_field(7, DRBD_GENLA_F_MANDATORY,	uuids, (UI_SIZE*sizeof(__u64)))
-	__u32_field(8, DRBD_GENLA_F_MANDATORY,	disk_flags)
-	__u64_field(9, DRBD_GENLA_F_MANDATORY,	bits_total)
-	__u64_field(10, DRBD_GENLA_F_MANDATORY,	bits_oos)
-	/* and in case resync or online verify is active */
-	__u64_field(11, DRBD_GENLA_F_MANDATORY,	bits_rs_total)
-	__u64_field(12, DRBD_GENLA_F_MANDATORY,	bits_rs_failed)
-
-	/* for pre and post notifications of helper execution */
-	__str_field(13, DRBD_GENLA_F_MANDATORY,	helper, 32)
-	__u32_field(14, DRBD_GENLA_F_MANDATORY,	helper_exit_code)
-
-	__u64_field(15,                      0, send_cnt)
-	__u64_field(16,                      0, recv_cnt)
-	__u64_field(17,                      0, read_cnt)
-	__u64_field(18,                      0, writ_cnt)
-	__u64_field(19,                      0, al_writ_cnt)
-	__u64_field(20,                      0, bm_writ_cnt)
-	__u32_field(21,                      0, ap_bio_cnt)
-	__u32_field(22,                      0, ap_pending_cnt)
-	__u32_field(23,                      0, rs_pending_cnt)
-)
-
 GENL_struct(DRBD_NLA_START_OV_PARMS, 9, start_ov_parms,
 	__u64_field(1, DRBD_GENLA_F_MANDATORY,	ov_start_sector)
 	__u64_field(2, DRBD_GENLA_F_MANDATORY,	ov_stop_sector)
@@ -239,6 +216,7 @@ GENL_struct(DRBD_NLA_START_OV_PARMS, 9, start_ov_parms,
 
 GENL_struct(DRBD_NLA_NEW_C_UUID_PARMS, 10, new_c_uuid_parms,
 	__flg_field(1, DRBD_GENLA_F_MANDATORY, clear_bm)
+	__flg_field(2, DRBD_GENLA_F_MANDATORY, force_resync)
 )
 
 GENL_struct(DRBD_NLA_TIMEOUT_PARMS, 11, timeout_parms,
@@ -251,6 +229,13 @@ GENL_struct(DRBD_NLA_DISCONNECT_PARMS, 12, disconnect_parms,
 
 GENL_struct(DRBD_NLA_DETACH_PARMS, 13, detach_parms,
 	__flg_field(1, DRBD_GENLA_F_MANDATORY,	force_detach)
+	__flg_field_def(2, 0 /* OPTIONAL */, intentional_diskless_detach, DRBD_DISK_DISKLESS_DEF)
+)
+
+GENL_struct(DRBD_NLA_DEVICE_CONF, 14, device_conf,
+	__u32_field_def(1, DRBD_F_INVARIANT,	max_bio_size, DRBD_MAX_BIO_SIZE_DEF)
+	__flg_field_def(2, 0 /* OPTIONAL */, intentional_diskless, DRBD_DISK_DISKLESS_DEF)
+	__u32_field_def(3, 0 /* OPTIONAL */, block_size, DRBD_BLOCK_SIZE_DEF)
 )
 
 GENL_struct(DRBD_NLA_RESOURCE_INFO, 15, resource_info,
@@ -258,11 +243,16 @@ GENL_struct(DRBD_NLA_RESOURCE_INFO, 15, resource_info,
 	__flg_field(2, 0, res_susp)
 	__flg_field(3, 0, res_susp_nod)
 	__flg_field(4, 0, res_susp_fen)
-	/* __flg_field(5, 0, res_weak) */
+	__flg_field(5, 0, res_susp_quorum)
+	__flg_field(6, 0, res_fail_io)
 )
 
 GENL_struct(DRBD_NLA_DEVICE_INFO, 16, device_info,
 	__u32_field(1, 0, dev_disk_state)
+	__flg_field(2, 0, is_intentional_diskless)
+	__flg_field(3, 0, dev_has_quorum)
+	__flg_field(5, 0, dev_is_open)
+	__str_field(4, 0, backing_dev_path, 128)
 )
 
 GENL_struct(DRBD_NLA_CONNECTION_INFO, 17, connection_info,
@@ -276,6 +266,7 @@ GENL_struct(DRBD_NLA_PEER_DEVICE_INFO, 18, peer_device_info,
 	__u32_field(3, 0, peer_resync_susp_user)
 	__u32_field(4, 0, peer_resync_susp_peer)
 	__u32_field(5, 0, peer_resync_susp_dependency)
+	__flg_field(6, 0, peer_is_intentional_diskless)
 )
 
 GENL_struct(DRBD_NLA_RESOURCE_STATISTICS, 19, resource_statistics,
@@ -301,6 +292,8 @@ GENL_struct(DRBD_NLA_DEVICE_STATISTICS, 20, device_statistics,
 
 GENL_struct(DRBD_NLA_CONNECTION_STATISTICS, 21, connection_statistics,
 	__flg_field(1, 0, conn_congested)
+	__u64_field(2, 0, ap_in_flight) /* sectors */
+	__u64_field(3, 0, rs_in_flight) /* sectors */
 )
 
 GENL_struct(DRBD_NLA_PEER_DEVICE_STATISTICS, 22, peer_device_statistics,
@@ -312,6 +305,27 @@ GENL_struct(DRBD_NLA_PEER_DEVICE_STATISTICS, 22, peer_device_statistics,
 	__u64_field(6, 0, peer_dev_resync_failed)  /* sectors */
 	__u64_field(7, 0, peer_dev_bitmap_uuid)
 	__u32_field(9, 0, peer_dev_flags)
+	/* you need the peer_repl_state from peer_device_info
+	 * to properly interpret these stats for "progress"
+	 * of syncer/verify */
+	__u64_field(10, 0, peer_dev_rs_total)	/* sectors */
+	__u64_field(11, 0, peer_dev_ov_start_sector)
+	__u64_field(12, 0, peer_dev_ov_stop_sector)
+	__u64_field(13, 0, peer_dev_ov_position) /* sectors */
+	__u64_field(14, 0, peer_dev_ov_left)	/* sectors */
+	__u64_field(15, 0, peer_dev_ov_skipped)	/* sectors */
+	__u64_field(16, 0, peer_dev_rs_same_csum)
+	__u64_field(17, 0, peer_dev_rs_dt_start_ms)
+	__u64_field(18, 0, peer_dev_rs_paused_ms)
+	/* resync progress marks for "resync speed" guesstimation */
+	__u64_field(19, 0, peer_dev_rs_dt0_ms)
+	__u64_field(20, 0, peer_dev_rs_db0_sectors)
+	__u64_field(21, 0, peer_dev_rs_dt1_ms)
+	__u64_field(22, 0, peer_dev_rs_db1_sectors)
+	__u32_field(23, 0, peer_dev_rs_c_sync_rate)
+	/* events may not be sent for every change of the UUID flags, however
+	 * UUID_FLAG_STABLE can be trusted */
+	__u64_field(24, 0, peer_dev_uuid_flags)
 )
 
 GENL_struct(DRBD_NLA_NOTIFICATION_HEADER, 23, drbd_notification_header,
@@ -323,38 +337,67 @@ GENL_struct(DRBD_NLA_HELPER, 24, drbd_helper_info,
 	__u32_field(2, DRBD_GENLA_F_MANDATORY, helper_status)
 )
 
-/*
- * Notifications and commands (genlmsghdr->cmd)
- */
-GENL_mc_group(events)
+GENL_struct(DRBD_NLA_INVALIDATE_PARMS, 25, invalidate_parms,
+	__s32_field_def(1, DRBD_GENLA_F_MANDATORY, sync_from_peer_node_id, DRBD_SYNC_FROM_NID_DEF)
+	__flg_field_def(2, DRBD_GENLA_F_MANDATORY, reset_bitmap, DRBD_INVALIDATE_RESET_BITMAP_DEF)
+)
 
-	/* kernel -> userspace announcement of changes */
-GENL_notification(
-	DRBD_EVENT, 1, events,
-	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED)
-	GENL_tla_expected(DRBD_NLA_STATE_INFO, DRBD_F_REQUIRED)
-	GENL_tla_expected(DRBD_NLA_NET_CONF, DRBD_GENLA_F_MANDATORY)
-	GENL_tla_expected(DRBD_NLA_DISK_CONF, DRBD_GENLA_F_MANDATORY)
-	GENL_tla_expected(DRBD_NLA_SYNCER_CONF, DRBD_GENLA_F_MANDATORY)
+GENL_struct(DRBD_NLA_FORGET_PEER_PARMS, 26, forget_peer_parms,
+	__s32_field_def(1, DRBD_GENLA_F_MANDATORY, forget_peer_node_id, DRBD_SYNC_FROM_NID_DEF)
 )
 
-	/* query kernel for specific or all info */
-GENL_op(
-	DRBD_ADM_GET_STATUS, 2,
-	GENL_op_init(
-		.doit = drbd_adm_get_status,
-		.dumpit = drbd_adm_get_status_all,
-		/* anyone may ask for the status,
-		 * it is broadcasted anyways */
-	),
-	/* To select the object .doit.
-	 * Or a subset of objects in .dumpit. */
-	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_GENLA_F_MANDATORY)
+GENL_struct(DRBD_NLA_PEER_DEVICE_OPTS, 27, peer_device_conf,
+	__u32_field_def(1,	DRBD_GENLA_F_MANDATORY,	resync_rate, DRBD_RESYNC_RATE_DEF)
+	__u32_field_def(2,	DRBD_GENLA_F_MANDATORY,	c_plan_ahead, DRBD_C_PLAN_AHEAD_DEF)
+	__u32_field_def(3,	DRBD_GENLA_F_MANDATORY,	c_delay_target, DRBD_C_DELAY_TARGET_DEF)
+	__u32_field_def(4,	DRBD_GENLA_F_MANDATORY,	c_fill_target, DRBD_C_FILL_TARGET_DEF)
+	__u32_field_def(5,	DRBD_GENLA_F_MANDATORY,	c_max_rate, DRBD_C_MAX_RATE_DEF)
+	__u32_field_def(6,	DRBD_GENLA_F_MANDATORY,	c_min_rate, DRBD_C_MIN_RATE_DEF)
+	__flg_field_def(7, 0 /* OPTIONAL */, bitmap, DRBD_BITMAP_DEF)
+#if (PRO_FEATURES & DRBD_FF_RESYNC_WITHOUT_REPLICATION) || !defined(__KERNEL__)
+	__flg_field_def(8, 0 /* OPTIONAL */, resync_without_replication, DRBD_RESYNC_WITHOUT_REPLICATION_DEF)
+#endif
+)
+
+GENL_struct(DRBD_NLA_PATH_PARMS, 28, path_parms,
+	__bin_field(1, DRBD_GENLA_F_MANDATORY,	my_addr, 128)
+	__bin_field(2, DRBD_GENLA_F_MANDATORY,	peer_addr, 128)
+)
+
+GENL_struct(DRBD_NLA_CONNECT_PARMS, 29, connect_parms,
+	__flg_field_def(1,	DRBD_GENLA_F_MANDATORY,	tentative, 0)
+	__flg_field_def(2,	DRBD_GENLA_F_MANDATORY,	discard_my_data, 0)
+)
+
+GENL_struct(DRBD_NLA_PATH_INFO, 30, drbd_path_info,
+	__flg_field(1, 0, path_established)
 )
 
+GENL_struct(DRBD_NLA_RENAME_RESOURCE_PARMS, 31, rename_resource_parms,
+	__str_field(1, DRBD_GENLA_F_MANDATORY, new_resource_name, 128)
+)
+
+GENL_struct(DRBD_NLA_RENAME_RESOURCE_INFO, 32, rename_resource_info,
+	__str_field(1, DRBD_GENLA_F_MANDATORY, res_new_name, 128)
+)
+
+GENL_struct(DRBD_NLA_INVAL_PEER_PARAMS, 33, invalidate_peer_parms,
+	__flg_field_def(1, DRBD_GENLA_F_MANDATORY, p_reset_bitmap, DRBD_INVALIDATE_RESET_BITMAP_DEF)
+)
+
+GENL_struct(DRBD_NLA_SUSPEND_IO_PARAMS, 34, suspend_io_parms,
+	__flg_field_def(1, DRBD_GENLA_F_MANDATORY, bdev_freeze, DRBD_SUSPEND_IO_BDEV_FREEZE_DEF)
+)
+
+/*
+ * Notifications and commands (genlmsghdr->cmd)
+ */
+GENL_mc_group(events)
+
 	/* add DRBD minor devices as volumes to resources */
 GENL_op(DRBD_ADM_NEW_MINOR, 5, GENL_doit(drbd_adm_new_minor),
-	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED))
+	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED)
+	GENL_tla_expected(DRBD_NLA_DEVICE_CONF, DRBD_GENLA_F_MANDATORY))
 GENL_op(DRBD_ADM_DEL_MINOR, 6, GENL_doit(drbd_adm_del_minor),
 	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED))
 
@@ -370,11 +413,29 @@ GENL_op(DRBD_ADM_RESOURCE_OPTS, 9,
 	GENL_tla_expected(DRBD_NLA_RESOURCE_OPTS, DRBD_GENLA_F_MANDATORY)
 )
 
-GENL_op(
-	DRBD_ADM_CONNECT, 10,
-	GENL_doit(drbd_adm_connect),
+GENL_op(DRBD_ADM_NEW_PEER, 44, GENL_doit(drbd_adm_new_peer),
 	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED)
-	GENL_tla_expected(DRBD_NLA_NET_CONF, DRBD_F_REQUIRED)
+	GENL_tla_expected(DRBD_NLA_NET_CONF, DRBD_GENLA_F_MANDATORY)
+)
+
+GENL_op(DRBD_ADM_NEW_PATH, 45, GENL_doit(drbd_adm_new_path),
+	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED)
+	GENL_tla_expected(DRBD_NLA_PATH_PARMS, DRBD_F_REQUIRED)
+)
+
+GENL_op(DRBD_ADM_DEL_PEER, 46, GENL_doit(drbd_adm_del_peer),
+	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED)
+	GENL_tla_expected(DRBD_NLA_DISCONNECT_PARMS, DRBD_GENLA_F_MANDATORY)
+)
+
+GENL_op(DRBD_ADM_DEL_PATH, 47, GENL_doit(drbd_adm_del_path),
+	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED)
+	GENL_tla_expected(DRBD_NLA_PATH_PARMS, DRBD_F_REQUIRED)
+)
+
+GENL_op(DRBD_ADM_CONNECT, 10, GENL_doit(drbd_adm_connect),
+	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED)
+	GENL_tla_expected(DRBD_NLA_CONNECT_PARMS, DRBD_GENLA_F_MANDATORY)
 )
 
 GENL_op(
@@ -385,7 +446,9 @@ GENL_op(
 )
 
 GENL_op(DRBD_ADM_DISCONNECT, 11, GENL_doit(drbd_adm_disconnect),
-	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED))
+	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED)
+	GENL_tla_expected(DRBD_NLA_DISCONNECT_PARMS, DRBD_GENLA_F_MANDATORY)
+)
 
 GENL_op(DRBD_ADM_ATTACH, 12,
 	GENL_doit(drbd_adm_attach),
@@ -438,15 +501,22 @@ GENL_op(DRBD_ADM_DETACH,	18, GENL_doit(drbd_adm_detach),
 	GENL_tla_expected(DRBD_NLA_DETACH_PARMS, DRBD_GENLA_F_MANDATORY))
 
 GENL_op(DRBD_ADM_INVALIDATE,	19, GENL_doit(drbd_adm_invalidate),
-	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED))
+	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED)
+	GENL_tla_expected(DRBD_NLA_INVALIDATE_PARMS, DRBD_F_REQUIRED))
+
 GENL_op(DRBD_ADM_INVAL_PEER,	20, GENL_doit(drbd_adm_invalidate_peer),
-	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED))
+	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED)
+	GENL_tla_expected(DRBD_NLA_INVAL_PEER_PARAMS, 0 /* OPTIONAL */))
+
 GENL_op(DRBD_ADM_PAUSE_SYNC,	21, GENL_doit(drbd_adm_pause_sync),
 	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED))
 GENL_op(DRBD_ADM_RESUME_SYNC,	22, GENL_doit(drbd_adm_resume_sync),
 	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED))
+
 GENL_op(DRBD_ADM_SUSPEND_IO,	23, GENL_doit(drbd_adm_suspend_io),
-	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED))
+	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED)
+	GENL_tla_expected(DRBD_NLA_SUSPEND_IO_PARAMS, 0 /* OPTIONAL */))
+
 GENL_op(DRBD_ADM_RESUME_IO,	24, GENL_doit(drbd_adm_resume_io),
 	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED))
 GENL_op(DRBD_ADM_OUTDATE,	25, GENL_doit(drbd_adm_outdate),
@@ -457,39 +527,47 @@ GENL_op(DRBD_ADM_DOWN,		27, GENL_doit(drbd_adm_down),
 	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED))
 
 GENL_op(DRBD_ADM_GET_RESOURCES, 30,
-	 GENL_op_init(
-		 .dumpit = drbd_adm_dump_resources,
-	 ),
-	 GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_GENLA_F_MANDATORY)
-	 GENL_tla_expected(DRBD_NLA_RESOURCE_INFO, DRBD_GENLA_F_MANDATORY)
-	 GENL_tla_expected(DRBD_NLA_RESOURCE_STATISTICS, DRBD_GENLA_F_MANDATORY))
+	GENL_op_init(
+		.dumpit = drbd_adm_dump_resources,
+	),
+	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_GENLA_F_MANDATORY)
+	GENL_tla_expected(DRBD_NLA_RESOURCE_INFO, DRBD_GENLA_F_MANDATORY)
+	GENL_tla_expected(DRBD_NLA_RESOURCE_STATISTICS, DRBD_GENLA_F_MANDATORY))
 
 GENL_op(DRBD_ADM_GET_DEVICES, 31,
-	 GENL_op_init(
-		 .dumpit = drbd_adm_dump_devices,
-		 .done = drbd_adm_dump_devices_done,
-	 ),
-	 GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_GENLA_F_MANDATORY)
-	 GENL_tla_expected(DRBD_NLA_DEVICE_INFO, DRBD_GENLA_F_MANDATORY)
-	 GENL_tla_expected(DRBD_NLA_DEVICE_STATISTICS, DRBD_GENLA_F_MANDATORY))
+	GENL_op_init(
+		.dumpit = drbd_adm_dump_devices,
+		.done = drbd_adm_dump_devices_done,
+	),
+	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_GENLA_F_MANDATORY)
+	GENL_tla_expected(DRBD_NLA_DEVICE_INFO, DRBD_GENLA_F_MANDATORY)
+	GENL_tla_expected(DRBD_NLA_DEVICE_STATISTICS, DRBD_GENLA_F_MANDATORY))
 
 GENL_op(DRBD_ADM_GET_CONNECTIONS, 32,
-	 GENL_op_init(
-		 .dumpit = drbd_adm_dump_connections,
-		 .done = drbd_adm_dump_connections_done,
-	 ),
-	 GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_GENLA_F_MANDATORY)
-	 GENL_tla_expected(DRBD_NLA_CONNECTION_INFO, DRBD_GENLA_F_MANDATORY)
-	 GENL_tla_expected(DRBD_NLA_CONNECTION_STATISTICS, DRBD_GENLA_F_MANDATORY))
+	GENL_op_init(
+		.dumpit = drbd_adm_dump_connections,
+		.done = drbd_adm_dump_connections_done,
+	),
+	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_GENLA_F_MANDATORY)
+	GENL_tla_expected(DRBD_NLA_CONNECTION_INFO, DRBD_GENLA_F_MANDATORY)
+	GENL_tla_expected(DRBD_NLA_CONNECTION_STATISTICS, DRBD_GENLA_F_MANDATORY))
 
 GENL_op(DRBD_ADM_GET_PEER_DEVICES, 33,
-	 GENL_op_init(
-		 .dumpit = drbd_adm_dump_peer_devices,
-		 .done = drbd_adm_dump_peer_devices_done,
-	 ),
-	 GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_GENLA_F_MANDATORY)
-	 GENL_tla_expected(DRBD_NLA_PEER_DEVICE_INFO, DRBD_GENLA_F_MANDATORY)
-	 GENL_tla_expected(DRBD_NLA_PEER_DEVICE_STATISTICS, DRBD_GENLA_F_MANDATORY))
+	GENL_op_init(
+		.dumpit = drbd_adm_dump_peer_devices,
+		.done = drbd_adm_dump_peer_devices_done,
+	),
+	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_GENLA_F_MANDATORY)
+	GENL_tla_expected(DRBD_NLA_PEER_DEVICE_INFO, DRBD_GENLA_F_MANDATORY)
+	GENL_tla_expected(DRBD_NLA_PEER_DEVICE_STATISTICS, DRBD_GENLA_F_MANDATORY))
+
+GENL_op(DRBD_ADM_GET_PATHS, 50,
+	GENL_op_init(
+		.dumpit = drbd_adm_dump_paths,
+		.done = drbd_adm_dump_paths_done,
+	),
+	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_GENLA_F_MANDATORY)
+	GENL_tla_expected(DRBD_NLA_PATH_INFO, DRBD_GENLA_F_MANDATORY))
 
 GENL_notification(
 	DRBD_RESOURCE_STATE, 34, events,
@@ -509,6 +587,7 @@ GENL_notification(
 	DRBD_CONNECTION_STATE, 36, events,
 	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED)
 	GENL_tla_expected(DRBD_NLA_NOTIFICATION_HEADER, DRBD_F_REQUIRED)
+	GENL_tla_expected(DRBD_NLA_PATH_PARMS, DRBD_GENLA_F_MANDATORY)
 	GENL_tla_expected(DRBD_NLA_CONNECTION_INFO, DRBD_F_REQUIRED)
 	GENL_tla_expected(DRBD_NLA_CONNECTION_STATISTICS, DRBD_F_REQUIRED))
 
@@ -522,7 +601,8 @@ GENL_notification(
 GENL_op(
 	DRBD_ADM_GET_INITIAL_STATE, 38,
 	GENL_op_init(
-	        .dumpit = drbd_adm_get_initial_state,
+		.dumpit = drbd_adm_get_initial_state,
+		.done = drbd_adm_get_initial_state_done,
 	),
 	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_GENLA_F_MANDATORY))
 
@@ -534,3 +614,21 @@ GENL_notification(
 GENL_notification(
 	DRBD_INITIAL_STATE_DONE, 41, events,
 	GENL_tla_expected(DRBD_NLA_NOTIFICATION_HEADER, DRBD_F_REQUIRED))
+
+GENL_op(DRBD_ADM_FORGET_PEER,		42, GENL_doit(drbd_adm_forget_peer),
+	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED)
+	GENL_tla_expected(DRBD_NLA_FORGET_PEER_PARMS, DRBD_F_REQUIRED))
+
+GENL_op(DRBD_ADM_CHG_PEER_DEVICE_OPTS, 43,
+	GENL_doit(drbd_adm_peer_device_opts),
+	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED)
+	GENL_tla_expected(DRBD_NLA_PEER_DEVICE_OPTS, DRBD_F_REQUIRED))
+
+GENL_op(DRBD_ADM_RENAME_RESOURCE,		49, GENL_doit(drbd_adm_rename_resource),
+	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED)
+	GENL_tla_expected(DRBD_NLA_RENAME_RESOURCE_PARMS, DRBD_F_REQUIRED))
+
+GENL_notification(
+	DRBD_PATH_STATE, 48, events,
+	GENL_tla_expected(DRBD_NLA_CFG_CONTEXT, DRBD_F_REQUIRED)
+	GENL_tla_expected(DRBD_NLA_PATH_INFO, DRBD_F_REQUIRED))
diff --git a/include/linux/drbd_limits.h b/include/linux/drbd_limits.h
index 5b042fb427e9..ed38f94d43c6 100644
--- a/include/linux/drbd_limits.h
+++ b/include/linux/drbd_limits.h
@@ -64,7 +64,7 @@
 #define DRBD_DISK_TIMEOUT_DEF 0U    /* disabled */
 #define DRBD_DISK_TIMEOUT_SCALE '1'
 
-  /* active connection retries when C_WF_CONNECTION */
+  /* active connection retries when C_CONNECTING */
 #define DRBD_CONNECT_INT_MIN 1U
 #define DRBD_CONNECT_INT_MAX 120U
 #define DRBD_CONNECT_INT_DEF 10U   /* seconds */
@@ -88,14 +88,13 @@
 #define DRBD_MAX_EPOCH_SIZE_DEF 2048U
 #define DRBD_MAX_EPOCH_SIZE_SCALE '1'
 
-  /* I don't think that a tcp send buffer of more than 10M is useful */
 #define DRBD_SNDBUF_SIZE_MIN  0U
-#define DRBD_SNDBUF_SIZE_MAX  (10U<<20)
+#define DRBD_SNDBUF_SIZE_MAX  (128U<<20)
 #define DRBD_SNDBUF_SIZE_DEF  0U
 #define DRBD_SNDBUF_SIZE_SCALE '1'
 
 #define DRBD_RCVBUF_SIZE_MIN  0U
-#define DRBD_RCVBUF_SIZE_MAX  (10U<<20)
+#define DRBD_RCVBUF_SIZE_MAX  (128U<<20)
 #define DRBD_RCVBUF_SIZE_DEF  0U
 #define DRBD_RCVBUF_SIZE_SCALE '1'
 
@@ -117,16 +116,19 @@
 #define DRBD_KO_COUNT_MAX  200U
 #define DRBD_KO_COUNT_DEF  7U
 #define DRBD_KO_COUNT_SCALE '1'
+
+#define DRBD_ALLOW_REMOTE_READ_DEF 1U
 /* } */
 
 /* syncer { */
   /* FIXME allow rate to be zero? */
 #define DRBD_RESYNC_RATE_MIN 1U
 /* channel bonding 10 GbE, or other hardware */
-#define DRBD_RESYNC_RATE_MAX (4 << 20)
+#define DRBD_RESYNC_RATE_MAX (8U << 20)
 #define DRBD_RESYNC_RATE_DEF 250U
 #define DRBD_RESYNC_RATE_SCALE 'k'  /* kilobytes */
 
+  /* fewer than 67 would hurt performance unnecessarily. */
 #define DRBD_AL_EXTENTS_MIN  67U
   /* we use u16 as "slot number", (u16)~0 is "FREE".
    * If you use >= 292 kB on-disk ring buffer,
@@ -182,7 +184,7 @@
 #define DRBD_C_FILL_TARGET_DEF 100U /* Try to place 50KiB in socket send buffer during resync */
 #define DRBD_C_FILL_TARGET_SCALE 's'  /* sectors */
 
-#define DRBD_C_MAX_RATE_MIN     250U
+#define DRBD_C_MAX_RATE_MIN     0U
 #define DRBD_C_MAX_RATE_MAX     (4U << 20)
 #define DRBD_C_MAX_RATE_DEF     102400U
 #define DRBD_C_MAX_RATE_SCALE	'k'  /* kilobytes */
@@ -207,10 +209,11 @@
 #define DRBD_DISK_BARRIER_DEF	0U
 #define DRBD_DISK_FLUSHES_DEF	1U
 #define DRBD_DISK_DRAIN_DEF	1U
+#define DRBD_DISK_DISKLESS_DEF	0U
 #define DRBD_MD_FLUSHES_DEF	1U
 #define DRBD_TCP_CORK_DEF	1U
 #define DRBD_AL_UPDATES_DEF     1U
-
+#define DRBD_INVALIDATE_RESET_BITMAP_DEF 1U
 /* We used to ignore the discard_zeroes_data setting.
  * To not change established (and expected) behaviour,
  * by default assume that, for discard_zeroes_data=0,
@@ -227,6 +230,52 @@
 #define DRBD_ALWAYS_ASBP_DEF	0U
 #define DRBD_USE_RLE_DEF	1U
 #define DRBD_CSUMS_AFTER_CRASH_ONLY_DEF 0U
+#define DRBD_AUTO_PROMOTE_DEF	1U
+#define DRBD_BITMAP_DEF         1U
+#define DRBD_RESYNC_WITHOUT_REPLICATION_DEF 1U
+
+#define DRBD_NR_REQUESTS_MIN	4U
+#define DRBD_NR_REQUESTS_DEF	8000U
+#define DRBD_NR_REQUESTS_MAX	-1U
+#define DRBD_NR_REQUESTS_SCALE	'1'
+
+#define DRBD_MAX_BIO_SIZE_DEF	DRBD_MAX_BIO_SIZE
+#define DRBD_MAX_BIO_SIZE_MIN	(1U << 9)
+#define DRBD_MAX_BIO_SIZE_MAX	DRBD_MAX_BIO_SIZE
+#define DRBD_MAX_BIO_SIZE_SCALE '1'
+
+#define DRBD_NODE_ID_DEF		0U
+#define DRBD_NODE_ID_MIN		0U
+#ifndef DRBD_NODE_ID_MAX /* Is also defined in drbd.h */
+#define DRBD_NODE_ID_MAX		DRBD_PEERS_MAX
+#endif
+#define DRBD_NODE_ID_SCALE		'1'
+
+#define DRBD_PEER_ACK_WINDOW_DEF	4096U   /* 2 MiByte */
+#define DRBD_PEER_ACK_WINDOW_MIN	2048U   /* 1 MiByte */
+#define DRBD_PEER_ACK_WINDOW_MAX	204800U /* 100 MiByte */
+#define DRBD_PEER_ACK_WINDOW_SCALE 's' /* sectors */
+
+#define DRBD_PEER_ACK_DELAY_DEF	100U    /* 100ms */
+#define DRBD_PEER_ACK_DELAY_MIN 1U
+#define DRBD_PEER_ACK_DELAY_MAX 10000U  /* 10 seconds */
+#define DRBD_PEER_ACK_DELAY_SCALE '1' /* milliseconds */
+
+/* Two-phase commit timeout (1/10 seconds). */
+#define DRBD_TWOPC_TIMEOUT_MIN	50U
+#define DRBD_TWOPC_TIMEOUT_MAX	600U
+#define DRBD_TWOPC_TIMEOUT_DEF	300U
+#define DRBD_TWOPC_TIMEOUT_SCALE '1'
+
+#define DRBD_TWOPC_RETRY_TIMEOUT_MIN 1U
+#define DRBD_TWOPC_RETRY_TIMEOUT_MAX 50U
+#define DRBD_TWOPC_RETRY_TIMEOUT_DEF 1U
+#define DRBD_TWOPC_RETRY_TIMEOUT_SCALE '1'
+
+#define DRBD_SYNC_FROM_NID_DEF -1
+#define DRBD_SYNC_FROM_NID_MIN -1
+#define DRBD_SYNC_FROM_NID_MAX DRBD_PEERS_MAX
+#define DRBD_SYNC_FROM_NID_SCALE '1'
 
 #define DRBD_AL_STRIPES_MIN     1U
 #define DRBD_AL_STRIPES_MAX     1024U
@@ -243,9 +292,51 @@
 #define DRBD_SOCKET_CHECK_TIMEO_DEF 0U
 #define DRBD_SOCKET_CHECK_TIMEO_SCALE '1'
 
+/* Auto promote timeout (1/10 seconds). */
+#define DRBD_AUTO_PROMOTE_TIMEOUT_MIN 0U
+#define DRBD_AUTO_PROMOTE_TIMEOUT_MAX 600U
+#define DRBD_AUTO_PROMOTE_TIMEOUT_DEF 20U
+#define DRBD_AUTO_PROMOTE_TIMEOUT_SCALE '1'
+
 #define DRBD_RS_DISCARD_GRANULARITY_MIN 0U
 #define DRBD_RS_DISCARD_GRANULARITY_MAX (1U<<20)  /* 1MiByte */
 #define DRBD_RS_DISCARD_GRANULARITY_DEF 0U     /* disabled by default */
 #define DRBD_RS_DISCARD_GRANULARITY_SCALE '1' /* bytes */
 
+#define DRBD_QUORUM_MIN 0U
+#define DRBD_QUORUM_MAX QOU_ALL /* Note: user visible min/max different */
+#define DRBD_QUORUM_DEF QOU_OFF /* kernel min/max includes symbolic values */
+#define DRBD_QUORUM_SCALE '1' /* nodes */
+
+#define DRBD_BLOCK_SIZE_MIN 512
+#define DRBD_BLOCK_SIZE_MAX 4096
+#define DRBD_BLOCK_SIZE_DEF 512
+#define DRBD_BLOCK_SIZE_SCALE '1' /* Bytes */
+
+/* By default freeze I/O; the alternative is to fail all I/O as quickly as possible */
+#define DRBD_ON_NO_QUORUM_DEF ONQ_SUSPEND_IO
+
+#define DRBD_ON_SUSP_PRI_OUTD_DEF SPO_DISCONNECT
+#define DRBD_DRBD8_COMPAT_MODE_DEF 0U
+
+#define DRBD_TLS_DEF 0U /* disabled by default */
+#define DRBD_TLS_PRIVKEY_DEF 0 /* disabled by default */
+#define DRBD_TLS_CERTIFICATE_DEF 0 /* disabled by default */
+#define DRBD_TLS_KEYRING_DEF 0 /* disabled by default */
+
+#define DRBD_LOAD_BALANCE_PATHS_DEF 0U
+
+#define DRBD_RDMA_CTRL_RCVBUF_SIZE_MIN  0U
+#define DRBD_RDMA_CTRL_RCVBUF_SIZE_MAX  (10U<<20)
+#define DRBD_RDMA_CTRL_RCVBUF_SIZE_DEF 0
+#define DRBD_RDMA_CTRL_RCVBUF_SIZE_SCALE '1'
+
+#define DRBD_RDMA_CTRL_SNDBUF_SIZE_MIN  0U
+#define DRBD_RDMA_CTRL_SNDBUF_SIZE_MAX  (10U<<20)
+#define DRBD_RDMA_CTRL_SNDBUF_SIZE_DEF 0
+#define DRBD_RDMA_CTRL_SNDBUF_SIZE_SCALE '1'
+
+/* Enable bdev_freeze/lockfs by default */
+#define DRBD_SUSPEND_IO_BDEV_FREEZE_DEF 1U
+
 #endif
diff --git a/include/linux/genl_magic_func.h b/include/linux/genl_magic_func.h
index d4da060b7532..db462b860d18 100644
--- a/include/linux/genl_magic_func.h
+++ b/include/linux/genl_magic_func.h
@@ -130,41 +130,53 @@ static void dprint_array(const char *dir, int nla_type,
  *									{{{2
  */
 
-/* processing of generic netlink messages is serialized.
- * use one static buffer for parsing of nested attributes */
-static struct nlattr *nested_attr_tb[128];
-
 #undef GENL_struct
 #define GENL_struct(tag_name, tag_number, s_name, s_fields)		\
-/* *_from_attrs functions are static, but potentially unused */		\
 static int __ ## s_name ## _from_attrs(struct s_name *s,		\
+		struct nlattr ***ret_nested_attribute_table,		\
 		struct genl_info *info, bool exclude_invariants)	\
 {									\
 	const int maxtype = ARRAY_SIZE(s_name ## _nl_policy)-1;		\
 	struct nlattr *tla = info->attrs[tag_number];			\
-	struct nlattr **ntb = nested_attr_tb;				\
+	struct nlattr **ntb;						\
 	struct nlattr *nla;						\
-	int err;							\
-	BUILD_BUG_ON(ARRAY_SIZE(s_name ## _nl_policy) > ARRAY_SIZE(nested_attr_tb));	\
+	int err = 0;							\
+	if (ret_nested_attribute_table)					\
+		*ret_nested_attribute_table = NULL;			\
 	if (!tla)							\
 		return -ENOMSG;						\
+	ntb = kcalloc(ARRAY_SIZE(s_name ## _nl_policy), sizeof(*ntb), GFP_KERNEL); \
+	if (!ntb)							\
+		return -ENOMEM;						\
 	DPRINT_TLA(#s_name, "<=-", #tag_name);				\
 	err = drbd_nla_parse_nested(ntb, maxtype, tla, s_name ## _nl_policy);	\
 	if (err)							\
-		return err;						\
+		goto out;						\
 									\
 	s_fields							\
-	return 0;							\
+ out:									\
+	if (!err && ret_nested_attribute_table)				\
+		*ret_nested_attribute_table = ntb;			\
+	else								\
+		kfree(ntb);						\
+	return err;							\
 }					__attribute__((unused))		\
 static int s_name ## _from_attrs(struct s_name *s,			\
 						struct genl_info *info)	\
 {									\
-	return __ ## s_name ## _from_attrs(s, info, false);		\
+	return __ ## s_name ## _from_attrs(s, NULL, info, false);	\
+}					__attribute__((unused))		\
+static int s_name ## _ntb_from_attrs(					\
+			struct nlattr ***ret_nested_attribute_table,	\
+						struct genl_info *info)	\
+{									\
+	return __ ## s_name ## _from_attrs(NULL,			\
+			ret_nested_attribute_table, info, false);	\
 }					__attribute__((unused))		\
 static int s_name ## _from_attrs_for_change(struct s_name *s,		\
 						struct genl_info *info)	\
 {									\
-	return __ ## s_name ## _from_attrs(s, info, true);		\
+	return __ ## s_name ## _from_attrs(s, NULL, info, true);	\
 }					__attribute__((unused))		\
 
 #define __assign(attr_nr, attr_flag, name, nla_type, type, assignment...)	\
@@ -172,7 +184,8 @@ static int s_name ## _from_attrs_for_change(struct s_name *s,		\
 		if (nla) {						\
 			if (exclude_invariants && !!((attr_flag) & DRBD_F_INVARIANT)) {		\
 				pr_info("<< must not change invariant attr: %s\n", #name);	\
-				return -EEXIST;				\
+				err = -EEXIST;				\
+				goto out;				\
 			}						\
 			assignment;					\
 		} else if (exclude_invariants && !!((attr_flag) & DRBD_F_INVARIANT)) {		\
@@ -180,7 +193,8 @@ static int s_name ## _from_attrs_for_change(struct s_name *s,		\
 			/* which was expected */			\
 		} else if ((attr_flag) & DRBD_F_REQUIRED) {		\
 			pr_info("<< missing attr: %s\n", #name);	\
-			return -ENOMSG;					\
+			err = -ENOMSG;					\
+			goto out;					\
 		}
 
 #undef __field
@@ -271,12 +285,12 @@ enum CONCATENATE(GENL_MAGIC_FAMILY, group_ids) {
 #undef GENL_mc_group
 #define GENL_mc_group(group)						\
 static int CONCATENATE(GENL_MAGIC_FAMILY, _genl_multicast_ ## group)(	\
-	struct sk_buff *skb, gfp_t flags)				\
+	struct sk_buff *skb)						\
 {									\
 	unsigned int group_id =						\
 		CONCATENATE(GENL_MAGIC_FAMILY, _group_ ## group);		\
-	return genlmsg_multicast(&ZZZ_genl_family, skb, 0,		\
-				 group_id, flags);			\
+	return genlmsg_multicast_allns(&ZZZ_genl_family, skb, 0,	\
+				 group_id);				\
 }
 
 #include GENL_MAGIC_INCLUDE_FILE
@@ -298,6 +312,8 @@ static struct genl_family ZZZ_genl_family __ro_after_init = {
 	.resv_start_op = 42, /* drbd is currently the only user */
 	.n_mcgrps = ARRAY_SIZE(ZZZ_genl_mcgrps),
 	.module = THIS_MODULE,
+	.netnsok = false,
+	.parallel_ops = true,
 };
 
 int CONCATENATE(GENL_MAGIC_FAMILY, _genl_register)(void)
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 13/20] drbd: rewrite state machine for DRBD 9 multi-peer clusters
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
                   ` (11 preceding siblings ...)
  2026-03-27 22:38 ` [PATCH 12/20] drbd: replace per-device state model with multi-peer data structures Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 14/20] drbd: rework activity log and bitmap for multi-peer replication Christoph Böhmwalder
                   ` (6 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder, Joel Colledge

Replace the monolithic DRBD 8.4 state machine with an architecture
suited for clusters with more than one peer.
The central concept is a transactional model: state is held in
per-object arrays indexed by [NOW] and [NEW], and every change is
bracketed by begin/end calls that validate the proposed transition
resource-wide before atomically committing it or rolling it back.
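
Reduced to a minimal sketch (the demo_* names and the toy validation
rule below are made up for illustration; the real begin/end helpers
operate on the whole resource and take additional context):

	enum which_state { NOW, NEW };
	enum demo_cstate { C_STANDALONE, C_CONNECTING, C_CONNECTED };

	struct demo_connection {
		enum demo_cstate cstate[2];	/* indexed by NOW / NEW */
	};

	/* begin: the proposed state starts as a copy of the committed one */
	static void demo_begin_state_change(struct demo_connection *c)
	{
		c->cstate[NEW] = c->cstate[NOW];
	}

	/* end: validate the NOW -> NEW transition, then commit it
	 * atomically or roll it back */
	static int demo_end_state_change(struct demo_connection *c)
	{
		if (c->cstate[NOW] == C_STANDALONE &&
		    c->cstate[NEW] == C_CONNECTED)
			return -1;	/* invalid: must go via C_CONNECTING */
		c->cstate[NOW] = c->cstate[NEW];
		return 0;
	}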

Replace the single spinlock that serialized everything with
finer-grained locking: a read-write lock for state access, separate
locks for peer requests and interval trees.

Cluster-wide state changes (role changes, connect/disconnect, resize)
use a two-phase commit protocol.
The initiating node sends a prepare message to all reachable peers,
collects replies with timeout and exponential backoff, then commits
or aborts.
Not-fully-connected topologies are handled by forwarding nested 2PC
rounds through intermediate nodes.
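
As a toy model of the phase-1 verdict (the twopc_* names here are
hypothetical; the real messages additionally carry a transaction id,
initiator/target node ids and reachability information):

	enum twopc_reply { TWOPC_YES, TWOPC_NO, TWOPC_TIMEOUT };

	/* commit only if every reachable peer acknowledged the prepare;
	 * any rejection or timeout aborts the round, after which the
	 * initiator may retry with an exponentially growing delay */
	static int twopc_may_commit(const enum twopc_reply *reply, int n_peers)
	{
		int i;

		for (i = 0; i < n_peers; i++)
			if (reply[i] != TWOPC_YES)
				return 0;
		return 1;
	}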

Add a quorum mechanism with tiebreaker support for even-sized clusters.
This can suspend or fail I/O on a node that loses contact with more
than half of the cluster.
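
The core arithmetic, simplified to a plain majority rule (a sketch
with a hypothetical name; the real code also honors the configured
quorum policy and minimum redundancy, see the quorum_info bookkeeping
in drbd_state.c):

	/* strict majority wins; on an exact 50/50 split in an even-sized
	 * cluster, the designated tiebreaker node decides */
	static int demo_have_quorum(int voters, int present, int tiebreaker_present)
	{
		if (2 * present > voters)
			return 1;
		if ((voters & 1) == 0 && 2 * present == voters)
			return tiebreaker_present;
		return 0;
	}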

Unify post-state-change processing into a single resource-wide work
item that handles UUID propagation, resync startup, I/O suspension,
metadata persistence, and netlink notifications for all objects in one
pass, replacing the separate per-device and per-connection callbacks
from 8.4.

Co-developed-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Co-developed-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Co-developed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Co-developed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/drbd_state.c | 7724 +++++++++++++++++++++++--------
 include/linux/drbd_genl.h       |    2 +
 include/linux/drbd_limits.h     |    7 +
 3 files changed, 5898 insertions(+), 1835 deletions(-)

diff --git a/drivers/block/drbd/drbd_state.c b/drivers/block/drbd/drbd_state.c
index adcba7f1d8ea..ab1ff6f85fb2 100644
--- a/drivers/block/drbd/drbd_state.c
+++ b/drivers/block/drbd/drbd_state.c
@@ -13,199 +13,414 @@
 
  */
 
-#include <linux/drbd_limits.h>
+#include <linux/random.h>
+#include <linux/jiffies.h>
 #include "drbd_int.h"
 #include "drbd_protocol.h"
 #include "drbd_req.h"
 #include "drbd_state_change.h"
 
-struct after_state_chg_work {
+
+struct after_state_change_work {
 	struct drbd_work w;
-	struct drbd_device *device;
-	union drbd_state os;
-	union drbd_state ns;
-	enum chg_state_flags flags;
-	struct completion *done;
 	struct drbd_state_change *state_change;
+	struct completion *done;
+};
+
+struct quorum_info {
+	int up_to_date;
+	int present;
+	int voters;
+	int quorum_at;
+	int min_redundancy_at;
+};
+
+struct quorum_detail {
+	int up_to_date;
+	int present;
+	int outdated;
+	int diskless;
+	int missing_diskless;
+	int quorumless;
+	int unknown;
+	int quorate_peers;
+};
+
+struct change_context {
+	struct drbd_resource *resource;
+	int vnr;
+	union drbd_state mask;
+	union drbd_state val;
+	int target_node_id;
+	enum chg_state_flags flags;
+	bool change_local_state_last;
+	const char **err_str;
+};
+
+enum change_phase {
+	PH_LOCAL_COMMIT,
+	PH_PREPARE,
+	PH_84_COMMIT,
+	PH_COMMIT,
 };
 
-enum sanitize_state_warnings {
-	NO_WARNING,
-	ABORTED_ONLINE_VERIFY,
-	ABORTED_RESYNC,
-	CONNECTION_LOST_NEGOTIATING,
-	IMPLICITLY_UPGRADED_DISK,
-	IMPLICITLY_UPGRADED_PDSK,
+struct change_disk_state_context {
+	struct change_context context;
+	struct drbd_device *device;
 };
 
+static bool lost_contact_to_peer_data(enum drbd_disk_state *peer_disk_state);
+static bool peer_returns_diskless(struct drbd_peer_device *peer_device,
+				  enum drbd_disk_state os, enum drbd_disk_state ns);
+static void print_state_change(struct drbd_resource *resource, const char *prefix, const char *tag);
+static void finish_state_change(struct drbd_resource *, const char *tag);
+static int w_after_state_change(struct drbd_work *w, int unused);
+static enum drbd_state_rv is_valid_soft_transition(struct drbd_resource *);
+static enum drbd_state_rv is_valid_transition(struct drbd_resource *resource);
+static void sanitize_state(struct drbd_resource *resource);
+static void ensure_exposed_data_uuid(struct drbd_device *device);
+static enum drbd_state_rv change_peer_state(struct drbd_connection *, int, union drbd_state,
+					    union drbd_state, unsigned long *);
+static void check_wrongly_set_mdf_exists(struct drbd_device *);
+static void update_members(struct drbd_resource *resource);
+static bool calc_data_accessible(struct drbd_state_change *state_change, int n_device,
+				 enum which_state which);
+
+/* We need to stay consistent if we are a neighbor of a diskless primary with
+   a different UUID. This function should be used if the device was D_UP_TO_DATE
+   before.
+ */
+static bool may_return_to_up_to_date(struct drbd_device *device, enum which_state which)
+{
+	struct drbd_peer_device *peer_device;
+	bool rv = true;
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		if (peer_device->disk_state[which] == D_DISKLESS &&
+		    peer_device->connection->peer_role[which] == R_PRIMARY &&
+		    peer_device->current_uuid != drbd_current_uuid(device)) {
+			rv = false;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return rv;
+}
+
+/**
+ * may_be_up_to_date()  -  check if transition from D_CONSISTENT to D_UP_TO_DATE is allowed
+ * @device: DRBD device.
+ * @which: OLD or NEW
+ *
+ * When fencing is enabled, the disk may only transition from D_CONSISTENT to
+ * D_UP_TO_DATE when all peers are either connected or outdated.
+ */
+static bool may_be_up_to_date(struct drbd_device *device, enum which_state which)
+{
+	bool all_peers_outdated = true;
+	int node_id;
+
+	if (!may_return_to_up_to_date(device, which))
+		return false;
+
+	rcu_read_lock();
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		struct drbd_peer_md *peer_md = &device->ldev->md.peers[node_id];
+		struct drbd_peer_device *peer_device;
+		enum drbd_disk_state peer_disk_state;
+		bool want_bitmap = true;
+
+		if (node_id == device->ldev->md.node_id)
+			continue;
+
+		if (!(peer_md->flags & MDF_HAVE_BITMAP) && !(peer_md->flags & MDF_NODE_EXISTS))
+			continue;
+
+		if (!(peer_md->flags & MDF_PEER_FENCING))
+			continue;
+		peer_device = peer_device_by_node_id(device, node_id);
+		if (peer_device) {
+			struct peer_device_conf *pdc = rcu_dereference(peer_device->conf);
+			want_bitmap = pdc->bitmap;
+			peer_disk_state = peer_device->disk_state[NEW];
+		} else {
+			peer_disk_state = D_UNKNOWN;
+		}
+
+		switch (peer_disk_state) {
+		case D_DISKLESS:
+			if (!(peer_md->flags & MDF_PEER_DEVICE_SEEN))
+				continue;
+			fallthrough;
+		case D_ATTACHING:
+		case D_DETACHING:
+		case D_FAILED:
+		case D_NEGOTIATING:
+		case D_UNKNOWN:
+			if (!want_bitmap)
+				continue;
+			if ((peer_md->flags & MDF_PEER_OUTDATED))
+				continue;
+			break;
+		case D_INCONSISTENT:
+		case D_OUTDATED:
+			continue;
+		case D_CONSISTENT:
+		case D_UP_TO_DATE:
+			/* These states imply that there is a connection. If there is
+			   a connection we do not need to insist that the peer was
+			   outdated. */
+			continue;
+		case D_MASK:
+			break;
+		}
+
+		all_peers_outdated = false;
+	}
+	rcu_read_unlock();
+	return all_peers_outdated;
+}
+
+static bool stable_up_to_date_neighbor(struct drbd_device *device)
+{
+	struct drbd_peer_device *peer_device;
+	bool rv = false;
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		if (peer_device->disk_state[NEW] == D_UP_TO_DATE &&
+		    peer_device->uuid_flags & UUID_FLAG_STABLE && /* primary is also stable */
+		    peer_device->current_uuid == drbd_current_uuid(device)) {
+			rv = true;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return rv;
+}
+
+/**
+ * disk_state_from_md()  -  determine initial disk state
+ * @device: DRBD device.
+ *
+ * When a disk is attached to a device, we set the disk state to D_NEGOTIATING.
+ * We then wait for all connected peers to send the peer disk state.  Once that
+ * has happened, we can determine the actual disk state based on the peer disk
+ * states and the state of the disk itself.
+ *
+ * The initial disk state becomes D_UP_TO_DATE without fencing or when we know
+ * that all peers have been outdated, and D_CONSISTENT otherwise.
+ *
+ * The caller either needs to have a get_ldev() reference, or need to call
+ * this function only if disk_state[NOW] >= D_NEGOTIATING and holding the
+ * state_rwlock.
+ */
+enum drbd_disk_state disk_state_from_md(struct drbd_device *device)
+{
+	enum drbd_disk_state disk_state;
+
+	if (!drbd_md_test_flag(device->ldev, MDF_CONSISTENT))
+		disk_state = D_INCONSISTENT;
+	else if (!drbd_md_test_flag(device->ldev, MDF_WAS_UP_TO_DATE))
+		disk_state = D_OUTDATED;
+	else
+		disk_state = may_be_up_to_date(device, NOW) ? D_UP_TO_DATE : D_CONSISTENT;
+
+	return disk_state;
+}
+
+bool is_suspended_fen(struct drbd_resource *resource, enum which_state which)
+{
+	struct drbd_connection *connection;
+	bool rv = false;
+
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		if (connection->susp_fen[which]) {
+			rv = true;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return rv;
+}
+
+bool resource_is_suspended(struct drbd_resource *resource, enum which_state which)
+{
+	bool rv = resource->susp_user[which] || resource->susp_nod[which] ||
+		resource->susp_quorum[which] ||	resource->susp_uuid[which];
+
+	if (rv)
+		return rv;
+
+	return is_suspended_fen(resource, which);
+}
+
 static void count_objects(struct drbd_resource *resource,
-			  unsigned int *n_devices,
-			  unsigned int *n_connections)
+			  struct drbd_state_change_object_count *ocnt)
 {
+	struct drbd_path *path;
 	struct drbd_device *device;
 	struct drbd_connection *connection;
 	int vnr;
 
-	*n_devices = 0;
-	*n_connections = 0;
+	lockdep_assert_held(&resource->state_rwlock);
+
+	ocnt->n_devices = 0;
+	ocnt->n_connections = 0;
+	ocnt->n_paths = 0;
 
 	idr_for_each_entry(&resource->devices, device, vnr)
-		(*n_devices)++;
-	for_each_connection(connection, resource)
-		(*n_connections)++;
+		ocnt->n_devices++;
+	for_each_connection(connection, resource) {
+		ocnt->n_connections++;
+		list_for_each_entry(path, &connection->transport.paths, list) {
+			ocnt->n_paths++;
+		}
+	}
 }
 
-static struct drbd_state_change *alloc_state_change(unsigned int n_devices, unsigned int n_connections, gfp_t gfp)
+static struct drbd_state_change *alloc_state_change(struct drbd_state_change_object_count *ocnt, gfp_t flags)
 {
 	struct drbd_state_change *state_change;
-	unsigned int size, n;
+	unsigned int size;
 
 	size = sizeof(struct drbd_state_change) +
-	       n_devices * sizeof(struct drbd_device_state_change) +
-	       n_connections * sizeof(struct drbd_connection_state_change) +
-	       n_devices * n_connections * sizeof(struct drbd_peer_device_state_change);
-	state_change = kmalloc(size, gfp);
+	       ocnt->n_devices * sizeof(struct drbd_device_state_change) +
+	       ocnt->n_connections * sizeof(struct drbd_connection_state_change) +
+	       ocnt->n_devices * ocnt->n_connections * sizeof(struct drbd_peer_device_state_change) +
+	       ocnt->n_paths * sizeof(struct drbd_path_state);
+	state_change = kzalloc(size, flags);
 	if (!state_change)
 		return NULL;
-	state_change->n_devices = n_devices;
-	state_change->n_connections = n_connections;
+	state_change->n_connections = ocnt->n_connections;
+	state_change->n_devices = ocnt->n_devices;
+	state_change->n_paths = ocnt->n_paths;
 	state_change->devices = (void *)(state_change + 1);
-	state_change->connections = (void *)&state_change->devices[n_devices];
-	state_change->peer_devices = (void *)&state_change->connections[n_connections];
-	state_change->resource->resource = NULL;
-	for (n = 0; n < n_devices; n++)
-		state_change->devices[n].device = NULL;
-	for (n = 0; n < n_connections; n++)
-		state_change->connections[n].connection = NULL;
+	state_change->connections = (void *)&state_change->devices[ocnt->n_devices];
+	state_change->peer_devices = (void *)&state_change->connections[ocnt->n_connections];
+	state_change->paths = (void *)&state_change->peer_devices[ocnt->n_devices*ocnt->n_connections];
 	return state_change;
 }
 
-struct drbd_state_change *remember_old_state(struct drbd_resource *resource, gfp_t gfp)
+struct drbd_state_change *remember_state_change(struct drbd_resource *resource, gfp_t gfp)
 {
 	struct drbd_state_change *state_change;
 	struct drbd_device *device;
-	unsigned int n_devices;
 	struct drbd_connection *connection;
-	unsigned int n_connections;
+	struct drbd_state_change_object_count ocnt;
 	int vnr;
 
 	struct drbd_device_state_change *device_state_change;
 	struct drbd_peer_device_state_change *peer_device_state_change;
 	struct drbd_connection_state_change *connection_state_change;
+	struct drbd_path_state *path_state; /* yes, not a _change :-( */
+
+	lockdep_assert_held(&resource->state_rwlock);
 
-	/* Caller holds req_lock spinlock.
-	 * No state, no device IDR, no connections lists can change. */
-	count_objects(resource, &n_devices, &n_connections);
-	state_change = alloc_state_change(n_devices, n_connections, gfp);
+	count_objects(resource, &ocnt);
+	state_change = alloc_state_change(&ocnt, gfp);
 	if (!state_change)
 		return NULL;
 
 	kref_get(&resource->kref);
 	state_change->resource->resource = resource;
-	state_change->resource->role[OLD] =
-		conn_highest_role(first_connection(resource));
-	state_change->resource->susp[OLD] = resource->susp;
-	state_change->resource->susp_nod[OLD] = resource->susp_nod;
-	state_change->resource->susp_fen[OLD] = resource->susp_fen;
-
-	connection_state_change = state_change->connections;
-	for_each_connection(connection, resource) {
-		kref_get(&connection->kref);
-		connection_state_change->connection = connection;
-		connection_state_change->cstate[OLD] =
-			connection->cstate;
-		connection_state_change->peer_role[OLD] =
-			conn_highest_peer(connection);
-		connection_state_change++;
-	}
+	memcpy(state_change->resource->role,
+	       resource->role, sizeof(resource->role));
+	memcpy(state_change->resource->susp,
+	       resource->susp_user, sizeof(resource->susp_user));
+	memcpy(state_change->resource->susp_nod,
+	       resource->susp_nod, sizeof(resource->susp_nod));
+	memcpy(state_change->resource->susp_uuid,
+	       resource->susp_uuid, sizeof(resource->susp_uuid));
+	memcpy(state_change->resource->fail_io,
+	       resource->fail_io, sizeof(resource->fail_io));
 
 	device_state_change = state_change->devices;
 	peer_device_state_change = state_change->peer_devices;
 	idr_for_each_entry(&resource->devices, device, vnr) {
+		struct drbd_peer_device *peer_device;
+
 		kref_get(&device->kref);
 		device_state_change->device = device;
-		device_state_change->disk_state[OLD] = device->state.disk;
+		memcpy(device_state_change->disk_state,
+		       device->disk_state, sizeof(device->disk_state));
+		memcpy(device_state_change->have_quorum,
+		       device->have_quorum, sizeof(device->have_quorum));
 
 		/* The peer_devices for each device have to be enumerated in
 		   the order of the connections. We may not use for_each_peer_device() here. */
 		for_each_connection(connection, resource) {
-			struct drbd_peer_device *peer_device;
-
 			peer_device = conn_peer_device(connection, device->vnr);
+
 			peer_device_state_change->peer_device = peer_device;
-			peer_device_state_change->disk_state[OLD] =
-				device->state.pdsk;
-			peer_device_state_change->repl_state[OLD] =
-				max_t(enum drbd_conns,
-				      C_WF_REPORT_PARAMS, device->state.conn);
-			peer_device_state_change->resync_susp_user[OLD] =
-				device->state.user_isp;
-			peer_device_state_change->resync_susp_peer[OLD] =
-				device->state.peer_isp;
-			peer_device_state_change->resync_susp_dependency[OLD] =
-				device->state.aftr_isp;
+			memcpy(peer_device_state_change->disk_state,
+			       peer_device->disk_state, sizeof(peer_device->disk_state));
+			memcpy(peer_device_state_change->repl_state,
+			       peer_device->repl_state, sizeof(peer_device->repl_state));
+			memcpy(peer_device_state_change->resync_susp_user,
+			       peer_device->resync_susp_user,
+			       sizeof(peer_device->resync_susp_user));
+			memcpy(peer_device_state_change->resync_susp_peer,
+			       peer_device->resync_susp_peer,
+			       sizeof(peer_device->resync_susp_peer));
+			memcpy(peer_device_state_change->resync_susp_dependency,
+			       peer_device->resync_susp_dependency,
+			       sizeof(peer_device->resync_susp_dependency));
+			memcpy(peer_device_state_change->resync_susp_other_c,
+			       peer_device->resync_susp_other_c,
+			       sizeof(peer_device->resync_susp_other_c));
+			memcpy(peer_device_state_change->resync_active,
+			       peer_device->resync_active,
+			       sizeof(peer_device->resync_active));
+			memcpy(peer_device_state_change->replication,
+			       peer_device->replication,
+			       sizeof(peer_device->replication));
+			memcpy(peer_device_state_change->peer_replication,
+			       peer_device->peer_replication,
+			       sizeof(peer_device->peer_replication));
 			peer_device_state_change++;
 		}
 		device_state_change++;
 	}
 
-	return state_change;
-}
-
-static void remember_new_state(struct drbd_state_change *state_change)
-{
-	struct drbd_resource_state_change *resource_state_change;
-	struct drbd_resource *resource;
-	unsigned int n;
-
-	if (!state_change)
-		return;
-
-	resource_state_change = &state_change->resource[0];
-	resource = resource_state_change->resource;
-
-	resource_state_change->role[NEW] =
-		conn_highest_role(first_connection(resource));
-	resource_state_change->susp[NEW] = resource->susp;
-	resource_state_change->susp_nod[NEW] = resource->susp_nod;
-	resource_state_change->susp_fen[NEW] = resource->susp_fen;
-
-	for (n = 0; n < state_change->n_devices; n++) {
-		struct drbd_device_state_change *device_state_change =
-			&state_change->devices[n];
-		struct drbd_device *device = device_state_change->device;
-
-		device_state_change->disk_state[NEW] = device->state.disk;
-	}
+	connection_state_change = state_change->connections;
+	path_state = state_change->paths;
+	for_each_connection(connection, resource) {
+		struct drbd_path *path;
 
-	for (n = 0; n < state_change->n_connections; n++) {
-		struct drbd_connection_state_change *connection_state_change =
-			&state_change->connections[n];
-		struct drbd_connection *connection =
-			connection_state_change->connection;
+		kref_get(&connection->kref);
+		connection_state_change->connection = connection;
+		memcpy(connection_state_change->cstate,
+		       connection->cstate, sizeof(connection->cstate));
+		memcpy(connection_state_change->peer_role,
+		       connection->peer_role, sizeof(connection->peer_role));
+		memcpy(connection_state_change->susp_fen,
+		       connection->susp_fen, sizeof(connection->susp_fen));
+
+		list_for_each_entry(path, &connection->transport.paths, list) {
+			/* Share the connection kref with above.
+			 * Could also share the pointer, but would then need to
+			 * remember an additional n_paths per connection
+			 * count/offset (connection_state_change->n_paths++)
+			 * to be able to associate the paths with their connection.
+			 * So why not directly store the pointer here again. */
+			path_state->connection = connection;
+			kref_get(&path->kref);
+			path_state->path = path;
+			path_state->path_established = test_bit(TR_ESTABLISHED, &path->flags);
+
+			path_state++;
+		}
 
-		connection_state_change->cstate[NEW] = connection->cstate;
-		connection_state_change->peer_role[NEW] =
-			conn_highest_peer(connection);
+		connection_state_change++;
 	}
 
-	for (n = 0; n < state_change->n_devices * state_change->n_connections; n++) {
-		struct drbd_peer_device_state_change *peer_device_state_change =
-			&state_change->peer_devices[n];
-		struct drbd_device *device =
-			peer_device_state_change->peer_device->device;
-		union drbd_dev_state state = device->state;
-
-		peer_device_state_change->disk_state[NEW] = state.pdsk;
-		peer_device_state_change->repl_state[NEW] =
-			max_t(enum drbd_conns, C_WF_REPORT_PARAMS, state.conn);
-		peer_device_state_change->resync_susp_user[NEW] =
-			state.user_isp;
-		peer_device_state_change->resync_susp_peer[NEW] =
-			state.peer_isp;
-		peer_device_state_change->resync_susp_dependency[NEW] =
-			state.aftr_isp;
-	}
+	return state_change;
 }
 
 void copy_old_to_new_state_change(struct drbd_state_change *state_change)
@@ -219,7 +434,8 @@ void copy_old_to_new_state_change(struct drbd_state_change *state_change)
 	OLD_TO_NEW(resource_state_change->role);
 	OLD_TO_NEW(resource_state_change->susp);
 	OLD_TO_NEW(resource_state_change->susp_nod);
-	OLD_TO_NEW(resource_state_change->susp_fen);
+	OLD_TO_NEW(resource_state_change->susp_uuid);
+	OLD_TO_NEW(resource_state_change->fail_io);
 
 	for (n_connection = 0; n_connection < state_change->n_connections; n_connection++) {
 		struct drbd_connection_state_change *connection_state_change =
@@ -227,6 +443,7 @@ void copy_old_to_new_state_change(struct drbd_state_change *state_change)
 
 		OLD_TO_NEW(connection_state_change->peer_role);
 		OLD_TO_NEW(connection_state_change->cstate);
+		OLD_TO_NEW(connection_state_change->susp_fen);
 	}
 
 	for (n_device = 0; n_device < state_change->n_devices; n_device++) {
@@ -234,6 +451,7 @@ void copy_old_to_new_state_change(struct drbd_state_change *state_change)
 			&state_change->devices[n_device];
 
 		OLD_TO_NEW(device_state_change->disk_state);
+		OLD_TO_NEW(device_state_change->have_quorum);
 	}
 
 	n_peer_devices = state_change->n_devices * state_change->n_connections;
@@ -246,6 +464,10 @@ void copy_old_to_new_state_change(struct drbd_state_change *state_change)
 		OLD_TO_NEW(p->resync_susp_user);
 		OLD_TO_NEW(p->resync_susp_peer);
 		OLD_TO_NEW(p->resync_susp_dependency);
+		OLD_TO_NEW(p->resync_susp_other_c);
+		OLD_TO_NEW(p->resync_active);
+		OLD_TO_NEW(p->replication);
+		OLD_TO_NEW(p->peer_replication);
 	}
 
 #undef OLD_TO_NEW
@@ -258,2140 +480,5972 @@ void forget_state_change(struct drbd_state_change *state_change)
 	if (!state_change)
 		return;
 
-	if (state_change->resource->resource)
+	if (state_change->resource->resource) {
 		kref_put(&state_change->resource->resource->kref, drbd_destroy_resource);
+	}
 	for (n = 0; n < state_change->n_devices; n++) {
 		struct drbd_device *device = state_change->devices[n].device;
 
-		if (device)
+		if (device) {
 			kref_put(&device->kref, drbd_destroy_device);
+		}
 	}
 	for (n = 0; n < state_change->n_connections; n++) {
 		struct drbd_connection *connection =
 			state_change->connections[n].connection;
 
-		if (connection)
+		if (connection) {
 			kref_put(&connection->kref, drbd_destroy_connection);
+		}
+	}
+	for (n = 0; n < state_change->n_paths; n++) {
+		struct drbd_path *path = state_change->paths[n].path;
+		if (path) {
+			kref_put(&path->kref, drbd_destroy_path);
+		}
 	}
 	kfree(state_change);
 }
 
-static int w_after_state_ch(struct drbd_work *w, int unused);
-static void after_state_ch(struct drbd_device *device, union drbd_state os,
-			   union drbd_state ns, enum chg_state_flags flags,
-			   struct drbd_state_change *);
-static enum drbd_state_rv is_valid_state(struct drbd_device *, union drbd_state);
-static enum drbd_state_rv is_valid_soft_transition(union drbd_state, union drbd_state, struct drbd_connection *);
-static enum drbd_state_rv is_valid_transition(union drbd_state os, union drbd_state ns);
-static union drbd_state sanitize_state(struct drbd_device *device, union drbd_state os,
-				       union drbd_state ns, enum sanitize_state_warnings *warn);
-
-static inline bool is_susp(union drbd_state s)
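+/*
+ * Check whether the staged NEW state differs from the OLD state in any
+ * tracked member, or whether something else forces going through the state
+ * change machinery (CS_FORCE_RECALC, a freshly received stable UUID).
+ */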
+static bool state_has_changed(struct drbd_resource *resource)
 {
-        return s.susp || s.susp_nod || s.susp_fen;
+	struct drbd_connection *connection;
+	struct drbd_device *device;
+	int vnr;
+
+	if (resource->state_change_flags & CS_FORCE_RECALC)
+		return true;
+
+	if (resource->role[OLD] != resource->role[NEW] ||
+	    resource->susp_user[OLD] != resource->susp_user[NEW] ||
+	    resource->susp_nod[OLD] != resource->susp_nod[NEW] ||
+	    resource->susp_quorum[OLD] != resource->susp_quorum[NEW] ||
+	    resource->susp_uuid[OLD] != resource->susp_uuid[NEW] ||
+	    resource->fail_io[OLD] != resource->fail_io[NEW])
+		return true;
+
+	for_each_connection(connection, resource) {
+		if (connection->cstate[OLD] != connection->cstate[NEW] ||
+		    connection->peer_role[OLD] != connection->peer_role[NEW] ||
+		    connection->susp_fen[OLD] != connection->susp_fen[NEW])
+			return true;
+	}
+
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		struct drbd_peer_device *peer_device;
+
+		if (device->disk_state[OLD] != device->disk_state[NEW] ||
+		    device->have_quorum[OLD] != device->have_quorum[NEW])
+			return true;
+
+		for_each_peer_device(peer_device, device) {
+			if (peer_device->disk_state[OLD] != peer_device->disk_state[NEW] ||
+			    peer_device->repl_state[OLD] != peer_device->repl_state[NEW] ||
+			    peer_device->resync_susp_user[OLD] !=
+				peer_device->resync_susp_user[NEW] ||
+			    peer_device->resync_susp_peer[OLD] !=
+				peer_device->resync_susp_peer[NEW] ||
+			    peer_device->resync_susp_dependency[OLD] !=
+				peer_device->resync_susp_dependency[NEW] ||
+			    peer_device->resync_susp_other_c[OLD] !=
+				peer_device->resync_susp_other_c[NEW] ||
+			    peer_device->resync_active[OLD] !=
+				peer_device->resync_active[NEW] ||
+			    peer_device->replication[OLD] !=
+				peer_device->replication[NEW] ||
+			    peer_device->peer_replication[OLD] !=
+				peer_device->peer_replication[NEW] ||
+			    peer_device->uuid_flags & UUID_FLAG_GOT_STABLE)
+				return true;
+		}
+	}
+	return false;
 }
 
-bool conn_all_vols_unconf(struct drbd_connection *connection)
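+/* Start staging a state change: initialize all NEW state members from NOW. */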
+static void ___begin_state_change(struct drbd_resource *resource)
 {
-	struct drbd_peer_device *peer_device;
-	bool rv = true;
+	struct drbd_connection *connection;
+	struct drbd_device *device;
 	int vnr;
 
-	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-		struct drbd_device *device = peer_device->device;
-		if (device->state.disk != D_DISKLESS ||
-		    device->state.conn != C_STANDALONE ||
-		    device->state.role != R_SECONDARY) {
-			rv = false;
-			break;
-		}
+	resource->role[NEW] = resource->role[NOW];
+	resource->susp_user[NEW] = resource->susp_user[NOW];
+	resource->susp_nod[NEW] = resource->susp_nod[NOW];
+	resource->susp_quorum[NEW] = resource->susp_quorum[NOW];
+	resource->susp_uuid[NEW] = resource->susp_uuid[NOW];
+	resource->fail_io[NEW] = resource->fail_io[NOW];
+
+	for_each_connection_rcu(connection, resource) {
+		connection->cstate[NEW] = connection->cstate[NOW];
+		connection->peer_role[NEW] = connection->peer_role[NOW];
+		connection->susp_fen[NEW] = connection->susp_fen[NOW];
 	}
-	rcu_read_unlock();
 
-	return rv;
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		struct drbd_peer_device *peer_device;
+
+		device->disk_state[NEW] = device->disk_state[NOW];
+		device->have_quorum[NEW] = device->have_quorum[NOW];
+
+		for_each_peer_device_rcu(peer_device, device) {
+			peer_device->disk_state[NEW] = peer_device->disk_state[NOW];
+			peer_device->repl_state[NEW] = peer_device->repl_state[NOW];
+			peer_device->resync_susp_user[NEW] =
+				peer_device->resync_susp_user[NOW];
+			peer_device->resync_susp_peer[NEW] =
+				peer_device->resync_susp_peer[NOW];
+			peer_device->resync_susp_dependency[NEW] =
+				peer_device->resync_susp_dependency[NOW];
+			peer_device->resync_susp_other_c[NEW] =
+				peer_device->resync_susp_other_c[NOW];
+			peer_device->resync_active[NEW] =
+				peer_device->resync_active[NOW];
+			peer_device->replication[NEW] =
+				peer_device->replication[NOW];
+			peer_device->peer_replication[NEW] =
+				peer_device->peer_replication[NOW];
+		}
+	}
 }
 
-/* Unfortunately the states where not correctly ordered, when
-   they where defined. therefore can not use max_t() here. */
-static enum drbd_role max_role(enum drbd_role role1, enum drbd_role role2)
+static void __begin_state_change(struct drbd_resource *resource)
 {
-	if (role1 == R_PRIMARY || role2 == R_PRIMARY)
-		return R_PRIMARY;
-	if (role1 == R_SECONDARY || role2 == R_SECONDARY)
-		return R_SECONDARY;
-	return R_UNKNOWN;
+	rcu_read_lock();
+	___begin_state_change(resource);
 }
 
-static enum drbd_role min_role(enum drbd_role role1, enum drbd_role role2)
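+/*
+ * Sanitize the staged state and validate the transition. Soft (user
+ * requested) transitions are checked in addition, unless CS_HARD is set.
+ */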
+static enum drbd_state_rv try_state_change(struct drbd_resource *resource)
 {
-	if (role1 == R_UNKNOWN || role2 == R_UNKNOWN)
-		return R_UNKNOWN;
-	if (role1 == R_SECONDARY || role2 == R_SECONDARY)
-		return R_SECONDARY;
-	return R_PRIMARY;
+	enum drbd_state_rv rv;
+
+	if (!state_has_changed(resource))
+		return SS_NOTHING_TO_DO;
+	sanitize_state(resource);
+	rv = is_valid_transition(resource);
+	if (rv >= SS_SUCCESS && !(resource->state_change_flags & CS_HARD))
+		rv = is_valid_soft_transition(resource);
+	return rv;
 }
 
-enum drbd_role conn_highest_role(struct drbd_connection *connection)
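+/*
+ * Apply exposed-data-UUID updates that were postponed while a two-phase
+ * commit was in progress. The update is only applied while the device has
+ * no usable local disk; otherwise it is canceled.
+ */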
+static void apply_update_to_exposed_data_uuid(struct drbd_resource *resource)
 {
-	enum drbd_role role = R_SECONDARY;
-	struct drbd_peer_device *peer_device;
+	struct drbd_device *device;
 	int vnr;
 
-	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-		struct drbd_device *device = peer_device->device;
-		role = max_role(role, device->state.role);
-	}
-	rcu_read_unlock();
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		u64 nedu = device->next_exposed_data_uuid;
+		int changed = 0;
 
-	return role;
+		if (!nedu)
+			continue;
+		if (device->disk_state[NOW] < D_INCONSISTENT)
+			changed = drbd_uuid_set_exposed(device, nedu, false);
+
+		device->next_exposed_data_uuid = 0;
+		if (changed)
+			drbd_info(device, "Executing delayed exposed data uuid update: %016llX\n",
+				  (unsigned long long)device->exposed_data_uuid);
+		else
+			drbd_info(device, "Canceling delayed exposed data uuid update\n");
+	}
 }
 
-enum drbd_role conn_highest_peer(struct drbd_connection *connection)
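+/*
+ * Forget an ongoing remote (two-phase commit) state change: reset the twopc
+ * bookkeeping, abort a connect attempt whose prepare was never answered,
+ * and wake up everyone waiting for the two-phase commit to finish.
+ */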
+void __clear_remote_state_change(struct drbd_resource *resource)
 {
-	enum drbd_role peer = R_UNKNOWN;
-	struct drbd_peer_device *peer_device;
-	int vnr;
+	bool is_connect = resource->twopc_reply.is_connect;
+	int initiator_node_id = resource->twopc_reply.initiator_node_id;
 
-	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-		struct drbd_device *device = peer_device->device;
-		peer = max_role(peer, device->state.peer);
+	resource->remote_state_change = false;
+	resource->twopc_reply.initiator_node_id = -1;
+	resource->twopc_reply.tid = 0;
+
+	if (is_connect && resource->twopc_prepare_reply_cmd == 0) {
+		struct drbd_connection *connection;
+
+		rcu_read_lock();
+		connection = drbd_connection_by_node_id(resource, initiator_node_id);
+		if (connection)
+			abort_connect(connection);
+		rcu_read_unlock();
 	}
-	rcu_read_unlock();
 
-	return peer;
+	wake_up_all(&resource->twopc_wait);
+
+	/* Do things that were postponed until after the two-phase commit finished */
+	apply_update_to_exposed_data_uuid(resource);
 }
 
-enum drbd_disk_state conn_highest_disk(struct drbd_connection *connection)
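+/*
+ * Determine whether new IO may currently be accepted, considering the
+ * replication states towards all peers and the local disk state.
+ */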
+static bool state_is_stable(struct drbd_device *device)
 {
-	enum drbd_disk_state disk_state = D_DISKLESS;
 	struct drbd_peer_device *peer_device;
-	int vnr;
+	bool stable = true;
+
+	/* DO NOT add a default clause, we want the compiler to warn us
+	 * about any newly introduced state we may have forgotten to add here */
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-		struct drbd_device *device = peer_device->device;
-		disk_state = max_t(enum drbd_disk_state, disk_state, device->state.disk);
+	for_each_peer_device_rcu(peer_device, device) {
+		switch (peer_device->repl_state[NOW]) {
+		/* New io is only accepted when the peer device is unknown or there is
+		 * a well-established connection. */
+		case L_OFF:
+		case L_ESTABLISHED:
+		case L_SYNC_SOURCE:
+		case L_SYNC_TARGET:
+		case L_VERIFY_S:
+		case L_VERIFY_T:
+		case L_PAUSED_SYNC_S:
+		case L_PAUSED_SYNC_T:
+		case L_AHEAD:
+		case L_BEHIND:
+		case L_STARTING_SYNC_S:
+		case L_STARTING_SYNC_T:
+			break;
+
+		/* Allow IO in BM exchange states with new protocols */
+		case L_WF_BITMAP_S:
+			if (peer_device->connection->agreed_pro_version < 96)
+				stable = false;
+			break;
+
+		/* no new io accepted in these states */
+		case L_WF_BITMAP_T:
+		case L_WF_SYNC_UUID:
+			stable = false;
+			break;
+		}
+		if (!stable)
+			break;
 	}
 	rcu_read_unlock();
 
-	return disk_state;
+	switch (device->disk_state[NOW]) {
+	case D_DISKLESS:
+	case D_INCONSISTENT:
+	case D_OUTDATED:
+	case D_CONSISTENT:
+	case D_UP_TO_DATE:
+	case D_FAILED:
+	case D_DETACHING:
+		/* disk state is stable as well. */
+		break;
+
+	/* no new io accepted during transitional states */
+	case D_ATTACHING:
+	case D_NEGOTIATING:
+	case D_UNKNOWN:
+	case D_MASK:
+		stable = false;
+	}
+
+	return stable;
 }
 
-enum drbd_disk_state conn_lowest_disk(struct drbd_connection *connection)
+static bool drbd_state_change_is_connect(struct drbd_resource *resource)
 {
-	enum drbd_disk_state disk_state = D_MASK;
-	struct drbd_peer_device *peer_device;
-	int vnr;
+	struct drbd_connection *connection;
 
-	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-		struct drbd_device *device = peer_device->device;
-		disk_state = min_t(enum drbd_disk_state, disk_state, device->state.disk);
+	for_each_connection(connection, resource) {
+		if (connection->cstate[NOW] == C_CONNECTING &&
+				connection->cstate[NEW] == C_CONNECTED)
+			return true;
 	}
-	rcu_read_unlock();
 
-	return disk_state;
+	return false;
 }
 
-enum drbd_disk_state conn_highest_pdsk(struct drbd_connection *connection)
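+/* Remember this state change in a work item for the worker to process. */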
+static struct after_state_change_work *alloc_after_state_change_work(struct drbd_resource *resource)
 {
-	enum drbd_disk_state disk_state = D_DISKLESS;
-	struct drbd_peer_device *peer_device;
-	int vnr;
+	struct after_state_change_work *work;
 
-	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-		struct drbd_device *device = peer_device->device;
-		disk_state = max_t(enum drbd_disk_state, disk_state, device->state.pdsk);
+	lockdep_assert_held(&resource->state_rwlock);
+
+	/* If the resource is already "unregistered", the worker thread
+	 * is gone, there is no-one to consume the work item and release
+	 * the associated refcounts. Just don't even create it.
+	 */
+	if (test_bit(R_UNREGISTERED, &resource->flags))
+		return NULL;
+
+	work = kmalloc_obj(*work, GFP_ATOMIC);
+	if (work) {
+		work->state_change = remember_state_change(resource, GFP_ATOMIC);
+		if (!work->state_change) {
+			kfree(work);
+			work = NULL;
+		}
 	}
-	rcu_read_unlock();
+	if (!work)
+		drbd_err(resource, "Could not allocate after state change work\n");
 
-	return disk_state;
+	return work;
 }
 
-enum drbd_conns conn_lowest_conn(struct drbd_connection *connection)
+static void queue_after_state_change_work(struct drbd_resource *resource,
+					  struct completion *done,
+					  struct after_state_change_work *work)
 {
-	enum drbd_conns conn = C_MASK;
-	struct drbd_peer_device *peer_device;
-	int vnr;
+	if (work) {
+		work->w.cb = w_after_state_change;
+		work->done = done;
+		drbd_queue_work(&resource->work, &work->w);
+	} else if (done) {
+		complete(done);
+	}
+}
 
-	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-		struct drbd_device *device = peer_device->device;
-		conn = min_t(enum drbd_conns, conn, device->state.conn);
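+/*
+ * Commit (or abort) a staged state change: validate it once more, remember
+ * it for the after-state-change work, and make NEW the NOW state while
+ * updating the cached summary values.
+ */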
+static enum drbd_state_rv ___end_state_change(struct drbd_resource *resource, struct completion *done,
+					      enum drbd_state_rv rv, const char *tag)
+{
+	enum chg_state_flags flags = resource->state_change_flags;
+	struct drbd_connection *connection;
+	struct drbd_device *device;
+	bool is_connect;
+	unsigned int pro_ver;
+	int vnr;
+	bool all_devs_have_quorum = true;
+	struct after_state_change_work *work;
+
+	if (flags & CS_ABORT)
+		goto out;
+	if (rv >= SS_SUCCESS)
+		rv = try_state_change(resource);
+	if (rv < SS_SUCCESS) {
+		if (flags & CS_VERBOSE) {
+			drbd_err(resource, "State change failed: %s (%d)\n",
+					drbd_set_st_err_str(rv), rv);
+			print_state_change(resource, "Failed: ", tag);
+		}
+		goto out;
 	}
-	rcu_read_unlock();
+	if (flags & CS_PREPARE)
+		goto out;
 
-	return conn;
-}
+	update_members(resource);
+	finish_state_change(resource, tag);
 
-static bool no_peer_wf_report_params(struct drbd_connection *connection)
-{
-	struct drbd_peer_device *peer_device;
-	int vnr;
-	bool rv = true;
+	/* Check whether we are establishing a connection before applying the change. */
+	is_connect = drbd_state_change_is_connect(resource);
 
-	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
-		if (peer_device->device->state.conn == C_WF_REPORT_PARAMS) {
-			rv = false;
-			break;
+	/* This remembers the state change, so call before applying the change. */
+	work = alloc_after_state_change_work(resource);
+
+	/* changes to local_cnt and device flags should be visible before
+	 * changes to state, which again should be visible before anything else
+	 * depending on that change happens. */
+	smp_wmb();
+	resource->role[NOW] = resource->role[NEW];
+	resource->susp_user[NOW] = resource->susp_user[NEW];
+	resource->susp_nod[NOW] = resource->susp_nod[NEW];
+	resource->susp_quorum[NOW] = resource->susp_quorum[NEW];
+	resource->susp_uuid[NOW] = resource->susp_uuid[NEW];
+	resource->fail_io[NOW] = resource->fail_io[NEW];
+	resource->cached_susp = resource_is_suspended(resource, NEW);
+
+	pro_ver = PRO_VERSION_MAX;
+	for_each_connection(connection, resource) {
+		connection->cstate[NOW] = connection->cstate[NEW];
+		connection->peer_role[NOW] = connection->peer_role[NEW];
+		connection->susp_fen[NOW] = connection->susp_fen[NEW];
+
+		pro_ver = min_t(unsigned int, pro_ver,
+			connection->agreed_pro_version);
+
+		wake_up(&connection->ee_wait);
+	}
+	resource->cached_min_aggreed_protocol_version = pro_ver;
+
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		struct res_opts *o = &resource->res_opts;
+		struct drbd_peer_device *peer_device;
+
+		device->disk_state[NOW] = device->disk_state[NEW];
+		device->have_quorum[NOW] = device->have_quorum[NEW];
+
+		if (!device->have_quorum[NOW])
+			all_devs_have_quorum = false;
+
+		for_each_peer_device(peer_device, device) {
+			peer_device->disk_state[NOW] = peer_device->disk_state[NEW];
+			peer_device->repl_state[NOW] = peer_device->repl_state[NEW];
+			peer_device->resync_susp_user[NOW] =
+				peer_device->resync_susp_user[NEW];
+			peer_device->resync_susp_peer[NOW] =
+				peer_device->resync_susp_peer[NEW];
+			peer_device->resync_susp_dependency[NOW] =
+				peer_device->resync_susp_dependency[NEW];
+			peer_device->resync_susp_other_c[NOW] =
+				peer_device->resync_susp_other_c[NEW];
+			peer_device->resync_active[NOW] =
+				peer_device->resync_active[NEW];
+			peer_device->replication[NOW] =
+				peer_device->replication[NEW];
+			peer_device->peer_replication[NOW] =
+				peer_device->peer_replication[NEW];
 		}
-	rcu_read_unlock();
+		device->cached_state_unstable = !state_is_stable(device);
+		device->cached_err_io =
+			(o->on_no_quorum == ONQ_IO_ERROR && !device->have_quorum[NOW]) ||
+			(o->on_no_data == OND_IO_ERROR && !drbd_data_accessible(device, NOW)) ||
+			resource->fail_io[NEW];
+	}
+	resource->cached_all_devices_have_quorum = all_devs_have_quorum;
+	smp_wmb(); /* Make the NEW_CUR_UUID bit visible after the state change! */
 
-	return rv;
-}
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		struct drbd_peer_device *peer_device;
+
+		if (test_bit(__NEW_CUR_UUID, &device->flags)) {
+			clear_bit(__NEW_CUR_UUID, &device->flags);
+			set_bit(NEW_CUR_UUID, &device->flags);
+		}
+		ensure_exposed_data_uuid(device);
+
+		wake_up(&device->al_wait);
+		wake_up(&device->misc_wait);
+
+		/* Due to the exclusivity of two-phase commits, there can only
+		 * be one connection being established at once. Hence it is OK
+		 * to release uuid_sem for all connections if the state change
+		 * is establishing any connection. */
+		if (is_connect) {
+			for_each_peer_device(peer_device, device) {
+				if (test_and_clear_bit(HOLDING_UUID_READ_LOCK, &peer_device->flags))
+					up_read_non_owner(&device->uuid_sem);
+			}
+		}
+	}
 
-static void wake_up_all_devices(struct drbd_connection *connection)
-{
-	struct drbd_peer_device *peer_device;
-	int vnr;
+	wake_up_all(&resource->state_wait);
 
-	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
-		wake_up(&peer_device->device->state_wait);
+	/* Call this after applying the state change from NEW to NOW. */
+	queue_after_state_change_work(resource, done, work);
+out:
 	rcu_read_unlock();
 
-}
+	if ((flags & CS_TWOPC) && !(flags & CS_PREPARE))
+		__clear_remote_state_change(resource);
 
+	resource->state_change_err_str = NULL;
+	return rv;
+}
 
-/**
- * cl_wide_st_chg() - true if the state change is a cluster wide one
- * @device:	DRBD device.
- * @os:		old (current) state.
- * @ns:		new (wanted) state.
- */
-static int cl_wide_st_chg(struct drbd_device *device,
-			  union drbd_state os, union drbd_state ns)
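+/*
+ * Begin a locked state change section. With CS_SERIALIZE, also take the
+ * state semaphore, unless this change is already serialized or finishes a
+ * prepared one.
+ */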
+void state_change_lock(struct drbd_resource *resource, unsigned long *irq_flags, enum chg_state_flags flags)
 {
-	return (os.conn >= C_CONNECTED && ns.conn >= C_CONNECTED &&
-		 ((os.role != R_PRIMARY && ns.role == R_PRIMARY) ||
-		  (os.conn != C_STARTING_SYNC_T && ns.conn == C_STARTING_SYNC_T) ||
-		  (os.conn != C_STARTING_SYNC_S && ns.conn == C_STARTING_SYNC_S) ||
-		  (os.disk != D_FAILED && ns.disk == D_FAILED))) ||
-		(os.conn >= C_CONNECTED && ns.conn == C_DISCONNECTING) ||
-		(os.conn == C_CONNECTED && ns.conn == C_VERIFY_S) ||
-		(os.conn == C_CONNECTED && ns.conn == C_WF_REPORT_PARAMS);
+	if ((flags & CS_SERIALIZE) && !(flags & (CS_ALREADY_SERIALIZED | CS_PREPARED))) {
+		WARN_ONCE(current == resource->worker.task,
+			"worker should not initiate state changes with CS_SERIALIZE\n");
+		down(&resource->state_sem);
+	}
+	write_lock_irqsave(&resource->state_rwlock, *irq_flags);
+	resource->state_change_flags = flags;
 }
 
-static union drbd_state
-apply_mask_val(union drbd_state os, union drbd_state mask, union drbd_state val)
+static void __state_change_unlock(struct drbd_resource *resource, unsigned long *irq_flags, struct completion *done)
 {
-	union drbd_state ns;
-	ns.i = (os.i & ~mask.i) | val.i;
-	return ns;
+	enum chg_state_flags flags = resource->state_change_flags;
+
+	resource->state_change_flags = 0;
+	write_unlock_irqrestore(&resource->state_rwlock, *irq_flags);
+	if (done && expect(resource, current != resource->worker.task))
+		wait_for_completion(done);
+	if ((flags & CS_SERIALIZE) && !(flags & (CS_ALREADY_SERIALIZED | CS_PREPARE)))
+		up(&resource->state_sem);
 }
 
-enum drbd_state_rv
-drbd_change_state(struct drbd_device *device, enum chg_state_flags f,
-		  union drbd_state mask, union drbd_state val)
+void state_change_unlock(struct drbd_resource *resource, unsigned long *irq_flags)
 {
-	unsigned long flags;
-	union drbd_state ns;
-	enum drbd_state_rv rv;
-
-	spin_lock_irqsave(&device->resource->req_lock, flags);
-	ns = apply_mask_val(drbd_read_state(device), mask, val);
-	rv = _drbd_set_state(device, ns, f, NULL);
-	spin_unlock_irqrestore(&device->resource->req_lock, flags);
-
-	return rv;
+	__state_change_unlock(resource, irq_flags, NULL);
 }
 
-/**
- * drbd_force_state() - Impose a change which happens outside our control on our state
- * @device:	DRBD device.
- * @mask:	mask of state bits to change.
- * @val:	value of new state bits.
- */
-void drbd_force_state(struct drbd_device *device,
-	union drbd_state mask, union drbd_state val)
+void begin_state_change_locked(struct drbd_resource *resource, enum chg_state_flags flags)
 {
-	drbd_change_state(device, CS_HARD, mask, val);
+	BUG_ON(flags & (CS_SERIALIZE | CS_WAIT_COMPLETE | CS_PREPARE | CS_ABORT));
+	resource->state_change_flags = flags;
+	__begin_state_change(resource);
 }
 
-static enum drbd_state_rv
-_req_st_cond(struct drbd_device *device, union drbd_state mask,
-	     union drbd_state val)
+enum drbd_state_rv end_state_change_locked(struct drbd_resource *resource, const char *tag)
 {
-	union drbd_state os, ns;
-	unsigned long flags;
-	enum drbd_state_rv rv;
-
-	if (test_and_clear_bit(CL_ST_CHG_SUCCESS, &device->flags))
-		return SS_CW_SUCCESS;
+	return ___end_state_change(resource, NULL, SS_SUCCESS, tag);
+}
 
-	if (test_and_clear_bit(CL_ST_CHG_FAIL, &device->flags))
-		return SS_CW_FAILED_BY_PEER;
+void begin_state_change(struct drbd_resource *resource, unsigned long *irq_flags, enum chg_state_flags flags)
+{
+	state_change_lock(resource, irq_flags, flags);
+	__begin_state_change(resource);
+}
 
-	spin_lock_irqsave(&device->resource->req_lock, flags);
-	os = drbd_read_state(device);
-	ns = sanitize_state(device, os, apply_mask_val(os, mask, val), NULL);
-	rv = is_valid_transition(os, ns);
-	if (rv >= SS_SUCCESS)
-		rv = SS_UNKNOWN_ERROR;  /* cont waiting, otherwise fail. */
+static enum drbd_state_rv __end_state_change(struct drbd_resource *resource,
+					     unsigned long *irq_flags,
+					     enum drbd_state_rv rv,
+					     const char *tag)
+{
+	enum chg_state_flags flags = resource->state_change_flags;
+	struct completion __done, *done = NULL;
 
-	if (!cl_wide_st_chg(device, os, ns))
-		rv = SS_CW_NO_NEED;
-	if (rv == SS_UNKNOWN_ERROR) {
-		rv = is_valid_state(device, ns);
-		if (rv >= SS_SUCCESS) {
-			rv = is_valid_soft_transition(os, ns, first_peer_device(device)->connection);
-			if (rv >= SS_SUCCESS)
-				rv = SS_UNKNOWN_ERROR; /* cont waiting, otherwise fail. */
-		}
+	if ((flags & CS_WAIT_COMPLETE) && !(flags & (CS_PREPARE | CS_ABORT))) {
+		done = &__done;
+		init_completion(done);
 	}
-	spin_unlock_irqrestore(&device->resource->req_lock, flags);
-
+	rv = ___end_state_change(resource, done, rv, tag);
+	__state_change_unlock(resource, irq_flags, rv >= SS_SUCCESS ? done : NULL);
 	return rv;
 }
 
-/**
- * drbd_req_state() - Perform an eventually cluster wide state change
- * @device:	DRBD device.
- * @mask:	mask of state bits to change.
- * @val:	value of new state bits.
- * @f:		flags
- *
- * Should not be called directly, use drbd_request_state() or
- * _drbd_request_state().
- */
-static enum drbd_state_rv
-drbd_req_state(struct drbd_device *device, union drbd_state mask,
-	       union drbd_state val, enum chg_state_flags f)
+enum drbd_state_rv end_state_change(struct drbd_resource *resource, unsigned long *irq_flags,
+		const char *tag)
 {
-	struct completion done;
-	unsigned long flags;
-	union drbd_state os, ns;
-	enum drbd_state_rv rv;
-	void *buffer = NULL;
-
-	init_completion(&done);
-
-	if (f & CS_SERIALIZE)
-		mutex_lock(device->state_mutex);
-	if (f & CS_INHIBIT_MD_IO)
-		buffer = drbd_md_get_buffer(device, __func__);
-
-	spin_lock_irqsave(&device->resource->req_lock, flags);
-	os = drbd_read_state(device);
-	ns = sanitize_state(device, os, apply_mask_val(os, mask, val), NULL);
-	rv = is_valid_transition(os, ns);
-	if (rv < SS_SUCCESS) {
-		spin_unlock_irqrestore(&device->resource->req_lock, flags);
-		goto abort;
-	}
+	return __end_state_change(resource, irq_flags, SS_SUCCESS, tag);
+}
 
-	if (cl_wide_st_chg(device, os, ns)) {
-		rv = is_valid_state(device, ns);
-		if (rv == SS_SUCCESS)
-			rv = is_valid_soft_transition(os, ns, first_peer_device(device)->connection);
-		spin_unlock_irqrestore(&device->resource->req_lock, flags);
+void abort_state_change(struct drbd_resource *resource, unsigned long *irq_flags)
+{
+	resource->state_change_flags &= ~CS_VERBOSE;
+	__end_state_change(resource, irq_flags, SS_UNKNOWN_ERROR, NULL);
+}
 
-		if (rv < SS_SUCCESS) {
-			if (f & CS_VERBOSE)
-				print_st_err(device, os, ns, rv);
-			goto abort;
-		}
+void abort_state_change_locked(struct drbd_resource *resource)
+{
+	resource->state_change_flags &= ~CS_VERBOSE;
+	___end_state_change(resource, NULL, SS_UNKNOWN_ERROR, NULL);
+}
 
-		if (drbd_send_state_req(first_peer_device(device), mask, val)) {
-			rv = SS_CW_FAILED_BY_PEER;
-			if (f & CS_VERBOSE)
-				print_st_err(device, os, ns, rv);
-			goto abort;
-		}
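+/*
+ * Temporarily drop the state lock while a state change is negotiated with
+ * the peers; end_remote_state_change() re-acquires it and stages the
+ * change again.
+ */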
+static void begin_remote_state_change(struct drbd_resource *resource, unsigned long *irq_flags)
+{
+	rcu_read_unlock();
+	write_unlock_irqrestore(&resource->state_rwlock, *irq_flags);
+}
 
-		wait_event(device->state_wait,
-			(rv = _req_st_cond(device, mask, val)));
+static void __end_remote_state_change(struct drbd_resource *resource, enum chg_state_flags flags)
+{
+	rcu_read_lock();
+	resource->state_change_flags = flags;
+	___begin_state_change(resource);
+}
 
-		if (rv < SS_SUCCESS) {
-			if (f & CS_VERBOSE)
-				print_st_err(device, os, ns, rv);
-			goto abort;
-		}
-		spin_lock_irqsave(&device->resource->req_lock, flags);
-		ns = apply_mask_val(drbd_read_state(device), mask, val);
-		rv = _drbd_set_state(device, ns, f, &done);
-	} else {
-		rv = _drbd_set_state(device, ns, f, &done);
-	}
+static void end_remote_state_change(struct drbd_resource *resource, unsigned long *irq_flags, enum chg_state_flags flags)
+{
+	write_lock_irqsave(&resource->state_rwlock, *irq_flags);
+	__end_remote_state_change(resource, flags);
+}
 
-	spin_unlock_irqrestore(&device->resource->req_lock, flags);
+void clear_remote_state_change(struct drbd_resource *resource)
+{
+	unsigned long irq_flags;
 
-	if (f & CS_WAIT_COMPLETE && rv == SS_SUCCESS) {
-		D_ASSERT(device, current != first_peer_device(device)->connection->worker.task);
-		wait_for_completion(&done);
-	}
+	write_lock_irqsave(&resource->state_rwlock, irq_flags);
+	__clear_remote_state_change(resource);
+	write_unlock_irqrestore(&resource->state_rwlock, irq_flags);
+}
 
-abort:
-	if (buffer)
-		drbd_md_put_buffer(device);
-	if (f & CS_SERIALIZE)
-		mutex_unlock(device->state_mutex);
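+/*
+ * Map the multi-peer state onto the legacy union drbd_state; members that
+ * have no resource-wide equivalent are reported as undefined.
+ */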
+static union drbd_state drbd_get_resource_state(struct drbd_resource *resource, enum which_state which)
+{
+	union drbd_state rv = { {
+		.conn = C_STANDALONE,  /* really: undefined */
+		/* (user_isp, peer_isp, and aftr_isp are undefined as well.) */
+		.disk = D_UNKNOWN,  /* really: undefined */
+		.role = resource->role[which],
+		.peer = R_UNKNOWN,  /* really: undefined */
+		.susp = resource->susp_user[which] || resource->susp_quorum[which] || resource->susp_uuid[which],
+		.susp_nod = resource->susp_nod[which],
+		.susp_fen = is_suspended_fen(resource, which),
+		.pdsk = D_UNKNOWN,  /* really: undefined */
+	} };
 
 	return rv;
 }
 
-/**
- * _drbd_request_state() - Request a state change (with flags)
- * @device:	DRBD device.
- * @mask:	mask of state bits to change.
- * @val:	value of new state bits.
- * @f:		flags
- *
- * Cousin of drbd_request_state(), useful with the CS_WAIT_COMPLETE
- * flag, or when logging of failed state change requests is not desired.
- */
-enum drbd_state_rv
-_drbd_request_state(struct drbd_device *device, union drbd_state mask,
-		    union drbd_state val, enum chg_state_flags f)
+union drbd_state drbd_get_device_state(struct drbd_device *device, enum which_state which)
 {
-	enum drbd_state_rv rv;
+	union drbd_state rv = drbd_get_resource_state(device->resource, which);
 
-	wait_event(device->state_wait,
-		   (rv = drbd_req_state(device, mask, val, f)) != SS_IN_TRANSIENT_STATE);
+	rv.disk = device->disk_state[which];
+	rv.quorum = device->have_quorum[which];
 
 	return rv;
 }
 
-/*
- * We grab drbd_md_get_buffer(), because we don't want to "fail" the disk while
- * there is IO in-flight: the transition into D_FAILED for detach purposes
- * may get misinterpreted as actual IO error in a confused endio function.
- *
- * We wrap it all into wait_event(), to retry in case the drbd_req_state()
- * returns SS_IN_TRANSIENT_STATE.
- *
- * To avoid potential deadlock with e.g. the receiver thread trying to grab
- * drbd_md_get_buffer() while trying to get out of the "transient state", we
- * need to grab and release the meta data buffer inside of that wait_event loop.
- */
-static enum drbd_state_rv
-request_detach(struct drbd_device *device)
-{
-	return drbd_req_state(device, NS(disk, D_FAILED),
-			CS_VERBOSE | CS_ORDERED | CS_INHIBIT_MD_IO);
-}
-
-int drbd_request_detach_interruptible(struct drbd_device *device)
+union drbd_state drbd_get_peer_device_state(struct drbd_peer_device *peer_device, enum which_state which)
 {
-	int ret, rv;
+	struct drbd_connection *connection = peer_device->connection;
+	union drbd_state rv;
 
-	drbd_suspend_io(device); /* so no-one is stuck in drbd_al_begin_io */
-	wait_event_interruptible(device->state_wait,
-		(rv = request_detach(device)) != SS_IN_TRANSIENT_STATE);
-	drbd_resume_io(device);
-
-	ret = wait_event_interruptible(device->misc_wait,
-			device->state.disk != D_FAILED);
-
-	if (rv == SS_IS_DISKLESS)
-		rv = SS_NOTHING_TO_DO;
-	if (ret)
-		rv = ERR_INTR;
+	rv = drbd_get_device_state(peer_device->device, which);
+	rv.user_isp = peer_device->resync_susp_user[which];
+	rv.peer_isp = peer_device->resync_susp_peer[which];
+	rv.aftr_isp = resync_susp_comb_dep(peer_device, which);
+	rv.conn = combined_conn_state(peer_device, which);
+	rv.peer = connection->peer_role[which];
+	rv.pdsk = peer_device->disk_state[which];
 
 	return rv;
 }
 
-enum drbd_state_rv
-_drbd_request_state_holding_state_mutex(struct drbd_device *device, union drbd_state mask,
-		    union drbd_state val, enum chg_state_flags f)
+enum drbd_disk_state conn_highest_disk(struct drbd_connection *connection)
 {
-	enum drbd_state_rv rv;
-
-	BUG_ON(f & CS_SERIALIZE);
+	enum drbd_disk_state disk_state = D_DISKLESS;
+	struct drbd_peer_device *peer_device;
+	int vnr;
 
-	wait_event_cmd(device->state_wait,
-		       (rv = drbd_req_state(device, mask, val, f)) != SS_IN_TRANSIENT_STATE,
-		       mutex_unlock(device->state_mutex),
-		       mutex_lock(device->state_mutex));
+	rcu_read_lock();
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		struct drbd_device *device = peer_device->device;
+		disk_state = max_t(enum drbd_disk_state, disk_state, device->disk_state[NOW]);
+	}
+	rcu_read_unlock();
 
-	return rv;
+	return disk_state;
 }
 
-static void print_st(struct drbd_device *device, const char *name, union drbd_state ns)
+enum drbd_disk_state conn_highest_pdsk(struct drbd_connection *connection)
 {
-	drbd_err(device, " %s = { cs:%s ro:%s/%s ds:%s/%s %c%c%c%c%c%c }\n",
-	    name,
-	    drbd_conn_str(ns.conn),
-	    drbd_role_str(ns.role),
-	    drbd_role_str(ns.peer),
-	    drbd_disk_str(ns.disk),
-	    drbd_disk_str(ns.pdsk),
-	    is_susp(ns) ? 's' : 'r',
-	    ns.aftr_isp ? 'a' : '-',
-	    ns.peer_isp ? 'p' : '-',
-	    ns.user_isp ? 'u' : '-',
-	    ns.susp_fen ? 'F' : '-',
-	    ns.susp_nod ? 'N' : '-'
-	    );
+	enum drbd_disk_state disk_state = D_DISKLESS;
+	struct drbd_peer_device *peer_device;
+	int vnr;
+
+	rcu_read_lock();
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
+		disk_state = max_t(enum drbd_disk_state, disk_state, peer_device->disk_state[NOW]);
+	rcu_read_unlock();
+
+	return disk_state;
 }
 
-void print_st_err(struct drbd_device *device, union drbd_state os,
-	          union drbd_state ns, enum drbd_state_rv err)
+static bool suspend_reason_changed(struct drbd_resource *resource)
 {
-	if (err == SS_IN_TRANSIENT_STATE)
-		return;
-	drbd_err(device, "State change failed: %s\n", drbd_set_st_err_str(err));
-	print_st(device, " state", os);
-	print_st(device, "wanted", ns);
+	return resource->susp_user[OLD] != resource->susp_user[NEW] ||
+		resource->susp_nod[OLD] != resource->susp_nod[NEW] ||
+		resource->susp_quorum[OLD] != resource->susp_quorum[NEW] ||
+		resource->susp_uuid[OLD] != resource->susp_uuid[NEW] ||
+		is_suspended_fen(resource, OLD) != is_suspended_fen(resource, NEW);
 }
 
-static long print_state_change(char *pb, union drbd_state os, union drbd_state ns,
-			       enum chg_state_flags flags)
+static bool resync_suspended(struct drbd_peer_device *peer_device, enum which_state which)
 {
-	char *pbp;
-	pbp = pb;
-	*pbp = 0;
-
-	if (ns.role != os.role && flags & CS_DC_ROLE)
-		pbp += sprintf(pbp, "role( %s -> %s ) ",
-			       drbd_role_str(os.role),
-			       drbd_role_str(ns.role));
-	if (ns.peer != os.peer && flags & CS_DC_PEER)
-		pbp += sprintf(pbp, "peer( %s -> %s ) ",
-			       drbd_role_str(os.peer),
-			       drbd_role_str(ns.peer));
-	if (ns.conn != os.conn && flags & CS_DC_CONN)
-		pbp += sprintf(pbp, "conn( %s -> %s ) ",
-			       drbd_conn_str(os.conn),
-			       drbd_conn_str(ns.conn));
-	if (ns.disk != os.disk && flags & CS_DC_DISK)
-		pbp += sprintf(pbp, "disk( %s -> %s ) ",
-			       drbd_disk_str(os.disk),
-			       drbd_disk_str(ns.disk));
-	if (ns.pdsk != os.pdsk && flags & CS_DC_PDSK)
-		pbp += sprintf(pbp, "pdsk( %s -> %s ) ",
-			       drbd_disk_str(os.pdsk),
-			       drbd_disk_str(ns.pdsk));
-
-	return pbp - pb;
+	return peer_device->resync_susp_user[which] ||
+	       peer_device->resync_susp_peer[which] ||
+	       resync_susp_comb_dep(peer_device, which);
 }
 
-static void drbd_pr_state_change(struct drbd_device *device, union drbd_state os, union drbd_state ns,
-				 enum chg_state_flags flags)
+static int scnprintf_resync_suspend_flags(char *buffer, size_t size,
+					  struct drbd_peer_device *peer_device,
+					  enum which_state which)
 {
-	char pb[300];
-	char *pbp = pb;
+	struct drbd_device *device = peer_device->device;
+	char *b = buffer, *end = buffer + size;
+
+	if (!resync_suspended(peer_device, which))
+		return scnprintf(buffer, size, "no");
 
-	pbp += print_state_change(pbp, os, ns, flags ^ CS_DC_MASK);
+	if (peer_device->resync_susp_user[which])
+		b += scnprintf(b, end - b, "user,");
+	if (peer_device->resync_susp_peer[which])
+		b += scnprintf(b, end - b, "peer,");
+	if (peer_device->resync_susp_dependency[which])
+		b += scnprintf(b, end - b, "after dependency,");
+	if (peer_device->resync_susp_other_c[which])
+		b += scnprintf(b, end - b, "connection dependency,");
+	if (is_sync_source_state(peer_device, which) && device->disk_state[which] <= D_INCONSISTENT)
+		b += scnprintf(b, end - b, "disk inconsistent,");
 
-	if (ns.aftr_isp != os.aftr_isp)
-		pbp += sprintf(pbp, "aftr_isp( %d -> %d ) ",
-			       os.aftr_isp,
-			       ns.aftr_isp);
-	if (ns.peer_isp != os.peer_isp)
-		pbp += sprintf(pbp, "peer_isp( %d -> %d ) ",
-			       os.peer_isp,
-			       ns.peer_isp);
-	if (ns.user_isp != os.user_isp)
-		pbp += sprintf(pbp, "user_isp( %d -> %d ) ",
-			       os.user_isp,
-			       ns.user_isp);
+	*(--b) = 0; /* strip the trailing comma */
 
-	if (pbp != pb)
-		drbd_info(device, "%s\n", pb);
+	return b - buffer;
 }
 
-static void conn_pr_state_change(struct drbd_connection *connection, union drbd_state os, union drbd_state ns,
-				 enum chg_state_flags flags)
+static int scnprintf_io_suspend_flags(char *buffer, size_t size,
+				      struct drbd_resource *resource,
+				      enum which_state which)
 {
-	char pb[300];
-	char *pbp = pb;
-
-	pbp += print_state_change(pbp, os, ns, flags);
-
-	if (is_susp(ns) != is_susp(os) && flags & CS_DC_SUSP)
-		pbp += sprintf(pbp, "susp( %d -> %d ) ",
-			       is_susp(os),
-			       is_susp(ns));
-
-	if (pbp != pb)
-		drbd_info(connection, "%s\n", pb);
+	char *b = buffer, *end = buffer + size;
+
+	if (!resource_is_suspended(resource, which))
+		return scnprintf(buffer, size, "no");
+
+	if (resource->susp_user[which])
+		b += scnprintf(b, end - b, "user,");
+	if (resource->susp_nod[which])
+		b += scnprintf(b, end - b, "no-disk,");
+	if (is_suspended_fen(resource, which))
+		b += scnprintf(b, end - b, "fencing,");
+	if (resource->susp_quorum[which])
+		b += scnprintf(b, end - b, "quorum,");
+	if (resource->susp_uuid[which])
+		b += scnprintf(b, end - b, "uuid,");
+	*(--b) = 0; /* strip the trailing comma */
+
+	return b - buffer;
 }
 
-
-/**
- * is_valid_state() - Returns an SS_ error code if ns is not valid
- * @device:	DRBD device.
- * @ns:		State to consider.
- */
-static enum drbd_state_rv
-is_valid_state(struct drbd_device *device, union drbd_state ns)
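+/* Log all state members that differ between OLD and NEW, one line per object. */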
+static void print_state_change(struct drbd_resource *resource, const char *prefix, const char *tag)
 {
-	/* See drbd_state_sw_errors in drbd_strings.c */
-
-	enum drbd_fencing_p fp;
-	enum drbd_state_rv rv = SS_SUCCESS;
-	struct net_conf *nc;
+	char buffer[150], *b, *end = buffer + sizeof(buffer);
+	struct drbd_connection *connection;
+	struct drbd_device *device;
+	enum drbd_role *role = resource->role;
+	bool *fail_io = resource->fail_io;
+	int vnr;
 
-	rcu_read_lock();
-	fp = FP_DONT_CARE;
-	if (get_ldev(device)) {
-		fp = rcu_dereference(device->ldev->disk_conf)->fencing;
-		put_ldev(device);
+	b = buffer;
+	if (role[OLD] != role[NEW])
+		b += scnprintf(b, end - b, "role( %s -> %s ) ",
+			       drbd_role_str(role[OLD]),
+			       drbd_role_str(role[NEW]));
+	if (suspend_reason_changed(resource)) {
+		b += scnprintf(b, end - b, "susp-io( ");
+		b += scnprintf_io_suspend_flags(b, end - b, resource, OLD);
+		b += scnprintf(b, end - b, " -> ");
+		b += scnprintf_io_suspend_flags(b, end - b, resource, NEW);
+		b += scnprintf(b, end - b, " ) ");
+	}
+	if (fail_io[OLD] != fail_io[NEW])
+		b += scnprintf(b, end - b, "force-io-failures( %s -> %s ) ",
+			       fail_io[OLD] ? "yes" : "no",
+			       fail_io[NEW] ? "yes" : "no");
+	if (b != buffer) {
+		*(b - 1) = 0; /* overwrite the trailing space */
+		drbd_info(resource, "%s%s%s%s%s\n", prefix, buffer,
+			tag ? " [" : "", tag ?: "", tag ? "]" : "");
 	}
 
-	nc = rcu_dereference(first_peer_device(device)->connection->net_conf);
-	if (nc) {
-		if (!nc->two_primaries && ns.role == R_PRIMARY) {
-			if (ns.peer == R_PRIMARY)
-				rv = SS_TWO_PRIMARIES;
-			else if (conn_highest_peer(first_peer_device(device)->connection) == R_PRIMARY)
-				rv = SS_O_VOL_PEER_PRI;
+	for_each_connection(connection, resource) {
+		enum drbd_conn_state *cstate = connection->cstate;
+		enum drbd_role *peer_role = connection->peer_role;
+
+		b = buffer;
+		if (cstate[OLD] != cstate[NEW])
+			b += scnprintf(b, end - b, "conn( %s -> %s ) ",
+				       drbd_conn_str(cstate[OLD]),
+				       drbd_conn_str(cstate[NEW]));
+		if (peer_role[OLD] != peer_role[NEW])
+			b += scnprintf(b, end - b, "peer( %s -> %s ) ",
+				       drbd_role_str(peer_role[OLD]),
+				       drbd_role_str(peer_role[NEW]));
+
+		if (b != buffer) {
+			*(b - 1) = 0;
+			drbd_info(connection, "%s%s%s%s%s\n", prefix, buffer,
+				tag ? " [" : "", tag ?: "", tag ? "]" : "");
 		}
 	}
 
-	if (rv <= 0)
-		goto out; /* already found a reason to abort */
-	else if (ns.role == R_SECONDARY && device->open_cnt)
-		rv = SS_DEVICE_IN_USE;
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		struct drbd_peer_device *peer_device;
+		enum drbd_disk_state *disk_state = device->disk_state;
+		bool *have_quorum = device->have_quorum;
+
+		b = buffer;
+		if (disk_state[OLD] != disk_state[NEW])
+			b += scnprintf(b, end - b, "disk( %s -> %s ) ",
+				       drbd_disk_str(disk_state[OLD]),
+				       drbd_disk_str(disk_state[NEW]));
+		if (have_quorum[OLD] != have_quorum[NEW])
+			b += scnprintf(b, end - b, "quorum( %s -> %s ) ",
+				       have_quorum[OLD] ? "yes" : "no",
+				       have_quorum[NEW] ? "yes" : "no");
+		if (b != buffer) {
+			*(b - 1) = 0;
+			drbd_info(device, "%s%s%s%s%s\n", prefix, buffer,
+				tag ? " [" : "", tag ?: "", tag ? "]" : "");
+		}
 
-	else if (ns.role == R_PRIMARY && ns.conn < C_CONNECTED && ns.disk < D_UP_TO_DATE)
-		rv = SS_NO_UP_TO_DATE_DISK;
+		for_each_peer_device(peer_device, device) {
+			enum drbd_disk_state *peer_disk_state = peer_device->disk_state;
+			enum drbd_repl_state *repl_state = peer_device->repl_state;
+			bool *replication = peer_device->replication;
+			bool *peer_replication = peer_device->peer_replication;
+
+			b = buffer;
+			if (peer_disk_state[OLD] != peer_disk_state[NEW])
+				b += scnprintf(b, end - b, "pdsk( %s -> %s ) ",
+					       drbd_disk_str(peer_disk_state[OLD]),
+					       drbd_disk_str(peer_disk_state[NEW]));
+			if (repl_state[OLD] != repl_state[NEW])
+				b += scnprintf(b, end - b, "repl( %s -> %s ) ",
+					       drbd_repl_str(repl_state[OLD]),
+					       drbd_repl_str(repl_state[NEW]));
+
+			if (resync_suspended(peer_device, OLD) !=
+			    resync_suspended(peer_device, NEW)) {
+				b += scnprintf(b, end - b, "resync-susp( ");
+				b += scnprintf_resync_suspend_flags(b, end - b, peer_device, OLD);
+				b += scnprintf(b, end - b, " -> ");
+				b += scnprintf_resync_suspend_flags(b, end - b, peer_device, NEW);
+				b += scnprintf(b, end - b, " ) ");
+			}
 
-	else if (fp >= FP_RESOURCE &&
-		 ns.role == R_PRIMARY && ns.conn < C_CONNECTED && ns.pdsk >= D_UNKNOWN)
-		rv = SS_PRIMARY_NOP;
+			if (replication[OLD] != replication[NEW])
+				b += scnprintf(b, end - b, "replication( %s -> %s ) ",
+					       replication[OLD] ? "yes" : "no",
+					       replication[NEW] ? "yes" : "no");
 
-	else if (ns.role == R_PRIMARY && ns.disk <= D_INCONSISTENT && ns.pdsk <= D_INCONSISTENT)
-		rv = SS_NO_UP_TO_DATE_DISK;
+			if (peer_replication[OLD] != peer_replication[NEW])
+				b += scnprintf(b, end - b, "peer_replication( %s -> %s ) ",
+					       peer_replication[OLD] ? "yes" : "no",
+					       peer_replication[NEW] ? "yes" : "no");
 
-	else if (ns.conn > C_CONNECTED && ns.disk < D_INCONSISTENT)
-		rv = SS_NO_LOCAL_DISK;
+			if (b != buffer) {
+				*(b - 1) = 0;
+				drbd_info(peer_device, "%s%s%s%s%s\n", prefix, buffer,
+					tag ? " [" : "", tag ?: "", tag ? "]" : "");
+			}
+		}
+	}
+}
 
-	else if (ns.conn > C_CONNECTED && ns.pdsk < D_INCONSISTENT)
-		rv = SS_NO_REMOTE_DISK;
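+/*
+ * Whether the local disk may legitimately be or become outdated: as primary
+ * only during the L_WF_BITMAP_T handshake with an up-to-date peer; as
+ * secondary always when no primary neighbor is present, otherwise while
+ * connecting to a diskless primary or on the target side of a resync.
+ */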
+static bool local_disk_may_be_outdated(struct drbd_device *device)
+{
+	struct drbd_peer_device *peer_device;
 
-	else if (ns.conn > C_CONNECTED && ns.disk < D_UP_TO_DATE && ns.pdsk < D_UP_TO_DATE)
-		rv = SS_NO_UP_TO_DATE_DISK;
+	if (device->resource->role[NEW] == R_PRIMARY) {
+		for_each_peer_device(peer_device, device) {
+			if (peer_device->disk_state[NEW] == D_UP_TO_DATE &&
+			    peer_device->repl_state[NEW] == L_WF_BITMAP_T)
+				return true;
+		}
+		return false;
+	}
 
-	else if ((ns.conn == C_CONNECTED ||
-		  ns.conn == C_WF_BITMAP_S ||
-		  ns.conn == C_SYNC_SOURCE ||
-		  ns.conn == C_PAUSED_SYNC_S) &&
-		  ns.disk == D_OUTDATED)
-		rv = SS_CONNECTED_OUTDATES;
+	for_each_peer_device(peer_device, device) {
+		if (peer_device->connection->peer_role[NEW] == R_PRIMARY &&
+		    peer_device->repl_state[NEW] > L_OFF)
+			goto have_primary_neighbor;
+	}
 
-	else if (nc && (ns.conn == C_VERIFY_S || ns.conn == C_VERIFY_T) &&
-		 (nc->verify_alg[0] == 0))
-		rv = SS_NO_VERIFY_ALG;
+	return true;	/* No neighbor primary, I might be outdated */
 
-	else if ((ns.conn == C_VERIFY_S || ns.conn == C_VERIFY_T) &&
-		  first_peer_device(device)->connection->agreed_pro_version < 88)
-		rv = SS_NOT_SUPPORTED;
+have_primary_neighbor:
+	/* Allow self outdating while connecting to a diskless primary. */
+	if (peer_device->disk_state[NEW] == D_DISKLESS &&
+	    peer_device->repl_state[OLD] == L_OFF && peer_device->repl_state[NEW] == L_ESTABLISHED)
+		return true;
 
-	else if (ns.role == R_PRIMARY && ns.disk < D_UP_TO_DATE && ns.pdsk < D_UP_TO_DATE)
-		rv = SS_NO_UP_TO_DATE_DISK;
+	for_each_peer_device(peer_device, device) {
+		enum drbd_repl_state repl_state = peer_device->repl_state[NEW];
+		switch (repl_state) {
+		case L_WF_BITMAP_S:
+		case L_STARTING_SYNC_S:
+		case L_SYNC_SOURCE:
+		case L_PAUSED_SYNC_S:
+		case L_AHEAD:
+		case L_ESTABLISHED:
+		case L_VERIFY_S:
+		case L_VERIFY_T:
+		case L_OFF:
+			continue;
+		case L_WF_SYNC_UUID:
+		case L_WF_BITMAP_T:
+		case L_STARTING_SYNC_T:
+		case L_SYNC_TARGET:
+		case L_PAUSED_SYNC_T:
+		case L_BEHIND:
+			return true;
+		}
+	}
 
-	else if ((ns.conn == C_STARTING_SYNC_S || ns.conn == C_STARTING_SYNC_T) &&
-                 ns.pdsk == D_UNKNOWN)
-		rv = SS_NEED_CONNECTION;
+	return false;
+}
 
-	else if (ns.conn >= C_CONNECTED && ns.pdsk == D_UNKNOWN)
-		rv = SS_CONNECTED_OUTDATES;
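+/*
+ * Translate the quorum setting into the number of required votes.
+ * E.g. with 5 voters: QOU_MAJORITY -> 3, QOU_ALL -> 5, a fixed value N -> N.
+ */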
+static int calc_quorum_at(s32 setting, int voters)
+{
+	int quorum_at;
 
-out:
-	rcu_read_unlock();
+	switch (setting) {
+	case QOU_MAJORITY:
+		quorum_at = voters / 2 + 1;
+		break;
+	case QOU_ALL:
+		quorum_at = voters;
+		break;
+	default:
+		quorum_at = setting;
+	}
 
-	return rv;
+	return quorum_at;
 }
 
-/**
- * is_valid_soft_transition() - Returns an SS_ error code if the state transition is not possible
- * This function limits state transitions that may be declined by DRBD. I.e.
- * user requests (aka soft transitions).
- * @os:		old state.
- * @ns:		new state.
- * @connection:  DRBD connection.
- */
-static enum drbd_state_rv
-is_valid_soft_transition(union drbd_state os, union drbd_state ns, struct drbd_connection *connection)
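+/*
+ * Count quorum votes based on the on-disk meta-data: every node that left a
+ * trace there (bitmap slot, node-exists or peer-device-seen flag) gets
+ * classified, whether it is currently connected or not.
+ */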
+static void __calc_quorum_with_disk(struct drbd_device *device, struct quorum_detail *qd)
 {
-	enum drbd_state_rv rv = SS_SUCCESS;
-
-	if ((ns.conn == C_STARTING_SYNC_T || ns.conn == C_STARTING_SYNC_S) &&
-	    os.conn > C_CONNECTED)
-		rv = SS_RESYNC_RUNNING;
+	struct drbd_resource *resource = device->resource;
+	const u64 quorumless_nodes = device->have_quorum[NOW] ? ~resource->members : 0;
+	const int my_node_id = resource->res_opts.node_id;
+	int node_id;
 
-	if (ns.conn == C_DISCONNECTING && os.conn == C_STANDALONE)
-		rv = SS_ALREADY_STANDALONE;
+	check_wrongly_set_mdf_exists(device);
 
-	if (ns.disk > D_ATTACHING && os.disk == D_DISKLESS)
-		rv = SS_IS_DISKLESS;
+	rcu_read_lock();
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		struct drbd_peer_md *peer_md = &device->ldev->md.peers[node_id];
+		struct drbd_peer_device *peer_device;
+		enum drbd_disk_state disk_state;
+		enum drbd_repl_state repl_state;
+		bool is_intentional_diskless, is_tiebreaker;
+		struct net_conf *nc;
+
+		if (node_id == my_node_id) {
+			disk_state = device->disk_state[NEW];
+			if (disk_state > D_DISKLESS) {
+				if (disk_state == D_UP_TO_DATE)
+					qd->up_to_date++;
+				else
+					qd->present++;
+			}
+			continue;
+		}
 
-	if (ns.conn == C_WF_CONNECTION && os.conn < C_UNCONNECTED)
-		rv = SS_NO_NET_CONFIG;
+		/* Ignore nodes that do not exist.
+		   Note: a fresh (never connected), intentionally diskless peer
+		   gets ignored by this as well.
+		   A fresh diskful peer counts! (since it has MDF_HAVE_BITMAP) */
+		if (!(peer_md->flags & (MDF_HAVE_BITMAP | MDF_NODE_EXISTS | MDF_PEER_DEVICE_SEEN)))
+			continue;
 
-	if (ns.disk == D_OUTDATED && os.disk < D_OUTDATED && os.disk != D_ATTACHING)
-		rv = SS_LOWER_THAN_OUTDATED;
+		peer_device = peer_device_by_node_id(device, node_id);
 
-	if (ns.conn == C_DISCONNECTING && os.conn == C_UNCONNECTED)
-		rv = SS_IN_TRANSIENT_STATE;
+		if (peer_device) {
+			is_intentional_diskless = !want_bitmap(peer_device);
+			nc = rcu_dereference(peer_device->connection->transport.net_conf);
+			is_tiebreaker = rcu_dereference(peer_device->conf)->peer_tiebreaker;
+			if (nc && !nc->allow_remote_read) {
+				dynamic_drbd_dbg(peer_device,
+						 "Excluding from quorum calculation because allow-remote-read = no\n");
+				continue;
+			}
+		} else {
+			is_intentional_diskless = !(peer_md->flags & MDF_PEER_DEVICE_SEEN);
+			is_tiebreaker = true;
+		}
 
-	/* While establishing a connection only allow cstate to change.
-	   Delay/refuse role changes, detach attach etc... (they do not touch cstate) */
-	if (test_bit(STATE_SENT, &connection->flags) &&
-	    !((ns.conn == C_WF_REPORT_PARAMS && os.conn == C_WF_CONNECTION) ||
-	      (ns.conn >= C_CONNECTED && os.conn == C_WF_REPORT_PARAMS)))
-		rv = SS_IN_TRANSIENT_STATE;
+		if (is_intentional_diskless && !is_tiebreaker)
+			continue;
 
-	/* Do not promote during resync handshake triggered by "force primary".
-	 * This is a hack. It should really be rejected by the peer during the
-	 * cluster wide state change request. */
-	if (os.role != R_PRIMARY && ns.role == R_PRIMARY
-		&& ns.pdsk == D_UP_TO_DATE
-		&& ns.disk != D_UP_TO_DATE && ns.disk != D_DISKLESS
-		&& (ns.conn <= C_WF_SYNC_UUID || ns.conn != os.conn))
-			rv = SS_IN_TRANSIENT_STATE;
+		repl_state = peer_device ? peer_device->repl_state[NEW] : L_OFF;
+		disk_state = peer_device ? peer_device->disk_state[NEW] : D_UNKNOWN;
+
+		if (repl_state == L_OFF) {
+			if (is_intentional_diskless)
+				/* device should be diskless but is absent */
+				qd->missing_diskless++;
+			else if (disk_state <= D_OUTDATED || peer_md->flags & MDF_PEER_OUTDATED)
+				qd->outdated++;
+			else if (NODE_MASK(node_id) & quorumless_nodes)
+				qd->quorumless++;
+			else
+				qd->unknown++;
+		} else {
+			if (disk_state == D_DISKLESS && is_intentional_diskless)
+				qd->diskless++;
+			else if (disk_state == D_UP_TO_DATE)
+				qd->up_to_date++;
+			else
+				qd->present++;
+		}
+	}
+	rcu_read_unlock();
+}
 
-	if ((ns.conn == C_VERIFY_S || ns.conn == C_VERIFY_T) && os.conn < C_CONNECTED)
-		rv = SS_NEED_CONNECTION;
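+/*
+ * Count quorum votes without local meta-data: only this node and its
+ * currently configured peer devices can vote; quorate up-to-date peers
+ * are counted separately.
+ */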
+static void __calc_quorum_no_disk(struct drbd_device *device, struct quorum_detail *qd)
+{
+	struct drbd_resource *resource = device->resource;
+	const u64 quorumless_nodes = device->have_quorum[NOW] ? ~resource->members : 0;
+	struct drbd_peer_device *peer_device;
+	bool is_intentional_diskless;
+
+	if (device->disk_state[NEW] == D_DISKLESS) {
+		/* We only want to consider ourselves as a diskless node when
+		 * we actually intended to be diskless in the config. Otherwise,
+		 * we shouldn't get a vote in the quorum process, so count
+		 * ourselves as unknown. */
+		if (device->device_conf.intentional_diskless)
+			qd->diskless++;
+		else
+			qd->unknown++;
+	}
 
-	if ((ns.conn == C_VERIFY_S || ns.conn == C_VERIFY_T) &&
-	    ns.conn != os.conn && os.conn > C_CONNECTED)
-		rv = SS_RESYNC_RUNNING;
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		enum drbd_disk_state disk_state;
+		enum drbd_repl_state repl_state;
+		struct net_conf *nc;
+		bool is_tiebreaker;
+
+		repl_state = peer_device->repl_state[NEW];
+		disk_state = peer_device->disk_state[NEW];
+
+		is_intentional_diskless = !want_bitmap(peer_device);
+		nc = rcu_dereference(peer_device->connection->transport.net_conf);
+		is_tiebreaker = rcu_dereference(peer_device->conf)->peer_tiebreaker;
+		if (nc && !nc->allow_remote_read) {
+			dynamic_drbd_dbg(peer_device,
+					 "Excluding from quorum calculation because allow-remote-read = no\n");
+			continue;
+		}
+		if (is_intentional_diskless && !is_tiebreaker)
+			continue;
 
-	if ((ns.conn == C_STARTING_SYNC_S || ns.conn == C_STARTING_SYNC_T) &&
-	    os.conn < C_CONNECTED)
-		rv = SS_NEED_CONNECTION;
+		if (repl_state == L_OFF) {
+			if (is_intentional_diskless)
+				/* device should be diskless but is absent */
+				qd->missing_diskless++;
+			else if (disk_state <= D_OUTDATED)
+				qd->outdated++;
+			else if (NODE_MASK(peer_device->node_id) & quorumless_nodes)
+				qd->quorumless++;
+			else
+				qd->unknown++;
+		} else {
+			if (disk_state == D_DISKLESS && is_intentional_diskless)
+				qd->diskless++;
+			else if (disk_state == D_UP_TO_DATE)
+				qd->up_to_date++;
+			else
+				qd->present++;
+		}
 
-	if ((ns.conn == C_SYNC_TARGET || ns.conn == C_SYNC_SOURCE)
-	    && os.conn < C_WF_REPORT_PARAMS)
-		rv = SS_NEED_CONNECTION; /* No NetworkFailure -> SyncTarget etc... */
+		if (disk_state == D_UP_TO_DATE && test_bit(PEER_QUORATE, &peer_device->flags))
+			qd->quorate_peers++;
 
-	if (ns.conn == C_DISCONNECTING && ns.pdsk == D_OUTDATED &&
-	    os.conn < C_CONNECTED && os.pdsk > D_OUTDATED)
-		rv = SS_OUTDATE_WO_CONN;
 
-	return rv;
+	}
+	rcu_read_unlock();
 }
 
-static enum drbd_state_rv
-is_valid_conn_transition(enum drbd_conns oc, enum drbd_conns nc)
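+/*
+ * Decide whether this node (still) has quorum: count the votes with or
+ * without local meta-data, require the configured quorum and minimal
+ * redundancy, and optionally let connected diskless nodes break a tie.
+ */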
+static bool calc_quorum(struct drbd_device *device, struct quorum_info *qi)
 {
-	/* no change -> nothing to do, at least for the connection part */
-	if (oc == nc)
-		return SS_NOTHING_TO_DO;
-
-	/* disconnect of an unconfigured connection does not make sense */
-	if (oc == C_STANDALONE && nc == C_DISCONNECTING)
-		return SS_ALREADY_STANDALONE;
+	struct drbd_resource *resource = device->resource;
+	int voters, quorum_at, diskless_majority_at, min_redundancy_at;
+	struct quorum_detail qd = {};
+	bool have_quorum;
 
-	/* from C_STANDALONE, we start with C_UNCONNECTED */
-	if (oc == C_STANDALONE && nc != C_UNCONNECTED)
-		return SS_NEED_CONNECTION;
+	if (device->disk_state[NEW] > D_ATTACHING && get_ldev_if_state(device, D_ATTACHING)) {
+		__calc_quorum_with_disk(device, &qd);
+		put_ldev(device);
+	} else {
+		__calc_quorum_no_disk(device, &qd);
+	}
 
-	/* When establishing a connection we need to go through WF_REPORT_PARAMS!
-	   Necessary to do the right thing upon invalidate-remote on a disconnected resource */
-	if (oc < C_WF_REPORT_PARAMS && nc >= C_CONNECTED)
-		return SS_NEED_CONNECTION;
+	/* Check if a partition containing all missing nodes might have quorum */
+	voters = qd.outdated + qd.quorumless + qd.unknown + qd.up_to_date + qd.present;
+	quorum_at = calc_quorum_at(resource->res_opts.quorum, voters);
+	if (qd.outdated + qd.quorumless + qd.unknown >= quorum_at) {
+		/* when the missing nodes have the quorum, give up the quorumless */
+		qd.unknown += qd.quorumless;
+		qd.quorumless = 0;
+	}
 
-	/* After a network error only C_UNCONNECTED or C_DISCONNECTING may follow. */
-	if (oc >= C_TIMEOUT && oc <= C_TEAR_DOWN && nc != C_UNCONNECTED && nc != C_DISCONNECTING)
-		return SS_IN_TRANSIENT_STATE;
+	/* When all the absent nodes are D_OUTDATED (none D_UNKNOWN), we can be
+	   sure that the other partition is not able to promote. Therefore we
+	   remove them from the voters, and we have quorum. */
+	if (qd.unknown)
+		voters = qd.outdated + qd.quorumless + qd.unknown + qd.up_to_date + qd.present;
+	else
+		voters = qd.up_to_date + qd.present;
+
+	quorum_at = calc_quorum_at(resource->res_opts.quorum, voters);
+	diskless_majority_at = calc_quorum_at(QOU_MAJORITY, qd.diskless + qd.missing_diskless);
+	min_redundancy_at = calc_quorum_at(resource->res_opts.quorum_min_redundancy, voters);
+
+	if (qi) {
+		qi->voters = voters;
+		qi->up_to_date = qd.up_to_date;
+		qi->present = qd.present;
+		qi->quorum_at = quorum_at;
+		qi->min_redundancy_at = min_redundancy_at;
+	}
 
-	/* After C_DISCONNECTING only C_STANDALONE may follow */
-	if (oc == C_DISCONNECTING && nc != C_STANDALONE)
-		return SS_IN_TRANSIENT_STATE;
+	have_quorum = qd.quorate_peers ||
+		((qd.up_to_date + qd.present) >= quorum_at && qd.up_to_date >= min_redundancy_at);
+
+	if (!have_quorum && voters != 0 && voters % 2 == 0 && qd.up_to_date + qd.present == quorum_at - 1 &&
+		/* It is an even number of nodes (think 2) and we failed by one vote.
+		   Check if we have a majority of the diskless nodes connected.
+		   Use the diskless nodes as a tie-breaker! */
+	    qd.diskless >= diskless_majority_at && device->have_quorum[NOW]) {
+		have_quorum = true;
+		if (!test_bit(TIEBREAKER_QUORUM, &device->flags)) {
+			set_bit(TIEBREAKER_QUORUM, &device->flags);
+			drbd_info(device, "Would lose quorum, but using tiebreaker logic to keep it\n");
+		}
+	} else {
+		clear_bit(TIEBREAKER_QUORUM, &device->flags);
+	}
 
-	return SS_SUCCESS;
+	return have_quorum;
 }
 
+static __printf(2, 3) void _drbd_state_err(struct change_context *context, const char *fmt, ...)
+{
+	struct drbd_resource *resource = context->resource;
+	const char *err_str;
+	va_list args;
+
+	va_start(args, fmt);
+	err_str = kvasprintf(GFP_ATOMIC, fmt, args);
+	va_end(args);
+	if (!err_str)
+		return;
+	if (context->flags & CS_VERBOSE)
+		drbd_err(resource, "%s\n", err_str);
+
+	if (context->err_str)
+		*context->err_str = err_str;
+	else
+		kfree(err_str);
+}
+
+static __printf(2, 3) void drbd_state_err(struct drbd_resource *resource, const char *fmt, ...)
+{
+	const char *err_str;
+	va_list args;
+
+	va_start(args, fmt);
+	err_str = kvasprintf(GFP_ATOMIC, fmt, args);
+	va_end(args);
+	if (!err_str)
+		return;
+	if (resource->state_change_flags & CS_VERBOSE)
+		drbd_err(resource, "%s\n", err_str);
+
+	if (resource->state_change_err_str)
+		*resource->state_change_err_str = err_str;
+	else
+		kfree(err_str);
+}
+
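+/*
+ * Check transitions that may be declined by DRBD, i.e. user requested
+ * ("soft") state changes.
+ */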
+static enum drbd_state_rv __is_valid_soft_transition(struct drbd_resource *resource)
+{
+	enum drbd_role *role = resource->role;
+	bool *fail_io = resource->fail_io;
+	struct drbd_connection *connection;
+	struct drbd_device *device;
+	bool in_handshake = false;
+	int vnr;
+
+	/* See drbd_state_sw_errors in drbd_strings.c */
+
+	if (role[OLD] != R_PRIMARY && role[NEW] == R_PRIMARY) {
+		for_each_connection_rcu(connection, resource) {
+			struct net_conf *nc;
+
+			nc = rcu_dereference(connection->transport.net_conf);
+			if (!nc || nc->two_primaries)
+				continue;
+			if (connection->peer_role[NEW] == R_PRIMARY)
+				return SS_TWO_PRIMARIES;
+		}
+	}
+
+	for_each_connection_rcu(connection, resource) {
+		struct drbd_peer_device *peer_device;
+
+		idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+			if (test_bit(HOLDING_UUID_READ_LOCK, &peer_device->flags) &&
+			    peer_device->repl_state[NOW] == L_OFF) {
+				in_handshake = true;
+				goto handshake_found;
+			}
+		}
+	}
+handshake_found:
+
+	if (in_handshake && role[OLD] != role[NEW])
+		return SS_IN_TRANSIENT_STATE;
+
+	if (role[OLD] == R_SECONDARY && role[NEW] == R_PRIMARY && fail_io[NEW])
+		return SS_DEVICE_IN_USE;
+
+	for_each_connection_rcu(connection, resource) {
+		enum drbd_conn_state *cstate = connection->cstate;
+		enum drbd_role *peer_role = connection->peer_role;
+		struct net_conf *nc;
+		bool two_primaries;
+
+		if (cstate[NEW] == C_DISCONNECTING && cstate[OLD] == C_STANDALONE)
+			return SS_ALREADY_STANDALONE;
+
+		if (cstate[NEW] == C_CONNECTING && cstate[OLD] < C_UNCONNECTED)
+			return SS_NO_NET_CONFIG;
+
+		if (cstate[NEW] == C_DISCONNECTING && cstate[OLD] == C_UNCONNECTED)
+			return SS_IN_TRANSIENT_STATE;
+
+		nc = rcu_dereference(connection->transport.net_conf);
+		two_primaries = nc ? nc->two_primaries : false;
+		if (peer_role[NEW] == R_PRIMARY && peer_role[OLD] != R_PRIMARY && !two_primaries) {
+			if (role[NOW] == R_PRIMARY)
+				return SS_TWO_PRIMARIES;
+			if (!fail_io[NEW]) {
+				idr_for_each_entry(&resource->devices, device, vnr) {
+					if (!device->writable && device->open_cnt)
+						return SS_PRIMARY_READER;
+					/*
+					 * One might be tempted to add "|| open_rw_cont" here.
+					 * That is wrong. The promotion of a rw opener will be
+					 * handled in its own two-phase commit.
+					 * Returning SS_PRIMARY_READER for a rw_opener might
+					 * cause confusion for the caller, if that then waits
+					 * for the read-only openers to go away.
+					 */
+				}
+			}
+		}
+	}
+
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		enum drbd_disk_state *disk_state = device->disk_state;
+		struct drbd_peer_device *peer_device;
+		bool any_disk_up_to_date[2];
+		enum which_state which;
+		int nr_negotiating = 0;
+
+		if (in_handshake &&
+		    ((disk_state[OLD] < D_ATTACHING && disk_state[NEW] == D_ATTACHING) ||
+		     (disk_state[OLD] > D_DETACHING && disk_state[NEW] == D_DETACHING)))
+			return SS_IN_TRANSIENT_STATE;
+
+		if (role[OLD] == R_PRIMARY && role[NEW] == R_SECONDARY && device->writable &&
+		    !(resource->state_change_flags & CS_FS_IGN_OPENERS))
+			return SS_DEVICE_IN_USE;
+
+		if (disk_state[NEW] > D_ATTACHING && disk_state[OLD] == D_DISKLESS)
+			return SS_IS_DISKLESS;
+
+		if (disk_state[NEW] == D_OUTDATED && disk_state[OLD] < D_OUTDATED &&
+		    disk_state[OLD] != D_ATTACHING && disk_state[OLD] != D_NEGOTIATING) {
+			/* Do not allow outdate of inconsistent or diskless.
+			   But we have to allow Inconsistent -> Outdated if a resync
+			   finishes over one connection, and is paused on other connections */
+
+			for_each_peer_device_rcu(peer_device, device) {
+				enum drbd_repl_state *repl_state = peer_device->repl_state;
+				if (repl_state[OLD] == L_SYNC_TARGET && repl_state[NEW] == L_ESTABLISHED)
+					goto allow;
+			}
+			return SS_LOWER_THAN_OUTDATED;
+		}
+		allow:
+
+		for (which = OLD; which <= NEW; which++)
+			any_disk_up_to_date[which] = drbd_data_accessible(device, which);
+
+		/* Prevent becoming primary while no data is accessible,
+		   and prevent detach or disconnect while primary */
+		if (!(role[OLD] == R_PRIMARY && !any_disk_up_to_date[OLD]) &&
+		     (role[NEW] == R_PRIMARY && !any_disk_up_to_date[NEW]))
+			return SS_NO_UP_TO_DATE_DISK;
+
+		/* Prevent detach or disconnect while held open read only */
+		if (!device->writable && device->open_cnt &&
+		    any_disk_up_to_date[OLD] && !any_disk_up_to_date[NEW])
+			return SS_NO_UP_TO_DATE_DISK;
+
+		if (disk_state[NEW] == D_NEGOTIATING)
+			nr_negotiating++;
+
+		/* Prevent promote when there is no quorum and
+		 * prevent graceful disconnect/detach that would kill quorum
+		 */
+		if ((role[OLD] == R_SECONDARY || device->have_quorum[OLD]) &&
+		    role[NEW] == R_PRIMARY && !device->have_quorum[NEW]) {
+			struct quorum_info qi;
+
+			calc_quorum(device, &qi);
+
+			if (disk_state[NEW] <= D_ATTACHING)
+				drbd_state_err(resource, "no UpToDate peer with quorum");
+			else if (qi.up_to_date + qi.present < qi.quorum_at)
+				drbd_state_err(resource, "%d of %d nodes visible, need %d for quorum",
+					       qi.up_to_date + qi.present, qi.voters, qi.quorum_at);
+			else if (qi.up_to_date < qi.min_redundancy_at)
+				drbd_state_err(resource, "%d of %d nodes up_to_date, need %d for "
+					       "quorum-minimum-redundancy",
+					       qi.up_to_date, qi.voters, qi.min_redundancy_at);
+			return SS_NO_QUORUM;
+		}
+
+		for_each_peer_device_rcu(peer_device, device) {
+			enum drbd_disk_state *peer_disk_state = peer_device->disk_state;
+			enum drbd_repl_state *repl_state = peer_device->repl_state;
+
+			if (peer_disk_state[NEW] == D_NEGOTIATING)
+				nr_negotiating++;
+
+			if (nr_negotiating > 1)
+				return SS_IN_TRANSIENT_STATE;
+
+			if (peer_device->connection->fencing_policy >= FP_RESOURCE &&
+			    !(role[OLD] == R_PRIMARY && repl_state[OLD] < L_ESTABLISHED && !(peer_disk_state[OLD] <= D_OUTDATED)) &&
+			     (role[NEW] == R_PRIMARY && repl_state[NEW] < L_ESTABLISHED && !(peer_disk_state[NEW] <= D_OUTDATED)))
+				return SS_PRIMARY_NOP;
+
+			if (!(repl_state[OLD] > L_ESTABLISHED && disk_state[OLD] < D_INCONSISTENT) &&
+			     (repl_state[NEW] > L_ESTABLISHED && disk_state[NEW] < D_INCONSISTENT))
+				return SS_NO_LOCAL_DISK;
+
+			if (!(repl_state[OLD] > L_ESTABLISHED && peer_disk_state[OLD] < D_INCONSISTENT) &&
+			     (repl_state[NEW] > L_ESTABLISHED && peer_disk_state[NEW] < D_INCONSISTENT))
+				return SS_NO_REMOTE_DISK;
+
+			if (disk_state[OLD] > D_OUTDATED && disk_state[NEW] == D_OUTDATED &&
+			    !local_disk_may_be_outdated(device))
+				return SS_CONNECTED_OUTDATES;
+
+			if (!(repl_state[OLD] == L_VERIFY_S || repl_state[OLD] == L_VERIFY_T) &&
+			     (repl_state[NEW] == L_VERIFY_S || repl_state[NEW] == L_VERIFY_T)) {
+				struct net_conf *nc = rcu_dereference(peer_device->connection->transport.net_conf);
+
+				if (!nc || nc->verify_alg[0] == 0)
+					return SS_NO_VERIFY_ALG;
+			}
+
+			if (!(repl_state[OLD] == L_VERIFY_S || repl_state[OLD] == L_VERIFY_T) &&
+			     (repl_state[NEW] == L_VERIFY_S || repl_state[NEW] == L_VERIFY_T) &&
+				  peer_device->connection->agreed_pro_version < 88)
+				return SS_NOT_SUPPORTED;
+
+			if (repl_is_sync_source(repl_state[OLD]) &&
+					repl_state[NEW] == L_WF_BITMAP_S)
+				return SS_RESYNC_RUNNING;
+
+			if (repl_is_sync_target(repl_state[OLD]) &&
+					repl_state[NEW] == L_WF_BITMAP_T)
+				return SS_RESYNC_RUNNING;
+
+			if (repl_state[NEW] != repl_state[OLD] &&
+			    (repl_state[NEW] == L_STARTING_SYNC_T || repl_state[NEW] == L_STARTING_SYNC_S) &&
+			    repl_state[OLD] > L_ESTABLISHED)
+				return SS_RESYNC_RUNNING;
+
+			if ((repl_state[NEW] == L_VERIFY_S || repl_state[NEW] == L_VERIFY_T) && repl_state[OLD] < L_ESTABLISHED)
+				return SS_NEED_CONNECTION;
+
+			if ((repl_state[NEW] == L_VERIFY_S || repl_state[NEW] == L_VERIFY_T) &&
+			    repl_state[NEW] != repl_state[OLD] && repl_state[OLD] > L_ESTABLISHED)
+				return SS_RESYNC_RUNNING;
+
+			if ((repl_state[NEW] == L_STARTING_SYNC_S || repl_state[NEW] == L_STARTING_SYNC_T) &&
+			    repl_state[OLD] < L_ESTABLISHED)
+				return SS_NEED_CONNECTION;
+
+			if ((repl_state[NEW] == L_SYNC_TARGET || repl_state[NEW] == L_SYNC_SOURCE)
+			    && repl_state[OLD] < L_OFF)
+				return SS_NEED_CONNECTION; /* No NetworkFailure -> SyncTarget etc... */
+
+			if ((peer_disk_state[NEW] > D_DISKLESS && peer_disk_state[NEW] != D_UNKNOWN) &&
+			    peer_disk_state[OLD] == D_DISKLESS && !want_bitmap(peer_device))
+				return SS_ATTACH_NO_BITMAP;  /* peer with --bitmap=no wants to attach??? */
+		}
+	}
+
+	return SS_SUCCESS;
+}
+
+/**
+ * is_valid_soft_transition() - Returns an SS_ error code if state[NEW] is not valid
+ *
+ * "Soft" transitions are voluntary state changes which drbd may decline, such
+ * as a user request to promote a resource to primary.  Opposed to that are
+ * involuntary or "hard" transitions like a network connection loss.
+ *
+ * When deciding if a "soft" transition should be allowed, "hard" transitions
+ * may already have forced the resource into a critical state.  It may take
+ * several "soft" transitions to get the resource back to normal.  To allow
+ * those, rather than checking if the desired new state is valid, we can only
+ * check if the desired new state is "at least as good" as the current state.
+ *
+ * @resource:	DRBD resource
+ */
+static enum drbd_state_rv is_valid_soft_transition(struct drbd_resource *resource)
+{
+	enum drbd_state_rv rv;
+
+	rcu_read_lock();
+	rv = __is_valid_soft_transition(resource);
+	rcu_read_unlock();
+
+	return rv;
+}
+
+static enum drbd_state_rv
+is_valid_conn_transition(enum drbd_conn_state oc, enum drbd_conn_state nc)
+{
+	/* no change -> nothing to do, at least for the connection part */
+	if (oc == nc)
+		return SS_NOTHING_TO_DO;
+
+	/* disconnect of an unconfigured connection does not make sense */
+	if (oc == C_STANDALONE && nc == C_DISCONNECTING)
+		return SS_ALREADY_STANDALONE;
+
+	/* from C_STANDALONE, we start with C_UNCONNECTED */
+	if (oc == C_STANDALONE && nc != C_UNCONNECTED)
+		return SS_NEED_CONNECTION;
+
+	/* After a network error only C_UNCONNECTED or C_DISCONNECTING may follow. */
+	if (oc >= C_TIMEOUT && oc <= C_TEAR_DOWN && nc != C_UNCONNECTED && nc != C_DISCONNECTING)
+		return SS_IN_TRANSIENT_STATE;
+
+	/* After C_DISCONNECTING only C_STANDALONE may follow */
+	if (oc == C_DISCONNECTING && nc != C_STANDALONE)
+		return SS_IN_TRANSIENT_STATE;
+
+	return SS_SUCCESS;
+}
+
+/**
+ * is_valid_transition() - Returns an SS_ error code if the state transition is not possible
+ * This limits hard state transitions. Hard state transitions are facts that are
+ * imposed on DRBD by the environment, e.g. a disk broke or the network went down.
+ * But those hard state transitions are still not allowed to do everything.
+ * @resource: DRBD resource.
+ */
+static enum drbd_state_rv is_valid_transition(struct drbd_resource *resource)
+{
+	enum drbd_state_rv rv;
+	struct drbd_connection *connection;
+	struct drbd_device *device;
+	int vnr;
+
+	for_each_connection(connection, resource) {
+		rv = is_valid_conn_transition(connection->cstate[OLD], connection->cstate[NEW]);
+		if (rv < SS_SUCCESS)
+			return rv;
+	}
+
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		/* we cannot fail (again) if we already detached */
+		if ((device->disk_state[NEW] == D_FAILED || device->disk_state[NEW] == D_DETACHING) &&
+		    device->disk_state[OLD] == D_DISKLESS) {
+			return SS_IS_DISKLESS;
+		}
+	}
+
+	return SS_SUCCESS;
+}
+
+static bool is_sync_target_other_c(struct drbd_peer_device *ign_peer_device)
+{
+	struct drbd_device *device = ign_peer_device->device;
+	struct drbd_peer_device *peer_device;
+
+	for_each_peer_device(peer_device, device) {
+		enum drbd_repl_state r;
+
+		if (peer_device == ign_peer_device)
+			continue;
+
+		r = peer_device->repl_state[NEW];
+		if (r == L_SYNC_TARGET || r == L_PAUSED_SYNC_T)
+			return true;
+	}
+
+	return false;
+}
+
+static void drbd_start_other_targets_paused(struct drbd_peer_device *peer_device)
+{
+	struct drbd_device *device = peer_device->device;
+	struct drbd_peer_device *p;
+
+	for_each_peer_device(p, device) {
+		if (p == peer_device)
+			continue;
+
+		if (p->disk_state[NEW] >= D_INCONSISTENT && p->repl_state[NEW] == L_ESTABLISHED)
+			p->repl_state[NEW] = L_PAUSED_SYNC_T;
+	}
+}
+
+static bool drbd_is_sync_target_candidate(struct drbd_peer_device *peer_device)
+{
+	if (!repl_is_sync_target(peer_device->repl_state[NEW]))
+		return false;
+
+	if (peer_device->resync_susp_dependency[NEW] ||
+			peer_device->resync_susp_peer[NEW] ||
+			peer_device->resync_susp_user[NEW])
+		return false;
+
+	if (peer_device->disk_state[NEW] < D_OUTDATED)
+		return false;
+
+	return true;
+}
+
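+/*
+ * Pick at most one peer to actively resync from: prefer the candidate with
+ * the least out-of-sync data, keep the current target unless an alternative
+ * has at least 1MiB (256 bits) less to resync, never start a second resync
+ * while another is still active, and pause all remaining sync targets.
+ */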
+static void drbd_select_sync_target(struct drbd_device *device)
+{
+	struct drbd_peer_device *peer_device;
+	struct drbd_peer_device *target_current = NULL;
+	struct drbd_peer_device *target_active = NULL;
+	struct drbd_peer_device *target_desired = NULL;
+
+	/* Find current and active resync peers. */
+	for_each_peer_device_rcu(peer_device, device) {
+		if (peer_device->repl_state[OLD] == L_SYNC_TARGET && drbd_is_sync_target_candidate(peer_device))
+			target_current = peer_device;
+
+		if (peer_device->resync_active[NEW])
+			target_active = peer_device;
+	}
+
+	/* Choose desired resync peer. */
+	for_each_peer_device_rcu(peer_device, device) {
+		if (!drbd_is_sync_target_candidate(peer_device))
+			continue;
+
+		if (target_desired && drbd_bm_total_weight(peer_device) > drbd_bm_total_weight(target_desired))
+			continue;
+
+		target_desired = peer_device;
+	}
+
+	/* Keep the current resync target unless the alternative has at least
+	 * 1MiB of storage (256 bits) less to resync. */
+	if (target_current && target_desired &&
+			drbd_bm_total_weight(target_current) < drbd_bm_total_weight(target_desired) + 256UL)
+		target_desired = target_current;
+
+	/* Do not activate/unpause a resync if some other is still active. */
+	if (target_desired && target_active && target_desired != target_active)
+		target_desired = NULL;
+
+	/* Activate resync (if not already active). */
+	if (target_desired)
+		target_desired->resync_active[NEW] = true;
+
+	/* Make sure that the targets are correctly paused/unpaused. */
+	for_each_peer_device_rcu(peer_device, device) {
+		enum drbd_repl_state *repl_state = peer_device->repl_state;
+
+		peer_device->resync_susp_other_c[NEW] = target_desired && peer_device != target_desired;
+
+		if (!repl_is_sync_target(repl_state[NEW]))
+			continue;
+
+		peer_device->repl_state[NEW] = peer_device == target_desired ? L_SYNC_TARGET : L_PAUSED_SYNC_T;
+	}
+}
+
+static bool drbd_change_to_inconsistent(enum drbd_disk_state *disk_state,
+		enum drbd_conn_state *cstate)
+{
+	return !(disk_state[OLD] == D_INCONSISTENT && cstate[OLD] == C_CONNECTED) &&
+		(disk_state[NEW] == D_INCONSISTENT && cstate[NEW] == C_CONNECTED);
+}
+
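+/*
+ * Make the proposed NEW state internally consistent: resolve disk state
+ * negotiation results, clamp the disk states to what the replication
+ * states allow, recompute quorum, and derive the resource-level suspend
+ * flags.
+ */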
+static void sanitize_state(struct drbd_resource *resource)
+{
+	enum drbd_role *role = resource->role;
+	struct drbd_connection *connection;
+	struct drbd_device *device;
+	bool maybe_crashed_primary = false;
+	bool volume_lost_data_access = false;
+	bool volumes_have_data_access = true;
+	bool resource_has_quorum = true;
+	int connected_primaries = 0;
+	int vnr;
+
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		enum drbd_conn_state *cstate = connection->cstate;
+
+		if (cstate[NEW] < C_CONNECTED)
+			connection->peer_role[NEW] = R_UNKNOWN;
+
+		if (connection->peer_role[OLD] == R_PRIMARY && cstate[OLD] == C_CONNECTED &&
+		    ((cstate[NEW] >= C_TIMEOUT && cstate[NEW] <= C_PROTOCOL_ERROR) ||
+		     (cstate[NEW] == C_DISCONNECTING && resource->state_change_flags & CS_HARD)))
+			/* implies also C_BROKEN_PIPE and C_NETWORK_FAILURE */
+			maybe_crashed_primary = true;
+
+		if (connection->peer_role[NEW] == R_PRIMARY)
+			connected_primaries++;
+	}
+
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		struct drbd_peer_device *peer_device;
+		enum drbd_disk_state *disk_state = device->disk_state;
+		bool lost_connection = false;
+		bool have_good_peer = false;
+
+		if (disk_state[OLD] == D_DISKLESS && disk_state[NEW] == D_DETACHING)
+			disk_state[NEW] = D_DISKLESS;
+
+		if ((resource->state_change_flags & CS_IGN_OUTD_FAIL) &&
+		    disk_state[OLD] < D_OUTDATED && disk_state[NEW] == D_OUTDATED)
+			disk_state[NEW] = disk_state[OLD];
+
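+		/* While D_NEGOTIATING, wait until every relevant peer has
+		 * reported a negotiation result, then derive the resulting
+		 * disk and replication states from those results. */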
+		if (disk_state[NEW] == D_NEGOTIATING) {
+			int all = 0, target = 0, no_result = 0;
+			bool up_to_date_neighbor = false;
+
+			if (disk_state[OLD] != D_NEGOTIATING) {
+				for_each_peer_device_rcu(peer_device, device)
+					peer_device->negotiation_result = L_NEGOTIATING;
+			}
+
+			for_each_peer_device_rcu(peer_device, device) {
+				enum drbd_repl_state repl_state = peer_device->repl_state[NEW];
+				enum drbd_repl_state nr = peer_device->negotiation_result;
+				enum drbd_disk_state pdsk = peer_device->disk_state[NEW];
+
+				if (pdsk < D_NEGOTIATING || repl_state == L_OFF)
+					continue;
+
+				if (pdsk == D_UP_TO_DATE)
+					up_to_date_neighbor = true;
+
+				all++;
+				if (nr == L_NEG_NO_RESULT)
+					no_result++;
+				else if (nr == L_NEGOTIATING)
+					goto stay_negotiating;
+				else if (nr == L_WF_BITMAP_T)
+					target++;
+				else if (nr != L_ESTABLISHED && nr != L_WF_BITMAP_S)
+					drbd_err(peer_device, "Unexpected nr = %s\n", drbd_repl_str(nr));
+			}
+
+			/* negotiation finished */
+			if (no_result > 0 && no_result == all)
+				disk_state[NEW] = D_DETACHING;
+			else if (target)
+				disk_state[NEW] = D_INCONSISTENT;
+			else
+				disk_state[NEW] = up_to_date_neighbor ? D_UP_TO_DATE :
+					/* ldev_safe: dstate */ disk_state_from_md(device);
+
+			for_each_peer_device_rcu(peer_device, device) {
+				enum drbd_repl_state nr = peer_device->negotiation_result;
+
+				if (peer_device->connection->cstate[NEW] < C_CONNECTED ||
+				    nr == L_NEGOTIATING)
+					continue;
+
+				if (nr == L_NEG_NO_RESULT)
+					nr = L_ESTABLISHED;
+
+				if (nr == L_WF_BITMAP_S && disk_state[NEW] == D_INCONSISTENT) {
+					/* Should be sync source for one peer and sync
+					   target for another peer. Delay the sync source
+					   role */
+					nr = L_PAUSED_SYNC_S;
+					peer_device->resync_susp_other_c[NEW] = true;
+					drbd_warn(peer_device, "Finish me\n");
+				}
+				peer_device->repl_state[NEW] = nr;
+			}
+		}
+	stay_negotiating:
+
+		for_each_peer_device_rcu(peer_device, device) {
+			enum drbd_repl_state *repl_state = peer_device->repl_state;
+			enum drbd_disk_state *peer_disk_state = peer_device->disk_state;
+			struct drbd_connection *connection = peer_device->connection;
+			enum drbd_conn_state *cstate = connection->cstate;
+
+			if (peer_disk_state[NEW] == D_UP_TO_DATE &&
+			    (device->exposed_data_uuid & ~UUID_PRIMARY) ==
+			    (peer_device->current_uuid & ~UUID_PRIMARY))
+				have_good_peer = true;
+
+			if (repl_state[NEW] < L_ESTABLISHED) {
+				peer_device->resync_susp_peer[NEW] = false;
+				if (peer_disk_state[NEW] > D_UNKNOWN ||
+				    peer_disk_state[NEW] < D_INCONSISTENT)
+					peer_disk_state[NEW] = D_UNKNOWN;
+			}
+			if (repl_state[OLD] >= L_ESTABLISHED && repl_state[NEW] < L_ESTABLISHED) {
+				lost_connection = true;
+				peer_device->resync_active[NEW] = false;
+			}
+
+			/* Clear the aftr_isp when becoming unconfigured */
+			if (cstate[NEW] == C_STANDALONE &&
+			    disk_state[NEW] == D_DISKLESS &&
+			    role[NEW] == R_SECONDARY)
+				peer_device->resync_susp_dependency[NEW] = false;
+
+			/* Abort resync if a disk fails/detaches */
+			if (repl_state[NEW] > L_ESTABLISHED &&
+			    (disk_state[NEW] <= D_FAILED ||
+			     peer_disk_state[NEW] <= D_FAILED)) {
+				repl_state[NEW] = L_ESTABLISHED;
+				clear_bit(RECONCILIATION_RESYNC, &peer_device->flags);
+				peer_device->resync_active[NEW] = false;
+			}
+
+			/* Suspend IO while fence-peer handler runs (peer lost) */
+			if (connection->fencing_policy == FP_STONITH &&
+			    (role[NEW] == R_PRIMARY &&
+			     repl_state[NEW] < L_ESTABLISHED &&
+			     peer_disk_state[NEW] == D_UNKNOWN) &&
+			    (role[OLD] != R_PRIMARY ||
+			     peer_disk_state[OLD] != D_UNKNOWN))
+				connection->susp_fen[NEW] = true;
+		}
+
+		drbd_select_sync_target(device);
+
+		for_each_peer_device_rcu(peer_device, device) {
+			enum drbd_repl_state *repl_state = peer_device->repl_state;
+			enum drbd_disk_state *peer_disk_state = peer_device->disk_state;
+			struct drbd_connection *connection = peer_device->connection;
+			enum drbd_conn_state *cstate = connection->cstate;
+			enum drbd_disk_state min_disk_state, max_disk_state;
+			enum drbd_disk_state min_peer_disk_state, max_peer_disk_state;
+			enum drbd_role *peer_role = connection->peer_role;
+			bool uuids_match, cond;
+
+			/* Pause a SyncSource until it finishes resync as target on other connections */
+			if (repl_state[OLD] != L_SYNC_SOURCE && repl_state[NEW] == L_SYNC_SOURCE &&
+			    is_sync_target_other_c(peer_device))
+				peer_device->resync_susp_other_c[NEW] = true;
+
+			if (resync_suspended(peer_device, NEW)) {
+				if (repl_state[NEW] == L_SYNC_SOURCE)
+					repl_state[NEW] = L_PAUSED_SYNC_S;
+			} else {
+				if (repl_state[NEW] == L_PAUSED_SYNC_S)
+					repl_state[NEW] = L_SYNC_SOURCE;
+			}
+
+			/* Implication of the repl state on other peer's repl state */
+			if (repl_state[OLD] != L_STARTING_SYNC_T && repl_state[NEW] == L_STARTING_SYNC_T)
+				drbd_start_other_targets_paused(peer_device);
+
+			/* D_CONSISTENT vanishes when we get connected (pre 9.0) */
+			if (connection->agreed_pro_version < 110 &&
+			    repl_state[NEW] >= L_ESTABLISHED && repl_state[NEW] < L_AHEAD) {
+				if (disk_state[NEW] == D_CONSISTENT)
+					disk_state[NEW] = D_UP_TO_DATE;
+				if (peer_disk_state[NEW] == D_CONSISTENT)
+					peer_disk_state[NEW] = D_UP_TO_DATE;
+			}
+
+			/* Implications of the repl state on the disk states */
+			min_disk_state = D_DISKLESS;
+			max_disk_state = D_UP_TO_DATE;
+			min_peer_disk_state = D_INCONSISTENT;
+			max_peer_disk_state = D_UNKNOWN;
+			switch (repl_state[NEW]) {
+			case L_OFF:
+				/* values from above */
+				break;
+			case L_WF_BITMAP_T:
+			case L_STARTING_SYNC_T:
+			case L_WF_SYNC_UUID:
+			case L_BEHIND:
+				min_disk_state = D_INCONSISTENT;
+				max_disk_state = D_OUTDATED;
+				min_peer_disk_state = D_INCONSISTENT;
+				max_peer_disk_state = D_UP_TO_DATE;
+				break;
+			case L_VERIFY_S:
+			case L_VERIFY_T:
+				min_disk_state = D_INCONSISTENT;
+				max_disk_state = D_UP_TO_DATE;
+				min_peer_disk_state = D_INCONSISTENT;
+				max_peer_disk_state = D_UP_TO_DATE;
+				break;
+			case L_ESTABLISHED:
+				min_disk_state = D_DISKLESS;
+				max_disk_state = D_UP_TO_DATE;
+				min_peer_disk_state = D_DISKLESS;
+				max_peer_disk_state = D_UP_TO_DATE;
+				break;
+			case L_WF_BITMAP_S:
+			case L_PAUSED_SYNC_S:
+			case L_STARTING_SYNC_S:
+			case L_AHEAD:
+				min_disk_state = D_INCONSISTENT;
+				max_disk_state = D_UP_TO_DATE;
+				min_peer_disk_state = D_INCONSISTENT;
+				max_peer_disk_state = D_CONSISTENT; /* D_OUTDATED would be nice, but an explicit outdate is necessary */
+				break;
+			case L_PAUSED_SYNC_T:
+			case L_SYNC_TARGET:
+				min_disk_state = D_INCONSISTENT;
+				max_disk_state = D_INCONSISTENT;
+				min_peer_disk_state = D_INCONSISTENT;
+				max_peer_disk_state = D_UP_TO_DATE;
+				break;
+			case L_SYNC_SOURCE:
+				min_disk_state = D_INCONSISTENT;
+				max_disk_state = D_UP_TO_DATE;
+				min_peer_disk_state = D_INCONSISTENT;
+				max_peer_disk_state = D_INCONSISTENT;
+				break;
+			}
+
+			/* Clamp the disk states into the bounds implied by the repl state */
+			if (disk_state[NEW] > max_disk_state)
+				disk_state[NEW] = max_disk_state;
+
+			if (disk_state[NEW] < min_disk_state)
+				disk_state[NEW] = min_disk_state;
+
+			if (peer_disk_state[NEW] > max_peer_disk_state)
+				peer_disk_state[NEW] = max_peer_disk_state;
+
+			if (peer_disk_state[NEW] < min_peer_disk_state)
+				peer_disk_state[NEW] = min_peer_disk_state;
+
+			/* A detach is a cluster-wide transaction. The peer_disk_state
+			   updates come in while we have it prepared. When the cluster-wide
+			   state change gets committed, prevent D_DISKLESS -> D_FAILED */
+			if (peer_disk_state[OLD] == D_DISKLESS &&
+			    (peer_disk_state[NEW] == D_FAILED || peer_disk_state[NEW] == D_DETACHING))
+				peer_disk_state[NEW] = D_DISKLESS;
+
+			/* Upgrade myself from D_OUTDATED if:
+			   1) We connect to a stable D_UP_TO_DATE (or D_CONSISTENT) peer without resync
+			   2) The peer just became stable
+			   3) The peer was stable and just became D_UP_TO_DATE */
+			if (repl_state[NEW] == L_ESTABLISHED && disk_state[NEW] == D_OUTDATED &&
+			    peer_disk_state[NEW] >= D_CONSISTENT && test_bit(UUIDS_RECEIVED, &peer_device->flags) &&
+			    peer_device->uuid_flags & UUID_FLAG_STABLE &&
+			    (repl_state[OLD] < L_ESTABLISHED ||
+			     peer_device->uuid_flags & UUID_FLAG_GOT_STABLE ||
+			     peer_disk_state[OLD] == D_OUTDATED))
+				disk_state[NEW] = peer_disk_state[NEW];
+
+			/* The attempted resync made us D_OUTDATED, roll that back if appropriate */
+			if (repl_state[OLD] == L_WF_BITMAP_T && repl_state[NEW] == L_OFF &&
+			    disk_state[NEW] == D_OUTDATED && stable_up_to_date_neighbor(device) &&
+			    /* ldev_safe: repl_state[OLD] */ may_be_up_to_date(device, NEW))
+				disk_state[NEW] = D_UP_TO_DATE;
+
+			/* Separate clause intentional here; the D_CONSISTENT from above might trigger this */
+			if (repl_state[OLD] < L_ESTABLISHED && repl_state[NEW] >= L_ESTABLISHED &&
+			    disk_state[NEW] == D_CONSISTENT &&
+			    /* ldev_safe: repl_state[NEW] */ may_be_up_to_date(device, NEW))
+				disk_state[NEW] = D_UP_TO_DATE;
+
+			/* Follow a neighbor that goes from D_CONSISTENT to D_UP_TO_DATE */
+			if (disk_state[NEW] == D_CONSISTENT &&
+			    peer_disk_state[OLD] == D_CONSISTENT &&
+			    peer_disk_state[NEW] == D_UP_TO_DATE &&
+			    peer_device->uuid_flags & UUID_FLAG_STABLE)
+				disk_state[NEW] = D_UP_TO_DATE;
+
+			peer_device->uuid_flags &= ~UUID_FLAG_GOT_STABLE;
+
+			uuids_match =
+				(peer_device->current_uuid & ~UUID_PRIMARY) ==
+				(drbd_current_uuid(device) & ~UUID_PRIMARY);
+
+			if (peer_role[OLD] == R_UNKNOWN && peer_role[NEW] == R_PRIMARY &&
+			    peer_disk_state[NEW] == D_DISKLESS && disk_state[NEW] >= D_NEGOTIATING) {
+				/* Got connected to a diskless primary */
+				if (uuids_match && !is_sync_target_other_c(peer_device)) {
+					if (device->disk_state[NOW] < D_UP_TO_DATE) {
+						drbd_info(peer_device, "Upgrading local disk to D_UP_TO_DATE since current UUID matches.\n");
+						disk_state[NEW] = D_UP_TO_DATE;
+					}
+				} else {
+					set_bit(TRY_TO_GET_RESYNC, &device->flags);
+					if (disk_state[NEW] == D_UP_TO_DATE) {
+						drbd_info(peer_device, "Downgrading local disk to D_CONSISTENT since current UUID differs.\n");
+						disk_state[NEW] = D_CONSISTENT;
+						/* This is a "safety net"; it can only happen if fencing and quorum
+						   are both disabled. This alone would be racy, look for
+						   "Do not trust this guy!" (see also may_return_to_up_to_date()) */
+					}
+				}
+			}
+
+			if (connection->agreed_features & DRBD_FF_RS_SKIP_UUID)
+				cond = have_good_peer &&
+					(device->exposed_data_uuid & ~UUID_PRIMARY) !=
+					(peer_device->current_uuid & ~UUID_PRIMARY);
+			else
+				cond = peer_disk_state[OLD] == D_UNKNOWN &&
+					role[NEW] == R_PRIMARY && !uuids_match;
+
+			if (disk_state[NEW] == D_DISKLESS && peer_disk_state[NEW] == D_UP_TO_DATE &&
+			    cond) {
+				/* Do not trust this guy!
+				   He wants to be D_UP_TO_DATE, but has a different current
+				   UUID. Do not accept him as D_UP_TO_DATE but downgrade that to
+				   D_CONSISTENT here.
+				*/
+				peer_disk_state[NEW] = D_CONSISTENT;
+			}
+
+			/*
+			 * Determine whether peer will disable replication due to this transition.
+			 *
+			 * This matches the condition on the peer below.
+			 */
+			if (drbd_change_to_inconsistent(disk_state, cstate) ||
+					(!repl_is_sync_target(repl_state[OLD]) &&
+					 repl_is_sync_target(repl_state[NEW])))
+				peer_device->peer_replication[NEW] =
+					test_bit(PEER_REPLICATION_NEXT, &peer_device->flags);
+
+			/*
+			 * Decide whether to disable replication when the peer
+			 * transitions to Inconsistent. Only consider the disk
+			 * state when we are Connected because we want to wait
+			 * until we know whether replication should be enabled
+			 * on the next transition to Inconsistent. This is
+			 * communicated with the P_ENABLE_REPLICATION_NEXT
+			 * packet.
+			 *
+			 * Also re-evaluate whether to disable replication when
+			 * we become SyncSource, even when the peer's disk was
+			 * already Inconsistent. This is relevant when
+			 * switching between Ahead-Behind+Inconsistent and
+			 * SyncSource-SyncTarget.
+			 *
+			 * This matches the condition on the peer above.
+			 */
+			if (drbd_change_to_inconsistent(peer_disk_state, cstate) ||
+					(!repl_is_sync_source(repl_state[OLD]) &&
+					 repl_is_sync_source(repl_state[NEW])))
+				peer_device->replication[NEW] =
+					test_bit(REPLICATION_NEXT, &peer_device->flags);
+
+			/*
+			 * Not strictly necessary, since "replication" is only
+			 * considered when the peer disk is Inconsistent, but
+			 * it makes the logs clearer.
+			 */
+			if (peer_disk_state[OLD] == D_INCONSISTENT &&
+					peer_disk_state[NEW] != D_INCONSISTENT)
+				peer_device->replication[NEW] = true;
+		}
+
+		if (resource->res_opts.quorum != QOU_OFF)
+			device->have_quorum[NEW] = calc_quorum(device, NULL);
+		else
+			device->have_quorum[NEW] = true;
+
+		if (!device->have_quorum[NEW] && disk_state[NEW] == D_UP_TO_DATE &&
+		    test_bit(RESTORE_QUORUM, &device->flags)) {
+			device->have_quorum[NEW] = true;
+			set_bit(RESTORING_QUORUM, &device->flags);
+		}
+
+		if (!device->have_quorum[NEW])
+			resource_has_quorum = false;
+
+		/* Suspend IO if we have no accessible data available.
+		 * Policy may be extended later to be able to suspend
+		 * if redundancy falls below a certain level. */
+		if (role[NEW] == R_PRIMARY && !drbd_data_accessible(device, NEW)) {
+			volumes_have_data_access = false;
+			if (role[OLD] != R_PRIMARY || drbd_data_accessible(device, OLD))
+				volume_lost_data_access = true;
+		}
+
+		if (lost_connection && disk_state[NEW] == D_NEGOTIATING)
+			disk_state[NEW] = /* ldev_safe: disk_state */ disk_state_from_md(device);
+
+		if (maybe_crashed_primary && !connected_primaries &&
+		    disk_state[NEW] == D_UP_TO_DATE && role[NOW] == R_SECONDARY)
+			disk_state[NEW] = D_CONSISTENT;
+	}
+	rcu_read_unlock();
+
+	if (volumes_have_data_access)
+		resource->susp_nod[NEW] = false;
+	if (volume_lost_data_access && resource->res_opts.on_no_data == OND_SUSPEND_IO)
+		resource->susp_nod[NEW] = true;
+
+	resource->susp_quorum[NEW] =
+		resource->res_opts.on_no_quorum == ONQ_SUSPEND_IO ? !resource_has_quorum : false;
+
+	if (!resource->susp_uuid[OLD] &&
+	    resource_is_suspended(resource, OLD) && !resource_is_suspended(resource, NEW)) {
+		idr_for_each_entry(&resource->devices, device, vnr) {
+			if (test_bit(NEW_CUR_UUID, &device->flags)) {
+				resource->susp_uuid[NEW] = true;
+				break;
+			}
+		}
+	}
+
+	if (role[OLD] == R_PRIMARY && role[NEW] == R_SECONDARY &&
+	    (resource->state_change_flags & CS_FS_IGN_OPENERS)) {
+		int rw_count, ro_count;
+		drbd_open_counts(resource, &rw_count, &ro_count);
+		if (rw_count)
+			resource->fail_io[NEW] = true;
+	}
+}
+
+void drbd_resume_al(struct drbd_device *device)
+{
+	if (test_and_clear_bit(AL_SUSPENDED, &device->flags))
+		drbd_info(device, "Resumed AL updates\n");
+}
+
+static bool drbd_need_twopc_after_lost_peer(struct drbd_connection *connection)
+{
+	enum drbd_conn_state *cstate = connection->cstate;
+
+	/* Is the state change a disconnect? */
+	if (!(cstate[OLD] == C_CONNECTED && cstate[NEW] < C_CONNECTED))
+		return false;
+
+	/*
+	 * The peer did not provide reachable_nodes when disconnecting, so
+	 * trigger a twopc ourselves.
+	 */
+	if (!(connection->agreed_features & DRBD_FF_2PC_V2))
+		return true;
+
+	/* Trigger a twopc if it was a non-graceful disconnect. */
+	return cstate[NEW] != C_TEAR_DOWN;
+}
+
+static void drbd_schedule_empty_twopc(struct drbd_resource *resource)
+{
+	kref_get(&resource->kref);
+	if (!schedule_work(&resource->empty_twopc))
+		kref_put(&resource->kref, drbd_destroy_resource);
+}
+
+/*
+ * We cache a node mask of the online members of the cluster. It might
+ * be off because a node is still marked as online immediately after
+ * it crashes. That means it might have an online mark for an already
+ * offline node. On the other hand, we guarantee that it never has
+ * a zero for an online node.
+ */
+static void update_members(struct drbd_resource *resource)
+{
+	enum chg_state_flags flags = resource->state_change_flags;
+	struct twopc_reply *reply = &resource->twopc_reply;
+	const int my_node_id = resource->res_opts.node_id;
+	struct drbd_connection *connection;
+
+	/* in case we initiated 2PC we know the reachable nodes */
+	if (flags & CS_TWOPC && reply->initiator_node_id == my_node_id) {
+		resource->members = reply->reachable_nodes;
+		return;
+	}
+
+	/* In case I am 2PC target of a connect or non-graceful disconnect */
+	for_each_connection(connection, resource) {
+		enum drbd_conn_state *cstate = connection->cstate;
+		const int peer_node_mask = NODE_MASK(connection->peer_node_id);
+
+		/* add a fresh connection to the members */
+		if (cstate[OLD] < C_CONNECTED && cstate[NEW] == C_CONNECTED)
+			resource->members |= peer_node_mask;
+
+		/* Connection to peer lost. Check if we should remove it from the members */
+		if (drbd_need_twopc_after_lost_peer(connection) &&
+				resource->members & peer_node_mask)
+			drbd_schedule_empty_twopc(resource);
+	}
+}
+
+static bool drbd_any_peer_device_up_to_date(struct drbd_connection *connection)
+{
+	int vnr;
+	struct drbd_peer_device *peer_device;
+
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		if (peer_device->disk_state[NEW] == D_UP_TO_DATE)
+			return true;
+	}
+
+	return false;
+}
+
+/* Whether replication is enabled on all peers for this device */
+bool drbd_all_peer_replication(struct drbd_device *device, enum which_state which)
+{
+	struct drbd_peer_device *peer_device;
+	bool all_peer_replication = true;
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		if (!peer_device->peer_replication[which])
+			all_peer_replication = false;
+	}
+	rcu_read_unlock();
+
+	return all_peer_replication;
+}
+
+/* As drbd_all_peer_replication() but takes a state change object */
+static bool drbd_all_peer_replication_change(struct drbd_state_change *state_change, int n_device,
+		enum which_state which)
+{
+	int n_connection;
+
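+	/* state_change->peer_devices is a flattened
+	 * [n_devices][n_connections] array. */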
+	for (n_connection = 0; n_connection < state_change->n_connections; n_connection++) {
+		struct drbd_peer_device_state_change *peer_device_state_change =
+			&state_change->peer_devices[
+				n_device * state_change->n_connections + n_connection];
+
+		if (!peer_device_state_change->peer_replication[which])
+			return false;
+	}
+
+	return true;
+}
+
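+/*
+ * Maintain, per Primary peer, the mask of UpToDate nodes whose flush acks
+ * we are still waiting for. Stale bits are cleared when the Primary or the
+ * UpToDate peer goes away; a new flush cycle starts when a device becomes
+ * a sync target with replication enabled towards all peers.
+ */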
+static void drbd_determine_flush_pending(struct drbd_resource *resource)
+{
+	struct drbd_device *device;
+	struct drbd_connection *primary_connection;
+	struct drbd_connection *up_to_date_connection;
+	int vnr;
+	bool send_flush_requests = false;
+
+	/* Clear any bits if we no longer expect or require a flush ack */
+	spin_lock(&resource->initiator_flush_lock);
+	for_each_connection(primary_connection, resource) {
+		u64 *pending_flush_mask = &primary_connection->pending_flush_mask;
+
+		/*
+		 * Clear bits if we no longer expect or require a flush ack due
+		 * to loss of connection to the Primary peer.
+		 */
+		if (primary_connection->cstate[NEW] != C_CONNECTED) {
+			if (*pending_flush_mask)
+				*pending_flush_mask = 0;
+			continue;
+		}
+
+		/*
+		 * Clear bits if we no longer expect or require a flush ack
+		 * because the peer that was UpToDate is no longer UpToDate.
+		 * For instance, if we lose the connection to that peer.
+		 */
+		for_each_connection(up_to_date_connection, resource) {
+			u64 up_to_date_mask = NODE_MASK(up_to_date_connection->peer_node_id);
+
+			if (drbd_any_peer_device_up_to_date(up_to_date_connection))
+				continue;
+
+			if (*pending_flush_mask & up_to_date_mask)
+				*pending_flush_mask &= ~up_to_date_mask;
+		}
+	}
+	spin_unlock(&resource->initiator_flush_lock);
+
+	/* Check if we need a new flush */
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		struct drbd_peer_device *peer_device;
+
+		for_each_peer_device(peer_device, device) {
+			if (!(is_sync_target_state(peer_device, NOW) &&
+					drbd_all_peer_replication(device, NOW)) &&
+					is_sync_target_state(peer_device, NEW) &&
+					drbd_all_peer_replication(device, NEW))
+				send_flush_requests = true;
+		}
+	}
+
+	if (!send_flush_requests)
+		return;
+
+	/* We need a new flush. Mark which acks we are waiting for. */
+	spin_lock(&resource->initiator_flush_lock);
+	resource->current_flush_sequence++;
+
+	for_each_connection(primary_connection, resource) {
+		primary_connection->pending_flush_mask = 0;
+
+		if (primary_connection->peer_role[NEW] != R_PRIMARY)
+			continue;
+
+		if (primary_connection->agreed_pro_version < 123)
+			continue;
+
+		for_each_connection(up_to_date_connection, resource) {
+			u64 up_to_date_mask = NODE_MASK(up_to_date_connection->peer_node_id);
+
+			if (!drbd_any_peer_device_up_to_date(up_to_date_connection))
+				continue;
+
+			if (up_to_date_connection->agreed_pro_version < 123)
+				continue;
+
+			primary_connection->pending_flush_mask |= up_to_date_mask;
+		}
+	}
+	spin_unlock(&resource->initiator_flush_lock);
+}
+
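+/*
+ * Set up the start position and counters for an online verify run. For
+ * L_VERIFY_T the effective start is learned from the first P_OV_REQUEST.
+ */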
+static void set_ov_position(struct drbd_peer_device *peer_device,
+			    enum drbd_repl_state repl_state)
+{
+	struct drbd_device *device = peer_device->device;
+	struct drbd_bitmap *bm = device->bitmap;
+
+	if (peer_device->connection->agreed_pro_version < 90)
+		peer_device->ov_start_sector = 0;
+	peer_device->rs_total = drbd_bm_bits(device);
+	peer_device->ov_position = 0;
+	if (repl_state == L_VERIFY_T) {
+		/* starting online verify from an arbitrary position
+		 * does not fit well into the existing protocol.
+		 * On L_VERIFY_T, we initialize ov_left and friends
+		 * implicitly in receive_common_data_request once the
+		 * first P_OV_REQUEST is received */
+		peer_device->ov_start_sector = ~(sector_t)0;
+	} else {
+		unsigned long bit = bm_sect_to_bit(bm, peer_device->ov_start_sector);
+		if (bit >= peer_device->rs_total) {
+			peer_device->ov_start_sector =
+				bm_bit_to_sect(bm, peer_device->rs_total - 1);
+			peer_device->rs_total = 1;
+		} else {
+			peer_device->rs_total -= bit;
+		}
+		peer_device->ov_position = peer_device->ov_start_sector;
+	}
+	atomic64_set(&peer_device->ov_left, peer_device->rs_total);
+	peer_device->ov_skipped = 0;
+}
+
+static void initialize_resync_progress_marks(struct drbd_peer_device *peer_device)
+{
+	unsigned long tw = drbd_bm_total_weight(peer_device);
+	unsigned long now = jiffies;
+	int i;
+
+	peer_device->rs_last_progress_report_ts = now;
+	for (i = 0; i < DRBD_SYNC_MARKS; i++) {
+		peer_device->rs_mark_left[i] = tw;
+		peer_device->rs_mark_time[i] = now;
+	}
+}
+
+static void initialize_resync(struct drbd_peer_device *peer_device)
+{
+	unsigned long tw = drbd_bm_total_weight(peer_device);
+	unsigned long now = jiffies;
+
+	peer_device->last_in_sync_end = 0;
+	peer_device->resync_next_bit = 0;
+	peer_device->last_resync_pass_bits = tw;
+	peer_device->rs_failed = 0;
+	peer_device->rs_paused = 0;
+	peer_device->rs_same_csum = 0;
+	peer_device->rs_total = tw;
+	peer_device->rs_start = now;
+	peer_device->rs_last_writeout = now;
+	initialize_resync_progress_marks(peer_device);
+	drbd_rs_controller_reset(peer_device);
+}
+
+/* Is there a primary with access to up to date data known */
+static bool primary_and_data_present(struct drbd_device *device)
+{
+	bool up_to_date_data = device->disk_state[NEW] == D_UP_TO_DATE;
+	struct drbd_resource *resource = device->resource;
+	bool primary = resource->role[NEW] == R_PRIMARY;
+	struct drbd_peer_device *peer_device;
+
+	for_each_peer_device(peer_device, device) {
+		struct drbd_connection *connection = peer_device->connection;
+
+		/* Do not consider the peer if we are disconnecting. */
+		if (resource->remote_state_change &&
+				drbd_twopc_between_peer_and_me(connection) &&
+				resource->twopc_reply.is_disconnect)
+			continue;
+
+		if (connection->peer_role[NEW] == R_PRIMARY)
+			primary = true;
+
+		if (peer_device->disk_state[NEW] == D_UP_TO_DATE)
+			up_to_date_data = true;
+	}
+
+	return primary && up_to_date_data;
+}
+
+static bool extra_ldev_ref_for_after_state_chg(enum drbd_disk_state *disk_state)
+{
+	return (disk_state[OLD] != D_FAILED && disk_state[NEW] == D_FAILED) ||
+	       (disk_state[OLD] != D_DETACHING && disk_state[NEW] == D_DETACHING) ||
+	       (disk_state[OLD] != D_DISKLESS && disk_state[NEW] == D_DISKLESS);
+}
+
+static bool has_starting_resyncs(struct drbd_connection *connection)
+{
+	struct drbd_peer_device *peer_device;
+	int vnr;
+
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		if (peer_device->repl_state[NEW] > L_ESTABLISHED)
+			return true;
+	}
+	return false;
+}
+
+static bool should_try_become_up_to_date(struct drbd_device *device, enum drbd_disk_state *disk_state,
+		enum which_state which)
+{
+	return disk_state[OLD] == D_UP_TO_DATE && disk_state[NEW] == D_CONSISTENT &&
+			may_return_to_up_to_date(device, which);
+}
+
+/**
+ * finish_state_change  -  carry out actions triggered by a state change
+ * @resource: DRBD resource.
+ * @tag: State change tag to print in status messages.
+ */
+static void finish_state_change(struct drbd_resource *resource, const char *tag)
+{
+	enum drbd_role *role = resource->role;
+	bool *susp_uuid = resource->susp_uuid;
+	struct drbd_device *device;
+	struct drbd_connection *connection;
+	bool starting_resync = false;
+	bool start_new_epoch = false;
+	bool lost_a_primary_peer = false;
+	bool some_peer_is_primary = false;
+	bool some_peer_request_in_flight = false;
+	bool resource_suspended[2];
+	bool unfreeze_io = false;
+	int vnr;
+
+	print_state_change(resource, "", tag);
+
+	resource_suspended[OLD] = resource_is_suspended(resource, OLD);
+	resource_suspended[NEW] = resource_is_suspended(resource, NEW);
+
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		bool *have_quorum = device->have_quorum;
+		struct drbd_peer_device *peer_device;
+
+		for_each_peer_device(peer_device, device) {
+			struct drbd_connection *connection = peer_device->connection;
+			bool did, should;
+
+			did = drbd_should_do_remote(peer_device, NOW);
+			should = drbd_should_do_remote(peer_device, NEW);
+
+			if (!did && should) {
+				/* Since "did" is false, the request with this
+				 * dagtag and prior requests were not be marked
+				 * to be sent to this peer. Hence this will not
+				 * send a dagtag packet before the
+				 * corresponding data packet.
+				 *
+				 * It is possible that this peer does not
+				 * actually have the data corresponding to this
+				 * dagtag. However in that case, the disk state
+				 * of that peer will not be D_UP_TO_DATE, so it
+				 * not be relevant what dagtag we have sent it. */
+				connection->send_dagtag = resource->dagtag_sector;
+				drbd_queue_work_if_unqueued(
+						&connection->sender_work,
+						&connection->send_dagtag_work);
+			}
+
+			if (did != should)
+				start_new_epoch = true;
+
+			if (peer_device->repl_state[OLD] != L_WF_BITMAP_S &&
+					peer_device->repl_state[NEW] == L_WF_BITMAP_S)
+				clear_bit(B_RS_H_DONE, &peer_device->flags);
+
+			if (peer_device->repl_state[OLD] != L_WF_BITMAP_T &&
+					peer_device->repl_state[NEW] == L_WF_BITMAP_T)
+				clear_bit(B_RS_H_DONE, &peer_device->flags);
+
+			if (!is_sync_state(peer_device, NOW) &&
+			    is_sync_state(peer_device, NEW)) {
+				clear_bit(RS_DONE, &peer_device->flags);
+				clear_bit(B_RS_H_DONE, &peer_device->flags);
+				clear_bit(SYNC_TARGET_TO_BEHIND, &peer_device->flags);
+			}
+		}
+
+		if (role[NEW] == R_PRIMARY && !have_quorum[NEW])
+			set_bit(PRIMARY_LOST_QUORUM, &device->flags);
+	}
+	if (start_new_epoch)
+		start_new_tl_epoch(resource);
+
+	spin_lock(&resource->peer_ack_lock);
+	if (role[OLD] == R_PRIMARY && role[NEW] == R_SECONDARY && resource->peer_ack_req) {
+		resource->last_peer_acked_dagtag = resource->peer_ack_req->dagtag_sector;
+		drbd_queue_peer_ack(resource, resource->peer_ack_req);
+		resource->peer_ack_req = NULL;
+	}
+	spin_unlock(&resource->peer_ack_lock);
+
+	drbd_determine_flush_pending(resource);
+
+	if (!resource->fail_io[OLD] && resource->fail_io[NEW])
+		drbd_warn(resource, "Failing IOs\n");
+
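+	/* Determine whether any peer is Primary and whether peer requests may
+	 * still be in flight; both feed into the MDF_PRIMARY_IND handling
+	 * further down. */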
+	for_each_connection(connection, resource) {
+		enum drbd_role *peer_role = connection->peer_role;
+		enum drbd_conn_state *cstate = connection->cstate;
+
+		if (peer_role[NEW] == R_PRIMARY)
+			some_peer_is_primary = true;
+
+		switch (cstate[NEW]) {
+		case C_CONNECTED:
+			if (atomic_read(&connection->active_ee_cnt)
+					|| atomic_read(&connection->done_ee_cnt))
+				some_peer_request_in_flight = true;
+			break;
+		case C_STANDALONE:
+		case C_UNCONNECTED:
+		case C_CONNECTING:
+			/* maybe others are safe as well? which ones? */
+			break;
+		default:
+			/* if we just disconnected, there may still be some request in flight. */
+			some_peer_request_in_flight = true;
+		}
+
+		if (some_peer_is_primary && some_peer_request_in_flight)
+			break;
+	}
+
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		struct drbd_peer_device *peer_device;
+		enum drbd_disk_state *disk_state = device->disk_state;
+		bool create_new_uuid = false;
+
+		if (test_bit(RESTORING_QUORUM, &device->flags) &&
+		    !device->have_quorum[OLD] && device->have_quorum[NEW]) {
+			clear_bit(RESTORING_QUORUM, &device->flags);
+			drbd_info(resource, "Restored quorum from before reboot\n");
+		}
+
+		if (test_bit(RESTORE_QUORUM, &device->flags) &&
+		    (device->have_quorum[NEW] || disk_state[NEW] < D_UP_TO_DATE))
+			clear_bit(RESTORE_QUORUM, &device->flags);
+
+		/* if we are going -> D_FAILED or D_DISKLESS, grab one extra reference
+		 * on the ldev here, to be sure the transition -> D_DISKLESS or
+		 * drbd_ldev_destroy() won't happen before our corresponding
+		 * w_after_state_change work runs, where we put_ldev again. */
+		if (extra_ldev_ref_for_after_state_chg(disk_state))
+			atomic_inc(&device->local_cnt);
+
+		if (disk_state[OLD] != D_DISKLESS && disk_state[NEW] == D_DISKLESS) {
+			/* who knows if we are ever going to be attached again,
+			 * and whether that will be the same device, or a newly
+			 * initialized one. */
+			for_each_peer_device(peer_device, device)
+				peer_device->bitmap_index = -1;
+		}
+
+		/* ldev_safe: transitioning from D_ATTACHING, ldev just established */
+		if (disk_state[OLD] == D_ATTACHING && disk_state[NEW] >= D_NEGOTIATING)
+			drbd_info(device, "attached to current UUID: %016llX\n", device->ldev->md.current_uuid);
+
+		for_each_peer_device(peer_device, device) {
+			enum drbd_repl_state *repl_state = peer_device->repl_state;
+			enum drbd_disk_state *peer_disk_state = peer_device->disk_state;
+			struct drbd_connection *connection = peer_device->connection;
+			enum drbd_role *peer_role = connection->peer_role;
+
+			if (repl_state[OLD] <= L_ESTABLISHED && repl_state[NEW] == L_WF_BITMAP_S)
+				starting_resync = true;
+
+			if ((disk_state[OLD] != D_UP_TO_DATE || peer_disk_state[OLD] != D_UP_TO_DATE) &&
+			    (disk_state[NEW] == D_UP_TO_DATE && peer_disk_state[NEW] == D_UP_TO_DATE)) {
+				clear_bit(CRASHED_PRIMARY, &device->flags);
+				if (test_bit(UUIDS_RECEIVED, &peer_device->flags))
+					peer_device->uuid_flags &= ~((u64)UUID_FLAG_CRASHED_PRIMARY);
+			}
+
+			/* Aborted verify run, or we reached the stop sector.
+			 * Log the last position, unless end-of-device. */
+			if ((repl_state[OLD] == L_VERIFY_S || repl_state[OLD] == L_VERIFY_T) &&
+			    repl_state[NEW] <= L_ESTABLISHED) {
+				/* ldev_safe: repl_state[OLD] */
+				struct drbd_bitmap *bm = device->bitmap;
+				unsigned long ov_left = atomic64_read(&peer_device->ov_left);
+
+				/* ldev_safe: repl_state[OLD] */
+				peer_device->ov_start_sector =
+					bm_bit_to_sect(bm, drbd_bm_bits(device) - ov_left);
+				if (ov_left)
+					drbd_info(peer_device, "Online Verify reached sector %llu\n",
+						  (unsigned long long)peer_device->ov_start_sector);
+			}
+
+			if ((repl_state[OLD] == L_PAUSED_SYNC_T || repl_state[OLD] == L_PAUSED_SYNC_S) &&
+			    (repl_state[NEW] == L_SYNC_TARGET  || repl_state[NEW] == L_SYNC_SOURCE)) {
+				drbd_info(peer_device, "Syncer continues.\n");
+				peer_device->rs_paused += (long)jiffies
+						  -(long)peer_device->rs_mark_time[peer_device->rs_last_mark];
+				initialize_resync_progress_marks(peer_device);
+				peer_device->resync_next_bit = 0;
+			}
+
+			if ((repl_state[OLD] == L_SYNC_TARGET  || repl_state[OLD] == L_SYNC_SOURCE) &&
+			    (repl_state[NEW] == L_PAUSED_SYNC_T || repl_state[NEW] == L_PAUSED_SYNC_S)) {
+				drbd_info(peer_device, "Resync suspended\n");
+				peer_device->rs_mark_time[peer_device->rs_last_mark] = jiffies;
+			}
+
+			if (repl_state[OLD] > L_ESTABLISHED && repl_state[NEW] <= L_ESTABLISHED)
+				clear_bit(RECONCILIATION_RESYNC, &peer_device->flags);
+
+			if (repl_state[OLD] >= L_ESTABLISHED && repl_state[NEW] < L_ESTABLISHED)
+				clear_bit(AHEAD_TO_SYNC_SOURCE, &peer_device->flags);
+
+			if (repl_state[OLD] == L_ESTABLISHED &&
+			    (repl_state[NEW] == L_VERIFY_S || repl_state[NEW] == L_VERIFY_T)) {
+				unsigned long now = jiffies;
+				int i;
+
+				/* ldev_safe: repl_state[NEW] */
+				set_ov_position(peer_device, repl_state[NEW]);
+				peer_device->rs_start = now;
+				peer_device->ov_last_oos_size = 0;
+				peer_device->ov_last_oos_start = 0;
+				peer_device->ov_last_skipped_size = 0;
+				peer_device->ov_last_skipped_start = 0;
+				peer_device->rs_last_writeout = now;
+				peer_device->rs_last_progress_report_ts = now;
+				for (i = 0; i < DRBD_SYNC_MARKS; i++) {
+					peer_device->rs_mark_left[i] = peer_device->rs_total;
+					peer_device->rs_mark_time[i] = now;
+				}
+
+				drbd_rs_controller_reset(peer_device);
+			} else if (!(repl_state[OLD] >= L_SYNC_SOURCE && repl_state[OLD] <= L_PAUSED_SYNC_T) &&
+				   (repl_state[NEW] >= L_SYNC_SOURCE && repl_state[NEW] <= L_PAUSED_SYNC_T)) {
+				initialize_resync(peer_device);
+			}
+
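+			/* Persist the per-peer metadata flags that reflect the
+			 * new connection, fencing and disk states. */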
+			if (disk_state[NEW] != D_NEGOTIATING && get_ldev(device)) {
+				if (peer_device->bitmap_index != -1) {
+					enum drbd_disk_state pdsk = peer_device->disk_state[NEW];
+					u32 mdf = device->ldev->md.peers[peer_device->node_id].flags;
+					/* Do NOT clear MDF_PEER_DEVICE_SEEN here.
+					 * We want to be able to refuse a resize beyond "last agreed" size,
+					 * even if the peer is currently detached.
+					 */
+					mdf &= ~(MDF_PEER_CONNECTED | MDF_PEER_OUTDATED | MDF_PEER_FENCING);
+					if (repl_state[NEW] > L_OFF)
+						mdf |= MDF_PEER_CONNECTED;
+					if (pdsk >= D_INCONSISTENT) {
+						if (pdsk <= D_OUTDATED)
+							mdf |= MDF_PEER_OUTDATED;
+						if (pdsk != D_UNKNOWN)
+							mdf |= MDF_PEER_DEVICE_SEEN;
+					}
+					if (pdsk == D_DISKLESS && !want_bitmap(peer_device))
+						mdf &= ~MDF_PEER_DEVICE_SEEN;
+					if (peer_device->connection->fencing_policy != FP_DONT_CARE)
+						mdf |= MDF_PEER_FENCING;
+					if (mdf != device->ldev->md.peers[peer_device->node_id].flags) {
+						device->ldev->md.peers[peer_device->node_id].flags = mdf;
+						drbd_md_mark_dirty(device);
+					}
+				}
+
+				/* Peer was forced D_UP_TO_DATE & R_PRIMARY, consider resyncing */
+				if (disk_state[OLD] == D_INCONSISTENT &&
+				    peer_disk_state[OLD] == D_INCONSISTENT && peer_disk_state[NEW] == D_UP_TO_DATE &&
+				    peer_role[OLD] == R_SECONDARY && peer_role[NEW] == R_PRIMARY)
+					set_bit(CONSIDER_RESYNC, &peer_device->flags);
+
+				/* Resume AL writing if we get a connection */
+				if (repl_state[OLD] < L_ESTABLISHED && repl_state[NEW] >= L_ESTABLISHED)
+					drbd_resume_al(device);
+				put_ldev(device);
+			}
+
+			if (repl_state[OLD] == L_AHEAD && repl_state[NEW] == L_SYNC_SOURCE) {
+				set_bit(SEND_STATE_AFTER_AHEAD, &peer_device->flags);
+				set_bit(SEND_STATE_AFTER_AHEAD_C, &connection->flags);
+
+				clear_bit(CONN_CONGESTED, &connection->flags);
+				wake_up(&connection->sender_work.q_wait);
+			}
+
+			/* We start writing locally without replicating the changes,
+			 * better start a new data generation */
+			if (repl_state[OLD] != L_AHEAD && repl_state[NEW] == L_AHEAD)
+				create_new_uuid = true;
+
+			if (lost_contact_to_peer_data(peer_disk_state)) {
+				if (role[NEW] == R_PRIMARY && !test_bit(UNREGISTERED, &device->flags) &&
+				    drbd_data_accessible(device, NEW))
+					create_new_uuid = true;
+
+				if (connection->agreed_pro_version < 110 &&
+				    peer_role[NEW] == R_PRIMARY &&
+				    disk_state[NEW] >= D_UP_TO_DATE)
+					create_new_uuid = true;
+			}
+			if (peer_returns_diskless(peer_device, peer_disk_state[OLD], peer_disk_state[NEW])) {
+				if (role[NEW] == R_PRIMARY && !test_bit(UNREGISTERED, &device->flags) &&
+				    disk_state[NEW] == D_UP_TO_DATE)
+					create_new_uuid = true;
+			}
+
+			if (disk_state[OLD] > D_FAILED && disk_state[NEW] == D_FAILED &&
+			    role[NEW] == R_PRIMARY && drbd_data_accessible(device, NEW))
+				create_new_uuid = true;
+
+			if (peer_disk_state[NEW] < D_UP_TO_DATE && test_bit(GOT_NEG_ACK, &peer_device->flags))
+				clear_bit(GOT_NEG_ACK, &peer_device->flags);
+
+			if (repl_state[OLD] > L_ESTABLISHED && repl_state[NEW] <= L_ESTABLISHED)
+				clear_bit(SYNC_SRC_CRASHED_PRI, &peer_device->flags);
+
+			if (peer_role[OLD] != peer_role[NEW] || role[OLD] != role[NEW] ||
+			    peer_disk_state[OLD] != peer_disk_state[NEW])
+				drbd_update_mdf_al_disabled(device, NEW);
+		}
+
+		if (disk_state[OLD] >= D_INCONSISTENT && disk_state[NEW] < D_INCONSISTENT &&
+		    role[NEW] == R_PRIMARY && drbd_data_accessible(device, NEW))
+			create_new_uuid = true;
+
+		if (role[OLD] == R_SECONDARY && role[NEW] == R_PRIMARY)
+			create_new_uuid = true;
+
+		/* Only a single new current uuid when susp_uuid becomes true */
+		if (create_new_uuid && !susp_uuid[OLD])
+			set_bit(__NEW_CUR_UUID, &device->flags);
+
+		if (disk_state[NEW] != D_NEGOTIATING && get_ldev_if_state(device, D_DETACHING)) {
+			u32 mdf = device->ldev->md.flags;
+			bool graceful_detach = disk_state[NEW] == D_DETACHING && !test_bit(FORCE_DETACH, &device->flags);
+
+			/* For now, always require a drbdmeta apply-al run,
+			 * even if that ends up only re-initializing the AL */
+			mdf &= ~MDF_AL_CLEAN;
+			/* reset some flags to what we know now */
+			mdf &= ~MDF_CRASHED_PRIMARY;
+			if (test_bit(CRASHED_PRIMARY, &device->flags) ||
+			    (role[NEW] == R_PRIMARY && !graceful_detach))
+				mdf |= MDF_CRASHED_PRIMARY;
+			mdf &= ~MDF_PRIMARY_LOST_QUORUM;
+			if (test_bit(PRIMARY_LOST_QUORUM, &device->flags))
+				mdf |= MDF_PRIMARY_LOST_QUORUM;
+			/* Do not touch MDF_CONSISTENT if we are D_FAILED */
+			if (disk_state[NEW] >= D_INCONSISTENT) {
+				mdf &= ~(MDF_CONSISTENT | MDF_WAS_UP_TO_DATE);
+
+				if (disk_state[NEW] > D_INCONSISTENT)
+					mdf |= MDF_CONSISTENT;
+				if (disk_state[NEW] > D_OUTDATED)
+					mdf |= MDF_WAS_UP_TO_DATE;
+			} else if ((disk_state[NEW] == D_FAILED || disk_state[NEW] == D_DETACHING) &&
+				   mdf & MDF_WAS_UP_TO_DATE &&
+				   primary_and_data_present(device)) {
+				/* There are cases when we still can update meta-data even if disk
+				   state is failed.... Clear MDF_WAS_UP_TO_DATE if appropriate */
+				mdf &= ~MDF_WAS_UP_TO_DATE;
+			}
+
+/*
+ * MDF_PRIMARY_IND  IS set: apply activity log after crash
+ * MDF_PRIMARY_IND NOT set: do not apply, forget and re-initialize activity log after crash.
+ * We want the MDF_PRIMARY_IND set *always* before our backend could possibly
+ * be target of write requests, whether we are Secondary or Primary ourselves.
+ *
+ * We want to avoid clearing that flag just because we lost the connection to a
+ * detached Primary, but before all in-flight IO was drained, because we may
+ * have some dirty bits not yet persisted.
+ *
+ * We want it cleared only once we are *certain* that we no longer see any Primary,
+ * are not Primary ourselves, AND all previously received WRITE (peer-) requests
+ * have been processed, NOTHING is in flight against our backend anymore,
+ * AND we have successfully written out any dirty bitmap pages.
+ *
+ *
+ * MDF_PEER_DEVICE_SEEN ... The peer had a backing device at some point
+ * MDF_NODE_EXISTS ... We have seen evidence that this node exists in the cluster.
+ *   Note: This bit does **not** get set when a new peer/connection is created with
+ *   `drbdsetup new-peer ...`.  The bit gets set when we establish a connection
+ *   successfully for the first time, or when we learn about its existence via
+ *   other nodes.
+ */
+
+			/* set, if someone is/becomes primary */
+			if (role[NEW] == R_PRIMARY || some_peer_is_primary)
+				mdf |= MDF_PRIMARY_IND;
+			/* clear, if */
+			else if (/* NO peer requests in flight, AND */
+				 !some_peer_request_in_flight &&
+				 (graceful_detach ||
+				  /* or everyone secondary ... */
+				  (role[NEW] == R_SECONDARY && !some_peer_is_primary &&
+				   /* ... and not detaching because of IO error. */
+				   disk_state[NEW] >= D_INCONSISTENT)))
+				mdf &= ~MDF_PRIMARY_IND;
+
+			if (device->have_quorum[NEW])
+				mdf |= MDF_HAVE_QUORUM;
+			else
+				mdf &= ~MDF_HAVE_QUORUM;
+			/* apply changed flags to md.flags,
+			 * and "schedule" for write-out */
+			if (mdf != device->ldev->md.flags ||
+			    device->ldev->md.members != resource->members) {
+				device->ldev->md.flags = mdf;
+				device->ldev->md.members = resource->members;
+				drbd_md_mark_dirty(device);
+			}
+			if (disk_state[OLD] < D_CONSISTENT && disk_state[NEW] >= D_CONSISTENT)
+				drbd_uuid_set_exposed(device, device->ldev->md.current_uuid, true);
+			put_ldev(device);
+		}
+
+		/* remember last attach time so request_timer_fn() won't
+		 * kill newly established sessions while we are still trying to thaw
+		 * previously frozen IO */
+		if ((disk_state[OLD] == D_ATTACHING || disk_state[OLD] == D_NEGOTIATING) &&
+		    disk_state[NEW] > D_NEGOTIATING)
+			device->last_reattach_jif = jiffies;
+
+		if (!device->have_quorum[OLD] && device->have_quorum[NEW])
+			clear_bit(PRIMARY_LOST_QUORUM, &device->flags);
+
+		if (resource_suspended[NEW] &&
+		    !(role[OLD] == R_PRIMARY && !drbd_data_accessible(device, OLD)) &&
+		     (role[NEW] == R_PRIMARY && !drbd_data_accessible(device, NEW)) &&
+		    resource->res_opts.on_no_data == OND_IO_ERROR)
+			unfreeze_io = true;
+
+		if (!resource->fail_io[OLD] && resource->fail_io[NEW])
+			unfreeze_io = true;
+
+		if (role[OLD] == R_PRIMARY && role[NEW] == R_SECONDARY)
+			clear_bit(NEW_CUR_UUID, &device->flags);
+
+		if (should_try_become_up_to_date(device, disk_state, NEW))
+			set_bit(TRY_BECOME_UP_TO_DATE_PENDING, &resource->flags);
+	}
+
+	for_each_connection(connection, resource) {
+		enum drbd_conn_state *cstate = connection->cstate;
+		enum drbd_role *peer_role = connection->peer_role;
+
+		/*
+		 * If we lose connection to a Primary node then we need to
+		 * inform our peers so that we can potentially do a
+		 * reconciliation resync. The function conn_disconnect()
+		 * informs the peers. So we must set the flag before stopping
+		 * the receiver.
+		 */
+		if (cstate[OLD] == C_CONNECTED && cstate[NEW] < C_CONNECTED &&
+				peer_role[OLD] == R_PRIMARY)
+			set_bit(NOTIFY_PEERS_LOST_PRIMARY, &connection->flags);
+
+		/* The receiver should clean up after itself */
+		if (cstate[OLD] != C_DISCONNECTING && cstate[NEW] == C_DISCONNECTING)
+			drbd_thread_stop_nowait(&connection->receiver);
+
+		/* Once the receiver has finished cleaning up after itself, it should die */
+		if (cstate[OLD] != C_STANDALONE && cstate[NEW] == C_STANDALONE)
+			drbd_thread_stop_nowait(&connection->receiver);
+
+		/* Upon network failure, we need to restart the receiver. */
+		if (cstate[OLD] >= C_CONNECTING &&
+		    cstate[NEW] <= C_TEAR_DOWN && cstate[NEW] >= C_TIMEOUT)
+			drbd_thread_restart_nowait(&connection->receiver);
+
+		if (cstate[OLD] == C_CONNECTED && cstate[NEW] < C_CONNECTED)
+			twopc_connection_down(connection);
+
+		/* remember last connect time so request_timer_fn() won't
+		 * kill newly established sessions while we are still trying to thaw
+		 * previously frozen IO */
+		if (cstate[OLD] < C_CONNECTED && cstate[NEW] == C_CONNECTED)
+			connection->last_reconnect_jif = jiffies;
+
+		if (resource_suspended[OLD]) {
+			enum drbd_req_event walk_event = -1;
+
+			/* If we resume IO without this connection, then we
+			 * need to cancel suspended requests. */
+			if ((!resource_suspended[NEW] || unfreeze_io) && cstate[NEW] < C_CONNECTED)
+				walk_event = CANCEL_SUSPENDED_IO;
+			/* On reconnection when we have been suspended we need
+			 * to process suspended requests. If there are resyncs,
+			 * that means that it was not a simple disconnect and
+			 * reconnect, so we cannot resend. We must cancel
+			 * instead. */
+			else if (cstate[OLD] < C_CONNECTED && cstate[NEW] == C_CONNECTED)
+				walk_event = has_starting_resyncs(connection) ? CANCEL_SUSPENDED_IO : RESEND;
+
+			if (walk_event != -1)
+				__tl_walk(resource, connection, &connection->req_not_net_done, walk_event);
+
+			/* Since we are in finish_state_change(), and the state
+			 * was previously not C_CONNECTED, the sender cannot
+			 * have received any requests yet. So it will find any
+			 * requests to resend when it rescans the transfer log. */
+			if (walk_event == RESEND)
+				wake_up(&connection->sender_work.q_wait);
+		}
+
+		if (cstate[OLD] == C_CONNECTED && cstate[NEW] < C_CONNECTED)
+			set_bit(RECONNECT, &connection->flags);
+
+		if (starting_resync && peer_role[NEW] == R_PRIMARY)
+			apply_unacked_peer_requests(connection);
+
+		if (peer_role[OLD] == R_PRIMARY && peer_role[NEW] == R_UNKNOWN)
+			lost_a_primary_peer = true;
+	}
+
+	if (lost_a_primary_peer) {
+		idr_for_each_entry(&resource->devices, device, vnr) {
+			struct drbd_peer_device *peer_device;
+
+			for_each_peer_device(peer_device, device) {
+				enum drbd_repl_state repl_state = peer_device->repl_state[NEW];
+
+				if (!test_bit(UNSTABLE_RESYNC, &peer_device->flags) &&
+				    (repl_state == L_SYNC_TARGET || repl_state == L_PAUSED_SYNC_T) &&
+				    !(peer_device->uuid_flags & UUID_FLAG_STABLE) &&
+				    !drbd_stable_sync_source_present(peer_device, NEW))
+					set_bit(UNSTABLE_RESYNC, &peer_device->flags);
+			}
+		}
+	}
+
+	if (resource_suspended[OLD] && !resource_suspended[NEW])
+		drbd_restart_suspended_reqs(resource);
+
+	if ((resource_suspended[OLD] && !resource_suspended[NEW]) || unfreeze_io)
+		__tl_walk(resource, NULL, NULL, COMPLETION_RESUMED);
+}
+
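+/*
+ * "After bitmap write" callback used by the L_STARTING_SYNC_* transitions
+ * below: once the full bitmap has been written out, move into the actual
+ * resync.  If the writeout failed (rv != 0), fall back to L_ESTABLISHED
+ * instead of starting a resync on top of a possibly incomplete on-disk
+ * bitmap.
+ */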
+static void abw_start_sync(struct drbd_device *device,
+			   struct drbd_peer_device *peer_device, int rv)
+{
+	struct drbd_peer_device *pd;
+
+	if (rv) {
+		drbd_err(device, "Writing the bitmap failed, not starting resync.\n");
+		stable_change_repl_state(peer_device, L_ESTABLISHED, CS_VERBOSE, "start-sync");
+		return;
+	}
+
+	switch (peer_device->repl_state[NOW]) {
+	case L_STARTING_SYNC_T:
+		/* Since the number of set bits changed and the other peer_devices are
+		   already in L_PAUSED_SYNC_T state, we need to set rs_total here */
+		rcu_read_lock();
+		for_each_peer_device_rcu(pd, device)
+			initialize_resync(pd);
+		rcu_read_unlock();
+
+		if (peer_device->connection->agreed_pro_version < 110)
+			stable_change_repl_state(peer_device, L_WF_SYNC_UUID, CS_VERBOSE,
+					"start-sync");
+		else
+			drbd_start_resync(peer_device, L_SYNC_TARGET, "start-sync");
+		break;
+	case L_STARTING_SYNC_S:
+		drbd_start_resync(peer_device, L_SYNC_SOURCE, "start-sync");
+		break;
+	default:
+		break;
+	}
+}
+
+int drbd_bitmap_io_from_worker(struct drbd_device *device,
+		int (*io_fn)(struct drbd_device *, struct drbd_peer_device *),
+		char *why, enum bm_flag flags,
+		struct drbd_peer_device *peer_device)
+{
+	int rv;
+
+	D_ASSERT(device, current == device->resource->worker.task);
+
+	if (!device->bitmap)
+		return 0;
+
+	/* open coded non-blocking drbd_suspend_io(device); */
+	atomic_inc(&device->suspend_cnt);
+
+	if (flags & BM_LOCK_SINGLE_SLOT)
+		drbd_bm_slot_lock(peer_device, why, flags);
+	else
+		drbd_bm_lock(device, why, flags);
+	rv = io_fn(device, peer_device);
+	if (flags & BM_LOCK_SINGLE_SLOT)
+		drbd_bm_slot_unlock(peer_device);
+	else
+		drbd_bm_unlock(device);
+
+	drbd_resume_io(device);
+
+	return rv;
+}
+
+static bool state_change_is_susp_fen(struct drbd_state_change *state_change,
+					    enum which_state which)
+{
+	int n_connection;
+
+	for (n_connection = 0; n_connection < state_change->n_connections; n_connection++) {
+		struct drbd_connection_state_change *connection_state_change =
+				&state_change->connections[n_connection];
+
+		if (connection_state_change->susp_fen[which])
+			return true;
+	}
+
+	return false;
+}
+
+static bool state_change_is_susp_quorum(struct drbd_state_change *state_change,
+					       enum which_state which)
+{
+	struct drbd_resource *resource = state_change->resource[0].resource;
+	int n_device;
+
+	if (resource->res_opts.on_no_quorum != ONQ_SUSPEND_IO)
+		return false;
+
+	for (n_device = 0; n_device < state_change->n_devices; n_device++) {
+		struct drbd_device_state_change *device_state_change =
+				&state_change->devices[n_device];
+
+		if (!device_state_change->have_quorum[which])
+			return true;
+	}
+
+	return false;
+}
+
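+/*
+ * Combined "resync suspended because of dependencies" bit, exposed as the
+ * aftr_isp field of the packed state word: suspended because of a
+ * resync-after dependency, because of another connection, or because we
+ * are a sync source whose own disk is no better than D_INCONSISTENT.
+ */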
+static bool resync_susp_comb_dep_sc(struct drbd_state_change *state_change,
+				    unsigned int n_device, int n_connection,
+				    enum which_state which)
+{
+	struct drbd_peer_device_state_change *peer_device_state_change =
+		&state_change->peer_devices[n_device * state_change->n_connections + n_connection];
+	struct drbd_device_state_change *device_state_change = &state_change->devices[n_device];
+	bool resync_susp_dependency = peer_device_state_change->resync_susp_dependency[which];
+	bool resync_susp_other_c = peer_device_state_change->resync_susp_other_c[which];
+	enum drbd_repl_state repl_state = peer_device_state_change->repl_state[which];
+	enum drbd_disk_state disk_state = device_state_change->disk_state[which];
+
+	return resync_susp_dependency || resync_susp_other_c ||
+		((repl_state == L_SYNC_SOURCE || repl_state == L_PAUSED_SYNC_S)
+		 && disk_state <= D_INCONSISTENT);
+}
+
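+/*
+ * Fold the multi-peer state change into a single packed state word
+ * (union drbd_state) for one (device, connection) pair, as sent with
+ * drbd_send_state().  With n_connection == -1 only the local fields are
+ * filled in; peer, conn, pdsk and the *_isp bits keep their defaults.
+ */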
+static union drbd_state state_change_word(struct drbd_state_change *state_change,
+					  unsigned int n_device, int n_connection,
+					  enum which_state which)
+{
+	struct drbd_resource_state_change *resource_state_change =
+		&state_change->resource[0];
+	struct drbd_device_state_change *device_state_change =
+		&state_change->devices[n_device];
+	union drbd_state state = { {
+		.role = R_UNKNOWN,
+		.peer = R_UNKNOWN,
+		.conn = C_STANDALONE,
+		.disk = D_UNKNOWN,
+		.pdsk = D_UNKNOWN,
+	} };
+
+	state.role = resource_state_change->role[which];
+	state.susp = resource_state_change->susp[which] || state_change_is_susp_quorum(state_change, which) ||
+		resource_state_change->susp_uuid[which];
+	state.susp_nod = resource_state_change->susp_nod[which];
+	state.susp_fen = state_change_is_susp_fen(state_change, which);
+	state.quorum = device_state_change->have_quorum[which];
+	state.disk = device_state_change->disk_state[which];
+	if (n_connection != -1) {
+		struct drbd_connection_state_change *connection_state_change =
+			&state_change->connections[n_connection];
+		struct drbd_peer_device_state_change *peer_device_state_change =
+			&state_change->peer_devices[n_device * state_change->n_connections + n_connection];
+
+		state.peer = connection_state_change->peer_role[which];
+		state.conn = peer_device_state_change->repl_state[which];
+		if (state.conn <= L_OFF)
+			state.conn = connection_state_change->cstate[which];
+		state.pdsk = peer_device_state_change->disk_state[which];
+		state.aftr_isp = resync_susp_comb_dep_sc(state_change, n_device, n_connection, which);
+		state.peer_isp = peer_device_state_change->resync_susp_peer[which];
+		state.user_isp = peer_device_state_change->resync_susp_user[which];
+	}
+	return state;
+}
+
+int notify_resource_state_change(struct sk_buff *skb,
+				  unsigned int seq,
+				  void *state_change,
+				  enum drbd_notification_type type)
+{
+	struct drbd_resource_state_change *resource_state_change =
+		((struct drbd_state_change *)state_change)->resource;
+	struct drbd_resource *resource = resource_state_change->resource;
+	struct resource_info resource_info = {
+		.res_role = resource_state_change->role[NEW],
+		.res_susp = resource_state_change->susp[NEW],
+		.res_susp_nod = resource_state_change->susp_nod[NEW],
+		.res_susp_fen = state_change_is_susp_fen(state_change, NEW),
+		.res_susp_quorum = state_change_is_susp_quorum(state_change, NEW) ||
+			resource_state_change->susp_uuid[NEW],
+		.res_fail_io = resource_state_change->fail_io[NEW],
+	};
+
+	return notify_resource_state(skb, seq, resource, &resource_info, NULL, type);
+}
+
+int notify_connection_state_change(struct sk_buff *skb,
+				    unsigned int seq,
+				    void *state_change,
+				    enum drbd_notification_type type)
+{
+	struct drbd_connection_state_change *connection_state_change = state_change;
+	struct drbd_connection *connection = connection_state_change->connection;
+	struct connection_info connection_info = {
+		.conn_connection_state = connection_state_change->cstate[NEW],
+		.conn_role = connection_state_change->peer_role[NEW],
+	};
+
+	return notify_connection_state(skb, seq, connection, &connection_info, type);
+}
+
+int notify_device_state_change(struct sk_buff *skb,
+				unsigned int seq,
+				void *state_change,
+				enum drbd_notification_type type)
+{
+	struct drbd_device_state_change *device_state_change = state_change;
+	struct drbd_device *device = device_state_change->device;
+	struct device_info device_info;
+	device_state_change_to_info(&device_info, device_state_change);
+
+	return notify_device_state(skb, seq, device, &device_info, type);
+}
+
+int notify_peer_device_state_change(struct sk_buff *skb,
+				     unsigned int seq,
+				     void *state_change,
+				     enum drbd_notification_type type)
+{
+	struct drbd_peer_device_state_change *peer_device_state_change = state_change;
+	struct drbd_peer_device *peer_device = peer_device_state_change->peer_device;
+	struct peer_device_info peer_device_info;
+	peer_device_state_change_to_info(&peer_device_info, state_change);
+
+	return notify_peer_device_state(skb, seq, peer_device, &peer_device_info, type);
+}
+
+static void notify_state_change(struct drbd_state_change *state_change)
+{
+	struct drbd_resource_state_change *resource_state_change = &state_change->resource[0];
+	bool resource_state_has_changed;
+	unsigned int n_device, n_connection, n_peer_device, n_peer_devices;
+	int (*last_func)(struct sk_buff *, unsigned int, void *,
+			  enum drbd_notification_type) = NULL;
+	void *last_arg = NULL;
+
+#define HAS_CHANGED(state) ((state)[OLD] != (state)[NEW])
+#define FINAL_STATE_CHANGE(type) \
+	({ if (last_func) \
+		last_func(NULL, 0, last_arg, type); \
+	})
+#define REMEMBER_STATE_CHANGE(func, arg, type) \
+	({ FINAL_STATE_CHANGE(type | NOTIFY_CONTINUES); \
+	   last_func = (typeof(last_func))func; \
+	   last_arg = arg; \
+	 })
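+/*
+ * REMEMBER_STATE_CHANGE() first flushes the previously remembered
+ * notification with NOTIFY_CONTINUES set, so that all notifications up to
+ * the final one are marked as belonging to the same compound state
+ * change; FINAL_STATE_CHANGE() then emits the last one without that flag.
+ */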
+
+	mutex_lock(&notification_mutex);
+
+	resource_state_has_changed =
+		HAS_CHANGED(resource_state_change->role) ||
+		HAS_CHANGED(resource_state_change->susp) ||
+		HAS_CHANGED(resource_state_change->susp_nod) ||
+		HAS_CHANGED(resource_state_change->susp_uuid) ||
+		state_change_is_susp_fen(state_change, OLD) !=
+		state_change_is_susp_fen(state_change, NEW) ||
+		state_change_is_susp_quorum(state_change, OLD) !=
+		state_change_is_susp_quorum(state_change, NEW) ||
+		HAS_CHANGED(resource_state_change->fail_io);
+
+	if (resource_state_has_changed)
+		REMEMBER_STATE_CHANGE(notify_resource_state_change,
+				      state_change, NOTIFY_CHANGE);
+
+	for (n_connection = 0; n_connection < state_change->n_connections; n_connection++) {
+		struct drbd_connection_state_change *connection_state_change =
+				&state_change->connections[n_connection];
+
+		if (HAS_CHANGED(connection_state_change->peer_role) ||
+		    HAS_CHANGED(connection_state_change->cstate))
+			REMEMBER_STATE_CHANGE(notify_connection_state_change,
+					      connection_state_change, NOTIFY_CHANGE);
+	}
+
+	for (n_device = 0; n_device < state_change->n_devices; n_device++) {
+		struct drbd_device_state_change *device_state_change =
+			&state_change->devices[n_device];
+
+		if (HAS_CHANGED(device_state_change->disk_state) ||
+		    HAS_CHANGED(device_state_change->have_quorum))
+			REMEMBER_STATE_CHANGE(notify_device_state_change,
+					      device_state_change, NOTIFY_CHANGE);
+	}
+
+	n_peer_devices = state_change->n_devices * state_change->n_connections;
+	for (n_peer_device = 0; n_peer_device < n_peer_devices; n_peer_device++) {
+		struct drbd_peer_device_state_change *p =
+			&state_change->peer_devices[n_peer_device];
+
+		if (HAS_CHANGED(p->disk_state) ||
+		    HAS_CHANGED(p->repl_state) ||
+		    HAS_CHANGED(p->resync_susp_user) ||
+		    HAS_CHANGED(p->resync_susp_peer) ||
+		    HAS_CHANGED(p->resync_susp_dependency) ||
+		    HAS_CHANGED(p->resync_susp_other_c))
+			REMEMBER_STATE_CHANGE(notify_peer_device_state_change,
+					      p, NOTIFY_CHANGE);
+	}
+
+	FINAL_STATE_CHANGE(NOTIFY_CHANGE);
+	mutex_unlock(&notification_mutex);
+
+#undef HAS_CHANGED
+#undef FINAL_STATE_CHANGE
+#undef REMEMBER_STATE_CHANGE
+}
+
+static void send_role_to_all_peers(struct drbd_state_change *state_change)
+{
+	unsigned int n_connection;
+
+	for (n_connection = 0; n_connection < state_change->n_connections; n_connection++) {
+		struct drbd_connection_state_change *connection_state_change =
+			&state_change->connections[n_connection];
+		struct drbd_connection *connection = connection_state_change->connection;
+		enum drbd_conn_state new_cstate = connection_state_change->cstate[NEW];
+
+		if (new_cstate < C_CONNECTED)
+			continue;
+
+		if (connection->agreed_pro_version < 110) {
+			unsigned int n_device;
+
+			/* Before DRBD 9, the role is a device attribute
+			 * instead of a resource attribute. */
+			for (n_device = 0; n_device < state_change->n_devices; n_device++) {
+				struct drbd_peer_device *peer_device =
+					state_change->peer_devices[
+						n_device * state_change->n_connections + n_connection].peer_device;
+				union drbd_state state =
+					state_change_word(state_change, n_device, n_connection, NEW);
+
+				drbd_send_state(peer_device, state);
+			}
+		} else {
+			union drbd_state state = { {
+				.role = state_change->resource[0].role[NEW],
+			} };
+
+			conn_send_state(connection, state);
+		}
+	}
+}
+
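+/* Send the new state of one device to all of its connected peers, as a
+ * packed state word built by state_change_word(). */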
+static void send_new_state_to_all_peer_devices(struct drbd_state_change *state_change, int n_device)
+{
+	unsigned int n_connection;
+
+	BUG_ON(state_change->n_devices <= n_device);
+	for (n_connection = 0; n_connection < state_change->n_connections; n_connection++) {
+		struct drbd_peer_device_state_change *peer_device_state_change =
+			&state_change->peer_devices[n_device * state_change->n_connections + n_connection];
+		struct drbd_peer_device *peer_device = peer_device_state_change->peer_device;
+		union drbd_state new_state = state_change_word(state_change, n_device, n_connection, NEW);
+
+		if (new_state.conn >= C_CONNECTED)
+			drbd_send_state(peer_device, new_state);
+	}
+}
+
+/* This function is supposed to have the same semantics as drbd_device_stable() in drbd_main.c.
+   A primary is stable since it is authoritative.
+   Unstable are neighbors of a primary and resync target nodes.
+   Nodes further away from a primary are stable! Do not confuse with "weak". */
+static bool calc_device_stable(struct drbd_state_change *state_change, int n_device, enum which_state which)
+{
+	int n_connection;
+
+	if (state_change->resource->role[which] == R_PRIMARY)
+		return true;
+
+	for (n_connection = 0; n_connection < state_change->n_connections; n_connection++) {
+		struct drbd_connection_state_change *connection_state_change =
+			&state_change->connections[n_connection];
+		enum drbd_role *peer_role = connection_state_change->peer_role;
+
+		if (peer_role[which] == R_PRIMARY)
+			return false;
+	}
+
+	return true;
+}
+
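+/* True if the device is, or is about to become, a resync target towards
+ * any of its peers in the given state. */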
+static bool calc_resync_target(struct drbd_state_change *state_change, int n_device, enum which_state which)
+{
+	int n_connection;
+
+	for (n_connection = 0; n_connection < state_change->n_connections; n_connection++) {
+		struct drbd_peer_device_state_change *peer_device_state_change =
+			&state_change->peer_devices[n_device * state_change->n_connections + n_connection];
+		enum drbd_repl_state *repl_state = peer_device_state_change->repl_state;
+
+		switch (repl_state[which]) {
+		case L_WF_BITMAP_T:
+		case L_SYNC_TARGET:
+		case L_PAUSED_SYNC_T:
+			return true;
+		default:
+			continue;
+		}
+	}
+
+	return false;
+}
+
+/* takes old and new peer disk state */
+static bool lost_contact_to_peer_data(enum drbd_disk_state *peer_disk_state)
+{
+	enum drbd_disk_state os = peer_disk_state[OLD];
+	enum drbd_disk_state ns = peer_disk_state[NEW];
+
+	return (os >= D_INCONSISTENT && os != D_UNKNOWN && os != D_OUTDATED)
+		&& (ns < D_INCONSISTENT || ns == D_UNKNOWN || ns == D_OUTDATED);
+}
+
+static bool peer_returns_diskless(struct drbd_peer_device *peer_device,
+				  enum drbd_disk_state os, enum drbd_disk_state ns)
+{
+	struct drbd_device *device = peer_device->device;
+	bool rv = false;
+
+	/* Scenario, starting with normal operation
+	 * Connected Primary/Secondary UpToDate/UpToDate
+	 * NetworkFailure Primary/Unknown UpToDate/DUnknown (frozen)
+	 * ...
+	 * Connected Primary/Secondary UpToDate/Diskless (resumed; needs to bump uuid!)
+	 */
+
+	if (get_ldev(device)) {
+		if (os == D_UNKNOWN && (ns == D_DISKLESS || ns == D_FAILED || ns == D_OUTDATED) &&
+		    drbd_bitmap_uuid(peer_device) == 0)
+			rv = true;
+		put_ldev(device);
+	}
+	return rv;
+}
+
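+/*
+ * IO was suspended because of fencing (susp_fen).  Resume it once either
+ * all peer disks are at most D_OUTDATED (the fence-peer handler did its
+ * job), or all volumes of the connection are established again.
+ */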
+static void check_may_resume_io_after_fencing(struct drbd_state_change *state_change, int n_connection)
+{
+	struct drbd_connection_state_change *connection_state_change = &state_change->connections[n_connection];
+	struct drbd_resource_state_change *resource_state_change = &state_change->resource[0];
+	struct drbd_connection *connection = connection_state_change->connection;
+	struct drbd_resource *resource = resource_state_change->resource;
+	bool all_peer_disks_outdated = true;
+	bool all_peer_disks_connected = true;
+	struct drbd_peer_device *peer_device;
+	unsigned long irq_flags;
+	int vnr, n_device;
+
+	for (n_device = 0; n_device < state_change->n_devices; n_device++) {
+		struct drbd_peer_device_state_change *peer_device_state_change =
+			&state_change->peer_devices[n_device * state_change->n_connections + n_connection];
+		enum drbd_repl_state *repl_state = peer_device_state_change->repl_state;
+		enum drbd_disk_state *peer_disk_state = peer_device_state_change->disk_state;
+
+		if (peer_disk_state[NEW] > D_OUTDATED)
+			all_peer_disks_outdated = false;
+		if (repl_state[NEW] < L_ESTABLISHED)
+			all_peer_disks_connected = false;
+	}
+
+	/* case1: The outdate peer handler is successful: */
+	if (all_peer_disks_outdated) {
+		rcu_read_lock();
+		idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+			struct drbd_device *device = peer_device->device;
+			if (test_and_clear_bit(NEW_CUR_UUID, &device->flags)) {
+				kref_get(&device->kref);
+				rcu_read_unlock();
+				drbd_uuid_new_current(device, false);
+				kref_put(&device->kref, drbd_destroy_device);
+				rcu_read_lock();
+			}
+		}
+		rcu_read_unlock();
+		begin_state_change(resource, &irq_flags, CS_VERBOSE);
+		__change_io_susp_fencing(connection, false);
+		end_state_change(resource, &irq_flags, "after-fencing");
+	}
+	/* case2: The connection was established again: */
+	if (all_peer_disks_connected) {
+		rcu_read_lock();
+		idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+			struct drbd_device *device = peer_device->device;
+			clear_bit(NEW_CUR_UUID, &device->flags);
+		}
+		rcu_read_unlock();
+		begin_state_change(resource, &irq_flags, CS_VERBOSE);
+		__change_io_susp_fencing(connection, false);
+		end_state_change(resource, &irq_flags, "after-fencing");
+	}
+}
+
+static bool drbd_should_unfence(struct drbd_state_change *state_change, int n_connection)
+{
+	bool some_peer_was_not_up_to_date = false;
+	int n_device;
+
+	for (n_device = 0; n_device < state_change->n_devices; n_device++) {
+		struct drbd_device_state_change *device_state_change =
+			&state_change->devices[n_device];
+		enum drbd_disk_state *disk_state = device_state_change->disk_state;
+		struct drbd_peer_device_state_change *peer_device_state_change =
+			&state_change->peer_devices[
+				n_device * state_change->n_connections + n_connection];
+		enum drbd_disk_state *peer_disk_state = peer_device_state_change->disk_state;
+
+		/* Do not unfence if some volume is not yet up-to-date. */
+		if (disk_state[NEW] != D_UP_TO_DATE || peer_disk_state[NEW] != D_UP_TO_DATE)
+			return false;
+
+		/* Only unfence when the final volume becomes up-to-date. */
+		if (peer_disk_state[OLD] != D_UP_TO_DATE)
+			some_peer_was_not_up_to_date = true;
+	}
+
+	return some_peer_was_not_up_to_date;
+}
+
+static bool use_checksum_based_resync(struct drbd_connection *connection, struct drbd_device *device)
+{
+	bool csums_after_crash_only;
+	rcu_read_lock();
+	csums_after_crash_only = rcu_dereference(connection->transport.net_conf)->csums_after_crash_only;
+	rcu_read_unlock();
+	return connection->agreed_pro_version >= 89 &&		/* supported? */
+		connection->csums_tfm &&			/* configured? */
+		(csums_after_crash_only == false		/* use for each resync? */
+		 || test_bit(CRASHED_PRIMARY, &device->flags));	/* or only after Primary crash? */
+}
+
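+/*
+ * Actually begin a resync in either direction: log how much is out of
+ * sync, decide about checksum based resync, expose the sync source's UUID
+ * on a sync target, and handle the sync-UUID quirks of old protocol
+ * versions.
+ */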
+static void drbd_run_resync(struct drbd_peer_device *peer_device, enum drbd_repl_state repl_state)
+{
+	struct drbd_device *device = peer_device->device;
+	struct drbd_bitmap *bm = device->bitmap;
+	struct drbd_connection *connection = peer_device->connection;
+	enum drbd_repl_state side = repl_is_sync_target(repl_state) ? L_SYNC_TARGET : L_SYNC_SOURCE;
+
+	drbd_info(peer_device, "Began resync as %s (will sync %llu KB [%lu bits set]).\n",
+			drbd_repl_str(repl_state),
+			bm_bit_to_kb(bm, peer_device->rs_total),
+			(unsigned long) peer_device->rs_total);
+
+	if (side == L_SYNC_TARGET)
+		drbd_uuid_set_exposed(device, peer_device->current_uuid, false);
+
+	peer_device->use_csums = side == L_SYNC_TARGET ?
+		use_checksum_based_resync(connection, device) : false;
+
+	if (side == L_SYNC_TARGET &&
+			!(peer_device->uuid_flags & UUID_FLAG_STABLE) &&
+			!drbd_stable_sync_source_present(peer_device, NOW))
+		set_bit(UNSTABLE_RESYNC, &peer_device->flags);
+
+	/* Since protocol 96, we must serialize drbd_gen_and_send_sync_uuid
+	 * with w_send_oos, or the sync target will get confused as to
+	 * how many bits to resync.  We cannot do that always, because for an
+	 * empty resync and protocol < 95, we need to do it here, as we call
+	 * drbd_resync_finished from here in that case.
+	 * We drbd_gen_and_send_sync_uuid here for protocol < 96,
+	 * and from after_state_ch otherwise. */
+	if (side == L_SYNC_SOURCE && connection->agreed_pro_version < 96)
+		drbd_gen_and_send_sync_uuid(peer_device);
+
+	if (connection->agreed_pro_version < 95 && peer_device->rs_total == 0) {
+		/* This still has a race (about when exactly the peers
+		 * detect connection loss) that can lead to a full sync
+		 * on next handshake. In 8.3.9 we fixed this with explicit
+		 * resync-finished notifications, but the fix
+		 * introduces a protocol change.  Sleeping for some
+		 * time longer than the ping interval + timeout on the
+		 * SyncSource, to give the SyncTarget the chance to
+		 * detect connection loss, then waiting for a ping
+		 * response (implicit in drbd_resync_finished) reduces
+		 * the race considerably, but does not solve it. */
+		if (side == L_SYNC_SOURCE) {
+			struct net_conf *nc;
+			int timeo;
+
+			rcu_read_lock();
+			nc = rcu_dereference(connection->transport.net_conf);
+			timeo = nc->ping_int * HZ + nc->ping_timeo * HZ / 9;
+			rcu_read_unlock();
+			schedule_timeout_interruptible(timeo);
+		}
+		drbd_resync_finished(peer_device, D_MASK);
+	}
+
+	/* ns.conn may already be != peer_device->repl_state[NOW],
+	 * we may have been paused in between, or become paused until
+	 * the timer triggers.
+	 * No matter, that is handled in resync_timer_fn() */
+	if (repl_state == L_SYNC_TARGET || repl_state == L_PAUSED_SYNC_T)
+		drbd_uuid_resync_starting(peer_device);
+
+	drbd_md_sync_if_dirty(device);
+}
+
+
+/*
+ * Perform after state change actions that may sleep.
+ */
+static int w_after_state_change(struct drbd_work *w, int unused)
+{
+	struct after_state_change_work *work =
+		container_of(w, struct after_state_change_work, w);
+	struct drbd_state_change *state_change = work->state_change;
+	struct drbd_resource_state_change *resource_state_change = &state_change->resource[0];
+	struct drbd_resource *resource = resource_state_change->resource;
+	enum drbd_role *role = resource_state_change->role;
+	bool *susp_uuid = resource_state_change->susp_uuid;
+	struct drbd_peer_device *send_state_others = NULL;
+	int n_device, n_connection;
+	bool still_connected = false;
+	bool try_become_up_to_date = false;
+	bool healed_primary = false;
+	bool send_flush_requests = false;
+
+	notify_state_change(state_change);
+
+	for (n_device = 0; n_device < state_change->n_devices; n_device++) {
+		struct drbd_device_state_change *device_state_change = &state_change->devices[n_device];
+		struct drbd_device *device = device_state_change->device;
+		enum drbd_disk_state *disk_state = device_state_change->disk_state;
+		bool have_ldev = extra_ldev_ref_for_after_state_chg(disk_state);
+		bool *have_quorum = device_state_change->have_quorum;
+		bool effective_disk_size_determined = false;
+		bool device_stable[2], resync_target[2];
+		bool data_accessible[2];
+		bool all_peer_replication[2];
+		bool resync_finished = false;
+		bool some_peer_demoted = false;
+		bool new_current_uuid = false;
+		enum which_state which;
+
+		for (which = OLD; which <= NEW; which++) {
+			device_stable[which] = calc_device_stable(state_change, n_device, which);
+			resync_target[which] = calc_resync_target(state_change, n_device, which);
+			data_accessible[which] =
+				calc_data_accessible(state_change, n_device, which);
+			all_peer_replication[which] =
+				drbd_all_peer_replication_change(state_change, n_device, which);
+
+		}
+
+		if (disk_state[NEW] == D_UP_TO_DATE)
+			effective_disk_size_determined = true;
+
+		for (n_connection = 0; n_connection < state_change->n_connections; n_connection++) {
+			struct drbd_peer_device_state_change *peer_device_state_change =
+				&state_change->peer_devices[
+					n_device * state_change->n_connections + n_connection];
+			struct drbd_peer_device *peer_device = peer_device_state_change->peer_device;
+			enum drbd_disk_state *peer_disk_state = peer_device_state_change->disk_state;
+			enum drbd_repl_state *repl_state = peer_device_state_change->repl_state;
+
+			if ((repl_state[OLD] == L_SYNC_TARGET || repl_state[OLD] == L_PAUSED_SYNC_T) &&
+			    repl_state[NEW] == L_ESTABLISHED)
+				resync_finished = true;
+
+			if (disk_state[OLD] == D_INCONSISTENT && disk_state[NEW] == D_UP_TO_DATE &&
+			    peer_disk_state[OLD] == D_INCONSISTENT && peer_disk_state[NEW] == D_UP_TO_DATE)
+				send_state_others = peer_device;
+
+			/* connect without resync or remote attach without resync */
+			if (disk_state[NOW] >= D_OUTDATED && repl_state[NEW] == L_ESTABLISHED &&
+			    ((repl_state[OLD] == L_OFF && peer_disk_state[NEW] >= D_OUTDATED) ||
+			     (peer_disk_state[OLD] == D_DISKLESS && peer_disk_state[NEW] >= D_OUTDATED))) {
+				u64 peer_current_uuid = peer_device->current_uuid & ~UUID_PRIMARY;
+				u64 my_current_uuid = drbd_current_uuid(device) & ~UUID_PRIMARY;
+
+				if (peer_current_uuid == my_current_uuid && get_ldev(device)) {
+					down_write(&device->uuid_sem);
+					drbd_uuid_set_bitmap(peer_device, 0);
+					up_write(&device->uuid_sem);
+					drbd_print_uuids(peer_device, "cleared bm UUID and bitmap");
+					drbd_bitmap_io_from_worker(device, &drbd_bmio_clear_one_peer,
+								   "clearing bm one peer", BM_LOCK_CLEAR | BM_LOCK_BULK,
+								   peer_device);
+					put_ldev(device);
+				}
+			}
+		}
+
+		if (role[NEW] == R_PRIMARY && !data_accessible[OLD] && data_accessible[NEW])
+			healed_primary = true;
+
+		for (n_connection = 0; n_connection < state_change->n_connections; n_connection++) {
+			struct drbd_connection_state_change *connection_state_change = &state_change->connections[n_connection];
+			struct drbd_connection *connection = connection_state_change->connection;
+			enum drbd_conn_state *cstate = connection_state_change->cstate;
+			enum drbd_role *peer_role = connection_state_change->peer_role;
+			struct drbd_peer_device_state_change *peer_device_state_change =
+				&state_change->peer_devices[
+					n_device * state_change->n_connections + n_connection];
+			struct drbd_peer_device *peer_device = peer_device_state_change->peer_device;
+			enum drbd_repl_state *repl_state = peer_device_state_change->repl_state;
+			enum drbd_disk_state *peer_disk_state = peer_device_state_change->disk_state;
+			bool *resync_susp_user = peer_device_state_change->resync_susp_user;
+			bool *resync_susp_peer = peer_device_state_change->resync_susp_peer;
+			bool *resync_susp_dependency = peer_device_state_change->resync_susp_dependency;
+			union drbd_state new_state =
+				state_change_word(state_change, n_device, n_connection, NEW);
+			bool send_uuids, send_state = false;
+
+			/* In case we finished a resync as resync-target, update all neighbors
+			 * about having a bitmap_uuid of 0 towards the previous sync-source.
+			 * That needs to go out before sending the new disk state
+			 * to avoid a race where the other node might downgrade our disk
+			 * state due to old UUID values.
+			 *
+			 * Also check the replication state to ensure that we
+			 * do not send these extra UUIDs before the initial
+			 * handshake. */
+			send_uuids = resync_finished &&
+				peer_disk_state[NEW] != D_UNKNOWN &&
+				repl_state[NEW] > L_OFF;
+
+			/* Send UUIDs again if they changed while establishing the connection */
+			if (repl_state[OLD] == L_OFF && repl_state[NEW] > L_OFF &&
+			    peer_device->comm_current_uuid != drbd_resolved_uuid(peer_device, NULL))
+				send_uuids = true;
+
+			if (repl_state[NEW] > L_OFF && device_stable[OLD] != device_stable[NEW])
+				send_uuids = true;
+
+			if (send_uuids)
+				drbd_send_uuids(peer_device, 0, 0);
+
+			if (peer_disk_state[NEW] == D_UP_TO_DATE)
+				effective_disk_size_determined = true;
+
+			if (!(role[OLD] == R_PRIMARY && !data_accessible[OLD]) &&
+			     (role[NEW] == R_PRIMARY && !data_accessible[NEW]) &&
+			    !test_bit(UNREGISTERED, &device->flags))
+				drbd_maybe_khelper(device, connection, "pri-on-incon-degr");
+
+			/* Became sync source.  With protocol >= 96, we still need to send out
+			 * the sync uuid now. Need to do that before any drbd_send_state, or
+			 * the other side may go "paused sync" before receiving the sync uuids,
+			 * which is unexpected. */
+			if (!(repl_state[OLD] == L_SYNC_SOURCE || repl_state[OLD] == L_PAUSED_SYNC_S) &&
+			     (repl_state[NEW] == L_SYNC_SOURCE || repl_state[NEW] == L_PAUSED_SYNC_S) &&
+			    connection->agreed_pro_version >= 96 && connection->agreed_pro_version < 110 &&
+			    get_ldev(device)) {
+				drbd_gen_and_send_sync_uuid(peer_device);
+				put_ldev(device);
+			}
+
+			/* Do not change the order of the if above and the two below... */
+			if (peer_disk_state[OLD] < D_NEGOTIATING &&
+			    peer_disk_state[NEW] == D_NEGOTIATING) { /* attach on the peer */
+				/* we probably will start a resync soon.
+				 * make sure those things are properly reset. */
+				peer_device->rs_total = 0;
+				peer_device->rs_failed = 0;
+
+				drbd_send_uuids(peer_device, 0, 0);
+				drbd_send_state(peer_device, new_state);
+			}
+			/* No point in queuing send_bitmap if we don't have a connection
+			 * anymore, so check also the _current_ state, not only the new state
+			 * at the time this work was queued. */
+			if (repl_state[OLD] != L_WF_BITMAP_S && repl_state[NEW] == L_WF_BITMAP_S &&
+			    peer_device->repl_state[NOW] == L_WF_BITMAP_S) {
+				/* Now that the connection is L_WF_BITMAP_S,
+				 * new requests will be sent to the peer as
+				 * P_OUT_OF_SYNC packets. However, active
+				 * requests may not have been communicated to
+				 * the peer and may not yet be marked in the
+				 * local bitmap. Mark these requests in the
+				 * bitmap before reading and sending that
+				 * bitmap. This may set bits unnecessarily, but
+				 * it does no harm to resync a small amount of
+				 * additional data. */
+				drbd_set_pending_out_of_sync(peer_device);
+				/* ldev_safe: ref from extra_ldev_ref_for_after_state_chg() */
+				drbd_queue_bitmap_io(device, &drbd_send_bitmap, NULL,
+						"send_bitmap (WFBitMapS)",
+						BM_LOCK_SET | BM_LOCK_CLEAR | BM_LOCK_BULK | BM_LOCK_SINGLE_SLOT,
+						peer_device);
+			}
+
+			if (peer_role[OLD] == R_PRIMARY && peer_role[NEW] == R_SECONDARY)
+				some_peer_demoted = true;
+
+			/* Last part of the attaching process ... */
+			if (cstate[NEW] == C_CONNECTED && /* repl_state[NEW] might still be L_OFF */
+			    disk_state[OLD] == D_ATTACHING && disk_state[NEW] >= D_NEGOTIATING) {
+				drbd_send_sizes(peer_device, 0, 0);  /* to start sync... */
+				drbd_send_uuids(peer_device, 0, 0);
+				drbd_send_state(peer_device, new_state);
+			}
+
+			/* Started resync, tell peer if drbd9 */
+			if (repl_state[NEW] >= L_SYNC_SOURCE && repl_state[NEW] <= L_PAUSED_SYNC_T &&
+			    (repl_state[OLD] < L_SYNC_SOURCE || repl_state[OLD] > L_PAUSED_SYNC_T))
+				send_state = true;
+
+			/* We want to pause/continue resync, tell peer. */
+			if (repl_state[NEW] >= L_ESTABLISHED &&
+			    ((resync_susp_comb_dep_sc(state_change, n_device, n_connection, OLD) !=
+			      resync_susp_comb_dep_sc(state_change, n_device, n_connection, NEW)) ||
+			     (resync_susp_user[OLD] != resync_susp_user[NEW])))
+				send_state = true;
+
+			/* finished resync, tell sync source */
+			if ((repl_state[OLD] == L_SYNC_TARGET || repl_state[OLD] == L_PAUSED_SYNC_T) &&
+			    repl_state[NEW] == L_ESTABLISHED)
+				send_state = true;
+
+			/* In case one of the isp bits got set, suspend other devices. */
+			if (!(resync_susp_dependency[OLD] || resync_susp_peer[OLD] || resync_susp_user[OLD]) &&
+			     (resync_susp_dependency[NEW] || resync_susp_peer[NEW] || resync_susp_user[NEW]))
+				/* ldev_safe: ref from extra_ldev_ref_for_after_state_chg() */
+				suspend_other_sg(device);
+
+			/* Make sure the peer gets informed about possible state
+			   changes (ISP bits) that happened while we were in L_OFF. */
+			if (repl_state[OLD] == L_OFF && repl_state[NEW] >= L_ESTABLISHED)
+				send_state = true;
+
+			if (repl_state[OLD] != L_AHEAD && repl_state[NEW] == L_AHEAD)
+				send_state = true;
+
+			/* We are in the process of starting a full sync. The SyncTarget sets all slots. */
+			if (repl_state[OLD] != L_STARTING_SYNC_T && repl_state[NEW] == L_STARTING_SYNC_T)
+				/* ldev_safe: ref from extra_ldev_ref_for_after_state_chg() */
+				drbd_queue_bitmap_io(device,
+					&drbd_bmio_set_all_n_write, &abw_start_sync,
+					"set_n_write from StartingSync",
+					BM_LOCK_CLEAR | BM_LOCK_BULK,
+					peer_device);
+
+			/* We are in the process of starting a full sync. The SyncSource sets one slot. */
+			if (repl_state[OLD] != L_STARTING_SYNC_S && repl_state[NEW] == L_STARTING_SYNC_S)
+				/* ldev_safe: ref from extra_ldev_ref_for_after_state_chg() */
+				drbd_queue_bitmap_io(device,
+					&drbd_bmio_set_n_write, &abw_start_sync,
+					"set_n_write from StartingSync",
+					BM_LOCK_CLEAR | BM_LOCK_BULK,
+					peer_device);
+
+			/* Disks got bigger while they were detached */
+			if (disk_state[NEW] > D_NEGOTIATING && peer_disk_state[NEW] > D_NEGOTIATING &&
+			    test_and_clear_bit(RESYNC_AFTER_NEG, &peer_device->flags)) {
+				if (repl_state[NEW] == L_ESTABLISHED)
+					resync_after_online_grow(peer_device);
+			}
+
+			/* A resync finished or aborted, wake paused devices... */
+			if ((repl_state[OLD] > L_ESTABLISHED && repl_state[NEW] <= L_ESTABLISHED) ||
+			    (resync_susp_peer[OLD] && !resync_susp_peer[NEW]) ||
+			    (resync_susp_user[OLD] && !resync_susp_user[NEW]))
+				/* ldev_safe: ref from extra_ldev_ref_for_after_state_chg() */
+				resume_next_sg(device);
+
+			/* sync target done with resync. Explicitly notify all peers. Our sync
+			   source should know by itself, but the other peers need that info. */
+			if (disk_state[OLD] < D_UP_TO_DATE && repl_state[OLD] >= L_SYNC_SOURCE && repl_state[NEW] == L_ESTABLISHED)
+				send_new_state_to_all_peer_devices(state_change, n_device);
+
+			/* Outdated myself, or became D_UP_TO_DATE: tell peers.
+			 * Do not do it, when the local node was forced from R_SECONDARY to R_PRIMARY,
+			 * because that is part of the 2-phase-commit and that is necessary to trigger
+			 * the initial resync. */
+			if ((disk_state[NEW] >= D_INCONSISTENT && disk_state[NEW] != disk_state[OLD] &&
+			     repl_state[OLD] >= L_ESTABLISHED && repl_state[NEW] >= L_ESTABLISHED) &&
+			    !(role[OLD] == R_SECONDARY && role[NEW] == R_PRIMARY))
+				send_state = true;
+
+			/* diskless peers need to be informed about quorum changes, since they
+			   consider the quorum state of the diskful nodes. */
+			if (have_quorum[OLD] != have_quorum[NEW] && disk_state[NEW] >= D_INCONSISTENT)
+				send_state = true;
+
+			/* Skipped resync with peer_device, tell others... */
+			if (send_state_others && send_state_others != peer_device)
+				send_state = true;
+
+			/* This triggers bitmap writeout of potentially still unwritten pages
+			 * if the resync finished cleanly, or aborted because of peer disk
+			 * failure, or on transition from resync back to AHEAD/BEHIND.
+			 *
+			 * Connection loss is handled in conn_disconnect() by the receiver.
+			 *
+			 * For resync aborted because of local disk failure, we cannot do
+			 * any bitmap writeout anymore.
+			 *
+			 * No harm done if some bits change during this phase.
+			 */
+			if ((repl_state[OLD] > L_ESTABLISHED && repl_state[OLD] < L_AHEAD) &&
+			    (repl_state[NEW] == L_ESTABLISHED || repl_state[NEW] >= L_AHEAD) &&
+			    get_ldev(device)) {
+				drbd_queue_bitmap_io(device, &drbd_bm_write_copy_pages, NULL,
+					"write from resync_finished", BM_LOCK_BULK,
+					NULL);
+				put_ldev(device);
+			}
+
+			/* Verify finished, or reached stop sector.  Peer did not know about
+			 * the stop sector, and we may even have changed the stop sector during
+			 * verify to interrupt/stop early.  Send the new state. */
+			if (repl_state[OLD] == L_VERIFY_S && repl_state[NEW] == L_ESTABLISHED
+			    && verify_can_do_stop_sector(peer_device))
+				send_new_state_to_all_peer_devices(state_change, n_device);
+
+			if (disk_state[NEW] == D_DISKLESS &&
+			    cstate[NEW] == C_STANDALONE &&
+			    role[NEW] == R_SECONDARY) {
+				if (resync_susp_dependency[OLD] != resync_susp_dependency[NEW])
+					/* ldev_safe: ref from extra_ldev_ref_for_after_state_chg */
+					resume_next_sg(device);
+			}
+
+			if (device_stable[OLD] && !device_stable[NEW] &&
+			    repl_state[NEW] >= L_ESTABLISHED && get_ldev(device)) {
+				/* Inform peers about being unstable...
+				   Maybe it would be a better idea to have the stable bit as
+				   part of the state (and send it with the state) */
+				drbd_send_uuids(peer_device, 0, 0);
+				put_ldev(device);
+			}
+
+			if (send_state && cstate[NEW] == C_CONNECTED)
+				drbd_send_state(peer_device, new_state);
+
+			if (((!device_stable[OLD] && device_stable[NEW]) ||
+			     (resync_target[OLD] && !resync_target[NEW] && device_stable[NEW])) &&
+			    !(repl_state[OLD] == L_SYNC_TARGET || repl_state[OLD] == L_PAUSED_SYNC_T) &&
+			    !(peer_role[OLD] == R_PRIMARY) && disk_state[NEW] >= D_OUTDATED &&
+			    repl_state[NEW] >= L_ESTABLISHED &&
+			    get_ldev(device)) {
+				/* Offer all peers a resync, with the exception of ...
+				   ... the node that made me up-to-date (with a resync),
+				   ... the case where I was primary, and
+				   ... the peer that transitioned from primary to secondary.
+				*/
+				drbd_send_uuids(peer_device, UUID_FLAG_GOT_STABLE, 0);
+				put_ldev(device);
+			}
+
+			if (peer_disk_state[OLD] == D_UP_TO_DATE &&
+			    (peer_disk_state[NEW] == D_FAILED || peer_disk_state[NEW] == D_INCONSISTENT) &&
+			    test_and_clear_bit(NEW_CUR_UUID, &device->flags))
+				/* When a peer disk goes from D_UP_TO_DATE to D_FAILED or D_INCONSISTENT
+				   we know that a write failed on that node. Therefore we need to create
+				   the new UUID right now (not wait for the next write to come in) */
+				new_current_uuid = true;
+
+			if (disk_state[OLD] > D_FAILED && disk_state[NEW] == D_FAILED &&
+			    role[NEW] == R_PRIMARY && test_and_clear_bit(NEW_CUR_UUID, &device->flags))
+				new_current_uuid = true;
+
+			if (repl_state[OLD] != L_VERIFY_S && repl_state[NEW] == L_VERIFY_S) {
+				drbd_info(peer_device, "Starting Online Verify from sector %llu\n",
+						(unsigned long long)peer_device->ov_position);
+				drbd_queue_work_if_unqueued(
+						&peer_device->connection->sender_work,
+						&peer_device->resync_work);
+			}
+
+			if (!repl_is_sync(repl_state[OLD]) && repl_is_sync(repl_state[NEW]))
+				/* ldev_safe: ref from extra_ldev_ref_for_after_state_chg() */
+				drbd_run_resync(peer_device, repl_state[NEW]);
+
+			if (repl_is_sync(repl_state[OLD]) && !repl_is_sync(repl_state[NEW]))
+				drbd_last_resync_request(peer_device, false);
+
+			if (peer_device_state_change->repl_state[OLD] != L_SYNC_TARGET &&
+					peer_device_state_change->repl_state[NEW] == L_SYNC_TARGET)
+				drbd_queue_work_if_unqueued(
+						&peer_device->connection->sender_work,
+						&peer_device->resync_work);
+
+			if (!(repl_is_sync_target(repl_state[OLD]) &&
+					all_peer_replication[OLD]) &&
+					repl_is_sync_target(repl_state[NEW]) &&
+					all_peer_replication[NEW])
+				send_flush_requests = true;
+
+			if (!peer_device_state_change->peer_replication[OLD] &&
+					peer_device_state_change->peer_replication[NEW])
+				drbd_send_enable_replication(peer_device, true);
+		}
+
+		if (((role[OLD] == R_PRIMARY && role[NEW] == R_SECONDARY) || some_peer_demoted) &&
+		    get_ldev(device)) {
+			/* The some_peer_demoted case is superseded by
+			 * handle_neighbor_demotion(). We keep this call for
+			 * compatibility until support for protocol version 121
+			 * is removed.
+			 *
+			 * No changes to the bitmap expected after this point, so write out any
+			 * No changes to the bitmap are expected after this point, so write out
+			 * any changes up to now to ensure that the metadata disk has the full
+			 * bitmap content. Even if the bitmap does still change (e.g. it was
+			 * dual primary), no harm is done. */
+						   "demote", BM_LOCK_SET | BM_LOCK_CLEAR | BM_LOCK_BULK,
+						   NULL);
+			put_ldev(device);
+		}
+
+		/* Make sure the effective disk size is stored in the metadata
+		 * if a local disk is attached and either the local disk state
+		 * or a peer disk state is D_UP_TO_DATE.  */
+		if (effective_disk_size_determined && get_ldev(device)) {
+			sector_t size = get_capacity(device->vdisk);
+			if (device->ldev->md.effective_size != size) {
+				char ppb[10];
+
+				drbd_info(device, "persisting effective size = %s (%llu KB)\n",
+				     ppsize(ppb, size >> 1),
+				     (unsigned long long)size >> 1);
+				device->ldev->md.effective_size = size;
+				drbd_md_mark_dirty(device);
+			}
+			put_ldev(device);
+		}
+
+		/* first half of local IO error, failure to attach,
+		 * or administrative detach */
+		if ((disk_state[OLD] != D_FAILED && disk_state[NEW] == D_FAILED) ||
+		    (disk_state[OLD] != D_DETACHING && disk_state[NEW] == D_DETACHING)) {
+			enum drbd_io_error_p eh = EP_PASS_ON;
+			int was_io_error = 0;
+
+			/* This is our cleanup for the transition to D_DISKLESS.
+			 * It is still not safe to dereference ldev here, since
+			 * we might come from a failed Attach before ldev was set. */
+			/* ldev_safe: ref from extra_ldev_ref_for_after_state_chg() */
+			if (have_ldev && device->ldev) {
+				rcu_read_lock();
+				eh = rcu_dereference(device->ldev->disk_conf)->on_io_error;
+				rcu_read_unlock();
+
+				was_io_error = disk_state[NEW] == D_FAILED;
+
+				/* Intentionally call this handler first, before drbd_send_state().
+				 * See: 2932204 drbd: call local-io-error handler early
+				 * People may choose to hard-reset the box from this handler.
+				 * It is useful if this looks like a "regular node crash". */
+				if (was_io_error && eh == EP_CALL_HELPER)
+					drbd_maybe_khelper(device, NULL, "local-io-error");
+
+				/* Immediately allow completion of all application IO,
+				 * that waits for completion from the local disk,
+				 * if this was a force-detach due to disk_timeout
+				 * or administrator request (drbdsetup detach --force).
+				 * Do NOT abort otherwise.
+				 * Aborting local requests may cause serious problems,
+				 * if requests are completed to upper layers already,
+				 * and then later the already submitted local bio completes.
+				 * This can cause DMA into former bio pages that meanwhile
+				 * have been re-used for other things.
+				 * So aborting local requests may cause crashes,
+				 * or even worse, silent data corruption.
+				 */
+				if (test_and_clear_bit(FORCE_DETACH, &device->flags))
+					tl_abort_disk_io(device);
+
+				send_new_state_to_all_peer_devices(state_change, n_device);
+
+				/* In case we want to get something to stable storage still,
+				 * this may be the last chance.
+				 * Following put_ldev may transition to D_DISKLESS. */
+				drbd_bitmap_io_from_worker(device, &drbd_bm_write,
+						"detach", BM_LOCK_SET | BM_LOCK_CLEAR | BM_LOCK_BULK,
+						NULL);
+				drbd_md_sync_if_dirty(device);
+			}
+		}
+
+		/* second half of local IO error, failure to attach,
+		 * or administrative detach,
+		 * after local_cnt references have reached zero again */
+		if (disk_state[OLD] != D_DISKLESS && disk_state[NEW] == D_DISKLESS) {
+			/* We must still be diskless,
+			 * re-attach has to be serialized with this! */
+			if (device->disk_state[NOW] != D_DISKLESS)
+				drbd_err(device,
+					"ASSERT FAILED: disk is %s while going diskless\n",
+					drbd_disk_str(device->disk_state[NOW]));
+
+			/* we may need to cancel the md_sync timer */
+			timer_delete_sync(&device->md_sync_timer);
+
+			if (have_ldev)
+				send_new_state_to_all_peer_devices(state_change, n_device);
+		}
+
+		if (have_ldev)
+			put_ldev(device);
+
+		/* Notify peers that I had a local IO error and did not detach. */
+		if (disk_state[OLD] == D_UP_TO_DATE && disk_state[NEW] == D_INCONSISTENT)
+			send_new_state_to_all_peer_devices(state_change, n_device);
+
+		/* Testing EMPTY_TWOPC_PENDING would cause more queuing than necessary */
+		if (should_try_become_up_to_date(device, disk_state, NOW))
+			try_become_up_to_date = true;
+
+		if (test_bit(TRY_TO_GET_RESYNC, &device->flags)) {
+			/* Got connected to a diskless primary */
+			clear_bit(TRY_TO_GET_RESYNC, &device->flags);
+			drbd_try_to_get_resynced(device);
+		}
+
+		drbd_md_sync_if_dirty(device);
+
+		if (role[NEW] == R_PRIMARY && have_quorum[OLD] && !have_quorum[NEW])
+			drbd_maybe_khelper(device, NULL, "quorum-lost");
+
+		if (!susp_uuid[OLD] && susp_uuid[NEW] &&
+		    test_and_clear_bit(NEW_CUR_UUID, &device->flags))
+			new_current_uuid = true;
+
+		if (new_current_uuid)
+			drbd_uuid_new_current(device, false);
+
+		if (disk_state[OLD] > D_DISKLESS && disk_state[NEW] == D_DISKLESS)
+			drbd_reconsider_queue_parameters(device, NULL);
+	}
+
+	if (role[OLD] == R_PRIMARY && role[NEW] == R_SECONDARY)
+		send_role_to_all_peers(state_change);
+
+	for (n_connection = 0; n_connection < state_change->n_connections; n_connection++) {
+		struct drbd_connection_state_change *connection_state_change = &state_change->connections[n_connection];
+		struct drbd_connection *connection = connection_state_change->connection;
+		enum drbd_conn_state *cstate = connection_state_change->cstate;
+		bool *susp_fen = connection_state_change->susp_fen;
+		enum drbd_fencing_policy fencing_policy;
+
+		if (connection_state_change->peer_role[NEW] == R_PRIMARY && send_flush_requests &&
+				connection->agreed_pro_version >= 123) {
+			u64 current_flush_sequence;
+
+			spin_lock_irq(&resource->initiator_flush_lock);
+			/* Requirement: At least the value from the corresponding state change */
+			current_flush_sequence = resource->current_flush_sequence;
+			spin_unlock_irq(&resource->initiator_flush_lock);
+
+			drbd_send_flush_requests(connection, current_flush_sequence);
+		}
+
+		/* Upon network configuration, we need to start the receiver */
+		if (cstate[OLD] == C_STANDALONE && cstate[NEW] == C_UNCONNECTED)
+			drbd_thread_start(&connection->receiver);
+
+		if (susp_fen[NEW])
+			check_may_resume_io_after_fencing(state_change, n_connection);
+
+		rcu_read_lock();
+		fencing_policy = connection->fencing_policy;
+		rcu_read_unlock();
+		if (fencing_policy != FP_DONT_CARE &&
+				drbd_should_unfence(state_change, n_connection))
+			drbd_maybe_khelper(NULL, connection, "unfence-peer");
+	}
+
+	for (n_connection = 0; n_connection < state_change->n_connections; n_connection++) {
+		struct drbd_connection_state_change *connection_state_change = &state_change->connections[n_connection];
+		enum drbd_conn_state *cstate = connection_state_change->cstate;
+
+		if (cstate[NEW] == C_CONNECTED || cstate[NEW] == C_CONNECTING)
+			still_connected = true;
+	}
+
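+	/* susp_uuid is transient: it is lifted again right here in the
+	 * after-state-change work, once the new current UUID has been
+	 * generated above. */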
+	if (susp_uuid[NEW]) {
+		unsigned long irq_flags;
+
+		begin_state_change(resource, &irq_flags, CS_VERBOSE);
+		resource->susp_uuid[NEW] = false;
+		end_state_change(resource, &irq_flags, "susp-uuid");
+	}
+
+	if (try_become_up_to_date || healed_primary)
+		drbd_schedule_empty_twopc(resource);
+
+	if (!still_connected)
+		mod_timer_pending(&resource->twopc_timer, jiffies);
+
+	if (work->done)
+		complete(work->done);
+	forget_state_change(state_change);
+	kfree(work);
+
+	return 0;
+}
+
+static bool local_state_change(enum chg_state_flags flags)
+{
+	return flags & (CS_HARD | CS_LOCAL_ONLY);
+}
+
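+/*
+ * Ask a peer to commit a state change (pre-DRBD-9 peers): send
+ * P_STATE_CHG_REQ for a single volume, or P_CONN_ST_CHG_REQ for the whole
+ * connection (vnr == -1), and mark the connection TWOPC_PREPARED so that
+ * the answer is picked up by __peer_reply().
+ */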
+static enum drbd_state_rv
+__peer_request(struct drbd_connection *connection, int vnr,
+	       union drbd_state mask, union drbd_state val)
+{
+	enum drbd_state_rv rv = SS_SUCCESS;
+
+	if (connection->cstate[NOW] == C_CONNECTED) {
+		enum drbd_packet cmd = (vnr == -1) ? P_CONN_ST_CHG_REQ : P_STATE_CHG_REQ;
+		if (!conn_send_state_req(connection, vnr, cmd, mask, val)) {
+			set_bit(TWOPC_PREPARED, &connection->flags);
+			rv = SS_CW_SUCCESS;
+		}
+	}
+	return rv;
+}
+
+static enum drbd_state_rv __peer_reply(struct drbd_connection *connection)
+{
+	if (test_and_clear_bit(TWOPC_NO, &connection->flags))
+		return SS_CW_FAILED_BY_PEER;
+	if (test_and_clear_bit(TWOPC_YES, &connection->flags) ||
+	    !test_bit(TWOPC_PREPARED, &connection->flags))
+		return SS_CW_SUCCESS;
+
+	/* This is DRBD 9.x <-> 8.4 compat code.
+	 * Consistent with __peer_request() above:
+	 * No more connection: fake success. */
+	if (connection->cstate[NOW] != C_CONNECTED)
+		return SS_SUCCESS;
+	return SS_UNKNOWN_ERROR;
+}
+
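+/*
+ * Grab the state_rwlock for the tail end of a remote state change.
+ * Returns true with the lock held if no remote state change and no
+ * two-phase commit work is pending anymore; otherwise releases the lock
+ * and returns false so the caller keeps waiting.
+ */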
+static bool when_done_lock(struct drbd_resource *resource,
+			   unsigned long *irq_flags)
+{
+	write_lock_irqsave(&resource->state_rwlock, *irq_flags);
+	if (!resource->remote_state_change && !test_bit(TWOPC_WORK_PENDING, &resource->flags))
+		return true;
+	write_unlock_irqrestore(&resource->state_rwlock, *irq_flags);
+	return false;
+}
 
 /**
- * is_valid_transition() - Returns an SS_ error code if the state transition is not possible
- * This limits hard state transitions. Hard state transitions are facts there are
- * imposed on DRBD by the environment. E.g. disk broke or network broke down.
- * But those hard state transitions are still not allowed to do everything.
- * @ns:		new state.
- * @os:		old state.
+ * complete_remote_state_change  -  Wait for other remote state changes to complete
+ * @resource: DRBD resource.
+ * @irq_flags: IRQ flags from begin_state_change.
  */
+static void complete_remote_state_change(struct drbd_resource *resource,
+					 unsigned long *irq_flags)
+{
+	if (resource->remote_state_change) {
+		enum chg_state_flags flags = resource->state_change_flags;
+
+		begin_remote_state_change(resource, irq_flags);
+		for (;;) {
+			long t = twopc_timeout(resource);
+
+			t = wait_event_timeout(resource->twopc_wait,
+				   when_done_lock(resource, irq_flags), t);
+			if (t)
+				break;
+			if (when_done_lock(resource, irq_flags)) {
+				drbd_info(resource, "Two-phase commit: "
+					  "not woken up in time\n");
+				break;
+			}
+		}
+		__end_remote_state_change(resource, flags);
+	}
+}
+
 static enum drbd_state_rv
-is_valid_transition(union drbd_state os, union drbd_state ns)
+change_peer_state(struct drbd_connection *connection, int vnr,
+		  union drbd_state mask, union drbd_state val, unsigned long *irq_flags)
 {
+	struct drbd_resource *resource = connection->resource;
+	enum chg_state_flags flags = resource->state_change_flags | CS_TWOPC;
 	enum drbd_state_rv rv;
 
-	rv = is_valid_conn_transition(os.conn, ns.conn);
+	if (!expect(resource, flags & CS_SERIALIZE))
+		return SS_CW_FAILED_BY_PEER;
+
+	complete_remote_state_change(resource, irq_flags);
+
+	resource->remote_state_change = true;
+	resource->twopc_reply.initiator_node_id = resource->res_opts.node_id;
+	resource->twopc_reply.tid = 0;
+	begin_remote_state_change(resource, irq_flags);
+	rv = __peer_request(connection, vnr, mask, val);
+	if (rv == SS_CW_SUCCESS) {
+		wait_event(resource->state_wait,
+			((rv = __peer_reply(connection)) != SS_UNKNOWN_ERROR));
+		clear_bit(TWOPC_PREPARED, &connection->flags);
+	}
+	end_remote_state_change(resource, irq_flags, flags);
+	return rv;
+}
+
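+/*
+ * Send a two-phase commit request to every directly connected peer in the
+ * reach_immediately node mask.  Each peer the request is sent to gets
+ * TWOPC_PREPARED set and its reply flags cleared; for P_TWOPC_PREPARE and
+ * P_TWOPC_PREP_RSZ a ping is scheduled right behind the request.  Returns
+ * SS_CW_SUCCESS as soon as the request went out to at least one peer.
+ */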
+static enum drbd_state_rv
+__cluster_wide_request(struct drbd_resource *resource, struct twopc_request *request,
+		       u64 reach_immediately)
+{
+	enum drbd_packet cmd = request->cmd;
+	struct drbd_connection *connection;
+	enum drbd_state_rv rv = SS_SUCCESS;
+	u64 im;
+
+	for_each_connection_ref(connection, im, resource) {
+		u64 mask;
+		int err;
+
+		clear_bit(TWOPC_PREPARED, &connection->flags);
+
+		if (connection->agreed_pro_version < 110)
+			continue;
+		mask = NODE_MASK(connection->peer_node_id);
+		if (reach_immediately & mask)
+			set_bit(TWOPC_PREPARED, &connection->flags);
+		else
+			continue;
 
-	/* we cannot fail (again) if we already detached */
-	if (ns.disk == D_FAILED && os.disk == D_DISKLESS)
-		rv = SS_IS_DISKLESS;
+		clear_bit(TWOPC_YES, &connection->flags);
+		clear_bit(TWOPC_NO, &connection->flags);
+		clear_bit(TWOPC_RETRY, &connection->flags);
 
+		err = conn_send_twopc_request(connection, request);
+		if (err) {
+			clear_bit(TWOPC_PREPARED, &connection->flags);
+			wake_up(&resource->work.q_wait);
+			continue;
+		}
+		if (cmd == P_TWOPC_PREPARE || cmd == P_TWOPC_PREP_RSZ)
+			schedule_work(&connection->send_ping_work);
+		rv = SS_CW_SUCCESS;
+	}
 	return rv;
 }
 
-static void print_sanitize_warnings(struct drbd_device *device, enum sanitize_state_warnings warn)
+bool drbd_twopc_between_peer_and_me(struct drbd_connection *connection)
 {
-	static const char *msg_table[] = {
-		[NO_WARNING] = "",
-		[ABORTED_ONLINE_VERIFY] = "Online-verify aborted.",
-		[ABORTED_RESYNC] = "Resync aborted.",
-		[CONNECTION_LOST_NEGOTIATING] = "Connection lost while negotiating, no data!",
-		[IMPLICITLY_UPGRADED_DISK] = "Implicitly upgraded disk",
-		[IMPLICITLY_UPGRADED_PDSK] = "Implicitly upgraded pdsk",
-	};
+	const int my_node_id = connection->resource->res_opts.node_id;
+	struct twopc_reply *o = &connection->resource->twopc_reply;
+
+	return ((o->target_node_id == my_node_id || o->target_node_id == -1) &&
+		o->initiator_node_id == connection->peer_node_id) ||
+		((o->target_node_id == connection->peer_node_id || o->target_node_id == -1) &&
+		 o->initiator_node_id == my_node_id);
+}
+
+bool cluster_wide_reply_ready(struct drbd_resource *resource)
+{
+	struct drbd_connection *connection;
+	bool connect_ready = true;
+	bool have_no = resource->twopc_reply.state_change_failed;
+	bool have_retry = false;
+	bool all_yes = true;
+
+	if (test_bit(TWOPC_ABORT_LOCAL, &resource->flags))
+		return true;
+
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		if (connection->agreed_pro_version >= 118 &&
+				!idr_is_empty(&resource->devices) &&
+				resource->twopc_reply.is_connect &&
+				drbd_twopc_between_peer_and_me(connection) &&
+				!test_bit(CONN_HANDSHAKE_READY, &connection->flags))
+			connect_ready = false;
+
+		if (!test_bit(TWOPC_PREPARED, &connection->flags))
+			continue;
+		if (test_bit(TWOPC_NO, &connection->flags))
+			have_no = true;
+		if (test_bit(TWOPC_RETRY, &connection->flags))
+			have_retry = true;
+		if (!test_bit(TWOPC_YES, &connection->flags))
+			all_yes = false;
+	}
+	rcu_read_unlock();
+
+	return have_retry || (connect_ready && (have_no || all_yes));
+}
+
+static enum drbd_state_rv get_cluster_wide_reply(struct drbd_resource *resource,
+						 struct change_context *context)
+{
+	struct drbd_connection *connection, *failed_by = NULL;
+	bool handshake_disconnect = false;
+	bool handshake_retry = false;
+	bool have_no = resource->twopc_reply.state_change_failed;
+	bool have_retry = false;
+	enum drbd_state_rv rv = SS_CW_SUCCESS;
+
+	if (test_bit(TWOPC_ABORT_LOCAL, &resource->flags))
+		return SS_CONCURRENT_ST_CHG;
+
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		if (resource->twopc_reply.is_connect &&
+				drbd_twopc_between_peer_and_me(connection)) {
+			if (test_bit(CONN_HANDSHAKE_DISCONNECT, &connection->flags))
+				handshake_disconnect = true;
+			if (test_bit(CONN_HANDSHAKE_RETRY, &connection->flags))
+				handshake_retry = true;
+		}
+
+		if (!test_bit(TWOPC_PREPARED, &connection->flags))
+			continue;
+		if (test_bit(TWOPC_NO, &connection->flags)) {
+			failed_by = connection;
+			have_no = true;
+		}
+		if (test_bit(TWOPC_RETRY, &connection->flags))
+			have_retry = true;
+	}
+
+	if (have_retry)
+		rv = SS_CONCURRENT_ST_CHG;
+	else if (handshake_retry)
+		rv = SS_HANDSHAKE_RETRY;
+	else if (handshake_disconnect)
+		rv = SS_HANDSHAKE_DISCONNECT;
+	else if (have_no) {
+		if (context && failed_by)
+			_drbd_state_err(context, "Declined by peer %s (id: %d), see the kernel log there",
+					rcu_dereference(failed_by->transport.net_conf)->name,
+					failed_by->peer_node_id);
+		rv = SS_CW_FAILED_BY_PEER;
+	}
+	rcu_read_unlock();
+
+	if (rv == SS_CW_SUCCESS && test_bit(TWOPC_RECV_SIZES_ERR, &resource->flags))
+		rv = SS_HANDSHAKE_DISCONNECT;
+
+	return rv;
+}
+
+static bool supports_two_phase_commit(struct drbd_resource *resource)
+{
+	struct drbd_connection *connection;
+	bool supported = true;
+
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		if (connection->cstate[NOW] != C_CONNECTED)
+			continue;
+		if (connection->agreed_pro_version < 110) {
+			supported = false;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return supported;
+}
+
+static struct drbd_connection *get_first_connection(struct drbd_resource *resource)
+{
+	struct drbd_connection *connection = NULL;
+
+	rcu_read_lock();
+	if (!list_empty(&resource->connections)) {
+		connection = first_connection(resource);
+		kref_get(&connection->kref);
+	}
+	rcu_read_unlock();
+	return connection;
+}
+
+/* That two_primaries is a connection option is a leftover from the past
+   and should be cleaned up: it really belongs in the resource config.
+   Until then, use this admittedly inaccurate heuristic. */
+static bool multiple_primaries_allowed(struct drbd_resource *resource)
+{
+	struct drbd_connection *connection;
+	bool allowed = false;
+	struct net_conf *nc;
+
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		nc = rcu_dereference(connection->transport.net_conf);
+		if (nc && nc->two_primaries) {
+			allowed = true;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return allowed;
+}
+
+static enum drbd_state_rv
+check_primaries_distances(struct drbd_resource *resource)
+{
+	struct twopc_reply *reply = &resource->twopc_reply;
+	int nr_primaries = hweight64(reply->primary_nodes);
+	u64 common_server;
+
+	if (nr_primaries <= 1)
+		return SS_SUCCESS;
+	if (nr_primaries > 1 && !multiple_primaries_allowed(resource))
+		return SS_TWO_PRIMARIES;
+	/* All primaries directly connected. Good */
+	if (!(reply->primary_nodes & reply->weak_nodes))
+		return SS_SUCCESS;
+
+	/* For virtualization setups with diskless hypervisors (R_PRIMARY) and
+	   one or more storage servers (R_SECONDARY), allow live migration
+	   between the hypervisors. */
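+	/* Worked example (illustration only): two diskless Primary
+	 * hypervisors, nodes 0 and 1, are both connected to storage node 2
+	 * but not to each other.  Each contributes the complement of its
+	 * directly reachable set to weak_nodes, i.e.
+	 * ~(NODE_MASK(0) | NODE_MASK(2)) resp. ~(NODE_MASK(1) | NODE_MASK(2)),
+	 * so common_server = ~weak_nodes ends up as NODE_MASK(2): the storage
+	 * server both primaries can reach. */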
+	common_server = ~reply->weak_nodes;
+	if (common_server) {
+		int node_id;
+
+		/* Only allow this if the new primary is diskless. See also
+		   far_away_change() in drbd_receiver.c for the diskless check
+		   on the other primary. */
+		if ((reply->primary_nodes & NODE_MASK(resource->res_opts.node_id)) &&
+		    drbd_have_local_disk(resource))
+			return SS_WEAKLY_CONNECTED;
+
+		for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+			struct drbd_connection *connection;
+			struct net_conf *nc;
+			bool two_primaries;
+
+			if (!(common_server & NODE_MASK(node_id)))
+				continue;
+			connection = drbd_connection_by_node_id(resource, node_id);
+			if (!connection)
+				continue;
+
+			rcu_read_lock();
+			nc = rcu_dereference(connection->transport.net_conf);
+			two_primaries = nc ? nc->two_primaries : false;
+			rcu_read_unlock();
+
+			if (!two_primaries)
+				return SS_TWO_PRIMARIES;
+		}
+
+		return SS_SUCCESS;
+	}
+	return SS_WEAKLY_CONNECTED;
+}
+
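+/* A device that is held open read-only locally conflicts with a peer being
+ * or becoming Primary unless the connection to that peer allows two
+ * primaries; refuse with SS_PRIMARY_READER in that case. */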
+static enum drbd_state_rv
+check_ro_cnt_and_primary(struct drbd_resource *resource)
+{
+	struct twopc_reply *reply = &resource->twopc_reply;
+	struct drbd_connection *connection;
+	enum drbd_state_rv rv = SS_SUCCESS;
+	struct net_conf *nc;
+
+	if (drbd_open_ro_count(resource) == 0)
+		return rv;
+
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		nc = rcu_dereference(connection->transport.net_conf);
+		if (!nc->two_primaries &&
+		    NODE_MASK(connection->peer_node_id) & reply->primary_nodes) {
+			rv = SS_PRIMARY_READER;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return rv;
+}
+
+long twopc_retry_timeout(struct drbd_resource *resource, int retries)
+{
+	struct drbd_connection *connection;
+	int connections = 0;
+	long timeout = 0;
+
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		if (connection->cstate[NOW] < C_CONNECTING)
+			continue;
+		connections++;
+	}
+	rcu_read_unlock();
+
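+	/* Worked example (illustration only; judging by the HZ / 10 scaling,
+	 * twopc_retry_timeout is in tenths of a second): with
+	 * twopc_retry_timeout = 1, two usable connections and retries = 1,
+	 * the upper bound is 0.1 s * 2 * 2 = 0.4 s; the actual timeout is a
+	 * random value below that, which de-synchronizes nodes retrying
+	 * concurrent state changes against each other. */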
+	if (connections > 0) {
+		if (retries > 5)
+			retries = 5;
+		timeout = resource->res_opts.twopc_retry_timeout *
+			  HZ / 10 * connections * (1 << retries);
+		timeout = get_random_u32_below(timeout);
+	}
+	return timeout;
+}
+
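+/* Roll back the per-peer-device handshake state accumulated by a connect
+ * attempt whose two-phase commit did not go through. */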
+void abort_connect(struct drbd_connection *connection)
+{
+	struct drbd_peer_device *peer_device;
+	int vnr;
+
+	rcu_read_lock();
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		if (test_and_clear_bit(HOLDING_UUID_READ_LOCK, &peer_device->flags))
+			up_read_non_owner(&peer_device->device->uuid_sem);
+		clear_bit(INITIAL_STATE_SENT, &peer_device->flags);
+		clear_bit(INITIAL_STATE_RECEIVED, &peer_device->flags);
+		clear_bit(UUIDS_RECEIVED, &peer_device->flags);
+		clear_bit(CURRENT_UUID_RECEIVED, &peer_device->flags);
+	}
+	rcu_read_unlock();
+}
+
+static void twopc_phase2(struct drbd_resource *resource,
+			 struct twopc_request *request,
+			 u64 reach_immediately)
+{
+	struct drbd_connection *connection;
+	u64 im;
+
+	for_each_connection_ref(connection, im, resource) {
+		u64 mask = NODE_MASK(connection->peer_node_id);
+
+		if (!(reach_immediately & mask))
+			continue;
+
+		conn_send_twopc_request(connection, request);
+	}
+}
+
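+/* Example of a resulting log line (illustrative; the exact state names come
+ * from drbd_role_str() and friends):
+ *   "Preparing cluster-wide state change 12345: 0->all role( Primary )"
+ */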
+void drbd_print_cluster_wide_state_change(struct drbd_resource *resource, const char *message,
+		unsigned int tid, unsigned int initiator_node_id, int target_node_id,
+		union drbd_state mask, union drbd_state val)
+{
+	char buffer[150], *b, *end = buffer + sizeof(buffer);
+
+	b = buffer;
+	b += scnprintf(b, end - b, "%u->", initiator_node_id);
+	if (target_node_id == -1)
+		b += scnprintf(b, end - b, "all");
+	else
+		b += scnprintf(b, end - b, "%d", target_node_id);
+
+	if (mask.role)
+		b += scnprintf(b, end - b, " role( %s )", drbd_role_str(val.role));
+
+	if (mask.peer)
+		b += scnprintf(b, end - b, " peer( %s )", drbd_role_str(val.peer));
+
+	if (mask.conn) {
+		if (val.conn > C_CONNECTED)
+			b += scnprintf(b, end - b, " repl( %s )", drbd_repl_str(val.conn));
+		else
+			b += scnprintf(b, end - b, " conn( %s )", drbd_conn_str(val.conn));
+	}
+
+	if (mask.disk)
+		b += scnprintf(b, end - b, " disk( %s )", drbd_disk_str(val.disk));
+
+	if (mask.pdsk)
+		b += scnprintf(b, end - b, " pdsk( %s )", drbd_disk_str(val.pdsk));
+
+	/* Any of "susp-io( user )", "susp-io( quorum )" or "susp-io( uuid )" */
+	if (mask.susp)
+		b += scnprintf(b, end - b, " %ssusp-io", val.susp ? "+" : "-");
+
+	if (mask.susp_nod)
+		b += scnprintf(b, end - b, " susp-io( %sno-disk )", val.susp_nod ? "+" : "-");
+
+	if (mask.susp_fen)
+		b += scnprintf(b, end - b, " susp-io( %sfencing )", val.susp_fen ? "+" : "-");
+
+	if (mask.user_isp)
+		b += scnprintf(b, end - b, " resync-susp( %suser )", val.user_isp ? "+" : "-");
+
+	if (mask.peer_isp)
+		b += scnprintf(b, end - b, " resync-susp( %speer )", val.peer_isp ? "+" : "-");
+
+	if (mask.aftr_isp)
+		b += scnprintf(b, end - b, " resync-susp( %safter dependency )",
+				val.aftr_isp ? "+" : "-");
 
-	if (warn != NO_WARNING)
-		drbd_warn(device, "%s\n", msg_table[warn]);
+	if (!mask.i)
+		b += scnprintf(b, end - b, " empty");
+
+	drbd_info(resource, "%s %u: %s\n", message, tid, buffer);
 }
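+
+/*
+ * Illustrative message flow (example only, not part of the code): a
+ * three-node cluster in which the coordinator, node 0, is directly
+ * connected to node 1 only, and node 1 forwards the transaction to node 2:
+ *
+ *   node 0                   node 1                   node 2
+ *     | --P_TWOPC_PREPARE--> |                        |
+ *     |                      | --P_TWOPC_PREPARE-->   |
+ *     |                      | <---P_TWOPC_YES------  |
+ *     | <---P_TWOPC_YES----- |                        |
+ *     | --P_TWOPC_COMMIT---> | --P_TWOPC_COMMIT-->    |
+ */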
 
 /**
- * sanitize_state() - Resolves implicitly necessary additional changes to a state transition
- * @device:	DRBD device.
- * @os:		old state.
- * @ns:		new state.
- * @warn:	placeholder for returned state warning.
+ * change_cluster_wide_state  -  Cluster-wide two-phase commit
+ * @change: The callback function that does the actual state change.
+ * @context: State change context.
+ * @tag: State change tag to print in status messages.
+ *
+ * Perform a two-phase commit transaction among all (reachable) nodes in the
+ * cluster.  In our transaction model, the initiator of a transaction is also
+ * the coordinator.
+ *
+ * In phase one of the transaction, the coordinator sends all nodes in the
+ * cluster a P_TWOPC_PREPARE packet.  Each node replies with either P_TWOPC_YES
+ * if it consents or with P_TWOPC_NO if it denies the transaction.  Once all
+ * replies have been received, the coordinator sends all nodes in the cluster a
+ * P_TWOPC_COMMIT or P_TWOPC_ABORT packet to finish the transaction.
+ *
+ * When a node in the cluster is busy with another transaction, it replies with
+ * P_TWOPC_NO.  The coordinator is then responsible for retrying the
+ * transaction.
  *
- * When we loose connection, we have to set the state of the peers disk (pdsk)
- * to D_UNKNOWN. This rule and many more along those lines are in this function.
+ * Since a cluster is not guaranteed to always be fully connected, some nodes
+ * will not be directly reachable from other nodes.  In order to still reach
+ * all nodes in the cluster, participants will forward requests to nodes which
+ * haven't received the request yet:
+ *
+ * The nodes_to_reach field in requests indicates which nodes have not yet
+ * received the request.  Before forwarding a request to a peer, a node
+ * removes itself from nodes_to_reach; it then sends the request to all
+ * directly connected nodes in nodes_to_reach.
+ *
+ * If there are redundant paths in the cluster, requests will reach some nodes
+ * more than once.  Nodes remember when they are taking part in a transaction;
+ * they detect duplicate requests and reply to them with P_TWOPC_YES packets.
+ * (Transactions are identified by the node id of the initiator and a random,
+ * unique-enough transaction identifier.)
+ *
+ * A configurable timeout determines how long a coordinator or participant will
+ * wait for a transaction to finish.  A transaction that times out is assumed
+ * to have aborted.
  */
-static union drbd_state sanitize_state(struct drbd_device *device, union drbd_state os,
-				       union drbd_state ns, enum sanitize_state_warnings *warn)
+static enum drbd_state_rv
+change_cluster_wide_state(bool (*change)(struct change_context *, enum change_phase),
+			  struct change_context *context, const char *tag)
 {
-	enum drbd_fencing_p fp;
-	enum drbd_disk_state disk_min, disk_max, pdsk_min, pdsk_max;
+	struct drbd_resource *resource = context->resource;
+	unsigned long irq_flags;
+	struct twopc_request request;
+	struct twopc_reply *reply = &resource->twopc_reply;
+	struct drbd_connection *connection, *target_connection = NULL;
+	enum drbd_state_rv rv;
+	u64 reach_immediately;
+	int retries = 1;
+	unsigned long start_time;
+	bool have_peers;
+
+	begin_state_change(resource, &irq_flags, context->flags | CS_LOCAL_ONLY);
+	resource->state_change_err_str = context->err_str;
+
+	if (local_state_change(context->flags)) {
+		/* Not a cluster-wide state change. */
+		change(context, PH_LOCAL_COMMIT);
+		return end_state_change(resource, &irq_flags, tag);
+	} else {
+		if (!change(context, PH_PREPARE)) {
+			/* Not a cluster-wide state change. */
+			return end_state_change(resource, &irq_flags, tag);
+		}
+		rv = try_state_change(resource);
+		if (rv != SS_SUCCESS) {
+			/* Failure or nothing to do. */
+			/* abort_state_change(resource, &irq_flags); */
+			if (rv == SS_NOTHING_TO_DO)
+				resource->state_change_flags &= ~CS_VERBOSE;
+			return __end_state_change(resource, &irq_flags, rv, tag);
+		}
+		/* Really a cluster-wide state change. */
+	}
+
+	if (!supports_two_phase_commit(resource)) {
+		connection = get_first_connection(resource);
+		rv = SS_SUCCESS;
+		if (connection) {
+			rv = change_peer_state(connection, context->vnr, context->mask, context->val, &irq_flags);
+			kref_put(&connection->kref, drbd_destroy_connection);
+		}
+		if (rv >= SS_SUCCESS)
+			change(context, PH_84_COMMIT);
+		return __end_state_change(resource, &irq_flags, rv, tag);
+	}
+
+	if (!expect(resource, (context->flags & CS_SERIALIZE) || context->mask.i == 0)) {
+		rv = SS_CW_FAILED_BY_PEER;
+		return __end_state_change(resource, &irq_flags, rv, tag);
+	}
+
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		if (!expect(connection, current != connection->receiver.task)) {
+			rcu_read_unlock();
+			BUG();
+		}
+	}
+	rcu_read_unlock();
+
+retry:
+	if (current == resource->worker.task && resource->remote_state_change)
+		return __end_state_change(resource, &irq_flags, SS_CONCURRENT_ST_CHG, tag);
+
+	complete_remote_state_change(resource, &irq_flags);
+	start_time = jiffies;
+	resource->state_change_err_str = context->err_str;
+
+	*reply = (struct twopc_reply) { 0 };
+
+	reach_immediately = directly_connected_nodes(resource, NOW);
+	if (context->target_node_id != -1) {
+		struct drbd_connection *connection;
+
+		/* Fail if the target node is no longer directly reachable. */
+		connection = drbd_get_connection_by_node_id(resource, context->target_node_id);
+		if (!connection) {
+			rv = SS_NEED_CONNECTION;
+			return __end_state_change(resource, &irq_flags, rv, tag);
+		}
 
-	if (warn)
-		*warn = NO_WARNING;
+		if (!(connection->cstate[NOW] == C_CONNECTED ||
+		      (connection->cstate[NOW] == C_CONNECTING &&
+		       context->mask.conn == conn_MASK &&
+		       context->val.conn == C_CONNECTED))) {
+			rv = SS_NEED_CONNECTION;
 
-	fp = FP_DONT_CARE;
-	if (get_ldev(device)) {
-		rcu_read_lock();
-		fp = rcu_dereference(device->ldev->disk_conf)->fencing;
-		rcu_read_unlock();
-		put_ldev(device);
+			kref_put(&connection->kref, drbd_destroy_connection);
+			return __end_state_change(resource, &irq_flags, rv, tag);
+		}
+		target_connection = connection;
+
+		/* For connect transactions, add the target node id. */
+		reach_immediately |= NODE_MASK(context->target_node_id);
 	}
 
-	/* Implications from connection to peer and peer_isp */
-	if (ns.conn < C_CONNECTED) {
-		ns.peer_isp = 0;
-		ns.peer = R_UNKNOWN;
-		if (ns.pdsk > D_UNKNOWN || ns.pdsk < D_INCONSISTENT)
-			ns.pdsk = D_UNKNOWN;
+	do
+		reply->tid = get_random_u32();
+	while (!reply->tid);
+
+	clear_bit(TWOPC_RECV_SIZES_ERR, &resource->flags);
+	request.tid = reply->tid;
+	request.initiator_node_id = resource->res_opts.node_id;
+	request.target_node_id = context->target_node_id;
+	request.nodes_to_reach = ~(reach_immediately | NODE_MASK(resource->res_opts.node_id));
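+	/* Worked example (illustration only): in a three-node cluster, node 0
+	 * prepares a transaction while directly connected to node 1 only.
+	 * reach_immediately is NODE_MASK(1), so nodes_to_reach becomes
+	 * ~(NODE_MASK(1) | NODE_MASK(0)) and still contains node 2: node 1 is
+	 * expected to forward the request there. */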
+	request.vnr = context->vnr;
+	request.cmd = P_TWOPC_PREPARE;
+	request.flags = TWOPC_HAS_REACHABLE;
+
+	resource->twopc.type = TWOPC_STATE_CHANGE;
+	resource->twopc.state_change.mask = context->mask;
+	resource->twopc.state_change.val = context->val;
+	resource->twopc.state_change.primary_nodes = 0;
+	resource->twopc.state_change.reachable_nodes = 0;
+	resource->twopc_parent_nodes = 0;
+	resource->remote_state_change = true;
+
+	drbd_print_cluster_wide_state_change(resource, "Preparing cluster-wide state change",
+			request.tid, resource->res_opts.node_id, context->target_node_id,
+			context->mask, context->val);
+
+	reply->initiator_node_id = resource->res_opts.node_id;
+	reply->target_node_id = context->target_node_id;
+
+	reply->reachable_nodes = directly_connected_nodes(resource, NOW) |
+				       NODE_MASK(resource->res_opts.node_id);
+	if (context->mask.conn == conn_MASK && context->val.conn == C_CONNECTED) {
+		reply->reachable_nodes |= NODE_MASK(context->target_node_id);
+		reply->target_reachable_nodes = reply->reachable_nodes;
+		reply->is_connect = 1;
+		drbd_init_connect_state(target_connection);
+	} else if (context->mask.conn == conn_MASK && context->val.conn == C_DISCONNECTING) {
+		reply->target_reachable_nodes = NODE_MASK(context->target_node_id);
+		reply->reachable_nodes &= ~reply->target_reachable_nodes;
+		reply->is_disconnect = 1;
+	} else {
+		reply->target_reachable_nodes = reply->reachable_nodes;
 	}
 
-	/* Clear the aftr_isp when becoming unconfigured */
-	if (ns.conn == C_STANDALONE && ns.disk == D_DISKLESS && ns.role == R_SECONDARY)
-		ns.aftr_isp = 0;
+	D_ASSERT(resource, !test_bit(TWOPC_WORK_PENDING, &resource->flags));
+	begin_remote_state_change(resource, &irq_flags);
+	rv = __cluster_wide_request(resource, &request, reach_immediately);
 
-	/* An implication of the disk states onto the connection state */
-	/* Abort resync if a disk fails/detaches */
-	if (ns.conn > C_CONNECTED && (ns.disk <= D_FAILED || ns.pdsk <= D_FAILED)) {
-		if (warn)
-			*warn = ns.conn == C_VERIFY_S || ns.conn == C_VERIFY_T ?
-				ABORTED_ONLINE_VERIFY : ABORTED_RESYNC;
-		ns.conn = C_CONNECTED;
-	}
+	/* If we are changing state attached to a particular connection then we
+	 * expect that connection to remain connected. A failure to send
+	 * P_TWOPC_PREPARE on that connection is a failure for the whole
+	 * cluster-wide state change. */
+	if (target_connection && !test_bit(TWOPC_PREPARED, &target_connection->flags))
+		rv = SS_NEED_CONNECTION;
 
-	/* Connection breaks down before we finished "Negotiating" */
-	if (ns.conn < C_CONNECTED && ns.disk == D_NEGOTIATING &&
-	    get_ldev_if_state(device, D_NEGOTIATING)) {
-		if (device->ed_uuid == device->ldev->md.uuid[UI_CURRENT]) {
-			ns.disk = device->new_state_tmp.disk;
-			ns.pdsk = device->new_state_tmp.pdsk;
-		} else {
-			if (warn)
-				*warn = CONNECTION_LOST_NEGOTIATING;
-			ns.disk = D_DISKLESS;
-			ns.pdsk = D_UNKNOWN;
+	have_peers = rv == SS_CW_SUCCESS;
+	if (have_peers) {
+		long t;
+
+		if (context->mask.conn == conn_MASK && context->val.conn == C_CONNECTED &&
+		    target_connection->agreed_pro_version >= 118)
+			conn_connect2(target_connection);
+
+		t = wait_event_interruptible_timeout(resource->state_wait,
+						     cluster_wide_reply_ready(resource),
+						     twopc_timeout(resource));
+		if (t > 0)
+			rv = get_cluster_wide_reply(resource, context);
+		else
+			rv = t == 0 ? SS_TIMEOUT : SS_INTERRUPTED;
+
+		/* while waiting for the replies, reach_immediately might have changed. */
+		reach_immediately = directly_connected_nodes(resource, NOW);
+		if (target_connection && target_connection->cstate[NOW] == C_CONNECTING)
+			reach_immediately |= NODE_MASK(context->target_node_id);
+
+		request.nodes_to_reach =
+			~(reach_immediately | NODE_MASK(resource->res_opts.node_id));
+
+		if (rv == SS_CW_SUCCESS) {
+			u64 directly_reachable = reach_immediately |
+				NODE_MASK(resource->res_opts.node_id);
+
+			if (context->mask.conn == conn_MASK && context->val.conn == C_DISCONNECTING)
+				directly_reachable &= ~NODE_MASK(context->target_node_id);
+
+			if ((context->mask.role == role_MASK && context->val.role == R_PRIMARY) ||
+			    (context->mask.role != role_MASK && resource->role[NOW] == R_PRIMARY)) {
+				reply->primary_nodes |= NODE_MASK(resource->res_opts.node_id);
+				if (drbd_res_data_accessible(resource))
+					reply->weak_nodes |= ~directly_reachable;
+			}
+
+			/*
+			 * When a node is Primary and has access to UpToDate data, it sets
+			 * weak_nodes to the mask of the nodes it is not connected to. That
+			 * mask includes the bits for nodes which are not configured, so it
+			 * always has some bits set. Thus, if there is a Primary node but no
+			 * bits are set in weak_nodes, the Primary cannot have access to
+			 * UpToDate data.
+			 */
+			if (reply->primary_nodes && !reply->weak_nodes)
+				request.flags |= TWOPC_PRI_INCAPABLE;
+
+			drbd_info(resource, "State change %u: primary_nodes=%lX, weak_nodes=%lX\n",
+				  reply->tid, (unsigned long)reply->primary_nodes,
+				  (unsigned long)reply->weak_nodes);
+
+			if ((context->mask.role == role_MASK && context->val.role == R_PRIMARY) ||
+			    (context->mask.conn == conn_MASK && context->val.conn == C_CONNECTED))
+				rv = check_primaries_distances(resource);
+
+			if (rv >= SS_SUCCESS &&
+			    context->mask.conn == conn_MASK && context->val.conn == C_CONNECTED)
+				rv = check_ro_cnt_and_primary(resource);
+
+			if (!(context->mask.conn == conn_MASK && context->val.conn == C_DISCONNECTING) ||
+			    (reply->reachable_nodes & reply->target_reachable_nodes)) {
+				/* The cluster is still connected after this
+				 * transaction: either this transaction does
+				 * not disconnect a connection, or there are
+				 * redundant connections.  */
+
+				u64 m;
+
+				m = reply->reachable_nodes | reply->target_reachable_nodes;
+				reply->reachable_nodes = m;
+				reply->target_reachable_nodes = m;
+			} else {
+				rcu_read_lock();
+				for_each_connection_rcu(connection, resource) {
+					int node_id = connection->peer_node_id;
+
+					if (node_id == context->target_node_id) {
+						drbd_info(connection, "Cluster is now split\n");
+						break;
+					}
+				}
+				rcu_read_unlock();
+			}
+
+			resource->twopc.state_change.primary_nodes = reply->primary_nodes;
+			resource->twopc.state_change.reachable_nodes =
+				reply->target_reachable_nodes;
 		}
-		put_ldev(device);
-	}
 
-	/* D_CONSISTENT and D_OUTDATED vanish when we get connected */
-	if (ns.conn >= C_CONNECTED && ns.conn < C_AHEAD) {
-		if (ns.disk == D_CONSISTENT || ns.disk == D_OUTDATED)
-			ns.disk = D_UP_TO_DATE;
-		if (ns.pdsk == D_CONSISTENT || ns.pdsk == D_OUTDATED)
-			ns.pdsk = D_UP_TO_DATE;
-	}
-
-	/* Implications of the connection state on the disk states */
-	disk_min = D_DISKLESS;
-	disk_max = D_UP_TO_DATE;
-	pdsk_min = D_INCONSISTENT;
-	pdsk_max = D_UNKNOWN;
-	switch ((enum drbd_conns)ns.conn) {
-	case C_WF_BITMAP_T:
-	case C_PAUSED_SYNC_T:
-	case C_STARTING_SYNC_T:
-	case C_WF_SYNC_UUID:
-	case C_BEHIND:
-		disk_min = D_INCONSISTENT;
-		disk_max = D_OUTDATED;
-		pdsk_min = D_UP_TO_DATE;
-		pdsk_max = D_UP_TO_DATE;
-		break;
-	case C_VERIFY_S:
-	case C_VERIFY_T:
-		disk_min = D_UP_TO_DATE;
-		disk_max = D_UP_TO_DATE;
-		pdsk_min = D_UP_TO_DATE;
-		pdsk_max = D_UP_TO_DATE;
-		break;
-	case C_CONNECTED:
-		disk_min = D_DISKLESS;
-		disk_max = D_UP_TO_DATE;
-		pdsk_min = D_DISKLESS;
-		pdsk_max = D_UP_TO_DATE;
-		break;
-	case C_WF_BITMAP_S:
-	case C_PAUSED_SYNC_S:
-	case C_STARTING_SYNC_S:
-	case C_AHEAD:
-		disk_min = D_UP_TO_DATE;
-		disk_max = D_UP_TO_DATE;
-		pdsk_min = D_INCONSISTENT;
-		pdsk_max = D_CONSISTENT; /* D_OUTDATED would be nice. But explicit outdate necessary*/
-		break;
-	case C_SYNC_TARGET:
-		disk_min = D_INCONSISTENT;
-		disk_max = D_INCONSISTENT;
-		pdsk_min = D_UP_TO_DATE;
-		pdsk_max = D_UP_TO_DATE;
-		break;
-	case C_SYNC_SOURCE:
-		disk_min = D_UP_TO_DATE;
-		disk_max = D_UP_TO_DATE;
-		pdsk_min = D_INCONSISTENT;
-		pdsk_max = D_INCONSISTENT;
-		break;
-	case C_STANDALONE:
-	case C_DISCONNECTING:
-	case C_UNCONNECTED:
-	case C_TIMEOUT:
-	case C_BROKEN_PIPE:
-	case C_NETWORK_FAILURE:
-	case C_PROTOCOL_ERROR:
-	case C_TEAR_DOWN:
-	case C_WF_CONNECTION:
-	case C_WF_REPORT_PARAMS:
-	case C_MASK:
-		break;
-	}
-	if (ns.disk > disk_max)
-		ns.disk = disk_max;
+		if (context->mask.conn == conn_MASK && context->val.conn == C_CONNECTED &&
+		    target_connection->agreed_pro_version >= 118) {
+			wait_initial_states_received(target_connection);
 
-	if (ns.disk < disk_min) {
-		if (warn)
-			*warn = IMPLICITLY_UPGRADED_DISK;
-		ns.disk = disk_min;
+			if (rv >= SS_SUCCESS && test_bit(TWOPC_RECV_SIZES_ERR, &resource->flags))
+				rv = SS_HANDSHAKE_DISCONNECT;
+		}
 	}
-	if (ns.pdsk > pdsk_max)
-		ns.pdsk = pdsk_max;
 
-	if (ns.pdsk < pdsk_min) {
-		if (warn)
-			*warn = IMPLICITLY_UPGRADED_PDSK;
-		ns.pdsk = pdsk_min;
+	request.cmd = rv >= SS_SUCCESS ? P_TWOPC_COMMIT : P_TWOPC_ABORT;
+	if (rv < SS_SUCCESS && target_connection)
+		abort_connect(target_connection);
+
+	if ((rv == SS_TIMEOUT || rv == SS_CONCURRENT_ST_CHG) &&
+	    !(context->flags & CS_DONT_RETRY)) {
+		long timeout = twopc_retry_timeout(resource, retries++);
+
+		drbd_info(resource, "Retrying cluster-wide state change after %ums\n",
+			  jiffies_to_msecs(timeout));
+		if (have_peers)
+			twopc_phase2(resource, &request, reach_immediately);
+		if (target_connection) {
+			kref_put(&target_connection->kref, drbd_destroy_connection);
+			target_connection = NULL;
+		}
+		clear_remote_state_change(resource);
+		schedule_timeout_interruptible(timeout);
+		end_remote_state_change(resource, &irq_flags, context->flags | CS_TWOPC);
+		goto retry;
 	}
 
-	if (fp == FP_STONITH &&
-	    (ns.role == R_PRIMARY && ns.conn < C_CONNECTED && ns.pdsk > D_OUTDATED) &&
-	    !(os.role == R_PRIMARY && os.conn < C_CONNECTED && os.pdsk > D_OUTDATED))
-		ns.susp_fen = 1; /* Suspend IO while fence-peer handler runs (peer lost) */
-
-	if (device->resource->res_opts.on_no_data == OND_SUSPEND_IO &&
-	    (ns.role == R_PRIMARY && ns.disk < D_UP_TO_DATE && ns.pdsk < D_UP_TO_DATE) &&
-	    !(os.role == R_PRIMARY && os.disk < D_UP_TO_DATE && os.pdsk < D_UP_TO_DATE))
-		ns.susp_nod = 1; /* Suspend IO while no data available (no accessible data available) */
+	if (rv >= SS_SUCCESS)
+		drbd_info(resource, "Committing cluster-wide state change %u (%ums)\n",
+			  request.tid,
+			  jiffies_to_msecs(jiffies - start_time));
+	else
+		drbd_info(resource, "Aborting cluster-wide state change %u (%ums) rv = %d\n",
+			  request.tid,
+			  jiffies_to_msecs(jiffies - start_time),
+			  rv);
+
+	if (have_peers && context->change_local_state_last) {
+		set_bit(TWOPC_STATE_CHANGE_PENDING, &resource->flags);
+		twopc_phase2(resource, &request, reach_immediately);
+	}
 
-	if (ns.aftr_isp || ns.peer_isp || ns.user_isp) {
-		if (ns.conn == C_SYNC_SOURCE)
-			ns.conn = C_PAUSED_SYNC_S;
-		if (ns.conn == C_SYNC_TARGET)
-			ns.conn = C_PAUSED_SYNC_T;
+	end_remote_state_change(resource, &irq_flags, context->flags | CS_TWOPC);
+	clear_bit(TWOPC_STATE_CHANGE_PENDING, &resource->flags);
+	if (rv >= SS_SUCCESS) {
+		change(context, PH_COMMIT);
+		rv = end_state_change(resource, &irq_flags, tag);
+		if (rv < SS_SUCCESS)
+			drbd_err(resource, "FATAL: Local commit of already committed state change %u failed!\n",
+				 request.tid);
 	} else {
-		if (ns.conn == C_PAUSED_SYNC_S)
-			ns.conn = C_SYNC_SOURCE;
-		if (ns.conn == C_PAUSED_SYNC_T)
-			ns.conn = C_SYNC_TARGET;
+		abort_state_change(resource, &irq_flags);
 	}
 
-	return ns;
-}
+	if (have_peers && !context->change_local_state_last)
+		twopc_phase2(resource, &request, reach_immediately);
 
-void drbd_resume_al(struct drbd_device *device)
-{
-	if (test_and_clear_bit(AL_SUSPENDED, &device->flags))
-		drbd_info(device, "Resumed AL updates\n");
+	if (target_connection)
+		kref_put(&target_connection->kref, drbd_destroy_connection);
+	return rv;
 }
 
-/* helper for _drbd_set_state */
-static void set_ov_position(struct drbd_peer_device *peer_device, enum drbd_conns cs)
+enum determine_dev_size
+change_cluster_wide_device_size(struct drbd_device *device,
+				sector_t local_max_size,
+				uint64_t new_user_size,
+				enum dds_flags dds_flags,
+				struct resize_parms *rs)
 {
-	struct drbd_device *device = peer_device->device;
+	struct drbd_resource *resource = device->resource;
+	struct twopc_reply *reply = &resource->twopc_reply;
+	struct twopc_request request;
+	unsigned long start_time;
+	unsigned long irq_flags;
+	enum drbd_state_rv rv;
+	enum determine_dev_size dd;
+	u64 reach_immediately;
+	bool have_peers, commit_it;
+	sector_t new_size = 0;
+	int retries = 1;
+
+retry:
+	rv = drbd_support_2pc_resize(resource);
+	if (rv < SS_SUCCESS)
+		return DS_2PC_NOT_SUPPORTED;
 
-	if (peer_device->connection->agreed_pro_version < 90)
-		device->ov_start_sector = 0;
-	device->rs_total = drbd_bm_bits(device);
-	device->ov_position = 0;
-	if (cs == C_VERIFY_T) {
-		/* starting online verify from an arbitrary position
-		 * does not fit well into the existing protocol.
-		 * on C_VERIFY_T, we initialize ov_left and friends
-		 * implicitly in receive_DataRequest once the
-		 * first P_OV_REQUEST is received */
-		device->ov_start_sector = ~(sector_t)0;
-	} else {
-		unsigned long bit = BM_SECT_TO_BIT(device->ov_start_sector);
-		if (bit >= device->rs_total) {
-			device->ov_start_sector =
-				BM_BIT_TO_SECT(device->rs_total - 1);
-			device->rs_total = 1;
-		} else
-			device->rs_total -= bit;
-		device->ov_position = device->ov_start_sector;
-	}
-	device->ov_left = device->rs_total;
-}
+	state_change_lock(resource, &irq_flags, CS_VERBOSE | CS_LOCAL_ONLY);
+	rcu_read_lock();
+	complete_remote_state_change(resource, &irq_flags);
+	start_time = jiffies;
+	reach_immediately = directly_connected_nodes(resource, NOW);
+
+	*reply = (struct twopc_reply) { 0 };
+
+	do
+		reply->tid = get_random_u32();
+	while (!reply->tid);
+
+	request.tid = reply->tid;
+	request.initiator_node_id = resource->res_opts.node_id;
+	request.target_node_id = -1;
+	request.nodes_to_reach = ~(reach_immediately | NODE_MASK(resource->res_opts.node_id));
+	request.vnr = device->vnr;
+	request.cmd = P_TWOPC_PREP_RSZ;
+	request.flags = 0;
+	resource->twopc.type = TWOPC_RESIZE;
+	resource->twopc.resize.dds_flags = dds_flags;
+	resource->twopc.resize.user_size = new_user_size;
+	resource->twopc.resize.diskful_primary_nodes = 0;
+	resource->twopc.resize.new_size = 0;
+	resource->twopc_parent_nodes = 0;
+	resource->remote_state_change = true;
+
+	reply->initiator_node_id = resource->res_opts.node_id;
+	reply->target_node_id = -1;
+	reply->max_possible_size = local_max_size;
+	reply->reachable_nodes = reach_immediately | NODE_MASK(resource->res_opts.node_id);
+	reply->target_reachable_nodes = reply->reachable_nodes;
+	if (resource->role[NOW] == R_PRIMARY)
+		reply->diskful_primary_nodes = NODE_MASK(resource->res_opts.node_id);
+	rcu_read_unlock();
+	state_change_unlock(resource, &irq_flags);
 
-/**
- * _drbd_set_state() - Set a new DRBD state
- * @device:	DRBD device.
- * @ns:		new state.
- * @flags:	Flags
- * @done:	Optional completion, that will get completed after the after_state_ch() finished
- *
- * Caller needs to hold req_lock. Do not call directly.
- */
-enum drbd_state_rv
-_drbd_set_state(struct drbd_device *device, union drbd_state ns,
-	        enum chg_state_flags flags, struct completion *done)
-{
-	struct drbd_peer_device *peer_device = first_peer_device(device);
-	struct drbd_connection *connection = peer_device ? peer_device->connection : NULL;
-	union drbd_state os;
-	enum drbd_state_rv rv = SS_SUCCESS;
-	enum sanitize_state_warnings ssw;
-	struct after_state_chg_work *ascw;
-	struct drbd_state_change *state_change;
+	drbd_info(device, "Preparing cluster-wide size change %u (local_max_size = %llu KB, user_cap = %llu KB)\n",
+		  request.tid,
+		  (unsigned long long)local_max_size >> 1,
+		  (unsigned long long)new_user_size >> 1);
 
-	os = drbd_read_state(device);
+	rv = __cluster_wide_request(resource, &request, reach_immediately);
 
-	ns = sanitize_state(device, os, ns, &ssw);
-	if (ns.i == os.i)
-		return SS_NOTHING_TO_DO;
+	have_peers = rv == SS_CW_SUCCESS;
+	if (have_peers) {
+		if (wait_event_timeout(resource->state_wait,
+				       cluster_wide_reply_ready(resource),
+				       twopc_timeout(resource)))
+			rv = get_cluster_wide_reply(resource, NULL);
+		else
+			rv = SS_TIMEOUT;
 
-	rv = is_valid_transition(os, ns);
-	if (rv < SS_SUCCESS)
-		return rv;
+		if (rv == SS_TIMEOUT || rv == SS_CONCURRENT_ST_CHG) {
+			long timeout = twopc_retry_timeout(resource, retries++);
 
-	if (!(flags & CS_HARD)) {
-		/*  pre-state-change checks ; only look at ns  */
-		/* See drbd_state_sw_errors in drbd_strings.c */
+			drbd_info(device, "Retrying cluster-wide size change after %ums\n",
+				  jiffies_to_msecs(timeout));
 
-		rv = is_valid_state(device, ns);
-		if (rv < SS_SUCCESS) {
-			/* If the old state was illegal as well, then let
-			   this happen...*/
+			request.cmd = P_TWOPC_ABORT;
+			twopc_phase2(resource, &request, reach_immediately);
 
-			if (is_valid_state(device, os) == rv)
-				rv = is_valid_soft_transition(os, ns, connection);
-		} else
-			rv = is_valid_soft_transition(os, ns, connection);
+			clear_remote_state_change(resource);
+			schedule_timeout_interruptible(timeout);
+			goto retry;
+		}
 	}
 
-	if (rv < SS_SUCCESS) {
-		if (flags & CS_VERBOSE)
-			print_st_err(device, os, ns, rv);
-		return rv;
+	if (rv >= SS_SUCCESS) {
+		new_size = drbd_new_dev_size(device, reply->max_possible_size,
+						new_user_size, dds_flags | DDSF_2PC);
+		commit_it = new_size != get_capacity(device->vdisk);
+
+		if (commit_it) {
+			resource->twopc.resize.new_size = new_size;
+			resource->twopc.resize.diskful_primary_nodes = reply->diskful_primary_nodes;
+			drbd_info(device, "Committing cluster-wide size change %u (%ums)\n",
+				  request.tid,
+				  jiffies_to_msecs(jiffies - start_time));
+		} else {
+			drbd_info(device, "Aborting cluster-wide size change %u (%ums) size unchanged\n",
+				  request.tid,
+				  jiffies_to_msecs(jiffies - start_time));
+		}
+	} else {
+		commit_it = false;
+		drbd_info(device, "Aborting cluster-wide size change %u (%ums) rv = %d\n",
+			  request.tid,
+			  jiffies_to_msecs(jiffies - start_time),
+			  rv);
 	}
 
-	print_sanitize_warnings(device, ssw);
+	request.cmd = commit_it ? P_TWOPC_COMMIT : P_TWOPC_ABORT;
+	if (have_peers)
+		twopc_phase2(resource, &request, reach_immediately);
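+	/* The reply set is ready as soon as any peer asked us to retry, or,
+	 * once the connect handshake (if any) has settled, when we have
+	 * either a definite NO or a unanimous YES from all prepared peers. */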
 
-	drbd_pr_state_change(device, os, ns, flags);
+	if (commit_it) {
+		struct twopc_resize *tr = &resource->twopc.resize;
 
-	/* Display changes to the susp* flags that where caused by the call to
-	   sanitize_state(). Only display it here if we where not called from
-	   _conn_request_state() */
-	if (!(flags & CS_DC_SUSP))
-		conn_pr_state_change(connection, os, ns,
-				     (flags & ~CS_DC_MASK) | CS_DC_SUSP);
+		tr->diskful_primary_nodes = reply->diskful_primary_nodes;
+		tr->new_size = new_size;
+		tr->dds_flags = dds_flags;
+		tr->user_size = new_user_size;
 
-	/* if we are going -> D_FAILED or D_DISKLESS, grab one extra reference
-	 * on the ldev here, to be sure the transition -> D_DISKLESS resp.
-	 * drbd_ldev_destroy() won't happen before our corresponding
-	 * after_state_ch works run, where we put_ldev again. */
-	if ((os.disk != D_FAILED && ns.disk == D_FAILED) ||
-	    (os.disk != D_DISKLESS && ns.disk == D_DISKLESS))
-		atomic_inc(&device->local_cnt);
+		dd = drbd_commit_size_change(device, rs, reach_immediately);
+	} else {
+		if (rv == SS_CW_FAILED_BY_PEER)
+			dd = DS_2PC_NOT_SUPPORTED;
+		else if (rv >= SS_SUCCESS)
+			dd = DS_UNCHANGED;
+		else
+			dd = DS_2PC_ERR;
+	}
 
-	if (!is_sync_state(os.conn) && is_sync_state(ns.conn))
-		clear_bit(RS_DONE, &device->flags);
+	clear_remote_state_change(resource);
+	return dd;
+}
 
-	/* FIXME: Have any flags been set earlier in this function already? */
-	state_change = remember_old_state(device->resource, GFP_ATOMIC);
+static void twopc_end_nested(struct drbd_resource *resource, enum drbd_packet cmd)
+{
+	struct drbd_connection *twopc_parent;
+	u64 im;
+	struct twopc_reply twopc_reply;
+	u64 twopc_parent_nodes = 0;
+
+	write_lock_irq(&resource->state_rwlock);
+	twopc_reply = resource->twopc_reply;
+	/* Only send replies if we are in a twopc and have not yet sent replies. */
+	if (twopc_reply.tid && resource->twopc_prepare_reply_cmd == 0) {
+		resource->twopc_prepare_reply_cmd = cmd;
+		twopc_parent_nodes = resource->twopc_parent_nodes;
+	}
+	clear_bit(TWOPC_WORK_PENDING, &resource->flags);
+	write_unlock_irq(&resource->state_rwlock);
 
-	/* changes to local_cnt and device flags should be visible before
-	 * changes to state, which again should be visible before anything else
-	 * depending on that change happens. */
-	smp_wmb();
-	device->state.i = ns.i;
-	device->resource->susp = ns.susp;
-	device->resource->susp_nod = ns.susp_nod;
-	device->resource->susp_fen = ns.susp_fen;
-	smp_wmb();
+	if (!twopc_reply.tid)
+		return;
 
-	remember_new_state(state_change);
+	for_each_connection_ref(twopc_parent, im, resource) {
+		if (!(twopc_parent_nodes & NODE_MASK(twopc_parent->peer_node_id)))
+			continue;
 
-	/* put replicated vs not-replicated requests in seperate epochs */
-	if (drbd_should_do_remote((union drbd_dev_state)os.i) !=
-	    drbd_should_do_remote((union drbd_dev_state)ns.i))
-		start_new_tl_epoch(connection);
+		if (twopc_reply.is_disconnect)
+			set_bit(DISCONNECT_EXPECTED, &twopc_parent->flags);
 
-	if (os.disk == D_ATTACHING && ns.disk >= D_NEGOTIATING)
-		drbd_print_uuids(device, "attached to UUIDs");
+		dynamic_drbd_dbg(twopc_parent, "Nested state change %u result: %s\n",
+			   twopc_reply.tid, drbd_packet_name(cmd));
 
-	/* Wake up role changes, that were delayed because of connection establishing */
-	if (os.conn == C_WF_REPORT_PARAMS && ns.conn != C_WF_REPORT_PARAMS &&
-	    no_peer_wf_report_params(connection)) {
-		clear_bit(STATE_SENT, &connection->flags);
-		wake_up_all_devices(connection);
+		drbd_send_twopc_reply(twopc_parent, cmd, &twopc_reply);
 	}
+	wake_up_all(&resource->twopc_wait);
+}
 
-	wake_up(&device->misc_wait);
-	wake_up(&device->state_wait);
-	wake_up(&connection->ping_wait);
-
-	/* Aborted verify run, or we reached the stop sector.
-	 * Log the last position, unless end-of-device. */
-	if ((os.conn == C_VERIFY_S || os.conn == C_VERIFY_T) &&
-	    ns.conn <= C_CONNECTED) {
-		device->ov_start_sector =
-			BM_BIT_TO_SECT(drbd_bm_bits(device) - device->ov_left);
-		if (device->ov_left)
-			drbd_info(device, "Online Verify reached sector %llu\n",
-				(unsigned long long)device->ov_start_sector);
-	}
+static void __nested_twopc_work(struct drbd_resource *resource)
+{
+	enum drbd_state_rv rv;
+	enum drbd_packet cmd;
 
-	if ((os.conn == C_PAUSED_SYNC_T || os.conn == C_PAUSED_SYNC_S) &&
-	    (ns.conn == C_SYNC_TARGET  || ns.conn == C_SYNC_SOURCE)) {
-		drbd_info(device, "Syncer continues.\n");
-		device->rs_paused += (long)jiffies
-				  -(long)device->rs_mark_time[device->rs_last_mark];
-		if (ns.conn == C_SYNC_TARGET)
-			mod_timer(&device->resync_timer, jiffies);
-	}
+	rv = get_cluster_wide_reply(resource, NULL);
+	if (rv >= SS_SUCCESS)
+		cmd = P_TWOPC_YES;
+	else if (rv == SS_CONCURRENT_ST_CHG || rv == SS_HANDSHAKE_RETRY)
+		cmd = P_TWOPC_RETRY;
+	else
+		cmd = P_TWOPC_NO;
+	twopc_end_nested(resource, cmd);
+}
 
-	if ((os.conn == C_SYNC_TARGET  || os.conn == C_SYNC_SOURCE) &&
-	    (ns.conn == C_PAUSED_SYNC_T || ns.conn == C_PAUSED_SYNC_S)) {
-		drbd_info(device, "Resync suspended\n");
-		device->rs_mark_time[device->rs_last_mark] = jiffies;
-	}
+void nested_twopc_work(struct work_struct *work)
+{
+	struct drbd_resource *resource =
+		container_of(work, struct drbd_resource, twopc_work);
 
-	if (os.conn == C_CONNECTED &&
-	    (ns.conn == C_VERIFY_S || ns.conn == C_VERIFY_T)) {
-		unsigned long now = jiffies;
-		int i;
+	__nested_twopc_work(resource);
 
-		set_ov_position(peer_device, ns.conn);
-		device->rs_start = now;
-		device->rs_last_sect_ev = 0;
-		device->ov_last_oos_size = 0;
-		device->ov_last_oos_start = 0;
+	kref_put(&resource->kref, drbd_destroy_resource);
+}
 
-		for (i = 0; i < DRBD_SYNC_MARKS; i++) {
-			device->rs_mark_left[i] = device->ov_left;
-			device->rs_mark_time[i] = now;
-		}
+void drbd_maybe_cluster_wide_reply(struct drbd_resource *resource)
+{
+	lockdep_assert_held(&resource->state_rwlock);
 
-		drbd_rs_controller_reset(peer_device);
+	if (!resource->remote_state_change || !cluster_wide_reply_ready(resource))
+		return;
 
-		if (ns.conn == C_VERIFY_S) {
-			drbd_info(device, "Starting Online Verify from sector %llu\n",
-					(unsigned long long)device->ov_position);
-			mod_timer(&device->resync_timer, jiffies);
-		}
+	if (resource->twopc_reply.initiator_node_id == resource->res_opts.node_id) {
+		wake_up_all(&resource->state_wait);
+		return;
 	}
 
-	if (get_ldev(device)) {
-		u32 mdf = device->ldev->md.flags & ~(MDF_CONSISTENT|MDF_PRIMARY_IND|
-						 MDF_CONNECTED_IND|MDF_WAS_UP_TO_DATE|
-						 MDF_PEER_OUT_DATED|MDF_CRASHED_PRIMARY);
-
-		mdf &= ~MDF_AL_CLEAN;
-		if (test_bit(CRASHED_PRIMARY, &device->flags))
-			mdf |= MDF_CRASHED_PRIMARY;
-		if (device->state.role == R_PRIMARY ||
-		    (device->state.pdsk < D_INCONSISTENT && device->state.peer == R_PRIMARY))
-			mdf |= MDF_PRIMARY_IND;
-		if (device->state.conn > C_WF_REPORT_PARAMS)
-			mdf |= MDF_CONNECTED_IND;
-		if (device->state.disk > D_INCONSISTENT)
-			mdf |= MDF_CONSISTENT;
-		if (device->state.disk > D_OUTDATED)
-			mdf |= MDF_WAS_UP_TO_DATE;
-		if (device->state.pdsk <= D_OUTDATED && device->state.pdsk >= D_INCONSISTENT)
-			mdf |= MDF_PEER_OUT_DATED;
-		if (mdf != device->ldev->md.flags) {
-			device->ldev->md.flags = mdf;
-			drbd_md_mark_dirty(device);
-		}
-		if (os.disk < D_CONSISTENT && ns.disk >= D_CONSISTENT)
-			drbd_set_ed_uuid(device, device->ldev->md.uuid[UI_CURRENT]);
-		put_ldev(device);
-	}
+	if (test_and_set_bit(TWOPC_WORK_PENDING, &resource->flags))
+		return;
 
-	/* Peer was forced D_UP_TO_DATE & R_PRIMARY, consider to resync */
-	if (os.disk == D_INCONSISTENT && os.pdsk == D_INCONSISTENT &&
-	    os.peer == R_SECONDARY && ns.peer == R_PRIMARY)
-		set_bit(CONSIDER_RESYNC, &device->flags);
-
-	/* Receiver should clean up itself */
-	if (os.conn != C_DISCONNECTING && ns.conn == C_DISCONNECTING)
-		drbd_thread_stop_nowait(&connection->receiver);
-
-	/* Now the receiver finished cleaning up itself, it should die */
-	if (os.conn != C_STANDALONE && ns.conn == C_STANDALONE)
-		drbd_thread_stop_nowait(&connection->receiver);
-
-	/* Upon network failure, we need to restart the receiver. */
-	if (os.conn > C_WF_CONNECTION &&
-	    ns.conn <= C_TEAR_DOWN && ns.conn >= C_TIMEOUT)
-		drbd_thread_restart_nowait(&connection->receiver);
-
-	/* Resume AL writing if we get a connection */
-	if (os.conn < C_CONNECTED && ns.conn >= C_CONNECTED) {
-		drbd_resume_al(device);
-		connection->connect_cnt++;
-	}
-
-	/* remember last attach time so request_timer_fn() won't
-	 * kill newly established sessions while we are still trying to thaw
-	 * previously frozen IO */
-	if ((os.disk == D_ATTACHING || os.disk == D_NEGOTIATING) &&
-	    ns.disk > D_NEGOTIATING)
-		device->last_reattach_jif = jiffies;
-
-	ascw = kmalloc_obj(*ascw, GFP_ATOMIC);
-	if (ascw) {
-		ascw->os = os;
-		ascw->ns = ns;
-		ascw->flags = flags;
-		ascw->w.cb = w_after_state_ch;
-		ascw->device = device;
-		ascw->done = done;
-		ascw->state_change = state_change;
-		drbd_queue_work(&connection->sender_work,
-				&ascw->w);
-	} else {
-		drbd_err(device, "Could not kmalloc an ascw\n");
-	}
+	kref_get(&resource->kref);
+	schedule_work(&resource->twopc_work);
+}
 
+enum drbd_state_rv
+nested_twopc_request(struct drbd_resource *resource, struct twopc_request *request)
+{
+	u64 nodes_to_reach, reach_immediately;
+	enum drbd_packet cmd = request->cmd;
+	enum drbd_state_rv rv;
+	bool have_peers;
+
+	write_lock_irq(&resource->state_rwlock);
+	nodes_to_reach = request->nodes_to_reach;
+	reach_immediately = directly_connected_nodes(resource, NOW) & nodes_to_reach;
+	nodes_to_reach &= ~(reach_immediately | NODE_MASK(resource->res_opts.node_id));
+	request->nodes_to_reach = nodes_to_reach;
+	write_unlock_irq(&resource->state_rwlock);
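+	/* The forwarded request's nodes_to_reach no longer contains ourselves
+	 * or the peers we deliver to directly; every receiving peer repeats
+	 * this narrowing before forwarding further. */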
+
+	rv = __cluster_wide_request(resource, request, reach_immediately);
+	have_peers = rv == SS_CW_SUCCESS;
+	if (cmd == P_TWOPC_PREPARE || cmd == P_TWOPC_PREP_RSZ) {
+		if (rv < SS_SUCCESS)
+			twopc_end_nested(resource, P_TWOPC_NO);
+		else if (!have_peers && cluster_wide_reply_ready(resource)) /* no nested nodes */
+			__nested_twopc_work(resource);
+	}
 	return rv;
 }
 
-static int w_after_state_ch(struct drbd_work *w, int unused)
+static bool has_up_to_date_peer_disks(struct drbd_device *device)
 {
-	struct after_state_chg_work *ascw =
-		container_of(w, struct after_state_chg_work, w);
-	struct drbd_device *device = ascw->device;
+	struct drbd_peer_device *peer_device;
 
-	after_state_ch(device, ascw->os, ascw->ns, ascw->flags, ascw->state_change);
-	forget_state_change(ascw->state_change);
-	if (ascw->flags & CS_WAIT_COMPLETE)
-		complete(ascw->done);
-	kfree(ascw);
+	for_each_peer_device(peer_device, device)
+		if (peer_device->disk_state[NEW] == D_UP_TO_DATE)
+			return true;
+	return false;
+}
 
-	return 0;
+static void disconnect_where_resync_target(struct drbd_device *device)
+{
+	struct drbd_peer_device *peer_device;
+
+	for_each_peer_device(peer_device, device)
+		if (is_sync_target_state(peer_device, NEW))
+			__change_cstate(peer_device->connection, C_TEAR_DOWN);
 }
 
-static void abw_start_sync(struct drbd_device *device, int rv)
+static bool do_change_role(struct change_context *context, enum change_phase phase)
 {
-	if (rv) {
-		drbd_err(device, "Writing the bitmap failed not starting resync.\n");
-		_drbd_request_state(device, NS(conn, C_CONNECTED), CS_VERBOSE);
-		return;
+	struct drbd_resource *resource = context->resource;
+	enum drbd_role role = context->val.role;
+	int flags = context->flags;
+	struct drbd_device *device;
+	int vnr;
+
+	resource->role[NEW] = role;
+
+	rcu_read_lock();
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		if (role == R_PRIMARY && (flags & CS_FP_LOCAL_UP_TO_DATE)) {
+			if (device->disk_state[NEW] < D_UP_TO_DATE &&
+			    device->disk_state[NEW] >= D_INCONSISTENT &&
+			    !has_up_to_date_peer_disks(device)) {
+				device->disk_state[NEW] = D_UP_TO_DATE;
+				/* adding it to the context so that it gets sent to the peers */
+				context->mask.disk |= disk_MASK;
+				context->val.disk |= D_UP_TO_DATE;
+				disconnect_where_resync_target(device);
+			}
+		}
+
+		if (role == R_PRIMARY && (flags & CS_FP_OUTDATE_PEERS)) {
+			struct drbd_peer_device *peer_device;
+			for_each_peer_device_rcu(peer_device, device) {
+				if (peer_device->disk_state[NEW] == D_UNKNOWN)
+					__change_peer_disk_state(peer_device, D_OUTDATED);
+			}
+		}
+
+		if (role == R_PRIMARY && phase == PH_COMMIT) {
+			u64 reachable_nodes = resource->twopc_reply.reachable_nodes;
+			struct drbd_peer_device *peer_device;
+
+			for_each_peer_device_rcu(peer_device, device) {
+				if (NODE_MASK(peer_device->node_id) & reachable_nodes &&
+				    peer_device->disk_state[NEW] == D_UNKNOWN &&
+				    want_bitmap(peer_device))
+					__change_peer_disk_state(peer_device, D_OUTDATED);
+			}
+		}
 	}
+	rcu_read_unlock();
 
-	switch (device->state.conn) {
-	case C_STARTING_SYNC_T:
-		_drbd_request_state(device, NS(conn, C_WF_SYNC_UUID), CS_VERBOSE);
-		break;
-	case C_STARTING_SYNC_S:
-		drbd_start_resync(device, C_SYNC_SOURCE);
-		break;
+	return phase != PH_PREPARE ||
+		context->resource->role[NOW] != context->val.role;
+}
+
+enum drbd_state_rv change_role(struct drbd_resource *resource,
+			       enum drbd_role role,
+			       enum chg_state_flags flags,
+			       const char *tag,
+			       const char **err_str)
+{
+	struct change_context role_context = {
+		.resource = resource,
+		.vnr = -1,
+		.mask = { { .role = role_MASK } },
+		.val = { { .role = role } },
+		.target_node_id = -1,
+		.flags = flags | CS_SERIALIZE,
+		.err_str = err_str,
+	};
+	enum drbd_state_rv rv;
+	bool got_state_sem = false;
+
+	if (role == R_SECONDARY) {
+		if (!(flags & CS_ALREADY_SERIALIZED)) {
+			down(&resource->state_sem);
+			got_state_sem = true;
+			role_context.flags |= CS_ALREADY_SERIALIZED;
+		}
+		role_context.change_local_state_last = true;
 	}
+	rv = change_cluster_wide_state(do_change_role, &role_context, tag);
+	if (got_state_sem)
+		up(&resource->state_sem);
+	return rv;
 }
 
-int drbd_bitmap_io_from_worker(struct drbd_device *device,
-		int (*io_fn)(struct drbd_device *, struct drbd_peer_device *),
-		char *why, enum bm_flag flags,
-		struct drbd_peer_device *peer_device)
+void __change_io_susp_user(struct drbd_resource *resource, bool value)
 {
-	int rv;
+	resource->susp_user[NEW] = value;
+}
 
-	D_ASSERT(device, current == first_peer_device(device)->connection->worker.task);
+enum drbd_state_rv change_io_susp_user(struct drbd_resource *resource,
+				       bool value,
+				       enum chg_state_flags flags)
+{
+	unsigned long irq_flags;
 
-	/* open coded non-blocking drbd_suspend_io(device); */
-	atomic_inc(&device->suspend_cnt);
+	begin_state_change(resource, &irq_flags, flags);
+	__change_io_susp_user(resource, value);
+	return end_state_change(resource, &irq_flags, value ? "suspend-io" : "resume-io");
+}
 
-	drbd_bm_lock(device, why, flags);
-	rv = io_fn(device, peer_device);
-	drbd_bm_unlock(device);
+void __change_io_susp_no_data(struct drbd_resource *resource, bool value)
+{
+	resource->susp_nod[NEW] = value;
+}
 
-	drbd_resume_io(device);
+void __change_io_susp_fencing(struct drbd_connection *connection, bool value)
+{
+	connection->susp_fen[NEW] = value;
+}
 
-	return rv;
+void __change_io_susp_quorum(struct drbd_resource *resource, bool value)
+{
+	resource->susp_quorum[NEW] = value;
 }
 
-int notify_resource_state_change(struct sk_buff *skb,
-				  unsigned int seq,
-				  void *state_change,
-				  enum drbd_notification_type type)
+void __change_disk_state(struct drbd_device *device, enum drbd_disk_state disk_state)
 {
-	struct drbd_resource_state_change *resource_state_change = state_change;
-	struct drbd_resource *resource = resource_state_change->resource;
-	struct resource_info resource_info = {
-		.res_role = resource_state_change->role[NEW],
-		.res_susp = resource_state_change->susp[NEW],
-		.res_susp_nod = resource_state_change->susp_nod[NEW],
-		.res_susp_fen = resource_state_change->susp_fen[NEW],
-	};
+	device->disk_state[NEW] = disk_state;
+}
+
+void __downgrade_disk_states(struct drbd_resource *resource, enum drbd_disk_state disk_state)
+{
+	struct drbd_device *device;
+	int vnr;
 
-	return notify_resource_state(skb, seq, resource, &resource_info, type);
+	rcu_read_lock();
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		if (device->disk_state[NEW] > disk_state)
+			__change_disk_state(device, disk_state);
+	}
+	rcu_read_unlock();
 }
 
-int notify_connection_state_change(struct sk_buff *skb,
-				    unsigned int seq,
-				    void *state_change,
-				    enum drbd_notification_type type)
+void __outdate_myself(struct drbd_resource *resource)
 {
-	struct drbd_connection_state_change *p = state_change;
-	struct drbd_connection *connection = p->connection;
-	struct connection_info connection_info = {
-		.conn_connection_state = p->cstate[NEW],
-		.conn_role = p->peer_role[NEW],
-	};
+	struct drbd_device *device;
+	int vnr;
 
-	return notify_connection_state(skb, seq, connection, &connection_info, type);
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		if (device->disk_state[NOW] > D_OUTDATED)
+			__change_disk_state(device, D_OUTDATED);
+	}
 }
 
-int notify_device_state_change(struct sk_buff *skb,
-				unsigned int seq,
-				void *state_change,
-				enum drbd_notification_type type)
+static bool device_has_connected_peer_devices(struct drbd_device *device)
 {
-	struct drbd_device_state_change *device_state_change = state_change;
-	struct drbd_device *device = device_state_change->device;
-	struct device_info device_info = {
-		.dev_disk_state = device_state_change->disk_state[NEW],
-	};
+	struct drbd_peer_device *peer_device;
 
-	return notify_device_state(skb, seq, device, &device_info, type);
+	for_each_peer_device(peer_device, device)
+		if (peer_device->repl_state[NOW] >= L_ESTABLISHED)
+			return true;
+	return false;
 }
 
-int notify_peer_device_state_change(struct sk_buff *skb,
-				     unsigned int seq,
-				     void *state_change,
-				     enum drbd_notification_type type)
+static bool device_has_peer_devices_with_disk(struct drbd_device *device)
 {
-	struct drbd_peer_device_state_change *p = state_change;
-	struct drbd_peer_device *peer_device = p->peer_device;
-	struct peer_device_info peer_device_info = {
-		.peer_repl_state = p->repl_state[NEW],
-		.peer_disk_state = p->disk_state[NEW],
-		.peer_resync_susp_user = p->resync_susp_user[NEW],
-		.peer_resync_susp_peer = p->resync_susp_peer[NEW],
-		.peer_resync_susp_dependency = p->resync_susp_dependency[NEW],
-	};
+	struct drbd_peer_device *peer_device;
+	bool rv = false;
+
+	for_each_peer_device(peer_device, device) {
+		if (peer_device->connection->cstate[NOW] == C_CONNECTED) {
+			/* We expect to receive up-to-date UUIDs soon. To avoid
+			   a race in receive_state, "clear" the UUIDs while
+			   holding state_rwlock, i.e. atomically with the state
+			   change. */
+			clear_bit(UUIDS_RECEIVED, &peer_device->flags);
+			if (peer_device->disk_state[NOW] > D_DISKLESS)
+				rv = true;
+		}
+	}
 
-	return notify_peer_device_state(skb, seq, peer_device, &peer_device_info, type);
+	return rv;
 }
 
-static void broadcast_state_change(struct drbd_state_change *state_change)
+static void restore_outdated_in_pdsk(struct drbd_device *device)
 {
-	struct drbd_resource_state_change *resource_state_change = &state_change->resource[0];
-	bool resource_state_has_changed;
-	unsigned int n_device, n_connection, n_peer_device, n_peer_devices;
-	int (*last_func)(struct sk_buff *, unsigned int,
-		void *, enum drbd_notification_type) = NULL;
-	void *last_arg = NULL;
+	struct drbd_peer_device *peer_device;
 
-#define HAS_CHANGED(state) ((state)[OLD] != (state)[NEW])
-#define FINAL_STATE_CHANGE(type) \
-	({ if (last_func) \
-		last_func(NULL, 0, last_arg, type); \
-	})
-#define REMEMBER_STATE_CHANGE(func, arg, type) \
-	({ FINAL_STATE_CHANGE(type | NOTIFY_CONTINUES); \
-	   last_func = func; \
-	   last_arg = arg; \
-	 })
+	if (!get_ldev_if_state(device, D_ATTACHING))
+		return;
 
-	mutex_lock(&notification_mutex);
+	for_each_peer_device(peer_device, device) {
+		int node_id = peer_device->connection->peer_node_id;
+		struct drbd_peer_md *peer_md = &device->ldev->md.peers[node_id];
 
-	resource_state_has_changed =
-	    HAS_CHANGED(resource_state_change->role) ||
-	    HAS_CHANGED(resource_state_change->susp) ||
-	    HAS_CHANGED(resource_state_change->susp_nod) ||
-	    HAS_CHANGED(resource_state_change->susp_fen);
+		if ((peer_md->flags & MDF_PEER_OUTDATED) &&
+		    peer_device->disk_state[NEW] == D_UNKNOWN)
+			__change_peer_disk_state(peer_device, D_OUTDATED);
+	}
 
-	if (resource_state_has_changed)
-		REMEMBER_STATE_CHANGE(notify_resource_state_change,
-				      resource_state_change, NOTIFY_CHANGE);
+	put_ldev(device);
+}
 
-	for (n_connection = 0; n_connection < state_change->n_connections; n_connection++) {
-		struct drbd_connection_state_change *connection_state_change =
-				&state_change->connections[n_connection];
+static bool do_twopc_after_lost_peer(struct change_context *context, enum change_phase phase)
+{
+	struct drbd_resource *resource = context->resource;
+	struct twopc_reply *reply = &resource->twopc_reply;
+	u64 directly_reachable = directly_connected_nodes(resource, NEW) |
+		NODE_MASK(resource->res_opts.node_id);
+	bool pri_incapable = reply->primary_nodes && !reply->weak_nodes; /* TWOPC_PRI_INCAPABLE */
+
+	if (phase == PH_COMMIT && (reply->primary_nodes & ~directly_reachable && !pri_incapable)) {
+		__outdate_myself(resource);
+	} else {
+		struct drbd_device *device;
+		int vnr;
 
-		if (HAS_CHANGED(connection_state_change->peer_role) ||
-		    HAS_CHANGED(connection_state_change->cstate))
-			REMEMBER_STATE_CHANGE(notify_connection_state_change,
-					      connection_state_change, NOTIFY_CHANGE);
+		idr_for_each_entry(&resource->devices, device, vnr) {
+			if (device->disk_state[NOW] == D_CONSISTENT &&
+			    may_return_to_up_to_date(device, NOW))
+				__change_disk_state(device, D_UP_TO_DATE);
+		}
 	}
 
-	for (n_device = 0; n_device < state_change->n_devices; n_device++) {
-		struct drbd_device_state_change *device_state_change =
-			&state_change->devices[n_device];
+	return phase != PH_PREPARE || reply->reachable_nodes != NODE_MASK(resource->res_opts.node_id);
+}
+
+static enum drbd_state_rv twopc_after_lost_peer(struct drbd_resource *resource,
+						enum chg_state_flags flags)
+{
+	struct change_context context = {
+		.resource = resource,
+		.vnr = -1,
+		.mask = { },
+		.val = { },
+		.target_node_id = -1,
+		.flags = flags | (resource->res_opts.quorum != QOU_OFF ? CS_FORCE_RECALC : 0),
+		.change_local_state_last = false,
+	};
+
+	/* The other nodes get a request for an empty state change, i.e. they
+	   will agree to it. At commit time we know where to go from
+	   D_CONSISTENT, since we received the primary mask. */
+	return change_cluster_wide_state(do_twopc_after_lost_peer, &context, "lost-peer");
+}
+
+void drbd_empty_twopc_work_fn(struct work_struct *work)
+{
+	struct drbd_resource *resource = container_of(work, struct drbd_resource, empty_twopc);
 
-		if (HAS_CHANGED(device_state_change->disk_state))
-			REMEMBER_STATE_CHANGE(notify_device_state_change,
-					      device_state_change, NOTIFY_CHANGE);
-	}
+	twopc_after_lost_peer(resource, CS_VERBOSE);
 
-	n_peer_devices = state_change->n_devices * state_change->n_connections;
-	for (n_peer_device = 0; n_peer_device < n_peer_devices; n_peer_device++) {
-		struct drbd_peer_device_state_change *p =
-			&state_change->peer_devices[n_peer_device];
+	clear_bit(TRY_BECOME_UP_TO_DATE_PENDING, &resource->flags);
+	wake_up_all(&resource->state_wait);
 
-		if (HAS_CHANGED(p->disk_state) ||
-		    HAS_CHANGED(p->repl_state) ||
-		    HAS_CHANGED(p->resync_susp_user) ||
-		    HAS_CHANGED(p->resync_susp_peer) ||
-		    HAS_CHANGED(p->resync_susp_dependency))
-			REMEMBER_STATE_CHANGE(notify_peer_device_state_change,
-					      p, NOTIFY_CHANGE);
+	kref_put(&resource->kref, drbd_destroy_resource);
+}
+
+static bool do_change_disk_state(struct change_context *context, enum change_phase phase)
+{
+	struct drbd_device *device =
+		container_of(context, struct change_disk_state_context, context)->device;
+	bool cluster_wide_state_change = false;
+
+	if (device->disk_state[NOW] == D_ATTACHING &&
+	    context->val.disk == D_NEGOTIATING) {
+		if (device_has_peer_devices_with_disk(device)) {
+			cluster_wide_state_change =
+				supports_two_phase_commit(device->resource);
+		} else {
+			/* very last part of attach */
+			/* ldev_safe: D_ATTACHING->D_NEGOTIATING, state_rwlock held, ldev exists */
+			context->val.disk = disk_state_from_md(device);
+			restore_outdated_in_pdsk(device);
+		}
+	} else if (device->disk_state[NOW] != D_DETACHING &&
+		   context->val.disk == D_DETACHING &&
+		   device_has_connected_peer_devices(device)) {
+		cluster_wide_state_change = true;
 	}
+	__change_disk_state(device, context->val.disk);
+	return phase != PH_PREPARE || cluster_wide_state_change;
+}
 
-	FINAL_STATE_CHANGE(NOTIFY_CHANGE);
-	mutex_unlock(&notification_mutex);
+enum drbd_state_rv change_disk_state(struct drbd_device *device,
+				     enum drbd_disk_state disk_state,
+				     enum chg_state_flags flags,
+				     const char *tag,
+				     const char **err_str)
+{
+	struct change_disk_state_context disk_state_context = {
+		.context = {
+			.resource = device->resource,
+			.vnr = device->vnr,
+			.mask = { { .disk = disk_MASK } },
+			.val = { { .disk = disk_state } },
+			.target_node_id = -1,
+			.flags = flags,
+			.change_local_state_last = true,
+			.err_str = err_str,
+		},
+		.device = device,
+	};
 
-#undef HAS_CHANGED
-#undef FINAL_STATE_CHANGE
-#undef REMEMBER_STATE_CHANGE
+	return change_cluster_wide_state(do_change_disk_state,
+					 &disk_state_context.context, tag);
 }
 
-/* takes old and new peer disk state */
-static bool lost_contact_to_peer_data(enum drbd_disk_state os, enum drbd_disk_state ns)
+void __change_cstate(struct drbd_connection *connection, enum drbd_conn_state cstate)
 {
-	if ((os >= D_INCONSISTENT && os != D_UNKNOWN && os != D_OUTDATED)
-	&&  (ns < D_INCONSISTENT || ns == D_UNKNOWN || ns == D_OUTDATED))
-		return true;
+	if (cstate == C_DISCONNECTING)
+		set_bit(DISCONNECT_EXPECTED, &connection->flags);
 
-	/* Scenario, starting with normal operation
-	 * Connected Primary/Secondary UpToDate/UpToDate
-	 * NetworkFailure Primary/Unknown UpToDate/DUnknown (frozen)
-	 * ...
-	 * Connected Primary/Secondary UpToDate/Diskless (resumed; needs to bump uuid!)
-	 */
-	if (os == D_UNKNOWN
-	&&  (ns == D_DISKLESS || ns == D_FAILED || ns == D_OUTDATED))
-		return true;
+	connection->cstate[NEW] = cstate;
+	if (cstate < C_CONNECTED) {
+		struct drbd_peer_device *peer_device;
+		int vnr;
 
-	return false;
+		rcu_read_lock();
+		idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
+			__change_repl_state(peer_device, L_OFF);
+		rcu_read_unlock();
+	}
 }
 
-/**
- * after_state_ch() - Perform after state change actions that may sleep
- * @device:	DRBD device.
- * @os:		old state.
- * @ns:		new state.
- * @flags:	Flags
- * @state_change: state change to broadcast
- */
-static void after_state_ch(struct drbd_device *device, union drbd_state os,
-			   union drbd_state ns, enum chg_state_flags flags,
-			   struct drbd_state_change *state_change)
+static bool connection_has_connected_peer_devices(struct drbd_connection *connection)
 {
-	struct drbd_resource *resource = device->resource;
-	struct drbd_peer_device *peer_device = first_peer_device(device);
-	struct drbd_connection *connection = peer_device ? peer_device->connection : NULL;
-	struct sib_info sib;
+	struct drbd_peer_device *peer_device;
+	int vnr;
 
-	broadcast_state_change(state_change);
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		if (peer_device->repl_state[NOW] >= L_ESTABLISHED)
+			return true;
+	}
+	return false;
+}
 
-	sib.sib_reason = SIB_STATE_CHANGE;
-	sib.os = os;
-	sib.ns = ns;
+enum outdate_what { OUTDATE_NOTHING, OUTDATE_DISKS, OUTDATE_PEER_DISKS };
 
-	if ((os.disk != D_UP_TO_DATE || os.pdsk != D_UP_TO_DATE)
-	&&  (ns.disk == D_UP_TO_DATE && ns.pdsk == D_UP_TO_DATE)) {
-		clear_bit(CRASHED_PRIMARY, &device->flags);
-		if (device->p_uuid)
-			device->p_uuid[UI_FLAGS] &= ~((u64)2);
+static enum outdate_what outdate_on_disconnect(struct drbd_connection *connection)
+{
+	struct drbd_resource *resource = connection->resource;
+
+	if (connection->cstate[NOW] == C_CONNECTED &&
+	    (connection->fencing_policy >= FP_RESOURCE ||
+	     connection->resource->res_opts.quorum != QOU_OFF) &&
+	    resource->role[NOW] != connection->peer_role[NOW]) {
+		/* primary politely disconnects from secondary,
+		 * tells peer to please outdate itself */
+		if (resource->role[NOW] == R_PRIMARY)
+			return OUTDATE_PEER_DISKS;
+
+		/* secondary politely disconnects from primary,
+		 * proposes to outdate itself. */
+		if (connection->peer_role[NOW] == R_PRIMARY)
+			return OUTDATE_DISKS;
 	}
+	return OUTDATE_NOTHING;
+}
 
-	/* Inform userspace about the change... */
-	drbd_bcast_event(device, &sib);
+static void __change_cstate_and_outdate(struct drbd_connection *connection,
+					enum drbd_conn_state cstate,
+					enum outdate_what outdate_what)
+{
+	__change_cstate(connection, cstate);
+	switch (outdate_what) {
+	case OUTDATE_DISKS:
+		__downgrade_disk_states(connection->resource, D_OUTDATED);
+		break;
+	case OUTDATE_PEER_DISKS:
+		__downgrade_peer_disk_states(connection, D_OUTDATED);
+		break;
+	case OUTDATE_NOTHING:
+		break;
+	}
+}
 
-	if (!(os.role == R_PRIMARY && os.disk < D_UP_TO_DATE && os.pdsk < D_UP_TO_DATE) &&
-	    (ns.role == R_PRIMARY && ns.disk < D_UP_TO_DATE && ns.pdsk < D_UP_TO_DATE))
-		drbd_khelper(device, "pri-on-incon-degr");
+void apply_connect(struct drbd_connection *connection, bool commit)
+{
+	struct drbd_peer_device *peer_device;
+	int vnr;
 
-	/* Here we have the actions that are performed after a
-	   state change. This function might sleep */
+	if (!commit || connection->cstate[NEW] != C_CONNECTED)
+		return;
 
-	if (ns.susp_nod) {
-		enum drbd_req_event what = NOTHING;
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		struct drbd_device *device = peer_device->device;
+		union drbd_state s = peer_device->connect_state;
 
-		spin_lock_irq(&device->resource->req_lock);
-		if (os.conn < C_CONNECTED && conn_lowest_conn(connection) >= C_CONNECTED)
-			what = RESEND;
+		if (s.disk != D_MASK)
+			__change_disk_state(device, s.disk);
+		if (device->disk_state[NOW] != D_NEGOTIATING)
+			__change_repl_state(peer_device, s.conn);
+		__change_peer_disk_state(peer_device, s.pdsk);
+		__change_resync_susp_peer(peer_device, s.peer_isp);
 
-		if ((os.disk == D_ATTACHING || os.disk == D_NEGOTIATING) &&
-		    conn_lowest_disk(connection) == D_UP_TO_DATE)
-			what = RESTART_FROZEN_DISK_IO;
+		if (s.conn == L_OFF)
+			__change_cstate(connection, C_DISCONNECTING);
 
-		if (resource->susp_nod && what != NOTHING) {
-			_tl_restart(connection, what);
-			_conn_request_state(connection,
-					    (union drbd_state) { { .susp_nod = 1 } },
-					    (union drbd_state) { { .susp_nod = 0 } },
-					    CS_VERBOSE);
-		}
-		spin_unlock_irq(&device->resource->req_lock);
+		if (commit)
+			clear_bit(DISCARD_MY_DATA, &peer_device->flags);
 	}
+}
 
-	if (ns.susp_fen) {
-		spin_lock_irq(&device->resource->req_lock);
-		if (resource->susp_fen && conn_lowest_conn(connection) >= C_CONNECTED) {
-			/* case2: The connection was established again: */
-			struct drbd_peer_device *peer_device;
-			int vnr;
-
-			rcu_read_lock();
-			idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
-				clear_bit(NEW_CUR_UUID, &peer_device->device->flags);
-			rcu_read_unlock();
-
-			/* We should actively create a new uuid, _before_
-			 * we resume/resent, if the peer is diskless
-			 * (recovery from a multiple error scenario).
-			 * Currently, this happens with a slight delay
-			 * below when checking lost_contact_to_peer_data() ...
-			 */
-			_tl_restart(connection, RESEND);
-			_conn_request_state(connection,
-					    (union drbd_state) { { .susp_fen = 1 } },
-					    (union drbd_state) { { .susp_fen = 0 } },
-					    CS_VERBOSE);
-		}
-		spin_unlock_irq(&device->resource->req_lock);
-	}
-
-	/* Became sync source.  With protocol >= 96, we still need to send out
-	 * the sync uuid now. Need to do that before any drbd_send_state, or
-	 * the other side may go "paused sync" before receiving the sync uuids,
-	 * which is unexpected. */
-	if ((os.conn != C_SYNC_SOURCE && os.conn != C_PAUSED_SYNC_S) &&
-	    (ns.conn == C_SYNC_SOURCE || ns.conn == C_PAUSED_SYNC_S) &&
-	    connection->agreed_pro_version >= 96 && get_ldev(device)) {
-		drbd_gen_and_send_sync_uuid(peer_device);
-		put_ldev(device);
-	}
+struct change_cstate_context {
+	struct change_context context;
+	struct drbd_connection *connection;
+	enum outdate_what outdate_what;
+};
 
-	/* Do not change the order of the if above and the two below... */
-	if (os.pdsk == D_DISKLESS &&
-	    ns.pdsk > D_DISKLESS && ns.pdsk != D_UNKNOWN) {      /* attach on the peer */
-		/* we probably will start a resync soon.
-		 * make sure those things are properly reset. */
-		device->rs_total = 0;
-		device->rs_failed = 0;
-		atomic_set(&device->rs_pending_cnt, 0);
-		drbd_rs_cancel_all(device);
-
-		drbd_send_uuids(peer_device);
-		drbd_send_state(peer_device, ns);
-	}
-	/* No point in queuing send_bitmap if we don't have a connection
-	 * anymore, so check also the _current_ state, not only the new state
-	 * at the time this work was queued. */
-	if (os.conn != C_WF_BITMAP_S && ns.conn == C_WF_BITMAP_S &&
-	    device->state.conn == C_WF_BITMAP_S)
-		drbd_queue_bitmap_io(device, &drbd_send_bitmap, NULL,
-				"send_bitmap (WFBitMapS)",
-				BM_LOCKED_TEST_ALLOWED, peer_device);
-
-	/* Lost contact to peer's copy of the data */
-	if (lost_contact_to_peer_data(os.pdsk, ns.pdsk)) {
-		if (get_ldev(device)) {
-			if ((ns.role == R_PRIMARY || ns.peer == R_PRIMARY) &&
-			    device->ldev->md.uuid[UI_BITMAP] == 0 && ns.disk >= D_UP_TO_DATE) {
-				if (drbd_suspended(device)) {
-					set_bit(NEW_CUR_UUID, &device->flags);
-				} else {
-					drbd_uuid_new_current(device);
-					drbd_send_uuids(peer_device);
-				}
+static bool do_change_cstate(struct change_context *context, enum change_phase phase)
+{
+	struct change_cstate_context *cstate_context =
+		container_of(context, struct change_cstate_context, context);
+	struct drbd_connection *connection = cstate_context->connection;
+	struct drbd_resource *resource = context->resource;
+	struct twopc_reply *reply = &resource->twopc_reply;
+
+	if (phase == PH_PREPARE) {
+		cstate_context->outdate_what = OUTDATE_NOTHING;
+		if (context->val.conn == C_DISCONNECTING && !(context->flags & CS_HARD)) {
+			cstate_context->outdate_what =
+				outdate_on_disconnect(connection);
+			switch (cstate_context->outdate_what) {
+			case OUTDATE_DISKS:
+				context->mask.disk = disk_MASK;
+				context->val.disk = D_OUTDATED;
+				break;
+			case OUTDATE_PEER_DISKS:
+				context->mask.pdsk = pdsk_MASK;
+				context->val.pdsk = D_OUTDATED;
+				break;
+			case OUTDATE_NOTHING:
+				break;
 			}
-			put_ldev(device);
 		}
 	}
+	if ((context->val.conn == C_CONNECTED && connection->cstate[NEW] == C_CONNECTING) ||
+	    context->val.conn != C_CONNECTED)
+		__change_cstate_and_outdate(connection,
+					    context->val.conn,
+					    cstate_context->outdate_what);
+
+	if (context->val.conn == C_CONNECTED &&
+	    connection->agreed_pro_version >= 117)
+		apply_connect(connection, phase == PH_COMMIT);
+
+	if (phase == PH_COMMIT) {
+		u64 directly_reachable = directly_connected_nodes(resource, NEW) |
+			NODE_MASK(resource->res_opts.node_id);
+
+		if (reply->primary_nodes & ~directly_reachable)
+			__outdate_myself(resource);
+	}
 
-	if (ns.pdsk < D_INCONSISTENT && get_ldev(device)) {
-		if (os.peer != R_PRIMARY && ns.peer == R_PRIMARY &&
-		    device->ldev->md.uuid[UI_BITMAP] == 0 && ns.disk >= D_UP_TO_DATE) {
-			drbd_uuid_new_current(device);
-			drbd_send_uuids(peer_device);
-		}
-		/* D_DISKLESS Peer becomes secondary */
-		if (os.peer == R_PRIMARY && ns.peer == R_SECONDARY)
-			/* We may still be Primary ourselves.
-			 * No harm done if the bitmap still changes,
-			 * redirtied pages will follow later. */
-			drbd_bitmap_io_from_worker(device, &drbd_bm_write,
-				"demote diskless peer", BM_LOCKED_SET_ALLOWED, peer_device);
-		put_ldev(device);
+	if (context->val.conn == C_CONNECTED && connection->peer_role[NOW] == R_UNKNOWN) {
+		enum drbd_role target_role =
+			(reply->primary_nodes & NODE_MASK(context->target_node_id)) ?
+			R_PRIMARY : R_SECONDARY;
+
+		__change_peer_role(connection, target_role);
 	}
 
-	/* Write out all changed bits on demote.
-	 * Though, no need to da that just yet
-	 * if there is a resync going on still */
-	if (os.role == R_PRIMARY && ns.role == R_SECONDARY &&
-		device->state.conn <= C_CONNECTED && get_ldev(device)) {
-		/* No changes to the bitmap expected this time, so assert that,
-		 * even though no harm was done if it did change. */
-		drbd_bitmap_io_from_worker(device, &drbd_bm_write,
-				"demote", BM_LOCKED_TEST_ALLOWED, peer_device);
-		put_ldev(device);
+	return phase != PH_PREPARE ||
+	       context->val.conn == C_CONNECTED ||
+	       (context->val.conn == C_DISCONNECTING &&
+		connection_has_connected_peer_devices(connection));
+}
+
+/**
+ * change_cstate_tag() - change the connection state of a connection
+ * @connection: DRBD connection.
+ * @cstate: The connection state to change to.
+ * @flags: State change flags.
+ * @tag: State change tag to print in status messages.
+ * @err_str: Pointer to save the error string to.
+ *
+ * When disconnecting from a peer, we may also need to outdate the local or
+ * peer disks depending on the fencing policy.  This cannot easily be split
+ * into two state changes.
+ */
+enum drbd_state_rv change_cstate_tag(struct drbd_connection *connection,
+				    enum drbd_conn_state cstate,
+				    enum chg_state_flags flags,
+				    const char *tag,
+				    const char **err_str)
+{
+	struct change_cstate_context cstate_context = {
+		.context = {
+			.resource = connection->resource,
+			.vnr = -1,
+			.mask = { { .conn = conn_MASK } },
+			.val = { { .conn = cstate } },
+			.target_node_id = connection->peer_node_id,
+			.flags = flags,
+			.change_local_state_last = true,
+			.err_str = err_str,
+		},
+		.connection = connection,
+	};
+
+	if (cstate == C_CONNECTED) {
+		cstate_context.context.mask.role = role_MASK;
+		cstate_context.context.val.role = connection->resource->role[NOW];
 	}
 
-	/* Last part of the attaching process ... */
-	if (ns.conn >= C_CONNECTED &&
-	    os.disk == D_ATTACHING && ns.disk == D_NEGOTIATING) {
-		drbd_send_sizes(peer_device, 0, 0);  /* to start sync... */
-		drbd_send_uuids(peer_device);
-		drbd_send_state(peer_device, ns);
-	}
-
-	/* We want to pause/continue resync, tell peer. */
-	if (ns.conn >= C_CONNECTED &&
-	     ((os.aftr_isp != ns.aftr_isp) ||
-	      (os.user_isp != ns.user_isp)))
-		drbd_send_state(peer_device, ns);
-
-	/* In case one of the isp bits got set, suspend other devices. */
-	if ((!os.aftr_isp && !os.peer_isp && !os.user_isp) &&
-	    (ns.aftr_isp || ns.peer_isp || ns.user_isp))
-		suspend_other_sg(device);
-
-	/* Make sure the peer gets informed about eventual state
-	   changes (ISP bits) while we were in WFReportParams. */
-	if (os.conn == C_WF_REPORT_PARAMS && ns.conn >= C_CONNECTED)
-		drbd_send_state(peer_device, ns);
-
-	if (os.conn != C_AHEAD && ns.conn == C_AHEAD)
-		drbd_send_state(peer_device, ns);
-
-	/* We are in the progress to start a full sync... */
-	if ((os.conn != C_STARTING_SYNC_T && ns.conn == C_STARTING_SYNC_T) ||
-	    (os.conn != C_STARTING_SYNC_S && ns.conn == C_STARTING_SYNC_S))
-		/* no other bitmap changes expected during this phase */
-		drbd_queue_bitmap_io(device,
-			&drbd_bmio_set_n_write, &abw_start_sync,
-			"set_n_write from StartingSync", BM_LOCKED_TEST_ALLOWED,
-			peer_device);
-
-	/* first half of local IO error, failure to attach,
-	 * or administrative detach */
-	if (os.disk != D_FAILED && ns.disk == D_FAILED) {
-		enum drbd_io_error_p eh = EP_PASS_ON;
-		int was_io_error = 0;
-		/* corresponding get_ldev was in _drbd_set_state, to serialize
-		 * our cleanup here with the transition to D_DISKLESS.
-		 * But is is still not save to dreference ldev here, since
-		 * we might come from an failed Attach before ldev was set. */
-		if (device->ldev) {
-			rcu_read_lock();
-			eh = rcu_dereference(device->ldev->disk_conf)->on_io_error;
-			rcu_read_unlock();
+	/*
+	 * Hard connection state changes like a protocol error or forced
+	 * disconnect may occur while we are holding resource->state_sem.  In
+	 * that case, omit CS_SERIALIZE so that we don't deadlock trying to
+	 * grab state_sem again.
+	 */
+	if (!(flags & CS_HARD))
+		cstate_context.context.flags |= CS_SERIALIZE;
 
-			was_io_error = test_and_clear_bit(WAS_IO_ERROR, &device->flags);
-
-			/* Intentionally call this handler first, before drbd_send_state().
-			 * See: 2932204 drbd: call local-io-error handler early
-			 * People may chose to hard-reset the box from this handler.
-			 * It is useful if this looks like a "regular node crash". */
-			if (was_io_error && eh == EP_CALL_HELPER)
-				drbd_khelper(device, "local-io-error");
-
-			/* Immediately allow completion of all application IO,
-			 * that waits for completion from the local disk,
-			 * if this was a force-detach due to disk_timeout
-			 * or administrator request (drbdsetup detach --force).
-			 * Do NOT abort otherwise.
-			 * Aborting local requests may cause serious problems,
-			 * if requests are completed to upper layers already,
-			 * and then later the already submitted local bio completes.
-			 * This can cause DMA into former bio pages that meanwhile
-			 * have been re-used for other things.
-			 * So aborting local requests may cause crashes,
-			 * or even worse, silent data corruption.
-			 */
-			if (test_and_clear_bit(FORCE_DETACH, &device->flags))
-				tl_abort_disk_io(device);
+	return change_cluster_wide_state(do_change_cstate, &cstate_context.context, tag);
+}
 
-			/* current state still has to be D_FAILED,
-			 * there is only one way out: to D_DISKLESS,
-			 * and that may only happen after our put_ldev below. */
-			if (device->state.disk != D_FAILED)
-				drbd_err(device,
-					"ASSERT FAILED: disk is %s during detach\n",
-					drbd_disk_str(device->state.disk));
+void __change_peer_role(struct drbd_connection *connection, enum drbd_role peer_role)
+{
+	connection->peer_role[NEW] = peer_role;
+}
 
-			if (ns.conn >= C_CONNECTED)
-				drbd_send_state(peer_device, ns);
+void __change_repl_state(struct drbd_peer_device *peer_device, enum drbd_repl_state repl_state)
+{
+	peer_device->repl_state[NEW] = repl_state;
+	if (repl_state > L_OFF)
+		peer_device->connection->cstate[NEW] = C_CONNECTED;
+}
 
-			drbd_rs_cancel_all(device);
+struct change_repl_context {
+	struct change_context context;
+	struct drbd_peer_device *peer_device;
+};
 
-			/* In case we want to get something to stable storage still,
-			 * this may be the last chance.
-			 * Following put_ldev may transition to D_DISKLESS. */
-			drbd_md_sync(device);
-		}
-		put_ldev(device);
-	}
+static bool do_change_repl_state(struct change_context *context, enum change_phase phase)
+{
+	struct change_repl_context *repl_context =
+		container_of(context, struct change_repl_context, context);
+	struct drbd_peer_device *peer_device = repl_context->peer_device;
+	enum drbd_repl_state *repl_state = peer_device->repl_state;
+	enum drbd_repl_state new_repl_state = context->val.conn;
+	bool cluster_wide = context->flags & CS_CLUSTER_WIDE;
+
+	__change_repl_state(peer_device, new_repl_state);
+
+	return phase != PH_PREPARE ||
+		((repl_state[NOW] >= L_ESTABLISHED &&
+		  (new_repl_state == L_STARTING_SYNC_S || new_repl_state == L_STARTING_SYNC_T)) ||
+		 (repl_state[NOW] == L_ESTABLISHED &&
+		  (new_repl_state == L_VERIFY_S || new_repl_state == L_OFF)) ||
+		 (repl_state[NOW] == L_ESTABLISHED && cluster_wide &&
+		  (new_repl_state == L_WF_BITMAP_S || new_repl_state == L_WF_BITMAP_T)));
+}
 
-	/* second half of local IO error, failure to attach,
-	 * or administrative detach,
-	 * after local_cnt references have reached zero again */
-	if (os.disk != D_DISKLESS && ns.disk == D_DISKLESS) {
-		/* We must still be diskless,
-		 * re-attach has to be serialized with this! */
-		if (device->state.disk != D_DISKLESS)
-			drbd_err(device,
-				 "ASSERT FAILED: disk is %s while going diskless\n",
-				 drbd_disk_str(device->state.disk));
-
-		if (ns.conn >= C_CONNECTED)
-			drbd_send_state(peer_device, ns);
-		/* corresponding get_ldev in __drbd_set_state
-		 * this may finally trigger drbd_ldev_destroy. */
-		put_ldev(device);
-	}
+enum drbd_state_rv change_repl_state(struct drbd_peer_device *peer_device,
+				     enum drbd_repl_state new_repl_state,
+				     enum chg_state_flags flags,
+				     const char *tag)
+{
+	struct change_repl_context repl_context = {
+		.context = {
+			.resource = peer_device->device->resource,
+			.vnr = peer_device->device->vnr,
+			.mask = { { .conn = conn_MASK } },
+			.val = { { .conn = new_repl_state } },
+			.target_node_id = peer_device->node_id,
+			.flags = flags
+		},
+		.peer_device = peer_device
+	};
 
-	/* Notify peer that I had a local IO error, and did not detached.. */
-	if (os.disk == D_UP_TO_DATE && ns.disk == D_INCONSISTENT && ns.conn >= C_CONNECTED)
-		drbd_send_state(peer_device, ns);
-
-	/* Disks got bigger while they were detached */
-	if (ns.disk > D_NEGOTIATING && ns.pdsk > D_NEGOTIATING &&
-	    test_and_clear_bit(RESYNC_AFTER_NEG, &device->flags)) {
-		if (ns.conn == C_CONNECTED)
-			resync_after_online_grow(device);
-	}
-
-	/* A resync finished or aborted, wake paused devices... */
-	if ((os.conn > C_CONNECTED && ns.conn <= C_CONNECTED) ||
-	    (os.peer_isp && !ns.peer_isp) ||
-	    (os.user_isp && !ns.user_isp))
-		resume_next_sg(device);
-
-	/* sync target done with resync.  Explicitly notify peer, even though
-	 * it should (at least for non-empty resyncs) already know itself. */
-	if (os.disk < D_UP_TO_DATE && os.conn >= C_SYNC_SOURCE && ns.conn == C_CONNECTED)
-		drbd_send_state(peer_device, ns);
-
-	/* Verify finished, or reached stop sector.  Peer did not know about
-	 * the stop sector, and we may even have changed the stop sector during
-	 * verify to interrupt/stop early.  Send the new state. */
-	if (os.conn == C_VERIFY_S && ns.conn == C_CONNECTED
-	&& verify_can_do_stop_sector(device))
-		drbd_send_state(peer_device, ns);
-
-	/* This triggers bitmap writeout of potentially still unwritten pages
-	 * if the resync finished cleanly, or aborted because of peer disk
-	 * failure, or on transition from resync back to AHEAD/BEHIND.
-	 *
-	 * Connection loss is handled in drbd_disconnected() by the receiver.
-	 *
-	 * For resync aborted because of local disk failure, we cannot do
-	 * any bitmap writeout anymore.
-	 *
-	 * No harm done if some bits change during this phase.
-	 */
-	if ((os.conn > C_CONNECTED && os.conn < C_AHEAD) &&
-	    (ns.conn == C_CONNECTED || ns.conn >= C_AHEAD) && get_ldev(device)) {
-		drbd_queue_bitmap_io(device, &drbd_bm_write_copy_pages, NULL,
-			"write from resync_finished", BM_LOCKED_CHANGE_ALLOWED,
-			peer_device);
-		put_ldev(device);
-	}
+	if (new_repl_state == L_WF_BITMAP_S || new_repl_state == L_VERIFY_S)
+		repl_context.context.change_local_state_last = true;
 
-	if (ns.disk == D_DISKLESS &&
-	    ns.conn == C_STANDALONE &&
-	    ns.role == R_SECONDARY) {
-		if (os.aftr_isp != ns.aftr_isp)
-			resume_next_sg(device);
-	}
+	return change_cluster_wide_state(do_change_repl_state, &repl_context.context, tag);
+}
 
-	drbd_md_sync(device);
+enum drbd_state_rv stable_change_repl_state(struct drbd_peer_device *peer_device,
+					    enum drbd_repl_state repl_state,
+					    enum chg_state_flags flags,
+					    const char *tag)
+{
+	return stable_state_change(peer_device->device->resource,
+		change_repl_state(peer_device, repl_state, flags, tag));
 }
 
-struct after_conn_state_chg_work {
-	struct drbd_work w;
-	enum drbd_conns oc;
-	union drbd_state ns_min;
-	union drbd_state ns_max; /* new, max state, over all devices */
-	enum chg_state_flags flags;
-	struct drbd_connection *connection;
-	struct drbd_state_change *state_change;
-};
+void __change_peer_disk_state(struct drbd_peer_device *peer_device, enum drbd_disk_state disk_state)
+{
+	peer_device->disk_state[NEW] = disk_state;
+}
 
-static int w_after_conn_state_ch(struct drbd_work *w, int unused)
+void __downgrade_peer_disk_states(struct drbd_connection *connection, enum drbd_disk_state disk_state)
 {
-	struct after_conn_state_chg_work *acscw =
-		container_of(w, struct after_conn_state_chg_work, w);
-	struct drbd_connection *connection = acscw->connection;
-	enum drbd_conns oc = acscw->oc;
-	union drbd_state ns_max = acscw->ns_max;
 	struct drbd_peer_device *peer_device;
 	int vnr;
 
-	broadcast_state_change(acscw->state_change);
-	forget_state_change(acscw->state_change);
-	kfree(acscw);
+	rcu_read_lock();
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		if (peer_device->disk_state[NEW] > disk_state)
+			__change_peer_disk_state(peer_device, disk_state);
+	}
+	rcu_read_unlock();
+}
 
-	/* Upon network configuration, we need to start the receiver */
-	if (oc == C_STANDALONE && ns_max.conn == C_UNCONNECTED)
-		drbd_thread_start(&connection->receiver);
+enum drbd_state_rv change_peer_disk_state(struct drbd_peer_device *peer_device,
+					  enum drbd_disk_state disk_state,
+					  enum chg_state_flags flags,
+					  const char *tag)
+{
+	struct drbd_resource *resource = peer_device->device->resource;
+	unsigned long irq_flags;
 
-	if (oc == C_DISCONNECTING && ns_max.conn == C_STANDALONE) {
-		struct net_conf *old_conf;
+	begin_state_change(resource, &irq_flags, flags);
+	__change_peer_disk_state(peer_device, disk_state);
+	return end_state_change(resource, &irq_flags, tag);
+}
 
-		mutex_lock(&notification_mutex);
-		idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
-			notify_peer_device_state(NULL, 0, peer_device, NULL,
-						 NOTIFY_DESTROY | NOTIFY_CONTINUES);
-		notify_connection_state(NULL, 0, connection, NULL, NOTIFY_DESTROY);
-		mutex_unlock(&notification_mutex);
+void __change_resync_susp_user(struct drbd_peer_device *peer_device,
+				       bool value)
+{
+	peer_device->resync_susp_user[NEW] = value;
+}
 
-		mutex_lock(&connection->resource->conf_update);
-		old_conf = connection->net_conf;
-		connection->my_addr_len = 0;
-		connection->peer_addr_len = 0;
-		RCU_INIT_POINTER(connection->net_conf, NULL);
-		conn_free_crypto(connection);
-		mutex_unlock(&connection->resource->conf_update);
+enum drbd_state_rv change_resync_susp_user(struct drbd_peer_device *peer_device,
+						   bool value,
+						   enum chg_state_flags flags)
+{
+	struct drbd_resource *resource = peer_device->device->resource;
+	unsigned long irq_flags;
 
-		kvfree_rcu_mightsleep(old_conf);
-	}
+	begin_state_change(resource, &irq_flags, flags);
+	__change_resync_susp_user(peer_device, value);
+	return end_state_change(resource, &irq_flags, value ? "pause-sync" : "resume-sync");
+}
 
-	if (ns_max.susp_fen) {
-		/* case1: The outdate peer handler is successful: */
-		if (ns_max.pdsk <= D_OUTDATED) {
-			rcu_read_lock();
-			idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-				struct drbd_device *device = peer_device->device;
-				if (test_bit(NEW_CUR_UUID, &device->flags)) {
-					drbd_uuid_new_current(device);
-					clear_bit(NEW_CUR_UUID, &device->flags);
-				}
-			}
-			rcu_read_unlock();
-			spin_lock_irq(&connection->resource->req_lock);
-			_tl_restart(connection, CONNECTION_LOST_WHILE_PENDING);
-			_conn_request_state(connection,
-					    (union drbd_state) { { .susp_fen = 1 } },
-					    (union drbd_state) { { .susp_fen = 0 } },
-					    CS_VERBOSE);
-			spin_unlock_irq(&connection->resource->req_lock);
-		}
-	}
-	conn_md_sync(connection);
-	kref_put(&connection->kref, drbd_destroy_connection);
+void __change_resync_susp_peer(struct drbd_peer_device *peer_device,
+				       bool value)
+{
+	peer_device->resync_susp_peer[NEW] = value;
+}
 
-	return 0;
+void __change_resync_susp_dependency(struct drbd_peer_device *peer_device,
+					     bool value)
+{
+	peer_device->resync_susp_dependency[NEW] = value;
 }
 
-static void conn_old_common_state(struct drbd_connection *connection, union drbd_state *pcs, enum chg_state_flags *pf)
+static void log_current_uuids(struct drbd_device *device)
 {
-	enum chg_state_flags flags = ~0;
 	struct drbd_peer_device *peer_device;
-	int vnr, first_vol = 1;
-	union drbd_dev_state os, cs = {
-		{ .role = R_SECONDARY,
-		  .peer = R_UNKNOWN,
-		  .conn = connection->cstate,
-		  .disk = D_DISKLESS,
-		  .pdsk = D_UNKNOWN,
-		} };
+	struct drbd_connection *connection;
+	char msg[120] = "";
+	int ret, pos = 0;
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-		struct drbd_device *device = peer_device->device;
-		os = device->state;
-
-		if (first_vol) {
-			cs = os;
-			first_vol = 0;
+	for_each_peer_device_rcu(peer_device, device) {
+		if (peer_device->disk_state[NOW] != D_UP_TO_DATE)
 			continue;
+		connection = peer_device->connection;
+		ret = snprintf(msg + pos, sizeof(msg) - pos, "%s: %016llX ",
+			       rcu_dereference(connection->transport.net_conf)->name,
+			       peer_device->current_uuid);
+		if (ret > 0)
+			pos += ret;
+		if (pos >= sizeof(msg))
+			break;
+	}
+	rcu_read_unlock();
+	drbd_warn(device, "%s", msg);
+}
+
+bool drbd_res_data_accessible(struct drbd_resource *resource)
+{
+	bool data_accessible = false;
+	struct drbd_device *device;
+	int vnr;
+
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		if (drbd_data_accessible(device, NOW)) {
+			data_accessible = true;
+			break;
 		}
+	}
 
-		if (cs.role != os.role)
-			flags &= ~CS_DC_ROLE;
+	return data_accessible;
+}
 
-		if (cs.peer != os.peer)
-			flags &= ~CS_DC_PEER;
+/**
+ * calc_data_accessible() - check whether up-to-date data is reachable
+ *
+ * @state_change: where to get the state information from
+ * @n_device:     index into the devices array
+ * @which:        OLD or NEW
+ *
+ * calc_data_accessible() returns true if either the local disk or the disk
+ * of one of the peers is up-to-date. The related drbd_data_accessible()
+ * computes the same result from different inputs.
+ */
+static bool calc_data_accessible(struct drbd_state_change *state_change, int n_device,
+				 enum which_state which)
+{
+	struct drbd_device_state_change *device_state_change = &state_change->devices[n_device];
+	enum drbd_disk_state *disk_state = device_state_change->disk_state;
+	int n_connection;
 
-		if (cs.conn != os.conn)
-			flags &= ~CS_DC_CONN;
+	if (disk_state[which] == D_UP_TO_DATE)
+		return true;
 
-		if (cs.disk != os.disk)
-			flags &= ~CS_DC_DISK;
+	for (n_connection = 0; n_connection < state_change->n_connections; n_connection++) {
+		struct drbd_peer_device_state_change *peer_device_state_change =
+			&state_change->peer_devices[
+				n_device * state_change->n_connections + n_connection];
+		struct drbd_peer_device *peer_device = peer_device_state_change->peer_device;
+		enum drbd_disk_state *peer_disk_state = peer_device_state_change->disk_state;
+		struct net_conf *nc;
+		bool allow_remote_read;
 
-		if (cs.pdsk != os.pdsk)
-			flags &= ~CS_DC_PDSK;
+		rcu_read_lock();
+		nc = rcu_dereference(peer_device->connection->transport.net_conf);
+		/* nc may be NULL; check it before dereferencing */
+		allow_remote_read = !nc || nc->allow_remote_read;
+		rcu_read_unlock();
+		if (!allow_remote_read)
+			continue;
+		if (peer_disk_state[which] == D_UP_TO_DATE)
+			return true;
 	}
-	rcu_read_unlock();
 
-	*pf |= CS_DC_MASK;
-	*pf &= flags;
-	(*pcs).i = cs.i;
+	return false;
 }
 
-static enum drbd_state_rv
-conn_is_valid_transition(struct drbd_connection *connection, union drbd_state mask, union drbd_state val,
-			 enum chg_state_flags flags)
+/**
+ * drbd_data_accessible() - check whether up-to-date data is reachable
+ *
+ * @device: the device the question is about
+ * @which:  OLD, NEW, or NOW (only use OLD within a state change!)
+ *
+ * drbd_data_accessible() returns true if either the local disk or the disk
+ * of one of the peers is up-to-date. The related calc_data_accessible()
+ * computes the same result from different inputs.
+ */
+bool drbd_data_accessible(struct drbd_device *device, enum which_state which)
 {
-	enum drbd_state_rv rv = SS_SUCCESS;
-	union drbd_state ns, os;
 	struct drbd_peer_device *peer_device;
-	int vnr;
+	bool data_accessible = false;
 
-	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-		struct drbd_device *device = peer_device->device;
-		os = drbd_read_state(device);
-		ns = sanitize_state(device, os, apply_mask_val(os, mask, val), NULL);
-
-		if (flags & CS_IGN_OUTD_FAIL && ns.disk == D_OUTDATED && os.disk < D_OUTDATED)
-			ns.disk = os.disk;
+	if (device->disk_state[which] == D_UP_TO_DATE)
+		return true;
 
-		if (ns.i == os.i)
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		struct net_conf *nc;
+		nc = rcu_dereference(peer_device->connection->transport.net_conf);
+		if (nc && !nc->allow_remote_read)
 			continue;
-
-		rv = is_valid_transition(os, ns);
-
-		if (rv >= SS_SUCCESS && !(flags & CS_HARD)) {
-			rv = is_valid_state(device, ns);
-			if (rv < SS_SUCCESS) {
-				if (is_valid_state(device, os) == rv)
-					rv = is_valid_soft_transition(os, ns, connection);
-			} else
-				rv = is_valid_soft_transition(os, ns, connection);
-		}
-
-		if (rv < SS_SUCCESS) {
-			if (flags & CS_VERBOSE)
-				print_st_err(device, os, ns, rv);
+		if (peer_device->disk_state[which] == D_UP_TO_DATE) {
+			data_accessible = true;
 			break;
 		}
 	}
 	rcu_read_unlock();
 
-	return rv;
+	return data_accessible;
 }
-
-static void
-conn_set_state(struct drbd_connection *connection, union drbd_state mask, union drbd_state val,
-	       union drbd_state *pns_min, union drbd_state *pns_max, enum chg_state_flags flags)
+
+/* drbd_data_accessible() and exposable_data_uuid() have the same
+ * structure. This is intentional. */
+static u64 exposable_data_uuid(struct drbd_device *device)
 {
-	union drbd_state ns, os, ns_max = { };
-	union drbd_state ns_min = {
-		{ .role = R_MASK,
-		  .peer = R_MASK,
-		  .conn = val.conn,
-		  .disk = D_MASK,
-		  .pdsk = D_MASK
-		} };
 	struct drbd_peer_device *peer_device;
-	enum drbd_state_rv rv;
-	int vnr, number_of_volumes = 0;
-
-	if (mask.conn == C_MASK) {
-		/* remember last connect time so request_timer_fn() won't
-		 * kill newly established sessions while we are still trying to thaw
-		 * previously frozen IO */
-		if (connection->cstate != C_WF_REPORT_PARAMS && val.conn == C_WF_REPORT_PARAMS)
-			connection->last_reconnect_jif = jiffies;
+	u64 uuid = 0;
 
-		connection->cstate = val.conn;
+	if (get_ldev_if_state(device, D_UP_TO_DATE)) {
+		uuid = device->ldev->md.current_uuid;
+		put_ldev(device);
+		return uuid;
 	}
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-		struct drbd_device *device = peer_device->device;
-		number_of_volumes++;
-		os = drbd_read_state(device);
-		ns = apply_mask_val(os, mask, val);
-		ns = sanitize_state(device, os, ns, NULL);
-
-		if (flags & CS_IGN_OUTD_FAIL && ns.disk == D_OUTDATED && os.disk < D_OUTDATED)
-			ns.disk = os.disk;
-
-		rv = _drbd_set_state(device, ns, flags, NULL);
-		BUG_ON(rv < SS_SUCCESS);
-		ns.i = device->state.i;
-		ns_max.role = max_role(ns.role, ns_max.role);
-		ns_max.peer = max_role(ns.peer, ns_max.peer);
-		ns_max.conn = max_t(enum drbd_conns, ns.conn, ns_max.conn);
-		ns_max.disk = max_t(enum drbd_disk_state, ns.disk, ns_max.disk);
-		ns_max.pdsk = max_t(enum drbd_disk_state, ns.pdsk, ns_max.pdsk);
-
-		ns_min.role = min_role(ns.role, ns_min.role);
-		ns_min.peer = min_role(ns.peer, ns_min.peer);
-		ns_min.conn = min_t(enum drbd_conns, ns.conn, ns_min.conn);
-		ns_min.disk = min_t(enum drbd_disk_state, ns.disk, ns_min.disk);
-		ns_min.pdsk = min_t(enum drbd_disk_state, ns.pdsk, ns_min.pdsk);
+	for_each_peer_device_rcu(peer_device, device) {
+		struct net_conf *nc;
+		nc = rcu_dereference(peer_device->connection->transport.net_conf);
+		if (nc && !nc->allow_remote_read)
+			continue;
+		if (peer_device->disk_state[NOW] == D_UP_TO_DATE &&
+		    (uuid & ~UUID_PRIMARY) != (peer_device->current_uuid & ~UUID_PRIMARY)) {
+			if (!uuid) {
+				uuid = peer_device->current_uuid;
+				continue;
+			}
+			drbd_err(device, "Multiple UpToDate peers have different current UUIDs\n");
+			log_current_uuids(device);
+		}
 	}
 	rcu_read_unlock();
 
-	if (number_of_volumes == 0) {
-		ns_min = ns_max = (union drbd_state) { {
-				.role = R_SECONDARY,
-				.peer = R_UNKNOWN,
-				.conn = val.conn,
-				.disk = D_DISKLESS,
-				.pdsk = D_UNKNOWN
-			} };
-	}
-
-	ns_min.susp = ns_max.susp = connection->resource->susp;
-	ns_min.susp_nod = ns_max.susp_nod = connection->resource->susp_nod;
-	ns_min.susp_fen = ns_max.susp_fen = connection->resource->susp_fen;
-
-	*pns_min = ns_min;
-	*pns_max = ns_max;
+	return uuid;
 }
 
-static enum drbd_state_rv
-_conn_rq_cond(struct drbd_connection *connection, union drbd_state mask, union drbd_state val)
+static void ensure_exposed_data_uuid(struct drbd_device *device)
 {
-	enum drbd_state_rv err, rv = SS_UNKNOWN_ERROR; /* continue waiting */;
-
-	if (test_and_clear_bit(CONN_WD_ST_CHG_OKAY, &connection->flags))
-		rv = SS_CW_SUCCESS;
-
-	if (test_and_clear_bit(CONN_WD_ST_CHG_FAIL, &connection->flags))
-		rv = SS_CW_FAILED_BY_PEER;
+	u64 uuid = exposable_data_uuid(device);
 
-	err = conn_is_valid_transition(connection, mask, val, 0);
-	if (err == SS_SUCCESS && connection->cstate == C_WF_REPORT_PARAMS)
-		return rv;
+	if (uuid)
+		drbd_uuid_set_exposed(device, uuid, true);
 
-	return err;
 }
 
-enum drbd_state_rv
-_conn_request_state(struct drbd_connection *connection, union drbd_state mask, union drbd_state val,
-		    enum chg_state_flags flags)
+/* Between 9.1.7 and 9.1.12, drbd set MDF_NODE_EXISTS for all peers, which
+ * rendered the flag useless. Since it is a meta-data flag, it persists on
+ * disk. Clear it for all unconfigured nodes if we find it set in every
+ * peer slot.
+ */
+static void check_wrongly_set_mdf_exists(struct drbd_device *device)
 {
-	enum drbd_state_rv rv = SS_SUCCESS;
-	struct after_conn_state_chg_work *acscw;
-	enum drbd_conns oc = connection->cstate;
-	union drbd_state ns_max, ns_min, os;
-	bool have_mutex = false;
-	struct drbd_state_change *state_change;
+	struct drbd_resource *resource = device->resource;
+	const int my_node_id = resource->res_opts.node_id;
+	bool wrong = true;
+	int node_id;
 
-	if (mask.conn) {
-		rv = is_valid_conn_transition(oc, val.conn);
-		if (rv < SS_SUCCESS)
-			goto abort;
-	}
+	if (!get_ldev(device))
+		return;
 
-	rv = conn_is_valid_transition(connection, mask, val, flags);
-	if (rv < SS_SUCCESS)
-		goto abort;
-
-	if (oc == C_WF_REPORT_PARAMS && val.conn == C_DISCONNECTING &&
-	    !(flags & (CS_LOCAL_ONLY | CS_HARD))) {
-
-		/* This will be a cluster-wide state change.
-		 * Need to give up the spinlock, grab the mutex,
-		 * then send the state change request, ... */
-		spin_unlock_irq(&connection->resource->req_lock);
-		mutex_lock(&connection->cstate_mutex);
-		have_mutex = true;
-
-		set_bit(CONN_WD_ST_CHG_REQ, &connection->flags);
-		if (conn_send_state_req(connection, mask, val)) {
-			/* sending failed. */
-			clear_bit(CONN_WD_ST_CHG_REQ, &connection->flags);
-			rv = SS_CW_FAILED_BY_PEER;
-			/* need to re-aquire the spin lock, though */
-			goto abort_unlocked;
-		}
-
-		if (val.conn == C_DISCONNECTING)
-			set_bit(DISCONNECT_SENT, &connection->flags);
-
-		/* ... and re-aquire the spinlock.
-		 * If _conn_rq_cond() returned >= SS_SUCCESS, we must call
-		 * conn_set_state() within the same spinlock. */
-		spin_lock_irq(&connection->resource->req_lock);
-		wait_event_lock_irq(connection->ping_wait,
-				(rv = _conn_rq_cond(connection, mask, val)),
-				connection->resource->req_lock);
-		clear_bit(CONN_WD_ST_CHG_REQ, &connection->flags);
-		if (rv < SS_SUCCESS)
-			goto abort;
-	}
-
-	state_change = remember_old_state(connection->resource, GFP_ATOMIC);
-	conn_old_common_state(connection, &os, &flags);
-	flags |= CS_DC_SUSP;
-	conn_set_state(connection, mask, val, &ns_min, &ns_max, flags);
-	conn_pr_state_change(connection, os, ns_max, flags);
-	remember_new_state(state_change);
-
-	acscw = kmalloc_obj(*acscw, GFP_ATOMIC);
-	if (acscw) {
-		acscw->oc = os.conn;
-		acscw->ns_min = ns_min;
-		acscw->ns_max = ns_max;
-		acscw->flags = flags;
-		acscw->w.cb = w_after_conn_state_ch;
-		kref_get(&connection->kref);
-		acscw->connection = connection;
-		acscw->state_change = state_change;
-		drbd_queue_work(&connection->sender_work, &acscw->w);
-	} else {
-		drbd_err(connection, "Could not kmalloc an acscw\n");
-	}
+	rcu_read_lock();
 
- abort:
-	if (have_mutex) {
-		/* mutex_unlock() "... must not be used in interrupt context.",
-		 * so give up the spinlock, then re-aquire it */
-		spin_unlock_irq(&connection->resource->req_lock);
- abort_unlocked:
-		mutex_unlock(&connection->cstate_mutex);
-		spin_lock_irq(&connection->resource->req_lock);
-	}
-	if (rv < SS_SUCCESS && flags & CS_VERBOSE) {
-		drbd_err(connection, "State change failed: %s\n", drbd_set_st_err_str(rv));
-		drbd_err(connection, " mask = 0x%x val = 0x%x\n", mask.i, val.i);
-		drbd_err(connection, " old_conn:%s wanted_conn:%s\n", drbd_conn_str(oc), drbd_conn_str(val.conn));
-	}
-	return rv;
-}
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		struct drbd_peer_device *peer_device = peer_device_by_node_id(device, node_id);
+		struct drbd_peer_md *peer_md = &device->ldev->md.peers[node_id];
 
-enum drbd_state_rv
-conn_request_state(struct drbd_connection *connection, union drbd_state mask, union drbd_state val,
-		   enum chg_state_flags flags)
-{
-	enum drbd_state_rv rv;
+		if (!(peer_md->flags & MDF_NODE_EXISTS || peer_device || node_id == my_node_id)) {
+			wrong = false;
+			break;
+		}
+	}
 
-	spin_lock_irq(&connection->resource->req_lock);
-	rv = _conn_request_state(connection, mask, val, flags);
-	spin_unlock_irq(&connection->resource->req_lock);
+	if (wrong) {
+		for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+			struct drbd_peer_device *peer_device = peer_device_by_node_id(device, node_id);
+			struct drbd_peer_md *peer_md = &device->ldev->md.peers[node_id];
 
-	return rv;
+			if (!peer_device)
+				peer_md->flags &= ~MDF_NODE_EXISTS;
+		}
+		if (!test_bit(WRONG_MDF_EXISTS, &resource->flags)) {
+			set_bit(WRONG_MDF_EXISTS, &resource->flags);
+			drbd_warn(resource, "Clearing excess MDF_NODE_EXISTS flags\n");
+		}
+	}
+	rcu_read_unlock();
+	put_ldev(device);
 }
diff --git a/include/linux/drbd_genl.h b/include/linux/drbd_genl.h
index 75e671a3c5d1..eaaf1a9c641f 100644
--- a/include/linux/drbd_genl.h
+++ b/include/linux/drbd_genl.h
@@ -236,6 +236,7 @@ GENL_struct(DRBD_NLA_DEVICE_CONF, 14, device_conf,
 	__u32_field_def(1, DRBD_F_INVARIANT,	max_bio_size, DRBD_MAX_BIO_SIZE_DEF)
 	__flg_field_def(2, 0 /* OPTIONAL */, intentional_diskless, DRBD_DISK_DISKLESS_DEF)
 	__u32_field_def(3, 0 /* OPTIONAL */, block_size, DRBD_BLOCK_SIZE_DEF)
+	__u32_field_def(4, 0 /* OPTIONAL */, discard_granularity, DRBD_DISCARD_GRANULARITY_DEF)
 )
 
 GENL_struct(DRBD_NLA_RESOURCE_INFO, 15, resource_info,
@@ -357,6 +358,7 @@ GENL_struct(DRBD_NLA_PEER_DEVICE_OPTS, 27, peer_device_conf,
 #if (PRO_FEATURES & DRBD_FF_RESYNC_WITHOUT_REPLICATION) || !defined(__KERNEL__)
 	__flg_field_def(8, 0 /* OPTIONAL */, resync_without_replication, DRBD_RESYNC_WITHOUT_REPLICATION_DEF)
 #endif
+	__flg_field_def(9, 0 /* OPTIONAL */, peer_tiebreaker, DRBD_PEER_TIEBREAKER_DEF)
 )
 
 GENL_struct(DRBD_NLA_PATH_PARMS, 28, path_parms,
diff --git a/include/linux/drbd_limits.h b/include/linux/drbd_limits.h
index ed38f94d43c6..bbcb5b0dc3be 100644
--- a/include/linux/drbd_limits.h
+++ b/include/linux/drbd_limits.h
@@ -313,6 +313,11 @@
 #define DRBD_BLOCK_SIZE_DEF 512
 #define DRBD_BLOCK_SIZE_SCALE '1' /* Bytes */
 
+#define DRBD_DISCARD_GRANULARITY_SCALE '1'             /* Bytes */
+#define DRBD_DISCARD_GRANULARITY_MIN 0U                /* 0 = disable discards */
+#define DRBD_DISCARD_GRANULARITY_MAX (128U<<20)        /* 128 MiB, current DRBD_MAX_BATCH_BIO_SIZE */
+#define DRBD_DISCARD_GRANULARITY_DEF 0xFFFFFFFFU       /* sentinel: not configured; use legacy behavior */
+
 /* By default freeze IO, if set error all IOs as quick as possible */
 #define DRBD_ON_NO_QUORUM_DEF ONQ_SUSPEND_IO
 
@@ -326,6 +331,8 @@
 
 #define DRBD_LOAD_BALANCE_PATHS_DEF 0U
 
+#define DRBD_PEER_TIEBREAKER_DEF 1U
+
 #define DRBD_RDMA_CTRL_RCVBUF_SIZE_MIN  0U
 #define DRBD_RDMA_CTRL_RCVBUF_SIZE_MAX  (10U<<20)
 #define DRBD_RDMA_CTRL_RCVBUF_SIZE_DEF 0
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 14/20] drbd: rework activity log and bitmap for multi-peer replication
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
                   ` (12 preceding siblings ...)
  2026-03-27 22:38 ` [PATCH 13/20] drbd: rewrite state machine for DRBD 9 multi-peer clusters Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 15/20] drbd: rework request processing for DRBD 9 multi-peer IO Christoph Böhmwalder
                   ` (5 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder, Joel Colledge

Adapt the activity log and on-disk bitmap from the single-peer
DRBD 8.4 model to the multi-peer DRBD 9 architecture.

Restructure the bitmap from a single flat layout to an interleaved
per-peer format: consecutive words on disk cycle through all configured
peers, so that all peers' bits for the same disk region share a page.
This enables atomic cross-peer operations and is a prerequisite for
coordinated multi-peer resync.
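
As a rough sketch of the addressing this implies (illustrative names
only, not the helpers this patch actually adds): with N configured peer
slots, the on-disk word holding one peer's bits for a given region is

  /* Words for the same region_word but different peer slots are
   * adjacent, so all peers' bits for one disk region share a page. */
  static inline unsigned long interleaved_word(unsigned long region_word,
                                               unsigned int peer_slot,
                                               unsigned int n_peer_slots)
  {
          return region_word * n_peer_slots + peer_slot;
  }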

Consolidate all bitmap operations into a single function instead of
many separate ones.
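
A consolidated entry point could look roughly like this (a sketch under
assumed names; the actual signature in the patch may differ):

  enum bm_op { BM_OP_SET, BM_OP_CLEAR, BM_OP_COUNT };

  /* One dispatcher instead of separate set/clear/count helpers;
   * assumes 64-bit bitmap words for brevity. */
  static unsigned long bm_op(unsigned long *bm, unsigned long start,
                             unsigned long end, enum bm_op op)
  {
          unsigned long bit, count = 0;

          for (bit = start; bit <= end; bit++) {
                  unsigned long mask = 1UL << (bit % 64);
                  unsigned long *word = &bm[bit / 64];

                  switch (op) {
                  case BM_OP_SET:
                          *word |= mask;
                          break;
                  case BM_OP_CLEAR:
                          *word &= ~mask;
                          break;
                  case BM_OP_COUNT:
                          count += !!(*word & mask);
                          break;
                  }
          }
          return count;
  }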

Make the bitmap block size adjustable at runtime rather than being a
compile-time constant.
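
That is, something along these lines (assumed field and helper names):

  /* The bitmap block size becomes per-bitmap state consulted at
   * runtime instead of a compile-time shift constant. */
  struct bm_geometry {
          unsigned int block_shift;	/* e.g. 12 for 4 KiB blocks */
  };

  static inline sector_t bm_block_to_sector(const struct bm_geometry *g,
                                            unsigned long block)
  {
          return (sector_t)block << (g->block_shift - 9);
  }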

Introduce a per-peer-slot lock variant so that concurrent operations
on different peer slots no longer need to serialize.
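
A minimal sketch of the idea (assumed shape, not the patch's actual
data structure):

  /* Operations confined to one peer slot take only that slot's lock;
   * cross-slot operations take the outer lock plus every slot lock,
   * so the two classes still exclude each other. */
  struct bm_locks {
          spinlock_t cross_slot;
          spinlock_t per_slot[DRBD_NODE_ID_MAX];
  };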

On the activity log side, the resync extent LRU cache and its
associated write-blocking protocol are removed.
In DRBD 9, resync-to-application-write conflict detection is handled
by the sender's interval tree, making the old extent-lock layer
redundant.
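
Conceptually, the conflict check the interval tree answers reduces to
this (simplified; sizes in bytes, 512-byte sectors):

  /* Two requests conflict iff their sector ranges overlap; the sender
   * keeps application writes and resync requests in one tree and
   * defers whichever request arrives second. */
  static bool intervals_overlap(sector_t a_sector, unsigned int a_size,
                                sector_t b_sector, unsigned int b_size)
  {
          sector_t a_end = a_sector + (a_size >> 9);
          sector_t b_end = b_sector + (b_size >> 9);

          return a_sector < b_end && b_sector < a_end;
  }
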
Resync progress tracking moves from the device to the per-peer-device
object, enabling independent progress reporting and rate control per
peer.

Co-developed-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Co-developed-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Co-developed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Co-developed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/drbd_actlog.c | 1122 +++++++-----------
 drivers/block/drbd/drbd_bitmap.c | 1824 +++++++++++++++---------------
 2 files changed, 1331 insertions(+), 1615 deletions(-)

diff --git a/drivers/block/drbd/drbd_actlog.c b/drivers/block/drbd/drbd_actlog.c
index b3dbf6c76e98..7a69d643560d 100644
--- a/drivers/block/drbd/drbd_actlog.c
+++ b/drivers/block/drbd/drbd_actlog.c
@@ -14,81 +14,41 @@
 #include <linux/slab.h>
 #include <linux/crc32c.h>
 #include <linux/drbd.h>
-#include <linux/drbd_limits.h>
+#include <linux/dynamic_debug.h>
 #include "drbd_int.h"
-
-
-enum al_transaction_types {
-	AL_TR_UPDATE = 0,
-	AL_TR_INITIALIZED = 0xffff
-};
-/* all fields on disc in big endian */
-struct __packed al_transaction_on_disk {
-	/* don't we all like magic */
-	__be32	magic;
-
-	/* to identify the most recent transaction block
-	 * in the on disk ring buffer */
-	__be32	tr_number;
-
-	/* checksum on the full 4k block, with this field set to 0. */
-	__be32	crc32c;
-
-	/* type of transaction, special transaction types like:
-	 * purge-all, set-all-idle, set-all-active, ... to-be-defined
-	 * see also enum al_transaction_types */
-	__be16	transaction_type;
-
-	/* we currently allow only a few thousand extents,
-	 * so 16bit will be enough for the slot number. */
-
-	/* how many updates in this transaction */
-	__be16	n_updates;
-
-	/* maximum slot number, "al-extents" in drbd.conf speak.
-	 * Having this in each transaction should make reconfiguration
-	 * of that parameter easier. */
-	__be16	context_size;
-
-	/* slot number the context starts with */
-	__be16	context_start_slot_nr;
-
-	/* Some reserved bytes.  Expected usage is a 64bit counter of
-	 * sectors-written since device creation, and other data generation tag
-	 * supporting usage */
-	__be32	__reserved[4];
-
-	/* --- 36 byte used --- */
-
-	/* Reserve space for up to AL_UPDATES_PER_TRANSACTION changes
-	 * in one transaction, then use the remaining byte in the 4k block for
-	 * context information.  "Flexible" number of updates per transaction
-	 * does not help, as we have to account for the case when all update
-	 * slots are used anyways, so it would only complicate code without
-	 * additional benefit.
-	 */
-	__be16	update_slot_nr[AL_UPDATES_PER_TRANSACTION];
-
-	/* but the extent number is 32bit, which at an extent size of 4 MiB
-	 * allows to cover device sizes of up to 2**54 Byte (16 PiB) */
-	__be32	update_extent_nr[AL_UPDATES_PER_TRANSACTION];
-
-	/* --- 420 bytes used (36 + 64*6) --- */
-
-	/* 4096 - 420 = 3676 = 919 * 4 */
-	__be32	context[AL_CONTEXT_PER_TRANSACTION];
-};
+#include "drbd_meta_data.h"
+#include "drbd_dax_pmem.h"
 
 void *drbd_md_get_buffer(struct drbd_device *device, const char *intent)
 {
 	int r;
+	long t;
+	unsigned long t0 = jiffies;
+	unsigned int warn_s = 10;
+
+	for (;;) {
+		t = wait_event_timeout(device->misc_wait,
+				(r = atomic_cmpxchg(&device->md_io.in_use, 0, 1)) == 0 ||
+				device->disk_state[NOW] <= D_FAILED,
+				HZ * warn_s);
 
-	wait_event(device->misc_wait,
-		   (r = atomic_cmpxchg(&device->md_io.in_use, 0, 1)) == 0 ||
-		   device->state.disk <= D_FAILED);
+		if (r == 0)
+			break;
+
+		if (t != 0) {
+			drbd_err(device, "Failed to get md_buffer for %s: disk state %s\n",
+				 intent, drbd_disk_str(device->disk_state[NOW]));
+			return NULL;
+		}
 
-	if (r)
-		return NULL;
+		/* r != 0, t == 0: still in use, hit the timeout above.
+		 * Warn, but keep trying.
+		 */
+		drbd_err(device, "Waited %lds on md_buffer for %s; in use by %s; still trying...\n",
+			 (jiffies - t0 + HZ-1)/HZ, intent, device->md_io.current_use);
+		/* reduce warn frequency */
+		warn_s = max(30U, warn_s + 10);
+	}
 
 	device->md_io.current_use = intent;
 	device->md_io.start_jif = jiffies;
@@ -103,7 +63,7 @@ void drbd_md_put_buffer(struct drbd_device *device)
 }
 
 void wait_until_done_or_force_detached(struct drbd_device *device, struct drbd_backing_dev *bdev,
-				     unsigned int *done)
+				       unsigned int *done)
 {
 	long dt;
 
@@ -115,10 +75,14 @@ void wait_until_done_or_force_detached(struct drbd_device *device, struct drbd_b
 		dt = MAX_SCHEDULE_TIMEOUT;
 
 	dt = wait_event_timeout(device->misc_wait,
-			*done || test_bit(FORCE_DETACH, &device->flags), dt);
+			*done ||
+			test_bit(FORCE_DETACH, &device->flags) ||
+			test_bit(ABORT_MDIO, &device->flags),
+			dt);
+
 	if (dt == 0) {
 		drbd_err(device, "meta-data IO operation timed out\n");
-		drbd_chk_io_error(device, 1, DRBD_FORCE_DETACH);
+		drbd_handle_io_error(device, DRBD_FORCE_DETACH);
 	}
 }
 
@@ -132,15 +96,15 @@ static int _drbd_md_sync_page_io(struct drbd_device *device,
 	int err;
 	blk_opf_t op_flags = 0;
 
-	device->md_io.done = 0;
-	device->md_io.error = -ENODEV;
-
 	if ((op == REQ_OP_WRITE) && !test_bit(MD_NO_FUA, &device->flags))
 		op_flags |= REQ_FUA | REQ_PREFLUSH;
-	op_flags |= REQ_SYNC;
+	op_flags |= REQ_META | REQ_SYNC;
+
+	device->md_io.done = 0;
+	device->md_io.error = -ENODEV;
 
-	bio = bio_alloc_bioset(bdev->md_bdev, 1, op | op_flags, GFP_NOIO,
-			       &drbd_md_io_bio_set);
+	bio = bio_alloc_bioset(bdev->md_bdev, 1, op | op_flags,
+		GFP_NOIO, &drbd_md_io_bio_set);
 	bio->bi_iter.bi_sector = sector;
 	err = -EIO;
 	if (bio_add_page(bio, device->md_io.page, size, 0) != size)
@@ -148,7 +112,7 @@ static int _drbd_md_sync_page_io(struct drbd_device *device,
 	bio->bi_private = device;
 	bio->bi_end_io = drbd_md_endio;
 
-	if (op != REQ_OP_WRITE && device->state.disk == D_DISKLESS && device->ldev == NULL)
+	if (op != REQ_OP_WRITE && device->disk_state[NOW] == D_DISKLESS && device->ldev == NULL)
 		/* special case, drbd_md_read() during drbd_adm_attach(): no get_ldev */
 		;
 	else if (!get_ldev_if_state(device, D_ATTACHING)) {
@@ -161,14 +125,14 @@ static int _drbd_md_sync_page_io(struct drbd_device *device,
 	bio_get(bio); /* one bio_put() is in the completion handler */
 	atomic_inc(&device->md_io.in_use); /* drbd_md_put_buffer() is in the completion handler */
 	device->md_io.submit_jif = jiffies;
-	if (drbd_insert_fault(device, (op == REQ_OP_WRITE) ? DRBD_FAULT_MD_WR : DRBD_FAULT_MD_RD))
-		bio_io_error(bio);
-	else
+	if (drbd_insert_fault(device, (op == REQ_OP_WRITE) ? DRBD_FAULT_MD_WR : DRBD_FAULT_MD_RD)) {
+		bio->bi_status = BLK_STS_IOERR;
+		bio_endio(bio);
+	} else {
 		submit_bio(bio);
+	}
 	wait_until_done_or_force_detached(device, bdev, &device->md_io.done);
-	if (!bio->bi_status)
-		err = device->md_io.error;
-
+	err = device->md_io.error;
  out:
 	bio_put(bio);
 	return err;
@@ -180,7 +144,10 @@ int drbd_md_sync_page_io(struct drbd_device *device, struct drbd_backing_dev *bd
 	int err;
 	D_ASSERT(device, atomic_read(&device->md_io.in_use) == 1);
 
-	BUG_ON(!bdev->md_bdev);
+	if (!bdev->md_bdev) {
+		drbd_err_ratelimit(device, "bdev->md_bdev==NULL\n");
+		return -EIO;
+	}
 
 	dynamic_drbd_dbg(device, "meta_data io: %s [%d]:%s(,%llus,%s) %pS\n",
 	     current->comm, current->pid, __func__,
@@ -203,96 +170,142 @@ int drbd_md_sync_page_io(struct drbd_device *device, struct drbd_backing_dev *bd
 	return err;
 }
 
-static struct bm_extent *find_active_resync_extent(struct drbd_device *device, unsigned int enr)
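+/* Report whether any activity log extent covering [sector, sector + size)
+ * is currently active, i.e. still referenced by in-flight application
+ * writes (descriptive note; refcnt > 0 below is that reference count). */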
+bool drbd_al_active(struct drbd_device *device, sector_t sector, unsigned int size)
 {
-	struct lc_element *tmp;
-	tmp = lc_find(device->resync, enr/AL_EXT_PER_BM_SECT);
-	if (unlikely(tmp != NULL)) {
-		struct bm_extent  *bm_ext = lc_entry(tmp, struct bm_extent, lce);
-		if (test_bit(BME_NO_WRITES, &bm_ext->flags))
-			return bm_ext;
+	unsigned first = sector >> (AL_EXTENT_SHIFT-9);
+	unsigned last = size == 0 ? first : (sector + (size >> 9) - 1) >> (AL_EXTENT_SHIFT-9);
+	unsigned enr;
+	bool active = false;
+
+	spin_lock_irq(&device->al_lock);
+	for (enr = first; enr <= last; enr++) {
+		struct lc_element *al_ext;
+		al_ext = lc_find(device->act_log, enr);
+		if (al_ext && al_ext->refcnt > 0) {
+			active = true;
+			break;
+		}
 	}
-	return NULL;
+	spin_unlock_irq(&device->al_lock);
+
+	return active;
 }
 
-static struct lc_element *_al_get(struct drbd_device *device, unsigned int enr, bool nonblock)
+static
+struct lc_element *_al_get_nonblock(struct drbd_device *device, unsigned int enr)
 {
 	struct lc_element *al_ext;
-	struct bm_extent *bm_ext;
-	int wake;
 
 	spin_lock_irq(&device->al_lock);
-	bm_ext = find_active_resync_extent(device, enr);
-	if (bm_ext) {
-		wake = !test_and_set_bit(BME_PRIORITY, &bm_ext->flags);
-		spin_unlock_irq(&device->al_lock);
-		if (wake)
-			wake_up(&device->al_wait);
-		return NULL;
-	}
-	if (nonblock)
-		al_ext = lc_try_get(device->act_log, enr);
-	else
-		al_ext = lc_get(device->act_log, enr);
+	al_ext = lc_try_get(device->act_log, enr);
 	spin_unlock_irq(&device->al_lock);
+
 	return al_ext;
 }
 
-bool drbd_al_begin_io_fastpath(struct drbd_device *device, struct drbd_interval *i)
+#if IS_ENABLED(CONFIG_DEV_DAX_PMEM)
+static
+struct lc_element *_al_get(struct drbd_device *device, unsigned int enr)
 {
-	/* for bios crossing activity log extent boundaries,
-	 * we may need to activate two extents in one go */
-	unsigned first = i->sector >> (AL_EXTENT_SHIFT-9);
-	unsigned last = i->size == 0 ? first : (i->sector + (i->size >> 9) - 1) >> (AL_EXTENT_SHIFT-9);
+	struct lc_element *al_ext;
 
-	D_ASSERT(device, first <= last);
-	D_ASSERT(device, atomic_read(&device->local_cnt) > 0);
+	spin_lock_irq(&device->al_lock);
+	al_ext = lc_get(device->act_log, enr);
+	spin_unlock_irq(&device->al_lock);
 
-	/* FIXME figure out a fast path for bios crossing AL extent boundaries */
-	if (first != last)
-		return false;
+	return al_ext;
+}
+
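+/*
+ * Fast path for activity log updates when the metadata sits on DAX/PMEM:
+ * each extent that changes its slot is persisted in place via
+ * drbd_dax_al_update(), so no separate on-disk transaction write is
+ * needed. If any extent cannot be referenced, the references taken so far
+ * are dropped again and false is returned.
+ */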
+static bool
+drbd_dax_begin_io_fp(struct drbd_device *device, unsigned int first, unsigned int last)
+{
+	struct lc_element *al_ext;
+	unsigned long flags;
+	unsigned int enr;
+	unsigned int abort_enr;
+	bool wake = false;
 
-	return _al_get(device, first, true);
+	for (enr = first; enr <= last; enr++) {
+		al_ext = _al_get(device, enr);
+		if (!al_ext)
+			goto abort;
+
+		if (al_ext->lc_number != enr) {
+			spin_lock_irqsave(&device->al_lock, flags);
+			drbd_dax_al_update(device, al_ext);
+			lc_committed(device->act_log);
+			spin_unlock_irqrestore(&device->al_lock, flags);
+		}
+	}
+	return true;
+abort:
+	abort_enr = enr;
+	for (enr = first; enr < abort_enr; enr++) {
+		spin_lock_irqsave(&device->al_lock, flags);
+		al_ext = lc_find(device->act_log, enr);
+		wake |= lc_put(device->act_log, al_ext) == 0;
+		spin_unlock_irqrestore(&device->al_lock, flags);
+	}
+	if (wake)
+		wake_up(&device->al_wait);
+	return false;
 }
+#else
+static bool
+drbd_dax_begin_io_fp(struct drbd_device *device, unsigned int first, unsigned int last)
+{
+	return false;
+}
+#endif
 
-bool drbd_al_begin_io_prepare(struct drbd_device *device, struct drbd_interval *i)
+bool drbd_al_begin_io_fastpath(struct drbd_device *device, struct drbd_interval *i)
 {
 	/* for bios crossing activity log extent boundaries,
 	 * we may need to activate two extents in one go */
 	unsigned first = i->sector >> (AL_EXTENT_SHIFT-9);
 	unsigned last = i->size == 0 ? first : (i->sector + (i->size >> 9) - 1) >> (AL_EXTENT_SHIFT-9);
-	unsigned enr;
-	bool need_transaction = false;
 
 	D_ASSERT(device, first <= last);
 	D_ASSERT(device, atomic_read(&device->local_cnt) > 0);
 
-	for (enr = first; enr <= last; enr++) {
-		struct lc_element *al_ext;
-		wait_event(device->al_wait,
-				(al_ext = _al_get(device, enr, false)) != NULL);
-		if (al_ext->lc_number != enr)
-			need_transaction = true;
-	}
-	return need_transaction;
+	if (drbd_md_dax_active(device->ldev))
+		return drbd_dax_begin_io_fp(device, first, last);
+
+	/* FIXME figure out a fast path for bios crossing AL extent boundaries */
+	if (first != last)
+		return false;
+
+	return _al_get_nonblock(device, first) != NULL;
 }
 
-#if (PAGE_SHIFT + 3) < (AL_EXTENT_SHIFT - BM_BLOCK_SHIFT)
-/* Currently BM_BLOCK_SHIFT, BM_EXT_SHIFT and AL_EXTENT_SHIFT
+#if AL_EXTENT_SHIFT > 27
+/* Condition used to be:
+ * (PAGE_SHIFT + 3) < (AL_EXTENT_SHIFT - BM_BLOCK_SHIFT)
+ * """
+ * Currently BM_BLOCK_SHIFT and AL_EXTENT_SHIFT
  * are still coupled, or assume too much about their relation.
  * Code below will not work if this is violated.
- * Will be cleaned up with some followup patch.
+ * """
+ *
+ * We want configurable bitmap granularity now.
+ * We only allow bytes per bit >= 4k, though: BM_BLOCK_SHIFT >= 12;
+ * increasing it only makes the right side smaller,
+ * which does not change the boolean result.
+ * PAGE_SHIFT is 12 or larger (may be 14, 16 or 18 on some architectures);
+ * increasing it only makes the left side larger, which does not change
+ * the boolean result either.
+ *
+ * Unfortunately I don't remember the specifics of which simplifications
+ * below this is supposed to protect against.
+ * But assuming it is still relevant,
+ * we keep AL_EXTENT_SHIFT at 22, and must not increase it above 27
+ * without proving that the code below still works.
  */
 # error FIXME
 #endif
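+/* Worked check of the simplified guard (values as stated above): with the
+ * minimum BM_BLOCK_SHIFT of 12 and the minimum PAGE_SHIFT of 12, the old
+ * condition (PAGE_SHIFT + 3) < (AL_EXTENT_SHIFT - BM_BLOCK_SHIFT) reads
+ * 15 < AL_EXTENT_SHIFT - 12, which first triggers at AL_EXTENT_SHIFT == 28;
+ * larger BM_BLOCK_SHIFT or PAGE_SHIFT values only move that point further
+ * out, hence the check for AL_EXTENT_SHIFT > 27 above.
+ */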
 
-static unsigned int al_extent_to_bm_page(unsigned int al_enr)
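+/* First bitmap bit covered by AL extent al_enr. E.g. with AL_EXTENT_SHIFT
+ * == 22 and 4 KiB bitmap granularity (bm_block_shift == 12), each AL
+ * extent covers 1024 bitmap bits, so extent n starts at bit n << 10. */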
+static unsigned long al_extent_to_bm_bit(const struct drbd_bitmap *bm, unsigned int al_enr)
 {
-	return al_enr >>
-		/* bit to page */
-		((PAGE_SHIFT + 3) -
-		/* al extent number to bit */
-		 (AL_EXTENT_SHIFT - BM_BLOCK_SHIFT));
+	return (unsigned long)al_enr << (AL_EXTENT_SHIFT - bm->bm_block_shift);
 }
 
 static sector_t al_tr_number_to_on_disk_sector(struct drbd_device *device)
@@ -315,12 +328,14 @@ static sector_t al_tr_number_to_on_disk_sector(struct drbd_device *device)
 
 static int __al_write_transaction(struct drbd_device *device, struct al_transaction_on_disk *buffer)
 {
+	struct drbd_bitmap *bm = device->bitmap;
 	struct lc_element *e;
 	sector_t sector;
 	int i, mx;
 	unsigned extent_nr;
 	unsigned crc = 0;
 	int err = 0;
+	ktime_var_for_accounting(start_kt);
 
 	memset(buffer, 0, sizeof(*buffer));
 	buffer->magic = cpu_to_be32(DRBD_AL_MAGIC);
@@ -342,9 +357,13 @@ static int __al_write_transaction(struct drbd_device *device, struct al_transact
 		}
 		buffer->update_slot_nr[i] = cpu_to_be16(e->lc_index);
 		buffer->update_extent_nr[i] = cpu_to_be32(e->lc_new_number);
-		if (e->lc_number != LC_FREE)
-			drbd_bm_mark_for_writeout(device,
-					al_extent_to_bm_page(e->lc_number));
+		if (e->lc_number != LC_FREE) {
+			unsigned long start, end;
+
+			start = al_extent_to_bm_bit(bm, e->lc_number);
+			end = al_extent_to_bm_bit(bm, e->lc_number + 1) - 1;
+			drbd_bm_mark_range_for_writeout(device, start, end);
+		}
 		i++;
 	}
 	spin_unlock_irq(&device->al_lock);
@@ -378,22 +397,21 @@ static int __al_write_transaction(struct drbd_device *device, struct al_transact
 	crc = crc32c(0, buffer, 4096);
 	buffer->crc32c = cpu_to_be32(crc);
 
-	if (drbd_bm_write_hinted(device))
-		err = -EIO;
-	else {
-		bool write_al_updates;
-		rcu_read_lock();
-		write_al_updates = rcu_dereference(device->ldev->disk_conf)->al_updates;
-		rcu_read_unlock();
-		if (write_al_updates) {
-			if (drbd_md_sync_page_io(device, device->ldev, sector, REQ_OP_WRITE)) {
-				err = -EIO;
-				drbd_chk_io_error(device, 1, DRBD_META_IO_ERROR);
-			} else {
-				device->al_tr_number++;
-				device->al_writ_cnt++;
-			}
+	ktime_aggregate_delta(device, start_kt, al_before_bm_write_hinted_kt);
+	err = drbd_bm_write_hinted(device);
+	if (!err) {
+		ktime_aggregate_delta(device, start_kt, al_mid_kt);
+		if (drbd_md_sync_page_io(device, device->ldev, sector, REQ_OP_WRITE)) {
+			err = -EIO;
+			drbd_handle_io_error(device, DRBD_META_IO_ERROR);
+		} else {
+			device->al_tr_number++;
+			device->al_writ_cnt++;
+			device->al_histogram[min_t(unsigned int,
+					device->act_log->pending_changes,
+					AL_UPDATES_PER_TRANSACTION)]++;
 		}
+		ktime_aggregate_delta(device, start_kt, al_after_sync_page_kt);
 	}
 
 	return err;
@@ -406,15 +424,15 @@ static int al_write_transaction(struct drbd_device *device)
 
 	if (!get_ldev(device)) {
 		drbd_err(device, "disk is %s, cannot start al transaction\n",
-			drbd_disk_str(device->state.disk));
+			drbd_disk_str(device->disk_state[NOW]));
 		return -EIO;
 	}
 
 	/* The bitmap write may have failed, causing a state change. */
-	if (device->state.disk < D_INCONSISTENT) {
+	if (device->disk_state[NOW] < D_INCONSISTENT) {
 		drbd_err(device,
 			"disk is %s, cannot write al transaction\n",
-			drbd_disk_str(device->state.disk));
+			drbd_disk_str(device->disk_state[NOW]));
 		put_ldev(device);
 		return -EIO;
 	}
@@ -435,27 +453,47 @@ static int al_write_transaction(struct drbd_device *device)
 	return err;
 }
 
+bool drbd_al_try_lock(struct drbd_device *device)
+{
+	bool locked;
+
+	spin_lock_irq(&device->al_lock);
+	locked = lc_try_lock(device->act_log);
+	spin_unlock_irq(&device->al_lock);
+
+	return locked;
+}
+
+bool drbd_al_try_lock_for_transaction(struct drbd_device *device)
+{
+	bool locked;
+
+	spin_lock_irq(&device->al_lock);
+	locked = lc_try_lock_for_transaction(device->act_log);
+	spin_unlock_irq(&device->al_lock);
+
+	return locked;
+}
 
 void drbd_al_begin_io_commit(struct drbd_device *device)
 {
 	bool locked = false;
 
-	/* Serialize multiple transactions.
-	 * This uses test_and_set_bit, memory barrier is implicit.
-	 */
+	if (drbd_md_dax_active(device->ldev)) {
+		drbd_dax_al_begin_io_commit(device);
+		return;
+	}
+
 	wait_event(device->al_wait,
 			device->act_log->pending_changes == 0 ||
-			(locked = lc_try_lock_for_transaction(device->act_log)));
+			(locked = drbd_al_try_lock_for_transaction(device)));
 
 	if (locked) {
-		/* Double check: it may have been committed by someone else,
-		 * while we have been waiting for the lock. */
+		/* Double check: it may have been committed by someone else
+		 * while we were waiting for the lock. */
 		if (device->act_log->pending_changes) {
-			bool write_al_updates;
-
-			rcu_read_lock();
-			write_al_updates = rcu_dereference(device->ldev->disk_conf)->al_updates;
-			rcu_read_unlock();
+			bool write_al_updates = !(device->ldev->md.flags & MDF_AL_DISABLED);
 
 			if (write_al_updates)
 				al_write_transaction(device);
@@ -472,13 +510,32 @@ void drbd_al_begin_io_commit(struct drbd_device *device)
 	}
 }
 
-/*
- * @delegate:   delegate activity log I/O to the worker thread
- */
-void drbd_al_begin_io(struct drbd_device *device, struct drbd_interval *i)
+static bool put_actlog(struct drbd_device *device, unsigned int first, unsigned int last)
 {
-	if (drbd_al_begin_io_prepare(device, i))
-		drbd_al_begin_io_commit(device);
+	struct lc_element *extent;
+	unsigned long flags;
+	unsigned int enr;
+	bool wake = false;
+
+	D_ASSERT(device, first <= last);
+	spin_lock_irqsave(&device->al_lock, flags);
+	for (enr = first; enr <= last; enr++) {
+		extent = lc_find(device->act_log, enr);
+		/* Yes, this masks a bug elsewhere.  However, during normal
+		 * operation this is harmless, so no need to crash the kernel
+		 * by the BUG_ON(refcount == 0) in lc_put().
+		 */
+		if (!extent || extent->refcnt == 0) {
+			drbd_err(device, "al_complete_io() called on inactive extent %u\n", enr);
+			continue;
+		}
+		if (lc_put(device->act_log, extent) == 0)
+			wake = true;
+	}
+	spin_unlock_irqrestore(&device->al_lock, flags);
+	if (wake)
+		wake_up(&device->al_wait);
+	return wake;
 }
 
 int drbd_al_begin_io_nonblock(struct drbd_device *device, struct drbd_interval *i)
@@ -497,20 +554,6 @@ int drbd_al_begin_io_nonblock(struct drbd_device *device, struct drbd_interval *
 
 	D_ASSERT(device, first <= last);
 
-	/* Is resync active in this area? */
-	for (enr = first; enr <= last; enr++) {
-		struct lc_element *tmp;
-		tmp = lc_find(device->resync, enr/AL_EXT_PER_BM_SECT);
-		if (unlikely(tmp != NULL)) {
-			struct bm_extent  *bm_ext = lc_entry(tmp, struct bm_extent, lce);
-			if (test_bit(BME_NO_WRITES, &bm_ext->flags)) {
-				if (!test_and_set_bit(BME_PRIORITY, &bm_ext->flags))
-					return -EBUSY;
-				return -EWOULDBLOCK;
-			}
-		}
-	}
-
 	/* Try to checkout the refcounts. */
 	for (enr = first; enr <= last; enr++) {
 		struct lc_element *al_ext;
@@ -530,33 +573,18 @@ int drbd_al_begin_io_nonblock(struct drbd_device *device, struct drbd_interval *
 	return 0;
 }
 
-void drbd_al_complete_io(struct drbd_device *device, struct drbd_interval *i)
+/* put activity log extent references corresponding to interval i, return true
+ * if at least one extent is now unreferenced. */
+bool drbd_al_complete_io(struct drbd_device *device, struct drbd_interval *i)
 {
 	/* for bios crossing activity log extent boundaries,
 	 * we may need to activate two extents in one go */
 	unsigned first = i->sector >> (AL_EXTENT_SHIFT-9);
 	unsigned last = i->size == 0 ? first : (i->sector + (i->size >> 9) - 1) >> (AL_EXTENT_SHIFT-9);
-	unsigned enr;
-	struct lc_element *extent;
-	unsigned long flags;
 
 	D_ASSERT(device, first <= last);
-	spin_lock_irqsave(&device->al_lock, flags);
 
-	for (enr = first; enr <= last; enr++) {
-		extent = lc_find(device->act_log, enr);
-		/* Yes, this masks a bug elsewhere.  However, during normal
-		 * operation this is harmless, so no need to crash the kernel
-		 * by the BUG_ON(refcount == 0) in lc_put().
-		 */
-		if (!extent || extent->refcnt == 0) {
-			drbd_err(device, "al_complete_io() called on inactive extent %u\n", enr);
-			continue;
-		}
-		lc_put(device->act_log, extent);
-	}
-	spin_unlock_irqrestore(&device->al_lock, flags);
-	wake_up(&device->al_wait);
+	return put_actlog(device, first, last);
 }
 
 static int _try_lc_del(struct drbd_device *device, struct lc_element *al_ext)
@@ -605,6 +633,9 @@ int drbd_al_initialize(struct drbd_device *device, void *buffer)
 	int al_size_4k = md->al_stripes * md->al_stripe_size_4k;
 	int i;
 
+	if (drbd_md_dax_active(device->ldev))
+		return drbd_dax_al_initialize(device);
+
 	__al_write_transaction(device, al);
 	/* There may or may not have been a pending transaction. */
 	spin_lock_irq(&device->al_lock);
@@ -622,219 +653,91 @@ int drbd_al_initialize(struct drbd_device *device, void *buffer)
 	return 0;
 }
 
-static const char *drbd_change_sync_fname[] = {
-	[RECORD_RS_FAILED] = "drbd_rs_failed_io",
-	[SET_IN_SYNC] = "drbd_set_in_sync",
-	[SET_OUT_OF_SYNC] = "drbd_set_out_of_sync"
-};
-
-/* ATTENTION. The AL's extents are 4MB each, while the extents in the
- * resync LRU-cache are 16MB each.
- * The caller of this function has to hold an get_ldev() reference.
- *
- * Adjusts the caching members ->rs_left (success) or ->rs_failed (!success),
- * potentially pulling in (and recounting the corresponding bits)
- * this resync extent into the resync extent lru cache.
- *
- * Returns whether all bits have been cleared for this resync extent,
- * precisely: (rs_left <= rs_failed)
- *
- * TODO will be obsoleted once we have a caching lru of the on disk bitmap
- */
-static bool update_rs_extent(struct drbd_device *device,
-		unsigned int enr, int count,
-		enum update_sync_bits_mode mode)
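+/* Record resync progress marks. The marks form a small ring
+ * (DRBD_SYNC_MARKS entries), each pairing a timestamp (rs_mark_time) with
+ * the amount of out-of-sync data at that time (rs_mark_left); consumers
+ * such as the progress reporting can estimate the current sync rate from
+ * the difference between an older mark and still_to_go. (Descriptive
+ * note; the exact consumers live outside this file.)
+ */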
+void drbd_advance_rs_marks(struct drbd_peer_device *peer_device, unsigned long still_to_go)
 {
-	struct lc_element *e;
+	unsigned long now;
+	int next;
 
-	D_ASSERT(device, atomic_read(&device->local_cnt));
-
-	/* When setting out-of-sync bits,
-	 * we don't need it cached (lc_find).
-	 * But if it is present in the cache,
-	 * we should update the cached bit count.
-	 * Otherwise, that extent should be in the resync extent lru cache
-	 * already -- or we want to pull it in if necessary -- (lc_get),
-	 * then update and check rs_left and rs_failed. */
-	if (mode == SET_OUT_OF_SYNC)
-		e = lc_find(device->resync, enr);
-	else
-		e = lc_get(device->resync, enr);
-	if (e) {
-		struct bm_extent *ext = lc_entry(e, struct bm_extent, lce);
-		if (ext->lce.lc_number == enr) {
-			if (mode == SET_IN_SYNC)
-				ext->rs_left -= count;
-			else if (mode == SET_OUT_OF_SYNC)
-				ext->rs_left += count;
-			else
-				ext->rs_failed += count;
-			if (ext->rs_left < ext->rs_failed) {
-				drbd_warn(device, "BAD! enr=%u rs_left=%d "
-				    "rs_failed=%d count=%d cstate=%s\n",
-				     ext->lce.lc_number, ext->rs_left,
-				     ext->rs_failed, count,
-				     drbd_conn_str(device->state.conn));
-
-				/* We don't expect to be able to clear more bits
-				 * than have been set when we originally counted
-				 * the set bits to cache that value in ext->rs_left.
-				 * Whatever the reason (disconnect during resync,
-				 * delayed local completion of an application write),
-				 * try to fix it up by recounting here. */
-				ext->rs_left = drbd_bm_e_weight(device, enr);
-			}
-		} else {
-			/* Normally this element should be in the cache,
-			 * since drbd_rs_begin_io() pulled it already in.
-			 *
-			 * But maybe an application write finished, and we set
-			 * something outside the resync lru_cache in sync.
-			 */
-			int rs_left = drbd_bm_e_weight(device, enr);
-			if (ext->flags != 0) {
-				drbd_warn(device, "changing resync lce: %d[%u;%02lx]"
-				     " -> %d[%u;00]\n",
-				     ext->lce.lc_number, ext->rs_left,
-				     ext->flags, enr, rs_left);
-				ext->flags = 0;
-			}
-			if (ext->rs_failed) {
-				drbd_warn(device, "Kicking resync_lru element enr=%u "
-				     "out with rs_failed=%d\n",
-				     ext->lce.lc_number, ext->rs_failed);
-			}
-			ext->rs_left = rs_left;
-			ext->rs_failed = (mode == RECORD_RS_FAILED) ? count : 0;
-			/* we don't keep a persistent log of the resync lru,
-			 * we can commit any change right away. */
-			lc_committed(device->resync);
-		}
-		if (mode != SET_OUT_OF_SYNC)
-			lc_put(device->resync, &ext->lce);
-		/* no race, we are within the al_lock! */
+	/* report progress and advance marks only if we made progress */
+	if (peer_device->rs_mark_left[peer_device->rs_last_mark] == still_to_go)
+		return;
 
-		if (ext->rs_left <= ext->rs_failed) {
-			ext->rs_failed = 0;
-			return true;
-		}
-	} else if (mode != SET_OUT_OF_SYNC) {
-		/* be quiet if lc_find() did not find it. */
-		drbd_err(device, "lc_get() failed! locked=%d/%d flags=%lu\n",
-		    device->resync_locked,
-		    device->resync->nr_elements,
-		    device->resync->flags);
-	}
-	return false;
-}
+	/* report progress and advance marks at most once every DRBD_SYNC_MARK_STEP (3 seconds) */
+	now = jiffies;
+	if (!time_after_eq(now, peer_device->rs_last_progress_report_ts + DRBD_SYNC_MARK_STEP))
+		return;
 
-void drbd_advance_rs_marks(struct drbd_peer_device *peer_device, unsigned long still_to_go)
-{
-	struct drbd_device *device = peer_device->device;
-	unsigned long now = jiffies;
-	unsigned long last = device->rs_mark_time[device->rs_last_mark];
-	int next = (device->rs_last_mark + 1) % DRBD_SYNC_MARKS;
-	if (time_after_eq(now, last + DRBD_SYNC_MARK_STEP)) {
-		if (device->rs_mark_left[device->rs_last_mark] != still_to_go &&
-		    device->state.conn != C_PAUSED_SYNC_T &&
-		    device->state.conn != C_PAUSED_SYNC_S) {
-			device->rs_mark_time[next] = now;
-			device->rs_mark_left[next] = still_to_go;
-			device->rs_last_mark = next;
-		}
+	/* Do not advance marks if we are "paused" */
+	if (peer_device->repl_state[NOW] != L_PAUSED_SYNC_T &&
+	    peer_device->repl_state[NOW] != L_PAUSED_SYNC_S) {
+		next = (peer_device->rs_last_mark + 1) % DRBD_SYNC_MARKS;
+		peer_device->rs_mark_time[next] = now;
+		peer_device->rs_mark_left[next] = still_to_go;
+		peer_device->rs_last_mark = next;
 	}
-}
 
-/* It is called lazy update, so don't do write-out too often. */
-static bool lazy_bitmap_update_due(struct drbd_device *device)
-{
-	return time_after(jiffies, device->rs_last_bcast + 2*HZ);
+	/* But still report progress even if paused. */
+	peer_device->rs_last_progress_report_ts = now;
+	drbd_peer_device_post_work(peer_device, RS_PROGRESS);
 }
 
-static void maybe_schedule_on_disk_bitmap_update(struct drbd_device *device, bool rs_done)
+/* It is called a lazy update, so don't do the write-out too often. */
+bool drbd_lazy_bitmap_update_due(struct drbd_peer_device *peer_device)
 {
-	if (rs_done) {
-		struct drbd_connection *connection = first_peer_device(device)->connection;
-		if (connection->agreed_pro_version <= 95 ||
-		    is_sync_target_state(device->state.conn))
-			set_bit(RS_DONE, &device->flags);
-			/* and also set RS_PROGRESS below */
-
-		/* Else: rather wait for explicit notification via receive_state,
-		 * to avoid uuids-rotated-too-fast causing full resync
-		 * in next handshake, in case the replication link breaks
-		 * at the most unfortunate time... */
-	} else if (!lazy_bitmap_update_due(device))
-		return;
-
-	drbd_device_post_work(device, RS_PROGRESS);
+	return time_after(jiffies, peer_device->rs_last_writeout + 2*HZ);
 }
 
-static int update_sync_bits(struct drbd_device *device,
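+/* Apply one of the update_sync_bits_mode operations to the bitmap slot of
+ * the given peer device, for bits sbnr..ebnr inclusive. Returns the number
+ * of bits that actually changed (or, for RECORD_RS_FAILED, recounted). */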
+static int update_sync_bits(struct drbd_peer_device *peer_device,
 		unsigned long sbnr, unsigned long ebnr,
 		enum update_sync_bits_mode mode)
 {
-	/*
-	 * We keep a count of set bits per resync-extent in the ->rs_left
-	 * caching member, so we need to loop and work within the resync extent
-	 * alignment. Typically this loop will execute exactly once.
-	 */
-	unsigned long flags;
+	struct drbd_device *device = peer_device->device;
 	unsigned long count = 0;
-	unsigned int cleared = 0;
-	while (sbnr <= ebnr) {
-		/* set temporary boundary bit number to last bit number within
-		 * the resync extent of the current start bit number,
-		 * but cap at provided end bit number */
-		unsigned long tbnr = min(ebnr, sbnr | BM_BLOCKS_PER_BM_EXT_MASK);
-		unsigned long c;
-
-		if (mode == RECORD_RS_FAILED)
-			/* Only called from drbd_rs_failed_io(), bits
-			 * supposedly still set.  Recount, maybe some
-			 * of the bits have been successfully cleared
-			 * by application IO meanwhile.
-			 */
-			c = drbd_bm_count_bits(device, sbnr, tbnr);
-		else if (mode == SET_IN_SYNC)
-			c = drbd_bm_clear_bits(device, sbnr, tbnr);
-		else /* if (mode == SET_OUT_OF_SYNC) */
-			c = drbd_bm_set_bits(device, sbnr, tbnr);
+	int bmi = peer_device->bitmap_index;
+
+	if (mode == RECORD_RS_FAILED)
+		/* Only called from drbd_rs_failed_io(), bits
+		 * supposedly still set.  Recount, maybe some
+		 * of the bits have been successfully cleared
+		 * by application IO meanwhile.
+		 */
+		count = drbd_bm_count_bits(device, bmi, sbnr, ebnr);
+	else if (mode == SET_IN_SYNC)
+		count = drbd_bm_clear_bits(device, bmi, sbnr, ebnr);
+	else /* if (mode == SET_OUT_OF_SYNC) */
+		count = drbd_bm_set_bits(device, bmi, sbnr, ebnr);
 
-		if (c) {
-			spin_lock_irqsave(&device->al_lock, flags);
-			cleared += update_rs_extent(device, BM_BIT_TO_EXT(sbnr), c, mode);
-			spin_unlock_irqrestore(&device->al_lock, flags);
-			count += c;
-		}
-		sbnr = tbnr + 1;
-	}
 	if (count) {
 		if (mode == SET_IN_SYNC) {
-			unsigned long still_to_go = drbd_bm_total_weight(device);
-			bool rs_is_done = (still_to_go <= device->rs_failed);
-			drbd_advance_rs_marks(first_peer_device(device), still_to_go);
-			if (cleared || rs_is_done)
-				maybe_schedule_on_disk_bitmap_update(device, rs_is_done);
-		} else if (mode == RECORD_RS_FAILED)
-			device->rs_failed += count;
+			unsigned long still_to_go = drbd_bm_total_weight(peer_device);
+
+			drbd_advance_rs_marks(peer_device, still_to_go);
+
+			if (drbd_lazy_bitmap_update_due(peer_device))
+				drbd_peer_device_post_work(peer_device, RS_LAZY_BM_WRITE);
+
+			if (peer_device->connection->agreed_pro_version <= 95 &&
+					still_to_go <= peer_device->rs_failed &&
+					is_sync_source_state(peer_device, NOW))
+				drbd_peer_device_post_work(peer_device, RS_DONE);
+		} else if (mode == RECORD_RS_FAILED) {
+			peer_device->rs_failed += count;
+		} else /* if (mode == SET_OUT_OF_SYNC) */ {
+			enum drbd_repl_state repl_state = peer_device->repl_state[NOW];
+			if (repl_state >= L_SYNC_SOURCE && repl_state <= L_PAUSED_SYNC_T)
+				peer_device->rs_total += count;
+		}
 		wake_up(&device->al_wait);
 	}
 	return count;
 }
 
-static bool plausible_request_size(int size)
-{
-	return size > 0
-		&& size <= DRBD_MAX_BATCH_BIO_SIZE
-		&& IS_ALIGNED(size, 512);
-}
-
-/* clear the bit corresponding to the piece of storage in question:
- * size byte of data starting from sector.  Only clear bits of the affected
- * one or more _aligned_ BM_BLOCK_SIZE blocks.
+/* Change bits corresponding to the piece of storage in question:
+ * size bytes of data starting from sector.
+ * Only clear bits for fully affected _aligned_ BM_BLOCK_SIZE blocks.
+ * Set bits even for partially affected blocks.
  *
- * called by worker on C_SYNC_TARGET and receiver on SyncSource.
+ * called by worker on L_SYNC_TARGET and receiver on SyncSource.
  *
  */
 int __drbd_change_sync(struct drbd_peer_device *peer_device, sector_t sector, int size,
@@ -842,395 +745,152 @@ int __drbd_change_sync(struct drbd_peer_device *peer_device, sector_t sector, in
 {
 	/* Is called from worker and receiver context _only_ */
 	struct drbd_device *device = peer_device->device;
+	struct drbd_bitmap *bm;
 	unsigned long sbnr, ebnr, lbnr;
 	unsigned long count = 0;
 	sector_t esector, nr_sectors;
 
-	/* This would be an empty REQ_PREFLUSH, be silent. */
+	/* This would be an empty REQ_OP_FLUSH, be silent. */
 	if ((mode == SET_OUT_OF_SYNC) && size == 0)
 		return 0;
 
-	if (!plausible_request_size(size)) {
-		drbd_err(device, "%s: sector=%llus size=%d nonsense!\n",
-				drbd_change_sync_fname[mode],
-				(unsigned long long)sector, size);
+	if (peer_device->bitmap_index == -1) /* no bitmap... */
 		return 0;
-	}
 
 	if (!get_ldev(device))
 		return 0; /* no disk, no metadata, no bitmap to manipulate bits in */
 
+	bm = device->bitmap;
+
 	nr_sectors = get_capacity(device->vdisk);
 	esector = sector + (size >> 9) - 1;
 
-	if (!expect(device, sector < nr_sectors))
+	if (!expect(peer_device, sector < nr_sectors))
 		goto out;
-	if (!expect(device, esector < nr_sectors))
+	if (!expect(peer_device, esector < nr_sectors))
 		esector = nr_sectors - 1;
 
-	lbnr = BM_SECT_TO_BIT(nr_sectors-1);
+	lbnr = bm_sect_to_bit(bm, nr_sectors-1);
 
 	if (mode == SET_IN_SYNC) {
 		/* Round up start sector, round down end sector.  We make sure
 		 * we only clear full, aligned, BM_BLOCK_SIZE blocks. */
-		if (unlikely(esector < BM_SECT_PER_BIT-1))
+		if (unlikely(esector < bm_sect_per_bit(bm)-1))
 			goto out;
 		if (unlikely(esector == (nr_sectors-1)))
 			ebnr = lbnr;
 		else
-			ebnr = BM_SECT_TO_BIT(esector - (BM_SECT_PER_BIT-1));
-		sbnr = BM_SECT_TO_BIT(sector + BM_SECT_PER_BIT-1);
+			ebnr = bm_sect_to_bit(bm, esector - (bm_sect_per_bit(bm)-1));
+		sbnr = bm_sect_to_bit(bm, sector + bm_sect_per_bit(bm)-1);
 	} else {
 		/* We set it out of sync, or record resync failure.
 		 * Should not round anything here. */
-		sbnr = BM_SECT_TO_BIT(sector);
-		ebnr = BM_SECT_TO_BIT(esector);
+		sbnr = bm_sect_to_bit(bm, sector);
+		ebnr = bm_sect_to_bit(bm, esector);
 	}
 
-	count = update_sync_bits(device, sbnr, ebnr, mode);
+	count = update_sync_bits(peer_device, sbnr, ebnr, mode);
 out:
 	put_ldev(device);
 	return count;
 }
 
-static
-struct bm_extent *_bme_get(struct drbd_device *device, unsigned int enr)
-{
-	struct lc_element *e;
-	struct bm_extent *bm_ext;
-	int wakeup = 0;
-	unsigned long rs_flags;
-
-	spin_lock_irq(&device->al_lock);
-	if (device->resync_locked > device->resync->nr_elements/2) {
-		spin_unlock_irq(&device->al_lock);
-		return NULL;
-	}
-	e = lc_get(device->resync, enr);
-	bm_ext = e ? lc_entry(e, struct bm_extent, lce) : NULL;
-	if (bm_ext) {
-		if (bm_ext->lce.lc_number != enr) {
-			bm_ext->rs_left = drbd_bm_e_weight(device, enr);
-			bm_ext->rs_failed = 0;
-			lc_committed(device->resync);
-			wakeup = 1;
-		}
-		if (bm_ext->lce.refcnt == 1)
-			device->resync_locked++;
-		set_bit(BME_NO_WRITES, &bm_ext->flags);
-	}
-	rs_flags = device->resync->flags;
-	spin_unlock_irq(&device->al_lock);
-	if (wakeup)
-		wake_up(&device->al_wait);
-
-	if (!bm_ext) {
-		if (rs_flags & LC_STARVING)
-			drbd_warn(device, "Have to wait for element"
-			     " (resync LRU too small?)\n");
-		BUG_ON(rs_flags & LC_LOCKED);
-	}
-
-	return bm_ext;
-}
-
-static int _is_in_al(struct drbd_device *device, unsigned int enr)
+unsigned long drbd_set_all_out_of_sync(struct drbd_device *device, sector_t sector, int size)
 {
-	int rv;
-
-	spin_lock_irq(&device->al_lock);
-	rv = lc_is_used(device->act_log, enr);
-	spin_unlock_irq(&device->al_lock);
-
-	return rv;
+	return drbd_set_sync(device, sector, size, -1, -1);
 }
 
 /**
- * drbd_rs_begin_io() - Gets an extent in the resync LRU cache and sets it to BME_LOCKED
- * @device:	DRBD device.
- * @sector:	The sector number.
+ * drbd_set_sync() - Set a disk range in or out of sync
+ * @device:	DRBD device
+ * @sector:	start sector of disk range
+ * @size:	size of disk range in bytes
+ * @bits:	for each bitmap index in @mask: 1 = set out of sync, 0 = set in sync
+ * @mask:	bitmask of bitmap indexes to modify
  *
- * This functions sleeps on al_wait.
- *
- * Returns: %0 on success, -EINTR if interrupted.
+ * Returns a mask of the bitmap indexes which were modified.
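+ *
+ * For example (illustrative): to mark a range out of sync only towards the
+ * peer with bitmap index 1, pass bits = mask = (1UL << 1); to mark it in
+ * sync towards that peer only, pass bits = 0, mask = (1UL << 1).
+ * drbd_set_all_out_of_sync() passes bits = mask = -1 to address all
+ * bitmap slots.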
  */
-int drbd_rs_begin_io(struct drbd_device *device, sector_t sector)
+unsigned long drbd_set_sync(struct drbd_device *device, sector_t sector, int size,
+		   unsigned long bits, unsigned long mask)
 {
-	unsigned int enr = BM_SECT_TO_EXT(sector);
-	struct bm_extent *bm_ext;
-	int i, sig;
-	bool sa;
-
-retry:
-	sig = wait_event_interruptible(device->al_wait,
-			(bm_ext = _bme_get(device, enr)));
-	if (sig)
-		return -EINTR;
-
-	if (test_bit(BME_LOCKED, &bm_ext->flags))
-		return 0;
-
-	/* step aside only while we are above c-min-rate; unless disabled. */
-	sa = drbd_rs_c_min_rate_throttle(device);
-
-	for (i = 0; i < AL_EXT_PER_BM_SECT; i++) {
-		sig = wait_event_interruptible(device->al_wait,
-					       !_is_in_al(device, enr * AL_EXT_PER_BM_SECT + i) ||
-					       (sa && test_bit(BME_PRIORITY, &bm_ext->flags)));
+	long set_start, set_end, clear_start, clear_end;
+	struct drbd_peer_device *peer_device;
+	struct drbd_bitmap *bm;
+	sector_t esector, nr_sectors;
+	unsigned long irq_flags;
+	unsigned long modified = 0;
 
-		if (sig || (sa && test_bit(BME_PRIORITY, &bm_ext->flags))) {
-			spin_lock_irq(&device->al_lock);
-			if (lc_put(device->resync, &bm_ext->lce) == 0) {
-				bm_ext->flags = 0; /* clears BME_NO_WRITES and eventually BME_PRIORITY */
-				device->resync_locked--;
-				wake_up(&device->al_wait);
-			}
-			spin_unlock_irq(&device->al_lock);
-			if (sig)
-				return -EINTR;
-			if (schedule_timeout_interruptible(HZ/10))
-				return -EINTR;
-			goto retry;
-		}
+	if (size <= 0 || !IS_ALIGNED(size, 512)) {
+		drbd_err(device, "%s sector: %llus, size: %d\n",
+			 __func__, (unsigned long long)sector, size);
+		return 0;
 	}
-	set_bit(BME_LOCKED, &bm_ext->flags);
-	return 0;
-}
-
-/**
- * drbd_try_rs_begin_io() - Gets an extent in the resync LRU cache, does not sleep
- * @peer_device: DRBD device.
- * @sector:	The sector number.
- *
- * Gets an extent in the resync LRU cache, sets it to BME_NO_WRITES, then
- * tries to set it to BME_LOCKED.
- *
- * Returns: %0 upon success, and -EAGAIN
- * if there is still application IO going on in this area.
- */
-int drbd_try_rs_begin_io(struct drbd_peer_device *peer_device, sector_t sector)
-{
-	struct drbd_device *device = peer_device->device;
-	unsigned int enr = BM_SECT_TO_EXT(sector);
-	const unsigned int al_enr = enr*AL_EXT_PER_BM_SECT;
-	struct lc_element *e;
-	struct bm_extent *bm_ext;
-	int i;
-	bool throttle = drbd_rs_should_slow_down(peer_device, sector, true);
 
-	/* If we need to throttle, a half-locked (only marked BME_NO_WRITES,
-	 * not yet BME_LOCKED) extent needs to be kicked out explicitly if we
-	 * need to throttle. There is at most one such half-locked extent,
-	 * which is remembered in resync_wenr. */
-
-	if (throttle && device->resync_wenr != enr)
-		return -EAGAIN;
-
-	spin_lock_irq(&device->al_lock);
-	if (device->resync_wenr != LC_FREE && device->resync_wenr != enr) {
-		/* in case you have very heavy scattered io, it may
-		 * stall the syncer undefined if we give up the ref count
-		 * when we try again and requeue.
-		 *
-		 * if we don't give up the refcount, but the next time
-		 * we are scheduled this extent has been "synced" by new
-		 * application writes, we'd miss the lc_put on the
-		 * extent we keep the refcount on.
-		 * so we remembered which extent we had to try again, and
-		 * if the next requested one is something else, we do
-		 * the lc_put here...
-		 * we also have to wake_up
-		 */
-		e = lc_find(device->resync, device->resync_wenr);
-		bm_ext = e ? lc_entry(e, struct bm_extent, lce) : NULL;
-		if (bm_ext) {
-			D_ASSERT(device, !test_bit(BME_LOCKED, &bm_ext->flags));
-			D_ASSERT(device, test_bit(BME_NO_WRITES, &bm_ext->flags));
-			clear_bit(BME_NO_WRITES, &bm_ext->flags);
-			device->resync_wenr = LC_FREE;
-			if (lc_put(device->resync, &bm_ext->lce) == 0) {
-				bm_ext->flags = 0;
-				device->resync_locked--;
-			}
-			wake_up(&device->al_wait);
-		} else {
-			drbd_alert(device, "LOGIC BUG\n");
-		}
-	}
-	/* TRY. */
-	e = lc_try_get(device->resync, enr);
-	bm_ext = e ? lc_entry(e, struct bm_extent, lce) : NULL;
-	if (bm_ext) {
-		if (test_bit(BME_LOCKED, &bm_ext->flags))
-			goto proceed;
-		if (!test_and_set_bit(BME_NO_WRITES, &bm_ext->flags)) {
-			device->resync_locked++;
-		} else {
-			/* we did set the BME_NO_WRITES,
-			 * but then could not set BME_LOCKED,
-			 * so we tried again.
-			 * drop the extra reference. */
-			bm_ext->lce.refcnt--;
-			D_ASSERT(device, bm_ext->lce.refcnt > 0);
-		}
-		goto check_al;
-	} else {
-		/* do we rather want to try later? */
-		if (device->resync_locked > device->resync->nr_elements-3)
-			goto try_again;
-		/* Do or do not. There is no try. -- Yoda */
-		e = lc_get(device->resync, enr);
-		bm_ext = e ? lc_entry(e, struct bm_extent, lce) : NULL;
-		if (!bm_ext) {
-			const unsigned long rs_flags = device->resync->flags;
-			if (rs_flags & LC_STARVING)
-				drbd_warn(device, "Have to wait for element"
-				     " (resync LRU too small?)\n");
-			BUG_ON(rs_flags & LC_LOCKED);
-			goto try_again;
-		}
-		if (bm_ext->lce.lc_number != enr) {
-			bm_ext->rs_left = drbd_bm_e_weight(device, enr);
-			bm_ext->rs_failed = 0;
-			lc_committed(device->resync);
-			wake_up(&device->al_wait);
-			D_ASSERT(device, test_bit(BME_LOCKED, &bm_ext->flags) == 0);
-		}
-		set_bit(BME_NO_WRITES, &bm_ext->flags);
-		D_ASSERT(device, bm_ext->lce.refcnt == 1);
-		device->resync_locked++;
-		goto check_al;
-	}
-check_al:
-	for (i = 0; i < AL_EXT_PER_BM_SECT; i++) {
-		if (lc_is_used(device->act_log, al_enr+i))
-			goto try_again;
-	}
-	set_bit(BME_LOCKED, &bm_ext->flags);
-proceed:
-	device->resync_wenr = LC_FREE;
-	spin_unlock_irq(&device->al_lock);
-	return 0;
+	if (!get_ldev(device))
+		return 0; /* no disk, no metadata, no bitmap to set bits in */
 
-try_again:
-	if (bm_ext) {
-		if (throttle) {
-			D_ASSERT(device, !test_bit(BME_LOCKED, &bm_ext->flags));
-			D_ASSERT(device, test_bit(BME_NO_WRITES, &bm_ext->flags));
-			clear_bit(BME_NO_WRITES, &bm_ext->flags);
-			device->resync_wenr = LC_FREE;
-			if (lc_put(device->resync, &bm_ext->lce) == 0) {
-				bm_ext->flags = 0;
-				device->resync_locked--;
-			}
-			wake_up(&device->al_wait);
-		} else
-			device->resync_wenr = enr;
-	}
-	spin_unlock_irq(&device->al_lock);
-	return -EAGAIN;
-}
+	bm = device->bitmap;
+	mask &= (1 << bm->bm_max_peers) - 1;
 
-void drbd_rs_complete_io(struct drbd_device *device, sector_t sector)
-{
-	unsigned int enr = BM_SECT_TO_EXT(sector);
-	struct lc_element *e;
-	struct bm_extent *bm_ext;
-	unsigned long flags;
+	nr_sectors = get_capacity(device->vdisk);
+	esector = sector + (size >> 9) - 1;
 
-	spin_lock_irqsave(&device->al_lock, flags);
-	e = lc_find(device->resync, enr);
-	bm_ext = e ? lc_entry(e, struct bm_extent, lce) : NULL;
-	if (!bm_ext) {
-		spin_unlock_irqrestore(&device->al_lock, flags);
-		if (drbd_ratelimit())
-			drbd_err(device, "drbd_rs_complete_io() called, but extent not found\n");
-		return;
-	}
+	if (!expect(device, sector < nr_sectors))
+		goto out;
+	if (!expect(device, esector < nr_sectors))
+		esector = nr_sectors - 1;
 
-	if (bm_ext->lce.refcnt == 0) {
-		spin_unlock_irqrestore(&device->al_lock, flags);
-		drbd_err(device, "drbd_rs_complete_io(,%llu [=%u]) called, "
-		    "but refcnt is 0!?\n",
-		    (unsigned long long)sector, enr);
-		return;
-	}
+	/* For marking sectors as out of sync, we need to round up. */
+	set_start = bm_sect_to_bit(bm, sector);
+	set_end = bm_sect_to_bit(bm, esector);
+
+	/* For marking sectors as in sync, we need to round down except when we
+	 * reach the end of the device: The last bit in the bitmap does not
+	 * account for sectors past the end of the device.
+	 * CLEAR_END can become negative here. */
+	clear_start = bm_sect_to_bit(bm, sector + bm_sect_per_bit(bm) - 1);
+	if (esector == nr_sectors - 1)
+		clear_end = bm_sect_to_bit(bm, esector);
+	else
+		clear_end = bm_sect_to_bit(bm, esector + 1) - 1;
 
-	if (lc_put(device->resync, &bm_ext->lce) == 0) {
-		bm_ext->flags = 0; /* clear BME_LOCKED, BME_NO_WRITES and BME_PRIORITY */
-		device->resync_locked--;
-		wake_up(&device->al_wait);
-	}
+	spin_lock_irqsave(&bm->bm_all_slots_lock, irq_flags);
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		int bitmap_index = peer_device->bitmap_index;
 
-	spin_unlock_irqrestore(&device->al_lock, flags);
-}
+		if (bitmap_index == -1)
+			continue;
 
-/**
- * drbd_rs_cancel_all() - Removes all extents from the resync LRU (even BME_LOCKED)
- * @device:	DRBD device.
- */
-void drbd_rs_cancel_all(struct drbd_device *device)
-{
-	spin_lock_irq(&device->al_lock);
+		if (!test_and_clear_bit(bitmap_index, &mask))
+			continue;
 
-	if (get_ldev_if_state(device, D_FAILED)) { /* Makes sure ->resync is there. */
-		lc_reset(device->resync);
-		put_ldev(device);
+		if (test_bit(bitmap_index, &bits)) {
+			if (update_sync_bits(peer_device, set_start, set_end, SET_OUT_OF_SYNC))
+				__set_bit(bitmap_index, &modified);
+		} else if (clear_start <= clear_end) {
+			if (update_sync_bits(peer_device, clear_start, clear_end, SET_IN_SYNC))
+				__set_bit(bitmap_index, &modified);
+		}
 	}
-	device->resync_locked = 0;
-	device->resync_wenr = LC_FREE;
-	spin_unlock_irq(&device->al_lock);
-	wake_up(&device->al_wait);
-}
-
-/**
- * drbd_rs_del_all() - Gracefully remove all extents from the resync LRU
- * @device:	DRBD device.
- *
- * Returns: %0 upon success, -EAGAIN if at least one reference count was
- * not zero.
- */
-int drbd_rs_del_all(struct drbd_device *device)
-{
-	struct lc_element *e;
-	struct bm_extent *bm_ext;
-	int i;
-
-	spin_lock_irq(&device->al_lock);
-
-	if (get_ldev_if_state(device, D_FAILED)) {
-		/* ok, ->resync is there. */
-		for (i = 0; i < device->resync->nr_elements; i++) {
-			e = lc_element_by_index(device->resync, i);
-			bm_ext = lc_entry(e, struct bm_extent, lce);
-			if (bm_ext->lce.lc_number == LC_FREE)
-				continue;
-			if (bm_ext->lce.lc_number == device->resync_wenr) {
-				drbd_info(device, "dropping %u in drbd_rs_del_all, apparently"
-				     " got 'synced' by application io\n",
-				     device->resync_wenr);
-				D_ASSERT(device, !test_bit(BME_LOCKED, &bm_ext->flags));
-				D_ASSERT(device, test_bit(BME_NO_WRITES, &bm_ext->flags));
-				clear_bit(BME_NO_WRITES, &bm_ext->flags);
-				device->resync_wenr = LC_FREE;
-				lc_put(device->resync, &bm_ext->lce);
-			}
-			if (bm_ext->lce.refcnt != 0) {
-				drbd_info(device, "Retrying drbd_rs_del_all() later. "
-				     "refcnt=%d\n", bm_ext->lce.refcnt);
-				put_ldev(device);
-				spin_unlock_irq(&device->al_lock);
-				return -EAGAIN;
+	rcu_read_unlock();
+	if (mask) {
+		int bitmap_index;
+
+		for_each_set_bit(bitmap_index, &mask, BITS_PER_LONG) {
+			if (test_bit(bitmap_index, &bits)) {
+				if (drbd_bm_set_bits(device, bitmap_index, set_start, set_end))
+					__set_bit(bitmap_index, &modified);
+			} else if (clear_start <= clear_end) {
+				if (drbd_bm_clear_bits(device, bitmap_index,
+							clear_start, clear_end))
+					__set_bit(bitmap_index, &modified);
 			}
-			D_ASSERT(device, !test_bit(BME_LOCKED, &bm_ext->flags));
-			D_ASSERT(device, !test_bit(BME_NO_WRITES, &bm_ext->flags));
-			lc_del(device->resync, &bm_ext->lce);
 		}
-		D_ASSERT(device, device->resync->used == 0);
-		put_ldev(device);
 	}
-	spin_unlock_irq(&device->al_lock);
-	wake_up(&device->al_wait);
+	spin_unlock_irqrestore(&bm->bm_all_slots_lock, irq_flags);
+out:
+	put_ldev(device);
 
-	return 0;
+	return modified;
 }
diff --git a/drivers/block/drbd/drbd_bitmap.c b/drivers/block/drbd/drbd_bitmap.c
index 65ea6ec66bfd..24fc9489b7ec 100644
--- a/drivers/block/drbd/drbd_bitmap.c
+++ b/drivers/block/drbd/drbd_bitmap.c
@@ -12,15 +12,27 @@
 
 #define pr_fmt(fmt)	KBUILD_MODNAME ": " fmt
 
-#include <linux/bitmap.h>
+#include <linux/bitops.h>
 #include <linux/vmalloc.h>
 #include <linux/string.h>
 #include <linux/drbd.h>
 #include <linux/slab.h>
-#include <linux/highmem.h>
+#include <linux/dynamic_debug.h>
+#include <linux/libnvdimm.h>
 
 #include "drbd_int.h"
+#include "drbd_meta_data.h"
+#include "drbd_dax_pmem.h"
 
+#ifndef BITS_PER_PAGE
+#define BITS_PER_PAGE		(1UL << (PAGE_SHIFT + 3))
+#else
+# if BITS_PER_PAGE != (1UL << (PAGE_SHIFT + 3))
+#  error "ambiguous BITS_PER_PAGE"
+# endif
+#endif
+
+#define PAGES_TO_KIB(pages) (((unsigned long long) (pages)) * (PAGE_SIZE / 1024))
 
 /* OPAQUE outside this file!
  * interface defined in drbd_int.h
@@ -80,48 +92,57 @@
  *  so we need spin_lock_irqsave().
  *  And we need the kmap_atomic.
  */
-struct drbd_bitmap {
-	struct page **bm_pages;
-	spinlock_t bm_lock;
 
-	/* exclusively to be used by __al_write_transaction(),
-	 * drbd_bm_mark_for_writeout() and
-	 * and drbd_bm_write_hinted() -> bm_rw() called from there.
-	 */
-	unsigned int n_bitmap_hints;
-	unsigned int al_bitmap_hints[AL_UPDATES_PER_TRANSACTION];
-
-	/* see LIMITATIONS: above */
-
-	unsigned long bm_set;       /* nr of set bits; THINK maybe atomic_t? */
-	unsigned long bm_bits;
-	size_t   bm_words;
-	size_t   bm_number_of_pages;
-	sector_t bm_dev_capacity;
-	struct mutex bm_change; /* serializes resize operations */
-
-	wait_queue_head_t bm_io_wait; /* used to serialize IO of single pages */
-
-	enum bm_flag bm_flags;
-
-	/* debugging aid, in case we are still racy somewhere */
-	char          *bm_why;
-	struct task_struct *bm_task;
+enum bitmap_operations {
+	BM_OP_CLEAR,
+	BM_OP_SET,
+	BM_OP_TEST,
+	BM_OP_COUNT,
+	BM_OP_MERGE,
+	BM_OP_EXTRACT,
+	BM_OP_FIND_BIT,
+	BM_OP_FIND_ZERO_BIT,
 };
 
-#define bm_print_lock_info(m) __bm_print_lock_info(m, __func__)
-static void __bm_print_lock_info(struct drbd_device *device, const char *func)
-{
+static void
+bm_print_lock_info(struct drbd_device *device, unsigned int bitmap_index, enum bitmap_operations op)
+{
+	static const char *op_names[] = {
+		[BM_OP_CLEAR] = "clear",
+		[BM_OP_SET] = "set",
+		[BM_OP_TEST] = "test",
+		[BM_OP_COUNT] = "count",
+		[BM_OP_MERGE] = "merge",
+		[BM_OP_EXTRACT] = "extract",
+		[BM_OP_FIND_BIT] = "find_bit",
+		[BM_OP_FIND_ZERO_BIT] = "find_zero_bit",
+	};
+
 	struct drbd_bitmap *b = device->bitmap;
-	if (!drbd_ratelimit())
+	if (!drbd_device_ratelimit(device, GENERIC))
 		return;
-	drbd_err(device, "FIXME %s[%d] in %s, bitmap locked for '%s' by %s[%d]\n",
+	drbd_err(device, "FIXME %s[%d] op %s, bitmap locked for '%s' by %s[%d]\n",
 		 current->comm, task_pid_nr(current),
-		 func, b->bm_why ?: "?",
-		 b->bm_task->comm, task_pid_nr(b->bm_task));
+		 op_names[op], b->bm_why ?: "?",
+		 b->bm_task_comm, b->bm_task_pid);
 }
 
-void drbd_bm_lock(struct drbd_device *device, char *why, enum bm_flag flags)
+/* drbd_bm_lock() was introduced before drbd-9.0 to verify that access to
+   the bitmap is already locked out by other means (states, etc.). If the
+   lock was not acquired or is already taken, a warning is logged, and the
+   critical sections are serialized on a mutex.
+
+   Since drbd-9.0, actions on the bitmap can happen in parallel (e.g.
+   "receive bitmap").
+   The cheap solution taken right now is to completely serialize bitmap
+   operations, but not to warn if they operate on different bitmap slots.
+
+   The real solution is to make the locking more fine grained (one lock per
+   bitmap slot) and to allow those operations to run in parallel.
+ */
+static void
+_drbd_bm_lock(struct drbd_device *device, struct drbd_peer_device *peer_device,
+	      const char *why, enum bm_flag flags)
 {
 	struct drbd_bitmap *b = device->bitmap;
 	int trylock_failed;
@@ -133,19 +154,36 @@ void drbd_bm_lock(struct drbd_device *device, char *why, enum bm_flag flags)
 
 	trylock_failed = !mutex_trylock(&b->bm_change);
 
+	if (trylock_failed && peer_device && b->bm_locked_peer != peer_device) {
+		mutex_lock(&b->bm_change);
+		trylock_failed = 0;
+	}
+
 	if (trylock_failed) {
 		drbd_warn(device, "%s[%d] going to '%s' but bitmap already locked for '%s' by %s[%d]\n",
 			  current->comm, task_pid_nr(current),
 			  why, b->bm_why ?: "?",
-			  b->bm_task->comm, task_pid_nr(b->bm_task));
+			  b->bm_task_comm, b->bm_task_pid);
 		mutex_lock(&b->bm_change);
 	}
-	if (BM_LOCKED_MASK & b->bm_flags)
+	if (b->bm_flags & BM_LOCK_ALL)
 		drbd_err(device, "FIXME bitmap already locked in bm_lock\n");
-	b->bm_flags |= flags & BM_LOCKED_MASK;
+	b->bm_flags |= flags & BM_LOCK_ALL;
 
 	b->bm_why  = why;
-	b->bm_task = current;
+	strscpy(b->bm_task_comm, current->comm);
+	b->bm_task_pid = task_pid_nr(current);
+	b->bm_locked_peer = peer_device;
+}
+
+void drbd_bm_lock(struct drbd_device *device, const char *why, enum bm_flag flags)
+{
+	_drbd_bm_lock(device, NULL, why, flags);
+}
+
+void drbd_bm_slot_lock(struct drbd_peer_device *peer_device, const char *why, enum bm_flag flags)
+{
+	_drbd_bm_lock(peer_device->device, peer_device, why, flags);
 }
 
 void drbd_bm_unlock(struct drbd_device *device)
@@ -156,15 +194,22 @@ void drbd_bm_unlock(struct drbd_device *device)
 		return;
 	}
 
-	if (!(BM_LOCKED_MASK & device->bitmap->bm_flags))
+	if (!(b->bm_flags & BM_LOCK_ALL))
 		drbd_err(device, "FIXME bitmap not locked in bm_unlock\n");
 
-	b->bm_flags &= ~BM_LOCKED_MASK;
+	b->bm_flags &= ~BM_LOCK_ALL;
 	b->bm_why  = NULL;
-	b->bm_task = NULL;
+	b->bm_task_comm[0] = 0;
+	b->bm_task_pid = 0;
+	b->bm_locked_peer = NULL;
 	mutex_unlock(&b->bm_change);
 }
 
+void drbd_bm_slot_unlock(struct drbd_peer_device *peer_device)
+{
+	drbd_bm_unlock(peer_device->device);
+}
+
 /* we store some "meta" info about our pages in page->private */
 /* at a granularity of 4k storage per bitmap bit:
  * one peta byte storage: 1<<50 byte, 1<<38 * 4k storage blocks
@@ -220,7 +265,7 @@ static void bm_page_unlock_io(struct drbd_device *device, int page_nr)
 	struct drbd_bitmap *b = device->bitmap;
 	void *addr = &page_private(b->bm_pages[page_nr]);
 	clear_bit_unlock(BM_PAGE_IO_LOCK, addr);
-	wake_up(&device->bitmap->bm_io_wait);
+	wake_up(&b->bm_io_wait);
 }
 
 /* set _before_ submit_io, so it may be reset due to being changed
@@ -232,9 +277,12 @@ static void bm_set_page_unchanged(struct page *page)
 	clear_bit(BM_PAGE_LAZY_WRITEOUT, &page_private(page));
 }
 
-static void bm_set_page_need_writeout(struct page *page)
+static void bm_set_page_need_writeout(struct drbd_bitmap *bitmap, unsigned int page_nr)
 {
-	set_bit(BM_PAGE_NEED_WRITEOUT, &page_private(page));
+	if (!(bitmap->bm_flags & BM_ON_DAX_PMEM)) {
+		struct page *page = bitmap->bm_pages[page_nr];
+		set_bit(BM_PAGE_NEED_WRITEOUT, &page_private(page));
+	}
 }
 
 void drbd_bm_reset_al_hints(struct drbd_device *device)
@@ -242,30 +290,6 @@ void drbd_bm_reset_al_hints(struct drbd_device *device)
 	device->bitmap->n_bitmap_hints = 0;
 }
 
-/**
- * drbd_bm_mark_for_writeout() - mark a page with a "hint" to be considered for writeout
- * @device:	DRBD device.
- * @page_nr:	the bitmap page to mark with the "hint" flag
- *
- * From within an activity log transaction, we mark a few pages with these
- * hints, then call drbd_bm_write_hinted(), which will only write out changed
- * pages which are flagged with this mark.
- */
-void drbd_bm_mark_for_writeout(struct drbd_device *device, int page_nr)
-{
-	struct drbd_bitmap *b = device->bitmap;
-	struct page *page;
-	if (page_nr >= device->bitmap->bm_number_of_pages) {
-		drbd_warn(device, "BAD: page_nr: %u, number_of_pages: %u\n",
-			 page_nr, (int)device->bitmap->bm_number_of_pages);
-		return;
-	}
-	page = device->bitmap->bm_pages[page_nr];
-	BUG_ON(b->n_bitmap_hints >= ARRAY_SIZE(b->al_bitmap_hints));
-	if (!test_and_set_bit(BM_PAGE_HINT_WRITEOUT, &page_private(page)))
-		b->al_bitmap_hints[b->n_bitmap_hints++] = page_nr;
-}
-
 static int bm_test_page_unchanged(struct page *page)
 {
 	volatile const unsigned long *addr = &page_private(page);
@@ -282,9 +306,12 @@ static void bm_clear_page_io_err(struct page *page)
 	clear_bit(BM_PAGE_IO_ERROR, &page_private(page));
 }
 
-static void bm_set_page_lazy_writeout(struct page *page)
+static void bm_set_page_lazy_writeout(struct drbd_bitmap *bitmap, unsigned int page_nr)
 {
-	set_bit(BM_PAGE_LAZY_WRITEOUT, &page_private(page));
+	if (!(bitmap->bm_flags & BM_ON_DAX_PMEM)) {
+		struct page *page = bitmap->bm_pages[page_nr];
+		set_bit(BM_PAGE_LAZY_WRITEOUT, &page_private(page));
+	}
 }
 
 static int bm_test_page_lazy_writeout(struct page *page)
@@ -292,57 +319,6 @@ static int bm_test_page_lazy_writeout(struct page *page)
 	return test_bit(BM_PAGE_LAZY_WRITEOUT, &page_private(page));
 }
 
-/* on a 32bit box, this would allow for exactly (2<<38) bits. */
-static unsigned int bm_word_to_page_idx(struct drbd_bitmap *b, unsigned long long_nr)
-{
-	/* page_nr = (word*sizeof(long)) >> PAGE_SHIFT; */
-	unsigned int page_nr = long_nr >> (PAGE_SHIFT - LN2_BPL + 3);
-	BUG_ON(page_nr >= b->bm_number_of_pages);
-	return page_nr;
-}
-
-static unsigned int bm_bit_to_page_idx(struct drbd_bitmap *b, u64 bitnr)
-{
-	/* page_nr = (bitnr/8) >> PAGE_SHIFT; */
-	unsigned int page_nr = bitnr >> (PAGE_SHIFT + 3);
-	BUG_ON(page_nr >= b->bm_number_of_pages);
-	return page_nr;
-}
-
-static unsigned long *__bm_map_pidx(struct drbd_bitmap *b, unsigned int idx)
-{
-	struct page *page = b->bm_pages[idx];
-	return (unsigned long *) kmap_atomic(page);
-}
-
-static unsigned long *bm_map_pidx(struct drbd_bitmap *b, unsigned int idx)
-{
-	return __bm_map_pidx(b, idx);
-}
-
-static void __bm_unmap(unsigned long *p_addr)
-{
-	kunmap_atomic(p_addr);
-};
-
-static void bm_unmap(unsigned long *p_addr)
-{
-	return __bm_unmap(p_addr);
-}
-
-/* long word offset of _bitmap_ sector */
-#define S2W(s)	((s)<<(BM_EXT_SHIFT-BM_BLOCK_SHIFT-LN2_BPL))
-/* word offset from start of bitmap to word number _in_page_
- * modulo longs per page
-#define MLPP(X) ((X) % (PAGE_SIZE/sizeof(long))
- hm, well, Philipp thinks gcc might not optimize the % into & (... - 1)
- so do it explicitly:
- */
-#define MLPP(X) ((X) & ((PAGE_SIZE/sizeof(long))-1))
-
-/* Long words per page */
-#define LWPP (PAGE_SIZE/sizeof(long))
-
 /*
  * actually most functions herein should take a struct drbd_bitmap*, not a
  * struct drbd_device*, but for the debug macros I like to have the device around
@@ -367,16 +343,12 @@ static void bm_free_pages(struct page **pages, unsigned long number)
 	}
 }
 
-static inline void bm_vk_free(void *ptr)
-{
-	kvfree(ptr);
-}
-
 /*
  * "have" and "want" are NUMBER OF PAGES.
  */
-static struct page **bm_realloc_pages(struct drbd_bitmap *b, unsigned long want)
+static struct page **bm_realloc_pages(struct drbd_device *device, unsigned long want)
 {
+	struct drbd_bitmap *b = device->bitmap;
 	struct page **old_pages = b->bm_pages;
 	struct page **new_pages, *page;
 	unsigned int i, bytes;
@@ -388,15 +360,18 @@ static struct page **bm_realloc_pages(struct drbd_bitmap *b, unsigned long want)
 	if (have == want)
 		return old_pages;
 
-	/* Trying kmalloc first, falling back to vmalloc.
+	/*
+	 * Trying kmalloc first, falling back to vmalloc.
 	 * GFP_NOIO, as this is called while drbd IO is "suspended",
 	 * and during resize or attach on diskless Primary,
 	 * we must not block on IO to ourselves.
-	 * Context is receiver thread or dmsetup. */
+	 * Context is receiver thread or drbdsetup.
+	 */
 	bytes = sizeof(struct page *)*want;
 	new_pages = kzalloc(bytes, GFP_NOIO | __GFP_NOWARN);
 	if (!new_pages) {
-		new_pages = __vmalloc(bytes, GFP_NOIO | __GFP_ZERO);
+		new_pages = __vmalloc(bytes,
+				GFP_NOIO | __GFP_HIGHMEM | __GFP_ZERO);
 		if (!new_pages)
 			return NULL;
 	}
@@ -405,10 +380,14 @@ static struct page **bm_realloc_pages(struct drbd_bitmap *b, unsigned long want)
 		for (i = 0; i < have; i++)
 			new_pages[i] = old_pages[i];
 		for (; i < want; i++) {
-			page = alloc_page(GFP_NOIO | __GFP_HIGHMEM);
+			page = alloc_page(GFP_NOIO | __GFP_HIGHMEM | __GFP_RETRY_MAYFAIL |
+					__GFP_NOWARN | __GFP_ZERO);
 			if (!page) {
 				bm_free_pages(new_pages + have, i - have);
-				bm_vk_free(new_pages);
+				kvfree(new_pages);
+				drbd_err(device, "Failed to allocate bitmap; allocated %lu KiB / %lu KiB\n",
+						(unsigned long) i << (PAGE_SHIFT - 10),
+						want << (PAGE_SHIFT - 10));
 				return NULL;
 			}
 			/* we want to know which page it is
@@ -423,27 +402,32 @@ static struct page **bm_realloc_pages(struct drbd_bitmap *b, unsigned long want)
 		bm_free_pages(old_pages + want, have - want);
 		*/
 	}
-
 	return new_pages;
 }
 
-/*
- * allocates the drbd_bitmap and stores it in device->bitmap.
- */
-int drbd_bm_init(struct drbd_device *device)
+struct drbd_bitmap *drbd_bm_alloc(unsigned int max_peers, unsigned int bm_block_shift)
 {
-	struct drbd_bitmap *b = device->bitmap;
-	WARN_ON(b != NULL);
+	struct drbd_bitmap *b;
+
+	if (bm_block_shift < BM_BLOCK_SHIFT_MIN
+	||  bm_block_shift > BM_BLOCK_SHIFT_MAX)
+		return NULL;
+	if (max_peers < 1 || max_peers > DRBD_PEERS_MAX)
+		return NULL;
+
 	b = kzalloc_obj(struct drbd_bitmap);
 	if (!b)
-		return -ENOMEM;
+		return NULL;
+
 	spin_lock_init(&b->bm_lock);
+	spin_lock_init(&b->bm_all_slots_lock);
 	mutex_init(&b->bm_change);
 	init_waitqueue_head(&b->bm_io_wait);
 
-	device->bitmap = b;
+	b->bm_max_peers = max_peers;
+	b->bm_block_shift = bm_block_shift;
 
-	return 0;
+	return b;
 }
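+
+/* Replaces the old drbd_bm_init()/drbd_bm_cleanup() pair together with
+ * drbd_bm_free(). Illustrative usage (caller and error handling assumed):
+ *
+ *	device->bitmap = drbd_bm_alloc(max_peers, bm_block_shift);
+ *	if (!device->bitmap)
+ *		return -ENOMEM;
+ */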
 
 sector_t drbd_bm_capacity(struct drbd_device *device)
@@ -453,170 +437,454 @@ sector_t drbd_bm_capacity(struct drbd_device *device)
 	return device->bitmap->bm_dev_capacity;
 }
 
-/* called on driver unload. TODO: call when a device is destroyed.
- */
-void drbd_bm_cleanup(struct drbd_device *device)
+void drbd_bm_free(struct drbd_device *device)
 {
-	if (!expect(device, device->bitmap))
+	/* ldev_safe: explicit NULL check below */
+	struct drbd_bitmap *bitmap = device->bitmap;
+
+	if (bitmap == NULL)
 		return;
-	bm_free_pages(device->bitmap->bm_pages, device->bitmap->bm_number_of_pages);
-	bm_vk_free(device->bitmap->bm_pages);
-	kfree(device->bitmap);
+
+	/* ldev_safe: explicit NULL check above */
+	drbd_bm_resize(device, 0, 0);
+
+	kfree(bitmap);
+
+	/* ldev_safe: clearing pointer */
 	device->bitmap = NULL;
 }
 
+static inline unsigned long interleaved_word32(struct drbd_bitmap *bitmap,
+					       unsigned int bitmap_index,
+					       unsigned long bit)
+{
+	return (bit >> 5) * bitmap->bm_max_peers + bitmap_index;
+}
+
+static inline unsigned long word32_to_page(unsigned long word)
+{
+	return word >> (PAGE_SHIFT - 2);
+}
+
+static inline unsigned int word32_in_page(unsigned long word)
+{
+	return word & ((1 << (PAGE_SHIFT - 2)) - 1);
+}
+
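+/*
+ * Last bit of slot @bitmap_index that is stored on the same page as
+ * @bit: round up to the end of @bit's 32-bit word, then add 32 bits
+ * for each further word of this slot that still fits on the page.
+ */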
+static inline unsigned long last_bit_on_page(struct drbd_bitmap *bitmap,
+					     unsigned int bitmap_index,
+					     unsigned long bit)
+{
+	unsigned long word = interleaved_word32(bitmap, bitmap_index, bit);
+
+	return (bit | 31) + ((word32_in_page(-(word + 1)) / bitmap->bm_max_peers) << 5);
+}
+
+static inline unsigned long bit_to_page_interleaved(struct drbd_bitmap *bitmap,
+						    unsigned int bitmap_index,
+						    unsigned long bit)
+{
+	return word32_to_page(interleaved_word32(bitmap, bitmap_index, bit));
+}
+
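+/*
+ * Map one bitmap page.  With the bitmap on DAX/PMEM there is nothing
+ * to map: the page contents live at a fixed offset in bm_on_pmem.
+ */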
+static void *bm_map(struct drbd_bitmap *bitmap, unsigned int page)
+{
+	if (!(bitmap->bm_flags & BM_ON_DAX_PMEM))
+		return kmap_atomic(bitmap->bm_pages[page]);
+
+	return ((unsigned char *)bitmap->bm_on_pmem) + (unsigned long)page * PAGE_SIZE;
+}
+
+static void bm_unmap(struct drbd_bitmap *bitmap, void *addr)
+{
+	if (!(bitmap->bm_flags & BM_ON_DAX_PMEM))
+		kunmap_atomic(addr);
+}
+
 /*
- * since (b->bm_bits % BITS_PER_LONG) != 0,
- * this masks out the remaining bits.
- * Returns the number of bits cleared.
+ * find_next_bit() and find_next_zero_bit() expect an (unsigned long *),
+ * and will dereference it.
+ * When scanning our bitmap, we are interested in 32bit words of it.
+ * The "current 32 bit word pointer" may point to the last 32 bits in a page.
+ * For 64bit long, if the page after the current page is not mapped,
+ * this causes "page fault - not-present page".
+ * Duplicate the "fast path" of these functions,
+ * simplified for "size: 32, offset: 0".
+ * Little endian arch: le32_to_cpu is a no-op.
+ * Big endian arch: le32_to_cpu moves the least significant 32 bits around.
+ * __ffs / ffz do an implicit cast to (unsigned long). On 64bit, that fills up
+ * the most significant bits with 0; we are not interested in those anyways.
  */
-#ifndef BITS_PER_PAGE
-#define BITS_PER_PAGE		(1UL << (PAGE_SHIFT + 3))
-#define BITS_PER_PAGE_MASK	(BITS_PER_PAGE - 1)
-#else
-# if BITS_PER_PAGE != (1UL << (PAGE_SHIFT + 3))
-#  error "ambiguous BITS_PER_PAGE"
-# endif
-#endif
-#define BITS_PER_LONG_MASK	(BITS_PER_LONG - 1)
-static int bm_clear_surplus(struct drbd_bitmap *b)
-{
-	unsigned long mask;
-	unsigned long *p_addr, *bm;
-	int tmp;
-	int cleared = 0;
-
-	/* number of bits modulo bits per page */
-	tmp = (b->bm_bits & BITS_PER_PAGE_MASK);
-	/* mask the used bits of the word containing the last bit */
-	mask = (1UL << (tmp & BITS_PER_LONG_MASK)) -1;
-	/* bitmap is always stored little endian,
-	 * on disk and in core memory alike */
-	mask = cpu_to_lel(mask);
-
-	p_addr = bm_map_pidx(b, b->bm_number_of_pages - 1);
-	bm = p_addr + (tmp/BITS_PER_LONG);
-	if (mask) {
-		/* If mask != 0, we are not exactly aligned, so bm now points
-		 * to the long containing the last bit.
-		 * If mask == 0, bm already points to the word immediately
-		 * after the last (long word aligned) bit. */
-		cleared = hweight_long(*bm & ~mask);
-		*bm &= mask;
-		bm++;
-	}
+static inline unsigned long find_next_bit_le32(const __le32 *addr)
+{
+	__le32 val = *addr;
+
+	return val ? __ffs(le32_to_cpu(val)) : 32;
+}
+
+static inline unsigned long find_next_zero_bit_le32(const __le32 *addr)
+{
+	__le32 val = *addr;
+
+	return val == cpu_to_le32(~0U) ? 32 : ffz(le32_to_cpu(val));
+}
+
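+/*
+ * Core of all bitmap operations on a single slot.  Within each page,
+ * this handles a leading partial 32-bit word bit by bit, then whole
+ * 32-bit words of this slot (strided bm_max_peers words apart), then
+ * a trailing partial word; pages modified by SET/CLEAR/MERGE are
+ * flagged for writeout when we move on to the next page.
+ */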
+static __always_inline unsigned long
+____bm_op(struct drbd_device *device, unsigned int bitmap_index, unsigned long start, unsigned long end,
+	 enum bitmap_operations op, __le32 *buffer)
+{
+	struct drbd_bitmap *bitmap = device->bitmap;
+	unsigned int word32_skip = 32 * bitmap->bm_max_peers;
+	unsigned long total = 0;
+	unsigned long word;
+	unsigned int page, bit_in_page;
+
+	if (end >= bitmap->bm_bits)
+		end = bitmap->bm_bits - 1;
+
+	word = interleaved_word32(bitmap, bitmap_index, start);
+	page = word32_to_page(word);
+	bit_in_page = (word32_in_page(word) << 5) | (start & 31);
+
+	for (; start <= end; page++) {
+		unsigned int count = 0;
+		void *addr;
+
+		addr = bm_map(bitmap, page);
+		if (((start & 31) && (start | 31) <= end) || op == BM_OP_TEST) {
+			unsigned int last = bit_in_page | 31;
+
+			switch (op) {
+			default:
+				do {
+					switch (op) {
+					case BM_OP_CLEAR:
+						if (__test_and_clear_bit_le(bit_in_page, addr))
+							count++;
+						break;
+					case BM_OP_SET:
+						if (!__test_and_set_bit_le(bit_in_page, addr))
+							count++;
+						break;
+					case BM_OP_COUNT:
+						if (test_bit_le(bit_in_page, addr))
+							total++;
+						break;
+					case BM_OP_TEST:
+						total = !!test_bit_le(bit_in_page, addr);
+						bm_unmap(bitmap, addr);
+						return total;
+					default:
+						break;
+					}
+					bit_in_page++;
+				} while (bit_in_page <= last);
+				break;
+			case BM_OP_MERGE:
+			case BM_OP_EXTRACT:
+				BUG();
+				break;
+			case BM_OP_FIND_BIT:
+				count = find_next_bit_le(addr, last + 1, bit_in_page);
+				if (count < last + 1)
+					goto found;
+				bit_in_page = last + 1;
+				break;
+			case BM_OP_FIND_ZERO_BIT:
+				count = find_next_zero_bit_le(addr, last + 1, bit_in_page);
+				if (count < last + 1)
+					goto found;
+				bit_in_page = last + 1;
+				break;
+			}
+			start = (start | 31) + 1;
+			bit_in_page += word32_skip - 32;
+			if (bit_in_page >= BITS_PER_PAGE)
+				goto next_page;
+		}
+
+		while (start + 31 <= end) {
+			__le32 *p = (__le32 *)addr + (bit_in_page >> 5);
+
+			switch (op) {
+			case BM_OP_CLEAR:
+				count += hweight32(*p);
+				*p = 0;
+				break;
+			case BM_OP_SET:
+				count += hweight32(~*p);
+				*p = -1;
+				break;
+			case BM_OP_TEST:
+				BUG();
+				break;
+			case BM_OP_COUNT:
+				total += hweight32(*p);
+				break;
+			case BM_OP_MERGE:
+				count += hweight32(~*p & *buffer);
+				*p |= *buffer++;
+				break;
+			case BM_OP_EXTRACT:
+				*buffer++ = *p;
+				break;
+			case BM_OP_FIND_BIT:
+				count = find_next_bit_le32(p);
+				if (count < 32) {
+					count += bit_in_page;
+					goto found;
+				}
+				break;
+			case BM_OP_FIND_ZERO_BIT:
+				count = find_next_zero_bit_le32(p);
+				if (count < 32) {
+					count += bit_in_page;
+					goto found;
+				}
+				break;
+			}
+			start += 32;
+			bit_in_page += word32_skip;
+			if (bit_in_page >= BITS_PER_PAGE)
+				goto next_page;
+		}
 
-	if (BITS_PER_LONG == 32 && ((bm - p_addr) & 1) == 1) {
-		/* on a 32bit arch, we may need to zero out
-		 * a padding long to align with a 64bit remote */
-		cleared += hweight_long(*bm);
-		*bm = 0;
+		/* don't overrun buffers with MERGE or EXTRACT,
+		 * jump to the kunmap and then out... */
+		if (start > end)
+			goto next_page;
+
+		switch (op) {
+		default:
+			while (start <= end) {
+				switch (op) {
+				case BM_OP_CLEAR:
+					if (__test_and_clear_bit_le(bit_in_page, addr))
+						count++;
+					break;
+				case BM_OP_SET:
+					if (!__test_and_set_bit_le(bit_in_page, addr))
+						count++;
+					break;
+				case BM_OP_COUNT:
+					if (test_bit_le(bit_in_page, addr))
+						total++;
+					break;
+				default:
+					break;
+				}
+				start++;
+				bit_in_page++;
+			}
+			break;
+		case BM_OP_MERGE:
+			{
+				__le32 *p = (__le32 *)addr + (bit_in_page >> 5);
+				__le32 b = *buffer++ & cpu_to_le32((1 << (end - start + 1)) - 1);
+
+				count += hweight32(~*p & b);
+				*p |= b;
+
+				start = end + 1;
+			}
+			break;
+		case BM_OP_EXTRACT:
+			{
+				__le32 *p = (__le32 *)addr + (bit_in_page >> 5);
+
+				*buffer++ = *p & cpu_to_le32((1 << (end - start + 1)) - 1);
+				start = end + 1;
+			}
+			break;
+		case BM_OP_FIND_BIT:
+			{
+				unsigned int last = bit_in_page + (end - start);
+
+				count = find_next_bit_le(addr, last + 1, bit_in_page);
+				if (count < last + 1)
+					goto found;
+				start = end + 1;
+			}
+			break;
+		case BM_OP_FIND_ZERO_BIT:
+			{
+				unsigned int last = bit_in_page + (end - start);
+				count = find_next_zero_bit_le(addr, last + 1, bit_in_page);
+				if (count < last + 1)
+					goto found;
+				start = end + 1;
+			}
+			break;
+		}
+
+	    next_page:
+		bm_unmap(bitmap, addr);
+		bit_in_page -= BITS_PER_PAGE;
+		switch (op) {
+		case BM_OP_CLEAR:
+			if (count) {
+				bm_set_page_lazy_writeout(bitmap, page);
+				total += count;
+			}
+			break;
+		case BM_OP_SET:
+		case BM_OP_MERGE:
+			if (count) {
+				bm_set_page_need_writeout(bitmap, page);
+				total += count;
+			}
+			break;
+		default:
+			break;
+		}
+		continue;
+
+	    found:
+		bm_unmap(bitmap, addr);
+		return start + count - bit_in_page;
 	}
-	bm_unmap(p_addr);
-	return cleared;
-}
-
-static void bm_set_surplus(struct drbd_bitmap *b)
-{
-	unsigned long mask;
-	unsigned long *p_addr, *bm;
-	int tmp;
-
-	/* number of bits modulo bits per page */
-	tmp = (b->bm_bits & BITS_PER_PAGE_MASK);
-	/* mask the used bits of the word containing the last bit */
-	mask = (1UL << (tmp & BITS_PER_LONG_MASK)) -1;
-	/* bitmap is always stored little endian,
-	 * on disk and in core memory alike */
-	mask = cpu_to_lel(mask);
-
-	p_addr = bm_map_pidx(b, b->bm_number_of_pages - 1);
-	bm = p_addr + (tmp/BITS_PER_LONG);
-	if (mask) {
-		/* If mask != 0, we are not exactly aligned, so bm now points
-		 * to the long containing the last bit.
-		 * If mask == 0, bm already points to the word immediately
-		 * after the last (long word aligned) bit. */
-		*bm |= ~mask;
-		bm++;
+	switch (op) {
+	case BM_OP_CLEAR:
+		if (total)
+			bitmap->bm_set[bitmap_index] -= total;
+		break;
+	case BM_OP_SET:
+	case BM_OP_MERGE:
+		if (total)
+			bitmap->bm_set[bitmap_index] += total;
+		break;
+	case BM_OP_FIND_BIT:
+	case BM_OP_FIND_ZERO_BIT:
+		total = DRBD_END_OF_BITMAP;
+		break;
+	default:
+		break;
 	}
+	return total;
+}
+
+/* Return value depends on @op: bits changed, bits counted, or the bit number found. */
+static __always_inline unsigned long
+__bm_op(struct drbd_device *device, unsigned int bitmap_index, unsigned long start, unsigned long end,
+	enum bitmap_operations op, __le32 *buffer)
+{
+	struct drbd_bitmap *bitmap = device->bitmap;
 
-	if (BITS_PER_LONG == 32 && ((bm - p_addr) & 1) == 1) {
-		/* on a 32bit arch, we may need to zero out
-		 * a padding long to align with a 64bit remote */
-		*bm = ~0UL;
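+	/* Robustness: without a bitmap (a bug, and most likely no disk
+	 * either), pretend one bit changed so that callers never treat
+	 * the area as clean by accident. */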
+	if (!expect(device, bitmap))
+		return 1;
+	if (!expect(device, bitmap->bm_pages))
+		return 0;
+
+	if (!bitmap->bm_bits)
+		return 0;
+
+	if (bitmap->bm_task_pid != task_pid_nr(current)) {
+		switch (op) {
+		case BM_OP_CLEAR:
+			if (bitmap->bm_flags & BM_LOCK_CLEAR)
+				bm_print_lock_info(device, bitmap_index, op);
+			break;
+		case BM_OP_SET:
+		case BM_OP_MERGE:
+			if (bitmap->bm_flags & BM_LOCK_SET)
+				bm_print_lock_info(device, bitmap_index, op);
+			break;
+		case BM_OP_TEST:
+		case BM_OP_COUNT:
+		case BM_OP_EXTRACT:
+		case BM_OP_FIND_BIT:
+		case BM_OP_FIND_ZERO_BIT:
+			if (bitmap->bm_flags & BM_LOCK_TEST)
+				bm_print_lock_info(device, bitmap_index, op);
+			break;
+		}
 	}
-	bm_unmap(p_addr);
+	return ____bm_op(device, bitmap_index, start, end, op, buffer);
 }
 
+static __always_inline unsigned long
+bm_op(struct drbd_device *device, unsigned int bitmap_index, unsigned long start, unsigned long end,
+      enum bitmap_operations op, __le32 *buffer)
+{
+	struct drbd_bitmap *bitmap = device->bitmap;
+	unsigned long irq_flags;
+	unsigned long count;
+
+	spin_lock_irqsave(&bitmap->bm_lock, irq_flags);
+	count = __bm_op(device, bitmap_index, start, end, op, buffer);
+	spin_unlock_irqrestore(&bitmap->bm_lock, irq_flags);
+	return count;
+}
+
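+/*
+ * With BITMAP_DEBUG defined, shadow bm_op() and __bm_op() with macros
+ * that trace each call and its result.
+ */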
+#ifdef BITMAP_DEBUG
+#define bm_op(device, bitmap_index, start, end, op, buffer) \
+	({ unsigned long ret; \
+	   drbd_info(device, "%s: bm_op(..., %u, %lu, %lu, %u, %p)\n", \
+		     __func__, bitmap_index, start, end, op, buffer); \
+	   ret = bm_op(device, bitmap_index, start, end, op, buffer); \
+	   drbd_info(device, "= %lu\n", ret); \
+	   ret; })
+
+#define __bm_op(device, bitmap_index, start, end, op, buffer) \
+	({ unsigned long ret; \
+	   drbd_info(device, "%s: __bm_op(..., %u, %lu, %lu, %u, %p)\n", \
+		     __func__, bitmap_index, start, end, op, buffer); \
+	   ret = __bm_op(device, bitmap_index, start, end, op, buffer); \
+	   drbd_info(device, "= %lu\n", ret); \
+	   ret; })
+#endif
+
+#ifdef BITMAP_DEBUG
+#define ___bm_op(device, bitmap_index, start, end, op, buffer) \
+	({ unsigned long ret; \
+	   drbd_info(device, "%s: ___bm_op(..., %u, %lu, %lu, %u, %p)\n", \
+		     __func__, bitmap_index, start, end, op, buffer); \
+	   ret = ____bm_op(device, bitmap_index, start, end, op, buffer); \
+	   drbd_info(device, "= %lu\n", ret); \
+	   ret; })
+#else
+#define ___bm_op(device, bitmap_index, start, end, op, buffer) \
+	____bm_op(device, bitmap_index, start, end, op, buffer)
+#endif
+
 /* you better not modify the bitmap while this is running,
  * or its results will be stale */
-static unsigned long bm_count_bits(struct drbd_bitmap *b)
-{
-	unsigned long *p_addr;
-	unsigned long bits = 0;
-	unsigned long mask = (1UL << (b->bm_bits & BITS_PER_LONG_MASK)) -1;
-	int idx, last_word;
-
-	/* all but last page */
-	for (idx = 0; idx < b->bm_number_of_pages - 1; idx++) {
-		p_addr = __bm_map_pidx(b, idx);
-		bits += bitmap_weight(p_addr, BITS_PER_PAGE);
-		__bm_unmap(p_addr);
-		cond_resched();
-	}
-	/* last (or only) page */
-	last_word = ((b->bm_bits - 1) & BITS_PER_PAGE_MASK) >> LN2_BPL;
-	p_addr = __bm_map_pidx(b, idx);
-	bits += bitmap_weight(p_addr, last_word * BITS_PER_LONG);
-	p_addr[last_word] &= cpu_to_lel(mask);
-	bits += hweight_long(p_addr[last_word]);
-	/* 32bit arch, may have an unused padding long */
-	if (BITS_PER_LONG == 32 && (last_word & 1) == 0)
-		p_addr[last_word+1] = 0;
-	__bm_unmap(p_addr);
-	return bits;
-}
-
-/* offset and len in long words.*/
-static void bm_memset(struct drbd_bitmap *b, size_t offset, int c, size_t len)
-{
-	unsigned long *p_addr, *bm;
-	unsigned int idx;
-	size_t do_now, end;
-
-	end = offset + len;
-
-	if (end > b->bm_words) {
-		pr_alert("bm_memset end > bm_words\n");
-		return;
-	}
+static void bm_count_bits(struct drbd_device *device)
+{
+	struct drbd_bitmap *bitmap = device->bitmap;
+	unsigned int bitmap_index;
+
+	for (bitmap_index = 0; bitmap_index < bitmap->bm_max_peers; bitmap_index++) {
+		unsigned long bit = 0, bits_set = 0;
 
-	while (offset < end) {
-		do_now = min_t(size_t, ALIGN(offset + 1, LWPP), end) - offset;
-		idx = bm_word_to_page_idx(b, offset);
-		p_addr = bm_map_pidx(b, idx);
-		bm = p_addr + MLPP(offset);
-		if (bm+do_now > p_addr + LWPP) {
-			pr_alert("BUG BUG BUG! p_addr:%p bm:%p do_now:%d\n",
-			       p_addr, bm, (int)do_now);
-		} else
-			memset(bm, c, do_now * sizeof(long));
-		bm_unmap(p_addr);
-		bm_set_page_need_writeout(b->bm_pages[idx]);
-		offset += do_now;
+		while (bit < bitmap->bm_bits) {
+			unsigned long last_bit = last_bit_on_page(bitmap, bitmap_index, bit);
+
+			bits_set += ___bm_op(device, bitmap_index, bit, last_bit, BM_OP_COUNT, NULL);
+			bit = last_bit + 1;
+			cond_resched();
+		}
+		bitmap->bm_set[bitmap_index] = bits_set;
 	}
 }
 
 /* For the layout, see comment above drbd_md_set_sector_offsets(). */
-static u64 drbd_md_on_disk_bits(struct drbd_backing_dev *ldev)
+static u64 drbd_md_on_disk_bits(struct drbd_device *device)
 {
-	u64 bitmap_sectors;
+	struct drbd_backing_dev *ldev = device->ldev;
+	u64 bitmap_sectors, word64_on_disk;
 	if (ldev->md.al_offset == 8)
 		bitmap_sectors = ldev->md.md_size_sect - ldev->md.bm_offset;
 	else
 		bitmap_sectors = ldev->md.al_offset - ldev->md.bm_offset;
-	return bitmap_sectors << (9 + 3);
+
+	/* for interoperability between 32bit and 64bit architectures,
+	 * we round on 64bit words.  FIXME do we still need this? */
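+	/* Example: 4096 bitmap sectors and max_peers == 4 give
+	 * 4096 * 64 / 4 == 65536 64-bit words, i.e. 4 Mibit per peer. */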
+	word64_on_disk = bitmap_sectors << (9 - 3); /* x * (512/8) */
+	do_div(word64_on_disk, ldev->md.max_peers);
+	return word64_on_disk << 6; /* x * 64 */
 }
 
 /*
@@ -627,116 +895,151 @@ static u64 drbd_md_on_disk_bits(struct drbd_backing_dev *ldev)
  * In case this is actually a resize, we copy the old bitmap into the new one.
  * Otherwise, the bitmap is initialized to all bits set.
  */
-int drbd_bm_resize(struct drbd_device *device, sector_t capacity, int set_new_bits)
+int drbd_bm_resize(struct drbd_device *device, sector_t capacity, bool set_new_bits)
 {
 	struct drbd_bitmap *b = device->bitmap;
-	unsigned long bits, words, owords, obits;
+	unsigned long bits, words, obits;
 	unsigned long want, have, onpages; /* number of pages */
-	struct page **npages, **opages = NULL;
+	struct page **npages = NULL, **opages = NULL;
+	void *bm_on_pmem = NULL;
 	int err = 0;
 	bool growing;
 
-	if (!expect(device, b))
-		return -ENOMEM;
-
-	drbd_bm_lock(device, "resize", BM_LOCKED_MASK);
-
-	drbd_info(device, "drbd_bm_resize called with capacity == %llu\n",
-			(unsigned long long)capacity);
+	drbd_bm_lock(device, "resize", BM_LOCK_ALL);
 
 	if (capacity == b->bm_dev_capacity)
 		goto out;
 
 	if (capacity == 0) {
+		unsigned int bitmap_index;
+
 		spin_lock_irq(&b->bm_lock);
 		opages = b->bm_pages;
 		onpages = b->bm_number_of_pages;
-		owords = b->bm_words;
+		drbd_info(device, "Freeing bitmap of size %llu KiB\n", PAGES_TO_KIB(onpages));
 		b->bm_pages = NULL;
-		b->bm_number_of_pages =
-		b->bm_set   =
-		b->bm_bits  =
-		b->bm_words =
+		b->bm_number_of_pages = 0;
+		for (bitmap_index = 0; bitmap_index < b->bm_max_peers; bitmap_index++)
+			b->bm_set[bitmap_index] = 0;
+		b->bm_bits = 0;
+		b->bm_bits_4k = 0;
+		b->bm_words = 0;
 		b->bm_dev_capacity = 0;
 		spin_unlock_irq(&b->bm_lock);
-		bm_free_pages(opages, onpages);
-		bm_vk_free(opages);
+		if (!(b->bm_flags & BM_ON_DAX_PMEM)) {
+			bm_free_pages(opages, onpages);
+			kvfree(opages);
+		}
 		goto out;
 	}
-	bits  = BM_SECT_TO_BIT(ALIGN(capacity, BM_SECT_PER_BIT));
-
-	/* if we would use
-	   words = ALIGN(bits,BITS_PER_LONG) >> LN2_BPL;
-	   a 32bit host could present the wrong number of words
-	   to a 64bit host.
-	*/
-	words = ALIGN(bits, 64) >> LN2_BPL;
+	bits  = bm_sect_to_bit(b, ALIGN(capacity, bm_sect_per_bit(b)));
+	words = (ALIGN(bits, 64) * b->bm_max_peers) / BITS_PER_LONG;
 
+	want = PFN_UP(words * sizeof(long));
+	have = b->bm_number_of_pages;
 	if (get_ldev(device)) {
-		u64 bits_on_disk = drbd_md_on_disk_bits(device->ldev);
-		put_ldev(device);
+		u64 bits_on_disk = drbd_md_on_disk_bits(device);
 		if (bits > bits_on_disk) {
-			drbd_info(device, "bits = %lu\n", bits);
-			drbd_info(device, "bits_on_disk = %llu\n", bits_on_disk);
+			put_ldev(device);
+			drbd_err(device, "Not enough space for bitmap: %lu > %lu\n",
+				(unsigned long)bits, (unsigned long)bits_on_disk);
 			err = -ENOSPC;
 			goto out;
 		}
+		if (drbd_md_dax_active(device->ldev)) {
+			drbd_info(device, "DAX/PMEM bitmap has size %llu KiB\n",
+				  PAGES_TO_KIB(want));
+			bm_on_pmem = drbd_dax_bitmap(device, want);
+		}
+		put_ldev(device);
 	}
 
-	want = PFN_UP(words*sizeof(long));
-	have = b->bm_number_of_pages;
-	if (want == have) {
-		D_ASSERT(device, b->bm_pages != NULL);
-		npages = b->bm_pages;
-	} else {
-		if (drbd_insert_fault(device, DRBD_FAULT_BM_ALLOC))
-			npages = NULL;
-		else
-			npages = bm_realloc_pages(b, want);
-	}
+	if (!bm_on_pmem) {
+		if (want == have) {
+			D_ASSERT(device, b->bm_pages != NULL);
+			drbd_info(device, "Bitmap size remains %llu KiB\n", PAGES_TO_KIB(have));
+			npages = b->bm_pages;
+		} else {
+			if (have == 0) {
+				drbd_info(device, "Allocating %llu KiB for new bitmap\n",
+						PAGES_TO_KIB(want));
+			} else if (want > have) {
+				drbd_info(device, "Allocating %llu KiB for bitmap, new size %llu KiB\n",
+						PAGES_TO_KIB(want - have), PAGES_TO_KIB(want));
+			}
 
-	if (!npages) {
-		err = -ENOMEM;
-		goto out;
+			if (drbd_insert_fault(device, DRBD_FAULT_BM_ALLOC))
+				npages = NULL;
+			else
+				npages = bm_realloc_pages(device, want);
+		}
+
+		if (!npages) {
+			err = -ENOMEM;
+			goto out;
+		}
 	}
 
 	spin_lock_irq(&b->bm_lock);
-	opages = b->bm_pages;
-	owords = b->bm_words;
 	obits  = b->bm_bits;
 
 	growing = bits > obits;
-	if (opages && growing && set_new_bits)
-		bm_set_surplus(b);
 
-	b->bm_pages = npages;
+	if (bm_on_pmem) {
+		if (b->bm_on_pmem) {
+			void *src = b->bm_on_pmem;
+			memmove(bm_on_pmem, src, b->bm_words * sizeof(long));
+			arch_wb_cache_pmem(bm_on_pmem, b->bm_words * sizeof(long));
+		} else {
+			/* We are attaching a bitmap on PMEM. Since the memory
+			 * is persistent, the bitmap is still valid. Do not
+			 * overwrite it. */
+			growing = false;
+		}
+		b->bm_on_pmem = bm_on_pmem;
+		b->bm_flags |= BM_ON_DAX_PMEM;
+	} else {
+		opages = b->bm_pages;
+		b->bm_pages = npages;
+	}
 	b->bm_number_of_pages = want;
-	b->bm_bits  = bits;
+	b->bm_bits = bits;
+	b->bm_bits_4k = sect_to_bit(ALIGN(capacity, sect_per_bit(BM_BLOCK_SHIFT_4k)),
+				BM_BLOCK_SHIFT_4k);
 	b->bm_words = words;
 	b->bm_dev_capacity = capacity;
 
 	if (growing) {
-		if (set_new_bits) {
-			bm_memset(b, owords, 0xff, words-owords);
-			b->bm_set += bits - obits;
-		} else
-			bm_memset(b, owords, 0x00, words-owords);
+		unsigned int bitmap_index;
 
+		for (bitmap_index = 0; bitmap_index < b->bm_max_peers; bitmap_index++) {
+			unsigned long bm_set = b->bm_set[bitmap_index];
+
+			if (set_new_bits) {
+				___bm_op(device, bitmap_index, obits, -1UL, BM_OP_SET, NULL);
+				bm_set += bits - obits;
+			} else {
+				___bm_op(device, bitmap_index, obits, -1UL, BM_OP_CLEAR, NULL);
+			}
+
+			b->bm_set[bitmap_index] = bm_set;
+		}
 	}
 
-	if (want < have) {
+	if (want < have && !(b->bm_flags & BM_ON_DAX_PMEM)) {
 		/* implicit: (opages != NULL) && (opages != npages) */
+		drbd_info(device, "Freeing %llu KiB from bitmap, new size %llu KiB\n",
+				PAGES_TO_KIB(have - want), PAGES_TO_KIB(want));
 		bm_free_pages(opages + want, have - want);
 	}
 
-	(void)bm_clear_surplus(b);
-
 	spin_unlock_irq(&b->bm_lock);
 	if (opages != npages)
-		bm_vk_free(opages);
+		kvfree(opages);
 	if (!growing)
-		b->bm_set = bm_count_bits(b);
-	drbd_info(device, "resync bitmap: bits=%lu words=%lu pages=%lu\n", bits, words, want);
+		bm_count_bits(device);
+	drbd_info(device, "resync bitmap: bits=%lu bits_4k=%lu words=%lu pages=%lu\n",
+			bits, b->bm_bits_4k, words, want);
 
  out:
 	drbd_bm_unlock(device);
@@ -748,10 +1051,8 @@ int drbd_bm_resize(struct drbd_device *device, sector_t capacity, int set_new_bi
  * leaving this function...
  * we still need to lock it, since it is important that this returns
  * bm_set == 0 precisely.
- *
- * maybe bm_set should be atomic_t ?
  */
-unsigned long _drbd_bm_total_weight(struct drbd_device *device)
+unsigned long _drbd_bm_total_weight(struct drbd_device *device, int bitmap_index)
 {
 	struct drbd_bitmap *b = device->bitmap;
 	unsigned long s;
@@ -763,172 +1064,98 @@ unsigned long _drbd_bm_total_weight(struct drbd_device *device)
 		return 0;
 
 	spin_lock_irqsave(&b->bm_lock, flags);
-	s = b->bm_set;
+	s = b->bm_set[bitmap_index];
 	spin_unlock_irqrestore(&b->bm_lock, flags);
 
 	return s;
 }
 
-unsigned long drbd_bm_total_weight(struct drbd_device *device)
+unsigned long drbd_bm_total_weight(struct drbd_peer_device *peer_device)
 {
+	struct drbd_device *device = peer_device->device;
 	unsigned long s;
+
+	if (peer_device->bitmap_index == -1)
+		return 0;
+
 	/* if I don't have a disk, I don't know about out-of-sync status */
 	if (!get_ldev_if_state(device, D_NEGOTIATING))
 		return 0;
-	s = _drbd_bm_total_weight(device);
+	s = _drbd_bm_total_weight(device, peer_device->bitmap_index);
 	put_ldev(device);
 	return s;
 }
 
+/* Returns the number of unsigned long words per peer */
 size_t drbd_bm_words(struct drbd_device *device)
 {
 	struct drbd_bitmap *b = device->bitmap;
+
 	if (!expect(device, b))
 		return 0;
 	if (!expect(device, b->bm_pages))
 		return 0;
 
-	return b->bm_words;
+	return b->bm_words / b->bm_max_peers;
 }
 
 unsigned long drbd_bm_bits(struct drbd_device *device)
 {
 	struct drbd_bitmap *b = device->bitmap;
+
 	if (!expect(device, b))
 		return 0;
 
 	return b->bm_bits;
 }
 
+unsigned long drbd_bm_bits_4k(struct drbd_device *device)
+{
+	struct drbd_bitmap *b = device->bitmap;
+
+	if (!expect(device, b))
+		return 0;
+
+	return b->bm_bits_4k;
+}
+
 /* merge number words from buffer into the bitmap starting at offset.
  * buffer[i] is expected to be little endian unsigned long.
  * bitmap must be locked by drbd_bm_lock.
  * currently only used from receive_bitmap.
  */
-void drbd_bm_merge_lel(struct drbd_device *device, size_t offset, size_t number,
+void drbd_bm_merge_lel(struct drbd_peer_device *peer_device, size_t offset, size_t number,
 			unsigned long *buffer)
 {
-	struct drbd_bitmap *b = device->bitmap;
-	unsigned long *p_addr, *bm;
-	unsigned long word, bits;
-	unsigned int idx;
-	size_t end, do_now;
-
-	end = offset + number;
-
-	if (!expect(device, b))
-		return;
-	if (!expect(device, b->bm_pages))
-		return;
-	if (number == 0)
-		return;
-	WARN_ON(offset >= b->bm_words);
-	WARN_ON(end    >  b->bm_words);
+	unsigned long start, end;
 
-	spin_lock_irq(&b->bm_lock);
-	while (offset < end) {
-		do_now = min_t(size_t, ALIGN(offset+1, LWPP), end) - offset;
-		idx = bm_word_to_page_idx(b, offset);
-		p_addr = bm_map_pidx(b, idx);
-		bm = p_addr + MLPP(offset);
-		offset += do_now;
-		while (do_now--) {
-			bits = hweight_long(*bm);
-			word = *bm | *buffer++;
-			*bm++ = word;
-			b->bm_set += hweight_long(word) - bits;
-		}
-		bm_unmap(p_addr);
-		bm_set_page_need_writeout(b->bm_pages[idx]);
-	}
-	/* with 32bit <-> 64bit cross-platform connect
-	 * this is only correct for current usage,
-	 * where we _know_ that we are 64 bit aligned,
-	 * and know that this function is used in this way, too...
-	 */
-	if (end == b->bm_words)
-		b->bm_set -= bm_clear_surplus(b);
-	spin_unlock_irq(&b->bm_lock);
+	start = offset * BITS_PER_LONG;
+	end = start + number * BITS_PER_LONG - 1;
+	bm_op(peer_device->device, peer_device->bitmap_index, start, end, BM_OP_MERGE, (__le32 *)buffer);
 }
 
 /* copy number words from the bitmap starting at offset into the buffer.
  * buffer[i] will be little endian unsigned long.
  */
-void drbd_bm_get_lel(struct drbd_device *device, size_t offset, size_t number,
+void drbd_bm_get_lel(struct drbd_peer_device *peer_device, size_t offset, size_t number,
 		     unsigned long *buffer)
 {
-	struct drbd_bitmap *b = device->bitmap;
-	unsigned long *p_addr, *bm;
-	size_t end, do_now;
-
-	end = offset + number;
+	unsigned long start, end;
 
-	if (!expect(device, b))
-		return;
-	if (!expect(device, b->bm_pages))
-		return;
-
-	spin_lock_irq(&b->bm_lock);
-	if ((offset >= b->bm_words) ||
-	    (end    >  b->bm_words) ||
-	    (number <= 0))
-		drbd_err(device, "offset=%lu number=%lu bm_words=%lu\n",
-			(unsigned long)	offset,
-			(unsigned long)	number,
-			(unsigned long) b->bm_words);
-	else {
-		while (offset < end) {
-			do_now = min_t(size_t, ALIGN(offset+1, LWPP), end) - offset;
-			p_addr = bm_map_pidx(b, bm_word_to_page_idx(b, offset));
-			bm = p_addr + MLPP(offset);
-			offset += do_now;
-			while (do_now--)
-				*buffer++ = *bm++;
-			bm_unmap(p_addr);
-		}
-	}
-	spin_unlock_irq(&b->bm_lock);
-}
-
-/* set all bits in the bitmap */
-void drbd_bm_set_all(struct drbd_device *device)
-{
-	struct drbd_bitmap *b = device->bitmap;
-	if (!expect(device, b))
-		return;
-	if (!expect(device, b->bm_pages))
-		return;
-
-	spin_lock_irq(&b->bm_lock);
-	bm_memset(b, 0, 0xff, b->bm_words);
-	(void)bm_clear_surplus(b);
-	b->bm_set = b->bm_bits;
-	spin_unlock_irq(&b->bm_lock);
+	start = offset * BITS_PER_LONG;
+	end = start + number * BITS_PER_LONG - 1;
+	bm_op(peer_device->device, peer_device->bitmap_index, start, end, BM_OP_EXTRACT, (__le32 *)buffer);
 }
 
-/* clear all bits in the bitmap */
-void drbd_bm_clear_all(struct drbd_device *device)
-{
-	struct drbd_bitmap *b = device->bitmap;
-	if (!expect(device, b))
-		return;
-	if (!expect(device, b->bm_pages))
-		return;
-
-	spin_lock_irq(&b->bm_lock);
-	bm_memset(b, 0, 0, b->bm_words);
-	b->bm_set = 0;
-	spin_unlock_irq(&b->bm_lock);
-}
 
 static void drbd_bm_aio_ctx_destroy(struct kref *kref)
 {
 	struct drbd_bm_aio_ctx *ctx = container_of(kref, struct drbd_bm_aio_ctx, kref);
 	unsigned long flags;
 
-	spin_lock_irqsave(&ctx->device->resource->req_lock, flags);
+	spin_lock_irqsave(&ctx->device->pending_bmio_lock, flags);
 	list_del(&ctx->list);
-	spin_unlock_irqrestore(&ctx->device->resource->req_lock, flags);
+	spin_unlock_irqrestore(&ctx->device->pending_bmio_lock, flags);
 	put_ldev(ctx->device);
 	kfree(ctx);
 }
@@ -936,25 +1163,28 @@ static void drbd_bm_aio_ctx_destroy(struct kref *kref)
 /* bv_page may be a copy, or may be the original */
 static void drbd_bm_endio(struct bio *bio)
 {
+	/* ldev_ref_transfer: ldev ref from bio submit in bitmap I/O path */
 	struct drbd_bm_aio_ctx *ctx = bio->bi_private;
 	struct drbd_device *device = ctx->device;
 	struct drbd_bitmap *b = device->bitmap;
-	unsigned int idx = bm_page_to_idx(bio_first_page_all(bio));
+	unsigned int idx = bm_page_to_idx(bio->bi_io_vec[0].bv_page);
+
+	blk_status_t status = bio->bi_status;
 
 	if ((ctx->flags & BM_AIO_COPY_PAGES) == 0 &&
 	    !bm_test_page_unchanged(b->bm_pages[idx]))
 		drbd_warn(device, "bitmap page idx %u changed during IO!\n", idx);
 
-	if (bio->bi_status) {
+	if (status) {
 		/* ctx error will hold the completed-last non-zero error code,
 		 * in case error codes differ. */
-		ctx->error = blk_status_to_errno(bio->bi_status);
+		ctx->error = blk_status_to_errno(status);
 		bm_set_page_io_err(b->bm_pages[idx]);
 		/* Not identical to on disk version of it.
 		 * Is BM_PAGE_IO_ERROR enough? */
-		if (drbd_ratelimit())
+		if (drbd_device_ratelimit(device, BACKEND))
 			drbd_err(device, "IO ERROR %d on bitmap page idx %u\n",
-					bio->bi_status, idx);
+				 status, idx);
 	} else {
 		bm_clear_page_io_err(b->bm_pages[idx]);
 		dynamic_drbd_dbg(device, "bitmap page idx %u completed\n", idx);
@@ -987,17 +1217,17 @@ static inline sector_t drbd_md_last_bitmap_sector(struct drbd_backing_dev *bdev)
 	}
 }
 
-static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, int page_nr) __must_hold(local)
+static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, int page_nr)
 {
+	struct bio *bio;
 	struct drbd_device *device = ctx->device;
-	enum req_op op = ctx->flags & BM_AIO_READ ? REQ_OP_READ : REQ_OP_WRITE;
 	struct drbd_bitmap *b = device->bitmap;
-	struct bio *bio;
 	struct page *page;
 	sector_t last_bm_sect;
 	sector_t first_bm_sect;
 	sector_t on_disk_sector;
 	unsigned int len;
+	enum req_op op = ctx->flags & BM_AIO_READ ? REQ_OP_READ : REQ_OP_WRITE;
 
 	first_bm_sect = device->ldev->md.md_offset + device->ldev->md.bm_offset;
 	on_disk_sector = first_bm_sect + (((sector_t)page_nr) << (PAGE_SHIFT-SECTOR_SHIFT));
@@ -1013,9 +1243,9 @@ static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, int page_nr) __must_ho
 		else
 			len = PAGE_SIZE;
 	} else {
-		if (drbd_ratelimit()) {
+		if (drbd_device_ratelimit(device, METADATA)) {
 			drbd_err(device, "Invalid offset during on-disk bitmap access: "
-				 "page idx %u, sector %llu\n", page_nr, on_disk_sector);
+				 "page idx %u, sector %llu\n", page_nr, (unsigned long long) on_disk_sector);
 		}
 		ctx->error = -EIO;
 		bm_set_page_io_err(b->bm_pages[page_nr]);
@@ -1040,35 +1270,57 @@ static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, int page_nr) __must_ho
 		bm_store_page_idx(page, page_nr);
 	} else
 		page = b->bm_pages[page_nr];
+
 	bio = bio_alloc_bioset(device->ldev->md_bdev, 1, op, GFP_NOIO,
-			&drbd_md_io_bio_set);
+		&drbd_md_io_bio_set);
 	bio->bi_iter.bi_sector = on_disk_sector;
 	__bio_add_page(bio, page, len, 0);
 	bio->bi_private = ctx;
 	bio->bi_end_io = drbd_bm_endio;
 
 	if (drbd_insert_fault(device, (op == REQ_OP_WRITE) ? DRBD_FAULT_MD_WR : DRBD_FAULT_MD_RD)) {
-		bio_io_error(bio);
+		bio->bi_status = BLK_STS_IOERR;
+		bio_endio(bio);
 	} else {
 		submit_bio(bio);
+		if (op == REQ_OP_WRITE)
+			device->bm_writ_cnt++;
 		/* this should not count as user activity and cause the
 		 * resync to throttle -- see drbd_rs_should_slow_down(). */
 		atomic_add(len >> 9, &device->rs_sect_ev);
 	}
 }
 
-/*
- * bm_rw: read/write the whole bitmap from/to its on disk location.
+/**
+ * bm_rw_range() - read/write the specified range of bitmap pages
+ * @device: drbd device this bitmap is associated with
+ * @start_page: first bitmap page index to process
+ * @end_page: last bitmap page index to process (inclusive)
+ * @flags: BM_AIO_*, see struct bm_aio_ctx.
+ *
+ * Silently limits end_page to the current bitmap size.
+ *
+ * We don't want to special case on logical_block_size of the backend device,
+ * so we submit PAGE_SIZE aligned pieces.
+ * Note that on "most" systems, PAGE_SIZE is 4k.
+ *
+ * In case this becomes an issue on systems with larger PAGE_SIZE,
+ * we may want to change this again to do 4k aligned 4k pieces.
  */
-static int bm_rw(struct drbd_device *device, const unsigned int flags, unsigned lazy_writeout_upper_idx) __must_hold(local)
+static int bm_rw_range(struct drbd_device *device, unsigned int start_page, unsigned int end_page,
+		       unsigned int flags)
 {
 	struct drbd_bm_aio_ctx *ctx;
 	struct drbd_bitmap *b = device->bitmap;
-	unsigned int num_pages, i, count = 0;
+	unsigned int i, count = 0;
 	unsigned long now;
-	char ppb[10];
 	int err = 0;
 
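+	/* On DAX/PMEM the bitmap lives directly in persistent memory;
+	 * writing back the CPU cache replaces all block I/O below. */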
+	if (b->bm_flags & BM_ON_DAX_PMEM) {
+		if (flags & (BM_AIO_WRITE_HINTED | BM_AIO_WRITE_ALL_PAGES | BM_AIO_WRITE_LAZY))
+			arch_wb_cache_pmem(b->bm_on_pmem, b->bm_words * sizeof(long));
+		return 0;
+	}
+
 	/*
 	 * We are protected against bitmap disappearing/resizing by holding an
 	 * ldev reference (caller must have called get_ldev()).
@@ -1078,6 +1330,10 @@ static int bm_rw(struct drbd_device *device, const unsigned int flags, unsigned
 	 * as we submit copies of pages anyways.
 	 */
 
+	/* if we reach this, we should have at least *some* bitmap pages. */
+	if (!expect(device, b->bm_number_of_pages))
+		return -ENODEV;
+
 	ctx = kmalloc_obj(struct drbd_bm_aio_ctx, GFP_NOIO);
 	if (!ctx)
 		return -ENOMEM;
@@ -1092,29 +1348,33 @@ static int bm_rw(struct drbd_device *device, const unsigned int flags, unsigned
 		.kref = KREF_INIT(2),
 	};
 
-	if (!get_ldev_if_state(device, D_ATTACHING)) {  /* put is in drbd_bm_aio_ctx_destroy() */
-		drbd_err(device, "ASSERT FAILED: get_ldev_if_state() == 1 in bm_rw()\n");
+	if (!expect(device, get_ldev_if_state(device, D_ATTACHING))) {  /* put is in drbd_bm_aio_ctx_destroy() */
 		kfree(ctx);
 		return -ENODEV;
 	}
-	/* Here D_ATTACHING is sufficient since drbd_bm_read() is called only from
-	   drbd_adm_attach(), after device->ldev was assigned. */
+	/* Here, D_ATTACHING is sufficient because drbd_bm_read() is only
+	 * called from drbd_adm_attach(), after device->ldev has been assigned.
+	 *
+	 * The corresponding put_ldev() happens in bm_aio_ctx_destroy().
+	 */
 
 	if (0 == (ctx->flags & ~BM_AIO_READ))
-		WARN_ON(!(BM_LOCKED_MASK & b->bm_flags));
+		WARN_ON(!(b->bm_flags & BM_LOCK_ALL));
 
-	spin_lock_irq(&device->resource->req_lock);
-	list_add_tail(&ctx->list, &device->pending_bitmap_io);
-	spin_unlock_irq(&device->resource->req_lock);
+	if (end_page >= b->bm_number_of_pages)
+		end_page = b->bm_number_of_pages - 1;
 
-	num_pages = b->bm_number_of_pages;
+	spin_lock_irq(&device->pending_bmio_lock);
+	list_add_tail(&ctx->list, &device->pending_bitmap_io);
+	spin_unlock_irq(&device->pending_bmio_lock);
 
 	now = jiffies;
 
-	/* let the layers below us try to merge these bios... */
+	blk_start_plug(&ctx->bm_aio_plug);
+	/* the block layer implicitly unplugs if this task gets scheduled out */
 
 	if (flags & BM_AIO_READ) {
-		for (i = 0; i < num_pages; i++) {
+		for (i = start_page; i <= end_page; i++) {
 			atomic_inc(&ctx->in_flight);
 			bm_page_io_async(ctx, i);
 			++count;
@@ -1125,7 +1385,7 @@ static int bm_rw(struct drbd_device *device, const unsigned int flags, unsigned
 		unsigned int hint;
 		for (hint = 0; hint < b->n_bitmap_hints; hint++) {
 			i = b->al_bitmap_hints[hint];
-			if (i >= num_pages) /* == -1U: no hint here. */
+			if (i > end_page)
 				continue;
 			/* Several AL-extents may point to the same page. */
 			if (!test_and_clear_bit(BM_PAGE_HINT_WRITEOUT,
@@ -1139,10 +1399,9 @@ static int bm_rw(struct drbd_device *device, const unsigned int flags, unsigned
 			++count;
 		}
 	} else {
-		for (i = 0; i < num_pages; i++) {
-			/* ignore completely unchanged pages */
-			if (lazy_writeout_upper_idx && i == lazy_writeout_upper_idx)
-				break;
+		for (i = start_page; i <= end_page; i++) {
+			/* ignore completely unchanged pages,
+			 * unless specifically requested to write ALL pages */
 			if (!(flags & BM_AIO_WRITE_ALL_PAGES) &&
 			    bm_test_page_unchanged(b->bm_pages[i])) {
 				dynamic_drbd_dbg(device, "skipped bm write for idx %u\n", i);
@@ -1150,7 +1409,7 @@ static int bm_rw(struct drbd_device *device, const unsigned int flags, unsigned
 			}
 			/* during lazy writeout,
 			 * ignore those pages not marked for lazy writeout. */
-			if (lazy_writeout_upper_idx &&
+			if ((flags & BM_AIO_WRITE_LAZY) &&
 			    !bm_test_page_lazy_writeout(b->bm_pages[i])) {
 				dynamic_drbd_dbg(device, "skipped bm lazy write for idx %u\n", i);
 				continue;
@@ -1161,6 +1420,8 @@ static int bm_rw(struct drbd_device *device, const unsigned int flags, unsigned
 			cond_resched();
 		}
 	}
+	/* explicit unplug, we are done submitting */
+	blk_finish_plug(&ctx->bm_aio_plug);
 
 	/*
 	 * We initialize ctx->in_flight to one to make sure drbd_bm_endio
@@ -1170,13 +1431,14 @@ static int bm_rw(struct drbd_device *device, const unsigned int flags, unsigned
 	 * no need to wait.  Still, we need to put the kref associated with the
 	 * "in_flight reached zero, all done" event.
 	 */
-	if (!atomic_dec_and_test(&ctx->in_flight))
+	if (!atomic_dec_and_test(&ctx->in_flight)) {
+		/* ldev_safe: get_ldev_if_state() above, put_ldev in drbd_bm_aio_ctx_destroy() */
 		wait_until_done_or_force_detached(device, device->ldev, &ctx->done);
-	else
+	} else
 		kref_put(&ctx->kref, &drbd_bm_aio_ctx_destroy);
 
-	/* summary for global bitmap IO */
-	if (flags == 0) {
+	/* summary stats for global bitmap IO */
+	if ((flags & BM_AIO_NO_STATS) == 0 && count) {
 		unsigned int ms = jiffies_to_msecs(jiffies - now);
 		if (ms > 5) {
 			drbd_info(device, "bitmap %s of %u pages took %u ms\n",
@@ -1186,63 +1448,106 @@ static int bm_rw(struct drbd_device *device, const unsigned int flags, unsigned
 	}
 
 	if (ctx->error) {
-		drbd_alert(device, "we had at least one MD IO ERROR during bitmap IO\n");
-		drbd_chk_io_error(device, 1, DRBD_META_IO_ERROR);
+		drbd_err(device, "we had at least one MD IO ERROR during bitmap IO\n");
+		drbd_handle_io_error(device, DRBD_META_IO_ERROR);
 		err = -EIO; /* ctx->error ? */
 	}
 
 	if (atomic_read(&ctx->in_flight))
 		err = -EIO; /* Disk timeout/force-detach during IO... */
 
-	now = jiffies;
 	if (flags & BM_AIO_READ) {
-		b->bm_set = bm_count_bits(b);
-		drbd_info(device, "recounting of set bits took additional %lu jiffies\n",
-		     jiffies - now);
+		unsigned int ms;
+		now = jiffies;
+		bm_count_bits(device);
+		ms = jiffies_to_msecs(jiffies - now);
+		/* If we can count quickly, there is no need to report this either */
+		if (ms > 3)
+			drbd_info(device, "recounting of set bits took additional %ums\n", ms);
 	}
-	now = b->bm_set;
-
-	if ((flags & ~BM_AIO_READ) == 0)
-		drbd_info(device, "%s (%lu bits) marked out-of-sync by on disk bit-map.\n",
-		     ppsize(ppb, now << (BM_BLOCK_SHIFT-10)), now);
 
 	kref_put(&ctx->kref, &drbd_bm_aio_ctx_destroy);
 	return err;
 }
 
+static int bm_rw(struct drbd_device *device, unsigned int flags)
+{
+	return bm_rw_range(device, 0, -1U, flags);
+}
+
 /*
  * drbd_bm_read() - Read the whole bitmap from its on disk location.
  * @device:	DRBD device.
+ * @peer_device: parameter ignored
  */
 int drbd_bm_read(struct drbd_device *device,
-		 struct drbd_peer_device *peer_device) __must_hold(local)
+		 struct drbd_peer_device *peer_device)
+{
+	return bm_rw(device, BM_AIO_READ);
+}
 
+static void push_al_bitmap_hint(struct drbd_device *device, unsigned int page_nr)
 {
-	return bm_rw(device, BM_AIO_READ, 0);
+	struct drbd_bitmap *b = device->bitmap;
+	struct page *page = b->bm_pages[page_nr];
+
+	BUG_ON(b->n_bitmap_hints >= ARRAY_SIZE(b->al_bitmap_hints));
+	if (!test_and_set_bit(BM_PAGE_HINT_WRITEOUT, &page_private(page)))
+		b->al_bitmap_hints[b->n_bitmap_hints++] = page_nr;
+}
+
+/**
+ * drbd_bm_mark_range_for_writeout() - mark with a "hint" to be considered for writeout
+ * @device:	DRBD device.
+ * @start:	Start index of the range to mark.
+ * @end:	End index of the range to mark.
+ *
+ * From within an activity log transaction, we mark a few pages with these
+ * hints, then call drbd_bm_write_hinted(), which will only write out changed
+ * pages which are flagged with this mark.
+ */
+void drbd_bm_mark_range_for_writeout(struct drbd_device *device, unsigned long start, unsigned long end)
+{
+	struct drbd_bitmap *bitmap = device->bitmap;
+	unsigned int page_nr, last_page;
+
+	if (bitmap->bm_flags & BM_ON_DAX_PMEM)
+		return;
+
+	if (end >= bitmap->bm_bits)
+		end = bitmap->bm_bits - 1;
+
+	page_nr = bit_to_page_interleaved(bitmap, 0, start);
+	last_page = bit_to_page_interleaved(bitmap, bitmap->bm_max_peers - 1, end);
+	for (; page_nr <= last_page; page_nr++)
+		push_al_bitmap_hint(device, page_nr);
 }
 
 /*
  * drbd_bm_write() - Write the whole bitmap to its on disk location.
  * @device:	DRBD device.
+ * @peer_device: parameter ignored
  *
  * Will only write pages that have changed since last IO.
  */
 int drbd_bm_write(struct drbd_device *device,
-		 struct drbd_peer_device *peer_device) __must_hold(local)
+		  struct drbd_peer_device *peer_device)
 {
-	return bm_rw(device, 0, 0);
+	return bm_rw(device, 0);
 }
 
 /*
  * drbd_bm_write_all() - Write the whole bitmap to its on disk location.
- * @device:	DRBD device.
+ * @device:	 DRBD device.
+ * @peer_device: parameter ignored
  *
- * Will write all pages.
+ * Will write all pages. Is used for online resize operations. The
+ * whole bitmap should be written into its new position.
  */
 int drbd_bm_write_all(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local)
+		      struct drbd_peer_device *peer_device)
 {
-	return bm_rw(device, BM_AIO_WRITE_ALL_PAGES, 0);
+	return bm_rw(device, BM_AIO_WRITE_ALL_PAGES);
 }
 
 /**
@@ -1250,14 +1555,15 @@ int drbd_bm_write_all(struct drbd_device *device,
  * @device:	DRBD device.
- * @upper_idx:	0: write all changed pages; +ve: page index to stop scanning for changed pages
+ * @upper_idx:	0: write all lazy-writeout pages; +ve: stop before this page index
  */
-int drbd_bm_write_lazy(struct drbd_device *device, unsigned upper_idx) __must_hold(local)
+int drbd_bm_write_lazy(struct drbd_device *device, unsigned int upper_idx)
 {
-	return bm_rw(device, BM_AIO_COPY_PAGES, upper_idx);
+	return bm_rw_range(device, 0, upper_idx - 1, BM_AIO_COPY_PAGES | BM_AIO_WRITE_LAZY);
 }
 
 /*
  * drbd_bm_write_copy_pages() - Write the whole bitmap to its on disk location.
  * @device:	DRBD device.
+ * @peer_device: parameter ignored
  *
  * Will only write pages that have changed since last IO.
  * In contrast to drbd_bm_write(), this will copy the bitmap pages
@@ -1267,431 +1573,181 @@ int drbd_bm_write_lazy(struct drbd_device *device, unsigned upper_idx) __must_ho
  * pending resync acks are still being processed.
  */
 int drbd_bm_write_copy_pages(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local)
+			     struct drbd_peer_device *peer_device)
 {
-	return bm_rw(device, BM_AIO_COPY_PAGES, 0);
+	return bm_rw(device, BM_AIO_COPY_PAGES);
 }
 
 /*
  * drbd_bm_write_hinted() - Write bitmap pages with "hint" marks, if they have changed.
  * @device:	DRBD device.
  */
-int drbd_bm_write_hinted(struct drbd_device *device) __must_hold(local)
-{
-	return bm_rw(device, BM_AIO_WRITE_HINTED | BM_AIO_COPY_PAGES, 0);
-}
-
-/* NOTE
- * find_first_bit returns int, we return unsigned long.
- * For this to work on 32bit arch with bitnumbers > (1<<32),
- * we'd need to return u64, and get a whole lot of other places
- * fixed where we still use unsigned long.
- *
- * this returns a bit number, NOT a sector!
- */
-static unsigned long __bm_find_next(struct drbd_device *device, unsigned long bm_fo,
-	const int find_zero_bit)
+int drbd_bm_write_hinted(struct drbd_device *device)
 {
-	struct drbd_bitmap *b = device->bitmap;
-	unsigned long *p_addr;
-	unsigned long bit_offset;
-	unsigned i;
-
-
-	if (bm_fo > b->bm_bits) {
-		drbd_err(device, "bm_fo=%lu bm_bits=%lu\n", bm_fo, b->bm_bits);
-		bm_fo = DRBD_END_OF_BITMAP;
-	} else {
-		while (bm_fo < b->bm_bits) {
-			/* bit offset of the first bit in the page */
-			bit_offset = bm_fo & ~BITS_PER_PAGE_MASK;
-			p_addr = __bm_map_pidx(b, bm_bit_to_page_idx(b, bm_fo));
-
-			if (find_zero_bit)
-				i = find_next_zero_bit_le(p_addr,
-						PAGE_SIZE*8, bm_fo & BITS_PER_PAGE_MASK);
-			else
-				i = find_next_bit_le(p_addr,
-						PAGE_SIZE*8, bm_fo & BITS_PER_PAGE_MASK);
-
-			__bm_unmap(p_addr);
-			if (i < PAGE_SIZE*8) {
-				bm_fo = bit_offset + i;
-				if (bm_fo >= b->bm_bits)
-					break;
-				goto found;
-			}
-			bm_fo = bit_offset + PAGE_SIZE*8;
-		}
-		bm_fo = DRBD_END_OF_BITMAP;
-	}
- found:
-	return bm_fo;
+	return bm_rw(device, BM_AIO_WRITE_HINTED | BM_AIO_COPY_PAGES);
 }
 
-static unsigned long bm_find_next(struct drbd_device *device,
-	unsigned long bm_fo, const int find_zero_bit)
+unsigned long drbd_bm_find_next(struct drbd_peer_device *peer_device, unsigned long start)
 {
-	struct drbd_bitmap *b = device->bitmap;
-	unsigned long i = DRBD_END_OF_BITMAP;
-
-	if (!expect(device, b))
-		return i;
-	if (!expect(device, b->bm_pages))
-		return i;
-
-	spin_lock_irq(&b->bm_lock);
-	if (BM_DONT_TEST & b->bm_flags)
-		bm_print_lock_info(device);
-
-	i = __bm_find_next(device, bm_fo, find_zero_bit);
-
-	spin_unlock_irq(&b->bm_lock);
-	return i;
+	return bm_op(peer_device->device, peer_device->bitmap_index, start, -1UL,
+		     BM_OP_FIND_BIT, NULL);
 }
 
-unsigned long drbd_bm_find_next(struct drbd_device *device, unsigned long bm_fo)
+/* does not spin_lock_irqsave.
+ * you must take drbd_bm_lock() first */
+unsigned long _drbd_bm_find_next(struct drbd_peer_device *peer_device, unsigned long start)
 {
-	return bm_find_next(device, bm_fo, 0);
+	/* WARN_ON(!(device->b->bm_flags & BM_LOCK_SET)); */
+	return ____bm_op(peer_device->device, peer_device->bitmap_index, start, -1UL,
+		    BM_OP_FIND_BIT, NULL);
 }
 
-#if 0
-/* not yet needed for anything. */
-unsigned long drbd_bm_find_next_zero(struct drbd_device *device, unsigned long bm_fo)
+unsigned long _drbd_bm_find_next_zero(struct drbd_peer_device *peer_device, unsigned long start)
 {
-	return bm_find_next(device, bm_fo, 1);
+	/* WARN_ON(!(device->b->bm_flags & BM_LOCK_SET)); */
+	return ____bm_op(peer_device->device, peer_device->bitmap_index, start, -1UL,
+		    BM_OP_FIND_ZERO_BIT, NULL);
 }
-#endif
 
-/* does not spin_lock_irqsave.
- * you must take drbd_bm_lock() first */
-unsigned long _drbd_bm_find_next(struct drbd_device *device, unsigned long bm_fo)
+unsigned int drbd_bm_set_bits(struct drbd_device *device, unsigned int bitmap_index,
+			      unsigned long start, unsigned long end)
 {
-	/* WARN_ON(!(BM_DONT_SET & device->b->bm_flags)); */
-	return __bm_find_next(device, bm_fo, 0);
+	return bm_op(device, bitmap_index, start, end, BM_OP_SET, NULL);
 }
 
-unsigned long _drbd_bm_find_next_zero(struct drbd_device *device, unsigned long bm_fo)
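+/*
+ * Apply @op to an arbitrarily large bit range, at most one page worth
+ * of this slot's bits at a time, dropping bm_lock between pages so
+ * that long runs can reschedule.
+ */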
+static __always_inline void
+__bm_many_bits_op(struct drbd_device *device, unsigned int bitmap_index, unsigned long start, unsigned long end,
+		  enum bitmap_operations op)
 {
-	/* WARN_ON(!(BM_DONT_SET & device->b->bm_flags)); */
-	return __bm_find_next(device, bm_fo, 1);
-}
+	struct drbd_bitmap *bitmap = device->bitmap;
+	unsigned long bit = start;
 
-/* returns number of bits actually changed.
- * for val != 0, we change 0 -> 1, return code positive
- * for val == 0, we change 1 -> 0, return code negative
- * wants bitnr, not sector.
- * expected to be called for only a few bits (e - s about BITS_PER_LONG).
- * Must hold bitmap lock already. */
-static int __bm_change_bits_to(struct drbd_device *device, const unsigned long s,
-	unsigned long e, int val)
-{
-	struct drbd_bitmap *b = device->bitmap;
-	unsigned long *p_addr = NULL;
-	unsigned long bitnr;
-	unsigned int last_page_nr = -1U;
-	int c = 0;
-	int changed_total = 0;
-
-	if (e >= b->bm_bits) {
-		drbd_err(device, "ASSERT FAILED: bit_s=%lu bit_e=%lu bm_bits=%lu\n",
-				s, e, b->bm_bits);
-		e = b->bm_bits ? b->bm_bits -1 : 0;
-	}
-	for (bitnr = s; bitnr <= e; bitnr++) {
-		unsigned int page_nr = bm_bit_to_page_idx(b, bitnr);
-		if (page_nr != last_page_nr) {
-			if (p_addr)
-				__bm_unmap(p_addr);
-			if (c < 0)
-				bm_set_page_lazy_writeout(b->bm_pages[last_page_nr]);
-			else if (c > 0)
-				bm_set_page_need_writeout(b->bm_pages[last_page_nr]);
-			changed_total += c;
-			c = 0;
-			p_addr = __bm_map_pidx(b, page_nr);
-			last_page_nr = page_nr;
-		}
-		if (val)
-			c += (0 == __test_and_set_bit_le(bitnr & BITS_PER_PAGE_MASK, p_addr));
-		else
-			c -= (0 != __test_and_clear_bit_le(bitnr & BITS_PER_PAGE_MASK, p_addr));
-	}
-	if (p_addr)
-		__bm_unmap(p_addr);
-	if (c < 0)
-		bm_set_page_lazy_writeout(b->bm_pages[last_page_nr]);
-	else if (c > 0)
-		bm_set_page_need_writeout(b->bm_pages[last_page_nr]);
-	changed_total += c;
-	b->bm_set += changed_total;
-	return changed_total;
-}
-
-/* returns number of bits actually changed.
- * for val != 0, we change 0 -> 1, return code positive
- * for val == 0, we change 1 -> 0, return code negative
- * wants bitnr, not sector */
-static int bm_change_bits_to(struct drbd_device *device, const unsigned long s,
-	const unsigned long e, int val)
-{
-	unsigned long flags;
-	struct drbd_bitmap *b = device->bitmap;
-	int c = 0;
+	spin_lock_irq(&bitmap->bm_lock);
 
-	if (!expect(device, b))
-		return 1;
-	if (!expect(device, b->bm_pages))
-		return 0;
+	if (end >= bitmap->bm_bits)
+		end = bitmap->bm_bits - 1;
 
-	spin_lock_irqsave(&b->bm_lock, flags);
-	if ((val ? BM_DONT_SET : BM_DONT_CLEAR) & b->bm_flags)
-		bm_print_lock_info(device);
+	while (bit <= end) {
+		unsigned long last_bit = last_bit_on_page(bitmap, bitmap_index, bit);
 
-	c = __bm_change_bits_to(device, s, e, val);
+		if (end < last_bit)
+			last_bit = end;
 
-	spin_unlock_irqrestore(&b->bm_lock, flags);
-	return c;
+		__bm_op(device, bitmap_index, bit, last_bit, op, NULL);
+		bit = last_bit + 1;
+		spin_unlock_irq(&bitmap->bm_lock);
+		if (need_resched())
+			cond_resched();
+		spin_lock_irq(&bitmap->bm_lock);
+	}
+	spin_unlock_irq(&bitmap->bm_lock);
 }
 
-/* returns number of bits changed 0 -> 1 */
-int drbd_bm_set_bits(struct drbd_device *device, const unsigned long s, const unsigned long e)
+void drbd_bm_set_many_bits(struct drbd_peer_device *peer_device, unsigned long start, unsigned long end)
 {
-	return bm_change_bits_to(device, s, e, 1);
+	if (peer_device->bitmap_index == -1)
+		return;
+	__bm_many_bits_op(peer_device->device, peer_device->bitmap_index, start, end, BM_OP_SET);
 }
 
-/* returns number of bits changed 1 -> 0 */
-int drbd_bm_clear_bits(struct drbd_device *device, const unsigned long s, const unsigned long e)
+void drbd_bm_clear_many_bits(struct drbd_peer_device *peer_device, unsigned long start, unsigned long end)
 {
-	return -bm_change_bits_to(device, s, e, 0);
+	if (peer_device->bitmap_index == -1)
+		return;
+	__bm_many_bits_op(peer_device->device, peer_device->bitmap_index, start, end, BM_OP_CLEAR);
 }
 
-/* sets all bits in full words,
- * from first_word up to, but not including, last_word */
-static inline void bm_set_full_words_within_one_page(struct drbd_bitmap *b,
-		int page_nr, int first_word, int last_word)
+void
+_drbd_bm_clear_many_bits(struct drbd_device *device, int bitmap_index, unsigned long start, unsigned long end)
 {
-	int i;
-	int bits;
-	int changed = 0;
-	unsigned long *paddr = kmap_atomic(b->bm_pages[page_nr]);
-
-	/* I think it is more cache line friendly to hweight_long then set to ~0UL,
-	 * than to first bitmap_weight() all words, then bitmap_fill() all words */
-	for (i = first_word; i < last_word; i++) {
-		bits = hweight_long(paddr[i]);
-		paddr[i] = ~0UL;
-		changed += BITS_PER_LONG - bits;
-	}
-	kunmap_atomic(paddr);
-	if (changed) {
-		/* We only need lazy writeout, the information is still in the
-		 * remote bitmap as well, and is reconstructed during the next
-		 * bitmap exchange, if lost locally due to a crash. */
-		bm_set_page_lazy_writeout(b->bm_pages[page_nr]);
-		b->bm_set += changed;
-	}
+	__bm_many_bits_op(device, bitmap_index, start, end, BM_OP_CLEAR);
 }
 
-/* Same thing as drbd_bm_set_bits,
- * but more efficient for a large bit range.
- * You must first drbd_bm_lock().
- * Can be called to set the whole bitmap in one go.
- * Sets bits from s to e _inclusive_. */
-void _drbd_bm_set_bits(struct drbd_device *device, const unsigned long s, const unsigned long e)
+void
+_drbd_bm_set_many_bits(struct drbd_device *device, int bitmap_index, unsigned long start, unsigned long end)
 {
-	/* First set_bit from the first bit (s)
-	 * up to the next long boundary (sl),
-	 * then assign full words up to the last long boundary (el),
-	 * then set_bit up to and including the last bit (e).
-	 *
-	 * Do not use memset, because we must account for changes,
-	 * so we need to loop over the words with hweight() anyways.
-	 */
-	struct drbd_bitmap *b = device->bitmap;
-	unsigned long sl = ALIGN(s,BITS_PER_LONG);
-	unsigned long el = (e+1) & ~((unsigned long)BITS_PER_LONG-1);
-	int first_page;
-	int last_page;
-	int page_nr;
-	int first_word;
-	int last_word;
-
-	if (e - s <= 3*BITS_PER_LONG) {
-		/* don't bother; el and sl may even be wrong. */
-		spin_lock_irq(&b->bm_lock);
-		__bm_change_bits_to(device, s, e, 1);
-		spin_unlock_irq(&b->bm_lock);
-		return;
-	}
-
-	/* difference is large enough that we can trust sl and el */
-
-	spin_lock_irq(&b->bm_lock);
-
-	/* bits filling the current long */
-	if (sl)
-		__bm_change_bits_to(device, s, sl-1, 1);
-
-	first_page = sl >> (3 + PAGE_SHIFT);
-	last_page = el >> (3 + PAGE_SHIFT);
-
-	/* MLPP: modulo longs per page */
-	/* LWPP: long words per page */
-	first_word = MLPP(sl >> LN2_BPL);
-	last_word = LWPP;
-
-	/* first and full pages, unless first page == last page */
-	for (page_nr = first_page; page_nr < last_page; page_nr++) {
-		bm_set_full_words_within_one_page(device->bitmap, page_nr, first_word, last_word);
-		spin_unlock_irq(&b->bm_lock);
-		cond_resched();
-		first_word = 0;
-		spin_lock_irq(&b->bm_lock);
-	}
-	/* last page (respectively only page, for first page == last page) */
-	last_word = MLPP(el >> LN2_BPL);
-
-	/* consider bitmap->bm_bits = 32768, bitmap->bm_number_of_pages = 1. (or multiples).
-	 * ==> e = 32767, el = 32768, last_page = 2,
-	 * and now last_word = 0.
-	 * We do not want to touch last_page in this case,
-	 * as we did not allocate it, it is not present in bitmap->bm_pages.
-	 */
-	if (last_word)
-		bm_set_full_words_within_one_page(device->bitmap, last_page, first_word, last_word);
-
-	/* possibly trailing bits.
-	 * example: (e & 63) == 63, el will be e+1.
-	 * if that even was the very last bit,
-	 * it would trigger an assert in __bm_change_bits_to()
-	 */
-	if (el <= e)
-		__bm_change_bits_to(device, el, e, 1);
-	spin_unlock_irq(&b->bm_lock);
+	__bm_many_bits_op(device, bitmap_index, start, end, BM_OP_SET);
 }
 
-/* returns bit state
- * wants bitnr, NOT sector.
- * inherently racy... area needs to be locked by means of {al,rs}_lru
- *  1 ... bit set
- *  0 ... bit not set
- * -1 ... first out of bounds access, stop testing for bits!
- */
-int drbd_bm_test_bit(struct drbd_device *device, const unsigned long bitnr)
+/* set all bits in the bitmap */
+void drbd_bm_set_all(struct drbd_device *device)
 {
-	unsigned long flags;
-	struct drbd_bitmap *b = device->bitmap;
-	unsigned long *p_addr;
-	int i;
+	struct drbd_bitmap *bitmap = device->bitmap;
+	unsigned int bitmap_index;
 
-	if (!expect(device, b))
-		return 0;
-	if (!expect(device, b->bm_pages))
-		return 0;
+	for (bitmap_index = 0; bitmap_index < bitmap->bm_max_peers; bitmap_index++)
+		__bm_many_bits_op(device, bitmap_index, 0, -1UL, BM_OP_SET);
+}
 
-	spin_lock_irqsave(&b->bm_lock, flags);
-	if (BM_DONT_TEST & b->bm_flags)
-		bm_print_lock_info(device);
-	if (bitnr < b->bm_bits) {
-		p_addr = bm_map_pidx(b, bm_bit_to_page_idx(b, bitnr));
-		i = test_bit_le(bitnr & BITS_PER_PAGE_MASK, p_addr) ? 1 : 0;
-		bm_unmap(p_addr);
-	} else if (bitnr == b->bm_bits) {
-		i = -1;
-	} else { /* (bitnr > b->bm_bits) */
-		drbd_err(device, "bitnr=%lu > bm_bits=%lu\n", bitnr, b->bm_bits);
-		i = 0;
-	}
+/* clear all bits in the bitmap */
+void drbd_bm_clear_all(struct drbd_device *device)
+{
+	struct drbd_bitmap *bitmap = device->bitmap;
+	unsigned int bitmap_index;
 
-	spin_unlock_irqrestore(&b->bm_lock, flags);
-	return i;
+	for (bitmap_index = 0; bitmap_index < bitmap->bm_max_peers; bitmap_index++)
+		__bm_many_bits_op(device, bitmap_index, 0, -1UL, BM_OP_CLEAR);
 }
 
-/* returns number of bits set in the range [s, e] */
-int drbd_bm_count_bits(struct drbd_device *device, const unsigned long s, const unsigned long e)
+unsigned int drbd_bm_clear_bits(struct drbd_device *device, unsigned int bitmap_index,
+				unsigned long start, unsigned long end)
 {
-	unsigned long flags;
-	struct drbd_bitmap *b = device->bitmap;
-	unsigned long *p_addr = NULL;
-	unsigned long bitnr;
-	unsigned int page_nr = -1U;
-	int c = 0;
-
-	/* If this is called without a bitmap, that is a bug.  But just to be
-	 * robust in case we screwed up elsewhere, in that case pretend there
-	 * was one dirty bit in the requested area, so we won't try to do a
-	 * local read there (no bitmap probably implies no disk) */
-	if (!expect(device, b))
-		return 1;
-	if (!expect(device, b->bm_pages))
-		return 1;
-
-	spin_lock_irqsave(&b->bm_lock, flags);
-	if (BM_DONT_TEST & b->bm_flags)
-		bm_print_lock_info(device);
-	for (bitnr = s; bitnr <= e; bitnr++) {
-		unsigned int idx = bm_bit_to_page_idx(b, bitnr);
-		if (page_nr != idx) {
-			page_nr = idx;
-			if (p_addr)
-				bm_unmap(p_addr);
-			p_addr = bm_map_pidx(b, idx);
-		}
-		if (expect(device, bitnr < b->bm_bits))
-			c += (0 != test_bit_le(bitnr - (page_nr << (PAGE_SHIFT+3)), p_addr));
-		else
-			drbd_err(device, "bitnr=%lu bm_bits=%lu\n", bitnr, b->bm_bits);
-	}
-	if (p_addr)
-		bm_unmap(p_addr);
-	spin_unlock_irqrestore(&b->bm_lock, flags);
-	return c;
+	return bm_op(device, bitmap_index, start, end, BM_OP_CLEAR, NULL);
 }
 
 
-/* inherently racy...
- * return value may be already out-of-date when this function returns.
- * but the general usage is that this is only use during a cstate when bits are
- * only cleared, not set, and typically only care for the case when the return
- * value is zero, or we already "locked" this "bitmap extent" by other means.
- *
- * enr is bm-extent number, since we chose to name one sector (512 bytes)
- * worth of the bitmap a "bitmap extent".
- *
- * TODO
- * I think since we use it like a reference count, we should use the real
- * reference count of some bitmap extent element from some lru instead...
- *
- */
-int drbd_bm_e_weight(struct drbd_device *device, unsigned long enr)
-{
-	struct drbd_bitmap *b = device->bitmap;
-	int count, s, e;
-	unsigned long flags;
-	unsigned long *p_addr, *bm;
+/* returns number of bits set in the range [s, e] */
+int drbd_bm_count_bits(struct drbd_device *device, unsigned int bitmap_index, unsigned long s, unsigned long e)
+{
+	return bm_op(device, bitmap_index, s, e, BM_OP_COUNT, NULL);
+}
+
+void drbd_bm_copy_slot(struct drbd_device *device, unsigned int from_index, unsigned int to_index)
+{
+	struct drbd_bitmap *bitmap = device->bitmap;
+	unsigned long word_nr, from_word_nr, to_word_nr, words32_total;
+	unsigned int from_page_nr, to_page_nr, current_page_nr;
+	u32 data_word, *addr;
+
+	words32_total = bitmap->bm_words * sizeof(unsigned long) / sizeof(u32);
+	spin_lock_irq(&bitmap->bm_all_slots_lock);
+	spin_lock(&bitmap->bm_lock);
+
+	bitmap->bm_set[to_index] = 0;
+	current_page_nr = 0;
+	addr = bm_map(bitmap, current_page_nr);
+	for (word_nr = 0; word_nr < words32_total; word_nr += bitmap->bm_max_peers) {
+		from_word_nr = word_nr + from_index;
+		from_page_nr = word32_to_page(from_word_nr);
+		to_word_nr = word_nr + to_index;
+		to_page_nr = word32_to_page(to_word_nr);
+
+		if (current_page_nr != from_page_nr) {
+			bm_unmap(bitmap, addr);
+			spin_unlock(&bitmap->bm_lock);
+			spin_unlock_irq(&bitmap->bm_all_slots_lock);
+			if (need_resched())
+				cond_resched();
+			spin_lock_irq(&bitmap->bm_all_slots_lock);
+			spin_lock(&bitmap->bm_lock);
+			current_page_nr = from_page_nr;
+			addr = bm_map(bitmap, current_page_nr);
+		}
+		data_word = addr[word32_in_page(from_word_nr)];
 
-	if (!expect(device, b))
-		return 0;
-	if (!expect(device, b->bm_pages))
-		return 0;
+		if (current_page_nr != to_page_nr) {
+			bm_unmap(bitmap, addr);
+			current_page_nr = to_page_nr;
+			addr = bm_map(bitmap, current_page_nr);
+		}
 
-	spin_lock_irqsave(&b->bm_lock, flags);
-	if (BM_DONT_TEST & b->bm_flags)
-		bm_print_lock_info(device);
-
-	s = S2W(enr);
-	e = min((size_t)S2W(enr+1), b->bm_words);
-	count = 0;
-	if (s < b->bm_words) {
-		int n = e-s;
-		p_addr = bm_map_pidx(b, bm_word_to_page_idx(b, s));
-		bm = p_addr + MLPP(s);
-		count += bitmap_weight(bm, n * BITS_PER_LONG);
-		bm_unmap(p_addr);
-	} else {
-		drbd_err(device, "start offset (%d) too large in drbd_bm_e_weight\n", s);
+		if (addr[word32_in_page(to_word_nr)] != data_word)
+			bm_set_page_need_writeout(bitmap, current_page_nr);
+		addr[word32_in_page(to_word_nr)] = data_word;
+		bitmap->bm_set[to_index] += hweight32(data_word);
 	}
-	spin_unlock_irqrestore(&b->bm_lock, flags);
-	return count;
+	bm_unmap(bitmap, addr);
+
+	spin_unlock(&bitmap->bm_lock);
+	spin_unlock_irq(&bitmap->bm_all_slots_lock);
 }
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 15/20] drbd: rework request processing for DRBD 9 multi-peer IO
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
                   ` (13 preceding siblings ...)
  2026-03-27 22:38 ` [PATCH 14/20] drbd: rework activity log and bitmap for multi-peer replication Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 16/20] drbd: rework module core for DRBD 9 transport and multi-peer Christoph Böhmwalder
                   ` (4 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder, Joel Colledge

Restructure the request state model to support simultaneous replication
to multiple peers.
Split the single request state word into a local and per-peer-node
state, with a per-request spinlock protecting concurrent updates from
independent peer connections.
Move the transfer log from per-connection to per-resource scope, and
replace coarse request-path locking with fine-grained locks for the
interval tree, completion lists, and transfer log.
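
A minimal sketch of the resulting update pattern (field, flag, and lock
names as introduced by this patch; simplified -- mod_rq_state() below
takes rq_lock with local IRQs already disabled):

	spin_lock(&req->rq_lock);
	req->local_rq_state |= RQ_LOCAL_PENDING;           /* this node's disk I/O */
	req->net_rq_state[peer_node_id] |= RQ_NET_QUEUED;  /* per-peer network state */
	spin_unlock(&req->rq_lock);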

Track request lifetime with a three-level reference counting scheme
that separates upper-layer completion, bitmap/activity-log cleanup,
and peer-ack processing.
Destruction defers memory reclaim via call_rcu() to allow lock-free
transfer log traversal.
Replace blocking write-conflict waits with an asynchronous conflict
resolution system that queues overlapping requests by type onto
per-device work lists for deferred submission.
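
Condensed from drbd_req_new() and drbd_put_ref_tl_walk() below, the
reference cascade looks like this (each level holds one reference on
the next while it is nonzero):

	atomic_set(&req->completion_ref, 1); /* upper-layer (master bio) completion */
	refcount_set(&req->done_ref, 1);     /* bitmap/activity-log cleanup */
	refcount_set(&req->oos_send_ref, 1); /* peer-ack/out-of-sync sending */
	kref_init(&req->kref);               /* final put -> call_rcu() -> mempool_free() */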

A new peer acknowledgment subsystem batches cross-node write
confirmations using dagtag-based sequencing, ensuring all peers have
acknowledged a write window before its requests are reclaimed.
The write path independently decides, for each peer, between
replicating the write and sending an out-of-sync notification, while
the read path gains peer selection with read balancing.
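
The batching cut points, condensed from drbd_req_oos_sent() below: a
pending peer ack is flushed once the ack pattern changes, the request
drops the last reference on an activity-log extent while other requests
wait for AL slots, or the dagtag window fills up:

	if (peer_ack_differs(req, peer_ack_req) ||
	    (al_extent_last && atomic_read(&device->ap_actlog_cnt)) ||
	    peer_ack_window_full(req))
		drbd_queue_peer_ack(resource, peer_ack_req);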

Co-developed-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Co-developed-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Co-developed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Co-developed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/drbd_req.c | 2990 +++++++++++++++++++++++----------
 1 file changed, 2143 insertions(+), 847 deletions(-)
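
A note on the caching-pointer pattern used throughout this patch: each
connection caches the oldest transfer log request fulfilling some flag
condition, so the sender and ack paths need not rescan the whole log.
Minimal usage sketch (calls as they appear below; the flags shown are
the condition for the req_ack_pending pointer):

	/* request newly fulfils the condition (sent, ack still pending) */
	set_cache_ptr_if_null(connection, &connection->req_ack_pending, req);

	/* request stops fulfilling it: advance to the next matching one */
	advance_cache_ptr(connection, &connection->req_ack_pending,
			  req, RQ_NET_SENT | RQ_NET_PENDING, 0);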

diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c
index 70f75ef07945..8652824b1d2e 100644
--- a/drivers/block/drbd/drbd_req.c
+++ b/drivers/block/drbd/drbd_req.c
@@ -18,7 +18,6 @@
 #include "drbd_int.h"
 #include "drbd_req.h"
 
-
 static bool drbd_may_do_local_read(struct drbd_device *device, sector_t sector, int size);
 
 static struct drbd_request *drbd_req_new(struct drbd_device *device, struct bio *bio_src)
@@ -28,160 +27,554 @@ static struct drbd_request *drbd_req_new(struct drbd_device *device, struct bio
 	req = mempool_alloc(&drbd_request_mempool, GFP_NOIO);
 	if (!req)
 		return NULL;
+
 	memset(req, 0, sizeof(*req));
 
-	req->rq_state = (bio_data_dir(bio_src) == WRITE ? RQ_WRITE : 0)
-		      | (bio_op(bio_src) == REQ_OP_WRITE_ZEROES ? RQ_ZEROES : 0)
-		      | (bio_op(bio_src) == REQ_OP_DISCARD ? RQ_UNMAP : 0);
+	kref_get(&device->kref);
+
 	req->device = device;
 	req->master_bio = bio_src;
 	req->epoch = 0;
 
 	drbd_clear_interval(&req->i);
-	req->i.sector     = bio_src->bi_iter.bi_sector;
-	req->i.size      = bio_src->bi_iter.bi_size;
-	req->i.local = true;
-	req->i.waiting = false;
+	req->i.sector = bio_src->bi_iter.bi_sector;
+	req->i.size = bio_src->bi_iter.bi_size;
+	req->i.type = bio_data_dir(bio_src) == WRITE ? INTERVAL_LOCAL_WRITE : INTERVAL_LOCAL_READ;
 
 	INIT_LIST_HEAD(&req->tl_requests);
-	INIT_LIST_HEAD(&req->w.list);
+	INIT_LIST_HEAD(&req->list);
 	INIT_LIST_HEAD(&req->req_pending_master_completion);
 	INIT_LIST_HEAD(&req->req_pending_local);
 
 	/* one reference to be put by __drbd_make_request */
 	atomic_set(&req->completion_ref, 1);
-	/* one kref as long as completion_ref > 0 */
+	/* one reference as long as completion_ref > 0 */
+	refcount_set(&req->done_ref, 1);
+	/* one reference as long as done_ref > 0 */
+	refcount_set(&req->oos_send_ref, 1);
+	/* one kref as long as oos_send_ref > 0 */
 	kref_init(&req->kref);
+	spin_lock_init(&req->rq_lock);
+
+	req->local_rq_state = (bio_data_dir(bio_src) == WRITE ? RQ_WRITE : 0)
+			      | (bio_op(bio_src) == REQ_OP_WRITE_ZEROES ? RQ_ZEROES : 0)
+			      | (bio_op(bio_src) == REQ_OP_DISCARD ? RQ_UNMAP : 0);
+
 	return req;
 }
 
+void drbd_reclaim_req(struct rcu_head *rp)
+{
+	struct drbd_request *req = container_of(rp, struct drbd_request, rcu);
+
+	kref_put(&req->device->kref, drbd_destroy_device);
+
+	mempool_free(req, &drbd_request_mempool);
+}
+
+static u64 peer_ack_mask(struct drbd_request *req)
+{
+	struct drbd_resource *resource = req->device->resource;
+	struct drbd_connection *connection;
+	u64 mask = 0;
+
+	spin_lock_irq(&req->rq_lock);
+	if (req->local_rq_state & RQ_LOCAL_OK)
+		mask |= NODE_MASK(resource->res_opts.node_id);
+
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		int node_id = connection->peer_node_id;
+
+		if (req->net_rq_state[node_id] & RQ_NET_OK)
+			mask |= NODE_MASK(node_id);
+	}
+	rcu_read_unlock();
+	spin_unlock_irq(&req->rq_lock);
+
+	return mask;
+}
+
+static void queue_peer_ack_send(struct drbd_resource *resource,
+		struct drbd_request *req, struct drbd_peer_ack *peer_ack)
+{
+	struct drbd_connection *connection;
+
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		unsigned int node_id = connection->peer_node_id;
+		if (connection->agreed_pro_version < 110 ||
+				connection->cstate[NOW] != C_CONNECTED) {
+			connection->last_peer_ack_dagtag_seen = peer_ack->dagtag_sector;
+			continue;
+		}
+
+		if (req->net_rq_state[node_id] & RQ_NET_SENT)
+			peer_ack->pending_mask |= NODE_MASK(node_id);
+
+		peer_ack->queued_mask |= NODE_MASK(node_id);
+		queue_work(connection->ack_sender, &connection->peer_ack_work);
+	}
+	rcu_read_unlock();
+}
+
+void drbd_destroy_peer_ack_if_done(struct drbd_peer_ack *peer_ack)
+{
+	struct drbd_resource *resource = peer_ack->resource;
+
+	lockdep_assert_held(&resource->peer_ack_lock);
+
+	if (peer_ack->queued_mask)
+		return;
+
+	list_del(&peer_ack->list);
+	kfree(peer_ack);
+}
+
+int w_queue_peer_ack(struct drbd_work *w, int cancel)
+{
+	struct drbd_resource *resource =
+		container_of(w, struct drbd_resource, peer_ack_work);
+	LIST_HEAD(work_list);
+	struct drbd_request *req, *tmp;
+
+	spin_lock_irq(&resource->peer_ack_lock);
+	list_splice_init(&resource->peer_ack_req_list, &work_list);
+	spin_unlock_irq(&resource->peer_ack_lock);
+
+	list_for_each_entry_safe(req, tmp, &work_list, list) {
+		struct drbd_peer_ack *peer_ack =
+			kzalloc_obj(struct drbd_peer_ack);
+
+		peer_ack->resource = resource;
+		INIT_LIST_HEAD(&peer_ack->list);
+		peer_ack->mask = peer_ack_mask(req);
+		peer_ack->dagtag_sector = req->dagtag_sector;
+
+		spin_lock_irq(&resource->peer_ack_lock);
+		list_add_tail(&peer_ack->list, &resource->peer_ack_list);
+		queue_peer_ack_send(resource, req, peer_ack);
+		drbd_destroy_peer_ack_if_done(peer_ack);
+		spin_unlock_irq(&resource->peer_ack_lock);
+
+		kref_put(&req->kref, drbd_req_destroy);
+	}
+	return 0;
+}
+
+void drbd_queue_peer_ack(struct drbd_resource *resource, struct drbd_request *req)
+{
+	lockdep_assert_held(&resource->peer_ack_lock);
+
+	list_add_tail(&req->list, &resource->peer_ack_req_list);
+	drbd_queue_work_if_unqueued(&resource->work, &resource->peer_ack_work);
+}
+
+static bool peer_ack_differs(struct drbd_request *req1, struct drbd_request *req2)
+{
+	unsigned int max_node_id = req1->device->resource->max_node_id;
+	unsigned int node_id;
+
+	for (node_id = 0; node_id <= max_node_id; node_id++)
+		if ((req1->net_rq_state[node_id] & RQ_NET_OK) !=
+		    (req2->net_rq_state[node_id] & RQ_NET_OK))
+			return true;
+	return false;
+}
+
+static bool peer_ack_window_full(struct drbd_request *req)
+{
+	struct drbd_resource *resource = req->device->resource;
+	u32 peer_ack_window = resource->res_opts.peer_ack_window;
+	u64 last_dagtag = resource->last_peer_acked_dagtag + peer_ack_window;
+
+	return dagtag_newer_eq(req->dagtag_sector, last_dagtag);
+}
+
 static void drbd_remove_request_interval(struct rb_root *root,
 					 struct drbd_request *req)
 {
 	struct drbd_device *device = req->device;
-	struct drbd_interval *i = &req->i;
-
-	drbd_remove_interval(root, i);
+	unsigned long flags;
 
-	/* Wake up any processes waiting for this request to complete.  */
-	if (i->waiting)
-		wake_up(&device->misc_wait);
+	spin_lock_irqsave(&device->interval_lock, flags);
+	drbd_remove_interval(root, &req->i);
+	spin_unlock_irqrestore(&device->interval_lock, flags);
 }
 
 void drbd_req_destroy(struct kref *kref)
 {
 	struct drbd_request *req = container_of(kref, struct drbd_request, kref);
+
+	call_rcu(&req->rcu, drbd_reclaim_req);
+}
+
+static void drbd_req_done(struct drbd_request *req)
+{
 	struct drbd_device *device = req->device;
-	const unsigned s = req->rq_state;
+	struct drbd_resource *resource = device->resource;
+	struct drbd_peer_device *peer_device;
+	unsigned int s = req->local_rq_state;
+	unsigned long modified_mask = 0;
+
+	lockdep_assert_held(&resource->state_rwlock);
+	lockdep_assert_irqs_disabled();
+
+#ifdef CONFIG_DRBD_TIMING_STATS
+	if (s & RQ_WRITE && req->i.size != 0) {
+		spin_lock(&device->timing_lock); /* local irq already disabled */
+		device->reqs++;
+		ktime_aggregate(device, req, in_actlog_kt);
+		ktime_aggregate(device, req, pre_submit_kt);
+		for_each_peer_device(peer_device, device) {
+			int node_id = peer_device->node_id;
+			unsigned ns = req->net_rq_state[node_id];
+			if (!(ns & RQ_NET_MASK))
+				continue;
+			ktime_aggregate_pd(peer_device, node_id, req, pre_send_kt);
+			ktime_aggregate_pd(peer_device, node_id, req, acked_kt);
+			ktime_aggregate_pd(peer_device, node_id, req, net_done_kt);
+		}
+		spin_unlock(&device->timing_lock);
+	}
+#endif
+
+	/* paranoia */
+	for_each_peer_device(peer_device, device) {
+		unsigned ns = req->net_rq_state[peer_device->node_id];
+		if (!(ns & RQ_NET_MASK))
+			continue;
+		if (ns & RQ_NET_DONE)
+			continue;
+
+		drbd_err(device,
+			"%s: Logic BUG rq_state: (0:%x, %d:%x), completion_ref = %d\n",
+			__func__, s, peer_device->node_id, ns, atomic_read(&req->completion_ref));
+		return;
+	}
 
+	/* more paranoia */
 	if ((req->master_bio && !(s & RQ_POSTPONED)) ||
-		atomic_read(&req->completion_ref) ||
-		(s & RQ_LOCAL_PENDING) ||
-		((s & RQ_NET_MASK) && !(s & RQ_NET_DONE))) {
-		drbd_err(device, "drbd_req_destroy: Logic BUG rq_state = 0x%x, completion_ref = %d\n",
-				s, atomic_read(&req->completion_ref));
+		atomic_read(&req->completion_ref) || (s & RQ_LOCAL_PENDING)) {
+		drbd_err(device, "%s: Logic BUG master_bio:%d rq_state: %x, completion_ref = %d\n",
+				__func__, !!req->master_bio, s, atomic_read(&req->completion_ref));
 		return;
 	}
 
-	/* If called from mod_rq_state (expected normal case) or
-	 * drbd_send_and_submit (the less likely normal path), this holds the
-	 * req_lock, and req->tl_requests will typicaly be on ->transfer_log,
-	 * though it may be still empty (never added to the transfer log).
-	 *
-	 * If called from do_retry(), we do NOT hold the req_lock, but we are
-	 * still allowed to unconditionally list_del(&req->tl_requests),
-	 * because it will be on a local on-stack list only. */
-	list_del_init(&req->tl_requests);
-
 	/* finally remove the request from the conflict detection
 	 * respective block_id verification interval tree. */
-	if (!drbd_interval_empty(&req->i)) {
-		struct rb_root *root;
+	if (s & RQ_WRITE && !drbd_interval_empty(&req->i))
+		drbd_remove_request_interval(&device->requests, req);
+
+	/* There is a special case:
+	 * we may notice late that IO was suspended,
+	 * and postpone, or schedule for retry, a write,
+	 * before it even was submitted or sent.
+	 * In that case we do not want to touch the bitmap at all.
+	 */
+	if ((s & RQ_WRITE) && (s & (RQ_POSTPONED|RQ_LOCAL_MASK)) != RQ_POSTPONED &&
+			req->i.size && get_ldev_if_state(device, D_DETACHING)) {
+		struct drbd_peer_md *peer_md = device->ldev->md.peers;
+		unsigned long bits = -1, mask = -1;
+		int node_id, max_node_id = device->resource->max_node_id;
+
+		for (node_id = 0; node_id <= max_node_id; node_id++) {
+			unsigned int net_rq_state;
+
+			net_rq_state = req->net_rq_state[node_id];
+			if (net_rq_state & RQ_NET_OK) {
+				int bitmap_index = peer_md[node_id].bitmap_index;
+
+				if (bitmap_index == -1)
+					continue;
+
+				if (net_rq_state & RQ_NET_SIS)
+					clear_bit(bitmap_index, &bits);
+				else
+					clear_bit(bitmap_index, &mask);
+			}
+		}
+		if (device->bitmap)
+			modified_mask =
+				drbd_set_sync(device, req->i.sector, req->i.size, bits, mask);
+		put_ldev(device);
+	}
 
-		if (s & RQ_WRITE)
-			root = &device->write_requests;
-		else
-			root = &device->read_requests;
-		drbd_remove_request_interval(root, req);
-	} else if (s & (RQ_NET_MASK & ~RQ_NET_DONE) && req->i.size != 0)
-		drbd_err(device, "drbd_req_destroy: Logic BUG: interval empty, but: rq_state=0x%x, sect=%llu, size=%u\n",
-			s, (unsigned long long)req->i.sector, req->i.size);
-
-	/* if it was a write, we may have to set the corresponding
-	 * bit(s) out-of-sync first. If it had a local part, we need to
-	 * release the reference to the activity log. */
 	if (s & RQ_WRITE) {
-		/* Set out-of-sync unless both OK flags are set
-		 * (local only or remote failed).
-		 * Other places where we set out-of-sync:
-		 * READ with local io-error */
-
-		/* There is a special case:
-		 * we may notice late that IO was suspended,
-		 * and postpone, or schedule for retry, a write,
-		 * before it even was submitted or sent.
-		 * In that case we do not want to touch the bitmap at all.
-		 */
-		struct drbd_peer_device *peer_device = first_peer_device(device);
-		if ((s & (RQ_POSTPONED|RQ_LOCAL_MASK|RQ_NET_MASK)) != RQ_POSTPONED) {
-			if (!(s & RQ_NET_OK) || !(s & RQ_LOCAL_OK))
-				drbd_set_out_of_sync(peer_device, req->i.sector, req->i.size);
+		for_each_peer_device(peer_device, device) {
+			if (!(req->net_rq_state[peer_device->node_id] & RQ_NET_PENDING_OOS))
+				continue;
+
+			if (s & RQ_POSTPONED) {
+				drbd_err(device, "%s: Logic BUG RQ_NET_PENDING_OOS|RQ_POSTPONED\n",
+						__func__);
+				continue;
+			}
 
-			if ((s & RQ_NET_OK) && (s & RQ_LOCAL_OK) && (s & RQ_NET_SIS))
-				drbd_set_in_sync(peer_device, req->i.sector, req->i.size);
+			/*
+			 * As an optimization, we only send out-of-sync if we
+			 * set some bit for this peer. If we are not
+			 * replicating to this peer and the same block(s) are
+			 * overwritten several times, the peer only needs to be
+			 * informed of the first change.
+			 */
+			if (peer_device->bitmap_index != -1 &&
+					test_bit(peer_device->bitmap_index, &modified_mask))
+				_req_mod(req, READY_FOR_NET, peer_device);
+			else
+				_req_mod(req, SKIP_OOS, peer_device);
+
+			wake_up(&peer_device->connection->sender_work.q_wait);
 		}
+	}
+
+	/* one might be tempted to move the drbd_al_complete_io
+	 * to the local io completion callback drbd_request_endio.
+	 * but, if this was a mirror write, we may only
+	 * drbd_al_complete_io after this is RQ_NET_DONE,
+	 * otherwise the extent could be dropped from the al
+	 * before it has actually been written on the peer.
+	 * if we crash before our peer knows about the request,
+	 * but after the extent has been dropped from the al,
+	 * we would forget to resync the corresponding extent.
+	 */
+	if (s & RQ_IN_ACT_LOG) {
+		if (get_ldev_if_state(device, D_DETACHING)) {
+			if (drbd_al_complete_io(device, &req->i))
+				set_bit(INTERVAL_AL_EXTENT_LAST, &req->i.flags);
+			put_ldev(device);
+		} else if (drbd_device_ratelimit(device, BACKEND)) {
+			drbd_warn(device, "Should have called drbd_al_complete_io(, %llu, %u), but my Disk seems to have failed :(\n",
+					(unsigned long long) req->i.sector, req->i.size);
 
-		/* one might be tempted to move the drbd_al_complete_io
-		 * to the local io completion callback drbd_request_endio.
-		 * but, if this was a mirror write, we may only
-		 * drbd_al_complete_io after this is RQ_NET_DONE,
-		 * otherwise the extent could be dropped from the al
-		 * before it has actually been written on the peer.
-		 * if we crash before our peer knows about the request,
-		 * but after the extent has been dropped from the al,
-		 * we would forget to resync the corresponding extent.
-		 */
-		if (s & RQ_IN_ACT_LOG) {
-			if (get_ldev_if_state(device, D_FAILED)) {
-				drbd_al_complete_io(device, &req->i);
-				put_ldev(device);
-			} else if (drbd_ratelimit()) {
-				drbd_warn(device, "Should have called drbd_al_complete_io(, %llu, %u), "
-					 "but my Disk seems to have failed :(\n",
-					 (unsigned long long) req->i.sector, req->i.size);
-			}
 		}
 	}
+}
 
-	mempool_free(req, &drbd_request_mempool);
+static void drbd_req_oos_sent(struct drbd_request *req)
+{
+	struct drbd_device *device = req->device;
+	struct drbd_resource *resource = device->resource;
+	unsigned int s = req->local_rq_state;
+
+	lockdep_assert_held(&resource->state_rwlock);
+	lockdep_assert_irqs_disabled();
+
+	if (s & RQ_WRITE && req->i.size) {
+		struct drbd_resource *resource = device->resource;
+		struct drbd_request *peer_ack_req;
+
+		spin_lock(&resource->peer_ack_lock); /* local irq already disabled */
+		peer_ack_req = resource->peer_ack_req;
+		if (peer_ack_req) {
+			bool al_extent_last = test_bit(INTERVAL_AL_EXTENT_LAST, &req->i.flags);
+
+			if (peer_ack_differs(req, peer_ack_req) ||
+			    (al_extent_last && atomic_read(&device->ap_actlog_cnt)) ||
+			    peer_ack_window_full(req)) {
+				drbd_queue_peer_ack(resource, peer_ack_req);
+				peer_ack_req = NULL;
+			} else
+				kref_put(&peer_ack_req->kref, drbd_req_destroy);
+		}
+		resource->peer_ack_req = req;
+
+		if (!peer_ack_req)
+			resource->last_peer_acked_dagtag = req->dagtag_sector;
+		spin_unlock(&resource->peer_ack_lock);
+
+		mod_timer(&resource->peer_ack_timer,
+			  jiffies + resource->res_opts.peer_ack_delay * HZ / 1000);
+	} else
+		kref_put(&req->kref, drbd_req_destroy);
 }
 
-static void wake_all_senders(struct drbd_connection *connection)
+static void wake_all_senders(struct drbd_resource *resource)
 {
-	wake_up(&connection->sender_work.q_wait);
+	struct drbd_connection *connection;
+	/* We need to make sure any update is visible before we wake up the
+	 * threads that may check the values in their wait_event() condition.
+	 * Do we need smp_mb here? Or rather switch to atomic_t? */
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource)
+		wake_up(&connection->sender_work.q_wait);
+	rcu_read_unlock();
 }
 
-/* must hold resource->req_lock */
-void start_new_tl_epoch(struct drbd_connection *connection)
+bool start_new_tl_epoch(struct drbd_resource *resource)
 {
+	unsigned long flags;
+	bool new_epoch_started;
+
+	spin_lock_irqsave(&resource->current_tle_lock, flags);
 	/* no point closing an epoch, if it is empty, anyways. */
-	if (connection->current_tle_writes == 0)
-		return;
+	if (resource->current_tle_writes == 0) {
+		new_epoch_started = false;
+	} else {
+		resource->current_tle_writes = 0;
+		atomic_inc(&resource->current_tle_nr);
+		wake_all_senders(resource);
+		new_epoch_started = true;
+	}
+	spin_unlock_irqrestore(&resource->current_tle_lock, flags);
 
-	connection->current_tle_writes = 0;
-	atomic_inc(&connection->current_tle_nr);
-	wake_all_senders(connection);
+	return new_epoch_started;
 }
 
 void complete_master_bio(struct drbd_device *device,
 		struct bio_and_error *m)
 {
+	int rw = bio_data_dir(m->bio);
 	if (unlikely(m->error))
 		m->bio->bi_status = errno_to_blk_status(m->error);
 	bio_endio(m->bio);
-	dec_ap_bio(device);
+	dec_ap_bio(device, rw);
+}
+
+static void queue_conflicting_resync_write(
+		struct conflict_worker *submit_conflict, struct drbd_interval *i)
+{
+	struct drbd_peer_request *peer_req = container_of(i, struct drbd_peer_request, i);
+
+	list_add_tail(&peer_req->w.list, &submit_conflict->resync_writes);
+}
+
+static void queue_conflicting_resync_read(
+		struct conflict_worker *submit_conflict, struct drbd_interval *i)
+{
+	struct drbd_peer_request *peer_req = container_of(i, struct drbd_peer_request, i);
+
+	list_add_tail(&peer_req->w.list, &submit_conflict->resync_reads);
+}
+
+static void queue_conflicting_write(
+		struct conflict_worker *submit_conflict, struct drbd_interval *i)
+{
+	struct drbd_request *req = container_of(i, struct drbd_request, i);
+
+	list_add_tail(&req->list, &submit_conflict->writes);
+}
+
+static void queue_conflicting_peer_write(
+		struct conflict_worker *submit_conflict, struct drbd_interval *i)
+{
+	struct drbd_peer_request *peer_req = container_of(i, struct drbd_peer_request, i);
+
+	list_add_tail(&peer_req->w.list, &submit_conflict->peer_writes);
+}
+
+/* Queue any conflicting requests in this interval to be submitted. */
+void drbd_release_conflicts(struct drbd_device *device, struct drbd_interval *release_interval)
+{
+	struct conflict_worker *submit_conflict = &device->submit_conflict;
+	struct drbd_interval *i;
+	bool any_queued = false;
+
+	lockdep_assert_held(&device->interval_lock);
+
+	drbd_for_each_overlap(i, &device->requests, release_interval->sector, release_interval->size) {
+		if (test_bit(INTERVAL_SUBMITTED, &i->flags))
+			continue;
+
+		/* If we are waiting for a reply from the peer, then there is
+		 * no need to process the conflict. */
+		if (test_bit(INTERVAL_READY_TO_SEND, &i->flags) &&
+				!test_bit(INTERVAL_RECEIVED, &i->flags))
+			continue;
+
+		dynamic_drbd_dbg(device,
+				"%s %s request at %llus+%u after conflict with %llus+%u\n",
+				test_bit(INTERVAL_SUBMIT_CONFLICT_QUEUED, &i->flags) ? "Already queued" : "Queue",
+				drbd_interval_type_str(i),
+				(unsigned long long) i->sector, i->size,
+				(unsigned long long) release_interval->sector, release_interval->size);
+
+		if (test_bit(INTERVAL_SUBMIT_CONFLICT_QUEUED, &i->flags))
+			continue;
+
+		/* Verify requests never wait for conflicting intervals. If
+		 * there are no conflicts, they are marked directly as
+		 * submitted. Hence we should not see any here. */
+		if (unlikely(drbd_interval_is_verify(i))) {
+			if (drbd_ratelimit())
+				drbd_err(device, "Found verify request that was not yet submitted\n");
+			continue;
+		}
+
+		set_bit(INTERVAL_SUBMIT_CONFLICT_QUEUED, &i->flags);
+
+		spin_lock(&submit_conflict->lock);
+		/* Queue the request regardless of whether other conflicts
+		 * remain. The conflict submitter will only actually submit the
+		 * request if there are no conflicts. */
+		switch (i->type) {
+		case INTERVAL_LOCAL_WRITE:
+			queue_conflicting_write(submit_conflict, i);
+			break;
+		case INTERVAL_PEER_WRITE:
+			queue_conflicting_peer_write(submit_conflict, i);
+			break;
+		case INTERVAL_RESYNC_WRITE:
+			queue_conflicting_resync_write(submit_conflict, i);
+			break;
+		case INTERVAL_RESYNC_READ:
+			queue_conflicting_resync_read(submit_conflict, i);
+			break;
+		default:
+			BUG();
+		}
+		spin_unlock(&submit_conflict->lock);
+
+		any_queued = true;
+	}
+
+	if (any_queued)
+		queue_work(submit_conflict->wq, &submit_conflict->worker);
 }
 
+void drbd_put_ref_tl_walk(struct drbd_request *req, int done_put, int oos_send_put)
+{
+	struct drbd_resource *resource = req->device->resource;
+
+	lockdep_assert_held(&resource->state_rwlock);
+
+	while (req) {
+		struct drbd_request *next_write;
+		bool done = false;
+		bool oos_sent = false;
+
+		if (done_put && refcount_sub_and_test(done_put, &req->done_ref)) {
+			done = true;
+			drbd_req_done(req);
+			oos_send_put++;
+		}
+
+		if (oos_send_put && refcount_sub_and_test(oos_send_put, &req->oos_send_ref))
+			oos_sent = true;
+
+		if (!done && !oos_sent)
+			break;
+
+		spin_lock(&resource->tl_update_lock); /* local irq already disabled */
+		next_write = req->next_write;
+		if (oos_sent) {
+			list_del_rcu(&req->tl_requests);
+			if (resource->tl_previous_write == req)
+				resource->tl_previous_write = NULL;
+		} else if (done) {
+			set_bit(INTERVAL_DONE, &req->i.flags);
+		}
+		spin_unlock(&resource->tl_update_lock);
+
+		if (oos_sent)
+			/* potentially destroy */
+			drbd_req_oos_sent(req);
+
+		req = next_write;
+		done_put = done ? 1 : 0;
+		oos_send_put = oos_sent ? 1 : 0;
+	}
+}
 
 /* Helper for __req_mod().
  * Set m->bio to the master bio, if it is fit to be completed,
@@ -192,30 +585,11 @@ void complete_master_bio(struct drbd_device *device,
 static
 void drbd_req_complete(struct drbd_request *req, struct bio_and_error *m)
 {
-	const unsigned s = req->rq_state;
+	const unsigned s = req->local_rq_state;
 	struct drbd_device *device = req->device;
-	int error, ok;
-
-	/* we must not complete the master bio, while it is
-	 *	still being processed by _drbd_send_zc_bio (drbd_send_dblock)
-	 *	not yet acknowledged by the peer
-	 *	not yet completed by the local io subsystem
-	 * these flags may get cleared in any order by
-	 *	the worker,
-	 *	the receiver,
-	 *	the bio_endio completion callbacks.
-	 */
-	if ((s & RQ_LOCAL_PENDING && !(s & RQ_LOCAL_ABORTED)) ||
-	    (s & RQ_NET_QUEUED) || (s & RQ_NET_PENDING) ||
-	    (s & RQ_COMPLETION_SUSP)) {
-		drbd_err(device, "drbd_req_complete: Logic BUG rq_state = 0x%x\n", s);
-		return;
-	}
-
-	if (!req->master_bio) {
-		drbd_err(device, "drbd_req_complete: Logic BUG, master_bio == NULL!\n");
-		return;
-	}
+	struct drbd_peer_device *peer_device;
+	unsigned long flags;
+	int error, ok = 0;
 
 	/*
 	 * figure out whether to report success or failure.
@@ -230,69 +604,136 @@ void drbd_req_complete(struct drbd_request *req, struct bio_and_error *m)
 	 * local completion error, if any, has been stored as ERR_PTR
 	 * in private_bio within drbd_request_endio.
 	 */
-	ok = (s & RQ_LOCAL_OK) || (s & RQ_NET_OK);
+	if (s & RQ_LOCAL_OK)
+		++ok;
 	error = PTR_ERR(req->private_bio);
 
+	for_each_peer_device(peer_device, device) {
+		unsigned ns = req->net_rq_state[peer_device->node_id];
+		/* any net ok or local ok is good enough to complete this bio as OK */
+		if (ns & RQ_NET_OK)
+			++ok;
+		/* paranoia */
+		/* we must not complete the master bio, while it is
+		 *	still being processed by _drbd_send_zc_bio (drbd_send_dblock),
+		 *	or still needed for the second drbd_csum_bio() there,
+		 *	not yet acknowledged by the peer
+		 *	not yet completed by the local io subsystem
+		 * these flags may get cleared in any order by
+		 *	the worker,
+		 *	the sender,
+		 *	the receiver,
+		 *	the bio_endio completion callbacks.
+		 */
+		if (!(ns & RQ_NET_MASK))
+			continue;
+		if (ns & RQ_NET_PENDING_OOS)
+			continue;
+		if (!(ns & (RQ_NET_PENDING|RQ_NET_QUEUED)))
+			continue;
+
+		drbd_err(device,
+			"drbd_req_complete: Logic BUG rq_state: (0:%x, %d:%x), completion_ref = %d\n",
+			 s, peer_device->node_id, ns, atomic_read(&req->completion_ref));
+		return;
+	}
+
+	/* more paranoia */
+	if (atomic_read(&req->completion_ref) ||
+	    ((s & RQ_LOCAL_PENDING) && !(s & RQ_LOCAL_ABORTED))) {
+		drbd_err(device, "drbd_req_complete: Logic BUG rq_state: %x, completion_ref = %d\n",
+				s, atomic_read(&req->completion_ref));
+		return;
+	}
+
+	if (!req->master_bio) {
+		drbd_err(device, "drbd_req_complete: Logic BUG, master_bio == NULL!\n");
+		return;
+	}
+
 	/* Before we can signal completion to the upper layers,
 	 * we may need to close the current transfer log epoch.
-	 * We are within the request lock, so we can simply compare
-	 * the request epoch number with the current transfer log
-	 * epoch number.  If they match, increase the current_tle_nr,
-	 * and reset the transfer log epoch write_cnt.
+	 * We simply compare the request epoch number with the current
+	 * transfer log epoch number.
+	 * With very specific timing, this may cause unnecessary barriers
+	 * to be sent, but that is harmless.
+	 *
+	 * There is no need to close the transfer log epoch for empty flushes.
+	 * The completion of the previous requests had the required effect on
+	 * the peers already.
 	 */
-	if (op_is_write(bio_op(req->master_bio)) &&
-	    req->epoch == atomic_read(&first_peer_device(device)->connection->current_tle_nr))
-		start_new_tl_epoch(first_peer_device(device)->connection);
+	if (bio_data_dir(req->master_bio) == WRITE &&
+	    likely(req->i.size != 0) &&
+	    req->epoch == atomic_read(&device->resource->current_tle_nr))
+		start_new_tl_epoch(device->resource);
 
 	/* Update disk stats */
 	bio_end_io_acct(req->master_bio, req->start_jif);
 
-	/* If READ failed,
-	 * have it be pushed back to the retry work queue,
-	 * so it will re-enter __drbd_make_request(),
-	 * and be re-assigned to a suitable local or remote path,
-	 * or failed if we do not have access to good data anymore.
-	 *
-	 * Unless it was failed early by __drbd_make_request(),
-	 * because no path was available, in which case
-	 * it was not even added to the transfer_log.
-	 *
-	 * read-ahead may fail, and will not be retried.
-	 *
-	 * WRITE should have used all available paths already.
-	 */
-	if (!ok &&
-	    bio_op(req->master_bio) == REQ_OP_READ &&
-	    !(req->master_bio->bi_opf & REQ_RAHEAD) &&
-	    !list_empty(&req->tl_requests))
-		req->rq_state |= RQ_POSTPONED;
-
-	if (!(req->rq_state & RQ_POSTPONED)) {
-		m->error = ok ? 0 : (error ?: -EIO);
+	if (device->cached_err_io) {
+		ok = 0;
+		req->local_rq_state &= ~RQ_POSTPONED;
+	} else if (!ok &&
+		   bio_op(req->master_bio) == REQ_OP_READ &&
+		   !(req->master_bio->bi_opf & REQ_RAHEAD) &&
+		   !list_empty(&req->tl_requests)) {
+		/* If READ failed,
+		 * have it be pushed back to the retry work queue,
+		 * so it will re-enter __drbd_make_request(),
+		 * and be re-assigned to a suitable local or remote path,
+		 * or failed if we do not have access to good data anymore.
+		 *
+		 * Unless it was failed early by __drbd_make_request(),
+		 * because no path was available, in which case
+		 * it was not even added to the transfer_log.
+		 *
+		 * read-ahead may fail, and will not be retried.
+		 *
+		 * WRITE should have used all available paths already.
+		 */
+		req->local_rq_state |= RQ_POSTPONED;
+	}
+
+	if (!(req->local_rq_state & RQ_POSTPONED)) {
+		struct drbd_resource *resource = device->resource;
+		bool quorum =
+			resource->res_opts.on_no_quorum == ONQ_IO_ERROR ?
+			resource->cached_all_devices_have_quorum : true;
+
+		m->error = ok && quorum ? 0 : (error ?: -EIO);
 		m->bio = req->master_bio;
 		req->master_bio = NULL;
-		/* We leave it in the tree, to be able to verify later
-		 * write-acks in protocol != C during resync.
-		 * But we mark it as "complete", so it won't be counted as
-		 * conflict in a multi-primary setup. */
-		req->i.completed = true;
+
+		if (req->local_rq_state & RQ_WRITE) {
+			spin_lock_irqsave(&device->interval_lock, flags);
+			/* We leave it in the tree, to be able to verify later
+			 * write-acks in protocol != C during resync.
+			 * But we mark it as "complete", so it won't be counted as
+			 * conflict in a multi-primary setup.
+			 */
+			set_bit(INTERVAL_COMPLETED, &req->i.flags);
+			drbd_release_conflicts(device, &req->i);
+			spin_unlock_irqrestore(&device->interval_lock, flags);
+		}
 	}
 
-	if (req->i.waiting)
-		wake_up(&device->misc_wait);
+	if (!(req->local_rq_state & RQ_WRITE))
+		drbd_remove_request_interval(&device->read_requests, req);
 
 	/* Either we are about to complete to upper layers,
 	 * or we will restart this request.
 	 * In either case, the request object will be destroyed soon,
 	 * so better remove it from all lists. */
+	spin_lock_irqsave(&device->pending_completion_lock, flags);
 	list_del_init(&req->req_pending_master_completion);
+	spin_unlock_irqrestore(&device->pending_completion_lock, flags);
 }
 
-/* still holds resource->req_lock */
 static void drbd_req_put_completion_ref(struct drbd_request *req, struct bio_and_error *m, int put)
 {
-	struct drbd_device *device = req->device;
-	D_ASSERT(device, m || (req->rq_state & RQ_POSTPONED));
+	D_ASSERT(req->device, m || (req->local_rq_state & RQ_POSTPONED));
+
+	lockdep_assert_held(&req->device->resource->state_rwlock);
 
 	if (!put)
 		return;
@@ -304,229 +745,368 @@ static void drbd_req_put_completion_ref(struct drbd_request *req, struct bio_and
 
 	/* local completion may still come in later,
 	 * we need to keep the req object around. */
-	if (req->rq_state & RQ_LOCAL_ABORTED)
+	if (req->local_rq_state & RQ_LOCAL_ABORTED)
 		return;
 
-	if (req->rq_state & RQ_POSTPONED) {
+	if (req->local_rq_state & RQ_POSTPONED) {
 		/* don't destroy the req object just yet,
 		 * but queue it for retry */
 		drbd_restart_request(req);
 		return;
 	}
 
-	kref_put(&req->kref, drbd_req_destroy);
+	drbd_put_ref_tl_walk(req, 1, 0);
 }
 
-static void set_if_null_req_next(struct drbd_peer_device *peer_device, struct drbd_request *req)
+void drbd_set_pending_out_of_sync(struct drbd_peer_device *peer_device)
 {
-	struct drbd_connection *connection = peer_device ? peer_device->connection : NULL;
-	if (!connection)
-		return;
-	if (connection->req_next == NULL)
-		connection->req_next = req;
+	struct drbd_device *device = peer_device->device;
+	struct drbd_resource *resource = device->resource;
+	const int node_id = peer_device->node_id;
+	struct drbd_request *req;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(req, &resource->transfer_log, tl_requests) {
+		unsigned int local_rq_state, net_rq_state;
+
+		/*
+		 * This is similar to the bitmap modification performed in
+		 * drbd_req_done(), but simplified for this special case.
+		 */
+
+		spin_lock_irq(&req->rq_lock);
+		local_rq_state = req->local_rq_state;
+		net_rq_state = req->net_rq_state[node_id];
+		spin_unlock_irq(&req->rq_lock);
+
+		if (!(local_rq_state & RQ_WRITE))
+			continue;
+
+		if ((local_rq_state & (RQ_POSTPONED|RQ_LOCAL_MASK)) == RQ_POSTPONED)
+			continue;
+
+		if (!req->i.size)
+			continue;
+
+		if (net_rq_state & RQ_NET_OK)
+			continue;
+
+		drbd_set_out_of_sync(peer_device, req->i.sector, req->i.size);
+	}
+	rcu_read_unlock();
 }
 
-static void advance_conn_req_next(struct drbd_peer_device *peer_device, struct drbd_request *req)
+static void advance_conn_req_next(struct drbd_connection *connection, struct drbd_request *req)
 {
-	struct drbd_connection *connection = peer_device ? peer_device->connection : NULL;
-	struct drbd_request *iter = req;
-	if (!connection)
+	struct drbd_request *found_req = NULL;
+	/* Only the sender thread comes here; no other req_mod() caller context reaches this point. */
+	if (connection->todo.req_next != req)
 		return;
-	if (connection->req_next != req)
-		return;
-
-	req = NULL;
-	list_for_each_entry_continue(iter, &connection->transfer_log, tl_requests) {
-		const unsigned int s = iter->rq_state;
+	rcu_read_lock();
+	list_for_each_entry_continue_rcu(req, &connection->resource->transfer_log, tl_requests) {
+		const unsigned s = req->net_rq_state[connection->peer_node_id];
 
-		if (s & RQ_NET_QUEUED) {
-			req = iter;
+		if (likely(s & RQ_NET_QUEUED)) {
+			found_req = req;
 			break;
 		}
 	}
-	connection->req_next = req;
+	rcu_read_unlock();
+	connection->todo.req_next = found_req;
 }
 
-static void set_if_null_req_ack_pending(struct drbd_peer_device *peer_device, struct drbd_request *req)
+/**
+ * set_cache_ptr_if_null() - Set caching pointer to given request if not currently set.
+ * @connection: DRBD connection to operate on.
+ * @cache_ptr: Pointer to set.
+ * @req: Request to potentially set the pointer to.
+ *
+ * The caching pointer system is designed to track the oldest request in the
+ * transfer log fulfilling some condition. In particular, a combination of
+ * flags towards a given peer. This condition must guarantee that the request
+ * will not be destroyed.
+ *
+ * This system is implemented by set_cache_ptr_if_null() and
+ * advance_cache_ptr(). A request must be in the transfer log and fulfil the
+ * condition before set_cache_ptr_if_null() is called. If
+ * set_cache_ptr_if_null() is called before this request is in the transfer log
+ * or before it fulfils the condition, the pointer may be advanced past this
+ * request, or unset, which also has the effect of skipping the request.
+ *
+ * Once the condition is no longer fulfilled for a request, advance_cache_ptr()
+ * must be called. If the caching pointer currently points to this request,
+ * this will advance it to the next request fulfilling the condition.
+ *
+ * set_cache_ptr_if_null() may be called concurrently with advance_cache_ptr().
+ */
+static void set_cache_ptr_if_null(struct drbd_connection *connection,
+		struct drbd_request **cache_ptr, struct drbd_request *req)
 {
-	struct drbd_connection *connection = peer_device ? peer_device->connection : NULL;
-	if (!connection)
-		return;
-	if (connection->req_ack_pending == NULL)
-		connection->req_ack_pending = req;
+	spin_lock(&connection->advance_cache_ptr_lock); /* local IRQ already disabled */
+	if (*cache_ptr == NULL) {
+		smp_wmb(); /* make list_add_tail_rcu(req, transfer_log) visible before cache_ptr */
+		WRITE_ONCE(*cache_ptr, req);
+	}
+	spin_unlock(&connection->advance_cache_ptr_lock);
 }
 
-static void advance_conn_req_ack_pending(struct drbd_peer_device *peer_device, struct drbd_request *req)
+/* See set_cache_ptr_if_null(). */
+static void advance_cache_ptr(struct drbd_connection *connection,
+			      struct drbd_request __rcu **cache_ptr, struct drbd_request *req,
+			      unsigned int is_set, unsigned int is_clear)
 {
-	struct drbd_connection *connection = peer_device ? peer_device->connection : NULL;
-	struct drbd_request *iter = req;
-	if (!connection)
-		return;
-	if (connection->req_ack_pending != req)
-		return;
+	struct drbd_request *old_req;
+	struct drbd_request *found_req = NULL;
 
-	req = NULL;
-	list_for_each_entry_continue(iter, &connection->transfer_log, tl_requests) {
-		const unsigned int s = iter->rq_state;
+	/*
+	 * Prevent concurrent updates of the same caching pointer. Otherwise if
+	 * this function is called concurrently for a given caching pointer,
+	 * the call for the older request may advance the pointer to the newer
+	 * request, although the newer request has concurrently been modified
+	 * such that it no longer fulfils the condition.
+	 */
+	spin_lock(&connection->advance_cache_ptr_lock); /* local IRQ already disabled */
 
-		if ((s & RQ_NET_SENT) && (s & RQ_NET_PENDING)) {
-			req = iter;
+	rcu_read_lock();
+	old_req = rcu_dereference(*cache_ptr);
+	if (old_req != req) {
+		rcu_read_unlock();
+		spin_unlock(&connection->advance_cache_ptr_lock);
+		return;
+	}
+	list_for_each_entry_continue_rcu(req, &connection->resource->transfer_log, tl_requests) {
+		const unsigned s = READ_ONCE(req->net_rq_state[connection->peer_node_id]);
+		if (!(s & RQ_NET_MASK))
+			continue;
+		if (((s & is_set) == is_set) && !(s & is_clear)) {
+			found_req = req;
 			break;
 		}
 	}
-	connection->req_ack_pending = req;
+
+	WRITE_ONCE(*cache_ptr, found_req);
+	rcu_read_unlock();
+
+	spin_unlock(&connection->advance_cache_ptr_lock);
 }
 
-static void set_if_null_req_not_net_done(struct drbd_peer_device *peer_device, struct drbd_request *req)
+/* for wsame, discard, and zero-out requests, the payload (amount of data we
+ * need to send) is much smaller than the number of storage sectors affected */
+static unsigned int req_payload_sectors(struct drbd_request *req)
 {
-	struct drbd_connection *connection = peer_device ? peer_device->connection : NULL;
-	if (!connection)
-		return;
-	if (connection->req_not_net_done == NULL)
-		connection->req_not_net_done = req;
+	/* actually: physical_block_size,
+	 * but let's just hardcode 4k in sectors: */
+	if (unlikely(req->local_rq_state & RQ_WSAME))
+		return 8;
+	/* really only a few bytes, but let's pretend one sector */
+	if (unlikely(req->local_rq_state & (RQ_UNMAP|RQ_ZEROES)))
+		return 1;
+	/* others have all the data as payload on the wire */
+	return req->i.size >> 9;
 }
 
-static void advance_conn_req_not_net_done(struct drbd_peer_device *peer_device, struct drbd_request *req)
+static bool drbd_sender_needs_master_bio(unsigned int net_rq_state)
 {
-	struct drbd_connection *connection = peer_device ? peer_device->connection : NULL;
-	struct drbd_request *iter = req;
-	if (!connection)
-		return;
-	if (connection->req_not_net_done != req)
-		return;
-
-	req = NULL;
-	list_for_each_entry_continue(iter, &connection->transfer_log, tl_requests) {
-		const unsigned int s = iter->rq_state;
-
-		if ((s & RQ_NET_SENT) && !(s & RQ_NET_DONE)) {
-			req = iter;
-			break;
-		}
-	}
-	connection->req_not_net_done = req;
+	return (net_rq_state & RQ_NET_QUEUED) && !(net_rq_state & RQ_NET_DONE);
 }
 
 /* I'd like this to be the only place that manipulates
  * req->completion_ref and req->kref. */
 static void mod_rq_state(struct drbd_request *req, struct bio_and_error *m,
+		struct drbd_peer_device *peer_device,
 		int clear, int set)
 {
-	struct drbd_device *device = req->device;
-	struct drbd_peer_device *peer_device = first_peer_device(device);
-	unsigned s = req->rq_state;
+	unsigned int old_local, old_net = 0, new_net = 0;
+	unsigned int set_local = set & RQ_STATE_0_MASK;
+	unsigned int clear_local = clear & RQ_STATE_0_MASK;
 	int c_put = 0;
-
-	if (drbd_suspended(device) && !((s | clear) & RQ_COMPLETION_SUSP))
-		set |= RQ_COMPLETION_SUSP;
+	int d_put = 0;
+	int o_put = 0;
+	const int idx = peer_device ? peer_device->node_id : -1;
+	struct drbd_connection *connection = NULL;
+	bool unchanged;
+
+	set &= ~RQ_STATE_0_MASK;
+	clear &= ~RQ_STATE_0_MASK;
+
+	if (idx == -1) {
+		/* do not try to manipulate net state bits
+		 * without an associated state slot! */
+		BUG_ON(set);
+		BUG_ON(clear);
+	}
 
 	/* apply */
+	spin_lock(&req->rq_lock); /* local IRQ already disabled */
 
-	req->rq_state &= ~clear;
-	req->rq_state |= set;
+	old_local = req->local_rq_state;
+	req->local_rq_state &= ~clear_local;
+	req->local_rq_state |= set_local;
+
+	if (idx != -1) {
+		old_net = req->net_rq_state[idx];
+		new_net = (req->net_rq_state[idx] & ~clear) | set;
+		WRITE_ONCE(req->net_rq_state[idx], new_net);
+		connection = peer_device->connection;
+	}
 
 	/* no change? */
-	if (req->rq_state == s)
+	unchanged = req->local_rq_state == old_local &&
+	  (idx == -1 || req->net_rq_state[idx] == old_net);
+
+	if (unchanged) {
+		spin_unlock(&req->rq_lock);
 		return;
+	}
 
 	/* intent: get references */
 
-	kref_get(&req->kref);
-
-	if (!(s & RQ_LOCAL_PENDING) && (set & RQ_LOCAL_PENDING))
+	if (!(old_local & RQ_LOCAL_PENDING) && (set_local & RQ_LOCAL_PENDING))
 		atomic_inc(&req->completion_ref);
 
-	if (!(s & RQ_NET_PENDING) && (set & RQ_NET_PENDING)) {
-		inc_ap_pending(device);
+	if (!(old_net & RQ_NET_PENDING) && (set & RQ_NET_PENDING)) {
+		inc_ap_pending(peer_device);
 		atomic_inc(&req->completion_ref);
 	}
 
-	if (!(s & RQ_NET_QUEUED) && (set & RQ_NET_QUEUED)) {
+	if (!(old_net & RQ_NET_QUEUED) && (set & RQ_NET_QUEUED)) {
+		/* Keep request on transfer log while queued for sender */
+		refcount_inc(&req->oos_send_ref);
+	}
+
+	if (!drbd_sender_needs_master_bio(old_net) && drbd_sender_needs_master_bio(new_net)) {
+		/*
+		 * This completion ref is necessary to avoid premature
+		 * completion in case a WRITE_ACKED_BY_PEER comes in before the
+		 * sender can do HANDED_OVER_TO_NETWORK.
+		 */
 		atomic_inc(&req->completion_ref);
-		set_if_null_req_next(peer_device, req);
 	}
 
-	if (!(s & RQ_EXP_BARR_ACK) && (set & RQ_EXP_BARR_ACK))
-		kref_get(&req->kref); /* wait for the DONE */
+	if (!(old_net & RQ_NET_READY) && (set & RQ_NET_READY) &&
+			!(req->net_rq_state[idx] & RQ_NET_DONE))
+		set_cache_ptr_if_null(connection, &connection->req_not_net_done, req);
+
+	if (!(old_net & RQ_EXP_BARR_ACK) && (set & RQ_EXP_BARR_ACK))
+		refcount_inc(&req->done_ref); /* wait for the DONE */
 
-	if (!(s & RQ_NET_SENT) && (set & RQ_NET_SENT)) {
+	if (!(old_net & RQ_NET_SENT) && (set & RQ_NET_SENT)) {
 		/* potentially already completed in the ack_receiver thread */
-		if (!(s & RQ_NET_DONE)) {
-			atomic_add(req->i.size >> 9, &device->ap_in_flight);
-			set_if_null_req_not_net_done(peer_device, req);
-		}
-		if (req->rq_state & RQ_NET_PENDING)
-			set_if_null_req_ack_pending(peer_device, req);
+		if (!(old_net & RQ_NET_DONE))
+			atomic_add(req_payload_sectors(req), &peer_device->connection->ap_in_flight);
+		if (req->net_rq_state[idx] & RQ_NET_PENDING)
+			set_cache_ptr_if_null(connection, &connection->req_ack_pending, req);
 	}
 
-	if (!(s & RQ_COMPLETION_SUSP) && (set & RQ_COMPLETION_SUSP))
+	if (!(old_local & RQ_COMPLETION_SUSP) && (set_local & RQ_COMPLETION_SUSP))
 		atomic_inc(&req->completion_ref);
 
+	spin_unlock(&req->rq_lock);
+
 	/* progress: put references */
 
-	if ((s & RQ_COMPLETION_SUSP) && (clear & RQ_COMPLETION_SUSP))
+	if ((old_local & RQ_COMPLETION_SUSP) && (clear_local & RQ_COMPLETION_SUSP))
 		++c_put;
 
-	if (!(s & RQ_LOCAL_ABORTED) && (set & RQ_LOCAL_ABORTED)) {
-		D_ASSERT(device, req->rq_state & RQ_LOCAL_PENDING);
+	if (!(old_local & RQ_LOCAL_ABORTED) && (set_local & RQ_LOCAL_ABORTED)) {
+		D_ASSERT(req->device, req->local_rq_state & RQ_LOCAL_PENDING);
 		++c_put;
 	}
 
-	if ((s & RQ_LOCAL_PENDING) && (clear & RQ_LOCAL_PENDING)) {
-		if (req->rq_state & RQ_LOCAL_ABORTED)
-			kref_put(&req->kref, drbd_req_destroy);
+	if ((old_local & RQ_LOCAL_PENDING) && (clear_local & RQ_LOCAL_PENDING)) {
+		struct drbd_device *device = req->device;
+
+		if (req->local_rq_state & RQ_LOCAL_ABORTED)
+			++d_put;
 		else
 			++c_put;
+		spin_lock(&device->pending_completion_lock); /* local irq already disabled */
 		list_del_init(&req->req_pending_local);
+		spin_unlock(&device->pending_completion_lock);
 	}
 
-	if ((s & RQ_NET_PENDING) && (clear & RQ_NET_PENDING)) {
-		dec_ap_pending(device);
+	if ((old_net & RQ_NET_PENDING) && (clear & RQ_NET_PENDING)) {
+		dec_ap_pending(peer_device);
 		++c_put;
-		req->acked_jif = jiffies;
-		advance_conn_req_ack_pending(peer_device, req);
+		ktime_get_accounting(req->acked_kt[peer_device->node_id]);
+		advance_cache_ptr(connection, &connection->req_ack_pending,
+				  req, RQ_NET_SENT | RQ_NET_PENDING, 0);
 	}
 
-	if ((s & RQ_NET_QUEUED) && (clear & RQ_NET_QUEUED)) {
-		++c_put;
-		advance_conn_req_next(peer_device, req);
+	if ((old_net & RQ_NET_QUEUED) && (clear & RQ_NET_QUEUED)) {
+		++o_put;
+		advance_conn_req_next(connection, req);
 	}
 
-	if (!(s & RQ_NET_DONE) && (set & RQ_NET_DONE)) {
-		if (s & RQ_NET_SENT)
-			atomic_sub(req->i.size >> 9, &device->ap_in_flight);
-		if (s & RQ_EXP_BARR_ACK)
-			kref_put(&req->kref, drbd_req_destroy);
-		req->net_done_jif = jiffies;
+	if (drbd_sender_needs_master_bio(old_net) && !drbd_sender_needs_master_bio(new_net))
+		++c_put;
 
-		/* in ahead/behind mode, or just in case,
-		 * before we finally destroy this request,
-		 * the caching pointers must not reference it anymore */
-		advance_conn_req_next(peer_device, req);
-		advance_conn_req_ack_pending(peer_device, req);
-		advance_conn_req_not_net_done(peer_device, req);
-	}
+	if (!(old_net & RQ_NET_DONE) && (set & RQ_NET_DONE)) {
+		if (old_net & RQ_NET_SENT)
+			atomic_sub(req_payload_sectors(req),
+					&peer_device->connection->ap_in_flight);
+		if (old_net & RQ_EXP_BARR_ACK)
+			++d_put;
+		ktime_get_accounting(req->net_done_kt[peer_device->node_id]);
 
-	/* potentially complete and destroy */
+		advance_cache_ptr(connection, &connection->req_not_net_done,
+				  req, 0, RQ_NET_DONE);
+	}
 
-	/* If we made progress, retry conflicting peer requests, if any. */
-	if (req->i.waiting)
-		wake_up(&device->misc_wait);
+	if ((old_net & RQ_NET_PENDING_OOS) && (clear & RQ_NET_PENDING_OOS)) {
+		if (peer_device->repl_state[NOW] == L_AHEAD &&
+		    atomic_read(&peer_device->connection->ap_in_flight) == 0) {
+			struct drbd_peer_device *pd;
+			int vnr;
+			/* The first peer device to notice that it is time to
+			 * go Ahead -> SyncSource tries to trigger that
+			 * transition for *all* peer devices currently in
+			 * L_AHEAD for this connection. */
+			idr_for_each_entry(&peer_device->connection->peer_devices, pd, vnr) {
+				if (pd->repl_state[NOW] != L_AHEAD)
+					continue;
+				if (test_and_set_bit(AHEAD_TO_SYNC_SOURCE, &pd->flags))
+					continue; /* already done */
+				pd->start_resync_side = L_SYNC_SOURCE;
+				mod_timer(&pd->start_resync_timer, jiffies + HZ);
+			}
+		}
+	}
 
+	/* potentially complete and destroy */
 	drbd_req_put_completion_ref(req, m, c_put);
-	kref_put(&req->kref, drbd_req_destroy);
+
+	/* req cannot have been destroyed if there are still references */
+	if (d_put || o_put)
+		/* potentially destroy */
+		drbd_put_ref_tl_walk(req, d_put, o_put);
 }
 
 static void drbd_report_io_error(struct drbd_device *device, struct drbd_request *req)
 {
-	if (!drbd_ratelimit())
+	if (!drbd_device_ratelimit(device, BACKEND))
 		return;
 
 	drbd_warn(device, "local %s IO error sector %llu+%u on %pg\n",
-			(req->rq_state & RQ_WRITE) ? "WRITE" : "READ",
-			(unsigned long long)req->i.sector,
-			req->i.size >> 9,
-			device->ldev->backing_bdev);
+		  (req->local_rq_state & RQ_WRITE) ? "WRITE" : "READ",
+		  (unsigned long long)req->i.sector,
+		  req->i.size >> 9,
+		  device->ldev->backing_bdev);
+}
+
+static int drbd_protocol_state_bits(struct drbd_connection *connection)
+{
+	struct net_conf *nc;
+	int p;
+
+	rcu_read_lock();
+	nc = rcu_dereference(connection->transport.net_conf);
+	p = nc->wire_protocol;
+	rcu_read_unlock();
+
+	return p == DRBD_PROT_C ? RQ_EXP_WRITE_ACK :
+		p == DRBD_PROT_B ? RQ_EXP_RECEIVE_ACK : 0;
+
 }
 
 /* Helper for HANDED_OVER_TO_NETWORK.
@@ -535,11 +1115,12 @@ static void drbd_report_io_error(struct drbd_device *device, struct drbd_request
  * --> If so, clear PENDING and set NET_OK below.
  * If it is a protocol A write, but not RQ_PENDING anymore, neg-ack was faster
  * (and we must not set RQ_NET_OK) */
-static inline bool is_pending_write_protocol_A(struct drbd_request *req)
+static inline bool is_pending_write_protocol_A(struct drbd_request *req, int idx)
 {
-	return (req->rq_state &
-		   (RQ_WRITE|RQ_NET_PENDING|RQ_EXP_WRITE_ACK|RQ_EXP_RECEIVE_ACK))
-		== (RQ_WRITE|RQ_NET_PENDING);
+	return (req->local_rq_state & RQ_WRITE) == 0 ? 0 :
+		(req->net_rq_state[idx] &
+		   (RQ_NET_PENDING|RQ_EXP_WRITE_ACK|RQ_EXP_RECEIVE_ACK))
+		==  RQ_NET_PENDING;
 }
 
 /* obviously this could be coded as many single functions
@@ -550,95 +1131,76 @@ static inline bool is_pending_write_protocol_A(struct drbd_request *req)
  * but having it this way
  *  enforces that it is all in this one place, where it is easier to audit,
  *  it makes it obvious that whatever "event" "happens" to a request should
- *  happen "atomically" within the req_lock,
+ *  happen with the state_rwlock read lock held,
  *  and it enforces that we have to think in a very structured manner
  *  about the "events" that may happen to a request during its life time ...
  *
  *
  * peer_device == NULL means local disk
  */
-int __req_mod(struct drbd_request *req, enum drbd_req_event what,
+void __req_mod(struct drbd_request *req, enum drbd_req_event what,
 		struct drbd_peer_device *peer_device,
 		struct bio_and_error *m)
 {
-	struct drbd_device *const device = req->device;
-	struct drbd_connection *const connection = peer_device ? peer_device->connection : NULL;
+	struct drbd_device *device = req->device;
 	struct net_conf *nc;
-	int p, rv = 0;
+	unsigned long flags;
+	int p;
+	int idx;
+
+	lockdep_assert_held(&device->resource->state_rwlock);
 
 	if (m)
 		m->bio = NULL;
 
+	idx = peer_device ? peer_device->node_id : -1;
+
 	switch (what) {
 	default:
 		drbd_err(device, "LOGIC BUG in %s:%u\n", __FILE__ , __LINE__);
 		break;
 
-	/* does not happen...
-	 * initialization done in drbd_req_new
-	case CREATED:
-		break;
-		*/
-
-	case TO_BE_SENT: /* via network */
-		/* reached via __drbd_make_request
-		 * and from w_read_retry_remote */
-		D_ASSERT(device, !(req->rq_state & RQ_NET_MASK));
-		rcu_read_lock();
-		nc = rcu_dereference(connection->net_conf);
-		p = nc->wire_protocol;
-		rcu_read_unlock();
-		req->rq_state |=
-			p == DRBD_PROT_C ? RQ_EXP_WRITE_ACK :
-			p == DRBD_PROT_B ? RQ_EXP_RECEIVE_ACK : 0;
-		mod_rq_state(req, m, 0, RQ_NET_PENDING);
-		break;
-
 	case TO_BE_SUBMITTED: /* locally */
 		/* reached via __drbd_make_request */
-		D_ASSERT(device, !(req->rq_state & RQ_LOCAL_MASK));
-		mod_rq_state(req, m, 0, RQ_LOCAL_PENDING);
+		D_ASSERT(device, !(req->local_rq_state & RQ_LOCAL_MASK));
+		mod_rq_state(req, m, peer_device, 0, RQ_LOCAL_PENDING);
 		break;
 
 	case COMPLETED_OK:
-		if (req->rq_state & RQ_WRITE)
+		if (req->local_rq_state & RQ_WRITE)
 			device->writ_cnt += req->i.size >> 9;
 		else
 			device->read_cnt += req->i.size >> 9;
 
-		mod_rq_state(req, m, RQ_LOCAL_PENDING,
+		mod_rq_state(req, m, peer_device, RQ_LOCAL_PENDING,
 				RQ_LOCAL_COMPLETED|RQ_LOCAL_OK);
 		break;
 
 	case ABORT_DISK_IO:
-		mod_rq_state(req, m, 0, RQ_LOCAL_ABORTED);
+		mod_rq_state(req, m, peer_device, 0, RQ_LOCAL_ABORTED);
 		break;
 
 	case WRITE_COMPLETED_WITH_ERROR:
 		drbd_report_io_error(device, req);
-		__drbd_chk_io_error(device, DRBD_WRITE_ERROR);
-		mod_rq_state(req, m, RQ_LOCAL_PENDING, RQ_LOCAL_COMPLETED);
+		mod_rq_state(req, m, peer_device, RQ_LOCAL_PENDING, RQ_LOCAL_COMPLETED);
 		break;
 
 	case READ_COMPLETED_WITH_ERROR:
-		drbd_set_out_of_sync(first_peer_device(device),
-				req->i.sector, req->i.size);
+		drbd_set_all_out_of_sync(device, req->i.sector, req->i.size);
 		drbd_report_io_error(device, req);
-		__drbd_chk_io_error(device, DRBD_READ_ERROR);
 		fallthrough;
 	case READ_AHEAD_COMPLETED_WITH_ERROR:
-		/* it is legal to fail read-ahead, no __drbd_chk_io_error in that case. */
-		mod_rq_state(req, m, RQ_LOCAL_PENDING, RQ_LOCAL_COMPLETED);
+		mod_rq_state(req, m, peer_device, RQ_LOCAL_PENDING, RQ_LOCAL_COMPLETED);
 		break;
 
 	case DISCARD_COMPLETED_NOTSUPP:
 	case DISCARD_COMPLETED_WITH_ERROR:
 		/* I'd rather not detach from local disk just because it
 		 * failed a REQ_OP_DISCARD. */
-		mod_rq_state(req, m, RQ_LOCAL_PENDING, RQ_LOCAL_COMPLETED);
+		mod_rq_state(req, m, peer_device, RQ_LOCAL_PENDING, RQ_LOCAL_COMPLETED);
 		break;
 
-	case QUEUE_FOR_NET_READ:
+	case NEW_NET_READ:
 		/* READ, and
 		 * no local disk,
 		 * or target area marked as invalid,
@@ -650,27 +1212,19 @@ int __req_mod(struct drbd_request *req, enum drbd_req_event what,
 		 * Corresponding drbd_remove_request_interval is in
 		 * drbd_req_complete() */
 		D_ASSERT(device, drbd_interval_empty(&req->i));
+		spin_lock_irqsave(&device->interval_lock, flags);
 		drbd_insert_interval(&device->read_requests, &req->i);
+		spin_unlock_irqrestore(&device->interval_lock, flags);
 
-		set_bit(UNPLUG_REMOTE, &device->flags);
-
-		D_ASSERT(device, req->rq_state & RQ_NET_PENDING);
-		D_ASSERT(device, (req->rq_state & RQ_LOCAL_MASK) == 0);
-		mod_rq_state(req, m, 0, RQ_NET_QUEUED);
-		req->w.cb = w_send_read_req;
-		drbd_queue_work(&connection->sender_work,
-				&req->w);
+		D_ASSERT(device, !(req->net_rq_state[idx] & RQ_NET_MASK));
+		D_ASSERT(device, !(req->local_rq_state & RQ_LOCAL_MASK));
+		mod_rq_state(req, m, peer_device, 0, RQ_NET_PENDING|RQ_NET_QUEUED);
 		break;
 
-	case QUEUE_FOR_NET_WRITE:
+	case NEW_NET_WRITE:
 		/* assert something? */
 		/* from __drbd_make_request only */
 
-		/* Corresponding drbd_remove_request_interval is in
-		 * drbd_req_complete() */
-		D_ASSERT(device, drbd_interval_empty(&req->i));
-		drbd_insert_interval(&device->write_requests, &req->i);
-
 		/* NOTE
 		 * In case the req ended up on the transfer log before being
 		 * queued on the worker, it could lead to this request being
@@ -685,85 +1239,109 @@ int __req_mod(struct drbd_request *req, enum drbd_req_event what,
 		 *
 		 * Add req to the (now) current epoch (barrier). */
 
-		/* otherwise we may lose an unplug, which may cause some remote
-		 * io-scheduler timeout to expire, increasing maximum latency,
-		 * hurting performance. */
-		set_bit(UNPLUG_REMOTE, &device->flags);
+		D_ASSERT(device, !(req->net_rq_state[idx] & RQ_NET_MASK));
 
 		/* queue work item to send data */
-		D_ASSERT(device, req->rq_state & RQ_NET_PENDING);
-		mod_rq_state(req, m, 0, RQ_NET_QUEUED|RQ_EXP_BARR_ACK);
-		req->w.cb =  w_send_dblock;
-		drbd_queue_work(&connection->sender_work,
-				&req->w);
+		mod_rq_state(req, m, peer_device, 0, RQ_NET_PENDING|RQ_NET_QUEUED|RQ_EXP_BARR_ACK|
+				drbd_protocol_state_bits(peer_device->connection));
+
+		/* Close the epoch, in case it outgrew the limit.
+		 * Also close it if this is a "batch bio" and any of our peers
+		 * is "old", because a batch bio "storm" (like, large scale
+		 * discarding during mkfs time) would be likely to starve out
+		 * the peer's activity log, if it is smaller than ours (or we
+		 * don't have any).  A fix for the resulting potential
+		 * distributed deadlock was only implemented with
+		 * P_CONFIRM_STABLE in protocol version 114.
+		 */
+		if (device->resource->cached_min_aggreed_protocol_version < 114 &&
+		    (req->local_rq_state & (RQ_UNMAP|RQ_WSAME|RQ_ZEROES))) {
+			p = 1;
+		} else {
+			rcu_read_lock();
+			nc = rcu_dereference(peer_device->connection->transport.net_conf);
+			p = nc->max_epoch_size;
+			rcu_read_unlock();
+		}
+		if (device->resource->current_tle_writes >= p)
+			start_new_tl_epoch(device->resource);
+		break;
 
-		/* close the epoch, in case it outgrew the limit */
-		rcu_read_lock();
-		nc = rcu_dereference(connection->net_conf);
-		p = nc->max_epoch_size;
-		rcu_read_unlock();
-		if (connection->current_tle_writes >= p)
-			start_new_tl_epoch(connection);
+	case NEW_NET_OOS:
+		/* We will just send P_OUT_OF_SYNC to this peer. The request is
+		 * "done" from the start in the sense that everything necessary
+		 * in the data stage has been done.
+		 */
+		mod_rq_state(req, m, peer_device, 0, RQ_NET_PENDING_OOS|RQ_NET_QUEUED|RQ_NET_DONE);
+		break;
 
+	case READY_FOR_NET:
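+		/* Mark the request ready to be processed by the sender for
+		 * this peer. */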
+		mod_rq_state(req, m, peer_device, 0, RQ_NET_READY);
 		break;
 
-	case QUEUE_FOR_SEND_OOS:
-		mod_rq_state(req, m, 0, RQ_NET_QUEUED);
-		req->w.cb =  w_send_out_of_sync;
-		drbd_queue_work(&connection->sender_work,
-				&req->w);
+	case SKIP_OOS:
+		mod_rq_state(req, m, peer_device, RQ_NET_PENDING_OOS, RQ_NET_READY);
 		break;
 
-	case READ_RETRY_REMOTE_CANCELED:
+	case OOS_HANDED_TO_NETWORK:
 	case SEND_CANCELED:
 	case SEND_FAILED:
-		/* real cleanup will be done from tl_clear.  just update flags
-		 * so it is no longer marked as on the worker queue */
-		mod_rq_state(req, m, RQ_NET_QUEUED, 0);
+		/* Sending P_OUT_OF_SYNC is irrelevant if the connection was
+		 * lost. Hence, when the intention was to send P_OUT_OF_SYNC,
+		 * the effect of successfully sending the packet and connection
+		 * loss are the same.
+		 *
+		 * Otherwise just update flags so it is no longer marked as on
+		 * the sender queue; real cleanup will be done from
+		 * tl_walk(,CONNECTION_LOST*).
+		 */
+		mod_rq_state(req, m, peer_device, RQ_NET_PENDING_OOS|RQ_NET_QUEUED, 0);
 		break;
 
 	case HANDED_OVER_TO_NETWORK:
 		/* assert something? */
-		if (is_pending_write_protocol_A(req))
+		if (is_pending_write_protocol_A(req, idx))
 			/* this is what is dangerous about protocol A:
 			 * pretend it was successfully written on the peer. */
-			mod_rq_state(req, m, RQ_NET_QUEUED|RQ_NET_PENDING,
-						RQ_NET_SENT|RQ_NET_OK);
+			mod_rq_state(req, m, peer_device, RQ_NET_QUEUED|RQ_NET_PENDING,
+				     RQ_NET_SENT|RQ_NET_OK);
 		else
-			mod_rq_state(req, m, RQ_NET_QUEUED, RQ_NET_SENT);
+			mod_rq_state(req, m, peer_device, RQ_NET_QUEUED, RQ_NET_SENT);
 		/* It is still not yet RQ_NET_DONE until the
 		 * corresponding epoch barrier got acked as well,
 		 * so we know what to dirty on connection loss. */
 		break;
 
-	case OOS_HANDED_TO_NETWORK:
-		/* Was not set PENDING, no longer QUEUED, so is now DONE
-		 * as far as this connection is concerned. */
-		mod_rq_state(req, m, RQ_NET_QUEUED, RQ_NET_DONE);
-		break;
-
-	case CONNECTION_LOST_WHILE_PENDING:
-		/* transfer log cleanup after connection loss */
-		mod_rq_state(req, m,
-				RQ_NET_OK|RQ_NET_PENDING|RQ_COMPLETION_SUSP,
-				RQ_NET_DONE);
-		break;
+	case CONNECTION_LOST:
+	case CONNECTION_LOST_WHILE_SUSPENDED:
+		/* Only apply to requests that were for this peer but not done. */
+		if (!(req->net_rq_state[idx] & RQ_NET_MASK) || req->net_rq_state[idx] & RQ_NET_DONE)
+			break;
 
-	case CONFLICT_RESOLVED:
-		/* for superseded conflicting writes of multiple primaries,
-		 * there is no need to keep anything in the tl, potential
-		 * node crashes are covered by the activity log.
+		/* For protocol A, or when not suspended, we consider the
+		 * request to be lost towards this peer.
+		 *
+		 * Protocol B&C requests are kept while suspended because
+		 * resending is allowed. If such a request is pending to this
+		 * peer, we suspend its completion until IO is resumed. This is
+		 * a conservative simplification. We could complete it while
+		 * suspended once we know it has been received by "enough"
+		 * peers. However, we do not track that.
 		 *
-		 * If this request had been marked as RQ_POSTPONED before,
-		 * it will actually not be completed, but "restarted",
-		 * resubmitted from the retry worker context. */
-		D_ASSERT(device, req->rq_state & RQ_NET_PENDING);
-		D_ASSERT(device, req->rq_state & RQ_EXP_WRITE_ACK);
-		mod_rq_state(req, m, RQ_NET_PENDING, RQ_NET_DONE|RQ_NET_OK);
+		 * If the request is no longer pending to this peer, then we
+		 * have already received the corresponding ack. The request may
+		 * complete as far as this peer is concerned. */
+		if (what == CONNECTION_LOST ||
+				!(req->net_rq_state[idx] & (RQ_EXP_RECEIVE_ACK|RQ_EXP_WRITE_ACK)))
+			mod_rq_state(req, m, peer_device, RQ_NET_PENDING|RQ_NET_OK, RQ_NET_DONE);
+		else if (req->net_rq_state[idx] & RQ_NET_PENDING)
+			mod_rq_state(req, m, peer_device, 0, RQ_COMPLETION_SUSP);
 		break;
 
 	case WRITE_ACKED_BY_PEER_AND_SIS:
-		req->rq_state |= RQ_NET_SIS;
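+		/* rq_lock serializes net_rq_state updates with mod_rq_state() */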
+		spin_lock_irqsave(&req->rq_lock, flags);
+		req->net_rq_state[idx] |= RQ_NET_SIS;
+		spin_unlock_irqrestore(&req->rq_lock, flags);
 		fallthrough;
 	case WRITE_ACKED_BY_PEER:
 		/* Normal operation protocol C: successfully written on peer.
@@ -775,155 +1353,162 @@ int __req_mod(struct drbd_request *req, enum drbd_req_event what,
 		 * for volatile write-back caches on lower level devices. */
 		goto ack_common;
 	case RECV_ACKED_BY_PEER:
-		D_ASSERT(device, req->rq_state & RQ_EXP_RECEIVE_ACK);
+		D_ASSERT(device, req->net_rq_state[idx] & RQ_EXP_RECEIVE_ACK);
 		/* protocol B; pretends to be successfully written on peer.
 		 * see also notes above in HANDED_OVER_TO_NETWORK about
 		 * protocol != C */
 	ack_common:
-		mod_rq_state(req, m, RQ_NET_PENDING, RQ_NET_OK);
-		break;
-
-	case POSTPONE_WRITE:
-		D_ASSERT(device, req->rq_state & RQ_EXP_WRITE_ACK);
-		/* If this node has already detected the write conflict, the
-		 * worker will be waiting on misc_wait.  Wake it up once this
-		 * request has completed locally.
-		 */
-		D_ASSERT(device, req->rq_state & RQ_NET_PENDING);
-		req->rq_state |= RQ_POSTPONED;
-		if (req->i.waiting)
-			wake_up(&device->misc_wait);
-		/* Do not clear RQ_NET_PENDING. This request will make further
-		 * progress via restart_conflicting_writes() or
-		 * fail_postponed_requests(). Hopefully. */
+		mod_rq_state(req, m, peer_device, RQ_NET_PENDING, RQ_NET_OK);
 		break;
 
 	case NEG_ACKED:
-		mod_rq_state(req, m, RQ_NET_OK|RQ_NET_PENDING, 0);
+		mod_rq_state(req, m, peer_device, RQ_NET_OK|RQ_NET_PENDING,
+			     (req->local_rq_state & RQ_WRITE) ? 0 : RQ_NET_DONE);
 		break;
 
-	case FAIL_FROZEN_DISK_IO:
-		if (!(req->rq_state & RQ_LOCAL_COMPLETED))
-			break;
-		mod_rq_state(req, m, RQ_COMPLETION_SUSP, 0);
+	case COMPLETION_RESUMED:
+		mod_rq_state(req, m, peer_device, RQ_COMPLETION_SUSP, 0);
 		break;
 
-	case RESTART_FROZEN_DISK_IO:
-		if (!(req->rq_state & RQ_LOCAL_COMPLETED))
+	case CANCEL_SUSPENDED_IO:
+		/* Only apply to requests that were for this peer but not done. */
+		if (!(req->net_rq_state[idx] & RQ_NET_MASK) || req->net_rq_state[idx] & RQ_NET_DONE)
 			break;
 
-		mod_rq_state(req, m,
-				RQ_COMPLETION_SUSP|RQ_LOCAL_COMPLETED,
-				RQ_LOCAL_PENDING);
-
-		rv = MR_READ;
-		if (bio_data_dir(req->master_bio) == WRITE)
-			rv = MR_WRITE;
-
-		get_ldev(device); /* always succeeds in this call path */
-		req->w.cb = w_restart_disk_io;
-		drbd_queue_work(&connection->sender_work,
-				&req->w);
+		/* CONNECTION_LOST_WHILE_SUSPENDED followed by
+		 * CANCEL_SUSPENDED_IO should be essentially the same as
+		 * CONNECTION_LOST. Make the corresponding changes. The
+		 * RQ_COMPLETION_SUSP flag is handled by COMPLETION_RESUMED. */
+		mod_rq_state(req, m, peer_device, RQ_NET_PENDING|RQ_NET_OK, RQ_NET_DONE);
 		break;
 
 	case RESEND:
-		/* Simply complete (local only) READs. */
-		if (!(req->rq_state & RQ_WRITE) && !req->w.cb) {
-			mod_rq_state(req, m, RQ_COMPLETION_SUSP, 0);
-			break;
-		}
-
 		/* If RQ_NET_OK is already set, we got a P_WRITE_ACK or P_RECV_ACK
 		   before the connection loss (B&C only); only P_BARRIER_ACK
 		   (or the local completion?) was missing when we suspended.
 		   Throwing them out of the TL here by pretending we got a BARRIER_ACK.
-		   During connection handshake, we ensure that the peer was not rebooted. */
-		if (!(req->rq_state & RQ_NET_OK)) {
-			/* FIXME could this possibly be a req->dw.cb == w_send_out_of_sync?
-			 * in that case we must not set RQ_NET_PENDING. */
-
-			mod_rq_state(req, m, RQ_COMPLETION_SUSP, RQ_NET_QUEUED|RQ_NET_PENDING);
-			if (req->w.cb) {
-				/* w.cb expected to be w_send_dblock, or w_send_read_req */
-				drbd_queue_work(&connection->sender_work,
-						&req->w);
-				rv = req->rq_state & RQ_WRITE ? MR_WRITE : MR_READ;
-			} /* else: FIXME can this happen? */
+		   During connection handshake, we ensure that the peer was not rebooted.
+
+		   Protocol A requests always have RQ_NET_OK removed when the
+		   connection is lost, so this will never apply to them.
+
+		   Resending is only allowed on synchronous connections,
+		   where all requests not yet completed to upper layers are
+		   in the same "reorder-domain": there cannot possibly be
+		   any dependency between incomplete requests, so we are
+		   allowed to complete this one "out-of-sequence".
+		 */
+		if (req->net_rq_state[idx] & RQ_NET_OK)
+			goto barrier_acked;
+
+		/* Only apply to requests that are pending a response from
+		 * this peer. */
+		if (!(req->net_rq_state[idx] & RQ_NET_PENDING))
 			break;
-		}
-		fallthrough;	/* to BARRIER_ACKED */
+
+		D_ASSERT(device, !(req->net_rq_state[idx] & RQ_NET_QUEUED));
+		mod_rq_state(req, m, peer_device, RQ_NET_SENT, RQ_NET_QUEUED);
+		break;
 
 	case BARRIER_ACKED:
+barrier_acked:
 		/* barrier ack for READ requests does not make sense */
-		if (!(req->rq_state & RQ_WRITE))
+		if (!(req->local_rq_state & RQ_WRITE))
 			break;
 
-		if (req->rq_state & RQ_NET_PENDING) {
+		if (req->net_rq_state[idx] & RQ_NET_PENDING) {
 			/* barrier came in before all requests were acked.
 			 * this is bad, because if the connection is lost now,
 			 * we won't be able to clean them up... */
 			drbd_err(device, "FIXME (BARRIER_ACKED but pending)\n");
+			mod_rq_state(req, m, peer_device, RQ_NET_PENDING, RQ_NET_OK);
 		}
-		/* Allowed to complete requests, even while suspended.
-		 * As this is called for all requests within a matching epoch,
+		/* As this is called for all requests within a matching epoch,
 		 * we need to filter, and only set RQ_NET_DONE for those that
 		 * have actually been on the wire. */
-		mod_rq_state(req, m, RQ_COMPLETION_SUSP,
-				(req->rq_state & RQ_NET_MASK) ? RQ_NET_DONE : 0);
+		if (req->net_rq_state[idx] & RQ_NET_MASK)
+			mod_rq_state(req, m, peer_device, 0, RQ_NET_DONE);
 		break;
 
 	case DATA_RECEIVED:
-		D_ASSERT(device, req->rq_state & RQ_NET_PENDING);
-		mod_rq_state(req, m, RQ_NET_PENDING, RQ_NET_OK|RQ_NET_DONE);
+		D_ASSERT(device, req->net_rq_state[idx] & RQ_NET_PENDING);
+		mod_rq_state(req, m, peer_device, RQ_NET_PENDING, RQ_NET_OK|RQ_NET_DONE);
 		break;
 
-	case QUEUE_AS_DRBD_BARRIER:
-		start_new_tl_epoch(connection);
-		mod_rq_state(req, m, 0, RQ_NET_OK|RQ_NET_DONE);
+	case BARRIER_SENT:
+		mod_rq_state(req, m, peer_device, 0, RQ_NET_OK|RQ_NET_DONE);
 		break;
 	}
-
-	return rv;
 }
 
 /* we may do a local read if:
  * - we are consistent (of course),
  * - or we are generally inconsistent,
- *   BUT we are still/already IN SYNC for this area.
+ *   BUT we are still/already IN SYNC with all peers for this area.
  *   since size may be bigger than BM_BLOCK_SIZE,
  *   we may need to check several bits.
  */
 static bool drbd_may_do_local_read(struct drbd_device *device, sector_t sector, int size)
 {
+	struct drbd_md *md = &device->ldev->md;
+	struct drbd_bitmap *bm;
+	unsigned int node_id;
+	unsigned int n_checked = 0;
+
 	unsigned long sbnr, ebnr;
 	sector_t esector, nr_sectors;
 
-	if (device->state.disk == D_UP_TO_DATE)
+	if (device->disk_state[NOW] == D_UP_TO_DATE)
 		return true;
-	if (device->state.disk != D_INCONSISTENT)
+	if (device->disk_state[NOW] != D_INCONSISTENT)
 		return false;
 	esector = sector + (size >> 9) - 1;
 	nr_sectors = get_capacity(device->vdisk);
 	D_ASSERT(device, sector  < nr_sectors);
 	D_ASSERT(device, esector < nr_sectors);
 
-	sbnr = BM_SECT_TO_BIT(sector);
-	ebnr = BM_SECT_TO_BIT(esector);
+	bm = device->bitmap;
+	if (!bm)
+		return true;
+
+	sbnr = bm_sect_to_bit(bm, sector);
+	ebnr = bm_sect_to_bit(bm, esector);
+
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		struct drbd_peer_md *peer_md = &md->peers[node_id];
+
+		/* Skip bitmap indexes which are not assigned to a peer. */
+		if (!(peer_md->flags & MDF_HAVE_BITMAP))
+			continue;
 
-	return drbd_bm_count_bits(device, sbnr, ebnr) == 0;
+		if (drbd_bm_count_bits(device, peer_md->bitmap_index, sbnr, ebnr))
+			return false;
+		++n_checked;
+	}
+	if (n_checked == 0) {
+		drbd_err_ratelimit(device, "No valid bitmap slots found to check!\n");
+		return false;
+	}
+	return true;
 }
 
-static bool remote_due_to_read_balancing(struct drbd_device *device, sector_t sector,
+/* TODO improve for more than one peer.
+ * also take into account the drbd protocol. */
+static bool remote_due_to_read_balancing(struct drbd_device *device,
+		struct drbd_peer_device *peer_device, sector_t sector,
 		enum drbd_read_balancing rbm)
 {
 	int stripe_shift;
 
 	switch (rbm) {
 	case RB_CONGESTED_REMOTE:
+		/* Originally, this used the bdi congestion framework,
+		 * but that was removed in Linux 5.18,
+		 * so just never report the lower device as congested. */
 		return false;
 	case RB_LEAST_PENDING:
 		return atomic_read(&device->local_cnt) >
-			atomic_read(&device->ap_pending_cnt) + atomic_read(&device->rs_pending_cnt);
+			atomic_read(&peer_device->ap_pending_cnt) + atomic_read(&peer_device->rs_pending_cnt);
 	case RB_32K_STRIPING:  /* stripe_shift = 15 */
 	case RB_64K_STRIPING:
 	case RB_128K_STRIPING:
@@ -942,63 +1527,32 @@ static bool remote_due_to_read_balancing(struct drbd_device *device, sector_t se
 	}
 }
 
-/*
- * complete_conflicting_writes  -  wait for any conflicting write requests
- *
- * The write_requests tree contains all active write requests which we
- * currently know about.  Wait for any requests to complete which conflict with
- * the new one.
- *
- * Only way out: remove the conflicting intervals from the tree.
- */
-static void complete_conflicting_writes(struct drbd_request *req)
-{
-	DEFINE_WAIT(wait);
-	struct drbd_device *device = req->device;
-	struct drbd_interval *i;
-	sector_t sector = req->i.sector;
-	int size = req->i.size;
-
-	for (;;) {
-		drbd_for_each_overlap(i, &device->write_requests, sector, size) {
-			/* Ignore, if already completed to upper layers. */
-			if (i->completed)
-				continue;
-			/* Handle the first found overlap.  After the schedule
-			 * we have to restart the tree walk. */
-			break;
-		}
-		if (!i)	/* if any */
-			break;
-
-		/* Indicate to wake up device->misc_wait on progress.  */
-		prepare_to_wait(&device->misc_wait, &wait, TASK_UNINTERRUPTIBLE);
-		i->waiting = true;
-		spin_unlock_irq(&device->resource->req_lock);
-		schedule();
-		spin_lock_irq(&device->resource->req_lock);
-	}
-	finish_wait(&device->misc_wait, &wait);
-}
-
-/* called within req_lock */
-static void maybe_pull_ahead(struct drbd_device *device)
+static void __maybe_pull_ahead(struct drbd_device *device, struct drbd_connection *connection)
 {
-	struct drbd_connection *connection = first_peer_device(device)->connection;
 	struct net_conf *nc;
 	bool congested = false;
 	enum drbd_on_congestion on_congestion;
+	u32 cong_fill = 0, cong_extents = 0;
+	struct drbd_peer_device *peer_device = conn_peer_device(connection, device->vnr);
 
-	rcu_read_lock();
-	nc = rcu_dereference(connection->net_conf);
-	on_congestion = nc ? nc->on_congestion : OC_BLOCK;
-	rcu_read_unlock();
-	if (on_congestion == OC_BLOCK ||
-	    connection->agreed_pro_version < 96)
+	lockdep_assert_held(&device->resource->state_rwlock);
+
+	if (connection->agreed_pro_version < 96)
+		return;
+
+	nc = rcu_dereference(connection->transport.net_conf);
+	if (nc) {
+		on_congestion = nc->on_congestion;
+		cong_fill = nc->cong_fill;
+		cong_extents = nc->cong_extents;
+	} else {
+		on_congestion = OC_BLOCK;
+	}
+	if (on_congestion == OC_BLOCK)
 		return;
 
-	if (on_congestion == OC_PULL_AHEAD && device->state.conn == C_AHEAD)
-		return; /* nothing to do ... */
+	if (!drbd_should_do_remote(peer_device, NOW))
+		return; /* Ignore congestion if we are not replicating writes */
 
 	/* If I don't even have good local storage, we can not reasonably try
 	 * to pull ahead of the peer. We also need the local reference to make
@@ -1007,44 +1561,125 @@ static void maybe_pull_ahead(struct drbd_device *device)
 	if (!get_ldev_if_state(device, D_UP_TO_DATE))
 		return;
 
-	if (nc->cong_fill &&
-	    atomic_read(&device->ap_in_flight) >= nc->cong_fill) {
-		drbd_info(device, "Congestion-fill threshold reached\n");
-		congested = true;
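+	/* Somebody else is already handling congestion for this peer. */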
+	if (test_and_set_bit(HANDLING_CONGESTION, &peer_device->flags))
+		goto out;
+
+	/* If another volume already found that we are congested, short circuit. */
+	congested = test_bit(CONN_CONGESTED, &connection->flags);
+
+	if (!congested && cong_fill) {
+		int n = atomic_read(&connection->ap_in_flight) +
+			atomic_read(&connection->rs_in_flight);
+		if (n >= cong_fill) {
+			drbd_info(device, "Congestion-fill threshold reached (%d >= %u)\n", n, cong_fill);
+			congested = true;
+		}
 	}
 
-	if (device->act_log->used >= nc->cong_extents) {
-		drbd_info(device, "Congestion-extents threshold reached\n");
+	if (!congested && device->act_log->used >= cong_extents) {
+		drbd_info(device, "Congestion-extents threshold reached (%u >= %u)\n",
+			device->act_log->used, cong_extents);
 		congested = true;
 	}
 
 	if (congested) {
-		/* start a new epoch for non-mirrored writes */
-		start_new_tl_epoch(first_peer_device(device)->connection);
-
-		if (on_congestion == OC_PULL_AHEAD)
-			_drbd_set_state(_NS(device, conn, C_AHEAD), 0, NULL);
-		else  /*nc->on_congestion == OC_DISCONNECT */
-			_drbd_set_state(_NS(device, conn, C_DISCONNECTING), 0, NULL);
+		set_bit(CONN_CONGESTED, &connection->flags);
+		drbd_peer_device_post_work(peer_device, HANDLE_CONGESTION);
+	} else {
+		clear_bit(HANDLING_CONGESTION, &peer_device->flags);
 	}
+out:
 	put_ldev(device);
 }
 
-/* If this returns false, and req->private_bio is still set,
- * this should be submitted locally.
+static void maybe_pull_ahead(struct drbd_device *device)
+{
+	struct drbd_connection *connection;
+
+	rcu_read_lock();
+	for_each_connection_rcu(connection, device->resource)
+		if (connection->cstate[NOW] == C_CONNECTED)
+			__maybe_pull_ahead(device, connection);
+	rcu_read_unlock();
+}
+
+bool drbd_should_do_remote(struct drbd_peer_device *peer_device, enum which_state which)
+{
+	enum drbd_disk_state peer_disk_state = peer_device->disk_state[which];
+	enum drbd_repl_state repl_state = peer_device->repl_state[which];
+	bool replication = peer_device->replication[which];
+
+	return peer_disk_state == D_UP_TO_DATE ||
+		(peer_disk_state == D_INCONSISTENT && replication &&
+		 (repl_state == L_ESTABLISHED ||
+		  (repl_state >= L_WF_BITMAP_T && repl_state < L_AHEAD)));
+	/* Before proto 96 that was >= CONNECTED instead of >= L_WF_BITMAP_T.
+	   That is equivalent since before 96 IO was frozen in the L_WF_BITMAP*
+	   states. */
+}
+
+static bool drbd_should_send_out_of_sync(struct drbd_peer_device *peer_device)
+{
+	enum drbd_disk_state peer_disk_state = peer_device->disk_state[NOW];
+	enum drbd_repl_state repl_state = peer_device->repl_state[NOW];
+	bool replication = peer_device->replication[NOW];
+
+	return repl_state == L_AHEAD ||
+		repl_state == L_WF_BITMAP_S ||
+		(repl_state >= L_ESTABLISHED &&
+		 (peer_disk_state == D_OUTDATED ||
+		  (peer_disk_state == D_INCONSISTENT && !replication)));
+
+	/* proto 96 check omitted, there was no L_AHEAD back then,
+	 * peer disk was never Outdated while connection was established,
+	 * and IO was frozen during bitmap exchange */
+}
+
+/* Prefer to read from protocol C peers, then B, and A last */
+static u64 calc_nodes_to_read_from(struct drbd_device *device)
+{
+	struct drbd_peer_device *peer_device;
+	u64 candidates[DRBD_PROT_C] = {};
+	int wp;
+
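+	/* candidates[] is indexed by wire protocol number minus one
+	 * (DRBD_PROT_A == 1); each entry is a mask of node ids. */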
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		struct net_conf *nc;
+
+		if (peer_device->disk_state[NOW] != D_UP_TO_DATE)
+			continue;
+		nc = rcu_dereference(peer_device->connection->transport.net_conf);
+		if (!nc || !nc->allow_remote_read)
+			continue;
+		wp = nc->wire_protocol;
+		candidates[wp - 1] |= NODE_MASK(peer_device->node_id);
+	}
+	rcu_read_unlock();
+
+	for (wp = DRBD_PROT_C; wp >= DRBD_PROT_A; wp--) {
+		if (candidates[wp - 1])
+			return candidates[wp - 1];
+	}
+	return 0;
+}
+
+/* If this returns NULL, and req->private_bio is still set,
+ * the request should be submitted locally.
  *
- * If it returns false, but req->private_bio is not set,
+ * If it returns NULL, but req->private_bio is not set,
  * we do not have access to good data :(
  *
  * Otherwise, this destroys req->private_bio, if any,
- * and returns true.
+ * and returns the peer device which should be asked for data.
  */
-static bool do_remote_read(struct drbd_request *req)
+static struct drbd_peer_device *find_peer_device_for_read(struct drbd_request *req)
 {
+	struct drbd_peer_device *peer_device;
 	struct drbd_device *device = req->device;
-	enum drbd_read_balancing rbm;
+	enum drbd_read_balancing rbm = RB_PREFER_REMOTE;
 
 	if (req->private_bio) {
+		/* ldev_safe: have private_bio */
 		if (!drbd_may_do_local_read(device,
 					req->i.sector, req->i.size)) {
 			bio_put(req->private_bio);
@@ -1053,90 +1688,123 @@ static bool do_remote_read(struct drbd_request *req)
 		}
 	}
 
-	if (device->state.pdsk != D_UP_TO_DATE)
-		return false;
-
-	if (req->private_bio == NULL)
-		return true;
-
-	/* TODO: improve read balancing decisions, take into account drbd
-	 * protocol, pending requests etc. */
-
-	rcu_read_lock();
-	rbm = rcu_dereference(device->ldev->disk_conf)->read_balancing;
-	rcu_read_unlock();
-
-	if (rbm == RB_PREFER_LOCAL && req->private_bio)
-		return false; /* submit locally */
+	if (device->disk_state[NOW] > D_DISKLESS) {
+		rcu_read_lock();
+		/* ldev_safe: checked disk_state while holding state_rwlock */
+		rbm = rcu_dereference(device->ldev->disk_conf)->read_balancing;
+		rcu_read_unlock();
+		if (rbm == RB_PREFER_LOCAL && req->private_bio) {
+			return NULL; /* submit locally */
+		}
+	}
 
-	if (remote_due_to_read_balancing(device, req->i.sector, rbm)) {
-		if (req->private_bio) {
-			bio_put(req->private_bio);
-			req->private_bio = NULL;
-			put_ldev(device);
+	/* TODO: improve read balancing decisions, allow user to configure node weights */
+	while (true) {
+		if (!device->read_nodes)
+			device->read_nodes = calc_nodes_to_read_from(device);
+		if (device->read_nodes) {
+			int peer_node_id = __ffs64(device->read_nodes);
+			device->read_nodes &= ~NODE_MASK(peer_node_id);
+			peer_device = peer_device_by_node_id(device, peer_node_id);
+			if (!peer_device)
+				continue;
+			if (peer_device->disk_state[NOW] != D_UP_TO_DATE)
+				continue;
+			if (req->private_bio &&
+			    !remote_due_to_read_balancing(device, peer_device, req->i.sector, rbm))
+				peer_device = NULL;
+		} else {
+			peer_device = NULL;
 		}
-		return true;
+		break;
 	}
 
-	return false;
+	if (peer_device && req->private_bio) {
+		bio_put(req->private_bio);
+		req->private_bio = NULL;
+		put_ldev(device);
+	}
+	return peer_device;
 }
 
-bool drbd_should_do_remote(union drbd_dev_state s)
+static int drbd_process_empty_flush(struct drbd_request *req)
 {
-	return s.pdsk == D_UP_TO_DATE ||
-		(s.pdsk >= D_INCONSISTENT &&
-		 s.conn >= C_WF_BITMAP_T &&
-		 s.conn < C_AHEAD);
-	/* Before proto 96 that was >= CONNECTED instead of >= C_WF_BITMAP_T.
-	   That is equivalent since before 96 IO was frozen in the C_WF_BITMAP*
-	   states. */
-}
+	struct drbd_device *device = req->device;
+	struct drbd_peer_device *peer_device;
+	int count = 0;
+
+	for_each_peer_device(peer_device, device) {
+		/* When a flush is submitted, the expectation is that the data
+		 * is written somewhere in a usable form. Hence only
+		 * D_UP_TO_DATE peers are included and not all peers that
+		 * receive the data. */
+		if (peer_device->disk_state[NOW] == D_UP_TO_DATE) {
+			++count;
+
+			/* An empty flush indicates that all previously
+			 * completed requests should be written out to stable
+			 * storage. Request completion already triggers a
+			 * barrier to be sent and the current epoch closed. The
+			 * barrier causes the data to be written out unless
+			 * that is configured not to be necessary.
+			 *
+			 * Hence there is nothing more to be done to cause the
+			 * writing out to persistent storage which was
+			 * requested. We just mark the request so that we know
+			 * that a flush has effectively occurred on this peer
+			 * so that we can complete it successfully.
+			 *
+			 * We _should_ wait for any outstanding barriers to
+			 * protocol C peers to be acked before completing this
+			 * request, so that we are sure that the previously
+			 * completed requests have really been written out
+			 * there too. However, DRBD has not implemented
+			 * this yet. */
+			_req_mod(req, BARRIER_SENT, peer_device);
+		}
+	}
 
-static bool drbd_should_send_out_of_sync(union drbd_dev_state s)
-{
-	return s.conn == C_AHEAD || s.conn == C_WF_BITMAP_S;
-	/* pdsk = D_INCONSISTENT as a consequence. Protocol 96 check not necessary
-	   since we enter state C_AHEAD only if proto >= 96 */
+	return count;
 }
 
-/* returns number of connections (== 1, for drbd 8.4)
- * expected to actually write this data,
+/* returns the number of connections expected to actually write this data,
  * which does NOT include those that we are L_AHEAD for. */
 static int drbd_process_write_request(struct drbd_request *req)
 {
 	struct drbd_device *device = req->device;
-	struct drbd_peer_device *peer_device = first_peer_device(device);
+	struct drbd_peer_device *peer_device;
 	int remote, send_oos;
+	int count = 0;
+
+	for_each_peer_device(peer_device, device) {
+		remote = drbd_should_do_remote(peer_device, NOW);
+		send_oos = drbd_should_send_out_of_sync(peer_device);
 
-	remote = drbd_should_do_remote(device->state);
-	send_oos = drbd_should_send_out_of_sync(device->state);
+		if (!remote && !send_oos)
+			continue;
 
-	/* Need to replicate writes.  Unless it is an empty flush,
-	 * which is better mapped to a DRBD P_BARRIER packet,
-	 * also for drbd wire protocol compatibility reasons.
-	 * If this was a flush, just start a new epoch.
-	 * Unless the current epoch was empty anyways, or we are not currently
-	 * replicating, in which case there is no point. */
-	if (unlikely(req->i.size == 0)) {
-		/* The only size==0 bios we expect are empty flushes. */
-		D_ASSERT(device, req->master_bio->bi_opf & REQ_PREFLUSH);
-		if (remote)
-			_req_mod(req, QUEUE_AS_DRBD_BARRIER, peer_device);
-		return remote;
-	}
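+		/* A given peer is either sent the data or told that it is out
+		 * of sync, never both. */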
+		D_ASSERT(device, !(remote && send_oos));
 
-	if (!remote && !send_oos)
-		return 0;
+		if (remote) {
+			++count;
+			_req_mod(req, NEW_NET_WRITE, peer_device);
+		} else {
+			_req_mod(req, NEW_NET_OOS, peer_device);
+		}
+	}
 
-	D_ASSERT(device, !(remote && send_oos));
+	return count;
+}
 
-	if (remote) {
-		_req_mod(req, TO_BE_SENT, peer_device);
-		_req_mod(req, QUEUE_FOR_NET_WRITE, peer_device);
-	} else if (drbd_set_out_of_sync(peer_device, req->i.sector, req->i.size))
-		_req_mod(req, QUEUE_FOR_SEND_OOS, peer_device);
+static void drbd_request_ready_for_net(struct drbd_request *req)
+{
+	struct drbd_device *device = req->device;
+	struct drbd_peer_device *peer_device;
 
-	return remote;
+	for_each_peer_device(peer_device, device) {
+		/* Do not mark RQ_NET_PENDING_OOS requests ready yet */
+		if (req->net_rq_state[peer_device->node_id] & RQ_NET_PENDING)
+			_req_mod(req, READY_FOR_NET, peer_device);
+	}
 }
 
 static void drbd_process_discard_or_zeroes_req(struct drbd_request *req, int flags)
@@ -1162,45 +1830,67 @@ drbd_submit_req_private_bio(struct drbd_request *req)
 	else
 		type = DRBD_FAULT_DT_RD;
 
+	/* ldev_safe: req->private_bio implies an ldev reference is held */
+	bio_set_dev(bio, device->ldev->backing_bdev);
+
 	/* State may have changed since we grabbed our reference on the
-	 * ->ldev member. Double check, and short-circuit to endio.
+	 * device->ldev member. Double check, and short-circuit to endio.
 	 * In case the last activity log transaction failed to get on
 	 * stable storage, and this is a WRITE, we may not even submit
 	 * this bio. */
 	if (get_ldev(device)) {
-		if (drbd_insert_fault(device, type))
-			bio_io_error(bio);
-		else if (bio_op(bio) == REQ_OP_WRITE_ZEROES)
+		if (drbd_insert_fault(device, type)) {
+			bio->bi_status = BLK_STS_IOERR;
+			bio_endio(bio);
+		} else if (bio_op(bio) == REQ_OP_WRITE_ZEROES) {
 			drbd_process_discard_or_zeroes_req(req, EE_ZEROOUT |
 			    ((bio->bi_opf & REQ_NOUNMAP) ? 0 : EE_TRIM));
-		else if (bio_op(bio) == REQ_OP_DISCARD)
+		} else if (bio_op(bio) == REQ_OP_DISCARD) {
 			drbd_process_discard_or_zeroes_req(req, EE_TRIM);
-		else
+		} else {
 			submit_bio_noacct(bio);
+		}
 		put_ldev(device);
-	} else
-		bio_io_error(bio);
-}
+	} else {
+		bio->bi_status = BLK_STS_IOERR;
+		bio_endio(bio);
+	}
+ }
 
 static void drbd_queue_write(struct drbd_device *device, struct drbd_request *req)
 {
-	spin_lock_irq(&device->resource->req_lock);
-	list_add_tail(&req->tl_requests, &device->submit.writes);
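+	/* This write still needs an activity log transaction before it can
+	 * be submitted locally; account for it. */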
+	if (req->private_bio)
+		atomic_inc(&device->ap_actlog_cnt);
+	spin_lock_irq(&device->pending_completion_lock);
 	list_add_tail(&req->req_pending_master_completion,
 			&device->pending_master_completion[1 /* WRITE */]);
-	spin_unlock_irq(&device->resource->req_lock);
+	spin_unlock_irq(&device->pending_completion_lock);
+	spin_lock(&device->submit.lock);
+	list_add_tail(&req->list, &device->submit.writes);
+	spin_unlock(&device->submit.lock);
 	queue_work(device->submit.wq, &device->submit.worker);
 	/* do_submit() may sleep internally on al_wait, too */
 	wake_up(&device->al_wait);
 }
 
-/* returns the new drbd_request pointer, if the caller is expected to
- * drbd_send_and_submit() it (to save latency), or NULL if we queued the
- * request on the submitter thread.
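+/* Account the request as being in the activity log: the extents it covers
+ * no longer count towards the "waiting for activity log" totals. */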
+static void drbd_req_in_actlog(struct drbd_request *req)
+{
+	req->local_rq_state |= RQ_IN_ACT_LOG;
+	ktime_get_accounting(req->in_actlog_kt);
+	atomic_sub(interval_to_al_extents(&req->i), &req->device->wait_for_actlog_ecnt);
+}
+
+/* returns the new drbd_request pointer, if the caller is expected to submit it
+ * (to save latency), or NULL if we queued the request on the submitter thread.
  * Returns ERR_PTR(-ENOMEM) if we cannot allocate a drbd_request.
  */
+#ifndef CONFIG_DRBD_TIMING_STATS
+#define drbd_request_prepare(d, b, k, j) drbd_request_prepare(d, b, j)
+#endif
 static struct drbd_request *
-drbd_request_prepare(struct drbd_device *device, struct bio *bio)
+drbd_request_prepare(struct drbd_device *device, struct bio *bio,
+		ktime_t start_kt,
+		unsigned long start_jif)
 {
 	const int rw = bio_data_dir(bio);
 	struct drbd_request *req;
@@ -1208,44 +1898,66 @@ drbd_request_prepare(struct drbd_device *device, struct bio *bio)
 	/* allocate outside of all locks; */
 	req = drbd_req_new(device, bio);
 	if (!req) {
-		dec_ap_bio(device);
-		/* only pass the error to the upper layers.
-		 * if user cannot handle io errors, that's not our business. */
 		drbd_err(device, "could not kmalloc() req\n");
-		bio->bi_status = BLK_STS_RESOURCE;
-		bio_endio(bio);
-		return ERR_PTR(-ENOMEM);
+		goto no_mem;
 	}
 
 	/* Update disk stats */
 	req->start_jif = bio_start_io_acct(req->master_bio);
 
 	if (get_ldev(device)) {
-		req->private_bio = bio_alloc_clone(device->ldev->backing_bdev,
-						   bio, GFP_NOIO,
-						   &drbd_io_bio_set);
+		req->private_bio = bio_alloc_clone(device->ldev->backing_bdev, bio, GFP_NOIO, &drbd_io_bio_set);
+		if (!req->private_bio) {
+			drbd_err(device, "could not bio_alloc_clone() req->private_bio\n");
+			kfree(req);
+			put_ldev(device);
+			goto no_mem;
+		}
 		req->private_bio->bi_private = req;
 		req->private_bio->bi_end_io = drbd_request_endio;
 	}
 
+	ktime_get_accounting_assign(req->start_kt, start_kt);
+
+	if (rw != WRITE || req->i.size == 0)
+		return req;
+
+	/* Let the activity log know we are about to use it...
+	 * FIXME
+	 * Needs to slow down to not congest on the activity log, in case we
+	 * have multiple primaries and the peer sends huge scattered epochs.
+	 * See also how peer_requests are handled
+	 * in receive_Data() { ... drbd_wait_for_activity_log_extents(); ... }
+	 */
+	if (req->private_bio)
+		atomic_add(interval_to_al_extents(&req->i), &device->wait_for_actlog_ecnt);
+
 	/* process discards always from our submitter thread */
-	if (bio_op(bio) == REQ_OP_WRITE_ZEROES ||
-	    bio_op(bio) == REQ_OP_DISCARD)
+	if ((bio_op(bio) == REQ_OP_WRITE_ZEROES) ||
+	    (bio_op(bio) == REQ_OP_DISCARD))
 		goto queue_for_submitter_thread;
 
-	if (rw == WRITE && req->private_bio && req->i.size
-	&& !test_bit(AL_SUSPENDED, &device->flags)) {
+	if (req->private_bio && !test_bit(AL_SUSPENDED, &device->flags)) {
+		/* ldev_safe: have private_bio */
 		if (!drbd_al_begin_io_fastpath(device, &req->i))
 			goto queue_for_submitter_thread;
-		req->rq_state |= RQ_IN_ACT_LOG;
-		req->in_actlog_jif = jiffies;
+		drbd_req_in_actlog(req);
 	}
 	return req;
 
  queue_for_submitter_thread:
-	atomic_inc(&device->ap_actlog_cnt);
+	ktime_aggregate_delta(device, req->start_kt, before_queue_kt);
 	drbd_queue_write(device, req);
 	return NULL;
+
+ no_mem:
+	dec_ap_bio(device, rw);
+	/* only pass the error to the upper layers.
+	 * if user cannot handle io errors, that's not our business.
+	 */
+	bio->bi_status = BLK_STS_RESOURCE;
+	bio_endio(bio);
+	return ERR_PTR(-ENOMEM);
 }
 
 /* Require at least one path to current data.
@@ -1260,8 +1972,17 @@ drbd_request_prepare(struct drbd_device *device, struct bio *bio)
  */
 static bool may_do_writes(struct drbd_device *device)
 {
-	const union drbd_dev_state s = device->state;
-	return s.disk == D_UP_TO_DATE || s.pdsk == D_UP_TO_DATE;
+	struct drbd_peer_device *peer_device;
+
+	if (device->disk_state[NOW] == D_UP_TO_DATE)
+		return true;
+
+	for_each_peer_device(peer_device, device) {
+		if (peer_device->disk_state[NOW] == D_UP_TO_DATE)
+			return true;
+	}
+
+	return false;
 }
 
 struct drbd_plug_cb {
@@ -1273,21 +1994,25 @@ struct drbd_plug_cb {
 static void drbd_unplug(struct blk_plug_cb *cb, bool from_schedule)
 {
 	struct drbd_plug_cb *plug = container_of(cb, struct drbd_plug_cb, cb);
-	struct drbd_resource *resource = plug->cb.data;
 	struct drbd_request *req = plug->most_recent_req;
+	struct drbd_resource *resource;
 
 	kfree(cb);
 	if (!req)
 		return;
 
-	spin_lock_irq(&resource->req_lock);
+	resource = req->device->resource;
+
+	read_lock_irq(&resource->state_rwlock);
 	/* In case the sender did not process it yet, raise the flag to
 	 * have it followed with P_UNPLUG_REMOTE just after. */
-	req->rq_state |= RQ_UNPLUG;
+	spin_lock(&req->rq_lock);
+	req->local_rq_state |= RQ_UNPLUG;
+	spin_unlock(&req->rq_lock);
 	/* but also queue a generic unplug */
 	drbd_queue_unplug(req->device);
 	kref_put(&req->kref, drbd_req_destroy);
-	spin_unlock_irq(&resource->req_lock);
+	read_unlock_irq(&resource->state_rwlock);
 }
 
 static struct drbd_plug_cb* drbd_check_plugged(struct drbd_resource *resource)
@@ -1307,40 +2032,34 @@ static struct drbd_plug_cb* drbd_check_plugged(struct drbd_resource *resource)
 static void drbd_update_plug(struct drbd_plug_cb *plug, struct drbd_request *req)
 {
 	struct drbd_request *tmp = plug->most_recent_req;
-	/* Will be sent to some peer.
-	 * Remember to tag it with UNPLUG_REMOTE on unplug */
+	/* Will be sent to some peer. */
 	kref_get(&req->kref);
 	plug->most_recent_req = req;
 	if (tmp)
 		kref_put(&tmp->kref, drbd_req_destroy);
 }
 
-static void drbd_send_and_submit(struct drbd_device *device, struct drbd_request *req)
+static void drbd_send_and_submit(struct drbd_request *req)
 {
+	struct drbd_device *device = req->device;
 	struct drbd_resource *resource = device->resource;
-	struct drbd_peer_device *peer_device = first_peer_device(device);
+	struct drbd_peer_device *peer_device = NULL; /* for read */
 	const int rw = bio_data_dir(req->master_bio);
 	struct bio_and_error m = { NULL, };
 	bool no_remote = false;
 	bool submit_private_bio = false;
 
-	spin_lock_irq(&resource->req_lock);
-	if (rw == WRITE) {
-		/* This may temporarily give up the req_lock,
-		 * but will re-aquire it before it returns here.
-		 * Needs to be before the check on drbd_suspended() */
-		complete_conflicting_writes(req);
-		/* no more giving up req_lock from now on! */
+	read_lock_irq(&resource->state_rwlock);
 
+	if (rw == WRITE) {
 		/* check for congestion, and potentially stop sending
 		 * full data updates, but start sending "dirty bits" only. */
 		maybe_pull_ahead(device);
 	}
 
-
 	if (drbd_suspended(device)) {
 		/* push back and retry: */
-		req->rq_state |= RQ_POSTPONED;
+		req->local_rq_state |= RQ_POSTPONED;
 		if (req->private_bio) {
 			bio_put(req->private_bio);
 			req->private_bio = NULL;
@@ -1349,44 +2068,87 @@ static void drbd_send_and_submit(struct drbd_device *device, struct drbd_request
 		goto out;
 	}
 
-	/* We fail READ early, if we can not serve it.
-	 * We must do this before req is registered on any lists.
-	 * Otherwise, drbd_req_complete() will queue failed READ for retry. */
-	if (rw != WRITE) {
-		if (!do_remote_read(req) && !req->private_bio)
+	if (rw == WRITE) {
+		if (!may_do_writes(device)) {
+			if (req->private_bio) {
+				bio_put(req->private_bio);
+				req->private_bio = NULL;
+				put_ldev(device);
+			}
+			goto nodata;
+		}
+	} else {
+		/* We fail READ early, if we can not serve it.
+		 * We must do this before req is registered on any lists.
+		 * Otherwise, drbd_req_complete() will queue failed READ for retry. */
+		peer_device = find_peer_device_for_read(req);
+		if (!peer_device && !req->private_bio)
 			goto nodata;
 	}
 
-	/* which transfer log epoch does this belong to? */
-	req->epoch = atomic_read(&first_peer_device(device)->connection->current_tle_nr);
+	spin_lock(&resource->tl_update_lock); /* local irq already disabled */
+	if (rw == WRITE) {
+		/* Update dagtag_sector before determining current_tle_nr so
+		 * that senders can detect if there are requests currently
+		 * being submitted. Updates are protected by tl_update_lock,
+		 * but reads are not, so WRITE_ONCE(). */
+		WRITE_ONCE(resource->dagtag_sector, resource->dagtag_sector + (req->i.size >> 9));
+		/* Ensure that the written value is visible to the senders. */
+		smp_wmb();
+	}
+	req->dagtag_sector = resource->dagtag_sector;
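+	/* The dagtag gives a resource-wide total order of writes, which the
+	 * senders use to preserve write ordering across volumes. */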
 
-	/* no point in adding empty flushes to the transfer log,
-	 * they are mapped to drbd barriers already. */
-	if (likely(req->i.size!=0)) {
-		if (rw == WRITE)
-			first_peer_device(device)->connection->current_tle_writes++;
+	spin_lock(&resource->current_tle_lock);
+	/* which transfer log epoch does this belong to? */
+	req->epoch = atomic_read(&resource->current_tle_nr);
+	if (rw == WRITE && likely(req->i.size != 0))
+		resource->current_tle_writes++;
+	spin_unlock(&resource->current_tle_lock);
 
-		list_add_tail(&req->tl_requests, &first_peer_device(device)->connection->transfer_log);
-	}
+	/* A size==0 bio can only be an empty flush, which is mapped to a DRBD
+	 * P_BARRIER packet. */
+	if (unlikely(req->i.size == 0)) {
+		/* The only size==0 bios we expect are empty flushes. */
+		D_ASSERT(device, req->master_bio->bi_opf & REQ_PREFLUSH);
 
-	if (rw == WRITE) {
-		if (req->private_bio && !may_do_writes(device)) {
-			bio_put(req->private_bio);
-			req->private_bio = NULL;
-			put_ldev(device);
-			goto nodata;
-		}
-		if (!drbd_process_write_request(req))
+		if (!drbd_process_empty_flush(req))
 			no_remote = true;
 	} else {
-		/* We either have a private_bio, or we can read from remote.
-		 * Otherwise we had done the goto nodata above. */
-		if (req->private_bio == NULL) {
-			_req_mod(req, TO_BE_SENT, peer_device);
-			_req_mod(req, QUEUE_FOR_NET_READ, peer_device);
-		} else
-			no_remote = true;
+		if (rw == WRITE) {
+			struct drbd_request *prev_write = resource->tl_previous_write;
+			resource->tl_previous_write = req;
+
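+			/* Writes are chained in submission order; the extra
+			 * references ensure req is not marked done and does
+			 * not send its out-of-sync information before its
+			 * predecessor. */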
+			if (prev_write) {
+				if (!test_bit(INTERVAL_DONE, &prev_write->i.flags))
+					refcount_inc(&req->done_ref);
+				refcount_inc(&req->oos_send_ref);
+				prev_write->next_write = req;
+			}
+
+			if (!drbd_process_write_request(req))
+				no_remote = true;
+		} else {
+			if (peer_device)
+				_req_mod(req, NEW_NET_READ, peer_device);
+			else
+				no_remote = true;
+		}
+
+		/* req may now be accessed by other threads - do not modify
+		 * "immutable" fields after this point */
+		list_add_tail_rcu(&req->tl_requests, &resource->transfer_log);
+
+		/* Do this after adding to the transfer log so that the
+		 * caching pointer req_not_net_done is set if
+		 * necessary. */
+		drbd_request_ready_for_net(req);
 	}
+	spin_unlock(&resource->tl_update_lock);
+
+	if (rw == WRITE)
+		wake_all_senders(resource);
+	else if (peer_device)
+		wake_up(&peer_device->connection->sender_work.q_wait);
 
 	if (no_remote == false) {
 		struct drbd_plug_cb *plug = drbd_check_plugged(resource);
@@ -1396,29 +2158,38 @@ static void drbd_send_and_submit(struct drbd_device *device, struct drbd_request
 
 	/* If it took the fast path in drbd_request_prepare, add it here.
 	 * The slow path has added it already. */
+	spin_lock(&device->pending_completion_lock); /* local irq already disabled */
 	if (list_empty(&req->req_pending_master_completion))
 		list_add_tail(&req->req_pending_master_completion,
 			&device->pending_master_completion[rw == WRITE]);
 	if (req->private_bio) {
-		/* needs to be marked within the same spinlock */
+		/* pre_submit_jif is used in request_timer_fn() */
 		req->pre_submit_jif = jiffies;
+		ktime_get_accounting(req->pre_submit_kt);
 		list_add_tail(&req->req_pending_local,
 			&device->pending_completion[rw == WRITE]);
 		_req_mod(req, TO_BE_SUBMITTED, NULL);
-		/* but we need to give up the spinlock to submit */
+		/* needs to be marked within the same spinlock
+		 * but we need to give up the spinlock to submit */
 		submit_private_bio = true;
-	} else if (no_remote) {
+		spin_unlock(&device->pending_completion_lock);
+	} else {
+		spin_unlock(&device->pending_completion_lock);
+		if (no_remote) {
 nodata:
-		if (drbd_ratelimit())
-			drbd_err(device, "IO ERROR: neither local nor remote data, sector %llu+%u\n",
-					(unsigned long long)req->i.sector, req->i.size >> 9);
-		/* A write may have been queued for send_oos, however.
-		 * So we can not simply free it, we must go through drbd_req_put_completion_ref() */
+			drbd_err_ratelimit(req->device,
+				"IO ERROR: neither local nor remote data, sector %llu+%u\n",
+				 (unsigned long long)req->i.sector, req->i.size >> 9);
+			/* A write may have been queued for send_oos, however.
+			 * So we can not simply free it, we must go through
+			 * drbd_req_put_completion_ref()
+			 */
+		}
 	}
 
 out:
 	drbd_req_put_completion_ref(req, &m, 1);
-	spin_unlock_irq(&resource->req_lock);
+	read_unlock_irq(&resource->state_rwlock);
 
 	/* Even though above is a kref_put(), this is safe.
 	 * As long as we still need to submit our private bio,
@@ -1428,114 +2199,396 @@ static void drbd_send_and_submit(struct drbd_device *device, struct drbd_request
 	 * That's why we cannot check on req->private_bio. */
 	if (submit_private_bio)
 		drbd_submit_req_private_bio(req);
+
 	if (m.bio)
 		complete_master_bio(device, &m);
 }
 
-void __drbd_make_request(struct drbd_device *device, struct bio *bio)
+/* Insert the request into the tree of writes. Pass it through to be submitted
+ * if possible. Otherwise it will be submitted asynchronously via
+ * drbd_release_conflicts once the conflict has been resolved. */
+static void drbd_conflict_submit_write(struct drbd_request *req)
+{
+	struct drbd_device *device = req->device;
+	struct drbd_interval *conflict;
+
+	spin_lock_irq(&device->interval_lock);
+	clear_bit(INTERVAL_SUBMIT_CONFLICT_QUEUED, &req->i.flags);
+	conflict = drbd_find_conflict(device, &req->i, 0);
+	if (drbd_interval_empty(&req->i))
+		drbd_insert_interval(&device->requests, &req->i);
+	if (!conflict) {
+		set_bit(INTERVAL_SUBMITTED, &req->i.flags);
+	} else if (drbd_interval_is_local(conflict)) {
+		struct drbd_request *conflicting_req =
+			container_of(conflict, struct drbd_request, i);
+
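+		/* If the conflicting request is itself being pushed back for
+		 * retry, push this one back as well instead of queueing it
+		 * behind a request that is going to be retried anyway. */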
+		if (conflicting_req->local_rq_state & RQ_POSTPONED) {
+			req->local_rq_state |= RQ_POSTPONED;
+
+			/*
+			 * Remove interval from tree to prevent req from being
+			 * queued when conflicts are released.
+			 */
+			drbd_remove_interval(&device->requests, &req->i);
+		}
+	}
+	spin_unlock_irq(&device->interval_lock);
+
+	/*
+	 * If there is a conflict, the request will be submitted once the
+	 * conflict has cleared.
+	 */
+	if (!conflict) {
+		drbd_send_and_submit(req);
+	} else if (req->local_rq_state & RQ_POSTPONED) {
+		if (req->private_bio) {
+			bio_put(req->private_bio);
+			req->private_bio = NULL;
+			put_ldev(device);
+		}
+		drbd_req_put_completion_ref(req, NULL, 1);
+	}
+}
+
+static bool inc_ap_bio_cond(struct drbd_device *device, int rw)
+{
+	int ap_bio_cnt;
+	bool rv;
+
+	read_lock_irq(&device->resource->state_rwlock);
+	rv = may_inc_ap_bio(device);
+	read_unlock_irq(&device->resource->state_rwlock);
+	if (!rv)
+		return false;
+
+	/* check need for new current uuid _AFTER_ ensuring IO is not suspended via may_inc_ap_bio */
+	if (test_bit(NEW_CUR_UUID, &device->flags)) {
+		if (!test_and_set_bit(WRITING_NEW_CUR_UUID, &device->flags))
+			drbd_device_post_work(device, MAKE_NEW_CUR_UUID);
+
+		return false;
+	}
+
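+	/* Lock-free bounded increment: retry the cmpxchg if ap_bio_cnt
+	 * changed under us; give up if nr_requests would be exceeded. */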
+	do {
+		unsigned int nr_requests = device->resource->res_opts.nr_requests;
+
+		ap_bio_cnt = atomic_read(&device->ap_bio_cnt[rw]);
+		if (ap_bio_cnt >= nr_requests)
+			return false;
+	} while (atomic_cmpxchg(&device->ap_bio_cnt[rw], ap_bio_cnt, ap_bio_cnt + 1) != ap_bio_cnt);
+
+	return true;
+}
+
+static void inc_ap_bio(struct drbd_device *device, int rw)
+{
+	/* we wait here
+	 *    as long as the device is suspended,
+	 *    as long as the bitmap is still in flight during the
+	 *    connection handshake, and
+	 *    as long as we would exceed the max-buffers limit.
+	 *
+	 * to avoid races with the reconnect code,
+	 * we need to atomic_inc within the spinlock. */
+
+	wait_event(device->misc_wait, inc_ap_bio_cond(device, rw));
+}
+
+void __drbd_make_request(struct drbd_device *device, struct bio *bio,
+		ktime_t start_kt,
+		unsigned long start_jif)
 {
-	struct drbd_request *req = drbd_request_prepare(device, bio);
+	const int rw = bio_data_dir(bio);
+	struct drbd_request *req;
+
+	inc_ap_bio(device, rw);
+	req = drbd_request_prepare(device, bio, start_kt, start_jif);
 	if (IS_ERR_OR_NULL(req))
 		return;
-	drbd_send_and_submit(device, req);
+
+	if (rw == WRITE)
+		drbd_conflict_submit_write(req);
+	else
+		drbd_send_and_submit(req);
+}
+
+/* Work function to submit requests once they are released after conflicts. The
+ * queued requests are processed and, if no other conflict is found, submitted. */
+void drbd_do_submit_conflict(struct work_struct *ws)
+{
+	struct drbd_device *device = container_of(ws, struct drbd_device, submit_conflict.worker);
+	struct drbd_peer_request *peer_req, *peer_req_tmp;
+	struct drbd_request *req, *tmp;
+	LIST_HEAD(resync_writes);
+	LIST_HEAD(resync_reads);
+	LIST_HEAD(writes);
+	LIST_HEAD(peer_writes);
+
+	spin_lock_irq(&device->submit_conflict.lock);
+	list_splice_init(&device->submit_conflict.resync_writes, &resync_writes);
+	list_splice_init(&device->submit_conflict.resync_reads, &resync_reads);
+	list_splice_init(&device->submit_conflict.writes, &writes);
+	list_splice_init(&device->submit_conflict.peer_writes, &peer_writes);
+	spin_unlock_irq(&device->submit_conflict.lock);
+
+	/* Delete the list entries while iterating them so that they can be
+	 * re-used for adding them to the conflict lists again once the
+	 * INTERVAL_SUBMIT_CONFLICT_QUEUED flag has been cleared. */
+
+	list_for_each_entry_safe(peer_req, peer_req_tmp, &resync_writes, w.list) {
+		list_del_init(&peer_req->w.list);
+		if (!test_bit(INTERVAL_READY_TO_SEND, &peer_req->i.flags))
+			drbd_conflict_send_resync_request(peer_req);
+		else
+			drbd_conflict_submit_resync_request(peer_req);
+	}
+
+	list_for_each_entry_safe(peer_req, peer_req_tmp, &resync_reads, w.list) {
+		list_del_init(&peer_req->w.list);
+		drbd_conflict_submit_peer_read(peer_req);
+	}
+
+	list_for_each_entry_safe(req, tmp, &writes, list) {
+		list_del_init(&req->list);
+		drbd_conflict_submit_write(req);
+	}
+
+	list_for_each_entry_safe(peer_req, peer_req_tmp, &peer_writes, w.list) {
+		list_del_init(&peer_req->w.list);
+		/* ldev_safe: queued peer requests hold their own ldev references */
+		drbd_conflict_submit_peer_write(peer_req);
+	}
+}
+
+/* helpers for do_submit */
+
+struct incoming_pending {
+	/* from drbd_submit_bio() or receive_Data() */
+	struct list_head incoming;
+	/* for non-blocking fill-up # of updates in the transaction */
+	struct list_head more_incoming;
+	/* to be submitted after next AL-transaction commit */
+	struct list_head pending;
+	/* need cleanup */
+	struct list_head cleanup;
+};
+
+struct waiting_for_act_log {
+	struct incoming_pending requests;
+	struct incoming_pending peer_requests;
+};
+
+static void ipb_init(struct incoming_pending *ipb)
+{
+	INIT_LIST_HEAD(&ipb->incoming);
+	INIT_LIST_HEAD(&ipb->more_incoming);
+	INIT_LIST_HEAD(&ipb->pending);
+	INIT_LIST_HEAD(&ipb->cleanup);
+}
+
+static void wfa_init(struct waiting_for_act_log *wfa)
+{
+	ipb_init(&wfa->requests);
+	ipb_init(&wfa->peer_requests);
+}
+
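+/* These helpers apply list operations to both sub-lists, plain requests
+ * and peer requests, at once. */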
+#define wfa_lists_empty(_wfa, name)	\
+	(list_empty(&(_wfa)->requests.name) && list_empty(&(_wfa)->peer_requests.name))
+#define wfa_splice_tail_init(_wfa, from, to) do { \
+	list_splice_tail_init(&(_wfa)->requests.from, &(_wfa)->requests.to); \
+	list_splice_tail_init(&(_wfa)->peer_requests.from, &(_wfa)->peer_requests.to); \
+	} while (0)
+
+static void __drbd_submit_peer_request(struct drbd_peer_request *peer_req)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_device *device = peer_device->device;
+	int err;
+
+	peer_req->flags |= EE_IN_ACTLOG;
+	atomic_sub(interval_to_al_extents(&peer_req->i), &device->wait_for_actlog_ecnt);
+	atomic_dec(&device->wait_for_actlog);
+	list_del_init(&peer_req->w.list);
+
+	err = drbd_submit_peer_request(peer_req);
+
+	if (err)
+		drbd_cleanup_after_failed_submit_peer_write(peer_req);
 }
 
-static void submit_fast_path(struct drbd_device *device, struct list_head *incoming)
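+/* Submit everything that can get its activity log extents without blocking;
+ * whatever remains stays on the incoming lists for the slow path. */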
+static void submit_fast_path(struct drbd_device *device, struct waiting_for_act_log *wfa)
 {
 	struct blk_plug plug;
 	struct drbd_request *req, *tmp;
+	struct drbd_peer_request *pr, *pr_tmp;
 
 	blk_start_plug(&plug);
-	list_for_each_entry_safe(req, tmp, incoming, tl_requests) {
+	list_for_each_entry_safe(pr, pr_tmp, &wfa->peer_requests.incoming, w.list) {
+		if (!drbd_al_begin_io_fastpath(pr->peer_device->device, &pr->i))
+			continue;
+
+		__drbd_submit_peer_request(pr);
+	}
+	list_for_each_entry_safe(req, tmp, &wfa->requests.incoming, list) {
 		const int rw = bio_data_dir(req->master_bio);
 
-		if (rw == WRITE /* rw != WRITE should not even end up here! */
-		&& req->private_bio && req->i.size
-		&& !test_bit(AL_SUSPENDED, &device->flags)) {
+		if (rw == WRITE && req->private_bio && req->i.size
+				&& !test_bit(AL_SUSPENDED, &device->flags)) {
 			if (!drbd_al_begin_io_fastpath(device, &req->i))
 				continue;
 
-			req->rq_state |= RQ_IN_ACT_LOG;
-			req->in_actlog_jif = jiffies;
+			drbd_req_in_actlog(req);
 			atomic_dec(&device->ap_actlog_cnt);
 		}
 
-		list_del_init(&req->tl_requests);
-		drbd_send_and_submit(device, req);
+		list_del_init(&req->list);
+		drbd_conflict_submit_write(req);
 	}
 	blk_finish_plug(&plug);
 }
 
+static struct drbd_request *wfa_next_request(struct waiting_for_act_log *wfa)
+{
+	struct list_head *lh = !list_empty(&wfa->requests.more_incoming) ?
+			&wfa->requests.more_incoming : &wfa->requests.incoming;
+	return list_first_entry_or_null(lh, struct drbd_request, list);
+}
+
+static struct drbd_peer_request *wfa_next_peer_request(struct waiting_for_act_log *wfa)
+{
+	struct list_head *lh = !list_empty(&wfa->peer_requests.more_incoming) ?
+			&wfa->peer_requests.more_incoming : &wfa->peer_requests.incoming;
+	return list_first_entry_or_null(lh, struct drbd_peer_request, w.list);
+}
+
 static bool prepare_al_transaction_nonblock(struct drbd_device *device,
-					    struct list_head *incoming,
-					    struct list_head *pending,
-					    struct list_head *later)
+					    struct waiting_for_act_log *wfa)
 {
+	struct drbd_peer_request *peer_req;
 	struct drbd_request *req;
-	int wake = 0;
+	bool made_progress = false;
 	int err;
 
 	spin_lock_irq(&device->al_lock);
-	while ((req = list_first_entry_or_null(incoming, struct drbd_request, tl_requests))) {
+
+	/* Don't even try, if someone has it locked right now. */
+	if (test_bit(__LC_LOCKED, &device->act_log->flags))
+		goto out;
+
+	while ((peer_req = wfa_next_peer_request(wfa))) {
+		if (peer_req->peer_device->connection->cstate[NOW] < C_CONNECTED) {
+			list_move_tail(&peer_req->w.list, &wfa->peer_requests.cleanup);
+			made_progress = true;
+			continue;
+		}
+		err = drbd_al_begin_io_nonblock(device, &peer_req->i);
+		if (err) {
+			if (err != -ENOBUFS && drbd_ratelimit())
+				drbd_err(device, "Unexpected error %d from drbd_al_begin_io_nonblock\n", err);
+			break;
+		}
+		list_move_tail(&peer_req->w.list, &wfa->peer_requests.pending);
+		made_progress = true;
+	}
+	while ((req = wfa_next_request(wfa))) {
+		ktime_aggregate_delta(device, req->start_kt, before_al_begin_io_kt);
 		err = drbd_al_begin_io_nonblock(device, &req->i);
-		if (err == -ENOBUFS)
+		if (err) {
+			if (err != -ENOBUFS && drbd_ratelimit())
+				drbd_err(device, "Unexpected error %d from drbd_al_begin_io_nonblock\n", err);
 			break;
-		if (err == -EBUSY)
-			wake = 1;
-		if (err)
-			list_move_tail(&req->tl_requests, later);
-		else
-			list_move_tail(&req->tl_requests, pending);
+		}
+		list_move_tail(&req->list, &wfa->requests.pending);
+		made_progress = true;
 	}
+ out:
 	spin_unlock_irq(&device->al_lock);
-	if (wake)
-		wake_up(&device->al_wait);
-	return !list_empty(pending);
+	return made_progress;
 }
 
-static void send_and_submit_pending(struct drbd_device *device, struct list_head *pending)
+static void send_and_submit_pending(struct drbd_device *device, struct waiting_for_act_log *wfa)
 {
 	struct blk_plug plug;
-	struct drbd_request *req;
+	struct drbd_request *req, *tmp;
+	struct drbd_peer_request *pr, *pr_tmp;
 
 	blk_start_plug(&plug);
-	while ((req = list_first_entry_or_null(pending, struct drbd_request, tl_requests))) {
-		req->rq_state |= RQ_IN_ACT_LOG;
-		req->in_actlog_jif = jiffies;
+	list_for_each_entry_safe(pr, pr_tmp, &wfa->peer_requests.pending, w.list) {
+		__drbd_submit_peer_request(pr);
+	}
+	list_for_each_entry_safe(req, tmp, &wfa->requests.pending, list) {
+		drbd_req_in_actlog(req);
 		atomic_dec(&device->ap_actlog_cnt);
-		list_del_init(&req->tl_requests);
-		drbd_send_and_submit(device, req);
+		list_del_init(&req->list);
+		drbd_conflict_submit_write(req);
 	}
 	blk_finish_plug(&plug);
 }
 
+/* more: for non-blocking fill-up of the number of updates in the transaction */
+static bool grab_new_incoming_requests(struct drbd_device *device, struct waiting_for_act_log *wfa, bool more)
+{
+	/* grab new incoming requests */
+	struct list_head *reqs = more ? &wfa->requests.more_incoming : &wfa->requests.incoming;
+	struct list_head *peer_reqs = more ? &wfa->peer_requests.more_incoming : &wfa->peer_requests.incoming;
+	bool found_new = false;
+
+	spin_lock(&device->submit.lock);
+	found_new = !list_empty(&device->submit.writes);
+	list_splice_tail_init(&device->submit.writes, reqs);
+	found_new |= !list_empty(&device->submit.peer_writes);
+	list_splice_tail_init(&device->submit.peer_writes, peer_reqs);
+	spin_unlock(&device->submit.lock);
+
+	return found_new;
+}
+
 void do_submit(struct work_struct *ws)
 {
 	struct drbd_device *device = container_of(ws, struct drbd_device, submit.worker);
-	LIST_HEAD(incoming);	/* from drbd_make_request() */
-	LIST_HEAD(pending);	/* to be submitted after next AL-transaction commit */
-	LIST_HEAD(busy);	/* blocked by resync requests */
+	struct waiting_for_act_log wfa;
+	bool made_progress;
 
-	/* grab new incoming requests */
-	spin_lock_irq(&device->resource->req_lock);
-	list_splice_tail_init(&device->submit.writes, &incoming);
-	spin_unlock_irq(&device->resource->req_lock);
+	wfa_init(&wfa);
+
+	grab_new_incoming_requests(device, &wfa, false);
 
 	for (;;) {
 		DEFINE_WAIT(wait);
 
-		/* move used-to-be-busy back to front of incoming */
-		list_splice_init(&busy, &incoming);
-		submit_fast_path(device, &incoming);
-		if (list_empty(&incoming))
+		/* ldev_safe: queued requests acquired ldev in drbd_request_prepare() */
+		submit_fast_path(device, &wfa);
+		if (wfa_lists_empty(&wfa, incoming))
 			break;
 
 		for (;;) {
+			/*
+			 * We put ourselves on device->al_wait, then check
+			 * whether we actually need to sleep and wait for
+			 * someone else to make progress.
+			 *
+			 * We need to sleep if we cannot activate enough
+			 * activity log extents for even one single request.
+			 * That would mean that all (peer-)requests in our
+			 * incoming lists target "cold" activity log extents,
+			 * all activity log extent slots have on-going
+			 * in-flight IO (are "hot"), and no idle or free slot
+			 * is available.
+			 *
+			 * prepare_to_wait() can internally cause a wake_up()
+			 * as well, though, so this may appear to busy-loop
+			 * a couple times, but should settle down quickly.
+			 *
+			 * When application requests make sufficient progress,
+			 * some refcount on some extent will eventually drop to
+			 * zero; we will then be woken up and can try to move
+			 * that now idle extent to "cold", recycling its slot
+			 * for one of the extents we'd like to become hot.
+			 */
 			prepare_to_wait(&device->al_wait, &wait, TASK_UNINTERRUPTIBLE);
 
-			list_splice_init(&busy, &incoming);
-			prepare_al_transaction_nonblock(device, &incoming, &pending, &busy);
-			if (!list_empty(&pending))
+			made_progress = prepare_al_transaction_nonblock(device, &wfa);
+			if (made_progress)
 				break;
 
 			schedule();
@@ -1551,15 +2604,12 @@ void do_submit(struct work_struct *ws)
 			 * effectively blocking all new requests until we made
 			 * at least _some_ progress with what we currently have.
 			 */
-			if (!list_empty(&incoming))
+			if (!wfa_lists_empty(&wfa, incoming))
 				continue;
 
-			/* Nothing moved to pending, but nothing left
-			 * on incoming: all moved to busy!
-			 * Grab new and iterate. */
-			spin_lock_irq(&device->resource->req_lock);
-			list_splice_tail_init(&device->submit.writes, &incoming);
-			spin_unlock_irq(&device->resource->req_lock);
+			/* Nothing moved to pending, but nothing left on
+			 * incoming. Grab new and iterate. */
+			grab_new_incoming_requests(device, &wfa, false);
 		}
 		finish_wait(&device->al_wait, &wait);
 
@@ -1567,81 +2617,216 @@ void do_submit(struct work_struct *ws)
 		 * had been processed, skip ahead to commit, and iterate
 		 * without splicing in more incoming requests from upper layers.
 		 *
-		 * Else, if all incoming have been processed,
-		 * they have become either "pending" (to be submitted after
-		 * next transaction commit) or "busy" (blocked by resync).
+		 * Else, if all incoming have been processed, they have become
+		 * "pending" (to be submitted after next transaction commit).
 		 *
 		 * Maybe more was queued, while we prepared the transaction?
 		 * Try to stuff those into this transaction as well.
 		 * Be strictly non-blocking here,
 		 * we already have something to commit.
 		 *
-		 * Commit if we don't make any more progres.
+		 * Commit as soon as we don't make any more progress.
 		 */
 
-		while (list_empty(&incoming)) {
-			LIST_HEAD(more_pending);
-			LIST_HEAD(more_incoming);
-			bool made_progress;
-
+		while (wfa_lists_empty(&wfa, incoming)) {
 			/* It is ok to look outside the lock,
 			 * it's only an optimization anyways */
-			if (list_empty(&device->submit.writes))
+			if (list_empty(&device->submit.writes) &&
+			    list_empty(&device->submit.peer_writes))
 				break;
 
-			spin_lock_irq(&device->resource->req_lock);
-			list_splice_tail_init(&device->submit.writes, &more_incoming);
-			spin_unlock_irq(&device->resource->req_lock);
-
-			if (list_empty(&more_incoming))
+			if (!grab_new_incoming_requests(device, &wfa, true))
 				break;
 
-			made_progress = prepare_al_transaction_nonblock(device, &more_incoming, &more_pending, &busy);
+			made_progress = prepare_al_transaction_nonblock(device, &wfa);
 
-			list_splice_tail_init(&more_pending, &pending);
-			list_splice_tail_init(&more_incoming, &incoming);
+			wfa_splice_tail_init(&wfa, more_incoming, incoming);
 			if (!made_progress)
 				break;
 		}
+		if (!list_empty(&wfa.peer_requests.cleanup))
+			drbd_cleanup_peer_requests_wfa(device, &wfa.peer_requests.cleanup);
 
+		/* ldev_safe: queued requests acquired ldev in drbd_request_prepare() */
 		drbd_al_begin_io_commit(device);
-		send_and_submit_pending(device, &pending);
+
+		send_and_submit_pending(device, &wfa);
+	}
+}
+
+static bool drbd_reject_write_early(struct drbd_device *device, struct bio *bio)
+{
+	struct drbd_resource *resource = device->resource;
+
+	/* If you "mount -o ro", then later "mount -o remount,rw", you can end
+	 * up with a DRBD "Secondary" receiving WRITE requests from the VFS.
+	 * We cannot have that. */
+
+	if (bio_data_dir(bio) == READ)
+		return false;
+
+	if (resource->role[NOW] != R_PRIMARY) {
+		/* You can fsync() on an O_RDONLY fd. Only be noisy if
+		 * there is data.  Rate limit on the per-device GENERIC
+		 * ratelimit state before doing the kmalloc and adding
+		 * the specific openers hint.
+		 */
+		if (bio_has_data(bio) && drbd_device_ratelimit(device, GENERIC)) {
+			char *buf = kmalloc(128, GFP_NOIO | __GFP_NORETRY);
+
+			if (buf)
+				youngest_and_oldest_opener_to_str(device, buf, 128);
+			drbd_err(device,
+				"Rejected WRITE request, not in Primary role.%s\n", buf ?: "");
+			kfree(buf);
+		}
+		return true;
+	} else if (device->open_cnt == 0) {
+		drbd_err_ratelimit(device, "WRITE request, but open_cnt == 0!\n");
+	} else if (!device->writable && bio_has_data(bio)) {
+		/*
+		 * If the resource was (temporarily, auto) promoted,
+		 * a remount,rw may have succeeded without marking the device
+		 * open_cnt as "writable".  Once we let writes through, we need
+		 * _all_ openers to release(), before we attempt to auto-demote
+		 * again, so we mark it writable here.  Grab the open_release
+		 * mutex to protect against races with new openers.
+		 */
+		mutex_lock(&resource->open_release);
+		drbd_info(device, "open_cnt:%d, implicitly promoted to writable\n",
+			device->open_cnt);
+		device->writable = true;
+		mutex_unlock(&resource->open_release);
+	}
+	return false;
+}
+
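The remount scenario described above can be reproduced along these lines
(device and mount point names are only examples):

	# on a node that is currently Secondary:
	mount -o ro /dev/drbd0 /mnt
	mount -o remount,rw /mnt    # the VFS may now issue WRITEs to a Secondary
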
+/* Check if bio is "bad", likely to be rejected by lower layers or peers:
+ * Must not be too large, must not be unaligned.
+ */
+static bool bio_bad(struct drbd_device *device, struct bio *bio)
+{
+	unsigned int bss_mask = queue_logical_block_size(device->rq_queue) / SECTOR_SIZE - 1;
+	unsigned int bs_mask = queue_logical_block_size(device->rq_queue) - 1;
+	unsigned long long sector = bio->bi_iter.bi_sector;
+	unsigned int size = bio->bi_iter.bi_size;
+
+	if (size > DRBD_MAX_BATCH_BIO_SIZE || (size & bs_mask) || (sector & bss_mask)) {
+		char comm[TASK_COMM_LEN];
+
+		get_task_comm(comm, current);
+		drbd_warn(device, "bad bio: %llu +%u 0x%x submitted by %s[%u]\n",
+			sector, size, bio->bi_opf, comm, task_pid_nr(current));
+		return true;
 	}
+
+	return false;
 }
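
As a worked example for the masks above, assume a 4096-byte logical block size:

	bs_mask  = 4096 - 1       = 0xfff  /* bi_size must be a 4 KiB multiple */
	bss_mask = 4096 / 512 - 1 = 0x7    /* bi_sector must be 8-sector aligned */

With 512-byte logical blocks, bs_mask = 0x1ff and bss_mask = 0, so only the
size alignment and the DRBD_MAX_BATCH_BIO_SIZE cap can reject a bio.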
 
+/* drbd_submit_bio() - entry point for data into DRBD
+ *
+ * Request handling flow:
+ *
+ *                                    drbd_submit_bio
+ *                                           |
+ *                                           v          wait for AL
+ * do_retry -----------------------> __drbd_make_request --------> drbd_queue_write
+ *     ^                                     |                          |
+ *     |                                     |                         ...
+ *     |                                     |                          |
+ *     |                                     |                          v    AL extent active
+ *     |     drbd_do_submit_conflict --------+                     do_submit ----------------+
+ *     |                ^                    |                          |                    |
+ *    ...               |                    |                          v                    v
+ *     |               ...                   |               send_and_submit_pending   submit_fast_path
+ *     |                |                    v                          |                    |
+ *     |                +----------- drbd_conflict_submit_write <-------+--------------------+
+ *     |                  conflict           |
+ *     |                                     v
+ * drbd_restart_request <----------- drbd_send_and_submit
+ *                      RQ_POSTPONED         |
+ *                                           v
+ *                                   Request state machine
+ */
 void drbd_submit_bio(struct bio *bio)
 {
 	struct drbd_device *device = bio->bi_bdev->bd_disk->private_data;
+#ifdef CONFIG_DRBD_TIMING_STATS
+	ktime_t start_kt;
+#endif
+	unsigned long start_jif;
+
+	if (drbd_reject_write_early(device, bio)) {
+		bio->bi_status = BLK_STS_IOERR;
+		bio_endio(bio);
+		return;
+	}
 
 	bio = bio_split_to_limits(bio);
 	if (!bio)
 		return;
 
-	/*
-	 * what we "blindly" assume:
+	if (device->cached_err_io || bio_bad(device, bio)) {
+		bio->bi_status = BLK_STS_IOERR;
+		bio_endio(bio);
+		return;
+	}
+
+	/* This is both an optimization (a READ of size 0 has nothing to do)
+	 * and a workaround: (older) ZFS explodes on size zero reads, see
+	 * https://github.com/zfsonlinux/zfs/issues/8379
+	 * So really don't do anything for size zero bios,
+	 * but WARN_ONCE, so we can tell callers to stop doing this.
+	 */
-	D_ASSERT(device, IS_ALIGNED(bio->bi_iter.bi_size, 512));
+	if (bio_op(bio) == REQ_OP_READ && bio->bi_iter.bi_size == 0) {
+		WARN_ONCE(1, "size zero read from upper layers");
+		bio_endio(bio);
+		return;
+	}
+
+	ktime_get_accounting(start_kt);
+	start_jif = jiffies;
+
+	__drbd_make_request(device, bio, start_kt, start_jif);
+}
+
+static unsigned long time_min_in_future(unsigned long now,
+		unsigned long t1, unsigned long t2)
+{
+	bool t1_in_future = time_after(t1, now);
+	bool t2_in_future = time_after(t2, now);
+
+	/* Ensure that we never return a time in the past. */
+	t1 = t1_in_future ? t1 : now;
+	t2 = t2_in_future ? t2 : now;
 
-	inc_ap_bio(device);
-	__drbd_make_request(device, bio);
+	if (!t1_in_future)
+		return t2;
+
+	if (!t2_in_future)
+		return t1;
+
+	return time_after(t1, t2) ? t2 : t1;
 }
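
For example, with now = 1000: t1 = 900 (expired) and t2 = 1500 yields 1500;
t1 = 1200 and t2 = 1500 yields 1200; if both are in the past, both are
clamped to now and now itself is returned, so the timer is rearmed to fire
immediately rather than at a time that has already passed.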
 
 static bool net_timeout_reached(struct drbd_request *net_req,
-		struct drbd_connection *connection,
+		struct drbd_peer_device *peer_device,
 		unsigned long now, unsigned long ent,
 		unsigned int ko_count, unsigned int timeout)
 {
-	struct drbd_device *device = net_req->device;
+	struct drbd_connection *connection = peer_device->connection;
+	int peer_node_id = peer_device->node_id;
+	unsigned long pre_send_jif = net_req->pre_send_jif[peer_node_id];
 
-	if (!time_after(now, net_req->pre_send_jif + ent))
+	if (!time_after(now, pre_send_jif + ent))
 		return false;
 
 	if (time_in_range(now, connection->last_reconnect_jif, connection->last_reconnect_jif + ent))
 		return false;
 
-	if (net_req->rq_state & RQ_NET_PENDING) {
-		drbd_warn(device, "Remote failed to finish a request within %ums > ko-count (%u) * timeout (%u * 0.1s)\n",
-			jiffies_to_msecs(now - net_req->pre_send_jif), ko_count, timeout);
+	if (net_req->net_rq_state[peer_node_id] & RQ_NET_PENDING) {
+		drbd_warn(peer_device, "Remote failed to finish a request within %ums > ko-count (%u) * timeout (%u * 0.1s)\n",
+			jiffies_to_msecs(now - pre_send_jif), ko_count, timeout);
 		return true;
 	}
 
@@ -1650,9 +2835,12 @@ static bool net_timeout_reached(struct drbd_request *net_req,
 	 * Check if we sent the barrier already.  We should not blame the peer
 	 * for being unresponsive, if we did not even ask it yet. */
 	if (net_req->epoch == connection->send.current_epoch_nr) {
-		drbd_warn(device,
-			"We did not send a P_BARRIER for %ums > ko-count (%u) * timeout (%u * 0.1s); drbd kernel thread blocked?\n",
-			jiffies_to_msecs(now - net_req->pre_send_jif), ko_count, timeout);
+		/* It is OK for the barrier to be delayed for a long time for a
+		 * suspended request. */
+		if (!(net_req->local_rq_state & RQ_COMPLETION_SUSP))
+			drbd_warn(peer_device,
+					"We did not send a P_BARRIER for %ums > ko-count (%u) * timeout (%u * 0.1s); drbd kernel thread blocked?\n",
+					jiffies_to_msecs(now - pre_send_jif), ko_count, timeout);
 		return false;
 	}
 
@@ -1673,7 +2861,7 @@ static bool net_timeout_reached(struct drbd_request *net_req,
 	 * barrier packet is relevant enough.
 	 */
 	if (time_after(now, connection->send.last_sent_barrier_jif + ent)) {
-		drbd_warn(device, "Remote failed to answer a P_BARRIER (sent at %lu jif; now=%lu jif) within %ums > ko-count (%u) * timeout (%u * 0.1s)\n",
+		drbd_warn(peer_device, "Remote failed to answer a P_BARRIER (sent at %lu jif; now=%lu jif) within %ums > ko-count (%u) * timeout (%u * 0.1s)\n",
 			connection->send.last_sent_barrier_jif, now,
 			jiffies_to_msecs(now - connection->send.last_sent_barrier_jif), ko_count, timeout);
 		return true;
@@ -1690,7 +2878,7 @@ static bool net_timeout_reached(struct drbd_request *net_req,
  * - the connection was established (resp. disk was attached)
  *   for longer than the timeout already.
  * Note that for 32bit jiffies and very stable connections/disks,
- * we may have a wrap around, which is catched by
+ * we may have a wrap around, which is caught by
  *   !time_in_range(now, last_..._jif, last_..._jif + timeout).
  *
  * Side effect: once per 32bit wrap-around interval, which means every
@@ -1700,92 +2888,200 @@ static bool net_timeout_reached(struct drbd_request *net_req,
 
 void request_timer_fn(struct timer_list *t)
 {
-	struct drbd_device *device = timer_container_of(device, t,
-							request_timer);
-	struct drbd_connection *connection = first_peer_device(device)->connection;
-	struct drbd_request *req_read, *req_write, *req_peer; /* oldest request */
-	struct net_conf *nc;
-	unsigned long oldest_submit_jif;
-	unsigned long ent = 0, dt = 0, et, nt; /* effective timeout = ko_count * timeout */
-	unsigned long now;
-	unsigned int ko_count = 0, timeout = 0;
+	struct drbd_device *device = timer_container_of(device, t, request_timer);
+	struct drbd_resource *resource = device->resource;
+	struct drbd_connection *connection;
+	struct drbd_request *req_read, *req_write;
+	unsigned long oldest_submit_jif, irq_flags;
+	unsigned long disk_timeout = 0, effective_timeout = 0, now = jiffies, next_trigger_time = now;
+	bool restart_timer = false, io_error = false;
+	unsigned long timeout_peers = 0;
+	int node_id;
 
 	rcu_read_lock();
-	nc = rcu_dereference(connection->net_conf);
-	if (nc && device->state.conn >= C_WF_REPORT_PARAMS) {
-		ko_count = nc->ko_count;
-		timeout = nc->timeout;
-	}
-
 	if (get_ldev(device)) { /* implicit state.disk >= D_INCONSISTENT */
-		dt = rcu_dereference(device->ldev->disk_conf)->disk_timeout * HZ / 10;
+		disk_timeout = rcu_dereference(device->ldev->disk_conf)->disk_timeout * HZ / 10;
 		put_ldev(device);
 	}
 	rcu_read_unlock();
 
+	/* FIXME right now, this basically does a full transfer log walk *every time* */
+	read_lock_irq(&resource->state_rwlock);
+	if (disk_timeout) {
+		unsigned long write_pre_submit_jif = 0, read_pre_submit_jif = 0;
+
+		spin_lock(&device->pending_completion_lock); /* local irq already disabled */
+		req_read = list_first_entry_or_null(&device->pending_completion[0], struct drbd_request, req_pending_local);
+		req_write = list_first_entry_or_null(&device->pending_completion[1], struct drbd_request, req_pending_local);
+		spin_unlock(&device->pending_completion_lock);
+
+		if (req_write)
+			write_pre_submit_jif = req_write->pre_submit_jif;
+		if (req_read)
+			read_pre_submit_jif = req_read->pre_submit_jif;
+		oldest_submit_jif =
+			(req_write && req_read)
+			? (time_before(write_pre_submit_jif, read_pre_submit_jif)
+			  ? write_pre_submit_jif : read_pre_submit_jif)
+			: req_write ? write_pre_submit_jif
+			: req_read ? read_pre_submit_jif : now;
+
+		if (device->disk_state[NOW] > D_FAILED) {
+			effective_timeout = min_not_zero(effective_timeout, disk_timeout);
+			next_trigger_time = time_min_in_future(now,
+					next_trigger_time, oldest_submit_jif + disk_timeout);
+			restart_timer = true;
+		}
 
-	ent = timeout * HZ/10 * ko_count;
-	et = min_not_zero(dt, ent);
+		if (time_after(now, oldest_submit_jif + disk_timeout) &&
+		    !time_in_range(now, device->last_reattach_jif, device->last_reattach_jif + disk_timeout))
+			io_error = true;
+	}
+	for_each_connection(connection, resource) {
+		struct drbd_peer_device *peer_device = conn_peer_device(connection, device->vnr);
+		struct net_conf *nc;
+		struct drbd_request *req;
+		unsigned long effective_net_timeout = 0;
+		unsigned long pre_send_jif = now;
+		unsigned int ko_count = 0, timeout = 0;
 
-	if (!et)
-		return; /* Recurring timer stopped */
+		rcu_read_lock();
+		nc = rcu_dereference(connection->transport.net_conf);
+		if (nc) {
+			/* effective timeout = ko_count * timeout */
+			if (connection->cstate[NOW] == C_CONNECTED) {
+				ko_count = nc->ko_count;
+				timeout = nc->timeout;
+				effective_net_timeout = timeout * HZ/10 * ko_count;
+			}
+		}
+		rcu_read_unlock();
 
-	now = jiffies;
-	nt = now + et;
+		/* This connection is not established,
+		 * or has the effective timeout disabled.
+		 * No timer restart needed (for this connection). */
+		if (!effective_net_timeout)
+			continue;
 
-	spin_lock_irq(&device->resource->req_lock);
-	req_read = list_first_entry_or_null(&device->pending_completion[0], struct drbd_request, req_pending_local);
-	req_write = list_first_entry_or_null(&device->pending_completion[1], struct drbd_request, req_pending_local);
+		/* maybe the oldest request waiting for the peer is in fact still
+		 * blocking in tcp sendmsg.  That's ok, though, that's handled via the
+		 * socket send timeout, requesting a ping, and bumping ko-count in
+		 * drbd_stream_send_timed_out().
+		 */
 
-	/* maybe the oldest request waiting for the peer is in fact still
-	 * blocking in tcp sendmsg.  That's ok, though, that's handled via the
-	 * socket send timeout, requesting a ping, and bumping ko-count in
-	 * we_should_drop_the_connection().
-	 */
+		/* Check the oldest request we successfully sent,
+		 * but which is still waiting for an ACK. */
+		req = connection->req_ack_pending;
+
+		/* If we don't have such a request (e.g. protocol A),
+		 * check the oldest request which is still waiting on its epoch
+		 * closing barrier ack. */
+		if (!req) {
+			req = connection->req_not_net_done;
+
+			/* If we did not send the request yet then pre_send_jif
+			 * is not set. Treat this the same as when there are no
+			 * requests pending. */
+			if (req && !(req->net_rq_state[connection->peer_node_id] & RQ_NET_SENT))
+				req = NULL;
+		}
 
-	/* check the oldest request we did successfully sent,
-	 * but which is still waiting for an ACK. */
-	req_peer = connection->req_ack_pending;
+		if (req)
+			pre_send_jif = req->pre_send_jif[connection->peer_node_id];
+
+		effective_timeout = min_not_zero(effective_timeout, effective_net_timeout);
+		next_trigger_time = time_min_in_future(now,
+				next_trigger_time, pre_send_jif + effective_net_timeout);
+		/* Restart the timer, even if there are no pending requests at all.
+		 * We currently do not re-arm from the submit path. */
+		restart_timer = true;
+
+		/* We have one timer per "device",
+		 * but the "oldest" request is per "connection".
+		 * Evaluate the oldest peer request only in one timer! */
+		if (req == NULL || req->device != device)
+			continue;
+
+		if (net_timeout_reached(req, peer_device, now, effective_net_timeout, ko_count, timeout)) {
+			dynamic_drbd_dbg(peer_device, "Request at %llus+%u timed out\n",
+					(unsigned long long) req->i.sector,
+					req->i.size);
+			timeout_peers |= NODE_MASK(connection->peer_node_id);
+		}
+	}
+	read_unlock_irq(&resource->state_rwlock);
 
-	/* if we don't have such request (e.g. protocoll A)
-	 * check the oldest requests which is still waiting on its epoch
-	 * closing barrier ack. */
-	if (!req_peer)
-		req_peer = connection->req_not_net_done;
+	if (io_error) {
+		drbd_warn(device, "Local backing device failed to meet the disk-timeout\n");
+		drbd_handle_io_error(device, DRBD_FORCE_DETACH);
+	}
 
-	/* evaluate the oldest peer request only in one timer! */
-	if (req_peer && req_peer->device != device)
-		req_peer = NULL;
+	BUILD_BUG_ON(sizeof(timeout_peers) * 8 < DRBD_NODE_ID_MAX);
+	for_each_set_bit(node_id, &timeout_peers, DRBD_NODE_ID_MAX) {
+		connection = drbd_get_connection_by_node_id(resource, node_id);
+		if (!connection)
+			continue;
+		begin_state_change(resource, &irq_flags, CS_VERBOSE | CS_HARD);
+		__change_cstate(connection, C_TIMEOUT);
+		end_state_change(resource, &irq_flags, "timeout");
+		kref_put(&connection->kref, drbd_destroy_connection);
+	}
 
-	/* do we have something to evaluate? */
-	if (req_peer == NULL && req_write == NULL && req_read == NULL)
-		goto out;
+	if (restart_timer) {
+		next_trigger_time = time_min_in_future(now, next_trigger_time, now + effective_timeout);
+		mod_timer(&device->request_timer, next_trigger_time);
+	}
+}
 
-	oldest_submit_jif =
-		(req_write && req_read)
-		? ( time_before(req_write->pre_submit_jif, req_read->pre_submit_jif)
-		  ? req_write->pre_submit_jif : req_read->pre_submit_jif )
-		: req_write ? req_write->pre_submit_jif
-		: req_read ? req_read->pre_submit_jif : now;
+/**
+ * drbd_handle_io_error_() - Handle the on_io_error setting
+ * @device: DRBD device.
+ * @df:     Detach flags indicating the kind of IO that failed.
+ * @where:  Calling function name.
+ *
+ * Should be called from all IO completion handlers.
+ */
+void drbd_handle_io_error_(struct drbd_device *device,
+	enum drbd_force_detach_flags df, const char *where)
+{
+	unsigned long flags;
+	enum drbd_io_error_p ep;
 
-	if (ent && req_peer && net_timeout_reached(req_peer, connection, now, ent, ko_count, timeout))
-		_conn_request_state(connection, NS(conn, C_TIMEOUT), CS_VERBOSE | CS_HARD);
+	write_lock_irqsave(&device->resource->state_rwlock, flags);
 
-	if (dt && oldest_submit_jif != now &&
-		 time_after(now, oldest_submit_jif + dt) &&
-		!time_in_range(now, device->last_reattach_jif, device->last_reattach_jif + dt)) {
-		drbd_warn(device, "Local backing device failed to meet the disk-timeout\n");
-		__drbd_chk_io_error(device, DRBD_FORCE_DETACH);
+	rcu_read_lock();
+	/* ldev_safe: called from endio handlers where ldev is still held */
+	ep = rcu_dereference(device->ldev->disk_conf)->on_io_error;
+	rcu_read_unlock();
+	switch (ep) {
+	case EP_PASS_ON: /* FIXME would this be better named "Ignore"? */
+		if (df == DRBD_READ_ERROR || df == DRBD_WRITE_ERROR) {
+			if (drbd_device_ratelimit(device, BACKEND))
+				drbd_err(device, "Local IO failed in %s.\n", where);
+			if (device->disk_state[NOW] > D_INCONSISTENT) {
+				begin_state_change_locked(device->resource, CS_HARD);
+				__change_disk_state(device, D_INCONSISTENT);
+				end_state_change_locked(device->resource, "local-io-error");
+			}
+			break;
+		}
+		fallthrough;	/* for DRBD_META_IO_ERROR or DRBD_FORCE_DETACH */
+	case EP_DETACH:
+	case EP_CALL_HELPER:
+		/* Force-detach is not really an IO error, but rather a
+		 * desperate measure to try to deal with a completely
+		 * unresponsive lower level IO stack.
+		 * Still it should be treated as a WRITE error.
+		 */
+		if (df == DRBD_FORCE_DETACH)
+			set_bit(FORCE_DETACH, &device->flags);
+		if (device->disk_state[NOW] > D_FAILED) {
+			begin_state_change_locked(device->resource, CS_HARD);
+			__change_disk_state(device, D_FAILED);
+			end_state_change_locked(device->resource, "local-io-error");
+			drbd_err(device,
+				"Local IO failed in %s. Detaching...\n", where);
+		}
+		break;
 	}
 
-	/* Reschedule timer for the nearest not already expired timeout.
-	 * Fallback to now + min(effective network timeout, disk timeout). */
-	ent = (ent && req_peer && time_before(now, req_peer->pre_send_jif + ent))
-		? req_peer->pre_send_jif + ent : now + et;
-	dt = (dt && oldest_submit_jif != now && time_before(now, oldest_submit_jif + dt))
-		? oldest_submit_jif + dt : now + et;
-	nt = time_before(ent, dt) ? ent : dt;
-out:
-	spin_unlock_irq(&device->resource->req_lock);
-	mod_timer(&device->request_timer, nt);
+	write_unlock_irqrestore(&device->resource->state_rwlock, flags);
 }
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 16/20] drbd: rework module core for DRBD 9 transport and multi-peer
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
                   ` (14 preceding siblings ...)
  2026-03-27 22:38 ` [PATCH 15/20] drbd: rework request processing for DRBD 9 multi-peer IO Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 17/20] drbd: rework receiver for DRBD 9 transport and multi-peer protocol Christoph Böhmwalder
                   ` (3 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder, Joel Colledge

Rework drbd_main.c to align the module core with the DRBD 9 multi-peer
architecture introduced by the surrounding header and transport commits.

Refactor all packet sending around a page-based send buffer with
explicit cork/uncork semantics driven by the transport layer,
replacing the old per-socket static buffer and direct socket calls.

Move the transfer log from per-connection to per-resource scope, and
switch its traversal to RCU, allowing safe concurrent walks without
the coarse req_lock spinlock.
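
The resulting walk pattern, as in __tl_walk() and tl_abort_disk_io() in this
patch, reduces to roughly:

	rcu_read_lock();
	list_for_each_entry_rcu(req, &resource->transfer_log, tl_requests) {
		/* skip requests that are already on their way out */
		if (!kref_get_unless_zero(&req->kref))
			continue;
		_req_mod(req, what, peer_device);
		kref_put(&req->kref, drbd_req_destroy);
	}
	rcu_read_unlock();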

Rewrite UUID management for multi-peer: the fixed 4-slot layout is
replaced by a per-device current UUID, per-peer bitmap UUIDs, and a
history array.
This enables DRBD 9 to track resyncs across more than one peer
simultaneously. The on-disk metadata format is extended to match.
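
Schematically, the per-device UUID state becomes something like the
following (field names are illustrative only, not the actual layout):

	u64 current_uuid;			/* the active data generation */
	u64 bitmap_uuids[DRBD_NODE_ID_MAX];	/* resync base point, per peer */
	u64 history_uuids[HISTORY_SLOTS];	/* retired data generations */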

Separate the resource and connection lifecycles so that resources and
connections are created, torn down, and reference-counted
independently, with threads scoped appropriately to each object.

Add quorum-aware auto-promote semantics to the block device
open/release path.

Co-developed-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Co-developed-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Co-developed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Co-developed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/drbd_main.c | 6008 ++++++++++++++++++++++----------
 1 file changed, 4180 insertions(+), 1828 deletions(-)

diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 200d464e984b..acce6c4b4a16 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
-   drbd.c
+   drbd_main.c
 
    This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
 
@@ -19,41 +19,47 @@
 #include <linux/module.h>
 #include <linux/jiffies.h>
 #include <linux/drbd.h>
-#include <linux/uaccess.h>
 #include <asm/types.h>
+#include <net/net_namespace.h>
 #include <net/sock.h>
 #include <linux/ctype.h>
-#include <linux/mutex.h>
 #include <linux/fs.h>
 #include <linux/file.h>
 #include <linux/proc_fs.h>
 #include <linux/init.h>
 #include <linux/mm.h>
-#include <linux/memcontrol.h>
+#include <linux/memcontrol.h> /* needed on kernels <4.3 */
 #include <linux/mm_inline.h>
 #include <linux/slab.h>
 #include <linux/string.h>
-#include <linux/random.h>
-#include <linux/reboot.h>
 #include <linux/notifier.h>
-#include <linux/kthread.h>
 #include <linux/workqueue.h>
-#include <linux/unistd.h>
+#include <linux/kthread.h>
 #include <linux/vmalloc.h>
-#include <linux/sched/signal.h>
+#include <linux/dynamic_debug.h>
+#include <linux/libnvdimm.h>
+#include <linux/swab.h>
+#include <linux/overflow.h>
 
 #include <linux/drbd_limits.h>
 #include "drbd_int.h"
 #include "drbd_protocol.h"
-#include "drbd_req.h" /* only for _req_mod in tl_release and tl_clear */
+#include "drbd_req.h"
 #include "drbd_vli.h"
 #include "drbd_debugfs.h"
+#include "drbd_meta_data.h"
+#include "drbd_legacy_84.h"
+#include "drbd_dax_pmem.h"
 
-static DEFINE_MUTEX(drbd_main_mutex);
-static int drbd_open(struct gendisk *disk, blk_mode_t mode);
+static int drbd_open(struct gendisk *gd, blk_mode_t mode);
 static void drbd_release(struct gendisk *gd);
 static void md_sync_timer_fn(struct timer_list *t);
 static int w_bitmap_io(struct drbd_work *w, int unused);
+static int flush_send_buffer(struct drbd_connection *connection, enum drbd_stream drbd_stream);
+static u64 __set_bitmap_slots(struct drbd_device *device, u64 bitmap_uuid, u64 do_nodes);
+static u64 __test_bitmap_slots(struct drbd_device *device);
+static void drbd_send_ping_ack_wf(struct work_struct *ws);
+static void __net_exit __drbd_net_exit(struct net *net);
 
 MODULE_AUTHOR("Philipp Reisner <phil@linbit.com>, "
 	      "Lars Ellenberg <lars@linbit.com>");
@@ -63,16 +69,16 @@ MODULE_LICENSE("GPL");
 MODULE_PARM_DESC(minor_count, "Approximate number of drbd devices ("
 		 __stringify(DRBD_MINOR_COUNT_MIN) "-" __stringify(DRBD_MINOR_COUNT_MAX) ")");
 MODULE_ALIAS_BLOCKDEV_MAJOR(DRBD_MAJOR);
+MODULE_SOFTDEP("post: handshake");
 
 #include <linux/moduleparam.h>
-/* thanks to these macros, if compiled into the kernel (not-module),
- * these become boot parameters (e.g., drbd.minor_count) */
 
 #ifdef CONFIG_DRBD_FAULT_INJECTION
 int drbd_enable_faults;
 int drbd_fault_rate;
 static int drbd_fault_count;
 static int drbd_fault_devs;
+
 /* bitmap of enabled faults */
 module_param_named(enable_faults, drbd_enable_faults, int, 0664);
 /* fault rate % value - applies to all enabled faults */
@@ -84,15 +90,12 @@ module_param_named(fault_devs, drbd_fault_devs, int, 0644);
 #endif
 
 /* module parameters we can keep static */
-static bool drbd_allow_oos; /* allow_open_on_secondary */
 static bool drbd_disable_sendpage;
+static bool drbd_allow_oos; /* allow_open_on_secondary */
 MODULE_PARM_DESC(allow_oos, "DONT USE!");
-module_param_named(allow_oos, drbd_allow_oos, bool, 0);
 module_param_named(disable_sendpage, drbd_disable_sendpage, bool, 0644);
+module_param_named(allow_oos, drbd_allow_oos, bool, 0);
 
-/* module parameters we share */
-int drbd_proc_details; /* Detail level in proc drbd*/
-module_param_named(proc_details, drbd_proc_details, int, 0644);
 /* module parameters shared with defaults */
 unsigned int drbd_minor_count = DRBD_MINOR_COUNT_DEF;
 /* Module parameter for setting the user mode helper program
@@ -101,16 +104,60 @@ char drbd_usermode_helper[80] = "/sbin/drbdadm";
 module_param_named(minor_count, drbd_minor_count, uint, 0444);
 module_param_string(usermode_helper, drbd_usermode_helper, sizeof(drbd_usermode_helper), 0644);
 
+static int param_set_drbd_protocol_version(const char *s, const struct kernel_param *kp)
+{
+	unsigned long long tmp;
+	unsigned int *res = kp->arg;
+	int rv;
+
+	rv = kstrtoull(s, 0, &tmp);
+	if (rv < 0)
+		return rv;
+	if (!drbd_protocol_version_acceptable(tmp))
+		return -ERANGE;
+	*res = tmp;
+	return 0;
+}
+
+#define param_check_drbd_protocol_version	param_check_uint
+#define param_get_drbd_protocol_version		param_get_uint
+
+static const struct kernel_param_ops param_ops_drbd_protocol_version = {
+	.set = param_set_drbd_protocol_version,
+	.get = param_get_drbd_protocol_version,
+};
+
+unsigned int drbd_protocol_version_min = PRO_VERSION_8_MIN;
+module_param_named(protocol_version_min, drbd_protocol_version_min, drbd_protocol_version, 0644);
+#define protocol_version_min_desc								\
+	"\n\t\tReject DRBD dialects older than this.\n\t\t"					\
+	"Supported: "										\
+	"DRBD 8 [" __stringify(PRO_VERSION_8_MIN) "-" __stringify(PRO_VERSION_8_MAX) "]; "	\
+	"DRBD 9 [" __stringify(PRO_VERSION_MIN) "-" __stringify(PRO_VERSION_MAX) "].\n\t\t"	\
+	"Default: " __stringify(PRO_VERSION_8_MIN)
+MODULE_PARM_DESC(protocol_version_min, protocol_version_min_desc);
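
When built into the kernel this becomes a boot parameter
(drbd.protocol_version_min=...); as a module it is set at load time,
for example (the version value is for illustration only):

	modprobe drbd protocol_version_min=86
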
+
+#define param_check_drbd_strict_names		param_check_bool
+#define param_get_drbd_strict_names		param_get_bool
+const struct kernel_param_ops param_ops_drbd_strict_names = {
+	.set = param_set_drbd_strict_names,
+	.get = param_get_drbd_strict_names,
+};
+bool drbd_strict_names = true;
+MODULE_PARM_DESC(strict_names, "restrict resource and connection names to ascii alnum and a subset of punct");
+module_param_named(strict_names, drbd_strict_names, drbd_strict_names, 0644);
+
 /* in 2.6.x, our device mapping and config info contains our virtual gendisks
  * as member "struct gendisk *vdisk;"
  */
 struct idr drbd_devices;
 struct list_head drbd_resources;
-struct mutex resources_mutex;
+static DEFINE_SPINLOCK(drbd_devices_lock);
+DEFINE_MUTEX(resources_mutex);
 
+struct workqueue_struct *ping_ack_sender;
 struct kmem_cache *drbd_request_cache;
 struct kmem_cache *drbd_ee_cache;	/* peer requests */
-struct kmem_cache *drbd_bm_ext_cache;	/* bitmap extents */
 struct kmem_cache *drbd_al_ext_cache;	/* activity log extents */
 mempool_t drbd_request_mempool;
 mempool_t drbd_ee_mempool;
@@ -119,8 +166,6 @@ mempool_t drbd_buffer_page_pool;
 struct bio_set drbd_md_io_bio_set;
 struct bio_set drbd_io_bio_set;
 
-DEFINE_RATELIMIT_STATE(drbd_ratelimit_state, 5 * HZ, 5);
-
 static const struct block_device_operations drbd_ops = {
 	.owner		= THIS_MODULE,
 	.submit_bio	= drbd_submit_bio,
@@ -128,71 +173,241 @@ static const struct block_device_operations drbd_ops = {
 	.release	= drbd_release,
 };
 
-#ifdef __CHECKER__
-/* When checking with sparse, and this is an inline function, sparse will
-   give tons of false positives. When this is a real functions sparse works.
- */
-int _get_ldev_if_state(struct drbd_device *device, enum drbd_disk_state mins)
+static struct pernet_operations drbd_pernet_ops = {
+	.exit = __drbd_net_exit,
+};
+
+struct drbd_connection *__drbd_next_connection_ref(u64 *visited,
+						   struct drbd_connection *connection,
+						   struct drbd_resource *resource)
+{
+	int node_id;
+
+	rcu_read_lock();
+	if (!connection) {
+		connection = list_first_or_null_rcu(&resource->connections,
+						    struct drbd_connection,
+						    connections);
+		*visited = 0;
+	} else {
+		struct list_head *pos;
+		bool previous_visible; /* on the resources connections list */
+
+		pos = list_next_rcu(&connection->connections);
+		/* follow the pointer first, then check if the previous element was
+		   still an element on the list of visible connections. */
+		smp_rmb();
+		previous_visible = !test_bit(C_UNREGISTERED, &connection->flags);
+
+		kref_put(&connection->kref, drbd_destroy_connection);
+
+		if (pos == &resource->connections) {
+			connection = NULL;
+		} else if (previous_visible) {	/* visible -> we are now on a valid element */
+			connection = list_entry_rcu(pos, struct drbd_connection, connections);
+		} else { /* not visible -> pos might point to a dead element now */
+			for_each_connection_rcu(connection, resource) {
+				node_id = connection->peer_node_id;
+				if (!(*visited & NODE_MASK(node_id)))
+					goto found;
+			}
+			connection = NULL;
+		}
+	}
+
+	if (connection) {
+	found:
+		node_id = connection->peer_node_id;
+		*visited |= NODE_MASK(node_id);
+
+		kref_get(&connection->kref);
+	}
+
+	rcu_read_unlock();
+	return connection;
+}
+
+
+struct drbd_peer_device *__drbd_next_peer_device_ref(u64 *visited,
+						     struct drbd_peer_device *peer_device,
+						     struct drbd_device *device)
 {
-	int io_allowed;
+	rcu_read_lock();
+	if (!peer_device) {
+		peer_device = list_first_or_null_rcu(&device->peer_devices,
+						    struct drbd_peer_device,
+						    peer_devices);
+		*visited = 0;
+	} else {
+		struct list_head *pos;
+		bool previous_visible;
+
+		pos = list_next_rcu(&peer_device->peer_devices);
+		smp_rmb();
+		previous_visible = !test_bit(C_UNREGISTERED, &peer_device->connection->flags);
+
+		kref_put(&peer_device->connection->kref, drbd_destroy_connection);
 
-	atomic_inc(&device->local_cnt);
-	io_allowed = (device->state.disk >= mins);
-	if (!io_allowed) {
-		if (atomic_dec_and_test(&device->local_cnt))
-			wake_up(&device->misc_wait);
+		if (pos == &device->peer_devices) {
+			peer_device = NULL;
+		} else if (previous_visible) {
+			peer_device = list_entry_rcu(pos, struct drbd_peer_device, peer_devices);
+		} else {
+			for_each_peer_device_rcu(peer_device, device) {
+				if (!(*visited & NODE_MASK(peer_device->node_id)))
+					goto found;
+			}
+			peer_device = NULL;
+		}
+	}
+
+	if (peer_device) {
+	found:
+		*visited |= NODE_MASK(peer_device->node_id);
+
+		kref_get(&peer_device->connection->kref);
 	}
-	return io_allowed;
+
+	rcu_read_unlock();
+	return peer_device;
 }
 
-#endif
+static void dump_epoch(struct drbd_resource *resource, int node_id, int epoch)
+{
+	struct drbd_request *req;
+	bool found_epoch = false;
+
+	list_for_each_entry_rcu(req, &resource->transfer_log, tl_requests) {
+		if (!found_epoch && req->epoch == epoch)
+			found_epoch = true;
+
+		if (found_epoch) {
+			if (req->epoch != epoch)
+				break;
+			drbd_info(req->device, "XXX %u %llu+%u 0x%x 0x%x\n",
+					req->epoch,
+					(unsigned long long)req->i.sector, req->i.size >> 9,
+					req->local_rq_state, req->net_rq_state[node_id]
+				 );
+		}
+	}
+}
 
 /**
  * tl_release() - mark as BARRIER_ACKED all requests in the corresponding transfer log epoch
  * @connection:	DRBD connection.
+ * @o_block_id: "block id" aka expected pointer address of the oldest request
+ * @y_block_id: "block id" aka expected pointer address of the youngest request
+ *		confirmed to be on stable storage.
  * @barrier_nr:	Expected identifier of the DRBD write barrier packet.
- * @set_size:	Expected number of requests before that barrier.
+ * @set_size:	Expected number of requests before that barrier, respectively
+ *		number of requests in the interval [o_block_id;y_block_id]
+ *
+ * Called for both P_BARRIER_ACK and P_CONFIRM_STABLE,
+ * which is similar to an unsolicited partial barrier ack.
+ *
+ * Either barrier_nr (for barrier acks) or both o_block_id and y_block_id (for
+ * confirm stable) are given.  For barrier acks, all requests in the epoch
+ * designated by "barrier_nr" are confirmed to be on stable storage.
+ *
+ * For confirm stable, both o_block_id and y_block_id are given, barrier_nr is
+ * ignored, and all requests from "o_block_id" up to and including y_block_id
+ * are confirmed to be on stable storage on the reporting peer.
  *
  * In case the passed barrier_nr or set_size does not match the oldest
  * epoch of not yet barrier-acked requests, this function will cause a
  * termination of the connection.
  */
-void tl_release(struct drbd_connection *connection, unsigned int barrier_nr,
+int tl_release(struct drbd_connection *connection,
+		uint64_t o_block_id,
+		uint64_t y_block_id,
+		unsigned int barrier_nr,
 		unsigned int set_size)
 {
+	struct drbd_resource *resource = connection->resource;
+	const int idx = connection->peer_node_id;
 	struct drbd_request *r;
-	struct drbd_request *req = NULL, *tmp = NULL;
+	struct drbd_request *req = NULL;
+	struct drbd_request *req_y = NULL;
 	int expect_epoch = 0;
 	int expect_size = 0;
 
-	spin_lock_irq(&connection->resource->req_lock);
-
+	rcu_read_lock();
 	/* find oldest not yet barrier-acked write request,
 	 * count writes in its epoch. */
-	list_for_each_entry(r, &connection->transfer_log, tl_requests) {
-		const unsigned s = r->rq_state;
+	r = READ_ONCE(connection->req_not_net_done);
+	if (r == NULL) {
+		drbd_err(connection, "BarrierAck #%u received, but req_not_net_done = NULL\n",
+			 barrier_nr);
+		goto bail;
+	}
+	smp_rmb(); /* paired with smp_wmb() in set_cache_ptr_if_null() */
+	list_for_each_entry_from_rcu(r, &resource->transfer_log, tl_requests) {
+		unsigned int local_rq_state, net_rq_state;
+
+		spin_lock_irq(&r->rq_lock);
+		local_rq_state = r->local_rq_state;
+		net_rq_state = r->net_rq_state[idx];
+		spin_unlock_irq(&r->rq_lock);
+
 		if (!req) {
-			if (!(s & RQ_WRITE))
+			if (!(local_rq_state & RQ_WRITE))
 				continue;
-			if (!(s & RQ_NET_MASK))
+			if (!(net_rq_state & RQ_NET_MASK))
 				continue;
-			if (s & RQ_NET_DONE)
+			if (net_rq_state & RQ_NET_DONE)
 				continue;
 			req = r;
 			expect_epoch = req->epoch;
-			expect_size ++;
+			expect_size++;
 		} else {
+			const u16 s = net_rq_state;
 			if (r->epoch != expect_epoch)
 				break;
-			if (!(s & RQ_WRITE))
+			if (!(local_rq_state & RQ_WRITE))
 				continue;
-			/* if (s & RQ_DONE): not expected */
-			/* if (!(s & RQ_NET_MASK)): not expected */
+			/* probably a "send_out_of_sync", during Ahead/Behind mode,
+			 * while at least one volume already started to resync again.
+			 * Or a write that was not replicated during a resync, and
+			 * replication has been enabled since it was submitted.
+			 */
+			if ((s & RQ_NET_MASK) && !(s & RQ_EXP_BARR_ACK))
+				continue;
+			if (s & RQ_NET_DONE || (s & RQ_NET_MASK) == 0) {
+				drbd_warn(connection, "unexpected state flags: 0x%x during BarrierAck #%u\n",
+					s, barrier_nr);
+			}
 			expect_size++;
 		}
+		if (y_block_id && (struct drbd_request *)(unsigned long)y_block_id == r) {
+			req_y = r;
+			break;
+		}
 	}
 
 	/* first some paranoia code */
+	if (o_block_id) {
+		if ((struct drbd_request *)(unsigned long)o_block_id != req) {
+			drbd_err(connection, "BAD! ConfirmedStable: expected %p, found %p\n",
+				(struct drbd_request *)(unsigned long)o_block_id, req);
+			goto bail;
+		}
+		if (!req_y) {
+			drbd_err(connection, "BAD! ConfirmedStable: expected youngest request %p NOT found\n",
+				(struct drbd_request *)(unsigned long)y_block_id);
+			goto bail;
+		}
+		/* A P_CONFIRM_STABLE cannot tell me the to-be-expected barrier nr,
+		 * it does not know it yet. But we just confirmed it knew the
+		 * expected request, so just use that one. */
+		barrier_nr = expect_epoch;
+		/* Both requests referenced must be in the same epoch. */
+		if (req_y->epoch != expect_epoch) {
+			drbd_err(connection, "BAD! ConfirmedStable: reported requests not in the same epoch (%u != %u)\n",
+				req->epoch, req_y->epoch);
+			goto bail;
+		}
+	}
 	if (req == NULL) {
 		drbd_err(connection, "BAD! BarrierAck #%u received, but no epoch in tl!?\n",
 			 barrier_nr);
@@ -205,111 +420,135 @@ void tl_release(struct drbd_connection *connection, unsigned int barrier_nr,
 	}
 
 	if (expect_size != set_size) {
-		drbd_err(connection, "BAD! BarrierAck #%u received with n_writes=%u, expected n_writes=%u!\n",
-			 barrier_nr, set_size, expect_size);
+		if (!o_block_id) {
+			DEFINE_DYNAMIC_DEBUG_METADATA(ddm, "Bad barrier ack dump");
+
+			drbd_err(connection, "BAD! BarrierAck #%u received with n_writes=%u, expected n_writes=%u!\n",
+				 barrier_nr, set_size, expect_size);
+
+			if (DYNAMIC_DEBUG_BRANCH(ddm))
+				dump_epoch(resource, connection->peer_node_id, expect_epoch);
+		} else {
+			drbd_err(connection, "BAD! ConfirmedStable [%p,%p] received with n_writes=%u, expected n_writes=%u!\n",
+				 req, req_y, set_size, expect_size);
+		}
 		goto bail;
 	}
 
 	/* Clean up list of requests processed during current epoch. */
-	/* this extra list walk restart is paranoia,
-	 * to catch requests being barrier-acked "unexpectedly".
-	 * It usually should find the same req again, or some READ preceding it. */
-	list_for_each_entry(req, &connection->transfer_log, tl_requests)
-		if (req->epoch == expect_epoch) {
-			tmp = req;
-			break;
-		}
-	req = list_prepare_entry(tmp, &connection->transfer_log, tl_requests);
-	list_for_each_entry_safe_from(req, r, &connection->transfer_log, tl_requests) {
+	list_for_each_entry_from_rcu(req, &resource->transfer_log, tl_requests) {
 		struct drbd_peer_device *peer_device;
+
 		if (req->epoch != expect_epoch)
 			break;
 		peer_device = conn_peer_device(connection, req->device->vnr);
-		_req_mod(req, BARRIER_ACKED, peer_device);
+		req_mod(req, BARRIER_ACKED, peer_device);
+		if (req == req_y)
+			break;
+	}
+	rcu_read_unlock();
+
+	/* urgently flush out peer acks for P_CONFIRM_STABLE */
+	if (req_y) {
+		drbd_flush_peer_acks(resource);
+	} else if (barrier_nr == connection->send.last_sent_epoch_nr) {
+		clear_bit(BARRIER_ACK_PENDING, &connection->flags);
+		wake_up(&resource->barrier_wait);
 	}
-	spin_unlock_irq(&connection->resource->req_lock);
 
-	return;
+	return 0;
 
 bail:
-	spin_unlock_irq(&connection->resource->req_lock);
-	conn_request_state(connection, NS(conn, C_PROTOCOL_ERROR), CS_HARD);
+	rcu_read_unlock();
+	return -EPROTO;
 }
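
The block-id checks above rely on the request's address surviving the round
trip through the wire-format u64; schematically (simplified):

	/* sender: advertise the oldest/youngest request by address */
	u64 block_id = (u64)(unsigned long)req;
	/* ... the peer echoes block_id back in P_CONFIRM_STABLE ... */
	/* receiver, in tl_release() above: map back and verify */
	if ((struct drbd_request *)(unsigned long)block_id != req)
		goto bail;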
 
 
 /**
- * _tl_restart() - Walks the transfer log, and applies an action to all requests
- * @connection:	DRBD connection to operate on.
+ * __tl_walk() - Walk the transfer log and apply an action to all requests
+ * @resource:	DRBD resource to operate on
+ * @connection: DRBD connection to operate on
+ * @from_req:    If set, the walk starts from the request that this points to
  * @what:       The action/event to perform with all request objects
  *
- * @what might be one of CONNECTION_LOST_WHILE_PENDING, RESEND, FAIL_FROZEN_DISK_IO,
- * RESTART_FROZEN_DISK_IO.
+ * @what might be one of CONNECTION_LOST, CONNECTION_LOST_WHILE_SUSPENDED,
+ * RESEND, CANCEL_SUSPENDED_IO, COMPLETION_RESUMED.
  */
-/* must hold resource->req_lock */
-void _tl_restart(struct drbd_connection *connection, enum drbd_req_event what)
+void __tl_walk(struct drbd_resource *const resource,
+		struct drbd_connection *const connection,
+		struct drbd_request **from_req,
+		const enum drbd_req_event what)
 {
 	struct drbd_peer_device *peer_device;
-	struct drbd_request *req, *r;
+	struct drbd_request *req = NULL;
 
-	list_for_each_entry_safe(req, r, &connection->transfer_log, tl_requests) {
-		peer_device = conn_peer_device(connection, req->device->vnr);
+	rcu_read_lock();
+	if (from_req)
+		req = READ_ONCE(*from_req);
+	if (!req)
+		req = list_entry_rcu(resource->transfer_log.next, struct drbd_request, tl_requests);
+	smp_rmb(); /* paired with smp_wmb() in set_cache_ptr_if_null() */
+	list_for_each_entry_from_rcu(req, &resource->transfer_log, tl_requests) {
+		/* Skip if the request has already been destroyed. */
+		if (!kref_get_unless_zero(&req->kref))
+			continue;
+
+		peer_device = connection == NULL ? NULL :
+			conn_peer_device(connection, req->device->vnr);
 		_req_mod(req, what, peer_device);
+		kref_put(&req->kref, drbd_req_destroy);
 	}
+	rcu_read_unlock();
 }
 
-void tl_restart(struct drbd_connection *connection, enum drbd_req_event what)
+void tl_walk(struct drbd_connection *connection, struct drbd_request **from_req, enum drbd_req_event what)
 {
-	spin_lock_irq(&connection->resource->req_lock);
-	_tl_restart(connection, what);
-	spin_unlock_irq(&connection->resource->req_lock);
-}
+	struct drbd_resource *resource = connection->resource;
 
-/**
- * tl_clear() - Clears all requests and &struct drbd_tl_epoch objects out of the TL
- * @connection:	DRBD connection.
- *
- * This is called after the connection to the peer was lost. The storage covered
- * by the requests on the transfer gets marked as our of sync. Called from the
- * receiver thread and the worker thread.
- */
-void tl_clear(struct drbd_connection *connection)
-{
-	tl_restart(connection, CONNECTION_LOST_WHILE_PENDING);
+	read_lock_irq(&resource->state_rwlock);
+	__tl_walk(connection->resource, connection, from_req, what);
+	read_unlock_irq(&resource->state_rwlock);
 }
 
 /**
  * tl_abort_disk_io() - Abort disk I/O for all requests for a certain device in the TL
- * @device:	DRBD device.
+ * @device:     DRBD device.
  */
 void tl_abort_disk_io(struct drbd_device *device)
 {
-	struct drbd_connection *connection = first_peer_device(device)->connection;
-	struct drbd_request *req, *r;
+	struct drbd_resource *resource = device->resource;
+	struct drbd_request *req;
 
-	spin_lock_irq(&connection->resource->req_lock);
-	list_for_each_entry_safe(req, r, &connection->transfer_log, tl_requests) {
-		if (!(req->rq_state & RQ_LOCAL_PENDING))
+	rcu_read_lock();
+	list_for_each_entry_rcu(req, &resource->transfer_log, tl_requests) {
+		if (!(READ_ONCE(req->local_rq_state) & RQ_LOCAL_PENDING))
 			continue;
 		if (req->device != device)
 			continue;
-		_req_mod(req, ABORT_DISK_IO, NULL);
+		/* Skip if the request has already been destroyed. */
+		if (!kref_get_unless_zero(&req->kref))
+			continue;
+
+		req_mod(req, ABORT_DISK_IO, NULL);
+		kref_put(&req->kref, drbd_req_destroy);
 	}
-	spin_unlock_irq(&connection->resource->req_lock);
+	rcu_read_unlock();
 }
 
 static int drbd_thread_setup(void *arg)
 {
 	struct drbd_thread *thi = (struct drbd_thread *) arg;
 	struct drbd_resource *resource = thi->resource;
+	struct drbd_connection *connection = thi->connection;
 	unsigned long flags;
 	int retval;
 
-	snprintf(current->comm, sizeof(current->comm), "drbd_%c_%s",
-		 thi->name[0],
-		 resource->name);
-
 	allow_kernel_signal(DRBD_SIGKILL);
 	allow_kernel_signal(SIGXCPU);
+
+	if (connection)
+		kref_get(&connection->kref);
+	else
+		kref_get(&resource->kref);
 restart:
 	retval = thi->function(thi);
 
@@ -326,26 +565,33 @@ static int drbd_thread_setup(void *arg)
 	 */
 
 	if (thi->t_state == RESTARTING) {
-		drbd_info(resource, "Restarting %s thread\n", thi->name);
+		if (connection)
+			drbd_info(connection, "Restarting %s thread\n", thi->name);
+		else
+			drbd_info(resource, "Restarting %s thread\n", thi->name);
 		thi->t_state = RUNNING;
 		spin_unlock_irqrestore(&thi->t_lock, flags);
+		flush_signals(current); /* likely it got a signal to look at t_state... */
 		goto restart;
 	}
 
 	thi->task = NULL;
 	thi->t_state = NONE;
 	smp_mb();
-	complete_all(&thi->stop);
-	spin_unlock_irqrestore(&thi->t_lock, flags);
 
-	drbd_info(resource, "Terminating %s\n", current->comm);
+	if (connection)
+		drbd_info(connection, "Terminating %s thread\n", thi->name);
+	else
+		drbd_info(resource, "Terminating %s thread\n", thi->name);
 
-	/* Release mod reference taken when thread was started */
+	complete(&thi->stop);
+	spin_unlock_irqrestore(&thi->t_lock, flags);
+
+	if (connection)
+		kref_put(&connection->kref, drbd_destroy_connection);
+	else
+		kref_put(&resource->kref, drbd_destroy_resource);
 
-	if (thi->connection)
-		kref_put(&thi->connection->kref, drbd_destroy_connection);
-	kref_put(&resource->kref, drbd_destroy_resource);
-	module_put(THIS_MODULE);
 	return retval;
 }
 
@@ -364,6 +610,7 @@ static void drbd_thread_init(struct drbd_resource *resource, struct drbd_thread
 int drbd_thread_start(struct drbd_thread *thi)
 {
 	struct drbd_resource *resource = thi->resource;
+	struct drbd_connection *connection = thi->connection;
 	struct task_struct *nt;
 	unsigned long flags;
 
@@ -373,36 +620,29 @@ int drbd_thread_start(struct drbd_thread *thi)
 
 	switch (thi->t_state) {
 	case NONE:
-		drbd_info(resource, "Starting %s thread (from %s [%d])\n",
-			 thi->name, current->comm, current->pid);
-
-		/* Get ref on module for thread - this is released when thread exits */
-		if (!try_module_get(THIS_MODULE)) {
-			drbd_err(resource, "Failed to get module reference in drbd_thread_start\n");
-			spin_unlock_irqrestore(&thi->t_lock, flags);
-			return false;
-		}
-
-		kref_get(&resource->kref);
-		if (thi->connection)
-			kref_get(&thi->connection->kref);
+		if (connection)
+			drbd_info(connection, "Starting %s thread (peer-node-id %d)\n",
+				 thi->name, connection->peer_node_id);
+		else
+			drbd_info(resource, "Starting %s thread (node-id %d)\n",
+				 thi->name, resource->res_opts.node_id);
 
 		init_completion(&thi->stop);
+		D_ASSERT(resource, thi->task == NULL);
 		thi->reset_cpu_mask = 1;
 		thi->t_state = RUNNING;
 		spin_unlock_irqrestore(&thi->t_lock, flags);
 		flush_signals(current); /* otherw. may get -ERESTARTNOINTR */
 
 		nt = kthread_create(drbd_thread_setup, (void *) thi,
-				    "drbd_%c_%s", thi->name[0], thi->resource->name);
+				    "drbd_%c_%s", thi->name[0], resource->name);
 
 		if (IS_ERR(nt)) {
-			drbd_err(resource, "Couldn't start thread\n");
+			if (connection)
+				drbd_err(connection, "Couldn't start thread: %ld\n", PTR_ERR(nt));
+			else
+				drbd_err(resource, "Couldn't start thread: %ld\n", PTR_ERR(nt));
 
-			if (thi->connection)
-				kref_put(&thi->connection->kref, drbd_destroy_connection);
-			kref_put(&resource->kref, drbd_destroy_resource);
-			module_put(THIS_MODULE);
 			return false;
 		}
 		spin_lock_irqsave(&thi->t_lock, flags);
@@ -413,8 +653,10 @@ int drbd_thread_start(struct drbd_thread *thi)
 		break;
 	case EXITING:
 		thi->t_state = RESTARTING;
-		drbd_info(resource, "Restarting %s thread (from %s [%d])\n",
-				thi->name, current->comm, current->pid);
+		if (connection)
+			drbd_info(connection, "Restarting %s thread\n", thi->name);
+		else
+			drbd_info(resource, "Restarting %s thread\n", thi->name);
 		fallthrough;
 	case RUNNING:
 	case RESTARTING:
@@ -443,6 +685,12 @@ void _drbd_thread_stop(struct drbd_thread *thi, int restart, int wait)
 		return;
 	}
 
+	if (thi->t_state == EXITING && ns == RESTARTING) {
+		/* Do not abort a stop request, otherwise a waiter might never wake up */
+		spin_unlock_irqrestore(&thi->t_lock, flags);
+		return;
+	}
+
 	if (thi->t_state != ns) {
 		if (thi->task == NULL) {
 			spin_unlock_irqrestore(&thi->t_lock, flags);
@@ -455,7 +703,6 @@ void _drbd_thread_stop(struct drbd_thread *thi, int restart, int wait)
 		if (thi->task != current)
 			send_sig(DRBD_SIGKILL, thi->task, 1);
 	}
-
 	spin_unlock_irqrestore(&thi->t_lock, flags);
 
 	if (wait)
@@ -473,8 +720,7 @@ static void drbd_calc_cpu_mask(cpumask_var_t *cpu_mask)
 {
 	unsigned int *resources_per_cpu, min_index = ~0;
 
-	resources_per_cpu = kcalloc(nr_cpu_ids, sizeof(*resources_per_cpu),
-				    GFP_KERNEL);
+	resources_per_cpu = kzalloc(nr_cpu_ids * sizeof(*resources_per_cpu), GFP_KERNEL);
 	if (resources_per_cpu) {
 		struct drbd_resource *resource;
 		unsigned int cpu, min = ~0;
@@ -521,6 +767,46 @@ void drbd_thread_current_set_cpu(struct drbd_thread *thi)
 #define drbd_calc_cpu_mask(A) ({})
 #endif
 
+static bool drbd_all_neighbor_secondary(struct drbd_device *device, u64 *authoritative_ptr)
+{
+	struct drbd_peer_device *peer_device;
+	bool all_secondary = true;
+	u64 authoritative = 0;
+	int id;
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		if (peer_device->repl_state[NOW] >= L_ESTABLISHED &&
+		    peer_device->connection->peer_role[NOW] == R_PRIMARY) {
+			all_secondary = false;
+			id = peer_device->node_id;
+			authoritative |= NODE_MASK(id);
+		}
+	}
+	rcu_read_unlock();
+	if (authoritative_ptr)
+		*authoritative_ptr = authoritative;
+	return all_secondary;
+}
+
+/* This function is supposed to have the same semantics as calc_device_stable() in drbd_state.c:
+   A primary is stable since it is authoritative.
+   Neighbors of a primary and resync target nodes are unstable.
+   Nodes further away from a primary are stable! */
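+/* For illustration, take a hypothetical three-node chain
+ * A(Primary) <-> B <-> C: node B has a primary neighbor and is unstable,
+ * with A reported in the authoritative mask; node C only sees the
+ * secondary B and therefore counts as stable. */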
+bool drbd_device_stable(struct drbd_device *device, u64 *authoritative_ptr)
+{
+	struct drbd_resource *resource = device->resource;
+	bool device_stable = true;
+
+	if (resource->role[NOW] == R_PRIMARY)
+		return true;
+
+	if (!drbd_all_neighbor_secondary(device, authoritative_ptr))
+		return false;
+
+	return device_stable;
+}
+
 /*
  * drbd_header_size  -  size of a packet header
  *
@@ -532,177 +818,370 @@ unsigned int drbd_header_size(struct drbd_connection *connection)
 {
 	if (connection->agreed_pro_version >= 100) {
 		BUILD_BUG_ON(!IS_ALIGNED(sizeof(struct p_header100), 8));
-		return sizeof(struct p_header100);
+		return sizeof(struct p_header100); /* 16 */
 	} else {
 		BUILD_BUG_ON(sizeof(struct p_header80) !=
 			     sizeof(struct p_header95));
 		BUILD_BUG_ON(!IS_ALIGNED(sizeof(struct p_header80), 8));
-		return sizeof(struct p_header80);
+		return sizeof(struct p_header80); /* 8 */
 	}
 }
 
-static unsigned int prepare_header80(struct p_header80 *h, enum drbd_packet cmd, int size)
+static void prepare_header80(struct p_header80 *h, enum drbd_packet cmd, int size)
 {
 	h->magic   = cpu_to_be32(DRBD_MAGIC);
 	h->command = cpu_to_be16(cmd);
-	h->length  = cpu_to_be16(size);
-	return sizeof(struct p_header80);
+	h->length  = cpu_to_be16(size - sizeof(struct p_header80));
 }
 
-static unsigned int prepare_header95(struct p_header95 *h, enum drbd_packet cmd, int size)
+static void prepare_header95(struct p_header95 *h, enum drbd_packet cmd, int size)
 {
 	h->magic   = cpu_to_be16(DRBD_MAGIC_BIG);
 	h->command = cpu_to_be16(cmd);
-	h->length = cpu_to_be32(size);
-	return sizeof(struct p_header95);
+	h->length = cpu_to_be32(size - sizeof(struct p_header95));
 }
 
-static unsigned int prepare_header100(struct p_header100 *h, enum drbd_packet cmd,
+static void prepare_header100(struct p_header100 *h, enum drbd_packet cmd,
 				      int size, int vnr)
 {
 	h->magic = cpu_to_be32(DRBD_MAGIC_100);
 	h->volume = cpu_to_be16(vnr);
 	h->command = cpu_to_be16(cmd);
-	h->length = cpu_to_be32(size);
+	h->length = cpu_to_be32(size - sizeof(struct p_header100));
 	h->pad = 0;
-	return sizeof(struct p_header100);
 }
 
-static unsigned int prepare_header(struct drbd_connection *connection, int vnr,
-				   void *buffer, enum drbd_packet cmd, int size)
+static void prepare_header(struct drbd_connection *connection, int vnr,
+			   void *buffer, enum drbd_packet cmd, int size)
 {
 	if (connection->agreed_pro_version >= 100)
-		return prepare_header100(buffer, cmd, size, vnr);
+		prepare_header100(buffer, cmd, size, vnr);
 	else if (connection->agreed_pro_version >= 95 &&
 		 size > DRBD_MAX_SIZE_H80_PACKET)
-		return prepare_header95(buffer, cmd, size);
+		prepare_header95(buffer, cmd, size);
 	else
-		return prepare_header80(buffer, cmd, size);
+		prepare_header80(buffer, cmd, size);
+}
+
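+/* A page_count() of 1 means the transport holds no more references from
+ * earlier sends, so the page can be reused in place; otherwise try to
+ * swap in a fresh page, and failing that, sleep briefly and retry.
+ */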
+static void new_or_recycle_send_buffer_page(struct drbd_send_buffer *sbuf)
+{
+	while (1) {
+		struct page *page;
+		int count = page_count(sbuf->page);
+
+		BUG_ON(count == 0);
+		if (count == 1)
+			goto have_page;
+
+		page = alloc_page(GFP_NOIO | __GFP_NORETRY | __GFP_NOWARN);
+		if (page) {
+			put_page(sbuf->page);
+			sbuf->page = page;
+			goto have_page;
+		}
+
+		schedule_timeout_uninterruptible(HZ / 10);
+	}
+have_page:
+	sbuf->unsent =
+	sbuf->pos = page_address(sbuf->page);
+}
+
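+/* Reserve @size bytes in the per-stream send buffer page. If the new
+ * packet would not fit into the current page, flush what is queued and
+ * start over on a fresh (or recycled) page.
+ */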
+static char * __must_check alloc_send_buffer(struct drbd_connection *connection, int size,
+			       enum drbd_stream drbd_stream)
+{
+	struct drbd_send_buffer *sbuf = &connection->send_buffer[drbd_stream];
+	char *page_start = page_address(sbuf->page);
+	int err;
+
+	if (sbuf->pos - page_start + size > PAGE_SIZE) {
+		err = flush_send_buffer(connection, drbd_stream);
+		if (err)
+			return ERR_PTR(err);
+		new_or_recycle_send_buffer_page(sbuf);
+	}
+
+	sbuf->allocated_size = size;
+	sbuf->additional_size = 0;
+
+	return sbuf->pos;
+}
+
+/* If we called alloc_send_buffer(), possibly indirectly via __conn_prepare_command(),
+ * but then decide that we actually don't want to use it.
+ */
+static void cancel_send_buffer(struct drbd_connection *connection,
+		enum drbd_stream drbd_stream)
+{
+	connection->send_buffer[drbd_stream].allocated_size = 0;
+}
+
+/* Only used to shrink the previously allocated size. */
+static void resize_prepared_command(struct drbd_connection *connection,
+				    enum drbd_stream drbd_stream,
+				    int size)
+{
+	connection->send_buffer[drbd_stream].allocated_size =
+		size + drbd_header_size(connection);
 }
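+
+/* (Shrinking is needed by senders like _drbd_send_uuids110() below that
+ * allocate for the worst case and only then learn the actual size.) */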
 
-static void *__conn_prepare_command(struct drbd_connection *connection,
-				    struct drbd_socket *sock)
+static void additional_size_command(struct drbd_connection *connection,
+				    enum drbd_stream drbd_stream,
+				    int additional_size)
 {
-	if (!sock->socket)
+	connection->send_buffer[drbd_stream].additional_size = additional_size;
+}
+
+void *__conn_prepare_command(struct drbd_connection *connection, int size,
+				    enum drbd_stream drbd_stream)
+{
+	struct drbd_transport *transport = &connection->transport;
+	int header_size;
+	void *p;
+
+	if (connection->cstate[NOW] < C_CONNECTING)
+		return NULL;
+
+	if (!transport->class->ops.stream_ok(transport, drbd_stream))
+		return NULL;
+
+	header_size = drbd_header_size(connection);
+	p = alloc_send_buffer(connection, header_size + size, drbd_stream) + header_size;
+	if (IS_ERR(p))
 		return NULL;
-	return sock->sbuf + drbd_header_size(connection);
+	return p;
 }
 
-void *conn_prepare_command(struct drbd_connection *connection, struct drbd_socket *sock)
+/**
+ * conn_prepare_command() - Allocate a send buffer for a packet/command
+ * @connection: the connection the packet will be sent through
+ * @size:	number of bytes to allocate
+ * @drbd_stream: DATA_STREAM or CONTROL_STREAM
+ *
+ * This allocates a buffer with capacity to hold the header and the
+ * requested size. On success, a pointer to the first byte behind the
+ * header is returned. The caller is expected to call xxx_send_command()
+ * soon.
+ */
+void *conn_prepare_command(struct drbd_connection *connection, int size,
+			   enum drbd_stream drbd_stream)
 {
 	void *p;
 
-	mutex_lock(&sock->mutex);
-	p = __conn_prepare_command(connection, sock);
+	mutex_lock(&connection->mutex[drbd_stream]);
+	p = __conn_prepare_command(connection, size, drbd_stream);
 	if (!p)
-		mutex_unlock(&sock->mutex);
+		mutex_unlock(&connection->mutex[drbd_stream]);
 
 	return p;
 }
 
-void *drbd_prepare_command(struct drbd_peer_device *peer_device, struct drbd_socket *sock)
+/**
+ * drbd_prepare_command() - Allocate a send buffer for a packet/command
+ * @peer_device: the DRBD peer device the packet will be sent to
+ * @size: number of bytes to allocate
+ * @drbd_stream: DATA_STREAM or CONTROL_STREAM
+ *
+ * This allocates a buffer with capacity to hold the header and the
+ * requested size. On success, a pointer to the first byte behind the
+ * header is returned. The caller is expected to call xxx_send_command()
+ * soon.
+ */
+void *drbd_prepare_command(struct drbd_peer_device *peer_device, int size, enum drbd_stream drbd_stream)
 {
-	return conn_prepare_command(peer_device->connection, sock);
+	return conn_prepare_command(peer_device->connection, size, drbd_stream);
 }
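+
+/* A minimal usage sketch (mirroring drbd_send_current_uuid() below):
+ * prepare, fill in the payload, send; the per-stream mutex taken by
+ * drbd_prepare_command() is released again by drbd_send_command().
+ *
+ *	struct p_current_uuid *p;
+ *
+ *	p = drbd_prepare_command(peer_device, sizeof(*p), DATA_STREAM);
+ *	if (!p)
+ *		return -EIO;
+ *	p->uuid = cpu_to_be64(current_uuid);
+ *	p->weak_nodes = cpu_to_be64(weak_nodes);
+ *	return drbd_send_command(peer_device, P_CURRENT_UUID, DATA_STREAM);
+ */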
 
-static int __send_command(struct drbd_connection *connection, int vnr,
-			  struct drbd_socket *sock, enum drbd_packet cmd,
-			  unsigned int header_size, void *data,
-			  unsigned int size)
+static int flush_send_buffer(struct drbd_connection *connection, enum drbd_stream drbd_stream)
 {
-	int msg_flags;
+	struct drbd_send_buffer *sbuf = &connection->send_buffer[drbd_stream];
+	struct drbd_transport *transport = &connection->transport;
+	struct drbd_transport_ops *tr_ops = &transport->class->ops;
+	unsigned int flags, offset, size;
 	int err;
 
-	/*
-	 * Called with @data == NULL and the size of the data blocks in @size
-	 * for commands that send data blocks.  For those commands, omit the
-	 * MSG_MORE flag: this will increase the likelihood that data blocks
-	 * which are page aligned on the sender will end up page aligned on the
-	 * receiver.
-	 */
-	msg_flags = data ? MSG_MORE : 0;
-
-	header_size += prepare_header(connection, vnr, sock->sbuf, cmd,
-				      header_size + size);
-	err = drbd_send_all(connection, sock->socket, sock->sbuf, header_size,
-			    msg_flags);
-	if (data && !err)
-		err = drbd_send_all(connection, sock->socket, data, size, 0);
-	/* DRBD protocol "pings" are latency critical.
-	 * This is supposed to trigger tcp_push_pending_frames() */
-	if (!err && (cmd == P_PING || cmd == P_PING_ACK))
-		tcp_sock_set_nodelay(sock->socket->sk);
+	size = sbuf->pos - sbuf->unsent + sbuf->allocated_size;
+	if (size == 0)
+		return 0;
+
+	if (drbd_stream == CONTROL_STREAM) {
+		connection->ctl_packets++;
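+		/* On u64 wrap-around of the byte counter, restart both
+		 * running totals so packets and bytes stay consistent. */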
+		if (check_add_overflow(connection->ctl_bytes, size, &connection->ctl_bytes)) {
+			connection->ctl_bytes = size;
+			connection->ctl_packets = 1;
+		}
+	}
+
+	if (drbd_stream == DATA_STREAM) {
+		rcu_read_lock();
+		connection->transport.ko_count = rcu_dereference(connection->transport.net_conf)->ko_count;
+		rcu_read_unlock();
+	}
+
+	flags = (connection->cstate[NOW] < C_CONNECTING ? MSG_DONTWAIT : 0) |
+		(sbuf->additional_size ? MSG_MORE : 0);
+	offset = sbuf->unsent - (char *)page_address(sbuf->page);
+	err = tr_ops->send_page(transport, drbd_stream, sbuf->page, offset, size, flags);
+	if (err) {
+		change_cstate(connection, C_NETWORK_FAILURE, CS_HARD);
+	} else {
+		sbuf->unsent =
+		sbuf->pos += sbuf->allocated_size;      /* send buffer submitted! */
+	}
+
+	sbuf->allocated_size = 0;
 
 	return err;
 }
 
-static int __conn_send_command(struct drbd_connection *connection, struct drbd_socket *sock,
-			       enum drbd_packet cmd, unsigned int header_size,
-			       void *data, unsigned int size)
+/*
+ * SFLAG_FLUSH makes sure the packet (and everything queued in front
+ * of it) gets sent immediately, regardless of whether the stream is
+ * currently corked.
+ *
+ * This is used for P_PING, P_PING_ACK, P_TWOPC_PREPARE, P_TWOPC_ABORT,
+ * P_TWOPC_YES, P_TWOPC_NO, P_TWOPC_RETRY and P_TWOPC_COMMIT.
+ *
+ * This quirk is necessary because the stream is corked while the worker
+ * thread processes work items and uncorked when it stops processing
+ * items. That works perfectly to coalesce ack packets etc., but a work
+ * item doing two-phase commits needs to override that behavior.
+ */
+#define SFLAG_FLUSH 0x10
+#define DRBD_STREAM_FLAGS (SFLAG_FLUSH)
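+
+/* e.g. latency-critical packets get sent out immediately:
+ *	send_command(connection, -1, P_PING, CONTROL_STREAM | SFLAG_FLUSH);
+ * while ordinary acks stay coalesced under the cork. */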
+
+static inline enum drbd_stream extract_stream(int stream_and_flags)
 {
-	return __send_command(connection, 0, sock, cmd, header_size, data, size);
+	return stream_and_flags & ~DRBD_STREAM_FLAGS;
 }
 
-int conn_send_command(struct drbd_connection *connection, struct drbd_socket *sock,
-		      enum drbd_packet cmd, unsigned int header_size,
-		      void *data, unsigned int size)
+int __send_command(struct drbd_connection *connection, int vnr,
+		   enum drbd_packet cmd, int stream_and_flags)
 {
+	enum drbd_stream drbd_stream = extract_stream(stream_and_flags);
+	struct drbd_send_buffer *sbuf = &connection->send_buffer[drbd_stream];
+	struct drbd_transport *transport = &connection->transport;
+	struct drbd_transport_ops *tr_ops = &transport->class->ops;
+	/* CORKED + drbd_stream is either DATA_CORKED or CONTROL_CORKED */
+	bool corked = test_bit(CORKED + drbd_stream, &connection->flags);
+	bool flush = stream_and_flags & SFLAG_FLUSH;
 	int err;
 
-	err = __conn_send_command(connection, sock, cmd, header_size, data, size);
-	mutex_unlock(&sock->mutex);
+	if (connection->cstate[NOW] < C_CONNECTING)
+		return -EIO;
+	prepare_header(connection, vnr, sbuf->pos, cmd,
+		       sbuf->allocated_size + sbuf->additional_size);
+
+	if (corked && !flush) {
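+		/* Keep the packet queued in the send buffer; it goes out
+		 * with the next flush or uncork. */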
+		sbuf->pos += sbuf->allocated_size;
+		sbuf->allocated_size = 0;
+		err = 0;
+	} else {
+		err = flush_send_buffer(connection, drbd_stream);
+
+		/* DRBD protocol "pings" are latency critical.
+		 * This is supposed to trigger tcp_push_pending_frames() */
+		if (!err && flush)
+			tr_ops->hint(transport, drbd_stream, NODELAY);
+
+	}
+
 	return err;
 }
 
-int drbd_send_command(struct drbd_peer_device *peer_device, struct drbd_socket *sock,
-		      enum drbd_packet cmd, unsigned int header_size,
-		      void *data, unsigned int size)
+void drbd_cork(struct drbd_connection *connection, enum drbd_stream stream)
 {
+	struct drbd_transport *transport = &connection->transport;
+	struct drbd_transport_ops *tr_ops = &transport->class->ops;
+
+	mutex_lock(&connection->mutex[stream]);
+	set_bit(CORKED + stream, &connection->flags);
+	/* only call into transport, if we expect it to work */
+	if (connection->cstate[NOW] >= C_CONNECTING)
+		tr_ops->hint(transport, stream, CORK);
+	mutex_unlock(&connection->mutex[stream]);
+}
+
+int drbd_uncork(struct drbd_connection *connection, enum drbd_stream stream)
+{
+	struct drbd_transport *transport = &connection->transport;
+	struct drbd_transport_ops *tr_ops = &transport->class->ops;
 	int err;
 
-	err = __send_command(peer_device->connection, peer_device->device->vnr,
-			     sock, cmd, header_size, data, size);
-	mutex_unlock(&sock->mutex);
+	mutex_lock(&connection->mutex[stream]);
+	err = flush_send_buffer(connection, stream);
+	if (!err) {
+		clear_bit(CORKED + stream, &connection->flags);
+		/* only call into transport, if we expect it to work */
+		if (connection->cstate[NOW] >= C_CONNECTING)
+			tr_ops->hint(transport, stream, UNCORK);
+	}
+	mutex_unlock(&connection->mutex[stream]);
 	return err;
 }
 
-int drbd_send_ping(struct drbd_connection *connection)
+int send_command(struct drbd_connection *connection, int vnr,
+		 enum drbd_packet cmd, int stream_and_flags)
 {
-	struct drbd_socket *sock;
+	enum drbd_stream drbd_stream = extract_stream(stream_and_flags);
+	int err;
 
-	sock = &connection->meta;
-	if (!conn_prepare_command(connection, sock))
-		return -EIO;
-	return conn_send_command(connection, sock, P_PING, 0, NULL, 0);
+	err = __send_command(connection, vnr, cmd, stream_and_flags);
+	mutex_unlock(&connection->mutex[drbd_stream]);
+	return err;
 }
 
-int drbd_send_ping_ack(struct drbd_connection *connection)
+int drbd_send_command(struct drbd_peer_device *peer_device,
+		      enum drbd_packet cmd, enum drbd_stream drbd_stream)
 {
-	struct drbd_socket *sock;
+	return send_command(peer_device->connection, peer_device->device->vnr,
+			    cmd, drbd_stream);
+}
 
-	sock = &connection->meta;
-	if (!conn_prepare_command(connection, sock))
+int drbd_send_ping(struct drbd_connection *connection)
+{
+	if (!conn_prepare_command(connection, 0, CONTROL_STREAM))
 		return -EIO;
-	return conn_send_command(connection, sock, P_PING_ACK, 0, NULL, 0);
+	return send_command(connection, -1, P_PING, CONTROL_STREAM | SFLAG_FLUSH);
 }
 
-int drbd_send_sync_param(struct drbd_peer_device *peer_device)
+void drbd_send_ping_ack_wf(struct work_struct *ws)
 {
-	struct drbd_socket *sock;
-	struct p_rs_param_95 *p;
-	int size;
-	const int apv = peer_device->connection->agreed_pro_version;
-	enum drbd_packet cmd;
-	struct net_conf *nc;
-	struct disk_conf *dc;
-
-	sock = &peer_device->connection->data;
-	p = drbd_prepare_command(peer_device, sock);
-	if (!p)
-		return -EIO;
+	struct drbd_connection *connection =
+		container_of(ws, struct drbd_connection, send_ping_ack_work);
+	int err;
 
-	rcu_read_lock();
-	nc = rcu_dereference(peer_device->connection->net_conf);
+	err = conn_prepare_command(connection, 0, CONTROL_STREAM) ? 0 : -EIO;
+	if (!err)
+		err = send_command(connection, -1, P_PING_ACK, CONTROL_STREAM | SFLAG_FLUSH);
+	if (err)
+		change_cstate(connection, C_NETWORK_FAILURE, CS_HARD);
+}
+
+int drbd_send_peer_ack(struct drbd_connection *connection, u64 mask, u64 dagtag_sector)
+{
+	struct p_peer_ack *p;
+
+	p = conn_prepare_command(connection, sizeof(*p), CONTROL_STREAM);
+	if (!p)
+		return -EIO;
+	p->mask = cpu_to_be64(mask);
+	p->dagtag = cpu_to_be64(dagtag_sector);
+
+	return send_command(connection, -1, P_PEER_ACK, CONTROL_STREAM);
+}
+
+int drbd_send_sync_param(struct drbd_peer_device *peer_device)
+{
+	struct p_rs_param_95 *p;
+	int size;
+	const int apv = peer_device->connection->agreed_pro_version;
+	enum drbd_packet cmd;
+	struct net_conf *nc;
+	struct peer_device_conf *pdc;
+
+	rcu_read_lock();
+	nc = rcu_dereference(peer_device->connection->transport.net_conf);
 
 	size = apv <= 87 ? sizeof(struct p_rs_param)
 		: apv == 88 ? sizeof(struct p_rs_param)
@@ -711,18 +1190,30 @@ int drbd_send_sync_param(struct drbd_peer_device *peer_device)
 		: /* apv >= 95 */ sizeof(struct p_rs_param_95);
 
 	cmd = apv >= 89 ? P_SYNC_PARAM89 : P_SYNC_PARAM;
+	rcu_read_unlock();
+
+	p = drbd_prepare_command(peer_device, size, DATA_STREAM);
+	if (!p)
+		return -EIO;
 
 	/* initialize verify_alg and csums_alg */
-	BUILD_BUG_ON(sizeof(p->algs) != 2 * SHARED_SECRET_MAX);
-	memset(&p->algs, 0, sizeof(p->algs));
+	memset(p->verify_alg, 0, sizeof(p->verify_alg));
+	memset(p->csums_alg, 0, sizeof(p->csums_alg));
+
+	rcu_read_lock();
+	nc = rcu_dereference(peer_device->connection->transport.net_conf);
 
 	if (get_ldev(peer_device->device)) {
-		dc = rcu_dereference(peer_device->device->ldev->disk_conf);
-		p->resync_rate = cpu_to_be32(dc->resync_rate);
-		p->c_plan_ahead = cpu_to_be32(dc->c_plan_ahead);
-		p->c_delay_target = cpu_to_be32(dc->c_delay_target);
-		p->c_fill_target = cpu_to_be32(dc->c_fill_target);
-		p->c_max_rate = cpu_to_be32(dc->c_max_rate);
+		pdc = rcu_dereference(peer_device->conf);
+		/* These values will be ignored by peers running DRBD 9.2+, but
+		 * we have to send something, so send the real values. We
+		 * cannot omit the entire packet because we must verify that
+		 * the algorithms match. */
+		p->resync_rate = cpu_to_be32(pdc->resync_rate);
+		p->c_plan_ahead = cpu_to_be32(pdc->c_plan_ahead);
+		p->c_delay_target = cpu_to_be32(pdc->c_delay_target);
+		p->c_fill_target = cpu_to_be32(pdc->c_fill_target);
+		p->c_max_rate = cpu_to_be32(pdc->c_max_rate);
 		put_ldev(peer_device->device);
 	} else {
 		p->resync_rate = cpu_to_be32(DRBD_RESYNC_RATE_DEF);
@@ -738,36 +1229,37 @@ int drbd_send_sync_param(struct drbd_peer_device *peer_device)
 		strscpy(p->csums_alg, nc->csums_alg);
 	rcu_read_unlock();
 
-	return drbd_send_command(peer_device, sock, cmd, size, NULL, 0);
+	return drbd_send_command(peer_device, cmd, DATA_STREAM);
 }
 
 int __drbd_send_protocol(struct drbd_connection *connection, enum drbd_packet cmd)
 {
-	struct drbd_socket *sock;
 	struct p_protocol *p;
 	struct net_conf *nc;
 	size_t integrity_alg_len;
 	int size, cf;
 
-	sock = &connection->data;
-	p = __conn_prepare_command(connection, sock);
-	if (!p)
-		return -EIO;
-
-	rcu_read_lock();
-	nc = rcu_dereference(connection->net_conf);
-
-	if (nc->tentative && connection->agreed_pro_version < 92) {
-		rcu_read_unlock();
+	if (test_bit(CONN_DRY_RUN, &connection->flags) && connection->agreed_pro_version < 92) {
+		clear_bit(CONN_DRY_RUN, &connection->flags);
 		drbd_err(connection, "--dry-run is not supported by peer");
 		return -EOPNOTSUPP;
 	}
 
 	size = sizeof(*p);
+	rcu_read_lock();
+	nc = rcu_dereference(connection->transport.net_conf);
 	if (connection->agreed_pro_version >= 87) {
 		integrity_alg_len = strlen(nc->integrity_alg) + 1;
 		size += integrity_alg_len;
 	}
+	rcu_read_unlock();
+
+	p = __conn_prepare_command(connection, size, DATA_STREAM);
+	if (!p)
+		return -EIO;
+
+	rcu_read_lock();
+	nc = rcu_dereference(connection->transport.net_conf);
 
 	p->protocol      = cpu_to_be32(nc->wire_protocol);
 	p->after_sb_0p   = cpu_to_be32(nc->after_sb_0p);
@@ -775,9 +1267,9 @@ int __drbd_send_protocol(struct drbd_connection *connection, enum drbd_packet cm
 	p->after_sb_2p   = cpu_to_be32(nc->after_sb_2p);
 	p->two_primaries = cpu_to_be32(nc->two_primaries);
 	cf = 0;
-	if (nc->discard_my_data)
+	if (test_bit(CONN_DISCARD_MY_DATA, &connection->flags))
 		cf |= CF_DISCARD_MY_DATA;
-	if (nc->tentative)
+	if (test_bit(CONN_DRY_RUN, &connection->flags))
 		cf |= CF_DRY_RUN;
 	p->conn_flags    = cpu_to_be32(cf);
 
@@ -785,133 +1277,301 @@ int __drbd_send_protocol(struct drbd_connection *connection, enum drbd_packet cm
 		strscpy(p->integrity_alg, nc->integrity_alg, integrity_alg_len);
 	rcu_read_unlock();
 
-	return __conn_send_command(connection, sock, cmd, size, NULL, 0);
-}
-
-int drbd_send_protocol(struct drbd_connection *connection)
-{
-	int err;
-
-	mutex_lock(&connection->data.mutex);
-	err = __drbd_send_protocol(connection, P_PROTOCOL);
-	mutex_unlock(&connection->data.mutex);
-
-	return err;
+	return __send_command(connection, -1, cmd, DATA_STREAM);
 }
 
 static int _drbd_send_uuids(struct drbd_peer_device *peer_device, u64 uuid_flags)
 {
 	struct drbd_device *device = peer_device->device;
-	struct drbd_socket *sock;
 	struct p_uuids *p;
 	int i;
 
 	if (!get_ldev_if_state(device, D_NEGOTIATING))
 		return 0;
 
-	sock = &peer_device->connection->data;
-	p = drbd_prepare_command(peer_device, sock);
+	p = drbd_prepare_command(peer_device, sizeof(*p), DATA_STREAM);
 	if (!p) {
 		put_ldev(device);
 		return -EIO;
 	}
+
 	spin_lock_irq(&device->ldev->md.uuid_lock);
-	for (i = UI_CURRENT; i < UI_SIZE; i++)
-		p->uuid[i] = cpu_to_be64(device->ldev->md.uuid[i]);
+	p->current_uuid = cpu_to_be64(drbd_current_uuid(device));
+	p->bitmap_uuid = cpu_to_be64(drbd_bitmap_uuid(peer_device));
+	for (i = 0; i < ARRAY_SIZE(p->history_uuids); i++)
+		p->history_uuids[i] = cpu_to_be64(drbd_history_uuid(device, i));
 	spin_unlock_irq(&device->ldev->md.uuid_lock);
 
-	device->comm_bm_set = drbd_bm_total_weight(device);
-	p->uuid[UI_SIZE] = cpu_to_be64(device->comm_bm_set);
+	peer_device->comm_bm_set = drbd_bm_total_weight(peer_device);
+	p->dirty_bits = cpu_to_be64(peer_device->comm_bm_set);
+
+	if (test_bit(DISCARD_MY_DATA, &peer_device->flags))
+		uuid_flags |= UUID_FLAG_DISCARD_MY_DATA;
+	if (test_bit(CRASHED_PRIMARY, &device->flags))
+		uuid_flags |= UUID_FLAG_CRASHED_PRIMARY;
+	if (!drbd_md_test_flag(device->ldev, MDF_CONSISTENT))
+		uuid_flags |= UUID_FLAG_INCONSISTENT;
+
+	/* Silently mask out any "too recent" flags;
+	 * we cannot communicate those in old DRBD
+	 * protocol versions. */
+	uuid_flags &= UUID_FLAG_MASK_COMPAT_84;
+
+	peer_device->comm_uuid_flags = uuid_flags;
+	p->uuid_flags = cpu_to_be64(uuid_flags);
+
+	put_ldev(device);
+
+	return drbd_send_command(peer_device, P_UUIDS, DATA_STREAM);
+}
+
+static u64 __bitmap_uuid(struct drbd_device *device, int node_id)
+{
+	struct drbd_peer_device *peer_device;
+	struct drbd_peer_md *peer_md = device->ldev->md.peers;
+	u64 bitmap_uuid = peer_md[node_id].bitmap_uuid;
+
+	/* Sending a bitmap_uuid of 0 means that we are in sync with that peer.
+	   The recipient of this message might use this assumption to throw away its
+	   bitmap to that peer.
+
+	   Send -1 instead if we are a resync target from that peer but not at the
+	   same current UUID.
+	   This corner case is relevant if we finish resync from an UpToDate peer first,
+	   and the second resync (which was paused first) is from an Outdated node.
+	   That second resync then gets canceled by the resync target because the first
+	   resync finished successfully.
+
+	   An exception to the above is when the peer's UUID is not known yet.
+	 */
+
 	rcu_read_lock();
-	uuid_flags |= rcu_dereference(peer_device->connection->net_conf)->discard_my_data ? 1 : 0;
+	peer_device = peer_device_by_node_id(device, node_id);
+	if (peer_device) {
+		enum drbd_repl_state repl_state = peer_device->repl_state[NOW];
+		if (bitmap_uuid == 0 &&
+		    (repl_state == L_SYNC_TARGET || repl_state == L_PAUSED_SYNC_T) &&
+		    peer_device->current_uuid != 0 &&
+		    (peer_device->current_uuid & ~UUID_PRIMARY) !=
+		    (drbd_current_uuid(device) & ~UUID_PRIMARY))
+			bitmap_uuid = -1;
+	}
 	rcu_read_unlock();
-	uuid_flags |= test_bit(CRASHED_PRIMARY, &device->flags) ? 2 : 0;
-	uuid_flags |= device->new_state_tmp.disk == D_INCONSISTENT ? 4 : 0;
-	p->uuid[UI_FLAGS] = cpu_to_be64(uuid_flags);
 
-	put_ldev(device);
-	return drbd_send_command(peer_device, sock, P_UUIDS, sizeof(*p), NULL, 0);
+	return bitmap_uuid;
 }
 
-int drbd_send_uuids(struct drbd_peer_device *peer_device)
+u64 drbd_collect_local_uuid_flags(struct drbd_peer_device *peer_device, u64 *authoritative_mask)
 {
-	return _drbd_send_uuids(peer_device, 0);
+	struct drbd_device *device = peer_device->device;
+	u64 uuid_flags = 0;
+
+	if (test_bit(DISCARD_MY_DATA, &peer_device->flags))
+		uuid_flags |= UUID_FLAG_DISCARD_MY_DATA;
+	if (test_bit(CRASHED_PRIMARY, &device->flags))
+		uuid_flags |= UUID_FLAG_CRASHED_PRIMARY;
+	if (!drbd_md_test_flag(device->ldev, MDF_CONSISTENT))
+		uuid_flags |= UUID_FLAG_INCONSISTENT;
+	if (test_bit(RECONNECT, &peer_device->connection->flags))
+		uuid_flags |= UUID_FLAG_RECONNECT;
+	if (test_bit(PRIMARY_LOST_QUORUM, &device->flags))
+		uuid_flags |= UUID_FLAG_PRIMARY_LOST_QUORUM;
+	if (drbd_device_stable(device, authoritative_mask))
+		uuid_flags |= UUID_FLAG_STABLE;
+
+	return uuid_flags;
+}
+
+/* Sets UUID_FLAG_SYNC_TARGET in *uuid_flags as appropriate (uuid_flags may be NULL). */
+u64 drbd_resolved_uuid(struct drbd_peer_device *peer_device_base, u64 *uuid_flags)
+{
+	struct drbd_device *device = peer_device_base->device;
+	struct drbd_peer_device *peer_device;
+	u64 uuid = drbd_current_uuid(device);
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		if (peer_device->node_id == peer_device_base->node_id)
+			continue;
+		if (peer_device->repl_state[NOW] == L_SYNC_TARGET) {
+			uuid = peer_device->current_uuid;
+			if (uuid_flags)
+				*uuid_flags |= UUID_FLAG_SYNC_TARGET;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return uuid;
+}
+
+static int _drbd_send_uuids110(struct drbd_peer_device *peer_device, u64 uuid_flags, u64 node_mask)
+{
+	struct drbd_device *device = peer_device->device;
+	const int my_node_id = device->resource->res_opts.node_id;
+	struct drbd_peer_md *peer_md;
+	struct p_uuids110 *p;
+	bool sent_one_unallocated;
+	int i, pos = 0;
+	u64 local_uuid_flags = 0, authoritative_mask, bitmap_uuids_mask = 0;
+	int p_size = sizeof(*p);
+
+	if (!get_ldev_if_state(device, D_NEGOTIATING))
+		return drbd_send_current_uuid(peer_device, device->exposed_data_uuid,
+					      drbd_weak_nodes_device(device));
+
+	peer_md = device->ldev->md.peers;
+
+	p_size += (DRBD_PEERS_MAX + HISTORY_UUIDS) * sizeof(p->other_uuids[0]);
+	p = drbd_prepare_command(peer_device, p_size, DATA_STREAM);
+	if (!p) {
+		put_ldev(device);
+		return -EIO;
+	}
+
+	spin_lock_irq(&device->ldev->md.uuid_lock);
+	peer_device->comm_current_uuid = drbd_resolved_uuid(peer_device, &local_uuid_flags);
+	p->current_uuid = cpu_to_be64(peer_device->comm_current_uuid);
+
+	sent_one_unallocated = peer_device->connection->agreed_pro_version < 116;
+	for (i = 0; i < DRBD_NODE_ID_MAX; i++) {
+		u64 val = __bitmap_uuid(device, i);
+		bool send_this = peer_md[i].flags & (MDF_HAVE_BITMAP | MDF_NODE_EXISTS);
+		if (!send_this && !sent_one_unallocated &&
+		    i != my_node_id && i != peer_device->node_id && val) {
+			send_this = true;
+			sent_one_unallocated = true;
+			uuid_flags |= (u64)i << UUID_FLAG_UNALLOC_SHIFT;
+			uuid_flags |= UUID_FLAG_HAS_UNALLOC;
+		}
+		if (send_this) {
+			bitmap_uuids_mask |= NODE_MASK(i);
+			p->other_uuids[pos++] = cpu_to_be64(val);
+		}
+	}
+	peer_device->comm_bitmap_uuid = drbd_bitmap_uuid(peer_device);
+
+	for (i = 0; i < HISTORY_UUIDS; i++)
+		p->other_uuids[pos++] = cpu_to_be64(drbd_history_uuid(device, i));
+	spin_unlock_irq(&device->ldev->md.uuid_lock);
+
+	p->bitmap_uuids_mask = cpu_to_be64(bitmap_uuids_mask);
+
+	peer_device->comm_bm_set = drbd_bm_total_weight(peer_device);
+	p->dirty_bits = cpu_to_be64(peer_device->comm_bm_set);
+	local_uuid_flags |= drbd_collect_local_uuid_flags(peer_device, &authoritative_mask);
+	peer_device->comm_uuid_flags = local_uuid_flags;
+	uuid_flags |= local_uuid_flags;
+	if (uuid_flags & UUID_FLAG_STABLE) {
+		p->node_mask = cpu_to_be64(node_mask);
+	} else {
+		D_ASSERT(peer_device, node_mask == 0);
+		p->node_mask = cpu_to_be64(authoritative_mask);
+	}
+
+	p->uuid_flags = cpu_to_be64(uuid_flags);
+
+	put_ldev(device);
+
+	p_size = sizeof(*p) +
+		(hweight64(bitmap_uuids_mask) + HISTORY_UUIDS) * sizeof(p->other_uuids[0]);
+	resize_prepared_command(peer_device->connection, DATA_STREAM, p_size);
+	return drbd_send_command(peer_device, P_UUIDS110, DATA_STREAM);
 }
 
-int drbd_send_uuids_skip_initial_sync(struct drbd_peer_device *peer_device)
+int drbd_send_uuids(struct drbd_peer_device *peer_device, u64 uuid_flags, u64 node_mask)
 {
-	return _drbd_send_uuids(peer_device, 8);
+	if (peer_device->connection->agreed_pro_version >= 110)
+		return _drbd_send_uuids110(peer_device, uuid_flags, node_mask);
+	else
+		return _drbd_send_uuids(peer_device, uuid_flags);
 }
 
-void drbd_print_uuids(struct drbd_device *device, const char *text)
+void drbd_print_uuids(struct drbd_peer_device *peer_device, const char *text)
 {
+	struct drbd_device *device = peer_device->device;
+
 	if (get_ldev_if_state(device, D_NEGOTIATING)) {
-		u64 *uuid = device->ldev->md.uuid;
-		drbd_info(device, "%s %016llX:%016llX:%016llX:%016llX\n",
-		     text,
-		     (unsigned long long)uuid[UI_CURRENT],
-		     (unsigned long long)uuid[UI_BITMAP],
-		     (unsigned long long)uuid[UI_HISTORY_START],
-		     (unsigned long long)uuid[UI_HISTORY_END]);
+		drbd_info(peer_device, "%s %016llX:%016llX:%016llX:%016llX\n",
+			  text,
+			  (unsigned long long)drbd_current_uuid(device),
+			  (unsigned long long)drbd_bitmap_uuid(peer_device),
+			  (unsigned long long)drbd_history_uuid(device, 0),
+			  (unsigned long long)drbd_history_uuid(device, 1));
 		put_ldev(device);
 	} else {
-		drbd_info(device, "%s effective data uuid: %016llX\n",
-				text,
-				(unsigned long long)device->ed_uuid);
+		drbd_info(peer_device, "%s exposed data uuid: %016llX\n",
+			  text,
+			  (unsigned long long)device->exposed_data_uuid);
 	}
 }
 
+int drbd_send_current_uuid(struct drbd_peer_device *peer_device, u64 current_uuid, u64 weak_nodes)
+{
+	struct p_current_uuid *p;
+
+	p = drbd_prepare_command(peer_device, sizeof(*p), DATA_STREAM);
+	if (!p)
+		return -EIO;
+
+	peer_device->comm_current_uuid = current_uuid;
+	p->uuid = cpu_to_be64(current_uuid);
+	p->weak_nodes = cpu_to_be64(weak_nodes);
+	return drbd_send_command(peer_device, P_CURRENT_UUID, DATA_STREAM);
+}
+
 void drbd_gen_and_send_sync_uuid(struct drbd_peer_device *peer_device)
 {
 	struct drbd_device *device = peer_device->device;
-	struct drbd_socket *sock;
-	struct p_rs_uuid *p;
+	struct p_uuid *p;
 	u64 uuid;
 
-	D_ASSERT(device, device->state.disk == D_UP_TO_DATE);
+	D_ASSERT(device, device->disk_state[NOW] == D_UP_TO_DATE);
 
-	uuid = device->ldev->md.uuid[UI_BITMAP];
+	down_write(&device->uuid_sem);
+	uuid = drbd_bitmap_uuid(peer_device);
 	if (uuid && uuid != UUID_JUST_CREATED)
 		uuid = uuid + UUID_NEW_BM_OFFSET;
 	else
 		get_random_bytes(&uuid, sizeof(u64));
-	drbd_uuid_set(device, UI_BITMAP, uuid);
-	drbd_print_uuids(device, "updated sync UUID");
+	drbd_uuid_set_bitmap(peer_device, uuid);
+	drbd_print_uuids(peer_device, "updated sync UUID");
 	drbd_md_sync(device);
+	downgrade_write(&device->uuid_sem);
 
-	sock = &peer_device->connection->data;
-	p = drbd_prepare_command(peer_device, sock);
+	p = drbd_prepare_command(peer_device, sizeof(*p), DATA_STREAM);
 	if (p) {
 		p->uuid = cpu_to_be64(uuid);
-		drbd_send_command(peer_device, sock, P_SYNC_UUID, sizeof(*p), NULL, 0);
+		drbd_send_command(peer_device, P_SYNC_UUID, DATA_STREAM);
 	}
+	up_read(&device->uuid_sem);
 }
 
-int drbd_send_sizes(struct drbd_peer_device *peer_device, int trigger_reply, enum dds_flags flags)
+int drbd_send_sizes(struct drbd_peer_device *peer_device,
+		    uint64_t u_size_diskless, enum dds_flags flags)
 {
+	struct drbd_connection *connection = peer_device->connection;
 	struct drbd_device *device = peer_device->device;
-	struct drbd_socket *sock;
 	struct p_sizes *p;
 	sector_t d_size, u_size;
 	int q_order_type;
 	unsigned int max_bio_size;
 	unsigned int packet_size;
 
-	sock = &peer_device->connection->data;
-	p = drbd_prepare_command(peer_device, sock);
-	if (!p)
-		return -EIO;
-
 	packet_size = sizeof(*p);
-	if (peer_device->connection->agreed_features & DRBD_FF_WSAME)
+	if (connection->agreed_features & DRBD_FF_WSAME)
 		packet_size += sizeof(p->qlim[0]);
 
+	p = drbd_prepare_command(peer_device, packet_size, DATA_STREAM);
+	if (!p)
+		return -EIO;
+
 	memset(p, 0, packet_size);
 	if (get_ldev_if_state(device, D_NEGOTIATING)) {
 		struct block_device *bdev = device->ldev->backing_bdev;
 		struct request_queue *q = bdev_get_queue(bdev);
 
-		d_size = drbd_get_max_capacity(device->ldev);
+		d_size = drbd_get_max_capacity(device, device->ldev, false);
 		rcu_read_lock();
 		u_size = rcu_dereference(device->ldev->disk_conf)->disk_size;
 		rcu_read_unlock();
@@ -927,6 +1587,10 @@ int drbd_send_sizes(struct drbd_peer_device *peer_device, int trigger_reply, enu
 		p->qlim->io_min = cpu_to_be32(bdev_io_min(bdev));
 		p->qlim->io_opt = cpu_to_be32(bdev_io_opt(bdev));
 		p->qlim->discard_enabled = !!bdev_max_discard_sectors(bdev);
+		p->qlim->write_same_capable = 0;
+		if (connection->agreed_features & DRBD_FF_BM_BLOCK_SHIFT)
+			p->qlim->bm_block_shift_minus_12 =
+				device->bitmap->bm_block_shift - BM_BLOCK_SHIFT_4k;
 		put_ldev(device);
 	} else {
 		struct request_queue *q = device->rq_queue;
@@ -939,128 +1603,307 @@ int drbd_send_sizes(struct drbd_peer_device *peer_device, int trigger_reply, enu
 		p->qlim->io_min = cpu_to_be32(queue_io_min(q));
 		p->qlim->io_opt = cpu_to_be32(queue_io_opt(q));
 		p->qlim->discard_enabled = 0;
+		p->qlim->write_same_capable = 0;
 
 		d_size = 0;
-		u_size = 0;
+		u_size = u_size_diskless;
 		q_order_type = QUEUE_ORDERED_NONE;
 		max_bio_size = DRBD_MAX_BIO_SIZE; /* ... multiple BIOs per peer_request */
 	}
 
-	if (peer_device->connection->agreed_pro_version <= 94)
+	if (connection->agreed_pro_version <= 94)
 		max_bio_size = min(max_bio_size, DRBD_MAX_SIZE_H80_PACKET);
-	else if (peer_device->connection->agreed_pro_version < 100)
+	else if (connection->agreed_pro_version < 100)
 		max_bio_size = min(max_bio_size, DRBD_MAX_BIO_SIZE_P95);
 
+	/* 9.0.4 bumped pro_version to 112 and introduced 2PC resizes */
+	if (connection->agreed_pro_version >= 112)
+		d_size = drbd_partition_data_capacity(device);
+
 	p->d_size = cpu_to_be64(d_size);
 	p->u_size = cpu_to_be64(u_size);
-	if (trigger_reply)
-		p->c_size = 0;
-	else
-		p->c_size = cpu_to_be64(get_capacity(device->vdisk));
+	/*
+	 * TODO verify: this may still be needed for v8 compatibility:
+	 * p->c_size = cpu_to_be64(trigger_reply ? 0 : get_capacity(device->vdisk));
+	 */
+	p->c_size = cpu_to_be64(get_capacity(device->vdisk));
 	p->max_bio_size = cpu_to_be32(max_bio_size);
 	p->queue_order_type = cpu_to_be16(q_order_type);
 	p->dds_flags = cpu_to_be16(flags);
 
-	return drbd_send_command(peer_device, sock, P_SIZES, packet_size, NULL, 0);
+	return drbd_send_command(peer_device, P_SIZES, DATA_STREAM);
 }
 
-/**
- * drbd_send_current_state() - Sends the drbd state to the peer
- * @peer_device:	DRBD peer device.
- */
 int drbd_send_current_state(struct drbd_peer_device *peer_device)
 {
-	struct drbd_socket *sock;
+	return drbd_send_state(peer_device, drbd_get_peer_device_state(peer_device, NOW));
+}
+
+static int send_state(struct drbd_connection *connection, int vnr, union drbd_state state)
+{
 	struct p_state *p;
 
-	sock = &peer_device->connection->data;
-	p = drbd_prepare_command(peer_device, sock);
+	p = conn_prepare_command(connection, sizeof(*p), DATA_STREAM);
 	if (!p)
 		return -EIO;
-	p->state = cpu_to_be32(peer_device->device->state.i); /* Within the send mutex */
-	return drbd_send_command(peer_device, sock, P_STATE, sizeof(*p), NULL, 0);
+
+	if (connection->agreed_pro_version < 110) {
+		/* D_DETACHING was introduced with drbd-9.0 */
+		if (state.disk > D_DETACHING)
+			state.disk--;
+		if (state.pdsk > D_DETACHING)
+			state.pdsk--;
+	}
+
+	p->state = cpu_to_be32(state.i); /* Within the send mutex */
+	return send_command(connection, vnr, P_STATE, DATA_STREAM);
+}
+
+int conn_send_state(struct drbd_connection *connection, union drbd_state state)
+{
+	BUG_ON(connection->agreed_pro_version < 100);
+	return send_state(connection, -1, state);
 }
 
 /**
- * drbd_send_state() - After a state change, sends the new state to the peer
- * @peer_device:      DRBD peer device.
- * @state:     the state to send, not necessarily the current state.
- *
- * Each state change queues an "after_state_ch" work, which will eventually
- * send the resulting new state to the peer. If more state changes happen
- * between queuing and processing of the after_state_ch work, we still
- * want to send each intermediary state in the order it occurred.
+ * drbd_send_state() - Sends the drbd state to the peer
+ * @peer_device: Peer DRBD device to send the state to.
+ * @state: state to send
  */
 int drbd_send_state(struct drbd_peer_device *peer_device, union drbd_state state)
 {
-	struct drbd_socket *sock;
-	struct p_state *p;
-
-	sock = &peer_device->connection->data;
-	p = drbd_prepare_command(peer_device, sock);
-	if (!p)
-		return -EIO;
-	p->state = cpu_to_be32(state.i); /* Within the send mutex */
-	return drbd_send_command(peer_device, sock, P_STATE, sizeof(*p), NULL, 0);
+	peer_device->comm_state = state;
+	return send_state(peer_device->connection, peer_device->device->vnr, state);
 }
 
-int drbd_send_state_req(struct drbd_peer_device *peer_device, union drbd_state mask, union drbd_state val)
+int conn_send_state_req(struct drbd_connection *connection, int vnr, enum drbd_packet cmd,
+			union drbd_state mask, union drbd_state val)
 {
-	struct drbd_socket *sock;
 	struct p_req_state *p;
 
-	sock = &peer_device->connection->data;
-	p = drbd_prepare_command(peer_device, sock);
+	/* Protocols before version 100 only support one volume and connection.
+	 * All state change requests are via P_STATE_CHG_REQ. */
+	if (connection->agreed_pro_version < 100)
+		cmd = P_STATE_CHG_REQ;
+
+	p = conn_prepare_command(connection, sizeof(*p), DATA_STREAM);
 	if (!p)
 		return -EIO;
 	p->mask = cpu_to_be32(mask.i);
 	p->val = cpu_to_be32(val.i);
-	return drbd_send_command(peer_device, sock, P_STATE_CHG_REQ, sizeof(*p), NULL, 0);
+
+	return send_command(connection, vnr, cmd, DATA_STREAM);
 }
 
-int conn_send_state_req(struct drbd_connection *connection, union drbd_state mask, union drbd_state val)
+int conn_send_twopc_request(struct drbd_connection *connection, struct twopc_request *request)
 {
-	enum drbd_packet cmd;
-	struct drbd_socket *sock;
-	struct p_req_state *p;
+	struct drbd_resource *resource = connection->resource;
+	struct p_twopc_request *p;
 
-	cmd = connection->agreed_pro_version < 100 ? P_STATE_CHG_REQ : P_CONN_ST_CHG_REQ;
-	sock = &connection->data;
-	p = conn_prepare_command(connection, sock);
+	dynamic_drbd_dbg(connection, "Sending %s request for state change %u\n",
+			 drbd_packet_name(request->cmd),
+			 request->tid);
+
+	p = conn_prepare_command(connection, sizeof(*p), DATA_STREAM);
 	if (!p)
 		return -EIO;
-	p->mask = cpu_to_be32(mask.i);
-	p->val = cpu_to_be32(val.i);
-	return conn_send_command(connection, sock, cmd, sizeof(*p), NULL, 0);
+	p->tid = cpu_to_be32(request->tid);
+	if (connection->agreed_features & DRBD_FF_2PC_V2) {
+		p->flags = cpu_to_be32(TWOPC_HAS_FLAGS | request->flags);
+		p->_pad = 0;
+		p->s8_initiator_node_id = request->initiator_node_id;
+		p->s8_target_node_id = request->target_node_id;
+	} else {
+		p->u32_initiator_node_id = cpu_to_be32(request->initiator_node_id);
+		p->u32_target_node_id = cpu_to_be32(request->target_node_id);
+	}
+	p->nodes_to_reach = cpu_to_be64(request->nodes_to_reach);
+	switch (resource->twopc.type) {
+	case TWOPC_STATE_CHANGE:
+		if (request->cmd == P_TWOPC_PREPARE) {
+			p->_compat_pad = 0;
+			p->mask = cpu_to_be32(resource->twopc.state_change.mask.i);
+			p->val = cpu_to_be32(resource->twopc.state_change.val.i);
+		} else { /* P_TWOPC_COMMIT */
+			p->primary_nodes = cpu_to_be64(resource->twopc.state_change.primary_nodes);
+			if (request->flags & TWOPC_HAS_REACHABLE &&
+			    connection->agreed_features & DRBD_FF_2PC_V2) {
+				p->reachable_nodes = cpu_to_be64(
+					resource->twopc.state_change.reachable_nodes);
+			} else {
+				p->mask = cpu_to_be32(resource->twopc.state_change.mask.i);
+				p->val = cpu_to_be32(resource->twopc.state_change.val.i);
+			}
+		}
+		break;
+	case TWOPC_RESIZE:
+		if (request->cmd == P_TWOPC_PREP_RSZ) {
+			p->user_size = cpu_to_be64(resource->twopc.resize.user_size);
+			p->dds_flags = cpu_to_be16(resource->twopc.resize.dds_flags);
+		} else { /* P_TWOPC_COMMIT */
+			p->diskful_primary_nodes =
+				cpu_to_be64(resource->twopc.resize.diskful_primary_nodes);
+			p->exposed_size = cpu_to_be64(resource->twopc.resize.new_size);
+		}
+	}
+	return send_command(connection, request->vnr, request->cmd, DATA_STREAM | SFLAG_FLUSH);
 }
 
-void drbd_send_sr_reply(struct drbd_peer_device *peer_device, enum drbd_state_rv retcode)
+void drbd_send_sr_reply(struct drbd_connection *connection, int vnr, enum drbd_state_rv retcode)
 {
-	struct drbd_socket *sock;
 	struct p_req_state_reply *p;
 
-	sock = &peer_device->connection->meta;
-	p = drbd_prepare_command(peer_device, sock);
+	p = conn_prepare_command(connection, sizeof(*p), CONTROL_STREAM);
 	if (p) {
+		enum drbd_packet cmd = P_STATE_CHG_REPLY;
+
+		if (connection->agreed_pro_version >= 100 && vnr < 0)
+			cmd = P_CONN_ST_CHG_REPLY;
+
 		p->retcode = cpu_to_be32(retcode);
-		drbd_send_command(peer_device, sock, P_STATE_CHG_REPLY, sizeof(*p), NULL, 0);
+		send_command(connection, vnr, cmd, CONTROL_STREAM);
 	}
 }
 
-void conn_send_sr_reply(struct drbd_connection *connection, enum drbd_state_rv retcode)
+void drbd_send_twopc_reply(struct drbd_connection *connection,
+			   enum drbd_packet cmd, struct twopc_reply *reply)
 {
-	struct drbd_socket *sock;
-	struct p_req_state_reply *p;
-	enum drbd_packet cmd = connection->agreed_pro_version < 100 ? P_STATE_CHG_REPLY : P_CONN_ST_CHG_REPLY;
+	struct p_twopc_reply *p;
 
-	sock = &connection->meta;
-	p = conn_prepare_command(connection, sock);
+	p = conn_prepare_command(connection, sizeof(*p), CONTROL_STREAM);
 	if (p) {
-		p->retcode = cpu_to_be32(retcode);
-		conn_send_command(connection, sock, cmd, sizeof(*p), NULL, 0);
+		p->tid = cpu_to_be32(reply->tid);
+		p->initiator_node_id = cpu_to_be32(reply->initiator_node_id);
+		p->reachable_nodes = cpu_to_be64(reply->reachable_nodes);
+		switch (connection->resource->twopc.type) {
+		case TWOPC_STATE_CHANGE:
+			p->primary_nodes = cpu_to_be64(reply->primary_nodes);
+			p->weak_nodes = cpu_to_be64(reply->weak_nodes);
+			break;
+		case TWOPC_RESIZE:
+			p->diskful_primary_nodes = cpu_to_be64(reply->diskful_primary_nodes);
+			p->max_possible_size = cpu_to_be64(reply->max_possible_size);
+			break;
+		}
+		send_command(connection, reply->vnr, cmd, CONTROL_STREAM | SFLAG_FLUSH);
+	}
+}
+
+void drbd_send_peers_in_sync(struct drbd_peer_device *peer_device, u64 mask, sector_t sector, int size)
+{
+	struct p_peer_block_desc *p;
+
+	p = drbd_prepare_command(peer_device, sizeof(*p), CONTROL_STREAM);
+	if (p) {
+		p->sector = cpu_to_be64(sector);
+		p->mask = cpu_to_be64(mask);
+		p->size = cpu_to_be32(size);
+		p->pad = 0;
+		drbd_send_command(peer_device, P_PEERS_IN_SYNC, CONTROL_STREAM);
 	}
 }
 
+int drbd_send_peer_dagtag(struct drbd_connection *connection, struct drbd_connection *lost_peer)
+{
+	struct p_peer_dagtag *p;
+
+	p = conn_prepare_command(connection, sizeof(*p), DATA_STREAM);
+	if (!p)
+		return -EIO;
+
+	p->dagtag = cpu_to_be64(atomic64_read(&lost_peer->last_dagtag_sector));
+	p->node_id = cpu_to_be32(lost_peer->peer_node_id);
+
+	return send_command(connection, -1, P_PEER_DAGTAG, DATA_STREAM);
+}
+
+int drbd_send_flush_requests(struct drbd_connection *connection, u64 flush_sequence)
+{
+	struct p_flush_requests *p;
+
+	p = conn_prepare_command(connection, sizeof(*p), DATA_STREAM);
+	if (!p)
+		return -EIO;
+
+	p->flush_sequence = cpu_to_be64(flush_sequence);
+
+	return send_command(connection, -1, P_FLUSH_REQUESTS, DATA_STREAM);
+}
+
+int drbd_send_flush_forward(struct drbd_connection *connection, u64 flush_sequence,
+		int initiator_node_id)
+{
+	struct p_flush_forward *p;
+
+	p = conn_prepare_command(connection, sizeof(*p), CONTROL_STREAM);
+	if (!p)
+		return -EIO;
+
+	p->flush_sequence = cpu_to_be64(flush_sequence);
+	p->initiator_node_id = cpu_to_be32(initiator_node_id);
+
+	return send_command(connection, -1, P_FLUSH_FORWARD, CONTROL_STREAM);
+}
+
+int drbd_send_flush_requests_ack(struct drbd_connection *connection, u64 flush_sequence,
+		int primary_node_id)
+{
+	struct p_flush_ack *p;
+
+	p = conn_prepare_command(connection, sizeof(*p), DATA_STREAM);
+	if (!p)
+		return -EIO;
+
+	p->flush_sequence = cpu_to_be64(flush_sequence);
+	p->primary_node_id = cpu_to_be32(primary_node_id);
+
+	return send_command(connection, -1, P_FLUSH_REQUESTS_ACK, DATA_STREAM);
+}
+
+int drbd_send_enable_replication_next(struct drbd_peer_device *peer_device)
+{
+	struct p_enable_replication *p;
+	struct peer_device_conf *pdc;
+	bool resync_without_replication;
+
+	set_bit(PEER_REPLICATION_NEXT, &peer_device->flags);
+	if (!(peer_device->connection->agreed_features & DRBD_FF_RESYNC_WITHOUT_REPLICATION))
+		return 0;
+
+	p = drbd_prepare_command(peer_device, sizeof(*p), DATA_STREAM);
+	if (!p)
+		return -EIO;
+
+	rcu_read_lock();
+	pdc = rcu_dereference(peer_device->conf);
+	resync_without_replication = pdc->resync_without_replication;
+	rcu_read_unlock();
+
+	if (resync_without_replication)
+		clear_bit(PEER_REPLICATION_NEXT, &peer_device->flags);
+
+	p->enable = !resync_without_replication;
+	p->_pad1 = 0;
+	p->_pad2 = 0;
+
+	return drbd_send_command(peer_device, P_ENABLE_REPLICATION_NEXT, DATA_STREAM);
+}
+
+int drbd_send_enable_replication(struct drbd_peer_device *peer_device, bool enable)
+{
+	struct p_enable_replication *p;
+
+	p = drbd_prepare_command(peer_device, sizeof(*p), DATA_STREAM);
+	if (!p)
+		return -EIO;
+
+	p->enable = enable;
+	p->_pad1 = 0;
+	p->_pad2 = 0;
+
+	return drbd_send_command(peer_device, P_ENABLE_REPLICATION, DATA_STREAM);
+}
+
 static void dcbp_set_code(struct p_compressed_bm *p, enum drbd_bitmap_code code)
 {
 	BUG_ON(code & ~0xf);
@@ -1078,24 +1921,28 @@ static void dcbp_set_pad_bits(struct p_compressed_bm *p, int n)
 	p->encoding = (p->encoding & (~0x7 << 4)) | (n << 4);
 }
 
-static int fill_bitmap_rle_bits(struct drbd_device *device,
-			 struct p_compressed_bm *p,
-			 unsigned int size,
-			 struct bm_xfer_ctx *c)
+/* For compat reasons, encode the bitmap as if it were 4k per bit!
+ * Easy: just scale the run length.
+ */
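+/* Worked example, assuming an 8k-per-bit local bitmap (c->scale == 1):
+ * a run of 3 local bits goes out as rl_4k = 6, so a peer using 4k per
+ * bit reconstructs the same byte range. */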
+static int fill_bitmap_rle_bits(struct drbd_peer_device *peer_device,
+				struct p_compressed_bm *p,
+				unsigned int size,
+				struct bm_xfer_ctx *c)
 {
 	struct bitstream bs;
 	unsigned long plain_bits;
 	unsigned long tmp;
 	unsigned long rl;
+	unsigned long rl_4k;
 	unsigned len;
 	unsigned toggle;
 	int bits, use_rle;
 
 	/* may we use this feature? */
 	rcu_read_lock();
-	use_rle = rcu_dereference(first_peer_device(device)->connection->net_conf)->use_rle;
+	use_rle = rcu_dereference(peer_device->connection->transport.net_conf)->use_rle;
 	rcu_read_unlock();
-	if (!use_rle || first_peer_device(device)->connection->agreed_pro_version < 90)
+	if (!use_rle || peer_device->connection->agreed_pro_version < 90)
 		return 0;
 
 	if (c->bit_offset >= c->bm_bits)
@@ -1115,11 +1962,16 @@ static int fill_bitmap_rle_bits(struct drbd_device *device,
 	/* see how much plain bits we can stuff into one packet
 	 * using RLE and VLI. */
 	do {
-		tmp = (toggle == 0) ? _drbd_bm_find_next_zero(device, c->bit_offset)
-				    : _drbd_bm_find_next(device, c->bit_offset);
-		if (tmp == -1UL)
+		tmp = (toggle == 0) ? _drbd_bm_find_next_zero(peer_device, c->bit_offset)
+				    : _drbd_bm_find_next(peer_device, c->bit_offset);
+		if (tmp == -1UL) {
 			tmp = c->bm_bits;
-		rl = tmp - c->bit_offset;
+			rl = tmp - c->bit_offset;
+			rl_4k = c->bm_bits_4k - (c->bit_offset << c->scale);
+		} else {
+			rl = tmp - c->bit_offset;
+			rl_4k = rl << c->scale;
+		}
 
 		if (toggle == 2) { /* first iteration */
 			if (rl == 0) {
@@ -1136,16 +1988,16 @@ static int fill_bitmap_rle_bits(struct drbd_device *device,
 		/* paranoia: catch zero runlength.
 		 * can only happen if bitmap is modified while we scan it. */
 		if (rl == 0) {
-			drbd_err(device, "unexpected zero runlength while encoding bitmap "
+			drbd_err(peer_device, "unexpected zero runlength while encoding bitmap "
 			    "t:%u bo:%lu\n", toggle, c->bit_offset);
 			return -1;
 		}
 
-		bits = vli_encode_bits(&bs, rl);
+		bits = vli_encode_bits(&bs, rl_4k);
 		if (bits == -ENOBUFS) /* buffer full */
 			break;
 		if (bits <= 0) {
-			drbd_err(device, "error while encoding bitmap: %d\n", bits);
+			drbd_err(peer_device, "error while encoding bitmap: %d\n", bits);
 			return 0;
 		}
 
@@ -1156,7 +2008,7 @@ static int fill_bitmap_rle_bits(struct drbd_device *device,
 
 	len = bs.cur.b - p->code + !!bs.cur.bit;
 
-	if (plain_bits < (len << 3)) {
+	if (plain_bits << c->scale < (len << 3)) {
 		/* incompressible with this method.
 		 * we need to rewind both word and bit position. */
 		c->bit_offset -= plain_bits;
@@ -1175,33 +2027,69 @@ static int fill_bitmap_rle_bits(struct drbd_device *device,
 	return len;
 }
 
+/* Repeat extracted bits by "peeling off" words from the end.
+ * scale != 0 implies that repeat >= 2.
+ * Feel free to optimize ...
+ */
+static void repeat_bits(unsigned long *base, unsigned long num, unsigned int scale)
+{
+	unsigned long *src, *dst;
+	unsigned int repeat = 1 << scale;
+	unsigned int n;
+	int sbit, dbit, i;
+
+	for (n = num - 1; n > 0; n--) {
+		src = &base[n];
+		for (i = 0; i < repeat; i++) {
+			dst = &base[n*repeat + i];
+			*dst = 0;
+			for (dbit = 0; dbit < BITS_PER_LONG; dbit++) {
+				sbit = (i * BITS_PER_LONG + dbit) >> scale;
+				if (test_bit(sbit, src))
+					*dst |= 1UL << dbit;
+			}
+		}
+	}
+}
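+
+/* Worked example, assuming scale == 1 (repeat == 2): source word n is
+ * spread into destination words 2n and 2n+1, with each source bit
+ * duplicated into two adjacent destination bits; this is the plain-text
+ * counterpart of the run-length scaling on the RLE path. */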
+
 /*
  * send_bitmap_rle_or_plain
  *
  * Return 0 when done, 1 when another iteration is needed, and a negative error
  * code upon failure.
+ *
+ * For compat reasons, send the bitmap as if it were 4k per bit!
+ * Good thing that a "scaled" bitmap will always "compress".
  */
 static int
 send_bitmap_rle_or_plain(struct drbd_peer_device *peer_device, struct bm_xfer_ctx *c)
 {
 	struct drbd_device *device = peer_device->device;
-	struct drbd_socket *sock = &peer_device->connection->data;
 	unsigned int header_size = drbd_header_size(peer_device->connection);
-	struct p_compressed_bm *p = sock->sbuf + header_size;
+	struct p_compressed_bm *pc;
+	char *p;
 	int len, err;
 
-	len = fill_bitmap_rle_bits(device, p,
-			DRBD_SOCKET_BUFFER_SIZE - header_size - sizeof(*p), c);
-	if (len < 0)
+	p = alloc_send_buffer(peer_device->connection, DRBD_SOCKET_BUFFER_SIZE, DATA_STREAM);
+	if (IS_ERR(p))
 		return -EIO;
 
+	pc = (struct p_compressed_bm *)(p + header_size);
+
+	len = fill_bitmap_rle_bits(peer_device, pc,
+			DRBD_SOCKET_BUFFER_SIZE - header_size - sizeof(*pc), c);
+	if (len < 0) {
+		cancel_send_buffer(peer_device->connection, DATA_STREAM);
+		return -EIO;
+	}
+
 	if (len) {
-		dcbp_set_code(p, RLE_VLI_Bits);
-		err = __send_command(peer_device->connection, device->vnr, sock,
-				     P_COMPRESSED_BITMAP, sizeof(*p) + len,
-				     NULL, 0);
+		dcbp_set_code(pc, RLE_VLI_Bits);
+		resize_prepared_command(peer_device->connection, DATA_STREAM, sizeof(*pc) + len);
+		err = __send_command(peer_device->connection, device->vnr,
+				     P_COMPRESSED_BITMAP, DATA_STREAM);
 		c->packets[0]++;
-		c->bytes[0] += header_size + sizeof(*p) + len;
+		c->bytes[0] += header_size + sizeof(*pc) + len;
 
 		if (c->bit_offset >= c->bm_bits)
 			len = 0; /* DONE */
@@ -1210,16 +2098,40 @@ send_bitmap_rle_or_plain(struct drbd_peer_device *peer_device, struct bm_xfer_ct
 		 * send a buffer full of plain text bits instead. */
 		unsigned int data_size;
 		unsigned long num_words;
-		unsigned long *p = sock->sbuf + header_size;
-
+		unsigned long words_left = c->bm_words - c->word_offset;
+		unsigned long *pu = (unsigned long *)pc;
+
+		/* Only send full native bitmap words (actual granularity),
+		 * scaled to what they would look like at 4k granularity.
+		 * At maximum scale, which is (20 - 12), factor 256,
+		 * transferring at least one word of unscaled bitmap
+		 * requires data_size >= 256 (unsigned long) words,
+		 * that is >= 2048 bytes, which we always have.
+		 */
 		data_size = DRBD_SOCKET_BUFFER_SIZE - header_size;
-		num_words = min_t(size_t, data_size / sizeof(*p),
-				  c->bm_words - c->word_offset);
-		len = num_words * sizeof(*p);
-		if (len)
-			drbd_bm_get_lel(device, c->word_offset, num_words, p);
-		err = __send_command(peer_device->connection, device->vnr, sock, P_BITMAP,
-				     len, NULL, 0);
+		data_size = ALIGN_DOWN(data_size, sizeof(*pu) * (1UL << c->scale));
+		num_words = (data_size / sizeof(*pu)) >> c->scale;
+		num_words = min_t(size_t, num_words, words_left);
+
+		len = num_words * sizeof(*pu);
+		if (len) {
+			drbd_bm_get_lel(peer_device, c->word_offset, num_words, pu);
+
+			if (c->scale) {
+				repeat_bits(pu, num_words, c->scale);
+				len <<= c->scale;
+			}
+		} else if (words_left != 0) {
+			drbd_err(peer_device,
+				"failed to scale %lu words by %u while sending bitmap\n",
+				words_left, c->scale);
+			cancel_send_buffer(peer_device->connection, DATA_STREAM);
+			return -ERANGE;
+		}
+
+		resize_prepared_command(peer_device->connection, DATA_STREAM, len);
+		err = __send_command(peer_device->connection, device->vnr, P_BITMAP, DATA_STREAM);
+
 		c->word_offset += num_words;
 		c->bit_offset = c->word_offset * BITS_PER_LONG;
 
@@ -1240,396 +2152,233 @@ send_bitmap_rle_or_plain(struct drbd_peer_device *peer_device, struct bm_xfer_ct
 }
 
 /* See the comment at receive_bitmap() */
-static int _drbd_send_bitmap(struct drbd_device *device,
-			    struct drbd_peer_device *peer_device)
+static bool _drbd_send_bitmap(struct drbd_device *device,
+			     struct drbd_peer_device *peer_device)
 {
 	struct bm_xfer_ctx c;
-	int err;
-
-	if (!expect(device, device->bitmap))
-		return false;
+	int res;
 
 	if (get_ldev(device)) {
-		if (drbd_md_test_flag(device->ldev, MDF_FULL_SYNC)) {
+		if (drbd_md_test_peer_flag(peer_device, MDF_PEER_FULL_SYNC)) {
 			drbd_info(device, "Writing the whole bitmap, MDF_FullSync was set.\n");
-			drbd_bm_set_all(device);
-			if (drbd_bm_write(device, peer_device)) {
+			drbd_bm_set_many_bits(peer_device, 0, -1UL);
+			if (drbd_bm_write(device, NULL)) {
 				/* write_bm did fail! Leave full sync flag set in Meta P_DATA
 				 * but otherwise process as per normal - need to tell other
 				 * side that a full resync is required! */
 				drbd_err(device, "Failed to write bitmap to disk!\n");
 			} else {
-				drbd_md_clear_flag(device, MDF_FULL_SYNC);
+				drbd_md_clear_peer_flag(peer_device, MDF_PEER_FULL_SYNC);
 				drbd_md_sync(device);
 			}
 		}
+		c = (struct bm_xfer_ctx) {
+			.bm_bits_4k = drbd_bm_bits_4k(device),
+			.bm_bits = drbd_bm_bits(device),
+			.bm_words = drbd_bm_words(device),
+			.scale = device->bitmap->bm_block_shift - BM_BLOCK_SHIFT_4k,
+		};
+
 		put_ldev(device);
+	} else {
+		return false;
 	}
 
-	c = (struct bm_xfer_ctx) {
-		.bm_bits = drbd_bm_bits(device),
-		.bm_words = drbd_bm_words(device),
-	};
-
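+	/* send_bitmap_rle_or_plain() returns a positive value while there is
+	 * more bitmap to send, 0 once the transfer is complete, and a negative
+	 * errno on error. A local-disk reference is held around each chunk. */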
 	do {
-		err = send_bitmap_rle_or_plain(peer_device, &c);
-	} while (err > 0);
+		if (get_ldev(device)) {
+			res = send_bitmap_rle_or_plain(peer_device, &c);
+			put_ldev(device);
+		} else {
+			return false;
+		}
+	} while (res > 0);
 
-	return err == 0;
+	return res == 0;
 }
 
 int drbd_send_bitmap(struct drbd_device *device, struct drbd_peer_device *peer_device)
 {
-	struct drbd_socket *sock = &peer_device->connection->data;
+	struct drbd_transport *peer_transport = &peer_device->connection->transport;
 	int err = -1;
 
-	mutex_lock(&sock->mutex);
-	if (sock->socket)
-		err = !_drbd_send_bitmap(device, peer_device);
-	mutex_unlock(&sock->mutex);
-	return err;
-}
-
-void drbd_send_b_ack(struct drbd_connection *connection, u32 barrier_nr, u32 set_size)
-{
-	struct drbd_socket *sock;
-	struct p_barrier_ack *p;
-
-	if (connection->cstate < C_WF_REPORT_PARAMS)
-		return;
-
-	sock = &connection->meta;
-	p = conn_prepare_command(connection, sock);
-	if (!p)
-		return;
-	p->barrier = barrier_nr;
-	p->set_size = cpu_to_be32(set_size);
-	conn_send_command(connection, sock, P_BARRIER_ACK, sizeof(*p), NULL, 0);
-}
-
-/**
- * _drbd_send_ack() - Sends an ack packet
- * @peer_device:	DRBD peer device.
- * @cmd:		Packet command code.
- * @sector:		sector, needs to be in big endian byte order
- * @blksize:		size in byte, needs to be in big endian byte order
- * @block_id:		Id, big endian byte order
- */
-static int _drbd_send_ack(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
-			  u64 sector, u32 blksize, u64 block_id)
-{
-	struct drbd_socket *sock;
-	struct p_block_ack *p;
-
-	if (peer_device->device->state.conn < C_CONNECTED)
+	if (peer_device->bitmap_index == -1) {
+		drbd_err(peer_device, "No bitmap allocated in drbd_send_bitmap()!\n");
 		return -EIO;
+	}
 
-	sock = &peer_device->connection->meta;
-	p = drbd_prepare_command(peer_device, sock);
-	if (!p)
-		return -EIO;
-	p->sector = sector;
-	p->block_id = block_id;
-	p->blksize = blksize;
-	p->seq_num = cpu_to_be32(atomic_inc_return(&peer_device->device->packet_seq));
-	return drbd_send_command(peer_device, sock, cmd, sizeof(*p), NULL, 0);
-}
-
-/* dp->sector and dp->block_id already/still in network byte order,
- * data_size is payload size according to dp->head,
- * and may need to be corrected for digest size. */
-void drbd_send_ack_dp(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
-		      struct p_data *dp, int data_size)
-{
-	if (peer_device->connection->peer_integrity_tfm)
-		data_size -= crypto_shash_digestsize(peer_device->connection->peer_integrity_tfm);
-	_drbd_send_ack(peer_device, cmd, dp->sector, cpu_to_be32(data_size),
-		       dp->block_id);
-}
-
-void drbd_send_ack_rp(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
-		      struct p_block_req *rp)
-{
-	_drbd_send_ack(peer_device, cmd, rp->sector, rp->blksize, rp->block_id);
-}
-
-/**
- * drbd_send_ack() - Sends an ack packet
- * @peer_device:	DRBD peer device
- * @cmd:		packet command code
- * @peer_req:		peer request
- */
-int drbd_send_ack(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
-		  struct drbd_peer_request *peer_req)
-{
-	return _drbd_send_ack(peer_device, cmd,
-			      cpu_to_be64(peer_req->i.sector),
-			      cpu_to_be32(peer_req->i.size),
-			      peer_req->block_id);
-}
+	mutex_lock(&peer_device->connection->mutex[DATA_STREAM]);
+	if (peer_transport->class->ops.stream_ok(peer_transport, DATA_STREAM))
+		err = !_drbd_send_bitmap(device, peer_device);
+	mutex_unlock(&peer_device->connection->mutex[DATA_STREAM]);
 
-/* This function misuses the block_id field to signal if the blocks
- * are is sync or not. */
-int drbd_send_ack_ex(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
-		     sector_t sector, int blksize, u64 block_id)
-{
-	return _drbd_send_ack(peer_device, cmd,
-			      cpu_to_be64(sector),
-			      cpu_to_be32(blksize),
-			      cpu_to_be64(block_id));
+	return err;
 }
 
 int drbd_send_rs_deallocated(struct drbd_peer_device *peer_device,
 			     struct drbd_peer_request *peer_req)
 {
-	struct drbd_socket *sock;
-	struct p_block_desc *p;
+	struct p_block_ack *p_id;
 
-	sock = &peer_device->connection->data;
-	p = drbd_prepare_command(peer_device, sock);
-	if (!p)
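+	/* Protocol < 122 peers only understand the plain P_RS_DEALLOCATED
+	 * block descriptor; newer peers get P_RS_DEALLOCATED_ID, which also
+	 * carries the block_id of the peer request. */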
+	if (peer_device->connection->agreed_pro_version < 122) {
+		struct p_block_desc *p;
+
+		p = drbd_prepare_command(peer_device, sizeof(*p), DATA_STREAM);
+		if (!p)
+			return -EIO;
+		p->sector = cpu_to_be64(peer_req->i.sector);
+		p->blksize = cpu_to_be32(peer_req->i.size);
+		p->pad = 0;
+		return drbd_send_command(peer_device, P_RS_DEALLOCATED, DATA_STREAM);
+	}
+
+	p_id = drbd_prepare_command(peer_device, sizeof(*p_id), DATA_STREAM);
+	if (!p_id)
 		return -EIO;
-	p->sector = cpu_to_be64(peer_req->i.sector);
-	p->blksize = cpu_to_be32(peer_req->i.size);
-	p->pad = 0;
-	return drbd_send_command(peer_device, sock, P_RS_DEALLOCATED, sizeof(*p), NULL, 0);
+	p_id->sector = cpu_to_be64(peer_req->i.sector);
+	p_id->blksize = cpu_to_be32(peer_req->i.size);
+	p_id->block_id = peer_req->block_id;
+	p_id->seq_num = 0;
+	return drbd_send_command(peer_device, P_RS_DEALLOCATED_ID, DATA_STREAM);
 }
 
-int drbd_send_drequest(struct drbd_peer_device *peer_device, int cmd,
+int drbd_send_drequest(struct drbd_peer_device *peer_device,
 		       sector_t sector, int size, u64 block_id)
 {
-	struct drbd_socket *sock;
 	struct p_block_req *p;
 
-	sock = &peer_device->connection->data;
-	p = drbd_prepare_command(peer_device, sock);
+	p = drbd_prepare_command(peer_device, sizeof(*p), DATA_STREAM);
 	if (!p)
 		return -EIO;
 	p->sector = cpu_to_be64(sector);
 	p->block_id = block_id;
 	p->blksize = cpu_to_be32(size);
-	return drbd_send_command(peer_device, sock, cmd, sizeof(*p), NULL, 0);
-}
-
-int drbd_send_drequest_csum(struct drbd_peer_device *peer_device, sector_t sector, int size,
-			    void *digest, int digest_size, enum drbd_packet cmd)
-{
-	struct drbd_socket *sock;
-	struct p_block_req *p;
+	p->pad = 0;
+	return drbd_send_command(peer_device, P_DATA_REQUEST, DATA_STREAM);
+}
+
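+/* Prepare a resync/online-verify request. The *_DAGTAG_* packet variants
+ * use the larger p_rs_req layout, which additionally carries a
+ * (dagtag_node_id, dagtag) pair; all others use p_block_req. Returns a
+ * pointer to the payload area behind the fixed part, or NULL on failure. */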
+static void *drbd_prepare_rs_req(struct drbd_peer_device *peer_device, enum drbd_packet cmd, int payload_size,
+		sector_t sector, int blksize, u64 block_id, unsigned int dagtag_node_id, u64 dagtag)
+{
+	void *payload;
+	struct p_block_req_common *req_common;
+
+	if (cmd == P_RS_DAGTAG_REQ || cmd == P_RS_CSUM_DAGTAG_REQ || cmd == P_RS_THIN_DAGTAG_REQ ||
+			cmd == P_OV_DAGTAG_REQ || cmd == P_OV_DAGTAG_REPLY) {
+		struct p_rs_req *p;
+		/* Due to the slightly complicated nested struct definition,
+		 * verify that the packet size is as expected. */
+		BUILD_BUG_ON(sizeof(struct p_rs_req) != 32);
+		p = drbd_prepare_command(peer_device, sizeof(*p) + payload_size, DATA_STREAM);
+		if (!p)
+			return NULL;
+		payload = p + 1;
+		req_common = &p->req_common;
+		p->dagtag_node_id = cpu_to_be32(dagtag_node_id);
+		p->dagtag = cpu_to_be64(dagtag);
+	} else {
+		struct p_block_req *p;
+		/* Due to the slightly complicated nested struct definition,
+		 * verify that the packet size is as expected. */
+		BUILD_BUG_ON(sizeof(struct p_block_req) != 24);
+		p = drbd_prepare_command(peer_device, sizeof(*p) + payload_size, DATA_STREAM);
+		if (!p)
+			return NULL;
+		payload = p + 1;
+		req_common = &p->req_common;
+		p->pad = 0;
+	}
 
-	/* FIXME: Put the digest into the preallocated socket buffer.  */
+	req_common->sector = cpu_to_be64(sector);
+	req_common->block_id = block_id;
+	req_common->blksize = cpu_to_be32(blksize);
 
-	sock = &peer_device->connection->data;
-	p = drbd_prepare_command(peer_device, sock);
-	if (!p)
-		return -EIO;
-	p->sector = cpu_to_be64(sector);
-	p->block_id = ID_SYNCER /* unused */;
-	p->blksize = cpu_to_be32(size);
-	return drbd_send_command(peer_device, sock, cmd, sizeof(*p), digest, digest_size);
+	return payload;
 }
 
-int drbd_send_ov_request(struct drbd_peer_device *peer_device, sector_t sector, int size)
+int drbd_send_rs_request(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
+		       sector_t sector, int size, u64 block_id,
+		       unsigned int dagtag_node_id, u64 dagtag)
 {
-	struct drbd_socket *sock;
-	struct p_block_req *p;
-
-	sock = &peer_device->connection->data;
-	p = drbd_prepare_command(peer_device, sock);
-	if (!p)
+	if (!drbd_prepare_rs_req(peer_device, cmd, 0,
+				sector, size, block_id, dagtag_node_id, dagtag))
 		return -EIO;
-	p->sector = cpu_to_be64(sector);
-	p->block_id = ID_SYNCER /* unused */;
-	p->blksize = cpu_to_be32(size);
-	return drbd_send_command(peer_device, sock, P_OV_REQUEST, sizeof(*p), NULL, 0);
+	return drbd_send_command(peer_device, cmd, DATA_STREAM);
 }
 
-/* called on sndtimeo
- * returns false if we should retry,
- * true if we think connection is dead
- */
-static int we_should_drop_the_connection(struct drbd_connection *connection, struct socket *sock)
+void *drbd_prepare_drequest_csum(struct drbd_peer_request *peer_req, enum drbd_packet cmd,
+		int digest_size, unsigned int dagtag_node_id, u64 dagtag)
 {
-	int drop_it;
-	/* long elapsed = (long)(jiffies - device->last_received); */
-
-	drop_it =   connection->meta.socket == sock
-		|| !connection->ack_receiver.task
-		|| get_t_state(&connection->ack_receiver) != RUNNING
-		|| connection->cstate < C_WF_REPORT_PARAMS;
-
-	if (drop_it)
-		return true;
-
-	drop_it = !--connection->ko_count;
-	if (!drop_it) {
-		drbd_err(connection, "[%s/%d] sock_sendmsg time expired, ko = %u\n",
-			 current->comm, current->pid, connection->ko_count);
-		request_ping(connection);
-	}
-
-	return drop_it; /* && (device->state == R_PRIMARY) */;
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	return drbd_prepare_rs_req(peer_device, cmd, digest_size,
+			peer_req->i.sector, peer_req->i.size, peer_req->block_id,
+			dagtag_node_id, dagtag);
 }
 
-static void drbd_update_congested(struct drbd_connection *connection)
-{
-	struct sock *sk = connection->data.socket->sk;
-	if (sk->sk_wmem_queued > sk->sk_sndbuf * 4 / 5)
-		set_bit(NET_CONGESTED, &connection->flags);
-}
 
-/* The idea of sendpage seems to be to put some kind of reference
- * to the page into the skb, and to hand it over to the NIC. In
- * this process get_page() gets called.
- *
- * As soon as the page was really sent over the network put_page()
- * gets called by some part of the network layer. [ NIC driver? ]
- *
- * [ get_page() / put_page() increment/decrement the count. If count
- *   reaches 0 the page will be freed. ]
- *
- * This works nicely with pages from FSs.
- * But this means that in protocol A we might signal IO completion too early!
- *
- * In order not to corrupt data during a resync we must make sure
- * that we do not reuse our own buffer pages (EEs) to early, therefore
- * we have the net_ee list.
- *
- * XFS seems to have problems, still, it submits pages with page_count == 0!
- * As a workaround, we disable sendpage on pages
- * with page_count == 0 or PageSlab.
- */
-static int _drbd_no_send_page(struct drbd_peer_device *peer_device, struct page *page,
-			      int offset, size_t size, unsigned msg_flags)
+static int __send_bio(struct drbd_peer_device *peer_device, struct bio *bio, unsigned int msg_flags)
 {
-	struct socket *socket;
-	void *addr;
+	struct drbd_connection *connection = peer_device->connection;
+	struct drbd_transport *transport = &connection->transport;
+	struct drbd_transport_ops *tr_ops = &transport->class->ops;
 	int err;
 
-	socket = peer_device->connection->data.socket;
-	addr = kmap(page) + offset;
-	err = drbd_send_all(peer_device->connection, socket, addr, size, msg_flags);
-	kunmap(page);
-	if (!err)
-		peer_device->device->send_cnt += size >> 9;
+	err = flush_send_buffer(connection, DATA_STREAM);
+	if (!err) {
+		err = tr_ops->send_bio(transport, bio, msg_flags);
+		if (!err)
+			peer_device->send_cnt += bio->bi_iter.bi_size >> 9;
+	}
+
 	return err;
 }
 
-static int _drbd_send_page(struct drbd_peer_device *peer_device, struct page *page,
-		    int offset, size_t size, unsigned msg_flags)
+/* sendmsg(MSG_SPLICE_PAGES) (formerly sendpage()) increases the page
+ * ref_count and hands the page to the network stack. After the NIC DMA has
+ * sent the data, the network stack decreases that page's ref_count again.
+ * We must not do this for protocol A, where we could complete a write
+ * before the network stack has actually sent the data.
+ */
+static int
+drbd_send_bio(struct drbd_peer_device *peer_device, struct bio *bio, unsigned int msg_flags)
 {
-	struct socket *socket = peer_device->connection->data.socket;
-	struct msghdr msg = { .msg_flags = msg_flags, };
-	struct bio_vec bvec;
-	int len = size;
-	int err = -EIO;
+	if (drbd_disable_sendpage)
+		msg_flags &= ~MSG_SPLICE_PAGES;
 
-	/* e.g. XFS meta- & log-data is in slab pages, which have a
-	 * page_count of 0 and/or have PageSlab() set.
-	 * we cannot use send_page for those, as that does get_page();
-	 * put_page(); and would cause either a VM_BUG directly, or
-	 * __page_cache_release a page that would actually still be referenced
-	 * by someone, leading to some obscure delayed Oops somewhere else. */
-	if (!drbd_disable_sendpage && sendpages_ok(page, len, offset))
-		msg.msg_flags |= MSG_NOSIGNAL | MSG_SPLICE_PAGES;
+	/* e.g. XFS meta- & log-data is in slab pages, which have !sendpage_ok(page) */
+	if (msg_flags & MSG_SPLICE_PAGES) {
+		struct bvec_iter iter;
+		struct bio_vec bvec;
 
-	drbd_update_congested(peer_device->connection);
-	do {
-		int sent;
-
-		bvec_set_page(&bvec, page, len, offset);
-		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, len);
+		bio_for_each_segment(bvec, bio, iter) {
+			struct page *page = bvec.bv_page;
 
-		sent = sock_sendmsg(socket, &msg);
-		if (sent <= 0) {
-			if (sent == -EAGAIN) {
-				if (we_should_drop_the_connection(peer_device->connection, socket))
-					break;
-				continue;
+			if (!sendpage_ok(page)) {
+				msg_flags &= ~MSG_SPLICE_PAGES;
+				break;
 			}
-			drbd_warn(peer_device->device, "%s: size=%d len=%d sent=%d\n",
-			     __func__, (int)size, len, sent);
-			if (sent < 0)
-				err = sent;
-			break;
 		}
-		len    -= sent;
-		offset += sent;
-	} while (len > 0 /* THINK && device->cstate >= C_CONNECTED*/);
-	clear_bit(NET_CONGESTED, &peer_device->connection->flags);
-
-	if (len == 0) {
-		err = 0;
-		peer_device->device->send_cnt += size >> 9;
 	}
-	return err;
-}
-
-static int _drbd_send_bio(struct drbd_peer_device *peer_device, struct bio *bio)
-{
-	struct bio_vec bvec;
-	struct bvec_iter iter;
 
-	/* hint all but last page with MSG_MORE */
-	bio_for_each_segment(bvec, bio, iter) {
-		int err;
-
-		err = _drbd_no_send_page(peer_device, bvec.bv_page,
-					 bvec.bv_offset, bvec.bv_len,
-					 bio_iter_last(bvec, iter)
-					 ? 0 : MSG_MORE);
-		if (err)
-			return err;
-	}
-	return 0;
+	return __send_bio(peer_device, bio, msg_flags);
 }
 
-static int _drbd_send_zc_bio(struct drbd_peer_device *peer_device, struct bio *bio)
+static int drbd_send_ee(struct drbd_peer_device *peer_device, struct drbd_peer_request *peer_req)
 {
-	struct bio_vec bvec;
-	struct bvec_iter iter;
+	struct bio *bio;
+	int err = 0;
 
-	/* hint all but last page with MSG_MORE */
-	bio_for_each_segment(bvec, bio, iter) {
-		int err;
-
-		err = _drbd_send_page(peer_device, bvec.bv_page,
-				      bvec.bv_offset, bvec.bv_len,
-				      bio_iter_last(bvec, iter) ? 0 : MSG_MORE);
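+	/* Pages that are about to be recycled into a mempool must not still
+	 * be referenced by the network stack, so EE_RELEASE_TO_MEMPOOL
+	 * requests are sent without MSG_SPLICE_PAGES. */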
+	bio_list_for_each(bio, &peer_req->bios) {
+		err = __send_bio(peer_device, bio,
+				 peer_req->flags & EE_RELEASE_TO_MEMPOOL ? 0 : MSG_SPLICE_PAGES);
 		if (err)
-			return err;
+			break;
 	}
-	return 0;
-}
 
-static int _drbd_send_zc_ee(struct drbd_peer_device *peer_device,
-			    struct drbd_peer_request *peer_req)
-{
-	bool use_sendpage = !(peer_req->flags & EE_RELEASE_TO_MEMPOOL);
-	struct page *page = peer_req->pages;
-	unsigned len = peer_req->i.size;
-	int err;
-
-	/* hint all but last page with MSG_MORE */
-	page_chain_for_each(page) {
-		unsigned l = min_t(unsigned, len, PAGE_SIZE);
-
-		if (likely(use_sendpage))
-			err = _drbd_send_page(peer_device, page, 0, l,
-					      page_chain_next(page) ? MSG_MORE : 0);
-		else
-			err = _drbd_no_send_page(peer_device, page, 0, l,
-						 page_chain_next(page) ? MSG_MORE : 0);
-
-		if (err)
-			return err;
-		len -= l;
-	}
-	return 0;
+	return err;
 }
 
-static u32 bio_flags_to_wire(struct drbd_connection *connection,
-			     struct bio *bio)
+/* see also wire_flags_to_bio() */
+static u32 bio_flags_to_wire(struct drbd_connection *connection, struct bio *bio)
 {
 	if (connection->agreed_pro_version >= 95)
 		return  (bio->bi_opf & REQ_SYNC ? DP_RW_SYNC : 0) |
@@ -1637,12 +2386,13 @@ static u32 bio_flags_to_wire(struct drbd_connection *connection,
 			(bio->bi_opf & REQ_PREFLUSH ? DP_FLUSH : 0) |
 			(bio_op(bio) == REQ_OP_DISCARD ? DP_DISCARD : 0) |
 			(bio_op(bio) == REQ_OP_WRITE_ZEROES ?
-			  ((connection->agreed_features & DRBD_FF_WZEROES) ?
-			   (DP_ZEROES |(!(bio->bi_opf & REQ_NOUNMAP) ? DP_DISCARD : 0))
-			   : DP_DISCARD)
-			: 0);
-	else
-		return bio->bi_opf & REQ_SYNC ? DP_RW_SYNC : 0;
+			 ((connection->agreed_features & DRBD_FF_WZEROES) ?
+			  (DP_ZEROES | (!(bio->bi_opf & REQ_NOUNMAP) ? DP_DISCARD : 0))
+			  : DP_DISCARD)
+			 : 0);
+
+	/* else: older DRBD protocols communicated only the REQ_SYNC bit */
+	return bio->bi_opf & REQ_SYNC ? DP_RW_SYNC : 0;
 }
 
 /* Used to send write or TRIM aka REQ_OP_DISCARD requests
@@ -1651,53 +2401,62 @@ static u32 bio_flags_to_wire(struct drbd_connection *connection,
 int drbd_send_dblock(struct drbd_peer_device *peer_device, struct drbd_request *req)
 {
 	struct drbd_device *device = peer_device->device;
-	struct drbd_socket *sock;
+	struct drbd_connection *connection = peer_device->connection;
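+	/* scratch space for the data digest: computed into "before" prior to
+	 * sending, recomputed into "after" to detect pages changing in flight */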
+	char *const before = connection->scratch_buffer.d.before;
+	char *const after = connection->scratch_buffer.d.after;
+	struct p_trim *trim = NULL;
 	struct p_data *p;
-	void *digest_out;
+	void *digest_out = NULL;
 	unsigned int dp_flags = 0;
-	int digest_size;
+	int digest_size = 0;
 	int err;
+	const unsigned s = req->net_rq_state[peer_device->node_id];
+	const enum req_op op = bio_op(req->master_bio);
+
+	if (op == REQ_OP_DISCARD || op == REQ_OP_WRITE_ZEROES) {
+		trim = drbd_prepare_command(peer_device, sizeof(*trim), DATA_STREAM);
+		if (!trim)
+			return -EIO;
+		p = &trim->p_data;
+		trim->size = cpu_to_be32(req->i.size);
+	} else {
+		if (connection->integrity_tfm)
+			digest_size = crypto_shash_digestsize(connection->integrity_tfm);
 
-	sock = &peer_device->connection->data;
-	p = drbd_prepare_command(peer_device, sock);
-	digest_size = peer_device->connection->integrity_tfm ?
-		      crypto_shash_digestsize(peer_device->connection->integrity_tfm) : 0;
+		p = drbd_prepare_command(peer_device, sizeof(*p) + digest_size, DATA_STREAM);
+		if (!p)
+			return -EIO;
+		digest_out = p + 1;
+	}
 
-	if (!p)
-		return -EIO;
 	p->sector = cpu_to_be64(req->i.sector);
 	p->block_id = (unsigned long)req;
-	p->seq_num = cpu_to_be32(atomic_inc_return(&device->packet_seq));
-	dp_flags = bio_flags_to_wire(peer_device->connection, req->master_bio);
-	if (device->state.conn >= C_SYNC_SOURCE &&
-	    device->state.conn <= C_PAUSED_SYNC_T)
+	p->seq_num = cpu_to_be32(atomic_inc_return(&peer_device->packet_seq));
+	dp_flags = bio_flags_to_wire(connection, req->master_bio);
+	if (peer_device->repl_state[NOW] >= L_SYNC_SOURCE && peer_device->repl_state[NOW] <= L_PAUSED_SYNC_T)
 		dp_flags |= DP_MAY_SET_IN_SYNC;
-	if (peer_device->connection->agreed_pro_version >= 100) {
-		if (req->rq_state & RQ_EXP_RECEIVE_ACK)
+	if (connection->agreed_pro_version >= 100) {
+		if (s & RQ_EXP_RECEIVE_ACK)
 			dp_flags |= DP_SEND_RECEIVE_ACK;
-		/* During resync, request an explicit write ack,
-		 * even in protocol != C */
-		if (req->rq_state & RQ_EXP_WRITE_ACK
-		|| (dp_flags & DP_MAY_SET_IN_SYNC))
+		if (s & RQ_EXP_WRITE_ACK || dp_flags & DP_MAY_SET_IN_SYNC)
 			dp_flags |= DP_SEND_WRITE_ACK;
 	}
 	p->dp_flags = cpu_to_be32(dp_flags);
 
-	if (dp_flags & (DP_DISCARD|DP_ZEROES)) {
-		enum drbd_packet cmd = (dp_flags & DP_ZEROES) ? P_ZEROES : P_TRIM;
-		struct p_trim *t = (struct p_trim*)p;
-		t->size = cpu_to_be32(req->i.size);
-		err = __send_command(peer_device->connection, device->vnr, sock, cmd, sizeof(*t), NULL, 0);
+	if (trim) {
+		err = __send_command(connection, device->vnr,
+				(dp_flags & DP_ZEROES) ? P_ZEROES : P_TRIM, DATA_STREAM);
 		goto out;
 	}
-	digest_out = p + 1;
 
-	/* our digest is still only over the payload.
-	 * TRIM does not carry any payload. */
-	if (digest_size)
-		drbd_csum_bio(peer_device->connection->integrity_tfm, req->master_bio, digest_out);
-	err = __send_command(peer_device->connection, device->vnr, sock, P_DATA,
-			     sizeof(*p) + digest_size, NULL, req->i.size);
+	if (digest_size && digest_out) {
+		WARN_ON(digest_size > sizeof(connection->scratch_buffer.d.before));
+		drbd_csum_bio(connection->integrity_tfm, req->master_bio, before);
+		memcpy(digest_out, before, digest_size);
+	}
+
+	additional_size_command(connection, DATA_STREAM, req->i.size);
+	err = __send_command(connection, device->vnr, P_DATA, DATA_STREAM);
 	if (!err) {
 		/* For protocol A, we have to memcpy the payload into
 		 * socket buffers, as we may complete right away
@@ -1710,50 +2469,43 @@ int drbd_send_dblock(struct drbd_peer_device *peer_device, struct drbd_request *
 		 * out ok after sending on this side, but does not fit on the
 		 * receiving side, we sure have detected corruption elsewhere.
 		 */
-		if (!(req->rq_state & (RQ_EXP_RECEIVE_ACK | RQ_EXP_WRITE_ACK)) || digest_size)
-			err = _drbd_send_bio(peer_device, req->master_bio);
-		else
-			err = _drbd_send_zc_bio(peer_device, req->master_bio);
+		bool proto_b_or_c = (s & (RQ_EXP_RECEIVE_ACK | RQ_EXP_WRITE_ACK));
+		int msg_flags = proto_b_or_c && !digest_size ? MSG_SPLICE_PAGES : 0;
+
+		err = drbd_send_bio(peer_device, req->master_bio, msg_flags);
 
 		/* double check digest, sometimes buffers have been modified in flight. */
-		if (digest_size > 0 && digest_size <= 64) {
-			/* 64 byte, 512 bit, is the largest digest size
-			 * currently supported in kernel crypto. */
-			unsigned char digest[64];
-			drbd_csum_bio(peer_device->connection->integrity_tfm, req->master_bio, digest);
-			if (memcmp(p + 1, digest, digest_size)) {
+		if (digest_size > 0) {
+			drbd_csum_bio(connection->integrity_tfm, req->master_bio, after);
+			if (memcmp(before, after, digest_size)) {
 				drbd_warn(device,
 					"Digest mismatch, buffer modified by upper layers during write: %llus +%u\n",
 					(unsigned long long)req->i.sector, req->i.size);
 			}
-		} /* else if (digest_size > 64) {
-		     ... Be noisy about digest too large ...
-		} */
+		}
 	}
 out:
-	mutex_unlock(&sock->mutex);  /* locked by drbd_prepare_command() */
+	mutex_unlock(&connection->mutex[DATA_STREAM]);
 
 	return err;
 }
 
 /* answer packet, used to send data back for read requests:
  *  Peer       -> (diskless) R_PRIMARY   (P_DATA_REPLY)
- *  C_SYNC_SOURCE -> C_SYNC_TARGET         (P_RS_DATA_REPLY)
+ *  L_SYNC_SOURCE -> L_SYNC_TARGET         (P_RS_DATA_REPLY)
  */
 int drbd_send_block(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
 		    struct drbd_peer_request *peer_req)
 {
-	struct drbd_device *device = peer_device->device;
-	struct drbd_socket *sock;
+	struct drbd_connection *connection = peer_device->connection;
 	struct p_data *p;
 	int err;
 	int digest_size;
 
-	sock = &peer_device->connection->data;
-	p = drbd_prepare_command(peer_device, sock);
+	digest_size = connection->integrity_tfm ?
+		      crypto_shash_digestsize(connection->integrity_tfm) : 0;
 
-	digest_size = peer_device->connection->integrity_tfm ?
-		      crypto_shash_digestsize(peer_device->connection->integrity_tfm) : 0;
+	p = drbd_prepare_command(peer_device, sizeof(*p) + digest_size, DATA_STREAM);
 
 	if (!p)
 		return -EIO;
@@ -1761,314 +2513,721 @@ int drbd_send_block(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
 	p->block_id = peer_req->block_id;
 	p->seq_num = 0;  /* unused */
 	p->dp_flags = 0;
+
+	/* Older peers expect block_id for P_RS_DATA_REPLY to be ID_SYNCER. */
+	if (connection->agreed_pro_version < 122 && cmd == P_RS_DATA_REPLY)
+		p->block_id = ID_SYNCER;
+
 	if (digest_size)
-		drbd_csum_ee(peer_device->connection->integrity_tfm, peer_req, p + 1);
-	err = __send_command(peer_device->connection, device->vnr, sock, cmd, sizeof(*p) + digest_size, NULL, peer_req->i.size);
+		drbd_csum_bios(connection->integrity_tfm, &peer_req->bios, p + 1);
+	additional_size_command(connection, DATA_STREAM, peer_req->i.size);
+	err = __send_command(connection,
+			     peer_device->device->vnr, cmd, DATA_STREAM);
 	if (!err)
-		err = _drbd_send_zc_ee(peer_device, peer_req);
-	mutex_unlock(&sock->mutex);  /* locked by drbd_prepare_command() */
+		err = drbd_send_ee(peer_device, peer_req);
+	mutex_unlock(&connection->mutex[DATA_STREAM]);
 
 	return err;
 }
 
-int drbd_send_out_of_sync(struct drbd_peer_device *peer_device, struct drbd_request *req)
+int drbd_send_out_of_sync(struct drbd_peer_device *peer_device, sector_t sector, unsigned int size)
 {
-	struct drbd_socket *sock;
 	struct p_block_desc *p;
 
-	sock = &peer_device->connection->data;
-	p = drbd_prepare_command(peer_device, sock);
+	p = drbd_prepare_command(peer_device, sizeof(*p), DATA_STREAM);
 	if (!p)
 		return -EIO;
-	p->sector = cpu_to_be64(req->i.sector);
-	p->blksize = cpu_to_be32(req->i.size);
-	return drbd_send_command(peer_device, sock, P_OUT_OF_SYNC, sizeof(*p), NULL, 0);
+	p->sector = cpu_to_be64(sector);
+	p->blksize = cpu_to_be32(size);
+	return drbd_send_command(peer_device, P_OUT_OF_SYNC, DATA_STREAM);
 }
 
-/*
-  drbd_send distinguishes two cases:
+int drbd_send_dagtag(struct drbd_connection *connection, u64 dagtag)
+{
+	struct p_dagtag *p;
 
-  Packets sent via the data socket "sock"
-  and packets sent via the meta data socket "msock"
+	if (connection->agreed_pro_version < 110)
+		return 0;
 
-		    sock                      msock
-  -----------------+-------------------------+------------------------------
-  timeout           conf.timeout / 2          conf.timeout / 2
-  timeout action    send a ping via msock     Abort communication
-					      and close all sockets
-*/
+	p = conn_prepare_command(connection, sizeof(*p), DATA_STREAM);
+	if (!p)
+		return -EIO;
+	p->dagtag = cpu_to_be64(dagtag);
+	return send_command(connection, -1, P_DAGTAG, DATA_STREAM);
+}
 
-/*
- * you must have down()ed the appropriate [m]sock_mutex elsewhere!
- */
-int drbd_send(struct drbd_connection *connection, struct socket *sock,
-	      void *buf, size_t size, unsigned msg_flags)
+/* More precisely: "a primary peer is present and two-primaries is not allowed" */
+static bool primary_peer_present(struct drbd_resource *resource)
 {
-	struct kvec iov = {.iov_base = buf, .iov_len = size};
-	struct msghdr msg = {.msg_flags = msg_flags | MSG_NOSIGNAL};
-	int rv, sent = 0;
+	struct drbd_connection *connection;
+	struct net_conf *nc;
+	bool two_primaries, rv = false;
 
-	if (!sock)
-		return -EBADR;
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		nc = rcu_dereference(connection->transport.net_conf);
+		two_primaries = nc ? nc->two_primaries : false;
 
-	/* THINK  if (signal_pending) return ... ? */
+		if (connection->peer_role[NOW] == R_PRIMARY && !two_primaries) {
+			rv = true;
+			break;
+		}
+	}
+	rcu_read_unlock();
 
-	iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, &iov, 1, size);
+	return rv;
+}
 
-	if (sock == connection->data.socket) {
-		rcu_read_lock();
-		connection->ko_count = rcu_dereference(connection->net_conf)->ko_count;
-		rcu_read_unlock();
-		drbd_update_congested(connection);
-	}
-	do {
-		rv = sock_sendmsg(sock, &msg);
-		if (rv == -EAGAIN) {
-			if (we_should_drop_the_connection(connection, sock))
+static bool any_disk_is_uptodate(struct drbd_device *device)
+{
+	bool ret = false;
+
+	rcu_read_lock();
+	if (device->disk_state[NOW] == D_UP_TO_DATE)
+		ret = true;
+	else {
+		struct drbd_peer_device *peer_device;
+
+		for_each_peer_device_rcu(peer_device, device) {
+			if (peer_device->disk_state[NOW] == D_UP_TO_DATE) {
+				ret = true;
 				break;
-			else
-				continue;
-		}
-		if (rv == -EINTR) {
-			flush_signals(current);
-			rv = 0;
+			}
 		}
-		if (rv < 0)
-			break;
-		sent += rv;
-	} while (sent < size);
-
-	if (sock == connection->data.socket)
-		clear_bit(NET_CONGESTED, &connection->flags);
-
-	if (rv <= 0) {
-		if (rv != -EAGAIN) {
-			drbd_err(connection, "%s_sendmsg returned %d\n",
-				 sock == connection->meta.socket ? "msock" : "sock",
-				 rv);
-			conn_request_state(connection, NS(conn, C_BROKEN_PIPE), CS_HARD);
-		} else
-			conn_request_state(connection, NS(conn, C_TIMEOUT), CS_HARD);
 	}
+	rcu_read_unlock();
 
-	return sent;
+	return ret;
 }
 
-/*
- * drbd_send_all  -  Send an entire buffer
- *
- * Returns 0 upon success and a negative error value otherwise.
+/* If we are trying to (re-)establish some connection,
+ * it may be useful to re-try the conditions in drbd_open().
+ * But if we have no connection at all (yet/anymore),
+ * or are disconnected and not trying to (re-)establish,
+ * or are established already, retrying won't help at all.
+ * Asking the same peer(s) the same question
+ * is unlikely to change their answer.
+ * Almost always triggered by udev (and the configured probes) while bringing
+ * the resource "up", just after "new-minor", even before "attach" or any
+ * "peers"/"paths" are configured.
  */
-int drbd_send_all(struct drbd_connection *connection, struct socket *sock, void *buffer,
-		  size_t size, unsigned msg_flags)
+static bool connection_state_may_improve_soon(struct drbd_resource *resource)
 {
-	int err;
-
-	err = drbd_send(connection, sock, buffer, size, msg_flags);
-	if (err < 0)
-		return err;
-	if (err != size)
-		return -EIO;
-	return 0;
+	struct drbd_connection *connection;
+	bool ret = false;
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		enum drbd_conn_state cstate = connection->cstate[NOW];
+		if (C_DISCONNECTING < cstate && cstate < C_CONNECTED) {
+			ret = true;
+			break;
+		}
+	}
+	rcu_read_unlock();
+	return ret;
 }
 
-static int drbd_open(struct gendisk *disk, blk_mode_t mode)
+/* TASK_COMM_LEN includes one '\0', and both sizeof("...") terms below
+ * include a '\0' as well; that is room enough for the ':' and ' '
+ * separators and the terminating NUL.
+ */
+union comm_pid_tag_buf {
+	char comm[TASK_COMM_LEN];
+	char buf[TASK_COMM_LEN + sizeof("2147483647") + sizeof("auto-promote")];
+};
+
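+/* Format "comm:pid tag" into s->buf; for example (hypothetical caller)
+ * "drbdsetup:12345 auto-promote". */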
+static void snprintf_current_comm_pid_tag(union comm_pid_tag_buf *s, const char *tag)
 {
-	struct drbd_device *device = disk->private_data;
-	unsigned long flags;
-	int rv = 0;
+	int len;
 
-	mutex_lock(&drbd_main_mutex);
-	spin_lock_irqsave(&device->resource->req_lock, flags);
-	/* to have a stable device->state.role
-	 * and no race with updating open_cnt */
+	get_task_comm(s->comm, current);
+	len = strlen(s->buf);
+	snprintf(s->buf + len, sizeof(s->buf) - len, ":%d %s", task_pid_nr(current), tag);
+}
 
-	if (device->state.role != R_PRIMARY) {
-		if (mode & BLK_OPEN_WRITE)
-			rv = -EROFS;
-		else if (!drbd_allow_oos)
-			rv = -EMEDIUMTYPE;
-	}
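+/* Try to promote to R_PRIMARY, retrying those failures that waiting may
+ * resolve (a peer still being Primary, no up-to-date disk while a
+ * connection is coming up) until the auto-promote timeout expires.
+ * Returns the last drbd_set_role() status. */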
+static int try_to_promote(struct drbd_device *device, long timeout, bool ndelay)
+{
+	struct drbd_resource *resource = device->resource;
+	int rv;
 
-	if (!rv)
-		device->open_cnt++;
-	spin_unlock_irqrestore(&device->resource->req_lock, flags);
-	mutex_unlock(&drbd_main_mutex);
+	do {
+		union comm_pid_tag_buf tag;
+		unsigned long start = jiffies;
+		long t;
 
+		snprintf_current_comm_pid_tag(&tag, "auto-promote");
+		rv = drbd_set_role(resource, R_PRIMARY, false, tag.buf, NULL);
+		timeout -= jiffies - start;
+
+		if (ndelay || rv >= SS_SUCCESS || timeout <= 0) {
+			break;
+		} else if (rv == SS_CW_FAILED_BY_PEER) {
+			/* Probably udev has it open read-only on one of the peers;
+			   since commit cbcbb50a65 from 2017 it waits on the peer;
+			   retry only if the timeout permits */
+			if (jiffies - start < HZ / 10) {
+				t = schedule_timeout_interruptible(HZ / 10);
+				if (t)
+					break;
+				timeout -= HZ / 10;
+			}
+		} else if (rv == SS_TWO_PRIMARIES) {
+			/* Wait till the peer demoted itself */
+			t = wait_event_interruptible_timeout(resource->state_wait,
+				resource->role[NOW] == R_PRIMARY ||
+				(!primary_peer_present(resource) && any_disk_is_uptodate(device)),
+				timeout);
+			if (t <= 0)
+				break;
+			timeout -= t;
+		} else if (rv == SS_NO_UP_TO_DATE_DISK && connection_state_may_improve_soon(resource)) {
+			/* Wait until we get a connection established */
+			t = wait_event_interruptible_timeout(resource->state_wait,
+				any_disk_is_uptodate(device), timeout);
+			if (t <= 0)
+				break;
+			timeout -= t;
+		} else {
+			break;
+		}
+	} while (timeout > 0);
 	return rv;
 }
 
-static void drbd_release(struct gendisk *gd)
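+/* May this device be opened read-only right now? Returns 0 if yes,
+ * -EAGAIN if a connection attempt is in flight and retrying may help,
+ * -EMEDIUMTYPE if a primary peer forbids it, and -ENODATA if there is
+ * no quorum or no up-to-date data in sight. */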
+static int ro_open_cond(struct drbd_device *device)
 {
-	struct drbd_device *device = gd->private_data;
+	struct drbd_resource *resource = device->resource;
 
-	mutex_lock(&drbd_main_mutex);
-	device->open_cnt--;
-	mutex_unlock(&drbd_main_mutex);
+	if (!device->have_quorum[NOW])
+		return -ENODATA;
+	else if (resource->role[NOW] != R_PRIMARY &&
+		primary_peer_present(resource) && !drbd_allow_oos)
+		return -EMEDIUMTYPE;
+	else if (any_disk_is_uptodate(device))
+		return 0;
+	else if (connection_state_may_improve_soon(resource))
+		return -EAGAIN;
+	else
+		return -ENODATA;
 }
 
-/* need to hold resource->req_lock */
-void drbd_queue_unplug(struct drbd_device *device)
+enum ioc_rv {
+	IOC_SLEEP = 0,
+	IOC_OK = 1,
+	IOC_ABORT = 2,
+};
+
+/* If we are in the middle of a cluster wide state change, we don't want
+ * (open_cnt == 0) to change, as that could cause a failure to locally
+ * commit an auto-promote already promised to some peer.
+ * So we wait until the pending remote_state_change is finalized,
+ * or give up when the timeout is reached.
+ *
+ * But we don't want to fail an open on a Primary just because it happens
+ * during some unrelated remote state change.
+ * If we are already Primary, or already have an open count != 0,
+ * we don't need to wait, it won't change anything.
+ */
+static enum ioc_rv inc_open_count(struct drbd_device *device, blk_mode_t mode)
 {
-	if (device->state.pdsk >= D_INCONSISTENT && device->state.conn >= C_CONNECTED) {
-		D_ASSERT(device, device->state.role == R_PRIMARY);
-		if (test_and_clear_bit(UNPLUG_REMOTE, &device->flags)) {
-			drbd_queue_work_if_unqueued(
-				&first_peer_device(device)->connection->sender_work,
-				&device->unplug_work);
-		}
+	struct drbd_resource *resource = device->resource;
+	enum ioc_rv r;
+
+	if (test_bit(DOWN_IN_PROGRESS, &resource->flags))
+		return IOC_ABORT;
+
+	read_lock_irq(&resource->state_rwlock);
+	if (test_bit(UNREGISTERED, &device->flags))
+		r = IOC_ABORT;
+	else if (resource->remote_state_change &&
+		resource->role[NOW] != R_PRIMARY &&
+		(device->open_cnt == 0 || mode & BLK_OPEN_WRITE)) {
+		if (mode & BLK_OPEN_NDELAY)
+			r = IOC_ABORT;
+		else
+			r = IOC_SLEEP;
+	} else {
+		r = IOC_OK;
+		device->open_cnt++;
+		if (mode & BLK_OPEN_WRITE)
+			device->writable = true;
 	}
-}
+	read_unlock_irq(&resource->state_rwlock);
 
-static void drbd_set_defaults(struct drbd_device *device)
-{
-	/* Beware! The actual layout differs
-	 * between big endian and little endian */
-	device->state = (union drbd_dev_state) {
-		{ .role = R_SECONDARY,
-		  .peer = R_UNKNOWN,
-		  .conn = C_STANDALONE,
-		  .disk = D_DISKLESS,
-		  .pdsk = D_UNKNOWN,
-		} };
+	return r;
 }
 
-void drbd_init_set_defaults(struct drbd_device *device)
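+/* Forget recorded openers: all of them if pid == 0, otherwise the oldest
+ * entry with a matching pid. Callers either hold device->openers_lock or
+ * have exclusive access to the device. */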
+static void __prune_or_free_openers(struct drbd_device *device, pid_t pid)
 {
-	/* the memset(,0,) did most of this.
-	 * note: only assignments, no allocation in here */
+	struct opener *pos, *tmp;
 
-	drbd_set_defaults(device);
+	list_for_each_entry_safe(pos, tmp, &device->openers, list) {
+		/* If pid == 0 (i.e. the open counts were 0), delete all
+		 * entries; otherwise delete the matching one. */
+		if (pid == 0 || pid == pos->pid) {
+			dynamic_drbd_dbg(device, "%sopeners del: %s(%d)\n", pid == 0 ? "all " : "",
+					pos->comm, pos->pid);
+			list_del(&pos->list);
+			kfree(pos);
 
-	atomic_set(&device->ap_bio_cnt, 0);
-	atomic_set(&device->ap_actlog_cnt, 0);
-	atomic_set(&device->ap_pending_cnt, 0);
-	atomic_set(&device->rs_pending_cnt, 0);
-	atomic_set(&device->unacked_cnt, 0);
-	atomic_set(&device->local_cnt, 0);
-	atomic_set(&device->pp_in_use_by_net, 0);
-	atomic_set(&device->rs_sect_in, 0);
-	atomic_set(&device->rs_sect_ev, 0);
-	atomic_set(&device->ap_in_flight, 0);
-	atomic_set(&device->md_io.in_use, 0);
+			/* In case we remove a real process, stop here: there
+			 * might be multiple openers with the same pid. This
+			 * assumes that the oldest opener with a given pid
+			 * releases first; "as good as it gets". */
+			if (pid != 0)
+				break;
+		}
+	}
+}
 
-	mutex_init(&device->own_state_mutex);
-	device->state_mutex = &device->own_state_mutex;
+static void free_openers(struct drbd_device *device)
+{
+	__prune_or_free_openers(device, 0);
+}
 
-	spin_lock_init(&device->al_lock);
-	spin_lock_init(&device->peer_seq_lock);
-
-	INIT_LIST_HEAD(&device->active_ee);
-	INIT_LIST_HEAD(&device->sync_ee);
-	INIT_LIST_HEAD(&device->done_ee);
-	INIT_LIST_HEAD(&device->read_ee);
-	INIT_LIST_HEAD(&device->resync_reads);
-	INIT_LIST_HEAD(&device->resync_work.list);
-	INIT_LIST_HEAD(&device->unplug_work.list);
-	INIT_LIST_HEAD(&device->bm_io_work.w.list);
-	INIT_LIST_HEAD(&device->pending_master_completion[0]);
-	INIT_LIST_HEAD(&device->pending_master_completion[1]);
-	INIT_LIST_HEAD(&device->pending_completion[0]);
-	INIT_LIST_HEAD(&device->pending_completion[1]);
+static void prune_or_free_openers(struct drbd_device *device, pid_t pid)
+{
+	spin_lock(&device->openers_lock);
+	__prune_or_free_openers(device, pid);
+	spin_unlock(&device->openers_lock);
+}
 
-	device->resync_work.cb  = w_resync_timer;
-	device->unplug_work.cb  = w_send_write_hint;
-	device->bm_io_work.w.cb = w_bitmap_io;
+static void add_opener(struct drbd_device *device, bool did_auto_promote)
+{
+	struct opener *opener, *tmp;
+	ktime_t now = ktime_get_real();
+	int len = 0;
 
-	timer_setup(&device->resync_timer, resync_timer_fn, 0);
-	timer_setup(&device->md_sync_timer, md_sync_timer_fn, 0);
-	timer_setup(&device->start_resync_timer, start_resync_timer_fn, 0);
-	timer_setup(&device->request_timer, request_timer_fn, 0);
+	if (did_auto_promote) {
+		struct drbd_resource *resource = device->resource;
 
-	init_waitqueue_head(&device->misc_wait);
-	init_waitqueue_head(&device->state_wait);
-	init_waitqueue_head(&device->ee_wait);
-	init_waitqueue_head(&device->al_wait);
-	init_waitqueue_head(&device->seq_wait);
+		resource->auto_promoted_by.minor = device->minor;
+		resource->auto_promoted_by.pid = task_pid_nr(current);
+		resource->auto_promoted_by.opened = now;
+		get_task_comm(resource->auto_promoted_by.comm, current);
+	}
+	opener = kmalloc_obj(*opener, GFP_NOIO);
+	if (!opener)
+		return;
+	get_task_comm(opener->comm, current);
+	opener->pid = task_pid_nr(current);
+	opener->opened = now;
+
+	spin_lock(&device->openers_lock);
+	list_for_each_entry(tmp, &device->openers, list)
+		if (++len > 100) { /* 100 ought to be enough for everybody */
+			dynamic_drbd_dbg(device, "openers: list full, do not add new opener\n");
+			kfree(opener);
+			goto out;
+		}
 
-	device->resync_wenr = LC_FREE;
-	device->peer_max_bio_size = DRBD_MAX_BIO_SIZE_SAFE;
-	device->local_max_bio_size = DRBD_MAX_BIO_SIZE_SAFE;
+	list_add(&opener->list, &device->openers);
+	dynamic_drbd_dbg(device, "openers add: %s(%d)\n", opener->comm, opener->pid);
+out:
+	spin_unlock(&device->openers_lock);
 }
 
-void drbd_set_my_capacity(struct drbd_device *device, sector_t size)
+static int drbd_open(struct gendisk *gd, blk_mode_t mode)
 {
-	char ppb[10];
+	struct drbd_device *device = gd->private_data;
+	struct drbd_resource *resource = device->resource;
+	long timeout = resource->res_opts.auto_promote_timeout * HZ / 10;
+	enum drbd_state_rv rv = SS_UNKNOWN_ERROR;
+	bool was_writable;
+	enum ioc_rv r;
+	int err = 0;
+
+	/* Fail read-only open from systemd-udev (version <= 238) */
+	if (!(mode & BLK_OPEN_WRITE) && !drbd_allow_oos) {
+		char comm[TASK_COMM_LEN];
+		get_task_comm(comm, current);
+		if (!strcmp("systemd-udevd", comm))
+			return -EACCES;
+	}
 
-	set_capacity_and_notify(device->vdisk, size);
+	/* Fail read-write open early,
+	 * in case someone explicitly set us read-only (blockdev --setro) */
+	if (bdev_read_only(gd->part0) && (mode & BLK_OPEN_WRITE))
+		return -EACCES;
 
-	drbd_info(device, "size = %s (%llu KB)\n",
-		ppsize(ppb, size>>1), (unsigned long long)size>>1);
-}
+	if (resource->fail_io[NOW])
+		return -ENOTRECOVERABLE;
 
-void drbd_device_cleanup(struct drbd_device *device)
-{
-	int i;
-	if (first_peer_device(device)->connection->receiver.t_state != NONE)
-		drbd_err(device, "ASSERT FAILED: receiver t_state == %d expected 0.\n",
-				first_peer_device(device)->connection->receiver.t_state);
-
-	device->al_writ_cnt  =
-	device->bm_writ_cnt  =
-	device->read_cnt     =
-	device->recv_cnt     =
-	device->send_cnt     =
-	device->writ_cnt     =
-	device->p_size       =
-	device->rs_start     =
-	device->rs_total     =
-	device->rs_failed    = 0;
-	device->rs_last_events = 0;
-	device->rs_last_sect_ev = 0;
-	for (i = 0; i < DRBD_SYNC_MARKS; i++) {
-		device->rs_mark_left[i] = 0;
-		device->rs_mark_time[i] = 0;
-	}
-	D_ASSERT(device, first_peer_device(device)->connection->net_conf == NULL);
-
-	set_capacity_and_notify(device->vdisk, 0);
-	if (device->bitmap) {
-		/* maybe never allocated. */
-		drbd_bm_resize(device, 0, 1);
-		drbd_bm_cleanup(device);
+	kref_get(&device->kref);
+
+	mutex_lock(&resource->open_release);
+	was_writable = device->writable;
+
+	timeout = wait_event_interruptible_timeout(resource->twopc_wait,
+						   (r = inc_open_count(device, mode)),
+						   timeout);
+
+	if (r == IOC_ABORT || (r == IOC_SLEEP && timeout <= 0)) {
+		mutex_unlock(&resource->open_release);
+
+		kref_put(&device->kref, drbd_destroy_device);
+		return -EAGAIN;
 	}
 
-	drbd_backing_dev_free(device, device->ldev);
-	device->ldev = NULL;
+	if (resource->res_opts.auto_promote) {
+		/* Allow opening in read-only mode on an unconnected secondary.
+		   This avoids split brain when the drbd volume gets opened
+		   temporarily by udev while it scans for PV signatures. */
+
+		if (mode & BLK_OPEN_WRITE) {
+			if (resource->role[NOW] == R_SECONDARY) {
+				rv = try_to_promote(device, timeout, (mode & BLK_OPEN_NDELAY));
+				if (rv < SS_SUCCESS)
+					drbd_info(resource, "Auto-promote failed: %s (%d)\n",
+						  drbd_set_st_err_str(rv), rv);
+			}
+		} else if ((mode & BLK_OPEN_NDELAY) == 0) {
+			/* Double check peers
+			 *
+			 * Some services may try to first open ro, and only if that
+			 * works open rw.  An attempt to failover immediately after
+			 * primary crash, before DRBD has noticed that the primary peer
+			 * is gone, would result in open failure, thus failure to take
+			 * over services. */
+			err = ro_open_cond(device);
+			if (err == -EMEDIUMTYPE) {
+				drbd_check_peers(resource);
+				err = -EAGAIN;
+			}
+			if (err == -EAGAIN) {
+				wait_event_interruptible_timeout(resource->state_wait,
+					ro_open_cond(device) != -EAGAIN,
+					resource->res_opts.auto_promote_timeout * HZ / 10);
+			}
+		}
+	} else if (resource->role[NOW] != R_PRIMARY &&
+			!(mode & BLK_OPEN_WRITE) && !drbd_allow_oos) {
+		err = -EMEDIUMTYPE;
+		goto out;
+	}
 
-	clear_bit(AL_SUSPENDED, &device->flags);
+	if (test_bit(UNREGISTERED, &device->flags)) {
+		err = -ENODEV;
+	} else if (mode & BLK_OPEN_WRITE) {
+		if (resource->role[NOW] != R_PRIMARY)
+			err = rv == SS_INTERRUPTED ? -ERESTARTSYS : -EROFS;
+	} else /* READ access only */ {
+		err = ro_open_cond(device);
+	}
+out:
+	/* still keep mutex, but release ASAP */
+	if (!err) {
+		add_opener(device, rv >= SS_SUCCESS);
+		/* Only interested in first open and last close. */
+		if (device->open_cnt == 1) {
+			struct device_info info;
+
+			device_to_info(&info, device);
+			mutex_lock(&notification_mutex);
+			notify_device_state(NULL, 0, device, &info, NOTIFY_CHANGE);
+			mutex_unlock(&notification_mutex);
+		}
+	} else
+		device->writable = was_writable;
 
-	D_ASSERT(device, list_empty(&device->active_ee));
-	D_ASSERT(device, list_empty(&device->sync_ee));
-	D_ASSERT(device, list_empty(&device->done_ee));
-	D_ASSERT(device, list_empty(&device->read_ee));
-	D_ASSERT(device, list_empty(&device->resync_reads));
-	D_ASSERT(device, list_empty(&first_peer_device(device)->connection->sender_work.q));
-	D_ASSERT(device, list_empty(&device->resync_work.list));
-	D_ASSERT(device, list_empty(&device->unplug_work.list));
+	mutex_unlock(&resource->open_release);
+	if (err) {
+		drbd_release(gd);
+		if (err == -EAGAIN && !(mode & BLK_OPEN_NDELAY))
+			err = -EMEDIUMTYPE;
+	}
 
-	drbd_set_defaults(device);
+	return err;
 }
 
+void drbd_open_counts(struct drbd_resource *resource, int *rw_count_ptr, int *ro_count_ptr)
+{
+	struct drbd_device *device;
+	int vnr, rw_count = 0, ro_count = 0;
+
+	rcu_read_lock();
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		if (device->writable)
+			rw_count += device->open_cnt;
+		else
+			ro_count += device->open_cnt;
+	}
+	rcu_read_unlock();
+	*rw_count_ptr = rw_count;
+	*ro_count_ptr = ro_count;
+}
 
-static void drbd_destroy_mempools(void)
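+/* Wait until every peer disk state that got a negative ack has dropped
+ * below D_UP_TO_DATE. We cannot sleep under the RCU read lock, so drop it
+ * before waiting and restart the iteration from scratch afterwards. */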
+static void wait_for_peer_disk_updates(struct drbd_resource *resource)
+{
+	struct drbd_peer_device *peer_device;
+	struct drbd_device *device;
+	int vnr;
+
+restart:
+	rcu_read_lock();
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		for_each_peer_device_rcu(peer_device, device) {
+			if (test_bit(GOT_NEG_ACK, &peer_device->flags)) {
+				clear_bit(GOT_NEG_ACK, &peer_device->flags);
+				rcu_read_unlock();
+				wait_event(resource->state_wait, peer_device->disk_state[NOW] < D_UP_TO_DATE);
+				goto restart;
+			}
+		}
+	}
+	rcu_read_unlock();
+}
+
+static void drbd_fsync_device(struct drbd_device *device)
+{
+	struct drbd_resource *resource = device->resource;
+
+	sync_blockdev(device->vdisk->part0);
+	/* Prevent writes occurring after demotion, at least
+	 * the writes already submitted in this context. This
+	 * covers the case where DRBD auto-demotes on release,
+	 * which is important because it often occurs
+	 * immediately after a write. */
+	wait_event(device->misc_wait, !atomic_read(&device->ap_bio_cnt[WRITE]));
+
+	if (start_new_tl_epoch(resource)) {
+		struct drbd_connection *connection;
+		u64 im;
+
+		for_each_connection_ref(connection, im, resource)
+			drbd_flush_workqueue(&connection->sender_work);
+	}
+	wait_event(resource->barrier_wait, !barrier_pending(resource));
+	/* After waiting for pending barriers, we got any possible NEG_ACKs,
+	   and see them in wait_for_peer_disk_updates() */
+	wait_for_peer_disk_updates(resource);
+
+	/* In case switching from R_PRIMARY to R_SECONDARY works
+	   out, there is no rw opener at this point. Thus, no new
+	   writes can come in. -> Flushing queued peer acks is
+	   necessary and sufficient.
+	   The cluster wide role change required packets to be
+	   received by the sender. -> We can be sure that the
+	   peer_acks queued on a sender's TODO list go out before
+	   we send the two phase commit packet.
+	*/
+	drbd_flush_peer_acks(resource);
+}
+
+static void drbd_release(struct gendisk *gd)
+{
+	struct drbd_device *device = gd->private_data;
+	struct drbd_resource *resource = device->resource;
+	int open_rw_cnt, open_ro_cnt;
+
+	mutex_lock(&resource->open_release);
+	/* The last one to close has already had sync_blockdev() called for it;
+	 * generic bdev_release() / blkdev_put_whole() takes care of that.
+	 * We still want our side effects of drbd_fsync_device():
+	 * wait until all peers confirmed they have all the data, regardless of
+	 * replication protocol, even if that is asynchronous.
+	 * Still, do it before decreasing the open_cnt, just in case, so we
+	 * won't confuse drbd_reject_write_early() or other code paths that may
+	 * check for open_cnt != 0 when they see write requests.
+	 */
+	if (device->writable && device->open_cnt == 1) {
+		drbd_fsync_device(device);
+		device->writable = false;
+	}
+	device->open_cnt--;
+	drbd_open_counts(resource, &open_rw_cnt, &open_ro_cnt);
+
+	if (open_ro_cnt == 0)
+		wake_up_all(&resource->state_wait);
+
+	if (test_bit(UNREGISTERED, &device->flags) && device->open_cnt == 0 &&
+	    !test_and_set_bit(DESTROYING_DEV, &device->flags))
+		call_rcu(&device->rcu, drbd_reclaim_device);
+
+	if (resource->res_opts.auto_promote &&
+			open_rw_cnt == 0 &&
+			resource->role[NOW] == R_PRIMARY &&
+			!test_bit(EXPLICIT_PRIMARY, &resource->flags)) {
+		union comm_pid_tag_buf tag;
+		sigset_t mask, oldmask;
+		int rv;
+
+		snprintf_current_comm_pid_tag(&tag, "auto-demote");
+
+		/*
+		 * Auto-demote is triggered by the last opener releasing the
+		 * DRBD device. However, it is an implicit action, so it should
+		 * not be affected by the state of the process. In particular,
+		 * it should ignore any pending signals. It may be the case
+		 * that the process is releasing DRBD because it is being
+		 * terminated using a signal.
+		 */
+		sigfillset(&mask);
+		sigprocmask(SIG_BLOCK, &mask, &oldmask);
+
+		rv = drbd_set_role(resource, R_SECONDARY, false, tag.buf, NULL);
+		if (rv < SS_SUCCESS)
+			drbd_warn(resource, "Auto-demote failed: %s (%d)\n",
+					drbd_set_st_err_str(rv), rv);
+
+		sigprocmask(SIG_SETMASK, &oldmask, NULL);
+	}
+
+	if (open_ro_cnt == 0 && open_rw_cnt == 0 && resource->fail_io[NOW]) {
+		unsigned long irq_flags;
+
+		begin_state_change(resource, &irq_flags, CS_VERBOSE);
+		resource->fail_io[NEW] = false;
+		end_state_change(resource, &irq_flags, "release");
+	}
+
+	/* if the open count is 0, we free the whole list, otherwise we remove the specific pid */
+	prune_or_free_openers(device, (device->open_cnt == 0) ? 0 : task_pid_nr(current));
+	if (open_rw_cnt == 0 && open_ro_cnt == 0 && resource->auto_promoted_by.pid != 0)
+		memset(&resource->auto_promoted_by, 0, sizeof(resource->auto_promoted_by));
+	if (device->open_cnt == 0) {
+		struct device_info info;
+
+		device_to_info(&info, device);
+		mutex_lock(&notification_mutex);
+		notify_device_state(NULL, 0, device, &info, NOTIFY_CHANGE);
+		mutex_unlock(&notification_mutex);
+	}
+	mutex_unlock(&resource->open_release);
+
+	kref_put(&device->kref, drbd_destroy_device);  /* might destroy the resource as well */
+}
+
+static void drbd_remove_all_paths(struct drbd_connection *connection)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_transport *transport = &connection->transport;
+	struct drbd_path *path, *tmp;
+
+	lockdep_assert_held(&resource->conf_update);
+
+	list_for_each_entry(path, &transport->paths, list)
+		set_bit(TR_UNREGISTERED, &path->flags);
+
+	/* Make TR_UNREGISTERED visible before the paths are unlinked below. */
+	smp_wmb();
+
+	list_for_each_entry_safe(path, tmp, &transport->paths, list) {
+		/* Exclusive with reading state, in particular remember_state_change() */
+		write_lock_irq(&resource->state_rwlock);
+		list_del_rcu(&path->list);
+		write_unlock_irq(&resource->state_rwlock);
+
+		transport->class->ops.remove_path(path);
+		notify_path(connection, path, NOTIFY_DESTROY);
+		call_rcu(&path->rcu, drbd_reclaim_path);
+	}
+}
+
+/* __drbd_net_exit() is called when a network namespace is removed.
+ *
+ * For DRBD this means we need to close any sockets assigned to that
+ * namespace, i.e. we need to disconnect some connections. We also need to
+ * remove the paths associated with the namespace being removed, so that the
+ * connection can be reconfigured from a new namespace.
+ */
+static void __net_exit __drbd_net_exit(struct net *net)
+{
+	struct drbd_resource *resource;
+	struct drbd_connection *connection, *n;
+	enum drbd_state_rv rv;
+	LIST_HEAD(connections_wait_list);
+
+	/* Disconnect and removal of paths works in 3 steps:
+	 * 1. Find all connections associated with the namespace and add them
+	 *    to a separate list.
+	 * 2. Iterate over all connections in the new list and start the
+	 *    disconnects.
+	 * 3. Iterate again over all connections, waiting for them to
+	 *    disconnect, and remove the path configuration.
+	 */
+
+	/* Step 1 */
+	rcu_read_lock();
+	for_each_resource_rcu(resource, &drbd_resources) {
+		for_each_connection_rcu(connection, resource) {
+			/* We don't have to worry about any races here:
+			 * For a connection to be "missed", it would need to be configured
+			 * from the namespace to be removed. Since netlink does keep the
+			 * namespace alive for the duration of its connection, we can
+			 * assume the namespace assignment can no longer be changed. */
+			if (net_eq(net, drbd_net_assigned_to_connection(connection))) {
+				drbd_info(connection, "Disconnect because network namespace is exiting\n");
+
+				kref_get(&connection->kref);
+
+				list_add(&connection->remove_net_list, &connections_wait_list);
+			}
+		}
+	}
+	rcu_read_unlock();
+
+	/* Step 2 */
+	list_for_each_entry(connection, &connections_wait_list, remove_net_list) {
+		/* We just start the disconnect here.  We have to use force=true here,
+		 * otherwise the disconnect might fail waiting for some openers to disappear.
+		 *
+		 * Actually waiting for the disconnect is relegated to step 3, so we disconnect
+		 * in parallel. */
+		rv = change_cstate(connection, C_DISCONNECTING, CS_HARD);
+		if (rv < SS_SUCCESS && rv != SS_ALREADY_STANDALONE)
+			drbd_err(connection, "Failed to disconnect: %s\n", drbd_set_st_err_str(rv));
+	}
+
+	/* Step 3 */
+	list_for_each_entry_safe(connection, n, &connections_wait_list, remove_net_list) {
+		list_del_init(&connection->remove_net_list);
+
+		/* Wait here for StandAlone: a path can only be removed if it's not established */
+		wait_event(connection->resource->state_wait, connection->cstate[NOW] == C_STANDALONE);
+
+		mutex_lock(&connection->resource->adm_mutex);
+		mutex_lock(&connection->resource->conf_update);
+		drbd_remove_all_paths(connection);
+		mutex_unlock(&connection->resource->conf_update);
+		mutex_unlock(&connection->resource->adm_mutex);
+
+		kref_put(&connection->kref, drbd_destroy_connection);
+	}
+}
+
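+/* Record the current dagtag in each connection's "next" unplug slot and
+ * wake the senders, which handle the actual unplug from their own context. */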
+void drbd_queue_unplug(struct drbd_device *device)
+{
+	struct drbd_resource *resource = device->resource;
+	struct drbd_connection *connection;
+	u64 dagtag_sector;
+
+	dagtag_sector = resource->dagtag_sector;
+
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		/* use the "next" slot */
+		unsigned int i = !connection->todo.unplug_slot;
+		connection->todo.unplug_dagtag_sector[i] = dagtag_sector;
+		wake_up(&connection->sender_work.q_wait);
+	}
+	rcu_read_unlock();
+}
+
+static void drbd_set_defaults(struct drbd_device *device)
 {
-	/* D_ASSERT(device, atomic_read(&drbd_pp_vacant)==0); */
+	device->disk_state[NOW] = D_DISKLESS;
+}
 
+static void drbd_destroy_mempools(void)
+{
 	bioset_exit(&drbd_io_bio_set);
 	bioset_exit(&drbd_md_io_bio_set);
 	mempool_exit(&drbd_buffer_page_pool);
 	mempool_exit(&drbd_md_io_page_pool);
 	mempool_exit(&drbd_ee_mempool);
 	mempool_exit(&drbd_request_mempool);
-	kmem_cache_destroy(drbd_ee_cache);
-	kmem_cache_destroy(drbd_request_cache);
-	kmem_cache_destroy(drbd_bm_ext_cache);
-	kmem_cache_destroy(drbd_al_ext_cache);
+	/* kmem_cache_destroy(NULL) is a no-op */
+	kmem_cache_destroy(drbd_ee_cache);
+	kmem_cache_destroy(drbd_request_cache);
+	kmem_cache_destroy(drbd_al_ext_cache);
 
 	drbd_ee_cache        = NULL;
 	drbd_request_cache   = NULL;
-	drbd_bm_ext_cache    = NULL;
 	drbd_al_ext_cache    = NULL;
 
 	return;
@@ -2090,11 +3249,6 @@ static int drbd_create_mempools(void)
 	if (drbd_ee_cache == NULL)
 		goto Enomem;
 
-	drbd_bm_ext_cache = kmem_cache_create(
-		"drbd_bm", sizeof(struct bm_extent), 0, 0, NULL);
-	if (drbd_bm_ext_cache == NULL)
-		goto Enomem;
-
 	drbd_al_ext_cache = kmem_cache_create(
 		"drbd_al", sizeof(struct lc_element), 0, 0, NULL);
 	if (drbd_al_ext_cache == NULL)
@@ -2113,7 +3267,6 @@ static int drbd_create_mempools(void)
 	ret = mempool_init_page_pool(&drbd_md_io_page_pool, DRBD_MIN_POOL_PAGES, 0);
 	if (ret)
 		goto Enomem;
-
 	ret = mempool_init_page_pool(&drbd_buffer_page_pool, number, 0);
 	if (ret)
 		goto Enomem;
@@ -2134,70 +3287,77 @@ static int drbd_create_mempools(void)
 	return -ENOMEM;
 }
 
-static void drbd_release_all_peer_reqs(struct drbd_device *device)
+static void free_peer_device(struct drbd_peer_device *peer_device)
 {
-	int rr;
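+	/* The peer device may still hold the device's uuid_sem read lock,
+	 * taken in another context (hence the _non_owner release). */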
+	if (test_and_clear_bit(HOLDING_UUID_READ_LOCK, &peer_device->flags))
+		up_read_non_owner(&peer_device->device->uuid_sem);
 
-	rr = drbd_free_peer_reqs(device, &device->active_ee);
-	if (rr)
-		drbd_err(device, "%d EEs in active list found!\n", rr);
+	kfree(peer_device->rs_plan_s);
+	kfree(peer_device->conf);
+	kfree(peer_device);
+}
 
-	rr = drbd_free_peer_reqs(device, &device->sync_ee);
-	if (rr)
-		drbd_err(device, "%d EEs in sync list found!\n", rr);
+static void drbd_device_finalize_work_fn(struct work_struct *work)
+{
+	struct drbd_device *device = container_of(work, struct drbd_device, finalize_work);
+	struct drbd_resource *resource = device->resource;
 
-	rr = drbd_free_peer_reqs(device, &device->read_ee);
-	if (rr)
-		drbd_err(device, "%d EEs in read list found!\n", rr);
+	/* ldev_safe: no other contexts can access */
+	drbd_bm_free(device);
 
-	rr = drbd_free_peer_reqs(device, &device->done_ee);
-	if (rr)
-		drbd_err(device, "%d EEs in done list found!\n", rr);
+	put_disk(device->vdisk);
+
+	kfree(device);
+
+	kref_put(&resource->kref, drbd_destroy_resource);
 }
 
-/* caution. no locking. */
+/* may not sleep, called from call_rcu. */
 void drbd_destroy_device(struct kref *kref)
 {
 	struct drbd_device *device = container_of(kref, struct drbd_device, kref);
-	struct drbd_resource *resource = device->resource;
-	struct drbd_peer_device *peer_device, *tmp_peer_device;
-
-	timer_shutdown_sync(&device->request_timer);
-
-	/* paranoia asserts */
-	D_ASSERT(device, device->open_cnt == 0);
-	/* end paranoia asserts */
+	struct drbd_peer_device *peer_device, *tmp;
 
 	/* cleanup stuff that may have been allocated during
 	 * device (re-)configuration or state changes */
 
-	drbd_backing_dev_free(device, device->ldev);
-	device->ldev = NULL;
+#ifdef CONFIG_DRBD_COMPAT_84
+	if (device->resource->res_opts.drbd8_compat_mode)
+		atomic_dec(&nr_drbd8_devices);
+#endif
 
-	drbd_release_all_peer_reqs(device);
+	free_openers(device);
 
 	lc_destroy(device->act_log);
-	lc_destroy(device->resync);
-
-	kfree(device->p_uuid);
-	/* device->p_uuid = NULL; */
+	for_each_peer_device_safe(peer_device, tmp, device) {
+		kref_put(&peer_device->connection->kref, drbd_destroy_connection);
+		free_peer_device(peer_device);
+	}
 
-	if (device->bitmap) /* should no longer be there. */
-		drbd_bm_cleanup(device);
 	__free_page(device->md_io.page);
-	put_disk(device->vdisk);
-	kfree(device->rs_plan_s);
 
-	/* not for_each_connection(connection, resource):
-	 * those may have been cleaned up and disassociated already.
-	 */
-	for_each_peer_device_safe(peer_device, tmp_peer_device, device) {
-		kref_put(&peer_device->connection->kref, drbd_destroy_connection);
-		kfree(peer_device);
-	}
-	if (device->submit.wq)
-		destroy_workqueue(device->submit.wq);
-	kfree(device);
+	INIT_WORK(&device->finalize_work, drbd_device_finalize_work_fn);
+	schedule_work(&device->finalize_work);
+}
+
+void drbd_destroy_resource(struct kref *kref)
+{
+	struct drbd_resource *resource = container_of(kref, struct drbd_resource, kref);
+
+	idr_destroy(&resource->devices);
+	free_cpumask_var(resource->cpu_mask);
+	kfree(resource->name);
+	kfree(resource);
+	module_put(THIS_MODULE);
+}
+
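+/* Runs as an RCU callback and therefore must not sleep; hence the
+ * _nowait variant of the thread stop. */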
+void drbd_reclaim_resource(struct rcu_head *rp)
+{
+	struct drbd_resource *resource = container_of(rp, struct drbd_resource, rcu);
+
+	drbd_thread_stop_nowait(&resource->worker);
+
+	mempool_free(resource->peer_ack_req, &drbd_request_mempool);
 	kref_put(&resource->kref, drbd_destroy_resource);
 }
 
@@ -2222,96 +3382,88 @@ static void do_retry(struct work_struct *ws)
 	list_splice_init(&retry->writes, &writes);
 	spin_unlock_irq(&retry->lock);
 
-	list_for_each_entry_safe(req, tmp, &writes, tl_requests) {
+	list_for_each_entry_safe(req, tmp, &writes, list) {
 		struct drbd_device *device = req->device;
+		struct drbd_resource *resource = device->resource;
 		struct bio *bio = req->master_bio;
+		unsigned long start_jif = req->start_jif;
 		bool expected;
+		ktime_get_accounting_assign(ktime_t start_kt, req->start_kt);
 
+
+		/* No locking when accessing local_rq_state & net_rq_state, since
+		 * this request is not active at the moment. */
 		expected =
 			expect(device, atomic_read(&req->completion_ref) == 0) &&
-			expect(device, req->rq_state & RQ_POSTPONED) &&
-			expect(device, (req->rq_state & RQ_LOCAL_PENDING) == 0 ||
-				(req->rq_state & RQ_LOCAL_ABORTED) != 0);
+			expect(device, req->local_rq_state & RQ_POSTPONED) &&
+			expect(device, (req->local_rq_state & RQ_LOCAL_PENDING) == 0 ||
+			       (req->local_rq_state & RQ_LOCAL_ABORTED) != 0);
 
 		if (!expected)
 			drbd_err(device, "req=%p completion_ref=%d rq_state=%x\n",
 				req, atomic_read(&req->completion_ref),
-				req->rq_state);
+				req->local_rq_state);
 
-		/* We still need to put one kref associated with the
+		/* We still need to put one done reference associated with the
 		 * "completion_ref" going zero in the code path that queued it
 		 * here.  The request object may still be referenced by a
 		 * frozen local req->private_bio, in case we force-detached.
 		 */
-		kref_put(&req->kref, drbd_req_destroy);
+		read_lock_irq(&resource->state_rwlock);
+		drbd_put_ref_tl_walk(req, 1, 0);
+		read_unlock_irq(&resource->state_rwlock);
 
 		/* A single suspended or otherwise blocking device may stall
-		 * all others as well.  Fortunately, this code path is to
-		 * recover from a situation that "should not happen":
-		 * concurrent writes in multi-primary setup.
-		 * In a "normal" lifecycle, this workqueue is supposed to be
-		 * destroyed without ever doing anything.
-		 * If it turns out to be an issue anyways, we can do per
+		 * all others as well. This code path is to recover from a
+		 * situation that "should not happen": concurrent writes in
+		 * multi-primary setup. It is also used for retrying failed
+		 * reads. If it turns out to be an issue, we can do per
 		 * resource (replication group) or per device (minor) retry
 		 * workqueues instead.
 		 */
 
 		/* We are not just doing submit_bio_noacct(),
 		 * as we want to keep the start_time information. */
-		inc_ap_bio(device);
-		__drbd_make_request(device, bio);
+		__drbd_make_request(device, bio, start_kt, start_jif);
 	}
 }
 
-/* called via drbd_req_put_completion_ref(),
- * holds resource->req_lock */
+/* called via drbd_req_put_completion_ref() */
 void drbd_restart_request(struct drbd_request *req)
 {
+	struct drbd_device *device = req->device;
+	struct drbd_resource *resource = device->resource;
+	bool susp = drbd_suspended(device);
 	unsigned long flags;
+
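+	/* While IO is suspended, park the request on the resource's
+	 * suspended_reqs list; drbd_restart_suspended_reqs() moves it to the
+	 * retry queue once IO resumes. */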
 	spin_lock_irqsave(&retry.lock, flags);
-	list_move_tail(&req->tl_requests, &retry.writes);
+	list_move_tail(&req->list, susp ? &resource->suspended_reqs : &retry.writes);
 	spin_unlock_irqrestore(&retry.lock, flags);
 
 	/* Drop the extra reference that would otherwise
 	 * have been dropped by complete_master_bio.
 	 * do_retry() needs to grab a new one. */
-	dec_ap_bio(req->device);
+	dec_ap_bio(device, bio_data_dir(req->master_bio));
 
-	queue_work(retry.wq, &retry.worker);
+	if (!susp)
+		queue_work(retry.wq, &retry.worker);
 }
 
-void drbd_destroy_resource(struct kref *kref)
+void drbd_restart_suspended_reqs(struct drbd_resource *resource)
 {
-	struct drbd_resource *resource =
-		container_of(kref, struct drbd_resource, kref);
-
-	idr_destroy(&resource->devices);
-	free_cpumask_var(resource->cpu_mask);
-	kfree(resource->name);
-	kfree(resource);
-}
+	unsigned long flags;
 
-void drbd_free_resource(struct drbd_resource *resource)
-{
-	struct drbd_connection *connection, *tmp;
+	spin_lock_irqsave(&retry.lock, flags);
+	list_splice_init(&resource->suspended_reqs, &retry.writes);
+	spin_unlock_irqrestore(&retry.lock, flags);
 
-	for_each_connection_safe(connection, tmp, resource) {
-		list_del(&connection->connections);
-		drbd_debugfs_connection_cleanup(connection);
-		kref_put(&connection->kref, drbd_destroy_connection);
-	}
-	drbd_debugfs_resource_cleanup(resource);
-	kref_put(&resource->kref, drbd_destroy_resource);
+	queue_work(retry.wq, &retry.worker);
 }
 
 static void drbd_cleanup(void)
 {
-	unsigned int i;
-	struct drbd_device *device;
-	struct drbd_resource *resource, *tmp;
-
 	/* first remove proc,
-	 * drbdsetup uses it's presence to detect
+	 * drbdsetup uses its presence to detect
 	 * whether DRBD is loaded.
 	 * If we would get stuck in proc removal,
 	 * but have netlink already deregistered,
@@ -2325,19 +3477,13 @@ static void drbd_cleanup(void)
 		destroy_workqueue(retry.wq);
 
 	drbd_genl_unregister();
-
-	idr_for_each_entry(&drbd_devices, device, i)
-		drbd_delete_device(device);
-
-	/* not _rcu since, no other updater anymore. Genl already unregistered */
-	for_each_resource_safe(resource, tmp, &drbd_resources) {
-		list_del(&resource->resources);
-		drbd_free_resource(resource);
-	}
-
 	drbd_debugfs_cleanup();
 
+	unregister_pernet_device(&drbd_pernet_ops);
+
 	drbd_destroy_mempools();
+	if (ping_ack_sender)
+		destroy_workqueue(ping_ack_sender);
 	unregister_blkdev(DRBD_MAJOR, "drbd");
 
 	idr_destroy(&drbd_devices);
@@ -2366,6 +3512,16 @@ static int w_complete(struct drbd_work *w, int cancel)
 	return 0;
 }
 
+void drbd_queue_work(struct drbd_work_queue *q, struct drbd_work *w)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&q->q_lock, flags);
+	list_add_tail(&w->list, &q->q);
+	spin_unlock_irqrestore(&q->q_lock, flags);
+	wake_up(&q->q_wait);
+}
+
 void drbd_flush_workqueue(struct drbd_work_queue *work_queue)
 {
 	struct completion_work completion_work;
@@ -2376,6 +3532,23 @@ void drbd_flush_workqueue(struct drbd_work_queue *work_queue)
 	wait_for_completion(&completion_work.done);
 }
 
+void drbd_flush_workqueue_interruptible(struct drbd_device *device)
+{
+	struct completion_work completion_work;
+	int err;
+
+	completion_work.w.cb = w_complete;
+	init_completion(&completion_work.done);
+	drbd_queue_work(&device->resource->work, &completion_work.w);
+	err = wait_for_completion_interruptible(&completion_work.done);
+	if (err == -ERESTARTSYS) {
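+		/* Interrupted: ask in-flight meta-data IO to abort so that the
+		 * queued completion work can finish, then wait for it. */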
+		set_bit(ABORT_MDIO, &device->flags);
+		wake_up_all(&device->misc_wait);
+		wait_for_completion(&completion_work.done);
+		clear_bit(ABORT_MDIO, &device->flags);
+	}
+}
+
 struct drbd_resource *drbd_find_resource(const char *name)
 {
 	struct drbd_resource *resource;
@@ -2396,51 +3569,58 @@ struct drbd_resource *drbd_find_resource(const char *name)
 	return resource;
 }
 
-struct drbd_connection *conn_get_by_addrs(void *my_addr, int my_addr_len,
-				     void *peer_addr, int peer_addr_len)
+static void drbd_put_send_buffers(struct drbd_connection *connection)
 {
-	struct drbd_resource *resource;
-	struct drbd_connection *connection;
+	unsigned int i;
 
-	rcu_read_lock();
-	for_each_resource_rcu(resource, &drbd_resources) {
-		for_each_connection_rcu(connection, resource) {
-			if (connection->my_addr_len == my_addr_len &&
-			    connection->peer_addr_len == peer_addr_len &&
-			    !memcmp(&connection->my_addr, my_addr, my_addr_len) &&
-			    !memcmp(&connection->peer_addr, peer_addr, peer_addr_len)) {
-				kref_get(&connection->kref);
-				goto found;
-			}
+	for (i = DATA_STREAM; i <= CONTROL_STREAM; i++) {
+		if (connection->send_buffer[i].page) {
+			put_page(connection->send_buffer[i].page);
+			connection->send_buffer[i].page = NULL;
 		}
 	}
-	connection = NULL;
-found:
-	rcu_read_unlock();
-	return connection;
 }
 
-static int drbd_alloc_socket(struct drbd_socket *socket)
+static int drbd_alloc_send_buffers(struct drbd_connection *connection)
 {
-	socket->rbuf = (void *) __get_free_page(GFP_KERNEL);
-	if (!socket->rbuf)
-		return -ENOMEM;
-	socket->sbuf = (void *) __get_free_page(GFP_KERNEL);
-	if (!socket->sbuf)
-		return -ENOMEM;
+	unsigned int i;
+
+	for (i = DATA_STREAM; i <= CONTROL_STREAM; i++) {
+		struct page *page;
+
+		page = alloc_page(GFP_KERNEL);
+		if (!page) {
+			drbd_put_send_buffers(connection);
+			return -ENOMEM;
+		}
+		connection->send_buffer[i].page = page;
+		connection->send_buffer[i].unsent =
+		connection->send_buffer[i].pos = page_address(page);
+	}
+
 	return 0;
 }
 
-static void drbd_free_socket(struct drbd_socket *socket)
+void drbd_flush_peer_acks(struct drbd_resource *resource)
 {
-	free_page((unsigned long) socket->sbuf);
-	free_page((unsigned long) socket->rbuf);
+	spin_lock_irq(&resource->peer_ack_lock);
+	if (resource->peer_ack_req) {
+		resource->last_peer_acked_dagtag = resource->peer_ack_req->dagtag_sector;
+		drbd_queue_peer_ack(resource, resource->peer_ack_req);
+		resource->peer_ack_req = NULL;
+	}
+	spin_unlock_irq(&resource->peer_ack_lock);
 }
 
-void conn_free_crypto(struct drbd_connection *connection)
+static void peer_ack_timer_fn(struct timer_list *t)
 {
-	drbd_free_sock(connection);
+	struct drbd_resource *resource = timer_container_of(resource, t, peer_ack_timer);
+
+	drbd_flush_peer_acks(resource);
+}
 
+void conn_free_crypto(struct drbd_connection *connection)
+{
 	crypto_free_shash(connection->csums_tfm);
 	crypto_free_shash(connection->verify_tfm);
 	crypto_free_shash(connection->cram_hmac_tfm);
@@ -2458,11 +3638,25 @@ void conn_free_crypto(struct drbd_connection *connection)
 	connection->int_dig_vv = NULL;
 }
 
-int set_resource_options(struct drbd_resource *resource, struct res_opts *res_opts)
+static void wake_all_device_misc(struct drbd_resource *resource)
+{
+	struct drbd_device *device;
+	int vnr;
+
+	rcu_read_lock();
+	idr_for_each_entry(&resource->devices, device, vnr)
+		wake_up(&device->misc_wait);
+	rcu_read_unlock();
+}
+
+int set_resource_options(struct drbd_resource *resource, struct res_opts *res_opts, const char *tag)
 {
 	struct drbd_connection *connection;
 	cpumask_var_t new_cpu_mask;
 	int err;
+	bool wake_device_misc = false;
+	bool force_state_recalc = false;
+	unsigned long irq_flags;
+	struct res_opts *old_opts = &resource->res_opts;
 
 	if (!zalloc_cpumask_var(&new_cpu_mask, GFP_KERNEL))
 		return -ENOMEM;
@@ -2491,26 +3685,47 @@ int set_resource_options(struct drbd_resource *resource, struct res_opts *res_op
 			goto fail;
 		}
 	}
+	if (res_opts->nr_requests < DRBD_NR_REQUESTS_MIN)
+		res_opts->nr_requests = DRBD_NR_REQUESTS_MIN;
+
+	if (old_opts->quorum != res_opts->quorum ||
+	    old_opts->on_no_quorum != res_opts->on_no_quorum)
+		force_state_recalc = true;
+
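+	/* If the limit was raised, waiters blocked on nr_requests can make
+	 * progress again. */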
+	if (old_opts->nr_requests < res_opts->nr_requests)
+		wake_device_misc = true;
+
 	resource->res_opts = *res_opts;
 	if (cpumask_empty(new_cpu_mask))
 		drbd_calc_cpu_mask(&new_cpu_mask);
 	if (!cpumask_equal(resource->cpu_mask, new_cpu_mask)) {
 		cpumask_copy(resource->cpu_mask, new_cpu_mask);
+		resource->worker.reset_cpu_mask = 1;
+		rcu_read_lock();
 		for_each_connection_rcu(connection, resource) {
 			connection->receiver.reset_cpu_mask = 1;
-			connection->ack_receiver.reset_cpu_mask = 1;
-			connection->worker.reset_cpu_mask = 1;
+			connection->sender.reset_cpu_mask = 1;
 		}
+		rcu_read_unlock();
 	}
 	err = 0;
 
+	if (force_state_recalc) {
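+		/* An (otherwise empty) state transition is enough to
+		 * re-evaluate quorum with the changed options. */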
+		begin_state_change(resource, &irq_flags, CS_VERBOSE | CS_FORCE_RECALC);
+		end_state_change(resource, &irq_flags, tag);
+	}
+
+	if (wake_device_misc)
+		wake_all_device_misc(resource);
+
 fail:
 	free_cpumask_var(new_cpu_mask);
 	return err;
 
 }
 
-struct drbd_resource *drbd_create_resource(const char *name)
+struct drbd_resource *drbd_create_resource(const char *name,
+					   struct res_opts *res_opts)
 {
 	struct drbd_resource *resource;
 
@@ -2525,12 +3740,52 @@ struct drbd_resource *drbd_create_resource(const char *name)
 	kref_init(&resource->kref);
 	idr_init(&resource->devices);
 	INIT_LIST_HEAD(&resource->connections);
-	resource->write_ordering = WO_BDEV_FLUSH;
-	list_add_tail_rcu(&resource->resources, &drbd_resources);
+	spin_lock_init(&resource->tl_update_lock);
+	INIT_LIST_HEAD(&resource->transfer_log);
+	spin_lock_init(&resource->peer_ack_lock);
+	INIT_LIST_HEAD(&resource->peer_ack_req_list);
+	INIT_LIST_HEAD(&resource->peer_ack_list);
+	INIT_LIST_HEAD(&resource->peer_ack_work.list);
+	resource->peer_ack_work.cb = w_queue_peer_ack;
+	timer_setup(&resource->peer_ack_timer, peer_ack_timer_fn, 0);
+	spin_lock_init(&resource->initiator_flush_lock);
+	sema_init(&resource->state_sem, 1);
+	resource->role[NOW] = R_SECONDARY;
+	resource->max_node_id = res_opts->drbd8_compat_mode ? 1 : res_opts->node_id;
+	resource->twopc_reply.initiator_node_id = -1;
 	mutex_init(&resource->conf_update);
 	mutex_init(&resource->adm_mutex);
-	spin_lock_init(&resource->req_lock);
+	mutex_init(&resource->open_release);
+	rwlock_init(&resource->state_rwlock);
+	INIT_LIST_HEAD(&resource->listeners);
+	spin_lock_init(&resource->listeners_lock);
+	init_waitqueue_head(&resource->state_wait);
+	init_waitqueue_head(&resource->twopc_wait);
+	init_waitqueue_head(&resource->barrier_wait);
+	timer_setup(&resource->twopc_timer, twopc_timer_fn, 0);
+	INIT_WORK(&resource->twopc_work, nested_twopc_work);
+	drbd_init_workqueue(&resource->work);
+	drbd_thread_init(resource, &resource->worker, drbd_worker, "worker");
+	spin_lock_init(&resource->current_tle_lock);
 	drbd_debugfs_resource_add(resource);
+	resource->cached_min_aggreed_protocol_version = drbd_protocol_version_min;
+	/* members is a bit mask of the "seen" nodes in this resource.
+	 * In drbd8 compatibility mode, we only have one peer, so we can
+	 * set this to 1. */
+	resource->members = res_opts->drbd8_compat_mode ? 1 : NODE_MASK(res_opts->node_id);
+	INIT_WORK(&resource->empty_twopc, drbd_empty_twopc_work_fn);
+	INIT_LIST_HEAD(&resource->suspended_reqs);
+
+	ratelimit_state_init(&resource->ratelimit[D_RL_R_GENERIC], 5*HZ, 10);
+
+	if (set_resource_options(resource, res_opts, "create-resource"))
+		goto fail_free_name;
+
+	drbd_thread_start(&resource->worker);
+
+	list_add_tail_rcu(&resource->resources, &drbd_resources);
+
 	return resource;
 
 fail_free_name:
@@ -2542,128 +3797,291 @@ struct drbd_resource *drbd_create_resource(const char *name)
 }
 
 /* caller must be under adm_mutex */
-struct drbd_connection *conn_create(const char *name, struct res_opts *res_opts)
+struct drbd_connection *drbd_create_connection(struct drbd_resource *resource,
+					       struct drbd_transport_class *tc)
 {
-	struct drbd_resource *resource;
 	struct drbd_connection *connection;
+	int size;
 
-	connection = kzalloc_obj(struct drbd_connection);
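+	/* The transport is assumed to be the last member of struct
+	 * drbd_connection; replace its nominal size with the actual instance
+	 * size of this transport class. */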
+	size = sizeof(*connection) - sizeof(connection->transport) + tc->instance_size;
+	connection = kzalloc(size, GFP_KERNEL);
 	if (!connection)
 		return NULL;
 
-	if (drbd_alloc_socket(&connection->data))
-		goto fail;
-	if (drbd_alloc_socket(&connection->meta))
+	ratelimit_state_init(&connection->ratelimit[D_RL_C_GENERIC], 5*HZ, /* no burst */ 1);
+
+	if (drbd_alloc_send_buffers(connection))
 		goto fail;
 
 	connection->current_epoch = kzalloc_obj(struct drbd_epoch);
 	if (!connection->current_epoch)
 		goto fail;
 
-	INIT_LIST_HEAD(&connection->transfer_log);
-
 	INIT_LIST_HEAD(&connection->current_epoch->list);
 	connection->epochs = 1;
 	spin_lock_init(&connection->epoch_lock);
 
+	INIT_LIST_HEAD(&connection->todo.work_list);
+	connection->todo.req = NULL;
+
+	atomic_set(&connection->ap_in_flight, 0);
+	atomic_set(&connection->rs_in_flight, 0);
 	connection->send.seen_any_write_yet = false;
 	connection->send.current_epoch_nr = 0;
 	connection->send.current_epoch_writes = 0;
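+	/* Start far enough behind the current resource dagtag that even a
+	 * maximally sized first request is seen as new by this connection. */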
+	connection->send.current_dagtag_sector =
+		resource->dagtag_sector - ((BIO_MAX_VECS << PAGE_SHIFT) >> SECTOR_SHIFT) - 1;
 
-	resource = drbd_create_resource(name);
-	if (!resource)
-		goto fail;
-
-	connection->cstate = C_STANDALONE;
-	mutex_init(&connection->cstate_mutex);
-	init_waitqueue_head(&connection->ping_wait);
+	connection->cstate[NOW] = C_STANDALONE;
+	connection->peer_role[NOW] = R_UNKNOWN;
 	idr_init(&connection->peer_devices);
 
 	drbd_init_workqueue(&connection->sender_work);
-	mutex_init(&connection->data.mutex);
-	mutex_init(&connection->meta.mutex);
+	mutex_init(&connection->mutex[DATA_STREAM]);
+	mutex_init(&connection->mutex[CONTROL_STREAM]);
+
+	INIT_LIST_HEAD(&connection->connect_timer_work.list);
+	timer_setup(&connection->connect_timer, connect_timer_fn, 0);
 
 	drbd_thread_init(resource, &connection->receiver, drbd_receiver, "receiver");
 	connection->receiver.connection = connection;
-	drbd_thread_init(resource, &connection->worker, drbd_worker, "worker");
-	connection->worker.connection = connection;
-	drbd_thread_init(resource, &connection->ack_receiver, drbd_ack_receiver, "ack_recv");
-	connection->ack_receiver.connection = connection;
+	drbd_thread_init(resource, &connection->sender, drbd_sender, "sender");
+	connection->sender.connection = connection;
+	spin_lock_init(&connection->primary_flush_lock);
+	spin_lock_init(&connection->flush_ack_lock);
+	spin_lock_init(&connection->peer_reqs_lock);
+	spin_lock_init(&connection->send_oos_lock);
+	INIT_LIST_HEAD(&connection->peer_requests);
+	INIT_LIST_HEAD(&connection->peer_reads);
+	INIT_LIST_HEAD(&connection->send_oos);
+	INIT_LIST_HEAD(&connection->connections);
+	INIT_LIST_HEAD(&connection->done_ee);
+	INIT_LIST_HEAD(&connection->dagtag_wait_ee);
+	INIT_LIST_HEAD(&connection->remove_net_list);
+	init_waitqueue_head(&connection->ee_wait);
 
 	kref_init(&connection->kref);
 
-	connection->resource = resource;
+	INIT_WORK(&connection->peer_ack_work, drbd_send_peer_ack_wf);
+	INIT_LIST_HEAD(&connection->send_oos_work.list);
+	connection->send_oos_work.cb = drbd_send_out_of_sync_wf;
+	INIT_LIST_HEAD(&connection->flush_ack_work.list);
+	connection->flush_ack_work.cb = drbd_flush_ack_wf;
+	INIT_WORK(&connection->send_acks_work, drbd_send_acks_wf);
+	INIT_WORK(&connection->send_ping_ack_work, drbd_send_ping_ack_wf);
+	INIT_WORK(&connection->send_ping_work, drbd_send_ping_wf);
+
+	INIT_LIST_HEAD(&connection->send_dagtag_work.list);
+	connection->send_dagtag_work.cb = w_send_dagtag;
 
-	if (set_resource_options(resource, res_opts))
-		goto fail_resource;
+	spin_lock_init(&connection->advance_cache_ptr_lock);
 
 	kref_get(&resource->kref);
-	list_add_tail_rcu(&connection->connections, &resource->connections);
-	drbd_debugfs_connection_add(connection);
+	connection->resource = resource;
+	connection->after_reconciliation.lost_node_id = -1;
+
+	connection->reassemble_buffer.buffer = connection->reassemble_buffer_bytes.bytes;
+
+	INIT_LIST_HEAD(&connection->transport.paths);
+	connection->transport.log_prefix = resource->name;
+	if (tc->ops.init(&connection->transport))
+		goto fail;
+
 	return connection;
 
-fail_resource:
-	list_del(&resource->resources);
-	drbd_free_resource(resource);
 fail:
+	drbd_put_send_buffers(connection);
+	/* Drop the resource reference if we already took it before failing. */
+	if (connection->resource)
+		kref_put(&connection->resource->kref, drbd_destroy_resource);
 	kfree(connection->current_epoch);
-	drbd_free_socket(&connection->meta);
-	drbd_free_socket(&connection->data);
 	kfree(connection);
+
 	return NULL;
 }
 
+/**
+ * drbd_transport_shutdown() - Free the transport specific members (e.g., sockets) of a connection
+ * @connection: The connection to shut down
+ * @op: The operation. Only close the connection or destroy the whole transport
+ *
+ * Must be called with conf_update held.
+ */
+void drbd_transport_shutdown(struct drbd_connection *connection, enum drbd_tr_free_op op)
+{
+	struct drbd_transport *transport = &connection->transport;
+
+	lockdep_assert_held(&connection->resource->conf_update);
+
+	mutex_lock(&connection->mutex[DATA_STREAM]);
+	mutex_lock(&connection->mutex[CONTROL_STREAM]);
+
+	/* Ignore send errors, if any: we are shutting down. */
+	flush_send_buffer(connection, DATA_STREAM);
+	flush_send_buffer(connection, CONTROL_STREAM);
+
+	/* Holding conf_update ensures that paths list is not modified concurrently. */
+	transport->class->ops.free(transport, op);
+	if (op == DESTROY_TRANSPORT) {
+		drbd_remove_all_paths(connection);
+
+		/* Wait for the delayed drbd_reclaim_path() calls. */
+		rcu_barrier();
+		drbd_put_transport_class(transport->class);
+	}
+
+	mutex_unlock(&connection->mutex[CONTROL_STREAM]);
+	mutex_unlock(&connection->mutex[DATA_STREAM]);
+}
+
+void drbd_destroy_path(struct kref *kref)
+{
+	struct drbd_path *path = container_of(kref, struct drbd_path, kref);
+	struct drbd_connection *connection =
+		container_of(path->transport, struct drbd_connection, transport);
+
+	kref_put(&connection->kref, drbd_destroy_connection);
+	kfree(path);
+}
+
 void drbd_destroy_connection(struct kref *kref)
 {
 	struct drbd_connection *connection = container_of(kref, struct drbd_connection, kref);
 	struct drbd_resource *resource = connection->resource;
+	struct drbd_peer_device *peer_device;
+	int vnr;
 
 	if (atomic_read(&connection->current_epoch->epoch_size) !=  0)
 		drbd_err(connection, "epoch_size:%d\n", atomic_read(&connection->current_epoch->epoch_size));
 	kfree(connection->current_epoch);
 
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		struct drbd_device *device = peer_device->device;
+
+		free_peer_device(peer_device);
+		kref_put(&device->kref, drbd_destroy_device);
+	}
 	idr_destroy(&connection->peer_devices);
 
-	drbd_free_socket(&connection->meta);
-	drbd_free_socket(&connection->data);
-	kfree(connection->int_dig_in);
-	kfree(connection->int_dig_vv);
+	kfree(connection->transport.net_conf);
 	kfree(connection);
 	kref_put(&resource->kref, drbd_destroy_resource);
 }
 
+struct drbd_peer_device *create_peer_device(struct drbd_device *device, struct drbd_connection *connection)
+{
+	struct drbd_peer_device *peer_device;
+	int err;
+
+	peer_device = kzalloc_obj(struct drbd_peer_device);
+	if (!peer_device)
+		return NULL;
+
+	peer_device->connection = connection;
+	peer_device->device = device;
+	peer_device->disk_state[NOW] = D_UNKNOWN;
+	peer_device->repl_state[NOW] = L_OFF;
+	peer_device->replication[NOW] = true;
+	peer_device->peer_replication[NOW] = true;
+	spin_lock_init(&peer_device->peer_seq_lock);
+
+	ratelimit_state_init(&peer_device->ratelimit[D_RL_PD_GENERIC], 5*HZ, /* no burst */ 1);
+
+	err = drbd_create_peer_device_default_config(peer_device);
+	if (err) {
+		kfree(peer_device);
+		return NULL;
+	}
+
+	timer_setup(&peer_device->start_resync_timer, start_resync_timer_fn, 0);
+
+	INIT_LIST_HEAD(&peer_device->resync_work.list);
+	peer_device->resync_work.cb  = w_resync_timer;
+	timer_setup(&peer_device->resync_timer, resync_timer_fn, 0);
+
+	INIT_LIST_HEAD(&peer_device->propagate_uuids_work.list);
+	peer_device->propagate_uuids_work.cb = w_send_uuids;
+
+	atomic_set(&peer_device->ap_pending_cnt, 0);
+	atomic_set(&peer_device->unacked_cnt, 0);
+	atomic_set(&peer_device->rs_pending_cnt, 0);
+
+	INIT_LIST_HEAD(&peer_device->resync_requests);
+
+	atomic_set(&peer_device->rs_sect_in, 0);
+
+	peer_device->bitmap_index = -1;
+	peer_device->resync_finished_pdsk = D_UNKNOWN;
+
+	peer_device->q_limits.physical_block_size = SECTOR_SIZE;
+	peer_device->q_limits.logical_block_size = SECTOR_SIZE;
+	peer_device->q_limits.alignment_offset = 0;
+	peer_device->q_limits.io_min = SECTOR_SIZE;
+	peer_device->q_limits.io_opt = PAGE_SIZE;
+	peer_device->q_limits.max_bio_size = DRBD_MAX_BIO_SIZE;
+
+	return peer_device;
+}
+
+static void drbd_ldev_destroy(struct work_struct *ws)
+{
+	struct drbd_device *device = container_of(ws, struct drbd_device, ldev_destroy_work);
+
+	/* ldev_safe: destroying the bitmap */
+	drbd_bm_free(device);
+	lc_destroy(device->act_log);
+	device->act_log = NULL;
+	/* ldev_safe: destroying ldev */
+	drbd_backing_dev_free(device, device->ldev);
+	/* ldev_safe: final teardown, no other user possible */
+	device->ldev = NULL;
+
+	clear_bit(GOING_DISKLESS, &device->flags);
+	wake_up(&device->misc_wait);
+	kref_put(&device->kref, drbd_destroy_device);
+}
+
+static int init_conflict_submitter(struct drbd_device *device)
+{
+	/* Short name so that it is recognizable from the first 15 characters. */
+	device->submit_conflict.wq =
+		alloc_ordered_workqueue("drbd%u_sc", WQ_MEM_RECLAIM, device->minor);
+	if (!device->submit_conflict.wq)
+		return -ENOMEM;
+	INIT_WORK(&device->submit_conflict.worker, drbd_do_submit_conflict);
+	INIT_LIST_HEAD(&device->submit_conflict.resync_writes);
+	INIT_LIST_HEAD(&device->submit_conflict.resync_reads);
+	INIT_LIST_HEAD(&device->submit_conflict.writes);
+	INIT_LIST_HEAD(&device->submit_conflict.peer_writes);
+	spin_lock_init(&device->submit_conflict.lock);
+	return 0;
+}
+
 static int init_submitter(struct drbd_device *device)
 {
-	/* opencoded create_singlethread_workqueue(),
-	 * to be able to say "drbd%d", ..., minor */
 	device->submit.wq =
 		alloc_ordered_workqueue("drbd%u_submit", WQ_MEM_RECLAIM, device->minor);
 	if (!device->submit.wq)
 		return -ENOMEM;
-
 	INIT_WORK(&device->submit.worker, do_submit);
 	INIT_LIST_HEAD(&device->submit.writes);
+	INIT_LIST_HEAD(&device->submit.peer_writes);
+	spin_lock_init(&device->submit.lock);
 	return 0;
 }
 
-enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsigned int minor)
+enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsigned int minor,
+				      struct device_conf *device_conf, struct drbd_device **p_device)
 {
 	struct drbd_resource *resource = adm_ctx->resource;
-	struct drbd_connection *connection, *n;
+	struct drbd_connection *connection;
 	struct drbd_device *device;
 	struct drbd_peer_device *peer_device, *tmp_peer_device;
 	struct gendisk *disk;
+	LIST_HEAD(peer_devices);
+	LIST_HEAD(tmp);
 	int id;
 	int vnr = adm_ctx->volume;
 	enum drbd_ret_code err = ERR_NOMEM;
-	struct queue_limits lim = {
-		/*
-		 * Setting the max_hw_sectors to an odd value of 8kibyte here.
-		 * This triggers a max_bio_size message upon first attach or
-		 * connect.
-		 */
-		.max_hw_sectors		= DRBD_MAX_BIO_SIZE_SAFE >> 8,
-	};
+	bool locked = false;
+
+	lockdep_assert_held(&resource->conf_update);
 
 	device = minor_to_device(minor);
 	if (device)
@@ -2675,24 +4093,65 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
 		return ERR_NOMEM;
 	kref_init(&device->kref);
 
+	ratelimit_state_init(&device->ratelimit[D_RL_D_GENERIC], 5*HZ, /* no burst */ 1);
+	ratelimit_state_init(&device->ratelimit[D_RL_D_METADATA], 5*HZ, 10);
+	ratelimit_state_init(&device->ratelimit[D_RL_D_BACKEND], 5*HZ, 10);
+
 	kref_get(&resource->kref);
 	device->resource = resource;
 	device->minor = minor;
 	device->vnr = vnr;
+	device->device_conf = *device_conf;
+
+	drbd_set_defaults(device);
+
+	atomic_set(&device->ap_bio_cnt[READ], 0);
+	atomic_set(&device->ap_bio_cnt[WRITE], 0);
+	atomic_set(&device->ap_actlog_cnt, 0);
+	atomic_set(&device->wait_for_actlog, 0);
+	atomic_set(&device->wait_for_actlog_ecnt, 0);
+	atomic_set(&device->local_cnt, 0);
+	atomic_set(&device->rs_sect_ev, 0);
+	atomic_set(&device->md_io.in_use, 0);
+
+#ifdef CONFIG_DRBD_TIMING_STATS
+	spin_lock_init(&device->timing_lock);
+#endif
+	spin_lock_init(&device->al_lock);
+
+	spin_lock_init(&device->pending_completion_lock);
+	INIT_LIST_HEAD(&device->pending_master_completion[0]);
+	INIT_LIST_HEAD(&device->pending_master_completion[1]);
+	INIT_LIST_HEAD(&device->pending_completion[0]);
+	INIT_LIST_HEAD(&device->pending_completion[1]);
+	INIT_LIST_HEAD(&device->openers);
+	spin_lock_init(&device->openers_lock);
+	spin_lock_init(&device->peer_req_bio_completion_lock);
+
+	atomic_set(&device->pending_bitmap_work.n, 0);
+	spin_lock_init(&device->pending_bitmap_work.q_lock);
+	INIT_LIST_HEAD(&device->pending_bitmap_work.q);
+
+	timer_setup(&device->md_sync_timer, md_sync_timer_fn, 0);
+	timer_setup(&device->request_timer, request_timer_fn, 0);
+
+	init_waitqueue_head(&device->misc_wait);
+	init_waitqueue_head(&device->al_wait);
+	init_waitqueue_head(&device->seq_wait);
 
-	drbd_init_set_defaults(device);
+	init_rwsem(&device->uuid_sem);
 
-	disk = blk_alloc_disk(&lim, NUMA_NO_NODE);
+	disk = blk_alloc_disk(NULL, NUMA_NO_NODE);
 	if (IS_ERR(disk)) {
 		err = PTR_ERR(disk);
 		goto out_no_disk;
 	}
 
+	INIT_WORK(&device->ldev_destroy_work, drbd_ldev_destroy);
+
 	device->vdisk = disk;
 	device->rq_queue = disk->queue;
 
-	set_disk_ro(disk, true);
-
 	disk->major = DRBD_MAJOR;
 	disk->first_minor = minor;
 	disk->minors = 1;
@@ -2705,12 +4164,39 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
 	if (!device->md_io.page)
 		goto out_no_io_page;
 
-	if (drbd_bm_init(device))
-		goto out_no_bitmap;
+	/* Just put in some sane default; should never be used. */
+	device->last_bm_block_shift = BM_BLOCK_SHIFT_MIN;
+
+	spin_lock_init(&device->interval_lock);
 	device->read_requests = RB_ROOT;
-	device->write_requests = RB_ROOT;
+	device->requests = RB_ROOT;
+
+	BUG_ON(!mutex_is_locked(&resource->conf_update));
+	for_each_connection(connection, resource) {
+		peer_device = create_peer_device(device, connection);
+		if (!peer_device)
+			goto out_no_peer_device;
+		list_add(&peer_device->peer_devices, &peer_devices);
+	}
+
+	/* Insert the new device into all idrs under the state_rwlock write
+	 * lock to guarantee a consistent object model. idr_preload() doesn't
+	 * help because it can only guarantee that a single idr_alloc() will
+	 * succeed. This fails (and will be retried) if no memory is
+	 * immediately available.
+	 * Keep in mind that RCU readers might find the device the moment
+	 * we add it to the resource->devices IDR!
+	 */
+
+	INIT_LIST_HEAD(&device->peer_devices);
+	spin_lock_init(&device->pending_bmio_lock);
+	INIT_LIST_HEAD(&device->pending_bitmap_io);
 
-	id = idr_alloc(&drbd_devices, device, minor, minor + 1, GFP_KERNEL);
+	locked = true;
+	write_lock_irq(&resource->state_rwlock);
+	spin_lock(&drbd_devices_lock);
+	id = idr_alloc(&drbd_devices, device, minor, minor + 1, GFP_NOWAIT);
+	spin_unlock(&drbd_devices_lock);
 	if (id < 0) {
 		if (id == -ENOSPC)
 			err = ERR_MINOR_OR_VOLUME_EXISTS;
@@ -2718,7 +4204,7 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
 	}
 	kref_get(&device->kref);
 
-	id = idr_alloc(&resource->devices, device, vnr, vnr + 1, GFP_KERNEL);
+	id = idr_alloc(&resource->devices, device, vnr, vnr + 1, GFP_NOWAIT);
 	if (id < 0) {
 		if (id == -ENOSPC)
 			err = ERR_MINOR_OR_VOLUME_EXISTS;
@@ -2726,105 +4212,219 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
 	}
 	kref_get(&device->kref);
 
-	INIT_LIST_HEAD(&device->peer_devices);
-	INIT_LIST_HEAD(&device->pending_bitmap_io);
-	for_each_connection(connection, resource) {
-		peer_device = kzalloc_obj(struct drbd_peer_device);
-		if (!peer_device)
-			goto out_idr_remove_from_resource;
-		peer_device->connection = connection;
-		peer_device->device = device;
-
-		list_add(&peer_device->peer_devices, &device->peer_devices);
+	list_for_each_entry_safe(peer_device, tmp_peer_device, &peer_devices, peer_devices) {
+		connection = peer_device->connection;
+		id = idr_alloc(&connection->peer_devices, peer_device,
+			       device->vnr, device->vnr + 1, GFP_NOWAIT);
+		if (id < 0)
+			goto out_remove_peer_device;
+		list_del(&peer_device->peer_devices);
+		list_add_rcu(&peer_device->peer_devices, &device->peer_devices);
+		kref_get(&connection->kref);
 		kref_get(&device->kref);
+	}
+	write_unlock_irq(&resource->state_rwlock);
+	locked = false;
 
-		id = idr_alloc(&connection->peer_devices, peer_device, vnr, vnr + 1, GFP_KERNEL);
-		if (id < 0) {
-			if (id == -ENOSPC)
-				err = ERR_INVALID_REQUEST;
-			goto out_idr_remove_from_resource;
-		}
-		kref_get(&connection->kref);
-		INIT_WORK(&peer_device->send_acks_work, drbd_send_acks_wf);
+	if (init_conflict_submitter(device)) {
+		err = ERR_NOMEM;
+		goto out_remove_peer_device;
 	}
 
 	if (init_submitter(device)) {
 		err = ERR_NOMEM;
-		goto out_idr_remove_from_resource;
+		goto out_destroy_conflict_submitter;
 	}
 
 	err = add_disk(disk);
 	if (err)
-		goto out_destroy_workqueue;
+		goto out_destroy_submitter;
+	device->have_quorum[OLD] =
+	device->have_quorum[NEW] =
+		(resource->res_opts.quorum == QOU_OFF);
 
-	/* inherit the connection state */
-	device->state.conn = first_connection(resource)->cstate;
-	if (device->state.conn == C_WF_REPORT_PARAMS) {
-		for_each_peer_device(peer_device, device)
+	for_each_peer_device(peer_device, device) {
+		connection = peer_device->connection;
+		peer_device->node_id = connection->peer_node_id;
+
+		if (connection->cstate[NOW] >= C_CONNECTED)
 			drbd_connected(peer_device);
 	}
-	/* move to create_peer_device() */
-	for_each_peer_device(peer_device, device)
-		drbd_debugfs_peer_device_add(peer_device);
+
 	drbd_debugfs_device_add(device);
+	*p_device = device;
 	return NO_ERROR;
 
-out_destroy_workqueue:
+out_destroy_submitter:
 	destroy_workqueue(device->submit.wq);
-out_idr_remove_from_resource:
-	for_each_connection_safe(connection, n, resource) {
-		peer_device = idr_remove(&connection->peer_devices, vnr);
-		if (peer_device)
-			kref_put(&connection->kref, drbd_destroy_connection);
-	}
-	for_each_peer_device_safe(peer_device, tmp_peer_device, device) {
+	device->submit.wq = NULL;
+out_destroy_conflict_submitter:
+	destroy_workqueue(device->submit_conflict.wq);
+	device->submit_conflict.wq = NULL;
+out_remove_peer_device:
+	if (locked) {
+		/* must not synchronize_rcu() with state_rwlock held */
+		write_unlock_irq(&resource->state_rwlock);
+		locked = false;
+	}
+	list_splice_init_rcu(&device->peer_devices, &tmp, synchronize_rcu);
+	list_for_each_entry_safe(peer_device, tmp_peer_device, &tmp, peer_devices) {
+		struct drbd_connection *connection = peer_device->connection;
+
+		idr_remove(&connection->peer_devices, device->vnr);
 		list_del(&peer_device->peer_devices);
 		kfree(peer_device);
+		kref_put(&connection->kref, drbd_destroy_connection);
 	}
 	idr_remove(&resource->devices, vnr);
+
 out_idr_remove_minor:
+	spin_lock(&drbd_devices_lock);
 	idr_remove(&drbd_devices, minor);
-	synchronize_rcu();
+	spin_unlock(&drbd_devices_lock);
 out_no_minor_idr:
-	drbd_bm_cleanup(device);
-out_no_bitmap:
+	if (locked)
+		write_unlock_irq(&resource->state_rwlock);
+	synchronize_rcu();
+
+out_no_peer_device:
+	list_for_each_entry_safe(peer_device, tmp_peer_device, &peer_devices, peer_devices) {
+		list_del(&peer_device->peer_devices);
+		kfree(peer_device);
+	}
+
 	__free_page(device->md_io.page);
 out_no_io_page:
 	put_disk(disk);
 out_no_disk:
 	kref_put(&resource->kref, drbd_destroy_resource);
+	/* kref debugging wants an extra put, see has_refs() */
 	kfree(device);
 	return err;
 }
 
-void drbd_delete_device(struct drbd_device *device)
+/**
+ * drbd_unregister_device()  -  make a device "invisible"
+ * @device: DRBD device to unregister
+ *
+ * Remove the device from the drbd object model and unregister it in the
+ * kernel.  Keep reference counts on device->kref; they are dropped in
+ * drbd_reclaim_device().
+ */
+void drbd_unregister_device(struct drbd_device *device)
 {
 	struct drbd_resource *resource = device->resource;
 	struct drbd_connection *connection;
 	struct drbd_peer_device *peer_device;
 
-	/* move to free_peer_device() */
-	for_each_peer_device(peer_device, device)
-		drbd_debugfs_peer_device_cleanup(peer_device);
-	drbd_debugfs_device_cleanup(device);
+	write_lock_irq(&resource->state_rwlock);
 	for_each_connection(connection, resource) {
 		idr_remove(&connection->peer_devices, device->vnr);
-		kref_put(&device->kref, drbd_destroy_device);
 	}
 	idr_remove(&resource->devices, device->vnr);
-	kref_put(&device->kref, drbd_destroy_device);
-	idr_remove(&drbd_devices, device_to_minor(device));
-	kref_put(&device->kref, drbd_destroy_device);
+	spin_lock(&drbd_devices_lock);
+	idr_remove(&drbd_devices, device->minor);
+	spin_unlock(&drbd_devices_lock);
+	write_unlock_irq(&resource->state_rwlock);
+
+	for_each_peer_device(peer_device, device)
+		drbd_debugfs_peer_device_cleanup(peer_device);
+	drbd_debugfs_device_cleanup(device);
 	del_gendisk(device->vdisk);
-	synchronize_rcu();
-	kref_put(&device->kref, drbd_destroy_device);
+
+	destroy_workqueue(device->submit_conflict.wq);
+	device->submit_conflict.wq = NULL;
+	destroy_workqueue(device->submit.wq);
+	device->submit.wq = NULL;
+	timer_shutdown_sync(&device->request_timer);
+}
+
+void drbd_reclaim_device(struct rcu_head *rp)
+{
+	struct drbd_device *device = container_of(rp, struct drbd_device, rcu);
+	struct drbd_peer_device *peer_device;
+	int i;
+
+	for_each_peer_device(peer_device, device) {
+		kref_put(&device->kref, drbd_destroy_device);
+	}
+
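+	/* Drop the references that drbd_unregister_device() left in place:
+	 * one for each peer device above, plus (presumably) one per idr the
+	 * device was removed from and the initial creation reference. */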
+	for (i = 0; i < 3; i++) {
+		kref_put(&device->kref, drbd_destroy_device);
+	}
+}
+
+static void shutdown_connect_timer(struct drbd_connection *connection)
+{
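+	/* A pending connect timer holds a connection reference; drop it if
+	 * the timer had not fired yet. */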
+	if (timer_shutdown_sync(&connection->connect_timer)) {
+		kref_put(&connection->kref, drbd_destroy_connection);
+	}
+}
+
+void del_connect_timer(struct drbd_connection *connection)
+{
+	if (timer_delete_sync(&connection->connect_timer)) {
+		kref_put(&connection->kref, drbd_destroy_connection);
+	}
+}
+
+/**
+ * drbd_unregister_connection()  -  make a connection "invisible"
+ * @connection: DRBD connection to unregister
+ *
+ * Remove the connection from the drbd object model.  Keep reference counts on
+ * connection->kref; they are dropped in drbd_reclaim_connection().
+ */
+void drbd_unregister_connection(struct drbd_connection *connection)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_peer_device *peer_device;
+	int vnr, rr;
+
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
+		drbd_debugfs_peer_device_cleanup(peer_device);
+
+	write_lock_irq(&resource->state_rwlock);
+	set_bit(C_UNREGISTERED, &connection->flags);
+	smp_wmb(); /* make C_UNREGISTERED visible before unlinking from the lists */
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
+		list_del_rcu(&peer_device->peer_devices);
+	list_del_rcu(&connection->connections);
+	write_unlock_irq(&resource->state_rwlock);
+
+	drbd_debugfs_connection_cleanup(connection);
+
+	shutdown_connect_timer(connection);
+
+	rr = drbd_free_peer_reqs(connection, &connection->done_ee);
+	if (rr)
+		drbd_err(connection, "%d EEs in done list found!\n", rr);
+
+	drbd_transport_shutdown(connection, DESTROY_TRANSPORT);
+	drbd_put_send_buffers(connection);
+	conn_free_crypto(connection);
+}
+
+void drbd_reclaim_connection(struct rcu_head *rp)
+{
+	struct drbd_connection *connection =
+		container_of(rp, struct drbd_connection, rcu);
+	struct drbd_peer_device *peer_device;
+	int vnr;
+
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		kref_put(&connection->kref, drbd_destroy_connection);
+	}
+	kref_put(&connection->kref, drbd_destroy_connection);
+}
+
+void drbd_reclaim_path(struct rcu_head *rp)
+{
+	struct drbd_path *path = container_of(rp, struct drbd_path, rcu);
+
+	INIT_LIST_HEAD(&path->list);
+	kref_put(&path->kref, drbd_destroy_path);
 }
 
 static int __init drbd_init(void)
 {
 	int err;
 
-	if (drbd_minor_count < DRBD_MINOR_COUNT_MIN || drbd_minor_count > DRBD_MINOR_COUNT_MAX) {
+	if (drbd_minor_count < DRBD_MINOR_COUNT_MIN ||
+	    drbd_minor_count > DRBD_MINOR_COUNT_MAX) {
 		pr_err("invalid minor_count (%d)\n", drbd_minor_count);
 #ifdef MODULE
 		return -EINVAL;
@@ -2840,24 +4440,41 @@ static int __init drbd_init(void)
 		return err;
 	}
 
+	/*
+	 * allocate all necessary structs
+	 */
 	drbd_proc = NULL; /* play safe for drbd_cleanup */
 	idr_init(&drbd_devices);
 
-	mutex_init(&resources_mutex);
 	INIT_LIST_HEAD(&drbd_resources);
 
+	err = register_pernet_device(&drbd_pernet_ops);
+	if (err) {
+		pr_err("unable to register net namespace handlers\n");
+		goto fail;
+	}
+
+	drbd_enable_netns();
 	err = drbd_genl_register();
 	if (err) {
 		pr_err("unable to register generic netlink family\n");
 		goto fail;
 	}
 
+	err = -ENOMEM;
+	ping_ack_sender = alloc_workqueue("drbd_pas",
+			WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
+	if (!ping_ack_sender)
+		goto fail;
+
 	err = drbd_create_mempools();
 	if (err)
 		goto fail;
 
 	err = -ENOMEM;
-	drbd_proc = proc_create_single("drbd", S_IFREG | 0444 , NULL, drbd_seq_show);
+	drbd_proc = proc_create_single("drbd", S_IFREG | 0444, NULL,
+			drbd_seq_show);
 	if (!drbd_proc)	{
 		pr_err("unable to register proc file\n");
 		goto fail;
@@ -2879,6 +4496,11 @@ static int __init drbd_init(void)
 	       GENL_MAGIC_VERSION, PRO_VERSION_MIN, PRO_VERSION_MAX);
 	pr_info("%s\n", drbd_buildtag());
 	pr_info("registered as block device major %d\n", DRBD_MAJOR);
+
+#ifdef CONFIG_DRBD_COMPAT_84
+	atomic_set(&nr_drbd8_devices, 0);
+#endif
+
 	return 0; /* Success! */
 
 fail:
@@ -2890,493 +4512,1104 @@ static int __init drbd_init(void)
 	return err;
 }
 
-static void drbd_free_one_sock(struct drbd_socket *ds)
-{
-	struct socket *s;
-	mutex_lock(&ds->mutex);
-	s = ds->socket;
-	ds->socket = NULL;
-	mutex_unlock(&ds->mutex);
-	if (s) {
-		/* so debugfs does not need to mutex_lock() */
-		synchronize_rcu();
-		kernel_sock_shutdown(s, SHUT_RDWR);
-		sock_release(s);
-	}
-}
-
-void drbd_free_sock(struct drbd_connection *connection)
-{
-	if (connection->data.socket)
-		drbd_free_one_sock(&connection->data);
-	if (connection->meta.socket)
-		drbd_free_one_sock(&connection->meta);
-}
-
 /* meta data management */
 
-void conn_md_sync(struct drbd_connection *connection)
+static void drbd_md_encode_9(struct drbd_device *device,
+			     struct meta_data_on_disk_9 *buffer)
 {
-	struct drbd_peer_device *peer_device;
-	int vnr;
+	int i;
 
-	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-		struct drbd_device *device = peer_device->device;
-
-		kref_get(&device->kref);
-		rcu_read_unlock();
-		drbd_md_sync(device);
-		kref_put(&device->kref, drbd_destroy_device);
-		rcu_read_lock();
-	}
-	rcu_read_unlock();
-}
-
-/* aligned 4kByte */
-struct meta_data_on_disk {
-	u64 la_size_sect;      /* last agreed size. */
-	u64 uuid[UI_SIZE];   /* UUIDs. */
-	u64 device_uuid;
-	u64 reserved_u64_1;
-	u32 flags;             /* MDF */
-	u32 magic;
-	u32 md_size_sect;
-	u32 al_offset;         /* offset to this block */
-	u32 al_nr_extents;     /* important for restoring the AL (userspace) */
-	      /* `-- act_log->nr_elements <-- ldev->dc.al_extents */
-	u32 bm_offset;         /* offset to the bitmap, from here */
-	u32 bm_bytes_per_bit;  /* BM_BLOCK_SIZE */
-	u32 la_peer_max_bio_size;   /* last peer max_bio_size */
-
-	/* see al_tr_number_to_on_disk_sector() */
-	u32 al_stripes;
-	u32 al_stripe_size_4k;
-
-	u8 reserved_u8[4096 - (7*8 + 10*4)];
-} __packed;
-
-
-
-void drbd_md_write(struct drbd_device *device, void *b)
-{
-	struct meta_data_on_disk *buffer = b;
-	sector_t sector;
-	int i;
-
-	memset(buffer, 0, sizeof(*buffer));
-
-	buffer->la_size_sect = cpu_to_be64(get_capacity(device->vdisk));
-	for (i = UI_CURRENT; i < UI_SIZE; i++)
-		buffer->uuid[i] = cpu_to_be64(device->ldev->md.uuid[i]);
+	buffer->effective_size = cpu_to_be64(device->ldev->md.effective_size);
+	buffer->current_uuid = cpu_to_be64(device->ldev->md.current_uuid);
+	buffer->members = cpu_to_be64(device->ldev->md.members);
 	buffer->flags = cpu_to_be32(device->ldev->md.flags);
-	buffer->magic = cpu_to_be32(DRBD_MD_MAGIC_84_UNCLEAN);
+	buffer->magic = cpu_to_be32(DRBD_MD_MAGIC_09);
 
 	buffer->md_size_sect  = cpu_to_be32(device->ldev->md.md_size_sect);
 	buffer->al_offset     = cpu_to_be32(device->ldev->md.al_offset);
 	buffer->al_nr_extents = cpu_to_be32(device->act_log->nr_elements);
-	buffer->bm_bytes_per_bit = cpu_to_be32(BM_BLOCK_SIZE);
+	buffer->bm_bytes_per_bit = cpu_to_be32(device->ldev->md.bm_block_size);
 	buffer->device_uuid = cpu_to_be64(device->ldev->md.device_uuid);
 
 	buffer->bm_offset = cpu_to_be32(device->ldev->md.bm_offset);
-	buffer->la_peer_max_bio_size = cpu_to_be32(device->peer_max_bio_size);
+	buffer->la_peer_max_bio_size = cpu_to_be32(device->device_conf.max_bio_size);
+	buffer->bm_max_peers = cpu_to_be32(device->ldev->md.max_peers);
+	buffer->node_id = cpu_to_be32(device->ldev->md.node_id);
+	for (i = 0; i < DRBD_NODE_ID_MAX; i++) {
+		struct drbd_peer_md *peer_md = &device->ldev->md.peers[i];
+
+		buffer->peers[i].bitmap_uuid = cpu_to_be64(peer_md->bitmap_uuid);
+		buffer->peers[i].bitmap_dagtag = cpu_to_be64(peer_md->bitmap_dagtag);
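+		/* MDF_HAVE_BITMAP is tracked in memory only, never persisted. */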
+		buffer->peers[i].flags = cpu_to_be32(peer_md->flags & ~MDF_HAVE_BITMAP);
+		buffer->peers[i].bitmap_index = cpu_to_be32(peer_md->bitmap_index);
+	}
+	BUILD_BUG_ON(ARRAY_SIZE(device->ldev->md.history_uuids) != ARRAY_SIZE(buffer->history_uuids));
+	for (i = 0; i < ARRAY_SIZE(buffer->history_uuids); i++)
+		buffer->history_uuids[i] = cpu_to_be64(device->ldev->md.history_uuids[i]);
 
 	buffer->al_stripes = cpu_to_be32(device->ldev->md.al_stripes);
 	buffer->al_stripe_size_4k = cpu_to_be32(device->ldev->md.al_stripe_size_4k);
 
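+	/* Without an allocated bitmap we cannot claim anything is in sync;
+	 * record "full sync needed" towards every peer. */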
+	if (device->bitmap == NULL)
+		for (i = 0; i < DRBD_PEERS_MAX; i++)
+			buffer->peers[i].flags |= cpu_to_be32(MDF_PEER_FULL_SYNC);
+}
+
+static void drbd_md_encode(struct drbd_device *device, void *buffer)
+{
+	if (test_bit(LEGACY_84_MD, &device->flags))
+		drbd_md_encode_84(device, buffer);
+	else
+		drbd_md_encode_9(device, buffer);
+}
+
+int drbd_md_write(struct drbd_device *device, struct meta_data_on_disk_9 *buffer)
+{
+	sector_t sector;
+	int err;
+
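+	/* With DAX-mapped metadata, encode in place and flush the CPU cache
+	 * to persistence instead of issuing block IO. */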
+	if (drbd_md_dax_active(device->ldev)) {
+		drbd_md_encode(device, drbd_dax_md_addr(device->ldev));
+		arch_wb_cache_pmem(drbd_dax_md_addr(device->ldev),
+				   sizeof(struct meta_data_on_disk_9));
+		return 0;
+	}
+
+	memset(buffer, 0, sizeof(*buffer));
+
+	drbd_md_encode(device, buffer);
+
 	D_ASSERT(device, drbd_md_ss(device->ldev) == device->ldev->md.md_offset);
 	sector = device->ldev->md.md_offset;
 
-	if (drbd_md_sync_page_io(device, device->ldev, sector, REQ_OP_WRITE)) {
-		/* this was a try anyways ... */
+	err = drbd_md_sync_page_io(device, device->ldev, sector, REQ_OP_WRITE);
+	if (err) {
 		drbd_err(device, "meta data update failed!\n");
-		drbd_chk_io_error(device, 1, DRBD_META_IO_ERROR);
+		drbd_handle_io_error(device, DRBD_META_IO_ERROR);
 	}
+
+	return err;
 }
 
 /**
- * drbd_md_sync() - Writes the meta data super block if the MD_DIRTY flag bit is set
+ * __drbd_md_sync() - Write the meta data super block
  * @device:	DRBD device.
+ * @maybe:	if true, skip the write when the meta data is not dirty.
  */
-void drbd_md_sync(struct drbd_device *device)
+static int __drbd_md_sync(struct drbd_device *device, bool maybe)
 {
-	struct meta_data_on_disk *buffer;
+	struct meta_data_on_disk_9 *buffer;
+	int err = -EIO;
 
 	/* Don't accidentally change the DRBD meta data layout. */
-	BUILD_BUG_ON(UI_SIZE != 4);
-	BUILD_BUG_ON(sizeof(struct meta_data_on_disk) != 4096);
-
-	timer_delete(&device->md_sync_timer);
-	/* timer may be rearmed by drbd_md_mark_dirty() now. */
-	if (!test_and_clear_bit(MD_DIRTY, &device->flags))
-		return;
+	BUILD_BUG_ON(DRBD_PEERS_MAX != 32);
+	BUILD_BUG_ON(HISTORY_UUIDS != 32);
+	BUILD_BUG_ON(sizeof(struct meta_data_on_disk_9) != 4096);
 
-	/* We use here D_FAILED and not D_ATTACHING because we try to write
-	 * metadata even if we detach due to a disk failure! */
-	if (!get_ldev_if_state(device, D_FAILED))
-		return;
+	if (!get_ldev_if_state(device, D_DETACHING))
+		return -EIO;
 
 	buffer = drbd_md_get_buffer(device, __func__);
 	if (!buffer)
 		goto out;
 
-	drbd_md_write(device, buffer);
+	timer_delete(&device->md_sync_timer);
+	/* timer may be rearmed by drbd_md_mark_dirty() now. */
 
-	/* Update device->ldev->md.la_size_sect,
-	 * since we updated it on metadata. */
-	device->ldev->md.la_size_sect = get_capacity(device->vdisk);
+	if (test_and_clear_bit(MD_DIRTY, &device->flags) || !maybe) {
+		err = drbd_md_write(device, buffer);
+		if (err)
+			set_bit(MD_DIRTY, &device->flags);
+	}
 
 	drbd_md_put_buffer(device);
 out:
 	put_ldev(device);
+
+	return err;
+}
+
+int drbd_md_sync(struct drbd_device *device)
+{
+	return __drbd_md_sync(device, false);
+}
+
+int drbd_md_sync_if_dirty(struct drbd_device *device)
+{
+	return __drbd_md_sync(device, true);
+}
+
+/**
+ * drbd_md_mark_dirty() - Mark meta data super block as dirty
+ * @device:	DRBD device.
+ *
+ * Call this function if you change anything that should be written to
+ * the meta-data super block. This function sets MD_DIRTY and arms
+ * md_sync_timer so that the super block gets written out within five
+ * seconds even if nobody calls drbd_md_sync() explicitly.
+ */
+void drbd_md_mark_dirty(struct drbd_device *device)
+{
+	if (!test_and_set_bit(MD_DIRTY, &device->flags))
+		mod_timer(&device->md_sync_timer, jiffies + 5*HZ);
+}
+
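+/* Push @val into the history UUID ring, unless it is empty/just-created or
+ * already known as the current, a bitmap, or a history UUID. */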
+void _drbd_uuid_push_history(struct drbd_device *device, u64 val)
+{
+	struct drbd_md *md = &device->ldev->md;
+	int node_id, i;
+
+	if (val == UUID_JUST_CREATED || val == 0)
+		return;
+
+	val &= ~UUID_PRIMARY;
+
+	if (val == (md->current_uuid & ~UUID_PRIMARY))
+		return;
+
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		if (node_id == md->node_id)
+			continue;
+		if (val == (md->peers[node_id].bitmap_uuid & ~UUID_PRIMARY))
+			return;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(md->history_uuids); i++) {
+		if (md->history_uuids[i] == val)
+			return;
+	}
+
+	for (i = ARRAY_SIZE(md->history_uuids) - 1; i > 0; i--)
+		md->history_uuids[i] = md->history_uuids[i - 1];
+	md->history_uuids[0] = val;
+}
+
+u64 _drbd_uuid_pull_history(struct drbd_peer_device *peer_device)
+{
+	struct drbd_device *device = peer_device->device;
+	struct drbd_md *md = &device->ldev->md;
+	u64 first_history_uuid;
+	int i;
+
+	first_history_uuid = md->history_uuids[0];
+	for (i = 0; i < ARRAY_SIZE(md->history_uuids) - 1; i++)
+		md->history_uuids[i] = md->history_uuids[i + 1];
+	md->history_uuids[i] = 0;
+
+	return first_history_uuid;
+}
+
+static void __drbd_uuid_set_current(struct drbd_device *device, u64 val)
+{
+	drbd_md_mark_dirty(device);
+	if (device->resource->role[NOW] == R_PRIMARY)
+		val |= UUID_PRIMARY;
+	else
+		val &= ~UUID_PRIMARY;
+
+	device->ldev->md.current_uuid = val;
+	drbd_uuid_set_exposed(device, val, false);
+}
+
+static void __drbd_uuid_set_bitmap(struct drbd_peer_device *peer_device, u64 val)
+{
+	struct drbd_device *device = peer_device->device;
+	struct drbd_peer_md *peer_md = &device->ldev->md.peers[peer_device->node_id];
+
+	drbd_md_mark_dirty(device);
+	peer_md->bitmap_uuid = val;
+	peer_md->bitmap_dagtag = val ? device->resource->dagtag_sector : 0;
+}
+
+void _drbd_uuid_set_current(struct drbd_device *device, u64 val)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&device->ldev->md.uuid_lock, flags);
+	__drbd_uuid_set_current(device, val);
+	spin_unlock_irqrestore(&device->ldev->md.uuid_lock, flags);
+}
+
+void _drbd_uuid_set_bitmap(struct drbd_peer_device *peer_device, u64 val)
+{
+	struct drbd_device *device = peer_device->device;
+	unsigned long flags;
+
+	down_write(&device->uuid_sem);
+	spin_lock_irqsave(&device->ldev->md.uuid_lock, flags);
+	__drbd_uuid_set_bitmap(peer_device, val);
+	spin_unlock_irqrestore(&device->ldev->md.uuid_lock, flags);
+	up_write(&device->uuid_sem);
+}
+
+/* call holding down_write(uuid_sem) */
+void drbd_uuid_set_bitmap(struct drbd_peer_device *peer_device, u64 uuid)
+{
+	struct drbd_device *device = peer_device->device;
+	unsigned long flags;
+	u64 previous_uuid;
+
+	spin_lock_irqsave(&device->ldev->md.uuid_lock, flags);
+	previous_uuid = drbd_bitmap_uuid(peer_device);
+	__drbd_uuid_set_bitmap(peer_device, uuid);
+	if (previous_uuid)
+		_drbd_uuid_push_history(device, previous_uuid);
+	spin_unlock_irqrestore(&device->ldev->md.uuid_lock, flags);
+}
+
+/**
+ * drbd_uuid_is_day0() - Check if device is in "day0" UUID state
+ * @device: DRBD device (caller must hold ldev reference)
+ *
+ * Returns true if the current UUID appears to be a "day0" UUID:
+ * a real UUID value was set (e.g. by linstor during create-md),
+ * but no UUID rotation has ever happened (all history and bitmap
+ * UUIDs are still zero).
+ */
+bool drbd_uuid_is_day0(struct drbd_device *device)
+{
+	struct drbd_md *md = &device->ldev->md;
+	int i;
+
+	if ((md->current_uuid & ~UUID_PRIMARY) == UUID_JUST_CREATED ||
+	    md->current_uuid == 0)
+		return false;
+
+	for (i = 0; i < ARRAY_SIZE(md->history_uuids); i++)
+		if (md->history_uuids[i])
+			return false;
+
+	for (i = 0; i < DRBD_NODE_ID_MAX; i++) {
+		if (i == md->node_id)
+			continue;
+		if (md->peers[i].bitmap_uuid)
+			return false;
+	}
+
+	return true;
+}
+
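+/* Rotate the current UUID into the bitmap UUID slot of every weak or
+ * currently absent peer that does not already have one, so that their
+ * bitmaps track changes relative to this UUID from now on. */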
+static u64 rotate_current_into_bitmap(struct drbd_device *device, u64 weak_nodes, u64 dagtag)
+{
+	struct drbd_peer_md *peer_md = device->ldev->md.peers;
+	struct drbd_peer_device *peer_device;
+	int node_id;
+	u64 bm_uuid, prev_c_uuid;
+	u64 node_mask = 0;  /* bit mask of node-ids processed */
+	u64 slot_mask = 0;  /* bit mask of on-disk bitmap slots processed */
+	/* return value, bit mask of node-ids for which we
+	 * actually set a new bitmap uuid */
+	u64 got_new_bitmap_uuid = 0;
+
+	if (device->ldev->md.current_uuid != UUID_JUST_CREATED)
+		prev_c_uuid = device->ldev->md.current_uuid;
+	else
+		get_random_bytes(&prev_c_uuid, sizeof(u64));
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		enum drbd_disk_state pdsk;
+		node_id = peer_device->node_id;
+		node_mask |= NODE_MASK(node_id);
+		if (peer_device->bitmap_index != -1)
+			__set_bit(peer_device->bitmap_index, (unsigned long *)&slot_mask);
+		bm_uuid = peer_md[node_id].bitmap_uuid;
+		if (bm_uuid && bm_uuid != prev_c_uuid)
+			continue;
+
+		pdsk = peer_device->disk_state[NOW];
+
+		/* Create a new current UUID for a peer that is diskless but usually has a backing disk.
+		 * Do not create a new current UUID for a CONNECTED intentional diskless peer.
+		 * Create one for an intentional diskless peer that is currently away. */
+		if (pdsk == D_DISKLESS && !(peer_md[node_id].flags & MDF_HAVE_BITMAP))
+			continue;
+
+		if ((pdsk <= D_UNKNOWN && pdsk != D_NEGOTIATING) ||
+		    (NODE_MASK(node_id) & weak_nodes)) {
+			peer_md[node_id].bitmap_uuid = prev_c_uuid;
+			peer_md[node_id].bitmap_dagtag = dagtag;
+			drbd_md_mark_dirty(device);
+			got_new_bitmap_uuid |= NODE_MASK(node_id);
+		}
+	}
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		int slot_nr;
+		if (node_id == device->ldev->md.node_id)
+			continue;
+		if (node_mask & NODE_MASK(node_id))
+			continue;
+		slot_nr = peer_md[node_id].bitmap_index;
+		if (slot_nr != -1) {
+			if (test_bit(slot_nr, (unsigned long *)&slot_mask))
+				continue;
+			__set_bit(slot_nr, (unsigned long *)&slot_mask);
+		}
+		bm_uuid = peer_md[node_id].bitmap_uuid;
+		if (bm_uuid && bm_uuid != prev_c_uuid)
+			continue;
+		if (slot_nr == -1) {
+			slot_nr = find_first_zero_bit((unsigned long *)&slot_mask, sizeof(slot_mask) * BITS_PER_BYTE);
+			__set_bit(slot_nr, (unsigned long *)&slot_mask);
+		}
+		peer_md[node_id].bitmap_uuid = prev_c_uuid;
+		peer_md[node_id].bitmap_dagtag = dagtag;
+		drbd_md_mark_dirty(device);
+		/* count, but only if that bitmap index exists. */
+		if (slot_nr < device->ldev->md.max_peers)
+			got_new_bitmap_uuid |= NODE_MASK(node_id);
+	}
+	rcu_read_unlock();
+
+	return got_new_bitmap_uuid;
+}
+
+static u64 initial_resync_nodes(struct drbd_device *device)
+{
+	struct drbd_peer_device *peer_device;
+	u64 nodes = 0;
+
+	for_each_peer_device(peer_device, device) {
+		if (peer_device->disk_state[NOW] == D_INCONSISTENT &&
+		    peer_device->repl_state[NOW] == L_ESTABLISHED)
+			nodes |= NODE_MASK(peer_device->node_id);
+	}
+
+	return nodes;
+}
+
+u64 drbd_weak_nodes_device(struct drbd_device *device)
+{
+	struct drbd_peer_device *peer_device;
+	u64 not_weak = 0;
+
+	if (device->disk_state[NOW] == D_UP_TO_DATE)
+		not_weak = NODE_MASK(device->resource->res_opts.node_id);
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		enum drbd_disk_state pdsk = peer_device->disk_state[NOW];
+		if (!(pdsk <= D_FAILED || pdsk == D_UNKNOWN || pdsk == D_OUTDATED))
+			not_weak |= NODE_MASK(peer_device->node_id);
+
+	}
+	rcu_read_unlock();
+
+	return ~not_weak;
+}
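
Note that drbd_weak_nodes_device() returns the complement of the
"not weak" set, so every unassigned node ID comes back flagged as weak
as well; callers only ever test the bits of actual peers. A standalone
sketch of the mask arithmetic, assuming NODE_MASK(id) expands to
(1ULL << id) as elsewhere in DRBD:

	#include <stdio.h>
	#include <stdint.h>

	#define NODE_MASK(id) ((uint64_t)1 << (id))

	int main(void)
	{
		/* nodes 0 (this node) and 1 are D_UP_TO_DATE,
		 * node 2 is D_OUTDATED */
		uint64_t not_weak = NODE_MASK(0) | NODE_MASK(1);
		uint64_t weak = ~not_weak;

		/* prints 0xfffffffffffffffc: node 2 is weak, and so
		 * are all unused node IDs */
		printf("%#llx\n", (unsigned long long)weak);
		return 0;
	}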
+
+static bool __new_current_uuid_prepare(struct drbd_device *device, bool forced)
+{
+	u64 got_new_bitmap_uuid, val, old_current_uuid;
+	bool day0;
+	int err;
+
+	spin_lock_irq(&device->ldev->md.uuid_lock);
+	day0 = drbd_uuid_is_day0(device);
+	got_new_bitmap_uuid = rotate_current_into_bitmap(device,
+					forced ? initial_resync_nodes(device) : 0,
+					device->resource->dagtag_sector);
+
+	if (!got_new_bitmap_uuid && !day0) {
+		spin_unlock_irq(&device->ldev->md.uuid_lock);
+		return false;
+	}
+
+	old_current_uuid = device->ldev->md.current_uuid;
+	get_random_bytes(&val, sizeof(u64));
+	__drbd_uuid_set_current(device, val);
+	spin_unlock_irq(&device->ldev->md.uuid_lock);
+
+	/* get it to stable storage _now_ */
+	err = drbd_md_sync(device);
+	if (err) {
+		_drbd_uuid_set_current(device, old_current_uuid);
+		return false;
+	}
+
+	return true;
+}
+
+static void __new_current_uuid_info(struct drbd_device *device, u64 weak_nodes)
+{
+	drbd_info(device, "new current UUID: %016llX weak: %016llX\n",
+		  device->ldev->md.current_uuid, weak_nodes);
+}
+
+static void __new_current_uuid_send(struct drbd_device *device, u64 weak_nodes, bool forced)
+{
+	struct drbd_peer_device *peer_device;
+	u64 im;
+
+	for_each_peer_device_ref(peer_device, im, device) {
+		if (peer_device->repl_state[NOW] >= L_ESTABLISHED)
+			drbd_send_uuids(peer_device, forced ? 0 : UUID_FLAG_NEW_DATAGEN, weak_nodes);
+	}
+}
+
+static void __drbd_uuid_new_current_send(struct drbd_device *device, bool forced)
+{
+	u64 weak_nodes;
+
+	down_write(&device->uuid_sem);
+	if (!__new_current_uuid_prepare(device, forced)) {
+		up_write(&device->uuid_sem);
+		return;
+	}
+	downgrade_write(&device->uuid_sem);
+	weak_nodes = drbd_weak_nodes_device(device);
+	__new_current_uuid_info(device, weak_nodes);
+	__new_current_uuid_send(device, weak_nodes, forced);
+	up_read(&device->uuid_sem);
+}
+
+static void __drbd_uuid_new_current_holding_uuid_sem(struct drbd_device *device)
+{
+	u64 weak_nodes;
+
+	if (!__new_current_uuid_prepare(device, false))
+		return;
+	weak_nodes = drbd_weak_nodes_device(device);
+	__new_current_uuid_info(device, weak_nodes);
+}
+
+static bool peer_can_fill_a_bitmap_slot(struct drbd_peer_device *peer_device)
+{
+	struct drbd_device *device = peer_device->device;
+	const bool intentional_diskless = device->device_conf.intentional_diskless;
+	const int my_node_id = device->resource->res_opts.node_id;
+	int node_id;
+
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		if (node_id == peer_device->node_id)
+			continue;
+		if (peer_device->bitmap_uuids[node_id] == 0) {
+			struct drbd_peer_device *p2;
+			p2 = peer_device_by_node_id(peer_device->device, node_id);
+			if (p2 && !want_bitmap(p2))
+				continue;
+
+			if (node_id == my_node_id && intentional_diskless)
+				continue;
+
+			return true;
+		}
+	}
+
+	return false;
+}
+
+static bool diskfull_peers_need_new_cur_uuid(struct drbd_device *device)
+{
+	struct drbd_peer_device *peer_device;
+	bool rv = false;
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		if (peer_device->connection->agreed_pro_version < 110)
+			continue;
+
+		/* Only an up-to-date peer persists a new current uuid! */
+		if (peer_device->disk_state[NOW] < D_UP_TO_DATE)
+			continue;
+		if (peer_can_fill_a_bitmap_slot(peer_device)) {
+			rv = true;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return rv;
+}
+
+static bool a_lost_peer_is_on_same_cur_uuid(struct drbd_device *device)
+{
+	struct drbd_peer_device *peer_device;
+	bool rv = false;
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		enum drbd_disk_state pdsk = peer_device->disk_state[NOW];
+
+		if (pdsk >= D_INCONSISTENT && pdsk <= D_UNKNOWN &&
+		    (device->exposed_data_uuid & ~UUID_PRIMARY) ==
+		    (peer_device->current_uuid & ~UUID_PRIMARY) &&
+		    !(peer_device->uuid_flags & UUID_FLAG_SYNC_TARGET)) {
+			rv = true;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return rv;
+}
+
+/**
+ * drbd_uuid_new_current() - Creates a new current UUID
+ * @device:	DRBD device.
+ * @forced:	Force UUID creation
+ *
+ * Creates a new current UUID, and rotates the old current UUID into
+ * the bitmap slot. Causes an incremental resync upon next connect.
+ */
+void drbd_uuid_new_current(struct drbd_device *device, bool forced)
+{
+	if (get_ldev_if_state(device, D_UP_TO_DATE)) {
+		__drbd_uuid_new_current_send(device, forced);
+		put_ldev(device);
+	} else if (diskfull_peers_need_new_cur_uuid(device) ||
+		   a_lost_peer_is_on_same_cur_uuid(device)) {
+		struct drbd_peer_device *peer_device;
+		/* The peers will store the new current UUID... */
+		u64 current_uuid, weak_nodes;
+		get_random_bytes(&current_uuid, sizeof(u64));
+		if (device->resource->role[NOW] == R_PRIMARY)
+			current_uuid |= UUID_PRIMARY;
+		else
+			current_uuid &= ~UUID_PRIMARY;
+
+		down_write(&device->uuid_sem);
+		drbd_uuid_set_exposed(device, current_uuid, false);
+		downgrade_write(&device->uuid_sem);
+		drbd_info(device, "sending new current UUID: %016llX\n", current_uuid);
+
+		weak_nodes = drbd_weak_nodes_device(device);
+		for_each_peer_device(peer_device, device) {
+			if (peer_device->repl_state[NOW] >= L_ESTABLISHED) {
+				drbd_send_current_uuid(peer_device, current_uuid, weak_nodes);
+				peer_device->current_uuid = current_uuid;
+			}
+		}
+		up_read(&device->uuid_sem);
+	}
+}
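
To make the rotation concrete, a rough walk-through with made-up values
for a single unreachable peer:

	before:  current_uuid      = 0x1111
	         bitmap_uuid[peer] = 0         (peer was in sync)

	drbd_uuid_new_current():
	         rotate_current_into_bitmap(): bitmap_uuid[peer] = 0x1111
	         __drbd_uuid_set_current():    current_uuid      = 0x9e37 (random)

	On the next connect, the handshake finds the peer's current UUID
	(0x1111) in our bitmap slot and starts an incremental resync from
	the out-of-sync bits collected since the rotation.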
+
+void drbd_uuid_new_current_by_user(struct drbd_device *device)
+{
+	struct drbd_peer_device *peer_device;
+
+	down_write(&device->uuid_sem);
+	for_each_peer_device(peer_device, device)
+		drbd_uuid_set_bitmap(peer_device, 0); /* Rotate UI_BITMAP to History 1, etc... */
+
+	if (get_ldev(device)) {
+		__drbd_uuid_new_current_holding_uuid_sem(device);
+		put_ldev(device);
+	}
+	up_write(&device->uuid_sem);
+}
+
+static void drbd_propagate_uuids(struct drbd_device *device, u64 nodes)
+{
+	struct drbd_peer_device *peer_device;
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		if (!(nodes & NODE_MASK(peer_device->node_id)))
+			continue;
+
+		if (peer_device->repl_state[NOW] < L_ESTABLISHED)
+			continue;
+
+		if (list_empty(&peer_device->propagate_uuids_work.list))
+			drbd_queue_work(&peer_device->connection->sender_work,
+					&peer_device->propagate_uuids_work);
+	}
+	rcu_read_unlock();
+}
+
+void drbd_uuid_received_new_current(struct drbd_peer_device *from_pd, u64 val, u64 weak_nodes)
+{
+	struct drbd_device *device = from_pd->device;
+	u64 dagtag = atomic64_read(&from_pd->connection->last_dagtag_sector);
+	struct drbd_peer_device *peer_device;
+	u64 recipients = 0;
+	bool set_current = true;
+
+	down_write(&device->uuid_sem);
+	spin_lock_irq(&device->ldev->md.uuid_lock);
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		if (peer_device->repl_state[NOW] == L_SYNC_TARGET ||
+		    peer_device->repl_state[NOW] == L_BEHIND      ||
+		    peer_device->repl_state[NOW] == L_PAUSED_SYNC_T) {
+			peer_device->current_uuid = val;
+			set_current = false;
+		}
+		if (peer_device->repl_state[NOW] == L_WF_BITMAP_S ||
+		    peer_device->repl_state[NOW] == L_SYNC_SOURCE ||
+		    peer_device->repl_state[NOW] == L_PAUSED_SYNC_S ||
+		    peer_device->repl_state[NOW] == L_ESTABLISHED)
+			recipients |= NODE_MASK(peer_device->node_id);
+
+		if (peer_device->disk_state[NOW] == D_DISKLESS)
+			recipients |= NODE_MASK(peer_device->node_id);
+	}
+	rcu_read_unlock();
+
+	if (set_current) {
+		u64 old_current = device->ldev->md.current_uuid;
+		u64 upd;
+
+		if (device->disk_state[NOW] == D_UP_TO_DATE)
+			recipients |= rotate_current_into_bitmap(device, weak_nodes, dagtag);
+
+		upd = ~weak_nodes; /* These nodes are connected to the primary */
+		upd &= __test_bitmap_slots(device); /* of those, the ones I have a bitmap UUID for */
+		__set_bitmap_slots(device, val, upd);
+		/* Setting the bitmap slot to the (new) current UUID means that, at
+		   this moment, we know we have the same data as this disconnected peer. */
+
+		__drbd_uuid_set_current(device, val);
+
+		/* Even when the old current UUID was not used as any bitmap
+		 * UUID, we still add it to the history. This is relevant, in
+		 * particular, when we afterwards perform a sync handshake with
+		 * a peer which is not one of the "weak_nodes", but hasn't
+		 * received the new current UUID. If we do not add the current
+		 * UUID to the history, we will end up with a spurious
+		 * unrelated-data or split-brain decision. */
+		_drbd_uuid_push_history(device, old_current);
+	}
+
+	spin_unlock_irq(&device->ldev->md.uuid_lock);
+	downgrade_write(&device->uuid_sem);
+	if (set_current)
+		drbd_propagate_uuids(device, recipients);
+	up_read(&device->uuid_sem);
+}
+
+static u64 __set_bitmap_slots(struct drbd_device *device, u64 bitmap_uuid, u64 do_nodes)
+{
+	struct drbd_peer_md *peer_md = device->ldev->md.peers;
+	u64 modified = 0;
+	int node_id;
+
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		if (node_id == device->ldev->md.node_id)
+			continue;
+		if (!(do_nodes & NODE_MASK(node_id)))
+			continue;
+		if (!(peer_md[node_id].flags & MDF_HAVE_BITMAP))
+			continue;
+		if (peer_md[node_id].bitmap_uuid != bitmap_uuid) {
+			u64 previous_bitmap_uuid = peer_md[node_id].bitmap_uuid;
+			/* drbd_info(device, "XXX bitmap[node_id=%d] = %llX\n", node_id, bitmap_uuid); */
+			peer_md[node_id].bitmap_uuid = bitmap_uuid;
+			peer_md[node_id].bitmap_dagtag =
+				bitmap_uuid ? device->resource->dagtag_sector : 0;
+			_drbd_uuid_push_history(device, previous_bitmap_uuid);
+			drbd_md_mark_dirty(device);
+			modified |= NODE_MASK(node_id);
+		}
+	}
+
+	return modified;
+}
+
+static u64 __test_bitmap_slots(struct drbd_device *device)
+{
+	struct drbd_peer_md *peer_md = device->ldev->md.peers;
+	int node_id;
+	u64 rv = 0;
+
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		if (peer_md[node_id].bitmap_uuid)
+			rv |= NODE_MASK(node_id);
+	}
+
+	return rv;
+}
+
+/* __test_bitmap_slots_of_peer() operates on the view of the world that the
+   SyncSource last reported. In the meantime, some peers may have sent more
+   recent UUIDs to me. Remove all peers that are now on the same UUID as I
+   am from the set of nodes. */
+static u64 __test_bitmap_slots_of_peer(struct drbd_peer_device *peer_device)
+{
+	u64 set_bitmap_slots = 0;
+	int node_id;
+
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		u64 bitmap_uuid = peer_device->bitmap_uuids[node_id];
+
+		if (bitmap_uuid != 0 && bitmap_uuid != -1)
+			set_bitmap_slots |= NODE_MASK(node_id);
+	}
+
+	return set_bitmap_slots;
+}
+
+static u64
+peers_with_current_uuid(struct drbd_device *device, u64 current_uuid)
+{
+	struct drbd_peer_device *peer_device;
+	u64 nodes = 0;
+
+	current_uuid &= ~UUID_PRIMARY;
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		enum drbd_disk_state peer_disk_state = peer_device->disk_state[NOW];
+		if (peer_disk_state < D_INCONSISTENT || peer_disk_state == D_UNKNOWN)
+			continue;
+		if (current_uuid == (peer_device->current_uuid & ~UUID_PRIMARY))
+			nodes |= NODE_MASK(peer_device->node_id);
+	}
+	rcu_read_unlock();
+
+	return nodes;
+}
+
+void drbd_uuid_resync_starting(struct drbd_peer_device *peer_device)
+{
+	struct drbd_device *device = peer_device->device;
+
+	peer_device->rs_start_uuid = drbd_current_uuid(device);
+	if (peer_device->uuid_flags & UUID_FLAG_CRASHED_PRIMARY)
+		set_bit(SYNC_SRC_CRASHED_PRI, &peer_device->flags);
+	rotate_current_into_bitmap(device, 0, device->resource->dagtag_sector);
+}
+
+u64 drbd_uuid_resync_finished(struct drbd_peer_device *peer_device)
+{
+	struct drbd_device *device = peer_device->device;
+	unsigned long flags;
+	int i;
+	u64 ss_nz_bm; /* nodes the sync source has a non-zero bitmap UUID for, as a node mask */
+	u64 pwcu; /* peers with current uuid */
+	u64 newer;
+
+	spin_lock_irqsave(&device->ldev->md.uuid_lock, flags);
+	/* Inherit history from the sync source */
+	for (i = 0; i < ARRAY_SIZE(peer_device->history_uuids); i++)
+		_drbd_uuid_push_history(device, peer_device->history_uuids[i] & ~UUID_PRIMARY);
+
+	/* Inherit history in bitmap UUIDs from the sync source */
+	for (i = 0; i < DRBD_PEERS_MAX; i++)
+		if (peer_device->bitmap_uuids[i] != -1)
+			_drbd_uuid_push_history(device,
+					peer_device->bitmap_uuids[i] & ~UUID_PRIMARY);
+
+	ss_nz_bm = __test_bitmap_slots_of_peer(peer_device);
+	pwcu = peers_with_current_uuid(device, peer_device->current_uuid);
+
+	newer = __set_bitmap_slots(device, peer_device->rs_start_uuid, ss_nz_bm & ~pwcu);
+	__set_bitmap_slots(device, 0, ~ss_nz_bm | pwcu);
+	_drbd_uuid_push_history(device, drbd_current_uuid(device));
+	__drbd_uuid_set_current(device, peer_device->current_uuid);
+	spin_unlock_irqrestore(&device->ldev->md.uuid_lock, flags);
+
+	return newer;
+}
+
+bool drbd_uuid_set_exposed(struct drbd_device *device, u64 val, bool log)
+{
+	if ((device->exposed_data_uuid & ~UUID_PRIMARY) == (val & ~UUID_PRIMARY) ||
+	    val == UUID_JUST_CREATED)
+		return false;
+
+	if (device->resource->role[NOW] == R_PRIMARY)
+		val |= UUID_PRIMARY;
+	else
+		val &= ~UUID_PRIMARY;
+
+	device->exposed_data_uuid = val;
+
+	if (log)
+		drbd_info(device, "Setting exposed data uuid: %016llX\n", (unsigned long long)val);
+
+	return true;
+}
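
The role is encoded in the UUID itself, which is why comparisons in this
file consistently mask with ~UUID_PRIMARY. A self-contained sketch,
assuming UUID_PRIMARY is the lowest UUID bit as in DRBD's headers:

	#include <stdio.h>
	#include <stdint.h>

	#define UUID_PRIMARY ((uint64_t)1)

	int main(void)
	{
		uint64_t mine  = 0x89abcdef01234566ULL | UUID_PRIMARY; /* Primary */
		uint64_t peers = 0x89abcdef01234566ULL;                /* Secondary */

		/* data-generation comparisons ignore the role bit: prints 1 */
		printf("%d\n", (mine & ~UUID_PRIMARY) == (peers & ~UUID_PRIMARY));
		return 0;
	}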
+
+static const char *name_of_node_id(struct drbd_resource *resource, int node_id)
+{
+	/* Caller needs to hold rcu_read_lock */
+	struct drbd_connection *connection = drbd_connection_by_node_id(resource, node_id);
+
+	return connection ? rcu_dereference(connection->transport.net_conf)->name : "";
 }
 
-static int check_activity_log_stripe_size(struct drbd_device *device,
-		struct meta_data_on_disk *on_disk,
-		struct drbd_md *in_core)
+static void forget_bitmap(struct drbd_device *device, int node_id)
 {
-	u32 al_stripes = be32_to_cpu(on_disk->al_stripes);
-	u32 al_stripe_size_4k = be32_to_cpu(on_disk->al_stripe_size_4k);
-	u64 al_size_4k;
+	int bitmap_index = device->ldev->md.peers[node_id].bitmap_index;
+	const char *name;
 
-	/* both not set: default to old fixed size activity log */
-	if (al_stripes == 0 && al_stripe_size_4k == 0) {
-		al_stripes = 1;
-		al_stripe_size_4k = MD_32kB_SECT/8;
-	}
+	if (_drbd_bm_total_weight(device, bitmap_index) == 0)
+		return;
 
-	/* some paranoia plausibility checks */
+	spin_unlock_irq(&device->ldev->md.uuid_lock);
+	rcu_read_lock();
+	name = name_of_node_id(device->resource, node_id);
+	drbd_info(device, "clearing bitmap UUID and content (%lu bits) for node %d (%s)(slot %d)\n",
+		  _drbd_bm_total_weight(device, bitmap_index), node_id, name, bitmap_index);
+	rcu_read_unlock();
+	drbd_suspend_io(device, WRITE_ONLY);
+	drbd_bm_lock(device, "forget_bitmap()", BM_LOCK_TEST | BM_LOCK_SET);
+	_drbd_bm_clear_many_bits(device, bitmap_index, 0, -1UL);
+	drbd_bm_unlock(device);
+	drbd_resume_io(device);
+	drbd_md_mark_dirty(device);
+	spin_lock_irq(&device->ldev->md.uuid_lock);
+}
 
-	/* we need both values to be set */
-	if (al_stripes == 0 || al_stripe_size_4k == 0)
-		goto err;
+static void copy_bitmap(struct drbd_device *device, int from_id, int to_id)
+{
+	struct drbd_peer_device *peer_device = peer_device_by_node_id(device, to_id);
+	struct drbd_peer_md *peer_md = device->ldev->md.peers;
+	u64 previous_bitmap_uuid = peer_md[to_id].bitmap_uuid;
+	int from_index = peer_md[from_id].bitmap_index;
+	int to_index = peer_md[to_id].bitmap_index;
+	const char *from_name, *to_name;
 
-	al_size_4k = (u64)al_stripes * al_stripe_size_4k;
+	peer_md[to_id].bitmap_uuid = peer_md[from_id].bitmap_uuid;
+	peer_md[to_id].bitmap_dagtag = peer_md[from_id].bitmap_dagtag;
+	_drbd_uuid_push_history(device, previous_bitmap_uuid);
 
-	/* Upper limit of activity log area, to avoid potential overflow
-	 * problems in al_tr_number_to_on_disk_sector(). As right now, more
-	 * than 72 * 4k blocks total only increases the amount of history,
-	 * limiting this arbitrarily to 16 GB is not a real limitation ;-)  */
-	if (al_size_4k > (16 * 1024 * 1024/4))
-		goto err;
+	/* Pretending that the updated UUID was sent is a hack, but
+	   unfortunately necessary to avoid interrupting the handshake. */
+	if (peer_device && peer_device->comm_bitmap_uuid == previous_bitmap_uuid)
+		peer_device->comm_bitmap_uuid = peer_md[from_id].bitmap_uuid;
 
-	/* Lower limit: we need at least 8 transaction slots (32kB)
-	 * to not break existing setups */
-	if (al_size_4k < MD_32kB_SECT/8)
-		goto err;
+	spin_unlock_irq(&device->ldev->md.uuid_lock);
+	rcu_read_lock();
+	from_name = name_of_node_id(device->resource, from_id);
+	to_name = name_of_node_id(device->resource, to_id);
+	drbd_info(device, "Node %d (%s) synced up to node %d (%s). copying bitmap slot %d to %d.\n",
+		  to_id, to_name, from_id, from_name, from_index, to_index);
+	rcu_read_unlock();
+	drbd_suspend_io(device, WRITE_ONLY);
+	drbd_bm_lock(device, "copy_bitmap()", BM_LOCK_ALL);
+	drbd_bm_copy_slot(device, from_index, to_index);
+	drbd_bm_unlock(device);
+	drbd_resume_io(device);
+	drbd_md_mark_dirty(device);
+	spin_lock_irq(&device->ldev->md.uuid_lock);
+}
 
-	in_core->al_stripe_size_4k = al_stripe_size_4k;
-	in_core->al_stripes = al_stripes;
-	in_core->al_size_4k = al_size_4k;
+static int find_node_id_by_bitmap_uuid(struct drbd_device *device, u64 bm_uuid)
+{
+	struct drbd_peer_md *peer_md = device->ldev->md.peers;
+	int node_id;
 
-	return 0;
-err:
-	drbd_err(device, "invalid activity log striping: al_stripes=%u, al_stripe_size_4k=%u\n",
-			al_stripes, al_stripe_size_4k);
-	return -EINVAL;
-}
-
-static int check_offsets_and_sizes(struct drbd_device *device, struct drbd_backing_dev *bdev)
-{
-	sector_t capacity = drbd_get_capacity(bdev->md_bdev);
-	struct drbd_md *in_core = &bdev->md;
-	s32 on_disk_al_sect;
-	s32 on_disk_bm_sect;
-
-	/* The on-disk size of the activity log, calculated from offsets, and
-	 * the size of the activity log calculated from the stripe settings,
-	 * should match.
-	 * Though we could relax this a bit: it is ok, if the striped activity log
-	 * fits in the available on-disk activity log size.
-	 * Right now, that would break how resize is implemented.
-	 * TODO: make drbd_determine_dev_size() (and the drbdmeta tool) aware
-	 * of possible unused padding space in the on disk layout. */
-	if (in_core->al_offset < 0) {
-		if (in_core->bm_offset > in_core->al_offset)
-			goto err;
-		on_disk_al_sect = -in_core->al_offset;
-		on_disk_bm_sect = in_core->al_offset - in_core->bm_offset;
-	} else {
-		if (in_core->al_offset != MD_4kB_SECT)
-			goto err;
-		if (in_core->bm_offset < in_core->al_offset + in_core->al_size_4k * MD_4kB_SECT)
-			goto err;
+	bm_uuid &= ~UUID_PRIMARY;
 
-		on_disk_al_sect = in_core->bm_offset - MD_4kB_SECT;
-		on_disk_bm_sect = in_core->md_size_sect - in_core->bm_offset;
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		if ((peer_md[node_id].bitmap_uuid & ~UUID_PRIMARY) == bm_uuid &&
+		    peer_md[node_id].flags & MDF_HAVE_BITMAP)
+			return node_id;
 	}
 
-	/* old fixed size meta data is exactly that: fixed. */
-	if (in_core->meta_dev_idx >= 0) {
-		if (in_core->md_size_sect != MD_128MB_SECT
-		||  in_core->al_offset != MD_4kB_SECT
-		||  in_core->bm_offset != MD_4kB_SECT + MD_32kB_SECT
-		||  in_core->al_stripes != 1
-		||  in_core->al_stripe_size_4k != MD_32kB_SECT/8)
-			goto err;
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		if ((peer_md[node_id].bitmap_uuid & ~UUID_PRIMARY) == bm_uuid)
+			return node_id;
 	}
 
-	if (capacity < in_core->md_size_sect)
-		goto err;
-	if (capacity - in_core->md_size_sect < drbd_md_first_sector(bdev))
-		goto err;
-
-	/* should be aligned, and at least 32k */
-	if ((on_disk_al_sect & 7) || (on_disk_al_sect < MD_32kB_SECT))
-		goto err;
-
-	/* should fit (for now: exactly) into the available on-disk space;
-	 * overflow prevention is in check_activity_log_stripe_size() above. */
-	if (on_disk_al_sect != in_core->al_size_4k * MD_4kB_SECT)
-		goto err;
-
-	/* again, should be aligned */
-	if (in_core->bm_offset & 7)
-		goto err;
+	return -1;
+}
 
-	/* FIXME check for device grow with flex external meta data? */
+static bool node_connected(struct drbd_resource *resource, int node_id)
+{
+	struct drbd_connection *connection;
+	bool r = false;
 
-	/* can the available bitmap space cover the last agreed device size? */
-	if (on_disk_bm_sect < (in_core->la_size_sect+7)/MD_4kB_SECT/8/512)
-		goto err;
+	rcu_read_lock();
+	connection = drbd_connection_by_node_id(resource, node_id);
+	if (connection)
+		r = connection->cstate[NOW] == C_CONNECTED;
+	rcu_read_unlock();
 
-	return 0;
+	return r;
+}
 
-err:
-	drbd_err(device, "meta data offsets don't make sense: idx=%d "
-			"al_s=%u, al_sz4k=%u, al_offset=%d, bm_offset=%d, "
-			"md_size_sect=%u, la_size=%llu, md_capacity=%llu\n",
-			in_core->meta_dev_idx,
-			in_core->al_stripes, in_core->al_stripe_size_4k,
-			in_core->al_offset, in_core->bm_offset, in_core->md_size_sect,
-			(unsigned long long)in_core->la_size_sect,
-			(unsigned long long)capacity);
+static bool detect_copy_ops_on_peer(struct drbd_peer_device *peer_device)
+{
+	struct drbd_device *device = peer_device->device;
+	struct drbd_peer_md *peer_md = device->ldev->md.peers;
+	struct drbd_resource *resource = device->resource;
+	int node_id1, node_id2, from_id;
+	u64 peer_bm_uuid;
+	bool modified = false;
 
-	return -EINVAL;
-}
+	for (node_id1 = 0; node_id1 < DRBD_NODE_ID_MAX; node_id1++) {
+		if (device->ldev->md.peers[node_id1].bitmap_index == -1)
+			continue;
 
+		if (node_connected(resource, node_id1))
+			continue;
 
-/**
- * drbd_md_read() - Reads in the meta data super block
- * @device:	DRBD device.
- * @bdev:	Device from which the meta data should be read in.
- *
- * Return NO_ERROR on success, and an enum drbd_ret_code in case
- * something goes wrong.
- *
- * Called exactly once during drbd_adm_attach(), while still being D_DISKLESS,
- * even before @bdev is assigned to @device->ldev.
- */
-int drbd_md_read(struct drbd_device *device, struct drbd_backing_dev *bdev)
-{
-	struct meta_data_on_disk *buffer;
-	u32 magic, flags;
-	int i, rv = NO_ERROR;
+		peer_bm_uuid = peer_device->bitmap_uuids[node_id1];
+		if (peer_bm_uuid == 0 || peer_bm_uuid == -1ULL)
+			continue;
 
-	if (device->state.disk != D_DISKLESS)
-		return ERR_DISK_CONFIGURED;
+		peer_bm_uuid &= ~UUID_PRIMARY;
+		for (node_id2 = node_id1 + 1; node_id2 < DRBD_NODE_ID_MAX; node_id2++) {
+			if (device->ldev->md.peers[node_id2].bitmap_index == -1)
+				continue;
 
-	buffer = drbd_md_get_buffer(device, __func__);
-	if (!buffer)
-		return ERR_NOMEM;
+			if (node_connected(resource, node_id2))
+				continue;
 
-	/* First, figure out where our meta data superblock is located,
-	 * and read it. */
-	bdev->md.meta_dev_idx = bdev->disk_conf->meta_dev_idx;
-	bdev->md.md_offset = drbd_md_ss(bdev);
-	/* Even for (flexible or indexed) external meta data,
-	 * initially restrict us to the 4k superblock for now.
-	 * Affects the paranoia out-of-range access check in drbd_md_sync_page_io(). */
-	bdev->md.md_size_sect = 8;
-
-	if (drbd_md_sync_page_io(device, bdev, bdev->md.md_offset,
-				 REQ_OP_READ)) {
-		/* NOTE: can't do normal error processing here as this is
-		   called BEFORE disk is attached */
-		drbd_err(device, "Error while reading metadata.\n");
-		rv = ERR_IO_MD_DISK;
-		goto err;
-	}
-
-	magic = be32_to_cpu(buffer->magic);
-	flags = be32_to_cpu(buffer->flags);
-	if (magic == DRBD_MD_MAGIC_84_UNCLEAN ||
-	    (magic == DRBD_MD_MAGIC_08 && !(flags & MDF_AL_CLEAN))) {
-			/* btw: that's Activity Log clean, not "all" clean. */
-		drbd_err(device, "Found unclean meta data. Did you \"drbdadm apply-al\"?\n");
-		rv = ERR_MD_UNCLEAN;
-		goto err;
-	}
-
-	rv = ERR_MD_INVALID;
-	if (magic != DRBD_MD_MAGIC_08) {
-		if (magic == DRBD_MD_MAGIC_07)
-			drbd_err(device, "Found old (0.7) meta data magic. Did you \"drbdadm create-md\"?\n");
-		else
-			drbd_err(device, "Meta data magic not found. Did you \"drbdadm create-md\"?\n");
-		goto err;
+			if (peer_bm_uuid == (peer_device->bitmap_uuids[node_id2] & ~UUID_PRIMARY))
+				goto found;
+		}
 	}
+	return false;
 
-	if (be32_to_cpu(buffer->bm_bytes_per_bit) != BM_BLOCK_SIZE) {
-		drbd_err(device, "unexpected bm_bytes_per_bit: %u (expected %u)\n",
-		    be32_to_cpu(buffer->bm_bytes_per_bit), BM_BLOCK_SIZE);
-		goto err;
+found:
+	from_id = find_node_id_by_bitmap_uuid(device, peer_bm_uuid);
+	if (from_id == -1) {
+		if (peer_md[node_id1].bitmap_uuid == 0 && peer_md[node_id2].bitmap_uuid == 0)
+			return false;
+		drbd_err(peer_device, "unexpected\n");
+		drbd_err(peer_device, "In UUIDs from node %d found equal UUID (%llX) for nodes %d %d\n",
+			 peer_device->node_id, peer_bm_uuid, node_id1, node_id2);
+		drbd_err(peer_device, "I have %llX for node_id=%d\n",
+			 peer_md[node_id1].bitmap_uuid, node_id1);
+		drbd_err(peer_device, "I have %llX for node_id=%d\n",
+			 peer_md[node_id2].bitmap_uuid, node_id2);
+		return false;
 	}
 
+	if (!(peer_md[from_id].flags & MDF_HAVE_BITMAP))
+		return false;
 
-	/* convert to in_core endian */
-	bdev->md.la_size_sect = be64_to_cpu(buffer->la_size_sect);
-	for (i = UI_CURRENT; i < UI_SIZE; i++)
-		bdev->md.uuid[i] = be64_to_cpu(buffer->uuid[i]);
-	bdev->md.flags = be32_to_cpu(buffer->flags);
-	bdev->md.device_uuid = be64_to_cpu(buffer->device_uuid);
-
-	bdev->md.md_size_sect = be32_to_cpu(buffer->md_size_sect);
-	bdev->md.al_offset = be32_to_cpu(buffer->al_offset);
-	bdev->md.bm_offset = be32_to_cpu(buffer->bm_offset);
-
-	if (check_activity_log_stripe_size(device, buffer, &bdev->md))
-		goto err;
-	if (check_offsets_and_sizes(device, bdev))
-		goto err;
+	if (from_id != node_id1 &&
+	    peer_md[node_id1].bitmap_uuid != peer_bm_uuid) {
+		copy_bitmap(device, from_id, node_id1);
+		modified = true;
 
-	if (be32_to_cpu(buffer->bm_offset) != bdev->md.bm_offset) {
-		drbd_err(device, "unexpected bm_offset: %d (expected %d)\n",
-		    be32_to_cpu(buffer->bm_offset), bdev->md.bm_offset);
-		goto err;
-	}
-	if (be32_to_cpu(buffer->md_size_sect) != bdev->md.md_size_sect) {
-		drbd_err(device, "unexpected md_size: %u (expected %u)\n",
-		    be32_to_cpu(buffer->md_size_sect), bdev->md.md_size_sect);
-		goto err;
 	}
-
-	rv = NO_ERROR;
-
-	spin_lock_irq(&device->resource->req_lock);
-	if (device->state.conn < C_CONNECTED) {
-		unsigned int peer;
-		peer = be32_to_cpu(buffer->la_peer_max_bio_size);
-		peer = max(peer, DRBD_MAX_BIO_SIZE_SAFE);
-		device->peer_max_bio_size = peer;
+	if (from_id != node_id2 &&
+	    peer_md[node_id2].bitmap_uuid != peer_bm_uuid) {
+		copy_bitmap(device, from_id, node_id2);
+		modified = true;
 	}
-	spin_unlock_irq(&device->resource->req_lock);
 
- err:
-	drbd_md_put_buffer(device);
-
-	return rv;
+	return modified;
 }
 
-/**
- * drbd_md_mark_dirty() - Mark meta data super block as dirty
- * @device:	DRBD device.
- *
- * Call this function if you change anything that should be written to
- * the meta-data super block. This function sets MD_DIRTY, and starts a
- * timer that ensures that within five seconds you have to call drbd_md_sync().
- */
-void drbd_md_mark_dirty(struct drbd_device *device)
+void drbd_uuid_detect_finished_resyncs(struct drbd_peer_device *peer_device)
 {
-	if (!test_and_set_bit(MD_DIRTY, &device->flags))
-		mod_timer(&device->md_sync_timer, jiffies + 5*HZ);
-}
+	u64 peer_current_uuid = peer_device->current_uuid & ~UUID_PRIMARY;
+	struct drbd_device *device = peer_device->device;
+	struct drbd_peer_md *peer_md = device->ldev->md.peers;
+	const int my_node_id = device->resource->res_opts.node_id;
+	bool write_bm = false;
+	bool filled = false;
+	bool current_equal;
+	int node_id;
 
-void drbd_uuid_move_history(struct drbd_device *device) __must_hold(local)
-{
-	int i;
+	current_equal = peer_current_uuid == (drbd_resolved_uuid(peer_device, NULL) & ~UUID_PRIMARY) &&
+		!(peer_device->uuid_flags & UUID_FLAG_SYNC_TARGET) &&
+		!(peer_device->comm_uuid_flags & UUID_FLAG_SYNC_TARGET);
 
-	for (i = UI_HISTORY_START; i < UI_HISTORY_END; i++)
-		device->ldev->md.uuid[i+1] = device->ldev->md.uuid[i];
-}
+	spin_lock_irq(&device->ldev->md.uuid_lock);
 
-void __drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(local)
-{
-	if (idx == UI_CURRENT) {
-		if (device->state.role == R_PRIMARY)
-			val |= 1;
-		else
-			val &= ~((u64)1);
+	if (peer_device->repl_state[NOW] == L_OFF && current_equal) {
+		u64 bm_to_peer = peer_device->comm_bitmap_uuid & ~UUID_PRIMARY;
+		u64 bm_towards_me = peer_device->bitmap_uuids[my_node_id] & ~UUID_PRIMARY;
 
-		drbd_set_ed_uuid(device, val);
+		if (bm_towards_me != 0 && bm_to_peer == 0 &&
+		    bm_towards_me != peer_current_uuid) {
+			if (peer_device->comm_bm_set == 0 && peer_device->dirty_bits == 0) {
+				drbd_info(peer_device, "Peer missed end of resync, 0 to sync\n");
+				if (peer_device->connection->agreed_pro_version < 124)
+					set_bit(RS_PEER_MISSED_END, &peer_device->flags);
+			} else {
+				drbd_info(peer_device, "Peer missed end of resync\n");
+				set_bit(RS_PEER_MISSED_END, &peer_device->flags);
+			}
+		}
+		if (bm_towards_me == 0 && bm_to_peer != 0 &&
+		    bm_to_peer != peer_current_uuid) {
+			if (peer_device->comm_bm_set == 0 && peer_device->dirty_bits == 0) {
+				int peer_node_id = peer_device->node_id;
+				u64 previous = peer_md[peer_node_id].bitmap_uuid;
+
+				drbd_info(peer_device,
+					"Missed end of resync as sync-source, no bits to sync\n");
+				peer_md[peer_node_id].bitmap_uuid = 0;
+				_drbd_uuid_push_history(device, previous);
+				peer_device->comm_bitmap_uuid = 0;
+				drbd_md_mark_dirty(device);
+				if (peer_device->connection->agreed_pro_version < 124)
+					set_bit(RS_SOURCE_MISSED_END, &peer_device->flags);
+			} else {
+				drbd_info(peer_device, "Missed end of resync as sync-source\n");
+				set_bit(RS_SOURCE_MISSED_END, &peer_device->flags);
+			}
+		}
+		spin_unlock_irq(&device->ldev->md.uuid_lock);
+		return;
 	}
 
-	device->ldev->md.uuid[idx] = val;
-	drbd_md_mark_dirty(device);
-}
-
-void _drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(local)
-{
-	unsigned long flags;
-	spin_lock_irqsave(&device->ldev->md.uuid_lock, flags);
-	__drbd_uuid_set(device, idx, val);
-	spin_unlock_irqrestore(&device->ldev->md.uuid_lock, flags);
-}
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		struct drbd_peer_device *pd2;
 
-void drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(local)
-{
-	unsigned long flags;
-	spin_lock_irqsave(&device->ldev->md.uuid_lock, flags);
-	if (device->ldev->md.uuid[idx]) {
-		drbd_uuid_move_history(device);
-		device->ldev->md.uuid[UI_HISTORY_START] = device->ldev->md.uuid[idx];
-	}
-	__drbd_uuid_set(device, idx, val);
-	spin_unlock_irqrestore(&device->ldev->md.uuid_lock, flags);
-}
+		if (node_id == device->ldev->md.node_id)
+			continue;
 
-/**
- * drbd_uuid_new_current() - Creates a new current UUID
- * @device:	DRBD device.
- *
- * Creates a new current UUID, and rotates the old current UUID into
- * the bitmap slot. Causes an incremental resync upon next connect.
- */
-void drbd_uuid_new_current(struct drbd_device *device) __must_hold(local)
-{
-	u64 val;
-	unsigned long long bm_uuid;
+		if (!(peer_md[node_id].flags & MDF_HAVE_BITMAP) && !(peer_md[node_id].flags & MDF_NODE_EXISTS))
+			continue;
 
-	get_random_bytes(&val, sizeof(u64));
+		pd2 = peer_device_by_node_id(device, node_id);
+		if (pd2 && pd2 != peer_device && pd2->repl_state[NOW] > L_ESTABLISHED)
+			continue;
 
-	spin_lock_irq(&device->ldev->md.uuid_lock);
-	bm_uuid = device->ldev->md.uuid[UI_BITMAP];
+		if (peer_device->bitmap_uuids[node_id] == 0 && peer_md[node_id].bitmap_uuid != 0) {
+			int from_node_id;
+
+			if (current_equal) {
+				u64 previous_bitmap_uuid = peer_md[node_id].bitmap_uuid;
+				peer_md[node_id].bitmap_uuid = 0;
+				_drbd_uuid_push_history(device, previous_bitmap_uuid);
+				if (node_id == peer_device->node_id)
+					drbd_print_uuids(peer_device, "updated UUIDs");
+				else if (peer_md[node_id].flags & MDF_HAVE_BITMAP)
+					forget_bitmap(device, node_id);
+				else
+					drbd_info(device, "Clearing bitmap UUID for node %d\n",
+						  node_id);
+				drbd_md_mark_dirty(device);
+				write_bm = true;
+			}
 
-	if (bm_uuid)
-		drbd_warn(device, "bm UUID was already set: %llX\n", bm_uuid);
+			from_node_id = find_node_id_by_bitmap_uuid(device, peer_current_uuid);
+			if (from_node_id != -1 && node_id != from_node_id &&
+			    dagtag_newer(peer_md[from_node_id].bitmap_dagtag,
+					 peer_md[node_id].bitmap_dagtag)) {
+				if (peer_md[node_id].flags & MDF_HAVE_BITMAP &&
+				    peer_md[from_node_id].flags & MDF_HAVE_BITMAP)
+					copy_bitmap(device, from_node_id, node_id);
+				else
+					drbd_info(device, "Node %d synced up to node %d.\n",
+						  node_id, from_node_id);
+				drbd_md_mark_dirty(device);
+				filled = true;
+			}
+		}
+	}
 
-	device->ldev->md.uuid[UI_BITMAP] = device->ldev->md.uuid[UI_CURRENT];
-	__drbd_uuid_set(device, UI_CURRENT, val);
+	write_bm |= detect_copy_ops_on_peer(peer_device);
 	spin_unlock_irq(&device->ldev->md.uuid_lock);
 
-	drbd_print_uuids(device, "new current UUID");
-	/* get it to stable storage _now_ */
-	drbd_md_sync(device);
+	if (write_bm || filled) {
+		u64 to_nodes = filled ? -1 : ~NODE_MASK(peer_device->node_id);
+		drbd_propagate_uuids(device, to_nodes);
+		drbd_suspend_io(device, WRITE_ONLY);
+		drbd_bm_lock(device, "detect_finished_resyncs()", BM_LOCK_BULK);
+		drbd_bm_write(device, NULL);
+		drbd_bm_unlock(device);
+		drbd_resume_io(device);
+	}
 }
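
As an illustration of the "missed end of resync" detection above (made-up
UUIDs; two nodes A and B, with this code running on A):

	A's slot for B (comm_bitmap_uuid) = 0x1111  <- A still expects a resync
	B's slot for A (bitmap_uuids[A])  = 0       <- B already finished it
	current UUIDs equal, neither side marked as SyncTarget

	=> "Missed end of resync as sync-source": with nothing left to
	   sync, A clears its slot for B and pushes the stale 0x1111 into
	   the UUID history; otherwise it only sets RS_SOURCE_MISSED_END.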
 
-void drbd_uuid_set_bm(struct drbd_device *device, u64 val) __must_hold(local)
+int drbd_bmio_set_all_n_write(struct drbd_device *device,
+			      struct drbd_peer_device *peer_device)
 {
-	unsigned long flags;
-	spin_lock_irqsave(&device->ldev->md.uuid_lock, flags);
-	if (device->ldev->md.uuid[UI_BITMAP] == 0 && val == 0) {
-		spin_unlock_irqrestore(&device->ldev->md.uuid_lock, flags);
-		return;
-	}
-
-	if (val == 0) {
-		drbd_uuid_move_history(device);
-		device->ldev->md.uuid[UI_HISTORY_START] = device->ldev->md.uuid[UI_BITMAP];
-		device->ldev->md.uuid[UI_BITMAP] = 0;
-	} else {
-		unsigned long long bm_uuid = device->ldev->md.uuid[UI_BITMAP];
-		if (bm_uuid)
-			drbd_warn(device, "bm UUID was already set: %llX\n", bm_uuid);
-
-		device->ldev->md.uuid[UI_BITMAP] = val & ~((u64)1);
-	}
-	spin_unlock_irqrestore(&device->ldev->md.uuid_lock, flags);
-
-	drbd_md_mark_dirty(device);
+	drbd_bm_set_all(device);
+	return drbd_bm_write(device, NULL);
 }
 
 /**
@@ -3384,22 +5617,21 @@ void drbd_uuid_set_bm(struct drbd_device *device, u64 val) __must_hold(local)
  * @device:	DRBD device.
  * @peer_device: Peer DRBD device.
  *
- * Sets all bits in the bitmap and writes the whole bitmap to stable storage.
+ * Sets all bits in the bitmap towards one peer and writes the whole bitmap to stable storage.
  */
 int drbd_bmio_set_n_write(struct drbd_device *device,
-			  struct drbd_peer_device *peer_device) __must_hold(local)
-
+			  struct drbd_peer_device *peer_device)
 {
 	int rv = -EIO;
 
-	drbd_md_set_flag(device, MDF_FULL_SYNC);
+	drbd_md_set_peer_flag(peer_device, MDF_PEER_FULL_SYNC);
 	drbd_md_sync(device);
-	drbd_bm_set_all(device);
+	drbd_bm_set_many_bits(peer_device, 0, -1UL);
 
-	rv = drbd_bm_write(device, peer_device);
+	rv = drbd_bm_write(device, NULL);
 
 	if (!rv) {
-		drbd_md_clear_flag(device, MDF_FULL_SYNC);
+		drbd_md_clear_peer_flag(peer_device, MDF_PEER_FULL_SYNC);
 		drbd_md_sync(device);
 	}
 
@@ -3407,67 +5639,109 @@ int drbd_bmio_set_n_write(struct drbd_device *device,
 }
 
 /**
- * drbd_bmio_clear_n_write() - io_fn for drbd_queue_bitmap_io() or drbd_bitmap_io()
+ * drbd_bmio_set_allocated_n_write() - io_fn for drbd_queue_bitmap_io() or drbd_bitmap_io()
+ * @device:	DRBD device.
+ * @peer_device: parameter ignored
+ *
+ * Sets all bits in all allocated bitmap slots and writes the bitmap to stable storage.
+ */
+int drbd_bmio_set_allocated_n_write(struct drbd_device *device,
+				    struct drbd_peer_device *peer_device)
+{
+	const int my_node_id = device->resource->res_opts.node_id;
+	struct drbd_md *md = &device->ldev->md;
+	int rv = -EIO;
+	int node_id, bitmap_index;
+
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		if (node_id == my_node_id)
+			continue;
+		bitmap_index = md->peers[node_id].bitmap_index;
+		if (bitmap_index == -1)
+			continue;
+		_drbd_bm_set_many_bits(device, bitmap_index, 0, -1UL);
+	}
+	rv = drbd_bm_write(device, NULL);
+
+	return rv;
+}
+
+/**
+ * drbd_bmio_clear_all_n_write() - io_fn for drbd_queue_bitmap_io() or drbd_bitmap_io()
  * @device:	DRBD device.
  * @peer_device: Peer DRBD device.
  *
  * Clears all bits in the bitmap and writes the whole bitmap to stable storage.
  */
-int drbd_bmio_clear_n_write(struct drbd_device *device,
-			  struct drbd_peer_device *peer_device) __must_hold(local)
-
+int drbd_bmio_clear_all_n_write(struct drbd_device *device,
+			    struct drbd_peer_device *peer_device)
 {
 	drbd_resume_al(device);
 	drbd_bm_clear_all(device);
-	return drbd_bm_write(device, peer_device);
+	return drbd_bm_write(device, NULL);
+}
+
+int drbd_bmio_clear_one_peer(struct drbd_device *device,
+			     struct drbd_peer_device *peer_device)
+{
+	drbd_bm_clear_many_bits(peer_device, 0, -1UL);
+	return drbd_bm_write(device, NULL);
 }
 
 static int w_bitmap_io(struct drbd_work *w, int unused)
 {
-	struct drbd_device *device =
-		container_of(w, struct drbd_device, bm_io_work.w);
-	struct bm_io_work *work = &device->bm_io_work;
+	struct bm_io_work *work =
+		container_of(w, struct bm_io_work, w);
+	struct drbd_device *device = work->device;
 	int rv = -EIO;
 
-	if (work->flags != BM_LOCKED_CHANGE_ALLOWED) {
-		int cnt = atomic_read(&device->ap_bio_cnt);
-		if (cnt)
-			drbd_err(device, "FIXME: ap_bio_cnt %d, expected 0; queued for '%s'\n",
-					cnt, work->why);
-	}
-
 	if (get_ldev(device)) {
-		drbd_bm_lock(device, work->why, work->flags);
+		if (work->flags & BM_LOCK_SINGLE_SLOT)
+			drbd_bm_slot_lock(work->peer_device, work->why, work->flags);
+		else
+			drbd_bm_lock(device, work->why, work->flags);
 		rv = work->io_fn(device, work->peer_device);
-		drbd_bm_unlock(device);
+		if (work->flags & BM_LOCK_SINGLE_SLOT)
+			drbd_bm_slot_unlock(work->peer_device);
+		else
+			drbd_bm_unlock(device);
 		put_ldev(device);
 	}
 
-	clear_bit_unlock(BITMAP_IO, &device->flags);
-	wake_up(&device->misc_wait);
-
 	if (work->done)
-		work->done(device, rv);
+		work->done(device, work->peer_device, rv);
 
-	clear_bit(BITMAP_IO_QUEUED, &device->flags);
-	work->why = NULL;
-	work->flags = 0;
+	if (atomic_dec_and_test(&device->pending_bitmap_work.n))
+		wake_up(&device->misc_wait);
+	kfree(work);
 
 	return 0;
 }
 
+void drbd_queue_pending_bitmap_work(struct drbd_device *device)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&device->pending_bitmap_work.q_lock, flags);
+	spin_lock(&device->resource->work.q_lock);
+	list_splice_tail_init(&device->pending_bitmap_work.q, &device->resource->work.q);
+	spin_unlock(&device->resource->work.q_lock);
+	spin_unlock_irqrestore(&device->pending_bitmap_work.q_lock, flags);
+	wake_up(&device->resource->work.q_wait);
+}
+
 /**
  * drbd_queue_bitmap_io() - Queues an IO operation on the whole bitmap
  * @device:	DRBD device.
  * @io_fn:	IO callback to be called when bitmap IO is possible
  * @done:	callback to be called after the bitmap IO was performed
  * @why:	Descriptive text of the reason for doing the IO
- * @flags:	Bitmap flags
+ * @flags:	Bitmap operation flags
  * @peer_device: Peer DRBD device.
  *
  * While IO on the bitmap happens we freeze application IO thus we ensure
  * that drbd_set_out_of_sync() can not be called. This function MAY ONLY be
- * called from worker context. It MUST NOT be used while a previous such
+ * called from sender context. It MUST NOT be used while a previous such
  * work is still pending!
  *
  * Its worker function encloses the call of io_fn() by get_ldev() and
@@ -3475,35 +5749,63 @@ static int w_bitmap_io(struct drbd_work *w, int unused)
  */
 void drbd_queue_bitmap_io(struct drbd_device *device,
 			  int (*io_fn)(struct drbd_device *, struct drbd_peer_device *),
-			  void (*done)(struct drbd_device *, int),
+			  void (*done)(struct drbd_device *, struct drbd_peer_device *, int),
 			  char *why, enum bm_flag flags,
 			  struct drbd_peer_device *peer_device)
 {
-	D_ASSERT(device, current == peer_device->connection->worker.task);
-
-	D_ASSERT(device, !test_bit(BITMAP_IO_QUEUED, &device->flags));
-	D_ASSERT(device, !test_bit(BITMAP_IO, &device->flags));
-	D_ASSERT(device, list_empty(&device->bm_io_work.w.list));
-	if (device->bm_io_work.why)
-		drbd_err(device, "FIXME going to queue '%s' but '%s' still pending?\n",
-			why, device->bm_io_work.why);
-
-	device->bm_io_work.peer_device = peer_device;
-	device->bm_io_work.io_fn = io_fn;
-	device->bm_io_work.done = done;
-	device->bm_io_work.why = why;
-	device->bm_io_work.flags = flags;
-
-	spin_lock_irq(&device->resource->req_lock);
-	set_bit(BITMAP_IO, &device->flags);
-	/* don't wait for pending application IO if the caller indicates that
-	 * application IO does not conflict anyways. */
-	if (flags == BM_LOCKED_CHANGE_ALLOWED || atomic_read(&device->ap_bio_cnt) == 0) {
-		if (!test_and_set_bit(BITMAP_IO_QUEUED, &device->flags))
-			drbd_queue_work(&peer_device->connection->sender_work,
-					&device->bm_io_work.w);
+	struct bm_io_work *bm_io_work;
+
+	D_ASSERT(device, current == device->resource->worker.task);
+
+	bm_io_work = kmalloc_obj(*bm_io_work, GFP_NOIO);
+	if (!bm_io_work) {
+		if (done)
+			done(device, peer_device, -ENOMEM);
+		return;
 	}
-	spin_unlock_irq(&device->resource->req_lock);
+	bm_io_work->w.cb = w_bitmap_io;
+	bm_io_work->device = device;
+	bm_io_work->peer_device = peer_device;
+	bm_io_work->io_fn = io_fn;
+	bm_io_work->done = done;
+	bm_io_work->why = why;
+	bm_io_work->flags = flags;
+
+	/*
+	 * Whole-bitmap operations can only take place when there is no
+	 * concurrent application I/O.  We ensure exclusion between the two
+	 * types of I/O with the following mechanism:
+	 *
+	 *  - device->ap_bio_cnt keeps track of the number of application I/O
+	 *    requests in progress.
+	 *
+	 *  - A non-empty device->pending_bitmap_work list indicates that
+	 *    whole-bitmap I/O operations are pending, and no new application
+	 *    I/O should be started.  We make sure that the list doesn't appear
+	 *    empty system-wide before trying to queue the whole-bitmap I/O.
+	 *
+	 *  - In dec_ap_bio(), we decrement device->ap_bio_cnt.  If it reaches
+	 *    zero and the device->pending_bitmap_work list is non-empty, we
+	 *    queue the whole-bitmap operations.
+	 *
+	 *  - In inc_ap_bio(), we increment device->ap_bio_cnt before checking
+	 *    if the device->pending_bitmap_work list is non-empty.  If
+	 *    device->pending_bitmap_work is non-empty, we immediately call
+	 *    dec_ap_bio().
+	 *
+	 * This ensures that whenever there is pending whole-bitmap I/O, we
+	 * notice it in dec_ap_bio().
+	 */
+
+	/* No one should accidentally schedule the next bitmap IO
+	 * while it is still only half-queued. */
+	atomic_inc(&device->ap_bio_cnt[WRITE]);
+	atomic_inc(&device->pending_bitmap_work.n);
+	spin_lock_irq(&device->pending_bitmap_work.q_lock);
+	list_add_tail(&bm_io_work->w.list, &device->pending_bitmap_work.q);
+	spin_unlock_irq(&device->pending_bitmap_work.q_lock);
+	dec_ap_bio(device, WRITE);  /* may move to actual work queue */
 }
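
The comment block above describes the exclusion handshake between
application I/O and whole-bitmap I/O. A user-space model of the idea
(deliberately simplified; the kernel helpers take a device and an I/O
direction and do considerably more):

	#include <stdio.h>
	#include <stdatomic.h>
	#include <stdbool.h>

	static atomic_int ap_bio_cnt;
	static atomic_bool bitmap_work_pending;

	static void queue_pending_bitmap_work(void)
	{
		/* would splice the pending list onto the worker queue */
	}

	static void dec_ap_bio(void)
	{
		/* the last application I/O to finish drains the queue */
		if (atomic_fetch_sub(&ap_bio_cnt, 1) == 1 &&
		    atomic_load(&bitmap_work_pending))
			queue_pending_bitmap_work();
	}

	static bool inc_ap_bio(void)
	{
		atomic_fetch_add(&ap_bio_cnt, 1);
		if (atomic_load(&bitmap_work_pending)) {
			/* back off: bitmap I/O wants exclusivity */
			dec_ap_bio();
			return false;
		}
		return true;
	}

	int main(void)
	{
		atomic_store(&bitmap_work_pending, true);
		printf("app I/O admitted: %d\n", inc_ap_bio()); /* prints 0 */
		return 0;
	}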
 
 /**
@@ -3511,11 +5813,11 @@ void drbd_queue_bitmap_io(struct drbd_device *device,
  * @device:	DRBD device.
  * @io_fn:	IO callback to be called when bitmap IO is possible
  * @why:	Descriptive text of the reason for doing the IO
- * @flags:	Bitmap flags
+ * @flags:	Bitmap operation flags
  * @peer_device: Peer DRBD device.
  *
 * freezes application IO while the actual IO operation runs. This
- * functions MAY NOT be called from worker context.
+ * function MAY NOT be called from sender context.
  */
 int drbd_bitmap_io(struct drbd_device *device,
 		int (*io_fn)(struct drbd_device *, struct drbd_peer_device *),
@@ -3523,17 +5825,28 @@ int drbd_bitmap_io(struct drbd_device *device,
 		struct drbd_peer_device *peer_device)
 {
 	/* Only suspend io, if some operation is supposed to be locked out */
-	const bool do_suspend_io = flags & (BM_DONT_CLEAR|BM_DONT_SET|BM_DONT_TEST);
+	const bool do_suspend_io = flags & (BM_LOCK_CLEAR|BM_LOCK_SET|BM_LOCK_TEST);
 	int rv;
 
-	D_ASSERT(device, current != first_peer_device(device)->connection->worker.task);
+	D_ASSERT(device, current != device->resource->worker.task);
+
+	if (!device->bitmap)
+		return 0;
 
 	if (do_suspend_io)
-		drbd_suspend_io(device);
+		drbd_suspend_io(device, WRITE_ONLY);
+
+	if (flags & BM_LOCK_SINGLE_SLOT)
+		drbd_bm_slot_lock(peer_device, why, flags);
+	else
+		drbd_bm_lock(device, why, flags);
 
-	drbd_bm_lock(device, why, flags);
 	rv = io_fn(device, peer_device);
-	drbd_bm_unlock(device);
+
+	if (flags & BM_LOCK_SINGLE_SLOT)
+		drbd_bm_slot_unlock(peer_device);
+	else
+		drbd_bm_unlock(device);
 
 	if (do_suspend_io)
 		drbd_resume_io(device);
@@ -3541,142 +5854,52 @@ int drbd_bitmap_io(struct drbd_device *device,
 	return rv;
 }
 
-void drbd_md_set_flag(struct drbd_device *device, int flag) __must_hold(local)
+void drbd_md_set_peer_flag(struct drbd_peer_device *peer_device,
+			   enum mdf_peer_flag flag)
 {
-	if ((device->ldev->md.flags & flag) != flag) {
+	struct drbd_device *device = peer_device->device;
+	struct drbd_md *md = &device->ldev->md;
+
+	if (!(md->peers[peer_device->node_id].flags & flag)) {
 		drbd_md_mark_dirty(device);
-		device->ldev->md.flags |= flag;
+		md->peers[peer_device->node_id].flags |= flag;
 	}
 }
 
-void drbd_md_clear_flag(struct drbd_device *device, int flag) __must_hold(local)
+void drbd_md_clear_peer_flag(struct drbd_peer_device *peer_device,
+			     enum mdf_peer_flag flag)
 {
-	if ((device->ldev->md.flags & flag) != 0) {
+	struct drbd_device *device = peer_device->device;
+	struct drbd_md *md = &device->ldev->md;
+
+	if (md->peers[peer_device->node_id].flags & flag) {
 		drbd_md_mark_dirty(device);
-		device->ldev->md.flags &= ~flag;
+		md->peers[peer_device->node_id].flags &= ~flag;
 	}
 }
-int drbd_md_test_flag(struct drbd_backing_dev *bdev, int flag)
+
+int drbd_md_test_flag(struct drbd_backing_dev *bdev, enum mdf_flag flag)
 {
 	return (bdev->md.flags & flag) != 0;
 }
 
-static void md_sync_timer_fn(struct timer_list *t)
+bool drbd_md_test_peer_flag(struct drbd_peer_device *peer_device, enum mdf_peer_flag flag)
 {
-	struct drbd_device *device = timer_container_of(device, t,
-							md_sync_timer);
-	drbd_device_post_work(device, MD_SYNC);
-}
+	struct drbd_md *md = &peer_device->device->ldev->md;
 
-const char *cmdname(enum drbd_packet cmd)
-{
-	/* THINK may need to become several global tables
-	 * when we want to support more than
-	 * one PRO_VERSION */
-	static const char *cmdnames[] = {
-
-		[P_DATA]	        = "Data",
-		[P_DATA_REPLY]	        = "DataReply",
-		[P_RS_DATA_REPLY]	= "RSDataReply",
-		[P_BARRIER]	        = "Barrier",
-		[P_BITMAP]	        = "ReportBitMap",
-		[P_BECOME_SYNC_TARGET]  = "BecomeSyncTarget",
-		[P_BECOME_SYNC_SOURCE]  = "BecomeSyncSource",
-		[P_UNPLUG_REMOTE]	= "UnplugRemote",
-		[P_DATA_REQUEST]	= "DataRequest",
-		[P_RS_DATA_REQUEST]     = "RSDataRequest",
-		[P_SYNC_PARAM]	        = "SyncParam",
-		[P_PROTOCOL]            = "ReportProtocol",
-		[P_UUIDS]	        = "ReportUUIDs",
-		[P_SIZES]	        = "ReportSizes",
-		[P_STATE]	        = "ReportState",
-		[P_SYNC_UUID]           = "ReportSyncUUID",
-		[P_AUTH_CHALLENGE]      = "AuthChallenge",
-		[P_AUTH_RESPONSE]	= "AuthResponse",
-		[P_STATE_CHG_REQ]       = "StateChgRequest",
-		[P_PING]		= "Ping",
-		[P_PING_ACK]	        = "PingAck",
-		[P_RECV_ACK]	        = "RecvAck",
-		[P_WRITE_ACK]	        = "WriteAck",
-		[P_RS_WRITE_ACK]	= "RSWriteAck",
-		[P_SUPERSEDED]          = "Superseded",
-		[P_NEG_ACK]	        = "NegAck",
-		[P_NEG_DREPLY]	        = "NegDReply",
-		[P_NEG_RS_DREPLY]	= "NegRSDReply",
-		[P_BARRIER_ACK]	        = "BarrierAck",
-		[P_STATE_CHG_REPLY]     = "StateChgReply",
-		[P_OV_REQUEST]          = "OVRequest",
-		[P_OV_REPLY]            = "OVReply",
-		[P_OV_RESULT]           = "OVResult",
-		[P_CSUM_RS_REQUEST]     = "CsumRSRequest",
-		[P_RS_IS_IN_SYNC]	= "CsumRSIsInSync",
-		[P_SYNC_PARAM89]	= "SyncParam89",
-		[P_COMPRESSED_BITMAP]   = "CBitmap",
-		[P_DELAY_PROBE]         = "DelayProbe",
-		[P_OUT_OF_SYNC]		= "OutOfSync",
-		[P_RS_CANCEL]		= "RSCancel",
-		[P_CONN_ST_CHG_REQ]	= "conn_st_chg_req",
-		[P_CONN_ST_CHG_REPLY]	= "conn_st_chg_reply",
-		[P_PROTOCOL_UPDATE]	= "protocol_update",
-		[P_TRIM]	        = "Trim",
-		[P_RS_THIN_REQ]         = "rs_thin_req",
-		[P_RS_DEALLOCATED]      = "rs_deallocated",
-		[P_WSAME]	        = "WriteSame",
-		[P_ZEROES]		= "Zeroes",
-
-		/* enum drbd_packet, but not commands - obsoleted flags:
-		 *	P_MAY_IGNORE
-		 *	P_MAX_OPT_CMD
-		 */
-	};
+	if (peer_device->bitmap_index == -1)
+		return false;
 
-	/* too big for the array: 0xfffX */
-	if (cmd == P_INITIAL_META)
-		return "InitialMeta";
-	if (cmd == P_INITIAL_DATA)
-		return "InitialData";
-	if (cmd == P_CONNECTION_FEATURES)
-		return "ConnectionFeatures";
-	if (cmd >= ARRAY_SIZE(cmdnames))
-		return "Unknown";
-	return cmdnames[cmd];
+	return md->peers[peer_device->node_id].flags & flag;
 }
 
-/**
- * drbd_wait_misc  -  wait for a request to make progress
- * @device:	device associated with the request
- * @i:		the struct drbd_interval embedded in struct drbd_request or
- *		struct drbd_peer_request
- */
-int drbd_wait_misc(struct drbd_device *device, struct drbd_interval *i)
+static void md_sync_timer_fn(struct timer_list *t)
 {
-	struct net_conf *nc;
-	DEFINE_WAIT(wait);
-	long timeout;
-
-	rcu_read_lock();
-	nc = rcu_dereference(first_peer_device(device)->connection->net_conf);
-	if (!nc) {
-		rcu_read_unlock();
-		return -ETIMEDOUT;
-	}
-	timeout = nc->ko_count ? nc->timeout * HZ / 10 * nc->ko_count : MAX_SCHEDULE_TIMEOUT;
-	rcu_read_unlock();
-
-	/* Indicate to wake up device->misc_wait on progress.  */
-	i->waiting = true;
-	prepare_to_wait(&device->misc_wait, &wait, TASK_INTERRUPTIBLE);
-	spin_unlock_irq(&device->resource->req_lock);
-	timeout = schedule_timeout(timeout);
-	finish_wait(&device->misc_wait, &wait);
-	spin_lock_irq(&device->resource->req_lock);
-	if (!timeout || device->state.conn < C_CONNECTED)
-		return -ETIMEDOUT;
-	if (signal_pending(current))
-		return -ERESTARTSYS;
-	return 0;
+	struct drbd_device *device = timer_container_of(device, t, md_sync_timer);
+	drbd_device_post_work(device, MD_SYNC);
 }
 
 void lock_all_resources(void)
 {
 	struct drbd_resource *resource;
@@ -3685,7 +5908,7 @@ void lock_all_resources(void)
 	mutex_lock(&resources_mutex);
 	local_irq_disable();
 	for_each_resource(resource, &drbd_resources)
-		spin_lock_nested(&resource->req_lock, i++);
+		read_lock(&resource->state_rwlock);
 }
 
 void unlock_all_resources(void)
@@ -3693,11 +5916,141 @@ void unlock_all_resources(void)
 	struct drbd_resource *resource;
 
 	for_each_resource(resource, &drbd_resources)
-		spin_unlock(&resource->req_lock);
+		read_unlock(&resource->state_rwlock);
 	local_irq_enable();
 	mutex_unlock(&resources_mutex);
 }
 
+long twopc_timeout(struct drbd_resource *resource)
+{
+	return resource->res_opts.twopc_timeout * HZ/10;
+}
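
(The option value is in tenths of a second, so e.g. a twopc-timeout of
300 yields 300 * HZ / 10 = 30 * HZ jiffies, i.e. 30 seconds.)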
+
+u64 directly_connected_nodes(struct drbd_resource *resource, enum which_state which)
+{
+	u64 directly_connected = 0;
+	struct drbd_connection *connection;
+
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		if (connection->cstate[which] < C_CONNECTED)
+			continue;
+		directly_connected |= NODE_MASK(connection->peer_node_id);
+	}
+	rcu_read_unlock();
+
+	return directly_connected;
+}
+
+static sector_t bm_sect_to_max_capacity(const struct drbd_md *md, sector_t bm_sect)
+{
+	/* we do our meta data IO in 4k units */
+	u64 bm_bytes = ALIGN_DOWN(bm_sect << SECTOR_SHIFT, 4096);
+	u64 bm_bytes_per_peer = div_u64(bm_bytes, md->max_peers);
+	u64 bm_bits_per_peer = bm_bytes_per_peer * BITS_PER_BYTE;
+	return bm_bits_per_peer << (md->bm_block_shift - SECTOR_SHIFT);
+}
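
A worked example for the helper above (illustrative numbers, assuming
bm_block_shift == 12, i.e. one bitmap bit per 4 KiB of data):

	bm_sect       = 32768 sectors = 16 MiB of on-disk bitmap
	bm_bytes      = 16 MiB                 (already 4k aligned)
	per peer      = 16 MiB / 4 peers = 4 MiB
	bits per peer = 4 MiB * 8 = 33554432 bits
	max capacity  = 33554432 bits * 8 sectors/bit
	              = 268435456 sectors = 128 GiB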
+
+/**
+ * drbd_get_max_capacity() - Returns the capacity for user-data on the local backing device
+ * @device: The DRBD device.
+ * @bdev: Meta data block device.
+ * @warn: Whether to warn when size is clipped.
+ *
+ * This function returns the capacity for user-data on the local backing
+ * device. In the case of internal meta-data, this is the backing disk size
+ * reduced by the meta-data size. In the case of external meta-data, this is
+ * the size of the backing disk.
+ */
+sector_t drbd_get_max_capacity(
+		struct drbd_device *device, struct drbd_backing_dev *bdev, bool warn)
+{
+	unsigned int bm_max_peers = bdev->md.max_peers;
+	unsigned int bm_block_size = bdev->md.bm_block_size;
+	sector_t backing_bdev_capacity = drbd_get_capacity(bdev->backing_bdev);
+	sector_t bm_sect;
+	sector_t backing_capacity_remaining;
+	sector_t metadata_limit;
+	sector_t max_capacity;
+
+	switch (bdev->md.meta_dev_idx) {
+	case DRBD_MD_INDEX_INTERNAL:
+	case DRBD_MD_INDEX_FLEX_INT:
+		bm_sect = bdev->md.al_offset - bdev->md.bm_offset;
+		backing_capacity_remaining = drbd_md_first_sector(bdev);
+		break;
+	case DRBD_MD_INDEX_FLEX_EXT:
+		bm_sect = bdev->md.md_size_sect - bdev->md.bm_offset;
+		backing_capacity_remaining = backing_bdev_capacity;
+		break;
+	default:
+		bm_sect = DRBD_BM_SECTORS_INDEXED;
+		backing_capacity_remaining = backing_bdev_capacity;
+	}
+
+	metadata_limit = bm_sect_to_max_capacity(&bdev->md, bm_sect);
+
+	dynamic_drbd_dbg(device,
+			"Backing device capacity: %llus, remaining: %llus, bitmap sectors: %llus\n",
+			(unsigned long long) backing_bdev_capacity,
+			(unsigned long long) backing_capacity_remaining,
+			(unsigned long long) bm_sect);
+	dynamic_drbd_dbg(device,
+			"Max peers: %u, bytes_per_bit: %u, metadata limit: %llus, hard limit: %llus\n",
+			bm_max_peers, bm_block_size,
+			(unsigned long long) metadata_limit,
+			(unsigned long long) DRBD_MAX_SECTORS);
+
+	max_capacity = backing_capacity_remaining;
+	if (max_capacity > DRBD_MAX_SECTORS) {
+		if (warn)
+			drbd_warn(device, "Device size clipped from %llus to %llus due to DRBD limitations\n",
+					(unsigned long long) max_capacity,
+					(unsigned long long) DRBD_MAX_SECTORS);
+		max_capacity = DRBD_MAX_SECTORS;
+	}
+	if (max_capacity > metadata_limit) {
+		if (warn)
+			drbd_warn(device, "Device size clipped from %llus to %llus due to metadata size\n",
+					(unsigned long long) max_capacity,
+					(unsigned long long) metadata_limit);
+		max_capacity = metadata_limit;
+	}
+	return max_capacity;
+}
+
+/* this is about cluster partitions, not block device partitions */
+sector_t drbd_partition_data_capacity(struct drbd_device *device)
+{
+	struct drbd_peer_device *peer_device;
+	sector_t capacity = (sector_t)(-1);
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		if (test_bit(HAVE_SIZES, &peer_device->flags)) {
+			dynamic_drbd_dbg(peer_device, "d_size: %llus\n",
+					(unsigned long long)peer_device->d_size);
+			capacity = min_not_zero(capacity, peer_device->d_size);
+		}
+	}
+	rcu_read_unlock();
+
+	if (get_ldev_if_state(device, D_ATTACHING)) {
+		/* In case we somehow end up here while attaching, but before
+		 * we even assigned the ldev, pretend to still be diskless.
+		 */
+		if (device->ldev != NULL) {
+			sector_t local_capacity = drbd_local_max_size(device);
+
+			capacity = min_not_zero(capacity, local_capacity);
+		}
+		put_ldev(device);
+	}
+
+	return capacity != (sector_t)(-1) ? capacity : 0;
+}
+
 #ifdef CONFIG_DRBD_FAULT_INJECTION
 /* Fault insertion support including random number generator shamelessly
  * stolen from kernel/rcutorture.c */
@@ -3741,6 +6094,7 @@ _drbd_fault_str(unsigned int type) {
 		[DRBD_FAULT_BM_ALLOC] = "BM allocation",
 		[DRBD_FAULT_AL_EE] = "EE allocation",
 		[DRBD_FAULT_RECEIVE] = "receive data corruption",
+		[DRBD_FAULT_BIO_TOO_SMALL] = "BIO too small",
 	};
 
 	return (type < DRBD_FAULT_MAX) ? _faults[type] : "**Unknown**";
@@ -3753,14 +6107,13 @@ _drbd_insert_fault(struct drbd_device *device, unsigned int type)
 
 	unsigned int ret = (
 		(drbd_fault_devs == 0 ||
-			((1 << device_to_minor(device)) & drbd_fault_devs) != 0) &&
+			((1 << device->minor) & drbd_fault_devs) != 0) &&
 		(((_drbd_fault_random(&rrs) % 100) + 1) <= drbd_fault_rate));
 
 	if (ret) {
 		drbd_fault_count++;
 
-		if (drbd_ratelimit())
-			drbd_warn(device, "***Simulating %s failure\n",
+		drbd_warn_ratelimit(device, "***Simulating %s failure\n",
 				_drbd_fault_str(type));
 	}
 
@@ -3771,7 +6124,6 @@ _drbd_insert_fault(struct drbd_device *device, unsigned int type)
 module_init(drbd_init)
 module_exit(drbd_cleanup)
 
-EXPORT_SYMBOL(drbd_conn_str);
-EXPORT_SYMBOL(drbd_role_str);
-EXPORT_SYMBOL(drbd_disk_str);
-EXPORT_SYMBOL(drbd_set_st_err_str);
+/* For transport layer */
+EXPORT_SYMBOL(drbd_destroy_connection);
+EXPORT_SYMBOL(drbd_destroy_path);
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 17/20] drbd: rework receiver for DRBD 9 transport and multi-peer protocol
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
                   ` (15 preceding siblings ...)
  2026-03-27 22:38 ` [PATCH 16/20] drbd: rework module core for DRBD 9 transport and multi-peer Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 18/20] drbd: rework netlink management interface for DRBD 9 Christoph Böhmwalder
                   ` (2 subsequent siblings)
  19 siblings, 0 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder, Joel Colledge

Adapt the receiver to the DRBD 9 multi-peer architecture.
Replace all direct socket I/O with calls through the transport abstraction
layer; the transport now manages buffer allocation.
Move peer request tracking from per-device lists to connection-level
structures, enabling a single receiver thread to serve all volumes on
a connection.
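
To illustrate the shape of the new receive path, here is the core of the
drbd_recv() hunk below (error handling trimmed; the transport supplies the
buffer unless the caller passes CALLER_BUFFER):

    struct drbd_transport_ops *tr_ops = &connection->transport.class->ops;

    /* receive @size bytes on the data stream through the transport */
    rv = tr_ops->recv(&connection->transport, DATA_STREAM, buf, size, flags);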

UUID-based resync decisions replace the old integer heuristic with enums
for strategies and rules, so each sync handshake outcome and its reason
are self-describing and logged by name.
Update UUID comparison for multi-peer: each peer now carries per-node
bitmap UUIDs and a history array, replacing the fixed four-slot layout.
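
For example, a decision can then be reported straight from its descriptor
(hypothetical call site; strategy_descriptor() and drbd_sync_rule_str() are
added in the hunks below):

    struct sync_descriptor desc = strategy_descriptor(strategy);

    drbd_info(peer_device, "strategy: %s (rule: %s)\n",
              desc.name, drbd_sync_rule_str(rule));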

Introduce DAG-tag ordering as a causal consistency mechanism, letting
peer requests declare dependencies on writes seen at another node and
wait until those dependencies are resolved.

Add two-phase commit handling so that coordinated state changes (role
transitions, resync initiation, resize) can be propagated to all nodes
atomically.

Write conflict detection moves from a flag-based approach to an
interval-tree with typed intervals, using asynchronous deferred
submission for conflicting requests.

The disconnect path is restructured at the connection level: it
cancels dagtag-dependent requests, drains resync activity, flushes
workqueues, and performs per-peer-device teardown before returning
the connection to the unconnected state.

Co-developed-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Co-developed-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Co-developed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Co-developed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/drbd_receiver.c         | 12258 ++++++++++++++-----
 drivers/block/drbd/drbd_transport.h        |   127 +-
 drivers/block/drbd/drbd_transport_lb-tcp.c |    50 +-
 drivers/block/drbd/drbd_transport_rdma.c   |    74 +-
 drivers/block/drbd/drbd_transport_tcp.c    |    49 +-
 5 files changed, 9029 insertions(+), 3529 deletions(-)

diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
index 58b95bf4bdca..e8c4cd1cda14 100644
--- a/drivers/block/drbd/drbd_receiver.c
+++ b/drivers/block/drbd/drbd_receiver.c
@@ -10,42 +10,54 @@
 
  */
 
-
-#include <linux/module.h>
-
-#include <linux/uaccess.h>
 #include <net/sock.h>
 
+#include <linux/bio.h>
 #include <linux/drbd.h>
 #include <linux/fs.h>
 #include <linux/file.h>
 #include <linux/in.h>
 #include <linux/mm.h>
-#include <linux/memcontrol.h>
+#include <linux/memcontrol.h> /* needed on kernels <4.3 */
 #include <linux/mm_inline.h>
 #include <linux/slab.h>
-#include <uapi/linux/sched/types.h>
-#include <linux/sched/signal.h>
 #include <linux/pkt_sched.h>
-#include <linux/unistd.h>
+#include <uapi/linux/sched/types.h>
 #include <linux/vmalloc.h>
 #include <linux/random.h>
-#include <linux/string.h>
-#include <linux/scatterlist.h>
+#include <net/ipv6.h>
 #include <linux/part_stat.h>
-#include <linux/mempool.h>
+
 #include "drbd_int.h"
+#include "drbd_meta_data.h"
 #include "drbd_protocol.h"
 #include "drbd_req.h"
 #include "drbd_vli.h"
 
-#define PRO_FEATURES (DRBD_FF_TRIM|DRBD_FF_THIN_RESYNC|DRBD_FF_WSAME|DRBD_FF_WZEROES)
 
-struct packet_info {
-	enum drbd_packet cmd;
-	unsigned int size;
-	unsigned int vnr;
-	void *data;
+enum ao_op {
+	OUTDATE_DISKS,
+	OUTDATE_DISKS_AND_DISCONNECT,
+};
+
+struct flush_work {
+	struct drbd_work w;
+	struct drbd_epoch *epoch;
+};
+
+struct update_peers_work {
+	struct drbd_work w;
+	struct drbd_peer_device *peer_device;
+	sector_t sector_start;
+	sector_t sector_end;
+};
+
+enum epoch_event {
+	EV_PUT,
+	EV_GOT_BARRIER_NR,
+	EV_BARRIER_DONE,
+	EV_BECAME_LAST,
+	EV_CLEANUP = 32, /* used as flag */
 };
 
 enum finish_epoch {
@@ -54,201 +66,508 @@ enum finish_epoch {
 	FE_RECYCLED,
 };
 
-static int drbd_do_features(struct drbd_connection *connection);
-static int drbd_do_auth(struct drbd_connection *connection);
-static int drbd_disconnected(struct drbd_peer_device *);
-static void conn_wait_active_ee_empty(struct drbd_connection *connection);
+enum resync_reason {
+	AFTER_UNSTABLE,
+	DISKLESS_PRIMARY,
+};
+
+enum sync_rule {
+	RULE_SYNC_SOURCE_MISSED_FINISH,
+	RULE_SYNC_SOURCE_PEER_MISSED_FINISH,
+	RULE_SYNC_TARGET_MISSED_FINISH,
+	RULE_SYNC_TARGET_PEER_MISSED_FINISH,
+	RULE_SYNC_TARGET_MISSED_START,
+	RULE_SYNC_SOURCE_MISSED_START,
+	RULE_INITIAL_HANDSHAKE_CHANGED,
+	RULE_JUST_CREATED_PEER,
+	RULE_JUST_CREATED_SELF,
+	RULE_JUST_CREATED_BOTH,
+	RULE_CRASHED_PRIMARY,
+	RULE_LOST_QUORUM,
+	RULE_RECONNECTED,
+	RULE_BOTH_OFF,
+	RULE_BITMAP_PEER,
+	RULE_BITMAP_PEER_OTHER,
+	RULE_BITMAP_SELF,
+	RULE_BITMAP_SELF_OTHER,
+	RULE_BITMAP_BOTH,
+	RULE_HISTORY_PEER,
+	RULE_HISTORY_SELF,
+	RULE_HISTORY_BOTH,
+};
+
+static const char * const sync_rule_names[] = {
+	[RULE_SYNC_SOURCE_MISSED_FINISH] = "sync-source-missed-finish",
+	[RULE_SYNC_SOURCE_PEER_MISSED_FINISH] = "sync-source-peer-missed-finish",
+	[RULE_SYNC_TARGET_MISSED_FINISH] = "sync-target-missed-finish",
+	[RULE_SYNC_TARGET_PEER_MISSED_FINISH] = "sync-target-peer-missed-finish",
+	[RULE_SYNC_TARGET_MISSED_START] = "sync-target-missed-start",
+	[RULE_SYNC_SOURCE_MISSED_START] = "sync-source-missed-start",
+	[RULE_INITIAL_HANDSHAKE_CHANGED] = "initial-handshake-changed",
+	[RULE_JUST_CREATED_PEER] = "just-created-peer",
+	[RULE_JUST_CREATED_SELF] = "just-created-self",
+	[RULE_JUST_CREATED_BOTH] = "just-created-both",
+	[RULE_CRASHED_PRIMARY] = "crashed-primary",
+	[RULE_LOST_QUORUM] = "lost-quorum",
+	[RULE_RECONNECTED] = "reconnected",
+	[RULE_BOTH_OFF] = "both-off",
+	[RULE_BITMAP_PEER] = "bitmap-peer",
+	[RULE_BITMAP_PEER_OTHER] = "bitmap-peer-other",
+	[RULE_BITMAP_SELF] = "bitmap-self",
+	[RULE_BITMAP_SELF_OTHER] = "bitmap-self-other",
+	[RULE_BITMAP_BOTH] = "bitmap-both",
+	[RULE_HISTORY_PEER] = "history-peer",
+	[RULE_HISTORY_SELF] = "history-self",
+	[RULE_HISTORY_BOTH] = "history-both",
+};
+
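+/* Possible outcomes of the UUID handshake: what to do with the peer's data. */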
+enum sync_strategy {
+	UNDETERMINED = 0,
+	NO_SYNC,
+	SYNC_SOURCE_IF_BOTH_FAILED,
+	SYNC_SOURCE_USE_BITMAP,
+	SYNC_SOURCE_SET_BITMAP,
+	SYNC_SOURCE_COPY_BITMAP,
+	SYNC_TARGET_IF_BOTH_FAILED,
+	SYNC_TARGET_USE_BITMAP,
+	SYNC_TARGET_SET_BITMAP,
+	SYNC_TARGET_CLEAR_BITMAP,
+	SPLIT_BRAIN_AUTO_RECOVER,
+	SPLIT_BRAIN_DISCONNECT,
+	UNRELATED_DATA,
+	RETRY_CONNECT,
+	REQUIRES_PROTO_91,
+	REQUIRES_PROTO_96,
+	REQUIRES_PROTO_124,
+	SYNC_TARGET_PRIMARY_RECONNECT,
+	SYNC_TARGET_PRIMARY_DISCONNECT,
+};
+
+struct sync_descriptor {
+	char *const name;
+	int required_protocol;
+	bool is_split_brain;
+	bool is_sync_source;
+	bool is_sync_target;
+	bool reconnect;
+	bool disconnect;
+	int resync_peer_preference;
+	enum sync_strategy full_sync_equivalent;
+	enum sync_strategy reverse;
+};
+
+static const struct sync_descriptor sync_descriptors[] = {
+	[UNDETERMINED] = {
+		.name = "?",
+	},
+	[NO_SYNC] = {
+		.name = "no-sync",
+		.resync_peer_preference = 5,
+	},
+	[SYNC_SOURCE_IF_BOTH_FAILED] = {
+		.name = "source-if-both-failed",
+		.is_sync_source = true,
+		.reverse = SYNC_TARGET_IF_BOTH_FAILED,
+	},
+	[SYNC_SOURCE_USE_BITMAP] = {
+		.name = "source-use-bitmap",
+		.is_sync_source = true,
+		.full_sync_equivalent = SYNC_SOURCE_SET_BITMAP,
+		.reverse = SYNC_TARGET_USE_BITMAP,
+	},
+	[SYNC_SOURCE_SET_BITMAP] = {
+		.name = "source-set-bitmap",
+		.is_sync_source = true,
+		.reverse = SYNC_TARGET_SET_BITMAP,
+	},
+	[SYNC_SOURCE_COPY_BITMAP] = {
+		.name = "source-copy-other-bitmap",
+		.is_sync_source = true,
+	},
+	[SYNC_TARGET_IF_BOTH_FAILED] = {
+		.name = "target-if-both-failed",
+		.is_sync_target = true,
+		.resync_peer_preference = 4,
+		.reverse = SYNC_SOURCE_IF_BOTH_FAILED,
+	},
+	[SYNC_TARGET_USE_BITMAP] = {
+		.name = "target-use-bitmap",
+		.is_sync_target = true,
+		.full_sync_equivalent = SYNC_TARGET_SET_BITMAP,
+		.resync_peer_preference = 3,
+		.reverse = SYNC_SOURCE_USE_BITMAP,
+	},
+	[SYNC_TARGET_SET_BITMAP] = {
+		.name = "target-set-bitmap",
+		.is_sync_target = true,
+		.resync_peer_preference = 2,
+		.reverse = SYNC_SOURCE_SET_BITMAP,
+	},
+	[SYNC_TARGET_CLEAR_BITMAP] = {
+		.name = "target-clear-bitmap",
+		.is_sync_target = true,
+		.resync_peer_preference = 1,
+	},
+	[SPLIT_BRAIN_AUTO_RECOVER] = {
+		.name = "split-brain-auto-recover",
+		.is_split_brain = true,
+		.disconnect = true,
+	},
+	[SPLIT_BRAIN_DISCONNECT] = {
+		.name = "split-brain-disconnect",
+		.is_split_brain = true,
+		.disconnect = true,
+	},
+	[UNRELATED_DATA] = {
+		.name = "unrelated-data",
+		.disconnect = true,
+	},
+	[RETRY_CONNECT] = {
+		.name = "retry-connect",
+		.reconnect = true,
+	},
+	[REQUIRES_PROTO_91] = {
+		.name = "requires-proto-91",
+		.required_protocol = 91,
+		.disconnect = true,
+	},
+	[REQUIRES_PROTO_96] = {
+		.name = "requires-proto-96",
+		.required_protocol = 96,
+		.disconnect = true,
+	},
+	[REQUIRES_PROTO_124] = {
+		.name = "requires-proto-124",
+		.required_protocol = 124,
+		.disconnect = true,
+	},
+	[SYNC_TARGET_PRIMARY_RECONNECT] = {
+		.name = "sync-target-primary-reconnect",
+		.is_sync_target = true,
+		.reconnect = true,
+	},
+	[SYNC_TARGET_PRIMARY_DISCONNECT] = {
+		.name = "sync-target-primary-disconnect",
+		.is_sync_target = true,
+		.disconnect = true,
+	},
+};
+
+enum rcv_timeout_kind {
+	PING_TIMEOUT,
+	REGULAR_TIMEOUT,
+};
+
+int drbd_do_features(struct drbd_connection *connection);
+int drbd_do_auth(struct drbd_connection *connection);
+static void conn_disconnect(struct drbd_connection *connection);
+
 static enum finish_epoch drbd_may_finish_epoch(struct drbd_connection *, struct drbd_epoch *, enum epoch_event);
 static int e_end_block(struct drbd_work *, int);
+static void cleanup_unacked_peer_requests(struct drbd_connection *connection);
+static void cleanup_peer_ack_list(struct drbd_connection *connection);
+static u64 node_ids_to_bitmap(struct drbd_device *device, u64 node_ids);
+static void process_twopc(struct drbd_connection *, struct twopc_reply *, struct packet_info *, unsigned long);
+static void drbd_resync(struct drbd_peer_device *, enum resync_reason);
+static void drbd_unplug_all_devices(struct drbd_connection *connection);
+static int decode_header(struct drbd_connection *, const void *, struct packet_info *);
+static void check_resync_source(struct drbd_device *device, u64 weak_nodes);
+static void set_rcvtimeo(struct drbd_connection *connection, enum rcv_timeout_kind kind);
+static bool disconnect_expected(struct drbd_connection *connection);
+static bool uuid_in_peer_history(struct drbd_peer_device *peer_device, u64 uuid);
+static bool uuid_in_my_history(struct drbd_device *device, u64 uuid);
+static void drbd_cancel_conflicting_resync_requests(struct drbd_peer_device *peer_device);
+
+static const char *drbd_sync_rule_str(enum sync_rule rule)
+{
+	if (rule < 0 || rule >= ARRAY_SIZE(sync_rule_names)) {
+		WARN_ON(true);
+		return "?";
+	}
+	return sync_rule_names[rule];
+}
 
+static struct sync_descriptor strategy_descriptor(enum sync_strategy strategy)
+{
+	if (strategy < 0 || strategy >= ARRAY_SIZE(sync_descriptors)) {
+		WARN_ON(true);
+		return sync_descriptors[UNDETERMINED];
+	}
+	return sync_descriptors[strategy];
+}
+
+static bool is_strategy_determined(enum sync_strategy strategy)
+{
+	return strategy == NO_SYNC ||
+			strategy_descriptor(strategy).is_sync_source ||
+			strategy_descriptor(strategy).is_sync_target;
+}
 
-#define GFP_TRY	(__GFP_HIGHMEM | __GFP_NOWARN)
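+/* Return the epoch preceding @epoch on this connection, or NULL if there is none. */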
+static struct drbd_epoch *previous_epoch(struct drbd_connection *connection, struct drbd_epoch *epoch)
+{
+	struct drbd_epoch *prev;
+
+	spin_lock(&connection->epoch_lock);
+	prev = list_entry(epoch->list.prev, struct drbd_epoch, list);
+	if (prev == epoch || prev == connection->current_epoch)
+		prev = NULL;
+	spin_unlock(&connection->epoch_lock);
+	return prev;
+}
 
-static struct page *__drbd_alloc_pages(unsigned int number)
+static void rs_sectors_came_in(struct drbd_peer_device *peer_device, int size)
 {
-	struct page *page = NULL;
-	struct page *tmp = NULL;
-	unsigned int i = 0;
+	int rs_sect_in = atomic_add_return(size >> 9, &peer_device->rs_sect_in);
 
-	/* GFP_TRY, because we must not cause arbitrary write-out: in a DRBD
-	 * "criss-cross" setup, that might cause write-out on some other DRBD,
-	 * which in turn might block on the other node at this very place.  */
-	for (i = 0; i < number; i++) {
-		tmp = mempool_alloc(&drbd_buffer_page_pool, GFP_TRY);
-		if (!tmp)
-			goto fail;
-		set_page_private(tmp, (unsigned long)page);
-		page = tmp;
+	/* When resync runs faster than anticipated, consider running the
+	 * resync_work early. */
+	if (rs_sect_in >= peer_device->rs_in_flight)
+		drbd_rs_all_in_flight_came_back(peer_device, rs_sect_in);
+}
+
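+/* Free all bios attached to @peer_req and return their data pages to the transport. */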
+void drbd_peer_req_strip_bio(struct drbd_peer_request *peer_req)
+{
+	struct drbd_transport *transport = &peer_req->peer_device->connection->transport;
+	struct bvec_iter iter;
+	struct bio_vec bvec;
+	struct bio *bio;
+
+	while ((bio = bio_list_pop(&peer_req->bios))) {
+		bio_for_each_bvec(bvec, bio, iter) {
+			struct page *page = bvec.bv_page;
+			unsigned int len = bvec.bv_len;
+
+			/* bio_add_page() may have merged contiguous pages from
+			 * separate allocations into a single bvec. Step through
+			 * by compound_order to free each allocation unit.
+			 */
+			while (len) {
+				unsigned int order = compound_order(page);
+
+				drbd_free_page(transport, page);
+				page += 1 << order;
+				len -= min_t(unsigned int, PAGE_SIZE << order, len);
+			}
+		}
+		bio_put(bio);
 	}
+}
+
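+/* Allocate a single page (order 0, from the mempool) or a compound page,
+ * throttling briefly when pp_in_use exceeds max_buffers. */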
+static struct page *
+__drbd_alloc_pages(struct drbd_connection *connection, gfp_t gfp_mask, int order)
+{
+	struct page *page;
+	unsigned int mxb;
+
+	rcu_read_lock();
+	mxb = rcu_dereference(connection->transport.net_conf)->max_buffers;
+	rcu_read_unlock();
+
+	if (atomic_read(&connection->pp_in_use) >= mxb)
+		schedule_timeout_interruptible(HZ / 10);
+
+	if (order == 0)
+		page = mempool_alloc(&drbd_buffer_page_pool, gfp_mask);
+	else
+		page = alloc_pages(gfp_mask | __GFP_COMP | __GFP_NORETRY, order);
+
+	if (page)
+		atomic_add(1 << order, &connection->pp_in_use);
+
 	return page;
-fail:
-	page_chain_for_each_safe(page, tmp) {
-		set_page_private(page, 0);
-		mempool_free(page, &drbd_buffer_page_pool);
-	}
-	return NULL;
 }
 
 /**
- * drbd_alloc_pages() - Returns @number pages, retries forever (or until signalled)
- * @peer_device:	DRBD device.
- * @number:		number of pages requested
- * @retry:		whether to retry, if not enough pages are available right now
- *
- * Tries to allocate number pages, first from our own page pool, then from
- * the kernel.
- * Possibly retry until DRBD frees sufficient pages somewhere else.
+ * drbd_alloc_pages() - Returns a page, which might be a single or compound page
+ * @transport:	DRBD transport
+ * @gfp_mask:	how to allocate and whether to loop until we succeed
+ * @size:	Desired size, gets rounded down to the closest power of two
  *
- * If this allocation would exceed the max_buffers setting, we throttle
- * allocation (schedule_timeout) to give the system some room to breathe.
+ * Allocates a page from the kernel or from the private mempool. When this
+ * allocation exceeds the max_buffers setting, throttle the allocation via
+ * schedule_timeout.
  *
- * We do not use max-buffers as hard limit, because it could lead to
- * congestion and further to a distributed deadlock during online-verify or
- * (checksum based) resync, if the max-buffers, socket buffer sizes and
+ * We do not use max-buffers as a hard limit, because it could lead to
+ * congestion and, further, to a distributed deadlock during online-verify or
+ * (checksum-based) resync, if the max-buffers, socket buffer sizes, and
  * resync-rate settings are mis-configured.
- *
- * Returns a page chain linked via page->private.
  */
-struct page *drbd_alloc_pages(struct drbd_peer_device *peer_device, unsigned int number,
-			      bool retry)
+struct page *drbd_alloc_pages(struct drbd_transport *transport, gfp_t gfp_mask, unsigned int size)
 {
-	struct drbd_device *device = peer_device->device;
+	struct drbd_connection *connection =
+		container_of(transport, struct drbd_connection, transport);
+	int order = max(ilog2(size) - PAGE_SHIFT, 0);
 	struct page *page;
-	struct net_conf *nc;
-	unsigned int mxb;
 
-	rcu_read_lock();
-	nc = rcu_dereference(peer_device->connection->net_conf);
-	mxb = nc ? nc->max_buffers : 1000000;
-	rcu_read_unlock();
+	if (order && drbd_insert_fault_conn(connection, DRBD_FAULT_BIO_TOO_SMALL))
+		order = 0;
 
-	if (atomic_read(&device->pp_in_use) >= mxb)
-		schedule_timeout_interruptible(HZ / 10);
-	page = __drbd_alloc_pages(number);
+	page = __drbd_alloc_pages(connection, gfp_mask | __GFP_NOWARN, order);
+	if (!page && order)
+		page = __drbd_alloc_pages(connection, gfp_mask, 0);
 
-	if (page)
-		atomic_add(number, &device->pp_in_use);
 	return page;
 }
+EXPORT_SYMBOL(drbd_alloc_pages); /* for transports */
 
-/* Must not be used from irq, as that may deadlock: see drbd_alloc_pages.
- * Is also used from inside an other spin_lock_irq(&resource->req_lock);
- * Either links the page chain back to the global pool,
+/* Must not be used from irq, as that may deadlock: see drbd_alloc_pages().
+ * Either links the page chain back to the pool of free pages,
  * or returns all pages to the system. */
-static void drbd_free_pages(struct drbd_device *device, struct page *page)
+void drbd_free_page(struct drbd_transport *transport, struct page *page)
 {
-	struct page *tmp;
-	int i = 0;
+	struct drbd_connection *connection =
+		container_of(transport, struct drbd_connection, transport);
+	int order = compound_order(page), i = 0;
 
 	if (page == NULL)
 		return;
 
-	page_chain_for_each_safe(page, tmp) {
-		set_page_private(page, 0);
-		if (page_count(page) == 1)
-			mempool_free(page, &drbd_buffer_page_pool);
-		else
-			put_page(page);
-		i++;
-	}
-	i = atomic_sub_return(i, &device->pp_in_use);
+	if (page_count(page) == 1 && order == 0)
+		mempool_free(page, &drbd_buffer_page_pool);
+	else
+		put_page(page);
+
+	i = atomic_sub_return(1 << order, &connection->pp_in_use);
 	if (i < 0)
-		drbd_warn(device, "ASSERTION FAILED: pp_in_use: %d < 0\n", i);
+		drbd_warn(connection, "ASSERTION FAILED: pp_in_use: %d < 0\n", i);
 }
+EXPORT_SYMBOL(drbd_free_page);
 
-/*
-You need to hold the req_lock:
- _drbd_wait_ee_list_empty()
-
-You must not have the req_lock:
- drbd_free_peer_req()
- drbd_alloc_peer_req()
- drbd_free_peer_reqs()
- drbd_ee_fix_bhs()
- drbd_finish_peer_reqs()
- drbd_clear_done_ee()
- drbd_wait_ee_list_empty()
-*/
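+/* Allocate a bio for @peer_req; for reads, also allocate the data pages and add them to it. */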
+static int
+peer_req_alloc_bio(struct drbd_peer_request *peer_req, size_t size, gfp_t gfp_mask, blk_opf_t opf)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_transport *transport = &peer_device->connection->transport;
+	struct drbd_device *device = peer_device->device;
+	enum req_op op = opf & REQ_OP_MASK;
+	unsigned short nr_vecs;
+	struct page *page;
+	struct bio *bio;
+
+	nr_vecs = DIV_ROUND_UP(size, PAGE_SIZE);
+	if (nr_vecs > BIO_MAX_VECS)
+		nr_vecs = BIO_MAX_VECS;
+
+	if (drbd_insert_fault(device, DRBD_FAULT_BIO_TOO_SMALL))
+		nr_vecs = DIV_ROUND_UP(nr_vecs, 4);
+
+	bio = bio_alloc(device->ldev->backing_bdev, nr_vecs, opf, gfp_mask);
+	if (!bio)
+		return -ENOMEM;
+
+	bio_list_add(&peer_req->bios, bio);
+
+	if (op == REQ_OP_READ) {
+		while (size) {
+			int len;
+
+			page = drbd_alloc_pages(transport, gfp_mask, size);
+			if (!page)
+				goto out_free_pages;
+			len = min(PAGE_SIZE << compound_order(page), size);
+
+			len = drbd_bio_add_page(transport, &peer_req->bios, page, len, 0);
+			if (len < 0)
+				goto out_free_pages;
+			size -= len;
+		}
+		if (!mempool_is_saturated(&drbd_buffer_page_pool))
+			peer_req->flags |= EE_RELEASE_TO_MEMPOOL;
+	}
+	return 0;
+
+out_free_pages:
+	drbd_peer_req_strip_bio(peer_req);
+	return -ENOMEM;
+}
 
-/* normal: payload_size == request size (bi_size)
- * w_same: payload_size == logical_block_size
- * trim: payload_size == 0 */
+/**
+ * drbd_alloc_peer_req() - Allocate a drbd_peer_request
+ * @peer_device:      peer device object
+ * @gfp_mask:	      how to allocate and whether to loop until we succeed
+ * @size:	      size (normal I/O), logical_block_size (w_same), 0 (trim)
+ * @opf:              REQ_OP_READ, REQ_OP_WRITE, or REQ_NO_BIO
+ *
+ * For REQ_OP_READ, it allocates the peer_req with a BIO and populates it
+ * entirely with buffer pages. For REQ_NO_BIO, it allocates the peer_req
+ * without any BIO. Otherwise it allocates the peer_req with an empty BIO.
+ */
 struct drbd_peer_request *
-drbd_alloc_peer_req(struct drbd_peer_device *peer_device, u64 id, sector_t sector,
-		    unsigned int request_size, unsigned int payload_size, gfp_t gfp_mask) __must_hold(local)
+drbd_alloc_peer_req(struct drbd_peer_device *peer_device, gfp_t gfp_mask,
+		    size_t size, blk_opf_t opf)
 {
 	struct drbd_device *device = peer_device->device;
 	struct drbd_peer_request *peer_req;
-	struct page *page = NULL;
-	unsigned int nr_pages = PFN_UP(payload_size);
+	int err;
 
+	gfp_mask &= ~__GFP_HIGHMEM;
 	if (drbd_insert_fault(device, DRBD_FAULT_AL_EE))
 		return NULL;
 
-	peer_req = mempool_alloc(&drbd_ee_mempool, gfp_mask & ~__GFP_HIGHMEM);
+	peer_req = mempool_alloc(&drbd_ee_mempool, gfp_mask);
 	if (!peer_req) {
 		if (!(gfp_mask & __GFP_NOWARN))
 			drbd_err(device, "%s: allocation failed\n", __func__);
 		return NULL;
 	}
-
-	if (nr_pages) {
-		page = drbd_alloc_pages(peer_device, nr_pages,
-					gfpflags_allow_blocking(gfp_mask));
-		if (!page)
-			goto fail;
-		if (!mempool_is_saturated(&drbd_buffer_page_pool))
-			peer_req->flags |= EE_RELEASE_TO_MEMPOOL;
-	}
-
 	memset(peer_req, 0, sizeof(*peer_req));
+
 	INIT_LIST_HEAD(&peer_req->w.list);
 	drbd_clear_interval(&peer_req->i);
-	peer_req->i.size = request_size;
-	peer_req->i.sector = sector;
+	INIT_LIST_HEAD(&peer_req->recv_order);
 	peer_req->submit_jif = jiffies;
+	kref_get(&device->kref); /* this kref keeps the peer_req->peer_device object alive */
 	peer_req->peer_device = peer_device;
-	peer_req->pages = page;
-	/*
-	 * The block_id is opaque to the receiver.  It is not endianness
-	 * converted, and sent back to the sender unchanged.
-	 */
-	peer_req->block_id = id;
+	peer_req->block_id = (unsigned long) peer_req;
+
+	if (opf == REQ_NO_BIO)
+		return peer_req;
+
+	err = peer_req_alloc_bio(peer_req, size, gfp_mask, opf);
+	if (err)
+		goto out_free_peer_req;
 
 	return peer_req;
 
- fail:
+out_free_peer_req:
 	mempool_free(peer_req, &drbd_ee_mempool);
 	return NULL;
 }
 
-void drbd_free_peer_req(struct drbd_device *device, struct drbd_peer_request *peer_req)
+void drbd_free_peer_req(struct drbd_peer_request *peer_req)
 {
-	might_sleep();
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_connection *connection = peer_device->connection;
+
+	if (peer_req->flags & EE_ON_RECV_ORDER) {
+		spin_lock_irq(&connection->peer_reqs_lock);
+		if (peer_req->i.type == INTERVAL_RESYNC_WRITE)
+			drbd_list_del_resync_request(peer_req);
+		else
+			list_del(&peer_req->recv_order);
+		spin_unlock_irq(&connection->peer_reqs_lock);
+	}
+
 	if (peer_req->flags & EE_HAS_DIGEST)
 		kfree(peer_req->digest);
-	drbd_free_pages(device, peer_req->pages);
-	D_ASSERT(device, atomic_read(&peer_req->pending_bios) == 0);
-	D_ASSERT(device, drbd_interval_empty(&peer_req->i));
-	if (!expect(device, !(peer_req->flags & EE_CALL_AL_COMPLETE_IO))) {
-		peer_req->flags &= ~EE_CALL_AL_COMPLETE_IO;
-		drbd_al_complete_io(device, &peer_req->i);
-	}
+	D_ASSERT(peer_device, atomic_read(&peer_req->pending_bios) == 0);
+	D_ASSERT(peer_device, drbd_interval_empty(&peer_req->i));
+	drbd_peer_req_strip_bio(peer_req);
+	kref_put(&peer_device->device->kref, drbd_destroy_device);
 	mempool_free(peer_req, &drbd_ee_mempool);
 }
 
-int drbd_free_peer_reqs(struct drbd_device *device, struct list_head *list)
+int drbd_free_peer_reqs(struct drbd_connection *connection, struct list_head *list)
 {
 	LIST_HEAD(work_list);
 	struct drbd_peer_request *peer_req, *t;
 	int count = 0;
 
-	spin_lock_irq(&device->resource->req_lock);
+	spin_lock_irq(&connection->peer_reqs_lock);
 	list_splice_init(list, &work_list);
-	spin_unlock_irq(&device->resource->req_lock);
+	spin_unlock_irq(&connection->peer_reqs_lock);
 
 	list_for_each_entry_safe(peer_req, t, &work_list, w.list) {
-		drbd_free_peer_req(device, peer_req);
+		drbd_free_peer_req(peer_req);
 		count++;
 	}
 	return count;
@@ -257,90 +576,58 @@ int drbd_free_peer_reqs(struct drbd_device *device, struct list_head *list)
 /*
  * See also comments in _req_mod(,BARRIER_ACKED) and receive_Barrier.
  */
-static int drbd_finish_peer_reqs(struct drbd_device *device)
+static int drbd_finish_peer_reqs(struct drbd_connection *connection)
 {
 	LIST_HEAD(work_list);
 	struct drbd_peer_request *peer_req, *t;
 	int err = 0;
+	int n = 0;
 
-	spin_lock_irq(&device->resource->req_lock);
-	list_splice_init(&device->done_ee, &work_list);
-	spin_unlock_irq(&device->resource->req_lock);
+	spin_lock_irq(&connection->peer_reqs_lock);
+	list_splice_init(&connection->done_ee, &work_list);
+	spin_unlock_irq(&connection->peer_reqs_lock);
 
 	/* possible callbacks here:
-	 * e_end_block, and e_end_resync_block, e_send_superseded.
+	 * e_end_block, and e_end_resync_block.
 	 * all ignore the last argument.
 	 */
 	list_for_each_entry_safe(peer_req, t, &work_list, w.list) {
 		int err2;
 
+		++n;
 		/* list_del not necessary, next/prev members not touched */
+		/* The callback may free peer_req. */
 		err2 = peer_req->w.cb(&peer_req->w, !!err);
 		if (!err)
 			err = err2;
-		drbd_free_peer_req(device, peer_req);
 	}
-	wake_up(&device->ee_wait);
+	if (atomic_sub_and_test(n, &connection->done_ee_cnt))
+		wake_up(&connection->ee_wait);
 
 	return err;
 }
 
-static void _drbd_wait_ee_list_empty(struct drbd_device *device,
-				     struct list_head *head)
-{
-	DEFINE_WAIT(wait);
-
-	/* avoids spin_lock/unlock
-	 * and calling prepare_to_wait in the fast path */
-	while (!list_empty(head)) {
-		prepare_to_wait(&device->ee_wait, &wait, TASK_UNINTERRUPTIBLE);
-		spin_unlock_irq(&device->resource->req_lock);
-		io_schedule();
-		finish_wait(&device->ee_wait, &wait);
-		spin_lock_irq(&device->resource->req_lock);
-	}
-}
-
-static void drbd_wait_ee_list_empty(struct drbd_device *device,
-				    struct list_head *head)
-{
-	spin_lock_irq(&device->resource->req_lock);
-	_drbd_wait_ee_list_empty(device, head);
-	spin_unlock_irq(&device->resource->req_lock);
-}
-
-static int drbd_recv_short(struct socket *sock, void *buf, size_t size, int flags)
-{
-	struct kvec iov = {
-		.iov_base = buf,
-		.iov_len = size,
-	};
-	struct msghdr msg = {
-		.msg_flags = (flags ? flags : MSG_WAITALL | MSG_NOSIGNAL)
-	};
-	iov_iter_kvec(&msg.msg_iter, ITER_DEST, &iov, 1, size);
-	return sock_recvmsg(sock, &msg, msg.msg_flags);
-}
-
-static int drbd_recv(struct drbd_connection *connection, void *buf, size_t size)
+static int drbd_recv(struct drbd_connection *connection, void **buf, size_t size, int flags)
 {
+	struct drbd_transport_ops *tr_ops = &connection->transport.class->ops;
 	int rv;
 
-	rv = drbd_recv_short(connection->data.socket, buf, size, 0);
+	rv = tr_ops->recv(&connection->transport, DATA_STREAM, buf, size, flags);
 
 	if (rv < 0) {
 		if (rv == -ECONNRESET)
 			drbd_info(connection, "sock was reset by peer\n");
 		else if (rv != -ERESTARTSYS)
-			drbd_err(connection, "sock_recvmsg returned %d\n", rv);
+			drbd_info(connection, "sock_recvmsg returned %d\n", rv);
 	} else if (rv == 0) {
-		if (test_bit(DISCONNECT_SENT, &connection->flags)) {
+		if (test_bit(DISCONNECT_EXPECTED, &connection->flags)) {
 			long t;
 			rcu_read_lock();
-			t = rcu_dereference(connection->net_conf)->ping_timeo * HZ/10;
+			t = rcu_dereference(connection->transport.net_conf)->ping_timeo * HZ/10;
 			rcu_read_unlock();
 
-			t = wait_event_timeout(connection->ping_wait, connection->cstate < C_WF_REPORT_PARAMS, t);
+			t = wait_event_timeout(connection->resource->state_wait,
+					       connection->cstate[NOW] < C_CONNECTED, t);
 
 			if (t)
 				goto out;
@@ -349,17 +636,32 @@ static int drbd_recv(struct drbd_connection *connection, void *buf, size_t size)
 	}
 
 	if (rv != size)
-		conn_request_state(connection, NS(conn, C_BROKEN_PIPE), CS_HARD);
+		change_cstate(connection, C_BROKEN_PIPE, CS_HARD);
 
 out:
 	return rv;
 }
 
-static int drbd_recv_all(struct drbd_connection *connection, void *buf, size_t size)
+static int drbd_recv_into(struct drbd_connection *connection, void *buf, size_t size)
+{
+	int err;
+
+	err = drbd_recv(connection, &buf, size, CALLER_BUFFER);
+
+	if (err != size) {
+		if (err >= 0)
+			err = -EIO;
+	} else
+		err = 0;
+	return err;
+}
+
+static int drbd_recv_all(struct drbd_connection *connection, void **buf, size_t size)
 {
 	int err;
 
-	err = drbd_recv(connection, buf, size);
+	err = drbd_recv(connection, buf, size, 0);
+
 	if (err != size) {
 		if (err >= 0)
 			err = -EIO;
@@ -368,7 +670,7 @@ static int drbd_recv_all(struct drbd_connection *connection, void *buf, size_t s
 	return err;
 }
 
-static int drbd_recv_all_warn(struct drbd_connection *connection, void *buf, size_t size)
+static int drbd_recv_all_warn(struct drbd_connection *connection, void **buf, size_t size)
 {
 	int err;
 
@@ -378,628 +680,545 @@ static int drbd_recv_all_warn(struct drbd_connection *connection, void *buf, siz
 	return err;
 }
 
-/* quoting tcp(7):
- *   On individual connections, the socket buffer size must be set prior to the
- *   listen(2) or connect(2) calls in order to have it take effect.
- * This is our wrapper to do so.
- */
-static void drbd_setbufsize(struct socket *sock, unsigned int snd,
-		unsigned int rcv)
+static int drbd_send_disconnect(struct drbd_connection *connection)
 {
-	/* open coded SO_SNDBUF, SO_RCVBUF */
-	if (snd) {
-		sock->sk->sk_sndbuf = snd;
-		sock->sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
-	}
-	if (rcv) {
-		sock->sk->sk_rcvbuf = rcv;
-		sock->sk->sk_userlocks |= SOCK_RCVBUF_LOCK;
-	}
+	if (connection->agreed_pro_version < 118)
+		return 0;
+
+	if (!conn_prepare_command(connection, 0, DATA_STREAM))
+		return -EIO;
+	return send_command(connection, -1, P_DISCONNECT, DATA_STREAM);
 }
 
-static struct socket *drbd_try_connect(struct drbd_connection *connection)
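+/* Reset a stream's send buffer: the next command is assembled from the start of the buffer page. */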
+static void initialize_send_buffer(struct drbd_connection *connection, enum drbd_stream drbd_stream)
 {
-	const char *what;
-	struct socket *sock;
-	struct sockaddr_in6 src_in6;
-	struct sockaddr_in6 peer_in6;
-	struct net_conf *nc;
-	int err, peer_addr_len, my_addr_len;
-	int sndbuf_size, rcvbuf_size, connect_int;
-	int disconnect_on_error = 1;
-
-	rcu_read_lock();
-	nc = rcu_dereference(connection->net_conf);
-	if (!nc) {
-		rcu_read_unlock();
-		return NULL;
-	}
-	sndbuf_size = nc->sndbuf_size;
-	rcvbuf_size = nc->rcvbuf_size;
-	connect_int = nc->connect_int;
-	rcu_read_unlock();
-
-	my_addr_len = min_t(int, connection->my_addr_len, sizeof(src_in6));
-	memcpy(&src_in6, &connection->my_addr, my_addr_len);
+	struct drbd_send_buffer *sbuf = &connection->send_buffer[drbd_stream];
 
-	if (((struct sockaddr *)&connection->my_addr)->sa_family == AF_INET6)
-		src_in6.sin6_port = 0;
-	else
-		((struct sockaddr_in *)&src_in6)->sin_port = 0; /* AF_INET & AF_SCI */
+	sbuf->unsent =
+	sbuf->pos = page_address(sbuf->page);
+	sbuf->allocated_size = 0;
+	sbuf->additional_size = 0;
+}
 
-	peer_addr_len = min_t(int, connection->peer_addr_len, sizeof(src_in6));
-	memcpy(&peer_in6, &connection->peer_addr, peer_addr_len);
+/* Gets called if a connection is established, or if a new minor gets created
+   in a connection */
+int drbd_connected(struct drbd_peer_device *peer_device)
+{
+	struct drbd_device *device = peer_device->device;
+	u64 weak_nodes = 0;
+	int err;
 
-	what = "sock_create_kern";
-	err = sock_create_kern(&init_net, ((struct sockaddr *)&src_in6)->sa_family,
-			       SOCK_STREAM, IPPROTO_TCP, &sock);
-	if (err < 0) {
-		sock = NULL;
-		goto out;
-	}
+	atomic_set(&peer_device->packet_seq, 0);
+	peer_device->peer_seq = 0;
 
-	sock->sk->sk_rcvtimeo =
-	sock->sk->sk_sndtimeo = connect_int * HZ;
-	drbd_setbufsize(sock, sndbuf_size, rcvbuf_size);
-
-       /* explicitly bind to the configured IP as source IP
-	*  for the outgoing connections.
-	*  This is needed for multihomed hosts and to be
-	*  able to use lo: interfaces for drbd.
-	* Make sure to use 0 as port number, so linux selects
-	*  a free one dynamically.
-	*/
-	what = "bind before connect";
-	err = sock->ops->bind(sock, (struct sockaddr_unsized *) &src_in6, my_addr_len);
-	if (err < 0)
-		goto out;
+	if (device->resource->role[NOW] == R_PRIMARY)
+		weak_nodes = drbd_weak_nodes_device(device);
 
-	/* connect may fail, peer not yet available.
-	 * stay C_WF_CONNECTION, don't go Disconnecting! */
-	disconnect_on_error = 0;
-	what = "connect";
-	err = sock->ops->connect(sock, (struct sockaddr_unsized *) &peer_in6, peer_addr_len, 0);
+	err = drbd_send_sync_param(peer_device);
 
-out:
-	if (err < 0) {
-		if (sock) {
-			sock_release(sock);
-			sock = NULL;
-		}
-		switch (-err) {
-			/* timeout, busy, signal pending */
-		case ETIMEDOUT: case EAGAIN: case EINPROGRESS:
-		case EINTR: case ERESTARTSYS:
-			/* peer not (yet) available, network problem */
-		case ECONNREFUSED: case ENETUNREACH:
-		case EHOSTDOWN:    case EHOSTUNREACH:
-			disconnect_on_error = 0;
-			break;
-		default:
-			drbd_err(connection, "%s failed, err = %d\n", what, err);
-		}
-		if (disconnect_on_error)
-			conn_request_state(connection, NS(conn, C_DISCONNECTING), CS_HARD);
+	if (!err)
+		err = drbd_send_enable_replication_next(peer_device);
+	if (!err)
+		err = drbd_send_sizes(peer_device, 0, 0);
+	if (!err)
+		err = drbd_send_uuids(peer_device, 0, weak_nodes);
+	if (!err) {
+		set_bit(INITIAL_STATE_SENT, &peer_device->flags);
+		err = drbd_send_current_state(peer_device);
 	}
 
-	return sock;
+	clear_bit(USE_DEGR_WFC_T, &peer_device->flags);
+	clear_bit(RESIZE_PENDING, &peer_device->flags);
+	mod_timer(&device->request_timer, jiffies + HZ); /* just start it here. */
+	return err;
 }
 
-struct accept_wait_data {
-	struct drbd_connection *connection;
-	struct socket *s_listen;
-	struct completion door_bell;
-	void (*original_sk_state_change)(struct sock *sk);
-
-};
-
-static void drbd_incoming_connection(struct sock *sk)
+void conn_connect2(struct drbd_connection *connection)
 {
-	struct accept_wait_data *ad = sk->sk_user_data;
-	void (*state_change)(struct sock *sk);
+	struct drbd_peer_device *peer_device;
+	int vnr;
 
-	state_change = ad->original_sk_state_change;
-	if (sk->sk_state == TCP_ESTABLISHED)
-		complete(&ad->door_bell);
-	state_change(sk);
-}
+	rcu_read_lock();
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		struct drbd_device *device = peer_device->device;
 
-static int prepare_listen_socket(struct drbd_connection *connection, struct accept_wait_data *ad)
-{
-	int err, sndbuf_size, rcvbuf_size, my_addr_len;
-	struct sockaddr_in6 my_addr;
-	struct socket *s_listen;
-	struct net_conf *nc;
-	const char *what;
+		kref_get(&device->kref);
 
-	rcu_read_lock();
-	nc = rcu_dereference(connection->net_conf);
-	if (!nc) {
+		/* connection cannot go away: caller holds a reference. */
 		rcu_read_unlock();
-		return -EIO;
+
+		/* In the compatibility case with protocol version < 110, that
+		 * is DRBD 8.4, we do not hold uuid_sem while exchanging the
+		 * initial UUID and state packets. There is no need because
+		 * there are no other peers which could interfere. */
+		if (connection->agreed_pro_version >= 110) {
+			down_read_non_owner(&device->uuid_sem);
+			set_bit(HOLDING_UUID_READ_LOCK, &peer_device->flags);
+			/* Since drbd_connected() is also called from drbd_create_device(),
+			   acquire the lock here before calling drbd_connected(). */
+		}
+		drbd_connected(peer_device);
+
+		rcu_read_lock();
+		kref_put(&device->kref, drbd_destroy_device);
 	}
-	sndbuf_size = nc->sndbuf_size;
-	rcvbuf_size = nc->rcvbuf_size;
 	rcu_read_unlock();
+	drbd_uncork(connection, DATA_STREAM);
+}
 
-	my_addr_len = min_t(int, connection->my_addr_len, sizeof(struct sockaddr_in6));
-	memcpy(&my_addr, &connection->my_addr, my_addr_len);
+static bool initial_states_received(struct drbd_connection *connection)
+{
+	struct drbd_peer_device *peer_device;
+	int vnr;
+	bool rv = true;
 
-	what = "sock_create_kern";
-	err = sock_create_kern(&init_net, ((struct sockaddr *)&my_addr)->sa_family,
-			       SOCK_STREAM, IPPROTO_TCP, &s_listen);
-	if (err) {
-		s_listen = NULL;
-		goto out;
+	rcu_read_lock();
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		if (!test_bit(INITIAL_STATE_RECEIVED, &peer_device->flags)) {
+			rv = false;
+			break;
+		}
 	}
+	rcu_read_unlock();
 
-	s_listen->sk->sk_reuse = SK_CAN_REUSE; /* SO_REUSEADDR */
-	drbd_setbufsize(s_listen, sndbuf_size, rcvbuf_size);
-
-	what = "bind before listen";
-	err = s_listen->ops->bind(s_listen, (struct sockaddr_unsized *)&my_addr, my_addr_len);
-	if (err < 0)
-		goto out;
+	return rv;
+}
 
-	ad->s_listen = s_listen;
-	write_lock_bh(&s_listen->sk->sk_callback_lock);
-	ad->original_sk_state_change = s_listen->sk->sk_state_change;
-	s_listen->sk->sk_state_change = drbd_incoming_connection;
-	s_listen->sk->sk_user_data = ad;
-	write_unlock_bh(&s_listen->sk->sk_callback_lock);
+void wait_initial_states_received(struct drbd_connection *connection)
+{
+	struct net_conf *nc;
+	long timeout;
 
-	what = "listen";
-	err = s_listen->ops->listen(s_listen, 5);
-	if (err < 0)
-		goto out;
+	rcu_read_lock();
+	nc = rcu_dereference(connection->transport.net_conf);
+	timeout = nc->ping_timeo * HZ/10;
+	rcu_read_unlock();
+	wait_event_interruptible_timeout(connection->ee_wait,
+					 initial_states_received(connection),
+					 timeout);
+}
 
-	return 0;
-out:
-	if (s_listen)
-		sock_release(s_listen);
-	if (err < 0) {
-		if (err != -EAGAIN && err != -EINTR && err != -ERESTARTSYS) {
-			drbd_err(connection, "%s failed, err = %d\n", what, err);
-			conn_request_state(connection, NS(conn, C_DISCONNECTING), CS_HARD);
-		}
-	}
+void connect_timer_fn(struct timer_list *t)
+{
+	struct drbd_connection *connection = timer_container_of(connection, t, connect_timer);
 
-	return -EIO;
+	drbd_queue_work(&connection->sender_work, &connection->connect_timer_work);
 }
 
-static void unregister_state_change(struct sock *sk, struct accept_wait_data *ad)
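+/* (Re)arm the connect timer. If it was already pending, drop the extra
+ * connection reference that was held for it. */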
+static void arm_connect_timer(struct drbd_connection *connection, unsigned long expires)
 {
-	write_lock_bh(&sk->sk_callback_lock);
-	sk->sk_state_change = ad->original_sk_state_change;
-	sk->sk_user_data = NULL;
-	write_unlock_bh(&sk->sk_callback_lock);
+	bool was_pending = mod_timer(&connection->connect_timer, expires);
+
+	if (was_pending)
+		kref_put(&connection->kref, drbd_destroy_connection);
 }
 
-static struct socket *drbd_wait_for_connect(struct drbd_connection *connection, struct accept_wait_data *ad)
+static bool retry_by_rr_conflict(struct drbd_connection *connection)
 {
-	int timeo, connect_int, err = 0;
-	struct socket *s_estab = NULL;
+	enum drbd_after_sb_p rr_conflict;
 	struct net_conf *nc;
 
 	rcu_read_lock();
-	nc = rcu_dereference(connection->net_conf);
-	if (!nc) {
-		rcu_read_unlock();
-		return NULL;
-	}
-	connect_int = nc->connect_int;
+	nc = rcu_dereference(connection->transport.net_conf);
+	rr_conflict = nc->rr_conflict;
 	rcu_read_unlock();
 
-	timeo = connect_int * HZ;
-	/* 28.5% random jitter */
-	timeo += get_random_u32_below(2) ? timeo / 7 : -timeo / 7;
-
-	err = wait_for_completion_interruptible_timeout(&ad->door_bell, timeo);
-	if (err <= 0)
-		return NULL;
-
-	err = kernel_accept(ad->s_listen, &s_estab, 0);
-	if (err < 0) {
-		if (err != -EAGAIN && err != -EINTR && err != -ERESTARTSYS) {
-			drbd_err(connection, "accept failed, err = %d\n", err);
-			conn_request_state(connection, NS(conn, C_DISCONNECTING), CS_HARD);
-		}
-	}
-
-	if (s_estab)
-		unregister_state_change(s_estab->sk, ad);
-
-	return s_estab;
+	return rr_conflict == ASB_RETRY_CONNECT;
 }
 
-static int decode_header(struct drbd_connection *, void *, struct packet_info *);
-
-static int send_first_packet(struct drbd_connection *connection, struct drbd_socket *sock,
-			     enum drbd_packet cmd)
+static void apply_local_state_change(struct drbd_connection *connection, enum ao_op ao_op, bool force_demote)
 {
-	if (!conn_prepare_command(connection, sock))
-		return -EIO;
-	return conn_send_command(connection, sock, cmd, 0, NULL, 0);
-}
+	/* Although the connect failed, outdate local disks if we learn from the
+	 * handshake that the peer has more recent data */
+	struct drbd_resource *resource = connection->resource;
+	unsigned long irq_flags;
+	int vnr;
 
-static int receive_first_packet(struct drbd_connection *connection, struct socket *sock)
-{
-	unsigned int header_size = drbd_header_size(connection);
-	struct packet_info pi;
-	struct net_conf *nc;
-	int err;
+	mutex_lock(&resource->open_release);
+	begin_state_change(resource, &irq_flags, CS_HARD | (force_demote ? CS_FS_IGN_OPENERS : 0));
+	if (ao_op == OUTDATE_DISKS_AND_DISCONNECT)
+		__change_cstate(connection, C_DISCONNECTING);
+	if (resource->role[NOW] == R_SECONDARY ||
+	    (resource->cached_susp && (
+		    resource->res_opts.on_no_data == OND_IO_ERROR ||
+		    resource->res_opts.on_susp_primary_outdated == SPO_FORCE_SECONDARY))) {
+		/* One day we might relax the above condition to
+		 * resource->role[NOW] == R_SECONDARY || resource->cached_susp
+		 * Right now it is that way, because we do not offer a way to gracefully
+		 * get out of a Primary/Outdated state */
+		struct drbd_peer_device *peer_device;
+		bool set_fail_io = false;
 
-	rcu_read_lock();
-	nc = rcu_dereference(connection->net_conf);
-	if (!nc) {
-		rcu_read_unlock();
-		return -EIO;
-	}
-	sock->sk->sk_rcvtimeo = nc->ping_timeo * 4 * HZ / 10;
-	rcu_read_unlock();
+		idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+			enum drbd_repl_state r = peer_device->connect_state.conn;
+			struct drbd_device *device = peer_device->device;
 
-	err = drbd_recv_short(sock, connection->data.rbuf, header_size, 0);
-	if (err != header_size) {
-		if (err >= 0)
-			err = -EIO;
-		return err;
+			if (r == L_WF_BITMAP_T || r == L_SYNC_TARGET || r == L_PAUSED_SYNC_T)
+				__change_disk_state(device, D_OUTDATED);
+
+			if (device->open_cnt)
+				set_fail_io = true;
+		}
+		if (resource->role[NOW] == R_PRIMARY && force_demote) {
+			drbd_warn(connection, "Remote node has more recent data;"
+				  " force secondary!\n");
+			resource->role[NEW] = R_SECONDARY;
+			if (set_fail_io)
+				resource->fail_io[NEW] = true;
+		}
 	}
-	err = decode_header(connection, connection->data.rbuf, &pi);
-	if (err)
-		return err;
-	return pi.cmd;
+	end_state_change(resource, &irq_flags, "connect-failed");
+	mutex_unlock(&resource->open_release);
 }
 
-/**
- * drbd_socket_okay() - Free the socket if its connection is not okay
- * @sock:	pointer to the pointer to the socket.
- */
-static bool drbd_socket_okay(struct socket **sock)
+static int connect_work(struct drbd_work *work, int cancel)
 {
-	int rr;
-	char tb[4];
+	struct drbd_connection *connection =
+		container_of(work, struct drbd_connection, connect_timer_work);
+	struct drbd_resource *resource = connection->resource;
+	enum drbd_state_rv rv;
+	long t = resource->res_opts.auto_promote_timeout * HZ / 10;
+	bool retry = retry_by_rr_conflict(connection);
+	bool incompat_states, force_demote;
 
-	if (!*sock)
-		return false;
+	if (connection->cstate[NOW] != C_CONNECTING)
+		goto out_put;
 
-	rr = drbd_recv_short(*sock, tb, 4, MSG_DONTWAIT | MSG_PEEK);
+	if (connection->agreed_pro_version == 117)
+		wait_initial_states_received(connection);
 
-	if (rr > 0 || rr == -EAGAIN) {
-		return true;
+	do {
+		/* Carefully check if it is okay to do a two_phase_commit from sender context */
+		if (down_trylock(&resource->state_sem)) {
+			rv = SS_CONCURRENT_ST_CHG;
+			break;
+		}
+		rv = change_cstate_tag(connection, C_CONNECTED, CS_SERIALIZE |
+				   CS_ALREADY_SERIALIZED | CS_VERBOSE | CS_DONT_RETRY,
+				   "connected", NULL);
+		up(&resource->state_sem);
+		if (rv != SS_PRIMARY_READER)
+			break;
+
+		/* We have a connection established and the peer is primary. On our
+		   side there is a read-only opener, probably udev or some other scanner
+		   run after device creation. This short-lived read-only open currently
+		   prevents us from continuing. Better to retry after the read-only
+		   opener goes away. */
+
+		t = wait_event_interruptible_timeout(resource->state_wait,
+						     !drbd_open_ro_count(resource),
+						     t);
+	} while (t > 0);
+
+	incompat_states = (rv == SS_CW_FAILED_BY_PEER || rv == SS_TWO_PRIMARIES);
+	force_demote = resource->role[NOW] == R_PRIMARY &&
+		resource->res_opts.on_susp_primary_outdated == SPO_FORCE_SECONDARY;
+	retry = retry || force_demote;
+
+	if (rv >= SS_SUCCESS) {
+		if (connection->agreed_pro_version < 117)
+			conn_connect2(connection);
+	} else if (rv == SS_TIMEOUT || rv == SS_CONCURRENT_ST_CHG) {
+		if (connection->cstate[NOW] != C_CONNECTING)
+			goto out_put;
+		arm_connect_timer(connection, jiffies + HZ/20);
+		return 0; /* Return early. Keep the reference on the connection! */
+	} else if (rv == SS_HANDSHAKE_RETRY || (incompat_states && retry)) {
+		arm_connect_timer(connection, jiffies + HZ);
+		apply_local_state_change(connection, OUTDATE_DISKS, force_demote);
+		return 0; /* Keep reference */
+	} else if (rv == SS_HANDSHAKE_DISCONNECT || (incompat_states && !retry)) {
+		drbd_send_disconnect(connection);
+		apply_local_state_change(connection, OUTDATE_DISKS_AND_DISCONNECT, force_demote);
 	} else {
-		sock_release(*sock);
-		*sock = NULL;
-		return false;
+		drbd_info(connection, "Failure to connect: %s (%d); retrying\n",
+			  drbd_set_st_err_str(rv), rv);
+		change_cstate(connection, C_NETWORK_FAILURE, CS_HARD);
 	}
-}
-
-static bool connection_established(struct drbd_connection *connection,
-				   struct socket **sock1,
-				   struct socket **sock2)
-{
-	struct net_conf *nc;
-	int timeout;
-	bool ok;
 
-	if (!*sock1 || !*sock2)
-		return false;
-
-	rcu_read_lock();
-	nc = rcu_dereference(connection->net_conf);
-	timeout = (nc->sock_check_timeo ?: nc->ping_timeo) * HZ / 10;
-	rcu_read_unlock();
-	schedule_timeout_interruptible(timeout);
-
-	ok = drbd_socket_okay(sock1);
-	ok = drbd_socket_okay(sock2) && ok;
-
-	return ok;
+ out_put:
+	kref_put(&connection->kref, drbd_destroy_connection);
+	return 0;
 }
 
-/* Gets called if a connection is established, or if a new minor gets created
-   in a connection */
-int drbd_connected(struct drbd_peer_device *peer_device)
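+/* Three-step transport connect: prepare and finish run under conf_update,
+ * the (possibly long-running) connect itself does not. */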
+static int drbd_transport_connect(struct drbd_connection *connection)
 {
-	struct drbd_device *device = peer_device->device;
-	int err;
-
-	atomic_set(&device->packet_seq, 0);
-	device->peer_seq = 0;
+	struct drbd_transport *transport = &connection->transport;
+	struct drbd_resource *resource = connection->resource;
+	int err = 0;
 
-	device->state_mutex = peer_device->connection->agreed_pro_version < 100 ?
-		&peer_device->connection->cstate_mutex :
-		&device->own_state_mutex;
+	mutex_lock(&resource->conf_update);
+	err = transport->class->ops.prepare_connect(transport);
+	mutex_unlock(&resource->conf_update);
 
-	err = drbd_send_sync_param(peer_device);
-	if (!err)
-		err = drbd_send_sizes(peer_device, 0, 0);
-	if (!err)
-		err = drbd_send_uuids(peer_device);
 	if (!err)
-		err = drbd_send_current_state(peer_device);
-	clear_bit(USE_DEGR_WFC_T, &device->flags);
-	clear_bit(RESIZE_PENDING, &device->flags);
-	atomic_set(&device->ap_in_flight, 0);
-	mod_timer(&device->request_timer, jiffies + HZ); /* just start it here. */
+		err = transport->class->ops.connect(transport);
+
+	mutex_lock(&resource->conf_update);
+	transport->class->ops.finish_connect(transport);
+	mutex_unlock(&resource->conf_update);
+
 	return err;
 }
 
 /*
- * return values:
- *   1 yes, we have a valid connection
- *   0 oops, did not work out, please try again
- *  -1 peer talks different language,
- *     no point in trying again, please go standalone.
- *  -2 We do not have a network config...
+ * Returns true if we have a valid connection.
  */
-static int conn_connect(struct drbd_connection *connection)
+static bool conn_connect(struct drbd_connection *connection)
 {
-	struct drbd_socket sock, msock;
+	struct drbd_transport *transport = &connection->transport;
+	struct drbd_resource *resource = connection->resource;
+	int ping_timeo, ping_int, h, err, vnr;
 	struct drbd_peer_device *peer_device;
+	enum drbd_stream stream;
 	struct net_conf *nc;
-	int vnr, timeout, h;
-	bool discard_my_data, ok;
-	enum drbd_state_rv rv;
-	struct accept_wait_data ad = {
-		.connection = connection,
-		.door_bell = COMPLETION_INITIALIZER_ONSTACK(ad.door_bell),
-	};
-
-	clear_bit(DISCONNECT_SENT, &connection->flags);
-	if (conn_request_state(connection, NS(conn, C_WF_CONNECTION), CS_VERBOSE) < SS_SUCCESS)
-		return -2;
-
-	mutex_init(&sock.mutex);
-	sock.sbuf = connection->data.sbuf;
-	sock.rbuf = connection->data.rbuf;
-	sock.socket = NULL;
-	mutex_init(&msock.mutex);
-	msock.sbuf = connection->meta.sbuf;
-	msock.rbuf = connection->meta.rbuf;
-	msock.socket = NULL;
-
-	/* Assume that the peer only understands protocol 80 until we know better.  */
-	connection->agreed_pro_version = 80;
-
-	if (prepare_listen_socket(connection, &ad))
-		return 0;
+	bool discard_my_data;
+	bool have_mutex;
+	bool no_addr = false;
+
+start:
+	have_mutex = false;
+	clear_bit(PING_PENDING, &connection->flags);
+	clear_bit(DISCONNECT_EXPECTED, &connection->flags);
+	if (change_cstate_tag(connection, C_CONNECTING, CS_VERBOSE, "connecting", NULL)
+			< SS_SUCCESS) {
+		/* We do not have a network config. */
+		return false;
+	}
 
-	do {
-		struct socket *s;
-
-		s = drbd_try_connect(connection);
-		if (s) {
-			if (!sock.socket) {
-				sock.socket = s;
-				send_first_packet(connection, &sock, P_INITIAL_DATA);
-			} else if (!msock.socket) {
-				clear_bit(RESOLVE_CONFLICTS, &connection->flags);
-				msock.socket = s;
-				send_first_packet(connection, &msock, P_INITIAL_META);
-			} else {
-				drbd_err(connection, "Logic error in conn_connect()\n");
-				goto out_release_sockets;
-			}
-		}
+	/* Assume that the peer only understands our minimum supported
+	 * protocol version until we know better. */
+	connection->agreed_pro_version = drbd_protocol_version_min;
 
-		if (connection_established(connection, &sock.socket, &msock.socket))
-			break;
+	err = drbd_transport_connect(connection);
+	if (err == -EAGAIN) {
+		enum drbd_conn_state cstate;
+		read_lock_irq(&resource->state_rwlock); /* See commit message */
+		cstate = connection->cstate[NOW];
+		read_unlock_irq(&resource->state_rwlock);
+		if (cstate == C_DISCONNECTING)
+			return false;
+		goto retry;
+	} else if (err == -EADDRNOTAVAIL) {
+		struct net_conf *nc;
+		int connect_int;
+		long t;
 
-retry:
-		s = drbd_wait_for_connect(connection, &ad);
-		if (s) {
-			int fp = receive_first_packet(connection, s);
-			drbd_socket_okay(&sock.socket);
-			drbd_socket_okay(&msock.socket);
-			switch (fp) {
-			case P_INITIAL_DATA:
-				if (sock.socket) {
-					drbd_warn(connection, "initial packet S crossed\n");
-					sock_release(sock.socket);
-					sock.socket = s;
-					goto randomize;
-				}
-				sock.socket = s;
-				break;
-			case P_INITIAL_META:
-				set_bit(RESOLVE_CONFLICTS, &connection->flags);
-				if (msock.socket) {
-					drbd_warn(connection, "initial packet M crossed\n");
-					sock_release(msock.socket);
-					msock.socket = s;
-					goto randomize;
-				}
-				msock.socket = s;
-				break;
-			default:
-				drbd_warn(connection, "Error receiving initial packet\n");
-				sock_release(s);
-randomize:
-				if (get_random_u32_below(2))
-					goto retry;
-			}
-		}
+		rcu_read_lock();
+		nc = rcu_dereference(transport->net_conf);
+		connect_int = nc ? nc->connect_int : 10;
+		rcu_read_unlock();
 
-		if (connection->cstate <= C_DISCONNECTING)
-			goto out_release_sockets;
-		if (signal_pending(current)) {
-			flush_signals(current);
-			smp_rmb();
-			if (get_t_state(&connection->receiver) == EXITING)
-				goto out_release_sockets;
+		if (!no_addr) {
+			drbd_warn(connection,
+				  "Configured local address not found, retrying every %d sec, "
+				  "err=%d\n", connect_int, err);
+			no_addr = true;
 		}
 
-		ok = connection_established(connection, &sock.socket, &msock.socket);
-	} while (!ok);
-
-	if (ad.s_listen)
-		sock_release(ad.s_listen);
-
-	sock.socket->sk->sk_reuse = SK_CAN_REUSE; /* SO_REUSEADDR */
-	msock.socket->sk->sk_reuse = SK_CAN_REUSE; /* SO_REUSEADDR */
-
-	sock.socket->sk->sk_allocation = GFP_NOIO;
-	msock.socket->sk->sk_allocation = GFP_NOIO;
-
-	sock.socket->sk->sk_use_task_frag = false;
-	msock.socket->sk->sk_use_task_frag = false;
+		t = schedule_timeout_interruptible(connect_int * HZ);
+		if (t || connection->cstate[NOW] == C_DISCONNECTING)
+			return false;
+		goto start;
+	} else if (err == -EDESTADDRREQ) {
+		/*
+		 * No destination address, we cannot possibly make a connection.
+		 * Maybe a resource was partially left over due to some other bug?
+		 * Either way, abort here and go StandAlone to prevent reconnection.
+		 */
+		drbd_err(connection, "No destination address, err=%d\n", err);
+		change_cstate_tag(connection, C_STANDALONE, CS_HARD, "no-dest-addr", NULL);
+		return false;
+	} else if (err < 0) {
+		drbd_warn(connection, "Failed to initiate connection, err=%d\n", err);
+		goto abort;
+	}
 
-	sock.socket->sk->sk_priority = TC_PRIO_INTERACTIVE_BULK;
-	msock.socket->sk->sk_priority = TC_PRIO_INTERACTIVE;
+	connection->reassemble_buffer.avail = 0;
 
-	/* NOT YET ...
-	 * sock.socket->sk->sk_sndtimeo = connection->net_conf->timeout*HZ/10;
-	 * sock.socket->sk->sk_rcvtimeo = MAX_SCHEDULE_TIMEOUT;
-	 * first set it to the P_CONNECTION_FEATURES timeout,
-	 * which we set to 4x the configured ping_timeout. */
 	rcu_read_lock();
-	nc = rcu_dereference(connection->net_conf);
-
-	sock.socket->sk->sk_sndtimeo =
-	sock.socket->sk->sk_rcvtimeo = nc->ping_timeo*4*HZ/10;
-
-	msock.socket->sk->sk_rcvtimeo = nc->ping_int*HZ;
-	timeout = nc->timeout * HZ / 10;
-	discard_my_data = nc->discard_my_data;
+	nc = rcu_dereference(connection->transport.net_conf);
+	ping_timeo = nc->ping_timeo;
+	ping_int = nc->ping_int;
 	rcu_read_unlock();
 
-	msock.socket->sk->sk_sndtimeo = timeout;
-
-	/* we don't want delays.
-	 * we use TCP_CORK where appropriate, though */
-	tcp_sock_set_nodelay(sock.socket->sk);
-	tcp_sock_set_nodelay(msock.socket->sk);
+	/* Make sure we are "uncorked"; otherwise we risk timeouts in case
+	 * this is a reconnect and we had been corked before. */
+	for (stream = DATA_STREAM; stream <= CONTROL_STREAM; stream++) {
+		initialize_send_buffer(connection, stream);
+		drbd_uncork(connection, stream);
+	}
 
-	connection->data.socket = sock.socket;
-	connection->meta.socket = msock.socket;
-	connection->last_received = jiffies;
+	/* Make sure the handshake happens without interference from other threads,
+	 * or the challenge-response authentication could be garbled. */
+	mutex_lock(&connection->mutex[DATA_STREAM]);
+	have_mutex = true;
+	transport->class->ops.set_rcvtimeo(transport, DATA_STREAM, ping_timeo * 4 * HZ/10);
+	transport->class->ops.set_rcvtimeo(transport, CONTROL_STREAM, ping_int * HZ);
 
 	h = drbd_do_features(connection);
-	if (h <= 0)
-		return h;
+	if (h < 0)
+		goto abort;
+	if (h == 0)
+		goto retry;
 
 	if (connection->cram_hmac_tfm) {
-		/* drbd_request_state(device, NS(conn, WFAuth)); */
 		switch (drbd_do_auth(connection)) {
 		case -1:
 			drbd_err(connection, "Authentication of peer failed\n");
-			return -1;
+			goto abort;
 		case 0:
 			drbd_err(connection, "Authentication of peer failed, trying again.\n");
-			return 0;
+			goto retry;
 		}
 	}
 
-	connection->data.socket->sk->sk_sndtimeo = timeout;
-	connection->data.socket->sk->sk_rcvtimeo = MAX_SCHEDULE_TIMEOUT;
-
-	if (drbd_send_protocol(connection) == -EOPNOTSUPP)
-		return -1;
-
-	/* Prevent a race between resync-handshake and
-	 * being promoted to Primary.
-	 *
-	 * Grab and release the state mutex, so we know that any current
-	 * drbd_set_role() is finished, and any incoming drbd_set_role
-	 * will see the STATE_SENT flag, and wait for it to be cleared.
-	 */
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
-		mutex_lock(peer_device->device->state_mutex);
-
-	/* avoid a race with conn_request_state( C_DISCONNECTING ) */
-	spin_lock_irq(&connection->resource->req_lock);
-	set_bit(STATE_SENT, &connection->flags);
-	spin_unlock_irq(&connection->resource->req_lock);
+	discard_my_data = test_bit(CONN_DISCARD_MY_DATA, &connection->flags);
 
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
-		mutex_unlock(peer_device->device->state_mutex);
+	if (__drbd_send_protocol(connection, P_PROTOCOL) == -EOPNOTSUPP)
+		goto abort;
 
 	rcu_read_lock();
 	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-		struct drbd_device *device = peer_device->device;
-		kref_get(&device->kref);
-		rcu_read_unlock();
-
+		set_bit(REPLICATION_NEXT, &peer_device->flags);
 		if (discard_my_data)
-			set_bit(DISCARD_MY_DATA, &device->flags);
+			set_bit(DISCARD_MY_DATA, &peer_device->flags);
 		else
-			clear_bit(DISCARD_MY_DATA, &device->flags);
-
-		drbd_connected(peer_device);
-		kref_put(&device->kref, drbd_destroy_device);
-		rcu_read_lock();
+			clear_bit(DISCARD_MY_DATA, &peer_device->flags);
 	}
 	rcu_read_unlock();
+	mutex_unlock(&connection->mutex[DATA_STREAM]);
+	have_mutex = false;
 
-	rv = conn_request_state(connection, NS(conn, C_WF_REPORT_PARAMS), CS_VERBOSE);
-	if (rv < SS_SUCCESS || connection->cstate != C_WF_REPORT_PARAMS) {
-		clear_bit(STATE_SENT, &connection->flags);
-		return 0;
-	}
-
-	drbd_thread_start(&connection->ack_receiver);
-	/* opencoded create_singlethread_workqueue(),
-	 * to be able to use format string arguments */
 	connection->ack_sender =
-		alloc_ordered_workqueue("drbd_as_%s", WQ_MEM_RECLAIM, connection->resource->name);
+		alloc_ordered_workqueue("drbd_as_%s", WQ_MEM_RECLAIM, resource->name);
 	if (!connection->ack_sender) {
 		drbd_err(connection, "Failed to create workqueue ack_sender\n");
-		return 0;
+		schedule_timeout_uninterruptible(HZ);
+		goto retry;
 	}
 
-	mutex_lock(&connection->resource->conf_update);
-	/* The discard_my_data flag is a single-shot modifier to the next
-	 * connection attempt, the handshake of which is now well underway.
-	 * No need for rcu style copying of the whole struct
-	 * just to clear a single value. */
-	connection->net_conf->discard_my_data = 0;
-	mutex_unlock(&connection->resource->conf_update);
+	atomic_set(&connection->ap_in_flight, 0);
+	atomic_set(&connection->rs_in_flight, 0);
 
-	return h;
+	if (connection->agreed_pro_version >= 110) {
+		/* Allow 10 times the ping_timeo for two-phase commits. That is
+		 * 5 seconds by default. The unit of ping_timeo is tenths of a
+		 * second. */
+		transport->class->ops.set_rcvtimeo(transport, DATA_STREAM, ping_timeo * HZ);
 
-out_release_sockets:
-	if (ad.s_listen)
-		sock_release(ad.s_listen);
-	if (sock.socket)
-		sock_release(sock.socket);
-	if (msock.socket)
-		sock_release(msock.socket);
-	return -1;
+		if (connection->agreed_pro_version == 117)
+			conn_connect2(connection);
+
+		if (resource->res_opts.node_id < connection->peer_node_id) {
+			kref_get(&connection->kref);
+			connection->connect_timer_work.cb = connect_work;
+			arm_connect_timer(connection, jiffies);
+		}
+	} else {
+		enum drbd_state_rv rv;
+		rv = change_cstate(connection, C_CONNECTED,
+				   CS_VERBOSE | CS_WAIT_COMPLETE | CS_SERIALIZE | CS_LOCAL_ONLY);
+		if (rv < SS_SUCCESS || connection->cstate[NOW] != C_CONNECTED)
+			goto retry;
+		conn_connect2(connection);
+	}
+
+	clear_bit(PING_TIMEOUT_ACTIVE, &connection->flags);
+	return true;
+
+retry:
+	if (have_mutex)
+		mutex_unlock(&connection->mutex[DATA_STREAM]);
+	conn_disconnect(connection);
+	schedule_timeout_interruptible(HZ);
+	goto start;
+
+abort:
+	if (have_mutex)
+		mutex_unlock(&connection->mutex[DATA_STREAM]);
+	change_cstate(connection, C_DISCONNECTING, CS_HARD);
+	return false;
 }
 
-static int decode_header(struct drbd_connection *connection, void *header, struct packet_info *pi)
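+/* Peek at the magic bytes to determine how large the packet header is;
+ * full validation of the header happens later in __decode_header(). */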
+static unsigned int decode_header_size(const void *header)
 {
-	unsigned int header_size = drbd_header_size(connection);
+	const u32 first_dword = *(u32 *)header;
+	const u16 first_word = *(u16 *)header;
 
-	if (header_size == sizeof(struct p_header100) &&
-	    *(__be32 *)header == cpu_to_be32(DRBD_MAGIC_100)) {
-		struct p_header100 *h = header;
-		if (h->pad != 0) {
-			drbd_err(connection, "Header padding is not zero\n");
-			return -EINVAL;
-		}
-		pi->vnr = be16_to_cpu(h->volume);
+	return first_dword == cpu_to_be32(DRBD_MAGIC_100) ? sizeof(struct p_header100) :
+		first_word == cpu_to_be16(DRBD_MAGIC_BIG) ? sizeof(struct p_header95) :
+		sizeof(struct p_header80);
+}
+
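+/* Returns the header version (100, 95 or 80) on success, -ENOENT if the
+ * h100 padding is not zero, or -EINVAL for an unknown magic value. */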
+static int __decode_header(const void *header, struct packet_info *pi)
+{
+	const u32 first_dword = *(u32 *)header;
+	const u16 first_word = *(u16 *)header;
+	unsigned int header_size;
+	int header_version;
+
+	if (first_dword == cpu_to_be32(DRBD_MAGIC_100)) {
+		const struct p_header100 *h = header;
+		u16 vnr = be16_to_cpu(h->volume);
+
+		if (h->pad != 0)
+			return -ENOENT;
+
+		pi->vnr = vnr == ((u16) 0xFFFF) ? -1 : vnr;
 		pi->cmd = be16_to_cpu(h->command);
 		pi->size = be32_to_cpu(h->length);
-	} else if (header_size == sizeof(struct p_header95) &&
-		   *(__be16 *)header == cpu_to_be16(DRBD_MAGIC_BIG)) {
-		struct p_header95 *h = header;
+		header_size = sizeof(*h);
+		header_version = 100;
+	} else if (first_word == cpu_to_be16(DRBD_MAGIC_BIG)) {
+		const struct p_header95 *h = header;
+
 		pi->cmd = be16_to_cpu(h->command);
 		pi->size = be32_to_cpu(h->length);
 		pi->vnr = 0;
-	} else if (header_size == sizeof(struct p_header80) &&
-		   *(__be32 *)header == cpu_to_be32(DRBD_MAGIC)) {
-		struct p_header80 *h = header;
+		header_size = sizeof(*h);
+		header_version = 95;
+	} else if (first_dword == cpu_to_be32(DRBD_MAGIC)) {
+		const struct p_header80 *h = header;
+
 		pi->cmd = be16_to_cpu(h->command);
 		pi->size = be16_to_cpu(h->length);
 		pi->vnr = 0;
+		header_size = sizeof(*h);
+		header_version = 80;
 	} else {
-		drbd_err(connection, "Wrong magic value 0x%08x in protocol version %d\n",
-			 be32_to_cpu(*(__be32 *)header),
-			 connection->agreed_pro_version);
 		return -EINVAL;
 	}
-	pi->data = header + header_size;
+
+	pi->data = (void *)(header + header_size); /* casting away 'const'! */
+	return header_version;
+}
+
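+/* An h100 header requires protocol >= 100; h95 is accepted below 100,
+ * h80 only below 95. */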
+static bool header_version_good(int header_version, int protocol_version)
+{
+	switch (header_version) {
+	case 100: return protocol_version >= 100;
+	case 95: return protocol_version < 100;
+	case 80: return protocol_version < 95;
+	default: return false;
+	}
+}
+
+static int decode_header(struct drbd_connection *connection, const void *header,
+			 struct packet_info *pi)
+{
+	const int agreed_pro_version = connection->agreed_pro_version;
+	int header_version = __decode_header(header, pi);
+
+	if (header_version == -ENOENT) {
+		drbd_err(connection, "Header padding is not zero\n");
+		return -EINVAL;
+	} else if (header_version < 0 || !header_version_good(header_version, agreed_pro_version)) {
+		drbd_err(connection, "Wrong magic value 0x%08x in protocol version %d (header version %d)\n",
+			 be32_to_cpu(*(__be32 *)header), agreed_pro_version, header_version);
+		return -EINVAL;
+	}
 	return 0;
 }
 
@@ -1013,49 +1232,58 @@ static void drbd_unplug_all_devices(struct drbd_connection *connection)
 
 static int drbd_recv_header(struct drbd_connection *connection, struct packet_info *pi)
 {
-	void *buffer = connection->data.rbuf;
+	void *buffer;
 	int err;
 
-	err = drbd_recv_all_warn(connection, buffer, drbd_header_size(connection));
+	err = drbd_recv_all_warn(connection, &buffer, drbd_header_size(connection));
 	if (err)
 		return err;
 
 	err = decode_header(connection, buffer, pi);
-	connection->last_received = jiffies;
 
 	return err;
 }
 
 static int drbd_recv_header_maybe_unplug(struct drbd_connection *connection, struct packet_info *pi)
 {
-	void *buffer = connection->data.rbuf;
+	struct drbd_transport_ops *tr_ops = &connection->transport.class->ops;
 	unsigned int size = drbd_header_size(connection);
+	void *buffer;
 	int err;
 
-	err = drbd_recv_short(connection->data.socket, buffer, size, MSG_NOSIGNAL|MSG_DONTWAIT);
+	err = tr_ops->recv(&connection->transport, DATA_STREAM, &buffer,
+			   size, MSG_NOSIGNAL | MSG_DONTWAIT);
 	if (err != size) {
+		int rflags = 0;
+
 		/* If we have nothing in the receive buffer now, to reduce
 		 * application latency, try to drain the backend queues as
 		 * quickly as possible, and let remote TCP know what we have
 		 * received so far. */
 		if (err == -EAGAIN) {
-			tcp_sock_set_quickack(connection->data.socket->sk, 2);
+			tr_ops->hint(&connection->transport, DATA_STREAM, QUICKACK);
 			drbd_unplug_all_devices(connection);
-		}
-		if (err > 0) {
-			buffer += err;
+		} else if (err > 0) {
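+			/* Partial header: keep the bytes already received and
+			 * fetch only the remainder into the grown buffer. */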
 			size -= err;
+			rflags |= GROW_BUFFER;
 		}
-		err = drbd_recv_all_warn(connection, buffer, size);
+
+		err = drbd_recv(connection, &buffer, size, rflags);
+		if (err != size) {
+			if (err >= 0)
+				err = -EIO;
+		} else {
+			err = 0;
+		}
+
 		if (err)
 			return err;
 	}
 
-	err = decode_header(connection, connection->data.rbuf, pi);
-	connection->last_received = jiffies;
+	err = decode_header(connection, buffer, pi);
 
 	return err;
 }
+
 /* This is blkdev_issue_flush, but asynchronous.
  * We want to submit to all component volumes in parallel,
  * then wait for all completions.
@@ -1076,9 +1304,11 @@ static void one_flush_endio(struct bio *bio)
 	struct drbd_device *device = octx->device;
 	struct issue_flush_context *ctx = octx->ctx;
 
-	if (bio->bi_status) {
-		ctx->error = blk_status_to_errno(bio->bi_status);
-		drbd_info(device, "local disk FLUSH FAILED with status %d\n", bio->bi_status);
+	blk_status_t status = bio->bi_status;
+
+	if (status) {
+		ctx->error = blk_status_to_errno(status);
+		drbd_info(device, "local disk FLUSH FAILED with status %d\n", status);
 	}
 	kfree(octx);
 	bio_put(bio);
@@ -1094,7 +1324,7 @@ static void one_flush_endio(struct bio *bio)
 static void submit_one_flush(struct drbd_device *device, struct issue_flush_context *ctx)
 {
 	struct bio *bio = bio_alloc(device->ldev->backing_bdev, 0,
-				    REQ_OP_WRITE | REQ_PREFLUSH, GFP_NOIO);
+			REQ_OP_WRITE | REQ_PREFLUSH, GFP_NOIO);
 	struct one_flush_context *octx = kmalloc_obj(*octx, GFP_NOIO);
 
 	if (!octx) {
@@ -1121,10 +1351,12 @@ static void submit_one_flush(struct drbd_device *device, struct issue_flush_cont
 	submit_bio(bio);
 }
 
-static void drbd_flush(struct drbd_connection *connection)
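+/* Flush all attached backing devices; if an epoch is given, signal
+ * EV_BARRIER_DONE on it afterwards. */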
+static enum finish_epoch drbd_flush_after_epoch(struct drbd_connection *connection, struct drbd_epoch *epoch)
 {
-	if (connection->resource->write_ordering >= WO_BDEV_FLUSH) {
-		struct drbd_peer_device *peer_device;
+	struct drbd_resource *resource = connection->resource;
+
+	if (resource->write_ordering >= WO_BDEV_FLUSH) {
+		struct drbd_device *device;
 		struct issue_flush_context ctx;
 		int vnr;
 
@@ -1133,9 +1365,7 @@ static void drbd_flush(struct drbd_connection *connection)
 		init_completion(&ctx.done);
 
 		rcu_read_lock();
-		idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-			struct drbd_device *device = peer_device->device;
-
+		idr_for_each_entry(&resource->devices, device, vnr) {
 			if (!get_ldev(device))
 				continue;
 			kref_get(&device->kref);
@@ -1160,6 +1390,88 @@ static void drbd_flush(struct drbd_connection *connection)
 			drbd_bump_write_ordering(connection->resource, NULL, WO_DRAIN_IO);
 		}
 	}
+
+	/* If called before sending P_CONFIRM_STABLE, we don't have the epoch
+	 * (and must not finish it yet, anyway). */
+	if (epoch == NULL)
+		return FE_STILL_LIVE;
+	return drbd_may_finish_epoch(connection, epoch, EV_BARRIER_DONE);
+}
+
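+/* Worker callback: issue the flush that was deferred for this epoch, then
+ * drop the reference (EV_PUT) taken when the flush work was scheduled. */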
+static int w_flush(struct drbd_work *w, int cancel)
+{
+	struct flush_work *fw = container_of(w, struct flush_work, w);
+	struct drbd_epoch *epoch = fw->epoch;
+	struct drbd_connection *connection = epoch->connection;
+
+	kfree(fw);
+
+	if (!test_and_set_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &epoch->flags))
+		drbd_flush_after_epoch(connection, epoch);
+
+	drbd_may_finish_epoch(connection, epoch, EV_PUT |
+			      (connection->cstate[NOW] < C_CONNECTED ? EV_CLEANUP : 0));
+
+	return 0;
+}
+
+static void drbd_send_b_ack(struct drbd_connection *connection, u32 barrier_nr, u32 set_size)
+{
+	struct p_barrier_ack *p;
+
+	if (connection->cstate[NOW] < C_CONNECTED)
+		return;
+
+	p = conn_prepare_command(connection, sizeof(*p), CONTROL_STREAM);
+	if (!p)
+		return;
+	p->barrier = barrier_nr;
+	p->set_size = cpu_to_be32(set_size);
+	send_command(connection, -1, P_BARRIER_ACK, CONTROL_STREAM);
+}
+
+static void drbd_send_confirm_stable(struct drbd_peer_request *peer_req)
+{
+	struct drbd_connection *connection = peer_req->peer_device->connection;
+	struct drbd_epoch *epoch = peer_req->epoch;
+	struct drbd_peer_request *oldest, *youngest;
+	struct p_confirm_stable *p;
+	int count;
+
+	if (connection->cstate[NOW] < C_CONNECTED)
+		return;
+
+	/* peer_req is not on stable storage yet, but it is the only request
+	 * in this epoch. Nothing to confirm; just wait for the normal
+	 * barrier_ack and peer_ack to do their work. */
+	oldest = epoch->oldest_unconfirmed_peer_req;
+	if (oldest == peer_req)
+		return;
+
+	p = conn_prepare_command(connection, sizeof(*p), CONTROL_STREAM);
+	if (!p)
+		return;
+
+	/* peer_req has not been added to connection->peer_requests yet, so
+	 * connection->peer_requests.prev is the youngest request that should
+	 * now be on stable storage. */
+	spin_lock_irq(&connection->peer_reqs_lock);
+	youngest = list_entry(connection->peer_requests.prev, struct drbd_peer_request, recv_order);
+	spin_unlock_irq(&connection->peer_reqs_lock);
+
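+	/* Confirm everything in this epoch except what was confirmed earlier
+	 * and peer_req itself, which is not stable yet. */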
+	count = atomic_read(&epoch->epoch_size) - atomic_read(&epoch->confirmed) - 1;
+	atomic_add(count, &epoch->confirmed);
+	epoch->oldest_unconfirmed_peer_req = peer_req;
+
+	D_ASSERT(connection, oldest->epoch == youngest->epoch);
+	D_ASSERT(connection, count > 0);
+
+	p->oldest_block_id = oldest->block_id;
+	p->youngest_block_id = youngest->block_id;
+	p->set_size = cpu_to_be32(count);
+	p->pad = 0;
+
+	send_command(connection, -1, P_CONFIRM_STABLE, CONTROL_STREAM);
 }
 
 /**
@@ -1172,13 +1484,16 @@ static enum finish_epoch drbd_may_finish_epoch(struct drbd_connection *connectio
 					       struct drbd_epoch *epoch,
 					       enum epoch_event ev)
 {
-	int epoch_size;
+	int finish, epoch_size;
 	struct drbd_epoch *next_epoch;
+	int schedule_flush = 0;
 	enum finish_epoch rv = FE_STILL_LIVE;
+	struct drbd_resource *resource = connection->resource;
 
 	spin_lock(&connection->epoch_lock);
 	do {
 		next_epoch = NULL;
+		finish = 0;
 
 		epoch_size = atomic_read(&epoch->epoch_size);
 
@@ -1188,6 +1503,16 @@ static enum finish_epoch drbd_may_finish_epoch(struct drbd_connection *connectio
 			break;
 		case EV_GOT_BARRIER_NR:
 			set_bit(DE_HAVE_BARRIER_NUMBER, &epoch->flags);
+
+			/* Special case: If we just switched from WO_BIO_BARRIER to
+			   WO_BDEV_FLUSH we should not finish the current epoch */
+			if (test_bit(DE_CONTAINS_A_BARRIER, &epoch->flags) && epoch_size == 1 &&
+			    resource->write_ordering != WO_BIO_BARRIER &&
+			    epoch == connection->current_epoch)
+				clear_bit(DE_CONTAINS_A_BARRIER, &epoch->flags);
+			break;
+		case EV_BARRIER_DONE:
+			set_bit(DE_BARRIER_IN_NEXT_EPOCH_DONE, &epoch->flags);
 			break;
 		case EV_BECAME_LAST:
 			/* nothing to do*/
@@ -1196,18 +1521,30 @@ static enum finish_epoch drbd_may_finish_epoch(struct drbd_connection *connectio
 
 		if (epoch_size != 0 &&
 		    atomic_read(&epoch->active) == 0 &&
-		    (test_bit(DE_HAVE_BARRIER_NUMBER, &epoch->flags) || ev & EV_CLEANUP)) {
+		    (test_bit(DE_HAVE_BARRIER_NUMBER, &epoch->flags) || ev & EV_CLEANUP) &&
+		    epoch->list.prev == &connection->current_epoch->list &&
+		    !test_bit(DE_IS_FINISHING, &epoch->flags)) {
+			/* Nearly all conditions are met to finish that epoch... */
+			if (test_bit(DE_BARRIER_IN_NEXT_EPOCH_DONE, &epoch->flags) ||
+			    resource->write_ordering == WO_NONE ||
+			    (epoch_size == 1 && test_bit(DE_CONTAINS_A_BARRIER, &epoch->flags)) ||
+			    ev & EV_CLEANUP) {
+				finish = 1;
+				set_bit(DE_IS_FINISHING, &epoch->flags);
+			} else if (!test_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &epoch->flags) &&
+				 resource->write_ordering == WO_BIO_BARRIER) {
+				atomic_inc(&epoch->active);
+				schedule_flush = 1;
+			}
+		}
+		if (finish) {
 			if (!(ev & EV_CLEANUP)) {
+				/* adjust for nr requests already confirmed via P_CONFIRM_STABLE, if any. */
+				epoch_size -= atomic_read(&epoch->confirmed);
 				spin_unlock(&connection->epoch_lock);
 				drbd_send_b_ack(epoch->connection, epoch->barrier_nr, epoch_size);
 				spin_lock(&connection->epoch_lock);
 			}
-#if 0
-			/* FIXME: dec unacked on connection, once we have
-			 * something to count pending connection packets in. */
-			if (test_bit(DE_HAVE_BARRIER_NUMBER, &epoch->flags))
-				dec_unacked(epoch->connection);
-#endif
 
 			if (connection->current_epoch != epoch) {
 				next_epoch = list_entry(epoch->list.next, struct drbd_epoch, list);
@@ -1219,9 +1556,11 @@ static enum finish_epoch drbd_may_finish_epoch(struct drbd_connection *connectio
 				if (rv == FE_STILL_LIVE)
 					rv = FE_DESTROYED;
 			} else {
+				epoch->oldest_unconfirmed_peer_req = NULL;
 				epoch->flags = 0;
 				atomic_set(&epoch->epoch_size, 0);
-				/* atomic_set(&epoch->active, 0); is already zero */
+				atomic_set(&epoch->confirmed, 0);
+				/* atomic_set(&epoch->active, 0); is already zero */
 				if (rv == FE_STILL_LIVE)
 					rv = FE_RECYCLED;
 			}
@@ -1235,6 +1574,22 @@ static enum finish_epoch drbd_may_finish_epoch(struct drbd_connection *connectio
 
 	spin_unlock(&connection->epoch_lock);
 
+	if (schedule_flush) {
+		struct flush_work *fw;
+		fw = kmalloc_obj(*fw, GFP_ATOMIC);
+		if (fw) {
+			fw->w.cb = w_flush;
+			fw->epoch = epoch;
+			drbd_queue_work(&resource->work, &fw->w);
+		} else {
+			drbd_warn(resource, "Could not kmalloc a flush_work obj\n");
+			set_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &epoch->flags);
+			/* This is not real recursion, it goes only one level deep */
+			drbd_may_finish_epoch(connection, epoch, EV_BARRIER_DONE);
+			drbd_may_finish_epoch(connection, epoch, EV_PUT);
+		}
+	}
+
 	return rv;
 }
 
@@ -1245,6 +1600,8 @@ max_allowed_wo(struct drbd_backing_dev *bdev, enum write_ordering_e wo)
 
 	dc = rcu_dereference(bdev->disk_conf);
 
+	if (wo == WO_BIO_BARRIER && !dc->disk_barrier)
+		wo = WO_BDEV_FLUSH;
 	if (wo == WO_BDEV_FLUSH && !dc->disk_flushes)
 		wo = WO_DRAIN_IO;
 	if (wo == WO_DRAIN_IO && !dc->disk_drain)
@@ -1262,18 +1619,22 @@ void drbd_bump_write_ordering(struct drbd_resource *resource, struct drbd_backin
 {
 	struct drbd_device *device;
 	enum write_ordering_e pwo;
-	int vnr;
+	int vnr, i = 0;
 	static char *write_ordering_str[] = {
 		[WO_NONE] = "none",
 		[WO_DRAIN_IO] = "drain",
 		[WO_BDEV_FLUSH] = "flush",
+		[WO_BIO_BARRIER] = "barrier",
 	};
 
 	pwo = resource->write_ordering;
-	if (wo != WO_BDEV_FLUSH)
+	if (wo != WO_BIO_BARRIER)
 		wo = min(pwo, wo);
 	rcu_read_lock();
 	idr_for_each_entry(&resource->devices, device, vnr) {
+		if (i++ == 1 && wo == WO_BIO_BARRIER)
+			wo = WO_BDEV_FLUSH; /* WO_BIO_BARRIER does not handle multiple volumes */
+
 		if (get_ldev(device)) {
 			wo = max_allowed_wo(device->ldev, wo);
 			if (device->ldev == bdev)
@@ -1288,21 +1649,11 @@ void drbd_bump_write_ordering(struct drbd_resource *resource, struct drbd_backin
 	rcu_read_unlock();
 
 	resource->write_ordering = wo;
-	if (pwo != resource->write_ordering || wo == WO_BDEV_FLUSH)
+	if (pwo != resource->write_ordering || wo == WO_BIO_BARRIER)
 		drbd_info(resource, "Method to ensure write ordering: %s\n", write_ordering_str[resource->write_ordering]);
 }
 
 /*
- * Mapping "discard" to ZEROOUT with UNMAP does not work for us:
- * Drivers have to "announce" q->limits.max_write_zeroes_sectors, or it
- * will directly go to fallback mode, submitting normal writes, and
- * never even try to UNMAP.
- *
- * And dm-thin does not do this (yet), mostly because in general it has
- * to assume that "skip_block_zeroing" is set.  See also:
- * https://www.mail-archive.com/dm-devel%40redhat.com/msg07965.html
- * https://www.redhat.com/archives/dm-devel/2018-January/msg00271.html
- *
  * We *may* ignore the discard-zeroes-data setting, if so configured.
  *
  * Assumption is that this "discard_zeroes_data=0" is only because the backend
@@ -1325,6 +1676,7 @@ void drbd_bump_write_ordering(struct drbd_resource *resource, struct drbd_backin
 int drbd_issue_discard_or_zero_out(struct drbd_device *device, sector_t start, unsigned int nr_sectors, int flags)
 {
 	struct block_device *bdev = device->ldev->backing_bdev;
+	struct request_queue *q = bdev_get_queue(bdev);
 	sector_t tmp, nr;
 	unsigned int max_discard_sectors, granularity;
 	int alignment;
@@ -1334,7 +1686,7 @@ int drbd_issue_discard_or_zero_out(struct drbd_device *device, sector_t start, u
 		goto zero_out;
 
 	/* Zero-sector (unknown) and one-sector granularities are the same.  */
-	granularity = max(bdev_discard_granularity(bdev) >> 9, 1U);
+	granularity = max(q->limits.discard_granularity >> 9, 1U);
 	alignment = (bdev_discard_alignment(bdev) >> 9) % granularity;
 
 	max_discard_sectors = min(bdev_max_discard_sectors(bdev), (1U << 22));
@@ -1361,8 +1713,7 @@ int drbd_issue_discard_or_zero_out(struct drbd_device *device, sector_t start, u
 		start = tmp;
 	}
 	while (nr_sectors >= max_discard_sectors) {
-		err |= blkdev_issue_discard(bdev, start, max_discard_sectors,
-					    GFP_NOIO);
+		err |= blkdev_issue_discard(bdev, start, max_discard_sectors, GFP_NOIO);
 		nr_sectors -= max_discard_sectors;
 		start += max_discard_sectors;
 	}
@@ -1419,11 +1770,11 @@ static void drbd_issue_peer_discard_or_zero_out(struct drbd_device *device, stru
 
 static int peer_request_fault_type(struct drbd_peer_request *peer_req)
 {
-	if (peer_req_op(peer_req) == REQ_OP_READ) {
-		return peer_req->flags & EE_APPLICATION ?
+	if (bio_op(peer_req->bios.head) == REQ_OP_READ) {
+		return drbd_interval_is_application(&peer_req->i) ?
 			DRBD_FAULT_DT_RD : DRBD_FAULT_RS_RD;
 	} else {
-		return peer_req->flags & EE_APPLICATION ?
+		return drbd_interval_is_application(&peer_req->i) ?
 			DRBD_FAULT_DT_WR : DRBD_FAULT_RS_WR;
 	}
 }
@@ -1441,18 +1792,23 @@ static int peer_request_fault_type(struct drbd_peer_request *peer_req)
  *  single page to an empty bio (which should never happen and likely indicates
  *  that the lower level IO stack is in some way broken). This has been observed
  *  on certain Xen deployments.
+ *
+ *  When this function returns 0, it "consumes" an ldev reference; the
+ *  reference is released when the request completes.
  */
 /* TODO allocate from our own bio_set. */
 int drbd_submit_peer_request(struct drbd_peer_request *peer_req)
 {
 	struct drbd_device *device = peer_req->peer_device->device;
-	struct bio *bios = NULL;
-	struct bio *bio;
-	struct page *page = peer_req->pages;
+	struct bio *bio, *next_bio;
 	sector_t sector = peer_req->i.sector;
-	unsigned int data_size = peer_req->i.size;
-	unsigned int n_bios = 0;
-	unsigned int nr_pages = PFN_UP(data_size);
+	struct bio_list bios;
+	struct page *page;
+	int fault_type, err, nr_bios = 0;
+
+	if (peer_req->flags & EE_SET_OUT_OF_SYNC)
+		drbd_set_out_of_sync(peer_req->peer_device,
+				sector, peer_req->i.size);
 
 	/* TRIM/DISCARD: for now, always use the helper function
 	 * blkdev_issue_zeroout(..., discard=true).
@@ -1460,27 +1816,18 @@ int drbd_submit_peer_request(struct drbd_peer_request *peer_req)
 	 * Correctness first, performance later.  Next step is to code an
 	 * asynchronous variant of the same.
 	 */
-	if (peer_req->flags & (EE_TRIM | EE_ZEROOUT)) {
-		/* wait for all pending IO completions, before we start
-		 * zeroing things out. */
-		conn_wait_active_ee_empty(peer_req->peer_device->connection);
-		/* add it to the active list now,
-		 * so we can find it to present it in debugfs */
+	if (peer_req->flags & (EE_TRIM | EE_ZEROOUT)) {
 		peer_req->submit_jif = jiffies;
-		peer_req->flags |= EE_SUBMITTED;
-
-		/* If this was a resync request from receive_rs_deallocated(),
-		 * it is already on the sync_ee list */
-		if (list_empty(&peer_req->w.list)) {
-			spin_lock_irq(&device->resource->req_lock);
-			list_add_tail(&peer_req->w.list, &device->active_ee);
-			spin_unlock_irq(&device->resource->req_lock);
-		}
 
+		/* ldev_safe: a peer_req holds an ldev reference */
 		drbd_issue_peer_discard_or_zero_out(device, peer_req);
 		return 0;
 	}
 
+	fault_type = peer_request_fault_type(peer_req);
+	bios = peer_req->bios;
+	bio_list_init(&peer_req->bios);
+
 	/* In most cases, we will only need one bio.  But in case the lower
 	 * level restrictions happen to be different at this offset on this
 	 * side than those of the sending peer, we may need to submit the
@@ -1489,90 +1836,167 @@ int drbd_submit_peer_request(struct drbd_peer_request *peer_req)
 	 * Plain bio_alloc is good enough here, this is no DRBD internally
 	 * generated bio, but a bio allocated on behalf of the peer.
 	 */
-next_bio:
 	/* _DISCARD, _WRITE_ZEROES handled above.
 	 * REQ_OP_FLUSH (empty flush) not expected,
 	 * should have been mapped to a "drbd protocol barrier".
 	 * REQ_OP_SECURE_ERASE: I don't see how we could ever support that.
 	 */
-	if (!(peer_req_op(peer_req) == REQ_OP_WRITE ||
-				peer_req_op(peer_req) == REQ_OP_READ)) {
-		drbd_err(device, "Invalid bio op received: 0x%x\n", peer_req->opf);
-		return -EINVAL;
+	bio = bio_list_peek(&bios);
+	if (!(bio_op(bio) == REQ_OP_WRITE || bio_op(bio) == REQ_OP_READ)) {
+		drbd_err(device, "Invalid bio op received: 0x%x\n", bio->bi_opf);
+		err = -EINVAL;
+		goto fail;
 	}
 
-	bio = bio_alloc(device->ldev->backing_bdev, nr_pages, peer_req->opf, GFP_NOIO);
-	/* > peer_req->i.sector, unless this is the first bio */
-	bio->bi_iter.bi_sector = sector;
-	bio->bi_private = peer_req;
-	bio->bi_end_io = drbd_peer_request_endio;
+	/* we special-case some flags in the multi-bio case, see below
+	 * (REQ_PREFLUSH, or BIO_RW_BARRIER in older kernels) */
 
-	bio->bi_next = bios;
-	bios = bio;
-	++n_bios;
+	/* Get reference for the first bio */
+	atomic_inc(&peer_req->pending_bios);
 
-	page_chain_for_each(page) {
-		unsigned len = min_t(unsigned, data_size, PAGE_SIZE);
-		if (!bio_add_page(bio, page, len, 0))
-			goto next_bio;
-		data_size -= len;
-		sector += len >> 9;
-		--nr_pages;
-	}
-	D_ASSERT(device, data_size == 0);
-	D_ASSERT(device, page == NULL);
-
-	atomic_set(&peer_req->pending_bios, n_bios);
 	/* for debugfs: update timestamp, mark as submitted */
 	peer_req->submit_jif = jiffies;
-	peer_req->flags |= EE_SUBMITTED;
-	do {
-		bio = bios;
-		bios = bios->bi_next;
-		bio->bi_next = NULL;
+	while ((bio = bio_list_pop(&bios))) {
+		/* bio_list_pop() clears bio->bi_next; that field is
+		 * kernel-private during I/O and only used temporarily by
+		 * DRBD before submission and after completion.
+		 */
+		bio->bi_iter.bi_sector = sector;
+		bio->bi_private = peer_req;
+		bio->bi_end_io = drbd_peer_request_endio;
 
-		drbd_submit_bio_noacct(device, peer_request_fault_type(peer_req), bio);
-	} while (bios);
-	return 0;
-}
+		/* Store sector and size in the first struct page for restoration after I/O. */
+		page = bio->bi_io_vec[0].bv_page;
+		page->private = sector - peer_req->i.sector;
+		page->lru.next = (void *)(unsigned long)bio->bi_iter.bi_size;
 
-static void drbd_remove_epoch_entry_interval(struct drbd_device *device,
-					     struct drbd_peer_request *peer_req)
-{
-	struct drbd_interval *i = &peer_req->i;
+		sector += bio_sectors(bio);
 
-	drbd_remove_interval(&device->write_requests, i);
-	drbd_clear_interval(i);
+		nr_bios++;
 
-	/* Wake up any processes waiting for this peer request to complete.  */
-	if (i->waiting)
-		wake_up(&device->misc_wait);
-}
+		/* Get reference for the next bio (if any) now to prevent premature completion */
+		next_bio = bio_list_peek(&bios);
+		if (next_bio)
+			atomic_inc(&peer_req->pending_bios);
+		drbd_submit_bio_noacct(device, fault_type, bio);
 
-static void conn_wait_active_ee_empty(struct drbd_connection *connection)
-{
-	struct drbd_peer_device *peer_device;
-	int vnr;
+		/* strip off REQ_PREFLUSH,
+		 * unless it is the first or last bio */
+		if (next_bio && next_bio->bi_next)
+			next_bio->bi_opf &= ~REQ_PREFLUSH;
+	}
+	if (nr_bios > 1)
+		device->multi_bio_cnt++;
 
-	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-		struct drbd_device *device = peer_device->device;
+	return 0;
 
-		kref_get(&device->kref);
-		rcu_read_unlock();
-		drbd_wait_ee_list_empty(device, &device->active_ee);
-		kref_put(&device->kref, drbd_destroy_device);
-		rcu_read_lock();
+fail:
+	while ((bio = bio_list_pop(&bios)))
+		bio_put(bio);
+	return err;
+}
+
+void drbd_remove_peer_req_interval(struct drbd_peer_request *peer_req)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_device *device = peer_device->device;
+	struct drbd_interval *i = &peer_req->i;
+	unsigned long flags;
+
+	spin_lock_irqsave(&device->interval_lock, flags);
+	D_ASSERT(device, !drbd_interval_empty(i));
+	drbd_remove_interval(&device->requests, i);
+	drbd_clear_interval(i);
+	if (!drbd_interval_is_verify(&peer_req->i))
+		drbd_release_conflicts(device, i);
+	spin_unlock_irqrestore(&device->interval_lock, flags);
+}
+
+/**
+ * w_e_reissue() - Worker callback; Resubmit a bio
+ * @w:		work object.
+ * @cancel:	The connection will be closed anyways (unused in this callback)
+ */
+int w_e_reissue(struct drbd_work *w, int cancel)
+{
+	struct drbd_peer_request *peer_req =
+		container_of(w, struct drbd_peer_request, w);
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_connection *connection = peer_device->connection;
+	struct drbd_device *device = peer_device->device;
+	int err;
+	/* We leave DE_CONTAINS_A_BARRIER and EE_IS_BARRIER in place
+	   (and DE_BARRIER_IN_NEXT_EPOCH_ISSUED in the previous epoch)
+	   so that we can finish that epoch in drbd_may_finish_epoch().
+	   That is necessary if we already have a long chain of epochs
+	   before we realize that BARRIER is actually not supported. */
+
+	/* As long as the -ENOTSUPP on the barrier is reported immediately,
+	   this will never trigger. If it is reported late, we will just
+	   print that warning and continue correctly for all future requests
+	   with WO_BDEV_FLUSH. */
+	if (previous_epoch(connection, peer_req->epoch))
+		drbd_warn(device, "Write ordering was not enforced (one-time event)\n");
+
+	/* we still have a local reference,
+	 * get_ldev was done in receive_Data. */
+
+	peer_req->w.cb = e_end_block;
+	err = drbd_submit_peer_request(peer_req);
+	switch (err) {
+	case -ENOMEM:
+		peer_req->w.cb = w_e_reissue;
+		drbd_queue_work(&connection->sender_work,
+				&peer_req->w);
+		/* retry later */
+		fallthrough;
+	case 0:
+		/* keep worker happy and connection up */
+		return 0;
+
+	case -ENOSPC:
+		/* no other error expected, but anyway: */
+	default:
+		/* forget the object,
+		 * and cause a "Network failure" */
+		drbd_remove_peer_req_interval(peer_req);
+		drbd_al_complete_io(device, &peer_req->i);
+		drbd_may_finish_epoch(connection, peer_req->epoch, EV_PUT | EV_CLEANUP);
+		drbd_free_peer_req(peer_req);
+		drbd_err(device, "submit failed, triggering re-connect\n");
+		return err;
 	}
-	rcu_read_unlock();
+}
+
+static void conn_wait_done_ee_empty_or_disconnect(struct drbd_connection *connection)
+{
+	wait_event(connection->ee_wait,
+		atomic_read(&connection->done_ee_cnt) == 0 ||
+		connection->cstate[NOW] < C_CONNECTED);
+}
+
+static void conn_wait_active_ee_empty_or_disconnect(struct drbd_connection *connection)
+{
+	if (atomic_read(&connection->active_ee_cnt) == 0)
+		return;
+
+	drbd_unplug_all_devices(connection);
+
+	wait_event(connection->ee_wait,
+		atomic_read(&connection->active_ee_cnt) == 0 ||
+		connection->cstate[NOW] < C_CONNECTED);
 }
 
 static int receive_Barrier(struct drbd_connection *connection, struct packet_info *pi)
 {
-	int rv;
+	struct drbd_transport_ops *tr_ops = &connection->transport.class->ops;
+	int rv, issue_flush;
 	struct p_barrier *p = pi->data;
 	struct drbd_epoch *epoch;
 
+	tr_ops->hint(&connection->transport, DATA_STREAM, QUICKACK);
+	drbd_unplug_all_devices(connection);
+
 	/* FIXME these are unacked on connection,
 	 * not a specific (peer)device.
 	 */
@@ -1586,41 +2010,48 @@ static int receive_Barrier(struct drbd_connection *connection, struct packet_inf
 	 * Therefore we must send the barrier_ack after the barrier request was
 	 * completed. */
 	switch (connection->resource->write_ordering) {
+	case WO_BIO_BARRIER:
 	case WO_NONE:
 		if (rv == FE_RECYCLED)
 			return 0;
-
-		/* receiver context, in the writeout path of the other node.
-		 * avoid potential distributed deadlock */
-		epoch = kmalloc_obj(struct drbd_epoch, GFP_NOIO);
-		if (epoch)
-			break;
-		else
-			drbd_warn(connection, "Allocation of an epoch failed, slowing down\n");
-		fallthrough;
+		break;
 
 	case WO_BDEV_FLUSH:
 	case WO_DRAIN_IO:
-		conn_wait_active_ee_empty(connection);
-		drbd_flush(connection);
+		if (rv == FE_STILL_LIVE) {
+			set_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &connection->current_epoch->flags);
+			conn_wait_active_ee_empty_or_disconnect(connection);
+			rv = drbd_flush_after_epoch(connection, connection->current_epoch);
+		}
+		if (rv == FE_RECYCLED)
+			return 0;
 
-		if (atomic_read(&connection->current_epoch->epoch_size)) {
-			epoch = kmalloc_obj(struct drbd_epoch, GFP_NOIO);
-			if (epoch)
-				break;
+		/*
+		 * The ack_sender will send out all the ACKs and barrier ACKs,
+		 * since all EEs have been added to done_ee. We need to provide
+		 * a new epoch object for the EEs that come in soon.
+		 */
+		break;
+	}
+
+	/* receiver context, in the writeout path of the other node.
+	 * avoid potential distributed deadlock */
+	epoch = kzalloc_obj(struct drbd_epoch, GFP_NOIO);
+	if (!epoch) {
+		drbd_warn(connection, "Allocation of an epoch failed, slowing down\n");
+		issue_flush = !test_and_set_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &connection->current_epoch->flags);
+		conn_wait_active_ee_empty_or_disconnect(connection);
+		if (issue_flush) {
+			rv = drbd_flush_after_epoch(connection, connection->current_epoch);
+			if (rv == FE_RECYCLED)
+				return 0;
 		}
 
+		conn_wait_done_ee_empty_or_disconnect(connection);
+
 		return 0;
-	default:
-		drbd_err(connection, "Strangeness in connection->write_ordering %d\n",
-			 connection->resource->write_ordering);
-		return -EIO;
 	}
 
-	epoch->flags = 0;
-	atomic_set(&epoch->epoch_size, 0);
-	atomic_set(&epoch->active, 0);
-
 	spin_lock(&connection->epoch_lock);
 	if (atomic_read(&connection->current_epoch->epoch_size)) {
 		list_add(&epoch->list, &connection->current_epoch->list);
@@ -1635,15 +2066,25 @@ static int receive_Barrier(struct drbd_connection *connection, struct packet_inf
 	return 0;
 }
 
-/* quick wrapper in case payload size != request_size (write same) */
-static void drbd_csum_ee_size(struct crypto_shash *h,
-			      struct drbd_peer_request *r, void *d,
-			      unsigned int payload_size)
+/* pi->data points into some recv buffer, which may be
+ * re-used/recycled/overwritten by the next receive operation.
+ * (read_in_block via recv_resync_read) */
+static void p_req_detail_from_pi(struct drbd_connection *connection,
+		struct drbd_peer_request_details *d, struct packet_info *pi)
 {
-	unsigned int tmp = r->i.size;
-	r->i.size = payload_size;
-	drbd_csum_ee(h, r, d);
-	r->i.size = tmp;
+	struct p_trim *p = pi->data;
+	bool is_trim_or_zeroes = pi->cmd == P_TRIM || pi->cmd == P_ZEROES;
+	unsigned int digest_size =
+		pi->cmd != P_TRIM && connection->peer_integrity_tfm ?
+		crypto_shash_digestsize(connection->peer_integrity_tfm) : 0;
+
+	d->sector = be64_to_cpu(p->p_data.sector);
+	d->block_id = p->p_data.block_id;
+	d->peer_seq = be32_to_cpu(p->p_data.seq_num);
+	d->dp_flags = be32_to_cpu(p->p_data.dp_flags);
+	d->length = pi->size;
+	d->bi_size = is_trim_or_zeroes ? be32_to_cpu(p->size) : pi->size - digest_size;
+	d->digest_size = digest_size;
 }
 
 /* used from receive_RSDataReply (recv_resync_read)
@@ -1655,140 +2096,103 @@ static void drbd_csum_ee_size(struct crypto_shash *h,
  * both trim and write same have the bi_size ("data len to be affected")
  * as extra argument in the packet header.
  */
-static struct drbd_peer_request *
-read_in_block(struct drbd_peer_device *peer_device, u64 id, sector_t sector,
-	      struct packet_info *pi) __must_hold(local)
+static int
+read_in_block(struct drbd_peer_request *peer_req, struct drbd_peer_request_details *d)
 {
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
 	struct drbd_device *device = peer_device->device;
-	const sector_t capacity = get_capacity(device->vdisk);
-	struct drbd_peer_request *peer_req;
-	struct page *page;
-	int digest_size, err;
-	unsigned int data_size = pi->size, ds;
-	void *dig_in = peer_device->connection->int_dig_in;
-	void *dig_vv = peer_device->connection->int_dig_vv;
-	unsigned long *data;
-	struct p_trim *trim = (pi->cmd == P_TRIM) ? pi->data : NULL;
-	struct p_trim *zeroes = (pi->cmd == P_ZEROES) ? pi->data : NULL;
-
-	digest_size = 0;
-	if (!trim && peer_device->connection->peer_integrity_tfm) {
-		digest_size = crypto_shash_digestsize(peer_device->connection->peer_integrity_tfm);
-		/*
-		 * FIXME: Receive the incoming digest into the receive buffer
-		 *	  here, together with its struct p_data?
-		 */
-		err = drbd_recv_all_warn(peer_device->connection, dig_in, digest_size);
+	struct drbd_connection *connection = peer_device->connection;
+	const u64 capacity = get_capacity(device->vdisk);
+	void *dig_in = connection->int_dig_in;
+	void *dig_vv = connection->int_dig_vv;
+	struct drbd_transport *transport = &connection->transport;
+	struct drbd_transport_ops *tr_ops = &transport->class->ops;
+	int size, err;
+
+	if (d->digest_size) {
+		err = drbd_recv_into(connection, dig_in, d->digest_size);
 		if (err)
-			return NULL;
-		data_size -= digest_size;
-	}
-
-	/* assume request_size == data_size, but special case trim. */
-	ds = data_size;
-	if (trim) {
-		if (!expect(peer_device, data_size == 0))
-			return NULL;
-		ds = be32_to_cpu(trim->size);
-	} else if (zeroes) {
-		if (!expect(peer_device, data_size == 0))
-			return NULL;
-		ds = be32_to_cpu(zeroes->size);
+			return err;
 	}
 
-	if (!expect(peer_device, IS_ALIGNED(ds, 512)))
-		return NULL;
-	if (trim || zeroes) {
-		if (!expect(peer_device, ds <= (DRBD_MAX_BBIO_SECTORS << 9)))
-			return NULL;
-	} else if (!expect(peer_device, ds <= DRBD_MAX_BIO_SIZE))
-		return NULL;
+	if (!expect(peer_device, IS_ALIGNED(d->bi_size, 512)))
+		return -EINVAL;
+	/* The WSAME mechanism was removed in Linux 5.18,
+	 * and subsequently from drbd.
+	 * In theory, a "modern" drbd will never advertise support for
+	 * WRITE_SAME, so a compliant peer should never send a DP_WSAME
+	 * packet. If we receive one anyway, that's a protocol error.
+	 */
+	if (!expect(peer_device, (d->dp_flags & DP_WSAME) == 0))
+		return -EINVAL;
+	if (d->dp_flags & (DP_DISCARD|DP_ZEROES)) {
+		if (!expect(peer_device, d->bi_size <= (DRBD_MAX_BBIO_SECTORS << 9)))
+			return -EINVAL;
+	} else if (!expect(peer_device, d->bi_size <= DRBD_MAX_BIO_SIZE))
+		return -EINVAL;
 
-	/* even though we trust out peer,
+	/* even though we trust our peer,
 	 * we sometimes have to double check. */
-	if (sector + (ds>>9) > capacity) {
+	if (d->sector + (d->bi_size>>9) > capacity) {
 		drbd_err(device, "request from peer beyond end of local disk: "
 			"capacity: %llus < sector: %llus + size: %u\n",
-			(unsigned long long)capacity,
-			(unsigned long long)sector, ds);
-		return NULL;
+			capacity, d->sector, d->bi_size);
+		return -EINVAL;
 	}
 
-	/* GFP_NOIO, because we must not cause arbitrary write-out: in a DRBD
-	 * "criss-cross" setup, that might cause write-out on some other DRBD,
-	 * which in turn might block on the other node at this very place.  */
-	peer_req = drbd_alloc_peer_req(peer_device, id, sector, ds, data_size, GFP_NOIO);
-	if (!peer_req)
-		return NULL;
+	peer_req->block_id = d->block_id;
 
-	peer_req->flags |= EE_WRITE;
-	if (trim) {
-		peer_req->flags |= EE_TRIM;
-		return peer_req;
-	}
-	if (zeroes) {
-		peer_req->flags |= EE_ZEROOUT;
-		return peer_req;
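+	/* No payload left on the wire; nothing to read or verify. */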
+	if (d->length == 0)
+		return 0;
+
+	size = d->length - d->digest_size;
+	if (bio_list_empty(&peer_req->bios)) {
+		/* For a checksum resync, the bio was consumed for reading. */
+		err = peer_req_alloc_bio(peer_req, size, GFP_NOIO, REQ_OP_WRITE);
+		if (err)
+			return err;
 	}
+	err = tr_ops->recv_bio(transport, &peer_req->bios, size);
+	if (err)
+		return err;
 
-	/* receive payload size bytes into page chain */
-	ds = data_size;
-	page = peer_req->pages;
-	page_chain_for_each(page) {
-		unsigned len = min_t(int, ds, PAGE_SIZE);
-		data = kmap_local_page(page);
-		err = drbd_recv_all_warn(peer_device->connection, data, len);
-		if (drbd_insert_fault(device, DRBD_FAULT_RECEIVE)) {
-			drbd_err(device, "Fault injection: Corrupting data on receive\n");
-			data[0] = data[0] ^ (unsigned long)-1;
-		}
-		kunmap_local(data);
-		if (err) {
-			drbd_free_peer_req(device, peer_req);
-			return NULL;
-		}
-		ds -= len;
+	if (drbd_insert_fault(device, DRBD_FAULT_RECEIVE)) {
+		struct bio *bio = bio_list_peek(&peer_req->bios);
+		unsigned long *data;
+
+		drbd_err(device, "Fault injection: Corrupting data on receive, sector %llu\n",
+				d->sector);
+
+		data = bvec_virt(&bio->bi_io_vec[0]);
+		data[0] = ~data[0];
 	}
 
-	if (digest_size) {
-		drbd_csum_ee_size(peer_device->connection->peer_integrity_tfm, peer_req, dig_vv, data_size);
-		if (memcmp(dig_in, dig_vv, digest_size)) {
+	if (d->digest_size) {
+		drbd_csum_bios(connection->peer_integrity_tfm, &peer_req->bios, dig_vv);
+		if (memcmp(dig_in, dig_vv, d->digest_size)) {
 			drbd_err(device, "Digest integrity check FAILED: %llus +%u\n",
-				(unsigned long long)sector, data_size);
-			drbd_free_peer_req(device, peer_req);
-			return NULL;
+				d->sector, d->bi_size);
+			return -EINVAL;
 		}
 	}
-	device->recv_cnt += data_size >> 9;
-	return peer_req;
+	peer_device->recv_cnt += d->bi_size >> 9;
+	return 0;
 }
 
-/* drbd_drain_block() just takes a data block
- * out of the socket input buffer, and discards it.
- */
-static int drbd_drain_block(struct drbd_peer_device *peer_device, int data_size)
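+/* Drain @size bytes from the data stream and discard them, for packets
+ * whose payload we cannot apply locally. */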
+static int ignore_remaining_packet(struct drbd_connection *connection, int size)
 {
-	struct page *page;
-	int err = 0;
-	void *data;
-
-	if (!data_size)
-		return 0;
-
-	page = drbd_alloc_pages(peer_device, 1, 1);
+	void *data_to_ignore;
 
-	data = kmap_local_page(page);
-	while (data_size) {
-		unsigned int len = min_t(int, data_size, PAGE_SIZE);
+	while (size) {
+		int s = min_t(int, size, DRBD_SOCKET_BUFFER_SIZE);
+		int rv = drbd_recv(connection, &data_to_ignore, s, 0);
+
+		if (rv < 0)
+			return rv;
 
-		err = drbd_recv_all_warn(peer_device->connection, data, len);
-		if (err)
-			break;
-		data_size -= len;
+		size -= rv;
 	}
-	kunmap_local(data);
-	drbd_free_pages(peer_device->device, page);
-	return err;
+
+	return 0;
 }
 
 static int recv_dless_read(struct drbd_peer_device *peer_device, struct drbd_request *req,
@@ -1804,7 +2208,7 @@ static int recv_dless_read(struct drbd_peer_device *peer_device, struct drbd_req
 	digest_size = 0;
 	if (peer_device->connection->peer_integrity_tfm) {
 		digest_size = crypto_shash_digestsize(peer_device->connection->peer_integrity_tfm);
-		err = drbd_recv_all_warn(peer_device->connection, dig_in, digest_size);
+		err = drbd_recv_into(peer_device->connection, dig_in, digest_size);
 		if (err)
 			return err;
 		data_size -= digest_size;
@@ -1812,7 +2216,7 @@ static int recv_dless_read(struct drbd_peer_device *peer_device, struct drbd_req
 
 	/* optimistically update recv_cnt.  if receiving fails below,
 	 * we disconnect anyways, and counters will be reset. */
-	peer_device->device->recv_cnt += data_size>>9;
+	peer_device->recv_cnt += data_size >> 9;
 
 	bio = req->master_bio;
 	D_ASSERT(peer_device->device, sector == bio->bi_iter.bi_sector);
@@ -1820,7 +2224,7 @@ static int recv_dless_read(struct drbd_peer_device *peer_device, struct drbd_req
 	bio_for_each_segment(bvec, bio, iter) {
 		void *mapped = bvec_kmap_local(&bvec);
 		expect = min_t(int, data_size, bvec.bv_len);
-		err = drbd_recv_all_warn(peer_device->connection, mapped, expect);
+		err = drbd_recv_into(peer_device->connection, mapped, expect);
 		kunmap_local(mapped);
 		if (err)
 			return err;
@@ -1839,250 +2243,662 @@ static int recv_dless_read(struct drbd_peer_device *peer_device, struct drbd_req
 	return 0;
 }
 
-/*
- * e_end_resync_block() is called in ack_sender context via
- * drbd_finish_peer_reqs().
- */
-static int e_end_resync_block(struct drbd_work *w, int unused)
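+/* According to our bitmap, is the given sector range in sync with this
+ * peer, considering its current replication state? */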
+static bool bits_in_sync(struct drbd_peer_device *peer_device, sector_t sector_start, sector_t sector_end)
 {
-	struct drbd_peer_request *peer_req =
-		container_of(w, struct drbd_peer_request, w);
-	struct drbd_peer_device *peer_device = peer_req->peer_device;
 	struct drbd_device *device = peer_device->device;
-	sector_t sector = peer_req->i.sector;
-	int err;
-
-	D_ASSERT(device, drbd_interval_empty(&peer_req->i));
-
-	if (likely((peer_req->flags & EE_WAS_ERROR) == 0)) {
-		drbd_set_in_sync(peer_device, sector, peer_req->i.size);
-		err = drbd_send_ack(peer_device, P_RS_WRITE_ACK, peer_req);
-	} else {
-		/* Record failure to sync */
-		drbd_rs_failed_io(peer_device, sector, peer_req->i.size);
-
-		err  = drbd_send_ack(peer_device, P_NEG_ACK, peer_req);
+	struct drbd_bitmap *bm = device->bitmap;
+	enum drbd_repl_state repl_state = peer_device->repl_state[NOW];
+
+	if (repl_state == L_ESTABLISHED ||
+	    repl_state == L_SYNC_SOURCE || repl_state == L_SYNC_TARGET ||
+	    repl_state == L_PAUSED_SYNC_S || repl_state == L_PAUSED_SYNC_T) {
+		if (drbd_bm_total_weight(peer_device) == 0)
+			return true;
+		if (drbd_bm_count_bits(device, peer_device->bitmap_index,
+					bm_sect_to_bit(bm, sector_start),
+					bm_sect_to_bit(bm, sector_end - 1)) == 0)
+			return true;
 	}
-	dec_unacked(device);
-
-	return err;
+	return false;
 }
 
-static int recv_resync_read(struct drbd_peer_device *peer_device, sector_t sector,
-			    struct packet_info *pi) __releases(local)
+static void update_peers_for_interval(struct drbd_peer_device *peer_device,
+		struct drbd_interval *interval)
 {
 	struct drbd_device *device = peer_device->device;
-	struct drbd_peer_request *peer_req;
+	struct drbd_bitmap *bm = device->bitmap;
+	u64 mask = NODE_MASK(peer_device->node_id), im;
+	struct drbd_peer_device *p;
+	sector_t sector_end = interval->sector + (interval->size >> SECTOR_SHIFT);
+
+	/* Only send P_PEERS_IN_SYNC if we are actually in sync with this peer. */
+	if (drbd_bm_count_bits(device, peer_device->bitmap_index,
+				bm_sect_to_bit(bm, interval->sector),
+				bm_sect_to_bit(bm, sector_end - 1)))
+		return;
 
-	peer_req = read_in_block(peer_device, ID_SYNCER, sector, pi);
-	if (!peer_req)
-		goto fail;
+	for_each_peer_device_ref(p, im, device) {
+		if (p == peer_device)
+			continue;
 
-	dec_rs_pending(peer_device);
+		if (bits_in_sync(p, interval->sector, sector_end))
+			mask |= NODE_MASK(p->node_id);
+	}
 
-	inc_unacked(device);
-	/* corresponding dec_unacked() in e_end_resync_block()
-	 * respective _drbd_clear_done_ee */
+	for_each_peer_device_ref(p, im, device) {
+		/* Only send to the peer whose bitmap bits have been cleared if
+		 * we are connected to that peer. The bits may have been
+		 * cleared by a P_PEERS_IN_SYNC from another peer while we are
+		 * connecting to this one. We mustn't send P_PEERS_IN_SYNC
+		 * during the initial connection handshake. */
+		if (p == peer_device && p->connection->cstate[NOW] != C_CONNECTED)
+			continue;
 
-	peer_req->w.cb = e_end_resync_block;
-	peer_req->opf = REQ_OP_WRITE;
-	peer_req->submit_jif = jiffies;
+		if (mask & NODE_MASK(p->node_id))
+			drbd_send_peers_in_sync(p, mask, interval->sector, interval->size);
+	}
+}
 
-	spin_lock_irq(&device->resource->req_lock);
-	list_add_tail(&peer_req->w.list, &device->sync_ee);
-	spin_unlock_irq(&device->resource->req_lock);
+/* Potentially send P_PEERS_IN_SYNC for a range with size that fits in an int. */
+static void update_peers_for_small_range(struct drbd_peer_device *peer_device,
+		sector_t sector, int size)
+{
+	struct drbd_device *device = peer_device->device;
+	struct drbd_interval interval;
 
-	atomic_add(pi->size >> 9, &device->rs_sect_ev);
-	if (drbd_submit_peer_request(peer_req) == 0)
-		return 0;
+	memset(&interval, 0, sizeof(interval));
+	drbd_clear_interval(&interval);
+	interval.sector = sector;
+	interval.size = size;
+	interval.type = INTERVAL_PEERS_IN_SYNC_LOCK;
 
-	/* don't care for the reason here */
-	drbd_err(device, "submit failed, triggering re-connect\n");
-	spin_lock_irq(&device->resource->req_lock);
-	list_del(&peer_req->w.list);
-	spin_unlock_irq(&device->resource->req_lock);
+	spin_lock_irq(&device->interval_lock);
+	if (drbd_find_conflict(device, &interval, 0)) {
+		spin_unlock_irq(&device->interval_lock);
+		return;
+	}
+	drbd_insert_interval(&device->requests, &interval);
+	/* Interval is not waiting for conflicts to resolve, so mark it as "submitted". */
+	set_bit(INTERVAL_SUBMITTED, &interval.flags);
+	spin_unlock_irq(&device->interval_lock);
 
-	drbd_free_peer_req(device, peer_req);
-fail:
-	put_ldev(device);
-	return -EIO;
+	/* Check for activity in the activity log extent _after_ locking the
+	 * interval. Otherwise a write might occur between checking and
+	 * locking. */
+	if (!drbd_al_active(device, sector, size))
+		update_peers_for_interval(peer_device, &interval);
+
+	spin_lock_irq(&device->interval_lock);
+	drbd_remove_interval(&device->requests, &interval);
+	drbd_release_conflicts(device, &interval);
+	spin_unlock_irq(&device->interval_lock);
 }
 
-static struct drbd_request *
-find_request(struct drbd_device *device, struct rb_root *root, u64 id,
-	     sector_t sector, bool missing_ok, const char *func)
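+/* Process the range one activity log extent at a time, so that each
+ * peers-in-sync check covers at most one AL extent. */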
+static void update_peers_for_range(struct drbd_peer_device *peer_device,
+		sector_t sector_start, sector_t sector_end)
 {
-	struct drbd_request *req;
+	struct drbd_device *device = peer_device->device;
+	unsigned int enr_start = sector_start >> (AL_EXTENT_SHIFT - SECTOR_SHIFT);
+	unsigned int enr_end = ((sector_end - 1) >> (AL_EXTENT_SHIFT - SECTOR_SHIFT)) + 1;
+	unsigned int enr;
 
-	/* Request object according to our peer */
-	req = (struct drbd_request *)(unsigned long)id;
-	if (drbd_contains_interval(root, sector, &req->i) && req->i.local)
-		return req;
-	if (!missing_ok) {
-		drbd_err(device, "%s: failed to find request 0x%lx, sector %llus\n", func,
-			(unsigned long)id, (unsigned long long)sector);
+	if (!get_ldev(device))
+		return;
+
+	for (enr = enr_start; enr < enr_end; enr++) {
+		sector_t enr_start_sector = max(sector_start,
+				((sector_t) enr) << (AL_EXTENT_SHIFT - SECTOR_SHIFT));
+		sector_t enr_end_sector = min(sector_end,
+				((sector_t) (enr + 1)) << (AL_EXTENT_SHIFT - SECTOR_SHIFT));
+
+		update_peers_for_small_range(peer_device,
+				enr_start_sector, (enr_end_sector - enr_start_sector) << SECTOR_SHIFT);
 	}
-	return NULL;
+
+	put_ldev(device);
 }
 
-static int receive_DataReply(struct drbd_connection *connection, struct packet_info *pi)
+static int w_update_peers(struct drbd_work *w, int unused)
 {
-	struct drbd_peer_device *peer_device;
-	struct drbd_device *device;
-	struct drbd_request *req;
-	sector_t sector;
-	int err;
-	struct p_data *p = pi->data;
+	struct update_peers_work *upw = container_of(w, struct update_peers_work, w);
+	struct drbd_peer_device *peer_device = upw->peer_device;
+	struct drbd_device *device = peer_device->device;
+	struct drbd_connection *connection = peer_device->connection;
 
-	peer_device = conn_peer_device(connection, pi->vnr);
-	if (!peer_device)
-		return -EIO;
-	device = peer_device->device;
+	if (connection->agreed_pro_version >= 110)
+		update_peers_for_range(peer_device, upw->sector_start, upw->sector_end);
 
-	sector = be64_to_cpu(p->sector);
+	kfree(upw);
 
-	spin_lock_irq(&device->resource->req_lock);
-	req = find_request(device, &device->read_requests, p->block_id, sector, false, __func__);
-	spin_unlock_irq(&device->resource->req_lock);
-	if (unlikely(!req))
-		return -EIO;
+	kref_put(&device->kref, drbd_destroy_device);
 
-	err = recv_dless_read(peer_device, req, sector, pi->size);
-	if (!err)
-		req_mod(req, DATA_RECEIVED, peer_device);
-	/* else: nothing. handled from drbd_disconnect...
-	 * I don't think we may complete this just yet
-	 * in case we are "on-disconnect: freeze" */
+	kref_put(&connection->kref, drbd_destroy_connection);
 
-	return err;
+	return 0;
 }
 
-static int receive_RSDataReply(struct drbd_connection *connection, struct packet_info *pi)
+void drbd_queue_update_peers(struct drbd_peer_device *peer_device,
+		sector_t sector_start, sector_t sector_end)
 {
-	struct drbd_peer_device *peer_device;
-	struct drbd_device *device;
-	sector_t sector;
-	int err;
-	struct p_data *p = pi->data;
+	struct drbd_device *device = peer_device->device;
+	struct update_peers_work *upw;
 
-	peer_device = conn_peer_device(connection, pi->vnr);
-	if (!peer_device)
-		return -EIO;
-	device = peer_device->device;
+	upw = kmalloc_obj(*upw, GFP_ATOMIC | __GFP_NOWARN);
+	if (upw) {
+		upw->sector_start = sector_start;
+		upw->sector_end = sector_end;
+		upw->w.cb = w_update_peers;
 
-	sector = be64_to_cpu(p->sector);
-	D_ASSERT(device, p->block_id == ID_SYNCER);
+		kref_get(&peer_device->device->kref);
 
-	if (get_ldev(device)) {
-		/* data is submitted to disk within recv_resync_read.
-		 * corresponding put_ldev done below on error,
-		 * or in drbd_peer_request_endio. */
-		err = recv_resync_read(peer_device, sector, pi);
+		kref_get(&peer_device->connection->kref);
+
+		upw->peer_device = peer_device;
+		drbd_queue_work(&device->resource->work, &upw->w);
 	} else {
 		if (drbd_ratelimit())
-			drbd_err(device, "Can not write resync data to local disk.\n");
+			drbd_warn(peer_device, "kmalloc(upw) failed.\n");
+	}
+}
 
-		err = drbd_drain_block(peer_device, pi->size);
+static void drbd_peers_in_sync_progress(struct drbd_peer_device *peer_device,
+		sector_t sector_start, sector_t sector_end)
+{
+	/* P_PEERS_IN_SYNC "steps" are represented by their start sector */
+	sector_t step = sector_start & ~PEERS_IN_SYNC_STEP_SECT_MASK;
+	sector_t end_step = sector_end & ~PEERS_IN_SYNC_STEP_SECT_MASK;
+	sector_t last_end = peer_device->last_in_sync_end;
+	sector_t last_step = last_end & ~PEERS_IN_SYNC_STEP_SECT_MASK;
+	sector_t last_step_end = min(get_capacity(peer_device->device->vdisk),
+			last_step + PEERS_IN_SYNC_STEP_SECT);
 
-		drbd_send_ack_dp(peer_device, P_NEG_ACK, p, pi->size);
-	}
+	/* Send for last request if it was part way through a different step */
+	if (last_end > last_step && step != last_step)
+		drbd_queue_update_peers(peer_device, last_step, last_step_end);
 
-	atomic_add(pi->size >> 9, &device->rs_sect_in);
+	/* Send if the request reaches or passes a step boundary */
+	if (end_step != step)
+		drbd_queue_update_peers(peer_device, step, end_step);
 
-	return err;
+	peer_device->last_in_sync_end = sector_end;
+
+	/*
+	 * Consider scheduling a bitmap update to reduce the size of the next
+	 * resync if this one is disrupted.
+	 */
+	if (drbd_lazy_bitmap_update_due(peer_device))
+		drbd_peer_device_post_work(peer_device, RS_LAZY_BM_WRITE);
 }
 
-static void restart_conflicting_writes(struct drbd_device *device,
-				       sector_t sector, int size)
+static void drbd_check_peers_in_sync_progress(struct drbd_peer_device *peer_device)
 {
-	struct drbd_interval *i;
-	struct drbd_request *req;
+	struct drbd_connection *connection = peer_device->connection;
+	LIST_HEAD(completed);
+	struct drbd_peer_request *peer_req, *tmp;
 
-	drbd_for_each_overlap(i, &device->write_requests, sector, size) {
-		if (!i->local)
-			continue;
-		req = container_of(i, struct drbd_request, i);
-		if (req->rq_state & RQ_LOCAL_PENDING ||
-		    !(req->rq_state & RQ_POSTPONED))
-			continue;
-		/* as it is RQ_POSTPONED, this will cause it to
-		 * be queued on the retry workqueue. */
-		__req_mod(req, CONFLICT_RESOLVED, NULL, NULL);
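+	/* Collect completed resync requests under the lock and free them
+	 * afterwards, outside of the locked section. */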
+	spin_lock_irq(&connection->peer_reqs_lock);
+	list_for_each_entry_safe(peer_req, tmp, &peer_device->resync_requests, recv_order) {
+		if (!test_bit(INTERVAL_COMPLETED, &peer_req->i.flags))
+			break;
+
+		drbd_peers_in_sync_progress(peer_device, peer_req->i.sector,
+			peer_req->i.sector + (peer_req->i.size >> SECTOR_SHIFT));
+
+		drbd_list_del_resync_request(peer_req);
+		list_add_tail(&peer_req->recv_order, &completed);
 	}
+	spin_unlock_irq(&connection->peer_reqs_lock);
+
+	list_for_each_entry_safe(peer_req, tmp, &completed, recv_order)
+		drbd_free_peer_req(peer_req);
 }
 
-/*
- * e_end_block() is called in ack_sender context via drbd_finish_peer_reqs().
- */
-static int e_end_block(struct drbd_work *w, int cancel)
+static void drbd_resync_request_complete(struct drbd_peer_request *peer_req)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+
+	/*
+	 * Free the pages now but leave the peer request until the
+	 * corresponding peers-in-sync has been scheduled.
+	 */
+	drbd_peer_req_strip_bio(peer_req);
+
+	/*
+	 * The interval is no longer in the tree, but use this flag anyway,
+	 * since it has an appropriate meaning. After setting the flag,
+	 * peer_req may be freed by another thread.
+	 */
+	set_bit(INTERVAL_COMPLETED, &peer_req->i.flags);
+	peer_req = NULL;
+
+	drbd_check_peers_in_sync_progress(peer_device);
+}
+
+/*
+ * e_end_resync_block() is called in ack_sender context via
+ * drbd_finish_peer_reqs().
+ */
+static int e_end_resync_block(struct drbd_work *w, int unused)
 {
 	struct drbd_peer_request *peer_req =
 		container_of(w, struct drbd_peer_request, w);
 	struct drbd_peer_device *peer_device = peer_req->peer_device;
-	struct drbd_device *device = peer_device->device;
 	sector_t sector = peer_req->i.sector;
-	int err = 0, pcmd;
+	unsigned int size = peer_req->requested_size;
+	u64 block_id = peer_req->block_id;
+	int err;
 
-	if (peer_req->flags & EE_SEND_WRITE_ACK) {
-		if (likely((peer_req->flags & EE_WAS_ERROR) == 0)) {
-			pcmd = (device->state.conn >= C_SYNC_SOURCE &&
-				device->state.conn <= C_PAUSED_SYNC_T &&
-				peer_req->flags & EE_MAY_SET_IN_SYNC) ?
-				P_RS_WRITE_ACK : P_WRITE_ACK;
-			err = drbd_send_ack(peer_device, pcmd, peer_req);
-			if (pcmd == P_RS_WRITE_ACK)
-				drbd_set_in_sync(peer_device, sector, peer_req->i.size);
-		} else {
-			err = drbd_send_ack(peer_device, P_NEG_ACK, peer_req);
-			/* we expect it to be marked out of sync anyways...
-			 * maybe assert this?  */
-		}
-		dec_unacked(device);
+	if (likely((peer_req->flags & EE_WAS_ERROR) == 0)) {
+		drbd_set_in_sync(peer_device, sector, size);
+		err = drbd_send_ack_be(peer_device, P_RS_WRITE_ACK, sector, size, block_id);
+	} else {
+		/* Record failure to sync */
+		drbd_rs_failed_io(peer_device, sector, size);
+
+		err = drbd_send_ack_be(peer_device, P_RS_NEG_ACK, sector, size, block_id);
 	}
 
-	/* we delete from the conflict detection hash _after_ we sent out the
-	 * P_WRITE_ACK / P_NEG_ACK, to get the sequence number right.  */
-	if (peer_req->flags & EE_IN_INTERVAL_TREE) {
-		spin_lock_irq(&device->resource->req_lock);
-		D_ASSERT(device, !drbd_interval_empty(&peer_req->i));
-		drbd_remove_epoch_entry_interval(device, peer_req);
-		if (peer_req->flags & EE_RESTART_REQUESTS)
-			restart_conflicting_writes(device, sector, peer_req->i.size);
-		spin_unlock_irq(&device->resource->req_lock);
-	} else
-		D_ASSERT(device, drbd_interval_empty(&peer_req->i));
+	dec_unacked(peer_device);
 
-	drbd_may_finish_epoch(peer_device->connection, peer_req->epoch, EV_PUT + (cancel ? EV_CLEANUP : 0));
+	/*
+	 * If INTERVAL_SUBMITTED is not set, this request was merged into
+	 * another discard. It has already been removed from the interval tree.
+	 */
+	if (test_bit(INTERVAL_SUBMITTED, &peer_req->i.flags))
+		drbd_remove_peer_req_interval(peer_req);
 
+	drbd_resync_request_complete(peer_req);
 	return err;
 }
 
-static int e_send_ack(struct drbd_work *w, enum drbd_packet ack)
+static struct drbd_peer_request *find_resync_request(struct drbd_peer_device *peer_device,
+		unsigned long type_mask, sector_t sector, unsigned int size, u64 block_id)
+{
+	struct drbd_device *device = peer_device->device;
+	struct drbd_interval *i;
+	struct drbd_peer_request *peer_req = NULL;
+
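+	/* Find the request this resync reply answers: it must overlap the
+	 * given range, be ready to send, match the type mask, and match the
+	 * sector and size exactly. */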
+	spin_lock_irq(&device->interval_lock);
+	drbd_for_each_overlap(i, &device->requests, sector, size) {
+		struct drbd_peer_request *pr;
+
+		if (!test_bit(INTERVAL_READY_TO_SEND, &i->flags))
+			continue;
+
+		if (!(INTERVAL_TYPE_MASK(i->type) & type_mask))
+			continue;
+
+		if (i->sector != sector || i->size != size)
+			continue;
+
+		pr = container_of(i, struct drbd_peer_request, i);
+		/* With agreed_pro_version < 122, block_id is always ID_SYNCER. */
+		if (pr->peer_device == peer_device &&
+				(block_id == ID_SYNCER || pr->block_id == block_id)) {
+			peer_req = pr;
+			break;
+		}
+	}
+	spin_unlock_irq(&device->interval_lock);
+
+	if (peer_req)
+		D_ASSERT(peer_device, peer_req->i.size == size);
+	else if (drbd_ratelimit())
+		drbd_err(peer_device, "Unexpected resync reply at %llus+%u\n",
+				(unsigned long long) sector, size);
+
+	return peer_req;
+}
+
+static void drbd_cleanup_received_resync_write(struct drbd_peer_request *peer_req)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_connection *connection = peer_device->connection;
+	struct drbd_device *device = peer_device->device;
+
+	drbd_remove_peer_req_interval(peer_req);
+
+	atomic_sub(peer_req->i.size >> SECTOR_SHIFT, &device->rs_sect_ev);
+	dec_unacked(peer_device);
+
+	drbd_free_peer_req(peer_req);
+	put_ldev(device);
+
+	if (atomic_dec_and_test(&connection->backing_ee_cnt))
+		wake_up(&connection->ee_wait);
+}
+
+void drbd_conflict_submit_resync_request(struct drbd_peer_request *peer_req)
 {
-	struct drbd_peer_request *peer_req =
-		container_of(w, struct drbd_peer_request, w);
 	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_device *device = peer_device->device;
+	bool conflict;
+	bool canceled;
+
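+	/* Submit only once no conflicting interval remains; if the request
+	 * was canceled while still conflicting, clean it up instead. */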
+	spin_lock_irq(&device->interval_lock);
+	clear_bit(INTERVAL_SUBMIT_CONFLICT_QUEUED, &peer_req->i.flags);
+	canceled = test_bit(INTERVAL_CANCELED, &peer_req->i.flags);
+	set_bit(INTERVAL_RECEIVED, &peer_req->i.flags);
+	conflict = drbd_find_conflict(device, &peer_req->i, 0);
+	if (!conflict)
+		set_bit(INTERVAL_SUBMITTED, &peer_req->i.flags);
+	spin_unlock_irq(&device->interval_lock);
+
+	if (!conflict) {
+		int err = drbd_submit_peer_request(peer_req);
+		if (err) {
+			if (drbd_ratelimit())
+				drbd_err(device, "submit failed, triggering re-connect\n");
+
+			drbd_cleanup_received_resync_write(peer_req);
+			change_cstate(peer_device->connection, C_PROTOCOL_ERROR, CS_HARD);
+		}
+	} else if (canceled) {
+		drbd_cleanup_received_resync_write(peer_req);
+	}
+}
+
+static int recv_resync_read(struct drbd_peer_device *peer_device,
+			    struct drbd_peer_request *peer_req,
+			    struct drbd_peer_request_details *d)
+{
+	struct drbd_connection *connection = peer_device->connection;
+	struct drbd_device *device = peer_device->device;
+	unsigned int size;
+	sector_t sector;
+	int err;
+	u64 im;
+
+	err = read_in_block(peer_req, d);
+	if (err)
+		return err;
+
+	if (test_bit(UNSTABLE_RESYNC, &peer_device->flags))
+		clear_bit(STABLE_RESYNC, &device->flags);
+
+	dec_rs_pending(peer_device);
+
+	inc_unacked(peer_device);
+	/* corresponding dec_unacked() in e_end_resync_block(),
+	 * or in _drbd_clear_done_ee(), respectively */

+
+	peer_req->w.cb = e_end_resync_block;
+	peer_req->submit_jif = jiffies;
+
+	atomic_add(d->bi_size >> 9, &device->rs_sect_ev);
+
+	sector = peer_req->i.sector;
+	size = peer_req->i.size;
+
+	/* Setting all peers out of sync here. The sync source peer will be
+	 * set in sync when the write completes. The sync source will soon
+	 * set other peers in sync with a P_PEERS_IN_SYNC packet.
+	 */
+	drbd_set_all_out_of_sync(device, sector, size);
+
+	atomic_inc(&connection->backing_ee_cnt);
+	drbd_conflict_submit_resync_request(peer_req);
+	peer_req = NULL; /* since submitted, might be destroyed already */
+
+	drbd_process_rs_discards(peer_device, false);
+
+	for_each_peer_device_ref(peer_device, im, device) {
+		enum drbd_repl_state repl_state = peer_device->repl_state[NOW];
+
+		if (repl_is_sync_source(repl_state) || repl_state == L_WF_BITMAP_S)
+			drbd_send_out_of_sync(peer_device, sector, size);
+	}
+	return 0;
+}
+
+/* caller must hold interval_lock */
+static struct drbd_request *
+find_request(struct drbd_device *device, enum drbd_interval_type type, u64 id,
+	     sector_t sector, bool missing_ok, const char *func)
+{
+	struct rb_root *root = type == INTERVAL_LOCAL_READ ? &device->read_requests : &device->requests;
+	struct drbd_request *req;
+
+	/* Request object according to our peer */
+	req = (struct drbd_request *)(unsigned long)id;
+	if (drbd_contains_interval(root, sector, &req->i) && req->i.type == type)
+		return req;
+	if (!missing_ok) {
+		drbd_err(device, "%s: failed to find request 0x%lx, sector %llus\n", func,
+			(unsigned long)id, (unsigned long long)sector);
+	}
+	return NULL;
+}
+
+static int receive_DataReply(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_peer_device *peer_device;
+	struct drbd_device *device;
+	struct drbd_request *req;
+	sector_t sector;
 	int err;
+	struct p_data *p = pi->data;
+
+	peer_device = conn_peer_device(connection, pi->vnr);
+	if (!peer_device)
+		return -EIO;
+	device = peer_device->device;
+
+	sector = be64_to_cpu(p->sector);
+
+	spin_lock_irq(&device->interval_lock);
+	req = find_request(device, INTERVAL_LOCAL_READ, p->block_id, sector, false, __func__);
+	spin_unlock_irq(&device->interval_lock);
+	if (unlikely(!req))
+		return -EIO;
 
-	err = drbd_send_ack(peer_device, ack, peer_req);
-	dec_unacked(peer_device->device);
+	err = recv_dless_read(peer_device, req, sector, pi->size);
+	if (!err)
+		req_mod(req, DATA_RECEIVED, peer_device);
+	/* else: nothing. handled from drbd_disconnect...
+	 * I don't think we may complete this just yet
+	 * in case we are "on-disconnect: freeze" */
 
 	return err;
 }
 
-static int e_send_superseded(struct drbd_work *w, int unused)
+/**
+ * _drbd_send_ack() - Sends an ack packet
+ * @peer_device: DRBD peer device
+ * @cmd:	packet command code
+ * @sector:	sector, in big endian byte order
+ * @blksize:	size in bytes, in big endian byte order
+ * @block_id:	block ID, in big endian byte order
+ */
+static int _drbd_send_ack(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
+			  u64 sector, u32 blksize, u64 block_id)
+{
+	struct p_block_ack *p;
+
+	if (peer_device->repl_state[NOW] < L_ESTABLISHED)
+		return -EIO;
+
+	p = drbd_prepare_command(peer_device, sizeof(*p), CONTROL_STREAM);
+	if (!p)
+		return -EIO;
+	p->sector = sector;
+	p->block_id = block_id;
+	p->blksize = blksize;
+	p->seq_num = cpu_to_be32(atomic_inc_return(&peer_device->packet_seq));
+
+	if (peer_device->connection->agreed_pro_version < 122) {
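+		/* Translate packet types that pre-122 peers do not know into
+		 * their older equivalents. */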
+		switch (cmd) {
+		case P_RS_NEG_ACK:
+			cmd = P_NEG_ACK;
+			p->block_id = ID_SYNCER;
+			break;
+		case P_WRITE_ACK_IN_SYNC:
+			cmd = P_RS_WRITE_ACK;
+			break;
+		case P_RS_WRITE_ACK:
+			p->block_id = ID_SYNCER;
+			break;
+		default:
+			break;
+		}
+	}
+
+	return drbd_send_command(peer_device, cmd, CONTROL_STREAM);
+}
+
+static int drbd_send_ack_dp(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
+		  struct drbd_peer_request_details *d)
+{
+	return _drbd_send_ack(peer_device, cmd,
+			      cpu_to_be64(d->sector),
+			      cpu_to_be32(d->bi_size),
+			      d->block_id);
+}
+
+/* Send an ack packet with a block ID that is already in big endian byte order. */
+int drbd_send_ack_be(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
+		      sector_t sector, int size, u64 block_id)
+{
+	return _drbd_send_ack(peer_device, cmd, cpu_to_be64(sector), cpu_to_be32(size), block_id);
+}
+
+/**
+ * drbd_send_ack() - Sends an ack packet
+ * @peer_device: DRBD peer device
+ * @cmd:	packet command code
+ * @peer_req:	peer request
+ */
+int drbd_send_ack(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
+		  struct drbd_peer_request *peer_req)
+{
+	return _drbd_send_ack(peer_device, cmd,
+			      cpu_to_be64(peer_req->i.sector),
+			      cpu_to_be32(peer_req->i.size),
+			      peer_req->block_id);
+}
+
+int drbd_send_ov_result(struct drbd_peer_device *peer_device, sector_t sector, int blksize,
+		u64 block_id, enum ov_result result)
+{
+	struct p_ov_result *p;
+
+	if (peer_device->connection->agreed_pro_version < 122)
+		/* Misuse the block_id field to signal whether the blocks are in sync or not. */
+		return _drbd_send_ack(peer_device, P_OV_RESULT,
+				cpu_to_be64(sector),
+				cpu_to_be32(blksize),
+				cpu_to_be64(drbd_ov_result_to_block_id(result)));
+
+	if (peer_device->repl_state[NOW] < L_ESTABLISHED)
+		return -EIO;
+
+	p = drbd_prepare_command(peer_device, sizeof(*p), CONTROL_STREAM);
+	if (!p)
+		return -EIO;
+	p->sector = cpu_to_be64(sector);
+	p->block_id = block_id;
+	p->blksize = cpu_to_be32(blksize);
+	p->seq_num = cpu_to_be32(atomic_inc_return(&peer_device->packet_seq));
+	p->result = cpu_to_be32(result);
+	p->pad = 0;
+
+	return drbd_send_command(peer_device, P_OV_RESULT_ID, CONTROL_STREAM);
+}
+
+static int receive_RSDataReply(struct drbd_connection *connection, struct packet_info *pi)
 {
-	return e_send_ack(w, P_SUPERSEDED);
+	struct drbd_peer_request_details d;
+	struct drbd_peer_device *peer_device;
+	struct drbd_device *device;
+	struct drbd_peer_request *peer_req;
+	int err;
+
+	p_req_detail_from_pi(connection, &d, pi);
+	pi->data = NULL;
+
+	peer_device = conn_peer_device(connection, pi->vnr);
+	if (!peer_device)
+		return -EIO;
+	device = peer_device->device;
+
+	peer_req = find_resync_request(peer_device, INTERVAL_TYPE_MASK(INTERVAL_RESYNC_WRITE),
+			d.sector, d.bi_size, d.block_id);
+	if (!peer_req)
+		return -EIO;
+
+	if (get_ldev(device)) {
+		err = recv_resync_read(peer_device, peer_req, &d);
+		if (err)
+			put_ldev(device);
+	} else {
+		drbd_err_ratelimit(device, "Cannot write resync data to local disk.\n");
+
+		err = ignore_remaining_packet(connection, pi->size);
+
+		drbd_send_ack_dp(peer_device, P_RS_NEG_ACK, &d);
+
+		dec_rs_pending(peer_device);
+		drbd_remove_peer_req_interval(peer_req);
+		drbd_free_peer_req(peer_req);
+	}
+
+	rs_sectors_came_in(peer_device, d.bi_size);
+
+	return err;
 }
 
-static int e_send_retry_write(struct drbd_work *w, int unused)
+/*
+ * e_end_block() is called in ack_sender context via drbd_finish_peer_reqs().
+ */
+static int e_end_block(struct drbd_work *w, int cancel)
 {
 	struct drbd_peer_request *peer_req =
 		container_of(w, struct drbd_peer_request, w);
-	struct drbd_connection *connection = peer_req->peer_device->connection;
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_device *device = peer_device->device;
+	struct drbd_connection *connection = peer_device->connection;
+	sector_t sector = peer_req->i.sector;
+	struct drbd_epoch *epoch;
+	int err = 0, pcmd;
+
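+	/* This request was submitted as an epoch barrier (PREFLUSH/FUA), so
+	 * its completion allows the previous epoch to be finished. */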
+	if (peer_req->flags & EE_IS_BARRIER) {
+		epoch = previous_epoch(connection, peer_req->epoch);
+		if (epoch)
+			drbd_may_finish_epoch(connection, epoch, EV_BARRIER_DONE + (cancel ? EV_CLEANUP : 0));
+	}
+
+	if (peer_req->flags & EE_SEND_WRITE_ACK) {
+		if (unlikely(peer_req->flags & EE_WAS_ERROR)) {
+			pcmd = P_NEG_ACK;
+			/* we expect it to be marked out of sync anyways...
+			 * maybe assert this?  */
+		} else if (peer_device->repl_state[NOW] >= L_SYNC_SOURCE &&
+			   peer_device->repl_state[NOW] <= L_PAUSED_SYNC_T &&
+			   peer_req->flags & EE_MAY_SET_IN_SYNC) {
+			pcmd = P_WRITE_ACK_IN_SYNC;
+			drbd_set_in_sync(peer_device, sector, peer_req->i.size);
+		} else
+			pcmd = P_WRITE_ACK;
+		err = drbd_send_ack(peer_device, pcmd, peer_req);
+		dec_unacked(peer_device);
+	}
+
+	drbd_remove_peer_req_interval(peer_req);
+
+	if (connection->agreed_pro_version < 110) {
+		drbd_al_complete_io(device, &peer_req->i);
+		drbd_may_finish_epoch(connection, peer_req->epoch, EV_PUT + (cancel ? EV_CLEANUP : 0));
+		drbd_free_peer_req(peer_req);
+	} else {
+		drbd_peer_req_strip_bio(peer_req);
+		drbd_may_finish_epoch(connection, peer_req->epoch, EV_PUT + (cancel ? EV_CLEANUP : 0));
+		/* Do not use peer_req after this point. We may have sent the
+		 * corresponding barrier and received the corresponding peer ack. As a
+		 * result, peer_req may have been freed. */
+	}
 
-	return e_send_ack(w, connection->agreed_pro_version >= 100 ?
-			     P_RETRY_WRITE : P_SUPERSEDED);
+	return err;
 }
 
 static bool seq_greater(u32 a, u32 b)
@@ -2102,42 +2918,17 @@ static u32 seq_max(u32 a, u32 b)
 
 static void update_peer_seq(struct drbd_peer_device *peer_device, unsigned int peer_seq)
 {
-	struct drbd_device *device = peer_device->device;
 	unsigned int newest_peer_seq;
 
-	if (test_bit(RESOLVE_CONFLICTS, &peer_device->connection->flags)) {
-		spin_lock(&device->peer_seq_lock);
-		newest_peer_seq = seq_max(device->peer_seq, peer_seq);
-		device->peer_seq = newest_peer_seq;
-		spin_unlock(&device->peer_seq_lock);
-		/* wake up only if we actually changed device->peer_seq */
+	if (test_bit(RESOLVE_CONFLICTS, &peer_device->connection->transport.flags)) {
+		spin_lock_bh(&peer_device->peer_seq_lock);
+		newest_peer_seq = seq_max(peer_device->peer_seq, peer_seq);
+		peer_device->peer_seq = newest_peer_seq;
+		spin_unlock_bh(&peer_device->peer_seq_lock);
+		/* wake up only if we actually changed peer_device->peer_seq */
 		if (peer_seq == newest_peer_seq)
-			wake_up(&device->seq_wait);
-	}
-}
-
-static inline int overlaps(sector_t s1, int l1, sector_t s2, int l2)
-{
-	return !((s1 + (l1>>9) <= s2) || (s1 >= s2 + (l2>>9)));
-}
-
-/* maybe change sync_ee into interval trees as well? */
-static bool overlapping_resync_write(struct drbd_device *device, struct drbd_peer_request *peer_req)
-{
-	struct drbd_peer_request *rs_req;
-	bool rv = false;
-
-	spin_lock_irq(&device->resource->req_lock);
-	list_for_each_entry(rs_req, &device->sync_ee, w.list) {
-		if (overlaps(peer_req->i.sector, peer_req->i.size,
-			     rs_req->i.sector, rs_req->i.size)) {
-			rv = true;
-			break;
-		}
+			wake_up(&peer_device->device->seq_wait);
 	}
-	spin_unlock_irq(&device->resource->req_lock);
-
-	return rv;
 }
 
 /* Called from receive_Data.
@@ -2149,9 +2940,9 @@ static bool overlapping_resync_write(struct drbd_device *device, struct drbd_pee
  *
  * Note: we don't care for Ack packets overtaking P_DATA packets.
  *
- * In case packet_seq is larger than device->peer_seq number, there are
+ * In case packet_seq is larger than peer_device->peer_seq number, there are
  * outstanding packets on the msock. We wait for them to arrive.
- * In case we are the logically next packet, we update device->peer_seq
+ * In case we are the logically next packet, we update peer_device->peer_seq
  * ourselves. Correctly handles 32bit wrap around.
  *
  * Assume we have a 10 GBit connection, that is about 1<<30 byte per second,
@@ -2163,18 +2954,18 @@ static bool overlapping_resync_write(struct drbd_device *device, struct drbd_pee
  * -ERESTARTSYS if we were interrupted (by disconnect signal). */
 static int wait_for_and_update_peer_seq(struct drbd_peer_device *peer_device, const u32 peer_seq)
 {
-	struct drbd_device *device = peer_device->device;
+	struct drbd_connection *connection = peer_device->connection;
 	DEFINE_WAIT(wait);
 	long timeout;
 	int ret = 0, tp;
 
-	if (!test_bit(RESOLVE_CONFLICTS, &peer_device->connection->flags))
+	if (!test_bit(RESOLVE_CONFLICTS, &connection->transport.flags))
 		return 0;
 
-	spin_lock(&device->peer_seq_lock);
+	spin_lock_bh(&peer_device->peer_seq_lock);
 	for (;;) {
-		if (!seq_greater(peer_seq - 1, device->peer_seq)) {
-			device->peer_seq = seq_max(device->peer_seq, peer_seq);
+		if (!seq_greater(peer_seq - 1, peer_device->peer_seq)) {
+			peer_device->peer_seq = seq_max(peer_device->peer_seq, peer_seq);
 			break;
 		}
 
@@ -2184,28 +2975,28 @@ static int wait_for_and_update_peer_seq(struct drbd_peer_device *peer_device, co
 		}
 
 		rcu_read_lock();
-		tp = rcu_dereference(peer_device->connection->net_conf)->two_primaries;
+		tp = rcu_dereference(connection->transport.net_conf)->two_primaries;
 		rcu_read_unlock();
 
 		if (!tp)
 			break;
 
 		/* Only need to wait if two_primaries is enabled */
-		prepare_to_wait(&device->seq_wait, &wait, TASK_INTERRUPTIBLE);
-		spin_unlock(&device->peer_seq_lock);
+		prepare_to_wait(&peer_device->device->seq_wait, &wait, TASK_INTERRUPTIBLE);
+		spin_unlock_bh(&peer_device->peer_seq_lock);
 		rcu_read_lock();
-		timeout = rcu_dereference(peer_device->connection->net_conf)->ping_timeo*HZ/10;
+		timeout = rcu_dereference(connection->transport.net_conf)->ping_timeo*HZ/10;
 		rcu_read_unlock();
 		timeout = schedule_timeout(timeout);
-		spin_lock(&device->peer_seq_lock);
+		spin_lock_bh(&peer_device->peer_seq_lock);
 		if (!timeout) {
 			ret = -ETIMEDOUT;
-			drbd_err(device, "Timed out waiting for missing ack packets; disconnecting\n");
+			drbd_err(peer_device, "Timed out waiting for missing ack packets; disconnecting\n");
 			break;
 		}
 	}
-	spin_unlock(&device->peer_seq_lock);
-	finish_wait(&device->seq_wait, &wait);
+	spin_unlock_bh(&peer_device->peer_seq_lock);
+	finish_wait(&peer_device->device->seq_wait, &wait);
 	return ret;
 }
 
@@ -2215,182 +3006,268 @@ static enum req_op wire_flags_to_bio_op(u32 dpf)
 		return REQ_OP_WRITE_ZEROES;
 	if (dpf & DP_DISCARD)
 		return REQ_OP_DISCARD;
-	else
-		return REQ_OP_WRITE;
+	return REQ_OP_WRITE;
 }
 
 /* see also bio_flags_to_wire() */
 static blk_opf_t wire_flags_to_bio(struct drbd_connection *connection, u32 dpf)
 {
-	return wire_flags_to_bio_op(dpf) |
-		(dpf & DP_RW_SYNC ? REQ_SYNC : 0) |
-		(dpf & DP_FUA ? REQ_FUA : 0) |
-		(dpf & DP_FLUSH ? REQ_PREFLUSH : 0);
-}
+	blk_opf_t opf = wire_flags_to_bio_op(dpf) |
+		(dpf & DP_RW_SYNC ? REQ_SYNC : 0);
 
-static void fail_postponed_requests(struct drbd_device *device, sector_t sector,
-				    unsigned int size)
-{
-	struct drbd_peer_device *peer_device = first_peer_device(device);
-	struct drbd_interval *i;
+	/* older DRBD (protocol < 95) communicated only the sync bit */
+	if (connection->agreed_pro_version >= 95)
+		opf |= (dpf & DP_FUA ? REQ_FUA : 0) |
+			    (dpf & DP_FLUSH ? REQ_PREFLUSH : 0);
 
-    repeat:
-	drbd_for_each_overlap(i, &device->write_requests, sector, size) {
-		struct drbd_request *req;
-		struct bio_and_error m;
+	return opf;
+}
 
-		if (!i->local)
-			continue;
-		req = container_of(i, struct drbd_request, i);
-		if (!(req->rq_state & RQ_POSTPONED))
-			continue;
-		req->rq_state &= ~RQ_POSTPONED;
-		__req_mod(req, NEG_ACKED, peer_device, &m);
-		spin_unlock_irq(&device->resource->req_lock);
-		if (m.bio)
-			complete_master_bio(device, &m);
-		spin_lock_irq(&device->resource->req_lock);
-		goto repeat;
+static void drbd_wait_for_activity_log_extents(struct drbd_peer_request *peer_req)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_connection *connection = peer_device->connection;
+	struct drbd_device *device = peer_device->device;
+	struct lru_cache *al;
+	int nr_al_extents;
+	int nr, used, ecnt;
+
+	/* Let the activity log know we are about to use it.
+	 * See also drbd_request_prepare() for the "request" entry point. */
+	nr_al_extents = interval_to_al_extents(&peer_req->i);
+	ecnt = atomic_add_return(nr_al_extents, &device->wait_for_actlog_ecnt);
+
+	spin_lock_irq(&device->al_lock);
+	al = device->act_log;
+	nr = al->nr_elements;
+	used = al->used;
+	spin_unlock_irq(&device->al_lock);
+
+	/* note: due to the slight delay between being accounted in "used" after
+	 * being committed to the activity log with drbd_al_begin_io_commit(),
+	 * and being subtracted from "wait_for_actlog_ecnt" in __drbd_submit_peer_request(),
+	 * this can err, but only on the conservative side (overestimating ecnt).
+	 * ecnt also includes any requests which are held due to conflicts,
+	 * conservatively overestimating the number of activity log extents
+	 * required. */
+	if (ecnt > nr - used) {
+		conn_wait_active_ee_empty_or_disconnect(connection);
+		drbd_flush_after_epoch(connection, NULL);
+		conn_wait_done_ee_empty_or_disconnect(connection);
+
+		/* would this peer even understand me? */
+		if (connection->agreed_pro_version >= 114)
+			drbd_send_confirm_stable(peer_req);
 	}
 }
 
-static int handle_write_conflicts(struct drbd_device *device,
-				  struct drbd_peer_request *peer_req)
+static int drbd_peer_write_conflicts(struct drbd_peer_request *peer_req)
 {
-	struct drbd_connection *connection = peer_req->peer_device->connection;
-	bool resolve_conflicts = test_bit(RESOLVE_CONFLICTS, &connection->flags);
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_device *device = peer_device->device;
 	sector_t sector = peer_req->i.sector;
 	const unsigned int size = peer_req->i.size;
 	struct drbd_interval *i;
-	bool equal;
-	int err;
 
-	/*
-	 * Inserting the peer request into the write_requests tree will prevent
-	 * new conflicting local requests from being added.
-	 */
-	drbd_insert_interval(&device->write_requests, &peer_req->i);
+	i = drbd_find_conflict(device, &peer_req->i, CONFLICT_FLAG_APPLICATION_ONLY);
 
-    repeat:
-	drbd_for_each_overlap(i, &device->write_requests, sector, size) {
-		if (i == &peer_req->i)
-			continue;
-		if (i->completed)
+	if (i) {
+		drbd_alert(device, "Concurrent writes detected: "
+				"local=%llus +%u, remote=%llus +%u\n",
+				(unsigned long long) i->sector, i->size,
+				(unsigned long long) sector, size);
+		return -EBUSY;
+	}
+
+	return 0;
+}
+
+static void drbd_queue_peer_request(struct drbd_device *device, struct drbd_peer_request *peer_req)
+{
+	atomic_inc(&device->wait_for_actlog);
+	spin_lock(&device->submit.lock);
+	list_add_tail(&peer_req->w.list, &device->submit.peer_writes);
+	spin_unlock(&device->submit.lock);
+	queue_work(device->submit.wq, &device->submit.worker);
+	/* do_submit() may sleep internally on al_wait, too */
+	wake_up(&device->al_wait);
+}
+
+static struct drbd_peer_request *find_released_peer_request(struct drbd_resource *resource, unsigned int node_id, u64 dagtag)
+{
+	struct drbd_connection *connection;
+	struct drbd_peer_request *released_peer_req = NULL;
+
+	read_lock_irq(&resource->state_rwlock);
+	for_each_connection(connection, resource) {
+		struct drbd_peer_request *peer_req;
+
+		/* Skip if we are not connected. If we are in the process of
+		 * disconnecting, the requests on dagtag_wait_ee will be
+		 * cleared up. Do not interfere with that process. */
+		if (connection->cstate[NOW] < C_CONNECTED)
 			continue;
 
-		if (!i->local) {
-			/*
-			 * Our peer has sent a conflicting remote request; this
-			 * should not happen in a two-node setup.  Wait for the
-			 * earlier peer request to complete.
-			 */
-			err = drbd_wait_misc(device, i);
-			if (err)
-				goto out;
-			goto repeat;
+		spin_lock(&connection->peer_reqs_lock);
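+		/* Look for a request whose dagtag dependency on this node
+		 * is now satisfied (depend_dagtag <= dagtag). */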
+		list_for_each_entry(peer_req, &connection->dagtag_wait_ee, w.list) {
+			if (!peer_req->depend_dagtag ||
+					peer_req->depend_dagtag_node_id != node_id ||
+					peer_req->depend_dagtag > dagtag)
+				continue;
+
+			dynamic_drbd_dbg(peer_req->peer_device, "%s at %llus+%u: Wait for dagtag %llus from peer %u complete\n",
+					drbd_interval_type_str(&peer_req->i),
+					(unsigned long long) peer_req->i.sector, peer_req->i.size,
+					(unsigned long long) peer_req->depend_dagtag,
+					peer_req->depend_dagtag_node_id);
+
+			list_del(&peer_req->w.list);
+			released_peer_req = peer_req;
+			break;
 		}
+		spin_unlock(&connection->peer_reqs_lock);
 
-		equal = i->sector == sector && i->size == size;
-		if (resolve_conflicts) {
-			/*
-			 * If the peer request is fully contained within the
-			 * overlapping request, it can be considered overwritten
-			 * and thus superseded; otherwise, it will be retried
-			 * once all overlapping requests have completed.
-			 */
-			bool superseded = i->sector <= sector && i->sector +
-				       (i->size >> 9) >= sector + (size >> 9);
-
-			if (!equal)
-				drbd_alert(device, "Concurrent writes detected: "
-					       "local=%llus +%u, remote=%llus +%u, "
-					       "assuming %s came first\n",
-					  (unsigned long long)i->sector, i->size,
-					  (unsigned long long)sector, size,
-					  superseded ? "local" : "remote");
-
-			peer_req->w.cb = superseded ? e_send_superseded :
-						   e_send_retry_write;
-			list_add_tail(&peer_req->w.list, &device->done_ee);
-			/* put is in drbd_send_acks_wf() */
-			kref_get(&device->kref);
-			if (!queue_work(connection->ack_sender,
-					&peer_req->peer_device->send_acks_work))
-				kref_put(&device->kref, drbd_destroy_device);
+		if (released_peer_req)
+			break;
+	}
+	read_unlock_irq(&resource->state_rwlock);
 
-			err = -ENOENT;
-			goto out;
-		} else {
-			struct drbd_request *req =
-				container_of(i, struct drbd_request, i);
+	return released_peer_req;
+}
 
-			if (!equal)
-				drbd_alert(device, "Concurrent writes detected: "
-					       "local=%llus +%u, remote=%llus +%u\n",
-					  (unsigned long long)i->sector, i->size,
-					  (unsigned long long)sector, size);
+static void release_dagtag_wait(struct drbd_resource *resource, unsigned int node_id, u64 dagtag)
+{
+	struct drbd_peer_request *peer_req;
 
-			if (req->rq_state & RQ_LOCAL_PENDING ||
-			    !(req->rq_state & RQ_POSTPONED)) {
-				/*
-				 * Wait for the node with the discard flag to
-				 * decide if this request has been superseded
-				 * or needs to be retried.
-				 * Requests that have been superseded will
-				 * disappear from the write_requests tree.
-				 *
-				 * In addition, wait for the conflicting
-				 * request to finish locally before submitting
-				 * the conflicting peer request.
-				 */
-				err = drbd_wait_misc(device, &req->i);
-				if (err) {
-					_conn_request_state(connection, NS(conn, C_TIMEOUT), CS_HARD);
-					fail_postponed_requests(device, sector, size);
-					goto out;
-				}
-				goto repeat;
-			}
-			/*
-			 * Remember to restart the conflicting requests after
-			 * the new peer request has completed.
-			 */
-			peer_req->flags |= EE_RESTART_REQUESTS;
-		}
+	while ((peer_req = find_released_peer_request(resource, node_id, dagtag))) {
+		atomic_inc(&peer_req->peer_device->connection->backing_ee_cnt);
+		drbd_conflict_submit_peer_read(peer_req);
 	}
-	err = 0;
+}
+
+static void set_connection_dagtag(struct drbd_connection *connection, u64 dagtag)
+{
+	atomic64_set(&connection->last_dagtag_sector, dagtag);
+	set_bit(RECEIVED_DAGTAG, &connection->flags);
+
+	release_dagtag_wait(connection->resource, connection->peer_node_id, dagtag);
+}
+
+static void submit_peer_request_activity_log(struct drbd_peer_request *peer_req)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_device *device = peer_device->device;
+	int err;
+	int nr_al_extents = interval_to_al_extents(&peer_req->i);
+
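+	/* Fast path: a single, already-active AL extent can be submitted
+	 * directly; everything else goes through the submit worker. */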
+	if (nr_al_extents != 1 || !drbd_al_begin_io_fastpath(device, &peer_req->i)) {
+		drbd_queue_peer_request(device, peer_req);
+		return;
+	}
+
+	peer_req->flags |= EE_IN_ACTLOG;
+	atomic_sub(nr_al_extents, &device->wait_for_actlog_ecnt);
 
-    out:
+	err = drbd_submit_peer_request(peer_req);
 	if (err)
-		drbd_remove_epoch_entry_interval(device, peer_req);
-	return err;
+		drbd_cleanup_after_failed_submit_peer_write(peer_req);
+}
+
+void drbd_conflict_submit_peer_write(struct drbd_peer_request *peer_req)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_device *device = peer_device->device;
+	bool conflict = false;
+
+	spin_lock_irq(&device->interval_lock);
+	clear_bit(INTERVAL_SUBMIT_CONFLICT_QUEUED, &peer_req->i.flags);
+	conflict = drbd_find_conflict(device, &peer_req->i, 0);
+	if (!conflict)
+		set_bit(INTERVAL_SUBMITTED, &peer_req->i.flags);
+	spin_unlock_irq(&device->interval_lock);
+
+	if (!conflict)
+		submit_peer_request_activity_log(peer_req);
 }
 
-/* mirrored write */
+/* mirrored write
+ *
+ * Request handling flow:
+ *
+ *                      conflict
+ * receive_Data -----------------------+
+ *       |                             |
+ *       |                            ...
+ *       |                             |
+ *       |                             v
+ *       |                   drbd_do_submit_conflict
+ *       |                             |
+ *       |                             v
+ *       +------------------ drbd_conflict_submit_peer_write
+ *       |
+ *       v                         wait for AL
+ * submit_peer_request_activity_log --------> drbd_queue_peer_request
+ *       |                                         |
+ *       |                                        ...
+ *       |                                         |
+ *       |                                         v    AL extent active
+ *       |                                    do_submit ----------------+
+ *       |                                         |                    |
+ *       |                                         v                    v
+ *       |                              send_and_submit_pending   submit_fast_path
+ *       |                                         |                    |
+ *       v                                         v                    |
+ * drbd_submit_peer_request <-------- __drbd_submit_peer_request <------+
+ *       |
+ *      ... backing device
+ *       |
+ *       v
+ * drbd_peer_request_endio
+ *       |
+ *       v
+ * drbd_endio_write_sec_final
+ *       |
+ *      ... done_ee
+ *       |
+ *       v
+ * drbd_finish_peer_reqs
+ *       |
+ *       v
+ * e_end_block
+ *       |
+ *      ... via peer
+ *       |
+ *       v
+ * got_peer_ack
+ */
 static int receive_Data(struct drbd_connection *connection, struct packet_info *pi)
 {
 	struct drbd_peer_device *peer_device;
 	struct drbd_device *device;
 	struct net_conf *nc;
-	sector_t sector;
 	struct drbd_peer_request *peer_req;
-	struct p_data *p = pi->data;
-	u32 peer_seq = be32_to_cpu(p->seq_num);
-	u32 dp_flags;
+	struct drbd_peer_request_details d;
 	int err, tp;
+	bool conflict = false;
 
 	peer_device = conn_peer_device(connection, pi->vnr);
 	if (!peer_device)
 		return -EIO;
 	device = peer_device->device;
 
+	if (pi->cmd == P_TRIM)
+		D_ASSERT(peer_device, pi->size == 0);
+
+	p_req_detail_from_pi(connection, &d, pi);
+	pi->data = NULL;
+
 	if (!get_ldev(device)) {
 		int err2;
 
-		err = wait_for_and_update_peer_seq(peer_device, peer_seq);
-		drbd_send_ack_dp(peer_device, P_NEG_ACK, p, pi->size);
+		err = wait_for_and_update_peer_seq(peer_device, d.peer_seq);
+		drbd_send_ack_dp(peer_device, P_NEG_ACK, &d);
 		atomic_inc(&connection->current_epoch->epoch_size);
-		err2 = drbd_drain_block(peer_device, pi->size);
+		err2 = ignore_remaining_packet(connection, pi->size);
 		if (!err)
 			err = err2;
 		return err;
@@ -2402,71 +3279,107 @@ static int receive_Data(struct drbd_connection *connection, struct packet_info *
 	 * end of this function.
 	 */
 
-	sector = be64_to_cpu(p->sector);
-	peer_req = read_in_block(peer_device, p->block_id, sector, pi);
+	peer_req = drbd_alloc_peer_req(peer_device, GFP_TRY, d.bi_size,
+				       wire_flags_to_bio(connection, d.dp_flags));
 	if (!peer_req) {
 		put_ldev(device);
 		return -EIO;
 	}
+	peer_req->i.size = d.bi_size; /* storage size */
+	peer_req->i.sector = d.sector;
+	peer_req->i.type = INTERVAL_PEER_WRITE;
+
+	err = read_in_block(peer_req, &d);
+	if (err) {
+		drbd_free_peer_req(peer_req);
+		put_ldev(device);
+		return err;
+	}
+
+	if (pi->cmd == P_TRIM)
+		peer_req->flags |= EE_TRIM;
+	else if (pi->cmd == P_ZEROES)
+		peer_req->flags |= EE_ZEROOUT;
 
 	peer_req->w.cb = e_end_block;
 	peer_req->submit_jif = jiffies;
-	peer_req->flags |= EE_APPLICATION;
 
-	dp_flags = be32_to_cpu(p->dp_flags);
-	peer_req->opf = wire_flags_to_bio(connection, dp_flags);
 	if (pi->cmd == P_TRIM) {
 		D_ASSERT(peer_device, peer_req->i.size > 0);
-		D_ASSERT(peer_device, peer_req_op(peer_req) == REQ_OP_DISCARD);
-		D_ASSERT(peer_device, peer_req->pages == NULL);
+		D_ASSERT(peer_device, d.dp_flags & DP_DISCARD);
+		D_ASSERT(peer_device, bio_op(peer_req->bios.head) == REQ_OP_DISCARD);
 		/* need to play safe: an older DRBD sender
 		 * may mean zero-out while sending P_TRIM. */
 		if (0 == (connection->agreed_features & DRBD_FF_WZEROES))
 			peer_req->flags |= EE_ZEROOUT;
 	} else if (pi->cmd == P_ZEROES) {
 		D_ASSERT(peer_device, peer_req->i.size > 0);
-		D_ASSERT(peer_device, peer_req_op(peer_req) == REQ_OP_WRITE_ZEROES);
-		D_ASSERT(peer_device, peer_req->pages == NULL);
+		D_ASSERT(peer_device, d.dp_flags & DP_ZEROES);
+		D_ASSERT(peer_device, bio_op(peer_req->bios.head) == REQ_OP_WRITE_ZEROES);
 		/* Do (not) pass down BLKDEV_ZERO_NOUNMAP? */
-		if (dp_flags & DP_DISCARD)
+		if (d.dp_flags & DP_DISCARD)
 			peer_req->flags |= EE_TRIM;
-	} else if (peer_req->pages == NULL) {
-		D_ASSERT(device, peer_req->i.size == 0);
-		D_ASSERT(device, dp_flags & DP_FLUSH);
+	} else {
+		D_ASSERT(peer_device, peer_req->i.size > 0);
+		D_ASSERT(peer_device, bio_op(peer_req->bios.head) == REQ_OP_WRITE);
 	}
 
-	if (dp_flags & DP_MAY_SET_IN_SYNC)
+	if (d.dp_flags & DP_MAY_SET_IN_SYNC)
 		peer_req->flags |= EE_MAY_SET_IN_SYNC;
 
 	spin_lock(&connection->epoch_lock);
 	peer_req->epoch = connection->current_epoch;
 	atomic_inc(&peer_req->epoch->epoch_size);
 	atomic_inc(&peer_req->epoch->active);
+	if (peer_req->epoch->oldest_unconfirmed_peer_req == NULL)
+		peer_req->epoch->oldest_unconfirmed_peer_req = peer_req;
+
+	if (connection->resource->write_ordering == WO_BIO_BARRIER &&
+	    atomic_read(&peer_req->epoch->epoch_size) == 1) {
+		struct drbd_epoch *epoch;
+		/* Issue a barrier if we start a new epoch, unless the previous
+		   epoch contained only a single request that was itself already
+		   a barrier. */
+		epoch = list_entry(peer_req->epoch->list.prev, struct drbd_epoch, list);
+		if (epoch == peer_req->epoch) {
+			set_bit(DE_CONTAINS_A_BARRIER, &peer_req->epoch->flags);
+			peer_req->bios.head->bi_opf |= REQ_PREFLUSH | REQ_FUA;
+			peer_req->flags |= EE_IS_BARRIER;
+		} else {
+			if (atomic_read(&epoch->epoch_size) > 1 ||
+			    !test_bit(DE_CONTAINS_A_BARRIER, &epoch->flags)) {
+				set_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &epoch->flags);
+				set_bit(DE_CONTAINS_A_BARRIER, &peer_req->epoch->flags);
+				peer_req->bios.head->bi_opf |= REQ_PREFLUSH | REQ_FUA;
+				peer_req->flags |= EE_IS_BARRIER;
+			}
+		}
+	}
 	spin_unlock(&connection->epoch_lock);
 
 	rcu_read_lock();
-	nc = rcu_dereference(peer_device->connection->net_conf);
+	nc = rcu_dereference(connection->transport.net_conf);
 	tp = nc->two_primaries;
-	if (peer_device->connection->agreed_pro_version < 100) {
+	if (connection->agreed_pro_version < 100) {
 		switch (nc->wire_protocol) {
 		case DRBD_PROT_C:
-			dp_flags |= DP_SEND_WRITE_ACK;
+			d.dp_flags |= DP_SEND_WRITE_ACK;
 			break;
 		case DRBD_PROT_B:
-			dp_flags |= DP_SEND_RECEIVE_ACK;
+			d.dp_flags |= DP_SEND_RECEIVE_ACK;
 			break;
 		}
 	}
 	rcu_read_unlock();
 
-	if (dp_flags & DP_SEND_WRITE_ACK) {
+	if (d.dp_flags & DP_SEND_WRITE_ACK) {
 		peer_req->flags |= EE_SEND_WRITE_ACK;
-		inc_unacked(device);
+		inc_unacked(peer_device);
 		/* corresponding dec_unacked() in e_end_block(),
 		 * or in _drbd_clear_done_ee(), respectively */
 	}
 
-	if (dp_flags & DP_SEND_RECEIVE_ACK) {
+	if (d.dp_flags & DP_SEND_RECEIVE_ACK) {
 		/* I really don't like it that the receiver thread
 		 * sends on the msock, but anyways */
 		drbd_send_ack(peer_device, P_RECV_ACK, peer_req);
@@ -2474,66 +3387,137 @@ static int receive_Data(struct drbd_connection *connection, struct packet_info *
 
 	if (tp) {
 		/* two primaries implies protocol C */
-		D_ASSERT(device, dp_flags & DP_SEND_WRITE_ACK);
-		peer_req->flags |= EE_IN_INTERVAL_TREE;
-		err = wait_for_and_update_peer_seq(peer_device, peer_seq);
+		D_ASSERT(device, d.dp_flags & DP_SEND_WRITE_ACK);
+		err = wait_for_and_update_peer_seq(peer_device, d.peer_seq);
 		if (err)
-			goto out_interrupted;
-		spin_lock_irq(&device->resource->req_lock);
-		err = handle_write_conflicts(device, peer_req);
-		if (err) {
-			spin_unlock_irq(&device->resource->req_lock);
-			if (err == -ENOENT) {
-				put_ldev(device);
-				return 0;
-			}
-			goto out_interrupted;
-		}
+			goto out;
 	} else {
-		update_peer_seq(peer_device, peer_seq);
-		spin_lock_irq(&device->resource->req_lock);
-	}
-	/* TRIM and is processed synchronously,
-	 * we wait for all pending requests, respectively wait for
-	 * active_ee to become empty in drbd_submit_peer_request();
-	 * better not add ourselves here. */
-	if ((peer_req->flags & (EE_TRIM | EE_ZEROOUT)) == 0)
-		list_add_tail(&peer_req->w.list, &device->active_ee);
-	spin_unlock_irq(&device->resource->req_lock);
-
-	if (device->state.conn == C_SYNC_TARGET)
-		wait_event(device->ee_wait, !overlapping_resync_write(device, peer_req));
-
-	if (device->state.pdsk < D_INCONSISTENT) {
-		/* In case we have the only disk of the cluster, */
-		drbd_set_out_of_sync(peer_device, peer_req->i.sector, peer_req->i.size);
-		peer_req->flags &= ~EE_MAY_SET_IN_SYNC;
-		drbd_al_begin_io(device, &peer_req->i);
-		peer_req->flags |= EE_CALL_AL_COMPLETE_IO;
+		update_peer_seq(peer_device, d.peer_seq);
 	}
 
-	err = drbd_submit_peer_request(peer_req);
-	if (!err)
-		return 0;
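+	/* Advance the write ordering dagtag by the size of this write so
+	 * that dependent requests arriving via other connections can wait
+	 * for it. */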
+	peer_req->dagtag_sector = atomic64_read(&connection->last_dagtag_sector) + (peer_req->i.size >> 9);
 
-	/* don't care for the reason here */
-	drbd_err(device, "submit failed, triggering re-connect\n");
-	spin_lock_irq(&device->resource->req_lock);
-	list_del(&peer_req->w.list);
-	drbd_remove_epoch_entry_interval(device, peer_req);
-	spin_unlock_irq(&device->resource->req_lock);
-	if (peer_req->flags & EE_CALL_AL_COMPLETE_IO) {
-		peer_req->flags &= ~EE_CALL_AL_COMPLETE_IO;
-		drbd_al_complete_io(device, &peer_req->i);
+	drbd_wait_for_activity_log_extents(peer_req);
+
+	atomic_inc(&connection->active_ee_cnt);
+
+	spin_lock_irq(&connection->peer_reqs_lock);
+	list_add_tail(&peer_req->recv_order, &connection->peer_requests);
+	peer_req->flags |= EE_ON_RECV_ORDER;
+	spin_unlock_irq(&connection->peer_reqs_lock);
+
+	/* Note: this now may or may not be "hot" in the activity log.
+	 * Still, it is the best time to record that we need to set the
+	 * out-of-sync bit, if we delay that until drbd_submit_peer_request(),
+	 * we may introduce a race with some re-attach on the peer.
+	 * Unless we want to guarantee that we drain all in-flight IO
+	 * whenever we receive a state change. Which I'm not sure about.
+	 * Use the EE_SET_OUT_OF_SYNC flag, to be acted on just before
+	 * the actual submit, when we can be sure it is "hot".
+	 */
+	if (peer_device->disk_state[NOW] < D_INCONSISTENT) {
+		peer_req->flags &= ~EE_MAY_SET_IN_SYNC;
+		peer_req->flags |= EE_SET_OUT_OF_SYNC;
 	}
 
-out_interrupted:
-	drbd_may_finish_epoch(connection, peer_req->epoch, EV_PUT | EV_CLEANUP);
+	spin_lock_irq(&device->interval_lock);
+	if (tp) {
+		err = drbd_peer_write_conflicts(peer_req);
+		if (err) {
+			spin_unlock_irq(&device->interval_lock);
+			goto out_del_list;
+		}
+	}
+	conflict = drbd_find_conflict(device, &peer_req->i, 0);
+	drbd_insert_interval(&device->requests, &peer_req->i);
+	if (!conflict)
+		set_bit(INTERVAL_SUBMITTED, &peer_req->i.flags);
+	spin_unlock_irq(&device->interval_lock);
+
+	/* The connection dagtag may only be set after inserting the interval
+	 * into the tree, so that requests that were waiting for the dagtag
+	 * enter the interval tree after the request with the dagtag itself. */
+	set_connection_dagtag(connection, peer_req->dagtag_sector);
+
+	if (!conflict)
+		submit_peer_request_activity_log(peer_req);
+	/* ldev_ref_transfer: put_ldev in peer_req endio */
+	return 0;
+
+out_del_list:
+	spin_lock_irq(&connection->peer_reqs_lock);
+	peer_req->flags &= ~EE_ON_RECV_ORDER;
+	list_del(&peer_req->recv_order);
+	spin_unlock_irq(&connection->peer_reqs_lock);
+
+	atomic_dec(&connection->active_ee_cnt);
+	atomic_sub(interval_to_al_extents(&peer_req->i), &device->wait_for_actlog_ecnt);
+
+out:
+	if (peer_req->flags & EE_SEND_WRITE_ACK)
+		dec_unacked(peer_device);
+	drbd_may_finish_epoch(connection, peer_req->epoch, EV_PUT + EV_CLEANUP);
 	put_ldev(device);
-	drbd_free_peer_req(device, peer_req);
+	drbd_free_peer_req(peer_req);
 	return err;
 }
 
+/*
+ * To be called when drbd_submit_peer_request() fails for a peer write request.
+ */
+void drbd_cleanup_after_failed_submit_peer_write(struct drbd_peer_request *peer_req)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_device *device = peer_device->device;
+	struct drbd_connection *connection = peer_device->connection;
+
+	drbd_err_ratelimit(peer_device, "submit failed, triggering re-connect\n");
+
+	if (peer_req->flags & EE_IN_ACTLOG)
+		drbd_al_complete_io(device, &peer_req->i);
+
+	drbd_remove_peer_req_interval(peer_req);
+
+	drbd_may_finish_epoch(connection, peer_req->epoch, EV_PUT + EV_CLEANUP);
+	put_ldev(device);
+	drbd_free_peer_req(peer_req);
+	change_cstate(connection, C_PROTOCOL_ERROR, CS_HARD);
+}
+
+/* Possibly "cancel" and forget about all peer_requests that had still been
+ * waiting for the activity log (wfa) when the connection to their peer failed,
+ * and pretend we never received them.
+ */
+void drbd_cleanup_peer_requests_wfa(struct drbd_device *device, struct list_head *cleanup)
+{
+	struct drbd_connection *connection;
+	struct drbd_peer_request *peer_req, *pr_tmp;
+
+	write_lock_irq(&device->resource->state_rwlock);
+	list_for_each_entry(peer_req, cleanup, w.list) {
+		atomic_dec(&peer_req->peer_device->connection->active_ee_cnt);
+		drbd_remove_peer_req_interval(peer_req);
+	}
+	write_unlock_irq(&device->resource->state_rwlock);
+
+	list_for_each_entry_safe(peer_req, pr_tmp, cleanup, w.list) {
+		atomic_sub(interval_to_al_extents(&peer_req->i), &device->wait_for_actlog_ecnt);
+		atomic_dec(&device->wait_for_actlog);
+		if (peer_req->flags & EE_SEND_WRITE_ACK)
+			dec_unacked(peer_req->peer_device);
+		list_del_init(&peer_req->w.list);
+		drbd_may_finish_epoch(peer_req->peer_device->connection, peer_req->epoch, EV_PUT | EV_CLEANUP);
+		drbd_free_peer_req(peer_req);
+		put_ldev(device);
+	}
+	/*
+	 * We changed (likely: cleared) active_ee_cnt for "at least one" connection.
+	 * We should wake potential waiters, just in case.
+	 */
+	for_each_connection(connection, device->resource)
+		wake_up(&connection->ee_wait);
+}
+
 /* We may throttle resync, if the lower device seems to be busy,
  * and current sync rate is above c_min_rate.
  *
@@ -2545,69 +3529,45 @@ static int receive_Data(struct drbd_connection *connection, struct packet_info *
  * The current sync rate used here uses only the most recent two step marks,
  * to have a short time average so we can react faster.
  */
-bool drbd_rs_should_slow_down(struct drbd_peer_device *peer_device, sector_t sector,
-		bool throttle_if_app_is_waiting)
+bool drbd_rs_c_min_rate_throttle(struct drbd_peer_device *peer_device)
 {
 	struct drbd_device *device = peer_device->device;
-	struct lc_element *tmp;
-	bool throttle = drbd_rs_c_min_rate_throttle(device);
-
-	if (!throttle || throttle_if_app_is_waiting)
-		return throttle;
-
-	spin_lock_irq(&device->al_lock);
-	tmp = lc_find(device->resync, BM_SECT_TO_EXT(sector));
-	if (tmp) {
-		struct bm_extent *bm_ext = lc_entry(tmp, struct bm_extent, lce);
-		if (test_bit(BME_PRIORITY, &bm_ext->flags))
-			throttle = false;
-		/* Do not slow down if app IO is already waiting for this extent,
-		 * and our progress is necessary for application IO to complete. */
-	}
-	spin_unlock_irq(&device->al_lock);
-
-	return throttle;
-}
-
-bool drbd_rs_c_min_rate_throttle(struct drbd_device *device)
-{
 	struct gendisk *disk = device->ldev->backing_bdev->bd_disk;
 	unsigned long db, dt, dbdt;
 	unsigned int c_min_rate;
 	int curr_events;
 
 	rcu_read_lock();
-	c_min_rate = rcu_dereference(device->ldev->disk_conf)->c_min_rate;
+	c_min_rate = rcu_dereference(peer_device->conf)->c_min_rate;
 	rcu_read_unlock();
 
 	/* feature disabled? */
 	if (c_min_rate == 0)
 		return false;
 
-	curr_events = (int)part_stat_read_accum(disk->part0, sectors) -
-			atomic_read(&device->rs_sect_ev);
+	curr_events = (int)part_stat_read_accum(disk->part0, sectors)
+		- atomic_read(&device->rs_sect_ev);
 
-	if (atomic_read(&device->ap_actlog_cnt)
-	    || curr_events - device->rs_last_events > 64) {
+	if (atomic_read(&device->ap_actlog_cnt) || curr_events - peer_device->rs_last_events > 64) {
 		unsigned long rs_left;
 		int i;
 
-		device->rs_last_events = curr_events;
+		peer_device->rs_last_events = curr_events;
 
 		/* sync speed average over the last 2*DRBD_SYNC_MARK_STEP,
 		 * approx. */
-		i = (device->rs_last_mark + DRBD_SYNC_MARKS-1) % DRBD_SYNC_MARKS;
+		i = (peer_device->rs_last_mark + DRBD_SYNC_MARKS-1) % DRBD_SYNC_MARKS;
 
-		if (device->state.conn == C_VERIFY_S || device->state.conn == C_VERIFY_T)
-			rs_left = device->ov_left;
+		if (peer_device->repl_state[NOW] == L_VERIFY_S || peer_device->repl_state[NOW] == L_VERIFY_T)
+			rs_left = atomic64_read(&peer_device->ov_left);
 		else
-			rs_left = drbd_bm_total_weight(device) - device->rs_failed;
+			rs_left = drbd_bm_total_weight(peer_device) - peer_device->rs_failed;
 
-		dt = ((long)jiffies - (long)device->rs_mark_time[i]) / HZ;
+		dt = ((long)jiffies - (long)peer_device->rs_mark_time[i]) / HZ;
 		if (!dt)
 			dt++;
-		db = device->rs_mark_left[i] - rs_left;
-		dbdt = Bit2KB(db/dt);
+		db = peer_device->rs_mark_left[i] - rs_left;
+		dbdt = device_bit_to_kb(device, db/dt);
 
 		if (dbdt > c_min_rate)
 			return true;
@@ -2615,16 +3575,199 @@ bool drbd_rs_c_min_rate_throttle(struct drbd_device *device)
 	return false;
 }
 
-static int receive_DataRequest(struct drbd_connection *connection, struct packet_info *pi)
+void drbd_verify_skipped_block(struct drbd_peer_device *peer_device,
+		const sector_t sector, const unsigned int size)
+{
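+	/* Coalesce runs of adjacent skipped blocks into a single report. */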
+	++peer_device->ov_skipped;
+	if (peer_device->ov_last_skipped_start + peer_device->ov_last_skipped_size == sector) {
+		peer_device->ov_last_skipped_size += size>>9;
+	} else {
+		ov_skipped_print(peer_device);
+		peer_device->ov_last_skipped_start = sector;
+		peer_device->ov_last_skipped_size = size>>9;
+	}
+}
+
+static void drbd_cleanup_peer_read(
+		struct drbd_peer_request *peer_req, bool in_interval_tree)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_device *device = peer_device->device;
+	struct drbd_connection *connection = peer_device->connection;
+
+	if (in_interval_tree)
+		drbd_remove_peer_req_interval(peer_req);
+
+	if (peer_req->i.type != INTERVAL_PEER_READ)
+		atomic_sub(peer_req->i.size >> SECTOR_SHIFT, &device->rs_sect_ev);
+	dec_unacked(peer_device);
+
+	drbd_free_peer_req(peer_req);
+	put_ldev(device);
+
+	if (atomic_dec_and_test(&connection->backing_ee_cnt))
+		wake_up(&connection->ee_wait);
+}
+
+void drbd_conflict_submit_peer_read(struct drbd_peer_request *peer_req)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_device *device = peer_device->device;
+	bool submit = true;
+	bool interval_tree = false;
+	bool canceled = false;
+
+	/* Hold resync reads until conflicts have cleared so that we know which
+	 * bitmap bits we can safely clear. Also add verify requests on the
+	 * target to the interval tree so that conflicts can be detected.
+	 * Verify requests on the source have already been added. */
+	if (drbd_interval_is_resync(&peer_req->i) || peer_req->i.type == INTERVAL_OV_READ_TARGET) {
+		bool conflict = false;
+		interval_tree = true;
+		spin_lock_irq(&device->interval_lock);
+		clear_bit(INTERVAL_SUBMIT_CONFLICT_QUEUED, &peer_req->i.flags);
+		canceled = test_bit(INTERVAL_CANCELED, &peer_req->i.flags);
+		conflict = drbd_find_conflict(device, &peer_req->i, 0);
+		if (drbd_interval_empty(&peer_req->i)) {
+			if (conflict)
+				set_bit(INTERVAL_CONFLICT, &peer_req->i.flags);
+			drbd_insert_interval(&device->requests, &peer_req->i);
+		}
+		if (!conflict || drbd_interval_is_verify(&peer_req->i))
+			set_bit(INTERVAL_SUBMITTED, &peer_req->i.flags);
+		else
+			submit = false;
+		spin_unlock_irq(&device->interval_lock);
+	}
+
+	/* Wait if there are conflicts unless this is a verify request, in
+	 * which case we submit it anyway but skip the block if it conflicted. */
+	if (submit) {
+		int err = drbd_submit_peer_request(peer_req);
+		if (err) {
+			if (drbd_ratelimit())
+				drbd_err(peer_device, "submit failed, triggering re-connect\n");
+
+			drbd_cleanup_peer_read(peer_req, interval_tree);
+			change_cstate(peer_device->connection, C_PROTOCOL_ERROR, CS_HARD);
+		}
+	} else if (canceled) {
+		drbd_cleanup_peer_read(peer_req, interval_tree);
+	}
+}
+
+static bool need_to_wait_for_dagtag_of_peer_request(struct drbd_peer_request *peer_req)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_device *device = peer_device->device;
+	struct drbd_resource *resource = device->resource;
+	struct drbd_connection *connection;
+	bool ret = false;
+
+	rcu_read_lock();
+	connection = drbd_connection_by_node_id(resource, peer_req->depend_dagtag_node_id);
+	if (connection && connection->cstate[NOW] == C_CONNECTED) {
+		if (atomic64_read(&connection->last_dagtag_sector) < peer_req->depend_dagtag)
+			ret = true;
+	}
+	/*
+	 * I am a weak node if the resync source (myself) is not connected to the
+	 * depend_dagtag_node_id. The resync target will abort this resync soon.
+	 * See check_resync_source().
+	 */
+	rcu_read_unlock();
+	return ret;
+}
+
+static void drbd_peer_resync_read_cancel(struct drbd_peer_request *peer_req)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	sector_t sector = peer_req->i.sector;
+	int size = peer_req->i.size;
+	u64 block_id = peer_req->block_id;
+
+	if (peer_req->i.type == INTERVAL_OV_READ_SOURCE) {
+		/* P_OV_REPLY */
+		dec_rs_pending(peer_device);
+		drbd_send_ov_result(peer_device, sector, size, block_id, OV_RESULT_SKIP);
+	} else if (peer_req->i.type == INTERVAL_OV_READ_TARGET) {
+		/* P_OV_REQUEST */
+		drbd_verify_skipped_block(peer_device, sector, size);
+		verify_progress(peer_device, sector, size);
+		drbd_send_ack_be(peer_device, P_RS_CANCEL, sector, size, block_id);
+	} else {
+		/* P_RS_DATA_REQUEST etc */
+		drbd_send_ack_be(peer_device, P_RS_CANCEL, sector, size, block_id);
+	}
+}
+
+static void drbd_peer_resync_read(struct drbd_peer_request *peer_req)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_device *device = peer_device->device;
+	struct drbd_connection *connection = peer_device->connection;
+	unsigned int size = peer_req->i.size;
+
+	if (connection->peer_role[NOW] != R_PRIMARY &&
+			drbd_rs_c_min_rate_throttle(peer_device))
+		schedule_timeout_uninterruptible(HZ/10);
+
+	atomic_add(size >> 9, &device->rs_sect_ev);
+
+	/* dagtag 0 means that there is no dependency to be fulfilled,
+	 * so we can ignore it.
+	 * If we are the dependent node, we can also ignore the dagtag
+	 * dependency, because the request with this dagtag must already be in
+	 * the interval tree, so the read will wait until the interval tree
+	 * conflict is resolved before being submitted. */
+	if (peer_req->depend_dagtag &&
+	    peer_req->depend_dagtag_node_id != device->resource->res_opts.node_id &&
+	    need_to_wait_for_dagtag_of_peer_request(peer_req)) {
+		dynamic_drbd_dbg(peer_device,
+				 "%s at %llus+%u: Waiting for dagtag %llus from peer %u\n",
+				 drbd_interval_type_str(&peer_req->i),
+				 (unsigned long long)peer_req->i.sector, size,
+				 (unsigned long long)peer_req->depend_dagtag,
+				 peer_req->depend_dagtag_node_id);
+		spin_lock_irq(&connection->peer_reqs_lock);
+		list_add_tail(&peer_req->w.list, &connection->dagtag_wait_ee);
+		spin_unlock_irq(&connection->peer_reqs_lock);
+		return;
+	}
+
+	atomic_inc(&connection->backing_ee_cnt);
+	drbd_conflict_submit_peer_read(peer_req);
+}
+
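+/*
+ * Allocate a digest_info with a trailing buffer of digest_size bytes and
+ * receive the digest payload from the peer into it.
+ */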
+static int receive_digest(struct drbd_peer_request *peer_req, int digest_size)
+{
+	struct digest_info *di = NULL;
+
+	di = kmalloc(sizeof(*di) + digest_size, GFP_NOIO);
+	if (!di)
+		return -ENOMEM;
+
+	di->digest_size = digest_size;
+	di->digest = (((char *)di)+sizeof(struct digest_info));
+
+	peer_req->digest = di;
+	peer_req->flags |= EE_HAS_DIGEST;
+
+	return drbd_recv_into(peer_req->peer_device->connection, di->digest, digest_size);
+}
+
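+/*
+ * Common handling for read requests from a peer: application reads
+ * (P_DATA_REQUEST), resync reads and online verify requests. Validates the
+ * request, allocates a peer request and hands it to the conflict-aware
+ * submission paths above.
+ */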
+static int receive_common_data_request(struct drbd_connection *connection, struct packet_info *pi,
+		struct p_block_req_common *p,
+		unsigned int depend_dagtag_node_id, u64 depend_dagtag)
 {
 	struct drbd_peer_device *peer_device;
 	struct drbd_device *device;
-	sector_t sector;
+	sector_t sector = be64_to_cpu(p->sector);
 	sector_t capacity;
 	struct drbd_peer_request *peer_req;
-	struct digest_info *di = NULL;
-	int size, verb;
-	struct p_block_req *p =	pi->data;
+	int size = be32_to_cpu(p->blksize);
+	enum drbd_disk_state min_d_state;
+	int err;
 
 	peer_device = conn_peer_device(connection, pi->vnr);
 	if (!peer_device)
@@ -2632,67 +3775,135 @@ static int receive_DataRequest(struct drbd_connection *connection, struct packet
 	device = peer_device->device;
 	capacity = get_capacity(device->vdisk);
 
-	sector = be64_to_cpu(p->sector);
-	size   = be32_to_cpu(p->blksize);
-
 	if (size <= 0 || !IS_ALIGNED(size, 512) || size > DRBD_MAX_BIO_SIZE) {
-		drbd_err(device, "%s:%d: sector: %llus, size: %u\n", __FILE__, __LINE__,
+		drbd_err(peer_device, "%s:%d: sector: %llus, size: %u\n", __FILE__, __LINE__,
 				(unsigned long long)sector, size);
 		return -EINVAL;
 	}
 	if (sector + (size>>9) > capacity) {
-		drbd_err(device, "%s:%d: sector: %llus, size: %u\n", __FILE__, __LINE__,
+		drbd_err(peer_device, "%s:%d: sector: %llus, size: %u\n", __FILE__, __LINE__,
 				(unsigned long long)sector, size);
 		return -EINVAL;
 	}
 
-	if (!get_ldev_if_state(device, D_UP_TO_DATE)) {
-		verb = 1;
+	/* Tell the target to retry, waiting for the rescheduled
+	 * drbd_start_resync to complete. Otherwise, concurrent sending of
+	 * out-of-sync information and resync may lead to data loss.
+	 */
+	if (peer_device->repl_state[NOW] == L_WF_BITMAP_S ||
+			peer_device->repl_state[NOW] == L_STARTING_SYNC_S) {
 		switch (pi->cmd) {
-		case P_DATA_REQUEST:
-			drbd_send_ack_rp(peer_device, P_NEG_DREPLY, p);
-			break;
-		case P_RS_THIN_REQ:
 		case P_RS_DATA_REQUEST:
+		case P_RS_DAGTAG_REQ:
 		case P_CSUM_RS_REQUEST:
-		case P_OV_REQUEST:
-			drbd_send_ack_rp(peer_device, P_NEG_RS_DREPLY , p);
-			break;
-		case P_OV_REPLY:
-			verb = 0;
-			dec_rs_pending(peer_device);
-			drbd_send_ack_ex(peer_device, P_OV_RESULT, sector, size, ID_IN_SYNC);
-			break;
+		case P_RS_CSUM_DAGTAG_REQ:
+		case P_RS_THIN_REQ:
+		case P_RS_THIN_DAGTAG_REQ:
+			drbd_send_ack_be(peer_device, P_RS_CANCEL, sector, size, p->block_id);
+			return ignore_remaining_packet(connection, pi->size);
 		default:
-			BUG();
+			break;
 		}
-		if (verb && drbd_ratelimit())
-			drbd_err(device, "Can not satisfy peer's read request, "
-			    "no local data.\n");
-
-		/* drain possibly payload */
-		return drbd_drain_block(peer_device, pi->size);
 	}
 
-	/* GFP_NOIO, because we must not cause arbitrary write-out: in a DRBD
-	 * "criss-cross" setup, that might cause write-out on some other DRBD,
-	 * which in turn might block on the other node at this very place.  */
-	peer_req = drbd_alloc_peer_req(peer_device, p->block_id, sector, size,
-			size, GFP_NOIO);
-	if (!peer_req) {
-		put_ldev(device);
-		return -ENOMEM;
+	min_d_state = pi->cmd == P_DATA_REQUEST ? D_UP_TO_DATE : D_OUTDATED;
+	if (!get_ldev_if_state(device, min_d_state)) {
+		switch (pi->cmd) {
+		case P_DATA_REQUEST:
+			drbd_send_ack_be(peer_device, P_NEG_DREPLY, sector, size, p->block_id);
+			break;
+		case P_OV_REQUEST:
+		case P_OV_DAGTAG_REQ:
+			drbd_verify_skipped_block(peer_device, sector, size);
+			verify_progress(peer_device, sector, size);
+			drbd_send_ack_be(peer_device, P_RS_CANCEL, sector, size, p->block_id);
+			break;
+		case P_RS_DATA_REQUEST:
+		case P_RS_DAGTAG_REQ:
+		case P_CSUM_RS_REQUEST:
+		case P_RS_CSUM_DAGTAG_REQ:
+		case P_RS_THIN_REQ:
+		case P_RS_THIN_DAGTAG_REQ:
+			if (peer_device->repl_state[NOW] == L_PAUSED_SYNC_S)
+				drbd_send_ack_be(peer_device, P_RS_CANCEL, sector, size, p->block_id);
+			else
+				drbd_send_ack_be(peer_device, P_NEG_RS_DREPLY, sector, size, p->block_id);
+			break;
+		default:
+			BUG();
+		}
+
+		if (peer_device->repl_state[NOW] != L_PAUSED_SYNC_S)
+			drbd_err_ratelimit(device,
+				"Can not satisfy peer's read request, no local data.\n");
+
+		/* drain possible payload */
+		return ignore_remaining_packet(connection, pi->size);
+	}
+
+	if (pi->cmd != P_DATA_REQUEST
+	&& !IS_ALIGNED(size, bm_block_size(device->bitmap))
+	&& (sector + (size >> 9) != device->bitmap->bm_dev_capacity)) {
+		drbd_warn_ratelimit(peer_device,
+			"Unaligned %s request (%u vs %u) at %llu; may lead to hung or repeating resync.\n",
+			drbd_packet_name(pi->cmd), size, bm_block_size(device->bitmap), sector);
+		/* For now, try to continue anyway */
+	}
+
+	inc_unacked(peer_device);
+
+	peer_req = drbd_alloc_peer_req(peer_device, GFP_TRY, size, REQ_OP_READ);
+	err = -ENOMEM;
+	if (!peer_req)
+		goto fail;
+	peer_req->i.size = size;
+	peer_req->i.sector = sector;
+	peer_req->block_id = p->block_id;
+	peer_req->depend_dagtag_node_id = depend_dagtag_node_id;
+	peer_req->depend_dagtag = depend_dagtag;
+	/* no longer valid, about to call drbd_recv again for the digest... */
+	p = NULL;
+	pi->data = NULL;
+
+	if (peer_device->repl_state[NOW] == L_AHEAD) {
+		if (pi->cmd == P_DATA_REQUEST) {
+			/* P_DATA_REQUEST originates from a Primary,
+			 * so if I am "Ahead", the Primary would be "Behind":
+			 * Can not happen. */
+			drbd_err_ratelimit(peer_device, "received P_DATA_REQUEST while L_AHEAD\n");
+			err = -EINVAL;
+			goto fail2;
+		}
+		if (connection->agreed_pro_version >= 115) {
+			switch (pi->cmd) {
+			/* case P_DATA_REQUEST: see above, not based on protocol version */
+			case P_OV_REQUEST:
+			case P_OV_DAGTAG_REQ:
+				drbd_verify_skipped_block(peer_device, sector, size);
+				verify_progress(peer_device, sector, size);
+				fallthrough;
+			case P_RS_DATA_REQUEST:
+			case P_RS_DAGTAG_REQ:
+			case P_CSUM_RS_REQUEST:
+			case P_RS_CSUM_DAGTAG_REQ:
+			case P_RS_THIN_REQ:
+			case P_RS_THIN_DAGTAG_REQ:
+				err = drbd_send_ack(peer_device, P_RS_CANCEL_AHEAD, peer_req);
+				goto fail2;
+			default:
+				BUG();
+			}
+		}
 	}
-	peer_req->opf = REQ_OP_READ;
 
 	switch (pi->cmd) {
 	case P_DATA_REQUEST:
 		peer_req->w.cb = w_e_end_data_req;
-		/* application IO, don't drbd_rs_begin_io */
-		peer_req->flags |= EE_APPLICATION;
+		peer_req->i.type = INTERVAL_PEER_READ;
 		goto submit;
 
 	case P_RS_THIN_REQ:
+	case P_RS_THIN_DAGTAG_REQ:
 		/* If at some point in the future we have a smart way to
 		   find out if this data block is completely deallocated,
 		   then we would do something smarter here than reading
@@ -2700,56 +3911,44 @@ static int receive_DataRequest(struct drbd_connection *connection, struct packet
 		peer_req->flags |= EE_RS_THIN_REQ;
 		fallthrough;
 	case P_RS_DATA_REQUEST:
+	case P_RS_DAGTAG_REQ:
+		peer_req->i.type = INTERVAL_RESYNC_READ;
 		peer_req->w.cb = w_e_end_rsdata_req;
-		/* used in the sector offset progress display */
-		device->bm_resync_fo = BM_SECT_TO_BIT(sector);
 		break;
 
-	case P_OV_REPLY:
 	case P_CSUM_RS_REQUEST:
-		di = kmalloc(sizeof(*di) + pi->size, GFP_NOIO);
-		if (!di)
-			goto out_free_e;
-
-		di->digest_size = pi->size;
-		di->digest = (((char *)di)+sizeof(struct digest_info));
-
-		peer_req->digest = di;
-		peer_req->flags |= EE_HAS_DIGEST;
-
-		if (drbd_recv_all(peer_device->connection, di->digest, pi->size))
-			goto out_free_e;
-
-		if (pi->cmd == P_CSUM_RS_REQUEST) {
-			D_ASSERT(device, peer_device->connection->agreed_pro_version >= 89);
-			peer_req->w.cb = w_e_end_csum_rs_req;
-			/* used in the sector offset progress display */
-			device->bm_resync_fo = BM_SECT_TO_BIT(sector);
-			/* remember to report stats in drbd_resync_finished */
-			device->use_csums = true;
-		} else if (pi->cmd == P_OV_REPLY) {
-			/* track progress, we may need to throttle */
-			atomic_add(size >> 9, &device->rs_sect_in);
-			peer_req->w.cb = w_e_end_ov_reply;
-			dec_rs_pending(peer_device);
-			/* drbd_rs_begin_io done when we sent this request,
-			 * but accounting still needs to be done. */
-			goto submit_for_resync;
-		}
+	case P_RS_CSUM_DAGTAG_REQ:
+		D_ASSERT(device, connection->agreed_pro_version >= 89);
+		peer_req->i.type = INTERVAL_RESYNC_READ;
+
+		err = receive_digest(peer_req, pi->size);
+		if (err)
+			goto fail2;
+
+		peer_req->w.cb = w_e_end_rsdata_req;
+		/* remember to report stats in drbd_resync_finished */
+		peer_device->use_csums = true;
 		break;
 
 	case P_OV_REQUEST:
-		if (device->ov_start_sector == ~(sector_t)0 &&
-		    peer_device->connection->agreed_pro_version >= 90) {
+	case P_OV_DAGTAG_REQ:
+		peer_req->i.type = INTERVAL_OV_READ_TARGET;
+		peer_device->ov_position = sector;
+		if (peer_device->ov_start_sector == ~(sector_t)0 &&
+		    connection->agreed_pro_version >= 90) {
 			unsigned long now = jiffies;
 			int i;
-			device->ov_start_sector = sector;
-			device->ov_position = sector;
-			device->ov_left = drbd_bm_bits(device) - BM_SECT_TO_BIT(sector);
-			device->rs_total = device->ov_left;
+			unsigned long ov_left = drbd_bm_bits(device)
+					- bm_sect_to_bit(device->bitmap, sector);
+			atomic64_set(&peer_device->ov_left, ov_left);
+			peer_device->ov_start_sector = sector;
+			peer_device->ov_skipped = 0;
+			peer_device->rs_total = ov_left;
+			peer_device->rs_last_writeout = now;
+			peer_device->rs_last_progress_report_ts = now;
 			for (i = 0; i < DRBD_SYNC_MARKS; i++) {
-				device->rs_mark_left[i] = device->ov_left;
-				device->rs_mark_time[i] = now;
+				peer_device->rs_mark_left[i] = ov_left;
+				peer_device->rs_mark_time[i] = now;
 			}
 			drbd_info(device, "Online Verify start sector: %llu\n",
 					(unsigned long long)sector);
@@ -2761,146 +3960,372 @@ static int receive_DataRequest(struct drbd_connection *connection, struct packet
 		BUG();
 	}
 
-	/* Throttle, drbd_rs_begin_io and submit should become asynchronous
-	 * wrt the receiver, but it is not as straightforward as it may seem.
-	 * Various places in the resync start and stop logic assume resync
-	 * requests are processed in order, requeuing this on the worker thread
-	 * introduces a bunch of new code for synchronization between threads.
-	 *
-	 * Unlimited throttling before drbd_rs_begin_io may stall the resync
-	 * "forever", throttling after drbd_rs_begin_io will lock that extent
-	 * for application writes for the same time.  For now, just throttle
-	 * here, where the rest of the code expects the receiver to sleep for
-	 * a while, anyways.
-	 */
+submit:
+	spin_lock_irq(&connection->peer_reqs_lock);
+	list_add_tail(&peer_req->recv_order, &connection->peer_reads);
+	peer_req->flags |= EE_ON_RECV_ORDER;
+	spin_unlock_irq(&connection->peer_reqs_lock);
+
+	if (pi->cmd == P_DATA_REQUEST) {
+		atomic_inc(&connection->backing_ee_cnt);
+		drbd_conflict_submit_peer_read(peer_req);
+	} else {
+		drbd_peer_resync_read(peer_req);
+	}
+	/* ldev_ref_transfer: put_ldev in peer_req endio */
+	return 0;
+fail2:
+	drbd_free_peer_req(peer_req);
+fail:
+	dec_unacked(peer_device);
+	put_ldev(device);
+	return err;
+}
 
-	/* Throttle before drbd_rs_begin_io, as that locks out application IO;
-	 * this defers syncer requests for some time, before letting at least
-	 * on request through.  The resync controller on the receiving side
-	 * will adapt to the incoming rate accordingly.
-	 *
-	 * We cannot throttle here if remote is Primary/SyncTarget:
-	 * we would also throttle its application reads.
-	 * In that case, throttling is done on the SyncTarget only.
+static int receive_data_request(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct p_block_req *p_block_req = pi->data;
+
+	return receive_common_data_request(connection, pi,
+			&p_block_req->req_common,
+			0, 0);
+}
+
+/* receive_dagtag_data_request() - handle a request for data with dagtag
+ * dependency initiated by the peer
+ *
+ * Request handling flow:
+ *
+ * receive_dagtag_data_request
+ *        |
+ *        V
+ * receive_common_data_request
+ *        |
+ *        v              dagtag waiting
+ * drbd_peer_resync_read --------------+
+ *        |                            |
+ *        |                           ... dagtag_wait_ee
+ *        |                            |
+ *        |                            v
+ *        +--------------- release_dagtag_wait
+ *        |
+ *        v                       conflict (resync only)
+ * drbd_conflict_submit_peer_read -----+
+ *        |          ^                 |
+ *        |          |                ...
+ *        |          |                 |
+ *        |          |                 v
+ *        |          +---- drbd_do_submit_conflict
+ *        v
+ * drbd_submit_peer_request
+ *        |
+ *       ... backing device
+ *        |
+ *        v
+ * drbd_peer_request_endio
+ *        |
+ *        v                  online verify request
+ * drbd_endio_read_sec_final ------------------+
+ *        |                                    |
+ *       ... sender_work                      ... sender_work
+ *        |                                    |
+ *        v                                    v
+ * w_e_end_rsdata_req                    w_e_end_ov_req
+ *        |                                    |
+ *       ... via peer                         ... via peer
+ *        |                                    |
+ *        v                                    v
+ * got_RSWriteAck                         got_OVResult
+ */
+static int receive_dagtag_data_request(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct p_rs_req *p_rs_req = pi->data;
+
+	return receive_common_data_request(connection, pi,
+			&p_rs_req->req_common,
+			be32_to_cpu(p_rs_req->dagtag_node_id), be64_to_cpu(p_rs_req->dagtag));
+}
+
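+/*
+ * Common handling for online verify replies: look up the corresponding
+ * verify request, receive the peer's digest and submit the local read so
+ * that the result can be evaluated in w_e_end_ov_reply.
+ */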
+static int receive_common_ov_reply(struct drbd_connection *connection, struct packet_info *pi,
+		struct p_block_req_common *p,
+		unsigned int depend_dagtag_node_id, u64 depend_dagtag)
+{
+	struct drbd_peer_device *peer_device;
+	struct drbd_device *device;
+	sector_t sector = be64_to_cpu(p->sector);
+	struct drbd_peer_request *peer_req;
+	int size = be32_to_cpu(p->blksize);
+	int err;
+
+	peer_device = conn_peer_device(connection, pi->vnr);
+	if (!peer_device)
+		return -EIO;
+	device = peer_device->device;
+
+	peer_req = find_resync_request(peer_device, INTERVAL_TYPE_MASK(INTERVAL_OV_READ_SOURCE),
+			sector, size, p->block_id);
+	if (!peer_req)
+		return -EIO;
+
+	dec_rs_pending(peer_device);
+
+	if (!get_ldev_if_state(device, D_OUTDATED)) {
+		drbd_peer_resync_read_cancel(peer_req);
+		drbd_remove_peer_req_interval(peer_req);
+		drbd_free_peer_req(peer_req);
+
+		/* drain payload */
+		return ignore_remaining_packet(connection, pi->size);
+	}
+
+	err = receive_digest(peer_req, pi->size);
+	if (err)
+		goto fail;
+
+	set_bit(INTERVAL_RECEIVED, &peer_req->i.flags);
+
+	err = peer_req_alloc_bio(peer_req, size, GFP_NOIO, REQ_OP_READ);
+	if (err)
+		goto fail;
+
+	inc_unacked(peer_device);
+
+	peer_req->depend_dagtag_node_id = depend_dagtag_node_id;
+	peer_req->depend_dagtag = depend_dagtag;
+	peer_req->w.cb = w_e_end_ov_reply;
+
+	/* track progress, we may need to throttle */
+	rs_sectors_came_in(peer_device, size);
+
+	drbd_peer_resync_read(peer_req);
+	/* ldev_ref_transfer: put_ldev in peer_req endio */
+	return 0;
+fail:
+	drbd_remove_peer_req_interval(peer_req);
+	drbd_free_peer_req(peer_req);
+	put_ldev(device);
+	return err;
+}
+
+static int receive_ov_reply(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct p_block_req *p_block_req = pi->data;
+
+	return receive_common_ov_reply(connection, pi,
+			&p_block_req->req_common,
+			0, 0);
+}
+
+static int receive_dagtag_ov_reply(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct p_rs_req *p_rs_req = pi->data;
+
+	return receive_common_ov_reply(connection, pi,
+			&p_rs_req->req_common,
+			be32_to_cpu(p_rs_req->dagtag_node_id), be64_to_cpu(p_rs_req->dagtag));
+}
+
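+/*
+ * The peer asks for outstanding requests to be flushed: determine the
+ * dagtag up to which to flush, record it together with the peer's flush
+ * sequence number, and kick the ack senders so that pending peer acks go
+ * out on all established connections.
+ */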
+static int receive_flush_requests(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_connection *other_connection;
+	struct p_flush_requests *p_flush_requests = pi->data;
+	u64 flush_requests_dagtag;
+
+	spin_lock_irq(&resource->tl_update_lock);
+	/*
+	 * If the current dagtag was read from the metadata then there is no
+	 * associated request. Hence there is nothing to flush. Flush up to the
+	 * preceding dagtag instead.
 	 */
+	if (resource->dagtag_sector == resource->dagtag_from_backing_dev)
+		flush_requests_dagtag = resource->dagtag_before_attach;
+	else
+		flush_requests_dagtag = resource->dagtag_sector;
+	spin_unlock_irq(&resource->tl_update_lock);
 
-	/* Even though this may be a resync request, we do add to "read_ee";
-	 * "sync_ee" is only used for resync WRITEs.
-	 * Add to list early, so debugfs can find this request
-	 * even if we have to sleep below. */
-	spin_lock_irq(&device->resource->req_lock);
-	list_add_tail(&peer_req->w.list, &device->read_ee);
-	spin_unlock_irq(&device->resource->req_lock);
-
-	update_receiver_timing_details(connection, drbd_rs_should_slow_down);
-	if (device->state.peer != R_PRIMARY
-	&& drbd_rs_should_slow_down(peer_device, sector, false))
-		schedule_timeout_uninterruptible(HZ/10);
-	update_receiver_timing_details(connection, drbd_rs_begin_io);
-	if (drbd_rs_begin_io(device, sector))
-		goto out_free_e;
+	spin_lock_irq(&connection->primary_flush_lock);
+	connection->flush_requests_dagtag = flush_requests_dagtag;
+	connection->flush_sequence = be64_to_cpu(p_flush_requests->flush_sequence);
+	connection->flush_forward_sent_mask = 0;
+	spin_unlock_irq(&connection->primary_flush_lock);
 
-submit_for_resync:
-	atomic_add(size >> 9, &device->rs_sect_ev);
+	/* Queue any request waiting for peer ack to be sent */
+	drbd_flush_peer_acks(resource);
 
-submit:
-	update_receiver_timing_details(connection, drbd_submit_peer_request);
-	inc_unacked(device);
-	if (drbd_submit_peer_request(peer_req) == 0)
+	/* For each peer, check if peer ack for this dagtag has already been sent */
+	rcu_read_lock();
+	for_each_connection_rcu(other_connection, resource) {
+		if (other_connection->cstate[NOW] == C_CONNECTED)
+			queue_work(other_connection->ack_sender, &other_connection->peer_ack_work);
+	}
+	rcu_read_unlock();
+
+	return 0;
+}
+
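+/*
+ * A peer confirms that it has flushed requests: unless the ack is stale,
+ * clear the acking node from the pending flush mask of the primary that
+ * initiated the flush.
+ */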
+static int receive_flush_requests_ack(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_connection *primary_connection;
+	struct p_flush_ack *p_flush_ack = pi->data;
+	u64 flush_sequence = be64_to_cpu(p_flush_ack->flush_sequence);
+	int primary_node_id = be32_to_cpu(p_flush_ack->primary_node_id);
+
+	spin_lock_irq(&resource->initiator_flush_lock);
+	if (flush_sequence < resource->current_flush_sequence) {
+		spin_unlock_irq(&resource->initiator_flush_lock);
 		return 0;
+	}
 
-	/* don't care for the reason here */
-	drbd_err(device, "submit failed, triggering re-connect\n");
+	rcu_read_lock();
+	primary_connection = drbd_connection_by_node_id(resource, primary_node_id);
+	if (primary_connection)
+		primary_connection->pending_flush_mask &= ~NODE_MASK(connection->peer_node_id);
+	rcu_read_unlock();
+	spin_unlock_irq(&resource->initiator_flush_lock);
+	return 0;
+}
 
-out_free_e:
-	spin_lock_irq(&device->resource->req_lock);
-	list_del(&peer_req->w.list);
-	spin_unlock_irq(&device->resource->req_lock);
-	/* no drbd_rs_complete_io(), we are dropping the connection anyways */
+/*
+ * config_unknown_volume  -  device configuration command for unknown volume
+ *
+ * When a device is added to an existing connection, the node on which the
+ * device is added first will send configuration commands to its peer but the
+ * peer will not know about the device yet.  It will warn and ignore these
+ * commands.  Once the device is added on the second node, the second node will
+ * send the same device configuration commands, but in the other direction.
+ *
+ * (We can also end up here if drbd is misconfigured.)
+ */
+static int config_unknown_volume(struct drbd_connection *connection, struct packet_info *pi)
+{
+	drbd_warn(connection, "%s packet received for volume %d, which is not configured locally\n",
+		  drbd_packet_name(pi->cmd), pi->vnr);
+	return ignore_remaining_packet(connection, pi->size);
+}
 
-	put_ldev(device);
-	drbd_free_peer_req(device, peer_req);
-	return -EIO;
+static int receive_enable_replication_next(struct drbd_connection *connection,
+		struct packet_info *pi)
+{
+	struct drbd_peer_device *peer_device;
+	struct p_enable_replication *p_enable_replication = pi->data;
+
+	peer_device = conn_peer_device(connection, pi->vnr);
+	if (!peer_device)
+		return config_unknown_volume(connection, pi);
+
+	if (p_enable_replication->enable)
+		set_bit(REPLICATION_NEXT, &peer_device->flags);
+	else
+		clear_bit(REPLICATION_NEXT, &peer_device->flags);
+
+	return 0;
+}
+
+static int receive_enable_replication(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_peer_device *peer_device;
+	struct p_enable_replication *p_enable_replication = pi->data;
+	unsigned long irq_flags;
+
+	peer_device = conn_peer_device(connection, pi->vnr);
+	if (!peer_device)
+		return -EIO;
+
+	begin_state_change(resource, &irq_flags, CS_VERBOSE);
+	peer_device->replication[NEW] = p_enable_replication->enable;
+	end_state_change(resource, &irq_flags, "enable-replication");
+	return 0;
 }
 
 /*
  * drbd_asb_recover_0p  -  Recover after split-brain with no remaining primaries
  */
-static int drbd_asb_recover_0p(struct drbd_peer_device *peer_device) __must_hold(local)
+static enum sync_strategy drbd_asb_recover_0p(struct drbd_peer_device *peer_device)
 {
-	struct drbd_device *device = peer_device->device;
-	int self, peer, rv = -100;
+	const int node_id = peer_device->device->resource->res_opts.node_id;
+	int self, peer;
+	enum sync_strategy rv = SPLIT_BRAIN_DISCONNECT;
 	unsigned long ch_self, ch_peer;
 	enum drbd_after_sb_p after_sb_0p;
 
-	self = device->ldev->md.uuid[UI_BITMAP] & 1;
-	peer = device->p_uuid[UI_BITMAP] & 1;
+	self = drbd_bitmap_uuid(peer_device) & UUID_PRIMARY;
+	peer = peer_device->bitmap_uuids[node_id] & UUID_PRIMARY;
 
-	ch_peer = device->p_uuid[UI_SIZE];
-	ch_self = device->comm_bm_set;
+	ch_peer = peer_device->dirty_bits;
+	ch_self = peer_device->comm_bm_set;
 
 	rcu_read_lock();
-	after_sb_0p = rcu_dereference(peer_device->connection->net_conf)->after_sb_0p;
+	after_sb_0p = rcu_dereference(peer_device->connection->transport.net_conf)->after_sb_0p;
 	rcu_read_unlock();
 	switch (after_sb_0p) {
 	case ASB_CONSENSUS:
 	case ASB_DISCARD_SECONDARY:
 	case ASB_CALL_HELPER:
 	case ASB_VIOLENTLY:
-		drbd_err(device, "Configuration error.\n");
+	case ASB_RETRY_CONNECT:
+	case ASB_AUTO_DISCARD:
+		drbd_err(peer_device, "Configuration error.\n");
 		break;
 	case ASB_DISCONNECT:
 		break;
 	case ASB_DISCARD_YOUNGER_PRI:
 		if (self == 0 && peer == 1) {
-			rv = -1;
+			rv = SYNC_TARGET_USE_BITMAP;
 			break;
 		}
 		if (self == 1 && peer == 0) {
-			rv =  1;
+			rv = SYNC_SOURCE_USE_BITMAP;
 			break;
 		}
 		fallthrough;	/* to one of the other strategies */
 	case ASB_DISCARD_OLDER_PRI:
 		if (self == 0 && peer == 1) {
-			rv = 1;
+			rv = SYNC_SOURCE_USE_BITMAP;
 			break;
 		}
 		if (self == 1 && peer == 0) {
-			rv = -1;
+			rv = SYNC_TARGET_USE_BITMAP;
 			break;
 		}
-		/* Else fall through to one of the other strategies... */
-		drbd_warn(device, "Discard younger/older primary did not find a decision\n"
-		     "Using discard-least-changes instead\n");
+		drbd_warn(peer_device, "Discard younger/older primary did not find a decision\n"
+			  "Using discard-least-changes instead\n");
 		fallthrough;
 	case ASB_DISCARD_ZERO_CHG:
 		if (ch_peer == 0 && ch_self == 0) {
-			rv = test_bit(RESOLVE_CONFLICTS, &peer_device->connection->flags)
-				? -1 : 1;
+			rv = test_bit(RESOLVE_CONFLICTS, &peer_device->connection->transport.flags)
+				? SYNC_TARGET_USE_BITMAP : SYNC_SOURCE_USE_BITMAP;
 			break;
 		} else {
-			if (ch_peer == 0) { rv =  1; break; }
-			if (ch_self == 0) { rv = -1; break; }
+			if (ch_peer == 0) {
+				rv = SYNC_SOURCE_USE_BITMAP;
+				break;
+			}
+			if (ch_self == 0) {
+				rv = SYNC_TARGET_USE_BITMAP;
+				break;
+			}
 		}
 		if (after_sb_0p == ASB_DISCARD_ZERO_CHG)
 			break;
 		fallthrough;
 	case ASB_DISCARD_LEAST_CHG:
 		if	(ch_self < ch_peer)
-			rv = -1;
+			rv = SYNC_TARGET_USE_BITMAP;
 		else if (ch_self > ch_peer)
-			rv =  1;
+			rv = SYNC_SOURCE_USE_BITMAP;
 		else /* ( ch_self == ch_peer ) */
 		     /* Well, then use something else. */
-			rv = test_bit(RESOLVE_CONFLICTS, &peer_device->connection->flags)
-				? -1 : 1;
+			rv = test_bit(RESOLVE_CONFLICTS, &peer_device->connection->transport.flags)
+				? SYNC_TARGET_USE_BITMAP : SYNC_SOURCE_USE_BITMAP;
 		break;
 	case ASB_DISCARD_LOCAL:
-		rv = -1;
+		rv = SYNC_TARGET_USE_BITMAP;
 		break;
 	case ASB_DISCARD_REMOTE:
-		rv =  1;
+		rv = SYNC_SOURCE_USE_BITMAP;
 	}
 
 	return rv;
@@ -2909,14 +4334,16 @@ static int drbd_asb_recover_0p(struct drbd_peer_device *peer_device) __must_hold
 /*
  * drbd_asb_recover_1p  -  Recover after split-brain with one remaining primary
  */
-static int drbd_asb_recover_1p(struct drbd_peer_device *peer_device) __must_hold(local)
+static enum sync_strategy drbd_asb_recover_1p(struct drbd_peer_device *peer_device)
 {
 	struct drbd_device *device = peer_device->device;
-	int hg, rv = -100;
+	struct drbd_connection *connection = peer_device->connection;
+	struct drbd_resource *resource = device->resource;
+	enum sync_strategy strategy, rv = SPLIT_BRAIN_DISCONNECT;
 	enum drbd_after_sb_p after_sb_1p;
 
 	rcu_read_lock();
-	after_sb_1p = rcu_dereference(peer_device->connection->net_conf)->after_sb_1p;
+	after_sb_1p = rcu_dereference(connection->transport.net_conf)->after_sb_1p;
 	rcu_read_unlock();
 	switch (after_sb_1p) {
 	case ASB_DISCARD_YOUNGER_PRI:
@@ -2925,39 +4352,42 @@ static int drbd_asb_recover_1p(struct drbd_peer_device *peer_device) __must_hold
 	case ASB_DISCARD_LOCAL:
 	case ASB_DISCARD_REMOTE:
 	case ASB_DISCARD_ZERO_CHG:
+	case ASB_RETRY_CONNECT:
+	case ASB_AUTO_DISCARD:
 		drbd_err(device, "Configuration error.\n");
 		break;
 	case ASB_DISCONNECT:
 		break;
 	case ASB_CONSENSUS:
-		hg = drbd_asb_recover_0p(peer_device);
-		if (hg == -1 && device->state.role == R_SECONDARY)
-			rv = hg;
-		if (hg == 1  && device->state.role == R_PRIMARY)
-			rv = hg;
+		strategy = drbd_asb_recover_0p(peer_device);
+		if (strategy == SYNC_TARGET_USE_BITMAP && resource->role[NOW] == R_SECONDARY)
+			rv = strategy;
+		if (strategy == SYNC_SOURCE_USE_BITMAP && resource->role[NOW] == R_PRIMARY)
+			rv = strategy;
 		break;
 	case ASB_VIOLENTLY:
 		rv = drbd_asb_recover_0p(peer_device);
 		break;
 	case ASB_DISCARD_SECONDARY:
-		return device->state.role == R_PRIMARY ? 1 : -1;
+		return resource->role[NOW] == R_PRIMARY ? SYNC_SOURCE_USE_BITMAP : SYNC_TARGET_USE_BITMAP;
 	case ASB_CALL_HELPER:
-		hg = drbd_asb_recover_0p(peer_device);
-		if (hg == -1 && device->state.role == R_PRIMARY) {
+		strategy = drbd_asb_recover_0p(peer_device);
+		if (strategy == SYNC_TARGET_USE_BITMAP && resource->role[NOW] == R_PRIMARY) {
 			enum drbd_state_rv rv2;
 
 			 /* drbd_change_state() does not sleep while in SS_IN_TRANSIENT_STATE,
-			  * we might be here in C_WF_REPORT_PARAMS which is transient.
+			  * we might be here in L_OFF which is transient.
 			  * we do not need to wait for the after state change work either. */
-			rv2 = drbd_change_state(device, CS_VERBOSE, NS(role, R_SECONDARY));
+			rv2 = change_role(resource, R_SECONDARY, CS_VERBOSE,
+					"after-sb-1pri", NULL);
 			if (rv2 != SS_SUCCESS) {
-				drbd_khelper(device, "pri-lost-after-sb");
+				drbd_maybe_khelper(device, connection, "pri-lost-after-sb");
 			} else {
 				drbd_warn(device, "Successfully gave up primary role.\n");
-				rv = hg;
+				rv = strategy;
 			}
 		} else
-			rv = hg;
+			rv = strategy;
 	}
 
 	return rv;
@@ -2966,14 +4396,15 @@ static int drbd_asb_recover_1p(struct drbd_peer_device *peer_device) __must_hold
 /*
  * drbd_asb_recover_2p  -  Recover after split-brain with two remaining primaries
  */
-static int drbd_asb_recover_2p(struct drbd_peer_device *peer_device) __must_hold(local)
+static enum sync_strategy drbd_asb_recover_2p(struct drbd_peer_device *peer_device)
 {
 	struct drbd_device *device = peer_device->device;
-	int hg, rv = -100;
+	struct drbd_connection *connection = peer_device->connection;
+	enum sync_strategy strategy, rv = SPLIT_BRAIN_DISCONNECT;
 	enum drbd_after_sb_p after_sb_2p;
 
 	rcu_read_lock();
-	after_sb_2p = rcu_dereference(peer_device->connection->net_conf)->after_sb_2p;
+	after_sb_2p = rcu_dereference(connection->transport.net_conf)->after_sb_2p;
 	rcu_read_unlock();
 	switch (after_sb_2p) {
 	case ASB_DISCARD_YOUNGER_PRI:
@@ -2984,6 +4415,8 @@ static int drbd_asb_recover_2p(struct drbd_peer_device *peer_device) __must_hold
 	case ASB_CONSENSUS:
 	case ASB_DISCARD_SECONDARY:
 	case ASB_DISCARD_ZERO_CHG:
+	case ASB_RETRY_CONNECT:
+	case ASB_AUTO_DISCARD:
 		drbd_err(device, "Configuration error.\n");
 		break;
 	case ASB_VIOLENTLY:
@@ -2992,440 +4425,1021 @@ static int drbd_asb_recover_2p(struct drbd_peer_device *peer_device) __must_hold
 	case ASB_DISCONNECT:
 		break;
 	case ASB_CALL_HELPER:
-		hg = drbd_asb_recover_0p(peer_device);
-		if (hg == -1) {
+		strategy = drbd_asb_recover_0p(peer_device);
+		if (strategy == SYNC_TARGET_USE_BITMAP) {
 			enum drbd_state_rv rv2;
 
 			 /* drbd_change_state() does not sleep while in SS_IN_TRANSIENT_STATE,
-			  * we might be here in C_WF_REPORT_PARAMS which is transient.
+			  * we might be here in L_OFF which is transient.
 			  * we do not need to wait for the after state change work either. */
-			rv2 = drbd_change_state(device, CS_VERBOSE, NS(role, R_SECONDARY));
+			rv2 = change_role(device->resource, R_SECONDARY, CS_VERBOSE,
+					"after-sb-2pri", NULL);
 			if (rv2 != SS_SUCCESS) {
-				drbd_khelper(device, "pri-lost-after-sb");
+				drbd_maybe_khelper(device, connection, "pri-lost-after-sb");
 			} else {
 				drbd_warn(device, "Successfully gave up primary role.\n");
-				rv = hg;
+				rv = strategy;
 			}
 		} else
-			rv = hg;
+			rv = strategy;
 	}
 
 	return rv;
 }
 
-static void drbd_uuid_dump(struct drbd_device *device, char *text, u64 *uuid,
-			   u64 bits, u64 flags)
+static void drbd_uuid_dump_self(struct drbd_peer_device *peer_device, u64 bits, u64 flags)
 {
-	if (!uuid) {
-		drbd_info(device, "%s uuid info vanished while I was looking!\n", text);
-		return;
-	}
-	drbd_info(device, "%s %016llX:%016llX:%016llX:%016llX bits:%llu flags:%llX\n",
-	     text,
-	     (unsigned long long)uuid[UI_CURRENT],
-	     (unsigned long long)uuid[UI_BITMAP],
-	     (unsigned long long)uuid[UI_HISTORY_START],
-	     (unsigned long long)uuid[UI_HISTORY_END],
-	     (unsigned long long)bits,
-	     (unsigned long long)flags);
+	struct drbd_device *device = peer_device->device;
+
+	drbd_info(peer_device, "self %016llX:%016llX:%016llX:%016llX bits:%llu flags:%llX\n",
+		  (unsigned long long)drbd_resolved_uuid(peer_device, NULL),
+		  (unsigned long long)drbd_bitmap_uuid(peer_device),
+		  (unsigned long long)drbd_history_uuid(device, 0),
+		  (unsigned long long)drbd_history_uuid(device, 1),
+		  (unsigned long long)bits,
+		  (unsigned long long)flags);
 }
 
-/*
-  100	after split brain try auto recover
-    2	C_SYNC_SOURCE set BitMap
-    1	C_SYNC_SOURCE use BitMap
-    0	no Sync
-   -1	C_SYNC_TARGET use BitMap
-   -2	C_SYNC_TARGET set BitMap
- -100	after split brain, disconnect
--1000	unrelated data
--1091   requires proto 91
--1096   requires proto 96
- */
 
-static int drbd_uuid_compare(struct drbd_peer_device *const peer_device,
-		enum drbd_role const peer_role, int *rule_nr) __must_hold(local)
+static void drbd_uuid_dump_peer(struct drbd_peer_device *peer_device, u64 bits, u64 flags)
 {
-	struct drbd_connection *const connection = peer_device->connection;
-	struct drbd_device *device = peer_device->device;
-	u64 self, peer;
-	int i, j;
+	const int node_id = peer_device->device->resource->res_opts.node_id;
 
-	self = device->ldev->md.uuid[UI_CURRENT] & ~((u64)1);
-	peer = device->p_uuid[UI_CURRENT] & ~((u64)1);
+	drbd_info(peer_device, "peer %016llX:%016llX:%016llX:%016llX bits:%llu flags:%llX\n",
+	     (unsigned long long)peer_device->current_uuid,
+	     (unsigned long long)peer_device->bitmap_uuids[node_id],
+	     (unsigned long long)peer_device->history_uuids[0],
+	     (unsigned long long)peer_device->history_uuids[1],
+	     (unsigned long long)bits,
+	     (unsigned long long)flags);
+}
 
-	*rule_nr = 10;
-	if (self == UUID_JUST_CREATED && peer == UUID_JUST_CREATED)
-		return 0;
+/* find the peer's bitmap slot for the given UUID, if they have one */
+static int drbd_find_peer_bitmap_by_uuid(struct drbd_peer_device *peer_device, u64 uuid)
+{
+	u64 peer;
+	int i;
 
-	*rule_nr = 20;
-	if ((self == UUID_JUST_CREATED || self == (u64)0) &&
-	     peer != UUID_JUST_CREATED)
-		return -2;
+	for (i = 0; i < DRBD_PEERS_MAX; i++) {
+		peer = peer_device->bitmap_uuids[i] & ~UUID_PRIMARY;
+		if (uuid == peer)
+			return i;
+	}
 
-	*rule_nr = 30;
-	if (self != UUID_JUST_CREATED &&
-	    (peer == UUID_JUST_CREATED || peer == (u64)0))
-		return 2;
+	return -1;
+}
 
-	if (self == peer) {
-		int rct, dc; /* roles at crash time */
+/* find our bitmap slot for the given UUID, if we have one */
+static int drbd_find_bitmap_by_uuid(struct drbd_peer_device *peer_device, u64 uuid)
+{
+	struct drbd_connection *connection = peer_device->connection;
+	struct drbd_device *device = peer_device->device;
+	u64 self;
+	int i;
 
-		if (device->p_uuid[UI_BITMAP] == (u64)0 && device->ldev->md.uuid[UI_BITMAP] != (u64)0) {
+	for (i = 0; i < DRBD_NODE_ID_MAX; i++) {
+		if (i == device->ldev->md.node_id)
+			continue;
+		if (connection->agreed_pro_version < 116 &&
+		    device->ldev->md.peers[i].bitmap_index == -1)
+			continue;
+		self = device->ldev->md.peers[i].bitmap_uuid & ~UUID_PRIMARY;
+		if (self == uuid)
+			return i;
+	}
 
-			if (connection->agreed_pro_version < 91)
-				return -1091;
+	return -1;
+}
 
-			if ((device->ldev->md.uuid[UI_BITMAP] & ~((u64)1)) == (device->p_uuid[UI_HISTORY_START] & ~((u64)1)) &&
-			    (device->ldev->md.uuid[UI_HISTORY_START] & ~((u64)1)) == (device->p_uuid[UI_HISTORY_START + 1] & ~((u64)1))) {
-				drbd_info(device, "was SyncSource, missed the resync finished event, corrected myself:\n");
-				drbd_uuid_move_history(device);
-				device->ldev->md.uuid[UI_HISTORY_START] = device->ldev->md.uuid[UI_BITMAP];
-				device->ldev->md.uuid[UI_BITMAP] = 0;
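+/*
+ * Fixup for peers with protocol version < 110: detect, based on the bitmap
+ * and history UUIDs, that a resync finished but one side missed the finish
+ * event, and correct the affected side's UUIDs accordingly.
+ */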
+static enum sync_strategy
+uuid_fixup_resync_end(struct drbd_peer_device *peer_device, enum sync_rule *rule)
+{
+	struct drbd_device *device = peer_device->device;
+	const int node_id = device->resource->res_opts.node_id;
 
-				drbd_uuid_dump(device, "self", device->ldev->md.uuid,
-					       device->state.disk >= D_NEGOTIATING ? drbd_bm_total_weight(device) : 0, 0);
-				*rule_nr = 34;
-			} else {
-				drbd_info(device, "was SyncSource (peer failed to write sync_uuid)\n");
-				*rule_nr = 36;
-			}
+	if (peer_device->bitmap_uuids[node_id] == (u64)0 && drbd_bitmap_uuid(peer_device) != (u64)0) {
 
-			return 1;
-		}
+		if (peer_device->connection->agreed_pro_version < 91)
+			return REQUIRES_PROTO_91;
 
-		if (device->ldev->md.uuid[UI_BITMAP] == (u64)0 && device->p_uuid[UI_BITMAP] != (u64)0) {
+		if ((drbd_bitmap_uuid(peer_device) & ~UUID_PRIMARY) ==
+		    (peer_device->history_uuids[0] & ~UUID_PRIMARY) &&
+		    (drbd_history_uuid(device, 0) & ~UUID_PRIMARY) ==
+		    (peer_device->history_uuids[0] & ~UUID_PRIMARY)) {
+			struct drbd_peer_md *peer_md = &device->ldev->md.peers[peer_device->node_id];
+			u64 previous_bitmap_uuid = peer_md->bitmap_uuid;
 
-			if (connection->agreed_pro_version < 91)
-				return -1091;
+			drbd_info(device, "was SyncSource, missed the resync finished event, corrected myself:\n");
+			peer_md->bitmap_uuid = 0;
+			_drbd_uuid_push_history(device, previous_bitmap_uuid);
 
-			if ((device->ldev->md.uuid[UI_HISTORY_START] & ~((u64)1)) == (device->p_uuid[UI_BITMAP] & ~((u64)1)) &&
-			    (device->ldev->md.uuid[UI_HISTORY_START + 1] & ~((u64)1)) == (device->p_uuid[UI_HISTORY_START] & ~((u64)1))) {
-				drbd_info(device, "was SyncTarget, peer missed the resync finished event, corrected peer:\n");
+			drbd_uuid_dump_self(peer_device,
+					    device->disk_state[NOW] >= D_NEGOTIATING ? drbd_bm_total_weight(peer_device) : 0, 0);
+			*rule = RULE_SYNC_SOURCE_MISSED_FINISH;
+		} else {
+			drbd_info(device, "was SyncSource (peer failed to write sync_uuid)\n");
+			*rule = RULE_SYNC_SOURCE_PEER_MISSED_FINISH;
+		}
 
-				device->p_uuid[UI_HISTORY_START + 1] = device->p_uuid[UI_HISTORY_START];
-				device->p_uuid[UI_HISTORY_START] = device->p_uuid[UI_BITMAP];
-				device->p_uuid[UI_BITMAP] = 0UL;
+		return SYNC_SOURCE_USE_BITMAP;
+	}
 
-				drbd_uuid_dump(device, "peer", device->p_uuid, device->p_uuid[UI_SIZE], device->p_uuid[UI_FLAGS]);
-				*rule_nr = 35;
-			} else {
-				drbd_info(device, "was SyncTarget (failed to write sync_uuid)\n");
-				*rule_nr = 37;
-			}
+	if (drbd_bitmap_uuid(peer_device) == (u64)0 && peer_device->bitmap_uuids[node_id] != (u64)0) {
 
-			return -1;
-		}
+		if (peer_device->connection->agreed_pro_version < 91)
+			return REQUIRES_PROTO_91;
 
-		/* Common power [off|failure] */
-		rct = (test_bit(CRASHED_PRIMARY, &device->flags) ? 1 : 0) +
-			(device->p_uuid[UI_FLAGS] & 2);
-		/* lowest bit is set when we were primary,
-		 * next bit (weight 2) is set when peer was primary */
-		*rule_nr = 40;
+		if ((drbd_history_uuid(device, 0) & ~UUID_PRIMARY) ==
+		    (peer_device->bitmap_uuids[node_id] & ~UUID_PRIMARY) &&
+		    (drbd_history_uuid(device, 1) & ~UUID_PRIMARY) ==
+		    (peer_device->history_uuids[0] & ~UUID_PRIMARY)) {
+			int i;
 
-		/* Neither has the "crashed primary" flag set,
-		 * only a replication link hickup. */
-		if (rct == 0)
-			return 0;
+			drbd_info(device, "was SyncTarget, peer missed the resync finished event, corrected peer:\n");
 
-		/* Current UUID equal and no bitmap uuid; does not necessarily
-		 * mean this was a "simultaneous hard crash", maybe IO was
-		 * frozen, so no UUID-bump happened.
-		 * This is a protocol change, overload DRBD_FF_WSAME as flag
-		 * for "new-enough" peer DRBD version. */
-		if (device->state.role == R_PRIMARY || peer_role == R_PRIMARY) {
-			*rule_nr = 41;
-			if (!(connection->agreed_features & DRBD_FF_WSAME)) {
-				drbd_warn(peer_device, "Equivalent unrotated UUIDs, but current primary present.\n");
-				return -(0x10000 | PRO_VERSION_MAX | (DRBD_FF_WSAME << 8));
-			}
-			if (device->state.role == R_PRIMARY && peer_role == R_PRIMARY) {
-				/* At least one has the "crashed primary" bit set,
-				 * both are primary now, but neither has rotated its UUIDs?
-				 * "Can not happen." */
-				drbd_err(peer_device, "Equivalent unrotated UUIDs, but both are primary. Can not resolve this.\n");
-				return -100;
-			}
-			if (device->state.role == R_PRIMARY)
-				return 1;
-			return -1;
-		}
+			for (i = ARRAY_SIZE(peer_device->history_uuids) - 1; i > 0; i--)
+				peer_device->history_uuids[i] = peer_device->history_uuids[i - 1];
+			peer_device->history_uuids[i] = peer_device->bitmap_uuids[node_id];
+			peer_device->bitmap_uuids[node_id] = 0;
 
-		/* Both are secondary.
-		 * Really looks like recovery from simultaneous hard crash.
-		 * Check which had been primary before, and arbitrate. */
-		switch (rct) {
-		case 0: /* !self_pri && !peer_pri */ return 0; /* already handled */
-		case 1: /*  self_pri && !peer_pri */ return 1;
-		case 2: /* !self_pri &&  peer_pri */ return -1;
-		case 3: /*  self_pri &&  peer_pri */
-			dc = test_bit(RESOLVE_CONFLICTS, &connection->flags);
-			return dc ? -1 : 1;
+			drbd_uuid_dump_peer(peer_device, peer_device->dirty_bits, peer_device->uuid_flags);
+			*rule = RULE_SYNC_TARGET_PEER_MISSED_FINISH;
+		} else {
+			drbd_info(device, "was SyncTarget (failed to write sync_uuid)\n");
+			*rule = RULE_SYNC_TARGET_MISSED_FINISH;
 		}
+
+		return SYNC_TARGET_USE_BITMAP;
 	}
 
-	*rule_nr = 50;
-	peer = device->p_uuid[UI_BITMAP] & ~((u64)1);
-	if (self == peer)
-		return -1;
+	return UNDETERMINED;
+}
+
+static enum sync_strategy
+uuid_fixup_resync_start1(struct drbd_peer_device *peer_device, enum sync_rule *rule)
+{
+	struct drbd_device *device = peer_device->device;
+	const int node_id = peer_device->device->resource->res_opts.node_id;
+	u64 self, peer;
+
+	self = drbd_current_uuid(device) & ~UUID_PRIMARY;
+	peer = peer_device->history_uuids[0] & ~UUID_PRIMARY;
 
-	*rule_nr = 51;
-	peer = device->p_uuid[UI_HISTORY_START] & ~((u64)1);
 	if (self == peer) {
-		if (connection->agreed_pro_version < 96 ?
-		    (device->ldev->md.uuid[UI_HISTORY_START] & ~((u64)1)) ==
-		    (device->p_uuid[UI_HISTORY_START + 1] & ~((u64)1)) :
-		    peer + UUID_NEW_BM_OFFSET == (device->p_uuid[UI_BITMAP] & ~((u64)1))) {
+		if (peer_device->connection->agreed_pro_version < 96 ?
+		    (drbd_history_uuid(device, 0) & ~UUID_PRIMARY) ==
+		    (peer_device->history_uuids[1] & ~UUID_PRIMARY) :
+		    peer + UUID_NEW_BM_OFFSET == (peer_device->bitmap_uuids[node_id] & ~UUID_PRIMARY)) {
+			int i;
+
 			/* The last P_SYNC_UUID did not get though. Undo the last start of
 			   resync as sync source modifications of the peer's UUIDs. */
+			*rule = RULE_SYNC_TARGET_MISSED_START;
 
-			if (connection->agreed_pro_version < 91)
-				return -1091;
+			if (peer_device->connection->agreed_pro_version < 91)
+				return REQUIRES_PROTO_91;
 
-			device->p_uuid[UI_BITMAP] = device->p_uuid[UI_HISTORY_START];
-			device->p_uuid[UI_HISTORY_START] = device->p_uuid[UI_HISTORY_START + 1];
+			peer_device->bitmap_uuids[node_id] = peer_device->history_uuids[0];
+			for (i = 0; i < ARRAY_SIZE(peer_device->history_uuids) - 1; i++)
+				peer_device->history_uuids[i] = peer_device->history_uuids[i + 1];
+			peer_device->history_uuids[i] = 0;
 
 			drbd_info(device, "Lost last syncUUID packet, corrected:\n");
-			drbd_uuid_dump(device, "peer", device->p_uuid, device->p_uuid[UI_SIZE], device->p_uuid[UI_FLAGS]);
+			drbd_uuid_dump_peer(peer_device, peer_device->dirty_bits, peer_device->uuid_flags);
 
-			return -1;
+			return SYNC_TARGET_USE_BITMAP;
 		}
 	}
 
-	*rule_nr = 60;
-	self = device->ldev->md.uuid[UI_CURRENT] & ~((u64)1);
-	for (i = UI_HISTORY_START; i <= UI_HISTORY_END; i++) {
-		peer = device->p_uuid[i] & ~((u64)1);
-		if (self == peer)
-			return -2;
-	}
+	return UNDETERMINED;
+}
 
-	*rule_nr = 70;
-	self = device->ldev->md.uuid[UI_BITMAP] & ~((u64)1);
-	peer = device->p_uuid[UI_CURRENT] & ~((u64)1);
-	if (self == peer)
-		return 1;
+static enum sync_strategy
+uuid_fixup_resync_start2(struct drbd_peer_device *peer_device, enum sync_rule *rule)
+{
+	struct drbd_device *device = peer_device->device;
+	u64 self, peer;
+
+	self = drbd_history_uuid(device, 0) & ~UUID_PRIMARY;
+	peer = peer_device->current_uuid & ~UUID_PRIMARY;
 
-	*rule_nr = 71;
-	self = device->ldev->md.uuid[UI_HISTORY_START] & ~((u64)1);
 	if (self == peer) {
-		if (connection->agreed_pro_version < 96 ?
-		    (device->ldev->md.uuid[UI_HISTORY_START + 1] & ~((u64)1)) ==
-		    (device->p_uuid[UI_HISTORY_START] & ~((u64)1)) :
-		    self + UUID_NEW_BM_OFFSET == (device->ldev->md.uuid[UI_BITMAP] & ~((u64)1))) {
+		if (peer_device->connection->agreed_pro_version < 96 ?
+		    (drbd_history_uuid(device, 1) & ~UUID_PRIMARY) ==
+		    (peer_device->history_uuids[0] & ~UUID_PRIMARY) :
+		    self + UUID_NEW_BM_OFFSET == (drbd_bitmap_uuid(peer_device) & ~UUID_PRIMARY)) {
+			u64 bitmap_uuid;
+
 			/* The last P_SYNC_UUID did not get though. Undo the last start of
 			   resync as sync source modifications of our UUIDs. */
+			*rule = RULE_SYNC_SOURCE_MISSED_START;
 
-			if (connection->agreed_pro_version < 91)
-				return -1091;
+			if (peer_device->connection->agreed_pro_version < 91)
+				return REQUIRES_PROTO_91;
 
-			__drbd_uuid_set(device, UI_BITMAP, device->ldev->md.uuid[UI_HISTORY_START]);
-			__drbd_uuid_set(device, UI_HISTORY_START, device->ldev->md.uuid[UI_HISTORY_START + 1]);
+			bitmap_uuid = _drbd_uuid_pull_history(peer_device);
+			_drbd_uuid_set_bitmap(peer_device, bitmap_uuid);
 
 			drbd_info(device, "Last syncUUID did not get through, corrected:\n");
-			drbd_uuid_dump(device, "self", device->ldev->md.uuid,
-				       device->state.disk >= D_NEGOTIATING ? drbd_bm_total_weight(device) : 0, 0);
+			drbd_uuid_dump_self(peer_device,
+					    device->disk_state[NOW] >= D_NEGOTIATING ? drbd_bm_total_weight(peer_device) : 0, 0);
 
-			return 1;
+			return SYNC_SOURCE_USE_BITMAP;
 		}
 	}
 
+	return UNDETERMINED;
+}
 
-	*rule_nr = 80;
-	peer = device->p_uuid[UI_CURRENT] & ~((u64)1);
-	for (i = UI_HISTORY_START; i <= UI_HISTORY_END; i++) {
-		self = device->ldev->md.uuid[i] & ~((u64)1);
-		if (self == peer)
-			return 2;
-	}
-
-	*rule_nr = 90;
-	self = device->ldev->md.uuid[UI_BITMAP] & ~((u64)1);
-	peer = device->p_uuid[UI_BITMAP] & ~((u64)1);
-	if (self == peer && self != ((u64)0))
-		return 100;
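+/*
+ * Compare our UUIDs with the peer's to decide the sync strategy. Returns
+ * the strategy and stores the rule that led to the decision in *rule; when
+ * the bitmap of a third node decides the outcome, that node's id is stored
+ * in *peer_node_id.
+ */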
+static enum sync_strategy drbd_uuid_compare(struct drbd_peer_device *peer_device,
+			     enum sync_rule *rule, int *peer_node_id)
+{
+	struct drbd_connection *connection = peer_device->connection;
+	struct drbd_device *device = peer_device->device;
+	const int node_id = device->resource->res_opts.node_id;
+	bool my_current_in_peers_history, peers_current_in_my_history;
+	bool bitmap_matches, flags_matches, uuid_matches;
+	u64 resolved_uuid, bitmap_uuid;
+	u64 local_uuid_flags = 0;
+	u64 self, peer;
+	int i, j;
 
-	*rule_nr = 100;
-	for (i = UI_HISTORY_START; i <= UI_HISTORY_END; i++) {
-		self = device->ldev->md.uuid[i] & ~((u64)1);
-		for (j = UI_HISTORY_START; j <= UI_HISTORY_END; j++) {
-			peer = device->p_uuid[j] & ~((u64)1);
-			if (self == peer)
-				return -100;
+	resolved_uuid = drbd_resolved_uuid(peer_device, &local_uuid_flags) & ~UUID_PRIMARY;
+	bitmap_uuid = drbd_bitmap_uuid(peer_device);
+	local_uuid_flags |= drbd_collect_local_uuid_flags(peer_device, NULL);
+
+	uuid_matches = resolved_uuid == (peer_device->comm_current_uuid & ~UUID_PRIMARY);
+	bitmap_matches = bitmap_uuid == peer_device->comm_bitmap_uuid;
+	/* UUID_FLAG_INCONSISTENT is not relevant for the handshake, allow it to change */
+	flags_matches = !((local_uuid_flags ^ peer_device->comm_uuid_flags) & ~UUID_FLAG_INCONSISTENT);
+	if (!test_bit(INITIAL_STATE_SENT, &peer_device->flags)) {
+		drbd_warn(peer_device, "Initial UUIDs and state not sent yet. Not verifying\n");
+	} else if (!uuid_matches || !flags_matches || !bitmap_matches) {
+		if (!uuid_matches)
+			drbd_warn(peer_device, "My current UUID changed during handshake.\n");
+		if (!bitmap_matches)
+			drbd_warn(peer_device, "My bitmap UUID changed during "
+				  "handshake. 0x%llX to 0x%llX\n",
+				  (unsigned long long)peer_device->comm_bitmap_uuid,
+				  (unsigned long long)bitmap_uuid);
+		if (!flags_matches)
+			drbd_warn(peer_device,
+				  "My uuid_flags changed from 0x%llX to 0x%llX during handshake.\n",
+				  (unsigned long long)peer_device->comm_uuid_flags,
+				  (unsigned long long)local_uuid_flags);
+		if (connection->cstate[NOW] == C_CONNECTING) {
+			*rule = RULE_INITIAL_HANDSHAKE_CHANGED;
+			return RETRY_CONNECT;
 		}
 	}
 
-	return -1000;
-}
-
-/* drbd_sync_handshake() returns the new conn state on success, or
-   CONN_MASK (-1) on failure.
- */
-static enum drbd_conns drbd_sync_handshake(struct drbd_peer_device *peer_device,
-					   enum drbd_role peer_role,
-					   enum drbd_disk_state peer_disk) __must_hold(local)
-{
-	struct drbd_device *device = peer_device->device;
-	enum drbd_conns rv = C_MASK;
-	enum drbd_disk_state mydisk;
-	struct net_conf *nc;
-	int hg, rule_nr, rr_conflict, tentative, always_asbp;
+	self = resolved_uuid;
+	peer = peer_device->current_uuid & ~UUID_PRIMARY;
 
-	mydisk = device->state.disk;
-	if (mydisk == D_NEGOTIATING)
-		mydisk = device->new_state_tmp.disk;
+	/* Before DRBD 8.0.2 (from 2007), the uuid on sync targets was set to
+	 * zero during resyncs for no good reason. */
+	if (self == 0)
+		self = UUID_JUST_CREATED;
+	if (peer == 0)
+		peer = UUID_JUST_CREATED;
 
-	drbd_info(device, "drbd_sync_handshake:\n");
+	*rule = RULE_JUST_CREATED_BOTH;
+	if (self == UUID_JUST_CREATED && peer == UUID_JUST_CREATED)
+		return NO_SYNC;
 
-	spin_lock_irq(&device->ldev->md.uuid_lock);
-	drbd_uuid_dump(device, "self", device->ldev->md.uuid, device->comm_bm_set, 0);
-	drbd_uuid_dump(device, "peer", device->p_uuid,
-		       device->p_uuid[UI_SIZE], device->p_uuid[UI_FLAGS]);
+	*rule = RULE_JUST_CREATED_SELF;
+	if (self == UUID_JUST_CREATED)
+		return SYNC_TARGET_SET_BITMAP;
 
-	hg = drbd_uuid_compare(peer_device, peer_role, &rule_nr);
-	spin_unlock_irq(&device->ldev->md.uuid_lock);
+	*rule = RULE_JUST_CREATED_PEER;
+	if (peer == UUID_JUST_CREATED)
+		return SYNC_SOURCE_SET_BITMAP;
 
-	drbd_info(device, "uuid_compare()=%d by rule %d\n", hg, rule_nr);
+	if (self == peer) {
+		struct net_conf *nc;
+		int wire_protocol;
 
-	if (hg == -1000) {
-		drbd_alert(device, "Unrelated data, aborting!\n");
-		return C_MASK;
+		rcu_read_lock();
+		nc = rcu_dereference(connection->transport.net_conf);
+		wire_protocol = nc->wire_protocol;
+		rcu_read_unlock();
+
+		if (connection->agreed_pro_version < 110) {
+			enum sync_strategy rv = uuid_fixup_resync_end(peer_device, rule);
+			if (rv != UNDETERMINED)
+				return rv;
+		}
+
+		if (test_bit(RS_SOURCE_MISSED_END, &peer_device->flags)) {
+			*rule = RULE_SYNC_SOURCE_MISSED_FINISH;
+			return SYNC_SOURCE_USE_BITMAP;
+		}
+		if (test_bit(RS_PEER_MISSED_END, &peer_device->flags)) {
+			*rule = RULE_SYNC_TARGET_PEER_MISSED_FINISH;
+			return SYNC_TARGET_USE_BITMAP;
+		}
+
+		if (connection->agreed_pro_version >= 120) {
+			*rule = RULE_RECONNECTED;
+			if (peer_device->uuid_flags & UUID_FLAG_RECONNECT &&
+			    local_uuid_flags & UUID_FLAG_RECONNECT)
+				return NO_SYNC;
+		}
+
+		if (connection->agreed_pro_version >= 121 &&
+		    (wire_protocol == DRBD_PROT_A || wire_protocol == DRBD_PROT_B)) {
+			*rule = RULE_CRASHED_PRIMARY;
+			if (local_uuid_flags & UUID_FLAG_CRASHED_PRIMARY &&
+			    !(peer_device->uuid_flags & UUID_FLAG_CRASHED_PRIMARY))
+				return SYNC_SOURCE_USE_BITMAP;
+
+			if (peer_device->uuid_flags & UUID_FLAG_CRASHED_PRIMARY &&
+			    !(local_uuid_flags & UUID_FLAG_CRASHED_PRIMARY))
+				return SYNC_TARGET_USE_BITMAP;
+		}
+
+		*rule = RULE_LOST_QUORUM;
+		if (peer_device->uuid_flags & UUID_FLAG_PRIMARY_LOST_QUORUM &&
+		    !test_bit(PRIMARY_LOST_QUORUM, &device->flags))
+			return SYNC_TARGET_IF_BOTH_FAILED;
+
+		if (!(peer_device->uuid_flags & UUID_FLAG_PRIMARY_LOST_QUORUM) &&
+		    test_bit(PRIMARY_LOST_QUORUM, &device->flags))
+			return SYNC_SOURCE_IF_BOTH_FAILED;
+
+		if (peer_device->uuid_flags & UUID_FLAG_PRIMARY_LOST_QUORUM &&
+		    test_bit(PRIMARY_LOST_QUORUM, &device->flags))
+			return test_bit(RESOLVE_CONFLICTS, &connection->transport.flags) ?
+				SYNC_SOURCE_IF_BOTH_FAILED :
+				SYNC_TARGET_IF_BOTH_FAILED;
+
+		if (connection->agreed_pro_version < 120) {
+			*rule = RULE_RECONNECTED;
+			if (peer_device->uuid_flags & UUID_FLAG_RECONNECT &&
+			    local_uuid_flags & UUID_FLAG_RECONNECT)
+				return NO_SYNC;
+		}
+
+		/* Peer crashed as primary, I survived, resync from me */
+		if (peer_device->uuid_flags & UUID_FLAG_CRASHED_PRIMARY &&
+		    local_uuid_flags & UUID_FLAG_RECONNECT)
+			return SYNC_SOURCE_IF_BOTH_FAILED;
+
+		/* I am a crashed primary, peer survived, resync to me */
+		if (local_uuid_flags & UUID_FLAG_CRASHED_PRIMARY &&
+		    peer_device->uuid_flags & UUID_FLAG_RECONNECT)
+			return SYNC_TARGET_IF_BOTH_FAILED;
+
+		/* One of us had a connection to the other node before.
+		   i.e. this is not a common power failure. */
+		if (peer_device->uuid_flags & UUID_FLAG_RECONNECT ||
+		    local_uuid_flags & UUID_FLAG_RECONNECT)
+			return NO_SYNC;
+
+		/* Common power [off|failure]? */
+		*rule = RULE_BOTH_OFF;
+		if (local_uuid_flags & UUID_FLAG_CRASHED_PRIMARY) {
+			if ((peer_device->uuid_flags & UUID_FLAG_CRASHED_PRIMARY) &&
+			    test_bit(RESOLVE_CONFLICTS, &connection->transport.flags))
+				return SYNC_TARGET_IF_BOTH_FAILED;
+			return SYNC_SOURCE_IF_BOTH_FAILED;
+		} else if (peer_device->uuid_flags & UUID_FLAG_CRASHED_PRIMARY)
+			return SYNC_TARGET_IF_BOTH_FAILED;
+		else
+			return NO_SYNC;
+	}
+
+	*rule = RULE_BITMAP_PEER;
+	peer = peer_device->bitmap_uuids[node_id] & ~UUID_PRIMARY;
+	if (self == peer)
+		return SYNC_TARGET_USE_BITMAP;
+
+	*rule = RULE_BITMAP_PEER_OTHER;
+	i = drbd_find_peer_bitmap_by_uuid(peer_device, self);
+	if (i != -1) {
+		*peer_node_id = i;
+		return SYNC_TARGET_CLEAR_BITMAP;
+	}
+
+	if (connection->agreed_pro_version < 110) {
+		enum sync_strategy rv = uuid_fixup_resync_start1(peer_device, rule);
+		if (rv != UNDETERMINED)
+			return rv;
+	}
+
+	*rule = RULE_BITMAP_SELF;
+	self = bitmap_uuid & ~UUID_PRIMARY;
+	peer = peer_device->current_uuid & ~UUID_PRIMARY;
+	if (self == peer)
+		return SYNC_SOURCE_USE_BITMAP;
+
+	*rule = RULE_BITMAP_SELF_OTHER;
+	i = drbd_find_bitmap_by_uuid(peer_device, peer);
+	if (i != -1) {
+		*peer_node_id = i;
+		return SYNC_SOURCE_COPY_BITMAP;
+	}
+
+	self = resolved_uuid;
+	my_current_in_peers_history = uuid_in_peer_history(peer_device, self);
+
+	if (connection->agreed_pro_version < 110) {
+		enum sync_strategy rv = uuid_fixup_resync_start2(peer_device, rule);
+		if (rv != UNDETERMINED)
+			return rv;
+	}
+
+	peer = peer_device->current_uuid & ~UUID_PRIMARY;
+	peers_current_in_my_history = uuid_in_my_history(device, peer);
+
+	if (my_current_in_peers_history && !peers_current_in_my_history) {
+		*rule = RULE_HISTORY_PEER;
+		return SYNC_TARGET_SET_BITMAP;
+	}
+	if (!my_current_in_peers_history && peers_current_in_my_history) {
+		*rule = RULE_HISTORY_SELF;
+		return SYNC_SOURCE_SET_BITMAP;
+	}
+
+	*rule = RULE_BITMAP_BOTH;
+	self = bitmap_uuid & ~UUID_PRIMARY;
+	peer = peer_device->bitmap_uuids[node_id] & ~UUID_PRIMARY;
+	if (self == peer && self != ((u64)0))
+		return SPLIT_BRAIN_AUTO_RECOVER;
+
+	*rule = RULE_HISTORY_BOTH;
+	for (i = 0; i < HISTORY_UUIDS; i++) {
+		self = drbd_history_uuid(device, i) & ~UUID_PRIMARY;
+		/* Don't conclude that there is "data divergence" from a "common
+		 * ancestor" if that common ancestor is just a not-yet-used slot
+		 * in the history, which is still initialized to zero on both
+		 * peers. */
+		if (self == 0)
+			break;
+		for (j = 0; j < ARRAY_SIZE(peer_device->history_uuids); j++) {
+			peer = peer_device->history_uuids[j] & ~UUID_PRIMARY;
+			if (peer == 0)
+				break;
+			if (self == peer)
+				return SPLIT_BRAIN_DISCONNECT;
+		}
+	}
+
+	return UNRELATED_DATA;
+}
+
+static void log_handshake(struct drbd_peer_device *peer_device)
+{
+	u64 uuid_flags = drbd_collect_local_uuid_flags(peer_device, NULL);
+
+	drbd_info(peer_device, "drbd_sync_handshake:\n");
+	drbd_uuid_dump_self(peer_device, peer_device->comm_bm_set, uuid_flags);
+	drbd_uuid_dump_peer(peer_device, peer_device->dirty_bits, peer_device->uuid_flags);
+}
+
+static enum sync_strategy drbd_handshake(struct drbd_peer_device *peer_device,
+			  enum sync_rule *rule,
+			  int *peer_node_id,
+			  bool always_verbose)
+{
+	struct drbd_device *device = peer_device->device;
+	enum sync_strategy strategy;
+
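+	/* When verbose, log the UUID handshake up front; otherwise log it
+	 * only once we know the outcome is something other than NO_SYNC. */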
+	spin_lock_irq(&device->ldev->md.uuid_lock);
+	if (always_verbose)
+		log_handshake(peer_device);
+
+	strategy = drbd_uuid_compare(peer_device, rule, peer_node_id);
+	if (strategy != NO_SYNC && !always_verbose)
+		log_handshake(peer_device);
+	spin_unlock_irq(&device->ldev->md.uuid_lock);
+
+	if (strategy != NO_SYNC || always_verbose)
+		drbd_info(peer_device, "uuid_compare()=%s by rule=%s\n",
+				strategy_descriptor(strategy).name,
+				drbd_sync_rule_str(*rule));
+
+	return strategy;
+}
+
+static bool is_resync_running(struct drbd_device *device)
+{
+	struct drbd_peer_device *peer_device;
+	bool rv = false;
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		enum drbd_repl_state repl_state = peer_device->repl_state[NOW];
+		if (repl_state == L_SYNC_TARGET || repl_state == L_PAUSED_SYNC_T) {
+			rv = true;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return rv;
+}
+
+static int bitmap_mod_after_handshake(struct drbd_peer_device *peer_device, enum sync_strategy strategy, int peer_node_id)
+{
+	struct drbd_device *device = peer_device->device;
+
+	if (strategy == SYNC_SOURCE_COPY_BITMAP) {
+		int from = device->ldev->md.peers[peer_node_id].bitmap_index;
+
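+		/* No bitmap slot allocated towards that peer: fall back to
+		 * the "day0" bitmap in an unallocated slot. If that does not
+		 * exist either, "from" stays -1 and all bits get set below. */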
+		if (from == -1)
+			from = drbd_unallocated_index(device->ldev);
+
+		if (peer_device->bitmap_index == -1)
+			return 0;
+
+		if (from == -1)
+			drbd_info(peer_device,
+				  "Setting all bitmap bits, day0 bm not available node_id=%d\n",
+				  peer_node_id);
+		else
+			drbd_info(peer_device,
+				  "Copying bitmap of peer node_id=%d (bitmap_index=%d)\n",
+				  peer_node_id, from);
+
+		drbd_suspend_io(device, WRITE_ONLY);
+		drbd_bm_slot_lock(peer_device, "copy_slot/set_many sync_handshake", BM_LOCK_BULK);
+		if (from == -1)
+			drbd_bm_set_many_bits(peer_device, 0, -1UL);
+		else
+			drbd_bm_copy_slot(device, from, peer_device->bitmap_index);
+		drbd_bm_write(device, NULL);
+		drbd_bm_slot_unlock(peer_device);
+		drbd_resume_io(device);
+	} else if (strategy == SYNC_TARGET_CLEAR_BITMAP) {
+		drbd_info(peer_device, "Resync source provides bitmap (node_id=%d)\n", peer_node_id);
+		drbd_suspend_io(device, WRITE_ONLY);
+		drbd_bm_slot_lock(peer_device, "bm_clear_many_bits sync_handshake", BM_LOCK_BULK);
+		drbd_bm_clear_many_bits(peer_device, 0, -1UL);
+		drbd_bm_write(device, NULL);
+		drbd_bm_slot_unlock(peer_device);
+		drbd_resume_io(device);
+	} else if (strategy == SYNC_SOURCE_SET_BITMAP || strategy == SYNC_TARGET_SET_BITMAP) {
+		int (*io_func)(struct drbd_device *, struct drbd_peer_device *);
+		int err;
+
+		if (strategy == SYNC_TARGET_SET_BITMAP &&
+		    drbd_current_uuid(device) == UUID_JUST_CREATED &&
+		    is_resync_running(device))
+			return 0;
+
+		if (drbd_current_uuid(device) == UUID_JUST_CREATED) {
+			drbd_info(peer_device, "Setting and writing the whole bitmap, fresh node\n");
+			io_func = &drbd_bmio_set_allocated_n_write;
+		} else {
+			drbd_info(peer_device, "Setting and writing one bitmap slot, after drbd_sync_handshake\n");
+			io_func = &drbd_bmio_set_n_write;
+		}
+		err = drbd_bitmap_io(device, io_func, "set_n_write sync_handshake",
+				     BM_LOCK_CLEAR | BM_LOCK_BULK, peer_device);
+		if (err)
+			return err;
+
+		if (drbd_current_uuid(device) != UUID_JUST_CREATED &&
+		    peer_device->current_uuid != UUID_JUST_CREATED &&
+		    strategy == SYNC_SOURCE_SET_BITMAP) {
+			/*
+			 * We have just written the bitmap slot. Update the
+			 * bitmap UUID so that the resync does not start from
+			 * the beginning again if we disconnect and reconnect.
+			 *
+			 * Initial resync continuation is handled in
+			 * drbd_start_resync() at comment:
+			 * prepare to continue an interrupted initial resync later
+			 */
+			drbd_uuid_set_bitmap(peer_device, peer_device->current_uuid);
+			drbd_print_uuids(peer_device, "updated bitmap UUID");
+			drbd_md_sync(device);
+		}
+	}
+	return 0;
+}
+
+static enum drbd_repl_state strategy_to_repl_state(struct drbd_peer_device *peer_device,
+						   enum drbd_role peer_role,
+						   enum sync_strategy strategy)
+{
+	enum drbd_role role = peer_device->device->resource->role[NOW];
+	enum drbd_repl_state rv;
+
+	if (strategy == SYNC_SOURCE_IF_BOTH_FAILED || strategy == SYNC_TARGET_IF_BOTH_FAILED) {
+		if (role == R_PRIMARY || peer_role == R_PRIMARY) {
+			/* At least one side is primary: let the roles decide the resync direction */
+			rv = peer_role == R_SECONDARY ? L_WF_BITMAP_S :
+				role == R_SECONDARY ? L_WF_BITMAP_T :
+				L_ESTABLISHED;
+			return rv;
+		}
+		/* No current primary. Handle it as a common power failure, consider the
+		   roles at crash time */
+	}
+
+	if (strategy_descriptor(strategy).is_sync_source) {
+		rv = L_WF_BITMAP_S;
+	} else if (strategy_descriptor(strategy).is_sync_target) {
+		rv = L_WF_BITMAP_T;
+	} else {
+		rv = L_ESTABLISHED;
+	}
+
+	return rv;
+}
+
+static enum sync_strategy drbd_disk_states_source_strategy(
+		struct drbd_peer_device *peer_device,
+		int *peer_node_id)
+{
+	const int node_id = peer_device->device->resource->res_opts.node_id;
+	u64 bitmap_uuid;
+	int i = -1;
+
+	if (!(peer_device->uuid_flags & UUID_FLAG_SYNC_TARGET))
+		return SYNC_SOURCE_USE_BITMAP;
+
+	/* A resync with identical current-UUIDs -> USE_BITMAP */
+	bitmap_uuid = peer_device->bitmap_uuids[node_id];
+	if (bitmap_uuid == peer_device->current_uuid &&
+	    bitmap_uuid == drbd_current_uuid(peer_device->device))
+		return SYNC_SOURCE_USE_BITMAP;
+
+	/* When the peer is already a sync target, we actually see its
+	 * current UUID in the bitmap UUID slot towards us. We may need
+	 * to pick a different bitmap as a result. */
+	if (bitmap_uuid)
+		i = drbd_find_bitmap_by_uuid(peer_device, bitmap_uuid);
+
+	if (i == -1)
+		return SYNC_SOURCE_SET_BITMAP;
+
+	if (i == peer_device->node_id)
+		return SYNC_SOURCE_USE_BITMAP;
+
+	*peer_node_id = i;
+	return SYNC_SOURCE_COPY_BITMAP;
+}
+
+static enum sync_strategy drbd_disk_states_target_strategy(
+		struct drbd_peer_device *peer_device,
+		int *peer_node_id)
+{
+	const int node_id = peer_device->device->resource->res_opts.node_id;
+	u64 bitmap_uuid;
+	int i;
+
+	if (!(peer_device->comm_uuid_flags & UUID_FLAG_SYNC_TARGET))
+		return SYNC_TARGET_USE_BITMAP;
+
+	bitmap_uuid = drbd_bitmap_uuid(peer_device);
+	if (bitmap_uuid == peer_device->current_uuid &&
+	    bitmap_uuid == drbd_current_uuid(peer_device->device))
+		return SYNC_TARGET_USE_BITMAP;
+
+	/* When we are already a sync target, we need to choose our
+	 * strategy to mirror the peer's choice (see
+	 * drbd_disk_states_source_strategy). */
+	i = drbd_find_peer_bitmap_by_uuid(peer_device, bitmap_uuid);
+
+	if (i == -1)
+		return SYNC_TARGET_SET_BITMAP;
+
+	if (i == node_id)
+		return SYNC_TARGET_USE_BITMAP;
+
+	*peer_node_id = i;
+	return SYNC_TARGET_CLEAR_BITMAP;
+}
+
+static void disk_states_to_strategy(struct drbd_peer_device *peer_device,
+				    enum drbd_disk_state peer_disk_state,
+				    enum sync_strategy *strategy, enum sync_rule rule,
+				    int *peer_node_id)
+{
+	enum drbd_disk_state disk_state = peer_device->comm_state.disk;
+	struct drbd_device *device = peer_device->device;
+	bool decide_based_on_dstates = false;
+	bool prefer_local, either_inconsistent;
+
+	if (disk_state == D_NEGOTIATING)
+		disk_state = disk_state_from_md(device);
+
+	either_inconsistent =
+		(disk_state == D_INCONSISTENT && peer_disk_state > D_INCONSISTENT) ||
+		(peer_disk_state == D_INCONSISTENT && disk_state > D_INCONSISTENT);
+
+	if (peer_device->connection->agreed_pro_version >= 119) {
+		bool dstates_want_resync =
+			disk_state != peer_disk_state && disk_state >= D_INCONSISTENT &&
+			peer_disk_state >= D_INCONSISTENT && peer_disk_state != D_UNKNOWN;
+		bool resync_direction_arbitrary =
+			*strategy == SYNC_TARGET_IF_BOTH_FAILED ||
+			*strategy == SYNC_SOURCE_IF_BOTH_FAILED;
+
+		decide_based_on_dstates =
+			dstates_want_resync &&
+			(((rule == RULE_RECONNECTED || rule == RULE_LOST_QUORUM || rule == RULE_BOTH_OFF) &&
+			  resync_direction_arbitrary) ||
+			 (*strategy == NO_SYNC && either_inconsistent));
+
+		prefer_local = disk_state > peer_disk_state;
+		/* RULE_BOTH_OFF means that the current UUIDs are equal. The decision
+		   was made by looking at the crashed_primary bits.
+		   The current disk states might give a better basis for the decision! */
+
+		/* RULE_LOST_QUORUM means that the current UUIDs are equal. The resync
+		   direction was determined by checking whether a node lost quorum
+		   while being primary. */
+	} else {
+		decide_based_on_dstates =
+			(rule == RULE_BOTH_OFF || *strategy == NO_SYNC) && either_inconsistent;
+
+		prefer_local = disk_state > D_INCONSISTENT;
+	}
+
+	if (decide_based_on_dstates) {
+		*strategy = prefer_local ?
+			drbd_disk_states_source_strategy(peer_device, peer_node_id) :
+			drbd_disk_states_target_strategy(peer_device, peer_node_id);
+		drbd_info(peer_device, "strategy = %s due to disk states. (%s/%s)\n",
+			  strategy_descriptor(*strategy).name,
+			  drbd_disk_str(disk_state), drbd_disk_str(peer_disk_state));
+	}
+}
+
+static enum sync_strategy drbd_attach_handshake(struct drbd_peer_device *peer_device,
+						  enum drbd_disk_state peer_disk_state)
+{
+	enum sync_strategy strategy;
+	enum sync_rule rule;
+	int peer_node_id, err;
+
+	strategy = drbd_handshake(peer_device, &rule, &peer_node_id, true);
+
+	if (!is_strategy_determined(strategy))
+		return strategy;
+
+	disk_states_to_strategy(peer_device, peer_disk_state, &strategy, rule, &peer_node_id);
+	err = bitmap_mod_after_handshake(peer_device, strategy, peer_node_id);
+	if (err)
+		return RETRY_CONNECT;
+
+	return strategy;
+}
+
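+/* Map an asymmetric --discard-my-data setting to a resync direction: the
+ * side that discards its data becomes the sync target. If the flag is set
+ * on both sides or on neither, the result stays UNDETERMINED. */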
+static enum sync_strategy discard_my_data_to_strategy(struct drbd_peer_device *peer_device)
+{
+	enum sync_strategy strategy = UNDETERMINED;
+
+	if (test_bit(DISCARD_MY_DATA, &peer_device->flags) &&
+	    !(peer_device->uuid_flags & UUID_FLAG_DISCARD_MY_DATA))
+		strategy = SYNC_TARGET_USE_BITMAP;
+
+	if (!test_bit(DISCARD_MY_DATA, &peer_device->flags) &&
+	    (peer_device->uuid_flags & UUID_FLAG_DISCARD_MY_DATA))
+		strategy = SYNC_SOURCE_USE_BITMAP;
+
+	return strategy;
+}
+
+/* drbd_sync_handshake() returns the agreed sync strategy on success.
+ * Failures are reported as the corresponding strategy values (such as
+ * RETRY_CONNECT or UNRELATED_DATA); a dry-run connect returns -2.
+ */
+static enum sync_strategy drbd_sync_handshake(struct drbd_peer_device *peer_device,
+					      union drbd_state peer_state)
+{
+	struct drbd_device *device = peer_device->device;
+	struct drbd_connection *connection = peer_device->connection;
+	struct net_conf *nc;
+	enum sync_strategy strategy;
+	enum sync_rule rule;
+	int rr_conflict, always_asbp, peer_node_id = 0, err;
+	enum drbd_role peer_role = peer_state.role;
+	enum drbd_disk_state peer_disk_state = peer_state.disk;
+	int required_protocol;
+	enum sync_strategy strategy_from_user = discard_my_data_to_strategy(peer_device);
+	bool need_full_sync_after_split_brain;
+
+	strategy = drbd_handshake(peer_device, &rule, &peer_node_id, true);
+
+	if (strategy == RETRY_CONNECT)
+		return strategy;
+
+	if (strategy == UNRELATED_DATA) {
+		drbd_alert(peer_device, "Unrelated data, aborting!\n");
+		return strategy;
 	}
-	if (hg < -0x10000) {
-		int proto, fflags;
-		hg = -hg;
-		proto = hg & 0xff;
-		fflags = (hg >> 8) & 0xff;
-		drbd_alert(device, "To resolve this both sides have to support at least protocol %d and feature flags 0x%x\n",
-					proto, fflags);
-		return C_MASK;
+	required_protocol = strategy_descriptor(strategy).required_protocol;
+	if (required_protocol) {
+		drbd_alert(peer_device, "To resolve this both sides have to support at least protocol %d\n", required_protocol);
+		return strategy;
 	}
-	if (hg < -1000) {
-		drbd_alert(device, "To resolve this both sides have to support at least protocol %d\n", -hg - 1000);
-		return C_MASK;
+
+	/* Protocol < 124 peers don't handle 0-bit missed-end-of-resync correctly.
+	 * Retry the connection to let UUID cleanup resolve it, or keep retrying
+	 * if the peer needs to be upgraded.
+	 */
+	if (connection->agreed_pro_version < 124 &&
+	    peer_device->comm_bm_set == 0 && peer_device->dirty_bits == 0) {
+		if (strategy == SYNC_SOURCE_USE_BITMAP &&
+		    rule == RULE_SYNC_SOURCE_MISSED_FINISH) {
+			drbd_info(peer_device, "Missed end of resync as sync-source with 0 bits;"
+				  " retrying to let UUID cleanup resolve it\n");
+			return RETRY_CONNECT;
+		}
+		if (strategy == SYNC_TARGET_USE_BITMAP &&
+		    rule == RULE_SYNC_TARGET_PEER_MISSED_FINISH &&
+		    device->resource->role[NOW] == R_PRIMARY) {
+			drbd_info(peer_device, "Missed end of resync as sync-target with 0 bits on Primary;"
+				  " peer needs protocol 124+ to resolve, retrying\n");
+			return REQUIRES_PROTO_124;
+		}
 	}
 
-	if    ((mydisk == D_INCONSISTENT && peer_disk > D_INCONSISTENT) ||
-	    (peer_disk == D_INCONSISTENT && mydisk    > D_INCONSISTENT)) {
-		int f = (hg == -100) || abs(hg) == 2;
-		hg = mydisk > D_INCONSISTENT ? 1 : -1;
-		if (f)
-			hg = hg*2;
-		drbd_info(device, "Becoming sync %s due to disk states.\n",
-		     hg > 0 ? "source" : "target");
+	disk_states_to_strategy(peer_device, peer_disk_state, &strategy, rule, &peer_node_id);
+
+	if (strategy == SPLIT_BRAIN_AUTO_RECOVER && (!drbd_device_stable(device, NULL) || !(peer_device->uuid_flags & UUID_FLAG_STABLE))) {
+		drbd_warn(peer_device, "Ignore Split-Brain, for now, at least one side unstable\n");
+		strategy = NO_SYNC;
 	}
 
-	if (abs(hg) == 100)
-		drbd_khelper(device, "initial-split-brain");
+	if (strategy_descriptor(strategy).is_split_brain)
+		drbd_maybe_khelper(device, connection, "initial-split-brain");
 
 	rcu_read_lock();
-	nc = rcu_dereference(peer_device->connection->net_conf);
+	nc = rcu_dereference(connection->transport.net_conf);
 	always_asbp = nc->always_asbp;
 	rr_conflict = nc->rr_conflict;
-	tentative = nc->tentative;
 	rcu_read_unlock();
 
-	if (hg == 100 || (hg == -100 && always_asbp)) {
-		int pcount = (device->state.role == R_PRIMARY)
+	/* Evaluate the original strategy,
+	 * before it is re-mapped by additional configuration below.
+	 */
+	need_full_sync_after_split_brain = (strategy == SPLIT_BRAIN_DISCONNECT);
+
+	if (strategy == SPLIT_BRAIN_AUTO_RECOVER || (strategy == SPLIT_BRAIN_DISCONNECT && always_asbp)) {
+		int pcount = (device->resource->role[NOW] == R_PRIMARY)
 			   + (peer_role == R_PRIMARY);
-		int forced = (hg == -100);
 
-		switch (pcount) {
-		case 0:
-			hg = drbd_asb_recover_0p(peer_device);
-			break;
-		case 1:
-			hg = drbd_asb_recover_1p(peer_device);
-			break;
-		case 2:
-			hg = drbd_asb_recover_2p(peer_device);
-			break;
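+		/* With quorum enabled and protocol 113+, prefer the quorum
+		 * outcome over the after-sb recovery policies: the side that
+		 * kept quorum becomes the sync source. */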
+		if (device->resource->res_opts.quorum != QOU_OFF &&
+		    connection->agreed_pro_version >= 113) {
+			if (device->have_quorum[NOW] && !peer_state.quorum)
+				strategy = SYNC_SOURCE_USE_BITMAP;
+			else if (!device->have_quorum[NOW] && peer_state.quorum)
+				strategy = SYNC_TARGET_USE_BITMAP;
+		}
+		if (strategy_descriptor(strategy).is_split_brain) {
+			switch (pcount) {
+			case 0:
+				strategy = drbd_asb_recover_0p(peer_device);
+				break;
+			case 1:
+				strategy = drbd_asb_recover_1p(peer_device);
+				break;
+			case 2:
+				strategy = drbd_asb_recover_2p(peer_device);
+				break;
+			}
 		}
-		if (abs(hg) < 100) {
-			drbd_warn(device, "Split-Brain detected, %d primaries, "
+		if (!strategy_descriptor(strategy).is_split_brain) {
+			drbd_warn(peer_device, "Split-Brain detected, %d primaries, "
 			     "automatically solved. Sync from %s node\n",
-			     pcount, (hg < 0) ? "peer" : "this");
-			if (forced) {
-				drbd_warn(device, "Doing a full sync, since"
+			     pcount, strategy_descriptor(strategy).is_sync_target ? "peer" : "this");
+			if (need_full_sync_after_split_brain) {
+				if (!strategy_descriptor(strategy).full_sync_equivalent) {
+					drbd_alert(peer_device, "Want full sync but cannot decide direction, dropping connection!\n");
+					return SPLIT_BRAIN_DISCONNECT;
+				}
+				drbd_warn(peer_device, "Doing a full sync, since"
 				     " UUIDs where ambiguous.\n");
-				hg = hg*2;
+				strategy = strategy_descriptor(strategy).full_sync_equivalent;
 			}
 		}
 	}
 
-	if (hg == -100) {
-		if (test_bit(DISCARD_MY_DATA, &device->flags) && !(device->p_uuid[UI_FLAGS]&1))
-			hg = -1;
-		if (!test_bit(DISCARD_MY_DATA, &device->flags) && (device->p_uuid[UI_FLAGS]&1))
-			hg = 1;
+	if (strategy == SPLIT_BRAIN_DISCONNECT && strategy_from_user != UNDETERMINED) {
+		/* strategy_from_user via "--discard-my-data" is either
+		 * SYNC_TARGET_USE_BITMAP or SYNC_SOURCE_USE_BITMAP.
+		 * But at this point we no longer have a relevant bitmap.
+		 * Map to their "full sync equivalent".
+		 */
+		if (need_full_sync_after_split_brain)
+			strategy = strategy_descriptor(strategy_from_user).full_sync_equivalent;
+		else
+			strategy = strategy_from_user;
+		drbd_warn(peer_device, "Split-Brain detected, manually solved. %s from %s node\n",
+			  need_full_sync_after_split_brain ? "Full sync" : "Sync",
+			  strategy_descriptor(strategy).is_sync_target ? "peer" : "this");
+	}
 
-		if (abs(hg) < 100)
-			drbd_warn(device, "Split-Brain detected, manually solved. "
-			     "Sync from %s node\n",
-			     (hg < 0) ? "peer" : "this");
+	if (strategy_descriptor(strategy).is_split_brain) {
+		drbd_alert(peer_device, "Split-Brain detected but unresolved, dropping connection!\n");
+		drbd_maybe_khelper(device, connection, "split-brain");
+		return strategy;
 	}
 
-	if (hg == -100) {
-		/* FIXME this log message is not correct if we end up here
-		 * after an attempted attach on a diskless node.
-		 * We just refuse to attach -- well, we drop the "connection"
-		 * to that disk, in a way... */
-		drbd_alert(device, "Split-Brain detected but unresolved, dropping connection!\n");
-		drbd_khelper(device, "split-brain");
-		return C_MASK;
+	if (!is_strategy_determined(strategy)) {
+		drbd_alert(peer_device, "Failed to fully determine sync strategy, dropping connection!\n");
+		return strategy;
 	}
 
-	if (hg > 0 && mydisk <= D_INCONSISTENT) {
-		drbd_err(device, "I shall become SyncSource, but I am inconsistent!\n");
-		return C_MASK;
+	if (connection->agreed_pro_version >= 121 && strategy != NO_SYNC &&
+	    strategy_from_user != UNDETERMINED &&
+	    strategy_descriptor(strategy).is_sync_source != strategy_descriptor(strategy_from_user).is_sync_source) {
+		if (strategy_descriptor(strategy).reverse != UNDETERMINED) {
+			enum sync_strategy reversed = strategy_descriptor(strategy).reverse;
+			enum drbd_disk_state resync_source_disk_state =
+				strategy_descriptor(reversed).is_sync_source ? device->disk_state[NOW] : peer_disk_state;
+			if (resync_source_disk_state > D_INCONSISTENT) {
+				strategy = reversed;
+				drbd_warn(peer_device, "Resync direction reversed by --discard-my-data. Reverting to older data!\n");
+			} else {
+				drbd_warn(peer_device, "Ignoring --discard-my-data\n");
+			}
+		} else {
+			drbd_warn(peer_device, "Can not reverse resync direction (requested via --discard-my-data)\n");
+		}
 	}
 
-	if (hg < 0 && /* by intention we do not use mydisk here. */
-	    device->state.role == R_PRIMARY && device->state.disk >= D_CONSISTENT) {
+	if (strategy_descriptor(strategy).is_sync_target &&
+	    strategy != SYNC_TARGET_IF_BOTH_FAILED &&
+	    device->resource->role[NOW] == R_PRIMARY && device->disk_state[NOW] >= D_CONSISTENT &&
+	    (peer_device->comm_bm_set > 0 || peer_device->dirty_bits > 0)) {
 		switch (rr_conflict) {
 		case ASB_CALL_HELPER:
-			drbd_khelper(device, "pri-lost");
+			drbd_maybe_khelper(device, connection, "pri-lost");
 			fallthrough;
 		case ASB_DISCONNECT:
-			drbd_err(device, "I shall become SyncTarget, but I am primary!\n");
-			return C_MASK;
+		case ASB_RETRY_CONNECT:
+			drbd_err(peer_device, "I shall become SyncTarget, but I am primary!\n");
+			strategy = rr_conflict == ASB_RETRY_CONNECT ?
+				SYNC_TARGET_PRIMARY_RECONNECT : SYNC_TARGET_PRIMARY_DISCONNECT;
+			break;
 		case ASB_VIOLENTLY:
-			drbd_warn(device, "Becoming SyncTarget, violating the stable-data"
+			drbd_warn(peer_device, "Becoming SyncTarget, violating the stable-data"
 			     "assumption\n");
+			break;
+		case ASB_AUTO_DISCARD:
+			if (strategy == SYNC_TARGET_USE_BITMAP && rule == RULE_CRASHED_PRIMARY) {
+				drbd_warn(peer_device, "reversing resync by auto-discard\n");
+				strategy = SYNC_SOURCE_USE_BITMAP;
+			}
 		}
 	}
-
-	if (tentative || test_bit(CONN_DRY_RUN, &peer_device->connection->flags)) {
-		if (hg == 0)
-			drbd_info(device, "dry-run connect: No resync, would become Connected immediately.\n");
-		else
-			drbd_info(device, "dry-run connect: Would become %s, doing a %s resync.",
-				 drbd_conn_str(hg > 0 ? C_SYNC_SOURCE : C_SYNC_TARGET),
-				 abs(hg) >= 2 ? "full" : "bit-map based");
-		return C_MASK;
+	if (strategy == SYNC_SOURCE_USE_BITMAP && rule == RULE_CRASHED_PRIMARY &&
+	    peer_role == R_PRIMARY && peer_disk_state >= D_CONSISTENT &&
+	    rr_conflict == ASB_AUTO_DISCARD) {
+		drbd_warn(peer_device, "reversing resync by auto-discard\n");
+		strategy = SYNC_TARGET_USE_BITMAP;
+	}
+
+	if (rule == RULE_SYNC_SOURCE_MISSED_FINISH || rule == RULE_SYNC_SOURCE_PEER_MISSED_FINISH ||
+	    rule == RULE_SYNC_TARGET_MISSED_FINISH || rule == RULE_SYNC_TARGET_PEER_MISSED_FINISH) {
+		if (strategy == SYNC_SOURCE_USE_BITMAP) {
+			enum drbd_disk_state disk_state = peer_device->comm_state.disk;
+
+			if (disk_state == D_NEGOTIATING)
+				disk_state = disk_state_from_md(device);
+			if (disk_state != D_UP_TO_DATE) {
+				drbd_info(peer_device,
+					  "Resync (rule=%s) skipped: sync-source (%s)\n",
+					  drbd_sync_rule_str(rule), drbd_disk_str(disk_state));
+				strategy = NO_SYNC;
+			}
+		} else if (strategy == SYNC_TARGET_USE_BITMAP) {
+			if (peer_disk_state != D_UP_TO_DATE) {
+				int peer_node_id = peer_device->node_id;
+				u64 previous = device->ldev->md.peers[peer_node_id].bitmap_uuid;
+
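+				/* The resync is skipped below; retire the
+				 * now stale bitmap UUID into the history. */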
+				if (previous) {
+					device->ldev->md.peers[peer_node_id].bitmap_uuid = 0;
+					_drbd_uuid_push_history(device, previous);
+					drbd_md_mark_dirty(device);
+				}
+				drbd_info(peer_device,
+					  "Resync (rule=%s) skipped: peer sync-source (%s)\n",
+					  drbd_sync_rule_str(rule), drbd_disk_str(peer_disk_state));
+				strategy = NO_SYNC;
+			}
+		}
 	}
 
-	if (abs(hg) >= 2) {
-		drbd_info(device, "Writing the whole bitmap, full sync required after drbd_sync_handshake.\n");
-		if (drbd_bitmap_io(device, &drbd_bmio_set_n_write, "set_n_write from sync_handshake",
-					BM_LOCKED_SET_ALLOWED, NULL))
-			return C_MASK;
+	if (test_bit(CONN_DRY_RUN, &connection->flags)) {
+		if (strategy == NO_SYNC)
+			drbd_info(peer_device, "dry-run connect: No resync, would become Connected immediately.\n");
+		else
+			drbd_info(peer_device, "dry-run connect: Would become %s, doing a %s resync.",
+				 drbd_repl_str(strategy_descriptor(strategy).is_sync_target ? L_SYNC_TARGET : L_SYNC_SOURCE),
+				 strategy_descriptor(strategy).name);
+		return -2;
 	}
 
-	if (hg > 0) { /* become sync source. */
-		rv = C_WF_BITMAP_S;
-	} else if (hg < 0) { /* become sync target */
-		rv = C_WF_BITMAP_T;
-	} else {
-		rv = C_CONNECTED;
-		if (drbd_bm_total_weight(device)) {
-			drbd_info(device, "No resync, but %lu bits in bitmap!\n",
-			     drbd_bm_total_weight(device));
-		}
-	}
+	err = bitmap_mod_after_handshake(peer_device, strategy, peer_node_id);
+	if (err)
+		return RETRY_CONNECT;
 
-	return rv;
+	return strategy;
 }
 
 static enum drbd_after_sb_p convert_after_sb(enum drbd_after_sb_p peer)
@@ -3465,20 +5479,18 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
 
 		if (pi->size > sizeof(integrity_alg))
 			return -EIO;
-		err = drbd_recv_all(connection, integrity_alg, pi->size);
+		err = drbd_recv_into(connection, integrity_alg, pi->size);
 		if (err)
 			return err;
 		integrity_alg[SHARED_SECRET_MAX - 1] = 0;
 	}
 
 	if (pi->cmd != P_PROTOCOL_UPDATE) {
-		clear_bit(CONN_DRY_RUN, &connection->flags);
-
 		if (cf & CF_DRY_RUN)
 			set_bit(CONN_DRY_RUN, &connection->flags);
 
 		rcu_read_lock();
-		nc = rcu_dereference(connection->net_conf);
+		nc = rcu_dereference(connection->transport.net_conf);
 
 		if (p_proto != nc->wire_protocol) {
 			drbd_err(connection, "incompatible %s settings\n", "protocol");
@@ -3500,7 +5512,7 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
 			goto disconnect_rcu_unlock;
 		}
 
-		if (p_discard_my_data && nc->discard_my_data) {
+		if (p_discard_my_data && test_bit(CONN_DISCARD_MY_DATA, &connection->flags)) {
 			drbd_err(connection, "incompatible %s settings\n", "discard-my-data");
 			goto disconnect_rcu_unlock;
 		}
@@ -3551,9 +5563,13 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
 	if (!new_net_conf)
 		goto disconnect;
 
-	mutex_lock(&connection->data.mutex);
-	mutex_lock(&connection->resource->conf_update);
-	old_net_conf = connection->net_conf;
+	if (mutex_lock_interruptible(&connection->resource->conf_update)) {
+		drbd_err(connection, "Interrupted while waiting for conf_update\n");
+		goto disconnect;
+	}
+
+	mutex_lock(&connection->mutex[DATA_STREAM]);
+	old_net_conf = connection->transport.net_conf;
 	*new_net_conf = *old_net_conf;
 
 	new_net_conf->wire_protocol = p_proto;
@@ -3562,9 +5578,9 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
 	new_net_conf->after_sb_2p = convert_after_sb(p_after_sb_2p);
 	new_net_conf->two_primaries = p_two_primaries;
 
-	rcu_assign_pointer(connection->net_conf, new_net_conf);
+	rcu_assign_pointer(connection->transport.net_conf, new_net_conf);
+	mutex_unlock(&connection->mutex[DATA_STREAM]);
 	mutex_unlock(&connection->resource->conf_update);
-	mutex_unlock(&connection->data.mutex);
 
 	crypto_free_shash(connection->peer_integrity_tfm);
 	kfree(connection->int_dig_in);
@@ -3583,10 +5599,11 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
 disconnect_rcu_unlock:
 	rcu_read_unlock();
 disconnect:
+	kfree(new_net_conf);
 	crypto_free_shash(peer_integrity_tfm);
 	kfree(int_dig_in);
 	kfree(int_dig_vv);
-	conn_request_state(connection, NS(conn, C_DISCONNECTING), CS_HARD);
+	change_cstate(connection, C_DISCONNECTING, CS_HARD);
 	return -EIO;
 }
 
@@ -3595,8 +5612,7 @@ static int receive_protocol(struct drbd_connection *connection, struct packet_in
  * return: NULL (alg name was "")
  *         ERR_PTR(error) if something goes wrong
  *         or the crypto hash ptr, if it worked out ok. */
-static struct crypto_shash *drbd_crypto_alloc_digest_safe(
-		const struct drbd_device *device,
+static struct crypto_shash *drbd_crypto_alloc_digest_safe(const struct drbd_device *device,
 		const char *alg, const char *name)
 {
 	struct crypto_shash *tfm;
@@ -3613,44 +5629,11 @@ static struct crypto_shash *drbd_crypto_alloc_digest_safe(
 	return tfm;
 }
 
-static int ignore_remaining_packet(struct drbd_connection *connection, struct packet_info *pi)
-{
-	void *buffer = connection->data.rbuf;
-	int size = pi->size;
-
-	while (size) {
-		int s = min_t(int, size, DRBD_SOCKET_BUFFER_SIZE);
-		s = drbd_recv(connection, buffer, s);
-		if (s <= 0) {
-			if (s < 0)
-				return s;
-			break;
-		}
-		size -= s;
-	}
-	if (size)
-		return -EIO;
-	return 0;
-}
-
-/*
- * config_unknown_volume  -  device configuration command for unknown volume
- *
- * When a device is added to an existing connection, the node on which the
- * device is added first will send configuration commands to its peer but the
- * peer will not know about the device yet.  It will warn and ignore these
- * commands.  Once the device is added on the second node, the second node will
- * send the same device configuration commands, but in the other direction.
- *
- * (We can also end up here if drbd is misconfigured.)
- */
-static int config_unknown_volume(struct drbd_connection *connection, struct packet_info *pi)
-{
-	drbd_warn(connection, "%s packet received for volume %u, which is not configured locally\n",
-		  cmdname(pi->cmd), pi->vnr);
-	return ignore_remaining_packet(connection, pi);
-}
-
+/* Receive P_SYNC_PARAM89 and the older P_SYNC_PARAM. The peer_device fields
+ * related to resync configuration are ignored. These include resync_rate,
+ * c_max_rate and the like. We ignore them because applying them to our own
+ * configuration would be confusing. It would cause us to swap configuration
+ * with our peer each time we connected. */
 static int receive_SyncParam(struct drbd_connection *connection, struct packet_info *pi)
 {
 	struct drbd_peer_device *peer_device;
@@ -3660,10 +5643,10 @@ static int receive_SyncParam(struct drbd_connection *connection, struct packet_i
 	struct crypto_shash *verify_tfm = NULL;
 	struct crypto_shash *csums_tfm = NULL;
 	struct net_conf *old_net_conf, *new_net_conf = NULL;
-	struct disk_conf *old_disk_conf = NULL, *new_disk_conf = NULL;
+	struct peer_device_conf *old_peer_device_conf = NULL;
 	const int apv = connection->agreed_pro_version;
 	struct fifo_buffer *old_plan = NULL, *new_plan = NULL;
-	unsigned int fifo_size = 0;
+	struct drbd_resource *resource = connection->resource;
 	int err;
 
 	peer_device = conn_peer_device(connection, pi->vnr);
@@ -3696,48 +5679,26 @@ static int receive_SyncParam(struct drbd_connection *connection, struct packet_i
 		D_ASSERT(device, data_size == 0);
 	}
 
-	/* initialize verify_alg and csums_alg */
-	p = pi->data;
-	BUILD_BUG_ON(sizeof(p->algs) != 2 * SHARED_SECRET_MAX);
-	memset(&p->algs, 0, sizeof(p->algs));
-
-	err = drbd_recv_all(peer_device->connection, p, header_size);
+	err = drbd_recv_all(connection, (void **)&p, header_size + data_size);
 	if (err)
 		return err;
 
-	mutex_lock(&connection->resource->conf_update);
-	old_net_conf = peer_device->connection->net_conf;
-	if (get_ldev(device)) {
-		new_disk_conf = kzalloc_obj(struct disk_conf);
-		if (!new_disk_conf) {
-			put_ldev(device);
-			mutex_unlock(&connection->resource->conf_update);
-			drbd_err(device, "Allocation of new disk_conf failed\n");
-			return -ENOMEM;
-		}
-
-		old_disk_conf = device->ldev->disk_conf;
-		*new_disk_conf = *old_disk_conf;
-
-		new_disk_conf->resync_rate = be32_to_cpu(p->resync_rate);
+	err = mutex_lock_interruptible(&resource->conf_update);
+	if (err) {
+		drbd_err(connection, "Interrupted while waiting for conf_update\n");
+		return err;
 	}
+	old_net_conf = connection->transport.net_conf;
 
 	if (apv >= 88) {
 		if (apv == 88) {
 			if (data_size > SHARED_SECRET_MAX || data_size == 0) {
-				drbd_err(device, "verify-alg of wrong size, "
-					"peer wants %u, accepting only up to %u byte\n",
-					data_size, SHARED_SECRET_MAX);
+				drbd_err(device, "verify-alg too long, "
+					 "peer wants %u, accepting only %u byte\n",
+					 data_size, SHARED_SECRET_MAX);
 				goto reconnect;
 			}
-
-			err = drbd_recv_all(peer_device->connection, p->verify_alg, data_size);
-			if (err)
-				goto reconnect;
-			/* we expect NUL terminated string */
-			/* but just in case someone tries to be evil */
-			D_ASSERT(device, p->verify_alg[data_size-1] == 0);
-			p->verify_alg[data_size-1] = 0;
+			p->verify_alg[data_size] = 0;
 
 		} else /* apv >= 89 */ {
 			/* we still expect NUL terminated strings */
@@ -3749,7 +5710,7 @@ static int receive_SyncParam(struct drbd_connection *connection, struct packet_i
 		}
 
 		if (strcmp(old_net_conf->verify_alg, p->verify_alg)) {
-			if (device->state.conn == C_WF_REPORT_PARAMS) {
+			if (peer_device->repl_state[NOW] == L_OFF) {
 				drbd_err(device, "Different verify-alg settings. me=\"%s\" peer=\"%s\"\n",
 				    old_net_conf->verify_alg, p->verify_alg);
 				goto disconnect;
@@ -3763,7 +5724,7 @@ static int receive_SyncParam(struct drbd_connection *connection, struct packet_i
 		}
 
 		if (apv >= 89 && strcmp(old_net_conf->csums_alg, p->csums_alg)) {
-			if (device->state.conn == C_WF_REPORT_PARAMS) {
+			if (peer_device->repl_state[NOW] == L_OFF) {
 				drbd_err(device, "Different csums-alg settings. me=\"%s\" peer=\"%s\"\n",
 				    old_net_conf->csums_alg, p->csums_alg);
 				goto disconnect;
@@ -3776,23 +5737,6 @@ static int receive_SyncParam(struct drbd_connection *connection, struct packet_i
 			}
 		}
 
-		if (apv > 94 && new_disk_conf) {
-			new_disk_conf->c_plan_ahead = be32_to_cpu(p->c_plan_ahead);
-			new_disk_conf->c_delay_target = be32_to_cpu(p->c_delay_target);
-			new_disk_conf->c_fill_target = be32_to_cpu(p->c_fill_target);
-			new_disk_conf->c_max_rate = be32_to_cpu(p->c_max_rate);
-
-			fifo_size = (new_disk_conf->c_plan_ahead * 10 * SLEEP_TIME) / HZ;
-			if (fifo_size != device->rs_plan_s->size) {
-				new_plan = fifo_alloc(fifo_size);
-				if (!new_plan) {
-					drbd_err(device, "kmalloc of fifo_buffer failed");
-					put_ldev(device);
-					goto disconnect;
-				}
-			}
-		}
-
 		if (verify_tfm || csums_tfm) {
 			new_net_conf = kzalloc_obj(struct net_conf);
 			if (!new_net_conf)
@@ -3803,66 +5747,58 @@ static int receive_SyncParam(struct drbd_connection *connection, struct packet_i
 			if (verify_tfm) {
 				strscpy(new_net_conf->verify_alg, p->verify_alg);
 				new_net_conf->verify_alg_len = strlen(p->verify_alg) + 1;
-				crypto_free_shash(peer_device->connection->verify_tfm);
-				peer_device->connection->verify_tfm = verify_tfm;
+				crypto_free_shash(connection->verify_tfm);
+				connection->verify_tfm = verify_tfm;
 				drbd_info(device, "using verify-alg: \"%s\"\n", p->verify_alg);
 			}
 			if (csums_tfm) {
 				strscpy(new_net_conf->csums_alg, p->csums_alg);
 				new_net_conf->csums_alg_len = strlen(p->csums_alg) + 1;
-				crypto_free_shash(peer_device->connection->csums_tfm);
-				peer_device->connection->csums_tfm = csums_tfm;
+				crypto_free_shash(connection->csums_tfm);
+				connection->csums_tfm = csums_tfm;
 				drbd_info(device, "using csums-alg: \"%s\"\n", p->csums_alg);
 			}
-			rcu_assign_pointer(connection->net_conf, new_net_conf);
+			rcu_assign_pointer(connection->transport.net_conf, new_net_conf);
 		}
 	}
 
-	if (new_disk_conf) {
-		rcu_assign_pointer(device->ldev->disk_conf, new_disk_conf);
-		put_ldev(device);
-	}
-
-	if (new_plan) {
-		old_plan = device->rs_plan_s;
-		rcu_assign_pointer(device->rs_plan_s, new_plan);
-	}
+	if (new_plan)
+		rcu_assign_pointer(peer_device->rs_plan_s, new_plan);
 
-	mutex_unlock(&connection->resource->conf_update);
+	mutex_unlock(&resource->conf_update);
 	synchronize_rcu();
 	if (new_net_conf)
 		kfree(old_net_conf);
-	kfree(old_disk_conf);
-	kfree(old_plan);
+	kfree(old_peer_device_conf);
+	if (new_plan)
+		kfree(old_plan);
 
 	return 0;
 
 reconnect:
-	if (new_disk_conf) {
-		put_ldev(device);
-		kfree(new_disk_conf);
-	}
-	mutex_unlock(&connection->resource->conf_update);
+	mutex_unlock(&resource->conf_update);
 	return -EIO;
 
 disconnect:
 	kfree(new_plan);
-	if (new_disk_conf) {
-		put_ldev(device);
-		kfree(new_disk_conf);
-	}
-	mutex_unlock(&connection->resource->conf_update);
+	mutex_unlock(&resource->conf_update);
 	/* just for completeness: actually not needed,
 	 * as this is not reached if csums_tfm was ok. */
 	crypto_free_shash(csums_tfm);
 	/* but free the verify_tfm again, if csums_tfm did not work out */
 	crypto_free_shash(verify_tfm);
-	conn_request_state(peer_device->connection, NS(conn, C_DISCONNECTING), CS_HARD);
+	change_cstate(connection, C_DISCONNECTING, CS_HARD);
 	return -EIO;
 }
 
+static void drbd_setup_order_type(struct drbd_device *device, int peer)
+{
+	/* sorry, we currently have no working implementation
+	 * of distributed TCQ */
+}
+
 /* warn if the arguments differ by more than 12.5% */
-static void warn_if_differ_considerably(struct drbd_device *device,
+static void warn_if_differ_considerably(struct drbd_peer_device *peer_device,
 	const char *s, sector_t a, sector_t b)
 {
 	sector_t d;
@@ -3870,135 +5806,325 @@ static void warn_if_differ_considerably(struct drbd_device *device,
 		return;
 	d = (a > b) ? (a - b) : (b - a);
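+	/* (x>>3) is x/8, i.e. 12.5%; e.g. a=1000, b=1200 gives d=200, which
+	 * exceeds a>>3 == 125, so we warn. */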
 	if (d > (a>>3) || d > (b>>3))
-		drbd_warn(device, "Considerable difference in %s: %llus vs. %llus\n", s,
+		drbd_warn(peer_device, "Considerable difference in %s: %llus vs. %llus\n", s,
 		     (unsigned long long)a, (unsigned long long)b);
 }
 
-static int receive_sizes(struct drbd_connection *connection, struct packet_info *pi)
+static bool drbd_other_peer_smaller(struct drbd_peer_device *reference_peer_device, uint64_t new_size)
 {
+	struct drbd_device *device = reference_peer_device->device;
 	struct drbd_peer_device *peer_device;
+	bool smaller = false;
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		if (peer_device == reference_peer_device)
+			continue;
+
+		/* Ignore peers without an attached disk. */
+		if (peer_device->disk_state[NOW] < D_INCONSISTENT)
+			continue;
+
+		if (peer_device->d_size != 0 && peer_device->d_size < new_size)
+			smaller = true;
+	}
+	rcu_read_unlock();
+
+	return smaller;
+}
+
+/* Maximum bio size that a protocol version supports. */
+static unsigned int conn_max_bio_size(struct drbd_connection *connection)
+{
+	if (connection->agreed_pro_version >= 100)
+		return DRBD_MAX_BIO_SIZE;
+	else if (connection->agreed_pro_version >= 95)
+		return DRBD_MAX_BIO_SIZE_P95;
+	else
+		return DRBD_MAX_SIZE_H80_PACKET;
+}
+
+static struct drbd_peer_device *get_neighbor_device(struct drbd_device *device,
+		enum drbd_neighbor neighbor)
+{
+	s32 self_id, peer_id, pivot;
+	struct drbd_peer_device *peer_device, *peer_device_ret = NULL;
+
+	if (!get_ldev(device))
+		return NULL;
+	self_id = device->ldev->md.node_id;
+	put_ldev(device);
+
+	pivot = neighbor == NEXT_LOWER ? 0 : neighbor == NEXT_HIGHER ? S32_MAX : -1;
+	if (pivot == -1)
+		return NULL;
+
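+	/* Scan all peers with a disk, keeping the node id closest to our own
+	 * on the requested side: pivot converges towards self_id, starting
+	 * from 0 (NEXT_LOWER) or S32_MAX (NEXT_HIGHER). */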
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		bool found_new = false;
+		peer_id = peer_device->node_id;
+
+		if (neighbor == NEXT_LOWER && peer_id < self_id && peer_id >= pivot)
+			found_new = true;
+		else if (neighbor == NEXT_HIGHER && peer_id > self_id && peer_id <= pivot)
+			found_new = true;
+
+		if (found_new && peer_device->disk_state[NOW] >= D_INCONSISTENT) {
+			pivot = peer_id;
+			peer_device_ret = peer_device;
+		}
+	}
+	rcu_read_unlock();
+
+	return peer_device_ret;
+}
+
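+/* Kick a resync towards @peer_device after a size change: either a resize
+ * was pending, or the device grew while the connection was established.
+ * With DDSF_NO_RESYNC (--assume-clean) only log; if either disk is below
+ * D_INCONSISTENT, defer by setting RESYNC_AFTER_NEG instead. */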
+static void maybe_trigger_resync(struct drbd_device *device, struct drbd_peer_device *peer_device, bool grew, bool skip)
+{
+	if (!peer_device)
+		return;
+	if (peer_device->repl_state[NOW] <= L_OFF)
+		return;
+	if (test_and_clear_bit(RESIZE_PENDING, &peer_device->flags) ||
+	    (grew && peer_device->repl_state[NOW] == L_ESTABLISHED)) {
+		if (peer_device->disk_state[NOW] >= D_INCONSISTENT &&
+		    device->disk_state[NOW] >= D_INCONSISTENT) {
+			if (skip)
+				drbd_info(peer_device, "Resync of new storage suppressed with --assume-clean\n");
+			else
+				resync_after_online_grow(peer_device);
+		} else
+			set_bit(RESYNC_AFTER_NEG, &peer_device->flags);
+	}
+}
+
+static int receive_sizes(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_peer_device *peer_device, *peer_device_it = NULL;
 	struct drbd_device *device;
 	struct p_sizes *p = pi->data;
-	struct o_qlim *o = (connection->agreed_features & DRBD_FF_WSAME) ? p->qlim : NULL;
+	uint64_t p_size, p_usize, p_csize;
+	uint64_t my_usize, my_max_size, cur_size;
 	enum determine_dev_size dd = DS_UNCHANGED;
-	sector_t p_size, p_usize, p_csize, my_usize;
-	sector_t new_size, cur_size;
-	int ldsc = 0; /* local disk size changed */
+	bool should_send_sizes = false;
 	enum dds_flags ddsf;
+	unsigned int protocol_max_bio_size;
+	bool have_ldev = false;
+	bool have_mutex = false;
+	bool is_handshake;
+	int err;
+	u64 im;
 
 	peer_device = conn_peer_device(connection, pi->vnr);
 	if (!peer_device)
 		return config_unknown_volume(connection, pi);
 	device = peer_device->device;
-	cur_size = get_capacity(device->vdisk);
 
+	err = mutex_lock_interruptible(&connection->resource->conf_update);
+	if (err) {
+		drbd_err(connection, "Interrupted while waiting for conf_update\n");
+		goto out;
+	}
+	have_mutex = true;
+
+	/* just store the peer's disk size for now.
+	 * we still need to figure out whether we accept that. */
 	p_size = be64_to_cpu(p->d_size);
 	p_usize = be64_to_cpu(p->u_size);
 	p_csize = be64_to_cpu(p->c_size);
 
-	/* just store the peer's disk size for now.
-	 * we still need to figure out whether we accept that. */
-	device->p_size = p_size;
+	peer_device->d_size = p_size;
+	peer_device->u_size = p_usize;
+	peer_device->c_size = p_csize;
+
+	/* Ignore "current" size for calculating "max" size. */
+	/* If it used to have a disk, but now is detached, don't revert back to zero. */
+	if (p_size)
+		peer_device->max_size = p_size;
+
+	cur_size = get_capacity(device->vdisk);
+	dynamic_drbd_dbg(device, "current_size: %llu\n", (unsigned long long)cur_size);
+	dynamic_drbd_dbg(peer_device, "c_size: %llu u_size: %llu d_size: %llu max_size: %llu\n",
+			(unsigned long long)p_csize,
+			(unsigned long long)p_usize,
+			(unsigned long long)p_size,
+			(unsigned long long)peer_device->max_size);
+
+	if ((p_size && p_csize > p_size) || (p_usize && p_csize > p_usize)) {
+		drbd_warn(peer_device, "Peer sent bogus sizes, disconnecting\n");
+		goto disconnect;
+	}
+
+	/* The protocol version limits how big requests can be.  In addition,
+	 * peers before protocol version 94 cannot split large requests into
+	 * multiple bios; their reported max_bio_size is a hard limit.
+	 */
+	protocol_max_bio_size = conn_max_bio_size(connection);
+	peer_device->q_limits.max_bio_size = min(be32_to_cpu(p->max_bio_size),
+						 protocol_max_bio_size);
+	ddsf = be16_to_cpu(p->dds_flags);
+	is_handshake = (peer_device->repl_state[NOW] == L_OFF);
+	set_bit(HAVE_SIZES, &peer_device->flags);
 
 	if (get_ldev(device)) {
+		sector_t new_size;
+
+		have_ldev = true;
+
 		rcu_read_lock();
 		my_usize = rcu_dereference(device->ldev->disk_conf)->disk_size;
 		rcu_read_unlock();
 
-		warn_if_differ_considerably(device, "lower level device sizes",
-			   p_size, drbd_get_max_capacity(device->ldev));
-		warn_if_differ_considerably(device, "user requested size",
+		my_max_size = drbd_get_max_capacity(device, device->ldev, false);
+		dynamic_drbd_dbg(peer_device, "la_size: %llu my_usize: %llu my_max_size: %llu\n",
+			(unsigned long long)device->ldev->md.effective_size,
+			(unsigned long long)my_usize,
+			(unsigned long long)my_max_size);
+
+		if (peer_device->disk_state[NOW] > D_DISKLESS)
+			warn_if_differ_considerably(peer_device, "lower level device sizes",
+				   p_size, my_max_size);
+		warn_if_differ_considerably(peer_device, "user requested size",
 					    p_usize, my_usize);
 
-		/* if this is the first connect, or an otherwise expected
-		 * param exchange, choose the minimum */
-		if (device->state.conn == C_WF_REPORT_PARAMS)
+		if (is_handshake)
 			p_usize = min_not_zero(my_usize, p_usize);
 
+		if (p_usize == 0) {
+			/* Only a peer with a backend may reset usize to zero;
+			 * a diskless node has no disk config and always
+			 * sends zero. */
+			if (p_size == 0)
+				p_usize = my_usize;
+		}
+
+		new_size = drbd_new_dev_size(device, p_csize, p_usize, ddsf);
+
 		/* Never shrink a device with usable data during connect,
 		 * or "attach" on the peer.
 		 * But allow online shrinking if we are connected. */
-		new_size = drbd_new_dev_size(device, device->ldev, p_usize, 0);
 		if (new_size < cur_size &&
-		    device->state.disk >= D_OUTDATED &&
-		    (device->state.conn < C_CONNECTED || device->state.pdsk == D_DISKLESS)) {
-			drbd_err(device, "The peer's disk size is too small! (%llu < %llu sectors)\n",
+		    device->disk_state[NOW] >= D_OUTDATED &&
+		    (peer_device->repl_state[NOW] < L_ESTABLISHED || peer_device->disk_state[NOW] == D_DISKLESS)) {
+			drbd_err(peer_device, "The peer's disk size is too small! (%llu < %llu sectors)\n",
 					(unsigned long long)new_size, (unsigned long long)cur_size);
-			conn_request_state(peer_device->connection, NS(conn, C_DISCONNECTING), CS_HARD);
-			put_ldev(device);
-			return -EIO;
+			goto disconnect;
+		}
+
+		/* Disconnect, if we cannot grow to the peer's current size */
+		if (my_max_size < p_csize && !is_handshake) {
+			drbd_err(peer_device, "Peer's size larger than my maximum capacity (%llu < %llu sectors)\n",
+					(unsigned long long)my_max_size, (unsigned long long)p_csize);
+			goto disconnect;
 		}
 
 		if (my_usize != p_usize) {
-			struct disk_conf *old_disk_conf, *new_disk_conf = NULL;
+			struct disk_conf *old_disk_conf, *new_disk_conf;
 
 			new_disk_conf = kzalloc_obj(struct disk_conf);
 			if (!new_disk_conf) {
-				put_ldev(device);
-				return -ENOMEM;
+				err = -ENOMEM;
+				goto out;
 			}
 
-			mutex_lock(&connection->resource->conf_update);
 			old_disk_conf = device->ldev->disk_conf;
 			*new_disk_conf = *old_disk_conf;
 			new_disk_conf->disk_size = p_usize;
 
 			rcu_assign_pointer(device->ldev->disk_conf, new_disk_conf);
-			mutex_unlock(&connection->resource->conf_update);
 			kvfree_rcu_mightsleep(old_disk_conf);
 
-			drbd_info(device, "Peer sets u_size to %lu sectors (old: %lu)\n",
-				 (unsigned long)p_usize, (unsigned long)my_usize);
+			drbd_info(peer_device, "Peer sets u_size to %llu sectors (old: %llu)\n",
+				 (unsigned long long)p_usize, (unsigned long long)my_usize);
+			/* Do not set should_send_sizes here. That might cause packet storms */
 		}
+	}
 
-		put_ldev(device);
+	if (connection->agreed_features & DRBD_FF_WSAME) {
+		struct o_qlim *qlim = p->qlim;
+
+		peer_device->q_limits.physical_block_size = be32_to_cpu(qlim->physical_block_size);
+		peer_device->q_limits.logical_block_size = be32_to_cpu(qlim->logical_block_size);
+		peer_device->q_limits.alignment_offset = be32_to_cpu(qlim->alignment_offset);
+		peer_device->q_limits.io_min = be32_to_cpu(qlim->io_min);
+		peer_device->q_limits.io_opt = be32_to_cpu(qlim->io_opt);
+	}
+
+	if (connection->agreed_features & DRBD_FF_BM_BLOCK_SHIFT) {
+		peer_device->bm_block_shift =
+			p->qlim->bm_block_shift_minus_12 + BM_BLOCK_SHIFT_4k;
+	} else {
+		int bbs = have_ldev ? bm_block_size(device->bitmap) : BM_BLOCK_SIZE_4k;
+		/* May work as long as this node is the SyncTarget. May result
+		 * in never-ending or repeating resyncs if the peer is the
+		 * SyncTarget but unaware of bitmap granularity issues.
+		 */
+		if (bbs != BM_BLOCK_SIZE_4k)
+			drbd_warn(peer_device,
+				"My bitmap granularity is %u. Upgrade this peer to make it aware.\n",
+				bbs);
+		peer_device->bm_block_shift = BM_BLOCK_SHIFT_4k;
 	}
 
-	device->peer_max_bio_size = be32_to_cpu(p->max_bio_size);
 	/* Leave drbd_reconsider_queue_parameters() before drbd_determine_dev_size().
 	   In case we cleared the QUEUE_FLAG_DISCARD from our queue in
 	   drbd_reconsider_queue_parameters(), we can be sure that after
-	   drbd_determine_dev_size() no REQ_DISCARDs are in the queue. */
+	   drbd_determine_dev_size() no REQ_OP_DISCARDs are in the queue. */
+	if (have_ldev) {
+		enum dds_flags local_ddsf = ddsf;
+		drbd_reconsider_queue_parameters(device, device->ldev);
 
-	ddsf = be16_to_cpu(p->dds_flags);
-	if (get_ldev(device)) {
-		drbd_reconsider_queue_parameters(device, device->ldev, o);
-		dd = drbd_determine_dev_size(device, ddsf, NULL);
-		put_ldev(device);
-		if (dd == DS_ERROR)
-			return -EIO;
-		drbd_md_sync(device);
+		/* To support thinly provisioned nodes (partial resync) joining later,
+		   clear all bitmap slots, including the unused ones. */
+		if (device->ldev->md.effective_size == 0)
+			local_ddsf |= DDSF_NO_RESYNC;
+
+		dd = drbd_determine_dev_size(device, p_csize, local_ddsf, NULL);
+
+		if (dd == DS_GREW || dd == DS_SHRUNK)
+			should_send_sizes = true;
+
+		if (dd == DS_ERROR) {
+			err = -EIO;
+			goto out;
+		}
+		drbd_md_sync_if_dirty(device);
 	} else {
-		/*
-		 * I am diskless, need to accept the peer's *current* size.
-		 * I must NOT accept the peers backing disk size,
-		 * it may have been larger than mine all along...
+		uint64_t new_size = 0;
+
+		drbd_reconsider_queue_parameters(device, NULL);
+		/* In case I am diskless, need to accept the peer's *current* size.
 		 *
 		 * At this point, the peer knows more about my disk, or at
 		 * least about what we last agreed upon, than myself.
 		 * So if his c_size is less than his d_size, the most likely
-		 * reason is that *my* d_size was smaller last time we checked.
-		 *
-		 * However, if he sends a zero current size,
-		 * take his (user-capped or) backing disk size anyways.
+		 * reason is that *my* d_size was smaller last time we checked,
+		 * or some other peer does not (yet) have enough room.
 		 *
 		 * Unless of course he does not have a disk himself.
 		 * In which case we ignore this completely.
 		 */
-		sector_t new_size = p_csize ?: p_usize ?: p_size;
-		drbd_reconsider_queue_parameters(device, NULL, o);
+		new_size = p_csize;
+		new_size = min_not_zero(new_size, p_usize);
+		new_size = min_not_zero(new_size, p_size);
+
 		if (new_size == 0) {
 			/* Ignore, the peer knows nothing. */
 		} else if (new_size == cur_size) {
 			/* nothing to do */
 		} else if (cur_size != 0 && p_size == 0) {
-			drbd_warn(device, "Ignored diskless peer device size (peer:%llu != me:%llu sectors)!\n",
+			dynamic_drbd_dbg(peer_device,
+					"Ignored diskless peer device size (peer:%llu != me:%llu sectors)!\n",
 					(unsigned long long)new_size, (unsigned long long)cur_size);
-		} else if (new_size < cur_size && device->state.role == R_PRIMARY) {
-			drbd_err(device, "The peer's device size is too small! (%llu < %llu sectors); demote me first!\n",
-					(unsigned long long)new_size, (unsigned long long)cur_size);
-			conn_request_state(peer_device->connection, NS(conn, C_DISCONNECTING), CS_HARD);
-			return -EIO;
+		} else if (new_size < cur_size && device->resource->role[NOW] == R_PRIMARY) {
+			drbd_err(peer_device,
+				"The peer's device size is too small! (%llu < %llu sectors); demote me first!\n",
+				(unsigned long long)new_size, (unsigned long long)cur_size);
+			goto disconnect;
+		} else if (drbd_other_peer_smaller(peer_device, new_size)) {
+			dynamic_drbd_dbg(peer_device,
+					"Ignored peer device size (peer:%llu sectors); other peer smaller!\n",
+					(unsigned long long)new_size);
 		} else {
 			/* I believe the peer, if
 			 *  - I don't have a current size myself
@@ -4009,1071 +6135,3893 @@ static int receive_sizes(struct drbd_connection *connection, struct packet_info
 			 *    and he has the only disk,
 			 *    which is larger than my current size
 			 */
+			should_send_sizes = true;
 			drbd_set_my_capacity(device, new_size);
 		}
 	}
 
-	if (get_ldev(device)) {
+	if (device->device_conf.max_bio_size > protocol_max_bio_size ||
+	    (connection->agreed_pro_version < 94 &&
+	     device->device_conf.max_bio_size > peer_device->q_limits.max_bio_size)) {
+		drbd_err(device, "Peer cannot deal with requests bigger than %u. "
+			 "Please reduce max_bio_size in the configuration.\n",
+			 peer_device->q_limits.max_bio_size);
+		goto disconnect;
+	}
+
+	if (have_ldev) {
 		if (device->ldev->known_size != drbd_get_capacity(device->ldev->backing_bdev)) {
 			device->ldev->known_size = drbd_get_capacity(device->ldev->backing_bdev);
-			ldsc = 1;
+			should_send_sizes = true;
 		}
 
+		drbd_setup_order_type(device, be16_to_cpu(p->queue_order_type));
+	}
+
+	cur_size = get_capacity(device->vdisk);
+
+	for_each_peer_device_ref(peer_device_it, im, device) {
+		struct drbd_connection *con_it = peer_device_it->connection;
+
+		/* drop cached max_size, if we already grew beyond it */
+		if (peer_device_it->max_size < cur_size)
+			peer_device_it->max_size = 0;
+
+		if (con_it->cstate[NOW] < C_CONNECTED)
+			continue;
+
+		/* Send size updates only if something relevant has changed.
+		 * TODO: only tell the sender thread to do so,
+		 * or we may end up in a distributed deadlock on congestion. */
+
+		if (should_send_sizes)
+			drbd_send_sizes(peer_device_it, p_usize, ddsf);
+	}
+
+	maybe_trigger_resync(device, get_neighbor_device(device, NEXT_HIGHER),
+					dd == DS_GREW, ddsf & DDSF_NO_RESYNC);
+	maybe_trigger_resync(device, get_neighbor_device(device, NEXT_LOWER),
+					dd == DS_GREW, ddsf & DDSF_NO_RESYNC);
+	err = 0;
+
+out:
+	if (have_ldev)
 		put_ldev(device);
+	if (have_mutex)
+		mutex_unlock(&connection->resource->conf_update);
+	return err;
+
+disconnect:
+	/* don't let a rejected peer confuse future handshakes with different peers. */
+	peer_device->max_size = 0;
+
+	if (connection->resource->remote_state_change)
+		set_bit(TWOPC_RECV_SIZES_ERR, &connection->resource->flags);
+	else
+		err = -EIO;
+	goto out;
+}
+
+static enum sync_strategy resolve_splitbrain_from_disk_states(struct drbd_peer_device *peer_device)
+{
+	struct drbd_device *device = peer_device->device;
+	enum drbd_disk_state peer_disk_state = peer_device->disk_state[NOW];
+	enum drbd_disk_state disk_state = device->disk_state[NOW];
+
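+	/* Prefer the peer's data whenever the peer is D_UP_TO_DATE (we become
+	 * sync target, also when both sides are up to date); otherwise ours,
+	 * if only we are. With neither side up to date, the split brain
+	 * remains unresolved. */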
+	return  disk_state <= D_UP_TO_DATE && peer_disk_state == D_UP_TO_DATE ? SYNC_TARGET_USE_BITMAP :
+		disk_state == D_UP_TO_DATE && peer_disk_state <= D_UP_TO_DATE ? SYNC_SOURCE_USE_BITMAP :
+		SPLIT_BRAIN_AUTO_RECOVER;
+}
+
+static void drbd_resync(struct drbd_peer_device *peer_device,
+			enum resync_reason reason)
+{
+	enum drbd_role peer_role = peer_device->connection->peer_role[NOW];
+	enum drbd_repl_state new_repl_state;
+	enum drbd_disk_state peer_disk_state;
+	enum sync_strategy strategy;
+	enum sync_rule rule;
+	int peer_node_id;
+	enum drbd_state_rv rv;
+	const char *tag = reason == AFTER_UNSTABLE ? "after-unstable" : "diskless-primary";
+
+	strategy = drbd_handshake(peer_device, &rule, &peer_node_id, reason == DISKLESS_PRIMARY);
+	if (strategy == SPLIT_BRAIN_AUTO_RECOVER && reason == AFTER_UNSTABLE)
+		strategy = resolve_splitbrain_from_disk_states(peer_device);
+
+	if (!is_strategy_determined(strategy)) {
+		drbd_info(peer_device, "Unexpected result of handshake() %s!\n", strategy_descriptor(strategy).name);
+		return;
 	}
 
-	if (device->state.conn > C_WF_REPORT_PARAMS) {
-		if (be64_to_cpu(p->c_size) != get_capacity(device->vdisk) ||
-		    ldsc) {
-			/* we have different sizes, probably peer
-			 * needs to know my new size... */
-			drbd_send_sizes(peer_device, 0, ddsf);
-		}
-		if (test_and_clear_bit(RESIZE_PENDING, &device->flags) ||
-		    (dd == DS_GREW && device->state.conn == C_CONNECTED)) {
-			if (device->state.pdsk >= D_INCONSISTENT &&
-			    device->state.disk >= D_INCONSISTENT) {
-				if (ddsf & DDSF_NO_RESYNC)
-					drbd_info(device, "Resync of new storage suppressed with --assume-clean\n");
-				else
-					resync_after_online_grow(device);
-			} else
-				set_bit(RESYNC_AFTER_NEG, &device->flags);
+	peer_disk_state = peer_device->disk_state[NOW];
+	if (reason == DISKLESS_PRIMARY)
+		disk_states_to_strategy(peer_device, peer_disk_state, &strategy, rule, &peer_node_id);
+
+	new_repl_state = strategy_to_repl_state(peer_device, peer_role, strategy);
+	if (new_repl_state != L_ESTABLISHED) {
+		bitmap_mod_after_handshake(peer_device, strategy, peer_node_id);
+		drbd_info(peer_device, "Becoming %s %s\n", drbd_repl_str(new_repl_state),
+			  reason == AFTER_UNSTABLE ? "after unstable" : "because primary is diskless");
+	}
+
+	if (new_repl_state == L_ESTABLISHED && peer_disk_state >= D_CONSISTENT &&
+	    peer_device->device->disk_state[NOW] == D_OUTDATED) {
+		/* No resync with up-to-date peer -> I should be consistent or up-to-date as well.
+		   Note: Former unstable (but up-to-date) nodes become consistent for a short
+		   time after losing their primary peer. Therefore consider consistent here
+		   as well. */
+		drbd_info(peer_device, "Upgrading local disk to %s after unstable/weak (and no resync).\n",
+			  drbd_disk_str(peer_disk_state));
+		change_disk_state(peer_device->device, peer_disk_state, CS_VERBOSE, tag, NULL);
+		return;
+	}
+
+	rv = change_repl_state(peer_device, new_repl_state, CS_VERBOSE, tag);
+	if ((rv == SS_NOTHING_TO_DO || rv == SS_RESYNC_RUNNING) &&
+	    (new_repl_state == L_WF_BITMAP_S || new_repl_state == L_WF_BITMAP_T)) {
+		/* Those events might happen very quickly. In case we are still processing
+		   the previous resync we need to re-enter that state. Schedule sending of
+		   the bitmap here explicitly */
+		peer_device->resync_again++;
+		drbd_info(peer_device, "...postponing this until current resync finished\n");
+	}
+}
+
+static void update_bitmap_slot_of_peer(struct drbd_peer_device *peer_device, int node_id, u64 bitmap_uuid)
+{
+	struct drbd_device *device = peer_device->device;
+
+	if (peer_device->bitmap_uuids[node_id] && bitmap_uuid == 0) {
+		/* If we learn from a neighbor that it no longer has a bitmap
+		   against a third node, we need to deduce from that knowledge
+		   that in the other direction the bitmap was cleared as well.
+		 */
+		struct drbd_peer_device *peer_device2;
+
+		rcu_read_lock();
+		peer_device2 = peer_device_by_node_id(peer_device->device, node_id);
+		if (peer_device2) {
+			int node_id2 = peer_device->connection->peer_node_id;
+			peer_device2->bitmap_uuids[node_id2] = 0;
 		}
+		rcu_read_unlock();
 	}
 
-	return 0;
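+	/* Remember bitmap UUIDs the peer holds towards third nodes in our
+	 * UUID history as well. */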
+	if (node_id != device->resource->res_opts.node_id && bitmap_uuid != -1 && get_ldev(device)) {
+		_drbd_uuid_push_history(device, bitmap_uuid);
+		put_ldev(device);
+	}
+	peer_device->bitmap_uuids[node_id] = bitmap_uuid;
 }
 
-static int receive_uuids(struct drbd_connection *connection, struct packet_info *pi)
+static void propagate_skip_initial_to_diskless(struct drbd_device *device)
 {
 	struct drbd_peer_device *peer_device;
-	struct drbd_device *device;
-	struct p_uuids *p = pi->data;
-	u64 *p_uuid;
-	int i, updated_uuids = 0;
-
-	peer_device = conn_peer_device(connection, pi->vnr);
-	if (!peer_device)
-		return config_unknown_volume(connection, pi);
-	device = peer_device->device;
+	u64 im;
 
-	p_uuid = kmalloc_array(UI_EXTENDED_SIZE, sizeof(*p_uuid), GFP_NOIO);
-	if (!p_uuid)
-		return false;
+	for_each_peer_device_ref(peer_device, im, device) {
+		if (peer_device->disk_state[NOW] == D_DISKLESS)
+			drbd_send_uuids(peer_device, UUID_FLAG_SKIP_INITIAL_SYNC, 0);
+	}
+}
 
-	for (i = UI_CURRENT; i < UI_EXTENDED_SIZE; i++)
-		p_uuid[i] = be64_to_cpu(p->uuid[i]);
+static int __receive_uuids(struct drbd_peer_device *peer_device, u64 node_mask)
+{
+	enum drbd_repl_state repl_state = peer_device->repl_state[NOW];
+	struct drbd_device *device = peer_device->device;
+	struct drbd_resource *resource = device->resource;
+	int updated_uuids = 0, err = 0;
+	bool bad_server, uuid_match;
+	struct net_conf *nc;
+	bool two_primaries_allowed;
 
-	kfree(device->p_uuid);
-	device->p_uuid = p_uuid;
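+	/* bad_server: we are a primary without usable local data, not yet
+	 * connected, and the peer's current UUID does not match the data
+	 * generation we expose. */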
+	uuid_match =
+		(device->exposed_data_uuid & ~UUID_PRIMARY) ==
+		(peer_device->current_uuid & ~UUID_PRIMARY);
+	bad_server =
+		repl_state < L_ESTABLISHED &&
+		device->disk_state[NOW] < D_INCONSISTENT &&
+		device->resource->role[NOW] == R_PRIMARY && !uuid_match;
 
-	if ((device->state.conn < C_CONNECTED || device->state.pdsk == D_DISKLESS) &&
-	    device->state.disk < D_INCONSISTENT &&
-	    device->state.role == R_PRIMARY &&
-	    (device->ed_uuid & ~((u64)1)) != (p_uuid[UI_CURRENT] & ~((u64)1))) {
+	if (peer_device->connection->agreed_pro_version < 110 && bad_server) {
 		drbd_err(device, "Can only connect to data with current UUID=%016llX\n",
-		    (unsigned long long)device->ed_uuid);
-		conn_request_state(peer_device->connection, NS(conn, C_DISCONNECTING), CS_HARD);
+		    (unsigned long long)device->exposed_data_uuid);
+		change_cstate(peer_device->connection, C_DISCONNECTING, CS_HARD);
 		return -EIO;
 	}
 
+	rcu_read_lock();
+	nc = rcu_dereference(peer_device->connection->transport.net_conf);
+	two_primaries_allowed = nc && nc->two_primaries;
+	rcu_read_unlock();
+
 	if (get_ldev(device)) {
-		int skip_initial_sync =
-			device->state.conn == C_CONNECTED &&
+		bool skip_initial_sync =
+			repl_state == L_ESTABLISHED &&
 			peer_device->connection->agreed_pro_version >= 90 &&
-			device->ldev->md.uuid[UI_CURRENT] == UUID_JUST_CREATED &&
-			(p_uuid[UI_FLAGS] & 8);
+			drbd_current_uuid(device) == UUID_JUST_CREATED &&
+			(peer_device->uuid_flags & UUID_FLAG_SKIP_INITIAL_SYNC);
 		if (skip_initial_sync) {
+			unsigned long irq_flags;
+
 			drbd_info(device, "Accepted new current UUID, preparing to skip initial sync\n");
-			drbd_bitmap_io(device, &drbd_bmio_clear_n_write,
+			drbd_bitmap_io(device, &drbd_bmio_clear_all_n_write,
 					"clear_n_write from receive_uuids",
-					BM_LOCKED_TEST_ALLOWED, NULL);
-			_drbd_uuid_set(device, UI_CURRENT, p_uuid[UI_CURRENT]);
-			_drbd_uuid_set(device, UI_BITMAP, 0);
-			_drbd_set_state(_NS2(device, disk, D_UP_TO_DATE, pdsk, D_UP_TO_DATE),
-					CS_VERBOSE, NULL);
-			drbd_md_sync(device);
+					BM_LOCK_SET | BM_LOCK_CLEAR | BM_LOCK_BULK, NULL);
+			_drbd_uuid_set_current(device, peer_device->current_uuid);
+			peer_device->comm_current_uuid = peer_device->current_uuid;
+			peer_device->comm_uuid_flags = peer_device->uuid_flags;
+			peer_device->comm_bitmap_uuid = 0;
+			_drbd_uuid_set_bitmap(peer_device, 0);
+			begin_state_change(device->resource, &irq_flags, CS_VERBOSE);
+			__change_disk_state(device, D_UP_TO_DATE);
+			__change_peer_disk_state(peer_device, D_UP_TO_DATE);
+			end_state_change(device->resource, &irq_flags, "skip-initial-sync");
 			updated_uuids = 1;
+			propagate_skip_initial_to_diskless(device);
 		}
+
+		if (peer_device->uuid_flags & UUID_FLAG_NEW_DATAGEN) {
+			drbd_warn(peer_device, "received new current UUID: %016llX "
+				  "weak_nodes=%016llX\n", peer_device->current_uuid, node_mask);
+			drbd_uuid_received_new_current(peer_device, peer_device->current_uuid, node_mask);
+		}
+
+		drbd_uuid_detect_finished_resyncs(peer_device);
+
+		drbd_md_sync_if_dirty(device);
 		put_ldev(device);
-	} else if (device->state.disk < D_INCONSISTENT &&
-		   device->state.role == R_PRIMARY) {
-		/* I am a diskless primary, the peer just created a new current UUID
-		   for me. */
-		updated_uuids = drbd_set_ed_uuid(device, p_uuid[UI_CURRENT]);
-	}
-
-	/* Before we test for the disk state, we should wait until an eventually
-	   ongoing cluster wide state change is finished. That is important if
-	   we are primary and are detaching from our disk. We need to see the
-	   new disk state... */
-	mutex_lock(device->state_mutex);
-	mutex_unlock(device->state_mutex);
-	if (device->state.conn >= C_CONNECTED && device->state.disk < D_INCONSISTENT)
-		updated_uuids |= drbd_set_ed_uuid(device, p_uuid[UI_CURRENT]);
+	} else if (device->disk_state[NOW] < D_INCONSISTENT && repl_state >= L_ESTABLISHED &&
+		   peer_device->disk_state[NOW] == D_UP_TO_DATE && !uuid_match &&
+		   (resource->role[NOW] == R_SECONDARY ||
+		    (two_primaries_allowed && test_and_clear_bit(NEW_CUR_UUID, &device->flags)))) {
+
+		write_lock_irq(&resource->state_rwlock);
+		if (resource->remote_state_change) {
+			drbd_info(peer_device, "Delaying update of exposed data uuid\n");
+			device->next_exposed_data_uuid = peer_device->current_uuid;
+		} else {
+			updated_uuids =
+				drbd_uuid_set_exposed(device, peer_device->current_uuid, false);
+		}
+		write_unlock_irq(&resource->state_rwlock);
+
+	}
+
+	if (device->disk_state[NOW] == D_DISKLESS && uuid_match &&
+	    peer_device->disk_state[NOW] == D_CONSISTENT) {
+		drbd_info(peer_device, "Peer is on same UUID now\n");
+		change_peer_disk_state(peer_device, D_UP_TO_DATE, CS_VERBOSE, "receive-uuids");
+	}
 
 	if (updated_uuids)
-		drbd_print_uuids(device, "receiver updated UUIDs to");
+		drbd_print_uuids(peer_device, "receiver updated UUIDs to");
 
-	return 0;
+	peer_device->uuid_node_mask = node_mask;
+
+	if ((repl_state == L_SYNC_TARGET || repl_state == L_PAUSED_SYNC_T) &&
+	    !(peer_device->uuid_flags & UUID_FLAG_STABLE) &&
+	    !drbd_stable_sync_source_present(peer_device, NOW))
+		set_bit(UNSTABLE_RESYNC, &peer_device->flags);
+
+	/* send notification in case UUID flags have changed */
+	drbd_broadcast_peer_device_state(peer_device);
+
+	return err;
 }
 
-/**
- * convert_state() - Converts the peer's view of the cluster state to our point of view
- * @ps:		The state as seen by the peer.
- */
-static union drbd_state convert_state(union drbd_state ps)
+/* drbd 8.4 compat */
+static int receive_uuids(struct drbd_connection *connection, struct packet_info *pi)
 {
-	union drbd_state ms;
-
-	static enum drbd_conns c_tab[] = {
-		[C_WF_REPORT_PARAMS] = C_WF_REPORT_PARAMS,
-		[C_CONNECTED] = C_CONNECTED,
+	const int node_id = connection->resource->res_opts.node_id;
+	struct drbd_peer_device *peer_device;
+	struct p_uuids *p = pi->data;
+	int history_uuids, i;
 
-		[C_STARTING_SYNC_S] = C_STARTING_SYNC_T,
-		[C_STARTING_SYNC_T] = C_STARTING_SYNC_S,
-		[C_DISCONNECTING] = C_TEAR_DOWN, /* C_NETWORK_FAILURE, */
-		[C_VERIFY_S]       = C_VERIFY_T,
-		[C_MASK]   = C_MASK,
-	};
+	peer_device = conn_peer_device(connection, pi->vnr);
+	if (!peer_device)
+		return config_unknown_volume(connection, pi);
 
-	ms.i = ps.i;
+	history_uuids = min_t(int, HISTORY_UUIDS_V08,
+			      ARRAY_SIZE(peer_device->history_uuids));
 
-	ms.conn = c_tab[ps.conn];
-	ms.peer = ps.role;
-	ms.role = ps.peer;
-	ms.pdsk = ps.disk;
-	ms.disk = ps.pdsk;
-	ms.peer_isp = (ps.aftr_isp | ps.user_isp);
+	peer_device->current_uuid = be64_to_cpu(p->current_uuid);
+	peer_device->bitmap_uuids[node_id] = be64_to_cpu(p->bitmap_uuid);
+	for (i = 0; i < history_uuids; i++)
+		peer_device->history_uuids[i] = be64_to_cpu(p->history_uuids[i]);
+	for (; i < ARRAY_SIZE(peer_device->history_uuids); i++)
+		peer_device->history_uuids[i] = 0;
+	peer_device->dirty_bits = be64_to_cpu(p->dirty_bits);
+	peer_device->uuid_flags = be64_to_cpu(p->uuid_flags) | UUID_FLAG_STABLE;
+	set_bit(UUIDS_RECEIVED, &peer_device->flags);
 
-	return ms;
+	return __receive_uuids(peer_device, 0);
 }
 
-static int receive_req_state(struct drbd_connection *connection, struct packet_info *pi)
+static int receive_uuids110(struct drbd_connection *connection, struct packet_info *pi)
 {
 	struct drbd_peer_device *peer_device;
+	struct p_uuids110 *p = pi->data;
+	int bitmap_uuids, history_uuids, rest, i, pos, err;
+	u64 bitmap_uuids_mask, node_mask;
+	struct drbd_peer_md *peer_md = NULL;
 	struct drbd_device *device;
-	struct p_req_state *p = pi->data;
-	union drbd_state mask, val;
-	enum drbd_state_rv rv;
+	int not_allocated = -1;
 	peer_device = conn_peer_device(connection, pi->vnr);
 	if (!peer_device)
-		return -EIO;
+		return config_unknown_volume(connection, pi);
+
 	device = peer_device->device;
+	bitmap_uuids_mask = be64_to_cpu(p->bitmap_uuids_mask);
+	if (bitmap_uuids_mask & ~(NODE_MASK(DRBD_PEERS_MAX) - 1))
+		return -EIO;
+	bitmap_uuids = hweight64(bitmap_uuids_mask);
+
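+	/* The variable-length tail of the packet carries one bitmap UUID per
+	 * bit set in bitmap_uuids_mask, followed by the history UUIDs. */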
+	if (pi->size / sizeof(p->other_uuids[0]) < bitmap_uuids)
+		return -EIO;
+	history_uuids = pi->size / sizeof(p->other_uuids[0]) - bitmap_uuids;
+	if (history_uuids > ARRAY_SIZE(peer_device->history_uuids))
+		history_uuids = ARRAY_SIZE(peer_device->history_uuids);
 
-	mask.i = be32_to_cpu(p->mask);
-	val.i = be32_to_cpu(p->val);
+	err = drbd_recv_into(connection, p->other_uuids,
+			     (bitmap_uuids + history_uuids) *
+			     sizeof(p->other_uuids[0]));
+	if (err)
+		return err;
 
-	if (test_bit(RESOLVE_CONFLICTS, &peer_device->connection->flags) &&
-	    mutex_is_locked(device->state_mutex)) {
-		drbd_send_sr_reply(peer_device, SS_CONCURRENT_ST_CHG);
-		return 0;
+	rest = pi->size - (bitmap_uuids + history_uuids) * sizeof(p->other_uuids[0]);
+	if (rest) {
+		err = ignore_remaining_packet(connection, rest);
+		if (err)
+			return err;
 	}
 
-	mask = convert_state(mask);
-	val = convert_state(val);
+	if (get_ldev(device)) {
+		peer_md = device->ldev->md.peers;
+		spin_lock_irq(&device->ldev->md.uuid_lock);
+	}
 
-	rv = drbd_change_state(device, CS_VERBOSE, mask, val);
-	drbd_send_sr_reply(peer_device, rv);
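+	/* A diskless primary keeps its exposed current UUID; only take over the
+	 * peer's value if it does not refer to the data generation we expose. */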
+	if (device->resource->role[NOW] != R_PRIMARY ||
+	    device->disk_state[NOW] != D_DISKLESS ||
+	    (peer_device->current_uuid & ~UUID_PRIMARY) !=
+						(device->exposed_data_uuid & ~UUID_PRIMARY) ||
+	    (peer_device->comm_current_uuid & ~UUID_PRIMARY) !=
+						(device->exposed_data_uuid & ~UUID_PRIMARY))
+		peer_device->current_uuid = be64_to_cpu(p->current_uuid);
 
-	drbd_md_sync(device);
+	peer_device->dirty_bits = be64_to_cpu(p->dirty_bits);
+	peer_device->uuid_flags = be64_to_cpu(p->uuid_flags);
+	if (peer_device->uuid_flags & UUID_FLAG_HAS_UNALLOC) {
+		not_allocated = peer_device->uuid_flags >> UUID_FLAG_UNALLOC_SHIFT;
+		peer_device->uuid_flags &= ~UUID_FLAG_UNALLOC_MASK;
+	}
 
-	return 0;
-}
+	pos = 0;
+	for (i = 0; i < ARRAY_SIZE(peer_device->bitmap_uuids); i++) {
+		u64 bitmap_uuid;
 
-static int receive_req_conn_state(struct drbd_connection *connection, struct packet_info *pi)
-{
-	struct p_req_state *p = pi->data;
-	union drbd_state mask, val;
-	enum drbd_state_rv rv;
+		if (bitmap_uuids_mask & NODE_MASK(i)) {
+			bitmap_uuid = be64_to_cpu(p->other_uuids[pos++]);
 
-	mask.i = be32_to_cpu(p->mask);
-	val.i = be32_to_cpu(p->val);
+			if (peer_md && !(peer_md[i].flags & MDF_HAVE_BITMAP) &&
+			    i != not_allocated)
+				peer_md[i].flags |= MDF_NODE_EXISTS;
+		} else {
+			bitmap_uuid = -1;
+		}
 
-	if (test_bit(RESOLVE_CONFLICTS, &connection->flags) &&
-	    mutex_is_locked(&connection->cstate_mutex)) {
-		conn_send_sr_reply(connection, SS_CONCURRENT_ST_CHG);
-		return 0;
+		update_bitmap_slot_of_peer(peer_device, i, bitmap_uuid);
 	}
 
-	mask = convert_state(mask);
-	val = convert_state(val);
-
-	rv = conn_request_state(connection, mask, val, CS_VERBOSE | CS_LOCAL_ONLY | CS_IGN_OUTD_FAIL);
-	conn_send_sr_reply(connection, rv);
+	for (i = 0; i < history_uuids; i++)
+		peer_device->history_uuids[i] = be64_to_cpu(p->other_uuids[pos++]);
+	while (i < ARRAY_SIZE(peer_device->history_uuids))
+		peer_device->history_uuids[i++] = 0;
+	set_bit(UUIDS_RECEIVED, &peer_device->flags);
+	if (peer_md) {
+		spin_unlock_irq(&device->ldev->md.uuid_lock);
+		put_ldev(device);
+	}
 
-	return 0;
-}
+	node_mask = be64_to_cpu(p->node_mask);
 
-static int receive_state(struct drbd_connection *connection, struct packet_info *pi)
-{
-	struct drbd_peer_device *peer_device;
-	struct drbd_device *device;
-	struct p_state *p = pi->data;
-	union drbd_state os, ns, peer_state;
-	enum drbd_disk_state real_peer_disk;
-	enum chg_state_flags cs_flags;
-	int rv;
+	if (peer_device->connection->peer_role[NOW] == R_PRIMARY &&
+	    peer_device->uuid_flags & UUID_FLAG_STABLE)
+		check_resync_source(device, node_mask);
 
-	peer_device = conn_peer_device(connection, pi->vnr);
-	if (!peer_device)
-		return config_unknown_volume(connection, pi);
-	device = peer_device->device;
+	err = __receive_uuids(peer_device, node_mask);
 
-	peer_state.i = be32_to_cpu(p->state);
+	if (!test_bit(RECONCILIATION_RESYNC, &peer_device->flags)) {
+		if (peer_device->uuid_flags & UUID_FLAG_GOT_STABLE) {
+			struct drbd_device *device = peer_device->device;
 
-	real_peer_disk = peer_state.disk;
-	if (peer_state.disk == D_NEGOTIATING) {
-		real_peer_disk = device->p_uuid[UI_FLAGS] & 4 ? D_INCONSISTENT : D_CONSISTENT;
-		drbd_info(device, "real peer disk state = %s\n", drbd_disk_str(real_peer_disk));
-	}
+			if (peer_device->repl_state[NOW] == L_ESTABLISHED &&
+			    drbd_device_stable(device, NULL) && get_ldev(device)) {
+				drbd_send_uuids(peer_device, UUID_FLAG_RESYNC, 0);
+				drbd_resync(peer_device, AFTER_UNSTABLE);
+				put_ldev(device);
+			}
+		}
 
-	spin_lock_irq(&device->resource->req_lock);
- retry:
-	os = ns = drbd_read_state(device);
-	spin_unlock_irq(&device->resource->req_lock);
+		if (peer_device->uuid_flags & UUID_FLAG_RESYNC) {
+			if (get_ldev(device)) {
+				bool dp = peer_device->uuid_flags & UUID_FLAG_DISKLESS_PRIMARY;
+				drbd_resync(peer_device, dp ? DISKLESS_PRIMARY : AFTER_UNSTABLE);
+				put_ldev(device);
+			}
+		}
+	}
 
-	/* If some other part of the code (ack_receiver thread, timeout)
-	 * already decided to close the connection again,
-	 * we must not "re-establish" it here. */
-	if (os.conn <= C_TEAR_DOWN)
-		return -ECONNRESET;
+	return err;
+}
 
-	/* If this is the "end of sync" confirmation, usually the peer disk
-	 * transitions from D_INCONSISTENT to D_UP_TO_DATE. For empty (0 bits
-	 * set) resync started in PausedSyncT, or if the timing of pause-/
-	 * unpause-sync events has been "just right", the peer disk may
-	 * transition from D_CONSISTENT to D_UP_TO_DATE as well.
-	 */
-	if ((os.pdsk == D_INCONSISTENT || os.pdsk == D_CONSISTENT) &&
-	    real_peer_disk == D_UP_TO_DATE &&
-	    os.conn > C_CONNECTED && os.disk == D_UP_TO_DATE) {
-		/* If we are (becoming) SyncSource, but peer is still in sync
-		 * preparation, ignore its uptodate-ness to avoid flapping, it
-		 * will change to inconsistent once the peer reaches active
-		 * syncing states.
-		 * It may have changed syncer-paused flags, however, so we
-		 * cannot ignore this completely. */
-		if (peer_state.conn > C_CONNECTED &&
-		    peer_state.conn < C_SYNC_SOURCE)
-			real_peer_disk = D_INCONSISTENT;
+/**
+ * check_resync_source() - Abort resync if the source is weak
+ * @device: The device to check
+ * @weak_nodes: Mask of currently weak nodes in the cluster
+ *
+ * If a primary loses its connection to a node that is a sync source for us,
+ * we need to abort that resync. Why?
+ *
+ * When the primary sends a write, we receive and apply it as well. With
+ * the peer_ack packet, we then mark it as out-of-sync towards the sync
+ * source node.
+ * When the resync process finds such bits, we would request outdated
+ * data from the sync source!
+ * We therefore stop the resync from such an outdated source here and wait
+ * until all the resync activity has drained (P_RS_DATA_REPLY packets).
+ */
+static void check_resync_source(struct drbd_device *device, u64 weak_nodes)
+{
+	struct drbd_peer_device *peer_device;
+	struct drbd_connection *connection;
 
-		/* if peer_state changes to connected at the same time,
-		 * it explicitly notifies us that it finished resync.
-		 * Maybe we should finish it up, too? */
-		else if (os.conn >= C_SYNC_SOURCE &&
-			 peer_state.conn == C_CONNECTED) {
-			if (drbd_bm_total_weight(device) <= device->rs_failed)
-				drbd_resync_finished(peer_device);
-			return 0;
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		enum drbd_repl_state repl_state = peer_device->repl_state[NOW];
+		if ((repl_state == L_SYNC_TARGET || repl_state == L_PAUSED_SYNC_T) &&
+		    NODE_MASK(peer_device->node_id) & weak_nodes) {
+			rcu_read_unlock();
+			goto abort;
 		}
 	}
+	rcu_read_unlock();
+	return;
+abort:
+	connection = peer_device->connection;
+	drbd_info(peer_device, "My sync source became a weak node, aborting resync!\n");
+	change_repl_state(peer_device, L_ESTABLISHED, CS_VERBOSE, "abort-resync");
+	drbd_flush_workqueue(&connection->sender_work);
+	drbd_cancel_conflicting_resync_requests(peer_device);
 
-	/* explicit verify finished notification, stop sector reached. */
-	if (os.conn == C_VERIFY_T && os.disk == D_UP_TO_DATE &&
-	    peer_state.conn == C_CONNECTED && real_peer_disk == D_UP_TO_DATE) {
-		ov_out_of_sync_print(peer_device);
-		drbd_resync_finished(peer_device);
-		return 0;
-	}
+	wait_event_interruptible(connection->ee_wait,
+				 peer_device->repl_state[NOW] <= L_ESTABLISHED ||
+				 atomic_read(&connection->backing_ee_cnt) == 0);
+	wait_event_interruptible(device->misc_wait,
+				 peer_device->repl_state[NOW] <= L_ESTABLISHED ||
+				 atomic_read(&peer_device->rs_pending_cnt) == 0);
 
-	/* peer says his disk is inconsistent, while we think it is uptodate,
-	 * and this happens while the peer still thinks we have a sync going on,
-	 * but we think we are already done with the sync.
-	 * We ignore this to avoid flapping pdsk.
-	 * This should not happen, if the peer is a recent version of drbd. */
-	if (os.pdsk == D_UP_TO_DATE && real_peer_disk == D_INCONSISTENT &&
-	    os.conn == C_CONNECTED && peer_state.conn > C_SYNC_SOURCE)
-		real_peer_disk = D_UP_TO_DATE;
+	peer_device->rs_total  = 0;
+	peer_device->rs_failed = 0;
+	peer_device->rs_paused = 0;
+}
 
-	if (ns.conn == C_WF_REPORT_PARAMS)
-		ns.conn = C_CONNECTED;
+/**
+ * convert_state() - Converts the peer's view of the cluster state to our point of view
+ * @peer_state:	The state as seen by the peer.
+ */
+static union drbd_state convert_state(union drbd_state peer_state)
+{
+	union drbd_state state;
 
-	if (peer_state.conn == C_AHEAD)
-		ns.conn = C_BEHIND;
+	static unsigned int c_tab[] = {
+		[L_OFF] = L_OFF,
+		[L_ESTABLISHED] = L_ESTABLISHED,
 
-	/* TODO:
-	 * if (primary and diskless and peer uuid != effective uuid)
-	 *     abort attach on peer;
-	 *
-	 * If this node does not have good data, was already connected, but
-	 * the peer did a late attach only now, trying to "negotiate" with me,
-	 * AND I am currently Primary, possibly frozen, with some specific
-	 * "effective" uuid, this should never be reached, really, because
-	 * we first send the uuids, then the current state.
-	 *
-	 * In this scenario, we already dropped the connection hard
-	 * when we received the unsuitable uuids (receive_uuids().
-	 *
-	 * Should we want to change this, that is: not drop the connection in
-	 * receive_uuids() already, then we would need to add a branch here
-	 * that aborts the attach of "unsuitable uuids" on the peer in case
-	 * this node is currently Diskless Primary.
-	 */
+		[L_STARTING_SYNC_S] = L_STARTING_SYNC_T,
+		[L_STARTING_SYNC_T] = L_STARTING_SYNC_S,
+		[L_WF_BITMAP_S] = L_WF_BITMAP_T,
+		[L_WF_BITMAP_T] = L_WF_BITMAP_S,
+		[C_DISCONNECTING] = C_TEAR_DOWN, /* C_NETWORK_FAILURE, */
+		[C_CONNECTING] = C_CONNECTING,
+		[L_VERIFY_S] = L_VERIFY_T,
+		[C_MASK] = C_MASK,
+	};
 
-	if (device->p_uuid && peer_state.disk >= D_NEGOTIATING &&
-	    get_ldev_if_state(device, D_NEGOTIATING)) {
-		int cr; /* consider resync */
+	state.i = peer_state.i;
 
-		/* if we established a new connection */
-		cr  = (os.conn < C_CONNECTED);
-		/* if we had an established connection
-		 * and one of the nodes newly attaches a disk */
-		cr |= (os.conn == C_CONNECTED &&
-		       (peer_state.disk == D_NEGOTIATING ||
-			os.disk == D_NEGOTIATING));
-		/* if we have both been inconsistent, and the peer has been
-		 * forced to be UpToDate with --force */
-		cr |= test_bit(CONSIDER_RESYNC, &device->flags);
-		/* if we had been plain connected, and the admin requested to
-		 * start a sync by "invalidate" or "invalidate-remote" */
-		cr |= (os.conn == C_CONNECTED &&
-				(peer_state.conn >= C_STARTING_SYNC_S &&
-				 peer_state.conn <= C_WF_BITMAP_T));
+	state.conn = c_tab[peer_state.conn];
+	state.peer = peer_state.role;
+	state.role = peer_state.peer;
+	state.pdsk = peer_state.disk;
+	state.disk = peer_state.pdsk;
+	state.peer_isp = (peer_state.aftr_isp | peer_state.user_isp);
 
-		if (cr)
-			ns.conn = drbd_sync_handshake(peer_device, peer_state.role, real_peer_disk);
+	return state;
+}
 
-		put_ldev(device);
-		if (ns.conn == C_MASK) {
-			ns.conn = C_CONNECTED;
-			if (device->state.disk == D_NEGOTIATING) {
-				drbd_force_state(device, NS(disk, D_FAILED));
-			} else if (peer_state.disk == D_NEGOTIATING) {
-				drbd_err(device, "Disk attach process on the peer node was aborted.\n");
-				peer_state.disk = D_DISKLESS;
-				real_peer_disk = D_DISKLESS;
-			} else {
-				if (test_and_clear_bit(CONN_DRY_RUN, &peer_device->connection->flags))
-					return -EIO;
-				D_ASSERT(device, os.conn == C_WF_REPORT_PARAMS);
-				conn_request_state(peer_device->connection, NS(conn, C_DISCONNECTING), CS_HARD);
-				return -EIO;
-			}
-		}
-	}
+static enum drbd_state_rv
+__change_connection_state(struct drbd_connection *connection,
+			  union drbd_state mask, union drbd_state val,
+			  enum chg_state_flags flags)
+{
+	struct drbd_resource *resource = connection->resource;
 
-	spin_lock_irq(&device->resource->req_lock);
-	if (os.i != drbd_read_state(device).i)
-		goto retry;
-	clear_bit(CONSIDER_RESYNC, &device->flags);
-	ns.peer = peer_state.role;
-	ns.pdsk = real_peer_disk;
-	ns.peer_isp = (peer_state.aftr_isp | peer_state.user_isp);
-	if ((ns.conn == C_CONNECTED || ns.conn == C_WF_BITMAP_S) && ns.disk == D_NEGOTIATING)
-		ns.disk = device->new_state_tmp.disk;
-	cs_flags = CS_VERBOSE + (os.conn < C_CONNECTED && ns.conn >= C_CONNECTED ? 0 : CS_HARD);
-	if (ns.pdsk == D_CONSISTENT && drbd_suspended(device) && ns.conn == C_CONNECTED && os.conn < C_CONNECTED &&
-	    test_bit(NEW_CUR_UUID, &device->flags)) {
-		/* Do not allow tl_restart(RESEND) for a rebooted peer. We can only allow this
-		   for temporal network outages! */
-		spin_unlock_irq(&device->resource->req_lock);
-		drbd_err(device, "Aborting Connect, can not thaw IO with an only Consistent peer\n");
-		tl_clear(peer_device->connection);
-		drbd_uuid_new_current(device);
-		clear_bit(NEW_CUR_UUID, &device->flags);
-		conn_request_state(peer_device->connection, NS2(conn, C_PROTOCOL_ERROR, susp, 0), CS_HARD);
-		return -EIO;
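+	/* Apply each field we understand and clear it from the mask; any bits
+	 * still set at the end denote a request we do not support. */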
+	if (mask.role) {
+		/* not allowed */
+	}
+	if (mask.susp) {
+		mask.susp ^= -1;
+		__change_io_susp_user(resource, val.susp);
+	}
+	if (mask.susp_nod) {
+		mask.susp_nod ^= -1;
+		__change_io_susp_no_data(resource, val.susp_nod);
+	}
+	if (mask.susp_fen) {
+		mask.susp_fen ^= -1;
+		__change_io_susp_fencing(connection, val.susp_fen);
+	}
+	if (mask.disk) {
+		/* Handled in __change_peer_device_state(). */
+		mask.disk ^= -1;
 	}
-	rv = _drbd_set_state(device, ns, cs_flags, NULL);
-	ns = drbd_read_state(device);
-	spin_unlock_irq(&device->resource->req_lock);
+	if (mask.conn) {
+		mask.conn ^= -1;
+		__change_cstate(connection,
+				min_t(enum drbd_conn_state, val.conn, C_CONNECTED));
+	}
+	if (mask.pdsk) {
+		/* Handled in __change_peer_device_state(). */
+		mask.pdsk ^= -1;
+	}
+	if (mask.peer) {
+		mask.peer ^= -1;
+		__change_peer_role(connection, val.peer);
+	}
+	if (mask.i) {
+		drbd_info(connection, "Remote state change: request %u/%u not "
+			  "understood\n", mask.i, val.i & mask.i);
+		return SS_NOT_SUPPORTED;
+	}
+	return SS_SUCCESS;
+}
 
-	if (rv < SS_SUCCESS) {
-		conn_request_state(peer_device->connection, NS(conn, C_DISCONNECTING), CS_HARD);
-		return -EIO;
+static enum drbd_state_rv
+__change_peer_device_state(struct drbd_peer_device *peer_device,
+			   union drbd_state mask, union drbd_state val)
+{
+	struct drbd_device *device = peer_device->device;
+
+	if (mask.peer) {
+		/* Handled in __change_connection_state(). */
+		mask.peer ^= -1;
+	}
+	if (mask.disk) {
+		mask.disk ^= -1;
+		__change_disk_state(device, val.disk);
 	}
 
-	if (os.conn > C_WF_REPORT_PARAMS) {
-		if (ns.conn > C_CONNECTED && peer_state.conn <= C_CONNECTED &&
-		    peer_state.disk != D_NEGOTIATING ) {
-			/* we want resync, peer has not yet decided to sync... */
-			/* Nowadays only used when forcing a node into primary role and
-			   setting its disk to UpToDate with that */
-			drbd_send_uuids(peer_device);
-			drbd_send_current_state(peer_device);
-		}
+	if (mask.conn) {
+		mask.conn ^= -1;
+		__change_repl_state(peer_device,
+				max_t(enum drbd_repl_state, val.conn, L_OFF));
 	}
+	if (mask.pdsk) {
+		mask.pdsk ^= -1;
+		__change_peer_disk_state(peer_device, val.pdsk);
+	}
+	if (mask.user_isp) {
+		mask.user_isp ^= -1;
+		__change_resync_susp_user(peer_device, val.user_isp);
+	}
+	if (mask.peer_isp) {
+		mask.peer_isp ^= -1;
+		__change_resync_susp_peer(peer_device, val.peer_isp);
+	}
+	if (mask.aftr_isp) {
+		mask.aftr_isp ^= -1;
+		__change_resync_susp_dependency(peer_device, val.aftr_isp);
+	}
+	if (mask.i) {
+		drbd_info(peer_device, "Remote state change: request %u/%u not "
+			  "understood\n", mask.i, val.i & mask.i);
+		return SS_NOT_SUPPORTED;
+	}
+	return SS_SUCCESS;
+}
 
-	clear_bit(DISCARD_MY_DATA, &device->flags);
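+/* If a disconnect wants to outdate a disk that is already worse than
+ * D_OUTDATED, drop that part of the request: the state change must not
+ * accidentally improve the disk state. */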
+static union drbd_state
+sanitize_outdate(struct drbd_peer_device *peer_device,
+		 union drbd_state mask,
+		 union drbd_state val)
+{
+	struct drbd_device *device = peer_device->device;
+	union drbd_state result_mask = mask;
 
-	drbd_md_sync(device); /* update connected indicator, la_size_sect, ... */
+	if (val.pdsk == D_OUTDATED && peer_device->disk_state[NEW] < D_OUTDATED)
+		result_mask.pdsk = 0;
+	if (val.disk == D_OUTDATED && device->disk_state[NEW] < D_OUTDATED)
+		result_mask.disk = 0;
 
-	return 0;
+	return result_mask;
 }
 
-static int receive_sync_uuid(struct drbd_connection *connection, struct packet_info *pi)
+static void log_openers(struct drbd_resource *resource)
 {
-	struct drbd_peer_device *peer_device;
 	struct drbd_device *device;
-	struct p_rs_uuid *p = pi->data;
+	int vnr;
 
-	peer_device = conn_peer_device(connection, pi->vnr);
-	if (!peer_device)
-		return -EIO;
-	device = peer_device->device;
+	rcu_read_lock();
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		struct opener *opener;
 
-	wait_event(device->misc_wait,
-		   device->state.conn == C_WF_SYNC_UUID ||
-		   device->state.conn == C_BEHIND ||
-		   device->state.conn < C_CONNECTED ||
-		   device->state.disk < D_NEGOTIATING);
+		spin_lock(&device->openers_lock);
+		opener = list_first_entry_or_null(&device->openers, struct opener, list);
+		if (opener)
+			drbd_warn(device, "Held open by %s(%d)\n", opener->comm, opener->pid);
+		spin_unlock(&device->openers_lock);
+	}
+	rcu_read_unlock();
+}
 
-	/* D_ASSERT(device,  device->state.conn == C_WF_SYNC_UUID ); */
+/**
+ * change_connection_state()  -  change state of a connection and all its peer devices
+ * @connection: DRBD connection to operate on
+ * @state_change: Prepared state change
+ * @reply: Two-phase commit reply
+ * @flags: State change flags
+ *
+ * Also changes the state of the peer devices' devices and of the resource.
+ * Cluster-wide state changes are not supported.
+ */
+static enum drbd_state_rv
+change_connection_state(struct drbd_connection *connection,
+			struct twopc_state_change *state_change,
+			struct twopc_reply *reply,
+			enum chg_state_flags flags)
+{
+	struct drbd_resource *resource = connection->resource;
+	long t = resource->res_opts.auto_promote_timeout * HZ / 10;
+	union drbd_state mask = state_change->mask;
+	union drbd_state val = state_change->val;
+	bool is_disconnect = false;
+	bool is_connect = false;
+	bool abort = flags & CS_ABORT;
+	struct drbd_peer_device *peer_device;
+	unsigned long irq_flags;
+	enum drbd_state_rv rv;
+	int vnr;
 
-	/* Here the _drbd_uuid_ functions are right, current should
-	   _not_ be rotated into the history */
-	if (get_ldev_if_state(device, D_NEGOTIATING)) {
-		_drbd_uuid_set(device, UI_CURRENT, be64_to_cpu(p->uuid));
-		_drbd_uuid_set(device, UI_BITMAP, 0UL);
+	if (reply) {
+		is_disconnect = reply->is_disconnect;
+		is_connect = reply->is_connect;
+	} else if (mask.conn == conn_MASK) {
+		is_connect = val.conn == C_CONNECTED;
+		is_disconnect = val.conn == C_DISCONNECTING;
+	}
 
-		drbd_print_uuids(device, "updated sync uuid");
-		drbd_start_resync(device, C_SYNC_TARGET);
+	mask = convert_state(mask);
+	val = convert_state(val);
 
-		put_ldev(device);
-	} else
-		drbd_err(device, "Ignoring SyncUUID packet!\n");
+	if (is_connect && connection->agreed_pro_version >= 118) {
+		if (flags & CS_PREPARE)
+			conn_connect2(connection);
+		if (abort)
+			abort_connect(connection);
+	}
+retry:
+	begin_state_change(resource, &irq_flags, flags & ~CS_VERBOSE);
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		union drbd_state l_mask;
+		l_mask = is_disconnect ? sanitize_outdate(peer_device, mask, val) : mask;
+		rv = __change_peer_device_state(peer_device, l_mask, val);
+		if (rv < SS_SUCCESS)
+			goto fail;
+	}
+	rv = __change_connection_state(connection, mask, val, flags);
+	if (rv < SS_SUCCESS)
+		goto fail;
 
-	return 0;
+	if (reply && !abort) {
+		u64 directly_reachable = directly_connected_nodes(resource, NEW) |
+			NODE_MASK(resource->res_opts.node_id);
+
+		if (reply->primary_nodes & ~directly_reachable)
+			__outdate_myself(resource);
+	}
+
+	if (is_connect && connection->agreed_pro_version >= 117)
+		apply_connect(connection, (flags & CS_PREPARED) && !abort);
+	rv = end_state_change(resource, &irq_flags, "remote");
+out:
+
+	if ((rv == SS_NO_UP_TO_DATE_DISK && resource->role[NOW] != R_PRIMARY) ||
+	    rv == SS_PRIMARY_READER) {
+		/* Most probably udev opened it read-only. That might happen
+		   if it was demoted very recently. Wait up to one second. */
+		t = wait_event_interruptible_timeout(resource->state_wait,
+						     drbd_open_ro_count(resource) == 0,
+						     t);
+		if (t > 0)
+			goto retry;
+	}
+
+	if (rv < SS_SUCCESS) {
+		drbd_err(resource, "State change failed: %s (%d)\n", drbd_set_st_err_str(rv), rv);
+		if (rv == SS_PRIMARY_READER)
+			log_openers(resource);
+	}
+
+	return rv;
+fail:
+	abort_state_change(resource, &irq_flags);
+	goto out;
 }
 
-/*
- * receive_bitmap_plain
+/**
+ * change_peer_device_state()  -  change state of a peer and its connection
+ * @peer_device: DRBD peer device
+ * @state_change: Prepared state change
+ * @flags: State change flags
  *
- * Return 0 when done, 1 when another iteration is needed, and a negative error
- * code upon failure.
+ * Also changes the state of the peer device's device and of the resource.
+ * Cluster-wide state changes are not supported.
  */
-static int
-receive_bitmap_plain(struct drbd_peer_device *peer_device, unsigned int size,
-		     unsigned long *p, struct bm_xfer_ctx *c)
+static enum drbd_state_rv
+change_peer_device_state(struct drbd_peer_device *peer_device,
+			 struct twopc_state_change *state_change,
+			 enum chg_state_flags flags)
 {
-	unsigned int data_size = DRBD_SOCKET_BUFFER_SIZE -
-				 drbd_header_size(peer_device->connection);
-	unsigned int num_words = min_t(size_t, data_size / sizeof(*p),
-				       c->bm_words - c->word_offset);
-	unsigned int want = num_words * sizeof(*p);
-	int err;
+	struct drbd_connection *connection = peer_device->connection;
+	union drbd_state mask = state_change->mask;
+	union drbd_state val = state_change->val;
+	unsigned long irq_flags;
+	enum drbd_state_rv rv;
 
-	if (want != size) {
-		drbd_err(peer_device, "%s:want (%u) != size (%u)\n", __func__, want, size);
+	mask = convert_state(mask);
+	val = convert_state(val);
+
+	begin_state_change(connection->resource, &irq_flags, flags);
+	rv = __change_peer_device_state(peer_device, mask, val);
+	if (rv < SS_SUCCESS)
+		goto fail;
+	rv = __change_connection_state(connection, mask, val, flags);
+	if (rv < SS_SUCCESS)
+		goto fail;
+	rv = end_state_change(connection->resource, &irq_flags, "remote");
+out:
+	return rv;
+fail:
+	abort_state_change(connection->resource, &irq_flags);
+	goto out;
+}
+
+static int receive_req_state(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct twopc_state_change *state_change = &resource->twopc.state_change;
+	struct drbd_peer_device *peer_device = NULL;
+	struct p_req_state *p = pi->data;
+	enum chg_state_flags flags = CS_VERBOSE | CS_LOCAL_ONLY | CS_TWOPC;
+	enum drbd_state_rv rv;
+	int vnr = -1;
+
+	if (!expect(connection, connection->agreed_pro_version < 110)) {
+		drbd_err(connection, "Packet %s not allowed in protocol version %d\n",
+			 drbd_packet_name(pi->cmd),
+			 connection->agreed_pro_version);
 		return -EIO;
 	}
-	if (want == 0)
+
+	state_change->mask.i = be32_to_cpu(p->mask);
+	state_change->val.i = be32_to_cpu(p->val);
+
+	/* P_STATE_CHG_REQ packets must have a valid vnr.  P_CONN_ST_CHG_REQ
+	 * packets have an undefined vnr. */
+	if (pi->cmd == P_STATE_CHG_REQ) {
+		peer_device = conn_peer_device(connection, pi->vnr);
+		if (!peer_device) {
+			const union drbd_state conn_mask = { .conn = conn_MASK };
+			const union drbd_state val_off = { .conn = L_OFF };
+
+			if (state_change->mask.i == conn_mask.i &&
+			    state_change->val.i == val_off.i) {
+				/* The peer removed this volume, we do not have it... */
+				drbd_send_sr_reply(connection, vnr, SS_NOTHING_TO_DO);
+				return 0;
+			}
+
+			return -EIO;
+		}
+		vnr = peer_device->device->vnr;
+	}
+
+	rv = SS_SUCCESS;
+	write_lock_irq(&resource->state_rwlock);
+	if (resource->remote_state_change)
+		rv = SS_CONCURRENT_ST_CHG;
+	else
+		resource->remote_state_change = true;
+	write_unlock_irq(&resource->state_rwlock);
+
+	if (rv != SS_SUCCESS) {
+		drbd_info(connection, "Rejecting concurrent remote state change\n");
+		drbd_send_sr_reply(connection, vnr, rv);
 		return 0;
-	err = drbd_recv_all(peer_device->connection, p, want);
-	if (err)
-		return err;
+	}
 
-	drbd_bm_merge_lel(peer_device->device, c->word_offset, num_words, p);
+	/* Send the reply before carrying out the state change: this is needed
+	 * for connection state changes which close the network connection.  */
+	if (peer_device) {
+		rv = change_peer_device_state(peer_device, state_change, flags | CS_PREPARE);
+		drbd_send_sr_reply(connection, vnr, rv);
+		rv = change_peer_device_state(peer_device, state_change, flags | CS_PREPARED);
+		if (rv >= SS_SUCCESS)
+			drbd_md_sync_if_dirty(peer_device->device);
+	} else {
+		flags |= CS_IGN_OUTD_FAIL;
+		rv = change_connection_state(connection, state_change, NULL, flags | CS_PREPARE);
+		drbd_send_sr_reply(connection, vnr, rv);
+		change_connection_state(connection, state_change, NULL, flags | CS_PREPARED);
+	}
 
-	c->word_offset += num_words;
-	c->bit_offset = c->word_offset * BITS_PER_LONG;
-	if (c->bit_offset > c->bm_bits)
-		c->bit_offset = c->bm_bits;
+	write_lock_irq(&resource->state_rwlock);
+	resource->remote_state_change = false;
+	write_unlock_irq(&resource->state_rwlock);
+	wake_up_all(&resource->twopc_wait);
 
-	return 1;
+	return 0;
 }
 
-static enum drbd_bitmap_code dcbp_get_code(struct p_compressed_bm *p)
+static void drbd_abort_twopc(struct drbd_resource *resource)
 {
-	return (enum drbd_bitmap_code)(p->encoding & 0x0f);
+	struct drbd_connection *connection;
+	int initiator_node_id;
+	bool is_connect;
+
+	initiator_node_id = resource->twopc_reply.initiator_node_id;
+	if (initiator_node_id != -1) {
+		connection = drbd_get_connection_by_node_id(resource, initiator_node_id);
+		is_connect = resource->twopc_reply.is_connect &&
+			resource->twopc_reply.target_node_id == resource->res_opts.node_id;
+		resource->remote_state_change = false;
+		resource->twopc_reply.initiator_node_id = -1;
+		resource->twopc_parent_nodes = 0;
+
+		if (connection) {
+			if (is_connect)
+				abort_connect(connection);
+			kref_put(&connection->kref, drbd_destroy_connection);
+			connection = NULL;
+		}
+
+		/* Aborting a prepared state change. Give up the state mutex! */
+		up(&resource->state_sem);
+	}
+
+	wake_up_all(&resource->twopc_wait);
 }
 
-static int dcbp_get_start(struct p_compressed_bm *p)
+void twopc_timer_fn(struct timer_list *t)
 {
-	return (p->encoding & 0x80) != 0;
+	struct drbd_resource *resource = timer_container_of(resource, t, twopc_timer);
+	unsigned long irq_flags;
+
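+	/* While the twopc work is still pending, re-check every 100ms instead
+	 * of aborting the transaction under it. */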
+	write_lock_irqsave(&resource->state_rwlock, irq_flags);
+	if (!test_bit(TWOPC_WORK_PENDING, &resource->flags)) {
+		drbd_err(resource, "Two-phase commit %u timeout\n",
+			   resource->twopc_reply.tid);
+		drbd_abort_twopc(resource);
+	} else {
+		mod_timer(&resource->twopc_timer, jiffies + HZ/10);
+	}
+	write_unlock_irqrestore(&resource->state_rwlock, irq_flags);
 }
 
-static int dcbp_get_pad_bits(struct p_compressed_bm *p)
+bool drbd_have_local_disk(struct drbd_resource *resource)
 {
-	return (p->encoding >> 4) & 0x7;
+	struct drbd_device *device;
+	int vnr;
+
+	rcu_read_lock();
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		if (device->disk_state[NOW] > D_DISKLESS) {
+			rcu_read_unlock();
+			return true;
+		}
+	}
+	rcu_read_unlock();
+	return false;
 }
 
-/*
- * recv_bm_rle_bits
- *
- * Return 0 when done, 1 when another iteration is needed, and a negative error
- * code upon failure.
- */
-static int
-recv_bm_rle_bits(struct drbd_peer_device *peer_device,
-		struct p_compressed_bm *p,
-		 struct bm_xfer_ctx *c,
-		 unsigned int len)
+static enum drbd_state_rv
+far_away_change(struct drbd_connection *connection,
+		struct twopc_request *request,
+		struct twopc_reply *reply,
+		enum chg_state_flags flags)
 {
-	struct bitstream bs;
-	u64 look_ahead;
-	u64 rl;
-	u64 tmp;
-	unsigned long s = c->bit_offset;
-	unsigned long e;
-	int toggle = dcbp_get_start(p);
-	int have;
-	int bits;
+	struct drbd_resource *resource = connection->resource;
+	struct twopc_state_change *state_change = &resource->twopc.state_change;
+	u64 directly_reachable = directly_connected_nodes(resource, NOW) |
+		NODE_MASK(resource->res_opts.node_id);
+	union drbd_state mask = state_change->mask;
+	union drbd_state val = state_change->val;
+	int vnr = resource->twopc_reply.vnr;
+	struct drbd_device *device;
+	unsigned long irq_flags;
+	int iterate_vnr;
 
-	bitstream_init(&bs, p->code, len, dcbp_get_pad_bits(p));
 
-	bits = bitstream_get_bits(&bs, &look_ahead, 64);
-	if (bits < 0)
-		return -EIO;
+	if (flags & CS_PREPARE && mask.role == role_MASK && val.role == R_PRIMARY &&
+	    resource->role[NOW] == R_PRIMARY) {
+		struct net_conf *nc;
+		bool two_primaries_allowed = false;
 
-	for (have = bits; have > 0; s += rl, toggle = !toggle) {
-		bits = vli_decode_bits(&rl, look_ahead);
-		if (bits <= 0)
-			return -EIO;
+		rcu_read_lock();
+		nc = rcu_dereference(connection->transport.net_conf);
+		if (nc)
+			two_primaries_allowed = nc->two_primaries;
+		rcu_read_unlock();
+		if (!two_primaries_allowed)
+			return SS_TWO_PRIMARIES;
 
-		if (toggle) {
-			e = s + rl -1;
-			if (e >= c->bm_bits) {
-				drbd_err(peer_device, "bitmap overflow (e:%lu) while decoding bm RLE packet\n", e);
-				return -EIO;
+		/* A node further away wants to become primary. In case I am primary, allow it
+		 * only when I am diskless. See also check_primaries_distances() in drbd_state.c.
+		 */
+		if (drbd_have_local_disk(resource))
+			return SS_WEAKLY_CONNECTED;
+	}
+
+	begin_state_change(resource, &irq_flags, flags);
+	if (mask.i == 0 && val.i == 0 &&
+	    resource->role[NOW] == R_PRIMARY && vnr == -1) {
+		/* A node far away tests if there are primaries. I am the guy he is concerned
+		 * about... He learned about me in the CS_PREPARE phase. Since he is committing it,
+		 * I know that he is outdated now...
+		 */
+		struct drbd_connection *affected_connection;
+		int initiator_node_id = resource->twopc_reply.initiator_node_id;
+
+		affected_connection = drbd_get_connection_by_node_id(resource, initiator_node_id);
+		if (affected_connection) {
+			__downgrade_peer_disk_states(affected_connection, D_OUTDATED);
+			kref_put(&affected_connection->kref, drbd_destroy_connection);
+		} else if (flags & CS_PREPARED) {
+			idr_for_each_entry(&resource->devices, device, iterate_vnr) {
+				struct drbd_peer_md *peer_md;
+
+				if (!get_ldev(device))
+					continue;
+
+				peer_md = &device->ldev->md.peers[initiator_node_id];
+				peer_md->flags |= MDF_PEER_OUTDATED;
+				put_ldev(device);
+				drbd_md_mark_dirty(device);
 			}
-			_drbd_bm_set_bits(peer_device->device, s, e);
 		}
+	}
 
-		if (have < bits) {
-			drbd_err(peer_device, "bitmap decoding error: h:%d b:%d la:0x%08llx l:%u/%u\n",
-				have, bits, look_ahead,
-				(unsigned int)(bs.cur.b - p->code),
-				(unsigned int)bs.buf_len);
-			return -EIO;
-		}
-		/* if we consumed all 64 bits, assign 0; >> 64 is "undefined"; */
-		if (likely(bits < 64))
-			look_ahead >>= bits;
-		else
-			look_ahead = 0;
-		have -= bits;
+	if (state_change->primary_nodes & ~directly_reachable &&
+	    !(request->flags & TWOPC_PRI_INCAPABLE))
+		__outdate_myself(resource);
 
-		bits = bitstream_get_bits(&bs, &tmp, 64 - have);
-		if (bits < 0)
-			return -EIO;
-		look_ahead |= tmp << have;
-		have += bits;
+	idr_for_each_entry(&resource->devices, device, iterate_vnr) {
+		if (test_bit(OUTDATE_ON_2PC_COMMIT, &device->flags) &&
+		    device->disk_state[NEW] > D_OUTDATED)
+			__change_disk_state(device, D_OUTDATED);
 	}
 
-	c->bit_offset = s;
-	bm_xfer_ctx_bit_to_word_offset(c);
+	/* even if no outdate happens, CS_FORCE_RECALC might be set here */
+	return end_state_change(resource, &irq_flags, "far-away");
+}
+
+static void handle_neighbor_demotion(struct drbd_connection *connection,
+				     struct twopc_state_change *state_change,
+				     struct twopc_reply *reply)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_device *device;
+	int vnr;
+
+	if (reply->initiator_node_id != connection->peer_node_id ||
+	    connection->peer_role[NOW] != R_PRIMARY ||
+	    state_change->mask.role != role_MASK || state_change->val.role != R_SECONDARY)
+		return;
+
+	/* A directly connected neighbor that was primary demotes to secondary */
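+	/* No further writes will originate from it; persist our bitmaps. */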
+
+	rcu_read_lock();
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		kref_get(&device->kref);
+		rcu_read_unlock();
+		if (get_ldev(device)) {
+			drbd_bitmap_io(device, &drbd_bm_write, "peer demote",
+				       BM_LOCK_SET | BM_LOCK_CLEAR | BM_LOCK_BULK, NULL);
+			put_ldev(device);
+		}
+		rcu_read_lock();
+		kref_put(&device->kref, drbd_destroy_device);
+	}
+	rcu_read_unlock();
+}
 
-	return (s != c->bm_bits);
+static void peer_device_init_connect_state(struct drbd_peer_device *peer_device)
+{
+	clear_bit(INITIAL_STATE_SENT, &peer_device->flags);
+	clear_bit(INITIAL_STATE_RECEIVED, &peer_device->flags);
+	clear_bit(HAVE_SIZES, &peer_device->flags);
+	clear_bit(UUIDS_RECEIVED, &peer_device->flags);
+	clear_bit(CURRENT_UUID_RECEIVED, &peer_device->flags);
+	clear_bit(PEER_QUORATE, &peer_device->flags);
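+	/* disk = D_MASK serves as "no state received from the peer yet" */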
+	peer_device->connect_state = (union drbd_state) {{ .disk = D_MASK }};
 }
 
-/*
- * decode_bitmap_c
+
+/**
+ * drbd_init_connect_state() - Prepare twopc that establishes the connection
+ * @connection:	The connection this is about
  *
- * Return 0 when done, 1 when another iteration is needed, and a negative error
- * code upon failure.
+ * After a transport implementation has established the lower-level aspects
+ * of a connection, DRBD executes a two-phase commit so that the membership
+ * information changes in a cluster-wide, consistent way. During that
+ * two-phase commit, DRBD exchanges the UUIDs, size information, and the
+ * initial state. A two-phase commit might be aborted and then needs to be
+ * retried. This function re-initializes the struct members for that. The
+ * callsites are at the beginning of a two-phase connect commit, on both the
+ * active and the passive side.
  */
-static int
-decode_bitmap_c(struct drbd_peer_device *peer_device,
-		struct p_compressed_bm *p,
-		struct bm_xfer_ctx *c,
-		unsigned int len)
+void drbd_init_connect_state(struct drbd_connection *connection)
+{
+	struct drbd_peer_device *peer_device;
+	int vnr;
+
+	rcu_read_lock();
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
+		peer_device_init_connect_state(peer_device);
+	rcu_read_unlock();
+	clear_bit(CONN_HANDSHAKE_DISCONNECT, &connection->flags);
+	clear_bit(CONN_HANDSHAKE_RETRY, &connection->flags);
+	clear_bit(CONN_HANDSHAKE_READY, &connection->flags);
+}
+
+enum csc_rv {
+	CSC_CLEAR,
+	CSC_REJECT,
+	CSC_ABORT_LOCAL,
+	CSC_TID_MISS,
+	CSC_MATCH,
+};
+
+static enum csc_rv
+check_concurrent_transactions(struct drbd_resource *resource, struct twopc_reply *new_r)
 {
-	if (dcbp_get_code(p) == RLE_VLI_Bits)
-		return recv_bm_rle_bits(peer_device, p, c, len - sizeof(*p));
+	struct twopc_reply *ongoing = &resource->twopc_reply;
 
-	/* other variants had been implemented for evaluation,
-	 * but have been dropped as this one turned out to be "best"
-	 * during all our tests. */
+	if (!resource->remote_state_change)
+		return CSC_CLEAR;
 
-	drbd_err(peer_device, "receive_bitmap_c: unknown encoding %u\n", p->encoding);
-	conn_request_state(peer_device->connection, NS(conn, C_PROTOCOL_ERROR), CS_HARD);
-	return -EIO;
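+	/* A prepared remote transaction is never aborted; our own local
+	 * transaction yields to a remote initiator with a lower node id. */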
+	if (new_r->initiator_node_id < ongoing->initiator_node_id) {
+		if (ongoing->initiator_node_id == resource->res_opts.node_id)
+			return CSC_ABORT_LOCAL;
+		else
+			return CSC_REJECT;
+	} else if (new_r->initiator_node_id > ongoing->initiator_node_id) {
+		return CSC_REJECT;
+	}
+	if (new_r->tid != ongoing->tid)
+		return CSC_TID_MISS;
+
+	return CSC_MATCH;
 }
 
-void INFO_bm_xfer_stats(struct drbd_peer_device *peer_device,
-		const char *direction, struct bm_xfer_ctx *c)
+
+enum alt_rv {
+	ALT_LOCKED,
+	ALT_MATCH,
+	ALT_TIMEOUT,
+};
+
+static enum alt_rv when_done_lock(struct drbd_resource *resource, unsigned int for_tid)
 {
-	/* what would it take to transfer it "plaintext" */
-	unsigned int header_size = drbd_header_size(peer_device->connection);
-	unsigned int data_size = DRBD_SOCKET_BUFFER_SIZE - header_size;
-	unsigned int plain =
-		header_size * (DIV_ROUND_UP(c->bm_words, data_size) + 1) +
-		c->bm_words * sizeof(unsigned long);
-	unsigned int total = c->bytes[0] + c->bytes[1];
-	unsigned int r;
+	write_lock_irq(&resource->state_rwlock);
+	if (!resource->remote_state_change)
+		return ALT_LOCKED;
+	write_unlock_irq(&resource->state_rwlock);
+	if (resource->twopc_reply.tid == for_tid)
+		return ALT_MATCH;
 
-	/* total can not be zero. but just in case: */
-	if (total == 0)
-		return;
+	return ALT_TIMEOUT;
+}
+static enum alt_rv abort_local_transaction(struct drbd_connection *connection, unsigned int for_tid)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct net_conf *nc;
+	enum alt_rv rv;
+	long t;
 
-	/* don't report if not compressed */
-	if (total >= plain)
-		return;
+	rcu_read_lock();
+	nc = rcu_dereference(connection->transport.net_conf);
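+	/* ping_timeo is in tenths of a second; wait up to 1.5 ping timeouts */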
+	t = nc->ping_timeo * HZ/10 * 3 / 2;
+	rcu_read_unlock();
 
-	/* total < plain. check for overflow, still */
-	r = (total > UINT_MAX/1000) ? (total / (plain/1000))
-		                    : (1000 * total / plain);
+	set_bit(TWOPC_ABORT_LOCAL, &resource->flags);
+	write_unlock_irq(&resource->state_rwlock);
+	wake_up_all(&resource->state_wait);
+	wait_event_timeout(resource->twopc_wait,
+			   (rv = when_done_lock(resource, for_tid)) != ALT_TIMEOUT, t);
+	clear_bit(TWOPC_ABORT_LOCAL, &resource->flags);
+	return rv;
+}
 
-	if (r > 1000)
-		r = 1000;
+static int receive_twopc(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct p_twopc_request *p = pi->data;
+	struct twopc_reply reply = {0};
 
-	r = 1000 - r;
-	drbd_info(peer_device, "%s bitmap stats [Bytes(packets)]: plain %u(%u), RLE %u(%u), "
-	     "total %u; compression: %u.%u%%\n",
-			direction,
-			c->bytes[1], c->packets[1],
-			c->bytes[0], c->packets[0],
-			total, r/10, r % 10);
+	reply.vnr = pi->vnr;
+	reply.tid = be32_to_cpu(p->tid);
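+	/* With DRBD_FF_2PC_V2 the node ids travel as signed 8-bit values;
+	 * older peers send 32-bit big-endian fields. */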
+	if (connection->agreed_features & DRBD_FF_2PC_V2) {
+		reply.initiator_node_id = p->s8_initiator_node_id;
+		reply.target_node_id = p->s8_target_node_id;
+	} else {
+		reply.initiator_node_id = be32_to_cpu(p->u32_initiator_node_id);
+		reply.target_node_id = be32_to_cpu(p->u32_target_node_id);
+	}
+	reply.reachable_nodes = directly_connected_nodes(resource, NOW) |
+				NODE_MASK(resource->res_opts.node_id);
+
+	if (pi->cmd == P_TWOPC_PREPARE)
+		clear_bit(TWOPC_RECV_SIZES_ERR, &resource->flags);
+
+	process_twopc(connection, &reply, pi, jiffies);
+
+	return 0;
 }
 
-/* Since we are processing the bitfield from lower addresses to higher,
-   it does not matter if the process it in 32 bit chunks or 64 bit
-   chunks as long as it is little endian. (Understand it as byte stream,
-   beginning with the lowest byte...) If we would use big endian
-   we would need to process it from the highest address to the lowest,
-   in order to be agnostic to the 32 vs 64 bits issue.
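+/* Forward an abort to all directly connected nodes that this transaction
+ * still needs to reach; those nodes propagate it further. */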
+static void nested_twopc_abort(struct drbd_resource *resource, struct twopc_request *request)
+{
+	struct drbd_connection *connection;
+	u64 nodes_to_reach, reach_immediately, im;
 
-   returns 0 on failure, 1 if we successfully received it. */
-static int receive_bitmap(struct drbd_connection *connection, struct packet_info *pi)
+	read_lock_irq(&resource->state_rwlock);
+	nodes_to_reach = request->nodes_to_reach;
+	reach_immediately = directly_connected_nodes(resource, NOW) & nodes_to_reach;
+	nodes_to_reach &= ~(reach_immediately | NODE_MASK(resource->res_opts.node_id));
+	request->nodes_to_reach = nodes_to_reach;
+	read_unlock_irq(&resource->state_rwlock);
+
+	for_each_connection_ref(connection, im, resource) {
+		u64 mask = NODE_MASK(connection->peer_node_id);
+		if (reach_immediately & mask)
+			conn_send_twopc_request(connection, request);
+	}
+}
+
+static bool is_prepare(enum drbd_packet cmd)
+{
+	return cmd == P_TWOPC_PREP_RSZ || cmd == P_TWOPC_PREPARE;
+}
+
+
+enum determine_dev_size
+drbd_commit_size_change(struct drbd_device *device, struct resize_parms *rs, u64 nodes_to_reach)
 {
+	struct twopc_resize *tr = &device->resource->twopc.resize;
 	struct drbd_peer_device *peer_device;
-	struct drbd_device *device;
-	struct bm_xfer_ctx c;
-	int err;
+	enum determine_dev_size dd;
+	uint64_t my_usize;
 
-	peer_device = conn_peer_device(connection, pi->vnr);
-	if (!peer_device)
-		return -EIO;
-	device = peer_device->device;
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		/* update cached sizes, relevant for the next handshake */
+		peer_device->c_size = tr->new_size;
+		peer_device->u_size = tr->user_size;
 
-	drbd_bm_lock(device, "receive bitmap", BM_LOCKED_SET_ALLOWED);
-	/* you are supposed to send additional out-of-sync information
-	 * if you actually set bits during this phase */
+		if (peer_device->d_size)
+			peer_device->d_size = tr->new_size;
+		peer_device->max_size = tr->new_size;
+	}
+	rcu_read_unlock();
 
-	c = (struct bm_xfer_ctx) {
-		.bm_bits = drbd_bm_bits(device),
-		.bm_words = drbd_bm_words(device),
-	};
+	if (!get_ldev(device)) {
+		drbd_set_my_capacity(device, tr->new_size);
+		return DS_UNCHANGED; /* Not entirely true, but we are diskless... */
+	}
 
-	for(;;) {
-		if (pi->cmd == P_BITMAP)
-			err = receive_bitmap_plain(peer_device, pi->size, pi->data, &c);
-		else if (pi->cmd == P_COMPRESSED_BITMAP) {
-			/* MAYBE: sanity check that we speak proto >= 90,
-			 * and the feature is enabled! */
-			struct p_compressed_bm *p = pi->data;
+	rcu_read_lock();
+	my_usize = rcu_dereference(device->ldev->disk_conf)->disk_size;
+	rcu_read_unlock();
 
-			if (pi->size > DRBD_SOCKET_BUFFER_SIZE - drbd_header_size(connection)) {
-				drbd_err(device, "ReportCBitmap packet too large\n");
-				err = -EIO;
-				goto out;
-			}
-			if (pi->size <= sizeof(*p)) {
-				drbd_err(device, "ReportCBitmap packet too small (l:%u)\n", pi->size);
-				err = -EIO;
-				goto out;
-			}
-			err = drbd_recv_all(peer_device->connection, p, pi->size);
-			if (err)
-			       goto out;
-			err = decode_bitmap_c(peer_device, p, &c, pi->size);
-		} else {
-			drbd_warn(device, "receive_bitmap: cmd neither ReportBitMap nor ReportCBitMap (is 0x%x)", pi->cmd);
-			err = -EIO;
-			goto out;
-		}
+	if (my_usize != tr->user_size) {
+		struct disk_conf *old_disk_conf, *new_disk_conf;
 
-		c.packets[pi->cmd == P_BITMAP]++;
-		c.bytes[pi->cmd == P_BITMAP] += drbd_header_size(connection) + pi->size;
+		drbd_info(device, "New u_size %llu sectors\n",
+			  (unsigned long long)tr->user_size);
 
-		if (err <= 0) {
-			if (err < 0)
-				goto out;
-			break;
+		new_disk_conf = kzalloc_obj(struct disk_conf);
+		if (!new_disk_conf) {
+			device->ldev->disk_conf->disk_size = tr->user_size;
+			goto cont;
 		}
-		err = drbd_recv_header(peer_device->connection, pi);
-		if (err)
-			goto out;
+
+		old_disk_conf = device->ldev->disk_conf;
+		*new_disk_conf = *old_disk_conf;
+		new_disk_conf->disk_size = tr->user_size;
+
+		rcu_assign_pointer(device->ldev->disk_conf, new_disk_conf);
+		kvfree_rcu_mightsleep(old_disk_conf);
 	}
+cont:
+	dd = drbd_determine_dev_size(device, tr->new_size, tr->dds_flags | DDSF_2PC, rs);
 
-	INFO_bm_xfer_stats(peer_device, "receive", &c);
+	if (dd == DS_GREW && !(tr->dds_flags & DDSF_NO_RESYNC)) {
+		struct drbd_resource *resource = device->resource;
+		const int my_node_id = resource->res_opts.node_id;
+		struct drbd_peer_device *peer_device;
+		u64 im;
 
-	if (device->state.conn == C_WF_BITMAP_T) {
-		enum drbd_state_rv rv;
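+		/* Decide the resync direction towards each peer: with diskful
+		 * primaries present, they act as sync sources towards
+		 * secondaries, and between two primaries the lower node id
+		 * becomes the source; otherwise the direction follows the
+		 * two-phase commit topology. */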
+		for_each_peer_device_ref(peer_device, im, device) {
+			if (peer_device->repl_state[NOW] != L_ESTABLISHED ||
+			    peer_device->disk_state[NOW] < D_INCONSISTENT)
+				continue;
 
-		err = drbd_send_bitmap(device, peer_device);
-		if (err)
-			goto out;
-		/* Omit CS_ORDERED with this state transition to avoid deadlocks. */
-		rv = _drbd_request_state(device, NS(conn, C_WF_SYNC_UUID), CS_VERBOSE);
-		D_ASSERT(device, rv == SS_SUCCESS);
-	} else if (device->state.conn != C_WF_BITMAP_S) {
-		/* admin may have requested C_DISCONNECTING,
-		 * other threads may have noticed network errors */
-		drbd_info(device, "unexpected cstate (%s) in receive_bitmap\n",
-		    drbd_conn_str(device->state.conn));
+			if (tr->diskful_primary_nodes) {
+				if (tr->diskful_primary_nodes & NODE_MASK(my_node_id)) {
+					enum drbd_repl_state resync;
+					if (tr->diskful_primary_nodes & NODE_MASK(peer_device->node_id)) {
+						/* peer is also primary */
+						resync = peer_device->node_id < my_node_id ?
+							L_SYNC_TARGET : L_SYNC_SOURCE;
+					} else {
+						/* peer is secondary */
+						resync = L_SYNC_SOURCE;
+					}
+					drbd_start_resync(peer_device, resync, "resize");
+				} else {
+					if (tr->diskful_primary_nodes & NODE_MASK(peer_device->node_id))
+						drbd_start_resync(peer_device, L_SYNC_TARGET,
+								"resize");
+					/* else  no resync */
+				}
+			} else {
+				if (resource->twopc_parent_nodes & NODE_MASK(peer_device->node_id))
+					drbd_start_resync(peer_device, L_SYNC_TARGET, "resize");
+				else if (nodes_to_reach & NODE_MASK(peer_device->node_id))
+					drbd_start_resync(peer_device, L_SYNC_SOURCE, "resize");
+				/* else  no resync */
+			}
+		}
 	}
-	err = 0;
 
- out:
-	drbd_bm_unlock(device);
-	if (!err && device->state.conn == C_WF_BITMAP_S)
-		drbd_start_resync(device, C_SYNC_SOURCE);
-	return err;
+	put_ldev(device);
+	return dd;
 }
 
-static int receive_skip(struct drbd_connection *connection, struct packet_info *pi)
+enum drbd_state_rv drbd_support_2pc_resize(struct drbd_resource *resource)
 {
-	drbd_warn(connection, "skipping unknown optional packet type %d, l: %d!\n",
-		 pi->cmd, pi->size);
+	struct drbd_connection *connection;
+	enum drbd_state_rv rv = SS_SUCCESS;
 
-	return ignore_remaining_packet(connection, pi);
-}
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		if (connection->cstate[NOW] == C_CONNECTED &&
+		    connection->agreed_pro_version < 112) {
+			rv = SS_NOT_SUPPORTED;
+			break;
+		}
+	}
+	rcu_read_unlock();
 
-static int receive_UnplugRemote(struct drbd_connection *connection, struct packet_info *pi)
-{
-	/* Make sure we've acked all the TCP data associated
-	 * with the data requests being unplugged */
-	tcp_sock_set_quickack(connection->data.socket->sk, 2);
-	return 0;
+	return rv;
 }
 
-static int receive_out_of_sync(struct drbd_connection *connection, struct packet_info *pi)
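+/* Return true if at least one connected neighbor reports quorum on all
+ * of its peer devices. */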
+static bool any_neighbor_quorate(struct drbd_resource *resource)
 {
 	struct drbd_peer_device *peer_device;
-	struct drbd_device *device;
-	struct p_block_desc *p = pi->data;
+	struct drbd_connection *connection;
+	bool peer_with_quorum = false;
+	int vnr;
 
-	peer_device = conn_peer_device(connection, pi->vnr);
-	if (!peer_device)
-		return -EIO;
-	device = peer_device->device;
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		peer_with_quorum = true;
+		idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+			if (test_bit(PEER_QUORATE, &peer_device->flags))
+				continue;
+			peer_with_quorum = false;
+			break;
+		}
 
-	switch (device->state.conn) {
-	case C_WF_SYNC_UUID:
-	case C_WF_BITMAP_T:
-	case C_BEHIND:
+		if (peer_with_quorum)
 			break;
-	default:
-		drbd_err(device, "ASSERT FAILED cstate = %s, expected: WFSyncUUID|WFBitMapT|Behind\n",
-				drbd_conn_str(device->state.conn));
 	}
+	rcu_read_unlock();
 
-	drbd_set_out_of_sync(peer_device, be64_to_cpu(p->sector), be32_to_cpu(p->blksize));
+	return peer_with_quorum;
+}
+
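+/*
+ * Handle an incoming two-phase commit packet (prepare, commit or abort).
+ * Concurrent transactions are arbitrated first: a local transaction may be
+ * aborted in favor of the remote one, duplicate prepares are answered with
+ * the cached reply, and conflicting transactions are rejected with
+ * P_TWOPC_RETRY.  The request is then applied locally and forwarded to the
+ * other neighbors via nested_twopc_request().
+ */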
+static void process_twopc(struct drbd_connection *connection,
+			 struct twopc_reply *reply,
+			 struct packet_info *pi,
+			 unsigned long receive_jif)
+{
+	struct drbd_connection *affected_connection = connection;
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_peer_device *peer_device = NULL;
+	struct p_twopc_request *p = pi->data;
+	struct twopc_state_change *state_change = &resource->twopc.state_change;
+	enum chg_state_flags flags = CS_VERBOSE | CS_LOCAL_ONLY;
+	enum drbd_state_rv rv = SS_SUCCESS;
+	struct twopc_request request;
+	bool waiting_allowed = true;
+	enum csc_rv csc_rv;
+
+	request.tid = be32_to_cpu(p->tid);
+	if (connection->agreed_features & DRBD_FF_2PC_V2) {
+		request.flags = be32_to_cpu(p->flags);
+		request.initiator_node_id = p->s8_initiator_node_id;
+		request.target_node_id = p->s8_target_node_id;
+	} else {
+		request.flags = 0;
+		request.initiator_node_id = be32_to_cpu(p->u32_initiator_node_id);
+		request.target_node_id = be32_to_cpu(p->u32_target_node_id);
+	}
+	request.nodes_to_reach = be64_to_cpu(p->nodes_to_reach);
+	request.cmd = pi->cmd;
+	request.vnr = pi->vnr;
 
-	return 0;
-}
+	/* Check for concurrent transactions and duplicate packets. */
+retry:
+	write_lock_irq(&resource->state_rwlock);
 
-static int receive_rs_deallocated(struct drbd_connection *connection, struct packet_info *pi)
-{
-	struct drbd_peer_device *peer_device;
+	csc_rv = check_concurrent_transactions(resource, reply);
+
+	if (csc_rv == CSC_CLEAR && pi->cmd != P_TWOPC_ABORT) {
+		struct drbd_device *device;
+		int iterate_vnr;
+
+		if (!is_prepare(pi->cmd)) {
+			/* We have committed or aborted this transaction already. */
+			write_unlock_irq(&resource->state_rwlock);
+			dynamic_drbd_dbg(connection, "Ignoring %s packet %u\n",
+				   drbd_packet_name(pi->cmd),
+				   reply->tid);
+			return;
+		}
+		if (reply->is_aborted) {
+			write_unlock_irq(&resource->state_rwlock);
+			return;
+		}
+		resource->remote_state_change = true;
+		resource->twopc.type =
+			pi->cmd == P_TWOPC_PREPARE ? TWOPC_STATE_CHANGE : TWOPC_RESIZE;
+		resource->twopc_prepare_reply_cmd = 0;
+		resource->twopc_parent_nodes = NODE_MASK(connection->peer_node_id);
+		clear_bit(TWOPC_EXECUTED, &resource->flags);
+		idr_for_each_entry(&resource->devices, device, iterate_vnr)
+			clear_bit(OUTDATE_ON_2PC_COMMIT, &device->flags);
+	} else if (csc_rv == CSC_MATCH && !is_prepare(pi->cmd)) {
+		flags |= CS_PREPARED;
+
+		if (test_and_set_bit(TWOPC_EXECUTED, &resource->flags)) {
+			write_unlock_irq(&resource->state_rwlock);
+			drbd_info(connection, "Ignoring redundant %s packet %u.\n",
+				  drbd_packet_name(pi->cmd),
+				  reply->tid);
+			return;
+		}
+	} else if (csc_rv == CSC_ABORT_LOCAL && is_prepare(pi->cmd)) {
+		enum alt_rv alt_rv;
+
+		drbd_info(connection, "Aborting local state change %u to yield to remote "
+			  "state change %u.\n",
+			  resource->twopc_reply.tid,
+			  reply->tid);
+		alt_rv = abort_local_transaction(connection, reply->tid);
+		if (alt_rv == ALT_MATCH) {
+			/* abort_local_transaction() comes back unlocked in this case... */
+			goto match;
+		} else if (alt_rv == ALT_TIMEOUT) {
+			/* abort_local_transaction() comes back unlocked in this case... */
+			drbd_info(connection, "Aborting local state change %u "
+				  "failed. Rejecting remote state change %u.\n",
+				  resource->twopc_reply.tid,
+				  reply->tid);
+			drbd_send_twopc_reply(connection, P_TWOPC_RETRY, reply);
+			return;
+		}
+		/* abort_local_transaction() returned with the state_rwlock write lock */
+		if (reply->is_aborted) {
+			write_unlock_irq(&resource->state_rwlock);
+			return;
+		}
+		resource->remote_state_change = true;
+		resource->twopc.type =
+			pi->cmd == P_TWOPC_PREPARE ? TWOPC_STATE_CHANGE : TWOPC_RESIZE;
+		resource->twopc_parent_nodes = NODE_MASK(connection->peer_node_id);
+		resource->twopc_prepare_reply_cmd = 0;
+		clear_bit(TWOPC_EXECUTED, &resource->flags);
+	} else if (pi->cmd == P_TWOPC_ABORT) {
+		/* csc_rv != CSC_MATCH */
+		write_unlock_irq(&resource->state_rwlock);
+		nested_twopc_abort(resource, &request);
+		return;
+	} else {
+		write_unlock_irq(&resource->state_rwlock);
+
+		if (csc_rv == CSC_TID_MISS && is_prepare(pi->cmd) && waiting_allowed) {
+			/* CSC_TID_MISS implies the two transactions are from the same initiator */
+			if (!(resource->twopc_parent_nodes & NODE_MASK(connection->peer_node_id))) {
+				long timeout = twopc_timeout(resource) / 20; /* usually 1.5 sec */
+				/*
+				 * We are expecting the P_TWOPC_COMMIT or P_TWOPC_ABORT through
+				 * another connection. So we can wait without deadlocking.
+				 */
+				wait_event_interruptible_timeout(resource->twopc_wait,
+						!resource->remote_state_change, timeout);
+				waiting_allowed = false; /* retry only once */
+				goto retry;
+			}
+		}
+
+		if (csc_rv == CSC_REJECT ||
+		    (csc_rv == CSC_TID_MISS && is_prepare(pi->cmd))) {
+			drbd_info(connection, "Rejecting concurrent "
+				  "remote state change %u because of "
+				  "state change %u\n",
+				  reply->tid,
+				  resource->twopc_reply.tid);
+			drbd_send_twopc_reply(connection, P_TWOPC_RETRY, reply);
+			return;
+		}
+
+		if (is_prepare(pi->cmd)) {
+			if (csc_rv == CSC_MATCH) {
+				/* We have prepared this transaction already. */
+				enum drbd_packet reply_cmd;
+
+			match:
+				drbd_info(connection,
+						"Duplicate prepare for remote state change %u\n",
+						reply->tid);
+				write_lock_irq(&resource->state_rwlock);
+				resource->twopc_parent_nodes |= NODE_MASK(connection->peer_node_id);
+				reply_cmd = resource->twopc_prepare_reply_cmd;
+				write_unlock_irq(&resource->state_rwlock);
+
+				if (reply_cmd) {
+					drbd_send_twopc_reply(connection, reply_cmd,
+							      &resource->twopc_reply);
+				} else {
+				/* if a node sends us a prepare, that means it has
+				   prepared this itself successfully. */
+					write_lock_irq(&resource->state_rwlock);
+					set_bit(TWOPC_YES, &connection->flags);
+					drbd_maybe_cluster_wide_reply(resource);
+					write_unlock_irq(&resource->state_rwlock);
+				}
+			}
+		} else {
+			drbd_info(connection, "Ignoring %s packet %u "
+				  "current processing state change %u\n",
+				  drbd_packet_name(pi->cmd),
+				  reply->tid,
+				  resource->twopc_reply.tid);
+		}
+		return;
+	}
+
+	if (reply->initiator_node_id != connection->peer_node_id) {
+		/*
+		 * This is an indirect request.  Unless we are also directly
+		 * connected to the initiator, we don't have connection or
+		 * peer device objects for this peer.
+		 */
+		affected_connection = drbd_connection_by_node_id(resource, reply->initiator_node_id);
+	}
+
+	if (reply->target_node_id != -1 &&
+	    reply->target_node_id != resource->res_opts.node_id) {
+		affected_connection = NULL;
+	}
+
+	switch (resource->twopc.type) {
+	case TWOPC_STATE_CHANGE:
+		if (pi->cmd == P_TWOPC_PREPARE) {
+			state_change->mask.i = be32_to_cpu(p->mask);
+			state_change->val.i = be32_to_cpu(p->val);
+		} else { /* P_TWOPC_COMMIT */
+			state_change->primary_nodes = be64_to_cpu(p->primary_nodes);
+			state_change->reachable_nodes = be64_to_cpu(p->reachable_nodes);
+		}
+		break;
+	case TWOPC_RESIZE:
+		if (request.cmd == P_TWOPC_PREP_RSZ) {
+			resource->twopc.resize.user_size = be64_to_cpu(p->user_size);
+			resource->twopc.resize.dds_flags = be16_to_cpu(p->dds_flags);
+		} else { /* P_TWOPC_COMMIT */
+			resource->twopc.resize.diskful_primary_nodes =
+				be64_to_cpu(p->diskful_primary_nodes);
+			resource->twopc.resize.new_size = be64_to_cpu(p->exposed_size);
+		}
+	}
+
+	if (affected_connection && affected_connection->cstate[NOW] < C_CONNECTED &&
+	    state_change->mask.conn == 0)
+		affected_connection = NULL;
+
+	if (pi->vnr != -1 && affected_connection) {
+		peer_device = conn_peer_device(affected_connection, pi->vnr);
+		/* If we do not know the peer_device, then we are fine with
+		   whatever is going on in the cluster. E.g. detach and del-minor
+		   on each node, one after the other */
+
+		affected_connection = NULL; /* It is intended for a peer_device! */
+	}
+
+	if (state_change->mask.conn == conn_MASK) {
+		u64 m = NODE_MASK(reply->initiator_node_id);
+
+		if (state_change->val.conn == C_CONNECTED) {
+			reply->reachable_nodes |= m;
+			if (affected_connection) {
+				reply->is_connect = 1;
+
+				if (pi->cmd == P_TWOPC_PREPARE)
+					drbd_init_connect_state(connection);
+			}
+		}
+		if (state_change->val.conn == C_DISCONNECTING) {
+			reply->reachable_nodes &= ~m;
+			reply->is_disconnect = 1;
+		}
+	}
+
+	if (pi->cmd == P_TWOPC_PREPARE) {
+		reply->primary_nodes = be64_to_cpu(p->primary_nodes);
+		if (resource->role[NOW] == R_PRIMARY) {
+			reply->primary_nodes |= NODE_MASK(resource->res_opts.node_id);
+
+			if (drbd_res_data_accessible(resource))
+				reply->weak_nodes = ~reply->reachable_nodes;
+		}
+	}
+	if (pi->cmd == P_TWOPC_PREP_RSZ) {
+		struct drbd_device *device;
+
+		device = (peer_device ?: conn_peer_device(connection, pi->vnr))->device;
+		if (get_ldev(device)) {
+			if (resource->role[NOW] == R_PRIMARY)
+				reply->diskful_primary_nodes = NODE_MASK(resource->res_opts.node_id);
+			reply->max_possible_size = drbd_local_max_size(device);
+			put_ldev(device);
+		} else {
+			reply->max_possible_size = DRBD_MAX_SECTORS;
+			reply->diskful_primary_nodes = 0;
+		}
+	}
+
+	resource->twopc_reply = *reply;
+	write_unlock_irq(&resource->state_rwlock);
+
+	if (affected_connection && affected_connection != connection &&
+	    affected_connection->cstate[NOW] == C_CONNECTED) {
+		drbd_ping_peer(affected_connection);
+		if (affected_connection->cstate[NOW] < C_CONNECTED)
+			affected_connection = NULL;
+	}
+
+	switch (pi->cmd) {
+	case P_TWOPC_PREPARE:
+		drbd_print_cluster_wide_state_change(resource, "Preparing remote state change",
+				reply->tid, reply->initiator_node_id, reply->target_node_id,
+				state_change->mask, state_change->val);
+		flags |= CS_PREPARE;
+		break;
+	case P_TWOPC_PREP_RSZ:
+		drbd_info(connection, "Preparing remote state change %u "
+			  "(local_max_size = %llu KiB)\n",
+			  reply->tid, (unsigned long long)reply->max_possible_size >> 1);
+		flags |= CS_PREPARE;
+		break;
+	case P_TWOPC_ABORT:
+		drbd_info(connection, "Aborting remote state change %u\n",
+			  reply->tid);
+		flags |= CS_ABORT;
+		break;
+	case P_TWOPC_COMMIT:
+		drbd_info(connection, "Committing remote state change %u (primary_nodes=%llX)\n",
+			  reply->tid, be64_to_cpu(p->primary_nodes));
+		break;
+	default:
+		BUG();
+	}
+
+	switch (resource->twopc.type) {
+	case TWOPC_STATE_CHANGE:
+		if (flags & CS_PREPARED && !(flags & CS_ABORT)) {
+			reply->primary_nodes = state_change->primary_nodes;
+			handle_neighbor_demotion(connection, state_change, reply);
+
+			if ((resource->cached_all_devices_have_quorum ||
+			     any_neighbor_quorate(resource)) &&
+			    request.flags & TWOPC_HAS_REACHABLE) {
+				resource->members = state_change->reachable_nodes;
+				if (!resource->cached_all_devices_have_quorum)
+					flags |= CS_FORCE_RECALC;
+			}
+			if (state_change->mask.conn == conn_MASK &&
+			    state_change->val.conn == C_CONNECTED) {
+				/* Add nodes connecting "far away" to members */
+				u64 add_mask = NODE_MASK(reply->initiator_node_id) |
+					NODE_MASK(reply->target_node_id);
+
+				resource->members |= add_mask;
+			}
+		}
+
+		if (peer_device)
+			rv = change_peer_device_state(peer_device, state_change, flags);
+		else if (affected_connection)
+			rv = change_connection_state(affected_connection, state_change, reply,
+						     flags | CS_IGN_OUTD_FAIL);
+		else
+			rv = far_away_change(connection, &request, reply, flags);
+		break;
+	case TWOPC_RESIZE:
+		if (flags & CS_PREPARE)
+			rv = drbd_support_2pc_resize(resource);
+		break;
+	}
+
+	if (flags & CS_PREPARE) {
+		mod_timer(&resource->twopc_timer, receive_jif + twopc_timeout(resource));
+
+		/* Retry replies can be sent immediately. Otherwise use the
+		 * nested twopc path. This waits for the state handshake to
+		 * complete in the case of a twopc for transitioning to
+		 * C_CONNECTED. */
+		if (rv == SS_IN_TRANSIENT_STATE) {
+			resource->twopc_prepare_reply_cmd = P_TWOPC_RETRY;
+			drbd_send_twopc_reply(connection, P_TWOPC_RETRY, reply);
+		} else {
+			resource->twopc_reply.state_change_failed = rv < SS_SUCCESS;
+			nested_twopc_request(resource, &request);
+		}
+	} else {
+		if (flags & CS_PREPARED) {
+			if (rv < SS_SUCCESS)
+				drbd_err(resource, "FATAL: Local commit of prepared %u failed! \n",
+					 reply->tid);
+
+			timer_delete(&resource->twopc_timer);
+		}
+
+		nested_twopc_request(resource, &request);
+
+		if (resource->twopc.type == TWOPC_RESIZE && flags & CS_PREPARED &&
+		    !(flags & CS_ABORT)) {
+			struct drbd_device *device;
+
+			device = (peer_device ?: conn_peer_device(connection, pi->vnr))->device;
+
+			drbd_commit_size_change(device, NULL, request.nodes_to_reach);
+			rv = SS_SUCCESS;
+		}
+
+		clear_remote_state_change(resource);
+
+		if (peer_device && rv >= SS_SUCCESS && !(flags & CS_ABORT))
+			drbd_md_sync_if_dirty(peer_device->device);
+
+		if (connection->agreed_pro_version < 117 &&
+		    rv >= SS_SUCCESS && !(flags & CS_ABORT) &&
+		    affected_connection &&
+		    state_change->mask.conn == conn_MASK && state_change->val.conn == C_CONNECTED)
+			conn_connect2(connection);
+	}
+}
+
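+/*
+ * Find the most suitable D_UP_TO_DATE peer to resync from, judged by the
+ * resync_peer_preference of the handshake strategy.  If no resync turns
+ * out to be necessary, promote the local disk to D_UP_TO_DATE directly.
+ */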
+void drbd_try_to_get_resynced(struct drbd_device *device)
+{
+	struct drbd_peer_device *peer_device, *best_peer_device = NULL;
+	enum sync_strategy best_strategy = UNDETERMINED;
+	int best_preference = 0;
+
+	if (!get_ldev(device))
+		return;
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		enum sync_strategy strategy;
+		enum sync_rule rule;
+		int peer_node_id;
+
+		if (peer_device->disk_state[NOW] != D_UP_TO_DATE)
+			continue;
+
+		strategy = drbd_uuid_compare(peer_device, &rule, &peer_node_id);
+		disk_states_to_strategy(peer_device, peer_device->disk_state[NOW], &strategy, rule,
+					&peer_node_id);
+		drbd_info(peer_device, "strategy = %s\n", strategy_descriptor(strategy).name);
+		if (strategy_descriptor(strategy).resync_peer_preference > best_preference) {
+			best_preference = strategy_descriptor(strategy).resync_peer_preference;
+			best_peer_device = peer_device;
+			best_strategy = strategy;
+		}
+	}
+	rcu_read_unlock();
+	peer_device = best_peer_device;
+
+	if (best_strategy == NO_SYNC) {
+		change_disk_state(device, D_UP_TO_DATE, CS_VERBOSE, "get-resync", NULL);
+	} else if (peer_device &&
+		   (!repl_is_sync_target(peer_device->repl_state[NOW]) ||
+		    test_bit(UNSTABLE_RESYNC, &peer_device->flags))) {
+		drbd_resync(peer_device, DISKLESS_PRIMARY);
+		drbd_send_uuids(peer_device, UUID_FLAG_RESYNC | UUID_FLAG_DISKLESS_PRIMARY, 0);
+	}
+	put_ldev(device);
+}
+
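+/* Once every peer device has received its initial state, mark the
+ * connection handshake as ready and send a pending cluster-wide
+ * two-phase-commit reply, if any. */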
+static void finish_nested_twopc(struct drbd_connection *connection)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_peer_device *peer_device;
+	int vnr = 0;
+
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		if (!test_bit(INITIAL_STATE_RECEIVED, &peer_device->flags))
+			return;
+	}
+
+	set_bit(CONN_HANDSHAKE_READY, &connection->flags);
+
+	wake_up_all(&resource->state_wait);
+
+	write_lock_irq(&resource->state_rwlock);
+	drbd_maybe_cluster_wide_reply(resource);
+	write_unlock_irq(&resource->state_rwlock);
+}
+
+static bool uuid_in_peer_history(struct drbd_peer_device *peer_device, u64 uuid)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(peer_device->history_uuids); i++)
+		if ((peer_device->history_uuids[i] & ~UUID_PRIMARY) == uuid)
+			return true;
+
+	return false;
+}
+
+static bool uuid_in_my_history(struct drbd_device *device, u64 uuid)
+{
+	int i;
+
+	for (i = 0; i < HISTORY_UUIDS; i++) {
+		if ((drbd_history_uuid(device, i) & ~UUID_PRIMARY) == uuid)
+			return true;
+	}
+
+	return false;
+}
+
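+/* Does the peer know my exposed data generation, either as one of its
+ * bitmap UUIDs or in its UUID history?  If so, its data is a successor
+ * of mine. */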
+static bool peer_data_is_successor_of_mine(struct drbd_peer_device *peer_device)
+{
+	u64 exposed = peer_device->device->exposed_data_uuid & ~UUID_PRIMARY;
+	int i;
+
+	i = drbd_find_peer_bitmap_by_uuid(peer_device, exposed);
+	if (i != -1)
+		return true;
+
+	return uuid_in_peer_history(peer_device, exposed);
+}
+
+static bool peer_data_is_ancestor_of_mine(struct drbd_peer_device *peer_device)
+{
+	struct drbd_device *device = peer_device->device;
+	u64 peer_uuid = peer_device->current_uuid & ~UUID_PRIMARY;
+	struct drbd_peer_device *p2;
+	bool rv = false;
+	int i;
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(p2, device) {
+		if (peer_device == p2)
+			continue;
+		i = drbd_find_peer_bitmap_by_uuid(p2, peer_uuid);
+		if (i != -1 || uuid_in_peer_history(p2, peer_uuid)) {
+			rv = true;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return rv;
+}
+
+static void propagate_exposed_uuid(struct drbd_device *device)
+{
+	struct drbd_peer_device *peer_device;
+	u64 im;
+
+	for_each_peer_device_ref(peer_device, im, device) {
+		if (!test_bit(INITIAL_STATE_SENT, &peer_device->flags))
+			continue;
+		drbd_send_current_uuid(peer_device, device->exposed_data_uuid, 0);
+	}
+}
+
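+/* If IO is suspended and the on-susp-primary-outdated policy is
+ * force-secondary, demote the resource; fail_io gets set via
+ * CS_FS_IGN_OPENERS. */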
+static void maybe_force_secondary(struct drbd_peer_device *peer_device)
+{
+	struct drbd_resource *resource = peer_device->device->resource;
+	unsigned long irq_flags;
+
+	if (!resource->fail_io[NOW] && resource->cached_susp &&
+	    resource->res_opts.on_susp_primary_outdated == SPO_FORCE_SECONDARY) {
+		drbd_warn(peer_device, "force secondary!\n");
+		begin_state_change(resource, &irq_flags,
+				   CS_VERBOSE | CS_HARD | CS_FS_IGN_OPENERS);
+		resource->role[NEW] = R_SECONDARY;
+		/* resource->fail_io[NEW] gets set via CS_FS_IGN_OPENERS */
+		end_state_change(resource, &irq_flags, "peer-state");
+	}
+}
+
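+/*
+ * A diskless node sees a D_UP_TO_DATE peer whose current UUID differs from
+ * the UUID we expose.  Depending on whether the peer's data is a successor
+ * or an ancestor of ours, retry the handshake, adopt the newer UUID,
+ * outdate the peer's disk, or disconnect.
+ */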
+static void diskless_with_peers_different_current_uuids(struct drbd_peer_device *peer_device,
+							enum drbd_disk_state *peer_disk_state)
+{
+	bool data_successor = peer_data_is_successor_of_mine(peer_device);
+	bool data_ancestor = peer_data_is_ancestor_of_mine(peer_device);
+	struct drbd_connection *connection = peer_device->connection;
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_device *device = peer_device->device;
+
+	if (data_successor && resource->role[NOW] == R_PRIMARY) {
+		drbd_warn(peer_device, "Remote node has more recent data\n");
+		maybe_force_secondary(peer_device);
+		set_bit(CONN_HANDSHAKE_RETRY, &connection->flags);
+	} else if (data_successor && resource->role[NOW] == R_SECONDARY) {
+		drbd_uuid_set_exposed(device, peer_device->current_uuid, true);
+		propagate_exposed_uuid(device);
+	} else if (data_ancestor) {
+		drbd_warn(peer_device, "Downgrading joining peer's disk as its data is older\n");
+		if (*peer_disk_state > D_OUTDATED)
+			*peer_disk_state = D_OUTDATED;
+			/* See "Do not trust this guy!" in sanitize_state() */
+	} else {
+		drbd_warn(peer_device, "Current UUID of peer does not match my exposed UUID.");
+		set_bit(CONN_HANDSHAKE_DISCONNECT, &connection->flags);
+	}
+}
+
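+/*
+ * Process a P_STATE packet: derive the new replication state, run the
+ * resync handshake where necessary, and commit the resulting peer device
+ * state change.
+ */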
+static int receive_state(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_peer_device *peer_device = NULL;
+	enum drbd_repl_state *repl_state;
+	struct drbd_device *device = NULL;
+	struct p_state *p = pi->data;
+	union drbd_state old_peer_state, peer_state;
+	enum drbd_disk_state peer_disk_state;
+	enum drbd_repl_state new_repl_state;
+	bool peer_was_resync_target, do_handshake = false;
+	enum chg_state_flags begin_state_chg_flags = CS_VERBOSE | CS_WAIT_COMPLETE;
+	unsigned long irq_flags;
+	int rv;
+
+	if (pi->vnr != -1) {
+		peer_device = conn_peer_device(connection, pi->vnr);
+		if (!peer_device)
+			return config_unknown_volume(connection, pi);
+		device = peer_device->device;
+	}
+
+	peer_state.i = be32_to_cpu(p->state);
+
+	if (connection->agreed_pro_version < 110) {
+		/* Before drbd-9.0 there was no D_DETACHING; it was D_FAILED... */
+		if (peer_state.disk >= D_DETACHING)
+			peer_state.disk++;
+		if (peer_state.pdsk >= D_DETACHING)
+			peer_state.pdsk++;
+	}
+
+	if (pi->vnr == -1) {
+		if (peer_state.role == R_SECONDARY) {
+			begin_state_change(resource, &irq_flags, CS_HARD | CS_VERBOSE);
+			__change_peer_role(connection, R_SECONDARY);
+			rv = end_state_change(resource, &irq_flags, "peer-state");
+			if (rv < SS_SUCCESS)
+				goto fail;
+		}
+		return 0;
+	}
+
+	peer_disk_state = peer_state.disk;
+
+	if (peer_disk_state > D_DISKLESS && !want_bitmap(peer_device)) {
+		drbd_warn(peer_device, "The peer is configured to be diskless but presents %s\n",
+			  drbd_disk_str(peer_disk_state));
+		goto fail;
+	}
+
+	if (peer_state.disk == D_NEGOTIATING) {
+		peer_disk_state = peer_device->uuid_flags & UUID_FLAG_INCONSISTENT ?
+			D_INCONSISTENT : D_CONSISTENT;
+		drbd_info(peer_device, "real peer disk state = %s\n", drbd_disk_str(peer_disk_state));
+	}
+
+	read_lock_irq(&resource->state_rwlock);
+	old_peer_state = drbd_get_peer_device_state(peer_device, NOW);
+	read_unlock_irq(&resource->state_rwlock);
+ retry:
+	new_repl_state = max_t(enum drbd_repl_state, old_peer_state.conn, L_OFF);
+
+	/* If some other part of the code (ack_receiver thread, timeout)
+	 * already decided to close the connection again,
+	 * we must not "re-establish" it here. */
+	if (old_peer_state.conn <= C_TEAR_DOWN)
+		return -ECONNRESET;
+
+	if (!test_bit(INITIAL_STATE_RECEIVED, &peer_device->flags) &&
+	    peer_state.role == R_PRIMARY && peer_device->uuid_flags & UUID_FLAG_STABLE)
+		check_resync_source(device, peer_device->uuid_node_mask);
+
+	peer_was_resync_target =
+		connection->agreed_pro_version >= 110 ?
+		peer_device->last_repl_state == L_SYNC_TARGET ||
+		peer_device->last_repl_state == L_PAUSED_SYNC_T
+		:
+		true;
+	/* If this is the "end of sync" confirmation, usually the peer disk
+	 * was D_INCONSISTENT or D_CONSISTENT. (Since the peer might be
+	 * weak we do not know anything about its new disk state)
+	 */
+	if (peer_was_resync_target &&
+	    (old_peer_state.pdsk == D_INCONSISTENT || old_peer_state.pdsk == D_CONSISTENT) &&
+	    old_peer_state.conn > L_ESTABLISHED && old_peer_state.disk >= D_INCONSISTENT) {
+		/* If we are (becoming) SyncSource, but peer is still in sync
+		 * preparation, ignore its uptodate-ness to avoid flapping, it
+		 * will change to inconsistent once the peer reaches active
+		 * syncing states.
+		 * It may have changed syncer-paused flags, however, so we
+		 * cannot ignore this completely. */
+		if (peer_state.conn > L_ESTABLISHED &&
+		    peer_state.conn < L_SYNC_SOURCE)
+			peer_disk_state = D_INCONSISTENT;
+
+		/* if peer_state changes to connected at the same time,
+		 * it explicitly notifies us that it finished resync.
+		 * Maybe we should finish it up, too? */
+		else if (peer_state.conn == L_ESTABLISHED) {
+			bool finish_now = false;
+
+			if (old_peer_state.conn == L_WF_BITMAP_S) {
+				read_lock_irq(&resource->state_rwlock);
+				if (peer_device->repl_state[NOW] == L_WF_BITMAP_S)
+					peer_device->resync_finished_pdsk = peer_state.disk;
+				else if (peer_device->repl_state[NOW] == L_SYNC_SOURCE)
+					finish_now = true;
+				read_unlock_irq(&resource->state_rwlock);
+			}
+
+			if (finish_now || old_peer_state.conn == L_SYNC_SOURCE ||
+			    old_peer_state.conn == L_PAUSED_SYNC_S) {
+				drbd_resync_finished(peer_device, peer_state.disk);
+				peer_device->last_repl_state = peer_state.conn;
+			}
+			return 0;
+		}
+	}
+
+	/* explicit verify finished notification, stop sector reached. */
+	if (old_peer_state.conn == L_VERIFY_T && old_peer_state.disk == D_UP_TO_DATE &&
+	    peer_state.conn == L_ESTABLISHED && peer_disk_state == D_UP_TO_DATE) {
+		ov_out_of_sync_print(peer_device);
+		drbd_resync_finished(peer_device, D_MASK);
+		peer_device->last_repl_state = peer_state.conn;
+		return 0;
+	}
+
+	/* Start resync after AHEAD/BEHIND */
+	if (connection->agreed_pro_version >= 110 &&
+	    peer_state.conn == L_SYNC_SOURCE && old_peer_state.conn == L_BEHIND) {
+		/*
+		 * Become Inconsistent immediately because we may now receive
+		 * data. Delay the start of the resync itself until any
+		 * previous resync is no longer active.
+		 */
+		rv = change_disk_state(device, D_INCONSISTENT, CS_VERBOSE,
+				"resync-after-behind", NULL);
+		if (rv < SS_SUCCESS)
+			goto fail;
+
+		peer_device->start_resync_side = L_SYNC_TARGET;
+		drbd_peer_device_post_work(peer_device, RS_START);
+		return 0;
+	}
+
+	/* peer says its disk is inconsistent, while we think it is uptodate,
+	 * and this happens while the peer still thinks we have a sync going on,
+	 * but we think we are already done with the sync.
+	 * We ignore this to avoid flapping pdsk.
+	 * This should not happen, if the peer is a recent version of drbd. */
+	if (old_peer_state.pdsk == D_UP_TO_DATE && peer_disk_state == D_INCONSISTENT &&
+	    old_peer_state.conn == L_ESTABLISHED && peer_state.conn > L_SYNC_SOURCE)
+		peer_disk_state = D_UP_TO_DATE;
+
+	if (new_repl_state == L_OFF)
+		new_repl_state = L_ESTABLISHED;
+
+	if (peer_state.conn == L_AHEAD)
+		new_repl_state = L_BEHIND;
+
+	/* with protocol >= 118 uuid & state packets come after the 2PC prepare packet */
+	do_handshake =
+		(test_bit(UUIDS_RECEIVED, &peer_device->flags) ||
+			test_bit(CURRENT_UUID_RECEIVED, &peer_device->flags)) &&
+		(connection->agreed_pro_version < 118 ||
+			drbd_twopc_between_peer_and_me(connection)) &&
+		old_peer_state.conn < L_ESTABLISHED;
+
+	if (test_bit(UUIDS_RECEIVED, &peer_device->flags) &&
+	    peer_state.disk >= D_NEGOTIATING &&
+	    get_ldev_if_state(device, D_NEGOTIATING)) {
+		enum sync_strategy strategy = UNDETERMINED;
+		bool consider_resync;
+
+		/* clear CONN_DISCARD_MY_DATA this late so as not to lose it if the
+		   connection gets aborted before we are able to do the resync handshake. */
+		clear_bit(CONN_DISCARD_MY_DATA, &connection->flags);
+
+		/* if we established a new connection */
+		consider_resync = do_handshake &&
+					!test_bit(INITIAL_STATE_RECEIVED, &peer_device->flags);
+		/* if we have both been inconsistent, and the peer has been
+		 * forced to be UpToDate with --force */
+		consider_resync |= test_bit(CONSIDER_RESYNC, &peer_device->flags);
+		/* if we had been plain connected, and the admin requested to
+		 * start a sync by "invalidate" or "invalidate-remote" */
+		consider_resync |= (old_peer_state.conn == L_ESTABLISHED &&
+				    (peer_state.conn == L_STARTING_SYNC_S ||
+				     peer_state.conn == L_STARTING_SYNC_T));
+
+		consider_resync |= peer_state.conn == L_WF_BITMAP_T &&
+				   peer_device->uuid_flags & UUID_FLAG_CRASHED_PRIMARY;
+
+		if (consider_resync) {
+			strategy = drbd_sync_handshake(peer_device, peer_state);
+			new_repl_state = strategy_to_repl_state(peer_device, peer_state.role, strategy);
+		} else if (old_peer_state.conn == L_ESTABLISHED &&
+			   (peer_state.disk == D_NEGOTIATING ||
+			    old_peer_state.disk == D_NEGOTIATING)) {
+			strategy = drbd_attach_handshake(peer_device, peer_disk_state);
+			new_repl_state = strategy_to_repl_state(peer_device, peer_state.role, strategy);
+			if (new_repl_state == L_ESTABLISHED && device->disk_state[NOW] == D_UP_TO_DATE)
+				peer_disk_state = D_UP_TO_DATE;
+		}
+
+		put_ldev(device);
+		if (strategy_descriptor(strategy).reconnect) { /* retry connect */
+			maybe_force_secondary(peer_device);
+			if (connection->agreed_pro_version >= 118)
+				set_bit(CONN_HANDSHAKE_RETRY, &connection->flags);
+			else
+				return -EIO; /* retry connect */
+		} else if (strategy_descriptor(strategy).disconnect) {
+			if (device->disk_state[NOW] == D_NEGOTIATING) {
+				new_repl_state = L_NEG_NO_RESULT;
+			} else if (peer_state.disk == D_NEGOTIATING) {
+				if (connection->agreed_pro_version < 110) {
+					drbd_err(device, "Disk attach process on the peer node was aborted.\n");
+					peer_state.disk = D_DISKLESS;
+					peer_disk_state = D_DISKLESS;
+				} else {
+					/* The peer will decide later and let us know... */
+					peer_disk_state = D_NEGOTIATING;
+				}
+			} else {
+				if (test_and_clear_bit(CONN_DRY_RUN, &connection->flags))
+					return -EIO;
+				if (connection->agreed_pro_version >= 118)
+					set_bit(CONN_HANDSHAKE_DISCONNECT, &connection->flags);
+				else
+					goto fail;
+			}
+		}
+
+		if (device->disk_state[NOW] == D_NEGOTIATING) {
+			begin_state_chg_flags |= CS_FORCE_RECALC;
+			peer_device->negotiation_result = new_repl_state;
+		}
+	}
+
+	if (test_bit(UUIDS_RECEIVED, &peer_device->flags) &&
+	    peer_device->repl_state[NOW] == L_OFF && device->disk_state[NOW] == D_DISKLESS) {
+		u64 exposed_data_uuid = device->exposed_data_uuid;
+		u64 peer_current_uuid = peer_device->current_uuid;
+
+		drbd_info(peer_device, "my exposed UUID: %016llX\n", exposed_data_uuid);
+		drbd_uuid_dump_peer(peer_device, peer_device->dirty_bits, peer_device->uuid_flags);
+
+		/* I am diskless, connecting to a peer with a disk: check that the UUIDs match.
+		   We only check if the peer claims to have D_UP_TO_DATE data. Only then is the
+		   peer a source for my data anyway. */
+		if (exposed_data_uuid && peer_state.disk == D_UP_TO_DATE &&
+		    (exposed_data_uuid & ~UUID_PRIMARY) != (peer_current_uuid & ~UUID_PRIMARY))
+			diskless_with_peers_different_current_uuids(peer_device, &peer_disk_state);
+		if (!exposed_data_uuid && peer_state.disk == D_UP_TO_DATE) {
+			drbd_uuid_set_exposed(device, peer_current_uuid, true);
+			propagate_exposed_uuid(device);
+		}
+	}
+	if (peer_device->repl_state[NOW] == L_OFF && peer_state.disk == D_DISKLESS && get_ldev(device)) {
+		u64 uuid_flags = 0;
+
+		drbd_collect_local_uuid_flags(peer_device, NULL);
+		drbd_uuid_dump_self(peer_device, peer_device->comm_bm_set, uuid_flags);
+		drbd_info(peer_device, "peer's exposed UUID: %016llX\n", peer_device->current_uuid);
+
+		if (peer_state.role == R_PRIMARY &&
+		    (peer_device->current_uuid & ~UUID_PRIMARY) ==
+		    (drbd_current_uuid(device) & ~UUID_PRIMARY)) {
+			/* Connecting to diskless primary peer. When the state change is committed,
+			 * sanitize_state might set me D_UP_TO_DATE. Make sure the
+			 * effective_size is set. */
+			peer_device->max_size = peer_device->c_size;
+			drbd_determine_dev_size(device, peer_device->max_size, 0, NULL);
+		}
+
+		put_ldev(device);
+	}
+
+	if (test_bit(HOLDING_UUID_READ_LOCK, &peer_device->flags) ||
+			connection->agreed_pro_version < 110) {
+		struct drbd_transport *transport = &connection->transport;
+		/* Last packet of handshake received, disarm receive timeout */
+		transport->class->ops.set_rcvtimeo(transport, DATA_STREAM, MAX_SCHEDULE_TIMEOUT);
+	}
+
+	if (new_repl_state == L_ESTABLISHED && peer_disk_state == D_CONSISTENT &&
+	    drbd_suspended(device) && peer_device->repl_state[NOW] < L_ESTABLISHED &&
+	    test_and_clear_bit(NEW_CUR_UUID, &device->flags)) {
+		/* Do not allow RESEND for a rebooted peer. We can only allow this
+		 * for temporary network outages! */
+		drbd_err(peer_device, "Aborting Connect, can not thaw IO with an only Consistent peer\n");
+		drbd_uuid_new_current(device, false);
+		begin_state_change(resource, &irq_flags, CS_HARD);
+		__change_cstate(connection, C_PROTOCOL_ERROR);
+		__change_io_susp_user(resource, false);
+		end_state_change(resource, &irq_flags, "abort-connect");
+		return -EIO;
+	}
+
+	clear_bit(RS_SOURCE_MISSED_END, &peer_device->flags);
+	clear_bit(RS_PEER_MISSED_END, &peer_device->flags);
+
+	if (peer_state.quorum)
+		set_bit(PEER_QUORATE, &peer_device->flags);
+	else
+		clear_bit(PEER_QUORATE, &peer_device->flags);
+
+	if (do_handshake) {
+		/* Ignoring state packets before the 2PC; they are from aborted 2PCs */
+		bool done = test_bit(INITIAL_STATE_RECEIVED, &peer_device->flags);
+
+		set_bit(INITIAL_STATE_RECEIVED, &peer_device->flags);
+		if (connection->cstate[NOW] == C_CONNECTING) {
+			peer_device->connect_state.peer_isp =
+				peer_state.aftr_isp | peer_state.user_isp;
+
+			if (!done) {
+				peer_device->connect_state.conn = new_repl_state;
+				peer_device->connect_state.peer = peer_state.role;
+				peer_device->connect_state.pdsk = peer_disk_state;
+			}
+			wake_up(&connection->ee_wait);
+			finish_nested_twopc(connection);
+		}
+	}
+
+	/* State change will be performed when the two-phase commit is committed. */
+	if (connection->cstate[NOW] == C_CONNECTING)
+		return 0;
+
+	if (peer_state.conn == L_OFF) {
+		/* device/minor hot add on the peer of a minor already locally known */
+		if (peer_device->repl_state[NOW] == L_NEGOTIATING) {
+			drbd_send_enable_replication_next(peer_device);
+			drbd_send_sizes(peer_device, 0, 0);
+			drbd_send_uuids(peer_device, 0, 0);
+		}
+		drbd_send_current_state(peer_device);
+	}
+
+	begin_state_change(resource, &irq_flags, begin_state_chg_flags);
+	if (old_peer_state.i != drbd_get_peer_device_state(peer_device, NOW).i) {
+		old_peer_state = drbd_get_peer_device_state(peer_device, NOW);
+		abort_state_change_locked(resource);
+		write_unlock_irq(&resource->state_rwlock);
+		goto retry;
+	}
+	clear_bit(CONSIDER_RESYNC, &peer_device->flags);
+	if (device->disk_state[NOW] != D_NEGOTIATING)
+		__change_repl_state(peer_device, new_repl_state);
+	__change_peer_role(connection, peer_state.role);
+	if (peer_state.disk != D_NEGOTIATING)
+		__change_peer_disk_state(peer_device, peer_disk_state);
+	__change_resync_susp_peer(peer_device, peer_state.aftr_isp | peer_state.user_isp);
+	repl_state = peer_device->repl_state;
+	if (repl_state[OLD] < L_ESTABLISHED && repl_state[NEW] >= L_ESTABLISHED)
+		resource->state_change_flags |= CS_HARD;
+
+	rv = end_state_change(resource, &irq_flags, "peer-state");
+	new_repl_state = peer_device->repl_state[NOW];
+
+	if (rv < SS_SUCCESS)
+		goto fail;
+
+	if (old_peer_state.conn > L_OFF) {
+		if (new_repl_state > L_ESTABLISHED && peer_state.conn <= L_ESTABLISHED &&
+		    peer_state.disk != D_NEGOTIATING) {
+			/* we want resync, peer has not yet decided to sync... */
+			/* Nowadays only used when forcing a node into primary role and
+			   setting its disk to UpToDate with that */
+			drbd_send_uuids(peer_device, 0, 0);
+			drbd_send_current_state(peer_device);
+		}
+	}
+
+	clear_bit(DISCARD_MY_DATA, &peer_device->flags); /* Only relevant for agreed_pro_version < 117 */
+
+	drbd_md_sync(device); /* update connected indicator, effective_size, ... */
+
+	peer_device->last_repl_state = peer_state.conn;
+	return 0;
+fail:
+	change_cstate(connection, C_DISCONNECTING, CS_HARD);
+	return -EIO;
+}
+
+static int receive_sync_uuid(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_peer_device *peer_device;
+	struct drbd_device *device;
+	struct p_uuid *p = pi->data;
+
+	peer_device = conn_peer_device(connection, pi->vnr);
+	if (!peer_device)
+		return -EIO;
+	device = peer_device->device;
+
+	wait_event(device->misc_wait,
+		   peer_device->repl_state[NOW] == L_WF_SYNC_UUID ||
+		   peer_device->repl_state[NOW] == L_BEHIND ||
+		   peer_device->repl_state[NOW] < L_ESTABLISHED ||
+		   device->disk_state[NOW] < D_NEGOTIATING);
+
+	/* D_ASSERT(device,  peer_device->repl_state[NOW] == L_WF_SYNC_UUID ); */
+
+	/* Here the _drbd_uuid_ functions are right, current should
+	   _not_ be rotated into the history */
+	if (get_ldev_if_state(device, D_NEGOTIATING)) {
+		_drbd_uuid_set_current(device, be64_to_cpu(p->uuid));
+		_drbd_uuid_set_bitmap(peer_device, 0UL);
+
+		drbd_print_uuids(peer_device, "updated sync uuid");
+		drbd_start_resync(peer_device, L_SYNC_TARGET, "peer-sync-uuid");
+
+		put_ldev(device);
+	} else
+		drbd_err(device, "Ignoring SyncUUID packet!\n");
+
+	return 0;
+}
+
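+/*
+ * Collapse a bitmap received with 4k granularity to our coarser local
+ * granularity: every set source bit also sets the corresponding bit at
+ * index (bit >> scale).  This works in place because the destination
+ * index never lies ahead of the read position.
+ */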
+static void scale_bits(unsigned long *base, unsigned int num_4k, unsigned int scale)
+{
+	unsigned int bits = num_4k * BITS_PER_LONG;
+	unsigned int sbit;
+
+	if (scale == 0)
+		return;
+
+	for (sbit = 0; sbit < bits; sbit++)
+		if (test_bit(sbit, base))
+			__set_bit(sbit >> scale, base);
+}
+
+/*
+ * receive_bitmap_plain
+ *
+ * Return 0 when done, 1 when another iteration is needed, and a negative error
+ * code upon failure.
+ *
+ * Received bitmap is 4k per bit, need to aggregate by c->scale.
+ */
+static int
+receive_bitmap_plain(struct drbd_peer_device *peer_device, unsigned int size,
+		     struct bm_xfer_ctx *c)
+{
+	unsigned long *p;
+	unsigned int data_size = DRBD_SOCKET_BUFFER_SIZE -
+				 drbd_header_size(peer_device->connection);
+	unsigned int num_words_4k = min_t(size_t, data_size / sizeof(*p),
+				       (c->bm_words - c->word_offset) << c->scale);
+	unsigned int want = num_words_4k * sizeof(*p);
+	int err;
+
+	if (want != size) {
+		drbd_err(peer_device, "%s:want (%u) != size (%u)\n", __func__, want, size);
+		return -EIO;
+	}
+	if (want == 0)
+		return 0;
+	err = drbd_recv_all(peer_device->connection, (void **)&p, want);
+	if (err)
+		return err;
+
+	if ((num_words_4k & ((1 << c->scale)-1)) != 0) {
+		drbd_err(peer_device,
+			"number of words %u not aligned to scale %u while receiving bitmap\n",
+			num_words_4k, c->scale);
+		return -ERANGE;
+	}
+
+	if (get_ldev(peer_device->device)) {
+		scale_bits(p, num_words_4k, c->scale);
+		drbd_bm_merge_lel(peer_device, c->word_offset, num_words_4k >> c->scale, p);
+		put_ldev(peer_device->device);
+	} else {
+		drbd_err(peer_device, "lost backend device while receiving bitmap\n");
+		return -EIO;
+	}
+
+	c->word_offset += num_words_4k >> c->scale;
+	c->bit_offset = c->word_offset * BITS_PER_LONG;
+	c->bit_offset_4k = (c->word_offset << c->scale) * BITS_PER_LONG;
+	if (c->bit_offset > c->bm_bits)
+		c->bit_offset = c->bm_bits;
+
+	return 1;
+}
+
+static enum drbd_bitmap_code dcbp_get_code(struct p_compressed_bm *p)
+{
+	return (enum drbd_bitmap_code)(p->encoding & 0x0f);
+}
+
+static int dcbp_get_start(struct p_compressed_bm *p)
+{
+	return (p->encoding & 0x80) != 0;
+}
+
+static int dcbp_get_pad_bits(struct p_compressed_bm *p)
+{
+	return (p->encoding >> 4) & 0x7;
+}
+
+/*
+ * recv_bm_rle_bits
+ *
+ * Return 0 when done, 1 when another iteration is needed, and a negative error
+ * code upon failure.
+ */
+static int
+recv_bm_rle_bits(struct drbd_peer_device *peer_device,
+		struct p_compressed_bm *p,
+		 struct bm_xfer_ctx *c,
+		 unsigned int len)
+{
+	struct bitstream bs;
+	u64 look_ahead;
+	u64 rl_4k;
+	u64 tmp;
+	unsigned long s_4k = c->bit_offset_4k;
+	int toggle = dcbp_get_start(p);
+	int have;
+	int bits;
+
+	bitstream_init(&bs, p->code, len, dcbp_get_pad_bits(p));
+
+	bits = bitstream_get_bits(&bs, &look_ahead, 64);
+	if (bits < 0)
+		return -EIO;
+
+	for (have = bits; have > 0; s_4k += rl_4k, toggle = !toggle) {
+		bits = vli_decode_bits(&rl_4k, look_ahead);
+		if (bits <= 0)
+			return -EIO;
+
+		if (toggle) {
+			/* If the peer's bm_block_size is smaller than ours,
+			 * this may be a "partially" set bit ;-)
+			 * there is no such thing. Round down s, round up e.
+			 */
+			unsigned long s = s_4k >> c->scale;
+			unsigned long e = ((s_4k + rl_4k + (1UL << c->scale)-1) >> c->scale) - 1;
+
+			if (e >= c->bm_bits) {
+				drbd_err(peer_device, "bitmap overflow (e:%lu) while decoding bm RLE packet\n", e);
+				return -EIO;
+			}
+			drbd_bm_set_many_bits(peer_device, s, e);
+		}
+
+		if (have < bits) {
+			drbd_err(peer_device, "bitmap decoding error: h:%d b:%d la:0x%08llx l:%u/%u\n",
+				have, bits, look_ahead,
+				(unsigned int)(bs.cur.b - p->code),
+				(unsigned int)bs.buf_len);
+			return -EIO;
+		}
+		/* if we consumed all 64 bits, assign 0; >> 64 is "undefined"; */
+		if (likely(bits < 64))
+			look_ahead >>= bits;
+		else
+			look_ahead = 0;
+		have -= bits;
+
+		bits = bitstream_get_bits(&bs, &tmp, 64 - have);
+		if (bits < 0)
+			return -EIO;
+		look_ahead |= tmp << have;
+		have += bits;
+	}
+
+	c->bit_offset_4k = s_4k;
+	c->bit_offset = s_4k >> c->scale;
+	bm_xfer_ctx_bit_to_word_offset(c);
+
+	return (c->bit_offset_4k != c->bm_bits_4k);
+}
+
+/*
+ * decode_bitmap_c
+ *
+ * Return 0 when done, 1 when another iteration is needed, and a negative error
+ * code upon failure.
+ */
+static int
+decode_bitmap_c(struct drbd_peer_device *peer_device,
+		struct p_compressed_bm *p,
+		struct bm_xfer_ctx *c,
+		unsigned int len)
+{
+	if (dcbp_get_code(p) == RLE_VLI_Bits) {
+		struct drbd_device *device = peer_device->device;
+		int res;
+
+		if (!get_ldev(device)) {
+			drbd_err(peer_device, "lost backend device while receiving bitmap\n");
+			return -EIO;
+		}
+
+		res = recv_bm_rle_bits(peer_device, p, c, len - sizeof(*p));
+		put_ldev(device);
+		return res;
+	}
+
+	/* other variants had been implemented for evaluation,
+	 * but have been dropped as this one turned out to be "best"
+	 * during all our tests.
+	 */
+
+	drbd_err(peer_device, "receive_bitmap_c: unknown encoding %u\n", p->encoding);
+	change_cstate(peer_device->connection, C_PROTOCOL_ERROR, CS_HARD);
+	return -EIO;
+}
+
+void INFO_bm_xfer_stats(struct drbd_peer_device *peer_device,
+		const char *direction, struct bm_xfer_ctx *c)
+{
+	/* what would it take to transfer it "plaintext" */
+	unsigned int header_size = drbd_header_size(peer_device->connection);
+	unsigned int data_size = DRBD_SOCKET_BUFFER_SIZE - header_size;
+	unsigned int plain =
+		header_size * (DIV_ROUND_UP(c->bm_words, data_size) + 1) +
+		c->bm_words * sizeof(unsigned long);
+	unsigned int total = c->bytes[0] + c->bytes[1];
+	unsigned int r;
+
+	/* total cannot be zero, but just in case: */
+	if (total == 0)
+		return;
+
+	/* don't report if not compressed */
+	if (total >= plain)
+		return;
+
+	/* total < plain. check for overflow, still */
+	r = (total > UINT_MAX/1000) ? (total / (plain/1000))
+				    : (1000 * total / plain);
+
+	if (r > 1000)
+		r = 1000;
+
+	r = 1000 - r;
+	drbd_info(peer_device, "%s bitmap stats [Bytes(packets)]: plain %u(%u), RLE %u(%u), "
+	     "total %u; compression: %u.%u%%\n",
+			direction,
+			c->bytes[1], c->packets[1],
+			c->bytes[0], c->packets[0],
+			total, r/10, r % 10);
+}
+
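+/* Bitmap exchange has to wait until attach negotiation and any pending
+ * two-phase commit have settled. */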
+static bool ready_for_bitmap(struct drbd_device *device)
+{
+	struct drbd_resource *resource = device->resource;
+	bool ready = true;
+
+	read_lock_irq(&resource->state_rwlock);
+	if (device->disk_state[NOW] == D_NEGOTIATING)
+		ready = false;
+	if (test_bit(TWOPC_STATE_CHANGE_PENDING, &resource->flags))
+		ready = false;
+	read_unlock_irq(&resource->state_rwlock);
+
+	return ready;
+}
+
+/* Since we are processing the bitfield from lower addresses to higher,
+   it does not matter if we process it in 32 bit or 64 bit chunks as long
+   as it is little endian. (Understand it as a byte stream, beginning
+   with the lowest byte...) If we used big endian, we would need to
+   process it from the highest address to the lowest in order to be
+   agnostic to the 32 vs 64 bit issue.
+
+   Returns 0 on success, a negative error code otherwise. */
+static int receive_bitmap(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_peer_device *peer_device;
+	enum drbd_repl_state repl_state;
+	struct drbd_device *device;
+	struct bm_xfer_ctx c;
+	int err = -EIO;
+
+	peer_device = conn_peer_device(connection, pi->vnr);
+	if (!peer_device)
+		return -EIO;
+	if (peer_device->bitmap_index == -1) {
+		drbd_err(peer_device, "No bitmap allocated in receive_bitmap()!\n");
+		return -EIO;
+	}
+	device = peer_device->device;
+
+	/* Final repl_states become visible when the disk leaves NEGOTIATING state */
+	wait_event_interruptible(device->resource->state_wait,
+				 ready_for_bitmap(device));
+
+	if (!get_ldev(device)) {
+		drbd_err(device, "Cannot receive bitmap, local disk gone\n");
+		return -EIO;
+	}
+
+	drbd_bm_slot_lock(peer_device, "receive bitmap", BM_LOCK_CLEAR | BM_LOCK_BULK);
+	/* you are supposed to send additional out-of-sync information
+	 * if you actually set bits during this phase */
+
+	if (!get_ldev(device))
+		goto out;
+
+	c = (struct bm_xfer_ctx) {
+		.bm_bits_4k = drbd_bm_bits_4k(device),
+		.bm_bits = drbd_bm_bits(device),
+		.bm_words = drbd_bm_words(device),
+		.scale = device->bitmap->bm_block_shift - BM_BLOCK_SHIFT_4k,
+	};
+	put_ldev(device);
+
+	for (;;) {
+		if (pi->cmd == P_BITMAP)
+			err = receive_bitmap_plain(peer_device, pi->size, &c);
+		else if (pi->cmd == P_COMPRESSED_BITMAP) {
+			/* MAYBE: sanity check that we speak proto >= 90,
+			 * and the feature is enabled! */
+			struct p_compressed_bm *p;
+
+			if (pi->size > DRBD_SOCKET_BUFFER_SIZE - drbd_header_size(connection)) {
+				drbd_err(device, "ReportCBitmap packet too large\n");
+				err = -EIO;
+				goto out;
+			}
+			if (pi->size <= sizeof(*p)) {
+				drbd_err(device, "ReportCBitmap packet too small (l:%u)\n", pi->size);
+				err = -EIO;
+				goto out;
+			}
+			err = drbd_recv_all(connection, (void **)&p, pi->size);
+			if (err)
+				goto out;
+			err = decode_bitmap_c(peer_device, p, &c, pi->size);
+		} else {
+			drbd_warn(device, "receive_bitmap: cmd neither ReportBitMap nor ReportCBitMap (is 0x%x)", pi->cmd);
+			err = -EIO;
+			goto out;
+		}
+
+		c.packets[pi->cmd == P_BITMAP]++;
+		c.bytes[pi->cmd == P_BITMAP] += drbd_header_size(connection) + pi->size;
+
+		if (err <= 0) {
+			if (err < 0)
+				goto out;
+			break;
+		}
+		err = drbd_recv_header(connection, pi);
+		if (err)
+			goto out;
+	}
+
+	INFO_bm_xfer_stats(peer_device, "receive", &c);
+
+	repl_state = peer_device->repl_state[NOW];
+	if (repl_state == L_WF_BITMAP_T) {
+		err = drbd_send_bitmap(device, peer_device);
+		if (err)
+			goto out;
+	}
+
+	drbd_bm_slot_unlock(peer_device);
+	put_ldev(device);
+
+	if (test_bit(B_RS_H_DONE, &peer_device->flags)) {
+		/* We have entered drbd_start_resync() since starting the bitmap exchange. */
+		drbd_warn(peer_device, "Received bitmap more than once; ignoring\n");
+	} else if (repl_state == L_WF_BITMAP_S) {
+		drbd_start_resync(peer_device, L_SYNC_SOURCE, "receive-bitmap");
+	} else if (repl_state == L_WF_BITMAP_T) {
+		if (connection->agreed_pro_version < 110) {
+			enum drbd_state_rv rv;
+
+			/* Omit CS_WAIT_COMPLETE and CS_SERIALIZE with this state
+			 * transition to avoid deadlocks. */
+			rv = stable_change_repl_state(peer_device, L_WF_SYNC_UUID, CS_VERBOSE,
+					"receive-bitmap");
+			D_ASSERT(device, rv == SS_SUCCESS);
+		} else {
+			drbd_start_resync(peer_device, L_SYNC_TARGET, "receive-bitmap");
+		}
+	} else {
+		/* admin may have requested C_DISCONNECTING,
+		 * other threads may have noticed network errors */
+		drbd_info(peer_device, "unexpected repl_state (%s) in receive_bitmap\n",
+			  drbd_repl_str(repl_state));
+	}
+
+	return 0;
+ out:
+	drbd_bm_slot_unlock(peer_device);
+	put_ldev(device);
+	return err;
+}
+
+static int receive_skip(struct drbd_connection *connection, struct packet_info *pi)
+{
+	drbd_warn(connection, "skipping unknown optional packet type %d, l: %d!\n",
+		 pi->cmd, pi->size);
+
+	return ignore_remaining_packet(connection, pi->size);
+}
+
+static int receive_UnplugRemote(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_transport *transport = &connection->transport;
+
+	/* Make sure we've acked all the data associated
+	 * with the data requests being unplugged */
+	transport->class->ops.hint(transport, DATA_STREAM, QUICKACK);
+
+	/* just unplug all devices always, regardless which volume number */
+	drbd_unplug_all_devices(connection);
+
+	return 0;
+}
+
+static int receive_out_of_sync(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_peer_device *peer_device;
 	struct p_block_desc *p = pi->data;
+	sector_t sector;
+
+	peer_device = conn_peer_device(connection, pi->vnr);
+	if (!peer_device)
+		return -EIO;
+
+	sector = be64_to_cpu(p->sector);
+
+	/* see also process_one_request(), before drbd_send_out_of_sync().
+	 * Make sure any pending write requests that potentially may
+	 * set in-sync have drained, before setting it out-of-sync.
+	 * That should be implicit, because of the "epoch" and P_BARRIER logic,
+	 * But let's just double-check.
+	 */
+	conn_wait_active_ee_empty_or_disconnect(connection);
+	conn_wait_done_ee_empty_or_disconnect(connection);
+
+	drbd_set_out_of_sync(peer_device, sector, be32_to_cpu(p->blksize));
+
+	return 0;
+}
+
+static int receive_dagtag(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct p_dagtag *p = pi->data;
+
+	set_connection_dagtag(connection, be64_to_cpu(p->dagtag));
+	return 0;
+}
+
+struct drbd_connection *drbd_connection_by_node_id(struct drbd_resource *resource, int node_id)
+{
+	/* Caller needs to hold rcu_read_lock(), conf_update */
+	struct drbd_connection *connection;
+
+	for_each_connection_rcu(connection, resource) {
+		if (connection->peer_node_id == node_id)
+			return connection;
+	}
+
+	return NULL;
+}
+
+struct drbd_connection *drbd_get_connection_by_node_id(struct drbd_resource *resource, int node_id)
+{
+	struct drbd_connection *connection;
+
+	rcu_read_lock();
+	connection = drbd_connection_by_node_id(resource, node_id);
+	if (connection)
+		kref_get(&connection->kref);
+	rcu_read_unlock();
+
+	return connection;
+}
+
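+/*
+ * The peer tells us the last dagtag it received from a lost peer.
+ * Comparing it with the last dagtag we received from that peer ourselves
+ * decides which side has newer data and therefore becomes the source of
+ * the reconciliation resync.
+ */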
+static int receive_peer_dagtag(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_peer_device *peer_device;
+	enum drbd_repl_state new_repl_state;
+	struct p_peer_dagtag *p = pi->data;
+	struct drbd_connection *lost_peer;
+	enum sync_strategy strategy = NO_SYNC;
+	s64 dagtag_offset;
+	int vnr = 0;
+
+	lost_peer = drbd_get_connection_by_node_id(resource, be32_to_cpu(p->node_id));
+	if (!lost_peer)
+		return 0;
+
+	if (lost_peer->cstate[NOW] == C_CONNECTED) {
+		drbd_ping_peer(lost_peer);
+		if (lost_peer->cstate[NOW] == C_CONNECTED)
+			goto out;
+	}
+
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		enum sync_strategy ps;
+		enum sync_rule rule;
+		int unused;
+
+		if (peer_device->repl_state[NOW] > L_ESTABLISHED)
+			goto out;
+		if (peer_device->device->disk_state[NOW] != D_CONSISTENT &&
+		    peer_device->device->disk_state[NOW] != D_UP_TO_DATE)
+			goto out;
+		if (!get_ldev(peer_device->device))
+			continue;
+		ps = drbd_uuid_compare(peer_device, &rule, &unused);
+		put_ldev(peer_device->device);
+
+		if (strategy == NO_SYNC) {
+			strategy = ps;
+			if (strategy != NO_SYNC &&
+			    strategy != SYNC_SOURCE_USE_BITMAP &&
+			    strategy != SYNC_TARGET_USE_BITMAP) {
+				drbd_info(peer_device,
+					  "%s(): %s by rule=%s\n",
+					  __func__,
+					  strategy_descriptor(strategy).name,
+					  drbd_sync_rule_str(rule));
+				goto out;
+			}
+		} else if (ps != strategy) {
+			drbd_err(peer_device,
+				 "%s(): Inconsistent resync directions %s %s\n",
+				 __func__,
+				 strategy_descriptor(strategy).name, strategy_descriptor(ps).name);
+			goto out;
+		}
+	}
+
+	/* We must wait until the other receiver thread has called the
+	 * cleanup_unacked_peer_requests() and drbd_notify_peers_lost_primary() functions. If we
+	 * become a resync target, the peer would complain about being in the wrong state when it
+	 * gets the bitmap before the P_PEER_DAGTAG packet.
+	 */
+	wait_event(resource->state_wait,
+		   !test_bit(NOTIFY_PEERS_LOST_PRIMARY, &lost_peer->flags));
+
+	dagtag_offset = atomic64_read(&lost_peer->last_dagtag_sector) - (s64)be64_to_cpu(p->dagtag);
+	if (strategy == SYNC_SOURCE_USE_BITMAP)  {
+		new_repl_state = L_WF_BITMAP_S;
+	} else if (strategy == SYNC_TARGET_USE_BITMAP)  {
+		new_repl_state = L_WF_BITMAP_T;
+	} else {
+		if (dagtag_offset > 0)
+			new_repl_state = L_WF_BITMAP_S;
+		else if (dagtag_offset < 0)
+			new_repl_state = L_WF_BITMAP_T;
+		else
+			new_repl_state = L_ESTABLISHED;
+	}
+
+	if (new_repl_state != L_ESTABLISHED) {
+		unsigned long irq_flags;
+		enum drbd_state_rv rv;
+
+		if (new_repl_state == L_WF_BITMAP_T) {
+			connection->after_reconciliation.dagtag_sector = be64_to_cpu(p->dagtag);
+			connection->after_reconciliation.lost_node_id = be32_to_cpu(p->node_id);
+		}
+
+		begin_state_change(resource, &irq_flags, CS_VERBOSE);
+		idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+			__change_repl_state(peer_device, new_repl_state);
+			set_bit(RECONCILIATION_RESYNC, &peer_device->flags);
+		}
+		rv = end_state_change(resource, &irq_flags, "receive-peer-dagtag");
+		if (rv == SS_SUCCESS)
+			drbd_info(connection, "Reconciliation resync because \'%s\' disappeared. (o=%d)\n",
+				  lost_peer->transport.net_conf->name, (int)dagtag_offset);
+		else if (rv == SS_NOTHING_TO_DO)
+			drbd_info(connection, "\'%s\' disappeared (o=%d), no reconciliation since one diskless\n",
+				  lost_peer->transport.net_conf->name, (int)dagtag_offset);
+			/* sanitize_state() silently removes the resync and the RECONCILIATION_RESYNC bit */
+		else
+			drbd_info(connection, "rv = %d", rv);
+	} else {
+		drbd_info(connection, "No reconciliation resync even though \'%s\' disappeared. (o=%d)\n",
+			  lost_peer->transport.net_conf->name, (int)dagtag_offset);
+
+		idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+			if (get_ldev(peer_device->device)) {
+				drbd_bm_clear_many_bits(peer_device, 0, -1UL);
+				put_ldev(peer_device->device);
+			}
+		}
+	}
+
+out:
+	kref_put(&lost_peer->kref, drbd_destroy_connection);
+	return 0;
+}
+
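+/* Has the diskless peer generated a new current UUID while we stayed on
+ * the generation we shared with it? */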
+static bool drbd_diskless_moved_on(struct drbd_peer_device *peer_device, u64 current_uuid)
+{
+	struct drbd_device *device = peer_device->device;
+	u64 previous = peer_device->current_uuid;
+	bool from_the_past = false;
+
+	/* No exposed UUID => did not move on. */
+	if (!current_uuid)
+		return false;
+
+	/* Same as last time => did not move on. */
+	if ((previous & ~UUID_PRIMARY) == (current_uuid & ~UUID_PRIMARY))
+		return false;
+
+	/* Only consider the peer to have moved on if we were on the same UUID. */
+	if ((previous & ~UUID_PRIMARY) != (drbd_current_uuid(device) & ~UUID_PRIMARY))
+		return false;
+
+	if (get_ldev(device)) {
+		from_the_past =
+			drbd_find_bitmap_by_uuid(peer_device, current_uuid & ~UUID_PRIMARY) != -1;
+		if (!from_the_past)
+			from_the_past = uuid_in_my_history(device, current_uuid & ~UUID_PRIMARY);
+		put_ldev(device);
+	}
+
+	/* It is an old UUID => did not move on. */
+	if (from_the_past)
+		return false;
+
+	return true;
+}
+
+/* Accept a new current UUID generated on a diskless node, that just became primary
+   (or during handshake) */
+static int receive_current_uuid(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_peer_device *peer_device;
+	struct drbd_device *device;
+	struct p_current_uuid *p = pi->data;
+	u64 current_uuid, weak_nodes;
+	bool moved_on;
+
+	peer_device = conn_peer_device(connection, pi->vnr);
+	if (!peer_device)
+		return config_unknown_volume(connection, pi);
+	device = peer_device->device;
+
+	current_uuid = be64_to_cpu(p->uuid);
+	weak_nodes = be64_to_cpu(p->weak_nodes);
+	weak_nodes |= NODE_MASK(peer_device->node_id);
+	moved_on = drbd_diskless_moved_on(peer_device, current_uuid);
+
+	peer_device->current_uuid = current_uuid;
+
+	if (get_ldev(device)) {
+		struct drbd_peer_md *peer_md = &device->ldev->md.peers[peer_device->node_id];
+		peer_md->flags |= MDF_NODE_EXISTS;
+		put_ldev(device);
+	}
+	if (connection->peer_role[NOW] == R_PRIMARY)
+		check_resync_source(device, weak_nodes);
+
+	if (connection->peer_role[NOW] == R_UNKNOWN) {
+		set_bit(CURRENT_UUID_RECEIVED, &peer_device->flags);
+		if (moved_on && device->disk_state[NOW] > D_OUTDATED)
+			peer_device->connect_state.disk = D_OUTDATED;
+		return 0;
+	}
+
+	if (current_uuid == drbd_current_uuid(device))
+		return 0;
+
+	if (peer_device->repl_state[NOW] >= L_ESTABLISHED &&
+	    get_ldev_if_state(device, D_UP_TO_DATE)) {
+		if (connection->peer_role[NOW] == R_PRIMARY) {
+			drbd_warn(peer_device, "received new current UUID: %016llX "
+				  "weak_nodes=%016llX\n", current_uuid, weak_nodes);
+			drbd_uuid_received_new_current(peer_device, current_uuid, weak_nodes);
+			drbd_md_sync_if_dirty(device);
+		} else if (moved_on) {
+			if (resource->remote_state_change)
+				set_bit(OUTDATE_ON_2PC_COMMIT, &device->flags);
+			else
+				change_disk_state(device, D_OUTDATED, CS_VERBOSE,
+						"receive-current-uuid", NULL);
+		}
+		put_ldev(device);
+	} else if (device->disk_state[NOW] == D_DISKLESS && resource->role[NOW] == R_PRIMARY) {
+		drbd_uuid_set_exposed(device, peer_device->current_uuid, true);
+	}
+
+	return 0;
+}
+
+static bool interval_is_adjacent(const struct drbd_interval *i1, const struct drbd_interval *i2)
+{
+	return i1->sector + (i1->size >> SECTOR_SHIFT) == i2->sector;
+}
+
+/* Advance caching pointers received_last and discard_last. Return next discard to be submitted. */
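+/*
+ * Illustrative example (request layout assumed for exposition): if
+ * resync_requests holds, in receive order,
+ *
+ *   [data][trim][trim][data]
+ *
+ * then received_last advances over the leading data request once it has
+ * been received, discard_last walks across the two adjacent trims, and the
+ * merged discard is returned once the range is closed: by a following
+ * non-trim request that has been received, a non-adjacent request, an
+ * alignment boundary, or the EE_LAST_RESYNC_REQUEST flag.
+ */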
+static struct drbd_peer_request *drbd_advance_to_next_rs_discard(
+		struct drbd_peer_device *peer_device, unsigned int align, bool submit_all)
+{
+	struct drbd_device *device = peer_device->device;
+	struct drbd_peer_request *peer_req;
+	struct drbd_peer_request *discard_last = peer_device->discard_last;
+	bool discard_range_end = false;
+
+	/* Advance received_last. */
+	peer_req = list_prepare_entry(peer_device->received_last,
+			&peer_device->resync_requests, recv_order);
+	list_for_each_entry_continue(peer_req, &peer_device->resync_requests, recv_order) {
+		if (!test_bit(INTERVAL_RECEIVED, &peer_req->i.flags) && !submit_all)
+			break;
+
+		if (peer_req->flags & EE_TRIM)
+			break;
+
+		peer_device->received_last = peer_req;
+	}
+
+	/* Advance discard_last. */
+	peer_req = discard_last ? discard_last :
+		list_prepare_entry(peer_device->received_last,
+				&peer_device->resync_requests, recv_order);
+	list_for_each_entry_continue(peer_req, &peer_device->resync_requests, recv_order) {
+		/* Consider submitting previous discards. */
+		if (discard_last && !interval_is_adjacent(&discard_last->i, &peer_req->i)) {
+			discard_range_end = true;
+			break;
+		}
+
+		if (!(peer_req->flags & EE_TRIM)) {
+			discard_range_end =
+				test_bit(INTERVAL_RECEIVED, &peer_req->i.flags) || submit_all;
+			break;
+		}
+
+		discard_last = peer_req;
+
+		if (IS_ALIGNED(peer_req->i.sector + (peer_req->i.size >> SECTOR_SHIFT), align)) {
+			discard_range_end = true;
+			break;
+		}
+	}
+
+	/*
+	 * If we haven't found a discard range, or that range is not
+	 * finished, then there is nothing to submit.
+	 */
+	if (!discard_last || !(discard_range_end || (discard_last->flags & EE_LAST_RESYNC_REQUEST))) {
+		peer_device->discard_last = discard_last;
+		return NULL;
+	}
+
+	/* Find start of discard range. */
+	peer_req = list_next_entry(list_prepare_entry(peer_device->received_last,
+				&peer_device->resync_requests, recv_order), recv_order);
+
+	spin_lock(&device->interval_lock); /* irqs already disabled */
+	if (peer_req != discard_last) {
+		struct drbd_peer_request *peer_req_merged = peer_req;
+
+		list_for_each_entry_continue(peer_req_merged,
+				&peer_device->resync_requests, recv_order) {
+			drbd_remove_interval(&device->requests, &peer_req_merged->i);
+			drbd_clear_interval(&peer_req_merged->i);
+			peer_req_merged->w.cb = e_end_resync_block;
+			if (peer_req_merged == discard_last)
+				break;
+		}
+	}
+	drbd_update_interval_size(&peer_req->i,
+			discard_last->i.size +
+			((discard_last->i.sector - peer_req->i.sector) << SECTOR_SHIFT));
+	spin_unlock(&device->interval_lock);
+
+	peer_device->received_last = discard_last;
+	peer_device->discard_last = NULL;
+
+	return peer_req;
+}
+
+static void drbd_submit_rs_discard(struct drbd_peer_request *peer_req)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_connection *connection = peer_device->connection;
+	struct drbd_device *device = peer_device->device;
+
+	if (get_ldev(device)) {
+		list_del(&peer_req->w.list);
+
+		peer_req->w.cb = e_end_resync_block;
+		peer_req->bios.head->bi_opf = REQ_OP_DISCARD;
+
+		atomic_inc(&connection->backing_ee_cnt);
+		drbd_conflict_submit_resync_request(peer_req);
+
+		/* No put_ldev() here. Gets called in drbd_endio_write_sec_final(). */
+	} else {
+		LIST_HEAD(free_list);
+		struct drbd_peer_request *t;
+
+		if (drbd_ratelimit())
+			drbd_err(device, "Cannot discard on local disk.\n");
+
+		drbd_send_ack(peer_device, P_RS_NEG_ACK, peer_req);
+
+		drbd_remove_peer_req_interval(peer_req);
+		list_move_tail(&peer_req->w.list, &free_list);
+
+		spin_lock_irq(&connection->peer_reqs_lock);
+		drbd_unmerge_discard(peer_req, &free_list);
+		spin_unlock_irq(&connection->peer_reqs_lock);
+
+		list_for_each_entry_safe(peer_req, t, &free_list, w.list)
+			drbd_free_peer_req(peer_req);
+	}
+}
+
+/* Find and submit discards in resync_requests which are ready. */
+void drbd_process_rs_discards(struct drbd_peer_device *peer_device, bool submit_all)
+{
+	struct drbd_connection *connection = peer_device->connection;
+	struct drbd_device *device = peer_device->device;
+	struct drbd_peer_request *peer_req;
+	struct drbd_peer_request *pr_tmp;
+	unsigned int align = DRBD_MAX_RS_DISCARD_SIZE;
+	LIST_HEAD(work_list);
+
+	if (get_ldev(device)) {
+		/*
+		 * Limit the size of the merged requests. We want to allow the size to
+		 * increase up to the backing discard granularity.  If that is smaller
+		 * than DRBD_MAX_RS_DISCARD_SIZE, then allow merging up to a size of
+		 * DRBD_MAX_RS_DISCARD_SIZE.
+		 */
+		align = max(DRBD_MAX_RS_DISCARD_SIZE, bdev_discard_granularity(
+					device->ldev->backing_bdev)) >> SECTOR_SHIFT;
+		put_ldev(device);
+	}
+
+	spin_lock_irq(&connection->peer_reqs_lock);
+	while (true) {
+		peer_req = drbd_advance_to_next_rs_discard(peer_device, align, submit_all);
+		if (!peer_req)
+			break;
+
+		list_add_tail(&peer_req->w.list, &work_list);
+	}
+	spin_unlock_irq(&connection->peer_reqs_lock);
+
+	list_for_each_entry_safe(peer_req, pr_tmp, &work_list, w.list)
+		drbd_submit_rs_discard(peer_req); /* removes it from the work_list */
+}
+
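+/*
+ * The sync source found this block deallocated on its backing device, so
+ * instead of sending resync data it asks us to discard the block locally.
+ * The matching resync request is flagged EE_TRIM and handed to the discard
+ * merging machinery above.
+ */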
+static int receive_rs_deallocated(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_peer_device *peer_device;
 	struct drbd_device *device;
+	struct drbd_peer_request *peer_req;
 	sector_t sector;
 	int size, err = 0;
+	u64 block_id;
+	u64 im;
 
 	peer_device = conn_peer_device(connection, pi->vnr);
 	if (!peer_device)
 		return -EIO;
 	device = peer_device->device;
 
-	sector = be64_to_cpu(p->sector);
-	size = be32_to_cpu(p->blksize);
+	if (pi->cmd == P_RS_DEALLOCATED) {
+		struct p_block_desc *p = pi->data;
+
+		sector = be64_to_cpu(p->sector);
+		size = be32_to_cpu(p->blksize);
+		block_id = ID_SYNCER;
+	} else { /* P_RS_DEALLOCATED_ID */
+		struct p_block_ack *p = pi->data;
+
+		sector = be64_to_cpu(p->sector);
+		size = be32_to_cpu(p->blksize);
+		block_id = p->block_id;
+	}
+
+	peer_req = find_resync_request(peer_device, INTERVAL_TYPE_MASK(INTERVAL_RESYNC_WRITE),
+			sector, size, block_id);
+	if (!peer_req)
+		return -EIO;
+
+	dec_rs_pending(peer_device);
+	inc_unacked(peer_device);
+	atomic_add(size >> 9, &device->rs_sect_ev);
+	peer_req->flags |= EE_TRIM;
+
+	/* Setting all peers out of sync here. The sync source peer will be
+	 * set in sync when the discard completes. The sync source will soon
+	 * set other peers in sync with a P_PEERS_IN_SYNC packet.
+	 */
+	drbd_set_all_out_of_sync(device, sector, size);
+	drbd_process_rs_discards(peer_device, false);
+	rs_sectors_came_in(peer_device, size);
+
+	for_each_peer_device_ref(peer_device, im, device) {
+		enum drbd_repl_state repl_state = peer_device->repl_state[NOW];
+
+		if (repl_is_sync_source(repl_state) || repl_state == L_WF_BITMAP_S)
+			drbd_send_out_of_sync(peer_device, sector, size);
+	}
+
+	return err;
+}
+
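+/*
+ * Flag the last entry on resync_requests so that the discard merging knows
+ * it may close and submit a trailing discard range even though no further
+ * request will arrive to terminate it.
+ */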
+void drbd_last_resync_request(struct drbd_peer_device *peer_device, bool submit_all)
+{
+	struct drbd_connection *connection = peer_device->connection;
+
+	spin_lock_irq(&connection->peer_reqs_lock);
+	if (!list_empty(&peer_device->resync_requests)) {
+		struct drbd_peer_request *peer_req = list_last_entry(&peer_device->resync_requests,
+				struct drbd_peer_request, recv_order);
+		peer_req->flags |= EE_LAST_RESYNC_REQUEST;
+	}
+	spin_unlock_irq(&connection->peer_reqs_lock);
+
+	drbd_process_rs_discards(peer_device, submit_all);
+}
+
+static int receive_disconnect(struct drbd_connection *connection, struct packet_info *pi)
+{
+	change_cstate_tag(connection, C_DISCONNECTING, CS_HARD, "receive-disconnect", NULL);
+	return 0;
+}
+
+struct data_cmd {
+	int expect_payload;
+	unsigned int pkt_size;
+	int (*fn)(struct drbd_connection *, struct packet_info *);
+};
+
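+/*
+ * pkt_size is the fixed sub-header length that is read before dispatching
+ * to fn; expect_payload says whether additional payload beyond that
+ * sub-header is acceptable. drbdd() below rejects packets violating either
+ * constraint.
+ */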
+static struct data_cmd drbd_cmd_handler[] = {
+	[P_DATA]	    = { 1, sizeof(struct p_data), receive_Data },
+	[P_DATA_REPLY]	    = { 1, sizeof(struct p_data), receive_DataReply },
+	[P_RS_DATA_REPLY]   = { 1, sizeof(struct p_data), receive_RSDataReply },
+	[P_BARRIER]	    = { 0, sizeof(struct p_barrier), receive_Barrier },
+	[P_BITMAP]	    = { 1, 0, receive_bitmap },
+	[P_COMPRESSED_BITMAP] = { 1, 0, receive_bitmap },
+	[P_UNPLUG_REMOTE]   = { 0, 0, receive_UnplugRemote },
+	[P_DATA_REQUEST]    = { 0, sizeof(struct p_block_req), receive_data_request },
+	[P_RS_DATA_REQUEST] = { 0, sizeof(struct p_block_req), receive_data_request },
+	[P_SYNC_PARAM]	    = { 1, 0, receive_SyncParam },
+	[P_SYNC_PARAM89]    = { 1, 0, receive_SyncParam },
+	[P_PROTOCOL]        = { 1, sizeof(struct p_protocol), receive_protocol },
+	[P_UUIDS]	    = { 0, sizeof(struct p_uuids), receive_uuids },
+	[P_SIZES]	    = { 0, sizeof(struct p_sizes), receive_sizes },
+	[P_STATE]	    = { 0, sizeof(struct p_state), receive_state },
+	[P_STATE_CHG_REQ]   = { 0, sizeof(struct p_req_state), receive_req_state },
+	[P_SYNC_UUID]       = { 0, sizeof(struct p_uuid), receive_sync_uuid },
+	[P_OV_REQUEST]      = { 0, sizeof(struct p_block_req), receive_data_request },
+	[P_OV_REPLY]        = { 1, sizeof(struct p_block_req), receive_ov_reply },
+	[P_CSUM_RS_REQUEST] = { 1, sizeof(struct p_block_req), receive_data_request },
+	[P_RS_THIN_REQ]     = { 0, sizeof(struct p_block_req), receive_data_request },
+	[P_DELAY_PROBE]     = { 0, sizeof(struct p_delay_probe93), receive_skip },
+	[P_OUT_OF_SYNC]     = { 0, sizeof(struct p_block_desc), receive_out_of_sync },
+	[P_CONN_ST_CHG_REQ] = { 0, sizeof(struct p_req_state), receive_req_state },
+	[P_PROTOCOL_UPDATE] = { 1, sizeof(struct p_protocol), receive_protocol },
+	[P_TWOPC_PREPARE]   = { 0, sizeof(struct p_twopc_request), receive_twopc },
+	[P_TWOPC_PREP_RSZ]  = { 0, sizeof(struct p_twopc_request), receive_twopc },
+	[P_TWOPC_ABORT]     = { 0, sizeof(struct p_twopc_request), receive_twopc },
+	[P_DAGTAG]	    = { 0, sizeof(struct p_dagtag), receive_dagtag },
+	[P_UUIDS110]	    = { 1, sizeof(struct p_uuids110), receive_uuids110 },
+	[P_PEER_DAGTAG]     = { 0, sizeof(struct p_peer_dagtag), receive_peer_dagtag },
+	[P_CURRENT_UUID]    = { 0, sizeof(struct p_current_uuid), receive_current_uuid },
+	[P_TWOPC_COMMIT]    = { 0, sizeof(struct p_twopc_request), receive_twopc },
+	[P_TRIM]	    = { 0, sizeof(struct p_trim), receive_Data },
+	[P_ZEROES]	    = { 0, sizeof(struct p_trim), receive_Data },
+	[P_RS_DEALLOCATED]  = { 0, sizeof(struct p_block_desc), receive_rs_deallocated },
+	[P_RS_DEALLOCATED_ID] = { 0, sizeof(struct p_block_ack), receive_rs_deallocated },
+	[P_DISCONNECT]      = { 0, 0, receive_disconnect },
+	[P_RS_DAGTAG_REQ]   = { 0, sizeof(struct p_rs_req), receive_dagtag_data_request },
+	[P_RS_CSUM_DAGTAG_REQ] = { 1, sizeof(struct p_rs_req), receive_dagtag_data_request },
+	[P_RS_THIN_DAGTAG_REQ] = { 0, sizeof(struct p_rs_req), receive_dagtag_data_request },
+	[P_OV_DAGTAG_REQ]      = { 0, sizeof(struct p_rs_req), receive_dagtag_data_request },
+	[P_OV_DAGTAG_REPLY]    = { 1, sizeof(struct p_rs_req), receive_dagtag_ov_reply },
+	[P_FLUSH_REQUESTS]  = { 0, sizeof(struct p_flush_requests), receive_flush_requests },
+	[P_FLUSH_REQUESTS_ACK] = { 0, sizeof(struct p_flush_ack), receive_flush_requests_ack },
+	[P_ENABLE_REPLICATION_NEXT] = { 0, sizeof(struct p_enable_replication),
+		receive_enable_replication_next },
+	[P_ENABLE_REPLICATION] = { 0, sizeof(struct p_enable_replication),
+		receive_enable_replication },
+};
+
+static void drbdd(struct drbd_connection *connection)
+{
+	struct packet_info pi;
+	size_t shs; /* sub header size */
+	int err;
+
+	while (get_t_state(&connection->receiver) == RUNNING) {
+		struct data_cmd const *cmd;
+
+		drbd_thread_current_set_cpu(&connection->receiver);
+		update_receiver_timing_details(connection, drbd_recv_header_maybe_unplug);
+		if (drbd_recv_header_maybe_unplug(connection, &pi))
+			goto err_out;
+
+		cmd = &drbd_cmd_handler[pi.cmd];
+		if (unlikely(pi.cmd >= ARRAY_SIZE(drbd_cmd_handler) || !cmd->fn)) {
+			drbd_err(connection, "Unexpected data packet %s (0x%04x)",
+				 drbd_packet_name(pi.cmd), pi.cmd);
+			goto err_out;
+		}
+
+		shs = cmd->pkt_size;
+		if (pi.cmd == P_SIZES && connection->agreed_features & DRBD_FF_WSAME)
+			shs += sizeof(struct o_qlim);
+		if (pi.size > shs && !cmd->expect_payload) {
+			drbd_err(connection, "No payload expected %s l:%d\n",
+				 drbd_packet_name(pi.cmd), pi.size);
+			goto err_out;
+		}
+		if (pi.size < shs) {
+			drbd_err(connection, "%s: unexpected packet size, expected:%d received:%d\n",
+				 drbd_packet_name(pi.cmd), (int)shs, pi.size);
+			goto err_out;
+		}
+
+		if (shs) {
+			update_receiver_timing_details(connection, drbd_recv_all_warn);
+			err = drbd_recv_all_warn(connection, &pi.data, shs);
+			if (err)
+				goto err_out;
+			pi.size -= shs;
+		}
+
+		update_receiver_timing_details(connection, cmd->fn);
+		err = cmd->fn(connection, &pi);
+		if (err) {
+			drbd_err(connection, "error receiving %s, e: %d l: %d!\n",
+				 drbd_packet_name(pi.cmd), err, pi.size);
+			goto err_out;
+		}
+	}
+	return;
+
+    err_out:
+	change_cstate(connection, C_PROTOCOL_ERROR, CS_HARD);
+}
+
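+/*
+ * Cancel resync requests that are parked waiting for a conflicting interval
+ * to resolve. Requests already submitted, already canceled, or sent to the
+ * peer and still awaiting data are left alone; everything else is flagged
+ * INTERVAL_CANCELED and queued to the conflict worker for completion.
+ */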
+static void drbd_cancel_conflicting_resync_requests(struct drbd_peer_device *peer_device)
+{
+	struct drbd_device *device = peer_device->device;
+	struct conflict_worker *submit_conflict = &device->submit_conflict;
+	struct rb_node *node;
+	bool any_queued = false;
+
+	spin_lock_irq(&device->interval_lock);
+	for (node = rb_first(&device->requests); node; node = rb_next(node)) {
+		struct drbd_interval *i = rb_entry(node, struct drbd_interval, rb);
+		struct drbd_peer_request *peer_req;
+
+		if (!drbd_interval_is_resync(i))
+			continue;
+
+		peer_req = container_of(i, struct drbd_peer_request, i);
+
+		if (peer_req->peer_device != peer_device)
+			continue;
+
+		/* Only cancel requests which are waiting for conflicts to resolve. */
+		if (test_bit(INTERVAL_SUBMITTED, &i->flags) ||
+				(test_bit(INTERVAL_READY_TO_SEND, &i->flags) &&
+				 !test_bit(INTERVAL_RECEIVED, &i->flags)) ||
+				test_bit(INTERVAL_CANCELED, &i->flags))
+			continue;
+
+		set_bit(INTERVAL_CANCELED, &i->flags);
+
+		dynamic_drbd_dbg(device,
+				"Cancel %s %s request at %llus+%u (sent=%d)\n",
+				test_bit(INTERVAL_SUBMIT_CONFLICT_QUEUED, &i->flags) ?
+					"already queued" : "unqueued",
+				drbd_interval_type_str(i),
+				(unsigned long long) i->sector, i->size,
+				test_bit(INTERVAL_READY_TO_SEND, &i->flags));
+
+		if (test_bit(INTERVAL_SUBMIT_CONFLICT_QUEUED, &i->flags))
+			continue;
+
+		set_bit(INTERVAL_SUBMIT_CONFLICT_QUEUED, &i->flags);
+
+		spin_lock(&submit_conflict->lock);
+		switch (i->type) {
+		case INTERVAL_RESYNC_WRITE:
+			list_add_tail(&peer_req->w.list, &submit_conflict->resync_writes);
+			break;
+		case INTERVAL_RESYNC_READ:
+			list_add_tail(&peer_req->w.list, &submit_conflict->resync_reads);
+			break;
+		default:
+			drbd_err(peer_device, "Unexpected interval type in %s\n", __func__);
+		}
+		spin_unlock(&submit_conflict->lock);
+
+		any_queued = true;
+	}
+	spin_unlock_irq(&device->interval_lock);
+
+	if (any_queued)
+		queue_work(submit_conflict->wq, &submit_conflict->worker);
+}
+
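+/*
+ * Peer requests on dagtag_wait_ee that depend on a dagtag from the given
+ * node can no longer make progress once that node is lost; cancel and free
+ * them.
+ */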
+static void cancel_dagtag_dependent_requests(struct drbd_resource *resource, unsigned int node_id)
+{
+	struct drbd_connection *connection;
+	LIST_HEAD(work_list);
+	struct drbd_peer_request *peer_req, *t;
+
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		spin_lock_irq(&connection->peer_reqs_lock);
+		list_for_each_entry(peer_req, &connection->dagtag_wait_ee, w.list) {
+			if (peer_req->depend_dagtag_node_id != node_id)
+				continue;
+
+			dynamic_drbd_dbg(peer_req->peer_device, "%s at %llus+%u: Wait for dagtag %llus from peer %u cancelled\n",
+					drbd_interval_type_str(&peer_req->i),
+					(unsigned long long) peer_req->i.sector, peer_req->i.size,
+					(unsigned long long) peer_req->depend_dagtag,
+					node_id);
+
+			list_move_tail(&peer_req->w.list, &work_list);
+			break;
+		}
+		spin_unlock_irq(&connection->peer_reqs_lock);
+	}
+	rcu_read_unlock();
+
+	list_for_each_entry_safe(peer_req, t, &work_list, w.list) {
+		drbd_peer_resync_read_cancel(peer_req);
+		drbd_free_peer_req(peer_req);
+	}
+}
+
+static void cleanup_resync_leftovers(struct drbd_peer_device *peer_device)
+{
+	peer_device->rs_total = 0;
+	peer_device->rs_failed = 0;
+	D_ASSERT(peer_device, atomic_read(&peer_device->rs_pending_cnt) == 0);
+
+	timer_delete_sync(&peer_device->resync_timer);
+	resync_timer_fn(&peer_device->resync_timer);
+	timer_delete_sync(&peer_device->start_resync_timer);
+}
+
+static void free_waiting_resync_requests(struct drbd_connection *connection)
+{
+	LIST_HEAD(free_list);
+	struct drbd_peer_device *peer_device;
+	struct drbd_peer_request *peer_req, *t;
+	int vnr;
+
+	spin_lock_irq(&connection->peer_reqs_lock);
+	rcu_read_lock();
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		list_for_each_entry_safe(peer_req, t, &peer_device->resync_requests, recv_order) {
+			drbd_list_del_resync_request(peer_req);
+			list_add_tail(&peer_req->w.list, &free_list);
+		}
+	}
+	rcu_read_unlock();
+
+	list_for_each_entry_safe(peer_req, t, &connection->peer_reads, recv_order) {
+		if (peer_req->i.type == INTERVAL_PEER_READ)
+			continue;
+
+		peer_req->flags &= ~EE_ON_RECV_ORDER;
+		list_del(&peer_req->recv_order);
+
+		list_add_tail(&peer_req->w.list, &free_list);
+	}
+	spin_unlock_irq(&connection->peer_reqs_lock);
+
+	list_for_each_entry_safe(peer_req, t, &free_list, w.list) {
+		/*
+		 * Resync write requests waiting for peers-in-sync to be sent
+		 * just need to be freed.
+		 */
+		if (test_bit(INTERVAL_COMPLETED, &peer_req->i.flags)) {
+			drbd_free_peer_req(peer_req);
+			continue;
+		}
+
+		D_ASSERT(connection, test_bit(INTERVAL_READY_TO_SEND, &peer_req->i.flags));
+		D_ASSERT(connection, !test_bit(INTERVAL_RECEIVED, &peer_req->i.flags));
+		D_ASSERT(connection, !(peer_req->flags & EE_TRIM));
+
+		if (peer_req->i.type == INTERVAL_RESYNC_READ)
+			atomic_sub(peer_req->i.size >> 9, &connection->rs_in_flight);
+
+		dec_rs_pending(peer_req->peer_device);
+		drbd_remove_peer_req_interval(peer_req);
+		drbd_free_peer_req(peer_req);
+	}
+}
+
+static void free_dagtag_wait_requests(struct drbd_connection *connection)
+{
+	LIST_HEAD(dagtag_wait_work_list);
+	struct drbd_peer_request *peer_req, *t;
+
+	spin_lock_irq(&connection->peer_reqs_lock);
+	list_splice_init(&connection->dagtag_wait_ee, &dagtag_wait_work_list);
+	spin_unlock_irq(&connection->peer_reqs_lock);
+
+	list_for_each_entry_safe(peer_req, t, &dagtag_wait_work_list, w.list) {
+		struct drbd_peer_device *peer_device = peer_req->peer_device;
+
+		/* Online verify requests are placed in the interval tree when
+		 * the request is made, so they need to be removed if the
+		 * reply was waiting for a dagtag to be reached. */
+		if (peer_req->i.type == INTERVAL_OV_READ_SOURCE)
+			drbd_remove_peer_req_interval(peer_req);
+
+		drbd_free_peer_req(peer_req);
+		dec_unacked(peer_device);
+		put_ldev(peer_device->device);
+	}
+}
+
+static void drain_resync_activity(struct drbd_connection *connection)
+{
+	struct drbd_peer_device *peer_device;
+	int vnr;
+
+	/*
+	 * In order to understand this function, refer to the flow diagrams in
+	 * the comments for make_resync_request(), make_ov_request() and
+	 * receive_dagtag_data_request().
+	 */
+
+	/*
+	 * We could receive data from a peer at any point. This might release a
+	 * request that is waiting for a dagtag. That would cause it to
+	 * progress to waiting for conflicts or the backing disk. So we need to
+	 * remove these requests before flushing the other stages.
+	 */
+	free_dagtag_wait_requests(connection);
+
+	/* Wait for w_resync_timer/w_e_send_csum to finish, if running. */
+	drbd_flush_workqueue(&connection->sender_work);
+
+	rcu_read_lock();
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		struct drbd_device *device = peer_device->device;
+
+		kref_get(&device->kref);
+		rcu_read_unlock();
+
+		/* Cause remaining discards to be submitted. */
+		drbd_last_resync_request(peer_device, true);
+		/* Cause requests waiting due to conflicts to be canceled. */
+		drbd_cancel_conflicting_resync_requests(peer_device);
+
+		kref_put(&device->kref, drbd_destroy_device);
+		rcu_read_lock();
+	}
+	rcu_read_unlock();
+
+	/* Drain conflicting and backing requests. */
+	wait_event(connection->ee_wait, atomic_read(&connection->backing_ee_cnt) == 0);
+
+	/* Wait for work queued when backing requests finished. */
+	drbd_flush_workqueue(&connection->sender_work);
+
+	/* Clear up and remove requests that have progressed to done_ee. */
+	drbd_finish_peer_reqs(connection);
+
+	/*
+	 * Requests that are waiting for a resync reply must be removed from
+	 * the interval tree and then freed.
+	 */
+	free_waiting_resync_requests(connection);
+
+	/* Requests that are waiting for a dagtag on this connection must be
+	 * cancelled, because the dependency will never be fulfilled. */
+	cancel_dagtag_dependent_requests(connection->resource, connection->peer_node_id);
+
+	rcu_read_lock();
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		struct drbd_device *device = peer_device->device;
+
+		kref_get(&device->kref);
+		rcu_read_unlock();
+
+		cleanup_resync_leftovers(peer_device);
+
+		kref_put(&device->kref, drbd_destroy_device);
+		rcu_read_lock();
+	}
+	rcu_read_unlock();
+}
 
-	dec_rs_pending(peer_device);
+static void peer_device_disconnected(struct drbd_peer_device *peer_device)
+{
+	struct drbd_device *device = peer_device->device;
 
-	if (get_ldev(device)) {
-		struct drbd_peer_request *peer_req;
+	if (test_and_clear_bit(HOLDING_UUID_READ_LOCK, &peer_device->flags))
+		up_read_non_owner(&device->uuid_sem);
 
-		peer_req = drbd_alloc_peer_req(peer_device, ID_SYNCER, sector,
-					       size, 0, GFP_NOIO);
-		if (!peer_req) {
-			put_ldev(device);
-			return -ENOMEM;
-		}
+	peer_device_init_connect_state(peer_device);
 
-		peer_req->w.cb = e_end_resync_block;
-		peer_req->opf = REQ_OP_DISCARD;
-		peer_req->submit_jif = jiffies;
-		peer_req->flags |= EE_TRIM;
+	/* No need to start additional resyncs after reconnection. */
+	peer_device->resync_again = 0;
 
-		spin_lock_irq(&device->resource->req_lock);
-		list_add_tail(&peer_req->w.list, &device->sync_ee);
-		spin_unlock_irq(&device->resource->req_lock);
+	if (!drbd_suspended(device)) {
+		struct drbd_resource *resource = device->resource;
 
-		atomic_add(pi->size >> 9, &device->rs_sect_ev);
-		err = drbd_submit_peer_request(peer_req);
+		/* We need to create the new UUID immediately when we finish
+		   requests that did not reach the lost peer.
+		   But when we lost quorum we are going to finish those
+		   requests with error, therefore do not create the new UUID
+		   immediately! */
+		if (!list_empty(&resource->transfer_log) &&
+		    drbd_data_accessible(device, NOW) &&
+		    !test_bit(PRIMARY_LOST_QUORUM, &device->flags) &&
+		    test_and_clear_bit(NEW_CUR_UUID, &device->flags))
+			drbd_check_peers_new_current_uuid(device);
+	}
 
-		if (err) {
-			spin_lock_irq(&device->resource->req_lock);
-			list_del(&peer_req->w.list);
-			spin_unlock_irq(&device->resource->req_lock);
+	drbd_md_sync(device);
 
-			drbd_free_peer_req(device, peer_req);
-			put_ldev(device);
-			err = 0;
-			goto fail;
-		}
+	if (get_ldev(device)) {
+		drbd_bitmap_io(device, &drbd_bm_write_copy_pages, "write from disconnected",
+				BM_LOCK_BULK | BM_LOCK_SINGLE_SLOT, peer_device);
+		put_ldev(device);
+	}
+}
 
-		inc_unacked(device);
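+/*
+ * Roughly: a remotely initiated two-phase commit can only be resolved if
+ * some parent node other than the lost peer remains to deliver the verdict,
+ * and a transaction prepared towards this peer must have produced a reply
+ * (yes/no/retry) before a commit or abort is possible.
+ */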
+static bool initiator_can_commit_or_abort(struct drbd_connection *connection)
+{
+	struct drbd_resource *resource = connection->resource;
+	bool remote = resource->twopc_reply.initiator_node_id != resource->res_opts.node_id;
 
-		/* No put_ldev() here. Gets called in drbd_endio_write_sec_final(),
-		   as well as drbd_rs_complete_io() */
-	} else {
-	fail:
-		drbd_rs_complete_io(device, sector);
-		drbd_send_ack_ex(peer_device, P_NEG_ACK, sector, size, ID_SYNCER);
+	if (remote) {
+		u64 parents = resource->twopc_parent_nodes & ~NODE_MASK(connection->peer_node_id);
+
+		if (!parents)
+			return false;
+		resource->twopc_parent_nodes = parents;
 	}
 
-	atomic_add(size >> 9, &device->rs_sect_in);
+	if (test_bit(TWOPC_PREPARED, &connection->flags) &&
+	    !(test_bit(TWOPC_YES, &connection->flags) ||
+	      test_bit(TWOPC_NO, &connection->flags) ||
+	      test_bit(TWOPC_RETRY, &connection->flags)))
+		return false;
 
-	return err;
+	return true;
 }
 
-struct data_cmd {
-	int expect_payload;
-	unsigned int pkt_size;
-	int (*fn)(struct drbd_connection *, struct packet_info *);
-};
+static void cleanup_remote_state_change(struct drbd_connection *connection)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct twopc_reply *reply = &resource->twopc_reply;
+	struct twopc_request request;
+	bool remote = false;
 
-static struct data_cmd drbd_cmd_handler[] = {
-	[P_DATA]	    = { 1, sizeof(struct p_data), receive_Data },
-	[P_DATA_REPLY]	    = { 1, sizeof(struct p_data), receive_DataReply },
-	[P_RS_DATA_REPLY]   = { 1, sizeof(struct p_data), receive_RSDataReply } ,
-	[P_BARRIER]	    = { 0, sizeof(struct p_barrier), receive_Barrier } ,
-	[P_BITMAP]	    = { 1, 0, receive_bitmap } ,
-	[P_COMPRESSED_BITMAP] = { 1, 0, receive_bitmap } ,
-	[P_UNPLUG_REMOTE]   = { 0, 0, receive_UnplugRemote },
-	[P_DATA_REQUEST]    = { 0, sizeof(struct p_block_req), receive_DataRequest },
-	[P_RS_DATA_REQUEST] = { 0, sizeof(struct p_block_req), receive_DataRequest },
-	[P_SYNC_PARAM]	    = { 1, 0, receive_SyncParam },
-	[P_SYNC_PARAM89]    = { 1, 0, receive_SyncParam },
-	[P_PROTOCOL]        = { 1, sizeof(struct p_protocol), receive_protocol },
-	[P_UUIDS]	    = { 0, sizeof(struct p_uuids), receive_uuids },
-	[P_SIZES]	    = { 0, sizeof(struct p_sizes), receive_sizes },
-	[P_STATE]	    = { 0, sizeof(struct p_state), receive_state },
-	[P_STATE_CHG_REQ]   = { 0, sizeof(struct p_req_state), receive_req_state },
-	[P_SYNC_UUID]       = { 0, sizeof(struct p_rs_uuid), receive_sync_uuid },
-	[P_OV_REQUEST]      = { 0, sizeof(struct p_block_req), receive_DataRequest },
-	[P_OV_REPLY]        = { 1, sizeof(struct p_block_req), receive_DataRequest },
-	[P_CSUM_RS_REQUEST] = { 1, sizeof(struct p_block_req), receive_DataRequest },
-	[P_RS_THIN_REQ]     = { 0, sizeof(struct p_block_req), receive_DataRequest },
-	[P_DELAY_PROBE]     = { 0, sizeof(struct p_delay_probe93), receive_skip },
-	[P_OUT_OF_SYNC]     = { 0, sizeof(struct p_block_desc), receive_out_of_sync },
-	[P_CONN_ST_CHG_REQ] = { 0, sizeof(struct p_req_state), receive_req_conn_state },
-	[P_PROTOCOL_UPDATE] = { 1, sizeof(struct p_protocol), receive_protocol },
-	[P_TRIM]	    = { 0, sizeof(struct p_trim), receive_Data },
-	[P_ZEROES]	    = { 0, sizeof(struct p_trim), receive_Data },
-	[P_RS_DEALLOCATED]  = { 0, sizeof(struct p_block_desc), receive_rs_deallocated },
-};
+	write_lock_irq(&resource->state_rwlock);
+	if (resource->remote_state_change && !initiator_can_commit_or_abort(connection)) {
+		remote = reply->initiator_node_id != resource->res_opts.node_id;
 
-static void drbdd(struct drbd_connection *connection)
+		if (remote) {
+			request = (struct twopc_request) {
+				.nodes_to_reach = ~0,
+				.cmd = P_TWOPC_ABORT,
+				.tid = reply->tid,
+				.initiator_node_id = reply->initiator_node_id,
+				.target_node_id = reply->target_node_id,
+				.vnr = reply->vnr,
+			};
+		}
+
+		drbd_info(connection, "Aborting %s state change %u commit not possible\n",
+			  remote ? "remote" : "local", reply->tid);
+		if (remote) {
+			timer_delete(&resource->twopc_timer);
+			__clear_remote_state_change(resource);
+		} else {
+			enum alt_rv alt_rv = abort_local_transaction(connection, 0);
+
+			if (alt_rv != ALT_LOCKED)
+				return;
+		}
+	}
+	write_unlock_irq(&resource->state_rwlock);
+
+	/* for a local transaction, change_cluster_wide_state() sends the P_TWOPC_ABORTs */
+	if (remote)
+		nested_twopc_abort(resource, &request);
+}
+
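+/*
+ * Tell the remaining connected peers that this peer is gone: for every
+ * volume where both sides have a disk, resend our current UUID with the
+ * updated weak-nodes mask, then send our view of the lost peer's dagtag so
+ * the others can decide whether a reconciliation resync is needed.
+ */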
+static void drbd_notify_peers_lost_primary(struct drbd_connection *lost_peer)
 {
-	struct packet_info pi;
-	size_t shs; /* sub header size */
-	int err;
+	struct drbd_resource *resource = lost_peer->resource;
+	struct drbd_connection *connection;
+	u64 im;
 
-	while (get_t_state(&connection->receiver) == RUNNING) {
-		struct data_cmd const *cmd;
+	for_each_connection_ref(connection, im, resource) {
+		struct drbd_peer_device *peer_device;
+		bool send_dagtag = false;
+		int vnr;
 
-		drbd_thread_current_set_cpu(&connection->receiver);
-		update_receiver_timing_details(connection, drbd_recv_header_maybe_unplug);
-		if (drbd_recv_header_maybe_unplug(connection, &pi))
-			goto err_out;
+		if (connection == lost_peer)
+			continue;
+		if (connection->cstate[NOW] != C_CONNECTED)
+			continue;
 
-		cmd = &drbd_cmd_handler[pi.cmd];
-		if (unlikely(pi.cmd >= ARRAY_SIZE(drbd_cmd_handler) || !cmd->fn)) {
-			drbd_err(connection, "Unexpected data packet %s (0x%04x)",
-				 cmdname(pi.cmd), pi.cmd);
-			goto err_out;
-		}
+		idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+			struct drbd_device *device = peer_device->device;
+			u64 current_uuid = drbd_current_uuid(device);
+			u64 weak_nodes = drbd_weak_nodes_device(device);
 
-		shs = cmd->pkt_size;
-		if (pi.cmd == P_SIZES && connection->agreed_features & DRBD_FF_WSAME)
-			shs += sizeof(struct o_qlim);
-		if (pi.size > shs && !cmd->expect_payload) {
-			drbd_err(connection, "No payload expected %s l:%d\n",
-				 cmdname(pi.cmd), pi.size);
-			goto err_out;
-		}
-		if (pi.size < shs) {
-			drbd_err(connection, "%s: unexpected packet size, expected:%d received:%d\n",
-				 cmdname(pi.cmd), (int)shs, pi.size);
-			goto err_out;
-		}
+			if (device->disk_state[NOW] < D_INCONSISTENT ||
+			    peer_device->disk_state[NOW] < D_INCONSISTENT)
+				continue; /* Ignore if one side is diskless */
 
-		if (shs) {
-			update_receiver_timing_details(connection, drbd_recv_all_warn);
-			err = drbd_recv_all_warn(connection, pi.data, shs);
-			if (err)
-				goto err_out;
-			pi.size -= shs;
+			drbd_send_current_uuid(peer_device, current_uuid, weak_nodes);
+			send_dagtag = true;
 		}
 
-		update_receiver_timing_details(connection, cmd->fn);
-		err = cmd->fn(connection, &pi);
-		if (err) {
-			drbd_err(connection, "error receiving %s, e: %d l: %d!\n",
-				 cmdname(pi.cmd), err, pi.size);
-			goto err_out;
-		}
+		if (send_dagtag)
+			drbd_send_peer_dagtag(connection, lost_peer);
 	}
-	return;
-
-    err_out:
-	conn_request_state(connection, NS(conn, C_PROTOCOL_ERROR), CS_HARD);
 }
 
 static void conn_disconnect(struct drbd_connection *connection)
 {
+	struct drbd_resource *resource = connection->resource;
 	struct drbd_peer_device *peer_device;
-	enum drbd_conns oc;
-	int vnr;
+	enum drbd_conn_state oc;
+	unsigned long irq_flags;
+	int vnr, i;
 
-	if (connection->cstate == C_STANDALONE)
+	clear_bit(CONN_DRY_RUN, &connection->flags);
+	clear_bit(CONN_CONGESTED, &connection->flags);
+
+	if (connection->cstate[NOW] == C_STANDALONE)
 		return;
 
 	/* We are about to start the cleanup after connection loss.
-	 * Make sure drbd_make_request knows about that.
+	 * Make sure drbd_submit_bio knows about that.
 	 * Usually we should be in some network failure state already,
 	 * but just in case we are not, we fix it up here.
 	 */
-	conn_request_state(connection, NS(conn, C_NETWORK_FAILURE), CS_HARD);
+	change_cstate_tag(connection, C_NETWORK_FAILURE, CS_HARD, "disconnected", NULL);
+
+	del_connect_timer(connection);
 
 	/* ack_receiver does not clean up anything. it must not interfere, either */
-	drbd_thread_stop(&connection->ack_receiver);
 	if (connection->ack_sender) {
 		destroy_workqueue(connection->ack_sender);
 		connection->ack_sender = NULL;
 	}
-	drbd_free_sock(connection);
+
+	/* restart sender thread,
+	 * potentially get it out of blocking network operations */
+	drbd_thread_stop(&connection->sender);
+	drbd_thread_start(&connection->sender);
+
+	mutex_lock(&resource->conf_update);
+	drbd_transport_shutdown(connection, CLOSE_CONNECTION);
+	mutex_unlock(&resource->conf_update);
+
+	cleanup_remote_state_change(connection);
+
+	drain_resync_activity(connection);
+
+	connection->after_reconciliation.lost_node_id = -1;
+
+	/* Wait for current activity to cease. This includes waiting for
+	 * peer_requests queued to the submitter workqueue. */
+	wait_event(connection->ee_wait,
+		atomic_read(&connection->active_ee_cnt) == 0);
+
+	/* wait for all w_e_end_data_req, w_e_end_rsdata_req, w_send_barrier,
+	 * etc. which may still be on the worker queue to be "canceled" */
+	drbd_flush_workqueue(&connection->sender_work);
+
+	drbd_finish_peer_reqs(connection);
+
+	/* This second workqueue flush is necessary, since drbd_finish_peer_reqs()
+	   might have issued a work again. The one before drbd_finish_peer_reqs() is
+	   necessary to reclaim net_ee in drbd_finish_peer_reqs(). */
+	drbd_flush_workqueue(&connection->sender_work);
 
 	rcu_read_lock();
 	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
 		struct drbd_device *device = peer_device->device;
+
 		kref_get(&device->kref);
 		rcu_read_unlock();
-		drbd_disconnected(peer_device);
+
+		peer_device_disconnected(peer_device);
+		if (get_ldev(device)) {
+			drbd_reconsider_queue_parameters(device, device->ldev);
+			put_ldev(device);
+		} else {
+			drbd_reconsider_queue_parameters(device, NULL);
+		}
+
 		kref_put(&device->kref, drbd_destroy_device);
 		rcu_read_lock();
 	}
 	rcu_read_unlock();
 
+	/* Apply these changes after peer_device_disconnected() because that
+	 * may cause the loss of other connections to be detected, which can
+	 * change the suspended state. */
+	tl_walk(connection, &connection->req_not_net_done,
+			resource->cached_susp ? CONNECTION_LOST_WHILE_SUSPENDED : CONNECTION_LOST);
+
+	i = drbd_free_peer_reqs(connection, &connection->done_ee);
+	if (i)
+		drbd_info(connection, "done_ee not empty, killed %u entries\n", i);
+	i = drbd_free_peer_reqs(connection, &connection->dagtag_wait_ee);
+	if (i)
+		drbd_info(connection, "dagtag_wait_ee not empty, killed %u entries\n", i);
+
+	cleanup_unacked_peer_requests(connection);
+	cleanup_peer_ack_list(connection);
+
+	i = atomic_read(&connection->pp_in_use);
+	if (i)
+		drbd_info(connection, "pp_in_use = %d, expected 0\n", i);
+	i = atomic_read(&connection->pp_in_use_by_net);
+	if (i)
+		drbd_info(connection, "pp_in_use_by_net = %d, expected 0\n", i);
+
 	if (!list_empty(&connection->current_epoch->list))
 		drbd_err(connection, "ASSERTION FAILED: connection->current_epoch->list not empty\n");
 	/* ok, no more ee's on the fly, it is safe to reset the epoch_size */
 	atomic_set(&connection->current_epoch->epoch_size, 0);
 	connection->send.seen_any_write_yet = false;
+	connection->send.current_dagtag_sector =
+		resource->dagtag_sector - ((BIO_MAX_VECS << PAGE_SHIFT) >> SECTOR_SHIFT) - 1;
+	connection->current_epoch->oldest_unconfirmed_peer_req = NULL;
+
+	/* Indicate that last_dagtag_sector may no longer be up-to-date. We
+	 * need to keep last_dagtag_sector because we may still need it to
+	 * resolve a reconciliation resync. However, we need to avoid issuing a
+	 * resync request dependent on that dagtag because the resync source
+	 * may not be aware of the dagtag, even though it has newer data. This
+	 * can occur if the peer has been restarted since the request that
+	 * carried the dagtag.
+	 */
+	clear_bit(RECEIVED_DAGTAG, &connection->flags);
+
+	/* Release any threads waiting for a barrier to be acked. */
+	clear_bit(BARRIER_ACK_PENDING, &connection->flags);
+	wake_up(&resource->barrier_wait);
 
 	drbd_info(connection, "Connection closed\n");
 
-	if (conn_highest_role(connection) == R_PRIMARY && conn_highest_pdsk(connection) >= D_UNKNOWN)
+	if (resource->role[NOW] == R_PRIMARY &&
+	    connection->fencing_policy != FP_DONT_CARE &&
+	    conn_highest_pdsk(connection) >= D_UNKNOWN)
 		conn_try_outdate_peer_async(connection);
 
-	spin_lock_irq(&connection->resource->req_lock);
-	oc = connection->cstate;
-	if (oc >= C_UNCONNECTED)
-		_conn_request_state(connection, NS(conn, C_UNCONNECTED), CS_VERBOSE);
-
-	spin_unlock_irq(&connection->resource->req_lock);
-
-	if (oc == C_DISCONNECTING)
-		conn_request_state(connection, NS(conn, C_STANDALONE), CS_VERBOSE | CS_HARD);
-}
-
-static int drbd_disconnected(struct drbd_peer_device *peer_device)
-{
-	struct drbd_device *device = peer_device->device;
-	unsigned int i;
-
-	/* wait for current activity to cease. */
-	spin_lock_irq(&device->resource->req_lock);
-	_drbd_wait_ee_list_empty(device, &device->active_ee);
-	_drbd_wait_ee_list_empty(device, &device->sync_ee);
-	_drbd_wait_ee_list_empty(device, &device->read_ee);
-	spin_unlock_irq(&device->resource->req_lock);
-
-	/* We do not have data structures that would allow us to
-	 * get the rs_pending_cnt down to 0 again.
-	 *  * On C_SYNC_TARGET we do not have any data structures describing
-	 *    the pending RSDataRequest's we have sent.
-	 *  * On C_SYNC_SOURCE there is no data structure that tracks
-	 *    the P_RS_DATA_REPLY blocks that we sent to the SyncTarget.
-	 *  And no, it is not the sum of the reference counts in the
-	 *  resync_LRU. The resync_LRU tracks the whole operation including
-	 *  the disk-IO, while the rs_pending_cnt only tracks the blocks
-	 *  on the fly. */
-	drbd_rs_cancel_all(device);
-	device->rs_total = 0;
-	device->rs_failed = 0;
-	atomic_set(&device->rs_pending_cnt, 0);
-	wake_up(&device->misc_wait);
-
-	timer_delete_sync(&device->resync_timer);
-	resync_timer_fn(&device->resync_timer);
-
-	/* wait for all w_e_end_data_req, w_e_end_rsdata_req, w_send_barrier,
-	 * w_make_resync_request etc. which may still be on the worker queue
-	 * to be "canceled" */
-	drbd_flush_workqueue(&peer_device->connection->sender_work);
-
-	drbd_finish_peer_reqs(device);
-
-	/* This second workqueue flush is necessary, since drbd_finish_peer_reqs()
-	   might have issued a work again. The one before drbd_finish_peer_reqs() is
-	   necessary to reclain net_ee in drbd_finish_peer_reqs(). */
-	drbd_flush_workqueue(&peer_device->connection->sender_work);
-
-	/* need to do it again, drbd_finish_peer_reqs() may have populated it
-	 * again via drbd_try_clear_on_disk_bm(). */
-	drbd_rs_cancel_all(device);
-
-	kfree(device->p_uuid);
-	device->p_uuid = NULL;
-
-	if (!drbd_suspended(device))
-		tl_clear(peer_device->connection);
-
-	drbd_md_sync(device);
+	drbd_maybe_khelper(NULL, connection, "disconnected");
 
-	if (get_ldev(device)) {
-		drbd_bitmap_io(device, &drbd_bm_write_copy_pages,
-				"write from disconnected", BM_LOCKED_CHANGE_ALLOWED, NULL);
-		put_ldev(device);
+	begin_state_change(resource, &irq_flags, CS_VERBOSE | CS_LOCAL_ONLY);
+	oc = connection->cstate[NOW];
+	if (oc >= C_UNCONNECTED) {
+		__change_cstate(connection, C_UNCONNECTED);
+		/* drbd_receiver() has to be restarted after it returns */
+		drbd_thread_restart_nowait(&connection->receiver);
 	}
+	end_state_change(resource, &irq_flags, "disconnected");
 
-	i = atomic_read(&device->pp_in_use_by_net);
-	if (i)
-		drbd_info(device, "pp_in_use_by_net = %d, expected 0\n", i);
-	i = atomic_read(&device->pp_in_use);
-	if (i)
-		drbd_info(device, "pp_in_use = %d, expected 0\n", i);
-
-	D_ASSERT(device, list_empty(&device->read_ee));
-	D_ASSERT(device, list_empty(&device->active_ee));
-	D_ASSERT(device, list_empty(&device->sync_ee));
-	D_ASSERT(device, list_empty(&device->done_ee));
+	if (test_bit(NOTIFY_PEERS_LOST_PRIMARY, &connection->flags)) {
+		drbd_notify_peers_lost_primary(connection);
+		clear_bit(NOTIFY_PEERS_LOST_PRIMARY, &connection->flags);
+	}
 
-	return 0;
+	if (oc == C_DISCONNECTING)
+		change_cstate_tag(connection, C_STANDALONE, CS_VERBOSE | CS_HARD | CS_LOCAL_ONLY,
+				"disconnected", NULL);
 }
 
 /*
- * We support PRO_VERSION_MIN to PRO_VERSION_MAX. The protocol version
- * we can agree on is stored in agreed_pro_version.
+ * We support PRO_VERSION_MIN to PRO_VERSION_MAX.
+ * But see also drbd_protocol_version_acceptable() and module parameter
+ * drbd_protocol_version_min.
+ * The protocol version we can agree on is stored in agreed_pro_version.
  *
  * feature flags and the reserved array should be enough room for future
  * enhancements of the handshake protocol, and possible plugins...
+ * See also PRO_FEATURES.
  *
- * for now, they are expected to be zero, but ignored.
  */
 static int drbd_send_features(struct drbd_connection *connection)
 {
-	struct drbd_socket *sock;
 	struct p_connection_features *p;
 
-	sock = &connection->data;
-	p = conn_prepare_command(connection, sock);
+	p = __conn_prepare_command(connection, sizeof(*p), DATA_STREAM);
 	if (!p)
 		return -EIO;
 	memset(p, 0, sizeof(*p));
-	p->protocol_min = cpu_to_be32(PRO_VERSION_MIN);
+	p->protocol_min = cpu_to_be32(drbd_protocol_version_min);
 	p->protocol_max = cpu_to_be32(PRO_VERSION_MAX);
+	p->sender_node_id = cpu_to_be32(connection->resource->res_opts.node_id);
+	p->receiver_node_id = cpu_to_be32(connection->peer_node_id);
 	p->feature_flags = cpu_to_be32(PRO_FEATURES);
-	return conn_send_command(connection, sock, P_CONNECTION_FEATURES, sizeof(*p), NULL, 0);
+	return __send_command(connection, -1, P_CONNECTION_FEATURES, DATA_STREAM);
 }
 
 /*
@@ -5083,9 +10031,10 @@ static int drbd_send_features(struct drbd_connection *connection)
  *  -1 peer talks different language,
  *     no point in trying again, please go standalone.
  */
-static int drbd_do_features(struct drbd_connection *connection)
+int drbd_do_features(struct drbd_connection *connection)
 {
 	/* ASSERT current == connection->receiver ... */
+	struct drbd_resource *resource = connection->resource;
 	struct p_connection_features *p;
 	const int expect = sizeof(struct p_connection_features);
 	struct packet_info pi;
@@ -5096,12 +10045,15 @@ static int drbd_do_features(struct drbd_connection *connection)
 		return 0;
 
 	err = drbd_recv_header(connection, &pi);
-	if (err)
+	if (err) {
+		if (err == -EAGAIN)
+			drbd_err(connection, "timeout while waiting for feature packet\n");
 		return 0;
+	}
 
 	if (pi.cmd != P_CONNECTION_FEATURES) {
 		drbd_err(connection, "expected ConnectionFeatures packet, received: %s (0x%04x)\n",
-			 cmdname(pi.cmd), pi.cmd);
+			 drbd_packet_name(pi.cmd), pi.cmd);
 		return -1;
 	}
 
@@ -5111,8 +10063,7 @@ static int drbd_do_features(struct drbd_connection *connection)
 		return -1;
 	}
 
-	p = pi.data;
-	err = drbd_recv_all_warn(connection, p, expect);
+	err = drbd_recv_all_warn(connection, (void **)&p, expect);
 	if (err)
 		return 0;
 
@@ -5122,42 +10073,102 @@ static int drbd_do_features(struct drbd_connection *connection)
 		p->protocol_max = p->protocol_min;
 
 	if (PRO_VERSION_MAX < p->protocol_min ||
-	    PRO_VERSION_MIN > p->protocol_max)
-		goto incompat;
+	    drbd_protocol_version_min > p->protocol_max) {
+		drbd_err(connection, "incompatible DRBD dialects: "
+		    "I support %d-%d, peer supports %d-%d\n",
+		    drbd_protocol_version_min, PRO_VERSION_MAX,
+		    p->protocol_min, p->protocol_max);
+		return -1;
+	}
+	/* Older DRBD will always expect us to agree to their max,
+	 * if it falls within our [min, max] range.
+	 * But we have a gap in there that we do not support.
+	 */
+	if (p->protocol_max > PRO_VERSION_8_MAX &&
+	    p->protocol_max < PRO_VERSION_MIN) {
+		drbd_err(connection, "incompatible DRBD 9 dialects: I support %u-%u, peer supports %u-%u\n",
+		    PRO_VERSION_MIN, PRO_VERSION_MAX,
+		    p->protocol_min, p->protocol_max);
+		return -1;
+	}
 
 	connection->agreed_pro_version = min_t(int, PRO_VERSION_MAX, p->protocol_max);
 	connection->agreed_features = PRO_FEATURES & be32_to_cpu(p->feature_flags);
 
-	drbd_info(connection, "Handshake successful: "
-	     "Agreed network protocol version %d\n", connection->agreed_pro_version);
+	if (connection->agreed_pro_version == 121 &&
+			(connection->agreed_features & DRBD_FF_RESYNC_DAGTAG)) {
+		/*
+		 * Releases drbd-9.2.0, drbd-9.2.1 and drbd-9.2.2 used an
+		 * implementation of discard merging which caused one
+		 * P_RS_WRITE_ACK to be sent for the whole merged interval.
+		 * These are precisely the releases with PRO_VERSION_MAX == 121
+		 * and feature DRBD_FF_RESYNC_DAGTAG.
+		 *
+		 * We do not support this case, so reject the connection.
+		 */
+		drbd_err(connection, "incompatible DRBD 9 dialects: protocol 121 with feature RESYNC_DAGTAG; upgrade via DRBD 9.2.16\n");
+		return -1;
+	}
+
+	if (connection->agreed_pro_version < 110) {
+		struct drbd_connection *connection2;
+		bool multiple = false;
+
+		rcu_read_lock();
+		for_each_connection_rcu(connection2, resource) {
+			if (connection == connection2)
+				continue;
+			multiple = true;
+		}
+		rcu_read_unlock();
+
+		if (multiple) {
+			drbd_err(connection, "Peer supports protocols %d-%d, but "
+				 "multiple connections are only supported in protocol "
+				 "110 and above\n", p->protocol_min, p->protocol_max);
+			return -1;
+		}
+	}
 
-	drbd_info(connection, "Feature flags enabled on protocol level: 0x%x%s%s%s%s.\n",
+	if (connection->agreed_pro_version >= 110) {
+		if (be32_to_cpu(p->sender_node_id) != connection->peer_node_id) {
+			drbd_err(connection, "Peer presented a node_id of %d instead of %d\n",
+				 be32_to_cpu(p->sender_node_id), connection->peer_node_id);
+			return 0;
+		}
+		if (be32_to_cpu(p->receiver_node_id) != resource->res_opts.node_id) {
+			drbd_err(connection, "Peer expects me to have a node_id of %d instead of %d\n",
+				 be32_to_cpu(p->receiver_node_id), resource->res_opts.node_id);
+			return 0;
+		}
+	}
+
+	drbd_info(connection, "Handshake to peer %d successful: "
+			"Agreed network protocol version %d\n",
+			connection->peer_node_id,
+			connection->agreed_pro_version);
+
+	drbd_info(connection, "Feature flags enabled on protocol level: 0x%x%s%s%s%s%s\n",
 		  connection->agreed_features,
 		  connection->agreed_features & DRBD_FF_TRIM ? " TRIM" : "",
 		  connection->agreed_features & DRBD_FF_THIN_RESYNC ? " THIN_RESYNC" : "",
 		  connection->agreed_features & DRBD_FF_WSAME ? " WRITE_SAME" : "",
-		  connection->agreed_features & DRBD_FF_WZEROES ? " WRITE_ZEROES" :
+		  connection->agreed_features & DRBD_FF_WZEROES ? " WRITE_ZEROES" : "",
+		  connection->agreed_features & DRBD_FF_RESYNC_DAGTAG ? " RESYNC_DAGTAG" :
 		  connection->agreed_features ? "" : " none");
 
 	return 1;
-
- incompat:
-	drbd_err(connection, "incompatible DRBD dialects: "
-	    "I support %d-%d, peer supports %d-%d\n",
-	    PRO_VERSION_MIN, PRO_VERSION_MAX,
-	    p->protocol_min, p->protocol_max);
-	return -1;
 }
 
 #if !defined(CONFIG_CRYPTO_HMAC) && !defined(CONFIG_CRYPTO_HMAC_MODULE)
-static int drbd_do_auth(struct drbd_connection *connection)
+int drbd_do_auth(struct drbd_connection *connection)
 {
 	drbd_err(connection, "This kernel was build without CONFIG_CRYPTO_HMAC.\n");
 	drbd_err(connection, "You need to disable 'cram-hmac-alg' in drbd.conf.\n");
 	return -1;
 }
 #else
-#define CHALLENGE_LEN 64
+#define CHALLENGE_LEN 64 /* must be multiple of 4 */
 
 /* Return value:
 	1 - auth succeeded,
@@ -5165,25 +10176,28 @@ static int drbd_do_auth(struct drbd_connection *connection)
 	-1 - auth failed, don't try again.
 */
 
-static int drbd_do_auth(struct drbd_connection *connection)
+struct auth_challenge {
+	char d[CHALLENGE_LEN];
+	u32 i;
+} __packed;
+
+int drbd_do_auth(struct drbd_connection *connection)
 {
-	struct drbd_socket *sock;
-	char my_challenge[CHALLENGE_LEN];  /* 64 Bytes... */
-	char *response = NULL;
+	struct auth_challenge my_challenge, *peers_ch = NULL;
+	void *response;
 	char *right_response = NULL;
-	char *peers_ch = NULL;
 	unsigned int key_len;
 	char secret[SHARED_SECRET_MAX]; /* 64 byte */
 	unsigned int resp_size;
 	struct shash_desc *desc;
 	struct packet_info pi;
 	struct net_conf *nc;
-	int err, rv;
-
-	/* FIXME: Put the challenge/response into the preallocated socket buffer.  */
+	int err, rv, dig_size;
+	bool peer_is_drbd_9 = connection->agreed_pro_version >= 110;
+	void *packet_body;
 
 	rcu_read_lock();
-	nc = rcu_dereference(connection->net_conf);
+	nc = rcu_dereference(connection->transport.net_conf);
 	key_len = strlen(nc->shared_secret);
 	memcpy(secret, nc->shared_secret, key_len);
 	rcu_read_unlock();
@@ -5204,15 +10218,16 @@ static int drbd_do_auth(struct drbd_connection *connection)
 		goto fail;
 	}
 
-	get_random_bytes(my_challenge, CHALLENGE_LEN);
+	get_random_bytes(my_challenge.d, sizeof(my_challenge.d));
 
-	sock = &connection->data;
-	if (!conn_prepare_command(connection, sock)) {
+	packet_body = __conn_prepare_command(connection, sizeof(my_challenge.d), DATA_STREAM);
+	if (!packet_body) {
 		rv = 0;
 		goto fail;
 	}
-	rv = !conn_send_command(connection, sock, P_AUTH_CHALLENGE, 0,
-				my_challenge, CHALLENGE_LEN);
+	memcpy(packet_body, my_challenge.d, sizeof(my_challenge.d));
+
+	rv = !__send_command(connection, -1, P_AUTH_CHALLENGE, DATA_STREAM);
 	if (!rv)
 		goto fail;
 
@@ -5224,61 +10239,56 @@ static int drbd_do_auth(struct drbd_connection *connection)
 
 	if (pi.cmd != P_AUTH_CHALLENGE) {
 		drbd_err(connection, "expected AuthChallenge packet, received: %s (0x%04x)\n",
-			 cmdname(pi.cmd), pi.cmd);
-		rv = -1;
-		goto fail;
-	}
-
-	if (pi.size > CHALLENGE_LEN * 2) {
-		drbd_err(connection, "expected AuthChallenge payload too big.\n");
+			 drbd_packet_name(pi.cmd), pi.cmd);
 		rv = -1;
 		goto fail;
 	}
 
-	if (pi.size < CHALLENGE_LEN) {
-		drbd_err(connection, "AuthChallenge payload too small.\n");
+	if (pi.size != sizeof(peers_ch->d)) {
+		drbd_err(connection, "unexpected AuthChallenge payload.\n");
 		rv = -1;
 		goto fail;
 	}
 
-	peers_ch = kmalloc(pi.size, GFP_NOIO);
+	peers_ch = kmalloc_obj(*peers_ch, GFP_NOIO);
 	if (!peers_ch) {
 		rv = -1;
 		goto fail;
 	}
 
-	err = drbd_recv_all_warn(connection, peers_ch, pi.size);
+	err = drbd_recv_into(connection, peers_ch->d, sizeof(peers_ch->d));
 	if (err) {
 		rv = 0;
 		goto fail;
 	}
 
-	if (!memcmp(my_challenge, peers_ch, CHALLENGE_LEN)) {
+	if (!memcmp(my_challenge.d, peers_ch->d, sizeof(my_challenge.d))) {
 		drbd_err(connection, "Peer presented the same challenge!\n");
 		rv = -1;
 		goto fail;
 	}
 
 	resp_size = crypto_shash_digestsize(connection->cram_hmac_tfm);
-	response = kmalloc(resp_size, GFP_NOIO);
+	response = __conn_prepare_command(connection, resp_size, DATA_STREAM);
 	if (!response) {
-		rv = -1;
+		rv = 0;
 		goto fail;
 	}
 
-	rv = crypto_shash_digest(desc, peers_ch, pi.size, response);
+	dig_size = pi.size;
+	if (peer_is_drbd_9) {
+		peers_ch->i = cpu_to_be32(connection->resource->res_opts.node_id);
+		dig_size += sizeof(peers_ch->i);
+	}
+
+	rv = crypto_shash_digest(desc, peers_ch->d, dig_size, response);
 	if (rv) {
-		drbd_err(connection, "crypto_hash_digest() failed with %d\n", rv);
+		drbd_err(connection, "crypto_shash_digest() failed with %d\n", rv);
 		rv = -1;
 		goto fail;
 	}
 
-	if (!conn_prepare_command(connection, sock)) {
-		rv = 0;
-		goto fail;
-	}
-	rv = !conn_send_command(connection, sock, P_AUTH_RESPONSE, 0,
-				response, resp_size);
+	rv = !__send_command(connection, -1, P_AUTH_RESPONSE, DATA_STREAM);
 	if (!rv)
 		goto fail;
 
@@ -5290,18 +10300,19 @@ static int drbd_do_auth(struct drbd_connection *connection)
 
 	if (pi.cmd != P_AUTH_RESPONSE) {
 		drbd_err(connection, "expected AuthResponse packet, received: %s (0x%04x)\n",
-			 cmdname(pi.cmd), pi.cmd);
+			 drbd_packet_name(pi.cmd), pi.cmd);
 		rv = 0;
 		goto fail;
 	}
 
 	if (pi.size != resp_size) {
-		drbd_err(connection, "expected AuthResponse payload of wrong size\n");
+		drbd_err(connection, "expected AuthResponse payload of %u bytes, received %u\n",
+				resp_size, pi.size);
 		rv = 0;
 		goto fail;
 	}
 
-	err = drbd_recv_all_warn(connection, response , resp_size);
+	err = drbd_recv_all(connection, &response, resp_size);
 	if (err) {
 		rv = 0;
 		goto fail;
@@ -5313,10 +10324,15 @@ static int drbd_do_auth(struct drbd_connection *connection)
 		goto fail;
 	}
 
-	rv = crypto_shash_digest(desc, my_challenge, CHALLENGE_LEN,
-				 right_response);
+	dig_size = sizeof(my_challenge.d);
+	if (peer_is_drbd_9) {
+		my_challenge.i = cpu_to_be32(connection->peer_node_id);
+		dig_size += sizeof(my_challenge.i);
+	}
+
+	rv = crypto_shash_digest(desc, my_challenge.d, dig_size, right_response);
 	if (rv) {
-		drbd_err(connection, "crypto_hash_digest() failed with %d\n", rv);
+		drbd_err(connection, "crypto_shash_digest() failed with %d\n", rv);
 		rv = -1;
 		goto fail;
 	}
@@ -5331,7 +10347,6 @@ static int drbd_do_auth(struct drbd_connection *connection)
 
  fail:
 	kfree(peers_ch);
-	kfree(response);
 	kfree(right_response);
 	if (desc) {
 		shash_desc_zero(desc);
@@ -5345,94 +10360,260 @@ static int drbd_do_auth(struct drbd_connection *connection)
 int drbd_receiver(struct drbd_thread *thi)
 {
 	struct drbd_connection *connection = thi->connection;
-	int h;
 
-	drbd_info(connection, "receiver (re)started\n");
+	if (conn_connect(connection)) {
+		blk_start_plug(&connection->receiver_plug);
+		drbdd(connection);
+		blk_finish_plug(&connection->receiver_plug);
+	}
+
+	conn_disconnect(connection);
+	return 0;
+}
+
+/* ********* acknowledge sender ******** */
+
+static void drbd_check_flush_dagtag_reached(struct drbd_connection *peer_ack_connection)
+{
+	struct drbd_resource *resource = peer_ack_connection->resource;
+	struct drbd_connection *flush_requests_connection;
+	u64 peer_ack_node_mask = NODE_MASK(peer_ack_connection->peer_node_id);
+	u64 last_peer_ack_dagtag_seen = peer_ack_connection->last_peer_ack_dagtag_seen;
+	u64 im;
+
+	for_each_connection_ref(flush_requests_connection, im, resource) {
+		u64 flush_sequence;
+		u64 *sent_mask;
+		u64 flush_requests_dagtag;
+
+		spin_lock_irq(&flush_requests_connection->primary_flush_lock);
+		flush_requests_dagtag = flush_requests_connection->flush_requests_dagtag;
+		flush_sequence = flush_requests_connection->flush_sequence;
+		sent_mask = &flush_requests_connection->flush_forward_sent_mask;
+
+		if (!flush_sequence || /* Active flushes use non-zero sequence numbers */
+				*sent_mask & peer_ack_node_mask ||
+				last_peer_ack_dagtag_seen < flush_requests_dagtag) {
+			spin_unlock_irq(&flush_requests_connection->primary_flush_lock);
+			continue;
+		}
+
+		*sent_mask |= peer_ack_node_mask;
+		spin_unlock_irq(&flush_requests_connection->primary_flush_lock);
+
+		if (peer_ack_connection == flush_requests_connection)
+			drbd_send_flush_requests_ack(peer_ack_connection,
+					flush_sequence,
+					resource->res_opts.node_id);
+		else
+			drbd_send_flush_forward(peer_ack_connection,
+					flush_sequence,
+					flush_requests_connection->peer_node_id);
+	}
+}
+
+static int process_peer_ack_list(struct drbd_connection *connection)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_peer_ack *peer_ack, *tmp;
+	u64 node_id_mask;
+	int err = 0;
+
+	node_id_mask = NODE_MASK(connection->peer_node_id);
+
+	spin_lock_irq(&resource->peer_ack_lock);
+	peer_ack = list_first_entry(&resource->peer_ack_list, struct drbd_peer_ack, list);
+	while (&peer_ack->list != &resource->peer_ack_list) {
+		u64 pending_mask = peer_ack->pending_mask;
+		u64 mask = peer_ack->mask;
+		u64 dagtag_sector = peer_ack->dagtag_sector;
+
+		tmp = list_next_entry(peer_ack, list);
+
+		if (!(peer_ack->queued_mask & node_id_mask)) {
+			peer_ack = tmp;
+			continue;
+		}
+
+		/*
+		 * After disconnecting, queue_peer_ack_send() sets
+		 * last_peer_ack_dagtag_seen directly. Do not jump back if we
+		 * process a peer ack with a lower dagtag here shortly after.
+		 */
+		connection->last_peer_ack_dagtag_seen =
+			max(connection->last_peer_ack_dagtag_seen, dagtag_sector);
+
+		peer_ack->queued_mask &= ~node_id_mask;
+		drbd_destroy_peer_ack_if_done(peer_ack);
+		peer_ack = tmp;
+
+		if (!(pending_mask & node_id_mask))
+			continue;
+		spin_unlock_irq(&resource->peer_ack_lock);
+
+		err = drbd_send_peer_ack(connection, mask, dagtag_sector);
+
+		spin_lock_irq(&resource->peer_ack_lock);
+		if (err)
+			break;
+	}
+	spin_unlock_irq(&resource->peer_ack_lock);
+
+	if (!err && connection->agreed_pro_version >= 123)
+		drbd_check_flush_dagtag_reached(connection);
+
+	return err;
+}
+
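+/*
+ * P_PEERS_IN_SYNC: the peer reports a block range as being in sync with the
+ * nodes listed in the mask. Translate the node ids into local bitmap slots
+ * and clear the corresponding out-of-sync bits.
+ */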
+static int got_peers_in_sync(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_peer_device *peer_device;
+	struct drbd_device *device;
+	struct p_peer_block_desc *p = pi->data;
+	sector_t sector;
+	u64 in_sync_b;
+	int size;
 
-	do {
-		h = conn_connect(connection);
-		if (h == 0) {
-			conn_disconnect(connection);
-			schedule_timeout_interruptible(HZ);
-		}
-		if (h == -1) {
-			drbd_warn(connection, "Discarding network configuration.\n");
-			conn_request_state(connection, NS(conn, C_DISCONNECTING), CS_HARD);
-		}
-	} while (h == 0);
+	peer_device = conn_peer_device(connection, pi->vnr);
+	if (!peer_device)
+		return -EIO;
 
-	if (h > 0) {
-		blk_start_plug(&connection->receiver_plug);
-		drbdd(connection);
-		blk_finish_plug(&connection->receiver_plug);
-	}
+	device = peer_device->device;
 
-	conn_disconnect(connection);
+	if (get_ldev(device)) {
+		unsigned long modified;
 
-	drbd_info(connection, "receiver terminated\n");
-	return 0;
-}
+		sector = be64_to_cpu(p->sector);
+		size = be32_to_cpu(p->size);
+		in_sync_b = node_ids_to_bitmap(device, be64_to_cpu(p->mask));
 
-/* ********* acknowledge sender ******** */
+		modified = drbd_set_sync(device, sector, size, 0, in_sync_b);
 
-static int got_conn_RqSReply(struct drbd_connection *connection, struct packet_info *pi)
-{
-	struct p_req_state_reply *p = pi->data;
-	int retcode = be32_to_cpu(p->retcode);
+		/* If we are SyncSource then we rely on P_PEERS_IN_SYNC from
+		 * the peer to inform us of sync progress. Otherwise only send
+		 * peers-in-sync when we have actually cleared some bits.
+		 * This prevents an infinite loop with the peer. */
+		if (modified || peer_device->repl_state[NOW] == L_SYNC_SOURCE)
+			drbd_queue_update_peers(peer_device, sector, sector + (size >> SECTOR_SHIFT));
 
-	if (retcode >= SS_SUCCESS) {
-		set_bit(CONN_WD_ST_CHG_OKAY, &connection->flags);
-	} else {
-		set_bit(CONN_WD_ST_CHG_FAIL, &connection->flags);
-		drbd_err(connection, "Requested state change failed by peer: %s (%d)\n",
-			 drbd_set_st_err_str(retcode), retcode);
+		put_ldev(device);
 	}
-	wake_up(&connection->ping_wait);
 
 	return 0;
 }
 
 static int got_RqSReply(struct drbd_connection *connection, struct packet_info *pi)
 {
-	struct drbd_peer_device *peer_device;
-	struct drbd_device *device;
 	struct p_req_state_reply *p = pi->data;
 	int retcode = be32_to_cpu(p->retcode);
 
-	peer_device = conn_peer_device(connection, pi->vnr);
-	if (!peer_device)
-		return -EIO;
-	device = peer_device->device;
-
-	if (test_bit(CONN_WD_ST_CHG_REQ, &connection->flags)) {
-		D_ASSERT(device, connection->agreed_pro_version < 100);
-		return got_conn_RqSReply(connection, pi);
+	if (retcode >= SS_SUCCESS)
+		set_bit(TWOPC_YES, &connection->flags);
+	else {
+		set_bit(TWOPC_NO, &connection->flags);
+		dynamic_drbd_dbg(connection, "Requested state change failed by peer: %s (%d)\n",
+			   drbd_set_st_err_str(retcode), retcode);
 	}
 
-	if (retcode >= SS_SUCCESS) {
-		set_bit(CL_ST_CHG_SUCCESS, &device->flags);
+	wake_up_all(&connection->resource->state_wait);
+
+	return 0;
+}
+
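+/*
+ * Aggregate a P_TWOPC_* reply into the currently prepared two-phase-commit
+ * transaction. Replies for any other (initiator, tid) pair are stale and
+ * ignored.
+ */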
+static int got_twopc_reply(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct p_twopc_reply *p = pi->data;
+
+	write_lock_irq(&resource->state_rwlock);
+	if (resource->twopc_reply.initiator_node_id == be32_to_cpu(p->initiator_node_id) &&
+	    resource->twopc_reply.tid == be32_to_cpu(p->tid)) {
+		dynamic_drbd_dbg(connection, "Got a %s reply for state change %u\n",
+			   drbd_packet_name(pi->cmd),
+			   resource->twopc_reply.tid);
+
+		if (pi->cmd == P_TWOPC_YES) {
+			struct drbd_peer_device *peer_device;
+			u64 reachable_nodes;
+			u64 max_size;
+
+			reachable_nodes = be64_to_cpu(p->reachable_nodes);
+
+			switch (resource->twopc.type) {
+			case TWOPC_STATE_CHANGE:
+				if (resource->res_opts.node_id ==
+				    resource->twopc_reply.initiator_node_id &&
+				    connection->peer_node_id ==
+				    resource->twopc_reply.target_node_id) {
+					resource->twopc_reply.target_reachable_nodes |=
+						reachable_nodes;
+				} else {
+					resource->twopc_reply.reachable_nodes |=
+						reachable_nodes;
+				}
+				resource->twopc_reply.primary_nodes |=
+					be64_to_cpu(p->primary_nodes);
+				resource->twopc_reply.weak_nodes |=
+					be64_to_cpu(p->weak_nodes);
+				break;
+			case TWOPC_RESIZE:
+				resource->twopc_reply.reachable_nodes |= reachable_nodes;
+				resource->twopc_reply.diskful_primary_nodes |=
+					be64_to_cpu(p->diskful_primary_nodes);
+				max_size = be64_to_cpu(p->max_possible_size);
+				resource->twopc_reply.max_possible_size =
+					min_t(sector_t, resource->twopc_reply.max_possible_size,
+					      max_size);
+				peer_device = conn_peer_device(connection, resource->twopc_reply.vnr);
+				if (peer_device)
+					peer_device->max_size = max_size;
+				break;
+			}
+		}
+
+		if (pi->cmd == P_TWOPC_YES)
+			set_bit(TWOPC_YES, &connection->flags);
+		else if (pi->cmd == P_TWOPC_NO)
+			set_bit(TWOPC_NO, &connection->flags);
+		else if (pi->cmd == P_TWOPC_RETRY)
+			set_bit(TWOPC_RETRY, &connection->flags);
+		drbd_maybe_cluster_wide_reply(resource);
 	} else {
-		set_bit(CL_ST_CHG_FAIL, &device->flags);
-		drbd_err(device, "Requested state change failed by peer: %s (%d)\n",
-			drbd_set_st_err_str(retcode), retcode);
+		dynamic_drbd_dbg(connection, "Ignoring %s reply for state change %u\n",
+			   drbd_packet_name(pi->cmd),
+			   be32_to_cpu(p->tid));
 	}
-	wake_up(&device->state_wait);
+	write_unlock_irq(&resource->state_rwlock);
 
 	return 0;
 }
 
-static int got_Ping(struct drbd_connection *connection, struct packet_info *pi)
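+/*
+ * A connection that is part of a prepared two-phase commit went down:
+ * register it as a RETRY vote so that the transaction can still complete.
+ */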
+void twopc_connection_down(struct drbd_connection *connection)
 {
-	return drbd_send_ping_ack(connection);
+	struct drbd_resource *resource = connection->resource;
 
+	if (resource->twopc_reply.initiator_node_id != -1 &&
+	    test_bit(TWOPC_PREPARED, &connection->flags)) {
+		set_bit(TWOPC_RETRY, &connection->flags);
+		drbd_maybe_cluster_wide_reply(resource);
+	}
+}
+
+static int got_Ping(struct drbd_connection *connection, struct packet_info *pi)
+{
+	queue_work(ping_ack_sender, &connection->send_ping_ack_work);
+	return 0;
 }
 
 static int got_PingAck(struct drbd_connection *connection, struct packet_info *pi)
 {
-	/* restore idle timeout */
-	connection->meta.socket->sk->sk_rcvtimeo = connection->net_conf->ping_int*HZ;
-	if (!test_and_set_bit(GOT_PING_ACK, &connection->flags))
-		wake_up(&connection->ping_wait);
+	clear_bit(PING_TIMEOUT_ACTIVE, &connection->flags);
+	set_rcvtimeo(connection, REGULAR_TIMEOUT);
+
+	if (test_bit(PING_PENDING, &connection->flags)) {
+		clear_bit(PING_PENDING, &connection->flags);
+		wake_up_all(&connection->resource->state_wait);
+	}
 
 	return 0;
 }
@@ -5441,6 +10622,7 @@ static int got_IsInSync(struct drbd_connection *connection, struct packet_info *
 {
 	struct drbd_peer_device *peer_device;
 	struct drbd_device *device;
+	struct drbd_peer_request *peer_req;
 	struct p_block_ack *p = pi->data;
 	sector_t sector = be64_to_cpu(p->sector);
 	int blksize = be32_to_cpu(p->blksize);
@@ -5450,69 +10632,74 @@ static int got_IsInSync(struct drbd_connection *connection, struct packet_info *
 		return -EIO;
 	device = peer_device->device;
 
-	D_ASSERT(device, peer_device->connection->agreed_pro_version >= 89);
+	D_ASSERT(device, connection->agreed_pro_version >= 89);
 
 	update_peer_seq(peer_device, be32_to_cpu(p->seq_num));
 
+	/* Do not rely on the block_id from older peers. */
+	if (connection->agreed_pro_version < 122)
+		p->block_id = ID_SYNCER;
+
+	peer_req = find_resync_request(peer_device, INTERVAL_TYPE_MASK(INTERVAL_RESYNC_WRITE),
+			sector, blksize, p->block_id);
+	if (!peer_req)
+		return -EIO;
+
+	dec_rs_pending(peer_device);
+
+	set_bit(INTERVAL_RECEIVED, &peer_req->i.flags);
+
+	spin_lock_irq(&connection->peer_reqs_lock);
+	list_del(&peer_req->w.list);
+	spin_unlock_irq(&connection->peer_reqs_lock);
+
 	if (get_ldev(device)) {
-		drbd_rs_complete_io(device, sector);
 		drbd_set_in_sync(peer_device, sector, blksize);
 		/* rs_same_csums is supposed to count in units of BM_BLOCK_SIZE */
-		device->rs_same_csum += (blksize >> BM_BLOCK_SHIFT);
+		peer_device->rs_same_csum += (blksize >> device->ldev->md.bm_block_shift);
 		put_ldev(device);
 	}
-	dec_rs_pending(peer_device);
-	atomic_add(blksize >> 9, &device->rs_sect_in);
+	rs_sectors_came_in(peer_device, blksize);
 
+	drbd_remove_peer_req_interval(peer_req);
+	drbd_resync_request_complete(peer_req);
 	return 0;
 }
 
 static int
 validate_req_change_req_state(struct drbd_peer_device *peer_device, u64 id, sector_t sector,
-			      struct rb_root *root, const char *func,
+			      enum drbd_interval_type type, const char *func,
 			      enum drbd_req_event what, bool missing_ok)
 {
 	struct drbd_device *device = peer_device->device;
 	struct drbd_request *req;
-	struct bio_and_error m;
 
-	spin_lock_irq(&device->resource->req_lock);
-	req = find_request(device, root, id, sector, missing_ok, func);
-	if (unlikely(!req)) {
-		spin_unlock_irq(&device->resource->req_lock);
+	spin_lock_irq(&device->interval_lock);
+	req = find_request(device, type, id, sector, missing_ok, func);
+	spin_unlock_irq(&device->interval_lock);
+	if (unlikely(!req))
 		return -EIO;
-	}
-	__req_mod(req, what, peer_device, &m);
-	spin_unlock_irq(&device->resource->req_lock);
+	req_mod(req, what, peer_device);
 
-	if (m.bio)
-		complete_master_bio(device, &m);
 	return 0;
 }
 
 static int got_BlockAck(struct drbd_connection *connection, struct packet_info *pi)
 {
 	struct drbd_peer_device *peer_device;
-	struct drbd_device *device;
 	struct p_block_ack *p = pi->data;
 	sector_t sector = be64_to_cpu(p->sector);
-	int blksize = be32_to_cpu(p->blksize);
 	enum drbd_req_event what;
 
 	peer_device = conn_peer_device(connection, pi->vnr);
 	if (!peer_device)
 		return -EIO;
-	device = peer_device->device;
 
 	update_peer_seq(peer_device, be32_to_cpu(p->seq_num));
 
-	if (p->block_id == ID_SYNCER) {
-		drbd_set_in_sync(peer_device, sector, blksize);
-		dec_rs_pending(peer_device);
-		return 0;
-	}
 	switch (pi->cmd) {
-	case P_RS_WRITE_ACK:
+	case P_RS_WRITE_ACK: /* agreed_pro_version < 122 */
+	case P_WRITE_ACK_IN_SYNC:
 		what = WRITE_ACKED_BY_PEER_AND_SIS;
 		break;
 	case P_WRITE_ACK:
@@ -5521,209 +10708,591 @@ static int got_BlockAck(struct drbd_connection *connection, struct packet_info *
 	case P_RECV_ACK:
 		what = RECV_ACKED_BY_PEER;
 		break;
-	case P_SUPERSEDED:
-		what = CONFLICT_RESOLVED;
-		break;
-	case P_RETRY_WRITE:
-		what = POSTPONE_WRITE;
-		break;
 	default:
 		BUG();
 	}
 
-	return validate_req_change_req_state(peer_device, p->block_id, sector,
-					     &device->write_requests, __func__,
-					     what, false);
+	return validate_req_change_req_state(peer_device, p->block_id, sector,
+					     INTERVAL_LOCAL_WRITE, __func__,
+					     what, false);
+}
+
+/* Process acks for resync writes. */
+static int got_RSWriteAck(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_peer_device *peer_device;
+	struct p_block_ack *p = pi->data;
+	bool is_neg_ack = pi->cmd == P_NEG_ACK || pi->cmd == P_RS_NEG_ACK;
+	sector_t sector = be64_to_cpu(p->sector);
+	int size = be32_to_cpu(p->blksize);
+	struct drbd_peer_request *peer_req;
+
+	/* P_RS_WRITE_ACK used to be used instead of P_WRITE_ACK_IN_SYNC. */
+	if (connection->agreed_pro_version < 122 && p->block_id != ID_SYNCER)
+		return got_BlockAck(connection, pi);
+
+	peer_device = conn_peer_device(connection, pi->vnr);
+	if (!peer_device)
+		return -EIO;
+
+	update_peer_seq(peer_device, be32_to_cpu(p->seq_num));
+
+	if (is_neg_ack && peer_device->disk_state[NOW] == D_UP_TO_DATE)
+		set_bit(GOT_NEG_ACK, &peer_device->flags);
+
+	peer_req = find_resync_request(peer_device, INTERVAL_TYPE_MASK(INTERVAL_RESYNC_READ),
+			sector, size, p->block_id);
+	if (!peer_req)
+		return -EIO;
+
+	if (is_neg_ack)
+		drbd_rs_failed_io(peer_device, sector, size);
+	else
+		drbd_set_in_sync(peer_device, sector, size);
+
+	atomic_sub(size >> 9, &connection->rs_in_flight);
+
+	dec_rs_pending(peer_device);
+
+	/*
+	 * Remove from the interval tree now so that
+	 * find_resync_request() cannot find this request again
+	 * if we get another ack for this interval.
+	 */
+	drbd_remove_peer_req_interval(peer_req);
+
+	drbd_resync_read_req_mod(peer_req, INTERVAL_RECEIVED);
+	return 0;
+}
+
+static int got_NegAck(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_peer_device *peer_device;
+	struct p_block_ack *p = pi->data;
+	sector_t sector = be64_to_cpu(p->sector);
+	int size = be32_to_cpu(p->blksize);
+	int err;
+
+	/* P_NEG_ACK used to be used instead of P_RS_NEG_ACK. */
+	if (p->block_id == ID_SYNCER)
+		return got_RSWriteAck(connection, pi);
+
+	peer_device = conn_peer_device(connection, pi->vnr);
+	if (!peer_device)
+		return -EIO;
+
+	update_peer_seq(peer_device, be32_to_cpu(p->seq_num));
+
+	if (peer_device->disk_state[NOW] == D_UP_TO_DATE)
+		set_bit(GOT_NEG_ACK, &peer_device->flags);
+
+	err = validate_req_change_req_state(peer_device, p->block_id, sector,
+			INTERVAL_LOCAL_WRITE, __func__, NEG_ACKED, true);
+	if (err) {
+		/* Protocol A has no P_WRITE_ACKs, but has P_NEG_ACKs.
+		   The master bio might already be completed, therefore the
+		   request is no longer in the collision hash. */
+		/* In Protocol B we might already have got a P_RECV_ACK
+		   but then get a P_NEG_ACK afterwards. */
+		drbd_set_out_of_sync(peer_device, sector, size);
+	}
+	return 0;
+}
+
+static int got_NegDReply(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_peer_device *peer_device;
+	struct p_block_ack *p = pi->data;
+	sector_t sector = be64_to_cpu(p->sector);
+
+	peer_device = conn_peer_device(connection, pi->vnr);
+	if (!peer_device)
+		return -EIO;
+
+	update_peer_seq(peer_device, be32_to_cpu(p->seq_num));
+
+	drbd_warn_ratelimit(peer_device, "Got NegDReply; Sector %llus, len %u.\n",
+			(unsigned long long)sector, be32_to_cpu(p->blksize));
+
+	return validate_req_change_req_state(peer_device, p->block_id, sector,
+					     INTERVAL_LOCAL_READ, __func__,
+					     NEG_ACKED, false);
+}
+
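+/*
+ * A resync or online-verify request could not be carried out. Account for
+ * the failed or skipped range, credit the sectors so resync pacing can make
+ * progress, and re-arm the resync timer before dropping the request.
+ */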
+void drbd_unsuccessful_resync_request(struct drbd_peer_request *peer_req, bool failed)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_device *device = peer_device->device;
+
+	if (get_ldev_if_state(device, D_DETACHING)) {
+		if (failed) {
+			drbd_rs_failed_io(peer_device, peer_req->i.sector, peer_req->i.size);
+		} else {
+			if (drbd_interval_is_verify(&peer_req->i)) {
+				drbd_verify_skipped_block(peer_device, peer_req->i.sector, peer_req->i.size);
+				verify_progress(peer_device, peer_req->i.sector, peer_req->i.size);
+			} else {
+				set_bit(RS_REQUEST_UNSUCCESSFUL, &peer_device->flags);
+			}
+		}
+
+		rs_sectors_came_in(peer_device, peer_req->i.size);
+		mod_timer(&peer_device->resync_timer, jiffies + RS_MAKE_REQS_INTV);
+		put_ldev(device);
+	}
+
+	drbd_remove_peer_req_interval(peer_req);
+	drbd_free_peer_req(peer_req);
+}
+
+static int got_NegRSDReply(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_peer_device *peer_device;
+	struct drbd_peer_request *peer_req;
+	sector_t sector;
+	int size;
+	u64 block_id;
+	struct p_block_ack *p = pi->data;
+
+	peer_device = conn_peer_device(connection, pi->vnr);
+	if (!peer_device)
+		return -EIO;
+
+	sector = be64_to_cpu(p->sector);
+	size = be32_to_cpu(p->blksize);
+
+	/* Prior to protocol version 122, block_id may be meaningless. */
+	block_id = peer_device->connection->agreed_pro_version >= 122 ? p->block_id : ID_SYNCER;
+
+	update_peer_seq(peer_device, be32_to_cpu(p->seq_num));
+
+	peer_req = find_resync_request(peer_device, INTERVAL_TYPE_MASK(INTERVAL_RESYNC_WRITE) |
+			INTERVAL_TYPE_MASK(INTERVAL_OV_READ_SOURCE),
+			sector, size, block_id);
+	if (!peer_req)
+		return -EIO;
+
+	dec_rs_pending(peer_device);
+
+	if (pi->cmd == P_RS_CANCEL_AHEAD)
+		set_bit(SYNC_TARGET_TO_BEHIND, &peer_device->flags);
+
+	drbd_unsuccessful_resync_request(peer_req, pi->cmd == P_NEG_RS_DREPLY);
+	return 0;
+}
+
+static int got_BarrierAck(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct p_barrier_ack *p = pi->data;
+
+	return tl_release(connection, 0, 0, p->barrier, be32_to_cpu(p->set_size));
+}
+
+static int got_confirm_stable(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct p_confirm_stable *p = pi->data;
+
+	return tl_release(connection, p->oldest_block_id, p->youngest_block_id, 0,
+			  be32_to_cpu(p->set_size));
+}
+
+static int got_OVResult(struct drbd_connection *connection, struct packet_info *pi)
+{
+	struct drbd_peer_device *peer_device;
+	struct drbd_device *device;
+	struct drbd_peer_request *peer_req;
+	sector_t sector;
+	int size;
+	u64 block_id;
+	u32 seq_num;
+	enum ov_result result;
+
+	peer_device = conn_peer_device(connection, pi->vnr);
+	if (!peer_device)
+		return -EIO;
+	device = peer_device->device;
+
+	if (pi->cmd == P_OV_RESULT) {
+		struct p_block_ack *p = pi->data;
+
+		sector = be64_to_cpu(p->sector);
+		size = be32_to_cpu(p->blksize);
+		block_id = ID_SYNCER;
+		seq_num = be32_to_cpu(p->seq_num);
+		result = drbd_block_id_to_ov_result(be64_to_cpu(p->block_id));
+	} else { /* P_OV_RESULT_ID */
+		struct p_ov_result *p = pi->data;
+
+		sector = be64_to_cpu(p->sector);
+		size = be32_to_cpu(p->blksize);
+		block_id = p->block_id;
+		seq_num = be32_to_cpu(p->seq_num);
+		result = be32_to_cpu(p->result);
+	}
+
+	update_peer_seq(peer_device, seq_num);
+
+	peer_req = find_resync_request(peer_device, INTERVAL_TYPE_MASK(INTERVAL_OV_READ_TARGET),
+			sector, size, block_id);
+	if (!peer_req)
+		return -EIO;
+
+	drbd_remove_peer_req_interval(peer_req);
+
+	/* This may be a request that we could not cancel because the peer does
+	 * not understand P_RS_CANCEL. Treat it as a skipped block. */
+	if (connection->agreed_pro_version < 110 && test_bit(INTERVAL_CONFLICT, &peer_req->i.flags))
+		result = OV_RESULT_SKIP;
+
+	drbd_free_peer_req(peer_req);
+	peer_req = NULL;
+
+	if (result == OV_RESULT_SKIP)
+		drbd_verify_skipped_block(peer_device, sector, size);
+
+	if (result == OV_RESULT_OUT_OF_SYNC)
+		drbd_ov_out_of_sync_found(peer_device, sector, size);
+	else
+		ov_out_of_sync_print(peer_device);
+
+	if (!get_ldev(device))
+		return 0;
+
+	dec_rs_pending(peer_device);
+
+	verify_progress(peer_device, sector, size);
+
+	put_ldev(device);
+	return 0;
+}
+
+static int got_skip(struct drbd_connection *connection, struct packet_info *pi)
+{
+	return 0;
+}
+
+static u64 node_id_to_mask(struct drbd_peer_md *peer_md, int node_id)
+{
+	int bitmap_bit = peer_md[node_id].bitmap_index;
+
+	return (bitmap_bit >= 0) ? NODE_MASK(bitmap_bit) : 0;
+}
+
+static u64 node_ids_to_bitmap(struct drbd_device *device, u64 node_ids)
+{
+	struct drbd_peer_md *peer_md = device->ldev->md.peers;
+	u64 bitmap_bits = 0;
+	int node_id;
+
+	for_each_set_bit(node_id, (unsigned long *)&node_ids, DRBD_NODE_ID_MAX)
+		bitmap_bits |= node_id_to_mask(peer_md, node_id);
+	return bitmap_bits;
+}
+
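+/*
+ * Return the next request on the send_oos list that still needs an
+ * out-of-sync notification towards oos_node_id, starting after peer_req,
+ * or from the head of the list when peer_req is NULL.
+ */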
+static struct drbd_peer_request *drbd_send_oos_next_req(struct drbd_connection *peer_ack_connection,
+		int oos_node_id, struct drbd_peer_request *peer_req)
+{
+	lockdep_assert_held(&peer_ack_connection->send_oos_lock);
+
+	if (peer_req == NULL)
+		peer_req = list_entry(&peer_ack_connection->send_oos,
+				struct drbd_peer_request, recv_order);
+
+	list_for_each_entry_continue(peer_req, &peer_ack_connection->send_oos, recv_order) {
+		if (NODE_MASK(oos_node_id) & peer_req->send_oos_pending)
+			return peer_req;
+	}
+
+	return NULL;
 }
 
-static int got_NegAck(struct drbd_connection *connection, struct packet_info *pi)
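+/*
+ * Send P_OUT_OF_SYNC to the peer behind oos_connection for every request
+ * queued on the connection identified by peer_ack_node_id; free each
+ * request once no node is left to notify. Drops the kref that was taken
+ * when this work was queued.
+ */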
+static void drbd_send_oos_from(struct drbd_connection *oos_connection, int peer_ack_node_id)
 {
-	struct drbd_peer_device *peer_device;
-	struct drbd_device *device;
-	struct p_block_ack *p = pi->data;
-	sector_t sector = be64_to_cpu(p->sector);
-	int size = be32_to_cpu(p->blksize);
-	int err;
+	int oos_node_id = oos_connection->peer_node_id;
+	struct drbd_resource *resource = oos_connection->resource;
+	struct drbd_connection *peer_ack_connection;
+	struct drbd_peer_request *peer_req;
 
-	peer_device = conn_peer_device(connection, pi->vnr);
-	if (!peer_device)
-		return -EIO;
-	device = peer_device->device;
+	rcu_read_lock();
+	peer_ack_connection = drbd_connection_by_node_id(resource, peer_ack_node_id);
+	/* Valid to use peer_ack_connection after unlock because we have kref */
+	rcu_read_unlock();
 
-	update_peer_seq(peer_device, be32_to_cpu(p->seq_num));
+	spin_lock_irq(&peer_ack_connection->send_oos_lock);
+	peer_req = drbd_send_oos_next_req(peer_ack_connection, oos_node_id, NULL);
+	spin_unlock_irq(&peer_ack_connection->send_oos_lock);
 
-	if (p->block_id == ID_SYNCER) {
-		dec_rs_pending(peer_device);
-		drbd_rs_failed_io(peer_device, sector, size);
-		return 0;
+	while (peer_req) {
+		struct drbd_peer_device *peer_device =
+			conn_peer_device(oos_connection, peer_req->peer_device->device->vnr);
+		struct drbd_peer_request *free_peer_req = NULL;
+
+		/* Ignore errors and keep iterating to clear up list */
+		drbd_send_out_of_sync(peer_device, peer_req->i.sector, peer_req->i.size);
+
+		spin_lock_irq(&peer_ack_connection->send_oos_lock);
+		peer_req->send_oos_pending &= ~NODE_MASK(oos_node_id);
+		if (!peer_req->send_oos_pending)
+			free_peer_req = peer_req;
+
+		peer_req = drbd_send_oos_next_req(peer_ack_connection, oos_node_id, peer_req);
+		spin_unlock_irq(&peer_ack_connection->send_oos_lock);
+
+		if (free_peer_req)
+			drbd_free_peer_req(free_peer_req);
 	}
 
-	err = validate_req_change_req_state(peer_device, p->block_id, sector,
-					    &device->write_requests, __func__,
-					    NEG_ACKED, true);
-	if (err) {
-		/* Protocol A has no P_WRITE_ACKs, but has P_NEG_ACKs.
-		   The master bio might already be completed, therefore the
-		   request is no longer in the collision hash. */
-		/* In Protocol B we might already have got a P_RECV_ACK
-		   but then get a P_NEG_ACK afterwards. */
-		drbd_set_out_of_sync(peer_device, sector, size);
+	kref_put(&peer_ack_connection->kref, drbd_destroy_connection);
+}
+
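+/* Sender work: process the node ids latched in send_oos_from_mask. */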
+int drbd_send_out_of_sync_wf(struct drbd_work *w, int cancel)
+{
+	struct drbd_connection *oos_connection = container_of(w, struct drbd_connection,
+			send_oos_work);
+	unsigned long send_oos_from_mask = READ_ONCE(oos_connection->send_oos_from_mask);
+	int peer_ack_node_id;
+
+	for_each_set_bit(peer_ack_node_id, &send_oos_from_mask, BITS_PER_LONG) {
+		clear_bit(peer_ack_node_id, &oos_connection->send_oos_from_mask);
+		drbd_send_oos_from(oos_connection, peer_ack_node_id);
 	}
+
 	return 0;
 }
 
-static int got_NegDReply(struct drbd_connection *connection, struct packet_info *pi)
+static bool is_sync_source(struct drbd_peer_device *peer_device)
+{
+	return is_sync_source_state(peer_device, NOW) ||
+		peer_device->repl_state[NOW] == L_WF_BITMAP_S;
+}
+
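+/*
+ * Determine which peers still need an out-of-sync notification for a write:
+ * every peer we are sync source for that is not in the in_sync mask.
+ */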
+static u64 drbd_calculate_send_oos_pending(struct drbd_device *device, u64 in_sync)
 {
 	struct drbd_peer_device *peer_device;
-	struct drbd_device *device;
-	struct p_block_ack *p = pi->data;
-	sector_t sector = be64_to_cpu(p->sector);
+	u64 send_oos_pending = 0;
 
-	peer_device = conn_peer_device(connection, pi->vnr);
-	if (!peer_device)
-		return -EIO;
-	device = peer_device->device;
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		if (!(NODE_MASK(peer_device->node_id) & in_sync) &&
+				is_sync_source(peer_device))
+			send_oos_pending |= NODE_MASK(peer_device->node_id);
+	}
+	rcu_read_unlock();
 
-	update_peer_seq(peer_device, be32_to_cpu(p->seq_num));
+	return send_oos_pending;
+}
 
-	drbd_err(device, "Got NegDReply; Sector %llus, len %u.\n",
-	    (unsigned long long)sector, be32_to_cpu(p->blksize));
+static void drbd_queue_send_out_of_sync(struct drbd_connection *peer_ack_connection,
+		struct list_head *send_oos_peer_req_list, u64 any_send_oos_pending)
+{
+	struct drbd_resource *resource = peer_ack_connection->resource;
+	int peer_ack_node_id = peer_ack_connection->peer_node_id;
+	struct drbd_connection *oos_connection;
 
-	return validate_req_change_req_state(peer_device, p->block_id, sector,
-					     &device->read_requests, __func__,
-					     NEG_ACKED, false);
+	if (!any_send_oos_pending)
+		return;
+
+	spin_lock_irq(&peer_ack_connection->send_oos_lock);
+	list_splice_tail(send_oos_peer_req_list, &peer_ack_connection->send_oos);
+	spin_unlock_irq(&peer_ack_connection->send_oos_lock);
+
+	/* Take state_rwlock to ensure work is queued on sender that is still running */
+	read_lock_irq(&resource->state_rwlock);
+	for_each_connection(oos_connection, resource) {
+		if (!(NODE_MASK(oos_connection->peer_node_id) & any_send_oos_pending) ||
+				oos_connection->cstate[NOW] < C_CONNECTED)
+			continue;
+
+		if (test_and_set_bit(peer_ack_node_id, &oos_connection->send_oos_from_mask))
+			continue; /* Only get kref if we set the bit here */
+
+		kref_get(&peer_ack_connection->kref);
+		drbd_queue_work_if_unqueued(&oos_connection->sender_work,
+				&oos_connection->send_oos_work);
+	}
+	read_unlock_irq(&resource->state_rwlock);
 }
 
-static int got_NegRSDReply(struct drbd_connection *connection, struct packet_info *pi)
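+/*
+ * P_PEER_ACK: the peer confirms its writes up to the given dagtag and
+ * reports on which nodes they are stable. Mark the remaining nodes out of
+ * sync in the bitmap, release the activity log references, and queue
+ * out-of-sync notifications towards peers we are resyncing.
+ */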
+static int got_peer_ack(struct drbd_connection *connection, struct packet_info *pi)
 {
-	struct drbd_peer_device *peer_device;
-	struct drbd_device *device;
-	sector_t sector;
-	int size;
-	struct p_block_ack *p = pi->data;
+	struct p_peer_ack *p = pi->data;
+	u64 dagtag, in_sync;
+	struct drbd_peer_request *peer_req, *tmp;
+	struct list_head work_list;
+	u64 any_send_oos_pending = 0;
 
-	peer_device = conn_peer_device(connection, pi->vnr);
-	if (!peer_device)
-		return -EIO;
-	device = peer_device->device;
+	dagtag = be64_to_cpu(p->dagtag);
+	in_sync = be64_to_cpu(p->mask);
 
-	sector = be64_to_cpu(p->sector);
-	size = be32_to_cpu(p->blksize);
+	spin_lock_irq(&connection->peer_reqs_lock);
+	list_for_each_entry(peer_req, &connection->peer_requests, recv_order) {
+		if (dagtag == peer_req->dagtag_sector)
+			goto found;
+	}
+	spin_unlock_irq(&connection->peer_reqs_lock);
 
-	update_peer_seq(peer_device, be32_to_cpu(p->seq_num));
+	drbd_err(connection, "peer request with dagtag %llu not found\n", dagtag);
+	return -EIO;
 
-	dec_rs_pending(peer_device);
+found:
+	list_cut_position(&work_list, &connection->peer_requests, &peer_req->recv_order);
+	spin_unlock_irq(&connection->peer_reqs_lock);
 
-	if (get_ldev_if_state(device, D_FAILED)) {
-		drbd_rs_complete_io(device, sector);
-		switch (pi->cmd) {
-		case P_NEG_RS_DREPLY:
-			drbd_rs_failed_io(peer_device, sector, size);
-			break;
-		case P_RS_CANCEL:
-			break;
-		default:
-			BUG();
+	list_for_each_entry_safe(peer_req, tmp, &work_list, recv_order) {
+		struct drbd_peer_device *peer_device = peer_req->peer_device;
+		struct drbd_device *device = peer_device->device;
+		u64 in_sync_b, mask;
+
+		D_ASSERT(peer_device, peer_req->flags & EE_IN_ACTLOG);
+
+		if (get_ldev(device)) {
+			if ((peer_req->flags & EE_WAS_ERROR) == 0)
+				in_sync_b = node_ids_to_bitmap(device, in_sync);
+			else
+				in_sync_b = 0;
+			mask = ~node_id_to_mask(device->ldev->md.peers,
+						connection->peer_node_id);
+
+			drbd_set_sync(device, peer_req->i.sector,
+				      peer_req->i.size, ~in_sync_b, mask);
+			drbd_al_complete_io(device, &peer_req->i);
+			put_ldev(device);
 		}
-		put_ldev(device);
+
+		peer_req->send_oos_pending = drbd_calculate_send_oos_pending(device, in_sync);
+		any_send_oos_pending |= peer_req->send_oos_pending;
+		if (!peer_req->send_oos_pending)
+			drbd_free_peer_req(peer_req);
 	}
 
+	drbd_queue_send_out_of_sync(connection, &work_list, any_send_oos_pending);
 	return 0;
 }
 
-static int got_BarrierAck(struct drbd_connection *connection, struct packet_info *pi)
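+/*
+ * Without a peer ack we must assume the worst: mark every still unacked
+ * peer request as out of sync towards all nodes except the peer that wrote
+ * it.
+ */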
+void apply_unacked_peer_requests(struct drbd_connection *connection)
 {
-	struct p_barrier_ack *p = pi->data;
-	struct drbd_peer_device *peer_device;
-	int vnr;
-
-	tl_release(connection, p->barrier, be32_to_cpu(p->set_size));
+	struct drbd_peer_request *peer_req;
+	unsigned long flags;
 
-	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+	spin_lock_irqsave(&connection->peer_reqs_lock, flags);
+	list_for_each_entry(peer_req, &connection->peer_requests, recv_order) {
+		struct drbd_peer_device *peer_device = peer_req->peer_device;
 		struct drbd_device *device = peer_device->device;
+		int bitmap_index = peer_device->bitmap_index;
+		u64 mask = ~(bitmap_index != -1 ? 1UL << bitmap_index : 0UL);
 
-		if (device->state.conn == C_AHEAD &&
-		    atomic_read(&device->ap_in_flight) == 0 &&
-		    !test_and_set_bit(AHEAD_TO_SYNC_SOURCE, &device->flags)) {
-			device->start_resync_timer.expires = jiffies + HZ;
-			add_timer(&device->start_resync_timer);
-		}
+		drbd_set_sync(device, peer_req->i.sector, peer_req->i.size,
+			      mask, mask);
 	}
-	rcu_read_unlock();
-
-	return 0;
+	spin_unlock_irqrestore(&connection->peer_reqs_lock, flags);
 }
 
-static int got_OVResult(struct drbd_connection *connection, struct packet_info *pi)
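+/*
+ * Like apply_unacked_peer_requests(), but take the requests off the list,
+ * release their activity log extents and free them, queueing out-of-sync
+ * notifications where a resync peer still needs to hear about the write.
+ */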
+static void cleanup_unacked_peer_requests(struct drbd_connection *connection)
 {
-	struct drbd_peer_device *peer_device;
-	struct drbd_device *device;
-	struct p_block_ack *p = pi->data;
-	struct drbd_device_work *dw;
-	sector_t sector;
-	int size;
+	struct drbd_peer_request *peer_req, *tmp;
+	LIST_HEAD(work_list);
+	u64 any_send_oos_pending = 0;
 
-	peer_device = conn_peer_device(connection, pi->vnr);
-	if (!peer_device)
-		return -EIO;
-	device = peer_device->device;
+	spin_lock_irq(&connection->peer_reqs_lock);
+	list_splice_init(&connection->peer_requests, &work_list);
+	spin_unlock_irq(&connection->peer_reqs_lock);
 
-	sector = be64_to_cpu(p->sector);
-	size = be32_to_cpu(p->blksize);
+	list_for_each_entry_safe(peer_req, tmp, &work_list, recv_order) {
+		struct drbd_peer_device *peer_device = peer_req->peer_device;
+		struct drbd_device *device = peer_device->device;
+		int bitmap_index = peer_device->bitmap_index;
+		u64 mask = ~(bitmap_index != -1 ? 1UL << bitmap_index : 0UL);
 
-	update_peer_seq(peer_device, be32_to_cpu(p->seq_num));
+		if (get_ldev(device)) {
+			drbd_set_sync(device, peer_req->i.sector, peer_req->i.size,
+				      mask, mask);
+			drbd_al_complete_io(device, &peer_req->i);
+			put_ldev(device);
+		}
 
-	if (be64_to_cpu(p->block_id) == ID_OUT_OF_SYNC)
-		drbd_ov_out_of_sync_found(peer_device, sector, size);
-	else
-		ov_out_of_sync_print(peer_device);
+		peer_req->send_oos_pending = drbd_calculate_send_oos_pending(device, 0);
+		any_send_oos_pending |= peer_req->send_oos_pending;
+		if (!peer_req->send_oos_pending)
+			drbd_free_peer_req(peer_req);
+	}
 
-	if (!get_ldev(device))
-		return 0;
+	drbd_queue_send_out_of_sync(connection, &work_list, any_send_oos_pending);
+}
 
-	drbd_rs_complete_io(device, sector);
-	dec_rs_pending(peer_device);
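+/*
+ * Drop this connection's bit from every queued peer ack so the entries can
+ * be destroyed once no other connection has them queued.
+ */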
+static void cleanup_peer_ack_list(struct drbd_connection *connection)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_peer_ack *peer_ack, *tmp;
+	struct drbd_request *req;
+	int idx = connection->peer_node_id;
+	u64 node_id_mask = NODE_MASK(idx);
+
+	spin_lock_irq(&resource->peer_ack_lock);
+	list_for_each_entry_safe(peer_ack, tmp, &resource->peer_ack_list, list) {
+		if (!(peer_ack->queued_mask & node_id_mask))
+			continue;
+		peer_ack->queued_mask &= ~node_id_mask;
+		drbd_destroy_peer_ack_if_done(peer_ack);
+	}
+	req = resource->peer_ack_req;
+	if (req)
+		req->net_rq_state[idx] &= ~RQ_NET_SENT;
+	spin_unlock_irq(&resource->peer_ack_lock);
+}
+
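+/* Sender work: deliver the flush acks recorded by got_flush_forward(). */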
+int drbd_flush_ack_wf(struct drbd_work *w, int unused)
+{
+	struct drbd_connection *connection =
+		container_of(w, struct drbd_connection, flush_ack_work);
+	int primary_node_id;
 
-	--device->ov_left;
+	for (primary_node_id = 0; primary_node_id < DRBD_PEERS_MAX; primary_node_id++) {
+		u64 flush_sequence;
 
-	/* let's advance progress step marks only for every other megabyte */
-	if ((device->ov_left & 0x200) == 0x200)
-		drbd_advance_rs_marks(peer_device, device->ov_left);
+		spin_lock_irq(&connection->flush_ack_lock);
+		flush_sequence = connection->flush_ack_sequence[primary_node_id];
+		connection->flush_ack_sequence[primary_node_id] = 0;
+		spin_unlock_irq(&connection->flush_ack_lock);
 
-	if (device->ov_left == 0) {
-		dw = kmalloc_obj(*dw, GFP_NOIO);
-		if (dw) {
-			dw->w.cb = w_ov_finished;
-			dw->device = device;
-			drbd_queue_work(&peer_device->connection->sender_work, &dw->w);
-		} else {
-			drbd_err(device, "kmalloc(dw) failed.");
-			ov_out_of_sync_print(peer_device);
-			drbd_resync_finished(peer_device);
-		}
+		if (flush_sequence) /* Active flushes use non-zero sequence numbers */
+			drbd_send_flush_requests_ack(connection, flush_sequence, primary_node_id);
 	}
-	put_ldev(device);
+
 	return 0;
 }
 
-static int got_skip(struct drbd_connection *connection, struct packet_info *pi)
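+/*
+ * P_FLUSH_FORWARD: the sending peer confirms a flush sequence on behalf of
+ * the initiator named in the packet. Record it and let the sender of the
+ * initiator's connection deliver the actual ack.
+ */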
+static int got_flush_forward(struct drbd_connection *connection, struct packet_info *pi)
 {
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_connection *initiator_connection;
+	struct p_flush_forward *p = pi->data;
+	u64 flush_sequence = be64_to_cpu(p->flush_sequence);
+	int initiator_node_id = be32_to_cpu(p->initiator_node_id);
+
+	rcu_read_lock();
+	initiator_connection = drbd_connection_by_node_id(resource, initiator_node_id);
+	if (!initiator_connection) {
+		rcu_read_unlock();
+		return 0;
+	}
+
+	spin_lock_irq(&initiator_connection->flush_ack_lock);
+	initiator_connection->flush_ack_sequence[connection->peer_node_id] = flush_sequence;
+	drbd_queue_work_if_unqueued(&initiator_connection->sender_work,
+			&initiator_connection->flush_ack_work);
+	spin_unlock_irq(&initiator_connection->flush_ack_lock);
+	rcu_read_unlock();
 	return 0;
 }
 
-struct meta_sock_cmd {
-	size_t pkt_size;
-	int (*fn)(struct drbd_connection *connection, struct packet_info *);
-};
-
-static void set_rcvtimeo(struct drbd_connection *connection, bool ping_timeout)
+static void set_rcvtimeo(struct drbd_connection *connection, enum rcv_timeout_kind kind)
 {
-	long t;
+	struct drbd_transport *transport = &connection->transport;
+	struct drbd_transport_ops *tr_ops = &transport->class->ops;
+	bool ping_timeout = kind == PING_TIMEOUT;
 	struct net_conf *nc;
+	long t;
 
 	rcu_read_lock();
-	nc = rcu_dereference(connection->net_conf);
+	nc = rcu_dereference(transport->net_conf);
 	t = ping_timeout ? nc->ping_timeo : nc->ping_int;
 	rcu_read_unlock();
 
@@ -5731,202 +11300,263 @@ static void set_rcvtimeo(struct drbd_connection *connection, bool ping_timeout)
 	if (ping_timeout)
 		t /= 10;
 
-	connection->meta.socket->sk->sk_rcvtimeo = t;
+	tr_ops->set_rcvtimeo(transport, CONTROL_STREAM, t);
 }
 
-static void set_ping_timeout(struct drbd_connection *connection)
+void drbd_send_ping_wf(struct work_struct *ws)
 {
-	set_rcvtimeo(connection, 1);
-}
+	struct drbd_connection *connection =
+		container_of(ws, struct drbd_connection, send_ping_work);
+	int err;
 
-static void set_idle_timeout(struct drbd_connection *connection)
-{
-	set_rcvtimeo(connection, 0);
+	set_rcvtimeo(connection, PING_TIMEOUT);
+	set_bit(PING_TIMEOUT_ACTIVE, &connection->flags);
+	err = drbd_send_ping(connection);
+	if (err)
+		change_cstate(connection, C_NETWORK_FAILURE, CS_HARD);
 }
 
+struct meta_sock_cmd {
+	size_t pkt_size;
+	int (*fn)(struct drbd_connection *connection, struct packet_info *);
+};
+
 static struct meta_sock_cmd ack_receiver_tbl[] = {
-	[P_PING]	    = { 0, got_Ping },
-	[P_PING_ACK]	    = { 0, got_PingAck },
-	[P_RECV_ACK]	    = { sizeof(struct p_block_ack), got_BlockAck },
-	[P_WRITE_ACK]	    = { sizeof(struct p_block_ack), got_BlockAck },
-	[P_RS_WRITE_ACK]    = { sizeof(struct p_block_ack), got_BlockAck },
-	[P_SUPERSEDED]   = { sizeof(struct p_block_ack), got_BlockAck },
-	[P_NEG_ACK]	    = { sizeof(struct p_block_ack), got_NegAck },
-	[P_NEG_DREPLY]	    = { sizeof(struct p_block_ack), got_NegDReply },
-	[P_NEG_RS_DREPLY]   = { sizeof(struct p_block_ack), got_NegRSDReply },
-	[P_OV_RESULT]	    = { sizeof(struct p_block_ack), got_OVResult },
-	[P_BARRIER_ACK]	    = { sizeof(struct p_barrier_ack), got_BarrierAck },
-	[P_STATE_CHG_REPLY] = { sizeof(struct p_req_state_reply), got_RqSReply },
-	[P_RS_IS_IN_SYNC]   = { sizeof(struct p_block_ack), got_IsInSync },
-	[P_DELAY_PROBE]     = { sizeof(struct p_delay_probe93), got_skip },
-	[P_RS_CANCEL]       = { sizeof(struct p_block_ack), got_NegRSDReply },
-	[P_CONN_ST_CHG_REPLY]={ sizeof(struct p_req_state_reply), got_conn_RqSReply },
-	[P_RETRY_WRITE]	    = { sizeof(struct p_block_ack), got_BlockAck },
+	[P_PING]	      = { 0, got_Ping },
+	[P_PING_ACK]	      = { 0, got_PingAck },
+	[P_RECV_ACK]	      = { sizeof(struct p_block_ack), got_BlockAck },
+	[P_WRITE_ACK]	      = { sizeof(struct p_block_ack), got_BlockAck },
+	[P_WRITE_ACK_IN_SYNC] = { sizeof(struct p_block_ack), got_BlockAck },
+	[P_NEG_ACK]	      = { sizeof(struct p_block_ack), got_NegAck },
+	[P_NEG_DREPLY]	      = { sizeof(struct p_block_ack), got_NegDReply },
+	[P_NEG_RS_DREPLY]     = { sizeof(struct p_block_ack), got_NegRSDReply },
+	[P_RS_WRITE_ACK]      = { sizeof(struct p_block_ack), got_RSWriteAck },
+	[P_RS_NEG_ACK]	      = { sizeof(struct p_block_ack), got_RSWriteAck },
+	[P_OV_RESULT]	      = { sizeof(struct p_block_ack), got_OVResult },
+	[P_OV_RESULT_ID]      = { sizeof(struct p_ov_result), got_OVResult },
+	[P_BARRIER_ACK]	      = { sizeof(struct p_barrier_ack), got_BarrierAck },
+	[P_CONFIRM_STABLE]    = { sizeof(struct p_confirm_stable), got_confirm_stable },
+	[P_STATE_CHG_REPLY]   = { sizeof(struct p_req_state_reply), got_RqSReply },
+	[P_RS_IS_IN_SYNC]     = { sizeof(struct p_block_ack), got_IsInSync },
+	[P_DELAY_PROBE]	      = { sizeof(struct p_delay_probe93), got_skip },
+	[P_RS_CANCEL]	      = { sizeof(struct p_block_ack), got_NegRSDReply },
+	[P_RS_CANCEL_AHEAD]   = { sizeof(struct p_block_ack), got_NegRSDReply },
+	[P_CONN_ST_CHG_REPLY] = { sizeof(struct p_req_state_reply), got_RqSReply },
+	[P_PEER_ACK]	      = { sizeof(struct p_peer_ack), got_peer_ack },
+	[P_PEERS_IN_SYNC]     = { sizeof(struct p_peer_block_desc), got_peers_in_sync },
+	[P_TWOPC_YES]	      = { sizeof(struct p_twopc_reply), got_twopc_reply },
+	[P_TWOPC_NO]	      = { sizeof(struct p_twopc_reply), got_twopc_reply },
+	[P_TWOPC_RETRY]	      = { sizeof(struct p_twopc_reply), got_twopc_reply },
+	[P_FLUSH_FORWARD]     = { sizeof(struct p_flush_forward), got_flush_forward },
 };
 
-int drbd_ack_receiver(struct drbd_thread *thi)
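+/* Move up to @need bytes from @pool into the partially filled @to_fill. */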
+static void fillup_buffer_from(struct drbd_mutable_buffer *to_fill, unsigned int need, struct drbd_const_buffer *pool)
 {
-	struct drbd_connection *connection = thi->connection;
-	struct meta_sock_cmd *cmd = NULL;
+	if (to_fill->avail < need) {
+		unsigned int missing = min(need - to_fill->avail, pool->avail);
+
+		memcpy(to_fill->buffer + to_fill->avail, pool->buffer, missing);
+		pool->buffer += missing;
+		pool->avail -= missing;
+		to_fill->avail += missing;
+	}
+}
+
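+/*
+ * Decode the header at @pos and validate the command against
+ * ack_receiver_tbl. Returns the expected payload size, or a negative error.
+ */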
+static int decode_meta_cmd(struct drbd_connection *connection, const u8 *pos, struct packet_info *pi)
+{
+	int header_version, payload_size;
+	struct meta_sock_cmd *cmd;
+
+	/*
+	 * A ping packet (on the control stream) can overtake the feature
+	 * packet, so it may arrive with a different header version than
+	 * expected: the protocol version is only agreed once the feature
+	 * packet has been received.
+	 */
+	header_version = __decode_header(pos, pi);
+	if (header_version < 0) {
+		drbd_err(connection, "Wrong magic value 0x%08x in protocol version %d [control]\n",
+			 be32_to_cpu(*(__be32 *)pos), header_version);
+		return -EINVAL;
+	}
+
+	if (pi->cmd >= ARRAY_SIZE(ack_receiver_tbl) ||
+	    !ack_receiver_tbl[pi->cmd].fn) {
+		drbd_err(connection, "Unexpected meta packet %s (0x%04x)\n",
+			 drbd_packet_name(pi->cmd), pi->cmd);
+		return -ENOENT;
+	}
+
+	cmd = &ack_receiver_tbl[pi->cmd];
+	payload_size = cmd->pkt_size;
+	if (pi->size != payload_size) {
+		drbd_err(connection, "Wrong packet size on meta (c: %d, l: %d)\n",
+			 pi->cmd, pi->size);
+		return -EINVAL;
+	}
+
+	return payload_size;
+}
+
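+/*
+ * A control packet may be split across receive callbacks. Complete the
+ * partial packet stashed in reassemble_buffer from @pool and process it.
+ * Returns 0 (also while the packet is still incomplete) or a negative error.
+ */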
+static int process_previous_part(struct drbd_connection *connection, struct drbd_const_buffer *pool)
+{
+	struct drbd_mutable_buffer *buffer = &connection->reassemble_buffer;
+	int payload_size, packet_size;
+	unsigned int header_size;
 	struct packet_info pi;
-	unsigned long pre_recv_jif;
-	int rv;
-	void *buf    = connection->meta.rbuf;
-	int received = 0;
-	unsigned int header_size = drbd_header_size(connection);
-	int expect   = header_size;
-	bool ping_timeout_active = false;
+	int err;
+
+	fillup_buffer_from(buffer, sizeof(u32), pool);
+	if (buffer->avail < sizeof(u32))
+		return 0;
 
-	sched_set_fifo_low(current);
+	header_size = decode_header_size(buffer->buffer);
+	fillup_buffer_from(buffer, header_size, pool);
+	if (buffer->avail < header_size)
+		return 0;
 
-	while (get_t_state(thi) == RUNNING) {
-		drbd_thread_current_set_cpu(thi);
+	payload_size = decode_meta_cmd(connection, buffer->buffer, &pi);
+	if (payload_size < 0)
+		return payload_size;
 
-		if (test_and_clear_bit(SEND_PING, &connection->flags)) {
-			if (drbd_send_ping(connection)) {
-				drbd_err(connection, "drbd_send_ping has failed\n");
-				goto reconnect;
-			}
-			set_ping_timeout(connection);
-			ping_timeout_active = true;
-		}
-
-		pre_recv_jif = jiffies;
-		rv = drbd_recv_short(connection->meta.socket, buf, expect-received, 0);
-
-		/* Note:
-		 * -EINTR	 (on meta) we got a signal
-		 * -EAGAIN	 (on meta) rcvtimeo expired
-		 * -ECONNRESET	 other side closed the connection
-		 * -ERESTARTSYS  (on data) we got a signal
-		 * rv <  0	 other than above: unexpected error!
-		 * rv == expected: full header or command
-		 * rv <  expected: "woken" by signal during receive
-		 * rv == 0	 : "connection shut down by peer"
-		 */
-		if (likely(rv > 0)) {
-			received += rv;
-			buf	 += rv;
-		} else if (rv == 0) {
-			if (test_bit(DISCONNECT_SENT, &connection->flags)) {
-				long t;
-				rcu_read_lock();
-				t = rcu_dereference(connection->net_conf)->ping_timeo * HZ/10;
-				rcu_read_unlock();
-
-				t = wait_event_timeout(connection->ping_wait,
-						       connection->cstate < C_WF_REPORT_PARAMS,
-						       t);
-				if (t)
-					break;
-			}
-			drbd_err(connection, "meta connection shut down by peer.\n");
+	packet_size = header_size + payload_size;
+	fillup_buffer_from(buffer, packet_size, pool);
+	if (buffer->avail < packet_size)
+		return 0;
+
+	err = ack_receiver_tbl[pi.cmd].fn(connection, &pi);
+	connection->reassemble_buffer.avail = 0;
+	return err;
+}
+
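+/*
+ * Transport callback for data arriving on the control stream: process as
+ * many complete packets as @pool contains and stash a trailing partial
+ * packet for the next call.
+ */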
+void drbd_control_data_ready(struct drbd_transport *transport, struct drbd_const_buffer *pool)
+{
+	struct drbd_connection *connection =
+		container_of(transport, struct drbd_connection, transport);
+	unsigned int header_size;
+	int err;
+
+	if (connection->cstate[NOW] < C_TEAR_DOWN)
+		return;
+
+	if (connection->reassemble_buffer.avail) {
+		err = process_previous_part(connection, pool);
+		if (err < 0)
 			goto reconnect;
-		} else if (rv == -EAGAIN) {
-			/* If the data socket received something meanwhile,
-			 * that is good enough: peer is still alive. */
-			if (time_after(connection->last_received, pre_recv_jif))
-				continue;
-			if (ping_timeout_active) {
-				drbd_err(connection, "PingAck did not arrive in time.\n");
-				goto reconnect;
-			}
-			set_bit(SEND_PING, &connection->flags);
-			continue;
-		} else if (rv == -EINTR) {
-			/* maybe drbd_thread_stop(): the while condition will notice.
-			 * maybe woken for send_ping: we'll send a ping above,
-			 * and change the rcvtimeo */
-			flush_signals(current);
-			continue;
-		} else {
-			drbd_err(connection, "sock_recvmsg returned %d\n", rv);
+	}
+
+	while (pool->avail >= sizeof(u32)) {
+		int payload_size, packet_size;
+		struct packet_info pi;
+
+		header_size = decode_header_size(pool->buffer);
+		if (header_size > pool->avail)
+			goto keep_part;
+
+		payload_size = decode_meta_cmd(connection, pool->buffer, &pi);
+		if (payload_size < 0) {
+			err = payload_size;
 			goto reconnect;
 		}
 
-		if (received == expect && cmd == NULL) {
-			if (decode_header(connection, connection->meta.rbuf, &pi))
-				goto reconnect;
-			cmd = &ack_receiver_tbl[pi.cmd];
-			if (pi.cmd >= ARRAY_SIZE(ack_receiver_tbl) || !cmd->fn) {
-				drbd_err(connection, "Unexpected meta packet %s (0x%04x)\n",
-					 cmdname(pi.cmd), pi.cmd);
-				goto disconnect;
-			}
-			expect = header_size + cmd->pkt_size;
-			if (pi.size != expect - header_size) {
-				drbd_err(connection, "Wrong packet size on meta (c: %d, l: %d)\n",
-					pi.cmd, pi.size);
-				goto reconnect;
-			}
-		}
-		if (received == expect) {
-			bool err;
+		packet_size = header_size + payload_size;
+		if (packet_size > pool->avail)
+			goto keep_part;
 
-			err = cmd->fn(connection, &pi);
-			if (err) {
-				drbd_err(connection, "%ps failed\n", cmd->fn);
-				goto reconnect;
-			}
+		err = ack_receiver_tbl[pi.cmd].fn(connection, &pi);
+		if (err)
+			goto reconnect;
 
-			connection->last_received = jiffies;
+		pool->buffer += packet_size;
+		pool->avail -= packet_size;
+	}
+	if (pool->avail > 0) {
+keep_part:
+		memcpy(connection->reassemble_buffer.buffer, pool->buffer, pool->avail);
+		connection->reassemble_buffer.avail = pool->avail;
+		pool->avail = 0;
+	}
+	return;
 
-			if (cmd == &ack_receiver_tbl[P_PING_ACK]) {
-				set_idle_timeout(connection);
-				ping_timeout_active = false;
-			}
+reconnect:
+	change_cstate(connection, err == -EPROTO ? C_PROTOCOL_ERROR : C_NETWORK_FAILURE, CS_HARD);
+}
+EXPORT_SYMBOL(drbd_control_data_ready);
+
+void drbd_control_event(struct drbd_transport *transport, enum drbd_tr_event event)
+{
+	struct drbd_connection *connection =
+		container_of(transport, struct drbd_connection, transport);
 
-			buf	 = connection->meta.rbuf;
-			received = 0;
-			expect	 = header_size;
-			cmd	 = NULL;
+	if (event == TIMEOUT) {
+		if (!test_bit(PING_TIMEOUT_ACTIVE, &connection->flags)) {
+			schedule_work(&connection->send_ping_work);
+			return;
+		}
+		if (connection->cstate[NOW] == C_CONNECTED)
+			drbd_warn(connection, "PingAck did not arrive in time.\n");
+	} else /* event == CLOSED_BY_PEER */ {
+		if (connection->cstate[NOW] == C_CONNECTED && disconnect_expected(connection))
+			return;
+		drbd_warn(connection, "meta connection shut down by peer.\n");
 	}
 
-	if (0) {
-reconnect:
-		conn_request_state(connection, NS(conn, C_NETWORK_FAILURE), CS_HARD);
-		conn_md_sync(connection);
-	}
-	if (0) {
-disconnect:
-		conn_request_state(connection, NS(conn, C_DISCONNECTING), CS_HARD);
-	}
+	change_cstate(connection, C_NETWORK_FAILURE, CS_HARD);
+}
+EXPORT_SYMBOL(drbd_control_event);
 
-	drbd_info(connection, "ack_receiver terminated\n");
+static bool disconnect_expected(struct drbd_connection *connection)
+{
+	struct drbd_resource *resource = connection->resource;
+	bool expect_disconnect;
 
-	return 0;
+	/* We are reacting to a state change that is not committed yet! The
+	   disconnect might still get aborted. This is not a problem worth more
+	   complex code: in the unlikely case that a two-phase commit of a
+	   graceful disconnect is aborted while the control connection breaks in
+	   exactly this time window, we will notice as soon as we send something
+	   on the control stream. */
+	read_lock_irq(&resource->state_rwlock);
+	expect_disconnect = resource->remote_state_change &&
+		drbd_twopc_between_peer_and_me(connection) &&
+		resource->twopc_reply.is_disconnect;
+	read_unlock_irq(&resource->state_rwlock);
+	return expect_disconnect;
 }
 
 void drbd_send_acks_wf(struct work_struct *ws)
 {
-	struct drbd_peer_device *peer_device =
-		container_of(ws, struct drbd_peer_device, send_acks_work);
-	struct drbd_connection *connection = peer_device->connection;
-	struct drbd_device *device = peer_device->device;
+	struct drbd_connection *connection =
+		container_of(ws, struct drbd_connection, send_acks_work);
+	struct drbd_transport *transport = &connection->transport;
 	struct net_conf *nc;
 	int tcp_cork, err;
 
 	rcu_read_lock();
-	nc = rcu_dereference(connection->net_conf);
+	nc = rcu_dereference(transport->net_conf);
 	tcp_cork = nc->tcp_cork;
 	rcu_read_unlock();
 
+	/* TODO: conditionally cork; it may hurt latency if we cork without
+	   much to send */
 	if (tcp_cork)
-		tcp_sock_set_cork(connection->meta.socket->sk, true);
+		drbd_cork(connection, CONTROL_STREAM);
+	err = drbd_finish_peer_reqs(connection);
 
-	err = drbd_finish_peer_reqs(device);
-	kref_put(&device->kref, drbd_destroy_device);
-	/* get is in drbd_endio_write_sec_final(). That is necessary to keep the
-	   struct work_struct send_acks_work alive, which is in the peer_device object */
+	/* on success, uncork again, but only if we corked above */
+	if (err)
+		change_cstate(connection, C_NETWORK_FAILURE, CS_HARD);
+	else if (tcp_cork)
+		drbd_uncork(connection, CONTROL_STREAM);
 
-	if (err) {
-		conn_request_state(connection, NS(conn, C_NETWORK_FAILURE), CS_HARD);
-		return;
-	}
+}
 
-	if (tcp_cork)
-		tcp_sock_set_cork(connection->meta.socket->sk, false);
+void drbd_send_peer_ack_wf(struct work_struct *ws)
+{
+	struct drbd_connection *connection =
+		container_of(ws, struct drbd_connection, peer_ack_work);
 
-	return;
+	if (process_peer_ack_list(connection))
+		change_cstate(connection, C_NETWORK_FAILURE, CS_HARD);
 }
diff --git a/drivers/block/drbd/drbd_transport.h b/drivers/block/drbd/drbd_transport.h
index ff393e8d12dc..b65950796f52 100644
--- a/drivers/block/drbd/drbd_transport.h
+++ b/drivers/block/drbd/drbd_transport.h
@@ -57,6 +57,7 @@
 struct drbd_resource;
 struct drbd_listener;
 struct drbd_transport;
+struct bio;
 
 enum drbd_stream {
 	DATA_STREAM,
@@ -136,12 +137,6 @@ struct drbd_transport_stats {
 	int send_buffer_used;
 };
 
-/* argument to ->recv_pages() */
-struct drbd_page_chain_head {
-	struct page *head;
-	unsigned int nr_pages;
-};
-
 struct drbd_const_buffer {
 	const u8 *buffer;
 	unsigned int avail;
@@ -208,18 +203,19 @@ struct drbd_transport_ops {
 	int (*recv)(struct drbd_transport *, enum drbd_stream, void **buf, size_t size, int flags);
 
 /**
- * recv_pages() - Receive bulk data via the transport's DATA_STREAM
+ * recv_bio() - Receive bulk data via the transport's DATA_STREAM into bios
  * @peer_device: Identify the transport and the device
- * @page_chain:	Here recv_pages() will place the page chain head and length
+ * @bios:	the bio_list to add received data to
  * @size:	Number of bytes to receive
  *
- * recv_pages() will return the requested amount of data from DATA_STREAM,
- * and place it into pages allocated with drbd_alloc_pages().
+ * recv_bio() receives the requested amount of data from DATA_STREAM. It
+ * allocates pages by using drbd_alloc_pages() and adds them to bios in the
+ * bio_list.
  *
  * Upon success the function returns 0. Upon error the function returns a
  * negative value
  */
-	int (*recv_pages)(struct drbd_transport *, struct drbd_page_chain_head *, size_t size);
+	int (*recv_bio)(struct drbd_transport *, struct bio_list *bios, size_t size);
 
 	void (*stats)(struct drbd_transport *, struct drbd_transport_stats *stats);
 /**
@@ -240,7 +236,7 @@ struct drbd_transport_ops {
 	long (*get_rcvtimeo)(struct drbd_transport *, enum drbd_stream);
 	int (*send_page)(struct drbd_transport *, enum drbd_stream, struct page *,
 			 int offset, size_t size, unsigned msg_flags);
-	int (*send_zc_bio)(struct drbd_transport *, struct bio *bio);
+	int (*send_bio)(struct drbd_transport *, struct bio *bio, unsigned int msg_flags);
 	bool (*stream_ok)(struct drbd_transport *, enum drbd_stream);
 	bool (*hint)(struct drbd_transport *, enum drbd_stream, enum drbd_tr_hints hint);
 	void (*debugfs_show)(struct drbd_transport *, struct seq_file *m);
@@ -324,6 +320,8 @@ void drbd_path_event(struct drbd_transport *transport, struct drbd_path *path);
 void drbd_listener_destroy(struct kref *kref);
 struct drbd_path *__drbd_next_path_ref(struct drbd_path *drbd_path,
 				       struct drbd_transport *transport);
+int drbd_bio_add_page(struct drbd_transport *transport, struct bio_list *bios,
+		      struct page *page, unsigned int len, unsigned int offset);
 
 /* Might restart iteration, if current element is removed from list!! */
 #define for_each_path_ref(path, transport)			\
@@ -332,112 +330,11 @@ struct drbd_path *__drbd_next_path_ref(struct drbd_path *drbd_path,
 	     path = __drbd_next_path_ref(path, transport))
 
 /* drbd_receiver.c*/
-struct page *drbd_alloc_pages(struct drbd_transport *transport,
-			      unsigned int number, gfp_t gfp_mask);
-void drbd_free_pages(struct drbd_transport *transport, struct page *page);
+struct page *drbd_alloc_pages(struct drbd_transport *transport, gfp_t gfp_mask, unsigned int size);
+void drbd_free_page(struct drbd_transport *transport, struct page *page);
 void drbd_control_data_ready(struct drbd_transport *transport,
 			     struct drbd_const_buffer *pool);
 void drbd_control_event(struct drbd_transport *transport,
 			enum drbd_tr_event event);
 
-static inline void drbd_alloc_page_chain(struct drbd_transport *t,
-	struct drbd_page_chain_head *chain, unsigned int nr, gfp_t gfp_flags)
-{
-	chain->head = drbd_alloc_pages(t, nr, gfp_flags);
-	chain->nr_pages = chain->head ? nr : 0;
-}
-
-static inline void drbd_free_page_chain(struct drbd_transport *transport,
-					struct drbd_page_chain_head *chain)
-{
-	drbd_free_pages(transport, chain->head);
-	chain->head = NULL;
-	chain->nr_pages = 0;
-}
-
-/*
- * Some helper functions to deal with our page chains.
- */
-/* Our transports may sometimes need to only partially use a page.
- * We need to express that somehow.  Use this struct, and "graft" it into
- * struct page at page->lru.
- *
- * According to include/linux/mm.h:
- *  | A page may be used by anyone else who does a __get_free_page().
- *  | In this case, page_count still tracks the references, and should only
- *  | be used through the normal accessor functions. The top bits of page->flags
- *  | and page->virtual store page management information, but all other fields
- *  | are unused and could be used privately, carefully. The management of this
- *  | page is the responsibility of the one who allocated it, and those who have
- *  | subsequently been given references to it.
- * (we do alloc_page(), that is equivalent).
- *
- * Red Hat struct page is different from upstream (layout and members) :(
- * So I am not too sure about the "all other fields", and it is not as easy to
- * find a place where sizeof(struct drbd_page_chain) would fit on all archs and
- * distribution-changed layouts.
- *
- * But (upstream) struct page also says:
- *  | struct list_head lru;   * ...
- *  |       * Can be used as a generic list
- *  |       * by the page owner.
- *
- * On 32bit, use unsigned short for offset and size,
- * to still fit in sizeof(page->lru).
- */
-
-/* grafted over struct page.lru */
-struct drbd_page_chain {
-	struct page *next;	/* next page in chain, if any */
-#ifdef CONFIG_64BIT
-	unsigned int offset;	/* start offset of data within this page */
-	unsigned int size;	/* number of data bytes within this page */
-#else
-#if PAGE_SIZE > (1U<<16)
-#error "won't work."
-#endif
-	unsigned short offset;	/* start offset of data within this page */
-	unsigned short size;	/* number of data bytes within this page */
-#endif
-};
-
-static inline void dummy_for_buildbug(void)
-{
-	struct page *dummy;
-	BUILD_BUG_ON(sizeof(struct drbd_page_chain) > sizeof(dummy->lru));
-}
-
-#define page_chain_next(page) \
-	(((struct drbd_page_chain *)&(page)->lru)->next)
-#define page_chain_size(page) \
-	(((struct drbd_page_chain *)&(page)->lru)->size)
-#define page_chain_offset(page) \
-	(((struct drbd_page_chain *)&(page)->lru)->offset)
-#define set_page_chain_next(page, v) \
-	(((struct drbd_page_chain *)&(page)->lru)->next = (v))
-#define set_page_chain_size(page, v) \
-	(((struct drbd_page_chain *)&(page)->lru)->size = (v))
-#define set_page_chain_offset(page, v) \
-	(((struct drbd_page_chain *)&(page)->lru)->offset = (v))
-#define set_page_chain_next_offset_size(page, n, o, s)		\
-	(*((struct drbd_page_chain *)&(page)->lru) =		\
-	((struct drbd_page_chain) {				\
-		.next = (n),					\
-		.offset = (o),					\
-		.size = (s),					\
-	 }))
-
-#define page_chain_for_each(page) \
-	for (; page && ({ prefetch(page_chain_next(page)); 1; }); \
-			page = page_chain_next(page))
-#define page_chain_for_each_safe(page, n) \
-	for (; page && ({ n = page_chain_next(page); 1; }); page = n)
-
-#ifndef SK_CAN_REUSE
-/* This constant was introduced by Pavel Emelyanov <xemul@parallels.com> on
-   Thu Apr 19 03:39:36 2012 +0000. Before the release of linux-3.5
-   commit 4a17fd52 sock: Introduce named constants for sk_reuse */
-#define SK_CAN_REUSE   1
-#endif
-
 #endif
diff --git a/drivers/block/drbd/drbd_transport_lb-tcp.c b/drivers/block/drbd/drbd_transport_lb-tcp.c
index 29f18df2be88..03ea93e7352f 100644
--- a/drivers/block/drbd/drbd_transport_lb-tcp.c
+++ b/drivers/block/drbd/drbd_transport_lb-tcp.c
@@ -121,7 +121,6 @@ struct dtl_path {
 	struct dtl_flow flow[2];
 };
 
-
 static int dtl_init(struct drbd_transport *transport);
 static void dtl_free(struct drbd_transport *transport, enum drbd_tr_free_op free_op);
 static void dtl_socket_free(struct drbd_transport *transport, struct socket **sock);
@@ -130,8 +129,7 @@ static int dtl_connect(struct drbd_transport *transport);
 static void dtl_finish_connect(struct drbd_transport *transport);
 static int dtl_recv(struct drbd_transport *transport, enum drbd_stream stream, void **buf,
 		    size_t size, int flags);
-static int dtl_recv_pages(struct drbd_transport *transport, struct drbd_page_chain_head *chain,
-			  size_t size);
+static int dtl_recv_bio(struct drbd_transport *transport, struct bio_list *bios, size_t size);
 static void dtl_stats(struct drbd_transport *transport, struct drbd_transport_stats *stats);
 static int dtl_net_conf_change(struct drbd_transport *transport, struct net_conf *new_net_conf);
 static void dtl_set_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream,
@@ -139,7 +137,7 @@ static void dtl_set_rcvtimeo(struct drbd_transport *transport, enum drbd_stream
 static long dtl_get_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream);
 static int dtl_send_page(struct drbd_transport *transport, enum drbd_stream, struct page *page,
 		int offset, size_t size, unsigned int msg_flags);
-static int dtl_send_zc_bio(struct drbd_transport *, struct bio *bio);
+static int dtl_send_bio(struct drbd_transport *, struct bio *bio, unsigned int msg_flags);
 static bool dtl_stream_ok(struct drbd_transport *transport, enum drbd_stream stream);
 static bool dtl_hint(struct drbd_transport *transport, enum drbd_stream stream,
 		     enum drbd_tr_hints hint);
@@ -173,13 +171,13 @@ static struct drbd_transport_class dtl_transport_class = {
 		.connect = dtl_connect,
 		.finish_connect = dtl_finish_connect,
 		.recv = dtl_recv,
-		.recv_pages = dtl_recv_pages,
+		.recv_bio = dtl_recv_bio,
 		.stats = dtl_stats,
 		.net_conf_change = dtl_net_conf_change,
 		.set_rcvtimeo = dtl_set_rcvtimeo,
 		.get_rcvtimeo = dtl_get_rcvtimeo,
 		.send_page = dtl_send_page,
-		.send_zc_bio = dtl_send_zc_bio,
+		.send_bio = dtl_send_bio,
 		.stream_ok = dtl_stream_ok,
 		.hint = dtl_hint,
 		.debugfs_show = dtl_debugfs_show,
@@ -470,7 +468,7 @@ _dtl_recv_page(struct dtl_transport *dtl_transport, struct page *page, int size)
 		if (err)
 			goto out;
 
-		err = dtl_recv_short(flow->sock, data, min(size, flow->recv_bytes), 0);
+		err = dtl_recv_short(flow->sock, pos, min(size, flow->recv_bytes), 0);
 		if (err < 0)
 			goto out;
 		size -= err;
@@ -484,36 +482,37 @@ _dtl_recv_page(struct dtl_transport *dtl_transport, struct page *page, int size)
 }
 
 static int
-dtl_recv_pages(struct drbd_transport *transport, struct drbd_page_chain_head *chain, size_t size)
+dtl_recv_bio(struct drbd_transport *transport, struct bio_list *bios, size_t size)
 {
 	struct dtl_transport *dtl_transport =
 		container_of(transport, struct dtl_transport, transport);
 	struct page *page;
 	int err;
 
-	drbd_alloc_page_chain(transport, chain, DIV_ROUND_UP(size, PAGE_SIZE), GFP_TRY);
-	page = chain->head;
-	if (!page)
-		return -ENOMEM;
+	do {
+		size_t len;
 
-	page_chain_for_each(page) {
-		size_t len = min_t(int, size, PAGE_SIZE);
+		page = drbd_alloc_pages(transport, GFP_KERNEL, size);
+		if (!page)
+			return -ENOMEM;
+		len = min(PAGE_SIZE << compound_order(page), size);
 
 		err = _dtl_recv_page(dtl_transport, page, len);
 		if (err < 0)
 			goto fail;
-		set_page_chain_offset(page, 0);
-		set_page_chain_size(page, len);
 		size -= err;
-	}
+		err = drbd_bio_add_page(transport, bios, page, len, 0);
+		if (err < 0)
+			goto fail;
+	} while (size > 0);
+
 	if (unlikely(size)) {
 		tr_warn(transport, "Not enough data received; missing %zu bytes\n", size);
-		err = -ENODATA;
-		goto fail;
+		return -ENODATA;
 	}
 	return 0;
 fail:
-	drbd_free_page_chain(transport, chain);
+	drbd_free_page(transport, page);
 	return err;
 }
 
@@ -1631,7 +1630,7 @@ static int dtl_select_send_flow(struct dtl_transport *dtl_transport,
 static int _dtl_send_page(struct dtl_transport *dtl_transport, struct dtl_flow *flow,
 			  struct page *page, int offset, size_t size, unsigned int msg_flags)
 {
-	struct msghdr msg = { .msg_flags = msg_flags | MSG_NOSIGNAL | MSG_SPLICE_PAGES };
+	struct msghdr msg = { .msg_flags = msg_flags | MSG_NOSIGNAL };
 	struct drbd_transport *transport = &dtl_transport->transport;
 	struct socket *sock = flow->sock;
 	struct bio_vec bvec;
@@ -1716,7 +1715,7 @@ static int dtl_bio_chunk_size_available(struct bio *bio, int wmem_available,
 }
 
 static int dtl_send_bio_pages(struct dtl_transport *dtl_transport, struct dtl_flow *flow,
-		struct bio *bio, struct bvec_iter *iter, int chunk)
+			struct bio *bio, struct bvec_iter *iter, int chunk, unsigned int msg_flags)
 {
 	struct bio_vec bvec;
 
@@ -1726,7 +1725,7 @@ static int dtl_send_bio_pages(struct dtl_transport *dtl_transport, struct dtl_fl
 		bvec = bio_iter_iovec(bio, *iter);
 		err = _dtl_send_page(dtl_transport, flow, bvec.bv_page,
 				bvec.bv_offset, bvec.bv_len,
-				bio_iter_last(bvec, *iter) ? 0 : MSG_MORE);
+				msg_flags | (bio_iter_last(bvec, *iter) ? 0 : MSG_MORE));
 		if (err)
 			return err;
 		chunk -= bvec.bv_len;
@@ -1736,7 +1735,8 @@ static int dtl_send_bio_pages(struct dtl_transport *dtl_transport, struct dtl_fl
 	return 0;
 }
 
-static int dtl_send_zc_bio(struct drbd_transport *transport, struct bio *bio)
+static int dtl_send_bio(struct drbd_transport *transport, struct bio *bio,
+			   unsigned int msg_flags)
 {
 	struct dtl_transport *dtl_transport =
 		container_of(transport, struct dtl_transport, transport);
@@ -1777,7 +1777,7 @@ static int dtl_send_zc_bio(struct drbd_transport *transport, struct bio *bio)
 				goto out;
 		}
 
-		err = dtl_send_bio_pages(dtl_transport, flow, bio, &iter, chunk);
+		err = dtl_send_bio_pages(dtl_transport, flow, bio, &iter, chunk, msg_flags);
 		if (err)
 			goto out;
 	} while (iter.bi_size);
diff --git a/drivers/block/drbd/drbd_transport_rdma.c b/drivers/block/drbd/drbd_transport_rdma.c
index fbdf6a4bcda9..69850bef34f8 100644
--- a/drivers/block/drbd/drbd_transport_rdma.c
+++ b/drivers/block/drbd/drbd_transport_rdma.c
@@ -322,8 +322,8 @@ static void dtr_set_rcvtimeo(struct drbd_transport *transport, enum drbd_stream
 static long dtr_get_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream);
 static int dtr_send_page(struct drbd_transport *transport, enum drbd_stream stream, struct page *page,
 		int offset, size_t size, unsigned msg_flags);
-static int dtr_send_zc_bio(struct drbd_transport *, struct bio *bio);
-static int dtr_recv_pages(struct drbd_transport *transport, struct drbd_page_chain_head *chain, size_t size);
+static int dtr_send_bio(struct drbd_transport *, struct bio *bio, unsigned int msg_flags);
+static int dtr_recv_bio(struct drbd_transport *transport, struct bio_list *bios, size_t size);
 static bool dtr_stream_ok(struct drbd_transport *transport, enum drbd_stream stream);
 static bool dtr_hint(struct drbd_transport *transport, enum drbd_stream stream, enum drbd_tr_hints hint);
 static void dtr_debugfs_show(struct drbd_transport *, struct seq_file *m);
@@ -392,8 +392,8 @@ static struct drbd_transport_class rdma_transport_class = {
 		.set_rcvtimeo = dtr_set_rcvtimeo,
 		.get_rcvtimeo = dtr_get_rcvtimeo,
 		.send_page = dtr_send_page,
-		.send_zc_bio = dtr_send_zc_bio,
-		.recv_pages = dtr_recv_pages,
+		.send_bio = dtr_send_bio,
+		.recv_bio = dtr_recv_bio,
 		.stream_ok = dtr_stream_ok,
 		.hint = dtr_hint,
 		.debugfs_show = dtr_debugfs_show,
@@ -609,13 +609,13 @@ static int dtr_send(struct dtr_path *path, void *buf, size_t size, gfp_t gfp_mas
 }
 
 
-static int dtr_recv_pages(struct drbd_transport *transport, struct drbd_page_chain_head *chain, size_t size)
+static int dtr_recv_bio(struct drbd_transport *transport, struct bio_list *bios, size_t size)
 {
 	struct dtr_transport *rdma_transport =
 		container_of(transport, struct dtr_transport, transport);
 	struct dtr_stream *rdma_stream = &rdma_transport->stream[DATA_STREAM];
-	struct page *page, *head = NULL, *tail = NULL;
-	int i = 0;
+	struct page *page;
+	int err, i = 0;
 
 	if (!dtr_transport_ok(transport))
 		return -ECONNRESET;
@@ -633,15 +633,8 @@ static int dtr_recv_pages(struct drbd_transport *transport, struct drbd_page_cha
 					dtr_receive_rx_desc(rdma_transport, DATA_STREAM, &rx_desc),
 					rdma_stream->recv_timeout);
 
-		if (t <= 0) {
-			/*
-			 * Cannot give back pages that may still be in use!
-			 * (More reason why we only have one rx_desc per page,
-			 * and don't get_page() in dtr_create_rx_desc).
-			 */
-			drbd_free_pages(transport, head);
+		if (t <= 0)
 			return t == 0 ? -EAGAIN : -EINTR;
-		}
 
 		page = rx_desc->page;
 		/* put_page() if we would get_page() in
@@ -655,24 +648,10 @@ static int dtr_recv_pages(struct drbd_transport *transport, struct drbd_page_cha
 		 * unaligned bvecs (as xfs often creates), rx_desc->size and
 		 * offset may well be not the PAGE_SIZE and 0 we hope for.
 		 */
-		if (tail) {
-			/* See also dtr_create_rx_desc().
-			 * For PAGE_SIZE > 4k, we may create several RR per page.
-			 * We cannot link a page to itself, though.
-			 *
-			 * Adding to size would be easy enough.
-			 * But what do we do about possible holes?
-			 * FIXME
-			 */
-			BUG_ON(page == tail);
 
-			set_page_chain_next(tail, page);
-			tail = page;
-		} else
-			head = tail = page;
-
-		set_page_chain_offset(page, 0);
-		set_page_chain_size(page, rx_desc->size);
+		err = drbd_bio_add_page(transport, bios, page, rx_desc->size, 0);
+		if (err < 0)
+			return err;
 
 		atomic_dec(&rx_desc->cm->path->flow[DATA_STREAM].rx_descs_allocated);
 		dtr_free_rx_desc(rx_desc);
@@ -682,8 +661,6 @@ static int dtr_recv_pages(struct drbd_transport *transport, struct drbd_page_cha
 	}
 
 	// pr_info("%s: rcvd %d pages\n", rdma_stream->name, i);
-	chain->head = head;
-	chain->nr_pages = i;
 	return 0;
 }
 
@@ -2023,7 +2000,7 @@ static void dtr_free_rx_desc(struct dtr_rx_desc *rx_desc)
 
 		/* put_page(), if we had more than one rx_desc per page,
 		 * but see comments in dtr_create_rx_desc */
-		drbd_free_pages(transport, rx_desc->page);
+		drbd_free_page(transport, rx_desc->page);
 	}
 	kfree(rx_desc);
 }
@@ -2032,23 +2009,17 @@ static int dtr_create_rx_desc(struct dtr_flow *flow, gfp_t gfp_mask, bool connec
 {
 	struct dtr_path *path = flow->path;
 	struct drbd_transport *transport = path->path.transport;
-	struct dtr_transport *rdma_transport =
-		container_of(transport, struct dtr_transport, transport);
 	struct dtr_rx_desc *rx_desc;
 	struct page *page;
-	int err, alloc_size = rdma_transport->rx_allocation_size;
-	int nr_pages = alloc_size / PAGE_SIZE;
+	int err;
 	struct dtr_cm *cm;
 
 	rx_desc = kzalloc_obj(*rx_desc, gfp_mask);
 	if (!rx_desc)
 		return -ENOMEM;
 
-	/* As of now, this MUST NEVER return a highmem page!
-	 * Which means no other user may ever have requested and then given
-	 * back a highmem page!
-	 */
-	page = drbd_alloc_pages(transport, nr_pages, gfp_mask);
+	/* Ignoring rdma_transport->rx_allocation_size for now! */
+	page = drbd_alloc_pages(transport, gfp_mask, PAGE_SIZE);
 	if (!page) {
 		kfree(rx_desc);
 		return -ENOMEM;
@@ -2066,14 +2037,14 @@ static int dtr_create_rx_desc(struct dtr_flow *flow, gfp_t gfp_mask, bool connec
 	rx_desc->page = page;
 	rx_desc->size = 0;
 	rx_desc->sge.lkey = dtr_cm_to_lkey(cm);
-	rx_desc->sge.addr = ib_dma_map_single(cm->id->device, page_address(page), alloc_size,
+	rx_desc->sge.addr = ib_dma_map_single(cm->id->device, page_address(page), PAGE_SIZE,
 					      DMA_FROM_DEVICE);
 	err = ib_dma_mapping_error(cm->id->device, rx_desc->sge.addr);
 	if (err) {
 		tr_err(transport, "ib_dma_map_single() failed %d\n", err);
 		goto out_put;
 	}
-	rx_desc->sge.length = alloc_size;
+	rx_desc->sge.length = PAGE_SIZE;
 
 	atomic_inc(&flow->rx_descs_allocated);
 	atomic_inc(&flow->rx_descs_posted);
@@ -2090,7 +2061,7 @@ static int dtr_create_rx_desc(struct dtr_flow *flow, gfp_t gfp_mask, bool connec
 	kref_put(&cm->kref, dtr_destroy_cm);
 out:
 	kfree(rx_desc);
-	drbd_free_pages(transport, page);
+	drbd_free_page(transport, page);
 	return err;
 }
 
@@ -3170,11 +3141,12 @@ static void dtr_update_congested(struct drbd_transport *transport)
 }
 
 static int dtr_send_page(struct drbd_transport *transport, enum drbd_stream stream,
-			 struct page *page, int offset, size_t size, unsigned msg_flags)
+			 struct page *caller_page, int offset, size_t size, unsigned int msg_flags)
 {
 	struct dtr_transport *rdma_transport =
 		container_of(transport, struct dtr_transport, transport);
 	struct dtr_tx_desc *tx_desc;
+	struct page *page;
 	int err;
 
 	// pr_info("%s: in send_page, size: %zu\n", rdma_stream->name, size);
@@ -3311,7 +3283,7 @@ static int dtr_send_bio_part(struct dtr_transport *rdma_transport,
 }
 #endif
 
-static int dtr_send_zc_bio(struct drbd_transport *transport, struct bio *bio)
+static int dtr_send_bio(struct drbd_transport *transport, struct bio *bio, unsigned int msg_flags)
 {
 #if SENDER_COMPACTS_BVECS
 	struct dtr_transport *rdma_transport =
@@ -3329,6 +3301,7 @@ static int dtr_send_zc_bio(struct drbd_transport *transport, struct bio *bio)
 		return -ECONNRESET;
 
 #if SENDER_COMPACTS_BVECS
+	/* TODO obey !MSG_SPLICE_PAGES in msg_flags */
 	bio_for_each_segment(bvec, bio, iter) {
 		size_tx_desc += bvec.bv_len;
 		//tr_info(transport, " bvec len = %d\n", bvec.bv_len);
@@ -3358,8 +3331,7 @@ static int dtr_send_zc_bio(struct drbd_transport *transport, struct bio *bio)
 #else
 	bio_for_each_segment(bvec, bio, iter) {
 		err = dtr_send_page(transport, DATA_STREAM,
-			bvec.bv_page, bvec.bv_offset, bvec.bv_len,
-			0 /* flags currently unused by dtr_send_page */);
+			bvec.bv_page, bvec.bv_offset, bvec.bv_len, msg_flags);
 		if (err)
 			break;
 	}
diff --git a/drivers/block/drbd/drbd_transport_tcp.c b/drivers/block/drbd/drbd_transport_tcp.c
index 5faa6b82c358..51169d7a5902 100644
--- a/drivers/block/drbd/drbd_transport_tcp.c
+++ b/drivers/block/drbd/drbd_transport_tcp.c
@@ -115,14 +115,14 @@ static int dtt_prepare_connect(struct drbd_transport *transport);
 static int dtt_connect(struct drbd_transport *transport);
 static void dtt_finish_connect(struct drbd_transport *transport);
 static int dtt_recv(struct drbd_transport *transport, enum drbd_stream stream, void **buf, size_t size, int flags);
-static int dtt_recv_pages(struct drbd_transport *transport, struct drbd_page_chain_head *chain, size_t size);
+static int dtt_recv_bio(struct drbd_transport *transport, struct bio_list *bios, size_t size);
 static void dtt_stats(struct drbd_transport *transport, struct drbd_transport_stats *stats);
 static int dtt_net_conf_change(struct drbd_transport *transport, struct net_conf *new_net_conf);
 static void dtt_set_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream, long timeout);
 static long dtt_get_rcvtimeo(struct drbd_transport *transport, enum drbd_stream stream);
 static int dtt_send_page(struct drbd_transport *transport, enum drbd_stream, struct page *page,
-		int offset, size_t size, unsigned msg_flags);
-static int dtt_send_zc_bio(struct drbd_transport *, struct bio *bio);
+		int offset, size_t size, unsigned int msg_flags);
+static int dtt_send_bio(struct drbd_transport *, struct bio *bio, unsigned int msg_flags);
 static bool dtt_stream_ok(struct drbd_transport *transport, enum drbd_stream stream);
 static bool dtt_hint(struct drbd_transport *transport, enum drbd_stream stream, enum drbd_tr_hints hint);
 static void dtt_debugfs_show(struct drbd_transport *transport, struct seq_file *m);
@@ -146,13 +146,13 @@ static struct drbd_transport_class tcp_transport_class = {
 		.connect = dtt_connect,
 		.finish_connect = dtt_finish_connect,
 		.recv = dtt_recv,
-		.recv_pages = dtt_recv_pages,
+		.recv_bio = dtt_recv_bio,
 		.stats = dtt_stats,
 		.net_conf_change = dtt_net_conf_change,
 		.set_rcvtimeo = dtt_set_rcvtimeo,
 		.get_rcvtimeo = dtt_get_rcvtimeo,
 		.send_page = dtt_send_page,
-		.send_zc_bio = dtt_send_zc_bio,
+		.send_bio = dtt_send_bio,
 		.stream_ok = dtt_stream_ok,
 		.hint = dtt_hint,
 		.debugfs_show = dtt_debugfs_show,
@@ -357,7 +357,8 @@ static int dtt_recv(struct drbd_transport *transport, enum drbd_stream stream, v
 	return rv;
 }
 
-static int dtt_recv_pages(struct drbd_transport *transport, struct drbd_page_chain_head *chain, size_t size)
+
+static int dtt_recv_bio(struct drbd_transport *transport, struct bio_list *bios, size_t size)
 {
 	struct drbd_tcp_transport *tcp_transport =
 		container_of(transport, struct drbd_tcp_transport, transport);
@@ -368,30 +369,30 @@ static int dtt_recv_pages(struct drbd_transport *transport, struct drbd_page_cha
 	if (!socket)
 		return -ENOTCONN;
 
-	drbd_alloc_page_chain(transport, chain, DIV_ROUND_UP(size, PAGE_SIZE), GFP_TRY);
-	page = chain->head;
-	if (!page)
-		return -ENOMEM;
+	do {
+		size_t len;
+
+		page = drbd_alloc_pages(transport, GFP_KERNEL, size);
+		if (!page)
+			return -ENOMEM;
+		len = min(PAGE_SIZE << compound_order(page), size);
 
-	page_chain_for_each(page) {
-		size_t len = min_t(int, size, PAGE_SIZE);
-		void *data = kmap(page);
-		err = dtt_recv_short(socket, data, len, 0);
-		kunmap(page);
-		set_page_chain_offset(page, 0);
-		set_page_chain_size(page, len);
+		err = dtt_recv_short(socket, page_address(page), len, 0);
 		if (err < 0)
 			goto fail;
 		size -= err;
-	}
+		err = drbd_bio_add_page(transport, bios, page, len, 0);
+		if (err < 0)
+			goto fail;
+	} while (size > 0);
+
 	if (unlikely(size)) {
 		tr_warn(transport, "Not enough data received; missing %zu bytes\n", size);
-		err = -ENODATA;
-		goto fail;
+		return -ENODATA;
 	}
 	return 0;
 fail:
-	drbd_free_page_chain(transport, chain);
+	drbd_free_page(transport, page);
 	return err;
 }
 
@@ -1492,7 +1493,7 @@ static int dtt_send_page(struct drbd_transport *transport, enum drbd_stream stre
 	struct drbd_tcp_transport *tcp_transport =
 		container_of(transport, struct drbd_tcp_transport, transport);
 	struct socket *socket = tcp_transport->stream[stream];
-	struct msghdr msg = { .msg_flags = msg_flags | MSG_NOSIGNAL | MSG_SPLICE_PAGES };
+	struct msghdr msg = { .msg_flags = msg_flags | MSG_NOSIGNAL };
 	struct bio_vec bvec;
 	int len = size;
 	int err = -EIO;
@@ -1537,7 +1538,7 @@ static int dtt_send_page(struct drbd_transport *transport, enum drbd_stream stre
 	return err;
 }
 
-static int dtt_send_zc_bio(struct drbd_transport *transport, struct bio *bio)
+static int dtt_send_bio(struct drbd_transport *transport, struct bio *bio, unsigned int msg_flags)
 {
 	struct bio_vec bvec;
 	struct bvec_iter iter;
@@ -1547,7 +1548,7 @@ static int dtt_send_zc_bio(struct drbd_transport *transport, struct bio *bio)
 
 		err = dtt_send_page(transport, DATA_STREAM, bvec.bv_page,
 				      bvec.bv_offset, bvec.bv_len,
-				      bio_iter_last(bvec, iter) ? 0 : MSG_MORE);
+				      msg_flags | (bio_iter_last(bvec, iter) ? 0 : MSG_MORE));
 		if (err)
 			return err;
 	}
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 18/20] drbd: rework netlink management interface for DRBD 9
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
                   ` (16 preceding siblings ...)
  2026-03-27 22:38 ` [PATCH 17/20] drbd: rework receiver for DRBD 9 transport and multi-peer protocol Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 19/20] drbd: update monitoring interfaces for multi-peer topology Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 20/20] drbd: remove BROKEN for DRBD Christoph Böhmwalder
  19 siblings, 0 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder, Joel Colledge

Rework the generic netlink administration interface to support
DRBD 9's multi-peer topology model.

Connections are now identified by peer node ID rather than address
pairs, and the admin API gains operations for creating/removing
peer connections and managing network paths within each connection.
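
For illustration, the connection lookup in the new drbd_adm_prepare()
then reduces to roughly the following (simplified sketch; the reply-skb
error reporting and the DRBD_ADM_NEED_* flag handling are omitted):

    /* sketch, condensed from drbd_adm_prepare() below */
    if (adm_ctx->peer_node_id != PEER_NODE_ID_UNSPECIFIED) {
        /* reject out-of-range ids and our own node id */
        if (adm_ctx->peer_node_id >= DRBD_NODE_ID_MAX ||
            adm_ctx->peer_node_id == adm_ctx->resource->res_opts.node_id)
            return ERR_INVALID_REQUEST;
        adm_ctx->connection =
            drbd_get_connection_by_node_id(adm_ctx->resource,
                                           adm_ctx->peer_node_id);
    }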

Add per-peer-device configuration, metadata slot reclamation, and
resource renaming as new administrative commands.

Lift role promotion to resource scope and use quorum-aware logic
with auto-promote timeout, replacing the per-device state machine.

Disk attach and detach gain support for per-peer bitmap slot allocation,
DAX/PMEM-backed metadata, and variable bitmap block sizes.

Resize and other multi-peer operations use the new transactional state
change API to coordinate across all peers atomically.
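
The shape of such a transaction, as used for example by the fencing
path in this patch (sketch only; the real callers also re-check the
reconnect counters before committing, and the tag string is a
caller-chosen label):

    unsigned long irq_flags;

    begin_state_change(resource, &irq_flags, CS_VERBOSE);
    __downgrade_peer_disk_states(connection, D_OUTDATED);
    if (connection->cstate[NOW] >= C_CONNECTED) {
        /* raced with a reconnect; throw the transaction away */
        abort_state_change(resource, &irq_flags);
    } else {
        /* "fence-peer": illustrative tag, chosen by the caller */
        end_state_change(resource, &irq_flags, "fence-peer");
    }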

The required capability for administrative commands changes from
CAP_NET_ADMIN to CAP_SYS_ADMIN, and the global genl_lock() serialization
is replaced by parallel_ops with fine-grained locking. Notifications
are extended to cover path-level state and detailed per-peer resync
progress.
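
Without the implicit genl_lock() serialization, drbd_adm_prepare() has
to pin the objects it looks up itself; the pattern used throughout is
(excerpt, simplified):

    rcu_read_lock();
    adm_ctx->device = minor_to_device(d_in->minor);
    if (adm_ctx->device)
        kref_get(&adm_ctx->device->kref);
    rcu_read_unlock();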

Co-developed-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Co-developed-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Co-developed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Co-developed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/drbd_nl.c | 7244 ++++++++++++++++++++++++----------
 1 file changed, 5183 insertions(+), 2061 deletions(-)

diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
index 463f57d33204..48abe5914889 100644
--- a/drivers/block/drbd/drbd_nl.c
+++ b/drivers/block/drbd/drbd_nl.c
@@ -19,66 +19,80 @@
 #include <linux/fs.h>
 #include <linux/file.h>
 #include <linux/slab.h>
-#include <linux/blkpg.h>
 #include <linux/cpumask.h>
+#include <linux/random.h>
 #include "drbd_int.h"
 #include "drbd_protocol.h"
-#include "drbd_req.h"
 #include "drbd_state_change.h"
-#include <linux/unaligned.h>
+#include "drbd_debugfs.h"
+#include "drbd_transport.h"
+#include "drbd_dax_pmem.h"
 #include <linux/drbd_limits.h>
 #include <linux/kthread.h>
-
+#include <linux/security.h>
 #include <net/genetlink.h>
+#include <net/sock.h>
+
+#include "drbd_meta_data.h"
+#include "drbd_legacy_84.h"
 
 /* .doit */
-// int drbd_adm_create_resource(struct sk_buff *skb, struct genl_info *info);
-// int drbd_adm_delete_resource(struct sk_buff *skb, struct genl_info *info);
-
-int drbd_adm_new_minor(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_del_minor(struct sk_buff *skb, struct genl_info *info);
-
-int drbd_adm_new_resource(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_del_resource(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_down(struct sk_buff *skb, struct genl_info *info);
-
-int drbd_adm_set_role(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_attach(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_disk_opts(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_detach(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_connect(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_net_opts(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_resize(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_start_ov(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_new_c_uuid(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_disconnect(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_invalidate(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_invalidate_peer(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_pause_sync(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_resume_sync(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_suspend_io(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_resume_io(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_outdate(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_resource_opts(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_get_status(struct sk_buff *skb, struct genl_info *info);
-int drbd_adm_get_timeout_type(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_new_minor(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_del_minor(struct sk_buff *skb, struct genl_info *info);
+
+static int drbd_adm_new_resource(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_del_resource(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_down(struct sk_buff *skb, struct genl_info *info);
+
+static int drbd_adm_set_role(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_attach(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_disk_opts(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_detach(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_connect(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_new_peer(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_del_peer(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_new_path(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_del_path(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_net_opts(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_peer_device_opts(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_resize(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_start_ov(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_new_c_uuid(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_disconnect(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_invalidate(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_invalidate_peer(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_pause_sync(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_resume_sync(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_suspend_io(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_resume_io(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_outdate(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_resource_opts(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_get_timeout_type(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_forget_peer(struct sk_buff *skb, struct genl_info *info);
+static int drbd_adm_rename_resource(struct sk_buff *skb, struct genl_info *info);
 /* .dumpit */
-int drbd_adm_get_status_all(struct sk_buff *skb, struct netlink_callback *cb);
-int drbd_adm_dump_resources(struct sk_buff *skb, struct netlink_callback *cb);
-int drbd_adm_dump_devices(struct sk_buff *skb, struct netlink_callback *cb);
-int drbd_adm_dump_devices_done(struct netlink_callback *cb);
-int drbd_adm_dump_connections(struct sk_buff *skb, struct netlink_callback *cb);
-int drbd_adm_dump_connections_done(struct netlink_callback *cb);
-int drbd_adm_dump_peer_devices(struct sk_buff *skb, struct netlink_callback *cb);
-int drbd_adm_dump_peer_devices_done(struct netlink_callback *cb);
-int drbd_adm_get_initial_state(struct sk_buff *skb, struct netlink_callback *cb);
+static int drbd_adm_dump_resources(struct sk_buff *skb, struct netlink_callback *cb);
+static int drbd_adm_dump_devices(struct sk_buff *skb, struct netlink_callback *cb);
+static int drbd_adm_dump_devices_done(struct netlink_callback *cb);
+static int drbd_adm_dump_connections(struct sk_buff *skb, struct netlink_callback *cb);
+static int drbd_adm_dump_connections_done(struct netlink_callback *cb);
+static int drbd_adm_dump_peer_devices(struct sk_buff *skb, struct netlink_callback *cb);
+static int drbd_adm_dump_peer_devices_done(struct netlink_callback *cb);
+static int drbd_adm_dump_paths(struct sk_buff *skb, struct netlink_callback *cb);
+static int drbd_adm_dump_paths_done(struct netlink_callback *cb);
+static int drbd_adm_get_initial_state(struct sk_buff *skb, struct netlink_callback *cb);
+static int drbd_adm_get_initial_state_done(struct netlink_callback *cb);
 
 #include "drbd_genl_api.h"
 #include "drbd_nla.h"
 #include <linux/genl_magic_func.h>
 
-static atomic_t drbd_genl_seq = ATOMIC_INIT(2); /* two. */
-static atomic_t notify_genl_seq = ATOMIC_INIT(2); /* two. */
+void drbd_enable_netns(void)
+{
+	drbd_genl_family.netnsok = true;
+}
+
+atomic_t drbd_genl_seq = ATOMIC_INIT(2); /* two. */
 
 DEFINE_MUTEX(notification_mutex);
 
@@ -110,11 +124,15 @@ static int drbd_msg_put_info(struct sk_buff *skb, const char *info)
 	if (err) {
 		nla_nest_cancel(skb, nla);
 		return err;
-	} else
-		nla_nest_end(skb, nla);
+	}
+	nla_nest_end(skb, nla);
 	return 0;
 }
 
+static int drbd_adm_finish(struct drbd_config_context *, struct genl_info *, int);
+
+extern struct genl_ops drbd_genl_ops[];
+
 __printf(2, 3)
 static int drbd_msg_sprintf_info(struct sk_buff *skb, const char *fmt, ...)
 {
@@ -122,6 +140,8 @@ static int drbd_msg_sprintf_info(struct sk_buff *skb, const char *fmt, ...)
 	struct nlattr *nla, *txt;
 	int err = -EMSGSIZE;
 	int len;
+	int aligned_len;
+	char *msg_buf;
 
 	nla = nla_nest_start_noflag(skb, DRBD_NLA_CFG_REPLY);
 	if (!nla)
@@ -132,30 +152,56 @@ static int drbd_msg_sprintf_info(struct sk_buff *skb, const char *fmt, ...)
 		nla_nest_cancel(skb, nla);
 		return err;
 	}
+	msg_buf = nla_data(txt);
 	va_start(args, fmt);
-	len = vscnprintf(nla_data(txt), 256, fmt, args);
+	len = vscnprintf(msg_buf, 256, fmt, args);
 	va_end(args);
 
 	/* maybe: retry with larger reserve, if truncated */
-	txt->nla_len = nla_attr_size(len+1);
-	nlmsg_trim(skb, (char*)txt + NLA_ALIGN(txt->nla_len));
+
+	/* zero-out padding bytes to avoid transmitting uninitialized bytes */
+	++len;
+	txt->nla_len = nla_attr_size(len);
+	aligned_len = NLA_ALIGN(len);
+	while (len < aligned_len) {
+		msg_buf[len] = '\0';
+		++len;
+	}
+	nlmsg_trim(skb, (char *) txt + NLA_ALIGN(txt->nla_len));
 	nla_nest_end(skb, nla);
 
 	return 0;
 }
 
+static bool need_sys_admin(u8 cmd)
+{
+	int i;
+	for (i = 0; i < ARRAY_SIZE(drbd_genl_ops); i++)
+		if (drbd_genl_ops[i].cmd == cmd)
+			return 0 != (drbd_genl_ops[i].flags & GENL_ADMIN_PERM);
+	return true;
+}
+
+static struct drbd_path *first_path(struct drbd_connection *connection)
+{
+	/* Ideally this function will be removed at a later point in time.
+	   It was introduced when replacing the single address pair
+	   with a list of address pairs (or paths). */
+
+	return list_first_or_null_rcu(&connection->transport.paths, struct drbd_path, list);
+}
+
 /* This would be a good candidate for a "pre_doit" hook,
  * and per-family private info->pointers.
  * But we need to stay compatible with older kernels.
  * If it returns successfully, adm_ctx members are valid.
- *
- * At this point, we still rely on the global genl_lock().
- * If we want to avoid that, and allow "genl_family.parallel_ops", we may need
- * to add additional synchronization against object destruction/modification.
  */
-#define DRBD_ADM_NEED_MINOR	1
-#define DRBD_ADM_NEED_RESOURCE	2
-#define DRBD_ADM_NEED_CONNECTION 4
+#define DRBD_ADM_NEED_MINOR        (1 << 0)
+#define DRBD_ADM_NEED_RESOURCE     (1 << 1)
+#define DRBD_ADM_NEED_CONNECTION   (1 << 2)
+#define DRBD_ADM_NEED_PEER_DEVICE  (1 << 3)
+#define DRBD_ADM_NEED_PEER_NODE    (1 << 4)
+#define DRBD_ADM_IGNORE_VERSION    (1 << 5)
 static int drbd_adm_prepare(struct drbd_config_context *adm_ctx,
 	struct sk_buff *skb, struct genl_info *info, unsigned flags)
 {
@@ -165,9 +211,15 @@ static int drbd_adm_prepare(struct drbd_config_context *adm_ctx,
 
 	memset(adm_ctx, 0, sizeof(*adm_ctx));
 
-	/* genl_rcv_msg only checks for CAP_NET_ADMIN on "GENL_ADMIN_PERM" :( */
-	if (cmd != DRBD_ADM_GET_STATUS && !capable(CAP_NET_ADMIN))
-	       return -EPERM;
+	adm_ctx->net = sock_net(skb->sk);
+
+	/*
+	 * genl_rcv_msg() only checks for CAP_NET_ADMIN on commands with the
+	 * GENL_ADMIN_PERM flag set; we additionally require CAP_SYS_ADMIN for
+	 * administrative commands.
+	 */
+	if (need_sys_admin(cmd) && !capable(CAP_SYS_ADMIN))
+		return -EPERM;
 
 	adm_ctx->reply_skb = genlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL);
 	if (!adm_ctx->reply_skb) {
@@ -184,14 +236,29 @@ static int drbd_adm_prepare(struct drbd_config_context *adm_ctx,
 		goto fail;
 	}
 
+	if (info->genlhdr->version != GENL_MAGIC_VERSION && (flags & DRBD_ADM_IGNORE_VERSION) == 0) {
+		drbd_msg_put_info(adm_ctx->reply_skb, "Wrong API version, upgrade your drbd utils.");
+		err = -EINVAL;
+		goto fail;
+	}
+
+	if (flags & DRBD_ADM_NEED_PEER_DEVICE)
+		flags |= DRBD_ADM_NEED_CONNECTION;
+	if (flags & DRBD_ADM_NEED_CONNECTION)
+		flags |= DRBD_ADM_NEED_PEER_NODE;
+	if (flags & DRBD_ADM_NEED_PEER_NODE)
+		flags |= DRBD_ADM_NEED_RESOURCE;
+
 	adm_ctx->reply_dh->minor = d_in->minor;
 	adm_ctx->reply_dh->ret_code = NO_ERROR;
 
 	adm_ctx->volume = VOLUME_UNSPECIFIED;
+	adm_ctx->peer_node_id = PEER_NODE_ID_UNSPECIFIED;
 	if (info->attrs[DRBD_NLA_CFG_CONTEXT]) {
 		struct nlattr *nla;
+		struct nlattr **nested_attr_tb;
 		/* parse and validate only */
-		err = drbd_cfg_context_from_attrs(NULL, info);
+		err = drbd_cfg_context_ntb_from_attrs(&nested_attr_tb, info);
 		if (err)
 			goto fail;
 
@@ -207,108 +274,148 @@ static int drbd_adm_prepare(struct drbd_config_context *adm_ctx,
 		nla = nested_attr_tb[__nla_type(T_ctx_volume)];
 		if (nla)
 			adm_ctx->volume = nla_get_u32(nla);
+		nla = nested_attr_tb[__nla_type(T_ctx_peer_node_id)];
+		if (nla)
+			adm_ctx->peer_node_id = nla_get_u32(nla);
 		nla = nested_attr_tb[__nla_type(T_ctx_resource_name)];
 		if (nla)
 			adm_ctx->resource_name = nla_data(nla);
-		adm_ctx->my_addr = nested_attr_tb[__nla_type(T_ctx_my_addr)];
-		adm_ctx->peer_addr = nested_attr_tb[__nla_type(T_ctx_peer_addr)];
-		if ((adm_ctx->my_addr &&
-		     nla_len(adm_ctx->my_addr) > sizeof(adm_ctx->connection->my_addr)) ||
-		    (adm_ctx->peer_addr &&
-		     nla_len(adm_ctx->peer_addr) > sizeof(adm_ctx->connection->peer_addr))) {
-			err = -EINVAL;
-			goto fail;
-		}
+		kfree(nested_attr_tb);
+	}
+
+	if (adm_ctx->resource_name) {
+		adm_ctx->resource = drbd_find_resource(adm_ctx->resource_name);
 	}
 
 	adm_ctx->minor = d_in->minor;
+	rcu_read_lock();
 	adm_ctx->device = minor_to_device(d_in->minor);
-
-	/* We are protected by the global genl_lock().
-	 * But we may explicitly drop it/retake it in drbd_adm_set_role(),
-	 * so make sure this object stays around. */
-	if (adm_ctx->device)
+	if (adm_ctx->device) {
 		kref_get(&adm_ctx->device->kref);
-
-	if (adm_ctx->resource_name) {
-		adm_ctx->resource = drbd_find_resource(adm_ctx->resource_name);
 	}
+	rcu_read_unlock();
 
 	if (!adm_ctx->device && (flags & DRBD_ADM_NEED_MINOR)) {
 		drbd_msg_put_info(adm_ctx->reply_skb, "unknown minor");
-		return ERR_MINOR_INVALID;
+		err = ERR_MINOR_INVALID;
+		goto finish;
 	}
 	if (!adm_ctx->resource && (flags & DRBD_ADM_NEED_RESOURCE)) {
 		drbd_msg_put_info(adm_ctx->reply_skb, "unknown resource");
+		err = ERR_INVALID_REQUEST;
 		if (adm_ctx->resource_name)
-			return ERR_RES_NOT_KNOWN;
-		return ERR_INVALID_REQUEST;
+			err = ERR_RES_NOT_KNOWN;
+		goto finish;
 	}
-
-	if (flags & DRBD_ADM_NEED_CONNECTION) {
-		if (adm_ctx->resource) {
-			drbd_msg_put_info(adm_ctx->reply_skb, "no resource name expected");
-			return ERR_INVALID_REQUEST;
+	if (adm_ctx->peer_node_id != PEER_NODE_ID_UNSPECIFIED) {
+		/* peer_node_id is unsigned int */
+		if (adm_ctx->peer_node_id >= DRBD_NODE_ID_MAX) {
+			drbd_msg_put_info(adm_ctx->reply_skb, "peer node id out of range");
+			err = ERR_INVALID_REQUEST;
+			goto finish;
 		}
-		if (adm_ctx->device) {
-			drbd_msg_put_info(adm_ctx->reply_skb, "no minor number expected");
-			return ERR_INVALID_REQUEST;
+		if (!adm_ctx->resource) {
+			drbd_msg_put_info(adm_ctx->reply_skb,
+					"peer node id given without a resource");
+			err = ERR_INVALID_REQUEST;
+			goto finish;
+		}
+		if (adm_ctx->peer_node_id == adm_ctx->resource->res_opts.node_id) {
+			drbd_msg_put_info(adm_ctx->reply_skb, "peer node id cannot be my own node id");
+			err = ERR_INVALID_REQUEST;
+			goto finish;
 		}
-		if (adm_ctx->my_addr && adm_ctx->peer_addr)
-			adm_ctx->connection = conn_get_by_addrs(nla_data(adm_ctx->my_addr),
-							  nla_len(adm_ctx->my_addr),
-							  nla_data(adm_ctx->peer_addr),
-							  nla_len(adm_ctx->peer_addr));
+		adm_ctx->connection = drbd_get_connection_by_node_id(adm_ctx->resource, adm_ctx->peer_node_id);
+	} else if (flags & DRBD_ADM_NEED_PEER_NODE) {
+		drbd_msg_put_info(adm_ctx->reply_skb, "peer node id missing");
+		err = ERR_INVALID_REQUEST;
+		goto finish;
+	}
+	if (flags & DRBD_ADM_NEED_CONNECTION) {
 		if (!adm_ctx->connection) {
 			drbd_msg_put_info(adm_ctx->reply_skb, "unknown connection");
-			return ERR_INVALID_REQUEST;
+			err = ERR_INVALID_REQUEST;
+			goto finish;
 		}
 	}
+	if (flags & DRBD_ADM_NEED_PEER_DEVICE) {
+		rcu_read_lock();
+		if (adm_ctx->volume != VOLUME_UNSPECIFIED)
+			adm_ctx->peer_device =
+				idr_find(&adm_ctx->connection->peer_devices,
+					 adm_ctx->volume);
+		if (!adm_ctx->peer_device) {
+			drbd_msg_put_info(adm_ctx->reply_skb, "unknown volume");
+			err = ERR_INVALID_REQUEST;
+			rcu_read_unlock();
+			goto finish;
+		}
+		if (!adm_ctx->device) {
+			adm_ctx->device = adm_ctx->peer_device->device;
+			kref_get(&adm_ctx->device->kref);
+		}
+		rcu_read_unlock();
+	}
 
 	/* some more paranoia, if the request was over-determined */
 	if (adm_ctx->device && adm_ctx->resource &&
 	    adm_ctx->device->resource != adm_ctx->resource) {
 		pr_warn("request: minor=%u, resource=%s; but that minor belongs to resource %s\n",
-			adm_ctx->minor, adm_ctx->resource->name,
-			adm_ctx->device->resource->name);
+				adm_ctx->minor, adm_ctx->resource->name,
+				adm_ctx->device->resource->name);
 		drbd_msg_put_info(adm_ctx->reply_skb, "minor exists in different resource");
-		return ERR_INVALID_REQUEST;
+		err = ERR_INVALID_REQUEST;
+		goto finish;
 	}
 	if (adm_ctx->device &&
 	    adm_ctx->volume != VOLUME_UNSPECIFIED &&
 	    adm_ctx->volume != adm_ctx->device->vnr) {
 		pr_warn("request: minor=%u, volume=%u; but that minor is volume %u in %s\n",
-			adm_ctx->minor, adm_ctx->volume,
-			adm_ctx->device->vnr, adm_ctx->device->resource->name);
+				adm_ctx->minor, adm_ctx->volume,
+				adm_ctx->device->vnr,
+				adm_ctx->device->resource->name);
 		drbd_msg_put_info(adm_ctx->reply_skb, "minor exists as different volume");
-		return ERR_INVALID_REQUEST;
+		err = ERR_INVALID_REQUEST;
+		goto finish;
+	}
+	if (adm_ctx->device && adm_ctx->peer_device &&
+	    adm_ctx->resource && adm_ctx->resource->name &&
+	    adm_ctx->peer_device->device != adm_ctx->device) {
+		drbd_msg_put_info(adm_ctx->reply_skb, "peer_device->device != device");
+		pr_warn("request: minor=%u, resource=%s, volume=%u, peer_node=%u; device != peer_device->device\n",
+				adm_ctx->minor, adm_ctx->resource->name,
+				adm_ctx->device->vnr, adm_ctx->peer_node_id);
+		err = ERR_INVALID_REQUEST;
+		goto finish;
 	}
 
 	/* still, provide adm_ctx->resource always, if possible. */
 	if (!adm_ctx->resource) {
 		adm_ctx->resource = adm_ctx->device ? adm_ctx->device->resource
 			: adm_ctx->connection ? adm_ctx->connection->resource : NULL;
-		if (adm_ctx->resource)
+		if (adm_ctx->resource) {
 			kref_get(&adm_ctx->resource->kref);
+		}
 	}
-
 	return NO_ERROR;
 
 fail:
 	nlmsg_free(adm_ctx->reply_skb);
 	adm_ctx->reply_skb = NULL;
 	return err;
+
+finish:
+	return drbd_adm_finish(adm_ctx, info, err);
 }
 
-static int drbd_adm_finish(struct drbd_config_context *adm_ctx,
-	struct genl_info *info, int retcode)
+static int drbd_adm_finish(struct drbd_config_context *adm_ctx, struct genl_info *info, int retcode)
 {
 	if (adm_ctx->device) {
 		kref_put(&adm_ctx->device->kref, drbd_destroy_device);
 		adm_ctx->device = NULL;
 	}
 	if (adm_ctx->connection) {
-		kref_put(&adm_ctx->connection->kref, &drbd_destroy_connection);
+		kref_put(&adm_ctx->connection->kref, drbd_destroy_connection);
 		adm_ctx->connection = NULL;
 	}
 	if (adm_ctx->resource) {
@@ -321,220 +428,404 @@ static int drbd_adm_finish(struct drbd_config_context *adm_ctx,
 
 	adm_ctx->reply_dh->ret_code = retcode;
 	drbd_adm_send_reply(adm_ctx->reply_skb, info);
+	adm_ctx->reply_skb = NULL;
 	return 0;
 }
 
-static void setup_khelper_env(struct drbd_connection *connection, char **envp)
+static void conn_md_sync(struct drbd_connection *connection)
 {
-	char *afs;
+	struct drbd_peer_device *peer_device;
+	int vnr;
 
-	/* FIXME: A future version will not allow this case. */
-	if (connection->my_addr_len == 0 || connection->peer_addr_len == 0)
-		return;
+	rcu_read_lock();
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		struct drbd_device *device = peer_device->device;
+		kref_get(&device->kref);
+		rcu_read_unlock();
+		drbd_md_sync_if_dirty(device);
+		kref_put(&device->kref, drbd_destroy_device);
+		rcu_read_lock();
+	}
+	rcu_read_unlock();
+}
+
+/* Try to figure out where we are happy to become primary.
+   This is used by the crm-fence-peer mechanism.
+*/
+static u64 up_to_date_nodes(struct drbd_device *device, bool op_is_fence)
+{
+	struct drbd_resource *resource = device->resource;
+	const int my_node_id = resource->res_opts.node_id;
+	u64 mask = NODE_MASK(my_node_id);
+
+	if (resource->role[NOW] == R_PRIMARY || op_is_fence) {
+		struct drbd_peer_device *peer_device;
+
+		rcu_read_lock();
+		for_each_peer_device_rcu(peer_device, device) {
+			enum drbd_disk_state pdsk = peer_device->disk_state[NOW];
+			if (pdsk == D_UP_TO_DATE)
+				mask |= NODE_MASK(peer_device->node_id);
+		}
+		rcu_read_unlock();
+	} else if (device->disk_state[NOW] == D_UP_TO_DATE) {
+		struct drbd_peer_md *peer_md = device->ldev->md.peers;
+		int node_id;
+
+		for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+			struct drbd_peer_device *peer_device;
+			if (node_id == my_node_id)
+				continue;
 
-	switch (((struct sockaddr *)&connection->peer_addr)->sa_family) {
+			peer_device = peer_device_by_node_id(device, node_id);
+
+			if ((peer_device && peer_device->disk_state[NOW] == D_UP_TO_DATE) ||
+			    (peer_md[node_id].flags & MDF_NODE_EXISTS &&
+			     peer_md[node_id].bitmap_uuid == 0))
+				mask |= NODE_MASK(node_id);
+		}
+	} else
+		mask = 0;
+
+	return mask;
+}
+
+/* Buffer to construct the environment of a user-space helper in. */
+struct env {
+	char *buffer;
+	int size, pos;
+};
+
+/* Print into an env buffer. */
+static __printf(2, 3) int env_print(struct env *env, const char *fmt, ...)
+{
+	va_list args;
+	int pos, ret;
+
+	pos = env->pos;
+	if (pos < 0)
+		return pos;
+	va_start(args, fmt);
+	ret = vsnprintf(env->buffer + pos, env->size - pos, fmt, args);
+	va_end(args);
+	if (ret < 0) {
+		env->pos = ret;
+		goto out;
+	}
+	if (ret >= env->size - pos) {
+		ret = env->pos = -ENOMEM;
+		goto out;
+	}
+	env->pos += ret + 1;
+    out:
+	return ret;
+}
+
+/* Put env variables for an address into an env buffer. */
+static void env_print_address(struct env *env, const char *prefix,
+			      struct sockaddr_storage *storage)
+{
+	const char *afs;
+
+	switch (storage->ss_family) {
 	case AF_INET6:
 		afs = "ipv6";
-		snprintf(envp[4], 60, "DRBD_PEER_ADDRESS=%pI6",
-			 &((struct sockaddr_in6 *)&connection->peer_addr)->sin6_addr);
+		env_print(env, "%sADDRESS=%pI6", prefix,
+			  &((struct sockaddr_in6 *)storage)->sin6_addr);
 		break;
 	case AF_INET:
 		afs = "ipv4";
-		snprintf(envp[4], 60, "DRBD_PEER_ADDRESS=%pI4",
-			 &((struct sockaddr_in *)&connection->peer_addr)->sin_addr);
+		env_print(env, "%sADDRESS=%pI4", prefix,
+			  &((struct sockaddr_in *)storage)->sin_addr);
 		break;
 	default:
 		afs = "ssocks";
-		snprintf(envp[4], 60, "DRBD_PEER_ADDRESS=%pI4",
-			 &((struct sockaddr_in *)&connection->peer_addr)->sin_addr);
+		env_print(env, "%sADDRESS=%pI4", prefix,
+			  &((struct sockaddr_in *)storage)->sin_addr);
 	}
-	snprintf(envp[3], 20, "DRBD_PEER_AF=%s", afs);
+	env_print(env, "%sAF=%s", prefix, afs);
+}
+
+/* Construct char **envp inside an env buffer. */
+static char **make_envp(struct env *env)
+{
+	char **envp, *b;
+	unsigned int n;
+
+	if (env->pos < 0)
+		return NULL;
+	if (env->pos >= env->size)
+		goto out_nomem;
+	env->buffer[env->pos++] = 0;
+	for (b = env->buffer, n = 1; *b; n++)
+		b = strchr(b, 0) + 1;
+	if (env->size - env->pos < sizeof(envp) * n)
+		goto out_nomem;
+	envp = (char **)(env->buffer + env->size) - n;
+
+	for (b = env->buffer; *b; ) {
+		*envp++ = b;
+		b = strchr(b, 0) + 1;
+	}
+	*envp++ = NULL;
+	return envp - n;
+
+    out_nomem:
+	env->pos = -ENOMEM;
+	return NULL;
 }
 
-int drbd_khelper(struct drbd_device *device, char *cmd)
+/* Macro refers to local variables peer_device, device and connection! */
+#define magic_printk(level, fmt, args...)				\
+	do {								\
+		if (peer_device)					\
+			drbd_printk(NOLIMIT, level, peer_device, fmt, args);	\
+		else if (device)					\
+			drbd_printk(NOLIMIT, level, device, fmt, args);	\
+		else							\
+			drbd_printk(NOLIMIT, level, connection, fmt, args);	\
+	} while (0)
+
+static int drbd_khelper(struct drbd_device *device, struct drbd_connection *connection, char *cmd)
 {
-	char *envp[] = { "HOME=/",
-			"TERM=linux",
-			"PATH=/sbin:/usr/sbin:/bin:/usr/bin",
-			 (char[20]) { }, /* address family */
-			 (char[60]) { }, /* address */
-			NULL };
-	char mb[14];
-	char *argv[] = {drbd_usermode_helper, cmd, mb, NULL };
-	struct drbd_connection *connection = first_peer_device(device)->connection;
-	struct sib_info sib;
+	struct drbd_resource *resource = device ? device->resource : connection->resource;
+	char *argv[] = { drbd_usermode_helper, cmd, resource->name, NULL };
+	struct drbd_peer_device *peer_device = NULL;
+	struct env env = { .size = PAGE_SIZE };
+	char **envp;
 	int ret;
 
-	if (current == connection->worker.task)
-		set_bit(CALLBACK_PENDING, &connection->flags);
+    enlarge_buffer:
+	env.buffer = (char *)__get_free_pages(GFP_NOIO, get_order(env.size));
+	if (!env.buffer) {
+		ret = -ENOMEM;
+		goto out_err;
+	}
+	env.pos = 0;
+
+	rcu_read_lock();
+	env_print(&env, "HOME=/");
+	env_print(&env, "TERM=linux");
+	env_print(&env, "PATH=/sbin:/usr/sbin:/bin:/usr/bin");
+	if (device) {
+		env_print(&env, "DRBD_MINOR=%u", device->minor);
+		env_print(&env, "DRBD_VOLUME=%u", device->vnr);
+		if (get_ldev(device)) {
+			struct disk_conf *disk_conf =
+				rcu_dereference(device->ldev->disk_conf);
+			env_print(&env, "DRBD_BACKING_DEV=%s",
+				  disk_conf->backing_dev);
+			put_ldev(device);
+		}
+	}
+	if (connection) {
+		struct drbd_path *path;
+
+		rcu_read_lock();
+		path = first_path(connection);
+		if (path) {
+			/* TO BE DELETED */
+			env_print_address(&env, "DRBD_MY_", &path->my_addr);
+			env_print_address(&env, "DRBD_PEER_", &path->peer_addr);
+		}
+		rcu_read_unlock();
+
+		env_print(&env, "DRBD_PEER_NODE_ID=%u", connection->peer_node_id);
+		env_print(&env, "DRBD_CSTATE=%s", drbd_conn_str(connection->cstate[NOW]));
+	}
+	if (connection && !device) {
+		struct drbd_peer_device *peer_device;
+		int vnr;
+
+		idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+			struct drbd_device *device = peer_device->device;
+
+			env_print(&env, "DRBD_MINOR_%u=%u",
+				  vnr, peer_device->device->minor);
+			if (get_ldev(device)) {
+				struct disk_conf *disk_conf =
+					rcu_dereference(device->ldev->disk_conf);
+				env_print(&env, "DRBD_BACKING_DEV_%u=%s",
+					  vnr, disk_conf->backing_dev);
+				put_ldev(device);
+			}
+		}
+	}
+	rcu_read_unlock();
+
+	if (strstr(cmd, "fence")) {
+		bool op_is_fence = strcmp(cmd, "fence-peer") == 0;
+		struct drbd_peer_device *peer_device;
+		u64 mask = -1ULL;
+		int vnr;
+
+		idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+			struct drbd_device *device = peer_device->device;
+
+			if (get_ldev(device)) {
+				u64 m = up_to_date_nodes(device, op_is_fence);
+				if (m)
+					mask &= m;
+				put_ldev(device);
+				/* Yes, we outright ignore volumes that are not
+				   up-to-date on even a single node. */
+			}
+		}
+		env_print(&env, "UP_TO_DATE_NODES=0x%08llX", mask);
+	}
+
+	envp = make_envp(&env);
+	if (!envp) {
+		if (env.pos == -ENOMEM) {
+			free_pages((unsigned long)env.buffer, get_order(env.size));
+			env.size += PAGE_SIZE;
+			goto enlarge_buffer;
+		}
+		ret = env.pos;
+		goto out_err;
+	}
 
-	snprintf(mb, 14, "minor-%d", device_to_minor(device));
-	setup_khelper_env(connection, envp);
+	if (current == resource->worker.task)
+		set_bit(CALLBACK_PENDING, &resource->flags);
 
 	/* The helper may take some time.
 	 * write out any unsynced meta data changes now */
-	drbd_md_sync(device);
+	if (device)
+		drbd_md_sync_if_dirty(device);
+	else if (connection)
+		conn_md_sync(connection);
+
+	if (connection && device)
+		peer_device = conn_peer_device(connection, device->vnr);
 
-	drbd_info(device, "helper command: %s %s %s\n", drbd_usermode_helper, cmd, mb);
-	sib.sib_reason = SIB_HELPER_PRE;
-	sib.helper_name = cmd;
-	drbd_bcast_event(device, &sib);
+	magic_printk(KERN_INFO, "helper command: %s %s\n", drbd_usermode_helper, cmd);
 	notify_helper(NOTIFY_CALL, device, connection, cmd, 0);
 	ret = call_usermodehelper(drbd_usermode_helper, argv, envp, UMH_WAIT_PROC);
 	if (ret)
-		drbd_warn(device, "helper command: %s %s %s exit code %u (0x%x)\n",
-				drbd_usermode_helper, cmd, mb,
-				(ret >> 8) & 0xff, ret);
+		magic_printk(KERN_WARNING,
+			     "helper command: %s %s exit code %u (0x%x)\n",
+			     drbd_usermode_helper, cmd,
+			     (ret >> 8) & 0xff, ret);
 	else
-		drbd_info(device, "helper command: %s %s %s exit code %u (0x%x)\n",
-				drbd_usermode_helper, cmd, mb,
-				(ret >> 8) & 0xff, ret);
-	sib.sib_reason = SIB_HELPER_POST;
-	sib.helper_exit_code = ret;
-	drbd_bcast_event(device, &sib);
+		magic_printk(KERN_INFO,
+			     "helper command: %s %s exit code 0\n",
+			     drbd_usermode_helper, cmd);
 	notify_helper(NOTIFY_RESPONSE, device, connection, cmd, ret);
 
-	if (current == connection->worker.task)
-		clear_bit(CALLBACK_PENDING, &connection->flags);
+	if (current == resource->worker.task)
+		clear_bit(CALLBACK_PENDING, &resource->flags);
 
 	if (ret < 0) /* Ignore any ERRNOs we got. */
 		ret = 0;
 
+	free_pages((unsigned long)env.buffer, get_order(env.size));
 	return ret;
-}
-
-enum drbd_peer_state conn_khelper(struct drbd_connection *connection, char *cmd)
-{
-	char *envp[] = { "HOME=/",
-			"TERM=linux",
-			"PATH=/sbin:/usr/sbin:/bin:/usr/bin",
-			 (char[20]) { }, /* address family */
-			 (char[60]) { }, /* address */
-			NULL };
-	char *resource_name = connection->resource->name;
-	char *argv[] = {drbd_usermode_helper, cmd, resource_name, NULL };
-	int ret;
 
-	setup_khelper_env(connection, envp);
-	conn_md_sync(connection);
-
-	drbd_info(connection, "helper command: %s %s %s\n", drbd_usermode_helper, cmd, resource_name);
-	/* TODO: conn_bcast_event() ?? */
-	notify_helper(NOTIFY_CALL, NULL, connection, cmd, 0);
+    out_err:
+	drbd_err(resource, "Could not call %s user-space helper: error %d, "
+		 "out of memory\n", cmd, ret);
+	return 0;
+}
 
-	ret = call_usermodehelper(drbd_usermode_helper, argv, envp, UMH_WAIT_PROC);
-	if (ret)
-		drbd_warn(connection, "helper command: %s %s %s exit code %u (0x%x)\n",
-			  drbd_usermode_helper, cmd, resource_name,
-			  (ret >> 8) & 0xff, ret);
-	else
-		drbd_info(connection, "helper command: %s %s %s exit code %u (0x%x)\n",
-			  drbd_usermode_helper, cmd, resource_name,
-			  (ret >> 8) & 0xff, ret);
-	/* TODO: conn_bcast_event() ?? */
-	notify_helper(NOTIFY_RESPONSE, NULL, connection, cmd, ret);
+#undef magic_printk
 
-	if (ret < 0) /* Ignore any ERRNOs we got. */
-		ret = 0;
+int drbd_maybe_khelper(struct drbd_device *device, struct drbd_connection *connection, char *cmd)
+{
+	if (strcmp(drbd_usermode_helper, "disabled") == 0)
+		return DRBD_UMH_DISABLED;
 
-	return ret;
+	return drbd_khelper(device, connection, cmd);
 }
 
-static enum drbd_fencing_p highest_fencing_policy(struct drbd_connection *connection)
+static bool initial_states_pending(struct drbd_connection *connection)
 {
-	enum drbd_fencing_p fp = FP_NOT_AVAIL;
 	struct drbd_peer_device *peer_device;
 	int vnr;
+	bool pending = false;
 
 	rcu_read_lock();
 	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-		struct drbd_device *device = peer_device->device;
-		if (get_ldev_if_state(device, D_CONSISTENT)) {
-			struct disk_conf *disk_conf =
-				rcu_dereference(peer_device->device->ldev->disk_conf);
-			fp = max_t(enum drbd_fencing_p, fp, disk_conf->fencing);
-			put_ldev(device);
+		if (test_bit(INITIAL_STATE_SENT, &peer_device->flags) &&
+		    peer_device->repl_state[NOW] == L_OFF) {
+			pending = true;
+			break;
 		}
 	}
 	rcu_read_unlock();
-
-	return fp;
+	return pending;
 }
 
-static bool resource_is_supended(struct drbd_resource *resource)
+static bool intentional_diskless(struct drbd_resource *resource)
 {
-	return resource->susp || resource->susp_fen || resource->susp_nod;
+	bool intentional_diskless = true;
+	struct drbd_device *device;
+	int vnr;
+
+	rcu_read_lock();
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		if (!device->device_conf.intentional_diskless) {
+			intentional_diskless = false;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return intentional_diskless;
 }
 
-bool conn_try_outdate_peer(struct drbd_connection *connection)
+static bool conn_try_outdate_peer(struct drbd_connection *connection, const char *tag)
 {
-	struct drbd_resource * const resource = connection->resource;
-	unsigned int connect_cnt;
-	union drbd_state mask = { };
-	union drbd_state val = { };
-	enum drbd_fencing_p fp;
+	struct drbd_resource *resource = connection->resource;
+	unsigned long last_reconnect_jif;
+	enum drbd_fencing_policy fencing_policy;
+	enum drbd_disk_state disk_state;
 	char *ex_to_string;
 	int r;
+	unsigned long irq_flags;
 
-	spin_lock_irq(&resource->req_lock);
-	if (connection->cstate >= C_WF_REPORT_PARAMS) {
-		drbd_err(connection, "Expected cstate < C_WF_REPORT_PARAMS\n");
-		spin_unlock_irq(&resource->req_lock);
+	read_lock_irq(&resource->state_rwlock);
+	if (connection->cstate[NOW] >= C_CONNECTED) {
+		drbd_err(connection, "Expected cstate < C_CONNECTED\n");
+		read_unlock_irq(&resource->state_rwlock);
 		return false;
 	}
 
-	connect_cnt = connection->connect_cnt;
-	spin_unlock_irq(&resource->req_lock);
-
-	fp = highest_fencing_policy(connection);
-	switch (fp) {
-	case FP_NOT_AVAIL:
-		drbd_warn(connection, "Not fencing peer, I'm not even Consistent myself.\n");
-		spin_lock_irq(&resource->req_lock);
-		if (connection->cstate < C_WF_REPORT_PARAMS) {
-			_conn_request_state(connection,
-					    (union drbd_state) { { .susp_fen = 1 } },
-					    (union drbd_state) { { .susp_fen = 0 } },
-					    CS_VERBOSE | CS_HARD | CS_DC_SUSP);
-			/* We are no longer suspended due to the fencing policy.
-			 * We may still be suspended due to the on-no-data-accessible policy.
-			 * If that was OND_IO_ERROR, fail pending requests. */
-			if (!resource_is_supended(resource))
-				_tl_restart(connection, CONNECTION_LOST_WHILE_PENDING);
-		}
-		/* Else: in case we raced with a connection handshake,
-		 * let the handshake figure out if we maybe can RESEND,
-		 * and do not resume/fail pending requests here.
-		 * Worst case is we stay suspended for now, which may be
-		 * resolved by either re-establishing the replication link, or
-		 * the next link failure, or eventually the administrator.  */
-		spin_unlock_irq(&resource->req_lock);
+	last_reconnect_jif = connection->last_reconnect_jif;
+
+	disk_state = conn_highest_disk(connection);
+	if (disk_state < D_CONSISTENT &&
+	    !(disk_state == D_DISKLESS && intentional_diskless(resource))) {
+		begin_state_change_locked(resource, CS_VERBOSE | CS_HARD);
+		__change_io_susp_fencing(connection, false);
+		end_state_change_locked(resource, tag);
+		read_unlock_irq(&resource->state_rwlock);
 		return false;
+	}
+	read_unlock_irq(&resource->state_rwlock);
 
-	case FP_DONT_CARE:
+	fencing_policy = connection->fencing_policy;
+	if (fencing_policy == FP_DONT_CARE)
 		return true;
-	default: ;
-	}
 
-	r = conn_khelper(connection, "fence-peer");
+	r = drbd_maybe_khelper(NULL, connection, "fence-peer");
+	if (r == DRBD_UMH_DISABLED)
+		return true;
 
+	begin_state_change(resource, &irq_flags, CS_VERBOSE);
 	switch ((r>>8) & 0xff) {
 	case P_INCONSISTENT: /* peer is inconsistent */
 		ex_to_string = "peer is inconsistent or worse";
-		mask.pdsk = D_MASK;
-		val.pdsk = D_INCONSISTENT;
+		__downgrade_peer_disk_states(connection, D_INCONSISTENT);
 		break;
 	case P_OUTDATED: /* peer got outdated, or was already outdated */
 		ex_to_string = "peer was fenced";
-		mask.pdsk = D_MASK;
-		val.pdsk = D_OUTDATED;
+		__downgrade_peer_disk_states(connection, D_OUTDATED);
 		break;
 	case P_DOWN: /* peer was down */
 		if (conn_highest_disk(connection) == D_UP_TO_DATE) {
 			/* we will(have) create(d) a new UUID anyways... */
 			ex_to_string = "peer is unreachable, assumed to be dead";
-			mask.pdsk = D_MASK;
-			val.pdsk = D_OUTDATED;
+			__downgrade_peer_disk_states(connection, D_OUTDATED);
 		} else {
 			ex_to_string = "peer unreachable, doing nothing since disk != UpToDate";
 		}
@@ -544,42 +835,44 @@ bool conn_try_outdate_peer(struct drbd_connection *connection)
 		 * become R_PRIMARY, but finds the other peer being active. */
 		ex_to_string = "peer is active";
 		drbd_warn(connection, "Peer is primary, outdating myself.\n");
-		mask.disk = D_MASK;
-		val.disk = D_OUTDATED;
+		__downgrade_disk_states(resource, D_OUTDATED);
 		break;
 	case P_FENCING:
 		/* THINK: do we need to handle this
-		 * like case 4, or more like case 5? */
-		if (fp != FP_STONITH)
+		 * like case 4 P_OUTDATED, or more like case 5 P_DOWN? */
+		if (fencing_policy != FP_STONITH)
 			drbd_err(connection, "fence-peer() = 7 && fencing != Stonith !!!\n");
 		ex_to_string = "peer was stonithed";
-		mask.pdsk = D_MASK;
-		val.pdsk = D_OUTDATED;
+		__downgrade_peer_disk_states(connection, D_OUTDATED);
 		break;
 	default:
 		/* The script is broken ... */
 		drbd_err(connection, "fence-peer helper broken, returned %d\n", (r>>8)&0xff);
+		abort_state_change(resource, &irq_flags);
 		return false; /* Eventually leave IO frozen */
 	}
 
 	drbd_info(connection, "fence-peer helper returned %d (%s)\n",
 		  (r>>8) & 0xff, ex_to_string);
 
-	/* Not using
-	   conn_request_state(connection, mask, val, CS_VERBOSE);
-	   here, because we might were able to re-establish the connection in the
-	   meantime. */
-	spin_lock_irq(&resource->req_lock);
-	if (connection->cstate < C_WF_REPORT_PARAMS && !test_bit(STATE_SENT, &connection->flags)) {
-		if (connection->connect_cnt != connect_cnt)
-			/* In case the connection was established and droped
-			   while the fence-peer handler was running, ignore it */
-			drbd_info(connection, "Ignoring fence-peer exit code\n");
-		else
-			_conn_request_state(connection, mask, val, CS_VERBOSE);
+	if (connection->cstate[NOW] >= C_CONNECTED ||
+	    initial_states_pending(connection)) {
+		/* connection re-established; do not fence */
+		goto abort;
+	}
+	if (connection->last_reconnect_jif != last_reconnect_jif) {
+		/* In case the connection was established and dropped
+		   while the fence-peer handler was running, ignore it */
+		drbd_info(connection, "Ignoring fence-peer exit code\n");
+		goto abort;
 	}
-	spin_unlock_irq(&resource->req_lock);
 
+	end_state_change(resource, &irq_flags, tag);
+
+	goto out;
+ abort:
+	abort_state_change(resource, &irq_flags);
+ out:
 	return conn_highest_pdsk(connection) <= D_OUTDATED;
 }
 
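For review context: the return value of the fence-peer helper comes back in
wait(2)-style encoding, which is why the switch above dispatches on
"(r >> 8) & 0xff". A one-line sketch of the extraction:

	/* wait(2)-style status: the helper's exit code lives in bits 8..15 */
	int exit_code = (r >> 8) & 0xff;
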
@@ -587,7 +880,7 @@ static int _try_outdate_peer_async(void *data)
 {
 	struct drbd_connection *connection = (struct drbd_connection *)data;
 
-	conn_try_outdate_peer(connection);
+	conn_try_outdate_peer(connection, "outdate-async");
 
 	kref_put(&connection->kref, drbd_destroy_connection);
 	return 0;
@@ -611,151 +904,451 @@ void conn_try_outdate_peer_async(struct drbd_connection *connection)
 	}
 }
 
-enum drbd_state_rv
-drbd_set_role(struct drbd_device *const device, enum drbd_role new_role, int force)
+bool barrier_pending(struct drbd_resource *resource)
 {
-	struct drbd_peer_device *const peer_device = first_peer_device(device);
-	struct drbd_connection *const connection = peer_device ? peer_device->connection : NULL;
-	const int max_tries = 4;
-	enum drbd_state_rv rv = SS_UNKNOWN_ERROR;
-	struct net_conf *nc;
-	int try = 0;
-	int forced = 0;
-	union drbd_state mask, val;
+	struct drbd_connection *connection;
+	bool rv = false;
 
-	if (new_role == R_PRIMARY) {
-		struct drbd_connection *connection;
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		if (test_bit(BARRIER_ACK_PENDING, &connection->flags)) {
+			rv = true;
+			break;
+		}
+	}
+	rcu_read_unlock();
 
-		/* Detect dead peers as soon as possible.  */
+	return rv;
+}
 
-		rcu_read_lock();
-		for_each_connection(connection, device->resource)
-			request_ping(connection);
-		rcu_read_unlock();
+static int count_up_to_date(struct drbd_resource *resource)
+{
+	struct drbd_device *device;
+	int vnr, nr_up_to_date = 0;
+
+	rcu_read_lock();
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		enum drbd_disk_state disk_state = device->disk_state[NOW];
+		if (disk_state == D_UP_TO_DATE)
+			nr_up_to_date++;
 	}
+	rcu_read_unlock();
+	return nr_up_to_date;
+}
+
+static bool reconciliation_ongoing(struct drbd_device *device)
+{
+	struct drbd_peer_device *peer_device;
 
-	mutex_lock(device->state_mutex);
+	for_each_peer_device_rcu(peer_device, device) {
+		if (test_bit(RECONCILIATION_RESYNC, &peer_device->flags))
+			return true;
+	}
+	return false;
+}
 
-	mask.i = 0; mask.role = R_MASK;
-	val.i  = 0; val.role  = new_role;
+static bool any_peer_is_consistent(struct drbd_device *device)
+{
+	struct drbd_peer_device *peer_device;
 
-	while (try++ < max_tries) {
-		rv = _drbd_request_state_holding_state_mutex(device, mask, val, CS_WAIT_COMPLETE);
+	for_each_peer_device_rcu(peer_device, device) {
+		if (peer_device->disk_state[NOW] == D_CONSISTENT)
+			return true;
+	}
+	return false;
+}
+
+/* reconciliation resyncs finished and I know if I am D_UP_TO_DATE or D_OUTDATED */
+static bool after_primary_lost_events_settled(struct drbd_resource *resource)
+{
+	struct drbd_device *device;
+	int vnr;
 
-		/* in case we first succeeded to outdate,
-		 * but now suddenly could establish a connection */
-		if (rv == SS_CW_FAILED_BY_PEER && mask.pdsk != 0) {
-			val.pdsk = 0;
-			mask.pdsk = 0;
-			continue;
-		}
+	if (test_bit(TRY_BECOME_UP_TO_DATE_PENDING, &resource->flags))
+		return false;
 
-		if (rv == SS_NO_UP_TO_DATE_DISK && force &&
-		    (device->state.disk < D_UP_TO_DATE &&
-		     device->state.disk >= D_INCONSISTENT)) {
-			mask.disk = D_MASK;
-			val.disk  = D_UP_TO_DATE;
-			forced = 1;
-			continue;
+	rcu_read_lock();
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		enum drbd_disk_state disk_state = device->disk_state[NOW];
+		if (disk_state == D_CONSISTENT ||
+		    any_peer_is_consistent(device) ||
+		    (reconciliation_ongoing(device) &&
+		     (disk_state == D_OUTDATED || disk_state == D_INCONSISTENT))) {
+			rcu_read_unlock();
+			return false;
 		}
+	}
+	rcu_read_unlock();
+	return true;
+}
 
-		if (rv == SS_NO_UP_TO_DATE_DISK &&
-		    device->state.disk == D_CONSISTENT && mask.pdsk == 0) {
-			D_ASSERT(device, device->state.pdsk == D_UNKNOWN);
+static long drbd_max_ping_timeout(struct drbd_resource *resource)
+{
+	struct drbd_connection *connection;
+	long ping_timeout = 0;
 
-			if (conn_try_outdate_peer(connection)) {
-				val.disk = D_UP_TO_DATE;
-				mask.disk = D_MASK;
-			}
-			continue;
-		}
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		struct net_conf *nc = rcu_dereference(connection->transport.net_conf);
+
+		if (nc)
+			ping_timeout = max(ping_timeout, (long)nc->ping_timeo);
+	}
+	rcu_read_unlock();
 
-		if (rv == SS_NOTHING_TO_DO)
-			goto out;
-		if (rv == SS_PRIMARY_NOP && mask.pdsk == 0) {
-			if (!conn_try_outdate_peer(connection) && force) {
-				drbd_warn(device, "Forced into split brain situation!\n");
-				mask.pdsk = D_MASK;
-				val.pdsk  = D_OUTDATED;
+	return ping_timeout;
+}
 
+static bool wait_up_to_date(struct drbd_resource *resource)
+{
+	/*
+	 * Adding ping-timeout is necessary to ensure that we do not proceed
+	 * while the loss of some connection has not yet been detected. Ideally
+	 * we would use the maximum ping timeout from the entire cluster. Since
+	 * we do not have that, use the maximum from our connections on a
+	 * best-effort basis.
+	 */
+	long timeout = (resource->res_opts.auto_promote_timeout +
+			drbd_max_ping_timeout(resource)) * HZ / 10;
+	int initial_up_to_date, up_to_date;
+
+	initial_up_to_date = count_up_to_date(resource);
+	wait_event_interruptible_timeout(resource->state_wait,
+					 after_primary_lost_events_settled(resource),
+					 timeout);
+	up_to_date = count_up_to_date(resource);
+	return up_to_date > initial_up_to_date;
+}
+
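A note on the unit conversion in wait_up_to_date() above: both
auto-promote-timeout and ping-timeout are configured in tenths of a second,
hence the "* HZ / 10" when converting the sum to jiffies. A minimal sketch
with invented values:

	/* 20 (2.0 s) + 5 (0.5 s) tenths of a second -> 2.5 s in jiffies */
	long timeout = (20 + 5) * HZ / 10;
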
+enum drbd_state_rv
+drbd_set_role(struct drbd_resource *resource, enum drbd_role role, bool force, const char *tag,
+		struct sk_buff *reply_skb)
+{
+	struct drbd_device *device;
+	int vnr, try = 0;
+	const int max_tries = 4;
+	enum drbd_state_rv rv = SS_UNKNOWN_ERROR;
+	bool retried_ss_two_primaries = false, retried_ss_primary_nop = false;
+	const char *err_str = NULL;
+	enum chg_state_flags flags = CS_ALREADY_SERIALIZED | CS_DONT_RETRY | CS_WAIT_COMPLETE;
+	bool fenced_peers = false;
+
+retry:
+
+	if (role == R_PRIMARY) {
+		drbd_check_peers(resource);
+		wait_up_to_date(resource);
+	}
+	down(&resource->state_sem);
+
+	while (try++ < max_tries) {
+		if (try == max_tries - 1)
+			flags |= CS_VERBOSE;
+
+		if (err_str) {
+			kfree(err_str);
+			err_str = NULL;
+		}
+		rv = stable_state_change(resource,
+			change_role(resource, role, flags, tag, &err_str));
+
+		if (rv == SS_TIMEOUT || rv == SS_CONCURRENT_ST_CHG) {
+			long timeout = twopc_retry_timeout(resource, try);
+			/* It might be that the receiver tries to start resync, and
+			   sleeps on state_sem. Give it up, and retry in a short
+			   while */
+			up(&resource->state_sem);
+			schedule_timeout_interruptible(timeout);
+			goto retry;
+		}
+		/* in case we first succeeded to outdate,
+		 * but now suddenly could establish a connection */
+		if (rv == SS_CW_FAILED_BY_PEER && fenced_peers) {
+			flags &= ~CS_FP_LOCAL_UP_TO_DATE;
+			continue;
+		}
+
+		if (rv == SS_NO_UP_TO_DATE_DISK && force && !(flags & CS_FP_LOCAL_UP_TO_DATE)) {
+			flags |= CS_FP_LOCAL_UP_TO_DATE;
+			continue;
+		}
+
+		if (rv == SS_DEVICE_IN_USE && force && !(flags & CS_FS_IGN_OPENERS)) {
+			drbd_warn(resource, "forced demotion\n");
+			flags |= CS_FS_IGN_OPENERS; /* this sets resource->fail_io[NOW] */
+			continue;
+		}
+
+		if (rv == SS_NO_UP_TO_DATE_DISK) {
+			bool a_disk_became_up_to_date;
+
+			/* need to give up state_sem, see try_become_up_to_date(); */
+			up(&resource->state_sem);
+			drbd_flush_workqueue(&resource->work);
+			a_disk_became_up_to_date = wait_up_to_date(resource);
+			down(&resource->state_sem);
+			if (a_disk_became_up_to_date)
+				continue;
+			/* fall through into possible fence-peer or even force cases */
+		}
+
+		if (rv == SS_NO_UP_TO_DATE_DISK && !(flags & CS_FP_LOCAL_UP_TO_DATE)) {
+			struct drbd_connection *connection;
+			bool any_fencing_failed = false;
+			u64 im;
+
+			fenced_peers = false;
+			up(&resource->state_sem); /* Allow connect while fencing */
+			for_each_connection_ref(connection, im, resource) {
+				struct drbd_peer_device *peer_device;
+				int vnr;
+
+				if (conn_highest_pdsk(connection) != D_UNKNOWN)
+					continue;
+
+				idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+					struct drbd_device *device = peer_device->device;
+
+					if (device->disk_state[NOW] != D_CONSISTENT)
+						continue;
+
+					if (conn_try_outdate_peer(connection, tag))
+						fenced_peers = true;
+					else
+						any_fencing_failed = true;
+				}
+			}
+			down(&resource->state_sem);
+			if (fenced_peers && !any_fencing_failed) {
+				flags |= CS_FP_LOCAL_UP_TO_DATE;
+				continue;
 			}
+		}
+
+		/* In case the disk is Consistent and fencing is enabled, and fencing
+		 * did not work, but the user forces the promotion anyway:
+		 * try it, pretending we fenced the peers. */
+		if (rv == SS_PRIMARY_NOP && force &&
+		    (flags & CS_FP_LOCAL_UP_TO_DATE) && !(flags & CS_FP_OUTDATE_PEERS)) {
+			flags |= CS_FP_OUTDATE_PEERS;
+			continue;
+		}
+
+		if (rv == SS_NO_QUORUM && force && !(flags & CS_FP_OUTDATE_PEERS)) {
+			flags |= CS_FP_OUTDATE_PEERS;
 			continue;
 		}
-		if (rv == SS_TWO_PRIMARIES) {
-			/* Maybe the peer is detected as dead very soon...
-			   retry at most once more in this case. */
-			if (try < max_tries) {
-				int timeo;
-				try = max_tries - 1;
-				rcu_read_lock();
-				nc = rcu_dereference(connection->net_conf);
-				timeo = nc ? (nc->ping_timeo + 1) * HZ / 10 : 1;
-				rcu_read_unlock();
-				schedule_timeout_interruptible(timeo);
+
+		if (rv == SS_NOTHING_TO_DO)
+			goto out;
+		if (rv == SS_PRIMARY_NOP && !retried_ss_primary_nop) {
+			struct drbd_connection *connection;
+			u64 im;
+
+			retried_ss_primary_nop = true;
+
+			up(&resource->state_sem); /* Allow connect while fencing */
+			for_each_connection_ref(connection, im, resource) {
+				bool outdated_peer = conn_try_outdate_peer(connection, tag);
+				if (!outdated_peer && force) {
+					drbd_warn(connection, "Forced into split brain situation!\n");
+					flags |= CS_FP_LOCAL_UP_TO_DATE;
+				}
 			}
+			down(&resource->state_sem);
 			continue;
 		}
-		if (rv < SS_SUCCESS) {
-			rv = _drbd_request_state(device, mask, val,
-						CS_VERBOSE + CS_WAIT_COMPLETE);
-			if (rv < SS_SUCCESS)
-				goto out;
+
+		if (rv == SS_TWO_PRIMARIES && !retried_ss_two_primaries) {
+			struct drbd_connection *connection;
+			struct net_conf *nc;
+			int timeout = 0;
+
+			retried_ss_two_primaries = true;
+
+			/*
+			 * Catch the case where we discover that the other
+			 * primary has died soon after the state change
+			 * failure: retry once after a short timeout.
+			 */
+
+			rcu_read_lock();
+			for_each_connection_rcu(connection, resource) {
+				nc = rcu_dereference(connection->transport.net_conf);
+				if (nc && nc->ping_timeo > timeout)
+					timeout = nc->ping_timeo;
+			}
+			rcu_read_unlock();
+			timeout = timeout * HZ / 10;
+			if (timeout == 0)
+				timeout = 1;
+
+			up(&resource->state_sem);
+			schedule_timeout_interruptible(timeout);
+			goto retry;
 		}
+
 		break;
 	}
 
 	if (rv < SS_SUCCESS)
 		goto out;
 
-	if (forced)
-		drbd_warn(device, "Forced to consider local data as UpToDate!\n");
-
-	/* Wait until nothing is on the fly :) */
-	wait_event(device->misc_wait, atomic_read(&device->ap_pending_cnt) == 0);
-
-	/* FIXME also wait for all pending P_BARRIER_ACK? */
+	if (force) {
+		if (flags & CS_FP_LOCAL_UP_TO_DATE)
+			drbd_warn(resource, "Forced to consider local data as UpToDate!\n");
+		if (flags & CS_FP_OUTDATE_PEERS)
+			drbd_warn(resource, "Forced to consider peers as Outdated!\n");
+	}
 
-	if (new_role == R_SECONDARY) {
-		if (get_ldev(device)) {
-			device->ldev->md.uuid[UI_CURRENT] &= ~(u64)1;
-			put_ldev(device);
+	if (role == R_SECONDARY) {
+		idr_for_each_entry(&resource->devices, device, vnr) {
+			if (get_ldev(device)) {
+				device->ldev->md.current_uuid &= ~UUID_PRIMARY;
+				put_ldev(device);
+			}
 		}
 	} else {
-		mutex_lock(&device->resource->conf_update);
-		nc = connection->net_conf;
-		if (nc)
-			nc->discard_my_data = 0; /* without copy; single bit op is atomic */
-		mutex_unlock(&device->resource->conf_update);
+		struct drbd_connection *connection;
 
-		if (get_ldev(device)) {
-			if (((device->state.conn < C_CONNECTED ||
-			       device->state.pdsk <= D_FAILED)
-			      && device->ldev->md.uuid[UI_BITMAP] == 0) || forced)
-				drbd_uuid_new_current(device);
+		rcu_read_lock();
+		for_each_connection_rcu(connection, resource)
+			clear_bit(CONN_DISCARD_MY_DATA, &connection->flags);
+		rcu_read_unlock();
 
-			device->ldev->md.uuid[UI_CURRENT] |=  (u64)1;
-			put_ldev(device);
+		idr_for_each_entry(&resource->devices, device, vnr) {
+			if (flags & CS_FP_LOCAL_UP_TO_DATE) {
+				drbd_uuid_new_current(device, true);
+				clear_bit(NEW_CUR_UUID, &device->flags);
+			}
 		}
 	}
 
-	/* writeout of activity log covered areas of the bitmap
-	 * to stable storage done in after state change already */
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		struct drbd_peer_device *peer_device;
+		u64 im;
+
+		for_each_peer_device_ref(peer_device, im, device) {
+			/* writeout of activity log covered areas of the bitmap
+			 * to stable storage done in after state change already */
 
-	if (device->state.conn >= C_WF_REPORT_PARAMS) {
-		/* if this was forced, we should consider sync */
-		if (forced)
-			drbd_send_uuids(peer_device);
-		drbd_send_current_state(peer_device);
+			if (peer_device->connection->cstate[NOW] == C_CONNECTED) {
+				/* if this was forced, we should consider sync */
+				if (flags & CS_FP_LOCAL_UP_TO_DATE) {
+					drbd_send_uuids(peer_device, 0, 0);
+					set_bit(CONSIDER_RESYNC, &peer_device->flags);
+				}
+				drbd_send_current_state(peer_device);
+			}
+		}
+	}
+
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		drbd_md_sync_if_dirty(device);
+		if (!resource->res_opts.auto_promote && role == R_PRIMARY)
+			kobject_uevent(&disk_to_dev(device->vdisk)->kobj, KOBJ_CHANGE);
 	}
 
-	drbd_md_sync(device);
-	set_disk_ro(device->vdisk, new_role == R_SECONDARY);
-	kobject_uevent(&disk_to_dev(device->vdisk)->kobj, KOBJ_CHANGE);
 out:
-	mutex_unlock(device->state_mutex);
+	up(&resource->state_sem);
+	if (err_str) {
+		drbd_err(resource, "%s", err_str);
+		if (reply_skb)
+			drbd_msg_put_info(reply_skb, err_str);
+		kfree(err_str);
+	}
 	return rv;
 }
 
+/* suggested buffer size: 128 bytes */
+void youngest_and_oldest_opener_to_str(struct drbd_device *device, char *buf, size_t len)
+{
+	struct timespec64 ts;
+	struct tm tm;
+	struct opener *first;
+	struct opener *last;
+	int cnt;
+
+	buf[0] = '\0';
+	/* Do we have opener information? */
+	if (!device->open_cnt)
+		return;
+	cnt = snprintf(buf, len, " open_cnt:%d", device->open_cnt);
+	if (cnt <= 0 || cnt >= len)
+		return;
+	buf += cnt;
+	len -= cnt;
+	spin_lock(&device->openers_lock);
+	if (!list_empty(&device->openers)) {
+		first = list_first_entry(&device->openers, struct opener, list);
+		ts = ktime_to_timespec64(first->opened);
+		time64_to_tm(ts.tv_sec, -sys_tz.tz_minuteswest * 60, &tm);
+		cnt = snprintf(buf, len, " [%s:%d:%04ld-%02d-%02d_%02d:%02d:%02d.%03ld]",
+			      first->comm, first->pid,
+			      tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
+			      tm.tm_hour, tm.tm_min, tm.tm_sec, ts.tv_nsec / NSEC_PER_MSEC);
+		last = list_last_entry(&device->openers, struct opener, list);
+		if (cnt > 0 && cnt < len && last != first) {
+			/* append, overwriting the previously added ']' */
+			buf += cnt - 1;
+			len -= cnt - 1;
+			ts = ktime_to_timespec64(last->opened);
+			time64_to_tm(ts.tv_sec, -sys_tz.tz_minuteswest * 60, &tm);
+			snprintf(buf, len, "%s%s:%d:%04ld-%02d-%02d_%02d:%02d:%02d.%03ld]",
+			      device->open_cnt > 2 ? ", ..., " : ", ",
+			      last->comm, last->pid,
+			      tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
+			      tm.tm_hour, tm.tm_min, tm.tm_sec, ts.tv_nsec / NSEC_PER_MSEC);
+		}
+	}
+	spin_unlock(&device->openers_lock);
+}
+
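For illustration, with the format strings above a device held open by two
processes renders roughly like this (process names, pids and timestamps
invented):

	 open_cnt:2 [bash:1234:2026-03-27_22:38:00.123, fsck:5678:2026-03-27_22:40:01.456]

with ", ..., " substituted for ", " once open_cnt exceeds two.
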
+static int put_device_opener_info(struct drbd_device *device, struct sk_buff *reply_skb)
+{
+	struct timespec64 ts;
+	struct opener *o;
+	struct tm tm;
+	int cnt = 0;
+	char *dotdotdot = "";
+
+	spin_lock(&device->openers_lock);
+	if (!device->open_cnt) {
+		spin_unlock(&device->openers_lock);
+		return cnt;
+	}
+	drbd_msg_sprintf_info(reply_skb,
+		"/dev/drbd%d open_cnt:%d, writable:%d; list of openers follows",
+		device->minor, device->open_cnt, device->writable);
+	list_for_each_entry(o, &device->openers, list) {
+		ts = ktime_to_timespec64(o->opened);
+		time64_to_tm(ts.tv_sec, -sys_tz.tz_minuteswest * 60, &tm);
+
+		if (++cnt >= 10 && !list_is_last(&o->list, &device->openers)) {
+			o = list_last_entry(&device->openers, struct opener, list);
+			dotdotdot = "[...]\n";
+		}
+		drbd_msg_sprintf_info(reply_skb,
+			"%sdrbd%d opened by %s (pid %d) at %04ld-%02d-%02d %02d:%02d:%02d.%03ld",
+			dotdotdot,
+			device->minor, o->comm, o->pid,
+			tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
+			tm.tm_hour, tm.tm_min, tm.tm_sec,
+			ts.tv_nsec / NSEC_PER_MSEC);
+	}
+	spin_unlock(&device->openers_lock);
+	return cnt;
+}
+
+static void opener_info(struct drbd_resource *resource,
+			struct sk_buff *reply_skb,
+			enum drbd_state_rv rv)
+{
+	struct drbd_device *device;
+	int i;
+
+	if (rv != SS_DEVICE_IN_USE && rv != SS_NO_UP_TO_DATE_DISK)
+		return;
+
+	idr_for_each_entry(&resource->devices, device, i)
+		put_device_opener_info(device, reply_skb);
+}
+
 static const char *from_attrs_err_to_txt(int err)
 {
 	return	err == -ENOMSG ? "required attribute missing" :
@@ -764,20 +1357,21 @@ static const char *from_attrs_err_to_txt(int err)
 		"invalid attribute value";
 }
 
-int drbd_adm_set_role(struct sk_buff *skb, struct genl_info *info)
+static int drbd_adm_set_role(struct sk_buff *skb, struct genl_info *info)
 {
 	struct drbd_config_context adm_ctx;
+	struct drbd_resource *resource;
 	struct set_role_parms parms;
-	int err;
-	enum drbd_ret_code retcode;
 	enum drbd_state_rv rv;
+	enum drbd_ret_code retcode;
+	enum drbd_role new_role;
+	int err;
 
-	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_MINOR);
+	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_RESOURCE);
 	if (!adm_ctx.reply_skb)
-		return retcode;
-	if (retcode != NO_ERROR)
-		goto out;
+		return retcode;
 
+	resource = adm_ctx.resource;
 	memset(&parms, 0, sizeof(parms));
 	if (info->attrs[DRBD_NLA_SET_ROLE_PARMS]) {
 		err = set_role_parms_from_attrs(&parms, info);
@@ -787,16 +1381,28 @@ int drbd_adm_set_role(struct sk_buff *skb, struct genl_info *info)
 			goto out;
 		}
 	}
-	genl_unlock();
-	mutex_lock(&adm_ctx.resource->adm_mutex);
+	if (mutex_lock_interruptible(&resource->adm_mutex)) {
+		retcode = ERR_INTR;
+		goto out;
+	}
 
-	if (info->genlhdr->cmd == DRBD_ADM_PRIMARY)
-		rv = drbd_set_role(adm_ctx.device, R_PRIMARY, parms.assume_uptodate);
-	else
-		rv = drbd_set_role(adm_ctx.device, R_SECONDARY, 0);
+	new_role = info->genlhdr->cmd == DRBD_ADM_PRIMARY ? R_PRIMARY : R_SECONDARY;
+	if (new_role == R_PRIMARY)
+		set_bit(EXPLICIT_PRIMARY, &resource->flags);
 
-	mutex_unlock(&adm_ctx.resource->adm_mutex);
-	genl_lock();
+	rv = drbd_set_role(resource, new_role, parms.force,
+			   new_role == R_PRIMARY ? "primary" : "secondary",
+			   adm_ctx.reply_skb);
+
+	if (resource->role[NOW] != R_PRIMARY)
+		clear_bit(EXPLICIT_PRIMARY, &resource->flags);
+
+	if (rv == SS_DEVICE_IN_USE)
+		opener_info(resource, adm_ctx.reply_skb, rv);
+
+	mutex_unlock(&resource->adm_mutex);
 	drbd_adm_finish(&adm_ctx, info, rv);
 	return 0;
 out:
@@ -804,6 +1410,28 @@ int drbd_adm_set_role(struct sk_buff *skb, struct genl_info *info)
 	return 0;
 }
 
+u64 drbd_capacity_to_on_disk_bm_sect(u64 capacity_sect, const struct drbd_md *md)
+{
+	u64 bits, bytes;
+
+	/* round up storage sectors to full "bitmap sectors per bit", then
+	 * convert to number of bits needed, and round that up to 64bit words
+	 * to ease interoperability between 32bit and 64bit architectures.
+	 */
+	bits = ALIGN(sect_to_bit(
+			ALIGN(capacity_sect, sect_per_bit(md->bm_block_shift)),
+			md->bm_block_shift), 64);
+
+	/* convert to bytes, multiply by number of peers,
+	 * and, because we do all our meta data IO in 4k blocks,
+	 * round up to full 4k
+	 */
+	bytes = ALIGN(bits / 8 * md->max_peers, 4096);
+
+	/* convert to number of sectors */
+	return bytes >> 9;
+}
+
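A quick sanity check of the arithmetic above, assuming the conventional
4 KiB bitmap granularity (bm_block_shift == 12, i.e. 8 sectors per bit) and
max_peers == 1, for a 1 TiB backing device:

	bits  = ALIGN(2^31 / 8, 64)     = 268435456   /* already aligned */
	bytes = ALIGN(bits / 8 * 1, 4k) = 33554432    /* 32 MiB */
	sect  = bytes >> 9              = 65536

i.e. roughly 32 MiB of on-disk bitmap per peer per TiB of storage.
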
 /* Initializes the md.*_offset members, so we are able to find
  * the on disk meta data.
  *
@@ -823,10 +1451,9 @@ int drbd_adm_set_role(struct sk_buff *skb, struct genl_info *info)
  *  ==> bitmap sectors = Y = al_offset - bm_offset
  *
  *  Activity log size used to be fixed 32kB,
- *  but is about to become configurable.
+ *  but is now al_stripes * al_stripe_size_4k (in units of 4k).
  */
-static void drbd_md_set_sector_offsets(struct drbd_device *device,
-				       struct drbd_backing_dev *bdev)
+void drbd_md_set_sector_offsets(struct drbd_backing_dev *bdev)
 {
 	sector_t md_size_sect = 0;
 	unsigned int al_size_sect = bdev->md.al_size_4k * 8;
@@ -836,33 +1463,32 @@ static void drbd_md_set_sector_offsets(struct drbd_device *device,
 	switch (bdev->md.meta_dev_idx) {
 	default:
 		/* v07 style fixed size indexed meta data */
-		bdev->md.md_size_sect = MD_128MB_SECT;
-		bdev->md.al_offset = MD_4kB_SECT;
-		bdev->md.bm_offset = MD_4kB_SECT + al_size_sect;
+		/* FIXME we should drop support for this! */
+		bdev->md.md_size_sect = (128 << 20 >> 9);
+		bdev->md.al_offset = (4096 >> 9);
+		bdev->md.bm_offset = (4096 >> 9) + al_size_sect;
 		break;
 	case DRBD_MD_INDEX_FLEX_EXT:
 		/* just occupy the full device; unit: sectors */
 		bdev->md.md_size_sect = drbd_get_capacity(bdev->md_bdev);
-		bdev->md.al_offset = MD_4kB_SECT;
-		bdev->md.bm_offset = MD_4kB_SECT + al_size_sect;
+		bdev->md.al_offset = (4096 >> 9);
+		bdev->md.bm_offset = (4096 >> 9) + al_size_sect;
 		break;
 	case DRBD_MD_INDEX_INTERNAL:
 	case DRBD_MD_INDEX_FLEX_INT:
-		/* al size is still fixed */
 		bdev->md.al_offset = -al_size_sect;
-		/* we need (slightly less than) ~ this much bitmap sectors: */
-		md_size_sect = drbd_get_capacity(bdev->backing_bdev);
-		md_size_sect = ALIGN(md_size_sect, BM_SECT_PER_EXT);
-		md_size_sect = BM_SECT_TO_EXT(md_size_sect);
-		md_size_sect = ALIGN(md_size_sect, 8);
 
-		/* plus the "drbd meta data super block",
+		/* enough bitmap to cover the storage,
+		 * plus the "drbd meta data super block",
 		 * and the activity log; */
-		md_size_sect += MD_4kB_SECT + al_size_sect;
+		md_size_sect = drbd_capacity_to_on_disk_bm_sect(
+				drbd_get_capacity(bdev->backing_bdev),
+				&bdev->md)
+			+ (4096 >> 9) + al_size_sect;
 
 		bdev->md.md_size_sect = md_size_sect;
 		/* bitmap offset is adjusted by 'super' block size */
-		bdev->md.bm_offset   = -md_size_sect + MD_4kB_SECT;
+		bdev->md.bm_offset   = -md_size_sect + (4096 >> 9);
 		break;
 	}
 }
@@ -884,18 +1510,11 @@ char *ppsize(char *buf, unsigned long long size)
 	return buf;
 }
 
-/* there is still a theoretical deadlock when called from receiver
- * on an D_INCONSISTENT R_PRIMARY:
- *  remote READ does inc_ap_bio, receiver would need to receive answer
- *  packet from remote to dec_ap_bio again.
- *  receiver receive_sizes(), comes here,
- *  waits for ap_bio_cnt == 0. -> deadlock.
- * but this cannot happen, actually, because:
- *  R_PRIMARY D_INCONSISTENT, and peer's disk is unreachable
- *  (not connected, or bad/no disk on peer):
- *  see drbd_fail_request_early, ap_bio_cnt is zero.
- *  R_PRIMARY D_INCONSISTENT, and C_SYNC_TARGET:
- *  peer may not initiate a resize.
+/* The receiver may call drbd_suspend_io(device, WRITE_ONLY).
+ * It should not call drbd_suspend_io(device, READ_AND_WRITE) since
+ * if the node is a D_INCONSISTENT R_PRIMARY (L_SYNC_TARGET) it
+ * may need to issue remote READs. Those in turn need the receiver
+ * to complete. -> calling drbd_suspend_io(device, READ_AND_WRITE) deadlocks.
  */
 /* Note these are not to be confused with
  * drbd_adm_suspend_io/drbd_adm_resume_io,
@@ -905,12 +1524,12 @@ char *ppsize(char *buf, unsigned long long size)
  * and should be short-lived. */
 /* It needs to be a counter, since multiple threads might
    independently suspend and resume IO. */
-void drbd_suspend_io(struct drbd_device *device)
+void drbd_suspend_io(struct drbd_device *device, enum suspend_scope ss)
 {
 	atomic_inc(&device->suspend_cnt);
-	if (drbd_suspended(device))
-		return;
-	wait_event(device->misc_wait, !atomic_read(&device->ap_bio_cnt));
+	wait_event(device->misc_wait, drbd_suspended(device) ||
+		   (atomic_read(&device->ap_bio_cnt[WRITE]) +
+		    (ss == READ_AND_WRITE ? atomic_read(&device->ap_bio_cnt[READ]) : 0)) == 0);
 }
 
 void drbd_resume_io(struct drbd_device *device)
@@ -919,18 +1538,64 @@ void drbd_resume_io(struct drbd_device *device)
 		wake_up(&device->misc_wait);
 }
 
+/**
+ * effective_disk_size_determined()  -  is the effective disk size "fixed" already?
+ * @device: DRBD device.
+ *
+ * When a device is configured in a cluster, the size of the replicated disk is
+ * determined by the minimum size of the disks on all nodes.  Additional nodes
+ * can be added, and this can still change the effective size of the replicated
+ * disk.
+ *
+ * When the disk on any node becomes D_UP_TO_DATE, the effective disk size
+ * becomes "fixed".  It is written to the metadata so that it will not be
+ * forgotten across node restarts.  Further nodes can only be added if their
+ * disks are big enough.
+ */
+static bool effective_disk_size_determined(struct drbd_device *device)
+{
+	struct drbd_peer_device *peer_device;
+	bool rv = false;
+
+	if (device->ldev->md.effective_size != 0)
+		return true;
+	if (device->disk_state[NOW] == D_UP_TO_DATE)
+		return true;
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		if (peer_device->disk_state[NOW] == D_UP_TO_DATE) {
+			rv = true;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return rv;
+}
+
+void drbd_set_my_capacity(struct drbd_device *device, sector_t size)
+{
+	char ppb[10];
+
+	set_capacity_and_notify(device->vdisk, size);
+
+	drbd_info(device, "size = %s (%llu KB)\n",
+		ppsize(ppb, size>>1), (unsigned long long)size>>1);
+}
+
 /*
  * drbd_determine_dev_size() -  Sets the right device size obeying all constraints
  * @device:	DRBD device.
  *
- * Returns 0 on success, negative return values indicate errors.
  * You should call drbd_md_sync() after calling this function.
  */
 enum determine_dev_size
-drbd_determine_dev_size(struct drbd_device *device, enum dds_flags flags, struct resize_parms *rs) __must_hold(local)
+drbd_determine_dev_size(struct drbd_device *device, sector_t peer_current_size,
+			enum dds_flags flags, struct resize_parms *rs)
 {
 	struct md_offsets_and_sizes {
-		u64 last_agreed_sect;
+		u64 effective_size;
 		u64 md_offset;
 		s32 al_offset;
 		s32 bm_offset;
@@ -939,7 +1604,7 @@ drbd_determine_dev_size(struct drbd_device *device, enum dds_flags flags, struct
 		u32 al_stripes;
 		u32 al_stripe_size_4k;
 	} prev;
-	sector_t u_size, size;
+	sector_t u_size, size, prev_size;
 	struct drbd_md *md = &device->ldev->md;
 	void *buffer;
 
@@ -954,7 +1619,7 @@ drbd_determine_dev_size(struct drbd_device *device, enum dds_flags flags, struct
 	 * Move is not exactly correct, btw, currently we have all our meta
 	 * data in core memory, to "move" it we just write it all out, there
 	 * are no reads. */
-	drbd_suspend_io(device);
+	drbd_suspend_io(device, READ_AND_WRITE);
 	buffer = drbd_md_get_buffer(device, __func__); /* Lock meta-data IO */
 	if (!buffer) {
 		drbd_resume_io(device);
@@ -962,29 +1627,31 @@ drbd_determine_dev_size(struct drbd_device *device, enum dds_flags flags, struct
 	}
 
 	/* remember current offset and sizes */
-	prev.last_agreed_sect = md->la_size_sect;
+	prev.effective_size = md->effective_size;
 	prev.md_offset = md->md_offset;
 	prev.al_offset = md->al_offset;
 	prev.bm_offset = md->bm_offset;
 	prev.md_size_sect = md->md_size_sect;
 	prev.al_stripes = md->al_stripes;
 	prev.al_stripe_size_4k = md->al_stripe_size_4k;
+	prev_size = get_capacity(device->vdisk);
 
 	if (rs) {
+		/* FIXME race with peer requests that want to do an AL transaction */
 		/* rs is non NULL if we should change the AL layout only */
 		md->al_stripes = rs->al_stripes;
 		md->al_stripe_size_4k = rs->al_stripe_size / 4;
 		md->al_size_4k = (u64)rs->al_stripes * rs->al_stripe_size / 4;
 	}
 
-	drbd_md_set_sector_offsets(device, device->ldev);
+	drbd_md_set_sector_offsets(device->ldev);
 
 	rcu_read_lock();
 	u_size = rcu_dereference(device->ldev->disk_conf)->disk_size;
 	rcu_read_unlock();
-	size = drbd_new_dev_size(device, device->ldev, u_size, flags & DDSF_FORCED);
+	size = drbd_new_dev_size(device, peer_current_size, u_size, flags);
 
-	if (size < prev.last_agreed_sect) {
+	if (size < prev.effective_size) {
 		if (rs && u_size == 0) {
 			/* Remove "rs &&" later. This check should always be active, but
 			   right now the receiver expects the permissive behavior */
@@ -1000,9 +1667,11 @@ drbd_determine_dev_size(struct drbd_device *device, enum dds_flags flags, struct
 	}
 
 	if (get_capacity(device->vdisk) != size ||
-	    drbd_bm_capacity(device) != size) {
-		int err;
-		err = drbd_bm_resize(device, size, !(flags & DDSF_NO_RESYNC));
+	    (device->bitmap && drbd_bm_capacity(device) != size)) {
+		int err = 0;
+
+		if (device->bitmap)
+			err = drbd_bm_resize(device, size, !(flags & DDSF_NO_RESYNC));
 		if (unlikely(err)) {
 			/* currently there is only one error: ENOMEM! */
 			size = drbd_bm_capacity(device);
@@ -1014,21 +1683,32 @@ drbd_determine_dev_size(struct drbd_device *device, enum dds_flags flags, struct
 				    "Leaving size unchanged\n");
 			}
 			rv = DS_ERROR;
+		} else {
+			/* racy, see comments above. */
+			drbd_set_my_capacity(device, size);
+			if (effective_disk_size_determined(device) &&
+			    md->effective_size != size) {
+				char ppb[10];
+
+				drbd_info(device, "persisting effective size = %s (%llu KB)\n",
+					ppsize(ppb, size >> 1),
+					(unsigned long long)size >> 1);
+				md->effective_size = size;
+			}
 		}
-		/* racy, see comments above. */
-		drbd_set_my_capacity(device, size);
-		md->la_size_sect = size;
 	}
 	if (rv <= DS_ERROR)
 		goto err_out;
 
-	la_size_changed = (prev.last_agreed_sect != md->la_size_sect);
+	la_size_changed = (prev.effective_size != md->effective_size);
 
 	md_moved = prev.md_offset    != md->md_offset
 		|| prev.md_size_sect != md->md_size_sect;
 
 	if (la_size_changed || md_moved || rs) {
-		u32 prev_flags;
+		int i;
+		bool prev_al_disabled = false;
+		u32 prev_peer_full_sync = 0;
 
 		/* We do some synchronous IO below, which may take some time.
 		 * Clear the timer, to avoid scary "timer expired!" messages,
@@ -1039,11 +1719,25 @@ drbd_determine_dev_size(struct drbd_device *device, enum dds_flags flags, struct
 		 * to move the on-disk location of the activity log ringbuffer.
 		 * Lock for transaction is good enough, it may well be "dirty"
 		 * or even "starving". */
-		wait_event(device->al_wait, lc_try_lock_for_transaction(device->act_log));
+		wait_event(device->al_wait, drbd_al_try_lock_for_transaction(device));
+
+		if (drbd_md_dax_active(device->ldev)) {
+			if (drbd_dax_map(device->ldev)) {
+				drbd_err(device, "Could not remap DAX; aborting resize\n");
+				lc_unlock(device->act_log);
+				goto err_out;
+			}
+		}
 
 		/* mark current on-disk bitmap and activity log as unreliable */
-		prev_flags = md->flags;
-		md->flags |= MDF_FULL_SYNC | MDF_AL_DISABLED;
+		prev_al_disabled = !!(md->flags & MDF_AL_DISABLED);
+		md->flags |= MDF_AL_DISABLED;
+		for (i = 0; i < DRBD_PEERS_MAX; i++) {
+			if (md->peers[i].flags & MDF_PEER_FULL_SYNC)
+				prev_peer_full_sync |= 1 << i;
+			else
+				md->peers[i].flags |= MDF_PEER_FULL_SYNC;
+		}
 		drbd_md_write(device, buffer);
 
 		drbd_al_initialize(device, buffer);
@@ -1053,27 +1747,35 @@ drbd_determine_dev_size(struct drbd_device *device, enum dds_flags flags, struct
 			 la_size_changed ? "size changed" : "md moved");
 		/* next line implicitly does drbd_suspend_io()+drbd_resume_io() */
 		drbd_bitmap_io(device, md_moved ? &drbd_bm_write_all : &drbd_bm_write,
-			       "size changed", BM_LOCKED_MASK, NULL);
+			       "size changed", BM_LOCK_ALL, NULL);
 
 		/* on-disk bitmap and activity log is authoritative again
 		 * (unless there was an IO error meanwhile...) */
-		md->flags = prev_flags;
+		if (!prev_al_disabled)
+			md->flags &= ~MDF_AL_DISABLED;
+		for (i = 0; i < DRBD_PEERS_MAX; i++) {
+			if (!(prev_peer_full_sync & (1 << i)))
+				md->peers[i].flags &= ~MDF_PEER_FULL_SYNC;
+		}
 		drbd_md_write(device, buffer);
 
 		if (rs)
 			drbd_info(device, "Changed AL layout to al-stripes = %d, al-stripe-size-kB = %d\n",
 				  md->al_stripes, md->al_stripe_size_4k * 4);
+
+		lc_unlock(device->act_log);
+		wake_up(&device->al_wait);
 	}
 
-	if (size > prev.last_agreed_sect)
-		rv = prev.last_agreed_sect ? DS_GREW : DS_GREW_FROM_ZERO;
-	if (size < prev.last_agreed_sect)
+	if (size > prev_size)
+		rv = prev_size ? DS_GREW : DS_GREW_FROM_ZERO;
+	if (size < prev_size)
 		rv = DS_SHRUNK;
 
 	if (0) {
 	err_out:
 		/* restore previous offset and sizes */
-		md->la_size_sect = prev.last_agreed_sect;
+		md->effective_size = prev.effective_size;
 		md->md_offset = prev.md_offset;
 		md->al_offset = prev.al_offset;
 		md->bm_offset = prev.bm_offset;
@@ -1082,57 +1784,167 @@ drbd_determine_dev_size(struct drbd_device *device, enum dds_flags flags, struct
 		md->al_stripe_size_4k = prev.al_stripe_size_4k;
 		md->al_size_4k = (u64)prev.al_stripes * prev.al_stripe_size_4k;
 	}
-	lc_unlock(device->act_log);
-	wake_up(&device->al_wait);
 	drbd_md_put_buffer(device);
 	drbd_resume_io(device);
 
 	return rv;
 }
 
-sector_t
-drbd_new_dev_size(struct drbd_device *device, struct drbd_backing_dev *bdev,
-		  sector_t u_size, int assume_peer_has_space)
+/**
+ * get_max_agreeable_size() - aggregate the maximum size all reachable peers can agree on
+ * @device: DRBD device
+ * @max: Pointer to store the maximum agreeable size in
+ * @twopc_reachable_nodes: Bitmap of reachable nodes from two-phase-commit reply
+ *
+ * Check if all peer devices that have bitmap slots assigned in the metadata
+ * are connected.
+ */
+static bool get_max_agreeable_size(struct drbd_device *device, uint64_t *max,
+		uint64_t twopc_reachable_nodes)
 {
-	sector_t p_size = device->p_size;   /* partner's disk size. */
-	sector_t la_size_sect = bdev->md.la_size_sect; /* last agreed size. */
-	sector_t m_size; /* my size */
-	sector_t size = 0;
-
-	m_size = drbd_get_max_capacity(bdev);
+	int node_id;
+	bool all_known;
 
-	if (device->state.conn < C_CONNECTED && assume_peer_has_space) {
-		drbd_warn(device, "Resize while not connected was forced by the user!\n");
-		p_size = m_size;
-	}
+	all_known = true;
+	rcu_read_lock();
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		struct drbd_peer_md *peer_md = &device->ldev->md.peers[node_id];
+		struct drbd_peer_device *peer_device;
 
-	if (p_size && m_size) {
-		size = min_t(sector_t, p_size, m_size);
-	} else {
-		if (la_size_sect) {
-			size = la_size_sect;
-			if (m_size && m_size < size)
-				size = m_size;
-			if (p_size && p_size < size)
-				size = p_size;
+		if (device->ldev->md.node_id == node_id) {
+			dynamic_drbd_dbg(device, "my node_id: %u\n", node_id);
+			continue; /* skip myself... */
+		}
+		/* peer_device may be NULL if we don't have a connection to that node. */
+		peer_device = peer_device_by_node_id(device, node_id);
+		if (twopc_reachable_nodes & NODE_MASK(node_id)) {
+			uint64_t size = device->resource->twopc_reply.max_possible_size;
+
+			dynamic_drbd_dbg(device, "node_id: %u, twopc YES for max_size: %llu\n",
+					node_id, (unsigned long long)size);
+
+			/* Update our cached information, they said "yes".
+			 * Note:
+			 * d_size == 0 indicates diskless peer, or not directly
+			 * connected.  It will be ignored by the min_not_zero()
+			 * aggregation elsewhere.  Only reset if size > d_size
+			 * here.  Once we really commit the change, this will
+			 * also be assigned if it was a shrinkage.
+			 */
+			if (peer_device) {
+				if (peer_device->d_size && size > peer_device->d_size)
+					peer_device->d_size = size;
+				if (size > peer_device->max_size)
+					peer_device->max_size = size;
+			}
+			continue;
+		}
+		if (peer_device) {
+			enum drbd_disk_state pdsk = peer_device->disk_state[NOW];
+			dynamic_drbd_dbg(peer_device, "node_id: %u idx: %u bm-uuid: 0x%llx flags: 0x%x max_size: %llu (%s)\n",
+					node_id,
+					peer_md->bitmap_index,
+					peer_md->bitmap_uuid,
+					peer_md->flags,
+					peer_device->max_size,
+					drbd_disk_str(pdsk));
+
+			if (test_bit(HAVE_SIZES, &peer_device->flags)) {
+				/* If we still can see it, consider its last
+				 * known size, even if it may have meanwhile
+				 * detached from its disk.
+				 * If we no longer see it, we may want to
+				 * ignore the size we last knew, and
+				 * "assume_peer_has_space".  */
+				*max = min_not_zero(*max, peer_device->max_size);
+				continue;
+			}
 		} else {
-			if (m_size)
-				size = m_size;
-			if (p_size)
-				size = p_size;
+			dynamic_drbd_dbg(device, "node_id: %u idx: %u bm-uuid: 0x%llx flags: 0x%x (not currently reachable)\n",
+					node_id,
+					peer_md->bitmap_index,
+					peer_md->bitmap_uuid,
+					peer_md->flags);
 		}
+		/* Even the currently diskless peer does not really know if it
+		 * is diskless on purpose (a "DRBD client") or if it just was
+		 * not possible to attach (backend device gone for some
+		 * reason).  But we remember in our meta data if we have ever
+		 * seen a peer disk for this peer.  If we did not ever see a
+		 * peer disk, assume that's intentional. */
+		if ((peer_md->flags & MDF_PEER_DEVICE_SEEN) == 0)
+			continue;
+
+		all_known = false;
+		/* don't break yet, min aggregation may still find a peer */
 	}
+	rcu_read_unlock();
+	return all_known;
+}
+
+#define DDUMP_LLU(d, x) do { dynamic_drbd_dbg(d, "%u: " #x ": %llu\n", __LINE__, (unsigned long long)(x)); } while (0)
 
+/* MUST hold a reference on ldev. */
+sector_t
+drbd_new_dev_size(struct drbd_device *device,
+		sector_t current_size, /* need at least this much */
+		sector_t user_capped_size, /* want (at most) this much */
+		enum dds_flags flags)
+{
+	struct drbd_resource *resource = device->resource;
+	uint64_t p_size = 0;
+	uint64_t la_size = device->ldev->md.effective_size; /* last agreed size */
+	uint64_t m_size; /* my size */
+	uint64_t size = 0;
+	bool all_known_connected;
+
+	/* If there are reachable_nodes, get_max_agreeable_size() will
+	 * also aggregate the twopc.resize.new_size into their d_size
+	 * and max_size.  Do that first, so drbd_partition_data_capacity()
+	 * can use that new knowledge.
+	 */
+
+	all_known_connected = get_max_agreeable_size(device, &p_size,
+		flags & DDSF_2PC ? resource->twopc_reply.reachable_nodes : 0);
+	m_size = drbd_partition_data_capacity(device);
+
+	if (all_known_connected) {
+		/* If we currently can see all peer devices,
+		 * and p_size is still 0, apparently all our peers have been
+		 * diskless, always.  If we have the only persistent backend,
+		 * only our size counts. */
+		DDUMP_LLU(device, p_size);
+		DDUMP_LLU(device, m_size);
+		p_size = min_not_zero(p_size, m_size);
+	} else if (flags & DDSF_ASSUME_UNCONNECTED_PEER_HAS_SPACE) {
+		DDUMP_LLU(device, p_size);
+		DDUMP_LLU(device, m_size);
+		DDUMP_LLU(device, la_size);
+		p_size = min_not_zero(p_size, m_size);
+		if (p_size > la_size)
+			drbd_warn(device, "Resize forced while not fully connected!\n");
+	} else {
+		DDUMP_LLU(device, p_size);
+		DDUMP_LLU(device, m_size);
+		DDUMP_LLU(device, la_size);
+		/* We currently cannot see all peer devices,
+		 * fall back to what we last agreed upon. */
+		p_size = min_not_zero(p_size, la_size);
+	}
+
+	DDUMP_LLU(device, p_size);
+	DDUMP_LLU(device, m_size);
+	size = min_not_zero(p_size, m_size);
+	DDUMP_LLU(device, size);
 	if (size == 0)
-		drbd_err(device, "Both nodes diskless!\n");
+		drbd_err(device, "All nodes diskless!\n");
 
-	if (u_size) {
-		if (u_size > size)
-			drbd_err(device, "Requested disk size is too big (%lu > %lu)\n",
-			    (unsigned long)u_size>>1, (unsigned long)size>>1);
-		else
-			size = u_size;
-	}
+	if (user_capped_size > size)
+		drbd_err(device, "Requested disk size is too big (%llu > %llu)kiB\n",
+		    (unsigned long long)user_capped_size>>1,
+		    (unsigned long long)size>>1);
+	else if (user_capped_size)
+		size = user_capped_size;
 
 	return size;
 }
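To make the aggregation rule above easier to review: min_not_zero() treats
zero as "diskless/unknown" and skips it. A hypothetical stand-alone version
of the negotiation (not part of the patch):

	/* {0 (diskless), 100 GiB, 120 GiB} -> 100 GiB; all-zero -> 0 */
	static uint64_t agreed_size(const uint64_t *sizes, int n)
	{
		uint64_t size = 0;
		int i;

		for (i = 0; i < n; i++)
			size = min_not_zero(size, sizes[i]);
		return size;
	}

Only if every node is diskless does the result stay 0 and trigger the
"All nodes diskless!" error.
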
@@ -1184,57 +1996,58 @@ static int drbd_check_al_size(struct drbd_device *device, struct disk_conf *dc)
 		return -EBUSY;
 	} else {
 		lc_destroy(t);
+		device->al_writ_cnt = 0;
+		memset(device->al_histogram, 0, sizeof(device->al_histogram));
 	}
 	drbd_md_mark_dirty(device); /* we changed device->act_log->nr_elemens */
 	return 0;
 }
 
-static unsigned int drbd_max_peer_bio_size(struct drbd_device *device)
+static u32 common_connection_features(struct drbd_resource *resource)
 {
-	/*
-	 * We may ignore peer limits if the peer is modern enough.  From 8.3.8
-	 * onwards the peer can use multiple BIOs for a single peer_request.
-	 */
-	if (device->state.conn < C_WF_REPORT_PARAMS)
-		return device->peer_max_bio_size;
-
-	if (first_peer_device(device)->connection->agreed_pro_version < 94)
-		return min(device->peer_max_bio_size, DRBD_MAX_SIZE_H80_PACKET);
+	struct drbd_connection *connection;
+	u32 features = -1;
 
-	/*
-	 * Correct old drbd (up to 8.3.7) if it believes it can do more than
-	 * 32KiB.
-	 */
-	if (first_peer_device(device)->connection->agreed_pro_version == 94)
-		return DRBD_MAX_SIZE_H80_PACKET;
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		if (connection->cstate[NOW] < C_CONNECTED)
+			continue;
+		features &= connection->agreed_features;
+	}
+	rcu_read_unlock();
 
-	/*
-	 * drbd 8.3.8 onwards, before 8.4.0
-	 */
-	if (first_peer_device(device)->connection->agreed_pro_version < 100)
-		return DRBD_MAX_BIO_SIZE_P95;
-	return DRBD_MAX_BIO_SIZE;
+	return features;
 }
 
-static unsigned int drbd_max_discard_sectors(struct drbd_connection *connection)
+static unsigned int drbd_max_discard_sectors(struct drbd_resource *resource)
 {
-	/* when we introduced REQ_WRITE_SAME support, we also bumped
+	struct drbd_connection *connection;
+	unsigned int s = DRBD_MAX_BBIO_SECTORS;
+
+	/* when we introduced WRITE_SAME support, we also bumped
 	 * our maximum supported batch bio size used for discards. */
-	if (connection->agreed_features & DRBD_FF_WSAME)
-		return DRBD_MAX_BBIO_SECTORS;
-	/* before, with DRBD <= 8.4.6, we only allowed up to one AL_EXTENT_SIZE. */
-	return AL_EXTENT_SIZE >> 9;
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		if (connection->cstate[NOW] == C_CONNECTED &&
+		    !(connection->agreed_features & DRBD_FF_WSAME)) {
+			/* before, with DRBD <= 8.4.6, we only allowed up to one AL_EXTENT_SIZE. */
+			s = AL_EXTENT_SIZE >> SECTOR_SHIFT;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return s;
 }
 
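Assuming the usual 4 MiB activity log extent size (AL_EXTENT_SHIFT == 22),
the fallback above works out to:

	AL_EXTENT_SIZE >> SECTOR_SHIFT == (4 << 20) >> 9 == 8192 sectors

per discard request whenever a connected peer lacks DRBD_FF_WSAME.
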
-static bool drbd_discard_supported(struct drbd_connection *connection,
+static bool drbd_discard_supported(struct drbd_device *device,
 		struct drbd_backing_dev *bdev)
 {
 	if (bdev && !bdev_max_discard_sectors(bdev->backing_bdev))
 		return false;
 
-	if (connection->cstate >= C_CONNECTED &&
-	    !(connection->agreed_features & DRBD_FF_TRIM)) {
-		drbd_info(connection,
+	if (!(common_connection_features(device->resource) & DRBD_FF_TRIM)) {
+		drbd_info(device,
 			"peer DRBD too old, does not support TRIM: disabling discards\n");
 		return false;
 	}
@@ -1242,85 +2055,75 @@ static bool drbd_discard_supported(struct drbd_connection *connection,
 	return true;
 }
 
-/* This is the workaround for "bio would need to, but cannot, be split" */
-static unsigned int drbd_backing_dev_max_segments(struct drbd_device *device)
+static void get_common_queue_limits(struct queue_limits *common_limits,
+		struct drbd_device *device)
 {
-	unsigned int max_segments;
+	struct drbd_peer_device *peer_device;
+	struct queue_limits peer_limits = { 0 };
+
+	blk_set_stacking_limits(common_limits);
+	common_limits->max_hw_sectors = device->device_conf.max_bio_size >> SECTOR_SHIFT;
+	common_limits->max_sectors = device->device_conf.max_bio_size >> SECTOR_SHIFT;
+	common_limits->physical_block_size = device->device_conf.block_size;
+	common_limits->logical_block_size = device->device_conf.block_size;
+	common_limits->io_min = device->device_conf.block_size;
+	common_limits->max_hw_zone_append_sectors = 0;
 
 	rcu_read_lock();
-	max_segments = rcu_dereference(device->ldev->disk_conf)->max_bio_bvecs;
+	for_each_peer_device_rcu(peer_device, device) {
+		if (!test_bit(HAVE_SIZES, &peer_device->flags) &&
+		    peer_device->repl_state[NOW] < L_ESTABLISHED)
+			continue;
+		blk_set_stacking_limits(&peer_limits);
+		peer_limits.logical_block_size = peer_device->q_limits.logical_block_size;
+		peer_limits.physical_block_size = peer_device->q_limits.physical_block_size;
+		peer_limits.alignment_offset = peer_device->q_limits.alignment_offset;
+		peer_limits.io_min = peer_device->q_limits.io_min;
+		peer_limits.io_opt = peer_device->q_limits.io_opt;
+		peer_limits.max_hw_sectors = peer_device->q_limits.max_bio_size >> SECTOR_SHIFT;
+		peer_limits.max_sectors = peer_device->q_limits.max_bio_size >> SECTOR_SHIFT;
+		blk_stack_limits(common_limits, &peer_limits, 0);
+	}
 	rcu_read_unlock();
-
-	if (!max_segments)
-		return BLK_MAX_SEGMENTS;
-	return max_segments;
 }
 
-void drbd_reconsider_queue_parameters(struct drbd_device *device,
-		struct drbd_backing_dev *bdev, struct o_qlim *o)
+void drbd_reconsider_queue_parameters(struct drbd_device *device, struct drbd_backing_dev *bdev)
 {
-	struct drbd_connection *connection =
-		first_peer_device(device)->connection;
 	struct request_queue * const q = device->rq_queue;
-	unsigned int now = queue_max_hw_sectors(q) << 9;
 	struct queue_limits lim;
 	struct request_queue *b = NULL;
-	unsigned int new;
-
-	if (bdev) {
-		b = bdev->backing_bdev->bd_disk->queue;
-
-		device->local_max_bio_size =
-			queue_max_hw_sectors(b) << SECTOR_SHIFT;
-	}
-
-	/*
-	 * We may later detach and re-attach on a disconnected Primary.  Avoid
-	 * decreasing the value in this case.
-	 *
-	 * We want to store what we know the peer DRBD can handle, not what the
-	 * peer IO backend can handle.
-	 */
-	new = min3(DRBD_MAX_BIO_SIZE, device->local_max_bio_size,
-		max(drbd_max_peer_bio_size(device), device->peer_max_bio_size));
-	if (new != now) {
-		if (device->state.role == R_PRIMARY && new < now)
-			drbd_err(device, "ASSERT FAILED new < now; (%u < %u)\n",
-					new, now);
-		drbd_info(device, "max BIO size = %u\n", new);
-	}
 
 	lim = queue_limits_start_update(q);
-	if (bdev) {
-		blk_set_stacking_limits(&lim);
-		lim.max_segments = drbd_backing_dev_max_segments(device);
-	} else {
-		lim.max_segments = BLK_MAX_SEGMENTS;
-		lim.features = BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
-			       BLK_FEAT_ROTATIONAL | BLK_FEAT_STABLE_WRITES;
-	}
-
-	lim.max_hw_sectors = new >> SECTOR_SHIFT;
-	lim.seg_boundary_mask = PAGE_SIZE - 1;
+	get_common_queue_limits(&lim, device);
 
 	/*
-	 * We don't care for the granularity, really.
-	 *
-	 * Stacking limits below should fix it for the local device.  Whether or
-	 * not it is a suitable granularity on the remote device is not our
-	 * problem, really. If you care, you need to use devices with similar
-	 * topology on all peers.
+	 * discard_granularity == DRBD_DISCARD_GRANULARITY_DEF (sentinel):
+	 *   not explicitly configured; use the legacy heuristic
+	 *   (drbd_discard_supported decides, granularity=512).
+	 * discard_granularity == 0: explicitly disable discards.
+	 * discard_granularity > 0: use the configured value and enable discards
+	 *   unconditionally (e.g. LINSTOR knows the real granularity from
+	 *   storage pool info and configures it for diskless primaries or to
+	 *   advertise a larger granularity than strictly required).
 	 */
-	if (drbd_discard_supported(connection, bdev)) {
-		lim.discard_granularity = 512;
-		lim.max_hw_discard_sectors =
-			drbd_max_discard_sectors(connection);
+	if (device->device_conf.discard_granularity == DRBD_DISCARD_GRANULARITY_DEF) {
+		if (drbd_discard_supported(device, bdev)) {
+			lim.discard_granularity = 512;
+			lim.max_hw_discard_sectors = drbd_max_discard_sectors(device->resource);
+		} else {
+			lim.discard_granularity = 0;
+			lim.max_hw_discard_sectors = 0;
+		}
+	} else if (device->device_conf.discard_granularity) {
+		lim.discard_granularity = device->device_conf.discard_granularity;
+		lim.max_hw_discard_sectors = drbd_max_discard_sectors(device->resource);
 	} else {
 		lim.discard_granularity = 0;
 		lim.max_hw_discard_sectors = 0;
 	}
 
 	if (bdev) {
+		b = bdev->backing_bdev->bd_disk->queue;
 		blk_stack_limits(&lim, &b->limits, 0);
 		/*
 		 * blk_set_stacking_limits() cleared the features, and
@@ -1337,14 +2140,28 @@ void drbd_reconsider_queue_parameters(struct drbd_device *device,
 		 *    receiver will detect a checksum mismatch.
 		 */
 		lim.features |= BLK_FEAT_STABLE_WRITES;
+
+		/*
+		 * blk_stack_limits() uses max() for discard_granularity and
+		 * min_not_zero() for max_hw_discard_sectors, both of which can
+		 * re-enable discards from the backing device even when the user
+		 * explicitly disabled them (discard_granularity == 0).
+		 */
+		if (device->device_conf.discard_granularity == 0) {
+			lim.discard_granularity = 0;
+			lim.max_hw_discard_sectors = 0;
+		}
+	} else {
+		lim.features = BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
+			       BLK_FEAT_ROTATIONAL | BLK_FEAT_STABLE_WRITES;
 	}
 
 	/*
-	 * If we can handle "zeroes" efficiently on the protocol, we want to do
-	 * that, even if our backend does not announce max_write_zeroes_sectors
-	 * itself.
+	 * If we can handle "zeroes" efficiently on the protocol,
+	 * we want to do that, even if our backend does not announce
+	 * max_write_zeroes_sectors itself.
 	 */
-	if (connection->agreed_features & DRBD_FF_WZEROES)
+	if (common_connection_features(device->resource) & DRBD_FF_WZEROES)
 		lim.max_write_zeroes_sectors = DRBD_MAX_BBIO_SECTORS;
 	else
 		lim.max_write_zeroes_sectors = 0;
@@ -1352,6 +2169,11 @@ void drbd_reconsider_queue_parameters(struct drbd_device *device,
 
 	if ((lim.discard_granularity >> SECTOR_SHIFT) >
 	    lim.max_hw_discard_sectors) {
+		/*
+		 * discard_granularity is the smallest supported unit of a
+		 * discard. If that is larger than the maximum supported discard
+		 * size, we need to disable discards altogether.
+		 */
 		lim.discard_granularity = 0;
 		lim.max_hw_discard_sectors = 0;
 	}
@@ -1360,56 +2182,48 @@ void drbd_reconsider_queue_parameters(struct drbd_device *device,
 		drbd_err(device, "setting new queue limits failed\n");
 }
 
-/* Starts the worker thread */
-static void conn_reconfig_start(struct drbd_connection *connection)
+/* Make sure IO is suspended before calling this function. */
+static void drbd_try_suspend_al(struct drbd_device *device)
 {
-	drbd_thread_start(&connection->worker);
-	drbd_flush_workqueue(&connection->sender_work);
-}
+	struct drbd_peer_device *peer_device;
+	bool suspend = true;
+	int max_peers = device->ldev->md.max_peers, bitmap_index;
 
-/* if still unconfigured, stops worker again. */
-static void conn_reconfig_done(struct drbd_connection *connection)
-{
-	bool stop_threads;
-	spin_lock_irq(&connection->resource->req_lock);
-	stop_threads = conn_all_vols_unconf(connection) &&
-		connection->cstate == C_STANDALONE;
-	spin_unlock_irq(&connection->resource->req_lock);
-	if (stop_threads) {
-		/* ack_receiver thread and ack_sender workqueue are implicitly
-		 * stopped by receiver in conn_disconnect() */
-		drbd_thread_stop(&connection->receiver);
-		drbd_thread_stop(&connection->worker);
+	if (device->bitmap) {
+		for (bitmap_index = 0; bitmap_index < max_peers; bitmap_index++) {
+			if (_drbd_bm_total_weight(device, bitmap_index) != drbd_bm_bits(device))
+				return;
+		}
 	}
-}
-
-/* Make sure IO is suspended before calling this function(). */
-static void drbd_suspend_al(struct drbd_device *device)
-{
-	int s = 0;
 
-	if (!lc_try_lock(device->act_log)) {
-		drbd_warn(device, "Failed to lock al in drbd_suspend_al()\n");
+	if (!drbd_al_try_lock(device)) {
+		drbd_warn(device, "Failed to lock al in %s()\n", __func__);
 		return;
 	}
 
 	drbd_al_shrink(device);
-	spin_lock_irq(&device->resource->req_lock);
-	if (device->state.conn < C_CONNECTED)
-		s = !test_and_set_bit(AL_SUSPENDED, &device->flags);
-	spin_unlock_irq(&device->resource->req_lock);
+	read_lock_irq(&device->resource->state_rwlock);
+	for_each_peer_device(peer_device, device) {
+		if (peer_device->repl_state[NOW] >= L_ESTABLISHED) {
+			suspend = false;
+			break;
+		}
+	}
+	if (suspend)
+		suspend = !test_and_set_bit(AL_SUSPENDED, &device->flags);
+	read_unlock_irq(&device->resource->state_rwlock);
 	lc_unlock(device->act_log);
+	wake_up(&device->al_wait);
 
-	if (s)
+	if (suspend)
 		drbd_info(device, "Suspended AL updates\n");
 }
 
 
 static bool should_set_defaults(struct genl_info *info)
 {
-	struct drbd_genlmsghdr *dh = genl_info_userhdr(info);
-
-	return 0 != (dh->flags & DRBD_GENL_F_SET_DEFAULTS);
+	unsigned int flags = ((struct drbd_genlmsghdr *)genl_info_userhdr(info))->flags;
+
+	return flags & DRBD_GENL_F_SET_DEFAULTS;
 }
 
 static unsigned int drbd_al_extents_max(struct drbd_backing_dev *bdev)
@@ -1464,25 +2278,47 @@ static void sanitize_disk_conf(struct drbd_device *device, struct disk_conf *dis
 		}
 	}
 
+	/* To be effective, rs_discard_granularity must not be larger than the
+	 * maximum resync request size, and a multiple of 4k
+	 * (preferably a power-of-two multiple of 4k).
+	 * See also make_resync_request().
+	 * That also means that if q->limits.discard_granularity or
+	 * q->limits.discard_alignment are "odd", rs_discard_granularity won't
+	 * be particularly effective, or not effective at all.
+	 */
 	if (disk_conf->rs_discard_granularity) {
-		int orig_value = disk_conf->rs_discard_granularity;
-		sector_t discard_size = bdev_max_discard_sectors(bdev) << 9;
+		unsigned int new_discard_granularity =
+			disk_conf->rs_discard_granularity;
+		unsigned int discard_sectors = bdev_max_discard_sectors(bdev);
 		unsigned int discard_granularity = bdev_discard_granularity(bdev);
-		int remainder;
 
-		if (discard_granularity > disk_conf->rs_discard_granularity)
-			disk_conf->rs_discard_granularity = discard_granularity;
-
-		remainder = disk_conf->rs_discard_granularity %
-				discard_granularity;
-		disk_conf->rs_discard_granularity += remainder;
-
-		if (disk_conf->rs_discard_granularity > discard_size)
-			disk_conf->rs_discard_granularity = discard_size;
-
-		if (disk_conf->rs_discard_granularity != orig_value)
+		/* should be at least the discard_granularity of the bdev,
+		 * and preferably a multiple (or the backend won't be able to
+		 * discard some of the "cuttings").
+		 * This also sanitizes nonsensical settings like "77 bytes".
+		 */
+		new_discard_granularity = roundup(new_discard_granularity,
+				discard_granularity);
+
+		/* more than the max resync request size won't work anyway */
+		discard_sectors = min(discard_sectors,
+				DRBD_RS_DISCARD_GRANULARITY_MAX >> SECTOR_SHIFT);
+		/* Avoid compiler warning about truncated integer.
+		 * The min() above made sure the result fits even after left shift. */
+		new_discard_granularity = min(
+				new_discard_granularity >> SECTOR_SHIFT,
+				discard_sectors) << SECTOR_SHIFT;
+		/* less than the backend discard granularity is allowed if
+		   the backend granularity is a multiple of the configured value */
+		if (new_discard_granularity < discard_granularity &&
+		    discard_granularity % new_discard_granularity != 0)
+			new_discard_granularity = 0;
+
+		if (disk_conf->rs_discard_granularity != new_discard_granularity) {
 			drbd_info(device, "rs_discard_granularity changed to %d\n",
-				  disk_conf->rs_discard_granularity);
+					new_discard_granularity);
+			disk_conf->rs_discard_granularity = new_discard_granularity;
+		}
 	}
 }
 
@@ -1494,13 +2330,13 @@ static int disk_opts_check_al_size(struct drbd_device *device, struct disk_conf
 	    device->act_log->nr_elements == dc->al_extents)
 		return 0;
 
-	drbd_suspend_io(device);
+	drbd_suspend_io(device, READ_AND_WRITE);
 	/* If IO completion is currently blocked, we would likely wait
 	 * "forever" for the activity log to become unused. So we don't. */
-	if (atomic_read(&device->ap_bio_cnt))
+	if (atomic_read(&device->ap_bio_cnt[WRITE]) || atomic_read(&device->ap_bio_cnt[READ]))
 		goto out;
 
-	wait_event(device->al_wait, lc_try_lock(device->act_log));
+	wait_event(device->al_wait, drbd_al_try_lock(device));
 	drbd_al_shrink(device);
 	err = drbd_check_al_size(device, dc);
 	lc_unlock(device->act_log);
@@ -1510,24 +2346,113 @@ static int disk_opts_check_al_size(struct drbd_device *device, struct disk_conf
 	return err;
 }
 
-int drbd_adm_disk_opts(struct sk_buff *skb, struct genl_info *info)
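+/* Return the connection to the only peer known to have a backing disk, or
+ * NULL if the meta-data records disks on more than one peer. */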
+static struct drbd_connection *the_only_peer_with_disk(struct drbd_device *device,
+						       enum which_state which)
+{
+	const int my_node_id = device->resource->res_opts.node_id;
+	struct drbd_peer_md *peer_md = device->ldev->md.peers;
+	struct drbd_connection *connection = NULL;
+	struct drbd_peer_device *peer_device;
+	int node_id, peer_disks = 0;
+
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		if (node_id == my_node_id)
+			continue;
+
+		if (peer_md[node_id].flags & MDF_PEER_DEVICE_SEEN)
+			peer_disks++;
+
+		if (peer_disks > 1)
+			return NULL;
+
+		peer_device = peer_device_by_node_id(device, node_id);
+		if (peer_device) {
+			enum drbd_disk_state pdsk = peer_device->disk_state[which];
+
+			if (pdsk >= D_INCONSISTENT && pdsk != D_UNKNOWN)
+				connection = peer_device->connection;
+		}
+	}
+	return connection;
+}
+
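+/* Apply the al_updates setting to the MDF_AL_DISABLED meta-data flag. AL
+ * updates are also disabled when there is no bitmap, or, as an optimization,
+ * when the meta-data has a single peer slot and the only peer with a disk is
+ * primary while this node is secondary. */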
+static void __update_mdf_al_disabled(struct drbd_device *device, bool al_updates,
+				     enum which_state which)
+{
+	struct drbd_md *md = &device->ldev->md;
+	struct drbd_connection *peer = NULL;
+	bool al_updates_old = !(md->flags & MDF_AL_DISABLED);
+	bool optimized = false;
+
+	if (al_updates)
+		peer = the_only_peer_with_disk(device, which);
+
+	if (device->bitmap == NULL ||
+	    (al_updates && device->ldev->md.max_peers == 1 &&
+	    peer && peer->peer_role[which] == R_PRIMARY &&
+	    device->resource->role[which] == R_SECONDARY)) {
+		al_updates = false;
+		optimized = true;
+	}
+
+	if (al_updates_old == al_updates)
+		return;
+
+	if (al_updates) {
+		drbd_info(device, "Enabling local AL-updates\n");
+		md->flags &= ~MDF_AL_DISABLED;
+	} else {
+		drbd_info(device, "Disabling local AL-updates %s\n",
+			  optimized ? "(optimization)" : "(config)");
+		md->flags |= MDF_AL_DISABLED;
+	}
+	drbd_md_mark_dirty(device);
+}
+
+/**
+ * drbd_update_mdf_al_disabled() - update the MDF_AL_DISABLED bit in md.flags
+ * @device: DRBD device
+ * @which: OLD or NEW
+ *
+ * This function also turns off AL updates as a performance optimization when:
+ * - the cluster has only two nodes with a backing disk,
+ * - the other node with a backing disk is primary, and
+ * - this node is secondary.
+ */
+void drbd_update_mdf_al_disabled(struct drbd_device *device, enum which_state which)
+{
+	bool al_updates;
+
+	if (!get_ldev(device))
+		return;
+
+	rcu_read_lock();
+	al_updates = rcu_dereference(device->ldev->disk_conf)->al_updates;
+	rcu_read_unlock();
+	__update_mdf_al_disabled(device, al_updates, which);
+
+	put_ldev(device);
+}
+
+static int drbd_adm_disk_opts(struct sk_buff *skb, struct genl_info *info)
 {
 	struct drbd_config_context adm_ctx;
 	enum drbd_ret_code retcode;
 	struct drbd_device *device;
+	struct drbd_resource *resource;
 	struct disk_conf *new_disk_conf, *old_disk_conf;
-	struct fifo_buffer *old_plan = NULL, *new_plan = NULL;
+	struct drbd_peer_device *peer_device;
 	int err;
-	unsigned int fifo_size;
 
 	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_MINOR);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
-		goto finish;
 
 	device = adm_ctx.device;
-	mutex_lock(&adm_ctx.resource->adm_mutex);
+	resource = device->resource;
+	if (mutex_lock_interruptible(&adm_ctx.resource->adm_mutex)) {
+		retcode = ERR_INTR;
+		goto out_no_adm_mutex;
+	}
 
 	/* we also need a disk
 	 * to change the options on */
@@ -1542,7 +2467,7 @@ int drbd_adm_disk_opts(struct sk_buff *skb, struct genl_info *info)
 		goto fail;
 	}
 
-	mutex_lock(&device->resource->conf_update);
+	mutex_lock(&resource->conf_update);
 	old_disk_conf = device->ldev->disk_conf;
 	*new_disk_conf = *old_disk_conf;
 	if (should_set_defaults(info))
@@ -1555,24 +2480,8 @@ int drbd_adm_disk_opts(struct sk_buff *skb, struct genl_info *info)
 		goto fail_unlock;
 	}
 
-	if (!expect(device, new_disk_conf->resync_rate >= 1))
-		new_disk_conf->resync_rate = 1;
-
 	sanitize_disk_conf(device, new_disk_conf, device->ldev);
 
-	if (new_disk_conf->c_plan_ahead > DRBD_C_PLAN_AHEAD_MAX)
-		new_disk_conf->c_plan_ahead = DRBD_C_PLAN_AHEAD_MAX;
-
-	fifo_size = (new_disk_conf->c_plan_ahead * 10 * SLEEP_TIME) / HZ;
-	if (fifo_size != device->rs_plan_s->size) {
-		new_plan = fifo_alloc(fifo_size);
-		if (!new_plan) {
-			drbd_err(device, "kmalloc of fifo_buffer failed");
-			retcode = ERR_NOMEM;
-			goto fail_unlock;
-		}
-	}
-
 	err = disk_opts_check_al_size(device, new_disk_conf);
 	if (err) {
 		/* Could be just "busy". Ignore?
@@ -1583,6 +2492,30 @@ int drbd_adm_disk_opts(struct sk_buff *skb, struct genl_info *info)
 		goto fail_unlock;
 	}
 
+	if (!old_disk_conf->d_bitmap && new_disk_conf->d_bitmap) {
+		struct drbd_md *md = &device->ldev->md;
+
+		device->bitmap = drbd_bm_alloc(md->max_peers, md->bm_block_shift);
+		if (!device->bitmap) {
+			drbd_msg_put_info(adm_ctx.reply_skb, "Failed to allocate bitmap");
+			retcode = ERR_NOMEM;
+			goto fail_unlock;
+		}
+		err = drbd_bm_resize(device, get_capacity(device->vdisk), true);
+		if (err) {
+			drbd_msg_put_info(adm_ctx.reply_skb, "Failed to allocate bitmap pages");
+			retcode = ERR_NOMEM;
+			goto fail_unlock;
+		}
+
+		drbd_bitmap_io(device, &drbd_bm_write, "write from disk_opts", BM_LOCK_ALL, NULL);
+	} else if (old_disk_conf->d_bitmap && !new_disk_conf->d_bitmap) {
+		/* That would be quite some effort, and there is no use case for this */
+		drbd_msg_put_info(adm_ctx.reply_skb, "Online freeing of the bitmap not supported");
+		retcode = ERR_INVALID_REQUEST;
+		goto fail_unlock;
+	}
+
 	lock_all_resources();
 	retcode = drbd_resync_after_valid(device, new_disk_conf->resync_after);
 	if (retcode == NO_ERROR) {
@@ -1594,17 +2527,9 @@ int drbd_adm_disk_opts(struct sk_buff *skb, struct genl_info *info)
 	if (retcode != NO_ERROR)
 		goto fail_unlock;
 
-	if (new_plan) {
-		old_plan = device->rs_plan_s;
-		rcu_assign_pointer(device->rs_plan_s, new_plan);
-	}
-
-	mutex_unlock(&device->resource->conf_update);
+	mutex_unlock(&resource->conf_update);
 
-	if (new_disk_conf->al_updates)
-		device->ldev->md.flags &= ~MDF_AL_DISABLED;
-	else
-		device->ldev->md.flags |= MDF_AL_DISABLED;
+	__update_mdf_al_disabled(device, new_disk_conf->al_updates, NOW);
 
 	if (new_disk_conf->md_flushes)
 		clear_bit(MD_NO_FUA, &device->flags);
@@ -1612,65 +2537,298 @@ int drbd_adm_disk_opts(struct sk_buff *skb, struct genl_info *info)
 		set_bit(MD_NO_FUA, &device->flags);
 
 	if (write_ordering_changed(old_disk_conf, new_disk_conf))
-		drbd_bump_write_ordering(device->resource, NULL, WO_BDEV_FLUSH);
+		drbd_bump_write_ordering(device->resource, NULL, WO_BIO_BARRIER);
 
 	if (old_disk_conf->discard_zeroes_if_aligned !=
 	    new_disk_conf->discard_zeroes_if_aligned)
-		drbd_reconsider_queue_parameters(device, device->ldev, NULL);
+		drbd_reconsider_queue_parameters(device, device->ldev);
 
-	drbd_md_sync(device);
-
-	if (device->state.conn >= C_CONNECTED) {
-		struct drbd_peer_device *peer_device;
+	drbd_md_sync_if_dirty(device);
 
-		for_each_peer_device(peer_device, device)
+	for_each_peer_device(peer_device, device) {
+		if (peer_device->repl_state[NOW] >= L_ESTABLISHED)
 			drbd_send_sync_param(peer_device);
 	}
 
 	kvfree_rcu_mightsleep(old_disk_conf);
-	kfree(old_plan);
 	mod_timer(&device->request_timer, jiffies + HZ);
 	goto success;
 
 fail_unlock:
-	mutex_unlock(&device->resource->conf_update);
+	mutex_unlock(&resource->conf_update);
  fail:
 	kfree(new_disk_conf);
-	kfree(new_plan);
 success:
+	if (retcode != NO_ERROR)
+		synchronize_rcu();
 	put_ldev(device);
  out:
 	mutex_unlock(&adm_ctx.resource->adm_mutex);
- finish:
+out_no_adm_mutex:
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 }
 
-static struct file *open_backing_dev(struct drbd_device *device,
-		const char *bdev_path, void *claim_ptr, bool do_bd_link)
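+/* Unlock @mutex and clear *have_mutex, but only if we actually hold it. */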
+static void mutex_unlock_cond(struct mutex *mutex, bool *have_mutex)
 {
-	struct file *file;
-	int err = 0;
+	if (*have_mutex) {
+		mutex_unlock(mutex);
+		*have_mutex = false;
+	}
+}
+
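+/* Advance the resource's dagtag to the newest bitmap dagtag recorded in the
+ * meta-data of the backing device being attached. */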
+static void update_resource_dagtag(struct drbd_resource *resource, struct drbd_backing_dev *bdev)
+{
+	u64 dagtag = 0;
+	int node_id;
+
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		struct drbd_peer_md *peer_md;
+		if (bdev->md.node_id == node_id)
+			continue;
+
+		peer_md = &bdev->md.peers[node_id];
+
+		if (peer_md->bitmap_uuid)
+			dagtag = max(peer_md->bitmap_dagtag, dagtag);
+	}
+
+	spin_lock_irq(&resource->tl_update_lock);
+	if (dagtag > resource->dagtag_sector) {
+		resource->dagtag_before_attach = resource->dagtag_sector;
+		resource->dagtag_from_backing_dev = dagtag;
+		WRITE_ONCE(resource->dagtag_sector, dagtag);
+	}
+	spin_unlock_irq(&resource->tl_update_lock);
+}
+
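+/* Count the bitmap slots that are currently assigned to peers. */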
+static int used_bitmap_slots(struct drbd_backing_dev *bdev)
+{
+	int node_id;
+	int used = 0;
+
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		struct drbd_peer_md *peer_md = &bdev->md.peers[node_id];
+
+		if (peer_md->flags & MDF_HAVE_BITMAP)
+			used++;
+	}
+
+	return used;
+}
+
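+/* Check that no peer slot refers to the given bitmap index. */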
+static bool bitmap_index_vacant(struct drbd_backing_dev *bdev, int bitmap_index)
+{
+	int node_id;
+
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		struct drbd_peer_md *peer_md = &bdev->md.peers[node_id];
+
+		if (peer_md->bitmap_index == bitmap_index)
+			return false;
+	}
+	return true;
+}
+
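+/* Find the first bitmap slot no peer refers to, or -1 if all are in use. */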
+int drbd_unallocated_index(struct drbd_backing_dev *bdev)
+{
+	int bitmap_index;
+	int bm_max_peers = bdev->md.max_peers;
+
+	for (bitmap_index = 0; bitmap_index < bm_max_peers; bitmap_index++) {
+		if (bitmap_index_vacant(bdev, bitmap_index))
+			return bitmap_index;
+	}
+
+	return -1;
+}
+
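+/* Assign a vacant bitmap slot to this peer, in core and in the meta-data. */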
+static int
+allocate_bitmap_index(struct drbd_peer_device *peer_device,
+		      struct drbd_backing_dev *nbc)
+{
+	const int peer_node_id = peer_device->connection->peer_node_id;
+	struct drbd_peer_md *peer_md = &nbc->md.peers[peer_node_id];
+	int bitmap_index;
+
+	bitmap_index = drbd_unallocated_index(nbc);
+	if (bitmap_index == -1) {
+		drbd_err(peer_device, "Not enough free bitmap slots\n");
+		return -ENOSPC;
+	}
+
+	peer_md->bitmap_index = bitmap_index;
+	peer_device->bitmap_index = bitmap_index;
+	peer_md->flags |= MDF_HAVE_BITMAP;
+
+	return 0;
+}
+
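+/* Find a completely unused peer slot; only such slots are guaranteed to
+ * still hold the day0 UUID. */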
+static struct drbd_peer_md *day0_peer_md(struct drbd_device *device)
+{
+	const int my_node_id = device->resource->res_opts.node_id;
+	struct drbd_peer_md *peer_md = device->ldev->md.peers;
+	int node_id;
+
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		if (node_id == my_node_id)
+			continue;
+		/* Only totally unused slots definitely contain the day0 UUID. */
+		if (peer_md[node_id].bitmap_index == -1 && !peer_md[node_id].flags)
+			return &peer_md[node_id];
+	}
+	return NULL;
+}
+
+/*
+ * Clear the slot for this peer in the metadata. If md_flags is empty, clear
+ * the slot completely. Otherwise make it a slot for a diskless peer. Also
+ * clear any bitmap associated with this peer.
+ */
+static int clear_peer_slot(struct drbd_device *device, int peer_node_id, u32 md_flags)
+{
+	struct drbd_peer_md *peer_md, *day0_md;
+	struct meta_data_on_disk_9 *buffer;
+	int from_index, freed_index;
+	bool free_bitmap_slot;
+
+	if (!get_ldev(device))
+		return -ENODEV;
+
+	peer_md = &device->ldev->md.peers[peer_node_id];
+	free_bitmap_slot = peer_md->flags & MDF_HAVE_BITMAP;
+	if (free_bitmap_slot) {
+		drbd_suspend_io(device, WRITE_ONLY);
+
+		/*
+		 * Unallocated slots are considered to track writes to the
+		 * device since day 0. In order to keep that promise, copy the
+		 * bitmap from an unallocated slot to this one, or set it to
+		 * all out-of-sync.
+		 */
+
+		from_index = drbd_unallocated_index(device->ldev);
+		freed_index = peer_md->bitmap_index;
+	}
+	buffer = drbd_md_get_buffer(device, __func__); /* lock meta-data IO to superblock */
+	if (buffer == NULL)
+		goto out_no_buffer;
+
+	/* Look for day0 UUID before changing this peer slot to a day0 slot. */
+	day0_md = day0_peer_md(device);
+
+	peer_md->flags &= md_flags & ~MDF_HAVE_BITMAP;
+	peer_md->bitmap_index = -1;
+
+	if (free_bitmap_slot) {
+		drbd_bm_lock(device, __func__, BM_LOCK_BULK);
+		/*
+		 * Regular bitmap ops (calling into bm_op()) can run in parallel to
+		 * drbd_bm_copy_slot() and interleave with it, as drbd_bm_copy_slot()
+		 * gives up its locks when it moves on to the next source page.
+		 * The bitmap->bm_all_slots_lock ensures that drbd_set_sync()
+		 * (which iterates over multiple slots) does not interleave with
+		 * drbd_bm_copy_slot() while it copies data from one slot to another.
+		 */
+		if (from_index != -1)
+			drbd_bm_copy_slot(device, from_index, freed_index);
+		else
+			_drbd_bm_set_many_bits(device, freed_index, 0, -1UL);
+
+		drbd_bm_write(device, NULL);
+		drbd_bm_unlock(device);
+	}
+
+	/*
+	 * When we forget a peer, we clear the flags. In this case, reset the
+	 * bitmap UUID to the day0 UUID. Peer slots without any bitmap index or
+	 * any flags set should always contain the day0 UUID.
+	 */
+	if (!peer_md->flags && day0_md) {
+		peer_md->bitmap_uuid = day0_md->bitmap_uuid;
+		peer_md->bitmap_dagtag = day0_md->bitmap_dagtag;
+	} else {
+		peer_md->bitmap_uuid = 0;
+		peer_md->bitmap_dagtag = 0;
+	}
+
+	clear_bit(MD_DIRTY, &device->flags);
+	drbd_md_write(device, buffer);
+	drbd_md_put_buffer(device);
+
+ out_no_buffer:
+	if (free_bitmap_slot)
+		drbd_resume_io(device);
+
+	put_ldev(device);
+
+	return 0;
+}
+
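+/* Whether the peer device configuration requests a bitmap for this peer. */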
+bool want_bitmap(struct drbd_peer_device *peer_device)
+{
+	struct peer_device_conf *pdc;
+	bool want_bitmap = false;
+
+	rcu_read_lock();
+	pdc = rcu_dereference(peer_device->conf);
+	if (pdc)
+		want_bitmap |= pdc->bitmap;
+	rcu_read_unlock();
+
+	return want_bitmap;
+}
+
+static void close_backing_dev(struct drbd_device *device,
+		struct file *bdev_file, bool do_bd_unlink)
+{
+	if (!bdev_file)
+		return;
+	if (do_bd_unlink)
+		bd_unlink_disk_holder(file_bdev(bdev_file), device->vdisk);
+	fput(bdev_file);
+}
+
+void drbd_backing_dev_free(struct drbd_device *device, struct drbd_backing_dev *ldev)
+{
+	if (ldev == NULL)
+		return;
+
+	drbd_dax_close(ldev);
+
+	close_backing_dev(device,
+			  ldev->f_md_bdev,
+			  ldev->md_bdev != ldev->backing_bdev);
+	close_backing_dev(device, ldev->backing_bdev_file, true);
 
-	file = bdev_file_open_by_path(bdev_path, BLK_OPEN_READ | BLK_OPEN_WRITE,
-				      claim_ptr, NULL);
+	kfree(ldev->disk_conf);
+	kfree(ldev);
+}
+
+static struct file *open_backing_dev(struct drbd_device *device,
+		const char *bdev_path, void *claim_ptr)
+{
+	struct file *file = bdev_file_open_by_path(bdev_path,
+				  BLK_OPEN_READ | BLK_OPEN_WRITE,
+				  claim_ptr, NULL);
+
 	if (IS_ERR(file)) {
 		drbd_err(device, "open(\"%s\") failed with %ld\n",
 				bdev_path, PTR_ERR(file));
-		return file;
 	}
+	return file;
+}
 
-	if (!do_bd_link)
-		return file;
-
-	err = bd_link_disk_holder(file_bdev(file), device->vdisk);
+static int link_backing_dev(struct drbd_device *device,
+		const char *bdev_path, struct file *file)
+{
+	int err = bd_link_disk_holder(file_bdev(file), device->vdisk);
+
 	if (err) {
-		fput(file);
 		drbd_err(device, "bd_link_disk_holder(\"%s\", ...) failed with %d\n",
 				bdev_path, err);
-		file = ERR_PTR(err);
 	}
-	return file;
+	return err;
 }
 
 static int open_backing_devices(struct drbd_device *device,
@@ -1678,14 +2836,27 @@ static int open_backing_devices(struct drbd_device *device,
 		struct drbd_backing_dev *nbc)
 {
 	struct file *file;
+	void *meta_claim_ptr;
+	int err;
 
-	file = open_backing_dev(device, new_disk_conf->backing_dev, device,
-				  true);
+	file = open_backing_dev(device, new_disk_conf->backing_dev, device);
 	if (IS_ERR(file))
 		return ERR_OPEN_DISK;
+
+	err = link_backing_dev(device, new_disk_conf->backing_dev, file);
+	if (err) {
+		/* close without unlinking; otherwise error path will try to unlink */
+		close_backing_dev(device, file, false);
+		return ERR_OPEN_DISK;
+	}
 	nbc->backing_bdev = file_bdev(file);
 	nbc->backing_bdev_file = file;
 
+	/* meta_claim_ptr: device, if claimed exclusively; shared drbd_m_holder,
+	 * if potentially shared with other drbd minors
+	 */
+	meta_claim_ptr = (new_disk_conf->meta_dev_idx < 0) ?
+		(void *)device : (void *)drbd_m_holder;
 	/*
 	 * meta_dev_idx >= 0: external fixed size, possibly multiple
 	 * drbd sharing one meta device.  TODO in that case, paranoia
@@ -1694,95 +2865,402 @@ static int open_backing_devices(struct drbd_device *device,
 	 * should check it for you already; but if you don't, or
 	 * someone fooled it, we need to double check here)
 	 */
-	file = open_backing_dev(device, new_disk_conf->meta_dev,
-		/* claim ptr: device, if claimed exclusively; shared drbd_m_holder,
-		 * if potentially shared with other drbd minors */
-			(new_disk_conf->meta_dev_idx < 0) ? (void*)device : (void*)drbd_m_holder,
-		/* avoid double bd_claim_by_disk() for the same (source,target) tuple,
-		 * as would happen with internal metadata. */
-			(new_disk_conf->meta_dev_idx != DRBD_MD_INDEX_FLEX_INT &&
-			 new_disk_conf->meta_dev_idx != DRBD_MD_INDEX_INTERNAL));
+	file = open_backing_dev(device, new_disk_conf->meta_dev, meta_claim_ptr);
 	if (IS_ERR(file))
 		return ERR_OPEN_MD_DISK;
+
+	/* avoid double bd_claim_by_disk() for the same (source,target) tuple,
+	 * as would happen with internal metadata. */
+	if (file_bdev(file) != nbc->backing_bdev) {
+		err = link_backing_dev(device, new_disk_conf->meta_dev, file);
+		if (err) {
+			/* close without unlinking; otherwise error path will try to unlink */
+			close_backing_dev(device, file, false);
+			return ERR_OPEN_MD_DISK;
+		}
+	}
+
 	nbc->md_bdev = file_bdev(file);
 	nbc->f_md_bdev = file;
 	return NO_ERROR;
 }
 
-static void close_backing_dev(struct drbd_device *device,
-		struct file *bdev_file, bool do_bd_unlink)
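+/* Validate the activity log striping and derive md->al_size_4k from it. */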
+static int check_activity_log_stripe_size(struct drbd_device *device, struct drbd_md *md)
 {
-	if (!bdev_file)
-		return;
-	if (do_bd_unlink)
-		bd_unlink_disk_holder(file_bdev(bdev_file), device->vdisk);
-	fput(bdev_file);
-}
+	u32 al_stripes = md->al_stripes;
+	u32 al_stripe_size_4k = md->al_stripe_size_4k;
+	u64 al_size_4k;
 
-void drbd_backing_dev_free(struct drbd_device *device, struct drbd_backing_dev *ldev)
-{
-	if (ldev == NULL)
-		return;
+	/* both not set: default to old fixed size activity log */
+	if (al_stripes == 0 && al_stripe_size_4k == 0) {
+		al_stripes = 1;
+		al_stripe_size_4k = (32768 >> 9)/8;
+	}
 
-	close_backing_dev(device, ldev->f_md_bdev,
-			  ldev->md_bdev != ldev->backing_bdev);
-	close_backing_dev(device, ldev->backing_bdev_file, true);
+	/* some paranoia plausibility checks */
 
-	kfree(ldev->disk_conf);
-	kfree(ldev);
+	/* we need both values to be set */
+	if (al_stripes == 0 || al_stripe_size_4k == 0)
+		goto err;
+
+	al_size_4k = (u64)al_stripes * al_stripe_size_4k;
+
+	/* Upper limit of activity log area, to avoid potential overflow
+	 * problems in al_tr_number_to_on_disk_sector(). As of right now, more
+	 * than 72 * 4k blocks total only increases the amount of history,
+	 * limiting this arbitrarily to 16 GB is not a real limitation ;-)  */
+	if (al_size_4k > (16 * 1024 * 1024/4))
+		goto err;
+
+	/* Lower limit: we need at least 8 transaction slots (32kB)
+	 * to not break existing setups */
+	if (al_size_4k < (32768 >> 9)/8)
+		goto err;
+
+	md->al_size_4k = al_size_4k;
+
+	return 0;
+err:
+	drbd_err(device, "invalid activity log striping: al_stripes=%u, al_stripe_size_4k=%u\n",
+			al_stripes, al_stripe_size_4k);
+	return -EINVAL;
 }
 
-int drbd_adm_attach(struct sk_buff *skb, struct genl_info *info)
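+/* Plausibility checks of the meta-data offsets and sizes against the
+ * capacity of the meta-data device. */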
+static int check_offsets_and_sizes(struct drbd_device *device, struct drbd_backing_dev *bdev)
 {
-	struct drbd_config_context adm_ctx;
-	struct drbd_device *device;
-	struct drbd_peer_device *peer_device;
-	struct drbd_connection *connection;
-	int err;
-	enum drbd_ret_code retcode;
+	sector_t capacity = drbd_get_capacity(bdev->md_bdev);
+	struct drbd_md *md = &bdev->md;
+	s32 on_disk_al_sect;
+	s32 on_disk_bm_sect;
+
+	if (md->max_peers > DRBD_PEERS_MAX) {
+		drbd_err(device, "bm_max_peers too high\n");
+		goto err;
+	}
+
+	/* The on-disk size of the activity log, calculated from offsets, and
+	 * the size of the activity log calculated from the stripe settings,
+	 * should match.
+	 * Though we could relax this a bit: it is OK if the striped activity log
+	 * fits in the available on-disk activity log size.
+	 * Right now, that would break how resize is implemented.
+	 * TODO: make drbd_determine_dev_size() (and the drbdmeta tool) aware
+	 * of possible unused padding space in the on-disk layout. */
+	if (md->al_offset < 0) {
+		if (md->bm_offset > md->al_offset)
+			goto err;
+		on_disk_al_sect = -md->al_offset;
+		on_disk_bm_sect = md->al_offset - md->bm_offset;
+	} else {
+		if (md->al_offset != (4096 >> 9))
+			goto err;
+		if (md->bm_offset < md->al_offset + md->al_size_4k * (4096 >> 9))
+			goto err;
+
+		on_disk_al_sect = md->bm_offset - (4096 >> 9);
+		on_disk_bm_sect = md->md_size_sect - md->bm_offset;
+	}
+
+	/* old fixed size meta data is exactly that: fixed. */
+	if (md->meta_dev_idx >= 0) {
+		if (md->bm_block_size != BM_BLOCK_SIZE_4k
+		||  md->md_size_sect != (128 << 20 >> 9)
+		||  md->al_offset != (4096 >> 9)
+		||  md->bm_offset != (4096 >> 9) + (32768 >> 9)
+		||  md->al_stripes != 1
+		||  md->al_stripe_size_4k != (32768 >> 12))
+			goto err;
+	}
+
+	if (capacity < md->md_size_sect)
+		goto err;
+	if (capacity - md->md_size_sect < drbd_md_first_sector(bdev))
+		goto err;
+
+	/* should be aligned, and at least 32k */
+	if ((on_disk_al_sect & 7) || (on_disk_al_sect < (32768 >> 9)))
+		goto err;
+
+	/* should fit (for now: exactly) into the available on-disk space;
+	 * overflow prevention is in check_activity_log_stripe_size() above. */
+	if (on_disk_al_sect != md->al_size_4k * (4096 >> 9))
+		goto err;
+
+	/* again, should be aligned */
+	if (md->bm_offset & 7)
+		goto err;
+
+	/* FIXME check for device grow with flex external meta data? */
+
+	/* can the available bitmap space cover the last agreed device size? */
+	if (on_disk_bm_sect < drbd_capacity_to_on_disk_bm_sect(
+				md->effective_size, md))
+		goto err;
+
+	return 0;
+
+err:
+	drbd_err(device, "meta data offsets don't make sense: idx=%d bm_block_size=%d al_s=%u, al_sz4k=%u, al_offset=%d, bm_offset=%d, md_size_sect=%u, la_size=%llu, md_capacity=%llu\n",
+			md->meta_dev_idx, md->bm_block_size,
+			md->al_stripes, md->al_stripe_size_4k,
+			md->al_offset, md->bm_offset, md->md_size_sect,
+			(unsigned long long)md->effective_size,
+			(unsigned long long)capacity);
+
+	return -EINVAL;
+}
+
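+/* Log an error on the device and attach the same text to the netlink reply. */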
+__printf(2, 3)
+static void drbd_err_and_skb_info(struct drbd_config_context *adm_ctx, const char *format, ...)
+{
+	struct drbd_device *device = adm_ctx->device;
+	va_list args;
+	char *text;
+
+	va_start(args, format);
+	text = kvasprintf(GFP_ATOMIC, format, args);
+	va_end(args);
+
+	if (!text)
+		return;
+
+	drbd_err(device, "%s", text);
+	drbd_msg_put_info(adm_ctx->reply_skb, text);
+
+	kfree(text);
+}
+
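+/* Convert the DRBD 9 on-disk superblock from big-endian to the in-core
+ * struct drbd_md representation. */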
+static void decode_md_9(struct meta_data_on_disk_9 *on_disk, struct drbd_md *md)
+{
+	int i;
+
+	md->effective_size = be64_to_cpu(on_disk->effective_size);
+	md->current_uuid = be64_to_cpu(on_disk->current_uuid);
+	md->prev_members = be64_to_cpu(on_disk->members);
+	md->device_uuid = be64_to_cpu(on_disk->device_uuid);
+	md->md_size_sect = be32_to_cpu(on_disk->md_size_sect);
+	md->al_offset = be32_to_cpu(on_disk->al_offset);
+
+	md->bm_offset = be32_to_cpu(on_disk->bm_offset);
+
+	md->flags = be32_to_cpu(on_disk->flags);
+
+	md->max_peers = be32_to_cpu(on_disk->bm_max_peers);
+	md->bm_block_size = be32_to_cpu(on_disk->bm_bytes_per_bit);
+	md->node_id = be32_to_cpu(on_disk->node_id);
+	md->al_stripes = be32_to_cpu(on_disk->al_stripes);
+	md->al_stripe_size_4k = be32_to_cpu(on_disk->al_stripe_size_4k);
+
+	for (i = 0; i < DRBD_NODE_ID_MAX; i++) {
+		struct drbd_peer_md *peer_md = &md->peers[i];
+
+		peer_md->bitmap_uuid = be64_to_cpu(on_disk->peers[i].bitmap_uuid);
+		peer_md->bitmap_dagtag = be64_to_cpu(on_disk->peers[i].bitmap_dagtag);
+		peer_md->flags = be32_to_cpu(on_disk->peers[i].flags);
+		peer_md->bitmap_index = be32_to_cpu(on_disk->peers[i].bitmap_index);
+
+		if (peer_md->bitmap_index == -1)
+			continue;
+		peer_md->flags |= MDF_HAVE_BITMAP;
+	}
+	for (i = 0; i < ARRAY_SIZE(on_disk->history_uuids); i++)
+		md->history_uuids[i] = be64_to_cpu(on_disk->history_uuids[i]);
+
+	BUILD_BUG_ON(ARRAY_SIZE(md->history_uuids) != ARRAY_SIZE(on_disk->history_uuids));
+}
+
+static void decode_magic(struct meta_data_on_disk_9 *on_disk, u32 *magic, u32 *flags)
+{
+	/* magic and flags are at the same offsets in 8.4 and 9 */
+	*magic = be32_to_cpu(on_disk->magic);
+	*flags = be32_to_cpu(on_disk->flags);
+}
+
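+/* Check the superblock magic and decode the meta-data. Understands the
+ * DRBD 9 format and, in compatibility mode, the DRBD 8.4 format. */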
+static
+int drbd_md_decode(struct drbd_config_context *adm_ctx,
+		   struct drbd_backing_dev *bdev,
+		   void *buffer)
+{
+	struct drbd_device *device = adm_ctx->device;
+	u32 magic, flags;
+	int i, rv = NO_ERROR;
+	int my_node_id = device->resource->res_opts.node_id;
+
+	decode_magic(buffer, &magic, &flags);
+	if ((magic == DRBD_MD_MAGIC_09 && !(flags & MDF_AL_CLEAN)) ||
+	    magic == DRBD_MD_MAGIC_84_UNCLEAN ||
+	    (magic == DRBD_MD_MAGIC_08 && !(flags & MDF_AL_CLEAN))) {
+		/* btw: that's Activity Log clean, not "all" clean. */
+		drbd_err_and_skb_info(adm_ctx,
+				"Found unclean meta data. Did you \"drbdadm apply-al\"?\n");
+		rv = ERR_MD_UNCLEAN;
+		goto err;
+	}
+	rv = ERR_MD_INVALID;
+	if (magic != DRBD_MD_MAGIC_09 && magic != DRBD_MD_MAGIC_84_UNCLEAN &&
+	    magic != DRBD_MD_MAGIC_08) {
+		if (magic == DRBD_MD_MAGIC_07)
+			drbd_err_and_skb_info(adm_ctx,
+				"Found old meta data magic. Did you \"drbdadm create-md\"?\n");
+		else
+			drbd_err_and_skb_info(adm_ctx,
+				"Meta data magic not found. Did you \"drbdadm create-md\"?\n");
+		goto err;
+	}
+
+	if (magic == DRBD_MD_MAGIC_09) {
+		clear_bit(LEGACY_84_MD, &device->flags);
+		decode_md_9(buffer, &bdev->md);
+	} else {
+		if (!device->resource->res_opts.drbd8_compat_mode) {
+			drbd_err_and_skb_info(adm_ctx,
+				"Found old meta data magic. Did you \"drbdadm create-md\"?\n");
+			goto err;
+		}
+		set_bit(LEGACY_84_MD, &device->flags);
+		drbd_md_decode_84(buffer, &bdev->md);
+		if (bdev->md.bm_block_size != BM_BLOCK_SIZE_4k) {
+			drbd_err_and_skb_info(adm_ctx,
+				"unexpected bm_bytes_per_bit: %u (expected %u)\n",
+				bdev->md.bm_block_size, BM_BLOCK_SIZE_4k);
+			goto err;
+		}
+	}
+
+	if (!is_power_of_2(bdev->md.bm_block_size)
+	|| bdev->md.bm_block_size < BM_BLOCK_SIZE_MIN
+	|| bdev->md.bm_block_size > BM_BLOCK_SIZE_MAX) {
+		drbd_err_and_skb_info(adm_ctx,
+			"unexpected bm_bytes_per_bit: %u (expected power of 2 in [%u..%u])\n",
+			bdev->md.bm_block_size, BM_BLOCK_SIZE_MIN, BM_BLOCK_SIZE_MAX);
+		goto err;
+	}
+	bdev->md.bm_block_shift = ilog2(bdev->md.bm_block_size);
+
+	if (check_activity_log_stripe_size(device, &bdev->md))
+		goto err;
+	if (check_offsets_and_sizes(device, bdev))
+		goto err;
+
+	if (bdev->md.node_id != -1 && bdev->md.node_id != my_node_id) {
+		drbd_err_and_skb_info(adm_ctx, "ambiguous node id: meta-data: %d, config: %d\n",
+			bdev->md.node_id, my_node_id);
+		goto err;
+	}
+
+	for (i = 0; i < DRBD_NODE_ID_MAX; i++) {
+		struct drbd_peer_md *peer_md = &bdev->md.peers[i];
+
+		if (peer_md->bitmap_index == -1)
+			continue;
+		if (i == my_node_id) {
+			drbd_err_and_skb_info(adm_ctx, "my own node id (%d) should not have a bitmap index (%d)\n",
+				my_node_id, peer_md->bitmap_index);
+			goto err;
+		}
+		if (peer_md->bitmap_index < -1 || peer_md->bitmap_index >= bdev->md.max_peers) {
+			drbd_err_and_skb_info(adm_ctx, "peer node id %d: bitmap index (%d) exceeds allocated bitmap slots (%d)\n",
+				i, peer_md->bitmap_index, bdev->md.max_peers);
+			goto err;
+		}
+		/* maybe: for each bitmap_index != -1, create a connection object
+		 * with peer_node_id = i, unless already present. */
+	}
+
+	rv = NO_ERROR;
+
+err:
+	return rv;
+}
+
+/**
+ * drbd_md_read() - Reads in the meta data super block
+ * @adm_ctx:	DRBD config context.
+ * @bdev:	Device from which the meta data should be read in.
+ *
+ * Return: NO_ERROR on success, or an enum drbd_ret_code in case
+ * something goes wrong.
+ *
+ * Called exactly once during drbd_adm_attach(), while still being D_DISKLESS,
+ * even before @bdev is assigned to @device->ldev.
+ */
+static int drbd_md_read(struct drbd_config_context *adm_ctx, struct drbd_backing_dev *bdev)
+{
+	struct drbd_device *device = adm_ctx->device;
+	void *buffer;
+	int rv;
+
+	if (device->disk_state[NOW] != D_DISKLESS)
+		return ERR_DISK_CONFIGURED;
+
+	/* First, figure out where our meta data superblock is located,
+	 * and read it. */
+	bdev->md.meta_dev_idx = bdev->disk_conf->meta_dev_idx;
+	bdev->md.md_offset = drbd_md_ss(bdev);
+	/* Even for (flexible or indexed) external meta data,
+	 * initially restrict us to the 4k superblock for now.
+	 * Affects the paranoia out-of-range access check in drbd_md_sync_page_io(). */
+	bdev->md.md_size_sect = 8;
+
+	drbd_dax_open(bdev);
+	if (drbd_md_dax_active(bdev)) {
+		drbd_info(device, "meta-data IO uses: dax-pmem\n");
+		rv = drbd_md_decode(adm_ctx, bdev, drbd_dax_md_addr(bdev));
+		if (rv != NO_ERROR)
+			return rv;
+		if (drbd_dax_map(bdev))
+			return ERR_IO_MD_DISK;
+		return NO_ERROR;
+	}
+	drbd_info(device, "meta-data IO uses: blk-bio\n");
+
+	buffer = drbd_md_get_buffer(device, __func__);
+	if (!buffer)
+		return ERR_NOMEM;
+
+	if (drbd_md_sync_page_io(device, bdev, bdev->md.md_offset,
+				 REQ_OP_READ)) {
+		/* NOTE: can't do normal error processing here as this is
+		   called BEFORE disk is attached */
+		drbd_err_and_skb_info(adm_ctx, "Error while reading metadata.\n");
+		rv = ERR_IO_MD_DISK;
+		goto err;
+	}
+
+	rv = drbd_md_decode(adm_ctx, bdev, buffer);
+ err:
+	drbd_md_put_buffer(device);
+
+	return rv;
+}
+
+static int drbd_adm_attach(struct sk_buff *skb, struct genl_info *info)
+{
+	struct drbd_config_context adm_ctx;
+	struct drbd_device *device;
+	struct drbd_resource *resource;
+	int err, retcode;
 	enum determine_dev_size dd;
-	sector_t max_possible_sectors;
 	sector_t min_md_device_sectors;
-	struct drbd_backing_dev *nbc = NULL; /* new_backing_conf */
+	struct drbd_backing_dev *nbc; /* new_backing_conf */
+	sector_t backing_disk_max_sectors;
 	struct disk_conf *new_disk_conf = NULL;
-	struct lru_cache *resync_lru = NULL;
-	struct fifo_buffer *new_plan = NULL;
-	union drbd_state ns, os;
 	enum drbd_state_rv rv;
-	struct net_conf *nc;
+	struct drbd_peer_device *peer_device;
+	unsigned int slots_needed = 0;
+	bool have_conf_update = false;
 
 	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_MINOR);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
-		goto finish;
-
 	device = adm_ctx.device;
-	mutex_lock(&adm_ctx.resource->adm_mutex);
-	peer_device = first_peer_device(device);
-	connection = peer_device->connection;
-	conn_reconfig_start(connection);
-
-	/* if you want to reconfigure, please tear down first */
-	if (device->state.disk > D_DISKLESS) {
-		retcode = ERR_DISK_CONFIGURED;
-		goto fail;
+	resource = device->resource;
+	if (mutex_lock_interruptible(&resource->adm_mutex)) {
+		retcode = ERR_INTR;
+		goto out_no_adm_mutex;
 	}
-	/* It may just now have detached because of IO error.  Make sure
-	 * drbd_ldev_destroy is done already, we may end up here very fast,
-	 * e.g. if someone calls attach from the on-io-error handler,
-	 * to realize a "hot spare" feature (not that I'd recommend that) */
-	wait_event(device->misc_wait, !test_bit(GOING_DISKLESS, &device->flags));
-
-	/* make sure there is no leftover from previous force-detach attempts */
-	clear_bit(FORCE_DETACH, &device->flags);
-	clear_bit(WAS_IO_ERROR, &device->flags);
-	clear_bit(WAS_READ_ERROR, &device->flags);
-
-	/* and no leftover from previously aborted resync or verify, either */
-	device->rs_total = 0;
-	device->rs_failed = 0;
-	atomic_set(&device->rs_pending_cnt, 0);
 
 	/* allocation not in the IO path, drbdsetup context */
 	nbc = kzalloc_obj(struct drbd_backing_dev);
@@ -1807,30 +3285,16 @@ int drbd_adm_attach(struct sk_buff *skb, struct genl_info *info)
 		goto fail;
 	}
 
-	if (new_disk_conf->c_plan_ahead > DRBD_C_PLAN_AHEAD_MAX)
-		new_disk_conf->c_plan_ahead = DRBD_C_PLAN_AHEAD_MAX;
-
-	new_plan = fifo_alloc((new_disk_conf->c_plan_ahead * 10 * SLEEP_TIME) / HZ);
-	if (!new_plan) {
-		retcode = ERR_NOMEM;
-		goto fail;
-	}
-
 	if (new_disk_conf->meta_dev_idx < DRBD_MD_INDEX_FLEX_INT) {
 		retcode = ERR_MD_IDX_INVALID;
 		goto fail;
 	}
 
-	rcu_read_lock();
-	nc = rcu_dereference(connection->net_conf);
-	if (nc) {
-		if (new_disk_conf->fencing == FP_STONITH && nc->wire_protocol == DRBD_PROT_A) {
-			rcu_read_unlock();
-			retcode = ERR_STONITH_AND_PROT_A;
-			goto fail;
-		}
-	}
-	rcu_read_unlock();
+	lock_all_resources();
+	retcode = drbd_resync_after_valid(device, new_disk_conf->resync_after);
+	unlock_all_resources();
+	if (retcode != NO_ERROR)
+		goto fail;
 
 	retcode = open_backing_devices(device, new_disk_conf, nbc);
 	if (retcode != NO_ERROR)
@@ -1843,37 +3307,80 @@ int drbd_adm_attach(struct sk_buff *skb, struct genl_info *info)
 		goto fail;
 	}
 
-	resync_lru = lc_create("resync", drbd_bm_ext_cache,
-			1, 61, sizeof(struct bm_extent),
-			offsetof(struct bm_extent, lce));
-	if (!resync_lru) {
-		retcode = ERR_NOMEM;
+	/* if you want to reconfigure, please tear down first */
+	if (device->disk_state[NOW] > D_DISKLESS) {
+		retcode = ERR_DISK_CONFIGURED;
 		goto fail;
 	}
+	/* It may just now have detached because of IO error.  Make sure
+	 * drbd_ldev_destroy is done already, we may end up here very fast,
+	 * e.g. if someone calls attach from the on-io-error handler,
+	 * to realize a "hot spare" feature (not that I'd recommend that) */
+	wait_event(device->misc_wait, !test_bit(GOING_DISKLESS, &device->flags));
+
+	/* make sure there is no leftover from previous force-detach attempts */
+	clear_bit(FORCE_DETACH, &device->flags);
+
+	/* and no leftover from previously aborted resync or verify, either */
+	for_each_peer_device(peer_device, device) {
+		while (atomic_read(&peer_device->rs_pending_cnt)) {
+			drbd_info_ratelimit(peer_device, "wait for rs_pending_cnt to clear\n");
+			if (schedule_timeout_interruptible(HZ / 10)) {
+				retcode = ERR_INTR;
+				goto fail;
+			}
+		}
+
+		peer_device->rs_total = 0;
+		peer_device->rs_failed = 0;
+	}
 
 	/* Read our meta data super block early.
-	 * This also sets other on-disk offsets. */
-	retcode = drbd_md_read(device, nbc);
+	 * This also sets other on-disk offsets.
+	 */
+	retcode = drbd_md_read(&adm_ctx, nbc);
 	if (retcode != NO_ERROR)
 		goto fail;
 
+	if (device->bitmap) {
+		drbd_err_and_skb_info(&adm_ctx, "already has a bitmap, this should not happen\n");
+		retcode = ERR_INVALID_REQUEST;
+		goto fail;
+	}
+
+	if (new_disk_conf->d_bitmap) {
+		/* ldev_safe: attach path, allocating bitmap */
+		device->bitmap = drbd_bm_alloc(nbc->md.max_peers, nbc->md.bm_block_shift);
+		if (!device->bitmap) {
+			retcode = ERR_NOMEM;
+			goto fail;
+		}
+	} else {
+		if (!list_empty(&resource->connections)) {
+			drbd_err_and_skb_info(&adm_ctx,
+				"Disabling bitmap allocation with peers defined is not allowed");
+			retcode = ERR_INVALID_REQUEST;
+			goto fail;
+		}
+	}
+	device->last_bm_block_shift = nbc->md.bm_block_shift;
+
 	sanitize_disk_conf(device, new_disk_conf, nbc);
 
-	if (drbd_get_max_capacity(nbc) < new_disk_conf->disk_size) {
-		drbd_err(device, "max capacity %llu smaller than disk size %llu\n",
-			(unsigned long long) drbd_get_max_capacity(nbc),
+	backing_disk_max_sectors = drbd_get_max_capacity(device, nbc, true);
+	if (backing_disk_max_sectors < new_disk_conf->disk_size) {
+		drbd_err_and_skb_info(&adm_ctx, "max capacity %llu smaller than disk size %llu\n",
+			(unsigned long long) backing_disk_max_sectors,
 			(unsigned long long) new_disk_conf->disk_size);
 		retcode = ERR_DISK_TOO_SMALL;
 		goto fail;
 	}
 
 	if (new_disk_conf->meta_dev_idx < 0) {
-		max_possible_sectors = DRBD_MAX_SECTORS_FLEX;
 		/* at least one MB, otherwise it does not make sense */
 		min_md_device_sectors = (2<<10);
 	} else {
-		max_possible_sectors = DRBD_MAX_SECTORS;
-		min_md_device_sectors = MD_128MB_SECT * (new_disk_conf->meta_dev_idx + 1);
+		min_md_device_sectors = (128 << 20 >> 9) * (new_disk_conf->meta_dev_idx + 1);
 	}
 
 	if (drbd_get_capacity(nbc->md_bdev) < min_md_device_sectors) {
@@ -1886,36 +3393,32 @@ int drbd_adm_attach(struct sk_buff *skb, struct genl_info *info)
 
 	/* Make sure the new disk is big enough
 	 * (we may currently be R_PRIMARY with no local disk...) */
-	if (drbd_get_max_capacity(nbc) < get_capacity(device->vdisk)) {
+	if (backing_disk_max_sectors <
+	    get_capacity(device->vdisk)) {
+		drbd_err_and_skb_info(&adm_ctx,
+			"Current (diskless) capacity %llu, cannot attach smaller (%llu) disk\n",
+			(unsigned long long)get_capacity(device->vdisk),
+			(unsigned long long)backing_disk_max_sectors);
 		retcode = ERR_DISK_TOO_SMALL;
 		goto fail;
 	}
 
 	nbc->known_size = drbd_get_capacity(nbc->backing_bdev);
 
-	if (nbc->known_size > max_possible_sectors) {
-		drbd_warn(device, "==> truncating very big lower level device "
-			"to currently maximum possible %llu sectors <==\n",
-			(unsigned long long) max_possible_sectors);
-		if (new_disk_conf->meta_dev_idx >= 0)
-			drbd_warn(device, "==>> using internal or flexible "
-				      "meta data may help <<==\n");
-	}
-
-	drbd_suspend_io(device);
-	/* also wait for the last barrier ack. */
-	/* FIXME see also https://daiquiri.linbit/cgi-bin/bugzilla/show_bug.cgi?id=171
-	 * We need a way to either ignore barrier acks for barriers sent before a device
-	 * was attached, or a way to wait for all pending barrier acks to come in.
-	 * As barriers are counted per resource,
-	 * we'd need to suspend io on all devices of a resource.
-	 */
-	wait_event(device->misc_wait, !atomic_read(&device->ap_pending_cnt) || drbd_suspended(device));
-	/* and for any other previously queued work */
-	drbd_flush_workqueue(&connection->sender_work);
-
-	rv = _drbd_request_state(device, NS(disk, D_ATTACHING), CS_VERBOSE);
+	drbd_suspend_io(device, READ_AND_WRITE);
+	wait_event(resource->barrier_wait, !barrier_pending(resource));
+	for_each_peer_device(peer_device, device)
+		wait_event(device->misc_wait,
+			   (!atomic_read(&peer_device->ap_pending_cnt) ||
+			    drbd_suspended(device)));
+	/* and for other previously queued resource work */
+	drbd_flush_workqueue(&resource->work);
+
+	rv = stable_state_change(resource,
+		change_disk_state(device, D_ATTACHING, CS_VERBOSE | CS_SERIALIZE, "attach", NULL));
 	retcode = (enum drbd_ret_code)rv;
+	if (rv >= SS_SUCCESS)
+		update_resource_dagtag(resource, nbc);
 	drbd_resume_io(device);
 	if (rv < SS_SUCCESS)
 		goto fail;
@@ -1923,20 +3426,97 @@ int drbd_adm_attach(struct sk_buff *skb, struct genl_info *info)
 	if (!get_ldev_if_state(device, D_ATTACHING))
 		goto force_diskless;
 
-	if (!device->bitmap) {
-		if (drbd_bm_init(device)) {
-			retcode = ERR_NOMEM;
+	drbd_info(device, "Maximum number of peer devices = %u\n", nbc->md.max_peers);
+
+	mutex_lock(&resource->conf_update);
+	have_conf_update = true;
+
+	/* Make sure the local node id matches or is unassigned */
+	if (nbc->md.node_id != -1 && nbc->md.node_id != resource->res_opts.node_id) {
+		drbd_err_and_skb_info(&adm_ctx, "Local node id %d differs from local "
+			 "node id %d on device\n",
+			 resource->res_opts.node_id,
+			 nbc->md.node_id);
+		retcode = ERR_INVALID_REQUEST;
+		goto force_diskless_dec;
+	}
+
+	/* Make sure no bitmap slot has our own node id.
+	 * If we are operating in "drbd 8 compatibility mode", the node ID is
+	 * not yet initialized at this point, so skip this check.
+	 */
+	if (resource->res_opts.node_id != -1 &&
+	    nbc->md.peers[resource->res_opts.node_id].bitmap_index != -1) {
+		drbd_err_and_skb_info(&adm_ctx, "There is a bitmap for my own node id (%d)\n",
+			 resource->res_opts.node_id);
+		retcode = ERR_INVALID_REQUEST;
+		goto force_diskless_dec;
+	}
+
+	/* Make sure we have a bitmap slot for each peer id */
+	for_each_peer_device(peer_device, device) {
+		struct drbd_connection *connection = peer_device->connection;
+		int bitmap_index;
+
+		if (peer_device->bitmap_index != -1) {
+			drbd_err_and_skb_info(&adm_ctx,
+					"ASSERTION FAILED bitmap_index %d during attach, expected -1\n",
+					peer_device->bitmap_index);
+		}
+
+		bitmap_index = nbc->md.peers[connection->peer_node_id].bitmap_index;
+		if (want_bitmap(peer_device)) {
+			if (bitmap_index != -1)
+				peer_device->bitmap_index = bitmap_index;
+			else
+				slots_needed++;
+		} else if (bitmap_index != -1) {
+			/* Pretend in core that there is no bitmap for that peer;
+			   in the on-disk meta-data we keep it until it is de-allocated
+			   with forget-peer */
+			nbc->md.peers[connection->peer_node_id].flags &= ~MDF_HAVE_BITMAP;
+		}
+	}
+	if (slots_needed) {
+		int slots_available = nbc->md.max_peers - used_bitmap_slots(nbc);
+
+		if (slots_needed > slots_available) {
+			drbd_err_and_skb_info(&adm_ctx, "Not enough free bitmap "
+				 "slots (available=%d, needed=%d)\n",
+				 slots_available,
+				 slots_needed);
+			retcode = ERR_INVALID_REQUEST;
 			goto force_diskless_dec;
 		}
+		for_each_peer_device(peer_device, device) {
+			if (peer_device->bitmap_index != -1 || !want_bitmap(peer_device))
+				continue;
+
+			err = allocate_bitmap_index(peer_device, nbc);
+			if (err) {
+				retcode = ERR_INVALID_REQUEST;
+				goto force_diskless_dec;
+			}
+		}
 	}
 
-	if (device->state.pdsk != D_UP_TO_DATE && device->ed_uuid &&
-	    (device->state.role == R_PRIMARY || device->state.peer == R_PRIMARY) &&
-            (device->ed_uuid & ~((u64)1)) != (nbc->md.uuid[UI_CURRENT] & ~((u64)1))) {
-		drbd_err(device, "Can only attach to data with current UUID=%016llX\n",
-		    (unsigned long long)device->ed_uuid);
-		retcode = ERR_DATA_NOT_CURRENT;
-		goto force_diskless_dec;
+	/* Assign the local node id (if not assigned already) */
+	nbc->md.node_id = resource->res_opts.node_id;
+
+	if (resource->role[NOW] == R_PRIMARY && device->exposed_data_uuid &&
+	    (device->exposed_data_uuid & ~UUID_PRIMARY) !=
+	    (nbc->md.current_uuid & ~UUID_PRIMARY)) {
+		bool data_present = false;
+
+		for_each_peer_device(peer_device, device) {
+			if (peer_device->disk_state[NOW] == D_UP_TO_DATE)
+				data_present = true;
+		}
+		if (!data_present) {
+			drbd_err_and_skb_info(&adm_ctx, "Can only attach to data with current UUID=%016llX\n",
+				 (unsigned long long)device->exposed_data_uuid);
+			retcode = ERR_DATA_NOT_CURRENT;
+			goto force_diskless_dec;
+		}
 	}
 
 	/* Since we are diskless, fix the activity log first... */
@@ -1945,26 +3525,30 @@ int drbd_adm_attach(struct sk_buff *skb, struct genl_info *info)
 		goto force_diskless_dec;
 	}
 
-	/* Prevent shrinking of consistent devices ! */
-	{
-	unsigned long long nsz = drbd_new_dev_size(device, nbc, nbc->disk_conf->disk_size, 0);
-	unsigned long long eff = nbc->md.la_size_sect;
-	if (drbd_md_test_flag(nbc, MDF_CONSISTENT) && nsz < eff) {
-		if (nsz == nbc->disk_conf->disk_size) {
-			drbd_warn(device, "truncating a consistent device during attach (%llu < %llu)\n", nsz, eff);
-		} else {
-			drbd_warn(device, "refusing to truncate a consistent device (%llu < %llu)\n", nsz, eff);
-			drbd_msg_sprintf_info(adm_ctx.reply_skb,
-				"To-be-attached device has last effective > current size, and is consistent\n"
-				"(%llu > %llu sectors). Refusing to attach.", eff, nsz);
-			retcode = ERR_IMPLICIT_SHRINK;
+	/* Point of no return reached.
+	 * Devices and memory are no longer released by error cleanup below.
+	 * Now the device takes over responsibility, and the state engine
+	 * should clean it up somewhere.  */
+	D_ASSERT(device, device->ldev == NULL);
+	device->ldev = nbc;
+	nbc = NULL;
+	new_disk_conf = NULL;
+
+	if (drbd_md_dax_active(device->ldev)) {
+		/* The on-disk activity log is always initialized with the
+		 * non-pmem format. We have now decided to access it using
+		 * dax, so re-initialize it appropriately. */
+		if (drbd_dax_al_initialize(device)) {
+			retcode = ERR_IO_MD_DISK;
 			goto force_diskless_dec;
 		}
 	}
-	}
+
+	mutex_unlock(&resource->conf_update);
+	have_conf_update = false;
 
 	lock_all_resources();
-	retcode = drbd_resync_after_valid(device, new_disk_conf->resync_after);
+	retcode = drbd_resync_after_valid(device, device->ldev->disk_conf->resync_after);
 	if (retcode != NO_ERROR) {
 		unlock_all_resources();
 		goto force_diskless_dec;
@@ -1972,43 +3556,53 @@ int drbd_adm_attach(struct sk_buff *skb, struct genl_info *info)
 
 	/* Reset the "barriers don't work" bits here, then force meta data to
 	 * be written, to ensure we determine if barriers are supported. */
-	if (new_disk_conf->md_flushes)
+	if (device->ldev->disk_conf->md_flushes)
 		clear_bit(MD_NO_FUA, &device->flags);
 	else
 		set_bit(MD_NO_FUA, &device->flags);
 
-	/* Point of no return reached.
-	 * Devices and memory are no longer released by error cleanup below.
-	 * now device takes over responsibility, and the state engine should
-	 * clean it up somewhere.  */
-	D_ASSERT(device, device->ldev == NULL);
-	device->ldev = nbc;
-	device->resync = resync_lru;
-	device->rs_plan_s = new_plan;
-	nbc = NULL;
-	resync_lru = NULL;
-	new_disk_conf = NULL;
-	new_plan = NULL;
-
 	drbd_resync_after_changed(device);
-	drbd_bump_write_ordering(device->resource, device->ldev, WO_BDEV_FLUSH);
+	drbd_bump_write_ordering(resource, device->ldev, WO_BIO_BARRIER);
 	unlock_all_resources();
 
-	if (drbd_md_test_flag(device->ldev, MDF_CRASHED_PRIMARY))
+	/* Prevent shrinking of consistent devices! */
+	{
+	unsigned long long nsz = drbd_new_dev_size(device, 0, device->ldev->disk_conf->disk_size, 0);
+	unsigned long long eff = device->ldev->md.effective_size;
+	if (drbd_md_test_flag(device->ldev, MDF_CONSISTENT) && nsz < eff) {
+		if (nsz == device->ldev->disk_conf->disk_size) {
+			drbd_warn(device, "truncating a consistent device during attach (%llu < %llu)\n", nsz, eff);
+		} else {
+			drbd_warn(device, "refusing to truncate a consistent device (%llu < %llu)\n", nsz, eff);
+			drbd_msg_sprintf_info(adm_ctx.reply_skb,
+				"To-be-attached device has last effective > current size, and is consistent\n"
+				"(%llu > %llu sectors). Refusing to attach.", eff, nsz);
+			retcode = ERR_IMPLICIT_SHRINK;
+			goto force_diskless_dec;
+		}
+	}
+	}
+
+	if (drbd_md_test_flag(device->ldev, MDF_HAVE_QUORUM) &&
+	    drbd_md_test_flag(device->ldev, MDF_WAS_UP_TO_DATE) &&
+	    device->ldev->md.prev_members == NODE_MASK(resource->res_opts.node_id))
+		set_bit(RESTORE_QUORUM, &device->flags);
+
+	if (drbd_md_test_flag(device->ldev, MDF_CRASHED_PRIMARY) &&
+	    !(resource->role[NOW] == R_PRIMARY && resource->susp_nod[NOW]) &&
+	    !device->exposed_data_uuid && !test_bit(NEW_CUR_UUID, &device->flags))
 		set_bit(CRASHED_PRIMARY, &device->flags);
 	else
 		clear_bit(CRASHED_PRIMARY, &device->flags);
 
-	if (drbd_md_test_flag(device->ldev, MDF_PRIMARY_IND) &&
-	    !(device->state.role == R_PRIMARY && device->resource->susp_nod))
-		set_bit(CRASHED_PRIMARY, &device->flags);
+	if (drbd_md_test_flag(device->ldev, MDF_PRIMARY_LOST_QUORUM) &&
+	    !device->have_quorum[NOW])
+		set_bit(PRIMARY_LOST_QUORUM, &device->flags);
 
-	device->send_cnt = 0;
-	device->recv_cnt = 0;
 	device->read_cnt = 0;
 	device->writ_cnt = 0;
 
-	drbd_reconsider_queue_parameters(device, device->ldev, NULL);
+	drbd_reconsider_queue_parameters(device, device->ldev);
 
 	/* If I am currently not R_PRIMARY,
 	 * but meta data primary indicator is set,
@@ -2024,147 +3618,163 @@ int drbd_adm_attach(struct sk_buff *skb, struct genl_info *info)
 	 * so we can automatically recover from a crash of a
 	 * degraded but active "cluster" after a certain timeout.
 	 */
-	clear_bit(USE_DEGR_WFC_T, &device->flags);
-	if (device->state.role != R_PRIMARY &&
-	     drbd_md_test_flag(device->ldev, MDF_PRIMARY_IND) &&
-	    !drbd_md_test_flag(device->ldev, MDF_CONNECTED_IND))
-		set_bit(USE_DEGR_WFC_T, &device->flags);
-
-	dd = drbd_determine_dev_size(device, 0, NULL);
-	if (dd <= DS_ERROR) {
-		retcode = ERR_NOMEM_BITMAP;
-		goto force_diskless_dec;
-	} else if (dd == DS_GREW)
-		set_bit(RESYNC_AFTER_NEG, &device->flags);
-
-	if (drbd_md_test_flag(device->ldev, MDF_FULL_SYNC) ||
-	    (test_bit(CRASHED_PRIMARY, &device->flags) &&
-	     drbd_md_test_flag(device->ldev, MDF_AL_DISABLED))) {
-		drbd_info(device, "Assuming that all blocks are out of sync "
-		     "(aka FullSync)\n");
-		if (drbd_bitmap_io(device, &drbd_bmio_set_n_write,
-			"set_n_write from attaching", BM_LOCKED_MASK,
-			NULL)) {
-			retcode = ERR_IO_MD_DISK;
-			goto force_diskless_dec;
-		}
-	} else {
-		if (drbd_bitmap_io(device, &drbd_bm_read,
-			"read from attaching", BM_LOCKED_MASK,
-			NULL)) {
-			retcode = ERR_IO_MD_DISK;
-			goto force_diskless_dec;
-		}
+	for_each_peer_device(peer_device, device) {
+		clear_bit(USE_DEGR_WFC_T, &peer_device->flags);
+		if (resource->role[NOW] != R_PRIMARY &&
+		    drbd_md_test_flag(device->ldev, MDF_PRIMARY_IND) &&
+		    !drbd_md_test_peer_flag(peer_device, MDF_PEER_CONNECTED))
+			set_bit(USE_DEGR_WFC_T, &peer_device->flags);
 	}
 
-	if (_drbd_bm_total_weight(device) == drbd_bm_bits(device))
-		drbd_suspend_al(device); /* IO is still suspended here... */
-
-	spin_lock_irq(&device->resource->req_lock);
-	os = drbd_read_state(device);
-	ns = os;
-	/* If MDF_CONSISTENT is not set go into inconsistent state,
-	   otherwise investigate MDF_WasUpToDate...
-	   If MDF_WAS_UP_TO_DATE is not set go into D_OUTDATED disk state,
-	   otherwise into D_CONSISTENT state.
-	*/
-	if (drbd_md_test_flag(device->ldev, MDF_CONSISTENT)) {
-		if (drbd_md_test_flag(device->ldev, MDF_WAS_UP_TO_DATE))
-			ns.disk = D_CONSISTENT;
-		else
-			ns.disk = D_OUTDATED;
-	} else {
-		ns.disk = D_INCONSISTENT;
+	/*
+	 * If we are attaching to a disk that is marked as being up-to-date,
+	 * then we do not need to set the bitmap bits.
+	 */
+	dd = drbd_determine_dev_size(device, 0,
+			disk_state_from_md(device) == D_UP_TO_DATE ? DDSF_NO_RESYNC : 0,
+			NULL);
+	if (dd == DS_ERROR) {
+		retcode = ERR_NOMEM_BITMAP;
+		goto force_diskless_dec;
+	} else if (dd == DS_GREW) {
+		for_each_peer_device(peer_device, device)
+			set_bit(RESYNC_AFTER_NEG, &peer_device->flags);
 	}
 
-	if (drbd_md_test_flag(device->ldev, MDF_PEER_OUT_DATED))
-		ns.pdsk = D_OUTDATED;
-
-	rcu_read_lock();
-	if (ns.disk == D_CONSISTENT &&
-	    (ns.pdsk == D_OUTDATED || rcu_dereference(device->ldev->disk_conf)->fencing == FP_DONT_CARE))
-		ns.disk = D_UP_TO_DATE;
+	err = drbd_bitmap_io(device, &drbd_bm_read,
+			     "read from attaching", BM_LOCK_ALL,
+			     NULL);
+	if (err) {
+		retcode = ERR_IO_MD_DISK;
+		goto force_diskless_dec;
+	}
 
-	/* All tests on MDF_PRIMARY_IND, MDF_CONNECTED_IND,
-	   MDF_CONSISTENT and MDF_WAS_UP_TO_DATE must happen before
-	   this point, because drbd_request_state() modifies these
-	   flags. */
+	for_each_peer_device(peer_device, device) {
+		if ((test_bit(CRASHED_PRIMARY, &device->flags) &&
+		     drbd_md_test_flag(device->ldev, MDF_AL_DISABLED)) ||
+		    drbd_md_test_peer_flag(peer_device, MDF_PEER_FULL_SYNC)) {
+			drbd_info(peer_device, "Assuming that all blocks are out of sync "
+				  "(aka FullSync)\n");
+			if (drbd_bitmap_io(device, &drbd_bmio_set_n_write,
+				"set_n_write from attaching", BM_LOCK_ALL,
+				peer_device)) {
+				retcode = ERR_IO_MD_DISK;
+				goto force_diskless_dec;
+			}
+		}
+	}
 
-	if (rcu_dereference(device->ldev->disk_conf)->al_updates)
-		device->ldev->md.flags &= ~MDF_AL_DISABLED;
-	else
-		device->ldev->md.flags |= MDF_AL_DISABLED;
+	drbd_try_suspend_al(device); /* IO is still suspended here... */
 
-	rcu_read_unlock();
+	drbd_update_mdf_al_disabled(device, NOW);
 
-	/* In case we are C_CONNECTED postpone any decision on the new disk
-	   state after the negotiation phase. */
-	if (device->state.conn == C_CONNECTED) {
-		device->new_state_tmp.i = ns.i;
-		ns.i = os.i;
-		ns.disk = D_NEGOTIATING;
+	/* change_disk_state() uses disk_state_from_md(device); if D_NEGOTIATING is
+	   not necessary, it falls back to a local state change */
+	rv = stable_state_change(resource, change_disk_state(device,
+				D_NEGOTIATING, CS_VERBOSE | CS_SERIALIZE, "attach", NULL));
 
-		/* We expect to receive up-to-date UUIDs soon.
-		   To avoid a race in receive_state, free p_uuid while
-		   holding req_lock. I.e. atomic with the state change */
-		kfree(device->p_uuid);
-		device->p_uuid = NULL;
+	if (rv < SS_SUCCESS) {
+		if (rv == SS_CW_FAILED_BY_PEER)
+			drbd_msg_put_info(adm_ctx.reply_skb,
+				"Probably this node is marked as intentional diskless on a peer");
+		retcode = rv;
+		goto force_diskless_dec;
 	}
 
-	rv = _drbd_set_state(device, ns, CS_VERBOSE, NULL);
-	spin_unlock_irq(&device->resource->req_lock);
-
-	if (rv < SS_SUCCESS)
-		goto force_diskless_dec;
+	device->device_conf.intentional_diskless = false; /* just in case... */
 
 	mod_timer(&device->request_timer, jiffies + HZ);
 
-	if (device->state.role == R_PRIMARY)
-		device->ldev->md.uuid[UI_CURRENT] |=  (u64)1;
+	if (resource->role[NOW] == R_PRIMARY
+	&&  device->ldev->md.current_uuid != UUID_JUST_CREATED)
+		device->ldev->md.current_uuid |= UUID_PRIMARY;
 	else
-		device->ldev->md.uuid[UI_CURRENT] &= ~(u64)1;
+		device->ldev->md.current_uuid &= ~UUID_PRIMARY;
 
-	drbd_md_mark_dirty(device);
 	drbd_md_sync(device);
 
 	kobject_uevent(&disk_to_dev(device->vdisk)->kobj, KOBJ_CHANGE);
 	put_ldev(device);
-	conn_reconfig_done(connection);
-	mutex_unlock(&adm_ctx.resource->adm_mutex);
+	mutex_unlock(&resource->adm_mutex);
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 
  force_diskless_dec:
 	put_ldev(device);
  force_diskless:
-	drbd_force_state(device, NS(disk, D_DISKLESS));
-	drbd_md_sync(device);
+	change_disk_state(device, D_DISKLESS, CS_HARD, "attach", NULL);
  fail:
-	conn_reconfig_done(connection);
-	if (nbc) {
-		close_backing_dev(device, nbc->f_md_bdev,
-			  nbc->md_bdev != nbc->backing_bdev);
-		close_backing_dev(device, nbc->backing_bdev_file, true);
-		kfree(nbc);
-	}
-	kfree(new_disk_conf);
-	lc_destroy(resync_lru);
-	kfree(new_plan);
-	mutex_unlock(&adm_ctx.resource->adm_mutex);
- finish:
+	drbd_bm_free(device);
+	mutex_unlock_cond(&resource->conf_update, &have_conf_update);
+	drbd_backing_dev_free(device, nbc);
+	mutex_unlock(&resource->adm_mutex);
+ out_no_adm_mutex:
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 }
 
-static int adm_detach(struct drbd_device *device, int force)
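+/* Sample the current disk state under the state rwlock for a consistent read. */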
+static enum drbd_disk_state get_disk_state(struct drbd_device *device)
+{
+	struct drbd_resource *resource = device->resource;
+	enum drbd_disk_state disk_state;
+
+	read_lock_irq(&resource->state_rwlock);
+	disk_state = device->disk_state[NOW];
+	read_unlock_irq(&resource->state_rwlock);
+	return disk_state;
+}
+
+static int adm_detach(struct drbd_device *device, bool force, bool intentional_diskless,
+		      const char *tag, struct sk_buff *reply_skb)
 {
+	const char *err_str = NULL;
+	int ret, retcode;
+
+	device->device_conf.intentional_diskless = intentional_diskless;
 	if (force) {
 		set_bit(FORCE_DETACH, &device->flags);
-		drbd_force_state(device, NS(disk, D_FAILED));
-		return SS_SUCCESS;
+		change_disk_state(device, D_DETACHING, CS_HARD, tag, NULL);
+		retcode = SS_SUCCESS;
+		goto out;
 	}
 
-	return drbd_request_detach_interruptible(device);
+	drbd_suspend_io(device, READ_AND_WRITE); /* so no-one is stuck in drbd_al_begin_io */
+	retcode = stable_state_change(device->resource,
+		change_disk_state(device, D_DETACHING,
+			CS_VERBOSE | CS_SERIALIZE, tag, &err_str));
+	/*
+	 * D_DETACHING will transition to DISKLESS.
+	 * We do not use CS_WAIT_COMPLETE above since that would deadlock on a
+	 * backing device that never completes the I/O requests resulting from
+	 * writing the internal meta-data.  Instead, explicitly flush the worker
+	 * queue here to ensure that w_after_state_change() has completed.
+	 */
+	drbd_flush_workqueue_interruptible(device);
+
+	drbd_resume_io(device);
+	ret = wait_event_interruptible(device->misc_wait,
+			get_disk_state(device) != D_DETACHING);
+	if (retcode >= SS_SUCCESS) {
+		wait_event_interruptible(device->misc_wait, !test_bit(GOING_DISKLESS, &device->flags));
+
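+		/* Reset statistics counters; they refer to the backing device
+		 * we just detached. */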
+		device->al_writ_cnt = 0;
+		device->bm_writ_cnt = 0;
+		device->read_cnt = 0;
+		device->writ_cnt = 0;
+		clear_bit(AL_SUSPENDED, &device->flags);
+	} else {
+		device->device_conf.intentional_diskless = false;
+	}
+	if (retcode == SS_IS_DISKLESS)
+		retcode = SS_NOTHING_TO_DO;
+	if (ret)
+		retcode = ERR_INTR;
+out:
+	if (err_str) {
+		drbd_msg_put_info(reply_skb, err_str);
+		kfree(err_str);
+	} else if (retcode == SS_NO_UP_TO_DATE_DISK) {
+		put_device_opener_info(device, reply_skb);
+	}
+	return retcode;
 }
 
 /* Detaching the disk is a process in multiple stages.  First we need to lock
@@ -2172,7 +3782,7 @@ static int adm_detach(struct drbd_device *device, int force)
  * Then we transition to D_DISKLESS, and wait for put_ldev() to return all
  * internal references as well.
  * Only then we have finally detached. */
-int drbd_adm_detach(struct sk_buff *skb, struct genl_info *info)
+static int drbd_adm_detach(struct sk_buff *skb, struct genl_info *info)
 {
 	struct drbd_config_context adm_ctx;
 	enum drbd_ret_code retcode;
@@ -2182,8 +3792,6 @@ int drbd_adm_detach(struct sk_buff *skb, struct genl_info *info)
 	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_MINOR);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
-		goto out;
 
 	if (info->attrs[DRBD_NLA_DETACH_PARMS]) {
 		err = detach_parms_from_attrs(&parms, info);
@@ -2194,9 +3802,14 @@ int drbd_adm_detach(struct sk_buff *skb, struct genl_info *info)
 		}
 	}
 
-	mutex_lock(&adm_ctx.resource->adm_mutex);
-	retcode = adm_detach(adm_ctx.device, parms.force_detach);
+	if (mutex_lock_interruptible(&adm_ctx.resource->adm_mutex)) {
+		retcode = ERR_INTR;
+		goto out;
+	}
+	retcode = (enum drbd_ret_code)adm_detach(adm_ctx.device, parms.force_detach,
+			parms.intentional_diskless_detach, "detach", adm_ctx.reply_skb);
 	mutex_unlock(&adm_ctx.resource->adm_mutex);
+
 out:
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
@@ -2210,11 +3823,10 @@ static bool conn_resync_running(struct drbd_connection *connection)
 
 	rcu_read_lock();
 	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-		struct drbd_device *device = peer_device->device;
-		if (device->state.conn == C_SYNC_SOURCE ||
-		    device->state.conn == C_SYNC_TARGET ||
-		    device->state.conn == C_PAUSED_SYNC_S ||
-		    device->state.conn == C_PAUSED_SYNC_T) {
+		if (peer_device->repl_state[NOW] == L_SYNC_SOURCE ||
+		    peer_device->repl_state[NOW] == L_SYNC_TARGET ||
+		    peer_device->repl_state[NOW] == L_PAUSED_SYNC_S ||
+		    peer_device->repl_state[NOW] == L_PAUSED_SYNC_T) {
 			rv = true;
 			break;
 		}
@@ -2232,9 +3844,8 @@ static bool conn_ov_running(struct drbd_connection *connection)
 
 	rcu_read_lock();
 	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
-		struct drbd_device *device = peer_device->device;
-		if (device->state.conn == C_VERIFY_S ||
-		    device->state.conn == C_VERIFY_T) {
+		if (peer_device->repl_state[NOW] == L_VERIFY_S ||
+		    peer_device->repl_state[NOW] == L_VERIFY_T) {
 			rv = true;
 			break;
 		}
@@ -2247,10 +3858,7 @@ static bool conn_ov_running(struct drbd_connection *connection)
 static enum drbd_ret_code
 _check_net_options(struct drbd_connection *connection, struct net_conf *old_net_conf, struct net_conf *new_net_conf)
 {
-	struct drbd_peer_device *peer_device;
-	int i;
-
-	if (old_net_conf && connection->cstate == C_WF_REPORT_PARAMS && connection->agreed_pro_version < 100) {
+	if (old_net_conf && connection->cstate[NOW] == C_CONNECTED && connection->agreed_pro_version < 100) {
 		if (new_net_conf->wire_protocol != old_net_conf->wire_protocol)
 			return ERR_NEED_APV_100;
 
@@ -2262,27 +3870,20 @@ _check_net_options(struct drbd_connection *connection, struct net_conf *old_net_
 	}
 
 	if (!new_net_conf->two_primaries &&
-	    conn_highest_role(connection) == R_PRIMARY &&
-	    conn_highest_peer(connection) == R_PRIMARY)
+	    connection->resource->role[NOW] == R_PRIMARY &&
+	    connection->peer_role[NOW] == R_PRIMARY)
 		return ERR_NEED_ALLOW_TWO_PRI;
 
 	if (new_net_conf->two_primaries &&
 	    (new_net_conf->wire_protocol != DRBD_PROT_C))
 		return ERR_NOT_PROTO_C;
 
-	idr_for_each_entry(&connection->peer_devices, peer_device, i) {
-		struct drbd_device *device = peer_device->device;
-		if (get_ldev(device)) {
-			enum drbd_fencing_p fp = rcu_dereference(device->ldev->disk_conf)->fencing;
-			put_ldev(device);
-			if (new_net_conf->wire_protocol == DRBD_PROT_A && fp == FP_STONITH)
-				return ERR_STONITH_AND_PROT_A;
-		}
-		if (device->state.role == R_PRIMARY && new_net_conf->discard_my_data)
-			return ERR_DISCARD_IMPOSSIBLE;
-	}
+	if (new_net_conf->wire_protocol == DRBD_PROT_A &&
+	    new_net_conf->fencing_policy == FP_STONITH)
+		return ERR_STONITH_AND_PROT_A;
 
-	if (new_net_conf->on_congestion != OC_BLOCK && new_net_conf->wire_protocol != DRBD_PROT_A)
+	if (new_net_conf->on_congestion != OC_BLOCK &&
+	    new_net_conf->wire_protocol != DRBD_PROT_A)
 		return ERR_CONG_NOT_PROTO_A;
 
 	return NO_ERROR;
@@ -2292,22 +3893,11 @@ static enum drbd_ret_code
 check_net_options(struct drbd_connection *connection, struct net_conf *new_net_conf)
 {
 	enum drbd_ret_code rv;
-	struct drbd_peer_device *peer_device;
-	int i;
 
 	rcu_read_lock();
-	rv = _check_net_options(connection, rcu_dereference(connection->net_conf), new_net_conf);
+	rv = _check_net_options(connection, rcu_dereference(connection->transport.net_conf), new_net_conf);
 	rcu_read_unlock();
 
-	/* connection->peer_devices protected by genl_lock() here */
-	idr_for_each_entry(&connection->peer_devices, peer_device, i) {
-		struct drbd_device *device = peer_device->device;
-		if (!device->bitmap) {
-			if (drbd_bm_init(device))
-				return ERR_NOMEM;
-		}
-	}
-
 	return rv;
 }
 
@@ -2318,48 +3908,88 @@ struct crypto {
 	struct crypto_shash *integrity_tfm;
 };
 
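+/* True if the algorithm still requires a key to be set before it can be used. */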
+static bool needs_key(struct crypto_shash *h)
+{
+	return h && (crypto_shash_get_flags(h) & CRYPTO_TFM_NEED_KEY);
+}
+
+/**
+ * alloc_shash() - Allocate a keyed or unkeyed shash algorithm
+ * @tfm: Destination crypto_shash
+ * @tfm_name: Name of the algorithm to allocate
+ * @type: What the hash is used for; only used in error messages
+ * @must_unkeyed: If set, reject algorithms that require a key
+ * @reply_skb: For sending a detailed error description to user-space
+ *
+ * Return: 0 on success, a negative error code otherwise.
+ */
 static int
-alloc_shash(struct crypto_shash **tfm, char *tfm_name, int err_alg)
+alloc_shash(struct crypto_shash **tfm, char *tfm_name, const char *type, bool must_unkeyed,
+	    struct sk_buff *reply_skb)
 {
 	if (!tfm_name[0])
-		return NO_ERROR;
+		return 0;
 
 	*tfm = crypto_alloc_shash(tfm_name, 0, 0);
 	if (IS_ERR(*tfm)) {
+		drbd_msg_sprintf_info(reply_skb, "failed to allocate %s for %s\n", tfm_name, type);
 		*tfm = NULL;
-		return err_alg;
+		return -EINVAL;
 	}
 
-	return NO_ERROR;
+	if (must_unkeyed && needs_key(*tfm)) {
+		drbd_msg_sprintf_info(reply_skb,
+				      "may not use %s for %s. It requires an unkeyed algorithm\n",
+				      tfm_name, type);
+		return -EINVAL;
+	}
+
+	return 0;
 }
 
 static enum drbd_ret_code
-alloc_crypto(struct crypto *crypto, struct net_conf *new_net_conf)
+alloc_crypto(struct crypto *crypto, struct net_conf *new_net_conf, struct sk_buff *reply_skb)
 {
 	char hmac_name[CRYPTO_MAX_ALG_NAME];
-	enum drbd_ret_code rv;
+	int digest_size = 0;
+	int err;
+
+	err = alloc_shash(&crypto->csums_tfm, new_net_conf->csums_alg,
+			  "csums", true, reply_skb);
+	if (err)
+		return ERR_CSUMS_ALG;
+
+	err = alloc_shash(&crypto->verify_tfm, new_net_conf->verify_alg,
+			  "verify", true, reply_skb);
+	if (err)
+		return ERR_VERIFY_ALG;
+
+	err = alloc_shash(&crypto->integrity_tfm, new_net_conf->integrity_alg,
+			  "integrity", true, reply_skb);
+	if (err)
+		return ERR_INTEGRITY_ALG;
+
+	if (crypto->integrity_tfm) {
+		const int max_digest_size = sizeof_field(struct drbd_connection, scratch_buffer.d.before);
+		digest_size = crypto_shash_digestsize(crypto->integrity_tfm);
+		if (digest_size > max_digest_size) {
+			drbd_msg_sprintf_info(reply_skb,
+				"we currently support only digest sizes <= %d bits, but digest size of %s is %d bits\n",
+				max_digest_size * 8, new_net_conf->integrity_alg, digest_size * 8);
+			return ERR_INTEGRITY_ALG;
+		}
+	}
 
-	rv = alloc_shash(&crypto->csums_tfm, new_net_conf->csums_alg,
-			 ERR_CSUMS_ALG);
-	if (rv != NO_ERROR)
-		return rv;
-	rv = alloc_shash(&crypto->verify_tfm, new_net_conf->verify_alg,
-			 ERR_VERIFY_ALG);
-	if (rv != NO_ERROR)
-		return rv;
-	rv = alloc_shash(&crypto->integrity_tfm, new_net_conf->integrity_alg,
-			 ERR_INTEGRITY_ALG);
-	if (rv != NO_ERROR)
-		return rv;
 	if (new_net_conf->cram_hmac_alg[0] != 0) {
 		snprintf(hmac_name, CRYPTO_MAX_ALG_NAME, "hmac(%s)",
 			 new_net_conf->cram_hmac_alg);
 
-		rv = alloc_shash(&crypto->cram_hmac_tfm, hmac_name,
-				 ERR_AUTH_ALG);
+		err = alloc_shash(&crypto->cram_hmac_tfm, hmac_name,
+				  "hmac", false, reply_skb);
+		if (err)
+			return ERR_AUTH_ALG;
 	}
 
-	return rv;
+	return NO_ERROR;
 }
 
 static void free_crypto(struct crypto *crypto)
@@ -2370,11 +4000,12 @@ static void free_crypto(struct crypto *crypto)
 	crypto_free_shash(crypto->verify_tfm);
 }
 
-int drbd_adm_net_opts(struct sk_buff *skb, struct genl_info *info)
+static int drbd_adm_net_opts(struct sk_buff *skb, struct genl_info *info)
 {
 	struct drbd_config_context adm_ctx;
 	enum drbd_ret_code retcode;
 	struct drbd_connection *connection;
+	struct drbd_transport *transport;
 	struct net_conf *old_net_conf, *new_net_conf = NULL;
 	int err;
 	int ovr; /* online verify running */
@@ -2384,11 +4015,12 @@ int drbd_adm_net_opts(struct sk_buff *skb, struct genl_info *info)
 	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_CONNECTION);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
-		goto finish;
 
 	connection = adm_ctx.connection;
-	mutex_lock(&adm_ctx.resource->adm_mutex);
+	if (mutex_lock_interruptible(&adm_ctx.resource->adm_mutex)) {
+		retcode = ERR_INTR;
+		goto out_no_adm_mutex;
+	}
 
 	new_net_conf = kzalloc_obj(struct net_conf);
 	if (!new_net_conf) {
@@ -2396,11 +4028,12 @@ int drbd_adm_net_opts(struct sk_buff *skb, struct genl_info *info)
 		goto out;
 	}
 
-	conn_reconfig_start(connection);
+	drbd_flush_workqueue(&connection->sender_work);
 
-	mutex_lock(&connection->data.mutex);
 	mutex_lock(&connection->resource->conf_update);
-	old_net_conf = connection->net_conf;
+	mutex_lock(&connection->mutex[DATA_STREAM]);
+	transport = &connection->transport;
+	old_net_conf = transport->net_conf;
 
 	if (!old_net_conf) {
 		drbd_msg_put_info(adm_ctx.reply_skb, "net conf missing, try connect");
@@ -2412,6 +4045,12 @@ int drbd_adm_net_opts(struct sk_buff *skb, struct genl_info *info)
 	if (should_set_defaults(info))
 		set_net_conf_defaults(new_net_conf);
 
+	/* The transport_name is immutable; it takes precedence over set_net_conf_defaults() */
+	memcpy(new_net_conf->transport_name, old_net_conf->transport_name,
+	       old_net_conf->transport_name_len);
+	new_net_conf->transport_name_len = old_net_conf->transport_name_len;
+	new_net_conf->load_balance_paths = old_net_conf->load_balance_paths;
+
 	err = net_conf_from_attrs_for_change(new_net_conf, info);
 	if (err && err != -ENOMSG) {
 		retcode = ERR_MANDATORY_TAG;
@@ -2437,11 +4076,22 @@ int drbd_adm_net_opts(struct sk_buff *skb, struct genl_info *info)
 		goto fail;
 	}
 
-	retcode = alloc_crypto(&crypto, new_net_conf);
+	retcode = alloc_crypto(&crypto, new_net_conf, adm_ctx.reply_skb);
 	if (retcode != NO_ERROR)
 		goto fail;
 
-	rcu_assign_pointer(connection->net_conf, new_net_conf);
+	/* Call before updating net_conf in case the transport needs to compare
+	 * old and new configurations. */
+	err = transport->class->ops.net_conf_change(transport, new_net_conf);
+	if (err) {
+		drbd_msg_sprintf_info(adm_ctx.reply_skb, "transport net_conf_change failed: %d",
+				      err);
+		retcode = ERR_INVALID_REQUEST;
+		goto fail;
+	}
+
+	rcu_assign_pointer(transport->net_conf, new_net_conf);
+	connection->fencing_policy = new_net_conf->fencing_policy;
 
 	if (!rsr) {
 		crypto_free_shash(connection->csums_tfm);
@@ -2456,18 +4106,18 @@ int drbd_adm_net_opts(struct sk_buff *skb, struct genl_info *info)
 
 	crypto_free_shash(connection->integrity_tfm);
 	connection->integrity_tfm = crypto.integrity_tfm;
-	if (connection->cstate >= C_WF_REPORT_PARAMS && connection->agreed_pro_version >= 100)
+	if (connection->cstate[NOW] >= C_CONNECTED && connection->agreed_pro_version >= 100)
-		/* Do this without trying to take connection->data.mutex again.  */
+		/* Do this without trying to take connection->mutex[DATA_STREAM] again.  */
 		__drbd_send_protocol(connection, P_PROTOCOL_UPDATE);
 
 	crypto_free_shash(connection->cram_hmac_tfm);
 	connection->cram_hmac_tfm = crypto.cram_hmac_tfm;
 
+	mutex_unlock(&connection->mutex[DATA_STREAM]);
 	mutex_unlock(&connection->resource->conf_update);
-	mutex_unlock(&connection->data.mutex);
 	kvfree_rcu_mightsleep(old_net_conf);
 
-	if (connection->cstate >= C_WF_REPORT_PARAMS) {
+	if (connection->cstate[NOW] >= C_CONNECTED) {
 		struct drbd_peer_device *peer_device;
 		int vnr;
 
@@ -2475,277 +4125,1037 @@ int drbd_adm_net_opts(struct sk_buff *skb, struct genl_info *info)
 			drbd_send_sync_param(peer_device);
 	}
 
-	goto done;
+	goto out;
 
  fail:
+	mutex_unlock(&connection->mutex[DATA_STREAM]);
 	mutex_unlock(&connection->resource->conf_update);
-	mutex_unlock(&connection->data.mutex);
 	free_crypto(&crypto);
 	kfree(new_net_conf);
- done:
-	conn_reconfig_done(connection);
  out:
 	mutex_unlock(&adm_ctx.resource->adm_mutex);
- finish:
+ out_no_adm_mutex:
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 }
 
-static void connection_to_info(struct connection_info *info,
-			       struct drbd_connection *connection)
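+/*
+ * (Re)size the resync planning fifo according to c_plan_ahead.  When a new
+ * plan is installed, the old one is handed back via *pp_old_plan so the
+ * caller can free it.
+ */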
+static int adjust_resync_fifo(struct drbd_peer_device *peer_device,
+			      struct peer_device_conf *conf,
+			      struct fifo_buffer **pp_old_plan)
 {
-	info->conn_connection_state = connection->cstate;
-	info->conn_role = conn_highest_peer(connection);
-}
+	struct fifo_buffer *old_plan, *new_plan = NULL;
+	unsigned int fifo_size;
 
-static void peer_device_to_info(struct peer_device_info *info,
-				struct drbd_peer_device *peer_device)
-{
-	struct drbd_device *device = peer_device->device;
+	fifo_size = (conf->c_plan_ahead * 10 * RS_MAKE_REQS_INTV) / HZ;
+
+	old_plan = rcu_dereference_protected(peer_device->rs_plan_s,
+			     lockdep_is_held(&peer_device->connection->resource->conf_update));
+	if (!old_plan || fifo_size != old_plan->size) {
+		new_plan = fifo_alloc(fifo_size);
+		if (!new_plan) {
+			drbd_err(peer_device, "kmalloc of fifo_buffer failed");
+			return -ENOMEM;
+		}
+		rcu_assign_pointer(peer_device->rs_plan_s, new_plan);
+		if (pp_old_plan)
+			*pp_old_plan = old_plan;
+	}
 
-	info->peer_repl_state =
-		max_t(enum drbd_conns, C_WF_REPORT_PARAMS, device->state.conn);
-	info->peer_disk_state = device->state.pdsk;
-	info->peer_resync_susp_user = device->state.user_isp;
-	info->peer_resync_susp_peer = device->state.peer_isp;
-	info->peer_resync_susp_dependency = device->state.aftr_isp;
+	return 0;
 }
 
-int drbd_adm_connect(struct sk_buff *skb, struct genl_info *info)
+static int drbd_adm_peer_device_opts(struct sk_buff *skb, struct genl_info *info)
 {
-	struct connection_info connection_info;
-	enum drbd_notification_type flags;
-	unsigned int peer_devices = 0;
 	struct drbd_config_context adm_ctx;
-	struct drbd_peer_device *peer_device;
-	struct net_conf *old_net_conf, *new_net_conf = NULL;
-	struct crypto crypto = { };
-	struct drbd_resource *resource;
-	struct drbd_connection *connection;
 	enum drbd_ret_code retcode;
-	enum drbd_state_rv rv;
-	int i;
+	struct drbd_peer_device *peer_device;
+	struct peer_device_conf *old_peer_device_conf, *new_peer_device_conf = NULL;
+	struct fifo_buffer *old_plan = NULL;
+	struct drbd_device *device;
+	bool notify = false;
 	int err;
 
-	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_RESOURCE);
-
+	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_PEER_DEVICE);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
-		goto out;
-	if (!(adm_ctx.my_addr && adm_ctx.peer_addr)) {
-		drbd_msg_put_info(adm_ctx.reply_skb, "connection endpoint(s) missing");
-		retcode = ERR_INVALID_REQUEST;
-		goto out;
+
+	peer_device = adm_ctx.peer_device;
+	device = peer_device->device;
+
+	if (mutex_lock_interruptible(&adm_ctx.resource->adm_mutex)) {
+		retcode = ERR_INTR;
+		goto out_no_adm_mutex;
 	}
+	mutex_lock(&adm_ctx.resource->conf_update);
 
-	/* No need for _rcu here. All reconfiguration is
-	 * strictly serialized on genl_lock(). We are protected against
-	 * concurrent reconfiguration/addition/deletion */
-	for_each_resource(resource, &drbd_resources) {
-		for_each_connection(connection, resource) {
-			if (nla_len(adm_ctx.my_addr) == connection->my_addr_len &&
-			    !memcmp(nla_data(adm_ctx.my_addr), &connection->my_addr,
-				    connection->my_addr_len)) {
-				retcode = ERR_LOCAL_ADDR;
-				goto out;
-			}
+	new_peer_device_conf = kzalloc_obj(struct peer_device_conf);
+	if (!new_peer_device_conf)
+		goto fail;
 
-			if (nla_len(adm_ctx.peer_addr) == connection->peer_addr_len &&
-			    !memcmp(nla_data(adm_ctx.peer_addr), &connection->peer_addr,
-				    connection->peer_addr_len)) {
-				retcode = ERR_PEER_ADDR;
-				goto out;
+	old_peer_device_conf = peer_device->conf;
+	*new_peer_device_conf = *old_peer_device_conf;
+	if (should_set_defaults(info))
+		set_peer_device_conf_defaults(new_peer_device_conf);
+
+	err = peer_device_conf_from_attrs_for_change(new_peer_device_conf, info);
+	if (err && err != -ENOMSG) {
+		retcode = ERR_MANDATORY_TAG;
+		drbd_msg_put_info(adm_ctx.reply_skb, from_attrs_err_to_txt(err));
+		goto fail_ret_set;
+	}
+
+	if (!old_peer_device_conf->bitmap && new_peer_device_conf->bitmap &&
+	    peer_device->bitmap_index == -1) {
+		if (get_ldev(device)) {
+			err = allocate_bitmap_index(peer_device, device->ldev);
+			put_ldev(device);
+			if (err) {
+				drbd_msg_put_info(adm_ctx.reply_skb,
+						  "No bitmap slot available in meta-data");
+				retcode = ERR_INVALID_REQUEST;
+				goto fail_ret_set;
 			}
+			drbd_info(peer_device,
+				  "Former intentional diskless peer got bitmap slot %d\n",
+				  peer_device->bitmap_index);
+			drbd_md_sync(device);
+			notify = true;
+		}
+	}
+
+	if (old_peer_device_conf->bitmap && !new_peer_device_conf->bitmap) {
+		enum drbd_disk_state pdsk = peer_device->disk_state[NOW];
+		enum drbd_disk_state disk = device->disk_state[NOW];
+		if (!(disk == D_DISKLESS || pdsk == D_DISKLESS || pdsk == D_UNKNOWN)) {
+			drbd_msg_put_info(adm_ctx.reply_skb,
+					  "Can not drop the bitmap when both sides have a disk");
+			retcode = ERR_INVALID_REQUEST;
+			goto fail_ret_set;
+		}
+		err = clear_peer_slot(device, peer_device->node_id, MDF_NODE_EXISTS);
+		if (!err) {
+			peer_device->bitmap_index = -1;
+			notify = true;
 		}
 	}
 
-	mutex_lock(&adm_ctx.resource->adm_mutex);
-	connection = first_connection(adm_ctx.resource);
-	conn_reconfig_start(connection);
+	if (!expect(peer_device, new_peer_device_conf->resync_rate >= 1))
+		new_peer_device_conf->resync_rate = 1;
 
-	if (connection->cstate > C_STANDALONE) {
-		retcode = ERR_NET_CONFIGURED;
+	if (new_peer_device_conf->c_plan_ahead > DRBD_C_PLAN_AHEAD_MAX)
+		new_peer_device_conf->c_plan_ahead = DRBD_C_PLAN_AHEAD_MAX;
+
+	err = adjust_resync_fifo(peer_device, new_peer_device_conf, &old_plan);
+	if (err)
 		goto fail;
-	}
 
-	/* allocation not in the IO path, drbdsetup / netlink process context */
-	new_net_conf = kzalloc_obj(*new_net_conf);
-	if (!new_net_conf) {
+	rcu_assign_pointer(peer_device->conf, new_peer_device_conf);
+
+	kvfree_rcu_mightsleep(old_peer_device_conf);
+	kfree(old_plan);
+
+	/* No need to call drbd_send_sync_param() here. The values in
+	 * peer_device->conf that we send are ignored by recent peers anyway. */
+
+	if (0) {
+fail:
 		retcode = ERR_NOMEM;
-		goto fail;
+fail_ret_set:
+		kfree(new_peer_device_conf);
 	}
 
-	set_net_conf_defaults(new_net_conf);
+	mutex_unlock(&adm_ctx.resource->conf_update);
+	mutex_unlock(&adm_ctx.resource->adm_mutex);
+out_no_adm_mutex:
+	if (notify)
+		drbd_broadcast_peer_device_state(peer_device);
+	drbd_adm_finish(&adm_ctx, info, retcode);
+	return 0;
 
-	err = net_conf_from_attrs(new_net_conf, info);
-	if (err && err != -ENOMSG) {
-		retcode = ERR_MANDATORY_TAG;
-		drbd_msg_put_info(adm_ctx.reply_skb, from_attrs_err_to_txt(err));
-		goto fail;
-	}
+}
 
-	retcode = check_net_options(connection, new_net_conf);
-	if (retcode != NO_ERROR)
-		goto fail;
+int drbd_create_peer_device_default_config(struct drbd_peer_device *peer_device)
+{
+	struct peer_device_conf *conf;
+	int err;
 
-	retcode = alloc_crypto(&crypto, new_net_conf);
-	if (retcode != NO_ERROR)
-		goto fail;
+	conf = kzalloc_obj(*conf);
+	if (!conf)
+		return -ENOMEM;
 
-	((char *)new_net_conf->shared_secret)[SHARED_SECRET_MAX-1] = 0;
+	set_peer_device_conf_defaults(conf);
+	err = adjust_resync_fifo(peer_device, conf, NULL);
+	if (err)
+		return err;
 
-	drbd_flush_workqueue(&connection->sender_work);
+	peer_device->conf = conf;
 
-	mutex_lock(&adm_ctx.resource->conf_update);
-	old_net_conf = connection->net_conf;
-	if (old_net_conf) {
-		retcode = ERR_NET_CONFIGURED;
-		mutex_unlock(&adm_ctx.resource->conf_update);
-		goto fail;
-	}
-	rcu_assign_pointer(connection->net_conf, new_net_conf);
+	return 0;
+}
 
-	conn_free_crypto(connection);
-	connection->cram_hmac_tfm = crypto.cram_hmac_tfm;
-	connection->integrity_tfm = crypto.integrity_tfm;
-	connection->csums_tfm = crypto.csums_tfm;
-	connection->verify_tfm = crypto.verify_tfm;
+static void connection_to_info(struct connection_info *info,
+			       struct drbd_connection *connection)
+{
+	info->conn_connection_state = connection->cstate[NOW];
+	info->conn_role = connection->peer_role[NOW];
+}
 
-	connection->my_addr_len = nla_len(adm_ctx.my_addr);
-	memcpy(&connection->my_addr, nla_data(adm_ctx.my_addr), connection->my_addr_len);
-	connection->peer_addr_len = nla_len(adm_ctx.peer_addr);
-	memcpy(&connection->peer_addr, nla_data(adm_ctx.peer_addr), connection->peer_addr_len);
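+/* Copy a string into a fixed-size info field and record its length. */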
+#define str_to_info(info, field, str) ({ \
+	strscpy(info->field, str, sizeof(info->field)); \
+	info->field ## _len = min(strlen(str), sizeof(info->field) - 1); \
+})
 
-	idr_for_each_entry(&connection->peer_devices, peer_device, i) {
-		peer_devices++;
-	}
+/* shared logic between peer_device_to_info and peer_device_state_change_to_info */
+static void __peer_device_to_info(struct peer_device_info *info,
+				  struct drbd_peer_device *peer_device,
+				  enum which_state which)
+{
+	info->peer_resync_susp_dependency = resync_susp_comb_dep(peer_device, which);
+	info->peer_is_intentional_diskless = !want_bitmap(peer_device);
+}
 
-	connection_to_info(&connection_info, connection);
-	flags = (peer_devices--) ? NOTIFY_CONTINUES : 0;
-	mutex_lock(&notification_mutex);
-	notify_connection_state(NULL, 0, connection, &connection_info, NOTIFY_CREATE | flags);
-	idr_for_each_entry(&connection->peer_devices, peer_device, i) {
-		struct peer_device_info peer_device_info;
+static void peer_device_to_info(struct peer_device_info *info,
+				struct drbd_peer_device *peer_device)
+{
+	info->peer_repl_state = peer_device->repl_state[NOW];
+	info->peer_disk_state = peer_device->disk_state[NOW];
+	info->peer_resync_susp_user = peer_device->resync_susp_user[NOW];
+	info->peer_resync_susp_peer = peer_device->resync_susp_peer[NOW];
+	__peer_device_to_info(info, peer_device, NOW);
+}
 
-		peer_device_to_info(&peer_device_info, peer_device);
-		flags = (peer_devices--) ? NOTIFY_CONTINUES : 0;
-		notify_peer_device_state(NULL, 0, peer_device, &peer_device_info, NOTIFY_CREATE | flags);
+void peer_device_state_change_to_info(struct peer_device_info *info,
+				      struct drbd_peer_device_state_change *state_change)
+{
+	info->peer_repl_state = state_change->repl_state[NEW];
+	info->peer_disk_state = state_change->disk_state[NEW];
+	info->peer_resync_susp_user = state_change->resync_susp_user[NEW];
+	info->peer_resync_susp_peer = state_change->resync_susp_peer[NEW];
+	__peer_device_to_info(info, state_change->peer_device, NEW);
+}
+
+/* shared logic between device_to_info and device_state_change_to_info */
+static void __device_to_info(struct device_info *info,
+			     struct drbd_device *device)
+{
+	info->is_intentional_diskless = device->device_conf.intentional_diskless;
+	info->dev_is_open = device->open_cnt != 0;
+
+	rcu_read_lock();
+	if (get_ldev(device)) {
+		struct disk_conf *disk_conf =
+			rcu_dereference(device->ldev->disk_conf);
+		str_to_info(info, backing_dev_path, disk_conf->backing_dev);
+		put_ldev(device);
+	} else {
+		info->backing_dev_path[0] = '\0';
+		info->backing_dev_path_len = 0;
 	}
-	mutex_unlock(&notification_mutex);
-	mutex_unlock(&adm_ctx.resource->conf_update);
+	rcu_read_unlock();
+}
+
+void device_to_info(struct device_info *info,
+			   struct drbd_device *device)
+{
+	info->dev_disk_state = device->disk_state[NOW];
+	info->dev_has_quorum = device->have_quorum[NOW];
+	__device_to_info(info, device);
+}
+
+void device_state_change_to_info(struct device_info *info,
+				 struct drbd_device_state_change *state_change)
+{
+	info->dev_disk_state = state_change->disk_state[NEW];
+	info->dev_has_quorum = state_change->have_quorum[NEW];
+	__device_to_info(info, state_change->device);
+}
+
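+/* Is the device currently a sync target on any other connection? */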
+static bool is_resync_target_in_other_connection(struct drbd_peer_device *peer_device)
+{
+	struct drbd_device *device = peer_device->device;
+	struct drbd_peer_device *p;
+
+	for_each_peer_device(p, device) {
+		if (p == peer_device)
+			continue;
+
+		if (p->repl_state[NOW] == L_SYNC_TARGET)
+			return true;
+	}
+
+	return false;
+}
+
+static enum drbd_ret_code drbd_check_name_str(const char *name, const bool strict);
+static void drbd_msg_put_name_error(struct sk_buff *reply_skb, enum drbd_ret_code ret_code);
+
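+/* Validate a connection name and ensure it is unique within the resource. */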
+static enum drbd_ret_code drbd_check_conn_name(struct drbd_resource *resource, const char *new_name)
+{
+	struct drbd_connection *connection;
+	enum drbd_ret_code retcode;
+	const char *tmp_name;
+
+	retcode = drbd_check_name_str(new_name, drbd_strict_names);
+	if (retcode != NO_ERROR)
+		return retcode;
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		struct net_conf *nc = rcu_dereference(connection->transport.net_conf);
+
+		/* is this even possible? */
+		if (!nc)
+			continue;
+		tmp_name = nc->name;
+		if (!tmp_name)
+			continue;
+		if (strcmp(tmp_name, new_name))
+			continue;
+		retcode = ERR_ALREADY_EXISTS;
+		break;
+	}
+	rcu_read_unlock();
+	return retcode;
+}
+
+static int adm_new_connection(struct drbd_config_context *adm_ctx, struct genl_info *info)
+{
+	struct connection_info connection_info;
+	enum drbd_notification_type flags;
+	unsigned int peer_devices = 0;
+	struct drbd_device *device;
+	struct drbd_peer_device *peer_device;
+	struct net_conf *old_net_conf, *new_net_conf = NULL;
+	struct crypto crypto = { NULL, };
+	struct drbd_connection *connection;
+	enum drbd_ret_code retcode;
+	int i, err;
+	char *transport_name;
+	struct drbd_transport_class *tr_class;
+	struct drbd_transport *transport;
+
+	/* allocation not in the IO path, drbdsetup / netlink process context */
+	new_net_conf = kzalloc_obj(*new_net_conf);
+	if (!new_net_conf)
+		return ERR_NOMEM;
+
+	set_net_conf_defaults(new_net_conf);
+
+	err = net_conf_from_attrs(new_net_conf, info);
+	if (err) {
+		retcode = ERR_MANDATORY_TAG;
+		drbd_msg_put_info(adm_ctx->reply_skb, from_attrs_err_to_txt(err));
+		goto fail;
+	}
+
+	retcode = drbd_check_conn_name(adm_ctx->resource, new_net_conf->name);
+	if (retcode != NO_ERROR) {
+		drbd_msg_put_name_error(adm_ctx->reply_skb, retcode);
+		goto fail;
+	}
+
+	transport_name = new_net_conf->transport_name_len ? new_net_conf->transport_name :
+		new_net_conf->load_balance_paths ? "lb-tcp" : "tcp";
+	tr_class = drbd_get_transport_class(transport_name);
+	if (!tr_class) {
+		retcode = ERR_CREATE_TRANSPORT;
+		goto fail;
+	}
+
+	connection = drbd_create_connection(adm_ctx->resource, tr_class);
+	if (!connection) {
+		retcode = ERR_NOMEM;
+		goto fail_put_transport;
+	}
+	connection->peer_node_id = adm_ctx->peer_node_id;
+	/* transport class reference now owned by connection,
+	 * prevent double cleanup. */
+	tr_class = NULL;
+
+	mutex_lock(&adm_ctx->resource->conf_update);
+	retcode = check_net_options(connection, new_net_conf);
+	if (retcode != NO_ERROR)
+		goto unlock_fail_free_connection;
+
+	retcode = alloc_crypto(&crypto, new_net_conf, adm_ctx->reply_skb);
+	if (retcode != NO_ERROR)
+		goto unlock_fail_free_connection;
+
+	((char *)new_net_conf->shared_secret)[SHARED_SECRET_MAX-1] = 0;
+
+	idr_for_each_entry(&adm_ctx->resource->devices, device, i) {
+		int id;
+
+		retcode = ERR_NOMEM;
+		peer_device = create_peer_device(device, connection);
+		if (!peer_device)
+			goto unlock_fail_free_connection;
+		id = idr_alloc(&connection->peer_devices, peer_device,
+			       device->vnr, device->vnr + 1, GFP_KERNEL);
+		if (id < 0)
+			goto unlock_fail_free_connection;
+
+		if (get_ldev(device)) {
+			struct drbd_peer_md *peer_md =
+				&device->ldev->md.peers[adm_ctx->peer_node_id];
+			if (peer_md->flags & MDF_PEER_OUTDATED)
+				peer_device->disk_state[NOW] = D_OUTDATED;
+			put_ldev(device);
+		}
+	}
+
+	/* Set bitmap_index if it was allocated previously */
+	idr_for_each_entry(&connection->peer_devices, peer_device, i) {
+		int bitmap_index;
+
+		device = peer_device->device;
+		if (!get_ldev(device))
+			continue;
+
+		bitmap_index = device->ldev->md.peers[adm_ctx->peer_node_id].bitmap_index;
+		if (bitmap_index != -1) {
+			if (want_bitmap(peer_device))
+				peer_device->bitmap_index = bitmap_index;
+			else
+				device->ldev->md.peers[adm_ctx->peer_node_id].flags &= ~MDF_HAVE_BITMAP;
+		}
+		put_ldev(device);
+	}
+
+	idr_for_each_entry(&connection->peer_devices, peer_device, i) {
+		peer_device->send_cnt = 0;
+		peer_device->recv_cnt = 0;
+	}
+
+	idr_for_each_entry(&connection->peer_devices, peer_device, i) {
+		struct drbd_device *device = peer_device->device;
+
+		peer_device->resync_susp_other_c[NOW] =
+			is_resync_target_in_other_connection(peer_device);
+		list_add_rcu(&peer_device->peer_devices, &device->peer_devices);
+		kref_get(&connection->kref);
+		kref_get(&device->kref);
+		peer_devices++;
+		peer_device->node_id = connection->peer_node_id;
+	}
+
+	write_lock_irq(&adm_ctx->resource->state_rwlock);
+
+	/*
+	 * Initialize to the current dagtag so that flushes can be acked even
+	 * if no further writes occur.
+	 */
+	connection->last_peer_ack_dagtag_seen = READ_ONCE(adm_ctx->resource->dagtag_sector);
+
+	list_add_tail_rcu(&connection->connections, &adm_ctx->resource->connections);
+	write_unlock_irq(&adm_ctx->resource->state_rwlock);
+
+	transport = &connection->transport;
+	old_net_conf = transport->net_conf;
+	if (old_net_conf) {
+		retcode = ERR_NET_CONFIGURED;
+		goto unlock_fail_free_connection;
+	}
+
+	err = transport->class->ops.net_conf_change(transport, new_net_conf);
+	if (err) {
+		drbd_msg_sprintf_info(adm_ctx->reply_skb, "transport net_conf_change failed: %d",
+				      err);
+		retcode = ERR_INVALID_REQUEST;
+		goto unlock_fail_free_connection;
+	}
+
+	rcu_assign_pointer(transport->net_conf, new_net_conf);
+	connection->fencing_policy = new_net_conf->fencing_policy;
+
+	connection->cram_hmac_tfm = crypto.cram_hmac_tfm;
+	connection->integrity_tfm = crypto.integrity_tfm;
+	connection->csums_tfm = crypto.csums_tfm;
+	connection->verify_tfm = crypto.verify_tfm;
+
+	/* transferred ownership. prevent double cleanup. */
+	new_net_conf = NULL;
+	memset(&crypto, 0, sizeof(crypto));
+
+	if (connection->peer_node_id > adm_ctx->resource->max_node_id)
+		adm_ctx->resource->max_node_id = connection->peer_node_id;
+
+	connection_to_info(&connection_info, connection);
+	flags = (peer_devices--) ? NOTIFY_CONTINUES : 0;
+	mutex_lock(&notification_mutex);
+	notify_connection_state(NULL, 0, connection, &connection_info, NOTIFY_CREATE | flags);
+	idr_for_each_entry(&connection->peer_devices, peer_device, i) {
+		struct peer_device_info peer_device_info;
+
+		peer_device_to_info(&peer_device_info, peer_device);
+		flags = (peer_devices--) ? NOTIFY_CONTINUES : 0;
+		notify_peer_device_state(NULL, 0, peer_device, &peer_device_info, NOTIFY_CREATE | flags);
+	}
+	mutex_unlock(&notification_mutex);
+
+	mutex_unlock(&adm_ctx->resource->conf_update);
+
+	drbd_debugfs_connection_add(connection); /* after ->net_conf was assigned */
+	drbd_thread_start(&connection->sender);
+	return NO_ERROR;
+
+unlock_fail_free_connection:
+	drbd_unregister_connection(connection);
+	mutex_unlock(&adm_ctx->resource->conf_update);
+	synchronize_rcu();
+	drbd_reclaim_connection(&connection->rcu);
+fail_put_transport:
+	drbd_put_transport_class(tr_class);
+fail:
+	free_crypto(&crypto);
+	kfree(new_net_conf);
+
+	return retcode;
+}
+
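+/* Compare a stored sockaddr against an address given as a netlink attribute. */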
+static bool addr_eq_nla(const struct sockaddr_storage *addr, const int addr_len, const struct nlattr *nla)
+{
+	return nla_len(nla) == addr_len && memcmp(nla_data(nla), addr, addr_len) == 0;
+}
+
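+/* Check whether either endpoint of @path clashes with the given addresses. */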
+static enum drbd_ret_code
+check_path_against_nla(const struct drbd_path *path,
+		       const struct nlattr *my_addr, const struct nlattr *peer_addr)
+{
+	enum drbd_ret_code ret = NO_ERROR;
+
+	if (addr_eq_nla(&path->my_addr, path->my_addr_len, my_addr))
+		ret = ERR_LOCAL_ADDR;
+	if (addr_eq_nla(&path->peer_addr, path->peer_addr_len, peer_addr))
+		ret = (ret == ERR_LOCAL_ADDR ? ERR_LOCAL_AND_PEER_ADDR : ERR_PEER_ADDR);
+	return ret;
+}
+
+static enum drbd_ret_code
+check_path_usable(const struct drbd_config_context *adm_ctx,
+		  const struct nlattr *my_addr, const struct nlattr *peer_addr)
+{
+	struct drbd_resource *resource;
+	struct drbd_connection *connection;
+	enum drbd_ret_code retcode;
+
+	if (!(my_addr && peer_addr)) {
+		drbd_msg_put_info(adm_ctx->reply_skb, "connection endpoint(s) missing");
+		return ERR_INVALID_REQUEST;
+	}
+
+	for_each_resource_rcu(resource, &drbd_resources) {
+		for_each_connection_rcu(connection, resource) {
+			struct drbd_path *path;
+			list_for_each_entry_rcu(path, &connection->transport.paths, list) {
+				retcode = check_path_against_nla(path, my_addr, peer_addr);
+				if (retcode == NO_ERROR)
+					continue;
+				/* Within the same resource, it is ok to use
+				 * the same endpoint several times */
+				if (retcode != ERR_LOCAL_AND_PEER_ADDR &&
+				    resource == adm_ctx->resource)
+					continue;
+				return retcode;
+			}
+		}
+	}
+	return NO_ERROR;
+}
+
+
+static enum drbd_ret_code
+adm_add_path(struct drbd_config_context *adm_ctx, struct genl_info *info)
+{
+	struct drbd_transport *transport = &adm_ctx->connection->transport;
+	struct drbd_resource *resource = adm_ctx->resource;
+	struct drbd_connection *connection = adm_ctx->connection;
+	struct nlattr **nested_attr_tb;
+	struct nlattr *my_addr, *peer_addr;
+	struct drbd_path *path;
+	struct net *existing_net;
+	enum drbd_ret_code retcode;
+	int err;
+
+	/* parse and validate only */
+	existing_net = drbd_net_assigned_to_connection(adm_ctx->connection);
+	if (existing_net && !net_eq(adm_ctx->net, existing_net)) {
+		drbd_msg_put_info(adm_ctx->reply_skb, "connection already assigned to a different network namespace");
+		return ERR_INVALID_REQUEST;
+	}
+
+	err = path_parms_ntb_from_attrs(&nested_attr_tb, info);
+	if (err) {
+		drbd_msg_put_info(adm_ctx->reply_skb, from_attrs_err_to_txt(err));
+		return ERR_MANDATORY_TAG;
+	}
+	my_addr = nested_attr_tb[__nla_type(T_my_addr)];
+	peer_addr = nested_attr_tb[__nla_type(T_peer_addr)];
+	kfree(nested_attr_tb);
+	nested_attr_tb = NULL;
+
+	rcu_read_lock();
+	retcode = check_path_usable(adm_ctx, my_addr, peer_addr);
+	rcu_read_unlock();
+	if (retcode != NO_ERROR)
+		return retcode;
+
+	path = kzalloc(transport->class->path_instance_size, GFP_KERNEL);
+	if (!path)
+		return ERR_NOMEM;
+
+	path->net = adm_ctx->net;
+	path->my_addr_len = nla_len(my_addr);
+	memcpy(&path->my_addr, nla_data(my_addr), path->my_addr_len);
+	path->peer_addr_len = nla_len(peer_addr);
+	memcpy(&path->peer_addr, nla_data(peer_addr), path->peer_addr_len);
+
+	kref_get(&adm_ctx->connection->kref);
+	path->transport = transport;
+
+	kref_init(&path->kref);
+
+	if (connection->resource->res_opts.drbd8_compat_mode && resource->res_opts.node_id == -1) {
+		err = drbd_setup_node_ids_84(connection, path, adm_ctx->peer_node_id);
+		if (err) {
+			drbd_msg_put_info(adm_ctx->reply_skb,
+				err == -ENOTUNIQ ? "node-id from drbdsetup and meta-data differ" :
+				"error setting up node IDs");
+			kref_put(&path->kref, drbd_destroy_path);
+			return ERR_INVALID_REQUEST;
+		}
+	}
+
+	/* Exclusive with transport op "prepare_connect()" */
+	mutex_lock(&resource->conf_update);
+
+	err = transport->class->ops.add_path(path);
+
+	if (err) {
+		kref_put(&path->kref, drbd_destroy_path);
+		drbd_err(connection, "add_path() failed with %d\n", err);
+		drbd_msg_put_info(adm_ctx->reply_skb, "add_path on transport failed");
+		mutex_unlock(&resource->conf_update);
+		return ERR_INVALID_REQUEST;
+	}
+
+	/* Exclusive with reading state, in particular remember_state_change() */
+	write_lock_irq(&resource->state_rwlock);
+	list_add_tail_rcu(&path->list, &transport->paths);
+	write_unlock_irq(&resource->state_rwlock);
+
+	mutex_unlock(&resource->conf_update);
+
+	notify_path(adm_ctx->connection, path, NOTIFY_CREATE);
+	return NO_ERROR;
+}
+
+static int drbd_adm_connect(struct sk_buff *skb, struct genl_info *info)
+{
+	struct drbd_config_context adm_ctx;
+	struct connect_parms parms = { 0, };
+	struct drbd_peer_device *peer_device;
+	struct drbd_connection *connection;
+	enum drbd_ret_code retcode;
+	enum drbd_state_rv rv;
+	enum drbd_conn_state cstate;
+	int i, err;
+
+	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_CONNECTION);
+	if (!adm_ctx.reply_skb)
+		return retcode;
+
+	connection = adm_ctx.connection;
+	cstate = connection->cstate[NOW];
+	if (cstate != C_STANDALONE) {
+		retcode = ERR_NET_CONFIGURED;
+		goto out;
+	}
+
+	if (first_path(connection) == NULL) {
+		drbd_msg_put_info(adm_ctx.reply_skb, "connection endpoint(s) missing");
+		retcode = ERR_INVALID_REQUEST;
+		goto out;
+	}
+
+	if (!net_eq(adm_ctx.net, drbd_net_assigned_to_connection(connection))) {
+		drbd_msg_put_info(adm_ctx.reply_skb, "connection assigned to a different network namespace");
+		retcode = ERR_INVALID_REQUEST;
+		goto out;
+	}
+
+	if (info->attrs[DRBD_NLA_CONNECT_PARMS]) {
+		err = connect_parms_from_attrs(&parms, info);
+		if (err) {
+			retcode = ERR_MANDATORY_TAG;
+			drbd_msg_put_info(adm_ctx.reply_skb, from_attrs_err_to_txt(err));
+			goto out;
+		}
+	}
+	if (parms.discard_my_data) {
+		if (adm_ctx.resource->role[NOW] == R_PRIMARY) {
+			retcode = ERR_DISCARD_IMPOSSIBLE;
+			goto out;
+		}
+		set_bit(CONN_DISCARD_MY_DATA, &connection->flags);
+	}
+	if (parms.tentative)
+		set_bit(CONN_DRY_RUN, &connection->flags);
+
+	/* If necessary, allocate bitmap indexes for the peer devices here */
+	idr_for_each_entry(&connection->peer_devices, peer_device, i) {
+		struct drbd_device *device;
+
+		if (peer_device->bitmap_index != -1 || !want_bitmap(peer_device))
+			continue;
+
+		device = peer_device->device;
+		if (!get_ldev(device))
+			continue;
+
+		err = allocate_bitmap_index(peer_device, device->ldev);
+		put_ldev(device);
+		if (err) {
+			retcode = ERR_INVALID_REQUEST;
+			goto out;
+		}
+		drbd_md_mark_dirty(device);
+	}
+
+	rv = change_cstate_tag(connection, C_UNCONNECTED, CS_VERBOSE, "connect", NULL);
+	drbd_adm_finish(&adm_ctx, info, rv);
+	return 0;
+out:
+	drbd_adm_finish(&adm_ctx, info, retcode);
+	return 0;
+}
+
+static int drbd_adm_new_peer(struct sk_buff *skb, struct genl_info *info)
+{
+	struct drbd_config_context adm_ctx;
+	struct drbd_connection *connection;
+	struct drbd_resource *resource;
+	enum drbd_ret_code retcode;
+	struct drbd_device *device;
+	int vnr, n_connections = 0;
+
+	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_PEER_NODE);
+	if (!adm_ctx.reply_skb)
+		return retcode;
+
+	resource = adm_ctx.resource;
+	if (mutex_lock_interruptible(&resource->adm_mutex)) {
+		retcode = ERR_INTR;
+		goto out;
+	}
+
+	rcu_read_lock();
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		bool fail = false;
+
+		if (get_ldev_if_state(device, D_FAILED)) {
+			fail = !rcu_dereference(device->ldev->disk_conf)->d_bitmap;
+			put_ldev(device);
+		}
+		if (fail) {
+			rcu_read_unlock();
+			retcode = ERR_INVALID_REQUEST;
+			drbd_msg_sprintf_info(adm_ctx.reply_skb,
+			      "Cannot add a peer while having a disk without an allocated bitmap");
+			goto out_unlock;
+		}
+	}
+	rcu_read_unlock();
+
+	for_each_connection(connection, resource)
+		n_connections++;
+	if (resource->res_opts.drbd8_compat_mode && n_connections >= 1) {
+		retcode = ERR_INVALID_REQUEST;
+		drbd_msg_sprintf_info(adm_ctx.reply_skb,
+				      "drbd8 compat mode allows one peer at max");
+		goto out_unlock;
+	}
+
+	/* ensure uniqueness of peer_node_id; we check while holding adm_mutex */
+	connection = drbd_connection_by_node_id(resource, adm_ctx.peer_node_id);
+	if (adm_ctx.connection || connection) {
+		retcode = ERR_INVALID_REQUEST;
+		drbd_msg_sprintf_info(adm_ctx.reply_skb,
+				      "Connection for peer node id %d already exists",
+				      adm_ctx.peer_node_id);
+	} else {
+		retcode = adm_new_connection(&adm_ctx, info);
+	}
+
+out_unlock:
+	mutex_unlock(&resource->adm_mutex);
+out:
+	drbd_adm_finish(&adm_ctx, info, retcode);
+	return 0;
+}
+
+static int drbd_adm_new_path(struct sk_buff *skb, struct genl_info *info)
+{
+	struct drbd_config_context adm_ctx;
+	enum drbd_ret_code retcode;
+
+	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_CONNECTION);
+	if (!adm_ctx.reply_skb)
+		return retcode;
+
+	/* remote transport endpoints need to be globally unique */
+	if (mutex_lock_interruptible(&adm_ctx.resource->adm_mutex)) {
+		retcode = ERR_INTR;
+	} else {
+		retcode = adm_add_path(&adm_ctx, info);
+		mutex_unlock(&adm_ctx.resource->adm_mutex);
+	}
+	drbd_adm_finish(&adm_ctx, info, retcode);
+	return 0;
+}
+
+static enum drbd_ret_code
+adm_del_path(struct drbd_config_context *adm_ctx, struct genl_info *info)
+{
+	struct drbd_resource *resource = adm_ctx->resource;
+	struct drbd_connection *connection = adm_ctx->connection;
+	struct drbd_transport *transport = &connection->transport;
+	struct nlattr **nested_attr_tb;
+	struct nlattr *my_addr, *peer_addr;
+	struct drbd_path *path;
+	int nr_paths = 0;
+	int err;
+
+	/* parse and validate only */
+	if (!net_eq(adm_ctx->net, drbd_net_assigned_to_connection(connection))) {
+		drbd_msg_put_info(adm_ctx->reply_skb, "connection assigned to a different network namespace");
+		return ERR_INVALID_REQUEST;
+	}
+
+	err = path_parms_ntb_from_attrs(&nested_attr_tb, info);
+	if (err) {
+		drbd_msg_put_info(adm_ctx->reply_skb, from_attrs_err_to_txt(err));
+		return ERR_MANDATORY_TAG;
+	}
+	my_addr = nested_attr_tb[__nla_type(T_my_addr)];
+	peer_addr = nested_attr_tb[__nla_type(T_peer_addr)];
+	kfree(nested_attr_tb);
+	nested_attr_tb = NULL;
+
+	list_for_each_entry(path, &transport->paths, list)
+		nr_paths++;
+
+	if (nr_paths == 1 && connection->cstate[NOW] >= C_CONNECTING) {
+		drbd_msg_put_info(adm_ctx->reply_skb,
+				  "Can not delete last path, use disconnect first!");
+		return ERR_INVALID_REQUEST;
+	}
+
+	err = -ENOENT;
+	list_for_each_entry(path, &transport->paths, list) {
+		if (!addr_eq_nla(&path->my_addr, path->my_addr_len, my_addr))
+			continue;
+		if (!addr_eq_nla(&path->peer_addr, path->peer_addr_len, peer_addr))
+			continue;
+
+		/* Exclusive with transport op "prepare_connect()" */
+		mutex_lock(&resource->conf_update);
+
+		if (!transport->class->ops.may_remove_path(path)) {
+			err = -EBUSY;
+			mutex_unlock(&resource->conf_update);
+			break;
+		}
+
+		set_bit(TR_UNREGISTERED, &path->flags);
+		/* Ensure flag visible before list manipulation. */
+		smp_wmb();
+
+		/* Exclusive with reading state, in particular remember_state_change() */
+		write_lock_irq(&resource->state_rwlock);
+		list_del_rcu(&path->list);
+		write_unlock_irq(&resource->state_rwlock);
+
+		mutex_unlock(&resource->conf_update);
+
+		transport->class->ops.remove_path(path);
+		notify_path(connection, path, NOTIFY_DESTROY);
+		/* Transport modules might use RCU on the path list. */
+		call_rcu(&path->rcu, drbd_reclaim_path);
 
-	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, i) {
-		struct drbd_device *device = peer_device->device;
-		device->send_cnt = 0;
-		device->recv_cnt = 0;
+		return NO_ERROR;
 	}
-	rcu_read_unlock();
 
-	rv = conn_request_state(connection, NS(conn, C_UNCONNECTED), CS_VERBOSE);
+	drbd_err(connection, "del_path() failed with %d\n", err);
+	drbd_msg_put_info(adm_ctx->reply_skb,
+			  err == -ENOENT ? "no such path" : "del_path on transport failed");
+	return ERR_INVALID_REQUEST;
+}
 
-	conn_reconfig_done(connection);
-	mutex_unlock(&adm_ctx.resource->adm_mutex);
-	drbd_adm_finish(&adm_ctx, info, rv);
-	return 0;
+static int drbd_adm_del_path(struct sk_buff *skb, struct genl_info *info)
+{
+	struct drbd_config_context adm_ctx;
+	enum drbd_ret_code retcode;
 
-fail:
-	free_crypto(&crypto);
-	kfree(new_net_conf);
+	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_CONNECTION);
+	if (!adm_ctx.reply_skb)
+		return retcode;
 
-	conn_reconfig_done(connection);
-	mutex_unlock(&adm_ctx.resource->adm_mutex);
-out:
+	if (mutex_lock_interruptible(&adm_ctx.resource->adm_mutex)) {
+		retcode = ERR_INTR;
+	} else {
+		retcode = adm_del_path(&adm_ctx, info);
+		mutex_unlock(&adm_ctx.resource->adm_mutex);
+	}
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 }
 
-static enum drbd_state_rv conn_try_disconnect(struct drbd_connection *connection, bool force)
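+/* Count read-only openers across all volumes of the resource. */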
+int drbd_open_ro_count(struct drbd_resource *resource)
 {
-	enum drbd_conns cstate;
-	enum drbd_state_rv rv;
+	struct drbd_device *device;
+	int vnr, open_ro_cnt = 0;
 
-repeat:
-	rv = conn_request_state(connection, NS(conn, C_DISCONNECTING),
-			force ? CS_HARD : 0);
+	read_lock_irq(&resource->state_rwlock);
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		if (!device->writable)
+			open_ro_cnt += device->open_cnt;
+	}
+	read_unlock_irq(&resource->state_rwlock);
+
+	return open_ro_cnt;
+}
+
+static enum drbd_state_rv conn_try_disconnect(struct drbd_connection *connection, bool force,
+					      const char *tag, struct sk_buff *reply_skb)
+{
+	struct drbd_resource *resource = connection->resource;
+	enum drbd_conn_state cstate;
+	enum drbd_state_rv rv;
+	enum chg_state_flags flags = (force ? CS_HARD : 0) | CS_VERBOSE;
+	const char *err_str = NULL;
+	long t;
 
+repeat:
+	rv = change_cstate_tag(connection, C_DISCONNECTING, flags, tag, &err_str);
 	switch (rv) {
-	case SS_NOTHING_TO_DO:
+	case SS_CW_FAILED_BY_PEER:
+	case SS_NEED_CONNECTION:
+		read_lock_irq(&resource->state_rwlock);
+		cstate = connection->cstate[NOW];
+		read_unlock_irq(&resource->state_rwlock);
+		if (cstate < C_CONNECTED)
+			goto repeat;
 		break;
+	case SS_NO_UP_TO_DATE_DISK:
+		if (resource->role[NOW] == R_PRIMARY)
+			break;
+		/* Most probably udev opened it read-only. That might happen
+		   if it was demoted very recently. Wait up to one second. */
+		t = wait_event_interruptible_timeout(resource->state_wait,
+						     drbd_open_ro_count(resource) == 0,
+						     HZ);
+		if (t <= 0)
+			break;
+		goto repeat;
 	case SS_ALREADY_STANDALONE:
-		return SS_SUCCESS;
-	case SS_PRIMARY_NOP:
-		/* Our state checking code wants to see the peer outdated. */
-		rv = conn_request_state(connection, NS2(conn, C_DISCONNECTING, pdsk, D_OUTDATED), 0);
-
-		if (rv == SS_OUTDATE_WO_CONN) /* lost connection before graceful disconnect succeeded */
-			rv = conn_request_state(connection, NS(conn, C_DISCONNECTING), CS_VERBOSE);
-
+		rv = SS_SUCCESS;
 		break;
-	case SS_CW_FAILED_BY_PEER:
-		spin_lock_irq(&connection->resource->req_lock);
-		cstate = connection->cstate;
-		spin_unlock_irq(&connection->resource->req_lock);
-		if (cstate <= C_WF_CONNECTION)
+	case SS_IS_DISKLESS:
+	case SS_LOWER_THAN_OUTDATED:
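+		/* A graceful disconnect is refused in these states; force it. */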
+		rv = change_cstate_tag(connection, C_DISCONNECTING, CS_HARD, tag, NULL);
+		break;
+	case SS_NO_QUORUM:
+		if (!(flags & CS_VERBOSE)) {
+			flags |= CS_VERBOSE;
 			goto repeat;
-		/* The peer probably wants to see us outdated. */
-		rv = conn_request_state(connection, NS2(conn, C_DISCONNECTING,
-							disk, D_OUTDATED), 0);
-		if (rv == SS_IS_DISKLESS || rv == SS_LOWER_THAN_OUTDATED) {
-			rv = conn_request_state(connection, NS(conn, C_DISCONNECTING),
-					CS_HARD);
 		}
 		break;
 	default:;
 		/* no special handling necessary */
 	}
 
-	if (rv >= SS_SUCCESS) {
-		enum drbd_state_rv rv2;
-		/* No one else can reconfigure the network while I am here.
-		 * The state handling only uses drbd_thread_stop_nowait(),
-		 * we want to really wait here until the receiver is no more.
-		 */
-		drbd_thread_stop(&connection->receiver);
-
-		/* Race breaker.  This additional state change request may be
-		 * necessary, if this was a forced disconnect during a receiver
-		 * restart.  We may have "killed" the receiver thread just
-		 * after drbd_receiver() returned.  Typically, we should be
-		 * C_STANDALONE already, now, and this becomes a no-op.
-		 */
-		rv2 = conn_request_state(connection, NS(conn, C_STANDALONE),
-				CS_VERBOSE | CS_HARD);
-		if (rv2 < SS_SUCCESS)
-			drbd_err(connection,
-				"unexpected rv2=%d in conn_try_disconnect()\n",
-				rv2);
-		/* Unlike in DRBD 9, the state engine has generated
-		 * NOTIFY_DESTROY events before clearing connection->net_conf. */
+	if (rv >= SS_SUCCESS)
+		wait_event_interruptible_timeout(resource->state_wait,
+						 connection->cstate[NOW] == C_STANDALONE,
+						 HZ);
+	if (err_str) {
+		drbd_msg_put_info(reply_skb, err_str);
+		kfree(err_str);
 	}
+
 	return rv;
 }
 
-int drbd_adm_disconnect(struct sk_buff *skb, struct genl_info *info)
+/* this can only be called immediately after a successful
+ * peer_try_disconnect, within the same resource->adm_mutex */
+static void del_connection(struct drbd_connection *connection, const char *tag)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_peer_device *peer_device;
+	enum drbd_state_rv rv2;
+	int vnr;
+
+	if (test_bit(C_UNREGISTERED, &connection->flags))
+		return;
+
+	/* No one else can reconfigure the network while I am here.
+	 * The state handling only uses drbd_thread_stop_nowait(),
+	 * we want to really wait here until the receiver is no more.
+	 */
+	drbd_thread_stop(&connection->receiver);
+
+	/* Race breaker.  This additional state change request may be
+	 * necessary, if this was a forced disconnect during a receiver
+	 * restart.  We may have "killed" the receiver thread just
+	 * after drbd_receiver() returned.  Typically, we should be
+	 * C_STANDALONE already, now, and this becomes a no-op.
+	 */
+	rv2 = change_cstate_tag(connection, C_STANDALONE, CS_VERBOSE | CS_HARD, tag, NULL);
+	if (rv2 < SS_SUCCESS)
+		drbd_err(connection,
+			"unexpected rv2=%d in del_connection()\n",
+			rv2);
+	/* Make sure the sender thread has actually stopped: state
+	 * handling only does drbd_thread_stop_nowait().
+	 */
+	drbd_thread_stop(&connection->sender);
+
+	mutex_lock(&resource->conf_update);
+	drbd_unregister_connection(connection);
+	mutex_unlock(&resource->conf_update);
+
+	/*
+	 * Flush the resource work queue to make sure that no more
+	 * events like state change notifications for this connection
+	 * are queued: we want the "destroy" event to come last.
+	 */
+	drbd_flush_workqueue(&resource->work);
+
+	mutex_lock(&notification_mutex);
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
+		notify_peer_device_state(NULL, 0, peer_device, NULL,
+					 NOTIFY_DESTROY | NOTIFY_CONTINUES);
+	notify_connection_state(NULL, 0, connection, NULL, NOTIFY_DESTROY);
+	mutex_unlock(&notification_mutex);
+	call_rcu(&connection->rcu, drbd_reclaim_connection);
+}
+
+static int adm_disconnect(struct sk_buff *skb, struct genl_info *info, bool destroy)
 {
 	struct drbd_config_context adm_ctx;
 	struct disconnect_parms parms;
 	struct drbd_connection *connection;
+	struct net *existing_net;
 	enum drbd_state_rv rv;
 	enum drbd_ret_code retcode;
-	int err;
+	const char *tag = destroy ? "del-peer" : "disconnect";
 
 	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_CONNECTION);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
-		goto fail;
 
-	connection = adm_ctx.connection;
 	memset(&parms, 0, sizeof(parms));
 	if (info->attrs[DRBD_NLA_DISCONNECT_PARMS]) {
-		err = disconnect_parms_from_attrs(&parms, info);
+		int err = disconnect_parms_from_attrs(&parms, info);
 		if (err) {
 			retcode = ERR_MANDATORY_TAG;
 			drbd_msg_put_info(adm_ctx.reply_skb, from_attrs_err_to_txt(err));
@@ -2753,55 +5163,114 @@ int drbd_adm_disconnect(struct sk_buff *skb, struct genl_info *info)
 		}
 	}
 
-	mutex_lock(&adm_ctx.resource->adm_mutex);
-	rv = conn_try_disconnect(connection, parms.force_disconnect);
-	mutex_unlock(&adm_ctx.resource->adm_mutex);
-	if (rv < SS_SUCCESS) {
-		drbd_adm_finish(&adm_ctx, info, rv);
-		return 0;
+	existing_net = drbd_net_assigned_to_connection(adm_ctx.connection);
+	if (existing_net && !net_eq(adm_ctx.net, existing_net)) {
+		drbd_msg_put_info(adm_ctx.reply_skb, "connection assigned to a different network namespace");
+		retcode = ERR_INVALID_REQUEST;
+		goto fail;
+	}
+
+	connection = adm_ctx.connection;
+	if (mutex_lock_interruptible(&adm_ctx.resource->adm_mutex)) {
+		retcode = ERR_INTR;
+		goto fail;
+	}
+	rv = conn_try_disconnect(connection, parms.force_disconnect, tag, adm_ctx.reply_skb);
+	if (rv >= SS_SUCCESS && destroy)
+		del_connection(connection, tag);
-	retcode = NO_ERROR;
+	if (rv < SS_SUCCESS)
+		retcode = (enum drbd_ret_code)rv;
+	else
+		retcode = NO_ERROR;
+	mutex_unlock(&adm_ctx.resource->adm_mutex);
  fail:
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 }
 
-void resync_after_online_grow(struct drbd_device *device)
+static int drbd_adm_disconnect(struct sk_buff *skb, struct genl_info *info)
 {
-	int iass; /* I am sync source */
+	return adm_disconnect(skb, info, false);
+}
 
-	drbd_info(device, "Resync of new storage after online grow\n");
-	if (device->state.role != device->state.peer)
-		iass = (device->state.role == R_PRIMARY);
-	else
-		iass = test_bit(RESOLVE_CONFLICTS, &first_peer_device(device)->connection->flags);
+static int drbd_adm_del_peer(struct sk_buff *skb, struct genl_info *info)
+{
+	return adm_disconnect(skb, info, true);
+}
 
-	if (iass)
-		drbd_start_resync(device, C_SYNC_SOURCE);
-	else
-		_drbd_request_state(device, NS(conn, C_WF_SYNC_UUID), CS_VERBOSE + CS_SERIALIZE);
+void resync_after_online_grow(struct drbd_peer_device *peer_device)
+{
+	struct drbd_connection *connection = peer_device->connection;
+	struct drbd_device *device = peer_device->device;
+	bool sync_source = false;
+
+	drbd_info(peer_device, "Resync of new storage after online grow\n");
+	if (device->resource->role[NOW] != connection->peer_role[NOW])
+		sync_source = (device->resource->role[NOW] == R_PRIMARY);
+	else if (connection->agreed_pro_version < 111)
+		sync_source = test_bit(RESOLVE_CONFLICTS,
+				&peer_device->connection->transport.flags);
+	else if (get_ldev(device)) {
+		/* multiple or no primaries, proto new enough, resolve by node-id */
+		s32 self_id = device->ldev->md.node_id;
+		s32 peer_id = peer_device->node_id;
+
+		put_ldev(device);
+		sync_source = self_id < peer_id;
+	}
+
+	if (!sync_source && connection->agreed_pro_version < 110) {
+		stable_change_repl_state(peer_device, L_WF_SYNC_UUID,
+					 CS_VERBOSE | CS_SERIALIZE, "online-grow");
+		return;
+	}
+	drbd_start_resync(peer_device, sync_source ? L_SYNC_SOURCE : L_SYNC_TARGET, "online-grow");
+}
+
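+/*
+ * Largest capacity the local backing device could provide.  Computed on
+ * a throw-away copy of the backing dev descriptor, because
+ * drbd_md_set_sector_offsets() stores the computed layout back into the
+ * descriptor, and the in-use offsets of the live ->ldev must stay
+ * untouched while the device is active.
+ */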
+sector_t drbd_local_max_size(struct drbd_device *device)
+{
+	struct drbd_backing_dev *tmp_bdev;
+	sector_t s;
+
+	tmp_bdev = kmalloc_obj(struct drbd_backing_dev, GFP_ATOMIC);
+	if (!tmp_bdev)
+		return 0;
+
+	*tmp_bdev = *device->ldev;
+	drbd_md_set_sector_offsets(tmp_bdev);
+	s = drbd_get_max_capacity(device, tmp_bdev, false);
+	kfree(tmp_bdev);
+
+	return s;
 }
 
-int drbd_adm_resize(struct sk_buff *skb, struct genl_info *info)
+static int drbd_adm_resize(struct sk_buff *skb, struct genl_info *info)
 {
 	struct drbd_config_context adm_ctx;
 	struct disk_conf *old_disk_conf, *new_disk_conf = NULL;
 	struct resize_parms rs;
 	struct drbd_device *device;
-	enum drbd_ret_code retcode;
 	enum determine_dev_size dd;
 	bool change_al_layout = false;
 	enum dds_flags ddsf;
 	sector_t u_size;
-	int err;
+	int err, retcode;
+	struct drbd_peer_device *peer_device;
+	bool resolve_by_node_id = true;
+	bool has_up_to_date_primary;
+	bool traditional_resize = false;
+	sector_t local_max_size;
 
 	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_MINOR);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
-		goto finish;
 
-	mutex_lock(&adm_ctx.resource->adm_mutex);
+	if (mutex_lock_interruptible(&adm_ctx.resource->adm_mutex)) {
+		retcode = ERR_INTR;
+		goto out_no_adm_mutex;
+	}
 	device = adm_ctx.device;
 	if (!get_ldev(device)) {
 		retcode = ERR_NO_DISK;
@@ -2820,20 +5289,58 @@ int drbd_adm_resize(struct sk_buff *skb, struct genl_info *info)
 		}
 	}
 
-	if (device->state.conn > C_CONNECTED) {
-		retcode = ERR_RESIZE_RESYNC;
+	for_each_peer_device(peer_device, device) {
+		if (peer_device->repl_state[NOW] > L_ESTABLISHED) {
+			retcode = ERR_RESIZE_RESYNC;
+			goto fail_ldev;
+		}
+	}
+
+	local_max_size = drbd_local_max_size(device);
+	if (rs.resize_size && local_max_size < (sector_t)rs.resize_size) {
+		drbd_err(device, "requested %llu sectors, backend seems only able to support %llu\n",
+			 (unsigned long long)(sector_t)rs.resize_size,
+			 (unsigned long long)local_max_size);
+		retcode = ERR_DISK_TOO_SMALL;
 		goto fail_ldev;
 	}
 
-	if (device->state.role == R_SECONDARY &&
-	    device->state.peer == R_SECONDARY) {
+	/* Maybe I could serve as sync source myself? */
+	has_up_to_date_primary =
+		device->resource->role[NOW] == R_PRIMARY &&
+		device->disk_state[NOW] == D_UP_TO_DATE;
+
+	if (!has_up_to_date_primary) {
+		for_each_peer_device(peer_device, device) {
+			/* ignore unless connection is fully established */
+			if (peer_device->repl_state[NOW] < L_ESTABLISHED)
+				continue;
+			if (peer_device->connection->agreed_pro_version < 111) {
+				resolve_by_node_id = false;
+				if (peer_device->connection->peer_role[NOW] == R_PRIMARY &&
+				    peer_device->disk_state[NOW] == D_UP_TO_DATE) {
+					has_up_to_date_primary = true;
+					break;
+				}
+			}
+		}
+	}
+
+	if (!has_up_to_date_primary && !resolve_by_node_id) {
 		retcode = ERR_NO_PRIMARY;
 		goto fail_ldev;
 	}
 
-	if (rs.no_resync && first_peer_device(device)->connection->agreed_pro_version < 93) {
-		retcode = ERR_NEED_APV_93;
-		goto fail_ldev;
+	for_each_peer_device(peer_device, device) {
+		struct drbd_connection *connection = peer_device->connection;
+		if (rs.no_resync &&
+		    connection->cstate[NOW] == C_CONNECTED &&
+		    connection->agreed_pro_version < 93) {
+			retcode = ERR_NEED_APV_93;
+			goto fail_ldev;
+		}
 	}
 
 	rcu_read_lock();
@@ -2856,21 +5363,21 @@ int drbd_adm_resize(struct sk_buff *skb, struct genl_info *info)
 			goto fail_ldev;
 		}
 
-		if (al_size_k < MD_32kB_SECT/2) {
+		if (al_size_k < (32768 >> 10)) {
 			retcode = ERR_MD_LAYOUT_TOO_SMALL;
 			goto fail_ldev;
 		}
 
+		/* Removed this pre-condition while merging from 8.4 to 9.0
 		if (device->state.conn != C_CONNECTED && !rs.resize_force) {
 			retcode = ERR_MD_LAYOUT_CONNECTED;
 			goto fail_ldev;
-		}
+		} */
 
 		change_al_layout = true;
 	}
 
-	if (device->ldev->known_size != drbd_get_capacity(device->ldev->backing_bdev))
-		device->ldev->known_size = drbd_get_capacity(device->ldev->backing_bdev);
+	device->ldev->known_size = drbd_get_capacity(device->ldev->backing_bdev);
 
 	if (new_disk_conf) {
 		mutex_lock(&device->resource->conf_update);
@@ -2883,9 +5390,17 @@ int drbd_adm_resize(struct sk_buff *skb, struct genl_info *info)
 		new_disk_conf = NULL;
 	}
 
-	ddsf = (rs.resize_force ? DDSF_FORCED : 0) | (rs.no_resync ? DDSF_NO_RESYNC : 0);
-	dd = drbd_determine_dev_size(device, ddsf, change_al_layout ? &rs : NULL);
-	drbd_md_sync(device);
+	ddsf = (rs.resize_force ? DDSF_ASSUME_UNCONNECTED_PEER_HAS_SPACE : 0)
+		| (rs.no_resync ? DDSF_NO_RESYNC : 0);
+
+	dd = change_cluster_wide_device_size(device, local_max_size, rs.resize_size, ddsf,
+					     change_al_layout ? &rs : NULL);
+	if (dd == DS_2PC_NOT_SUPPORTED) {
+		traditional_resize = true;
+		dd = drbd_determine_dev_size(device, 0, ddsf, change_al_layout ? &rs : NULL);
+	}
+
+	drbd_md_sync_if_dirty(device);
 	put_ldev(device);
 	if (dd == DS_ERROR) {
 		retcode = ERR_NOMEM_BITMAP;
@@ -2896,19 +5411,25 @@ int drbd_adm_resize(struct sk_buff *skb, struct genl_info *info)
 	} else if (dd == DS_ERROR_SHRINK) {
 		retcode = ERR_IMPLICIT_SHRINK;
 		goto fail;
+	} else if (dd == DS_2PC_ERR) {
+		retcode = SS_INTERRUPTED;
+		goto fail;
 	}
 
-	if (device->state.conn == C_CONNECTED) {
-		if (dd == DS_GREW)
-			set_bit(RESIZE_PENDING, &device->flags);
-
-		drbd_send_uuids(first_peer_device(device));
-		drbd_send_sizes(first_peer_device(device), 1, ddsf);
+	if (traditional_resize) {
+		for_each_peer_device(peer_device, device) {
+			if (peer_device->repl_state[NOW] == L_ESTABLISHED) {
+				if (dd == DS_GREW)
+					set_bit(RESIZE_PENDING, &peer_device->flags);
+				drbd_send_uuids(peer_device, 0, 0);
+				drbd_send_sizes(peer_device, rs.resize_size, ddsf);
+			}
+		}
 	}
 
  fail:
 	mutex_unlock(&adm_ctx.resource->adm_mutex);
- finish:
+ out_no_adm_mutex:
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 
@@ -2918,7 +5439,7 @@ int drbd_adm_resize(struct sk_buff *skb, struct genl_info *info)
 	goto fail;
 }
 
-int drbd_adm_resource_opts(struct sk_buff *skb, struct genl_info *info)
+static int drbd_adm_resource_opts(struct sk_buff *skb, struct genl_info *info)
 {
 	struct drbd_config_context adm_ctx;
 	enum drbd_ret_code retcode;
@@ -2928,298 +5449,558 @@ int drbd_adm_resource_opts(struct sk_buff *skb, struct genl_info *info)
 	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_RESOURCE);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
-		goto fail;
 
+	if (mutex_lock_interruptible(&adm_ctx.resource->adm_mutex)) {
+		retcode = ERR_INTR;
+		goto out;
+	}
 	res_opts = adm_ctx.resource->res_opts;
 	if (should_set_defaults(info))
 		set_res_opts_defaults(&res_opts);
 
-	err = res_opts_from_attrs(&res_opts, info);
+	err = res_opts_from_attrs_for_change(&res_opts, info);
 	if (err && err != -ENOMSG) {
 		retcode = ERR_MANDATORY_TAG;
 		drbd_msg_put_info(adm_ctx.reply_skb, from_attrs_err_to_txt(err));
 		goto fail;
 	}
 
-	mutex_lock(&adm_ctx.resource->adm_mutex);
-	err = set_resource_options(adm_ctx.resource, &res_opts);
+	if (res_opts.explicit_drbd8_compat) {
+		struct drbd_connection *connection;
+		int n_connections = 0;
+
+		for_each_connection(connection, adm_ctx.resource)
+			n_connections++;
+
+		if (n_connections > 1) {
+			drbd_msg_sprintf_info(adm_ctx.reply_skb,
+					      "drbd8 compat mode allows at most one peer");
+			retcode = ERR_INVALID_REQUEST;
+			goto fail;
+		}
+	}
+
+	if (res_opts.node_id != -1) {
+#ifdef CONFIG_DRBD_COMPAT_84
+		if (!res_opts.drbd8_compat_mode && res_opts.explicit_drbd8_compat)
+			atomic_inc(&nr_drbd8_devices);
+		else if (res_opts.drbd8_compat_mode && !res_opts.explicit_drbd8_compat)
+			atomic_dec(&nr_drbd8_devices);
+#endif
+		res_opts.drbd8_compat_mode = res_opts.explicit_drbd8_compat;
+	}
+
+	err = set_resource_options(adm_ctx.resource, &res_opts, "resource-options");
 	if (err) {
 		retcode = ERR_INVALID_REQUEST;
 		if (err == -ENOMEM)
 			retcode = ERR_NOMEM;
 	}
-	mutex_unlock(&adm_ctx.resource->adm_mutex);
 
 fail:
+	mutex_unlock(&adm_ctx.resource->adm_mutex);
+out:
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 }
 
-int drbd_adm_invalidate(struct sk_buff *skb, struct genl_info *info)
+static enum drbd_state_rv invalidate_resync(struct drbd_peer_device *peer_device)
+{
+	struct drbd_resource *resource = peer_device->connection->resource;
+	enum drbd_state_rv rv;
+
+	drbd_flush_workqueue(&peer_device->connection->sender_work);
+
+	rv = change_repl_state(peer_device, L_STARTING_SYNC_T, CS_SERIALIZE, "invalidate");
+
+	if (rv < SS_SUCCESS && rv != SS_NEED_CONNECTION)
+		rv = stable_change_repl_state(peer_device, L_STARTING_SYNC_T,
+			CS_VERBOSE | CS_SERIALIZE, "invalidate");
+
+	wait_event_interruptible(resource->state_wait,
+				 peer_device->repl_state[NOW] != L_STARTING_SYNC_T);
+
+	return rv;
+}
+
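+/*
+ * Mark the whole device out of sync without starting a resync.  This is
+ * also a minimal example of the state change bracket used throughout
+ * this file: begin_state_change() opens the section, __change_*()
+ * stages the new state, end_state_change() commits it, and
+ * abort_state_change() backs out with nothing applied.
+ */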
+static enum drbd_state_rv invalidate_no_resync(struct drbd_device *device)
+{
+	struct drbd_resource *resource = device->resource;
+	struct drbd_peer_device *peer_device;
+	struct drbd_connection *connection;
+	unsigned long irq_flags;
+	enum drbd_state_rv rv;
+
+	begin_state_change(resource, &irq_flags, CS_VERBOSE);
+	for_each_connection(connection, resource) {
+		peer_device = conn_peer_device(connection, device->vnr);
+		if (peer_device->repl_state[NOW] >= L_ESTABLISHED) {
+			abort_state_change(resource, &irq_flags);
+			return SS_UNKNOWN_ERROR;
+		}
+	}
+	__change_disk_state(device, D_INCONSISTENT);
+	rv = end_state_change(resource, &irq_flags, "invalidate");
+
+	if (rv >= SS_SUCCESS) {
+		drbd_bitmap_io(device, &drbd_bmio_set_all_n_write,
+			       "set_n_write from invalidate",
+			       BM_LOCK_CLEAR | BM_LOCK_BULK,
+			       NULL);
+	}
+
+	return rv;
+}
+
+static int drbd_adm_invalidate(struct sk_buff *skb, struct genl_info *info)
 {
 	struct drbd_config_context adm_ctx;
+	struct drbd_peer_device *sync_from_peer_device = NULL;
+	struct drbd_resource *resource;
 	struct drbd_device *device;
-	int retcode; /* enum drbd_ret_code rsp. enum drbd_state_rv */
+	int retcode = 0; /* enum drbd_ret_code resp. enum drbd_state_rv */
+	struct invalidate_parms inv = {
+		.sync_from_peer_node_id = -1,
+		.reset_bitmap = DRBD_INVALIDATE_RESET_BITMAP_DEF,
+	};
 
 	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_MINOR);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
-		goto out;
 
 	device = adm_ctx.device;
+
 	if (!get_ldev(device)) {
 		retcode = ERR_NO_DISK;
-		goto out;
+		goto out_no_ldev;
 	}
 
-	mutex_lock(&adm_ctx.resource->adm_mutex);
+	resource = device->resource;
+
+	if (mutex_lock_interruptible(&resource->adm_mutex)) {
+		retcode = ERR_INTR;
+		goto out_no_adm_mutex;
+	}
+
+	if (info->attrs[DRBD_NLA_INVALIDATE_PARMS]) {
+		int err;
+
+		err = invalidate_parms_from_attrs(&inv, info);
+		if (err) {
+			retcode = ERR_MANDATORY_TAG;
+			drbd_msg_put_info(adm_ctx.reply_skb, from_attrs_err_to_txt(err));
+			goto out_no_resume;
+		}
+
+		if (inv.sync_from_peer_node_id != -1) {
+			struct drbd_connection *connection =
+				drbd_connection_by_node_id(resource, inv.sync_from_peer_node_id);
+
+			if (!connection) {
+				retcode = ERR_INVALID_REQUEST;
+				drbd_msg_put_info(adm_ctx.reply_skb, "unknown peer node id");
+				goto out_no_resume;
+			}
+			sync_from_peer_device = conn_peer_device(connection, device->vnr);
+		}
+
+		if (!inv.reset_bitmap && sync_from_peer_device &&
+		    sync_from_peer_device->connection->agreed_pro_version < 120) {
+			retcode = ERR_APV_TOO_LOW;
+			drbd_msg_put_info(adm_ctx.reply_skb,
+					  "Need protocol level 120 to initiate bitmap based resync");
+			goto out_no_resume;
+		}
+	}
 
 	/* If there is still bitmap IO pending, probably because of a previous
 	 * resync just being finished, wait for it before requesting a new resync.
-	 * Also wait for it's after_state_ch(). */
-	drbd_suspend_io(device);
-	wait_event(device->misc_wait, !test_bit(BITMAP_IO, &device->flags));
-	drbd_flush_workqueue(&first_peer_device(device)->connection->sender_work);
-
-	/* If we happen to be C_STANDALONE R_SECONDARY, just change to
-	 * D_INCONSISTENT, and set all bits in the bitmap.  Otherwise,
-	 * try to start a resync handshake as sync target for full sync.
-	 */
-	if (device->state.conn == C_STANDALONE && device->state.role == R_SECONDARY) {
-		retcode = drbd_request_state(device, NS(disk, D_INCONSISTENT));
-		if (retcode >= SS_SUCCESS) {
-			if (drbd_bitmap_io(device, &drbd_bmio_set_n_write,
-				"set_n_write from invalidate", BM_LOCKED_MASK, NULL))
-				retcode = ERR_IO_MD_DISK;
+	 * Also wait for its after_state_ch(). */
+	drbd_suspend_io(device, READ_AND_WRITE);
+	wait_event(device->misc_wait, !atomic_read(&device->pending_bitmap_work.n));
+
+	if (sync_from_peer_device) {
+		if (inv.reset_bitmap) {
+			retcode = invalidate_resync(sync_from_peer_device);
+		} else {
+			retcode = change_repl_state(sync_from_peer_device, L_WF_BITMAP_T,
+					CS_VERBOSE | CS_CLUSTER_WIDE | CS_WAIT_COMPLETE |
+					CS_SERIALIZE, "invalidate");
 		}
-	} else
-		retcode = drbd_request_state(device, NS(conn, C_STARTING_SYNC_T));
-	drbd_resume_io(device);
-	mutex_unlock(&adm_ctx.resource->adm_mutex);
-	put_ldev(device);
-out:
-	drbd_adm_finish(&adm_ctx, info, retcode);
-	return 0;
-}
+	} else {
+		int retry = 3;
+		do {
+			struct drbd_connection *connection;
 
-static int drbd_adm_simple_request_state(struct sk_buff *skb, struct genl_info *info,
-		union drbd_state mask, union drbd_state val)
-{
-	struct drbd_config_context adm_ctx;
-	enum drbd_ret_code retcode;
+			for_each_connection(connection, resource) {
+				struct drbd_peer_device *peer_device;
+
+				peer_device = conn_peer_device(connection, device->vnr);
+				if (!peer_device)
+					continue;
+
+				if (inv.reset_bitmap) {
+					retcode = invalidate_resync(peer_device);
+				} else {
+					if (connection->agreed_pro_version < 120) {
+						retcode = ERR_APV_TOO_LOW;
+						continue;
+					}
+					retcode = change_repl_state(peer_device, L_WF_BITMAP_T,
+								CS_VERBOSE | CS_CLUSTER_WIDE |
+								CS_WAIT_COMPLETE | CS_SERIALIZE,
+								"invalidate");
+				}
+				if (retcode >= SS_SUCCESS)
+					goto out;
+			}
+			if (retcode != SS_NEED_CONNECTION)
+				break;
 
-	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_MINOR);
-	if (!adm_ctx.reply_skb)
-		return retcode;
-	if (retcode != NO_ERROR)
-		goto out;
+			retcode = invalidate_no_resync(device);
+		} while (retcode == SS_UNKNOWN_ERROR && retry--);
+	}
 
-	mutex_lock(&adm_ctx.resource->adm_mutex);
-	retcode = drbd_request_state(adm_ctx.device, mask, val);
-	mutex_unlock(&adm_ctx.resource->adm_mutex);
 out:
+	drbd_resume_io(device);
+out_no_resume:
+	mutex_unlock(&resource->adm_mutex);
+out_no_adm_mutex:
+	put_ldev(device);
+out_no_ldev:
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 }
 
-static int drbd_bmio_set_susp_al(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local)
+static int drbd_bmio_set_susp_al(struct drbd_device *device, struct drbd_peer_device *peer_device)
 {
 	int rv;
 
 	rv = drbd_bmio_set_n_write(device, peer_device);
-	drbd_suspend_al(device);
+	drbd_try_suspend_al(device);
 	return rv;
 }
 
-int drbd_adm_invalidate_peer(struct sk_buff *skb, struct genl_info *info)
+static int full_sync_from_peer(struct drbd_peer_device *peer_device)
+{
+	struct drbd_device *device = peer_device->device;
+	struct drbd_resource *resource = device->resource;
+	int retcode; /* enum drbd_ret_code resp. enum drbd_state_rv */
+
+	retcode = stable_change_repl_state(peer_device, L_STARTING_SYNC_S, CS_SERIALIZE,
+			"invalidate-remote");
+	if (retcode < SS_SUCCESS) {
+		if (retcode == SS_NEED_CONNECTION && resource->role[NOW] == R_PRIMARY) {
+			/* The peer will get a resync upon connect anyways.
+			 * Just make that into a full resync. */
+			retcode = change_peer_disk_state(peer_device, D_INCONSISTENT,
+					CS_VERBOSE | CS_WAIT_COMPLETE | CS_SERIALIZE,
+					"invalidate-remote");
+			if (retcode >= SS_SUCCESS) {
+				if (drbd_bitmap_io(device, &drbd_bmio_set_susp_al,
+						   "set_n_write from invalidate_peer",
+						   BM_LOCK_CLEAR | BM_LOCK_BULK, peer_device))
+					retcode = ERR_IO_MD_DISK;
+			}
+		} else {
+			retcode = stable_change_repl_state(peer_device, L_STARTING_SYNC_S,
+					CS_VERBOSE | CS_SERIALIZE, "invalidate-remote");
+		}
+	}
+
+	return retcode;
+}
+
+static int drbd_adm_invalidate_peer(struct sk_buff *skb, struct genl_info *info)
 {
 	struct drbd_config_context adm_ctx;
-	int retcode; /* drbd_ret_code, drbd_state_rv */
+	struct drbd_peer_device *peer_device;
+	struct drbd_resource *resource;
 	struct drbd_device *device;
+	int retcode; /* enum drbd_ret_code resp. enum drbd_state_rv */
+	struct invalidate_peer_parms inv = {
+		.p_reset_bitmap = DRBD_INVALIDATE_RESET_BITMAP_DEF,
+	};
 
-	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_MINOR);
+	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_PEER_DEVICE);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
-		goto out;
 
-	device = adm_ctx.device;
+	peer_device = adm_ctx.peer_device;
+	device = peer_device->device;
+	resource = device->resource;
+
 	if (!get_ldev(device)) {
 		retcode = ERR_NO_DISK;
 		goto out;
 	}
 
-	mutex_lock(&adm_ctx.resource->adm_mutex);
+	if (mutex_lock_interruptible(&resource->adm_mutex)) {
+		retcode = ERR_INTR;
+		goto out_no_adm_mutex;
+	}
+
+	if (info->attrs[DRBD_NLA_INVAL_PEER_PARAMS]) {
+		int err;
+
+		err = invalidate_peer_parms_from_attrs(&inv, info);
+		if (err) {
+			retcode = ERR_MANDATORY_TAG;
+			drbd_msg_put_info(adm_ctx.reply_skb, from_attrs_err_to_txt(err));
+			goto out_unlock;
+		}
+		if (!inv.p_reset_bitmap && peer_device->connection->agreed_pro_version < 120) {
+			retcode = ERR_APV_TOO_LOW;
+			drbd_msg_put_info(adm_ctx.reply_skb,
+					  "Need protocol level 120 to initiate bitmap based resync");
+			goto out_unlock;
+		}
+	}
+
+	drbd_suspend_io(device, READ_AND_WRITE);
+	wait_event(device->misc_wait, !atomic_read(&device->pending_bitmap_work.n));
+	drbd_flush_workqueue(&peer_device->connection->sender_work);
 
-	/* If there is still bitmap IO pending, probably because of a previous
-	 * resync just being finished, wait for it before requesting a new resync.
-	 * Also wait for it's after_state_ch(). */
-	drbd_suspend_io(device);
-	wait_event(device->misc_wait, !test_bit(BITMAP_IO, &device->flags));
-	drbd_flush_workqueue(&first_peer_device(device)->connection->sender_work);
-
-	/* If we happen to be C_STANDALONE R_PRIMARY, just set all bits
-	 * in the bitmap.  Otherwise, try to start a resync handshake
-	 * as sync source for full sync.
-	 */
-	if (device->state.conn == C_STANDALONE && device->state.role == R_PRIMARY) {
-		/* The peer will get a resync upon connect anyways. Just make that
-		   into a full resync. */
-		retcode = drbd_request_state(device, NS(pdsk, D_INCONSISTENT));
-		if (retcode >= SS_SUCCESS) {
-			if (drbd_bitmap_io(device, &drbd_bmio_set_susp_al,
-				"set_n_write from invalidate_peer",
-				BM_LOCKED_SET_ALLOWED, NULL))
-				retcode = ERR_IO_MD_DISK;
-		}
-	} else
-		retcode = drbd_request_state(device, NS(conn, C_STARTING_SYNC_S));
+	if (inv.p_reset_bitmap) {
+		retcode = full_sync_from_peer(peer_device);
+	} else {
+		retcode = change_repl_state(peer_device, L_WF_BITMAP_S,
+				CS_VERBOSE | CS_CLUSTER_WIDE | CS_WAIT_COMPLETE | CS_SERIALIZE,
+				"invalidate-remote");
+	}
 	drbd_resume_io(device);
-	mutex_unlock(&adm_ctx.resource->adm_mutex);
+
+out_unlock:
+	mutex_unlock(&resource->adm_mutex);
+out_no_adm_mutex:
 	put_ldev(device);
 out:
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 }
 
-int drbd_adm_pause_sync(struct sk_buff *skb, struct genl_info *info)
+static int drbd_adm_pause_sync(struct sk_buff *skb, struct genl_info *info)
 {
 	struct drbd_config_context adm_ctx;
+	struct drbd_peer_device *peer_device;
 	enum drbd_ret_code retcode;
 
-	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_MINOR);
+	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_PEER_DEVICE);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
+
+	if (mutex_lock_interruptible(&adm_ctx.resource->adm_mutex)) {
+		retcode = ERR_INTR;
 		goto out;
+	}
 
-	mutex_lock(&adm_ctx.resource->adm_mutex);
-	if (drbd_request_state(adm_ctx.device, NS(user_isp, 1)) == SS_NOTHING_TO_DO)
+	peer_device = adm_ctx.peer_device;
+	if (change_resync_susp_user(peer_device, true,
+			CS_VERBOSE | CS_WAIT_COMPLETE | CS_SERIALIZE) == SS_NOTHING_TO_DO)
 		retcode = ERR_PAUSE_IS_SET;
+
 	mutex_unlock(&adm_ctx.resource->adm_mutex);
-out:
+ out:
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 }
 
-int drbd_adm_resume_sync(struct sk_buff *skb, struct genl_info *info)
+static int drbd_adm_resume_sync(struct sk_buff *skb, struct genl_info *info)
 {
 	struct drbd_config_context adm_ctx;
-	union drbd_dev_state s;
+	struct drbd_peer_device *peer_device;
 	enum drbd_ret_code retcode;
 
-	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_MINOR);
+	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_PEER_DEVICE);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
+
+	if (mutex_lock_interruptible(&adm_ctx.resource->adm_mutex)) {
+		retcode = ERR_INTR;
 		goto out;
+	}
+
+	peer_device = adm_ctx.peer_device;
+	if (change_resync_susp_user(peer_device, false,
+			CS_VERBOSE | CS_WAIT_COMPLETE | CS_SERIALIZE) == SS_NOTHING_TO_DO) {
 
-	mutex_lock(&adm_ctx.resource->adm_mutex);
-	if (drbd_request_state(adm_ctx.device, NS(user_isp, 0)) == SS_NOTHING_TO_DO) {
-		s = adm_ctx.device->state;
-		if (s.conn == C_PAUSED_SYNC_S || s.conn == C_PAUSED_SYNC_T) {
-			retcode = s.aftr_isp ? ERR_PIC_AFTER_DEP :
-				  s.peer_isp ? ERR_PIC_PEER_DEP : ERR_PAUSE_IS_CLEAR;
+		if (peer_device->repl_state[NOW] == L_PAUSED_SYNC_S ||
+		    peer_device->repl_state[NOW] == L_PAUSED_SYNC_T) {
+			if (peer_device->resync_susp_dependency[NOW])
+				retcode = ERR_PIC_AFTER_DEP;
+			else if (peer_device->resync_susp_peer[NOW])
+				retcode = ERR_PIC_PEER_DEP;
+			else
+				retcode = ERR_PAUSE_IS_CLEAR;
 		} else {
 			retcode = ERR_PAUSE_IS_CLEAR;
 		}
 	}
+
 	mutex_unlock(&adm_ctx.resource->adm_mutex);
-out:
+ out:
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 }
 
-int drbd_adm_suspend_io(struct sk_buff *skb, struct genl_info *info)
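+/*
+ * wait_event() predicate: IO has drained once no local requests are in
+ * flight and no application requests are still waiting for a peer ack
+ * on any peer device.
+ */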
+static bool io_drained(struct drbd_device *device)
 {
-	return drbd_adm_simple_request_state(skb, info, NS(susp, 1));
+	struct drbd_peer_device *peer_device;
+	bool drained = true;
+
+	if (atomic_read(&device->local_cnt))
+		return false;
+
+	rcu_read_lock();
+	for_each_peer_device_rcu(peer_device, device) {
+		if (atomic_read(&peer_device->ap_pending_cnt)) {
+			drained = false;
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return drained;
 }
 
-int drbd_adm_resume_io(struct sk_buff *skb, struct genl_info *info)
+static int drbd_adm_suspend_io(struct sk_buff *skb, struct genl_info *info)
 {
 	struct drbd_config_context adm_ctx;
+	struct drbd_resource *resource;
 	struct drbd_device *device;
-	int retcode; /* enum drbd_ret_code rsp. enum drbd_state_rv */
+	int retcode, vnr, err = 0;
+	struct suspend_io_parms params = {
+		.bdev_freeze = true,
+	};
 
 	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_MINOR);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
-		goto out;
+	resource = adm_ctx.device->resource;
 
-	mutex_lock(&adm_ctx.resource->adm_mutex);
-	device = adm_ctx.device;
-	if (test_bit(NEW_CUR_UUID, &device->flags)) {
-		if (get_ldev_if_state(device, D_ATTACHING)) {
-			drbd_uuid_new_current(device);
-			put_ldev(device);
-		} else {
-			/* This is effectively a multi-stage "forced down".
-			 * The NEW_CUR_UUID bit is supposedly only set, if we
-			 * lost the replication connection, and are configured
-			 * to freeze IO and wait for some fence-peer handler.
-			 * So we still don't have a replication connection.
-			 * And now we don't have a local disk either.  After
-			 * resume, we will fail all pending and new IO, because
-			 * we don't have any data anymore.  Which means we will
-			 * eventually be able to terminate all users of this
-			 * device, and then take it down.  By bumping the
-			 * "effective" data uuid, we make sure that you really
-			 * need to tear down before you reconfigure, we will
-			 * the refuse to re-connect or re-attach (because no
-			 * matching real data uuid exists).
-			 */
-			u64 val;
-			get_random_bytes(&val, sizeof(u64));
-			drbd_set_ed_uuid(device, val);
-			drbd_warn(device, "Resumed without access to data; please tear down before attempting to re-configure.\n");
+	if (info->attrs[DRBD_NLA_SUSPEND_IO_PARAMS]) {
+		err = suspend_io_parms_from_attrs(&params, info);
+		if (err) {
+			retcode = ERR_MANDATORY_TAG;
+			drbd_msg_put_info(adm_ctx.reply_skb, from_attrs_err_to_txt(err));
+			goto out;
 		}
-		clear_bit(NEW_CUR_UUID, &device->flags);
 	}
-	drbd_suspend_io(device);
-	retcode = drbd_request_state(device, NS3(susp, 0, susp_nod, 0, susp_fen, 0));
-	if (retcode == SS_SUCCESS) {
-		if (device->state.conn < C_CONNECTED)
-			tl_clear(first_peer_device(device)->connection);
-		if (device->state.disk == D_DISKLESS || device->state.disk == D_FAILED)
-			tl_restart(first_peer_device(device)->connection, FAIL_FROZEN_DISK_IO);
+
+	if (mutex_lock_interruptible(&resource->adm_mutex)) {
+		retcode = ERR_INTR;
+		goto out;
+	}
+
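+	/*
+	 * Track successful freezes with BDEV_FROZEN so that unwinding,
+	 * via out_thaw here or via resume-io later, only thaws devices
+	 * that were actually frozen.
+	 */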
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		if (params.bdev_freeze && !test_bit(BDEV_FROZEN, &device->flags)) {
+			err = bdev_freeze(device->vdisk->part0);
+			if (err)
+				goto out_thaw;
+
+			set_bit(BDEV_FROZEN, &device->flags);
+		}
+	}
+
+	retcode = stable_state_change(resource, change_io_susp_user(resource, true,
+						CS_VERBOSE | CS_WAIT_COMPLETE | CS_SERIALIZE));
+	mutex_unlock(&resource->adm_mutex);
+	if (retcode < SS_SUCCESS)
+		goto out;
+
+	idr_for_each_entry(&resource->devices, device, vnr)
+		wait_event_interruptible(device->misc_wait, io_drained(device));
+out:
+	drbd_adm_finish(&adm_ctx, info, retcode);
+	return 0;
+out_thaw:
+	idr_for_each_entry(&resource->devices, device, vnr)
+		if (test_and_clear_bit(BDEV_FROZEN, &device->flags))
+			bdev_thaw(device->vdisk->part0);
+
+	mutex_unlock(&resource->adm_mutex);
+	drbd_adm_finish(&adm_ctx, info, retcode);
+	return err;
+}
+
+static int drbd_adm_resume_io(struct sk_buff *skb, struct genl_info *info)
+{
+	struct drbd_config_context adm_ctx;
+	struct drbd_connection *connection;
+	struct drbd_resource *resource;
+	struct drbd_device *device;
+	unsigned long irq_flags;
+	int vnr, retcode; /* enum drbd_ret_code resp. enum drbd_state_rv */
+
+	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_MINOR);
+	if (!adm_ctx.reply_skb)
+		return retcode;
+
+	if (mutex_lock_interruptible(&adm_ctx.resource->adm_mutex)) {
+		retcode = ERR_INTR;
+		goto out;
 	}
+	device = adm_ctx.device;
+	resource = device->resource;
+	if (test_and_clear_bit(NEW_CUR_UUID, &device->flags))
+		drbd_uuid_new_current(device, false);
+	drbd_suspend_io(device, READ_AND_WRITE);
+	begin_state_change(resource, &irq_flags, CS_VERBOSE | CS_WAIT_COMPLETE | CS_SERIALIZE);
+	__change_io_susp_user(resource, false);
+	__change_io_susp_no_data(resource, false);
+	for_each_connection(connection, resource)
+		__change_io_susp_fencing(connection, false);
+
+	__change_io_susp_quorum(resource, false);
+	retcode = end_state_change(resource, &irq_flags, "resume-io");
 	drbd_resume_io(device);
+
+	idr_for_each_entry(&resource->devices, device, vnr)
+		if (test_and_clear_bit(BDEV_FROZEN, &device->flags))
+			bdev_thaw(device->vdisk->part0);
+
 	mutex_unlock(&adm_ctx.resource->adm_mutex);
-out:
+ out:
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 }
 
-int drbd_adm_outdate(struct sk_buff *skb, struct genl_info *info)
+static int drbd_adm_outdate(struct sk_buff *skb, struct genl_info *info)
 {
-	return drbd_adm_simple_request_state(skb, info, NS(disk, D_OUTDATED));
+	struct drbd_config_context adm_ctx;
+	enum drbd_ret_code retcode;
+
+	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_MINOR);
+	if (!adm_ctx.reply_skb)
+		return retcode;
+	if (mutex_lock_interruptible(&adm_ctx.resource->adm_mutex)) {
+		retcode = ERR_INTR;
+	} else {
+		retcode = stable_state_change(adm_ctx.device->resource,
+			change_disk_state(adm_ctx.device, D_OUTDATED,
+				  CS_VERBOSE | CS_WAIT_COMPLETE | CS_SERIALIZE, "outdate", NULL));
+		mutex_unlock(&adm_ctx.resource->adm_mutex);
+	}
+	drbd_adm_finish(&adm_ctx, info, retcode);
+	return 0;
 }
 
 static int nla_put_drbd_cfg_context(struct sk_buff *skb,
 				    struct drbd_resource *resource,
 				    struct drbd_connection *connection,
-				    struct drbd_device *device)
+				    struct drbd_device *device,
+				    struct drbd_path *path)
 {
 	struct nlattr *nla;
 	nla = nla_nest_start_noflag(skb, DRBD_NLA_CFG_CONTEXT);
 	if (!nla)
 		goto nla_put_failure;
-	if (device &&
-	    nla_put_u32(skb, T_ctx_volume, device->vnr))
-		goto nla_put_failure;
-	if (nla_put_string(skb, T_ctx_resource_name, resource->name))
-		goto nla_put_failure;
+	if (device)
+		nla_put_u32(skb, T_ctx_volume, device->vnr);
+	if (resource)
+		nla_put_string(skb, T_ctx_resource_name, resource->name);
 	if (connection) {
-		if (connection->my_addr_len &&
-		    nla_put(skb, T_ctx_my_addr, connection->my_addr_len, &connection->my_addr))
-			goto nla_put_failure;
-		if (connection->peer_addr_len &&
-		    nla_put(skb, T_ctx_peer_addr, connection->peer_addr_len, &connection->peer_addr))
-			goto nla_put_failure;
+		nla_put_u32(skb, T_ctx_peer_node_id, connection->peer_node_id);
+		rcu_read_lock();
+		if (connection->transport.net_conf)
+			nla_put_string(skb, T_ctx_conn_name, connection->transport.net_conf->name);
+		rcu_read_unlock();
+	}
+	if (path) {
+		nla_put(skb, T_ctx_my_addr, path->my_addr_len, &path->my_addr);
+		nla_put(skb, T_ctx_peer_addr, path->peer_addr_len, &path->peer_addr);
 	}
 	nla_nest_end(skb, nla);
 	return 0;
@@ -3250,7 +6031,7 @@ static struct nlattr *find_cfg_context_attr(const struct nlmsghdr *nlh, int attr
 
 static void resource_to_info(struct resource_info *, struct drbd_resource *);
 
-int drbd_adm_dump_resources(struct sk_buff *skb, struct netlink_callback *cb)
+static int drbd_adm_dump_resources(struct sk_buff *skb, struct netlink_callback *cb)
 {
 	struct drbd_genlmsghdr *dh;
 	struct drbd_resource *resource;
@@ -3285,7 +6066,7 @@ int drbd_adm_dump_resources(struct sk_buff *skb, struct netlink_callback *cb)
 		goto out;
 	dh->minor = -1U;
 	dh->ret_code = NO_ERROR;
-	err = nla_put_drbd_cfg_context(skb, resource, NULL, NULL);
+	err = nla_put_drbd_cfg_context(skb, resource, NULL, NULL, NULL);
 	if (err)
 		goto out;
 	err = res_opts_to_skb(skb, &resource->res_opts, !capable(CAP_SYS_ADMIN));
@@ -3321,16 +6102,18 @@ static void device_to_statistics(struct device_statistics *s,
 		int n;
 
 		spin_lock_irq(&md->uuid_lock);
-		s->dev_current_uuid = md->uuid[UI_CURRENT];
-		BUILD_BUG_ON(sizeof(s->history_uuids) < UI_HISTORY_END - UI_HISTORY_START + 1);
-		for (n = 0; n < UI_HISTORY_END - UI_HISTORY_START + 1; n++)
-			history_uuids[n] = md->uuid[UI_HISTORY_START + n];
-		for (; n < HISTORY_UUIDS; n++)
-			history_uuids[n] = 0;
-		s->history_uuids_len = HISTORY_UUIDS;
+		s->dev_current_uuid = md->current_uuid;
+		BUILD_BUG_ON(sizeof(s->history_uuids) != sizeof(md->history_uuids));
+		for (n = 0; n < ARRAY_SIZE(md->history_uuids); n++)
+			history_uuids[n] = md->history_uuids[n];
+		s->history_uuids_len = sizeof(s->history_uuids);
 		spin_unlock_irq(&md->uuid_lock);
 
 		s->dev_disk_flags = md->flags;
+		/* Originally, this used the bdi congestion framework,
+		 * but that was removed in Linux 5.18,
+		 * so just never report the lower device as congested. */
+		s->dev_lower_blocked = false;
 		put_ldev(device);
 	}
 	s->dev_size = get_capacity(device->vdisk);
@@ -3338,10 +6121,11 @@ static void device_to_statistics(struct device_statistics *s,
 	s->dev_write = device->writ_cnt;
 	s->dev_al_writes = device->al_writ_cnt;
 	s->dev_bm_writes = device->bm_writ_cnt;
-	s->dev_upper_pending = atomic_read(&device->ap_bio_cnt);
+	s->dev_upper_pending = atomic_read(&device->ap_bio_cnt[READ]) +
+		atomic_read(&device->ap_bio_cnt[WRITE]);
 	s->dev_lower_pending = atomic_read(&device->local_cnt);
 	s->dev_al_suspended = test_bit(AL_SUSPENDED, &device->flags);
-	s->dev_exposed_data_uuid = device->ed_uuid;
+	s->dev_exposed_data_uuid = device->exposed_data_uuid;
 }
 
 static int put_resource_in_arg0(struct netlink_callback *cb, int holder_nr)
@@ -3355,13 +6139,12 @@ static int put_resource_in_arg0(struct netlink_callback *cb, int holder_nr)
 	return 0;
 }
 
-int drbd_adm_dump_devices_done(struct netlink_callback *cb) {
+static int drbd_adm_dump_devices_done(struct netlink_callback *cb)
+{
 	return put_resource_in_arg0(cb, 7);
 }
 
-static void device_to_info(struct device_info *, struct drbd_device *);
-
-int drbd_adm_dump_devices(struct sk_buff *skb, struct netlink_callback *cb)
+static int drbd_adm_dump_devices(struct sk_buff *skb, struct netlink_callback *cb)
 {
 	struct nlattr *resource_filter;
 	struct drbd_resource *resource;
@@ -3373,9 +6156,11 @@ int drbd_adm_dump_devices(struct sk_buff *skb, struct netlink_callback *cb)
 	struct idr *idr_to_search;
 
 	resource = (struct drbd_resource *)cb->args[0];
+
+	rcu_read_lock();
 	if (!cb->args[0] && !cb->args[1]) {
 		resource_filter = find_cfg_context_attr(cb->nlh, T_ctx_resource_name);
-		if (resource_filter) {
+		if (!IS_ERR_OR_NULL(resource_filter)) {
 			retcode = ERR_RES_NOT_KNOWN;
 			resource = drbd_find_resource(nla_data(resource_filter));
 			if (!resource)
@@ -3384,7 +6169,6 @@ int drbd_adm_dump_devices(struct sk_buff *skb, struct netlink_callback *cb)
 		}
 	}
 
-	rcu_read_lock();
 	minor = cb->args[1];
 	idr_to_search = resource ? &resource->devices : &drbd_devices;
 	device = idr_get_next(idr_to_search, &minor);
@@ -3410,7 +6194,7 @@ int drbd_adm_dump_devices(struct sk_buff *skb, struct netlink_callback *cb)
 	dh->minor = -1U;
 	if (retcode == NO_ERROR) {
 		dh->minor = device->minor;
-		err = nla_put_drbd_cfg_context(skb, device->resource, NULL, device);
+		err = nla_put_drbd_cfg_context(skb, device->resource, NULL, device, NULL);
 		if (err)
 			goto out;
 		if (get_ldev(device)) {
@@ -3422,6 +6206,9 @@ int drbd_adm_dump_devices(struct sk_buff *skb, struct netlink_callback *cb)
 			if (err)
 				goto out;
 		}
+		err = device_conf_to_skb(skb, &device->device_conf, !capable(CAP_SYS_ADMIN));
+		if (err)
+			goto out;
 		device_to_info(&device_info, device);
 		err = device_info_to_skb(skb, &device_info, !capable(CAP_SYS_ADMIN));
 		if (err)
@@ -3443,14 +6230,47 @@ int drbd_adm_dump_devices(struct sk_buff *skb, struct netlink_callback *cb)
 	return skb->len;
 }
 
-int drbd_adm_dump_connections_done(struct netlink_callback *cb)
+static int drbd_adm_dump_connections_done(struct netlink_callback *cb)
 {
 	return put_resource_in_arg0(cb, 6);
 }
 
+static int connection_paths_to_skb(struct sk_buff *skb, struct drbd_connection *connection)
+{
+	struct drbd_path *path;
+	struct nlattr *tla = nla_nest_start_noflag(skb, DRBD_NLA_PATH_PARMS);
+
+	if (!tla)
+		goto nla_put_failure;
+
+	/* The nest holds an array of such paths. */
+	rcu_read_lock();
+	list_for_each_entry_rcu(path, &connection->transport.paths, list) {
+		if (nla_put(skb, T_my_addr, path->my_addr_len, &path->my_addr) ||
+				nla_put(skb, T_peer_addr, path->peer_addr_len, &path->peer_addr)) {
+			rcu_read_unlock();
+			goto nla_put_failure;
+		}
+	}
+	rcu_read_unlock();
+	nla_nest_end(skb, tla);
+	return 0;
+
+nla_put_failure:
+	if (tla)
+		nla_nest_cancel(skb, tla);
+	return -EMSGSIZE;
+}
+
+static void connection_to_statistics(struct connection_statistics *s, struct drbd_connection *connection)
+{
+	s->conn_congested = test_bit(NET_CONGESTED, &connection->transport.flags);
+	s->ap_in_flight = atomic_read(&connection->ap_in_flight);
+	s->rs_in_flight = atomic_read(&connection->rs_in_flight);
+}
+
 enum { SINGLE_RESOURCE, ITERATE_RESOURCES };
 
-int drbd_adm_dump_connections(struct sk_buff *skb, struct netlink_callback *cb)
+static int drbd_adm_dump_connections(struct sk_buff *skb, struct netlink_callback *cb)
 {
 	struct nlattr *resource_filter;
 	struct drbd_resource *resource = NULL, *next_resource;
@@ -3464,7 +6284,7 @@ int drbd_adm_dump_connections(struct sk_buff *skb, struct netlink_callback *cb)
 	resource = (struct drbd_resource *)cb->args[0];
 	if (!cb->args[0]) {
 		resource_filter = find_cfg_context_attr(cb->nlh, T_ctx_resource_name);
-		if (resource_filter) {
+		if (!IS_ERR_OR_NULL(resource_filter)) {
 			retcode = ERR_RES_NOT_KNOWN;
 			resource = drbd_find_resource(nla_data(resource_filter));
 			if (!resource)
@@ -3484,7 +6304,13 @@ int drbd_adm_dump_connections(struct sk_buff *skb, struct netlink_callback *cb)
 
     next_resource:
 	rcu_read_unlock();
-	mutex_lock(&resource->conf_update);
+	if (mutex_lock_interruptible(&resource->conf_update)) {
+		kref_put(&resource->kref, drbd_destroy_resource);
+		resource = NULL;
+		retcode = ERR_INTR;
+		rcu_read_lock();
+		goto put_result;
+	}
 	rcu_read_lock();
 	if (cb->args[2]) {
 		for_each_connection_rcu(connection, resource)
@@ -3497,8 +6323,6 @@ int drbd_adm_dump_connections(struct sk_buff *skb, struct netlink_callback *cb)
 
 found_connection:
 	list_for_each_entry_continue_rcu(connection, &resource->connections, connections) {
-		if (!has_net_conf(connection))
-			continue;
 		retcode = NO_ERROR;
 		goto put_result;  /* only one iteration */
 	}
@@ -3537,20 +6361,21 @@ int drbd_adm_dump_connections(struct sk_buff *skb, struct netlink_callback *cb)
 	if (retcode == NO_ERROR) {
 		struct net_conf *net_conf;
 
-		err = nla_put_drbd_cfg_context(skb, resource, connection, NULL);
+		err = nla_put_drbd_cfg_context(skb, resource, connection, NULL, NULL);
 		if (err)
 			goto out;
-		net_conf = rcu_dereference(connection->net_conf);
+		net_conf = rcu_dereference(connection->transport.net_conf);
 		if (net_conf) {
 			err = net_conf_to_skb(skb, net_conf, !capable(CAP_SYS_ADMIN));
 			if (err)
 				goto out;
 		}
 		connection_to_info(&connection_info, connection);
+		err = connection_paths_to_skb(skb, connection);
+		if (err)
+			goto out;
 		err = connection_info_to_skb(skb, &connection_info, !capable(CAP_SYS_ADMIN));
 		if (err)
 			goto out;
-		connection_statistics.conn_congested = test_bit(NET_CONGESTED, &connection->flags);
+		connection_to_statistics(&connection_statistics, connection);
 		err = connection_statistics_to_skb(skb, &connection_statistics, !capable(CAP_SYS_ADMIN));
 		if (err)
 			goto out;
@@ -3568,51 +6393,92 @@ int drbd_adm_dump_connections(struct sk_buff *skb, struct netlink_callback *cb)
 	return skb->len;
 }
 
-enum mdf_peer_flag {
-	MDF_PEER_CONNECTED =	1 << 0,
-	MDF_PEER_OUTDATED =	1 << 1,
-	MDF_PEER_FENCING =	1 << 2,
-	MDF_PEER_FULL_SYNC =	1 << 3,
-};
-
 static void peer_device_to_statistics(struct peer_device_statistics *s,
-				      struct drbd_peer_device *peer_device)
+				      struct drbd_peer_device *pd)
 {
-	struct drbd_device *device = peer_device->device;
+	struct drbd_device *device = pd->device;
+	struct drbd_md *md;
+	struct drbd_peer_md *peer_md;
+	struct drbd_bitmap *bm;
+	unsigned long now = jiffies;
+	unsigned long rs_left = 0;
+	int i;
+
+	/* Userspace should get "future proof" units;
+	 * convert to sectors or milliseconds as appropriate. */
 
 	memset(s, 0, sizeof(*s));
-	s->peer_dev_received = device->recv_cnt;
-	s->peer_dev_sent = device->send_cnt;
-	s->peer_dev_pending = atomic_read(&device->ap_pending_cnt) +
-			      atomic_read(&device->rs_pending_cnt);
-	s->peer_dev_unacked = atomic_read(&device->unacked_cnt);
-	s->peer_dev_out_of_sync = drbd_bm_total_weight(device) << (BM_BLOCK_SHIFT - 9);
-	s->peer_dev_resync_failed = device->rs_failed << (BM_BLOCK_SHIFT - 9);
-	if (get_ldev(device)) {
-		struct drbd_md *md = &device->ldev->md;
+	s->peer_dev_received = pd->recv_cnt;
+	s->peer_dev_sent = pd->send_cnt;
+	s->peer_dev_pending = atomic_read(&pd->ap_pending_cnt) +
+			      atomic_read(&pd->rs_pending_cnt);
+	s->peer_dev_unacked = atomic_read(&pd->unacked_cnt);
+	s->peer_dev_uuid_flags = pd->uuid_flags;
+
+	/* Below are resync / verify / bitmap / metadata stats.
+	 * Without a local disk, we don't have those.
+	 */
+	if (!get_ldev(device))
+		return;
 
-		spin_lock_irq(&md->uuid_lock);
-		s->peer_dev_bitmap_uuid = md->uuid[UI_BITMAP];
-		spin_unlock_irq(&md->uuid_lock);
-		s->peer_dev_flags =
-			(drbd_md_test_flag(device->ldev, MDF_CONNECTED_IND) ?
-				MDF_PEER_CONNECTED : 0) +
-			(drbd_md_test_flag(device->ldev, MDF_CONSISTENT) &&
-			 !drbd_md_test_flag(device->ldev, MDF_WAS_UP_TO_DATE) ?
-				MDF_PEER_OUTDATED : 0) +
-			/* FIXME: MDF_PEER_FENCING? */
-			(drbd_md_test_flag(device->ldev, MDF_FULL_SYNC) ?
-				MDF_PEER_FULL_SYNC : 0);
-		put_ldev(device);
+	bm = device->bitmap;
+	s->peer_dev_out_of_sync = bm_bit_to_sect(bm, drbd_bm_total_weight(pd));
+
+	if (is_verify_state(pd, NOW)) {
+		rs_left = bm_bit_to_sect(bm, atomic64_read(&pd->ov_left));
+		s->peer_dev_ov_start_sector = pd->ov_start_sector;
+		s->peer_dev_ov_stop_sector = pd->ov_stop_sector;
+		s->peer_dev_ov_position = pd->ov_position;
+		s->peer_dev_ov_left = rs_left;
+		s->peer_dev_ov_skipped = bm_bit_to_sect(bm, pd->ov_skipped);
+	} else if (is_sync_state(pd, NOW)) {
+		rs_left = s->peer_dev_out_of_sync - bm_bit_to_sect(bm, pd->rs_failed);
+		s->peer_dev_resync_failed = bm_bit_to_sect(bm, pd->rs_failed);
+		s->peer_dev_rs_same_csum = bm_bit_to_sect(bm, pd->rs_same_csum);
+	}
+
+	if (rs_left) {
+		enum drbd_repl_state repl_state = pd->repl_state[NOW];
+
+		if (repl_state == L_SYNC_TARGET || repl_state == L_VERIFY_S)
+			s->peer_dev_rs_c_sync_rate = pd->c_sync_rate;
+
+		s->peer_dev_rs_total = bm_bit_to_sect(bm, pd->rs_total);
+
+		s->peer_dev_rs_dt_start_ms = jiffies_to_msecs(now - pd->rs_start);
+		s->peer_dev_rs_paused_ms = jiffies_to_msecs(pd->rs_paused);
+
+		i = (pd->rs_last_mark + 2) % DRBD_SYNC_MARKS;
+		s->peer_dev_rs_dt0_ms = jiffies_to_msecs(now - pd->rs_mark_time[i]);
+		s->peer_dev_rs_db0_sectors = bm_bit_to_sect(bm, pd->rs_mark_left[i]) - rs_left;
+
+		i = (pd->rs_last_mark + DRBD_SYNC_MARKS - 1) % DRBD_SYNC_MARKS;
+		s->peer_dev_rs_dt1_ms = jiffies_to_msecs(now - pd->rs_mark_time[i]);
+		s->peer_dev_rs_db1_sectors = bm_bit_to_sect(bm, pd->rs_mark_left[i]) - rs_left;
+
+		/* long term average:
+		 * dt = rs_dt_start_ms - rs_paused_ms;
+		 * db = rs_total - rs_left, which is
+		 *   rs_total - (ov_left ? ov_left : out_of_sync - rs_failed)
+		 */
 	}
+
+	md = &device->ldev->md;
+	peer_md = &md->peers[pd->node_id];
+
+	spin_lock_irq(&md->uuid_lock);
+	s->peer_dev_bitmap_uuid = peer_md->bitmap_uuid;
+	spin_unlock_irq(&md->uuid_lock);
+	s->peer_dev_flags = peer_md->flags;
+
+	put_ldev(device);
 }
 
-int drbd_adm_dump_peer_devices_done(struct netlink_callback *cb)
+static int drbd_adm_dump_peer_devices_done(struct netlink_callback *cb)
 {
 	return put_resource_in_arg0(cb, 9);
 }
 
-int drbd_adm_dump_peer_devices(struct sk_buff *skb, struct netlink_callback *cb)
+static int drbd_adm_dump_peer_devices(struct sk_buff *skb, struct netlink_callback *cb)
 {
 	struct nlattr *resource_filter;
 	struct drbd_resource *resource;
@@ -3623,9 +6489,11 @@ int drbd_adm_dump_peer_devices(struct sk_buff *skb, struct netlink_callback *cb)
 	struct idr *idr_to_search;
 
 	resource = (struct drbd_resource *)cb->args[0];
+
+	rcu_read_lock();
 	if (!cb->args[0] && !cb->args[1]) {
 		resource_filter = find_cfg_context_attr(cb->nlh, T_ctx_resource_name);
-		if (resource_filter) {
+		if (!IS_ERR_OR_NULL(resource_filter)) {
 			retcode = ERR_RES_NOT_KNOWN;
 			resource = drbd_find_resource(nla_data(resource_filter));
 			if (!resource)
@@ -3634,7 +6502,6 @@ int drbd_adm_dump_peer_devices(struct sk_buff *skb, struct netlink_callback *cb)
 		cb->args[0] = (long)resource;
 	}
 
-	rcu_read_lock();
 	minor = cb->args[1];
 	idr_to_search = resource ? &resource->devices : &drbd_devices;
 	device = idr_find(idr_to_search, minor);
@@ -3649,7 +6516,7 @@ int drbd_adm_dump_peer_devices(struct sk_buff *skb, struct netlink_callback *cb)
 		}
 	}
 	if (cb->args[2]) {
-		for_each_peer_device(peer_device, device)
+		for_each_peer_device_rcu(peer_device, device)
 			if (peer_device == (struct drbd_peer_device *)cb->args[2])
 				goto found_peer_device;
 		/* peer device was probably deleted */
@@ -3660,8 +6527,6 @@ int drbd_adm_dump_peer_devices(struct sk_buff *skb, struct netlink_callback *cb)
 
 found_peer_device:
 	list_for_each_entry_continue_rcu(peer_device, &device->peer_devices, peer_devices) {
-		if (!has_net_conf(peer_device->connection))
-			continue;
 		retcode = NO_ERROR;
 		goto put_result;  /* only one iteration */
 	}
@@ -3679,9 +6544,10 @@ int drbd_adm_dump_peer_devices(struct sk_buff *skb, struct netlink_callback *cb)
 	if (retcode == NO_ERROR) {
 		struct peer_device_info peer_device_info;
 		struct peer_device_statistics peer_device_statistics;
+		struct peer_device_conf *peer_device_conf;
 
 		dh->minor = minor;
-		err = nla_put_drbd_cfg_context(skb, device->resource, peer_device->connection, device);
+		err = nla_put_drbd_cfg_context(skb, device->resource,
+					       peer_device->connection, device, NULL);
 		if (err)
 			goto out;
 		peer_device_to_info(&peer_device_info, peer_device);
@@ -3692,6 +6558,13 @@ int drbd_adm_dump_peer_devices(struct sk_buff *skb, struct netlink_callback *cb)
 		err = peer_device_statistics_to_skb(skb, &peer_device_statistics, !capable(CAP_SYS_ADMIN));
 		if (err)
 			goto out;
+		peer_device_conf = rcu_dereference(peer_device->conf);
+		if (peer_device_conf) {
+			err = peer_device_conf_to_skb(skb, peer_device_conf, !capable(CAP_SYS_ADMIN));
+			if (err)
+				goto out;
+		}
+
 		cb->args[1] = minor;
 		cb->args[2] = (long)peer_device;
 	}
@@ -3704,362 +6577,150 @@ int drbd_adm_dump_peer_devices(struct sk_buff *skb, struct netlink_callback *cb)
 		return err;
 	return skb->len;
 }
-/*
- * Return the connection of @resource if @resource has exactly one connection.
- */
-static struct drbd_connection *the_only_connection(struct drbd_resource *resource)
-{
-	struct list_head *connections = &resource->connections;
 
-	if (list_empty(connections) || connections->next->next != connections)
-		return NULL;
-	return list_first_entry(&resource->connections, struct drbd_connection, connections);
+static int drbd_adm_dump_paths_done(struct netlink_callback *cb)
+{
+	return put_resource_in_arg0(cb, 10);
 }
 
-static int nla_put_status_info(struct sk_buff *skb, struct drbd_device *device,
-		const struct sib_info *sib)
+static int drbd_adm_dump_paths(struct sk_buff *skb, struct netlink_callback *cb)
 {
-	struct drbd_resource *resource = device->resource;
-	struct state_info *si = NULL; /* for sizeof(si->member); */
-	struct nlattr *nla;
-	int got_ldev;
-	int err = 0;
-	int exclude_sensitive;
-
-	/* If sib != NULL, this is drbd_bcast_event, which anyone can listen
-	 * to.  So we better exclude_sensitive information.
-	 *
-	 * If sib == NULL, this is drbd_adm_get_status, executed synchronously
-	 * in the context of the requesting user process. Exclude sensitive
-	 * information, unless current has superuser.
-	 *
-	 * NOTE: for drbd_adm_get_status_all(), this is a netlink dump, and
-	 * relies on the current implementation of netlink_dump(), which
-	 * executes the dump callback successively from netlink_recvmsg(),
-	 * always in the context of the receiving process */
-	exclude_sensitive = sib || !capable(CAP_SYS_ADMIN);
-
-	got_ldev = get_ldev(device);
-
-	/* We need to add connection name and volume number information still.
-	 * Minor number is in drbd_genlmsghdr. */
-	if (nla_put_drbd_cfg_context(skb, resource, the_only_connection(resource), device))
-		goto nla_put_failure;
-
-	if (res_opts_to_skb(skb, &device->resource->res_opts, exclude_sensitive))
-		goto nla_put_failure;
+	struct nlattr *resource_filter;
+	struct drbd_resource *resource = NULL, *next_resource;
+	struct drbd_connection *connection = NULL;
+	struct drbd_path *path = NULL;
+	int err = 0, retcode;
+	struct drbd_genlmsghdr *dh;
 
 	rcu_read_lock();
-	if (got_ldev) {
-		struct disk_conf *disk_conf;
-
-		disk_conf = rcu_dereference(device->ldev->disk_conf);
-		err = disk_conf_to_skb(skb, disk_conf, exclude_sensitive);
-	}
-	if (!err) {
-		struct net_conf *nc;
-
-		nc = rcu_dereference(first_peer_device(device)->connection->net_conf);
-		if (nc)
-			err = net_conf_to_skb(skb, nc, exclude_sensitive);
-	}
-	rcu_read_unlock();
-	if (err)
-		goto nla_put_failure;
-
-	nla = nla_nest_start_noflag(skb, DRBD_NLA_STATE_INFO);
-	if (!nla)
-		goto nla_put_failure;
-	if (nla_put_u32(skb, T_sib_reason, sib ? sib->sib_reason : SIB_GET_STATUS_REPLY) ||
-	    nla_put_u32(skb, T_current_state, device->state.i) ||
-	    nla_put_u64_0pad(skb, T_ed_uuid, device->ed_uuid) ||
-	    nla_put_u64_0pad(skb, T_capacity, get_capacity(device->vdisk)) ||
-	    nla_put_u64_0pad(skb, T_send_cnt, device->send_cnt) ||
-	    nla_put_u64_0pad(skb, T_recv_cnt, device->recv_cnt) ||
-	    nla_put_u64_0pad(skb, T_read_cnt, device->read_cnt) ||
-	    nla_put_u64_0pad(skb, T_writ_cnt, device->writ_cnt) ||
-	    nla_put_u64_0pad(skb, T_al_writ_cnt, device->al_writ_cnt) ||
-	    nla_put_u64_0pad(skb, T_bm_writ_cnt, device->bm_writ_cnt) ||
-	    nla_put_u32(skb, T_ap_bio_cnt, atomic_read(&device->ap_bio_cnt)) ||
-	    nla_put_u32(skb, T_ap_pending_cnt, atomic_read(&device->ap_pending_cnt)) ||
-	    nla_put_u32(skb, T_rs_pending_cnt, atomic_read(&device->rs_pending_cnt)))
-		goto nla_put_failure;
-
-	if (got_ldev) {
-		int err;
-
-		spin_lock_irq(&device->ldev->md.uuid_lock);
-		err = nla_put(skb, T_uuids, sizeof(si->uuids), device->ldev->md.uuid);
-		spin_unlock_irq(&device->ldev->md.uuid_lock);
-
-		if (err)
-			goto nla_put_failure;
-
-		if (nla_put_u32(skb, T_disk_flags, device->ldev->md.flags) ||
-		    nla_put_u64_0pad(skb, T_bits_total, drbd_bm_bits(device)) ||
-		    nla_put_u64_0pad(skb, T_bits_oos,
-				     drbd_bm_total_weight(device)))
-			goto nla_put_failure;
-		if (C_SYNC_SOURCE <= device->state.conn &&
-		    C_PAUSED_SYNC_T >= device->state.conn) {
-			if (nla_put_u64_0pad(skb, T_bits_rs_total,
-					     device->rs_total) ||
-			    nla_put_u64_0pad(skb, T_bits_rs_failed,
-					     device->rs_failed))
-				goto nla_put_failure;
-		}
-	}
-
-	if (sib) {
-		switch(sib->sib_reason) {
-		case SIB_SYNC_PROGRESS:
-		case SIB_GET_STATUS_REPLY:
-			break;
-		case SIB_STATE_CHANGE:
-			if (nla_put_u32(skb, T_prev_state, sib->os.i) ||
-			    nla_put_u32(skb, T_new_state, sib->ns.i))
-				goto nla_put_failure;
-			break;
-		case SIB_HELPER_POST:
-			if (nla_put_u32(skb, T_helper_exit_code,
-					sib->helper_exit_code))
-				goto nla_put_failure;
-			fallthrough;
-		case SIB_HELPER_PRE:
-			if (nla_put_string(skb, T_helper, sib->helper_name))
-				goto nla_put_failure;
-			break;
+	resource = (struct drbd_resource *)cb->args[0];
+	if (!cb->args[0]) {
+		resource_filter = find_cfg_context_attr(cb->nlh, T_ctx_resource_name);
+		if (!IS_ERR_OR_NULL(resource_filter)) {
+			retcode = ERR_RES_NOT_KNOWN;
+			resource = drbd_find_resource(nla_data(resource_filter));
+			if (!resource)
+				goto put_result;
+			cb->args[0] = (long)resource;
+			cb->args[1] = SINGLE_RESOURCE;
 		}
 	}
-	nla_nest_end(skb, nla);
-
-	if (0)
-nla_put_failure:
-		err = -EMSGSIZE;
-	if (got_ldev)
-		put_ldev(device);
-	return err;
-}
-
-int drbd_adm_get_status(struct sk_buff *skb, struct genl_info *info)
-{
-	struct drbd_config_context adm_ctx;
-	enum drbd_ret_code retcode;
-	int err;
-
-	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_MINOR);
-	if (!adm_ctx.reply_skb)
-		return retcode;
-	if (retcode != NO_ERROR)
-		goto out;
-
-	err = nla_put_status_info(adm_ctx.reply_skb, adm_ctx.device, NULL);
-	if (err) {
-		nlmsg_free(adm_ctx.reply_skb);
-		return err;
+	if (!resource) {
+		if (list_empty(&drbd_resources))
+			goto out;
+		resource = list_first_entry(&drbd_resources, struct drbd_resource, resources);
+		kref_get(&resource->kref);
+		cb->args[0] = (long)resource;
+		cb->args[1] = ITERATE_RESOURCES;
 	}
-out:
-	drbd_adm_finish(&adm_ctx, info, retcode);
-	return 0;
-}
-
-static int get_one_status(struct sk_buff *skb, struct netlink_callback *cb)
-{
-	struct drbd_device *device;
-	struct drbd_genlmsghdr *dh;
-	struct drbd_resource *pos = (struct drbd_resource *)cb->args[0];
-	struct drbd_resource *resource = NULL;
-	struct drbd_resource *tmp;
-	unsigned volume = cb->args[1];
-
-	/* Open coded, deferred, iteration:
-	 * for_each_resource_safe(resource, tmp, &drbd_resources) {
-	 *      connection = "first connection of resource or undefined";
-	 *	idr_for_each_entry(&resource->devices, device, i) {
-	 *	  ...
-	 *	}
-	 * }
-	 * where resource is cb->args[0];
-	 * and i is cb->args[1];
-	 *
-	 * cb->args[2] indicates if we shall loop over all resources,
-	 * or just dump all volumes of a single resource.
-	 *
-	 * This may miss entries inserted after this dump started,
-	 * or entries deleted before they are reached.
-	 *
-	 * We need to make sure the device won't disappear while
-	 * we are looking at it, and revalidate our iterators
-	 * on each iteration.
-	 */
 
-	/* synchronize with conn_create()/drbd_destroy_connection() */
+next_resource:
+	rcu_read_unlock();
+	mutex_lock(&resource->conf_update);
 	rcu_read_lock();
-	/* revalidate iterator position */
-	for_each_resource_rcu(tmp, &drbd_resources) {
-		if (pos == NULL) {
-			/* first iteration */
-			pos = tmp;
-			resource = pos;
-			break;
-		}
-		if (tmp == pos) {
-			resource = pos;
-			break;
+	if (cb->args[2]) {
+		for_each_connection_rcu(connection, resource) {
+			list_for_each_entry_rcu(path, &connection->transport.paths, list)
+				if (path == (struct drbd_path *)cb->args[2])
+					goto found_path;
 		}
+		/* path was probably deleted */
+		goto no_more_paths;
 	}
-	if (resource) {
-next_resource:
-		device = idr_get_next(&resource->devices, &volume);
-		if (!device) {
-			/* No more volumes to dump on this resource.
-			 * Advance resource iterator. */
-			pos = list_entry_rcu(resource->resources.next,
-					     struct drbd_resource, resources);
-			/* Did we dump any volume of this resource yet? */
-			if (volume != 0) {
-				/* If we reached the end of the list,
-				 * or only a single resource dump was requested,
-				 * we are done. */
-				if (&pos->resources == &drbd_resources || cb->args[2])
-					goto out;
-				volume = 0;
-				resource = pos;
-				goto next_resource;
-			}
-		}
-
-		dh = genlmsg_put(skb, NETLINK_CB(cb->skb).portid,
-				cb->nlh->nlmsg_seq, &drbd_genl_family,
-				NLM_F_MULTI, DRBD_ADM_GET_STATUS);
-		if (!dh)
-			goto out;
 
-		if (!device) {
-			/* This is a connection without a single volume.
-			 * Suprisingly enough, it may have a network
-			 * configuration. */
-			struct drbd_connection *connection;
+	connection = first_connection(resource);
+	if (!connection)
+		goto no_more_paths;
 
-			dh->minor = -1U;
-			dh->ret_code = NO_ERROR;
-			connection = the_only_connection(resource);
-			if (nla_put_drbd_cfg_context(skb, resource, connection, NULL))
-				goto cancel;
-			if (connection) {
-				struct net_conf *nc;
-
-				nc = rcu_dereference(connection->net_conf);
-				if (nc && net_conf_to_skb(skb, nc, 1) != 0)
-					goto cancel;
-			}
-			goto done;
-		}
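+	/* Not a real path: anchor at the list head so the
+	 * list_for_each_entry_continue_rcu() below starts at this
+	 * connection's first path. */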
+	path = list_entry(&connection->transport.paths, struct drbd_path, list);
 
-		D_ASSERT(device, device->vnr == volume);
-		D_ASSERT(device, device->resource == resource);
+found_path:
+	/* Advance to next path in connection. */
+	list_for_each_entry_continue_rcu(path, &connection->transport.paths, list) {
+		retcode = NO_ERROR;
+		goto put_result;  /* only one iteration */
+	}
 
-		dh->minor = device_to_minor(device);
-		dh->ret_code = NO_ERROR;
+	/* Advance to next connection. */
+	list_for_each_entry_continue_rcu(connection, &resource->connections, connections) {
+		path = first_path(connection);
+		if (!path)
+			continue;
+		retcode = NO_ERROR;
+		goto put_result;
+	}
 
-		if (nla_put_status_info(skb, device, NULL)) {
-cancel:
-			genlmsg_cancel(skb, dh);
-			goto out;
+no_more_paths:
+	if (cb->args[1] == ITERATE_RESOURCES) {
+		for_each_resource_rcu(next_resource, &drbd_resources) {
+			if (next_resource == resource)
+				goto found_resource;
 		}
-done:
-		genlmsg_end(skb, dh);
+		/* resource was probably deleted */
 	}
+	goto out;
 
-out:
-	rcu_read_unlock();
-	/* where to start the next iteration */
-	cb->args[0] = (long)pos;
-	cb->args[1] = (pos == resource) ? volume + 1 : 0;
-
-	/* No more resources/volumes/minors found results in an empty skb.
-	 * Which will terminate the dump. */
-        return skb->len;
-}
-
-/*
- * Request status of all resources, or of all volumes within a single resource.
- *
- * This is a dump, as the answer may not fit in a single reply skb otherwise.
- * Which means we cannot use the family->attrbuf or other such members, because
- * dump is NOT protected by the genl_lock().  During dump, we only have access
- * to the incoming skb, and need to opencode "parsing" of the nlattr payload.
- *
- * Once things are setup properly, we call into get_one_status().
- */
-int drbd_adm_get_status_all(struct sk_buff *skb, struct netlink_callback *cb)
-{
-	const unsigned hdrlen = GENL_HDRLEN + GENL_MAGIC_FAMILY_HDRSZ;
-	struct nlattr *nla;
-	const char *resource_name;
-	struct drbd_resource *resource;
-	int maxtype;
-
-	/* Is this a followup call? */
-	if (cb->args[0]) {
-		/* ... of a single resource dump,
-		 * and the resource iterator has been advanced already? */
-		if (cb->args[2] && cb->args[2] != cb->args[0])
-			return 0; /* DONE. */
-		goto dump;
+found_resource:
+	list_for_each_entry_continue_rcu(next_resource, &drbd_resources, resources) {
+		mutex_unlock(&resource->conf_update);
+		kref_put(&resource->kref, drbd_destroy_resource);
+		resource = next_resource;
+		kref_get(&resource->kref);
+		cb->args[0] = (long)resource;
+		cb->args[2] = 0;
+		goto next_resource;
 	}
+	goto out;  /* no more resources */
 
-	/* First call (from netlink_dump_start).  We need to figure out
-	 * which resource(s) the user wants us to dump. */
-	nla = nla_find(nlmsg_attrdata(cb->nlh, hdrlen),
-			nlmsg_attrlen(cb->nlh, hdrlen),
-			DRBD_NLA_CFG_CONTEXT);
-
-	/* No explicit context given.  Dump all. */
-	if (!nla)
-		goto dump;
-	maxtype = ARRAY_SIZE(drbd_cfg_context_nl_policy) - 1;
-	nla = drbd_nla_find_nested(maxtype, nla, __nla_type(T_ctx_resource_name));
-	if (IS_ERR(nla))
-		return PTR_ERR(nla);
-	/* context given, but no name present? */
-	if (!nla)
-		return -EINVAL;
-	resource_name = nla_data(nla);
-	if (!*resource_name)
-		return -ENODEV;
-	resource = drbd_find_resource(resource_name);
-	if (!resource)
-		return -ENODEV;
-
-	kref_put(&resource->kref, drbd_destroy_resource); /* get_one_status() revalidates the resource */
+put_result:
+	dh = genlmsg_put(skb, NETLINK_CB(cb->skb).portid,
+			cb->nlh->nlmsg_seq, &drbd_genl_family,
+			NLM_F_MULTI, DRBD_ADM_GET_PATHS);
+	err = -ENOMEM;
+	if (!dh)
+		goto out;
+	dh->ret_code = retcode;
+	dh->minor = -1U;
+	if (retcode == NO_ERROR && connection && path) {
+		struct drbd_path_info path_info;
 
-	/* prime iterators, and set "filter" mode mark:
-	 * only dump this connection. */
-	cb->args[0] = (long)resource;
-	/* cb->args[1] = 0; passed in this way. */
-	cb->args[2] = (long)resource;
+		err = nla_put_drbd_cfg_context(skb, resource, connection, NULL, path);
+		if (err)
+			goto out;
+		path_info.path_established = test_bit(TR_ESTABLISHED, &path->flags);
+		err = drbd_path_info_to_skb(skb, &path_info, !capable(CAP_SYS_ADMIN));
+		if (err)
+			goto out;
+		cb->args[2] = (long)path;
+	}
+	genlmsg_end(skb, dh);
+	err = 0;
 
-dump:
-	return get_one_status(skb, cb);
+out:
+	rcu_read_unlock();
+	if (resource)
+		mutex_unlock(&resource->conf_update);
+	if (err)
+		return err;
+	return skb->len;
 }
 
-int drbd_adm_get_timeout_type(struct sk_buff *skb, struct genl_info *info)
+static int drbd_adm_get_timeout_type(struct sk_buff *skb, struct genl_info *info)
 {
 	struct drbd_config_context adm_ctx;
+	struct drbd_peer_device *peer_device;
 	enum drbd_ret_code retcode;
 	struct timeout_parms tp;
 	int err;
 
-	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_MINOR);
+	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_PEER_DEVICE);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
-		goto out;
+	peer_device = adm_ctx.peer_device;
 
 	tp.timeout_type =
-		adm_ctx.device->state.pdsk == D_OUTDATED ? UT_PEER_OUTDATED :
-		test_bit(USE_DEGR_WFC_T, &adm_ctx.device->flags) ? UT_DEGRADED :
+		peer_device->disk_state[NOW] == D_OUTDATED ? UT_PEER_OUTDATED :
+		test_bit(USE_DEGR_WFC_T, &peer_device->flags) ? UT_DEGRADED :
 		UT_DEFAULT;
 
 	err = timeout_parms_to_priv_skb(adm_ctx.reply_skb, &tp);
@@ -4067,28 +6728,29 @@ int drbd_adm_get_timeout_type(struct sk_buff *skb, struct genl_info *info)
 		nlmsg_free(adm_ctx.reply_skb);
 		return err;
 	}
-out:
+
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 }
 
-int drbd_adm_start_ov(struct sk_buff *skb, struct genl_info *info)
+static int drbd_adm_start_ov(struct sk_buff *skb, struct genl_info *info)
 {
 	struct drbd_config_context adm_ctx;
 	struct drbd_device *device;
+	struct drbd_peer_device *peer_device;
 	enum drbd_ret_code retcode;
+	enum drbd_state_rv rv;
 	struct start_ov_parms parms;
 
-	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_MINOR);
+	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_PEER_DEVICE);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
-		goto out;
 
-	device = adm_ctx.device;
+	peer_device = adm_ctx.peer_device;
+	device = peer_device->device;
 
 	/* resume from last known position, if possible */
-	parms.ov_start_sector = device->ov_start_sector;
+	parms.ov_start_sector = peer_device->ov_start_sector;
 	parms.ov_stop_sector = ULLONG_MAX;
 	if (info->attrs[DRBD_NLA_START_OV_PARMS]) {
 		int err = start_ov_parms_from_attrs(&parms, info);
@@ -4098,40 +6760,59 @@ int drbd_adm_start_ov(struct sk_buff *skb, struct genl_info *info)
 			goto out;
 		}
 	}
-	mutex_lock(&adm_ctx.resource->adm_mutex);
+	if (!get_ldev(device)) {
+		retcode = ERR_NO_DISK;
+		goto out;
+	}
+	if (mutex_lock_interruptible(&adm_ctx.resource->adm_mutex)) {
+		retcode = ERR_INTR;
+		goto out_put_ldev;
+	}
 
 	/* w_make_ov_request expects position to be aligned */
-	device->ov_start_sector = parms.ov_start_sector & ~(BM_SECT_PER_BIT-1);
-	device->ov_stop_sector = parms.ov_stop_sector;
+	peer_device->ov_start_sector = parms.ov_start_sector & ~(bm_sect_per_bit(device->bitmap)-1);
+	peer_device->ov_stop_sector = parms.ov_stop_sector;
 
 	/* If there is still bitmap IO pending, e.g. previous resync or verify
 	 * just being finished, wait for it before requesting a new resync. */
-	drbd_suspend_io(device);
-	wait_event(device->misc_wait, !test_bit(BITMAP_IO, &device->flags));
-	retcode = drbd_request_state(device, NS(conn, C_VERIFY_S));
+	drbd_suspend_io(device, READ_AND_WRITE);
+	wait_event(device->misc_wait, !atomic_read(&device->pending_bitmap_work.n));
+	rv = stable_change_repl_state(peer_device,
+		L_VERIFY_S, CS_VERBOSE | CS_WAIT_COMPLETE | CS_SERIALIZE, "verify");
 	drbd_resume_io(device);
 
 	mutex_unlock(&adm_ctx.resource->adm_mutex);
+	put_ldev(device);
+	drbd_adm_finish(&adm_ctx, info, rv);
+	return 0;
+
+out_put_ldev:
+	put_ldev(device);
 out:
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 }
 
+static bool should_skip_initial_sync(struct drbd_peer_device *peer_device)
+{
+	return peer_device->repl_state[NOW] == L_ESTABLISHED &&
+	       peer_device->connection->agreed_pro_version >= 90 &&
+	       drbd_current_uuid(peer_device->device) == UUID_JUST_CREATED;
+}
 
-int drbd_adm_new_c_uuid(struct sk_buff *skb, struct genl_info *info)
+static int drbd_adm_new_c_uuid(struct sk_buff *skb, struct genl_info *info)
 {
 	struct drbd_config_context adm_ctx;
 	struct drbd_device *device;
+	struct drbd_peer_device *peer_device;
 	enum drbd_ret_code retcode;
-	int skip_initial_sync = 0;
 	int err;
 	struct new_c_uuid_parms args;
+	u64 nodes = 0, diskful = 0;
 
 	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_MINOR);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
-		goto out_nolock;
 
 	device = adm_ctx.device;
 	memset(&args, 0, sizeof(args));
@@ -4140,12 +6821,18 @@ int drbd_adm_new_c_uuid(struct sk_buff *skb, struct genl_info *info)
 		if (err) {
 			retcode = ERR_MANDATORY_TAG;
 			drbd_msg_put_info(adm_ctx.reply_skb, from_attrs_err_to_txt(err));
-			goto out_nolock;
+			goto out_no_adm_mutex;
 		}
 	}
 
-	mutex_lock(&adm_ctx.resource->adm_mutex);
-	mutex_lock(device->state_mutex); /* Protects us against serialized state changes. */
+	if (mutex_lock_interruptible(&adm_ctx.resource->adm_mutex)) {
+		retcode = ERR_INTR;
+		goto out_no_adm_mutex;
+	}
+	if (down_interruptible(&device->resource->state_sem)) {
+		retcode = ERR_INTR;
+		goto out_no_state_sem;
+	}
 
 	if (!get_ldev(device)) {
 		retcode = ERR_NO_DISK;
@@ -4153,148 +6840,323 @@ int drbd_adm_new_c_uuid(struct sk_buff *skb, struct genl_info *info)
 	}
 
 	/* this is "skip initial sync", assume to be clean */
-	if (device->state.conn == C_CONNECTED &&
-	    first_peer_device(device)->connection->agreed_pro_version >= 90 &&
-	    device->ldev->md.uuid[UI_CURRENT] == UUID_JUST_CREATED && args.clear_bm) {
-		drbd_info(device, "Preparing to skip initial sync\n");
-		skip_initial_sync = 1;
-	} else if (device->state.conn != C_STANDALONE) {
-		retcode = ERR_CONNECTED;
-		goto out_dec;
+	for_each_peer_device(peer_device, device) {
+		if ((args.clear_bm || args.force_resync) && should_skip_initial_sync(peer_device)) {
+			if (peer_device->disk_state[NOW] >= D_INCONSISTENT) {
+				drbd_info(peer_device, "Preparing to %s initial sync\n",
+					  args.clear_bm ? "skip" : "force");
+				diskful |= NODE_MASK(peer_device->node_id);
+			}
+			nodes |= NODE_MASK(peer_device->node_id);
+		} else if (peer_device->repl_state[NOW] != L_OFF) {
+			retcode = ERR_CONNECTED;
+			goto out_dec;
+		}
 	}
 
-	drbd_uuid_set(device, UI_BITMAP, 0); /* Rotate UI_BITMAP to History 1, etc... */
-	drbd_uuid_new_current(device); /* New current, previous to UI_BITMAP */
+	drbd_uuid_new_current_by_user(device); /* New current, previous to UI_BITMAP */
+
+	if (args.force_resync) {
+		unsigned long irq_flags;
+		begin_state_change(device->resource, &irq_flags, CS_VERBOSE);
+		__change_disk_state(device, D_UP_TO_DATE);
+		end_state_change(device->resource, &irq_flags, "new-c-uuid");
+
+		for_each_peer_device(peer_device, device) {
+			if (NODE_MASK(peer_device->node_id) & nodes) {
+				if (NODE_MASK(peer_device->node_id) & diskful) {
+					drbd_info(peer_device, "Forcing resync");
+					set_bit(CONSIDER_RESYNC, &peer_device->flags);
+					drbd_send_uuids(peer_device, UUID_FLAG_RESYNC, 0);
+					drbd_send_current_state(peer_device);
+				} else {
+					drbd_send_uuids(peer_device, 0, 0);
+				}
+
+				drbd_print_uuids(peer_device, "forced resync UUID");
+			}
+		}
+	}
 
 	if (args.clear_bm) {
-		err = drbd_bitmap_io(device, &drbd_bmio_clear_n_write,
-			"clear_n_write from new_c_uuid", BM_LOCKED_MASK, NULL);
+		unsigned long irq_flags;
+
+		err = drbd_bitmap_io(device, &drbd_bmio_clear_all_n_write,
+			"clear_n_write from new_c_uuid", BM_LOCK_ALL, NULL);
 		if (err) {
 			drbd_err(device, "Writing bitmap failed with %d\n", err);
 			retcode = ERR_IO_MD_DISK;
 		}
-		if (skip_initial_sync) {
-			drbd_send_uuids_skip_initial_sync(first_peer_device(device));
-			_drbd_uuid_set(device, UI_BITMAP, 0);
-			drbd_print_uuids(device, "cleared bitmap UUID");
-			spin_lock_irq(&device->resource->req_lock);
-			_drbd_set_state(_NS2(device, disk, D_UP_TO_DATE, pdsk, D_UP_TO_DATE),
-					CS_VERBOSE, NULL);
-			spin_unlock_irq(&device->resource->req_lock);
+		for_each_peer_device(peer_device, device) {
+			if (NODE_MASK(peer_device->node_id) & nodes) {
+				_drbd_uuid_set_bitmap(peer_device, 0);
+				drbd_send_uuids(peer_device, UUID_FLAG_SKIP_INITIAL_SYNC, 0);
+				drbd_print_uuids(peer_device, "cleared bitmap UUID");
+			}
+		}
+		begin_state_change(device->resource, &irq_flags, CS_VERBOSE);
+		__change_disk_state(device, D_UP_TO_DATE);
+		for_each_peer_device(peer_device, device) {
+			if (NODE_MASK(peer_device->node_id) & diskful)
+				__change_peer_disk_state(peer_device, D_UP_TO_DATE);
 		}
+		end_state_change(device->resource, &irq_flags, "new-c-uuid");
 	}
 
-	drbd_md_sync(device);
+	drbd_md_sync_if_dirty(device);
 out_dec:
 	put_ldev(device);
 out:
-	mutex_unlock(device->state_mutex);
+	up(&device->resource->state_sem);
+out_no_state_sem:
 	mutex_unlock(&adm_ctx.resource->adm_mutex);
-out_nolock:
+out_no_adm_mutex:
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 }
 
-static enum drbd_ret_code
-drbd_check_resource_name(struct drbd_config_context *adm_ctx)
+/* name: a resource or connection name
+ * Comes from a NLA_NUL_STRING, and already passed validate_nla().
+ * It is known to be NUL-terminated within the bounds of our defined netlink
+ * attribute policy.
+ *
+ * It must not be empty.
+ * It must not be the literal "all".
+ *
+ * If strict:
+ * Only allow strict ascii alnum [0-9A-Za-z]
+ * and some hand-selected punctuation characters
+ *
+ * If not strict:
+ * It must not contain '/', we use it as directory name in debugfs.
+ * It shall not contain "control characters" or space, as those may confuse
+ * utils when trying to parse the output of "drbdsetup events2" or similar.
+ * Otherwise, we don't care, it may be any tag that makes sense to userland,
+ * we do not enforce strict ascii or any other "encoding".
+ */
+static enum drbd_ret_code drbd_check_name_str(const char *name, const bool strict)
 {
-	const char *name = adm_ctx->resource_name;
-	if (!name || !name[0]) {
-		drbd_msg_put_info(adm_ctx->reply_skb, "resource name missing");
+	unsigned char c;
+	if (name == NULL || name[0] == 0)
 		return ERR_MANDATORY_TAG;
-	}
-	/* if we want to use these in sysfs/configfs/debugfs some day,
-	 * we must not allow slashes */
-	if (strchr(name, '/')) {
-		drbd_msg_put_info(adm_ctx->reply_skb, "invalid resource name");
+
+	/* Tools reserve the literal "all" to mean what you would expect. */
+	/* If we want to get really paranoid,
+	 * we could add a number of "reserved" names,
+	 * like the *_state_names defined in drbd_strings.c */
+	if (memcmp("all", name, 4) == 0)
 		return ERR_INVALID_REQUEST;
+
+	while ((c = *name++)) {
+		if (c == '/' || c <= ' ' || c == '\x7f')
+			return ERR_INVALID_REQUEST;
+		if (strict) {
+			switch (c) {
+			case '0' ... '9':
+			case 'A' ... 'Z':
+			case 'a' ... 'z':
+				/* if you change this, also change "strict_pattern" below */
+			case '+': case '-': case '.': case '_':
+				break;
+			default:
+				return ERR_INVALID_REQUEST;
+			}
+		}
 	}
 	return NO_ERROR;
 }
 
+int param_set_drbd_strict_names(const char *val, const struct kernel_param *kp)
+{
+	int err = 0;
+	bool new_value;
+	bool orig_value = *(bool *)kp->arg;
+	struct kernel_param dummy_kp = *kp;
+
+	dummy_kp.arg = &new_value;
+
+	err = param_set_bool(val, &dummy_kp);
+	if (err || new_value == orig_value)
+		return err;
+
+	if (new_value) {
+		struct drbd_resource *resource;
+		struct drbd_connection *connection;
+		int non_strict_cnt = 0;
+
+		/* If we transition from "not enforced" to "enforcing strict names",
+		 * we complain about all "non-strict names" that still exist,
+		 * but intentionally still enable the enforcing.
+		 *
+		 * That way we can prevent new "non-strict" from being created,
+		 * while allowing us to clean up the existing ones at some
+		 * "convenient time" later.
+		 */
+		rcu_read_lock();
+		for_each_resource_rcu(resource, &drbd_resources) {
+			for_each_connection_rcu(connection, resource) {
+				char *name = connection->transport.net_conf->name;
+				if (drbd_check_name_str(name, true) == NO_ERROR)
+					continue;
+				drbd_info(connection, "non-strict name still in use\n");
+				++non_strict_cnt;
+			}
+			if (drbd_check_name_str(resource->name, true) == NO_ERROR)
+				continue;
+			drbd_info(resource, "non-strict name still in use\n");
+			++non_strict_cnt;
+		}
+		rcu_read_unlock();
+		if (non_strict_cnt)
+			pr_notice("%u non-strict names still in use\n", non_strict_cnt);
+	}
+	if (!err) {
+		*(bool *)kp->arg = new_value;
+		pr_info("%s strict name checks\n", new_value ? "enabled" : "disabled");
+	}
+	return err;
+}
+
+static void drbd_msg_put_name_error(struct sk_buff *reply_skb, enum drbd_ret_code ret_code)
+{
+	char *strict_pattern = " (strict_names=1 allows only [0-9A-Za-z+._-])";
+	char *non_strict_pat = " (disallowed: ascii control, space, slash)";
+	if (ret_code == NO_ERROR)
+		return;
+	if (ret_code == ERR_INVALID_REQUEST) {
+		drbd_msg_sprintf_info(reply_skb, "invalid name%s",
+			drbd_strict_names ? strict_pattern : non_strict_pat);
+	} else if (ret_code == ERR_MANDATORY_TAG) {
+		drbd_msg_put_info(reply_skb, "name missing");
+	} else if (ret_code == ERR_ALREADY_EXISTS) {
+		drbd_msg_put_info(reply_skb, "name already exists");
+	} else {
+		drbd_msg_put_info(reply_skb, "unhandled error in drbd_check_name_str");
+	}
+}
+
+static enum drbd_ret_code drbd_check_resource_name(struct drbd_config_context *const adm_ctx)
+{
+	enum drbd_ret_code ret_code = drbd_check_name_str(adm_ctx->resource_name, drbd_strict_names);
+	drbd_msg_put_name_error(adm_ctx->reply_skb, ret_code);
+	return ret_code;
+}
+
 static void resource_to_info(struct resource_info *info,
 			     struct drbd_resource *resource)
 {
-	info->res_role = conn_highest_role(first_connection(resource));
-	info->res_susp = resource->susp;
-	info->res_susp_nod = resource->susp_nod;
-	info->res_susp_fen = resource->susp_fen;
+	info->res_role = resource->role[NOW];
+	info->res_susp = resource->susp_user[NOW];
+	info->res_susp_nod = resource->susp_nod[NOW];
+	info->res_susp_fen = is_suspended_fen(resource, NOW);
+	info->res_susp_quorum = resource->susp_quorum[NOW];
+	info->res_fail_io = resource->fail_io[NOW];
 }
 
-int drbd_adm_new_resource(struct sk_buff *skb, struct genl_info *info)
+static int drbd_adm_new_resource(struct sk_buff *skb, struct genl_info *info)
 {
-	struct drbd_connection *connection;
 	struct drbd_config_context adm_ctx;
+	struct drbd_resource *resource;
 	enum drbd_ret_code retcode;
 	struct res_opts res_opts;
 	int err;
 
+	mutex_lock(&resources_mutex);
 	retcode = drbd_adm_prepare(&adm_ctx, skb, info, 0);
-	if (!adm_ctx.reply_skb)
+	if (!adm_ctx.reply_skb) {
+		mutex_unlock(&resources_mutex);
 		return retcode;
-	if (retcode != NO_ERROR)
-		goto out;
+	}
 
 	set_res_opts_defaults(&res_opts);
+	res_opts.node_id = -1;
 	err = res_opts_from_attrs(&res_opts, info);
-	if (err && err != -ENOMSG) {
+	if (err) {
 		retcode = ERR_MANDATORY_TAG;
 		drbd_msg_put_info(adm_ctx.reply_skb, from_attrs_err_to_txt(err));
 		goto out;
 	}
 
+	/* ERR_ALREADY_EXISTS? */
+	if (adm_ctx.resource)
+		goto out;
+
 	retcode = drbd_check_resource_name(&adm_ctx);
 	if (retcode != NO_ERROR)
 		goto out;
 
-	if (adm_ctx.resource) {
-		if (info->nlhdr->nlmsg_flags & NLM_F_EXCL) {
-			retcode = ERR_INVALID_REQUEST;
-			drbd_msg_put_info(adm_ctx.reply_skb, "resource exists");
-		}
-		/* else: still NO_ERROR */
+	if (res_opts.explicit_drbd8_compat)
+		res_opts.drbd8_compat_mode = true;
+
+	if (res_opts.drbd8_compat_mode) {
+#ifdef CONFIG_DRBD_COMPAT_84
+		pr_info("drbd: running in DRBD 8 compatibility mode.\n");
+		/*
+		 * That means we ignore the value of node_id for now. That
+		 * will be set to an actual value when the resource is
+		 * connected later.
+		 */
+		atomic_inc(&nr_drbd8_devices);
+		res_opts.auto_promote = false;
+#else
+		drbd_msg_put_info(adm_ctx.reply_skb, "CONFIG_DRBD_COMPAT_84 not enabled");
+		retcode = ERR_INVALID_REQUEST;
+		goto out;
+#endif
+	} else if (res_opts.node_id >= DRBD_NODE_ID_MAX) {
+		pr_err("drbd: invalid node id (%d)\n", res_opts.node_id);
+		retcode = ERR_INVALID_REQUEST;
 		goto out;
 	}
 
-	/* not yet safe for genl_family.parallel_ops */
-	mutex_lock(&resources_mutex);
-	connection = conn_create(adm_ctx.resource_name, &res_opts);
+	if (!try_module_get(THIS_MODULE)) {
+		pr_err("drbd: Could not get a module reference\n");
+		retcode = ERR_INVALID_REQUEST;
+		goto out;
+	}
+
+	resource = drbd_create_resource(adm_ctx.resource_name, &res_opts);
 	mutex_unlock(&resources_mutex);
 
-	if (connection) {
+	if (resource) {
 		struct resource_info resource_info;
 
 		mutex_lock(&notification_mutex);
-		resource_to_info(&resource_info, connection->resource);
-		notify_resource_state(NULL, 0, connection->resource,
-				      &resource_info, NOTIFY_CREATE);
+		resource_to_info(&resource_info, resource);
+		notify_resource_state(NULL, 0, resource, &resource_info, NULL, NOTIFY_CREATE);
 		mutex_unlock(&notification_mutex);
-	} else
+	} else {
+		module_put(THIS_MODULE);
 		retcode = ERR_NOMEM;
-
+	}
+	goto out_no_unlock;
 out:
+	mutex_unlock(&resources_mutex);
+out_no_unlock:
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 }
 
-static void device_to_info(struct device_info *info,
-			   struct drbd_device *device)
-{
-	info->dev_disk_state = device->state.disk;
-}
-
-
-int drbd_adm_new_minor(struct sk_buff *skb, struct genl_info *info)
+static int drbd_adm_new_minor(struct sk_buff *skb, struct genl_info *info)
 {
 	struct drbd_config_context adm_ctx;
 	struct drbd_genlmsghdr *dh = genl_info_userhdr(info);
+	struct device_conf device_conf;
+	struct drbd_resource *resource;
+	struct drbd_device *device;
 	enum drbd_ret_code retcode;
+	int err;
 
 	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_RESOURCE);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
+
+	set_device_conf_defaults(&device_conf);
+	err = device_conf_from_attrs(&device_conf, info);
+	if (err && err != -ENOMSG) {
+		retcode = ERR_MANDATORY_TAG;
+		drbd_msg_put_info(adm_ctx.reply_skb, from_attrs_err_to_txt(err));
 		goto out;
+	}
 
 	if (dh->minor > MINORMASK) {
 		drbd_msg_put_info(adm_ctx.reply_skb, "requested minor out of range");
@@ -4306,31 +7168,43 @@ int drbd_adm_new_minor(struct sk_buff *skb, struct genl_info *info)
 		retcode = ERR_INVALID_REQUEST;
 		goto out;
 	}
-
-	/* drbd_adm_prepare made sure already
-	 * that first_peer_device(device)->connection and device->vnr match the request. */
-	if (adm_ctx.device) {
-		if (info->nlhdr->nlmsg_flags & NLM_F_EXCL)
-			retcode = ERR_MINOR_OR_VOLUME_EXISTS;
-		/* else: still NO_ERROR */
+	if (device_conf.block_size != 512 && device_conf.block_size != 1024 &&
+	    device_conf.block_size != 2048 && device_conf.block_size != 4096) {
+		drbd_msg_put_info(adm_ctx.reply_skb, "block_size not 512, 1024, 2048, or 4096");
+		retcode = ERR_INVALID_REQUEST;
+		goto out;
+	}
+	if (device_conf.discard_granularity != DRBD_DISCARD_GRANULARITY_DEF &&
+	    device_conf.discard_granularity != 0 &&
+	    device_conf.discard_granularity % device_conf.block_size != 0) {
+		drbd_msg_put_info(adm_ctx.reply_skb,
+			"discard_granularity must be 0 or a multiple of block_size");
+		retcode = ERR_INVALID_REQUEST;
 		goto out;
 	}
 
-	mutex_lock(&adm_ctx.resource->adm_mutex);
-	retcode = drbd_create_device(&adm_ctx, dh->minor);
+	if (adm_ctx.device)
+		goto out;
+
+	resource = adm_ctx.resource;
+	mutex_lock(&resource->conf_update);
+	for (;;) {
+		retcode = drbd_create_device(&adm_ctx, dh->minor, &device_conf, &device);
+		if (retcode != ERR_NOMEM ||
+		    schedule_timeout_interruptible(HZ / 10))
+			break;
+		/* Keep retrying until the memory allocations eventually succeed. */
+	}
 	if (retcode == NO_ERROR) {
-		struct drbd_device *device;
 		struct drbd_peer_device *peer_device;
 		struct device_info info;
 		unsigned int peer_devices = 0;
 		enum drbd_notification_type flags;
 
-		device = minor_to_device(dh->minor);
-		for_each_peer_device(peer_device, device) {
-			if (!has_net_conf(peer_device->connection))
-				continue;
+		drbd_reconsider_queue_parameters(device, NULL);
+
+		for_each_peer_device(peer_device, device)
 			peer_devices++;
-		}
 
 		device_to_info(&info, device);
 		mutex_lock(&notification_mutex);
@@ -4339,8 +7213,6 @@ int drbd_adm_new_minor(struct sk_buff *skb, struct genl_info *info)
 		for_each_peer_device(peer_device, device) {
 			struct peer_device_info peer_device_info;
 
-			if (!has_net_conf(peer_device->connection))
-				continue;
 			peer_device_to_info(&peer_device_info, peer_device);
 			flags = (peer_devices--) ? NOTIFY_CONTINUES : 0;
 			notify_peer_device_state(NULL, 0, peer_device, &peer_device_info,
@@ -4348,7 +7220,7 @@ int drbd_adm_new_minor(struct sk_buff *skb, struct genl_info *info)
 		}
 		mutex_unlock(&notification_mutex);
 	}
-	mutex_unlock(&adm_ctx.resource->adm_mutex);
+	mutex_unlock(&resource->conf_update);
 out:
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
@@ -4356,42 +7228,51 @@ int drbd_adm_new_minor(struct sk_buff *skb, struct genl_info *info)
 
 static enum drbd_ret_code adm_del_minor(struct drbd_device *device)
 {
+	struct drbd_resource *resource = device->resource;
 	struct drbd_peer_device *peer_device;
+	enum drbd_ret_code ret;
+	u64 im;
+
+	read_lock_irq(&resource->state_rwlock);
+	if (device->disk_state[NOW] == D_DISKLESS)
+		ret = test_and_set_bit(UNREGISTERED, &device->flags) ? ERR_MINOR_INVALID : NO_ERROR;
+	else
+		ret = ERR_MINOR_CONFIGURED;
+	read_unlock_irq(&resource->state_rwlock);
+
+	if (ret != NO_ERROR)
+		return ret;
 
-	if (device->state.disk == D_DISKLESS &&
-	    /* no need to be device->state.conn == C_STANDALONE &&
-	     * we may want to delete a minor from a live replication group.
-	     */
-	    device->state.role == R_SECONDARY) {
-		struct drbd_connection *connection =
-			first_connection(device->resource);
+	for_each_peer_device_ref(peer_device, im, device)
+		stable_change_repl_state(peer_device, L_OFF,
+					 CS_VERBOSE | CS_WAIT_COMPLETE, "del-minor");
+
+	/* If drbd_ldev_destroy() is pending, wait for it to run before
+	 * unregistering the device. */
+	wait_event(device->misc_wait, !test_bit(GOING_DISKLESS, &device->flags));
+	/*
+	 * Flush the resource work queue to make sure that no more events like
+	 * state change notifications for this device are queued: we want the
+	 * "destroy" event to come last.
+	 */
+	drbd_flush_workqueue(&resource->work);
 
-		_drbd_request_state(device, NS(conn, C_WF_REPORT_PARAMS),
-				    CS_VERBOSE + CS_WAIT_COMPLETE);
+	drbd_unregister_device(device);
 
-		/* If the state engine hasn't stopped the sender thread yet, we
-		 * need to flush the sender work queue before generating the
-		 * DESTROY events here. */
-		if (get_t_state(&connection->worker) == RUNNING)
-			drbd_flush_workqueue(&connection->sender_work);
+	mutex_lock(&notification_mutex);
+	for_each_peer_device_ref(peer_device, im, device)
+		notify_peer_device_state(NULL, 0, peer_device, NULL,
+					 NOTIFY_DESTROY | NOTIFY_CONTINUES);
+	notify_device_state(NULL, 0, device, NULL, NOTIFY_DESTROY);
+	mutex_unlock(&notification_mutex);
 
-		mutex_lock(&notification_mutex);
-		for_each_peer_device(peer_device, device) {
-			if (!has_net_conf(peer_device->connection))
-				continue;
-			notify_peer_device_state(NULL, 0, peer_device, NULL,
-						 NOTIFY_DESTROY | NOTIFY_CONTINUES);
-		}
-		notify_device_state(NULL, 0, device, NULL, NOTIFY_DESTROY);
-		mutex_unlock(&notification_mutex);
+	if (device->open_cnt == 0 && !test_and_set_bit(DESTROYING_DEV, &device->flags))
+		call_rcu(&device->rcu, drbd_reclaim_device);
 
-		drbd_delete_device(device);
-		return NO_ERROR;
-	} else
-		return ERR_MINOR_CONFIGURED;
+	return ret;
 }
 
-int drbd_adm_del_minor(struct sk_buff *skb, struct genl_info *info)
+static int drbd_adm_del_minor(struct sk_buff *skb, struct genl_info *info)
 {
 	struct drbd_config_context adm_ctx;
 	enum drbd_ret_code retcode;
@@ -4399,168 +7280,159 @@ int drbd_adm_del_minor(struct sk_buff *skb, struct genl_info *info)
 	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_MINOR);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
-		goto out;
 
-	mutex_lock(&adm_ctx.resource->adm_mutex);
-	retcode = adm_del_minor(adm_ctx.device);
-	mutex_unlock(&adm_ctx.resource->adm_mutex);
-out:
+	if (mutex_lock_interruptible(&adm_ctx.resource->adm_mutex)) {
+		retcode = ERR_INTR;
+	} else {
+		retcode = adm_del_minor(adm_ctx.device);
+		mutex_unlock(&adm_ctx.resource->adm_mutex);
+	}
+
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 }
 
 static int adm_del_resource(struct drbd_resource *resource)
 {
-	struct drbd_connection *connection;
+	int err;
 
-	for_each_connection(connection, resource) {
-		if (connection->cstate > C_STANDALONE)
-			return ERR_NET_CONFIGURED;
-	}
+	/*
+	 * Flush the resource work queue to make sure that no more events like
+	 * state change notifications are queued: we want the "destroy" event
+	 * to come last.
+	 */
+	drbd_flush_workqueue(&resource->work);
+
+	mutex_lock(&resources_mutex);
+	err = ERR_RES_NOT_KNOWN;
+	if (test_bit(R_UNREGISTERED, &resource->flags))
+		goto out;
+	err = ERR_NET_CONFIGURED;
+	if (!list_empty(&resource->connections))
+		goto out;
+	err = ERR_RES_IN_USE;
 	if (!idr_is_empty(&resource->devices))
-		return ERR_RES_IN_USE;
+		goto out;
+
+	set_bit(R_UNREGISTERED, &resource->flags);
+	list_del_rcu(&resource->resources);
+	drbd_debugfs_resource_cleanup(resource);
+	mutex_unlock(&resources_mutex);
+
+	if (cancel_work_sync(&resource->empty_twopc)) {
+		kref_put(&resource->kref, drbd_destroy_resource);
+	}
+	timer_shutdown_sync(&resource->twopc_timer);
+	timer_shutdown_sync(&resource->peer_ack_timer);
+	call_rcu(&resource->rcu, drbd_reclaim_resource);
 
-	/* The state engine has stopped the sender thread, so we don't
-	 * need to flush the sender work queue before generating the
-	 * DESTROY event here. */
 	mutex_lock(&notification_mutex);
-	notify_resource_state(NULL, 0, resource, NULL, NOTIFY_DESTROY);
+	notify_resource_state(NULL, 0, resource, NULL, NULL, NOTIFY_DESTROY);
 	mutex_unlock(&notification_mutex);
 
-	mutex_lock(&resources_mutex);
-	list_del_rcu(&resource->resources);
-	mutex_unlock(&resources_mutex);
-	/* Make sure all threads have actually stopped: state handling only
-	 * does drbd_thread_stop_nowait(). */
-	list_for_each_entry(connection, &resource->connections, connections)
-		drbd_thread_stop(&connection->worker);
-	synchronize_rcu();
-	drbd_free_resource(resource);
+	/* When the last resource is removed, do an explicit synchronize_rcu().
+	   Without this, an immediately following rmmod would fail, since the
+	   resource's worker thread still holds a reference count on the module. */
+	if (list_empty(&drbd_resources))
+		synchronize_rcu();
 	return NO_ERROR;
+out:
+	mutex_unlock(&resources_mutex);
+	return err;
 }
 
-int drbd_adm_down(struct sk_buff *skb, struct genl_info *info)
+static int drbd_adm_down(struct sk_buff *skb, struct genl_info *info)
 {
 	struct drbd_config_context adm_ctx;
 	struct drbd_resource *resource;
 	struct drbd_connection *connection;
 	struct drbd_device *device;
 	int retcode; /* enum drbd_ret_code rsp. enum drbd_state_rv */
-	unsigned i;
+	enum drbd_ret_code ret;
+	int i;
+	u64 im;
 
-	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_RESOURCE);
+	retcode = drbd_adm_prepare(&adm_ctx, skb, info,
+			DRBD_ADM_NEED_RESOURCE | DRBD_ADM_IGNORE_VERSION);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
-		goto finish;
 
 	resource = adm_ctx.resource;
-	mutex_lock(&resource->adm_mutex);
+	if (mutex_lock_interruptible(&resource->adm_mutex)) {
+		retcode = ERR_INTR;
+		goto out_no_adm_mutex;
+	}
+	set_bit(DOWN_IN_PROGRESS, &resource->flags);
 	/* demote */
-	for_each_connection(connection, resource) {
-		struct drbd_peer_device *peer_device;
-
-		idr_for_each_entry(&connection->peer_devices, peer_device, i) {
-			retcode = drbd_set_role(peer_device->device, R_SECONDARY, 0);
-			if (retcode < SS_SUCCESS) {
-				drbd_msg_put_info(adm_ctx.reply_skb, "failed to demote");
-				goto out;
-			}
-		}
+	retcode = drbd_set_role(resource, R_SECONDARY, false, "down", adm_ctx.reply_skb);
+	if (retcode < SS_SUCCESS) {
+		drbd_msg_put_info(adm_ctx.reply_skb, "failed to demote");
+		goto out;
+	}
 
-		retcode = conn_try_disconnect(connection, 0);
-		if (retcode < SS_SUCCESS) {
-			drbd_msg_put_info(adm_ctx.reply_skb, "failed to disconnect");
+	for_each_connection_ref(connection, im, resource) {
+		retcode = SS_SUCCESS;
+		if (connection->cstate[NOW] > C_STANDALONE)
+			retcode = conn_try_disconnect(connection, 0, "down", adm_ctx.reply_skb);
+		if (retcode >= SS_SUCCESS) {
+			del_connection(connection, "down");
+		} else {
+			kref_put(&connection->kref, drbd_destroy_connection);
 			goto out;
 		}
 	}
 
-	/* detach */
+	/* detach and delete minor */
+	rcu_read_lock();
 	idr_for_each_entry(&resource->devices, device, i) {
-		retcode = adm_detach(device, 0);
+		kref_get(&device->kref);
+		rcu_read_unlock();
+		retcode = adm_detach(device, 0, 0, "down", adm_ctx.reply_skb);
+		mutex_lock(&resource->conf_update);
+		ret = adm_del_minor(device);
+		mutex_unlock(&resource->conf_update);
+		kref_put(&device->kref, drbd_destroy_device);
 		if (retcode < SS_SUCCESS || retcode > NO_ERROR) {
 			drbd_msg_put_info(adm_ctx.reply_skb, "failed to detach");
 			goto out;
 		}
-	}
-
-	/* delete volumes */
-	idr_for_each_entry(&resource->devices, device, i) {
-		retcode = adm_del_minor(device);
-		if (retcode != NO_ERROR) {
+		if (ret != NO_ERROR) {
 			/* "can not happen" */
 			drbd_msg_put_info(adm_ctx.reply_skb, "failed to delete volume");
 			goto out;
 		}
+		rcu_read_lock();
 	}
+	rcu_read_unlock();
 
+	mutex_lock(&resource->conf_update);
 	retcode = adm_del_resource(resource);
+	/* holding a reference to resource in adm_ctx until drbd_adm_finish() */
+	mutex_unlock(&resource->conf_update);
 out:
+	opener_info(adm_ctx.resource, adm_ctx.reply_skb, (enum drbd_state_rv)retcode);
+	clear_bit(DOWN_IN_PROGRESS, &resource->flags);
 	mutex_unlock(&resource->adm_mutex);
-finish:
+out_no_adm_mutex:
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 }
 
-int drbd_adm_del_resource(struct sk_buff *skb, struct genl_info *info)
+static int drbd_adm_del_resource(struct sk_buff *skb, struct genl_info *info)
 {
 	struct drbd_config_context adm_ctx;
-	struct drbd_resource *resource;
 	enum drbd_ret_code retcode;
 
 	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_RESOURCE);
 	if (!adm_ctx.reply_skb)
 		return retcode;
-	if (retcode != NO_ERROR)
-		goto finish;
-	resource = adm_ctx.resource;
 
-	mutex_lock(&resource->adm_mutex);
-	retcode = adm_del_resource(resource);
-	mutex_unlock(&resource->adm_mutex);
-finish:
+	retcode = adm_del_resource(adm_ctx.resource);
+
 	drbd_adm_finish(&adm_ctx, info, retcode);
 	return 0;
 }
 
-void drbd_bcast_event(struct drbd_device *device, const struct sib_info *sib)
-{
-	struct sk_buff *msg;
-	struct drbd_genlmsghdr *d_out;
-	unsigned seq;
-	int err = -ENOMEM;
-
-	seq = atomic_inc_return(&drbd_genl_seq);
-	msg = genlmsg_new(NLMSG_GOODSIZE, GFP_NOIO);
-	if (!msg)
-		goto failed;
-
-	err = -EMSGSIZE;
-	d_out = genlmsg_put(msg, 0, seq, &drbd_genl_family, 0, DRBD_EVENT);
-	if (!d_out) /* cannot happen, but anyways. */
-		goto nla_put_failure;
-	d_out->minor = device_to_minor(device);
-	d_out->ret_code = NO_ERROR;
-
-	if (nla_put_status_info(msg, device, sib))
-		goto nla_put_failure;
-	genlmsg_end(msg, d_out);
-	err = drbd_genl_multicast_events(msg, GFP_NOWAIT);
-	/* msg has been consumed or freed in netlink_broadcast() */
-	if (err && err != -ESRCH)
-		goto failed;
-
-	return;
-
-nla_put_failure:
-	nlmsg_free(msg);
-failed:
-	drbd_err(device, "Error %d while broadcasting event. "
-			"Event seq:%u sib_reason:%u\n",
-			err, seq, sib->sib_reason);
-}
-
 static int nla_put_notification_header(struct sk_buff *msg,
 				       enum drbd_notification_type type)
 {
@@ -4575,6 +7447,7 @@ int notify_resource_state(struct sk_buff *skb,
 			   unsigned int seq,
 			   struct drbd_resource *resource,
 			   struct resource_info *resource_info,
+			   struct rename_resource_info *rename_resource_info,
 			   enum drbd_notification_type type)
 {
 	struct resource_statistics resource_statistics;
@@ -4583,7 +7456,7 @@ int notify_resource_state(struct sk_buff *skb,
 	int err;
 
 	if (!skb) {
-		seq = atomic_inc_return(&notify_genl_seq);
+		seq = atomic_inc_return(&drbd_genl_seq);
 		skb = genlmsg_new(NLMSG_GOODSIZE, GFP_NOIO);
 		err = -ENOMEM;
 		if (!skb)
@@ -4597,18 +7470,29 @@ int notify_resource_state(struct sk_buff *skb,
 		goto nla_put_failure;
 	dh->minor = -1U;
 	dh->ret_code = NO_ERROR;
-	if (nla_put_drbd_cfg_context(skb, resource, NULL, NULL) ||
-	    nla_put_notification_header(skb, type) ||
-	    ((type & ~NOTIFY_FLAGS) != NOTIFY_DESTROY &&
-	     resource_info_to_skb(skb, resource_info, true)))
+	if (nla_put_drbd_cfg_context(skb, resource, NULL, NULL, NULL) ||
+	    nla_put_notification_header(skb, type))
 		goto nla_put_failure;
+
+	if (resource_info) {
+		err = resource_info_to_skb(skb, resource_info, true);
+		if (err)
+			goto nla_put_failure;
+	}
+
 	resource_statistics.res_stat_write_ordering = resource->write_ordering;
 	err = resource_statistics_to_skb(skb, &resource_statistics, !capable(CAP_SYS_ADMIN));
 	if (err)
 		goto nla_put_failure;
+
+	if (rename_resource_info) {
+		err = rename_resource_info_to_skb(skb, rename_resource_info, !capable(CAP_SYS_ADMIN));
+		if (err)
+			goto nla_put_failure;
+	}
 	genlmsg_end(skb, dh);
 	if (multicast) {
-		err = drbd_genl_multicast_events(skb, GFP_NOWAIT);
+		err = drbd_genl_multicast_events(skb);
 		/* skb has been consumed or freed in netlink_broadcast() */
 		if (err && err != -ESRCH)
 			goto failed;
@@ -4635,7 +7519,7 @@ int notify_device_state(struct sk_buff *skb,
 	int err;
 
 	if (!skb) {
-		seq = atomic_inc_return(&notify_genl_seq);
+		seq = atomic_inc_return(&drbd_genl_seq);
 		skb = genlmsg_new(NLMSG_GOODSIZE, GFP_NOIO);
 		err = -ENOMEM;
 		if (!skb)
@@ -4649,7 +7533,7 @@ int notify_device_state(struct sk_buff *skb,
 		goto nla_put_failure;
 	dh->minor = device->minor;
 	dh->ret_code = NO_ERROR;
-	if (nla_put_drbd_cfg_context(skb, device->resource, NULL, device) ||
+	if (nla_put_drbd_cfg_context(skb, device->resource, NULL, device, NULL) ||
 	    nla_put_notification_header(skb, type) ||
 	    ((type & ~NOTIFY_FLAGS) != NOTIFY_DESTROY &&
 	     device_info_to_skb(skb, device_info, true)))
@@ -4658,7 +7542,7 @@ int notify_device_state(struct sk_buff *skb,
 	device_statistics_to_skb(skb, &device_statistics, !capable(CAP_SYS_ADMIN));
 	genlmsg_end(skb, dh);
 	if (multicast) {
-		err = drbd_genl_multicast_events(skb, GFP_NOWAIT);
+		err = drbd_genl_multicast_events(skb);
 		/* skb has been consumed or freed in netlink_broadcast() */
 		if (err && err != -ESRCH)
 			goto failed;
@@ -4673,6 +7557,7 @@ int notify_device_state(struct sk_buff *skb,
 	return err;
 }
 
+/* open-coded path_parms_to_skb(), iterating over the list */
 int notify_connection_state(struct sk_buff *skb,
 			     unsigned int seq,
 			     struct drbd_connection *connection,
@@ -4685,7 +7570,7 @@ int notify_connection_state(struct sk_buff *skb,
 	int err;
 
 	if (!skb) {
-		seq = atomic_inc_return(&notify_genl_seq);
+		seq = atomic_inc_return(&drbd_genl_seq);
 		skb = genlmsg_new(NLMSG_GOODSIZE, GFP_NOIO);
 		err = -ENOMEM;
 		if (!skb)
@@ -4699,16 +7584,17 @@ int notify_connection_state(struct sk_buff *skb,
 		goto nla_put_failure;
 	dh->minor = -1U;
 	dh->ret_code = NO_ERROR;
-	if (nla_put_drbd_cfg_context(skb, connection->resource, connection, NULL) ||
+	if (nla_put_drbd_cfg_context(skb, connection->resource, connection, NULL, NULL) ||
 	    nla_put_notification_header(skb, type) ||
 	    ((type & ~NOTIFY_FLAGS) != NOTIFY_DESTROY &&
 	     connection_info_to_skb(skb, connection_info, true)))
 		goto nla_put_failure;
-	connection_statistics.conn_congested = test_bit(NET_CONGESTED, &connection->flags);
+	connection_paths_to_skb(skb, connection);
+	connection_to_statistics(&connection_statistics, connection);
 	connection_statistics_to_skb(skb, &connection_statistics, !capable(CAP_SYS_ADMIN));
 	genlmsg_end(skb, dh);
 	if (multicast) {
-		err = drbd_genl_multicast_events(skb, GFP_NOWAIT);
+		err = drbd_genl_multicast_events(skb);
 		/* skb has been consumed or freed in netlink_broadcast() */
 		if (err && err != -ESRCH)
 			goto failed;
@@ -4736,7 +7622,7 @@ int notify_peer_device_state(struct sk_buff *skb,
 	int err;
 
 	if (!skb) {
-		seq = atomic_inc_return(&notify_genl_seq);
+		seq = atomic_inc_return(&drbd_genl_seq);
 		skb = genlmsg_new(NLMSG_GOODSIZE, GFP_NOIO);
 		err = -ENOMEM;
 		if (!skb)
@@ -4750,7 +7636,7 @@ int notify_peer_device_state(struct sk_buff *skb,
 		goto nla_put_failure;
 	dh->minor = -1U;
 	dh->ret_code = NO_ERROR;
-	if (nla_put_drbd_cfg_context(skb, resource, peer_device->connection, peer_device->device) ||
+	if (nla_put_drbd_cfg_context(skb, resource, peer_device->connection, peer_device->device, NULL) ||
 	    nla_put_notification_header(skb, type) ||
 	    ((type & ~NOTIFY_FLAGS) != NOTIFY_DESTROY &&
 	     peer_device_info_to_skb(skb, peer_device_info, true)))
@@ -4759,7 +7645,7 @@ int notify_peer_device_state(struct sk_buff *skb,
 	peer_device_statistics_to_skb(skb, &peer_device_statistics, !capable(CAP_SYS_ADMIN));
 	genlmsg_end(skb, dh);
 	if (multicast) {
-		err = drbd_genl_multicast_events(skb, GFP_NOWAIT);
+		err = drbd_genl_multicast_events(skb);
 		/* skb has been consumed or freed in netlink_broadcast() */
 		if (err && err != -ESRCH)
 			goto failed;
@@ -4774,13 +7660,86 @@ int notify_peer_device_state(struct sk_buff *skb,
 	return err;
 }
 
+void drbd_broadcast_peer_device_state(struct drbd_peer_device *peer_device)
+{
+	struct peer_device_info peer_device_info;
+	mutex_lock(&notification_mutex);
+	peer_device_to_info(&peer_device_info, peer_device);
+	notify_peer_device_state(NULL, 0, peer_device, &peer_device_info, NOTIFY_CHANGE);
+	mutex_unlock(&notification_mutex);
+}
+
+static int notify_path_state(struct sk_buff *skb,
+		       unsigned int seq,
+		       /* until we have a backpointer in drbd_path, we need an explicit connection: */
+		       struct drbd_connection *connection,
+		       struct drbd_path *path,
+		       struct drbd_path_info *path_info,
+		       enum drbd_notification_type type)
+{
+	struct drbd_resource *resource = connection->resource;
+	struct drbd_genlmsghdr *dh;
+	bool multicast = false;
+	int err;
+
+	if (!skb) {
+		seq = atomic_inc_return(&drbd_genl_seq);
+		skb = genlmsg_new(NLMSG_GOODSIZE, GFP_NOIO);
+		err = -ENOMEM;
+		if (!skb)
+			goto failed;
+		multicast = true;
+	}
+
+	err = -EMSGSIZE;
+	dh = genlmsg_put(skb, 0, seq, &drbd_genl_family, 0, DRBD_PATH_STATE);
+	if (!dh)
+		goto nla_put_failure;
+
+	dh->minor = -1U;
+	dh->ret_code = NO_ERROR;
+	if (nla_put_drbd_cfg_context(skb, resource, connection, NULL, path) ||
+	    nla_put_notification_header(skb, type) ||
+	    drbd_path_info_to_skb(skb, path_info, true))
+		goto nla_put_failure;
+	genlmsg_end(skb, dh);
+	if (multicast) {
+		err = drbd_genl_multicast_events(skb);
+		/* skb has been consumed or freed in netlink_broadcast() */
+		if (err && err != -ESRCH)
+			goto failed;
+	}
+	return 0;
+
+nla_put_failure:
+	nlmsg_free(skb);
+failed:
+	/* FIXME add path specifics to our drbd_polymorph_printk.h */
+	drbd_err(connection, "path: Error %d while broadcasting event. Event seq:%u\n",
+		 err, seq);
+	return err;
+}
+
+int notify_path(struct drbd_connection *connection, struct drbd_path *path, enum drbd_notification_type type)
+{
+	struct drbd_path_info path_info;
+	int err;
+
+	path_info.path_established = test_bit(TR_ESTABLISHED, &path->flags);
+	mutex_lock(&notification_mutex);
+	err = notify_path_state(NULL, 0, connection, path, &path_info, type);
+	mutex_unlock(&notification_mutex);
+	return err;
+}
+
 void notify_helper(enum drbd_notification_type type,
 		   struct drbd_device *device, struct drbd_connection *connection,
 		   const char *name, int status)
 {
 	struct drbd_resource *resource = device ? device->resource : connection->resource;
 	struct drbd_helper_info helper_info;
-	unsigned int seq = atomic_inc_return(&notify_genl_seq);
+	unsigned int seq = atomic_inc_return(&drbd_genl_seq);
 	struct sk_buff *skb = NULL;
 	struct drbd_genlmsghdr *dh;
 	int err;
@@ -4801,12 +7760,12 @@ void notify_helper(enum drbd_notification_type type,
 	dh->minor = device ? device->minor : -1;
 	dh->ret_code = NO_ERROR;
 	mutex_lock(&notification_mutex);
-	if (nla_put_drbd_cfg_context(skb, resource, connection, device) ||
+	if (nla_put_drbd_cfg_context(skb, resource, connection, device, NULL) ||
 	    nla_put_notification_header(skb, type) ||
 	    drbd_helper_info_to_skb(skb, &helper_info, true))
 		goto unlock_fail;
 	genlmsg_end(skb, dh);
-	err = drbd_genl_multicast_events(skb, GFP_NOWAIT);
+	err = drbd_genl_multicast_events(skb);
 	skb = NULL;
 	/* skb has been consumed or freed in netlink_broadcast() */
 	if (err && err != -ESRCH)
@@ -4859,7 +7818,8 @@ static unsigned int notifications_for_state_change(struct drbd_state_change *sta
 	return 1 +
 	       state_change->n_connections +
 	       state_change->n_devices +
-	       state_change->n_devices * state_change->n_connections;
+	       state_change->n_devices * state_change->n_connections +
+	       state_change->n_paths;
 }
 
 static int get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
@@ -4871,7 +7831,7 @@ static int get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
 	int err = 0;
 
 	/* There is no need for taking notification_mutex here: it doesn't
-	   matter if the initial state events mix with later state chage
+	   matter if the initial state events mix with later state change
 	   events; we can always tell the events apart by the NOTIFY_EXISTS
 	   flag. */
 
@@ -4884,7 +7844,7 @@ static int get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
 	if (cb->args[4] < cb->args[3])
 		flags |= NOTIFY_CONTINUES;
 	if (n < 1) {
-		err = notify_resource_state_change(skb, seq, state_change->resource,
+		err = notify_resource_state_change(skb, seq, state_change,
 					     NOTIFY_EXISTS | flags);
 		goto next;
 	}
@@ -4895,6 +7855,18 @@ static int get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
 		goto next;
 	}
 	n -= state_change->n_connections;
+	if (n < state_change->n_paths) {
+		struct drbd_path_state *path_state = &state_change->paths[n];
+		struct drbd_path_info path_info;
+
+		path_info.path_established = path_state->path_established;
+		err = notify_path_state(skb, seq,
+				path_state->connection,
+				path_state->path,
+				&path_info, NOTIFY_EXISTS | flags);
+		goto next;
+	}
+	n -= state_change->n_paths;
 	if (n < state_change->n_devices) {
 		err = notify_device_state_change(skb, seq, &state_change->devices[n],
 					   NOTIFY_EXISTS | flags);
@@ -4906,6 +7878,7 @@ static int get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
 						NOTIFY_EXISTS | flags);
 		goto next;
 	}
+	n -= state_change->n_devices * state_change->n_connections;
 
 next:
 	if (cb->args[4] == cb->args[3]) {
@@ -4919,11 +7892,25 @@ static int get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
 out:
 	if (err)
 		return err;
-	else
-		return skb->len;
+	return skb->len;
+}
+
+static int drbd_adm_get_initial_state_done(struct netlink_callback *cb)
+{
+	LIST_HEAD(head);
+	if (cb->args[0]) {
+		struct drbd_state_change *state_change =
+			(struct drbd_state_change *)cb->args[0];
+		cb->args[0] = 0;
+
+		/* connect list to head */
+		list_add(&head, &state_change->list);
+		free_state_changes(&head);
+	}
+	return 0;
 }
 
-int drbd_adm_get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
+static int drbd_adm_get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
 {
 	struct drbd_resource *resource;
 	LIST_HEAD(head);
@@ -4931,14 +7918,6 @@ int drbd_adm_get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
 	if (cb->args[5] >= 1) {
 		if (cb->args[5] > 1)
 			return get_initial_state(skb, cb);
-		if (cb->args[0]) {
-			struct drbd_state_change *state_change =
-				(struct drbd_state_change *)cb->args[0];
-
-			/* connect list to head */
-			list_add(&head, &state_change->list);
-			free_state_changes(&head);
-		}
 		return 0;
 	}
 
@@ -4947,7 +7926,9 @@ int drbd_adm_get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
 	for_each_resource(resource, &drbd_resources) {
 		struct drbd_state_change *state_change;
 
-		state_change = remember_old_state(resource, GFP_KERNEL);
+		read_lock_irq(&resource->state_rwlock);
+		state_change = remember_state_change(resource, GFP_ATOMIC);
+		read_unlock_irq(&resource->state_rwlock);
 		if (!state_change) {
 			if (!list_empty(&head))
 				free_state_changes(&head);
@@ -4971,3 +7952,144 @@ int drbd_adm_get_initial_state(struct sk_buff *skb, struct netlink_callback *cb)
 	cb->args[2] = cb->nlh->nlmsg_seq;
 	return get_initial_state(skb, cb);
 }
+
+static int drbd_adm_forget_peer(struct sk_buff *skb, struct genl_info *info)
+{
+	struct drbd_config_context adm_ctx;
+	struct drbd_resource *resource;
+	struct drbd_device *device;
+	struct forget_peer_parms parms = { };
+	enum drbd_ret_code retcode;
+	int vnr, peer_node_id, err;
+
+	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_RESOURCE);
+	if (!adm_ctx.reply_skb)
+		return retcode;
+
+	resource = adm_ctx.resource;
+
+	err = forget_peer_parms_from_attrs(&parms, info);
+	if (err) {
+		retcode = ERR_MANDATORY_TAG;
+		drbd_msg_put_info(adm_ctx.reply_skb, from_attrs_err_to_txt(err));
+		goto out_no_adm_mutex;
+	}
+
+	if (mutex_lock_interruptible(&resource->adm_mutex)) {
+		retcode = ERR_INTR;
+		goto out_no_adm_mutex;
+	}
+
+	peer_node_id = parms.forget_peer_node_id;
+	if (drbd_connection_by_node_id(resource, peer_node_id)) {
+		retcode = ERR_NET_CONFIGURED;
+		goto out;
+	}
+
+	if (peer_node_id < 0 || peer_node_id >= DRBD_NODE_ID_MAX) {
+		retcode = ERR_INVALID_PEER_NODE_ID;
+		goto out;
+	}
+
+	idr_for_each_entry(&resource->devices, device, vnr)
+		clear_peer_slot(device, peer_node_id, 0);
+out:
+	mutex_unlock(&resource->adm_mutex);
+out_no_adm_mutex:
+	idr_for_each_entry(&resource->devices, device, vnr)
+		drbd_md_sync_if_dirty(device);
+
+	drbd_adm_finish(&adm_ctx, info, (enum drbd_ret_code)retcode);
+	return 0;
+}
+
+static enum drbd_ret_code validate_new_resource_name(const struct drbd_resource *resource, const char *new_name)
+{
+	enum drbd_ret_code retcode = drbd_check_name_str(new_name, drbd_strict_names);
+
+	if (retcode == NO_ERROR) {
+		struct drbd_resource *next_resource;
+		rcu_read_lock();
+		for_each_resource_rcu(next_resource, &drbd_resources) {
+			if (strcmp(next_resource->name, new_name) == 0) {
+				retcode = ERR_ALREADY_EXISTS;
+				break;
+			}
+		}
+		rcu_read_unlock();
+	}
+	return retcode;
+}
+
+static int drbd_adm_rename_resource(struct sk_buff *skb, struct genl_info *info)
+{
+	struct drbd_config_context adm_ctx;
+	struct drbd_resource *resource;
+	struct drbd_device *device;
+	struct rename_resource_info rename_resource_info;
+	struct rename_resource_parms parms = { };
+	char *old_res_name, *new_res_name;
+	enum drbd_ret_code retcode;
+	enum drbd_ret_code validate_err;
+	int err;
+	int vnr;
+
+	mutex_lock(&resources_mutex);
+	retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_RESOURCE);
+	if (!adm_ctx.reply_skb) {
+		mutex_unlock(&resources_mutex);
+		return retcode;
+	}
+
+	resource = adm_ctx.resource;
+
+	err = rename_resource_parms_from_attrs(&parms, info);
+	if (err) {
+		retcode = ERR_MANDATORY_TAG;
+		drbd_msg_put_info(adm_ctx.reply_skb, from_attrs_err_to_txt(err));
+		goto out;
+	}
+
+	validate_err = validate_new_resource_name(resource, parms.new_resource_name);
+	if (validate_err != NO_ERROR) {
+		if (validate_err == ERR_ALREADY_EXISTS) {
+			drbd_msg_sprintf_info(adm_ctx.reply_skb,
+				"Cannot rename to %s: a resource with that name already exists\n",
+				 parms.new_resource_name);
+		} else {
+			drbd_msg_put_name_error(adm_ctx.reply_skb, validate_err);
+		}
+		retcode = validate_err;
+		goto out;
+	}
+
+	drbd_info(resource, "Renaming to %s\n", parms.new_resource_name);
+
+	strscpy(rename_resource_info.res_new_name, parms.new_resource_name, sizeof(rename_resource_info.res_new_name));
+	rename_resource_info.res_new_name_len = min(strlen(parms.new_resource_name), sizeof(rename_resource_info.res_new_name));
+
+	mutex_lock(&notification_mutex);
+	notify_resource_state(NULL, 0, resource, NULL, &rename_resource_info, NOTIFY_RENAME);
+	mutex_unlock(&notification_mutex);
+
+	new_res_name = kstrdup(parms.new_resource_name, GFP_KERNEL);
+	if (!new_res_name) {
+		retcode = ERR_NOMEM;
+		goto out;
+	}
+	old_res_name = resource->name;
+	resource->name = new_res_name;
+	kvfree_rcu_mightsleep(old_res_name);
+
+	drbd_debugfs_resource_rename(resource, new_res_name);
+
+	idr_for_each_entry(&resource->devices, device, vnr) {
+		kobject_uevent(&disk_to_dev(device->vdisk)->kobj, KOBJ_CHANGE);
+	}
+
+out:
+	mutex_unlock(&resources_mutex);
+	drbd_adm_finish(&adm_ctx, info, retcode);
+	return 0;
+}
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 19/20] drbd: update monitoring interfaces for multi-peer topology
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
                   ` (17 preceding siblings ...)
  2026-03-27 22:38 ` [PATCH 18/20] drbd: rework netlink management interface for DRBD 9 Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-27 22:38 ` [PATCH 20/20] drbd: remove BROKEN for DRBD Christoph Böhmwalder
  19 siblings, 0 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder, Joel Colledge

Remove the /proc/drbd inline status display; detailed per-peer
monitoring moves to debugfs and netlink.
In DRBD 9, /proc/drbd exposes only version and build information.
The "legacy 8.4" compat mechanism restores the old output format when
needed, preserving compatibility for existing userspace.
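
For illustration, a minimal sketch of the reduced show function; the
version macros and drbd_buildtag() follow the existing driver, but
treat the exact output strings as assumptions, not the final format:

	static int drbd_seq_show(struct seq_file *seq, void *v)
	{
		seq_printf(seq, "version: " REL_VERSION " (api:%d/proto:%d-%d)\n",
			   GENL_MAGIC_VERSION, PRO_VERSION_MIN, PRO_VERSION_MAX);
		seq_printf(seq, "%s\n", drbd_buildtag());
		return 0;
	}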

Restructure the debugfs tree from a fixed single-peer layout to a
per-connection hierarchy, reflecting that DRBD 9 resources can have
multiple simultaneous peers.
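
Schematically, the new hierarchy looks roughly like this (the directory
names are illustrative, not verbatim):

	/sys/kernel/debug/drbd/
	  resources/<resource>/
	    connections/<peer>/
	      transport
	      callback_history
	      <vnr>/              (one per peer device)
	        proc_drbd
	        resync_extents
	    volumes/<vnr>/
	      act_log_extents
	      data_gen_id
	  minors/<minor> -> ../resources/<resource>/volumes/<vnr>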

Request state display now iterates over all peer connections rather
than a single network state field.
Peer request tracking moves from per-device to per-connection lists,
and the transfer log is walked under RCU.
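
A sketch of the resulting walk, assuming an RCU-protected transfer log
list; the transfer_log/tl_requests names are taken from the request
rework earlier in this series, and the print helper is hypothetical:

	rcu_read_lock();
	list_for_each_entry_rcu(req, &resource->transfer_log, tl_requests) {
		/* writers may append concurrently; print each request
		 * from a self-consistent snapshot of its state */
		seq_print_one_request(m, req, now);
	}
	rcu_read_unlock();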

Timing statistics switch from jiffies to ktime for sub-millisecond
precision.
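
Essentially, as a before/after sketch (the start_jif/start_kt field
names are assumptions):

	/* before: jiffies, i.e. timer-tick granularity */
	unsigned long dt = jiffies - req->start_jif;
	seq_printf(m, "\t%u", jiffies_to_msecs(dt));

	/* after: ktime, nanosecond resolution */
	ktime_t dt = ktime_sub(ktime_get(), req->start_kt);
	seq_printf(m, "\t%lld", ktime_to_us(dt));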

Transport buffer statistics are abstracted through a transport ops
callback instead of reaching into TCP internals.
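
Conceptually, a sketch of the abstraction (the exact callback name and
statistics fields in drbd_transport.h may differ):

	struct drbd_transport_stats {
		unsigned int unread_received;
		unsigned int unacked_send;
		unsigned int send_buffer_size;
		unsigned int send_buffer_used;
	};

	/* in struct drbd_transport_ops: */
	void (*stats)(struct drbd_transport *transport,
		      struct drbd_transport_stats *stats);

Debugfs then calls transport->ops->stats() instead of peeking at
sk->sk_wmem_queued on the TCP socket directly.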

New debugfs files expose two-phase commit state, transport details,
interval tree contents, activity log histograms, and per-peer resync
progress.

The connection and replication state string tables are split to match
the DRBD 9 split-state model, where transport-level connection state
and replication-level sync state are tracked separately per peer.
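
For example, the lookup tables could be split roughly like this (table
contents abbreviated; the replication states appear elsewhere in this
series, while C_CONNECTING is an assumption):

	static const char * const conn_state_names[] = {
		[C_STANDALONE]	= "StandAlone",
		[C_CONNECTING]	= "Connecting",
		[C_CONNECTED]	= "Connected",
	};

	static const char * const repl_state_names[] = {
		[L_OFF]		= "Off",
		[L_ESTABLISHED]	= "Established",
		[L_SYNC_SOURCE]	= "SyncSource",
		[L_SYNC_TARGET]	= "SyncTarget",
	};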

Co-developed-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Co-developed-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Co-developed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Co-developed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/drbd_debugfs.c   | 1657 ++++++++++++++++++++++-----
 drivers/block/drbd/drbd_interval.c  |   35 +-
 drivers/block/drbd/drbd_legacy_84.c |   25 +-
 drivers/block/drbd/drbd_proc.c      |  320 +-----
 drivers/block/drbd/drbd_strings.c   |  219 +++-
 drivers/block/drbd/drbd_transport.c |   24 +
 6 files changed, 1666 insertions(+), 614 deletions(-)

diff --git a/drivers/block/drbd/drbd_debugfs.c b/drivers/block/drbd/drbd_debugfs.c
index 12460b584bcb..fec9ec3d189e 100644
--- a/drivers/block/drbd/drbd_debugfs.c
+++ b/drivers/block/drbd/drbd_debugfs.c
@@ -1,17 +1,18 @@
 // SPDX-License-Identifier: GPL-2.0-only
-#define pr_fmt(fmt) "drbd debugfs: " fmt
+#define pr_fmt(fmt)	KBUILD_MODNAME " debugfs: " fmt
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/debugfs.h>
 #include <linux/seq_file.h>
-#include <linux/stat.h>
 #include <linux/jiffies.h>
 #include <linux/list.h>
+#include <generated/utsrelease.h>
 
 #include "drbd_int.h"
 #include "drbd_req.h"
 #include "drbd_debugfs.h"
-
+#include "drbd_transport.h"
+#include "drbd_dax_pmem.h"
 
 /**********************************************************************
  * Whenever you change the file format, remember to bump the version. *
@@ -19,26 +20,48 @@
 
 static struct dentry *drbd_debugfs_root;
 static struct dentry *drbd_debugfs_version;
+static struct dentry *drbd_debugfs_refcounts;
 static struct dentry *drbd_debugfs_resources;
 static struct dentry *drbd_debugfs_minors;
+static struct dentry *drbd_debugfs_compat;
+
+static void seq_print_node_mask(struct seq_file *m, struct drbd_resource *resource, u64 nodes)
+{
+	struct drbd_connection *connection;
+
+	rcu_read_lock();
+	for_each_connection_rcu(connection, resource) {
+		if (NODE_MASK(connection->peer_node_id) & nodes) {
+			char *name = rcu_dereference((connection)->transport.net_conf)->name;
 
-static void seq_print_age_or_dash(struct seq_file *m, bool valid, unsigned long dt)
+			seq_printf(m, "%s, ", name);
+		}
+	}
+	rcu_read_unlock();
+	seq_puts(m, "\n");
+}
+
+#ifdef CONFIG_DRBD_TIMING_STATS
+static void seq_print_age_or_dash(struct seq_file *m, bool valid, ktime_t dt)
 {
 	if (valid)
-		seq_printf(m, "\t%d", jiffies_to_msecs(dt));
+		seq_printf(m, "\t%d", (int)ktime_to_ms(dt));
 	else
-		seq_printf(m, "\t-");
+		seq_puts(m, "\t-");
 }
+#endif
 
 static void __seq_print_rq_state_bit(struct seq_file *m,
 	bool is_set, char *sep, const char *set_name, const char *unset_name)
 {
 	if (is_set && set_name) {
-		seq_putc(m, *sep);
+		if (*sep)
+			seq_putc(m, *sep);
 		seq_puts(m, set_name);
 		*sep = '|';
 	} else if (!is_set && unset_name) {
-		seq_putc(m, *sep);
+		if (*sep)
+			seq_putc(m, *sep);
 		seq_puts(m, unset_name);
 		*sep = '|';
 	}
@@ -53,17 +76,20 @@ static void seq_print_rq_state_bit(struct seq_file *m,
 /* pretty print enum drbd_req_state_bits req->rq_state */
 static void seq_print_request_state(struct seq_file *m, struct drbd_request *req)
 {
-	unsigned int s = req->rq_state;
+	struct drbd_device *device = req->device;
+	struct drbd_peer_device *peer_device;
+	unsigned int s = req->local_rq_state;
 	char sep = ' ';
 	seq_printf(m, "\t0x%08x", s);
-	seq_printf(m, "\tmaster: %s", req->master_bio ? "pending" : "completed");
+	seq_puts(m, "\tmaster:");
+	__seq_print_rq_state_bit(m, req->master_bio, &sep, "pending", "completed");
+	seq_print_rq_state_bit(m, s & RQ_POSTPONED, &sep, "postponed");
+	seq_print_rq_state_bit(m, s & RQ_COMPLETION_SUSP, &sep, "suspended");
 
 	/* RQ_WRITE ignored, already reported */
 	seq_puts(m, "\tlocal:");
-	seq_print_rq_state_bit(m, s & RQ_IN_ACT_LOG, &sep, "in-AL");
-	seq_print_rq_state_bit(m, s & RQ_POSTPONED, &sep, "postponed");
-	seq_print_rq_state_bit(m, s & RQ_COMPLETION_SUSP, &sep, "suspended");
 	sep = ' ';
+	seq_print_rq_state_bit(m, s & RQ_IN_ACT_LOG, &sep, "in-AL");
 	seq_print_rq_state_bit(m, s & RQ_LOCAL_PENDING, &sep, "pending");
 	seq_print_rq_state_bit(m, s & RQ_LOCAL_COMPLETED, &sep, "completed");
 	seq_print_rq_state_bit(m, s & RQ_LOCAL_ABORTED, &sep, "aborted");
@@ -71,64 +97,99 @@ static void seq_print_request_state(struct seq_file *m, struct drbd_request *req
 	if (sep == ' ')
 		seq_puts(m, " -");
 
-	/* for_each_connection ... */
-	seq_printf(m, "\tnet:");
-	sep = ' ';
-	seq_print_rq_state_bit(m, s & RQ_NET_PENDING, &sep, "pending");
-	seq_print_rq_state_bit(m, s & RQ_NET_QUEUED, &sep, "queued");
-	seq_print_rq_state_bit(m, s & RQ_NET_SENT, &sep, "sent");
-	seq_print_rq_state_bit(m, s & RQ_NET_DONE, &sep, "done");
-	seq_print_rq_state_bit(m, s & RQ_NET_SIS, &sep, "sis");
-	seq_print_rq_state_bit(m, s & RQ_NET_OK, &sep, "ok");
-	if (sep == ' ')
-		seq_puts(m, " -");
+	for_each_peer_device(peer_device, device) {
+		s = req->net_rq_state[peer_device->node_id];
+		seq_printf(m, "\tnet[%d]:", peer_device->node_id);
+		sep = ' ';
+		seq_print_rq_state_bit(m, s & RQ_NET_PENDING, &sep, "pending");
+		seq_print_rq_state_bit(m, s & RQ_NET_PENDING_OOS, &sep, "pending-oos");
+		seq_print_rq_state_bit(m, s & RQ_NET_QUEUED, &sep, "queued");
+		seq_print_rq_state_bit(m, s & RQ_NET_READY, &sep, "ready");
+		seq_print_rq_state_bit(m, s & RQ_NET_SENT, &sep, "sent");
+		seq_print_rq_state_bit(m, s & RQ_NET_DONE, &sep, "done");
+		seq_print_rq_state_bit(m, s & RQ_NET_SIS, &sep, "sis");
+		seq_print_rq_state_bit(m, s & RQ_NET_OK, &sep, "ok");
+		if (sep == ' ')
+			seq_puts(m, " -");
+
+		seq_puts(m, " :");
+		sep = ' ';
+		seq_print_rq_state_bit(m, s & RQ_EXP_RECEIVE_ACK, &sep, "B");
+		seq_print_rq_state_bit(m, s & RQ_EXP_WRITE_ACK, &sep, "C");
+		seq_print_rq_state_bit(m, s & RQ_EXP_BARR_ACK, &sep, "barr");
+		if (sep == ' ')
+			seq_puts(m, " -");
+	}
+	seq_putc(m, '\n');
+}
 
-	seq_printf(m, " :");
-	sep = ' ';
-	seq_print_rq_state_bit(m, s & RQ_EXP_RECEIVE_ACK, &sep, "B");
-	seq_print_rq_state_bit(m, s & RQ_EXP_WRITE_ACK, &sep, "C");
-	seq_print_rq_state_bit(m, s & RQ_EXP_BARR_ACK, &sep, "barr");
-	if (sep == ' ')
-		seq_puts(m, " -");
-	seq_printf(m, "\n");
+#define memberat(PTR, TYPE, OFFSET) (*(TYPE *)((char *)PTR + OFFSET))
+
+#ifdef CONFIG_DRBD_TIMING_STATS
+static void print_one_age_or_dash(struct seq_file *m, struct drbd_request *req,
+				  unsigned int set_mask, unsigned int clear_mask,
+				  ktime_t now, size_t offset)
+{
+	struct drbd_device *device = req->device;
+	struct drbd_peer_device *peer_device;
+
+	for_each_peer_device(peer_device, device) {
+		unsigned int s = req->net_rq_state[peer_device->node_id];
+
+		if (s & set_mask && !(s & clear_mask)) {
+			ktime_t ktime = ktime_sub(now, memberat(req, ktime_t, offset));
+			seq_printf(m, "\t[%d]%d", peer_device->node_id, (int)ktime_to_ms(ktime));
+			return;
+		}
+	}
+	seq_puts(m, "\t-");
 }
+#endif
 
-static void seq_print_one_request(struct seq_file *m, struct drbd_request *req, unsigned long now)
+static void seq_print_one_request(struct seq_file *m, struct drbd_request *req, ktime_t now, unsigned long jif)
 {
 	/* change anything here, fixup header below! */
-	unsigned int s = req->rq_state;
+	unsigned int s = req->local_rq_state;
+	unsigned long flags;
 
+	spin_lock_irqsave(&req->rq_lock, flags);
 #define RQ_HDR_1 "epoch\tsector\tsize\trw"
 	seq_printf(m, "0x%x\t%llu\t%u\t%s",
 		req->epoch,
 		(unsigned long long)req->i.sector, req->i.size >> 9,
 		(s & RQ_WRITE) ? "W" : "R");
 
+#ifdef CONFIG_DRBD_TIMING_STATS
 #define RQ_HDR_2 "\tstart\tin AL\tsubmit"
-	seq_printf(m, "\t%d", jiffies_to_msecs(now - req->start_jif));
-	seq_print_age_or_dash(m, s & RQ_IN_ACT_LOG, now - req->in_actlog_jif);
-	seq_print_age_or_dash(m, s & RQ_LOCAL_PENDING, now - req->pre_submit_jif);
+	seq_printf(m, "\t%d", (int)ktime_to_ms(ktime_sub(now, req->start_kt)));
+	seq_print_age_or_dash(m, s & RQ_IN_ACT_LOG, ktime_sub(now, req->in_actlog_kt));
+	seq_print_age_or_dash(m, s & RQ_LOCAL_PENDING, ktime_sub(now, req->pre_submit_kt));
 
 #define RQ_HDR_3 "\tsent\tacked\tdone"
-	seq_print_age_or_dash(m, s & RQ_NET_SENT, now - req->pre_send_jif);
-	seq_print_age_or_dash(m, (s & RQ_NET_SENT) && !(s & RQ_NET_PENDING), now - req->acked_jif);
-	seq_print_age_or_dash(m, s & RQ_NET_DONE, now - req->net_done_jif);
-
+	print_one_age_or_dash(m, req, RQ_NET_SENT, 0, now, offsetof(typeof(*req), pre_send_kt));
+	print_one_age_or_dash(m, req, RQ_NET_SENT, RQ_NET_PENDING, now, offsetof(typeof(*req), acked_kt));
+	print_one_age_or_dash(m, req, RQ_NET_DONE, 0, now, offsetof(typeof(*req), net_done_kt));
+#else
+#define RQ_HDR_2 "\tstart"
+#define RQ_HDR_3 ""
+	seq_printf(m, "\t%d", (int)jiffies_to_msecs(jif - req->start_jif));
+#endif
 #define RQ_HDR_4 "\tstate\n"
 	seq_print_request_state(m, req);
+	spin_unlock_irqrestore(&req->rq_lock, flags);
 }
 #define RQ_HDR RQ_HDR_1 RQ_HDR_2 RQ_HDR_3 RQ_HDR_4
 
-static void seq_print_minor_vnr_req(struct seq_file *m, struct drbd_request *req, unsigned long now)
+static void seq_print_minor_vnr_req(struct seq_file *m, struct drbd_request *req, ktime_t now, unsigned long jif)
 {
 	seq_printf(m, "%u\t%u\t", req->device->minor, req->device->vnr);
-	seq_print_one_request(m, req, now);
+	seq_print_one_request(m, req, now, jif);
 }
 
-static void seq_print_resource_pending_meta_io(struct seq_file *m, struct drbd_resource *resource, unsigned long now)
+static void seq_print_resource_pending_meta_io(struct seq_file *m, struct drbd_resource *resource, unsigned long jif)
 {
 	struct drbd_device *device;
-	unsigned int i;
+	int i;
 
 	seq_puts(m, "minor\tvnr\tstart\tsubmit\tintent\n");
 	rcu_read_lock();
@@ -142,45 +203,46 @@ static void seq_print_resource_pending_meta_io(struct seq_file *m, struct drbd_r
 		if (atomic_read(&tmp.in_use)) {
 			seq_printf(m, "%u\t%u\t%d\t",
 				device->minor, device->vnr,
-				jiffies_to_msecs(now - tmp.start_jif));
+				jiffies_to_msecs(jif - tmp.start_jif));
 			if (time_before(tmp.submit_jif, tmp.start_jif))
 				seq_puts(m, "-\t");
 			else
-				seq_printf(m, "%d\t", jiffies_to_msecs(now - tmp.submit_jif));
+				seq_printf(m, "%d\t", jiffies_to_msecs(jif - tmp.submit_jif));
 			seq_printf(m, "%s\n", tmp.current_use);
 		}
 	}
 	rcu_read_unlock();
 }
 
-static void seq_print_waiting_for_AL(struct seq_file *m, struct drbd_resource *resource, unsigned long now)
+static void seq_print_waiting_for_AL(struct seq_file *m, struct drbd_resource *resource, ktime_t now, unsigned long jif)
 {
 	struct drbd_device *device;
-	unsigned int i;
+	int i;
 
 	seq_puts(m, "minor\tvnr\tage\t#waiting\n");
 	rcu_read_lock();
 	idr_for_each_entry(&resource->devices, device, i) {
-		unsigned long jif;
 		struct drbd_request *req;
 		int n = atomic_read(&device->ap_actlog_cnt);
 		if (n) {
-			spin_lock_irq(&device->resource->req_lock);
+			spin_lock_irq(&device->pending_completion_lock);
 			req = list_first_entry_or_null(&device->pending_master_completion[1],
 				struct drbd_request, req_pending_master_completion);
 			/* if the oldest request does not wait for the activity log
 			 * it is not interesting for us here */
-			if (req && !(req->rq_state & RQ_IN_ACT_LOG))
-				jif = req->start_jif;
-			else
+			if (req && (req->local_rq_state & RQ_IN_ACT_LOG))
 				req = NULL;
-			spin_unlock_irq(&device->resource->req_lock);
+			spin_unlock_irq(&device->pending_completion_lock);
 		}
 		if (n) {
 			seq_printf(m, "%u\t%u\t", device->minor, device->vnr);
-			if (req)
-				seq_printf(m, "%u\t", jiffies_to_msecs(now - jif));
-			else
+			if (req) {
+#ifdef CONFIG_DRBD_TIMING_STATS
+				seq_printf(m, "%d\t", (int)ktime_to_ms(ktime_sub(now, req->start_kt)));
+#else
+				seq_printf(m, "%d\t", (int)jiffies_to_msecs(jif - req->start_jif));
+#endif
+			} else
 				seq_puts(m, "-\t");
 			seq_printf(m, "%u\n", n);
 		}
@@ -188,13 +250,13 @@ static void seq_print_waiting_for_AL(struct seq_file *m, struct drbd_resource *r
 	rcu_read_unlock();
 }
 
-static void seq_print_device_bitmap_io(struct seq_file *m, struct drbd_device *device, unsigned long now)
+static void seq_print_device_bitmap_io(struct seq_file *m, struct drbd_device *device, unsigned long jif)
 {
 	struct drbd_bm_aio_ctx *ctx;
 	unsigned long start_jif;
 	unsigned int in_flight;
 	unsigned int flags;
-	spin_lock_irq(&device->resource->req_lock);
+	spin_lock_irq(&device->pending_bmio_lock);
 	ctx = list_first_entry_or_null(&device->pending_bitmap_io, struct drbd_bm_aio_ctx, list);
 	if (ctx && ctx->done)
 		ctx = NULL;
@@ -203,25 +265,25 @@ static void seq_print_device_bitmap_io(struct seq_file *m, struct drbd_device *d
 		in_flight = atomic_read(&ctx->in_flight);
 		flags = ctx->flags;
 	}
-	spin_unlock_irq(&device->resource->req_lock);
+	spin_unlock_irq(&device->pending_bmio_lock);
 	if (ctx) {
 		seq_printf(m, "%u\t%u\t%c\t%u\t%u\n",
 			device->minor, device->vnr,
 			(flags & BM_AIO_READ) ? 'R' : 'W',
-			jiffies_to_msecs(now - start_jif),
+			jiffies_to_msecs(jif - start_jif),
 			in_flight);
 	}
 }
 
-static void seq_print_resource_pending_bitmap_io(struct seq_file *m, struct drbd_resource *resource, unsigned long now)
+static void seq_print_resource_pending_bitmap_io(struct seq_file *m, struct drbd_resource *resource, unsigned long jif)
 {
 	struct drbd_device *device;
-	unsigned int i;
+	int i;
 
 	seq_puts(m, "minor\tvnr\trw\tage\t#in-flight\n");
 	rcu_read_lock();
 	idr_for_each_entry(&resource->devices, device, i) {
-		seq_print_device_bitmap_io(m, device, now);
+		seq_print_device_bitmap_io(m, device, jif);
 	}
 	rcu_read_unlock();
 }
@@ -230,104 +292,196 @@ static void seq_print_resource_pending_bitmap_io(struct seq_file *m, struct drbd
 static void seq_print_peer_request_flags(struct seq_file *m, struct drbd_peer_request *peer_req)
 {
 	unsigned long f = peer_req->flags;
-	char sep = ' ';
-
-	__seq_print_rq_state_bit(m, f & EE_SUBMITTED, &sep, "submitted", "preparing");
-	__seq_print_rq_state_bit(m, f & EE_APPLICATION, &sep, "application", "internal");
-	seq_print_rq_state_bit(m, f & EE_CALL_AL_COMPLETE_IO, &sep, "in-AL");
+	char sep = 0;
+
+	seq_print_rq_state_bit(m, test_bit(INTERVAL_SUBMIT_CONFLICT_QUEUED, &peer_req->i.flags),
+			&sep, "submit-conflict-queued");
+	seq_print_rq_state_bit(m, test_bit(INTERVAL_SUBMITTED, &peer_req->i.flags),
+			&sep, "submitted");
+	seq_print_rq_state_bit(m, test_bit(INTERVAL_CONFLICT, &peer_req->i.flags),
+			&sep, "conflict");
+	seq_print_rq_state_bit(m, test_bit(INTERVAL_SENT, &peer_req->i.flags),
+			&sep, "sent");
+	seq_print_rq_state_bit(m, test_bit(INTERVAL_READY_TO_SEND, &peer_req->i.flags),
+			&sep, "ready-to-send");
+	seq_print_rq_state_bit(m, test_bit(INTERVAL_RECEIVED, &peer_req->i.flags),
+			&sep, "received");
+	seq_print_rq_state_bit(m, test_bit(INTERVAL_BACKING_COMPLETED, &peer_req->i.flags),
+			&sep, "backing-completed");
+	seq_print_rq_state_bit(m, test_bit(INTERVAL_COMPLETED, &peer_req->i.flags),
+			&sep, "completed");
+	seq_print_rq_state_bit(m, f & EE_IS_BARRIER, &sep, "barr");
 	seq_print_rq_state_bit(m, f & EE_SEND_WRITE_ACK, &sep, "C");
 	seq_print_rq_state_bit(m, f & EE_MAY_SET_IN_SYNC, &sep, "set-in-sync");
+	seq_print_rq_state_bit(m, f & EE_SET_OUT_OF_SYNC, &sep, "set-out-of-sync");
+	seq_print_rq_state_bit(m, peer_req->i.type == INTERVAL_PEER_WRITE && !(f & EE_IN_ACTLOG), &sep, "blocked-on-al");
 	seq_print_rq_state_bit(m, f & EE_TRIM, &sep, "trim");
 	seq_print_rq_state_bit(m, f & EE_ZEROOUT, &sep, "zero-out");
 	seq_print_rq_state_bit(m, f & EE_WRITE_SAME, &sep, "write-same");
 	seq_putc(m, '\n');
 }
 
-static void seq_print_peer_request(struct seq_file *m,
-	struct drbd_device *device, struct list_head *lh,
-	unsigned long now)
+enum drbd_peer_request_state {
+	PRS_NEW,
+	PRS_READY_TO_SEND,
+	PRS_SUBMITTED,
+	PRS_LAST,
+};
+
+static enum drbd_peer_request_state drbd_get_peer_request_state(struct drbd_peer_request *peer_req)
+{
+	unsigned long interval_flags = peer_req->i.flags;
+
+	if (interval_flags & INTERVAL_SUBMITTED)
+		return PRS_SUBMITTED;
+
+	if (interval_flags & INTERVAL_READY_TO_SEND)
+		return PRS_READY_TO_SEND;
+
+	return PRS_NEW;
+}
+
+static void seq_print_peer_request_one(struct seq_file *m,
+	struct drbd_peer_request *peer_req,
+	const char *list_name, unsigned long jif)
+{
+	struct drbd_peer_device *peer_device = peer_req->peer_device;
+	struct drbd_device *device = peer_device ? peer_device->device : NULL;
+
+	seq_printf(m, "%s\t", list_name);
+
+	if (device)
+		seq_printf(m, "%u\t%u\t", device->minor, device->vnr);
+
+	seq_printf(m, "%llu\t%u\t%s\t%u\t",
+			(unsigned long long)peer_req->i.sector, peer_req->i.size >> 9,
+			drbd_interval_type_str(&peer_req->i),
+			jiffies_to_msecs(jif - peer_req->submit_jif));
+	seq_print_peer_request_flags(m, peer_req);
+}
+
+static void seq_print_peer_request_w(struct seq_file *m,
+	struct drbd_connection *connection, struct list_head *lh,
+	const char *list_name, unsigned long jif)
 {
-	bool reported_preparing = false;
+	int count[PRS_LAST] = {0};
 	struct drbd_peer_request *peer_req;
+
 	list_for_each_entry(peer_req, lh, w.list) {
-		if (reported_preparing && !(peer_req->flags & EE_SUBMITTED))
-			continue;
+		enum drbd_peer_request_state state = drbd_get_peer_request_state(peer_req);
 
-		if (device)
-			seq_printf(m, "%u\t%u\t", device->minor, device->vnr);
+		count[state]++;
+		if (count[state] <= 16)
+			seq_print_peer_request_one(m, peer_req, list_name, jif);
+	}
+}
 
-		seq_printf(m, "%llu\t%u\t%c\t%u\t",
-			(unsigned long long)peer_req->i.sector, peer_req->i.size >> 9,
-			(peer_req->flags & EE_WRITE) ? 'W' : 'R',
-			jiffies_to_msecs(now - peer_req->submit_jif));
-		seq_print_peer_request_flags(m, peer_req);
-		if (peer_req->flags & EE_SUBMITTED)
-			break;
-		else
-			reported_preparing = true;
+static void seq_print_peer_request(struct seq_file *m,
+	struct drbd_connection *connection, struct list_head *lh,
+	const char *list_name, unsigned long jif)
+{
+	int count = 0;
+	struct drbd_peer_request *peer_req;
+
+	list_for_each_entry(peer_req, lh, recv_order) {
+		count++;
+		if (count <= 16)
+			seq_print_peer_request_one(m, peer_req, list_name, jif);
 	}
 }
 
-static void seq_print_device_peer_requests(struct seq_file *m,
-	struct drbd_device *device, unsigned long now)
+static void seq_print_connection_peer_requests(struct seq_file *m,
+	struct drbd_connection *connection, unsigned long jif)
+{
+	struct drbd_peer_device *peer_device;
+	int i;
+
+	seq_printf(m, "list\t\tminor\tvnr\tsector\tsize\ttype\t\tage\tflags\n");
+	spin_lock_irq(&connection->peer_reqs_lock);
+	seq_print_peer_request_w(m, connection, &connection->done_ee, "done\t", jif);
+	seq_print_peer_request_w(m, connection, &connection->dagtag_wait_ee, "dagtag_wait", jif);
+	seq_print_peer_request(m, connection, &connection->peer_requests, "peer_requests", jif);
+	seq_print_peer_request(m, connection, &connection->peer_reads, "peer_reads", jif);
+	idr_for_each_entry(&connection->peer_devices, peer_device, i)
+		seq_print_peer_request(m, connection, &peer_device->resync_requests,
+				"resync_requests", jif);
+	spin_unlock_irq(&connection->peer_reqs_lock);
+}
+
+static void seq_print_device_peer_flushes(struct seq_file *m,
+	struct drbd_device *device, unsigned long jif)
 {
-	seq_puts(m, "minor\tvnr\tsector\tsize\trw\tage\tflags\n");
-	spin_lock_irq(&device->resource->req_lock);
-	seq_print_peer_request(m, device, &device->active_ee, now);
-	seq_print_peer_request(m, device, &device->read_ee, now);
-	seq_print_peer_request(m, device, &device->sync_ee, now);
-	spin_unlock_irq(&device->resource->req_lock);
 	if (test_bit(FLUSH_PENDING, &device->flags)) {
 		seq_printf(m, "%u\t%u\t-\t-\tF\t%u\tflush\n",
 			device->minor, device->vnr,
-			jiffies_to_msecs(now - device->flush_jif));
+			jiffies_to_msecs(jif - device->flush_jif));
 	}
 }
 
 static void seq_print_resource_pending_peer_requests(struct seq_file *m,
-	struct drbd_resource *resource, unsigned long now)
+	struct drbd_resource *resource, unsigned long jif)
 {
+	struct drbd_connection *connection;
 	struct drbd_device *device;
-	unsigned int i;
+	int i;
 
 	rcu_read_lock();
+
+	for_each_connection_rcu(connection, resource) {
+		seq_printf(m, "oldest peer requests (peer: %s)\n",
+			rcu_dereference(connection->transport.net_conf)->name);
+		seq_print_connection_peer_requests(m, connection, jif);
+		seq_putc(m, '\n');
+	}
+
+	seq_puts(m, "flushes\n");
 	idr_for_each_entry(&resource->devices, device, i) {
-		seq_print_device_peer_requests(m, device, now);
+		seq_print_device_peer_flushes(m, device, jif);
 	}
+	seq_putc(m, '\n');
+
 	rcu_read_unlock();
 }
 
 static void seq_print_resource_transfer_log_summary(struct seq_file *m,
 	struct drbd_resource *resource,
-	struct drbd_connection *connection,
-	unsigned long now)
+	ktime_t now, unsigned long jif)
 {
 	struct drbd_request *req;
 	unsigned int count = 0;
 	unsigned int show_state = 0;
 
 	seq_puts(m, "n\tdevice\tvnr\t" RQ_HDR);
-	spin_lock_irq(&resource->req_lock);
-	list_for_each_entry(req, &connection->transfer_log, tl_requests) {
+	rcu_read_lock();
+	list_for_each_entry_rcu(req, &resource->transfer_log, tl_requests) {
+		struct drbd_device *device = req->device;
+		struct drbd_peer_device *peer_device;
 		unsigned int tmp = 0;
 		unsigned int s;
-		++count;
 
-		/* don't disable irq "forever" */
-		if (!(count & 0x1ff)) {
-			struct drbd_request *req_next;
-			kref_get(&req->kref);
-			spin_unlock_irq(&resource->req_lock);
+		/* don't disable preemption "forever" */
+		if ((count & 0x1ff) == 0x1ff) {
+			struct list_head *next_hdr;
+			/* Only get if the request hasn't already been removed from transfer_log. */
+			if (!refcount_inc_not_zero(&req->oos_send_ref))
+				continue;
+			rcu_read_unlock();
 			cond_resched();
-			spin_lock_irq(&resource->req_lock);
-			req_next = list_next_entry(req, tl_requests);
-			if (kref_put(&req->kref, drbd_req_destroy))
-				req = req_next;
-			if (&req->tl_requests == &connection->transfer_log)
-				break;
+			rcu_read_lock();
+			next_hdr = rcu_dereference(list_next_rcu(&req->tl_requests));
+			drbd_put_ref_tl_walk(req, 0, 1);
+			if (!refcount_read(&req->done_ref)) {
+				if (next_hdr == &resource->transfer_log)
+					break;
+				req = list_entry_rcu(next_hdr,
+						     struct drbd_request,
+						     tl_requests);
+			}
 		}
+		++count;
 
-		s = req->rq_state;
+		spin_lock_irq(&req->rq_lock);
+		s = req->local_rq_state;
 
 		/* This is meant to summarize timing issues, to be able to tell
 		 * local disk problems from network problems.
@@ -337,40 +491,43 @@ static void seq_print_resource_transfer_log_summary(struct seq_file *m,
 			tmp |= 1;
 		if ((s & RQ_LOCAL_MASK) && (s & RQ_LOCAL_PENDING))
 			tmp |= 2;
-		if (s & RQ_NET_MASK) {
-			if (!(s & RQ_NET_SENT))
-				tmp |= 4;
-			if (s & RQ_NET_PENDING)
-				tmp |= 8;
-			if (!(s & RQ_NET_DONE))
-				tmp |= 16;
+
+		for_each_peer_device_rcu(peer_device, device) {
+			s = READ_ONCE(req->net_rq_state[peer_device->node_id]);
+			if (s & RQ_NET_MASK) {
+				if (!(s & RQ_NET_SENT))
+					tmp |= 4;
+				if (s & RQ_NET_PENDING)
+					tmp |= 8;
+				if (!(s & RQ_NET_DONE))
+					tmp |= 16;
+			}
 		}
+		spin_unlock_irq(&req->rq_lock);
+
 		if ((tmp & show_state) == tmp)
 			continue;
 		show_state |= tmp;
 		seq_printf(m, "%u\t", count);
-		seq_print_minor_vnr_req(m, req, now);
+		seq_print_minor_vnr_req(m, req, now, jif);
 		if (show_state == 0x1f)
 			break;
 	}
-	spin_unlock_irq(&resource->req_lock);
+	rcu_read_unlock();
+	seq_printf(m, "%u total\n", count);
 }
 
-/* TODO: transfer_log and friends should be moved to resource */
-static int in_flight_summary_show(struct seq_file *m, void *pos)
+static int resource_in_flight_summary_show(struct seq_file *m, void *pos)
 {
 	struct drbd_resource *resource = m->private;
 	struct drbd_connection *connection;
+	struct drbd_transport *transport;
+	struct drbd_transport_stats transport_stats;
+	ktime_t now = ktime_get();
 	unsigned long jif = jiffies;
 
-	connection = first_connection(resource);
-	/* This does not happen, actually.
-	 * But be robust and prepare for future code changes. */
-	if (!connection || !kref_get_unless_zero(&connection->kref))
-		return -ESTALE;
-
 	/* BUMP me if you change the file format/content/presentation */
-	seq_printf(m, "v: %u\n\n", 0);
+	seq_printf(m, "v: %u\n\n", 1);
 
 	seq_puts(m, "oldest bitmap IO\n");
 	seq_print_resource_pending_bitmap_io(m, resource, jif);
@@ -380,37 +537,125 @@ static int in_flight_summary_show(struct seq_file *m, void *pos)
 	seq_print_resource_pending_meta_io(m, resource, jif);
 	seq_putc(m, '\n');
 
-	seq_puts(m, "socket buffer stats\n");
-	/* for each connection ... once we have more than one */
+	seq_puts(m, "transport buffer stats\n");
+	seq_puts(m, "peer\ttransport class\tunread receive buffer\tunacked send buffer\n");
 	rcu_read_lock();
-	if (connection->data.socket) {
-		/* open coded SIOCINQ, the "relevant" part */
-		struct tcp_sock *tp = tcp_sk(connection->data.socket->sk);
-		int answ = tp->rcv_nxt - tp->copied_seq;
-		seq_printf(m, "unread receive buffer: %u Byte\n", answ);
-		/* open coded SIOCOUTQ, the "relevant" part */
-		answ = tp->write_seq - tp->snd_una;
-		seq_printf(m, "unacked send buffer: %u Byte\n", answ);
+	for_each_connection_rcu(connection, resource) {
+		char *name;
+
+		transport = &connection->transport;
+		name = rcu_dereference(transport->net_conf)->name;
+		seq_printf(m, "%s\t%s\t", name, transport->class->name);
+
+		if (transport->class->ops.stream_ok(transport, DATA_STREAM)) {
+			transport->class->ops.stats(transport, &transport_stats);
+			seq_printf(m, "%u\t%u\n",
+				transport_stats.unread_received,
+				transport_stats.unacked_send);
+		} else {
+			seq_printf(m, "-\t-\n");
+		}
 	}
 	rcu_read_unlock();
 	seq_putc(m, '\n');
 
-	seq_puts(m, "oldest peer requests\n");
 	seq_print_resource_pending_peer_requests(m, resource, jif);
-	seq_putc(m, '\n');
 
 	seq_puts(m, "application requests waiting for activity log\n");
-	seq_print_waiting_for_AL(m, resource, jif);
+	seq_print_waiting_for_AL(m, resource, now, jif);
 	seq_putc(m, '\n');
 
 	seq_puts(m, "oldest application requests\n");
-	seq_print_resource_transfer_log_summary(m, resource, connection, jif);
+	seq_print_resource_transfer_log_summary(m, resource, now, jif);
 	seq_putc(m, '\n');
 
 	jif = jiffies - jif;
 	if (jif)
 		seq_printf(m, "generated in %d ms\n", jiffies_to_msecs(jif));
-	kref_put(&connection->kref, drbd_destroy_connection);
+	return 0;
+}
+
+static int resource_state_twopc_show(struct seq_file *m, void *pos)
+{
+	struct drbd_resource *resource = m->private;
+	struct twopc_reply twopc;
+	bool active = false;
+	unsigned long jif;
+
+	read_lock_irq(&resource->state_rwlock);
+	if (resource->remote_state_change) {
+		twopc = resource->twopc_reply;
+		active = true;
+	}
+	read_unlock_irq(&resource->state_rwlock);
+
+	seq_printf(m, "v: %u\n\n", 1);
+	if (active) {
+		struct drbd_connection *connection;
+
+		seq_printf(m,
+			   "Executing tid: %u\n"
+			   "  initiator_node_id: %d\n"
+			   "  target_node_id: %d\n",
+			   twopc.tid, twopc.initiator_node_id,
+			   twopc.target_node_id);
+
+		if (twopc.initiator_node_id != resource->res_opts.node_id) {
+			seq_puts(m, "  parent node mask: ");
+			seq_print_node_mask(m, resource, resource->twopc_parent_nodes);
+
+			if (resource->twopc_prepare_reply_cmd)
+				seq_printf(m,
+					   "  Reply sent: %s\n",
+					   resource->twopc_prepare_reply_cmd == P_TWOPC_YES ? "yes" :
+					   resource->twopc_prepare_reply_cmd == P_TWOPC_NO ? "no" :
+					   resource->twopc_prepare_reply_cmd == P_TWOPC_RETRY ? "retry" :
+					   "else!?!");
+		}
+
+		seq_puts(m, "  received replies: ");
+		rcu_read_lock();
+		for_each_connection_rcu(connection, resource) {
+			char *name = rcu_dereference((connection)->transport.net_conf)->name;
+
+			if (!test_bit(TWOPC_PREPARED, &connection->flags))
+				/* seq_printf(m, "%s n.p., ", name) * print nothing! */;
+			else if (test_bit(TWOPC_NO, &connection->flags))
+				seq_printf(m, "%s no, ", name);
+			else if (test_bit(TWOPC_RETRY, &connection->flags))
+				seq_printf(m, "%s ret, ", name);
+			else if (test_bit(TWOPC_YES, &connection->flags))
+				seq_printf(m, "%s yes, ", name);
+			else
+				seq_printf(m, "%s ___, ", name);
+		}
+		rcu_read_unlock();
+		seq_puts(m, "\n");
+		if (twopc.initiator_node_id != resource->res_opts.node_id) {
+			/* The timer is only relevant for twopcs initiated by other nodes */
+			jif = resource->twopc_timer.expires - jiffies;
+			seq_printf(m, "  timer expires in: %d ms\n", jiffies_to_msecs(jif));
+		}
+	} else {
+		seq_puts(m, "No ongoing two phase state transaction\n");
+	}
+
+	return 0;
+}
+
+static int resource_worker_pid_show(struct seq_file *m, void *pos)
+{
+	struct drbd_resource *resource = m->private;
+	if (resource->worker.task)
+		seq_printf(m, "%d\n", resource->worker.task->pid);
+	return 0;
+}
+
+static int resource_members_show(struct seq_file *m, void *pos)
+{
+	struct drbd_resource *resource = m->private;
+
+	seq_printf(m, "0x%016llX\n", resource->members);
 	return 0;
 }
 
@@ -425,6 +670,9 @@ static int drbd_single_open(struct file *file, int (*show)(struct seq_file *, vo
 	/* Are we still linked,
 	 * or has debugfs_remove() already been called? */
 	parent = file->f_path.dentry->d_parent;
+	/* not sure if this can happen: */
+	if (!parent || !parent->d_inode)
+		goto out;
 	/* serialize with d_delete() */
 	inode_lock(d_inode(parent));
 	/* Make sure the object is still alive */
@@ -437,31 +685,55 @@ static int drbd_single_open(struct file *file, int (*show)(struct seq_file *, vo
 		if (ret)
 			kref_put(kref, release);
 	}
+out:
 	return ret;
 }
 
-static int in_flight_summary_open(struct inode *inode, struct file *file)
-{
-	struct drbd_resource *resource = inode->i_private;
-	return drbd_single_open(file, in_flight_summary_show, resource,
-				&resource->kref, drbd_destroy_resource);
-}
-
-static int in_flight_summary_release(struct inode *inode, struct file *file)
+static int resource_attr_release(struct inode *inode, struct file *file)
 {
 	struct drbd_resource *resource = inode->i_private;
 	kref_put(&resource->kref, drbd_destroy_resource);
 	return single_release(inode, file);
 }
 
-static const struct file_operations in_flight_summary_fops = {
-	.owner		= THIS_MODULE,
-	.open		= in_flight_summary_open,
-	.read		= seq_read,
-	.llseek		= seq_lseek,
-	.release	= in_flight_summary_release,
+#define drbd_debugfs_resource_attr(name)				\
+static int resource_ ## name ## _open(struct inode *inode, struct file *file) \
+{									\
+	struct drbd_resource *resource = inode->i_private;		\
+	return drbd_single_open(file, resource_ ## name ## _show, resource, \
+				&resource->kref, drbd_destroy_resource); \
+}									\
+static const struct file_operations resource_ ## name ## _fops = {	\
+	.owner		= THIS_MODULE,					\
+	.open		= resource_ ## name ## _open,			\
+	.read		= seq_read,					\
+	.llseek		= seq_lseek,					\
+	.release	= resource_attr_release,			\
 };
 
+drbd_debugfs_resource_attr(in_flight_summary)
+drbd_debugfs_resource_attr(state_twopc)
+drbd_debugfs_resource_attr(worker_pid)
+drbd_debugfs_resource_attr(members)
+
+#define drbd_dcf(top, obj, attr, perm) do {			\
+	dentry = debugfs_create_file(#attr, perm,		\
+			top, obj, &obj ## _ ## attr ## _fops);	\
+	top ## _ ## attr = dentry;				\
+	} while (0)
+
+#define res_dcf(attr) \
+	drbd_dcf(resource->debugfs_res, resource, attr, 0400)
+
+#define conn_dcf(attr) \
+	drbd_dcf(connection->debugfs_conn, connection, attr, 0400)
+
+#define vol_dcf(attr) \
+	drbd_dcf(device->debugfs_vol, device, attr, 0400)
+
+#define peer_dev_dcf(attr) \
+	drbd_dcf(peer_device->debugfs_peer_dev, peer_device, attr, 0400)
+
 void drbd_debugfs_resource_add(struct drbd_resource *resource)
 {
 	struct dentry *dentry;
@@ -475,10 +747,11 @@ void drbd_debugfs_resource_add(struct drbd_resource *resource)
 	dentry = debugfs_create_dir("connections", resource->debugfs_res);
 	resource->debugfs_res_connections = dentry;
 
-	dentry = debugfs_create_file("in_flight_summary", 0440,
-				     resource->debugfs_res, resource,
-				     &in_flight_summary_fops);
-	resource->debugfs_res_in_flight_summary = dentry;
+	/* debugfs create file */
+	res_dcf(in_flight_summary);
+	res_dcf(state_twopc);
+	res_dcf(worker_pid);
+	res_dcf(members);
 }
 
 static void drbd_debugfs_remove(struct dentry **dp)
@@ -489,16 +762,35 @@ static void drbd_debugfs_remove(struct dentry **dp)
 
 void drbd_debugfs_resource_cleanup(struct drbd_resource *resource)
 {
+	/* Older kernels have a broken implementation of
+	 * debugfs_remove_recursive (prior to upstream commit 776164c1f)
+	 * That unfortunately includes a number of "enterprise" kernels.
+	 * Even older kernels do not even have the _recursive() helper at all.
+	 * For now, remember all debugfs nodes we created,
+	 * and call debugfs_remove on all of them separately.
+	 */
 	/* it is ok to call debugfs_remove(NULL) */
+	drbd_debugfs_remove(&resource->debugfs_res_members);
+	drbd_debugfs_remove(&resource->debugfs_res_worker_pid);
+	drbd_debugfs_remove(&resource->debugfs_res_state_twopc);
 	drbd_debugfs_remove(&resource->debugfs_res_in_flight_summary);
 	drbd_debugfs_remove(&resource->debugfs_res_connections);
 	drbd_debugfs_remove(&resource->debugfs_res_volumes);
 	drbd_debugfs_remove(&resource->debugfs_res);
 }
 
+void drbd_debugfs_resource_rename(struct drbd_resource *resource, const char *new_name)
+{
+	int err;
+
+	err = debugfs_change_name(resource->debugfs_res, "%s", new_name);
+	if (err)
+		drbd_err(resource, "failed to rename debugfs entry for resource\n");
+}
+
 static void seq_print_one_timing_detail(struct seq_file *m,
 	const struct drbd_thread_timing_details *tdp,
-	unsigned long now)
+	unsigned long jif)
 {
 	struct drbd_thread_timing_details td;
 	/* No locking...
@@ -510,14 +802,14 @@ static void seq_print_one_timing_detail(struct seq_file *m,
 		return;
 	seq_printf(m, "%u\t%d\t%s:%u\t%ps\n",
 			td.cb_nr,
-			jiffies_to_msecs(now - td.start_jif),
+			jiffies_to_msecs(jif - td.start_jif),
 			td.caller_fn, td.line,
 			td.cb_addr);
 }
 
 static void seq_print_timing_details(struct seq_file *m,
 		const char *title,
-		unsigned int cb_nr, struct drbd_thread_timing_details *tdp, unsigned long now)
+		unsigned int cb_nr, struct drbd_thread_timing_details *tdp, unsigned long jif)
 {
 	unsigned int start_idx;
 	unsigned int i;
@@ -529,135 +821,301 @@ static void seq_print_timing_details(struct seq_file *m,
 	 */
 	start_idx = cb_nr % DRBD_THREAD_DETAILS_HIST;
 	for (i = start_idx; i < DRBD_THREAD_DETAILS_HIST; i++)
-		seq_print_one_timing_detail(m, tdp+i, now);
+		seq_print_one_timing_detail(m, tdp+i, jif);
 	for (i = 0; i < start_idx; i++)
-		seq_print_one_timing_detail(m, tdp+i, now);
+		seq_print_one_timing_detail(m, tdp+i, jif);
 }
 
-static int callback_history_show(struct seq_file *m, void *ignored)
+static int connection_callback_history_show(struct seq_file *m, void *ignored)
 {
 	struct drbd_connection *connection = m->private;
+	struct drbd_resource *resource = connection->resource;
 	unsigned long jif = jiffies;
 
 	/* BUMP me if you change the file format/content/presentation */
 	seq_printf(m, "v: %u\n\n", 0);
 
 	seq_puts(m, "n\tage\tcallsite\tfn\n");
-	seq_print_timing_details(m, "worker", connection->w_cb_nr, connection->w_timing_details, jif);
+	seq_print_timing_details(m, "sender", connection->s_cb_nr, connection->s_timing_details, jif);
 	seq_print_timing_details(m, "receiver", connection->r_cb_nr, connection->r_timing_details, jif);
+	seq_print_timing_details(m, "worker", resource->w_cb_nr, resource->w_timing_details, jif);
 	return 0;
 }
 
-static int callback_history_open(struct inode *inode, struct file *file)
-{
-	struct drbd_connection *connection = inode->i_private;
-	return drbd_single_open(file, callback_history_show, connection,
-				&connection->kref, drbd_destroy_connection);
-}
-
-static int callback_history_release(struct inode *inode, struct file *file)
-{
-	struct drbd_connection *connection = inode->i_private;
-	kref_put(&connection->kref, drbd_destroy_connection);
-	return single_release(inode, file);
-}
-
-static const struct file_operations connection_callback_history_fops = {
-	.owner		= THIS_MODULE,
-	.open		= callback_history_open,
-	.read		= seq_read,
-	.llseek		= seq_lseek,
-	.release	= callback_history_release,
-};
-
 static int connection_oldest_requests_show(struct seq_file *m, void *ignored)
 {
 	struct drbd_connection *connection = m->private;
-	unsigned long now = jiffies;
+	ktime_t now = ktime_get();
+	unsigned long jif = jiffies;
 	struct drbd_request *r1, *r2;
 
 	/* BUMP me if you change the file format/content/presentation */
 	seq_printf(m, "v: %u\n\n", 0);
 
-	spin_lock_irq(&connection->resource->req_lock);
-	r1 = connection->req_next;
+	rcu_read_lock();
+	r1 = READ_ONCE(connection->todo.req_next);
 	if (r1)
-		seq_print_minor_vnr_req(m, r1, now);
-	r2 = connection->req_ack_pending;
+		seq_print_minor_vnr_req(m, r1, now, jif);
+	r2 = READ_ONCE(connection->req_ack_pending);
 	if (r2 && r2 != r1) {
 		r1 = r2;
-		seq_print_minor_vnr_req(m, r1, now);
+		seq_print_minor_vnr_req(m, r1, now, jif);
 	}
-	r2 = connection->req_not_net_done;
+	r2 = READ_ONCE(connection->req_not_net_done);
 	if (r2 && r2 != r1)
-		seq_print_minor_vnr_req(m, r2, now);
-	spin_unlock_irq(&connection->resource->req_lock);
+		seq_print_minor_vnr_req(m, r2, now, jif);
+	rcu_read_unlock();
 	return 0;
 }
 
-static int connection_oldest_requests_open(struct inode *inode, struct file *file)
+static int connection_transport_show(struct seq_file *m, void *ignored)
 {
-	struct drbd_connection *connection = inode->i_private;
-	return drbd_single_open(file, connection_oldest_requests_show, connection,
-				&connection->kref, drbd_destroy_connection);
+	struct drbd_connection *connection = m->private;
+	struct drbd_transport *transport = &connection->transport;
+	struct drbd_transport_ops *tr_ops = &transport->class->ops;
+	enum drbd_stream i;
+
+	seq_printf(m, "v: %u\n\n", 0);
+
+	for (i = DATA_STREAM; i <= CONTROL_STREAM; i++) {
+		struct drbd_send_buffer *sbuf = &connection->send_buffer[i];
+		seq_printf(m, "%s stream\n", i == DATA_STREAM ? "data" : "control");
+		seq_printf(m, "  corked: %d\n", test_bit(CORKED + i, &connection->flags));
+		seq_printf(m, "  unsent: %ld bytes\n", (long)(sbuf->pos - sbuf->unsent));
+		seq_printf(m, "  allocated: %d bytes\n", sbuf->allocated_size);
+	}
+
+	seq_printf(m, "\ntransport_type: %s\n", transport->class->name);
+
+	tr_ops->debugfs_show(transport, m);
+
+	return 0;
 }
 
-static int connection_oldest_requests_release(struct inode *inode, struct file *file)
+static int connection_debug_show(struct seq_file *m, void *ignored)
+{
+	struct drbd_connection *connection = m->private;
+	struct drbd_resource *resource = connection->resource;
+	unsigned long flags = connection->flags;
+	unsigned int u1, u2;
+	unsigned long long ull1, ull2;
+	int in_flight;
+	char sep = ' ';
+
+	seq_puts(m, "content and format of this will change without notice\n");
+
+	seq_printf(m, "flags: 0x%04lx :", flags);
+#define pretty_print_bit(n) \
+	seq_print_rq_state_bit(m, test_bit(n, &flags), &sep, #n);
+	pretty_print_bit(PING_PENDING);
+	pretty_print_bit(TWOPC_PREPARED);
+	pretty_print_bit(TWOPC_YES);
+	pretty_print_bit(TWOPC_NO);
+	pretty_print_bit(TWOPC_RETRY);
+	pretty_print_bit(CONN_DRY_RUN);
+	pretty_print_bit(DISCONNECT_EXPECTED);
+	pretty_print_bit(BARRIER_ACK_PENDING);
+	pretty_print_bit(DATA_CORKED);
+	pretty_print_bit(CONTROL_CORKED);
+	pretty_print_bit(C_UNREGISTERED);
+	pretty_print_bit(RECONNECT);
+	pretty_print_bit(CONN_DISCARD_MY_DATA);
+	pretty_print_bit(SEND_STATE_AFTER_AHEAD_C);
+	pretty_print_bit(NOTIFY_PEERS_LOST_PRIMARY);
+	pretty_print_bit(CHECKING_PEER);
+	pretty_print_bit(CONN_CONGESTED);
+	pretty_print_bit(CONN_HANDSHAKE_DISCONNECT);
+	pretty_print_bit(CONN_HANDSHAKE_RETRY);
+	pretty_print_bit(CONN_HANDSHAKE_READY);
+#undef pretty_print_bit
+	seq_putc(m, '\n');
+
+	u1 = atomic_read(&resource->current_tle_nr);
+	u2 = connection->send.current_epoch_nr;
+	seq_printf(m, "resource->current_tle_nr: %u\n", u1);
+	seq_printf(m, "   send.current_epoch_nr: %u (%d)\n", u2, (int)(u2 - u1));
+
+	ull1 = resource->dagtag_sector;
+	ull2 = resource->last_peer_acked_dagtag;
+	seq_printf(m, " resource->dagtag_sector: %llu\n", ull1);
+	seq_printf(m, "  last_peer_acked_dagtag: %llu (%lld)\n", ull2, (long long)(ull2 - ull1));
+	ull2 = connection->send.current_dagtag_sector;
+	seq_printf(m, " send.current_dagtag_sec: %llu (%lld)\n", ull2, (long long)(ull2 - ull1));
+	ull2 = atomic64_read(&connection->last_dagtag_sector);
+	seq_printf(m, "      last_dagtag_sector: %llu\n", ull2);
+	seq_printf(m, "last_peer_ack_dagtag_seen: %llu\n",
+			(unsigned long long) connection->last_peer_ack_dagtag_seen);
+
+	spin_lock_irq(&resource->initiator_flush_lock);
+	seq_printf(m, "resource->current_flush_sequence: %llu\n",
+			(unsigned long long) resource->current_flush_sequence);
+	seq_puts(m, "      pending_flush_mask: ");
+	seq_print_node_mask(m, resource, connection->pending_flush_mask);
+	spin_unlock_irq(&resource->initiator_flush_lock);
+
+	spin_lock_irq(&connection->primary_flush_lock);
+	seq_printf(m, "   flush_requests_dagtag: %llu\n",
+			(unsigned long long) connection->flush_requests_dagtag);
+	seq_printf(m, "          flush_sequence: %llu\n",
+			(unsigned long long) connection->flush_sequence);
+	seq_puts(m, " flush_forward_sent_mask: ");
+	seq_print_node_mask(m, resource, connection->flush_forward_sent_mask);
+	spin_unlock_irq(&connection->primary_flush_lock);
+
+	spin_lock_irq(&connection->flush_ack_lock);
+	for (u1 = 0; u1 < DRBD_PEERS_MAX; u1++) {
+		if (connection->flush_ack_sequence[u1])
+			seq_printf(m, "      flush_ack_sequence[%u]: %llu\n", u1,
+					(unsigned long long) connection->flush_ack_sequence[u1]);
+	}
+	spin_unlock_irq(&connection->flush_ack_lock);
+
+	in_flight = atomic_read(&connection->ap_in_flight);
+	seq_printf(m, "            ap_in_flight: %d KiB (%d sectors)\n", in_flight / 2, in_flight);
+
+	in_flight = atomic_read(&connection->rs_in_flight);
+	seq_printf(m, "            rs_in_flight: %d KiB (%d sectors)\n", in_flight / 2, in_flight);
+
+	seq_printf(m, "             done_ee_cnt: %d\n"
+			"          backing_ee_cnt: %d\n"
+			"           active_ee_cnt: %d\n",
+			atomic_read(&connection->done_ee_cnt),
+			atomic_read(&connection->backing_ee_cnt),
+			atomic_read(&connection->active_ee_cnt));
+	seq_printf(m, "      agreed_pro_version: %d\n", connection->agreed_pro_version);
+	seq_printf(m, "            send control: %u bytes/pckt (%u bytes, %u pckts)\n",
+		   connection->ctl_bytes / (connection->ctl_packets ?: 1),
+		   connection->ctl_bytes, connection->ctl_packets);
+	return 0;
+}
+
+static void pid_show(struct seq_file *m, struct drbd_thread *thi)
+{
+	struct task_struct *task = NULL;
+	pid_t pid;
+
+	spin_lock_irq(&thi->t_lock);
+	task = thi->task;
+	if (task)
+		pid = task->pid;
+	spin_unlock_irq(&thi->t_lock);
+	if (task)
+		seq_printf(m, "%d\n", pid);
+}
+
+static int connection_receiver_pid_show(struct seq_file *m, void *pos)
+{
+	struct drbd_connection *connection = m->private;
+	pid_show(m, &connection->receiver);
+	return 0;
+}
+
+static int connection_sender_pid_show(struct seq_file *m, void *pos)
+{
+	struct drbd_connection *connection = m->private;
+	pid_show(m, &connection->sender);
+	return 0;
+}
+
+static int connection_attr_release(struct inode *inode, struct file *file)
 {
 	struct drbd_connection *connection = inode->i_private;
 	kref_put(&connection->kref, drbd_destroy_connection);
 	return single_release(inode, file);
 }
 
-static const struct file_operations connection_oldest_requests_fops = {
-	.owner		= THIS_MODULE,
-	.open		= connection_oldest_requests_open,
-	.read		= seq_read,
-	.llseek		= seq_lseek,
-	.release	= connection_oldest_requests_release,
+#define drbd_debugfs_connection_attr(name)				\
+static int connection_ ## name ## _open(struct inode *inode, struct file *file) \
+{									\
+	struct drbd_connection *connection = inode->i_private;		\
+	return drbd_single_open(file, connection_ ## name ## _show,	\
+				connection, &connection->kref,		\
+				drbd_destroy_connection);		\
+}									\
+static const struct file_operations connection_ ## name ## _fops = {	\
+	.owner		= THIS_MODULE,					\
+	.open		= connection_ ## name ##_open,			\
+	.read		= seq_read,					\
+	.llseek		= seq_lseek,					\
+	.release	= connection_attr_release,			\
 };
 
+drbd_debugfs_connection_attr(oldest_requests)
+drbd_debugfs_connection_attr(callback_history)
+drbd_debugfs_connection_attr(transport)
+drbd_debugfs_connection_attr(debug)
+drbd_debugfs_connection_attr(receiver_pid)
+drbd_debugfs_connection_attr(sender_pid)
+
 void drbd_debugfs_connection_add(struct drbd_connection *connection)
 {
 	struct dentry *conns_dir = connection->resource->debugfs_res_connections;
+	struct drbd_peer_device *peer_device;
+	char conn_name[SHARED_SECRET_MAX];
 	struct dentry *dentry;
+	int vnr;
 
-	/* Once we enable mutliple peers,
-	 * these connections will have descriptive names.
-	 * For now, it is just the one connection to the (only) "peer". */
-	dentry = debugfs_create_dir("peer", conns_dir);
-	connection->debugfs_conn = dentry;
+	rcu_read_lock();
+	strscpy(conn_name, rcu_dereference(connection->transport.net_conf)->name);
+	rcu_read_unlock();
 
-	dentry = debugfs_create_file("callback_history", 0440,
-				     connection->debugfs_conn, connection,
-				     &connection_callback_history_fops);
-	connection->debugfs_conn_callback_history = dentry;
+	dentry = debugfs_create_dir(conn_name, conns_dir);
+	connection->debugfs_conn = dentry;
 
-	dentry = debugfs_create_file("oldest_requests", 0440,
-				     connection->debugfs_conn, connection,
-				     &connection_oldest_requests_fops);
-	connection->debugfs_conn_oldest_requests = dentry;
+	/* debugfs create file */
+	conn_dcf(callback_history);
+	conn_dcf(oldest_requests);
+	conn_dcf(transport);
+	conn_dcf(debug);
+	conn_dcf(receiver_pid);
+	conn_dcf(sender_pid);
+
+	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		if (!peer_device->debugfs_peer_dev)
+			drbd_debugfs_peer_device_add(peer_device);
+	}
 }
 
 void drbd_debugfs_connection_cleanup(struct drbd_connection *connection)
 {
+	drbd_debugfs_remove(&connection->debugfs_conn_sender_pid);
+	drbd_debugfs_remove(&connection->debugfs_conn_receiver_pid);
+	drbd_debugfs_remove(&connection->debugfs_conn_debug);
+	drbd_debugfs_remove(&connection->debugfs_conn_transport);
 	drbd_debugfs_remove(&connection->debugfs_conn_callback_history);
 	drbd_debugfs_remove(&connection->debugfs_conn_oldest_requests);
 	drbd_debugfs_remove(&connection->debugfs_conn);
 }
 
-static void resync_dump_detail(struct seq_file *m, struct lc_element *e)
+static void seq_printf_nice_histogram(struct seq_file *m, unsigned *hist, unsigned const n)
 {
-       struct bm_extent *bme = lc_entry(e, struct bm_extent, lce);
+	unsigned i;
+	unsigned max = 0;
+	unsigned n_transactions = 0;
+	unsigned long n_updates = 0;
+
+	for (i = 1; i <= n; i++) {
+		if (hist[i] > max)
+			max = hist[i];
+		n_updates += i * hist[i];
+		n_transactions += hist[i];
+	}
 
-       seq_printf(m, "%5d %s %s %s", bme->rs_left,
-		  test_bit(BME_NO_WRITES, &bme->flags) ? "NO_WRITES" : "---------",
-		  test_bit(BME_LOCKED, &bme->flags) ? "LOCKED" : "------",
-		  test_bit(BME_PRIORITY, &bme->flags) ? "PRIORITY" : "--------"
-		  );
+	seq_puts(m, "updates per activity log transaction\n");
+	seq_printf(m, "avg: %lu\n", n_transactions == 0 ? 0 : n_updates / n_transactions);
+
+	if (!max)
+		return;
+
+	for (i = 0; i <= n; i++) {
+		unsigned v = (hist[i] * 60UL + max-1) / max;
+		seq_printf(m, "%2u : %10u : %-60.*s\n", i, hist[i], v,
+			"############################################################");
+	}
 }
 
-static int device_resync_extents_show(struct seq_file *m, void *ignored)
+static int device_act_log_histogram_show(struct seq_file *m, void *ignored)
 {
 	struct drbd_device *device = m->private;
 
@@ -665,8 +1123,7 @@ static int device_resync_extents_show(struct seq_file *m, void *ignored)
 	seq_printf(m, "v: %u\n\n", 0);
 
 	if (get_ldev_if_state(device, D_FAILED)) {
-		lc_seq_printf_stats(m, device->resync);
-		lc_seq_dump_details(m, device->resync, "rs_left flags", resync_dump_detail);
+		seq_printf_nice_histogram(m, device->al_histogram, AL_UPDATES_PER_TRANSACTION);
 		put_ldev(device);
 	}
 	return 0;
@@ -690,8 +1147,8 @@ static int device_act_log_extents_show(struct seq_file *m, void *ignored)
 static int device_oldest_requests_show(struct seq_file *m, void *ignored)
 {
 	struct drbd_device *device = m->private;
-	struct drbd_resource *resource = device->resource;
-	unsigned long now = jiffies;
+	ktime_t now = ktime_get();
+	unsigned long jif = jiffies;
 	struct drbd_request *r1, *r2;
 	int i;
 
@@ -699,7 +1156,7 @@ static int device_oldest_requests_show(struct seq_file *m, void *ignored)
 	seq_printf(m, "v: %u\n\n", 0);
 
 	seq_puts(m, RQ_HDR);
-	spin_lock_irq(&resource->req_lock);
+	spin_lock_irq(&device->pending_completion_lock);
 	/* WRITE, then READ */
 	for (i = 1; i >= 0; --i) {
 		r1 = list_first_entry_or_null(&device->pending_master_completion[i],
@@ -707,11 +1164,89 @@ static int device_oldest_requests_show(struct seq_file *m, void *ignored)
 		r2 = list_first_entry_or_null(&device->pending_completion[i],
 			struct drbd_request, req_pending_local);
 		if (r1)
-			seq_print_one_request(m, r1, now);
+			seq_print_one_request(m, r1, now, jif);
 		if (r2 && r2 != r1)
-			seq_print_one_request(m, r2, now);
+			seq_print_one_request(m, r2, now, jif);
+	}
+	spin_unlock_irq(&device->pending_completion_lock);
+	return 0;
+}
+
+static int device_openers_show(struct seq_file *m, void *ignored)
+{
+	struct drbd_device *device = m->private;
+	struct drbd_resource *resource = device->resource;
+	ktime_t now = ktime_get_real();
+	struct opener *tmp;
+
+	spin_lock(&device->openers_lock);
+	list_for_each_entry(tmp, &device->openers, list)
+		seq_printf(m, "%s\t%d\t%lld\n", tmp->comm, tmp->pid,
+			ktime_to_ms(ktime_sub(now, tmp->opened)));
+	spin_unlock(&device->openers_lock);
+	if (mutex_trylock(&resource->open_release)) {
+		if (resource->auto_promoted_by.pid != 0
+		&&  device->minor == resource->auto_promoted_by.minor) {
+			seq_printf(m, "+%s\t%d\t%lld\n",
+				resource->auto_promoted_by.comm,
+				resource->auto_promoted_by.pid,
+				ktime_to_ms(ktime_sub(now, resource->auto_promoted_by.opened)));
+		}
+		mutex_unlock(&resource->open_release);
+	}
+
+	return 0;
+}
+
+static int device_md_io_show(struct seq_file *m, void *ignored)
+{
+	struct drbd_device *device = m->private;
+
+	if (get_ldev_if_state(device, D_FAILED)) {
+		seq_puts(m, drbd_md_dax_active(device->ldev) ? "dax-pmem\n" : "blk-bio\n");
+		put_ldev(device);
+	}
+
+	return 0;
+}
+
+static void seq_printf_interval_tree(struct seq_file *m, struct rb_root *root)
+{
+	struct rb_node *node;
+
+	node = rb_first(root);
+	while (node) {
+		struct drbd_interval *i = rb_entry(node, struct drbd_interval, rb);
+		char sep = ' ';
+
+		seq_printf(m, "%llus+%u %s", (unsigned long long) i->sector, i->size, drbd_interval_type_str(i));
+		seq_print_rq_state_bit(m, test_bit(INTERVAL_READY_TO_SEND, &i->flags), &sep,
+				"ready-to-send");
+		seq_print_rq_state_bit(m, test_bit(INTERVAL_SENT, &i->flags), &sep, "sent");
+		seq_print_rq_state_bit(m, test_bit(INTERVAL_RECEIVED, &i->flags), &sep, "received");
+		seq_print_rq_state_bit(m, test_bit(INTERVAL_SUBMIT_CONFLICT_QUEUED, &i->flags), &sep, "submit-conflict-queued");
+		seq_print_rq_state_bit(m, test_bit(INTERVAL_SUBMITTED, &i->flags), &sep, "submitted");
+		seq_print_rq_state_bit(m, test_bit(INTERVAL_BACKING_COMPLETED, &i->flags), &sep, "backing-completed");
+		seq_print_rq_state_bit(m, test_bit(INTERVAL_COMPLETED, &i->flags), &sep, "completed");
+		seq_print_rq_state_bit(m, test_bit(INTERVAL_CANCELED, &i->flags), &sep, "canceled");
+		seq_putc(m, '\n');
+
+		node = rb_next(node);
 	}
-	spin_unlock_irq(&resource->req_lock);
+}
+
+static int device_interval_tree_show(struct seq_file *m, void *ignored)
+{
+	struct drbd_device *device = m->private;
+
+	spin_lock_irq(&device->interval_lock);
+	seq_puts(m, "Write requests:\n");
+	seq_printf_interval_tree(m, &device->requests);
+	seq_putc(m, '\n');
+	seq_puts(m, "Read requests:\n");
+	seq_printf_interval_tree(m, &device->read_requests);
+	spin_unlock_irq(&device->interval_lock);
+
 	return 0;
 }
 
@@ -719,58 +1254,230 @@ static int device_data_gen_id_show(struct seq_file *m, void *ignored)
 {
 	struct drbd_device *device = m->private;
 	struct drbd_md *md;
-	enum drbd_uuid_index idx;
+	int node_id, i = 0;
 
 	if (!get_ldev_if_state(device, D_FAILED))
 		return -ENODEV;
 
 	md = &device->ldev->md;
+
 	spin_lock_irq(&md->uuid_lock);
-	for (idx = UI_CURRENT; idx <= UI_HISTORY_END; idx++) {
-		seq_printf(m, "0x%016llX\n", md->uuid[idx]);
+	seq_printf(m, "0x%016llX\n", drbd_current_uuid(device));
+
+	for (node_id = 0; node_id < DRBD_NODE_ID_MAX; node_id++) {
+		if (!(md->peers[node_id].flags & MDF_HAVE_BITMAP))
+			continue;
+		seq_printf(m, "%s[%d]0x%016llX", i++ ? " " : "", node_id,
+			   md->peers[node_id].bitmap_uuid);
 	}
+	seq_putc(m, '\n');
+
+	for (i = 0; i < HISTORY_UUIDS; i++)
+		seq_printf(m, "0x%016llX\n", drbd_history_uuid(device, i));
 	spin_unlock_irq(&md->uuid_lock);
 	put_ldev(device);
 	return 0;
 }
 
+static int device_io_frozen_show(struct seq_file *m, void *ignored)
+{
+	struct drbd_device *device = m->private;
+	unsigned long flags = device->flags;
+	char sep = ' ';
+
+	if (!get_ldev_if_state(device, D_FAILED))
+		return -ENODEV;
+
+	/* BUMP me if you change the file format/content/presentation */
+	seq_printf(m, "v: %u\n\n", 0);
+
+	seq_printf(m, "drbd_suspended(): %d\n", drbd_suspended(device));
+	seq_printf(m, "suspend_cnt: %d\n", atomic_read(&device->suspend_cnt));
+	seq_printf(m, "!drbd_state_is_stable(): %d\n", device->cached_state_unstable);
+	seq_printf(m, "ap_bio_cnt[READ]: %d\n", atomic_read(&device->ap_bio_cnt[READ]));
+	seq_printf(m, "ap_bio_cnt[WRITE]: %d\n", atomic_read(&device->ap_bio_cnt[WRITE]));
+	seq_printf(m, "device->pending_bitmap_work.n: %d\n", atomic_read(&device->pending_bitmap_work.n));
+	seq_printf(m, "may_inc_ap_bio(): %d\n", may_inc_ap_bio(device));
+	seq_printf(m, "flags: 0x%04lx :", flags);
+#define pretty_print_bit(n) \
+	seq_print_rq_state_bit(m, test_bit(n, &flags), &sep, #n)
+	pretty_print_bit(NEW_CUR_UUID);
+	pretty_print_bit(WRITING_NEW_CUR_UUID);
+	pretty_print_bit(MAKE_NEW_CUR_UUID);
+#undef pretty_print_bit
+	seq_putc(m, '\n');
+	put_ldev(device);
+
+	return 0;
+}
+
+static int device_al_updates_show(struct seq_file *m, void *ignored)
+{
+	struct drbd_device *device = m->private;
+	bool al_updates, cfg_al_updates;
+
+	if (!get_ldev_if_state(device, D_FAILED))
+		return -ENODEV;
+
+	al_updates = !(device->ldev->md.flags & MDF_AL_DISABLED);
+	rcu_read_lock();
+	cfg_al_updates = rcu_dereference(device->ldev->disk_conf)->al_updates;
+	rcu_read_unlock();
+	put_ldev(device);
+
+	seq_printf(m, "%s\n",
+		    al_updates &&  cfg_al_updates ? "yes" :
+		   !al_updates &&  cfg_al_updates ? "no (optimized)" :
+		   !al_updates && !cfg_al_updates ? "no" :
+		   "?");
+	return 0;
+}
+
 static int device_ed_gen_id_show(struct seq_file *m, void *ignored)
 {
 	struct drbd_device *device = m->private;
-	seq_printf(m, "0x%016llX\n", (unsigned long long)device->ed_uuid);
+	seq_printf(m, "0x%016llX\n", (unsigned long long)device->exposed_data_uuid);
 	return 0;
 }
 
-#define drbd_debugfs_device_attr(name)						\
+static int device_multi_bio_cnt_show(struct seq_file *m, void *ignored)
+{
+	struct drbd_device *device = m->private;
+
+	seq_printf(m, "%u\n", device->multi_bio_cnt);
+	return 0;
+}
+
+#define show_per_peer(M) do {							\
+		seq_printf(m, "%-16s", #M ":");					\
+		for_each_peer_device(peer_device, device)			\
+			seq_printf(m, " %12lld", ktime_to_ns(peer_device->M));	\
+		seq_printf(m, "\n");						\
+	} while (0)
+
+#define PRId64 "lld"
+
+#ifdef CONFIG_DRBD_TIMING_STATS
+static int device_req_timing_show(struct seq_file *m, void *ignored)
+{
+	struct drbd_device *device = m->private;
+	struct drbd_peer_device *peer_device;
+
+	seq_printf(m,
+		   "timing values are nanoseconds; write an 'r' to reset all to 0\n\n"
+		   "requests:        %12lu\n"
+		   "before_queue:    %12" PRId64 "\n"
+		   "before_al_begin  %12" PRId64 "\n"
+		   "in_actlog:       %12" PRId64 "\n"
+		   "pre_submit:      %12" PRId64 "\n\n"
+		   "al_updates:      %12u\n"
+		   "before_bm_write  %12" PRId64 "\n"
+		   "mid              %12" PRId64 "\n"
+		   "after_sync_page  %12" PRId64 "\n",
+		   device->reqs,
+		   ktime_to_ns(device->before_queue_kt),
+		   ktime_to_ns(device->before_al_begin_io_kt),
+		   ktime_to_ns(device->in_actlog_kt),
+		   ktime_to_ns(device->pre_submit_kt),
+		   device->al_writ_cnt,
+		   ktime_to_ns(device->al_before_bm_write_hinted_kt),
+		   ktime_to_ns(device->al_mid_kt),
+		   ktime_to_ns(device->al_after_sync_page_kt));
+
+	seq_puts(m, "\npeer:           ");
+	for_each_peer_device(peer_device, device) {
+		struct drbd_connection *connection = peer_device->connection;
+		seq_printf(m, " %12.12s", rcu_dereference(connection->transport.net_conf)->name);
+	}
+	seq_puts(m, "\n");
+	show_per_peer(pre_send_kt);
+	show_per_peer(acked_kt);
+	show_per_peer(net_done_kt);
+
+	return 0;
+}
+
+static ssize_t device_req_timing_write(struct file *file, const char __user *ubuf,
+				       size_t cnt, loff_t *ppos)
+{
+	struct drbd_device *device = file_inode(file)->i_private;
+	char buffer;
+
+	if (copy_from_user(&buffer, ubuf, 1))
+		return -EFAULT;
+
+	if (buffer == 'r' || buffer == 'R') {
+		struct drbd_peer_device *peer_device;
+		unsigned long flags;
+
+		spin_lock_irqsave(&device->timing_lock, flags);
+		device->reqs = 0;
+		device->in_actlog_kt = ns_to_ktime(0);
+		device->pre_submit_kt = ns_to_ktime(0);
+
+		device->before_queue_kt = ns_to_ktime(0);
+		device->before_al_begin_io_kt = ns_to_ktime(0);
+		device->al_writ_cnt = 0;
+		device->al_before_bm_write_hinted_kt = ns_to_ktime(0);
+		device->al_mid_kt = ns_to_ktime(0);
+		device->al_after_sync_page_kt = ns_to_ktime(0);
+
+		for_each_peer_device(peer_device, device) {
+			peer_device->pre_send_kt = ns_to_ktime(0);
+			peer_device->acked_kt = ns_to_ktime(0);
+			peer_device->net_done_kt = ns_to_ktime(0);
+		}
+		spin_unlock_irqrestore(&device->timing_lock, flags);
+	}
+
+	*ppos += cnt;
+	return cnt;
+}
+#endif
+
+static int device_attr_release(struct inode *inode, struct file *file)
+{
+	struct drbd_device *device = inode->i_private;
+	kref_put(&device->kref, drbd_destroy_device);
+	return single_release(inode, file);
+}
+
+#define __drbd_debugfs_device_attr(name, write_fn)				\
 static int device_ ## name ## _open(struct inode *inode, struct file *file)	\
 {										\
 	struct drbd_device *device = inode->i_private;				\
 	return drbd_single_open(file, device_ ## name ## _show, device,		\
 				&device->kref, drbd_destroy_device);		\
 }										\
-static int device_ ## name ## _release(struct inode *inode, struct file *file)	\
-{										\
-	struct drbd_device *device = inode->i_private;				\
-	kref_put(&device->kref, drbd_destroy_device);				\
-	return single_release(inode, file);					\
-}										\
 static const struct file_operations device_ ## name ## _fops = {		\
 	.owner		= THIS_MODULE,						\
 	.open		= device_ ## name ## _open,				\
+	.write          = write_fn,						\
 	.read		= seq_read,						\
 	.llseek		= seq_lseek,						\
-	.release	= device_ ## name ## _release,				\
+	.release	= device_attr_release,					\
 };
+#define drbd_debugfs_device_attr(name) __drbd_debugfs_device_attr(name, NULL)
 
 drbd_debugfs_device_attr(oldest_requests)
 drbd_debugfs_device_attr(act_log_extents)
-drbd_debugfs_device_attr(resync_extents)
+drbd_debugfs_device_attr(act_log_histogram)
 drbd_debugfs_device_attr(data_gen_id)
+drbd_debugfs_device_attr(io_frozen)
 drbd_debugfs_device_attr(ed_gen_id)
+drbd_debugfs_device_attr(openers)
+drbd_debugfs_device_attr(md_io)
+drbd_debugfs_device_attr(interval_tree)
+drbd_debugfs_device_attr(al_updates)
+drbd_debugfs_device_attr(multi_bio_cnt)
+#ifdef CONFIG_DRBD_TIMING_STATS
+__drbd_debugfs_device_attr(req_timing, device_req_timing_write)
+#endif
 
 void drbd_debugfs_device_add(struct drbd_device *device)
 {
 	struct dentry *vols_dir = device->resource->debugfs_res_volumes;
+	struct drbd_peer_device *peer_device;
 	char minor_buf[8]; /* MINORMASK, MINORBITS == 20; */
 	char vnr_buf[8];   /* volume number vnr is even 16 bit only; */
 	char *slink_name = NULL;
@@ -793,19 +1500,28 @@ void drbd_debugfs_device_add(struct drbd_device *device)
 	kfree(slink_name);
 	slink_name = NULL;
 
-#define DCF(name)	do {					\
-	dentry = debugfs_create_file(#name, 0440,	\
-			device->debugfs_vol, device,		\
-			&device_ ## name ## _fops);		\
-	device->debugfs_vol_ ## name = dentry;			\
-	} while (0)
+	/* debugfs create file */
+	vol_dcf(oldest_requests);
+	vol_dcf(act_log_extents);
+	vol_dcf(act_log_histogram);
+	vol_dcf(data_gen_id);
+	vol_dcf(io_frozen);
+	vol_dcf(ed_gen_id);
+	vol_dcf(openers);
+	vol_dcf(md_io);
+	vol_dcf(interval_tree);
+	vol_dcf(al_updates);
+	vol_dcf(multi_bio_cnt);
+#ifdef CONFIG_DRBD_TIMING_STATS
+	drbd_dcf(device->debugfs_vol, device, req_timing, 0600);
+#endif
+
+	/* Caller holds conf_update */
+	for_each_peer_device(peer_device, device) {
+		if (!peer_device->debugfs_peer_dev)
+			drbd_debugfs_peer_device_add(peer_device);
+	}
 
-	DCF(oldest_requests);
-	DCF(act_log_extents);
-	DCF(resync_extents);
-	DCF(data_gen_id);
-	DCF(ed_gen_id);
-#undef DCF
 	return;
 
 fail:
@@ -818,12 +1534,356 @@ void drbd_debugfs_device_cleanup(struct drbd_device *device)
 	drbd_debugfs_remove(&device->debugfs_minor);
 	drbd_debugfs_remove(&device->debugfs_vol_oldest_requests);
 	drbd_debugfs_remove(&device->debugfs_vol_act_log_extents);
-	drbd_debugfs_remove(&device->debugfs_vol_resync_extents);
+	drbd_debugfs_remove(&device->debugfs_vol_act_log_histogram);
 	drbd_debugfs_remove(&device->debugfs_vol_data_gen_id);
+	drbd_debugfs_remove(&device->debugfs_vol_io_frozen);
 	drbd_debugfs_remove(&device->debugfs_vol_ed_gen_id);
+	drbd_debugfs_remove(&device->debugfs_vol_openers);
+	drbd_debugfs_remove(&device->debugfs_vol_md_io);
+	drbd_debugfs_remove(&device->debugfs_vol_interval_tree);
+	drbd_debugfs_remove(&device->debugfs_vol_al_updates);
+	drbd_debugfs_remove(&device->debugfs_vol_multi_bio_cnt);
+#ifdef CONFIG_DRBD_TIMING_STATS
+	drbd_debugfs_remove(&device->debugfs_vol_req_timing);
+#endif
 	drbd_debugfs_remove(&device->debugfs_vol);
 }
 
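+/*
+ * Open a peer_device debugfs file only while both the connection and the
+ * device can still be referenced. Taking the parent inode lock and checking
+ * simple_positive() guards against racing with debugfs removal; on failure,
+ * any acquired krefs are dropped again and -ESTALE is returned.
+ */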
+static int drbd_single_open_peer_device(struct file *file,
+					int (*show)(struct seq_file *, void *),
+					struct drbd_peer_device *peer_device)
+{
+	struct drbd_device *device = peer_device->device;
+	struct drbd_connection *connection = peer_device->connection;
+	bool got_connection, got_device;
+	struct dentry *parent;
+
+	parent = file->f_path.dentry->d_parent;
+	if (!parent || !parent->d_inode)
+		goto out;
+	inode_lock(d_inode(parent));
+	if (!simple_positive(file->f_path.dentry))
+		goto out_unlock;
+
+	got_connection = kref_get_unless_zero(&connection->kref);
+	got_device = kref_get_unless_zero(&device->kref);
+
+	if (got_connection && got_device) {
+		int ret;
+		inode_unlock(d_inode(parent));
+		ret = single_open(file, show, peer_device);
+		if (ret) {
+			kref_put(&connection->kref, drbd_destroy_connection);
+			kref_put(&device->kref, drbd_destroy_device);
+		}
+		return ret;
+	}
+
+	if (got_connection)
+		kref_put(&connection->kref, drbd_destroy_connection);
+	if (got_device)
+		kref_put(&device->kref, drbd_destroy_device);
+out_unlock:
+	inode_unlock(d_inode(parent));
+out:
+	return -ESTALE;
+}
+
+static void seq_printf_with_thousands_grouping(struct seq_file *seq, long v)
+{
+	/* v is in kB/sec, e.g. 1234567 -> "1,234,567". We don't expect TiByte/sec yet. */
+	if (unlikely(v >= 1000000)) {
+		/* cool: > GiByte/s */
+		seq_printf(seq, "%ld,", v / 1000000);
+		v %= 1000000;
+		seq_printf(seq, "%03ld,%03ld", v/1000, v % 1000);
+	} else if (likely(v >= 1000))
+		seq_printf(seq, "%ld,%03ld", v/1000, v % 1000);
+	else
+		seq_printf(seq, "%ld", v);
+}
+
+static void drbd_get_syncer_progress(struct drbd_peer_device *pd,
+		enum drbd_repl_state repl_state, unsigned long *rs_total,
+		unsigned long *bits_left, unsigned int *per_mil_done)
+{
+	/* this is to break it at compile time when we change that, in case we
+	 * want to support more than (1<<32) bits on a 32bit arch. */
+	typecheck(unsigned long, pd->rs_total);
+	*rs_total = pd->rs_total;
+
+	/* note: both rs_total and rs_left are in bits, i.e. in
+	 * units of BM_BLOCK_SIZE.
+	 * for the percentage, we don't care. */
+
+	if (repl_state == L_VERIFY_S || repl_state == L_VERIFY_T)
+		*bits_left = atomic64_read(&pd->ov_left);
+	else
+		*bits_left = drbd_bm_total_weight(pd) - pd->rs_failed;
+	/* >> 10 to prevent overflow,
+	 * +1 to prevent division by zero */
+	if (*bits_left > *rs_total) {
+		/* D'oh. Maybe a logic bug somewhere.  More likely just a race
+		 * between state change and reset of rs_total.
+		 */
+		*bits_left = *rs_total;
+		*per_mil_done = *rs_total ? 0 : 1000;
+	} else {
+		/* Make sure the division happens in long context.
+		 * We allow up to one petabyte storage right now,
+		 * at a granularity of 4k per bit that is 2**38 bits.
+		 * After shift right and multiplication by 1000,
+		 * this should still fit easily into a 32bit long,
+		 * so we don't need a 64bit division on 32bit arch.
+		 * Note: currently we don't support such large bitmaps on 32bit
+		 * arch anyways, but no harm done to be prepared for it here.
+		 */
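+		/* Example, on 64 bit: rs_total = 2^33 > UINT_MAX => shift = 16;
+		 * with bits_left = 2^32: left = 65536, total = 131073,
+		 * per_mil = 1000 - 65536 * 1000 / 131073,
+		 * i.e. roughly 500 (~50% done). */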
+		unsigned int shift = *rs_total > UINT_MAX ? 16 : 10;
+		unsigned long left = *bits_left >> shift;
+		unsigned long total = 1UL + (*rs_total >> shift);
+		unsigned long tmp = 1000UL - left * 1000UL/total;
+		*per_mil_done = tmp;
+	}
+}
+
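+/*
+ * Progress bar shamelessly adapted from drivers/md/md.c; output looks like
+ *	[=====>..............] 33.5% (23456/123456)
+ *	finish: 2:20:20 speed: 6,345 (6,456) K/sec
+ */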
+static void drbd_syncer_progress(struct drbd_peer_device *pd, struct seq_file *seq,
+		enum drbd_repl_state repl_state)
+{
+	unsigned long db, dt, dbdt, rt, rs_total, rs_left;
+	unsigned int res;
+	int i, x, y;
+	int stalled = 0;
+	unsigned int bm_block_shift = pd->device->last_bm_block_shift;
+
+	drbd_get_syncer_progress(pd, repl_state, &rs_total, &rs_left, &res);
+
+	x = res/50;
+	y = 20-x;
+	seq_puts(seq, "\t[");
+	for (i = 1; i < x; i++)
+		seq_putc(seq, '=');
+	seq_putc(seq, '>');
+	for (i = 0; i < y; i++)
+		seq_putc(seq, '.');
+	seq_puts(seq, "] ");
+
+	if (repl_state == L_VERIFY_S || repl_state == L_VERIFY_T)
+		seq_puts(seq, "verified:");
+	else
+		seq_puts(seq, "sync'ed:");
+	seq_printf(seq, "%3u.%u%% ", res / 10, res % 10);
+
+	/* if more than a few GB, display in MB */
+	if (rs_total > (4UL << (30 - bm_block_shift)))
+		seq_printf(seq, "(%llu/%llu)M",
+			    bit_to_kb(rs_left >> 10, bm_block_shift),
+			    bit_to_kb(rs_total >> 10, bm_block_shift));
+	else
+		seq_printf(seq, "(%llu/%llu)K",
+			    bit_to_kb(rs_left, bm_block_shift),
+			    bit_to_kb(rs_total, bm_block_shift));
+
+	seq_puts(seq, "\n\t");
+
+	/* see drivers/md/md.c
+	 * We do not want to overflow, so the order of operands and
+	 * the * 100 / 100 trick are important. We do a +1 to be
+	 * safe against division by zero. We only estimate anyway.
+	 *
+	 * dt: time from mark until now
+	 * db: blocks written from mark until now
+	 * rt: remaining time
+	 */
+	/* Rolling marks. last_mark+1 may just now be modified.  last_mark+2 is
+	 * at least (DRBD_SYNC_MARKS-2)*DRBD_SYNC_MARK_STEP old, and has at
+	 * least DRBD_SYNC_MARK_STEP time before it will be modified. */
+	/* ------------------------ ~18s average ------------------------ */
+	i = (pd->rs_last_mark + 2) % DRBD_SYNC_MARKS;
+	dt = (jiffies - pd->rs_mark_time[i]) / HZ;
+	if (dt > 180)
+		stalled = 1;
+
+	if (!dt)
+		dt++;
+	db = pd->rs_mark_left[i] - rs_left;
+	rt = (dt * (rs_left / (db/100+1)))/100; /* seconds */
+
+	seq_printf(seq, "finish: %lu:%02lu:%02lu",
+		rt / 3600, (rt % 3600) / 60, rt % 60);
+
+	dbdt = bit_to_kb(db/dt, bm_block_shift);
+	seq_puts(seq, " speed: ");
+	seq_printf_with_thousands_grouping(seq, dbdt);
+	seq_puts(seq, " (");
+	/* ------------------------- ~3s average ------------------------ */
+	/* this is what drbd_rs_should_slow_down() uses */
+	i = (pd->rs_last_mark + DRBD_SYNC_MARKS-1) % DRBD_SYNC_MARKS;
+	dt = (jiffies - pd->rs_mark_time[i]) / HZ;
+	if (!dt)
+		dt++;
+	db = pd->rs_mark_left[i] - rs_left;
+	dbdt = bit_to_kb(db/dt, bm_block_shift);
+	seq_printf_with_thousands_grouping(seq, dbdt);
+	seq_puts(seq, " -- ");
+
+	/* --------------------- long term average ---------------------- */
+	/* mean speed since syncer started
+	 * we do account for PausedSync periods */
+	dt = (jiffies - pd->rs_start - pd->rs_paused) / HZ;
+	if (dt == 0)
+		dt = 1;
+	db = rs_total - rs_left;
+	dbdt = bit_to_kb(db/dt, bm_block_shift);
+	seq_printf_with_thousands_grouping(seq, dbdt);
+	seq_putc(seq, ')');
+
+	if (repl_state == L_SYNC_TARGET ||
+	    repl_state == L_VERIFY_S) {
+		seq_puts(seq, " want: ");
+		seq_printf_with_thousands_grouping(seq, pd->c_sync_rate);
+	}
+	seq_printf(seq, " K/sec%s\n", stalled ? " (stalled)" : "");
+
+	{
+		/* 64 bit:
+		 * we convert to sectors in the display below. */
+		unsigned long bm_bits = drbd_bm_bits(pd->device);
+		unsigned long bit_pos;
+		unsigned long long stop_sector = 0;
+		if (repl_state == L_VERIFY_S ||
+		    repl_state == L_VERIFY_T) {
+			bit_pos = bm_bits - (unsigned long)atomic64_read(&pd->ov_left);
+			if (verify_can_do_stop_sector(pd))
+				stop_sector = pd->ov_stop_sector;
+		} else
+			bit_pos = pd->resync_next_bit;
+		/* Total sectors may be slightly off for oddly
+		 * sized devices. So what. */
+		seq_printf(seq,
+			"\t%3d%% sector pos: %llu/%llu",
+			(int)(bit_pos / (bm_bits/100+1)),
+			(unsigned long long)bit_pos * sect_per_bit(bm_block_shift),
+			(unsigned long long)bm_bits * sect_per_bit(bm_block_shift));
+		if (stop_sector != 0 && stop_sector != ULLONG_MAX)
+			seq_printf(seq, " stop sector: %llu", stop_sector);
+		seq_putc(seq, '\n');
+	}
+}
+
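+/*
+ * One status line per peer device, in the drbd-8 /proc/drbd style:
+ *   cs .. connection state; ro .. node role (local/remote);
+ *   ds .. disk state (local/remote); ns/nr .. network send/receive;
+ *   dw/dr .. disk write/read; al/bm .. activity log / bitmap write counts;
+ *   lo .. local count; pe .. pending [ap;rs]; ua .. unacked;
+ *   ap .. application bios [write;read]; ep .. epochs in flight;
+ *   wo .. write ordering; oos .. known out-of-sync kB.
+ */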
+static int peer_device_proc_drbd_show(struct seq_file *m, void *ignored)
+{
+	struct drbd_peer_device *peer_device = m->private;
+	struct drbd_device *device = peer_device->device;
+	union drbd_state state;
+	const char *sn;
+	struct net_conf *nc;
+	bool have_ldev;
+	char wp;
+
+	state.disk = device->disk_state[NOW];
+	state.pdsk = peer_device->disk_state[NOW];
+	state.conn = peer_device->repl_state[NOW];
+	state.role = device->resource->role[NOW];
+	state.peer = peer_device->connection->peer_role[NOW];
+
+	state.user_isp = peer_device->resync_susp_user[NOW];
+	state.peer_isp = peer_device->resync_susp_peer[NOW];
+	state.aftr_isp = peer_device->resync_susp_dependency[NOW];
+
+	sn = drbd_repl_str(state.conn);
+
+	rcu_read_lock();
+	have_ldev = get_ldev_if_state(device, D_FAILED);
+
+	/* reset device->congestion_reason */
+
+	nc = rcu_dereference(peer_device->connection->transport.net_conf);
+	wp = nc ? nc->wire_protocol - DRBD_PROT_A + 'A' : ' ';
+	seq_printf(m,
+		   "%2d: cs:%s ro:%s/%s ds:%s/%s %c %c%c%c%c%c%c\n"
+		   "    ns:%u nr:%u dw:%u dr:%u al:%u bm:%u "
+		   "lo:%d pe:[%d;%d] ua:%d ap:[%d;%d] ep:%d wo:%d",
+		   device->minor, sn,
+		   drbd_role_str(state.role),
+		   drbd_role_str(state.peer),
+		   drbd_disk_str(state.disk),
+		   drbd_disk_str(state.pdsk),
+		   wp,
+		   drbd_suspended(device) ? 's' : 'r',
+		   state.aftr_isp ? 'a' : '-',
+		   state.peer_isp ? 'p' : '-',
+		   state.user_isp ? 'u' : '-',
+		   '-' /* congestion reason... FIXME */,
+		   test_bit(AL_SUSPENDED, &device->flags) ? 's' : '-',
+		   peer_device->send_cnt/2,
+		   peer_device->recv_cnt/2,
+		   device->writ_cnt/2,
+		   device->read_cnt/2,
+		   device->al_writ_cnt,
+		   device->bm_writ_cnt,
+		   atomic_read(&device->local_cnt),
+		   atomic_read(&peer_device->ap_pending_cnt),
+		   atomic_read(&peer_device->rs_pending_cnt),
+		   atomic_read(&peer_device->unacked_cnt),
+		   atomic_read(&device->ap_bio_cnt[WRITE]),
+		   atomic_read(&device->ap_bio_cnt[READ]),
+		   peer_device->connection->epochs,
+		   device->resource->write_ordering
+		);
+
+	seq_printf(m, " oos:%llu\n",
+		   have_ldev ? device_bit_to_kb(device, drbd_bm_total_weight(peer_device)) : 0);
+
+	if (have_ldev) {
+		if (state.conn == L_SYNC_SOURCE ||
+		    state.conn == L_SYNC_TARGET ||
+		    state.conn == L_VERIFY_S ||
+		    state.conn == L_VERIFY_T)
+			drbd_syncer_progress(peer_device, m, state.conn);
+
+		lc_seq_printf_stats(m, device->act_log);
+
+		put_ldev(device);
+	}
+
+	seq_printf(m, "\tblocked on activity log: %d/%d/%d\n",
+		atomic_read(&device->ap_actlog_cnt),	/* requests */
+		atomic_read(&device->wait_for_actlog),	/* peer_requests */
+		/* nr extents needed to satisfy the above in the worst case */
+		atomic_read(&device->wait_for_actlog_ecnt));
+
+	rcu_read_unlock();
+
+	return 0;
+}
+
+#define drbd_debugfs_peer_device_attr(name)					\
+static int peer_device_ ## name ## _open(struct inode *inode, struct file *file)\
+{										\
+	struct drbd_peer_device *peer_device = inode->i_private;		\
+	return drbd_single_open_peer_device(file,				\
+					    peer_device_ ## name ## _show,	\
+					    peer_device);			\
+}										\
+static int peer_device_ ## name ## _release(struct inode *inode, struct file *file)\
+{										\
+	struct drbd_peer_device *peer_device = inode->i_private;		\
+	kref_put(&peer_device->connection->kref, drbd_destroy_connection);	\
+	kref_put(&peer_device->device->kref, drbd_destroy_device);		\
+	return single_release(inode, file);					\
+}										\
+static const struct file_operations peer_device_ ## name ## _fops = {		\
+	.owner		= THIS_MODULE,						\
+	.open		= peer_device_ ## name ## _open,			\
+	.read		= seq_read,						\
+	.llseek		= seq_lseek,						\
+	.release	= peer_device_ ## name ## _release,			\
+};
+
+drbd_debugfs_peer_device_attr(proc_drbd)
+
 void drbd_debugfs_peer_device_add(struct drbd_peer_device *peer_device)
 {
 	struct dentry *conn_dir = peer_device->connection->debugfs_conn;
@@ -833,10 +1893,14 @@ void drbd_debugfs_peer_device_add(struct drbd_peer_device *peer_device)
 	snprintf(vnr_buf, sizeof(vnr_buf), "%u", peer_device->device->vnr);
 	dentry = debugfs_create_dir(vnr_buf, conn_dir);
 	peer_device->debugfs_peer_dev = dentry;
+
+	/* debugfs create file */
+	peer_dev_dcf(proc_drbd);
 }
 
 void drbd_debugfs_peer_device_cleanup(struct drbd_peer_device *peer_device)
 {
+	drbd_debugfs_remove(&peer_device->debugfs_peer_dev_proc_drbd);
 	drbd_debugfs_remove(&peer_device->debugfs_peer_dev);
 }
 
@@ -847,6 +1911,11 @@ static int drbd_version_show(struct seq_file *m, void *ignored)
 	seq_printf(m, "API_VERSION=%u\n", GENL_MAGIC_VERSION);
 	seq_printf(m, "PRO_VERSION_MIN=%u\n", PRO_VERSION_MIN);
 	seq_printf(m, "PRO_VERSION_MAX=%u\n", PRO_VERSION_MAX);
+#ifdef UTS_RELEASE
+	/* the UTS_RELEASE string of the prepared kernel source tree this
+	 * module was built against */
+	seq_printf(m, "UTS_RELEASE=%s\n", UTS_RELEASE);
+#endif
 	return 0;
 }
 
@@ -863,13 +1932,53 @@ static const struct file_operations drbd_version_fops = {
 	.release = single_release,
 };
 
+static int drbd_refcounts_show(struct seq_file *m, void *ignored)
+{
+	seq_printf(m, "v: %u\n\n", 0);
+
+	return 0;
+}
+
+static int drbd_refcounts_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, drbd_refcounts_show, NULL);
+}
+
+static const struct file_operations drbd_refcounts_fops = {
+	.owner = THIS_MODULE,
+	.open = drbd_refcounts_open,
+	.llseek = seq_lseek,
+	.read = seq_read,
+	.release = single_release,
+};
+
+static int drbd_compat_show(struct seq_file *m, void *ignored)
+{
+	return 0;
+}
+
+static int drbd_compat_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, drbd_compat_show, NULL);
+}
+
+static const struct file_operations drbd_compat_fops = {
+	.owner = THIS_MODULE,
+	.open = drbd_compat_open,
+	.llseek = seq_lseek,
+	.read = seq_read,
+	.release = single_release,
+};
+
 /* not __exit, may be indirectly called
  * from the module-load-failure path as well. */
 void drbd_debugfs_cleanup(void)
 {
+	drbd_debugfs_remove(&drbd_debugfs_compat);
 	drbd_debugfs_remove(&drbd_debugfs_resources);
 	drbd_debugfs_remove(&drbd_debugfs_minors);
 	drbd_debugfs_remove(&drbd_debugfs_version);
+	drbd_debugfs_remove(&drbd_debugfs_refcounts);
 	drbd_debugfs_remove(&drbd_debugfs_root);
 }
 
@@ -883,9 +1992,15 @@ void __init drbd_debugfs_init(void)
 	dentry = debugfs_create_file("version", 0444, drbd_debugfs_root, NULL, &drbd_version_fops);
 	drbd_debugfs_version = dentry;
 
+	dentry = debugfs_create_file("reference_counts", 0444, drbd_debugfs_root, NULL, &drbd_refcounts_fops);
+	drbd_debugfs_refcounts = dentry;
+
 	dentry = debugfs_create_dir("resources", drbd_debugfs_root);
 	drbd_debugfs_resources = dentry;
 
 	dentry = debugfs_create_dir("minors", drbd_debugfs_root);
 	drbd_debugfs_minors = dentry;
+
+	dentry = debugfs_create_file("compat", 0444, drbd_debugfs_root, NULL, &drbd_compat_fops);
+	drbd_debugfs_compat = dentry;
 }
diff --git a/drivers/block/drbd/drbd_interval.c b/drivers/block/drbd/drbd_interval.c
index 873beda6de24..b16eeeaa27d3 100644
--- a/drivers/block/drbd/drbd_interval.c
+++ b/drivers/block/drbd/drbd_interval.c
@@ -14,9 +14,28 @@ sector_t interval_end(struct rb_node *node)
 }
 
 #define NODE_END(node) ((node)->sector + ((node)->size >> 9))
+RB_DECLARE_CALLBACKS_MAX(static, augment_callbacks, struct drbd_interval, rb,
+		sector_t, end, NODE_END);
+
+static const char * const drbd_interval_type_names[] = {
+	[INTERVAL_LOCAL_WRITE]    = "LocalWrite",
+	[INTERVAL_PEER_WRITE]     = "PeerWrite",
+	[INTERVAL_RESYNC_WRITE]   = "ResyncWrite",
+	[INTERVAL_RESYNC_READ]    = "ResyncRead",
+	[INTERVAL_OV_READ_SOURCE] = "VerifySource",
+	[INTERVAL_OV_READ_TARGET] = "VerifyTarget",
+	[INTERVAL_PEERS_IN_SYNC_LOCK] = "PeersInSync",
+};
+
+const char *drbd_interval_type_str(struct drbd_interval *i)
+{
+	enum drbd_interval_type type = i->type;
+	unsigned int size = ARRAY_SIZE(drbd_interval_type_names);
 
-RB_DECLARE_CALLBACKS_MAX(static, augment_callbacks,
-			 struct drbd_interval, rb, sector_t, end, NODE_END);
+	return (type < 0 || type >= size ||
+		!drbd_interval_type_names[type]) ?
+		       "?" : drbd_interval_type_names[type];
+}
 
 /*
  * drbd_insert_interval  -  insert a new interval into a tree
@@ -102,6 +121,18 @@ drbd_remove_interval(struct rb_root *root, struct drbd_interval *this)
 	rb_erase_augmented(&this->rb, root, &augment_callbacks);
 }
 
+void drbd_update_interval_size(struct drbd_interval *this, unsigned int new_size)
+{
+	this->size = new_size;
+
+	/* The size is one of the inputs to calculate the tree node's
+	 * augmented value. When we change it we need to update the augmented
+	 * value in this node and maybe in some parent nodes. That might be
+	 * all the way up to the root. As this function is used for joining
+	 * intervals, usually it will propagate only to the parent node. */
+	augment_callbacks_propagate(&this->rb, NULL);
+}
+
 /**
  * drbd_find_overlap  - search for an interval overlapping with [sector, sector + size)
  * @root:	red black tree root
diff --git a/drivers/block/drbd/drbd_legacy_84.c b/drivers/block/drbd/drbd_legacy_84.c
index 5363dab31918..ea49d12910aa 100644
--- a/drivers/block/drbd/drbd_legacy_84.c
+++ b/drivers/block/drbd/drbd_legacy_84.c
@@ -57,9 +57,10 @@ static const char * const drbd_conn_s_names[] = {
 	[C_NETWORK_FAILURE]  = "NetworkFailure",
 	[C_PROTOCOL_ERROR]   = "ProtocolError",
 	[C_CONNECTING]       = "WFConnection",
-	/* [C_WF_REPORT_PARAMS] = "WFReportParams", */
+	/* [C_WF_REPORT_PARAMS] = "WFReportParams", no longer exists in drbd-9.x */
 	[C_TEAR_DOWN]        = "TearDown",
-	[C_CONNECTED]        = "Connected",
+	[C_CONNECTED]        = "WFReportParams", /* drbd-8.4 for "Negotiating" or "Off" */
+	[L_ESTABLISHED]      = "Connected",
 	[L_STARTING_SYNC_S]  = "StartingSyncS",
 	[L_STARTING_SYNC_T]  = "StartingSyncT",
 	[L_WF_BITMAP_S]      = "WFBitMapS",
@@ -474,6 +475,7 @@ static int seq_print_device_proc_drbd(struct seq_file *m, struct drbd_device *de
 	struct drbd_peer_device *peer_device;
 	union drbd_state state;
 	const char *sn;
+	bool have_ldev;
 	char wp;
 
 	peer_device = list_first_or_null_rcu(&device->peer_devices, struct drbd_peer_device,
@@ -507,6 +509,7 @@ static int seq_print_device_proc_drbd(struct seq_file *m, struct drbd_device *de
 	}
 
 	sn = drbd_conn_str_84(state.conn);
+	have_ldev = get_ldev_if_state(device, D_FAILED);
 
 	if (state.conn == C_STANDALONE &&
 	    state.disk == D_DISKLESS &&
@@ -543,16 +546,18 @@ static int seq_print_device_proc_drbd(struct seq_file *m, struct drbd_device *de
 			   epochs,
 			   write_ordering_chars[device->resource->write_ordering]
 			);
-		seq_printf(m, " oos:%llu\n",
-			   peer_device ?
+		seq_printf(m, " oos:%llu\n", (peer_device && have_ldev) ?
 				device_bit_to_kb(device, drbd_bm_total_weight(peer_device)) : 0);
 	}
-	if (state.conn == L_SYNC_SOURCE ||
-	    state.conn == L_SYNC_TARGET ||
-	    state.conn == L_VERIFY_S ||
-	    state.conn == L_VERIFY_T)
-		drbd_syncer_progress(peer_device, m, state.conn);
-
+	if (have_ldev) {
+		if (state.conn == L_SYNC_SOURCE ||
+		    state.conn == L_SYNC_TARGET ||
+		    state.conn == L_VERIFY_S ||
+		    state.conn == L_VERIFY_T)
+			drbd_syncer_progress(peer_device, m, state.conn);
+
+		put_ldev(device);
+	}
 	/* drbd_proc_details 1 or 2 missing */
 
 	return 0;
diff --git a/drivers/block/drbd/drbd_proc.c b/drivers/block/drbd/drbd_proc.c
index 1d0feafceadc..0d741108ce0c 100644
--- a/drivers/block/drbd/drbd_proc.c
+++ b/drivers/block/drbd/drbd_proc.c
@@ -11,313 +11,35 @@
 
  */
 
-#include <linux/module.h>
-
-#include <linux/uaccess.h>
-#include <linux/fs.h>
-#include <linux/file.h>
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>
-#include <linux/drbd.h>
 #include "drbd_int.h"
+#include "drbd_transport.h"
+#include "drbd_legacy_84.h"
 
 struct proc_dir_entry *drbd_proc;
 
-static void seq_printf_with_thousands_grouping(struct seq_file *seq, long v)
-{
-	/* v is in kB/sec. We don't expect TiByte/sec yet. */
-	if (unlikely(v >= 1000000)) {
-		/* cool: > GiByte/s */
-		seq_printf(seq, "%ld,", v / 1000000);
-		v %= 1000000;
-		seq_printf(seq, "%03ld,%03ld", v/1000, v % 1000);
-	} else if (likely(v >= 1000))
-		seq_printf(seq, "%ld,%03ld", v/1000, v % 1000);
-	else
-		seq_printf(seq, "%ld", v);
-}
-
-static void drbd_get_syncer_progress(struct drbd_device *device,
-		union drbd_dev_state state, unsigned long *rs_total,
-		unsigned long *bits_left, unsigned int *per_mil_done)
-{
-	/* this is to break it at compile time when we change that, in case we
-	 * want to support more than (1<<32) bits on a 32bit arch. */
-	typecheck(unsigned long, device->rs_total);
-	*rs_total = device->rs_total;
-
-	/* note: both rs_total and rs_left are in bits, i.e. in
-	 * units of BM_BLOCK_SIZE.
-	 * for the percentage, we don't care. */
-
-	if (state.conn == C_VERIFY_S || state.conn == C_VERIFY_T)
-		*bits_left = device->ov_left;
-	else
-		*bits_left = drbd_bm_total_weight(device) - device->rs_failed;
-	/* >> 10 to prevent overflow,
-	 * +1 to prevent division by zero */
-	if (*bits_left > *rs_total) {
-		/* D'oh. Maybe a logic bug somewhere.  More likely just a race
-		 * between state change and reset of rs_total.
-		 */
-		*bits_left = *rs_total;
-		*per_mil_done = *rs_total ? 0 : 1000;
-	} else {
-		/* Make sure the division happens in long context.
-		 * We allow up to one petabyte storage right now,
-		 * at a granularity of 4k per bit that is 2**38 bits.
-		 * After shift right and multiplication by 1000,
-		 * this should still fit easily into a 32bit long,
-		 * so we don't need a 64bit division on 32bit arch.
-		 * Note: currently we don't support such large bitmaps on 32bit
-		 * arch anyways, but no harm done to be prepared for it here.
-		 */
-		unsigned int shift = *rs_total > UINT_MAX ? 16 : 10;
-		unsigned long left = *bits_left >> shift;
-		unsigned long total = 1UL + (*rs_total >> shift);
-		unsigned long tmp = 1000UL - left * 1000UL/total;
-		*per_mil_done = tmp;
-	}
-}
-
-
-/*lge
- * progress bars shamelessly adapted from driver/md/md.c
- * output looks like
- *	[=====>..............] 33.5% (23456/123456)
- *	finish: 2:20:20 speed: 6,345 (6,456) K/sec
- */
-static void drbd_syncer_progress(struct drbd_device *device, struct seq_file *seq,
-		union drbd_dev_state state)
-{
-	unsigned long db, dt, dbdt, rt, rs_total, rs_left;
-	unsigned int res;
-	int i, x, y;
-	int stalled = 0;
-
-	drbd_get_syncer_progress(device, state, &rs_total, &rs_left, &res);
-
-	x = res/50;
-	y = 20-x;
-	seq_puts(seq, "\t[");
-	for (i = 1; i < x; i++)
-		seq_putc(seq, '=');
-	seq_putc(seq, '>');
-	for (i = 0; i < y; i++)
-		seq_putc(seq, '.');
-	seq_puts(seq, "] ");
-
-	if (state.conn == C_VERIFY_S || state.conn == C_VERIFY_T)
-		seq_puts(seq, "verified:");
-	else
-		seq_puts(seq, "sync'ed:");
-	seq_printf(seq, "%3u.%u%% ", res / 10, res % 10);
-
-	/* if more than a few GB, display in MB */
-	if (rs_total > (4UL << (30 - BM_BLOCK_SHIFT)))
-		seq_printf(seq, "(%lu/%lu)M",
-			    (unsigned long) Bit2KB(rs_left >> 10),
-			    (unsigned long) Bit2KB(rs_total >> 10));
-	else
-		seq_printf(seq, "(%lu/%lu)K",
-			    (unsigned long) Bit2KB(rs_left),
-			    (unsigned long) Bit2KB(rs_total));
-
-	seq_puts(seq, "\n\t");
-
-	/* see drivers/md/md.c
-	 * We do not want to overflow, so the order of operands and
-	 * the * 100 / 100 trick are important. We do a +1 to be
-	 * safe against division by zero. We only estimate anyway.
-	 *
-	 * dt: time from mark until now
-	 * db: blocks written from mark until now
-	 * rt: remaining time
-	 */
-	/* Rolling marks. last_mark+1 may just now be modified.  last_mark+2 is
-	 * at least (DRBD_SYNC_MARKS-2)*DRBD_SYNC_MARK_STEP old, and has at
-	 * least DRBD_SYNC_MARK_STEP time before it will be modified. */
-	/* ------------------------ ~18s average ------------------------ */
-	i = (device->rs_last_mark + 2) % DRBD_SYNC_MARKS;
-	dt = (jiffies - device->rs_mark_time[i]) / HZ;
-	if (dt > 180)
-		stalled = 1;
-
-	if (!dt)
-		dt++;
-	db = device->rs_mark_left[i] - rs_left;
-	rt = (dt * (rs_left / (db/100+1)))/100; /* seconds */
-
-	seq_printf(seq, "finish: %lu:%02lu:%02lu",
-		rt / 3600, (rt % 3600) / 60, rt % 60);
-
-	dbdt = Bit2KB(db/dt);
-	seq_puts(seq, " speed: ");
-	seq_printf_with_thousands_grouping(seq, dbdt);
-	seq_puts(seq, " (");
-	/* ------------------------- ~3s average ------------------------ */
-	if (drbd_proc_details >= 1) {
-		/* this is what drbd_rs_should_slow_down() uses */
-		i = (device->rs_last_mark + DRBD_SYNC_MARKS-1) % DRBD_SYNC_MARKS;
-		dt = (jiffies - device->rs_mark_time[i]) / HZ;
-		if (!dt)
-			dt++;
-		db = device->rs_mark_left[i] - rs_left;
-		dbdt = Bit2KB(db/dt);
-		seq_printf_with_thousands_grouping(seq, dbdt);
-		seq_puts(seq, " -- ");
-	}
-
-	/* --------------------- long term average ---------------------- */
-	/* mean speed since syncer started
-	 * we do account for PausedSync periods */
-	dt = (jiffies - device->rs_start - device->rs_paused) / HZ;
-	if (dt == 0)
-		dt = 1;
-	db = rs_total - rs_left;
-	dbdt = Bit2KB(db/dt);
-	seq_printf_with_thousands_grouping(seq, dbdt);
-	seq_putc(seq, ')');
-
-	if (state.conn == C_SYNC_TARGET ||
-	    state.conn == C_VERIFY_S) {
-		seq_puts(seq, " want: ");
-		seq_printf_with_thousands_grouping(seq, device->c_sync_rate);
-	}
-	seq_printf(seq, " K/sec%s\n", stalled ? " (stalled)" : "");
-
-	if (drbd_proc_details >= 1) {
-		/* 64 bit:
-		 * we convert to sectors in the display below. */
-		unsigned long bm_bits = drbd_bm_bits(device);
-		unsigned long bit_pos;
-		unsigned long long stop_sector = 0;
-		if (state.conn == C_VERIFY_S ||
-		    state.conn == C_VERIFY_T) {
-			bit_pos = bm_bits - device->ov_left;
-			if (verify_can_do_stop_sector(device))
-				stop_sector = device->ov_stop_sector;
-		} else
-			bit_pos = device->bm_resync_fo;
-		/* Total sectors may be slightly off for oddly
-		 * sized devices. So what. */
-		seq_printf(seq,
-			"\t%3d%% sector pos: %llu/%llu",
-			(int)(bit_pos / (bm_bits/100+1)),
-			(unsigned long long)bit_pos * BM_SECT_PER_BIT,
-			(unsigned long long)bm_bits * BM_SECT_PER_BIT);
-		if (stop_sector != 0 && stop_sector != ULLONG_MAX)
-			seq_printf(seq, " stop sector: %llu", stop_sector);
-		seq_putc(seq, '\n');
-	}
-}
-
 int drbd_seq_show(struct seq_file *seq, void *v)
 {
-	int i, prev_i = -1;
-	const char *sn;
-	struct drbd_device *device;
-	struct net_conf *nc;
-	union drbd_dev_state state;
-	char wp;
-
-	static char write_ordering_chars[] = {
-		[WO_NONE] = 'n',
-		[WO_DRAIN_IO] = 'd',
-		[WO_BDEV_FLUSH] = 'f',
-	};
-
-	seq_printf(seq, "version: " REL_VERSION " (api:%d/proto:%d-%d)\n%s\n",
-		   GENL_MAGIC_VERSION, PRO_VERSION_MIN, PRO_VERSION_MAX, drbd_buildtag());
-
-	/*
-	  cs .. connection state
-	  ro .. node role (local/remote)
-	  ds .. disk state (local/remote)
-	     protocol
-	     various flags
-	  ns .. network send
-	  nr .. network receive
-	  dw .. disk write
-	  dr .. disk read
-	  al .. activity log write count
-	  bm .. bitmap update write count
-	  pe .. pending (waiting for ack or data reply)
-	  ua .. unack'd (still need to send ack or data reply)
-	  ap .. application requests accepted, but not yet completed
-	  ep .. number of epochs currently "on the fly", P_BARRIER_ACK pending
-	  wo .. write ordering mode currently in use
-	 oos .. known out-of-sync kB
-	*/
-
-	rcu_read_lock();
-	idr_for_each_entry(&drbd_devices, device, i) {
-		if (prev_i != i - 1)
-			seq_putc(seq, '\n');
-		prev_i = i;
-
-		state = device->state;
-		sn = drbd_conn_str(state.conn);
-
-		if (state.conn == C_STANDALONE &&
-		    state.disk == D_DISKLESS &&
-		    state.role == R_SECONDARY) {
-			seq_printf(seq, "%2d: cs:Unconfigured\n", i);
-		} else {
-			/* reset device->congestion_reason */
-
-			nc = rcu_dereference(first_peer_device(device)->connection->net_conf);
-			wp = nc ? nc->wire_protocol - DRBD_PROT_A + 'A' : ' ';
-			seq_printf(seq,
-			   "%2d: cs:%s ro:%s/%s ds:%s/%s %c %c%c%c%c%c%c\n"
-			   "    ns:%u nr:%u dw:%u dr:%u al:%u bm:%u "
-			   "lo:%d pe:%d ua:%d ap:%d ep:%d wo:%c",
-			   i, sn,
-			   drbd_role_str(state.role),
-			   drbd_role_str(state.peer),
-			   drbd_disk_str(state.disk),
-			   drbd_disk_str(state.pdsk),
-			   wp,
-			   drbd_suspended(device) ? 's' : 'r',
-			   state.aftr_isp ? 'a' : '-',
-			   state.peer_isp ? 'p' : '-',
-			   state.user_isp ? 'u' : '-',
-			   device->congestion_reason ?: '-',
-			   test_bit(AL_SUSPENDED, &device->flags) ? 's' : '-',
-			   device->send_cnt/2,
-			   device->recv_cnt/2,
-			   device->writ_cnt/2,
-			   device->read_cnt/2,
-			   device->al_writ_cnt,
-			   device->bm_writ_cnt,
-			   atomic_read(&device->local_cnt),
-			   atomic_read(&device->ap_pending_cnt) +
-			   atomic_read(&device->rs_pending_cnt),
-			   atomic_read(&device->unacked_cnt),
-			   atomic_read(&device->ap_bio_cnt),
-			   first_peer_device(device)->connection->epochs,
-			   write_ordering_chars[device->resource->write_ordering]
-			);
-			seq_printf(seq, " oos:%llu\n",
-				   Bit2KB((unsigned long long)
-					   drbd_bm_total_weight(device)));
-		}
-		if (state.conn == C_SYNC_SOURCE ||
-		    state.conn == C_SYNC_TARGET ||
-		    state.conn == C_VERIFY_S ||
-		    state.conn == C_VERIFY_T)
-			drbd_syncer_progress(device, seq, state);
-
-		if (drbd_proc_details >= 1 && get_ldev_if_state(device, D_FAILED)) {
-			lc_seq_printf_stats(seq, device->resync);
-			lc_seq_printf_stats(seq, device->act_log);
-			put_ldev(device);
-		}
-
-		if (drbd_proc_details >= 2)
-			seq_printf(seq, "\tblocked on activity log: %d\n", atomic_read(&device->ap_actlog_cnt));
+	bool any_legacy;
+	static const char legacy_info[] =
+#ifdef CONFIG_DRBD_COMPAT_84
+		" (compat 8.4)";
+#else
+		"";
+#endif
+
+	seq_printf(seq, "version: " REL_VERSION " (api:%d/proto:%d-%d)%s\n%s\n",
+		   GENL_MAGIC_VERSION, PRO_VERSION_MIN, PRO_VERSION_MAX, legacy_info,
+		   drbd_buildtag());
+
+	any_legacy = drbd_show_legacy_device(seq, v);
+	if (!any_legacy) {
+		/*
+		 * DRBD 8 did not output the transport information, so do not
+		 * display it if any resources are in DRBD 8 compatibility mode.
+		 */
+		drbd_print_transports_loaded(seq);
 	}
-	rcu_read_unlock();
-
 	return 0;
 }
diff --git a/drivers/block/drbd/drbd_strings.c b/drivers/block/drbd/drbd_strings.c
index 0a06f744b096..619e4c4d0d5e 100644
--- a/drivers/block/drbd/drbd_strings.c
+++ b/drivers/block/drbd/drbd_strings.c
@@ -8,13 +8,14 @@
   Copyright (C) 2003-2008, Philipp Reisner <philipp.reisner@linbit.com>.
   Copyright (C) 2003-2008, Lars Ellenberg <lars.ellenberg@linbit.com>.
 
-
 */
 
 #include <linux/drbd.h>
+#include <linux/array_size.h>
+#include "drbd_protocol.h"
 #include "drbd_strings.h"
 
-static const char * const drbd_conn_s_names[] = {
+static const char * const __conn_state_names[] = {
 	[C_STANDALONE]       = "StandAlone",
 	[C_DISCONNECTING]    = "Disconnecting",
 	[C_UNCONNECTED]      = "Unconnected",
@@ -22,34 +23,54 @@ static const char * const drbd_conn_s_names[] = {
 	[C_BROKEN_PIPE]      = "BrokenPipe",
 	[C_NETWORK_FAILURE]  = "NetworkFailure",
 	[C_PROTOCOL_ERROR]   = "ProtocolError",
-	[C_WF_CONNECTION]    = "WFConnection",
-	[C_WF_REPORT_PARAMS] = "WFReportParams",
 	[C_TEAR_DOWN]        = "TearDown",
-	[C_CONNECTED]        = "Connected",
-	[C_STARTING_SYNC_S]  = "StartingSyncS",
-	[C_STARTING_SYNC_T]  = "StartingSyncT",
-	[C_WF_BITMAP_S]      = "WFBitMapS",
-	[C_WF_BITMAP_T]      = "WFBitMapT",
-	[C_WF_SYNC_UUID]     = "WFSyncUUID",
-	[C_SYNC_SOURCE]      = "SyncSource",
-	[C_SYNC_TARGET]      = "SyncTarget",
-	[C_PAUSED_SYNC_S]    = "PausedSyncS",
-	[C_PAUSED_SYNC_T]    = "PausedSyncT",
-	[C_VERIFY_S]         = "VerifyS",
-	[C_VERIFY_T]         = "VerifyT",
-	[C_AHEAD]            = "Ahead",
-	[C_BEHIND]           = "Behind",
+	[C_CONNECTING]       = "Connecting",
+	[C_CONNECTED]	     = "Connected",
+};
+
+struct state_names drbd_conn_state_names = {
+	.names = __conn_state_names,
+	.size = ARRAY_SIZE(__conn_state_names),
+};
+
+static const char * const __repl_state_names[] = {
+	[L_OFF]              = "Off",
+	[L_ESTABLISHED]      = "Established",
+	[L_STARTING_SYNC_S]  = "StartingSyncS",
+	[L_STARTING_SYNC_T]  = "StartingSyncT",
+	[L_WF_BITMAP_S]      = "WFBitMapS",
+	[L_WF_BITMAP_T]      = "WFBitMapT",
+	[L_WF_SYNC_UUID]     = "WFSyncUUID",
+	[L_SYNC_SOURCE]      = "SyncSource",
+	[L_SYNC_TARGET]      = "SyncTarget",
+	[L_VERIFY_S]         = "VerifyS",
+	[L_VERIFY_T]         = "VerifyT",
+	[L_PAUSED_SYNC_S]    = "PausedSyncS",
+	[L_PAUSED_SYNC_T]    = "PausedSyncT",
+	[L_AHEAD]            = "Ahead",
+	[L_BEHIND]           = "Behind",
+};
+
+struct state_names drbd_repl_state_names = {
+	.names = __repl_state_names,
+	.size = ARRAY_SIZE(__repl_state_names),
 };
 
-static const char * const drbd_role_s_names[] = {
+static const char * const __role_state_names[] = {
+	[R_UNKNOWN]   = "Unknown",
 	[R_PRIMARY]   = "Primary",
 	[R_SECONDARY] = "Secondary",
-	[R_UNKNOWN]   = "Unknown"
 };
 
-static const char * const drbd_disk_s_names[] = {
+struct state_names drbd_role_state_names = {
+	.names = __role_state_names,
+	.size = ARRAY_SIZE(__role_state_names),
+};
+
+static const char * const __disk_state_names[] = {
 	[D_DISKLESS]     = "Diskless",
 	[D_ATTACHING]    = "Attaching",
+	[D_DETACHING]    = "Detaching",
 	[D_FAILED]       = "Failed",
 	[D_NEGOTIATING]  = "Negotiating",
 	[D_INCONSISTENT] = "Inconsistent",
@@ -59,7 +80,12 @@ static const char * const drbd_disk_s_names[] = {
 	[D_UP_TO_DATE]   = "UpToDate",
 };
 
-static const char * const drbd_state_sw_errors[] = {
+struct state_names drbd_disk_state_names = {
+	.names = __disk_state_names,
+	.size = ARRAY_SIZE(__disk_state_names),
+};
+
+static const char * const __error_messages[] = {
 	[-SS_TWO_PRIMARIES] = "Multiple primaries not allowed by config",
 	[-SS_NO_UP_TO_DATE_DISK] = "Need access to UpToDate data",
 	[-SS_NO_LOCAL_DISK] = "Can not resync without local disk",
@@ -73,34 +99,163 @@ static const char * const drbd_state_sw_errors[] = {
 	[-SS_DEVICE_IN_USE] = "Device is held open by someone",
 	[-SS_NO_NET_CONFIG] = "Have no net/connection configuration",
 	[-SS_NO_VERIFY_ALG] = "Need a verify algorithm to start online verify",
-	[-SS_NEED_CONNECTION] = "Need a connection to start verify or resync",
+	[-SS_NEED_CONNECTION] = "State change requires a connection",
 	[-SS_NOT_SUPPORTED] = "Peer does not support protocol",
 	[-SS_LOWER_THAN_OUTDATED] = "Disk state is lower than outdated",
 	[-SS_IN_TRANSIENT_STATE] = "In transient state, retry after next state change",
 	[-SS_CONCURRENT_ST_CHG] = "Concurrent state changes detected and aborted",
-	[-SS_OUTDATE_WO_CONN] = "Need a connection for a graceful disconnect/outdate peer",
 	[-SS_O_VOL_PEER_PRI] = "Other vol primary on peer not allowed by config",
+	[-SS_PRIMARY_READER] = "Peer may not become primary while device is opened read-only",
+	[-SS_INTERRUPTED] = "Interrupted state change",
+	[-SS_TIMEOUT] = "Timeout in operation",
+	[-SS_WEAKLY_CONNECTED] = "Primary nodes must be strongly connected among each other",
+	[-SS_NO_QUORUM] = "No quorum",
+	[-SS_ATTACH_NO_BITMAP] = "Intentional diskless peer may not attach a disk",
+	[-SS_HANDSHAKE_DISCONNECT] = "Disconnect chosen in handshake",
+	[-SS_HANDSHAKE_RETRY] = "Retry chosen in handshake",
 };
 
-const char *drbd_conn_str(enum drbd_conns s)
+struct state_names drbd_error_messages = {
+	.names = __error_messages,
+	.size = ARRAY_SIZE(__error_messages),
+};
+
+static const char * const __packet_names[] = {
+	[P_DATA]	        = "P_DATA",
+	[P_WSAME]               = "P_WSAME",
+	[P_TRIM]                = "P_TRIM",
+	[P_DATA_REPLY]	        = "P_DATA_REPLY",
+	[P_RS_DATA_REPLY]	= "P_RS_DATA_REPLY",
+	[P_BARRIER]	        = "P_BARRIER",
+	[P_BITMAP]	        = "P_BITMAP",
+	[P_BECOME_SYNC_TARGET]  = "P_BECOME_SYNC_TARGET",
+	[P_BECOME_SYNC_SOURCE]  = "P_BECOME_SYNC_SOURCE",
+	[P_UNPLUG_REMOTE]	= "P_UNPLUG_REMOTE",
+	[P_DATA_REQUEST]	= "P_DATA_REQUEST",
+	[P_RS_DATA_REQUEST]     = "P_RS_DATA_REQUEST",
+	[P_SYNC_PARAM]	        = "P_SYNC_PARAM",
+	[P_SYNC_PARAM89]	= "P_SYNC_PARAM89",
+	[P_PROTOCOL]            = "P_PROTOCOL",
+	[P_UUIDS]	        = "P_UUIDS",
+	[P_SIZES]	        = "P_SIZES",
+	[P_STATE]	        = "P_STATE",
+	[P_SYNC_UUID]           = "P_SYNC_UUID",
+	[P_AUTH_CHALLENGE]      = "P_AUTH_CHALLENGE",
+	[P_AUTH_RESPONSE]	= "P_AUTH_RESPONSE",
+	[P_PING]		= "P_PING",
+	[P_PING_ACK]	        = "P_PING_ACK",
+	[P_RECV_ACK]	        = "P_RECV_ACK",
+	[P_WRITE_ACK]	        = "P_WRITE_ACK",
+	[P_RS_WRITE_ACK]	= "P_RS_WRITE_ACK",
+	[P_SUPERSEDED]		= "P_SUPERSEDED",
+	[P_NEG_ACK]	        = "P_NEG_ACK",
+	[P_NEG_DREPLY]	        = "P_NEG_DREPLY",
+	[P_NEG_RS_DREPLY]	= "P_NEG_RS_DREPLY",
+	[P_BARRIER_ACK]	        = "P_BARRIER_ACK",
+	[P_STATE_CHG_REQ]       = "P_STATE_CHG_REQ",
+	[P_STATE_CHG_REPLY]     = "P_STATE_CHG_REPLY",
+	[P_OV_REQUEST]          = "P_OV_REQUEST",
+	[P_OV_REPLY]            = "P_OV_REPLY",
+	[P_OV_RESULT]           = "P_OV_RESULT",
+	[P_CSUM_RS_REQUEST]     = "P_CSUM_RS_REQUEST",
+	[P_RS_IS_IN_SYNC]	= "P_RS_IS_IN_SYNC",
+	[P_COMPRESSED_BITMAP]   = "P_COMPRESSED_BITMAP",
+	[P_DELAY_PROBE]         = "P_DELAY_PROBE",
+	[P_OUT_OF_SYNC]		= "P_OUT_OF_SYNC",
+	[P_RETRY_WRITE]		= "P_RETRY_WRITE",
+	[P_RS_CANCEL]		= "P_RS_CANCEL",
+	[P_RS_CANCEL_AHEAD]	= "P_RS_CANCEL_AHEAD",
+	[P_CONN_ST_CHG_REQ]	= "P_CONN_ST_CHG_REQ",
+	[P_CONN_ST_CHG_REPLY]	= "P_CONN_ST_CHG_REPLY",
+	[P_PROTOCOL_UPDATE]	= "P_PROTOCOL_UPDATE",
+	[P_TWOPC_PREPARE]	= "P_TWOPC_PREPARE",
+	[P_TWOPC_ABORT]		= "P_TWOPC_ABORT",
+	[P_DAGTAG]		= "P_DAGTAG",
+	[P_RS_THIN_REQ]         = "P_RS_THIN_REQ",
+	[P_RS_DEALLOCATED]      = "P_RS_DEALLOCATED",
+	[P_TWOPC_PREP_RSZ]      = "P_TWOPC_PREP_RSZ",
+	[P_ZEROES]              = "P_ZEROES",
+	[P_PEER_ACK]		= "P_PEER_ACK",
+	[P_PEERS_IN_SYNC]       = "P_PEERS_IN_SYNC",
+	[P_UUIDS110]            = "P_UUIDS110",
+	[P_PEER_DAGTAG]         = "P_PEER_DAGTAG",
+	[P_CURRENT_UUID]        = "P_CURRENT_UUID",
+	[P_TWOPC_COMMIT]	= "P_TWOPC_COMMIT",
+	[P_TWOPC_YES]		= "P_TWOPC_YES",
+	[P_TWOPC_NO]		= "P_TWOPC_NO",
+	[P_TWOPC_RETRY]		= "P_TWOPC_RETRY",
+	[P_CONFIRM_STABLE]      = "P_CONFIRM_STABLE",
+	[P_DISCONNECT]		= "P_DISCONNECT",
+	[P_RS_DAGTAG_REQ]	= "P_RS_DAGTAG_REQ",
+	[P_RS_CSUM_DAGTAG_REQ]	= "P_RS_CSUM_DAGTAG_REQ",
+	[P_RS_THIN_DAGTAG_REQ]	= "P_RS_THIN_DAGTAG_REQ",
+	[P_OV_DAGTAG_REQ]	= "P_OV_DAGTAG_REQ",
+	[P_OV_DAGTAG_REPLY]	= "P_OV_DAGTAG_REPLY",
+	[P_WRITE_ACK_IN_SYNC]   = "P_WRITE_ACK_IN_SYNC",
+	[P_RS_NEG_ACK]          = "P_RS_NEG_ACK",
+	[P_OV_RESULT_ID]        = "P_OV_RESULT_ID",
+	[P_RS_DEALLOCATED_ID]   = "P_RS_DEALLOCATED_ID",
+	[P_FLUSH_REQUESTS]      = "P_FLUSH_REQUESTS",
+	[P_FLUSH_FORWARD]       = "P_FLUSH_FORWARD",
+	[P_FLUSH_REQUESTS_ACK]  = "P_FLUSH_REQUESTS_ACK",
+	[P_ENABLE_REPLICATION_NEXT] = "P_ENABLE_REPLICATION_NEXT",
+	[P_ENABLE_REPLICATION]  = "P_ENABLE_REPLICATION",
+	/* enum drbd_packet, but not commands - obsoleted flags:
+	 *	P_MAY_IGNORE
+	 *	P_MAX_OPT_CMD
+	 */
+};
+
+struct state_names drbd_packet_names = {
+	.names = __packet_names,
+	.size = ARRAY_SIZE(__packet_names),
+};
+
+const char *drbd_repl_str(enum drbd_repl_state s)
 {
-	/* enums are unsigned... */
-	return s > C_BEHIND ? "TOO_LARGE" : drbd_conn_s_names[s];
+	return (s < 0 || s >= drbd_repl_state_names.size ||
+		!drbd_repl_state_names.names[s]) ?
+		"?" : drbd_repl_state_names.names[s];
+}
+
+const char *drbd_conn_str(enum drbd_conn_state s)
+{
+	return (s < 0 || s >= drbd_conn_state_names.size ||
+		!drbd_conn_state_names.names[s]) ?
+		"?" : drbd_conn_state_names.names[s];
 }
 
 const char *drbd_role_str(enum drbd_role s)
 {
-	return s > R_SECONDARY   ? "TOO_LARGE" : drbd_role_s_names[s];
+	return (s < 0 || s >= drbd_role_state_names.size ||
+		!drbd_role_state_names.names[s]) ?
+		"?" : drbd_role_state_names.names[s];
 }
 
 const char *drbd_disk_str(enum drbd_disk_state s)
 {
-	return s > D_UP_TO_DATE    ? "TOO_LARGE" : drbd_disk_s_names[s];
+	return (s < 0 || s >= drbd_disk_state_names.size ||
+		!drbd_disk_state_names.names[s]) ?
+		"?" : drbd_disk_state_names.names[s];
 }
 
 const char *drbd_set_st_err_str(enum drbd_state_rv err)
 {
-	return err <= SS_AFTER_LAST_ERROR ? "TOO_SMALL" :
-	       err > SS_TWO_PRIMARIES ? "TOO_LARGE"
-			: drbd_state_sw_errors[-err];
+	return (-err < 0 || -err >= drbd_error_messages.size ||
+		!drbd_error_messages.names[-err]) ?
+		"?" : drbd_error_messages.names[-err];
+}
+
+const char *drbd_packet_name(enum drbd_packet cmd)
+{
+	/* too big for the array: 0xfffX */
+	if (cmd == P_INITIAL_META)
+		return "InitialMeta";
+	if (cmd == P_INITIAL_DATA)
+		return "InitialData";
+	if (cmd == P_CONNECTION_FEATURES)
+		return "ConnectionFeatures";
+	return (cmd < 0 || cmd >= ARRAY_SIZE(__packet_names) ||
+		!__packet_names[cmd]) ?
+	       "?" : __packet_names[cmd];
 }
diff --git a/drivers/block/drbd/drbd_transport.c b/drivers/block/drbd/drbd_transport.c
index 7c6128cbb8bc..0e43a086fe80 100644
--- a/drivers/block/drbd/drbd_transport.c
+++ b/drivers/block/drbd/drbd_transport.c
@@ -366,6 +366,29 @@ struct drbd_path *__drbd_next_path_ref(struct drbd_path *drbd_path,
 	return drbd_path;
 }
 
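+/*
+ * drbd_bio_add_page() - add a page to the last bio in @bios, growing the list
+ *
+ * Tries bio_add_page() on the current tail bio; if the page does not fit,
+ * allocates a fresh bio, appends it to @bios and retries there. Like
+ * bio_add_page(), returns the number of bytes added on success; returns
+ * -ENOMEM if no new bio could be allocated, and -ENOENT if the page does not
+ * fit even into an empty bio. @bios must already contain at least one bio.
+ */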
+int drbd_bio_add_page(struct drbd_transport *transport, struct bio_list *bios,
+		      struct page *page, unsigned int len, unsigned int offset)
+{
+	struct bio *bio = bios->tail;
+	struct bio *new_bio;
+	int r;
+
+	r = bio_add_page(bio, page, len, offset);
+	if (r)
+		return r;
+
+	new_bio = bio_alloc(bio->bi_bdev, bio->bi_max_vecs, bio->bi_opf, GFP_NOIO);
+	if (!new_bio)
+		return -ENOMEM;
+
+	bio_list_add(bios, new_bio);
+	r = bio_add_page(new_bio, page, len, offset);
+	if (r)
+		return r;
+
+	return -ENOENT;
+}
+
 /* Network transport abstractions */
 EXPORT_SYMBOL_GPL(drbd_register_transport_class);
 EXPORT_SYMBOL_GPL(drbd_unregister_transport_class);
@@ -377,3 +400,4 @@ EXPORT_SYMBOL_GPL(drbd_should_abort_listening);
 EXPORT_SYMBOL_GPL(drbd_path_event);
 EXPORT_SYMBOL_GPL(drbd_listener_destroy);
 EXPORT_SYMBOL_GPL(__drbd_next_path_ref);
+EXPORT_SYMBOL_GPL(drbd_bio_add_page);
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH 20/20] drbd: remove BROKEN for DRBD
  2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
                   ` (18 preceding siblings ...)
  2026-03-27 22:38 ` [PATCH 19/20] drbd: update monitoring interfaces for multi-peer topology Christoph Böhmwalder
@ 2026-03-27 22:38 ` Christoph Böhmwalder
  2026-03-28 12:21   ` kernel test robot
  2026-03-28 14:20   ` kernel test robot
  19 siblings, 2 replies; 24+ messages in thread
From: Christoph Böhmwalder @ 2026-03-27 22:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: drbd-dev, linux-kernel, Lars Ellenberg, Philipp Reisner,
	linux-block, Christoph Böhmwalder

Remove the BROKEN dependency now that the DRBD 9 rework is complete
and the driver compiles cleanly again.

Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
---
 drivers/block/drbd/Kconfig | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/block/drbd/Kconfig b/drivers/block/drbd/Kconfig
index d4975c21d4de..ddab8d4ed40b 100644
--- a/drivers/block/drbd/Kconfig
+++ b/drivers/block/drbd/Kconfig
@@ -8,7 +8,6 @@ comment "DRBD disabled because PROC_FS or INET not selected"
 
 config BLK_DEV_DRBD
 	tristate "DRBD Distributed Replicated Block Device support"
-	depends on BROKEN
 	depends on PROC_FS && INET
 	select LRU_CACHE
 	select CRC32
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH 20/20] drbd: remove BROKEN for DRBD
  2026-03-27 22:38 ` [PATCH 20/20] drbd: remove BROKEN for DRBD Christoph Böhmwalder
@ 2026-03-28 12:21   ` kernel test robot
  2026-03-28 14:20   ` kernel test robot
  1 sibling, 0 replies; 24+ messages in thread
From: kernel test robot @ 2026-03-28 12:21 UTC (permalink / raw)
  To: Christoph Böhmwalder, Jens Axboe
  Cc: oe-kbuild-all, drbd-dev, linux-kernel, Lars Ellenberg,
	Philipp Reisner, linux-block, Christoph Böhmwalder

Hi Christoph,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 67807fbaf12719fca46a622d759484652b79c7c3]

url:    https://github.com/intel-lab-lkp/linux/commits/Christoph-B-hmwalder/drbd-mark-as-BROKEN-during-DRBD-9-rework/20260328-153634
base:   67807fbaf12719fca46a622d759484652b79c7c3
patch link:    https://lore.kernel.org/r/20260327223820.2244227-21-christoph.boehmwalder%40linbit.com
patch subject: [PATCH 20/20] drbd: remove BROKEN for DRBD
config: m68k-defconfig (https://download.01.org/0day-ci/archive/20260328/202603282006.UELjhGio-lkp@intel.com/config)
compiler: m68k-linux-gcc (GCC) 15.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260328/202603282006.UELjhGio-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603282006.UELjhGio-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> Warning: drivers/block/drbd/drbd_receiver.c:496 function parameter 'peer_device' not described in 'drbd_alloc_peer_req'
>> Warning: drivers/block/drbd/drbd_receiver.c:496 function parameter 'peer_device' not described in 'drbd_alloc_peer_req'

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 02/20] drbd: extend wire protocol definitions for DRBD 9
  2026-03-27 22:38 ` [PATCH 02/20] drbd: extend wire protocol definitions for DRBD 9 Christoph Böhmwalder
@ 2026-03-28 14:13   ` kernel test robot
  0 siblings, 0 replies; 24+ messages in thread
From: kernel test robot @ 2026-03-28 14:13 UTC (permalink / raw)
  To: Christoph Böhmwalder, Jens Axboe
  Cc: oe-kbuild-all, drbd-dev, linux-kernel, Lars Ellenberg,
	Philipp Reisner, linux-block, Christoph Böhmwalder,
	Joel Colledge

Hi Christoph,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 67807fbaf12719fca46a622d759484652b79c7c3]

url:    https://github.com/intel-lab-lkp/linux/commits/Christoph-B-hmwalder/drbd-mark-as-BROKEN-during-DRBD-9-rework/20260328-153634
base:   67807fbaf12719fca46a622d759484652b79c7c3
patch link:    https://lore.kernel.org/r/20260327223820.2244227-3-christoph.boehmwalder%40linbit.com
patch subject: [PATCH 02/20] drbd: extend wire protocol definitions for DRBD 9
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
docutils: docutils (Docutils 0.21.2, Python 3.13.5, on linux)
reproduce: (https://download.01.org/0day-ci/archive/20260328/202603281537.rTtvjPOL-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603281537.rTtvjPOL-lkp@intel.com/

All warnings (new ones prefixed by >>):

   Warning: Documentation/translations/zh_CN/networking/xfrm_proc.rst references a file that doesn't exist: Documentation/networking/xfrm_proc.rst
   Warning: Documentation/translations/zh_CN/scsi/scsi_mid_low_api.rst references a file that doesn't exist: Documentation/Configure.help
   Warning: MAINTAINERS references a file that doesn't exist: Documentation/ABI/testing/sysfs-platform-ayaneo
   Warning: MAINTAINERS references a file that doesn't exist: Documentation/devicetree/bindings/display/bridge/megachips-stdpxxxx-ge-b850v3-fw.txt
   Warning: arch/powerpc/sysdev/mpic.c references a file that doesn't exist: Documentation/devicetree/bindings/powerpc/fsl/mpic.txt
>> Warning: drivers/block/drbd/drbd_protocol.h references a file that doesn't exist: Documentation/application-resync-synchronization.rst
   Warning: rust/kernel/sync/atomic/ordering.rs references a file that doesn't exist: srctree/tools/memory-model/Documentation/explanation.txt
   Warning: tools/docs/documentation-file-ref-check references a file that doesn't exist: Documentation/virtual/lguest/lguest.c
   Warning: tools/docs/documentation-file-ref-check references a file that doesn't exist: m,\b(\S*)(Documentation/[A-Za-z0-9
   Warning: tools/docs/documentation-file-ref-check references a file that doesn't exist: Documentation/devicetree/dt-object-internal.txt
   Warning: tools/docs/documentation-file-ref-check references a file that doesn't exist: m,^Documentation/scheduler/sched-pelt

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 20/20] drbd: remove BROKEN for DRBD
  2026-03-27 22:38 ` [PATCH 20/20] drbd: remove BROKEN for DRBD Christoph Böhmwalder
  2026-03-28 12:21   ` kernel test robot
@ 2026-03-28 14:20   ` kernel test robot
  1 sibling, 0 replies; 24+ messages in thread
From: kernel test robot @ 2026-03-28 14:20 UTC (permalink / raw)
  To: Christoph Böhmwalder, Jens Axboe
  Cc: llvm, oe-kbuild-all, drbd-dev, linux-kernel, Lars Ellenberg,
	Philipp Reisner, linux-block, Christoph Böhmwalder

Hi Christoph,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 67807fbaf12719fca46a622d759484652b79c7c3]

url:    https://github.com/intel-lab-lkp/linux/commits/Christoph-B-hmwalder/drbd-mark-as-BROKEN-during-DRBD-9-rework/20260328-153634
base:   67807fbaf12719fca46a622d759484652b79c7c3
patch link:    https://lore.kernel.org/r/20260327223820.2244227-21-christoph.boehmwalder%40linbit.com
patch subject: [PATCH 20/20] drbd: remove BROKEN for DRBD
config: hexagon-allmodconfig (https://download.01.org/0day-ci/archive/20260328/202603282234.zZBLE37a-lkp@intel.com/config)
compiler: clang version 17.0.6 (https://github.com/llvm/llvm-project 6009708b4367171ccdbf4b5905cb6a803753fe18)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260328/202603282234.zZBLE37a-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202603282234.zZBLE37a-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> drivers/block/drbd/drbd_transport_rdma.c:618:11: warning: variable 'i' set but not used [-Wunused-but-set-variable]
     618 |         int err, i = 0;
         |                  ^
   1 warning generated.


vim +/i +618 drivers/block/drbd/drbd_transport_rdma.c

bc43430bc671a7 Christoph Böhmwalder 2026-03-27  610  
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  611  
221cfdc61a60c9 Christoph Böhmwalder 2026-03-27  612  static int dtr_recv_bio(struct drbd_transport *transport, struct bio_list *bios, size_t size)
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  613  {
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  614  	struct dtr_transport *rdma_transport =
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  615  		container_of(transport, struct dtr_transport, transport);
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  616  	struct dtr_stream *rdma_stream = &rdma_transport->stream[DATA_STREAM];
221cfdc61a60c9 Christoph Böhmwalder 2026-03-27  617  	struct page *page;
221cfdc61a60c9 Christoph Böhmwalder 2026-03-27 @618  	int err, i = 0;
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  619  
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  620  	if (!dtr_transport_ok(transport))
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  621  		return -ECONNRESET;
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  622  
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  623  	// pr_info("%s: in recv_pages, size: %zu\n", rdma_stream->name, size);
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  624  	TR_ASSERT(transport, rdma_stream->current_rx.bytes_left == 0);
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  625  	dtr_recycle_rx_desc(transport, DATA_STREAM, &rdma_stream->current_rx.desc, GFP_NOIO);
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  626  	dtr_refill_rx_desc(rdma_transport, DATA_STREAM);
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  627  
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  628  	while (size) {
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  629  		struct dtr_rx_desc *rx_desc = NULL;
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  630  		long t;
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  631  
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  632  		t = wait_event_interruptible_timeout(rdma_stream->recv_wq,
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  633  					dtr_receive_rx_desc(rdma_transport, DATA_STREAM, &rx_desc),
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  634  					rdma_stream->recv_timeout);
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  635  
221cfdc61a60c9 Christoph Böhmwalder 2026-03-27  636  		if (t <= 0)
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  637  			return t == 0 ? -EAGAIN : -EINTR;
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  638  
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  639  		page = rx_desc->page;
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  640  		/* put_page() if we would get_page() in
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  641  		 * dtr_create_rx_desc().  but we don't. We return the page
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  642  		 * chain to the user, which is supposed to give it back to
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  643  		 * drbd_free_pages() eventually. */
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  644  		rx_desc->page = NULL;
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  645  		size -= rx_desc->size;
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  646  
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  647  		/* If the sender did dtr_send_page every bvec of a bio with
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  648  		 * unaligned bvecs (as xfs often creates), rx_desc->size and
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  649  		 * offset may well be not the PAGE_SIZE and 0 we hope for.
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  650  		 */
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  651  
221cfdc61a60c9 Christoph Böhmwalder 2026-03-27  652  		err = drbd_bio_add_page(transport, bios, page, rx_desc->size, 0);
221cfdc61a60c9 Christoph Böhmwalder 2026-03-27  653  		if (err < 0)
221cfdc61a60c9 Christoph Böhmwalder 2026-03-27  654  			return err;
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  655  
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  656  		atomic_dec(&rx_desc->cm->path->flow[DATA_STREAM].rx_descs_allocated);
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  657  		dtr_free_rx_desc(rx_desc);
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  658  
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  659  		i++;
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  660  		dtr_refill_rx_desc(rdma_transport, DATA_STREAM);
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  661  	}
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  662  
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  663  	// pr_info("%s: rcvd %d pages\n", rdma_stream->name, i);
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  664  	return 0;
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  665  }
bc43430bc671a7 Christoph Böhmwalder 2026-03-27  666  

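For context, the warning itself looks mechanical to resolve: in the
excerpt above, 'i' is only ever incremented so it could feed the
pr_info() that is commented out at the end of the function, so the
counter has no remaining user. A minimal, untested sketch of one
possible fix, assuming the counter is no longer wanted (hunk line
numbers taken from the excerpt above, not from any particular tree):

--- a/drivers/block/drbd/drbd_transport_rdma.c
+++ b/drivers/block/drbd/drbd_transport_rdma.c
@@ -615,7 +615,7 @@ static int dtr_recv_bio(struct drbd_transport *transport, struct bio_list *bios
 		container_of(transport, struct dtr_transport, transport);
 	struct dtr_stream *rdma_stream = &rdma_transport->stream[DATA_STREAM];
 	struct page *page;
-	int err, i = 0;
+	int err;
 
 	if (!dtr_transport_ok(transport))
 		return -ECONNRESET;
@@ -656,10 +656,8 @@ static int dtr_recv_bio(struct drbd_transport *transport, struct bio_list *bios
 		atomic_dec(&rx_desc->cm->path->flow[DATA_STREAM].rx_descs_allocated);
 		dtr_free_rx_desc(rx_desc);
 
-		i++;
 		dtr_refill_rx_desc(rdma_transport, DATA_STREAM);
 	}
 
-	// pr_info("%s: rcvd %d pages\n", rdma_stream->name, i);
 	return 0;
 }

Alternatively, if the received-page count is still useful when
debugging, 'i' could be kept and the commented-out pr_info() revived
as a pr_debug(), giving the variable a real user again.
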
-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2026-03-28 14:20 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-27 22:38 [PATCH 00/20] DRBD 9 rework Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 01/20] drbd: mark as BROKEN during " Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 02/20] drbd: extend wire protocol definitions for DRBD 9 Christoph Böhmwalder
2026-03-28 14:13   ` kernel test robot
2026-03-27 22:38 ` [PATCH 03/20] drbd: introduce DRBD 9 on-disk metadata format Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 04/20] drbd: add transport layer abstraction Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 05/20] drbd: add TCP transport implementation Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 06/20] drbd: add RDMA " Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 07/20] drbd: add load-balancing TCP transport Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 08/20] drbd: add DAX/PMEM support for metadata access Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 09/20] drbd: add optional compatibility layer for DRBD 8.4 Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 10/20] drbd: rename drbd_worker.c to drbd_sender.c Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 11/20] drbd: rework sender for DRBD 9 multi-peer Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 12/20] drbd: replace per-device state model with multi-peer data structures Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 13/20] drbd: rewrite state machine for DRBD 9 multi-peer clusters Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 14/20] drbd: rework activity log and bitmap for multi-peer replication Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 15/20] drbd: rework request processing for DRBD 9 multi-peer IO Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 16/20] drbd: rework module core for DRBD 9 transport and multi-peer Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 17/20] drbd: rework receiver for DRBD 9 transport and multi-peer protocol Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 18/20] drbd: rework netlink management interface for DRBD 9 Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 19/20] drbd: update monitoring interfaces for multi-peer topology Christoph Böhmwalder
2026-03-27 22:38 ` [PATCH 20/20] drbd: remove BROKEN for DRBD Christoph Böhmwalder
2026-03-28 12:21   ` kernel test robot
2026-03-28 14:20   ` kernel test robot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox