* [PATCH 0/4] bugfix: Introduce sendpages_ok() to check sendpage_ok() on contiguous pages
@ 2024-05-30 13:26 Ofir Gal
2024-05-30 13:26 ` [PATCH 1/4] net: introduce helper sendpages_ok() Ofir Gal
` (4 more replies)
0 siblings, 5 replies; 10+ messages in thread
From: Ofir Gal @ 2024-05-30 13:26 UTC (permalink / raw)
To: davem, linux-block, linux-nvme, netdev, ceph-devel
Cc: dhowells, edumazet, pabeni, kbusch, axboe, hch, sagi,
philipp.reisner, lars.ellenberg, christoph.boehmwalder, idryomov,
xiubli
skb_splice_from_iter() warns on !sendpage_ok() which results in nvme-tcp
data transfer failure. This warning leads to hanging IO.
nvme-tcp uses sendpage_ok() on the first page of an iterator to decide
whether to disable MSG_SPLICE_PAGES. The iterator can represent a list of
contiguous pages.
When MSG_SPLICE_PAGES is enabled, skb_splice_from_iter() is used; it
requires all pages in the iterator to be sendable and checks each page
with sendpage_ok().
nvme_tcp_try_send_data() might allow MSG_SPLICE_PAGES when the first
page is sendable but the following ones are not. skb_splice_from_iter()
will then attempt to send all the pages in the iterator, and when it
reaches an unsendable page the IO hangs.
This series introduces a helper, sendpages_ok(), which returns true if
all of the contiguous pages are sendable.
Drivers that want to send contiguous pages with MSG_SPLICE_PAGES may use
this helper to check whether the page list is OK. If the helper returns
false, the driver should clear the MSG_SPLICE_PAGES flag.
The bug is reproducible; reproducing it requires nvme-over-tcp
controllers with an optimal IO size bigger than PAGE_SIZE. Creating a
raid with a bitmap over those devices triggers the bug.
To simulate a large optimal IO size you can use dm-stripe with a
single device.
A script that reproduces the issue on top of brd devices using dm-stripe
is attached below.
I added 3 debug prints to test my theory: one in nvme_tcp_try_send_data()
and two in skb_splice_from_iter(), the first before the sendpage_ok()
check and the second on !sendpage_ok(), after the warning.
...
nvme_tcp: sendpage_ok, page: 0x654eccd7 (pfn: 120755), len: 262144, offset: 0
skbuff: before sendpage_ok - i: 0. page: 0x654eccd7 (pfn: 120755)
skbuff: before sendpage_ok - i: 1. page: 0x1666a4da (pfn: 120756)
skbuff: before sendpage_ok - i: 2. page: 0x54f9f140 (pfn: 120757)
WARNING: at net/core/skbuff.c:6848 skb_splice_from_iter+0x142/0x450
skbuff: !sendpage_ok - page: 0x54f9f140 (pfn: 120757). is_slab: 1, page_count: 1
...
stack trace:
...
WARNING: at net/core/skbuff.c:6848 skb_splice_from_iter+0x141/0x450
Workqueue: nvme_tcp_wq nvme_tcp_io_work
Call Trace:
? show_regs+0x6a/0x80
? skb_splice_from_iter+0x141/0x450
? __warn+0x8d/0x130
? skb_splice_from_iter+0x141/0x450
? report_bug+0x18c/0x1a0
? handle_bug+0x40/0x70
? exc_invalid_op+0x19/0x70
? asm_exc_invalid_op+0x1b/0x20
? skb_splice_from_iter+0x141/0x450
tcp_sendmsg_locked+0x39e/0xee0
? _prb_read_valid+0x216/0x290
tcp_sendmsg+0x2d/0x50
inet_sendmsg+0x43/0x80
sock_sendmsg+0x102/0x130
? vprintk_default+0x1d/0x30
? vprintk+0x3c/0x70
? _printk+0x58/0x80
nvme_tcp_try_send_data+0x17d/0x530
nvme_tcp_try_send+0x1b7/0x300
nvme_tcp_io_work+0x3c/0xc0
process_one_work+0x22e/0x420
worker_thread+0x50/0x3f0
? __pfx_worker_thread+0x10/0x10
kthread+0xd6/0x100
? __pfx_kthread+0x10/0x10
ret_from_fork+0x3c/0x60
? __pfx_kthread+0x10/0x10
ret_from_fork_asm+0x1b/0x30
...
Ofir Gal (4):
net: introduce helper sendpages_ok()
nvme-tcp: use sendpages_ok() instead of sendpage_ok()
drbd: use sendpages_ok() instead of sendpage_ok()
libceph: use sendpages_ok() instead of sendpage_ok()
drivers/block/drbd/drbd_main.c | 2 +-
drivers/nvme/host/tcp.c | 2 +-
include/linux/net.h | 20 ++++++++++++++++++++
net/ceph/messenger_v1.c | 2 +-
net/ceph/messenger_v2.c | 2 +-
5 files changed, 24 insertions(+), 4 deletions(-)
---
reproduce.sh | 114 +++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 114 insertions(+)
create mode 100755 reproduce.sh
diff --git a/reproduce.sh b/reproduce.sh
new file mode 100755
index 000000000..8ae226b18
--- /dev/null
+++ b/reproduce.sh
@@ -0,0 +1,114 @@
+#!/usr/bin/env sh
+# SPDX-License-Identifier: MIT
+
+set -e
+
+load_modules() {
+ modprobe nvme
+ modprobe nvme-tcp
+ modprobe nvmet
+ modprobe nvmet-tcp
+}
+
+setup_ns() {
+ local dev=$1
+ local num=$2
+ local port=$3
+ ls $dev > /dev/null
+
+ mkdir -p /sys/kernel/config/nvmet/subsystems/$num
+ cd /sys/kernel/config/nvmet/subsystems/$num
+ echo 1 > attr_allow_any_host
+
+ mkdir -p namespaces/$num
+ cd namespaces/$num/
+ echo $dev > device_path
+ echo 1 > enable
+
+ ln -s /sys/kernel/config/nvmet/subsystems/$num \
+ /sys/kernel/config/nvmet/ports/$port/subsystems/
+}
+
+setup_port() {
+ local num=$1
+
+ mkdir -p /sys/kernel/config/nvmet/ports/$num
+ cd /sys/kernel/config/nvmet/ports/$num
+ echo "127.0.0.1" > addr_traddr
+ echo tcp > addr_trtype
+ echo 8009 > addr_trsvcid
+ echo ipv4 > addr_adrfam
+}
+
+setup_big_opt_io() {
+ local dev=$1
+ local name=$2
+
+ # Change optimal IO size by creating dm stripe
+ dmsetup create $name --table \
+ "0 `blockdev --getsz $dev` striped 1 512 $dev 0"
+}
+
+setup_targets() {
+ # Setup ram devices instead of using real nvme devices
+ modprobe brd rd_size=1048576 rd_nr=2 # 1GiB
+
+ setup_big_opt_io /dev/ram0 ram0_big_opt_io
+ setup_big_opt_io /dev/ram1 ram1_big_opt_io
+
+ setup_port 1
+ setup_ns /dev/mapper/ram0_big_opt_io 1 1
+ setup_ns /dev/mapper/ram1_big_opt_io 2 1
+}
+
+setup_initiators() {
+ nvme connect -t tcp -n 1 -a 127.0.0.1 -s 8009
+ nvme connect -t tcp -n 2 -a 127.0.0.1 -s 8009
+}
+
+reproduce_warn() {
+ local devs=$@
+
+ # Hangs here
+ mdadm --create /dev/md/test_md --level=1 --bitmap=internal \
+ --bitmap-chunk=1024K --assume-clean --run --raid-devices=2 $devs
+}
+
+echo "###################################
+
+The script creates 2 nvme initiators in order to reproduce the bug.
+The script doesn't know which controllers it created, choose the new nvme
+controllers when asked.
+
+###################################
+
+Press enter to continue.
+"
+
+read tmp
+
+echo "# Creating 2 nvme controllers for the reproduction. current nvme devices:"
+lsblk -s | grep nvme || true
+echo "---------------------------------
+"
+
+load_modules
+setup_targets
+setup_initiators
+
+sleep 0.1 # Wait for the new nvme ctrls to show up
+
+echo "# Created 2 nvme devices. nvme devices list:"
+
+lsblk -s | grep nvme
+echo "---------------------------------
+"
+
+echo "# Insert the new nvme devices as separated lines. both should be with size of 1G"
+read dev1
+read dev2
+
+ls /dev/$dev1 > /dev/null
+ls /dev/$dev2 > /dev/null
+
+reproduce_warn /dev/$dev1 /dev/$dev2
--
2.34.1
* [PATCH 1/4] net: introduce helper sendpages_ok()
2024-05-30 13:26 [PATCH 0/4] bugfix: Introduce sendpages_ok() to check sendpage_ok() on contiguous pages Ofir Gal
@ 2024-05-30 13:26 ` Ofir Gal
2024-06-03 7:18 ` Hannes Reinecke
2024-05-30 13:26 ` [PATCH 2/4] nvme-tcp: use sendpages_ok() instead of sendpage_ok() Ofir Gal
` (3 subsequent siblings)
4 siblings, 1 reply; 10+ messages in thread
From: Ofir Gal @ 2024-05-30 13:26 UTC (permalink / raw)
To: davem, linux-block, linux-nvme, netdev, ceph-devel
Cc: dhowells, edumazet, pabeni, kbusch, axboe, hch, sagi,
philipp.reisner, lars.ellenberg, christoph.boehmwalder, idryomov,
xiubli
Network drivers use sendpage_ok() on the first page of an iterator to
decide whether to disable MSG_SPLICE_PAGES. The iterator can represent a
list of contiguous pages.
When MSG_SPLICE_PAGES is enabled, skb_splice_from_iter() is used; it
requires all pages in the iterator to be sendable, so each page needs to
be checked.
This patch introduces a helper, sendpages_ok(), which returns true if
all the contiguous pages are sendable.
Drivers that want to send contiguous pages with MSG_SPLICE_PAGES may use
this helper to check whether the page list is OK. If the helper returns
false, the driver should clear the MSG_SPLICE_PAGES flag.
Signed-off-by: Ofir Gal <ofir.gal@volumez.com>
---
include/linux/net.h | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/include/linux/net.h b/include/linux/net.h
index 688320b79fcc..b33bdc3e2031 100644
--- a/include/linux/net.h
+++ b/include/linux/net.h
@@ -322,6 +322,26 @@ static inline bool sendpage_ok(struct page *page)
return !PageSlab(page) && page_count(page) >= 1;
}
+/*
+ * Check sendpage_ok on contiguous pages.
+ */
+static inline bool sendpages_ok(struct page *page, size_t len, size_t offset)
+{
+ unsigned int pagecount;
+ size_t page_offset;
+ int k;
+
+ page = page + offset / PAGE_SIZE;
+ page_offset = offset % PAGE_SIZE;
+ pagecount = DIV_ROUND_UP(len + page_offset, PAGE_SIZE);
+
+ for (k = 0; k < pagecount; k++)
+ if (!sendpage_ok(page + k))
+ return false;
+
+ return true;
+}
+
int kernel_sendmsg(struct socket *sock, struct msghdr *msg, struct kvec *vec,
size_t num, size_t len);
int kernel_sendmsg_locked(struct sock *sk, struct msghdr *msg,
--
2.34.1
* [PATCH 2/4] nvme-tcp: use sendpages_ok() instead of sendpage_ok()
2024-05-30 13:26 [PATCH 0/4] bugfix: Introduce sendpages_ok() to check sendpage_ok() on contiguous pages Ofir Gal
2024-05-30 13:26 ` [PATCH 1/4] net: introduce helper sendpages_ok() Ofir Gal
@ 2024-05-30 13:26 ` Ofir Gal
2024-06-03 7:22 ` Hannes Reinecke
2024-05-30 13:26 ` [PATCH 3/4] drbd: use sendpages_ok() to " Ofir Gal
` (2 subsequent siblings)
4 siblings, 1 reply; 10+ messages in thread
From: Ofir Gal @ 2024-05-30 13:26 UTC (permalink / raw)
To: davem, linux-block, linux-nvme, netdev, ceph-devel
Cc: dhowells, edumazet, pabeni, kbusch, axboe, hch, sagi
Currently nvme_tcp_try_send_data() uses sendpage_ok() to decide whether
to disable MSG_SPLICE_PAGES. It checks only the first page of the
iterator, while the iterator may represent contiguous pages.
MSG_SPLICE_PAGES enables skb_splice_from_iter(), which checks every page
it sends with sendpage_ok().
When nvme_tcp_try_send_data() sends an iterator whose first page is
sendable but one of the other pages isn't, skb_splice_from_iter() warns
and aborts the data transfer.
Using the new helper sendpages_ok() to decide whether to disable
MSG_SPLICE_PAGES solves the issue.
Signed-off-by: Ofir Gal <ofir.gal@volumez.com>
---
drivers/nvme/host/tcp.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 8b5e4327fe83..9f0fd14cbcb7 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1051,7 +1051,7 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
else
msg.msg_flags |= MSG_MORE;
- if (!sendpage_ok(page))
+ if (!sendpages_ok(page, len, offset))
msg.msg_flags &= ~MSG_SPLICE_PAGES;
bvec_set_page(&bvec, page, len, offset);
--
2.34.1
* [PATCH 3/4] drbd: use sendpages_ok() instead of sendpage_ok()
2024-05-30 13:26 [PATCH 0/4] bugfix: Introduce sendpages_ok() to check sendpage_ok() on contiguous pages Ofir Gal
2024-05-30 13:26 ` [PATCH 1/4] net: introduce helper sendpages_ok() Ofir Gal
2024-05-30 13:26 ` [PATCH 2/4] nvme-tcp: use sendpages_ok() instead of sendpage_ok() Ofir Gal
@ 2024-05-30 13:26 ` Ofir Gal
2024-05-30 13:26 ` [PATCH 4/4] libceph: " Ofir Gal
2024-05-30 17:58 ` [PATCH 0/4] bugfix: Introduce sendpages_ok() to check sendpage_ok() on contiguous pages Sagi Grimberg
4 siblings, 0 replies; 10+ messages in thread
From: Ofir Gal @ 2024-05-30 13:26 UTC (permalink / raw)
To: davem, linux-block, linux-nvme, netdev, ceph-devel
Cc: dhowells, edumazet, pabeni, philipp.reisner, lars.ellenberg,
christoph.boehmwalder
Currently _drbd_send_page() uses sendpage_ok() to decide whether to
enable MSG_SPLICE_PAGES. It checks only the first page of the iterator,
while the iterator may represent contiguous pages.
MSG_SPLICE_PAGES enables skb_splice_from_iter(), which checks every page
it sends with sendpage_ok().
When _drbd_send_page() sends an iterator whose first page is sendable
but one of the other pages isn't, skb_splice_from_iter() warns and
aborts the data transfer.
Using the new helper sendpages_ok() to decide whether to enable
MSG_SPLICE_PAGES solves the issue.
Signed-off-by: Ofir Gal <ofir.gal@volumez.com>
---
drivers/block/drbd/drbd_main.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 113b441d4d36..a5dbbf6cce23 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -1550,7 +1550,7 @@ static int _drbd_send_page(struct drbd_peer_device *peer_device, struct page *pa
* put_page(); and would cause either a VM_BUG directly, or
* __page_cache_release a page that would actually still be referenced
* by someone, leading to some obscure delayed Oops somewhere else. */
- if (!drbd_disable_sendpage && sendpage_ok(page))
+ if (!drbd_disable_sendpage && sendpages_ok(page, len, offset))
msg.msg_flags |= MSG_NOSIGNAL | MSG_SPLICE_PAGES;
drbd_update_congested(peer_device->connection);
--
2.34.1
* [PATCH 4/4] libceph: use sendpages_ok() instead of sendpage_ok()
2024-05-30 13:26 [PATCH 0/4] bugfix: Introduce sendpages_ok() to check sendpage_ok() on contiguous pages Ofir Gal
` (2 preceding siblings ...)
2024-05-30 13:26 ` [PATCH 3/4] drbd: use sendpages_ok() to " Ofir Gal
@ 2024-05-30 13:26 ` Ofir Gal
2024-05-30 17:58 ` [PATCH 0/4] bugfix: Introduce sendpages_ok() to check sendpage_ok() on contiguous pages Sagi Grimberg
4 siblings, 0 replies; 10+ messages in thread
From: Ofir Gal @ 2024-05-30 13:26 UTC (permalink / raw)
To: davem, linux-block, linux-nvme, netdev, ceph-devel
Cc: dhowells, edumazet, pabeni, idryomov, xiubli
Currently ceph_tcp_sendpage() and do_try_sendpage() use sendpage_ok() to
decide whether to enable MSG_SPLICE_PAGES. They check only the first
page of the iterator, while the iterator may represent contiguous pages.
MSG_SPLICE_PAGES enables skb_splice_from_iter(), which checks every page
it sends with sendpage_ok().
When ceph_tcp_sendpage() or do_try_sendpage() sends an iterator whose
first page is sendable but one of the other pages isn't,
skb_splice_from_iter() warns and aborts the data transfer.
Using the new helper sendpages_ok() to decide whether to enable
MSG_SPLICE_PAGES solves the issue.
Signed-off-by: Ofir Gal <ofir.gal@volumez.com>
---
net/ceph/messenger_v1.c | 2 +-
net/ceph/messenger_v2.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/ceph/messenger_v1.c b/net/ceph/messenger_v1.c
index 0cb61c76b9b8..a6788f284cd7 100644
--- a/net/ceph/messenger_v1.c
+++ b/net/ceph/messenger_v1.c
@@ -94,7 +94,7 @@ static int ceph_tcp_sendpage(struct socket *sock, struct page *page,
* coalescing neighboring slab objects into a single frag which
* triggers one of hardened usercopy checks.
*/
- if (sendpage_ok(page))
+ if (sendpages_ok(page, size, offset))
msg.msg_flags |= MSG_SPLICE_PAGES;
bvec_set_page(&bvec, page, size, offset);
diff --git a/net/ceph/messenger_v2.c b/net/ceph/messenger_v2.c
index bd608ffa0627..27f8f6c8eb60 100644
--- a/net/ceph/messenger_v2.c
+++ b/net/ceph/messenger_v2.c
@@ -165,7 +165,7 @@ static int do_try_sendpage(struct socket *sock, struct iov_iter *it)
* coalescing neighboring slab objects into a single frag
* which triggers one of hardened usercopy checks.
*/
- if (sendpage_ok(bv.bv_page))
+ if (sendpages_ok(bv.bv_page, bv.bv_len, bv.bv_offset))
msg.msg_flags |= MSG_SPLICE_PAGES;
else
msg.msg_flags &= ~MSG_SPLICE_PAGES;
--
2.34.1
* Re: [PATCH 0/4] bugfix: Introduce sendpages_ok() to check sendpage_ok() on contiguous pages
2024-05-30 13:26 [PATCH 0/4] bugfix: Introduce sendpages_ok() to check sendpage_ok() on contiguous pages Ofir Gal
` (3 preceding siblings ...)
2024-05-30 13:26 ` [PATCH 4/4] libceph: " Ofir Gal
@ 2024-05-30 17:58 ` Sagi Grimberg
2024-06-03 10:32 ` Ofir Gal
4 siblings, 1 reply; 10+ messages in thread
From: Sagi Grimberg @ 2024-05-30 17:58 UTC (permalink / raw)
To: Ofir Gal, davem, linux-block, linux-nvme, netdev, ceph-devel
Cc: dhowells, edumazet, pabeni, kbusch, axboe, hch, philipp.reisner,
lars.ellenberg, christoph.boehmwalder, idryomov, xiubli
Hey Ofir,
On 30/05/2024 16:26, Ofir Gal wrote:
> skb_splice_from_iter() warns on !sendpage_ok() which results in nvme-tcp
> data transfer failure. This warning leads to hanging IO.
>
> nvme-tcp using sendpage_ok() to check the first page of an iterator in
> order to disable MSG_SPLICE_PAGES. The iterator can represent a list of
> contiguous pages.
>
> When MSG_SPLICE_PAGES is enabled skb_splice_from_iter() is being used,
> it requires all pages in the iterator to be sendable.
> skb_splice_from_iter() checks each page with sendpage_ok().
>
> nvme_tcp_try_send_data() might allow MSG_SPLICE_PAGES when the first
> page is sendable, but the next one are not. skb_splice_from_iter() will
> attempt to send all the pages in the iterator. When reaching an
> unsendable page the IO will hang.
Interesting. Do you know where this buffer came from? I find it strange
that we get a bvec with a contiguous segment which consists of non-slab
originated pages together with slab originated pages... it is surprising
to see a mix of the two.
I'm wondering if this is something that happened before David's
splice_pages changes. Maybe before that with multipage bvecs? Anyway, it
is strange, never seen that.
David, strange that nvme-tcp is setting a single contiguous-element bvec
but it is broken up into PAGE_SIZE increments in skb_splice_from_iter...
>
> The patch introduces a helper sendpages_ok(), it returns true if all the
> continuous pages are sendable.
>
> Drivers who want to send contiguous pages with MSG_SPLICE_PAGES may use
> this helper to check whether the page list is OK. If the helper does not
> return true, the driver should remove MSG_SPLICE_PAGES flag.
>
>
> The bug is reproducible, in order to reproduce we need nvme-over-tcp
> controllers with optimal IO size bigger than PAGE_SIZE. Creating a raid
> with bitmap over those devices reproduces the bug.
>
> In order to simulate large optimal IO size you can use dm-stripe with a
> single device.
> Script to reproduce the issue on top of brd devices using dm-stripe is
> attached below.
This is a great candidate for blktests; it would be very beneficial to
have it added there.
* Re: [PATCH 1/4] net: introduce helper sendpages_ok()
2024-05-30 13:26 ` [PATCH 1/4] net: introduce helper sendpages_ok() Ofir Gal
@ 2024-06-03 7:18 ` Hannes Reinecke
2024-06-03 11:47 ` Ofir Gal
0 siblings, 1 reply; 10+ messages in thread
From: Hannes Reinecke @ 2024-06-03 7:18 UTC (permalink / raw)
To: Ofir Gal, davem, linux-block, linux-nvme, netdev, ceph-devel
Cc: dhowells, edumazet, pabeni, kbusch, axboe, hch, sagi,
philipp.reisner, lars.ellenberg, christoph.boehmwalder, idryomov,
xiubli
On 5/30/24 15:26, Ofir Gal wrote:
> Network drivers are using sendpage_ok() to check the first page of an
> iterator in order to disable MSG_SPLICE_PAGES. The iterator can
> represent list of contiguous pages.
>
> When MSG_SPLICE_PAGES is enabled skb_splice_from_iter() is being used,
> it requires all pages in the iterator to be sendable. Therefore it needs
> to check that each page is sendable.
>
> The patch introduces a helper sendpages_ok(), it returns true if all the
> contiguous pages are sendable.
>
> Drivers who want to send contiguous pages with MSG_SPLICE_PAGES may use
> this helper to check whether the page list is OK. If the helper does not
> return true, the driver should remove MSG_SPLICE_PAGES flag.
>
> Signed-off-by: Ofir Gal <ofir.gal@volumez.com>
> ---
> include/linux/net.h | 20 ++++++++++++++++++++
> 1 file changed, 20 insertions(+)
>
> diff --git a/include/linux/net.h b/include/linux/net.h
> index 688320b79fcc..b33bdc3e2031 100644
> --- a/include/linux/net.h
> +++ b/include/linux/net.h
> @@ -322,6 +322,26 @@ static inline bool sendpage_ok(struct page *page)
> return !PageSlab(page) && page_count(page) >= 1;
> }
>
> +/*
> + * Check sendpage_ok on contiguous pages.
> + */
> +static inline bool sendpages_ok(struct page *page, size_t len, size_t offset)
> +{
> + unsigned int pagecount;
> + size_t page_offset;
> + int k;
> +
> + page = page + offset / PAGE_SIZE;
> + page_offset = offset % PAGE_SIZE;
> + pagecount = DIV_ROUND_UP(len + page_offset, PAGE_SIZE);
> +
Don't we miss the first page for offset > PAGE_SIZE?
I'd rather check all pages from 'page' up to (offset + len), just
to be on the safe side.
> + for (k = 0; k < pagecount; k++)
> + if (!sendpage_ok(page + k))
> + return false;
> +
> + return true;
> +}
> +
> int kernel_sendmsg(struct socket *sock, struct msghdr *msg, struct kvec *vec,
> size_t num, size_t len);
> int kernel_sendmsg_locked(struct sock *sk, struct msghdr *msg,
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
* Re: [PATCH 2/4] nvme-tcp: use sendpages_ok() instead of sendpage_ok()
2024-05-30 13:26 ` [PATCH 2/4] nvme-tcp: use sendpages_ok() instead of sendpage_ok() Ofir Gal
@ 2024-06-03 7:22 ` Hannes Reinecke
0 siblings, 0 replies; 10+ messages in thread
From: Hannes Reinecke @ 2024-06-03 7:22 UTC (permalink / raw)
To: Ofir Gal, davem, linux-block, linux-nvme, netdev, ceph-devel
Cc: dhowells, edumazet, pabeni, kbusch, axboe, hch, sagi
On 5/30/24 15:26, Ofir Gal wrote:
> Currently nvme_tcp_try_send_data() use sendpage_ok() in order to disable
> MSG_SPLICE_PAGES, it check the first page of the iterator, the iterator
> may represent contiguous pages.
>
> MSG_SPLICE_PAGES enables skb_splice_from_iter() which checks all the
> pages it sends with sendpage_ok().
>
> When nvme_tcp_try_send_data() sends an iterator that the first page is
> sendable, but one of the other pages isn't skb_splice_from_iter() warns
> and aborts the data transfer.
>
> Using the new helper sendpages_ok() in order to disable MSG_SPLICE_PAGES
> solves the issue.
>
> Signed-off-by: Ofir Gal <ofir.gal@volumez.com>
> ---
> drivers/nvme/host/tcp.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> index 8b5e4327fe83..9f0fd14cbcb7 100644
> --- a/drivers/nvme/host/tcp.c
> +++ b/drivers/nvme/host/tcp.c
> @@ -1051,7 +1051,7 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req)
> else
> msg.msg_flags |= MSG_MORE;
>
> - if (!sendpage_ok(page))
> + if (!sendpages_ok(page, len, offset))
> msg.msg_flags &= ~MSG_SPLICE_PAGES;
>
> bvec_set_page(&bvec, page, len, offset);
Reviewed-by: Hannes Reinecke <hare@suse.de>
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
* Re: [PATCH 0/4] bugfix: Introduce sendpages_ok() to check sendpage_ok() on contiguous pages
2024-05-30 17:58 ` [PATCH 0/4] bugfix: Introduce sendpages_ok() to check sendpage_ok() on contiguous pages Sagi Grimberg
@ 2024-06-03 10:32 ` Ofir Gal
0 siblings, 0 replies; 10+ messages in thread
From: Ofir Gal @ 2024-06-03 10:32 UTC (permalink / raw)
To: Sagi Grimberg, davem, linux-block, linux-nvme, netdev, ceph-devel
Cc: dhowells, edumazet, pabeni, kbusch, axboe, hch, philipp.reisner,
lars.ellenberg, christoph.boehmwalder, idryomov, xiubli
On 30/05/2024 20:58, Sagi Grimberg wrote:
> Hey Ofir,
>
> On 30/05/2024 16:26, Ofir Gal wrote:
>> skb_splice_from_iter() warns on !sendpage_ok() which results in nvme-tcp
>> data transfer failure. This warning leads to hanging IO.
>>
>> nvme-tcp using sendpage_ok() to check the first page of an iterator in
>> order to disable MSG_SPLICE_PAGES. The iterator can represent a list of
>> contiguous pages.
>>
>> When MSG_SPLICE_PAGES is enabled skb_splice_from_iter() is being used,
>> it requires all pages in the iterator to be sendable.
>> skb_splice_from_iter() checks each page with sendpage_ok().
>>
>> nvme_tcp_try_send_data() might allow MSG_SPLICE_PAGES when the first
>> page is sendable, but the next one are not. skb_splice_from_iter() will
>> attempt to send all the pages in the iterator. When reaching an
>> unsendable page the IO will hang.
>
> Interesting. Do you know where this buffer came from? I find it strange
> that a we get a bvec with a contiguous segment which consists of non slab
> originated pages together with slab originated pages... it is surprising to see
> a mix of the two.
I find it strange as well; I haven't investigated the origin of the IO
yet. I suspect the first 2 pages are the superblocks of the raid
(mdp_superblock_1 and bitmap_super_s) and the rest of the IO is the
bitmap.
I stumbled on the same issue when running xfs_format (couldn't
reproduce it from scratch). I suspect there are other cases that mix
slab pages and non-slab pages.
> I'm wandering if this is something that happened before david's splice_pages
> changes. Maybe before that with multipage bvecs? Anyways it is strange, never
> seen that.
I haven't bisected the commit that caused the behavior, but I have
tested Ubuntu with a 6.2.0 kernel and the bug didn't occur (6.2.0
doesn't contain David's splice_pages changes).
I'm not familiar with the "multipage bvecs" patch, which patch do you
refer to?
> David, strange that nvme-tcp is setting a single contiguous element bvec but it
> is broken up into PAGE_SIZE increments in skb_splice_from_iter...
>
>>
>> The patch introduces a helper sendpages_ok(), it returns true if all the
>> continuous pages are sendable.
>>
>> Drivers who want to send contiguous pages with MSG_SPLICE_PAGES may use
>> this helper to check whether the page list is OK. If the helper does not
>> return true, the driver should remove MSG_SPLICE_PAGES flag.
>>
>>
>> The bug is reproducible, in order to reproduce we need nvme-over-tcp
>> controllers with optimal IO size bigger than PAGE_SIZE. Creating a raid
>> with bitmap over those devices reproduces the bug.
>>
>> In order to simulate large optimal IO size you can use dm-stripe with a
>> single device.
>> Script to reproduce the issue on top of brd devices using dm-stripe is
>> attached below.
>
> This is a great candidate for blktests. would be very beneficial to have it added there.
Good idea, will do!
* Re: [PATCH 1/4] net: introduce helper sendpages_ok()
2024-06-03 7:18 ` Hannes Reinecke
@ 2024-06-03 11:47 ` Ofir Gal
0 siblings, 0 replies; 10+ messages in thread
From: Ofir Gal @ 2024-06-03 11:47 UTC (permalink / raw)
To: Hannes Reinecke, davem, linux-block, linux-nvme, netdev,
ceph-devel
Cc: dhowells, edumazet, pabeni, kbusch, axboe, hch, sagi,
philipp.reisner, lars.ellenberg, christoph.boehmwalder, idryomov,
xiubli
On 03/06/2024 10:18, Hannes Reinecke wrote:
> On 5/30/24 15:26, Ofir Gal wrote:
>> Network drivers are using sendpage_ok() to check the first page of an
>> iterator in order to disable MSG_SPLICE_PAGES. The iterator can
>> represent list of contiguous pages.
>>
>> When MSG_SPLICE_PAGES is enabled skb_splice_from_iter() is being used,
>> it requires all pages in the iterator to be sendable. Therefore it needs
>> to check that each page is sendable.
>>
>> The patch introduces a helper sendpages_ok(), it returns true if all the
>> contiguous pages are sendable.
>>
>> Drivers who want to send contiguous pages with MSG_SPLICE_PAGES may use
>> this helper to check whether the page list is OK. If the helper does not
>> return true, the driver should remove MSG_SPLICE_PAGES flag.
>>
>> Signed-off-by: Ofir Gal <ofir.gal@volumez.com>
>> ---
>> include/linux/net.h | 20 ++++++++++++++++++++
>> 1 file changed, 20 insertions(+)
>>
>> diff --git a/include/linux/net.h b/include/linux/net.h
>> index 688320b79fcc..b33bdc3e2031 100644
>> --- a/include/linux/net.h
>> +++ b/include/linux/net.h
>> @@ -322,6 +322,26 @@ static inline bool sendpage_ok(struct page *page)
>> return !PageSlab(page) && page_count(page) >= 1;
>> }
>> +/*
>> + * Check sendpage_ok on contiguous pages.
>> + */
>> +static inline bool sendpages_ok(struct page *page, size_t len, size_t offset)
>> +{
>> + unsigned int pagecount;
>> + size_t page_offset;
>> + int k;
>> +
>> + page = page + offset / PAGE_SIZE;
>> + page_offset = offset % PAGE_SIZE;
>> + pagecount = DIV_ROUND_UP(len + page_offset, PAGE_SIZE);
>> +
> Don't we miss the first page for offset > PAGE_SIZE?
> I'd rather check for all pages from 'page' up to (offset + len), just
> to be on the safe side.
We do; I copied the logic from iov_iter_extract_bvec_pages() to be
aligned with how skb_splice_from_iter() splits the pages.
I don't think we need to check a page we won't send, but I don't mind
being on the safe side.
>> + for (k = 0; k < pagecount; k++)
>> + if (!sendpage_ok(page + k))
>> + return false;
>> +
>> + return true;
>> +}
>> +
>> int kernel_sendmsg(struct socket *sock, struct msghdr *msg, struct kvec *vec,
>> size_t num, size_t len);
>> int kernel_sendmsg_locked(struct sock *sk, struct msghdr *msg,
>
> Cheers,
>
> Hannes