* [bug report] netfs: Speed up buffered reading
@ 2024-09-11 7:36 Dan Carpenter
2024-09-11 9:22 ` David Howells
0 siblings, 1 reply; 3+ messages in thread
From: Dan Carpenter @ 2024-09-11 7:36 UTC (permalink / raw)
To: David Howells; +Cc: netfs
Hello David Howells,
Commit 5c7822a3013d ("netfs: Speed up buffered reading") from Jul 2,
2024 (linux-next), leads to the following Smatch static checker
warning:
fs/netfs/read_collect.c:175 netfs_consume_read_data()
error: we previously assumed 'folioq' could be null (see line 115)
fs/netfs/read_collect.c
     86 static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq, bool was_async)
     87 {
     88         struct netfs_io_subrequest *prev, *next;
     89         struct netfs_io_request *rreq = subreq->rreq;
     90         struct folio_queue *folioq = subreq->curr_folioq;
     91         size_t avail, prev_donated, next_donated, fsize, part, excess;
     92         loff_t fpos, start;
     93         loff_t fend;
     94         int slot = subreq->curr_folioq_slot;
     95
     96         if (WARN(subreq->transferred > subreq->len,
     97                  "Subreq overread: R%x[%x] %zu > %zu",
     98                  rreq->debug_id, subreq->debug_index,
     99                  subreq->transferred, subreq->len))
    100                 subreq->transferred = subreq->len;
    101
    102 next_folio:
    103         fsize = PAGE_SIZE << subreq->curr_folio_order;
    104         fpos = round_down(subreq->start + subreq->consumed, fsize);
    105         fend = fpos + fsize;
    106
    107         if (WARN_ON_ONCE(!folioq) ||
                                 ^^^^^^^
                                 Check for NULL

    108             WARN_ON_ONCE(!folioq_folio(folioq, slot)) ||
    109             WARN_ON_ONCE(folioq_folio(folioq, slot)->index != fpos / PAGE_SIZE)) {
    110                 pr_err("R=%08x[%x] s=%llx-%llx ctl=%zx/%zx/%zx sl=%u\n",
    111                        rreq->debug_id, subreq->debug_index,
    112                        subreq->start, subreq->start + subreq->transferred - 1,
    113                        subreq->consumed, subreq->transferred, subreq->len,
    114                        slot);
    115                 if (folioq) {
    116                         struct folio *folio = folioq_folio(folioq, slot);
    117
    118                         pr_err("folioq: orders=%02x%02x%02x%02x\n",
    119                                folioq->orders[0], folioq->orders[1],
    120                                folioq->orders[2], folioq->orders[3]);
    121                         if (folio)
    122                                 pr_err("folio: %llx-%llx ix=%llx o=%u qo=%u\n",
    123                                        fpos, fend - 1, folio_pos(folio), folio_order(folio),
    124                                        folioq_folio_order(folioq, slot));
    125                 }
    126         }
    127
    128 donation_changed:
    129         /* Try to consume the current folio if we've hit or passed the end of
    130          * it.  There's a possibility that this subreq doesn't start at the
    131          * beginning of the folio, in which case we need to donate to/from the
    132          * preceding subreq.
    133          *
    134          * We also need to include any potential donation back from the
    135          * following subreq.
    136          */
    137         prev_donated = READ_ONCE(subreq->prev_donated);
    138         next_donated = READ_ONCE(subreq->next_donated);
    139         if (prev_donated || next_donated) {
    140                 spin_lock_bh(&rreq->lock);
    141                 prev_donated = subreq->prev_donated;
    142                 next_donated = subreq->next_donated;
    143                 subreq->start -= prev_donated;
    144                 subreq->len += prev_donated;
    145                 subreq->transferred += prev_donated;
    146                 prev_donated = subreq->prev_donated = 0;
    147                 if (subreq->transferred == subreq->len) {
    148                         subreq->len += next_donated;
    149                         subreq->transferred += next_donated;
    150                         next_donated = subreq->next_donated = 0;
    151                 }
    152                 trace_netfs_sreq(subreq, netfs_sreq_trace_add_donations);
    153                 spin_unlock_bh(&rreq->lock);
    154         }
    155
    156         avail = subreq->transferred;
    157         if (avail == subreq->len)
    158                 avail += next_donated;
    159         start = subreq->start;
    160         if (subreq->consumed == 0) {
    161                 start -= prev_donated;
    162                 avail += prev_donated;
    163         } else {
    164                 start += subreq->consumed;
    165                 avail -= subreq->consumed;
    166         }
    167         part = umin(avail, fsize);
    168
    169         trace_netfs_progress(subreq, start, avail, part);
    170
    171         if (start + avail >= fend) {
    172                 if (fpos == start) {
    173                         /* Flush, unlock and mark for caching any folio we've just read. */
    174                         subreq->consumed = fend - subreq->start;
--> 175                         netfs_unlock_read_folio(subreq, rreq, folioq, slot);
                                                                      ^^^^^^
                                                                      Unchecked dereference

    176                         folioq_mark2(folioq, slot);
                                             ^^^^^^
    177                         if (subreq->consumed >= subreq->len)
    178                                 goto remove_subreq;
    179                 } else if (fpos < start) {
    180                         excess = fend - subreq->start;
    181
    182                         spin_lock_bh(&rreq->lock);
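To make the pattern concrete, here is a minimal, self-contained sketch of what the checker is complaining about. The types and names below are hypothetical stand-ins, not the real netfs code, and whether an early return is the right fix is a separate question for the maintainer:

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical stand-in for the folio queue; not the real netfs type. */
struct fq {
	int slots[4];
};

/*
 * The warning in miniature: 'q' is treated as possibly NULL on one path
 * (like the debug "if (folioq)" block at line 115) but would otherwise be
 * dereferenced unconditionally later on the same path (like line 175).
 * Bailing out once the warning fires makes the later dereference
 * provably safe.
 */
static int consume_slot(const struct fq *q, int slot)
{
	if (q == NULL) {
		fprintf(stderr, "warn: missing folio queue\n");
		return -1;	/* don't fall through to the dereference */
	}
	return q->slots[slot];	/* q is provably non-NULL here */
}
```

The alternative, if folioq can never actually be NULL at this point, is to drop the NULL test so the two sites agree.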
regards,
dan carpenter
* [bug report] netfs: Speed up buffered reading
@ 2024-09-11 7:36 Dan Carpenter
From: Dan Carpenter @ 2024-09-11 7:36 UTC (permalink / raw)
To: David Howells; +Cc: netfs
Hello David Howells,
Commit 5c7822a3013d ("netfs: Speed up buffered reading") from Jul 2,
2024 (linux-next), leads to the following Smatch static checker
warning:
fs/netfs/read_retry.c:152 netfs_retry_read_subrequests()
error: we previously assumed 'rreq->netfs_ops->prepare_read' could be null (see line 51)
fs/netfs/read_retry.c
     33 static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
     34 {
     35         struct netfs_io_subrequest *subreq;
     36         struct netfs_io_stream *stream0 = &rreq->io_streams[0];
     37         LIST_HEAD(sublist);
     38         LIST_HEAD(queue);
     39
     40         _enter("R=%x", rreq->debug_id);
     41
     42         if (list_empty(&rreq->subrequests))
     43                 return;
     44
     45         if (rreq->netfs_ops->retry_request)
     46                 rreq->netfs_ops->retry_request(rreq, NULL);
     47
     48         /* If there's no renegotiation to do, just resend each retryable subreq
     49          * up to the first permanently failed one.
     50          */
     51         if (!rreq->netfs_ops->prepare_read &&
                                     ^^^^^^^^^^^^^^
                                     Check for NULL

     52             !test_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags)) {
     53                 struct netfs_io_subrequest *subreq;
     54
     55                 list_for_each_entry(subreq, &rreq->subrequests, rreq_link) {
     56                         if (test_bit(NETFS_SREQ_FAILED, &subreq->flags))
     57                                 break;
     58                         if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) {
     59                                 netfs_reset_iter(subreq);
     60                                 netfs_reissue_read(rreq, subreq);
     61                         }
     62                 }
     63                 return;
     64         }
     65
     66         /* Okay, we need to renegotiate all the download requests and flip any
     67          * failed cache reads over to being download requests and negotiate
     68          * those also.  All fully successful subreqs have been removed from the
     69          * list and any spare data from those has been donated.
     70          *
     71          * What we do is decant the list and rebuild it one subreq at a time so
     72          * that we don't end up with donations jumping over a gap we're busy
     73          * populating with smaller subrequests.  In the event that the subreq
     74          * we just launched finishes before we insert the next subreq, it'll
     75          * fill in rreq->prev_donated instead.
     76          *
     77          * Note: Alternatively, we could split the tail subrequest right before
     78          * we reissue it and fix up the donations under lock.
     79          */
     80         list_splice_init(&rreq->subrequests, &queue);
     81
     82         do {
     83                 struct netfs_io_subrequest *from;
     84                 struct iov_iter source;
     85                 unsigned long long start, len;
     86                 size_t part, deferred_next_donated = 0;
     87                 bool boundary = false;
     88
     89                 /* Go through the subreqs and find the next span of contiguous
     90                  * buffer that we then rejig (cifs, for example, needs the
     91                  * rsize renegotiating) and reissue.
     92                  */
     93                 from = list_first_entry(&queue, struct netfs_io_subrequest, rreq_link);
     94                 list_move_tail(&from->rreq_link, &sublist);
     95                 start = from->start + from->transferred;
     96                 len = from->len - from->transferred;
     97
     98                 _debug("from R=%08x[%x] s=%llx ctl=%zx/%zx/%zx",
     99                        rreq->debug_id, from->debug_index,
    100                        from->start, from->consumed, from->transferred, from->len);
    101
    102                 if (test_bit(NETFS_SREQ_FAILED, &from->flags) ||
    103                     !test_bit(NETFS_SREQ_NEED_RETRY, &from->flags))
    104                         goto abandon;
    105
    106                 deferred_next_donated = from->next_donated;
    107                 while ((subreq = list_first_entry_or_null(
    108                                 &queue, struct netfs_io_subrequest, rreq_link))) {
    109                         if (subreq->start != start + len ||
    110                             subreq->transferred > 0 ||
    111                             !test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags))
    112                                 break;
    113                         list_move_tail(&subreq->rreq_link, &sublist);
    114                         len += subreq->len;
    115                         deferred_next_donated = subreq->next_donated;
    116                         if (test_bit(NETFS_SREQ_BOUNDARY, &subreq->flags))
    117                                 break;
    118                 }
    119
    120                 _debug(" - range: %llx-%llx %llx", start, start + len - 1, len);
    121
    122                 /* Determine the set of buffers we're going to use.  Each
    123                  * subreq gets a subset of a single overall contiguous buffer.
    124                  */
    125                 netfs_reset_iter(from);
    126                 source = from->io_iter;
    127                 source.count = len;
    128
    129                 /* Work through the sublist. */
    130                 while ((subreq = list_first_entry_or_null(
    131                                 &sublist, struct netfs_io_subrequest, rreq_link))) {
    132                         list_del(&subreq->rreq_link);
    133
    134                         subreq->source = NETFS_DOWNLOAD_FROM_SERVER;
    135                         subreq->start = start - subreq->transferred;
    136                         subreq->len = len + subreq->transferred;
    137                         stream0->sreq_max_len = subreq->len;
    138
    139                         __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
    140                         __set_bit(NETFS_SREQ_RETRYING, &subreq->flags);
    141
    142                         spin_lock_bh(&rreq->lock);
    143                         list_add_tail(&subreq->rreq_link, &rreq->subrequests);
    144                         subreq->prev_donated += rreq->prev_donated;
    145                         rreq->prev_donated = 0;
    146                         trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
    147                         spin_unlock_bh(&rreq->lock);
    148
    149                         BUG_ON(!len);
    150
    151                         /* Renegotiate max_len (rsize) */
--> 152                         if (rreq->netfs_ops->prepare_read(subreq) < 0) {
                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                    Unchecked dereference

    153                                 trace_netfs_sreq(subreq, netfs_sreq_trace_reprep_failed);
    154                                 __set_bit(NETFS_SREQ_FAILED, &subreq->flags);
    155                         }
    156
    157                         part = umin(len, stream0->sreq_max_len);
    158                         if (unlikely(rreq->io_streams[0].sreq_max_segs))
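As before, a hypothetical sketch may help; the names below are stand-ins, not the real netfs ops table. If prepare_read really is optional (as the test at line 51 implies), the later call site needs the same guard:

```c
#include <stddef.h>

/* Hypothetical optional op, standing in for an op that was NULL-tested
 * earlier in the same function. */
typedef int (*prepare_read_fn)(size_t len);

/*
 * Guarded call shape: the op is invoked only when it is present, and a
 * negative return marks the request failed.  This keeps the call site
 * consistent with the earlier "is this op implemented?" test.
 */
static int renegotiate(prepare_read_fn prepare_read, size_t len)
{
	if (prepare_read && prepare_read(len) < 0)
		return -1;	/* mark the subrequest failed */
	return 0;		/* no op to call, or it succeeded */
}

/* Example policy for demonstration: reject anything over 64 bytes. */
static int reject_large(size_t len)
{
	return len > 64 ? -1 : 0;
}
```

Conversely, if prepare_read is guaranteed non-NULL whenever this loop runs, then the earlier feature test at line 51 is the misleading site and could be dropped instead.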
regards,
dan carpenter
* Re: [bug report] netfs: Speed up buffered reading
2024-09-11 7:36 [bug report] netfs: Speed up buffered reading Dan Carpenter
@ 2024-09-11 9:22 ` David Howells
From: David Howells @ 2024-09-11 9:22 UTC (permalink / raw)
To: Dan Carpenter; +Cc: dhowells, netfs
Dan Carpenter <dan.carpenter@linaro.org> wrote:
> 107         if (WARN_ON_ONCE(!folioq) ||
>                              ^^^^^^^
>                              Check for NULL
>
> 108             WARN_ON_ONCE(!folioq_folio(folioq, slot)) ||
> 109             WARN_ON_ONCE(folioq_folio(folioq, slot)->index != fpos / PAGE_SIZE)) {
None of these three conditions should ever be true; if any of them is, the operation is already broken. We cannot find the pages we need to unlock, and processes will very likely hang.
David