From: Denis Kenzior <denkenz@gmail.com>
To: James Prestwood <prestwoj@gmail.com>, iwd@lists.linux.dev
Subject: Re: [PATCH 0/4] Packet/beacon loss roaming improvements
Date: Thu, 2 Nov 2023 09:10:59 -0500 [thread overview]
Message-ID: <be09ed30-f92c-4f9d-897d-77d9d5f7a919@gmail.com> (raw)
In-Reply-To: <27703a4f-a071-4ff7-afbc-8dda1c5b0b27@gmail.com>
Hi James,
>
> I'm fine adding similar handling that I added for packet loss, except always
> delay rather than only on additional events. But I would like to explore other
> options in the future.
I guess the question is: does adding LOST BEACON handling actually help, or
are you speculating that it does? I don't mind if we add this back in with a
delay, but I'm worried it doesn't actually do anything. Seven beacons lost in
a row is likely not recoverable territory.
I'm actually surprised the driver doesn't give you any other indication prior
to the lost-beacon event. I would have expected RSSI or packet loss to
manifest itself first?
>
> I'm not sure how, but being able to detect if the AP responded to
> nullfunc/probes prior to the kernel blowing away the connection would be great.
> (like send our own nullfunc frames or something, not really sure...)
Yes, the current implementation of this event in the kernel is pretty useless.
What we really need is an additional threshold that generates an event out to
userspace _before_ the kernel starts taking potentially irreversible actions.
Something like a pre-beacon-loss event that gets generated when 2-3 beacons are
lost in a row.
>
> It's taking IWD about 4-5 seconds to reconnect: 3 seconds for the quick scan
> and ~1-2 seconds for DHCP. (I need to look at why the quick scan is taking that
That's awful. How many frequencies are you generating? Even a full scan should
take ~1-2 seconds at most. And 1-2 seconds for DHCP also seems fishy; I've
measured our implementation at sub-300ms or so.
> long; that seems like something isn't right.) So if it's at all possible to
> roam, that is best, obviously.
>
Yep.
Regards,
-Denis
Thread overview: 19+ messages
2023-10-30 13:48 [PATCH 0/4] Packet/beacon loss roaming improvements James Prestwood
2023-10-30 13:48 ` [PATCH 1/4] station: rename ap_directed_roam to force_roam James Prestwood
2023-10-30 13:48 ` [PATCH 2/4] station: start roam on beacon loss event James Prestwood
2023-10-30 13:48 ` [PATCH 3/4] netdev: handle/send " James Prestwood
2023-10-30 13:48 ` [PATCH 4/4] station: rate limit packet loss roam scans James Prestwood
2023-10-30 14:48 ` Denis Kenzior
2023-10-30 15:00 ` [PATCH 0/4] Packet/beacon loss roaming improvements Denis Kenzior
2023-10-30 15:37 ` James Prestwood
2023-10-30 17:05 ` Denis Kenzior
2023-10-30 17:37 ` James Prestwood
2023-11-01 12:07 ` James Prestwood
2023-11-02 1:39 ` Denis Kenzior
2023-11-02 11:58 ` James Prestwood
2023-11-02 14:10 ` Denis Kenzior [this message]
2023-11-02 14:33 ` James Prestwood
2023-11-02 15:17 ` Denis Kenzior
2023-11-02 15:41 ` James Prestwood
2023-11-02 16:10 ` Denis Kenzior
2023-11-02 16:13 ` James Prestwood