Linux maintainer tooling and workflows
* [PATCH b4 v2 00/11] Enable stricter local checks
@ 2026-04-19 15:59 Tamir Duberstein
  2026-04-19 15:59 ` [PATCH b4 v2 01/11] Add CI script Tamir Duberstein
                   ` (11 more replies)
  0 siblings, 12 replies; 14+ messages in thread
From: Tamir Duberstein @ 2026-04-19 15:59 UTC (permalink / raw)
  To: Kernel.org Tools
  Cc: Konstantin Ryabitsev, Tamir Duberstein

This series makes b4's local developer checks enforceable from the
review TUI and gets the repo clean under ruff, mypy, pyright, and ty.

The early patches set ruff formatting and import behavior, make the
test environment reproducible under uv, and type the misc helpers enough
for whole-repo mypy. The middle patches tighten mypy and pyright, then
add ty with all rules enabled and bump the Python requirement to 3.11
because the code already uses 3.11-only syntax.

Signed-off-by: Tamir Duberstein <tamird@kernel.org>
---
Changes in v2:
- Rebase.
- Replace b4 ci integration with a simple shell script.
- Link to v1: https://patch.msgid.link/20260407-ruff-check-v1-0-c9568541ff67@kernel.org

---
Tamir Duberstein (11):
      Add CI script
      Add ruff checks to CI
      Import dependencies unconditionally
      Add ruff format check to CI
      Fix tests under uv with complex git config
      Fix typings in misc/
      Enable mypy unreachable warnings
      Enable and fix pyright diagnostics
      Avoid duplicate map lookups
      Add ty and configuration
      Enable pyright strict mode

 ci.sh                              |   10 +
 misc/retrieve_lore_thread.py       |   17 +-
 misc/review-ci-example.py          |   10 +-
 misc/send-receive.py               |  454 +++++++---
 pyproject.toml                     |   55 +-
 src/b4/__init__.py                 | 1571 +++++++++++++++++++++++-----------
 src/b4/bugs/__init__.py            |   52 +-
 src/b4/bugs/_import.py             |   14 +-
 src/b4/bugs/_tui.py                |  440 ++++++----
 src/b4/command.py                  | 1369 +++++++++++++++++++++++-------
 src/b4/diff.py                     |   50 +-
 src/b4/dig.py                      |   75 +-
 src/b4/ez.py                       | 1016 +++++++++++++++-------
 src/b4/kr.py                       |   10 +-
 src/b4/mbox.py                     |  309 ++++---
 src/b4/pr.py                       |  160 +++-
 src/b4/review/__init__.py          |   52 +-
 src/b4/review/_review.py           |  491 +++++++----
 src/b4/review/checks.py            |  396 +++++----
 src/b4/review/messages.py          |   64 +-
 src/b4/review/tracking.py          |  673 +++++++++------
 src/b4/review_tui/__init__.py      |   38 +-
 src/b4/review_tui/_common.py       |  366 +++++---
 src/b4/review_tui/_entry.py        |   69 +-
 src/b4/review_tui/_lite_app.py     |  111 ++-
 src/b4/review_tui/_modals.py       |  680 +++++++++------
 src/b4/review_tui/_pw_app.py       |  202 +++--
 src/b4/review_tui/_review_app.py   |  513 +++++++----
 src/b4/review_tui/_tracking_app.py | 1543 ++++++++++++++++++++++-----------
 src/b4/tui/__init__.py             |    1 +
 src/b4/tui/_common.py              |   43 +-
 src/b4/tui/_modals.py              |   49 +-
 src/b4/ty.py                       |  233 +++--
 src/tests/conftest.py              |   57 +-
 src/tests/test___init__.py         |  827 ++++++++++++------
 src/tests/test_ez.py               |  154 +++-
 src/tests/test_mbox.py             |   96 ++-
 src/tests/test_messages.py         |   21 +-
 src/tests/test_patatt.py           |   33 +-
 src/tests/test_rethread.py         |  178 ++--
 src/tests/test_review.py           | 1648 +++++++++++++++++++++---------------
 src/tests/test_review_checks.py    |  455 ++++++----
 src/tests/test_review_show_info.py |   97 ++-
 src/tests/test_review_tracking.py  |  990 ++++++++++++++--------
 src/tests/test_three_way_merge.py  |  188 ++--
 src/tests/test_tui_bugs.py         |   42 +-
 src/tests/test_tui_modals.py       |   60 +-
 src/tests/test_tui_review.py       |   98 +--
 src/tests/test_tui_tracking.py     | 1360 ++++++++++++++++++-----------
 49 files changed, 11574 insertions(+), 5866 deletions(-)
---
base-commit: 3bfbc1bf8f9549cfa2ad3949d807ce3d4954bb5d
change-id: 20260403-ruff-check-79f9f5441956

Best regards,
-- 
Tamir Duberstein <tamird@kernel.org>



* [PATCH b4 v2 01/11] Add CI script
  2026-04-19 15:59 [PATCH b4 v2 00/11] Enable stricter local checks Tamir Duberstein
@ 2026-04-19 15:59 ` Tamir Duberstein
  2026-04-19 15:59 ` [PATCH b4 v2 02/11] Add ruff checks to CI Tamir Duberstein
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Tamir Duberstein @ 2026-04-19 15:59 UTC (permalink / raw)
  To: Kernel.org Tools; +Cc: Konstantin Ryabitsev, Tamir Duberstein

Add a shell script that runs mypy. Exclude submodule paths, if present,
and the misc directory, which does not yet type-check.

Signed-off-by: Tamir Duberstein <tamird@kernel.org>
---
 ci.sh          | 5 +++++
 pyproject.toml | 1 +
 2 files changed, 6 insertions(+)

diff --git a/ci.sh b/ci.sh
new file mode 100755
index 0000000..89a5a80
--- /dev/null
+++ b/ci.sh
@@ -0,0 +1,5 @@
+#!/usr/bin/env sh
+
+set -eu
+
+uv run mypy .
diff --git a/pyproject.toml b/pyproject.toml
index c0c9935..867fcae 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -111,4 +111,5 @@ typeCheckingMode = "off"
 
 # Configure mypy in strict mode
 [tool.mypy]
+exclude = ["^ezgb/", "^liblore/", "^misc/", "^patatt/"]
 strict = true

-- 
2.53.0



* [PATCH b4 v2 02/11] Add ruff checks to CI
  2026-04-19 15:59 [PATCH b4 v2 00/11] Enable stricter local checks Tamir Duberstein
  2026-04-19 15:59 ` [PATCH b4 v2 01/11] Add CI script Tamir Duberstein
@ 2026-04-19 15:59 ` Tamir Duberstein
  2026-04-19 15:59 ` [PATCH b4 v2 03/11] Import dependencies unconditionally Tamir Duberstein
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Tamir Duberstein @ 2026-04-19 15:59 UTC (permalink / raw)
  To: Kernel.org Tools; +Cc: Konstantin Ryabitsev, Tamir Duberstein

Mark example-only variables as intentionally unused so ruff can check
the script without changing its illustrative structure.
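As a minimal sketch of that pattern (hypothetical names, not lines from the
actual script), a trailing noqa suppresses ruff's unused-variable rule on
exactly one assignment:

```python
def demo() -> list[int]:
    # ruff's F841 rule flags `shown_but_unused` as assigned-but-never-used;
    # the trailing noqa marks the assignment as intentional, so purely
    # illustrative variables survive `ruff check` without restructuring.
    shown_but_unused = ['kept', 'for', 'illustration']  # noqa: F841
    return [n * 2 for n in (4, 5)]


print(demo())  # [8, 10]
```

Scoping the suppression to a single line (rather than a file-wide ignore)
keeps F841 active everywhere the rule is genuinely useful.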

Change `ruff.lint.select` to `ruff.lint.extend-select` to enable default
lints and fix ambiguous variable name warnings.

Enable import sorting and use `ruff check --fix` to fix existing
violations.

Configure ruff to skip submodules.

Signed-off-by: Tamir Duberstein <tamird@kernel.org>
---
 ci.sh                              |  1 +
 misc/review-ci-example.py          |  4 +-
 misc/send-receive.py               | 30 +++++++-------
 pyproject.toml                     |  6 ++-
 src/b4/__init__.py                 | 67 +++++++++++++++++-------------
 src/b4/bugs/__init__.py            |  8 ++--
 src/b4/bugs/_tui.py                |  7 ++--
 src/b4/command.py                  |  4 +-
 src/b4/diff.py                     | 16 +++----
 src/b4/dig.py                      | 13 +++---
 src/b4/ez.py                       | 33 ++++++++-------
 src/b4/kr.py                       |  2 +-
 src/b4/mbox.py                     | 28 ++++++-------
 src/b4/pr.py                       | 24 +++++------
 src/b4/review/__init__.py          | 29 ++++++++-----
 src/b4/review/_review.py           |  3 +-
 src/b4/review/checks.py            |  1 -
 src/b4/review/messages.py          |  1 -
 src/b4/review/tracking.py          |  7 ++--
 src/b4/review_tui/__init__.py      | 20 +++++----
 src/b4/review_tui/_common.py       | 85 ++++++++++++++++++++++++++------------
 src/b4/review_tui/_entry.py        |  3 +-
 src/b4/review_tui/_lite_app.py     | 19 +++++----
 src/b4/review_tui/_modals.py       | 51 ++++++++++++++---------
 src/b4/review_tui/_pw_app.py       | 28 ++++++++-----
 src/b4/review_tui/_review_app.py   | 56 ++++++++++++++++---------
 src/b4/review_tui/_tracking_app.py | 53 +++++++++++++++++-------
 src/b4/tui/_common.py              |  8 ++--
 src/b4/tui/_modals.py              | 11 +++--
 src/b4/ty.py                       | 19 ++++-----
 src/tests/conftest.py              |  7 ++--
 src/tests/test___init__.py         | 24 +++++++----
 src/tests/test_ez.py               | 11 ++---
 src/tests/test_mbox.py             | 11 ++---
 src/tests/test_patatt.py           |  3 +-
 src/tests/test_rethread.py         |  4 +-
 src/tests/test_review.py           | 12 +++---
 src/tests/test_review_checks.py    |  1 -
 src/tests/test_review_show_info.py |  5 +--
 src/tests/test_review_tracking.py  | 12 +++---
 src/tests/test_three_way_merge.py  |  7 ++--
 src/tests/test_tui_bugs.py         |  4 +-
 src/tests/test_tui_modals.py       |  6 +--
 src/tests/test_tui_review.py       |  6 +--
 src/tests/test_tui_tracking.py     | 13 +++---
 45 files changed, 436 insertions(+), 327 deletions(-)

diff --git a/ci.sh b/ci.sh
index 89a5a80..b65ae97 100755
--- a/ci.sh
+++ b/ci.sh
@@ -2,4 +2,5 @@
 
 set -eu
 
+uv run ruff check
 uv run mypy .
diff --git a/misc/review-ci-example.py b/misc/review-ci-example.py
index e5837eb..cbac2ae 100755
--- a/misc/review-ci-example.py
+++ b/misc/review-ci-example.py
@@ -43,7 +43,7 @@ import sys
 
 def main() -> None:
     msg = email.message_from_binary_file(sys.stdin.buffer)
-    subject = msg.get('subject', '(no subject)')
+    subject = msg.get('subject', '(no subject)')  # noqa: F841
     msgid = msg.get('message-id', '').strip('<> ')
 
     # Example: read tracking data for commit-based CI lookups
@@ -53,7 +53,7 @@ def main() -> None:
             tracking = json.load(fp)
         branch_tips = tracking.get('series', {}).get('branch-tips', [])
     else:
-        branch_tips = []
+        branch_tips = []  # noqa: F841
 
     # Seed the RNG with the message-id so results are stable across
     # repeated runs of the same message (simulates cached CI results).
diff --git a/misc/send-receive.py b/misc/send-receive.py
index a3dd893..35c5e99 100644
--- a/misc/send-receive.py
+++ b/misc/send-receive.py
@@ -1,29 +1,29 @@
 #!/usr/bin/env python3
 
-import falcon
-import os
-import sys
-import logging
-import logging.handlers
-import json
-import sqlalchemy as sa
-import patatt
-import smtplib
+import copy
 import email
 import email.header
 import email.policy
 import email.quoprimime
+import json
+import logging
+import logging.handlers
+import os
 import re
-import ezpi
-import copy
+import smtplib
+import sys
 import textwrap
-
 from configparser import ConfigParser, ExtendedInterpolation
+from email import charset, utils
 from string import Template
-from email import utils
-from typing import Tuple, Union, List
+from typing import List, Tuple, Union
+
+import ezpi
+import falcon
+import sqlalchemy as sa
+
+import patatt
 
-from email import charset
 charset.add_charset('utf-8', None)
 emlpolicy = email.policy.EmailPolicy(utf8=True, cte_type='8bit', max_line_length=None)
 
diff --git a/pyproject.toml b/pyproject.toml
index 867fcae..6eb2fbb 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -82,13 +82,17 @@ files = [
     {filename = "src/b4/man/b4.1"},
 ]
 
+[tool.ruff]
+extend-exclude = ["ezgb", "liblore", "patatt"]
+
 [tool.ruff.lint]
-select = [
+extend-select = [
     "F",       # https://docs.astral.sh/ruff/rules/#pyflakes-f
     "B007",    # https://docs.astral.sh/ruff/rules/unused-loop-control-variable/
     "B904",    # https://docs.astral.sh/ruff/rules/raise-without-from-err/
     "DTZ",     # https://docs.astral.sh/ruff/rules/#flake8-datetimez-dtz
     "G",       # https://docs.astral.sh/ruff/rules/#flake8-logging-format-g
+    "I",       # https://docs.astral.sh/ruff/rules/#isort-i
     "PERF102", # https://docs.astral.sh/ruff/rules/incorrect-dict-iterator/
     "PGH004",  # https://docs.astral.sh/ruff/rules/blanket-noqa/
     "PIE790",  # https://docs.astral.sh/ruff/rules/unnecessary-placeholder/
diff --git a/src/b4/__init__.py b/src/b4/__init__.py
index 75238bb..1e4c91e 100644
--- a/src/b4/__init__.py
+++ b/src/b4/__init__.py
@@ -1,50 +1,61 @@
 # SPDX-License-Identifier: GPL-2.0-or-later
 # Copyright (C) 2020 by the Linux Foundation
-import subprocess
-import logging
-import hashlib
-import re
-import sys
-import os
-import fnmatch
+import argparse
+import copy
+import datetime
 import email.generator
 import email.header
 import email.parser
 import email.policy
 import email.quoprimime
 import email.utils
-import tempfile
+import fnmatch
+import hashlib
+import io
+import json
+import logging
+import mailbox
+import os
 import pathlib
-import argparse
-import smtplib
+import pwd
+import re
 import shlex
+import shutil
+import smtplib
+import subprocess
+import sys
+import tempfile
 import textwrap
-import json
-
-import urllib.parse
-import datetime
 import time
-import copy
-import shutil
-import mailbox
-import pwd
-import io
+import urllib.parse
+from contextlib import contextmanager
+from email import charset
+from email.message import EmailMessage
+from pathlib import Path
+from typing import (
+    Any,
+    BinaryIO,
+    Dict,
+    Generator,
+    Iterator,
+    List,
+    Literal,
+    Optional,
+    Sequence,
+    Set,
+    Tuple,
+    TypeVar,
+    Union,
+    overload,
+)
 
+import liblore.utils
 import requests
 
 import liblore
-import liblore.utils
-
-from pathlib import Path
-from contextlib import contextmanager
-from typing import Optional, Tuple, Set, List, BinaryIO, Union, Sequence, Literal, Iterator, Dict, \
-    TypeVar, overload, Generator, Any
 
 ConfigDictT = Dict[str, Union[str, List[str], None]]
 
-from email.message import EmailMessage
-
-from email import charset
 
 charset.add_charset('utf-8', None)
 # Policy we use for saving mail locally
diff --git a/src/b4/bugs/__init__.py b/src/b4/bugs/__init__.py
index dd28b5a..cb21611 100644
--- a/src/b4/bugs/__init__.py
+++ b/src/b4/bugs/__init__.py
@@ -4,15 +4,15 @@
 # Copyright (C) 2020 by the Linux Foundation
 """b4 bugs: manage bug reports from mailing list threads."""
 import argparse
+import json
 import logging
+import shutil
 import sys
 
-import json
-import shutil
+from ezgb._git import git_bug_cli
 
 import b4
 from ezgb import BugNotFoundError, GitBugRepo, Status
-from ezgb._git import git_bug_cli
 
 logger = logging.getLogger('b4')
 
@@ -160,7 +160,7 @@ def cmd_list(cmdargs: argparse.Namespace) -> None:
 
     for bug in bugs:
         icon = '\u25cf' if bug.status == Status.OPEN else '\u25cb'
-        labels = ' '.join(f'[{l}]' for l in sorted(bug.labels))
+        labels = ' '.join(f'[{label}]' for label in sorted(bug.labels))
         logger.info('%s %s  %s  %s',
                     icon, bug.id[:7], bug.title, labels)
 
diff --git a/src/b4/bugs/_tui.py b/src/b4/bugs/_tui.py
index 0309a0c..998e6bb 100644
--- a/src/b4/bugs/_tui.py
+++ b/src/b4/bugs/_tui.py
@@ -14,14 +14,13 @@ from typing import TYPE_CHECKING, Optional, Union
 if TYPE_CHECKING:
     from textual.events import Key
 
-from textual.events import Click, MouseScrollDown, MouseScrollUp
-
 from rich import box
 from rich.panel import Panel
 from rich.text import Text
 from textual.app import App, ComposeResult
 from textual.binding import Binding
 from textual.containers import Horizontal, Vertical
+from textual.events import Click, MouseScrollDown, MouseScrollUp
 from textual.screen import ModalScreen
 from textual.suggester import SuggestFromList
 from textual.widgets import (
@@ -34,6 +33,8 @@ from textual.widgets import (
 )
 from textual.worker import Worker, WorkerState
 
+import b4
+from b4.bugs._import import is_comment_removed, make_tombstone, parse_comment_header
 from b4.tui import (
     ActionScreen,
     ConfirmScreen,
@@ -47,8 +48,6 @@ from b4.tui import (
     resolve_styles,
     reviewer_colours,
 )
-import b4
-from b4.bugs._import import is_comment_removed, make_tombstone, parse_comment_header
 from ezgb import Bug, BugSummary, Comment, GitBugRepo, Status
 
 # Union type for items that can appear in the bug list.
diff --git a/src/b4/command.py b/src/b4/command.py
index 79ec596..7ebe79c 100644
--- a/src/b4/command.py
+++ b/src/b4/command.py
@@ -7,11 +7,11 @@ __author__ = 'Konstantin Ryabitsev <konstantin@linuxfoundation.org>'
 
 import argparse
 import logging
-import b4
 import sys
-
 from typing import Any, Optional, Sequence, Union
 
+import b4
+
 logger = b4.logger
 
 
diff --git a/src/b4/diff.py b/src/b4/diff.py
index 934b9ac..8045243 100644
--- a/src/b4/diff.py
+++ b/src/b4/diff.py
@@ -5,19 +5,19 @@
 #
 __author__ = 'Konstantin Ryabitsev <konstantin@linuxfoundation.org>'
 
-import os
-import sys
-import b4
-import b4.mbox
+import argparse
 import email
 import email.parser
-import shutil
+import os
 import pathlib
-import argparse
 import shlex
-
-from typing import Tuple, Optional, List
+import shutil
+import sys
 from email.message import EmailMessage
+from typing import List, Optional, Tuple
+
+import b4
+import b4.mbox
 
 logger = b4.logger
 
diff --git a/src/b4/dig.py b/src/b4/dig.py
index f13deac..b3d637d 100644
--- a/src/b4/dig.py
+++ b/src/b4/dig.py
@@ -5,19 +5,18 @@
 #
 __author__ = 'Konstantin Ryabitsev <konstantin@linuxfoundation.org>'
 
-import sys
-import b4
 import argparse
+import datetime
+import email.utils
 import re
+import sys
 import urllib.parse
-import datetime
+from email.message import EmailMessage
+from typing import List, Optional, Set
 
+import b4
 import b4.mbox
 
-from email.message import EmailMessage
-import email.utils
-from typing import List, Set, Optional
-
 logger = b4.logger
 
 # Supported diff algorithms we will try to match
diff --git a/src/b4/ez.py b/src/b4/ez.py
index 94b8686..e69a106 100644
--- a/src/b4/ez.py
+++ b/src/b4/ez.py
@@ -5,31 +5,31 @@
 #
 __author__ = 'Konstantin Ryabitsev <konstantin@linuxfoundation.org>'
 
-import os
-import sys
-import b4
-import re
 import argparse
-import uuid
-import time
+import base64
 import datetime
-import json
-import shlex
 import email
 import email.policy
 import email.utils
-import pathlib
-import base64
-import textwrap
 import gzip
+import hashlib
 import io
+import json
+import os
+import pathlib
+import re
+import shlex
+import sys
 import tarfile
-import hashlib
+import textwrap
+import time
 import urllib.parse
-
-from typing import Any, Optional, Tuple, List, Union, Dict, Set
-from string import Template
+import uuid
 from email.message import EmailMessage
+from string import Template
+from typing import Any, Dict, List, Optional, Set, Tuple, Union
+
+import b4
 
 try:
     import patatt
@@ -44,6 +44,7 @@ except ModuleNotFoundError:
     can_gfr = False
 
 import importlib.util
+
 can_codespell = importlib.util.find_spec('codespell_lib') is not None
 
 logger = b4.logger
@@ -216,8 +217,8 @@ def auth_new() -> None:
             sys.exit(1)
         pubkey = out.decode()
     elif algo == 'ed25519':
-        from nacl.signing import SigningKey
         from nacl.encoding import Base64Encoder
+        from nacl.signing import SigningKey
         sk = SigningKey(keydata.encode(), encoder=Base64Encoder)
         pubkey = base64.b64encode(sk.verify_key.encode()).decode()
     else:
diff --git a/src/b4/kr.py b/src/b4/kr.py
index 13f24c7..8bbfe26 100644
--- a/src/b4/kr.py
+++ b/src/b4/kr.py
@@ -7,9 +7,9 @@ __author__ = 'Konstantin Ryabitsev <konstantin@linuxfoundation.org>'
 
 import argparse
 import os
-import sys
 import pathlib
 import re
+import sys
 
 import b4
 
diff --git a/src/b4/mbox.py b/src/b4/mbox.py
index 2164fcc..624a2f3 100644
--- a/src/b4/mbox.py
+++ b/src/b4/mbox.py
@@ -5,28 +5,26 @@
 #
 __author__ = 'Konstantin Ryabitsev <konstantin@linuxfoundation.org>'
 
-import os
-import sys
-import mailbox
+import argparse
 import email
-import email.utils
 import email.parser
-import re
-import time
-import json
+import email.utils
 import fnmatch
-import shutil
-import pathlib
 import io
+import json
+import mailbox
+import os
+import pathlib
+import re
 import shlex
-import argparse
-
-import b4
-
-from typing import Any, Optional, Union, List, Set, Dict, Tuple
+import shutil
+import sys
+import time
+from email.message import EmailMessage
 from string import Template
+from typing import Any, Dict, List, Optional, Set, Tuple, Union
 
-from email.message import EmailMessage
+import b4
 
 logger = b4.logger
 
diff --git a/src/b4/pr.py b/src/b4/pr.py
index 5969a0d..cb2ca76 100644
--- a/src/b4/pr.py
+++ b/src/b4/pr.py
@@ -5,26 +5,24 @@
 #
 __author__ = 'Konstantin Ryabitsev <konstantin@linuxfoundation.org>'
 
-import os
-import sys
-import tempfile
-
-import b4
-import re
-import json
+import argparse
 import email
 import email.message
 import email.parser
 import email.utils
-import argparse
-
+import json
+import os
+import re
+import sys
+import tempfile
 import urllib.parse
-import requests
-
 from datetime import datetime, timezone
+from email import charset, utils
+from typing import List, Optional
 
-from email import utils, charset
-from typing import Optional, List
+import requests
+
+import b4
 
 charset.add_charset('utf-8', None)
 
diff --git a/src/b4/review/__init__.py b/src/b4/review/__init__.py
index 4f64451..df1e39b 100644
--- a/src/b4/review/__init__.py
+++ b/src/b4/review/__init__.py
@@ -1,21 +1,30 @@
 # Re-export everything from the original review module
 from b4.review._review import *  # noqa: F403
 from b4.review._review import (
-    _retrieve_messages, retrieve_series_messages, _get_lore_series,
-    _collect_followups, _collect_reply_headers,
-    _get_my_review, _ensure_my_review, _cleanup_review,
-    _get_patch_state, _set_patch_state,
-    _resolve_comment_positions,
-    _render_quoted_diff_with_comments, _extract_editor_comments,
-    _clear_other_comments, _strip_subject,
-    _build_reply_from_comments, _ensure_trailers_in_body,
+    _build_reply_from_comments,
     _build_review_email,
-    _integrate_agent_reviews,
+    _cleanup_review,
+    _clear_other_comments,
+    _collect_followups,
+    _collect_reply_headers,
+    _ensure_my_review,
+    _ensure_trailers_in_body,
     _extract_comments_from_quoted_reply,
-    _integrate_sashiko_reviews,
+    _extract_editor_comments,
+    _get_lore_series,
+    _get_my_review,
+    _get_patch_state,
+    _integrate_agent_reviews,
     _integrate_followup_inline_comments,
+    _integrate_sashiko_reviews,
     _prepare_review_session,
+    _render_quoted_diff_with_comments,
+    _resolve_comment_positions,
+    _retrieve_messages,
+    _set_patch_state,
     _should_promote_waiting,
+    _strip_subject,
+    retrieve_series_messages,
 )
 
 # Tell mypy these private symbols are intentionally re-exported
diff --git a/src/b4/review/_review.py b/src/b4/review/_review.py
index 1234b0f..661369b 100644
--- a/src/b4/review/_review.py
+++ b/src/b4/review/_review.py
@@ -15,6 +15,7 @@ import re
 import shutil
 import sys
 import urllib.parse
+from typing import Any, Dict, List, Optional, Set, Tuple, Union
 
 import liblore.utils
 
@@ -22,8 +23,6 @@ import b4
 import b4.mbox
 import b4.review.tracking
 
-from typing import Dict, Any, List, Optional, Set, Tuple, Union
-
 logger = b4.logger
 
 REVIEW_MAGIC_MARKER = '--- b4-review-tracking ---'
diff --git a/src/b4/review/checks.py b/src/b4/review/checks.py
index 65ee0ca..2ea5027 100644
--- a/src/b4/review/checks.py
+++ b/src/b4/review/checks.py
@@ -12,7 +12,6 @@ import os
 import pathlib
 import shlex
 import sqlite3
-
 from email.message import EmailMessage
 from typing import Any, Dict, List, Optional, Tuple
 
diff --git a/src/b4/review/messages.py b/src/b4/review/messages.py
index 344d36b..3a0098c 100644
--- a/src/b4/review/messages.py
+++ b/src/b4/review/messages.py
@@ -8,7 +8,6 @@ __author__ = 'Konstantin Ryabitsev <konstantin@linuxfoundation.org>'
 import os
 import pathlib
 import sqlite3
-
 from typing import Dict, List, Optional
 
 import b4
diff --git a/src/b4/review/tracking.py b/src/b4/review/tracking.py
index cd966ca..eb80eea 100644
--- a/src/b4/review/tracking.py
+++ b/src/b4/review/tracking.py
@@ -13,13 +13,11 @@ import os
 import pathlib
 import sqlite3
 import sys
-
-import liblore
+from typing import Any, Dict, List, Optional, Set, Tuple
 
 import b4
 import b4.mbox
-
-from typing import Any, Dict, List, Optional, Set, Tuple
+import liblore
 
 logger = b4.logger
 
@@ -1129,6 +1127,7 @@ def _store_thread_blob(topdir: str, change_id: str,
     # Local import first — avoids circular deps AND prevents UnboundLocalError
     # that would occur if `import b4.review` appeared after a `b4.xxx` call.
     import io
+
     import b4.review as _b4_review
 
     buf = io.BytesIO()
diff --git a/src/b4/review_tui/__init__.py b/src/b4/review_tui/__init__.py
index 47f3f93..68548e6 100644
--- a/src/b4/review_tui/__init__.py
+++ b/src/b4/review_tui/__init__.py
@@ -1,15 +1,21 @@
 from b4.review_tui._common import (
-    logger, PATCH_STATE_MARKERS,
-    resolve_styles, reviewer_colours,
+    PATCH_STATE_MARKERS,
+    _addrs_to_lines,
+    _lines_to_header,
+    _validate_addrs,
     gather_attestation_info,
-    _addrs_to_lines, _lines_to_header, _validate_addrs,
+    logger,
+    resolve_styles,
+    reviewer_colours,
 )
-from b4.review_tui._review_app import ReviewApp
-from b4.review_tui._tracking_app import TrackingApp
-from b4.review_tui._pw_app import PwApp
 from b4.review_tui._entry import (
-    run_branch_tui, run_pw_tui, run_tracking_tui,
+    run_branch_tui,
+    run_pw_tui,
+    run_tracking_tui,
 )
+from b4.review_tui._pw_app import PwApp
+from b4.review_tui._review_app import ReviewApp
+from b4.review_tui._tracking_app import TrackingApp
 
 __all__ = [
     'logger', 'PATCH_STATE_MARKERS',
diff --git a/src/b4/review_tui/_common.py b/src/b4/review_tui/_common.py
index 9c06b32..e819af5 100644
--- a/src/b4/review_tui/_common.py
+++ b/src/b4/review_tui/_common.py
@@ -12,22 +12,73 @@ import email.utils
 import json
 import os
 import tempfile
-
 from typing import Any, Dict, List, Optional, Set, Tuple
 
 import liblore.utils
+from rich import box
+from rich.padding import Padding
+from rich.panel import Panel
+from rich.rule import Rule
+from rich.text import Text
+from textual.widgets import RichLog
 
 import b4
 import b4.mbox
 import b4.review
 import b4.review.tracking
 
-from textual.widgets import RichLog
-from rich import box
-from rich.padding import Padding
-from rich.panel import Panel
-from rich.rule import Rule
-from rich.text import Text
+# -- Re-exported from b4.tui (canonical home for shared TUI utilities) --------
+from b4.tui._common import (
+    JKListNavMixin as JKListNavMixin,
+)
+from b4.tui._common import (
+    SeparatedFooter as SeparatedFooter,
+)
+from b4.tui._common import (
+    _addrs_to_lines as _addrs_to_lines,
+)
+from b4.tui._common import (
+    _fix_ansi_theme as _fix_ansi_theme,
+)
+from b4.tui._common import (
+    _lines_to_header as _lines_to_header,
+)
+from b4.tui._common import (
+    _quiet_worker as _quiet_worker,
+)
+from b4.tui._common import (
+    _suspend_to_shell as _suspend_to_shell,
+)
+from b4.tui._common import (
+    _to_rich_color as _to_rich_color,
+)
+from b4.tui._common import (
+    _validate_addrs as _validate_addrs,
+)
+from b4.tui._common import (
+    _wait_for_enter as _wait_for_enter,
+)
+from b4.tui._common import (
+    ci_check_styles as ci_check_styles,
+)
+from b4.tui._common import (
+    ci_markup as ci_markup,
+)
+from b4.tui._common import (
+    ci_styles as ci_styles,
+)
+from b4.tui._common import (
+    display_width as display_width,
+)
+from b4.tui._common import (
+    pad_display as pad_display,
+)
+from b4.tui._common import (
+    resolve_styles as resolve_styles,
+)
+from b4.tui._common import (
+    reviewer_colours as reviewer_colours,
+)
 
 logger = b4.logger
 
@@ -75,26 +126,6 @@ CI_CHECK_LABELS = {
 }
 
 
-# -- Re-exported from b4.tui (canonical home for shared TUI utilities) --------
-from b4.tui._common import (
-    JKListNavMixin as JKListNavMixin,
-    SeparatedFooter as SeparatedFooter,
-    _addrs_to_lines as _addrs_to_lines,
-    _fix_ansi_theme as _fix_ansi_theme,
-    _lines_to_header as _lines_to_header,
-    _quiet_worker as _quiet_worker,
-    _suspend_to_shell as _suspend_to_shell,
-    _to_rich_color as _to_rich_color,
-    _validate_addrs as _validate_addrs,
-    _wait_for_enter as _wait_for_enter,
-    ci_check_styles as ci_check_styles,
-    ci_markup as ci_markup,
-    ci_styles as ci_styles,
-    display_width as display_width,
-    pad_display as pad_display,
-    resolve_styles as resolve_styles,
-    reviewer_colours as reviewer_colours,
-)
 
 
 class CheckRunnerMixin:
diff --git a/src/b4/review_tui/_entry.py b/src/b4/review_tui/_entry.py
index 717d1eb..68a48af 100644
--- a/src/b4/review_tui/_entry.py
+++ b/src/b4/review_tui/_entry.py
@@ -10,11 +10,10 @@ from typing import Any, Dict, Optional
 import b4
 import b4.review
 import b4.review.tracking
-
 from b4.review_tui._common import logger
+from b4.review_tui._pw_app import PwApp
 from b4.review_tui._review_app import ReviewApp
 from b4.review_tui._tracking_app import TrackingApp
-from b4.review_tui._pw_app import PwApp
 
 
 def _tui_use_mouse() -> bool:
diff --git a/src/b4/review_tui/_lite_app.py b/src/b4/review_tui/_lite_app.py
index 7474927..7a37e0d 100644
--- a/src/b4/review_tui/_lite_app.py
+++ b/src/b4/review_tui/_lite_app.py
@@ -6,14 +6,10 @@
 __author__ = 'Konstantin Ryabitsev <konstantin@linuxfoundation.org>'
 
 import email.utils
-
 from dataclasses import dataclass, field
 from typing import Any, Dict, List, Optional
 
-import b4
-import b4.review
-import b4.review.tracking
-
+from rich.text import Text
 from textual.app import ComposeResult
 from textual.binding import Binding
 from textual.containers import Vertical
@@ -21,11 +17,16 @@ from textual.screen import ModalScreen
 from textual.widgets import Label, ListItem, ListView, LoadingIndicator, RichLog, Static
 from textual.worker import Worker, WorkerState
 
-from rich.text import Text
-
+import b4
+import b4.review
+import b4.review.tracking
 from b4.review_tui._common import (
-    resolve_styles, _quiet_worker, _fix_ansi_theme,
-    _write_diff_line, display_width, pad_display,
+    _fix_ansi_theme,
+    _quiet_worker,
+    _write_diff_line,
+    display_width,
+    pad_display,
+    resolve_styles,
 )
 from b4.review_tui._modals import FollowupReplyPreviewScreen
 
diff --git a/src/b4/review_tui/_modals.py b/src/b4/review_tui/_modals.py
index 31b05d3..412419a 100644
--- a/src/b4/review_tui/_modals.py
+++ b/src/b4/review_tui/_modals.py
@@ -10,29 +10,50 @@ import email.utils
 import io
 import json
 import re
-
 from typing import Any, Dict, List, Optional, Tuple
 
-import b4
-
+from rich import box
+from rich.panel import Panel
+from rich.rule import Rule
+from rich.text import Text
 from textual.app import ComposeResult
 from textual.binding import Binding
 from textual.containers import Vertical
-from textual.widgets import Checkbox, Input, Label, ListItem, ListView, LoadingIndicator, ProgressBar, RichLog, Select, Static
 from textual.screen import ModalScreen
 from textual.suggester import SuggestFromList
+from textual.widgets import (
+    Checkbox,
+    Input,
+    Label,
+    ListItem,
+    ListView,
+    LoadingIndicator,
+    ProgressBar,
+    RichLog,
+    Select,
+    Static,
+)
 from textual.worker import Worker, WorkerState
-from rich import box
-from rich.panel import Panel
-from rich.rule import Rule
-from rich.text import Text
 
+import b4
 from b4.review_tui._common import (
-    CI_CHECK_LABELS, resolve_styles, ci_check_styles,
-    JKListNavMixin, logger,
-    _write_diff_line, _quiet_worker, _render_email_to_viewer,
+    CI_CHECK_LABELS,
+    JKListNavMixin,
+    _quiet_worker,
+    _render_email_to_viewer,
+    _write_diff_line,
+    ci_check_styles,
+    logger,
+    resolve_styles,
 )
 
+# Re-exported from b4.tui (canonical home for shared modals)
+from b4.tui._modals import ActionItem as ActionItem
+from b4.tui._modals import ActionScreen as ActionScreen
+from b4.tui._modals import ConfirmScreen as ConfirmScreen
+from b4.tui._modals import LimitScreen as LimitScreen
+from b4.tui._modals import ToCcScreen as ToCcScreen
+
 
 class TrailerOption(ListItem):
     """A toggleable trailer option in the trailer selection dialog."""
@@ -540,14 +561,6 @@ class FollowupReplyPreviewScreen(ModalScreen[Optional[str]]):
         self.dismiss(None)
 
 
-# Re-exported from b4.tui (canonical home for shared modals)
-from b4.tui._modals import ToCcScreen as ToCcScreen
-from b4.tui._modals import ConfirmScreen as ConfirmScreen
-from b4.tui._modals import LimitScreen as LimitScreen
-from b4.tui._modals import ActionItem as ActionItem
-from b4.tui._modals import ActionScreen as ActionScreen
-
-
 class SendScreen(ModalScreen[bool]):
     """Modal confirmation screen showing a summary of emails to send."""
 
diff --git a/src/b4/review_tui/_pw_app.py b/src/b4/review_tui/_pw_app.py
index cfc0b11..2b0c10a 100644
--- a/src/b4/review_tui/_pw_app.py
+++ b/src/b4/review_tui/_pw_app.py
@@ -7,24 +7,32 @@ __author__ = 'Konstantin Ryabitsev <konstantin@linuxfoundation.org>'
 
 import json
 import pathlib
-
 from typing import Any, Dict, List, Optional, Set, Tuple
 
-import b4
-import b4.review
-import b4.review.tracking
-
+from rich.text import Text
 from textual.app import App, ComposeResult
 from textual.binding import Binding
 from textual.widgets import Footer, Label, ListItem, ListView, LoadingIndicator, Static
 from textual.worker import Worker, WorkerState
 
-from rich.text import Text
-
-from b4.review_tui._common import resolve_styles, ci_styles, logger, SeparatedFooter, _fix_ansi_theme, pad_display
+import b4
+import b4.review
+import b4.review.tracking
+from b4.review_tui._common import (
+    SeparatedFooter,
+    _fix_ansi_theme,
+    ci_styles,
+    logger,
+    pad_display,
+    resolve_styles,
+)
 from b4.review_tui._modals import (
-    CIChecksScreen, SetStateScreen, ApplyStateModal,
-    LimitScreen, HelpScreen, PW_HELP_LINES,
+    PW_HELP_LINES,
+    ApplyStateModal,
+    CIChecksScreen,
+    HelpScreen,
+    LimitScreen,
+    SetStateScreen,
 )
 
 
diff --git a/src/b4/review_tui/_review_app.py b/src/b4/review_tui/_review_app.py
index 7807064..51004de 100644
--- a/src/b4/review_tui/_review_app.py
+++ b/src/b4/review_tui/_review_app.py
@@ -10,39 +10,56 @@ import email.utils
 import os
 import re
 import subprocess
-
 from typing import Any, Dict, List, Optional, Set, Tuple
 
-import b4
-import b4.mbox
-import b4.review
-import b4.review.tracking
-
+from rich.rule import Rule
+from rich.syntax import Syntax
+from rich.text import Text
 from textual.app import App, ComposeResult
 from textual.binding import Binding
 from textual.containers import Horizontal, Vertical
 from textual.events import Click
 from textual.widgets import Label, ListItem, ListView, RichLog, Static
-from rich.rule import Rule
-from rich.syntax import Syntax
-from rich.text import Text
 
+import b4
+import b4.mbox
+import b4.review
+import b4.review.tracking
+from b4.review._review import COMMIT_MESSAGE_PATH
 from b4.review_tui._common import (
-    logger, PATCH_STATE_MARKERS,
-    resolve_styles, reviewer_colours, CheckRunnerMixin,
-    _quiet_worker, get_thread_msgs,
-    _has_review_data, _make_initials, _wait_for_enter,
-    _write_comments, _write_followup_comments,
-    _write_followup_trailers, _resolve_patch_for_followup, _chain_has_additional_patch,
-    _get_followup_depth, _render_email_to_viewer,
-    _suspend_to_shell, SeparatedFooter, _fix_ansi_theme,
+    PATCH_STATE_MARKERS,
+    CheckRunnerMixin,
+    SeparatedFooter,
+    _chain_has_additional_patch,
+    _fix_ansi_theme,
+    _get_followup_depth,
+    _has_review_data,
+    _make_initials,
+    _quiet_worker,
+    _render_email_to_viewer,
+    _resolve_patch_for_followup,
+    _suspend_to_shell,
+    _wait_for_enter,
+    _write_comments,
+    _write_followup_comments,
+    _write_followup_trailers,
+    get_thread_msgs,
+    logger,
+    resolve_styles,
+    reviewer_colours,
 )
 from b4.review_tui._modals import (
-    TrailerScreen, HelpScreen, _review_help_lines,
-    NoteScreen, PriorReviewScreen, ToCcScreen, SendScreen,
     FollowupReplyPreviewScreen,
+    HelpScreen,
+    NoteScreen,
+    PriorReviewScreen,
+    SendScreen,
+    ToCcScreen,
+    TrailerScreen,
+    _review_help_lines,
 )
 
+
 class PatchListItem(ListItem):
     """A single entry in the patch list."""
 
@@ -92,7 +109,6 @@ class FollowupItem(ListItem):
 
 
 
-from b4.review._review import COMMIT_MESSAGE_PATH
 
 
 class ReviewApp(CheckRunnerMixin, App[None]):
diff --git a/src/b4/review_tui/_tracking_app.py b/src/b4/review_tui/_tracking_app.py
index 1828bac..2493cda 100644
--- a/src/b4/review_tui/_tracking_app.py
+++ b/src/b4/review_tui/_tracking_app.py
@@ -17,15 +17,9 @@ import os
 import pathlib
 import re
 import sqlite3
-
 from string import Template
 from typing import Any, Dict, List, Literal, Optional, Tuple
 
-import b4
-import b4.mbox
-import b4.review
-import b4.review.tracking
-
 from rich.text import Text as RichText
 from textual.app import App, ComposeResult
 from textual.binding import Binding
@@ -33,19 +27,46 @@ from textual.containers import Horizontal, Vertical
 from textual.css.query import NoMatches
 from textual.widgets import Footer, Label, ListItem, ListView, Static
 from textual.worker import Worker, WorkerState
+
+import b4
+import b4.mbox
+import b4.review
+import b4.review.tracking
 from b4.review_tui._common import (
-    logger, resolve_styles, _wait_for_enter, _suspend_to_shell,
-    SeparatedFooter, _quiet_worker, CheckRunnerMixin,
-    _fix_ansi_theme, display_width, pad_display,
+    CheckRunnerMixin,
+    SeparatedFooter,
+    _fix_ansi_theme,
+    _quiet_worker,
+    _suspend_to_shell,
+    _wait_for_enter,
+    display_width,
+    logger,
+    pad_display,
+    resolve_styles,
 )
 from b4.review_tui._modals import (
-    BaseSelectionScreen, WorkerScreen, TakeScreen, TakeConfirmScreen,
-    CherryPickScreen, NewerRevisionWarningScreen,
-    RevisionChoiceScreen, RebaseScreen, TargetBranchScreen,
+    TRACKING_HELP_LINES,
     AbandonConfirmScreen,
-    ArchiveConfirmScreen, RangeDiffScreen, ThankScreen, QueueScreen, QueueDeliveryScreen,
-    LimitScreen, UpdateRevisionScreen, UpdateAllScreen,
-    ActionScreen, HelpScreen, SnoozeScreen, TRACKING_HELP_LINES,
+    ActionScreen,
+    ArchiveConfirmScreen,
+    BaseSelectionScreen,
+    CherryPickScreen,
+    HelpScreen,
+    LimitScreen,
+    NewerRevisionWarningScreen,
+    QueueDeliveryScreen,
+    QueueScreen,
+    RangeDiffScreen,
+    RebaseScreen,
+    RevisionChoiceScreen,
+    SnoozeScreen,
+    TakeConfirmScreen,
+    TakeScreen,
+    TargetBranchScreen,
+    ThankScreen,
+    UpdateAllScreen,
+    UpdateRevisionScreen,
+    WorkerScreen,
 )
 
 # Shortcut keys for the tracking-app action selector.
@@ -3725,6 +3746,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         """
         import tarfile
         import time
+
         import b4.ez
 
         topdir = b4.git_get_toplevel()
@@ -3823,6 +3845,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
     def action_thank(self) -> None:
         """Compose and preview a thank-you reply for a taken series."""
         import argparse
+
         import b4.review
         import b4.ty
 
diff --git a/src/b4/tui/_common.py b/src/b4/tui/_common.py
index f03976d..8eb6c45 100644
--- a/src/b4/tui/_common.py
+++ b/src/b4/tui/_common.py
@@ -11,18 +11,16 @@ import os
 import subprocess
 import tempfile
 import unicodedata
-
-from typing import Any, Dict, List, Optional
-
-import b4
-
 from collections import defaultdict
+from typing import Any, Dict, List, Optional
 
 from textual.app import ComposeResult
 from textual.binding import Binding
 from textual.widgets import Footer, ListView
 from textual.widgets._footer import FooterKey
 
+import b4
+
 logger = b4.logger
 
 
diff --git a/src/b4/tui/_modals.py b/src/b4/tui/_modals.py
index 46025f4..15f2e3b 100644
--- a/src/b4/tui/_modals.py
+++ b/src/b4/tui/_modals.py
@@ -6,7 +6,7 @@
 """Shared modal screens for b4 Textual apps."""
 __author__ = 'Konstantin Ryabitsev <konstantin@linuxfoundation.org>'
 
-from typing import Dict, List, Optional, Tuple, TYPE_CHECKING
+from typing import TYPE_CHECKING, Dict, List, Optional, Tuple
 
 if TYPE_CHECKING:
     from textual.events import Key
@@ -14,10 +14,15 @@ if TYPE_CHECKING:
 from textual.app import ComposeResult
 from textual.binding import Binding
 from textual.containers import Vertical
-from textual.widgets import Checkbox, Input, Label, ListItem, ListView, Static, TextArea
 from textual.screen import ModalScreen
+from textual.widgets import Checkbox, Input, Label, ListItem, ListView, Static, TextArea
 
-from b4.tui._common import JKListNavMixin, _addrs_to_lines, _lines_to_header, _validate_addrs
+from b4.tui._common import (
+    JKListNavMixin,
+    _addrs_to_lines,
+    _lines_to_header,
+    _validate_addrs,
+)
 
 
 class ToCcScreen(ModalScreen[bool]):
diff --git a/src/b4/ty.py b/src/b4/ty.py
index b429566..5786222 100644
--- a/src/b4/ty.py
+++ b/src/b4/ty.py
@@ -5,23 +5,20 @@
 #
 __author__ = 'Konstantin Ryabitsev <konstantin@linuxfoundation.org>'
 
-import os
-import sys
-
-import b4
-import re
+import argparse
 import email
 import email.parser
 import email.utils
 import json
-import argparse
-
-from string import Template
-from pathlib import Path
-
+import os
+import re
+import sys
 from email.message import EmailMessage
+from pathlib import Path
+from string import Template
+from typing import Any, Callable, Dict, List, Optional, Set, Tuple, Union, cast
 
-from typing import Callable, cast, Optional, Set, Tuple, Union, List, Dict, Any
+import b4
 
 ConfigDictT = b4.ConfigDictT
 JsonDictT = Dict[str, Union[str, int, List[Any], Dict[str, Any]]]
diff --git a/src/tests/conftest.py b/src/tests/conftest.py
index f42ade4..d8cd853 100644
--- a/src/tests/conftest.py
+++ b/src/tests/conftest.py
@@ -1,11 +1,12 @@
-import pytest
-import b4
 import os
 import pathlib
 import sys
-
 from typing import Generator
 
+import pytest
+
+import b4
+
 
 @pytest.fixture(scope="function", autouse=True)
 def settestdefaults(tmp_path: pathlib.Path) -> None:
diff --git a/src/tests/test___init__.py b/src/tests/test___init__.py
index 362643a..faf5c96 100644
--- a/src/tests/test___init__.py
+++ b/src/tests/test___init__.py
@@ -1,14 +1,15 @@
-import pytest
-import b4
-import os
 import email
 import email.parser
 import io
+import os
 import pathlib
 import socket
-
 from typing import Any, Dict, List, Literal, Optional, Tuple
 
+import pytest
+
+import b4
+
 
 @pytest.mark.parametrize('source,expected', [
     ('good-valid-trusted', (True, True, True, 'B6C41CE35664996C', '1623274836')),
@@ -679,8 +680,9 @@ class TestGetLoreNode:
 
     def test_uses_from_git_config(self, monkeypatch: pytest.MonkeyPatch) -> None:
         """get_lore_node() constructs via LoreNode.from_git_config()."""
-        import liblore
         from unittest.mock import MagicMock
+
+        import liblore
         mock_node = MagicMock()
         mock_from_gc = MagicMock(return_value=mock_node)
         monkeypatch.setattr(liblore.LoreNode, 'from_git_config', mock_from_gc)
@@ -690,8 +692,9 @@ class TestGetLoreNode:
 
     def test_sets_user_agent(self, monkeypatch: pytest.MonkeyPatch) -> None:
         """get_lore_node() calls set_user_agent with b4's identity."""
-        import liblore
         from unittest.mock import MagicMock
+
+        import liblore
         mock_node = MagicMock()
         monkeypatch.setattr(liblore.LoreNode, 'from_git_config', MagicMock(return_value=mock_node))
         b4.get_lore_node()
@@ -699,8 +702,9 @@ class TestGetLoreNode:
 
     def test_does_not_inject_session(self, monkeypatch: pytest.MonkeyPatch) -> None:
         """get_lore_node() lets liblore own its session."""
-        import liblore
         from unittest.mock import MagicMock
+
+        import liblore
         mock_node = MagicMock()
         monkeypatch.setattr(liblore.LoreNode, 'from_git_config', MagicMock(return_value=mock_node))
         b4.get_lore_node()
@@ -708,8 +712,9 @@ class TestGetLoreNode:
 
     def test_passes_cache_settings(self, monkeypatch: pytest.MonkeyPatch) -> None:
         """cache_dir and cache_ttl from b4 config are passed through."""
-        import liblore
         from unittest.mock import MagicMock
+
+        import liblore
         b4.MAIN_CONFIG['cache-expire'] = '5'
         mock_node = MagicMock()
         mock_from_gc = MagicMock(return_value=mock_node)
@@ -721,8 +726,9 @@ class TestGetLoreNode:
 
     def test_singleton(self, monkeypatch: pytest.MonkeyPatch) -> None:
         """Repeated calls return the same LoreNode instance."""
-        import liblore
         from unittest.mock import MagicMock
+
+        import liblore
         mock_node = MagicMock()
         mock_from_gc = MagicMock(return_value=mock_node)
         monkeypatch.setattr(liblore.LoreNode, 'from_git_config', mock_from_gc)
diff --git a/src/tests/test_ez.py b/src/tests/test_ez.py
index 7e67a0b..ef21985 100644
--- a/src/tests/test_ez.py
+++ b/src/tests/test_ez.py
@@ -1,12 +1,13 @@
-import pytest
 import os
+from typing import Any, Dict, Generator, List, Optional, Tuple
+from unittest.mock import MagicMock, patch
+
+import pytest
+
 import b4
+import b4.command
 import b4.ez
 import b4.mbox
-import b4.command
-
-from typing import Any, Dict, Generator, List, Optional, Tuple
-from unittest.mock import MagicMock, patch
 
 
 @pytest.fixture(scope="function")
diff --git a/src/tests/test_mbox.py b/src/tests/test_mbox.py
index b3c0536..b533421 100644
--- a/src/tests/test_mbox.py
+++ b/src/tests/test_mbox.py
@@ -1,13 +1,14 @@
-import pytest
 import os
-import b4
-import b4.mbox
-import b4.command
-
 from email.message import EmailMessage
 from typing import Any, Dict, List
 from unittest.mock import patch as mock_patch
 
+import pytest
+
+import b4
+import b4.command
+import b4.mbox
+
 
 @pytest.mark.parametrize('mboxf, shazamargs, compareargs, compareout, b4cfg', [
     ('shazam-git1-just-series', [],
diff --git a/src/tests/test_patatt.py b/src/tests/test_patatt.py
index 592d546..c257d41 100644
--- a/src/tests/test_patatt.py
+++ b/src/tests/test_patatt.py
@@ -10,12 +10,11 @@ from collections.abc import Generator
 from typing import Tuple, Union
 
 import pytest
+from nacl.signing import SigningKey
 
 import b4
 import patatt
 
-from nacl.signing import SigningKey
-
 
 @pytest.fixture()
 def ed25519_keypair() -> Generator[Tuple[str, str, str, str], None, None]:
diff --git a/src/tests/test_rethread.py b/src/tests/test_rethread.py
index 1cf5239..f2a0394 100644
--- a/src/tests/test_rethread.py
+++ b/src/tests/test_rethread.py
@@ -1,10 +1,10 @@
 # SPDX-License-Identifier: GPL-2.0-or-later
 # Copyright (C) 2020 by the Linux Foundation
-import b4
 import email.message
+from typing import List, Optional, Tuple
 from unittest import mock
 
-from typing import List, Optional, Tuple
+import b4
 
 
 # ---------------------------------------------------------------------------
diff --git a/src/tests/test_review.py b/src/tests/test_review.py
index 1523618..92ebe57 100644
--- a/src/tests/test_review.py
+++ b/src/tests/test_review.py
@@ -6,11 +6,9 @@ from unittest import mock
 import pytest
 
 import b4
-from b4 import review
-from b4 import review_tui
+from b4 import review, review_tui
 from b4.review._review import REVIEW_MAGIC_MARKER, check_series_attestation
 
-
 # -- Helper diffs used across tests ------------------------------------------
 
 # A minimal single-file, single-hunk diff
@@ -140,7 +138,7 @@ class TestRenderQuotedDiffWithComments:
         # First non-empty line should be an instruction
         assert lines[0].startswith('# ')
         # Instructions end before the first quoted diff line
-        instruction_lines = [l for l in lines if l.startswith('#')]
+        instruction_lines = [line for line in lines if line.startswith('#')]
         assert len(instruction_lines) >= 3
         # _extract_editor_comments should strip them
         comments = review._extract_editor_comments(result)
@@ -157,7 +155,7 @@ class TestRenderQuotedDiffWithComments:
         assert '> Second line.' in lines
         # They should come before the diff
         body_idx = lines.index('> This is the body.')
-        diff_idx = next(i for i, l in enumerate(lines) if 'diff --git' in l)
+        diff_idx = next(i for i, line in enumerate(lines) if 'diff --git' in line)
         assert body_idx < diff_idx
 
     def test_commit_msg_own_comment(self) -> None:
@@ -213,7 +211,7 @@ class TestRenderQuotedDiffWithComments:
         lines = result.splitlines()
         assert 'General note' in lines
         note_idx = lines.index('General note')
-        body_idx = next(i for i, l in enumerate(lines) if 'First body line' in l)
+        body_idx = next(i for i, line in enumerate(lines) if 'First body line' in line)
         assert note_idx < body_idx
 
 
@@ -583,7 +581,7 @@ index abc..def 100644
         assert 'General feedback.' in lines
         # Preamble should come before any quoted line
         feedback_idx = lines.index('General feedback.')
-        quoted_lines = [i for i, l in enumerate(lines) if l.startswith('>')]
+        quoted_lines = [i for i, line in enumerate(lines) if line.startswith('>')]
         if quoted_lines:
             assert feedback_idx < quoted_lines[0]
 
diff --git a/src/tests/test_review_checks.py b/src/tests/test_review_checks.py
index 7d737ad..c866082 100644
--- a/src/tests/test_review_checks.py
+++ b/src/tests/test_review_checks.py
@@ -9,7 +9,6 @@ import pytest
 
 from b4.review import checks
 
-
 # ---------------------------------------------------------------------------
 # Helpers
 # ---------------------------------------------------------------------------
diff --git a/src/tests/test_review_show_info.py b/src/tests/test_review_show_info.py
index abfbf3c..db955c4 100644
--- a/src/tests/test_review_show_info.py
+++ b/src/tests/test_review_show_info.py
@@ -5,18 +5,17 @@
 #
 """Tests for ``b4 review show-info``."""
 import json
-import pytest
 
+import pytest
 
 import b4
 import b4.review
 from b4.review._review import (
     get_review_info,
-    show_review_info,
     list_review_branches,
+    show_review_info,
 )
 
-
 # ---------------------------------------------------------------------------
 # Helpers
 # ---------------------------------------------------------------------------
diff --git a/src/tests/test_review_tracking.py b/src/tests/test_review_tracking.py
index 8cc0c70..a5fd903 100644
--- a/src/tests/test_review_tracking.py
+++ b/src/tests/test_review_tracking.py
@@ -12,8 +12,8 @@ import pytest
 import b4
 import b4.review
 from b4.review import tracking as review_tracking
-from b4.review_tui._tracking_app import _format_snooze_until, _format_attestation
 from b4.review_tui._modals import SnoozeScreen
+from b4.review_tui._tracking_app import _format_attestation, _format_snooze_until
 
 
 class TestGetReviewDataDir:
@@ -1820,14 +1820,14 @@ class TestBuildReplyFromComments:
 
     def _skip_markers(self, lines: list[str]) -> list[str]:
         """Return all skip-marker lines from the output."""
-        return [l for l in lines if l.startswith('> [ ... skip')]
+        return [line for line in lines if line.startswith('> [ ... skip')]
 
     def test_short_hunk_no_skip_marker(self) -> None:
         """Comment within 5 lines of hunk start → no skip marker of any kind."""
         lines = self._call([self._make_comment(3, 'nice')])
         assert not self._skip_markers(lines)
         # @@ header always present
-        assert any('@@ -0,0 +1,40 @@' in l for l in lines)
+        assert any('@@ -0,0 +1,40 @@' in line for line in lines)
         # All 3 added lines quoted
         assert '> +line1' in lines
         assert '> +line2' in lines
@@ -1852,7 +1852,7 @@ class TestBuildReplyFromComments:
         assert len(markers) == 1
         assert 'skip 14 lines' in markers[0]
         # @@ header present
-        assert any('@@ -0,0 +1,40 @@' in l for l in lines)
+        assert any('@@ -0,0 +1,40 @@' in line for line in lines)
         # Only lines 15-20 quoted (5 context + the commented line)
         assert '> +line15' in lines
         assert '> +line20' in lines
@@ -1903,7 +1903,7 @@ class TestBuildReplyFromComments:
     def test_hunk_header_always_present(self) -> None:
         """The @@ hunk header is always included even for a comment on line 20."""
         lines = self._call([self._make_comment(20, 'end')])
-        assert any('@@ -0,0 +1,40 @@' in l for l in lines)
+        assert any('@@ -0,0 +1,40 @@' in line for line in lines)
         assert self._skip_markers(lines)
         assert '> +line20' in lines
         assert '> +line14' not in lines
@@ -1915,7 +1915,7 @@ class TestBuildReplyFromComments:
             self._make_comment(10, 'y'),
         ]
         lines = self._call(comments)
-        quoted = [l for l in lines if l.startswith('> +')]
+        quoted = [line for line in lines if line.startswith('> +')]
         # Each quoted diff line should appear exactly once
         assert len(quoted) == len(set(quoted))
 
diff --git a/src/tests/test_three_way_merge.py b/src/tests/test_three_way_merge.py
index 83a4c77..c0127bf 100644
--- a/src/tests/test_three_way_merge.py
+++ b/src/tests/test_three_way_merge.py
@@ -1,13 +1,14 @@
 import argparse
 import json
 import os
+from typing import Any, Dict, Optional, Tuple
+from unittest.mock import patch
+
 import pytest
+
 import b4
 import b4.mbox
 
-from typing import Any, Dict, Optional, Tuple
-from unittest.mock import patch
-
 
 class TestAmConflictError:
     """Tests for the AmConflictError exception class."""
diff --git a/src/tests/test_tui_bugs.py b/src/tests/test_tui_bugs.py
index 19f073c..4cab381 100644
--- a/src/tests/test_tui_bugs.py
+++ b/src/tests/test_tui_bugs.py
@@ -12,8 +12,6 @@ from datetime import datetime, timezone
 from typing import Set
 from unittest import mock
 
-from ezgb import Bug, BugSummary, Comment, Identity, Status
-
 from b4.bugs._import import (
     format_comment,
     is_comment_removed,
@@ -29,7 +27,7 @@ from b4.bugs._tui import (
     _relative_time,
     label_color,
 )
-
+from ezgb import Bug, BugSummary, Comment, Identity, Status
 
 # ---------------------------------------------------------------------------
 # Helpers -- factory functions for real Bug and BugSummary objects
diff --git a/src/tests/test_tui_modals.py b/src/tests/test_tui_modals.py
index 7b123cc..e0e6f3f 100644
--- a/src/tests/test_tui_modals.py
+++ b/src/tests/test_tui_modals.py
@@ -9,14 +9,14 @@ Uses Textual's built-in ``App.run_test()`` / ``Pilot`` harness so the
 tests run without a real terminal.  Only lightweight, self-contained
 modals are exercised here — no database, network, or git needed.
 """
-import pytest
-
 from typing import Any, Dict, List, Optional, Tuple
 
+import pytest
 from textual.app import App, ComposeResult
 from textual.widgets import Input, Label, ListView
 
 from b4.review_tui._modals import (
+    TRACKING_HELP_LINES,
     ActionScreen,
     ConfirmScreen,
     HelpScreen,
@@ -28,10 +28,8 @@ from b4.review_tui._modals import (
     SnoozeScreen,
     TrailerScreen,
     UpdateRevisionScreen,
-    TRACKING_HELP_LINES,
 )
 
-
 # ---------------------------------------------------------------------------
 # Compat helper — Textual ≥ 1.0 (pip) uses Static.content,
 # older builds (e.g. Fedora 43 package) still use Static.renderable.
diff --git a/src/tests/test_tui_review.py b/src/tests/test_tui_review.py
index c3a2db4..3222989 100644
--- a/src/tests/test_tui_review.py
+++ b/src/tests/test_tui_review.py
@@ -8,16 +8,14 @@
 Tests the shell-return reconciliation logic that detects and handles
 cosmetic commit edits (e.g. reworded subjects via git rebase -i).
 """
-import pytest
-
 from typing import Any, Dict, List, Tuple
 
+import pytest
+
 import b4
 import b4.review
-
 from b4.review_tui._review_app import ReviewApp
 
-
 # ---------------------------------------------------------------------------
 # Helpers
 # ---------------------------------------------------------------------------
diff --git a/src/tests/test_tui_tracking.py b/src/tests/test_tui_tracking.py
index 96b160a..80004e8 100644
--- a/src/tests/test_tui_tracking.py
+++ b/src/tests/test_tui_tracking.py
@@ -11,28 +11,25 @@ core user workflows: series listing, navigation, filtering,
 status transitions, and modal interactions.
 """
 import pathlib
-import pytest
-
 from typing import Any, Dict, List, Optional
 from unittest.mock import patch
 
+import pytest
+from textual.widgets import Input, ListView, Static
+
 import b4
 import b4.review
 import b4.review.tracking as tracking
-
-from textual.widgets import Input, ListView, Static
-
-from b4.review_tui._tracking_app import TrackingApp, TrackedSeriesItem
 from b4.review_tui._modals import (
-    ActionScreen,
     ActionItem,
+    ActionScreen,
     ConfirmScreen,
     HelpScreen,
     LimitScreen,
     SnoozeScreen,
     TargetBranchScreen,
 )
-
+from b4.review_tui._tracking_app import TrackedSeriesItem, TrackingApp
 
 # ---------------------------------------------------------------------------
 # Compat helper — Textual ≥ 1.0 (pip) uses Static.content,

-- 
2.53.0



* [PATCH b4 v2 03/11] Import dependencies unconditionally
  2026-04-19 15:59 [PATCH b4 v2 00/11] Enable stricter local checks Tamir Duberstein
  2026-04-19 15:59 ` [PATCH b4 v2 01/11] Add CI script Tamir Duberstein
  2026-04-19 15:59 ` [PATCH b4 v2 02/11] Add ruff checks to CI Tamir Duberstein
@ 2026-04-19 15:59 ` Tamir Duberstein
  2026-04-19 15:59 ` [PATCH b4 v2 04/11] Add ruff format check to CI Tamir Duberstein
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Tamir Duberstein @ 2026-04-19 15:59 UTC (permalink / raw)
  To: Kernel.org Tools; +Cc: Konstantin Ryabitsev, Tamir Duberstein

These modules (dkim, patatt, git_filter_repo) have been hard
dependencies since commit f4185d6b, so drop the try/except import
guards and the can_dkim/can_patatt/can_gfr feature flags.

Signed-off-by: Tamir Duberstein <tamird@kernel.org>
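
For reference, this is the guarded-import pattern the patch below deletes. The sketch uses json as a stand-in for dkim/patatt so it is runnable anywhere; the real code gates DKIM and patatt attestation on these flags.

```python
# Sketch of the optional-import guard removed from b4/__init__.py and
# b4/ez.py (json stands in for the real optional modules).
try:
    import json
    can_json = True
except ModuleNotFoundError:
    can_json = False


def dump_if_available(obj: object) -> str:
    """Mirror of the call-site pattern: check the flag, else degrade."""
    if not can_json:
        return ''
    return json.dumps(obj)


# With the dependency made unconditional, both the try/except and the
# flag checks collapse to a plain `import json` and direct calls.
print(dump_if_available({'ok': True}))  # → {"ok": true}
```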
---
 src/b4/__init__.py    | 30 ++----------------------------
 src/b4/ez.py          | 30 ++++--------------------------
 src/tests/conftest.py |  1 -
 3 files changed, 6 insertions(+), 55 deletions(-)

diff --git a/src/b4/__init__.py b/src/b4/__init__.py
index 1e4c91e..df5c58c 100644
--- a/src/b4/__init__.py
+++ b/src/b4/__init__.py
@@ -49,10 +49,12 @@ from typing import (
     overload,
 )
 
+import dkim  # type: ignore[import-untyped]
 import liblore.utils
 import requests
 
 import liblore
+import patatt
 
 ConfigDictT = Dict[str, Union[str, List[str], None]]
 
@@ -65,20 +67,6 @@ emlpolicy = email.policy.EmailPolicy(utf8=True, cte_type='8bit', max_line_length
 # adapted from email._parseaddr
 qspecials = re.compile(r'[()<>@,:;.\"\[\]]')
 
-try:
-    import dkim  # type: ignore[import-untyped]
-
-    can_dkim = True
-except ModuleNotFoundError:
-    can_dkim = False
-
-try:
-    import patatt
-
-    can_patatt = True
-except ModuleNotFoundError:
-    can_patatt = False
-
 # global setting allowing us to turn off networking
 can_network = True
 
@@ -1012,13 +1000,6 @@ class LoreSeries:
             for trailer in attref:
                 logger.info('  %s', trailer)
 
-        if not (can_dkim and can_patatt):
-            logger.info('  ---')
-            if not can_dkim:
-                logger.info('  NOTE: install dkimpy for DKIM signature verification')
-            if not can_patatt:
-                logger.info('  NOTE: install patatt for end-to-end signature verification')
-
         return msgs
 
 
@@ -1694,9 +1675,6 @@ class LoreMessage:
         if not can_network:
             logger.debug('Message has DKIM signatures, but can_network is off')
             return
-        if not can_dkim:
-            logger.debug('Message has DKIM signatures, but can_dkim is off')
-            return
 
         # Identify all DKIM-Signature headers and try them in reverse order
         # until we come to a passing one
@@ -1783,10 +1761,6 @@ class LoreMessage:
                 self.body = '\n'.join(ibh) + '\n\n' + self.body
 
     def _load_patatt_attestors(self) -> None:
-        if not can_patatt:
-            logger.debug('Message has %s headers, but can_patatt is off', DEVSIG_HDR)
-            return
-
         # This should be always the case, but assert it anyway
         assert isinstance(self._attestors, list)
 
diff --git a/src/b4/ez.py b/src/b4/ez.py
index e69a106..562f2a9 100644
--- a/src/b4/ez.py
+++ b/src/b4/ez.py
@@ -13,6 +13,7 @@ import email.policy
 import email.utils
 import gzip
 import hashlib
+import importlib.util
 import io
 import json
 import os
@@ -29,21 +30,10 @@ from email.message import EmailMessage
 from string import Template
 from typing import Any, Dict, List, Optional, Set, Tuple, Union
 
-import b4
-
-try:
-    import patatt
-    can_patatt = True
-except ModuleNotFoundError:
-    can_patatt = False
+import git_filter_repo as fr  # type: ignore[import-untyped]
 
-try:
-    import git_filter_repo as fr  # type: ignore[import-untyped]
-    can_gfr = True
-except ModuleNotFoundError:
-    can_gfr = False
-
-import importlib.util
+import b4
+import patatt
 
 can_codespell = importlib.util.find_spec('codespell_lib') is not None
 
@@ -150,9 +140,6 @@ def run_frf(frf: fr.RepoFilter) -> None:
     but is completely unnecessary for b4's purposes. Delete this file after
     each invocation, so it doesn't interfere with subsequent runs.
     """
-    if not can_gfr:
-        logger.critical('CRITICAL: git-filter-repo is not available')
-        sys.exit(1)
     run_rewrite_hook('pre')
     logger.debug('Running git-filter-repo...')
     frf.run()
@@ -2492,13 +2479,6 @@ def reroll(mybranch: str, tag_msg: str, msgid: str, tagprefix: str = SENT_TAG_PR
     store_cover(new_cover, tracking)
 
 
-def check_can_gfr() -> None:
-    if not can_gfr:
-        logger.critical('ERROR: b4 submit requires git-filter-repo. You should be able')
-        logger.critical('       to install it from your distro packages, or from pip.')
-        sys.exit(1)
-
-
 def show_revision() -> None:
     is_prep_branch(mustbe=True)
     _cover, tracking = load_cover()
@@ -3054,7 +3034,6 @@ def set_presubject(presubject: str) -> None:
 
 
 def cmd_prep(cmdargs: argparse.Namespace) -> None:
-    check_can_gfr()
     status = b4.git_get_repo_status()
     if len(status):
         logger.critical('CRITICAL: Repository contains uncommitted changes.')
@@ -3156,7 +3135,6 @@ def cmd_prep(cmdargs: argparse.Namespace) -> None:
 
 
 def cmd_trailers(cmdargs: argparse.Namespace) -> None:
-    check_can_gfr()
     status = b4.git_get_repo_status()
     if len(status):
         logger.critical('CRITICAL: Repository contains uncommitted changes.')
diff --git a/src/tests/conftest.py b/src/tests/conftest.py
index d8cd853..3ff3891 100644
--- a/src/tests/conftest.py
+++ b/src/tests/conftest.py
@@ -13,7 +13,6 @@ def settestdefaults(tmp_path: pathlib.Path) -> None:
     topdir = b4.git_get_toplevel()
     if topdir and topdir != os.getcwd():
         os.chdir(topdir)
-    b4.can_patatt = False
     b4.can_network = False
     b4.MAIN_CONFIG = dict(b4.DEFAULT_CONFIG)
     b4.USER_CONFIG = {

-- 
2.53.0



* [PATCH b4 v2 04/11] Add ruff format check to CI
  2026-04-19 15:59 [PATCH b4 v2 00/11] Enable stricter local checks Tamir Duberstein
                   ` (2 preceding siblings ...)
  2026-04-19 15:59 ` [PATCH b4 v2 03/11] Import dependencies unconditionally Tamir Duberstein
@ 2026-04-19 15:59 ` Tamir Duberstein
  2026-04-19 18:06   ` Tamir Duberstein
  2026-04-19 16:00 ` [PATCH b4 v2 05/11] Fix tests under uv with complex git config Tamir Duberstein
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 14+ messages in thread
From: Tamir Duberstein @ 2026-04-19 15:59 UTC (permalink / raw)
  To: Kernel.org Tools
  Cc: Konstantin Ryabitsev, Tamir Duberstein

Enable ruff format checking in the b4 CI script and configure the ruff
formatter to use single quotes in pyproject.toml.

Apply a one-time repo-wide format pass so the new check enforces the
current style without leaving the branch permanently red.
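The pass is behavior-preserving: the formatter only reflows lines
(adding parentheses, line breaks, and slice spacing), so the wrapped
forms evaluate identically to the originals. A minimal sketch, using
two reflows taken from hunks in this patch:

```python
# Implicit string concatenation inside parentheses (as applied in
# misc/review-ci-example.py) yields the same string value.
original = 'Error: implicit declaration of function bar\n  drivers/foo.c:57:5'
reflowed = (
    'Error: implicit declaration of function bar\n  drivers/foo.c:57:5'
)
assert reflowed == original

# Spacing inside a slice (as applied in misc/send-receive.py) does not
# change which characters are selected.
qp, wrapat = '=?utf-8?q?example=20text', 12
assert qp[wrapat - 2:wrapat] == qp[wrapat - 2 : wrapat]
```

Because the reflow never changes runtime behavior, the repo-wide pass
can land as a single mechanical commit.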

Signed-off-by: Tamir Duberstein <tamird@kernel.org>
---
 ci.sh                              |    1 +
 misc/retrieve_lore_thread.py       |    2 +-
 misc/review-ci-example.py          |    4 +-
 misc/send-receive.py               |  256 ++++--
 pyproject.toml                     |    3 +
 src/b4/__init__.py                 | 1333 +++++++++++++++++++++--------
 src/b4/bugs/__init__.py            |   44 +-
 src/b4/bugs/_import.py             |   14 +-
 src/b4/bugs/_tui.py                |  433 ++++++----
 src/b4/command.py                  | 1354 ++++++++++++++++++++++-------
 src/b4/diff.py                     |   34 +-
 src/b4/dig.py                      |   58 +-
 src/b4/ez.py                       |  908 ++++++++++++++------
 src/b4/kr.py                       |    8 +-
 src/b4/mbox.py                     |  274 ++++--
 src/b4/pr.py                       |  127 ++-
 src/b4/review/__init__.py          |   23 +-
 src/b4/review/_review.py           |  485 +++++++----
 src/b4/review/checks.py            |  379 +++++----
 src/b4/review/messages.py          |   63 +-
 src/b4/review/tracking.py          |  646 ++++++++------
 src/b4/review_tui/__init__.py      |   18 +-
 src/b4/review_tui/_common.py       |  173 ++--
 src/b4/review_tui/_entry.py        |   66 +-
 src/b4/review_tui/_lite_app.py     |   92 +-
 src/b4/review_tui/_modals.py       |  612 +++++++++-----
 src/b4/review_tui/_pw_app.py       |  173 ++--
 src/b4/review_tui/_review_app.py   |  426 +++++++---
 src/b4/review_tui/_tracking_app.py | 1474 +++++++++++++++++++++-----------
 src/b4/tui/__init__.py             |    1 +
 src/b4/tui/_common.py              |   19 +-
 src/b4/tui/_modals.py              |   38 +-
 src/b4/ty.py                       |  187 +++--
 src/tests/conftest.py              |   10 +-
 src/tests/test___init__.py         |  799 ++++++++++++------
 src/tests/test_ez.py               |  143 +++-
 src/tests/test_mbox.py             |   85 +-
 src/tests/test_messages.py         |   21 +-
 src/tests/test_patatt.py           |   30 +-
 src/tests/test_rethread.py         |  174 ++--
 src/tests/test_review.py           | 1636 +++++++++++++++++++++---------------
 src/tests/test_review_checks.py    |  454 ++++++----
 src/tests/test_review_show_info.py |   92 +-
 src/tests/test_review_tracking.py  |  982 ++++++++++++++--------
 src/tests/test_three_way_merge.py  |  181 ++--
 src/tests/test_tui_bugs.py         |   38 +-
 src/tests/test_tui_modals.py       |   44 +-
 src/tests/test_tui_review.py       |   94 +--
 src/tests/test_tui_tracking.py     | 1332 ++++++++++++++++++-----------
 49 files changed, 10637 insertions(+), 5206 deletions(-)

diff --git a/ci.sh b/ci.sh
index b65ae97..ddd4cff 100755
--- a/ci.sh
+++ b/ci.sh
@@ -2,5 +2,6 @@
 
 set -eu
 
+uv run ruff format --check
 uv run ruff check
 uv run mypy .
diff --git a/misc/retrieve_lore_thread.py b/misc/retrieve_lore_thread.py
index 4de39fb..aad586b 100644
--- a/misc/retrieve_lore_thread.py
+++ b/misc/retrieve_lore_thread.py
@@ -21,7 +21,7 @@ class Function(OpenAISchema):
     )
 
     class Config:
-        title = "retrieve_lore_thread"
+        title = 'retrieve_lore_thread'
 
     @classmethod
     def execute(cls, message_id: str) -> str:
diff --git a/misc/review-ci-example.py b/misc/review-ci-example.py
index cbac2ae..dee0a5d 100755
--- a/misc/review-ci-example.py
+++ b/misc/review-ci-example.py
@@ -76,7 +76,9 @@ def main() -> None:
     if build_status == 'warn':
         build_result['details'] = 'Warning: unused variable in drivers/foo.c:42'
     elif build_status == 'fail':
-        build_result['details'] = 'Error: implicit declaration of function bar\n  drivers/foo.c:57:5'
+        build_result['details'] = (
+            'Error: implicit declaration of function bar\n  drivers/foo.c:57:5'
+        )
     results.append(build_result)
 
     # Simulate a test suite check
diff --git a/misc/send-receive.py b/misc/send-receive.py
index 35c5e99..22a5d99 100644
--- a/misc/send-receive.py
+++ b/misc/send-receive.py
@@ -36,7 +36,6 @@ logger.setLevel(logging.DEBUG)
 
 
 class SendReceiveListener(object):
-
     def __init__(self, _engine, _config) -> None:
         self._engine = _engine
         self._config = _config
@@ -51,7 +50,9 @@ class SendReceiveListener(object):
     def _init_logger(self, logfile: str, loglevel: str) -> None:
         global logger
         lch = logging.handlers.WatchedFileHandler(os.path.expanduser(logfile))
-        lfmt = logging.Formatter('[%(process)d] %(asctime)s - %(levelname)s - %(message)s')
+        lfmt = logging.Formatter(
+            '[%(process)d] %(asctime)s - %(levelname)s - %(message)s'
+        )
         lch.setFormatter(lfmt)
         if loglevel == 'critical':
             lch.setLevel(logging.CRITICAL)
@@ -65,18 +66,23 @@ class SendReceiveListener(object):
         logger.info('Setting up SQLite database')
         conn = self._engine.connect()
         md = sa.MetaData()
-        meta = sa.Table('meta', md,
-                        sa.Column('version', sa.Integer())
-                        )
-        auth = sa.Table('auth', md,
-                        sa.Column('auth_id', sa.Integer(), primary_key=True),
-                        sa.Column('created', sa.DateTime(), nullable=False, server_default=sa.sql.func.now()),
-                        sa.Column('identity', sa.Text(), nullable=False),
-                        sa.Column('selector', sa.Text(), nullable=False),
-                        sa.Column('pubkey', sa.Text(), nullable=False),
-                        sa.Column('challenge', sa.Text(), nullable=True),
-                        sa.Column('verified', sa.Integer(), nullable=False),
-                        )
+        meta = sa.Table('meta', md, sa.Column('version', sa.Integer()))
+        auth = sa.Table(
+            'auth',
+            md,
+            sa.Column('auth_id', sa.Integer(), primary_key=True),
+            sa.Column(
+                'created',
+                sa.DateTime(),
+                nullable=False,
+                server_default=sa.sql.func.now(),
+            ),
+            sa.Column('identity', sa.Text(), nullable=False),
+            sa.Column('selector', sa.Text(), nullable=False),
+            sa.Column('pubkey', sa.Text(), nullable=False),
+            sa.Column('challenge', sa.Text(), nullable=True),
+            sa.Column('verified', sa.Integer(), nullable=False),
+        )
         sa.Index('idx_identity_selector', auth.c.identity, auth.c.selector, unique=True)
         md.create_all(self._engine)
         q = sa.insert(meta).values(version=DB_VERSION)
@@ -98,7 +104,9 @@ class SendReceiveListener(object):
         logger.debug('Returning success: %s', message)
         resp.text = json.dumps({'result': 'success', 'message': message})
 
-    def get_smtp(self) -> Tuple[Union[smtplib.SMTP, smtplib.SMTP_SSL, None], Tuple[str, str]]:
+    def get_smtp(
+        self,
+    ) -> Tuple[Union[smtplib.SMTP, smtplib.SMTP_SSL, None], Tuple[str, str]]:
         sconfig = self._config['sendemail']
         server = sconfig.get('smtpserver', 'localhost')
         port = sconfig.get('smtpserverport', 0)
@@ -120,7 +128,9 @@ class SendReceiveListener(object):
                 # We do TLS from the get-go
                 smtp = smtplib.SMTP_SSL(server, port)
             else:
-                raise smtplib.SMTPException('Unclear what to do with smtpencryption=%s' % encryption)
+                raise smtplib.SMTPException(
+                    'Unclear what to do with smtpencryption=%s' % encryption
+                )
 
             # If we got to this point, we should do authentication.
             auser = sconfig.get('smtpuser')
@@ -144,21 +154,35 @@ class SendReceiveListener(object):
         logger.info('New authentication request for %s/%s', identity, selector)
         pubkey = jdata.get('pubkey')
         t_auth = sa.Table('auth', md, autoload=True, autoload_with=self._engine)
-        q = sa.select([t_auth.c.auth_id]).where(t_auth.c.identity == identity, t_auth.c.selector == selector,
-                                                t_auth.c.verified == 1)
+        q = sa.select([t_auth.c.auth_id]).where(
+            t_auth.c.identity == identity,
+            t_auth.c.selector == selector,
+            t_auth.c.verified == 1,
+        )
         rp = conn.execute(q)
         if len(rp.fetchall()):
-            self.send_error(resp, message='i=%s;s=%s is already authorized' % (identity, selector))
+            self.send_error(
+                resp, message='i=%s;s=%s is already authorized' % (identity, selector)
+            )
             return
         # delete any existing challenges for this and create a new one
-        q = sa.delete(t_auth).where(t_auth.c.identity == identity, t_auth.c.selector == selector,
-                                    t_auth.c.verified == 0)
+        q = sa.delete(t_auth).where(
+            t_auth.c.identity == identity,
+            t_auth.c.selector == selector,
+            t_auth.c.verified == 0,
+        )
         conn.execute(q)
         # create new challenge
         import uuid
+
         cstr = str(uuid.uuid4())
-        q = sa.insert(t_auth).values(identity=identity, selector=selector, pubkey=pubkey, challenge=cstr,
-                                     verified=0)
+        q = sa.insert(t_auth).values(
+            identity=identity,
+            selector=selector,
+            pubkey=pubkey,
+            challenge=cstr,
+            verified=0,
+        )
         conn.execute(q)
         logger.info('Created new challenge for %s/%s: %s', identity, selector, cstr)
         conn.close()
@@ -172,7 +196,9 @@ class SendReceiveListener(object):
         tpt_subject = self._config['templates']['verify-subject'].strip()
         tpt_body = self._config['templates']['verify-body'].strip()
         signature = self._config['templates']['signature'].strip()
-        subject = Template(tpt_subject).safe_substitute({'identity': jdata.get('identity')})
+        subject = Template(tpt_subject).safe_substitute(
+            {'identity': jdata.get('identity')}
+        )
         cmsg.add_header('Subject', subject)
         name = jdata.get('name', 'Anonymous Llama')
         cmsg.add_header('To', f'{name} <{identity}>')
@@ -215,9 +241,11 @@ class SendReceiveListener(object):
             if s:
                 selector = s.decode()
             logger.debug('i=%s; s=%s', identity, selector)
-            q = sa.select([t_auth.c.auth_id, t_auth.c.pubkey]).where(t_auth.c.identity == identity,
-                                                                     t_auth.c.selector == selector,
-                                                                     t_auth.c.verified == verified)
+            q = sa.select([t_auth.c.auth_id, t_auth.c.pubkey]).where(
+                t_auth.c.identity == identity,
+                t_auth.c.selector == selector,
+                t_auth.c.verified == verified,
+            )
             rp = conn.execute(q)
             res = rp.fetchall()
             if res:
@@ -228,7 +256,9 @@ class SendReceiveListener(object):
             logger.debug('Did not find a matching identity!')
             raise patatt.NoKeyError('No match for this identity')
 
-        logger.debug('Found matching %s/%s with auth_id=%s', identity, selector, auth_id)
+        logger.debug(
+            'Found matching %s/%s with auth_id=%s', identity, selector, auth_id
+        )
         pm.validate(identity, pubkey.encode())
 
         return identity, selector, auth_id
@@ -243,11 +273,18 @@ class SendReceiveListener(object):
         t_auth = sa.Table('auth', md, autoload=True, autoload_with=self._engine)
         bdata = msg.encode()
         try:
-            identity, selector, auth_id = self.validate_message(conn, t_auth, bdata, verified=0)
+            identity, selector, auth_id = self.validate_message(
+                conn, t_auth, bdata, verified=0
+            )
         except Exception as ex:
             self.send_error(resp, message='Signature validation failed: %s' % ex)
             return
-        logger.debug('Message validation passed for %s/%s with auth_id=%s', identity, selector, auth_id)
+        logger.debug(
+            'Message validation passed for %s/%s with auth_id=%s',
+            identity,
+            selector,
+            auth_id,
+        )
 
         # Now compare the challenge to what we received
         q = sa.select([t_auth.c.challenge]).where(t_auth.c.auth_id == auth_id)
@@ -255,13 +292,28 @@ class SendReceiveListener(object):
         res = rp.fetchall()
         challenge = res[0][0]
         if msg.find(f'\nverify:{challenge}') < 0:
-            self.send_error(resp, message='Challenge verification for %s/%s did not match' % (identity, selector))
+            self.send_error(
+                resp,
+                message='Challenge verification for %s/%s did not match'
+                % (identity, selector),
+            )
             return
-        logger.info('Successfully verified challenge for %s/%s with auth_id=%s', identity, selector, auth_id)
-        q = sa.update(t_auth).where(t_auth.c.auth_id == auth_id).values(challenge=None, verified=1)
+        logger.info(
+            'Successfully verified challenge for %s/%s with auth_id=%s',
+            identity,
+            selector,
+            auth_id,
+        )
+        q = (
+            sa.update(t_auth)
+            .where(t_auth.c.auth_id == auth_id)
+            .values(challenge=None, verified=1)
+        )
         conn.execute(q)
         conn.close()
-        self.send_success(resp, message='Challenge verified for %s/%s' % (identity, selector))
+        self.send_success(
+            resp, message='Challenge verified for %s/%s' % (identity, selector)
+        )
 
     def auth_delete(self, jdata, resp) -> None:
         msg = jdata.get('msg')
@@ -278,11 +330,15 @@ class SendReceiveListener(object):
             self.send_error(resp, message='Signature validation failed: %s' % ex)
             return
 
-        logger.info('Deleting record for %s/%s with auth_id=%s', identity, selector, auth_id)
+        logger.info(
+            'Deleting record for %s/%s with auth_id=%s', identity, selector, auth_id
+        )
         q = sa.delete(t_auth).where(t_auth.c.auth_id == auth_id)
         conn.execute(q)
         conn.close()
-        self.send_success(resp, message='Record deleted for %s/%s' % (identity, selector))
+        self.send_success(
+            resp, message='Record deleted for %s/%s' % (identity, selector)
+        )
 
     def clean_header(self, hdrval: str) -> str:
         if hdrval is None:
@@ -312,7 +368,11 @@ class SendReceiveListener(object):
                 # Remove any quoted-printable header junk from the name
                 pair = (self.clean_header(pair[0]), pair[1])
             # Work around https://github.com/python/cpython/issues/100900
-            if not pair[0].startswith('=?') and not pair[0].startswith('"') and qspecials.search(pair[0]):
+            if (
+                not pair[0].startswith('=?')
+                and not pair[0].startswith('"')
+                and qspecials.search(pair[0])
+            ):
                 quoted = email.utils.quote(pair[0])
                 addrs.append(f'"{quoted}" <{pair[1]}>')
                 continue
@@ -325,14 +385,21 @@ class SendReceiveListener(object):
         except AttributeError:
             return all([ord(c) < 128 for c in strval])
 
-    def wrap_header(self, hdr, width: int = 75, nl: str = '\r\n', transform: str = 'preserve') -> bytes:
+    def wrap_header(
+        self, hdr, width: int = 75, nl: str = '\r\n', transform: str = 'preserve'
+    ) -> bytes:
         hname, hval = hdr
         if hname.lower() in ('to', 'cc', 'from', 'x-original-from'):
             _parts = [f'{hname}: ']
             first = True
             for addr in email.utils.getaddresses([hval]):
                 if transform == 'encode' and not self.isascii(addr[0]):
-                    addr = (email.quoprimime.header_encode(addr[0].encode(), charset='utf-8'), addr[1])
+                    addr = (
+                        email.quoprimime.header_encode(
+                            addr[0].encode(), charset='utf-8'
+                        ),
+                        addr[1],
+                    )
                     qp = self.format_addrs([addr], clean=False)
                 elif transform == 'decode':
                     qp = self.format_addrs([addr], clean=True)
@@ -359,11 +426,18 @@ class SendReceiveListener(object):
                 # Use simple textwrap, with a small trick that ensures that long non-breakable
                 # strings don't show up on the next line from the bare header
                 hdata = hdata.replace(': ', ':_', 1)
-                wrapped = textwrap.wrap(hdata, break_long_words=False, break_on_hyphens=False,
-                                        subsequent_indent=' ', width=width)
+                wrapped = textwrap.wrap(
+                    hdata,
+                    break_long_words=False,
+                    break_on_hyphens=False,
+                    subsequent_indent=' ',
+                    width=width,
+                )
                 return nl.join(wrapped).replace(':_', ': ', 1).encode()
 
-            qp = f'{hname}: ' + email.quoprimime.header_encode(hval.encode(), charset='utf-8')
+            qp = f'{hname}: ' + email.quoprimime.header_encode(
+                hval.encode(), charset='utf-8'
+            )
             # is it longer than width?
             if len(qp) <= width:
                 return qp.encode()
@@ -375,17 +449,22 @@ class SendReceiveListener(object):
                     # Also allow for the ' ' at the front on continuation lines
                     wrapat -= 1
                 # Make sure we don't break on a =XX escape sequence
-                while '=' in qp[wrapat - 2:wrapat]:
+                while '=' in qp[wrapat - 2 : wrapat]:
                     wrapat -= 1
                 _parts.append(qp[:wrapat] + '?=')
-                qp = ('=?utf-8?q?' + qp[wrapat:])
+                qp = '=?utf-8?q?' + qp[wrapat:]
             _parts.append(qp)
         return f'{nl} '.join(_parts).encode()
 
-    def get_msg_as_bytes(self, msg: email.message.Message, nl: str = '\r\n', headers: str = 'preserve') -> bytes:
+    def get_msg_as_bytes(
+        self, msg: email.message.Message, nl: str = '\r\n', headers: str = 'preserve'
+    ) -> bytes:
         bdata = b''
         for hname, hval in msg.items():
-            bdata += self.wrap_header((hname, str(hval)), nl=nl, transform=headers) + nl.encode()
+            bdata += (
+                self.wrap_header((hname, str(hval)), nl=nl, transform=headers)
+                + nl.encode()
+            )
         bdata += nl.encode()
         payload = msg.get_payload(decode=True)
         for bline in payload.split(b'\n'):
@@ -402,8 +481,13 @@ class SendReceiveListener(object):
             return
         logger.debug('Received a request for %s messages', len(umsgs))
 
-        diffre = re.compile(rb'^(---.*\n\+\+\+|GIT binary patch|diff --git \w/\S+ \w/\S+)', flags=re.M | re.I)
-        diffstatre = re.compile(rb'^\s*\d+ file.*\d+ (insertion|deletion)', flags=re.M | re.I)
+        diffre = re.compile(
+            rb'^(---.*\n\+\+\+|GIT binary patch|diff --git \w/\S+ \w/\S+)',
+            flags=re.M | re.I,
+        )
+        diffstatre = re.compile(
+            rb'^\s*\d+ file.*\d+ (insertion|deletion)', flags=re.M | re.I
+        )
 
         msgs = list()
         conn = self._engine.connect()
@@ -417,7 +501,9 @@ class SendReceiveListener(object):
             try:
                 identity, selector, auth_id = self.validate_message(conn, t_auth, bdata)
             except patatt.NoKeyError:
-                self.send_error(resp, message='No matching key, please complete web auth first.')
+                self.send_error(
+                    resp, message='No matching key, please complete web auth first.'
+                )
                 return
             except Exception as ex:
                 self.send_error(resp, message='Signature validation failed: %s' % ex)
@@ -427,7 +513,10 @@ class SendReceiveListener(object):
             if seenid is None:
                 seenid = auth_id
             elif seenid != auth_id:
-                self.send_error(resp, message='We only support a single signing identity across patch series.')
+                self.send_error(
+                    resp,
+                    message='We only support a single signing identity across patch series.',
+                )
                 return
 
             msg = email.message_from_bytes(bdata, policy=emlpolicy)
@@ -454,8 +543,15 @@ class SendReceiveListener(object):
                 return
 
             # Make sure that From, Date, Subject, and Message-Id headers exist
-            if not msg.get('From') or not msg.get('Date') or not msg.get('Subject') or not msg.get('Message-Id'):
-                self.send_error(resp, message='Message is missing some required headers.')
+            if (
+                not msg.get('From')
+                or not msg.get('Date')
+                or not msg.get('Subject')
+                or not msg.get('Message-Id')
+            ):
+                self.send_error(
+                    resp, message='Message is missing some required headers.'
+                )
                 return
 
             # Make sure that From: matches the validated identity. We allow + expansion,
@@ -463,19 +559,26 @@ class SendReceiveListener(object):
             allfroms = utils.getaddresses([str(x) for x in msg.get_all('from')])
             # Allow only a single From: address
             if len(allfroms) > 1:
-                self.send_error(resp, message='Message may only contain a single From: address.')
+                self.send_error(
+                    resp, message='Message may only contain a single From: address.'
+                )
                 return
 
             fromaddr = allfroms[0][1]
             if validfrom != fromaddr:
                 ldparts = fromaddr.split('@')
                 if len(ldparts) != 2:
-                    self.send_error(resp, message=f'Invalid address in From: {fromaddr}')
+                    self.send_error(
+                        resp, message=f'Invalid address in From: {fromaddr}'
+                    )
                     return
                 lparts = ldparts[0].split('+', maxsplit=1)
                 toval = f'{lparts[0]}@{ldparts[1]}'
                 if toval != identity:
-                    self.send_error(resp, message=f'From header invalid for identity {identity}: {fromaddr}')
+                    self.send_error(
+                        resp,
+                        message=f'From header invalid for identity {identity}: {fromaddr}',
+                    )
                     return
                 # usually, all From: addresses will be the same, so use validfrom as a quick bypass
                 if validfrom is None:
@@ -492,9 +595,15 @@ class SendReceiveListener(object):
                         matched = True
                         break
                 if not matched:
-                    self.send_error(resp, message='Destinations must include a mailing list we recognize.')
+                    self.send_error(
+                        resp,
+                        message='Destinations must include a mailing list we recognize.',
+                    )
                     return
-            msg.add_header('X-Endpoint-Received', f'by {servicename} for {identity}/{selector} with auth_id={auth_id}')
+            msg.add_header(
+                'X-Endpoint-Received',
+                f'by {servicename} for {identity}/{selector} with auth_id={auth_id}',
+            )
             msgs.append((msg, destaddrs))
 
         conn.close()
@@ -512,7 +621,11 @@ class SendReceiveListener(object):
             bccaddrs.update([x[1] for x in utils.getaddresses([_bcc])])
 
         repo = listid = None
-        if 'public-inbox' in self._config and self._config['public-inbox'].get('repo') and not reflect:
+        if (
+            'public-inbox' in self._config
+            and self._config['public-inbox'].get('repo')
+            and not reflect
+        ):
             repo = self._config['public-inbox'].get('repo')
             listid = self._config['public-inbox'].get('listid')
             if not os.path.isdir(repo):
@@ -549,7 +662,9 @@ class SendReceiveListener(object):
                 logger.debug('%s matches mydomain, no substitution required', origaddr)
                 fromaddr = origaddr
             else:
-                logger.debug('%s does not match mydomain, substitution required', origaddr)
+                logger.debug(
+                    '%s does not match mydomain, substitution required', origaddr
+                )
                 # We can't just send this as-is due to DMARC policies. Therefore, we set
                 # Reply-To and X-Original-From.
                 fromaddr = frompair[1]
@@ -580,13 +695,21 @@ class SendReceiveListener(object):
                     if cmsg.get('From') is None:
                         newbody = 'From: ' + self.clean_header(origfrom) + '\n'
                         if cmsg.get('Subject'):
-                            newbody += 'Subject: ' + self.clean_header(cmsg.get('Subject')) + '\n'
+                            newbody += (
+                                'Subject: '
+                                + self.clean_header(cmsg.get('Subject'))
+                                + '\n'
+                            )
                         if cmsg.get('Date'):
-                            newbody += 'Date: ' + self.clean_header(cmsg.get('Date')) + '\n'
+                            newbody += (
+                                'Date: ' + self.clean_header(cmsg.get('Date')) + '\n'
+                            )
                         newbody += '\n' + body.decode()
                         msg.set_payload(newbody, charset='utf-8')
                         # If we have non-ascii content in the new body, force CTE to 8bit
-                        if msg['Content-Transfer-Encoding'] == '7bit' and not all(ord(char) < 128 for char in newbody):
+                        if msg['Content-Transfer-Encoding'] == '7bit' and not all(
+                            ord(char) < 128 for char in newbody
+                        ):
                             msg.set_charset('utf-8')
                             msg.replace_header('Content-Transfer-Encoding', '8bit')
 
@@ -610,8 +733,12 @@ class SendReceiveListener(object):
             # run it once after writing all messages
             logger.debug('Running public-inbox repo hook (if present)')
             ezpi.run_hook(repo)
-        logger.info('%s %s messages for %s/%s', sentaction, len(msgs), identity, selector)
-        self.send_success(resp, message=f'{sentaction} {len(msgs)} messages for {identity}/{selector}')
+        logger.info(
+            '%s %s messages for %s/%s', sentaction, len(msgs), identity, selector
+        )
+        self.send_success(
+            resp, message=f'{sentaction} {len(msgs)} messages for {identity}/{selector}'
+        )
 
     def on_post(self, req, resp):
         if not req.content_length:
@@ -677,6 +804,7 @@ app.add_route(mp, srl)
 
 if __name__ == '__main__':
     from wsgiref.simple_server import make_server
+
     logger.setLevel(logging.DEBUG)
     ch = logging.StreamHandler()
     formatter = logging.Formatter('%(message)s')
diff --git a/pyproject.toml b/pyproject.toml
index 6eb2fbb..0c4f024 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -110,6 +110,9 @@ extend-select = [
 ]
 flake8-quotes.inline-quotes = "single"
 
+[tool.ruff.format]
+quote-style = "single"
+
 [tool.pyright]
 typeCheckingMode = "off"
 
diff --git a/src/b4/__init__.py b/src/b4/__init__.py
index df5c58c..3c1c127 100644
--- a/src/b4/__init__.py
+++ b/src/b4/__init__.py
@@ -61,7 +61,9 @@ ConfigDictT = Dict[str, Union[str, List[str], None]]
 
 charset.add_charset('utf-8', None)
 # Policy we use for saving mail locally
-emlpolicy = email.policy.EmailPolicy(utf8=True, cte_type='8bit', max_line_length=None, message_factory=EmailMessage)
+emlpolicy = email.policy.EmailPolicy(
+    utf8=True, cte_type='8bit', max_line_length=None, message_factory=EmailMessage
+)
 
 # Presence of these characters requires quoting of the name in the header
 # adapted from email._parseaddr
@@ -99,7 +101,9 @@ logging.getLogger('liblore').parent = logger
 
 HUNK_RE = re.compile(r'^@@ -\d+(?:,(\d+))? \+\d+(?:,(\d+))? @@')
 FILENAME_RE = re.compile(r'^(---|\+\+\+) (\S+)')
-DIFF_RE = re.compile(r'^(---.*\n\+\+\+|GIT binary patch|diff --git \w/\S+ \w/\S+)', flags=re.M | re.I)
+DIFF_RE = re.compile(
+    r'^(---.*\n\+\+\+|GIT binary patch|diff --git \w/\S+ \w/\S+)', flags=re.M | re.I
+)
 DIFFSTAT_RE = re.compile(r'^\s*\d+ file.*\d+ (insertion|deletion)', flags=re.M | re.I)
 
 ATT_PASS_SIMPLE = 'v'
@@ -247,19 +251,32 @@ class LoreMailbox:
             ppatch = self.msgid_map[patch.in_reply_to]
             found = False
             while True:
-                if patch.counter == ppatch.counter and patch.expected == ppatch.expected:
-                    logger.debug('Found a previous matching patch in v%s', ppatch.revision)
+                if (
+                    patch.counter == ppatch.counter
+                    and patch.expected == ppatch.expected
+                ):
+                    logger.debug(
+                        'Found a previous matching patch in v%s', ppatch.revision
+                    )
                     found = True
                     break
                 # Do we have another level up?
-                if ppatch.in_reply_to is None or ppatch.in_reply_to not in self.msgid_map:
+                if (
+                    ppatch.in_reply_to is None
+                    or ppatch.in_reply_to not in self.msgid_map
+                ):
                     break
                 ppatch = self.msgid_map[ppatch.in_reply_to]
 
             if not found:
                 sane = False
-                logger.debug('Patch not a reply to a patch with the same counter/expected (%s/%s != %s/%s)',
-                             patch.counter, patch.expected, ppatch.counter, ppatch.expected)
+                logger.debug(
+                    'Patch not a reply to a patch with the same counter/expected (%s/%s != %s/%s)',
+                    patch.counter,
+                    patch.expected,
+                    ppatch.counter,
+                    ppatch.expected,
+                )
                 break
 
         if not sane:
@@ -301,19 +318,22 @@ class LoreMailbox:
         if not q:
             return
         query = ' OR '.join(q)
-        qmsgs = get_pi_search_results(query, message='Looking for additional code-review trailers on %s')
+        qmsgs = get_pi_search_results(
+            query, message='Looking for additional code-review trailers on %s'
+        )
         if not qmsgs:
             logger.debug('No matching code-review messages')
             return
 
         logger.debug('Retrieved %s matching code-review messages', len(qmsgs))
-        patchid_map = map_codereview_trailers(qmsgs, ignore_msgids=set(self.msgid_map.keys()))
+        patchid_map = map_codereview_trailers(
+            qmsgs, ignore_msgids=set(self.msgid_map.keys())
+        )
         for patchid, fmsgs in patchid_map.items():
             if patchid not in self.trailer_map:
                 self.trailer_map[patchid] = list()
             self.trailer_map[patchid] += fmsgs
 
-
     def get_latest_revision(self) -> Optional[int]:
         if not len(self.series):
             return None
@@ -323,9 +343,13 @@ class LoreMailbox:
         revs.sort(key=lambda r: self.series[r].submission_date or 0)
         return revs[-1]
 
-
-    def get_series(self, revision: Optional[int] = None, sloppytrailers: bool = False,
-                   reroll: bool = True, codereview_trailers: bool = True) -> Optional['LoreSeries']:
+    def get_series(
+        self,
+        revision: Optional[int] = None,
+        sloppytrailers: bool = False,
+        reroll: bool = True,
+        codereview_trailers: bool = True,
+    ) -> Optional['LoreSeries']:
         if revision is None:
             if not len(self.series):
                 return None
@@ -360,7 +384,11 @@ class LoreMailbox:
             for member in lser.patches:
                 if member is not None and member.in_reply_to is not None:
                     potential = self.get_by_msgid(member.in_reply_to)
-                    if potential is not None and potential.has_diffstat and not potential.has_diff:
+                    if (
+                        potential is not None
+                        and potential.has_diffstat
+                        and not potential.has_diff
+                    ):
                         # This is *probably* the cover letter
                         lser.patches[0] = potential
                         lser.has_cover = True
@@ -371,7 +399,9 @@ class LoreMailbox:
 
         # Do we have any follow-ups?
         for fmsg in self.followups:
-            logger.debug('Analyzing follow-up: %s (%s)', fmsg.full_subject, fmsg.fromemail)
+            logger.debug(
+                'Analyzing follow-up: %s (%s)', fmsg.full_subject, fmsg.fromemail
+            )
             # If there are no trailers in this one, ignore it
             if not len(fmsg.trailers):
                 logger.debug('  no trailers found, skipping')
@@ -397,7 +427,9 @@ class LoreMailbox:
 
             trailers, mismatches = fmsg.get_trailers(sloppy=sloppytrailers)
             for ltr in mismatches:
-                lser.trailer_mismatches.add((ltr.name, ltr.value, fmsg.fromname, fmsg.fromemail))
+                lser.trailer_mismatches.add(
+                    (ltr.name, ltr.value, fmsg.fromname, fmsg.fromemail)
+                )
             lvl = 1
             while True:
                 logger.debug('%sParent: %s', ' ' * lvl, pmsg.full_subject)
@@ -437,7 +469,9 @@ class LoreMailbox:
         for lmsg in lser.patches:
             if lmsg is None or lmsg.git_patch_id is None:
                 continue
-            logger.debug('  matching patch_id %s from: %s', lmsg.git_patch_id, lmsg.full_subject)
+            logger.debug(
+                '  matching patch_id %s from: %s', lmsg.git_patch_id, lmsg.full_subject
+            )
             if lmsg.git_patch_id in self.trailer_map:
                 for fmsg in self.trailer_map[lmsg.git_patch_id]:
                     logger.debug('  matched: %s', fmsg.msgid)
@@ -449,14 +483,18 @@ class LoreMailbox:
                         if fltr in lmsg.followup_trailers:
                             logger.debug('  identical trailer received for this series')
                             continue
-                        logger.debug('  carrying over the trailer to this series (may be duplicate)')
+                        logger.debug(
+                            '  carrying over the trailer to this series (may be duplicate)'
+                        )
                         logger.debug('  %s', lmsg.full_subject)
                         logger.debug('    + %s', fltr.as_string())
                         if fltr.lmsg:
                             logger.debug('      via: %s', fltr.lmsg.msgid)
                         lmsg.followup_trailers.append(fltr)
                     for fltr in fmis:
-                        lser.trailer_mismatches.add((fltr.name, fltr.value, fmsg.fromname, fmsg.fromemail))
+                        lser.trailer_mismatches.add(
+                            (fltr.name, fltr.value, fmsg.fromname, fmsg.fromemail)
+                        )
 
         return lser
 
@@ -497,7 +535,12 @@ class LoreMailbox:
                         logger.debug('  fixed revision to v%s', irt.revision)
                         lmsg.revision = irt.revision
                     # alternatively, see if upthread is patch 1
-                    elif lmsg.counter > 0 and irt is not None and irt.has_diff and irt.counter == 1:
+                    elif (
+                        lmsg.counter > 0
+                        and irt is not None
+                        and irt.has_diff
+                        and irt.counter == 1
+                    ):
                         logger.debug('  fixed revision to v%s', irt.revision)
                         lmsg.revision = irt.revision
 
@@ -509,14 +552,29 @@ class LoreMailbox:
 
             # Attempt to auto-number series from the same author who did not bother
             # to set v2, v3, etc. in the patch revision
-            if (lmsg.counter == 1 and lmsg.counters_inferred
-                    and not lmsg.reply and lmsg.lsubject.patch and not lmsg.lsubject.resend):
+            if (
+                lmsg.counter == 1
+                and lmsg.counters_inferred
+                and not lmsg.reply
+                and lmsg.lsubject.patch
+                and not lmsg.lsubject.resend
+            ):
                 omsg = self.series[lmsg.revision].patches[lmsg.counter]
-                if (omsg is not None and omsg.counters_inferred and lmsg.fromemail == omsg.fromemail
-                        and omsg.date < lmsg.date):
+                if (
+                    omsg is not None
+                    and omsg.counters_inferred
+                    and lmsg.fromemail == omsg.fromemail
+                    and omsg.date < lmsg.date
+                ):
                     lmsg.revision = len(self.series) + 1
-                    self.series[lmsg.revision] = LoreSeries(lmsg.revision, lmsg.expected)
-                    logger.info('Assuming new revision: v%s (%s)', lmsg.revision, lmsg.full_subject)
+                    self.series[lmsg.revision] = LoreSeries(
+                        lmsg.revision, lmsg.expected
+                    )
+                    logger.info(
+                        'Assuming new revision: v%s (%s)',
+                        lmsg.revision,
+                        lmsg.full_subject,
+                    )
             logger.debug('  adding as patch')
             self.series[lmsg.revision].add_patch(lmsg)
             return
@@ -600,7 +658,6 @@ class LoreSeries:
                 return lmsg
         raise IndexError('No such patch in series')
 
-
     def add_patch(self, lmsg: 'LoreMessage') -> None:
         while len(self.patches) < lmsg.expected + 1:
             self.patches.append(None)
@@ -608,7 +665,9 @@ class LoreSeries:
         omsg = self.patches[lmsg.counter]
         if omsg is not None:
             # Okay, strange, is the one in there a reply?
-            logger.warning('WARNING: duplicate messages found at index %s', lmsg.counter)
+            logger.warning(
+                'WARNING: duplicate messages found at index %s', lmsg.counter
+            )
             logger.warning('   Subject 1: %s', lmsg.subject)
             logger.warning('   Subject 2: %s', omsg.subject)
             if omsg.reply or (omsg.counters_inferred and not lmsg.counters_inferred):
@@ -628,17 +687,28 @@ class LoreSeries:
         if lmsg.counter < 2:
             # Cover letter or first patch
             if not self.base_commit and '\nbase-commit:' in lmsg.body:
-                matches = re.search(r'^base-commit: .*?([\da-f]+)', lmsg.body, flags=re.I | re.M)
+                matches = re.search(
+                    r'^base-commit: .*?([\da-f]+)', lmsg.body, flags=re.I | re.M
+                )
                 if matches:
                     self.base_commit = matches.groups()[0]
             if not self.change_id and '\nchange-id:' in lmsg.body:
-                matches = re.search(r'^change-id:\s+(\S+)', lmsg.body, flags=re.I | re.M)
+                matches = re.search(
+                    r'^change-id:\s+(\S+)', lmsg.body, flags=re.I | re.M
+                )
                 if matches:
                     self.change_id = matches.groups()[0]
             if not self.prereq_patch_ids and '\nprerequisite-patch-id:' in lmsg.body:
-                self.prereq_patch_ids = re.findall(r'^prerequisite-patch-id:\s+(\S+)', lmsg.body, flags=re.I | re.M)
-            if not self.prereq_base_commit and '\nprerequisite-base-commit:' in lmsg.body:
-                matches = re.search(r'^prerequisite-base-id:\s+(\S+)', lmsg.body, flags=re.I | re.M)
+                self.prereq_patch_ids = re.findall(
+                    r'^prerequisite-patch-id:\s+(\S+)', lmsg.body, flags=re.I | re.M
+                )
+            if (
+                not self.prereq_base_commit
+                and '\nprerequisite-base-commit:' in lmsg.body
+            ):
+                matches = re.search(
+                    r'^prerequisite-base-id:\s+(\S+)', lmsg.body, flags=re.I | re.M
+                )
                 if matches:
                     self.prereq_base_commit = matches.groups()[0]
 
@@ -666,8 +736,9 @@ class LoreSeries:
         msg['Subject'] = new_subject
 
     @staticmethod
-    def identify_cover_letter(all_msgs: List[EmailMessage],
-                              msgids: List[str]) -> Tuple[Optional[str], List[EmailMessage]]:
+    def identify_cover_letter(
+        all_msgs: List[EmailMessage], msgids: List[str]
+    ) -> Tuple[Optional[str], List[EmailMessage]]:
         """Identify the cover letter and patch messages among the user-specified msgids.
 
         Scans the messages matching the given msgids for one with an explicit
@@ -723,8 +794,9 @@ class LoreSeries:
                 LoreSeries.rewrite_subject_counter(msg, i, num_patches)
 
     @staticmethod
-    def rethread_messages(all_msgs: List[EmailMessage], cover_msgid: str,
-                          patch_msgids: Set[str]) -> None:
+    def rethread_messages(
+        all_msgs: List[EmailMessage], cover_msgid: str, patch_msgids: Set[str]
+    ) -> None:
         """Rewrite threading headers so all top-level patches are children of the cover.
 
         The cover letter has its In-Reply-To and References stripped (it
@@ -750,8 +822,9 @@ class LoreSeries:
                 msg['References'] = f'<{cover_msgid}>'
 
     @staticmethod
-    def rethread_series(msgids: List[str],
-                        all_msgs: List[EmailMessage]) -> Tuple[str, List[EmailMessage]]:
+    def rethread_series(
+        msgids: List[str], all_msgs: List[EmailMessage]
+    ) -> Tuple[str, List[EmailMessage]]:
         """Reconstitute a properly threaded series from unthreaded messages.
 
         Runs the full rethread pipeline: identify a cover letter (or use
@@ -808,7 +881,9 @@ class LoreSeries:
             return 'undefined'
 
         prefix = lmsg.date.strftime('%Y%m%d')
-        authorline = email.utils.getaddresses([str(x) for x in lmsg.msg.get_all('from', [])])[0]
+        authorline = email.utils.getaddresses(
+            [str(x) for x in lmsg.msg.get_all('from', [])]
+        )[0]
         if extended:
             local = authorline[1].split('@')[0]
             unsafe = '%s_%s_%s' % (prefix, local, lmsg.subject)
@@ -832,16 +907,25 @@ class LoreSeries:
         if self.patches[0] and self.patches[0].followup_trailers:
             self.add_extra_trailers(self.patches[0].followup_trailers)
 
-    def get_am_ready(self, noaddtrailers: bool = False, addmysob: bool = False,
-                     addlink: bool = False, cherrypick: Optional[List[int]] = None, copyccs: bool = False,
-                     allowbadchars: bool = False, showchecks: bool = False) -> List[EmailMessage]:
+    def get_am_ready(
+        self,
+        noaddtrailers: bool = False,
+        addmysob: bool = False,
+        addlink: bool = False,
+        cherrypick: Optional[List[int]] = None,
+        copyccs: bool = False,
+        allowbadchars: bool = False,
+        showchecks: bool = False,
+    ) -> List[EmailMessage]:
 
         usercfg = get_user_config()
         config = get_main_config()
 
         if addmysob:
             if 'name' not in usercfg or 'email' not in usercfg:
-                logger.critical('WARNING: Unable to add your Signed-off-by: git returned no user.name or user.email')
+                logger.critical(
+                    'WARNING: Unable to add your Signed-off-by: git returned no user.name or user.email'
+                )
                 addmysob = False
 
         attpolicy = str(config.get('attestation-policy', 'softfail'))
@@ -864,7 +948,9 @@ class LoreSeries:
                     attsame = False
                     break
 
-                checkmark, trailers, attcrit = lmsg.get_attestation_trailers(attpolicy, maxdays)
+                checkmark, trailers, attcrit = lmsg.get_attestation_trailers(
+                    attpolicy, maxdays
+                )
                 if attref is None:
                     attref = trailers
                     attmark = checkmark
@@ -905,7 +991,10 @@ class LoreSeries:
                     # TODO: Progress bar?
                     lmsg.load_pw_ci_status()
                     if not lmsg.pw_ci_status or lmsg.pw_ci_status == 'pending':
-                        logger.debug('No CI on patch %s, skipping the rest of the checks', lmsg.counter)
+                        logger.debug(
+                            'No CI on patch %s, skipping the rest of the checks',
+                            lmsg.counter,
+                        )
                         lmsg.pw_ci_status = None
 
         self.add_cover_trailers()
@@ -917,10 +1006,16 @@ class LoreSeries:
             if cherrypick is not None:
                 if at not in cherrypick:
                     at += 1
-                    logger.debug('  skipped: [%s/%s] (not in cherrypick)', at, self.expected)
+                    logger.debug(
+                        '  skipped: [%s/%s] (not in cherrypick)', at, self.expected
+                    )
                     continue
                 if lmsg is None:
-                    logger.critical('CRITICAL: [%s/%s] is missing, cannot cherrypick', at, self.expected)
+                    logger.critical(
+                        'CRITICAL: [%s/%s] is missing, cannot cherrypick',
+                        at,
+                        self.expected,
+                    )
                     raise KeyError('Cherrypick not in series')
 
             if lmsg is not None:
@@ -935,16 +1030,24 @@ class LoreSeries:
                             llval = lparts[1].strip() % lmsg.msgid
                             linktrailer = LoreTrailer(name=llname, value=llval)
                         else:
-                            logger.critical('linktrailermask does not look like a valid trailer, using defaults')
+                            logger.critical(
+                                'linktrailermask does not look like a valid trailer, using defaults'
+                            )
 
                     if not linktrailer:
                         defmask = LOREADDR + '/r/%s'
                         cfg_llval = config.get('linkmask', defmask)
                         if isinstance(cfg_llval, str) and '%s' in cfg_llval:
-                            linktrailer = LoreTrailer(name='Link', value=cfg_llval % lmsg.msgid)
+                            linktrailer = LoreTrailer(
+                                name='Link', value=cfg_llval % lmsg.msgid
+                            )
                         else:
-                            logger.critical('linkmask does not look like a valid mask, using defaults')
-                            linktrailer = LoreTrailer(name='Link', value=defmask % lmsg.msgid)
+                            logger.critical(
+                                'linkmask does not look like a valid mask, using defaults'
+                            )
+                            linktrailer = LoreTrailer(
+                                name='Link', value=defmask % lmsg.msgid
+                            )
 
                     extras.append(linktrailer)
 
@@ -955,7 +1058,9 @@ class LoreSeries:
                         logger.info('  %s', lmsg.get_am_subject())
 
                 else:
-                    checkmark, trailers, critical = lmsg.get_attestation_trailers(attpolicy, maxdays)
+                    checkmark, trailers, critical = lmsg.get_attestation_trailers(
+                        attpolicy, maxdays
+                    )
                     if checkmark:
                         logger.info('  %s %s', checkmark, lmsg.get_am_subject())
                     else:
@@ -966,6 +1071,7 @@ class LoreSeries:
 
                     if critical:
                         import sys
+
                         logger.critical('---')
                         logger.critical('Exiting due to attestation-policy: hardfail')
                         sys.exit(128)
@@ -973,11 +1079,20 @@ class LoreSeries:
                 add_trailers = True
                 if noaddtrailers:
                     add_trailers = False
-                msg = lmsg.get_am_message(add_trailers=add_trailers, extras=extras, copyccs=copyccs,
-                                          addmysob=addmysob, allowbadchars=allowbadchars)
+                msg = lmsg.get_am_message(
+                    add_trailers=add_trailers,
+                    extras=extras,
+                    copyccs=copyccs,
+                    addmysob=addmysob,
+                    allowbadchars=allowbadchars,
+                )
                 if local_check_cmds:
                     lmsg.load_local_ci_status(local_check_cmds)
-                if lmsg.local_ci_status or lmsg.pw_ci_status in {'success', 'fail', 'warning'}:
+                if lmsg.local_ci_status or lmsg.pw_ci_status in {
+                    'success',
+                    'fail',
+                    'warning',
+                }:
                     if lmsg.local_ci_status:
                         for flag, status in lmsg.local_ci_status:
                             logger.info('    %s %s', CI_FLAGS_FANCY[flag], status)
@@ -985,8 +1100,12 @@ class LoreSeries:
                     pwproj = config.get('pw-project')
                     if lmsg.pw_ci_status in {'fail', 'warning'}:
                         pwlink = f'{pwurl}/project/{pwproj}/patch/{lmsg.msgid}'
-                        logger.info('    %s patchwork: %s: %s', CI_FLAGS_FANCY[lmsg.pw_ci_status],
-                                    str(lmsg.pw_ci_status).upper(), pwlink)
+                        logger.info(
+                            '    %s patchwork: %s: %s',
+                            CI_FLAGS_FANCY[lmsg.pw_ci_status],
+                            str(lmsg.pw_ci_status).upper(),
+                            pwlink,
+                        )
                 msgs.append(msg)
             else:
                 logger.error('  ERROR: missing [%s/%s]!', at, self.expected)
@@ -1002,7 +1121,6 @@ class LoreSeries:
 
         return msgs
 
-
     @property
     def submission_date(self) -> Optional[datetime.datetime]:
         # Find the date of the first patch we have
@@ -1015,7 +1133,6 @@ class LoreSeries:
             break
         return self._submission_date
 
-
     @property
     def indexes(self) -> List[Tuple[str, str]]:
         if self._indexes is not None:
@@ -1026,8 +1143,15 @@ class LoreSeries:
             if lmsg is None or not lmsg.blob_indexes:
                 continue
             for ofn, obh, nfn, fmod in lmsg.blob_indexes:
-                logger.debug('%s/%s: ofn=%s, obh=%s, nfn=%s, fmod=%s',
-                             lmsg.counter, lmsg.expected, ofn, obh, nfn, fmod)
+                logger.debug(
+                    '%s/%s: ofn=%s, obh=%s, nfn=%s, fmod=%s',
+                    lmsg.counter,
+                    lmsg.expected,
+                    ofn,
+                    obh,
+                    nfn,
+                    fmod,
+                )
                 if ofn in seenfiles:
                     # if we have seen this file once already, then it's a repeat patch
                     # it's no longer going to match current hash
@@ -1039,8 +1163,9 @@ class LoreSeries:
                 self._indexes.append((ofn, obh))
         return self._indexes
 
-    def check_applies_clean(self, gitdir: Optional[str] = None,
-                            at: Optional[str] = None) -> Tuple[int, List[Tuple[str, str]]]:
+    def check_applies_clean(
+        self, gitdir: Optional[str] = None, at: Optional[str] = None
+    ) -> Tuple[int, List[Tuple[str, str]]]:
         mismatches = list()
         if at is None:
             at = 'HEAD'
@@ -1059,7 +1184,12 @@ class LoreSeries:
 
         return len(self.indexes), mismatches
 
-    def find_base(self, gitdir: Optional[str], branches: Optional[List[str]] = None, maxdays: int = 30) -> Tuple[str, int, int]:
+    def find_base(
+        self,
+        gitdir: Optional[str],
+        branches: Optional[List[str]] = None,
+        maxdays: int = 30,
+    ) -> Tuple[str, int, int]:
         if self.indexes is None:
             self.populate_indexes()
         if self.indexes is None or not len(self.indexes):
@@ -1076,7 +1206,13 @@ class LoreSeries:
         else:
             where = ['--all']
 
-        gitargs = ['log', '--pretty=oneline', '--until', guntil, '--max-count=1'] + where
+        gitargs = [
+            'log',
+            '--pretty=oneline',
+            '--until',
+            guntil,
+            '--max-count=1',
+        ] + where
         lines = git_get_command_lines(gitdir, gitargs)
         if not lines:
             raise IndexError('No commits found before %s' % guntil)
@@ -1090,8 +1226,16 @@ class LoreSeries:
             best = commit
             for fn, bi in mismatches:
                 logger.debug('Finding tree matching %s=%s in %s', fn, bi, where)
-                gitargs = ['log', '--pretty=oneline', '--since', gsince, '--until', guntil,
-                           '--find-object', bi] + where
+                gitargs = [
+                    'log',
+                    '--pretty=oneline',
+                    '--since',
+                    gsince,
+                    '--until',
+                    guntil,
+                    '--find-object',
+                    bi,
+                ] + where
                 lines = git_get_command_lines(gitdir, gitargs)
                 if not lines:
                     logger.debug('Could not find object %s in the tree', bi)
@@ -1127,8 +1271,9 @@ class LoreSeries:
 
         raise IndexError('Could not describe commit %s' % best)
 
-    def make_fake_am_range(self, gitdir: Optional[str],
-                           at_base: Optional[str] = None) -> Tuple[Optional[str], Optional[str]]:
+    def make_fake_am_range(
+        self, gitdir: Optional[str], at_base: Optional[str] = None
+    ) -> Tuple[Optional[str], Optional[str]]:
         start_commit = end_commit = None
         # Use the msgid of the first non-None patch in the series
         msgid = None
@@ -1149,7 +1294,9 @@ class LoreSeries:
                 stalecache = True
             if start_commit is not None and end_commit is not None:
                 # Make sure they are still there
-                if git_commit_exists(gitdir, start_commit) and git_commit_exists(gitdir, end_commit):
+                if git_commit_exists(gitdir, start_commit) and git_commit_exists(
+                    gitdir, end_commit
+                ):
                     logger.debug('Using previously generated range')
                     return start_commit, end_commit
                 stalecache = True
@@ -1175,7 +1322,10 @@ class LoreSeries:
             seenfiles = set()
             for lmsg in self.patches[1:]:
                 if lmsg is None:
-                    logger.critical('ERROR: v%s series incomplete; unable to create a fake-am range', self.revision)
+                    logger.critical(
+                        'ERROR: v%s series incomplete; unable to create a fake-am range',
+                        self.revision,
+                    )
                     return None, None
 
                 logger.debug('Looking at %s', lmsg.full_subject)
@@ -1203,19 +1353,37 @@ class LoreSeries:
                     try:
                         ohash = git_revparse_obj(ofi)
                         logger.debug('  Found matching blob for: %s', ofn)
-                        gitargs = ['update-index', '--add', '--cacheinfo', f'{fmod},{ohash},{ofn}']
+                        gitargs = [
+                            'update-index',
+                            '--add',
+                            '--cacheinfo',
+                            f'{fmod},{ohash},{ofn}',
+                        ]
                     except RuntimeError:
-                        logger.debug('Could not find matching blob for %s (%s)', ofn, ofi)
+                        logger.debug(
+                            'Could not find matching blob for %s (%s)', ofn, ofi
+                        )
                         try:
                             chash = git_revparse_obj(f':{ofn}', topdir)
-                            gitargs = ['update-index', '--add', '--cacheinfo', f'{fmod},{chash},{ofn}']
+                            gitargs = [
+                                'update-index',
+                                '--add',
+                                '--cacheinfo',
+                                f'{fmod},{chash},{ofn}',
+                            ]
                         except RuntimeError:
-                            logger.critical('  ERROR: Could not find anything matching %s', ofn)
+                            logger.critical(
+                                '  ERROR: Could not find anything matching %s', ofn
+                            )
                             return None, None
 
                     ecode, out = git_run_command(None, gitargs)
                     if ecode > 0:
-                        logger.critical('  ERROR: Could not run update-index for %s (%s)', ofn, ohash)
+                        logger.critical(
+                            '  ERROR: Could not run update-index for %s (%s)',
+                            ofn,
+                            ohash,
+                        )
                         return None, None
 
                 msgs.append(lmsg.get_am_message(add_trailers=False))
@@ -1227,7 +1395,9 @@ class LoreSeries:
             treeid = out.strip()
             # At this point we have a worktree with files that should (hopefully) cleanly receive a git am
             gitargs = ['commit-tree', treeid + '^{tree}', '-F', '-']
-            ecode, out = git_run_command(None, gitargs, stdin='Initial fake commit'.encode('utf-8'))
+            ecode, out = git_run_command(
+                None, gitargs, stdin='Initial fake commit'.encode('utf-8')
+            )
             if ecode > 0:
                 logger.critical('ERROR: Could not commit-tree')
                 return None, None
@@ -1271,11 +1441,25 @@ class LoreTrailer:
     addr: Optional[Tuple[str, str]] = None
     lmsg: Optional['LoreMessage'] = None
     # Small list of recognized utility trailers
-    _utility: Set[str] = {'fixes', 'link', 'buglink', 'closes', 'obsoleted-by', 'message-id', 'change-id',
-                          'base-commit', 'based-on'}
+    _utility: Set[str] = {
+        'fixes',
+        'link',
+        'buglink',
+        'closes',
+        'obsoleted-by',
+        'message-id',
+        'change-id',
+        'base-commit',
+        'based-on',
+    }
 
-    def __init__(self, name: Optional[str] = None, value: Optional[str] = None, extinfo: Optional[str] = None,
-                 msg: Optional[EmailMessage] = None):
+    def __init__(
+        self,
+        name: Optional[str] = None,
+        value: Optional[str] = None,
+        extinfo: Optional[str] = None,
+        msg: Optional[EmailMessage] = None,
+    ):
         if name is None or value is None:
             self.name = 'Signed-off-by'
             self.type = 'person'
@@ -1363,8 +1547,9 @@ class LoreTrailer:
         if olocal != tlocal:
             return False
 
-        return (abs(odomain.count('.') - tdomain.count('.')) == 1
-                and (odomain.endswith(f'.{tdomain}') or tdomain.endswith(f'.{odomain}')))
+        return abs(odomain.count('.') - tdomain.count('.')) == 1 and (
+            odomain.endswith(f'.{tdomain}') or tdomain.endswith(f'.{odomain}')
+        )
 
     @staticmethod
     def _extract_link_msgid(url: str) -> Optional[str]:
@@ -1526,8 +1711,9 @@ class LoreMessage:
             self.references.append(self.in_reply_to)
 
         try:
-            fromdata = email.utils.getaddresses([LoreMessage.clean_header(str(x))
-                                                 for x in self.msg.get_all('from', [])])[0]
+            fromdata = email.utils.getaddresses(
+                [LoreMessage.clean_header(str(x)) for x in self.msg.get_all('from', [])]
+            )[0]
             self.fromname = fromdata[0]
             self.fromemail = fromdata[1]
             if not len(self.fromname.strip()):
@@ -1572,19 +1758,33 @@ class LoreMessage:
         trailers, _others = LoreMessage.find_trailers(self.body, followup=True)
         # We only pay attention to trailers that are sent in reply
         if trailers and self.references and not self.has_diff and not self.reply:
-            logger.debug('A follow-up missing a Re: but containing a trailer with no patch diff')
+            logger.debug(
+                'A follow-up missing a Re: but containing a trailer with no patch diff'
+            )
             self.reply = True
         if self.reply:
             for trailer in trailers:
                 # These are commonly part of patch/commit metadata
-                badtrailers = {'from', 'author', 'cc', 'to', 'date', 'subject',
-                               'subscribe', 'unsubscribe', 'base-commit', 'change-id',
-                               'message-id'}
+                badtrailers = {
+                    'from',
+                    'author',
+                    'cc',
+                    'to',
+                    'date',
+                    'subject',
+                    'subscribe',
+                    'unsubscribe',
+                    'base-commit',
+                    'change-id',
+                    'message-id',
+                }
                 if trailer.lname not in badtrailers:
                     trailer.lmsg = self
                     self.trailers.append(trailer)
 
-    def get_trailers(self, sloppy: bool = False) -> Tuple[List[LoreTrailer], Set[LoreTrailer]]:
+    def get_trailers(
+        self, sloppy: bool = False
+    ) -> Tuple[List[LoreTrailer], Set[LoreTrailer]]:
         trailers = list()
         mismatches = set()
 
@@ -1679,10 +1879,10 @@ class LoreMessage:
         # Identify all DKIM-Signature headers and try them in reverse order
         # until we come to a passing one
         dkhdrs = list()
-        for header in list(self.msg._headers): # type: ignore[attr-defined]
+        for header in list(self.msg._headers):  # type: ignore[attr-defined]
             if header[0].lower() == 'dkim-signature':
                 dkhdrs.append(header)
-                self.msg._headers.remove(header) # type: ignore[attr-defined]
+                self.msg._headers.remove(header)  # type: ignore[attr-defined]
         dkhdrs.reverse()
 
         seenatts = list()
@@ -1693,7 +1893,11 @@ class LoreMessage:
                 hval = str(email.header.make_header(email.header.decode_header(hval)))
             errors = list()
             hdata = LoreMessage.get_parts_from_header(hval)
-            logger.debug('Loading DKIM attestation for d=%s, s=%s', hdata.get('d'), hdata.get('s'))
+            logger.debug(
+                'Loading DKIM attestation for d=%s, s=%s',
+                hdata.get('d'),
+                hdata.get('s'),
+            )
 
             identity = hdata.get('i')
             if not identity:
@@ -1712,9 +1916,11 @@ class LoreMessage:
                 if isinstance(sh, str) and 'date' in sh.lower().split(':'):
                     signtime = self.date
 
-            self.msg._headers.append((hn, hval)) # type: ignore[attr-defined]
+            self.msg._headers.append((hn, hval))  # type: ignore[attr-defined]
             try:
-                res = dkim.verify(self.msg.as_bytes(policy=emlpolicy), logger=dkimlogger)
+                res = dkim.verify(
+                    self.msg.as_bytes(policy=emlpolicy), logger=dkimlogger
+                )
                 logger.debug('DKIM verify results: %s=%s', identity, res)
             except Exception as ex:
                 # Usually, this is due to some DNS resolver failure, which we can't
@@ -1729,7 +1935,7 @@ class LoreMessage:
                 self._attestors.append(attestor)
                 return
 
-            self.msg._headers.pop(-1) # type: ignore[attr-defined]
+            self.msg._headers.pop(-1)  # type: ignore[attr-defined]
             seenatts.append(attestor)
 
         # No exact domain matches, so return everything we have
@@ -1756,7 +1962,10 @@ class LoreMessage:
             if i.get('Subject') != self.subject:
                 ibh.append('Subject: %s' % str(i.get('Subject')))
             if i.get('Email') != self.fromemail or i.get('Author') != self.fromname:
-                ibh.append('From: ' + format_addrs([(str(i.get('Author')), str(i.get('Email')))]))
+                ibh.append(
+                    'From: '
+                    + format_addrs([(str(i.get('Author')), str(i.get('Email')))])
+                )
             if len(ibh):
                 self.body = '\n'.join(ibh) + '\n\n' + self.body
 
@@ -1771,10 +1980,16 @@ class LoreMessage:
         sources = config.get('keyringsrc')
         if not sources:
             # fallback to patatt's keyring if none is specified for b4
-            patatt_config = patatt.get_config_from_git(r'patatt\..*', multivals=['keyringsrc'])
+            patatt_config = patatt.get_config_from_git(
+                r'patatt\..*', multivals=['keyringsrc']
+            )
             sources = patatt_config.get('keyringsrc')
             if not sources:
-                sources = ['ref:::.keys', 'ref:::.local-keys', 'ref::refs/meta/keyring:']
+                sources = [
+                    'ref:::.keys',
+                    'ref:::.local-keys',
+                    'ref::refs/meta/keyring:',
+                ]
         if not isinstance(sources, list):
             sources = [sources]
         if pdir not in sources:
@@ -1782,8 +1997,9 @@ class LoreMessage:
 
         # Push our logger and GPGBIN into patatt
         patatt.logger = logger
-        assert isinstance(config['gpgbin'], str), \
+        assert isinstance(config['gpgbin'], str), (
             'gpgbin config value is not a string: %s' % str(config['gpgbin'])
+        )
         patatt.GPGBIN = config['gpgbin']
 
         logger.debug('Loading patatt attestations with sources=%s', str(sources))
@@ -1791,7 +2007,9 @@ class LoreMessage:
         success = False
         trim_body = False
         while True:
-            attestations = patatt.validate_message(self.msg.as_bytes(policy=emlpolicy), sources, trim_body=trim_body)
+            attestations = patatt.validate_message(
+                self.msg.as_bytes(policy=emlpolicy), sources, trim_body=trim_body
+            )
             # Do we have any successes?
             for attestation in attestations:
                 if attestation[0] == patatt.RES_VALID:
@@ -1826,18 +2044,22 @@ class LoreMessage:
                 signdt = LoreAttestor.parse_ts(signtime)
             else:
                 signdt = None
-            attestor = LoreAttestorPatatt(result, identity, signdt, keysrc, keyalgo, errors)
+            attestor = LoreAttestorPatatt(
+                result, identity, signdt, keysrc, keyalgo, errors
+            )
             self._attestors.append(attestor)
 
     @staticmethod
-    def run_local_check(cmdargs: List[str], ident: str, msg: EmailMessage,
-                        nocache: bool = False) -> List[Tuple[str, str]]:
+    def run_local_check(
+        cmdargs: List[str], ident: str, msg: EmailMessage, nocache: bool = False
+    ) -> List[Tuple[str, str]]:
         cacheid = ' '.join(cmdargs) + ident
         if not nocache:
             cachedata = get_cache(cacheid, suffix='checks', as_json=True)
             if cachedata is not None:
-                assert isinstance(cachedata, list), \
+                assert isinstance(cachedata, list), (
                     'Cache data for %s is not a list: %s' % (cacheid, str(cachedata))
+                )
                 return cachedata
 
         logger.debug('Checking ident=%s using %s', ident, cmdargs[0])
@@ -1888,13 +2110,16 @@ class LoreMessage:
         pwurl = str(config.get('pw-url', ''))
         pwproj = str(config.get('pw-project', ''))
         if not (pwkey and pwurl and pwproj):
-            logger.debug('Patchwork support requires pw-key, pw-url and pw-project settings')
+            logger.debug(
+                'Patchwork support requires pw-key, pw-url and pw-project settings'
+            )
             raise LookupError('Error looking up %s in patchwork' % msgid)
 
         cachedata = get_cache(pwurl + pwproj + msgid, suffix='lookup', as_json=True)
         if cachedata is not None:
-            assert isinstance(cachedata, dict), \
+            assert isinstance(cachedata, dict), (
                 'Cache data for %s is not a dict: %s' % (msgid, str(cachedata))
+            )
             return cachedata
 
         pses, url = get_patchwork_session(pwkey, pwurl)
@@ -1926,7 +2151,7 @@ class LoreMessage:
         save_cache(pwdata, pwurl + pwproj + msgid, suffix='lookup', is_json=True)
         return pwdata
 
-    def get_patchwork_info(self) -> Optional[Dict[str,str]]:
+    def get_patchwork_info(self) -> Optional[Dict[str, str]]:
         if not self.pwhash:
             return None
         try:
@@ -1945,7 +2170,9 @@ class LoreMessage:
         logger.debug('ci_state for %s: %s', self.msgid, ci_status)
         self.pw_ci_status = ci_status
 
-    def get_attestation_status(self, attpolicy: str, maxdays: int = 0) -> Tuple[List[Dict[str, Any]], bool, bool]:
+    def get_attestation_status(
+        self, attpolicy: str, maxdays: int = 0
+    ) -> Tuple[List[Dict[str, Any]], bool, bool]:
         """Get attestation status for this message.
 
         Args:
@@ -1968,7 +2195,11 @@ class LoreMessage:
         critical = False
 
         for attestor in self.attestors:
-            if attestor.passing and maxdays and not attestor.check_time_drift(self.date, maxdays):
+            if (
+                attestor.passing
+                and maxdays
+                and not attestor.check_time_drift(self.date, maxdays)
+            ):
                 logger.debug('The time drift is too much, marking as non-passing')
                 attestor.passing = False
 
@@ -1978,27 +2209,33 @@ class LoreMessage:
                     if attestor.have_key:
                         # This was signed, and we have a key, but it's failing
                         has_failing = True
-                        attestations.append({
-                            'status': 'badsig',
-                            'identity': attestor.trailer,
-                            'passing': False,
-                        })
+                        attestations.append(
+                            {
+                                'status': 'badsig',
+                                'identity': attestor.trailer,
+                                'passing': False,
+                            }
+                        )
                     elif attpolicy in ('softfail', 'hardfail'):
                         has_failing = True
-                        attestations.append({
-                            'status': 'nokey',
-                            'identity': attestor.trailer,
-                            'passing': False,
-                        })
+                        attestations.append(
+                            {
+                                'status': 'nokey',
+                                'identity': attestor.trailer,
+                                'passing': False,
+                            }
+                        )
                         # This is not critical even in hardfail
                         continue
                 elif attpolicy in ('softfail', 'hardfail'):
                     has_failing = True
-                    attestations.append({
-                        'status': 'badsig',
-                        'identity': attestor.trailer,
-                        'passing': False,
-                    })
+                    attestations.append(
+                        {
+                            'status': 'badsig',
+                            'identity': attestor.trailer,
+                            'passing': False,
+                        }
+                    )
 
                 if attpolicy == 'hardfail':
                     critical = True
@@ -2018,9 +2255,12 @@ class LoreMessage:
                             self.fromname = xpair[0]
                             self.fromemail = xpair[1]
                             # Drop the reply-to header if it's exactly the same
-                            for header in list(self.msg._headers): # type: ignore[attr-defined]
-                                if header[0].lower() == 'reply-to' and header[1].find(xpair[1]) > 0:
-                                    self.msg._headers.remove(header) # type: ignore[attr-defined]
+                            for header in list(self.msg._headers):  # type: ignore[attr-defined]
+                                if (
+                                    header[0].lower() == 'reply-to'
+                                    and header[1].find(xpair[1]) > 0
+                                ):
+                                    self.msg._headers.remove(header)  # type: ignore[attr-defined]
 
                 has_passing = True
                 att_info: Dict[str, Any] = {
@@ -2037,7 +2277,9 @@ class LoreMessage:
         overall_passing = not has_failing or has_passing
         return attestations, overall_passing, critical
 
-    def get_attestation_trailers(self, attpolicy: str, maxdays: int = 0) -> Tuple[Optional[str], List[str], bool]:
+    def get_attestation_trailers(
+        self, attpolicy: str, maxdays: int = 0
+    ) -> Tuple[Optional[str], List[str], bool]:
         """Get formatted attestation trailers with checkmarks for display.
 
         Args:
@@ -2050,7 +2292,9 @@ class LoreMessage:
             - trailers: List of formatted trailer strings with checkmarks
             - critical: True if hardfail policy triggered
         """
-        attestations, _overall_passing, critical = self.get_attestation_status(attpolicy, maxdays)
+        attestations, _overall_passing, critical = self.get_attestation_status(
+            attpolicy, maxdays
+        )
 
         config = get_main_config()
         if config['attestation-checkmarks'] == 'fancy':
@@ -2069,7 +2313,9 @@ class LoreMessage:
                 if checkmark is None:
                     checkmark = mark
                 if 'mismatch' in att:
-                    trailers.append(f'{mark} Signed: {att["identity"]} (From: {att["mismatch"]})')
+                    trailers.append(
+                        f'{mark} Signed: {att["identity"]} (From: {att["mismatch"]})'
+                    )
                 else:
                     trailers.append(f'{mark} Signed: {att["identity"]}')
             else:
@@ -2200,9 +2446,10 @@ class LoreMessage:
         return new_hdrval.strip()
 
     @staticmethod
-    def make_reply_addrs(to_addrs: List[Tuple[str, str]],
-                         cc_addrs: List[Tuple[str, str]],
-                         ) -> Tuple[List[Tuple[str, str]], List[Tuple[str, str]]]:
+    def make_reply_addrs(
+        to_addrs: List[Tuple[str, str]],
+        cc_addrs: List[Tuple[str, str]],
+    ) -> Tuple[List[Tuple[str, str]], List[Tuple[str, str]]]:
         """Deduplicate To and Cc address lists for a reply.
 
         Removes duplicates within To, then removes any Cc entries that
@@ -2224,8 +2471,9 @@ class LoreMessage:
                 deduped_cc.append((name, addr))
         return deduped_to, deduped_cc
 
-    def make_reply(self, body: str,
-                   mailfrom: Optional[Tuple[str, str]] = None) -> EmailMessage:
+    def make_reply(
+        self, body: str, mailfrom: Optional[Tuple[str, str]] = None
+    ) -> EmailMessage:
         """Build a reply EmailMessage addressing this message.
 
         Handles Reply-To → To promotion, folds the original To into Cc,
@@ -2242,7 +2490,9 @@ class LoreMessage:
             subject = f'Re: {subject}'
 
         try:
-            reply_to = email.utils.getaddresses([str(x) for x in self.msg.get_all('reply-to', [])])
+            reply_to = email.utils.getaddresses(
+                [str(x) for x in self.msg.get_all('reply-to', [])]
+            )
         except Exception:
             reply_to = []
         if reply_to:
@@ -2251,15 +2501,21 @@ class LoreMessage:
             to_addrs = [(self.fromname, self.fromemail)]
 
         try:
-            orig_to = email.utils.getaddresses([str(x) for x in self.msg.get_all('to', [])])
+            orig_to = email.utils.getaddresses(
+                [str(x) for x in self.msg.get_all('to', [])]
+            )
         except Exception:
             orig_to = []
         try:
-            orig_cc = email.utils.getaddresses([str(x) for x in self.msg.get_all('cc', [])])
+            orig_cc = email.utils.getaddresses(
+                [str(x) for x in self.msg.get_all('cc', [])]
+            )
         except Exception:
             orig_cc = []
 
-        deduped_to, deduped_cc = LoreMessage.make_reply_addrs(to_addrs, orig_to + orig_cc)
+        deduped_to, deduped_cc = LoreMessage.make_reply_addrs(
+            to_addrs, orig_to + orig_cc
+        )
 
         msg = EmailMessage()
         msg.set_payload(body, charset='utf-8')
@@ -2276,15 +2532,24 @@ class LoreMessage:
         return msg
 
     @staticmethod
-    def wrap_header(hdr: Tuple[str, str], width: int = 75, nl: str = '\n',
-                    transform: Literal['encode', 'decode', 'preserve'] = 'preserve') -> bytes:
+    def wrap_header(
+        hdr: Tuple[str, str],
+        width: int = 75,
+        nl: str = '\n',
+        transform: Literal['encode', 'decode', 'preserve'] = 'preserve',
+    ) -> bytes:
         hname, hval = hdr
         if hname.lower() in ('to', 'cc', 'from', 'x-original-from'):
             _parts = [f'{hname}: ']
             first = True
             for addr in email.utils.getaddresses([hval]):
                 if transform == 'encode' and not addr[0].isascii():
-                    addr = (email.quoprimime.header_encode(addr[0].encode(), charset='utf-8'), addr[1])
+                    addr = (
+                        email.quoprimime.header_encode(
+                            addr[0].encode(), charset='utf-8'
+                        ),
+                        addr[1],
+                    )
                     qp = format_addrs([addr], clean=False)
                 elif transform == 'decode':
                     qp = format_addrs([addr], clean=True)
@@ -2311,11 +2576,18 @@ class LoreMessage:
                 # Use simple textwrap, with a small trick that ensures that long non-breakable
                 # strings don't show up on the next line from the bare header
                 hdata = hdata.replace(': ', ':_', 1)
-                wrapped = textwrap.wrap(hdata, break_long_words=False, break_on_hyphens=False,
-                                        subsequent_indent=' ', width=width)
+                wrapped = textwrap.wrap(
+                    hdata,
+                    break_long_words=False,
+                    break_on_hyphens=False,
+                    subsequent_indent=' ',
+                    width=width,
+                )
                 return nl.join(wrapped).replace(':_', ': ', 1).encode()
 
-            qp = f'{hname}: ' + email.quoprimime.header_encode(hval.encode(), charset='utf-8')
+            qp = f'{hname}: ' + email.quoprimime.header_encode(
+                hval.encode(), charset='utf-8'
+            )
             # is it longer than width?
             if len(qp) <= width:
                 return qp.encode()
@@ -2327,19 +2599,25 @@ class LoreMessage:
                     # Also allow for the ' ' at the front on continuation lines
                     wrapat -= 1
                 # Make sure we don't break on a =XX escape sequence
-                while '=' in qp[wrapat - 2:wrapat]:
+                while '=' in qp[wrapat - 2 : wrapat]:
                     wrapat -= 1
                 _parts.append(qp[:wrapat] + '?=')
-                qp = ('=?utf-8?q?' + qp[wrapat:])
+                qp = '=?utf-8?q?' + qp[wrapat:]
             _parts.append(qp)
         return f'{nl} '.join(_parts).encode()
 
     @staticmethod
-    def get_msg_as_bytes(msg: EmailMessage, nl: str = '\n',
-                         headers: Literal['encode', 'decode', 'preserve'] = 'preserve') -> bytes:
+    def get_msg_as_bytes(
+        msg: EmailMessage,
+        nl: str = '\n',
+        headers: Literal['encode', 'decode', 'preserve'] = 'preserve',
+    ) -> bytes:
         bdata = b''
         for hname, hval in msg.items():
-            bdata += LoreMessage.wrap_header((hname, str(hval)), nl=nl, transform=headers) + nl.encode()
+            bdata += (
+                LoreMessage.wrap_header((hname, str(hval)), nl=nl, transform=headers)
+                + nl.encode()
+            )
         bdata += nl.encode()
         payload = msg.get_payload(decode=True)
         if not isinstance(payload, bytes):
@@ -2371,7 +2649,9 @@ class LoreMessage:
         listidpref = config['listid-preference']
         if not isinstance(listidpref, list):
             listidpref = [str(listidpref)]
-        return liblore.utils.get_preferred_duplicate(msg1, msg2, listid_preference=listidpref)
+        return liblore.utils.get_preferred_duplicate(
+            msg1, msg2, listid_preference=listidpref
+        )
 
     @staticmethod
     def get_patch_id(diff: str) -> Optional[str]:
@@ -2455,17 +2735,28 @@ class LoreMessage:
         return indexes
 
     @staticmethod
-    def find_trailers(body: str, followup: bool = False) -> Tuple[List[LoreTrailer], List[str]]:
+    def find_trailers(
+        body: str, followup: bool = False
+    ) -> Tuple[List[LoreTrailer], List[str]]:
         ignores = {'phone', 'mail', 'email', 'e-mail', 'prerequisite-message-id'}
         headers = {'subject', 'date', 'from'}
         links = {'link', 'buglink', 'closes'}
-        nonperson = links | {'fixes', 'subject', 'date', 'obsoleted-by', 'change-id', 'base-commit'}
+        nonperson = links | {
+            'fixes',
+            'subject',
+            'date',
+            'obsoleted-by',
+            'change-id',
+            'base-commit',
+        }
         # Ignore everything below standard email signature marker
         body = body.split('\n-- \n', 1)[0].strip() + '\n'
         # Fix some more common copypasta trailer wrapping
         # Fixes: abcd0123 (foo bar
         # baz quux)
-        body = re.sub(r'^(\S+:\s+[\da-f]+\s+\([^)]+)\n([^\n]+\))', r'\1 \2', body, flags=re.M)
+        body = re.sub(
+            r'^(\S+:\s+[\da-f]+\s+\([^)]+)\n([^\n]+\))', r'\1 \2', body, flags=re.M
+        )
         # Signed-off-by: Long Name
         # <email.here@example.com>
         body = re.sub(r'^(\S+:\s+[^<]+)\n(<[^>]+>)$', r'\1 \2', body, flags=re.M)
@@ -2490,19 +2781,31 @@ class LoreMessage:
                     logger.debug('Ignoring %d: %s (known non-trailer)', at, line)
                     continue
                 if len(others) and lname in headers:
-                    logger.debug('Ignoring %d: %s (header after other content)', at, line)
+                    logger.debug(
+                        'Ignoring %d: %s (header after other content)', at, line
+                    )
                     continue
                 if followup:
                     if not lname.isascii():
-                        logger.debug('Ignoring %d: %s (known non-ascii follow-up trailer)', at, lname)
+                        logger.debug(
+                            'Ignoring %d: %s (known non-ascii follow-up trailer)',
+                            at,
+                            lname,
+                        )
                         continue
                     mperson = re.search(r'\S+@\S+\.\S+', ovalue)
                     if not mperson and lname not in nonperson:
-                        logger.debug('Ignoring %d: %s (not a recognized non-person trailer)', at, line)
+                        logger.debug(
+                            'Ignoring %d: %s (not a recognized non-person trailer)',
+                            at,
+                            line,
+                        )
                         continue
                     mlink = re.search(r'https?://', ovalue)
                     if mlink and lname not in links:
-                        logger.debug('Ignoring %d: %s (not a recognized link trailer)', at, line)
+                        logger.debug(
+                            'Ignoring %d: %s (not a recognized link trailer)', at, line
+                        )
                         continue
 
                 extinfo = None
@@ -2531,8 +2834,13 @@ class LoreMessage:
         return trailers, others
 
     @staticmethod
-    def rebuild_message(headers: List[LoreTrailer], message: str, trailers: List[LoreTrailer],
-                        basement: str, signature: str) -> str:
+    def rebuild_message(
+        headers: List[LoreTrailer],
+        message: str,
+        trailers: List[LoreTrailer],
+        basement: str,
+        signature: str,
+    ) -> str:
         body = ''
         if headers:
             for ltr in headers:
@@ -2551,8 +2859,10 @@ class LoreMessage:
         if len(basement):
             if not len(trailers):
                 body += '\n'
-            if (DIFFSTAT_RE.search(basement)
-                    or not (basement.strip().startswith('diff --git') or basement.lstrip().startswith('--- '))):
+            if DIFFSTAT_RE.search(basement) or not (
+                basement.strip().startswith('diff --git')
+                or basement.lstrip().startswith('--- ')
+            ):
                 body += '---\n'
             else:
                 # We don't need to add a ---
@@ -2566,7 +2876,9 @@ class LoreMessage:
         return body
 
     @staticmethod
-    def get_body_parts(body: str) -> Tuple[List[LoreTrailer], str, List[LoreTrailer], str, str]:
+    def get_body_parts(
+        body: str,
+    ) -> Tuple[List[LoreTrailer], str, List[LoreTrailer], str, str]:
         # remove any starting/trailing blank lines
         body = body.replace('\r', '')
         body = body.strip('\n')
@@ -2640,14 +2952,20 @@ class LoreMessage:
 
         return githeaders, message, trailers, basement, signature
 
-    def fix_trailers(self, extras: Optional[List[LoreTrailer]] = None,
-                     copyccs: bool = False, addmysob: bool = False,
-                     fallback_order: str = '*',
-                     omit_trailers: Optional[List[str]] = None) -> None:
+    def fix_trailers(
+        self,
+        extras: Optional[List[LoreTrailer]] = None,
+        copyccs: bool = False,
+        addmysob: bool = False,
+        fallback_order: str = '*',
+        omit_trailers: Optional[List[str]] = None,
+    ) -> None:
 
         config = get_main_config()
 
-        bheaders, message, btrailers, basement, signature = LoreMessage.get_body_parts(self.body)
+        bheaders, message, btrailers, basement, signature = LoreMessage.get_body_parts(
+            self.body
+        )
 
         sobtr = LoreTrailer()
         hasmysob = False
@@ -2666,10 +2984,20 @@ class LoreMessage:
             addmysob = True
 
         if copyccs:
-            alldests = email.utils.getaddresses([str(x) for x in self.msg.get_all('to', [])])
-            alldests += email.utils.getaddresses([str(x) for x in self.msg.get_all('cc', [])])
+            alldests = email.utils.getaddresses(
+                [str(x) for x in self.msg.get_all('to', [])]
+            )
+            alldests += email.utils.getaddresses(
+                [str(x) for x in self.msg.get_all('cc', [])]
+            )
             # Sort by domain name, then local
-            alldests.sort(key=lambda x: x[1].find('@') > 0 and x[1].split('@')[1] + x[1].split('@')[0] or x[1])
+            alldests.sort(
+                key=lambda x: (
+                    x[1].find('@') > 0
+                    and x[1].split('@')[1] + x[1].split('@')[0]
+                    or x[1]
+                )
+            )
             for pair in alldests:
                 found = False
                 for fltr in btrailers + new_trailers:
@@ -2687,7 +3015,9 @@ class LoreMessage:
 
         torder = config.get('trailer-order', fallback_order)
         if not isinstance(torder, str):
-            logger.critical('b4.trailer-order must be a string, falling back to default')
+            logger.critical(
+                'b4.trailer-order must be a string, falling back to default'
+            )
             torder = fallback_order
 
         if torder and torder != '*':
@@ -2727,7 +3057,9 @@ class LoreMessage:
             if ltr in fixtrailers or ltr in ignored:
                 continue
 
-            if (ltr.addr and ltr.addr[1].lower() in ignores) or (ltr.lmsg and ltr.lmsg.fromemail.lower() in ignores):
+            if (ltr.addr and ltr.addr[1].lower() in ignores) or (
+                ltr.lmsg and ltr.lmsg.fromemail.lower() in ignores
+            ):
                 logger.info('    x %s', ltr.as_string(omit_extinfo=True))
                 ignored.add(ltr)
                 continue
@@ -2742,8 +3074,11 @@ class LoreMessage:
                         extra = ' (%s %s)' % (attestor.checkmark, attestor.trailer)
                         if attpolicy == 'hardfail':
                             import sys
+
                             logger.critical('---')
-                            logger.critical('Exiting due to attestation-policy: hardfail')
+                            logger.critical(
+                                'Exiting due to attestation-policy: hardfail'
+                            )
                             sys.exit(1)
 
                 logger.info('    + %s%s', ltr.as_string(omit_extinfo=True), extra)
@@ -2784,9 +3119,13 @@ class LoreMessage:
         if bparts:
             self.message += '---\n' + '---\n'.join(bparts)
 
-        self.body = LoreMessage.rebuild_message(bheaders, message, fixtrailers, basement, signature)
+        self.body = LoreMessage.rebuild_message(
+            bheaders, message, fixtrailers, basement, signature
+        )
 
-    def get_am_subject(self, indicate_reroll: bool = True, use_subject: Optional[str] = None) -> str:
+    def get_am_subject(
+        self, indicate_reroll: bool = True, use_subject: Optional[str] = None
+    ) -> str:
         # Return a clean patch subject
         parts = ['PATCH']
         if self.lsubject.rfc:
@@ -2794,9 +3133,14 @@ class LoreMessage:
         if self.reroll_from_revision:
             if indicate_reroll:
                 if self.reroll_from_revision != self.revision:
-                    parts.append('v%d->v%d' % (self.reroll_from_revision, self.revision))
+                    parts.append(
+                        'v%d->v%d' % (self.reroll_from_revision, self.revision)
+                    )
                 else:
-                    parts.append(' %s  v%d' % (' ' * len(str(self.reroll_from_revision)), self.revision))
+                    parts.append(
+                        ' %s  v%d'
+                        % (' ' * len(str(self.reroll_from_revision)), self.revision)
+                    )
             else:
                 parts.append('v%d' % self.revision)
         elif not self.revision_inferred:
@@ -2809,14 +3153,25 @@ class LoreMessage:
 
         return '[%s] %s' % (' '.join(parts), use_subject)
 
-    def get_am_message(self, add_trailers: bool = True, addmysob: bool = False,
-                       extras: Optional[List['LoreTrailer']] = None, copyccs: bool = False,
-                       allowbadchars: bool = False) -> EmailMessage:
+    def get_am_message(
+        self,
+        add_trailers: bool = True,
+        addmysob: bool = False,
+        extras: Optional[List['LoreTrailer']] = None,
+        copyccs: bool = False,
+        allowbadchars: bool = False,
+    ) -> EmailMessage:
         # Look through the body to make sure there aren't any suspicious unicode control flow chars
         # First, encode into ascii and compare for a quick utf8 presence test
-        if not allowbadchars and self.body.encode('ascii', errors='replace') != self.body.encode():
+        if (
+            not allowbadchars
+            and self.body.encode('ascii', errors='replace') != self.body.encode()
+        ):
             import unicodedata
-            logger.debug('Body contains non-ascii characters. Running Unicode Cf char tests.')
+
+            logger.debug(
+                'Body contains non-ascii characters. Running Unicode Cf char tests.'
+            )
             for line in self.body.split('\n'):
                 # Does this line have any unicode?
                 if line.encode() == line.encode('ascii', errors='replace'):
@@ -2829,12 +3184,20 @@ class LoreMessage:
                     for at, c in enumerate(line.rstrip('\r')):
                         if unicodedata.category(c) == 'Cf':
                             logger.critical('---')
-                            logger.critical('WARNING: Message contains suspicious unicode control characters!')
+                            logger.critical(
+                                'WARNING: Message contains suspicious unicode control characters!'
+                            )
                             logger.critical('         Subject: %s', self.full_subject)
                             logger.critical('            Line: %s', line.rstrip('\r'))
                             logger.critical('            ------%s^', '-' * at)
-                            logger.critical('            Char: %s (%s)', unicodedata.name(c), hex(ord(c)))
-                            logger.critical('         If you are sure about this, rerun with the right flag to allow.')
+                            logger.critical(
+                                '            Char: %s (%s)',
+                                unicodedata.name(c),
+                                hex(ord(c)),
+                            )
+                            logger.critical(
+                                '         If you are sure about this, rerun with the right flag to allow.'
+                            )
                             sys.exit(1)
 
         # Remove anything cut off by scissors
@@ -2854,7 +3217,10 @@ class LoreMessage:
 
         am_msg = EmailMessage()
         hfrom = format_addrs([(str(i.get('Author', '')), str(i.get('Email')))])
-        am_msg.add_header('Subject', self.get_am_subject(indicate_reroll=False, use_subject=i.get('Subject')))
+        am_msg.add_header(
+            'Subject',
+            self.get_am_subject(indicate_reroll=False, use_subject=i.get('Subject')),
+        )
         am_msg.add_header('From', hfrom)
         am_msg.add_header('Date', str(i.get('Date')))
         am_msg.add_header('Message-Id', f'<{self.msgid}>')
@@ -2893,7 +3259,9 @@ class LoreSubject:
         self.full_subject = subject
 
         # Is it a reply?
-        if re.search(r'^(Re|Aw|Fwd):', subject, re.I) or re.search(r'^\w{2,3}:\s*\[', subject):
+        if re.search(r'^(Re|Aw|Fwd):', subject, re.I) or re.search(
+            r'^\w{2,3}:\s*\[', subject
+        ):
             subject = re.sub(r'^\w{2,3}:\s*\[', '[', subject)
             self.reply = True
 
@@ -2956,8 +3324,9 @@ class LoreSubject:
 
         return ret
 
-    def get_rebuilt_subject(self, eprefixes: Optional[List[str]] = None,
-                            presubject: Optional[str] = None) -> str:
+    def get_rebuilt_subject(
+        self, eprefixes: Optional[List[str]] = None, presubject: Optional[str] = None
+    ) -> str:
 
         exclude = None
         if eprefixes and 'PATCH' in eprefixes:
@@ -2972,9 +3341,12 @@ class LoreSubject:
         if self.revision > 1:
             _pfx.append(f'v{self.revision}')
         if self.expected > 1:
-            _pfx.append('%s/%s' % (str(self.counter).zfill(len(str(self.expected))), self.expected))
+            _pfx.append(
+                '%s/%s'
+                % (str(self.counter).zfill(len(str(self.expected))), self.expected)
+            )
 
-        subject = ""
+        subject = ''
         if len(_pfx):
             subject = '[' + ' '.join(_pfx) + '] ' + self.subject
         else:
@@ -3077,15 +3449,27 @@ class LoreAttestor:
 
         if self.level == 'domain':
             if emlfrom.lower().endswith('@' + self.identity.lower()):
-                logger.debug('PASS : sig domain %s matches from identity %s', self.identity, emlfrom)
+                logger.debug(
+                    'PASS : sig domain %s matches from identity %s',
+                    self.identity,
+                    emlfrom,
+                )
                 return True
-            self.errors.append('signing domain %s does not match From: %s' % (self.identity, emlfrom))
+            self.errors.append(
+                'signing domain %s does not match From: %s' % (self.identity, emlfrom)
+            )
             return False
 
         if emlfrom.lower() == self.identity.lower():
-            logger.debug('PASS : sig identity %s matches from identity %s', self.identity, emlfrom)
+            logger.debug(
+                'PASS : sig identity %s matches from identity %s',
+                self.identity,
+                emlfrom,
+            )
             return True
-        self.errors.append('signing identity %s does not match From: %s' % (self.identity, emlfrom))
+        self.errors.append(
+            'signing identity %s does not match From: %s' % (self.identity, emlfrom)
+        )
         return False
 
     @staticmethod
@@ -3113,7 +3497,13 @@ class LoreAttestor:
 
 
 class LoreAttestorDKIM(LoreAttestor):
-    def __init__(self, passing: bool, identity: str, signtime: Optional[datetime.datetime], errors: List[str]) -> None:
+    def __init__(
+        self,
+        passing: bool,
+        identity: str,
+        signtime: Optional[datetime.datetime],
+        errors: List[str],
+    ) -> None:
         super().__init__()
         self.mode = 'DKIM'
         self.level = 'domain'
@@ -3128,12 +3518,15 @@ class LoreAttestorDKIM(LoreAttestor):
 
 
 class LoreAttestorPatatt(LoreAttestor):
-    def __init__(self, result: int,
-                 identity: Optional[str],
-                 signtime: Optional[datetime.datetime],
-                 keysrc: Optional[str],
-                 keyalgo: Optional[str],
-                 errors: List[str]) -> None:
+    def __init__(
+        self,
+        result: int,
+        identity: Optional[str],
+        signtime: Optional[datetime.datetime],
+        keysrc: Optional[str],
+        keyalgo: Optional[str],
+        errors: List[str],
+    ) -> None:
         super().__init__()
         self.mode = 'patatt'
         self.level = 'person'
@@ -3149,8 +3542,9 @@ class LoreAttestorPatatt(LoreAttestor):
             self.have_key = True
 
 
-def _run_command(cmdargs: List[str], stdin: Optional[bytes] = None,
-                 rundir: Optional[str] = None) -> Tuple[int, bytes, bytes]:
+def _run_command(
+    cmdargs: List[str], stdin: Optional[bytes] = None, rundir: Optional[str] = None
+) -> Tuple[int, bytes, bytes]:
     if rundir:
         logger.debug('Changing dir to %s', rundir)
         curdir = os.getcwd()
@@ -3159,7 +3553,9 @@ def _run_command(cmdargs: List[str], stdin: Optional[bytes] = None,
         curdir = None
 
     logger.debug('Running %s', ' '.join(cmdargs))
-    sp = subprocess.Popen(cmdargs, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)
+    sp = subprocess.Popen(
+        cmdargs, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE
+    )
     (output, error) = sp.communicate(input=stdin)
     if curdir:
         logger.debug('Changing back into %s', curdir)
@@ -3168,7 +3564,9 @@ def _run_command(cmdargs: List[str], stdin: Optional[bytes] = None,
     return sp.returncode, output, error
 
 
-def gpg_run_command(args: List[str], stdin: Optional[bytes] = None) -> Tuple[int, bytes, bytes]:
+def gpg_run_command(
+    args: List[str], stdin: Optional[bytes] = None
+) -> Tuple[int, bytes, bytes]:
     config = get_main_config()
     gpgbin = config.get('gpgbin', 'gpg')
     if not isinstance(gpgbin, str):
@@ -3183,22 +3581,38 @@ def gpg_run_command(args: List[str], stdin: Optional[bytes] = None) -> Tuple[int
 
 
 @overload
-def git_run_command(gitdir: Optional[Union[str, Path]], args: List[str], stdin: Optional[bytes] = ...,
-                    *, logstderr: bool = ..., decode: Literal[False],
-                    rundir: Optional[str] = ...) -> Tuple[int, bytes]:
-    ...
+def git_run_command(
+    gitdir: Optional[Union[str, Path]],
+    args: List[str],
+    stdin: Optional[bytes] = ...,
+    *,
+    logstderr: bool = ...,
+    decode: Literal[False],
+    rundir: Optional[str] = ...,
+) -> Tuple[int, bytes]: ...
 
 
 @overload
-def git_run_command(gitdir: Optional[Union[str, Path]], args: List[str], stdin: Optional[bytes] = ...,
-                    *, logstderr: bool = ..., decode: Literal[True] = ...,
-                    rundir: Optional[str] = ...) -> Tuple[int, str]:
-    ...
-
-
-def git_run_command(gitdir: Optional[Union[str, Path]], args: List[str], stdin: Optional[bytes] = None,
-                    *, logstderr: bool = False, decode: bool = True,
-                    rundir: Optional[str] = None) -> Tuple[int, Union[str, bytes]]:
+def git_run_command(
+    gitdir: Optional[Union[str, Path]],
+    args: List[str],
+    stdin: Optional[bytes] = ...,
+    *,
+    logstderr: bool = ...,
+    decode: Literal[True] = ...,
+    rundir: Optional[str] = ...,
+) -> Tuple[int, str]: ...
+
+
+def git_run_command(
+    gitdir: Optional[Union[str, Path]],
+    args: List[str],
+    stdin: Optional[bytes] = None,
+    *,
+    logstderr: bool = False,
+    decode: bool = True,
+    rundir: Optional[str] = None,
+) -> Tuple[int, Union[str, bytes]]:
     cmdargs = ['git', '--no-pager']
     if gitdir:
         if os.path.exists(os.path.join(gitdir, '.git')):
@@ -3228,13 +3642,16 @@ def git_run_command(gitdir: Optional[Union[str, Path]], args: List[str], stdin:
 
 
 def git_check_minimal_version(min_version: str) -> bool:
-    _ecode, out = git_run_command(None, ["version"])
-    current_version = re.sub(r"git version (\d+\.\d+)\..*", r"\1", out)
-    return tuple(map(int, current_version.split(".")[:2])) >= tuple(map(int, min_version.split(".")[:2]))
-
+    _ecode, out = git_run_command(None, ['version'])
+    current_version = re.sub(r'git version (\d+\.\d+)\..*', r'\1', out)
+    return tuple(map(int, current_version.split('.')[:2])) >= tuple(
+        map(int, min_version.split('.')[:2])
+    )
 
 
-def git_credential_fill(gitdir: Optional[str], protocol: str, host: str, username: str) -> Optional[str]:
+def git_credential_fill(
+    gitdir: Optional[str], protocol: str, host: str, username: str
+) -> Optional[str]:
     stdin = f'protocol={protocol}\nhost={host}\nusername={username}\n'.encode()
     ecode, out = git_run_command(gitdir, args=['credential', 'fill'], stdin=stdin)
     if ecode == 0:
@@ -3258,7 +3675,9 @@ def git_get_command_lines(gitdir: Optional[str], args: List[str]) -> List[str]:
     return lines
 
 
-def git_get_repo_status(gitdir: Optional[str] = None, untracked: bool = False) -> List[str]:
+def git_get_repo_status(
+    gitdir: Optional[str] = None, untracked: bool = False
+) -> List[str]:
     args = ['status', '--porcelain=v1']
     if not untracked:
         args.append('--untracked-files=no')
@@ -3266,7 +3685,9 @@ def git_get_repo_status(gitdir: Optional[str] = None, untracked: bool = False) -
 
 
 @contextmanager
-def git_temp_worktree(gitdir: Optional[str] = None, commitish: Optional[str] = None) -> Generator[str, None, None]:
+def git_temp_worktree(
+    gitdir: Optional[str] = None, commitish: Optional[str] = None
+) -> Generator[str, None, None]:
     """Context manager that creates a temporary work tree and chdirs into it. The
     worktree is deleted when the context manager is closed. Taken from gj_tools."""
     dfn = None
@@ -3323,7 +3744,9 @@ def setup_config(cmdargs: argparse.Namespace) -> None:
     _setup_sendemail_config(cmdargs)
 
 
-def _cmdline_config_override(cmdargs: argparse.Namespace, config: Dict[str, Any], section: str) -> None:
+def _cmdline_config_override(
+    cmdargs: argparse.Namespace, config: Dict[str, Any], section: str
+) -> None:
     """Use cmdline.config to set and override config values for section."""
     if not cmdargs.config:
         return
@@ -3331,7 +3754,7 @@ def _cmdline_config_override(cmdargs: argparse.Namespace, config: Dict[str, Any]
     section += '.'
 
     config_override = {
-        key[len(section):]: val
+        key[len(section) :]: val
         for key, val in cmdargs.config.items()
         if key.startswith(section)
     }
@@ -3339,14 +3762,20 @@ def _cmdline_config_override(cmdargs: argparse.Namespace, config: Dict[str, Any]
     config.update(config_override)
 
 
-def git_set_config(fullpath: Optional[str], param: str, value: str, operation: str = '--replace-all') -> int:
+def git_set_config(
+    fullpath: Optional[str], param: str, value: str, operation: str = '--replace-all'
+) -> int:
     args = ['config', operation, param, value]
     ecode, _out = git_run_command(fullpath, args)
     return ecode
 
 
-def get_config_from_git(regexp: str, defaults: Optional[Dict[str, Any]] = None,
-                        multivals: Optional[List[str]] = None, source: Optional[str] = None) -> Dict[str, Any]:
+def get_config_from_git(
+    regexp: str,
+    defaults: Optional[Dict[str, Any]] = None,
+    multivals: Optional[List[str]] = None,
+    source: Optional[str] = None,
+) -> Dict[str, Any]:
     if multivals is None:
         multivals = list()
     args = ['config']
@@ -3396,10 +3825,23 @@ def _setup_main_config(cmdargs: Optional[argparse.Namespace] = None) -> None:
     # some options can be provided via the toplevel .b4-config file,
     # so load them up and use as defaults
     topdir = git_get_toplevel()
-    wtglobs = ['prep-*-check-cmd', 'review-*-check-cmd', 'send-*', '*mask', '*template*', 'trailer*', 'pw-*']
-    multivals = ['keyringsrc', 'am-perpatch-check-cmd', 'prep-perpatch-check-cmd',
-                  'review-perpatch-check-cmd', 'review-series-check-cmd',
-                  'review-target-branch']
+    wtglobs = [
+        'prep-*-check-cmd',
+        'review-*-check-cmd',
+        'send-*',
+        '*mask',
+        '*template*',
+        'trailer*',
+        'pw-*',
+    ]
+    multivals = [
+        'keyringsrc',
+        'am-perpatch-check-cmd',
+        'prep-perpatch-check-cmd',
+        'review-perpatch-check-cmd',
+        'review-series-check-cmd',
+        'review-target-branch',
+    ]
     if topdir:
         wtcfg = os.path.join(topdir, '.b4-config')
         if os.access(wtcfg, os.R_OK):
@@ -3427,10 +3869,13 @@ def _setup_main_config(cmdargs: Optional[argparse.Namespace] = None) -> None:
     # If we specify DNS resolvers, configure them now
     if config['attestation-dns-resolvers'] is not None:
         try:
-            resolvers = [x.strip() for x in config['attestation-dns-resolvers'].split(',')]
+            resolvers = [
+                x.strip() for x in config['attestation-dns-resolvers'].split(',')
+            ]
             if resolvers:
                 # Don't force this as an automatically discovered dependency
                 import dns.resolver
+
                 dns.resolver.default_resolver = dns.resolver.Resolver(configure=False)
                 dns.resolver.default_resolver.nameservers = resolvers
         except ImportError:
@@ -3472,7 +3917,10 @@ def get_cache_dir(appname: str = 'b4') -> str:
     try:
         expmin = int(str(config['cache-expire'])) * 60
     except ValueError:
-        logger.critical('ERROR: cache-expire must be an integer (minutes): %s', config['cache-expire'])
+        logger.critical(
+            'ERROR: cache-expire must be an integer (minutes): %s',
+            config['cache-expire'],
+        )
         expmin = 600
     expage = time.time() - expmin
     # Expire anything else that is older than 30 days
@@ -3503,7 +3951,9 @@ def get_cache_file(identifier: str, suffix: Optional[str] = None) -> str:
     return os.path.join(cachedir, cachefile)
 
 
-def get_cache(identifier: str, suffix: Optional[str] = None, as_json: bool = False) -> Optional[Any]:
+def get_cache(
+    identifier: str, suffix: Optional[str] = None, as_json: bool = False
+) -> Optional[Any]:
     fullpath = get_cache_file(identifier, suffix=suffix)
     cachedata = None
     try:
@@ -3530,7 +3980,9 @@ def clear_cache(identifier: str, suffix: Optional[str] = None) -> None:
         logger.debug('Removed cache %s for %s', fullpath, identifier)
 
 
-def save_cache(contents: Any, identifier: str, suffix: Optional[str] = None, is_json: bool = False) -> None:
+def save_cache(
+    contents: Any, identifier: str, suffix: Optional[str] = None, is_json: bool = False
+) -> None:
     fullpath = get_cache_file(identifier, suffix=suffix)
     try:
         with open(fullpath, 'w') as fh:
@@ -3554,7 +4006,7 @@ def _setup_user_config(cmdargs: argparse.Namespace) -> None:
             USER_CONFIG['name'] = os.environ['GIT_AUTHOR_NAME']
         else:
             udata = pwd.getpwuid(os.getuid())
-            USER_CONFIG['name'] = udata.pw_gecos.strip(",")
+            USER_CONFIG['name'] = udata.pw_gecos.strip(',')
     if 'email' not in USER_CONFIG:
         if 'GIT_COMMITTER_EMAIL' in os.environ:
             USER_CONFIG['email'] = os.environ['GIT_COMMITTER_EMAIL']
@@ -3613,8 +4065,9 @@ def get_lore_node() -> liblore.LoreNode:
 
 def get_msgid_from_stdin() -> Optional[str]:
     if not sys.stdin.isatty():
-        message = email.parser.BytesParser(policy=emlpolicy, _class=EmailMessage).parsebytes(
-            sys.stdin.buffer.read(), headersonly=True)
+        message = email.parser.BytesParser(
+            policy=emlpolicy, _class=EmailMessage
+        ).parsebytes(sys.stdin.buffer.read(), headersonly=True)
         msgid = message.get('Message-ID', None)
         if msgid:
             return str(msgid)
@@ -3626,7 +4079,9 @@ def parse_msgid(msgid: str) -> str:
     msgid = msgid.strip().strip('<>')
     # Handle the case when someone pastes a full URL to the message
     # Is this a patchwork URL?
-    matches = re.search(r'^https?://.*/project/.*/patch/([^/]+@[^/]+)', msgid, re.IGNORECASE)
+    matches = re.search(
+        r'^https?://.*/project/.*/patch/([^/]+@[^/]+)', msgid, re.IGNORECASE
+    )
     if matches:
         logger.debug('Looks like a patchwork URL')
         chunks = matches.groups()
@@ -3671,7 +4126,9 @@ def get_msgid(cmdargs: argparse.Namespace) -> Optional[str]:
     return parse_msgid(msgid)
 
 
-def get_strict_thread(msgs: List[EmailMessage], msgid: str, noparent: bool = False) -> Optional[List[EmailMessage]]:
+def get_strict_thread(
+    msgs: List[EmailMessage], msgid: str, noparent: bool = False
+) -> Optional[List[EmailMessage]]:
     # Attempt to automatically recognize the situation when someone posts
     # a standalone patch or series in the middle of a large discussion for another series.
     # We recommend dealing with this using --no-parent, but we can also catch this
@@ -3711,26 +4168,36 @@ def mailsplit_bytes(bmbox: bytes, pipesep: Optional[str] = None) -> List[EmailMe
         logger.debug('Mailsplitting using pipesep=%s', pipesep)
         if '\\' in pipesep:
             import codecs
+
             pipesep = codecs.decode(pipesep.encode(), 'unicode_escape')
         msgs: List[EmailMessage] = []
         for chunk in bmbox.split(pipesep.encode()):
             if chunk.strip():
-                msgs.append(email.parser.BytesParser(policy=emlpolicy,
-                                                     _class=EmailMessage).parsebytes(chunk))
+                msgs.append(
+                    email.parser.BytesParser(
+                        policy=emlpolicy, _class=EmailMessage
+                    ).parsebytes(chunk)
+                )
         return msgs
 
     return liblore.utils.split_mbox(bmbox)
 
 
-def get_pi_search_results(query: str, nocache: bool = False, message: Optional[str] = None,
-                          full_threads: bool = True) -> Optional[List[EmailMessage]]:
+def get_pi_search_results(
+    query: str,
+    nocache: bool = False,
+    message: Optional[str] = None,
+    full_threads: bool = True,
+) -> Optional[List[EmailMessage]]:
     node = get_lore_node()
     if message is not None and len(message):
         logger.info(message, node.hostname)
     else:
         logger.info('Grabbing search results from %s', node.hostname)
     try:
-        t_mbox = node.get_mbox_by_query(query, full_threads=full_threads, nocache=nocache)
+        t_mbox = node.get_mbox_by_query(
+            query, full_threads=full_threads, nocache=nocache
+        )
     except liblore.RemoteError:
         logger.info('Server returned an error.')
         return None
@@ -3761,7 +4228,9 @@ def get_series_by_msgid(msgid: str, nocache: bool = False) -> Optional['LoreMail
     return lmbx
 
 
-def get_series_by_change_id(change_id: str, nocache: bool = False) -> Optional['LoreMailbox']:
+def get_series_by_change_id(
+    change_id: str, nocache: bool = False
+) -> Optional['LoreMailbox']:
     q = f'nq:"change-id:{change_id}"'
     q_msgs = get_pi_search_results(q, nocache=nocache, full_threads=False)
     if not q_msgs:
@@ -3770,11 +4239,15 @@ def get_series_by_change_id(change_id: str, nocache: bool = False) -> Optional['
     for q_msg in q_msgs:
         body, _bcharset = LoreMessage.get_payload(q_msg)
         if not re.search(rf'^\s*change-id:\s*{change_id}$', body, flags=re.M | re.I):
-            logger.debug('No change-id match for %s', q_msg.get('Subject', '(no subject)'))
+            logger.debug(
+                'No change-id match for %s', q_msg.get('Subject', '(no subject)')
+            )
             continue
         q_msgid = LoreMessage.get_clean_msgid(q_msg)
         if q_msgid is None:
-            logger.debug('No message-id found, ignoring %s', q_msg.get('Subject', '(no subject)'))
+            logger.debug(
+                'No message-id found, ignoring %s', q_msg.get('Subject', '(no subject)')
+            )
             continue
         t_msgs = get_pi_thread_by_msgid(q_msgid, nocache=nocache)
         if t_msgs:
@@ -3784,9 +4257,12 @@ def get_series_by_change_id(change_id: str, nocache: bool = False) -> Optional['
     return lmbx
 
 
-def get_msgs_by_patch_id(patch_id: str, extra_query: Optional[str] = None,
-                         nocache: bool = False, full_threads: bool = False
-                         ) -> Optional[List[EmailMessage]]:
+def get_msgs_by_patch_id(
+    patch_id: str,
+    extra_query: Optional[str] = None,
+    nocache: bool = False,
+    full_threads: bool = False,
+) -> Optional[List[EmailMessage]]:
     q = f'patchid:{patch_id}'
     if extra_query:
         q = f'{q} {extra_query}'
@@ -3798,7 +4274,9 @@ def get_msgs_by_patch_id(patch_id: str, extra_query: Optional[str] = None,
     return q_msgs
 
 
-def get_series_by_patch_id(patch_id: str, nocache: bool = False) -> Optional['LoreMailbox']:
+def get_series_by_patch_id(
+    patch_id: str, nocache: bool = False
+) -> Optional['LoreMailbox']:
     q_msgs = get_msgs_by_patch_id(patch_id, full_threads=True, nocache=nocache)
     if not q_msgs:
         return None
@@ -3809,10 +4287,13 @@ def get_series_by_patch_id(patch_id: str, nocache: bool = False) -> Optional['Lo
     return lmbx
 
 
-def get_pi_thread_by_msgid(msgid: str, nocache: bool = False,
-                           onlymsgids: Optional[Set[str]] = None,
-                           with_thread: bool = True,
-                           quiet: bool = False) -> Optional[List[EmailMessage]]:
+def get_pi_thread_by_msgid(
+    msgid: str,
+    nocache: bool = False,
+    onlymsgids: Optional[Set[str]] = None,
+    with_thread: bool = True,
+    quiet: bool = False,
+) -> Optional[List[EmailMessage]]:
     if not quiet:
         logger.info('Looking up %s', msgid)
     node = get_lore_node()
@@ -3847,16 +4328,20 @@ def get_pi_thread_by_msgid(msgid: str, nocache: bool = False,
     return strict
 
 
-def git_range_to_patches(gitdir: Optional[str], start: str, end: str,
-                         prefixes: Optional[List[str]] = None,
-                         revision: Optional[int] = 1,
-                         msgid_tpt: Optional[str] = None,
-                         seriests: Optional[int] = None,
-                         mailfrom: Optional[Tuple[str, str]] = None,
-                         extrahdrs: Optional[List[Tuple[str, str]]] = None,
-                         ignore_commits: Optional[Set[str]] = None,
-                         limit_committer: Optional[str] = None,
-                         presubject: Optional[str] = None) -> List[Tuple[str, EmailMessage]]:
+def git_range_to_patches(
+    gitdir: Optional[str],
+    start: str,
+    end: str,
+    prefixes: Optional[List[str]] = None,
+    revision: Optional[int] = 1,
+    msgid_tpt: Optional[str] = None,
+    seriests: Optional[int] = None,
+    mailfrom: Optional[Tuple[str, str]] = None,
+    extrahdrs: Optional[List[Tuple[str, str]]] = None,
+    ignore_commits: Optional[Set[str]] = None,
+    limit_committer: Optional[str] = None,
+    presubject: Optional[str] = None,
+) -> List[Tuple[str, EmailMessage]]:
     gitargs = ['rev-list', '--no-merges', '--reverse']
     if limit_committer:
         gitargs += ['-F', f'--committer={limit_committer}']
@@ -3881,20 +4366,23 @@ def git_range_to_patches(gitdir: Optional[str], start: str, end: str,
             '--find-renames',
         ]
 
-        if git_check_minimal_version("2.40"):
-            showargs.append("--default-prefix")
+        if git_check_minimal_version('2.40'):
+            showargs.append('--default-prefix')
 
         smcfg = get_sendemail_config()
         if not get_git_bool(str(smcfg.get('mailmap', 'false'))):
             showargs.append('--no-mailmap')
         logger.debug('showargs=%s', showargs)
         ecode, out = git_run_command(
-            gitdir, ['show'] + showargs + [commit],
+            gitdir,
+            ['show'] + showargs + [commit],
             decode=False,
         )
         if ecode > 0:
             raise RuntimeError(f'Could not get a patch out of {commit}')
-        msg = email.parser.BytesParser(policy=emlpolicy, _class=EmailMessage).parsebytes(out)
+        msg = email.parser.BytesParser(
+            policy=emlpolicy, _class=EmailMessage
+        ).parsebytes(out)
         patches.append((commit, msg))
 
     fullcount = len(patches)
@@ -3917,8 +4405,9 @@ def git_range_to_patches(gitdir: Optional[str], start: str, end: str,
         lsubject.expected = expected
         if revision is not None:
             lsubject.revision = revision
-        subject = lsubject.get_rebuilt_subject(eprefixes=prefixes,
-                                               presubject=presubject)
+        subject = lsubject.get_rebuilt_subject(
+            eprefixes=prefixes, presubject=presubject
+        )
 
         logger.debug('  %s', subject)
         msg.replace_header('Subject', subject)
@@ -3937,7 +4426,9 @@ def git_range_to_patches(gitdir: Optional[str], start: str, end: str,
             patchts = seriests + counter + 1
             origdate = msg.get('Date')
             if origdate:
-                msg.replace_header('Date', email.utils.formatdate(patchts, localtime=True))
+                msg.replace_header(
+                    'Date', email.utils.formatdate(patchts, localtime=True)
+                )
             else:
                 msg.add_header('Date', email.utils.formatdate(patchts, localtime=True))
 
@@ -3988,7 +4479,9 @@ def git_revparse_tag(gitdir: Optional[str], tagname: str) -> Optional[str]:
     return out.strip()
 
 
-def git_branch_contains(gitdir: Optional[str], commit_id: str, checkall: bool = False) -> List[str]:
+def git_branch_contains(
+    gitdir: Optional[str], commit_id: str, checkall: bool = False
+) -> List[str]:
     gitargs = ['branch', '--format=%(refname:short)', '--contains', commit_id]
     if checkall:
         gitargs.append('--all')
@@ -4031,8 +4524,9 @@ def git_get_common_dir(path: Optional[str] = None) -> Optional[str]:
     return None
 
 
-def format_addrs(pairs: List[Tuple[str, str]], clean: bool = True,
-                 header_safe: bool = True) -> str:
+def format_addrs(
+    pairs: List[Tuple[str, str]], clean: bool = True, header_safe: bool = True
+) -> str:
     addrs = list()
     for pair in pairs:
         if not pair[0] or pair[0] == pair[1]:
@@ -4069,7 +4563,9 @@ def print_pretty_addrs(addrs: List[Tuple[str, str]], hdrname: str) -> None:
 
 
 def make_quote(body: str, maxlines: int = 5) -> str:
-    _headers, message, _trailers, _basement, _signature = LoreMessage.get_body_parts(body)
+    _headers, message, _trailers, _basement, _signature = LoreMessage.get_body_parts(
+        body
+    )
     if not len(message):
         # Sometimes there is no message, just trailers
         return '> \n'
@@ -4122,7 +4618,9 @@ def parse_int_range(intrange: str, upper: int) -> Iterator[int]:
             logger.critical('Unknown range value specified: %s', n)
 
 
-def check_gpg_status(status: str) -> Tuple[bool, bool, bool, Optional[str], Optional[str]]:
+def check_gpg_status(
+    status: str,
+) -> Tuple[bool, bool, bool, Optional[str], Optional[str]]:
     good = False
     valid = False
     trusted = False
@@ -4139,7 +4637,9 @@ def check_gpg_status(status: str) -> Tuple[bool, bool, bool, Optional[str], Opti
     if gs_matches:
         good = True
         keyid = gs_matches.groups()[0]
-    vs_matches = re.search(r'^\[GNUPG:] VALIDSIG ([\dA-F]+) (\d{4}-\d{2}-\d{2}) (\d+)', status, flags=re.M)
+    vs_matches = re.search(
+        r'^\[GNUPG:] VALIDSIG ([\dA-F]+) (\d{4}-\d{2}-\d{2}) (\d+)', status, flags=re.M
+    )
     if vs_matches:
         valid = True
         signtime = vs_matches.groups()[2]
@@ -4181,8 +4681,12 @@ def save_git_am_mbox(msgs: List[EmailMessage], dest: BinaryIO) -> None:
         dest.write(LoreMessage.get_msg_as_bytes(msg, headers='decode'))
 
 
-def save_mboxrd_mbox(msgs: List[EmailMessage], dest: BinaryIO, mangle_from: bool = False) -> None:
-    gen = email.generator.BytesGenerator(dest, mangle_from_=mangle_from, policy=emlpolicy)
+def save_mboxrd_mbox(
+    msgs: List[EmailMessage], dest: BinaryIO, mangle_from: bool = False
+) -> None:
+    gen = email.generator.BytesGenerator(
+        dest, mangle_from_=mangle_from, policy=emlpolicy
+    )
     for msg in msgs:
         dest.write(b'From mboxrd@z Thu Jan  1 00:00:00 1970\n')
         gen.flatten(msg)
@@ -4198,13 +4702,20 @@ def save_maildir(msgs: List[EmailMessage], dest: str) -> None:
     for msg in msgs:
         # make a slug out of it
         lsubj = LoreSubject(msg.get('subject', ''))
-        slug = '%04d_%s' % (lsubj.counter, re.sub(r'\W+', '_', lsubj.subject).strip('_').lower())
+        slug = '%04d_%s' % (
+            lsubj.counter,
+            re.sub(r'\W+', '_', lsubj.subject).strip('_').lower(),
+        )
         with open(os.path.join(d_tmp, f'{slug}.eml'), 'wb') as mfh:
             mfh.write(LoreMessage.get_msg_as_bytes(msg, headers='decode'))
-        os.rename(os.path.join(d_tmp, f'{slug}.eml'), os.path.join(d_new, f'{slug}.eml'))
+        os.rename(
+            os.path.join(d_tmp, f'{slug}.eml'), os.path.join(d_new, f'{slug}.eml')
+        )
 
 
-def get_mailinfo(bmsg: bytes, scissors: bool = False) -> Tuple[Dict[str, str], bytes, bytes]:
+def get_mailinfo(
+    bmsg: bytes, scissors: bool = False
+) -> Tuple[Dict[str, str], bytes, bytes]:
     with tempfile.TemporaryDirectory() as tfd:
         m_out = os.path.join(tfd, 'm')
         p_out = os.path.join(tfd, 'p')
@@ -4258,10 +4769,16 @@ def _setup_sendemail_config(cmdargs: argparse.Namespace) -> None:
     identity = config.get('sendemail-identity') or _basecfg.get('identity')
     if identity:
         # Use this identity to override what we got from the default one
-        sconfig = get_config_from_git(rf'sendemail\.{identity}\..*', multivals=['smtpserveroption'], defaults=_basecfg)
+        sconfig = get_config_from_git(
+            rf'sendemail\.{identity}\..*',
+            multivals=['smtpserveroption'],
+            defaults=_basecfg,
+        )
         sectname = f'sendemail.{identity}'
         if not len(sconfig):
-            raise smtplib.SMTPException('Unable to find %s settings in any applicable git config' % sectname)
+            raise smtplib.SMTPException(
+                'Unable to find %s settings in any applicable git config' % sectname
+            )
     else:
         sconfig = _basecfg
         sectname = 'sendemail'
@@ -4277,7 +4794,9 @@ def get_sendemail_config() -> Dict[str, Optional[Union[str, List[str]]]]:
     return SENDEMAIL_CONFIG
 
 
-def get_smtp(dryrun: bool = False) -> Tuple[Union[smtplib.SMTP, smtplib.SMTP_SSL, List[str], None], str]:
+def get_smtp(
+    dryrun: bool = False,
+) -> Tuple[Union[smtplib.SMTP, smtplib.SMTP_SSL, List[str], None], str]:
     sconfig = get_sendemail_config()
     # Limited support for smtp settings to begin with, but should cover the vast majority of cases
     fromaddr = sconfig.get('from')
@@ -4292,7 +4811,9 @@ def get_smtp(dryrun: bool = False) -> Tuple[Union[smtplib.SMTP, smtplib.SMTP_SSL
     try:
         port = int(str(sconfig.get('smtpserverport', '0')))
     except ValueError as exc:
-        raise smtplib.SMTPException('Invalid smtpport entry in config: %s' % sconfig.get('smtpserverport')) from exc
+        raise smtplib.SMTPException(
+            'Invalid smtpport entry in config: %s' % sconfig.get('smtpserverport')
+        ) from exc
 
     # If server contains slashes, then it's a local command
     if '/' in server:
@@ -4337,7 +4858,9 @@ def get_smtp(dryrun: bool = False) -> Tuple[Union[smtplib.SMTP, smtplib.SMTP_SSL
             # We do TLS from the get-go
             smtp = smtplib.SMTP_SSL(server, port)
         else:
-            raise smtplib.SMTPException('Unclear what to do with smtpencryption=%s' % encryption)
+            raise smtplib.SMTPException(
+                'Unclear what to do with smtpencryption=%s' % encryption
+            )
 
         # If we got to this point, we should do authentication,
         # unless smtpauth is set to a special "none" value
@@ -4353,11 +4876,15 @@ def get_smtp(dryrun: bool = False) -> Tuple[Union[smtplib.SMTP, smtplib.SMTP_SSL
                 gchost = f'{server}:{port}'
             else:
                 gchost = server
-            gc_pass = git_credential_fill(None, protocol='smtp', host=gchost, username=auser)
+            gc_pass = git_credential_fill(
+                None, protocol='smtp', host=gchost, username=auser
+            )
             if gc_pass:
                 apass = gc_pass
             if not apass:
-                raise smtplib.SMTPException('No password specified for connecting to %s', server)
+                raise smtplib.SMTPException(
+                    'No password specified for connecting to %s' % server
+                )
         if auser and apass:
             # Let any exceptions bubble up
             if smtpauth in ('oauth', 'oauth2', 'xoauth2'):
@@ -4374,10 +4901,12 @@ def get_smtp(dryrun: bool = False) -> Tuple[Union[smtplib.SMTP, smtplib.SMTP_SSL
 
 def get_patchwork_session(pwkey: str, pwurl: str) -> Tuple[requests.Session, str]:
     session = requests.session()
-    session.headers.update({
-        'User-Agent': 'b4/%s' % __VERSION__,
-        'Authorization': f'Token {pwkey}',
-    })
+    session.headers.update(
+        {
+            'User-Agent': 'b4/%s' % __VERSION__,
+            'Authorization': f'Token {pwkey}',
+        }
+    )
     url = '/'.join((pwurl.rstrip('/'), 'api', PW_REST_API_VERSION))
     logger.debug('pw url=%s', url)
     return session, url
@@ -4390,7 +4919,9 @@ def patchwork_set_state(msgids: List[str], state: str) -> None:
     pwurl = str(config.get('pw-url', ''))
     pwproj = str(config.get('pw-project', ''))
     if not (pwkey and pwurl and pwproj):
-        logger.debug('Patchwork support requires pw-key, pw-url and pw-project settings')
+        logger.debug(
+            'Patchwork support requires pw-key, pw-url and pw-project settings'
+        )
         return
     pses, url = get_patchwork_session(pwkey, pwurl)
     patches_url = '/'.join((url, 'patches'))
@@ -4432,11 +4963,17 @@ def patchwork_set_state(msgids: List[str], state: str) -> None:
                 logger.debug('Patchwork REST error: %s', ex)
 
 
-def send_mail(smtp: Union[smtplib.SMTP, smtplib.SMTP_SSL, List[str], None], msgs: Sequence[EmailMessage],
-              fromaddr: Optional[str], destaddrs: Optional[Union[Set[str], List[str]]] = None,
-              patatt_sign: bool = False, dryrun: bool = False,
-              output_dir: Optional[str] = None, web_endpoint: Optional[str] = None,
-              reflect: bool = False) -> Optional[int]:
+def send_mail(
+    smtp: Union[smtplib.SMTP, smtplib.SMTP_SSL, List[str], None],
+    msgs: Sequence[EmailMessage],
+    fromaddr: Optional[str],
+    destaddrs: Optional[Union[Set[str], List[str]]] = None,
+    patatt_sign: bool = False,
+    dryrun: bool = False,
+    output_dir: Optional[str] = None,
+    web_endpoint: Optional[str] = None,
+    reflect: bool = False,
+) -> Optional[int]:
     tosend = list()
     if output_dir is not None:
         dryrun = True
@@ -4457,16 +4994,21 @@ def send_mail(smtp: Union[smtplib.SMTP, smtplib.SMTP_SSL, List[str], None], msgs
         ls = LoreSubject(subject)
         if patatt_sign:
             import patatt
+
             # patatt.logger = logger
             try:
                 bdata = patatt.rfc2822_sign(bdata)
             except patatt.NoKeyError as ex:
                 logger.critical('CRITICAL: Error signing: no key configured')
-                logger.critical('          Run "patatt genkey" or configure "user.signingKey" to use PGP')
+                logger.critical(
+                    '          Run "patatt genkey" or configure "user.signingKey" to use PGP'
+                )
                 logger.critical('          As a last resort, rerun with --no-sign')
                 raise RuntimeError(str(ex)) from ex
             except patatt.SigningError as ex:
-                raise RuntimeError('Failure trying to patatt-sign: %s' % str(ex)) from ex
+                raise RuntimeError(
+                    'Failure trying to patatt-sign: %s' % str(ex)
+                ) from ex
         if dryrun:
             if output_dir:
                 filen = '%s.eml' % ls.get_slug(sep='-')
@@ -4481,7 +5023,9 @@ def send_mail(smtp: Union[smtplib.SMTP, smtplib.SMTP_SSL, List[str], None], msgs
             continue
         if not destaddrs:
             alldests = email.utils.getaddresses([str(x) for x in msg.get_all('to', [])])
-            alldests += email.utils.getaddresses([str(x) for x in msg.get_all('cc', [])])
+            alldests += email.utils.getaddresses(
+                [str(x) for x in msg.get_all('cc', [])]
+            )
             myaddrs = {x[1] for x in alldests}
         else:
             myaddrs = set(destaddrs)
@@ -4538,7 +5082,9 @@ def send_mail(smtp: Union[smtplib.SMTP, smtplib.SMTP_SSL, List[str], None], msgs
                 cmdargs = list(smtp) + list(destaddrs)
             ecode, _out, err = _run_command(cmdargs, stdin=bdata)
             if ecode > 0:
-                raise RuntimeError('Error running %s: %s' % (' '.join(smtp), err.decode()))
+                raise RuntimeError(
+                    'Error running %s: %s' % (' '.join(smtp), err.decode())
+                )
             sent += 1
 
     elif smtp:
@@ -4555,7 +5101,9 @@ def send_mail(smtp: Union[smtplib.SMTP, smtplib.SMTP_SSL, List[str], None], msgs
     return sent
 
 
-def git_get_current_branch(gitdir: Optional[str] = None, short: bool = True) -> Optional[str]:
+def git_get_current_branch(
+    gitdir: Optional[str] = None, short: bool = True
+) -> Optional[str]:
     gitargs = ['symbolic-ref', '-q', 'HEAD']
     ecode, out = git_run_command(gitdir, gitargs)
     if ecode > 0:
@@ -4578,13 +5126,14 @@ def get_excluded_addrs() -> Set[str]:
     return excludes
 
 
-def cleanup_email_addrs(addresses: List[Tuple[str, str]], excludes: Set[str],
-                        gitdir: Optional[str]) -> List[Tuple[str, str]]:
+def cleanup_email_addrs(
+    addresses: List[Tuple[str, str]], excludes: Set[str], gitdir: Optional[str]
+) -> List[Tuple[str, str]]:
     global ALIAS_INFO
     global MAILMAP_INFO
 
     # Translate aliases if support is available
-    if git_check_minimal_version("2.47"):
+    if git_check_minimal_version('2.47'):
         logger.debug('Translating aliases via git send-email')
 
         unqual_addrs: Set[str] = set()
@@ -4598,23 +5147,27 @@ def cleanup_email_addrs(addresses: List[Tuple[str, str]], excludes: Set[str],
             tocheck = list(unqual_addrs)
             data = '\n'.join(tocheck).encode('utf-8')
             args = ['send-email', '--translate-aliases']
-            ecode, out = git_run_command(gitdir,
-                                        ['send-email', '--translate-aliases'],
-                                        stdin=data)
+            ecode, out = git_run_command(
+                gitdir, ['send-email', '--translate-aliases'], stdin=data
+            )
             if ecode == 0:
                 translated_addrs = email.utils.getaddresses(out.strip().splitlines())
 
                 for alias, entry in zip(tocheck, translated_addrs):
                     if alias != entry[1]:
-                        logger.debug('Translated alias %s to qualified address %s',
-                                    alias, entry[1])
+                        logger.debug(
+                            'Translated alias %s to qualified address %s',
+                            alias,
+                            entry[1],
+                        )
                         ALIAS_INFO[alias] = entry
                     else:
                         logger.debug('"%s" is not a known alias', alias)
                         ALIAS_INFO[alias] = None
             else:
-                logger.debug('git send-email --translate-aliases failed with exit code %s',
-                             ecode)
+                logger.debug(
+                    'git send-email --translate-aliases failed with exit code %s', ecode
+                )
 
         def _replace_aliases(entry: Tuple[str, str]) -> Tuple[str, str]:
             if entry[1] in ALIAS_INFO:
@@ -4646,7 +5199,9 @@ def cleanup_email_addrs(addresses: List[Tuple[str, str]], excludes: Set[str],
             replacement = MAILMAP_INFO[entry[1]]
             # If it's None, we don't want to replace it
             if replacement is not None:
-                logger.debug('Replaced %s with mailmap-updated %s', entry[1], replacement[1])
+                logger.debug(
+                    'Replaced %s with mailmap-updated %s', entry[1], replacement[1]
+                )
                 addresses.remove(entry)
                 addresses.append(replacement)
             continue
@@ -4707,8 +5262,13 @@ def discover_rethread_series(msgid: str, nocache: bool = False) -> List[str]:
         seed_msg = seed_msgs[0]
 
     seed = LoreMessage(seed_msg)
-    logger.info('Seed: [%d/%d] %s (from %s)',
-                seed.counter, seed.expected, seed.subject, seed.fromemail)
+    logger.info(
+        'Seed: [%d/%d] %s (from %s)',
+        seed.counter,
+        seed.expected,
+        seed.subject,
+        seed.fromemail,
+    )
 
     # Build a 1-hour date window around the seed (30 min each way)
     # Convert to UTC since public-inbox dt: expects UTC timestamps
@@ -4757,7 +5317,9 @@ def discover_rethread_series(msgid: str, nocache: bool = False) -> List[str]:
 
     if seed.counters_inferred:
         if not found_bare:
-            logger.warning('Could not find any matching patches, using seed message only')
+            logger.warning(
+                'Could not find any matching patches, using seed message only'
+            )
             return [msgid]
         logger.info('Discovered %d bare patches from the same author', len(found_bare))
         return found_bare
@@ -4774,15 +5336,18 @@ def discover_rethread_series(msgid: str, nocache: bool = False) -> List[str]:
     n_patches = sum(1 for c in found if c > 0)
     if n_patches < expected:
         missing = [str(i) for i in range(1, expected + 1) if i not in found]
-        logger.warning('Found %d/%d patches (missing: %s)',
-                       n_patches, expected, ', '.join(missing))
+        logger.warning(
+            'Found %d/%d patches (missing: %s)', n_patches, expected, ', '.join(missing)
+        )
     else:
         logger.info('Discovered %d/%d patches for the series', n_patches, expected)
 
     return msgids
 
 
-def fetch_rethread_messages(msgids: List[str], nocache: bool = False) -> Tuple[List[str], List[EmailMessage]]:
+def fetch_rethread_messages(
+    msgids: List[str], nocache: bool = False
+) -> Tuple[List[str], List[EmailMessage]]:
     """Fetch messages for multiple msgids, deduplicating across threads.
 
     Returns (msgids, all_msgs) where msgids is the input list (for
@@ -4810,7 +5375,9 @@ def fetch_rethread_messages(msgids: List[str], nocache: bool = False) -> Tuple[L
     return msgids, all_msgs
 
 
-def retrieve_rethreaded_messages(cmdargs: argparse.Namespace) -> Tuple[str, List[EmailMessage]]:
+def retrieve_rethreaded_messages(
+    cmdargs: argparse.Namespace,
+) -> Tuple[str, List[EmailMessage]]:
     """Retrieve messages from multiple unthreaded msgids and rethread them into a series."""
     raw_ids: List[str] = cmdargs.rethread
 
@@ -4848,7 +5415,9 @@ def retrieve_rethreaded_messages(cmdargs: argparse.Namespace) -> Tuple[str, List
     return LoreSeries.rethread_series(msgids, all_msgs)
 
 
-def retrieve_messages(cmdargs: argparse.Namespace) -> Tuple[Optional[str], Optional[List[EmailMessage]]]:
+def retrieve_messages(
+    cmdargs: argparse.Namespace,
+) -> Tuple[Optional[str], Optional[List[EmailMessage]]]:
     # Handle --rethread mode: fetch multiple unrelated messages and stitch them together
     if getattr(cmdargs, 'rethread', None):
         if not can_network:
@@ -4875,14 +5444,24 @@ def retrieve_messages(cmdargs: argparse.Namespace) -> Tuple[Optional[str], Optio
         if ('cherrypick' in cmdargs and cmdargs.cherrypick == '_') or not with_thread:
             # Just that msgid, please
             pickings.add(msgid)
-        msgs = get_pi_thread_by_msgid(msgid, nocache=cmdargs.nocache, onlymsgids=pickings, with_thread=with_thread) or []
+        msgs = (
+            get_pi_thread_by_msgid(
+                msgid,
+                nocache=cmdargs.nocache,
+                onlymsgids=pickings,
+                with_thread=with_thread,
+            )
+            or []
+        )
         if not msgs:
             logger.debug('No messages from the query')
             return None, msgs
     else:
         if cmdargs.localmbox == '-':
             # The entire mbox is passed via stdin, so mailsplit it and use the first message for our msgid
-            msgs = mailsplit_bytes(sys.stdin.buffer.read(), pipesep=cmdargs.stdin_pipe_sep)
+            msgs = mailsplit_bytes(
+                sys.stdin.buffer.read(), pipesep=cmdargs.stdin_pipe_sep
+            )
             if not len(msgs):
                 raise LookupError('Stdin did not contain any messages')
 
@@ -4895,7 +5474,9 @@ def retrieve_messages(cmdargs: argparse.Namespace) -> Tuple[Optional[str], Optio
                 if with_thread:
                     msgs = get_strict_thread(mb_msgs, msgid) or []
                     if not msgs:
-                        raise LookupError('Could not find %s in %s' % (msgid, cmdargs.localmbox))
+                        raise LookupError(
+                            'Could not find %s in %s' % (msgid, cmdargs.localmbox)
+                        )
                 else:
                     msgs = list()
                     for msg in mb_msgs:
@@ -4929,8 +5510,9 @@ def git_revparse_obj(gitobj: str, gitdir: Optional[str] = None) -> str:
 
 def _rewrite_fetch_head_origin(topdir: str, old_origin: str, new_origin: str) -> None:
     """Rewrite FETCH_HEAD to replace old_origin with a descriptive message."""
-    ecode, fhf = git_run_command(topdir, ['rev-parse', '--git-path', 'FETCH_HEAD'],
-                                 logstderr=True)
+    ecode, fhf = git_run_command(
+        topdir, ['rev-parse', '--git-path', 'FETCH_HEAD'], logstderr=True
+    )
     if ecode > 0:
         return
     fhf = fhf.rstrip()
@@ -4943,9 +5525,14 @@ def _rewrite_fetch_head_origin(topdir: str, old_origin: str, new_origin: str) ->
             fhh.write(new_contents)
 
 
-def git_fetch_am_into_repo(gitdir: Optional[str], ambytes: bytes, at_base: str = 'HEAD',
-                           origin: Optional[str] = None, check_only: bool = False,
-                           am_flags: Optional[List[str]] = None) -> None:
+def git_fetch_am_into_repo(
+    gitdir: Optional[str],
+    ambytes: bytes,
+    at_base: str = 'HEAD',
+    origin: Optional[str] = None,
+    check_only: bool = False,
+    am_flags: Optional[List[str]] = None,
+) -> None:
     if gitdir is None:
         gitdir = os.getcwd()
     topdir = git_get_toplevel(gitdir)
@@ -4969,12 +5556,16 @@ def git_fetch_am_into_repo(gitdir: Optional[str], ambytes: bytes, at_base: str =
     cleanup = True
     try:
         logger.info('Magic: Preparing a sparse worktree')
-        ecode, out = git_run_command(gwt, ['sparse-checkout', 'set'], logstderr=True, rundir=gwt)
+        ecode, out = git_run_command(
+            gwt, ['sparse-checkout', 'set'], logstderr=True, rundir=gwt
+        )
         if ecode > 0:
             logger.critical('Error running sparse-checkout set')
             logger.critical(out)
             raise RuntimeError
-        ecode, out = git_run_command(gwt, ['checkout', '-f'], logstderr=True, rundir=gwt)
+        ecode, out = git_run_command(
+            gwt, ['checkout', '-f'], logstderr=True, rundir=gwt
+        )
         if ecode > 0:
             logger.critical('Error running checkout into sparse workdir')
             logger.critical(out)
@@ -4982,7 +5573,9 @@ def git_fetch_am_into_repo(gitdir: Optional[str], ambytes: bytes, at_base: str =
         amargs = ['am']
         if am_flags:
             amargs.extend(am_flags)
-        ecode, out = git_run_command(gwt, amargs, stdin=ambytes, logstderr=True, rundir=gwt)
+        ecode, out = git_run_command(
+            gwt, amargs, stdin=ambytes, logstderr=True, rundir=gwt
+        )
         if ecode > 0:
             cleanup = False
             raise AmConflictError(gwt, out.strip())
@@ -5044,13 +5637,21 @@ def edit_in_editor(bdata: bytes, filehint: str = 'COMMIT_EDITMSG') -> bytes:
 
     write_branch = git_get_current_branch()
     if write_branch != read_branch:
-        with tempfile.NamedTemporaryFile(mode="wb", prefix=f"old-{read_branch}".replace("/", "-"),
-                                         delete=False) as save_file:
+        with tempfile.NamedTemporaryFile(
+            mode='wb', prefix=f'old-{read_branch}'.replace('/', '-'), delete=False
+        ) as save_file:
             save_file.write(bdata)
-            logger.critical('Editing started on branch %s, but current branch is %s.',
-                            read_branch, write_branch)
-            logger.critical('To avoid a collision, your text was saved in %s', save_file.name)
-        raise RuntimeError(f"Branch changed during file editing, the temporary file was saved at {save_file.name}")
+            logger.critical(
+                'Editing started on branch %s, but current branch is %s.',
+                read_branch,
+                write_branch,
+            )
+            logger.critical(
+                'To avoid a collision, your text was saved in %s', save_file.name
+            )
+        raise RuntimeError(
+            f'Branch changed during file editing, the temporary file was saved at {save_file.name}'
+        )
     return bdata
 
 
@@ -5095,8 +5696,9 @@ def view_in_pager(bdata: bytes, filehint: str = 'b4-view.txt') -> None:
         spop.wait()
 
 
-def map_codereview_trailers(qmsgs: List[EmailMessage],
-                            ignore_msgids: Optional[Set[str]] = None) -> Dict[str, List['LoreMessage']]:
+def map_codereview_trailers(
+    qmsgs: List[EmailMessage], ignore_msgids: Optional[Set[str]] = None
+) -> Dict[str, List['LoreMessage']]:
     """
     Map messages containing code-review trailers to patch-ids they were sent for.
     :param qmsgs: list of messages to process
@@ -5150,9 +5752,14 @@ def map_codereview_trailers(qmsgs: List[EmailMessage],
             # Is it a patch?
             logger.debug('  subj: %s', _qmsg.full_subject)
             # Is it the cover letter?
-            if (_qmsg.counter == 0 and (not _qmsg.counters_inferred or _qmsg.has_diffstat)
-                    and _qmsg.msgid in ref_map):
-                logger.debug('  stopping: found the cover letter for %s', qlmsg.full_subject)
+            if (
+                _qmsg.counter == 0
+                and (not _qmsg.counters_inferred or _qmsg.has_diffstat)
+                and _qmsg.msgid in ref_map
+            ):
+                logger.debug(
+                    '  stopping: found the cover letter for %s', qlmsg.full_subject
+                )
                 if _qmsg.msgid not in covers:
                     covers[_qmsg.msgid] = set()
                 covers[_qmsg.msgid].add(qlmsg.msgid)
@@ -5166,7 +5773,9 @@ def map_codereview_trailers(qmsgs: List[EmailMessage],
                         patchid_map[pqpid] = list()
                     if qlmsg not in patchid_map[pqpid]:
                         patchid_map[pqpid].append(qlmsg)
-                        logger.debug('  matched patch-id %s to %s', pqpid, qlmsg.full_subject)
+                        logger.debug(
+                            '  matched patch-id %s to %s', pqpid, qlmsg.full_subject
+                        )
                     pfound = True
                     break
             else:
@@ -5187,7 +5796,9 @@ def map_codereview_trailers(qmsgs: List[EmailMessage],
             if qlmsg.in_reply_to == cmsgid and qlmsg.git_patch_id:
                 pqpid = qlmsg.git_patch_id
                 for fwmsgid in fwmsgids:
-                    logger.debug('Adding cover follow-up %s to patch-id %s', fwmsgid, pqpid)
+                    logger.debug(
+                        'Adding cover follow-up %s to patch-id %s', fwmsgid, pqpid
+                    )
                     if pqpid not in patchid_map:
                         patchid_map[pqpid] = list()
                     patchid_map[pqpid].append(qmid_map[fwmsgid])
@@ -5215,7 +5826,7 @@ def get_msgs_from_mailbox_or_maildir(mbmd: str) -> List[EmailMessage]:
         return [x[1] for x in in_mdr.items()]  # type: ignore[misc]
 
     in_mbx = mailbox.mbox(mbmd, factory=mailbox_email_factory)  # type: ignore[arg-type]
-    return[x[1] for x in in_mbx.items()]  # type: ignore[misc]
+    return [x[1] for x in in_mbx.items()]  # type: ignore[misc]
 
 
 def get_mailfrom() -> Tuple[str, str]:
@@ -5234,6 +5845,8 @@ def make_msgid(idstring: Optional[str] = None, domain: str = 'b4') -> str:
 
 
 def is_maildir(dest: str) -> bool:
-    return (os.path.isdir(os.path.join(dest, 'new'))
-            and os.path.isdir(os.path.join(dest, 'cur'))
-            and os.path.isdir(os.path.join(dest, 'tmp')))
+    return (
+        os.path.isdir(os.path.join(dest, 'new'))
+        and os.path.isdir(os.path.join(dest, 'cur'))
+        and os.path.isdir(os.path.join(dest, 'tmp'))
+    )
diff --git a/src/b4/bugs/__init__.py b/src/b4/bugs/__init__.py
index cb21611..1cd2db4 100644
--- a/src/b4/bugs/__init__.py
+++ b/src/b4/bugs/__init__.py
@@ -3,6 +3,7 @@
 # SPDX-License-Identifier: GPL-2.0-or-later
 # Copyright (C) 2020 by the Linux Foundation
 """b4 bugs: manage bug reports from mailing list threads."""
+
 import argparse
 import json
 import logging
@@ -46,14 +47,21 @@ def _ensure_identity(topdir: str) -> bool:
             git_email = git_email.strip() if ecode_e == 0 else ''
             for user in users:
                 if user.get('email', '') == git_email:
-                    ecode, _out, _err = git_bug_cli(topdir, ['user', 'adopt', user['id']])
+                    ecode, _out, _err = git_bug_cli(
+                        topdir, ['user', 'adopt', user['id']]
+                    )
                     if ecode == 0:
-                        logger.info('Adopted existing git-bug identity: %s', user.get('name', ''))
+                        logger.info(
+                            'Adopted existing git-bug identity: %s',
+                            user.get('name', ''),
+                        )
                         return True
             # No email match -- adopt the first one
             ecode, _out, _err = git_bug_cli(topdir, ['user', 'adopt', users[0]['id']])
             if ecode == 0:
-                logger.info('Adopted existing git-bug identity: %s', users[0].get('name', ''))
+                logger.info(
+                    'Adopted existing git-bug identity: %s', users[0].get('name', '')
+                )
                 return True
 
     # No identities at all -- create from git config after confirmation
@@ -62,7 +70,9 @@ def _ensure_identity(topdir: str) -> bool:
     git_name = git_name.strip() if ecode_n == 0 else ''
     git_email = git_email.strip() if ecode_e == 0 else ''
     if not git_name or not git_email:
-        logger.critical('Cannot create git-bug identity: git user.name/user.email not configured')
+        logger.critical(
+            'Cannot create git-bug identity: git user.name/user.email not configured'
+        )
         return False
 
     logger.info('No git-bug identity found for this repository.')
@@ -74,9 +84,18 @@ def _ensure_identity(topdir: str) -> bool:
     if answer and answer != 'y':
         return False
 
-    ecode, out, err = git_bug_cli(topdir, [
-        'user', 'new', '-n', git_name, '-e', git_email, '--non-interactive',
-    ])
+    ecode, out, err = git_bug_cli(
+        topdir,
+        [
+            'user',
+            'new',
+            '-n',
+            git_name,
+            '-e',
+            git_email,
+            '--non-interactive',
+        ],
+    )
     if ecode != 0:
         logger.critical('Failed to create git-bug identity: %s', err.strip())
         return False
@@ -115,8 +134,9 @@ def cmd_import(cmdargs: argparse.Namespace) -> None:
     except RuntimeError as exc:
         logger.critical('Import failed: %s', exc)
         sys.exit(1)
-    logger.info('Created bug %s: %s (%d comments)',
-                bug.id[:7], bug.title, len(bug.comments))
+    logger.info(
+        'Created bug %s: %s (%d comments)', bug.id[:7], bug.title, len(bug.comments)
+    )
 
 
 def cmd_refresh(cmdargs: argparse.Namespace) -> None:
@@ -140,8 +160,7 @@ def cmd_refresh(cmdargs: argparse.Namespace) -> None:
             if count:
                 logger.info('Bug %s: %d new comment(s)', bug.id[:7], count)
                 total += count
-        logger.info('Refreshed %d bug(s), %d new comment(s) total',
-                    len(bugs), total)
+        logger.info('Refreshed %d bug(s), %d new comment(s) total', len(bugs), total)
 
 
 def cmd_list(cmdargs: argparse.Namespace) -> None:
@@ -161,8 +180,7 @@ def cmd_list(cmdargs: argparse.Namespace) -> None:
     for bug in bugs:
         icon = '\u25cf' if bug.status == Status.OPEN else '\u25cb'
         labels = ' '.join(f'[{label}]' for label in sorted(bug.labels))
-        logger.info('%s %s  %s  %s',
-                    icon, bug.id[:7], bug.title, labels)
+        logger.info('%s %s  %s  %s', icon, bug.id[:7], bug.title, labels)
 
 
 def cmd_delete(cmdargs: argparse.Namespace) -> None:
diff --git a/src/b4/bugs/_import.py b/src/b4/bugs/_import.py
index 954ebec..9114f70 100644
--- a/src/b4/bugs/_import.py
+++ b/src/b4/bugs/_import.py
@@ -3,6 +3,7 @@
 # SPDX-License-Identifier: GPL-2.0-or-later
 # Copyright (C) 2020 by the Linux Foundation
 """Thread import engine: lore.kernel.org -> git-bug via ezgb."""
+
 import email.utils
 import logging
 import re
@@ -101,6 +102,7 @@ def _get_clean_msgid(msg: EmailMessage) -> Optional[str]:
 
 def _sort_by_date(msgs: list[EmailMessage]) -> list[EmailMessage]:
     """Sort messages by their Date header, oldest first."""
+
     def _date_key(msg: EmailMessage) -> float:
         raw = msg.get('Date')
         if raw:
@@ -109,11 +111,14 @@ def _sort_by_date(msgs: list[EmailMessage]) -> list[EmailMessage]:
             )
             return parsed.timestamp()
         return 0.0
+
     return sorted(msgs, key=_date_key)
 
 
 def import_thread(
-    repo: GitBugRepo, msgid: str, noparent: bool = False,
+    repo: GitBugRepo,
+    msgid: str,
+    noparent: bool = False,
 ) -> Bug:
     """Import a lore.kernel.org thread as a new git-bug bug.
 
@@ -132,9 +137,7 @@ def import_thread(
     if noparent:
         filtered = b4.get_strict_thread(msgs, msgid, noparent=True)
         if not filtered:
-            raise RuntimeError(
-                f'No messages in sub-thread for {msgid}'
-            )
+            raise RuntimeError(f'No messages in sub-thread for {msgid}')
         msgs = filtered
 
     msgs = b4.mbox.minimize_thread(msgs)
@@ -181,7 +184,8 @@ def import_thread(
         subject = '(no subject)'
     scope = 'no-parent' if noparent else ''
     bug = repo.create_bug(
-        title=subject, body=format_comment(root, scope=scope),
+        title=subject,
+        body=format_comment(root, scope=scope),
     )
 
     # Add follow-up messages as comments, sorted by date
diff --git a/src/b4/bugs/_tui.py b/src/b4/bugs/_tui.py
index 998e6bb..aaf079f 100644
--- a/src/b4/bugs/_tui.py
+++ b/src/b4/bugs/_tui.py
@@ -3,6 +3,7 @@
 # SPDX-License-Identifier: GPL-2.0-or-later
 # Copyright (C) 2020 by the Linux Foundation
 """Textual TUI for b4 bugs."""
+
 import email.message
 import email.utils
 import hashlib
@@ -59,25 +60,25 @@ logger = logging.getLogger('b4')
 # Material UI colors used by git-bug for deterministic label coloring.
 # See entities/common/label.go in git-bug.
 _LABEL_COLORS = [
-    (244, 67, 54),    # red
-    (233, 30, 99),    # pink
-    (156, 39, 176),   # purple
-    (103, 58, 183),   # deepPurple
-    (63, 81, 181),    # indigo
-    (33, 150, 243),   # blue
-    (3, 169, 244),    # lightBlue
-    (0, 188, 212),    # cyan
-    (0, 150, 136),    # teal
-    (76, 175, 80),    # green
-    (139, 195, 74),   # lightGreen
-    (205, 220, 57),   # lime
-    (255, 235, 59),   # yellow
-    (255, 193, 7),    # amber
-    (255, 152, 0),    # orange
-    (255, 87, 34),    # deepOrange
-    (121, 85, 72),    # brown
+    (244, 67, 54),  # red
+    (233, 30, 99),  # pink
+    (156, 39, 176),  # purple
+    (103, 58, 183),  # deepPurple
+    (63, 81, 181),  # indigo
+    (33, 150, 243),  # blue
+    (3, 169, 244),  # lightBlue
+    (0, 188, 212),  # cyan
+    (0, 150, 136),  # teal
+    (76, 175, 80),  # green
+    (139, 195, 74),  # lightGreen
+    (205, 220, 57),  # lime
+    (255, 235, 59),  # yellow
+    (255, 193, 7),  # amber
+    (255, 152, 0),  # orange
+    (255, 87, 34),  # deepOrange
+    (121, 85, 72),  # brown
     (158, 158, 158),  # grey
-    (96, 125, 139),   # blueGrey
+    (96, 125, 139),  # blueGrey
 ]
 
 
@@ -94,13 +95,13 @@ def label_color(label: str) -> str:
 
 
 _LIFECYCLE_SYMBOLS: dict[str, str] = {
-    'new':        '\u2605',  # ★ black star
-    'confirmed':  '\u00a4',  # ¤ currency sign (bug-like)
+    'new': '\u2605',  # ★ black star
+    'confirmed': '\u00a4',  # ¤ currency sign (bug-like)
     'worksforme': '\u00f8',  # ø latin small letter o with stroke
-    'needinfo':   '\u203d',  # ‽ interrobang
-    'wontfix':    '\u2260',  # ≠ not equal to
-    'fixed':      '\u2713',  # ✓ check mark
-    'duplicate':  '\u2261',  # ≡ identical to
+    'needinfo': '\u203d',  # ‽ interrobang
+    'wontfix': '\u2260',  # ≠ not equal to
+    'fixed': '\u2713',  # ✓ check mark
+    'duplicate': '\u2261',  # ≡ identical to
 }
 
 
@@ -109,13 +110,13 @@ _LIFECYCLE_SYMBOLS: dict[str, str] = {
 #   1 = waiting (pending external input)
 #   2 = resolved (no action needed)
 _LIFECYCLE_TIER: dict[str, int] = {
-    'new':        0,
-    'confirmed':  0,
-    'needinfo':   1,
+    'new': 0,
+    'confirmed': 0,
+    'needinfo': 1,
     'worksforme': 2,
-    'wontfix':    2,
-    'fixed':      2,
-    'duplicate':  2,
+    'wontfix': 2,
+    'fixed': 2,
+    'duplicate': 2,
 }
 
 
@@ -125,7 +126,7 @@ def _bug_tier(bug: BugLike) -> int:
         return 2
     for lb in bug.labels:
         if lb.startswith('lifecycle:'):
-            state = lb[len('lifecycle:'):]
+            state = lb[len('lifecycle:') :]
             return _LIFECYCLE_TIER.get(state, 0)
     return 0
 
@@ -148,7 +149,7 @@ def _bug_lifecycle(bug: BugLike) -> str:
     """
     for lb in bug.labels:
         if lb.startswith('lifecycle:'):
-            state = lb[len('lifecycle:'):]
+            state = lb[len('lifecycle:') :]
             return _LIFECYCLE_SYMBOLS.get(state, '?')
     if bug.status == Status.CLOSED:
         return '\u00d7'  # × multiplication sign
@@ -193,8 +194,7 @@ def _relative_time(dt: datetime) -> str:
     return f'{years}y ago'
 
 
-def _render_comment(viewer: RichLog, text: str,
-                    ts: dict[str, str]) -> None:
+def _render_comment(viewer: RichLog, text: str, ts: dict[str, str]) -> None:
     """Render an RFC 2822 formatted comment into a RichLog."""
     if '\n\n' in text:
         header_block, body = text.split('\n\n', 1)
@@ -206,8 +206,8 @@ def _render_comment(viewer: RichLog, text: str,
         colon = line.find(':')
         if colon > 0:
             hdr_text = Text()
-            hdr_text.append(line[:colon + 1], style='bold')
-            hdr_text.append(line[colon + 1:])
+            hdr_text.append(line[: colon + 1], style='bold')
+            hdr_text.append(line[colon + 1 :])
             viewer.write(hdr_text)
         else:
             viewer.write(Text(line))
@@ -225,6 +225,7 @@ def _render_comment(viewer: RichLog, text: str,
 
 # -- List item widget --------------------------------------------------------
 
+
 def _bug_submitter(bug: Bug) -> str:
     """Get the submitter name from the first comment's From header."""
     if bug.comments:
@@ -253,8 +254,7 @@ class BugListItem(ListItem):
             count = bug.comment_count
         else:
             submitter = _bug_submitter(bug)
-            count = sum(1 for c in bug.comments
-                        if not is_comment_removed(c.text))
+            count = sum(1 for c in bug.comments if not is_comment_removed(c.text))
         if display_width(submitter) > 20:
             while display_width(submitter) > 19:
                 submitter = submitter[:-1]
@@ -276,10 +276,11 @@ class BugListItem(ListItem):
 
 # -- Modal screens -----------------------------------------------------------
 
+
 class ImportScreen(ModalScreen[Optional[str]]):
     """Modal for importing a lore thread by message-id."""
 
-    DEFAULT_CSS = '''
+    DEFAULT_CSS = """
     ImportScreen {
         align: center middle;
     }
@@ -295,7 +296,7 @@ class ImportScreen(ModalScreen[Optional[str]]):
         height: auto;
         margin-top: 1;
     }
-    '''
+    """
 
     BINDINGS = [
         Binding('escape', 'cancel', 'cancel'),
@@ -305,8 +306,7 @@ class ImportScreen(ModalScreen[Optional[str]]):
         with Vertical(id='import-dialog') as dialog:
             dialog.border_title = 'Import thread from lore'
             yield Input(placeholder='Message-ID or lore URL', id='import-msgid')
-            yield Checkbox('Ignore parent messages in thread',
-                           id='import-noparent')
+            yield Checkbox('Ignore parent messages in thread', id='import-noparent')
             yield Static('', id='import-status')
 
     def on_input_submitted(self, event: Input.Submitted) -> None:
@@ -325,11 +325,14 @@ class ImportScreen(ModalScreen[Optional[str]]):
         status.update('Importing...')
         self.run_worker(
             lambda: self._do_import(msgid, noparent),
-            name='import', thread=True, exit_on_error=False,
+            name='import',
+            thread=True,
+            exit_on_error=False,
         )
 
     def _do_import(self, msgid: str, noparent: bool) -> str:
         from b4.bugs._import import import_thread
+
         app = self.app
         if not isinstance(app, BugListApp):
             raise RuntimeError('ImportScreen must be used with BugListApp')
@@ -337,9 +340,7 @@ class ImportScreen(ModalScreen[Optional[str]]):
             with _quiet_worker():
                 bug = import_thread(app.repo, msgid, noparent=noparent)
         except RuntimeError:
-            raise RuntimeError(
-                'Could not retrieve message thread'
-            ) from None
+            raise RuntimeError('Could not retrieve message thread') from None
         return bug.id
 
     async def on_worker_state_changed(self, event: Worker.StateChanged) -> None:
@@ -366,6 +367,7 @@ class CommentItem(ListItem):
 
     def compose(self) -> ComposeResult:
         from textual.widgets import Label
+
         st = Label(f'  {self._display_name}', markup=False)
         st.styles.text_style = 'dim'
         yield st
@@ -374,7 +376,7 @@ class CommentItem(ListItem):
 class BugDetailScreen(ModalScreen[None]):
     """Full-screen bug detail view with left pane navigation."""
 
-    DEFAULT_CSS = '''
+    DEFAULT_CSS = """
     BugDetailScreen {
         background: $surface;
     }
@@ -419,7 +421,7 @@ class BugDetailScreen(ModalScreen[None]):
     #detail-log {
         width: 3fr;
     }
-    '''
+    """
 
     BINDINGS = [
         Binding('a', 'bug_action', 'action'),
@@ -469,8 +471,7 @@ class BugDetailScreen(ModalScreen[None]):
         with Horizontal(id='detail-body'):
             with Vertical(id='comment-list-pane'):
                 yield ListView(id='comment-list')
-            yield RichLog(id='detail-log', wrap=True, markup=False,
-                          auto_scroll=False)
+            yield RichLog(id='detail-log', wrap=True, markup=False, auto_scroll=False)
         yield SeparatedFooter()
 
     def on_mount(self) -> None:
@@ -479,10 +480,17 @@ class BugDetailScreen(ModalScreen[None]):
             self._ts = app._ts
 
         # Build colour map for commenters
-        palette = reviewer_colours(self._ts) if self._ts else [
-            'dark_goldenrod', 'dark_cyan', 'dark_magenta',
-            'dark_red', 'dark_blue',
-        ]
+        palette = (
+            reviewer_colours(self._ts)
+            if self._ts
+            else [
+                'dark_goldenrod',
+                'dark_cyan',
+                'dark_magenta',
+                'dark_red',
+                'dark_blue',
+            ]
+        )
         emails: list[str] = []
         for comment in self.bug.comments:
             addr = ''
@@ -509,15 +517,15 @@ class BugDetailScreen(ModalScreen[None]):
                 if parent_idx is not None:
                     parent_depth = self._comment_depths.get(parent_idx, 0)
                     self._comment_depths[i] = min(
-                        parent_depth + 1, self._MAX_DEPTH,
+                        parent_depth + 1,
+                        self._MAX_DEPTH,
                     )
                     continue
             self._comment_depths[i] = 0
 
         # Build visible comment indices (skip removed)
         self._visible_indices = [
-            i for i, c in enumerate(self.bug.comments)
-            if not is_comment_removed(c.text)
+            i for i, c in enumerate(self.bug.comments) if not is_comment_removed(c.text)
         ]
 
         # Populate left pane
@@ -530,6 +538,7 @@ class BugDetailScreen(ModalScreen[None]):
         def _initial_populate() -> None:
             self._populate_richlog()
             self.query_one('#comment-list', ListView).focus()
+
         self.call_after_refresh(_initial_populate)
 
     def _build_comment_items(self) -> list[CommentItem]:
@@ -571,8 +580,7 @@ class BugDetailScreen(ModalScreen[None]):
         subsequent appends, causing stale child counts.
         """
         self._visible_indices = [
-            i for i, c in enumerate(self.bug.comments)
-            if not is_comment_removed(c.text)
+            i for i, c in enumerate(self.bug.comments) if not is_comment_removed(c.text)
         ]
         # Replace the ListView widget and rebuild the RichLog in a
         # single batch so the screen doesn't flicker mid-rebuild.
@@ -600,7 +608,10 @@ class BugDetailScreen(ModalScreen[None]):
         return self._ts.get('accent', 'cyan')
 
     def _render_comment_panel(
-        self, viewer: RichLog, comment: Comment, idx: int,
+        self,
+        viewer: RichLog,
+        comment: Comment,
+        idx: int,
         depth: int = 0,
     ) -> None:
         """Render a comment as a bordered panel in the review app style."""
@@ -611,7 +622,7 @@ class BugDetailScreen(ModalScreen[None]):
             header_block, body = text, ''
 
         colour = self._get_comment_colour(comment)
-        bg = f"on {self._ts['panel']}" if self._ts.get('panel') else 'on grey11'
+        bg = f'on {self._ts["panel"]}' if self._ts.get('panel') else 'on grey11'
 
         # Extract sender name for panel title
         from_hdr = parse_comment_header(text, 'From')
@@ -633,7 +644,7 @@ class BugDetailScreen(ModalScreen[None]):
             colon = line.find(':')
             if colon > 0:
                 hdr_name = line[:colon]
-                hdr_val = line[colon + 1:].strip()
+                hdr_val = line[colon + 1 :].strip()
                 if hdr_name == 'In-Reply-To':
                     continue
                 if hdr_name == 'Message-ID':
@@ -672,6 +683,7 @@ class BugDetailScreen(ModalScreen[None]):
         )
         if depth > 0:
             from rich.padding import Padding
+
             viewer.write(Padding(panel, pad=(0, 0, 0, depth * 2)))
         else:
             viewer.write(panel)
@@ -802,13 +814,15 @@ class BugDetailScreen(ModalScreen[None]):
         with self.app.suspend():
             try:
                 result = b4.edit_in_editor(
-                    template.encode(), filehint='bug-comment.md',
+                    template.encode(),
+                    filehint='bug-comment.md',
                 )
             except Exception as exc:
                 logger.critical('Editor error: %s', exc)
                 return
         # Strip HTML comments and check if anything remains
         import re
+
         text = result.decode(errors='replace')
         text = re.sub(r'<!--.*?-->', '', text, flags=re.DOTALL).strip()
         if not text:
@@ -825,7 +839,8 @@ class BugDetailScreen(ModalScreen[None]):
         """Return the currently selected comment, or None."""
         lv = self.query_one('#comment-list', ListView)
         if lv.highlighted_child is not None and isinstance(
-            lv.highlighted_child, CommentItem,
+            lv.highlighted_child,
+            CommentItem,
         ):
             idx = lv.highlighted_child.comment_idx
             if idx < len(self.bug.comments):
@@ -841,8 +856,9 @@ class BugDetailScreen(ModalScreen[None]):
         # Get message-id from comment — required for reply
         msgid = parse_comment_header(comment.text, 'Message-ID')
         if not msgid:
-            self.notify('No Message-ID in this comment, cannot reply',
-                        severity='warning')
+            self.notify(
+                'No Message-ID in this comment, cannot reply', severity='warning'
+            )
             return
 
         # Fetch the original message from lore
@@ -857,10 +873,12 @@ class BugDetailScreen(ModalScreen[None]):
         """Fetch the original message and compose a reply."""
         # Determine if this bug uses --no-parent scope
         scope = parse_comment_header(
-            self.bug.comments[0].text, 'X-B4-Bug-Scope',
+            self.bug.comments[0].text,
+            'X-B4-Bug-Scope',
         )
         root_msgid = parse_comment_header(
-            self.bug.comments[0].text, 'Message-ID',
+            self.bug.comments[0].text,
+            'Message-ID',
         )
         if not root_msgid:
             root_msgid = msgid
@@ -871,7 +889,8 @@ class BugDetailScreen(ModalScreen[None]):
             msgs = b4.get_pi_thread_by_msgid(fetch_id)
         if not msgs:
             self.app.call_from_thread(
-                self.notify, 'Could not retrieve thread from lore',
+                self.notify,
+                'Could not retrieve thread from lore',
                 severity='error',
             )
             return
@@ -879,7 +898,9 @@ class BugDetailScreen(ModalScreen[None]):
         # Apply --no-parent filter if applicable
         if scope == 'no-parent':
             filtered = b4.get_strict_thread(
-                msgs, fetch_id, noparent=True,
+                msgs,
+                fetch_id,
+                noparent=True,
             )
             if filtered:
                 msgs = filtered
@@ -896,7 +917,8 @@ class BugDetailScreen(ModalScreen[None]):
 
         if target_msg is None:
             self.app.call_from_thread(
-                self.notify, f'Message {msgid} not found in thread',
+                self.notify,
+                f'Message {msgid} not found in thread',
                 severity='error',
             )
             return
@@ -922,21 +944,23 @@ class BugDetailScreen(ModalScreen[None]):
 
         # Schedule the editor open on the main thread
         self.app.call_from_thread(
-            self._open_reply_editor, lmsg, reply_text,
+            self._open_reply_editor,
+            lmsg,
+            reply_text,
         )
 
-    def _open_reply_editor(self, lmsg: 'b4.LoreMessage',
-                           reply_text: str) -> None:
+    def _open_reply_editor(self, lmsg: 'b4.LoreMessage', reply_text: str) -> None:
         """Open editor and show preview (runs on main thread)."""
         self._reply_edit_loop(lmsg, reply_text)
 
-    def _reply_edit_loop(self, lmsg: 'b4.LoreMessage',
-                         reply_text: str,
-                         is_reedit: bool = False) -> None:
+    def _reply_edit_loop(
+        self, lmsg: 'b4.LoreMessage', reply_text: str, is_reedit: bool = False
+    ) -> None:
         with self.app.suspend():
             try:
                 result = b4.edit_in_editor(
-                    reply_text.encode(), filehint='reply.eml',
+                    reply_text.encode(),
+                    filehint='reply.eml',
                 )
             except Exception as exc:
                 logger.critical('Editor error: %s', exc)
@@ -960,7 +984,8 @@ class BugDetailScreen(ModalScreen[None]):
                 self._reply_edit_loop(lmsg, final_text, is_reedit=True)
 
         self.app.push_screen(
-            ReplyPreviewScreen(reply_msg), callback=_on_preview,
+            ReplyPreviewScreen(reply_msg),
+            callback=_on_preview,
         )
 
     def _send_reply(self, msg: 'email.message.EmailMessage') -> None:
@@ -973,7 +998,8 @@ class BugDetailScreen(ModalScreen[None]):
             try:
                 smtp, fromaddr = b4.get_smtp(dryrun=dryrun)
                 sent = b4.send_mail(
-                    smtp, [msg],
+                    smtp,
+                    [msg],
                     fromaddr=fromaddr,
                     patatt_sign=patatt_sign,
                     dryrun=dryrun,
@@ -990,6 +1016,7 @@ class BugDetailScreen(ModalScreen[None]):
             self.notify('Reply sent')
         # Record the reply as a comment on the bug
         from b4.bugs._import import format_comment
+
         comment_text = format_comment(msg)
         app.repo.add_comment(self.bug.id, comment_text)
         app.repo.invalidate(self.bug.id)
@@ -1021,8 +1048,9 @@ class BugDetailScreen(ModalScreen[None]):
             if not isinstance(app, BugListApp):
                 return
             usercfg = b4.get_user_config()
-            identity = (f'{usercfg.get("name", "Unknown")} '
-                        f'<{usercfg.get("email", "unknown")}>')
+            identity = (
+                f'{usercfg.get("name", "Unknown")} <{usercfg.get("email", "unknown")}>'
+            )
             tombstone = make_tombstone(comment.text, identity)
             app.repo.edit_comment(self.bug.id, comment.id, tombstone)
             self._refresh_bug_view()
@@ -1032,8 +1060,10 @@ class BugDetailScreen(ModalScreen[None]):
         self.app.push_screen(
             ConfirmScreen(
                 title='Remove comment?',
-                body=[f'From: {sender}',
-                      'The comment body will be permanently removed.'],
+                body=[
+                    f'From: {sender}',
+                    'The comment body will be permanently removed.',
+                ],
                 border='$warning',
             ),
             callback=_on_confirm,
@@ -1041,6 +1071,7 @@ class BugDetailScreen(ModalScreen[None]):
 
     def action_edit_title(self) -> None:
         """Edit the bug title."""
+
         def _on_result(new_title: Optional[str]) -> None:
             if not new_title:
                 return
@@ -1057,7 +1088,8 @@ class BugDetailScreen(ModalScreen[None]):
             self.query_one('#detail-header', Static).update(header)
 
         self.app.push_screen(
-            EditTitleScreen(self.bug.title), callback=_on_result,
+            EditTitleScreen(self.bug.title),
+            callback=_on_result,
         )
 
     def action_add_label(self) -> None:
@@ -1093,12 +1125,14 @@ class BugDetailScreen(ModalScreen[None]):
                 return
             bid = self.bug.id
             if action == 'delete':
+
                 def _on_delete(confirmed: bool | None) -> None:
                     if not confirmed:
                         return
                     app.repo.remove_bug(bid)
                     app.repo.invalidate()
                     self.dismiss(None)
+
                 self.app.push_screen(
                     ConfirmScreen(
                         title='Delete bug?',
@@ -1110,14 +1144,14 @@ class BugDetailScreen(ModalScreen[None]):
                 )
                 return
             if action == 'duplicate':
+
                 def _on_dup(target_id: Optional[str]) -> None:
                     if not target_id:
                         return
                     target = app.repo.get_bug(target_id)
                     app.repo.add_comment(
                         bid,
-                        f'Closing as duplicate of {target.id[:7]}: '
-                        f'{target.title}',
+                        f'Closing as duplicate of {target.id[:7]}: {target.title}',
                     )
                     for lb in self.bug.labels:
                         if lb.startswith('lifecycle:'):
@@ -1126,6 +1160,7 @@ class BugDetailScreen(ModalScreen[None]):
                     app.repo.set_status(bid, Status.CLOSED)
                     app.repo.invalidate()
                     self.dismiss(None)
+
                 self.app.push_screen(
                     DuplicateScreen(app.repo, self.bug),
                     callback=_on_dup,
@@ -1151,8 +1186,8 @@ class BugDetailScreen(ModalScreen[None]):
                 self._refresh_bug_view()
 
         self.app.push_screen(
-            ActionScreen(actions, shortcuts=_ACTION_SHORTCUTS),
-            callback=_on_result)
+            ActionScreen(actions, shortcuts=_ACTION_SHORTCUTS), callback=_on_result
+        )
 
     def action_back(self) -> None:
         self.dismiss(None)
@@ -1164,7 +1199,7 @@ class ReplyPreviewScreen(ModalScreen[Optional[str]]):
     Returns 'send' to send, 'edit' to re-edit, or None to abandon.
     """
 
-    DEFAULT_CSS = '''
+    DEFAULT_CSS = """
     ReplyPreviewScreen {
         background: $surface;
     }
@@ -1183,7 +1218,7 @@ class ReplyPreviewScreen(ModalScreen[Optional[str]]):
         padding: 0 1;
         color: $text-muted;
     }
-    '''
+    """
 
     BINDINGS = [
         Binding('S', 'send', 'Send'),
@@ -1242,6 +1277,7 @@ class ReplyPreviewScreen(ModalScreen[Optional[str]]):
 
     def action_edit_tocc(self) -> None:
         from b4.tui import ToCcScreen
+
         to_val = b4.LoreMessage.clean_header(self._msg.get('To', ''))
         cc_val = b4.LoreMessage.clean_header(self._msg.get('Cc', ''))
         screen = ToCcScreen(to_val, cc_val, '', show_apply_all=False)
@@ -1281,7 +1317,6 @@ class ReplyPreviewScreen(ModalScreen[Optional[str]]):
         self.query_one('#reply-preview-log', RichLog).scroll_page_up()
 
 
-
 class LabelOption(ListItem):
     """A toggleable label option in the label selection dialog."""
 
@@ -1292,6 +1327,7 @@ class LabelOption(ListItem):
 
     def compose(self) -> ComposeResult:
         from textual.widgets import Label
+
         mark = 'x' if self.selected else ' '
         text = Text()
         text.append(f'[{mark}] ')
@@ -1302,6 +1338,7 @@ class LabelOption(ListItem):
     def toggle(self) -> None:
         self.selected = not self.selected
         from textual.widgets import Label
+
         mark = 'x' if self.selected else ' '
         text = Text()
         text.append(f'[{mark}] ')
@@ -1322,7 +1359,7 @@ class LabelScreen(JKListNavMixin, ModalScreen[Optional[dict[str, list[str]]]]):
 
     _list_id = '#label-list'
 
-    DEFAULT_CSS = '''
+    DEFAULT_CSS = """
     LabelScreen {
         align: center middle;
     }
@@ -1342,7 +1379,7 @@ class LabelScreen(JKListNavMixin, ModalScreen[Optional[dict[str, list[str]]]]):
         margin-top: 1;
         color: $text-muted;
     }
-    '''
+    """
 
     BINDINGS = [
         Binding('space', 'toggle_item', 'Toggle', show=False),
@@ -1355,7 +1392,8 @@ class LabelScreen(JKListNavMixin, ModalScreen[Optional[dict[str, list[str]]]]):
     ]
 
     def __init__(
-        self, current_labels: set[str] | frozenset[str],
+        self,
+        current_labels: set[str] | frozenset[str],
         suggestions: list[str],
     ) -> None:
         super().__init__()
@@ -1365,10 +1403,7 @@ class LabelScreen(JKListNavMixin, ModalScreen[Optional[dict[str, list[str]]]]):
         self._suggestions = suggestions
 
     def compose(self) -> ComposeResult:
-        items = [
-            LabelOption(lb, initially_selected=True)
-            for lb in self._current
-        ]
+        items = [LabelOption(lb, initially_selected=True) for lb in self._current]
         with Vertical(id='label-dialog') as dialog:
             dialog.border_title = 'Select labels'
             if not items:
@@ -1381,7 +1416,9 @@ class LabelScreen(JKListNavMixin, ModalScreen[Optional[dict[str, list[str]]]]):
             else:
                 yield ListView(*items, id='label-list')
                 yield Static(
-                    Text('space toggle  |  [a] add new  |  Enter save  |  Escape cancel'),
+                    Text(
+                        'space toggle  |  [a] add new  |  Enter save  |  Escape cancel'
+                    ),
                     id='label-hint',
                 )
 
@@ -1391,7 +1428,8 @@ class LabelScreen(JKListNavMixin, ModalScreen[Optional[dict[str, list[str]]]]):
     def action_toggle_item(self) -> None:
         lv = self.query_one('#label-list', ListView)
         if lv.highlighted_child is not None and isinstance(
-            lv.highlighted_child, LabelOption,
+            lv.highlighted_child,
+            LabelOption,
         ):
             lv.highlighted_child.toggle()
 
@@ -1405,7 +1443,9 @@ class LabelScreen(JKListNavMixin, ModalScreen[Optional[dict[str, list[str]]]]):
         has_items = len(lv.children) > 0
         hint = self.query_one('#label-hint', Static)
         if has_items:
-            hint.update(Text('space toggle  |  [a] add new  |  Enter save  |  Escape cancel'))
+            hint.update(
+                Text('space toggle  |  [a] add new  |  Enter save  |  Escape cancel')
+            )
         else:
             hint.update(Text('[a] add new  |  Escape cancel'))
 
@@ -1425,6 +1465,7 @@ class LabelScreen(JKListNavMixin, ModalScreen[Optional[dict[str, list[str]]]]):
                 lv.index = len(lv.children) - 1
                 lv.focus()
                 self._update_hint()
+
         self.app.push_screen(
             AddLabelScreen(self._suggestions),
             callback=_on_added,
@@ -1457,7 +1498,7 @@ class LabelScreen(JKListNavMixin, ModalScreen[Optional[dict[str, list[str]]]]):
 class AddLabelScreen(ModalScreen[Optional[str]]):
     """Text input for adding a brand-new label, with suggestions."""
 
-    DEFAULT_CSS = '''
+    DEFAULT_CSS = """
     AddLabelScreen {
         align: center middle;
     }
@@ -1468,7 +1509,7 @@ class AddLabelScreen(ModalScreen[Optional[str]]):
         background: $surface;
         padding: 1 2;
     }
-    '''
+    """
 
     BINDINGS = [
         Binding('escape', 'cancel', 'cancel'),
@@ -1481,7 +1522,8 @@ class AddLabelScreen(ModalScreen[Optional[str]]):
     def compose(self) -> ComposeResult:
         suggester = (
             SuggestFromList(self._suggestions, case_sensitive=False)
-            if self._suggestions else None
+            if self._suggestions
+            else None
         )
         with Vertical(id='addlabel-dialog') as dialog:
             dialog.border_title = 'Add label'
@@ -1505,21 +1547,21 @@ class AddLabelScreen(ModalScreen[Optional[str]]):
 
 # Shortcut keys for the bug action selector.
 _ACTION_SHORTCUTS: dict[str, str] = {
-    'confirmed':  'c',
-    'needinfo':   'n',
+    'confirmed': 'c',
+    'needinfo': 'n',
     'worksforme': 'w',
-    'wontfix':    'x',
-    'fixed':      'f',
-    'duplicate':  'd',
-    'reopen':     'r',
-    'delete':     'D',
+    'wontfix': 'x',
+    'fixed': 'f',
+    'duplicate': 'd',
+    'reopen': 'r',
+    'delete': 'D',
 }
 
 
 class UpdateBugsScreen(ModalScreen[Optional[dict[str, int]]]):
     """Modal showing progress while updating bugs from lore."""
 
-    DEFAULT_CSS = '''
+    DEFAULT_CSS = """
     UpdateBugsScreen {
         align: center middle;
     }
@@ -1537,7 +1579,7 @@ class UpdateBugsScreen(ModalScreen[Optional[dict[str, int]]]):
         margin-top: 1;
         color: $text-muted;
     }
-    '''
+    """
 
     BINDINGS = [
         Binding('escape', 'cancel', 'cancel'),
@@ -1550,11 +1592,14 @@ class UpdateBugsScreen(ModalScreen[Optional[dict[str, int]]]):
         self._repo = repo
         self._cancelled = False
         self._result: dict[str, int] = {
-            'checked': 0, 'updated': 0, 'new_comments': 0,
+            'checked': 0,
+            'updated': 0,
+            'new_comments': 0,
         }
 
     def compose(self) -> ComposeResult:
         from textual.widgets import Label, ProgressBar
+
         count = len(self._bugs)
         title = f'Updating {count} bug(s)' if count > 1 else 'Updating bug'
         with Vertical(id='update-dialog') as dialog:
@@ -1565,16 +1610,21 @@ class UpdateBugsScreen(ModalScreen[Optional[dict[str, int]]]):
             )
             yield Label('', id='update-bug', markup=False)
             yield ProgressBar(
-                total=count, show_eta=False, id='update-progress',
+                total=count,
+                show_eta=False,
+                id='update-progress',
             )
 
     def on_mount(self) -> None:
         self.run_worker(
-            self._do_updates, name='_do_updates', thread=True,
+            self._do_updates,
+            name='_do_updates',
+            thread=True,
         )
 
     def _update_progress(self, completed: int, title: str) -> None:
         from textual.widgets import Label, ProgressBar
+
         count = len(self._bugs)
         self.query_one('#update-status', Label).update(
             f'Checking {completed}/{count} bugs...',
@@ -1584,12 +1634,15 @@ class UpdateBugsScreen(ModalScreen[Optional[dict[str, int]]]):
 
     def _do_updates(self) -> dict[str, int]:
         from b4.bugs._import import refresh_bug
+
         with _quiet_worker():
             for i, bug in enumerate(self._bugs):
                 if self._cancelled:
                     break
                 self.app.call_from_thread(
-                    self._update_progress, i, bug.title,
+                    self._update_progress,
+                    i,
+                    bug.title,
                 )
                 try:
                     count = refresh_bug(self._repo, bug.id)
@@ -1600,12 +1653,15 @@ class UpdateBugsScreen(ModalScreen[Optional[dict[str, int]]]):
                     self._result['updated'] += 1
                     self._result['new_comments'] += count
                 self.app.call_from_thread(
-                    self._update_progress, i + 1, bug.title,
+                    self._update_progress,
+                    i + 1,
+                    bug.title,
                 )
         return self._result
 
     async def on_worker_state_changed(
-        self, event: Worker.StateChanged,
+        self,
+        event: Worker.StateChanged,
     ) -> None:
         if event.worker.name != '_do_updates':
             return
@@ -1624,7 +1680,7 @@ class DuplicateScreen(ModalScreen[Optional[str]]):
     Returns the resolved bug ID on confirm, or None on cancel.
     """
 
-    DEFAULT_CSS = '''
+    DEFAULT_CSS = """
     DuplicateScreen {
         align: center middle;
     }
@@ -1640,7 +1696,7 @@ class DuplicateScreen(ModalScreen[Optional[str]]):
         margin-top: 1;
         color: $text-muted;
     }
-    '''
+    """
 
     BINDINGS = [
         Binding('escape', 'cancel', 'cancel'),
@@ -1655,8 +1711,7 @@ class DuplicateScreen(ModalScreen[Optional[str]]):
         with Vertical(id='dup-dialog') as dialog:
             dialog.border_title = 'Close as duplicate of'
             yield Input(placeholder='Bug ID', id='dup-input')
-            yield Static('Enter confirm  |  Escape cancel',
-                         id='dup-status')
+            yield Static('Enter confirm  |  Escape cancel', id='dup-status')
 
     def on_mount(self) -> None:
         self.query_one('#dup-input', Input).focus()
@@ -1685,7 +1740,7 @@ class DuplicateScreen(ModalScreen[Optional[str]]):
 class EditTitleScreen(ModalScreen[Optional[str]]):
     """Edit a bug's title."""
 
-    DEFAULT_CSS = '''
+    DEFAULT_CSS = """
     EditTitleScreen {
         align: center middle;
     }
@@ -1700,7 +1755,7 @@ class EditTitleScreen(ModalScreen[Optional[str]]):
         margin-top: 1;
         color: $text-muted;
     }
-    '''
+    """
 
     BINDINGS = [
         Binding('escape', 'cancel', 'cancel'),
@@ -1714,8 +1769,7 @@ class EditTitleScreen(ModalScreen[Optional[str]]):
         with Vertical(id='edit-title-dialog') as dialog:
             dialog.border_title = 'Edit title'
             yield Input(value=self._current_title, id='edit-title-input')
-            yield Static('Enter save  |  Escape cancel',
-                         id='edit-title-hint')
+            yield Static('Enter save  |  Escape cancel', id='edit-title-hint')
 
     def on_mount(self) -> None:
         self.query_one('#edit-title-input', Input).focus()
@@ -1733,6 +1787,7 @@ class EditTitleScreen(ModalScreen[Optional[str]]):
 
 # -- Main app ----------------------------------------------------------------
 
+
 class BugListApp(JKListNavMixin, App[None]):
     """Bug management TUI backed by git-bug via ezgb."""
 
@@ -1740,7 +1795,7 @@ class BugListApp(JKListNavMixin, App[None]):
 
     _list_id = '#bug-list'
 
-    DEFAULT_CSS = '''
+    DEFAULT_CSS = """
     BugListApp {
         layout: vertical;
     }
@@ -1780,7 +1835,7 @@ class BugListApp(JKListNavMixin, App[None]):
         width: 14;
         text-style: bold;
     }
-    '''
+    """
 
     BINDINGS = [
         Binding('j', 'cursor_down', 'down', show=False),
@@ -1812,9 +1867,9 @@ class BugListApp(JKListNavMixin, App[None]):
         'action_quit': 'global',
     }
 
-    def __init__(self, repo: GitBugRepo, *,
-                 email_dryrun: bool = False,
-                 no_sign: bool = False) -> None:
+    def __init__(
+        self, repo: GitBugRepo, *, email_dryrun: bool = False, no_sign: bool = False
+    ) -> None:
         super().__init__()
         self.repo = repo
         self.email_dryrun = email_dryrun
@@ -1832,7 +1887,9 @@ class BugListApp(JKListNavMixin, App[None]):
 
     def compose(self) -> ComposeResult:
         yield Static('b4 bugs', id='title-bar')
-        header_text = f'{"ID":<7s}  {"Submitter":<20s}  {"Msgs":>4s}     {"S"}  {"Subject"}'
+        header_text = (
+            f'{"ID":<7s}  {"Submitter":<20s}  {"Msgs":>4s}     {"S"}  {"Subject"}'
+        )
         yield Static(header_text, id='column-header')
         yield ListView(id='bug-list')
         with Vertical(id='details-panel'):
@@ -1882,8 +1939,7 @@ class BugListApp(JKListNavMixin, App[None]):
         if current_mtime != self._cache_mtime:
             self._cache_mtime = current_mtime
             self._save_focus()
-            self.run_worker(self._load_bugs, name='load_bugs',
-                            thread=True)
+            self.run_worker(self._load_bugs, name='load_bugs', thread=True)
 
     def on_mount(self) -> None:
         self._ts = resolve_styles(self)
@@ -1947,20 +2003,18 @@ class BugListApp(JKListNavMixin, App[None]):
         # or the limit pattern has an explicit s: token
         has_explicit_status = self._has_status_token(self._limit_pattern)
         if not self._show_closed and not has_explicit_status:
-            display_bugs = [
-                b for b in display_bugs if b.status == Status.OPEN
-            ]
+            display_bugs = [b for b in display_bugs if b.status == Status.OPEN]
 
         if self._limit_pattern:
             display_bugs = [
-                b for b in display_bugs
-                if self._matches_limit(b, self._limit_pattern)
+                b for b in display_bugs if self._matches_limit(b, self._limit_pattern)
             ]
 
         # Sort: by last activity (newest first) within each tier,
         # then by tier (active → waiting → resolved).
         display_bugs.sort(
-            key=_bug_last_activity, reverse=True,
+            key=_bug_last_activity,
+            reverse=True,
         )
         display_bugs.sort(key=_bug_tier)
 
@@ -1968,7 +2022,9 @@ class BugListApp(JKListNavMixin, App[None]):
 
         items: list[BugListItem] = []
         for bug in display_bugs:
-            count = bug.comment_count if isinstance(bug, BugSummary) else len(bug.comments)
+            count = (
+                bug.comment_count if isinstance(bug, BugSummary) else len(bug.comments)
+            )
             seen = self._seen_counts.get(bug.id, count)
             unseen = max(0, count - seen)
             items.append(BugListItem(bug, unseen=unseen))
@@ -1999,7 +2055,11 @@ class BugListApp(JKListNavMixin, App[None]):
             # new comments that arrived since the baseline.
             for bug in self._all_bugs:
                 if bug.id not in self._seen_counts:
-                    count = bug.comment_count if isinstance(bug, BugSummary) else len(bug.comments)
+                    count = (
+                        bug.comment_count
+                        if isinstance(bug, BugSummary)
+                        else len(bug.comments)
+                    )
                     self._seen_counts[bug.id] = count
             # Collect all known labels across all bugs
             all_labels: set[str] = set()
@@ -2023,7 +2083,7 @@ class BugListApp(JKListNavMixin, App[None]):
         lifecycle = ''
         for lb in bug.labels:
             if lb.startswith('lifecycle:'):
-                lifecycle = lb[len('lifecycle:'):]
+                lifecycle = lb[len('lifecycle:') :]
                 break
         git_status = 'open' if bug.status == Status.OPEN else 'closed'
         if lifecycle:
@@ -2050,15 +2110,13 @@ class BugListApp(JKListNavMixin, App[None]):
             self.query_one('#detail-labels', Static).update('none')
 
         created_str = (
-            f'{bug.created_at:%Y-%m-%d %H:%M} '
-            f'({_relative_time(bug.created_at)})'
+            f'{bug.created_at:%Y-%m-%d %H:%M} ({_relative_time(bug.created_at)})'
         )
         self.query_one('#detail-created', Static).update(created_str)
 
         if isinstance(bug, BugSummary):
             edited_str = (
-                f'{bug.edited_at:%Y-%m-%d %H:%M} '
-                f'({_relative_time(bug.edited_at)})'
+                f'{bug.edited_at:%Y-%m-%d %H:%M} ({_relative_time(bug.edited_at)})'
             )
             self.query_one('#detail-last-activity', Static).update(edited_str)
             self.query_one('#detail-comments', Static).update(
@@ -2072,8 +2130,7 @@ class BugListApp(JKListNavMixin, App[None]):
                 f'by {last.author.name}'
             )
             self.query_one('#detail-last-activity', Static).update(last_str)
-            visible = sum(1 for c in bug.comments
-                          if not is_comment_removed(c.text))
+            visible = sum(1 for c in bug.comments if not is_comment_removed(c.text))
             self.query_one('#detail-comments', Static).update(
                 str(visible),
             )
@@ -2090,7 +2147,11 @@ class BugListApp(JKListNavMixin, App[None]):
             # Mark as seen so the badge clears on return
             bug_id = event.item.bug.id
             bug_obj = event.item.bug
-            count = bug_obj.comment_count if isinstance(bug_obj, BugSummary) else len(bug_obj.comments)
+            count = (
+                bug_obj.comment_count
+                if isinstance(bug_obj, BugSummary)
+                else len(bug_obj.comments)
+            )
             self._seen_counts[bug_id] = count
             # Load full Bug on demand (CachedBug doesn't have comments)
             bug = self.repo.get_bug(bug_id)
@@ -2098,17 +2159,17 @@ class BugListApp(JKListNavMixin, App[None]):
             def _on_dismiss(_result: None) -> None:
                 self.repo.invalidate()
                 self._save_focus()
-                self.run_worker(self._load_bugs, name='load_bugs',
-                                thread=True)
+                self.run_worker(self._load_bugs, name='load_bugs', thread=True)
 
-            self.push_screen(BugDetailScreen(bug),
-                             callback=_on_dismiss)
+            self.push_screen(BugDetailScreen(bug), callback=_on_dismiss)
 
     # -- Actions -------------------------------------------------------------
 
     def _get_selected_bug(self) -> Optional[BugLike]:
         lv = self.query_one('#bug-list', ListView)
-        if lv.highlighted_child is not None and isinstance(lv.highlighted_child, BugListItem):
+        if lv.highlighted_child is not None and isinstance(
+            lv.highlighted_child, BugListItem
+        ):
             return lv.highlighted_child.bug
         return None
 
@@ -2125,9 +2186,11 @@ class BugListApp(JKListNavMixin, App[None]):
 
     def action_limit(self) -> None:
         self.push_screen(
-            LimitScreen(self._limit_pattern,
-                        hint='Prefixes: s:<status>  l:<label>',
-                        title='Limit bugs'),
+            LimitScreen(
+                self._limit_pattern,
+                hint='Prefixes: s:<status>  l:<label>',
+                title='Limit bugs',
+            ),
             callback=self._on_limit,
         )
 
@@ -2161,8 +2224,8 @@ class BugListApp(JKListNavMixin, App[None]):
             if result:
                 self._focus_bug_id = result
                 self.repo.invalidate()
-                self.run_worker(self._load_bugs, name='load_bugs',
-                                thread=True)
+                self.run_worker(self._load_bugs, name='load_bugs', thread=True)
+
         self.push_screen(ImportScreen(), callback=_on_result)
 
     def _do_create_new_bug(self) -> None:
@@ -2175,12 +2238,14 @@ class BugListApp(JKListNavMixin, App[None]):
         with self.suspend():
             try:
                 result = b4.edit_in_editor(
-                    template.encode(), filehint='new-bug.md',
+                    template.encode(),
+                    filehint='new-bug.md',
                 )
             except Exception as exc:
                 logger.critical('Editor error: %s', exc)
                 return
         import re
+
         text = result.decode(errors='replace')
         text = re.sub(r'<!--.*?-->', '', text, flags=re.DOTALL).strip()
         if not text:
@@ -2195,11 +2260,11 @@ class BugListApp(JKListNavMixin, App[None]):
         bug = self.repo.create_bug(title, body)
         self._focus_bug_id = bug.id
         self.repo.invalidate()
-        self.run_worker(self._load_bugs, name='load_bugs',
-                        thread=True)
+        self.run_worker(self._load_bugs, name='load_bugs', thread=True)
 
     def _on_update_complete(
-        self, result: Optional[dict[str, int]],
+        self,
+        result: Optional[dict[str, int]],
     ) -> None:
         if result:
             updated = result.get('updated', 0)
@@ -2281,8 +2346,7 @@ class BugListApp(JKListNavMixin, App[None]):
             for lb in result.get('add', []):
                 self.repo.add_label(bug.id, lb)
             self.repo.invalidate()
-            self.run_worker(self._load_bugs, name='load_bugs',
-                            thread=True)
+            self.run_worker(self._load_bugs, name='load_bugs', thread=True)
 
         self.push_screen(
             LabelScreen(bug.labels, self._known_labels),
@@ -2306,7 +2370,7 @@ class BugListApp(JKListNavMixin, App[None]):
         lifecycle = 'new'
         for lb in bug.labels:
             if lb.startswith('lifecycle:'):
-                lifecycle = lb[len('lifecycle:'):]
+                lifecycle = lb[len('lifecycle:') :]
                 break
 
         actions: list[tuple[str, str]] = []
@@ -2329,12 +2393,14 @@ class BugListApp(JKListNavMixin, App[None]):
                 ('needinfo', 'Need info'),
             ]
         # Close reasons are always available regardless of lifecycle
-        actions.extend([
-            ('fixed', 'Close: fixed'),
-            ('worksforme', 'Close: works for me'),
-            ('wontfix', "Close: won't fix"),
-            ('duplicate', 'Close: duplicate of\u2026'),
-        ])
+        actions.extend(
+            [
+                ('fixed', 'Close: fixed'),
+                ('worksforme', 'Close: works for me'),
+                ('wontfix', "Close: won't fix"),
+                ('duplicate', 'Close: duplicate of\u2026'),
+            ]
+        )
         actions.append(('delete', 'Delete bug'))
         return actions
 
@@ -2347,8 +2413,9 @@ class BugListApp(JKListNavMixin, App[None]):
         def _on_result(action: Optional[str]) -> None:
             self._apply_action(bug, action)
 
-        self.push_screen(ActionScreen(actions, shortcuts=_ACTION_SHORTCUTS),
-                         callback=_on_result)
+        self.push_screen(
+            ActionScreen(actions, shortcuts=_ACTION_SHORTCUTS), callback=_on_result
+        )
 
     # Lifecycle states that close the bug
     _CLOSING_STATES = {'worksforme', 'wontfix', 'fixed', 'duplicate'}
@@ -2390,8 +2457,8 @@ class BugListApp(JKListNavMixin, App[None]):
             if confirmed and bug:
                 self.repo.remove_bug(bug.id)
                 self.repo.invalidate()
-                self.run_worker(self._load_bugs, name='load_bugs',
-                                thread=True)
+                self.run_worker(self._load_bugs, name='load_bugs', thread=True)
+
         self.push_screen(
             ConfirmScreen(
                 title='Delete bug?',
@@ -2419,11 +2486,9 @@ class BugListApp(JKListNavMixin, App[None]):
             self.repo.add_label(bid, 'lifecycle:duplicate')
             self.repo.set_status(bid, Status.CLOSED)
             self.repo.invalidate()
-            self.run_worker(self._load_bugs, name='load_bugs',
-                            thread=True)
+            self.run_worker(self._load_bugs, name='load_bugs', thread=True)
 
         self.push_screen(
-            DuplicateScreen(self.repo, bug), callback=_on_result,
+            DuplicateScreen(self.repo, bug),
+            callback=_on_result,
         )
-
-
diff --git a/src/b4/command.py b/src/b4/command.py
index 7ebe79c..35ed44b 100644
--- a/src/b4/command.py
+++ b/src/b4/command.py
@@ -16,134 +16,263 @@ logger = b4.logger
 
 
 def cmd_retrieval_common_opts(sp: argparse.ArgumentParser) -> None:
-    sp.add_argument('msgid', nargs='?',
-                    help='Message ID to process, or pipe a raw message')
-    sp.add_argument('-m', '--use-local-mbox', dest='localmbox', default=None,
-                    help='Instead of grabbing a thread from lore, process this mbox file (or - for stdin)')
-    sp.add_argument('--stdin-pipe-sep',
-                    help='When accepting messages on stdin, split using this pipe separator string')
-    sp.add_argument('-C', '--no-cache', dest='nocache', action='store_true', default=False,
-                    help='Do not use local cache')
-    sp.add_argument('--single-message', dest='singlemsg', action='store_true', default=False,
-                    help='Only retrieve the message matching the msgid and ignore the rest of the thread')
-    sp.add_argument('--rethread', nargs='+', metavar='MSGID', default=None,
-                    help='Rethread multiple unrelated messages into a single series '
-                         '(pass - to read message IDs from stdin, one per line)')
+    sp.add_argument(
+        'msgid', nargs='?', help='Message ID to process, or pipe a raw message'
+    )
+    sp.add_argument(
+        '-m',
+        '--use-local-mbox',
+        dest='localmbox',
+        default=None,
+        help='Instead of grabbing a thread from lore, process this mbox file (or - for stdin)',
+    )
+    sp.add_argument(
+        '--stdin-pipe-sep',
+        help='When accepting messages on stdin, split using this pipe separator string',
+    )
+    sp.add_argument(
+        '-C',
+        '--no-cache',
+        dest='nocache',
+        action='store_true',
+        default=False,
+        help='Do not use local cache',
+    )
+    sp.add_argument(
+        '--single-message',
+        dest='singlemsg',
+        action='store_true',
+        default=False,
+        help='Only retrieve the message matching the msgid and ignore the rest of the thread',
+    )
+    sp.add_argument(
+        '--rethread',
+        nargs='+',
+        metavar='MSGID',
+        default=None,
+        help='Rethread multiple unrelated messages into a single series '
+        '(pass - to read message IDs from stdin, one per line)',
+    )
 
 
 def cmd_mbox_common_opts(sp: argparse.ArgumentParser) -> None:
     cmd_retrieval_common_opts(sp)
-    sp.add_argument('-o', '--outdir', default='.',
-                    help='Output into this directory (or use - to output mailbox contents to stdout)')
-    sp.add_argument('-c', '--check-newer-revisions', dest='checknewer', action='store_true', default=False,
-                    help='Check if newer patch revisions exist')
-    sp.add_argument('-n', '--mbox-name', dest='wantname', default=None,
-                    help='Filename to name the mbox destination')
-    sp.add_argument('-M', '--save-as-maildir', dest='maildir', action='store_true', default=False,
-                    help='Save as maildir (avoids mbox format ambiguities)')
+    sp.add_argument(
+        '-o',
+        '--outdir',
+        default='.',
+        help='Output into this directory (or use - to output mailbox contents to stdout)',
+    )
+    sp.add_argument(
+        '-c',
+        '--check-newer-revisions',
+        dest='checknewer',
+        action='store_true',
+        default=False,
+        help='Check if newer patch revisions exist',
+    )
+    sp.add_argument(
+        '-n',
+        '--mbox-name',
+        dest='wantname',
+        default=None,
+        help='Filename to name the mbox destination',
+    )
+    sp.add_argument(
+        '-M',
+        '--save-as-maildir',
+        dest='maildir',
+        action='store_true',
+        default=False,
+        help='Save as maildir (avoids mbox format ambiguities)',
+    )
 
 
 def cmd_am_common_opts(sp: argparse.ArgumentParser) -> None:
-    sp.add_argument('-v', '--use-version', dest='wantver', type=int, default=None,
-                    help='Get a specific version of the patch/series')
-    sp.add_argument('-t', '--apply-cover-trailers', dest='covertrailers', action='store_true', default=False,
-                    help='(This is now the default behavior; this option will be removed in the future.)')
-    sp.add_argument('-S', '--sloppy-trailers', dest='sloppytrailers', action='store_true', default=False,
-                    help='Apply trailers without email address match checking')
-    sp.add_argument('-T', '--no-add-trailers', dest='noaddtrailers', action='store_true', default=False,
-                    help='Do not add any trailers from follow-up messages')
-    sp.add_argument('-s', '--add-my-sob', dest='addmysob', action='store_true', default=False,
-                    help='Add your own signed-off-by to every patch')
-    sp.add_argument('-P', '--cherry-pick', dest='cherrypick', default=None,
-                    help='Cherry-pick a subset of patches (e.g. "-P 1-2,4,6-", '
-                         '"-P _" to use just the msgid specified, or '
-                         '"-P *globbing*" to match on commit subject)')
-    sp.add_argument('-k', '--check', action='store_true', default=False,
-                    help='Run local checks for every patch (e.g. checkpatch)')
-    sp.add_argument('--cc-trailers', dest='copyccs', action='store_true', default=False,
-                    help='Copy all Cc\'d addresses into Cc: trailers')
-    sp.add_argument('--no-parent', dest='noparent', action='store_true', default=False,
-                    help='Break thread at the msgid specified and ignore any parent messages')
-    sp.add_argument('--allow-unicode-control-chars', dest='allowbadchars', action='store_true', default=False,
-                    help='Allow unicode control characters (very rarely legitimate)')
+    sp.add_argument(
+        '-v',
+        '--use-version',
+        dest='wantver',
+        type=int,
+        default=None,
+        help='Get a specific version of the patch/series',
+    )
+    sp.add_argument(
+        '-t',
+        '--apply-cover-trailers',
+        dest='covertrailers',
+        action='store_true',
+        default=False,
+        help='(This is now the default behavior; this option will be removed in the future.)',
+    )
+    sp.add_argument(
+        '-S',
+        '--sloppy-trailers',
+        dest='sloppytrailers',
+        action='store_true',
+        default=False,
+        help='Apply trailers without email address match checking',
+    )
+    sp.add_argument(
+        '-T',
+        '--no-add-trailers',
+        dest='noaddtrailers',
+        action='store_true',
+        default=False,
+        help='Do not add any trailers from follow-up messages',
+    )
+    sp.add_argument(
+        '-s',
+        '--add-my-sob',
+        dest='addmysob',
+        action='store_true',
+        default=False,
+        help='Add your own signed-off-by to every patch',
+    )
+    sp.add_argument(
+        '-P',
+        '--cherry-pick',
+        dest='cherrypick',
+        default=None,
+        help='Cherry-pick a subset of patches (e.g. "-P 1-2,4,6-", '
+        '"-P _" to use just the msgid specified, or '
+        '"-P *globbing*" to match on commit subject)',
+    )
+    sp.add_argument(
+        '-k',
+        '--check',
+        action='store_true',
+        default=False,
+        help='Run local checks for every patch (e.g. checkpatch)',
+    )
+    sp.add_argument(
+        '--cc-trailers',
+        dest='copyccs',
+        action='store_true',
+        default=False,
+        help="Copy all Cc'd addresses into Cc: trailers",
+    )
+    sp.add_argument(
+        '--no-parent',
+        dest='noparent',
+        action='store_true',
+        default=False,
+        help='Break thread at the msgid specified and ignore any parent messages',
+    )
+    sp.add_argument(
+        '--allow-unicode-control-chars',
+        dest='allowbadchars',
+        action='store_true',
+        default=False,
+        help='Allow unicode control characters (very rarely legitimate)',
+    )
     sa_g = sp.add_mutually_exclusive_group()
-    sa_g.add_argument('-l', '--add-link', dest='addlink', action='store_true', default=False,
-                      help='Add a Link: trailer with message-id lookup URL to every patch')
-    sa_g.add_argument('-i', '--add-message-id', dest='addmsgid', action='store_true', default=False,
-                      help='Add a Message-ID: trailer to every patch')
+    sa_g.add_argument(
+        '-l',
+        '--add-link',
+        dest='addlink',
+        action='store_true',
+        default=False,
+        help='Add a Link: trailer with message-id lookup URL to every patch',
+    )
+    sa_g.add_argument(
+        '-i',
+        '--add-message-id',
+        dest='addmsgid',
+        action='store_true',
+        default=False,
+        help='Add a Message-ID: trailer to every patch',
+    )
 
 
 def cmd_mbox(cmdargs: argparse.Namespace) -> None:
     import b4.mbox
+
     b4.mbox.main(cmdargs)
 
 
 def cmd_kr(cmdargs: argparse.Namespace) -> None:
     import b4.kr
+
     b4.kr.main(cmdargs)
 
 
 def cmd_prep(cmdargs: argparse.Namespace) -> None:
     import b4.ez
+
     b4.ez.cmd_prep(cmdargs)
 
 
 def cmd_trailers(cmdargs: argparse.Namespace) -> None:
     import b4.ez
+
     b4.ez.cmd_trailers(cmdargs)
 
 
 def cmd_send(cmdargs: argparse.Namespace) -> None:
     import b4.ez
+
     b4.ez.cmd_send(cmdargs)
 
 
 def cmd_am(cmdargs: argparse.Namespace) -> None:
     import b4.mbox
+
     b4.mbox.main(cmdargs)
 
 
 def cmd_shazam(cmdargs: argparse.Namespace) -> None:
     import b4.mbox
+
     b4.mbox.main(cmdargs)
 
 
 def cmd_review(cmdargs: argparse.Namespace) -> None:
     import b4.review
+
     b4.review.main(cmdargs)
 
 
 def cmd_bugs(cmdargs: argparse.Namespace) -> None:
     import b4.bugs
+
     b4.bugs.main(cmdargs)
 
 
 def cmd_pr(cmdargs: argparse.Namespace) -> None:
     import b4.pr
+
     b4.pr.main(cmdargs)
 
 
 def cmd_ty(cmdargs: argparse.Namespace) -> None:
     import b4.ty
+
     b4.ty.main(cmdargs)
 
 
 def cmd_diff(cmdargs: argparse.Namespace) -> None:
     import b4.diff
+
     b4.diff.main(cmdargs)
 
 
 def cmd_dig(cmdargs: argparse.Namespace) -> None:
     import b4.dig
+
     b4.dig.main(cmdargs)
 
 
 class ConfigOption(argparse.Action):
     """Action class for storing key=value arguments in a dict."""
-    def __call__(self, parser: argparse.ArgumentParser,
-                 namespace: argparse.Namespace,
-                 keyval: Union[str, Sequence[Any], None],
-                 option_string: Optional[str] = None) -> None:
+
+    def __call__(
+        self,
+        parser: argparse.ArgumentParser,
+        namespace: argparse.Namespace,
+        keyval: Union[str, Sequence[Any], None],
+        option_string: Optional[str] = None,
+    ) -> None:
         config = getattr(namespace, self.dest, None)
 
         if config is None:
@@ -167,26 +296,55 @@ def setup_parser() -> argparse.ArgumentParser:
         formatter_class=argparse.ArgumentDefaultsHelpFormatter,
     )
     parser.add_argument('--version', action='version', version=b4.__VERSION__)
-    parser.add_argument('-d', '--debug', action='store_true', default=False,
-                        help='Add more debugging info to the output')
-    parser.add_argument('-q', '--quiet', action='store_true', default=False,
-                        help='Output critical information only')
-    parser.add_argument('-n', '--no-interactive', action='store_true', default=False,
-                        help='Do not ask any interactive questions')
-    parser.add_argument('--offline-mode', action='store_true', default=False,
-                        help='Do not perform any network queries')
-    parser.add_argument('--no-stdin', action='store_true', default=False,
-                        help='Disable TTY detection for stdin')
-    parser.add_argument('-c', '--config', metavar='NAME=VALUE', action=ConfigOption,
-                        help='''Set config option NAME to VALUE. Override value
+    parser.add_argument(
+        '-d',
+        '--debug',
+        action='store_true',
+        default=False,
+        help='Add more debugging info to the output',
+    )
+    parser.add_argument(
+        '-q',
+        '--quiet',
+        action='store_true',
+        default=False,
+        help='Output critical information only',
+    )
+    parser.add_argument(
+        '-n',
+        '--no-interactive',
+        action='store_true',
+        default=False,
+        help='Do not ask any interactive questions',
+    )
+    parser.add_argument(
+        '--offline-mode',
+        action='store_true',
+        default=False,
+        help='Do not perform any network queries',
+    )
+    parser.add_argument(
+        '--no-stdin',
+        action='store_true',
+        default=False,
+        help='Disable TTY detection for stdin',
+    )
+    parser.add_argument(
+        '-c',
+        '--config',
+        metavar='NAME=VALUE',
+        action=ConfigOption,
+        help="""Set config option NAME to VALUE. Override value
                         from config files. NAME is in dotted section.key
                         format. Using NAME= and omitting VALUE will set the
                         value to the empty string. Using NAME and omitting
-                        =VALUE will set the value to "true".''')
+                        =VALUE will set the value to "true".""",
+    )
 
     try:
         import shtab
-        shtab.add_argument_to(parser, ["--print-completion"])
+
+        shtab.add_argument_to(parser, ['--print-completion'])
     except ImportError:
         pass
 
@@ -195,333 +353,889 @@ def setup_parser() -> argparse.ArgumentParser:
     # b4 mbox
     sp_mbox = subparsers.add_parser('mbox', help='Download a thread as an mbox file')
     cmd_mbox_common_opts(sp_mbox)
-    sp_mbox.add_argument('-f', '--filter-dupes', dest='filterdupes', action='store_true', default=False,
-                         help='When adding messages to existing maildir, filter out duplicates')
+    sp_mbox.add_argument(
+        '-f',
+        '--filter-dupes',
+        dest='filterdupes',
+        action='store_true',
+        default=False,
+        help='When adding messages to existing maildir, filter out duplicates',
+    )
     sm_g = sp_mbox.add_mutually_exclusive_group()
-    sm_g.add_argument('-r', '--refetch', dest='refetch', metavar='MBOX', default=False,
-                      help='Refetch all messages in specified mbox with their original headers')
-    sm_g.add_argument('--minimize', dest='minimize', action='store_true', default=False,
-                      help='Attempt to generate a minimal thread to simplify review.')
+    sm_g.add_argument(
+        '-r',
+        '--refetch',
+        dest='refetch',
+        metavar='MBOX',
+        default=False,
+        help='Refetch all messages in specified mbox with their original headers',
+    )
+    sm_g.add_argument(
+        '--minimize',
+        dest='minimize',
+        action='store_true',
+        default=False,
+        help='Attempt to generate a minimal thread to simplify review.',
+    )
     sp_mbox.set_defaults(func=cmd_mbox)
 
     # b4 am
-    sp_am = subparsers.add_parser('am', help='Create an mbox file that is ready to git-am')
+    sp_am = subparsers.add_parser(
+        'am', help='Create an mbox file that is ready to git-am'
+    )
     cmd_mbox_common_opts(sp_am)
     cmd_am_common_opts(sp_am)
-    sp_am.add_argument('-Q', '--quilt-ready', dest='quiltready', action='store_true', default=False,
-                       help='Save patches in a quilt-ready folder')
-    sp_am.add_argument('-g', '--guess-base', dest='guessbase', action='store_true', default=False,
-                       help='Try to guess the base of the series (if not specified)')
-    sp_am.add_argument('-b', '--guess-branch', dest='guessbranch', nargs='+', action='extend', type=str, default=None,
-                       help='When guessing base, restrict to this branch (use with -g)')
-    sp_am.add_argument('--guess-lookback', dest='guessdays', type=int, default=21,
-                       help='When guessing base, go back this many days from the patch date (default: 2 weeks)')
-    sp_am.add_argument('-3', '--prep-3way', dest='threeway', action='store_true', default=False,
-                       help='Prepare for a 3-way merge '
-                            '(tries to ensure that all index blobs exist by making a fake commit range)')
-    sp_am.add_argument('--no-cover', dest='nocover', action='store_true', default=False,
-                       help='Do not save the cover letter (on by default when using -o -)')
-    sp_am.add_argument('--no-partial-reroll', dest='nopartialreroll', action='store_true', default=False,
-                       help='Do not reroll partial series when detected')
+    sp_am.add_argument(
+        '-Q',
+        '--quilt-ready',
+        dest='quiltready',
+        action='store_true',
+        default=False,
+        help='Save patches in a quilt-ready folder',
+    )
+    sp_am.add_argument(
+        '-g',
+        '--guess-base',
+        dest='guessbase',
+        action='store_true',
+        default=False,
+        help='Try to guess the base of the series (if not specified)',
+    )
+    sp_am.add_argument(
+        '-b',
+        '--guess-branch',
+        dest='guessbranch',
+        nargs='+',
+        action='extend',
+        type=str,
+        default=None,
+        help='When guessing base, restrict to this branch (use with -g)',
+    )
+    sp_am.add_argument(
+        '--guess-lookback',
+        dest='guessdays',
+        type=int,
+        default=21,
+        help='When guessing base, go back this many days from the patch date (default: 3 weeks)',
+    )
+    sp_am.add_argument(
+        '-3',
+        '--prep-3way',
+        dest='threeway',
+        action='store_true',
+        default=False,
+        help='Prepare for a 3-way merge '
+        '(tries to ensure that all index blobs exist by making a fake commit range)',
+    )
+    sp_am.add_argument(
+        '--no-cover',
+        dest='nocover',
+        action='store_true',
+        default=False,
+        help='Do not save the cover letter (on by default when using -o -)',
+    )
+    sp_am.add_argument(
+        '--no-partial-reroll',
+        dest='nopartialreroll',
+        action='store_true',
+        default=False,
+        help='Do not reroll partial series when detected',
+    )
     sp_am.set_defaults(func=cmd_am)
 
     # b4 shazam
-    sp_sh = subparsers.add_parser('shazam', help='Like b4 am, but applies the series to your tree')
+    sp_sh = subparsers.add_parser(
+        'shazam', help='Like b4 am, but applies the series to your tree'
+    )
     cmd_retrieval_common_opts(sp_sh)
     cmd_am_common_opts(sp_sh)
     sh_g = sp_sh.add_mutually_exclusive_group()
-    sh_g.add_argument('-H', '--make-fetch-head', dest='makefetchhead', action='store_true', default=False,
-                      help='Attempt to treat series as a pull request and fetch it into FETCH_HEAD')
-    sh_g.add_argument('-M', '--merge', dest='merge', action='store_true', default=False,
-                      help='Attempt to merge series as if it were a pull request (execs git-merge)')
-    sp_sh.add_argument('--guess-lookback', dest='guessdays', type=int, default=21,
-                       help=('(use with -H or -M) When guessing base, go back this many days from the patch date '
-                             '(default: 3 weeks)'))
-    sp_sh.add_argument('--merge-base', dest='mergebase', type=str, default=None,
-                       help='(use with -H or -M) Force this base when merging')
-    sp_sh.add_argument('--resolve', dest='shazam_resolve', action='store_true', default=False,
-                       help='(use with -H or -M) Enable conflict resolution if patches fail to apply')
-    sp_sh.add_argument('--continue', dest='shazam_continue', action='store_true', default=False,
-                       help='Continue after resolving merge conflicts from --resolve')
-    sp_sh.add_argument('--abort', dest='shazam_abort', action='store_true', default=False,
-                       help='Abort a conflicted shazam and clean up')
+    sh_g.add_argument(
+        '-H',
+        '--make-fetch-head',
+        dest='makefetchhead',
+        action='store_true',
+        default=False,
+        help='Attempt to treat series as a pull request and fetch it into FETCH_HEAD',
+    )
+    sh_g.add_argument(
+        '-M',
+        '--merge',
+        dest='merge',
+        action='store_true',
+        default=False,
+        help='Attempt to merge series as if it were a pull request (execs git-merge)',
+    )
+    sp_sh.add_argument(
+        '--guess-lookback',
+        dest='guessdays',
+        type=int,
+        default=21,
+        help=(
+            '(use with -H or -M) When guessing base, go back this many days from the patch date '
+            '(default: 3 weeks)'
+        ),
+    )
+    sp_sh.add_argument(
+        '--merge-base',
+        dest='mergebase',
+        type=str,
+        default=None,
+        help='(use with -H or -M) Force this base when merging',
+    )
+    sp_sh.add_argument(
+        '--resolve',
+        dest='shazam_resolve',
+        action='store_true',
+        default=False,
+        help='(use with -H or -M) Enable conflict resolution if patches fail to apply',
+    )
+    sp_sh.add_argument(
+        '--continue',
+        dest='shazam_continue',
+        action='store_true',
+        default=False,
+        help='Continue after resolving merge conflicts from --resolve',
+    )
+    sp_sh.add_argument(
+        '--abort',
+        dest='shazam_abort',
+        action='store_true',
+        default=False,
+        help='Abort a conflicted shazam and clean up',
+    )
     sp_sh.set_defaults(func=cmd_shazam)
 
     # b4 review
-    sp_rev = subparsers.add_parser('review', help='Review patch series received on mailing lists')
+    sp_rev = subparsers.add_parser(
+        'review', help='Review patch series received on mailing lists'
+    )
     sp_rev.set_defaults(func=cmd_review)
-    rev_subparsers = sp_rev.add_subparsers(help='review sub-command help', dest='review_subcmd')
+    rev_subparsers = sp_rev.add_subparsers(
+        help='review sub-command help', dest='review_subcmd'
+    )
 
     # b4 review tui
     sp_rev_tui = rev_subparsers.add_parser('tui', help='Browse tracked series in a TUI')
-    sp_rev_tui.add_argument('-i', '--identifier', dest='identifier', default=None,
-                            help='Project identifier (required if not in an enrolled repository)')
-    sp_rev_tui.add_argument('--email-dry-run', dest='email_dryrun', action='store_true', default=False,
-                            help='Show all email dialogs but print messages to stdout instead of sending')
-    sp_rev_tui.add_argument('--no-sign', dest='no_sign', action='store_true', default=False,
-                            help='Do not patatt-sign outgoing review emails')
-    sp_rev_tui.add_argument('--no-mouse', dest='no_mouse', action='store_true', default=False,
-                            help='Disable mouse support in the TUI')
+    sp_rev_tui.add_argument(
+        '-i',
+        '--identifier',
+        dest='identifier',
+        default=None,
+        help='Project identifier (required if not in an enrolled repository)',
+    )
+    sp_rev_tui.add_argument(
+        '--email-dry-run',
+        dest='email_dryrun',
+        action='store_true',
+        default=False,
+        help='Show all email dialogs but print messages to stdout instead of sending',
+    )
+    sp_rev_tui.add_argument(
+        '--no-sign',
+        dest='no_sign',
+        action='store_true',
+        default=False,
+        help='Do not patatt-sign outgoing review emails',
+    )
+    sp_rev_tui.add_argument(
+        '--no-mouse',
+        dest='no_mouse',
+        action='store_true',
+        default=False,
+        help='Disable mouse support in the TUI',
+    )
 
     # b4 review enroll
-    sp_rev_enroll = rev_subparsers.add_parser('enroll', help='Enroll a repository for review tracking')
-    sp_rev_enroll.add_argument('repo_path', nargs='?', default=None,
-                               help='Path to the git repository to enroll (default: current directory)')
-    sp_rev_enroll.add_argument('-i', '--identifier', dest='identifier', default=None,
-                               help='Project identifier (default: repository directory name)')
+    sp_rev_enroll = rev_subparsers.add_parser(
+        'enroll', help='Enroll a repository for review tracking'
+    )
+    sp_rev_enroll.add_argument(
+        'repo_path',
+        nargs='?',
+        default=None,
+        help='Path to the git repository to enroll (default: current directory)',
+    )
+    sp_rev_enroll.add_argument(
+        '-i',
+        '--identifier',
+        dest='identifier',
+        default=None,
+        help='Project identifier (default: repository directory name)',
+    )
 
     # b4 review track
     sp_rev_track = rev_subparsers.add_parser('track', help='Track a series for review')
-    sp_rev_track.add_argument('series_id', nargs='?', default=None,
-                              help='Series identifier (message-id, URL, or change-id); or pipe message to stdin')
-    sp_rev_track.add_argument('-i', '--identifier', dest='identifier', default=None,
-                              help='Project identifier (required if not in an enrolled repository)')
-    sp_rev_track.add_argument('--rethread', nargs='+', metavar='MSGID', default=None,
-                              help='Rethread multiple unrelated messages into a single series for tracking '
-                                   '(pass - to read message IDs from stdin, one per line)')
+    sp_rev_track.add_argument(
+        'series_id',
+        nargs='?',
+        default=None,
+        help='Series identifier (message-id, URL, or change-id); or pipe message to stdin',
+    )
+    sp_rev_track.add_argument(
+        '-i',
+        '--identifier',
+        dest='identifier',
+        default=None,
+        help='Project identifier (required if not in an enrolled repository)',
+    )
+    sp_rev_track.add_argument(
+        '--rethread',
+        nargs='+',
+        metavar='MSGID',
+        default=None,
+        help='Rethread multiple unrelated messages into a single series for tracking '
+        '(pass - to read message IDs from stdin, one per line)',
+    )
 
     # b4 review show-info
-    sp_rev_showinfo = rev_subparsers.add_parser('show-info',
-        help='Show review branch info in a format suitable for scripting')
-    sp_rev_showinfo.add_argument('param', metavar='PARAM', nargs='?',
+    sp_rev_showinfo = rev_subparsers.add_parser(
+        'show-info', help='Show review branch info in a format suitable for scripting'
+    )
+    sp_rev_showinfo.add_argument(
+        'param',
+        metavar='PARAM',
+        nargs='?',
         default=':_all',
-        help='[branch:]key — branch and/or key to display (default: all)')
-    sp_rev_showinfo.add_argument('-l', '--list', dest='list_branches',
-        action='store_true', default=False,
-        help='List all review branches with summary info')
-    sp_rev_showinfo.add_argument('-j', '--json', dest='json_output',
-        action='store_true', default=False,
-        help='Output in JSON format')
-
+        help='[branch:]key — branch and/or key to display (default: all)',
+    )
+    sp_rev_showinfo.add_argument(
+        '-l',
+        '--list',
+        dest='list_branches',
+        action='store_true',
+        default=False,
+        help='List all review branches with summary info',
+    )
+    sp_rev_showinfo.add_argument(
+        '-j',
+        '--json',
+        dest='json_output',
+        action='store_true',
+        default=False,
+        help='Output in JSON format',
+    )
 
     # b4 pr
-    sp_pr = subparsers.add_parser('pr', help='Fetch a pull request found in a message ID')
-    sp_pr.add_argument('-g', '--gitdir', default=None,
-                       help='Operate on this git tree instead of current dir')
-    sp_pr.add_argument('-b', '--branch', default=None,
-                       help='Check out FETCH_HEAD into this branch after fetching')
-    sp_pr.add_argument('-c', '--check', action='store_true', default=False,
-                       help='Check if pull request has already been applied')
-    sp_pr.add_argument('-e', '--explode', action='store_true', default=False,
-                       help='Convert a pull request into an mbox full of patches')
-    sp_pr.add_argument('-o', '--output-mbox', dest='outmbox', default=None,
-                       help='Save exploded messages into this mailbox (default: msgid.mbx)')
-    sp_pr.add_argument('-f', '--from-addr', dest='mailfrom', default=None,
-                       help='Use this From: in exploded messages (use with -e)')
-    sp_pr.add_argument('-s', '--send-as-identity', dest='sendidentity', default=None,
-                       help=('Use git-send-email to send exploded series (use with -e);'
-                             'the identity must match a [sendemail "identity"] config section'))
-    sp_pr.add_argument('--dry-run', dest='dryrun', action='store_true', default=False,
-                       help='Force a --dry-run on git-send-email invocation (use with -s)')
-    sp_pr.add_argument('msgid', nargs='?',
-                       help='Message ID to process, or pipe a raw message')
+    sp_pr = subparsers.add_parser(
+        'pr', help='Fetch a pull request found in a message ID'
+    )
+    sp_pr.add_argument(
+        '-g',
+        '--gitdir',
+        default=None,
+        help='Operate on this git tree instead of current dir',
+    )
+    sp_pr.add_argument(
+        '-b',
+        '--branch',
+        default=None,
+        help='Check out FETCH_HEAD into this branch after fetching',
+    )
+    sp_pr.add_argument(
+        '-c',
+        '--check',
+        action='store_true',
+        default=False,
+        help='Check if pull request has already been applied',
+    )
+    sp_pr.add_argument(
+        '-e',
+        '--explode',
+        action='store_true',
+        default=False,
+        help='Convert a pull request into an mbox full of patches',
+    )
+    sp_pr.add_argument(
+        '-o',
+        '--output-mbox',
+        dest='outmbox',
+        default=None,
+        help='Save exploded messages into this mailbox (default: msgid.mbx)',
+    )
+    sp_pr.add_argument(
+        '-f',
+        '--from-addr',
+        dest='mailfrom',
+        default=None,
+        help='Use this From: in exploded messages (use with -e)',
+    )
+    sp_pr.add_argument(
+        '-s',
+        '--send-as-identity',
+        dest='sendidentity',
+        default=None,
+        help=(
+            'Use git-send-email to send exploded series (use with -e); '
+            'the identity must match a [sendemail "identity"] config section'
+        ),
+    )
+    sp_pr.add_argument(
+        '--dry-run',
+        dest='dryrun',
+        action='store_true',
+        default=False,
+        help='Force a --dry-run on git-send-email invocation (use with -s)',
+    )
+    sp_pr.add_argument(
+        'msgid', nargs='?', help='Message ID to process, or pipe a raw message'
+    )
     sp_pr.set_defaults(func=cmd_pr)
 
     # b4 ty
-    sp_ty = subparsers.add_parser('ty', help='Generate thanks email when something gets merged/applied')
-    sp_ty.add_argument('-g', '--gitdir', default=None,
-                       help='Operate on this git tree instead of current dir')
-    sp_ty.add_argument('-o', '--outdir', default='.',
-                       help='Write thanks files into this dir (default=.)')
-    sp_ty.add_argument('-l', '--list', action='store_true', default=False,
-                       help='List pull requests and patch series you have retrieved')
-    sp_ty.add_argument('-t', '--thank-for', dest='thankfor', default=None,
-                       help='Generate thankyous for specific entries from -l (e.g.: 1,3-5,7-; or "all")')
-    sp_ty.add_argument('-d', '--discard', default=None,
-                       help='Discard specific messages from -l (e.g.: 1,3-5,7-; or "all")')
-    sp_ty.add_argument('-a', '--auto', action='store_true', default=False,
-                       help='Use the Auto-Thankanator to figure out what got applied/merged')
-    sp_ty.add_argument('-b', '--branch', default=None,
-                       help='The branch to check against, instead of current')
-    sp_ty.add_argument('--since', default='1.week',
-                       help='The --since option to use when auto-matching patches (default=1.week)')
-    sp_ty.add_argument('-S', '--send-email', action='store_true', dest='sendemail', default=False,
-                       help='Send email instead of writing out .thanks files')
-    sp_ty.add_argument('--dry-run', action='store_true', dest='dryrun', default=False,
-                       help='Print out emails instead of sending them')
-    sp_ty.add_argument('--pw-set-state', default=None,
-                       help='Set this patchwork state instead of default (use with -a, -t or -d)')
-    sp_ty.add_argument('--me-too', action='store_true', dest='metoo', default=False,
-                       help='Send a copy of the thank-you message to yourself as well')
+    sp_ty = subparsers.add_parser(
+        'ty', help='Generate thanks email when something gets merged/applied'
+    )
+    sp_ty.add_argument(
+        '-g',
+        '--gitdir',
+        default=None,
+        help='Operate on this git tree instead of current dir',
+    )
+    sp_ty.add_argument(
+        '-o',
+        '--outdir',
+        default='.',
+        help='Write thanks files into this dir (default=.)',
+    )
+    sp_ty.add_argument(
+        '-l',
+        '--list',
+        action='store_true',
+        default=False,
+        help='List pull requests and patch series you have retrieved',
+    )
+    sp_ty.add_argument(
+        '-t',
+        '--thank-for',
+        dest='thankfor',
+        default=None,
+        help='Generate thankyous for specific entries from -l (e.g.: 1,3-5,7-; or "all")',
+    )
+    sp_ty.add_argument(
+        '-d',
+        '--discard',
+        default=None,
+        help='Discard specific messages from -l (e.g.: 1,3-5,7-; or "all")',
+    )
+    sp_ty.add_argument(
+        '-a',
+        '--auto',
+        action='store_true',
+        default=False,
+        help='Use the Auto-Thankanator to figure out what got applied/merged',
+    )
+    sp_ty.add_argument(
+        '-b',
+        '--branch',
+        default=None,
+        help='The branch to check against, instead of current',
+    )
+    sp_ty.add_argument(
+        '--since',
+        default='1.week',
+        help='The --since option to use when auto-matching patches (default=1.week)',
+    )
+    sp_ty.add_argument(
+        '-S',
+        '--send-email',
+        action='store_true',
+        dest='sendemail',
+        default=False,
+        help='Send email instead of writing out .thanks files',
+    )
+    sp_ty.add_argument(
+        '--dry-run',
+        action='store_true',
+        dest='dryrun',
+        default=False,
+        help='Print out emails instead of sending them',
+    )
+    sp_ty.add_argument(
+        '--pw-set-state',
+        default=None,
+        help='Set this patchwork state instead of default (use with -a, -t or -d)',
+    )
+    sp_ty.add_argument(
+        '--me-too',
+        action='store_true',
+        dest='metoo',
+        default=False,
+        help='Send a copy of the thank-you message to yourself as well',
+    )
     sp_ty.set_defaults(func=cmd_ty)
 
     # b4 diff
-    sp_diff = subparsers.add_parser('diff', help='Show a range-diff to previous series revision')
-    sp_diff.add_argument('msgid', nargs='?',
-                         help='Message ID to process, or pipe a raw message')
-    sp_diff.add_argument('-g', '--gitdir', default=None,
-                         help='Operate on this git tree instead of current dir')
-    sp_diff.add_argument('-C', '--no-cache', dest='nocache', action='store_true', default=False,
-                         help='Do not use local cache')
-    sp_diff.add_argument('-v', '--compare-versions', dest='wantvers', type=int, default=None, nargs='+',
-                         help='Compare specific versions instead of latest and one before that, e.g. -v 3 5')
-    sp_diff.add_argument('-n', '--no-diff', dest='nodiff', action='store_true', default=False,
-                         help='Do not generate a diff, just show the command to do it')
-    sp_diff.add_argument('-o', '--output-diff', dest='outdiff', default=None,
-                         help='Save diff into this file instead of outputting to stdout')
-    sp_diff.add_argument('-c', '--color', dest='color', action='store_true', default=False,
-                         help='Force color output even when writing to file')
-    sp_diff.add_argument('-m', '--compare-am-mboxes', dest='ambox', nargs=2, default=None,
-                         help='Compare two mbx files prepared with "b4 am"')
-    sp_diff.add_argument('--range-diff-opts', default=None,
-                         help='Arguments passed to git range-diff')
+    sp_diff = subparsers.add_parser(
+        'diff', help='Show a range-diff to previous series revision'
+    )
+    sp_diff.add_argument(
+        'msgid', nargs='?', help='Message ID to process, or pipe a raw message'
+    )
+    sp_diff.add_argument(
+        '-g',
+        '--gitdir',
+        default=None,
+        help='Operate on this git tree instead of current dir',
+    )
+    sp_diff.add_argument(
+        '-C',
+        '--no-cache',
+        dest='nocache',
+        action='store_true',
+        default=False,
+        help='Do not use local cache',
+    )
+    sp_diff.add_argument(
+        '-v',
+        '--compare-versions',
+        dest='wantvers',
+        type=int,
+        default=None,
+        nargs='+',
+        help='Compare specific versions instead of latest and one before that, e.g. -v 3 5',
+    )
+    sp_diff.add_argument(
+        '-n',
+        '--no-diff',
+        dest='nodiff',
+        action='store_true',
+        default=False,
+        help='Do not generate a diff, just show the command to do it',
+    )
+    sp_diff.add_argument(
+        '-o',
+        '--output-diff',
+        dest='outdiff',
+        default=None,
+        help='Save diff into this file instead of outputting to stdout',
+    )
+    sp_diff.add_argument(
+        '-c',
+        '--color',
+        dest='color',
+        action='store_true',
+        default=False,
+        help='Force color output even when writing to file',
+    )
+    sp_diff.add_argument(
+        '-m',
+        '--compare-am-mboxes',
+        dest='ambox',
+        nargs=2,
+        default=None,
+        help='Compare two mbx files prepared with "b4 am"',
+    )
+    sp_diff.add_argument(
+        '--range-diff-opts', default=None, help='Arguments passed to git range-diff'
+    )
     sp_diff.set_defaults(func=cmd_diff)
 
     # b4 kr
     sp_kr = subparsers.add_parser('kr', help='Keyring operations')
     cmd_retrieval_common_opts(sp_kr)
-    sp_kr.add_argument('--show-keys', dest='showkeys', action='store_true', default=False,
-                       help='Show all developer keys found in a thread')
+    sp_kr.add_argument(
+        '--show-keys',
+        dest='showkeys',
+        action='store_true',
+        default=False,
+        help='Show all developer keys found in a thread',
+    )
     sp_kr.set_defaults(func=cmd_kr)
 
     # b4 prep
-    sp_prep = subparsers.add_parser('prep', help='Work on patch series to submit for mailing list review')
-    sp_prep.add_argument('-c', '--auto-to-cc', action='store_true', default=False,
-                         help='Automatically populate cover letter trailers with To and Cc addresses')
-    sp_prep.add_argument('--force-revision', metavar='N', type=int,
-                         help='Force revision to be this number instead')
-    sp_prep.add_argument('--set-prefixes', metavar='PREFIX', nargs='+',
-                         help='Prefixes to include after [PATCH] (e.g.: RFC mydrv)')
-    sp_prep.add_argument('--add-prefixes', metavar='PREFIX', nargs='+',
-                         help='Additional prefixes to add to those already defined')
-    sp_prep.add_argument('--set-presubject', metavar='PRESUBJECT', type=str, default=None,
-                         help='Prefix to include before [PATCH] (e.g.: [mylist])')
-    sp_prep.add_argument('-C', '--no-cache', dest='nocache', action='store_true', default=False,
-                         help='Do not use local cache')
-    sp_prep.add_argument('--range-diff-opts', default=None, type=str,
-                         help='Arguments passed to git range-diff when comparing series')
+    sp_prep = subparsers.add_parser(
+        'prep', help='Work on patch series to submit for mailing list review'
+    )
+    sp_prep.add_argument(
+        '-c',
+        '--auto-to-cc',
+        action='store_true',
+        default=False,
+        help='Automatically populate cover letter trailers with To and Cc addresses',
+    )
+    sp_prep.add_argument(
+        '--force-revision',
+        metavar='N',
+        type=int,
+        help='Force revision to be this number instead',
+    )
+    sp_prep.add_argument(
+        '--set-prefixes',
+        metavar='PREFIX',
+        nargs='+',
+        help='Prefixes to include after [PATCH] (e.g.: RFC mydrv)',
+    )
+    sp_prep.add_argument(
+        '--add-prefixes',
+        metavar='PREFIX',
+        nargs='+',
+        help='Additional prefixes to add to those already defined',
+    )
+    sp_prep.add_argument(
+        '--set-presubject',
+        metavar='PRESUBJECT',
+        type=str,
+        default=None,
+        help='Prefix to include before [PATCH] (e.g.: [mylist])',
+    )
+    sp_prep.add_argument(
+        '-C',
+        '--no-cache',
+        dest='nocache',
+        action='store_true',
+        default=False,
+        help='Do not use local cache',
+    )
+    sp_prep.add_argument(
+        '--range-diff-opts',
+        default=None,
+        type=str,
+        help='Arguments passed to git range-diff when comparing series',
+    )
 
     spp_g = sp_prep.add_mutually_exclusive_group()
-    spp_g.add_argument('-p', '--format-patch', metavar='OUTPUT_DIR',
-                       help='Output prep-tracked commits as patches')
-    spp_g.add_argument('--edit-cover', action='store_true', default=False,
-                       help='Edit the cover letter in the configured editor')
-    spp_g.add_argument('--edit-deps', action='store_true', default=False,
-                       help='Edit the series dependencies in the configured editor')
-    spp_g.add_argument('--check-deps', action='store_true', default=False,
-                       help='Run checks for any defined series dependencies')
-    spp_g.add_argument('--check', action='store_true', default=False,
-                       help='Run checks on the series')
-    spp_g.add_argument('--show-revision', action='store_true', default=False,
-                       help='Show current series revision number')
-    spp_g.add_argument('--compare-to', metavar='vN',
-                       help='Display a range-diff to previously sent revision N')
-    spp_g.add_argument('--manual-reroll', dest='reroll', default=None, metavar='COVER_MSGID',
-                       help='Mark current revision as sent and reroll (requires cover letter msgid)')
-    spp_g.add_argument('--show-info', metavar='PARAM', nargs='?', const=':_all',
-                       help='Show series info in a format that can be passed to other commands.')
-    spp_g.add_argument('--cleanup', metavar='BRANCHNAME', nargs='*',
-                       help='Archive and remove prep-tracked branches and all associated sent/ tags')
-
-    ag_prepn = sp_prep.add_argument_group('Create new branch', 'Create a new branch for working on patch series')
-    ag_prepn.add_argument('-n', '--new', dest='new_series_name',
-                          help='Create a new branch for working on a patch series')
-    ag_prepn.add_argument('-f', '--fork-point', dest='fork_point',
-                          help='When creating a new branch, use this fork point instead of HEAD')
-    ag_prepn.add_argument('-F', '--from-thread', metavar='MSGID', dest='msgid',
-                          help='When creating a new branch, use this thread')
-    ag_prepe = sp_prep.add_argument_group('Enroll existing branch', 'Enroll existing branch for prep work')
-    ag_prepe.add_argument('-e', '--enroll', dest='enroll_base', nargs='?', const='@{upstream}',
-                          help='Enroll current branch, using its configured upstream branch as fork base, '
-                               'or the passed tag, branch, or commit')
+    spp_g.add_argument(
+        '-p',
+        '--format-patch',
+        metavar='OUTPUT_DIR',
+        help='Output prep-tracked commits as patches',
+    )
+    spp_g.add_argument(
+        '--edit-cover',
+        action='store_true',
+        default=False,
+        help='Edit the cover letter in the configured editor',
+    )
+    spp_g.add_argument(
+        '--edit-deps',
+        action='store_true',
+        default=False,
+        help='Edit the series dependencies in the configured editor',
+    )
+    spp_g.add_argument(
+        '--check-deps',
+        action='store_true',
+        default=False,
+        help='Run checks for any defined series dependencies',
+    )
+    spp_g.add_argument(
+        '--check', action='store_true', default=False, help='Run checks on the series'
+    )
+    spp_g.add_argument(
+        '--show-revision',
+        action='store_true',
+        default=False,
+        help='Show current series revision number',
+    )
+    spp_g.add_argument(
+        '--compare-to',
+        metavar='vN',
+        help='Display a range-diff to previously sent revision N',
+    )
+    spp_g.add_argument(
+        '--manual-reroll',
+        dest='reroll',
+        default=None,
+        metavar='COVER_MSGID',
+        help='Mark current revision as sent and reroll (requires cover letter msgid)',
+    )
+    spp_g.add_argument(
+        '--show-info',
+        metavar='PARAM',
+        nargs='?',
+        const=':_all',
+        help='Show series info in a format that can be passed to other commands.',
+    )
+    spp_g.add_argument(
+        '--cleanup',
+        metavar='BRANCHNAME',
+        nargs='*',
+        help='Archive and remove prep-tracked branches and all associated sent/ tags',
+    )
+
+    ag_prepn = sp_prep.add_argument_group(
+        'Create new branch', 'Create a new branch for working on patch series'
+    )
+    ag_prepn.add_argument(
+        '-n',
+        '--new',
+        dest='new_series_name',
+        help='Create a new branch for working on a patch series',
+    )
+    ag_prepn.add_argument(
+        '-f',
+        '--fork-point',
+        dest='fork_point',
+        help='When creating a new branch, use this fork point instead of HEAD',
+    )
+    ag_prepn.add_argument(
+        '-F',
+        '--from-thread',
+        metavar='MSGID',
+        dest='msgid',
+        help='When creating a new branch, use this thread',
+    )
+    ag_prepe = sp_prep.add_argument_group(
+        'Enroll existing branch', 'Enroll existing branch for prep work'
+    )
+    ag_prepe.add_argument(
+        '-e',
+        '--enroll',
+        dest='enroll_base',
+        nargs='?',
+        const='@{upstream}',
+        help='Enroll current branch, using its configured upstream branch as fork base, '
+        'or the passed tag, branch, or commit',
+    )
     sp_prep.set_defaults(func=cmd_prep)
 
     # b4 trailers
-    sp_trl = subparsers.add_parser('trailers', help='Operate on trailers received for mailing list reviews')
-    sp_trl.add_argument('-u', '--update', action='store_true', default=False,
-                        help='Update branch commits with latest received trailers')
-    sp_trl.add_argument('-S', '--sloppy-trailers', dest='sloppytrailers', action='store_true', default=False,
-                        help='Apply trailers without email address match checking')
-    sp_trl.add_argument('-F', '--trailers-from', dest='trailers_from', metavar='MSGID',
-                        help='Look for trailers in the thread with this msgid instead of using the series change-id')
-    sp_trl.add_argument('--since', default='1.month', metavar='GITLOGDATE',
-                        help='The --since option to use with git-log when auto-matching patches (default=1.month)')
-    sp_trl.add_argument('--since-commit', metavar='COMMITISH',
-                        help='Look for any new trailers for commits starting with this one')
+    sp_trl = subparsers.add_parser(
+        'trailers', help='Operate on trailers received for mailing list reviews'
+    )
+    sp_trl.add_argument(
+        '-u',
+        '--update',
+        action='store_true',
+        default=False,
+        help='Update branch commits with latest received trailers',
+    )
+    sp_trl.add_argument(
+        '-S',
+        '--sloppy-trailers',
+        dest='sloppytrailers',
+        action='store_true',
+        default=False,
+        help='Apply trailers without email address match checking',
+    )
+    sp_trl.add_argument(
+        '-F',
+        '--trailers-from',
+        dest='trailers_from',
+        metavar='MSGID',
+        help='Look for trailers in the thread with this msgid instead of using the series change-id',
+    )
+    sp_trl.add_argument(
+        '--since',
+        default='1.month',
+        metavar='GITLOGDATE',
+        help='The --since option to use with git-log when auto-matching patches (default=1.month)',
+    )
+    sp_trl.add_argument(
+        '--since-commit',
+        metavar='COMMITISH',
+        help='Look for any new trailers for commits starting with this one',
+    )
     cmd_retrieval_common_opts(sp_trl)
     sp_trl.set_defaults(func=cmd_trailers)
 
     # b4 send
-    sp_send = subparsers.add_parser('send', help='Submit your work for review on the mailing lists')
+    sp_send = subparsers.add_parser(
+        'send', help='Submit your work for review on the mailing lists'
+    )
     sp_send_g = sp_send.add_mutually_exclusive_group()
-    sp_send_g.add_argument('-d', '--dry-run', dest='dryrun', action='store_true', default=False,
-                           help='Do not send, just dump out raw smtp messages to the stdout')
-    sp_send_g.add_argument('-o', '--output-dir',
-                           help='Do not send, write raw messages to this directory (forces --dry-run)')
-    sp_send_g.add_argument('--preview-to', nargs='+', metavar='ADDR',
-                           help='Send everything for a pre-review to specified addresses instead of actual recipients')
-    sp_send_g.add_argument('--reflect', action='store_true', default=False,
-                           help='Send everything to yourself instead of the actual recipients')
-
-    sp_send.add_argument('--no-trailer-to-cc', action='store_true', default=False,
-                         help='Do not add any addresses found in the cover or patch trailers to To: or Cc:')
-    sp_send.add_argument('--to', nargs='+', metavar='ADDR', help='Addresses to add to the To: list')
-    sp_send.add_argument('--cc', nargs='+', metavar='ADDR', help='Addresses to add to the Cc: list')
-    sp_send.add_argument('--not-me-too', action='store_true', default=False,
-                         help='Remove yourself from the To: or Cc: list')
-    sp_send.add_argument('--resend', metavar='vN', nargs='?', const='latest',
-                         help='Resend a previously sent version of the series')
-    sp_send.add_argument('--no-sign', action='store_true', default=False,
-                         help='Do not add the cryptographic attestation signature header')
-    sp_send.add_argument('--force-cover-letter', action='store_true', default=False,
-                         help='Send a cover letter even for single-patch series')
-    sp_send.add_argument('--use-web-endpoint', dest='send_web', action='store_true', default=False,
-                         help="Force going through the web endpoint")
-    ag_sendh = sp_send.add_argument_group('Web submission', 'Authenticate with the web submission endpoint')
-    ag_sendh.add_argument('--web-auth-new', dest='auth_new', action='store_true', default=False,
-                          help='Initiate a new web authentication request')
-    ag_sendh.add_argument('--web-auth-verify', dest='auth_verify', metavar='VERIFY_TOKEN',
-                          help='Submit the token received via verification email')
+    sp_send_g.add_argument(
+        '-d',
+        '--dry-run',
+        dest='dryrun',
+        action='store_true',
+        default=False,
+        help='Do not send, just dump out raw smtp messages to the stdout',
+    )
+    sp_send_g.add_argument(
+        '-o',
+        '--output-dir',
+        help='Do not send, write raw messages to this directory (forces --dry-run)',
+    )
+    sp_send_g.add_argument(
+        '--preview-to',
+        nargs='+',
+        metavar='ADDR',
+        help='Send everything for a pre-review to specified addresses instead of actual recipients',
+    )
+    sp_send_g.add_argument(
+        '--reflect',
+        action='store_true',
+        default=False,
+        help='Send everything to yourself instead of the actual recipients',
+    )
+
+    sp_send.add_argument(
+        '--no-trailer-to-cc',
+        action='store_true',
+        default=False,
+        help='Do not add any addresses found in the cover or patch trailers to To: or Cc:',
+    )
+    sp_send.add_argument(
+        '--to', nargs='+', metavar='ADDR', help='Addresses to add to the To: list'
+    )
+    sp_send.add_argument(
+        '--cc', nargs='+', metavar='ADDR', help='Addresses to add to the Cc: list'
+    )
+    sp_send.add_argument(
+        '--not-me-too',
+        action='store_true',
+        default=False,
+        help='Remove yourself from the To: or Cc: list',
+    )
+    sp_send.add_argument(
+        '--resend',
+        metavar='vN',
+        nargs='?',
+        const='latest',
+        help='Resend a previously sent version of the series',
+    )
+    sp_send.add_argument(
+        '--no-sign',
+        action='store_true',
+        default=False,
+        help='Do not add the cryptographic attestation signature header',
+    )
+    sp_send.add_argument(
+        '--force-cover-letter',
+        action='store_true',
+        default=False,
+        help='Send a cover letter even for single-patch series',
+    )
+    sp_send.add_argument(
+        '--use-web-endpoint',
+        dest='send_web',
+        action='store_true',
+        default=False,
+        help='Force going through the web endpoint',
+    )
+    ag_sendh = sp_send.add_argument_group(
+        'Web submission', 'Authenticate with the web submission endpoint'
+    )
+    ag_sendh.add_argument(
+        '--web-auth-new',
+        dest='auth_new',
+        action='store_true',
+        default=False,
+        help='Initiate a new web authentication request',
+    )
+    ag_sendh.add_argument(
+        '--web-auth-verify',
+        dest='auth_verify',
+        metavar='VERIFY_TOKEN',
+        help='Submit the token received via verification email',
+    )
     sp_send.set_defaults(func=cmd_send)
 
     # b4 dig
-    sp_dig = subparsers.add_parser('dig', help='Dig into the details of a specific commit')
-    sp_dig.add_argument('-c', '--commitish', dest='commitish', metavar='COMMITISH',
-                        help='Commit-ish object to dig into')
-    sp_dig.add_argument('-C', '--no-cache', dest='nocache', action='store_true', default=False,
-                        help='Do not use local cache')
+    sp_dig = subparsers.add_parser(
+        'dig', help='Dig into the details of a specific commit'
+    )
+    sp_dig.add_argument(
+        '-c',
+        '--commitish',
+        dest='commitish',
+        metavar='COMMITISH',
+        help='Commit-ish object to dig into',
+    )
+    sp_dig.add_argument(
+        '-C',
+        '--no-cache',
+        dest='nocache',
+        action='store_true',
+        default=False,
+        help='Do not use local cache',
+    )
     sp_dig_eg = sp_dig.add_mutually_exclusive_group()
-    sp_dig_eg.add_argument('-a', '--all-series', action='store_true', default=False,
-                           help='Show all series, not just the latest matching')
-    sp_dig_eg.add_argument('-m', '--save-mbox', metavar='DEST', default=None,
-                           help='Save matched thread to the specified mbox file')
-    sp_dig_eg.add_argument('-w', '--who', action='store_true', default=False,
-                           help='Show list of recipients in the original message')
+    sp_dig_eg.add_argument(
+        '-a',
+        '--all-series',
+        action='store_true',
+        default=False,
+        help='Show all series, not just the latest matching',
+    )
+    sp_dig_eg.add_argument(
+        '-m',
+        '--save-mbox',
+        metavar='DEST',
+        default=None,
+        help='Save matched thread to the specified mbox file',
+    )
+    sp_dig_eg.add_argument(
+        '-w',
+        '--who',
+        action='store_true',
+        default=False,
+        help='Show list of recipients in the original message',
+    )
     sp_dig.set_defaults(func=cmd_dig)
 
     # b4 bugs
-    sp_bugs = subparsers.add_parser('bugs', help='Manage bug reports from mailing list threads')
+    sp_bugs = subparsers.add_parser(
+        'bugs', help='Manage bug reports from mailing list threads'
+    )
     sp_bugs.set_defaults(func=cmd_bugs)
-    bugs_subparsers = sp_bugs.add_subparsers(help='bugs sub-command help', dest='bugs_subcmd')
+    bugs_subparsers = sp_bugs.add_subparsers(
+        help='bugs sub-command help', dest='bugs_subcmd'
+    )
 
     # b4 bugs tui
-    sp_bugs_tui = bugs_subparsers.add_parser('tui', help='Browse and triage bugs in a TUI')
-    sp_bugs_tui.add_argument('--no-mouse', dest='no_mouse', action='store_true', default=False,
-                              help='Disable mouse support in the TUI')
-    sp_bugs_tui.add_argument('--email-dry-run', dest='email_dryrun', action='store_true', default=False,
-                              help='Show email dialogs but print messages to stdout instead of sending')
-    sp_bugs_tui.add_argument('--no-sign', dest='no_sign', action='store_true', default=False,
-                              help='Do not patatt-sign outgoing emails')
+    sp_bugs_tui = bugs_subparsers.add_parser(
+        'tui', help='Browse and triage bugs in a TUI'
+    )
+    sp_bugs_tui.add_argument(
+        '--no-mouse',
+        dest='no_mouse',
+        action='store_true',
+        default=False,
+        help='Disable mouse support in the TUI',
+    )
+    sp_bugs_tui.add_argument(
+        '--email-dry-run',
+        dest='email_dryrun',
+        action='store_true',
+        default=False,
+        help='Show email dialogs but print messages to stdout instead of sending',
+    )
+    sp_bugs_tui.add_argument(
+        '--no-sign',
+        dest='no_sign',
+        action='store_true',
+        default=False,
+        help='Do not patatt-sign outgoing emails',
+    )
 
     # b4 bugs import
-    sp_bugs_import = bugs_subparsers.add_parser('import', help='Import a lore thread as a new bug')
+    sp_bugs_import = bugs_subparsers.add_parser(
+        'import', help='Import a lore thread as a new bug'
+    )
     sp_bugs_import.add_argument('msgid', help='Message-ID of the thread to import')
-    sp_bugs_import.add_argument('--no-parent', dest='noparent', action='store_true', default=False,
-                                 help='Break thread at the msgid and ignore parent messages')
+    sp_bugs_import.add_argument(
+        '--no-parent',
+        dest='noparent',
+        action='store_true',
+        default=False,
+        help='Break thread at the msgid and ignore parent messages',
+    )
 
     # b4 bugs delete
-    sp_bugs_delete = bugs_subparsers.add_parser('delete', help='Permanently delete a bug')
+    sp_bugs_delete = bugs_subparsers.add_parser(
+        'delete', help='Permanently delete a bug'
+    )
     sp_bugs_delete.add_argument('bugid', help='Bug ID to delete')
 
     # b4 bugs refresh
-    sp_bugs_refresh = bugs_subparsers.add_parser('refresh', help='Refresh bugs with new thread messages')
-    sp_bugs_refresh.add_argument('bugid', nargs='?', default=None,
-                                  help='Bug ID to refresh (default: refresh all open bugs)')
+    sp_bugs_refresh = bugs_subparsers.add_parser(
+        'refresh', help='Refresh bugs with new thread messages'
+    )
+    sp_bugs_refresh.add_argument(
+        'bugid',
+        nargs='?',
+        default=None,
+        help='Bug ID to refresh (default: refresh all open bugs)',
+    )
 
     # b4 bugs list
     sp_bugs_list = bugs_subparsers.add_parser('list', help='List tracked bugs')
-    sp_bugs_list.add_argument('--status', choices=['open', 'closed'], default=None,
-                               help='Filter by status')
+    sp_bugs_list.add_argument(
+        '--status', choices=['open', 'closed'], default=None, help='Filter by status'
+    )
     sp_bugs_list.add_argument('--label', default=None, help='Filter by label')
 
     return parser
@@ -564,7 +1278,9 @@ if __name__ == '__main__':
 
     try:
         if b4.__VERSION__.find('-dev') > 0:
-            base = os.path.dirname(os.path.dirname(os.path.dirname(os.path.realpath(__file__))))
+            base = os.path.dirname(
+                os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
+            )
             dotgit = os.path.join(base, '.git')
             ecode, short = b4.git_run_command(dotgit, ['rev-parse', '--short', 'HEAD'])
             if ecode == 0:
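The mechanical pattern applied throughout the hunks above is ruff's default call formatting: when a call overflows the line length it is expanded to one argument per line and closed with a trailing comma, which the formatter then treats as "magic" and keeps expanded on subsequent runs. A minimal sketch of the resulting argparse style (the option names here are illustrative, not taken from b4):

```python
import argparse


def make_parser() -> argparse.ArgumentParser:
    # One argument per line, closed with a trailing comma so that
    # ruff format preserves the expanded shape instead of collapsing
    # the call back onto one line.
    parser = argparse.ArgumentParser(prog='demo')
    parser.add_argument(
        '-n',
        '--dry-run',
        dest='dryrun',
        action='store_true',
        default=False,
        help='Do not act, only report what would be done',
    )
    return parser


args = make_parser().parse_args(['--dry-run'])
print(args.dryrun)
```

Short calls that fit on one line (and carry no trailing comma) stay collapsed, which is why some `add_argument` calls in the diff remain single-line.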
diff --git a/src/b4/diff.py b/src/b4/diff.py
index 8045243..7392056 100644
--- a/src/b4/diff.py
+++ b/src/b4/diff.py
@@ -22,7 +22,9 @@ import b4.mbox
 logger = b4.logger
 
 
-def diff_same_thread_series(cmdargs: argparse.Namespace) -> Tuple[Optional[b4.LoreSeries], Optional[b4.LoreSeries]]:
+def diff_same_thread_series(
+    cmdargs: argparse.Namespace,
+) -> Tuple[Optional[b4.LoreSeries], Optional[b4.LoreSeries]]:
     msgid = b4.get_msgid(cmdargs)
     if not msgid:
         logger.critical('Please pass msgid on the command-line')
@@ -45,7 +47,11 @@ def diff_same_thread_series(cmdargs: argparse.Namespace) -> Tuple[Optional[b4.Lo
         msgs = list()
         for cachemsg in os.listdir(cachedir):
             with open(os.path.join(cachedir, cachemsg), 'rb') as fh:
-                msgs.append(email.parser.BytesParser(policy=b4.emlpolicy, _class=EmailMessage).parse(fh))
+                msgs.append(
+                    email.parser.BytesParser(
+                        policy=b4.emlpolicy, _class=EmailMessage
+                    ).parse(fh)
+                )
     else:
         msgs = b4.get_pi_thread_by_msgid(msgid, nocache=cmdargs.nocache)
         if not msgs:
@@ -107,7 +113,9 @@ def diff_same_thread_series(cmdargs: argparse.Namespace) -> Tuple[Optional[b4.Lo
     return lmbx.series[lower], lmbx.series[upper]
 
 
-def diff_mboxes(cmdargs: argparse.Namespace) -> Tuple[Optional[b4.LoreSeries], Optional[b4.LoreSeries]]:
+def diff_mboxes(
+    cmdargs: argparse.Namespace,
+) -> Tuple[Optional[b4.LoreSeries], Optional[b4.LoreSeries]]:
     chunks = list()
     for mboxfile in cmdargs.ambox:
         if not os.path.exists(mboxfile):
@@ -125,7 +133,9 @@ def diff_mboxes(cmdargs: argparse.Namespace) -> Tuple[Optional[b4.LoreSeries], O
             logger.critical('No valid patches found in %s', mboxfile)
             sys.exit(1)
         if len(lmbx.series) > 1:
-            logger.critical('More than one series version in %s, will use latest', mboxfile)
+            logger.critical(
+                'More than one series version in %s, will use latest', mboxfile
+            )
 
         chunks.append(lmbx.series[max(lmbx.series.keys())])
 
@@ -145,13 +155,17 @@ def main(cmdargs: argparse.Namespace) -> None:
     lsc, lec = lser.make_fake_am_range(gitdir=cmdargs.gitdir)
     if lsc is None or lec is None:
         logger.critical('---')
-        logger.critical('Could not create fake-am range for lower series v%s', lser.revision)
+        logger.critical(
+            'Could not create fake-am range for lower series v%s', lser.revision
+        )
         sys.exit(1)
     # Prepare the upper fake-am range
     usc, uec = user.make_fake_am_range(gitdir=cmdargs.gitdir)
     if usc is None or uec is None:
         logger.critical('---')
-        logger.critical('Could not create fake-am range for upper series v%s', user.revision)
+        logger.critical(
+            'Could not create fake-am range for upper series v%s', user.revision
+        )
         sys.exit(1)
     rd_opts = []
     if cmdargs.range_diff_opts:
@@ -159,8 +173,12 @@ def main(cmdargs: argparse.Namespace) -> None:
         sp.whitespace_split = True
         rd_opts = list(sp)
     grdcmd = 'git range-diff %s%.12s..%.12s %.12s..%.12s' % (
-        " ".join(rd_opts) + " " if rd_opts else "",
-        lsc, lec, usc, uec)
+        ' '.join(rd_opts) + ' ' if rd_opts else '',
+        lsc,
+        lec,
+        usc,
+        uec,
+    )
     if cmdargs.nodiff:
         logger.info('Success, to compare v%s and v%s:', lser.revision, user.revision)
         logger.info('    %s', grdcmd)
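The `grdcmd` hunk above also normalizes the string literals to single quotes while reflowing the `%`-tuple one element per line; the `%.12s` conversions truncate each revision to 12 characters. A standalone sketch with hypothetical values (the option and SHAs are made up for illustration):

```python
# Reproduce the reflowed grdcmd construction from diff.py with sample
# values; %.12s truncates each 40-character SHA to its first 12 chars.
rd_opts = ['--creation-factor=80']
lsc, lec = 'a' * 40, 'b' * 40
usc, uec = 'c' * 40, 'd' * 40
grdcmd = 'git range-diff %s%.12s..%.12s %.12s..%.12s' % (
    ' '.join(rd_opts) + ' ' if rd_opts else '',
    lsc,
    lec,
    usc,
    uec,
)
print(grdcmd)
```

With an empty `rd_opts` the conditional expression contributes an empty string, so no stray space precedes the first range.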
diff --git a/src/b4/dig.py b/src/b4/dig.py
index b3d637d..781d509 100644
--- a/src/b4/dig.py
+++ b/src/b4/dig.py
@@ -60,7 +60,7 @@ def dig_commitish(cmdargs: argparse.Namespace) -> None:
     # Are we inside a git repo?
     topdir = b4.git_get_toplevel()
     if not topdir:
-        logger.error("Not inside a git repository.")
+        logger.error('Not inside a git repository.')
         sys.exit(1)
 
     # Can we resolve this commit to an object?
@@ -73,7 +73,8 @@ def dig_commitish(cmdargs: argparse.Namespace) -> None:
     logger.info('Digging into commit %s', commit)
     # Make sure it has exactly one parent (not a merge)
     ecode, out = b4.git_run_command(
-        topdir, ['show', '--no-patch', '--format=%p', commit],
+        topdir,
+        ['show', '--no-patch', '--format=%p', commit],
     )
     if ecode > 0:
         logger.error('Could not get commit info for %s', commit)
@@ -85,7 +86,8 @@ def dig_commitish(cmdargs: argparse.Namespace) -> None:
     # Look at the commit message and find any Link: trailers
     links: Set[str] = set()
     ecode, out = b4.git_run_command(
-        topdir, ['show', '--no-patch', '--format=%B', commit],
+        topdir,
+        ['show', '--no-patch', '--format=%B', commit],
     )
     if ecode > 0:
         logger.error('Could not get commit message for %s', commit)
@@ -101,13 +103,16 @@ def dig_commitish(cmdargs: argparse.Namespace) -> None:
 
     # Find commit's author and subject from git
     ecode, out = b4.git_run_command(
-        topdir, ['show', '--no-patch', '--format=%as%x00%ae%x00%an%x00%s', commit],
+        topdir,
+        ['show', '--no-patch', '--format=%as%x00%ae%x00%an%x00%s', commit],
     )
     if ecode > 0:
         logger.error('Could not get commit info for %s', commit)
         sys.exit(1)
     cdate, fromeml, fromname, csubj = out.strip().split('\x00', maxsplit=3)
-    logger.debug('cdate=%s, fromeml=%s, fromname=%s, csubj=%s', cdate, fromeml, fromname, csubj)
+    logger.debug(
+        'cdate=%s, fromeml=%s, fromname=%s, csubj=%s', cdate, fromeml, fromname, csubj
+    )
     # Add 24 hours to the date to account for timezones
     # First, parse YYYY-MM-DD into datetime
     cdate_dt = datetime.datetime.strptime(cdate, '%Y-%m-%d')  # noqa: DTZ007
@@ -129,7 +134,8 @@ def dig_commitish(cmdargs: argparse.Namespace) -> None:
         algoarg = f'--diff-algorithm={algo}'
         logger.debug('showargs=%s', showargs + [algoarg])
         ecode, bpatch = b4.git_run_command(
-            topdir, ['show'] + showargs + [algoarg] + [commit],
+            topdir,
+            ['show'] + showargs + [algoarg] + [commit],
             decode=False,
         )
         if ecode > 0:
@@ -146,12 +152,16 @@ def dig_commitish(cmdargs: argparse.Namespace) -> None:
             sys.exit(1)
         patch_id = out.split(maxsplit=1)[0]
         logger.debug('Patch-id for commit %s is %s', commit, patch_id)
-        logger.info('Trying to find matching series by patch-id %s (%s)', patch_id, algo)
+        logger.info(
+            'Trying to find matching series by patch-id %s (%s)', patch_id, algo
+        )
         # Limit lookup by date prior to the commit date, to weed out any false-positives from
         # backports or from erroneously resent series
         extra_query = f'AND d:..{pidate}'
         logger.debug('extra_query=%s', extra_query)
-        msgs = b4.get_msgs_by_patch_id(patch_id, nocache=cmdargs.nocache, extra_query=extra_query)
+        msgs = b4.get_msgs_by_patch_id(
+            patch_id, nocache=cmdargs.nocache, extra_query=extra_query
+        )
         if msgs:
             logger.info('Found matching series by patch-id')
             for msg in msgs:
@@ -179,9 +189,15 @@ def dig_commitish(cmdargs: argparse.Namespace) -> None:
             # can search for that exact string on lore.
             inbody_from = f'From: {fromname} <{fromeml}>'
             logger.info('Attempting to match by in-body From: line...')
-            q = '(nq:"%s" AND s:"%s" AND d:..%s)' % (inbody_from.replace('"', ''), csubj.replace('"', ''), pidate)
+            q = '(nq:"%s" AND s:"%s" AND d:..%s)' % (
+                inbody_from.replace('"', ''),
+                csubj.replace('"', ''),
+                pidate,
+            )
             logger.debug('q=%s', q)
-            msgs = b4.get_pi_search_results(q, nocache=cmdargs.nocache, full_threads=False)
+            msgs = b4.get_pi_search_results(
+                q, nocache=cmdargs.nocache, full_threads=False
+            )
             if msgs:
                 for msg in msgs:
                     msgid = b4.LoreMessage.get_clean_msgid(msg)
@@ -232,11 +248,16 @@ def dig_commitish(cmdargs: argparse.Namespace) -> None:
             elif lser and lser.subject and lser.fromemail:
                 # We're going to match by first patch/cover letter subject and author.
                 # It's not perfect, but it's the best we can do without a change-id.
-                fillin_q = '(s:"%s" AND f:"%s")' % (lser.subject.replace('"', ''), lser.fromemail)
+                fillin_q = '(s:"%s" AND f:"%s")' % (
+                    lser.subject.replace('"', ''),
+                    lser.fromemail,
+                )
             if fillin_q:
                 fillin_q += f' AND d:..{pidate}'
                 logger.debug('fillin_q=%s', fillin_q)
-                q_msgs = b4.get_pi_search_results(fillin_q, nocache=cmdargs.nocache, full_threads=True)
+                q_msgs = b4.get_pi_search_results(
+                    fillin_q, nocache=cmdargs.nocache, full_threads=True
+                )
                 if q_msgs:
                     for q_msg in q_msgs:
                         lmbx.add_message(q_msg)
@@ -311,7 +332,9 @@ def dig_commitish(cmdargs: argparse.Namespace) -> None:
             allrto = email.utils.getaddresses(best_match.msg.get_all('reply-to', []))
             if not allrto:
                 allrto = [(best_match.fromname, best_match.fromemail)]
-            deduped_to, deduped_cc = b4.LoreMessage.make_reply_addrs(allrto, allto + allcc)
+            deduped_to, deduped_cc = b4.LoreMessage.make_reply_addrs(
+                allrto, allto + allcc
+            )
             logger.info('---')
             logger.info('People originally included in this patch:')
             logger.info(b4.format_addrs(deduped_to + deduped_cc, header_safe=False))
@@ -345,8 +368,13 @@ def dig_commitish(cmdargs: argparse.Namespace) -> None:
             # Use the first patch in the series as a fallback
             lmsg = firstmsg
         logger.info('%s%s', pref, firstmsg.full_subject)
-        logger.info('%sDate: %s, From: %s <%s>', ' ' * len(pref),
-                    firstmsg.date.strftime('%Y-%m-%d'), firstmsg.fromname, firstmsg.fromemail)
+        logger.info(
+            '%sDate: %s, From: %s <%s>',
+            ' ' * len(pref),
+            firstmsg.date.strftime('%Y-%m-%d'),
+            firstmsg.fromname,
+            firstmsg.fromemail,
+        )
         logger.info('%s%s', ' ' * len(pref), linkmask % lmsg.msgid)
 
 
diff --git a/src/b4/ez.py b/src/b4/ez.py
index 562f2a9..02589c5 100644
--- a/src/b4/ez.py
+++ b/src/b4/ez.py
@@ -45,7 +45,8 @@ SENT_TAG_PREFIX = 'sent/'
 
 DEFAULT_ENDPOINT = 'https://lkml.kernel.org/_b4_submit'
 
-DEFAULT_COVER_TEMPLATE = """
+DEFAULT_COVER_TEMPLATE = (
+    """
 ${cover}
 
 ---
@@ -57,9 +58,12 @@ base-commit: ${base_commit}
 change-id: ${change_id}
 ${prerequisites}
 Best regards,
--- """ + ' ' + """
+-- """
+    + ' '
+    + """
 ${signature}
 """
+)
 
 DEFAULT_CHANGELOG_TEMPLATE = """
 Changes in v${newrev}:
@@ -99,7 +103,7 @@ DEPS_HELP = """
 """
 
 # Cache of preflight hashes, used to avoid recalculating them
-PFHASH_CACHE: Dict[str, str]= dict()
+PFHASH_CACHE: Dict[str, str] = dict()
 
 
 def run_rewrite_hook(stage: str) -> None:
@@ -158,7 +162,9 @@ def get_auth_configs() -> Tuple[str, str, str, str, str, str]:
     config = b4.get_main_config()
     endpoint = config.get('send-endpoint-web', '')
     if not isinstance(endpoint, str):
-        logger.debug('Web submission endpoint (b4.send-endpoint-web) is not defined, or is not a string.')
+        logger.debug(
+            'Web submission endpoint (b4.send-endpoint-web) is not defined, or is not a string.'
+        )
         endpoint = None
     elif not re.search(r'^https?://', endpoint):
         logger.debug('Web submission endpoint (b4.send-endpoint-web) is not a web URL.')
@@ -168,10 +174,14 @@ def get_auth_configs() -> Tuple[str, str, str, str, str, str]:
         # Use the default endpoint if we are in the kernel repo
         topdir = b4.git_get_toplevel()
         if topdir and os.path.exists(os.path.join(topdir, 'Kconfig')):
-            logger.debug('No sendemail configs found, will use the default web endpoint')
+            logger.debug(
+                'No sendemail configs found, will use the default web endpoint'
+            )
             endpoint = DEFAULT_ENDPOINT
         else:
-            raise RuntimeError('Web submission endpoint (b4.send-endpoint-web) is not defined, or is not valid.')
+            raise RuntimeError(
+                'Web submission endpoint (b4.send-endpoint-web) is not defined, or is not valid.'
+            )
 
     usercfg = b4.get_user_config()
     myemail = str(usercfg.get('email', ''))
@@ -200,16 +210,22 @@ def auth_new() -> None:
         gpgargs = ['--export', '--export-options', 'export-minimal', '-a', keydata]
         ecode, out, _err = b4.gpg_run_command(gpgargs)
         if ecode > 0:
-            logger.critical('CRITICAL: unable to get PGP public key for %s:%s', algo, keydata)
+            logger.critical(
+                'CRITICAL: unable to get PGP public key for %s:%s', algo, keydata
+            )
             sys.exit(1)
         pubkey = out.decode()
     elif algo == 'ed25519':
         from nacl.encoding import Base64Encoder
         from nacl.signing import SigningKey
+
         sk = SigningKey(keydata.encode(), encoder=Base64Encoder)
         pubkey = base64.b64encode(sk.verify_key.encode()).decode()
     else:
-        logger.critical('CRITICAL: algorithm %s not currently supported for web endpoint submission', algo)
+        logger.critical(
+            'CRITICAL: algorithm %s not currently supported for web endpoint submission',
+            algo,
+        )
         sys.exit(1)
 
     logger.info('Will submit a new email authorization request to:')
@@ -244,7 +260,9 @@ def auth_new() -> None:
             rdata = res.json()
             if rdata.get('result') == 'success':
                 logger.info('Challenge generated and sent to %s', myemail)
-                logger.info('Once you receive it, run b4 send --web-auth-verify [challenge-string]')
+                logger.info(
+                    'Once you receive it, run b4 send --web-auth-verify [challenge-string]'
+                )
             sys.exit(0)
 
         except Exception:
@@ -312,7 +330,9 @@ def get_base_forkpoint(basebranch: str, mybranch: Optional[str] = None) -> str:
     if mybranch is None:
         mybranch = b4.git_get_current_branch()
         if not mybranch:
-            raise RuntimeError('Not currently on a branch, please checkout a b4-tracked branch')
+            raise RuntimeError(
+                'Not currently on a branch, please checkout a b4-tracked branch'
+            )
     logger.debug('Finding the fork-point with %s', basebranch)
     gitargs = ['merge-base', '--fork-point', basebranch]
     lines = b4.git_get_command_lines(None, gitargs)
@@ -320,8 +340,12 @@ def get_base_forkpoint(basebranch: str, mybranch: Optional[str] = None) -> str:
         gitargs = ['merge-base', mybranch, basebranch]
         lines = b4.git_get_command_lines(None, gitargs)
         if not lines:
-            logger.critical('CRITICAL: Could not find common ancestor with %s', basebranch)
-            raise RuntimeError('Branches %s and %s have no common ancestors' % (basebranch, mybranch))
+            logger.critical(
+                'CRITICAL: Could not find common ancestor with %s', basebranch
+            )
+            raise RuntimeError(
+                'Branches %s and %s have no common ancestors' % (basebranch, mybranch)
+            )
     forkpoint = lines[0]
     logger.debug('Fork-point between %s and %s is %s', mybranch, basebranch, forkpoint)
 
@@ -331,7 +355,9 @@ def get_base_forkpoint(basebranch: str, mybranch: Optional[str] = None) -> str:
 def start_new_series(cmdargs: argparse.Namespace) -> None:
     usercfg = b4.get_user_config()
     if 'name' not in usercfg or 'email' not in usercfg:
-        logger.critical('CRITICAL: Unable to add your Signed-off-by: git returned no user.name or user.email')
+        logger.critical(
+            'CRITICAL: Unable to add your Signed-off-by: git returned no user.name or user.email'
+        )
         sys.exit(1)
 
     cover = tracking = patches = thread_msgid = revision = None
@@ -373,7 +399,9 @@ def start_new_series(cmdargs: argparse.Namespace) -> None:
                         cover_sections.append(section)
                     cover = '\n---\n'.join(cover_sections).strip()
                 except Exception as ex:
-                    logger.critical('CRITICAL: unable to restore tracking information, ignoring')
+                    logger.critical(
+                        'CRITICAL: unable to restore tracking information, ignoring'
+                    )
                     logger.critical('          %s', ex)
 
             else:
@@ -386,9 +414,11 @@ def start_new_series(cmdargs: argparse.Namespace) -> None:
             # Escape lines starting with "#" so they don't get lost
             cover = re.sub(r'^(#.*)$', r'>\1', cover, flags=re.M)
 
-            cover = (f'{cmsg.subject}\n\n'
-                     f'EDITME: Imported from f{msgid}\n'
-                     f'        Please review before sending.\n\n') + cover
+            cover = (
+                f'{cmsg.subject}\n\n'
+                f'EDITME: Imported from f{msgid}\n'
+                f'        Please review before sending.\n\n'
+            ) + cover
 
             change_id = lser.change_id
             if not cmdargs.new_series_name:
@@ -429,7 +459,7 @@ def start_new_series(cmdargs: argparse.Namespace) -> None:
             if is_prep_branch():
                 logger.debug('Will use current branch as dependency.')
                 _pcover, ptracking = load_cover(strip_comments=True)
-                depends_on = f"change-id: {ptracking['series']['change-id']}:v{ptracking['series']['revision']}"
+                depends_on = f'change-id: {ptracking["series"]["change-id"]}:v{ptracking["series"]["revision"]}'
 
             cmdargs.fork_point = 'HEAD'
             if mybranch:
@@ -442,7 +472,9 @@ def start_new_series(cmdargs: argparse.Namespace) -> None:
                 gitargs = ['branch', '-v', '--contains', cmdargs.fork_point]
                 lines = b4.git_get_command_lines(None, gitargs)
                 if not lines:
-                    logger.critical('CRITICAL: no branch contains fork-point %s', cmdargs.fork_point)
+                    logger.critical(
+                        'CRITICAL: no branch contains fork-point %s', cmdargs.fork_point
+                    )
                     sys.exit(1)
                 for line in lines:
                     chunks = line.split(maxsplit=2)
@@ -450,15 +482,24 @@ def start_new_series(cmdargs: argparse.Namespace) -> None:
                     if chunks[0] != '*':
                         continue
                     if chunks[1] == mybranch:
-                        logger.debug('branch %s does contain fork-point %s', mybranch, cmdargs.fork_point)
+                        logger.debug(
+                            'branch %s does contain fork-point %s',
+                            mybranch,
+                            cmdargs.fork_point,
+                        )
                         basebranch = mybranch
                         break
             else:
                 basebranch = mybranch
 
             if basebranch is None:
-                logger.critical('CRITICAL: fork-point %s is not on the current branch.', cmdargs.fork_point)
-                logger.critical('          Switch to the branch you want to use as base and try again.')
+                logger.critical(
+                    'CRITICAL: fork-point %s is not on the current branch.',
+                    cmdargs.fork_point,
+                )
+                logger.critical(
+                    '          Switch to the branch you want to use as base and try again.'
+                )
                 sys.exit(1)
 
         slug = re.sub(r'\W+', '-', cmdargs.new_series_name).strip('-').lower()
@@ -476,7 +517,9 @@ def start_new_series(cmdargs: argparse.Namespace) -> None:
         basebranch = None
         _cb = b4.git_get_current_branch()
         if _cb is None:
-            logger.critical('CRITICAL: not currently on a branch, unable to enroll with a base')
+            logger.critical(
+                'CRITICAL: not currently on a branch, unable to enroll with a base'
+            )
             sys.exit(1)
         seriesname = branchname = _cb
         slug = re.sub(r'\W+', '-', branchname).strip('-').lower()
@@ -491,13 +534,19 @@ def start_new_series(cmdargs: argparse.Namespace) -> None:
         elif out:
             enroll_base = out.strip()
         # Is it a branch?
-        gitargs = ['show-ref', f'refs/heads/{enroll_base}', f'refs/remotes/{enroll_base}']
+        gitargs = [
+            'show-ref',
+            f'refs/heads/{enroll_base}',
+            f'refs/remotes/{enroll_base}',
+        ]
         lines = b4.git_get_command_lines(None, gitargs)
         if lines:
             try:
                 forkpoint = get_base_forkpoint(enroll_base, mybranch)
             except RuntimeError as ex:
-                logger.critical('CRITICAL: could not use %s as enrollment base:', enroll_base)
+                logger.critical(
+                    'CRITICAL: could not use %s as enrollment base:', enroll_base
+                )
                 logger.critical('          %s', ex)
                 sys.exit(1)
             basebranch = enroll_base
@@ -512,7 +561,9 @@ def start_new_series(cmdargs: argparse.Namespace) -> None:
             # check branches where this object lives
             heads = b4.git_branch_contains(None, forkpoint, checkall=True)
             if mybranch not in heads:
-                logger.critical('CRITICAL: object %s does not exist on current branch', enroll_base)
+                logger.critical(
+                    'CRITICAL: object %s does not exist on current branch', enroll_base
+                )
                 sys.exit(1)
             if strategy != 'commit':
                 # Remove any branches starting with b4/
@@ -521,12 +572,17 @@ def start_new_series(cmdargs: argparse.Namespace) -> None:
                     if head.startswith('b4/'):
                         heads.remove(head)
                 if len(heads) > 1:
-                    logger.critical('CRITICAL: Multiple branches contain object %s, please pass a branch name as base',
-                                    enroll_base)
+                    logger.critical(
+                        'CRITICAL: Multiple branches contain object %s, please pass a branch name as base',
+                        enroll_base,
+                    )
                     logger.critical('          %s', ', '.join(heads))
                     sys.exit(1)
                 if len(heads) < 1:
-                    logger.critical('CRITICAL: No other branch contains %s: cannot use as fork base', enroll_base)
+                    logger.critical(
+                        'CRITICAL: No other branch contains %s: cannot use as fork base',
+                        enroll_base,
+                    )
                     sys.exit(1)
                 basebranch = heads.pop()
 
@@ -554,7 +610,9 @@ def start_new_series(cmdargs: argparse.Namespace) -> None:
             gitargs = ['reset', '--hard', forkpoint]
             ecode, out = b4.git_run_command(None, gitargs, logstderr=True)
             if ecode > 0:
-                logger.critical('CRITICAL: not able to reset current branch to %s', forkpoint)
+                logger.critical(
+                    'CRITICAL: not able to reset current branch to %s', forkpoint
+                )
                 logger.critical(out)
                 sys.exit(1)
 
@@ -572,36 +630,44 @@ def start_new_series(cmdargs: argparse.Namespace) -> None:
         # create a default cover letter and store it where the strategy indicates
         uname = str(usercfg.get('name', ''))
         uemail = str(usercfg.get('email', ''))
-        carry = (f'EDITME: cover title for {seriesname}',
-                 '',
-                 '# Describe the purpose of this series. The information you put here',
-                 '# will be used by the project maintainer to make a decision whether',
-                 '# your patches should be reviewed, and in what priority order. Please be',
-                 '# very detailed and link to any relevant discussions or sites that the',
-                 '# maintainer can review to better understand your proposed changes. If you',
-                 '# only have a single patch in your series, the contents of the cover',
-                 '# letter will be appended to the "under-the-cut" portion of the patch.',
-                 '',
-                 '# Lines starting with # will be removed from the cover letter. You can',
-                 '# use them to add notes or reminders to yourself. If you want to use',
-                 '# markdown headers in your cover letter, start the line with ">#".',
-                 '',
-                 '# You can add trailers to the cover letter. Any email addresses found in',
-                 '# these trailers will be added to the addresses specified/generated',
-                 '# during the b4 send stage. You can also run "b4 prep --auto-to-cc" to',
-                 '# auto-populate the To: and Cc: trailers based on the code being',
-                 '# modified.',
-                 '',
-                 f'Signed-off-by: {uname} <{uemail}>',
-                 '',
-                 '',
-                 )
+        carry = (
+            f'EDITME: cover title for {seriesname}',
+            '',
+            '# Describe the purpose of this series. The information you put here',
+            '# will be used by the project maintainer to make a decision whether',
+            '# your patches should be reviewed, and in what priority order. Please be',
+            '# very detailed and link to any relevant discussions or sites that the',
+            '# maintainer can review to better understand your proposed changes. If you',
+            '# only have a single patch in your series, the contents of the cover',
+            '# letter will be appended to the "under-the-cut" portion of the patch.',
+            '',
+            '# Lines starting with # will be removed from the cover letter. You can',
+            '# use them to add notes or reminders to yourself. If you want to use',
+            '# markdown headers in your cover letter, start the line with ">#".',
+            '',
+            '# You can add trailers to the cover letter. Any email addresses found in',
+            '# these trailers will be added to the addresses specified/generated',
+            '# during the b4 send stage. You can also run "b4 prep --auto-to-cc" to',
+            '# auto-populate the To: and Cc: trailers based on the code being',
+            '# modified.',
+            '',
+            f'Signed-off-by: {uname} <{uemail}>',
+            '',
+            '',
+        )
         cover = '\n'.join(carry)
         logger.info('Created the default cover letter, you can edit with --edit-cover.')
 
     if not tracking:
         # We don't need all the entropy of uuid, just some of it
-        changeid = '%s-%s-%s' % (datetime.date.today().strftime('%Y%m%d'), slug, uuid.uuid4().hex[:12])  # noqa: DTZ011
+        changeid = (
+            '%s-%s-%s'
+            % (
+                datetime.date.today().strftime('%Y%m%d'),  # noqa: DTZ011
+                slug,
+                uuid.uuid4().hex[:12],
+            )
+        )
         if revision is None:
             revision = 1
         prefixes = list()
@@ -662,16 +728,22 @@ def start_new_series(cmdargs: argparse.Namespace) -> None:
             logger.critical('Could not apply patches from thread: %s', out)
             sys.exit(ecode)
         logger.info('---')
-        logger.info('NOTE: any follow-up trailers were ignored; apply them with b4 trailers -u')
+        logger.info(
+            'NOTE: any follow-up trailers were ignored; apply them with b4 trailers -u'
+        )
 
 
 def make_magic_json(data: Dict[str, Any]) -> str:
-    mj = (f'{MAGIC_MARKER}\n'
-          '# This section is used internally by b4 prep for tracking purposes.\n')
+    mj = (
+        f'{MAGIC_MARKER}\n'
+        '# This section is used internally by b4 prep for tracking purposes.\n'
+    )
     return mj + json.dumps(data, indent=2)
 
 
-def load_cover(strip_comments: bool = False, usebranch: Optional[str] = None) -> Tuple[str, Dict[str, Any]]:
+def load_cover(
+    strip_comments: bool = False, usebranch: Optional[str] = None
+) -> Tuple[str, Dict[str, Any]]:
     strategy = get_cover_strategy(usebranch)
     if strategy in {'commit', 'tip-commit'}:
         cover_commit = find_cover_commit(usebranch=usebranch)
@@ -718,7 +790,9 @@ def store_cover(content: str, tracking: Dict[str, Any], new: bool = False) -> No
         cover_message = content + '\n\n' + make_magic_json(tracking)
         if new:
             args = ['commit', '--allow-empty', '-F', '-']
-            ecode, out = b4.git_run_command(None, args, stdin=cover_message.encode(), logstderr=True)
+            ecode, out = b4.git_run_command(
+                None, args, stdin=cover_message.encode(), logstderr=True
+            )
             if ecode > 0:
                 logger.critical('CRITICAL: Generating cover letter commit failed:')
                 logger.critical(out)
@@ -730,7 +804,9 @@ def store_cover(content: str, tracking: Dict[str, Any], new: bool = False) -> No
                 raise RuntimeError('Error saving cover letter (commit not found)')
             fred = FRCommitMessageEditor()
             fred.add(commit, cover_message)
-            frargs = fr.FilteringOptions.parse_args(['--force', '--quiet', '--refs', f'{commit}~1..HEAD'])
+            frargs = fr.FilteringOptions.parse_args(
+                ['--force', '--quiet', '--refs', f'{commit}~1..HEAD']
+            )
             frargs.refs = [f'{commit}~1..HEAD']
             frf = fr.RepoFilter(frargs, commit_callback=fred.callback)
             logger.info('Invoking git-filter-repo to update the cover letter.')
@@ -752,13 +828,16 @@ def store_cover(content: str, tracking: Dict[str, Any], new: bool = False) -> No
 # 'tip-merge': in an empty merge commit at the tip of the branch : TODO
 #              (once/if git upstream properly supports it)
 
+
 def get_cover_strategy(usebranch: Optional[str] = None) -> str:
     if usebranch:
         branch = usebranch
     else:
         _cb = b4.git_get_current_branch()
         if _cb is None:
-            logger.critical('CRITICAL: not currently on a branch, unable to determine cover strategy')
+            logger.critical(
+                'CRITICAL: not currently on a branch, unable to determine cover strategy'
+            )
             sys.exit(1)
         branch = _cb
     # Check local branch config for the strategy
@@ -778,7 +857,9 @@ def get_cover_strategy(usebranch: Optional[str] = None) -> str:
 
 
 def is_prep_branch(mustbe: bool = False, usebranch: Optional[str] = None) -> bool:
-    mustmsg = 'CRITICAL: This is not a prep-managed branch or it was created by someone else.'
+    mustmsg = (
+        'CRITICAL: This is not a prep-managed branch or it was created by someone else.'
+    )
     mybranch: Optional[str] = None
     if usebranch:
         mybranch = usebranch
@@ -816,13 +897,19 @@ def is_prep_branch(mustbe: bool = False, usebranch: Optional[str] = None) -> boo
 def find_cover_commit(usebranch: Optional[str] = None) -> Optional[str]:
     # Walk back commits until we find the cover letter
     # Our covers always contain the MAGIC_MARKER line
-    logger.debug('Looking for the cover letter commit with magic marker "%s"', MAGIC_MARKER)
+    logger.debug(
+        'Looking for the cover letter commit with magic marker "%s"', MAGIC_MARKER
+    )
     if not usebranch:
         usebranch = b4.git_get_current_branch()
     if usebranch is None:
-        logger.critical("The current repository is not tracking a branch. To use b4, please checkout a branch.")
-        logger.critical("Maybe a rebase is running?")
-        raise RuntimeError("Not currently on a branch, please checkout a b4-tracked branch")
+        logger.critical(
+            'The current repository is not tracking a branch. To use b4, please checkout a branch.'
+        )
+        logger.critical('Maybe a rebase is running?')
+        raise RuntimeError(
+            'Not currently on a branch, please checkout a b4-tracked branch'
+        )
 
     # Restrict to committer being the current person, in case an errant cover letter
     # got added into the shared tree, as in:
@@ -830,8 +917,18 @@ def find_cover_commit(usebranch: Optional[str] = None) -> Optional[str]:
     # TODO: make it possible to ignore it, to make it possible to work on deliberately shared trees?
     usercfg = b4.get_user_config()
     limit_committer = usercfg['email']
-    gitargs = ['log', '--grep', MAGIC_MARKER, '-F', '--pretty=oneline', '--max-count=1', '--since=1.year',
-               '--no-mailmap', f'--committer={limit_committer}', usebranch]
+    gitargs = [
+        'log',
+        '--grep',
+        MAGIC_MARKER,
+        '-F',
+        '--pretty=oneline',
+        '--max-count=1',
+        '--since=1.year',
+        '--no-mailmap',
+        f'--committer={limit_committer}',
+        usebranch,
+    ]
     lines = b4.git_get_command_lines(None, gitargs)
     if not lines:
         return None
@@ -934,19 +1031,41 @@ def check_deps(cmdargs: argparse.Namespace) -> None:
                 if matches:
                     wantser = int(matches.groups()[0])
                     if wantser not in lmbx.series:
-                        logger.debug('FAIL: No matching series %s for change-id %s', wantser, change_id)
-                        res[prereq] = (False, f'No version {wantser} found for change-id {change_id}')
+                        logger.debug(
+                            'FAIL: No matching series %s for change-id %s',
+                            wantser,
+                            change_id,
+                        )
+                        res[prereq] = (
+                            False,
+                            f'No version {wantser} found for change-id {change_id}',
+                        )
                         continue
                     # Is it the latest version?
                     maxser = max(lmbx.series.keys())
                     if wantser < maxser:
-                        logger.debug('Fail: Newer version v%s available for change-id %s', maxser, change_id)
-                        res[prereq] = (False, f'v{maxser} available for change-id {change_id} (you have: v{wantser})')
+                        logger.debug(
+                            'Fail: Newer version v%s available for change-id %s',
+                            maxser,
+                            change_id,
+                        )
+                        res[prereq] = (
+                            False,
+                            f'v{maxser} available for change-id {change_id} (you have: v{wantser})',
+                        )
                         continue
-                    logger.debug('Pass: change-id %s found and is the latest posted series', change_id)
-                    res[prereq] = (True, f'Change-id {change_id} found and is the latest available version')
+                    logger.debug(
+                        'Pass: change-id %s found and is the latest posted series',
+                        change_id,
+                    )
+                    res[prereq] = (
+                        True,
+                        f'Change-id {change_id} found and is the latest available version',
+                    )
                     lser = lmbx.get_series(wantser, codereview_trailers=False)
-                    assert lser is not None  # should never happen if we found the series
+                    assert (
+                        lser is not None
+                    )  # should never happen if we found the series
                     for lmsg in lser.patches[1:]:
                         if not lmsg:
                             # Should also never happen, but just in case
@@ -955,7 +1074,10 @@ def check_deps(cmdargs: argparse.Namespace) -> None:
                         known_patches[lmsg.git_patch_id] = lmsg
             else:
                 maxser = max(lmbx.series.keys())
-                res[prereq] = (False, f'change-id should include the revision, e.g.: {change_id}:v{maxser}')
+                res[prereq] = (
+                    False,
+                    f'change-id should include the revision, e.g.: {change_id}:v{maxser}',
+                )
                 continue
 
         elif parts[0] == 'patch-id':
@@ -989,14 +1111,20 @@ def check_deps(cmdargs: argparse.Namespace) -> None:
             # Always do no-parent for these
             s_msgs = b4.get_strict_thread(q_msgs, msgid, noparent=True)
             if not s_msgs:
-                res[prereq] = (False, 'No matching message-id found on the server after strict thread check')
+                res[prereq] = (
+                    False,
+                    'No matching message-id found on the server after strict thread check',
+                )
                 continue
             lmbx = b4.LoreMailbox()
             for s_msg in s_msgs:
                 lmbx.add_message(s_msg)
             if len(lmbx.series) > 1:
                 logger.debug('FAIL: msgid=%s is a thread with multiple series', msgid)
-                res[prereq] = (False, f'Message-id <{msgid}> has multiple posted series')
+                res[prereq] = (
+                    False,
+                    f'Message-id <{msgid}> has multiple posted series',
+                )
                 continue
 
             maxser = max(lmbx.series.keys())
@@ -1020,11 +1148,16 @@ def check_deps(cmdargs: argparse.Namespace) -> None:
     allgood = all([x[0] for x in res.values()])
     if not base_commit:
         logger.debug('FAIL: base-commit not specified')
-        res['base-commit: MISSING'] = (False, 'Series with dependencies require a base-commit')
+        res['base-commit: MISSING'] = (
+            False,
+            'Series with dependencies require a base-commit',
+        )
     elif allgood:
         logger.info('Testing if all patches can be applied to %s', base_commit)
-        _, _, _, mypatches = get_prep_branch_as_patches(thread=False, movefrom=False, addtracking=False)
-        if get_cover_strategy() == "commit":
+        _, _, _, mypatches = get_prep_branch_as_patches(
+            thread=False, movefrom=False, addtracking=False
+        )
+        if get_cover_strategy() == 'commit':
             # If the cover letter is stored as a commit, skip it to avoid empty patches
             prereq_patches += [x[1] for x in mypatches[1:]]
         else:
@@ -1038,15 +1171,23 @@ def check_deps(cmdargs: argparse.Namespace) -> None:
             b4.save_git_am_mbox(prereq_patches, ifh)
             ambytes = ifh.getvalue()
             try:
-                b4.git_fetch_am_into_repo(topdir, ambytes, at_base=base_commit, check_only=True)
+                b4.git_fetch_am_into_repo(
+                    topdir, ambytes, at_base=base_commit, check_only=True
+                )
                 logger.debug('PASS: Prereqs cleanly apply to %s', base_commit)
                 res[f'base-commit: {base_commit}'] = (True, 'All patches cleanly apply')
             except RuntimeError:
                 logger.debug('FAIL: Could not cleanly apply patches to %s', base_commit)
-                res[f'base-commit: {base_commit}'] = (False, 'Could not cleanly apply patches')
+                res[f'base-commit: {base_commit}'] = (
+                    False,
+                    'Could not cleanly apply patches',
+                )
         else:
             logger.debug('FAIL: %s does not exist in current tree', base_commit)
-            res[f'base-commit: {base_commit}'] = (False, 'Base commit not found in the current tree')
+            res[f'base-commit: {base_commit}'] = (
+                False,
+                'Base commit not found in the current tree',
+            )
     else:
         logger.info('Not checking applicability of the series due to other errors')
 
@@ -1102,7 +1243,9 @@ def get_series_start(usebranch: Optional[str] = None) -> Optional[str]:
 
 def update_trailers(cmdargs: argparse.Namespace) -> None:
     if not b4.can_network and not cmdargs.localmbox:
-        logger.critical('CRITICAL: To work in offline mode you have to pass a local mailbox.')
+        logger.critical(
+            'CRITICAL: To work in offline mode you have to pass a local mailbox.'
+        )
         sys.exit(1)
 
     usercfg = b4.get_user_config()
@@ -1142,25 +1285,38 @@ def update_trailers(cmdargs: argparse.Namespace) -> None:
         if since_commit:
             start = f'{since_commit}~1'
         else:
-            logger.critical('CRITICAL: Could not resolve %s to a git commit', cmdargs.since_commit)
+            logger.critical(
+                'CRITICAL: Could not resolve %s to a git commit', cmdargs.since_commit
+            )
             sys.exit(1)
 
     else:
         # Find the most recent commit where we're not the committer
-        gitargs = ['log', '--perl-regexp', '--no-mailmap',
-                   f'--committer=^(?!.*<{limit_committer}>)', '--max-count=1',
-                   '--format=%H', '--since', cmdargs.since]
+        gitargs = [
+            'log',
+            '--perl-regexp',
+            '--no-mailmap',
+            f'--committer=^(?!.*<{limit_committer}>)',
+            '--max-count=1',
+            '--format=%H',
+            '--since',
+            cmdargs.since,
+        ]
 
         lines = b4.git_get_command_lines(None, gitargs)
         if not lines:
-            logger.critical('CRITICAL: could not find any commits, try changing --since')
+            logger.critical(
+                'CRITICAL: could not find any commits, try changing --since'
+            )
             sys.exit(1)
         # Iterate through the commits we will consider and do some sanity checking
         first_considered = lines[0]
         logger.debug('First commit to consider: %s', first_considered)
         # Make sure this commit isn't HEAD
         if first_considered == end:
-            logger.critical('CRITICAL: the tip commit was not committed by you, refusing to continue')
+            logger.critical(
+                'CRITICAL: the tip commit was not committed by you, refusing to continue'
+            )
             sys.exit(1)
         start = first_considered
 
@@ -1171,7 +1327,9 @@ def update_trailers(cmdargs: argparse.Namespace) -> None:
     lines = b4.git_get_command_lines(None, gitargs)
     if not lines:
         # Should never happen?
-        logger.critical('CRITICAL: could not find any commits between %s and HEAD.', start)
+        logger.critical(
+            'CRITICAL: could not find any commits between %s and HEAD.', start
+        )
         sys.exit(1)
     first_to_update = end
     for line in lines:
@@ -1180,7 +1338,9 @@ def update_trailers(cmdargs: argparse.Namespace) -> None:
         # If we have more than 3 parts, that means we found a commit with multiple parents
         commit, committer_email, parents = cparts[0], cparts[1], cparts[2:]
         if len(parents) != 1:
-            logger.debug('Commit %s has non-single parent, stopping: %s', commit, parents)
+            logger.debug(
+                'Commit %s has non-single parent, stopping: %s', commit, parents
+            )
             break
         if committer_email != limit_committer and not in_prep_branch:
             logger.debug('Commit %s is not by %s, stopping', commit, committer_email)
@@ -1199,8 +1359,13 @@ def update_trailers(cmdargs: argparse.Namespace) -> None:
     logger.debug('End of the range: %s', end)
 
     try:
-        patches = b4.git_range_to_patches(None, start, end, ignore_commits=ignore_commits,
-                                          limit_committer=limit_committer)
+        patches = b4.git_range_to_patches(
+            None,
+            start,
+            end,
+            ignore_commits=ignore_commits,
+            limit_committer=limit_committer,
+        )
         if cover:
             cmsg = EmailMessage()
             cmsg['Subject'] = f'[PATCH 0/{len(patches)}] cover'
@@ -1224,7 +1389,10 @@ def update_trailers(cmdargs: argparse.Namespace) -> None:
     by_patchid: Dict[str, str] = dict()
     for lmsg in bbox.series[1].patches:
         if lmsg is None or lmsg.git_patch_id is None:
-            logger.debug('Skipping None or empty patch-id in %s', lmsg.subject if lmsg else 'unknown message')
+            logger.debug(
+                'Skipping None or empty patch-id in %s',
+                lmsg.subject if lmsg else 'unknown message',
+            )
             continue
         by_patchid[lmsg.git_patch_id] = lmsg.msgid
         commit_map[lmsg.msgid] = lmsg
@@ -1253,7 +1421,9 @@ def update_trailers(cmdargs: argparse.Namespace) -> None:
     patchid_map = b4.map_codereview_trailers(list_msgs)
     for patchid, llmsgs in patchid_map.items():
         if patchid not in by_patchid:
-            logger.debug('Skipping patch-id %s: not found in the current series', patchid)
+            logger.debug(
+                'Skipping patch-id %s: not found in the current series', patchid
+            )
             logger.debug('Ignoring follow-ups: %s', [x.subject for x in llmsgs])
             continue
         for llmsg in llmsgs:
@@ -1262,7 +1432,9 @@ def update_trailers(cmdargs: argparse.Namespace) -> None:
                 mismatches.add((ltr.name, ltr.value, llmsg.fromname, llmsg.fromemail))
             commit = by_patchid[patchid]
             lmsg = commit_map[commit]
-            logger.debug('Adding %s to %s', [x.as_string() for x in ltrailers], lmsg.msgid)
+            logger.debug(
+                'Adding %s to %s', [x.as_string() for x in ltrailers], lmsg.msgid
+            )
             lmsg.followup_trailers += ltrailers
 
     if msgid or tracking:
@@ -1271,7 +1443,9 @@ def update_trailers(cmdargs: argparse.Namespace) -> None:
     else:
         codereview_trailers = True
 
-    lser = bbox.get_series(sloppytrailers=cmdargs.sloppytrailers, codereview_trailers=codereview_trailers)
+    lser = bbox.get_series(
+        sloppytrailers=cmdargs.sloppytrailers, codereview_trailers=codereview_trailers
+    )
     if lser is None:
         logger.critical('CRITICAL: Unable to find series for %s', msgid)
         sys.exit(1)
@@ -1303,7 +1477,9 @@ def update_trailers(cmdargs: argparse.Namespace) -> None:
                     continue
                 seen_froms.add(rendered)
                 if fltr.lmsg is not None:
-                    source = midmask % urllib.parse.quote_plus(fltr.lmsg.msgid, safe='@')
+                    source = midmask % urllib.parse.quote_plus(
+                        fltr.lmsg.msgid, safe='@'
+                    )
                 logger.info('  + %s', rendered)
                 logger.info('    via: %s', source)
             else:
@@ -1339,7 +1515,9 @@ def update_trailers(cmdargs: argparse.Namespace) -> None:
 
     logger.critical('---')
     if not cmdargs.no_interactive:
-        resp = input('Rewrite %d commit(s) to add these trailers? [y/N] ' % len(commits))
+        resp = input(
+            'Rewrite %d commit(s) to add these trailers? [y/N] ' % len(commits)
+        )
         if resp.lower() not in {'y', 'yes'}:
             logger.info('Exiting without changes.')
             sys.exit(130)
@@ -1356,7 +1534,9 @@ def update_trailers(cmdargs: argparse.Namespace) -> None:
         logger.debug('commit=%s, message=%s', commit, clmsg.message)
         fred.add(commit, clmsg.message)
     logger.info('---')
-    args = fr.FilteringOptions.parse_args(['--force', '--quiet', '--refs', f'{start}..'])
+    args = fr.FilteringOptions.parse_args(
+        ['--force', '--quiet', '--refs', f'{start}..']
+    )
     args.refs = [f'{start}..']
     frf = fr.RepoFilter(args, commit_callback=fred.callback)
     logger.info('Invoking git-filter-repo to update trailers.')
@@ -1364,7 +1544,9 @@ def update_trailers(cmdargs: argparse.Namespace) -> None:
     logger.info('Trailers updated.')
 
 
-def get_addresses_from_cmd(cmdargs: List[str], msgbytes: bytes) -> List[Tuple[str, str]]:
+def get_addresses_from_cmd(
+    cmdargs: List[str], msgbytes: bytes
+) -> List[Tuple[str, str]]:
     if not cmdargs:
         return list()
     # Run this command from git toplevel
@@ -1380,7 +1562,9 @@ def get_addresses_from_cmd(cmdargs: List[str], msgbytes: bytes) -> List[Tuple[st
     return email.utils.getaddresses(addrs.split('\n'))
 
 
-def get_series_range(start_commit: Optional[str] = None, usebranch: Optional[str] = None) -> Tuple[str, str, str]:
+def get_series_range(
+    start_commit: Optional[str] = None, usebranch: Optional[str] = None
+) -> Tuple[str, str, str]:
     mybranch: Optional[str] = None
     if usebranch:
         mybranch = usebranch
@@ -1404,14 +1588,17 @@ def get_series_range(start_commit: Optional[str] = None, usebranch: Optional[str
     elif mybranch:
         end_commit = b4.git_revparse_obj(mybranch)
     else:
-        logger.critical('CRITICAL: Not currently on a branch, unable to determine end commit')
+        logger.critical(
+            'CRITICAL: Not currently on a branch, unable to determine end commit'
+        )
         sys.exit(1)
 
     return base_commit, start_commit, end_commit
 
 
-def get_series_details(start_commit: Optional[str] = None, usebranch: Optional[str] = None
-                       ) -> Tuple[str, str, str, List[str], str, str]:
+def get_series_details(
+    start_commit: Optional[str] = None, usebranch: Optional[str] = None
+) -> Tuple[str, str, str, List[str], str, str]:
     base_commit, start_commit, end_commit = get_series_range(start_commit, usebranch)
     gitargs = ['shortlog', f'{start_commit}..{end_commit}']
     _, shortlog = b4.git_run_command(None, gitargs)
@@ -1420,7 +1607,14 @@ def get_series_details(start_commit: Optional[str] = None, usebranch: Optional[s
     gitargs = ['log', '--oneline', f'{start_commit}..{end_commit}']
     _, _olout = b4.git_run_command(None, gitargs)
     oneline = _olout.rstrip().splitlines()
-    return base_commit, start_commit, end_commit, oneline, shortlog.rstrip(), diffstat.rstrip()
+    return (
+        base_commit,
+        start_commit,
+        end_commit,
+        oneline,
+        shortlog.rstrip(),
+        diffstat.rstrip(),
+    )
 
 
 def get_base_changeid_from_tag(tagname: str) -> Tuple[str, str, str]:
@@ -1466,7 +1660,9 @@ def make_msgid_tpt(change_id: str, revision: str, domain: Optional[str] = None)
     return msgid_tpt
 
 
-def get_cover_dests(cbody: str) -> Tuple[List[Tuple[str, str]], List[Tuple[str, str]], str]:
+def get_cover_dests(
+    cbody: str,
+) -> Tuple[List[Tuple[str, str]], List[Tuple[str, str]], str]:
     htrs, cmsg, mtrs, basement, sig = b4.LoreMessage.get_body_parts(cbody)
     tos = list()
     ccs = list()
@@ -1481,8 +1677,15 @@ def get_cover_dests(cbody: str) -> Tuple[List[Tuple[str, str]], List[Tuple[str,
     return tos, ccs, cbody
 
 
-def add_cover(csubject: b4.LoreSubject, msgid_tpt: str, patches: List[Tuple[str, EmailMessage]],
-              cbody: str, datets: int, thread: bool = True, presubject: Optional[str] = None) -> None:
+def add_cover(
+    csubject: b4.LoreSubject,
+    msgid_tpt: str,
+    patches: List[Tuple[str, EmailMessage]],
+    cbody: str,
+    datets: int,
+    thread: bool = True,
+    presubject: Optional[str] = None,
+) -> None:
     fp = patches[0][1]
     cmsg = EmailMessage()
     cmsg.add_header('From', fp['From'])
@@ -1491,8 +1694,12 @@ def add_cover(csubject: b4.LoreSubject, msgid_tpt: str, patches: List[Tuple[str,
     csubject.expected = fpls.expected
     csubject.counter = 0
     csubject.revision = fpls.revision
-    cmsg.add_header('Subject', csubject.get_rebuilt_subject(eprefixes=fpls.get_extra_prefixes(),
-                                                            presubject=presubject))
+    cmsg.add_header(
+        'Subject',
+        csubject.get_rebuilt_subject(
+            eprefixes=fpls.get_extra_prefixes(), presubject=presubject
+        ),
+    )
     cmsg.add_header('Date', email.utils.formatdate(datets, localtime=True))
     cmsg.add_header('Message-Id', msgid_tpt % str(0))
 
@@ -1507,8 +1714,12 @@ def add_cover(csubject: b4.LoreSubject, msgid_tpt: str, patches: List[Tuple[str,
 def mixin_cover(cbody: str, patches: List[Tuple[str, EmailMessage]]) -> None:
     msg = patches[0][1]
     pbody, _pcharset = b4.LoreMessage.get_payload(msg)
-    pheaders, pmessage, ptrailers, pbasement, _psignature = b4.LoreMessage.get_body_parts(pbody)
-    _cheaders, cmessage, ctrailers, cbasement, csignature = b4.LoreMessage.get_body_parts(cbody)
+    pheaders, pmessage, ptrailers, pbasement, _psignature = (
+        b4.LoreMessage.get_body_parts(pbody)
+    )
+    _cheaders, cmessage, ctrailers, cbasement, csignature = (
+        b4.LoreMessage.get_body_parts(cbody)
+    )
     nbparts = list()
     nmessage = cmessage.rstrip('\r\n') + '\n'
 
@@ -1544,7 +1755,9 @@ def mixin_cover(cbody: str, patches: List[Tuple[str, EmailMessage]]) -> None:
 
     newbasement = '---\n'.join(nbparts)
 
-    pbody = b4.LoreMessage.rebuild_message(pheaders, pmessage, ptrailers, newbasement, csignature)
+    pbody = b4.LoreMessage.rebuild_message(
+        pheaders, pmessage, ptrailers, newbasement, csignature
+    )
     msg.set_payload(pbody, charset='utf-8')
     # Check if the new body now has 8bit content and fix CTR
     if msg.get('Content-Transfer-Encoding') != '8bit' and not pbody.isascii():
@@ -1572,10 +1785,17 @@ def rethread(patches: List[Tuple[str, EmailMessage]]) -> None:
             msg.add_header('In-Reply-To', refto)
 
 
-def get_prep_branch_as_patches(movefrom: bool = True, thread: bool = True, addtracking: bool = True,
-                               prefixes: Optional[List[str]] = None, usebranch: Optional[str] = None,
-                               expandprereqs: bool = True, force_cover: bool = False,
-                               ) -> Tuple[List[Tuple[str, str]], List[Tuple[str, str]], str, List[Tuple[str, EmailMessage]]]:
+def get_prep_branch_as_patches(
+    movefrom: bool = True,
+    thread: bool = True,
+    addtracking: bool = True,
+    prefixes: Optional[List[str]] = None,
+    usebranch: Optional[str] = None,
+    expandprereqs: bool = True,
+    force_cover: bool = False,
+) -> Tuple[
+    List[Tuple[str, str]], List[Tuple[str, str]], str, List[Tuple[str, EmailMessage]]
+]:
     cover, tracking = load_cover(strip_comments=True, usebranch=usebranch)
 
     if prefixes is None:
@@ -1604,17 +1824,22 @@ def get_prep_branch_as_patches(movefrom: bool = True, thread: bool = True, addtr
 
     presubject = tracking['series'].get('presubject', list())
 
-    patches = b4.git_range_to_patches(None, start_commit, end_commit,
-                                      revision=revision,
-                                      prefixes=prefixes,
-                                      msgid_tpt=msgid_tpt,
-                                      seriests=seriests,
-                                      mailfrom=mailfrom,
-                                      ignore_commits=ignore_commits,
-                                      presubject=presubject)
-
-    base_commit, _, _, _, shortlog, diffstat = get_series_details(start_commit=start_commit,
-                                                                  usebranch=usebranch)
+    patches = b4.git_range_to_patches(
+        None,
+        start_commit,
+        end_commit,
+        revision=revision,
+        prefixes=prefixes,
+        msgid_tpt=msgid_tpt,
+        seriests=seriests,
+        mailfrom=mailfrom,
+        ignore_commits=ignore_commits,
+        presubject=presubject,
+    )
+
+    base_commit, _, _, _, shortlog, diffstat = get_series_details(
+        start_commit=start_commit, usebranch=usebranch
+    )
 
     config = b4.get_main_config()
     cover_template = DEFAULT_COVER_TEMPLATE
@@ -1623,12 +1848,17 @@ def get_prep_branch_as_patches(movefrom: bool = True, thread: bool = True, addtr
         try:
             ctf = config['prep-cover-template']
             if not isinstance(ctf, str):
-                logger.critical('ERROR: prep-cover-template must be a string, got %s', type(ctf).__name__)
+                logger.critical(
+                    'ERROR: prep-cover-template must be a string, got %s',
+                    type(ctf).__name__,
+                )
                 sys.exit(1)
             cover_template = b4.read_template(ctf)
         except FileNotFoundError:
-            logger.critical('ERROR: prep-cover-template says to use %s, but it does not exist',
-                            config['prep-cover-template'])
+            logger.critical(
+                'ERROR: prep-cover-template says to use %s, but it does not exist',
+                config['prep-cover-template'],
+            )
             sys.exit(2)
     prereqs = tracking['series'].get('prerequisites', list())
     prerequisites = ''
@@ -1642,7 +1872,9 @@ def get_prep_branch_as_patches(movefrom: bool = True, thread: bool = True, addtr
         if prereq.startswith('base-commit:'):
             base_commit = b4.git_revparse_obj(chunks[1])
             if not base_commit:
-                logger.warning('WARNING: unable to resolve prerequisite-base-commit %s', chunks[1])
+                logger.warning(
+                    'WARNING: unable to resolve prerequisite-base-commit %s', chunks[1]
+                )
                 base_commit = chunks[1]
             else:
                 logger.debug('Overriding base-commit with: %s', base_commit)
@@ -1677,12 +1909,15 @@ def get_prep_branch_as_patches(movefrom: bool = True, thread: bool = True, addtr
                     continue
                 logger.debug('Checking if we have a sent version')
                 try:
-                    _, _, ppatches = get_sent_tag_as_patches(tagname, revision=revision,
-                                                             presubject=presubject)
+                    _, _, ppatches = get_sent_tag_as_patches(
+                        tagname, revision=revision, presubject=presubject
+                    )
                     for _psha, ppatch in ppatches:
                         spatches.append(ppatch)
                 except RuntimeError:
-                    logger.debug('Nothing matched tagname=%s, checking remotely', tagname)
+                    logger.debug(
+                        'Nothing matched tagname=%s, checking remotely', tagname
+                    )
                     lmbx = b4.get_series_by_change_id(pcid)
                     if not lmbx:
                         logger.info('Nothing known about change-id: %s', pcid)
@@ -1725,27 +1960,39 @@ def get_prep_branch_as_patches(movefrom: bool = True, thread: bool = True, addtr
             rd_tptvals = {
                 'oldrev': oldrev,
             }
-            range_diff = Template(rangediff_template.lstrip()).safe_substitute(rd_tptvals)
+            range_diff = Template(rangediff_template.lstrip()).safe_substitute(
+                rd_tptvals
+            )
             _rdcmp = range_diff_compare(oldrev, execvp=False)
             if _rdcmp:
                 range_diff += _rdcmp
             tptvals['range_diff'] = range_diff
         else:
-            tptvals['range_diff'] = ""
+            tptvals['range_diff'] = ''
     cover_letter = Template(cover_template.lstrip()).safe_substitute(tptvals)
     # Store tracking info in the header in a safe format, which should allow us to
     # fully restore our work from the already sent series.
     ztracking = gzip.compress(bytes(json.dumps(tracking), 'utf-8'))
     b4tracking = base64.b64encode(ztracking).decode()
     # A little trick for pretty wrapping
-    wrapped = textwrap.wrap('X-B4-Tracking: v=1; b=' + b4tracking, subsequent_indent=' ', width=75)
+    wrapped = textwrap.wrap(
+        'X-B4-Tracking: v=1; b=' + b4tracking, subsequent_indent=' ', width=75
+    )
     thdata = ''.join(wrapped).replace('X-B4-Tracking: ', '')
 
     alltos, allccs, cbody = get_cover_dests(cover_letter)
     if len(patches) == 1 and not force_cover:
         mixin_cover(cbody, patches)
     else:
-        add_cover(csubject, msgid_tpt, patches, cbody, seriests, thread=thread, presubject=presubject)
+        add_cover(
+            csubject,
+            msgid_tpt,
+            patches,
+            cbody,
+            seriests,
+            thread=thread,
+            presubject=presubject,
+        )
 
     if addtracking:
         patches[0][1].add_header('X-B4-Tracking', thdata)
@@ -1774,8 +2021,14 @@ def get_prep_branch_as_patches(movefrom: bool = True, thread: bool = True, addtr
     return alltos, allccs, tag_msg, patches
 
 
-def get_sent_tag_as_patches(tagname: str, revision: int, presubject: Optional[str] = None, force_cover: bool = False) \
-        -> Tuple[List[Tuple[str, str]], List[Tuple[str, str]], List[Tuple[str, EmailMessage]]]:
+def get_sent_tag_as_patches(
+    tagname: str,
+    revision: int,
+    presubject: Optional[str] = None,
+    force_cover: bool = False,
+) -> Tuple[
+    List[Tuple[str, str]], List[Tuple[str, str]], List[Tuple[str, EmailMessage]]
+]:
     cover, base_commit, change_id = get_base_changeid_from_tag(tagname)
 
     csubject, cbody = get_cover_subject_body(cover)
@@ -1785,13 +2038,17 @@ def get_sent_tag_as_patches(tagname: str, revision: int, presubject: Optional[st
     seriests = int(time.time())
     mailfrom = b4.get_mailfrom()
 
-    patches = b4.git_range_to_patches(None, base_commit, tagname,
-                                      revision=revision,
-                                      prefixes=prefixes,
-                                      msgid_tpt=msgid_tpt,
-                                      seriests=seriests,
-                                      mailfrom=mailfrom,
-                                      presubject=presubject)
+    patches = b4.git_range_to_patches(
+        None,
+        base_commit,
+        tagname,
+        revision=revision,
+        prefixes=prefixes,
+        msgid_tpt=msgid_tpt,
+        seriests=seriests,
+        mailfrom=mailfrom,
+        presubject=presubject,
+    )
 
     alltos, allccs, cbody = get_cover_dests(cbody)
     if len(patches) == 1 and not force_cover:
@@ -1804,7 +2061,9 @@ def get_sent_tag_as_patches(tagname: str, revision: int, presubject: Optional[st
 
 def format_patch(output_dir: str) -> None:
     try:
-        _, _, _, patches = get_prep_branch_as_patches(thread=False, movefrom=False, addtracking=False)
+        _, _, _, patches = get_prep_branch_as_patches(
+            thread=False, movefrom=False, addtracking=False
+        )
     except RuntimeError as ex:
         logger.critical('CRITICAL: Failed to convert range to patches: %s', ex)
         sys.exit(1)
@@ -1839,8 +2098,10 @@ def get_check_cmds() -> Tuple[List[str], List[str]]:
         if topdir:
             checkpatch = os.path.join(topdir, 'scripts', 'checkpatch.pl')
             if os.access(checkpatch, os.X_OK):
-                spell = "--codespell" if can_codespell else ""
-                ppcmds = [f'{checkpatch} -q --terse --no-summary --mailback --showfile {spell}']
+                spell = '--codespell' if can_codespell else ''
+                ppcmds = [
+                    f'{checkpatch} -q --terse --no-summary --mailback --showfile {spell}'
+                ]
 
     # TODO: support for a whole-series check command, (pytest, etc)
     return ppcmds, scmds
@@ -1879,7 +2140,9 @@ def check(cmdargs: argparse.Namespace) -> None:
             continue
         report = list()
         for ppcmdargs in local_check_cmds:
-            ckrep = b4.LoreMessage.run_local_check(ppcmdargs, commit, msg, nocache=cmdargs.nocache)
+            ckrep = b4.LoreMessage.run_local_check(
+                ppcmdargs, commit, msg, nocache=cmdargs.nocache
+            )
             if ckrep:
                 report.extend(ckrep)
 
@@ -1902,7 +2165,12 @@ def check(cmdargs: argparse.Namespace) -> None:
             summary[flag] += 1
             logger.info('  %s %s', b4.CI_FLAGS_FANCY[flag], status)
     logger.info('---')
-    logger.info('Success: %s, Warning: %s, Error: %s', summary['success'], summary['warning'], summary['fail'])
+    logger.info(
+        'Success: %s, Warning: %s, Error: %s',
+        summary['success'],
+        summary['warning'],
+        summary['fail'],
+    )
     store_preflight_check('check')
 
 
@@ -1933,7 +2201,9 @@ def cmd_send(cmdargs: argparse.Namespace) -> None:
             revstr = cmdargs.resend
 
         # Start with full change-id based tag name
-        tagname, revision = get_sent_tagname(tracking['series']['change-id'], SENT_TAG_PREFIX, revstr)
+        tagname, revision = get_sent_tagname(
+            tracking['series']['change-id'], SENT_TAG_PREFIX, revstr
+        )
 
         if revision is None:
             logger.critical('Could not figure out revision from %s', revstr)
@@ -1949,9 +2219,12 @@ def cmd_send(cmdargs: argparse.Namespace) -> None:
         presubject = tracking['series'].get('presubject', None)
 
         try:
-            todests, ccdests, patches = get_sent_tag_as_patches(tagname, revision=revision,
-                                                                presubject=presubject,
-                                                                force_cover=cmdargs.force_cover_letter)
+            todests, ccdests, patches = get_sent_tag_as_patches(
+                tagname,
+                revision=revision,
+                presubject=presubject,
+                force_cover=cmdargs.force_cover_letter,
+            )
         except RuntimeError as ex:
             logger.critical('CRITICAL: Failed to convert tag to patches: %s', ex)
             sys.exit(1)
@@ -1971,8 +2244,9 @@ def cmd_send(cmdargs: argparse.Namespace) -> None:
             prefixes = None
 
         try:
-            todests, ccdests, tag_msg, patches = get_prep_branch_as_patches(prefixes=prefixes,
-                                                                            force_cover=cmdargs.force_cover_letter)
+            todests, ccdests, tag_msg, patches = get_prep_branch_as_patches(
+                prefixes=prefixes, force_cover=cmdargs.force_cover_letter
+            )
         except RuntimeError as ex:
             logger.critical('CRITICAL: Failed to convert range to patches: %s', ex)
             sys.exit(1)
@@ -2084,7 +2358,9 @@ def cmd_send(cmdargs: argparse.Namespace) -> None:
             # Use the default endpoint if we are in the kernel repo
             topdir = b4.git_get_toplevel()
             if topdir and os.path.exists(os.path.join(topdir, 'Kconfig')):
-                logger.debug('No sendemail configs found, will use the default web endpoint')
+                logger.debug(
+                    'No sendemail configs found, will use the default web endpoint'
+                )
                 endpoint = DEFAULT_ENDPOINT
 
     # Cannot currently use endpoint with --preview-to
@@ -2102,14 +2378,18 @@ def cmd_send(cmdargs: argparse.Namespace) -> None:
         if not cmdargs.resend:
             logger.debug('Running pre-flight checks')
             sinfo = get_info(usebranch=mybranch)
-            pfchecks = {'needs-editing': True,
-                        'needs-checking': True,
-                        'needs-checking-deps': True,
-                        'needs-auto-to-cc': True,
-                        }
+            pfchecks = {
+                'needs-editing': True,
+                'needs-checking': True,
+                'needs-checking-deps': True,
+                'needs-auto-to-cc': True,
+            }
             _cppfc = config.get('prep-pre-flight-checks', 'enable-all')
             if not isinstance(_cppfc, str):
-                logger.critical('CRITICAL: prep-pre-flight-checks must be a str, got %s', type(_cppfc).__name__)
+                logger.critical(
+                    'CRITICAL: prep-pre-flight-checks must be a str, got %s',
+                    type(_cppfc).__name__,
+                )
                 sys.exit(1)
             cfg_checks = [x.strip() for x in _cppfc.split(',')]
             if 'disable-all' in cfg_checks:
@@ -2124,7 +2404,11 @@ def cmd_send(cmdargs: argparse.Namespace) -> None:
             for pfcheck in pfchecks:
                 pfdata = sinfo[pfcheck]
                 if not isinstance(pfdata, bool):
-                    logger.debug('Pre-flight check %s is not a boolean, got %s', pfcheck, type(pfdata).__name__)
+                    logger.debug(
+                        'Pre-flight check %s is not a boolean, got %s',
+                        pfcheck,
+                        type(pfdata).__name__,
+                    )
                     continue
                 pfchecks[pfcheck] = pfdata
                 if pfdata and not failing:
@@ -2145,7 +2429,9 @@ def cmd_send(cmdargs: argparse.Namespace) -> None:
                         logger.critical('  - Run auto-to-cc   : b4 prep --auto-to-cc')
                 try:
                     logger.critical('---')
-                    input('Press Enter to ignore and send anyway or Ctrl-C to abort and fix')
+                    input(
+                        'Press Enter to ignore and send anyway or Ctrl-C to abort and fix'
+                    )
                 except KeyboardInterrupt:
                     logger.info('')
                     sys.exit(130)
@@ -2157,7 +2443,10 @@ def cmd_send(cmdargs: argparse.Namespace) -> None:
         for commit, msg in patches:
             if not msg:
                 continue
-            logger.info('  %s', re.sub(r'\s+', ' ', b4.LoreMessage.clean_header(msg.get('Subject'))))
+            logger.info(
+                '  %s',
+                re.sub(r'\s+', ' ', b4.LoreMessage.clean_header(msg.get('Subject'))),
+            )
             if commit in pccs:
                 extracc = list()
                 for pair in pccs[commit]:
@@ -2172,7 +2461,9 @@ def cmd_send(cmdargs: argparse.Namespace) -> None:
         logger.info('Ready to:')
         if endpoint:
             if cmdargs.reflect:
-                logger.info('  - send the above messages to just %s (REFLECT MODE)', fromaddr)
+                logger.info(
+                    '  - send the above messages to just %s (REFLECT MODE)', fromaddr
+                )
             else:
                 logger.info('  - send the above messages to actual recipients')
             logger.info('  - via web endpoint: %s', endpoint)
@@ -2180,9 +2471,13 @@ def cmd_send(cmdargs: argparse.Namespace) -> None:
             if sconfig.get('from'):
                 fromaddr = sconfig.get('from')
             if cmdargs.reflect:
-                logger.info('  - send the above messages to just %s (REFLECT MODE)', fromaddr)
+                logger.info(
+                    '  - send the above messages to just %s (REFLECT MODE)', fromaddr
+                )
             elif cmdargs.preview_to:
-                logger.info('  - send the above messages to the recipients listed (PREVIEW MODE)')
+                logger.info(
+                    '  - send the above messages to the recipients listed (PREVIEW MODE)'
+                )
             else:
                 logger.info('  - send the above messages to actual listed recipients')
             logger.info('  - with envelope-from: %s', fromaddr)
@@ -2190,13 +2485,24 @@ def cmd_send(cmdargs: argparse.Namespace) -> None:
             smtpserver = str(sconfig.get('smtpserver', 'localhost'))
             if '/' in smtpserver:
                 logger.info('  - via local command %s', smtpserver)
-                if cmdargs.reflect and sconfig.get('b4-really-reflect-via') != smtpserver:
+                if (
+                    cmdargs.reflect
+                    and sconfig.get('b4-really-reflect-via') != smtpserver
+                ):
                     logger.critical('---')
-                    logger.critical('CRITICAL: Cowardly refusing to reflect via %s.', smtpserver)
-                    logger.critical('          There is no guarantee that this command will do the right thing')
-                    logger.critical('          and will not send mail to actual addressees.')
+                    logger.critical(
+                        'CRITICAL: Cowardly refusing to reflect via %s.', smtpserver
+                    )
+                    logger.critical(
+                        '          There is no guarantee that this command will do the right thing'
+                    )
+                    logger.critical(
+                        '          and will not send mail to actual addressees.'
+                    )
                     logger.critical('---')
-                    logger.critical('If you are ABSOLUTELY SURE that this command will do the right thing,')
+                    logger.critical(
+                        'If you are ABSOLUTELY SURE that this command will do the right thing,'
+                    )
                     logger.critical('add the following to the [sendemail] section:')
                     logger.critical('b4-really-reflect-via = %s', smtpserver)
                     sys.exit(1)
@@ -2209,11 +2515,17 @@ def cmd_send(cmdargs: argparse.Namespace) -> None:
         logger.info('')
         if cmdargs.reflect:
             logger.info('REFLECT MODE:')
-            logger.info('    The To: and Cc: headers will be fully populated, but the only')
-            logger.info('    address given to the mail server for actual delivery will be')
+            logger.info(
+                '    The To: and Cc: headers will be fully populated, but the only'
+            )
+            logger.info(
+                '    address given to the mail server for actual delivery will be'
+            )
             logger.info('    %s', fromaddr)
             logger.info('')
-            logger.info('    Addresses in To: and Cc: headers will NOT receive this series.')
+            logger.info(
+                '    Addresses in To: and Cc: headers will NOT receive this series.'
+            )
             logger.info('')
         try:
             input('Press Enter to proceed or Ctrl-C to abort')
@@ -2278,20 +2590,31 @@ def cmd_send(cmdargs: argparse.Namespace) -> None:
         send_msgs.append(msg)
 
     if cl_msgid is None:
-        logger.critical('CRITICAL: Unable to get a clean message-id for the cover letter')
+        logger.critical(
+            'CRITICAL: Unable to get a clean message-id for the cover letter'
+        )
         sys.exit(1)
 
     if endpoint:
         # Web endpoint always requires signing
         if not sign:
-            logger.critical('CRITICAL: Web endpoint will be used for sending, but signing is turned off')
+            logger.critical(
+                'CRITICAL: Web endpoint will be used for sending, but signing is turned off'
+            )
             logger.critical('          Please re-enable signing or use SMTP')
             sys.exit(1)
 
         try:
-            sent = b4.send_mail(None, send_msgs, fromaddr=None, patatt_sign=True,
-                                dryrun=cmdargs.dryrun, output_dir=cmdargs.output_dir, web_endpoint=endpoint,
-                                reflect=cmdargs.reflect)
+            sent = b4.send_mail(
+                None,
+                send_msgs,
+                fromaddr=None,
+                patatt_sign=True,
+                dryrun=cmdargs.dryrun,
+                output_dir=cmdargs.output_dir,
+                web_endpoint=endpoint,
+                reflect=cmdargs.reflect,
+            )
         except RuntimeError as ex:
             logger.critical('CRITICAL: %s', ex)
             sys.exit(1)
@@ -2304,9 +2627,15 @@ def cmd_send(cmdargs: argparse.Namespace) -> None:
             sys.exit(1)
 
         try:
-            sent = b4.send_mail(smtp, send_msgs, fromaddr=fromaddr, patatt_sign=sign,
-                                dryrun=cmdargs.dryrun, output_dir=cmdargs.output_dir,
-                                reflect=cmdargs.reflect)
+            sent = b4.send_mail(
+                smtp,
+                send_msgs,
+                fromaddr=fromaddr,
+                patatt_sign=sign,
+                dryrun=cmdargs.dryrun,
+                output_dir=cmdargs.output_dir,
+                reflect=cmdargs.reflect,
+            )
         except RuntimeError as ex:
             logger.critical('CRITICAL: %s', ex)
             sys.exit(1)
@@ -2334,13 +2663,17 @@ def cmd_send(cmdargs: argparse.Namespace) -> None:
         return
 
     if tag_msg is None:
-        logger.critical('CRITICAL: unable to get tag_msg from %s, not rerolling', mybranch)
+        logger.critical(
+            'CRITICAL: unable to get tag_msg from %s, not rerolling', mybranch
+        )
         return
 
     reroll(mybranch, tag_msg, cl_msgid)
 
 
-def get_sent_tagname(tagbase: str, tagprefix: str, revstr: Union[str, int]) -> Tuple[str, Optional[int]]:
+def get_sent_tagname(
+    tagbase: str, tagprefix: str, revstr: Union[str, int]
+) -> Tuple[str, Optional[int]]:
     revision = None
     if isinstance(revstr, int):
         revision = revstr
@@ -2362,7 +2695,9 @@ def get_sent_tagname(tagbase: str, tagprefix: str, revstr: Union[str, int]) -> T
     return f'{tagprefix}{tagbase}-v{revision}', revision
 
 
-def reroll(mybranch: str, tag_msg: str, msgid: str, tagprefix: str = SENT_TAG_PREFIX) -> None:
+def reroll(
+    mybranch: str, tag_msg: str, msgid: str, tagprefix: str = SENT_TAG_PREFIX
+) -> None:
     # Remove signature
     chunks = tag_msg.rsplit('\n-- \n')
     if len(chunks) > 1:
@@ -2380,15 +2715,21 @@ def reroll(mybranch: str, tag_msg: str, msgid: str, tagprefix: str = SENT_TAG_PR
         tagcommit = 'HEAD'
         try:
             if strategy == 'commit':
-                base_commit, start_commit, end_commit = get_series_range(usebranch=mybranch)
+                base_commit, start_commit, end_commit = get_series_range(
+                    usebranch=mybranch
+                )
                 with b4.git_temp_worktree(topdir, base_commit) as gwt:
                     logger.debug('Preparing a sparse worktree')
-                    ecode, out = b4.git_run_command(gwt, ['sparse-checkout', 'set'], logstderr=True)
+                    ecode, out = b4.git_run_command(
+                        gwt, ['sparse-checkout', 'set'], logstderr=True
+                    )
                     if ecode > 0:
                         logger.critical('Error running sparse-checkout set')
                         logger.critical(out)
                         raise RuntimeError
-                    ecode, out = b4.git_run_command(gwt, ['checkout', '-f'], logstderr=True)
+                    ecode, out = b4.git_run_command(
+                        gwt, ['checkout', '-f'], logstderr=True
+                    )
                     if ecode > 0:
                         logger.critical('Error running checkout into sparse workdir')
                         logger.critical(out)
@@ -2397,7 +2738,9 @@ def reroll(mybranch: str, tag_msg: str, msgid: str, tagprefix: str = SENT_TAG_PR
                     ecode, out = b4.git_run_command(gwt, gitargs, logstderr=True)
                     if ecode > 0:
                         # In theory, this shouldn't happen
-                        logger.critical('Unable to cleanly apply series, see failure log below')
+                        logger.critical(
+                            'Unable to cleanly apply series, see failure log below'
+                        )
                         logger.critical('---')
                         logger.critical(out.strip())
                         logger.critical('---')
@@ -2492,7 +2835,9 @@ def show_revision() -> None:
                 logger.info('  %s: %s', rn, config['linkmask'] % link)
 
 
-def write_to_tar(bio_tar: tarfile.TarFile, name: str, mtime: int, bio_file: io.BytesIO) -> None:
+def write_to_tar(
+    bio_tar: tarfile.TarFile, name: str, mtime: int, bio_file: io.BytesIO
+) -> None:
     tifo = tarfile.TarInfo(name)
     tuser = os.environ.get('USERNAME', 'user')
     tuid = os.getuid()
@@ -2551,7 +2896,9 @@ def _cleanup_branch(branch: str) -> None:
     logger.info('branch: %s', branch)
     if 'history' in ts:
         for rn in ts['history']:
-            tagname, revision = get_sent_tagname(ts.get('change-id'), SENT_TAG_PREFIX, rn)
+            tagname, revision = get_sent_tagname(
+                ts.get('change-id'), SENT_TAG_PREFIX, rn
+            )
             tag_commit = b4.git_revparse_tag(None, tagname)
             if not tag_commit:
                 tagname, revision = get_sent_tagname(branch, SENT_TAG_PREFIX, rn)
@@ -2572,32 +2919,43 @@ def _cleanup_branch(branch: str) -> None:
         resp = None
         while resp is None:
             resp = input('Proceed? [y/s/q/N/?] ')
-            if resp == "?":
-                logger.info(textwrap.dedent(
-                    """
+            if resp == '?':
+                logger.info(
+                    textwrap.dedent(
+                        """
                     Possible answers:
                     y: cleanup the branch
                     s: show branch log
                     q or Ctrl-C: abort cleanup
                     n (default): do not cleanup this branch
                     ?: show this help message
-                    """))
+                    """
+                    )
+                )
                 resp = None
-            elif resp in ("show", "s"):
-                ecode, out = b4.git_run_command(None, ["log",
-                                                       "--patch",
-                                                       "--color=always",
-                                                       f"{start_commit}~..{end_commit}"])
+            elif resp in ('show', 's'):
+                ecode, out = b4.git_run_command(
+                    None,
+                    [
+                        'log',
+                        '--patch',
+                        '--color=always',
+                        f'{start_commit}~..{end_commit}',
+                    ],
+                )
                 if ecode > 0:
-                    logger.critical('ERROR: unable to show git log between %s and %s',
-                                    start_commit, end_commit)
+                    logger.critical(
+                        'ERROR: unable to show git log between %s and %s',
+                        start_commit,
+                        end_commit,
+                    )
                     sys.exit(130)
                 logger.info(out)
                 logger.info('')
                 resp = None
-            elif resp == "q":
+            elif resp == 'q':
                 sys.exit(130)
-            elif resp != "y":
+            elif resp != 'y':
                 return
 
     except KeyboardInterrupt:
@@ -2632,19 +2990,28 @@ def _cleanup_branch(branch: str) -> None:
         for tagname, base_commit, tag_commit, revision, cover in tags:
             logger.info('Archiving %s', tagname)
             # use tag date as mtime
-            lines = b4.git_get_command_lines(None, ['log', '-1', '--format=%ct', tagname])
+            lines = b4.git_get_command_lines(
+                None, ['log', '-1', '--format=%ct', tagname]
+            )
             if not lines:
                 logger.critical('Could not get tag date for %s', tagname)
                 sys.exit(1)
             mtime = int(lines[0])
             ifh = io.BytesIO()
             ifh.write(cover.encode())
-            write_to_tar(tfh, f'{change_id}/{SENT_TAG_PREFIX}patches-v{revision}.cover', mtime, ifh)
+            write_to_tar(
+                tfh,
+                f'{change_id}/{SENT_TAG_PREFIX}patches-v{revision}.cover',
+                mtime,
+                ifh,
+            )
             ifh.close()
             patches = b4.git_range_to_patches(None, base_commit, tag_commit)
             ifh = io.BytesIO()
             b4.save_git_am_mbox([patch[1] for patch in patches], ifh)
-            write_to_tar(tfh, f'{change_id}/{SENT_TAG_PREFIX}patches-v{revision}.mbx', mtime, ifh)
+            write_to_tar(
+                tfh, f'{change_id}/{SENT_TAG_PREFIX}patches-v{revision}.mbx', mtime, ifh
+            )
             deletes.append(['tag', '--delete', tagname])
 
     # Write in data_dir
@@ -2726,8 +3093,12 @@ def get_info(usebranch: str) -> Dict[str, Union[str, bool, None]]:
     cover, tracking = load_cover(usebranch=usebranch)
     csubject, _ = get_cover_subject_body(cover)
     ts = tracking['series']
-    base_commit, start_commit, end_commit, oneline, _shortlog, _diffstat = get_series_details(usebranch=usebranch)
-    todests, ccdests, _, patches = get_prep_branch_as_patches(usebranch=usebranch, expandprereqs=False)
+    base_commit, start_commit, end_commit, oneline, _shortlog, _diffstat = (
+        get_series_details(usebranch=usebranch)
+    )
+    todests, ccdests, _, patches = get_prep_branch_as_patches(
+        usebranch=usebranch, expandprereqs=False
+    )
     prereqs = tracking['series'].get('prerequisites', list())
     tocmd, cccmd = get_auto_to_cc_cmds()
     ppcmds, scmds = get_check_cmds()
@@ -2742,13 +3113,11 @@ def get_info(usebranch: str) -> Dict[str, Union[str, bool, None]]:
         'start-commit': start_commit,
         'end-commit': end_commit,
         'series-range': f'{start_commit}..{end_commit}',
-
         # General information about this branch status
         'prefixes': ' '.join(ts.get('prefixes', [])) or None,
         'change-id': ts.get('change-id'),
         'revision': ts.get('revision'),
         'cover-strategy': get_cover_strategy(usebranch=usebranch),
-
         # General information about this branch checks
         'needs-editing': b'EDITME' in b4.LoreMessage.get_msg_as_bytes(patches[0][1]),
         'needs-recipients': bool(not todests and not ccdests),
@@ -2758,9 +3127,15 @@ def get_info(usebranch: str) -> Dict[str, Union[str, bool, None]]:
         'needs-checking-deps': len(prereqs) > 0 and 'check-deps' not in pf_checks,
         'preflight-checks-failing': None,
     }
-    info['needs-auto-to-cc'] = info["needs-recipients"] or (bool(tocmd or cccmd) and 'auto-to-cc' not in pf_checks)
-    info['preflight-checks-failing'] = bool(info['needs-editing'] or info['needs-auto-to-cc'] or
-                                            info['needs-checking'] or info['needs-checking-deps'])
+    info['needs-auto-to-cc'] = info['needs-recipients'] or (
+        bool(tocmd or cccmd) and 'auto-to-cc' not in pf_checks
+    )
+    info['preflight-checks-failing'] = bool(
+        info['needs-editing']
+        or info['needs-auto-to-cc']
+        or info['needs-checking']
+        or info['needs-checking-deps']
+    )
 
     # Add informations about the commits in this series
     #   `commit-<hash>`: stores the subject of each commit
@@ -2770,10 +3145,14 @@ def get_info(usebranch: str) -> Dict[str, Union[str, bool, None]]:
         info[f'commit-{short}'] = subject
     if 'history' in ts:
         for rn, links in reversed(ts['history'].items()):
-            tagname, revision = get_sent_tagname(ts.get('change-id'), SENT_TAG_PREFIX, rn)
+            tagname, revision = get_sent_tagname(
+                ts.get('change-id'), SENT_TAG_PREFIX, rn
+            )
             tag_commit = b4.git_revparse_tag(None, tagname)
             if not tag_commit:
-                logger.debug('No tag %s, trying with base branch name %s', tagname, usebranch)
+                logger.debug(
+                    'No tag %s, trying with base branch name %s', tagname, usebranch
+                )
                 tagname, revision = get_sent_tagname(usebranch, SENT_TAG_PREFIX, rn)
                 tag_commit = b4.git_revparse_tag(None, tagname)
             if not tag_commit:
@@ -2781,7 +3160,11 @@ def get_info(usebranch: str) -> Dict[str, Union[str, bool, None]]:
                 continue
             try:
                 cover, base_commit, _change_id = get_base_changeid_from_tag(tagname)
-                info[f'series-{rn}'] = '%s..%s %s' % (base_commit[:12], tag_commit[:12], links[0])
+                info[f'series-{rn}'] = '%s..%s %s' % (
+                    base_commit[:12],
+                    tag_commit[:12],
+                    links[0],
+                )
             except RuntimeError as ex:
                 logger.debug('Could not get base-commit info from %s: %s', tagname, ex)
     return info
@@ -2794,10 +3177,14 @@ def force_revision(forceto: int) -> None:
     store_cover(cover, tracking)
 
 
-def range_diff_compare(compareto: str, execvp: bool = True, range_diff_opts: Optional[str] = None) -> Union[str, None]:
+def range_diff_compare(
+    compareto: str, execvp: bool = True, range_diff_opts: Optional[str] = None
+) -> Union[str, None]:
     _, tracking = load_cover()
     # Try the new format first
-    tagname, _ = get_sent_tagname(tracking['series']['change-id'], SENT_TAG_PREFIX, compareto)
+    tagname, _ = get_sent_tagname(
+        tracking['series']['change-id'], SENT_TAG_PREFIX, compareto
+    )
     prev_end = b4.git_revparse_tag(None, tagname)
     if not prev_end:
         mybranch = b4.git_get_current_branch(None)
@@ -2826,7 +3213,12 @@ def range_diff_compare(compareto: str, execvp: bool = True, range_diff_opts: Opt
     gitargs = ['rev-parse', series_end]
     lines = b4.git_get_command_lines(None, gitargs)
     curr_end = lines[0]
-    grdcmd = ['git', 'range-diff', '%.12s..%.12s' % (prev_start, prev_end), '%.12s..%.12s' % (curr_start, curr_end)]
+    grdcmd = [
+        'git',
+        'range-diff',
+        '%.12s..%.12s' % (prev_start, prev_end),
+        '%.12s..%.12s' % (curr_start, curr_end),
+    ]
     if range_diff_opts:
         sp = shlex.shlex(range_diff_opts, posix=True)
         sp.whitespace_split = True
@@ -2897,7 +3289,10 @@ def auto_to_cc() -> None:
         logger.debug('added %s to seen', ltr.addr[1])
 
     extras = list()
-    for tname, addrs in (('To', config.get('send-series-to')), ('Cc', config.get('send-series-cc'))):
+    for tname, addrs in (
+        ('To', config.get('send-series-to')),
+        ('Cc', config.get('send-series-cc')),
+    ):
         if not addrs or not isinstance(addrs, str):
             continue
         for pair in email.utils.getaddresses([addrs]):
@@ -2923,8 +3318,10 @@ def auto_to_cc() -> None:
 
         logger.debug('Collecting from: %s', msg.get('subject'))
         msgbytes = msg.as_bytes()
-        for tname, pairs in (('To', get_addresses_from_cmd(tocmd, msgbytes)),
-                             ('Cc', get_addresses_from_cmd(cccmd, msgbytes))):
+        for tname, pairs in (
+            ('To', get_addresses_from_cmd(tocmd, msgbytes)),
+            ('Cc', get_addresses_from_cmd(cccmd, msgbytes)),
+        ):
             for pair in pairs:
                 if pair[1] not in seen:
                     seen.add(pair[1])
@@ -2954,8 +3351,13 @@ def get_preflight_hash(usebranch: Optional[str] = None) -> str:
     global PFHASH_CACHE
     cachebranch = usebranch if usebranch is not None else '_current_'
     if cachebranch not in PFHASH_CACHE:
-        _tos, _ccs, _tstr, patches = get_prep_branch_as_patches(movefrom=False, thread=False, addtracking=False,
-                                                             usebranch=usebranch, expandprereqs=False)
+        _tos, _ccs, _tstr, patches = get_prep_branch_as_patches(
+            movefrom=False,
+            thread=False,
+            addtracking=False,
+            usebranch=usebranch,
+            expandprereqs=False,
+        )
         hashed = hashlib.sha1()
         for _commit, msg in patches:
             body, _charset = b4.LoreMessage.get_payload(msg)
@@ -3010,13 +3412,13 @@ def set_prefixes(prefixes: List[str], additive: bool = False) -> None:
 
 
 def _check_presubject(presubject: str) -> None:
-    if presubject == "":
+    if presubject == '':
         return
 
-    if presubject.startswith("[") and presubject.endswith("]"):
+    if presubject.startswith('[') and presubject.endswith(']'):
         return
 
-    raise RuntimeError("The presubject must be enclosed with brackets. E.g: [mylist]")
+    raise RuntimeError('The presubject must be enclosed with brackets. E.g: [mylist]')
 
 
 def set_presubject(presubject: str) -> None:
@@ -3025,7 +3427,7 @@ def set_presubject(presubject: str) -> None:
     tracking['series']['presubject'] = presubject
     if tracking['series']['presubject'] != old_presubject:
         store_cover(cover, tracking)
-        if tracking['series']['presubject'] != "":
+        if tracking['series']['presubject'] != '':
             logger.info('Updated pre-subject to: %s', presubject)
         else:
             logger.info('Removed pre-subject.')
@@ -3080,7 +3482,9 @@ def cmd_prep(cmdargs: argparse.Namespace) -> None:
         return
 
     if cmdargs.enroll_base and cmdargs.new_series_name:
-        logger.critical('CRITICAL: -n NEW_SERIES_NAME and -e [ENROLL_BASE] can not be used together.')
+        logger.critical(
+            'CRITICAL: -n NEW_SERIES_NAME and -e [ENROLL_BASE] can not be used together.'
+        )
         sys.exit(1)
 
     if cmdargs.enroll_base or cmdargs.new_series_name:
@@ -3088,14 +3492,26 @@ def cmd_prep(cmdargs: argparse.Namespace) -> None:
             # We only support this with the commit strategy
             strategy = get_cover_strategy()
             if strategy != 'commit':
-                logger.critical('CRITICAL: This appears to already be a b4-prep managed branch.')
-                logger.critical('          Chaining series is only supported with the "commit" strategy.')
-                logger.critical('          Switch to a different branch or use the -f flag to continue.')
+                logger.critical(
+                    'CRITICAL: This appears to already be a b4-prep managed branch.'
+                )
+                logger.critical(
+                    '          Chaining series is only supported with the "commit" strategy.'
+                )
+                logger.critical(
+                    '          Switch to a different branch or use the -f flag to continue.'
+                )
                 sys.exit(1)
 
-            logger.critical('IMPORTANT: This appears to already be a b4-prep managed branch.')
-            logger.critical('           The new branch will be marked as depending on this series.')
-            logger.critical('           Alternatively, switch to a different branch or use the -f flag.')
+            logger.critical(
+                'IMPORTANT: This appears to already be a b4-prep managed branch.'
+            )
+            logger.critical(
+                '           The new branch will be marked as depending on this series.'
+            )
+            logger.critical(
+                '           Alternatively, switch to a different branch or use the -f flag.'
+            )
             try:
                 input('Press Enter to confirm or Ctrl-C to abort')
                 logger.info('---')
diff --git a/src/b4/kr.py b/src/b4/kr.py
index 8bbfe26..f89f302 100644
--- a/src/b4/kr.py
+++ b/src/b4/kr.py
@@ -58,10 +58,14 @@ def main(cmdargs: argparse.Namespace) -> None:
         ecc = False
         for identity, algo, selector, keyinfo in keydata:
             if not identity:
-                logger.warning('No identity found for key %s %s %s', algo, selector, keyinfo)
+                logger.warning(
+                    'No identity found for key %s %s %s', algo, selector, keyinfo
+                )
                 continue
             if not keyinfo:
-                logger.warning('No keyinfo found for key %s %s %s', algo, selector, identity)
+                logger.warning(
+                    'No keyinfo found for key %s %s %s', algo, selector, identity
+                )
                 continue
             keypath = patatt.make_pkey_path(algo, identity, selector)
             fullpath = os.path.join(krpath, keypath)
diff --git a/src/b4/mbox.py b/src/b4/mbox.py
index 624a2f3..3198836 100644
--- a/src/b4/mbox.py
+++ b/src/b4/mbox.py
@@ -38,7 +38,9 @@ Link: ${midurl}
 """
 
 
-def save_msgs_as_mbox(dest: str, msgs: List[EmailMessage], filterdupes: bool = False) -> int:
+def save_msgs_as_mbox(
+    dest: str, msgs: List[EmailMessage], filterdupes: bool = False
+) -> int:
     if dest == '-':
         b4.save_mboxrd_mbox(msgs, sys.stdout.buffer, mangle_from=False)
         return len(msgs)
@@ -62,12 +64,15 @@ def save_msgs_as_mbox(dest: str, msgs: List[EmailMessage], filterdupes: bool = F
     return len(msgs)
 
 
-def get_base_commit(topdir: Optional[str], body: str, lser: b4.LoreSeries,
-                    cmdargs: argparse.Namespace) -> str:
+def get_base_commit(
+    topdir: Optional[str], body: str, lser: b4.LoreSeries, cmdargs: argparse.Namespace
+) -> str:
     base_commit = 'HEAD'
 
     if lser.prereq_base_commit:
-        logger.debug('Setting base-commit to prereq-base-commit: %s', lser.prereq_base_commit)
+        logger.debug(
+            'Setting base-commit to prereq-base-commit: %s', lser.prereq_base_commit
+        )
         base_commit = lser.prereq_base_commit
     else:
         matches = re.search(r'base-commit: .*?([\da-f]+)', body, re.MULTILINE)
@@ -90,22 +95,30 @@ def get_base_commit(topdir: Optional[str], body: str, lser: b4.LoreSeries,
     if base_commit == 'HEAD' and topdir and cmdargs.guessbase:
         logger.info(' Base: attempting to guess base-commit...')
         try:
-            base_commit, nblobs, mismatches = lser.find_base(topdir, branches=cmdargs.guessbranch,
-                                                             maxdays=cmdargs.guessdays)
+            base_commit, nblobs, mismatches = lser.find_base(
+                topdir, branches=cmdargs.guessbranch, maxdays=cmdargs.guessdays
+            )
             if mismatches == 0:
                 logger.critical(' Base: %s (exact match)', base_commit)
             elif nblobs == mismatches:
                 logger.critical(' Base: failed to guess base')
             else:
-                logger.critical(' Base: %s (best guess, %s/%s blobs matched)', base_commit,
-                                nblobs - mismatches, nblobs)
+                logger.critical(
+                    ' Base: %s (best guess, %s/%s blobs matched)',
+                    base_commit,
+                    nblobs - mismatches,
+                    nblobs,
+                )
         except IndexError as ex:
             logger.critical(' Base: failed to guess base (%s)', ex)
 
     if cmdargs.mergebase:
         if base_commit:
-            logger.debug(' Base: overriding submitter provided base-commit %s with %s',
-                           base_commit, cmdargs.mergebase)
+            logger.debug(
+                ' Base: overriding submitter provided base-commit %s with %s',
+                base_commit,
+                cmdargs.mergebase,
+            )
         base_commit = cmdargs.mergebase
 
     return base_commit
@@ -150,15 +163,22 @@ def make_am(msgs: List[EmailMessage], cmdargs: argparse.Namespace, msgid: str) -
                     payload = bpayload.decode('utf-8', errors='replace')
                     part.set_param('charset', 'utf-8')
                 if payload and b4.DIFF_RE.search(payload):
-                    xmsg = email.parser.Parser(policy=b4.emlpolicy, _class=EmailMessage).parsestr(payload)
+                    xmsg = email.parser.Parser(
+                        policy=b4.emlpolicy, _class=EmailMessage
+                    ).parsestr(payload)
                     # Needs to have Subject, From, Date for us to consider it
                     if xmsg.get('Subject') and xmsg.get('From') and xmsg.get('Date'):
                         logger.debug('Found attached patch: %s', xmsg.get('Subject'))
                         xmsg['Message-ID'] = f'<att{len(xpatches)}-{xmsgid}>'
                         xpatches.append(xmsg)
             if len(xpatches):
-                logger.info('Warning: Found %s patches attached to the requested message', len(xpatches))
-                logger.info('         This mode ignores any follow-up trailers, use with caution')
+                logger.info(
+                    'Warning: Found %s patches attached to the requested message',
+                    len(xpatches),
+                )
+                logger.info(
+                    '         This mode ignores any follow-up trailers, use with caution'
+                )
                 # Throw out lmbx and only use these
                 lmbx = b4.LoreMailbox()
                 load_codereview = False
@@ -181,8 +201,12 @@ def make_am(msgs: List[EmailMessage], cmdargs: argparse.Namespace, msgid: str) -
     if cmdargs.nopartialreroll:
         reroll = False
 
-    lser = lmbx.get_series(revision=wantver, sloppytrailers=cmdargs.sloppytrailers, reroll=reroll,
-                           codereview_trailers=load_codereview)
+    lser = lmbx.get_series(
+        revision=wantver,
+        sloppytrailers=cmdargs.sloppytrailers,
+        reroll=reroll,
+        codereview_trailers=load_codereview,
+    )
     if lser is None and cmdargs.cherrypick != '_':
         if wantver is None:
             logger.critical('No patches found.')
@@ -214,7 +238,9 @@ def make_am(msgs: List[EmailMessage], cmdargs: argparse.Namespace, msgid: str) -
                     cmdargs.cherrypick = f'<{msgid}>'
                     break
             if not len(cherrypick):
-                logger.critical('Specified msgid is not present in the series, cannot cherrypick')
+                logger.critical(
+                    'Specified msgid is not present in the series, cannot cherrypick'
+                )
                 sys.exit(1)
         elif cmdargs.cherrypick.find('*') >= 0:
             # Globbing on subject
@@ -226,22 +252,35 @@ def make_am(msgs: List[EmailMessage], cmdargs: argparse.Namespace, msgid: str) -
                 if fnmatch.fnmatch(lmsg.subject, cmdargs.cherrypick):
                     cherrypick.append(at)
             if not len(cherrypick):
-                logger.critical('Could not match "%s" to any subjects in the series', cmdargs.cherrypick)
+                logger.critical(
+                    'Could not match "%s" to any subjects in the series',
+                    cmdargs.cherrypick,
+                )
                 sys.exit(1)
         else:
-            cherrypick = list(b4.parse_int_range(cmdargs.cherrypick, upper=len(lser.patches) - 1))
+            cherrypick = list(
+                b4.parse_int_range(cmdargs.cherrypick, upper=len(lser.patches) - 1)
+            )
     else:
         cherrypick = None
 
-    am_msgs = lser.get_am_ready(noaddtrailers=cmdargs.noaddtrailers, addmysob=cmdargs.addmysob, addlink=cmdargs.addlink,
-                                cherrypick=cherrypick, copyccs=cmdargs.copyccs, allowbadchars=cmdargs.allowbadchars,
-                                showchecks=cmdargs.check)
+    am_msgs = lser.get_am_ready(
+        noaddtrailers=cmdargs.noaddtrailers,
+        addmysob=cmdargs.addmysob,
+        addlink=cmdargs.addlink,
+        cherrypick=cherrypick,
+        copyccs=cmdargs.copyccs,
+        allowbadchars=cmdargs.allowbadchars,
+        showchecks=cmdargs.check,
+    )
     logger.info('---')
 
     if cherrypick is None:
         logger.critical('Total patches: %s', len(am_msgs))
     else:
-        logger.info('Total patches: %s (cherrypicked: %s)', len(am_msgs), cmdargs.cherrypick)
+        logger.info(
+            'Total patches: %s (cherrypicked: %s)', len(am_msgs), cmdargs.cherrypick
+        )
 
     if len(lser.trailer_mismatches):
         logger.critical('---')
@@ -274,12 +313,20 @@ def make_am(msgs: List[EmailMessage], cmdargs: argparse.Namespace, msgid: str) -
             if mismatches:
                 rstart, rend = lser.make_fake_am_range(gitdir=None)
                 if rstart and rend:
-                    logger.info('Prepared fake commit range for 3-way merge (%.12s..%.12s)', rstart, rend)
+                    logger.info(
+                        'Prepared fake commit range for 3-way merge (%.12s..%.12s)',
+                        rstart,
+                        rend,
+                    )
 
     logger.critical('---')
     if lser.partial_reroll:
-        logger.critical('WARNING: v%s is a partial reroll from previous revisions', lser.revision)
-        logger.critical('         Please carefully review the resulting series to ensure correctness')
+        logger.critical(
+            'WARNING: v%s is a partial reroll from previous revisions', lser.revision
+        )
+        logger.critical(
+            '         Please carefully review the resulting series to ensure correctness'
+        )
         logger.critical('         Pass --no-partial-reroll to disable')
         logger.critical('---')
     if not lser.complete and not cmdargs.cherrypick:
@@ -341,12 +388,17 @@ def make_am(msgs: List[EmailMessage], cmdargs: argparse.Namespace, msgid: str) -
 
     if cmdargs.subcmd == 'shazam':
         if not topdir:
-            logger.critical('Could not figure out where your git dir is, cannot shazam.')
+            logger.critical(
+                'Could not figure out where your git dir is, cannot shazam.'
+            )
             sys.exit(1)
 
         ifh = io.BytesIO()
         if lser.prereq_patch_ids:
-            logger.info(' Deps: looking for dependencies matching %s patch-ids', len(lser.prereq_patch_ids))
+            logger.info(
+                ' Deps: looking for dependencies matching %s patch-ids',
+                len(lser.prereq_patch_ids),
+            )
             query = ' OR '.join([f'patchid:{x}' for x in lser.prereq_patch_ids])
             logger.debug('query=%s', query)
             dmsgs = b4.get_pi_search_results(query)
@@ -362,7 +414,10 @@ def make_am(msgs: List[EmailMessage], cmdargs: argparse.Namespace, msgid: str) -
                         pmap[dlmsg.git_patch_id] = dlmsg
             for ppid in lser.prereq_patch_ids:
                 if ppid in pmap:
-                    logger.info(' Deps: Applying prerequisite patch: %s', pmap[ppid].full_subject)
+                    logger.info(
+                        ' Deps: Applying prerequisite patch: %s',
+                        pmap[ppid].full_subject,
+                    )
                     pam_msg = pmap[ppid].get_am_message(add_trailers=False)
                     b4.save_mboxrd_mbox([pam_msg], ifh)
 
@@ -373,7 +428,9 @@ def make_am(msgs: List[EmailMessage], cmdargs: argparse.Namespace, msgid: str) -
             sp = shlex.shlex(amflags, posix=True)
             sp.whitespace_split = True
             amargs = list(sp) + ['--patch-format=mboxrd']
-            ecode, out = b4.git_run_command(topdir, ['am'] + amargs, stdin=ambytes, logstderr=True, rundir=topdir)
+            ecode, out = b4.git_run_command(
+                topdir, ['am'] + amargs, stdin=ambytes, logstderr=True, rundir=topdir
+            )
             logger.info(out.strip())
             if ecode == 0:
                 thanks_record_am(lser, cherrypick=cherrypick)
@@ -388,8 +445,10 @@ def make_am(msgs: List[EmailMessage], cmdargs: argparse.Namespace, msgid: str) -
             try:
                 merge_template = b4.read_template(str(config['shazam-merge-template']))
             except FileNotFoundError:
-                logger.critical('ERROR: shazam-merge-template says to use %s, but it does not exist',
-                                config['shazam-merge-template'])
+                logger.critical(
+                    'ERROR: shazam-merge-template says to use %s, but it does not exist',
+                    config['shazam-merge-template'],
+                )
                 sys.exit(2)
 
         if lser.has_cover and lser.patches[0] is not None:
@@ -398,12 +457,16 @@ def make_am(msgs: List[EmailMessage], cmdargs: argparse.Namespace, msgid: str) -
             covermessage = parts[1]
         else:
             if lser.patches[1] is None:
-                logger.critical('No cover letter provided by the author and no first patch, cannot shazam')
+                logger.critical(
+                    'No cover letter provided by the author and no first patch, cannot shazam'
+                )
                 sys.exit(1)
 
             clmsg = lser.patches[1]
-            covermessage = ('NOTE: No cover letter provided by the author.\n'
-                            '      Add merge commit message here.')
+            covermessage = (
+                'NOTE: No cover letter provided by the author.\n'
+                '      Add merge commit message here.'
+            )
 
         tptvals = {
             'seriestitle': clmsg.subject,
@@ -432,8 +495,13 @@ def make_am(msgs: List[EmailMessage], cmdargs: argparse.Namespace, msgid: str) -
                 logger.info(' Base: %s', base_commit)
             else:
                 logger.info(' Base: %s (use --merge-base to override)', base_commit)
-            b4.git_fetch_am_into_repo(topdir, ambytes=ambytes, at_base=base_commit,
-                                       origin=linkurl, am_flags=am_flags)
+            b4.git_fetch_am_into_repo(
+                topdir,
+                ambytes=ambytes,
+                at_base=base_commit,
+                origin=linkurl,
+                am_flags=am_flags,
+            )
         except b4.AmConflictError as cex:
             gwt = cex.worktree_path
             if not getattr(cmdargs, 'shazam_resolve', False):
@@ -522,7 +590,9 @@ def make_am(msgs: List[EmailMessage], cmdargs: argparse.Namespace, msgid: str) -
             logger.critical('       git checkout -b %s %s', gitbranch, base_commit)
 
     if cmdargs.outdir != '-':
-        logger.critical('       git am %s%s', '-3 ' if cmdargs.threeway else '', am_filename)
+        logger.critical(
+            '       git am %s%s', '-3 ' if cmdargs.threeway else '', am_filename
+        )
 
     thanks_record_am(lser, cherrypick=cherrypick)
 
@@ -561,7 +631,9 @@ def thanks_record_am(lser: b4.LoreSeries, cherrypick: Optional[List[int]]) -> No
         msgids.append(pmsg.msgid)
 
         if pmsg.pwhash is None:
-            logger.debug('Unable to get hashes for all patches, not tracking for thanks')
+            logger.debug(
+                'Unable to get hashes for all patches, not tracking for thanks'
+            )
             return
 
         prefix = '%s/%s' % (str(pmsg.counter).zfill(padlen), pmsg.expected)
@@ -610,20 +682,28 @@ def thanks_record_am(lser: b4.LoreSeries, cherrypick: Optional[List[int]]) -> No
 
 def save_as_quilt(am_msgs: List[EmailMessage], q_dirname: str) -> None:
     if os.path.exists(q_dirname):
-        logger.critical('ERROR: Directory %s exists, not saving quilt patches', q_dirname)
+        logger.critical(
+            'ERROR: Directory %s exists, not saving quilt patches', q_dirname
+        )
         return
     pathlib.Path(q_dirname).mkdir(parents=True)
     patch_filenames = list()
     for msg in am_msgs:
         lsubj = b4.LoreSubject(msg.get('subject', ''))
-        slug = '%04d_%s' % (lsubj.counter, re.sub(r'\W+', '_', lsubj.subject).strip('_').lower())
+        slug = '%04d_%s' % (
+            lsubj.counter,
+            re.sub(r'\W+', '_', lsubj.subject).strip('_').lower(),
+        )
         patch_filename = f'{slug}.patch'
         patch_filenames.append(patch_filename)
         quilt_out = os.path.join(q_dirname, patch_filename)
         i, m, p = b4.get_mailinfo(msg.as_bytes(policy=b4.emlpolicy), scissors=True)
         with open(quilt_out, 'wb') as fh:
             if i.get('Author'):
-                fh.write(b'From: %s <%s>\n' % (i.get('Author', '').encode(), i.get('Email', '').encode()))
+                fh.write(
+                    b'From: %s <%s>\n'
+                    % (i.get('Author', '').encode(), i.get('Email', '').encode())
+                )
             else:
                 fh.write(b'From: %s\n' % i.get('Email', '').encode())
             fh.write(b'Subject: %s\n' % i.get('Subject', '').encode())
@@ -638,8 +718,12 @@ def save_as_quilt(am_msgs: List[EmailMessage], q_dirname: str) -> None:
             sfh.write('%s\n' % patch_filename)
 
 
-def get_extra_series(msgs: List[EmailMessage], direction: int = 1, wantvers: Optional[List[int]] = None,
-                     nocache: bool = False) -> List[EmailMessage]:
+def get_extra_series(
+    msgs: List[EmailMessage],
+    direction: int = 1,
+    wantvers: Optional[List[int]] = None,
+    nocache: bool = False,
+) -> List[EmailMessage]:
     base_msg: Optional[EmailMessage] = None
     latest_revision: Optional[int] = None
     seen_msgids: Set[str] = set()
@@ -720,8 +804,9 @@ def get_extra_series(msgs: List[EmailMessage], direction: int = 1, wantvers: Opt
         logger.critical('Checking for older revisions')
         # Cap backward search to 12 months to avoid matching years of
         # identically-named series (common with subject+from fallback).
-        earliest = time.strftime('%Y%m%d', time.gmtime(
-            time.mktime(msgdate[:9]) - 365 * 86400))
+        earliest = time.strftime(
+            '%Y%m%d', time.gmtime(time.mktime(msgdate[:9]) - 365 * 86400)
+        )
         datelim = 'd:%s..%s' % (earliest, startdate)
 
     q = '(%s) AND %s' % (' OR '.join(queries), datelim)
@@ -756,7 +841,9 @@ def get_extra_series(msgs: List[EmailMessage], direction: int = 1, wantvers: Opt
             logger.debug('Ignoring result (not old revision): %s', lsub.full_subject)
             continue
         if direction < 0 and wantvers and lsub.revision not in wantvers:
-            logger.debug('Ignoring result (not revision we want): %s', lsub.full_subject)
+            logger.debug(
+                'Ignoring result (not revision we want): %s', lsub.full_subject
+            )
             continue
 
         if lsub.revision == 1 and lsub.revision == latest_revision:
@@ -768,9 +855,15 @@ def get_extra_series(msgs: List[EmailMessage], direction: int = 1, wantvers: Opt
                 # It's *probably* an older revision.
                 logger.debug('Likely an older revision: %s', lsub.full_subject)
         elif direction > 0 and lsub.revision > latest_revision:
-            logger.debug('Definitely a new revision [v%s]: %s', lsub.revision, lsub.full_subject)
+            logger.debug(
+                'Definitely a new revision [v%s]: %s', lsub.revision, lsub.full_subject
+            )
         elif direction < 0 and lsub.revision < latest_revision:
-            logger.debug('Definitely an older revision [v%s]: %s', lsub.revision, lsub.full_subject)
+            logger.debug(
+                'Definitely an older revision [v%s]: %s',
+                lsub.revision,
+                lsub.full_subject,
+            )
         else:
             logger.debug('No idea what this is: %s', lsub.subject)
             continue
@@ -793,7 +886,9 @@ def get_extra_series(msgs: List[EmailMessage], direction: int = 1, wantvers: Opt
             if not payload:
                 continue
             for cid in change_ids:
-                if re.search(rf'^change-id:\s*{re.escape(cid)}\s*$', payload, flags=re.I | re.M):
+                if re.search(
+                    rf'^change-id:\s*{re.escape(cid)}\s*$', payload, flags=re.I | re.M
+                ):
                     lsub = b4.LoreSubject(q_msg.get('Subject', ''))
                     valid_revisions.add(lsub.revision)
                     break
@@ -855,13 +950,13 @@ def refetch(dest: str) -> None:
 def minimize_thread(msgs: List[EmailMessage]) -> List[EmailMessage]:
     # We go through each message and minimize headers and body content
     wanthdrs = {
-                'From',
-                'Subject',
-                'Date',
-                'Message-ID',
-                'Reply-To',
-                'In-Reply-To',
-                }
+        'From',
+        'Subject',
+        'Date',
+        'Message-ID',
+        'Reply-To',
+        'In-Reply-To',
+    }
     mmsgs = list()
     for msg in msgs:
         mmsg = EmailMessage()
@@ -877,7 +972,7 @@ def minimize_thread(msgs: List[EmailMessage]) -> List[EmailMessage]:
             chunks: List[Tuple[bool, List[str]]] = list()
             chunk: List[str] = list()
             current = None
-            for line in (cmsg.rstrip().splitlines()):
+            for line in cmsg.rstrip().splitlines():
                 quoted = line.startswith('>') and True or False
                 if current is None:
                     current = quoted
@@ -922,8 +1017,9 @@ def minimize_thread(msgs: List[EmailMessage]) -> List[EmailMessage]:
     return mmsgs
 
 
-def _start_merge_resolve(topdir: str, cex: b4.AmConflictError,
-                          common_dir: str, state: Dict[str, Any]) -> None:
+def _start_merge_resolve(
+    topdir: str, cex: b4.AmConflictError, common_dir: str, state: Dict[str, Any]
+) -> None:
     gwt = cex.worktree_path
     logger.critical('---')
     logger.critical(cex.output)
@@ -931,8 +1027,9 @@ def _start_merge_resolve(topdir: str, cex: b4.AmConflictError,
     logger.critical('Patch series did not apply cleanly, resolving...')
 
     # Find rebase-apply in the worktree
-    ecode, gitdir = b4.git_run_command(gwt, ['rev-parse', '--git-dir'],
-                                        logstderr=True, rundir=gwt)
+    ecode, gitdir = b4.git_run_command(
+        gwt, ['rev-parse', '--git-dir'], logstderr=True, rundir=gwt
+    )
     if ecode > 0:
         logger.critical('Unable to find git directory in worktree')
         b4.git_run_command(topdir, ['worktree', 'remove', '--force', gwt])
@@ -1011,8 +1108,12 @@ def _start_merge_resolve(topdir: str, cex: b4.AmConflictError,
 
     # Start merge of successfully applied patches
     logger.info('Merging successfully applied patches into your branch...')
-    ecode, out = b4.git_run_command(topdir, ['merge', '--no-ff', '--no-commit', 'FETCH_HEAD'],
-                                     logstderr=True, rundir=topdir)
+    ecode, out = b4.git_run_command(
+        topdir,
+        ['merge', '--no-ff', '--no-commit', 'FETCH_HEAD'],
+        logstderr=True,
+        rundir=topdir,
+    )
 
     if ecode > 0:
         logger.warning('Merge had conflicts:')
@@ -1026,8 +1127,13 @@ def _start_merge_resolve(topdir: str, cex: b4.AmConflictError,
     sys.exit(0)
 
 
-def _apply_remaining_patches(topdir: str, patches_dir: str, state: Dict[str, Any],
-                              state_file: str, common_dir: str) -> None:
+def _apply_remaining_patches(
+    topdir: str,
+    patches_dir: str,
+    state: Dict[str, Any],
+    state_file: str,
+    common_dir: str,
+) -> None:
     with open(os.path.join(patches_dir, 'total'), 'r') as fh:
         total = int(fh.read().strip())
     with open(os.path.join(patches_dir, 'current'), 'r') as fh:
@@ -1043,14 +1149,19 @@ def _apply_remaining_patches(topdir: str, patches_dir: str, state: Dict[str, Any
             patch_data = fh.read()
 
         logger.info('Applying remaining patch %d/%d...', current + 1, total)
-        ecode, out = b4.git_run_command(topdir, ['apply', '--3way'],
-                                         stdin=patch_data, logstderr=True, rundir=topdir)
+        ecode, out = b4.git_run_command(
+            topdir, ['apply', '--3way'], stdin=patch_data, logstderr=True, rundir=topdir
+        )
         if ecode > 0:
             logger.critical('---')
             logger.critical(out.strip())
             logger.critical('---')
-            logger.critical('Remaining patch %d/%d did not apply cleanly.', current + 1, total)
-            logger.critical('Resolve conflicts in your working tree, then run: b4 shazam --continue')
+            logger.critical(
+                'Remaining patch %d/%d did not apply cleanly.', current + 1, total
+            )
+            logger.critical(
+                'Resolve conflicts in your working tree, then run: b4 shazam --continue'
+            )
             logger.critical('To abort: b4 shazam --abort')
             # Advance past this patch, its changes (with conflict markers) are in the tree
             with open(os.path.join(patches_dir, 'current'), 'w') as fh:
@@ -1067,8 +1178,13 @@ def _apply_remaining_patches(topdir: str, patches_dir: str, state: Dict[str, Any
     _finish_shazam_merge(topdir, state, state_file, common_dir, patches_dir)
 
 
-def _finish_shazam_merge(topdir: str, state: Dict[str, Any], state_file: str,
-                          common_dir: str, patches_dir: str) -> None:
+def _finish_shazam_merge(
+    topdir: str,
+    state: Dict[str, Any],
+    state_file: str,
+    common_dir: str,
+    patches_dir: str,
+) -> None:
     b4.git_run_command(topdir, ['add', '-u'], logstderr=True, rundir=topdir)
 
     gitargs = ['rev-parse', '--git-dir']
@@ -1101,7 +1217,9 @@ def _finish_shazam_merge(topdir: str, state: Dict[str, Any], state_file: str,
         commitargs.extend(list(sp))
     if no_interactive:
         commitargs.append('--no-edit')
-        ecode, out = b4.git_run_command(topdir, commitargs, logstderr=True, rundir=topdir)
+        ecode, out = b4.git_run_command(
+            topdir, commitargs, logstderr=True, rundir=topdir
+        )
         if ecode > 0:
             logger.critical('Failed to commit merge:')
             logger.critical(out.strip())
@@ -1123,7 +1241,9 @@ def _finish_shazam_merge(topdir: str, state: Dict[str, Any], state_file: str,
     logger.info('Merge completed successfully.')
 
 
-def _load_shazam_state(require_state: bool = True) -> Tuple[str, str, str, Optional[Dict[str, Any]]]:
+def _load_shazam_state(
+    require_state: bool = True,
+) -> Tuple[str, str, str, Optional[Dict[str, Any]]]:
     topdir = b4.git_get_toplevel()
     if not topdir:
         logger.critical('Could not figure out where your git dir is.')
@@ -1159,8 +1279,12 @@ def shazam_continue(cmdargs: argparse.Namespace) -> None:
     b4.git_run_command(topdir, ['add', '-u'], logstderr=True, rundir=topdir)
 
     # Check for remaining unmerged files
-    _ecode, unmerged = b4.git_run_command(topdir, ['diff', '--name-only', '--diff-filter=U'],
-                                          logstderr=True, rundir=topdir)
+    _ecode, unmerged = b4.git_run_command(
+        topdir,
+        ['diff', '--name-only', '--diff-filter=U'],
+        logstderr=True,
+        rundir=topdir,
+    )
     if unmerged.strip():
         logger.critical('There are still unresolved conflicts:')
         logger.critical(unmerged.strip())
diff --git a/src/b4/pr.py b/src/b4/pr.py
index cb2ca76..7c0659a 100644
--- a/src/b4/pr.py
+++ b/src/b4/pr.py
@@ -46,7 +46,11 @@ PULL_BODY_REMOTE_REF_RE = [
 
 def git_get_commit_id_from_repo_ref(repo: str, ref: str) -> Optional[str]:
     # We only handle git and http/s URLs
-    if not (repo.find('git://') == 0 or repo.find('http://') == 0 or repo.find('https://') == 0):
+    if not (
+        repo.find('git://') == 0
+        or repo.find('http://') == 0
+        or repo.find('https://') == 0
+    ):
         logger.info('%s uses unsupported protocol', repo)
         return None
 
@@ -56,10 +60,14 @@ def git_get_commit_id_from_repo_ref(repo: str, ref: str) -> Optional[str]:
     # Is it a full ref name or a shortname?
     if ref.find('heads/') < 0 and ref.find('tags/') < 0:
         # Try grabbing it as a head first
-        lines = b4.git_get_command_lines(None, ['ls-remote', repo, 'refs/heads/%s' % ref])
+        lines = b4.git_get_command_lines(
+            None, ['ls-remote', repo, 'refs/heads/%s' % ref]
+        )
         if not lines:
             # try it as a tag, then
-            lines = b4.git_get_command_lines(None, ['ls-remote', repo, 'refs/tags/%s^{}' % ref])
+            lines = b4.git_get_command_lines(
+                None, ['ls-remote', repo, 'refs/tags/%s^{}' % ref]
+            )
 
     elif ref.find('tags/') == 0:
         # try as an annotated tag first
@@ -114,7 +122,9 @@ def parse_pr_data(msg: email.message.EmailMessage) -> Optional[b4.LoreMessage]:
             break
 
     if lmsg.pr_repo and lmsg.pr_ref:
-        lmsg.pr_remote_tip_commit = git_get_commit_id_from_repo_ref(lmsg.pr_repo, lmsg.pr_ref)
+        lmsg.pr_remote_tip_commit = git_get_commit_id_from_repo_ref(
+            lmsg.pr_repo, lmsg.pr_ref
+        )
 
     return lmsg
 
@@ -136,9 +146,13 @@ def attest_fetch_head(gitdir: Optional[str], lmsg: b4.LoreMessage) -> None:
     if len(htype):
         otype = htype[0]
     if otype == 'tag':
-        _ecode, out = b4.git_run_command(gitdir, ['verify-tag', '--raw', 'FETCH_HEAD'], logstderr=True)
+        _ecode, out = b4.git_run_command(
+            gitdir, ['verify-tag', '--raw', 'FETCH_HEAD'], logstderr=True
+        )
     elif otype == 'commit':
-        _ecode, out = b4.git_run_command(gitdir, ['verify-commit', '--raw', 'FETCH_HEAD'], logstderr=True)
+        _ecode, out = b4.git_run_command(
+            gitdir, ['verify-commit', '--raw', 'FETCH_HEAD'], logstderr=True
+        )
 
     good, valid, _trusted, keyid, _sigtime = b4.check_gpg_status(out)
     signer = None
@@ -172,7 +186,9 @@ def attest_fetch_head(gitdir: Optional[str], lmsg: b4.LoreMessage) -> None:
     if errors:
         logger.critical('  ---')
         if len(out):
-            logger.critical('  Pull request is signed, but verification did not succeed:')
+            logger.critical(
+                '  Pull request is signed, but verification did not succeed:'
+            )
         else:
             logger.critical('  Pull request verification did not succeed:')
         for error in errors:
@@ -180,24 +196,36 @@ def attest_fetch_head(gitdir: Optional[str], lmsg: b4.LoreMessage) -> None:
 
         if attpolicy == 'hardfail':
             import sys
+
             sys.exit(128)
 
 
-def fetch_remote(gitdir: Optional[str], lmsg: b4.LoreMessage, branch: Optional[str] = None,
-                 check_sig: bool = True, ty_track: bool = True) -> int:
+def fetch_remote(
+    gitdir: Optional[str],
+    lmsg: b4.LoreMessage,
+    branch: Optional[str] = None,
+    check_sig: bool = True,
+    ty_track: bool = True,
+) -> int:
     # Do we know anything about this base commit?
     if lmsg.pr_base_commit and not b4.git_commit_exists(gitdir, lmsg.pr_base_commit):
         logger.critical('ERROR: git knows nothing about commit %s', lmsg.pr_base_commit)
-        logger.critical('       Are you running inside a git checkout and is it up-to-date?')
+        logger.critical(
+            '       Are you running inside a git checkout and is it up-to-date?'
+        )
         return 1
 
     if lmsg.pr_tip_commit != lmsg.pr_remote_tip_commit:
         logger.critical('ERROR: commit-id mismatch between pull request and remote')
-        logger.critical('       msg=%s, remote=%s', lmsg.pr_tip_commit, lmsg.pr_remote_tip_commit)
+        logger.critical(
+            '       msg=%s, remote=%s', lmsg.pr_tip_commit, lmsg.pr_remote_tip_commit
+        )
         return 1
 
     if not lmsg.pr_repo or not lmsg.pr_ref:
-        logger.critical('ERROR: Could not find remote repository or ref in pull request')
+        logger.critical(
+            'ERROR: Could not find remote repository or ref in pull request'
+        )
         logger.critical('       msgid=%s', lmsg.msgid)
         return 1
 
@@ -252,7 +280,7 @@ def thanks_record_pr(lmsg: b4.LoreMessage) -> None:
         'remote': lmsg.pr_repo,
         'ref': lmsg.pr_ref,
         'sentdate': b4.LoreMessage.clean_header(lmsg.msg['Date']),
-        'quote': b4.make_quote(lmsg.body, maxlines=6)
+        'quote': b4.make_quote(lmsg.body, maxlines=6),
     }
     fullpath = os.path.join(datadir, filename)
     with open(fullpath, 'w', encoding='utf-8') as fh:
@@ -266,9 +294,11 @@ def thanks_record_pr(lmsg: b4.LoreMessage) -> None:
         b4.patchwork_set_state([lmsg.msgid], pwstate)
 
 
-def explode(gitdir: Optional[str], lmsg: b4.LoreMessage,
-            usefrom: Optional[str] = None) -> List[email.message.EmailMessage]:
+def explode(
+    gitdir: Optional[str], lmsg: b4.LoreMessage, usefrom: Optional[str] = None
+) -> List[email.message.EmailMessage]:
     import b4.ez
+
     ecode = fetch_remote(gitdir, lmsg, check_sig=False, ty_track=False)
     if ecode > 0:
         raise RuntimeError('Fetching unsuccessful')
@@ -313,22 +343,33 @@ def explode(gitdir: Optional[str], lmsg: b4.LoreMessage,
     config = b4.get_main_config()
     msgid_tpt = f'<b4-pr-%s-{lmsg.msgid}>'
 
-    pmsgs = b4.git_range_to_patches(gitdir, lmsg.pr_base_commit, 'FETCH_HEAD',
-                                    prefixes=prefixes, msgid_tpt=msgid_tpt,
-                                    seriests=int(lmsg.date.timestamp()), mailfrom=mailfrom)
+    pmsgs = b4.git_range_to_patches(
+        gitdir,
+        lmsg.pr_base_commit,
+        'FETCH_HEAD',
+        prefixes=prefixes,
+        msgid_tpt=msgid_tpt,
+        seriests=int(lmsg.date.timestamp()),
+        mailfrom=mailfrom,
+    )
 
     msgs = list()
     # Build the cover message from the pull request body
     linkmask = config.get('linkmask', 'https://lore.kernel.org/%s')
     assert isinstance(linkmask, str), 'linkmask must be a string'
     cbody = '%s\n\nbase-commit: %s\npull-request: %s\n' % (
-        lmsg.body.strip(), lmsg.pr_base_commit, linkmask % lmsg.msgid)
+        lmsg.body.strip(),
+        lmsg.pr_base_commit,
+        linkmask % lmsg.msgid,
+    )
 
     if len(pmsgs) == 1:
         b4.ez.mixin_cover(cbody, pmsgs)
     else:
         lmsg.lsubject.prefixes = prefixes
-        b4.ez.add_cover(lmsg.lsubject, msgid_tpt, pmsgs, cbody, int(lmsg.date.timestamp()))
+        b4.ez.add_cover(
+            lmsg.lsubject, msgid_tpt, pmsgs, cbody, int(lmsg.date.timestamp())
+        )
 
     for _at, (_commit, msg) in enumerate(pmsgs):
         msg.add_header('To', b4.format_addrs(allto))
@@ -336,7 +377,9 @@ def explode(gitdir: Optional[str], lmsg: b4.LoreMessage,
             msg.add_header('Cc', b4.format_addrs(allcc))
 
         if lmsg.msg['List-Id']:
-            msg.add_header('X-Original-List-Id', b4.LoreMessage.clean_header(lmsg.msg['List-Id']))
+            msg.add_header(
+                'X-Original-List-Id', b4.LoreMessage.clean_header(lmsg.msg['List-Id'])
+            )
 
         msgs.append(msg)
         logger.info('  %s', re.sub(r'\n\s*', ' ', msg.get('Subject', '(no subject)')))
@@ -394,7 +437,11 @@ def get_pr_from_github(ghurl: str) -> Optional[b4.LoreMessage]:
         idstring=f'{rproj}-{rrepo}-pr-{rpull}',
         domain='github.com',
     )
-    created_at = utils.format_datetime(datetime.strptime(prdata.get('created_at'), '%Y-%m-%dT%H:%M:%SZ').replace(tzinfo=timezone.utc))
+    created_at = utils.format_datetime(
+        datetime.strptime(prdata.get('created_at'), '%Y-%m-%dT%H:%M:%SZ').replace(
+            tzinfo=timezone.utc
+        )
+    )
     msg['Date'] = created_at
     msg.set_charset('utf-8')
     body = prdata.get('body')
@@ -416,12 +463,17 @@ def main(cmdargs: argparse.Namespace) -> None:
 
     if not cmdargs.no_stdin and not sys.stdin.isatty():
         logger.debug('Getting PR message from stdin')
-        msg = email.parser.BytesParser(policy=b4.emlpolicy,
-                                       _class=email.message.EmailMessage).parse(sys.stdin.buffer)
+        msg = email.parser.BytesParser(
+            policy=b4.emlpolicy, _class=email.message.EmailMessage
+        ).parse(sys.stdin.buffer)
         cmdargs.msgid = b4.LoreMessage.get_clean_msgid(msg)
         lmsg = parse_pr_data(msg)
     else:
-        if cmdargs.msgid and 'github.com' in cmdargs.msgid and '/pull/' in cmdargs.msgid:
+        if (
+            cmdargs.msgid
+            and 'github.com' in cmdargs.msgid
+            and '/pull/' in cmdargs.msgid
+        ):
             logger.debug('Getting PR info from Github')
             lmsg = get_pr_from_github(cmdargs.msgid)
         else:
@@ -459,13 +511,24 @@ def main(cmdargs: argparse.Namespace) -> None:
         if msgs:
             if cmdargs.sendidentity:
                 # Pass exploded series via git-send-email
-                config = b4.get_config_from_git(rf'sendemail\.{cmdargs.sendidentity}\..*')
+                config = b4.get_config_from_git(
+                    rf'sendemail\.{cmdargs.sendidentity}\..*'
+                )
                 if not len(config):
-                    logger.critical('Not able to find sendemail.%s configuration', cmdargs.sendidentity)
+                    logger.critical(
+                        'Not able to find sendemail.%s configuration',
+                        cmdargs.sendidentity,
+                    )
                     sys.exit(1)
                 # Make sure from is not overridden by current user
                 mailfrom = msgs[0].get('from')
-                gitargs = ['send-email', '--identity', cmdargs.sendidentity, '--from', mailfrom]
+                gitargs = [
+                    'send-email',
+                    '--identity',
+                    cmdargs.sendidentity,
+                    '--from',
+                    mailfrom,
+                ]
                 if cmdargs.dryrun:
                     gitargs.append('--dry-run')
                 # Write out everything into a temporary dir
@@ -477,7 +540,9 @@ def main(cmdargs: argparse.Namespace) -> None:
                             tfh.write(msg.as_bytes(policy=b4.emlpolicy))
                         gitargs.append(outfile)
                         counter += 1
-                    ecode, out = b4.git_run_command(cmdargs.gitdir, gitargs, logstderr=True)
+                    ecode, out = b4.git_run_command(
+                        cmdargs.gitdir, gitargs, logstderr=True
+                    )
                     if cmdargs.dryrun:
                         logger.info(out)
                     sys.exit(ecode)
@@ -521,7 +586,9 @@ def main(cmdargs: argparse.Namespace) -> None:
             sys.exit(1)
 
         # Is it at the tip of FETCH_HEAD?
-        loglines = b4.git_get_command_lines(gitdir, ['log', '-1', '--pretty=oneline', 'FETCH_HEAD'])
+        loglines = b4.git_get_command_lines(
+            gitdir, ['log', '-1', '--pretty=oneline', 'FETCH_HEAD']
+        )
         if len(loglines) and loglines[0].find(lmsg.pr_tip_commit) == 0:
             logger.info('Pull request is at the tip of FETCH_HEAD')
             if cmdargs.check:
diff --git a/src/b4/review/__init__.py b/src/b4/review/__init__.py
index df1e39b..56e5df2 100644
--- a/src/b4/review/__init__.py
+++ b/src/b4/review/__init__.py
@@ -29,14 +29,23 @@ from b4.review._review import (
 
 # Tell mypy these private symbols are intentionally re-exported
 __all__ = [
-    '_retrieve_messages', 'retrieve_series_messages', '_get_lore_series',
-    '_collect_followups', '_collect_reply_headers',
-    '_get_my_review', '_ensure_my_review', '_cleanup_review',
-    '_get_patch_state', '_set_patch_state',
+    '_retrieve_messages',
+    'retrieve_series_messages',
+    '_get_lore_series',
+    '_collect_followups',
+    '_collect_reply_headers',
+    '_get_my_review',
+    '_ensure_my_review',
+    '_cleanup_review',
+    '_get_patch_state',
+    '_set_patch_state',
     '_resolve_comment_positions',
-    '_render_quoted_diff_with_comments', '_extract_editor_comments',
-    '_clear_other_comments', '_strip_subject',
-    '_build_reply_from_comments', '_ensure_trailers_in_body',
+    '_render_quoted_diff_with_comments',
+    '_extract_editor_comments',
+    '_clear_other_comments',
+    '_strip_subject',
+    '_build_reply_from_comments',
+    '_ensure_trailers_in_body',
     '_build_review_email',
     '_integrate_agent_reviews',
     '_extract_comments_from_quoted_reply',
diff --git a/src/b4/review/_review.py b/src/b4/review/_review.py
index 661369b..b061139 100644
--- a/src/b4/review/_review.py
+++ b/src/b4/review/_review.py
@@ -33,8 +33,7 @@ COMMIT_MESSAGE_PATH = ':message'
 _REPLY_CONTEXT_LINES = 5
 
 
-def _should_promote_waiting(newer_vers: List[int],
-                            previously_known: Set[int]) -> bool:
+def _should_promote_waiting(newer_vers: List[int], previously_known: Set[int]) -> bool:
     """Decide whether a waiting series should be promoted to reviewing.
 
     Only promotes when at least one of the newer versions was not
@@ -60,8 +59,10 @@ def _strip_subject(text: str) -> List[str]:
 
 
 def make_review_magic_json(data: Dict[str, Any]) -> str:
-    mj = (f'{REVIEW_MAGIC_MARKER}\n'
-          '# This section is used internally by b4 review for tracking purposes.\n')
+    mj = (
+        f'{REVIEW_MAGIC_MARKER}\n'
+        '# This section is used internally by b4 review for tracking purposes.\n'
+    )
     return mj + json.dumps(data, indent=2)
 
 
@@ -99,10 +100,14 @@ def _collect_reply_headers(lmsg: b4.LoreMessage) -> Dict[str, str]:
         allcc = []
         logger.debug('Unable to parse the Cc: header in %s: %s', lmsg.msgid, str(ex))
     try:
-        reply_to = email.utils.getaddresses([str(x) for x in lmsg.msg.get_all('reply-to', [])])
+        reply_to = email.utils.getaddresses(
+            [str(x) for x in lmsg.msg.get_all('reply-to', [])]
+        )
     except Exception as ex:
         reply_to = []
-        logger.debug('Unable to parse the Reply-To: header in %s: %s', lmsg.msgid, str(ex))
+        logger.debug(
+            'Unable to parse the Reply-To: header in %s: %s', lmsg.msgid, str(ex)
+        )
 
     headers: Dict[str, str] = {
         'msgid': lmsg.msgid,
@@ -148,7 +153,9 @@ def check_series_attestation(lser: b4.LoreSeries) -> Optional[str]:
     for lmsg in lser.patches[1:]:
         if lmsg is None:
             continue
-        attestations, _passing, _critical = lmsg.get_attestation_status(attpolicy, maxdays)
+        attestations, _passing, _critical = lmsg.get_attestation_status(
+            attpolicy, maxdays
+        )
         for att in attestations:
             key = (att.get('status', ''), att.get('identity', ''))
             seen.add(key)
@@ -178,8 +185,9 @@ def _retrieve_messages(message_id: str) -> List[email.message.EmailMessage]:
     return msgs
 
 
-def retrieve_series_messages(series: Dict[str, Any],
-                             identifier: str) -> List[email.message.EmailMessage]:
+def retrieve_series_messages(
+    series: Dict[str, Any], identifier: str
+) -> List[email.message.EmailMessage]:
     """Fetch messages for a tracked series, using stored patch info when available.
 
     For rethreaded series, reads the series_patches table to fetch each
@@ -202,7 +210,9 @@ def retrieve_series_messages(series: Dict[str, Any],
                 _msgids, all_msgs = b4.fetch_rethread_messages(msgids, nocache=True)
                 _cover_msgid, msgs = b4.LoreSeries.rethread_series(msgids, all_msgs)
                 if not msgs:
-                    raise LookupError(f'Could not retrieve series patches for {change_id}')
+                    raise LookupError(
+                        f'Could not retrieve series patches for {change_id}'
+                    )
                 return msgs
 
     if not message_id:
@@ -210,8 +220,11 @@ def retrieve_series_messages(series: Dict[str, Any],
     return _retrieve_messages(message_id)
 
 
-def _get_lore_series(msgs: List[email.message.EmailMessage], sloppytrailers: bool = False,
-                     wantver: Optional[int] = None) -> 'b4.LoreSeries':
+def _get_lore_series(
+    msgs: List[email.message.EmailMessage],
+    sloppytrailers: bool = False,
+    wantver: Optional[int] = None,
+) -> 'b4.LoreSeries':
     """Build a LoreMailbox from messages and return the requested series version.
 
     When *wantver* is ``None`` (the default), the highest version found
@@ -229,10 +242,11 @@ def _get_lore_series(msgs: List[email.message.EmailMessage], sloppytrailers: boo
     if wantver not in lmbx.series:
         found = ', '.join(f'v{v}' for v in sorted(lmbx.series.keys()))
         raise LookupError(
-            f'Series version {wantver} not found in retrieved messages'
-            f' (found: {found})')
-    lser = lmbx.get_series(wantver, sloppytrailers=sloppytrailers,
-                           codereview_trailers=False)
+            f'Series version {wantver} not found in retrieved messages (found: {found})'
+        )
+    lser = lmbx.get_series(
+        wantver, sloppytrailers=sloppytrailers, codereview_trailers=False
+    )
     if not lser:
         raise LookupError(f'Could not find series version {wantver}')
     return lser
@@ -253,12 +267,18 @@ def get_reference_message(lser: 'b4.LoreSeries') -> 'b4.LoreMessage':
     return ref_msg
 
 
-def create_review_branch(topdir: str, branch_name: str, base_commit: str,
-                         lser: b4.LoreSeries, linkurl: str, linkmask: str,
-                         num_prereqs: int = 0,
-                         identifier: Optional[str] = None,
-                         status: str = 'reviewing',
-                         is_rethreaded: bool = False) -> None:
+def create_review_branch(
+    topdir: str,
+    branch_name: str,
+    base_commit: str,
+    lser: b4.LoreSeries,
+    linkurl: str,
+    linkmask: str,
+    num_prereqs: int = 0,
+    identifier: Optional[str] = None,
+    status: str = 'reviewing',
+    is_rethreaded: bool = False,
+) -> None:
     # Verify branch does not already exist
     ecode, out = b4.git_run_command(topdir, ['rev-parse', '--verify', branch_name])
     if ecode == 0:
@@ -272,23 +292,27 @@ def create_review_branch(topdir: str, branch_name: str, base_commit: str,
         current_branch = out.strip()
 
     # Resolve base_commit to a concrete hash before checkout changes HEAD
-    ecode, out = b4.git_run_command(topdir, ['rev-parse', f'{base_commit}^{{}}'], logstderr=True)
+    ecode, out = b4.git_run_command(
+        topdir, ['rev-parse', f'{base_commit}^{{}}'], logstderr=True
+    )
     if ecode > 0:
         logger.critical('Unable to resolve base commit %s', base_commit)
         sys.exit(1)
     resolved_base = out.strip()
 
     # Create and check out the review branch
-    ecode, out = b4.git_run_command(topdir, ['checkout', '-b', branch_name, resolved_base],
-                                    logstderr=True)
+    ecode, out = b4.git_run_command(
+        topdir, ['checkout', '-b', branch_name, resolved_base], logstderr=True
+    )
     if ecode > 0:
         logger.critical('Unable to create branch %s at %s', branch_name, resolved_base)
         logger.critical(out.strip())
         sys.exit(1)
 
     # Cherry-pick the applied patches from FETCH_HEAD
-    ecode, out = b4.git_run_command(topdir, ['cherry-pick', f'{resolved_base}..FETCH_HEAD'],
-                                    logstderr=True)
+    ecode, out = b4.git_run_command(
+        topdir, ['cherry-pick', f'{resolved_base}..FETCH_HEAD'], logstderr=True
+    )
     if ecode > 0:
         logger.critical('Unable to cherry-pick patches onto review branch')
         logger.critical(out.strip())
@@ -301,8 +325,9 @@ def create_review_branch(topdir: str, branch_name: str, base_commit: str,
         sys.exit(1)
 
     # Record the first patch commit (the one right after base)
-    ecode, out = b4.git_run_command(topdir, ['rev-list', '--reverse',
-                                             f'{resolved_base}..HEAD'], logstderr=True)
+    ecode, out = b4.git_run_command(
+        topdir, ['rev-list', '--reverse', f'{resolved_base}..HEAD'], logstderr=True
+    )
     if ecode > 0 or not out.strip():
         logger.critical('Unable to determine first patch commit')
         sys.exit(1)
@@ -317,8 +342,9 @@ def create_review_branch(topdir: str, branch_name: str, base_commit: str,
         cover_content = clmsg.subject + '\n\n' + clmsg.body
     elif lser.patches[1] is not None:
         clmsg = lser.patches[1]
-        cover_content = (clmsg.subject + '\n\n'
-                         'NOTE: No cover letter provided by the author.')
+        cover_content = (
+            clmsg.subject + '\n\nNOTE: No cover letter provided by the author.'
+        )
     else:
         cover_content = 'NOTE: No cover letter or first patch available.'
 
@@ -341,7 +367,7 @@ def create_review_branch(topdir: str, branch_name: str, base_commit: str,
         if pbasement.strip():
             # Keep only the notes before the diff (diffstat, changelog, etc.)
             diff_start = b4.DIFF_RE.search(pbasement)
-            notes = pbasement[:diff_start.start()] if diff_start else pbasement
+            notes = pbasement[: diff_start.start()] if diff_start else pbasement
             if notes.strip():
                 pmeta['basement'] = notes
         patches_meta.append(pmeta)
@@ -353,7 +379,8 @@ def create_review_branch(topdir: str, branch_name: str, base_commit: str,
             'identifier': identifier,
             'status': status,
             'revision': lser.revision,
-            'change-id': lser.change_id or branch_name.removeprefix(REVIEW_BRANCH_PREFIX),
+            'change-id': lser.change_id
+            or branch_name.removeprefix(REVIEW_BRANCH_PREFIX),
             'link': linkurl,
             'subject': clmsg.full_subject if clmsg else '',
             'fromname': lser.fromname or '',
@@ -374,8 +401,12 @@ def create_review_branch(topdir: str, branch_name: str, base_commit: str,
     # Create the tracking commit at the tip of the branch
     commit_msg = cover_content + '\n\n' + make_review_magic_json(tracking)
 
-    ecode, out = b4.git_run_command(topdir, ['commit', '--allow-empty', '-F', '-'],
-                                    stdin=commit_msg.encode(), logstderr=True)
+    ecode, out = b4.git_run_command(
+        topdir,
+        ['commit', '--allow-empty', '-F', '-'],
+        stdin=commit_msg.encode(),
+        logstderr=True,
+    )
     if ecode > 0:
         logger.critical('Unable to create tracking commit')
         logger.critical(out.strip())
@@ -388,6 +419,7 @@ def create_review_branch(topdir: str, branch_name: str, base_commit: str,
     # Mark cover + patch messages as Seen in the messages DB
     try:
         from b4.review import messages
+
         entries = []
         for pmsg in lser.patches:
             if pmsg is None or not pmsg.msgid:
@@ -419,7 +451,9 @@ def main(cmdargs: argparse.Namespace) -> None:
         cmd_show_info(cmdargs)
 
 
-def get_review_branch_patch_ids(topdir: str, branch: str) -> List[Tuple[int, str, Optional[str]]]:
+def get_review_branch_patch_ids(
+    topdir: str, branch: str
+) -> List[Tuple[int, str, Optional[str]]]:
     """Compute stable patch-ids for every patch commit on a review branch.
 
     Loads tracking data to find the first-patch-commit, then iterates
@@ -435,7 +469,8 @@ def get_review_branch_patch_ids(topdir: str, branch: str) -> List[Tuple[int, str
         return []
 
     ecode, out = b4.git_run_command(
-        topdir, ['rev-list', '--reverse', f'{first_patch}~1..{branch}~1'])
+        topdir, ['rev-list', '--reverse', f'{first_patch}~1..{branch}~1']
+    )
     if ecode > 0 or not out.strip():
         return []
 
@@ -450,7 +485,8 @@ def get_review_branch_patch_ids(topdir: str, branch: str) -> List[Tuple[int, str
             result.append((idx, sha, None))
             continue
         ecode, pid_out = b4.git_run_command(
-            topdir, ['patch-id', '--stable'], stdin=bpatch)
+            topdir, ['patch-id', '--stable'], stdin=bpatch
+        )
         if ecode > 0 or not pid_out.strip():
             result.append((idx, sha, None))
             continue
@@ -471,7 +507,9 @@ def load_tracking(topdir: str, branch: str) -> Tuple[str, Dict[str, Any]]:
 
     commit_msg = out.strip()
     if REVIEW_MAGIC_MARKER not in commit_msg:
-        logger.critical('Branch %s does not contain a valid review tracking commit', branch)
+        logger.critical(
+            'Branch %s does not contain a valid review tracking commit', branch
+        )
         sys.exit(1)
 
     parts = commit_msg.split(REVIEW_MAGIC_MARKER, maxsplit=1)
@@ -499,7 +537,7 @@ def get_review_info(topdir: str, branch: str) -> Dict[str, Union[str, int, bool,
 
     sender = ''
     if series.get('fromname') or series.get('fromemail'):
-        sender = f"{series.get('fromname', '')} <{series.get('fromemail', '')}>"
+        sender = f'{series.get("fromname", "")} <{series.get("fromemail", "")}>'
 
     first_patch = series.get('first-patch-commit')
     prereqs = series.get('prerequisite-commits', [])
@@ -531,9 +569,15 @@ def get_review_info(topdir: str, branch: str) -> Dict[str, Union[str, int, bool,
     if first_patch:
         # Range: first-patch-commit~1..branch~1 (excludes the tracking commit at tip)
         commit_range = f'{first_patch}~1..{branch}~1'
-        lines = b4.git_get_command_lines(topdir, [
-            'log', '--reverse', '--format=%h %s', commit_range,
-        ])
+        lines = b4.git_get_command_lines(
+            topdir,
+            [
+                'log',
+                '--reverse',
+                '--format=%h %s',
+                commit_range,
+            ],
+        )
         info['series-range'] = f'{first_patch}..{branch}~1'
         info['num-patches'] = len(lines)
         for line in lines:
@@ -579,8 +623,11 @@ def show_review_info(param: str, as_json: bool = False) -> None:
         sys.exit(1)
 
     if not mybranch.startswith(REVIEW_BRANCH_PREFIX):
-        logger.critical('Branch %s does not look like a review branch (expected prefix %s)',
-                        mybranch, REVIEW_BRANCH_PREFIX)
+        logger.critical(
+            'Branch %s does not look like a review branch (expected prefix %s)',
+            mybranch,
+            REVIEW_BRANCH_PREFIX,
+        )
         sys.exit(1)
 
     info = get_review_info(topdir, mybranch)
@@ -627,8 +674,16 @@ def list_review_branches(as_json: bool = False) -> None:
     for idx, info in enumerate(all_info):
         if idx > 0:
             print()
-        for key in ('branch', 'change-id', 'status', 'subject', 'sender',
-                     'revision', 'num-patches', 'complete'):
+        for key in (
+            'branch',
+            'change-id',
+            'status',
+            'subject',
+            'sender',
+            'revision',
+            'num-patches',
+            'complete',
+        ):
             val = info.get(key)
             if val is not None:
                 print(f'{key}: {val}')
@@ -642,8 +697,9 @@ def cmd_show_info(cmdargs: argparse.Namespace) -> None:
         show_review_info(cmdargs.param, as_json=cmdargs.json_output)
 
 
-def save_tracking_ref(topdir: str, branch: str,
-                      cover_text: str, tracking: Dict[str, Any]) -> bool:
+def save_tracking_ref(
+    topdir: str, branch: str, cover_text: str, tracking: Dict[str, Any]
+) -> bool:
     """Amend the tracking commit at the tip of a ref without checkout.
 
     Uses git commit-tree + git update-ref so that commit.gpgsign and
@@ -651,7 +707,9 @@ def save_tracking_ref(topdir: str, branch: str,
     not benefit from signing.  Returns True on success.
     """
     if not branch.startswith(REVIEW_BRANCH_PREFIX):
-        logger.critical('Refusing to write tracking commit to non-review branch: %s', branch)
+        logger.critical(
+            'Refusing to write tracking commit to non-review branch: %s', branch
+        )
         return False
     commit_msg = cover_text + '\n\n' + make_review_magic_json(tracking)
     ecode, out = b4.git_run_command(topdir, ['rev-parse', f'{branch}^{{tree}}'])
@@ -662,14 +720,17 @@ def save_tracking_ref(topdir: str, branch: str,
     if ecode > 0:
         return False
     parent = out.strip()
-    ecode, out = b4.git_run_command(topdir,
-                                    ['commit-tree', tree, '-p', parent, '-F', '-'],
-                                    stdin=commit_msg.encode())
+    ecode, out = b4.git_run_command(
+        topdir,
+        ['commit-tree', tree, '-p', parent, '-F', '-'],
+        stdin=commit_msg.encode(),
+    )
     if ecode > 0:
         return False
     new_sha = out.strip()
-    ecode, out = b4.git_run_command(topdir,
-                                    ['update-ref', f'refs/heads/{branch}', new_sha])
+    ecode, out = b4.git_run_command(
+        topdir, ['update-ref', f'refs/heads/{branch}', new_sha]
+    )
     return ecode == 0
 
 
@@ -706,7 +767,9 @@ def _get_my_review(target: Dict[str, Any], usercfg: b4.ConfigDictT) -> Dict[str,
     return result
 
 
-def _ensure_my_review(target: Dict[str, Any], usercfg: b4.ConfigDictT) -> Dict[str, Any]:
+def _ensure_my_review(
+    target: Dict[str, Any], usercfg: b4.ConfigDictT
+) -> Dict[str, Any]:
     """Return the current user's review sub-dict, creating it if needed."""
     email = str(usercfg.get('email', 'unknown@example.com'))
     name = str(usercfg.get('name', 'Unknown'))
@@ -750,8 +813,7 @@ def _get_patch_state(target: Dict[str, Any], usercfg: b4.ConfigDictT) -> str:
     if explicit in ('skip', 'done'):
         return explicit
     trailer_keys = {
-        t.split(':', 1)[0].strip().lower()
-        for t in review.get('trailers', [])
+        t.split(':', 1)[0].strip().lower() for t in review.get('trailers', [])
     }
     if _NACK_TRAILER_KEY in trailer_keys:
         return 'draft'
@@ -762,14 +824,16 @@ def _get_patch_state(target: Dict[str, Any], usercfg: b4.ConfigDictT) -> str:
     # Check for external reviewer comments
     my_email = str(usercfg.get('email', ''))
     all_reviews = target.get('reviews', {})
-    if any(addr != my_email and rev.get('comments')
-           for addr, rev in all_reviews.items()):
+    if any(
+        addr != my_email and rev.get('comments') for addr, rev in all_reviews.items()
+    ):
         return 'external'
     return explicit if explicit else ''
 
 
-def _set_patch_state(target: Dict[str, Any], usercfg: b4.ConfigDictT,
-                     state: str) -> None:
+def _set_patch_state(
+    target: Dict[str, Any], usercfg: b4.ConfigDictT, state: str
+) -> None:
     """Store an explicit patch state ('done', 'skip', 'unchanged', or '' to clear)."""
     if state:
         review = _ensure_my_review(target, usercfg)
@@ -866,8 +930,9 @@ def _resolve_comment_positions(
             # comment's current (source-derived) position and path.
             cur_path = c['path']
             cur_line = c['line']
-            best = min(positions,
-                       key=lambda p: (p[0] != cur_path, abs(p[1] - cur_line)))
+            best = min(
+                positions, key=lambda p: (p[0] != cur_path, abs(p[1] - cur_line))
+            )
             c['path'], c['line'] = best
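The reflowed `min(...)` above leans on Python's tuple ordering: `p[0] != cur_path` is `False` for same-file candidates (and `False` sorts before `True`), with `abs(p[1] - cur_line)` breaking ties by line distance. The selection rule in isolation:

```python
def best_position(positions, cur_path, cur_line):
    """Prefer candidates in the same file, then the closest line number."""
    # Tuples compare element-wise: same-path entries (False, ...) always
    # beat other-path entries (True, ...); within a group, smaller
    # absolute line distance wins.
    return min(positions, key=lambda p: (p[0] != cur_path, abs(p[1] - cur_line)))


positions = [('a.c', 10), ('a.c', 90), ('b.c', 41)]
# Same file wins even though b.c:41 is numerically nearer to line 40.
print(best_position(positions, 'a.c', 40))  # → ('a.c', 10)
```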
 
 
@@ -894,8 +959,7 @@ def reanchor_patch_comments(
             if not comments or not any(c.get('content') for c in comments):
                 continue
             if real_diff is None:
-                ecode, real_diff = b4.git_run_command(
-                    topdir, ['diff', f'{sha}~1', sha])
+                ecode, real_diff = b4.git_run_command(topdir, ['diff', f'{sha}~1', sha])
                 if ecode != 0:
                     break
             _resolve_comment_positions(real_diff, comments)
@@ -999,8 +1063,9 @@ def _integrate_agent_reviews(
 
     # Read NNNN.txt files (per-patch reviews, 1-indexed)
     try:
-        entries = sorted(f for f in os.listdir(review_dir)
-                         if re.match(r'^\d{4}\.txt$', f))
+        entries = sorted(
+            f for f in os.listdir(review_dir) if re.match(r'^\d{4}\.txt$', f)
+        )
     except OSError:
         entries = []
 
@@ -1008,12 +1073,19 @@ def _integrate_agent_reviews(
         patch_num = int(fname[:4])  # 1-indexed
         idx = patch_num - 1
         if idx < 0 or idx >= len(patches):
-            logger.warning('b4-review/%s/%s: patch number out of range, skipping',
-                           head_sha[:12], fname)
+            logger.warning(
+                'b4-review/%s/%s: patch number out of range, skipping',
+                head_sha[:12],
+                fname,
+            )
             continue
         if idx >= len(commit_shas):
-            logger.warning('b4-review/%s/%s: no commit SHA for patch %d, skipping',
-                           head_sha[:12], fname, patch_num)
+            logger.warning(
+                'b4-review/%s/%s: no commit SHA for patch %d, skipping',
+                head_sha[:12],
+                fname,
+                patch_num,
+            )
             continue
 
         fpath = os.path.join(review_dir, fname)
@@ -1031,7 +1103,7 @@ def _integrate_agent_reviews(
             diff_portion = file_text
         elif diff_idx >= 0:
             note_text = file_text[:diff_idx].strip()
-            diff_portion = file_text[diff_idx + 1:]
+            diff_portion = file_text[diff_idx + 1 :]
         else:
             note_text = file_text.strip()
 
@@ -1075,8 +1147,11 @@ def _integrate_agent_reviews(
         save_tracking_ref(topdir, branch, cover_text, tracking)
     else:
         save_tracking(topdir, cover_text, tracking)
-    logger.info('Integrated agent review data from %d file(s) in b4-review/%s',
-                integrated, head_sha[:12])
+    logger.info(
+        'Integrated agent review data from %d file(s) in b4-review/%s',
+        integrated,
+        head_sha[:12],
+    )
 
     # Clean up the consumed review directory
     shutil.rmtree(review_dir, ignore_errors=True)
@@ -1084,8 +1159,9 @@ def _integrate_agent_reviews(
     return True
 
 
-def _extract_comments_from_quoted_reply(text: str,
-                                        capture_preamble: bool = False) -> List[Dict[str, Any]]:
+def _extract_comments_from_quoted_reply(
+    text: str, capture_preamble: bool = False
+) -> List[Dict[str, Any]]:
     """Extract inline comments from a ``> ``-quoted email reply.
 
     This is the standard mailing-list code review format: the reviewer
@@ -1111,9 +1187,18 @@ def _extract_comments_from_quoted_reply(text: str,
     i = 0
     while i < len(raw_lines):
         line = raw_lines[i]
-        stripped = line[2:] if line.startswith('> ') else line[1:] if line.startswith('>') else None
-        if (stripped is not None
-                and stripped.startswith('diff --git a/') and ' b/' not in stripped):
+        stripped = (
+            line[2:]
+            if line.startswith('> ')
+            else line[1:]
+            if line.startswith('>')
+            else None
+        )
+        if (
+            stripped is not None
+            and stripped.startswith('diff --git a/')
+            and ' b/' not in stripped
+        ):
             # Peek at next line for the b/ continuation
             if i + 1 < len(raw_lines):
                 nxt = raw_lines[i + 1]
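The conditional expression reformatted in this hunk maps each reply line to its dequoted form: `'> '` prefixes lose two characters, bare `'>'` lines lose one, and unquoted lines become `None`. That one step, pulled out for clarity:

```python
def dequote(line):
    """Strip one level of email quoting; None means the line wasn't quoted."""
    if line.startswith('> '):
        return line[2:]
    if line.startswith('>'):
        return line[1:]
    return None


print(dequote('> diff --git a/f'))  # → diff --git a/f
print(dequote('my inline comment'))  # → None
```

Distinguishing `None` from `''` is what lets the caller tell reviewer commentary apart from an empty quoted line (`'>'` alone).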
@@ -1170,11 +1255,13 @@ def _extract_comments_from_quoted_reply(text: str,
         """Store preamble text as a comment on commit message line 0."""
         text = '\n'.join(preamble_lines).strip()
         if text:
-            comments.append({
-                'path': COMMIT_MESSAGE_PATH,
-                'line': 0,
-                'text': text,
-            })
+            comments.append(
+                {
+                    'path': COMMIT_MESSAGE_PATH,
+                    'line': 0,
+                    'text': text,
+                }
+            )
         preamble_lines.clear()
 
     for line in text.splitlines():
@@ -1330,6 +1417,7 @@ def _integrate_sashiko_reviews(
         return False
 
     from b4.review.checks import _fetch_sashiko_patchset, clear_sashiko_cache
+
     clear_sashiko_cache()
     patchset = _fetch_sashiko_patchset(series_msgid, sashiko_url)
     if not patchset:
@@ -1444,8 +1532,8 @@ def _integrate_followup_inline_comments(
 
     cover_msgid = series.get('header-info', {}).get('msgid', '')
     followup_comments = b4.review.tracking._parse_msgs_to_followup_comments(
-        liblore.utils.split_mbox(mbox_bytes),
-        cover_msgid, patches)
+        liblore.utils.split_mbox(mbox_bytes), cover_msgid, patches
+    )
 
     integrated = 0
 
@@ -1633,8 +1721,9 @@ def _render_quoted_diff_with_comments(
     return '\n'.join(result) + '\n'
 
 
-def _extract_editor_comments(edited_text: str,
-                             diff_text: str = '') -> List[Dict[str, Any]]:
+def _extract_editor_comments(
+    edited_text: str, diff_text: str = ''
+) -> List[Dict[str, Any]]:
     """Extract comments from the quoted-diff editor format.
 
     Strips instruction lines (``#`` prefix) and external reviewer
@@ -1655,16 +1744,19 @@ def _extract_editor_comments(edited_text: str,
             continue
         filtered.append(line)
     comments = _extract_comments_from_quoted_reply(
-        '\n'.join(filtered), capture_preamble=True)
+        '\n'.join(filtered), capture_preamble=True
+    )
     if diff_text and comments:
         _resolve_comment_positions(diff_text, comments)
     return comments
 
 
-def _build_reply_from_comments(diff_text: str,
-                               comments: List[Dict[str, Any]],
-                               review_trailers: List[str],
-                               commit_msg: Optional[str] = None) -> str:
+def _build_reply_from_comments(
+    diff_text: str,
+    comments: List[Dict[str, Any]],
+    review_trailers: List[str],
+    commit_msg: Optional[str] = None,
+) -> str:
     """Build an email reply body from review comments.
 
     For each hunk that has comments, quotes the hunk up to the commented
@@ -1718,8 +1810,9 @@ def _build_reply_from_comments(diff_text: str,
         if msg_comment_indices:
             prev_quoted = 0  # last msg line index (1-based) emitted
             for comment_lineno in msg_comment_indices:
-                window_start = max(prev_quoted + 1,
-                                   comment_lineno - _REPLY_CONTEXT_LINES)
+                window_start = max(
+                    prev_quoted + 1, comment_lineno - _REPLY_CONTEXT_LINES
+                )
                 # Clamp to valid msg_lines range
                 window_start = min(window_start, len(msg_lines) + 1)
                 comment_quote_end = min(comment_lineno, len(msg_lines))
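The `window_start` computation above decides how much context to quote before each commented line: up to `_REPLY_CONTEXT_LINES` lines back, never re-quoting lines already emitted for an earlier comment, clamped to the message bounds. A sketch of the window arithmetic (the constant value `3` here is an assumption; the real value is defined elsewhere in the module):

```python
_REPLY_CONTEXT_LINES = 3  # assumed value for illustration


def quote_windows(comment_lines, total):
    """Yield (start, end) 1-based quoting ranges for commented line numbers."""
    prev_quoted = 0  # last line index already emitted
    for lineno in sorted(comment_lines):
        start = max(prev_quoted + 1, lineno - _REPLY_CONTEXT_LINES)
        start = min(start, total + 1)   # clamp into the message
        end = min(lineno, total)
        yield start, end
        prev_quoted = end


print(list(quote_windows([2, 5, 6], total=10)))  # → [(1, 2), (3, 5), (6, 6)]
```

Because `start` is floored at `prev_quoted + 1`, successive windows never overlap, so no source line is quoted twice.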
@@ -1913,16 +2006,19 @@ def update_series_tracking(
     if b4.can_network:
         try:
             _conn = b4.review.tracking.get_db(identifier)
-            _known = set(r['revision'] for r in b4.review.tracking.get_revisions(_conn, change_id))
+            _known = set(
+                r['revision']
+                for r in b4.review.tracking.get_revisions(_conn, change_id)
+            )
             _conn.close()
         except Exception:
             _known = set()
 
         msgs = b4.mbox.get_extra_series(msgs, direction=1, nocache=True)
         if current_rev > 1 and not _known:
-            msgs = b4.mbox.get_extra_series(msgs, direction=-1,
-                                            wantvers=list(range(1, current_rev)),
-                                            nocache=True)
+            msgs = b4.mbox.get_extra_series(
+                msgs, direction=-1, wantvers=list(range(1, current_rev)), nocache=True
+            )
 
     lmbx = b4.LoreMailbox()
     for msg in msgs:
@@ -1935,15 +2031,16 @@ def update_series_tracking(
         lser_att = lmbx.get_series(sloppytrailers=False)
     if lser_att is not None:
         att = check_series_attestation(lser_att)
-        b4.review.tracking.update_attestation(
-            identifier, change_id, current_rev, att)
+        b4.review.tracking.update_attestation(identifier, change_id, current_rev, att)
 
     # Record all discovered revisions in SQLite, keeping track of what
     # was already known so we can distinguish genuinely new versions.
     previously_known: Set[int] = set()
     try:
         conn = b4.review.tracking.get_db(identifier)
-        previously_known = set(r['revision'] for r in b4.review.tracking.get_revisions(conn, change_id))
+        previously_known = set(
+            r['revision'] for r in b4.review.tracking.get_revisions(conn, change_id)
+        )
         for v in sorted(lmbx.series.keys()):
             v_ser = lmbx.series[v]
             v_msgid = ''
@@ -1952,10 +2049,14 @@ def update_series_tracking(
                 for p in v_ser.patches:
                     if p is not None:
                         v_msgid = p.msgid
-                        v_subject = getattr(p, 'full_subject', '') or getattr(p, 'subject', '')
+                        v_subject = getattr(p, 'full_subject', '') or getattr(
+                            p, 'subject', ''
+                        )
                         break
             v_link = (linkmask % v_msgid) if v_msgid and '%s' in str(linkmask) else ''
-            b4.review.tracking.add_revision(conn, change_id, v, v_msgid, v_subject, v_link)
+            b4.review.tracking.add_revision(
+                conn, change_id, v, v_msgid, v_subject, v_link
+            )
             if v not in previously_known:
                 result['new_revisions'] += 1
         conn.close()
@@ -1969,17 +2070,16 @@ def update_series_tracking(
     # prevents a broken version (e.g. v2 that fails to apply) from
     # repeatedly waking the series after the maintainer puts it back
     # into waiting.
-    if status == 'waiting' and _should_promote_waiting(
-            newer_vers, previously_known):
-            try:
-                conn = b4.review.tracking.get_db(identifier)
-                b4.review.tracking.update_series_status(
-                    conn, change_id, 'reviewing',
-                    revision=series.get('revision'))
-                conn.close()
-                result['promoted'] = True
-            except Exception as ex:
-                logger.warning('Could not promote waiting series: %s', ex)
+    if status == 'waiting' and _should_promote_waiting(newer_vers, previously_known):
+        try:
+            conn = b4.review.tracking.get_db(identifier)
+            b4.review.tracking.update_series_status(
+                conn, change_id, 'reviewing', revision=series.get('revision')
+            )
+            conn.close()
+            result['promoted'] = True
+        except Exception as ex:
+            logger.warning('Could not promote waiting series: %s', ex)
 
     # Update follow-up trailers if the series has a review branch
     if status in ('reviewing', 'replied', 'waiting') and topdir:
@@ -2000,14 +2100,15 @@ def update_series_tracking(
         else:
             t_series.pop('newer-versions', None)
 
-        lser = lmbx.get_series(wantver, sloppytrailers=False,
-                               codereview_trailers=True)
+        lser = lmbx.get_series(wantver, sloppytrailers=False, codereview_trailers=True)
         if lser is None:
             result['error'] = f'Could not find series v{wantver} in retrieved messages'
             return result
 
         # Collect fresh cover followups
-        clmsg = lser.patches[0] if lser.has_cover and lser.patches[0] is not None else None
+        clmsg = (
+            lser.patches[0] if lser.has_cover and lser.patches[0] is not None else None
+        )
         new_cover_followups = _collect_followups(clmsg, linkmask) if clmsg else list()
 
         # Collect fresh per-patch followups
@@ -2059,7 +2160,8 @@ def update_series_tracking(
         try:
             conn = b4.review.tracking.get_db(identifier)
             b4.review.tracking.update_message_count_from_msgs(
-                conn, change_id, current_rev, thread_msgs, topdir=topdir)
+                conn, change_id, current_rev, thread_msgs, topdir=topdir
+            )
             conn.close()
             result['counts_updated'] = True
         except Exception as ex:
@@ -2095,9 +2197,12 @@ def cmd_tui(cmdargs: argparse.Namespace) -> None:
             logger.critical('Enroll with: b4 review enroll')
             sys.exit(1)
 
-    b4.review_tui.run_tracking_tui(identifier, email_dryrun=cmdargs.email_dryrun,
-                                   no_sign=cmdargs.no_sign,
-                                   no_mouse=cmdargs.no_mouse)
+    b4.review_tui.run_tracking_tui(
+        identifier,
+        email_dryrun=cmdargs.email_dryrun,
+        no_sign=cmdargs.no_sign,
+        no_mouse=cmdargs.no_mouse,
+    )
 
 
 def _prepare_review_session(cmdargs: argparse.Namespace) -> Dict[str, Any]:
@@ -2121,8 +2226,11 @@ def _prepare_review_session(cmdargs: argparse.Namespace) -> Dict[str, Any]:
         branch = out.strip()
 
     if not branch.startswith(REVIEW_BRANCH_PREFIX):
-        logger.critical('Branch %s does not look like a review branch (expected prefix %s)',
-                        branch, REVIEW_BRANCH_PREFIX)
+        logger.critical(
+            'Branch %s does not look like a review branch (expected prefix %s)',
+            branch,
+            REVIEW_BRANCH_PREFIX,
+        )
         sys.exit(1)
 
     # No checkout needed — all git operations use explicit refs.
@@ -2150,8 +2258,9 @@ def _prepare_review_session(cmdargs: argparse.Namespace) -> Dict[str, Any]:
     commit_shas = out.strip().splitlines()
 
     # Get commit subjects
-    ecode, out = b4.git_run_command(topdir, ['log', '--reverse', '--format=%s',
-                                              range_spec])
+    ecode, out = b4.git_run_command(
+        topdir, ['log', '--reverse', '--format=%s', range_spec]
+    )
     if ecode > 0:
         logger.critical('Unable to get commit subjects')
         sys.exit(1)
@@ -2167,7 +2276,9 @@ def _prepare_review_session(cmdargs: argparse.Namespace) -> Dict[str, Any]:
     cover_subject = series.get('subject', '')
     cover_subject_clean = b4.LoreSubject(cover_subject).subject
     if not cover_subject_clean:
-        cover_subject_clean = cover_text.split('\n', maxsplit=1)[0] if cover_text else '(no subject)'
+        cover_subject_clean = (
+            cover_text.split('\n', maxsplit=1)[0] if cover_text else '(no subject)'
+        )
 
     # Get user identity for trailers (needed throughout the loop)
     usercfg = b4.get_user_config()
@@ -2176,20 +2287,32 @@ def _prepare_review_session(cmdargs: argparse.Namespace) -> Dict[str, Any]:
     default_identity = f'{user_name} <{user_email}>'
 
     # Integrate agent reviews from .git/b4-review/
-    _integrate_agent_reviews(topdir, cover_text, tracking, commit_shas, patches, branch=branch)
+    _integrate_agent_reviews(
+        topdir, cover_text, tracking, commit_shas, patches, branch=branch
+    )
 
     # Integrate sashiko inline reviews (if configured)
-    _integrate_sashiko_reviews(topdir, cover_text, tracking, commit_shas, patches, branch=branch)
+    _integrate_sashiko_reviews(
+        topdir, cover_text, tracking, commit_shas, patches, branch=branch
+    )
 
     # Integrate inline comments from mailing-list follow-up messages
-    _integrate_followup_inline_comments(topdir, cover_text, tracking, commit_shas, patches, branch=branch)
+    _integrate_followup_inline_comments(
+        topdir, cover_text, tracking, commit_shas, patches, branch=branch
+    )
 
     # Ensure the plain-text thread-context-blob exists for the AI agent.
     # Runs only when thread-blob was stored before this feature existed
     # (migration) or is being seen for the first time this session.
     change_id = series.get('change-id')
-    if change_id and series.get('thread-blob') and not series.get('thread-context-blob'):
-        b4.review.tracking.ensure_thread_context_blob(topdir, change_id, series, patches)
+    if (
+        change_id
+        and series.get('thread-blob')
+        and not series.get('thread-context-blob')
+    ):
+        b4.review.tracking.ensure_thread_context_blob(
+            topdir, change_id, series, patches
+        )
 
     # Record current branch so ReviewApp can restore it if it checks
     # out the review branch for shell/agent operations.
@@ -2239,9 +2362,14 @@ def _ensure_trailers_in_body(body: str, trailers: List[str]) -> str:
     return main_body
 
 
-def _build_review_email(series: Dict[str, Any], patch_meta: Optional[Dict[str, Any]],
-                        review: Dict[str, Any], cover_text: str,
-                        topdir: str, commit_sha: Optional[str]) -> Optional[email.message.EmailMessage]:
+def _build_review_email(
+    series: Dict[str, Any],
+    patch_meta: Optional[Dict[str, Any]],
+    review: Dict[str, Any],
+    cover_text: str,
+    topdir: str,
+    commit_sha: Optional[str],
+) -> Optional[email.message.EmailMessage]:
     """Build an EmailMessage for a single review entry (cover or patch).
 
     Returns None if there is nothing to send.
@@ -2276,41 +2404,68 @@ def _build_review_email(series: Dict[str, Any], patch_meta: Optional[Dict[str, A
     elif comments and patch_meta is None:
         # Cover letter with structured comments — build reply from cover text
         reply_body = _build_reply_from_comments(
-            '', comments, trailers, commit_msg=cover_text)
+            '', comments, trailers, commit_msg=cover_text
+        )
         # Add blank line before preamble, but not before quoted content
         sep = '\n\n' if not reply_body.startswith('>') else '\n'
         body = attribution + sep + reply_body
     elif comments and commit_sha and topdir:
         # Auto-generate reply from inline review comments
         ecode, commit_msg = b4.git_run_command(
-            topdir, ['show', '--format=%B', '--no-patch', commit_sha])
+            topdir, ['show', '--format=%B', '--no-patch', commit_sha]
+        )
         if ecode > 0:
             logger.warning('Could not get commit message for %s', commit_sha)
             return None
         ecode, diff_text = b4.git_run_command(
-            topdir, ['diff', f'{commit_sha}~1', commit_sha])
+            topdir, ['diff', f'{commit_sha}~1', commit_sha]
+        )
         if ecode > 0:
             logger.warning('Could not get diff for %s', commit_sha)
             return None
         reply_body = _build_reply_from_comments(
-            diff_text, comments, trailers, commit_msg=commit_msg)
+            diff_text, comments, trailers, commit_msg=commit_msg
+        )
         sep = '\n\n' if not reply_body.startswith('>') else '\n'
         body = attribution + sep + reply_body
     else:
         # Trailer-only reply: quote the first paragraph of the original
         if patch_meta is not None and commit_sha and topdir:
             ecode, commit_msg = b4.git_run_command(
-                topdir, ['show', '--format=%B', '--no-patch', commit_sha])
+                topdir, ['show', '--format=%B', '--no-patch', commit_sha]
+            )
             if ecode == 0 and commit_msg.strip():
                 # Strip the subject line (already in Subject: Re: header)
                 cm_body = '\n'.join(_strip_subject(commit_msg))
-                body = attribution + '\n' + b4.make_quote(cm_body) + '\n\n' + '\n'.join(trailers) \
-                    if cm_body else \
-                    attribution + '\n' + b4.make_quote(cover_text) + '\n\n' + '\n'.join(trailers)
+                body = (
+                    attribution
+                    + '\n'
+                    + b4.make_quote(cm_body)
+                    + '\n\n'
+                    + '\n'.join(trailers)
+                    if cm_body
+                    else attribution
+                    + '\n'
+                    + b4.make_quote(cover_text)
+                    + '\n\n'
+                    + '\n'.join(trailers)
+                )
             else:
-                body = attribution + '\n' + b4.make_quote(cover_text) + '\n\n' + '\n'.join(trailers)
+                body = (
+                    attribution
+                    + '\n'
+                    + b4.make_quote(cover_text)
+                    + '\n\n'
+                    + '\n'.join(trailers)
+                )
         else:
-            body = attribution + '\n' + b4.make_quote(cover_text) + '\n\n' + '\n'.join(trailers)
+            body = (
+                attribution
+                + '\n'
+                + b4.make_quote(cover_text)
+                + '\n\n'
+                + '\n'.join(trailers)
+            )
 
     # Ensure all trailers appear in the body
     body = _ensure_trailers_in_body(body, trailers)
@@ -2388,11 +2543,12 @@ def collect_review_emails(
 
     # Cover letter review (maintainer only)
     cover_review = series.get('reviews', {}).get(my_email, {})
-    if (cover_review
-            and cover_review.get('patch-state') != 'skip'
-            and cover_review.get('sent-revision') is None):
-        msg = _build_review_email(series, None, cover_review, cover_text,
-                                  topdir, None)
+    if (
+        cover_review
+        and cover_review.get('patch-state') != 'skip'
+        and cover_review.get('sent-revision') is None
+    ):
+        msg = _build_review_email(series, None, cover_review, cover_text, topdir, None)
         if msg is not None:
             msgs.append(msg)
 
@@ -2404,8 +2560,9 @@ def collect_review_emails(
         if patch_review.get('sent-revision') is not None:
             continue
         commit_sha = commit_shas[idx] if idx < len(commit_shas) else None
-        msg = _build_review_email(series, patch_meta, patch_review, cover_text,
-                                  topdir, commit_sha)
+        msg = _build_review_email(
+            series, patch_meta, patch_review, cover_text, topdir, commit_sha
+        )
         if msg is not None:
             msgs.append(msg)
 
@@ -2455,7 +2612,9 @@ def pw_fetch_series(pwkey: str, pwurl: str, pwproj: str) -> List[Dict[str, Any]]
                 series_map[sid] = {
                     'id': sid,
                     'name': s.get('name', '(no subject)'),
-                    'submitter': submitter.get('name', submitter.get('email', 'Unknown')),
+                    'submitter': submitter.get(
+                        'name', submitter.get('email', 'Unknown')
+                    ),
                     'submitter_email': submitter.get('email', ''),
                     'delegate': delegate.get('username', ''),
                     'date': patch.get('date', ''),
@@ -2477,7 +2636,9 @@ def pw_fetch_series(pwkey: str, pwurl: str, pwproj: str) -> List[Dict[str, Any]]
                 # Aggregate CI check: worst status wins
                 patch_check = patch.get('check', 'pending')
                 cur_check = series_map[sid].get('check', 'pending')
-                if _check_priority.get(patch_check, 0) > _check_priority.get(cur_check, 0):
+                if _check_priority.get(patch_check, 0) > _check_priority.get(
+                    cur_check, 0
+                ):
                     series_map[sid]['check'] = patch_check
             if patch_id:
                 series_map[sid]['patch_ids'].append(patch_id)
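The aggregation reflowed above keeps, per series, the worst CI check status seen on any patch. Assuming a priority map along the lines of `pending < success < warning < fail` (the actual `_check_priority` contents are not visible in this hunk), the rule is just a running maximum over priorities:

```python
# Assumed ordering for illustration; the real _check_priority map is
# defined elsewhere in the module.
_check_priority = {'pending': 0, 'success': 1, 'warning': 2, 'fail': 3}


def aggregate_check(statuses):
    """Worst status wins: one failing patch marks the whole series."""
    worst = 'pending'
    for status in statuses:
        if _check_priority.get(status, 0) > _check_priority.get(worst, 0):
            worst = status
    return worst


print(aggregate_check(['success', 'success', 'warning', 'success']))  # → warning
```

Using `.get(status, 0)` means an unrecognized status string silently ranks lowest rather than raising, matching the defensive style of the surrounding code.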
@@ -2510,8 +2671,9 @@ def pw_fetch_states(pwkey: str, pwurl: str, pwproj: str) -> List[Dict[str, Any]]
     return [{'slug': s, 'name': s.replace('-', ' ').title()} for s in default_slugs]
 
 
-def pw_fetch_checks(pwkey: str, pwurl: str,
-                    patch_ids: List[int]) -> List[Dict[str, Any]]:
+def pw_fetch_checks(
+    pwkey: str, pwurl: str, patch_ids: List[int]
+) -> List[Dict[str, Any]]:
     """Fetch CI check details for a list of patch IDs.
 
     Returns a flat list of check dicts, each augmented with 'patch_id'.
@@ -2532,8 +2694,9 @@ def pw_fetch_checks(pwkey: str, pwurl: str,
     return all_checks
 
 
-def pw_set_series_state(pwkey: str, pwurl: str, patch_ids: List[int],
-                        state: str, archived: bool) -> Tuple[int, int]:
+def pw_set_series_state(
+    pwkey: str, pwurl: str, patch_ids: List[int], state: str, archived: bool
+) -> Tuple[int, int]:
     """Set state and archived flag on patches by patch ID.
 
     Returns (success_count, failure_count).
@@ -2558,7 +2721,9 @@ def pw_set_series_state(pwkey: str, pwurl: str, patch_ids: List[int],
     return ok, fail
 
 
-def pw_update_series_state(pw_series_id: int, state: str, archived: bool = False) -> bool:
+def pw_update_series_state(
+    pw_series_id: int, state: str, archived: bool = False
+) -> bool:
     """Update Patchwork state for a series tracked by pw_series_id.
 
     Looks up pw-key and pw-url from git config, fetches the patch IDs
diff --git a/src/b4/review/checks.py b/src/b4/review/checks.py
index 2ea5027..a7e1ff0 100644
--- a/src/b4/review/checks.py
+++ b/src/b4/review/checks.py
@@ -33,9 +33,10 @@ def clear_sashiko_cache() -> None:
     """Clear the sashiko patchset cache between check runs."""
     _sashiko_patchset_cache.clear()
 
+
 SCHEMA_VERSION = 1
 
-SCHEMA_SQL = '''
+SCHEMA_SQL = """
 CREATE TABLE IF NOT EXISTS schema_version (
     version INTEGER PRIMARY KEY
 );
@@ -50,13 +51,14 @@ CREATE TABLE IF NOT EXISTS check_results (
     checked_at TEXT NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%S', 'now')),
     PRIMARY KEY (msgid, tool)
 );
-'''
+"""
 
 
 # ---------------------------------------------------------------------------
 # Cache database
 # ---------------------------------------------------------------------------
 
+
 def _get_db_path() -> str:
     """Return the path to the CI check cache database."""
     datadir = b4.get_data_dir()
@@ -75,23 +77,25 @@ def get_db() -> sqlite3.Connection:
         conn.executescript(SCHEMA_SQL)
         conn.execute(
             'INSERT OR REPLACE INTO schema_version (version) VALUES (?)',
-            (SCHEMA_VERSION,))
+            (SCHEMA_VERSION,),
+        )
         conn.commit()
     return conn
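The `get_db` hunk above initializes the cache only when the stored schema version is missing or stale; `executescript` plus an `INSERT OR REPLACE` of the version row makes the whole step idempotent. A self-contained sketch of that pattern, with the schema trimmed down from the real one:

```python
import sqlite3

SCHEMA_VERSION = 1
SCHEMA_SQL = """
CREATE TABLE IF NOT EXISTS schema_version (version INTEGER PRIMARY KEY);
CREATE TABLE IF NOT EXISTS check_results (
    msgid TEXT NOT NULL,
    tool TEXT NOT NULL,
    status TEXT NOT NULL,
    PRIMARY KEY (msgid, tool)
);
"""


def open_db(path):
    conn = sqlite3.connect(path)
    try:
        row = conn.execute('SELECT version FROM schema_version').fetchone()
        current = row[0] if row else None
    except sqlite3.OperationalError:  # fresh database: no tables yet
        current = None
    if current != SCHEMA_VERSION:
        # IF NOT EXISTS keeps this safe to re-run on an up-to-date db.
        conn.executescript(SCHEMA_SQL)
        conn.execute('INSERT OR REPLACE INTO schema_version (version) VALUES (?)',
                     (SCHEMA_VERSION,))
        conn.commit()
    return conn
```

The composite `PRIMARY KEY (msgid, tool)` is what lets the later `INSERT OR REPLACE` in `store_results` act as an upsert per (message, tool) pair.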
 
 
 def cleanup_old(conn: sqlite3.Connection, max_days: int = 180) -> int:
     """Delete check results older than *max_days*. Returns count deleted."""
-    cutoff = (datetime.datetime.now(datetime.timezone.utc)
-              - datetime.timedelta(days=max_days)).isoformat()
-    cursor = conn.execute(
-        'DELETE FROM check_results WHERE checked_at < ?', (cutoff,))
+    cutoff = (
+        datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=max_days)
+    ).isoformat()
+    cursor = conn.execute('DELETE FROM check_results WHERE checked_at < ?', (cutoff,))
     conn.commit()
     return cursor.rowcount
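Worth noting about `cleanup_old` above: it compares ISO-8601 strings directly in SQL, which is valid because timestamps in the `YYYY-MM-DDTHH:MM:SS` layout sort lexicographically in chronological order. The cutoff construction on its own:

```python
import datetime


def cutoff_iso(max_days):
    """UTC cutoff as an ISO-8601 string, comparable lexicographically in SQL."""
    return (datetime.datetime.now(datetime.timezone.utc)
            - datetime.timedelta(days=max_days)).isoformat()


# String order matches chronological order for this fixed-width format,
# so SQLite's plain 'checked_at < ?' comparison is a correct date filter.
print('2025-01-01T00:00:00' < '2025-06-01T00:00:00')  # → True
```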
 
 
-def get_cached_results(conn: sqlite3.Connection,
-                       msgids: List[str]) -> Dict[str, List[Dict[str, str]]]:
+def get_cached_results(
+    conn: sqlite3.Connection, msgids: List[str]
+) -> Dict[str, List[Dict[str, str]]]:
     """Return cached check results keyed by msgid.
 
     Returns ``{msgid: [{tool, status, summary, url, details}, ...]}``.
@@ -102,7 +106,8 @@ def get_cached_results(conn: sqlite3.Connection,
     cursor = conn.execute(
         'SELECT msgid, tool, status, summary, url, details'
         f' FROM check_results WHERE msgid IN ({placeholders})',
-        msgids)
+        msgids,
+    )
     results: Dict[str, List[Dict[str, str]]] = {}
     for row in cursor.fetchall():
         entry = {
@@ -116,29 +121,33 @@ def get_cached_results(conn: sqlite3.Connection,
     return results
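`get_cached_results` builds its `IN` clause by joining one `?` per message-id, keeping the query fully parameterized for an arbitrary-length key list. The pattern in isolation (table trimmed to two columns for the sketch):

```python
import sqlite3


def fetch_by_keys(conn, keys):
    """Parameterized IN query: one '?' placeholder per key, no string splicing."""
    if not keys:
        return []
    # len(keys) == 3 yields '?,?,?'; the values travel as bind parameters.
    placeholders = ','.join('?' * len(keys))
    return conn.execute(
        f'SELECT msgid, status FROM check_results WHERE msgid IN ({placeholders})',
        keys,
    ).fetchall()
```

Only the placeholder count is interpolated into the SQL text; the key values themselves are always bound, so untrusted message-ids cannot inject SQL.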
 
 
-def store_results(conn: sqlite3.Connection, msgid: str,
-                  results: List[Dict[str, str]]) -> None:
+def store_results(
+    conn: sqlite3.Connection, msgid: str, results: List[Dict[str, str]]
+) -> None:
     """Store check results for a single message."""
     for entry in results:
         conn.execute(
             'INSERT OR REPLACE INTO check_results'
             ' (msgid, tool, status, summary, url, details)'
             ' VALUES (?, ?, ?, ?, ?, ?)',
-            (msgid, entry['tool'], entry['status'],
-             entry.get('summary', ''), entry.get('url', ''),
-             entry.get('details', '')))
+            (
+                msgid,
+                entry['tool'],
+                entry['status'],
+                entry.get('summary', ''),
+                entry.get('url', ''),
+                entry.get('details', ''),
+            ),
+        )
     conn.commit()
 
 
-def delete_results(conn: sqlite3.Connection,
-                   msgids: List[str]) -> None:
+def delete_results(conn: sqlite3.Connection, msgids: List[str]) -> None:
     """Delete all cached check results for the given message-ids."""
     if not msgids:
         return
     placeholders = ','.join('?' * len(msgids))
-    conn.execute(
-        f'DELETE FROM check_results WHERE msgid IN ({placeholders})',
-        msgids)
+    conn.execute(f'DELETE FROM check_results WHERE msgid IN ({placeholders})', msgids)
     conn.commit()
 
 
@@ -146,6 +155,7 @@ def delete_results(conn: sqlite3.Connection,
 # Config helpers
 # ---------------------------------------------------------------------------
 
+
 def load_check_cmds() -> Tuple[List[str], List[str]]:
     """Read check commands from git config.
 
@@ -168,7 +178,11 @@ def load_check_cmds() -> Tuple[List[str], List[str]]:
             if os.access(checkpatch, os.X_OK):
                 perpatch = ['_builtin_checkpatch']
     # Auto-wire patchwork CI when project is configured
-    if '_builtin_patchwork' not in perpatch and config.get('pw-project') and config.get('pw-url'):
+    if (
+        '_builtin_patchwork' not in perpatch
+        and config.get('pw-project')
+        and config.get('pw-url')
+    ):
         perpatch.append('_builtin_patchwork')
     series = _as_list(config.get('review-series-check-cmd'))
     # Auto-wire sashiko AI review when URL is configured
@@ -191,16 +205,18 @@ def parse_cmd(cmdstr: str) -> List[str]:
 # Built-in handlers
 # ---------------------------------------------------------------------------
 
-def _run_builtin_checkpatch(msg: EmailMessage,
-                            topdir: str) -> List[Dict[str, str]]:
+
+def _run_builtin_checkpatch(msg: EmailMessage, topdir: str) -> List[Dict[str, str]]:
     """Run scripts/checkpatch.pl on a single patch message."""
     checkpatch = os.path.join(topdir, 'scripts', 'checkpatch.pl')
     if not os.access(checkpatch, os.X_OK):
-        return [{
-            'tool': 'checkpatch',
-            'status': 'fail',
-            'summary': 'checkpatch.pl not found or not executable',
-        }]
+        return [
+            {
+                'tool': 'checkpatch',
+                'status': 'fail',
+                'summary': 'checkpatch.pl not found or not executable',
+            }
+        ]
 
     cmdargs = [checkpatch, '-q', '--terse', '--no-summary', '--mailback']
     bdata = b4.LoreMessage.get_msg_as_bytes(msg)
@@ -211,7 +227,7 @@ def _run_builtin_checkpatch(msg: EmailMessage,
 
     findings: List[Dict[str, str]] = []
     worst = 'pass'
-    for raw in (out_str.splitlines() + err_str.splitlines()):
+    for raw in out_str.splitlines() + err_str.splitlines():
         line = raw[2:] if raw.startswith('-:') else raw
         if not line:
             continue
@@ -231,16 +247,20 @@ def _run_builtin_checkpatch(msg: EmailMessage,
 
     if not findings:
         if ecode:
-            return [{
+            return [
+                {
+                    'tool': 'checkpatch',
+                    'status': 'fail',
+                    'summary': f'exited with error code {ecode}',
+                }
+            ]
+        return [
+            {
                 'tool': 'checkpatch',
-                'status': 'fail',
-                'summary': f'exited with error code {ecode}',
-            }]
-        return [{
-            'tool': 'checkpatch',
-            'status': 'pass',
-            'summary': 'passed all checks',
-        }]
+                'status': 'pass',
+                'summary': 'passed all checks',
+            }
+        ]
 
     errors = sum(1 for f in findings if f['status'] == 'fail')
     warnings = sum(1 for f in findings if f['status'] == 'warn')
@@ -251,16 +271,19 @@ def _run_builtin_checkpatch(msg: EmailMessage,
         parts.append(f'{warnings} warning{"s" if warnings != 1 else ""}')
     summary = ', '.join(parts) if parts else findings[0]['description']
 
-    return [{
-        'tool': 'checkpatch',
-        'status': worst,
-        'summary': summary,
-        'details': json.dumps(findings),
-    }]
+    return [
+        {
+            'tool': 'checkpatch',
+            'status': worst,
+            'summary': summary,
+            'details': json.dumps(findings),
+        }
+    ]
 
 
-def _run_builtin_patchwork(msg: EmailMessage, pwkey: str,
-                           pwurl: str) -> List[Dict[str, str]]:
+def _run_builtin_patchwork(
+    msg: EmailMessage, pwkey: str, pwurl: str
+) -> List[Dict[str, str]]:
     """Query Patchwork REST API for checks on a single patch."""
     msgid = msg.get('message-id', '').strip('<> ')
     if not msgid:
@@ -278,6 +301,7 @@ def _run_builtin_patchwork(msg: EmailMessage, pwkey: str,
 
     try:
         from b4.review import pw_fetch_checks
+
         checks = pw_fetch_checks(pwkey, pwurl, [int(patch_id)])
     except Exception as ex:
         logger.debug('Patchwork check query failed: %s', ex)
@@ -301,25 +325,29 @@ def _run_builtin_patchwork(msg: EmailMessage, pwkey: str,
         if _STATUS_ORDER.get(status, 0) > _STATUS_ORDER.get(worst, 0):
             worst = status
         counts[status] = counts.get(status, 0) + 1
-        individual.append({
-            'context': check.get('context', 'unknown'),
-            'status': status,
-            'state': state,
-            'description': check.get('description', ''),
-            'url': check.get('url', ''),
-        })
+        individual.append(
+            {
+                'context': check.get('context', 'unknown'),
+                'status': status,
+                'state': state,
+                'description': check.get('description', ''),
+                'url': check.get('url', ''),
+            }
+        )
 
     summary_parts = []
     for s in ('pass', 'warn', 'fail'):
         if counts.get(s):
             summary_parts.append(f'{counts[s]} {s}')
 
-    return [{
-        'tool': 'patchwork',
-        'status': worst,
-        'summary': ', '.join(summary_parts),
-        'details': json.dumps(individual),
-    }]
+    return [
+        {
+            'tool': 'patchwork',
+            'status': worst,
+            'summary': ', '.join(summary_parts),
+            'details': json.dumps(individual),
+        }
+    ]
 
 
 def _fetch_sashiko_patchset(msgid: str, sashiko_url: str) -> Optional[Dict[str, Any]]:
@@ -385,12 +413,14 @@ def _parse_sashiko_findings(review: Dict[str, Any]) -> List[Dict[str, str]]:
         desc = str(problem)
         if suggestion:
             desc += f' \u2014 {suggestion}'
-        findings.append({
-            'status': status,
-            'context': f'sashiko/{severity}',
-            'state': severity,
-            'description': desc,
-        })
+        findings.append(
+            {
+                'status': status,
+                'context': f'sashiko/{severity}',
+                'state': severity,
+                'description': desc,
+            }
+        )
     return findings
 
 
@@ -415,8 +445,7 @@ def _sashiko_findings_summary(findings: List[Dict[str, str]]) -> Tuple[str, str]
     return worst, ', '.join(parts)
 
 
-def _run_builtin_sashiko(msg: EmailMessage,
-                         sashiko_url: str) -> List[Dict[str, str]]:
+def _run_builtin_sashiko(msg: EmailMessage, sashiko_url: str) -> List[Dict[str, str]]:
     """Query sashiko AI review service for findings on a patch."""
     msgid = msg.get('message-id', '').strip('<> ')
     if not msgid:
@@ -442,20 +471,37 @@ def _run_builtin_sashiko(msg: EmailMessage,
             patch_id_by_msgid[p_msgid] = int(p_id)
 
     cover_msgid = data.get('message_id', '')
-    is_cover = (msgid == cover_msgid)
+    is_cover = msgid == cover_msgid
 
     # Overall patchset status check (applies to cover letter row or
     # when the series is not yet reviewed).
     if ps_status in ('Pending', 'In Review', 'Applying'):
-        return [{'tool': 'sashiko', 'status': 'warn',
-                 'summary': f'Review {ps_status.lower()}',
-                 'url': patchset_url}]
+        return [
+            {
+                'tool': 'sashiko',
+                'status': 'warn',
+                'summary': f'Review {ps_status.lower()}',
+                'url': patchset_url,
+            }
+        ]
     if ps_status in ('Failed', 'Failed To Apply'):
-        return [{'tool': 'sashiko', 'status': 'fail',
-                 'summary': ps_status, 'url': patchset_url}]
+        return [
+            {
+                'tool': 'sashiko',
+                'status': 'fail',
+                'summary': ps_status,
+                'url': patchset_url,
+            }
+        ]
     if ps_status == 'Incomplete':
-        return [{'tool': 'sashiko', 'status': 'warn',
-                 'summary': 'Series incomplete', 'url': patchset_url}]
+        return [
+            {
+                'tool': 'sashiko',
+                'status': 'warn',
+                'summary': 'Series incomplete',
+                'url': patchset_url,
+            }
+        ]
 
     if is_cover:
         # Aggregate findings across all reviews for the cover letter
@@ -464,8 +510,10 @@ def _run_builtin_sashiko(msg: EmailMessage,
             all_findings.extend(_parse_sashiko_findings(review))
         worst, summary = _sashiko_findings_summary(all_findings)
         result: Dict[str, str] = {
-            'tool': 'sashiko', 'status': worst,
-            'summary': summary, 'url': patchset_url,
+            'tool': 'sashiko',
+            'status': worst,
+            'summary': summary,
+            'url': patchset_url,
         }
         if all_findings:
             result['details'] = json.dumps(all_findings)
@@ -481,40 +529,68 @@ def _run_builtin_sashiko(msg: EmailMessage,
             review_status = review.get('status', '')
             if review_status == 'Skipped':
                 result_msg = review.get('result', '') or 'Skipped'
-                return [{'tool': 'sashiko', 'status': 'pass',
-                         'summary': result_msg, 'url': patchset_url}]
+                return [
+                    {
+                        'tool': 'sashiko',
+                        'status': 'pass',
+                        'summary': result_msg,
+                        'url': patchset_url,
+                    }
+                ]
             if review_status in ('Pending', 'In Review'):
-                return [{'tool': 'sashiko', 'status': 'warn',
-                         'summary': 'Review in progress',
-                         'url': patchset_url}]
+                return [
+                    {
+                        'tool': 'sashiko',
+                        'status': 'warn',
+                        'summary': 'Review in progress',
+                        'url': patchset_url,
+                    }
+                ]
             if review_status == 'Failed':
                 result_msg = review.get('result', '') or 'Review failed'
-                return [{'tool': 'sashiko', 'status': 'fail',
-                         'summary': result_msg, 'url': patchset_url}]
+                return [
+                    {
+                        'tool': 'sashiko',
+                        'status': 'fail',
+                        'summary': result_msg,
+                        'url': patchset_url,
+                    }
+                ]
             # Reviewed — parse findings
             findings = _parse_sashiko_findings(review)
             worst, summary = _sashiko_findings_summary(findings)
             result = {
-                'tool': 'sashiko', 'status': worst,
-                'summary': summary, 'url': patchset_url,
+                'tool': 'sashiko',
+                'status': worst,
+                'summary': summary,
+                'url': patchset_url,
             }
             if findings:
                 result['details'] = json.dumps(findings)
             return [result]
 
     # No review found for this patch
-    return [{'tool': 'sashiko', 'status': 'pass',
-             'summary': 'No review', 'url': patchset_url}]
+    return [
+        {
+            'tool': 'sashiko',
+            'status': 'pass',
+            'summary': 'No review',
+            'url': patchset_url,
+        }
+    ]
 
 
 # ---------------------------------------------------------------------------
 # External command runner
 # ---------------------------------------------------------------------------
 
-def _run_external_cmd(cmdargs: List[str], msg: EmailMessage,
-                      topdir: str,
-                      extra_env: Optional[Dict[str, str]] = None,
-                      ) -> List[Dict[str, str]]:
+
+def _run_external_cmd(
+    cmdargs: List[str],
+    msg: EmailMessage,
+    topdir: str,
+    extra_env: Optional[Dict[str, str]] = None,
+) -> List[Dict[str, str]]:
     """Run an external check command and parse its JSON output."""
     bdata = b4.LoreMessage.get_msg_as_bytes(msg)
     saved_env: Dict[str, Optional[str]] = {}
@@ -535,24 +611,28 @@ def _run_external_cmd(cmdargs: List[str], msg: EmailMessage,
         if ecode:
             mycmd = os.path.basename(cmdargs[0])
             err_msg = err.strip().decode(errors='replace') if err else ''
-            return [{
-                'tool': mycmd,
-                'status': 'fail',
-                'summary': f'exited with error code {ecode}',
-                'details': err_msg,
-            }]
+            return [
+                {
+                    'tool': mycmd,
+                    'status': 'fail',
+                    'summary': f'exited with error code {ecode}',
+                    'details': err_msg,
+                }
+            ]
         return []
 
     try:
         data = json.loads(out)
     except json.JSONDecodeError as ex:
         mycmd = os.path.basename(cmdargs[0])
-        return [{
-            'tool': mycmd,
-            'status': 'fail',
-            'summary': f'invalid JSON output: {ex}',
-            'details': out.decode(errors='replace'),
-        }]
+        return [
+            {
+                'tool': mycmd,
+                'status': 'fail',
+                'summary': f'invalid JSON output: {ex}',
+                'details': out.decode(errors='replace'),
+            }
+        ]
 
     if not isinstance(data, list):
         data = [data]
@@ -565,13 +645,15 @@ def _run_external_cmd(cmdargs: List[str], msg: EmailMessage,
         status = entry.get('status', 'fail')
         if status not in ('pass', 'warn', 'fail'):
             status = 'fail'
-        results.append({
-            'tool': tool,
-            'status': status,
-            'summary': entry.get('summary', ''),
-            'url': entry.get('url', ''),
-            'details': entry.get('details', ''),
-        })
+        results.append(
+            {
+                'tool': tool,
+                'status': status,
+                'summary': entry.get('summary', ''),
+                'url': entry.get('url', ''),
+                'details': entry.get('details', ''),
+            }
+        )
     return results
 
 
@@ -579,10 +661,15 @@ def _run_external_cmd(cmdargs: List[str], msg: EmailMessage,
 # High-level runners
 # ---------------------------------------------------------------------------
 
-def _dispatch_cmd(cmdstr: str, msg: EmailMessage, topdir: str,
-                  pwkey: str = '', pwurl: str = '',
-                  extra_env: Optional[Dict[str, str]] = None,
-                  ) -> List[Dict[str, str]]:
+
+def _dispatch_cmd(
+    cmdstr: str,
+    msg: EmailMessage,
+    topdir: str,
+    pwkey: str = '',
+    pwurl: str = '',
+    extra_env: Optional[Dict[str, str]] = None,
+) -> List[Dict[str, str]]:
     """Run a single check command (built-in or external) against a message."""
     if cmdstr == '_builtin_checkpatch':
         return _run_builtin_checkpatch(msg, topdir)
@@ -604,12 +691,12 @@ def _dispatch_cmd(cmdstr: str, msg: EmailMessage, topdir: str,
 
 
 def run_perpatch_checks(
-        patches: List[Tuple[str, EmailMessage]],
-        cmds: List[str],
-        topdir: str,
-        pwkey: str = '',
-        pwurl: str = '',
-        extra_env: Optional[Dict[str, str]] = None,
+    patches: List[Tuple[str, EmailMessage]],
+    cmds: List[str],
+    topdir: str,
+    pwkey: str = '',
+    pwurl: str = '',
+    extra_env: Optional[Dict[str, str]] = None,
 ) -> Dict[str, List[Dict[str, str]]]:
     """Run per-patch check commands on each patch.
 
@@ -622,27 +709,31 @@ def run_perpatch_checks(
         patch_results: List[Dict[str, str]] = []
         for cmdstr in cmds:
             try:
-                patch_results.extend(_dispatch_cmd(cmdstr, msg, topdir,
-                                                   pwkey, pwurl,
-                                                   extra_env=extra_env))
+                patch_results.extend(
+                    _dispatch_cmd(
+                        cmdstr, msg, topdir, pwkey, pwurl, extra_env=extra_env
+                    )
+                )
             except Exception as ex:
                 logger.debug('Check command %s failed: %s', cmdstr, ex)
-                patch_results.append({
-                    'tool': cmdstr.split()[0] if cmdstr else 'unknown',
-                    'status': 'fail',
-                    'summary': str(ex),
-                })
+                patch_results.append(
+                    {
+                        'tool': cmdstr.split()[0] if cmdstr else 'unknown',
+                        'status': 'fail',
+                        'summary': str(ex),
+                    }
+                )
         results[msgid] = patch_results
     return results
 
 
 def run_series_checks(
-        cover_msg: Tuple[str, EmailMessage],
-        cmds: List[str],
-        topdir: str,
-        pwkey: str = '',
-        pwurl: str = '',
-        extra_env: Optional[Dict[str, str]] = None,
+    cover_msg: Tuple[str, EmailMessage],
+    cmds: List[str],
+    topdir: str,
+    pwkey: str = '',
+    pwurl: str = '',
+    extra_env: Optional[Dict[str, str]] = None,
 ) -> List[Dict[str, str]]:
     """Run per-series check commands on the cover letter.
 
@@ -654,14 +745,16 @@ def run_series_checks(
     results: List[Dict[str, str]] = []
     for cmdstr in cmds:
         try:
-            results.extend(_dispatch_cmd(cmdstr, msg, topdir,
-                                         pwkey, pwurl,
-                                         extra_env=extra_env))
+            results.extend(
+                _dispatch_cmd(cmdstr, msg, topdir, pwkey, pwurl, extra_env=extra_env)
+            )
         except Exception as ex:
             logger.debug('Series check command %s failed: %s', cmdstr, ex)
-            results.append({
-                'tool': cmdstr.split()[0] if cmdstr else 'unknown',
-                'status': 'fail',
-                'summary': str(ex),
-            })
+            results.append(
+                {
+                    'tool': cmdstr.split()[0] if cmdstr else 'unknown',
+                    'status': 'fail',
+                    'summary': str(ex),
+                }
+            )
     return results
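A note on the `IN ({placeholders})` queries reformatted above: sqlite3 cannot bind a Python list to a single `?`, so `get_cached_results()` and `delete_results()` generate one placeholder per value and pass the list as the parameter sequence. A minimal standalone sketch of that pattern (table and msgid values here are made up for illustration, not taken from b4's schema):

```python
import sqlite3

# One '?' per value, joined with commas, then the list itself is the
# parameter sequence -- this is the expansion used by the cache helpers.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE check_results (msgid TEXT, tool TEXT)')
conn.executemany(
    'INSERT INTO check_results VALUES (?, ?)',
    [('a@x', 'checkpatch'), ('b@x', 'patchwork'), ('c@x', 'sashiko')],
)

msgids = ['a@x', 'c@x']
placeholders = ','.join('?' * len(msgids))  # -> '?,?'
rows = conn.execute(
    f'SELECT msgid, tool FROM check_results WHERE msgid IN ({placeholders})',
    msgids,
).fetchall()
print(sorted(rows))  # [('a@x', 'checkpatch'), ('c@x', 'sashiko')]
```

Note the guard in `delete_results()`: with an empty list the expansion would produce `IN ()`, which SQLite rejects, hence the early `return`.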
diff --git a/src/b4/review/messages.py b/src/b4/review/messages.py
index 3a0098c..5d6456e 100644
--- a/src/b4/review/messages.py
+++ b/src/b4/review/messages.py
@@ -12,7 +12,7 @@ from typing import Dict, List, Optional
 
 import b4
 
-SCHEMA_SQL = '''
+SCHEMA_SQL = """
 CREATE TABLE IF NOT EXISTS schema_version (
     version INTEGER PRIMARY KEY
 );
@@ -22,7 +22,7 @@ CREATE TABLE IF NOT EXISTS messages (
     msg_date TEXT,
     flags    TEXT DEFAULT ''
 );
-'''
+"""
 
 SCHEMA_VERSION = 1
 
@@ -45,7 +45,8 @@ def get_db() -> sqlite3.Connection:
         conn.executescript(SCHEMA_SQL)
         conn.execute(
             'INSERT OR REPLACE INTO schema_version (version) VALUES (?)',
-            (SCHEMA_VERSION,))
+            (SCHEMA_VERSION,),
+        )
         conn.commit()
     return conn
 
@@ -53,45 +54,49 @@ def get_db() -> sqlite3.Connection:
 def get_flags(conn: sqlite3.Connection, msgid: str) -> str:
     """Return the flags string for a message, or '' if not stored."""
     row = conn.execute(
-        'SELECT flags FROM messages WHERE msgid = ?', (msgid,)).fetchone()
+        'SELECT flags FROM messages WHERE msgid = ?', (msgid,)
+    ).fetchone()
     return row[0] if row else ''
 
 
-def get_flags_bulk(conn: sqlite3.Connection,
-                   msgids: List[str]) -> Dict[str, str]:
+def get_flags_bulk(conn: sqlite3.Connection, msgids: List[str]) -> Dict[str, str]:
     """Return {msgid: flags} for all known messages in *msgids*."""
     if not msgids:
         return {}
     placeholders = ','.join('?' * len(msgids))
     cursor = conn.execute(
-        f'SELECT msgid, flags FROM messages WHERE msgid IN ({placeholders})',
-        msgids)
+        f'SELECT msgid, flags FROM messages WHERE msgid IN ({placeholders})', msgids
+    )
     return {row[0]: row[1] for row in cursor.fetchall()}
 
 
-def set_flag(conn: sqlite3.Connection, msgid: str, flag: str,
-             msg_date: Optional[str] = None) -> None:
+def set_flag(
+    conn: sqlite3.Connection, msgid: str, flag: str, msg_date: Optional[str] = None
+) -> None:
     """Add *flag* to a message, creating the row if needed."""
     conn.execute(
         'INSERT INTO messages (msgid, msg_date, flags)'
         ' VALUES (?, ?, ?)'
         ' ON CONFLICT(msgid) DO NOTHING',
-        (msgid, msg_date, flag))
+        (msgid, msg_date, flag),
+    )
     row = conn.execute(
-        'SELECT flags FROM messages WHERE msgid = ?', (msgid,)).fetchone()
+        'SELECT flags FROM messages WHERE msgid = ?', (msgid,)
+    ).fetchone()
     if row:
         existing = set(row[0].split())
         if flag not in existing:
             existing.add(flag)
             conn.execute(
                 'UPDATE messages SET flags = ? WHERE msgid = ?',
-                (' '.join(sorted(existing)), msgid))
+                (' '.join(sorted(existing)), msgid),
+            )
     conn.commit()
 
 
-def set_flags_bulk(conn: sqlite3.Connection,
-                   entries: List[Dict[str, Optional[str]]],
-                   flag: str) -> None:
+def set_flags_bulk(
+    conn: sqlite3.Connection, entries: List[Dict[str, Optional[str]]], flag: str
+) -> None:
     """Add *flag* to multiple messages in one transaction.
 
     Each entry in *entries* is ``{'msgid': ..., 'msg_date': ...}``.
@@ -105,24 +110,27 @@ def set_flags_bulk(conn: sqlite3.Connection,
             'INSERT INTO messages (msgid, msg_date, flags)'
             ' VALUES (?, ?, ?)'
             ' ON CONFLICT(msgid) DO NOTHING',
-            (msgid, msg_date, flag))
+            (msgid, msg_date, flag),
+        )
         row = conn.execute(
-            'SELECT flags FROM messages WHERE msgid = ?',
-            (msgid,)).fetchone()
+            'SELECT flags FROM messages WHERE msgid = ?', (msgid,)
+        ).fetchone()
         if row:
             existing = set(row[0].split())
             if flag not in existing:
                 existing.add(flag)
                 conn.execute(
                     'UPDATE messages SET flags = ? WHERE msgid = ?',
-                    (' '.join(sorted(existing)), msgid))
+                    (' '.join(sorted(existing)), msgid),
+                )
     conn.commit()
 
 
 def remove_flag(conn: sqlite3.Connection, msgid: str, flag: str) -> None:
     """Remove *flag* from a message. Deletes the row if no flags remain."""
     row = conn.execute(
-        'SELECT flags FROM messages WHERE msgid = ?', (msgid,)).fetchone()
+        'SELECT flags FROM messages WHERE msgid = ?', (msgid,)
+    ).fetchone()
     if not row:
         return
     existing = set(row[0].split())
@@ -130,7 +138,8 @@ def remove_flag(conn: sqlite3.Connection, msgid: str, flag: str) -> None:
     if existing:
         conn.execute(
             'UPDATE messages SET flags = ? WHERE msgid = ?',
-            (' '.join(sorted(existing)), msgid))
+            (' '.join(sorted(existing)), msgid),
+        )
     else:
         conn.execute('DELETE FROM messages WHERE msgid = ?', (msgid,))
     conn.commit()
@@ -139,10 +148,12 @@ def remove_flag(conn: sqlite3.Connection, msgid: str, flag: str) -> None:
 def cleanup_old(conn: sqlite3.Connection, max_days: int = 180) -> int:
     """Delete messages older than *max_days*. Returns count deleted."""
     import datetime
-    cutoff = (datetime.datetime.now(datetime.timezone.utc)
-              - datetime.timedelta(days=max_days)).isoformat()
+
+    cutoff = (
+        datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=max_days)
+    ).isoformat()
     cursor = conn.execute(
-        'DELETE FROM messages WHERE msg_date IS NOT NULL AND msg_date < ?',
-        (cutoff,))
+        'DELETE FROM messages WHERE msg_date IS NOT NULL AND msg_date < ?', (cutoff,)
+    )
     conn.commit()
     return cursor.rowcount
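The `set_flag()` hunks above reformat an upsert-then-merge pattern: flags are stored as a single space-separated string, created with `ON CONFLICT(msgid) DO NOTHING` and then merged as a sorted set so repeated writes stay idempotent. A hedged, self-contained sketch of that pattern (the msgid and flag values are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE messages (msgid TEXT PRIMARY KEY, flags TEXT DEFAULT '')")


def set_flag(conn: sqlite3.Connection, msgid: str, flag: str) -> None:
    # Create the row if it does not exist; leave an existing row alone.
    conn.execute(
        'INSERT INTO messages (msgid, flags) VALUES (?, ?)'
        ' ON CONFLICT(msgid) DO NOTHING',
        (msgid, flag),
    )
    # Merge the new flag into whatever is already stored, sorted for
    # a stable on-disk representation.
    (flags,) = conn.execute(
        'SELECT flags FROM messages WHERE msgid = ?', (msgid,)
    ).fetchone()
    merged = set(flags.split()) | {flag}
    conn.execute(
        'UPDATE messages SET flags = ? WHERE msgid = ?',
        (' '.join(sorted(merged)), msgid),
    )
    conn.commit()


set_flag(conn, 'a@x', 'seen')
set_flag(conn, 'a@x', 'replied')
set_flag(conn, 'a@x', 'seen')  # idempotent: no duplicate flag
print(conn.execute("SELECT flags FROM messages WHERE msgid = 'a@x'").fetchone()[0])
# replied seen
```

`remove_flag()` is the inverse: it drops the flag from the set and deletes the row entirely once no flags remain.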
diff --git a/src/b4/review/tracking.py b/src/b4/review/tracking.py
index eb80eea..ee0d218 100644
--- a/src/b4/review/tracking.py
+++ b/src/b4/review/tracking.py
@@ -26,7 +26,7 @@ REVIEW_METADATA_FILE = 'metadata.json'
 
 SCHEMA_VERSION = 8
 
-SERIES_PATCHES_DDL = '''
+SERIES_PATCHES_DDL = """
 CREATE TABLE IF NOT EXISTS series_patches (
     change_id  TEXT NOT NULL,
     revision   INTEGER NOT NULL,
@@ -34,9 +34,10 @@ CREATE TABLE IF NOT EXISTS series_patches (
     message_id TEXT NOT NULL,
     subject    TEXT,
     PRIMARY KEY (change_id, revision, position)
-)'''
+)"""
 
-SCHEMA_SQL = '''
+SCHEMA_SQL = (
+    """
 CREATE TABLE IF NOT EXISTS schema_version (
     version INTEGER PRIMARY KEY
 );
@@ -78,7 +79,10 @@ CREATE TABLE IF NOT EXISTS revisions (
     PRIMARY KEY (change_id, revision)
 );
 
-''' + SERIES_PATCHES_DDL + ';'
+"""
+    + SERIES_PATCHES_DDL
+    + ';'
+)
 
 
 def get_review_data_dir() -> str:
@@ -105,7 +109,9 @@ def init_db(identifier: str) -> sqlite3.Connection:
     db_path = get_db_path(identifier)
     conn = sqlite3.connect(db_path)
     conn.executescript(SCHEMA_SQL)
-    conn.execute('INSERT OR REPLACE INTO schema_version (version) VALUES (?)', (SCHEMA_VERSION,))
+    conn.execute(
+        'INSERT OR REPLACE INTO schema_version (version) VALUES (?)', (SCHEMA_VERSION,)
+    )
     conn.commit()
     return conn
 
@@ -127,7 +133,9 @@ def _migrate_db_if_needed(conn: sqlite3.Connection) -> None:
         conn.execute('ALTER TABLE series ADD COLUMN snoozed_until TEXT')
     if version < 4:
         conn.execute('ALTER TABLE series RENAME COLUMN followup_count TO message_count')
-        conn.execute('ALTER TABLE series RENAME COLUMN seen_followup_count TO seen_message_count')
+        conn.execute(
+            'ALTER TABLE series RENAME COLUMN seen_followup_count TO seen_message_count'
+        )
     if version < 5:
         conn.execute("ALTER TABLE series ADD COLUMN attestation TEXT DEFAULT 'pending'")
     if version < 6:
@@ -137,7 +145,7 @@ def _migrate_db_if_needed(conn: sqlite3.Connection) -> None:
         conn.execute('ALTER TABLE series ADD COLUMN is_rethreaded INTEGER DEFAULT 0')
     if version < 8:
         # Older DBs may not have the revisions table at all; create it if absent.
-        conn.execute('''CREATE TABLE IF NOT EXISTS revisions (
+        conn.execute("""CREATE TABLE IF NOT EXISTS revisions (
             change_id  TEXT NOT NULL,
             revision   INTEGER NOT NULL,
             message_id TEXT NOT NULL,
@@ -145,7 +153,7 @@ def _migrate_db_if_needed(conn: sqlite3.Connection) -> None:
             link       TEXT,
             found_at   TEXT,
             PRIMARY KEY (change_id, revision)
-        )''')
+        )""")
         # Add thread_blob only if the table didn't already have it.
         existing = {row[1] for row in conn.execute('PRAGMA table_info(revisions)')}
         if 'thread_blob' not in existing:
@@ -242,7 +250,9 @@ def record_take_branch(gitdir: str, branch: str) -> None:
         f.write('\n')
 
 
-def resolve_identifier(cmdargs: argparse.Namespace, topdir: Optional[str] = None) -> Optional[str]:
+def resolve_identifier(
+    cmdargs: argparse.Namespace, topdir: Optional[str] = None
+) -> Optional[str]:
     """Resolve project identifier from command args or repository metadata."""
     if hasattr(cmdargs, 'identifier') and cmdargs.identifier:
         return str(cmdargs.identifier)
@@ -264,7 +274,9 @@ def cmd_enroll(cmdargs: argparse.Namespace) -> None:
         # Use current directory
         repo_path_opt = b4.git_get_toplevel()
         if not repo_path_opt:
-            logger.critical('Not in a git repository. Specify a path or run from within a repository.')
+            logger.critical(
+                'Not in a git repository. Specify a path or run from within a repository.'
+            )
             sys.exit(1)
         repo_path = repo_path_opt
 
@@ -319,18 +331,26 @@ def cmd_enroll(cmdargs: argparse.Namespace) -> None:
     logger.info('Project enrolled successfully with identifier: %s', identifier)
 
 
-def add_series_to_db(conn: sqlite3.Connection, change_id: str, revision: int,
-                     subject: Optional[str], sender_name: Optional[str],
-                     sender_email: Optional[str], sent_at: Optional[str],
-                     message_id: str, num_patches: int,
-                     pw_series_id: Optional[int] = None,
-                     fingerprint: Optional[str] = None,
-                     added_at: Optional[str] = None,
-                     is_rethreaded: bool = False) -> int:
+def add_series_to_db(
+    conn: sqlite3.Connection,
+    change_id: str,
+    revision: int,
+    subject: Optional[str],
+    sender_name: Optional[str],
+    sender_email: Optional[str],
+    sent_at: Optional[str],
+    message_id: str,
+    num_patches: int,
+    pw_series_id: Optional[int] = None,
+    fingerprint: Optional[str] = None,
+    added_at: Optional[str] = None,
+    is_rethreaded: bool = False,
+) -> int:
     """Add a series to the tracking database. Returns the track_id."""
     if added_at is None:
         added_at = datetime.datetime.now(datetime.timezone.utc).isoformat()
-    cursor = conn.execute('''
+    cursor = conn.execute(
+        """
         INSERT INTO series
         (change_id, revision, subject, sender_name, sender_email, sent_at, added_at,
          message_id, num_patches, pw_series_id, fingerprint, is_rethreaded)
@@ -347,8 +367,22 @@ def add_series_to_db(conn: sqlite3.Connection, change_id: str, revision: int,
             fingerprint = excluded.fingerprint,
             is_rethreaded = excluded.is_rethreaded
         RETURNING track_id
-    ''', (change_id, revision, subject, sender_name, sender_email, sent_at, added_at,
-          message_id, num_patches, pw_series_id, fingerprint, int(is_rethreaded)))
+    """,
+        (
+            change_id,
+            revision,
+            subject,
+            sender_name,
+            sender_email,
+            sent_at,
+            added_at,
+            message_id,
+            num_patches,
+            pw_series_id,
+            fingerprint,
+            int(is_rethreaded),
+        ),
+    )
     track_id = cursor.fetchone()[0]
     conn.commit()
     return int(track_id)
@@ -381,7 +415,9 @@ def cmd_track(cmdargs: argparse.Namespace) -> None:
     rethread = getattr(cmdargs, 'rethread', None)
     if rethread:
         if cmdargs.series_id:
-            logger.critical('--rethread cannot be used with a positional series_id argument')
+            logger.critical(
+                '--rethread cannot be used with a positional series_id argument'
+            )
             sys.exit(1)
         series_id = None
     else:
@@ -390,7 +426,9 @@ def cmd_track(cmdargs: argparse.Namespace) -> None:
             series_id = b4.get_msgid_from_stdin()
             if not series_id:
                 logger.critical('No series identifier provided')
-                logger.critical('Pipe a message or pass msgid/URL/change-id as parameter')
+                logger.critical(
+                    'Pipe a message or pass msgid/URL/change-id as parameter'
+                )
                 sys.exit(1)
 
     # Set up cmdargs for retrieve_messages
@@ -430,16 +468,15 @@ def cmd_track(cmdargs: argparse.Namespace) -> None:
     if b4.can_network:
         msgs = b4.mbox.get_extra_series(msgs, direction=1, nocache=True)
         if wanted_ver > 1:
-            msgs = b4.mbox.get_extra_series(msgs, direction=-1,
-                                             wantvers=list(range(1, wanted_ver)),
-                                             nocache=True)
+            msgs = b4.mbox.get_extra_series(
+                msgs, direction=-1, wantvers=list(range(1, wanted_ver)), nocache=True
+            )
         # Rebuild the mailbox with all discovered messages
         lmbx = b4.LoreMailbox()
         for msg in msgs:
             lmbx.add_message(msg)
 
-    lser = lmbx.get_series(wanted_ver, sloppytrailers=False,
-                           codereview_trailers=False)
+    lser = lmbx.get_series(wanted_ver, sloppytrailers=False, codereview_trailers=False)
     if not lser:
         logger.critical('Could not find series version %d', wanted_ver)
         sys.exit(1)
@@ -492,8 +529,9 @@ def cmd_track(cmdargs: argparse.Namespace) -> None:
     ).fetchone()
     if existing is not None:
         conn.close()
-        logger.critical('This series is already tracked (status: %s, v%d)',
-                        existing[0], existing[1])
+        logger.critical(
+            'This series is already tracked (status: %s, v%d)', existing[0], existing[1]
+        )
         logger.critical('Change-ID: %s', existing[2])
         sys.exit(1)
     conn.close()
@@ -506,9 +544,19 @@ def cmd_track(cmdargs: argparse.Namespace) -> None:
     # Add to database
     subject = lser.subject
     conn = get_db(identifier)
-    add_series_to_db(conn, change_id, revision, subject, sender_name, sender_email,
-                     sent_at, message_id, num_patches, fingerprint=fingerprint,
-                     is_rethreaded=bool(rethread))
+    add_series_to_db(
+        conn,
+        change_id,
+        revision,
+        subject,
+        sender_name,
+        sender_email,
+        sent_at,
+        message_id,
+        num_patches,
+        fingerprint=fingerprint,
+        is_rethreaded=bool(rethread),
+    )
     add_series_patches(conn, change_id, revision, lser)
 
     # Record all discovered revisions
@@ -523,7 +571,9 @@ def cmd_track(cmdargs: argparse.Namespace) -> None:
                 for p in v_ser.patches:
                     if p is not None:
                         v_msgid = str(getattr(p, 'msgid', ''))
-                        v_subject = str(getattr(p, 'full_subject', '') or getattr(p, 'subject', ''))
+                        v_subject = str(
+                            getattr(p, 'full_subject', '') or getattr(p, 'subject', '')
+                        )
                         break
         except Exception:
             pass
@@ -549,7 +599,9 @@ def get_tracked_pw_series_ids(identifier: str) -> set[int]:
         return set()
     try:
         conn = get_db(identifier)
-        cursor = conn.execute('SELECT pw_series_id FROM series WHERE pw_series_id IS NOT NULL')
+        cursor = conn.execute(
+            'SELECT pw_series_id FROM series WHERE pw_series_id IS NOT NULL'
+        )
         result = {row[0] for row in cursor.fetchall()}
         conn.close()
         return result
@@ -564,8 +616,7 @@ def is_pw_series_tracked(identifier: str, pw_series_id: int) -> bool:
     try:
         conn = get_db(identifier)
         cursor = conn.execute(
-            'SELECT 1 FROM series WHERE pw_series_id = ? LIMIT 1',
-            (pw_series_id,)
+            'SELECT 1 FROM series WHERE pw_series_id = ? LIMIT 1', (pw_series_id,)
         )
         result = cursor.fetchone() is not None
         conn.close()
@@ -585,57 +636,67 @@ def get_all_tracked_series(identifier: str) -> list[dict[str, Any]]:
         return []
     try:
         conn = get_db(identifier)
-        cursor = conn.execute('''
+        cursor = conn.execute("""
             SELECT track_id, change_id, revision, subject, sender_name, sender_email,
                    sent_at, added_at, status, num_patches, message_id, pw_series_id,
                    message_count, seen_message_count, last_activity_at, attestation,
                    target_branch, is_rethreaded, snoozed_until
             FROM series
             ORDER BY added_at DESC
-        ''')
+        """)
         result = []
         for row in cursor.fetchall():
-            result.append({
-                'track_id': row[0],
-                'change_id': row[1],
-                'revision': row[2],
-                'subject': row[3] or '(no subject)',
-                'sender_name': row[4] or 'Unknown',
-                'sender_email': row[5] or '',
-                'sent_at': row[6] or '',
-                'added_at': row[7] or '',
-                'status': row[8] or 'new',
-                'num_patches': row[9] or 0,
-                'message_id': row[10] or '',
-                'pw_series_id': row[11],
-                'message_count': row[12],
-                'seen_message_count': row[13],
-                'last_activity_at': row[14],
-                'attestation': row[15],
-                'target_branch': row[16],
-                'is_rethreaded': bool(row[17]),
-                'snoozed_until': row[18],
-            })
+            result.append(
+                {
+                    'track_id': row[0],
+                    'change_id': row[1],
+                    'revision': row[2],
+                    'subject': row[3] or '(no subject)',
+                    'sender_name': row[4] or 'Unknown',
+                    'sender_email': row[5] or '',
+                    'sent_at': row[6] or '',
+                    'added_at': row[7] or '',
+                    'status': row[8] or 'new',
+                    'num_patches': row[9] or 0,
+                    'message_id': row[10] or '',
+                    'pw_series_id': row[11],
+                    'message_count': row[12],
+                    'seen_message_count': row[13],
+                    'last_activity_at': row[14],
+                    'attestation': row[15],
+                    'target_branch': row[16],
+                    'is_rethreaded': bool(row[17]),
+                    'snoozed_until': row[18],
+                }
+            )
         conn.close()
         return result
     except Exception:
         return []
 
 
-def add_revision(conn: sqlite3.Connection, change_id: str, revision: int,
-                 message_id: str, subject: Optional[str] = None,
-                 link: Optional[str] = None) -> None:
+def add_revision(
+    conn: sqlite3.Connection,
+    change_id: str,
+    revision: int,
+    message_id: str,
+    subject: Optional[str] = None,
+    link: Optional[str] = None,
+) -> None:
     """Insert a revision record, ignoring if already present."""
     found_at = datetime.datetime.now(datetime.timezone.utc).isoformat()
-    conn.execute('''INSERT OR IGNORE INTO revisions
+    conn.execute(
+        """INSERT OR IGNORE INTO revisions
         (change_id, revision, message_id, subject, link, found_at)
-        VALUES (?, ?, ?, ?, ?, ?)''',
-                 (change_id, revision, message_id, subject, link, found_at))
+        VALUES (?, ?, ?, ?, ?, ?)""",
+        (change_id, revision, message_id, subject, link, found_at),
+    )
     conn.commit()
 
 
-def set_revision_thread_blob(conn: sqlite3.Connection, change_id: str,
-                             revision: int, blob_sha: str) -> None:
+def set_revision_thread_blob(
+    conn: sqlite3.Connection, change_id: str, revision: int, blob_sha: str
+) -> None:
     """Record the git blob SHA of the cached mbox thread for a revision.
 
     The blob may later become unreachable (GC'd), so callers that read this
@@ -643,19 +704,23 @@ def set_revision_thread_blob(conn: sqlite3.Connection, change_id: str,
     """
     conn.execute(
         'UPDATE revisions SET thread_blob = ? WHERE change_id = ? AND revision = ?',
-        (blob_sha, change_id, revision))
+        (blob_sha, change_id, revision),
+    )
     conn.commit()
 
 
-def add_series_patches(conn: sqlite3.Connection, change_id: str, revision: int,
-                       lser: 'b4.LoreSeries') -> None:
+def add_series_patches(
+    conn: sqlite3.Connection, change_id: str, revision: int, lser: 'b4.LoreSeries'
+) -> None:
     """Store the member patches for a tracked series.
 
     Iterates lser.patches and inserts one row per non-None patch.
     Deletes any existing rows first so the call is idempotent.
     """
-    conn.execute('DELETE FROM series_patches WHERE change_id = ? AND revision = ?',
-                 (change_id, revision))
+    conn.execute(
+        'DELETE FROM series_patches WHERE change_id = ? AND revision = ?',
+        (change_id, revision),
+    )
     rows = []
     for position, lmsg in enumerate(lser.patches):
         if lmsg is None:
@@ -664,37 +729,49 @@ def add_series_patches(conn: sqlite3.Connection, change_id: str, revision: int,
     if rows:
         conn.executemany(
             'INSERT INTO series_patches (change_id, revision, position, message_id, subject)'
-            ' VALUES (?, ?, ?, ?, ?)', rows)
+            ' VALUES (?, ?, ?, ?, ?)',
+            rows,
+        )
     conn.commit()
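
The delete-then-executemany shape in add_series_patches() is what makes the call idempotent. A standalone sketch (illustrative schema, not b4's):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute(
    'CREATE TABLE series_patches (change_id TEXT, revision INTEGER,'
    ' position INTEGER, message_id TEXT)'
)

def store_patches(change_id, revision, msgids):
    # Clearing existing rows first means re-storing the same revision
    # replaces its patch list rather than accumulating duplicates.
    conn.execute(
        'DELETE FROM series_patches WHERE change_id = ? AND revision = ?',
        (change_id, revision),
    )
    rows = [(change_id, revision, pos, mid) for pos, mid in enumerate(msgids)]
    if rows:
        conn.executemany(
            'INSERT INTO series_patches (change_id, revision, position, message_id)'
            ' VALUES (?, ?, ?, ?)',
            rows,
        )
    conn.commit()

store_patches('abc', 1, ['<m1>', '<m2>'])
store_patches('abc', 1, ['<m1>', '<m2>', '<m3>'])  # replaces, not appends
count = conn.execute('SELECT COUNT(*) FROM series_patches').fetchone()[0]
```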
 
 
-def get_series_patches(conn: sqlite3.Connection, change_id: str,
-                       revision: int) -> List[Dict[str, Any]]:
+def get_series_patches(
+    conn: sqlite3.Connection, change_id: str, revision: int
+) -> List[Dict[str, Any]]:
     """Return the stored patches for a series, ordered by position."""
     cols = ('position', 'message_id', 'subject')
     cursor = conn.execute(
         'SELECT position, message_id, subject FROM series_patches'
         ' WHERE change_id = ? AND revision = ? ORDER BY position ASC',
-        (change_id, revision))
+        (change_id, revision),
+    )
     return [dict(zip(cols, row)) for row in cursor.fetchall()]
 
 
 def get_revisions(conn: sqlite3.Connection, change_id: str) -> list[dict[str, Any]]:
     """Return all known revisions for a change_id, ordered ascending."""
-    cols = ('change_id', 'revision', 'message_id', 'subject', 'link', 'found_at',
-            'thread_blob')
+    cols = (
+        'change_id',
+        'revision',
+        'message_id',
+        'subject',
+        'link',
+        'found_at',
+        'thread_blob',
+    )
     cursor = conn.execute(
         'SELECT change_id, revision, message_id, subject, link, found_at, thread_blob '
         'FROM revisions WHERE change_id = ? ORDER BY revision ASC',
-        (change_id,))
+        (change_id,),
+    )
     return [dict(zip(cols, row)) for row in cursor.fetchall()]
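
The `dict(zip(cols, row))` idiom above pairs a hand-maintained column tuple with each result row. For reference, sqlite3.Row gives the same name-based access without the parallel tuple (a sketch with a toy table, not b4's schema):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE revisions (change_id TEXT, revision INTEGER, subject TEXT)')
conn.execute("INSERT INTO revisions VALUES ('abc', 1, 'first'), ('abc', 2, 'second')")

# Pattern used in the diff: zip a column tuple with each row.
cols = ('change_id', 'revision', 'subject')
cursor = conn.execute(
    'SELECT change_id, revision, subject FROM revisions ORDER BY revision ASC'
)
rows = [dict(zip(cols, row)) for row in cursor.fetchall()]

# Equivalent via row_factory: sqlite3.Row supports name access and
# dict() conversion, at the cost of setting it connection-wide.
conn.row_factory = sqlite3.Row
row = conn.execute('SELECT * FROM revisions WHERE revision = 2').fetchone()
```

The explicit-tuple form keeps the column order visible next to the SELECT, which is likely why the code prefers it.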
 
 
 def get_newest_revision(conn: sqlite3.Connection, change_id: str) -> Optional[int]:
     """Return the highest known revision number, or None."""
     cursor = conn.execute(
-        'SELECT MAX(revision) FROM revisions WHERE change_id = ?',
-        (change_id,))
+        'SELECT MAX(revision) FROM revisions WHERE change_id = ?', (change_id,)
+    )
     row = cursor.fetchone()
     if row and row[0] is not None:
         return int(row[0])
@@ -704,24 +781,36 @@ def get_newest_revision(conn: sqlite3.Connection, change_id: str) -> Optional[in
 def get_all_newest_revisions(conn: sqlite3.Connection) -> dict[str, int]:
     """Return {change_id: max_revision} for all change_ids with revisions."""
     cursor = conn.execute(
-        'SELECT change_id, MAX(revision) FROM revisions GROUP BY change_id')
+        'SELECT change_id, MAX(revision) FROM revisions GROUP BY change_id'
+    )
     return {row[0]: int(row[1]) for row in cursor.fetchall()}
 
 
 def get_all_revision_counts(conn: sqlite3.Connection) -> dict[str, int]:
     """Return {change_id: revision_count} for all change_ids."""
     cursor = conn.execute(
-        'SELECT change_id, COUNT(*) FROM revisions GROUP BY change_id')
+        'SELECT change_id, COUNT(*) FROM revisions GROUP BY change_id'
+    )
     return {row[0]: int(row[1]) for row in cursor.fetchall()}
 
 
-def get_all_revisions_grouped(conn: sqlite3.Connection) -> dict[str, list[dict[str, Any]]]:
+def get_all_revisions_grouped(
+    conn: sqlite3.Connection,
+) -> dict[str, list[dict[str, Any]]]:
     """Return {change_id: [rev_dicts]} for all change_ids, ordered ascending."""
-    cols = ('change_id', 'revision', 'message_id', 'subject', 'link', 'found_at',
-            'thread_blob')
+    cols = (
+        'change_id',
+        'revision',
+        'message_id',
+        'subject',
+        'link',
+        'found_at',
+        'thread_blob',
+    )
     cursor = conn.execute(
         'SELECT change_id, revision, message_id, subject, link, found_at, thread_blob '
-        'FROM revisions ORDER BY change_id, revision ASC')
+        'FROM revisions ORDER BY change_id, revision ASC'
+    )
     result: dict[str, list[dict[str, Any]]] = {}
     for row in cursor.fetchall():
         entry = dict(zip(cols, row))
@@ -729,22 +818,28 @@ def get_all_revisions_grouped(conn: sqlite3.Connection) -> dict[str, list[dict[s
     return result
 
 
-def update_attestation(identifier: str, change_id: str,
-                       revision: int, attestation: Optional[str]) -> None:
+def update_attestation(
+    identifier: str, change_id: str, revision: int, attestation: Optional[str]
+) -> None:
     """Store attestation result for a tracked series."""
     try:
         conn = get_db(identifier)
         conn.execute(
             'UPDATE series SET attestation = ? WHERE change_id = ? AND revision = ?',
-            (attestation, change_id, revision))
+            (attestation, change_id, revision),
+        )
         conn.commit()
         conn.close()
     except Exception:
         pass
 
 
-def update_series_status(conn: sqlite3.Connection, change_id: str, status: str,
-                         revision: Optional[int] = None) -> None:
+def update_series_status(
+    conn: sqlite3.Connection,
+    change_id: str,
+    status: str,
+    revision: Optional[int] = None,
+) -> None:
     """Update the status of a tracked series.
 
     When *revision* is given only that specific revision is updated;
@@ -759,19 +854,24 @@ def update_series_status(conn: sqlite3.Connection, change_id: str, status: str,
         conn.execute(
             'UPDATE series SET status = ?, last_activity_at = ?'
             ' WHERE change_id = ? AND revision = ?',
-            (status, now, change_id, revision))
+            (status, now, change_id, revision),
+        )
     else:
         conn.execute(
-            'UPDATE series SET status = ?, last_activity_at = ?'
-            ' WHERE change_id = ?',
-            (status, now, change_id))
+            'UPDATE series SET status = ?, last_activity_at = ? WHERE change_id = ?',
+            (status, now, change_id),
+        )
     conn.commit()
 
 
-def update_series_revision(conn: sqlite3.Connection, change_id: str,
-                           old_revision: int, new_revision: int,
-                           new_message_id: str,
-                           new_subject: Optional[str] = None) -> None:
+def update_series_revision(
+    conn: sqlite3.Connection,
+    change_id: str,
+    old_revision: int,
+    new_revision: int,
+    new_message_id: str,
+    new_subject: Optional[str] = None,
+) -> None:
     """Switch a tracked series to a different revision number.
 
     Used when a not-yet-checked-out series should track a different
@@ -787,21 +887,25 @@ def update_series_revision(conn: sqlite3.Connection, change_id: str,
             ' message_count = NULL, seen_message_count = NULL,'
             ' last_activity_at = ?'
             ' WHERE change_id = ? AND revision = ?',
-            (new_revision, new_message_id, new_subject, now,
-             change_id, old_revision))
+            (new_revision, new_message_id, new_subject, now, change_id, old_revision),
+        )
     else:
         conn.execute(
             'UPDATE series SET revision = ?, message_id = ?,'
             ' message_count = NULL, seen_message_count = NULL,'
             ' last_activity_at = ?'
             ' WHERE change_id = ? AND revision = ?',
-            (new_revision, new_message_id, now,
-             change_id, old_revision))
+            (new_revision, new_message_id, now, change_id, old_revision),
+        )
     conn.commit()
 
 
-def snooze_series(conn: sqlite3.Connection, change_id: str,
-                  until_date: str, revision: Optional[int] = None) -> None:
+def snooze_series(
+    conn: sqlite3.Connection,
+    change_id: str,
+    until_date: str,
+    revision: Optional[int] = None,
+) -> None:
     """Set a series to snoozed status with a wake-up date.
 
     Args:
@@ -815,17 +919,23 @@ def snooze_series(conn: sqlite3.Connection, change_id: str,
         conn.execute(
             'UPDATE series SET status = ?, snoozed_until = ?, last_activity_at = ?'
             ' WHERE change_id = ? AND revision = ?',
-            ('snoozed', until_date, now, change_id, revision))
+            ('snoozed', until_date, now, change_id, revision),
+        )
     else:
         conn.execute(
             'UPDATE series SET status = ?, snoozed_until = ?, last_activity_at = ?'
             ' WHERE change_id = ?',
-            ('snoozed', until_date, now, change_id))
+            ('snoozed', until_date, now, change_id),
+        )
     conn.commit()
 
 
-def unsnooze_series(conn: sqlite3.Connection, change_id: str,
-                    previous_status: str, revision: Optional[int] = None) -> None:
+def unsnooze_series(
+    conn: sqlite3.Connection,
+    change_id: str,
+    previous_status: str,
+    revision: Optional[int] = None,
+) -> None:
     """Restore a snoozed series to its previous status.
 
     Args:
@@ -839,92 +949,107 @@ def unsnooze_series(conn: sqlite3.Connection, change_id: str,
         conn.execute(
             'UPDATE series SET status = ?, snoozed_until = NULL, last_activity_at = ?'
             ' WHERE change_id = ? AND revision = ?',
-            (previous_status, now, change_id, revision))
+            (previous_status, now, change_id, revision),
+        )
     else:
         conn.execute(
             'UPDATE series SET status = ?, snoozed_until = NULL, last_activity_at = ?'
             ' WHERE change_id = ?',
-            (previous_status, now, change_id))
+            (previous_status, now, change_id),
+        )
     conn.commit()
 
 
 def get_expired_snoozed(conn: sqlite3.Connection) -> List[Dict[str, Any]]:
     """Return all snoozed series whose wake-up time has passed."""
     cursor = conn.execute(
-        "SELECT change_id, revision, snoozed_until FROM series"
+        'SELECT change_id, revision, snoozed_until FROM series'
         " WHERE status = 'snoozed'"
         " AND snoozed_until <= strftime('%Y-%m-%dT%H:%M:%S', 'now')"
     )
     results = []
     for row in cursor:
-        results.append({
-            'change_id': row[0],
-            'revision': row[1],
-            'snoozed_until': row[2],
-        })
+        results.append(
+            {
+                'change_id': row[0],
+                'revision': row[1],
+                'snoozed_until': row[2],
+            }
+        )
     return results
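
The expiry check above compares snoozed_until against `strftime('%Y-%m-%dT%H:%M:%S', 'now')` as plain strings. That works because fixed-width, zero-padded ISO-8601 UTC timestamps sort lexicographically in chronological order; a quick demonstration:

```python
import datetime

# Every field is zero-padded and ordered most- to least-significant,
# so string comparison agrees with datetime comparison.
earlier = datetime.datetime(2026, 4, 1, 9, 30, tzinfo=datetime.timezone.utc)
later = datetime.datetime(2026, 4, 19, 15, 59, tzinfo=datetime.timezone.utc)

a = earlier.strftime('%Y-%m-%dT%H:%M:%S')
b = later.strftime('%Y-%m-%dT%H:%M:%S')
assert (a < b) == (earlier < later)
```

This only holds while all stored timestamps share the same format and timezone (UTC here); mixing offsets would break the string ordering.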
 
 
 def get_tag_snoozed(conn: sqlite3.Connection) -> List[Dict[str, Any]]:
     """Return all snoozed series waiting for a git tag to appear."""
     cursor = conn.execute(
-        "SELECT change_id, revision, snoozed_until FROM series"
+        'SELECT change_id, revision, snoozed_until FROM series'
         " WHERE status = 'snoozed'"
         " AND snoozed_until LIKE 'tag:%'"
     )
     results = []
     for row in cursor:
-        results.append({
-            'change_id': row[0],
-            'revision': row[1],
-            'snoozed_until': row[2],
-        })
+        results.append(
+            {
+                'change_id': row[0],
+                'revision': row[1],
+                'snoozed_until': row[2],
+            }
+        )
     return results
 
 
-def get_snoozed_until(conn: sqlite3.Connection, change_id: str,
-                      revision: Optional[int] = None) -> Optional[str]:
+def get_snoozed_until(
+    conn: sqlite3.Connection, change_id: str, revision: Optional[int] = None
+) -> Optional[str]:
     """Return the snoozed_until date for a series, or None."""
     if revision is not None:
         row = conn.execute(
             'SELECT snoozed_until FROM series WHERE change_id = ? AND revision = ?',
-            (change_id, revision)).fetchone()
+            (change_id, revision),
+        ).fetchone()
     else:
         row = conn.execute(
-            'SELECT snoozed_until FROM series WHERE change_id = ?',
-            (change_id,)).fetchone()
+            'SELECT snoozed_until FROM series WHERE change_id = ?', (change_id,)
+        ).fetchone()
     return row[0] if row else None
 
 
-def update_target_branch(conn: sqlite3.Connection, change_id: str,
-                         target_branch: Optional[str],
-                         revision: Optional[int] = None) -> None:
+def update_target_branch(
+    conn: sqlite3.Connection,
+    change_id: str,
+    target_branch: Optional[str],
+    revision: Optional[int] = None,
+) -> None:
     """Set or clear the per-series target branch in the database."""
     now = datetime.datetime.now(datetime.timezone.utc).isoformat()
     if revision is not None:
         conn.execute(
             'UPDATE series SET target_branch = ?, last_activity_at = ?'
             ' WHERE change_id = ? AND revision = ?',
-            (target_branch, now, change_id, revision))
+            (target_branch, now, change_id, revision),
+        )
     else:
         conn.execute(
             'UPDATE series SET target_branch = ?, last_activity_at = ?'
             ' WHERE change_id = ?',
-            (target_branch, now, change_id))
+            (target_branch, now, change_id),
+        )
     conn.commit()
 
 
-def get_target_branch(conn: sqlite3.Connection, change_id: str,
-                      revision: Optional[int] = None) -> Optional[str]:
+def get_target_branch(
+    conn: sqlite3.Connection, change_id: str, revision: Optional[int] = None
+) -> Optional[str]:
     """Return the per-series target branch, or None."""
     if revision is not None:
         row = conn.execute(
             'SELECT target_branch FROM series WHERE change_id = ? AND revision = ?',
-            (change_id, revision)).fetchone()
+            (change_id, revision),
+        ).fetchone()
     else:
         row = conn.execute(
-            'SELECT target_branch FROM series WHERE change_id = ?',
-            (change_id,)).fetchone()
+            'SELECT target_branch FROM series WHERE change_id = ?', (change_id,)
+        ).fetchone()
     return row[0] if row else None
 
 
@@ -1011,11 +1136,11 @@ def _latest_date_from_msgs(msgs: List[Any]) -> Optional[str]:
 
 
 def update_message_count_from_msgs(
-        conn: sqlite3.Connection,
-        change_id: str,
-        revision: int,
-        msgs: List[Any],
-        topdir: Optional[str] = None,
+    conn: sqlite3.Connection,
+    change_id: str,
+    revision: int,
+    msgs: List[Any],
+    topdir: Optional[str] = None,
 ) -> bool:
     """Update message count and thread blob from already-fetched messages.
 
@@ -1032,9 +1157,9 @@ def update_message_count_from_msgs(
     last_activity = _latest_date_from_msgs(msgs)
 
     row = conn.execute(
-        'SELECT message_count FROM series'
-        ' WHERE change_id = ? AND revision = ?',
-        (change_id, revision)).fetchone()
+        'SELECT message_count FROM series WHERE change_id = ? AND revision = ?',
+        (change_id, revision),
+    ).fetchone()
     existing_count = row['message_count'] if row else None
 
     if existing_count is None:
@@ -1044,7 +1169,8 @@ def update_message_count_from_msgs(
             ' SET message_count = ?, seen_message_count = ?,'
             '     last_update_check = ?, last_activity_at = ?'
             ' WHERE change_id = ? AND revision = ?',
-            (count, count, now, last_activity, change_id, revision))
+            (count, count, now, last_activity, change_id, revision),
+        )
     elif count != existing_count:
         # Count changed — update count but not seen (badge will appear)
         conn.execute(
@@ -1052,13 +1178,15 @@ def update_message_count_from_msgs(
             ' SET message_count = ?, last_update_check = ?,'
             '     last_activity_at = COALESCE(?, last_activity_at)'
             ' WHERE change_id = ? AND revision = ?',
-            (count, now, last_activity, change_id, revision))
+            (count, now, last_activity, change_id, revision),
+        )
     else:
         # No change — just stamp the check time, skip commit
         conn.execute(
             'UPDATE series SET last_update_check = ?'
             ' WHERE change_id = ? AND revision = ?',
-            (now, change_id, revision))
+            (now, change_id, revision),
+        )
         conn.commit()
         return False
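
The `COALESCE(?, last_activity_at)` in the count-changed branch keeps the stored timestamp when no newer activity date could be derived. A minimal sketch of that partial-update pattern (toy table, not b4's schema):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE series (change_id TEXT, last_activity_at TEXT)')
conn.execute("INSERT INTO series VALUES ('abc', '2026-04-01T00:00:00')")

def stamp_activity(ts):
    # COALESCE(?, col) picks the bound parameter unless it is NULL,
    # in which case the column keeps its current value.
    conn.execute(
        'UPDATE series SET last_activity_at = COALESCE(?, last_activity_at)'
        " WHERE change_id = 'abc'",
        (ts,),
    )
    conn.commit()

stamp_activity(None)  # NULL parameter: existing timestamp preserved
kept = conn.execute('SELECT last_activity_at FROM series').fetchone()[0]
stamp_activity('2026-04-19T15:59:00')  # non-NULL: overwritten
newer = conn.execute('SELECT last_activity_at FROM series').fetchone()[0]
```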
 
@@ -1081,7 +1209,9 @@ def fetch_thread_message_count(message_id: str) -> Optional[int]:
     return len(parsed)
 
 
-def _fetch_new_since(message_id: str, since: str) -> Optional[Tuple[int, Optional[str]]]:
+def _fetch_new_since(
+    message_id: str, since: str
+) -> Optional[Tuple[int, Optional[str]]]:
     """Fetch new thread messages since a timestamp via LoreNode.
 
     Uses LoreNode.get_thread_updates_since() which queries the public-inbox
@@ -1103,7 +1233,9 @@ def _fetch_new_since(message_id: str, since: str) -> Optional[Tuple[int, Optiona
 
     try:
         node = b4.get_lore_node()
-        msgs = node.get_thread_updates_since(message_id, since_dt, strict=False, sort=False)
+        msgs = node.get_thread_updates_since(
+            message_id, since_dt, strict=False, sort=False
+        )
         if not msgs:
             return (0, None)
         count = len(msgs)
@@ -1114,8 +1246,7 @@ def _fetch_new_since(message_id: str, since: str) -> Optional[Tuple[int, Optiona
         return None
 
 
-def _store_thread_blob(topdir: str, change_id: str,
-                       msgs: List[Any]) -> Optional[str]:
+def _store_thread_blob(topdir: str, change_id: str, msgs: List[Any]) -> Optional[str]:
     """Serialize msgs to mboxrd and write as a git blob; update tracking commit.
 
     Also writes thread-context-blob (the plain-text rendered context for the
@@ -1137,9 +1268,9 @@ def _store_thread_blob(topdir: str, change_id: str,
         logger.debug('No bytes to store for thread blob for %s', change_id)
         return None
 
-    ecode, out = b4.git_run_command(topdir,
-                                    ['hash-object', '-w', '--stdin'],
-                                    stdin=mbox_bytes)
+    ecode, out = b4.git_run_command(
+        topdir, ['hash-object', '-w', '--stdin'], stdin=mbox_bytes
+    )
     if ecode != 0:
         logger.debug('Could not write thread blob for %s', change_id)
         return None
@@ -1161,13 +1292,16 @@ def _store_thread_blob(topdir: str, change_id: str,
             cover_subject = series.get('subject', '')
             patches = tracking.get('patches', [])
             followup_comments = _parse_msgs_to_followup_comments(
-                msgs, cover_msgid, patches)
+                msgs, cover_msgid, patches
+            )
             context_text = _render_thread_context(
-                followup_comments, patches, cover_subject)
+                followup_comments, patches, cover_subject
+            )
             if context_text.strip():
                 ctx_bytes = context_text.encode()
                 ecode2, ctx_out = b4.git_run_command(
-                    topdir, ['hash-object', '-w', '--stdin'], stdin=ctx_bytes)
+                    topdir, ['hash-object', '-w', '--stdin'], stdin=ctx_bytes
+                )
                 if ecode2 == 0:
                     ctx_sha = ctx_out.strip()
                     if series.get('thread-context-blob') != ctx_sha:
@@ -1175,18 +1309,17 @@ def _store_thread_blob(topdir: str, change_id: str,
                         changed = True
 
             if changed:
-                _b4_review.save_tracking_ref(topdir, branch_name,
-                                             cover_text, tracking)
+                _b4_review.save_tracking_ref(topdir, branch_name, cover_text, tracking)
         except Exception as ex:
-            logger.debug('Could not update thread blobs for %s: %s',
-                         change_id, ex)
+            logger.debug('Could not update thread blobs for %s: %s', change_id, ex)
     return blob_sha
 
 
 def get_thread_mbox(topdir: str, blob_sha: str) -> Optional[bytes]:
     """Read cached thread mbox bytes from a git blob; None if unavailable (e.g. GC'd)."""
-    ecode, out = b4.git_run_command(topdir, ['cat-file', 'blob', blob_sha],
-                                    decode=False)
+    ecode, out = b4.git_run_command(
+        topdir, ['cat-file', 'blob', blob_sha], decode=False
+    )
     if ecode != 0:
         logger.debug("Followup blob %s not found (may have been GC'd)", blob_sha)
         return None
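
For context on the blob round trip above: `git hash-object -w --stdin` stores bytes as a loose blob and prints a SHA-1 computed over a `blob <size>\0` header plus the raw bytes, and `cat-file blob <sha>` reads them back. Because nothing references these blobs, git gc may prune them, which is why get_thread_mbox() treats a missing blob as non-fatal. The digest itself can be reproduced in pure Python:

```python
import hashlib

def git_blob_sha1(data: bytes) -> str:
    # Git hashes 'blob <decimal length>\0' followed by the content;
    # this matches what `git hash-object --stdin` prints.
    header = b'blob %d\0' % len(data)
    return hashlib.sha1(header + data).hexdigest()

# The empty blob's SHA-1 is a well-known git constant.
empty = git_blob_sha1(b'')
```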
@@ -1280,7 +1413,8 @@ def _parse_msgs_to_followup_comments(
     followup_comments: Dict[int, List[Dict[str, Any]]] = {}
     for lmsg in sorted(lmbx.followups, key=lambda m: m.date):
         display_idx = _resolve_patch_for_followup_local(
-            lmsg.in_reply_to, patch_msgids, lmbx.msgid_map)
+            lmsg.in_reply_to, patch_msgids, lmbx.msgid_map
+        )
         if display_idx is None:
             continue
         mbody = minimised_body_map.get(lmsg.msgid, '').strip()
@@ -1300,7 +1434,9 @@ def _parse_msgs_to_followup_comments(
             'date': lmsg.date,
             'msgid': lmsg.msgid,
             'subject': lmsg.subject,
-            'depth': _get_followup_depth_local(lmsg.in_reply_to, patch_msgids, lmbx.msgid_map),
+            'depth': _get_followup_depth_local(
+                lmsg.in_reply_to, patch_msgids, lmbx.msgid_map
+            ),
         }
         followup_comments.setdefault(display_idx, []).append(entry)
 
@@ -1326,15 +1462,21 @@ def _render_thread_context(
             section = f'Follow-up: cover letter ({cover_subject})'
         else:
             patch_idx = display_idx - 1
-            title = (patches[patch_idx].get('title', f'patch {display_idx}')
-                     if patch_idx < len(patches) else f'patch {display_idx}')
+            title = (
+                patches[patch_idx].get('title', f'patch {display_idx}')
+                if patch_idx < len(patches)
+                else f'patch {display_idx}'
+            )
             section = f'Follow-up: patch {display_idx}/{n_patches} — {title}'
         lines.append(f'== {section} ==')
         lines.append('')
         for entry in fc_list:
-            date_str = (entry['date'].strftime('%Y-%m-%d %H:%M %z')
-                        if entry.get('date') else '')
-            lines.append(f"From: {entry['fromname']} <{entry['fromemail']}> | {date_str}")
+            date_str = (
+                entry['date'].strftime('%Y-%m-%d %H:%M %z') if entry.get('date') else ''
+            )
+            lines.append(
+                f'From: {entry["fromname"]} <{entry["fromemail"]}> | {date_str}'
+            )
             lines.append('')
             lines.append(entry['body'].rstrip())
             lines.append('')
@@ -1416,9 +1558,9 @@ def render_prior_review_context(
     return '\n'.join(lines)
 
 
-def ensure_thread_context_blob(topdir: str, change_id: str,
-                                series: Dict[str, Any],
-                                patches: List[Dict[str, Any]]) -> Optional[str]:
+def ensure_thread_context_blob(
+    topdir: str, change_id: str, series: Dict[str, Any], patches: List[Dict[str, Any]]
+) -> Optional[str]:
     """Ensure thread-context-blob exists in the tracking commit.
 
     Migration aid: if thread-blob was written before this feature existed
@@ -1454,8 +1596,9 @@ def ensure_thread_context_blob(topdir: str, change_id: str,
         return None
 
     ctx_bytes = context_text.encode()
-    ecode, out = b4.git_run_command(topdir, ['hash-object', '-w', '--stdin'],
-                                    stdin=ctx_bytes)
+    ecode, out = b4.git_run_command(
+        topdir, ['hash-object', '-w', '--stdin'], stdin=ctx_bytes
+    )
     if ecode != 0:
         logger.debug('Could not write thread-context blob for %s', change_id)
         return None
@@ -1469,17 +1612,20 @@ def ensure_thread_context_blob(topdir: str, change_id: str,
                 tracking['series']['thread-context-blob'] = ctx_sha
                 _b4_review.save_tracking_ref(topdir, branch_name, cover_text, tracking)
         except Exception as ex:
-            logger.debug('Could not persist thread-context-blob for %s: %s', change_id, ex)
+            logger.debug(
+                'Could not persist thread-context-blob for %s: %s', change_id, ex
+            )
 
     series['thread-context-blob'] = ctx_sha
     return ctx_sha
 
 
-def update_message_counts(identifier: str,
-                           series_list: List[Dict[str, Any]],
-                           topdir: Optional[str] = None,
-                           prefetched: Optional[Dict[Tuple[str, int], List[Any]]] = None,
-                           ) -> Dict[str, int]:
+def update_message_counts(
+    identifier: str,
+    series_list: List[Dict[str, Any]],
+    topdir: Optional[str] = None,
+    prefetched: Optional[Dict[Tuple[str, int], List[Any]]] = None,
+) -> Dict[str, int]:
     """Fetch and store thread message counts for a list of series.
 
     For each active series in *series_list* that has a message_id:
@@ -1525,7 +1671,8 @@ def update_message_counts(identifier: str,
         row = conn.execute(
             'SELECT message_count, seen_message_count, last_update_check'
             ' FROM series WHERE change_id = ? AND revision = ?',
-            (change_id, revision)).fetchone()
+            (change_id, revision),
+        ).fetchone()
 
         existing_count = row['message_count'] if row else None
         last_check = row['last_update_check'] if row else None
@@ -1543,7 +1690,8 @@ def update_message_counts(identifier: str,
                     ' SET message_count = ?, seen_message_count = ?,'
                     '     last_update_check = ?, last_activity_at = ?'
                     ' WHERE change_id = ? AND revision = ?',
-                    (count, count, now, last_activity, change_id, revision))
+                    (count, count, now, last_activity, change_id, revision),
+                )
                 conn.commit()
                 updated += 1
                 if topdir and pre_msgs:
@@ -1561,7 +1709,8 @@ def update_message_counts(identifier: str,
                     ' SET message_count = ?, seen_message_count = ?,'
                     '     last_update_check = ?, last_activity_at = ?'
                     ' WHERE change_id = ? AND revision = ?',
-                    (count, count, now, last_activity, change_id, revision))
+                    (count, count, now, last_activity, change_id, revision),
+                )
                 conn.commit()
                 updated += 1
                 if topdir and parsed:
@@ -1580,7 +1729,8 @@ def update_message_counts(identifier: str,
                     ' SET message_count = message_count + ?, last_update_check = ?,'
                     '     last_activity_at = COALESCE(?, last_activity_at)'
                     ' WHERE change_id = ? AND revision = ?',
-                    (new_count, now, new_activity, change_id, revision))
+                    (new_count, now, new_activity, change_id, revision),
+                )
                 conn.commit()
                 updated += 1
                 if topdir:
@@ -1594,18 +1744,21 @@ def update_message_counts(identifier: str,
     return {'updated': updated, 'errors': errors}
 
 
-def mark_all_messages_seen(conn: sqlite3.Connection, change_id: str,
-                        revision: int) -> None:
+def mark_all_messages_seen(
+    conn: sqlite3.Connection, change_id: str, revision: int
+) -> None:
     """Set seen_message_count = message_count, clearing the unread badge."""
     conn.execute(
         'UPDATE series SET seen_message_count = message_count'
         ' WHERE change_id = ? AND revision = ?',
-        (change_id, revision))
+        (change_id, revision),
+    )
     conn.commit()
 
 
-def sync_seen_from_unseen_count(identifier: str, change_id: str,
-                                revision: int, unseen_count: int) -> bool:
+def sync_seen_from_unseen_count(
+    identifier: str, change_id: str, revision: int, unseen_count: int
+) -> bool:
     """Sync seen_message_count so the unread badge matches the messages DB.
 
     Sets ``seen_message_count = message_count - unseen_count``, clamped
@@ -1621,7 +1774,8 @@ def sync_seen_from_unseen_count(identifier: str, change_id: str,
     row = conn.execute(
         'SELECT message_count, seen_message_count FROM series'
         ' WHERE change_id = ? AND revision = ?',
-        (change_id, revision)).fetchone()
+        (change_id, revision),
+    ).fetchone()
     if row is None:
         conn.close()
         return False
@@ -1637,16 +1791,17 @@ def sync_seen_from_unseen_count(identifier: str, change_id: str,
         return False
 
     conn.execute(
-        'UPDATE series SET seen_message_count = ?'
-        ' WHERE change_id = ? AND revision = ?',
-        (new_seen, change_id, revision))
+        'UPDATE series SET seen_message_count = ? WHERE change_id = ? AND revision = ?',
+        (new_seen, change_id, revision),
+    )
     conn.commit()
     conn.close()
     return True
 
 
-def refresh_message_count(identifier: str, change_id: str, revision: int,
-                           total_messages: int) -> bool:
+def refresh_message_count(
+    identifier: str, change_id: str, revision: int, total_messages: int
+) -> bool:
     """Opportunistically refresh the message count from already-fetched messages.
 
     Called when thread messages have been fetched for another purpose (e.g.
@@ -1673,7 +1828,8 @@ def refresh_message_count(identifier: str, change_id: str, revision: int,
     row = conn.execute(
         'SELECT message_count, seen_message_count FROM series'
         ' WHERE change_id = ? AND revision = ?',
-        (change_id, revision)).fetchone()
+        (change_id, revision),
+    ).fetchone()
     if row is None:
         conn.close()
         return False
@@ -1692,7 +1848,8 @@ def refresh_message_count(identifier: str, change_id: str, revision: int,
             'UPDATE series SET message_count = ?, seen_message_count = ?,'
             '  last_update_check = ?'
             ' WHERE change_id = ? AND revision = ?',
-            (count, count, now, change_id, revision))
+            (count, count, now, change_id, revision),
+        )
     else:
         # Count changed: update only message_count; cap seen if it
         # exceeds the new count (possible when dedup reduces the total).
@@ -1702,20 +1859,23 @@ def refresh_message_count(identifier: str, change_id: str, revision: int,
                 'UPDATE series SET message_count = ?, seen_message_count = ?,'
                 '  last_update_check = ?'
                 ' WHERE change_id = ? AND revision = ?',
-                (count, count, now, change_id, revision))
+                (count, count, now, change_id, revision),
+            )
         else:
             conn.execute(
                 'UPDATE series SET message_count = ?, last_update_check = ?'
                 ' WHERE change_id = ? AND revision = ?',
-                (count, now, change_id, revision))
+                (count, now, change_id, revision),
+            )
 
     conn.commit()
     conn.close()
     return True
 
 
-def rescan_branches(identifier: str, topdir: str,
-                    branch: Optional[str] = None) -> Dict[str, int]:
+def rescan_branches(
+    identifier: str, topdir: str, branch: Optional[str] = None
+) -> Dict[str, int]:
     """Rescan review branches and sync status/metadata into the tracking DB.
 
     Iterates b4/review/* branches (or a single branch if specified).  For each
@@ -1757,7 +1917,8 @@ def rescan_branches(identifier: str, topdir: str,
         stored = conn.execute(
             'SELECT branch_sha FROM series WHERE change_id = ?'
             ' ORDER BY revision DESC LIMIT 1',
-            (change_id_from_branch,)).fetchone()
+            (change_id_from_branch,),
+        ).fetchone()
         if stored and stored['branch_sha'] == current_sha:
             # Branch HEAD unchanged — skip the expensive tracking-commit read.
             scanned_change_ids.add(change_id_from_branch)
@@ -1775,8 +1936,12 @@ def rescan_branches(identifier: str, topdir: str,
 
         # Verify identifier matches (skip if mismatch)
         if track_id_value and track_id_value != identifier:
-            logger.warning('Branch %s has identifier %s, expected %s; skipping',
-                           br, track_id_value, identifier)
+            logger.warning(
+                'Branch %s has identifier %s, expected %s; skipping',
+                br,
+                track_id_value,
+                identifier,
+            )
             continue
 
         change_id = series.get('change-id')
@@ -1803,23 +1968,28 @@ def rescan_branches(identifier: str, topdir: str,
 
         # Upsert metadata and sync status from the tracking commit.
         tracked_at = series.get('tracked-at')
-        add_series_to_db(conn, change_id,
-                         revision=revision,
-                         subject=series.get('subject'),
-                         sender_name=series.get('fromname'),
-                         sender_email=series.get('fromemail'),
-                         sent_at=sent_at,
-                         message_id=message_id,
-                         num_patches=series.get('expected', 0),
-                         added_at=tracked_at,
-                         is_rethreaded=is_rethreaded)
+        add_series_to_db(
+            conn,
+            change_id,
+            revision=revision,
+            subject=series.get('subject'),
+            sender_name=series.get('fromname'),
+            sender_email=series.get('fromemail'),
+            sent_at=sent_at,
+            message_id=message_id,
+            num_patches=series.get('expected', 0),
+            added_at=tracked_at,
+            is_rethreaded=is_rethreaded,
+        )
         update_series_status(conn, change_id, status, revision=revision)
 
         # Rebuild series_patches from tracking commit data
         tracking_patches = tracking.get('patches', [])
         if tracking_patches:
-            conn.execute('DELETE FROM series_patches WHERE change_id = ? AND revision = ?',
-                         (change_id, revision))
+            conn.execute(
+                'DELETE FROM series_patches WHERE change_id = ? AND revision = ?',
+                (change_id, revision),
+            )
             rows = []
             for i, p in enumerate(tracking_patches, 1):
                 p_msgid = p.get('header-info', {}).get('msgid', '')
@@ -1828,12 +1998,16 @@ def rescan_branches(identifier: str, topdir: str,
             if rows:
                 conn.executemany(
                     'INSERT INTO series_patches (change_id, revision, position, message_id, subject)'
-                    ' VALUES (?, ?, ?, ?, ?)', rows)
+                    ' VALUES (?, ?, ?, ?, ?)',
+                    rows,
+                )
                 conn.commit()
 
         # Persist the new HEAD SHA so future rescans can skip this branch.
-        conn.execute('UPDATE series SET branch_sha = ? WHERE change_id = ? AND revision = ?',
-                     (current_sha, change_id, revision))
+        conn.execute(
+            'UPDATE series SET branch_sha = ? WHERE change_id = ? AND revision = ?',
+            (current_sha, change_id, revision),
+        )
         conn.commit()
 
         logger.info('Rescanned: %s (status: %s)', change_id, status)
@@ -1848,12 +2022,10 @@ def rescan_branches(identifier: str, topdir: str,
             sid = s.get('change_id')
             if not sid:
                 continue
-            if (s.get('status') in active_statuses
-                    and sid not in scanned_change_ids):
+            if s.get('status') in active_statuses and sid not in scanned_change_ids:
                 branch_name = f'b4/review/{sid}'
                 if not b4.git_branch_exists(topdir, branch_name):
-                    update_series_status(conn, sid, 'gone',
-                                         revision=s.get('revision'))
+                    update_series_status(conn, sid, 'gone', revision=s.get('revision'))
                     logger.info('Marked as gone: %s', sid)
                     gone += 1
 
@@ -1861,9 +2033,9 @@ def rescan_branches(identifier: str, topdir: str,
     return {'gone': gone, 'changed': changed}
 
 
-
-def delete_series(conn: sqlite3.Connection, change_id: str,
-                  revision: Optional[int] = None) -> None:
+def delete_series(
+    conn: sqlite3.Connection, change_id: str, revision: Optional[int] = None
+) -> None:
     """Delete a series from the database.
 
     When *revision* is given only that specific revision is removed;
@@ -1871,10 +2043,14 @@ def delete_series(conn: sqlite3.Connection, change_id: str,
     behaviour kept for backwards compatibility).
     """
     if revision is not None:
-        conn.execute('DELETE FROM revisions WHERE change_id = ? AND revision = ?',
-                     (change_id, revision))
-        conn.execute('DELETE FROM series WHERE change_id = ? AND revision = ?',
-                     (change_id, revision))
+        conn.execute(
+            'DELETE FROM revisions WHERE change_id = ? AND revision = ?',
+            (change_id, revision),
+        )
+        conn.execute(
+            'DELETE FROM series WHERE change_id = ? AND revision = ?',
+            (change_id, revision),
+        )
     else:
         conn.execute('DELETE FROM revisions WHERE change_id = ?', (change_id,))
         conn.execute('DELETE FROM series WHERE change_id = ?', (change_id,))
diff --git a/src/b4/review_tui/__init__.py b/src/b4/review_tui/__init__.py
index 68548e6..27551d0 100644
--- a/src/b4/review_tui/__init__.py
+++ b/src/b4/review_tui/__init__.py
@@ -18,10 +18,18 @@ from b4.review_tui._review_app import ReviewApp
 from b4.review_tui._tracking_app import TrackingApp
 
 __all__ = [
-    'logger', 'PATCH_STATE_MARKERS',
-    'resolve_styles', 'reviewer_colours',
+    'logger',
+    'PATCH_STATE_MARKERS',
+    'resolve_styles',
+    'reviewer_colours',
     'gather_attestation_info',
-    '_addrs_to_lines', '_lines_to_header', '_validate_addrs',
-    'ReviewApp', 'TrackingApp', 'PwApp',
-    'run_branch_tui', 'run_pw_tui', 'run_tracking_tui',
+    '_addrs_to_lines',
+    '_lines_to_header',
+    '_validate_addrs',
+    'ReviewApp',
+    'TrackingApp',
+    'PwApp',
+    'run_branch_tui',
+    'run_pw_tui',
+    'run_tracking_tui',
 ]
diff --git a/src/b4/review_tui/_common.py b/src/b4/review_tui/_common.py
index e819af5..c6d5df3 100644
--- a/src/b4/review_tui/_common.py
+++ b/src/b4/review_tui/_common.py
@@ -109,11 +109,11 @@ def get_thread_msgs(
 
 # Per-patch state indicators — same glyphs as _tracking_app._STATUS_SYMBOLS
 PATCH_STATE_MARKERS: Dict[str, str] = {
-    '':          ' ',
-    'external':  '\u00b1',  # ± plus-minus    (= external comments available)
-    'draft':     '\u270e',  # ✎ pencil        (= maintainer reviewing)
-    'done':      '\u2713',  # ✓ check         (= done)
-    'skip':      '\u2715',  # ✕ cross         (= skipped)
+    '': ' ',
+    'external': '\u00b1',  # ± plus-minus    (= external comments available)
+    'draft': '\u270e',  # ✎ pencil        (= maintainer reviewing)
+    'done': '\u2713',  # ✓ check         (= done)
+    'skip': '\u2715',  # ✕ cross         (= skipped)
     'unchanged': '\u2261',  # ≡ identical-to  (= patch unchanged from prior revision)
 }
 
@@ -126,8 +126,6 @@ CI_CHECK_LABELS = {
 }
 
 
-
-
 class CheckRunnerMixin:
     """Mixin providing CI check execution for Textual App subclasses.
 
@@ -165,31 +163,44 @@ class CheckRunnerMixin:
             self.notify('No message-id for this series', severity='error')  # type: ignore[attr-defined]
             return
         from b4.review_tui._modals import CheckLoadingScreen
+
         self._check_loading = CheckLoadingScreen()
         self.push_screen(self._check_loading)  # type: ignore[attr-defined]
         self.run_worker(  # type: ignore[attr-defined]
-            lambda: self._fetch_and_check(message_id, series_subject,
-                                          change_id=change_id, force=force),
-            name='_check_worker', thread=True)
+            lambda: self._fetch_and_check(
+                message_id, series_subject, change_id=change_id, force=force
+            ),
+            name='_check_worker',
+            thread=True,
+        )
 
     def _dismiss_loading(self, msg: str = '', severity: str = '') -> None:
         """Dismiss the check loading screen and optionally notify."""
+
         def _do() -> None:
             if self._check_loading is not None and self._check_loading.is_attached:
                 self._check_loading.dismiss(None)
             if msg:
                 self.notify(msg, severity=severity)  # type: ignore[attr-defined]
+
         self.app.call_from_thread(_do)  # type: ignore[attr-defined]
 
     def _update_loading(self, text: str) -> None:
         """Update the loading screen status text."""
+
         def _do() -> None:
             if self._check_loading is not None and self._check_loading.is_attached:
                 self._check_loading.update_status(text)
+
         self.app.call_from_thread(_do)  # type: ignore[attr-defined]
 
-    def _fetch_and_check(self, message_id: str, series_subject: str,
-                         change_id: str = '', force: bool = False) -> None:
+    def _fetch_and_check(
+        self,
+        message_id: str,
+        series_subject: str,
+        change_id: str = '',
+        force: bool = False,
+    ) -> None:
         """Fetch thread, run checks, and push results modal (worker thread)."""
         import b4.review.checks as checks
         from b4.review_tui._modals import TrackingCheckResultsScreen
@@ -219,7 +230,9 @@ class CheckRunnerMixin:
             try:
                 _cover, tracking = b4.review.load_tracking(topdir, review_branch)
                 blob_sha = tracking.get('series', {}).get('thread-blob', '')
-                fd, tracking_file = tempfile.mkstemp(prefix='b4-tracking-', suffix='.json')
+                fd, tracking_file = tempfile.mkstemp(
+                    prefix='b4-tracking-', suffix='.json'
+                )
                 with os.fdopen(fd, 'w') as fp:
                     json.dump(tracking, fp, indent=2)
                 extra_env['B4_TRACKING_FILE'] = tracking_file
@@ -229,8 +242,7 @@ class CheckRunnerMixin:
         # Fetch the thread (local blob first, then lore)
         self._update_loading('Loading thread\u2026')
         with _quiet_worker():
-            msgs = get_thread_msgs(topdir, message_id,
-                                   blob_sha=blob_sha, quiet=True)
+            msgs = get_thread_msgs(topdir, message_id, blob_sha=blob_sha, quiet=True)
         if not msgs:
             self._dismiss_loading('Could not fetch thread from lore', 'error')
             return
@@ -272,7 +284,9 @@ class CheckRunnerMixin:
         ordered_msgs: List[Tuple[str, email.message.EmailMessage]] = []
         if cover_msg:
             patch_labels.append(f'0/{num_patches}')
-            patch_subjects.append(b4.LoreSubject(cover_msg[1].get('subject', '')).subject)
+            patch_subjects.append(
+                b4.LoreSubject(cover_msg[1].get('subject', '')).subject
+            )
             ordered_msgs.append(cover_msg)
         for idx, (mid, msg) in enumerate(patches, 1):
             patch_labels.append(f'{idx}/{num_patches}')
@@ -313,8 +327,13 @@ class CheckRunnerMixin:
                 label = patch_labels[pidx]
                 self._update_loading(f'Running checks\u2026 {label}')
                 single_results = checks.run_perpatch_checks(
-                    [(mid, _msg)], perpatch_cmds, topdir, pwkey, pwurl,
-                    extra_env=extra_env)
+                    [(mid, _msg)],
+                    perpatch_cmds,
+                    topdir,
+                    pwkey,
+                    pwurl,
+                    extra_env=extra_env,
+                )
                 for result in single_results.get(mid, []):
                     tool = result['tool']
                     all_tools.add(tool)
@@ -323,12 +342,14 @@ class CheckRunnerMixin:
 
         # Run per-series checks (only if not cached)
         if series_cmds:
-            target = cover_msg if cover_msg else (ordered_msgs[0] if ordered_msgs else None)
+            target = (
+                cover_msg if cover_msg else (ordered_msgs[0] if ordered_msgs else None)
+            )
             if target and target[0] not in cached:
                 self._update_loading('Running series checks\u2026')
                 series_results = checks.run_series_checks(
-                    target, series_cmds, topdir, pwkey, pwurl,
-                    extra_env=extra_env)
+                    target, series_cmds, topdir, pwkey, pwurl, extra_env=extra_env
+                )
                 cover_idx = 0
                 for result in series_results:
                     tool = result['tool']
@@ -359,9 +380,12 @@ class CheckRunnerMixin:
         def _push_modal() -> None:
             if self._check_loading is not None and self._check_loading.is_attached:
                 self._check_loading.dismiss(None)
-            self.push_screen(TrackingCheckResultsScreen(  # type: ignore[attr-defined]
-                title, patch_labels, patch_subjects, tools_sorted, matrix),
-                callback=_on_result)
+            self.push_screen(  # type: ignore[attr-defined]
+                TrackingCheckResultsScreen(
+                    title, patch_labels, patch_subjects, tools_sorted, matrix
+                ),
+                callback=_on_result,
+            )
 
         self.app.call_from_thread(_push_modal)  # type: ignore[attr-defined]
 
@@ -391,7 +415,10 @@ def _make_initials(name: str) -> str:
 def _has_review_data(reviews: Dict[str, Dict[str, Any]]) -> bool:
     """Return True if any reviewer has trailers, reply, comments, or a note."""
     return any(
-        r.get('trailers') or r.get('reply', '') or r.get('comments') or r.get('note', '')
+        r.get('trailers')
+        or r.get('reply', '')
+        or r.get('comments')
+        or r.get('note', '')
         for r in reviews.values()
     )
 
@@ -423,11 +450,13 @@ def _strip_attribution(body: str) -> str:
     if attr_end is None:
         return body
     # Check that the next non-blank line starts with '>'
-    for ln in lines[attr_end + 1:]:
+    for ln in lines[attr_end + 1 :]:
         if ln.strip():
             if ln.startswith('> ') or ln.strip() == '>':
-                remaining = lines[attr_end + 1:]
-                while remaining and (not remaining[0].strip() or remaining[0].strip() == '>'):
+                remaining = lines[attr_end + 1 :]
+                while remaining and (
+                    not remaining[0].strip() or remaining[0].strip() == '>'
+                ):
                     remaining.pop(0)
                 return '\n'.join(remaining)
             break
@@ -452,10 +481,17 @@ def _write_followup_comments(
     """
     if not fc_list:
         return
-    rev_palette = reviewer_colours(ts) if ts else [
-        'dark_goldenrod', 'dark_cyan',
-        'dark_magenta', 'dark_red', 'dark_blue',
-    ]
+    rev_palette = (
+        reviewer_colours(ts)
+        if ts
+        else [
+            'dark_goldenrod',
+            'dark_cyan',
+            'dark_magenta',
+            'dark_red',
+            'dark_blue',
+        ]
+    )
     fc_emails = sorted({e['fromemail'] for e in fc_list})
     colour_map: Dict[str, str] = {}
     for ci, em in enumerate(fc_emails):
@@ -473,7 +509,9 @@ def _write_followup_comments(
         body = _strip_attribution(e['body'])
         body_text = Text()
         body_text.append(f'From:  {fromname} <{e["fromemail"]}>\n', style='bold')
-        body_text.append(f'Date:  {e["date"].strftime("%Y-%m-%d %H:%M %z")}\n', style='bold')
+        body_text.append(
+            f'Date:  {e["date"].strftime("%Y-%m-%d %H:%M %z")}\n', style='bold'
+        )
         if msgid := e.get('msgid', ''):
             body_text.append(f'Msgid: <{msgid}>\n', style='bold')
         body_text.append('\n')
@@ -590,7 +628,7 @@ def _write_comments(
     the same diff line are rendered as separate panels.
     *ts* is a resolved theme styles dict from :func:`resolve_styles`.
     """
-    bg = f"on {ts['panel']}" if ts else 'on grey11'
+    bg = f'on {ts["panel"]}' if ts else 'on grey11'
     for name, colour, text in entries:
         panel = Panel(
             Text(text),
@@ -648,7 +686,8 @@ def _write_followup_trailers(
 
 
 def _write_diff_line(
-    viewer: 'RichLog', line: str,
+    viewer: 'RichLog',
+    line: str,
     ts: Optional[Dict[str, str]] = None,
 ) -> None:
     """Write a single diff line to a RichLog with appropriate colouring.
@@ -658,7 +697,7 @@ def _write_diff_line(
     if line.startswith(('diff --git ', '--- ', '+++ ')):
         viewer.write(Text(line, style='bold'))
     elif line.startswith('@@'):
-        viewer.write(Text(line, style=f"bold {ts['accent']}" if ts else 'bold cyan'))
+        viewer.write(Text(line, style=f'bold {ts["accent"]}' if ts else 'bold cyan'))
     elif line.startswith('+'):
         viewer.write(Text(line, style=ts['success'] if ts else 'green'))
     elif line.startswith('-'):
@@ -668,7 +707,8 @@ def _write_diff_line(
 
 
 def _render_email_to_viewer(
-    viewer: 'RichLog', msg: email.message.EmailMessage,
+    viewer: 'RichLog',
+    msg: email.message.EmailMessage,
     ts: Optional[Dict[str, str]] = None,
 ) -> None:
     """Render an EmailMessage into a RichLog, headers first then body.
@@ -684,14 +724,15 @@ def _render_email_to_viewer(
             continue
         val = str(val)
         if hdr.lower() in ('to', 'cc'):
-            wrapped = b4.LoreMessage.wrap_header(
-                (hdr, val), transform='decode').decode(errors='replace')
+            wrapped = b4.LoreMessage.wrap_header((hdr, val), transform='decode').decode(
+                errors='replace'
+            )
             first_line, *rest = wrapped.splitlines()
             colon = first_line.find(':')
             hdr_text = Text()
             if colon >= 0:
-                hdr_text.append(first_line[:colon + 1], style='bold')
-                hdr_text.append(first_line[colon + 1:])
+                hdr_text.append(first_line[: colon + 1], style='bold')
+                hdr_text.append(first_line[colon + 1 :])
             else:
                 hdr_text.append(first_line)
             for r in rest:
@@ -705,10 +746,14 @@ def _render_email_to_viewer(
             viewer.write(hdr_text)
     viewer.write('')
     payload = msg.get_payload(decode=True)
-    body = payload.decode(errors='replace') if isinstance(payload, bytes) else str(payload or '')
+    body = (
+        payload.decode(errors='replace')
+        if isinstance(payload, bytes)
+        else str(payload or '')
+    )
     for line in body.splitlines():
         if line.startswith('>'):
-            viewer.write(Text(line, style=f"dim {ts['accent']}" if ts else 'dim cyan'))
+            viewer.write(Text(line, style=f'dim {ts["accent"]}' if ts else 'dim cyan'))
         elif line.startswith('---'):
             viewer.write(Text(line, style='dim'))
         else:
@@ -764,9 +809,11 @@ def gather_attestation_info(lser: b4.LoreSeries) -> Dict[str, Any]:
                 check_at = 'HEAD'
 
             try:
-                apply_checked, mismatches = lser.check_applies_clean(topdir, at=check_at)
+                apply_checked, mismatches = lser.check_applies_clean(
+                    topdir, at=check_at
+                )
                 apply_mismatches = len(mismatches)
-                applies_clean = (apply_mismatches == 0)
+                applies_clean = apply_mismatches == 0
             except Exception:
                 pass
 
@@ -802,15 +849,25 @@ def gather_attestation_info(lser: b4.LoreSeries) -> Dict[str, Any]:
         patch_idx = f'{idx:0{width}d}/{total:0{width}d}'
 
         if lmsg is None:
-            per_patch.append({
-                'index': patch_idx,
-                'passing': False,
-                'attestations': [{'status': 'missing', 'identity': 'Patch not available', 'passing': False}],
-            })
+            per_patch.append(
+                {
+                    'index': patch_idx,
+                    'passing': False,
+                    'attestations': [
+                        {
+                            'status': 'missing',
+                            'identity': 'Patch not available',
+                            'passing': False,
+                        }
+                    ],
+                }
+            )
             same_attestation = False
             continue
 
-        attestations, overall_passing, critical = lmsg.get_attestation_status(attpolicy, maxdays)
+        attestations, overall_passing, critical = lmsg.get_attestation_status(
+            attpolicy, maxdays
+        )
         if critical:
             any_critical = True
 
@@ -826,11 +883,13 @@ def gather_attestation_info(lser: b4.LoreSeries) -> Dict[str, Any]:
             if ref_ids != cur_ids:
                 same_attestation = False
 
-        per_patch.append({
-            'index': patch_idx,
-            'passing': overall_passing,
-            'attestations': attestations,
-        })
+        per_patch.append(
+            {
+                'index': patch_idx,
+                'passing': overall_passing,
+                'attestations': attestations,
+            }
+        )
 
     return {
         'total': len(per_patch),
@@ -845,5 +904,3 @@ def gather_attestation_info(lser: b4.LoreSeries) -> Dict[str, Any]:
         'apply_checked': apply_checked,
         'apply_mismatches': apply_mismatches,
     }
-
-
diff --git a/src/b4/review_tui/_entry.py b/src/b4/review_tui/_entry.py
index 68a48af..ece97da 100644
--- a/src/b4/review_tui/_entry.py
+++ b/src/b4/review_tui/_entry.py
@@ -25,12 +25,17 @@ def _tui_use_mouse() -> bool:
         return True
 
 
-def run_pw_tui(pwkey: str, pwurl: str, pwproj: str,
-               email_dryrun: bool = False,
-               patatt_sign: bool = True) -> None:
+def run_pw_tui(
+    pwkey: str,
+    pwurl: str,
+    pwproj: str,
+    email_dryrun: bool = False,
+    patatt_sign: bool = True,
+) -> None:
     """Launch the Patchwork series browser TUI."""
-    app = PwApp(pwkey, pwurl, pwproj,
-                email_dryrun=email_dryrun, patatt_sign=patatt_sign)
+    app = PwApp(
+        pwkey, pwurl, pwproj, email_dryrun=email_dryrun, patatt_sign=patatt_sign
+    )
     app.run(mouse=_tui_use_mouse())
 
 
@@ -40,9 +45,12 @@ def run_branch_tui(session: Dict[str, Any]) -> None:
     app.run(mouse=_tui_use_mouse())
 
 
-def run_tracking_tui(identifier: str, email_dryrun: bool = False,
-                     no_sign: bool = False,
-                     no_mouse: bool = False) -> None:
+def run_tracking_tui(
+    identifier: str,
+    email_dryrun: bool = False,
+    no_sign: bool = False,
+    no_mouse: bool = False,
+) -> None:
     """Entry point called from b4.review.cmd_tui().
 
     Loops between TrackingApp and ReviewApp as needed.
@@ -88,14 +96,21 @@ def run_tracking_tui(identifier: str, email_dryrun: bool = False,
             review_app = ReviewApp(session)
             review_app.run(mouse=use_mouse)
         except SystemExit:
-            logger.warning('Could not prepare review session for branch: %s', original_branch)
+            logger.warning(
+                'Could not prepare review session for branch: %s', original_branch
+            )
         return
 
     # Normal tracking mode - loop between TrackingApp and ReviewApp
     focus_change_id: Optional[str] = None
     while True:
-        app = TrackingApp(identifier, original_branch, focus_change_id=focus_change_id,
-                          email_dryrun=email_dryrun, patatt_sign=patatt_sign)
+        app = TrackingApp(
+            identifier,
+            original_branch,
+            focus_change_id=focus_change_id,
+            email_dryrun=email_dryrun,
+            patatt_sign=patatt_sign,
+        )
         focus_change_id = None
         branch_name = app.run(mouse=use_mouse)
 
@@ -110,9 +125,13 @@ def run_tracking_tui(identifier: str, email_dryrun: bool = False,
             pwurl = str(config.get('pw-url', ''))
             pwproj = str(config.get('pw-project', ''))
             if pwkey and pwurl and pwproj:
-                run_pw_tui(pwkey, pwurl, pwproj,
-                           email_dryrun=email_dryrun,
-                           patatt_sign=patatt_sign)
+                run_pw_tui(
+                    pwkey,
+                    pwurl,
+                    pwproj,
+                    email_dryrun=email_dryrun,
+                    patatt_sign=patatt_sign,
+                )
             continue
 
         # User selected a branch to review - prepare session and run ReviewApp
@@ -121,7 +140,9 @@ def run_tracking_tui(identifier: str, email_dryrun: bool = False,
             session = b4.review._prepare_review_session(cmdargs)
         except SystemExit:
             # Session prep failed (e.g., branch doesn't exist)
-            logger.warning('Could not prepare review session for branch: %s', branch_name)
+            logger.warning(
+                'Could not prepare review session for branch: %s', branch_name
+            )
             continue
 
         session['email_dryrun'] = email_dryrun
@@ -143,7 +164,8 @@ def run_tracking_tui(identifier: str, email_dryrun: bool = False,
             if tracking_status and focus_change_id:
                 conn = b4.review.tracking.get_db(identifier)
                 b4.review.tracking.update_series_status(
-                    conn, focus_change_id, tracking_status, revision=revision)
+                    conn, focus_change_id, tracking_status, revision=revision
+                )
                 conn.close()
         except Exception as ex:
             logger.warning('Could not sync tracking status: %s', ex)
@@ -155,7 +177,13 @@ def run_tracking_tui(identifier: str, email_dryrun: bool = False,
         if original_branch:
             current = b4.git_get_current_branch(topdir)
             if current and current != original_branch:
-                logger.info('Checking out %s and starting tracking UI...', original_branch)
-                ecode, _out = b4.git_run_command(topdir, ['checkout', original_branch], logstderr=True)
+                logger.info(
+                    'Checking out %s and starting tracking UI...', original_branch
+                )
+                ecode, _out = b4.git_run_command(
+                    topdir, ['checkout', original_branch], logstderr=True
+                )
                 if ecode != 0:
-                    logger.warning('Could not restore original branch: %s', original_branch)
+                    logger.warning(
+                        'Could not restore original branch: %s', original_branch
+                    )
diff --git a/src/b4/review_tui/_lite_app.py b/src/b4/review_tui/_lite_app.py
index 7a37e0d..801665b 100644
--- a/src/b4/review_tui/_lite_app.py
+++ b/src/b4/review_tui/_lite_app.py
@@ -34,6 +34,7 @@ from b4.review_tui._modals import FollowupReplyPreviewScreen
 @dataclass
 class ThreadNode:
     """A single message in the thread tree."""
+
     lmsg: b4.LoreMessage
     children: List['ThreadNode'] = field(default_factory=list)
     depth: int = 0
@@ -54,7 +55,7 @@ def _flatten_tree(
     """DFS-flatten a list of roots into a list with tree_art set."""
     result: List[ThreadNode] = []
     for i, node in enumerate(roots):
-        is_last = (i == len(roots) - 1)
+        is_last = i == len(roots) - 1
         if is_root:
             node.tree_art = ''
         else:
@@ -88,7 +89,9 @@ def build_thread_tree(lmbx: b4.LoreMailbox) -> List[ThreadNode]:
         att_list: List[Dict[str, Any]] = []
         att_passing = True
         if attpolicy != 'off':
-            att_list, att_passing, _critical = lmsg.get_attestation_status(attpolicy, maxdays)
+            att_list, att_passing, _critical = lmsg.get_attestation_status(
+                attpolicy, maxdays
+            )
         nodes[msgid] = ThreadNode(
             lmsg=lmsg,
             is_patch=lmsg.has_diff,
@@ -121,8 +124,7 @@ def build_thread_tree(lmbx: b4.LoreMailbox) -> List[ThreadNode]:
     return flat
 
 
-def _build_thread_label(node: ThreadNode,
-                        ts: Optional[Dict[str, str]] = None) -> Text:
+def _build_thread_label(node: ThreadNode, ts: Optional[Dict[str, str]] = None) -> Text:
     """Build the Text label for a thread index row."""
     lmsg = node.lmsg
     if lmsg.date:
@@ -136,15 +138,15 @@ def _build_thread_label(node: ThreadNode,
         author += '\u2026'
     author = pad_display(author, 20)
     is_unseen = node.is_unseen
-    unseen_style = f"bold {ts['warning']}" if ts else 'bold'
-    flag_style = f"bold {ts['accent']}" if ts else 'bold'
+    unseen_style = f'bold {ts["warning"]}' if ts else 'bold'
+    flag_style = f'bold {ts["accent"]}' if ts else 'bold'
     answered_style = ts['success'] if ts else ''
     if node.is_answered:
-        row_style = f"dim {ts['success']}" if ts else 'dim'
+        row_style = f'dim {ts["success"]}' if ts else 'dim'
     elif is_unseen:
         row_style = ''
     elif node.is_flagged:
-        row_style = f"bold {ts['accent']}" if ts else 'bold'
+        row_style = f'bold {ts["accent"]}' if ts else 'bold'
     else:
         row_style = 'dim'
     text = Text(no_wrap=True, overflow='ellipsis')
@@ -226,9 +228,16 @@ class MessageViewScreen(ModalScreen[None]):
 
     def compose(self) -> ComposeResult:
         with Vertical(id='msg-dialog'):
-            yield Static(f'Subject: {self._node.lmsg.full_subject}', id='msg-title', markup=False)
-            yield RichLog(id='msg-viewer', highlight=False, wrap=True,
-                          markup=False, auto_scroll=False)
+            yield Static(
+                f'Subject: {self._node.lmsg.full_subject}', id='msg-title', markup=False
+            )
+            yield RichLog(
+                id='msg-viewer',
+                highlight=False,
+                wrap=True,
+                markup=False,
+                auto_scroll=False,
+            )
             yield Static(
                 'r reply  |  F flag  |  S skip quoted  |  j/k prev/next msg  |  q back',
                 id='msg-hint',
@@ -246,7 +255,7 @@ class MessageViewScreen(ModalScreen[None]):
         if self._node.is_flagged:
             ts = resolve_styles(self.app)
             text = Text()
-            text.append(f'Subject: {subject} \u2605', style=f"bold {ts['accent']}")
+            text.append(f'Subject: {subject} \u2605', style=f'bold {ts["accent"]}')
             title.update(text)
         else:
             title.update(f'Subject: {subject}')
@@ -286,7 +295,7 @@ class MessageViewScreen(ModalScreen[None]):
             linkurl = linkmask % lmsg.msgid
             hdr_text = Text()
             hdr_text.append('Link: ', style='dim bold')
-            hdr_text.append(linkurl, style=f"dim link {linkurl}")
+            hdr_text.append(linkurl, style=f'dim link {linkurl}')
             viewer.write(hdr_text)
 
         # Attestation status
@@ -302,14 +311,20 @@ class MessageViewScreen(ModalScreen[None]):
                 if att.get('passing'):
                     att_text.append(f'\u2713 {identity}', style=ts['success'])
                     if 'mismatch' in att:
-                        att_text.append(f' (From: {att["mismatch"]})', style=ts['warning'])
+                        att_text.append(
+                            f' (From: {att["mismatch"]})', style=ts['warning']
+                        )
                 else:
                     if status == 'badsig':
                         att_text.append(f'\u2717 BADSIG: {identity}', style=ts['error'])
                     elif status == 'nokey':
-                        att_text.append(f'\u2717 No key: {identity}', style=ts['warning'])
+                        att_text.append(
+                            f'\u2717 No key: {identity}', style=ts['warning']
+                        )
                     else:
-                        att_text.append(f'\u2717 {status}: {identity}', style=ts['error'])
+                        att_text.append(
+                            f'\u2717 {status}: {identity}', style=ts['error']
+                        )
             viewer.write(att_text)
 
         viewer.write('')
@@ -323,7 +338,7 @@ class MessageViewScreen(ModalScreen[None]):
             if in_diff:
                 _write_diff_line(viewer, line, ts=ts)
             elif line.startswith('>'):
-                viewer.write(Text(line, style=f"dim {ts['accent']}"))
+                viewer.write(Text(line, style=f'dim {ts["accent"]}'))
             elif line.startswith('---'):
                 viewer.write(Text(line, style='dim'))
             else:
@@ -331,8 +346,10 @@ class MessageViewScreen(ModalScreen[None]):
 
     @staticmethod
     def _write_addr_header(
-        viewer: RichLog, hdr_name: str,
-        pairs: List[Any], width: int,
+        viewer: RichLog,
+        hdr_name: str,
+        pairs: List[Any],
+        width: int,
     ) -> None:
         """Write an address header, packing addresses to fill each line."""
         indent_len = len(hdr_name) + 2  # "Cc: "
@@ -561,6 +578,7 @@ class LiteThreadScreen(ModalScreen[None]):
             return
         try:
             from b4.review import messages
+
             conn = messages.get_db()
             msgids = [n.lmsg.msgid for n in self._thread_nodes if n.lmsg.msgid]
             flags_map = messages.get_flags_bulk(conn, msgids)
@@ -597,6 +615,7 @@ class LiteThreadScreen(ModalScreen[None]):
             return
         try:
             from b4.review import messages
+
             conn = messages.get_db()
             messages.set_flag(conn, msgid, 'Seen', self._msg_date(node))
             conn.close()
@@ -611,6 +630,7 @@ class LiteThreadScreen(ModalScreen[None]):
             return
         try:
             from b4.review import messages
+
             conn = messages.get_db()
             messages.set_flag(conn, msgid, 'Answered', self._msg_date(node))
             conn.close()
@@ -644,30 +664,34 @@ class LiteThreadScreen(ModalScreen[None]):
             # Maintainer's own message → Seen
             if node.is_unseen:
                 node.is_unseen = False
-                seen_entries.append({'msgid': node.lmsg.msgid,
-                                     'msg_date': self._msg_date(node)})
+                seen_entries.append(
+                    {'msgid': node.lmsg.msgid, 'msg_date': self._msg_date(node)}
+                )
             # Immediate parent → Answered
             parent_id = node.lmsg.in_reply_to
             if parent_id and parent_id in node_map:
                 parent = node_map[parent_id]
                 if not parent.is_answered:
                     parent.is_answered = True
-                    answered_entries.append({'msgid': parent_id,
-                                             'msg_date': self._msg_date(parent)})
+                    answered_entries.append(
+                        {'msgid': parent_id, 'msg_date': self._msg_date(parent)}
+                    )
             # All ancestors → Seen
             ancestor_id = node.lmsg.in_reply_to
             while ancestor_id and ancestor_id in node_map:
                 ancestor = node_map[ancestor_id]
                 if ancestor.is_unseen:
                     ancestor.is_unseen = False
-                    seen_entries.append({'msgid': ancestor_id,
-                                         'msg_date': self._msg_date(ancestor)})
+                    seen_entries.append(
+                        {'msgid': ancestor_id, 'msg_date': self._msg_date(ancestor)}
+                    )
                 ancestor_id = ancestor.lmsg.in_reply_to
 
         if not seen_entries and not answered_entries:
             return
         try:
             from b4.review import messages
+
             conn = messages.get_db()
             if seen_entries:
                 messages.set_flags_bulk(conn, seen_entries, 'Seen')
@@ -685,6 +709,7 @@ class LiteThreadScreen(ModalScreen[None]):
             return
         try:
             from b4.review import messages
+
             conn = messages.get_db()
             if node.is_flagged:
                 messages.set_flag(conn, msgid, 'Flagged', self._msg_date(node))
@@ -725,7 +750,9 @@ class LiteThreadScreen(ModalScreen[None]):
         for item in lv.query(ThreadIndexItem):
             item.query_one(Label).update(_build_thread_label(item.node, ts))
 
-    def compose_reply(self, node: ThreadNode, initial_text: Optional[str] = None) -> None:
+    def compose_reply(
+        self, node: ThreadNode, initial_text: Optional[str] = None
+    ) -> None:
         """Compose a reply to the given thread node using external editor."""
         lmsg = node.lmsg
         if initial_text is not None:
@@ -768,10 +795,15 @@ class LiteThreadScreen(ModalScreen[None]):
         try:
             with self.app.suspend():
                 smtp, fromaddr = b4.get_smtp(dryrun=self._email_dryrun)
-                sent = b4.send_mail(smtp, [msg], fromaddr=fromaddr,
-                                    patatt_sign=self._patatt_sign,
-                                    dryrun=self._email_dryrun,
-                                    output_dir=None, reflect=False)
+                sent = b4.send_mail(
+                    smtp,
+                    [msg],
+                    fromaddr=fromaddr,
+                    patatt_sign=self._patatt_sign,
+                    dryrun=self._email_dryrun,
+                    output_dir=None,
+                    reflect=False,
+                )
             if sent is None:
                 self.app.notify('Failed to send reply.', severity='error')
             elif self._email_dryrun:
diff --git a/src/b4/review_tui/_modals.py b/src/b4/review_tui/_modals.py
index 412419a..c8521d6 100644
--- a/src/b4/review_tui/_modals.py
+++ b/src/b4/review_tui/_modals.py
@@ -132,7 +132,10 @@ class TrailerScreen(JKListNavMixin, ModalScreen[Optional[List[str]]]):
         with Vertical(id='trailer-dialog'):
             yield Label('Select trailers:')
             yield ListView(
-                *[TrailerOption(name, name in self._existing) for name in self.TRAILER_NAMES],
+                *[
+                    TrailerOption(name, name in self._existing)
+                    for name in self.TRAILER_NAMES
+                ],
                 id='trailer-list',
             )
             yield Static('a/r/t/n toggle  |  Enter save', id='trailer-hint')
@@ -145,7 +148,9 @@ class TrailerScreen(JKListNavMixin, ModalScreen[Optional[List[str]]]):
 
     def action_toggle_item(self) -> None:
         lv = self.query_one('#trailer-list', ListView)
-        if lv.highlighted_child is not None and isinstance(lv.highlighted_child, TrailerOption):
+        if lv.highlighted_child is not None and isinstance(
+            lv.highlighted_child, TrailerOption
+        ):
             lv.highlighted_child.toggle()
 
     def action_quick_toggle(self, name: str) -> None:
@@ -401,8 +406,13 @@ class NoteScreen(ModalScreen[Optional[str]]):
 
     def compose(self) -> ComposeResult:
         with Vertical(id='note-dialog'):
-            yield RichLog(id='note-viewer', highlight=False, wrap=True,
-                          markup=True, auto_scroll=False)
+            yield RichLog(
+                id='note-viewer',
+                highlight=False,
+                wrap=True,
+                markup=True,
+                auto_scroll=False,
+            )
             yield Static('Escape close  |  e edit  |  d delete all', id='note-hint')
 
     def on_mount(self) -> None:
@@ -471,8 +481,13 @@ class PriorReviewScreen(ModalScreen[None]):
 
     def compose(self) -> ComposeResult:
         with Vertical(id='prior-review-dialog'):
-            yield RichLog(id='prior-review-viewer', highlight=False, wrap=True,
-                          markup=False, auto_scroll=False)
+            yield RichLog(
+                id='prior-review-viewer',
+                highlight=False,
+                wrap=True,
+                markup=False,
+                auto_scroll=False,
+            )
             yield Static('Escape close', id='prior-review-hint')
 
     def on_mount(self) -> None:
@@ -480,7 +495,7 @@ class PriorReviewScreen(ModalScreen[None]):
         viewer = self.query_one('#prior-review-viewer', RichLog)
         for line in self._context_text.splitlines():
             if line.startswith('== ') and line.endswith(' =='):
-                viewer.write(Text(line, style=f"bold {ts['accent']}"))
+                viewer.write(Text(line, style=f'bold {ts["accent"]}'))
             else:
                 viewer.write(Text(line))
 
@@ -537,10 +552,16 @@ class FollowupReplyPreviewScreen(ModalScreen[Optional[str]]):
 
     def compose(self) -> ComposeResult:
         with Vertical(id='followup-preview-dialog'):
-            yield RichLog(id='followup-preview-viewer', highlight=False,
-                          wrap=True, markup=False, auto_scroll=False)
-            yield Static('S  send  |  e  edit  |  Escape  abandon',
-                         id='followup-preview-hint')
+            yield RichLog(
+                id='followup-preview-viewer',
+                highlight=False,
+                wrap=True,
+                markup=False,
+                auto_scroll=False,
+            )
+            yield Static(
+                'S  send  |  e  edit  |  Escape  abandon', id='followup-preview-hint'
+            )
 
     def on_mount(self) -> None:
         body = self._reply_text
@@ -691,11 +712,15 @@ class TakeScreen(ModalScreen[bool]):
     }
     """
 
-    def __init__(self, target_branch: str, review_branch: str,
-                 num_patches: int = 0,
-                 default_method: Optional[str] = None,
-                 recent_branches: Optional[List[str]] = None,
-                 subject: str = '') -> None:
+    def __init__(
+        self,
+        target_branch: str,
+        review_branch: str,
+        num_patches: int = 0,
+        default_method: Optional[str] = None,
+        recent_branches: Optional[List[str]] = None,
+        subject: str = '',
+    ) -> None:
         """Initialize take screen.
 
         Args:
@@ -709,7 +734,9 @@ class TakeScreen(ModalScreen[bool]):
         super().__init__()
         self._target_branch = target_branch
         self._review_branch = review_branch
-        self._default_method = default_method or ('linear' if num_patches == 1 else 'merge')
+        self._default_method = default_method or (
+            'linear' if num_patches == 1 else 'merge'
+        )
         self._recent_branches = recent_branches
         self._subject = subject
         # Results set after continue
@@ -731,12 +758,30 @@ class TakeScreen(ModalScreen[bool]):
                 yield Static(self._subject, id='take-title', markup=False)
             yield Static(f'Review branch: {self._review_branch}', classes='take-value')
             yield Static('Target branch:', classes='take-label')
-            suggester = SuggestFromList(self._recent_branches, case_sensitive=True) if self._recent_branches else None
-            yield Input(value=self._target_branch, id='take-target', suggester=suggester)
+            suggester = (
+                SuggestFromList(self._recent_branches, case_sensitive=True)
+                if self._recent_branches
+                else None
+            )
+            yield Input(
+                value=self._target_branch, id='take-target', suggester=suggester
+            )
             yield Static('Method:', classes='take-label')
-            yield Select(method_options, value=self._default_method, id='take-method', allow_blank=False)
-            yield Checkbox('add Link:', value=True, id='take-add-link', classes='take-checkbox')
-            yield Checkbox('add Signed-off-by:', value=True, id='take-add-signoff', classes='take-checkbox')
+            yield Select(
+                method_options,
+                value=self._default_method,
+                id='take-method',
+                allow_blank=False,
+            )
+            yield Checkbox(
+                'add Link:', value=True, id='take-add-link', classes='take-checkbox'
+            )
+            yield Checkbox(
+                'add Signed-off-by:',
+                value=True,
+                id='take-add-signoff',
+                classes='take-checkbox',
+            )
             yield Static('Ctrl-y continue  |  Escape cancel', id='take-hint')
 
     def on_mount(self) -> None:
@@ -748,7 +793,9 @@ class TakeScreen(ModalScreen[bool]):
             self.notify('Target branch is required', severity='error')
             return
         if not b4.git_branch_exists(None, self.target_result):
-            self.notify(f'Branch does not exist: {self.target_result}', severity='error')
+            self.notify(
+                f'Branch does not exist: {self.target_result}', severity='error'
+            )
             return
         self.method_result = str(self.query_one('#take-method', Select).value)
         self.add_link = self.query_one('#take-add-link', Checkbox).value
@@ -801,8 +848,9 @@ class CherryPickScreen(ModalScreen[bool]):
     }
     """
 
-    def __init__(self, patches: List[Dict[str, Any]],
-                 preselected: Optional[List[int]] = None) -> None:
+    def __init__(
+        self, patches: List[Dict[str, Any]], preselected: Optional[List[int]] = None
+    ) -> None:
         super().__init__()
         self._patches = patches
         self._preselected: List[int] = preselected if preselected is not None else []
@@ -813,14 +861,19 @@ class CherryPickScreen(ModalScreen[bool]):
         with Vertical(id='cherrypick-dialog'):
             yield Static('Select patches to apply', id='cherrypick-title')
             if has_preselected:
-                yield Static('Skipped patches are pre-deselected.',
-                             id='cherrypick-skip-note')
+                yield Static(
+                    'Skipped patches are pre-deselected.', id='cherrypick-skip-note'
+                )
             with Vertical(id='cherrypick-list'):
                 for i, patch in enumerate(self._patches):
                     title = patch.get('title', f'Patch {i + 1}')
                     checked = (i + 1) in self._preselected if has_preselected else False
-                    yield Checkbox(Text(f' {i + 1:3d}. {title}'), value=checked,
-                                   id=f'cherrypick-{i}', classes='cherrypick-checkbox')
+                    yield Checkbox(
+                        Text(f' {i + 1:3d}. {title}'),
+                        value=checked,
+                        id=f'cherrypick-{i}',
+                        classes='cherrypick-checkbox',
+                    )
             yield Static('Ctrl-y continue  |  Escape cancel', id='cherrypick-hint')
 
     def action_continue_pick(self) -> None:
@@ -887,9 +940,14 @@ class TakeConfirmScreen(ModalScreen[bool]):
     }
     """
 
-    def __init__(self, method: str, target_branch: str,
-                 review_branch: str, subject: str = '',
-                 cherrypick: Optional[List[int]] = None) -> None:
+    def __init__(
+        self,
+        method: str,
+        target_branch: str,
+        review_branch: str,
+        subject: str = '',
+        cherrypick: Optional[List[int]] = None,
+    ) -> None:
         super().__init__()
         self._method = method
         self._target_branch = target_branch
@@ -908,14 +966,12 @@ class TakeConfirmScreen(ModalScreen[bool]):
             if self._cherrypick:
                 yield Static(
                     f'Patches: {", ".join(str(i) for i in self._cherrypick)}',
-                    markup=False)
+                    markup=False,
+                )
             yield Static('Testing apply\u2026', id='takeconfirm-status')
             yield LoadingIndicator(id='takeconfirm-loading')
-            yield Checkbox('mark as accepted', value=True,
-                           id='takeconfirm-accept')
-            yield Static(
-                'Ctrl-y confirm  |  Escape cancel',
-                id='takeconfirm-hint')
+            yield Checkbox('mark as accepted', value=True, id='takeconfirm-accept')
+            yield Static('Ctrl-y confirm  |  Escape cancel', id='takeconfirm-hint')
 
     def on_mount(self) -> None:
         self.run_worker(self._test_take, name='_test_take', thread=True)
@@ -931,8 +987,7 @@ class TakeConfirmScreen(ModalScreen[bool]):
 
             # Load tracking to find base-commit and patch count
             try:
-                _cover, tracking = b4.review.load_tracking(
-                    topdir, self._review_branch)
+                _cover, tracking = b4.review.load_tracking(topdir, self._review_branch)
             except SystemExit:
                 return False, 'could not load tracking data'
 
@@ -957,15 +1012,16 @@ class TakeConfirmScreen(ModalScreen[bool]):
 
             # Resolve the test base
             ecode, out = b4.git_run_command(
-                topdir, ['rev-parse', '--verify', test_base])
+                topdir, ['rev-parse', '--verify', test_base]
+            )
             if ecode != 0:
                 return False, f'cannot resolve base: {test_base}'
             resolved_base = out.strip()
 
             # Get patch commits
             commits = b4.git_get_command_lines(
-                topdir, ['rev-list', '--reverse',
-                         f'{patch_base}..{patch_tip}'])
+                topdir, ['rev-list', '--reverse', f'{patch_base}..{patch_tip}']
+            )
             if not commits:
                 return False, 'no commits found on review branch'
 
@@ -983,9 +1039,8 @@ class TakeConfirmScreen(ModalScreen[bool]):
             mbox_parts: list[bytes] = []
             for commit in commits:
                 ecode, patch_bytes = b4.git_run_command(
-                    topdir,
-                    ['format-patch', '--stdout', '-1', commit],
-                    decode=False)
+                    topdir, ['format-patch', '--stdout', '-1', commit], decode=False
+                )
                 if ecode != 0:
                     return False, f'format-patch failed for {commit[:12]}'
                 mbox_parts.append(patch_bytes)
@@ -994,16 +1049,13 @@ class TakeConfirmScreen(ModalScreen[bool]):
             # Test apply in a temporary sparse worktree
             try:
                 with b4.git_temp_worktree(topdir, resolved_base) as gwt:
-                    ecode, out = b4.git_run_command(
-                        gwt, ['sparse-checkout', 'set'])
+                    ecode, out = b4.git_run_command(gwt, ['sparse-checkout', 'set'])
                     if ecode > 0:
                         return False, 'failed to set up worktree'
-                    ecode, out = b4.git_run_command(
-                        gwt, ['checkout', '-f'])
+                    ecode, out = b4.git_run_command(gwt, ['checkout', '-f'])
                     if ecode > 0:
                         return False, 'failed to checkout base'
-                    ecode, out = b4.git_run_command(
-                        gwt, ['am'], stdin=ambytes)
+                    ecode, out = b4.git_run_command(gwt, ['am'], stdin=ambytes)
                     if ecode > 0:
                         for line in out.splitlines():
                             if line.startswith('Patch failed at '):
@@ -1016,29 +1068,25 @@ class TakeConfirmScreen(ModalScreen[bool]):
     def _update_status(self, text: str, level: str) -> None:
         widget = self.query_one('#takeconfirm-status', Static)
         widget.update(text)
-        widget.remove_class('takeconfirm-pass', 'takeconfirm-warn',
-                            'takeconfirm-fail')
+        widget.remove_class('takeconfirm-pass', 'takeconfirm-warn', 'takeconfirm-fail')
         widget.add_class(f'takeconfirm-{level}')
 
     async def on_worker_state_changed(self, event: Worker.StateChanged) -> None:
         if event.worker.name != '_test_take':
             return
         if event.state == WorkerState.SUCCESS and event.worker.result:
-            await self.query_one('#takeconfirm-loading', LoadingIndicator
-                                 ).remove()
+            await self.query_one('#takeconfirm-loading', LoadingIndicator).remove()
             ok, detail = event.worker.result
             if ok:
                 self._update_status(f'Test apply: {detail}', 'pass')
             else:
                 self._update_status(f'Test apply: {detail}', 'fail')
         elif event.state == WorkerState.ERROR:
-            await self.query_one('#takeconfirm-loading', LoadingIndicator
-                                 ).remove()
+            await self.query_one('#takeconfirm-loading', LoadingIndicator).remove()
             self._update_status('test apply error', 'fail')
 
     def action_confirm_take(self) -> None:
-        self.accept_series = self.query_one(
-            '#takeconfirm-accept', Checkbox).value
+        self.accept_series = self.query_one('#takeconfirm-accept', Checkbox).value
         self.dismiss(True)
 
     def action_cancel(self) -> None:
@@ -1087,8 +1135,9 @@ class SnoozeScreen(ModalScreen[Optional[Dict[str, str]]]):
     }
     """
 
-    def __init__(self, last_source: str = '', last_input: str = '',
-                 subject: str = '') -> None:
+    def __init__(
+        self, last_source: str = '', last_input: str = '', subject: str = ''
+    ) -> None:
         super().__init__()
         self._last_source = last_source
         self._last_input = last_input
@@ -1181,7 +1230,9 @@ class SnoozeScreen(ModalScreen[Optional[Dict[str, str]]]):
                 return
             # Convert date to midnight UTC datetime
             target = datetime.datetime(
-                target_date.year, target_date.month, target_date.day,
+                target_date.year,
+                target_date.month,
+                target_date.day,
                 tzinfo=datetime.timezone.utc,
             )
             until_value = target.strftime('%Y-%m-%dT%H:%M:%S')
@@ -1235,16 +1286,22 @@ class ThankScreen(ModalScreen[Optional[str]]):
     }
     """
 
-    def __init__(self, msg: email.message.EmailMessage,
-                 checkurl: Optional[str] = None) -> None:
+    def __init__(
+        self, msg: email.message.EmailMessage, checkurl: Optional[str] = None
+    ) -> None:
         super().__init__()
         self._msg = msg
         self._checkurl = checkurl
 
     def compose(self) -> ComposeResult:
         with Vertical(id='thank-dialog'):
-            yield RichLog(id='thank-viewer', highlight=False, wrap=True,
-                          markup=False, auto_scroll=False)
+            yield RichLog(
+                id='thank-viewer',
+                highlight=False,
+                wrap=True,
+                markup=False,
+                auto_scroll=False,
+            )
             if self._checkurl:
                 hint = 'e edit  |  S send now  |  W queue  |  Escape cancel'
             else:
@@ -1342,8 +1399,13 @@ class QueueScreen(ModalScreen[Optional[str]]):
 
     def compose(self) -> ComposeResult:
         with Vertical(id='queue-dialog'):
-            yield RichLog(id='queue-viewer', highlight=False, wrap=True,
-                          markup=False, auto_scroll=False)
+            yield RichLog(
+                id='queue-viewer',
+                highlight=False,
+                wrap=True,
+                markup=False,
+                auto_scroll=False,
+            )
             yield Static('Q deliver  |  Escape close', id='queue-hint')
 
     def on_mount(self) -> None:
@@ -1365,7 +1427,9 @@ class QueueScreen(ModalScreen[Optional[str]]):
         self.dismiss(None)
 
 
-class QueueDeliveryScreen(ModalScreen[Optional[Tuple[int, int, List[Tuple[str, int]]]]]):
+class QueueDeliveryScreen(
+    ModalScreen[Optional[Tuple[int, int, List[Tuple[str, int]]]]]
+):
     """Modal that processes the thanks queue with a progress bar.
 
     Dismisses with ``(delivered, still_pending, delivered_series)``
@@ -1390,9 +1454,9 @@ class QueueDeliveryScreen(ModalScreen[Optional[Tuple[int, int, List[Tuple[str, i
     }
     """
 
-    def __init__(self, total: int,
-                 dryrun: bool = False,
-                 patatt_sign: bool = True) -> None:
+    def __init__(
+        self, total: int, dryrun: bool = False, patatt_sign: bool = True
+    ) -> None:
         super().__init__()
         self._total = total
         self._dryrun = dryrun
@@ -1417,7 +1481,9 @@ class QueueDeliveryScreen(ModalScreen[Optional[Tuple[int, int, List[Tuple[str, i
 
         def _on_progress(completed: int, total: int, status: str) -> None:
             if not self._cancelled:
-                self.app.call_from_thread(self._update_progress, completed, total, status)
+                self.app.call_from_thread(
+                    self._update_progress, completed, total, status
+                )
 
         return b4.ty.process_queue(
             dryrun=self._dryrun,
@@ -1553,8 +1619,13 @@ class _FetchViewerScreen(ModalScreen[None]):
         with Vertical(id='fv-dialog'):
             yield Static(self._loading_text, id='fv-title', markup=False)
             yield LoadingIndicator(id='fv-loading')
-            yield RichLog(id='fv-viewer', highlight=False, wrap=True,
-                          markup=True, auto_scroll=False)
+            yield RichLog(
+                id='fv-viewer',
+                highlight=False,
+                wrap=True,
+                markup=True,
+                auto_scroll=False,
+            )
             yield Static('Escape close', id='fv-hint')
 
     def on_mount(self) -> None:
@@ -1623,7 +1694,9 @@ class ViewSeriesScreen(_FetchViewerScreen):
             if not first:
                 viewer.write(Rule())
             first = False
-            viewer.write(Text(f'From: {lmsg.fromname} <{lmsg.fromemail}>', style='bold'))
+            viewer.write(
+                Text(f'From: {lmsg.fromname} <{lmsg.fromemail}>', style='bold')
+            )
             if lmsg.date:
                 viewer.write(Text(f'Date: {lmsg.date}', style='bold'))
             viewer.write(Text(f'Subject: {lmsg.full_subject}', style='bold'))
@@ -1644,8 +1717,7 @@ class CIChecksScreen(_FetchViewerScreen):
 
     _loading_text = 'Fetching CI checks\u2026'
 
-    def __init__(self, pwkey: str, pwurl: str,
-                 series: Dict[str, Any]) -> None:
+    def __init__(self, pwkey: str, pwurl: str, series: Dict[str, Any]) -> None:
         super().__init__()
         self._pwkey = pwkey
         self._pwurl = pwurl
@@ -1653,15 +1725,14 @@ class CIChecksScreen(_FetchViewerScreen):
 
     def _fetch(self) -> List[Dict[str, Any]]:
         import b4.review
+
         with _quiet_worker():
             patch_ids = self._series.get('patch_ids', [])
-            return b4.review.pw_fetch_checks(
-                self._pwkey, self._pwurl, patch_ids)
+            return b4.review.pw_fetch_checks(self._pwkey, self._pwurl, patch_ids)
 
     def _show_result(self, checks: List[Dict[str, Any]]) -> None:
         series_name = self._series.get('name') or '(no subject)'
-        self.query_one('#fv-title', Static).update(
-            f'CI checks \u2014 {series_name}')
+        self.query_one('#fv-title', Static).update(f'CI checks \u2014 {series_name}')
         viewer = self.query_one('#fv-viewer', RichLog)
 
         if not checks:
@@ -1715,7 +1786,9 @@ class CIChecksScreen(_FetchViewerScreen):
                     viewer.write(Text(f'    \u2192 {target_url}', style='dim'))
 
 
-def NewerRevisionWarningScreen(current_rev: int, newer_versions: List[int]) -> ConfirmScreen:
+def NewerRevisionWarningScreen(
+    current_rev: int, newer_versions: List[int]
+) -> ConfirmScreen:
     """Build a confirmation screen warning about newer revisions."""
     versions = ', '.join(f'v{v}' for v in newer_versions)
     return ConfirmScreen(
@@ -1773,14 +1846,16 @@ class RevisionChoiceScreen(ModalScreen[Optional[int]]):
             yield Static('Newer revision available', id='rev-choice-title')
             yield Static(
                 f'This series was tracked as v{self._current_rev}, but '
-                f'v{self._newest_rev} is now available.')
+                f'v{self._newest_rev} is now available.'
+            )
             yield Static('')
             yield Static('Which version would you like to review?')
             yield Static(
                 f'n review v{self._newest_rev} (newer)  |  '
                 f'o review v{self._current_rev} (older)  |  '
                 f'Escape cancel',
-                id='rev-choice-hint')
+                id='rev-choice-hint',
+            )
 
     def action_newer(self) -> None:
         self.dismiss(self._newest_rev)
@@ -1848,9 +1923,13 @@ class RebaseScreen(ModalScreen[bool]):
     }
     """
 
-    def __init__(self, current_branch: str, review_branch: str,
-                 recent_branches: Optional[List[str]] = None,
-                 subject: str = '') -> None:
+    def __init__(
+        self,
+        current_branch: str,
+        review_branch: str,
+        recent_branches: Optional[List[str]] = None,
+        subject: str = '',
+    ) -> None:
         super().__init__()
         self._current_branch = current_branch
         self._review_branch = review_branch
@@ -1865,14 +1944,22 @@ class RebaseScreen(ModalScreen[bool]):
             dialog.border_title = 'Rebase Series'
             if self._subject:
                 yield Static(self._subject, id='rebase-title', markup=False)
-            yield Static(f'Review branch: {self._review_branch}', classes='rebase-value')
+            yield Static(
+                f'Review branch: {self._review_branch}', classes='rebase-value'
+            )
             yield Static('Rebase on top of:', classes='rebase-label')
-            suggester = SuggestFromList(self._recent_branches, case_sensitive=True) if self._recent_branches else None
-            yield Input(value=self._current_branch, id='rebase-target', suggester=suggester)
+            suggester = (
+                SuggestFromList(self._recent_branches, case_sensitive=True)
+                if self._recent_branches
+                else None
+            )
+            yield Input(
+                value=self._current_branch, id='rebase-target', suggester=suggester
+            )
             yield Static('', id='rebase-status', markup=False)
             yield Static(
-                'Enter check  |  Ctrl-y confirm  |  Escape cancel',
-                id='rebase-hint')
+                'Enter check  |  Ctrl-y confirm  |  Escape cancel', id='rebase-hint'
+            )
 
     def on_mount(self) -> None:
         self.query_one('#rebase-target', Input).focus()
@@ -1898,18 +1985,17 @@ class RebaseScreen(ModalScreen[bool]):
             self._run_test_apply(value)
         else:
             self._update_status('Testing applicability\u2026', 'warn')
-            self.run_worker(
-                self._prepare_local, name='_prepare', thread=True)
+            self.run_worker(self._prepare_local, name='_prepare', thread=True)
 
     def _prepare_local(self) -> bytes:
         """Build mbox from the local review branch patches."""
         import b4.review
+
         topdir = b4.git_get_toplevel()
         if not topdir:
             raise RuntimeError('Not in a git repository')
         with _quiet_worker():
-            _cover, tracking = b4.review.load_tracking(
-                topdir, self._review_branch)
+            _cover, tracking = b4.review.load_tracking(topdir, self._review_branch)
             series_data = tracking.get('series', {})
             base_commit = series_data.get('base-commit', '')
             first_patch = series_data.get('first-patch-commit', '')
@@ -1919,9 +2005,10 @@ class RebaseScreen(ModalScreen[bool]):
                 range_start = base_commit
             range_end = f'{self._review_branch}~1'
             ecode, ambytes = b4.git_run_command(
-                topdir, ['format-patch', '--stdout',
-                         f'{range_start}..{range_end}'],
-                decode=False)
+                topdir,
+                ['format-patch', '--stdout', f'{range_start}..{range_end}'],
+                decode=False,
+            )
             if ecode > 0:
                 raise RuntimeError('Could not generate patches from review branch')
             return ambytes
@@ -1932,28 +2019,25 @@ class RebaseScreen(ModalScreen[bool]):
         assert ambytes is not None
         self.run_worker(
             lambda: self._test_apply_at(ambytes, branch),
-            name='_test_apply', thread=True,
+            name='_test_apply',
+            thread=True,
         )
 
     @staticmethod
-    def _test_apply_at(ambytes: bytes,
-                       branch: str) -> Tuple[bool, str]:
+    def _test_apply_at(ambytes: bytes, branch: str) -> Tuple[bool, str]:
         topdir = b4.git_get_toplevel()
         if not topdir:
             return False, 'not in a git repository'
         with _quiet_worker():
             try:
                 with b4.git_temp_worktree(topdir, branch) as gwt:
-                    ecode, out = b4.git_run_command(
-                        gwt, ['sparse-checkout', 'set'])
+                    ecode, out = b4.git_run_command(gwt, ['sparse-checkout', 'set'])
                     if ecode > 0:
                         return False, 'failed to set up worktree'
-                    ecode, out = b4.git_run_command(
-                        gwt, ['checkout', '-f'])
+                    ecode, out = b4.git_run_command(gwt, ['checkout', '-f'])
                     if ecode > 0:
                         return False, 'failed to checkout base'
-                    ecode, out = b4.git_run_command(
-                        gwt, ['am'], stdin=ambytes)
+                    ecode, out = b4.git_run_command(gwt, ['am'], stdin=ambytes)
                     if ecode > 0:
                         for line in out.splitlines():
                             if line.startswith('Patch failed at '):
@@ -1987,7 +2071,9 @@ class RebaseScreen(ModalScreen[bool]):
             self.notify('Target branch is required', severity='error')
             return
         if not b4.git_branch_exists(None, self.target_result):
-            self.notify(f'Branch does not exist: {self.target_result}', severity='error')
+            self.notify(
+                f'Branch does not exist: {self.target_result}', severity='error'
+            )
             return
         self.dismiss(True)
 
@@ -2050,12 +2136,15 @@ class TargetBranchScreen(ModalScreen[Optional[str]]):
     }
     """
 
-    def __init__(self, current_target: str = '',
-                 suggestions: Optional[List[str]] = None,
-                 subject: str = '',
-                 message_id: str = '',
-                 revision: Optional[int] = None,
-                 review_branch: Optional[str] = None) -> None:
+    def __init__(
+        self,
+        current_target: str = '',
+        suggestions: Optional[List[str]] = None,
+        subject: str = '',
+        message_id: str = '',
+        revision: Optional[int] = None,
+        review_branch: Optional[str] = None,
+    ) -> None:
         super().__init__()
         self._current_target = current_target
         self._suggestions = suggestions
@@ -2072,9 +2161,19 @@ class TargetBranchScreen(ModalScreen[Optional[str]]):
             dialog.border_title = 'Set Target Branch'
             if self._subject:
                 yield Static(self._subject, id='target-branch-title', markup=False)
-            yield Static('Target branch for this series:', classes='target-branch-label')
-            suggester = SuggestFromList(self._suggestions, case_sensitive=True) if self._suggestions else None
-            yield Input(value=self._current_target, id='target-branch-input', suggester=suggester)
+            yield Static(
+                'Target branch for this series:', classes='target-branch-label'
+            )
+            suggester = (
+                SuggestFromList(self._suggestions, case_sensitive=True)
+                if self._suggestions
+                else None
+            )
+            yield Input(
+                value=self._current_target,
+                id='target-branch-input',
+                suggester=suggester,
+            )
             yield Static('', id='target-branch-status', markup=False)
             yield Static(
                 'Enter check  |  Ctrl-y confirm  |  Ctrl-d clear  |  Escape cancel',
@@ -2106,12 +2205,10 @@ class TargetBranchScreen(ModalScreen[Optional[str]]):
             self._check_applicability(value)
         elif self._review_branch:
             self._update_status('Testing applicability\u2026', 'warn')
-            self.run_worker(
-                self._prepare_local, name='_prepare', thread=True)
+            self.run_worker(self._prepare_local, name='_prepare', thread=True)
         elif self._message_id:
             self._update_status('Fetching series\u2026', 'warn')
-            self.run_worker(
-                self._prepare_remote, name='_prepare', thread=True)
+            self.run_worker(self._prepare_remote, name='_prepare', thread=True)
         else:
             self._update_status(f'Branch exists: {value}', 'pass')
 
@@ -2122,8 +2219,7 @@ class TargetBranchScreen(ModalScreen[Optional[str]]):
         if not topdir:
             raise RuntimeError('Not in a git repository')
         with _quiet_worker():
-            _cover, tracking = b4.review.load_tracking(
-                topdir, self._review_branch)
+            _cover, tracking = b4.review.load_tracking(topdir, self._review_branch)
             series_data = tracking.get('series', {})
             base_commit = series_data.get('base-commit', '')
             first_patch = series_data.get('first-patch-commit', '')
@@ -2133,9 +2229,10 @@ class TargetBranchScreen(ModalScreen[Optional[str]]):
                 range_start = base_commit
             range_end = f'{self._review_branch}~1'
             ecode, ambytes = b4.git_run_command(
-                topdir, ['format-patch', '--stdout',
-                         f'{range_start}..{range_end}'],
-                decode=False)
+                topdir,
+                ['format-patch', '--stdout', f'{range_start}..{range_end}'],
+                decode=False,
+            )
             if ecode > 0:
                 raise RuntimeError('Could not generate patches from review branch')
             return None, ambytes
@@ -2144,12 +2241,16 @@ class TargetBranchScreen(ModalScreen[Optional[str]]):
         """Fetch series from lore and build LoreSeries + ambytes."""
         with _quiet_worker():
             msgs = b4.review._retrieve_messages(self._message_id)
-            lser = b4.review._get_lore_series(
-                msgs, wantver=self._revision)
+            lser = b4.review._get_lore_series(msgs, wantver=self._revision)
             am_msgs = lser.get_am_ready(
-                noaddtrailers=True, addmysob=False, addlink=False,
-                cherrypick=None, copyccs=False, allowbadchars=False,
-                showchecks=False)
+                noaddtrailers=True,
+                addmysob=False,
+                addlink=False,
+                cherrypick=None,
+                copyccs=False,
+                allowbadchars=False,
+                showchecks=False,
+            )
             if not am_msgs:
                 raise LookupError('No patches ready for applying')
             ifh = io.BytesIO()
@@ -2168,16 +2269,16 @@ class TargetBranchScreen(ModalScreen[Optional[str]]):
         # Fast blob check if we have a LoreSeries with indexes (remote fetch)
         if self._lser and self._lser.indexes:
             try:
-                checked, mismatches = self._lser.check_applies_clean(
-                    topdir, at=branch)
+                checked, mismatches = self._lser.check_applies_clean(topdir, at=branch)
                 if len(mismatches) == 0:
-                    self._update_status(
-                        f'Apply results: clean ({branch})', 'pass')
+                    self._update_status(f'Apply results: clean ({branch})', 'pass')
                     return
                 matched = checked - len(mismatches)
                 self._update_status(
                     f'Apply results: {matched}/{checked} a/b blobs match'
-                    f' \u2014 testing\u2026', 'warn')
+                    f' \u2014 testing\u2026',
+                    'warn',
+                )
             except Exception:
                 self._update_status('Testing applicability\u2026', 'warn')
         else:
@@ -2190,28 +2291,25 @@ class TargetBranchScreen(ModalScreen[Optional[str]]):
         ambytes = self._ambytes
         self.run_worker(
             lambda: self._test_apply_at(ambytes, branch),
-            name='_test_apply', thread=True,
+            name='_test_apply',
+            thread=True,
         )
 
     @staticmethod
-    def _test_apply_at(ambytes: bytes,
-                       branch: str) -> Tuple[bool, str]:
+    def _test_apply_at(ambytes: bytes, branch: str) -> Tuple[bool, str]:
         topdir = b4.git_get_toplevel()
         if not topdir:
             return False, 'not in a git repository'
         with _quiet_worker():
             try:
                 with b4.git_temp_worktree(topdir, branch) as gwt:
-                    ecode, out = b4.git_run_command(
-                        gwt, ['sparse-checkout', 'set'])
+                    ecode, out = b4.git_run_command(gwt, ['sparse-checkout', 'set'])
                     if ecode > 0:
                         return False, 'failed to set up worktree'
-                    ecode, out = b4.git_run_command(
-                        gwt, ['checkout', '-f'])
+                    ecode, out = b4.git_run_command(gwt, ['checkout', '-f'])
                     if ecode > 0:
                         return False, 'failed to checkout base'
-                    ecode, out = b4.git_run_command(
-                        gwt, ['am'], stdin=ambytes)
+                    ecode, out = b4.git_run_command(gwt, ['am'], stdin=ambytes)
                     if ecode > 0:
                         for line in out.splitlines():
                             if line.startswith('Patch failed at '):
@@ -2242,7 +2340,9 @@ class TargetBranchScreen(ModalScreen[Optional[str]]):
     def action_confirm(self) -> None:
         value = self.query_one('#target-branch-input', Input).value.strip()
         if not value:
-            self.notify('Branch name is required (use Ctrl-d to clear)', severity='error')
+            self.notify(
+                'Branch name is required (use Ctrl-d to clear)', severity='error'
+            )
             return
         if not b4.git_branch_exists(None, value):
             self.notify(f'Branch does not exist: {value}', severity='error')
@@ -2256,9 +2356,9 @@ class TargetBranchScreen(ModalScreen[Optional[str]]):
         self.dismiss(None)
 
 
-def AbandonConfirmScreen(change_id: str, review_branch: str,
-                         has_branch: bool,
-                         subject: str = '') -> ConfirmScreen:
+def AbandonConfirmScreen(
+    change_id: str, review_branch: str, has_branch: bool, subject: str = ''
+) -> ConfirmScreen:
     """Build a confirmation screen for abandon operation."""
     body = [f'Change-ID: {change_id}']
     if has_branch:
@@ -2277,9 +2377,9 @@ def AbandonConfirmScreen(change_id: str, review_branch: str,
     )
 
 
-def ArchiveConfirmScreen(change_id: str, review_branch: str,
-                         has_branch: bool,
-                         subject: str = '') -> ConfirmScreen:
+def ArchiveConfirmScreen(
+    change_id: str, review_branch: str, has_branch: bool, subject: str = ''
+) -> ConfirmScreen:
     """Build a confirmation screen for archive operation."""
     body = [f'Change-ID: {change_id}']
     if has_branch:
@@ -2337,11 +2437,15 @@ class RangeDiffScreen(JKListNavMixin, ModalScreen[Optional[int]]):
         self._current_revision = current_revision
         self._revisions = sorted(
             [r for r in revisions if r['revision'] != current_revision],
-            key=lambda r: r['revision'], reverse=True)
+            key=lambda r: r['revision'],
+            reverse=True,
+        )
 
     def compose(self) -> ComposeResult:
         with Vertical(id='rangediff-dialog'):
-            yield Label(f'Range-diff against v{self._current_revision} \u2014 select version:')
+            yield Label(
+                f'Range-diff against v{self._current_revision} \u2014 select version:'
+            )
             items = []
             for r in self._revisions:
                 subject = r.get('subject', '(no subject)')
@@ -2431,8 +2535,10 @@ class SetStateScreen(JKListNavMixin, ModalScreen[Optional[Tuple[str, bool]]]):
         with Vertical(id='state-dialog'):
             yield Label('Set state (Enter=confirm, Esc=cancel):')
             yield ListView(
-                *[StateOption(s['slug'], s['name'], s['slug'] == self._current_state)
-                  for s in self._states],
+                *[
+                    StateOption(s['slug'], s['name'], s['slug'] == self._current_state)
+                    for s in self._states
+                ],
                 id='state-list',
             )
             yield Checkbox('Archived', False, id='state-archived')
@@ -2459,7 +2565,9 @@ class SetStateScreen(JKListNavMixin, ModalScreen[Optional[Tuple[str, bool]]]):
 
     def _do_confirm(self) -> None:
         lv = self.query_one('#state-list', ListView)
-        if lv.highlighted_child is not None and isinstance(lv.highlighted_child, StateOption):
+        if lv.highlighted_child is not None and isinstance(
+            lv.highlighted_child, StateOption
+        ):
             slug = lv.highlighted_child.slug
         else:
             self.dismiss(None)
@@ -2501,8 +2609,15 @@ class ApplyStateModal(ModalScreen[Tuple[int, int, str]]):
     }
     """
 
-    def __init__(self, pwkey: str, pwurl: str, patch_ids: List[int],
-                 new_state: str, archived: bool, series_name: str) -> None:
+    def __init__(
+        self,
+        pwkey: str,
+        pwurl: str,
+        patch_ids: List[int],
+        new_state: str,
+        archived: bool,
+        series_name: str,
+    ) -> None:
         super().__init__()
         self._pwkey = pwkey
         self._pwurl = pwurl
@@ -2517,8 +2632,12 @@ class ApplyStateModal(ModalScreen[Tuple[int, int, str]]):
         with Vertical(id='apply-dialog'):
             yield Label(f'Setting state to: {self._new_state}', id='apply-title')
             yield Label(self._series_name, id='apply-series', markup=False)
-            yield Label(f'Processing 0/{len(self._patch_ids)} patches...', id='apply-status')
-            yield ProgressBar(total=len(self._patch_ids), show_eta=False, id='apply-progress')
+            yield Label(
+                f'Processing 0/{len(self._patch_ids)} patches...', id='apply-status'
+            )
+            yield ProgressBar(
+                total=len(self._patch_ids), show_eta=False, id='apply-progress'
+            )
 
     def on_mount(self) -> None:
         self.run_worker(self._apply_states, name='_apply_states', thread=True)
@@ -2600,9 +2719,13 @@ class UpdateAllScreen(ModalScreen[Dict[str, Any]]):
         Binding('q', 'cancel', 'Cancel', show=False),
     ]
 
-    def __init__(self, series_list: List[Dict[str, Any]],
-                 identifier: str, linkmask: str,
-                 topdir: Optional[str] = None) -> None:
+    def __init__(
+        self,
+        series_list: List[Dict[str, Any]],
+        identifier: str,
+        linkmask: str,
+        topdir: Optional[str] = None,
+    ) -> None:
         super().__init__()
         self._series_list = series_list
         self._identifier = identifier
@@ -2624,9 +2747,13 @@ class UpdateAllScreen(ModalScreen[Dict[str, Any]]):
             count = len(self._series_list)
             title = 'Updating series' if count == 1 else 'Updating all tracked series'
             yield Label(title, id='updateall-title')
-            yield Label(f'Checking 0/{len(self._series_list)} series...', id='updateall-status')
+            yield Label(
+                f'Checking 0/{len(self._series_list)} series...', id='updateall-status'
+            )
             yield Label('', id='updateall-series', markup=False)
-            yield ProgressBar(total=len(self._series_list), show_eta=False, id='updateall-progress')
+            yield ProgressBar(
+                total=len(self._series_list), show_eta=False, id='updateall-progress'
+            )
 
     def on_mount(self) -> None:
         self.run_worker(self._do_updates, name='_do_updates', thread=True)
@@ -2643,7 +2770,8 @@ class UpdateAllScreen(ModalScreen[Dict[str, Any]]):
             if self._topdir:
                 try:
                     rescan = b4.review.tracking.rescan_branches(
-                        self._identifier, self._topdir)
+                        self._identifier, self._topdir
+                    )
                     self._result['gone'] = rescan.get('gone', 0)
                 except Exception as ex:
                     logger.warning('Pre-update rescan failed: %s', ex)
@@ -2656,7 +2784,9 @@ class UpdateAllScreen(ModalScreen[Dict[str, Any]]):
                 self.app.call_from_thread(self._update_progress, i, subject)
 
                 r = b4.review.update_series_tracking(
-                    series, self._identifier, self._linkmask,
+                    series,
+                    self._identifier,
+                    self._linkmask,
                     topdir=self._topdir,
                 )
                 self._result['series_checked'] += 1
@@ -2748,12 +2878,15 @@ class BaseSelectionScreen(ModalScreen[Optional[str]]):
     }
     """
 
-    def __init__(self, initial_base: str,
-                 lser: 'b4.LoreSeries',
-                 ambytes: bytes,
-                 base_suggestions: Optional[List[str]] = None,
-                 base_hint: str = '',
-                 subject: str = '') -> None:
+    def __init__(
+        self,
+        initial_base: str,
+        lser: 'b4.LoreSeries',
+        ambytes: bytes,
+        base_suggestions: Optional[List[str]] = None,
+        base_hint: str = '',
+        subject: str = '',
+    ) -> None:
         """Initialize the base selection screen.
 
         Args:
@@ -2779,18 +2912,24 @@ class BaseSelectionScreen(ModalScreen[Optional[str]]):
             if self._subject:
                 yield Static(self._subject, id='base-title', markup=False)
             if self._base_hint:
-                yield Static(self._base_hint, id='base-hint',
-                             classes='base-warn', markup=False)
+                yield Static(
+                    self._base_hint, id='base-hint', classes='base-warn', markup=False
+                )
             yield Static('Base:', markup=False)
-            suggester = SuggestFromList(
-                self._base_suggestions, case_sensitive=True,
-            ) if self._base_suggestions else None
-            yield Input(value=self._initial_base, id='base-input',
-                        suggester=suggester)
+            suggester = (
+                SuggestFromList(
+                    self._base_suggestions,
+                    case_sensitive=True,
+                )
+                if self._base_suggestions
+                else None
+            )
+            yield Input(value=self._initial_base, id='base-input', suggester=suggester)
             yield Static('', id='base-status', markup=False)
             yield Static(
                 'Enter check  |  Ctrl-y confirm  |  Escape cancel',
-                id='base-footer', markup=False,
+                id='base-footer',
+                markup=False,
             )
 
     def on_mount(self) -> None:
@@ -2812,8 +2951,7 @@ class BaseSelectionScreen(ModalScreen[Optional[str]]):
             self._update_status('not in a git repository', 'fail')
             return
 
-        ecode, out = b4.git_run_command(
-            topdir, ['rev-parse', '--verify', value])
+        ecode, out = b4.git_run_command(topdir, ['rev-parse', '--verify', value])
         if ecode != 0:
             self._update_status(f'not a valid ref: {value}', 'fail')
             self._resolved_base = None
@@ -2824,22 +2962,24 @@ class BaseSelectionScreen(ModalScreen[Optional[str]]):
         if self._lser.indexes:
             try:
                 checked, mismatches = self._lser.check_applies_clean(
-                    topdir, at=self._resolved_base)
+                    topdir, at=self._resolved_base
+                )
                 if len(mismatches) == 0:
                     self._update_status(
-                        f'Apply results: clean ({self._resolved_base[:12]})',
-                        'pass')
+                        f'Apply results: clean ({self._resolved_base[:12]})', 'pass'
+                    )
                 else:
                     matched = checked - len(mismatches)
                     self._update_status(
                         f'Apply results: {matched}/{checked} a/b blobs match'
-                        f' — testing\u2026', 'warn')
+                        f' — testing\u2026',
+                        'warn',
+                    )
                     self._run_test_apply()
             except Exception:
                 self._update_status('could not check applicability', 'warn')
         else:
-            self._update_status(
-                f'will use {self._resolved_base[:12]}', 'pass')
+            self._update_status(f'will use {self._resolved_base[:12]}', 'pass')
 
     def on_input_submitted(self, event: Input.Submitted) -> None:
         """Validate the entered base ref."""
@@ -2858,12 +2998,12 @@ class BaseSelectionScreen(ModalScreen[Optional[str]]):
             return
         self.run_worker(
             lambda: self._test_apply_at(self._ambytes, base),
-            name='_test_apply', thread=True,
+            name='_test_apply',
+            thread=True,
         )
 
     @staticmethod
-    def _test_apply_at(ambytes: bytes,
-                       base: str) -> Tuple[bool, str]:
+    def _test_apply_at(ambytes: bytes, base: str) -> Tuple[bool, str]:
         """Run git-am in a throwaway sparse worktree. Returns (ok, detail)."""
         topdir = b4.git_get_toplevel()
         if not topdir:
@@ -2871,16 +3011,13 @@ class BaseSelectionScreen(ModalScreen[Optional[str]]):
         with _quiet_worker():
             try:
                 with b4.git_temp_worktree(topdir, base) as gwt:
-                    ecode, out = b4.git_run_command(
-                        gwt, ['sparse-checkout', 'set'])
+                    ecode, out = b4.git_run_command(gwt, ['sparse-checkout', 'set'])
                     if ecode > 0:
                         return False, 'failed to set up worktree'
-                    ecode, out = b4.git_run_command(
-                        gwt, ['checkout', '-f'])
+                    ecode, out = b4.git_run_command(gwt, ['checkout', '-f'])
                     if ecode > 0:
                         return False, 'failed to checkout base'
-                    ecode, out = b4.git_run_command(
-                        gwt, ['am'], stdin=ambytes)
+                    ecode, out = b4.git_run_command(gwt, ['am'], stdin=ambytes)
                     if ecode > 0:
                         # Extract just the "Patch failed" line
                         for line in out.splitlines():
@@ -2915,8 +3052,7 @@ class BaseSelectionScreen(ModalScreen[Optional[str]]):
         topdir = b4.git_get_toplevel()
         if not topdir:
             return
-        ecode, out = b4.git_run_command(
-            topdir, ['rev-parse', '--verify', value])
+        ecode, out = b4.git_run_command(topdir, ['rev-parse', '--verify', value])
         if ecode != 0:
             self.notify(f'Not a valid ref: {value}', severity='error')
             return
@@ -2970,14 +3106,10 @@ class UpdateRevisionScreen(JKListNavMixin, ModalScreen[Optional[int]]):
     }
     """
 
-    def __init__(self, current_revision: int,
-                 revisions: List[Dict[str, Any]]) -> None:
+    def __init__(self, current_revision: int, revisions: List[Dict[str, Any]]) -> None:
         super().__init__()
         self._current_revision = current_revision
-        self._revisions = [
-            r for r in revisions
-            if r['revision'] > current_revision
-        ]
+        self._revisions = [r for r in revisions if r['revision'] > current_revision]
 
     def compose(self) -> ComposeResult:
         with Vertical(id='update-rev-dialog'):
@@ -2985,15 +3117,15 @@ class UpdateRevisionScreen(JKListNavMixin, ModalScreen[Optional[int]]):
             yield Static(
                 f'Current revision: v{self._current_revision}\n'
                 'The current review branch will be archived.\n'
-                'Reviews on unchanged patches will be preserved.')
+                'Reviews on unchanged patches will be preserved.'
+            )
             items = []
             for r in self._revisions:
                 subject = r.get('subject', '(no subject)')
                 label = f'v{r["revision"]}  {subject}'
                 items.append(ListItem(Label(label, markup=False)))
             yield ListView(*items, id='update-rev-list')
-            yield Static('Enter confirm  |  Escape cancel',
-                         id='update-rev-hint')
+            yield Static('Enter confirm  |  Escape cancel', id='update-rev-hint')
 
     def on_mount(self) -> None:
         self.query_one('#update-rev-list', ListView).focus()
@@ -3115,12 +3247,12 @@ class TrackingCheckResultsScreen(ModalScreen[str]):
     }
 
     def __init__(
-            self,
-            title: str,
-            patch_labels: List[str],
-            patch_subjects: List[str],
-            tools: List[str],
-            matrix: Dict[Tuple[int, str], Dict[str, str]],
+        self,
+        title: str,
+        patch_labels: List[str],
+        patch_subjects: List[str],
+        tools: List[str],
+        matrix: Dict[Tuple[int, str], Dict[str, str]],
     ) -> None:
         """Create a check results modal.
 
@@ -3141,12 +3273,24 @@ class TrackingCheckResultsScreen(ModalScreen[str]):
     def compose(self) -> ComposeResult:
         with Vertical(id='tcr-dialog'):
             yield Static(self._title, id='tcr-title', markup=False)
-            yield RichLog(id='tcr-matrix', highlight=False, wrap=False,
-                          markup=True, auto_scroll=False)
-            yield RichLog(id='tcr-detail', highlight=False, wrap=True,
-                          markup=True, auto_scroll=False)
-            yield Static(Text('[j/k] navigate  [Enter] details  [R] rerun  [q] close'),
-                         id='tcr-hint')
+            yield RichLog(
+                id='tcr-matrix',
+                highlight=False,
+                wrap=False,
+                markup=True,
+                auto_scroll=False,
+            )
+            yield RichLog(
+                id='tcr-detail',
+                highlight=False,
+                wrap=True,
+                markup=True,
+                auto_scroll=False,
+            )
+            yield Static(
+                Text('[j/k] navigate  [Enter] details  [R] rerun  [q] close'),
+                id='tcr-hint',
+            )
 
     def on_mount(self) -> None:
         self.query_one('#tcr-detail', RichLog).display = False
@@ -3161,7 +3305,9 @@ class TrackingCheckResultsScreen(ModalScreen[str]):
             return
 
         # Compute column widths
-        label_w = max(len(lbl) for lbl in self._patch_labels) if self._patch_labels else 5
+        label_w = (
+            max(len(lbl) for lbl in self._patch_labels) if self._patch_labels else 5
+        )
         col_w = max(max(len(t) for t in self._tools), 8)
         ci_total = (col_w + 2) * len(self._tools)
         # pointer(2) + label + gap(2) + subject + gap(2) + ci_columns
@@ -3180,16 +3326,18 @@ class TrackingCheckResultsScreen(ModalScreen[str]):
 
         # Data rows with cursor highlight
         for pidx, label in enumerate(self._patch_labels):
-            is_selected = (pidx == self._cursor_row)
+            is_selected = pidx == self._cursor_row
             row = Text(style='on grey27' if is_selected else '')
             pointer = '\u25b6 ' if is_selected else '  '
             row.append(pointer)
             row.append(f'{label:>{label_w}s}', style='bold' if is_selected else '')
             row.append('  ')
             # Truncated subject
-            subj = self._patch_subjects[pidx] if pidx < len(self._patch_subjects) else ''
+            subj = (
+                self._patch_subjects[pidx] if pidx < len(self._patch_subjects) else ''
+            )
             if len(subj) > subj_w:
-                subj = subj[:subj_w - 1] + '\u2026'
+                subj = subj[: subj_w - 1] + '\u2026'
             row.append(f'{subj:<{subj_w}s}  ', style='' if is_selected else 'dim')
             for tool in self._tools:
                 cell = self._matrix.get((pidx, tool))
@@ -3206,8 +3354,7 @@ class TrackingCheckResultsScreen(ModalScreen[str]):
 
         # Scroll so the cursor row is visible (2 header lines + data rows)
         cursor_line = 2 + self._cursor_row
-        viewer.scroll_to(y=max(0, cursor_line - viewer.size.height // 2),
-                         animate=False)
+        viewer.scroll_to(y=max(0, cursor_line - viewer.size.height // 2), animate=False)
 
     def _render_detail(self, pidx: int) -> None:
         detail = self.query_one('#tcr-detail', RichLog)
@@ -3309,7 +3456,8 @@ class TrackingCheckResultsScreen(ModalScreen[str]):
             self.query_one('#tcr-detail', RichLog).display = False
             self.query_one('#tcr-matrix', RichLog).display = True
             self.query_one('#tcr-hint', Static).update(
-                Text('[j/k] navigate  [Enter] details  [R] rerun  [q] close'))
+                Text('[j/k] navigate  [Enter] details  [R] rerun  [q] close')
+            )
             return
         self.dismiss('close')
 
diff --git a/src/b4/review_tui/_pw_app.py b/src/b4/review_tui/_pw_app.py
index 2b0c10a..6efb55a 100644
--- a/src/b4/review_tui/_pw_app.py
+++ b/src/b4/review_tui/_pw_app.py
@@ -36,9 +36,12 @@ from b4.review_tui._modals import (
 )
 
 
-def _format_series_label(series: Dict[str, Any], tracked: bool,
-                         ts: Optional[Dict[str, str]] = None,
-                         show_delegate: bool = True) -> Text:
+def _format_series_label(
+    series: Dict[str, Any],
+    tracked: bool,
+    ts: Optional[Dict[str, str]] = None,
+    show_delegate: bool = True,
+) -> Text:
     """Build a Text label for a series row.
 
     *ts* is a resolved theme styles dict from :func:`resolve_styles`.
@@ -46,12 +49,19 @@ def _format_series_label(series: Dict[str, Any], tracked: bool,
     """
     track_mark = 'T' if tracked else ' '
     ci_state = series.get('check') or 'pending'
-    ci_map = ci_styles(ts) if ts else {
-        'pending': 'dim', 'success': 'green', 'warning': 'red', 'fail': 'bold red',
-    }
+    ci_map = (
+        ci_styles(ts)
+        if ts
+        else {
+            'pending': 'dim',
+            'success': 'green',
+            'warning': 'red',
+            'fail': 'bold red',
+        }
+    )
     ci_style = ci_map.get(ci_state, ci_map['pending'])
     date = (series.get('date') or '')[:10]
-    state = f"{(series.get('state') or 'new'):<15s}"
+    state = f'{(series.get("state") or "new"):<15s}'
     submitter = pad_display(series.get('submitter') or 'Unknown', 30)
     name = series.get('name') or '(no subject)'
     text = Text()
@@ -70,8 +80,9 @@ class PwSeriesItem(ListItem):
 
     ACTION_REQUIRED_STATES = ('new', 'under-review')
 
-    def __init__(self, series: Dict[str, Any], tracked: bool = False,
-                 show_delegate: bool = True) -> None:
+    def __init__(
+        self, series: Dict[str, Any], tracked: bool = False, show_delegate: bool = True
+    ) -> None:
         super().__init__()
         self.series = series
         self.tracked = tracked
@@ -84,9 +95,12 @@ class PwSeriesItem(ListItem):
 
     def compose(self) -> ComposeResult:
         ts = resolve_styles(self.app)
-        yield Label(_format_series_label(self.series, self.tracked, ts,
-                                         show_delegate=self.show_delegate),
-                    markup=False)
+        yield Label(
+            _format_series_label(
+                self.series, self.tracked, ts, show_delegate=self.show_delegate
+            ),
+            markup=False,
+        )
 
 
 class PwApp(App[None]):
@@ -162,9 +176,16 @@ class PwApp(App[None]):
     """
 
     BINDING_GROUPS = {
-        'view': 'Series', 'ci_checks': 'Series', 'track_series': 'Series',
-        'set_state': 'Series', 'hide_series': 'Series',
-        'refresh': 'App', 'limit': 'App', 'toggle_show_hidden': 'App', 'quit': 'App', 'help': 'App',
+        'view': 'Series',
+        'ci_checks': 'Series',
+        'track_series': 'Series',
+        'set_state': 'Series',
+        'hide_series': 'Series',
+        'refresh': 'App',
+        'limit': 'App',
+        'toggle_show_hidden': 'App',
+        'quit': 'App',
+        'help': 'App',
     }
 
     BINDINGS = [
@@ -185,8 +206,14 @@ class PwApp(App[None]):
         Binding('question_mark', 'help', 'help', key_display='?'),
     ]
 
-    def __init__(self, pwkey: str, pwurl: str, pwproj: str,
-                 email_dryrun: bool = False, patatt_sign: bool = True) -> None:
+    def __init__(
+        self,
+        pwkey: str,
+        pwurl: str,
+        pwproj: str,
+        email_dryrun: bool = False,
+        patatt_sign: bool = True,
+    ) -> None:
         super().__init__()
         self._pwkey = pwkey
         self._pwurl = pwurl
@@ -229,9 +256,13 @@ class PwApp(App[None]):
         if not self._tracking_identifier:
             # Fall back to patchwork project name
             self._tracking_identifier = self._pwproj
-        if self._tracking_identifier and b4.review.tracking.db_exists(self._tracking_identifier):
+        if self._tracking_identifier and b4.review.tracking.db_exists(
+            self._tracking_identifier
+        ):
             self._tracking_enabled = True
-            self._tracked_ids = b4.review.tracking.get_tracked_pw_series_ids(self._tracking_identifier)
+            self._tracked_ids = b4.review.tracking.get_tracked_pw_series_ids(
+                self._tracking_identifier
+            )
 
     def _save_local_data(self) -> None:
         path = self._get_local_data_path()
@@ -273,7 +304,9 @@ class PwApp(App[None]):
             elif event.state == WorkerState.ERROR:
                 for widget in self.query('#pw-loading'):
                     await widget.remove()
-                self.query_one('#pw-title', Static).update(' Patchwork — error fetching series')
+                self.query_one('#pw-title', Static).update(
+                    ' Patchwork — error fetching series'
+                )
                 self.notify(str(event.worker.error), severity='error')
 
     async def _populate(self, series_list: List[Dict[str, Any]]) -> None:
@@ -301,16 +334,23 @@ class PwApp(App[None]):
                 visible.append((s, False))
         if self._limit_pattern:
             visible = [
-                (s, h) for s, h in visible
+                (s, h)
+                for s, h in visible
                 if self._matches_limit(s, self._limit_pattern)
             ]
         limit_suffix = f', limit: {self._limit_pattern}' if self._limit_pattern else ''
         if hidden_count and not self._show_hidden:
-            title.update(f' Patchwork — {len(visible)} series ({hidden_count} hidden{limit_suffix})')
+            title.update(
+                f' Patchwork — {len(visible)} series ({hidden_count} hidden{limit_suffix})'
+            )
         elif hidden_count and self._show_hidden:
-            title.update(f' Patchwork — {len(visible)} series (showing {hidden_count} hidden{limit_suffix})')
+            title.update(
+                f' Patchwork — {len(visible)} series (showing {hidden_count} hidden{limit_suffix})'
+            )
         elif self._limit_pattern:
-            title.update(f' Patchwork — {len(visible)} action-required series{limit_suffix}')
+            title.update(
+                f' Patchwork — {len(visible)} action-required series{limit_suffix}'
+            )
         else:
             title.update(f' Patchwork — {len(visible)} action-required series')
         if not visible:
@@ -321,14 +361,15 @@ class PwApp(App[None]):
         if show_delegate:
             header_text = f'   {"Date":<12s}{"State":<15s} {"Submitter":<30s} {"Delegate":<15s} {"Series"}'
         else:
-            header_text = f'   {"Date":<12s}{"State":<15s} {"Submitter":<30s} {"Series"}'
+            header_text = (
+                f'   {"Date":<12s}{"State":<15s} {"Submitter":<30s} {"Series"}'
+            )
         header = Static(header_text, id='pw-header')
         items = []
         for s, is_hidden in visible:
             sid = s.get('id')
             is_tracked = sid in self._tracked_ids if sid else False
-            item = PwSeriesItem(s, tracked=is_tracked,
-                                show_delegate=show_delegate)
+            item = PwSeriesItem(s, tracked=is_tracked, show_delegate=show_delegate)
             if is_hidden:
                 item.add_class('--hidden')
             items.append(item)
@@ -342,7 +383,9 @@ class PwApp(App[None]):
         for widget in self.query('#pw-header, #pw-list'):
             await widget.remove()
         self.query_one('#pw-title', Static).update(' Patchwork — refreshing\u2026')
-        await self.mount(LoadingIndicator(id='pw-loading'), before=self.query_one(Footer))
+        await self.mount(
+            LoadingIndicator(id='pw-loading'), before=self.query_one(Footer)
+        )
         self.run_worker(self._fetch_initial, name='_fetch_initial', thread=True)
 
     @staticmethod
@@ -364,8 +407,10 @@ class PwApp(App[None]):
                 if needle not in (series.get('delegate', '') or '').lower():
                     return False
             else:
-                if (token not in (series.get('name', '') or '').lower()
-                        and token not in (series.get('submitter', '') or '').lower()):
+                if (
+                    token not in (series.get('name', '') or '').lower()
+                    and token not in (series.get('submitter', '') or '').lower()
+                ):
                     return False
         return True
 
@@ -375,8 +420,9 @@ class PwApp(App[None]):
             hint = 'Prefixes: s:<state>  d:<delegate>'
         else:
             hint = 'Prefixes: s:<state>'
-        self.push_screen(LimitScreen(self._limit_pattern, hint=hint),
-                         callback=self._on_limit)
+        self.push_screen(
+            LimitScreen(self._limit_pattern, hint=hint), callback=self._on_limit
+        )
 
     async def _on_limit(self, result: Optional[str]) -> None:
         if result is None:
@@ -406,9 +452,12 @@ class PwApp(App[None]):
             self.notify('No message-id available for this series', severity='error')
             return
         from b4.review_tui._lite_app import LiteThreadScreen
-        self.push_screen(LiteThreadScreen(msgid,
-                                          email_dryrun=self._email_dryrun,
-                                          patatt_sign=self._patatt_sign))
+
+        self.push_screen(
+            LiteThreadScreen(
+                msgid, email_dryrun=self._email_dryrun, patatt_sign=self._patatt_sign
+            )
+        )
 
     def action_ci_checks(self) -> None:
         """View CI check details for the highlighted series."""
@@ -417,7 +466,9 @@ class PwApp(App[None]):
             return
         check = item.series.get('check') or 'pending'
         if check == 'pending':
-            self.notify('No CI checks available for this series', severity='information')
+            self.notify(
+                'No CI checks available for this series', severity='information'
+            )
             return
         self.push_screen(CIChecksScreen(self._pwkey, self._pwurl, item.series))
 
@@ -444,7 +495,9 @@ class PwApp(App[None]):
             callback=lambda result: self._on_set_state(result, item),
         )
 
-    def _on_set_state(self, result: Optional[Tuple[str, bool]], item: 'PwSeriesItem') -> None:
+    def _on_set_state(
+        self, result: Optional[Tuple[str, bool]], item: 'PwSeriesItem'
+    ) -> None:
         if result is None:
             return
         new_state, archived = result
@@ -456,13 +509,14 @@ class PwApp(App[None]):
         series_name = item.series.get('name', '(no subject)')
         self.push_screen(
             ApplyStateModal(
-                self._pwkey, self._pwurl, patch_ids,
-                new_state, archived, series_name
+                self._pwkey, self._pwurl, patch_ids, new_state, archived, series_name
             ),
             callback=lambda res: self._on_apply_complete(res, item),
         )
 
-    def _on_apply_complete(self, result: Tuple[int, int, str], item: 'PwSeriesItem') -> None:
+    def _on_apply_complete(
+        self, result: Tuple[int, int, str], item: 'PwSeriesItem'
+    ) -> None:
         ok, fail, new_state = result
         if fail:
             self.notify(f'{ok} updated, {fail} failed', severity='warning')
@@ -474,13 +528,18 @@ class PwApp(App[None]):
         else:
             item.add_class('--dimmed')
         ts = resolve_styles(self)
-        item.query_one(Label).update(_format_series_label(
-            item.series, item.tracked, ts,
-            show_delegate=item.show_delegate))
+        item.query_one(Label).update(
+            _format_series_label(
+                item.series, item.tracked, ts, show_delegate=item.show_delegate
+            )
+        )
 
     def action_track_series(self) -> None:
         if not self._tracking_enabled:
-            self.notify('Repository not enrolled. Enroll with: b4 review enroll', severity='warning')
+            self.notify(
+                'Repository not enrolled. Enroll with: b4 review enroll',
+                severity='warning',
+            )
             return
         item = self._get_highlighted_item()
         if item is None:
@@ -546,8 +605,17 @@ class PwApp(App[None]):
         assert self._tracking_identifier is not None
         conn = b4.review.tracking.get_db(self._tracking_identifier)
         b4.review.tracking.add_series_to_db(
-            conn, change_id, revision, subject, sender_name, sender_email,
-            sent_at, message_id, num_patches, pw_series_id)
+            conn,
+            change_id,
+            revision,
+            subject,
+            sender_name,
+            sender_email,
+            sent_at,
+            message_id,
+            num_patches,
+            pw_series_id,
+        )
 
         conn.close()
 
@@ -556,10 +624,14 @@ class PwApp(App[None]):
         item.tracked = True
         item.add_class('--tracked')
         ts = resolve_styles(self)
-        item.query_one(Label).update(_format_series_label(
-            item.series, True, ts,
-            show_delegate=item.show_delegate))
-        self.notify(f'Started tracking: {series_name}', severity='information', timeout=3)
+        item.query_one(Label).update(
+            _format_series_label(
+                item.series, True, ts, show_delegate=item.show_delegate
+            )
+        )
+        self.notify(
+            f'Started tracking: {series_name}', severity='information', timeout=3
+        )
 
     async def action_hide_series(self) -> None:
         item = self._get_highlighted_item()
@@ -642,4 +714,3 @@ class PwApp(App[None]):
 
     async def action_quit(self) -> None:
         self.exit()
-
diff --git a/src/b4/review_tui/_review_app.py b/src/b4/review_tui/_review_app.py
index 51004de..e476044 100644
--- a/src/b4/review_tui/_review_app.py
+++ b/src/b4/review_tui/_review_app.py
@@ -107,10 +107,6 @@ class FollowupItem(ListItem):
         yield st
 
 
-
-
-
-
 class ReviewApp(CheckRunnerMixin, App[None]):
     """Textual app for b4 review TUI."""
 
@@ -201,12 +197,21 @@ class ReviewApp(CheckRunnerMixin, App[None]):
     _EMAIL_ACTIONS = frozenset({'edit_tocc', 'send'})
 
     BINDING_GROUPS = {
-        'trailer': 'Review', 'edit_note': 'Review',
-        'edit_reply': 'Review', 'followups': 'Review', 'agent': 'Review',
+        'trailer': 'Review',
+        'edit_note': 'Review',
+        'edit_reply': 'Review',
+        'followups': 'Review',
+        'agent': 'Review',
         'prior_review': 'Review',
-        'patch_done': 'Review', 'patch_skip': 'Review', 'check': 'Review',
-        'edit_tocc': 'Review', 'send': 'Review',
-        'toggle_preview': 'App', 'suspend': 'App', 'quit': 'App', 'help': 'App',
+        'patch_done': 'Review',
+        'patch_skip': 'Review',
+        'check': 'Review',
+        'edit_tocc': 'Review',
+        'send': 'Review',
+        'toggle_preview': 'App',
+        'suspend': 'App',
+        'quit': 'App',
+        'help': 'App',
     }
 
     BINDINGS = [
@@ -264,15 +269,21 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         self._abbrev_len: int = session['abbrev_len']
         self._default_identity: str = session['default_identity']
         self._usercfg: b4.ConfigDictT = session['usercfg']
-        self._reviewer_initials: str = _make_initials(session['usercfg'].get('name', ''))
+        self._reviewer_initials: str = _make_initials(
+            session['usercfg'].get('name', '')
+        )
         self._cover_subject_clean: str = session['cover_subject_clean']
         self._email_dryrun: bool = session.get('email_dryrun', False)
         self._patatt_sign: bool = session.get('patatt_sign', True)
         self._branch: str = session['branch']
         self._original_branch: Optional[str] = session.get('original_branch')
         self.branch_checked_out: bool = False
-        self._has_cover: bool = 'NOTE: No cover letter provided by the author.' not in self._cover_text
-        self._selected_idx: int = 0 if self._has_cover else 1  # 0 = cover, 1..N = patches
+        self._has_cover: bool = (
+            'NOTE: No cover letter provided by the author.' not in self._cover_text
+        )
+        self._selected_idx: int = (
+            0 if self._has_cover else 1
+        )  # 0 = cover, 1..N = patches
         self._preview_mode: bool = False
         self._comment_positions: List[int] = []
         self._followup_positions: Dict[str, int] = {}
@@ -298,7 +309,13 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             with Vertical(id='left-pane'):
                 yield ListView(id='patch-list')
                 yield Static(id='trailer-overlay', markup=False)
-            yield RichLog(id='diff-viewer', highlight=False, wrap=False, markup=True, auto_scroll=False)
+            yield RichLog(
+                id='diff-viewer',
+                highlight=False,
+                wrap=False,
+                markup=True,
+                auto_scroll=False,
+            )
         yield SeparatedFooter()
 
     def on_mount(self) -> None:
@@ -310,7 +327,7 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         switch_hint = self._session.get('_switch_hint')
         if switch_hint:
             self.notify(
-                f'You\'re in a review branch. To see all tracked series, switch to {switch_hint}.',
+                f"You're in a review branch. To see all tracked series, switch to {switch_hint}.",
                 timeout=10,
             )
 
@@ -365,7 +382,11 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         # Patch entries
         for idx, _sha in enumerate(self._commit_shas):
             patch_num = idx + 1
-            subject = self._commit_subjects[idx] if idx < len(self._commit_subjects) else '(unknown)'
+            subject = (
+                self._commit_subjects[idx]
+                if idx < len(self._commit_subjects)
+                else '(unknown)'
+            )
             patch_meta = self._patches[idx] if idx < len(self._patches) else {}
             state = b4.review._get_patch_state(patch_meta, self._usercfg)
             if self._hide_skipped and state == 'skip':
@@ -387,6 +408,7 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             def _restore() -> None:
                 lv.index = restore_index
                 lv.scroll_visible()
+
             self.call_after_refresh(_restore)
 
     def _append_followup_items(self, lv: ListView, display_idx: int) -> None:
@@ -416,7 +438,11 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             label_num = f'0/{total}'
         else:
             patch_idx = display_idx - 1
-            subject = self._commit_subjects[patch_idx] if patch_idx < len(self._commit_subjects) else '(unknown)'
+            subject = (
+                self._commit_subjects[patch_idx]
+                if patch_idx < len(self._commit_subjects)
+                else '(unknown)'
+            )
             target = self._patches[patch_idx] if patch_idx < len(self._patches) else {}
             label_num = f'{display_idx}/{total}'
             subject = subject[:40]
@@ -451,7 +477,9 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         self._refresh_trailer_overlay()
 
     def _build_msg_comment_map(
-        self, target: Dict[str, Any], ts: Dict[str, str],
+        self,
+        target: Dict[str, Any],
+        ts: Dict[str, str],
     ) -> Dict[int, List[Tuple[str, str, str]]]:
         """Build a comment map for COMMIT_MESSAGE_PATH comments.
 
@@ -465,8 +493,7 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         colour = self._reviewer_colour(my_email, target, ts)
         for c in my_review.get('comments', []):
             if c['path'] == COMMIT_MESSAGE_PATH:
-                result.setdefault(c['line'], []).append(
-                    ('You', colour, c['text']))
+                result.setdefault(c['line'], []).append(('You', colour, c['text']))
         return result
 
     def _show_cover(self, viewer: RichLog) -> None:
@@ -475,7 +502,7 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         cover_lines = self._cover_text.strip().splitlines()
         # Render subject in accent colour, same as patches
         if cover_lines:
-            viewer.write(Text(cover_lines[0], style=f"bold {ts['accent']}"))
+            viewer.write(Text(cover_lines[0], style=f'bold {ts["accent"]}'))
             viewer.write(Text(''))
         body_lines = b4.review._strip_subject(self._cover_text)
         body = '\n'.join(body_lines)
@@ -504,9 +531,12 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         _write_followup_trailers(viewer, self._tracking.get('followups', []), ts=ts)
         # Show cover-level follow-up comments
         _write_followup_comments(
-            viewer, self._followup_comments.get(0, []),
+            viewer,
+            self._followup_comments.get(0, []),
             self._comment_positions,
-            header_position_map=self._followup_header_map, ts=ts)
+            header_position_map=self._followup_header_map,
+            ts=ts,
+        )
         for line_pos, entry in self._followup_header_map.items():
             msgid = str(entry.get('msgid', ''))
             if msgid:
@@ -522,17 +552,20 @@ class ReviewApp(CheckRunnerMixin, App[None]):
 
         # Show commit message with subject as a bright heading
         ecode, commit_msg = b4.git_run_command(
-            self._topdir, ['show', '--format=%B', '--no-patch', sha])
+            self._topdir, ['show', '--format=%B', '--no-patch', sha]
+        )
         if ecode == 0 and commit_msg.strip():
             all_lines = commit_msg.strip().splitlines()
             # Render subject in accent colour
             if all_lines:
-                viewer.write(Text(all_lines[0], style=f"bold {ts['accent']}"))
+                viewer.write(Text(all_lines[0], style=f'bold {ts["accent"]}'))
                 viewer.write(Text(''))
             body = '\n'.join(b4.review._strip_subject(commit_msg))
             if body:
                 # Build commit message comment map
-                patch_target_cm = self._patches[patch_idx] if patch_idx < len(self._patches) else {}
+                patch_target_cm = (
+                    self._patches[patch_idx] if patch_idx < len(self._patches) else {}
+                )
                 msg_comment_map = self._build_msg_comment_map(patch_target_cm, ts)
 
                 # Render preamble comments (line 0 = before commit message)
@@ -541,8 +574,9 @@ class ReviewApp(CheckRunnerMixin, App[None]):
                     self._comment_positions.append(len(viewer.lines))
                     _write_comments(viewer, preamble_entries, ts=ts)
 
-                bheaders, message, btrailers, _basement, _signature = \
+                bheaders, message, btrailers, _basement, _signature = (
                     b4.LoreMessage.get_body_parts(body)
+                )
                 has_content = bool(preamble_entries)
                 # Track line number through the body (1-based, after
                 # subject and leading blanks — same as _build_annotated_diff)
@@ -584,12 +618,15 @@ class ReviewApp(CheckRunnerMixin, App[None]):
 
                 # Show follow-up trailers not already in the commit,
                 # including cover-letter trailers that apply to all patches
-                patch_meta = self._patches[patch_idx] if patch_idx < len(self._patches) else {}
+                patch_meta = (
+                    self._patches[patch_idx] if patch_idx < len(self._patches) else {}
+                )
                 existing = set()
                 if btrailers:
                     existing = {lt.as_string().lower() for lt in btrailers}
-                all_followups = (self._tracking.get('followups', [])
-                                 + patch_meta.get('followups', []))
+                all_followups = self._tracking.get('followups', []) + patch_meta.get(
+                    'followups', []
+                )
                 _write_followup_trailers(viewer, all_followups, existing, ts=ts)
                 if all_followups:
                     has_content = True
@@ -614,7 +651,9 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             return
 
         # Get review comments — own comments always, external only on "f"
-        patch_target = self._patches[patch_idx] if patch_idx < len(self._patches) else {}
+        patch_target = (
+            self._patches[patch_idx] if patch_idx < len(self._patches) else {}
+        )
         all_reviews = patch_target.get('reviews', {})
         my_email = str(self._usercfg.get('email', ''))
         comment_map: Dict[Tuple[str, int], List[Tuple[str, str, str]]] = {}
@@ -631,7 +670,9 @@ class ReviewApp(CheckRunnerMixin, App[None]):
                 colour = self._reviewer_colour(rev_email, patch_target, ts)
                 for c in rev_data.get('comments', []):
                     key = (c['path'], c['line'])
-                    comment_map.setdefault(key, []).append((rev_name, colour, c['text']))
+                    comment_map.setdefault(key, []).append(
+                        (rev_name, colour, c['text'])
+                    )
             else:
                 rev_name = rev_data.get('name', rev_email)
                 for c in rev_data.get('comments', []):
@@ -645,7 +686,7 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         current_b_file = ''
         a_line = 0
         b_line = 0
-        hint_style = f"bold {ts['warning']}"
+        hint_style = f'bold {ts["warning"]}'
         self._collapsed_comment_lines = {}
 
         def _write_hints(key: Tuple[str, int]) -> None:
@@ -686,7 +727,7 @@ class ReviewApp(CheckRunnerMixin, App[None]):
                 # Colour only the @@...@@ marker, leave context in default
                 end = line.index(' @@', 3) + 3
                 hunk_text = Text()
-                hunk_text.append(line[:end], style=f"bold {ts['secondary']}")
+                hunk_text.append(line[:end], style=f'bold {ts["secondary"]}')
                 if len(line) > end:
                     hunk_text.append(line[end:])
                 viewer.write(hunk_text)
@@ -725,9 +766,12 @@ class ReviewApp(CheckRunnerMixin, App[None]):
 
         # Render follow-up comments at the bottom
         _write_followup_comments(
-            viewer, self._followup_comments.get(patch_idx + 1, []),
+            viewer,
+            self._followup_comments.get(patch_idx + 1, []),
             self._comment_positions,
-            header_position_map=self._followup_header_map, ts=ts)
+            header_position_map=self._followup_header_map,
+            ts=ts,
+        )
         for line_pos, entry in self._followup_header_map.items():
             msgid = str(entry.get('msgid', ''))
             if msgid:
@@ -747,7 +791,11 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             else:
                 review = {}
                 patch_meta = None
-            commit_sha = self._commit_shas[patch_idx] if patch_idx < len(self._commit_shas) else None
+            commit_sha = (
+                self._commit_shas[patch_idx]
+                if patch_idx < len(self._commit_shas)
+                else None
+            )
 
         target = self._series if display_idx == 0 else patch_meta
         if target and b4.review._get_patch_state(target, self._usercfg) == 'skip':
@@ -756,19 +804,27 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             my_review = b4.review._get_my_review(target, self._usercfg)
             skip_reason = str(my_review.get('skip-reason', ''))
             if skip_reason:
-                viewer.write(f'[dim]Patch {label} is marked as skipped: {skip_reason}[/dim]')
+                viewer.write(
+                    f'[dim]Patch {label} is marked as skipped: {skip_reason}[/dim]'
+                )
             else:
-                viewer.write(f'[dim]Patch {label} is marked as skipped — no email will be sent.[/dim]')
+                viewer.write(
+                    f'[dim]Patch {label} is marked as skipped — no email will be sent.[/dim]'
+                )
             return
 
-        if not review or not (review.get('trailers') or review.get('reply', '')
-                              or review.get('comments') or review.get('note', '')):
+        if not review or not (
+            review.get('trailers')
+            or review.get('reply', '')
+            or review.get('comments')
+            or review.get('note', '')
+        ):
             viewer.write('[dim]No reply will be sent for this patch.[/dim]')
             return
 
         msg = b4.review._build_review_email(
-            self._series, patch_meta, review, self._cover_text,
-            self._topdir, commit_sha)
+            self._series, patch_meta, review, self._cover_text, self._topdir, commit_sha
+        )
         if msg is None:
             viewer.write('[dim]No email to preview (missing message-id?).[/dim]')
             return
@@ -801,8 +857,12 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         text = Text()
         has_content = False
         for rev_email, review in ordered:
-            if not (review.get('trailers') or review.get('reply', '')
-                    or review.get('comments') or review.get('note', '')):
+            if not (
+                review.get('trailers')
+                or review.get('reply', '')
+                or review.get('comments')
+                or review.get('note', '')
+            ):
                 continue
 
             colour = self._reviewer_colour(rev_email, target, ts)
@@ -822,13 +882,20 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             comments = review.get('comments', [])
             if comments:
                 files: set[str] = set(c.get('path', '') for c in comments)
-                text.append(f'\n    {len(comments)} comments across '
-                            f'{len(files)} files', style=ts['warning'])
+                text.append(
+                    f'\n    {len(comments)} comments across {len(files)} files',
+                    style=ts['warning'],
+                )
             reply = review.get('reply', '')
             if reply:
-                non_quoted = sum(1 for ln in reply.splitlines()
-                                 if ln.strip() and not ln.startswith('>'))
-                text.append(f'\n    {non_quoted} non-quoted reply lines', style=ts['accent'])
+                non_quoted = sum(
+                    1
+                    for ln in reply.splitlines()
+                    if ln.strip() and not ln.startswith('>')
+                )
+                text.append(
+                    f'\n    {non_quoted} non-quoted reply lines', style=ts['accent']
+                )
             trailers = review.get('trailers', [])
             if trailers:
                 for t in trailers:
@@ -848,7 +915,9 @@ class ReviewApp(CheckRunnerMixin, App[None]):
                 if body_start is not None and body_start < len(lines):
                     body_words = ' '.join(lines[body_start:]).split()
                     if body_words:
-                        text.append('\n    (view full note with N)', style=ts['secondary'])
+                        text.append(
+                            '\n    (view full note with N)', style=ts['secondary']
+                        )
 
         if not has_content:
             overlay.display = False
@@ -896,18 +965,27 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         """Return the current user's review sub-dict, creating it if needed."""
         return b4.review._ensure_my_review(target, self._usercfg)
 
-    def _reviewer_colour(self, email: str, target: Dict[str, Any],
-                         ts: Optional[Dict[str, str]] = None) -> str:
+    def _reviewer_colour(
+        self, email: str, target: Dict[str, Any], ts: Optional[Dict[str, str]] = None
+    ) -> str:
         """Return a stable colour for a reviewer email.
 
         Current user always gets index 0; others are sorted by email
         and assigned cyclically from the rest of the palette.
         *ts* is a resolved theme styles dict from :func:`resolve_styles`.
         """
-        palette = reviewer_colours(ts) if ts else [
-            'dark_goldenrod', 'dark_green', 'dark_cyan',
-            'dark_magenta', 'dark_red', 'dark_blue',
-        ]
+        palette = (
+            reviewer_colours(ts)
+            if ts
+            else [
+                'dark_goldenrod',
+                'dark_green',
+                'dark_cyan',
+                'dark_magenta',
+                'dark_red',
+                'dark_blue',
+            ]
+        )
         my_email = self._usercfg.get('email', '')
         if email == my_email:
             return palette[0]
@@ -1007,7 +1085,9 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             return True
         if action == 'agent':
             config = b4.get_main_config()
-            if not config.get('review-agent-command') or not config.get('review-agent-prompt-path'):
+            if not config.get('review-agent-command') or not config.get(
+                'review-agent-prompt-path'
+            ):
                 return False
             return not self._preview_mode
         if action == 'prior_review':
@@ -1034,7 +1114,9 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         target = self._get_current_review_target()
         if not target:
             return
-        existing_trailers = b4.review._get_my_review(target, self._usercfg).get('trailers', [])
+        existing_trailers = b4.review._get_my_review(target, self._usercfg).get(
+            'trailers', []
+        )
 
         def _on_trailer(result: Optional[List[str]]) -> None:
             if result is None:
@@ -1048,12 +1130,23 @@ class ReviewApp(CheckRunnerMixin, App[None]):
                 new_trailers = [f'{name}: {self._default_identity}' for name in result]
             # If adding to a patch, filter out trailers already on the cover letter
             if self._selected_idx > 0 and new_trailers:
-                cover_trailers = b4.review._get_my_review(self._series, self._usercfg).get('trailers', [])
-                cover_names = {t.split(':', 1)[0].strip().lower() for t in cover_trailers}
+                cover_trailers = b4.review._get_my_review(
+                    self._series, self._usercfg
+                ).get('trailers', [])
+                cover_names = {
+                    t.split(':', 1)[0].strip().lower() for t in cover_trailers
+                }
                 overlap = [r for r in result if r.lower() in cover_names]
                 if overlap:
-                    self.notify(f'{", ".join(overlap)} already on cover letter', severity='warning')
-                    new_trailers = [t for t in new_trailers if t.split(':', 1)[0].strip().lower() not in cover_names]
+                    self.notify(
+                        f'{", ".join(overlap)} already on cover letter',
+                        severity='warning',
+                    )
+                    new_trailers = [
+                        t
+                        for t in new_trailers
+                        if t.split(':', 1)[0].strip().lower() not in cover_names
+                    ]
             old_trailers: List[str] = review.get('trailers', [])
             if new_trailers == old_trailers:
                 return
@@ -1073,7 +1166,11 @@ class ReviewApp(CheckRunnerMixin, App[None]):
                     pt = preview.get('trailers', [])
                     if not pt:
                         continue
-                    remaining = [t for t in pt if t.split(':', 1)[0].strip().lower() not in new_names]
+                    remaining = [
+                        t
+                        for t in pt
+                        if t.split(':', 1)[0].strip().lower() not in new_names
+                    ]
                     if remaining != pt:
                         if remaining:
                             preview['trailers'] = remaining
@@ -1136,7 +1233,8 @@ class ReviewApp(CheckRunnerMixin, App[None]):
                 return
             sha = self._commit_shas[patch_idx]
             ecode, real_diff = b4.git_run_command(
-                self._topdir, ['diff', f'{sha}~1', sha])
+                self._topdir, ['diff', f'{sha}~1', sha]
+            )
             if ecode > 0:
                 self.notify('Could not get diff', severity='error')
                 return
@@ -1149,21 +1247,23 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             if self._selected_idx == 0:
                 # Cover letter reply
                 editor_text = b4.review._render_quoted_diff_with_comments(
-                    '', all_reviews, my_email,
-                    commit_msg=self._cover_text)
+                    '', all_reviews, my_email, commit_msg=self._cover_text
+                )
             else:
                 ecode, commit_msg = b4.git_run_command(
-                    self._topdir, ['show', '--format=%B', '--no-patch', sha])
+                    self._topdir, ['show', '--format=%B', '--no-patch', sha]
+                )
                 if ecode > 0:
                     self.notify('Could not get commit message', severity='error')
                     return
                 editor_text = b4.review._render_quoted_diff_with_comments(
-                    real_diff, all_reviews, my_email,
-                    commit_msg=commit_msg.strip())
+                    real_diff, all_reviews, my_email, commit_msg=commit_msg.strip()
+                )
 
         with self.suspend():
             result = b4.edit_in_editor(
-                editor_text.encode(), filehint='reply.b4-review.eml')
+                editor_text.encode(), filehint='reply.b4-review.eml'
+            )
 
         if result is None:
             self.notify('Editor returned no content')
@@ -1194,7 +1294,9 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         # Parse inline comments from the quoted reply
         # _extract_editor_comments strips | lines (unadopted external
         # comments) before parsing, so only adopted ones are kept
-        new_comments = b4.review._extract_editor_comments(reply_text, diff_text=real_diff)
+        new_comments = b4.review._extract_editor_comments(
+            reply_text, diff_text=real_diff
+        )
         if new_comments:
             review['comments'] = new_comments
             if 'reply' in review:
@@ -1213,13 +1315,11 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         else:
             self.notify('Reply saved')
 
-
     def action_prior_review(self) -> None:
         """Show prior revision review context."""
         context = self._series.get('prior-review-context', '')
         if not context:
-            self.notify('No prior review context available',
-                        severity='information')
+            self.notify('No prior review context available', severity='information')
             return
         self.push_screen(PriorReviewScreen(context))
 
@@ -1264,7 +1364,9 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             if result is None:
                 return
             if result == '__EDIT__':
-                my_note = b4.review._get_my_review(target, self._usercfg).get('note', '')
+                my_note = b4.review._get_my_review(target, self._usercfg).get(
+                    'note', ''
+                )
                 self._edit_note_in_editor(target, my_note)
             elif result == '__DELETE__':
                 self._delete_all_notes(target)
@@ -1290,7 +1392,9 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             self.notify('Editor returned no content')
             return
         raw_text = result.decode(errors='replace')
-        note_text = '\n'.join(ln for ln in raw_text.splitlines() if not ln.startswith('#')).strip()
+        note_text = '\n'.join(
+            ln for ln in raw_text.splitlines() if not ln.startswith('#')
+        ).strip()
         if note_text == existing.strip():
             self.notify('No changes made')
             return
@@ -1321,8 +1425,12 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             my_email = self._usercfg.get('email', '')
             for addr in list(all_reviews):
                 rev = all_reviews[addr]
-                if not (rev.get('trailers') or rev.get('reply', '')
-                        or rev.get('comments') or rev.get('note', '')):
+                if not (
+                    rev.get('trailers')
+                    or rev.get('reply', '')
+                    or rev.get('comments')
+                    or rev.get('note', '')
+                ):
                     if addr == my_email:
                         b4.review._cleanup_review(target, self._usercfg)
                     else:
@@ -1390,7 +1498,9 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         self._refresh_patch_item(self._selected_idx)
         total = len(self._commit_shas)
         label = 'cover' if self._selected_idx == 0 else f'{self._selected_idx}/{total}'
-        self.notify(f'{label} marked as done' if new_state else f'{label} unmarked done')
+        self.notify(
+            f'{label} marked as done' if new_state else f'{label} unmarked done'
+        )
 
     def action_patch_skip(self) -> None:
         """Toggle the explicit 'skip' state on the current patch."""
@@ -1425,7 +1535,9 @@ class ReviewApp(CheckRunnerMixin, App[None]):
                 self._selected_idx = 0 if self._has_cover else 1
         self._populate_patch_list()
         self._show_content(self._selected_idx)
-        self.notify('Skipped patches hidden' if self._hide_skipped else 'Skipped patches shown')
+        self.notify(
+            'Skipped patches hidden' if self._hide_skipped else 'Skipped patches shown'
+        )
 
     def action_send(self) -> None:
         """Collect review emails and show send confirmation dialog."""
@@ -1443,12 +1555,17 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         if draft_patches:
             self.notify(
                 f'Still in draft: {", ".join(draft_patches)}. Mark as done (d) or skip (x) first.',
-                severity='warning')
+                severity='warning',
+            )
             return
 
         msgs = b4.review.collect_review_emails(
-            self._series, self._patches, self._cover_text,
-            self._topdir, self._commit_shas)
+            self._series,
+            self._patches,
+            self._cover_text,
+            self._topdir,
+            self._commit_shas,
+        )
         if not msgs:
             self.notify('No review data to send.')
             return
@@ -1459,9 +1576,15 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             try:
                 with self.suspend():
                     smtp, fromaddr = b4.get_smtp(dryrun=self._email_dryrun)
-                    sent = b4.send_mail(smtp, msgs, fromaddr=fromaddr,
-                                        patatt_sign=self._patatt_sign, dryrun=self._email_dryrun,
-                                        output_dir=None, reflect=False)
+                    sent = b4.send_mail(
+                        smtp,
+                        msgs,
+                        fromaddr=fromaddr,
+                        patatt_sign=self._patatt_sign,
+                        dryrun=self._email_dryrun,
+                        output_dir=None,
+                        reflect=False,
+                    )
                 if sent is None:
                     self.notify('Failed to send review emails.', severity='error')
                 else:
@@ -1508,8 +1631,9 @@ class ReviewApp(CheckRunnerMixin, App[None]):
                 self._compose_followup_reply(entry)
                 event.stop()
 
-    def _compose_followup_reply(self, entry: Dict[str, Any],
-                                 initial_text: Optional[str] = None) -> None:
+    def _compose_followup_reply(
+        self, entry: Dict[str, Any], initial_text: Optional[str] = None
+    ) -> None:
         """Compose a reply to a follow-up message using the external editor.
 
         If *initial_text* is given (re-edit loop), use it directly instead of
@@ -1545,9 +1669,15 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         try:
             with self.suspend():
                 smtp, fromaddr = b4.get_smtp(dryrun=self._email_dryrun)
-                sent = b4.send_mail(smtp, [msg], fromaddr=fromaddr,
-                                    patatt_sign=self._patatt_sign, dryrun=self._email_dryrun,
-                                    output_dir=None, reflect=False)
+                sent = b4.send_mail(
+                    smtp,
+                    [msg],
+                    fromaddr=fromaddr,
+                    patatt_sign=self._patatt_sign,
+                    dryrun=self._email_dryrun,
+                    output_dir=None,
+                    reflect=False,
+                )
             if sent is None:
                 self.notify('Failed to send reply.', severity='error')
             elif self._email_dryrun:
@@ -1594,7 +1724,8 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         all_followups = lmbx.followups + lmbx.unknowns
         for lmsg in sorted(all_followups, key=lambda m: m.date):
             display_idx = _resolve_patch_for_followup(
-                lmsg.in_reply_to, patch_msgids, lmbx.msgid_map)
+                lmsg.in_reply_to, patch_msgids, lmbx.msgid_map
+            )
             if display_idx is None:
                 continue
 
@@ -1609,8 +1740,9 @@ class ReviewApp(CheckRunnerMixin, App[None]):
                 continue
             # minimize_thread strips trailers from the body; re-append
             # them so the follow-up panel shows the full message.
-            _htrs, _cmsg, mtrs, _basement, _sig = (
-                b4.LoreMessage.get_body_parts(lmsg.body))
+            _htrs, _cmsg, mtrs, _basement, _sig = b4.LoreMessage.get_body_parts(
+                lmsg.body
+            )
             if mtrs:
                 trailer_block = '\n'.join(t.as_string() for t in mtrs)
                 mbody = mbody.rstrip('\n') + '\n\n' + trailer_block
@@ -1623,10 +1755,13 @@ class ReviewApp(CheckRunnerMixin, App[None]):
                 'msgid': lmsg.msgid,
                 'subject': lmsg.full_subject,
                 'reply': lmsg.reply,
-                'depth': _get_followup_depth(lmsg.in_reply_to, patch_msgids, lmbx.msgid_map),
+                'depth': _get_followup_depth(
+                    lmsg.in_reply_to, patch_msgids, lmbx.msgid_map
+                ),
                 'lmsg': lmsg,
                 'replies-to-diff': _chain_has_additional_patch(
-                    lmsg.in_reply_to, patch_msgids, lmbx.msgid_map),
+                    lmsg.in_reply_to, patch_msgids, lmbx.msgid_map
+                ),
             }
             self._followup_comments.setdefault(display_idx, []).append(entry)
             count += 1
@@ -1710,7 +1845,8 @@ class ReviewApp(CheckRunnerMixin, App[None]):
 
                 sha = self._commit_shas[idx]
                 ecode, real_diff = b4.git_run_command(
-                    self._topdir, ['diff', f'{sha}~1', sha])
+                    self._topdir, ['diff', f'{sha}~1', sha]
+                )
                 if ecode == 0:
                     b4.review._resolve_comment_positions(real_diff, comments)
 
@@ -1753,9 +1889,13 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         self.notify('Loading follow-ups\u2026')
         self.run_worker(
             lambda: self._fetch_followups_bg(cover_msgid, blob_sha),
-            name='_followup_worker', thread=True)
+            name='_followup_worker',
+            thread=True,
+        )
 
-    def _fetch_rethreaded_threads(self, blob_sha: str) -> Optional[List[email.message.EmailMessage]]:
+    def _fetch_rethreaded_threads(
+        self, blob_sha: str
+    ) -> Optional[List[email.message.EmailMessage]]:
         """Fetch threads for each real patch in a rethreaded series."""
         all_msgs: List[email.message.EmailMessage] = []
         seen_msgids: Set[str] = set()
@@ -1765,8 +1905,9 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             if not pmsgid or pmsgid in fetched_patches:
                 continue
             fetched_patches.add(pmsgid)
-            thread = get_thread_msgs(self._topdir, pmsgid,
-                                     blob_sha=blob_sha, quiet=True)
+            thread = get_thread_msgs(
+                self._topdir, pmsgid, blob_sha=blob_sha, quiet=True
+            )
             if thread:
                 for msg in thread:
                     c_msgid = b4.LoreMessage.get_clean_msgid(msg)
@@ -1782,15 +1923,18 @@ class ReviewApp(CheckRunnerMixin, App[None]):
                 # Rethreaded series: fetch each real patch's thread
                 msgs = self._fetch_rethreaded_threads(blob_sha)
             else:
-                msgs = get_thread_msgs(self._topdir, cover_msgid,
-                                       blob_sha=blob_sha, quiet=True)
+                msgs = get_thread_msgs(
+                    self._topdir, cover_msgid, blob_sha=blob_sha, quiet=True
+                )
 
         if not msgs:
+
             def _no_msgs() -> None:
                 self.notify('Could not load thread', severity='error')
                 # Still refresh to show external comments (sashiko etc.)
                 self._populate_patch_list()
                 self._show_content(self._selected_idx)
+
             self.app.call_from_thread(_no_msgs)
             return
 
@@ -1800,7 +1944,8 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             if change_id:
                 with _quiet_worker():
                     new_sha = b4.review.tracking._store_thread_blob(
-                        self._topdir, change_id, msgs)
+                        self._topdir, change_id, msgs
+                    )
                 if new_sha:
                     self._series['thread-blob'] = new_sha
 
@@ -1808,6 +1953,7 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             self._load_followup_msgs(msgs)
             self._mark_followup_msgs_seen(msgs)
             self._detect_maintainer_replies(msgs)
+
         self.app.call_from_thread(_finish)
 
     def _mark_followup_msgs_seen(self, msgs: List[Any]) -> None:
@@ -1821,7 +1967,9 @@ class ReviewApp(CheckRunnerMixin, App[None]):
                 msg_date = None
                 if date_val:
                     try:
-                        msg_date = email.utils.parsedate_to_datetime(str(date_val)).isoformat()
+                        msg_date = email.utils.parsedate_to_datetime(
+                            str(date_val)
+                        ).isoformat()
                     except Exception:
                         pass
                 entries.append({'msgid': mid, 'msg_date': msg_date})
@@ -1829,6 +1977,7 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             return
         try:
             from b4.review import messages
+
             conn = messages.get_db()
             messages.set_flags_bulk(conn, entries, 'Seen')
             conn.close()
@@ -1848,6 +1997,7 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             return
         try:
             from b4.review import messages
+
             conn = messages.get_db()
             messages.set_flags_bulk(conn, entries, 'Answered')
             conn.close()
@@ -1895,6 +2045,7 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             return
         try:
             from b4.review import messages
+
             conn = messages.get_db()
             messages.set_flags_bulk(conn, answered_entries, 'Answered')
             conn.close()
@@ -1912,7 +2063,8 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         if self.branch_checked_out:
             return True
         ecode, _out = b4.git_run_command(
-            self._topdir, ['checkout', self._branch], logstderr=True)
+            self._topdir, ['checkout', self._branch], logstderr=True
+        )
         if ecode != 0:
             self.notify(f'Could not check out {self._branch}', severity='error')
             return False
@@ -1924,7 +2076,8 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         if not self.branch_checked_out or not self._original_branch:
             return
         ecode, _out = b4.git_run_command(
-            self._topdir, ['checkout', self._original_branch], logstderr=True)
+            self._topdir, ['checkout', self._original_branch], logstderr=True
+        )
         if ecode == 0:
             self.branch_checked_out = False
 
@@ -1936,8 +2089,10 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         agent_cmd = config.get('review-agent-command')
         agent_prompt = str(config.get('review-agent-prompt-path', ''))
         if not agent_cmd or not agent_prompt:
-            self.notify('Review agent not configured (set b4.review-agent-command and b4.review-agent-prompt-path)',
-                        severity='warning')
+            self.notify(
+                'Review agent not configured (set b4.review-agent-command and b4.review-agent-prompt-path)',
+                severity='warning',
+            )
             return
 
         assert isinstance(agent_cmd, str)
@@ -1947,8 +2102,9 @@ class ReviewApp(CheckRunnerMixin, App[None]):
 
         prompt_path = os.path.join(self._topdir, agent_prompt)
         if not os.path.isfile(prompt_path):
-            self.notify(f'Agent prompt file not found: {agent_prompt}',
-                        severity='error')
+            self.notify(
+                f'Agent prompt file not found: {agent_prompt}', severity='error'
+            )
             return
         cmdargs += [f'Read and execute the prompt from {prompt_path}']
 
@@ -1976,8 +2132,12 @@ class ReviewApp(CheckRunnerMixin, App[None]):
 
         # Integrate any review files the agent wrote
         integrated = b4.review._integrate_agent_reviews(
-            self._topdir, self._cover_text, self._tracking,
-            self._commit_shas, self._patches)
+            self._topdir,
+            self._cover_text,
+            self._tracking,
+            self._commit_shas,
+            self._patches,
+        )
         if integrated:
             self._populate_patch_list()
             self._show_content(self._selected_idx)
@@ -1986,7 +2146,9 @@ class ReviewApp(CheckRunnerMixin, App[None]):
 
     def _save_tracking(self) -> None:
         """Save tracking data to the review branch."""
-        b4.review.save_tracking_ref(self._topdir, self._branch, self._cover_text, self._tracking)
+        b4.review.save_tracking_ref(
+            self._topdir, self._branch, self._cover_text, self._tracking
+        )
 
     def action_suspend(self) -> None:
         """Suspend the TUI and drop to an interactive shell."""
@@ -2014,11 +2176,13 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             range_spec = f'{self._base_commit}..HEAD~1'
 
         ecode, out = b4.git_run_command(
-            self._topdir, ['rev-list', '--reverse', range_spec])
+            self._topdir, ['rev-list', '--reverse', range_spec]
+        )
         if ecode != 0 or not out.strip():
             # Could not enumerate — tracking commit may be damaged
-            self.notify('Could not enumerate patch commits after shell',
-                        severity='warning')
+            self.notify(
+                'Could not enumerate patch commits after shell', severity='warning'
+            )
             return
 
         new_shas = out.strip().splitlines()
@@ -2030,16 +2194,19 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             self.notify(
                 f'Patch count changed ({len(old_shas)} → {len(new_shas)}). '
                 'Please exit and re-enter the review.',
-                severity='warning')
+                severity='warning',
+            )
             return
 
         # Reload tracking from the (possibly rewritten) tip commit
         try:
             self._cover_text, self._tracking = b4.review.load_tracking(
-                self._topdir, 'HEAD')
+                self._topdir, 'HEAD'
+            )
         except SystemExit:
-            self.notify('Could not reload tracking data after shell',
-                        severity='warning')
+            self.notify(
+                'Could not reload tracking data after shell', severity='warning'
+            )
             return
 
         series = self._tracking.get('series', {})
@@ -2050,23 +2217,24 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         series['first-patch-commit'] = new_shas[0]
 
         # Re-anchor inline comments against the rebased diffs
-        b4.review.reanchor_patch_comments(
-            self._topdir, new_shas, self._patches)
+        b4.review.reanchor_patch_comments(self._topdir, new_shas, self._patches)
 
         # Persist updated tracking
         self._tracking['series'] = series
         b4.review.save_tracking_ref(
-            self._topdir, self._branch, self._cover_text, self._tracking)
+            self._topdir, self._branch, self._cover_text, self._tracking
+        )
 
         # Refresh in-memory state
         self._commit_shas = new_shas
         ecode, out = b4.git_run_command(
-            self._topdir, ['log', '--reverse', '--format=%s', range_spec])
+            self._topdir, ['log', '--reverse', '--format=%s', range_spec]
+        )
         if ecode == 0 and out.strip():
             self._commit_subjects = out.strip().splitlines()
         self._sha_map = {}
         for idx, full_sha in enumerate(new_shas):
-            self._sha_map[full_sha[:self._abbrev_len]] = (full_sha, idx)
+            self._sha_map[full_sha[: self._abbrev_len]] = (full_sha, idx)
 
         # Refresh the patch list display
         self._populate_patch_list()
@@ -2080,6 +2248,8 @@ class ReviewApp(CheckRunnerMixin, App[None]):
     def action_help(self) -> None:
         """Show help overlay."""
         config = b4.get_main_config()
-        has_agent = bool(config.get('review-agent-command') and config.get('review-agent-prompt-path'))
+        has_agent = bool(
+            config.get('review-agent-command')
+            and config.get('review-agent-prompt-path')
+        )
         self.push_screen(HelpScreen(_review_help_lines(has_agent=has_agent)))
-
diff --git a/src/b4/review_tui/_tracking_app.py b/src/b4/review_tui/_tracking_app.py
index 2493cda..2c8087f 100644
--- a/src/b4/review_tui/_tracking_app.py
+++ b/src/b4/review_tui/_tracking_app.py
@@ -86,15 +86,15 @@ _ACTION_SHORTCUTS: Dict[str, str] = {
 
 # Single-character Unicode symbols for each series status.
 _STATUS_SYMBOLS: Dict[str, str] = {
-    'new':       '★',  # U+2605 black star
+    'new': '★',  # U+2605 black star
     'reviewing': '✎',  # U+270E lower right pencil (matches review app)
-    'replied':   '↩',  # U+21A9 leftwards arrow with hook
-    'waiting':   '↻',  # U+21BB clockwise open circle arrow
-    'accepted':  '∈',  # U+2208 element of
-    'queued':    '◷',  # U+25F7 white circle with upper right quadrant
-    'snoozed':   '⏸',  # U+23F8 double vertical bar
-    'thanked':   '✓',  # U+2713 check mark
-    'gone':      'ø',  # U+00F8 latin small letter o with stroke
+    'replied': '↩',  # U+21A9 leftwards arrow with hook
+    'waiting': '↻',  # U+21BB clockwise open circle arrow
+    'accepted': '∈',  # U+2208 element of
+    'queued': '◷',  # U+25F7 white circle with upper right quadrant
+    'snoozed': '⏸',  # U+23F8 double vertical bar
+    'thanked': '✓',  # U+2713 check mark
+    'gone': 'ø',  # U+00F8 latin small letter o with stroke
 }
 
 # Sort tier for each status.  Lower tier sorts higher in the list.
@@ -102,21 +102,26 @@ _STATUS_SYMBOLS: Dict[str, str] = {
 #   1 = action required (awaiting maintainer decision)
 #   2 = inactive (no action needed)
 _STATUS_TIER: Dict[str, int] = {
-    'new':       0,
+    'new': 0,
     'reviewing': 0,
-    'replied':   1,
-    'accepted':  1,
-    'queued':    2,
-    'snoozed':   2,
-    'waiting':   2,
-    'thanked':   2,
-    'gone':      2,
+    'replied': 1,
+    'accepted': 1,
+    'queued': 2,
+    'snoozed': 2,
+    'waiting': 2,
+    'thanked': 2,
+    'gone': 2,
 }
 
 # Statuses where the maintainer can take action right now.
-_ACTIONABLE_STATUSES: frozenset[str] = frozenset({
-    'new', 'reviewing', 'replied', 'accepted',
-})
+_ACTIONABLE_STATUSES: frozenset[str] = frozenset(
+    {
+        'new',
+        'reviewing',
+        'replied',
+        'accepted',
+    }
+)
 
 
 def _resolve_worktree_am_conflict(topdir: str, cex: 'b4.AmConflictError') -> bool:
@@ -135,20 +140,32 @@ def _resolve_worktree_am_conflict(topdir: str, cex: 'b4.AmConflictError') -> boo
     logger.critical('---')
     logger.critical('Patch did not apply cleanly.')
     # Disable sparse checkout so user can see and edit files
-    b4.git_run_command(cex.worktree_path, ['sparse-checkout', 'disable'],
-                       logstderr=True, rundir=cex.worktree_path)
+    b4.git_run_command(
+        cex.worktree_path,
+        ['sparse-checkout', 'disable'],
+        logstderr=True,
+        rundir=cex.worktree_path,
+    )
     # Save worktree HEAD before shell so we can detect abort
     _ecode, wt_head_before = b4.git_run_command(
-        cex.worktree_path, ['rev-parse', 'HEAD'],
-        logstderr=True, rundir=cex.worktree_path)
+        cex.worktree_path,
+        ['rev-parse', 'HEAD'],
+        logstderr=True,
+        rundir=cex.worktree_path,
+    )
     wt_head_before = wt_head_before.strip()
     logger.info('You can resolve the conflict in the worktree.')
-    logger.info('Use "git am --continue" after resolving, or "git am --abort" to give up.')
+    logger.info(
+        'Use "git am --continue" after resolving, or "git am --abort" to give up.'
+    )
     _suspend_to_shell(hint='b4 conflict', cwd=cex.worktree_path)
     # Check if am is still in progress (user exited without finishing)
     ecode, wt_gitdir = b4.git_run_command(
-        cex.worktree_path, ['rev-parse', '--git-dir'],
-        logstderr=True, rundir=cex.worktree_path)
+        cex.worktree_path,
+        ['rev-parse', '--git-dir'],
+        logstderr=True,
+        rundir=cex.worktree_path,
+    )
     if ecode == 0:
         rebase_apply = os.path.join(wt_gitdir.strip(), 'rebase-apply')
     else:
@@ -159,15 +176,20 @@ def _resolve_worktree_am_conflict(topdir: str, cex: 'b4.AmConflictError') -> boo
         return False
     # Check if am was aborted (HEAD unchanged from before shell)
     _ecode, wt_head_after = b4.git_run_command(
-        cex.worktree_path, ['rev-parse', 'HEAD'],
-        logstderr=True, rundir=cex.worktree_path)
+        cex.worktree_path,
+        ['rev-parse', 'HEAD'],
+        logstderr=True,
+        rundir=cex.worktree_path,
+    )
     if wt_head_after.strip() == wt_head_before:
         logger.warning('Conflict resolution aborted')
         b4.git_run_command(topdir, ['worktree', 'remove', '--force', cex.worktree_path])
         return False
     # am completed -- fetch result into FETCH_HEAD
     logger.info('Conflict resolved, fetching result...')
-    ecode, _out = b4.git_run_command(topdir, ['fetch', cex.worktree_path], logstderr=True)
+    ecode, _out = b4.git_run_command(
+        topdir, ['fetch', cex.worktree_path], logstderr=True
+    )
     b4.git_run_command(topdir, ['worktree', 'remove', '--force', cex.worktree_path])
     if ecode > 0:
         logger.critical('Unable to fetch from resolved worktree')
@@ -257,8 +279,11 @@ def _get_review_branch_tips(topdir: str) -> Dict[str, str]:
 
     Uses a single git for-each-ref call instead of per-branch rev-parse.
     """
-    gitargs = ['for-each-ref', '--format=%(refname:short) %(objectname)',
-               'refs/heads/b4/review/']
+    gitargs = [
+        'for-each-ref',
+        '--format=%(refname:short) %(objectname)',
+        'refs/heads/b4/review/',
+    ]
     lines = b4.git_get_command_lines(topdir, gitargs)
     result: Dict[str, str] = {}
     for line in lines:
@@ -314,7 +339,7 @@ def _get_art_counts_batch(
         msg_start = content.find('\n\n')
         if msg_start < 0:
             continue
-        commit_msg = content[msg_start + 2:]
+        commit_msg = content[msg_start + 2 :]
 
         art = _parse_art_from_message(commit_msg)
         if art is not None:
@@ -447,7 +472,7 @@ class TrackedSeriesItem(ListItem):
         badge_style = ''
         if base_accent or fu_badge:
             ts = resolve_styles(self.app)
-            accent = f"bold {ts['warning']}"
+            accent = f'bold {ts["warning"]}'
             if base_accent:
                 base_style = accent
             if fu_badge:
@@ -552,11 +577,20 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
     """
 
     BINDING_GROUPS = {
-        'review': 'Series', 'check': 'Series', 'thread': 'Series',
-        'range_diff': 'Series', 'action': 'Series', 'update_one': 'Series',
+        'review': 'Series',
+        'check': 'Series',
+        'thread': 'Series',
+        'range_diff': 'Series',
+        'action': 'Series',
+        'update_one': 'Series',
         'target_branch': 'Series',
-        'update_all': 'App', 'process_queue': 'App', 'limit': 'App',
-        'suspend': 'App', 'patchwork': 'App', 'quit': 'App', 'help': 'App',
+        'update_all': 'App',
+        'process_queue': 'App',
+        'limit': 'App',
+        'suspend': 'App',
+        'patchwork': 'App',
+        'quit': 'App',
+        'help': 'App',
     }
 
     BINDINGS = [
@@ -582,10 +616,14 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         Binding('question_mark', 'help', 'help', key_display='?'),
     ]
 
-    def __init__(self, identifier: str, original_branch: Optional[str] = None,
-                 focus_change_id: Optional[str] = None,
-                 email_dryrun: bool = False,
-                 patatt_sign: bool = True) -> None:
+    def __init__(
+        self,
+        identifier: str,
+        original_branch: Optional[str] = None,
+        focus_change_id: Optional[str] = None,
+        email_dryrun: bool = False,
+        patatt_sign: bool = True,
+    ) -> None:
         super().__init__()
         self._identifier = identifier
         self._original_branch = original_branch
@@ -610,7 +648,8 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         self._queue_count: int = 0
         # Show target branch binding only when configured
         self._has_target_branches = bool(
-            b4.review.tracking.get_review_target_branches())
+            b4.review.tracking.get_review_target_branches()
+        )
         # Cached data for _load_series — invalidated by _invalidate_caches()
         # when u/U update runs or actions change tracking data.
         self._cached_branch_tips: Optional[Dict[str, str]] = None
@@ -638,8 +677,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         self._cached_revisions = None
         self._cached_art_counts = None
 
-    def _refresh_msg_count(self, series: Dict[str, Any],
-                           total_messages: int) -> None:
+    def _refresh_msg_count(self, series: Dict[str, Any], total_messages: int) -> None:
         """Opportunistically refresh message count after fetching messages."""
         if not self._identifier:
             return
@@ -697,8 +735,11 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         self.set_interval(1, self._check_db_changed)
         topdir = b4.git_get_toplevel()
         if topdir and b4.review.tracking.db_exists(self._identifier):
-            self.run_worker(lambda: self._startup_rescan(topdir),
-                            name='_startup_rescan', thread=True)
+            self.run_worker(
+                lambda: self._startup_rescan(topdir),
+                name='_startup_rescan',
+                thread=True,
+            )
 
     def _startup_rescan(self, topdir: str) -> Dict[str, int]:
         """Rescan review branches in the background on app startup."""
@@ -772,9 +813,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         finally:
             conn.close()
 
-    def _wake_one(self, conn: 'sqlite3.Connection',
-                  entry: Dict[str, Any],
-                  topdir: Optional[str]) -> None:
+    def _wake_one(
+        self, conn: 'sqlite3.Connection', entry: Dict[str, Any], topdir: Optional[str]
+    ) -> None:
         """Restore a single snoozed series to its previous state."""
         cid = entry['change_id']
         rev = entry['revision']
@@ -789,6 +830,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
     def _load_series(self) -> None:
         import b4.ty
+
         self._auto_wake_snoozed()
 
         all_series = b4.review.tracking.get_all_tracked_series(self._identifier)
@@ -815,9 +857,15 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             conn = None
         if self._cached_newest_revisions is None and conn:
             try:
-                self._cached_newest_revisions = b4.review.tracking.get_all_newest_revisions(conn)
-                self._cached_revision_counts = b4.review.tracking.get_all_revision_counts(conn)
-                self._cached_revisions = b4.review.tracking.get_all_revisions_grouped(conn)
+                self._cached_newest_revisions = (
+                    b4.review.tracking.get_all_newest_revisions(conn)
+                )
+                self._cached_revision_counts = (
+                    b4.review.tracking.get_all_revision_counts(conn)
+                )
+                self._cached_revisions = b4.review.tracking.get_all_revisions_grouped(
+                    conn
+                )
             except Exception:
                 pass
         newest_revisions = self._cached_newest_revisions or {}
@@ -843,7 +891,11 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             rev_count = revision_counts.get(change_id, 0)
             if rev_count > 1:
                 series['has_multiple_revisions'] = True
-            if rev_count == 0 and series.get('status') not in ('new', 'gone', 'snoozed'):
+            if rev_count == 0 and series.get('status') not in (
+                'new',
+                'gone',
+                'snoozed',
+            ):
                 series['needs_update'] = True
             # Stash revisions list for the detail panel
             series['_revisions'] = all_revisions.get(change_id, [])
@@ -867,8 +919,10 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         # This is a display-only pseudo-state, not stored in the DB.
         queued_cids = b4.ty.get_queued_change_ids(dryrun=self._email_dryrun)
         for series in self._all_series:
-            if (series.get('status') == 'accepted'
-                    and series.get('change_id', '') in queued_cids):
+            if (
+                series.get('status') == 'accepted'
+                and series.get('change_id', '') in queued_cids
+            ):
                 series['queued'] = True
         # Sort into three tiers: active → action required → inactive.
         # Within each tier, sort by when the maintainer started tracking
@@ -879,7 +933,8 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         )
         self._all_series.sort(
             key=lambda s: _STATUS_TIER.get(
-                'queued' if s.get('queued') else s.get('status', 'new'), 2)
+                'queued' if s.get('queued') else s.get('status', 'new'), 2
+            )
         )
         self.call_later(self._refresh_list)
 
@@ -919,8 +974,10 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                 if needle not in (series.get('target_branch', '') or '').lower():
                     return False
             else:
-                if (token not in (series.get('subject', '') or '').lower()
-                        and token not in (series.get('sender_name', '') or '').lower()):
+                if (
+                    token not in (series.get('subject', '') or '').lower()
+                    and token not in (series.get('sender_name', '') or '').lower()
+                ):
                     return False
         return True
 
@@ -928,8 +985,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         display_series = self._all_series
         if self._limit_pattern:
             display_series = [
-                s for s in display_series
-                if self._matches_limit(s, self._limit_pattern)
+                s for s in display_series if self._matches_limit(s, self._limit_pattern)
             ]
 
         try:
@@ -953,11 +1009,16 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         # the new list is mounted.
         with self.app.batch_update():
             # Remove existing list/empty widgets
-            for widget in list(self.query('#tracking-header, #tracking-list, #tracking-empty')):
+            for widget in list(
+                self.query('#tracking-header, #tracking-list, #tracking-empty')
+            ):
                 await widget.remove()
 
             if not display_series:
-                empty = Static('No tracked series. Use "b4 review track" to add series.', id='tracking-empty')
+                empty = Static(
+                    'No tracked series. Use "b4 review track" to add series.',
+                    id='tracking-empty',
+                )
                 await self.mount(empty, before=self.query_one(Footer))
                 return
 
@@ -971,7 +1032,10 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
         if self._focus_change_id:
             for idx, item in enumerate(list_items):
-                if isinstance(item, TrackedSeriesItem) and item.series.get('change_id') == self._focus_change_id:
+                if (
+                    isinstance(item, TrackedSeriesItem)
+                    and item.series.get('change_id') == self._focus_change_id
+                ):
                     lv.index = idx
                     break
             self._focus_change_id = None
@@ -985,9 +1049,12 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             self._show_details(highlighted.series)
 
     def action_limit(self) -> None:
-        self.push_screen(LimitScreen(self._limit_pattern,
-                                     hint='Prefixes: s:<status>  t:<target-branch>'),
-                         callback=self._on_limit)
+        self.push_screen(
+            LimitScreen(
+                self._limit_pattern, hint='Prefixes: s:<status>  t:<target-branch>'
+            ),
+            callback=self._on_limit,
+        )
 
     def _on_limit(self, result: Optional[str]) -> None:
         if result is None:
@@ -1028,14 +1095,42 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             else:
                 self.action_action()
 
-
     _STATE_ACTIONS: Dict[str, frozenset[str]] = {
-        'new': frozenset({'review', 'range_diff', 'abandon', 'snooze', 'waiting', 'target_branch'}),
-        'reviewing': frozenset({'review', 'update_revision', 'range_diff', 'take', 'rebase', 'abandon', 'waiting', 'snooze', 'target_branch'}),
-        'replied': frozenset({'review', 'range_diff', 'take', 'rebase', 'archive', 'waiting', 'snooze', 'target_branch'}),
-        'waiting': frozenset({'review', 'range_diff', 'abandon', 'archive', 'snooze', 'target_branch'}),
+        'new': frozenset(
+            {'review', 'range_diff', 'abandon', 'snooze', 'waiting', 'target_branch'}
+        ),
+        'reviewing': frozenset(
+            {
+                'review',
+                'update_revision',
+                'range_diff',
+                'take',
+                'rebase',
+                'abandon',
+                'waiting',
+                'snooze',
+                'target_branch',
+            }
+        ),
+        'replied': frozenset(
+            {
+                'review',
+                'range_diff',
+                'take',
+                'rebase',
+                'archive',
+                'waiting',
+                'snooze',
+                'target_branch',
+            }
+        ),
+        'waiting': frozenset(
+            {'review', 'range_diff', 'abandon', 'archive', 'snooze', 'target_branch'}
+        ),
         'accepted': frozenset({'review', 'range_diff', 'thank', 'archive'}),
-        'snoozed': frozenset({'review', 'range_diff', 'unsnooze', 'abandon', 'target_branch'}),
+        'snoozed': frozenset(
+            {'review', 'range_diff', 'unsnooze', 'abandon', 'target_branch'}
+        ),
         'thanked': frozenset({'archive'}),
         'gone': frozenset({'abandon', 'review'}),
     }
@@ -1147,15 +1242,19 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                             pass
                     if conn:
                         b4.review.tracking.unsnooze_series(
-                            conn, change_id, 'reviewing', revision=revision)
+                            conn, change_id, 'reviewing', revision=revision
+                        )
                 elif status in ('waiting', 'accepted'):
                     # Bring back to reviewing on re-entry
                     if conn:
                         b4.review.tracking.update_series_status(
-                            conn, change_id, 'reviewing', revision=revision)
+                            conn, change_id, 'reviewing', revision=revision
+                        )
                     topdir = b4.git_get_toplevel()
                     if topdir:
-                        b4.review.update_tracking_status(topdir, branch_name, 'reviewing')
+                        b4.review.update_tracking_status(
+                            topdir, branch_name, 'reviewing'
+                        )
                 # Clear the followup badge — user is about to read this series
                 if conn and self._identifier and isinstance(revision, int):
                     b4.review.tracking.mark_all_messages_seen(conn, change_id, revision)
@@ -1210,8 +1309,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                     rev_info = rev
                     break
             if rev_info is None:
-                self.notify(f'Revision v{chosen} not found in database',
-                            severity='error')
+                self.notify(
+                    f'Revision v{chosen} not found in database', severity='error'
+                )
                 return
             series['message_id'] = rev_info['message_id']
             series['revision'] = chosen
@@ -1219,8 +1319,13 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             try:
                 conn = b4.review.tracking.get_db(self._identifier)
                 b4.review.tracking.update_series_revision(
-                    conn, change_id, current_rev, chosen,
-                    rev_info['message_id'], rev_info.get('subject'))
+                    conn,
+                    change_id,
+                    current_rev,
+                    chosen,
+                    rev_info['message_id'],
+                    rev_info.get('subject'),
+                )
                 conn.close()
             except Exception:
                 pass
@@ -1243,10 +1348,15 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             }
         self._focus_change_id = self._selected_series.get('change_id')
         from b4.review_tui._lite_app import LiteThreadScreen
-        self.push_screen(LiteThreadScreen(message_id,
-                                          email_dryrun=self._email_dryrun,
-                                          patatt_sign=self._patatt_sign,
-                                          tracking_info=tracking_info))
+
+        self.push_screen(
+            LiteThreadScreen(
+                message_id,
+                email_dryrun=self._email_dryrun,
+                patatt_sign=self._patatt_sign,
+                tracking_info=tracking_info,
+            )
+        )
 
     def _checkout_new_series(self) -> None:
         """Retrieve series, build am-ready mbox, and show base selection."""
@@ -1271,21 +1381,24 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                         # The stored message-id may point to a different
                         # version's thread.  Search for the wanted version
                         # in other threads before giving up.
-                        msgs = b4.mbox.get_extra_series(
-                            msgs, direction=1, nocache=True)
+                        msgs = b4.mbox.get_extra_series(msgs, direction=1, nocache=True)
                         if wantver > 1:
                             msgs = b4.mbox.get_extra_series(
-                                msgs, direction=-1,
-                                wantvers=[wantver], nocache=True)
-                        lser = b4.review._get_lore_series(
-                            msgs, wantver=wantver)
+                                msgs, direction=-1, wantvers=[wantver], nocache=True
+                            )
+                        lser = b4.review._get_lore_series(msgs, wantver=wantver)
                     else:
                         raise
 
                 am_msgs = lser.get_am_ready(
-                    noaddtrailers=True, addmysob=False, addlink=False,
-                    cherrypick=None, copyccs=False, allowbadchars=False,
-                    showchecks=False)
+                    noaddtrailers=True,
+                    addmysob=False,
+                    addlink=False,
+                    cherrypick=None,
+                    copyccs=False,
+                    allowbadchars=False,
+                    showchecks=False,
+                )
                 if not am_msgs:
                     raise LookupError('No patches ready for applying')
 
@@ -1312,25 +1425,27 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                     try:
                         guessed, nblobs, mismatches = lser.find_base(
                             topdir,
-                            branches=['--exclude=refs/heads/b4/review/*',
-                                      '--all'],
-                            maxdays=30)
+                            branches=['--exclude=refs/heads/b4/review/*', '--all'],
+                            maxdays=30,
+                        )
                         if guessed:
                             # find_base returns a describe name (e.g. heads/foo);
                             # resolve it to a SHA for the input field
                             ecode, sha_out = b4.git_run_command(
-                                topdir, ['rev-parse', '--verify', guessed])
+                                topdir, ['rev-parse', '--verify', guessed]
+                            )
                             sha = sha_out.strip() if ecode == 0 else ''
                             short_sha = sha[:12] if sha else guessed
                             if mismatches == 0:
                                 initial_base = short_sha
-                                base_hint = (f'Guessed base: {guessed}'
-                                             f' (exact match)')
+                                base_hint = f'Guessed base: {guessed} (exact match)'
                             elif nblobs != mismatches:
                                 matched = nblobs - mismatches
                                 initial_base = short_sha
-                                base_hint = (f'Guessed base: {guessed}'
-                                             f' ({matched}/{nblobs} blobs)')
+                                base_hint = (
+                                    f'Guessed base: {guessed}'
+                                    f' ({matched}/{nblobs} blobs)'
+                                )
                             else:
                                 base_hint = 'Could not find a matching base'
                     except (IndexError, Exception):
@@ -1343,7 +1458,8 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                         self._identifier,
                         series.get('change_id', ''),
                         series.get('revision', 1),
-                        att)
+                        att,
+                    )
 
                 return lser, ambytes, initial_base, base_hint
 
@@ -1352,8 +1468,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             callback=lambda result: self._on_series_fetched(result, series),
         )
 
-    def _on_series_fetched(self, result: Any,
-                            series: Dict[str, Any]) -> None:
+    def _on_series_fetched(self, result: Any, series: Dict[str, Any]) -> None:
         """Handle the result from the series fetch worker."""
         if result is None:
             return
@@ -1376,28 +1491,35 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                             base_suggestions.append(rb)
 
         self.push_screen(
-            BaseSelectionScreen(initial_base, lser, ambytes,
-                                base_suggestions=base_suggestions,
-                                base_hint=base_hint,
-                                subject=series.get('subject', '')),
+            BaseSelectionScreen(
+                initial_base,
+                lser,
+                ambytes,
+                base_suggestions=base_suggestions,
+                base_hint=base_hint,
+                subject=series.get('subject', ''),
+            ),
             callback=lambda base_sha: self._on_base_selected(
-                base_sha, lser, series, ambytes),
+                base_sha, lser, series, ambytes
+            ),
         )
 
-    def _on_base_selected(self, base_sha: Optional[str],
-                           lser: b4.LoreSeries,
-                           series: Dict[str, Any],
-                           ambytes: bytes) -> None:
+    def _on_base_selected(
+        self,
+        base_sha: Optional[str],
+        lser: b4.LoreSeries,
+        series: Dict[str, Any],
+        ambytes: bytes,
+    ) -> None:
         """Handle base selection screen result."""
         if base_sha is None:
             self.notify('Checkout cancelled', severity='information')
             return
-        self._do_checkout(lser, series, base_commit=base_sha,
-                          ambytes=ambytes)
+        self._do_checkout(lser, series, base_commit=base_sha, ambytes=ambytes)
 
-    def _discover_newer_versions(self, change_id: str,
-                                 current_rev: int,
-                                 review_branch: str) -> List[int]:
+    def _discover_newer_versions(
+        self, change_id: str, current_rev: int, review_branch: str
+    ) -> List[int]:
         """Look up newer revision numbers from tracking data and DB."""
         newer_versions: List[int] = []
         topdir = b4.git_get_toplevel()
@@ -1419,8 +1541,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         return newer_versions
 
     @staticmethod
-    def _resolve_base_commit(topdir: str,
-                             lser: 'b4.LoreSeries') -> Optional[str]:
+    def _resolve_base_commit(topdir: str, lser: 'b4.LoreSeries') -> Optional[str]:
         """Determine the base commit for a series, guessing if needed.
 
         Returns the base commit SHA or None if it cannot be determined.
@@ -1429,8 +1550,10 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         need_guess = False
         if base_commit:
             if not b4.git_commit_exists(topdir, base_commit):
-                logger.warning('Base commit %s not found in repository, will try to guess',
-                               base_commit)
+                logger.warning(
+                    'Base commit %s not found in repository, will try to guess',
+                    base_commit,
+                )
                 need_guess = True
         else:
             need_guess = True
@@ -1439,23 +1562,33 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             logger.info('Guessing base commit...')
             try:
                 base_commit, nblobs, mismatches = lser.find_base(
-                    topdir, branches=None, maxdays=30)
+                    topdir, branches=None, maxdays=30
+                )
                 if mismatches == 0:
                     logger.info('Base: %s (exact match)', base_commit)
                 elif nblobs == mismatches:
                     logger.warning('Base: failed to find matching base')
                     base_commit = None
                 else:
-                    logger.info('Base: %s (best guess, %s/%s blobs matched)',
-                                base_commit, nblobs - mismatches, nblobs)
+                    logger.info(
+                        'Base: %s (best guess, %s/%s blobs matched)',
+                        base_commit,
+                        nblobs - mismatches,
+                        nblobs,
+                    )
             except IndexError as ex:
                 logger.warning('Base: failed to guess (%s)', ex)
                 base_commit = None
 
         return base_commit
 
-    def _do_checkout(self, lser: b4.LoreSeries, series: Dict[str, Any],
-                     base_commit: str, ambytes: bytes) -> None:
+    def _do_checkout(
+        self,
+        lser: b4.LoreSeries,
+        series: Dict[str, Any],
+        base_commit: str,
+        ambytes: bytes,
+    ) -> None:
         """Create the review branch for the series.
 
         Args:
@@ -1505,20 +1638,36 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                 if mismatches:
                     rstart, rend = lser.make_fake_am_range(gitdir=topdir)
                     if rstart and rend:
-                        logger.info('Prepared fake commit range for 3-way merge (%.12s..%.12s)', rstart, rend)
+                        logger.info(
+                            'Prepared fake commit range for 3-way merge (%.12s..%.12s)',
+                            rstart,
+                            rend,
+                        )
 
             try:
                 logger.info('Base: %s', base_commit)
-                b4.git_fetch_am_into_repo(topdir, ambytes=ambytes, at_base=base_commit,
-                                          origin=linkurl, am_flags=['-3'])
+                b4.git_fetch_am_into_repo(
+                    topdir,
+                    ambytes=ambytes,
+                    at_base=base_commit,
+                    origin=linkurl,
+                    am_flags=['-3'],
+                )
 
                 # Create the review branch
                 _is_rt = bool(series.get('is_rethreaded'))
-                b4.review.create_review_branch(topdir, branch_name, base_commit, lser,
-                                               linkurl, linkmask, num_prereqs=0,
-                                               identifier=self._identifier,
-                                               status='reviewing',
-                                               is_rethreaded=_is_rt)
+                b4.review.create_review_branch(
+                    topdir,
+                    branch_name,
+                    base_commit,
+                    lser,
+                    linkurl,
+                    linkmask,
+                    num_prereqs=0,
+                    identifier=self._identifier,
+                    status='reviewing',
+                    is_rethreaded=_is_rt,
+                )
                 logger.info('Review branch created: %s', branch_name)
                 checkout_success = True
             except b4.AmConflictError as cex:
@@ -1527,11 +1676,18 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                     return
                 b4._rewrite_fetch_head_origin(topdir, cex.worktree_path, linkurl)
                 # Create the review branch from resolved result
-                b4.review.create_review_branch(topdir, branch_name, base_commit, lser,
-                                               linkurl, linkmask, num_prereqs=0,
-                                               identifier=self._identifier,
-                                               status='reviewing',
-                                               is_rethreaded=_is_rt)
+                b4.review.create_review_branch(
+                    topdir,
+                    branch_name,
+                    base_commit,
+                    lser,
+                    linkurl,
+                    linkmask,
+                    num_prereqs=0,
+                    identifier=self._identifier,
+                    status='reviewing',
+                    is_rethreaded=_is_rt,
+                )
                 logger.info('Review branch created: %s', branch_name)
                 checkout_success = True
             except Exception as ex:
@@ -1545,9 +1701,15 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         if self._identifier:
             try:
                 conn = b4.review.tracking.get_db(self._identifier)
-                conn.execute('UPDATE series SET status = ?, revision = ?, message_id = ? WHERE track_id = ?',
-                             ('reviewing', series.get('revision'), series.get('message_id'),
-                              series.get('track_id')))
+                conn.execute(
+                    'UPDATE series SET status = ?, revision = ?, message_id = ? WHERE track_id = ?',
+                    (
+                        'reviewing',
+                        series.get('revision'),
+                        series.get('message_id'),
+                        series.get('track_id'),
+                    ),
+                )
                 conn.commit()
                 conn.close()
             except Exception as ex:
@@ -1559,7 +1721,8 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             try:
                 conn = b4.review.tracking.get_db(self._identifier)
                 db_target = b4.review.tracking.get_target_branch(
-                    conn, _co_change_id, revision=series.get('revision'))
+                    conn, _co_change_id, revision=series.get('revision')
+                )
                 conn.close()
                 if db_target and b4.git_branch_exists(topdir, branch_name):
                     cover_text, tracking = b4.review.load_tracking(topdir, branch_name)
@@ -1567,7 +1730,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                     if not trk_series.get('target-branch'):
                         trk_series['target-branch'] = db_target
                         tracking['series'] = trk_series
-                        b4.review.save_tracking_ref(topdir, branch_name, cover_text, tracking)
+                        b4.review.save_tracking_ref(
+                            topdir, branch_name, cover_text, tracking
+                        )
             except (SystemExit, Exception):
                 pass
 
@@ -1582,7 +1747,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         if self._identifier and _co_change_id and isinstance(_co_revision, int):
             try:
                 conn = b4.review.tracking.get_db(self._identifier)
-                b4.review.tracking.mark_all_messages_seen(conn, _co_change_id, _co_revision)
+                b4.review.tracking.mark_all_messages_seen(
+                    conn, _co_change_id, _co_revision
+                )
                 conn.close()
             except Exception:
                 pass
@@ -1709,7 +1876,10 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
         # Show target branch from preloaded series data or config default
         target_row = self.query_one('#detail-target-row', Horizontal)
-        target_branch = series.get('target_branch') or b4.review.tracking.get_review_target_branch_default()
+        target_branch = (
+            series.get('target_branch')
+            or b4.review.tracking.get_review_target_branch_default()
+        )
         if target_branch:
             self.query_one('#detail-target', Static).update(target_branch)
             target_row.display = True
@@ -1744,7 +1914,8 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         try:
             conn = b4.review.tracking.get_db(self._identifier)
             db_target = b4.review.tracking.get_target_branch(
-                conn, change_id, revision=series.get('revision'))
+                conn, change_id, revision=series.get('revision')
+            )
             conn.close()
             if db_target:
                 return db_target
@@ -1785,11 +1956,14 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                 review_branch = rb
 
         self.push_screen(
-            TargetBranchScreen(current_target, suggestions=suggestions or None,
-                               subject=series.get('subject', ''),
-                               message_id=series.get('message_id', ''),
-                               revision=series.get('revision'),
-                               review_branch=review_branch),
+            TargetBranchScreen(
+                current_target,
+                suggestions=suggestions or None,
+                subject=series.get('subject', ''),
+                message_id=series.get('message_id', ''),
+                revision=series.get('revision'),
+                review_branch=review_branch,
+            ),
             callback=self._on_target_branch_set,
         )
 
@@ -1813,22 +1987,27 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         if topdir and status in ('reviewing', 'replied', 'waiting', 'snoozed'):
             if b4.git_branch_exists(topdir, review_branch):
                 try:
-                    cover_text, tracking = b4.review.load_tracking(topdir, review_branch)
+                    cover_text, tracking = b4.review.load_tracking(
+                        topdir, review_branch
+                    )
                     trk_series = tracking.get('series', {})
                     if target_value:
                         trk_series['target-branch'] = target_value
                     else:
                         trk_series.pop('target-branch', None)
                     tracking['series'] = trk_series
-                    b4.review.save_tracking_ref(topdir, review_branch, cover_text, tracking)
+                    b4.review.save_tracking_ref(
+                        topdir, review_branch, cover_text, tracking
+                    )
                 except (SystemExit, Exception) as ex:
                     logger.warning('Could not update tracking commit: %s', ex)
 
         # Update database
         try:
             conn = b4.review.tracking.get_db(self._identifier)
-            b4.review.tracking.update_target_branch(conn, change_id, target_value,
-                                                    revision=revision)
+            b4.review.tracking.update_target_branch(
+                conn, change_id, target_value, revision=revision
+            )
             conn.close()
         except Exception as ex:
             logger.warning('Could not update target branch in DB: %s', ex)
@@ -1853,8 +2032,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
         self._focus_change_id = self._selected_series.get('change_id')
         self.push_screen(
-            UpdateAllScreen([self._selected_series], self._identifier,
-                            linkmask, topdir),
+            UpdateAllScreen(
+                [self._selected_series], self._identifier, linkmask, topdir
+            ),
             callback=self._on_update_complete,
         )
 
@@ -1900,7 +2080,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         if errors:
             parts.append(f'{errors} error(s)')
 
-        severity: Literal['information', 'warning'] = 'warning' if errors else 'information'
+        severity: Literal['information', 'warning'] = (
+            'warning' if errors else 'information'
+        )
         self.notify(', '.join(parts), severity=severity)
         for submitter, error in error_details:
             logger.warning('Update error (%s): %s', submitter, error)
@@ -1947,42 +2129,58 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         # Check if a newer revision is known to exist
         current_rev = series.get('revision', 1)
         newer_versions = self._discover_newer_versions(
-            change_id, current_rev, review_branch)
+            change_id, current_rev, review_branch
+        )
 
         if newer_versions:
             # Require explicit confirmation before taking an older revision
             self.push_screen(
                 NewerRevisionWarningScreen(current_rev, newer_versions),
                 callback=lambda proceed: self._on_newer_revision_acknowledged(
-                    proceed, target_branch, change_id, review_branch, series),
+                    proceed, target_branch, change_id, review_branch, series
+                ),
             )
         else:
             self._show_take_screen(target_branch, change_id, review_branch, series)
 
-    def _on_newer_revision_acknowledged(self, proceed: bool, target_branch: str,
-                                        change_id: str, review_branch: str,
-                                        series: Dict[str, Any]) -> None:
+    def _on_newer_revision_acknowledged(
+        self,
+        proceed: bool,
+        target_branch: str,
+        change_id: str,
+        review_branch: str,
+        series: Dict[str, Any],
+    ) -> None:
         """Handle result of the newer-revision warning."""
         if not proceed:
             return
         self._show_take_screen(target_branch, change_id, review_branch, series)
 
-    def _show_take_screen(self, target_branch: str, change_id: str,
-                          review_branch: str, series: Dict[str, Any]) -> None:
+    def _show_take_screen(
+        self,
+        target_branch: str,
+        change_id: str,
+        review_branch: str,
+        series: Dict[str, Any],
+    ) -> None:
         """Push the TakeScreen dialog."""
         num_patches = series.get('num_patches', 0) or 0
         # Start with user config preference; skip detection below may override it.
         _valid_take_methods = {'merge', 'linear', 'cherry-pick'}
         b4cfg = b4.get_config_from_git(r'b4\..*')
         cfg_method = str(b4cfg.get('review-default-take-method', ''))
-        default_method: Optional[str] = cfg_method if cfg_method in _valid_take_methods else None
+        default_method: Optional[str] = (
+            cfg_method if cfg_method in _valid_take_methods else None
+        )
         topdir = b4.git_get_toplevel()
         if topdir:
             try:
                 _cover_text, tracking = b4.review.load_tracking(topdir, review_branch)
                 usercfg = b4.get_user_config()
                 patches = tracking.get('patches', [])
-                if any(b4.review._get_patch_state(p, usercfg) == 'skip' for p in patches):
+                if any(
+                    b4.review._get_patch_state(p, usercfg) == 'skip' for p in patches
+                ):
                     default_method = 'cherry-pick'
             except Exception:
                 pass
@@ -1995,7 +2193,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         if recent_branches and not per_series_target:
             target_branch = recent_branches[0]
         # Build the suggestion list: config branches + recent take branches
-        all_suggestions: List[str] = list(b4.review.tracking.get_review_target_branches())
+        all_suggestions: List[str] = list(
+            b4.review.tracking.get_review_target_branches()
+        )
         if recent_branches:
             for rb in recent_branches:
                 if rb not in all_suggestions:
@@ -2003,19 +2203,29 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         if target_branch and target_branch not in all_suggestions:
             all_suggestions.append(target_branch)
         recent_branches = all_suggestions or None
-        take_screen = TakeScreen(target_branch, review_branch, num_patches=num_patches,
-                                 default_method=default_method,
-                                 recent_branches=recent_branches,
-                                 subject=series.get('subject', ''))
+        take_screen = TakeScreen(
+            target_branch,
+            review_branch,
+            num_patches=num_patches,
+            default_method=default_method,
+            recent_branches=recent_branches,
+            subject=series.get('subject', ''),
+        )
         self.push_screen(
             take_screen,
             callback=lambda confirmed: self._on_take_confirmed(
-                confirmed, change_id, review_branch, take_screen, series),
+                confirmed, change_id, review_branch, take_screen, series
+            ),
         )
 
-    def _on_take_confirmed(self, confirmed: bool, change_id: str,
-                           review_branch: str, take_screen: 'TakeScreen',
-                           series: Dict[str, Any]) -> None:
+    def _on_take_confirmed(
+        self,
+        confirmed: bool,
+        change_id: str,
+        review_branch: str,
+        take_screen: 'TakeScreen',
+        series: Dict[str, Any],
+    ) -> None:
         """Handle take screen result — proceed to cherry-pick or confirm."""
         if not confirmed:
             return
@@ -2036,7 +2246,8 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                 return
             usercfg = b4.get_user_config()
             preselected = [
-                i + 1 for i, p in enumerate(patches)
+                i + 1
+                for i, p in enumerate(patches)
                 if b4.review._get_patch_state(p, usercfg) != 'skip'
             ]
             # Only pre-populate if some patches are actually skipped
@@ -2046,49 +2257,81 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             self.push_screen(
                 pick_screen,
                 callback=lambda picked: self._on_cherrypick_confirmed(
-                    picked, change_id, review_branch, take_screen, series,
-                    pick_screen),
+                    picked, change_id, review_branch, take_screen, series, pick_screen
+                ),
             )
         else:
             self._show_take_confirm(
-                take_screen.method_result, take_screen.target_result,
-                change_id, review_branch, take_screen, series)
+                take_screen.method_result,
+                take_screen.target_result,
+                change_id,
+                review_branch,
+                take_screen,
+                series,
+            )
 
-    def _on_cherrypick_confirmed(self, confirmed: bool, change_id: str,
-                                 review_branch: str, take_screen: 'TakeScreen',
-                                 series: Dict[str, Any],
-                                 pick_screen: 'CherryPickScreen') -> None:
+    def _on_cherrypick_confirmed(
+        self,
+        confirmed: bool,
+        change_id: str,
+        review_branch: str,
+        take_screen: 'TakeScreen',
+        series: Dict[str, Any],
+        pick_screen: 'CherryPickScreen',
+    ) -> None:
         """Handle cherry-pick selection — proceed to confirm screen."""
         if not confirmed:
             return
         self._show_take_confirm(
-            'cherry-pick', take_screen.target_result,
-            change_id, review_branch, take_screen, series,
-            cherrypick=pick_screen.selected_indices)
-
-    def _show_take_confirm(self, method: str, target_branch: str,
-                           change_id: str, review_branch: str,
-                           take_screen: 'TakeScreen',
-                           series: Dict[str, Any],
-                           cherrypick: Optional[List[int]] = None) -> None:
+            'cherry-pick',
+            take_screen.target_result,
+            change_id,
+            review_branch,
+            take_screen,
+            series,
+            cherrypick=pick_screen.selected_indices,
+        )
+
+    def _show_take_confirm(
+        self,
+        method: str,
+        target_branch: str,
+        change_id: str,
+        review_branch: str,
+        take_screen: 'TakeScreen',
+        series: Dict[str, Any],
+        cherrypick: Optional[List[int]] = None,
+    ) -> None:
         """Push the TakeConfirmScreen for final confirmation."""
         subject = series.get('subject', '')
         confirm_screen = TakeConfirmScreen(
-            method, target_branch, review_branch, subject=subject,
-            cherrypick=cherrypick)
+            method, target_branch, review_branch, subject=subject, cherrypick=cherrypick
+        )
         self.push_screen(
             confirm_screen,
             callback=lambda ok: self._on_take_final(
-                ok, method, change_id, review_branch, take_screen,
-                series, confirm_screen, cherrypick),
+                ok,
+                method,
+                change_id,
+                review_branch,
+                take_screen,
+                series,
+                confirm_screen,
+                cherrypick,
+            ),
         )
 
-    def _on_take_final(self, confirmed: bool, method: str,
-                       change_id: str, review_branch: str,
-                       take_screen: 'TakeScreen',
-                       series: Dict[str, Any],
-                       confirm_screen: 'TakeConfirmScreen',
-                       cherrypick: Optional[List[int]] = None) -> None:
+    def _on_take_final(
+        self,
+        confirmed: bool,
+        method: str,
+        change_id: str,
+        review_branch: str,
+        take_screen: 'TakeScreen',
+        series: Dict[str, Any],
+        confirm_screen: 'TakeConfirmScreen',
+        cherrypick: Optional[List[int]] = None,
+    ) -> None:
         """Execute the actual take after final confirmation."""
         if not confirmed:
             return
@@ -2101,15 +2344,20 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             self._load_series()
         else:
             with self.suspend():
-                self._do_take_am(change_id, review_branch, take_screen, series,
-                                 cherrypick=cherrypick)
+                self._do_take_am(
+                    change_id, review_branch, take_screen, series, cherrypick=cherrypick
+                )
             self._load_series()
 
     @staticmethod
-    def _record_take_metadata(topdir: str, review_branch: str,
-                              target_branch: str, commit_ids: List[str],
-                              cherrypick: Optional[List[int]] = None,
-                              accepted: bool = True) -> None:
+    def _record_take_metadata(
+        topdir: str,
+        review_branch: str,
+        target_branch: str,
+        commit_ids: List[str],
+        cherrypick: Optional[List[int]] = None,
+        accepted: bool = True,
+    ) -> None:
         """Record taken commit IDs in the tracking data on the review branch.
 
         Args:
@@ -2166,9 +2414,13 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         if not b4.review.save_tracking_ref(topdir, review_branch, cover_text, tracking):
             logger.warning('Could not save take metadata to tracking commit')
 
-    def _do_take_merge(self, change_id: str, review_branch: str,
-                       take_screen: 'TakeScreen',
-                       series: Dict[str, Any]) -> None:
+    def _do_take_merge(
+        self,
+        change_id: str,
+        review_branch: str,
+        take_screen: 'TakeScreen',
+        series: Dict[str, Any],
+    ) -> None:
         """Perform a merge-based take operation."""
         target_branch = take_screen.target_result
 
@@ -2199,15 +2451,19 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             try:
                 merge_template = b4.read_template(str(config['shazam-merge-template']))
             except FileNotFoundError:
-                logger.critical('ERROR: shazam-merge-template says to use %s, but it does not exist',
-                                config['shazam-merge-template'])
+                logger.critical(
+                    'ERROR: shazam-merge-template says to use %s, but it does not exist',
+                    config['shazam-merge-template'],
+                )
                 _wait_for_enter()
                 return
 
         # Extract cover message body
         covermessage = ''
         if cover_text:
-            _githeaders, message, _trailers, _basement, _sig = b4.LoreMessage.get_body_parts(cover_text)
+            _githeaders, message, _trailers, _basement, _sig = (
+                b4.LoreMessage.get_body_parts(cover_text)
+            )
             covermessage = message.strip()
         if not covermessage:
             covermessage = '(no cover letter message)'
@@ -2277,7 +2533,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             base_commit = out.strip()
 
         try:
-            b4.git_fetch_am_into_repo(topdir, ambytes, at_base=base_commit, am_flags=['-3'])
+            b4.git_fetch_am_into_repo(
+                topdir, ambytes, at_base=base_commit, am_flags=['-3']
+            )
         except b4.AmConflictError as cex:
             if not _resolve_worktree_am_conflict(topdir, cex):
                 _wait_for_enter()
@@ -2292,7 +2550,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             prev_branch = b4.git_revparse_obj('HEAD', gitdir=topdir)
 
         # Checkout target branch
-        ecode, out = b4.git_run_command(topdir, ['checkout', target_branch], logstderr=True)
+        ecode, out = b4.git_run_command(
+            topdir, ['checkout', target_branch], logstderr=True
+        )
         if ecode != 0:
             logger.critical('Could not checkout %s: %s', target_branch, out.strip())
             _wait_for_enter()
@@ -2334,20 +2594,31 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         # After --no-ff merge, HEAD^2 is the tip of the merged side;
         # the individual patch commits are base_commit..HEAD^2.
         ecode, out = b4.git_run_command(
-            topdir, ['rev-list', '--reverse', f'{base_commit}..HEAD^2'])
+            topdir, ['rev-list', '--reverse', f'{base_commit}..HEAD^2']
+        )
         if ecode == 0 and out.strip():
             commit_ids = out.strip().splitlines()
-            self._record_take_metadata(topdir, review_branch, target_branch,
-                                       commit_ids,
-                                       accepted=take_screen.accept_series)
+            self._record_take_metadata(
+                topdir,
+                review_branch,
+                target_branch,
+                commit_ids,
+                accepted=take_screen.accept_series,
+            )
 
-        self._finalize_take(topdir, target_branch, change_id, t_series,
-                            take_screen.accept_series)
+        self._finalize_take(
+            topdir, target_branch, change_id, t_series, take_screen.accept_series
+        )
         _wait_for_enter()
 
-    def _finalize_take(self, topdir: str, target_branch: str,
-                       change_id: str, series: Dict[str, Any],
-                       accepted: bool) -> None:
+    def _finalize_take(
+        self,
+        topdir: str,
+        target_branch: str,
+        change_id: str,
+        series: Dict[str, Any],
+        accepted: bool,
+    ) -> None:
         """Common post-take steps: record branch, update DB, update Patchwork."""
         common_dir = b4.git_get_common_dir(topdir)
         if common_dir:
@@ -2358,14 +2629,17 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             existing_target = None
             try:
                 conn = b4.review.tracking.get_db(self._identifier)
-                b4.review.tracking.update_series_status(conn, change_id, 'accepted',
-                                                        revision=revision)
+                b4.review.tracking.update_series_status(
+                    conn, change_id, 'accepted', revision=revision
+                )
                 # Record the take target as the series target branch if not already set
                 existing_target = b4.review.tracking.get_target_branch(
-                    conn, change_id, revision=revision)
+                    conn, change_id, revision=revision
+                )
                 if not existing_target:
                     b4.review.tracking.update_target_branch(
-                        conn, change_id, target_branch, revision=revision)
+                        conn, change_id, target_branch, revision=revision
+                    )
                 conn.close()
             except Exception as ex:
                 logger.warning('Could not update series status: %s', ex)
@@ -2373,13 +2647,16 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             review_branch = f'b4/review/{change_id}'
             if not existing_target and b4.git_branch_exists(topdir, review_branch):
                 try:
-                    cover_text, tracking = b4.review.load_tracking(topdir, review_branch)
+                    cover_text, tracking = b4.review.load_tracking(
+                        topdir, review_branch
+                    )
                     trk_series = tracking.get('series', {})
                     if not trk_series.get('target-branch'):
                         trk_series['target-branch'] = target_branch
                         tracking['series'] = trk_series
-                        b4.review.save_tracking_ref(topdir, review_branch,
-                                                    cover_text, tracking)
+                        b4.review.save_tracking_ref(
+                            topdir, review_branch, cover_text, tracking
+                        )
                 except (SystemExit, Exception):
                     pass
 
@@ -2435,8 +2712,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         # Generate patches from local commits
         revision = t_series.get('revision', 1)
         try:
-            local_patches = b4.git_range_to_patches(topdir, range_start, range_end,
-                                                    revision=revision)
+            local_patches = b4.git_range_to_patches(
+                topdir, range_start, range_end, revision=revision
+            )
         except RuntimeError as ex:
             logger.critical('Could not generate patches: %s', ex)
             _wait_for_enter()
@@ -2460,8 +2738,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         for _commit, msg in local_patches:
             lmbx.add_message(msg)
 
-        lser = lmbx.get_series(revision, sloppytrailers=False,
-                               codereview_trailers=False)
+        lser = lmbx.get_series(
+            revision, sloppytrailers=False, codereview_trailers=False
+        )
         if lser is None:
             logger.critical('Could not build series from local patches')
             _wait_for_enter()
@@ -2497,20 +2776,25 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                         patch.followup_trailers.append(fltr)
 
         # Get am-ready messages
-        am_msgs = lser.get_am_ready(noaddtrailers=False,
-                                    addmysob=take_screen.add_signoff,
-                                    addlink=take_screen.add_link,
-                                    cherrypick=cherrypick,
-                                    copyccs=False,
-                                    allowbadchars=False)
+        am_msgs = lser.get_am_ready(
+            noaddtrailers=False,
+            addmysob=take_screen.add_signoff,
+            addlink=take_screen.add_link,
+            cherrypick=cherrypick,
+            copyccs=False,
+            allowbadchars=False,
+        )
         if not am_msgs:
             logger.critical('No patches ready for applying')
             _wait_for_enter()
             return None
 
         if cherrypick:
-            logger.info('Prepared %d patch(es) (cherry-picked: %s)',
-                        len(am_msgs), ', '.join(str(x) for x in cherrypick))
+            logger.info(
+                'Prepared %d patch(es) (cherry-picked: %s)',
+                len(am_msgs),
+                ', '.join(str(x) for x in cherrypick),
+            )
         else:
             logger.info('Prepared %d patch(es)', len(am_msgs))
 
@@ -2519,9 +2803,14 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         b4.save_git_am_mbox(am_msgs, ifh)
         return ifh.getvalue()
 
-    def _do_take_am(self, change_id: str, review_branch: str,
-                    take_screen: 'TakeScreen', series: Dict[str, Any],
-                    cherrypick: Optional[List[int]]) -> None:
+    def _do_take_am(
+        self,
+        change_id: str,
+        review_branch: str,
+        take_screen: 'TakeScreen',
+        series: Dict[str, Any],
+        cherrypick: Optional[List[int]],
+    ) -> None:
         """Perform a linear or cherry-pick take via git-am."""
         target_branch = take_screen.target_result
 
@@ -2530,8 +2819,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             logger.critical('Not in a git repository')
             return
 
-        ambytes = self._prepare_am_messages(review_branch, take_screen, series,
-                                            cherrypick=cherrypick)
+        ambytes = self._prepare_am_messages(
+            review_branch, take_screen, series, cherrypick=cherrypick
+        )
         if ambytes is None:
             return
 
@@ -2541,7 +2831,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             prev_branch = b4.git_revparse_obj('HEAD', gitdir=topdir)
 
         # Checkout target branch
-        ecode, out = b4.git_run_command(topdir, ['checkout', target_branch], logstderr=True)
+        ecode, out = b4.git_run_command(
+            topdir, ['checkout', target_branch], logstderr=True
+        )
         if ecode != 0:
             logger.critical('Could not checkout %s: %s', target_branch, out.strip())
             _wait_for_enter()
@@ -2552,12 +2844,16 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         pre_am_head = out.strip() if ecode == 0 else ''
 
         # Run git-am with three-way merge
-        ecode, out = b4.git_run_command(topdir, ['am', '-3'], stdin=ambytes, logstderr=True)
+        ecode, out = b4.git_run_command(
+            topdir, ['am', '-3'], stdin=ambytes, logstderr=True
+        )
         if ecode != 0:
             logger.critical('git-am failed:')
             logger.critical(out.strip())
             logger.info('You can resolve the conflict now.')
-            logger.info('Use "git am --continue" after resolving, or "git am --abort" to give up.')
+            logger.info(
+                'Use "git am --continue" after resolving, or "git am --abort" to give up.'
+            )
             _suspend_to_shell(hint='b4 conflict')
             # Check if am is still in progress (user exited without finishing)
             rebase_apply_path = os.path.join(topdir, '.git', 'rebase-apply')
@@ -2567,7 +2863,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                 _wait_for_enter()
                 return
             # Check if am was aborted (HEAD unchanged)
-            ecode, current_head = b4.git_run_command(topdir, ['rev-parse', 'HEAD'], logstderr=True)
+            ecode, current_head = b4.git_run_command(
+                topdir, ['rev-parse', 'HEAD'], logstderr=True
+            )
             if ecode != 0 or current_head.strip() == pre_am_head:
                 logger.warning('Conflict resolution aborted')
                 _wait_for_enter()
@@ -2580,15 +2878,22 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         # Record per-patch commit IDs in the tracking data
         if pre_am_head:
             ecode, out = b4.git_run_command(
-                topdir, ['rev-list', '--reverse', f'{pre_am_head}..HEAD'])
+                topdir, ['rev-list', '--reverse', f'{pre_am_head}..HEAD']
+            )
             if ecode == 0:
                 commit_ids = out.strip().splitlines()
-                self._record_take_metadata(topdir, review_branch, target_branch,
-                                           commit_ids, cherrypick=cherrypick,
-                                           accepted=take_screen.accept_series)
+                self._record_take_metadata(
+                    topdir,
+                    review_branch,
+                    target_branch,
+                    commit_ids,
+                    cherrypick=cherrypick,
+                    accepted=take_screen.accept_series,
+                )
 
-        self._finalize_take(topdir, target_branch, change_id, series,
-                            take_screen.accept_series)
+        self._finalize_take(
+            topdir, target_branch, change_id, series, take_screen.accept_series
+        )
         _wait_for_enter()
 
     def action_rebase(self) -> None:
@@ -2597,7 +2902,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             return
         status = self._selected_series.get('status', 'new')
         if status not in ('reviewing', 'replied'):
-            self.notify('Series must be checked out before rebasing', severity='warning')
+            self.notify(
+                'Series must be checked out before rebasing', severity='warning'
+            )
             return
 
         series = self._selected_series
@@ -2619,22 +2926,31 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         elif recent_branches:
             current_branch = recent_branches[0]
         # Ensure the original branch is always in the suggestion list
-        if current_branch and recent_branches is not None and current_branch not in recent_branches:
+        if (
+            current_branch
+            and recent_branches is not None
+            and current_branch not in recent_branches
+        ):
             recent_branches.append(current_branch)
         elif current_branch and recent_branches is None:
             recent_branches = [current_branch]
 
-        rebase_screen = RebaseScreen(current_branch, review_branch,
-                                     recent_branches=recent_branches,
-                                     subject=self._selected_series.get('subject', ''))
+        rebase_screen = RebaseScreen(
+            current_branch,
+            review_branch,
+            recent_branches=recent_branches,
+            subject=self._selected_series.get('subject', ''),
+        )
         self.push_screen(
             rebase_screen,
             callback=lambda confirmed: self._on_rebase_confirmed(
-                confirmed, review_branch, rebase_screen),
+                confirmed, review_branch, rebase_screen
+            ),
         )
 
-    def _on_rebase_confirmed(self, confirmed: bool, review_branch: str,
-                             rebase_screen: 'RebaseScreen') -> None:
+    def _on_rebase_confirmed(
+        self, confirmed: bool, review_branch: str, rebase_screen: 'RebaseScreen'
+    ) -> None:
         """Handle rebase confirmation result."""
         if not confirmed:
             return
@@ -2691,12 +3007,16 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         try:
             with b4.git_temp_worktree(topdir, target_head) as gwt:
                 # Set up sparse checkout for minimal disk usage
-                ecode, out = b4.git_run_command(gwt, ['sparse-checkout', 'set'], logstderr=True)
+                ecode, out = b4.git_run_command(
+                    gwt, ['sparse-checkout', 'set'], logstderr=True
+                )
                 if ecode != 0:
                     logger.warning('Could not set up sparse checkout: %s', out.strip())
                 ecode, out = b4.git_run_command(gwt, ['checkout', '-f'], logstderr=True)
                 if ecode != 0:
-                    logger.warning('Could not checkout sparse worktree: %s', out.strip())
+                    logger.warning(
+                        'Could not checkout sparse worktree: %s', out.strip()
+                    )
 
                 # Try cherry-picking the commits
                 gitargs = ['cherry-pick', f'{base_commit}..{series_tip}']
@@ -2716,8 +3036,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         logger.info('Rebasing %s onto %s...', review_branch, target_branch)
 
         # Remember where we are so we can restore on failure
-        ecode, original_branch = b4.git_run_command(topdir, ['rev-parse', '--abbrev-ref', 'HEAD'],
-                                                     logstderr=True)
+        ecode, original_branch = b4.git_run_command(
+            topdir, ['rev-parse', '--abbrev-ref', 'HEAD'], logstderr=True
+        )
         if ecode != 0:
             logger.critical('Could not determine current branch')
             _wait_for_enter()
@@ -2725,14 +3046,18 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         original_branch = original_branch.strip()
 
         # First, checkout the review branch (at the tracking commit)
-        ecode, out = b4.git_run_command(topdir, ['checkout', review_branch], logstderr=True)
+        ecode, out = b4.git_run_command(
+            topdir, ['checkout', review_branch], logstderr=True
+        )
         if ecode != 0:
             logger.critical('Could not checkout review branch: %s', out.strip())
             _wait_for_enter()
             return
 
         # Save the tracking commit SHA so we can restore on failure
-        ecode, tracking_commit = b4.git_run_command(topdir, ['rev-parse', 'HEAD'], logstderr=True)
+        ecode, tracking_commit = b4.git_run_command(
+            topdir, ['rev-parse', 'HEAD'], logstderr=True
+        )
         if ecode != 0:
             logger.critical('Could not resolve tracking commit')
             b4.git_run_command(topdir, ['checkout', original_branch], logstderr=True)
@@ -2741,25 +3066,37 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         tracking_commit = tracking_commit.strip()
 
         # Reset to before the tracking commit (now at series_tip)
-        ecode, out = b4.git_run_command(topdir, ['reset', '--hard', 'HEAD~1'], logstderr=True)
+        ecode, out = b4.git_run_command(
+            topdir, ['reset', '--hard', 'HEAD~1'], logstderr=True
+        )
         if ecode != 0:
-            logger.critical('Could not reset to before tracking commit: %s', out.strip())
-            b4.git_run_command(topdir, ['reset', '--hard', tracking_commit], logstderr=True)
+            logger.critical(
+                'Could not reset to before tracking commit: %s', out.strip()
+            )
+            b4.git_run_command(
+                topdir, ['reset', '--hard', tracking_commit], logstderr=True
+            )
             b4.git_run_command(topdir, ['checkout', original_branch], logstderr=True)
             _wait_for_enter()
             return
 
         # Rebase the patches onto target_head
         # --onto target_head base_commit means: take commits after base_commit and replay onto target_head
-        ecode, out = b4.git_run_command(topdir, ['rebase', '--onto', target_head, base_commit], logstderr=True)
+        ecode, out = b4.git_run_command(
+            topdir, ['rebase', '--onto', target_head, base_commit], logstderr=True
+        )
         if ecode != 0:
             if applies_clean:
                 # Test said clean but real rebase failed — something is wrong, abort
                 logger.critical('Rebase failed unexpectedly: %s', out.strip())
                 logger.critical('Aborting rebase...')
                 b4.git_run_command(topdir, ['rebase', '--abort'], logstderr=True)
-                b4.git_run_command(topdir, ['reset', '--hard', tracking_commit], logstderr=True)
-                b4.git_run_command(topdir, ['checkout', original_branch], logstderr=True)
+                b4.git_run_command(
+                    topdir, ['reset', '--hard', tracking_commit], logstderr=True
+                )
+                b4.git_run_command(
+                    topdir, ['checkout', original_branch], logstderr=True
+                )
                 _wait_for_enter()
                 return
 
@@ -2768,37 +3105,62 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             logger.critical('---')
             logger.critical('Rebase had conflicts.')
             logger.info('You can resolve the conflicts in your working tree.')
-            logger.info('Use "git rebase --continue" after resolving, or "git rebase --abort" to give up.')
+            logger.info(
+                'Use "git rebase --continue" after resolving, or "git rebase --abort" to give up.'
+            )
             _suspend_to_shell(hint='b4 rebase')
             # Check if rebase is still in progress (user exited without finishing)
-            ecode, gitdir = b4.git_run_command(topdir, ['rev-parse', '--git-dir'], logstderr=True)
+            ecode, gitdir = b4.git_run_command(
+                topdir, ['rev-parse', '--git-dir'], logstderr=True
+            )
             rebase_in_progress = False
             if ecode == 0:
                 gitdir = gitdir.strip()
-                rebase_in_progress = (os.path.isdir(os.path.join(gitdir, 'rebase-merge'))
-                                      or os.path.isdir(os.path.join(gitdir, 'rebase-apply')))
+                rebase_in_progress = os.path.isdir(
+                    os.path.join(gitdir, 'rebase-merge')
+                ) or os.path.isdir(os.path.join(gitdir, 'rebase-apply'))
             if rebase_in_progress:
                 logger.warning('Rebase not completed, aborting')
                 b4.git_run_command(topdir, ['rebase', '--abort'], logstderr=True)
-                b4.git_run_command(topdir, ['reset', '--hard', tracking_commit], logstderr=True)
-                b4.git_run_command(topdir, ['checkout', original_branch], logstderr=True)
+                b4.git_run_command(
+                    topdir, ['reset', '--hard', tracking_commit], logstderr=True
+                )
+                b4.git_run_command(
+                    topdir, ['checkout', original_branch], logstderr=True
+                )
                 _wait_for_enter()
                 return
             # Check if the rebase was aborted (HEAD back at pre-rebase state)
-            ecode, current_head = b4.git_run_command(topdir, ['rev-parse', 'HEAD'], logstderr=True)
+            ecode, current_head = b4.git_run_command(
+                topdir, ['rev-parse', 'HEAD'], logstderr=True
+            )
             if ecode != 0 or current_head.strip() == series_tip:
                 logger.warning('Rebase was aborted')
-                b4.git_run_command(topdir, ['reset', '--hard', tracking_commit], logstderr=True)
-                b4.git_run_command(topdir, ['checkout', original_branch], logstderr=True)
+                b4.git_run_command(
+                    topdir, ['reset', '--hard', tracking_commit], logstderr=True
+                )
+                b4.git_run_command(
+                    topdir, ['checkout', original_branch], logstderr=True
+                )
                 _wait_for_enter()
                 return
             # Verify target is an ancestor of HEAD (rebase actually landed)
             ecode, _out = b4.git_run_command(
-                topdir, ['merge-base', '--is-ancestor', target_head, 'HEAD'], logstderr=True)
+                topdir,
+                ['merge-base', '--is-ancestor', target_head, 'HEAD'],
+                logstderr=True,
+            )
             if ecode != 0:
-                logger.warning('Rebase result does not include %s, something went wrong', target_branch)
-                b4.git_run_command(topdir, ['reset', '--hard', tracking_commit], logstderr=True)
-                b4.git_run_command(topdir, ['checkout', original_branch], logstderr=True)
+                logger.warning(
+                    'Rebase result does not include %s, something went wrong',
+                    target_branch,
+                )
+                b4.git_run_command(
+                    topdir, ['reset', '--hard', tracking_commit], logstderr=True
+                )
+                b4.git_run_command(
+                    topdir, ['checkout', original_branch], logstderr=True
+                )
                 _wait_for_enter()
                 return
             logger.info('Rebase conflicts resolved')
@@ -2808,7 +3170,8 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
         # Enumerate new patch commit SHAs and update first-patch-commit
         ecode, out = b4.git_run_command(
-            topdir, ['rev-list', '--reverse', f'{target_head}..HEAD'])
+            topdir, ['rev-list', '--reverse', f'{target_head}..HEAD']
+        )
         if ecode == 0 and out.strip():
             new_shas = out.strip().splitlines()
             series['first-patch-commit'] = new_shas[0]
@@ -2820,8 +3183,12 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
         # Re-create the tracking commit
         commit_msg = cover_text + '\n\n' + b4.review.make_review_magic_json(tracking)
-        ecode, out = b4.git_run_command(topdir, ['commit', '--allow-empty', '-F', '-'],
-                                        stdin=commit_msg.encode(), logstderr=True)
+        ecode, out = b4.git_run_command(
+            topdir,
+            ['commit', '--allow-empty', '-F', '-'],
+            stdin=commit_msg.encode(),
+            logstderr=True,
+        )
         if ecode != 0:
             logger.critical('Could not create new tracking commit: %s', out.strip())
             _wait_for_enter()
@@ -2863,11 +3230,14 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
         self.push_screen(
             RangeDiffScreen(current_rev, revisions),
-            callback=lambda chosen: self._on_range_diff_selected(chosen, change_id, current_rev),
+            callback=lambda chosen: self._on_range_diff_selected(
+                chosen, change_id, current_rev
+            ),
         )
 
-    def _on_range_diff_selected(self, chosen: Optional[int], change_id: str,
-                                 current_rev: int) -> None:
+    def _on_range_diff_selected(
+        self, chosen: Optional[int], change_id: str, current_rev: int
+    ) -> None:
         """Handle the revision chosen from the range-diff modal."""
         if chosen is None:
             return
@@ -2876,7 +3246,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
     @staticmethod
     def _fetch_fake_am_range(
-        topdir: str, revisions: List[Dict[str, Any]], rev: int,
+        topdir: str,
+        revisions: List[Dict[str, Any]],
+        rev: int,
     ) -> Optional[Tuple[str, str]]:
         """Fetch a revision and create a fake-am commit range.
 
@@ -2923,8 +3295,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         for msg in msgs:
             lmbx.add_message(msg)
 
-        lser = lmbx.get_series(rev, sloppytrailers=False,
-                               codereview_trailers=False)
+        lser = lmbx.get_series(rev, sloppytrailers=False, codereview_trailers=False)
         if lser is None:
             logger.critical('Could not find series v%d in retrieved messages', rev)
             return None
@@ -2998,9 +3369,12 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
         # --- Run git range-diff ---
         logger.info('Running range-diff...')
-        gitargs = ['range-diff', '--color',
-                    f'{left_start}..{left_end}',
-                    f'{right_start}..{right_end}']
+        gitargs = [
+            'range-diff',
+            '--color',
+            f'{left_start}..{left_end}',
+            f'{right_start}..{right_end}',
+        ]
         ecode, out = b4.git_run_command(topdir, gitargs)
         if ecode != 0:
             logger.critical('git range-diff failed (exit %d)', ecode)
@@ -3010,42 +3384,49 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             return
 
         if not out.strip():
-            logger.info('No differences found between v%d and v%d',
-                        min(other_rev, current_rev), max(other_rev, current_rev))
+            logger.info(
+                'No differences found between v%d and v%d',
+                min(other_rev, current_rev),
+                max(other_rev, current_rev),
+            )
             _wait_for_enter()
             return
 
         b4.view_in_pager(out.encode(), filehint='range-diff.txt')
 
-    def _delete_review_branch(self, topdir: str, review_branch: str,
-                              notify: bool = True) -> bool:
+    def _delete_review_branch(
+        self, topdir: str, review_branch: str, notify: bool = True
+    ) -> bool:
         """Delete a review branch, switching away if currently on it.
 
         Returns True on success, False on failure.
         """
         if b4.git_get_current_branch(topdir) == review_branch:
             ecode, out = b4.git_run_command(
-                topdir, ['rev-parse', f'{review_branch}~1'],
-                logstderr=True)
+                topdir, ['rev-parse', f'{review_branch}~1'], logstderr=True
+            )
             if ecode > 0:
                 if notify:
-                    self.notify('Could not determine parent commit',
-                                severity='error')
+                    self.notify('Could not determine parent commit', severity='error')
                 return False
             parent = out.strip()
             ecode, out = b4.git_run_command(
-                topdir, ['checkout', parent], logstderr=True)
+                topdir, ['checkout', parent], logstderr=True
+            )
             if ecode > 0:
                 if notify:
-                    self.notify(f'Could not switch away from {review_branch}',
-                                severity='error')
+                    self.notify(
+                        f'Could not switch away from {review_branch}', severity='error'
+                    )
                 return False
         ecode, out = b4.git_run_command(
-            topdir, ['branch', '-D', review_branch], logstderr=True)
+            topdir, ['branch', '-D', review_branch], logstderr=True
+        )
         if ecode > 0:
             if notify:
-                self.notify(f'Failed to delete branch {review_branch}',
-                            severity='error')
+                self.notify(
+                    f'Failed to delete branch {review_branch}', severity='error'
+                )
             return False
         return True
 
@@ -3058,16 +3439,25 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         review_branch = f'b4/review/{change_id}'
         has_branch = b4.git_branch_exists(None, review_branch)
         self.push_screen(
-            AbandonConfirmScreen(change_id, review_branch, has_branch,
-                                 subject=self._selected_series.get('subject', '')),
+            AbandonConfirmScreen(
+                change_id,
+                review_branch,
+                has_branch,
+                subject=self._selected_series.get('subject', ''),
+            ),
             callback=lambda confirmed: self._on_abandon_confirmed(
-                confirmed, change_id, review_branch, has_branch,
-                revision=revision),
+                confirmed, change_id, review_branch, has_branch, revision=revision
+            ),
         )
 
-    def _on_abandon_confirmed(self, confirmed: bool, change_id: str,
-                               review_branch: str, has_branch: bool,
-                               revision: Optional[int] = None) -> None:
+    def _on_abandon_confirmed(
+        self,
+        confirmed: bool,
+        change_id: str,
+        review_branch: str,
+        has_branch: bool,
+        revision: Optional[int] = None,
+    ) -> None:
         if not confirmed:
             return
         topdir = b4.git_get_toplevel()
@@ -3098,8 +3488,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             return
         status = self._selected_series.get('status', 'new')
         if status not in ('reviewing', 'new'):
-            self.notify('Series must be checked out or new to upgrade',
-                        severity='warning')
+            self.notify(
+                'Series must be checked out or new to upgrade', severity='warning'
+            )
             return
 
         change_id = self._selected_series.get('change_id', '')
@@ -3108,12 +3499,13 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
         # Discover newer revisions from tracking data and DB
         newer_versions = self._discover_newer_versions(
-            change_id, current_rev, review_branch)
+            change_id, current_rev, review_branch
+        )
 
         if not newer_versions:
             self.notify(
-                'No newer revisions known. Try \\[u]pdate first.',
-                severity='warning')
+                'No newer revisions known. Try \\[u]pdate first.', severity='warning'
+            )
             return
 
         # Look up revision metadata from the DB
@@ -3128,42 +3520,44 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         newer_revs = [r for r in revisions if r['revision'] in newer_versions]
         if not newer_revs:
             self.notify(
-                'No newer revisions known. Try \\[u]pdate first.',
-                severity='warning')
+                'No newer revisions known. Try \\[u]pdate first.', severity='warning'
+            )
             return
 
         if status == 'new':
             # No review branch — just update the DB record
             if len(newer_revs) == 1:
-                self._do_switch_revision(change_id, current_rev,
-                                         newer_revs[0])
+                self._do_switch_revision(change_id, current_rev, newer_revs[0])
                 return
             self.push_screen(
                 UpdateRevisionScreen(current_rev, revisions),
                 callback=lambda chosen: (
                     self._switch_revision_by_number(
-                        change_id, current_rev, chosen, revisions)
-                    if chosen is not None else None
+                        change_id, current_rev, chosen, revisions
+                    )
+                    if chosen is not None
+                    else None
                 ),
             )
             return
 
         if len(newer_revs) == 1:
             # Only one newer revision — go straight to the upgrade
-            self._do_update_revision(change_id, current_rev,
-                                     newer_revs[0]['revision'])
+            self._do_update_revision(change_id, current_rev, newer_revs[0]['revision'])
             return
 
         self.push_screen(
             UpdateRevisionScreen(current_rev, revisions),
             callback=lambda chosen: (
                 self._do_update_revision(change_id, current_rev, chosen)
-                if chosen is not None else None
+                if chosen is not None
+                else None
             ),
         )
 
-    def _do_switch_revision(self, change_id: str, current_rev: int,
-                            rev_info: Dict[str, Any]) -> None:
+    def _do_switch_revision(
+        self, change_id: str, current_rev: int, rev_info: Dict[str, Any]
+    ) -> None:
         """Switch a not-yet-checked-out series to a different revision.
 
         Simply updates the database record — no branch operations needed.
@@ -3174,8 +3568,8 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         try:
             conn = b4.review.tracking.get_db(self._identifier)
             b4.review.tracking.update_series_revision(
-                conn, change_id, current_rev, target_rev,
-                new_msgid, new_subject)
+                conn, change_id, current_rev, target_rev, new_msgid, new_subject
+            )
             conn.close()
         except Exception as ex:
             self.notify(f'Could not update revision: {ex}', severity='error')
@@ -3185,9 +3579,13 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         self._invalidate_caches(change_id)
         self._load_series()
 
-    def _switch_revision_by_number(self, change_id: str, current_rev: int,
-                                   chosen: int,
-                                   revisions: List[Dict[str, Any]]) -> None:
+    def _switch_revision_by_number(
+        self,
+        change_id: str,
+        current_rev: int,
+        chosen: int,
+        revisions: List[Dict[str, Any]],
+    ) -> None:
         """Callback wrapper: find the revision dict and call _do_switch_revision."""
         for rev in revisions:
             if rev['revision'] == chosen:
@@ -3195,8 +3593,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                 return
         self.notify(f'Revision v{chosen} not found', severity='error')
 
-    def _do_update_revision(self, change_id: str, current_rev: int,
-                            target_rev: int) -> None:
+    def _do_update_revision(
+        self, change_id: str, current_rev: int, target_rev: int
+    ) -> None:
         """Upgrade the review branch from *current_rev* to *target_rev*.
 
         Uses a three-phase workflow so the old review branch is never
@@ -3227,8 +3626,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                 break
 
         if not target_msgid:
-            self.notify(f'No message-id recorded for v{target_rev}',
-                        severity='error')
+            self.notify(f'No message-id recorded for v{target_rev}', severity='error')
             return
 
         # Phase 1: fetch series and compute base in a worker thread
@@ -3237,13 +3635,20 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                 target_series = dict(self._selected_series or {})
                 target_series['message_id'] = target_msgid
                 target_series['revision'] = target_rev
-                msgs = b4.review.retrieve_series_messages(target_series, self._identifier)
+                msgs = b4.review.retrieve_series_messages(
+                    target_series, self._identifier
+                )
                 lser = b4.review._get_lore_series(msgs)
 
                 am_msgs = lser.get_am_ready(
-                    noaddtrailers=True, addmysob=False, addlink=False,
-                    cherrypick=None, copyccs=False, allowbadchars=False,
-                    showchecks=False)
+                    noaddtrailers=True,
+                    addmysob=False,
+                    addlink=False,
+                    cherrypick=None,
+                    copyccs=False,
+                    allowbadchars=False,
+                    showchecks=False,
+                )
                 if not am_msgs:
                     raise LookupError('No patches ready for applying')
 
@@ -3267,23 +3672,25 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                     try:
                         guessed, nblobs, mismatches = lser.find_base(
                             topdir,
-                            branches=['--exclude=refs/heads/b4/review/*',
-                                      '--all'],
-                            maxdays=30)
+                            branches=['--exclude=refs/heads/b4/review/*', '--all'],
+                            maxdays=30,
+                        )
                         if guessed:
                             ecode, sha_out = b4.git_run_command(
-                                topdir, ['rev-parse', '--verify', guessed])
+                                topdir, ['rev-parse', '--verify', guessed]
+                            )
                             sha = sha_out.strip() if ecode == 0 else ''
                             short_sha = sha[:12] if sha else guessed
                             if mismatches == 0:
                                 initial_base = short_sha
-                                base_hint = (f'Guessed base: {guessed}'
-                                             f' (exact match)')
+                                base_hint = f'Guessed base: {guessed} (exact match)'
                             elif nblobs != mismatches:
                                 matched = nblobs - mismatches
                                 initial_base = short_sha
-                                base_hint = (f'Guessed base: {guessed}'
-                                             f' ({matched}/{nblobs} blobs)')
+                                base_hint = (
+                                    f'Guessed base: {guessed}'
+                                    f' ({matched}/{nblobs} blobs)'
+                                )
                             else:
                                 base_hint = 'Could not find a matching base'
                     except (IndexError, Exception):
@@ -3294,15 +3701,26 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         self.push_screen(
             WorkerScreen('Fetching new revision\u2026', _fetch_update),
             callback=lambda result: self._on_update_prepared(
-                result, change_id, current_rev, target_rev,
-                target_msgid, target_subject, review_branch),
+                result,
+                change_id,
+                current_rev,
+                target_rev,
+                target_msgid,
+                target_subject,
+                review_branch,
+            ),
         )
 
-    def _on_update_prepared(self, result: Any,
-                            change_id: str, current_rev: int,
-                            target_rev: int, target_msgid: str,
-                            target_subject: str,
-                            review_branch: str) -> None:
+    def _on_update_prepared(
+        self,
+        result: Any,
+        change_id: str,
+        current_rev: int,
+        target_rev: int,
+        target_msgid: str,
+        target_subject: str,
+        review_branch: str,
+    ) -> None:
         """Phase 2: show BaseSelectionScreen after fetching the new series."""
         if result is None:
             return
@@ -3325,22 +3743,41 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                             base_suggestions.append(rb)
 
         self.push_screen(
-            BaseSelectionScreen(initial_base, lser, ambytes,
-                                base_suggestions=base_suggestions,
-                                base_hint=base_hint,
-                                subject=target_subject),
+            BaseSelectionScreen(
+                initial_base,
+                lser,
+                ambytes,
+                base_suggestions=base_suggestions,
+                base_hint=base_hint,
+                subject=target_subject,
+            ),
             callback=lambda base_sha: self._on_update_base_selected(
-                base_sha, lser, ambytes, num_am, change_id, current_rev,
-                target_rev, target_msgid, target_subject,
-                review_branch),
+                base_sha,
+                lser,
+                ambytes,
+                num_am,
+                change_id,
+                current_rev,
+                target_rev,
+                target_msgid,
+                target_subject,
+                review_branch,
+            ),
         )
 
     def _on_update_base_selected(
-            self, base_sha: Optional[str],
-            lser: b4.LoreSeries, ambytes: bytes, num_am: int,
-            change_id: str, current_rev: int, target_rev: int,
-            target_msgid: str, target_subject: str,
-            review_branch: str) -> None:
+        self,
+        base_sha: Optional[str],
+        lser: b4.LoreSeries,
+        ambytes: bytes,
+        num_am: int,
+        change_id: str,
+        current_rev: int,
+        target_rev: int,
+        target_msgid: str,
+        target_subject: str,
+        review_branch: str,
+    ) -> None:
         """Phase 3: apply new revision to upgrade branch, then swap.
 
         The old review branch is never touched until the apply succeeds.
@@ -3385,7 +3822,8 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             usercfg = b4.get_user_config()
             maintainer_email = str(usercfg.get('email', ''))
             prior_context = b4.review.tracking.render_prior_review_context(
-                maintainer_email, current_rev, old_series, patches)
+                maintainer_email, current_rev, old_series, patches
+            )
             prior_thread_blob = old_series.get('thread-context-blob', '')
             prior_msgid = old_series.get('header-info', {}).get('msgid', '')
 
@@ -3398,11 +3836,13 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                     try:
                         _conn = b4.review.tracking.get_db(self._identifier)
                         b4.review.tracking.set_revision_thread_blob(
-                            _conn, change_id, current_rev, cur_mbox_blob)
+                            _conn, change_id, current_rev, cur_mbox_blob
+                        )
                         _conn.close()
                     except Exception as _ex:
-                        logger.debug('Could not record thread blob for v%d: %s',
-                                     current_rev, _ex)
+                        logger.debug(
+                            'Could not record thread blob for v%d: %s', current_rev, _ex
+                        )
 
             # --- 2. Resolve metadata for git-am ---
             top_msgid = None
@@ -3432,53 +3872,73 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                 if mismatches:
                     rstart, rend = lser.make_fake_am_range(gitdir=topdir)
                     if rstart and rend:
-                        logger.info('Prepared fake commit range for 3-way merge (%.12s..%.12s)', rstart, rend)
+                        logger.info(
+                            'Prepared fake commit range for 3-way merge (%.12s..%.12s)',
+                            rstart,
+                            rend,
+                        )
 
             # --- 3. Apply to temporary upgrade branch ---
             try:
                 logger.info('Base: %s', base_sha)
-                b4.git_fetch_am_into_repo(topdir, ambytes=ambytes,
-                                          at_base=base_sha, origin=linkurl,
-                                          am_flags=['-3'])
+                b4.git_fetch_am_into_repo(
+                    topdir,
+                    ambytes=ambytes,
+                    at_base=base_sha,
+                    origin=linkurl,
+                    am_flags=['-3'],
+                )
                 _is_rt = bool((self._selected_series or {}).get('is_rethreaded'))
-                b4.review.create_review_branch(topdir, upgrade_branch,
-                                               base_sha, lser, linkurl,
-                                               linkmask, num_prereqs=0,
-                                               identifier=self._identifier,
-                                               status='reviewing',
-                                               is_rethreaded=_is_rt)
+                b4.review.create_review_branch(
+                    topdir,
+                    upgrade_branch,
+                    base_sha,
+                    lser,
+                    linkurl,
+                    linkmask,
+                    num_prereqs=0,
+                    identifier=self._identifier,
+                    status='reviewing',
+                    is_rethreaded=_is_rt,
+                )
                 logger.info('Upgrade branch created: %s', upgrade_branch)
             except b4.AmConflictError as cex:
                 if not _resolve_worktree_am_conflict(topdir, cex):
                     # User aborted — clean up upgrade branch if it was
                     # partially created before the conflict
                     if b4.git_branch_exists(topdir, upgrade_branch):
-                        b4.git_run_command(
-                            topdir, ['branch', '-D', upgrade_branch])
+                        b4.git_run_command(topdir, ['branch', '-D', upgrade_branch])
                     _wait_for_enter()
                     return
                 b4._rewrite_fetch_head_origin(topdir, cex.worktree_path, linkurl)
-                b4.review.create_review_branch(topdir, upgrade_branch,
-                                               base_sha, lser, linkurl,
-                                               linkmask, num_prereqs=0,
-                                               identifier=self._identifier,
-                                               status='reviewing',
-                                               is_rethreaded=_is_rt)
+                b4.review.create_review_branch(
+                    topdir,
+                    upgrade_branch,
+                    base_sha,
+                    lser,
+                    linkurl,
+                    linkmask,
+                    num_prereqs=0,
+                    identifier=self._identifier,
+                    status='reviewing',
+                    is_rethreaded=_is_rt,
+                )
                 logger.info('Upgrade branch created: %s', upgrade_branch)
             except Exception as ex:
                 logger.critical('Error creating review branch: %s', ex)
                 if b4.git_branch_exists(topdir, upgrade_branch):
-                    b4.git_run_command(
-                        topdir, ['branch', '-D', upgrade_branch])
+                    b4.git_run_command(topdir, ['branch', '-D', upgrade_branch])
                 _wait_for_enter()
                 return
 
             # --- 4. Apply succeeded — restore reviews onto upgrade branch ---
             logger.info('Restoring reviews...')
             new_patch_ids = b4.review.get_review_branch_patch_ids(
-                topdir, upgrade_branch)
+                topdir, upgrade_branch
+            )
             new_cover_text, new_tracking = b4.review.load_tracking(
-                topdir, upgrade_branch)
+                topdir, upgrade_branch
+            )
             new_patches = new_tracking.get('patches', [])
 
             usercfg = b4.get_user_config()
@@ -3491,7 +3951,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             # sent anything yet (e.g. the submitter sent a new version while
             # the maintainer was still drafting their review).
             old_status = old_series.get('status', '')
-            carry_over_reviews = (old_status != 'replied')
+            carry_over_reviews = old_status != 'replied'
             restored = 0
             for idx, _sha, patch_id in new_patch_ids:
                 if patch_id is None or idx >= len(new_patches):
@@ -3508,7 +3968,8 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                 # _get_patch_state() will override this with any derived state
                 # (draft, done, …) if the maintainer already has review content.
                 my_review = reviews.setdefault(
-                    my_email, {'name': str(usercfg.get('name', ''))})
+                    my_email, {'name': str(usercfg.get('name', ''))}
+                )
                 my_review['patch-state'] = 'unchanged'
                 reviews['b4-upgrade@internal'] = {
                     'name': 'B4 Upgrade',
@@ -3532,18 +3993,25 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                 new_tracking['series']['prior-thread-context-blob'] = prior_thread_blob
             if prior_msgid:
                 new_tracking['series']['prior-revision-msgid'] = prior_msgid
-            b4.review.save_tracking_ref(topdir, upgrade_branch,
-                                        new_cover_text, new_tracking)
-            logger.info('Restored reviews for %d of %d patch(es)',
-                        restored, len(new_patch_ids))
+            b4.review.save_tracking_ref(
+                topdir, upgrade_branch, new_cover_text, new_tracking
+            )
+            logger.info(
+                'Restored reviews for %d of %d patch(es)', restored, len(new_patch_ids)
+            )
 
             # --- 5. Archive old branch and rename upgrade → review ---
             logger.info('Archiving v%d...', current_rev)
             pw_series_id = None
             if self._selected_series:
                 pw_series_id = self._selected_series.get('pw_series_id')
-            if not self._archive_branch(change_id, current_rev, review_branch,
-                                        pw_series_id=pw_series_id, notify=False):
+            if not self._archive_branch(
+                change_id,
+                current_rev,
+                review_branch,
+                pw_series_id=pw_series_id,
+                notify=False,
+            ):
                 logger.critical('Failed to archive v%d', current_rev)
                 # Upgrade branch has new data but old branch could not be
                 # archived.  Leave both branches so the user can recover.
@@ -3551,10 +4019,10 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                 return
 
             ecode, out = b4.git_run_command(
-                topdir, ['branch', '-m', upgrade_branch, review_branch])
+                topdir, ['branch', '-m', upgrade_branch, review_branch]
+            )
             if ecode > 0:
-                logger.critical('Failed to rename upgrade branch: %s',
-                                out.strip())
+                logger.critical('Failed to rename upgrade branch: %s', out.strip())
                 _wait_for_enter()
                 return
 
@@ -3573,16 +4041,22 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                     if ref_msg and ref_msg.date:
                         sent_at = ref_msg.date.isoformat()
                     b4.review.tracking.add_series_to_db(
-                        conn, change_id, target_rev,
-                        target_subject, sender_name, sender_email,
-                        sent_at, target_msgid,
-                        lser.expected or num_am)
+                        conn,
+                        change_id,
+                        target_rev,
+                        target_subject,
+                        sender_name,
+                        sender_email,
+                        sent_at,
+                        target_msgid,
+                        lser.expected or num_am,
+                    )
                     b4.review.tracking.update_series_status(
-                        conn, change_id, 'reviewing', revision=target_rev)
+                        conn, change_id, 'reviewing', revision=target_rev
+                    )
                     conn.close()
                 except Exception as ex:
-                    logger.warning('Failed to update DB for v%d: %s',
-                                   target_rev, ex)
+                    logger.warning('Failed to update DB for v%d: %s', target_rev, ex)
 
             logger.info('Upgrade to v%d complete', target_rev)
             _wait_for_enter()
@@ -3604,7 +4078,8 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         try:
             conn = b4.review.tracking.get_db(self._identifier)
             b4.review.tracking.update_series_status(
-                conn, change_id, 'waiting', revision=revision)
+                conn, change_id, 'waiting', revision=revision
+            )
             conn.close()
         except Exception as ex:
             self.notify(f'Error: {ex}', severity='error')
@@ -3627,9 +4102,11 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             self.notify('Cannot snooze a series in this state', severity='warning')
             return
         self.push_screen(
-            SnoozeScreen(last_source=self._last_snooze_source,
-                         last_input=self._last_snooze_input,
-                         subject=self._selected_series.get('subject', '')),
+            SnoozeScreen(
+                last_source=self._last_snooze_source,
+                last_input=self._last_snooze_input,
+                subject=self._selected_series.get('subject', ''),
+            ),
             callback=self._on_snooze_confirmed,
         )
 
@@ -3665,8 +4142,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         # Update database
         try:
             conn = b4.review.tracking.get_db(self._identifier)
-            b4.review.tracking.snooze_series(conn, change_id, until_value,
-                                             revision=revision)
+            b4.review.tracking.snooze_series(
+                conn, change_id, until_value, revision=revision
+            )
             conn.close()
         except Exception as ex:
             self.notify(f'Error: {ex}', severity='error')
@@ -3703,8 +4181,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         # Update database
         try:
             conn = b4.review.tracking.get_db(self._identifier)
-            b4.review.tracking.unsnooze_series(conn, change_id, previous_status,
-                                               revision=revision)
+            b4.review.tracking.unsnooze_series(
+                conn, change_id, previous_status, revision=revision
+            )
             conn.close()
         except Exception as ex:
             self.notify(f'Error: {ex}', severity='error')
@@ -3725,16 +4204,30 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         review_branch = f'b4/review/{change_id}'
         has_branch = b4.git_branch_exists(None, review_branch)
         self.push_screen(
-            ArchiveConfirmScreen(change_id, review_branch, has_branch,
-                                 subject=self._selected_series.get('subject', '')),
+            ArchiveConfirmScreen(
+                change_id,
+                review_branch,
+                has_branch,
+                subject=self._selected_series.get('subject', ''),
+            ),
             callback=lambda confirmed: self._on_archive_confirmed(
-                confirmed, change_id, review_branch, has_branch, pw_series_id,
-                revision=revision),
+                confirmed,
+                change_id,
+                review_branch,
+                has_branch,
+                pw_series_id,
+                revision=revision,
+            ),
         )
 
-    def _archive_branch(self, change_id: str, revision: Optional[int],
-                        review_branch: str, pw_series_id: Optional[int] = None,
-                        notify: bool = True) -> bool:
+    def _archive_branch(
+        self,
+        change_id: str,
+        revision: Optional[int],
+        review_branch: str,
+        pw_series_id: Optional[int] = None,
+        notify: bool = True,
+    ) -> bool:
         """Archive a review branch and update the tracking database.
 
         Creates a tar.gz archive of the cover letter, tracking metadata,
@@ -3769,8 +4262,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             first_patch = series_info.get('first-patch-commit', '')
             if not first_patch:
                 if notify:
-                    self.notify('No patch commits found in tracking data',
-                                severity='error')
+                    self.notify(
+                        'No patch commits found in tracking data', severity='error'
+                    )
                 return False
 
             with tarfile.open(fileobj=tio, mode='w:gz') as tfh:
@@ -3786,12 +4280,12 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                 ifh.close()
                 # Add patches as mbox
                 patches = b4.git_range_to_patches(
-                    None, f'{first_patch}~1', f'{review_branch}~1')
+                    None, f'{first_patch}~1', f'{review_branch}~1'
+                )
                 if patches:
                     ifh = io.BytesIO()
                     b4.save_git_am_mbox([patch[1] for patch in patches], ifh)
-                    b4.ez.write_to_tar(
-                        tfh, f'{change_id}/patches.mbx', mnow, ifh)
+                    b4.ez.write_to_tar(tfh, f'{change_id}/patches.mbx', mnow, ifh)
                     ifh.close()
 
             # Write archive to data directory
@@ -3809,8 +4303,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         # Update tracking database
         try:
             conn = b4.review.tracking.get_db(self._identifier)
-            b4.review.tracking.update_series_status(conn, change_id, 'archived',
-                                                    revision=revision)
+            b4.review.tracking.update_series_status(
+                conn, change_id, 'archived', revision=revision
+            )
             conn.close()
         except Exception as ex:
             if notify:
@@ -3828,14 +4323,20 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                 self.notify(f'Archived {change_id}')
         return True
 
-    def _on_archive_confirmed(self, confirmed: bool, change_id: str,
-                               review_branch: str, has_branch: bool,
-                               pw_series_id: Optional[int] = None,
-                               revision: Optional[int] = None) -> None:
+    def _on_archive_confirmed(
+        self,
+        confirmed: bool,
+        change_id: str,
+        review_branch: str,
+        has_branch: bool,
+        pw_series_id: Optional[int] = None,
+        revision: Optional[int] = None,
+    ) -> None:
         if not confirmed:
             return
-        if self._archive_branch(change_id, revision, review_branch,
-                                pw_series_id=pw_series_id):
+        if self._archive_branch(
+            change_id, revision, review_branch, pw_series_id=pw_series_id
+        ):
             self._selected_series = None
             panel = self.query_one('#details-panel', Vertical)
             panel.styles.height = 0
@@ -3853,7 +4354,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         if not series:
             return
         if series.get('status', 'new') != 'accepted':
-            self.notify('Series must be accepted before sending thanks', severity='warning')
+            self.notify(
+                'Series must be accepted before sending thanks', severity='warning'
+            )
             return
 
         change_id = series.get('change_id', '')
@@ -3941,8 +4444,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
         self._show_thank_preview(msg, checkurl=checkurl)
 
-    def _show_thank_preview(self, msg: email.message.EmailMessage,
-                            checkurl: Optional[str] = None) -> None:
+    def _show_thank_preview(
+        self, msg: email.message.EmailMessage, checkurl: Optional[str] = None
+    ) -> None:
         """Push the ThankScreen modal and handle edit/send/queue/cancel."""
 
         def _on_thank_result(result: Optional[str]) -> None:
@@ -3957,8 +4461,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
         self.push_screen(ThankScreen(msg, checkurl=checkurl), _on_thank_result)
 
-    def _edit_thank_message(self, msg: email.message.EmailMessage,
-                            checkurl: Optional[str] = None) -> None:
+    def _edit_thank_message(
+        self, msg: email.message.EmailMessage, checkurl: Optional[str] = None
+    ) -> None:
         """Open the thank-you message in $EDITOR and re-show preview."""
         msg_bytes = msg.as_bytes(policy=b4.emlpolicy)
         try:
@@ -3970,7 +4475,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         new_msg = email.parser.BytesParser(policy=b4.emlpolicy).parsebytes(edited)
         self._show_thank_preview(new_msg, checkurl=checkurl)
 
-    def _queue_thank_message(self, msg: email.message.EmailMessage, checkurl: str) -> None:
+    def _queue_thank_message(
+        self, msg: email.message.EmailMessage, checkurl: str
+    ) -> None:
         """Queue the thanks message for delivery once commits are public."""
         import b4.ty
 
@@ -3980,9 +4487,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         change_id = series.get('change_id', '')
         revision = series.get('revision', 1)
         try:
-            b4.ty.queue_message(msg, checkurl,
-                                change_id, revision,
-                                dryrun=self._email_dryrun)
+            b4.ty.queue_message(
+                msg, checkurl, change_id, revision, dryrun=self._email_dryrun
+            )
         except Exception as ex:
             self.notify(f'Failed to queue message: {ex}', severity='error')
             return
@@ -3998,9 +4505,15 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         try:
             with self.suspend():
                 smtp, fromaddr = b4.get_smtp(dryrun=self._email_dryrun)
-                sent = b4.send_mail(smtp, [msg], fromaddr=fromaddr,
-                                    patatt_sign=self._patatt_sign, dryrun=self._email_dryrun,
-                                    output_dir=None, reflect=False)
+                sent = b4.send_mail(
+                    smtp,
+                    [msg],
+                    fromaddr=fromaddr,
+                    patatt_sign=self._patatt_sign,
+                    dryrun=self._email_dryrun,
+                    output_dir=None,
+                    reflect=False,
+                )
             if sent is None:
                 self.notify('Failed to send thank-you message', severity='error')
                 return
@@ -4010,8 +4523,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
             if self._identifier and change_id:
                 try:
                     conn = b4.review.tracking.get_db(self._identifier)
-                    b4.review.tracking.update_series_status(conn, change_id, 'thanked',
-                                                            revision=revision)
+                    b4.review.tracking.update_series_status(
+                        conn, change_id, 'thanked', revision=revision
+                    )
                     conn.close()
                 except Exception as ex:
                     logger.warning('Could not update series status: %s', ex)
@@ -4029,6 +4543,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
     def _refresh_queue_indicator(self) -> None:
         """Update the title-bar queue count and Q binding visibility."""
         import b4.ty
+
         self._queue_count = b4.ty.get_queued_count(dryrun=self._email_dryrun)
         try:
             right = self.query_one('#title-right', Static)
@@ -4043,6 +4558,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
     def action_process_queue(self) -> None:
         """Show the queue modal and optionally deliver."""
         import b4.ty
+
         entries = b4.ty.get_queued_messages(dryrun=self._email_dryrun)
         if not entries:
             self.notify('No queued thanks messages')
@@ -4057,7 +4573,9 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
     def _deliver_queue(self) -> None:
         """Push a delivery modal with progress bar."""
 
-        def _on_delivery_result(result: Optional[Tuple[int, int, List[Tuple[str, int]]]]) -> None:
+        def _on_delivery_result(
+            result: Optional[Tuple[int, int, List[Tuple[str, int]]]],
+        ) -> None:
             if result is None:
                 self.notify('Queue delivery cancelled or failed', severity='warning')
                 self._refresh_queue_indicator()
@@ -4069,14 +4587,17 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                     try:
                         conn = b4.review.tracking.get_db(self._identifier)
                         b4.review.tracking.update_series_status(
-                            conn, change_id, 'thanked', revision=revision)
+                            conn, change_id, 'thanked', revision=revision
+                        )
                         conn.close()
                     except Exception as ex:
                         logger.warning('Could not update series status: %s', ex)
                     topdir = b4.git_get_toplevel()
                     if topdir:
                         review_branch = f'b4/review/{change_id}'
-                        b4.review.update_tracking_status(topdir, review_branch, 'thanked')
+                        b4.review.update_tracking_status(
+                            topdir, review_branch, 'thanked'
+                        )
             parts = []
             if delivered:
                 parts.append(f'{delivered} delivered')
@@ -4104,8 +4625,10 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
     def action_patchwork(self) -> None:
         """Exit to the outer loop so it can launch the Patchwork TUI."""
         if not (self._pwkey and self._pwurl and self._pwproj):
-            self.notify('Patchwork not configured (need b4.pw-key, b4.pw-url, b4.pw-project)',
-                        severity='error')
+            self.notify(
+                'Patchwork not configured (need b4.pw-key, b4.pw-url, b4.pw-project)',
+                severity='error',
+            )
             return
         self.exit(self.PATCHWORK_SENTINEL)
 
@@ -4120,4 +4643,3 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
     async def action_quit(self) -> None:
         self.exit()
-
diff --git a/src/b4/tui/__init__.py b/src/b4/tui/__init__.py
index be4699a..a7b3598 100644
--- a/src/b4/tui/__init__.py
+++ b/src/b4/tui/__init__.py
@@ -1,4 +1,5 @@
 """Shared TUI utilities and widgets for b4 Textual apps."""
+
 __all__ = [
     'ActionItem',
     'ActionScreen',
diff --git a/src/b4/tui/_common.py b/src/b4/tui/_common.py
index 8eb6c45..b788f35 100644
--- a/src/b4/tui/_common.py
+++ b/src/b4/tui/_common.py
@@ -4,6 +4,7 @@
 # Copyright (C) 2024 by the Linux Foundation
 #
 """Shared TUI utilities for b4 Textual apps."""
+
 __author__ = 'Konstantin Ryabitsev <konstantin@linuxfoundation.org>'
 
 import email.utils
@@ -108,15 +109,14 @@ def ci_styles(ts: Dict[str, str]) -> Dict[str, str]:
         'pending': 'dim',
         'success': ts['success'],
         'warning': ts['warning'],
-        'fail': f"bold {ts['error']}",
+        'fail': f'bold {ts["error"]}',
     }
 
 
 def ci_markup(ts: Dict[str, str]) -> Dict[str, str]:
     """Return CI dot markup strings from a resolved theme dict."""
     return {
-        state: f'[{style}]\u25cf[/{style}]'
-        for state, style in ci_styles(ts).items()
+        state: f'[{style}]\u25cf[/{style}]' for state, style in ci_styles(ts).items()
     }
 
 
@@ -126,7 +126,7 @@ def ci_check_styles(ts: Dict[str, str]) -> Dict[str, str]:
         'pending': 'dim',
         'success': ts['success'],
         'warning': ts['warning'],
-        'fail': f"bold {ts['error']}",
+        'fail': f'bold {ts["error"]}',
     }
 
 
@@ -136,7 +136,7 @@ def reviewer_colours(ts: Dict[str, str]) -> List[str]:
     Index 0 is always the current user; the rest cycle for others.
     """
     return [
-        ts['warning'],      # index 0: current user (warm/distinct)
+        ts['warning'],  # index 0: current user (warm/distinct)
         ts['accent'],
         ts['secondary'],
         ts['error'],
@@ -189,7 +189,9 @@ def _suspend_to_shell(hint: str = 'b4', cwd: Optional[str] = None) -> None:
     is set so the user can incorporate it into their own prompt.
     """
     logger.info('---')
-    logger.info('You are now in shell mode. You can execute git commands or run checks.')
+    logger.info(
+        'You are now in shell mode. You can execute git commands or run checks.'
+    )
     logger.info('Cosmetic commit edits (reword subjects, fix trailers) are fine;')
     logger.info('b4 will reconcile tracking data when you return.')
     logger.info('Do NOT add, remove, squash, or reorder commits.')
@@ -205,8 +207,9 @@ def _suspend_to_shell(hint: str = 'b4', cwd: Optional[str] = None) -> None:
         bashrc = os.path.expanduser('~/.bashrc')
         source = f'[ -f {bashrc} ] && . {bashrc}\n'
         source += f'PS1="({hint}) $PS1"\n'
-        with tempfile.NamedTemporaryFile(mode='w', prefix='b4-shell-',
-                                         suffix='.sh', delete=False) as rcf:
+        with tempfile.NamedTemporaryFile(
+            mode='w', prefix='b4-shell-', suffix='.sh', delete=False
+        ) as rcf:
             rcf.write(source)
             rcfile = rcf.name
         try:
diff --git a/src/b4/tui/_modals.py b/src/b4/tui/_modals.py
index 15f2e3b..da3ca13 100644
--- a/src/b4/tui/_modals.py
+++ b/src/b4/tui/_modals.py
@@ -4,6 +4,7 @@
 # Copyright (C) 2024 by the Linux Foundation
 #
 """Shared modal screens for b4 Textual apps."""
+
 __author__ = 'Konstantin Ryabitsev <konstantin@linuxfoundation.org>'
 
 from typing import TYPE_CHECKING, Dict, List, Optional, Tuple
@@ -87,7 +88,9 @@ class ToCcScreen(ModalScreen[bool]):
             yield TextArea(self._bcc_text, id='bcc-area', classes='tocc-area')
             if self._show_apply_all:
                 yield Checkbox('Apply to all patches', id='apply-all')
-            yield Static('Ctrl+S save  |  Escape cancel  |  Tab next field', id='tocc-hint')
+            yield Static(
+                'Ctrl+S save  |  Escape cancel  |  Tab next field', id='tocc-hint'
+            )
 
     def on_mount(self) -> None:
         self.query_one('#to-area', TextArea).focus()
@@ -165,10 +168,14 @@ class ConfirmScreen(ModalScreen[bool]):
     # Map CSS variable names to CSS class suffixes for border/title colours.
     _COLOUR_CLASSES = {'$warning': 'warning', '$error': 'error'}
 
-    def __init__(self, title: str, body: List[str],
-                 border: str = '$accent',
-                 title_colour: Optional[str] = None,
-                 subject: str = '') -> None:
+    def __init__(
+        self,
+        title: str,
+        body: List[str],
+        border: str = '$accent',
+        title_colour: Optional[str] = None,
+        subject: str = '',
+    ) -> None:
         super().__init__()
         self._title = title
         self._body = body
@@ -226,9 +233,12 @@ class LimitScreen(ModalScreen[Optional[str]]):
     }
     """
 
-    def __init__(self, current_pattern: str = '',
-                 hint: Optional[str] = None,
-                 title: str = 'Limit') -> None:
+    def __init__(
+        self,
+        current_pattern: str = '',
+        hint: Optional[str] = None,
+        title: str = 'Limit',
+    ) -> None:
         super().__init__()
         self._current_pattern = current_pattern
         self._hint = hint
@@ -237,8 +247,11 @@ class LimitScreen(ModalScreen[Optional[str]]):
     def compose(self) -> ComposeResult:
         with Vertical(id='limit-dialog') as dialog:
             dialog.border_title = self._title
-            yield Input(value=self._current_pattern, id='limit-input',
-                        placeholder='substring to match (empty to clear)')
+            yield Input(
+                value=self._current_pattern,
+                id='limit-input',
+                placeholder='substring to match (empty to clear)',
+            )
             hint_lines = ''
             if self._hint:
                 hint_lines = self._hint + '\n'
@@ -303,8 +316,9 @@ class ActionScreen(JKListNavMixin, ModalScreen[Optional[str]]):
     }
     """
 
-    def __init__(self, actions: List[Tuple[str, str]],
-                 shortcuts: Optional[Dict[str, str]] = None) -> None:
+    def __init__(
+        self, actions: List[Tuple[str, str]], shortcuts: Optional[Dict[str, str]] = None
+    ) -> None:
         super().__init__()
         self._actions = actions
         self._shortcuts = shortcuts or {}
diff --git a/src/b4/ty.py b/src/b4/ty.py
index 5786222..45c7773 100644
--- a/src/b4/ty.py
+++ b/src/b4/ty.py
@@ -25,7 +25,8 @@ JsonDictT = Dict[str, Union[str, int, List[Any], Dict[str, Any]]]
 
 logger = b4.logger
 
-DEFAULT_PR_TEMPLATE = """
+DEFAULT_PR_TEMPLATE = (
+    """
 On ${sentdate}, ${fromname} wrote:
 ${quote}
 
@@ -34,11 +35,15 @@ Merged, thanks!
 ${summary}
 
 Best regards,
---""" + ' ' + """
+--"""
+    + ' '
+    + """
 ${signature}
 """
+)
 
-DEFAULT_AM_TEMPLATE = """
+DEFAULT_AM_TEMPLATE = (
+    """
 On ${sentdate}, ${fromname} wrote:
 ${quote}
 
@@ -47,9 +52,12 @@ Applied, thanks!
 ${summary}
 
 Best regards,
---""" + ' ' + """
+--"""
+    + ' '
+    + """
 ${signature}
 """
+)
 
 # Used to track commits created by current user
 MY_COMMITS: Optional[Dict[str, Tuple[str, str, List[str]]]] = None
@@ -57,7 +65,9 @@ MY_COMMITS: Optional[Dict[str, Tuple[str, str, List[str]]]] = None
 BRANCH_INFO: Optional[Dict[str, str]] = None
 
 
-def git_get_merge_id(gitdir: Optional[str], commit_id: str, branch: Optional[str] = None) -> Optional[str]:
+def git_get_merge_id(
+    gitdir: Optional[str], commit_id: str, branch: Optional[str] = None
+) -> Optional[str]:
     # get merge commit id
     args = ['rev-list', '%s..' % commit_id, '--ancestry-path']
     if branch is not None:
@@ -78,7 +88,12 @@ def git_get_commit_message(gitdir: Optional[str], rev: str) -> Tuple[int, str]:
     return b4.git_run_command(gitdir, args)
 
 
-def make_reply(reply_template: str, jsondata: JsonDictT, gitdir: Optional[str], cmdargs: argparse.Namespace) -> EmailMessage:
+def make_reply(
+    reply_template: str,
+    jsondata: JsonDictT,
+    gitdir: Optional[str],
+    cmdargs: argparse.Namespace,
+) -> EmailMessage:
     msg = EmailMessage()
     msg['From'] = '%s <%s>' % (jsondata['myname'], jsondata['myemail'])
     excludes = b4.get_excluded_addrs()
@@ -87,14 +102,20 @@ def make_reply(reply_template: str, jsondata: JsonDictT, gitdir: Optional[str],
     assert isinstance(jsondata['to'], str), 'to must be a string'
     assert isinstance(jsondata['cc'], str), 'cc must be a string'
     assert isinstance(jsondata['myemail'], str), 'msgid must be a string'
-    newto = b4.cleanup_email_addrs([(jsondata['fromname'], jsondata['fromemail'])], excludes, gitdir)
+    newto = b4.cleanup_email_addrs(
+        [(jsondata['fromname'], jsondata['fromemail'])], excludes, gitdir
+    )
 
     # Exclude ourselves and original sender from allto or allcc
     if not cmdargs.metoo:
         excludes.add(jsondata['myemail'])
     excludes.add(jsondata['fromemail'])
-    allto = b4.cleanup_email_addrs(email.utils.getaddresses([jsondata['to']]), excludes, gitdir)
-    allcc = b4.cleanup_email_addrs(email.utils.getaddresses([jsondata['cc']]), excludes, gitdir)
+    allto = b4.cleanup_email_addrs(
+        email.utils.getaddresses([jsondata['to']]), excludes, gitdir
+    )
+    allcc = b4.cleanup_email_addrs(
+        email.utils.getaddresses([jsondata['cc']]), excludes, gitdir
+    )
 
     if newto:
         allto += newto
@@ -124,7 +145,9 @@ def make_reply(reply_template: str, jsondata: JsonDictT, gitdir: Optional[str],
     return msg
 
 
-def auto_locate_pr(gitdir: Optional[str], jsondata: JsonDictT, branch: str) -> Optional[str]:
+def auto_locate_pr(
+    gitdir: Optional[str], jsondata: JsonDictT, branch: str
+) -> Optional[str]:
     pr_commit_id = jsondata['pr_commit_id']
     assert isinstance(pr_commit_id, str), 'pr_commit_id must be a string'
     logger.debug('Checking %s', jsondata['pr_commit_id'])
@@ -161,8 +184,12 @@ def auto_locate_pr(gitdir: Optional[str], jsondata: JsonDictT, branch: str) -> O
     return merge_commit_id
 
 
-def get_all_commits(gitdir: Optional[str], branch: str, since: str = '1.week',
-                    committer: Optional[str] = None) -> Dict[str, Tuple[str, str, List[str]]]:
+def get_all_commits(
+    gitdir: Optional[str],
+    branch: str,
+    since: str = '1.week',
+    committer: Optional[str] = None,
+) -> Dict[str, Tuple[str, str, List[str]]]:
     global MY_COMMITS
     if MY_COMMITS is not None:
         return MY_COMMITS
@@ -174,11 +201,23 @@ def get_all_commits(gitdir: Optional[str], branch: str, since: str = '1.week',
         if isinstance(_ce, str):
             committer = _ce
         else:
-            logger.critical('No committer email found in user config, please set user.email')
+            logger.critical(
+                'No committer email found in user config, please set user.email'
+            )
             sys.exit(1)
 
-    gitargs = ['log', '--committer', committer, '--no-mailmap', '--no-abbrev', '--no-decorate',
-               '--oneline', '--since', since, branch]
+    gitargs = [
+        'log',
+        '--committer',
+        committer,
+        '--no-mailmap',
+        '--no-abbrev',
+        '--no-decorate',
+        '--oneline',
+        '--since',
+        since,
+        branch,
+    ]
     lines = b4.git_get_command_lines(gitdir, gitargs)
     if not len(lines):
         logger.debug('No new commits from the current user --since=%s', since)
@@ -194,7 +233,9 @@ def get_all_commits(gitdir: Optional[str], branch: str, since: str = '1.week',
         logger.debug('phash=%s', pwhash)
         # get all message-id or link trailers
         _ecode, out = git_get_commit_message(gitdir, commit_id)
-        matches = re.findall(r'^\s*(?:message-id|link):[ \t]+(\S+)\s*$', out, flags=re.I | re.M)
+        matches = re.findall(
+            r'^\s*(?:message-id|link):[ \t]+(\S+)\s*$', out, flags=re.I | re.M
+        )
         trackers: List[str] = list()
         if matches:
             for tvalue in matches:
@@ -205,8 +246,9 @@ def get_all_commits(gitdir: Optional[str], branch: str, since: str = '1.week',
     return MY_COMMITS
 
 
-def auto_locate_series(gitdir: Optional[str], jsondata: JsonDictT, branch: str,
-                       since: str = '1.week') -> List[Tuple[int, Optional[str]]]:
+def auto_locate_series(
+    gitdir: Optional[str], jsondata: JsonDictT, branch: str, since: str = '1.week'
+) -> List[Tuple[int, Optional[str]]]:
     commits = get_all_commits(gitdir, branch, since)
 
     patchids = set(commits.keys())
@@ -252,10 +294,9 @@ def auto_locate_series(gitdir: Optional[str], jsondata: JsonDictT, branch: str,
     return found
 
 
-def set_branch_details(gitdir: Optional[str],
-                       branch: str,
-                       jsondata: JsonDictT,
-                       config: ConfigDictT) -> Tuple[JsonDictT, ConfigDictT]:
+def set_branch_details(
+    gitdir: Optional[str], branch: str, jsondata: JsonDictT, config: ConfigDictT
+) -> Tuple[JsonDictT, ConfigDictT]:
     binfo = get_branch_info(gitdir, branch)
     jsondata['branch'] = branch
     for key, val in binfo.items():
@@ -286,7 +327,9 @@ def set_branch_details(gitdir: Optional[str],
     return jsondata, config
 
 
-def generate_pr_thanks(gitdir: Optional[str], jsondata: JsonDictT, branch: str, cmdargs: argparse.Namespace) -> EmailMessage:
+def generate_pr_thanks(
+    gitdir: Optional[str], jsondata: JsonDictT, branch: str, cmdargs: argparse.Namespace
+) -> EmailMessage:
     config = b4.get_main_config()
     jsondata, config = set_branch_details(gitdir, branch, jsondata, config)
     thanks_template = DEFAULT_PR_TEMPLATE
@@ -296,13 +339,17 @@ def generate_pr_thanks(gitdir: Optional[str], jsondata: JsonDictT, branch: str,
         try:
             thanks_template = b4.read_template(_ctpr)
         except FileNotFoundError:
-            logger.critical('ERROR: thanks-pr-template says to use %s, but it does not exist',
-                            config['thanks-pr-template'])
+            logger.critical(
+                'ERROR: thanks-pr-template says to use %s, but it does not exist',
+                config['thanks-pr-template'],
+            )
             sys.exit(2)
 
     if 'merge_commit_id' not in jsondata:
         assert 'pr_commit_id' in jsondata, 'pr_commit_id must be present in jsondata'
-        assert isinstance(jsondata['pr_commit_id'], str), 'pr_commit_id must be a string'
+        assert isinstance(jsondata['pr_commit_id'], str), (
+            'pr_commit_id must be a string'
+        )
         merge_commit_id = git_get_merge_id(gitdir, jsondata['pr_commit_id'])
         if not merge_commit_id:
             logger.critical('Could not get merge commit id for %s', jsondata['subject'])
@@ -319,7 +366,9 @@ def generate_pr_thanks(gitdir: Optional[str], jsondata: JsonDictT, branch: str,
     return msg
 
 
-def generate_am_thanks(gitdir: Optional[str], jsondata: JsonDictT, branch: str, cmdargs: argparse.Namespace) -> EmailMessage:
+def generate_am_thanks(
+    gitdir: Optional[str], jsondata: JsonDictT, branch: str, cmdargs: argparse.Namespace
+) -> EmailMessage:
     global BRANCH_INFO
     BRANCH_INFO = None
     config = b4.get_main_config()
@@ -331,8 +380,10 @@ def generate_am_thanks(gitdir: Optional[str], jsondata: JsonDictT, branch: str,
         try:
             thanks_template = b4.read_template(_ctat)
         except FileNotFoundError:
-            logger.critical('ERROR: thanks-am-template says to use %s, but it does not exist',
-                            config['thanks-am-template'])
+            logger.critical(
+                'ERROR: thanks-am-template says to use %s, but it does not exist',
+                config['thanks-am-template'],
+            )
             sys.exit(2)
     if 'commits' not in jsondata:
         commits = auto_locate_series(gitdir, jsondata, branch, cmdargs.since)
@@ -361,11 +412,17 @@ def generate_am_thanks(gitdir: Optional[str], jsondata: JsonDictT, branch: str,
             slines.append('%s%s' % (' ' * len(prefix), cidmask % cid))
     jsondata['summary'] = '\n'.join(slines)
     if nomatch == len(commits):
-        logger.critical('  WARNING: None of the patches matched for: %s', jsondata['subject'])
+        logger.critical(
+            '  WARNING: None of the patches matched for: %s', jsondata['subject']
+        )
         logger.critical('           Please review the resulting message')
     elif nomatch > 0:
-        logger.critical('  WARNING: Could not match %s of %s patches in: %s',
-                        nomatch, len(commits), jsondata['subject'])
+        logger.critical(
+            '  WARNING: Could not match %s of %s patches in: %s',
+            nomatch,
+            len(commits),
+            jsondata['subject'],
+        )
         logger.critical('           Please review the resulting message')
 
     msg = make_reply(thanks_template, jsondata, gitdir, cmdargs)
@@ -391,7 +448,9 @@ def auto_thankanator(cmdargs: argparse.Namespace) -> None:
             jsondata['merge_commit_id'] = merge_commit_id
         else:
             # This is a patch series
-            commits = auto_locate_series(gitdir, jsondata, wantbranch, since=cmdargs.since)
+            commits = auto_locate_series(
+                gitdir, jsondata, wantbranch, since=cmdargs.since
+            )
             # Weed out series that have no matches at all
             found = False
             for commit in commits:
@@ -413,7 +472,9 @@ def auto_thankanator(cmdargs: argparse.Namespace) -> None:
     sys.exit(0)
 
 
-def send_messages(listing: List[JsonDictT], branch: str, cmdargs: argparse.Namespace) -> None:
+def send_messages(
+    listing: List[JsonDictT], branch: str, cmdargs: argparse.Namespace
+) -> None:
     logger.info('Generating %s thank-you letters', len(listing))
     gitdir = cmdargs.gitdir
     datadir = b4.get_data_dir()
@@ -474,7 +535,9 @@ def send_messages(listing: List[JsonDictT], branch: str, cmdargs: argparse.Names
         if send_email:
             if not fromaddr and isinstance(jsondata['myemail'], str):
                 fromaddr = jsondata['myemail']
-            logger.info('  Sending: %s', b4.LoreMessage.clean_header(msg.get('subject')))
+            logger.info(
+                '  Sending: %s', b4.LoreMessage.clean_header(msg.get('subject'))
+            )
             b4.send_mail(smtp, [msg], fromaddr, dryrun=cmdargs.dryrun)
         else:
             assert isinstance(jsondata['fromemail'], str), 'fromname must be a string'
@@ -642,7 +705,9 @@ def check_stale_thanks(outdir: str) -> None:
         for entry in Path(outdir).iterdir():
             if entry.suffix == '.thanks':
                 logger.critical('ERROR: Found existing .thanks files in: %s', outdir)
-                logger.critical('       Please send them first (or delete if already sent).')
+                logger.critical(
+                    '       Please send them first (or delete if already sent).'
+                )
                 logger.critical('       Refusing to run to avoid potential confusion.')
                 sys.exit(1)
 
@@ -664,7 +729,10 @@ def get_wanted_branch(cmdargs: argparse.Namespace) -> str:
         gitargs = ['branch', '--format=%(refname)', '--list', '--all', cmdargs.branch]
         lines = b4.git_get_command_lines(gitdir, gitargs)
         if not len(lines):
-            logger.critical('Requested branch not found in git branch --list --all %s', cmdargs.branch)
+            logger.critical(
+                'Requested branch not found in git branch --list --all %s',
+                cmdargs.branch,
+            )
             sys.exit(1)
         wantbranch = cmdargs.branch
 
@@ -704,9 +772,13 @@ def _parse_queue_file(filepath: str) -> Optional[EmailMessage]:
         return None
 
 
-def queue_message(msg: EmailMessage, checkurl: str,
-                  change_id: str, revision: int,
-                  dryrun: bool = False) -> None:
+def queue_message(
+    msg: EmailMessage,
+    checkurl: str,
+    change_id: str,
+    revision: int,
+    dryrun: bool = False,
+) -> None:
     """Write a thanks message to the file-based queue."""
     qdir = _get_queue_dir(dryrun=dryrun)
     os.makedirs(qdir, exist_ok=True)
@@ -744,11 +816,13 @@ def get_queued_messages(dryrun: bool = False) -> List[Dict[str, str]]:
             continue
         subject = str(msg.get('Subject', '(no subject)'))
         checkurl = str(msg.get('X-Check-URL', ''))
-        results.append({
-            'filename': fname,
-            'subject': subject,
-            'checkurl': checkurl,
-        })
+        results.append(
+            {
+                'filename': fname,
+                'subject': subject,
+                'checkurl': checkurl,
+            }
+        )
     return results
 
 
@@ -786,9 +860,11 @@ def _parse_change_revision(fname: str) -> Tuple[str, int]:
     return (stem, 0)
 
 
-def process_queue(dryrun: bool = False,
-                  patatt_sign: bool = True,
-                  progress_cb: ProgressCallbackT = None) -> QueueResultT:
+def process_queue(
+    dryrun: bool = False,
+    patatt_sign: bool = True,
+    progress_cb: ProgressCallbackT = None,
+) -> QueueResultT:
     """Check queued messages and deliver those whose commits are visible.
 
     *progress_cb*, when provided, is called as
@@ -826,8 +902,7 @@ def process_queue(dryrun: bool = False,
         if not dryrun and checkurl:
             try:
                 session = b4.get_requests_session()
-                resp = session.head(checkurl, timeout=15,
-                                    allow_redirects=True)
+                resp = session.head(checkurl, timeout=15, allow_redirects=True)
                 if resp.status_code >= 300:
                     still_pending += 1
                     if progress_cb:
@@ -846,9 +921,15 @@ def process_queue(dryrun: bool = False,
         # Commit is visible — deliver the message
         try:
             smtp, fromaddr = b4.get_smtp(dryrun=dryrun)
-            b4.send_mail(smtp, [msg], fromaddr=fromaddr,
-                         patatt_sign=patatt_sign, dryrun=dryrun,
-                         output_dir=None, reflect=False)
+            b4.send_mail(
+                smtp,
+                [msg],
+                fromaddr=fromaddr,
+                patatt_sign=patatt_sign,
+                dryrun=dryrun,
+                output_dir=None,
+                reflect=False,
+            )
             # Move to sent/
             os.makedirs(sentdir, exist_ok=True)
             os.rename(filepath, os.path.join(sentdir, fname))
diff --git a/src/tests/conftest.py b/src/tests/conftest.py
index 3ff3891..0825373 100644
--- a/src/tests/conftest.py
+++ b/src/tests/conftest.py
@@ -8,7 +8,7 @@ import pytest
 import b4
 
 
-@pytest.fixture(scope="function", autouse=True)
+@pytest.fixture(scope='function', autouse=True)
 def settestdefaults(tmp_path: pathlib.Path) -> None:
     topdir = b4.git_get_toplevel()
     if topdir and topdir != os.getcwd():
@@ -25,13 +25,15 @@ def settestdefaults(tmp_path: pathlib.Path) -> None:
     sys._running_in_pytest = True  # type: ignore[attr-defined]
 
 
-@pytest.fixture(scope="function")
+@pytest.fixture(scope='function')
 def sampledir(request: pytest.FixtureRequest) -> str:
     return os.path.join(request.path.parent, 'samples')
 
 
-@pytest.fixture(scope="function")
-def gitdir(request: pytest.FixtureRequest, tmp_path: pathlib.Path) -> Generator[str, None, None]:
+@pytest.fixture(scope='function')
+def gitdir(
+    request: pytest.FixtureRequest, tmp_path: pathlib.Path
+) -> Generator[str, None, None]:
     sampledir = os.path.join(request.path.parent, 'samples')
     # look for bundle file specific to the calling fspath
     bname = request.path.name[5:-3]
diff --git a/src/tests/test___init__.py b/src/tests/test___init__.py
index faf5c96..c997059 100644
--- a/src/tests/test___init__.py
+++ b/src/tests/test___init__.py
@@ -11,28 +11,46 @@ import pytest
 import b4
 
 
-@pytest.mark.parametrize('source,expected', [
-    ('good-valid-trusted', (True, True, True, 'B6C41CE35664996C', '1623274836')),
-    ('good-valid-notrust', (True, True, False, 'B6C41CE35664996C', '1623274836')),
-    ('good-invalid-notrust', (True, False, False, 'B6C41CE35664996C', None)),
-    ('badsig', (False, False, False, 'B6C41CE35664996C', None)),
-    ('no-pubkey', (False, False, False, None, None)),
-])
-def test_check_gpg_status(sampledir: str, source: str, expected: Tuple[bool, bool, bool, Optional[str], Optional[str]]) -> None:
+@pytest.mark.parametrize(
+    'source,expected',
+    [
+        ('good-valid-trusted', (True, True, True, 'B6C41CE35664996C', '1623274836')),
+        ('good-valid-notrust', (True, True, False, 'B6C41CE35664996C', '1623274836')),
+        ('good-invalid-notrust', (True, False, False, 'B6C41CE35664996C', None)),
+        ('badsig', (False, False, False, 'B6C41CE35664996C', None)),
+        ('no-pubkey', (False, False, False, None, None)),
+    ],
+)
+def test_check_gpg_status(
+    sampledir: str,
+    source: str,
+    expected: Tuple[bool, bool, bool, Optional[str], Optional[str]],
+) -> None:
     with open(f'{sampledir}/gpg-{source}.txt', 'r') as fh:
         status = fh.read()
     assert b4.check_gpg_status(status) == expected
 
 
-@pytest.mark.parametrize('source,regex,flags,ismbox', [
-    (None, r'^From git@z ', 0, False),
-    (None, r'\n\nFrom git@z ', 0, False),
-    ('save-7bit-clean', r'From: Unicôdé', 0, True),
-    # mailbox.mbox does not properly handle 8bit-clean headers
-    ('save-8bit-clean', r'From: Unicôdé', 0, False),
-])
-def test_save_git_am_mbox(sampledir: Optional[str], tmp_path: pathlib.Path, source: Optional[str], regex: str, flags: int, ismbox: bool) -> None:
+@pytest.mark.parametrize(
+    'source,regex,flags,ismbox',
+    [
+        (None, r'^From git@z ', 0, False),
+        (None, r'\n\nFrom git@z ', 0, False),
+        ('save-7bit-clean', r'From: Unicôdé', 0, True),
+        # mailbox.mbox does not properly handle 8bit-clean headers
+        ('save-8bit-clean', r'From: Unicôdé', 0, False),
+    ],
+)
+def test_save_git_am_mbox(
+    sampledir: Optional[str],
+    tmp_path: pathlib.Path,
+    source: Optional[str],
+    regex: str,
+    flags: int,
+    ismbox: bool,
+) -> None:
     import re
+
     msgs: List[email.message.EmailMessage]
     if source is not None:
         if ismbox:
@@ -40,11 +58,15 @@ def test_save_git_am_mbox(sampledir: Optional[str], tmp_path: pathlib.Path, sour
         else:
             import email
             import email.parser
+
             with open(f'{sampledir}/{source}.txt', 'rb') as fh:
-                msg = email.parser.BytesParser(policy=b4.emlpolicy, _class=email.message.EmailMessage).parse(fh)
+                msg = email.parser.BytesParser(
+                    policy=b4.emlpolicy, _class=email.message.EmailMessage
+                ).parse(fh)
             msgs = [msg]
     else:
         import email.message
+
         msgs = list()
         for x in range(0, 3):
             msg = email.message.EmailMessage()
@@ -73,23 +95,54 @@ def test_make_msgid_avoids_host_domain_by_default() -> None:
     assert _msgid_domain(b4_msgid) != socket.getfqdn()
 
 
-@pytest.mark.parametrize('source,expected', [
-    ('trailers-test-simple',
-     [('person', 'Reported-by', '"Doe, Jane" <jane@example.com>', None),
-      ('person', 'Reviewed-by', 'Bogus Bupkes <bogus@example.com>', None),
-      ('utility', 'Fixes', 'abcdef01234567890', None),
-      ('utility', 'Link', 'https://msgid.link/some@msgid.here', None),
-      ]),
-    ('trailers-test-extinfo',
-     [('person', 'Reported-by', 'Some, One <somewhere@example.com>', None),
-      ('person', 'Reviewed-by', 'Bogus Bupkes <bogus@example.com>', '[for the parts that are bogus]'),
-      ('utility', 'Fixes', 'abcdef01234567890', None),
-      ('person', 'Tested-by', 'Some Person <bogus2@example.com>', '           [this person visually indented theirs]'),
-      ('utility', 'Link', 'https://msgid.link/some@msgid.here', '  # initial submission'),
-      ('person', 'Signed-off-by', 'Wrapped Persontrailer <broken@example.com>', None),
-      ]),
-])
-def test_parse_trailers(sampledir: str, source: str, expected: List[Tuple[str, str, str, Optional[str]]]) -> None:
+@pytest.mark.parametrize(
+    'source,expected',
+    [
+        (
+            'trailers-test-simple',
+            [
+                ('person', 'Reported-by', '"Doe, Jane" <jane@example.com>', None),
+                ('person', 'Reviewed-by', 'Bogus Bupkes <bogus@example.com>', None),
+                ('utility', 'Fixes', 'abcdef01234567890', None),
+                ('utility', 'Link', 'https://msgid.link/some@msgid.here', None),
+            ],
+        ),
+        (
+            'trailers-test-extinfo',
+            [
+                ('person', 'Reported-by', 'Some, One <somewhere@example.com>', None),
+                (
+                    'person',
+                    'Reviewed-by',
+                    'Bogus Bupkes <bogus@example.com>',
+                    '[for the parts that are bogus]',
+                ),
+                ('utility', 'Fixes', 'abcdef01234567890', None),
+                (
+                    'person',
+                    'Tested-by',
+                    'Some Person <bogus2@example.com>',
+                    '           [this person visually indented theirs]',
+                ),
+                (
+                    'utility',
+                    'Link',
+                    'https://msgid.link/some@msgid.here',
+                    '  # initial submission',
+                ),
+                (
+                    'person',
+                    'Signed-off-by',
+                    'Wrapped Persontrailer <broken@example.com>',
+                    None,
+                ),
+            ],
+        ),
+    ],
+)
+def test_parse_trailers(
+    sampledir: str, source: str, expected: List[Tuple[str, str, str, Optional[str]]]
+) -> None:
     msgs = b4.get_msgs_from_mailbox_or_maildir(f'{sampledir}/{source}.txt')
     for msg in msgs:
         lmsg = b4.LoreMessage(msg)
@@ -107,76 +160,140 @@ def test_parse_trailers(sampledir: str, source: str, expected: List[Tuple[str, s
             assert tr.extinfo == mytr.extinfo
 
 
-@pytest.mark.parametrize('name,value,exp_type,exp_addr,exp_value', [
-    # Simple name
-    ('Signed-off-by', 'Simple Name <simple@example.com>',
-     'person', ('Simple Name', 'simple@example.com'),
-     'Simple Name <simple@example.com>'),
-    # Double quotes in display name must be preserved
-    ('Signed-off-by', 'Jane "JD" Doe <jd@example.com>',
-     'person', ('Jane "JD" Doe', 'jd@example.com'),
-     'Jane "JD" Doe <jd@example.com>'),
-    # Outer RFC 2822 quotes around a name with comma
-    ('Reported-by', '"Doe, Jane" <jane@example.com>',
-     'person', ('"Doe, Jane"', 'jane@example.com'),
-     '"Doe, Jane" <jane@example.com>'),
-    # Comma in name without quotes
-    ('Reported-by', 'Some, One <somewhere@example.com>',
-     'person', ('Some, One', 'somewhere@example.com'),
-     'Some, One <somewhere@example.com>'),
-    # Parentheses in display name
-    ('Tested-by', 'Developer Foo (EXAMPLECORP) <dev@example.com>',
-     'person', ('Developer Foo (EXAMPLECORP)', 'dev@example.com'),
-     'Developer Foo (EXAMPLECORP) <dev@example.com>'),
-    # Bare angle-bracket email
-    ('Cc', '<bare@example.com>',
-     'person', ('', 'bare@example.com'),
-     'bare@example.com'),
-    # Bare email without angle brackets
-    ('Cc', 'bare@example.com',
-     'person', ('', 'bare@example.com'),
-     'bare@example.com'),
-])
-def test_trailer_addr_parsing(name: str, value: str, exp_type: str,
-                              exp_addr: Tuple[str, str], exp_value: str) -> None:
+@pytest.mark.parametrize(
+    'name,value,exp_type,exp_addr,exp_value',
+    [
+        # Simple name
+        (
+            'Signed-off-by',
+            'Simple Name <simple@example.com>',
+            'person',
+            ('Simple Name', 'simple@example.com'),
+            'Simple Name <simple@example.com>',
+        ),
+        # Double quotes in display name must be preserved
+        (
+            'Signed-off-by',
+            'Jane "JD" Doe <jd@example.com>',
+            'person',
+            ('Jane "JD" Doe', 'jd@example.com'),
+            'Jane "JD" Doe <jd@example.com>',
+        ),
+        # Outer RFC 2822 quotes around a name with comma
+        (
+            'Reported-by',
+            '"Doe, Jane" <jane@example.com>',
+            'person',
+            ('"Doe, Jane"', 'jane@example.com'),
+            '"Doe, Jane" <jane@example.com>',
+        ),
+        # Comma in name without quotes
+        (
+            'Reported-by',
+            'Some, One <somewhere@example.com>',
+            'person',
+            ('Some, One', 'somewhere@example.com'),
+            'Some, One <somewhere@example.com>',
+        ),
+        # Parentheses in display name
+        (
+            'Tested-by',
+            'Developer Foo (EXAMPLECORP) <dev@example.com>',
+            'person',
+            ('Developer Foo (EXAMPLECORP)', 'dev@example.com'),
+            'Developer Foo (EXAMPLECORP) <dev@example.com>',
+        ),
+        # Bare angle-bracket email
+        (
+            'Cc',
+            '<bare@example.com>',
+            'person',
+            ('', 'bare@example.com'),
+            'bare@example.com',
+        ),
+        # Bare email without angle brackets
+        (
+            'Cc',
+            'bare@example.com',
+            'person',
+            ('', 'bare@example.com'),
+            'bare@example.com',
+        ),
+    ],
+)
+def test_trailer_addr_parsing(
+    name: str, value: str, exp_type: str, exp_addr: Tuple[str, str], exp_value: str
+) -> None:
     tr = b4.LoreTrailer(name=name, value=value)
     assert tr.type == exp_type
     assert tr.addr == exp_addr
     assert tr.value == exp_value
 
 
-@pytest.mark.parametrize('source,serargs,amargs,reference,b4cfg', [
-    ('single', {}, {}, 'defaults', {}),
-    ('single', {}, {'noaddtrailers': True}, 'noadd', {}),
-    ('single', {}, {'addmysob': True}, 'addmysob', {}),
-    ('single', {}, {'addmysob': True, 'copyccs': True}, 'copyccs', {}),
-    ('single', {}, {'addmysob': True, 'addlink': True}, 'addlink', {}),
-    ('single', {}, {'addmysob': True, 'addlink': True}, 'addmsgid', {'linktrailermask': 'Message-ID: <%s>'}),
-    ('single', {}, {'addmysob': True, 'copyccs': True}, 'ordered',
-     {'trailer-order': 'Cc,Tested*,Reviewed*,*'}),
-    ('single', {'sloppytrailers': True}, {'addmysob': True}, 'sloppy', {}),
-    ('with-cover', {}, {'addmysob': True}, 'defaults', {}),
-    ('with-cover', {}, {'addmysob': True, 'addlink': True}, 'addlink', {}),
-    ('custody', {}, {'addmysob': True, 'copyccs': True}, 'unordered', {}),
-    ('custody', {}, {'addmysob': True, 'copyccs': True}, 'ordered',
-     {'trailer-order': 'Cc,Fixes*,Link*,Suggested*,Reviewed*,Tested*,*'}),
-    ('custody', {}, {'addmysob': True, 'copyccs': True}, 'with-ignored',
-     {'trailers-ignore-from': 'followup-reviewer1@example.com'}),
-    ('partial-reroll', {}, {'addmysob': True}, 'defaults', {}),
-    ('nore', {}, {}, 'defaults', {}),
-    ('non-git-patch', {}, {}, 'defaults', {}),
-    ('non-git-patch-with-comments', {}, {}, 'defaults', {}),
-    ('with-diffstat', {}, {}, 'defaults', {}),
-    ('name-parens', {}, {}, 'defaults', {}),
-    ('bare-address', {}, {}, 'defaults', {}),
-    ('stripped-lines', {}, {}, 'defaults', {}),
-    ('htmljunk', {}, {}, 'defaults', {}),
-])
-def test_followup_trailers(sampledir: str, source: str, serargs: Dict[str, Any], amargs: Dict[str, Any],
-                           reference: str, b4cfg: Dict[str, Any]) -> None:
+@pytest.mark.parametrize(
+    'source,serargs,amargs,reference,b4cfg',
+    [
+        ('single', {}, {}, 'defaults', {}),
+        ('single', {}, {'noaddtrailers': True}, 'noadd', {}),
+        ('single', {}, {'addmysob': True}, 'addmysob', {}),
+        ('single', {}, {'addmysob': True, 'copyccs': True}, 'copyccs', {}),
+        ('single', {}, {'addmysob': True, 'addlink': True}, 'addlink', {}),
+        (
+            'single',
+            {},
+            {'addmysob': True, 'addlink': True},
+            'addmsgid',
+            {'linktrailermask': 'Message-ID: <%s>'},
+        ),
+        (
+            'single',
+            {},
+            {'addmysob': True, 'copyccs': True},
+            'ordered',
+            {'trailer-order': 'Cc,Tested*,Reviewed*,*'},
+        ),
+        ('single', {'sloppytrailers': True}, {'addmysob': True}, 'sloppy', {}),
+        ('with-cover', {}, {'addmysob': True}, 'defaults', {}),
+        ('with-cover', {}, {'addmysob': True, 'addlink': True}, 'addlink', {}),
+        ('custody', {}, {'addmysob': True, 'copyccs': True}, 'unordered', {}),
+        (
+            'custody',
+            {},
+            {'addmysob': True, 'copyccs': True},
+            'ordered',
+            {'trailer-order': 'Cc,Fixes*,Link*,Suggested*,Reviewed*,Tested*,*'},
+        ),
+        (
+            'custody',
+            {},
+            {'addmysob': True, 'copyccs': True},
+            'with-ignored',
+            {'trailers-ignore-from': 'followup-reviewer1@example.com'},
+        ),
+        ('partial-reroll', {}, {'addmysob': True}, 'defaults', {}),
+        ('nore', {}, {}, 'defaults', {}),
+        ('non-git-patch', {}, {}, 'defaults', {}),
+        ('non-git-patch-with-comments', {}, {}, 'defaults', {}),
+        ('with-diffstat', {}, {}, 'defaults', {}),
+        ('name-parens', {}, {}, 'defaults', {}),
+        ('bare-address', {}, {}, 'defaults', {}),
+        ('stripped-lines', {}, {}, 'defaults', {}),
+        ('htmljunk', {}, {}, 'defaults', {}),
+    ],
+)
+def test_followup_trailers(
+    sampledir: str,
+    source: str,
+    serargs: Dict[str, Any],
+    amargs: Dict[str, Any],
+    reference: str,
+    b4cfg: Dict[str, Any],
+) -> None:
     b4.MAIN_CONFIG.update(b4cfg)
     lmbx = b4.LoreMailbox()
-    for msg in b4.get_msgs_from_mailbox_or_maildir(f'{sampledir}/trailers-followup-{source}.mbox'):
+    for msg in b4.get_msgs_from_mailbox_or_maildir(
+        f'{sampledir}/trailers-followup-{source}.mbox'
+    ):
         lmbx.add_message(msg)
     lser = lmbx.get_series(**serargs)
     assert lser is not None
@@ -187,70 +304,134 @@ def test_followup_trailers(sampledir: str, source: str, serargs: Dict[str, Any],
         assert ifh.getvalue().decode() == fh.read()
 
 
-@pytest.mark.parametrize('hval,verify,tr', [
-    ('short-ascii', 'short-ascii', 'encode'),
-    ('short-unicôde', '=?utf-8?q?short-unic=C3=B4de?=', 'encode'),
-    # Long ascii
-    (('Lorem ipsum dolor sit amet consectetur adipiscing elit '
-      'sed do eiusmod tempor incididunt ut labore et dolore magna aliqua'),
-     ('Lorem ipsum dolor sit amet consectetur adipiscing elit sed do\n'
-      ' eiusmod tempor incididunt ut labore et dolore magna aliqua'), 'encode'),
-    # Long unicode
-    (('Lorem îpsum dolor sit amet consectetur adipiscing elît '
-      'sed do eiusmod tempôr incididunt ut labore et dolôre magna aliqua'),
-     ('=?utf-8?q?Lorem_=C3=AEpsum_dolor_sit_amet_consectetur_adipiscin?=\n'
-      ' =?utf-8?q?g_el=C3=AEt_sed_do_eiusmod_temp=C3=B4r_incididunt_ut_labore_et?=\n'
-      ' =?utf-8?q?_dol=C3=B4re_magna_aliqua?='), 'encode'),
-    # Exactly 75 long
-    ('Lorem ipsum dolor sit amet consectetur adipiscing elit sed do eiu',
-     'Lorem ipsum dolor sit amet consectetur adipiscing elit sed do eiu', 'encode'),
-    # Unicode that breaks on escape boundary
-    ('Lorem ipsum dolor sit amet consectetur adipiscin elît',
-     '=?utf-8?q?Lorem_ipsum_dolor_sit_amet_consectetur_adipiscin_el?=\n =?utf-8?q?=C3=AEt?=', 'encode'),
-    # Unicode that's just 1 too long
-    ('Lorem ipsum dolor sit amet consectetur adipi elît',
-     '=?utf-8?q?Lorem_ipsum_dolor_sit_amet_consectetur_adipi_el=C3=AE?=\n =?utf-8?q?t?=', 'encode'),
-    # A single address
-    ('foo@example.com', 'foo@example.com', 'encode'),
-    # Two addresses
-    ('foo@example.com, bar@example.com', 'foo@example.com, bar@example.com', 'encode'),
-    # Mixed addresses
-    ('foo@example.com, Foo Bar <bar@example.com>', 'foo@example.com, Foo Bar <bar@example.com>', 'encode'),
-    # Mixed Unicode
-    ('foo@example.com, Foo Bar <bar@example.com>, Fôo Baz <baz@example.com>',
-     'foo@example.com, Foo Bar <bar@example.com>, \n =?utf-8?q?F=C3=B4o_Baz?= <baz@example.com>', 'encode'),
-    ('foo@example.com, Foo Bar <bar@example.com>, Fôo Baz <baz@example.com>, "Quux, Foo" <quux@example.com>',
-     ('foo@example.com, Foo Bar <bar@example.com>, \n'
-      ' =?utf-8?q?F=C3=B4o_Baz?= <baz@example.com>, "Quux, Foo" <quux@example.com>'), 'encode'),
-    ('01234567890123456789012345678901234567890123456789012345678901@example.org, ä <foo@example.org>',
-     ('01234567890123456789012345678901234567890123456789012345678901@example.org, \n'
-      ' =?utf-8?q?=C3=A4?= <foo@example.org>'), 'encode'),
-    # Test for https://github.com/python/cpython/issues/100900
-    ('foo@example.com, Foo Bar <bar@example.com>, Fôo Baz <baz@example.com>, "Quûx, Foo" <quux@example.com>',
-     ('foo@example.com, Foo Bar <bar@example.com>, \n'
-      ' =?utf-8?q?F=C3=B4o_Baz?= <baz@example.com>, \n =?utf-8?q?Qu=C3=BBx=2C_Foo?= <quux@example.com>'), 'encode'),
-    # Test preserve
-    ('foo@example.com, Foo Bar <bar@example.com>, Fôo Baz <baz@example.com>, "Quûx, Foo" <quux@example.com>',
-     'foo@example.com, Foo Bar <bar@example.com>, Fôo Baz <baz@example.com>, \n "Quûx, Foo" <quux@example.com>',
-     'preserve'),
-    # Test decode
-    ('foo@example.com, Foo Bar <bar@example.com>, =?utf-8?q?Qu=C3=BBx=2C_Foo?= <quux@example.com>',
-     'foo@example.com, Foo Bar <bar@example.com>, \n "Quûx, Foo" <quux@example.com>',
-     'decode'),
-    # Test short message-id
-    ('Message-ID: <20240319-short-message-id@example.com>', '<20240319-short-message-id@example.com>', 'encode'),
-    # Test long message-id
-    ('Message-ID: <20240319-very-long-message-id-that-spans-multiple-lines-for-sure-because-longer-than-75-characters-abcde123456@longdomain.example.com>',
-     '<20240319-very-long-message-id-that-spans-multiple-lines-for-sure-because-longer-than-75-characters-abcde123456@longdomain.example.com>',
-     'encode'),
-])
-def test_header_wrapping(sampledir: str, hval: str, verify: str, tr: Literal['encode', 'decode', 'preserve']) -> None:
+@pytest.mark.parametrize(
+    'hval,verify,tr',
+    [
+        ('short-ascii', 'short-ascii', 'encode'),
+        ('short-unicôde', '=?utf-8?q?short-unic=C3=B4de?=', 'encode'),
+        # Long ascii
+        (
+            (
+                'Lorem ipsum dolor sit amet consectetur adipiscing elit '
+                'sed do eiusmod tempor incididunt ut labore et dolore magna aliqua'
+            ),
+            (
+                'Lorem ipsum dolor sit amet consectetur adipiscing elit sed do\n'
+                ' eiusmod tempor incididunt ut labore et dolore magna aliqua'
+            ),
+            'encode',
+        ),
+        # Long unicode
+        (
+            (
+                'Lorem îpsum dolor sit amet consectetur adipiscing elît '
+                'sed do eiusmod tempôr incididunt ut labore et dolôre magna aliqua'
+            ),
+            (
+                '=?utf-8?q?Lorem_=C3=AEpsum_dolor_sit_amet_consectetur_adipiscin?=\n'
+                ' =?utf-8?q?g_el=C3=AEt_sed_do_eiusmod_temp=C3=B4r_incididunt_ut_labore_et?=\n'
+                ' =?utf-8?q?_dol=C3=B4re_magna_aliqua?='
+            ),
+            'encode',
+        ),
+        # Exactly 75 long
+        (
+            'Lorem ipsum dolor sit amet consectetur adipiscing elit sed do eiu',
+            'Lorem ipsum dolor sit amet consectetur adipiscing elit sed do eiu',
+            'encode',
+        ),
+        # Unicode that breaks on escape boundary
+        (
+            'Lorem ipsum dolor sit amet consectetur adipiscin elît',
+            '=?utf-8?q?Lorem_ipsum_dolor_sit_amet_consectetur_adipiscin_el?=\n =?utf-8?q?=C3=AEt?=',
+            'encode',
+        ),
+        # Unicode that's just 1 too long
+        (
+            'Lorem ipsum dolor sit amet consectetur adipi elît',
+            '=?utf-8?q?Lorem_ipsum_dolor_sit_amet_consectetur_adipi_el=C3=AE?=\n =?utf-8?q?t?=',
+            'encode',
+        ),
+        # A single address
+        ('foo@example.com', 'foo@example.com', 'encode'),
+        # Two addresses
+        (
+            'foo@example.com, bar@example.com',
+            'foo@example.com, bar@example.com',
+            'encode',
+        ),
+        # Mixed addresses
+        (
+            'foo@example.com, Foo Bar <bar@example.com>',
+            'foo@example.com, Foo Bar <bar@example.com>',
+            'encode',
+        ),
+        # Mixed Unicode
+        (
+            'foo@example.com, Foo Bar <bar@example.com>, Fôo Baz <baz@example.com>',
+            'foo@example.com, Foo Bar <bar@example.com>, \n =?utf-8?q?F=C3=B4o_Baz?= <baz@example.com>',
+            'encode',
+        ),
+        (
+            'foo@example.com, Foo Bar <bar@example.com>, Fôo Baz <baz@example.com>, "Quux, Foo" <quux@example.com>',
+            (
+                'foo@example.com, Foo Bar <bar@example.com>, \n'
+                ' =?utf-8?q?F=C3=B4o_Baz?= <baz@example.com>, "Quux, Foo" <quux@example.com>'
+            ),
+            'encode',
+        ),
+        (
+            '01234567890123456789012345678901234567890123456789012345678901@example.org, ä <foo@example.org>',
+            (
+                '01234567890123456789012345678901234567890123456789012345678901@example.org, \n'
+                ' =?utf-8?q?=C3=A4?= <foo@example.org>'
+            ),
+            'encode',
+        ),
+        # Test for https://github.com/python/cpython/issues/100900
+        (
+            'foo@example.com, Foo Bar <bar@example.com>, Fôo Baz <baz@example.com>, "Quûx, Foo" <quux@example.com>',
+            (
+                'foo@example.com, Foo Bar <bar@example.com>, \n'
+                ' =?utf-8?q?F=C3=B4o_Baz?= <baz@example.com>, \n =?utf-8?q?Qu=C3=BBx=2C_Foo?= <quux@example.com>'
+            ),
+            'encode',
+        ),
+        # Test preserve
+        (
+            'foo@example.com, Foo Bar <bar@example.com>, Fôo Baz <baz@example.com>, "Quûx, Foo" <quux@example.com>',
+            'foo@example.com, Foo Bar <bar@example.com>, Fôo Baz <baz@example.com>, \n "Quûx, Foo" <quux@example.com>',
+            'preserve',
+        ),
+        # Test decode
+        (
+            'foo@example.com, Foo Bar <bar@example.com>, =?utf-8?q?Qu=C3=BBx=2C_Foo?= <quux@example.com>',
+            'foo@example.com, Foo Bar <bar@example.com>, \n "Quûx, Foo" <quux@example.com>',
+            'decode',
+        ),
+        # Test short message-id
+        (
+            'Message-ID: <20240319-short-message-id@example.com>',
+            '<20240319-short-message-id@example.com>',
+            'encode',
+        ),
+        # Test long message-id
+        (
+            'Message-ID: <20240319-very-long-message-id-that-spans-multiple-lines-for-sure-because-longer-than-75-characters-abcde123456@longdomain.example.com>',
+            '<20240319-very-long-message-id-that-spans-multiple-lines-for-sure-because-longer-than-75-characters-abcde123456@longdomain.example.com>',
+            'encode',
+        ),
+    ],
+)
+def test_header_wrapping(
+    sampledir: str, hval: str, verify: str, tr: Literal['encode', 'decode', 'preserve']
+) -> None:
     if ':' in hval:
         chunks = hval.split(':', maxsplit=1)
         hname = chunks[0].strip()
         hval = chunks[1].strip()
     else:
-        hname = 'To' if '@' in hval else "X-Header"
+        hname = 'To' if '@' in hval else 'X-Header'
     wrapped = b4.LoreMessage.wrap_header((hname, hval), transform=tr)
     assert wrapped.decode() == f'{hname}: {verify}'
     _wname, wval = wrapped.split(b':', maxsplit=1)
@@ -259,72 +440,138 @@ def test_header_wrapping(sampledir: str, hval: str, verify: str, tr: Literal['en
         assert cval == hval
 
 
-@pytest.mark.parametrize('pairs,verify,clean', [
-    ([('', 'foo@example.com'), ('Foo Bar', 'bar@example.com')],
-     'foo@example.com, Foo Bar <bar@example.com>', True),
-    ([('', 'foo@example.com'), ('Foo, Bar', 'bar@example.com')],
-     'foo@example.com, "Foo, Bar" <bar@example.com>', True),
-    ([('', 'foo@example.com'), ('Fôo, Bar', 'bar@example.com')],
-     'foo@example.com, "Fôo, Bar" <bar@example.com>', True),
-    ([('', 'foo@example.com'), ('=?utf-8?q?Qu=C3=BBx_Foo?=', 'quux@example.com')],
-     'foo@example.com, Quûx Foo <quux@example.com>', True),
-    ([('', 'foo@example.com'), ('=?utf-8?q?Qu=C3=BBx=2C_Foo?=', 'quux@example.com')],
-     'foo@example.com, "Quûx, Foo" <quux@example.com>', True),
-    ([('', 'foo@example.com'), ('=?utf-8?q?Qu=C3=BBx=2C_Foo?=', 'quux@example.com')],
-     'foo@example.com, =?utf-8?q?Qu=C3=BBx=2C_Foo?= <quux@example.com>', False),
-    # Pre-quoted display name with special chars must not be double-quoted
-    ([('', 'foo@example.com'), ('"Example.org Tools"', 'tools@example.org')],
-     'foo@example.com, "Example.org Tools" <tools@example.org>', True),
-    ([('', 'foo@example.com'), ('"Doe, Jane"', 'jane@example.com')],
-     'foo@example.com, "Doe, Jane" <jane@example.com>', True),
-    # Unquoted name with internal quotes
-    ([('', 'foo@example.com'), ('Jane "JD" Doe', 'jd@example.com')],
-     'foo@example.com, "Jane \\"JD\\" Doe" <jd@example.com>', True),
-    # Name starting with quote but not fully quoted
-    ([('', 'foo@example.com'), ('"JD" Doe', 'jd@example.com')],
-     'foo@example.com, "\\"JD\\" Doe" <jd@example.com>', True),
-    # Pre-quoted name with internal quotes
-    ([('', 'foo@example.com'), ('"Jane "JD" Doe"', 'jd@example.com')],
-     'foo@example.com, "Jane \\"JD\\" Doe" <jd@example.com>', True),
-])
+@pytest.mark.parametrize(
+    'pairs,verify,clean',
+    [
+        (
+            [('', 'foo@example.com'), ('Foo Bar', 'bar@example.com')],
+            'foo@example.com, Foo Bar <bar@example.com>',
+            True,
+        ),
+        (
+            [('', 'foo@example.com'), ('Foo, Bar', 'bar@example.com')],
+            'foo@example.com, "Foo, Bar" <bar@example.com>',
+            True,
+        ),
+        (
+            [('', 'foo@example.com'), ('Fôo, Bar', 'bar@example.com')],
+            'foo@example.com, "Fôo, Bar" <bar@example.com>',
+            True,
+        ),
+        (
+            [
+                ('', 'foo@example.com'),
+                ('=?utf-8?q?Qu=C3=BBx_Foo?=', 'quux@example.com'),
+            ],
+            'foo@example.com, Quûx Foo <quux@example.com>',
+            True,
+        ),
+        (
+            [
+                ('', 'foo@example.com'),
+                ('=?utf-8?q?Qu=C3=BBx=2C_Foo?=', 'quux@example.com'),
+            ],
+            'foo@example.com, "Quûx, Foo" <quux@example.com>',
+            True,
+        ),
+        (
+            [
+                ('', 'foo@example.com'),
+                ('=?utf-8?q?Qu=C3=BBx=2C_Foo?=', 'quux@example.com'),
+            ],
+            'foo@example.com, =?utf-8?q?Qu=C3=BBx=2C_Foo?= <quux@example.com>',
+            False,
+        ),
+        # Pre-quoted display name with special chars must not be double-quoted
+        (
+            [('', 'foo@example.com'), ('"Example.org Tools"', 'tools@example.org')],
+            'foo@example.com, "Example.org Tools" <tools@example.org>',
+            True,
+        ),
+        (
+            [('', 'foo@example.com'), ('"Doe, Jane"', 'jane@example.com')],
+            'foo@example.com, "Doe, Jane" <jane@example.com>',
+            True,
+        ),
+        # Unquoted name with internal quotes
+        (
+            [('', 'foo@example.com'), ('Jane "JD" Doe', 'jd@example.com')],
+            'foo@example.com, "Jane \\"JD\\" Doe" <jd@example.com>',
+            True,
+        ),
+        # Name starting with quote but not fully quoted
+        (
+            [('', 'foo@example.com'), ('"JD" Doe', 'jd@example.com')],
+            'foo@example.com, "\\"JD\\" Doe" <jd@example.com>',
+            True,
+        ),
+        # Pre-quoted name with internal quotes
+        (
+            [('', 'foo@example.com'), ('"Jane "JD" Doe"', 'jd@example.com')],
+            'foo@example.com, "Jane \\"JD\\" Doe" <jd@example.com>',
+            True,
+        ),
+    ],
+)
 def test_format_addrs(pairs: List[Tuple[str, str]], verify: str, clean: bool) -> None:
     formatted = b4.format_addrs(pairs, clean)
     assert formatted == verify
 
 
-@pytest.mark.parametrize('intrange,upper,expected', [
-    ('1-3', 5, [1, 2, 3]),
-    ('-1', 5, [5]),
-    ('1,3-5', 5, [1, 3, 4, 5]),
-    ('1', 5, [1]),
-    ('3', 5, [3]),
-    ('5', 5, [5]),
-    ('1,3,4-', 6, [1, 3, 4, 5, 6]),
-    ('1-3,5,-1', 7, [1, 2, 3, 5, 7]),
-    ('-7', 5, []),
-    ('1-8', 3, [1, 2, 3]),
-])
+@pytest.mark.parametrize(
+    'intrange,upper,expected',
+    [
+        ('1-3', 5, [1, 2, 3]),
+        ('-1', 5, [5]),
+        ('1,3-5', 5, [1, 3, 4, 5]),
+        ('1', 5, [1]),
+        ('3', 5, [3]),
+        ('5', 5, [5]),
+        ('1,3,4-', 6, [1, 3, 4, 5, 6]),
+        ('1-3,5,-1', 7, [1, 2, 3, 5, 7]),
+        ('-7', 5, []),
+        ('1-8', 3, [1, 2, 3]),
+    ],
+)
 def test_parse_int_range(intrange: str, upper: int, expected: List[int]) -> None:
     assert list(b4.parse_int_range(intrange, upper)) == expected
 
 
-@pytest.mark.parametrize('body_link,extra_link,expect_count', [
-    # Exact same URL — should dedup to one
-    ('https://patch.msgid.link/20240101-test-v1-1-abc123@example.com',
-     'https://patch.msgid.link/20240101-test-v1-1-abc123@example.com', 1),
-    # Same URL, different case — should still dedup
-    ('https://patch.msgid.link/20240101-TEST-V1-1-ABC123@example.com',
-     'https://patch.msgid.link/20240101-test-v1-1-abc123@example.com', 1),
-    # Different domains, same message-id — should dedup to one
-    ('https://lore.kernel.org/r/20240101-test-v1-1-abc123@example.com',
-     'https://patch.msgid.link/20240101-test-v1-1-abc123@example.com', 1),
-    # URL-encoded message-id — should match decoded form
-    ('https://lore.kernel.org/r/20240101-test-v1-1-abc123%40example.com',
-     'https://patch.msgid.link/20240101-test-v1-1-abc123@example.com', 1),
-    # Different message-ids — both should survive
-    ('https://lore.kernel.org/r/20240101-foo-v1-1-aaa@example.com',
-     'https://patch.msgid.link/20240101-bar-v1-1-bbb@example.com', 2),
-])
+@pytest.mark.parametrize(
+    'body_link,extra_link,expect_count',
+    [
+        # Exact same URL — should dedup to one
+        (
+            'https://patch.msgid.link/20240101-test-v1-1-abc123@example.com',
+            'https://patch.msgid.link/20240101-test-v1-1-abc123@example.com',
+            1,
+        ),
+        # Same URL, different case — should still dedup
+        (
+            'https://patch.msgid.link/20240101-TEST-V1-1-ABC123@example.com',
+            'https://patch.msgid.link/20240101-test-v1-1-abc123@example.com',
+            1,
+        ),
+        # Different domains, same message-id — should dedup to one
+        (
+            'https://lore.kernel.org/r/20240101-test-v1-1-abc123@example.com',
+            'https://patch.msgid.link/20240101-test-v1-1-abc123@example.com',
+            1,
+        ),
+        # URL-encoded message-id — should match decoded form
+        (
+            'https://lore.kernel.org/r/20240101-test-v1-1-abc123%40example.com',
+            'https://patch.msgid.link/20240101-test-v1-1-abc123@example.com',
+            1,
+        ),
+        # Different message-ids — both should survive
+        (
+            'https://lore.kernel.org/r/20240101-foo-v1-1-aaa@example.com',
+            'https://patch.msgid.link/20240101-bar-v1-1-bbb@example.com',
+            2,
+        ),
+    ],
+)
 def test_link_trailer_dedup(body_link: str, extra_link: str, expect_count: int) -> None:
     """Link: trailers already in the body should not be duplicated by extras."""
     raw = (
@@ -357,9 +604,15 @@ class TestTakeFlow:
     """
 
     @staticmethod
-    def _make_patch_msg(msgid: str, subject: str, body: str,
-                        diff: str, counter: int = 1, expected: int = 1,
-                        in_reply_to: Optional[str] = None) -> email.message.EmailMessage:
+    def _make_patch_msg(
+        msgid: str,
+        subject: str,
+        body: str,
+        diff: str,
+        counter: int = 1,
+        expected: int = 1,
+        in_reply_to: Optional[str] = None,
+    ) -> email.message.EmailMessage:
         """Build a realistic patch email like what lore returns.
 
         The *body* should contain the full commit message including
@@ -379,19 +632,19 @@ class TestTakeFlow:
         if in_reply_to:
             raw += f'In-Reply-To: <{in_reply_to}>\n'
             raw += f'References: <{in_reply_to}>\n'
-        raw += (
-            f'\n'
-            f'{body}\n'
-            f'---\n'
-            f'{diff}\n'
-        )
+        raw += f'\n{body}\n---\n{diff}\n'
         return email.message_from_string(
-            raw, policy=email.policy.EmailPolicy(utf8=True))
+            raw, policy=email.policy.EmailPolicy(utf8=True)
+        )
 
     @staticmethod
-    def _make_reply_msg(msgid: str, in_reply_to: str,
-                        from_name: str, from_email: str,
-                        trailer_lines: List[str]) -> email.message.EmailMessage:
+    def _make_reply_msg(
+        msgid: str,
+        in_reply_to: str,
+        from_name: str,
+        from_email: str,
+        trailer_lines: List[str],
+    ) -> email.message.EmailMessage:
         """Build a followup reply with trailers."""
         trailers = '\n'.join(trailer_lines)
         raw = (
@@ -407,7 +660,8 @@ class TestTakeFlow:
             f'{trailers}\n'
         )
         return email.message_from_string(
-            raw, policy=email.policy.EmailPolicy(utf8=True))
+            raw, policy=email.policy.EmailPolicy(utf8=True)
+        )
 
     def test_link_dedup_with_followups(self, gitdir: str) -> None:
         """Patch already has Link: in body, get_am_ready(addlink=True)
@@ -474,17 +728,16 @@ class TestTakeFlow:
         # Apply to master via git am
         ifh = io.BytesIO()
         b4.save_git_am_mbox(am_msgs, ifh)
-        ecode, out = b4.git_run_command(
-            gitdir, ['am'], stdin=ifh.getvalue())
+        ecode, out = b4.git_run_command(gitdir, ['am'], stdin=ifh.getvalue())
         assert ecode == 0, f'git am failed: {out}'
 
-        ecode, result = b4.git_run_command(
-            gitdir, ['log', '-1', '--format=%B'])
+        ecode, result = b4.git_run_command(gitdir, ['log', '-1', '--format=%B'])
         assert ecode == 0
 
         # Exactly one Link: trailer, not two
-        assert result.count(f'Link: {link_url}') == 1, \
+        assert result.count(f'Link: {link_url}') == 1, (
             f'Duplicate Link: found:\n{result}'
+        )
         # Followup trailers applied
         assert 'Reviewed-by: Reviewer One <reviewer@example.com>' in result
         assert 'Acked-by: Acker Two <acker@example.com>' in result
@@ -527,12 +780,10 @@ class TestTakeFlow:
 
         ifh = io.BytesIO()
         b4.save_git_am_mbox(am_msgs, ifh)
-        ecode, out = b4.git_run_command(
-            gitdir, ['am'], stdin=ifh.getvalue())
+        ecode, out = b4.git_run_command(gitdir, ['am'], stdin=ifh.getvalue())
         assert ecode == 0, f'git am failed: {out}'
 
-        ecode, result = b4.git_run_command(
-            gitdir, ['log', '-1', '--format=%B'])
+        ecode, result = b4.git_run_command(gitdir, ['log', '-1', '--format=%B'])
         assert ecode == 0
 
         expected_link = f'https://patch.msgid.link/{patch_msgid}'
@@ -589,12 +840,10 @@ class TestTakeFlow:
 
         ifh = io.BytesIO()
         b4.save_git_am_mbox(am_msgs, ifh)
-        ecode, out = b4.git_run_command(
-            gitdir, ['am'], stdin=ifh.getvalue())
+        ecode, out = b4.git_run_command(gitdir, ['am'], stdin=ifh.getvalue())
         assert ecode == 0, f'git am failed: {out}'
 
-        ecode, result = b4.git_run_command(
-            gitdir, ['log', '-1', '--format=%B'])
+        ecode, result = b4.git_run_command(gitdir, ['log', '-1', '--format=%B'])
         assert ecode == 0
 
         assert 'Reviewed-by: Alice Author <alice@example.com>' in result
@@ -643,12 +892,10 @@ class TestTakeFlow:
 
         ifh = io.BytesIO()
         b4.save_git_am_mbox(am_msgs, ifh)
-        ecode, out = b4.git_run_command(
-            gitdir, ['am'], stdin=ifh.getvalue())
+        ecode, out = b4.git_run_command(gitdir, ['am'], stdin=ifh.getvalue())
         assert ecode == 0, f'git am failed: {out}'
 
-        ecode, result = b4.git_run_command(
-            gitdir, ['log', '-1', '--format=%B'])
+        ecode, result = b4.git_run_command(gitdir, ['log', '-1', '--format=%B'])
         assert ecode == 0
 
         # Same message-id in both URLs, so deduped to one Link:
@@ -656,18 +903,47 @@ class TestTakeFlow:
         assert result.count('Link:') == 1
 
 
-@pytest.mark.parametrize('subject,extras,expected', [
-    ('[PATCH] This is a patch', None, '[PATCH] This is a patch'),
-    ('[PATCH v3] This is a patch', None, '[PATCH v3] This is a patch'),
-    ('[PATCH RFC v3] This is a patch', None, '[PATCH RFC v3] This is a patch'),
-    ('[RFC PATCH v3 1/3] This is a patch', None, '[RFC PATCH v3 1/3] This is a patch'),
-    ('[RESEND PATCH v3 1/3] This is a patch', None, '[RESEND PATCH v3 1/3] This is a patch'),
-    ('[PATCH RFC v3 2/3] This is a patch', ['RFC'], '[PATCH RFC v3 2/3] This is a patch'),
-    ('[PATCH RFC v3 3/12] This is a patch', None, '[PATCH RFC v3 03/12] This is a patch'),
-    ('[PATCH RFC v3] This is a [patch]', ['RFC'], '[PATCH RFC v3] This is a [patch]'),
-    ('[PATCH RFC v3 2/3] This is a patch', ['netdev', 'bpf'], '[PATCH RFC netdev bpf v3 2/3] This is a patch'),
-])
-def test_lore_subject_prefixes(subject: str, extras: Optional[List[str]], expected: str) -> None:
+@pytest.mark.parametrize(
+    'subject,extras,expected',
+    [
+        ('[PATCH] This is a patch', None, '[PATCH] This is a patch'),
+        ('[PATCH v3] This is a patch', None, '[PATCH v3] This is a patch'),
+        ('[PATCH RFC v3] This is a patch', None, '[PATCH RFC v3] This is a patch'),
+        (
+            '[RFC PATCH v3 1/3] This is a patch',
+            None,
+            '[RFC PATCH v3 1/3] This is a patch',
+        ),
+        (
+            '[RESEND PATCH v3 1/3] This is a patch',
+            None,
+            '[RESEND PATCH v3 1/3] This is a patch',
+        ),
+        (
+            '[PATCH RFC v3 2/3] This is a patch',
+            ['RFC'],
+            '[PATCH RFC v3 2/3] This is a patch',
+        ),
+        (
+            '[PATCH RFC v3 3/12] This is a patch',
+            None,
+            '[PATCH RFC v3 03/12] This is a patch',
+        ),
+        (
+            '[PATCH RFC v3] This is a [patch]',
+            ['RFC'],
+            '[PATCH RFC v3] This is a [patch]',
+        ),
+        (
+            '[PATCH RFC v3 2/3] This is a patch',
+            ['netdev', 'bpf'],
+            '[PATCH RFC netdev bpf v3 2/3] This is a patch',
+        ),
+    ],
+)
+def test_lore_subject_prefixes(
+    subject: str, extras: Optional[List[str]], expected: str
+) -> None:
     lsubj = b4.LoreSubject(subject)
     assert lsubj.get_rebuilt_subject(eprefixes=extras) == expected
 
@@ -683,6 +959,7 @@ class TestGetLoreNode:
         from unittest.mock import MagicMock
 
         import liblore
+
         mock_node = MagicMock()
         mock_from_gc = MagicMock(return_value=mock_node)
         monkeypatch.setattr(liblore.LoreNode, 'from_git_config', mock_from_gc)
@@ -695,8 +972,11 @@ class TestGetLoreNode:
         from unittest.mock import MagicMock
 
         import liblore
+
         mock_node = MagicMock()
-        monkeypatch.setattr(liblore.LoreNode, 'from_git_config', MagicMock(return_value=mock_node))
+        monkeypatch.setattr(
+            liblore.LoreNode, 'from_git_config', MagicMock(return_value=mock_node)
+        )
         b4.get_lore_node()
         mock_node.set_user_agent.assert_called_once_with('b4', b4.__VERSION__)
 
@@ -705,8 +985,11 @@ class TestGetLoreNode:
         from unittest.mock import MagicMock
 
         import liblore
+
         mock_node = MagicMock()
-        monkeypatch.setattr(liblore.LoreNode, 'from_git_config', MagicMock(return_value=mock_node))
+        monkeypatch.setattr(
+            liblore.LoreNode, 'from_git_config', MagicMock(return_value=mock_node)
+        )
         b4.get_lore_node()
         mock_node.set_requests_session.assert_not_called()
 
@@ -715,6 +998,7 @@ class TestGetLoreNode:
         from unittest.mock import MagicMock
 
         import liblore
+
         b4.MAIN_CONFIG['cache-expire'] = '5'
         mock_node = MagicMock()
         mock_from_gc = MagicMock(return_value=mock_node)
@@ -729,6 +1013,7 @@ class TestGetLoreNode:
         from unittest.mock import MagicMock
 
         import liblore
+
         mock_node = MagicMock()
         mock_from_gc = MagicMock(return_value=mock_node)
         monkeypatch.setattr(liblore.LoreNode, 'from_git_config', mock_from_gc)
diff --git a/src/tests/test_ez.py b/src/tests/test_ez.py
index ef21985..dc42cfe 100644
--- a/src/tests/test_ez.py
+++ b/src/tests/test_ez.py
@@ -10,33 +10,77 @@ import b4.ez
 import b4.mbox
 
 
-@pytest.fixture(scope="function")
+@pytest.fixture(scope='function')
 def prepdir(gitdir: str) -> Generator[str, None, None]:
     b4.MAIN_CONFIG.update({'prep-cover-strategy': 'branch-description'})
     parser = b4.command.setup_parser()
-    b4args = ['--no-stdin', '--no-interactive', '--offline-mode', 'prep', '-n', 'pytest']
+    b4args = [
+        '--no-stdin',
+        '--no-interactive',
+        '--offline-mode',
+        'prep',
+        '-n',
+        'pytest',
+    ]
     cmdargs = parser.parse_args(b4args)
     b4.ez.cmd_prep(cmdargs)
     yield gitdir
 
 
-@pytest.mark.parametrize('mboxf, bundlef, rep, trargs, compareargs, compareout, b4cfg', [
-    ('trailers-thread-with-followups', None, None, [],
-     ['log', '--format=%ae%n%s%n%b---', 'HEAD~4..'], 'trailers-thread-with-followups',
-     {'shazam-am-flags': '--signoff'}),
-    ('trailers-thread-with-cover-followup', None, None, [],
-     ['log', '--format=%ae%n%s%n%b---', 'HEAD~4..'], 'trailers-thread-with-cover-followup',
-     {'shazam-am-flags': '--signoff'}),
-    # Test matching trailer updates by subject when patch-id changes
-    ('trailers-thread-with-followups', None, (b'vivendum', b'addendum'), [],
-     ['log', '--format=%ae%n%s%n%b---', 'HEAD~4..'], 'trailers-thread-with-followups-no-match',
-     {'shazam-am-flags': '--signoff'}),
-    # Test that we properly perserve commits with --- in them
-    ('trailers-thread-with-followups', 'trailers-with-tripledash', None, [],
-     ['log', '--format=%ae%n%s%n%b---', 'HEAD~4..'], 'trailers-thread-with-followups-and-tripledash',
-     None),
-])
-def test_trailers(sampledir: str, prepdir: str, mboxf: str, bundlef: Optional[str], rep: Optional[Tuple[bytes, bytes]], trargs: List[str], compareargs: List[str], compareout: str, b4cfg: Dict[str, Any]) -> None:
+@pytest.mark.parametrize(
+    'mboxf, bundlef, rep, trargs, compareargs, compareout, b4cfg',
+    [
+        (
+            'trailers-thread-with-followups',
+            None,
+            None,
+            [],
+            ['log', '--format=%ae%n%s%n%b---', 'HEAD~4..'],
+            'trailers-thread-with-followups',
+            {'shazam-am-flags': '--signoff'},
+        ),
+        (
+            'trailers-thread-with-cover-followup',
+            None,
+            None,
+            [],
+            ['log', '--format=%ae%n%s%n%b---', 'HEAD~4..'],
+            'trailers-thread-with-cover-followup',
+            {'shazam-am-flags': '--signoff'},
+        ),
+        # Test matching trailer updates by subject when patch-id changes
+        (
+            'trailers-thread-with-followups',
+            None,
+            (b'vivendum', b'addendum'),
+            [],
+            ['log', '--format=%ae%n%s%n%b---', 'HEAD~4..'],
+            'trailers-thread-with-followups-no-match',
+            {'shazam-am-flags': '--signoff'},
+        ),
+        # Test that we properly preserve commits with --- in them
+        (
+            'trailers-thread-with-followups',
+            'trailers-with-tripledash',
+            None,
+            [],
+            ['log', '--format=%ae%n%s%n%b---', 'HEAD~4..'],
+            'trailers-thread-with-followups-and-tripledash',
+            None,
+        ),
+    ],
+)
+def test_trailers(
+    sampledir: str,
+    prepdir: str,
+    mboxf: str,
+    bundlef: Optional[str],
+    rep: Optional[Tuple[bytes, bytes]],
+    trargs: List[str],
+    compareargs: List[str],
+    compareout: str,
+    b4cfg: Dict[str, Any],
+) -> None:
     if b4cfg:
         b4.MAIN_CONFIG.update(b4cfg)
     config = b4.get_main_config()
@@ -59,7 +103,15 @@ def test_trailers(sampledir: str, prepdir: str, mboxf: str, bundlef: Optional[st
                 fh.write(contents)
         else:
             tfile = mfile
-        b4args = ['--no-stdin', '--no-interactive', '--offline-mode', 'shazam', '--no-add-trailers', '-m', tfile]
+        b4args = [
+            '--no-stdin',
+            '--no-interactive',
+            '--offline-mode',
+            'shazam',
+            '--no-add-trailers',
+            '-m',
+            tfile,
+        ]
         parser = b4.command.setup_parser()
 
         cmdargs = parser.parse_args(b4args)
@@ -71,7 +123,15 @@ def test_trailers(sampledir: str, prepdir: str, mboxf: str, bundlef: Optional[st
     assert os.path.exists(cfile)
 
     parser = b4.command.setup_parser()
-    b4args = ['--no-stdin', '--no-interactive', '--offline-mode', 'trailers', '--update', '-m', mfile] + trargs
+    b4args = [
+        '--no-stdin',
+        '--no-interactive',
+        '--offline-mode',
+        'trailers',
+        '--update',
+        '-m',
+        mfile,
+    ] + trargs
     cmdargs = parser.parse_args(b4args)
     b4.ez.cmd_trailers(cmdargs)
 
@@ -86,6 +146,7 @@ def test_trailers(sampledir: str, prepdir: str, mboxf: str, bundlef: Optional[st
 # Tests for pre/post-rewrite hooks
 # ---------------------------------------------------------------------------
 
+
 class TestRunRewriteHook:
     """Tests for run_rewrite_hook() and its integration with run_frf()."""
 
@@ -100,9 +161,10 @@ class TestRunRewriteHook:
         """A pre-hook that exits 0 should not raise."""
         b4.MAIN_CONFIG['prep-pre-rewrite-hook'] = 'true'
         try:
-            with patch('b4.ez.b4._run_command',
-                       return_value=(0, b'', b'')) as mock_run, \
-                 patch('b4.ez.b4.git_get_toplevel', return_value='/tmp'):
+            with (
+                patch('b4.ez.b4._run_command', return_value=(0, b'', b'')) as mock_run,
+                patch('b4.ez.b4.git_get_toplevel', return_value='/tmp'),
+            ):
                 b4.ez.run_rewrite_hook('pre')
                 mock_run.assert_called_once_with(['true'], rundir='/tmp')
         finally:
@@ -112,9 +174,13 @@ class TestRunRewriteHook:
         """A pre-hook that exits non-zero should raise RuntimeError."""
         b4.MAIN_CONFIG['prep-pre-rewrite-hook'] = 'stg commit --all'
         try:
-            with patch('b4.ez.b4._run_command',
-                       return_value=(1, b'', b'stg: not initialized\n')), \
-                 patch('b4.ez.b4.git_get_toplevel', return_value='/tmp'):
+            with (
+                patch(
+                    'b4.ez.b4._run_command',
+                    return_value=(1, b'', b'stg: not initialized\n'),
+                ),
+                patch('b4.ez.b4.git_get_toplevel', return_value='/tmp'),
+            ):
                 with pytest.raises(RuntimeError, match='Pre-rewrite hook'):
                     b4.ez.run_rewrite_hook('pre')
         finally:
@@ -124,9 +190,10 @@ class TestRunRewriteHook:
         """A post-hook that exits non-zero should warn, not raise."""
         b4.MAIN_CONFIG['prep-post-rewrite-hook'] = 'false'
         try:
-            with patch('b4.ez.b4._run_command',
-                       return_value=(1, b'', b'error\n')), \
-                 patch('b4.ez.b4.git_get_toplevel', return_value='/tmp'):
+            with (
+                patch('b4.ez.b4._run_command', return_value=(1, b'', b'error\n')),
+                patch('b4.ez.b4.git_get_toplevel', return_value='/tmp'),
+            ):
                 # Should not raise
                 b4.ez.run_rewrite_hook('post')
         finally:
@@ -137,9 +204,10 @@ class TestRunRewriteHook:
         b4.MAIN_CONFIG['prep-pre-rewrite-hook'] = 'false'
         try:
             mock_frf = MagicMock()
-            with patch('b4.ez.b4._run_command',
-                       return_value=(1, b'', b'hook failed\n')), \
-                 patch('b4.ez.b4.git_get_toplevel', return_value='/tmp'):
+            with (
+                patch('b4.ez.b4._run_command', return_value=(1, b'', b'hook failed\n')),
+                patch('b4.ez.b4.git_get_toplevel', return_value='/tmp'),
+            ):
                 with pytest.raises(RuntimeError):
                     b4.ez.run_frf(mock_frf)
             # frf.run() should never have been called
@@ -160,13 +228,14 @@ class TestRunRewriteHook:
                 call_order.append(cmdargs[0])
                 return (0, b'', b'')
 
-            with patch('b4.ez.b4._run_command', side_effect=_track_run), \
-                 patch('b4.ez.b4.git_get_toplevel', return_value='/tmp'), \
-                 patch('b4.ez.b4.git_get_gitdir', return_value='/tmp'):
+            with (
+                patch('b4.ez.b4._run_command', side_effect=_track_run),
+                patch('b4.ez.b4.git_get_toplevel', return_value='/tmp'),
+                patch('b4.ez.b4.git_get_gitdir', return_value='/tmp'),
+            ):
                 b4.ez.run_frf(mock_frf)
 
             assert call_order == ['pre-cmd', 'frf', 'post-cmd']
         finally:
             b4.MAIN_CONFIG.pop('prep-pre-rewrite-hook', None)
             b4.MAIN_CONFIG.pop('prep-post-rewrite-hook', None)
-
diff --git a/src/tests/test_mbox.py b/src/tests/test_mbox.py
index b533421..119d21d 100644
--- a/src/tests/test_mbox.py
+++ b/src/tests/test_mbox.py
@@ -10,28 +10,71 @@ import b4.command
 import b4.mbox
 
 
-@pytest.mark.parametrize('mboxf, shazamargs, compareargs, compareout, b4cfg', [
-    ('shazam-git1-just-series', [],
-     ['log', '--format=%ae%n%ce%n%s%n%b---', 'HEAD~4..'], 'shazam-git1-just-series-defaults', {}),
-    ('shazam-git1-just-series', ['-H'],
-     ['log', '--format=%ae%n%ce%n%s%n%b---', 'HEAD..FETCH_HEAD'], 'shazam-git1-just-series-defaults', {}),
-    ('shazam-git1-just-series', ['-M'],
-     ['log', '--format=%ae%n%ce%n%s%n%b---', 'HEAD^..'], 'shazam-git1-just-series-merged', {}),
-    # --add-link: Link: trailers are appended to each patch
-    ('shazam-git1-just-series', ['--add-link'],
-     ['log', '--format=%ae%n%ce%n%s%n%b---', 'HEAD~4..'], 'shazam-git1-just-series-addlink', {}),
-    # --add-link with pre-existing Link: in patch bodies: no duplicates
-    ('shazam-git1-with-link', ['--add-link'],
-     ['log', '--format=%ae%n%ce%n%s%n%b---', 'HEAD~4..'], 'shazam-git1-just-series-addlink', {}),
-])
-def test_shazam(sampledir: str, gitdir: str, mboxf: str, shazamargs: List[str], compareargs: List[str], compareout: str, b4cfg: Dict[str, Any]) -> None:
+@pytest.mark.parametrize(
+    'mboxf, shazamargs, compareargs, compareout, b4cfg',
+    [
+        (
+            'shazam-git1-just-series',
+            [],
+            ['log', '--format=%ae%n%ce%n%s%n%b---', 'HEAD~4..'],
+            'shazam-git1-just-series-defaults',
+            {},
+        ),
+        (
+            'shazam-git1-just-series',
+            ['-H'],
+            ['log', '--format=%ae%n%ce%n%s%n%b---', 'HEAD..FETCH_HEAD'],
+            'shazam-git1-just-series-defaults',
+            {},
+        ),
+        (
+            'shazam-git1-just-series',
+            ['-M'],
+            ['log', '--format=%ae%n%ce%n%s%n%b---', 'HEAD^..'],
+            'shazam-git1-just-series-merged',
+            {},
+        ),
+        # --add-link: Link: trailers are appended to each patch
+        (
+            'shazam-git1-just-series',
+            ['--add-link'],
+            ['log', '--format=%ae%n%ce%n%s%n%b---', 'HEAD~4..'],
+            'shazam-git1-just-series-addlink',
+            {},
+        ),
+        # --add-link with pre-existing Link: in patch bodies: no duplicates
+        (
+            'shazam-git1-with-link',
+            ['--add-link'],
+            ['log', '--format=%ae%n%ce%n%s%n%b---', 'HEAD~4..'],
+            'shazam-git1-just-series-addlink',
+            {},
+        ),
+    ],
+)
+def test_shazam(
+    sampledir: str,
+    gitdir: str,
+    mboxf: str,
+    shazamargs: List[str],
+    compareargs: List[str],
+    compareout: str,
+    b4cfg: Dict[str, Any],
+) -> None:
     b4.MAIN_CONFIG.update(b4cfg)
     mfile = os.path.join(sampledir, f'{mboxf}.mbox')
     cfile = os.path.join(sampledir, f'{compareout}.verify')
     assert os.path.exists(mfile)
     assert os.path.exists(cfile)
     parser = b4.command.setup_parser()
-    shazamargs = ['--no-stdin', '--no-interactive', '--offline-mode', 'shazam', '-m', mfile] + shazamargs
+    shazamargs = [
+        '--no-stdin',
+        '--no-interactive',
+        '--offline-mode',
+        'shazam',
+        '-m',
+        mfile,
+    ] + shazamargs
     cmdargs = parser.parse_args(shazamargs)
     with pytest.raises(SystemExit) as e:
         b4.mbox.main(cmdargs)
@@ -43,8 +86,9 @@ def test_shazam(sampledir: str, gitdir: str, mboxf: str, shazamargs: List[str],
     assert logstr == cstr
 
 
-def _make_msg(subject: str, from_addr: str, date: str,
-              body: str = '', msgid: str = '') -> EmailMessage:
+def _make_msg(
+    subject: str, from_addr: str, date: str, body: str = '', msgid: str = ''
+) -> EmailMessage:
     msg = EmailMessage()
     msg['Subject'] = subject
     msg['From'] = from_addr
@@ -129,10 +173,7 @@ def test_get_extra_series_accepts_matching_change_id() -> None:
         '[PATCH v2 0/2] foo: fix bar syntax',
         'Author <author@example.com>',
         'Fri, 03 Jan 2026 10:00:00 +0000',
-        body=(
-            'v2: split into two patches.\n\n'
-            f'change-id: {change_id}\n'
-        ),
+        body=(f'v2: split into two patches.\n\nchange-id: {change_id}\n'),
         msgid='<v2-cover@example.com>',
     )
     v2_patches = [
diff --git a/src/tests/test_messages.py b/src/tests/test_messages.py
index 9d50896..d00b6ff 100644
--- a/src/tests/test_messages.py
+++ b/src/tests/test_messages.py
@@ -29,9 +29,7 @@ class TestGetDb:
 
     def test_sets_schema_version(self, tmp_path: pytest.TempPathFactory) -> None:
         conn = messages.get_db()
-        version = conn.execute(
-            'SELECT version FROM schema_version'
-        ).fetchone()[0]
+        version = conn.execute('SELECT version FROM schema_version').fetchone()[0]
         assert version == messages.SCHEMA_VERSION
         conn.close()
 
@@ -85,8 +83,9 @@ class TestSetFlag:
 
     def test_creates_new_row(self, tmp_path: pytest.TempPathFactory) -> None:
         conn = messages.get_db()
-        messages.set_flag(conn, 'new@example.com', 'Seen',
-                          msg_date='2026-03-05T10:00:00')
+        messages.set_flag(
+            conn, 'new@example.com', 'Seen', msg_date='2026-03-05T10:00:00'
+        )
         flags = messages.get_flags(conn, 'new@example.com')
         assert 'Seen' in flags
         conn.close()
@@ -197,13 +196,15 @@ class TestCleanupOld:
     def test_removes_old_entries(self, tmp_path: pytest.TempPathFactory) -> None:
         conn = messages.get_db()
         old_date = (
-            datetime.datetime.now(datetime.timezone.utc)
-            - datetime.timedelta(days=200)
+            datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=200)
         ).isoformat()
         messages.set_flag(conn, 'old@example.com', 'Seen', msg_date=old_date)
-        messages.set_flag(conn, 'recent@example.com', 'Seen',
-                          msg_date=datetime.datetime.now(
-                              datetime.timezone.utc).isoformat())
+        messages.set_flag(
+            conn,
+            'recent@example.com',
+            'Seen',
+            msg_date=datetime.datetime.now(datetime.timezone.utc).isoformat(),
+        )
         deleted = messages.cleanup_old(conn, max_days=180)
         assert deleted == 1
         assert messages.get_flags(conn, 'old@example.com') == ''
diff --git a/src/tests/test_patatt.py b/src/tests/test_patatt.py
index c257d41..de05277 100644
--- a/src/tests/test_patatt.py
+++ b/src/tests/test_patatt.py
@@ -2,6 +2,7 @@
 
 Uses ephemeral ed25519 keys so no external key material is needed.
 """
+
 import base64
 import email.message
 import os
@@ -27,8 +28,7 @@ def ed25519_keypair() -> Generator[Tuple[str, str, str, str], None, None]:
     sk = SigningKey.generate()
     sk_b64 = base64.b64encode(sk.encode()).decode()
     vk_b64 = base64.b64encode(sk.verify_key.encode()).decode()
-    with tempfile.NamedTemporaryFile(mode='w', suffix='.key',
-                                     delete=False) as fh:
+    with tempfile.NamedTemporaryFile(mode='w', suffix='.key', delete=False) as fh:
         fh.write(sk_b64)
         privkey_path = fh.name
     yield privkey_path, vk_b64, 'test@example.com', 'default'
@@ -36,7 +36,9 @@ def ed25519_keypair() -> Generator[Tuple[str, str, str, str], None, None]:
 
 
 @pytest.fixture()
-def keyring_dir(ed25519_keypair: Tuple[str, str, str, str]) -> Generator[str, None, None]:
+def keyring_dir(
+    ed25519_keypair: Tuple[str, str, str, str],
+) -> Generator[str, None, None]:
     """Create a temporary keyring directory with the ephemeral public key.
 
     The directory layout follows patatt's expected structure:
@@ -54,9 +56,11 @@ def keyring_dir(ed25519_keypair: Tuple[str, str, str, str]) -> Generator[str, No
         yield tmpdir
 
 
-def _make_test_message(from_addr: str = 'test@example.com',
-                       subject: str = 'Test patch',
-                       body: str = 'This is a test.\n') -> bytes:
+def _make_test_message(
+    from_addr: str = 'test@example.com',
+    subject: str = 'Test patch',
+    body: str = 'This is a test.\n',
+) -> bytes:
     """Build a minimal RFC2822 message as bytes."""
     msg = email.message.EmailMessage()
     msg['From'] = from_addr
@@ -70,8 +74,9 @@ def _make_test_message(from_addr: str = 'test@example.com',
 class TestPatattSignVerify:
     """Round-trip sign and verify using ephemeral ed25519 keys."""
 
-    def test_sign_and_verify(self, ed25519_keypair: Tuple[str, str, str, str],
-                             keyring_dir: str) -> None:
+    def test_sign_and_verify(
+        self, ed25519_keypair: Tuple[str, str, str, str], keyring_dir: str
+    ) -> None:
         """A signed message should validate with the matching public key."""
         privkey_path, _vk_b64, identity, selector = ed25519_keypair
         msg_bytes = _make_test_message(from_addr=identity)
@@ -88,8 +93,9 @@ class TestPatattSignVerify:
         assert len(results) > 0
         assert results[0][0] == patatt.RES_VALID
 
-    def test_tampered_body_fails(self, ed25519_keypair: Tuple[str, str, str, str],
-                                 keyring_dir: str) -> None:
+    def test_tampered_body_fails(
+        self, ed25519_keypair: Tuple[str, str, str, str], keyring_dir: str
+    ) -> None:
         """Modifying the body after signing should fail validation."""
         privkey_path, _vk_b64, identity, selector = ed25519_keypair
         msg_bytes = _make_test_message(from_addr=identity)
@@ -156,7 +162,9 @@ class TestPatattSignVerify:
         assert len(results) == 1
         assert results[0][0] == patatt.RES_NOSIG
 
-    def test_sign_adds_developer_key_header(self, ed25519_keypair: Tuple[str, str, str, str]) -> None:
+    def test_sign_adds_developer_key_header(
+        self, ed25519_keypair: Tuple[str, str, str, str]
+    ) -> None:
         """Signing adds both X-Developer-Signature and X-Developer-Key."""
         privkey_path, _vk_b64, identity, selector = ed25519_keypair
         msg_bytes = _make_test_message(from_addr=identity)
diff --git a/src/tests/test_rethread.py b/src/tests/test_rethread.py
index f2a0394..6e518fc 100644
--- a/src/tests/test_rethread.py
+++ b/src/tests/test_rethread.py
@@ -10,11 +10,15 @@ import b4
 # ---------------------------------------------------------------------------
 # Helpers for building synthetic EmailMessage objects
 # ---------------------------------------------------------------------------
-def _make_msg(msgid: str, subject: str, from_addr: str = 'Test Author <test@example.com>',
-              date: str = 'Mon, 23 Mar 2026 12:00:00 +0000',
-              in_reply_to: Optional[str] = None,
-              references: Optional[str] = None,
-              body: str = 'Hello\n') -> email.message.EmailMessage:
+def _make_msg(
+    msgid: str,
+    subject: str,
+    from_addr: str = 'Test Author <test@example.com>',
+    date: str = 'Mon, 23 Mar 2026 12:00:00 +0000',
+    in_reply_to: Optional[str] = None,
+    references: Optional[str] = None,
+    body: str = 'Hello\n',
+) -> email.message.EmailMessage:
     msg = email.message.EmailMessage()
     msg['Message-ID'] = f'<{msgid}>'
     msg['Subject'] = subject
@@ -33,40 +37,50 @@ def _make_msg(msgid: str, subject: str, from_addr: str = 'Test Author <test@exam
 # ===========================================================================
 class TestParseMsgid:
     def test_bare_msgid(self) -> None:
-        assert b4.parse_msgid('20260323041505.2088-1-user@example.com') == \
-            '20260323041505.2088-1-user@example.com'
+        assert (
+            b4.parse_msgid('20260323041505.2088-1-user@example.com')
+            == '20260323041505.2088-1-user@example.com'
+        )
 
     def test_angle_brackets(self) -> None:
-        assert b4.parse_msgid('<20260323041505.2088-1-user@example.com>') == \
-            '20260323041505.2088-1-user@example.com'
+        assert (
+            b4.parse_msgid('<20260323041505.2088-1-user@example.com>')
+            == '20260323041505.2088-1-user@example.com'
+        )
 
     def test_lore_url(self) -> None:
         result = b4.parse_msgid(
-            'https://lore.kernel.org/all/20260323041505.2088-1-user@example.com/')
+            'https://lore.kernel.org/all/20260323041505.2088-1-user@example.com/'
+        )
         assert result == '20260323041505.2088-1-user@example.com'
 
     def test_lore_url_with_r_shorthand(self) -> None:
         result = b4.parse_msgid(
-            'https://lore.kernel.org/r/20260323041505.2088-1-user@example.com')
+            'https://lore.kernel.org/r/20260323041505.2088-1-user@example.com'
+        )
         assert result == '20260323041505.2088-1-user@example.com'
 
     def test_lore_url_percent_encoded(self) -> None:
-        result = b4.parse_msgid(
-            'https://lore.kernel.org/all/abc%2Bdef@example.com/')
+        result = b4.parse_msgid('https://lore.kernel.org/all/abc%2Bdef@example.com/')
         assert result == 'abc+def@example.com'
 
     def test_patchwork_url(self) -> None:
         result = b4.parse_msgid(
-            'https://patchwork.kernel.org/project/linux-mm/patch/20260323041505.2088-1-user@example.com/')
+            'https://patchwork.kernel.org/project/linux-mm/patch/20260323041505.2088-1-user@example.com/'
+        )
         assert result == '20260323041505.2088-1-user@example.com'
 
     def test_id_prefix(self) -> None:
-        assert b4.parse_msgid('id:20260323041505.2088-1-user@example.com') == \
-            '20260323041505.2088-1-user@example.com'
+        assert (
+            b4.parse_msgid('id:20260323041505.2088-1-user@example.com')
+            == '20260323041505.2088-1-user@example.com'
+        )
 
     def test_rfc822msgid_prefix(self) -> None:
-        assert b4.parse_msgid('rfc822msgid:20260323041505.2088-1-user@example.com') == \
-            '20260323041505.2088-1-user@example.com'
+        assert (
+            b4.parse_msgid('rfc822msgid:20260323041505.2088-1-user@example.com')
+            == '20260323041505.2088-1-user@example.com'
+        )
 
     def test_whitespace_stripped(self) -> None:
         assert b4.parse_msgid('  <foo@bar.com>  ') == 'foo@bar.com'
@@ -285,9 +299,9 @@ class TestRethreadMessages:
     def test_child_messages_untouched(self) -> None:
         cover = _make_msg('cover@x', '[PATCH 0/2] Cover')
         p1 = _make_msg('p1@x', '[PATCH 1/2] First')
-        reply = _make_msg('reply@x', 'Re: [PATCH 1/2] First',
-                          in_reply_to='p1@x',
-                          references='<p1@x>')
+        reply = _make_msg(
+            'reply@x', 'Re: [PATCH 1/2] First', in_reply_to='p1@x', references='<p1@x>'
+        )
         all_msgs = [cover, p1, reply]
 
         b4.LoreSeries.rethread_messages(all_msgs, 'cover@x', {'p1@x'})
@@ -298,9 +312,12 @@ class TestRethreadMessages:
 
     def test_strips_old_threading_from_cover(self) -> None:
         """If the cover had pre-existing threading, it should be stripped."""
-        cover = _make_msg('cover@x', '[PATCH 0/2] Cover',
-                          in_reply_to='old-parent@x',
-                          references='<old-parent@x>')
+        cover = _make_msg(
+            'cover@x',
+            '[PATCH 0/2] Cover',
+            in_reply_to='old-parent@x',
+            references='<old-parent@x>',
+        )
         p1 = _make_msg('p1@x', '[PATCH 1/2] First')
         all_msgs = [cover, p1]
 
@@ -312,9 +329,12 @@ class TestRethreadMessages:
     def test_replaces_old_threading_on_patches(self) -> None:
         """Patches should lose their old threading and get cover as parent."""
         cover = _make_msg('cover@x', '[PATCH 0/1] Cover')
-        p1 = _make_msg('p1@x', '[PATCH 1/1] Fix',
-                        in_reply_to='unrelated@x',
-                        references='<unrelated@x> <other@x>')
+        p1 = _make_msg(
+            'p1@x',
+            '[PATCH 1/1] Fix',
+            in_reply_to='unrelated@x',
+            references='<unrelated@x> <other@x>',
+        )
         all_msgs = [cover, p1]
 
         b4.LoreSeries.rethread_messages(all_msgs, 'cover@x', {'p1@x'})
@@ -344,9 +364,9 @@ class TestRethreadMessages:
 # ===========================================================================
 class TestRethreadIntegration:
     @staticmethod
-    def _run_pipeline(msgids: List[str],
-                      all_msgs: List[email.message.EmailMessage]
-                      ) -> Tuple[str, b4.LoreSeries]:
+    def _run_pipeline(
+        msgids: List[str], all_msgs: List[email.message.EmailMessage]
+    ) -> Tuple[str, b4.LoreSeries]:
         """Run the rethread pipeline and feed into LoreMailbox."""
         cover_msgid, all_msgs = b4.LoreSeries.rethread_series(msgids, all_msgs)
 
@@ -359,12 +379,21 @@ class TestRethreadIntegration:
 
     def test_numbered_patches_with_cover(self) -> None:
         """Properly numbered patches with a cover letter should produce a complete series."""
-        cover = _make_msg('cover@x', '[PATCH 0/2] Widget overhaul',
-                          body='This series overhauls widgets.\n')
-        p1 = _make_msg('p1@x', '[PATCH 1/2] Refactor widget core',
-                        body='---\n widget.c | 10 +\n 1 file changed\n\ndiff --git a/widget.c b/widget.c\n--- a/widget.c\n+++ b/widget.c\n@@ -1 +1 @@\n-old\n+new\n')
-        p2 = _make_msg('p2@x', '[PATCH 2/2] Add widget tests',
-                        body='---\n test.c | 5 +\n 1 file changed\n\ndiff --git a/test.c b/test.c\n--- a/test.c\n+++ b/test.c\n@@ -1 +1 @@\n-old\n+new\n')
+        cover = _make_msg(
+            'cover@x',
+            '[PATCH 0/2] Widget overhaul',
+            body='This series overhauls widgets.\n',
+        )
+        p1 = _make_msg(
+            'p1@x',
+            '[PATCH 1/2] Refactor widget core',
+            body='---\n widget.c | 10 +\n 1 file changed\n\ndiff --git a/widget.c b/widget.c\n--- a/widget.c\n+++ b/widget.c\n@@ -1 +1 @@\n-old\n+new\n',
+        )
+        p2 = _make_msg(
+            'p2@x',
+            '[PATCH 2/2] Add widget tests',
+            body='---\n test.c | 5 +\n 1 file changed\n\ndiff --git a/test.c b/test.c\n--- a/test.c\n+++ b/test.c\n@@ -1 +1 @@\n-old\n+new\n',
+        )
         all_msgs = [cover, p1, p2]
         msgids = ['cover@x', 'p1@x', 'p2@x']
 
@@ -376,10 +405,16 @@ class TestRethreadIntegration:
 
     def test_unnumbered_patches_no_cover(self) -> None:
         """Unnumbered patches without a cover should get renumbered and threaded under patch 1."""
-        p1 = _make_msg('p1@x', '[PATCH] Add alpha feature',
-                        body='---\n a.c | 1 +\n 1 file changed\n\ndiff --git a/a.c b/a.c\n--- a/a.c\n+++ b/a.c\n@@ -1 +1 @@\n-old\n+new\n')
-        p2 = _make_msg('p2@x', '[PATCH] Add beta feature',
-                        body='---\n b.c | 1 +\n 1 file changed\n\ndiff --git a/b.c b/b.c\n--- a/b.c\n+++ b/b.c\n@@ -1 +1 @@\n-old\n+new\n')
+        p1 = _make_msg(
+            'p1@x',
+            '[PATCH] Add alpha feature',
+            body='---\n a.c | 1 +\n 1 file changed\n\ndiff --git a/a.c b/a.c\n--- a/a.c\n+++ b/a.c\n@@ -1 +1 @@\n-old\n+new\n',
+        )
+        p2 = _make_msg(
+            'p2@x',
+            '[PATCH] Add beta feature',
+            body='---\n b.c | 1 +\n 1 file changed\n\ndiff --git a/b.c b/b.c\n--- a/b.c\n+++ b/b.c\n@@ -1 +1 @@\n-old\n+new\n',
+        )
         all_msgs = [p1, p2]
         msgids = ['p1@x', 'p2@x']
 
@@ -391,15 +426,24 @@ class TestRethreadIntegration:
 
     def test_followup_trailers_preserved(self) -> None:
         """Review replies should be associated with the correct patch after rethreading."""
-        p1 = _make_msg('p1@x', '[PATCH 1/2] First patch',
-                        body='---\n a.c | 1 +\n 1 file changed\n\ndiff --git a/a.c b/a.c\n--- a/a.c\n+++ b/a.c\n@@ -1 +1 @@\n-old\n+new\n')
-        review = _make_msg('rev@x', 'Re: [PATCH 1/2] First patch',
-                           in_reply_to='p1@x',
-                           references='<p1@x>',
-                           from_addr='Reviewer <rev@example.com>',
-                           body='Looks good.\n\nReviewed-by: Reviewer <rev@example.com>\n')
-        p2 = _make_msg('p2@x', '[PATCH 2/2] Second patch',
-                        body='---\n b.c | 1 +\n 1 file changed\n\ndiff --git a/b.c b/b.c\n--- a/b.c\n+++ b/b.c\n@@ -1 +1 @@\n-old\n+new\n')
+        p1 = _make_msg(
+            'p1@x',
+            '[PATCH 1/2] First patch',
+            body='---\n a.c | 1 +\n 1 file changed\n\ndiff --git a/a.c b/a.c\n--- a/a.c\n+++ b/a.c\n@@ -1 +1 @@\n-old\n+new\n',
+        )
+        review = _make_msg(
+            'rev@x',
+            'Re: [PATCH 1/2] First patch',
+            in_reply_to='p1@x',
+            references='<p1@x>',
+            from_addr='Reviewer <rev@example.com>',
+            body='Looks good.\n\nReviewed-by: Reviewer <rev@example.com>\n',
+        )
+        p2 = _make_msg(
+            'p2@x',
+            '[PATCH 2/2] Second patch',
+            body='---\n b.c | 1 +\n 1 file changed\n\ndiff --git a/b.c b/b.c\n--- a/b.c\n+++ b/b.c\n@@ -1 +1 @@\n-old\n+new\n',
+        )
         all_msgs = [p1, review, p2]
         msgids = ['p1@x', 'p2@x']
 
@@ -414,10 +458,16 @@ class TestRethreadIntegration:
 
     def test_wrong_expected_fixed(self) -> None:
         """Patches claiming 1/1 each should have expected fixed to match actual count."""
-        p1 = _make_msg('p1@x', '[PATCH 1/1] First',
-                        body='---\n a.c | 1 +\n 1 file changed\n\ndiff --git a/a.c b/a.c\n--- a/a.c\n+++ b/a.c\n@@ -1 +1 @@\n-old\n+new\n')
-        p2 = _make_msg('p2@x', '[PATCH 1/1] Second',
-                        body='---\n b.c | 1 +\n 1 file changed\n\ndiff --git a/b.c b/b.c\n--- a/b.c\n+++ b/b.c\n@@ -1 +1 @@\n-old\n+new\n')
+        p1 = _make_msg(
+            'p1@x',
+            '[PATCH 1/1] First',
+            body='---\n a.c | 1 +\n 1 file changed\n\ndiff --git a/a.c b/a.c\n--- a/a.c\n+++ b/a.c\n@@ -1 +1 @@\n-old\n+new\n',
+        )
+        p2 = _make_msg(
+            'p2@x',
+            '[PATCH 1/1] Second',
+            body='---\n b.c | 1 +\n 1 file changed\n\ndiff --git a/b.c b/b.c\n--- a/b.c\n+++ b/b.c\n@@ -1 +1 @@\n-old\n+new\n',
+        )
         all_msgs = [p1, p2]
         msgids = ['p1@x', 'p2@x']
 
@@ -435,13 +485,17 @@ class TestRethreadIntegration:
 # ===========================================================================
 class TestDiscoverRethreadSeries:
     @staticmethod
-    def _mock_discover(seed_msg: email.message.EmailMessage,
-                       search_results: List[email.message.EmailMessage]) -> List[str]:
+    def _mock_discover(
+        seed_msg: email.message.EmailMessage,
+        search_results: List[email.message.EmailMessage],
+    ) -> List[str]:
         """Run discover_rethread_series with mocked network calls."""
         seed_msgid = b4.LoreMessage.get_clean_msgid(seed_msg)
         assert seed_msgid is not None
-        with mock.patch('b4.get_pi_thread_by_msgid', return_value=[seed_msg]), \
-             mock.patch('b4.get_pi_search_results', return_value=search_results):
+        with (
+            mock.patch('b4.get_pi_thread_by_msgid', return_value=[seed_msg]),
+            mock.patch('b4.get_pi_search_results', return_value=search_results),
+        ):
             return b4.discover_rethread_series(seed_msgid)
 
     def test_discovers_numbered_series(self) -> None:
@@ -507,7 +561,9 @@ class TestDiscoverRethreadSeries:
         seed = _make_msg('p1@x', '[PATCH 1/2] Fix')
         seed_msgid = b4.LoreMessage.get_clean_msgid(seed)
         assert seed_msgid is not None
-        with mock.patch('b4.get_pi_thread_by_msgid', return_value=[seed]), \
-             mock.patch('b4.get_pi_search_results', return_value=None):
+        with (
+            mock.patch('b4.get_pi_thread_by_msgid', return_value=[seed]),
+            mock.patch('b4.get_pi_search_results', return_value=None),
+        ):
             result = b4.discover_rethread_series(seed_msgid)
         assert result == ['p1@x']
diff --git a/src/tests/test_review.py b/src/tests/test_review.py
index 92ebe57..174c1b5 100644
--- a/src/tests/test_review.py
+++ b/src/tests/test_review.py
@@ -46,16 +46,18 @@ index 3333333..4444444 100644
 """
 
 
-
 class TestRenderQuotedDiffWithComments:
     """Tests for _render_quoted_diff_with_comments()."""
 
     def test_no_comments_quotes_diff(self) -> None:
         """Without comments, every diff line gets a '> ' prefix."""
         result = review._render_quoted_diff_with_comments(
-            SIMPLE_DIFF, {}, 'me@example.com')
+            SIMPLE_DIFF, {}, 'me@example.com'
+        )
         for line in result.splitlines():
-            assert line.startswith(('> ', '#')) or line == '', f'Unquoted line: {line!r}'
+            assert line.startswith(('> ', '#')) or line == '', (
+                f'Unquoted line: {line!r}'
+            )
 
     def test_own_comment_is_unquoted(self) -> None:
         """Own comments appear as unquoted text between quoted diff."""
@@ -63,12 +65,17 @@ class TestRenderQuotedDiffWithComments:
             'me@example.com': {
                 'name': 'Me',
                 'comments': [
-                    {'path': 'b/lib/helpers.c', 'line': 12, 'text': 'Check NULL return'},
+                    {
+                        'path': 'b/lib/helpers.c',
+                        'line': 12,
+                        'text': 'Check NULL return',
+                    },
                 ],
             },
         }
         result = review._render_quoted_diff_with_comments(
-            SIMPLE_DIFF, all_reviews, 'me@example.com')
+            SIMPLE_DIFF, all_reviews, 'me@example.com'
+        )
         assert 'Check NULL return' in result
         # Comment should NOT be quoted
         for line in result.splitlines():
@@ -87,7 +94,8 @@ class TestRenderQuotedDiffWithComments:
             },
         }
         result = review._render_quoted_diff_with_comments(
-            SIMPLE_DIFF, all_reviews, 'me@example.com')
+            SIMPLE_DIFF, all_reviews, 'me@example.com'
+        )
         assert '| Looks wrong.' in result
         assert '| Other <other@example.com>:' in result
 
@@ -109,7 +117,8 @@ class TestRenderQuotedDiffWithComments:
             },
         }
         result = review._render_quoted_diff_with_comments(
-            SIMPLE_DIFF, all_reviews, 'me@example.com')
+            SIMPLE_DIFF, all_reviews, 'me@example.com'
+        )
         assert 'My comment' in result
         assert '| Ext comment' in result
 
@@ -125,15 +134,16 @@ class TestRenderQuotedDiffWithComments:
             },
         }
         result = review._render_quoted_diff_with_comments(
-            TWO_FILE_DIFF, all_reviews, 'me@example.com')
+            TWO_FILE_DIFF, all_reviews, 'me@example.com'
+        )
         assert 'Comment in a.c' in result
         assert 'Comment in b.c' in result
 
-
     def test_editor_instructions_at_top(self) -> None:
         """Rendered output starts with # instruction lines."""
         result = review._render_quoted_diff_with_comments(
-            SIMPLE_DIFF, {}, 'me@example.com')
+            SIMPLE_DIFF, {}, 'me@example.com'
+        )
         lines = result.splitlines()
         # First non-empty line should be an instruction
         assert lines[0].startswith('# ')
@@ -147,8 +157,11 @@ class TestRenderQuotedDiffWithComments:
     def test_commit_msg_quoted_before_diff(self) -> None:
         """Commit message body is quoted before the diff when provided."""
         result = review._render_quoted_diff_with_comments(
-            SIMPLE_DIFF, {}, 'me@example.com',
-            commit_msg='Subject line\n\nThis is the body.\nSecond line.')
+            SIMPLE_DIFF,
+            {},
+            'me@example.com',
+            commit_msg='Subject line\n\nThis is the body.\nSecond line.',
+        )
         lines = result.splitlines()
         # Body lines should appear quoted before the diff
         assert '> This is the body.' in lines
@@ -169,8 +182,11 @@ class TestRenderQuotedDiffWithComments:
             },
         }
         result = review._render_quoted_diff_with_comments(
-            SIMPLE_DIFF, all_reviews, 'me@example.com',
-            commit_msg='Subject\n\nFirst body line.')
+            SIMPLE_DIFF,
+            all_reviews,
+            'me@example.com',
+            commit_msg='Subject\n\nFirst body line.',
+        )
         assert 'Body comment' in result
         for line in result.splitlines():
             if 'Body comment' in line:
@@ -189,8 +205,11 @@ class TestRenderQuotedDiffWithComments:
             },
         }
         result = review._render_quoted_diff_with_comments(
-            SIMPLE_DIFF, all_reviews, 'me@example.com',
-            commit_msg='Subject\n\nFirst body line.')
+            SIMPLE_DIFF,
+            all_reviews,
+            'me@example.com',
+            commit_msg='Subject\n\nFirst body line.',
+        )
         assert '| Ext msg comment' in result
         assert '| Other <other@example.com>:' in result
         assert '| via: https://lore.kernel.org/test' in result
@@ -206,8 +225,11 @@ class TestRenderQuotedDiffWithComments:
             },
         }
         result = review._render_quoted_diff_with_comments(
-            SIMPLE_DIFF, all_reviews, 'me@example.com',
-            commit_msg='Subject\n\nFirst body line.')
+            SIMPLE_DIFF,
+            all_reviews,
+            'me@example.com',
+            commit_msg='Subject\n\nFirst body line.',
+        )
         lines = result.splitlines()
         assert 'General note' in lines
         note_idx = lines.index('General note')
@@ -368,7 +390,8 @@ class TestQuotedEditorRoundTrip:
             'me@example.com': {'name': 'Me', 'comments': comments},
         }
         rendered = review._render_quoted_diff_with_comments(
-            SIMPLE_DIFF, all_reviews, 'me@example.com')
+            SIMPLE_DIFF, all_reviews, 'me@example.com'
+        )
         extracted = review._extract_editor_comments(rendered)
         assert len(extracted) == 1
         assert extracted[0]['path'] == 'b/lib/helpers.c'
@@ -386,7 +409,8 @@ class TestQuotedEditorRoundTrip:
             },
         }
         rendered = review._render_quoted_diff_with_comments(
-            TWO_FILE_DIFF, all_reviews, 'me@example.com')
+            TWO_FILE_DIFF, all_reviews, 'me@example.com'
+        )
         extracted = review._extract_editor_comments(rendered)
         assert len(extracted) == 2
         assert extracted[0]['path'] == 'b/src/a.c'
@@ -401,14 +425,16 @@ class TestQuotedEditorRoundTrip:
             'me@example.com': {'name': 'Me', 'comments': comments},
         }
         rendered1 = review._render_quoted_diff_with_comments(
-            SIMPLE_DIFF, all_reviews, 'me@example.com')
+            SIMPLE_DIFF, all_reviews, 'me@example.com'
+        )
         extracted1 = review._extract_editor_comments(rendered1)
 
         all_reviews2: Dict[str, Any] = {
             'me@example.com': {'name': 'Me', 'comments': extracted1},
         }
         rendered2 = review._render_quoted_diff_with_comments(
-            SIMPLE_DIFF, all_reviews2, 'me@example.com')
+            SIMPLE_DIFF, all_reviews2, 'me@example.com'
+        )
         extracted2 = review._extract_editor_comments(rendered2)
 
         assert len(extracted1) == len(extracted2)
@@ -434,7 +460,8 @@ class TestQuotedEditorRoundTrip:
             },
         }
         rendered = review._render_quoted_diff_with_comments(
-            SIMPLE_DIFF, all_reviews, 'me@example.com')
+            SIMPLE_DIFF, all_reviews, 'me@example.com'
+        )
         extracted = review._extract_editor_comments(rendered)
         assert len(extracted) == 1
         assert extracted[0]['text'] == 'My note'
@@ -446,8 +473,11 @@ class TestQuotedEditorRoundTrip:
             'me@example.com': {'name': 'Me', 'comments': comments},
         }
         rendered = review._render_quoted_diff_with_comments(
-            SIMPLE_DIFF, all_reviews, 'me@example.com',
-            commit_msg='Subject\n\nFirst body line.\nSecond line.')
+            SIMPLE_DIFF,
+            all_reviews,
+            'me@example.com',
+            commit_msg='Subject\n\nFirst body line.\nSecond line.',
+        )
         extracted = review._extract_editor_comments(rendered)
         msg_comments = [c for c in extracted if c['path'] == ':message']
         assert len(msg_comments) == 1
@@ -461,8 +491,11 @@ class TestQuotedEditorRoundTrip:
             'me@example.com': {'name': 'Me', 'comments': comments},
         }
         rendered = review._render_quoted_diff_with_comments(
-            SIMPLE_DIFF, all_reviews, 'me@example.com',
-            commit_msg='Subject\n\nBody line.')
+            SIMPLE_DIFF,
+            all_reviews,
+            'me@example.com',
+            commit_msg='Subject\n\nBody line.',
+        )
         extracted = review._extract_editor_comments(rendered)
         preamble = [c for c in extracted if c['path'] == ':message' and c['line'] == 0]
         assert len(preamble) == 1
@@ -478,8 +511,11 @@ class TestQuotedEditorRoundTrip:
             'me@example.com': {'name': 'Me', 'comments': comments},
         }
         rendered = review._render_quoted_diff_with_comments(
-            SIMPLE_DIFF, all_reviews, 'me@example.com',
-            commit_msg='Subject\n\nFirst body line.')
+            SIMPLE_DIFF,
+            all_reviews,
+            'me@example.com',
+            commit_msg='Subject\n\nFirst body line.',
+        )
         extracted = review._extract_editor_comments(rendered)
         msg_c = [c for c in extracted if c['path'] == ':message']
         diff_c = [c for c in extracted if c['path'] != ':message']
@@ -495,11 +531,9 @@ class TestBuildReplyFromComments:
     def test_trailing_hunk_lines_truncated(self) -> None:
         """Diff lines after the last comment in a hunk are omitted."""
         comments = [
-            {'path': 'b/lib/helpers.c', 'line': 12,
-             'text': 'Check return value.'},
+            {'path': 'b/lib/helpers.c', 'line': 12, 'text': 'Check return value.'},
         ]
-        result = review._build_reply_from_comments(
-            SIMPLE_DIFF, comments, [])
+        result = review._build_reply_from_comments(SIMPLE_DIFF, comments, [])
         # The comment should be present
         assert 'Check return value.' in result
         # The +kzalloc line (line 12) should be quoted
@@ -513,11 +547,9 @@ class TestBuildReplyFromComments:
     def test_lines_before_comment_preserved(self) -> None:
         """Diff lines before the comment are preserved as quoted context."""
         comments = [
-            {'path': 'b/lib/helpers.c', 'line': 13,
-             'text': 'Check field assignment.'},
+            {'path': 'b/lib/helpers.c', 'line': 13, 'text': 'Check field assignment.'},
         ]
-        result = review._build_reply_from_comments(
-            SIMPLE_DIFF, comments, [])
+        result = review._build_reply_from_comments(SIMPLE_DIFF, comments, [])
         # The kzalloc line (line 12) precedes the comment target
         assert 'kzalloc' in result
         # The ptr->field line (line 13) is the commented line
@@ -530,8 +562,7 @@ class TestBuildReplyFromComments:
             {'path': 'b/lib/helpers.c', 'line': 12, 'text': 'First.'},
             {'path': 'b/lib/helpers.c', 'line': 13, 'text': 'Second.'},
         ]
-        result = review._build_reply_from_comments(
-            SIMPLE_DIFF, comments, [])
+        result = review._build_reply_from_comments(SIMPLE_DIFF, comments, [])
         assert 'First.' in result
         assert 'Second.' in result
         assert 'kzalloc' in result
@@ -565,7 +596,8 @@ index abc..def 100644
             {'path': ':message', 'line': 3, 'text': 'Comment on line three.'},
         ]
         result = review._build_reply_from_comments(
-            SIMPLE_DIFF, comments, [], commit_msg=commit_msg)
+            SIMPLE_DIFF, comments, [], commit_msg=commit_msg
+        )
         assert 'Comment on line three.' in result
         assert '> Line three.' in result
 
@@ -576,7 +608,8 @@ index abc..def 100644
             {'path': ':message', 'line': 0, 'text': 'General feedback.'},
         ]
         result = review._build_reply_from_comments(
-            '', comments, [], commit_msg=commit_msg)
+            '', comments, [], commit_msg=commit_msg
+        )
         lines = result.splitlines()
         assert 'General feedback.' in lines
         # Preamble should come before any quoted line
@@ -592,7 +625,8 @@ index abc..def 100644
             {'path': ':message', 'line': 1, 'text': 'A comment.'},
         ]
         result = review._build_reply_from_comments(
-            '', comments, [], commit_msg=commit_msg)
+            '', comments, [], commit_msg=commit_msg
+        )
         # Should not end with a bare >
         stripped = result.rstrip()
         assert not stripped.endswith('\n>')
@@ -607,7 +641,8 @@ index abc..def 100644
             {'path': ':message', 'line': 25, 'text': 'Comment here.'},
         ]
         result = review._build_reply_from_comments(
-            '', comments, [], commit_msg=commit_msg)
+            '', comments, [], commit_msg=commit_msg
+        )
         # Line 25 and a few lines of context above should be quoted
         assert '> Line 25' in result
         assert 'Comment here.' in result
@@ -623,45 +658,47 @@ index abc..def 100644
             {'path': ':message', 'line': 4, 'text': 'General comment.'},
         ]
         result = review._build_reply_from_comments(
-            SIMPLE_DIFF, comments, [], commit_msg=commit_msg)
+            SIMPLE_DIFF, comments, [], commit_msg=commit_msg
+        )
         assert 'General comment.' in result
 
     def test_comment_above_diff_git_roundtrips(self) -> None:
         """Comment above first diff --git line survives parse and render."""
         commit_msg = 'Subject\n\nBody.\n\nSigned-off-by: A <a@b.c>'
         diff = (
-            "diff --git a/f.c b/f.c\n"
-            "--- a/f.c\n"
-            "+++ b/f.c\n"
-            "@@ -1,3 +1,4 @@\n"
-            " ctx\n"
-            "+new\n"
-            " more\n"
+            'diff --git a/f.c b/f.c\n'
+            '--- a/f.c\n'
+            '+++ b/f.c\n'
+            '@@ -1,3 +1,4 @@\n'
+            ' ctx\n'
+            '+new\n'
+            ' more\n'
         )
         # Simulate what the editor would produce: quoted commit message,
         # separator, user comment, then quoted diff
         edited = (
-            "> Body.\n"
-            ">\n"
-            "> Signed-off-by: A <a@b.c>\n"
-            ">\n"
-            "\n"
-            "My general comment.\n"
-            "\n"
-            "> diff --git a/f.c b/f.c\n"
-            "> --- a/f.c\n"
-            "> +++ b/f.c\n"
-            "> @@ -1,3 +1,4 @@\n"
-            ">  ctx\n"
-            "> +new\n"
-            ">  more\n"
+            '> Body.\n'
+            '>\n'
+            '> Signed-off-by: A <a@b.c>\n'
+            '>\n'
+            '\n'
+            'My general comment.\n'
+            '\n'
+            '> diff --git a/f.c b/f.c\n'
+            '> --- a/f.c\n'
+            '> +++ b/f.c\n'
+            '> @@ -1,3 +1,4 @@\n'
+            '>  ctx\n'
+            '> +new\n'
+            '>  more\n'
         )
         comments = review._extract_editor_comments(edited, diff_text=diff)
         assert len(comments) == 1
         assert comments[0]['text'] == 'My general comment.'
         # Now rebuild the reply from those comments
         result = review._build_reply_from_comments(
-            diff, comments, [], commit_msg=commit_msg)
+            diff, comments, [], commit_msg=commit_msg
+        )
         assert 'My general comment.' in result
 
 
@@ -809,46 +846,60 @@ class TestBuildReviewEmailBcc:
         return {'trailers': ['Reviewed-by: Test <test@example.com>']}
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_bcc_set_when_present(self, _mock_cfg: mock.Mock,
-                                  _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_bcc_set_when_present(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         series = self._make_series(bcc='secret@example.com')
         msg = review._build_review_email(
-            series, None, self._make_review(), 'cover', '', None)
+            series, None, self._make_review(), 'cover', '', None
+        )
         assert msg is not None
         assert msg['Bcc'] == 'secret@example.com'
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_no_bcc_when_absent(self, _mock_cfg: mock.Mock,
-                                _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_no_bcc_when_absent(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         series = self._make_series()
         msg = review._build_review_email(
-            series, None, self._make_review(), 'cover', '', None)
+            series, None, self._make_review(), 'cover', '', None
+        )
         assert msg is not None
         assert msg['Bcc'] is None
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_no_bcc_when_empty(self, _mock_cfg: mock.Mock,
-                               _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_no_bcc_when_empty(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         series = self._make_series(bcc='')
         msg = review._build_review_email(
-            series, None, self._make_review(), 'cover', '', None)
+            series, None, self._make_review(), 'cover', '', None
+        )
         assert msg is not None
         assert msg['Bcc'] is None
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_cc_still_works(self, _mock_cfg: mock.Mock,
-                            _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_cc_still_works(self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock) -> None:
         series = self._make_series(cc='other@example.com')
         msg = review._build_review_email(
-            series, None, self._make_review(), 'cover', '', None)
+            series, None, self._make_review(), 'cover', '', None
+        )
         assert msg is not None
         assert 'other@example.com' in msg['Cc']
         assert 'maintainer@example.com' in msg['Cc']
@@ -856,6 +907,7 @@ class TestBuildReviewEmailBcc:
 
 # -- Tests for make_review_magic_json() --------------------------------------
 
+
 class TestMakeReviewMagicJson:
     """Tests for make_review_magic_json()."""
 
@@ -879,6 +931,7 @@ class TestMakeReviewMagicJson:
 
 # -- Tests for _get_my_review() ----------------------------------------------
 
+
 class TestGetMyReview:
     """Tests for _get_my_review()."""
 
@@ -912,12 +965,16 @@ class TestGetMyReview:
 
 # -- Tests for _ensure_my_review() -------------------------------------------
 
+
 class TestEnsureMyReview:
     """Tests for _ensure_my_review()."""
 
     def test_creates_entry_when_empty(self) -> None:
         target: Dict[str, Any] = {}
-        usercfg: Dict[str, Union[str, List[str], None]] = {'email': 'user@example.com', 'name': 'User'}
+        usercfg: Dict[str, Union[str, List[str], None]] = {
+            'email': 'user@example.com',
+            'name': 'User',
+        }
         entry = review._ensure_my_review(target, usercfg)
         assert entry['name'] == 'User'
         assert target['reviews']['user@example.com'] is entry
@@ -925,7 +982,10 @@ class TestEnsureMyReview:
     def test_returns_existing_and_updates_name(self) -> None:
         existing = {'name': 'Old Name', 'trailers': ['Reviewed-by: Old']}
         target = {'reviews': {'user@example.com': existing}}
-        usercfg: Dict[str, Union[str, List[str], None]] = {'email': 'user@example.com', 'name': 'New Name'}
+        usercfg: Dict[str, Union[str, List[str], None]] = {
+            'email': 'user@example.com',
+            'name': 'New Name',
+        }
         entry = review._ensure_my_review(target, usercfg)
         assert entry is existing
         assert entry['name'] == 'New Name'
@@ -940,6 +1000,7 @@ class TestEnsureMyReview:
 
 # -- Tests for _cleanup_review() ---------------------------------------------
 
+
 class TestCleanupReview:
     """Tests for _cleanup_review()."""
 
@@ -989,6 +1050,7 @@ class TestCleanupReview:
 
 # -- Tests for _clear_other_comments() ---------------------------------------
 
+
 class TestClearOtherComments:
     """Tests for _clear_other_comments()."""
 
@@ -1047,6 +1109,7 @@ class TestClearOtherComments:
 
 # -- Tests for _ensure_trailers_in_body() ------------------------------------
 
+
 class TestEnsureTrailersInBody:
     """Tests for _ensure_trailers_in_body()."""
 
@@ -1085,6 +1148,7 @@ class TestEnsureTrailersInBody:
 
 # -- Tests for _build_review_email() (expanded) ------------------------------
 
+
 class TestBuildReviewEmailHeaders:
     """Expanded tests for _build_review_email() header and body construction."""
 
@@ -1112,139 +1176,191 @@ class TestBuildReviewEmailHeaders:
         return base
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_returns_none_when_empty_review(self, _mock_cfg: mock.Mock,
-                                            _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_returns_none_when_empty_review(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         msg = review._build_review_email(
-            self._make_series(), None, {'trailers': [], 'reply': '', 'comments': []},
-            'cover', '', None)
+            self._make_series(),
+            None,
+            {'trailers': [], 'reply': '', 'comments': []},
+            'cover',
+            '',
+            None,
+        )
         assert msg is None
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_returns_none_when_no_msgid(self, _mock_cfg: mock.Mock,
-                                        _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_returns_none_when_no_msgid(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         series = self._make_series()
         series['header-info']['msgid'] = ''
         msg = review._build_review_email(
-            series, None, self._make_review(), 'cover', '', None)
+            series, None, self._make_review(), 'cover', '', None
+        )
         assert msg is None
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_subject_gets_re_prefix(self, _mock_cfg: mock.Mock,
-                                    _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_subject_gets_re_prefix(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         msg = review._build_review_email(
-            self._make_series(), None, self._make_review(), 'cover', '', None)
+            self._make_series(), None, self._make_review(), 'cover', '', None
+        )
         assert msg is not None
         assert msg['Subject'] == 'Re: Test patch'
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_re_prefix_not_doubled(self, _mock_cfg: mock.Mock,
-                                   _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_re_prefix_not_doubled(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         series = self._make_series()
         series['subject'] = 'Re: Already prefixed'
         msg = review._build_review_email(
-            series, None, self._make_review(), 'cover', '', None)
+            series, None, self._make_review(), 'cover', '', None
+        )
         assert msg is not None
         assert msg['Subject'] == 'Re: Already prefixed'
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_reply_to_used_as_to(self, _mock_cfg: mock.Mock,
-                                 _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_reply_to_used_as_to(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         series = self._make_series(**{'reply-to': 'list@lists.example.com'})
         msg = review._build_review_email(
-            series, None, self._make_review(), 'cover', '', None)
+            series, None, self._make_review(), 'cover', '', None
+        )
         assert msg is not None
         assert 'list@lists.example.com' in msg['To']
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_from_is_series_author_when_no_reply_to(self, _mock_cfg: mock.Mock,
-                                                    _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_from_is_series_author_when_no_reply_to(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         msg = review._build_review_email(
-            self._make_series(), None, self._make_review(), 'cover', '', None)
+            self._make_series(), None, self._make_review(), 'cover', '', None
+        )
         assert msg is not None
         assert 'author@example.com' in msg['To']
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_references_without_existing(self, _mock_cfg: mock.Mock,
-                                         _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_references_without_existing(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         msg = review._build_review_email(
-            self._make_series(), None, self._make_review(), 'cover', '', None)
+            self._make_series(), None, self._make_review(), 'cover', '', None
+        )
         assert msg is not None
         assert msg['References'] == '<test-msgid@example.com>'
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_references_with_existing(self, _mock_cfg: mock.Mock,
-                                      _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_references_with_existing(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         series = self._make_series(references='<prev@example.com>')
         msg = review._build_review_email(
-            series, None, self._make_review(), 'cover', '', None)
+            series, None, self._make_review(), 'cover', '', None
+        )
         assert msg is not None
         assert '<prev@example.com>' in msg['References']
         assert '<test-msgid@example.com>' in msg['References']
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_body_contains_trailers(self, _mock_cfg: mock.Mock,
-                                    _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_body_contains_trailers(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         msg = review._build_review_email(
-            self._make_series(), None, self._make_review(), 'cover text', '', None)
+            self._make_series(), None, self._make_review(), 'cover text', '', None
+        )
         assert msg is not None
         payload = msg.get_payload(decode=True)
         assert isinstance(payload, bytes)
         assert 'Reviewed-by: Test <test@example.com>' in payload.decode()
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_explicit_reply_text_used(self, _mock_cfg: mock.Mock,
-                                      _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_explicit_reply_text_used(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         rev = self._make_review(reply='This is my explicit reply.')
         msg = review._build_review_email(
-            self._make_series(), None, rev, 'cover', '', None)
+            self._make_series(), None, rev, 'cover', '', None
+        )
         assert msg is not None
         payload = msg.get_payload(decode=True)
         assert isinstance(payload, bytes)
         assert 'This is my explicit reply.' in payload.decode()
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_in_reply_to_set(self, _mock_cfg: mock.Mock,
-                             _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_in_reply_to_set(self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock) -> None:
         msg = review._build_review_email(
-            self._make_series(), None, self._make_review(), 'cover', '', None)
+            self._make_series(), None, self._make_review(), 'cover', '', None
+        )
         assert msg is not None
         assert msg['In-Reply-To'] == '<test-msgid@example.com>'
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_from_header_is_reviewer(self, _mock_cfg: mock.Mock,
-                                     _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_from_header_is_reviewer(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         msg = review._build_review_email(
-            self._make_series(), None, self._make_review(), 'cover', '', None)
+            self._make_series(), None, self._make_review(), 'cover', '', None
+        )
         assert msg is not None
         assert 'reviewer@example.com' in msg['From']
         assert 'Reviewer' in msg['From']
 
+
 # -- Tests for _build_review_email() user-edited To/Cc -----------------------
 
+
 class TestBuildReviewEmailToCcEdited:
     """Tests for user-edited To/Cc handling in _build_review_email()."""
 
@@ -1270,73 +1386,94 @@ class TestBuildReviewEmailToCcEdited:
         return {'trailers': ['Reviewed-by: Test <test@example.com>']}
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_default_to_is_author(self, _mock_cfg: mock.Mock,
-                                  _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_default_to_is_author(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         """Without tocc-edited, To should be the original author."""
         msg = review._build_review_email(
-            self._make_series(), None, self._make_review(), 'cover', '', None)
+            self._make_series(), None, self._make_review(), 'cover', '', None
+        )
         assert msg is not None
         assert 'author@example.com' in msg['To']
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_default_demotes_to_header_to_cc(self, _mock_cfg: mock.Mock,
-                                             _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_default_demotes_to_header_to_cc(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         """Without tocc-edited, original To gets folded into Cc."""
         series = self._make_series(to='list@lists.example.com')
         msg = review._build_review_email(
-            series, None, self._make_review(), 'cover', '', None)
+            series, None, self._make_review(), 'cover', '', None
+        )
         assert msg is not None
         assert 'author@example.com' in msg['To']
         assert 'list@lists.example.com' in msg['Cc']
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_edited_to_is_honoured(self, _mock_cfg: mock.Mock,
-                                   _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_edited_to_is_honoured(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         """With tocc-edited, user's To choice should be used as-is."""
         series = self._make_series(to='custom@example.com')
         series['header-info']['tocc-edited'] = True
         msg = review._build_review_email(
-            series, None, self._make_review(), 'cover', '', None)
+            series, None, self._make_review(), 'cover', '', None
+        )
         assert msg is not None
         assert 'custom@example.com' in msg['To']
         assert 'author@example.com' not in (msg['To'] or '')
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_edited_cc_is_honoured(self, _mock_cfg: mock.Mock,
-                                   _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_edited_cc_is_honoured(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         """With tocc-edited, user's Cc choice should be used as-is."""
         series = self._make_series(to='custom@example.com', cc='other@example.com')
         series['header-info']['tocc-edited'] = True
         msg = review._build_review_email(
-            series, None, self._make_review(), 'cover', '', None)
+            series, None, self._make_review(), 'cover', '', None
+        )
         assert msg is not None
         assert msg['To'] == 'custom@example.com'
         assert msg['Cc'] == 'other@example.com'
 
     @mock.patch('b4.get_email_signature', return_value='sig')
-    @mock.patch('b4.get_user_config', return_value={
-        'name': 'Reviewer', 'email': 'reviewer@example.com'})
-    def test_edited_empty_cc_omitted(self, _mock_cfg: mock.Mock,
-                                     _mock_sig: mock.Mock) -> None:
+    @mock.patch(
+        'b4.get_user_config',
+        return_value={'name': 'Reviewer', 'email': 'reviewer@example.com'},
+    )
+    def test_edited_empty_cc_omitted(
+        self, _mock_cfg: mock.Mock, _mock_sig: mock.Mock
+    ) -> None:
         """With tocc-edited, empty Cc should not produce a Cc header."""
         series = self._make_series(to='custom@example.com', cc='')
         series['header-info']['tocc-edited'] = True
         msg = review._build_review_email(
-            series, None, self._make_review(), 'cover', '', None)
+            series, None, self._make_review(), 'cover', '', None
+        )
         assert msg is not None
         assert msg['Cc'] is None
 
 
 # -- Tests for get_reference_message() ---------------------------------------
 
+
 class TestGetReferenceMessage:
     """Tests for get_reference_message()."""
 
@@ -1370,9 +1507,9 @@ class TestGetReferenceMessage:
             review.get_reference_message(lser)
 
 
-
 # -- Tests for _collect_reply_headers() --------------------------------------
 
+
 class TestCollectReplyHeaders:
     """Tests for _collect_reply_headers()."""
 
@@ -1425,6 +1562,7 @@ class TestCollectReplyHeaders:
 
 # -- Tests for _collect_followups() ------------------------------------------
 
+
 class TestCollectFollowups:
     """Tests for _collect_followups()."""
 
@@ -1432,7 +1570,8 @@ class TestCollectFollowups:
 
     @staticmethod
     def _make_followup_trailer(
-        name: str, value: str,
+        name: str,
+        value: str,
         msgid: str = 'reply@example.com',
         fromname: str = 'Reviewer',
         fromemail: str = 'reviewer@example.com',
@@ -1446,7 +1585,9 @@ class TestCollectFollowups:
         return lt
 
     def _make_lmsg(
-        self, body: str, followup_trailers: List[Any],
+        self,
+        body: str,
+        followup_trailers: List[Any],
     ) -> mock.Mock:
         """Build a mock LoreMessage with body and followup_trailers."""
         lmsg = mock.Mock()
@@ -1457,7 +1598,8 @@ class TestCollectFollowups:
     def test_basic_followup(self) -> None:
         """A single follow-up trailer is collected."""
         ft = self._make_followup_trailer(
-            'Reviewed-by', 'Reviewer <reviewer@example.com>',
+            'Reviewed-by',
+            'Reviewer <reviewer@example.com>',
         )
         lmsg = self._make_lmsg('Some patch body\n', [ft])
         result = review._collect_followups(lmsg, self.LINKMASK)
@@ -1484,7 +1626,8 @@ class TestCollectFollowups:
             'Signed-off-by: Author <author@example.com>\n'
         )
         ft = self._make_followup_trailer(
-            'Reviewed-by', 'Reviewer <reviewer@example.com>',
+            'Reviewed-by',
+            'Reviewer <reviewer@example.com>',
         )
         lmsg = self._make_lmsg(body, [ft])
         result = review._collect_followups(lmsg, self.LINKMASK)
@@ -1492,13 +1635,10 @@ class TestCollectFollowups:
 
     def test_keeps_trailer_not_in_body(self) -> None:
         """Follow-up trailers NOT in the body are kept."""
-        body = (
-            'Patch description\n'
-            '\n'
-            'Signed-off-by: Author <author@example.com>\n'
-        )
+        body = 'Patch description\n\nSigned-off-by: Author <author@example.com>\n'
         ft = self._make_followup_trailer(
-            'Acked-by', 'Acker <acker@example.com>',
+            'Acked-by',
+            'Acker <acker@example.com>',
         )
         lmsg = self._make_lmsg(body, [ft])
         result = review._collect_followups(lmsg, self.LINKMASK)
@@ -1514,11 +1654,13 @@ class TestCollectFollowups:
             'Signed-off-by: Author <author@example.com>\n'
         )
         ft_dup = self._make_followup_trailer(
-            'Reviewed-by', 'Reviewer <reviewer@example.com>',
+            'Reviewed-by',
+            'Reviewer <reviewer@example.com>',
             msgid='reply1@example.com',
         )
         ft_new = self._make_followup_trailer(
-            'Tested-by', 'Tester <tester@example.com>',
+            'Tested-by',
+            'Tester <tester@example.com>',
             msgid='reply2@example.com',
             fromname='Tester',
             fromemail='tester@example.com',
@@ -1532,11 +1674,13 @@ class TestCollectFollowups:
     def test_groups_by_msgid(self) -> None:
         """Multiple trailers from the same reply are grouped together."""
         ft1 = self._make_followup_trailer(
-            'Reviewed-by', 'Reviewer <reviewer@example.com>',
+            'Reviewed-by',
+            'Reviewer <reviewer@example.com>',
             msgid='reply@example.com',
         )
         ft2 = self._make_followup_trailer(
-            'Tested-by', 'Reviewer <reviewer@example.com>',
+            'Tested-by',
+            'Reviewer <reviewer@example.com>',
             msgid='reply@example.com',
         )
         lmsg = self._make_lmsg('body\n', [ft1, ft2])
@@ -1553,36 +1697,53 @@ class TestCollectFollowups:
 
 # -- Tests for _get_art_counts() ---------------------------------------------
 
+
 class TestGetArtCounts:
     """Tests for _get_art_counts() in _tracking_app."""
 
     @staticmethod
-    def _make_tracking_json(followups: Optional[List[Dict[str, Any]]] = None, patches: Optional[List[Dict[str, Any]]] = None) -> str:
+    def _make_tracking_json(
+        followups: Optional[List[Dict[str, Any]]] = None,
+        patches: Optional[List[Dict[str, Any]]] = None,
+    ) -> str:
         """Build a tracking commit message with the given followup data."""
         tracking: Dict[str, Any] = {}
         if followups is not None:
             tracking['followups'] = followups
         if patches is not None:
             tracking['patches'] = patches
-        return 'Cover letter text\n\n--- b4-review-tracking ---\n' + json.dumps(tracking)
+        return 'Cover letter text\n\n--- b4-review-tracking ---\n' + json.dumps(
+            tracking
+        )
 
     @mock.patch('b4.git_run_command')
     def test_counts_all_trailer_types(self, mock_git: mock.Mock) -> None:
         """Counts Acked-by, Reviewed-by, and Tested-by from followups."""
         commit_msg = self._make_tracking_json(
             followups=[
-                {'trailers': ['Acked-by: A <a@example.com>',
-                               'Reviewed-by: R <r@example.com>']},
+                {
+                    'trailers': [
+                        'Acked-by: A <a@example.com>',
+                        'Reviewed-by: R <r@example.com>',
+                    ]
+                },
             ],
             patches=[
-                {'followups': [
-                    {'trailers': ['Tested-by: T <t@example.com>',
-                                  'Acked-by: B <b@example.com>']},
-                ]},
+                {
+                    'followups': [
+                        {
+                            'trailers': [
+                                'Tested-by: T <t@example.com>',
+                                'Acked-by: B <b@example.com>',
+                            ]
+                        },
+                    ]
+                },
             ],
         )
         mock_git.return_value = (0, commit_msg)
         from b4.review_tui._tracking_app import _get_art_counts
+
         result = _get_art_counts('/tmp', 'b4/review/test')
         assert result == (2, 1, 1)
 
@@ -1590,12 +1751,14 @@ class TestGetArtCounts:
     def test_returns_none_on_git_failure(self, mock_git: mock.Mock) -> None:
         mock_git.return_value = (1, '')
         from b4.review_tui._tracking_app import _get_art_counts
+
         assert _get_art_counts('/tmp', 'b4/review/test') is None
 
     @mock.patch('b4.git_run_command')
     def test_returns_none_without_marker(self, mock_git: mock.Mock) -> None:
         mock_git.return_value = (0, 'Just a commit message without marker')
         from b4.review_tui._tracking_app import _get_art_counts
+
         assert _get_art_counts('/tmp', 'b4/review/test') is None
 
     @mock.patch('b4.git_run_command')
@@ -1603,6 +1766,7 @@ class TestGetArtCounts:
         commit_msg = self._make_tracking_json(patches=[{'followups': []}])
         mock_git.return_value = (0, commit_msg)
         from b4.review_tui._tracking_app import _get_art_counts
+
         assert _get_art_counts('/tmp', 'b4/review/test') == (0, 0, 0)
 
     @mock.patch('b4.git_run_command')
@@ -1610,21 +1774,29 @@ class TestGetArtCounts:
         """Trailers like Signed-off-by are not counted."""
         commit_msg = self._make_tracking_json(
             followups=[
-                {'trailers': ['Signed-off-by: S <s@example.com>',
-                               'Reviewed-by: R <r@example.com>']},
+                {
+                    'trailers': [
+                        'Signed-off-by: S <s@example.com>',
+                        'Reviewed-by: R <r@example.com>',
+                    ]
+                },
             ],
         )
         mock_git.return_value = (0, commit_msg)
         from b4.review_tui._tracking_app import _get_art_counts
+
         assert _get_art_counts('/tmp', 'b4/review/test') == (0, 1, 0)
 
     @mock.patch('b4.git_run_command')
     def test_skips_comment_lines_in_json(self, mock_git: mock.Mock) -> None:
         """Lines starting with # in the JSON block are ignored."""
-        tracking = json.dumps({'followups': [{'trailers': ['Acked-by: A <a@example.com>']}]})
+        tracking = json.dumps(
+            {'followups': [{'trailers': ['Acked-by: A <a@example.com>']}]}
+        )
         commit_msg = 'Cover\n\n--- b4-review-tracking ---\n# comment line\n' + tracking
         mock_git.return_value = (0, commit_msg)
         from b4.review_tui._tracking_app import _get_art_counts
+
         assert _get_art_counts('/tmp', 'b4/review/test') == (1, 0, 0)
 
 
@@ -1632,42 +1804,61 @@ class TestParseArtFromMessage:
     """Tests for the extracted _parse_art_from_message() helper."""
 
     @staticmethod
-    def _make_msg(followups: Optional[List[Dict[str, Any]]] = None,
-                  patches: Optional[List[Dict[str, Any]]] = None) -> str:
+    def _make_msg(
+        followups: Optional[List[Dict[str, Any]]] = None,
+        patches: Optional[List[Dict[str, Any]]] = None,
+    ) -> str:
         tracking: Dict[str, Any] = {}
         if followups is not None:
             tracking['followups'] = followups
         if patches is not None:
             tracking['patches'] = patches
-        return 'Cover letter text\n\n--- b4-review-tracking ---\n' + json.dumps(tracking)
+        return 'Cover letter text\n\n--- b4-review-tracking ---\n' + json.dumps(
+            tracking
+        )
 
     def test_counts_trailers(self) -> None:
         from b4.review_tui._tracking_app import _parse_art_from_message
+
         msg = self._make_msg(
-            followups=[{'trailers': ['Acked-by: A <a@example.com>',
-                                     'Reviewed-by: R <r@example.com>']}],
+            followups=[
+                {
+                    'trailers': [
+                        'Acked-by: A <a@example.com>',
+                        'Reviewed-by: R <r@example.com>',
+                    ]
+                }
+            ],
             patches=[{'followups': [{'trailers': ['Tested-by: T <t@example.com>']}]}],
         )
         assert _parse_art_from_message(msg) == (1, 1, 1)
 
     def test_returns_none_without_marker(self) -> None:
         from b4.review_tui._tracking_app import _parse_art_from_message
+
         assert _parse_art_from_message('no marker here') is None
 
     def test_returns_none_on_bad_json(self) -> None:
         from b4.review_tui._tracking_app import _parse_art_from_message
-        assert _parse_art_from_message('text\n\n--- b4-review-tracking ---\n{bad json') is None
+
+        assert (
+            _parse_art_from_message('text\n\n--- b4-review-tracking ---\n{bad json')
+            is None
+        )
 
 
 # -- Tests for note comment stripping ----------------------------------------
 
+
 class TestNoteCommentStripping:
     """Tests for the # comment stripping logic used in note editing."""
 
     @staticmethod
     def _strip_comments(raw_text: str) -> str:
         """Replicate the stripping logic from _edit_note_in_editor."""
-        return '\n'.join(ln for ln in raw_text.splitlines() if not ln.startswith('#')).strip()
+        return '\n'.join(
+            ln for ln in raw_text.splitlines() if not ln.startswith('#')
+        ).strip()
 
     def test_strips_comment_lines(self) -> None:
         raw = 'This is my note\n# This is a comment\nSecond line'
@@ -1700,20 +1891,26 @@ class TestNoteCommentStripping:
 
 # -- Helpers for attestation tests -------------------------------------------
 
+
 def _make_mock_attestation(status: str, identity: str, passing: bool) -> Dict[str, Any]:
     """Build an attestation dict as returned by LoreMessage.get_attestation_status()."""
     return {'status': status, 'identity': identity, 'passing': passing}
 
 
-def _make_mock_lmsg(attestations: List[Dict[str, Any]], passing: bool = True, critical: bool = False) -> mock.Mock:
+def _make_mock_lmsg(
+    attestations: List[Dict[str, Any]], passing: bool = True, critical: bool = False
+) -> mock.Mock:
     """Build a mock LoreMessage with a canned get_attestation_status() response."""
     lmsg = mock.Mock()
-    lmsg.get_attestation_status = mock.Mock(return_value=(attestations, passing, critical))
+    lmsg.get_attestation_status = mock.Mock(
+        return_value=(attestations, passing, critical)
+    )
     return lmsg
 
 
 # -- Tests for check_series_attestation() ------------------------------------
 
+
 class TestCheckSeriesAttestation:
     """Tests for check_series_attestation()."""
 
@@ -1726,20 +1923,26 @@ class TestCheckSeriesAttestation:
     def test_policy_off_returns_none(self) -> None:
         """When attestation-policy is 'off', returns None immediately."""
         lser = self._make_series([_make_mock_lmsg([])])
-        with mock.patch('b4.get_main_config', return_value={'attestation-policy': 'off'}):
+        with mock.patch(
+            'b4.get_main_config', return_value={'attestation-policy': 'off'}
+        ):
             assert check_series_attestation(lser) is None
 
     def test_no_signatures_returns_none_string(self) -> None:
         """When no attestors found on any patch, returns 'none'."""
         lser = self._make_series([_make_mock_lmsg([]), _make_mock_lmsg([])])
-        with mock.patch('b4.get_main_config', return_value={'attestation-policy': 'softfail'}):
+        with mock.patch(
+            'b4.get_main_config', return_value={'attestation-policy': 'softfail'}
+        ):
             assert check_series_attestation(lser) == 'none'
 
     def test_single_signed_dkim(self) -> None:
         """A single passing DKIM attestor is reported correctly."""
         att = [_make_mock_attestation('signed', 'DKIM/kernel.org', True)]
         lser = self._make_series([_make_mock_lmsg(att)])
-        with mock.patch('b4.get_main_config', return_value={'attestation-policy': 'softfail'}):
+        with mock.patch(
+            'b4.get_main_config', return_value={'attestation-policy': 'softfail'}
+        ):
             result = check_series_attestation(lser)
         assert result == 'signed:DKIM/kernel.org'
 
@@ -1747,7 +1950,9 @@ class TestCheckSeriesAttestation:
         """A nokey attestor is reported with status 'nokey'."""
         att = [_make_mock_attestation('nokey', 'ed25519/user@example.com', False)]
         lser = self._make_series([_make_mock_lmsg(att)])
-        with mock.patch('b4.get_main_config', return_value={'attestation-policy': 'softfail'}):
+        with mock.patch(
+            'b4.get_main_config', return_value={'attestation-policy': 'softfail'}
+        ):
             result = check_series_attestation(lser)
         assert result == 'nokey:ed25519/user@example.com'
 
@@ -1755,7 +1960,9 @@ class TestCheckSeriesAttestation:
         """A badsig attestor is reported with status 'badsig'."""
         att = [_make_mock_attestation('badsig', 'ed25519/user@example.com', False)]
         lser = self._make_series([_make_mock_lmsg(att)])
-        with mock.patch('b4.get_main_config', return_value={'attestation-policy': 'softfail'}):
+        with mock.patch(
+            'b4.get_main_config', return_value={'attestation-policy': 'softfail'}
+        ):
             result = check_series_attestation(lser)
         assert result == 'badsig:ed25519/user@example.com'
 
@@ -1766,7 +1973,9 @@ class TestCheckSeriesAttestation:
             _make_mock_attestation('nokey', 'ed25519/user@example.com', False),
         ]
         lser = self._make_series([_make_mock_lmsg(att)])
-        with mock.patch('b4.get_main_config', return_value={'attestation-policy': 'softfail'}):
+        with mock.patch(
+            'b4.get_main_config', return_value={'attestation-policy': 'softfail'}
+        ):
             result = check_series_attestation(lser)
         # Sorted by (status, identity): nokey < signed alphabetically
         assert result is not None
@@ -1779,7 +1988,9 @@ class TestCheckSeriesAttestation:
         """Same attestor on multiple patches is only reported once."""
         att = [_make_mock_attestation('signed', 'DKIM/kernel.org', True)]
         lser = self._make_series([_make_mock_lmsg(att), _make_mock_lmsg(att)])
-        with mock.patch('b4.get_main_config', return_value={'attestation-policy': 'softfail'}):
+        with mock.patch(
+            'b4.get_main_config', return_value={'attestation-policy': 'softfail'}
+        ):
             result = check_series_attestation(lser)
         assert result == 'signed:DKIM/kernel.org'
 
@@ -1788,7 +1999,9 @@ class TestCheckSeriesAttestation:
         att = [_make_mock_attestation('signed', 'DKIM/kernel.org', True)]
         lser = mock.Mock()
         lser.patches = [None, None, _make_mock_lmsg(att), None]
-        with mock.patch('b4.get_main_config', return_value={'attestation-policy': 'softfail'}):
+        with mock.patch(
+            'b4.get_main_config', return_value={'attestation-policy': 'softfail'}
+        ):
             result = check_series_attestation(lser)
         assert result == 'signed:DKIM/kernel.org'
 
@@ -1807,7 +2020,10 @@ class TestCheckSeriesAttestation:
         att = [_make_mock_attestation('signed', 'DKIM/kernel.org', True)]
         lmsg = _make_mock_lmsg(att)
         lser = self._make_series([lmsg])
-        config = {'attestation-policy': 'softfail', 'attestation-staleness-days': 'garbage'}
+        config = {
+            'attestation-policy': 'softfail',
+            'attestation-staleness-days': 'garbage',
+        }
         with mock.patch('b4.get_main_config', return_value=config):
             check_series_attestation(lser)
         lmsg.get_attestation_status.assert_called_once_with('softfail', 0)
@@ -1880,19 +2096,19 @@ class TestExtractCommentsFromQuotedReply:
     def test_single_hunk_single_comment(self) -> None:
         """A minimal single-hunk inline review produces one comment."""
         inline = (
-            "commit abc123\n"
-            "Author: Test <test@test.com>\n"
-            "\n"
-            "Test patch\n"
-            "\n"
-            "> diff --git a/fs/file.c b/fs/file.c\n"
-            "> @@ -10,4 +10,5 @@ void func(void)\n"
-            ">  \tint x;\n"
-            "> +\tptr = malloc(sz);\n"
-            "\n"
-            "Missing NULL check after malloc.\n"
-            "\n"
-            ">  \treturn 0;\n"
+            'commit abc123\n'
+            'Author: Test <test@test.com>\n'
+            '\n'
+            'Test patch\n'
+            '\n'
+            '> diff --git a/fs/file.c b/fs/file.c\n'
+            '> @@ -10,4 +10,5 @@ void func(void)\n'
+            '>  \tint x;\n'
+            '> +\tptr = malloc(sz);\n'
+            '\n'
+            'Missing NULL check after malloc.\n'
+            '\n'
+            '>  \treturn 0;\n'
         )
         comments = review._extract_comments_from_quoted_reply(inline)
         assert len(comments) == 1
@@ -1903,27 +2119,27 @@ class TestExtractCommentsFromQuotedReply:
 
     def test_no_diff_produces_no_comments(self) -> None:
         """Text with no quoted diff content produces nothing."""
-        inline = "commit abc123\nAuthor: Test\n\nJust text, no diffs.\n"
+        inline = 'commit abc123\nAuthor: Test\n\nJust text, no diffs.\n'
         comments = review._extract_comments_from_quoted_reply(inline)
         assert comments == []
 
     def test_truncation_markers_skipped(self) -> None:
         """'[ ... ]' markers don't appear in comment text."""
         inline = (
-            "> diff --git a/f.c b/f.c\n"
-            "> @@ -1,3 +1,4 @@\n"
-            ">  ctx\n"
-            "> +new\n"
-            "\n"
-            "Comment here.\n"
-            "\n"
-            "[ ... ]\n"
-            "\n"
-            "> @@ -10,3 +10,4 @@\n"
-            ">  ctx2\n"
-            "> +new2\n"
-            "\n"
-            "Another comment.\n"
+            '> diff --git a/f.c b/f.c\n'
+            '> @@ -1,3 +1,4 @@\n'
+            '>  ctx\n'
+            '> +new\n'
+            '\n'
+            'Comment here.\n'
+            '\n'
+            '[ ... ]\n'
+            '\n'
+            '> @@ -10,3 +10,4 @@\n'
+            '>  ctx2\n'
+            '> +new2\n'
+            '\n'
+            'Another comment.\n'
         )
         comments = review._extract_comments_from_quoted_reply(inline)
         assert len(comments) == 2
@@ -1934,15 +2150,15 @@ class TestExtractCommentsFromQuotedReply:
     def test_multiline_comment(self) -> None:
         """Multiple non-quoted lines between diff sections form one comment."""
         inline = (
-            "> diff --git a/f.c b/f.c\n"
-            "> @@ -5,3 +5,4 @@ void f(void)\n"
-            ">  \tint a;\n"
-            "> +\tint b;\n"
-            "\n"
-            "This variable name is confusing.\n"
-            "Consider using a more descriptive name.\n"
-            "\n"
-            ">  \treturn;\n"
+            '> diff --git a/f.c b/f.c\n'
+            '> @@ -5,3 +5,4 @@ void f(void)\n'
+            '>  \tint a;\n'
+            '> +\tint b;\n'
+            '\n'
+            'This variable name is confusing.\n'
+            'Consider using a more descriptive name.\n'
+            '\n'
+            '>  \treturn;\n'
         )
         comments = review._extract_comments_from_quoted_reply(inline)
         assert len(comments) == 1
@@ -1952,19 +2168,19 @@ class TestExtractCommentsFromQuotedReply:
     def test_multi_paragraph_comment_stays_merged(self) -> None:
         """Two paragraphs separated by a blank line become one comment."""
         inline = (
-            "> diff --git a/f.c b/f.c\n"
-            "> --- a/f.c\n"
-            "> +++ b/f.c\n"
-            "> @@ -5,3 +5,5 @@ void f(void)\n"
-            ">  \tint a;\n"
-            "> +\tint b;\n"
-            "> +\tint c;\n"
-            "\n"
-            "First paragraph of review.\n"
-            "\n"
-            "Second paragraph of review.\n"
-            "\n"
-            ">  \treturn;\n"
+            '> diff --git a/f.c b/f.c\n'
+            '> --- a/f.c\n'
+            '> +++ b/f.c\n'
+            '> @@ -5,3 +5,5 @@ void f(void)\n'
+            '>  \tint a;\n'
+            '> +\tint b;\n'
+            '> +\tint c;\n'
+            '\n'
+            'First paragraph of review.\n'
+            '\n'
+            'Second paragraph of review.\n'
+            '\n'
+            '>  \treturn;\n'
         )
         comments = review._extract_comments_from_quoted_reply(inline)
         assert len(comments) == 1
@@ -1974,23 +2190,23 @@ class TestExtractCommentsFromQuotedReply:
     def test_comments_in_different_hunks_stay_separate(self) -> None:
         """Comments in different hunks (far apart) stay separate."""
         inline = (
-            "> diff --git a/f.c b/f.c\n"
-            "> --- a/f.c\n"
-            "> +++ b/f.c\n"
-            "> @@ -5,3 +5,4 @@\n"
-            ">  \tint a;\n"
-            "> +\tint b;\n"
-            "\n"
-            "Comment on hunk 1.\n"
-            "\n"
-            ">  \treturn;\n"
-            "> @@ -100,3 +101,4 @@\n"
-            ">  \tvoid x;\n"
-            "> +\tvoid y;\n"
-            "\n"
-            "Comment on hunk 2.\n"
-            "\n"
-            ">  \treturn;\n"
+            '> diff --git a/f.c b/f.c\n'
+            '> --- a/f.c\n'
+            '> +++ b/f.c\n'
+            '> @@ -5,3 +5,4 @@\n'
+            '>  \tint a;\n'
+            '> +\tint b;\n'
+            '\n'
+            'Comment on hunk 1.\n'
+            '\n'
+            '>  \treturn;\n'
+            '> @@ -100,3 +101,4 @@\n'
+            '>  \tvoid x;\n'
+            '> +\tvoid y;\n'
+            '\n'
+            'Comment on hunk 2.\n'
+            '\n'
+            '>  \treturn;\n'
         )
         comments = review._extract_comments_from_quoted_reply(inline)
         assert len(comments) == 2
@@ -2000,18 +2216,18 @@ class TestExtractCommentsFromQuotedReply:
     def test_email_reply_with_file_headers(self) -> None:
         """Email follow-ups include --- a/ and +++ b/ lines; parser handles them."""
         email_reply = (
-            "On Mon, Jan 1, 2024, Dev <dev@test.com> wrote:\n"
-            "> diff --git a/fs/file.c b/fs/file.c\n"
-            "> index abc123..def456 100644\n"
-            "> --- a/fs/file.c\n"
-            "> +++ b/fs/file.c\n"
-            "> @@ -10,3 +10,4 @@ void f(void)\n"
-            ">  \tint x;\n"
-            "> +\tptr = malloc(sz);\n"
-            "\n"
-            "Missing NULL check.\n"
-            "\n"
-            ">  \treturn 0;\n"
+            'On Mon, Jan 1, 2024, Dev <dev@test.com> wrote:\n'
+            '> diff --git a/fs/file.c b/fs/file.c\n'
+            '> index abc123..def456 100644\n'
+            '> --- a/fs/file.c\n'
+            '> +++ b/fs/file.c\n'
+            '> @@ -10,3 +10,4 @@ void f(void)\n'
+            '>  \tint x;\n'
+            '> +\tptr = malloc(sz);\n'
+            '\n'
+            'Missing NULL check.\n'
+            '\n'
+            '>  \treturn 0;\n'
         )
         comments = review._extract_comments_from_quoted_reply(email_reply)
         assert len(comments) == 1
@@ -2022,12 +2238,7 @@ class TestExtractCommentsFromQuotedReply:
     def test_bare_gt_prefix(self) -> None:
         """Lines starting with just '>' (no space) are also parsed."""
         inline = (
-            ">diff --git a/f.c b/f.c\n"
-            ">@@ -1,3 +1,4 @@\n"
-            "> ctx\n"
-            ">+new\n"
-            "\n"
-            "Looks good.\n"
+            '>diff --git a/f.c b/f.c\n>@@ -1,3 +1,4 @@\n> ctx\n>+new\n\nLooks good.\n'
         )
         comments = review._extract_comments_from_quoted_reply(inline)
         assert len(comments) == 1
@@ -2036,19 +2247,19 @@ class TestExtractCommentsFromQuotedReply:
     def test_comments_in_different_files(self) -> None:
         """Comments in different files produce separate entries with correct paths."""
         inline = (
-            "> diff --git a/a.c b/a.c\n"
-            "> @@ -1,3 +1,4 @@\n"
-            ">  ctx\n"
-            "> +new_a\n"
-            "\n"
-            "Comment in a.c.\n"
-            "\n"
-            "> diff --git a/b.c b/b.c\n"
-            "> @@ -1,3 +1,4 @@\n"
-            ">  ctx\n"
-            "> +new_b\n"
-            "\n"
-            "Comment in b.c.\n"
+            '> diff --git a/a.c b/a.c\n'
+            '> @@ -1,3 +1,4 @@\n'
+            '>  ctx\n'
+            '> +new_a\n'
+            '\n'
+            'Comment in a.c.\n'
+            '\n'
+            '> diff --git a/b.c b/b.c\n'
+            '> @@ -1,3 +1,4 @@\n'
+            '>  ctx\n'
+            '> +new_b\n'
+            '\n'
+            'Comment in b.c.\n'
         )
         comments = review._extract_comments_from_quoted_reply(inline)
         assert len(comments) == 2
@@ -2060,14 +2271,14 @@ class TestExtractCommentsFromQuotedReply:
     def test_preamble_before_diff_ignored(self) -> None:
         """Text before the first quoted diff line is not treated as a comment."""
         inline = (
-            "Hi, some general feedback below:\n"
-            "\n"
-            "> diff --git a/f.c b/f.c\n"
-            "> @@ -1,3 +1,4 @@\n"
-            ">  ctx\n"
-            "> +new\n"
-            "\n"
-            "Actual inline comment.\n"
+            'Hi, some general feedback below:\n'
+            '\n'
+            '> diff --git a/f.c b/f.c\n'
+            '> @@ -1,3 +1,4 @@\n'
+            '>  ctx\n'
+            '> +new\n'
+            '\n'
+            'Actual inline comment.\n'
         )
         comments = review._extract_comments_from_quoted_reply(inline)
         assert len(comments) == 1
@@ -2076,12 +2287,12 @@ class TestExtractCommentsFromQuotedReply:
     def test_trailing_comment_flushed(self) -> None:
         """A comment at the very end (no trailing quoted line) is still captured."""
         inline = (
-            "> diff --git a/f.c b/f.c\n"
-            "> @@ -1,3 +1,4 @@\n"
-            ">  ctx\n"
-            "> +new\n"
-            "\n"
-            "Final comment with no trailing diff.\n"
+            '> diff --git a/f.c b/f.c\n'
+            '> @@ -1,3 +1,4 @@\n'
+            '>  ctx\n'
+            '> +new\n'
+            '\n'
+            'Final comment with no trailing diff.\n'
         )
         comments = review._extract_comments_from_quoted_reply(inline)
         assert len(comments) == 1
@@ -2090,14 +2301,14 @@ class TestExtractCommentsFromQuotedReply:
     def test_deletion_line_anchors_to_a_file(self) -> None:
         """Comment after a deletion line anchors to the a-side file and line."""
         inline = (
-            "> diff --git a/old.c b/old.c\n"
-            "> @@ -10,4 +10,3 @@\n"
-            ">  ctx\n"
-            "> -removed_line\n"
-            "\n"
-            "Why was this removed?\n"
-            "\n"
-            ">  more ctx\n"
+            '> diff --git a/old.c b/old.c\n'
+            '> @@ -10,4 +10,3 @@\n'
+            '>  ctx\n'
+            '> -removed_line\n'
+            '\n'
+            'Why was this removed?\n'
+            '\n'
+            '>  more ctx\n'
         )
         comments = review._extract_comments_from_quoted_reply(inline)
         assert len(comments) == 1
@@ -2105,19 +2316,18 @@ class TestExtractCommentsFromQuotedReply:
         # Deletion at a_line=11, so comment anchors to line 11
         assert comments[0]['line'] == 11
 
-
     def test_commit_message_comment_extracted(self) -> None:
         """Comments on quoted commit message lines get :message path."""
         inline = (
-            "> This is the commit body.\n"
-            "> It explains the change.\n"
-            "\n"
-            "Why is this needed?\n"
-            "\n"
-            "> diff --git a/f.c b/f.c\n"
-            "> @@ -1,3 +1,4 @@\n"
-            ">  ctx\n"
-            "> +new\n"
+            '> This is the commit body.\n'
+            '> It explains the change.\n'
+            '\n'
+            'Why is this needed?\n'
+            '\n'
+            '> diff --git a/f.c b/f.c\n'
+            '> @@ -1,3 +1,4 @@\n'
+            '>  ctx\n'
+            '> +new\n'
         )
         comments = review._extract_comments_from_quoted_reply(inline)
         assert len(comments) == 1
@@ -2128,17 +2338,18 @@ class TestExtractCommentsFromQuotedReply:
     def test_preamble_captured_when_enabled(self) -> None:
         """With capture_preamble=True, text before first quote is a comment."""
         inline = (
-            "General feedback on this patch.\n"
-            "\n"
-            "> Commit body line.\n"
-            "\n"
-            "> diff --git a/f.c b/f.c\n"
-            "> @@ -1,3 +1,4 @@\n"
-            ">  ctx\n"
-            "> +new\n"
+            'General feedback on this patch.\n'
+            '\n'
+            '> Commit body line.\n'
+            '\n'
+            '> diff --git a/f.c b/f.c\n'
+            '> @@ -1,3 +1,4 @@\n'
+            '>  ctx\n'
+            '> +new\n'
         )
         comments = review._extract_comments_from_quoted_reply(
-            inline, capture_preamble=True)
+            inline, capture_preamble=True
+        )
         preamble = [c for c in comments if c['line'] == 0]
         assert len(preamble) == 1
         assert preamble[0]['path'] == ':message'
@@ -2147,14 +2358,14 @@ class TestExtractCommentsFromQuotedReply:
     def test_preamble_not_captured_by_default(self) -> None:
         """Without capture_preamble, text before first quote is ignored."""
         inline = (
-            "General feedback on this patch.\n"
-            "\n"
-            "> diff --git a/f.c b/f.c\n"
-            "> @@ -1,3 +1,4 @@\n"
-            ">  ctx\n"
-            "> +new\n"
-            "\n"
-            "Actual comment.\n"
+            'General feedback on this patch.\n'
+            '\n'
+            '> diff --git a/f.c b/f.c\n'
+            '> @@ -1,3 +1,4 @@\n'
+            '>  ctx\n'
+            '> +new\n'
+            '\n'
+            'Actual comment.\n'
         )
         comments = review._extract_comments_from_quoted_reply(inline)
         assert len(comments) == 1
@@ -2163,18 +2374,19 @@ class TestExtractCommentsFromQuotedReply:
     def test_attribution_line_skipped_in_preamble(self) -> None:
         """The 'On ..., ... wrote:' attribution line is not captured."""
         inline = (
-            "On Thu, 12 Mar 2026 15:54:20 +0100, Author <a@b.com> wrote:\n"
-            "> Commit body.\n"
-            "\n"
-            "My comment.\n"
-            "\n"
-            "> diff --git a/f.c b/f.c\n"
-            "> @@ -1,3 +1,4 @@\n"
-            ">  ctx\n"
-            "> +new\n"
+            'On Thu, 12 Mar 2026 15:54:20 +0100, Author <a@b.com> wrote:\n'
+            '> Commit body.\n'
+            '\n'
+            'My comment.\n'
+            '\n'
+            '> diff --git a/f.c b/f.c\n'
+            '> @@ -1,3 +1,4 @@\n'
+            '>  ctx\n'
+            '> +new\n'
         )
         comments = review._extract_comments_from_quoted_reply(
-            inline, capture_preamble=True)
+            inline, capture_preamble=True
+        )
         # Attribution line should NOT become a comment
         for c in comments:
             assert 'wrote:' not in c.get('text', '')
@@ -2182,11 +2394,7 @@ class TestExtractCommentsFromQuotedReply:
     def test_orphan_hunk_header_enters_diff_mode(self) -> None:
         """A @@ hunk header without diff --git still enters diff mode."""
         inline = (
-            "> @@ -10,3 +10,4 @@ some_func\n"
-            ">  ctx\n"
-            "> +new line\n"
-            "\n"
-            "This needs a test.\n"
+            '> @@ -10,3 +10,4 @@ some_func\n>  ctx\n> +new line\n\nThis needs a test.\n'
         )
         comments = review._extract_comments_from_quoted_reply(inline)
         assert len(comments) == 1
@@ -2197,13 +2405,13 @@ class TestExtractCommentsFromQuotedReply:
     def test_orphan_file_headers_enter_diff_mode(self) -> None:
         """--- a/ and +++ b/ without diff --git still enter diff mode."""
         inline = (
-            "> --- a/kernel/sched.c\n"
-            "> +++ b/kernel/sched.c\n"
-            "> @@ -5,3 +5,4 @@\n"
-            ">  existing\n"
-            "> +added\n"
-            "\n"
-            "Why this change?\n"
+            '> --- a/kernel/sched.c\n'
+            '> +++ b/kernel/sched.c\n'
+            '> @@ -5,3 +5,4 @@\n'
+            '>  existing\n'
+            '> +added\n'
+            '\n'
+            'Why this change?\n'
         )
         comments = review._extract_comments_from_quoted_reply(inline)
         assert len(comments) == 1
@@ -2214,11 +2422,7 @@ class TestExtractCommentsFromQuotedReply:
     def test_trimmed_diff_with_content_resolution(self) -> None:
         """Trimmed reply resolved against real diff gets correct position."""
         # User trimmed everything except the line they're commenting on
-        inline = (
-            "> +new line\n"
-            "\n"
-            "Looks good.\n"
-        )
+        inline = '> +new line\n\nLooks good.\n'
         comments = review._extract_comments_from_quoted_reply(inline)
         # Comment is captured (even without file path from headers)
         assert len(comments) == 1
@@ -2227,13 +2431,13 @@ class TestExtractCommentsFromQuotedReply:
 
         # Now resolve against the real diff
         real_diff = (
-            "diff --git a/f.c b/f.c\n"
-            "--- a/f.c\n"
-            "+++ b/f.c\n"
-            "@@ -1,3 +1,4 @@\n"
-            " ctx\n"
-            "+new line\n"
-            " more\n"
+            'diff --git a/f.c b/f.c\n'
+            '--- a/f.c\n'
+            '+++ b/f.c\n'
+            '@@ -1,3 +1,4 @@\n'
+            ' ctx\n'
+            '+new line\n'
+            ' more\n'
         )
         review._resolve_comment_positions(real_diff, comments)
         assert comments[0]['path'] == 'b/f.c'
@@ -2243,15 +2447,15 @@ class TestExtractCommentsFromQuotedReply:
         """A diff --git line wrapped by the editor is rejoined."""
         # Editor wraps at 72 chars, splitting diff --git into two lines
         inline = (
-            "> diff --git a/tools/lib/python/kdoc/xforms_lists.py\n"
-            "b/tools/lib/python/kdoc/xforms_lists.py\n"
-            "> --- a/tools/lib/python/kdoc/xforms_lists.py\n"
-            "> +++ b/tools/lib/python/kdoc/xforms_lists.py\n"
-            "> @@ -4,7 +4,8 @@\n"
-            ">  existing\n"
-            "> +from kdoc.c_lex import CMatch\n"
-            "\n"
-            "Only editing 2nd file.\n"
+            '> diff --git a/tools/lib/python/kdoc/xforms_lists.py\n'
+            'b/tools/lib/python/kdoc/xforms_lists.py\n'
+            '> --- a/tools/lib/python/kdoc/xforms_lists.py\n'
+            '> +++ b/tools/lib/python/kdoc/xforms_lists.py\n'
+            '> @@ -4,7 +4,8 @@\n'
+            '>  existing\n'
+            '> +from kdoc.c_lex import CMatch\n'
+            '\n'
+            'Only editing 2nd file.\n'
         )
         comments = review._extract_comments_from_quoted_reply(inline)
         assert len(comments) == 1
@@ -2262,15 +2466,15 @@ class TestExtractCommentsFromQuotedReply:
     def test_wrapped_diff_git_line_quoted_continuation(self) -> None:
         """A diff --git line wrapped with quoted continuation is rejoined."""
         inline = (
-            "> diff --git a/tools/lib/python/kdoc/xforms_lists.py\n"
-            "> b/tools/lib/python/kdoc/xforms_lists.py\n"
-            "> --- a/tools/lib/python/kdoc/xforms_lists.py\n"
-            "> +++ b/tools/lib/python/kdoc/xforms_lists.py\n"
-            "> @@ -1,3 +1,4 @@\n"
-            ">  ctx\n"
-            "> +new\n"
-            "\n"
-            "Comment here.\n"
+            '> diff --git a/tools/lib/python/kdoc/xforms_lists.py\n'
+            '> b/tools/lib/python/kdoc/xforms_lists.py\n'
+            '> --- a/tools/lib/python/kdoc/xforms_lists.py\n'
+            '> +++ b/tools/lib/python/kdoc/xforms_lists.py\n'
+            '> @@ -1,3 +1,4 @@\n'
+            '>  ctx\n'
+            '> +new\n'
+            '\n'
+            'Comment here.\n'
         )
         comments = review._extract_comments_from_quoted_reply(inline)
         assert len(comments) == 1
@@ -2280,21 +2484,16 @@ class TestExtractCommentsFromQuotedReply:
     def test_extract_editor_comments_with_diff_resolution(self) -> None:
         """_extract_editor_comments resolves positions when diff provided."""
         edited = (
-            "# instructions\n"
-            "> @@ -1,3 +1,4 @@\n"
-            ">  ctx\n"
-            "> +new line\n"
-            "\n"
-            "My comment.\n"
+            '# instructions\n> @@ -1,3 +1,4 @@\n>  ctx\n> +new line\n\nMy comment.\n'
         )
         real_diff = (
-            "diff --git a/f.c b/f.c\n"
-            "--- a/f.c\n"
-            "+++ b/f.c\n"
-            "@@ -1,3 +1,4 @@\n"
-            " ctx\n"
-            "+new line\n"
-            " more\n"
+            'diff --git a/f.c b/f.c\n'
+            '--- a/f.c\n'
+            '+++ b/f.c\n'
+            '@@ -1,3 +1,4 @@\n'
+            ' ctx\n'
+            '+new line\n'
+            ' more\n'
         )
         comments = review._extract_editor_comments(edited, diff_text=real_diff)
         assert len(comments) == 1
@@ -2347,20 +2546,24 @@ class TestResolveCommentPositions:
         # Sashiko uses fake context hunks even for new files, so the
         # content key has a space prefix while the real diff has + prefix.
         real_diff = (
-            "diff --git a/f.c b/f.c\n"
-            "new file mode 100644\n"
-            "--- /dev/null\n"
-            "+++ b/f.c\n"
-            "@@ -0,0 +1,5 @@\n"
-            "+int x;\n"
-            "+int y;\n"
-            "+return -EINVAL;\n"
-            "+if (check)\n"
-            "+\treturn 0;\n"
+            'diff --git a/f.c b/f.c\n'
+            'new file mode 100644\n'
+            '--- /dev/null\n'
+            '+++ b/f.c\n'
+            '@@ -0,0 +1,5 @@\n'
+            '+int x;\n'
+            '+int y;\n'
+            '+return -EINVAL;\n'
+            '+if (check)\n'
+            '+\treturn 0;\n'
         )
         comments = [
-            {'path': 'f.c', 'line': 90, 'text': 'Bug here.',
-             'content': ' return -EINVAL;'},
+            {
+                'path': 'f.c',
+                'line': 90,
+                'text': 'Bug here.',
+                'content': ' return -EINVAL;',
+            },
         ]
         review._resolve_comment_positions(real_diff, comments)
         assert comments[0]['line'] == 3
@@ -2369,24 +2572,23 @@ class TestResolveCommentPositions:
     def test_exact_prefix_match_still_works(self) -> None:
         """Content with matching prefix (both +) still resolves correctly."""
         real_diff = (
-            "diff --git a/f.c b/f.c\n"
-            "--- a/f.c\n"
-            "+++ b/f.c\n"
-            "@@ -10,3 +10,4 @@\n"
-            " ctx\n"
-            "+new_line\n"
-            " more\n"
+            'diff --git a/f.c b/f.c\n'
+            '--- a/f.c\n'
+            '+++ b/f.c\n'
+            '@@ -10,3 +10,4 @@\n'
+            ' ctx\n'
+            '+new_line\n'
+            ' more\n'
         )
         comments = [
-            {'path': 'f.c', 'line': 99, 'text': 'Review.',
-             'content': '+new_line'},
+            {'path': 'f.c', 'line': 99, 'text': 'Review.', 'content': '+new_line'},
         ]
         review._resolve_comment_positions(real_diff, comments)
         assert comments[0]['line'] == 11
 
     def test_no_content_key_keeps_original_position(self) -> None:
         """Comments without content key are not touched."""
-        real_diff = "diff --git a/f.c b/f.c\n--- a/f.c\n+++ b/f.c\n@@ -1,1 +1,1 @@\n-old\n+new\n"
+        real_diff = 'diff --git a/f.c b/f.c\n--- a/f.c\n+++ b/f.c\n@@ -1,1 +1,1 @@\n-old\n+new\n'
         comments = [{'path': 'f.c', 'line': 42, 'text': 'Note.'}]
         review._resolve_comment_positions(real_diff, comments)
         assert comments[0]['line'] == 42
@@ -2395,22 +2597,26 @@ class TestResolveCommentPositions:
         """When the same line appears multiple times, pick the closest match."""
         # Simulates a new file with return -EINVAL; at lines 10, 30, and 50
         real_diff = (
-            "diff --git a/f.c b/f.c\n"
-            "new file mode 100644\n"
-            "--- /dev/null\n"
-            "+++ b/f.c\n"
-            "@@ -0,0 +1,50 @@\n"
-            + "".join(f"+line{i}\n" for i in range(1, 10))
-            + "+\treturn -EINVAL;\n"        # line 10
-            + "".join(f"+line{i}\n" for i in range(11, 30))
-            + "+\treturn -EINVAL;\n"        # line 30
-            + "".join(f"+line{i}\n" for i in range(31, 50))
-            + "+\treturn -EINVAL;\n"        # line 50
+            'diff --git a/f.c b/f.c\n'
+            'new file mode 100644\n'
+            '--- /dev/null\n'
+            '+++ b/f.c\n'
+            '@@ -0,0 +1,50 @@\n'
+            + ''.join(f'+line{i}\n' for i in range(1, 10))
+            + '+\treturn -EINVAL;\n'  # line 10
+            + ''.join(f'+line{i}\n' for i in range(11, 30))
+            + '+\treturn -EINVAL;\n'  # line 30
+            + ''.join(f'+line{i}\n' for i in range(31, 50))
+            + '+\treturn -EINVAL;\n'  # line 50
         )
         # Sashiko says line 30 with context-prefix content
         comments = [
-            {'path': 'f.c', 'line': 30, 'text': 'Bug here.',
-             'content': ' \treturn -EINVAL;'},
+            {
+                'path': 'f.c',
+                'line': 30,
+                'text': 'Bug here.',
+                'content': ' \treturn -EINVAL;',
+            },
         ]
         review._resolve_comment_positions(real_diff, comments)
         # Should pick line 30 (closest to source position 30)
@@ -2436,17 +2642,17 @@ class TestIntegrateSashikoReviews:
                 'status': 'Reviewed',
                 'output': '{}',
                 'inline_review': (
-                    "commit aaa\n"
-                    "Author: Test\n\n"
-                    "Test patch 1\n\n"
-                    "> diff --git a/f.c b/f.c\n"
-                    "> @@ -10,3 +10,4 @@ void f(void)\n"
-                    ">  \tint x;\n"
-                    "> +\tptr = alloc();\n"
-                    "\n"
-                    "Missing error check.\n"
-                    "\n"
-                    ">  \treturn 0;\n"
+                    'commit aaa\n'
+                    'Author: Test\n\n'
+                    'Test patch 1\n\n'
+                    '> diff --git a/f.c b/f.c\n'
+                    '> @@ -10,3 +10,4 @@ void f(void)\n'
+                    '>  \tint x;\n'
+                    '> +\tptr = alloc();\n'
+                    '\n'
+                    'Missing error check.\n'
+                    '\n'
+                    '>  \treturn 0;\n'
                 ),
             },
             {
@@ -2463,26 +2669,33 @@ class TestIntegrateSashikoReviews:
         """When sashiko-url is not configured, returns False immediately."""
         with mock.patch('b4.get_main_config', return_value={}):
             result = review._integrate_sashiko_reviews(
-                '/tmp', '', {'series': {}, 'patches': []}, [], [])
+                '/tmp', '', {'series': {}, 'patches': []}, [], []
+            )
         assert result is False
 
     def test_no_series_msgid_returns_false(self) -> None:
         """When series has no message_id, returns False."""
-        with mock.patch('b4.get_main_config',
-                        return_value={'sashiko-url': 'https://sashiko.dev'}):
+        with mock.patch(
+            'b4.get_main_config', return_value={'sashiko-url': 'https://sashiko.dev'}
+        ):
             result = review._integrate_sashiko_reviews(
-                '/tmp', '', {'series': {}, 'patches': []}, [], [])
+                '/tmp', '', {'series': {}, 'patches': []}, [], []
+            )
         assert result is False
 
     def test_api_returns_none(self) -> None:
         """When sashiko API returns nothing, returns False."""
         series = {'message_id': 'test@example.com'}
-        with mock.patch('b4.get_main_config',
-                        return_value={'sashiko-url': 'https://sashiko.dev'}):
-            with mock.patch('b4.review.checks._fetch_sashiko_patchset', return_value=None):
+        with mock.patch(
+            'b4.get_main_config', return_value={'sashiko-url': 'https://sashiko.dev'}
+        ):
+            with mock.patch(
+                'b4.review.checks._fetch_sashiko_patchset', return_value=None
+            ):
                 with mock.patch('b4.review.checks.clear_sashiko_cache'):
                     result = review._integrate_sashiko_reviews(
-                        '/tmp', '', {'series': series, 'patches': []}, [], [])
+                        '/tmp', '', {'series': series, 'patches': []}, [], []
+                    )
         assert result is False
 
     def test_integrates_inline_comments(self) -> None:
@@ -2496,26 +2709,34 @@ class TestIntegrateSashikoReviews:
         commit_shas = ['aaaa', 'bbbb']
         # Real diff matching the inline review structure
         real_diff = (
-            "diff --git a/f.c b/f.c\n"
-            "index 111..222 100644\n"
-            "--- a/f.c\n"
-            "+++ b/f.c\n"
-            "@@ -10,3 +10,4 @@ void f(void)\n"
-            " \tint x;\n"
-            "+\tptr = alloc();\n"
-            " \treturn 0;\n"
-        )
-        with mock.patch('b4.get_main_config',
-                        return_value={'sashiko-url': 'https://sashiko.dev'}):
-            with mock.patch('b4.review.checks._fetch_sashiko_patchset',
-                            return_value=self._SASHIKO_RESPONSE):
+            'diff --git a/f.c b/f.c\n'
+            'index 111..222 100644\n'
+            '--- a/f.c\n'
+            '+++ b/f.c\n'
+            '@@ -10,3 +10,4 @@ void f(void)\n'
+            ' \tint x;\n'
+            '+\tptr = alloc();\n'
+            ' \treturn 0;\n'
+        )
+        with mock.patch(
+            'b4.get_main_config', return_value={'sashiko-url': 'https://sashiko.dev'}
+        ):
+            with mock.patch(
+                'b4.review.checks._fetch_sashiko_patchset',
+                return_value=self._SASHIKO_RESPONSE,
+            ):
                 with mock.patch('b4.review.checks.clear_sashiko_cache'):
                     with mock.patch('b4.git_run_command') as mock_git:
                         mock_git.return_value = (0, real_diff)
                         with mock.patch.object(review, 'save_tracking_ref'):
                             result = review._integrate_sashiko_reviews(
-                                '/tmp', 'cover', tracking, commit_shas, patches,
-                                branch='b4/review/test')
+                                '/tmp',
+                                'cover',
+                                tracking,
+                                commit_shas,
+                                patches,
+                                branch='b4/review/test',
+                            )
 
         assert result is True
         # Patch 1 should have sashiko comments
@@ -2535,26 +2756,31 @@ class TestIntegrateSashikoReviews:
         ]
         series = {'message_id': 'cover@example.com'}
         tracking = {'series': series, 'patches': patches}
-        with mock.patch('b4.get_main_config',
-                        return_value={'sashiko-url': 'https://sashiko.dev'}):
-            with mock.patch('b4.review.checks._fetch_sashiko_patchset',
-                            return_value=self._SASHIKO_RESPONSE):
+        with mock.patch(
+            'b4.get_main_config', return_value={'sashiko-url': 'https://sashiko.dev'}
+        ):
+            with mock.patch(
+                'b4.review.checks._fetch_sashiko_patchset',
+                return_value=self._SASHIKO_RESPONSE,
+            ):
                 with mock.patch('b4.review.checks.clear_sashiko_cache'):
                     result = review._integrate_sashiko_reviews(
-                        '/tmp', '', tracking, ['aaa'], patches)
+                        '/tmp', '', tracking, ['aaa'], patches
+                    )
         assert result is False
 
     def test_uses_header_info_msgid_fallback(self) -> None:
         """Falls back to header-info.msgid when message_id is missing."""
         series = {'header-info': {'msgid': 'cover@example.com'}}
         tracking = {'series': series, 'patches': []}
-        with mock.patch('b4.get_main_config',
-                        return_value={'sashiko-url': 'https://sashiko.dev'}):
-            with mock.patch('b4.review.checks._fetch_sashiko_patchset',
-                            return_value=None) as mock_fetch:
+        with mock.patch(
+            'b4.get_main_config', return_value={'sashiko-url': 'https://sashiko.dev'}
+        ):
+            with mock.patch(
+                'b4.review.checks._fetch_sashiko_patchset', return_value=None
+            ) as mock_fetch:
                 with mock.patch('b4.review.checks.clear_sashiko_cache'):
-                    review._integrate_sashiko_reviews(
-                        '/tmp', '', tracking, [], [])
+                    review._integrate_sashiko_reviews('/tmp', '', tracking, [], [])
         # Should have been called with the header-info msgid
         mock_fetch.assert_called_once_with('cover@example.com', 'https://sashiko.dev')
 
@@ -2573,10 +2799,10 @@ class TestIntegrateSashikoReviews:
                     'patch_id': 100,
                     'status': 'Reviewed',
                     'inline_review': (
-                        "commit aaa\nAuthor: Test\n\nOld\n\n"
-                        "> diff --git a/f.c b/f.c\n"
-                        "> @@ -1,3 +1,4 @@\n>  ctx\n> +new\n"
-                        "\nOld review comment.\n"
+                        'commit aaa\nAuthor: Test\n\nOld\n\n'
+                        '> diff --git a/f.c b/f.c\n'
+                        '> @@ -1,3 +1,4 @@\n>  ctx\n> +new\n'
+                        '\nOld review comment.\n'
                     ),
                 },
                 {
@@ -2584,31 +2810,40 @@ class TestIntegrateSashikoReviews:
                     'patch_id': 100,
                     'status': 'Reviewed',
                     'inline_review': (
-                        "commit bbb\nAuthor: Test\n\nNew\n\n"
-                        "> diff --git a/f.c b/f.c\n"
-                        "> @@ -1,3 +1,4 @@\n>  ctx\n> +new\n"
-                        "\nNew review comment.\n"
+                        'commit bbb\nAuthor: Test\n\nNew\n\n'
+                        '> diff --git a/f.c b/f.c\n'
+                        '> @@ -1,3 +1,4 @@\n>  ctx\n> +new\n'
+                        '\nNew review comment.\n'
                     ),
                 },
             ],
         }
-        patches: List[Dict[str, Any]] = [{'header-info': {'msgid': 'patch1@example.com'}}]
+        patches: List[Dict[str, Any]] = [
+            {'header-info': {'msgid': 'patch1@example.com'}}
+        ]
         series = {'message_id': 'cover@example.com'}
         tracking = {'series': series, 'patches': patches}
         real_diff = (
-            "diff --git a/f.c b/f.c\n--- a/f.c\n+++ b/f.c\n"
-            "@@ -1,3 +1,4 @@\n ctx\n+new\n ctx\n"
+            'diff --git a/f.c b/f.c\n--- a/f.c\n+++ b/f.c\n'
+            '@@ -1,3 +1,4 @@\n ctx\n+new\n ctx\n'
         )
-        with mock.patch('b4.get_main_config',
-                        return_value={'sashiko-url': 'https://sashiko.dev'}):
-            with mock.patch('b4.review.checks._fetch_sashiko_patchset',
-                            return_value=patchset):
+        with mock.patch(
+            'b4.get_main_config', return_value={'sashiko-url': 'https://sashiko.dev'}
+        ):
+            with mock.patch(
+                'b4.review.checks._fetch_sashiko_patchset', return_value=patchset
+            ):
                 with mock.patch('b4.review.checks.clear_sashiko_cache'):
                     with mock.patch('b4.git_run_command', return_value=(0, real_diff)):
                         with mock.patch.object(review, 'save_tracking_ref'):
                             review._integrate_sashiko_reviews(
-                                '/tmp', '', tracking, ['aaa'], patches,
-                                branch='b4/review/test')
+                                '/tmp',
+                                '',
+                                tracking,
+                                ['aaa'],
+                                patches,
+                                branch='b4/review/test',
+                            )
         comments = patches[0]['reviews']['sashiko@sashiko.dev']['comments']
         # Should have the newer review's comment
         assert any('New review comment' in c['text'] for c in comments)
@@ -2624,66 +2859,79 @@ class TestIntegrateSashikoReviews:
                     'sashiko@sashiko.dev': {
                         'name': 'sashiko.dev',
                         'sashiko-review-id': 200,
-                        'comments': [{'path': 'f.c', 'line': 11, 'text': 'Already here.'}],
+                        'comments': [
+                            {'path': 'f.c', 'line': 11, 'text': 'Already here.'}
+                        ],
                     },
                 },
             },
         ]
         series = {'message_id': 'cover@example.com'}
         tracking = {'series': series, 'patches': patches}
-        with mock.patch('b4.get_main_config',
-                        return_value={'sashiko-url': 'https://sashiko.dev'}):
-            with mock.patch('b4.review.checks._fetch_sashiko_patchset',
-                            return_value=self._SASHIKO_RESPONSE):
+        with mock.patch(
+            'b4.get_main_config', return_value={'sashiko-url': 'https://sashiko.dev'}
+        ):
+            with mock.patch(
+                'b4.review.checks._fetch_sashiko_patchset',
+                return_value=self._SASHIKO_RESPONSE,
+            ):
                 with mock.patch('b4.review.checks.clear_sashiko_cache'):
                     with mock.patch('b4.git_run_command') as mock_git:
                         result = review._integrate_sashiko_reviews(
-                            '/tmp', '', tracking, ['aaaa'], patches)
+                            '/tmp', '', tracking, ['aaaa'], patches
+                        )
         # Should not have called git diff (skipped re-parsing)
         mock_git.assert_not_called()
         assert result is False
         # Original comments untouched
-        assert patches[0]['reviews']['sashiko@sashiko.dev']['comments'][0]['text'] == 'Already here.'
+        assert (
+            patches[0]['reviews']['sashiko@sashiko.dev']['comments'][0]['text']
+            == 'Already here.'
+        )
 
 
 class TestIntegrateFollowupInlineComments:
     """Tests for _integrate_followup_inline_comments()."""
 
     _FOLLOWUP_BODY_WITH_DIFF = (
-        "On Mon, Jan 1, 2024, Dev <dev@test.com> wrote:\n"
-        "> diff --git a/fs/file.c b/fs/file.c\n"
-        "> index abc123..def456 100644\n"
-        "> --- a/fs/file.c\n"
-        "> +++ b/fs/file.c\n"
-        "> @@ -10,3 +10,4 @@ void f(void)\n"
-        ">  \tint x;\n"
-        "> +\tptr = malloc(sz);\n"
-        "\n"
-        "Missing NULL check after malloc.\n"
-        "\n"
-        ">  \treturn 0;\n"
+        'On Mon, Jan 1, 2024, Dev <dev@test.com> wrote:\n'
+        '> diff --git a/fs/file.c b/fs/file.c\n'
+        '> index abc123..def456 100644\n'
+        '> --- a/fs/file.c\n'
+        '> +++ b/fs/file.c\n'
+        '> @@ -10,3 +10,4 @@ void f(void)\n'
+        '>  \tint x;\n'
+        '> +\tptr = malloc(sz);\n'
+        '\n'
+        'Missing NULL check after malloc.\n'
+        '\n'
+        '>  \treturn 0;\n'
     )
 
     _FOLLOWUP_BODY_NO_DIFF = (
-        "I think this approach makes sense, but can we also\n"
-        "add a test for the error path?\n"
+        'I think this approach makes sense, but can we also\n'
+        'add a test for the error path?\n'
     )
 
-    def _make_followup_comments(self, bodies_by_patch: Dict[int, List[str]]) -> Dict[int, List[Dict[str, Any]]]:
+    def _make_followup_comments(
+        self, bodies_by_patch: Dict[int, List[str]]
+    ) -> Dict[int, List[Dict[str, Any]]]:
         """Build a followup_comments dict like _parse_msgs_to_followup_comments returns."""
         result: Dict[int, List[Dict[str, Any]]] = {}
         for display_idx, body_list in bodies_by_patch.items():
             entries = []
             for i, body in enumerate(body_list):
-                entries.append({
-                    'body': body,
-                    'fromname': f'Reviewer {i}',
-                    'fromemail': f'reviewer{i}@example.com',
-                    'date': '2024-01-01',
-                    'msgid': f'followup{display_idx}-{i}@example.com',
-                    'subject': 'Re: [PATCH]',
-                    'depth': 0,
-                })
+                entries.append(
+                    {
+                        'body': body,
+                        'fromname': f'Reviewer {i}',
+                        'fromemail': f'reviewer{i}@example.com',
+                        'date': '2024-01-01',
+                        'msgid': f'followup{display_idx}-{i}@example.com',
+                        'subject': 'Re: [PATCH]',
+                        'depth': 0,
+                    }
+                )
             result[display_idx] = entries
         return result
 
@@ -2691,7 +2939,8 @@ class TestIntegrateFollowupInlineComments:
         """Without a thread-blob, returns False immediately."""
         tracking: Dict[str, Any] = {'series': {}, 'patches': []}
         result = review._integrate_followup_inline_comments(
-            '/tmp', '', tracking, [], [])
+            '/tmp', '', tracking, [], []
+        )
         assert result is False
 
     def test_extracts_inline_comments_from_followup(self) -> None:
@@ -2707,30 +2956,39 @@ class TestIntegrateFollowupInlineComments:
         commit_shas = ['aaaa']
 
         # Follow-up body that quotes diff with a comment
-        followup_comments = self._make_followup_comments({
-            1: [self._FOLLOWUP_BODY_WITH_DIFF],  # display_idx 1 = patch 0
-        })
+        followup_comments = self._make_followup_comments(
+            {
+                1: [self._FOLLOWUP_BODY_WITH_DIFF],  # display_idx 1 = patch 0
+            }
+        )
 
         real_diff = (
-            "diff --git a/fs/file.c b/fs/file.c\n"
-            "index abc123..def456 100644\n"
-            "--- a/fs/file.c\n"
-            "+++ b/fs/file.c\n"
-            "@@ -10,3 +10,4 @@ void f(void)\n"
-            " \tint x;\n"
-            "+\tptr = malloc(sz);\n"
-            " \treturn 0;\n"
+            'diff --git a/fs/file.c b/fs/file.c\n'
+            'index abc123..def456 100644\n'
+            '--- a/fs/file.c\n'
+            '+++ b/fs/file.c\n'
+            '@@ -10,3 +10,4 @@ void f(void)\n'
+            ' \tint x;\n'
+            '+\tptr = malloc(sz);\n'
+            ' \treturn 0;\n'
         )
 
         with mock.patch('b4.review.tracking.get_thread_mbox', return_value=b'mbox'):
             with mock.patch('liblore.utils.split_mbox', return_value=[]):
-                with mock.patch('b4.review.tracking._parse_msgs_to_followup_comments',
-                                return_value=followup_comments):
+                with mock.patch(
+                    'b4.review.tracking._parse_msgs_to_followup_comments',
+                    return_value=followup_comments,
+                ):
                     with mock.patch('b4.git_run_command', return_value=(0, real_diff)):
                         with mock.patch.object(review, 'save_tracking_ref'):
                             result = review._integrate_followup_inline_comments(
-                                '/tmp', 'cover', tracking, commit_shas, patches,
-                                branch='b4/review/test')
+                                '/tmp',
+                                'cover',
+                                tracking,
+                                commit_shas,
+                                patches,
+                                branch='b4/review/test',
+                            )
 
         assert result is True
         assert 'reviews' in patches[0]
@@ -2750,16 +3008,21 @@ class TestIntegrateFollowupInlineComments:
             'thread-blob': 'abc123',
         }
         tracking = {'series': series, 'patches': patches}
-        followup_comments = self._make_followup_comments({
-            1: [self._FOLLOWUP_BODY_NO_DIFF],
-        })
+        followup_comments = self._make_followup_comments(
+            {
+                1: [self._FOLLOWUP_BODY_NO_DIFF],
+            }
+        )
 
         with mock.patch('b4.review.tracking.get_thread_mbox', return_value=b'mbox'):
             with mock.patch('liblore.utils.split_mbox', return_value=[]):
-                with mock.patch('b4.review.tracking._parse_msgs_to_followup_comments',
-                                return_value=followup_comments):
+                with mock.patch(
+                    'b4.review.tracking._parse_msgs_to_followup_comments',
+                    return_value=followup_comments,
+                ):
                     result = review._integrate_followup_inline_comments(
-                        '/tmp', '', tracking, ['aaa'], patches)
+                        '/tmp', '', tracking, ['aaa'], patches
+                    )
         assert result is False
         assert 'reviews' not in patches[0]
 
@@ -2773,16 +3036,21 @@ class TestIntegrateFollowupInlineComments:
             'thread-blob': 'abc123',
         }
         tracking = {'series': series, 'patches': patches}
-        followup_comments = self._make_followup_comments({
-            0: [self._FOLLOWUP_BODY_WITH_DIFF],  # cover letter
-        })
+        followup_comments = self._make_followup_comments(
+            {
+                0: [self._FOLLOWUP_BODY_WITH_DIFF],  # cover letter
+            }
+        )
 
         with mock.patch('b4.review.tracking.get_thread_mbox', return_value=b'mbox'):
             with mock.patch('liblore.utils.split_mbox', return_value=[]):
-                with mock.patch('b4.review.tracking._parse_msgs_to_followup_comments',
-                                return_value=followup_comments):
+                with mock.patch(
+                    'b4.review.tracking._parse_msgs_to_followup_comments',
+                    return_value=followup_comments,
+                ):
                     result = review._integrate_followup_inline_comments(
-                        '/tmp', '', tracking, ['aaa'], patches)
+                        '/tmp', '', tracking, ['aaa'], patches
+                    )
         assert result is False
 
     def test_multiple_reviewers_same_patch(self) -> None:
@@ -2795,30 +3063,39 @@ class TestIntegrateFollowupInlineComments:
             'thread-blob': 'abc123',
         }
         tracking = {'series': series, 'patches': patches}
-        followup_comments = self._make_followup_comments({
-            1: [self._FOLLOWUP_BODY_WITH_DIFF, self._FOLLOWUP_BODY_WITH_DIFF],
-        })
+        followup_comments = self._make_followup_comments(
+            {
+                1: [self._FOLLOWUP_BODY_WITH_DIFF, self._FOLLOWUP_BODY_WITH_DIFF],
+            }
+        )
 
         real_diff = (
-            "diff --git a/fs/file.c b/fs/file.c\n"
-            "index abc123..def456 100644\n"
-            "--- a/fs/file.c\n"
-            "+++ b/fs/file.c\n"
-            "@@ -10,3 +10,4 @@ void f(void)\n"
-            " \tint x;\n"
-            "+\tptr = malloc(sz);\n"
-            " \treturn 0;\n"
+            'diff --git a/fs/file.c b/fs/file.c\n'
+            'index abc123..def456 100644\n'
+            '--- a/fs/file.c\n'
+            '+++ b/fs/file.c\n'
+            '@@ -10,3 +10,4 @@ void f(void)\n'
+            ' \tint x;\n'
+            '+\tptr = malloc(sz);\n'
+            ' \treturn 0;\n'
         )
 
         with mock.patch('b4.review.tracking.get_thread_mbox', return_value=b'mbox'):
             with mock.patch('liblore.utils.split_mbox', return_value=[]):
-                with mock.patch('b4.review.tracking._parse_msgs_to_followup_comments',
-                                return_value=followup_comments):
+                with mock.patch(
+                    'b4.review.tracking._parse_msgs_to_followup_comments',
+                    return_value=followup_comments,
+                ):
                     with mock.patch('b4.git_run_command', return_value=(0, real_diff)):
                         with mock.patch.object(review, 'save_tracking_ref'):
                             result = review._integrate_followup_inline_comments(
-                                '/tmp', 'cover', tracking, ['aaa'], patches,
-                                branch='b4/review/test')
+                                '/tmp',
+                                'cover',
+                                tracking,
+                                ['aaa'],
+                                patches,
+                                branch='b4/review/test',
+                            )
 
         assert result is True
         reviews = patches[0]['reviews']
@@ -2835,7 +3112,9 @@ class TestIntegrateFollowupInlineComments:
                     'reviewer0@example.com': {
                         'name': 'Reviewer 0',
                         'followup-msgid': 'followup1-0@example.com',
-                        'comments': [{'path': 'fs/file.c', 'line': 11, 'text': 'Already here.'}],
+                        'comments': [
+                            {'path': 'fs/file.c', 'line': 11, 'text': 'Already here.'}
+                        ],
                     },
                 },
             },
@@ -2845,22 +3124,30 @@ class TestIntegrateFollowupInlineComments:
             'thread-blob': 'abc123',
         }
         tracking = {'series': series, 'patches': patches}
-        followup_comments = self._make_followup_comments({
-            1: [self._FOLLOWUP_BODY_WITH_DIFF],
-        })
+        followup_comments = self._make_followup_comments(
+            {
+                1: [self._FOLLOWUP_BODY_WITH_DIFF],
+            }
+        )
 
         with mock.patch('b4.review.tracking.get_thread_mbox', return_value=b'mbox'):
             with mock.patch('liblore.utils.split_mbox', return_value=[]):
-                with mock.patch('b4.review.tracking._parse_msgs_to_followup_comments',
-                                return_value=followup_comments):
+                with mock.patch(
+                    'b4.review.tracking._parse_msgs_to_followup_comments',
+                    return_value=followup_comments,
+                ):
                     with mock.patch('b4.git_run_command') as mock_git:
                         result = review._integrate_followup_inline_comments(
-                            '/tmp', '', tracking, ['aaa'], patches)
+                            '/tmp', '', tracking, ['aaa'], patches
+                        )
         # Should not have called git diff (skipped re-parsing)
         mock_git.assert_not_called()
         assert result is False
         # Original comments untouched
-        assert patches[0]['reviews']['reviewer0@example.com']['comments'][0]['text'] == 'Already here.'
+        assert (
+            patches[0]['reviews']['reviewer0@example.com']['comments'][0]['text']
+            == 'Already here.'
+        )
 
 
 class TestFollowupItemPerMessage:
@@ -2888,6 +3175,7 @@ class TestFollowupItemPerMessage:
     def test_followup_item_keyed_by_msgid(self) -> None:
         """FollowupItem stores msgid, not fromemail."""
         from b4.review_tui._review_app import FollowupItem
+
         item = FollowupItem('Alice', 1, 'reply-1@example.com')
         assert item.msgid == 'reply-1@example.com'
         assert item.display_idx == 1
@@ -2895,6 +3183,7 @@ class TestFollowupItemPerMessage:
     def test_selected_followup_enables_reply_in_preview(self) -> None:
         """check_action returns True for edit_reply when a follow-up is selected."""
         from b4.review_tui._review_app import ReviewApp
+
         app = ReviewApp(self._make_session())
         app._preview_mode = True
         app._selected_followup_msgid = 'reply@example.com'
@@ -2903,6 +3192,7 @@ class TestFollowupItemPerMessage:
     def test_selected_followup_cleared_on_show_content(self) -> None:
         """_selected_followup_msgid is reset when switching patches."""
         from b4.review_tui._review_app import ReviewApp
+
         app = ReviewApp(self._make_session())
         app._selected_followup_msgid = 'reply@example.com'
         # Verify it was set
@@ -2935,8 +3225,9 @@ index aaa..bbb 100644
 """
 
 
-def _make_patch_msg(subject: str, from_addr: str, date: str,
-                    body: str = '', msgid: str = '') -> email.message.EmailMessage:
+def _make_patch_msg(
+    subject: str, from_addr: str, date: str, body: str = '', msgid: str = ''
+) -> email.message.EmailMessage:
     """Build a minimal EmailMessage that LoreMailbox can parse as a patch."""
     msg = email.message.EmailMessage()
     msg['Subject'] = subject
@@ -3017,6 +3308,7 @@ class TestGetLoreSeriesVersionMismatch:
 
 # -- Tests for collect_review_emails() ----------------------------------------
 
+
 class TestCollectReviewEmails:
     """Tests for collect_review_emails() filtering logic.
 
@@ -3059,60 +3351,65 @@ class TestCollectReviewEmails:
     # Use a sentinel email message so we can count how many were produced.
     _FAKE_MSG = mock.sentinel.email_msg
 
-    @mock.patch('b4.review._review._build_review_email',
-                return_value=_FAKE_MSG)
-    @mock.patch('b4.get_user_config',
-                return_value={'name': 'Maintainer', 'email': MY_EMAIL})
-    def test_sends_normal_cover_review(self, _cfg: mock.Mock,
-                                       _build: mock.Mock) -> None:
+    @mock.patch('b4.review._review._build_review_email', return_value=_FAKE_MSG)
+    @mock.patch(
+        'b4.get_user_config', return_value={'name': 'Maintainer', 'email': MY_EMAIL}
+    )
+    def test_sends_normal_cover_review(
+        self, _cfg: mock.Mock, _build: mock.Mock
+    ) -> None:
         """A cover review without sent-revision produces one email."""
         series = self._make_series({self.MY_EMAIL: self._review()})
         msgs = review.collect_review_emails(series, [], 'cover', '', [])
         assert len(msgs) == 1
 
-    @mock.patch('b4.review._review._build_review_email',
-                return_value=_FAKE_MSG)
-    @mock.patch('b4.get_user_config',
-                return_value={'name': 'Maintainer', 'email': MY_EMAIL})
-    def test_skips_cover_with_sent_revision(self, _cfg: mock.Mock,
-                                             _build: mock.Mock) -> None:
+    @mock.patch('b4.review._review._build_review_email', return_value=_FAKE_MSG)
+    @mock.patch(
+        'b4.get_user_config', return_value={'name': 'Maintainer', 'email': MY_EMAIL}
+    )
+    def test_skips_cover_with_sent_revision(
+        self, _cfg: mock.Mock, _build: mock.Mock
+    ) -> None:
         """Cover review stamped with sent-revision is not re-sent."""
         series = self._make_series(
-            {self.MY_EMAIL: self._review(**{'sent-revision': 1})})
+            {self.MY_EMAIL: self._review(**{'sent-revision': 1})}
+        )
         msgs = review.collect_review_emails(series, [], 'cover', '', [])
         assert msgs == []
 
-    @mock.patch('b4.review._review._build_review_email',
-                return_value=_FAKE_MSG)
-    @mock.patch('b4.get_user_config',
-                return_value={'name': 'Maintainer', 'email': MY_EMAIL})
-    def test_sends_normal_patch_review(self, _cfg: mock.Mock,
-                                       _build: mock.Mock) -> None:
+    @mock.patch('b4.review._review._build_review_email', return_value=_FAKE_MSG)
+    @mock.patch(
+        'b4.get_user_config', return_value={'name': 'Maintainer', 'email': MY_EMAIL}
+    )
+    def test_sends_normal_patch_review(
+        self, _cfg: mock.Mock, _build: mock.Mock
+    ) -> None:
         """A patch review without sent-revision produces one email."""
         series = self._make_series()
         patch = self._make_patch({self.MY_EMAIL: self._review()})
         msgs = review.collect_review_emails(series, [patch], 'cover', '', ['sha1'])
         assert len(msgs) == 1
 
-    @mock.patch('b4.review._review._build_review_email',
-                return_value=_FAKE_MSG)
-    @mock.patch('b4.get_user_config',
-                return_value={'name': 'Maintainer', 'email': MY_EMAIL})
-    def test_skips_patch_with_sent_revision(self, _cfg: mock.Mock,
-                                             _build: mock.Mock) -> None:
+    @mock.patch('b4.review._review._build_review_email', return_value=_FAKE_MSG)
+    @mock.patch(
+        'b4.get_user_config', return_value={'name': 'Maintainer', 'email': MY_EMAIL}
+    )
+    def test_skips_patch_with_sent_revision(
+        self, _cfg: mock.Mock, _build: mock.Mock
+    ) -> None:
         """Patch review stamped with sent-revision is not re-sent."""
         series = self._make_series()
-        patch = self._make_patch(
-            {self.MY_EMAIL: self._review(**{'sent-revision': 1})})
+        patch = self._make_patch({self.MY_EMAIL: self._review(**{'sent-revision': 1})})
         msgs = review.collect_review_emails(series, [patch], 'cover', '', ['sha1'])
         assert msgs == []
 
-    @mock.patch('b4.review._review._build_review_email',
-                return_value=_FAKE_MSG)
-    @mock.patch('b4.get_user_config',
-                return_value={'name': 'Maintainer', 'email': MY_EMAIL})
-    def test_skips_patch_auto_skipped_after_upgrade(self, _cfg: mock.Mock,
-                                                     _build: mock.Mock) -> None:
+    @mock.patch('b4.review._review._build_review_email', return_value=_FAKE_MSG)
+    @mock.patch(
+        'b4.get_user_config', return_value={'name': 'Maintainer', 'email': MY_EMAIL}
+    )
+    def test_skips_patch_auto_skipped_after_upgrade(
+        self, _cfg: mock.Mock, _build: mock.Mock
+    ) -> None:
         """Patch auto-marked skip+skip-reason during upgrade is not re-sent.
 
         This is the combo A+B fix: the upgrade step sets patch-state=skip
@@ -3121,38 +3418,49 @@ class TestCollectReviewEmails:
         prevent re-sending; this test exercises the skip-state path.
         """
         series = self._make_series()
-        patch = self._make_patch({self.MY_EMAIL: self._review(
-            **{'sent-revision': 1,
-               'patch-state': 'skip',
-               'skip-reason': 'Patch unchanged from v1; review already sent'})})
+        patch = self._make_patch(
+            {
+                self.MY_EMAIL: self._review(
+                    **{
+                        'sent-revision': 1,
+                        'patch-state': 'skip',
+                        'skip-reason': 'Patch unchanged from v1; review already sent',
+                    }
+                )
+            }
+        )
         msgs = review.collect_review_emails(series, [patch], 'cover', '', ['sha1'])
         assert msgs == []
 
-    @mock.patch('b4.review._review._build_review_email',
-                return_value=_FAKE_MSG)
-    @mock.patch('b4.get_user_config',
-                return_value={'name': 'Maintainer', 'email': MY_EMAIL})
-    def test_only_unsent_patches_included(self, _cfg: mock.Mock,
-                                          _build: mock.Mock) -> None:
+    @mock.patch('b4.review._review._build_review_email', return_value=_FAKE_MSG)
+    @mock.patch(
+        'b4.get_user_config', return_value={'name': 'Maintainer', 'email': MY_EMAIL}
+    )
+    def test_only_unsent_patches_included(
+        self, _cfg: mock.Mock, _build: mock.Mock
+    ) -> None:
         """Mix of sent and unsent patches: only unsent ones produce emails."""
         series = self._make_series()
         sent_patch = self._make_patch(
-            {self.MY_EMAIL: self._review(**{'sent-revision': 1})})
-        fresh_patch = self._make_patch(
-            {self.MY_EMAIL: self._review()})
+            {self.MY_EMAIL: self._review(**{'sent-revision': 1})}
+        )
+        fresh_patch = self._make_patch({self.MY_EMAIL: self._review()})
         msgs = review.collect_review_emails(
-            series, [sent_patch, fresh_patch], 'cover', '', ['sha1', 'sha2'])
+            series, [sent_patch, fresh_patch], 'cover', '', ['sha1', 'sha2']
+        )
         assert len(msgs) == 1
 
-    @mock.patch('b4.review._review._build_review_email',
-                return_value=_FAKE_MSG)
-    @mock.patch('b4.get_user_config',
-                return_value={'name': 'Maintainer', 'email': MY_EMAIL})
+    @mock.patch('b4.review._review._build_review_email', return_value=_FAKE_MSG)
+    @mock.patch(
+        'b4.get_user_config', return_value={'name': 'Maintainer', 'email': MY_EMAIL}
+    )
     def test_skip_state_without_sent_revision_still_skipped(
-            self, _cfg: mock.Mock, _build: mock.Mock) -> None:
+        self, _cfg: mock.Mock, _build: mock.Mock
+    ) -> None:
         """Explicit skip state (manually set, no sent-revision) is honoured."""
         series = self._make_series()
         patch = self._make_patch(
-            {self.MY_EMAIL: self._review(**{'patch-state': 'skip'})})
+            {self.MY_EMAIL: self._review(**{'patch-state': 'skip'})}
+        )
         msgs = review.collect_review_emails(series, [patch], 'cover', '', ['sha1'])
         assert msgs == []
diff --git a/src/tests/test_review_checks.py b/src/tests/test_review_checks.py
index c866082..1469bf3 100644
--- a/src/tests/test_review_checks.py
+++ b/src/tests/test_review_checks.py
@@ -13,8 +13,10 @@ from b4.review import checks
 # Helpers
 # ---------------------------------------------------------------------------
 
-def _make_msg(subject: str = 'test patch', msgid: str = 'abc@example.com',
-              body: str = 'dummy') -> EmailMessage:
+
+def _make_msg(
+    subject: str = 'test patch', msgid: str = 'abc@example.com', body: str = 'dummy'
+) -> EmailMessage:
     """Create a minimal EmailMessage for testing."""
     msg = EmailMessage()
     msg['Subject'] = subject
@@ -27,6 +29,7 @@ def _make_msg(subject: str = 'test patch', msgid: str = 'abc@example.com',
 # SQLite cache: store / retrieve / delete / cleanup
 # ---------------------------------------------------------------------------
 
+
 class TestCacheDb:
     """Tests for the CI check cache database."""
 
@@ -43,10 +46,20 @@ class TestCacheDb:
     def test_store_and_retrieve(self, tmp_path: pytest.TempPathFactory) -> None:
         conn = checks.get_db()
         results = [
-            {'tool': 'lint', 'status': 'pass', 'summary': 'ok',
-             'url': '', 'details': ''},
-            {'tool': 'build', 'status': 'fail', 'summary': 'broken',
-             'url': 'https://ci.example.com', 'details': 'error on line 5'},
+            {
+                'tool': 'lint',
+                'status': 'pass',
+                'summary': 'ok',
+                'url': '',
+                'details': '',
+            },
+            {
+                'tool': 'build',
+                'status': 'fail',
+                'summary': 'broken',
+                'url': 'https://ci.example.com',
+                'details': 'error on line 5',
+            },
         ]
         checks.store_results(conn, 'msg1@example', results)
         cached = checks.get_cached_results(conn, ['msg1@example'])
@@ -70,10 +83,12 @@ class TestCacheDb:
 
     def test_store_replaces_existing(self, tmp_path: pytest.TempPathFactory) -> None:
         conn = checks.get_db()
-        checks.store_results(conn, 'msg@ex', [
-            {'tool': 'lint', 'status': 'pass', 'summary': 'v1'}])
-        checks.store_results(conn, 'msg@ex', [
-            {'tool': 'lint', 'status': 'fail', 'summary': 'v2'}])
+        checks.store_results(
+            conn, 'msg@ex', [{'tool': 'lint', 'status': 'pass', 'summary': 'v1'}]
+        )
+        checks.store_results(
+            conn, 'msg@ex', [{'tool': 'lint', 'status': 'fail', 'summary': 'v2'}]
+        )
         cached = checks.get_cached_results(conn, ['msg@ex'])
         assert cached['msg@ex'][0]['status'] == 'fail'
         assert cached['msg@ex'][0]['summary'] == 'v2'
@@ -81,10 +96,8 @@ class TestCacheDb:
 
     def test_delete_results(self, tmp_path: pytest.TempPathFactory) -> None:
         conn = checks.get_db()
-        checks.store_results(conn, 'a@ex', [
-            {'tool': 't1', 'status': 'pass'}])
-        checks.store_results(conn, 'b@ex', [
-            {'tool': 't1', 'status': 'pass'}])
+        checks.store_results(conn, 'a@ex', [{'tool': 't1', 'status': 'pass'}])
+        checks.store_results(conn, 'b@ex', [{'tool': 't1', 'status': 'pass'}])
         checks.delete_results(conn, ['a@ex'])
         cached = checks.get_cached_results(conn, ['a@ex', 'b@ex'])
         assert 'a@ex' not in cached
@@ -98,16 +111,17 @@ class TestCacheDb:
 
     def test_cleanup_old(self, tmp_path: pytest.TempPathFactory) -> None:
         conn = checks.get_db()
-        checks.store_results(conn, 'recent@ex', [
-            {'tool': 't', 'status': 'pass'}])
+        checks.store_results(conn, 'recent@ex', [{'tool': 't', 'status': 'pass'}])
         # Manually backdate one row
-        old_date = (datetime.datetime.now(datetime.timezone.utc)
-                    - datetime.timedelta(days=200)).isoformat()
+        old_date = (
+            datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=200)
+        ).isoformat()
         conn.execute(
-            "INSERT OR REPLACE INTO check_results"
-            " (msgid, tool, status, checked_at)"
-            " VALUES (?, ?, ?, ?)",
-            ('old@ex', 't', 'pass', old_date))
+            'INSERT OR REPLACE INTO check_results'
+            ' (msgid, tool, status, checked_at)'
+            ' VALUES (?, ?, ?, ?)',
+            ('old@ex', 't', 'pass', old_date),
+        )
         conn.commit()
         deleted = checks.cleanup_old(conn, max_days=180)
         assert deleted == 1
@@ -121,6 +135,7 @@ class TestCacheDb:
 # parse_cmd
 # ---------------------------------------------------------------------------
 
+
 class TestParseCmd:
     """Tests for parse_cmd shell splitting."""
 
@@ -128,34 +143,39 @@ class TestParseCmd:
         assert checks.parse_cmd('/usr/bin/check') == ['/usr/bin/check']
 
     def test_with_args(self) -> None:
-        assert checks.parse_cmd('check --verbose -q') == [
-            'check', '--verbose', '-q']
+        assert checks.parse_cmd('check --verbose -q') == ['check', '--verbose', '-q']
 
     def test_quoted_arg(self) -> None:
-        assert checks.parse_cmd('check "hello world"') == [
-            'check', 'hello world']
+        assert checks.parse_cmd('check "hello world"') == ['check', 'hello world']
 
     def test_single_quotes(self) -> None:
-        assert checks.parse_cmd("check 'hello world'") == [
-            'check', 'hello world']
+        assert checks.parse_cmd("check 'hello world'") == ['check', 'hello world']
 
 
 # ---------------------------------------------------------------------------
 # _run_builtin_checkpatch output parsing
 # ---------------------------------------------------------------------------
 
+
 class TestBuiltinCheckpatch:
     """Tests for _run_builtin_checkpatch output parsing."""
 
-    def _run(self, stdout: str, stderr: str = '',
-             ecode: int = 0, topdir: str = '/fake') -> List[Dict[str, str]]:
+    def _run(
+        self, stdout: str, stderr: str = '', ecode: int = 0, topdir: str = '/fake'
+    ) -> List[Dict[str, str]]:
         msg = _make_msg()
-        with mock.patch('os.access', return_value=True), \
-             mock.patch('b4._run_command', return_value=(
-                 ecode,
-                 stdout.encode() if stdout else b'',
-                 stderr.encode() if stderr else b'')), \
-             mock.patch('b4.LoreMessage.get_msg_as_bytes', return_value=b''):
+        with (
+            mock.patch('os.access', return_value=True),
+            mock.patch(
+                'b4._run_command',
+                return_value=(
+                    ecode,
+                    stdout.encode() if stdout else b'',
+                    stderr.encode() if stderr else b'',
+                ),
+            ),
+            mock.patch('b4.LoreMessage.get_msg_as_bytes', return_value=b''),
+        ):
             return checks._run_builtin_checkpatch(msg, topdir)
 
     def test_clean_pass(self) -> None:
@@ -215,17 +235,25 @@ class TestBuiltinCheckpatch:
 # _run_external_cmd JSON protocol
 # ---------------------------------------------------------------------------
 
+
 class TestRunExternalCmd:
     """Tests for _run_external_cmd JSON parsing."""
 
-    def _run(self, stdout: str, stderr: str = '',
-             ecode: int = 0) -> List[Dict[str, str]]:
+    def _run(
+        self, stdout: str, stderr: str = '', ecode: int = 0
+    ) -> List[Dict[str, str]]:
         msg = _make_msg()
-        with mock.patch('b4._run_command', return_value=(
-                ecode,
-                stdout.encode() if stdout else b'',
-                stderr.encode() if stderr else b'')), \
-             mock.patch('b4.LoreMessage.get_msg_as_bytes', return_value=b''):
+        with (
+            mock.patch(
+                'b4._run_command',
+                return_value=(
+                    ecode,
+                    stdout.encode() if stdout else b'',
+                    stderr.encode() if stderr else b'',
+                ),
+            ),
+            mock.patch('b4.LoreMessage.get_msg_as_bytes', return_value=b''),
+        ):
             return checks._run_external_cmd(['mycheck'], msg, '/fake')
 
     def test_valid_json_array(self) -> None:
@@ -286,16 +314,21 @@ class TestRunExternalCmd:
     def test_extra_env_set_during_run(self) -> None:
         captured_env: Dict[str, str] = {}
 
-        def fake_run(cmdargs: Any, stdin: Any = None,
-                     rundir: Any = None) -> Any:
+        def fake_run(cmdargs: Any, stdin: Any = None, rundir: Any = None) -> Any:
             captured_env['B4_TRACKING_FILE'] = os.environ.get('B4_TRACKING_FILE', '')
             return (0, b'[]', b'')
 
         msg = _make_msg()
-        with mock.patch('b4._run_command', side_effect=fake_run), \
-             mock.patch('b4.LoreMessage.get_msg_as_bytes', return_value=b''):
-            checks._run_external_cmd(['mycheck'], msg, '/fake',
-                                     extra_env={'B4_TRACKING_FILE': '/tmp/test.json'})
+        with (
+            mock.patch('b4._run_command', side_effect=fake_run),
+            mock.patch('b4.LoreMessage.get_msg_as_bytes', return_value=b''),
+        ):
+            checks._run_external_cmd(
+                ['mycheck'],
+                msg,
+                '/fake',
+                extra_env={'B4_TRACKING_FILE': '/tmp/test.json'},
+            )
         assert captured_env['B4_TRACKING_FILE'] == '/tmp/test.json'
         # Env var should be cleaned up after the call
         assert 'B4_TRACKING_FILE' not in os.environ
@@ -304,12 +337,17 @@ class TestRunExternalCmd:
         msg = _make_msg()
         os.environ['B4_TRACKING_FILE'] = 'original'
         try:
-            with mock.patch('b4._run_command', side_effect=RuntimeError('boom')), \
-                 mock.patch('b4.LoreMessage.get_msg_as_bytes', return_value=b''):
+            with (
+                mock.patch('b4._run_command', side_effect=RuntimeError('boom')),
+                mock.patch('b4.LoreMessage.get_msg_as_bytes', return_value=b''),
+            ):
                 try:
                     checks._run_external_cmd(
-                        ['mycheck'], msg, '/fake',
-                        extra_env={'B4_TRACKING_FILE': '/tmp/new.json'})
+                        ['mycheck'],
+                        msg,
+                        '/fake',
+                        extra_env={'B4_TRACKING_FILE': '/tmp/new.json'},
+                    )
                 except RuntimeError:
                     pass
             assert os.environ.get('B4_TRACKING_FILE') == 'original'
@@ -321,15 +359,18 @@ class TestRunExternalCmd:
 # _run_builtin_patchwork aggregation
 # ---------------------------------------------------------------------------
 
+
 class TestBuiltinPatchwork:
     """Tests for _run_builtin_patchwork status aggregation."""
 
     def _run(self, pw_checks: List[Dict[str, Any]]) -> List[Dict[str, str]]:
         msg = _make_msg(msgid='test@example.com')
-        with mock.patch('b4.LoreMessage.get_patchwork_data_by_msgid',
-                        return_value={'id': 42}), \
-             mock.patch('b4.review.pw_fetch_checks',
-                        return_value=pw_checks):
+        with (
+            mock.patch(
+                'b4.LoreMessage.get_patchwork_data_by_msgid', return_value={'id': 42}
+            ),
+            mock.patch('b4.review.pw_fetch_checks', return_value=pw_checks),
+        ):
             return checks._run_builtin_patchwork(msg, 'proj', 'https://pw.example.com')
 
     def test_all_success(self) -> None:
@@ -366,7 +407,12 @@ class TestBuiltinPatchwork:
 
     def test_details_are_json(self) -> None:
         pw = [
-            {'state': 'success', 'context': 'build', 'description': 'ok', 'url': 'http://x'},
+            {
+                'state': 'success',
+                'context': 'build',
+                'description': 'ok',
+                'url': 'http://x',
+            },
         ]
         results = self._run(pw)
         details = json.loads(results[0]['details'])
@@ -381,9 +427,13 @@ class TestBuiltinPatchwork:
 
     def test_lookup_failure_returns_empty(self) -> None:
         msg = _make_msg()
-        with mock.patch('b4.LoreMessage.get_patchwork_data_by_msgid',
-                        side_effect=LookupError('not found')):
-            result = checks._run_builtin_patchwork(msg, 'proj', 'https://pw.example.com')
+        with mock.patch(
+            'b4.LoreMessage.get_patchwork_data_by_msgid',
+            side_effect=LookupError('not found'),
+        ):
+            result = checks._run_builtin_patchwork(
+                msg, 'proj', 'https://pw.example.com'
+            )
         assert result == []
 
 
@@ -391,56 +441,59 @@ class TestBuiltinPatchwork:
 # High-level runners
 # ---------------------------------------------------------------------------
 
+
 class TestRunners:
     """Tests for run_perpatch_checks and run_series_checks."""
 
     def test_perpatch_dispatches_external(self) -> None:
         msg = _make_msg()
         data = json.dumps([{'tool': 'ci', 'status': 'pass', 'summary': 'ok'}])
-        with mock.patch('b4._run_command', return_value=(
-                0, data.encode(), b'')), \
-             mock.patch('b4.LoreMessage.get_msg_as_bytes', return_value=b''):
-            results = checks.run_perpatch_checks(
-                [('m1@ex', msg)], ['mycheck'], '/fake')
+        with (
+            mock.patch('b4._run_command', return_value=(0, data.encode(), b'')),
+            mock.patch('b4.LoreMessage.get_msg_as_bytes', return_value=b''),
+        ):
+            results = checks.run_perpatch_checks([('m1@ex', msg)], ['mycheck'], '/fake')
         assert 'm1@ex' in results
         assert results['m1@ex'][0]['tool'] == 'ci'
 
     def test_perpatch_exception_captured(self) -> None:
         msg = _make_msg()
-        with mock.patch('b4._run_command',
-                        side_effect=RuntimeError('boom')), \
-             mock.patch('b4.LoreMessage.get_msg_as_bytes', return_value=b''):
-            results = checks.run_perpatch_checks(
-                [('m1@ex', msg)], ['badcmd'], '/fake')
+        with (
+            mock.patch('b4._run_command', side_effect=RuntimeError('boom')),
+            mock.patch('b4.LoreMessage.get_msg_as_bytes', return_value=b''),
+        ):
+            results = checks.run_perpatch_checks([('m1@ex', msg)], ['badcmd'], '/fake')
         assert results['m1@ex'][0]['status'] == 'fail'
         assert 'boom' in results['m1@ex'][0]['summary']
 
     def test_series_dispatches_external(self) -> None:
         msg = _make_msg()
         data = json.dumps([{'tool': 'series-ci', 'status': 'warn'}])
-        with mock.patch('b4._run_command', return_value=(
-                0, data.encode(), b'')), \
-             mock.patch('b4.LoreMessage.get_msg_as_bytes', return_value=b''):
-            results = checks.run_series_checks(
-                ('cover@ex', msg), ['mycheck'], '/fake')
+        with (
+            mock.patch('b4._run_command', return_value=(0, data.encode(), b'')),
+            mock.patch('b4.LoreMessage.get_msg_as_bytes', return_value=b''),
+        ):
+            results = checks.run_series_checks(('cover@ex', msg), ['mycheck'], '/fake')
         assert len(results) == 1
         assert results[0]['tool'] == 'series-ci'
 
     def test_series_exception_captured(self) -> None:
         msg = _make_msg()
-        with mock.patch('b4._run_command',
-                        side_effect=RuntimeError('kaboom')), \
-             mock.patch('b4.LoreMessage.get_msg_as_bytes', return_value=b''):
-            results = checks.run_series_checks(
-                ('cover@ex', msg), ['badcmd'], '/fake')
+        with (
+            mock.patch('b4._run_command', side_effect=RuntimeError('kaboom')),
+            mock.patch('b4.LoreMessage.get_msg_as_bytes', return_value=b''),
+        ):
+            results = checks.run_series_checks(('cover@ex', msg), ['badcmd'], '/fake')
         assert results[0]['status'] == 'fail'
         assert 'kaboom' in results[0]['summary']
 
     def test_dispatch_builtin_checkpatch(self) -> None:
         msg = _make_msg()
-        with mock.patch('os.access', return_value=True), \
-             mock.patch('b4._run_command', return_value=(0, b'', b'')), \
-             mock.patch('b4.LoreMessage.get_msg_as_bytes', return_value=b''):
+        with (
+            mock.patch('os.access', return_value=True),
+            mock.patch('b4._run_command', return_value=(0, b'', b'')),
+            mock.patch('b4.LoreMessage.get_msg_as_bytes', return_value=b''),
+        ):
             results = checks._dispatch_cmd('_builtin_checkpatch', msg, '/fake')
         assert results[0]['tool'] == 'checkpatch'
 
@@ -454,6 +507,7 @@ class TestRunners:
 # _STATUS_ORDER module-level constant
 # ---------------------------------------------------------------------------
 
+
 class TestStatusOrder:
     """Verify the module-level status ordering constant."""
 
@@ -474,40 +528,72 @@ _SASHIKO_PATCHSET: Dict[str, Any] = {
     'status': 'Reviewed',
     'author': 'Test Author <test@example.com>',
     'patches': [
-        {'id': 1, 'message_id': 'patch1@example.com', 'part_index': 1,
-         'subject': '[PATCH 1/3] First patch', 'status': 'applied'},
-        {'id': 2, 'message_id': 'patch2@example.com', 'part_index': 2,
-         'subject': '[PATCH 2/3] Second patch', 'status': 'applied'},
-        {'id': 3, 'message_id': 'patch3@example.com', 'part_index': 3,
-         'subject': '[PATCH 3/3] Third patch', 'status': 'applied'},
+        {
+            'id': 1,
+            'message_id': 'patch1@example.com',
+            'part_index': 1,
+            'subject': '[PATCH 1/3] First patch',
+            'status': 'applied',
+        },
+        {
+            'id': 2,
+            'message_id': 'patch2@example.com',
+            'part_index': 2,
+            'subject': '[PATCH 2/3] Second patch',
+            'status': 'applied',
+        },
+        {
+            'id': 3,
+            'message_id': 'patch3@example.com',
+            'part_index': 3,
+            'subject': '[PATCH 3/3] Third patch',
+            'status': 'applied',
+        },
     ],
     'reviews': [
         {
-            'id': 100, 'patch_id': 1, 'status': 'Reviewed',
+            'id': 100,
+            'patch_id': 1,
+            'status': 'Reviewed',
             'result': 'Review completed successfully.',
-            'summary': '', 'inline_review': 'looks good',
-            'output': json.dumps({
-                'findings': [
-                    {'severity': 'Low', 'problem': 'Minor style issue'},
-                ],
-            }),
+            'summary': '',
+            'inline_review': 'looks good',
+            'output': json.dumps(
+                {
+                    'findings': [
+                        {'severity': 'Low', 'problem': 'Minor style issue'},
+                    ],
+                }
+            ),
         },
         {
-            'id': 101, 'patch_id': 2, 'status': 'Reviewed',
+            'id': 101,
+            'patch_id': 2,
+            'status': 'Reviewed',
             'result': 'Review completed successfully.',
-            'summary': '', 'inline_review': 'has issues',
-            'output': json.dumps({
-                'findings': [
-                    {'severity': 'Critical', 'problem': 'Use-after-free',
-                     'suggestion': 'Add proper locking'},
-                    {'severity': 'High', 'problem': 'Missing error check'},
-                ],
-            }),
+            'summary': '',
+            'inline_review': 'has issues',
+            'output': json.dumps(
+                {
+                    'findings': [
+                        {
+                            'severity': 'Critical',
+                            'problem': 'Use-after-free',
+                            'suggestion': 'Add proper locking',
+                        },
+                        {'severity': 'High', 'problem': 'Missing error check'},
+                    ],
+                }
+            ),
         },
         {
-            'id': 102, 'patch_id': 3, 'status': 'Skipped',
+            'id': 102,
+            'patch_id': 3,
+            'status': 'Skipped',
             'result': 'Skipped: touches only ignored files',
-            'summary': '', 'inline_review': '', 'output': '',
+            'summary': '',
+            'inline_review': '',
+            'output': '',
         },
     ],
 }
@@ -537,7 +623,8 @@ class TestSashikoCache:
 
         with mock.patch('b4.get_requests_session', return_value=session):
             data = checks._fetch_sashiko_patchset(
-                'cover@example.com', 'https://sashiko.dev')
+                'cover@example.com', 'https://sashiko.dev'
+            )
 
         assert data is not None
         assert data['id'] == 93
@@ -549,7 +636,8 @@ class TestSashikoCache:
         # Second call should use cache, not network
         session.get.reset_mock()
         data2 = checks._fetch_sashiko_patchset(
-            'patch2@example.com', 'https://sashiko.dev')
+            'patch2@example.com', 'https://sashiko.dev'
+        )
         session.get.assert_not_called()
         assert data2 is not None
         assert data2['id'] == 93
@@ -563,19 +651,22 @@ class TestSashikoCache:
 
         with mock.patch('b4.get_requests_session', return_value=session):
             data = checks._fetch_sashiko_patchset(
-                'unknown@example.com', 'https://sashiko.dev')
+                'unknown@example.com', 'https://sashiko.dev'
+            )
 
         assert data is None
         assert checks._sashiko_patchset_cache['unknown@example.com'] is None
 
     def test_fetch_network_error_caches_none(self) -> None:
         import requests
+
         session = mock.Mock()
         session.get.side_effect = requests.ConnectionError('offline')
 
         with mock.patch('b4.get_requests_session', return_value=session):
             data = checks._fetch_sashiko_patchset(
-                'test@example.com', 'https://sashiko.dev')
+                'test@example.com', 'https://sashiko.dev'
+            )
 
         assert data is None
         assert checks._sashiko_patchset_cache['test@example.com'] is None
@@ -597,9 +688,13 @@ class TestParseSashikoFindings:
         assert checks._parse_sashiko_findings({'output': 'not json'}) == []
 
     def test_critical_finding(self) -> None:
-        review = {'output': json.dumps({
-            'findings': [{'severity': 'Critical', 'problem': 'UAF bug'}],
-        })}
+        review = {
+            'output': json.dumps(
+                {
+                    'findings': [{'severity': 'Critical', 'problem': 'UAF bug'}],
+                }
+            )
+        }
         findings = checks._parse_sashiko_findings(review)
         assert len(findings) == 1
         assert findings[0]['status'] == 'fail'
@@ -607,52 +702,79 @@ class TestParseSashikoFindings:
         assert 'UAF bug' in findings[0]['description']
 
     def test_high_finding(self) -> None:
-        review = {'output': json.dumps({
-            'findings': [{'severity': 'High', 'problem': 'Missing check'}],
-        })}
+        review = {
+            'output': json.dumps(
+                {
+                    'findings': [{'severity': 'High', 'problem': 'Missing check'}],
+                }
+            )
+        }
         findings = checks._parse_sashiko_findings(review)
         assert findings[0]['status'] == 'fail'
         assert findings[0]['state'] == 'high'
 
     def test_medium_finding(self) -> None:
-        review = {'output': json.dumps({
-            'findings': [{'severity': 'Medium', 'problem': 'Questionable logic'}],
-        })}
+        review = {
+            'output': json.dumps(
+                {
+                    'findings': [
+                        {'severity': 'Medium', 'problem': 'Questionable logic'}
+                    ],
+                }
+            )
+        }
         findings = checks._parse_sashiko_findings(review)
         assert findings[0]['status'] == 'warn'
         assert findings[0]['state'] == 'medium'
 
     def test_low_finding(self) -> None:
-        review = {'output': json.dumps({
-            'findings': [{'severity': 'Low', 'problem': 'Style issue'}],
-        })}
+        review = {
+            'output': json.dumps(
+                {
+                    'findings': [{'severity': 'Low', 'problem': 'Style issue'}],
+                }
+            )
+        }
         findings = checks._parse_sashiko_findings(review)
         assert findings[0]['status'] == 'pass'
         assert findings[0]['state'] == 'low'
 
     def test_suggestion_appended(self) -> None:
-        review = {'output': json.dumps({
-            'findings': [{'severity': 'High', 'problem': 'Bug',
-                         'suggestion': 'Fix it'}],
-        })}
+        review = {
+            'output': json.dumps(
+                {
+                    'findings': [
+                        {'severity': 'High', 'problem': 'Bug', 'suggestion': 'Fix it'}
+                    ],
+                }
+            )
+        }
         findings = checks._parse_sashiko_findings(review)
         assert 'Bug' in findings[0]['description']
         assert 'Fix it' in findings[0]['description']
 
     def test_context_includes_severity(self) -> None:
-        review = {'output': json.dumps({
-            'findings': [{'severity': 'Medium', 'problem': 'test'}],
-        })}
+        review = {
+            'output': json.dumps(
+                {
+                    'findings': [{'severity': 'Medium', 'problem': 'test'}],
+                }
+            )
+        }
         findings = checks._parse_sashiko_findings(review)
         assert findings[0]['context'] == 'sashiko/medium'
 
     def test_multiple_findings(self) -> None:
-        review = {'output': json.dumps({
-            'findings': [
-                {'severity': 'Critical', 'problem': 'bad'},
-                {'severity': 'Low', 'problem': 'minor'},
-            ],
-        })}
+        review = {
+            'output': json.dumps(
+                {
+                    'findings': [
+                        {'severity': 'Critical', 'problem': 'bad'},
+                        {'severity': 'Low', 'problem': 'minor'},
+                    ],
+                }
+            )
+        }
         findings = checks._parse_sashiko_findings(review)
         assert len(findings) == 2
 
@@ -670,8 +792,7 @@ class TestSashikoFindingsSummary:
         assert summary == 'No findings'
 
     def test_single_critical(self) -> None:
-        findings = [{'status': 'fail', 'state': 'critical',
-                     'description': 'bad'}]
+        findings = [{'status': 'fail', 'state': 'critical', 'description': 'bad'}]
         worst, summary = checks._sashiko_findings_summary(findings)
         assert worst == 'fail'
         assert '1 critical' in summary
@@ -712,8 +833,12 @@ class TestRunBuiltinSashiko:
     def _prefill_cache(self, patchset: Optional[Dict[str, Any]] = None) -> None:
         """Pre-fill the cache so no HTTP calls are made."""
         ps = patchset if patchset is not None else _SASHIKO_PATCHSET
-        for key in ['cover@example.com', 'patch1@example.com',
-                    'patch2@example.com', 'patch3@example.com']:
+        for key in [
+            'cover@example.com',
+            'patch1@example.com',
+            'patch2@example.com',
+            'patch3@example.com',
+        ]:
             checks._sashiko_patchset_cache[key] = ps
 
     def test_no_msgid_returns_empty(self) -> None:
@@ -803,9 +928,15 @@ class TestRunBuiltinSashiko:
         assert 'incomplete' in results[0]['summary'].lower()
 
     def test_no_findings_pass(self) -> None:
-        reviews = [{'id': 100, 'patch_id': 1, 'status': 'Reviewed',
-                    'result': 'Review completed successfully.',
-                    'output': json.dumps({'findings': []})}]
+        reviews = [
+            {
+                'id': 100,
+                'patch_id': 1,
+                'status': 'Reviewed',
+                'result': 'Review completed successfully.',
+                'output': json.dumps({'findings': []}),
+            }
+        ]
         ps = dict(_SASHIKO_PATCHSET, reviews=reviews)
         self._prefill_cache(ps)
         msg = _make_msg(msgid='patch1@example.com')
@@ -814,8 +945,7 @@ class TestRunBuiltinSashiko:
         assert results[0]['summary'] == 'No findings'
 
     def test_pending_review_for_patch(self) -> None:
-        reviews = [{'id': 100, 'patch_id': 1, 'status': 'Pending',
-                    'output': ''}]
+        reviews = [{'id': 100, 'patch_id': 1, 'status': 'Pending', 'output': ''}]
         ps = dict(_SASHIKO_PATCHSET, reviews=reviews)
         self._prefill_cache(ps)
         msg = _make_msg(msgid='patch1@example.com')
@@ -824,8 +954,15 @@ class TestRunBuiltinSashiko:
         assert 'in progress' in results[0]['summary'].lower()
 
     def test_failed_review_for_patch(self) -> None:
-        reviews = [{'id': 100, 'patch_id': 1, 'status': 'Failed',
-                    'result': 'Token limit exceeded', 'output': ''}]
+        reviews = [
+            {
+                'id': 100,
+                'patch_id': 1,
+                'status': 'Failed',
+                'result': 'Token limit exceeded',
+                'output': '',
+            }
+        ]
         ps = dict(_SASHIKO_PATCHSET, reviews=reviews)
         self._prefill_cache(ps)
         msg = _make_msg(msgid='patch1@example.com')
@@ -863,16 +1000,20 @@ class TestSashikoAutoWire:
 
     def test_sashiko_added_when_url_configured(self) -> None:
         config = {'sashiko-url': 'https://sashiko.dev'}
-        with mock.patch('b4.get_main_config', return_value=config), \
-             mock.patch('b4.git_get_toplevel', return_value=None):
+        with (
+            mock.patch('b4.get_main_config', return_value=config),
+            mock.patch('b4.git_get_toplevel', return_value=None),
+        ):
             perpatch, series = checks.load_check_cmds()
         assert '_builtin_sashiko' in perpatch
         assert '_builtin_sashiko' in series
 
     def test_sashiko_not_added_without_url(self) -> None:
         config: Dict[str, Any] = {}
-        with mock.patch('b4.get_main_config', return_value=config), \
-             mock.patch('b4.git_get_toplevel', return_value=None):
+        with (
+            mock.patch('b4.get_main_config', return_value=config),
+            mock.patch('b4.git_get_toplevel', return_value=None),
+        ):
             perpatch, series = checks.load_check_cmds()
         assert '_builtin_sashiko' not in perpatch
         assert '_builtin_sashiko' not in series
@@ -883,8 +1024,10 @@ class TestSashikoAutoWire:
             'review-perpatch-check-cmd': ['_builtin_sashiko'],
             'review-series-check-cmd': ['_builtin_sashiko'],
         }
-        with mock.patch('b4.get_main_config', return_value=config), \
-             mock.patch('b4.git_get_toplevel', return_value=None):
+        with (
+            mock.patch('b4.get_main_config', return_value=config),
+            mock.patch('b4.git_get_toplevel', return_value=None),
+        ):
             perpatch, series = checks.load_check_cmds()
         assert perpatch.count('_builtin_sashiko') == 1
         assert series.count('_builtin_sashiko') == 1
@@ -899,7 +1042,8 @@ class TestSashikoDispatch:
         msg = _make_msg(msgid='test@ex')
         # Pre-cache so no HTTP call is made; use cover msgid
         checks._sashiko_patchset_cache['test@ex'] = dict(
-            _SASHIKO_PATCHSET, message_id='test@ex')
+            _SASHIKO_PATCHSET, message_id='test@ex'
+        )
         config = {'sashiko-url': 'https://sashiko.dev'}
         with mock.patch('b4.get_main_config', return_value=config):
             results = checks._dispatch_cmd('_builtin_sashiko', msg, '/fake')
diff --git a/src/tests/test_review_show_info.py b/src/tests/test_review_show_info.py
index db955c4..92dfaba 100644
--- a/src/tests/test_review_show_info.py
+++ b/src/tests/test_review_show_info.py
@@ -4,6 +4,7 @@
 # Copyright (C) 2024 by the Linux Foundation
 #
 """Tests for ``b4 review show-info``."""
+
 import json
 
 import pytest
@@ -20,15 +21,19 @@ from b4.review._review import (
 # Helpers
 # ---------------------------------------------------------------------------
 
-def _create_review_branch(gitdir: str, change_id: str,
-                          identifier: str = 'test-project',
-                          revision: int = 1,
-                          status: str = 'reviewing',
-                          subject: str = 'Test series',
-                          sender_name: str = 'Test Author',
-                          sender_email: str = 'test@example.com',
-                          link: str = '',
-                          num_real_commits: int = 0) -> str:
+
+def _create_review_branch(
+    gitdir: str,
+    change_id: str,
+    identifier: str = 'test-project',
+    revision: int = 1,
+    status: str = 'reviewing',
+    subject: str = 'Test series',
+    sender_name: str = 'Test Author',
+    sender_email: str = 'test@example.com',
+    link: str = '',
+    num_real_commits: int = 0,
+) -> str:
     """Create a fake b4 review branch with a proper tracking commit.
 
     When *num_real_commits* > 0, that many empty commits are created between
@@ -54,7 +59,9 @@ def _create_review_branch(gitdir: str, change_id: str,
     first_patch_commit = None
     for i in range(num_real_commits):
         ecode, _ = b4.git_run_command(
-            gitdir, ['commit', '--allow-empty', '-m', f'patch {i+1}: do thing {i+1}'])
+            gitdir,
+            ['commit', '--allow-empty', '-m', f'patch {i + 1}: do thing {i + 1}'],
+        )
         assert ecode == 0
         if i == 0:
             ecode, sha = b4.git_run_command(gitdir, ['rev-parse', 'HEAD'])
@@ -88,8 +95,7 @@ def _create_review_branch(gitdir: str, change_id: str,
     commit_msg = f'{subject}\n\n{b4.review.make_review_magic_json(trk)}'
 
     # Create the tracking commit
-    ecode, _ = b4.git_run_command(
-        gitdir, ['commit', '--allow-empty', '-m', commit_msg])
+    ecode, _ = b4.git_run_command(gitdir, ['commit', '--allow-empty', '-m', commit_msg])
     assert ecode == 0
 
     # Go back to master
@@ -103,12 +109,12 @@ def _create_review_branch(gitdir: str, change_id: str,
 # TestGetReviewInfo
 # ---------------------------------------------------------------------------
 
-class TestGetReviewInfo:
 
+class TestGetReviewInfo:
     def test_basic_info(self, gitdir: str) -> None:
-        branch = _create_review_branch(gitdir, 'basic-change-id',
-                                       subject='Basic test series',
-                                       status='reviewing')
+        branch = _create_review_branch(
+            gitdir, 'basic-change-id', subject='Basic test series', status='reviewing'
+        )
         info = get_review_info(gitdir, branch)
 
         assert info['branch'] == branch
@@ -122,15 +128,17 @@ class TestGetReviewInfo:
         assert info['first-patch-commit'] is not None
 
     def test_sender_format(self, gitdir: str) -> None:
-        branch = _create_review_branch(gitdir, 'sender-test',
-                                       sender_name='Alice Author',
-                                       sender_email='alice@example.com')
+        branch = _create_review_branch(
+            gitdir,
+            'sender-test',
+            sender_name='Alice Author',
+            sender_email='alice@example.com',
+        )
         info = get_review_info(gitdir, branch)
         assert info['sender'] == 'Alice Author <alice@example.com>'
 
     def test_commit_keys(self, gitdir: str) -> None:
-        branch = _create_review_branch(gitdir, 'commit-keys-test',
-                                       num_real_commits=3)
+        branch = _create_review_branch(gitdir, 'commit-keys-test', num_real_commits=3)
         info = get_review_info(gitdir, branch)
 
         assert info['num-patches'] == 3
@@ -147,8 +155,8 @@ class TestGetReviewInfo:
 # TestShowReviewInfo
 # ---------------------------------------------------------------------------
 
-class TestShowReviewInfo:
 
+class TestShowReviewInfo:
     def test_all_keys(self, gitdir: str, capsys: pytest.CaptureFixture[str]) -> None:
         _create_review_branch(gitdir, 'show-all-test', subject='All keys test')
         show_review_info('b4/review/show-all-test:_all')
@@ -163,7 +171,9 @@ class TestShowReviewInfo:
         out = capsys.readouterr().out
         assert out.strip() == 'applied'
 
-    def test_named_branch(self, gitdir: str, capsys: pytest.CaptureFixture[str]) -> None:
+    def test_named_branch(
+        self, gitdir: str, capsys: pytest.CaptureFixture[str]
+    ) -> None:
         branch = _create_review_branch(gitdir, 'named-branch-test')
         show_review_info(branch)
         out = capsys.readouterr().out
@@ -196,9 +206,11 @@ class TestShowReviewInfo:
 # TestListReviewBranches
 # ---------------------------------------------------------------------------
 
-class TestListReviewBranches:
 
-    def test_list_multiple(self, gitdir: str, capsys: pytest.CaptureFixture[str]) -> None:
+class TestListReviewBranches:
+    def test_list_multiple(
+        self, gitdir: str, capsys: pytest.CaptureFixture[str]
+    ) -> None:
         _create_review_branch(gitdir, 'list-alpha', subject='Alpha series')
         _create_review_branch(gitdir, 'list-bravo', subject='Bravo series')
         list_review_branches()
@@ -229,8 +241,8 @@ class TestListReviewBranches:
 # TestTargetBranchInInfo
 # ---------------------------------------------------------------------------
 
-class TestTargetBranchInInfo:
 
+class TestTargetBranchInInfo:
     def test_target_branch_in_info(self, gitdir: str) -> None:
         """Branch with target-branch in tracking data includes it in info."""
         branch_name = 'b4/review/target-info-test'
@@ -264,14 +276,19 @@ class TestTargetBranchInInfo:
             'patches': [],
         }
         commit_msg = f'Target info test\n\n{b4.review.make_review_magic_json(trk)}'
-        ecode, tree = b4.git_run_command(gitdir, ['rev-parse', f'{branch_name}^{{tree}}'])
+        ecode, tree = b4.git_run_command(
+            gitdir, ['rev-parse', f'{branch_name}^{{tree}}']
+        )
         assert ecode == 0
         ecode, new_sha = b4.git_run_command(
-            gitdir, ['commit-tree', tree.strip(), '-p', base_sha],
-            stdin=commit_msg.encode())
+            gitdir,
+            ['commit-tree', tree.strip(), '-p', base_sha],
+            stdin=commit_msg.encode(),
+        )
         assert ecode == 0
         ecode, _ = b4.git_run_command(
-            gitdir, ['update-ref', f'refs/heads/{branch_name}', new_sha.strip()])
+            gitdir, ['update-ref', f'refs/heads/{branch_name}', new_sha.strip()]
+        )
         assert ecode == 0
 
         info = get_review_info(gitdir, branch_name)
@@ -280,16 +297,19 @@ class TestTargetBranchInInfo:
     def test_target_branch_fallback(self, gitdir: str) -> None:
         """No per-series target + single config value = fallback shown."""
         from unittest.mock import patch as mock_patch
-        branch = _create_review_branch(gitdir, 'target-fallback-test',
-                                       subject='Fallback test')
-        with mock_patch('b4.review.tracking.get_review_target_branch_default',
-                        return_value='regulator/for-next'):
+
+        branch = _create_review_branch(
+            gitdir, 'target-fallback-test', subject='Fallback test'
+        )
+        with mock_patch(
+            'b4.review.tracking.get_review_target_branch_default',
+            return_value='regulator/for-next',
+        ):
             info = get_review_info(gitdir, branch)
         assert info['target-branch'] == 'regulator/for-next'
 
     def test_target_branch_none(self, gitdir: str) -> None:
         """No per-series target + no config = None."""
-        branch = _create_review_branch(gitdir, 'target-none-test',
-                                       subject='None test')
+        branch = _create_review_branch(gitdir, 'target-none-test', subject='None test')
         info = get_review_info(gitdir, branch)
         assert info['target-branch'] is None
diff --git a/src/tests/test_review_tracking.py b/src/tests/test_review_tracking.py
index a5fd903..e106312 100644
--- a/src/tests/test_review_tracking.py
+++ b/src/tests/test_review_tracking.py
@@ -71,19 +71,23 @@ class TestDbOperations:
             sender_email='author@example.com',
             sent_at='2024-01-15T10:00:00+00:00',
             message_id='test-msgid@example.com',
-            num_patches=3
+            num_patches=3,
         )
 
         assert track_id == 1
-        cursor = conn.execute('SELECT track_id, change_id, subject FROM series WHERE change_id = ?',
-                              ('test-change-id',))
+        cursor = conn.execute(
+            'SELECT track_id, change_id, subject FROM series WHERE change_id = ?',
+            ('test-change-id',),
+        )
         row = cursor.fetchone()
         assert row is not None
         assert row[0] == track_id
         assert row[2] == 'Test series subject'
         conn.close()
 
-    def test_add_series_with_pw_series_id(self, tmp_path: pytest.TempPathFactory) -> None:
+    def test_add_series_with_pw_series_id(
+        self, tmp_path: pytest.TempPathFactory
+    ) -> None:
         """Verify series can be added with patchwork series ID."""
         conn = review_tracking.init_db('pw-series-test')
         track_id = review_tracking.add_series_to_db(
@@ -96,27 +100,45 @@ class TestDbOperations:
             sent_at='2024-01-15T10:00:00+00:00',
             message_id='test-msgid@example.com',
             num_patches=3,
-            pw_series_id=12345
+            pw_series_id=12345,
         )
 
-        cursor = conn.execute('SELECT pw_series_id FROM series WHERE track_id = ?', (track_id,))
+        cursor = conn.execute(
+            'SELECT pw_series_id FROM series WHERE track_id = ?', (track_id,)
+        )
         row = cursor.fetchone()
         assert row[0] == 12345
         conn.close()
 
-    def test_add_series_multiple_revisions(self, tmp_path: pytest.TempPathFactory) -> None:
+    def test_add_series_multiple_revisions(
+        self, tmp_path: pytest.TempPathFactory
+    ) -> None:
         """Verify multiple revisions can be tracked for the same change-id."""
         conn = review_tracking.init_db('multi-rev-test')
 
         # Add v1
         track_id_v1 = review_tracking.add_series_to_db(
-            conn, 'change-123', 1, 'Subject v1', 'Author', 'a@example.com',
-            '2024-01-15T10:00:00+00:00', 'msgid-v1@example.com', 3
+            conn,
+            'change-123',
+            1,
+            'Subject v1',
+            'Author',
+            'a@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'msgid-v1@example.com',
+            3,
         )
         # Add v2
         track_id_v2 = review_tracking.add_series_to_db(
-            conn, 'change-123', 2, 'Subject v2', 'Author', 'a@example.com',
-            '2024-01-16T10:00:00+00:00', 'msgid-v2@example.com', 4
+            conn,
+            'change-123',
+            2,
+            'Subject v2',
+            'Author',
+            'a@example.com',
+            '2024-01-16T10:00:00+00:00',
+            'msgid-v2@example.com',
+            4,
         )
 
         # Different track_ids
@@ -124,7 +146,7 @@ class TestDbOperations:
 
         cursor = conn.execute(
             'SELECT track_id, revision, num_patches FROM series WHERE change_id = ? ORDER BY revision',
-            ('change-123',)
+            ('change-123',),
         )
         rows = cursor.fetchall()
         assert len(rows) == 2
@@ -137,12 +159,26 @@ class TestDbOperations:
         conn = review_tracking.init_db('upsert-test')
 
         track_id_1 = review_tracking.add_series_to_db(
-            conn, 'change-456', 1, 'Subject', 'Author', 'a@example.com',
-            '2024-01-15T10:00:00+00:00', 'msgid-old@example.com', 3
+            conn,
+            'change-456',
+            1,
+            'Subject',
+            'Author',
+            'a@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'msgid-old@example.com',
+            3,
         )
         track_id_2 = review_tracking.add_series_to_db(
-            conn, 'change-456', 1, 'Subject', 'Author', 'a@example.com',
-            '2024-01-15T10:00:00+00:00', 'msgid-new@example.com', 5
+            conn,
+            'change-456',
+            1,
+            'Subject',
+            'Author',
+            'a@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'msgid-new@example.com',
+            5,
         )
 
         # Same track_id after upsert
@@ -150,7 +186,7 @@ class TestDbOperations:
 
         cursor = conn.execute(
             'SELECT track_id, message_id, num_patches FROM series WHERE change_id = ? AND revision = ?',
-            ('change-456', 1)
+            ('change-456', 1),
         )
         row = cursor.fetchone()
         assert row == (track_id_1, 'msgid-new@example.com', 5)
@@ -161,19 +197,40 @@ class TestDbOperations:
         conn = review_tracking.init_db('pw-ids-test')
         # Add series with pw_series_id
         review_tracking.add_series_to_db(
-            conn, 'change-1', 1, 'Subject 1', 'Author', 'a@example.com',
-            '2024-01-15T10:00:00+00:00', 'msgid@example.com', 3,
-            pw_series_id=100
+            conn,
+            'change-1',
+            1,
+            'Subject 1',
+            'Author',
+            'a@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'msgid@example.com',
+            3,
+            pw_series_id=100,
         )
         review_tracking.add_series_to_db(
-            conn, 'change-2', 1, 'Subject 2', 'Author', 'a@example.com',
-            '2024-01-15T10:00:00+00:00', 'msgid@example.com', 3,
-            pw_series_id=200
+            conn,
+            'change-2',
+            1,
+            'Subject 2',
+            'Author',
+            'a@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'msgid@example.com',
+            3,
+            pw_series_id=200,
         )
         # Add series without pw_series_id
         review_tracking.add_series_to_db(
-            conn, 'change-3', 1, 'Subject 3', 'Author', 'a@example.com',
-            '2024-01-15T10:00:00+00:00', 'msgid@example.com', 3
+            conn,
+            'change-3',
+            1,
+            'Subject 3',
+            'Author',
+            'a@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'msgid@example.com',
+            3,
         )
         conn.close()
 
@@ -191,9 +248,16 @@ class TestDbOperations:
         """Verify is_pw_series_tracked works correctly."""
         conn = review_tracking.init_db('is-tracked-test')
         review_tracking.add_series_to_db(
-            conn, 'change-1', 1, 'Subject', 'Author', 'a@example.com',
-            '2024-01-15T10:00:00+00:00', 'msgid@example.com', 3,
-            pw_series_id=12345
+            conn,
+            'change-1',
+            1,
+            'Subject',
+            'Author',
+            'a@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'msgid@example.com',
+            3,
+            pw_series_id=12345,
         )
         conn.close()
 
@@ -210,12 +274,26 @@ class TestDbOperations:
         """Verify get_all_tracked_series returns all series with correct fields."""
         conn = review_tracking.init_db('all-series-test')
         review_tracking.add_series_to_db(
-            conn, 'change-1', 1, 'First series', 'Author One', 'one@example.com',
-            '2024-01-15T10:00:00+00:00', 'msgid-1@example.com', 3
+            conn,
+            'change-1',
+            1,
+            'First series',
+            'Author One',
+            'one@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'msgid-1@example.com',
+            3,
         )
         review_tracking.add_series_to_db(
-            conn, 'change-2', 2, 'Second series', 'Author Two', 'two@example.com',
-            '2024-01-16T10:00:00+00:00', 'msgid-2@example.com', 5
+            conn,
+            'change-2',
+            2,
+            'Second series',
+            'Author Two',
+            'two@example.com',
+            '2024-01-16T10:00:00+00:00',
+            'msgid-2@example.com',
+            5,
         )
         conn.close()
 
@@ -264,7 +342,9 @@ class TestRepoMetadata:
 
         # Create a real worktree
         worktree_dir = os.path.join(str(os.path.dirname(gitdir)), 'worktree')
-        out, _logstr = b4.git_run_command(gitdir, ['worktree', 'add', worktree_dir, '-b', 'wt-branch'])
+        out, _logstr = b4.git_run_command(
+            gitdir, ['worktree', 'add', worktree_dir, '-b', 'wt-branch']
+        )
         assert out == 0
 
         identifier = review_tracking.get_repo_identifier(worktree_dir)
@@ -293,7 +373,9 @@ class TestResolveIdentifier:
         result = review_tracking.resolve_identifier(cmdargs, gitdir)
         assert result == 'repo-identifier'
 
-    def test_returns_none_when_no_identifier(self, tmp_path: pytest.TempPathFactory) -> None:
+    def test_returns_none_when_no_identifier(
+        self, tmp_path: pytest.TempPathFactory
+    ) -> None:
         """Verify returns None when no identifier available."""
         cmdargs = argparse.Namespace(identifier=None)
         # Pass a non-git directory
@@ -306,20 +388,14 @@ class TestCmdEnroll:
 
     def test_enroll_creates_database(self, gitdir: str) -> None:
         """Verify enroll creates the database."""
-        cmdargs = argparse.Namespace(
-            repo_path=gitdir,
-            identifier='enroll-test'
-        )
+        cmdargs = argparse.Namespace(repo_path=gitdir, identifier='enroll-test')
         review_tracking.cmd_enroll(cmdargs)
 
         assert review_tracking.db_exists('enroll-test')
 
     def test_enroll_creates_metadata_file(self, gitdir: str) -> None:
         """Verify enroll creates metadata file in .git directory."""
-        cmdargs = argparse.Namespace(
-            repo_path=gitdir,
-            identifier='metadata-test'
-        )
+        cmdargs = argparse.Namespace(repo_path=gitdir, identifier='metadata-test')
         review_tracking.cmd_enroll(cmdargs)
 
         metadata_path = os.path.join(gitdir, '.git', 'b4-review', 'metadata.json')
@@ -327,10 +403,7 @@ class TestCmdEnroll:
 
     def test_enroll_uses_dirname_as_default_identifier(self, gitdir: str) -> None:
         """Verify enroll uses directory name as default identifier."""
-        cmdargs = argparse.Namespace(
-            repo_path=gitdir,
-            identifier=None
-        )
+        cmdargs = argparse.Namespace(repo_path=gitdir, identifier=None)
         review_tracking.cmd_enroll(cmdargs)
 
         dirname = os.path.basename(gitdir)
@@ -339,10 +412,7 @@ class TestCmdEnroll:
     def test_enroll_uses_current_directory_when_no_path(self, gitdir: str) -> None:
         """Verify enroll uses current directory when no path specified."""
         # gitdir fixture already changes cwd to the test repo
-        cmdargs = argparse.Namespace(
-            repo_path=None,
-            identifier='current-dir-test'
-        )
+        cmdargs = argparse.Namespace(repo_path=None, identifier='current-dir-test')
         review_tracking.cmd_enroll(cmdargs)
 
         assert review_tracking.db_exists('current-dir-test')
@@ -359,10 +429,7 @@ class TestCmdEnroll:
         oldcwd = os.getcwd()
         os.chdir(non_git_dir)
         try:
-            cmdargs = argparse.Namespace(
-                repo_path=None,
-                identifier='test'
-            )
+            cmdargs = argparse.Namespace(repo_path=None, identifier='test')
             with pytest.raises(SystemExit) as exc_info:
                 review_tracking.cmd_enroll(cmdargs)
             assert exc_info.value.code == 1
@@ -373,10 +440,7 @@ class TestCmdEnroll:
         self, tmp_path: pytest.TempPathFactory
     ) -> None:
         """Verify enroll fails for non-existent paths."""
-        cmdargs = argparse.Namespace(
-            repo_path='/nonexistent/path',
-            identifier='test'
-        )
+        cmdargs = argparse.Namespace(repo_path='/nonexistent/path', identifier='test')
         with pytest.raises(SystemExit) as exc_info:
             review_tracking.cmd_enroll(cmdargs)
         assert exc_info.value.code == 1
@@ -388,10 +452,7 @@ class TestCmdEnroll:
         non_git_dir = os.path.join(str(tmp_path), 'not-a-repo')
         os.makedirs(non_git_dir)
 
-        cmdargs = argparse.Namespace(
-            repo_path=non_git_dir,
-            identifier='test'
-        )
+        cmdargs = argparse.Namespace(repo_path=non_git_dir, identifier='test')
         with pytest.raises(SystemExit) as exc_info:
             review_tracking.cmd_enroll(cmdargs)
         assert exc_info.value.code == 1
@@ -399,17 +460,11 @@ class TestCmdEnroll:
     def test_enroll_fails_when_repo_already_enrolled(self, gitdir: str) -> None:
         """Verify enroll fails when repository already has metadata."""
         # First enrollment
-        cmdargs = argparse.Namespace(
-            repo_path=gitdir,
-            identifier='first-id'
-        )
+        cmdargs = argparse.Namespace(repo_path=gitdir, identifier='first-id')
         review_tracking.cmd_enroll(cmdargs)
 
         # Second enrollment of same repo should fail
-        cmdargs2 = argparse.Namespace(
-            repo_path=gitdir,
-            identifier='second-id'
-        )
+        cmdargs2 = argparse.Namespace(repo_path=gitdir, identifier='second-id')
         with pytest.raises(SystemExit) as exc_info:
             review_tracking.cmd_enroll(cmdargs2)
         assert exc_info.value.code == 1
@@ -420,10 +475,7 @@ class TestCmdEnroll:
     ) -> None:
         """Verify enroll can reuse existing database for different repo."""
         # Create database via first enrollment
-        cmdargs = argparse.Namespace(
-            repo_path=gitdir,
-            identifier='shared-db'
-        )
+        cmdargs = argparse.Namespace(repo_path=gitdir, identifier='shared-db')
         review_tracking.cmd_enroll(cmdargs)
 
         # Create a second git repo
@@ -431,10 +483,7 @@ class TestCmdEnroll:
         b4.git_run_command(None, ['init', second_repo])
 
         # Enroll second repo with same identifier - user confirms
-        cmdargs2 = argparse.Namespace(
-            repo_path=second_repo,
-            identifier='shared-db'
-        )
+        cmdargs2 = argparse.Namespace(repo_path=second_repo, identifier='shared-db')
         review_tracking.cmd_enroll(cmdargs2)
 
         # Metadata file should exist in second repo's .git
@@ -448,10 +497,7 @@ class TestCmdEnroll:
     ) -> None:
         """Verify enroll aborts when user declines to use existing database."""
         # Create database via first enrollment
-        cmdargs = argparse.Namespace(
-            repo_path=gitdir,
-            identifier='declined-db'
-        )
+        cmdargs = argparse.Namespace(repo_path=gitdir, identifier='declined-db')
         review_tracking.cmd_enroll(cmdargs)
 
         # Create a second git repo
@@ -459,10 +505,7 @@ class TestCmdEnroll:
         b4.git_run_command(None, ['init', second_repo])
 
         # Enroll second repo with same identifier - user declines
-        cmdargs2 = argparse.Namespace(
-            repo_path=second_repo,
-            identifier='declined-db'
-        )
+        cmdargs2 = argparse.Namespace(repo_path=second_repo, identifier='declined-db')
         with pytest.raises(SystemExit) as exc_info:
             review_tracking.cmd_enroll(cmdargs2)
         # Exit code 0 for user-initiated cancellation
@@ -478,13 +521,12 @@ class TestCmdEnroll:
         """Verify enroll from a worktree writes metadata to the shared .git."""
         # Create a real worktree
         worktree_dir = os.path.join(str(os.path.dirname(gitdir)), 'worktree')
-        out, _logstr = b4.git_run_command(gitdir, ['worktree', 'add', worktree_dir, '-b', 'wt-branch'])
+        out, _logstr = b4.git_run_command(
+            gitdir, ['worktree', 'add', worktree_dir, '-b', 'wt-branch']
+        )
         assert out == 0
 
-        cmdargs = argparse.Namespace(
-            repo_path=worktree_dir,
-            identifier='worktree-test'
-        )
+        cmdargs = argparse.Namespace(repo_path=worktree_dir, identifier='worktree-test')
         review_tracking.cmd_enroll(cmdargs)
 
         # Database should be created
@@ -496,22 +538,18 @@ class TestCmdEnroll:
     def test_enroll_from_worktree_already_enrolled(self, gitdir: str) -> None:
         """Verify enrolling from worktree exits 0 when repo already enrolled."""
         # Enroll the main repo first
-        cmdargs = argparse.Namespace(
-            repo_path=gitdir,
-            identifier='main-id'
-        )
+        cmdargs = argparse.Namespace(repo_path=gitdir, identifier='main-id')
         review_tracking.cmd_enroll(cmdargs)
 
         # Create a real worktree
         worktree_dir = os.path.join(str(os.path.dirname(gitdir)), 'worktree')
-        out, _logstr = b4.git_run_command(gitdir, ['worktree', 'add', worktree_dir, '-b', 'wt-branch'])
+        out, _logstr = b4.git_run_command(
+            gitdir, ['worktree', 'add', worktree_dir, '-b', 'wt-branch']
+        )
         assert out == 0
 
         # Enrolling from worktree with same identifier should exit 0
-        cmdargs2 = argparse.Namespace(
-            repo_path=worktree_dir,
-            identifier='main-id'
-        )
+        cmdargs2 = argparse.Namespace(repo_path=worktree_dir, identifier='main-id')
         with pytest.raises(SystemExit) as exc_info:
             review_tracking.cmd_enroll(cmdargs2)
         assert exc_info.value.code == 0
@@ -519,22 +557,18 @@ class TestCmdEnroll:
     def test_enroll_from_worktree_conflicting_identifier(self, gitdir: str) -> None:
         """Verify enrolling from worktree fails with a different identifier."""
         # Enroll the main repo first
-        cmdargs = argparse.Namespace(
-            repo_path=gitdir,
-            identifier='main-id'
-        )
+        cmdargs = argparse.Namespace(repo_path=gitdir, identifier='main-id')
         review_tracking.cmd_enroll(cmdargs)
 
         # Create a real worktree
         worktree_dir = os.path.join(str(os.path.dirname(gitdir)), 'worktree')
-        out, _logstr = b4.git_run_command(gitdir, ['worktree', 'add', worktree_dir, '-b', 'wt-branch'])
+        out, _logstr = b4.git_run_command(
+            gitdir, ['worktree', 'add', worktree_dir, '-b', 'wt-branch']
+        )
         assert out == 0
 
         # Enrolling from worktree with different identifier should fail
-        cmdargs2 = argparse.Namespace(
-            repo_path=worktree_dir,
-            identifier='different-id'
-        )
+        cmdargs2 = argparse.Namespace(repo_path=worktree_dir, identifier='different-id')
         with pytest.raises(SystemExit) as exc_info:
             review_tracking.cmd_enroll(cmdargs2)
         assert exc_info.value.code == 1
@@ -549,7 +583,9 @@ class TestCmdTrack:
         fromname: str = 'Test Author',
         fromemail: str = 'author@example.com',
         subject: str = 'Test patch',
-        date: datetime.datetime = datetime.datetime(2024, 1, 15, 10, 0, 0, tzinfo=datetime.timezone.utc)
+        date: datetime.datetime = datetime.datetime(
+            2024, 1, 15, 10, 0, 0, tzinfo=datetime.timezone.utc
+        ),
     ) -> mock.Mock:
         """Create a mock LoreMessage."""
         lmsg = mock.Mock()
@@ -571,7 +607,7 @@ class TestCmdTrack:
         first_patch_msgid: str = 'patch1@example.com',
         fromname: str = 'Test Author',
         fromemail: str = 'author@example.com',
-        subject: str = 'Test series'
+        subject: str = 'Test series',
     ) -> mock.Mock:
         """Create a mock LoreSeries."""
         lser = mock.Mock()
@@ -594,10 +630,7 @@ class TestCmdTrack:
     @mock.patch('b4.retrieve_messages')
     @mock.patch('b4.LoreMailbox')
     def test_track_with_change_id(
-        self,
-        mock_mailbox_class: mock.Mock,
-        mock_retrieve: mock.Mock,
-        gitdir: str
+        self, mock_mailbox_class: mock.Mock, mock_retrieve: mock.Mock, gitdir: str
     ) -> None:
         """Verify tracking a series with a change-id."""
         # Set up enrolled project
@@ -620,7 +653,7 @@ class TestCmdTrack:
             msgid=None,
             noparent=False,
             wantname=None,
-            wantver=None
+            wantver=None,
         )
         review_tracking.cmd_track(cmdargs)
 
@@ -635,10 +668,7 @@ class TestCmdTrack:
     @mock.patch('b4.retrieve_messages')
     @mock.patch('b4.LoreMailbox')
     def test_track_generates_change_id_without_change_id(
-        self,
-        mock_mailbox_class: mock.Mock,
-        mock_retrieve: mock.Mock,
-        gitdir: str
+        self, mock_mailbox_class: mock.Mock, mock_retrieve: mock.Mock, gitdir: str
     ) -> None:
         """Verify tracking generates a change-id when series has none."""
         cmdargs_enroll = argparse.Namespace(repo_path=gitdir, identifier='noid-test')
@@ -659,7 +689,7 @@ class TestCmdTrack:
             msgid=None,
             noparent=False,
             wantname=None,
-            wantver=None
+            wantver=None,
         )
         review_tracking.cmd_track(cmdargs)
 
@@ -675,21 +705,19 @@ class TestCmdTrack:
     @mock.patch('b4.retrieve_messages')
     @mock.patch('b4.LoreMailbox')
     def test_track_uses_first_patch_without_cover(
-        self,
-        mock_mailbox_class: mock.Mock,
-        mock_retrieve: mock.Mock,
-        gitdir: str
+        self, mock_mailbox_class: mock.Mock, mock_retrieve: mock.Mock, gitdir: str
     ) -> None:
         """Verify tracking uses first patch msgid when no cover letter."""
-        cmdargs_enroll = argparse.Namespace(repo_path=gitdir, identifier='no-cover-test')
+        cmdargs_enroll = argparse.Namespace(
+            repo_path=gitdir, identifier='no-cover-test'
+        )
         review_tracking.cmd_enroll(cmdargs_enroll)
 
         mock_msg = mock.Mock()
         mock_retrieve.return_value = ('test-msgid', [mock_msg])
 
         mock_lser = self._make_mock_lore_series(
-            has_cover=False,
-            first_patch_msgid='first-patch@example.com'
+            has_cover=False, first_patch_msgid='first-patch@example.com'
         )
         mock_mailbox = mock.Mock()
         mock_mailbox.series = {1: mock_lser}
@@ -702,7 +730,7 @@ class TestCmdTrack:
             msgid=None,
             noparent=False,
             wantname=None,
-            wantver=None
+            wantver=None,
         )
         review_tracking.cmd_track(cmdargs)
 
@@ -714,9 +742,7 @@ class TestCmdTrack:
 
     @mock.patch('b4.review.tracking.resolve_identifier', return_value=None)
     def test_track_fails_without_identifier(
-        self,
-        mock_resolve: mock.Mock,
-        tmp_path: pytest.TempPathFactory
+        self, mock_resolve: mock.Mock, tmp_path: pytest.TempPathFactory
     ) -> None:
         """Verify track fails when no identifier can be resolved."""
         cmdargs = argparse.Namespace(
@@ -725,7 +751,7 @@ class TestCmdTrack:
             msgid=None,
             noparent=False,
             wantname=None,
-            wantver=None
+            wantver=None,
         )
         with pytest.raises(SystemExit) as exc_info:
             review_tracking.cmd_track(cmdargs)
@@ -733,9 +759,7 @@ class TestCmdTrack:
 
     @mock.patch('b4.retrieve_messages')
     def test_track_fails_for_unenrolled_project(
-        self,
-        mock_retrieve: mock.Mock,
-        tmp_path: pytest.TempPathFactory
+        self, mock_retrieve: mock.Mock, tmp_path: pytest.TempPathFactory
     ) -> None:
         """Verify track fails when project is not enrolled."""
         cmdargs = argparse.Namespace(
@@ -744,7 +768,7 @@ class TestCmdTrack:
             msgid=None,
             noparent=False,
             wantname=None,
-            wantver=None
+            wantver=None,
         )
         with pytest.raises(SystemExit) as exc_info:
             review_tracking.cmd_track(cmdargs)
@@ -752,12 +776,12 @@ class TestCmdTrack:
 
     @mock.patch('b4.retrieve_messages')
     def test_track_fails_when_retrieval_fails(
-        self,
-        mock_retrieve: mock.Mock,
-        gitdir: str
+        self, mock_retrieve: mock.Mock, gitdir: str
     ) -> None:
         """Verify track fails when series retrieval fails."""
-        cmdargs_enroll = argparse.Namespace(repo_path=gitdir, identifier='retrieval-fail')
+        cmdargs_enroll = argparse.Namespace(
+            repo_path=gitdir, identifier='retrieval-fail'
+        )
         review_tracking.cmd_enroll(cmdargs_enroll)
 
         mock_retrieve.return_value = (None, None)
@@ -768,7 +792,7 @@ class TestCmdTrack:
             msgid=None,
             noparent=False,
             wantname=None,
-            wantver=None
+            wantver=None,
         )
         with pytest.raises(SystemExit) as exc_info:
             review_tracking.cmd_track(cmdargs)
@@ -781,8 +805,14 @@ class TestRevisions:
     def test_add_revision(self, tmp_path: pytest.TempPathFactory) -> None:
         """Verify a revision can be added and retrieved."""
         conn = review_tracking.init_db('rev-add-test')
-        review_tracking.add_revision(conn, 'change-abc', 1, 'msgid-v1@example.com',
-                                     subject='Test v1', link='https://lore.kernel.org/r/msgid-v1')
+        review_tracking.add_revision(
+            conn,
+            'change-abc',
+            1,
+            'msgid-v1@example.com',
+            subject='Test v1',
+            link='https://lore.kernel.org/r/msgid-v1',
+        )
         revs = review_tracking.get_revisions(conn, 'change-abc')
         assert len(revs) == 1
         assert revs[0]['change_id'] == 'change-abc'
@@ -840,7 +870,9 @@ class TestRevisions:
         assert result == {'change-a': 3, 'change-b': 2}
         conn.close()
 
-    def test_get_all_newest_revisions_empty(self, tmp_path: pytest.TempPathFactory) -> None:
+    def test_get_all_newest_revisions_empty(
+        self, tmp_path: pytest.TempPathFactory
+    ) -> None:
         """Verify bulk newest-revision query returns empty dict with no data."""
         conn = review_tracking.init_db('rev-bulk-newest-empty-test')
         assert review_tracking.get_all_newest_revisions(conn) == {}
@@ -860,9 +892,15 @@ class TestRevisions:
     def test_get_all_revisions_grouped(self, tmp_path: pytest.TempPathFactory) -> None:
         """Verify bulk grouped revisions returns correct per-change-id lists."""
         conn = review_tracking.init_db('rev-bulk-grouped-test')
-        review_tracking.add_revision(conn, 'change-a', 2, 'a-v2@example.com', subject='A v2')
-        review_tracking.add_revision(conn, 'change-a', 1, 'a-v1@example.com', subject='A v1')
-        review_tracking.add_revision(conn, 'change-b', 1, 'b-v1@example.com', subject='B v1')
+        review_tracking.add_revision(
+            conn, 'change-a', 2, 'a-v2@example.com', subject='A v2'
+        )
+        review_tracking.add_revision(
+            conn, 'change-a', 1, 'a-v1@example.com', subject='A v1'
+        )
+        review_tracking.add_revision(
+            conn, 'change-b', 1, 'b-v1@example.com', subject='B v1'
+        )
         result = review_tracking.get_all_revisions_grouped(conn)
         assert set(result.keys()) == {'change-a', 'change-b'}
         # change-a should be sorted ascending
@@ -870,7 +908,9 @@ class TestRevisions:
         assert len(result['change-b']) == 1
         conn.close()
 
-    def test_get_all_revisions_grouped_empty(self, tmp_path: pytest.TempPathFactory) -> None:
+    def test_get_all_revisions_grouped_empty(
+        self, tmp_path: pytest.TempPathFactory
+    ) -> None:
         """Verify bulk grouped revisions returns empty dict with no data."""
         conn = review_tracking.init_db('rev-bulk-grouped-empty-test')
         assert review_tracking.get_all_revisions_grouped(conn) == {}
@@ -881,48 +921,78 @@ class TestRevisions:
         conn = review_tracking.init_db('del-series-test')
         # Add a series with revisions
         review_tracking.add_series_to_db(
-            conn, 'change-del', 1, 'Subject', 'Author', 'a@example.com',
-            '2024-01-15T10:00:00+00:00', 'msgid@example.com', 3)
+            conn,
+            'change-del',
+            1,
+            'Subject',
+            'Author',
+            'a@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'msgid@example.com',
+            3,
+        )
         review_tracking.add_revision(conn, 'change-del', 1, 'msgid-v1@example.com')
         review_tracking.add_revision(conn, 'change-del', 2, 'msgid-v2@example.com')
         # Add another series that should not be affected
         review_tracking.add_series_to_db(
-            conn, 'change-keep', 1, 'Keep', 'Author', 'a@example.com',
-            '2024-01-15T10:00:00+00:00', 'keep@example.com', 1)
+            conn,
+            'change-keep',
+            1,
+            'Keep',
+            'Author',
+            'a@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'keep@example.com',
+            1,
+        )
         review_tracking.add_revision(conn, 'change-keep', 1, 'keep-v1@example.com')
 
         review_tracking.delete_series(conn, 'change-del')
 
         # Deleted change_id should be gone from both tables
-        cursor = conn.execute('SELECT * FROM series WHERE change_id = ?',
-                              ('change-del',))
+        cursor = conn.execute(
+            'SELECT * FROM series WHERE change_id = ?', ('change-del',)
+        )
         assert cursor.fetchone() is None
         assert review_tracking.get_revisions(conn, 'change-del') == []
 
         # Other change_id should be untouched
-        cursor = conn.execute('SELECT * FROM series WHERE change_id = ?',
-                              ('change-keep',))
+        cursor = conn.execute(
+            'SELECT * FROM series WHERE change_id = ?', ('change-keep',)
+        )
         assert cursor.fetchone() is not None
         assert len(review_tracking.get_revisions(conn, 'change-keep')) == 1
         conn.close()
 
+
 class TestUpdateSeriesStatus:
     """Tests for update_series_status()."""
 
     def test_updates_existing_series(self, tmp_path: pytest.TempPathFactory) -> None:
         conn = review_tracking.init_db('status-update-test')
         review_tracking.add_series_to_db(
-            conn, 'change-status', 1, 'Subject', 'Author', 'a@example.com',
-            '2024-01-15T10:00:00+00:00', 'msgid@example.com', 3)
+            conn,
+            'change-status',
+            1,
+            'Subject',
+            'Author',
+            'a@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'msgid@example.com',
+            3,
+        )
 
         review_tracking.update_series_status(conn, 'change-status', 'reviewing')
 
         cursor = conn.execute(
-            'SELECT status FROM series WHERE change_id = ?', ('change-status',))
+            'SELECT status FROM series WHERE change_id = ?', ('change-status',)
+        )
         assert cursor.fetchone()[0] == 'reviewing'
         conn.close()
 
-    def test_noop_for_nonexistent_change_id(self, tmp_path: pytest.TempPathFactory) -> None:
+    def test_noop_for_nonexistent_change_id(
+        self, tmp_path: pytest.TempPathFactory
+    ) -> None:
         conn = review_tracking.init_db('status-noop-test')
         # Should not raise
         review_tracking.update_series_status(conn, 'nonexistent', 'reviewing')
@@ -942,7 +1012,9 @@ class TestGitGetCommonDir:
     def test_returns_shared_git_dir_from_worktree(self, gitdir: str) -> None:
         """Verify git_get_common_dir returns the shared .git from a worktree."""
         worktree_dir = os.path.join(str(os.path.dirname(gitdir)), 'worktree')
-        out, _logstr = b4.git_run_command(gitdir, ['worktree', 'add', worktree_dir, '-b', 'wt-branch'])
+        out, _logstr = b4.git_run_command(
+            gitdir, ['worktree', 'add', worktree_dir, '-b', 'wt-branch']
+        )
         assert out == 0
 
         result = b4.git_get_common_dir(worktree_dir)
@@ -950,7 +1022,9 @@ class TestGitGetCommonDir:
         expected = os.path.join(gitdir, '.git')
         assert os.path.normpath(result) == os.path.normpath(expected)
 
-    def test_returns_none_for_non_git_dir(self, tmp_path: pytest.TempPathFactory) -> None:
+    def test_returns_none_for_non_git_dir(
+        self, tmp_path: pytest.TempPathFactory
+    ) -> None:
         """Verify git_get_common_dir returns None outside a git repo."""
         non_git = os.path.join(str(tmp_path), 'not-a-repo')
         os.makedirs(non_git)
@@ -967,12 +1041,13 @@ class TestReviewTargetBranch:
         assert b4.DEFAULT_CONFIG['review-target-branch'] is None
 
 
-def _create_review_branch(topdir: str, change_id: str, tracking_data: Dict[str, Any]) -> str:
+def _create_review_branch(
+    topdir: str, change_id: str, tracking_data: Dict[str, Any]
+) -> str:
     """Helper: create a b4/review/<change_id> branch with a tracking commit."""
     branch = f'b4/review/{change_id}'
     cover_text = f'Cover letter for {change_id}'
-    commit_msg = (cover_text + '\n\n'
-                  + b4.review.make_review_magic_json(tracking_data))
+    commit_msg = cover_text + '\n\n' + b4.review.make_review_magic_json(tracking_data)
     # Create an orphan-ish branch off current HEAD
     b4.git_run_command(topdir, ['branch', branch])
     # Create a tracking commit on it via commit-tree
@@ -983,11 +1058,15 @@ def _create_review_branch(topdir: str, change_id: str, tracking_data: Dict[str,
     assert ecode == 0
     parent = parent.strip()
     ecode, new_sha = b4.git_run_command(
-        topdir, ['commit-tree', tree, '-p', parent, '-F', '-'],
-        stdin=commit_msg.encode())
+        topdir,
+        ['commit-tree', tree, '-p', parent, '-F', '-'],
+        stdin=commit_msg.encode(),
+    )
     assert ecode == 0
     new_sha = new_sha.strip()
-    ecode, _ = b4.git_run_command(topdir, ['update-ref', f'refs/heads/{branch}', new_sha])
+    ecode, _ = b4.git_run_command(
+        topdir, ['update-ref', f'refs/heads/{branch}', new_sha]
+    )
     assert ecode == 0
     return branch
 
@@ -1057,7 +1136,9 @@ class TestUpdateTrackingStatus:
 
     def test_returns_false_for_missing_branch(self, gitdir: str) -> None:
         """Verify update_tracking_status returns False for non-existent branch."""
-        result = b4.review.update_tracking_status(gitdir, 'b4/review/nonexistent', 'replied')
+        result = b4.review.update_tracking_status(
+            gitdir, 'b4/review/nonexistent', 'replied'
+        )
         assert result is False
 
 
@@ -1104,9 +1185,14 @@ class TestGetReviewBranches:
 class TestRescanBranches:
     """Tests for rescan_branches()."""
 
-    def _make_tracking_data(self, change_id: str, identifier: str = 'rescan-proj',
-                            status: str = 'reviewing', revision: int = 1,
-                            subject: str = 'Test series') -> Dict[str, Any]:
+    def _make_tracking_data(
+        self,
+        change_id: str,
+        identifier: str = 'rescan-proj',
+        status: str = 'reviewing',
+        revision: int = 1,
+        subject: str = 'Test series',
+    ) -> Dict[str, Any]:
         return {
             'series': {
                 'identifier': identifier,
@@ -1136,8 +1222,9 @@ class TestRescanBranches:
         identifier = 'rescan-single'
         review_tracking.init_db(identifier).close()
 
-        tracking_data = self._make_tracking_data('single-change', identifier=identifier,
-                                                  status='replied')
+        tracking_data = self._make_tracking_data(
+            'single-change', identifier=identifier, status='replied'
+        )
         branch = _create_review_branch(gitdir, 'single-change', tracking_data)
 
         review_tracking.rescan_branches(identifier, gitdir, branch=branch)
@@ -1145,7 +1232,8 @@ class TestRescanBranches:
         conn = review_tracking.get_db(identifier)
         cursor = conn.execute(
             'SELECT change_id, status, revision FROM series WHERE change_id = ?',
-            ('single-change',))
+            ('single-change',),
+        )
         row = cursor.fetchone()
         assert row is not None
         assert row['change_id'] == 'single-change'
@@ -1159,8 +1247,16 @@ class TestRescanBranches:
         conn = review_tracking.init_db(identifier)
         # Add a series to DB with 'reviewing' status but no corresponding branch
         review_tracking.add_series_to_db(
-            conn, 'gone-change', 1, 'Gone series', 'Author', 'a@example.com',
-            '2024-01-15T10:00:00+00:00', 'msgid@example.com', 3)
+            conn,
+            'gone-change',
+            1,
+            'Gone series',
+            'Author',
+            'a@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'msgid@example.com',
+            3,
+        )
         review_tracking.update_series_status(conn, 'gone-change', 'reviewing')
         conn.close()
 
@@ -1168,7 +1264,8 @@ class TestRescanBranches:
 
         conn = review_tracking.get_db(identifier)
         cursor = conn.execute(
-            'SELECT status FROM series WHERE change_id = ?', ('gone-change',))
+            'SELECT status FROM series WHERE change_id = ?', ('gone-change',)
+        )
         row = cursor.fetchone()
         assert row['status'] == 'gone'
         conn.close()
@@ -1179,15 +1276,17 @@ class TestRescanBranches:
         review_tracking.init_db(identifier).close()
 
         # Create branch with a different identifier
-        tracking_data = self._make_tracking_data('mismatch-change',
-                                                  identifier='other-project')
+        tracking_data = self._make_tracking_data(
+            'mismatch-change', identifier='other-project'
+        )
         _create_review_branch(gitdir, 'mismatch-change', tracking_data)
 
         review_tracking.rescan_branches(identifier, gitdir)
 
         conn = review_tracking.get_db(identifier)
         cursor = conn.execute(
-            'SELECT * FROM series WHERE change_id = ?', ('mismatch-change',))
+            'SELECT * FROM series WHERE change_id = ?', ('mismatch-change',)
+        )
         row = cursor.fetchone()
         assert row is None
         conn.close()
@@ -1198,8 +1297,16 @@ class TestRescanBranches:
         conn = review_tracking.init_db(identifier)
         # Add an 'accepted' series with no branch — should NOT become 'gone'
         review_tracking.add_series_to_db(
-            conn, 'accepted-change', 1, 'Accepted', 'Author', 'a@example.com',
-            '2024-01-15T10:00:00+00:00', 'msgid@example.com', 3)
+            conn,
+            'accepted-change',
+            1,
+            'Accepted',
+            'Author',
+            'a@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'msgid@example.com',
+            3,
+        )
         review_tracking.update_series_status(conn, 'accepted-change', 'accepted')
         conn.close()
 
@@ -1207,7 +1314,8 @@ class TestRescanBranches:
 
         conn = review_tracking.get_db(identifier)
         cursor = conn.execute(
-            'SELECT status FROM series WHERE change_id = ?', ('accepted-change',))
+            'SELECT status FROM series WHERE change_id = ?', ('accepted-change',)
+        )
         row = cursor.fetchone()
         assert row['status'] == 'accepted'
         conn.close()
@@ -1252,8 +1360,9 @@ class TestRescanBranches:
         identifier = 'rescan-sha-change'
         review_tracking.init_db(identifier).close()
 
-        tracking_data = self._make_tracking_data('sha-change', identifier=identifier,
-                                                  status='reviewing')
+        tracking_data = self._make_tracking_data(
+            'sha-change', identifier=identifier, status='reviewing'
+        )
         branch = _create_review_branch(gitdir, 'sha-change', tracking_data)
 
         # First rescan: registers the branch with status 'reviewing'.
@@ -1262,23 +1371,28 @@ class TestRescanBranches:
 
         # Amend the tracking commit on the branch with a different status.
         tracking_data['series']['status'] = 'replied'
-        new_msg = ('Cover\n\n' + b4.review.make_review_magic_json(tracking_data))
+        new_msg = 'Cover\n\n' + b4.review.make_review_magic_json(tracking_data)
         _ecode, tree = b4.git_run_command(gitdir, ['rev-parse', f'{branch}^{{tree}}'])
         tree = tree.strip()
         _ecode, parent = b4.git_run_command(gitdir, ['rev-parse', branch])
         parent = parent.strip()
         _ecode, new_sha = b4.git_run_command(
-            gitdir, ['commit-tree', tree, '-p', parent, '-F', '-'],
-            stdin=new_msg.encode())
-        b4.git_run_command(gitdir, ['update-ref', f'refs/heads/{branch}', new_sha.strip()])
+            gitdir,
+            ['commit-tree', tree, '-p', parent, '-F', '-'],
+            stdin=new_msg.encode(),
+        )
+        b4.git_run_command(
+            gitdir, ['update-ref', f'refs/heads/{branch}', new_sha.strip()]
+        )
 
         # Second rescan: SHA changed, should re-read and update status.
         result = review_tracking.rescan_branches(identifier, gitdir)
         assert result['changed'] == 1
 
         conn = review_tracking.get_db(identifier)
-        row = conn.execute('SELECT status FROM series WHERE change_id = ?',
-                           ('sha-change',)).fetchone()
+        row = conn.execute(
+            'SELECT status FROM series WHERE change_id = ?', ('sha-change',)
+        ).fetchone()
         assert row['status'] == 'replied'
         conn.close()
 
@@ -1298,7 +1412,9 @@ def _make_test_mbox(n: int, date: str = 'Mon, 15 Jan 2024 10:00:00 +0000') -> by
 class TestFollowupCounts:
     """Tests for message_count / seen_message_count tracking."""
 
-    def test_schema_has_followup_columns(self, tmp_path: pytest.TempPathFactory) -> None:
+    def test_schema_has_followup_columns(
+        self, tmp_path: pytest.TempPathFactory
+    ) -> None:
         """Verify fresh DB has message_count, seen_message_count, last_update_check, last_activity_at."""
         conn = review_tracking.init_db('fc-schema-test')
         cursor = conn.execute('PRAGMA table_info(series)')
@@ -1309,13 +1425,16 @@ class TestFollowupCounts:
         assert 'last_activity_at' in col_names
         conn.close()
 
-    def test_migration_adds_followup_columns(self, tmp_path: pytest.TempPathFactory) -> None:
+    def test_migration_adds_followup_columns(
+        self, tmp_path: pytest.TempPathFactory
+    ) -> None:
         """Verify v1 DB gets followup/update columns during migration."""
         import sqlite3 as _sqlite3
+
         db_path = review_tracking.get_db_path('fc-migration-test')
         # Manually build a schema-version 1 database (no branch_sha, no followup cols)
         raw = _sqlite3.connect(db_path)
-        raw.executescript('''
+        raw.executescript("""
             CREATE TABLE schema_version (version INTEGER PRIMARY KEY);
             CREATE TABLE series (
                 track_id INTEGER PRIMARY KEY,
@@ -1324,7 +1443,7 @@ class TestFollowupCounts:
                 status TEXT DEFAULT 'new',
                 UNIQUE (change_id, revision)
             );
-        ''')
+        """)
         raw.execute('INSERT INTO schema_version (version) VALUES (1)')
         raw.commit()
         raw.close()
@@ -1345,8 +1464,7 @@ class TestFollowupCounts:
 
     @mock.patch('b4.review.tracking._fetch_thread_mbox_bytes')
     def test_first_fetch_initialises_seen(
-        self, mock_mbox_bytes: mock.Mock,
-        tmp_path: pytest.TempPathFactory
+        self, mock_mbox_bytes: mock.Mock, tmp_path: pytest.TempPathFactory
     ) -> None:
         """First update_message_counts sets seen = count (no badge shown yet)."""
         # 9 unique messages in the thread
@@ -1354,13 +1472,27 @@ class TestFollowupCounts:
 
         conn = review_tracking.init_db('fc-first-test')
         review_tracking.add_series_to_db(
-            conn, 'fc-change', 1, 'Subject', 'Author', 'a@example.com',
-            '2024-01-15T10:00:00+00:00', 'cover@example.com', 3)
+            conn,
+            'fc-change',
+            1,
+            'Subject',
+            'Author',
+            'a@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'cover@example.com',
+            3,
+        )
         conn.close()
 
-        series_list = [{'change_id': 'fc-change', 'revision': 1,
-                        'message_id': 'cover@example.com', 'num_patches': 3,
-                        'status': 'new'}]
+        series_list = [
+            {
+                'change_id': 'fc-change',
+                'revision': 1,
+                'message_id': 'cover@example.com',
+                'num_patches': 3,
+                'status': 'new',
+            }
+        ]
         result = review_tracking.update_message_counts('fc-first-test', series_list)
         assert result['updated'] == 1
         assert result['errors'] == 0
@@ -1368,7 +1500,9 @@ class TestFollowupCounts:
         conn = review_tracking.get_db('fc-first-test')
         row = conn.execute(
             'SELECT message_count, seen_message_count, last_update_check, last_activity_at'
-            ' FROM series WHERE change_id = ?', ('fc-change',)).fetchone()
+            ' FROM series WHERE change_id = ?',
+            ('fc-change',),
+        ).fetchone()
         assert row['message_count'] == 9
         # First fetch: seen initialised to same value — no badge yet
         assert row['seen_message_count'] == 9
@@ -1379,8 +1513,10 @@ class TestFollowupCounts:
     @mock.patch('b4.review.tracking._fetch_new_since')
     @mock.patch('b4.review.tracking._fetch_thread_mbox_bytes')
     def test_incremental_fetch_adds_new_count(
-        self, mock_fetch: mock.Mock,
-        mock_new_since: mock.Mock, tmp_path: pytest.TempPathFactory
+        self,
+        mock_fetch: mock.Mock,
+        mock_new_since: mock.Mock,
+        tmp_path: pytest.TempPathFactory,
     ) -> None:
         """Incremental update adds new message count and keeps seen unchanged."""
         # 9 unique messages in the thread
@@ -1390,13 +1526,27 @@ class TestFollowupCounts:
 
         conn = review_tracking.init_db('fc-incr-test')
         review_tracking.add_series_to_db(
-            conn, 'fc-change2', 1, 'Subject', 'Author', 'a@example.com',
-            '2024-01-15T10:00:00+00:00', 'cover2@example.com', 3)
+            conn,
+            'fc-change2',
+            1,
+            'Subject',
+            'Author',
+            'a@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'cover2@example.com',
+            3,
+        )
         conn.close()
 
-        series_list = [{'change_id': 'fc-change2', 'revision': 1,
-                        'message_id': 'cover2@example.com', 'num_patches': 3,
-                        'status': 'reviewing'}]
+        series_list = [
+            {
+                'change_id': 'fc-change2',
+                'revision': 1,
+                'message_id': 'cover2@example.com',
+                'num_patches': 3,
+                'status': 'reviewing',
+            }
+        ]
 
         # First fetch: seen = count = 9, last_update_check set
         review_tracking.update_message_counts('fc-incr-test', series_list)
@@ -1408,7 +1558,9 @@ class TestFollowupCounts:
         conn = review_tracking.get_db('fc-incr-test')
         row = conn.execute(
             'SELECT message_count, seen_message_count, last_activity_at FROM series'
-            ' WHERE change_id = ?', ('fc-change2',)).fetchone()
+            ' WHERE change_id = ?',
+            ('fc-change2',),
+        ).fetchone()
         assert row['message_count'] == 12  # 9 + 3
         assert row['seen_message_count'] == 9  # badge shows +3
         assert row['last_activity_at'] == '2024-02-01T00:00:00+00:00'
@@ -1417,28 +1569,45 @@ class TestFollowupCounts:
     @mock.patch('b4.review.tracking._fetch_new_since')
     @mock.patch('b4.review.tracking._fetch_thread_mbox_bytes')
     def test_incremental_noop_makes_no_db_write(
-        self, mock_fetch: mock.Mock,
-        mock_new_since: mock.Mock, tmp_path: pytest.TempPathFactory
+        self,
+        mock_fetch: mock.Mock,
+        mock_new_since: mock.Mock,
+        tmp_path: pytest.TempPathFactory,
     ) -> None:
         """Incremental update with zero new messages writes nothing to the DB."""
         # 9 unique messages in the thread
         mock_fetch.return_value = _make_test_mbox(9)
-        mock_new_since.return_value = (0, None)   # no new messages
+        mock_new_since.return_value = (0, None)  # no new messages
 
         conn = review_tracking.init_db('fc-noop-test')
         review_tracking.add_series_to_db(
-            conn, 'fc-change3', 1, 'Subject', 'Author', 'a@example.com',
-            '2024-01-15T10:00:00+00:00', 'cover3@example.com', 3)
+            conn,
+            'fc-change3',
+            1,
+            'Subject',
+            'Author',
+            'a@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'cover3@example.com',
+            3,
+        )
         conn.close()
 
-        series_list = [{'change_id': 'fc-change3', 'revision': 1,
-                        'message_id': 'cover3@example.com', 'num_patches': 3,
-                        'status': 'reviewing'}]
+        series_list = [
+            {
+                'change_id': 'fc-change3',
+                'revision': 1,
+                'message_id': 'cover3@example.com',
+                'num_patches': 3,
+                'status': 'reviewing',
+            }
+        ]
 
         # First fetch sets the baseline
         review_tracking.update_message_counts('fc-noop-test', series_list)
 
         import os
+
         db_path = review_tracking.get_db_path('fc-noop-test')
         mtime_before = os.path.getmtime(db_path)
 
@@ -1454,11 +1623,22 @@ class TestFollowupCounts:
         """mark_all_messages_seen sets seen_message_count = message_count."""
         conn = review_tracking.init_db('fc-seen-test')
         review_tracking.add_series_to_db(
-            conn, 'fc-seen', 1, 'Subject', 'Author', 'a@example.com',
-            '2024-01-15T10:00:00+00:00', 'cover3@example.com', 3)
+            conn,
+            'fc-seen',
+            1,
+            'Subject',
+            'Author',
+            'a@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'cover3@example.com',
+            3,
+        )
         # Manually set a delta
-        conn.execute('UPDATE series SET message_count = 10, seen_message_count = 6'
-                     ' WHERE change_id = ?', ('fc-seen',))
+        conn.execute(
+            'UPDATE series SET message_count = 10, seen_message_count = 6'
+            ' WHERE change_id = ?',
+            ('fc-seen',),
+        )
         conn.commit()
 
         review_tracking.mark_all_messages_seen(conn, 'fc-seen', 1)
@@ -1466,8 +1646,10 @@ class TestFollowupCounts:
 
         # Reopen with get_db to get row_factory for named column access
         conn = review_tracking.get_db('fc-seen-test')
-        row = conn.execute('SELECT message_count, seen_message_count FROM series'
-                           ' WHERE change_id = ?', ('fc-seen',)).fetchone()
+        row = conn.execute(
+            'SELECT message_count, seen_message_count FROM series WHERE change_id = ?',
+            ('fc-seen',),
+        ).fetchone()
         assert row['message_count'] == 10
         assert row['seen_message_count'] == 10
         conn.close()
@@ -1488,14 +1670,27 @@ class TestFollowupCounts:
         for status in ('archived', 'accepted', 'thanked'):
             cid = f'fc-{status}'
             review_tracking.add_series_to_db(
-                conn, cid, 1, 'Subject', 'Author', 'a@example.com',
-                '2024-01-15T10:00:00+00:00', f'{cid}@example.com', 3)
+                conn,
+                cid,
+                1,
+                'Subject',
+                'Author',
+                'a@example.com',
+                '2024-01-15T10:00:00+00:00',
+                f'{cid}@example.com',
+                3,
+            )
             review_tracking.update_series_status(conn, cid, status)
         conn.close()
 
         series_list = [
-            {'change_id': f'fc-{s}', 'revision': 1,
-             'message_id': f'fc-{s}@example.com', 'num_patches': 3, 'status': s}
+            {
+                'change_id': f'fc-{s}',
+                'revision': 1,
+                'message_id': f'fc-{s}@example.com',
+                'num_patches': 3,
+                'status': s,
+            }
             for s in ('archived', 'accepted', 'thanked')
         ]
         result = review_tracking.update_message_counts('fc-skip-test', series_list)
@@ -1532,7 +1727,9 @@ def _make_test_msg(msgid: str = 'test@example.com') -> EmailMessage:
     return msg
 
 
-def _make_blob_tracking_data(change_id: str, identifier: str = 'blob-proj') -> Dict[str, Any]:
+def _make_blob_tracking_data(
+    change_id: str, identifier: str = 'blob-proj'
+) -> Dict[str, Any]:
     """Return a minimal tracking dict for blob tests."""
     return {
         'series': {
@@ -1567,8 +1764,7 @@ class TestFollowupBlob:
     ) -> None:
         """_store_thread_blob serializes msgs via save_mboxrd_mbox and records SHA."""
         change_id = 'blob-write-test'
-        _create_review_branch(gitdir, change_id,
-                              _make_blob_tracking_data(change_id))
+        _create_review_branch(gitdir, change_id, _make_blob_tracking_data(change_id))
 
         msgs = [_make_test_msg('cover@example.com')]
         blob_sha = review_tracking._store_thread_blob(gitdir, change_id, msgs)
@@ -1578,13 +1774,13 @@ class TestFollowupBlob:
         expected_buf = io.BytesIO()
         b4.save_mboxrd_mbox(msgs, expected_buf)
         ecode, content = b4.git_run_command(
-            gitdir, ['cat-file', 'blob', blob_sha], decode=False)
+            gitdir, ['cat-file', 'blob', blob_sha], decode=False
+        )
         assert ecode == 0
         assert content == expected_buf.getvalue()
 
         # Tracking commit JSON must carry the blob SHA
-        _cover, loaded = b4.review.load_tracking(
-            gitdir, f'b4/review/{change_id}')
+        _cover, loaded = b4.review.load_tracking(gitdir, f'b4/review/{change_id}')
         assert loaded['series']['thread-blob'] == blob_sha
 
     def test_store_thread_blob_skips_update_when_sha_unchanged(
@@ -1592,8 +1788,7 @@ class TestFollowupBlob:
     ) -> None:
         """_store_thread_blob avoids a new tracking commit when SHA is unchanged."""
         change_id = 'blob-nochurn-test'
-        _create_review_branch(gitdir, change_id,
-                              _make_blob_tracking_data(change_id))
+        _create_review_branch(gitdir, change_id, _make_blob_tracking_data(change_id))
 
         msgs = [_make_test_msg('nochurn@example.com')]
 
@@ -1601,7 +1796,8 @@ class TestFollowupBlob:
         assert sha1 is not None
 
         ecode, tip1 = b4.git_run_command(
-            gitdir, ['rev-parse', f'b4/review/{change_id}'])
+            gitdir, ['rev-parse', f'b4/review/{change_id}']
+        )
         assert ecode == 0
 
         # Second call with identical messages — SHA and branch tip unchanged
@@ -1609,7 +1805,8 @@ class TestFollowupBlob:
         assert sha2 == sha1
 
         ecode, tip2 = b4.git_run_command(
-            gitdir, ['rev-parse', f'b4/review/{change_id}'])
+            gitdir, ['rev-parse', f'b4/review/{change_id}']
+        )
         assert ecode == 0
         assert tip2.strip() == tip1.strip()
 
@@ -1617,56 +1814,70 @@ class TestFollowupBlob:
         """get_thread_mbox returns the exact bytes written to the blob."""
         sample = b'From mboxrd@z Thu Jan  1 00:00:00 1970\nSubject: hi\n\nbody\n'
         ecode, blob_sha = b4.git_run_command(
-            gitdir, ['hash-object', '-w', '--stdin'], stdin=sample)
+            gitdir, ['hash-object', '-w', '--stdin'], stdin=sample
+        )
         assert ecode == 0
 
         result = review_tracking.get_thread_mbox(gitdir, blob_sha.strip())
         assert result == sample
 
-    def test_get_thread_mbox_returns_none_for_missing_sha(
-        self, gitdir: str
-    ) -> None:
+    def test_get_thread_mbox_returns_none_for_missing_sha(self, gitdir: str) -> None:
         """get_thread_mbox returns None (not an exception) for a bogus SHA."""
         result = review_tracking.get_thread_mbox(gitdir, 'deadbeef' * 5)
         assert result is None
 
     @mock.patch('b4.review.tracking._fetch_thread_mbox_bytes')
     def test_update_message_counts_stores_blob_on_first_fetch(
-        self, mock_mbox: mock.Mock,
-        gitdir: str
+        self, mock_mbox: mock.Mock, gitdir: str
     ) -> None:
         """update_message_counts writes a thread blob on the first fetch."""
         mock_mbox.return_value = _make_mbox_bytes(9, prefix='ff')
 
         change_id = 'blob-first-fetch'
-        _create_review_branch(gitdir, change_id,
-                              _make_blob_tracking_data(change_id, 'blob-ff-proj'))
+        _create_review_branch(
+            gitdir, change_id, _make_blob_tracking_data(change_id, 'blob-ff-proj')
+        )
 
         conn = review_tracking.init_db('blob-ff-proj')
         review_tracking.add_series_to_db(
-            conn, change_id, 1, 'Subject', 'Author', 'a@example.com',
-            '2024-01-15T10:00:00+00:00', 'blob-first@example.com', 3)
+            conn,
+            change_id,
+            1,
+            'Subject',
+            'Author',
+            'a@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'blob-first@example.com',
+            3,
+        )
         conn.close()
 
-        series_list = [{'change_id': change_id, 'revision': 1,
-                        'message_id': 'blob-first@example.com',
-                        'num_patches': 3, 'status': 'reviewing'}]
+        series_list = [
+            {
+                'change_id': change_id,
+                'revision': 1,
+                'message_id': 'blob-first@example.com',
+                'num_patches': 3,
+                'status': 'reviewing',
+            }
+        ]
         review_tracking.update_message_counts(
-            'blob-ff-proj', series_list, topdir=gitdir)
+            'blob-ff-proj', series_list, topdir=gitdir
+        )
 
         _cover, loaded = b4.review.load_tracking(gitdir, f'b4/review/{change_id}')
         blob_sha = loaded['series'].get('thread-blob')
         assert blob_sha is not None
         # Blob must be readable
         ecode, _ = b4.git_run_command(
-            gitdir, ['cat-file', 'blob', blob_sha], decode=False)
+            gitdir, ['cat-file', 'blob', blob_sha], decode=False
+        )
         assert ecode == 0
 
     @mock.patch('b4.review.tracking._fetch_new_since')
     @mock.patch('b4.review.tracking._fetch_thread_mbox_bytes')
     def test_update_message_counts_updates_blob_on_incremental(
-        self, mock_fetch: mock.Mock,
-        mock_new_since: mock.Mock, gitdir: str
+        self, mock_fetch: mock.Mock, mock_new_since: mock.Mock, gitdir: str
     ) -> None:
         """update_message_counts replaces the blob when new replies arrive."""
         # Different prefixes → different Message-IDs → different blobs
@@ -1676,22 +1887,38 @@ class TestFollowupBlob:
         mock_new_since.return_value = (3, '2024-02-01T00:00:00+00:00')
 
         change_id = 'blob-incr-test'
-        _create_review_branch(gitdir, change_id,
-                              _make_blob_tracking_data(change_id, 'blob-incr-proj'))
+        _create_review_branch(
+            gitdir, change_id, _make_blob_tracking_data(change_id, 'blob-incr-proj')
+        )
 
         conn = review_tracking.init_db('blob-incr-proj')
         review_tracking.add_series_to_db(
-            conn, change_id, 1, 'Subject', 'Author', 'a@example.com',
-            '2024-01-15T10:00:00+00:00', 'blob-incr@example.com', 3)
+            conn,
+            change_id,
+            1,
+            'Subject',
+            'Author',
+            'a@example.com',
+            '2024-01-15T10:00:00+00:00',
+            'blob-incr@example.com',
+            3,
+        )
         conn.close()
 
-        series_list = [{'change_id': change_id, 'revision': 1,
-                        'message_id': 'blob-incr@example.com',
-                        'num_patches': 3, 'status': 'reviewing'}]
+        series_list = [
+            {
+                'change_id': change_id,
+                'revision': 1,
+                'message_id': 'blob-incr@example.com',
+                'num_patches': 3,
+                'status': 'reviewing',
+            }
+        ]
 
         # First fetch — stores initial blob
         review_tracking.update_message_counts(
-            'blob-incr-proj', series_list, topdir=gitdir)
+            'blob-incr-proj', series_list, topdir=gitdir
+        )
         _cover, loaded = b4.review.load_tracking(gitdir, f'b4/review/{change_id}')
         sha_initial = loaded['series'].get('thread-blob')
         assert sha_initial is not None
@@ -1699,7 +1926,8 @@ class TestFollowupBlob:
         # Incremental — _fetch_thread_mbox_bytes now returns the larger mbox
         mock_fetch.return_value = larger_mbox
         review_tracking.update_message_counts(
-            'blob-incr-proj', series_list, topdir=gitdir)
+            'blob-incr-proj', series_list, topdir=gitdir
+        )
         _cover, loaded = b4.review.load_tracking(gitdir, f'b4/review/{change_id}')
         sha_updated = loaded['series'].get('thread-blob')
 
@@ -1710,13 +1938,20 @@ class TestFollowupBlob:
 class TestPatchState:
     """Tests for _get_patch_state() and _set_patch_state()."""
 
-    _USERCFG: Dict[str, Union[str, List[str], None]] = {'email': 'reviewer@example.com', 'name': 'Test Reviewer'}
+    _USERCFG: Dict[str, Union[str, List[str], None]] = {
+        'email': 'reviewer@example.com',
+        'name': 'Test Reviewer',
+    }
 
     def _make_target(self, review_data: Dict[str, Any] | None = None) -> Dict[str, Any]:
         """Return a minimal target dict, optionally with review data."""
         if review_data is None:
             return {}
-        return {'reviews': {self._USERCFG['email']: {'name': 'Test Reviewer', **review_data}}}
+        return {
+            'reviews': {
+                self._USERCFG['email']: {'name': 'Test Reviewer', **review_data}
+            }
+        }
 
     def test_no_data(self) -> None:
         """Empty reviews dict → no state."""
@@ -1730,7 +1965,9 @@ class TestPatchState:
 
     def test_comments(self) -> None:
         """Inline comments list → 'draft'."""
-        target = self._make_target({'comments': [{'path': 'a.c', 'line': 1, 'text': 'hi'}]})
+        target = self._make_target(
+            {'comments': [{'path': 'a.c', 'line': 1, 'text': 'hi'}]}
+        )
         assert b4.review._get_patch_state(target, self._USERCFG) == 'draft'
 
     def test_reply(self) -> None:
@@ -1740,25 +1977,35 @@ class TestPatchState:
 
     def test_reviewed_by(self) -> None:
         """Reviewed-by trailer → 'done'."""
-        target = self._make_target({'trailers': ['Reviewed-by: Test Reviewer <reviewer@example.com>']})
+        target = self._make_target(
+            {'trailers': ['Reviewed-by: Test Reviewer <reviewer@example.com>']}
+        )
         assert b4.review._get_patch_state(target, self._USERCFG) == 'done'
 
     def test_acked_by(self) -> None:
         """Acked-by trailer → 'done'."""
-        target = self._make_target({'trailers': ['Acked-by: Test Reviewer <reviewer@example.com>']})
+        target = self._make_target(
+            {'trailers': ['Acked-by: Test Reviewer <reviewer@example.com>']}
+        )
         assert b4.review._get_patch_state(target, self._USERCFG) == 'done'
 
     def test_nacked_by_alone(self) -> None:
         """NACKed-by trailer alone → 'draft' (explanation required)."""
-        target = self._make_target({'trailers': ['NACKed-by: Test Reviewer <reviewer@example.com>']})
+        target = self._make_target(
+            {'trailers': ['NACKed-by: Test Reviewer <reviewer@example.com>']}
+        )
         assert b4.review._get_patch_state(target, self._USERCFG) == 'draft'
 
     def test_nacked_by_with_acked(self) -> None:
         """NACK wins over Acked-by — result is still 'draft'."""
-        target = self._make_target({'trailers': [
-            'NACKed-by: Test Reviewer <reviewer@example.com>',
-            'Acked-by: Test Reviewer <reviewer@example.com>',
-        ]})
+        target = self._make_target(
+            {
+                'trailers': [
+                    'NACKed-by: Test Reviewer <reviewer@example.com>',
+                    'Acked-by: Test Reviewer <reviewer@example.com>',
+                ]
+            }
+        )
         assert b4.review._get_patch_state(target, self._USERCFG) == 'draft'
 
     def test_explicit_done(self) -> None:
@@ -1773,10 +2020,12 @@ class TestPatchState:
 
     def test_explicit_done_beats_nack(self) -> None:
         """Explicit done overrides a NACKed-by trailer (human override wins)."""
-        target = self._make_target({
-            'patch-state': 'done',
-            'trailers': ['NACKed-by: Test Reviewer <reviewer@example.com>'],
-        })
+        target = self._make_target(
+            {
+                'patch-state': 'done',
+                'trailers': ['NACKed-by: Test Reviewer <reviewer@example.com>'],
+            }
+        )
         assert b4.review._get_patch_state(target, self._USERCFG) == 'done'
 
     def test_set_and_clear(self) -> None:
@@ -1807,8 +2056,7 @@ class TestBuildReplyFromComments:
         'diff --git a/foo.py b/foo.py\n'
         '--- a/foo.py\n'
         '+++ b/foo.py\n'
-        '@@ -0,0 +1,40 @@\n'
-        + ''.join(f'+line{i}\n' for i in range(1, 41))
+        '@@ -0,0 +1,40 @@\n' + ''.join(f'+line{i}\n' for i in range(1, 41))
     )
 
     def _make_comment(self, line: int, text: str) -> dict[str, Any]:
@@ -1929,22 +2177,25 @@ class TestFormatSnoozeUntil:
 
     def test_expired_datetime(self) -> None:
         """A datetime in the past returns 'expired'."""
-        past = (datetime.datetime.now(datetime.timezone.utc)
-                - datetime.timedelta(hours=1)).isoformat()
+        past = (
+            datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(hours=1)
+        ).isoformat()
         assert _format_snooze_until(past) == 'expired'
 
     def test_future_days_hours_minutes(self) -> None:
         """A datetime ~1d 2h 30m in the future shows all three components."""
-        target = (datetime.datetime.now(datetime.timezone.utc)
-                  + datetime.timedelta(days=1, hours=2, minutes=30, seconds=30))
+        target = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(
+            days=1, hours=2, minutes=30, seconds=30
+        )
         result = _format_snooze_until(target.isoformat())
         assert result.startswith('wakes in 1d 2h 30m')
         assert '(' in result  # contains the local date/time
 
     def test_future_hours_only(self) -> None:
         """A datetime exactly 3h in the future shows hours (and maybe minutes)."""
-        target = (datetime.datetime.now(datetime.timezone.utc)
-                  + datetime.timedelta(hours=3, seconds=30))
+        target = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(
+            hours=3, seconds=30
+        )
         result = _format_snooze_until(target.isoformat())
         assert 'wakes in' in result
         assert '3h' in result
@@ -1952,8 +2203,9 @@ class TestFormatSnoozeUntil:
 
     def test_future_minutes_only(self) -> None:
         """A datetime 45m in the future shows only minutes."""
-        target = (datetime.datetime.now(datetime.timezone.utc)
-                  + datetime.timedelta(minutes=45, seconds=30))
+        target = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(
+            minutes=45, seconds=30
+        )
         result = _format_snooze_until(target.isoformat())
         assert 'wakes in 45m' in result
         assert 'd' not in result.split('(')[0]
@@ -1961,8 +2213,9 @@ class TestFormatSnoozeUntil:
 
     def test_future_less_than_one_minute(self) -> None:
         """A datetime <1m away shows '<1m'."""
-        target = (datetime.datetime.now(datetime.timezone.utc)
-                  + datetime.timedelta(seconds=20))
+        target = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(
+            seconds=20
+        )
         result = _format_snooze_until(target.isoformat())
         assert 'wakes in <1m' in result
 
@@ -1976,8 +2229,9 @@ class TestFormatSnoozeUntil:
 
     def test_local_time_shown(self) -> None:
         """The parenthesised local time uses YYYY-MM-DD HH:MM format."""
-        target = (datetime.datetime.now(datetime.timezone.utc)
-                  + datetime.timedelta(hours=6))
+        target = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(
+            hours=6
+        )
         result = _format_snooze_until(target.isoformat())
         local_dt = target.astimezone()
         expected_str = local_dt.strftime('%Y-%m-%d %H:%M')
@@ -1987,29 +2241,43 @@ class TestFormatSnoozeUntil:
 class TestSnoozeDurationRegex:
     """Tests for SnoozeScreen._DURATION_RE pattern matching."""
 
-    @pytest.mark.parametrize('input_str,expected_value,expected_unit', [
-        ('30m', 30, 'm'),
-        ('3h', 3, 'h'),
-        ('1d', 1, 'd'),
-        ('2w', 2, 'w'),
-        ('7', 7, ''),
-        ('30 m', 30, 'm'),
-        ('3H', 3, 'H'),
-        ('1D', 1, 'D'),
-        ('2W', 2, 'W'),
-        ('45M', 45, 'M'),
-    ])
-    def test_valid_durations(self, input_str: str,
-                             expected_value: int, expected_unit: str) -> None:
+    @pytest.mark.parametrize(
+        'input_str,expected_value,expected_unit',
+        [
+            ('30m', 30, 'm'),
+            ('3h', 3, 'h'),
+            ('1d', 1, 'd'),
+            ('2w', 2, 'w'),
+            ('7', 7, ''),
+            ('30 m', 30, 'm'),
+            ('3H', 3, 'H'),
+            ('1D', 1, 'D'),
+            ('2W', 2, 'W'),
+            ('45M', 45, 'M'),
+        ],
+    )
+    def test_valid_durations(
+        self, input_str: str, expected_value: int, expected_unit: str
+    ) -> None:
         """Valid duration strings are parsed correctly."""
         m = SnoozeScreen._DURATION_RE.match(input_str)
         assert m is not None
         assert int(m.group(1)) == expected_value
         assert m.group(2) == expected_unit
 
-    @pytest.mark.parametrize('input_str', [
-        'abc', '3x', 'h3', 'm', '', '3.5h', '-1d', '3hh',
-    ])
+    @pytest.mark.parametrize(
+        'input_str',
+        [
+            'abc',
+            '3x',
+            'h3',
+            'm',
+            '',
+            '3.5h',
+            '-1d',
+            '3hh',
+        ],
+    )
     def test_invalid_durations(self, input_str: str) -> None:
         """Invalid duration strings are rejected."""
         assert SnoozeScreen._DURATION_RE.match(input_str) is None
@@ -2018,8 +2286,9 @@ class TestSnoozeDurationRegex:
 class TestGetExpiredSnoozedDatetime:
     """Verify get_expired_snoozed() works with full ISO datetimes."""
 
-    def _make_snoozed_series(self, conn: Any, change_id: str,
-                             snoozed_until: str) -> None:
+    def _make_snoozed_series(
+        self, conn: Any, change_id: str, snoozed_until: str
+    ) -> None:
         """Insert a snoozed series with a given wake-up time."""
         review_tracking.add_series_to_db(
             conn,
@@ -2037,8 +2306,9 @@ class TestGetExpiredSnoozedDatetime:
     def test_past_datetime_is_expired(self) -> None:
         """A series snoozed until a past datetime shows up as expired."""
         conn = review_tracking.init_db('snooze-past-dt')
-        past = (datetime.datetime.now(datetime.timezone.utc)
-                - datetime.timedelta(minutes=5)).isoformat()
+        past = (
+            datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(minutes=5)
+        ).isoformat()
         self._make_snoozed_series(conn, 'past-dt-id', past)
         expired = review_tracking.get_expired_snoozed(conn)
         assert len(expired) == 1
@@ -2048,8 +2318,9 @@ class TestGetExpiredSnoozedDatetime:
     def test_future_datetime_not_expired(self) -> None:
         """A series snoozed until a future datetime does not show up."""
         conn = review_tracking.init_db('snooze-future-dt')
-        future = (datetime.datetime.now(datetime.timezone.utc)
-                  + datetime.timedelta(hours=2)).isoformat()
+        future = (
+            datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=2)
+        ).isoformat()
         self._make_snoozed_series(conn, 'future-dt-id', future)
         expired = review_tracking.get_expired_snoozed(conn)
         assert len(expired) == 0
@@ -2058,8 +2329,10 @@ class TestGetExpiredSnoozedDatetime:
     def test_past_date_only_is_expired(self) -> None:
         """A legacy date-only value in the past still works."""
         conn = review_tracking.init_db('snooze-past-date')
-        yesterday = (datetime.datetime.now(datetime.timezone.utc).date()
-                     - datetime.timedelta(days=1)).isoformat()
+        yesterday = (
+            datetime.datetime.now(datetime.timezone.utc).date()
+            - datetime.timedelta(days=1)
+        ).isoformat()
         self._make_snoozed_series(conn, 'past-date-id', yesterday)
         expired = review_tracking.get_expired_snoozed(conn)
         assert len(expired) == 1
@@ -2069,8 +2342,10 @@ class TestGetExpiredSnoozedDatetime:
     def test_future_date_only_not_expired(self) -> None:
         """A legacy date-only value in the future still works."""
         conn = review_tracking.init_db('snooze-future-date')
-        tomorrow = (datetime.datetime.now(datetime.timezone.utc).date()
-                    + datetime.timedelta(days=2)).isoformat()
+        tomorrow = (
+            datetime.datetime.now(datetime.timezone.utc).date()
+            + datetime.timedelta(days=2)
+        ).isoformat()
         self._make_snoozed_series(conn, 'future-date-id', tomorrow)
         expired = review_tracking.get_expired_snoozed(conn)
         assert len(expired) == 0
@@ -2079,12 +2354,17 @@ class TestGetExpiredSnoozedDatetime:
     def test_mixed_date_and_datetime(self) -> None:
         """Both legacy date-only and new datetime values handled together."""
         conn = review_tracking.init_db('snooze-mixed')
-        past_dt = (datetime.datetime.now(datetime.timezone.utc)
-                   - datetime.timedelta(minutes=30)).isoformat()
-        yesterday = (datetime.datetime.now(datetime.timezone.utc).date()
-                     - datetime.timedelta(days=1)).isoformat()
-        future_dt = (datetime.datetime.now(datetime.timezone.utc)
-                     + datetime.timedelta(hours=5)).isoformat()
+        past_dt = (
+            datetime.datetime.now(datetime.timezone.utc)
+            - datetime.timedelta(minutes=30)
+        ).isoformat()
+        yesterday = (
+            datetime.datetime.now(datetime.timezone.utc).date()
+            - datetime.timedelta(days=1)
+        ).isoformat()
+        future_dt = (
+            datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=5)
+        ).isoformat()
         self._make_snoozed_series(conn, 'expired-dt', past_dt)
         self._make_snoozed_series(conn, 'expired-date', yesterday)
         self._make_snoozed_series(conn, 'still-sleeping', future_dt)
@@ -2107,8 +2387,9 @@ class TestGetExpiredSnoozedDatetime:
         # Add a tag-based snooze
         self._make_snoozed_series(conn, 'tag-id', 'tag:v6.15-rc3')
         # Add a time-based snooze
-        future = (datetime.datetime.now(datetime.timezone.utc)
-                  + datetime.timedelta(hours=2)).isoformat()
+        future = (
+            datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=2)
+        ).isoformat()
         self._make_snoozed_series(conn, 'time-id', future)
         tag_results = review_tracking.get_tag_snoozed(conn)
         assert len(tag_results) == 1
@@ -2119,11 +2400,13 @@ class TestGetExpiredSnoozedDatetime:
 
 # -- Tests for attestation DB operations -------------------------------------
 
+
 class TestAttestationDb:
     """Tests for attestation storage and schema migration."""
 
-    def _add_test_series(self, conn: Any, change_id: str = 'att-test-id',
-                         revision: int = 1) -> int:
+    def _add_test_series(
+        self, conn: Any, change_id: str = 'att-test-id', revision: int = 1
+    ) -> int:
         """Insert a minimal series row and return its track_id."""
         return review_tracking.add_series_to_db(
             conn,
@@ -2153,8 +2436,9 @@ class TestAttestationDb:
         conn = review_tracking.init_db(ident)
         self._add_test_series(conn)
         conn.close()
-        review_tracking.update_attestation(ident, 'att-test-id', 1,
-                                           'signed:DKIM/kernel.org')
+        review_tracking.update_attestation(
+            ident, 'att-test-id', 1, 'signed:DKIM/kernel.org'
+        )
         conn = review_tracking.get_db(ident)
         row = conn.execute(
             "SELECT attestation FROM series WHERE change_id = 'att-test-id'"
@@ -2183,8 +2467,9 @@ class TestAttestationDb:
         self._add_test_series(conn)
         conn.close()
         review_tracking.update_attestation(ident, 'att-test-id', 1, 'none')
-        review_tracking.update_attestation(ident, 'att-test-id', 1,
-                                           'signed:DKIM/kernel.org')
+        review_tracking.update_attestation(
+            ident, 'att-test-id', 1, 'signed:DKIM/kernel.org'
+        )
         conn = review_tracking.get_db(ident)
         row = conn.execute(
             "SELECT attestation FROM series WHERE change_id = 'att-test-id'"
@@ -2199,8 +2484,9 @@ class TestAttestationDb:
         self._add_test_series(conn)
         conn.close()
         # revision 99 doesn't exist — should not raise
-        review_tracking.update_attestation(ident, 'att-test-id', 99,
-                                           'signed:DKIM/kernel.org')
+        review_tracking.update_attestation(
+            ident, 'att-test-id', 99, 'signed:DKIM/kernel.org'
+        )
         conn = review_tracking.get_db(ident)
         row = conn.execute(
             "SELECT attestation FROM series WHERE change_id = 'att-test-id'"
@@ -2214,8 +2500,9 @@ class TestAttestationDb:
         conn = review_tracking.init_db(ident)
         self._add_test_series(conn)
         conn.close()
-        review_tracking.update_attestation(ident, 'att-test-id', 1,
-                                           'nokey:ed25519/dev@example.com')
+        review_tracking.update_attestation(
+            ident, 'att-test-id', 1, 'nokey:ed25519/dev@example.com'
+        )
         series_list = review_tracking.get_all_tracked_series(ident)
         assert len(series_list) == 1
         assert series_list[0]['attestation'] == 'nokey:ed25519/dev@example.com'
@@ -2225,8 +2512,9 @@ class TestAttestationDb:
         ident = 'snoozed-listing'
         conn = review_tracking.init_db(ident)
         self._add_test_series(conn)
-        review_tracking.snooze_series(conn, 'att-test-id', '2026-06-01T00:00:00',
-                                      revision=1)
+        review_tracking.snooze_series(
+            conn, 'att-test-id', '2026-06-01T00:00:00', revision=1
+        )
         conn.close()
         series_list = review_tracking.get_all_tracked_series(ident)
         assert len(series_list) == 1
@@ -2235,13 +2523,14 @@ class TestAttestationDb:
     def test_schema_v4_migration_adds_attestation(self) -> None:
         """Migrating from schema v4 adds the attestation column."""
         import sqlite3
+
         ident = 'att-migrate-v4'
         # Create a v4-style database manually
         db_path = review_tracking.get_db_path(ident)
         os.makedirs(os.path.dirname(db_path), exist_ok=True)
         conn = sqlite3.connect(db_path)
         # Create the tables without the attestation column
-        conn.executescript('''
+        conn.executescript("""
             CREATE TABLE schema_version (version INTEGER PRIMARY KEY);
             INSERT INTO schema_version VALUES (4);
             CREATE TABLE series (
@@ -2267,7 +2556,7 @@ class TestAttestationDb:
                 UNIQUE (change_id, revision)
             );
             INSERT INTO series (change_id, revision, subject) VALUES ('migrate-id', 1, 'Test');
-        ''')
+        """)
         conn.close()
         # Opening via get_db triggers migration
         conn = review_tracking.get_db(ident)
@@ -2282,6 +2571,7 @@ class TestAttestationDb:
 
 # -- Tests for _format_attestation() display helper --------------------------
 
+
 class TestFormatAttestation:
     """Tests for the _format_attestation() display helper."""
 
@@ -2325,7 +2615,9 @@ class TestFormatAttestation:
 
     def test_multiple_attestors_comma_separated(self) -> None:
         """Multiple attestors are comma-separated in the output."""
-        text = _format_attestation('signed:DKIM/kernel.org;nokey:ed25519/dev@example.com')
+        text = _format_attestation(
+            'signed:DKIM/kernel.org;nokey:ed25519/dev@example.com'
+        )
         assert text is not None
         plain = text.plain
         assert ', ' in plain
diff --git a/src/tests/test_three_way_merge.py b/src/tests/test_three_way_merge.py
index c0127bf..2b4391e 100644
--- a/src/tests/test_three_way_merge.py
+++ b/src/tests/test_three_way_merge.py
@@ -44,8 +44,9 @@ class TestRewriteFetchHeadOrigin:
         with open(fh_path, 'w') as fh:
             fh.write("abc123\t\tnot-for-merge\tbranch 'master' of /tmp/b4-worktree\n")
 
-        b4._rewrite_fetch_head_origin(gitdir, '/tmp/b4-worktree',
-                                      'https://lore.kernel.org/r/test@msg')
+        b4._rewrite_fetch_head_origin(
+            gitdir, '/tmp/b4-worktree', 'https://lore.kernel.org/r/test@msg'
+        )
 
         with open(fh_path, 'r') as fh:
             contents = fh.read()
@@ -58,8 +59,7 @@ class TestRewriteFetchHeadOrigin:
         with open(fh_path, 'w') as fh:
             fh.write(original)
 
-        b4._rewrite_fetch_head_origin(gitdir, '/tmp/nonexistent',
-                                      'https://example.com')
+        b4._rewrite_fetch_head_origin(gitdir, '/tmp/nonexistent', 'https://example.com')
 
         with open(fh_path, 'r') as fh:
             contents = fh.read()
@@ -68,8 +68,10 @@ class TestRewriteFetchHeadOrigin:
     def test_rewrites_multiple_occurrences(self, gitdir: str) -> None:
         fh_path = os.path.join(gitdir, '.git', 'FETCH_HEAD')
         with open(fh_path, 'w') as fh:
-            fh.write("aaa\t\tnot-for-merge\tbranch 'master' of /tmp/wt\n"
-                     "bbb\t\tnot-for-merge\tbranch 'master' of /tmp/wt\n")
+            fh.write(
+                "aaa\t\tnot-for-merge\tbranch 'master' of /tmp/wt\n"
+                "bbb\t\tnot-for-merge\tbranch 'master' of /tmp/wt\n"
+            )
 
         b4._rewrite_fetch_head_origin(gitdir, '/tmp/wt', 'https://example.com')
 
@@ -124,8 +126,7 @@ def _build_conflicting_patches(gitdir: str) -> Tuple[bytes, str]:
     # Create patch on a temp branch (from original HEAD)
     b4.git_run_command(gitdir, ['checkout', '-b', 'conflict-patch'])
     with open(os.path.join(gitdir, 'file1.txt'), 'w') as fh:
-        fh.write('PATCH version of file 1.\n'
-                 'Rewritten entirely by the patch.\n')
+        fh.write('PATCH version of file 1.\nRewritten entirely by the patch.\n')
     b4.git_run_command(gitdir, ['add', 'file1.txt'])
     b4.git_run_command(gitdir, ['commit', '-m', 'Rewrite file1 (patch side)'])
 
@@ -136,8 +137,7 @@ def _build_conflicting_patches(gitdir: str) -> Tuple[bytes, str]:
     b4.git_run_command(gitdir, ['checkout', 'master'])
     b4.git_run_command(gitdir, ['branch', '-D', 'conflict-patch'])
     with open(os.path.join(gitdir, 'file1.txt'), 'w') as fh:
-        fh.write('MASTER version of file 1.\n'
-                 'Also rewritten, but differently.\n')
+        fh.write('MASTER version of file 1.\nAlso rewritten, but differently.\n')
     b4.git_run_command(gitdir, ['add', 'file1.txt'])
     b4.git_run_command(gitdir, ['commit', '-m', 'Rewrite file1 (master side)'])
 
@@ -154,8 +154,7 @@ class TestGitFetchAmIntoRepo:
         assert common_dir is not None
         gwt = os.path.join(common_dir, 'b4-shazam-worktree')
 
-        b4.git_fetch_am_into_repo(gitdir, ambytes, at_base=base,
-                                  am_flags=['-3'])
+        b4.git_fetch_am_into_repo(gitdir, ambytes, at_base=base, am_flags=['-3'])
 
         # Worktree should be cleaned up after success
         assert not os.path.exists(gwt)
@@ -169,8 +168,9 @@ class TestGitFetchAmIntoRepo:
         ambytes, base = _build_clean_patches(gitdir)
         origin = 'https://lore.kernel.org/r/test@example.com'
 
-        b4.git_fetch_am_into_repo(gitdir, ambytes, at_base=base,
-                                  origin=origin, am_flags=['-3'])
+        b4.git_fetch_am_into_repo(
+            gitdir, ambytes, at_base=base, origin=origin, am_flags=['-3']
+        )
 
         fh_path = os.path.join(gitdir, '.git', 'FETCH_HEAD')
         with open(fh_path, 'r') as fh:
@@ -183,8 +183,9 @@ class TestGitFetchAmIntoRepo:
 
         try:
             with pytest.raises(b4.AmConflictError) as exc_info:
-                b4.git_fetch_am_into_repo(gitdir, ambytes, at_base='HEAD',
-                                          am_flags=['-3'])
+                b4.git_fetch_am_into_repo(
+                    gitdir, ambytes, at_base='HEAD', am_flags=['-3']
+                )
 
             assert exc_info.value.worktree_path != ''
             assert exc_info.value.output != ''
@@ -202,16 +203,17 @@ class TestGitFetchAmIntoRepo:
 
         try:
             with pytest.raises(b4.AmConflictError) as exc_info:
-                b4.git_fetch_am_into_repo(gitdir, ambytes, at_base='HEAD',
-                                          am_flags=['-3'])
+                b4.git_fetch_am_into_repo(
+                    gitdir, ambytes, at_base='HEAD', am_flags=['-3']
+                )
 
             wt_path = exc_info.value.worktree_path
             # Worktree must still exist for user to resolve
             assert os.path.isdir(wt_path)
             # rebase-apply should be present (am still in progress)
             ecode, wt_gitdir = b4.git_run_command(
-                wt_path, ['rev-parse', '--git-dir'],
-                logstderr=True, rundir=wt_path)
+                wt_path, ['rev-parse', '--git-dir'], logstderr=True, rundir=wt_path
+            )
             assert ecode == 0
             rebase_apply = os.path.join(wt_gitdir.strip(), 'rebase-apply')
             assert os.path.isdir(rebase_apply)
@@ -227,8 +229,7 @@ class TestGitFetchAmIntoRepo:
         """Patches also apply cleanly without -3 (baseline)."""
         ambytes, base = _build_clean_patches(gitdir)
 
-        b4.git_fetch_am_into_repo(gitdir, ambytes, at_base=base,
-                                  am_flags=[])
+        b4.git_fetch_am_into_repo(gitdir, ambytes, at_base=base, am_flags=[])
 
         fh_path = os.path.join(gitdir, '.git', 'FETCH_HEAD')
         assert os.path.exists(fh_path)
@@ -242,8 +243,9 @@ class TestGitFetchAmIntoRepo:
         if os.path.exists(fh_path):
             os.unlink(fh_path)
 
-        b4.git_fetch_am_into_repo(gitdir, ambytes, at_base=base,
-                                  check_only=True, am_flags=['-3'])
+        b4.git_fetch_am_into_repo(
+            gitdir, ambytes, at_base=base, check_only=True, am_flags=['-3']
+        )
 
         # check_only should not fetch (no FETCH_HEAD created)
         assert not os.path.exists(fh_path)
@@ -259,9 +261,11 @@ class TestSuspendToShellCwd:
     """Test that _suspend_to_shell passes cwd to subprocess.run."""
 
     @patch('b4.tui._common.subprocess.run')
-    def test_cwd_passed_through(self, mock_run: Any,
-                                monkeypatch: pytest.MonkeyPatch) -> None:
+    def test_cwd_passed_through(
+        self, mock_run: Any, monkeypatch: pytest.MonkeyPatch
+    ) -> None:
         from b4.review_tui._common import _suspend_to_shell
+
         # Use a shell name that is neither bash nor zsh so we hit
         # the simple else branch (no tempfile/rcfile logic).
         monkeypatch.setenv('SHELL', '/tmp/fakeshell')
@@ -273,9 +277,11 @@ class TestSuspendToShellCwd:
         assert kwargs.get('cwd') == '/tmp/test-worktree'
 
     @patch('b4.tui._common.subprocess.run')
-    def test_cwd_none_by_default(self, mock_run: Any,
-                                 monkeypatch: pytest.MonkeyPatch) -> None:
+    def test_cwd_none_by_default(
+        self, mock_run: Any, monkeypatch: pytest.MonkeyPatch
+    ) -> None:
         from b4.review_tui._common import _suspend_to_shell
+
         monkeypatch.setenv('SHELL', '/tmp/fakeshell')
 
         _suspend_to_shell()
@@ -285,9 +291,11 @@ class TestSuspendToShellCwd:
         assert kwargs.get('cwd') is None
 
     @patch('b4.tui._common.subprocess.run')
-    def test_hint_appears_in_env(self, mock_run: Any,
-                                 monkeypatch: pytest.MonkeyPatch) -> None:
+    def test_hint_appears_in_env(
+        self, mock_run: Any, monkeypatch: pytest.MonkeyPatch
+    ) -> None:
         from b4.review_tui._common import _suspend_to_shell
+
         monkeypatch.setenv('SHELL', '/tmp/fakeshell')
 
         _suspend_to_shell(hint='b4 conflict', cwd='/tmp/wt')
@@ -310,30 +318,29 @@ class TestConflictResolutionFlow:
         ambytes, _base = _build_conflicting_patches(gitdir)
 
         with pytest.raises(b4.AmConflictError) as exc_info:
-            b4.git_fetch_am_into_repo(gitdir, ambytes, at_base='HEAD',
-                                      am_flags=['-3'])
+            b4.git_fetch_am_into_repo(gitdir, ambytes, at_base='HEAD', am_flags=['-3'])
 
         wt = exc_info.value.worktree_path
 
         # --- same steps the TUI handler takes ---
         # 1. Disable sparse checkout so files are visible
-        b4.git_run_command(wt, ['sparse-checkout', 'disable'],
-                           logstderr=True, rundir=wt)
+        b4.git_run_command(
+            wt, ['sparse-checkout', 'disable'], logstderr=True, rundir=wt
+        )
         assert os.path.exists(os.path.join(wt, 'file1.txt'))
 
         # 2. Simulate user resolving: accept theirs and continue
-        b4.git_run_command(wt, ['checkout', '--theirs', '.'],
-                           logstderr=True, rundir=wt)
-        b4.git_run_command(wt, ['add', '-A'],
-                           logstderr=True, rundir=wt)
-        ecode, _out = b4.git_run_command(wt, ['am', '--continue'],
-                                         logstderr=True, rundir=wt)
+        b4.git_run_command(wt, ['checkout', '--theirs', '.'], logstderr=True, rundir=wt)
+        b4.git_run_command(wt, ['add', '-A'], logstderr=True, rundir=wt)
+        ecode, _out = b4.git_run_command(
+            wt, ['am', '--continue'], logstderr=True, rundir=wt
+        )
         assert ecode == 0
 
         # 3. Verify rebase-apply is gone (am completed)
         ecode, wt_gitdir = b4.git_run_command(
-            wt, ['rev-parse', '--git-dir'],
-            logstderr=True, rundir=wt)
+            wt, ['rev-parse', '--git-dir'], logstderr=True, rundir=wt
+        )
         assert ecode == 0
         rebase_apply = os.path.join(wt_gitdir.strip(), 'rebase-apply')
         assert not os.path.isdir(rebase_apply)
@@ -353,15 +360,14 @@ class TestConflictResolutionFlow:
         ambytes, _base = _build_conflicting_patches(gitdir)
 
         with pytest.raises(b4.AmConflictError) as exc_info:
-            b4.git_fetch_am_into_repo(gitdir, ambytes, at_base='HEAD',
-                                      am_flags=['-3'])
+            b4.git_fetch_am_into_repo(gitdir, ambytes, at_base='HEAD', am_flags=['-3'])
 
         wt = exc_info.value.worktree_path
 
         # User returns from shell without resolving
         ecode, wt_gitdir = b4.git_run_command(
-            wt, ['rev-parse', '--git-dir'],
-            logstderr=True, rundir=wt)
+            wt, ['rev-parse', '--git-dir'], logstderr=True, rundir=wt
+        )
         assert ecode == 0
         rebase_apply = os.path.join(wt_gitdir.strip(), 'rebase-apply')
         assert os.path.isdir(rebase_apply)
@@ -375,15 +381,15 @@ class TestConflictResolutionFlow:
         ambytes, _base = _build_conflicting_patches(gitdir)
 
         with pytest.raises(b4.AmConflictError) as exc_info:
-            b4.git_fetch_am_into_repo(gitdir, ambytes, at_base='HEAD',
-                                      am_flags=['-3'])
+            b4.git_fetch_am_into_repo(gitdir, ambytes, at_base='HEAD', am_flags=['-3'])
 
         wt = exc_info.value.worktree_path
 
         # Before: sparse checkout may hide files
         # (the worktree was created with sparse-checkout set to empty)
-        b4.git_run_command(wt, ['sparse-checkout', 'disable'],
-                           logstderr=True, rundir=wt)
+        b4.git_run_command(
+            wt, ['sparse-checkout', 'disable'], logstderr=True, rundir=wt
+        )
 
         # All repo files should now be visible
         assert os.path.exists(os.path.join(wt, 'file1.txt'))
@@ -397,20 +403,17 @@ class TestConflictResolutionFlow:
         ambytes, _base = _build_conflicting_patches(gitdir)
 
         with pytest.raises(b4.AmConflictError) as exc_info:
-            b4.git_fetch_am_into_repo(gitdir, ambytes, at_base='HEAD',
-                                      am_flags=['-3'])
+            b4.git_fetch_am_into_repo(gitdir, ambytes, at_base='HEAD', am_flags=['-3'])
 
         wt = exc_info.value.worktree_path
 
         # Resolve and fetch
-        b4.git_run_command(wt, ['sparse-checkout', 'disable'],
-                           logstderr=True, rundir=wt)
-        b4.git_run_command(wt, ['checkout', '--theirs', '.'],
-                           logstderr=True, rundir=wt)
-        b4.git_run_command(wt, ['add', '-A'],
-                           logstderr=True, rundir=wt)
-        b4.git_run_command(wt, ['am', '--continue'],
-                           logstderr=True, rundir=wt)
+        b4.git_run_command(
+            wt, ['sparse-checkout', 'disable'], logstderr=True, rundir=wt
+        )
+        b4.git_run_command(wt, ['checkout', '--theirs', '.'], logstderr=True, rundir=wt)
+        b4.git_run_command(wt, ['add', '-A'], logstderr=True, rundir=wt)
+        b4.git_run_command(wt, ['am', '--continue'], logstderr=True, rundir=wt)
         b4.git_run_command(gitdir, ['fetch', wt], logstderr=True)
 
         # Rewrite FETCH_HEAD (as the TUI handler does)
@@ -438,8 +441,9 @@ class TestDirectAmConflictFlow:
         ambytes, _base = _build_conflicting_patches(gitdir)
 
         # Run git-am directly (as _do_take_am does)
-        ecode, _out = b4.git_run_command(gitdir, ['am', '-3'],
-                                         stdin=ambytes, logstderr=True)
+        ecode, _out = b4.git_run_command(
+            gitdir, ['am', '-3'], stdin=ambytes, logstderr=True
+        )
         assert ecode != 0
 
         # rebase-apply should exist
@@ -447,11 +451,9 @@ class TestDirectAmConflictFlow:
         assert os.path.isdir(rebase_apply)
 
         # Resolve: accept theirs and continue
-        b4.git_run_command(gitdir, ['checkout', '--theirs', '.'],
-                           logstderr=True)
+        b4.git_run_command(gitdir, ['checkout', '--theirs', '.'], logstderr=True)
         b4.git_run_command(gitdir, ['add', '-A'], logstderr=True)
-        ecode, _out = b4.git_run_command(gitdir, ['am', '--continue'],
-                                         logstderr=True)
+        ecode, _out = b4.git_run_command(gitdir, ['am', '--continue'], logstderr=True)
         assert ecode == 0
         assert not os.path.isdir(rebase_apply)
 
@@ -459,8 +461,9 @@ class TestDirectAmConflictFlow:
         """Direct git-am -3 conflict, user aborts."""
         ambytes, _base = _build_conflicting_patches(gitdir)
 
-        ecode, _out = b4.git_run_command(gitdir, ['am', '-3'],
-                                         stdin=ambytes, logstderr=True)
+        ecode, _out = b4.git_run_command(
+            gitdir, ['am', '-3'], stdin=ambytes, logstderr=True
+        )
         assert ecode != 0
 
         # rebase-apply is present (handler detects this)
@@ -476,6 +479,7 @@ class TestDirectAmConflictFlow:
 # Tier 4 — Shazam state machine tests
 # ---------------------------------------------------------------------------
 
+
 def _build_multi_patch_conflict(gitdir: str) -> Tuple[bytes, str]:
     """Create a 3-patch mbox where patches 1-2 are clean but patch 3 conflicts.
 
@@ -522,8 +526,9 @@ def _build_multi_patch_conflict(gitdir: str) -> Tuple[bytes, str]:
     return mbox.encode(), base
 
 
-def _make_shazam_state(common_dir: str,
-                       state: Optional[Dict[str, Any]] = None) -> Tuple[str, str]:
+def _make_shazam_state(
+    common_dir: str, state: Optional[Dict[str, Any]] = None
+) -> Tuple[str, str]:
     """Create shazam state file and patches dir.
 
     Returns (state_file_path, patches_dir_path).
@@ -546,10 +551,11 @@ class TestLoadShazamState:
         assert common_dir is not None
         state_file, patches_dir = _make_shazam_state(common_dir)
         try:
-            _topdir, _cdir, sf, loaded = b4.mbox._load_shazam_state(
-                require_state=True)
-            assert loaded == {'origin': 'https://example.com',
-                              'merge_flags': '--signoff'}
+            _topdir, _cdir, sf, loaded = b4.mbox._load_shazam_state(require_state=True)
+            assert loaded == {
+                'origin': 'https://example.com',
+                'merge_flags': '--signoff',
+            }
             assert sf == state_file
         finally:
             os.unlink(state_file)
@@ -561,8 +567,7 @@ class TestLoadShazamState:
         assert exc_info.value.code == 1
 
     def test_optional_state_returns_none(self, gitdir: str) -> None:
-        _topdir, _cdir, _sf, loaded = b4.mbox._load_shazam_state(
-            require_state=False)
+        _topdir, _cdir, _sf, loaded = b4.mbox._load_shazam_state(require_state=False)
         assert loaded is None
 
     def test_missing_patches_dir_exits(self, gitdir: str) -> None:
@@ -638,8 +643,7 @@ class TestStartMergeResolve:
         assert common_dir is not None
 
         with pytest.raises(b4.AmConflictError) as exc_info:
-            b4.git_fetch_am_into_repo(gitdir, ambytes, at_base='HEAD',
-                                      am_flags=['-3'])
+            b4.git_fetch_am_into_repo(gitdir, ambytes, at_base='HEAD', am_flags=['-3'])
 
         state = {
             'origin': 'https://example.com',
@@ -651,8 +655,7 @@ class TestStartMergeResolve:
 
         # _start_merge_resolve exits(1) because remaining patch 3 conflicts
         with pytest.raises(SystemExit) as exit_info:
-            b4.mbox._start_merge_resolve(
-                gitdir, exc_info.value, common_dir, state)
+            b4.mbox._start_merge_resolve(gitdir, exc_info.value, common_dir, state)
         assert exit_info.value.code == 1
 
         # State file and patches dir should exist
@@ -680,8 +683,7 @@ class TestStartMergeResolve:
 
         # Step 1: trigger conflict
         with pytest.raises(b4.AmConflictError) as exc_info:
-            b4.git_fetch_am_into_repo(gitdir, ambytes, at_base='HEAD',
-                                      am_flags=['-3'])
+            b4.git_fetch_am_into_repo(gitdir, ambytes, at_base='HEAD', am_flags=['-3'])
 
         state = {
             'origin': 'https://example.com',
@@ -694,8 +696,7 @@ class TestStartMergeResolve:
         # Step 2: _start_merge_resolve extracts patches, starts merge,
         # applies remaining patch 3 which conflicts -> exit(1)
         with pytest.raises(SystemExit):
-            b4.mbox._start_merge_resolve(
-                gitdir, exc_info.value, common_dir, state)
+            b4.mbox._start_merge_resolve(gitdir, exc_info.value, common_dir, state)
 
         # Step 3: resolve the conflict (accept any content)
         with open(os.path.join(gitdir, 'file1.txt'), 'w') as fh:
@@ -709,12 +710,14 @@ class TestStartMergeResolve:
 
         # Step 5: verify merge commit was created
         ecode, _log_out = b4.git_run_command(
-            gitdir, ['log', '--oneline', '-1', '--format=%s'])
+            gitdir, ['log', '--oneline', '-1', '--format=%s']
+        )
         assert ecode == 0
         # The commit was made with -F (the merge template content)
         # Just verify a commit exists on top of our branch
         ecode, parents = b4.git_run_command(
-            gitdir, ['rev-list', '--parents', '-1', 'HEAD'])
+            gitdir, ['rev-list', '--parents', '-1', 'HEAD']
+        )
         assert ecode == 0
         # Merge commit has 2 parents
         parent_list = parents.strip().split()
@@ -733,8 +736,7 @@ class TestStartMergeResolve:
         assert common_dir is not None
 
         with pytest.raises(b4.AmConflictError) as exc_info:
-            b4.git_fetch_am_into_repo(gitdir, ambytes, at_base='HEAD',
-                                      am_flags=['-3'])
+            b4.git_fetch_am_into_repo(gitdir, ambytes, at_base='HEAD', am_flags=['-3'])
 
         state = {
             'origin': 'https://example.com',
@@ -745,8 +747,7 @@ class TestStartMergeResolve:
         }
 
         with pytest.raises(SystemExit):
-            b4.mbox._start_merge_resolve(
-                gitdir, exc_info.value, common_dir, state)
+            b4.mbox._start_merge_resolve(gitdir, exc_info.value, common_dir, state)
 
         # Abort instead of resolving
         cmdargs = argparse.Namespace()
diff --git a/src/tests/test_tui_bugs.py b/src/tests/test_tui_bugs.py
index 4cab381..a772be7 100644
--- a/src/tests/test_tui_bugs.py
+++ b/src/tests/test_tui_bugs.py
@@ -8,6 +8,7 @@
 Tests the pure-logic functions in _import.py and _tui.py that don't
 need Textual, git-bug, or network access.
 """
+
 from datetime import datetime, timezone
 from typing import Set
 from unittest import mock
@@ -94,6 +95,7 @@ def make_summary(
 # _import.py tests
 # ===========================================================================
 
+
 class TestParseCommentHeader:
     def test_extracts_from(self) -> None:
         text = 'From: Alice <alice@example.com>\nDate: Mon, 1 Jan 2026\n\nBody'
@@ -188,7 +190,9 @@ class TestFormatComment:
             'In-Reply-To': '<parent@test.com>',
         }.get(h)
         with mock.patch('b4.LoreMessage.clean_header', side_effect=lambda x: x):
-            with mock.patch('b4.LoreMessage.get_payload', return_value=('Body text', 'utf-8')):
+            with mock.patch(
+                'b4.LoreMessage.get_payload', return_value=('Body text', 'utf-8')
+            ):
                 result = format_comment(msg)
         assert 'From: Alice <alice@test.com>' in result
         assert 'Message-ID: <abc@test.com>' in result
@@ -201,7 +205,9 @@ class TestFormatComment:
             'Message-ID': '<abc@test.com>',
         }.get(h)
         with mock.patch('b4.LoreMessage.clean_header', side_effect=lambda x: x):
-            with mock.patch('b4.LoreMessage.get_payload', return_value=('Body', 'utf-8')):
+            with mock.patch(
+                'b4.LoreMessage.get_payload', return_value=('Body', 'utf-8')
+            ):
                 result = format_comment(msg, scope='no-parent')
         assert 'X-B4-Bug-Scope: no-parent' in result
 
@@ -210,6 +216,7 @@ class TestFormatComment:
 # _tui.py tests
 # ===========================================================================
 
+
 class TestLabelColor:
     def test_deterministic(self) -> None:
         c1 = label_color('review')
@@ -248,6 +255,7 @@ class TestBugTier:
 
     def test_closed_is_tier_2(self) -> None:
         from ezgb import Status
+
         bug = make_bug(status=Status.CLOSED)
         assert _bug_tier(bug) == 2
 
@@ -275,6 +283,7 @@ class TestBugLifecycle:
 
     def test_closed_no_lifecycle(self) -> None:
         from ezgb import Status
+
         bug = make_bug(status=Status.CLOSED)
         assert _bug_lifecycle(bug) == '\u00d7'  # ×
 
@@ -296,12 +305,16 @@ class TestBugLastActivity:
 
     def test_summary_uses_edited_at(self) -> None:
         from ezgb import BugSummary, Status
+
         edited_time = datetime(2026, 4, 1, tzinfo=timezone.utc)
         s = BugSummary(
-            id='a' * 64, title='Test', status=Status.OPEN,
+            id='a' * 64,
+            title='Test',
+            status=Status.OPEN,
             creator_id='b' * 64,
             created_at=datetime(2026, 1, 1, tzinfo=timezone.utc),
-            labels=frozenset(), comment_count=1,
+            labels=frozenset(),
+            comment_count=1,
             edited_at=edited_time,
         )
         assert _bug_last_activity(s) == edited_time
@@ -314,16 +327,19 @@ class TestRelativeTime:
 
     def test_minutes(self) -> None:
         from datetime import timedelta
+
         t = datetime.now(tz=timezone.utc) - timedelta(minutes=5)
         assert '5m ago' == _relative_time(t)
 
     def test_hours(self) -> None:
         from datetime import timedelta
+
         t = datetime.now(tz=timezone.utc) - timedelta(hours=3)
         assert '3h ago' == _relative_time(t)
 
     def test_days(self) -> None:
         from datetime import timedelta
+
         t = datetime.now(tz=timezone.utc) - timedelta(days=7)
         assert '7d ago' == _relative_time(t)
 
@@ -343,12 +359,14 @@ class TestMatchesLimit:
 
     def test_status_filter_open(self) -> None:
         from ezgb import Status
+
         bug = make_bug(status=Status.OPEN)
         assert BugListApp._matches_limit(bug, 's:open') is True
         assert BugListApp._matches_limit(bug, 's:closed') is False
 
     def test_status_filter_closed(self) -> None:
         from ezgb import Status
+
         bug = make_bug(status=Status.CLOSED)
         assert BugListApp._matches_limit(bug, 's:closed') is True
         assert BugListApp._matches_limit(bug, 's:open') is False
@@ -372,6 +390,7 @@ class TestMatchesLimit:
 
     def test_summary_works(self) -> None:
         from ezgb import Status
+
         s = make_summary(
             title='Network bug',
             status=Status.OPEN,
@@ -440,27 +459,30 @@ class TestParseMsgidForImport:
 
     def test_bare_msgid(self) -> None:
         import b4
+
         result = b4.parse_msgid('abc123@example.com')
         assert result == 'abc123@example.com'
 
     def test_angle_bracketed(self) -> None:
         import b4
+
         result = b4.parse_msgid('<abc123@example.com>')
         assert result == 'abc123@example.com'
 
     def test_lore_url(self) -> None:
         import b4
-        result = b4.parse_msgid(
-            'https://lore.kernel.org/all/abc123@example.com/')
+
+        result = b4.parse_msgid('https://lore.kernel.org/all/abc123@example.com/')
         assert result == 'abc123@example.com'
 
     def test_patch_msgid_link(self) -> None:
         import b4
-        result = b4.parse_msgid(
-            'https://patch.msgid.link/abc123@example.com')
+
+        result = b4.parse_msgid('https://patch.msgid.link/abc123@example.com')
         assert result == 'abc123@example.com'
 
     def test_garbage_has_no_at(self) -> None:
         import b4
+
         result = b4.parse_msgid('not-a-msgid')
         assert '@' not in result
diff --git a/src/tests/test_tui_modals.py b/src/tests/test_tui_modals.py
index e0e6f3f..8f16781 100644
--- a/src/tests/test_tui_modals.py
+++ b/src/tests/test_tui_modals.py
@@ -9,6 +9,7 @@ Uses Textual's built-in ``App.run_test()`` / ``Pilot`` harness so the
 tests run without a real terminal.  Only lightweight, self-contained
 modals are exercised here — no database, network, or git needed.
 """
+
 from typing import Any, Dict, List, Optional, Tuple
 
 import pytest
@@ -35,6 +36,7 @@ from b4.review_tui._modals import (
 # older builds (e.g. Fedora 43 package) still use Static.renderable.
 # ---------------------------------------------------------------------------
 
+
 def _static_text(widget: Any) -> str:
     """Return the text content of a Static widget across Textual versions."""
     if hasattr(widget, 'content'):
@@ -46,6 +48,7 @@ def _static_text(widget: Any) -> str:
 # Minimal host app — just enough to push modal screens onto
 # ---------------------------------------------------------------------------
 
+
 class ModalTestApp(App[None]):
     """Bare app that serves as a host for pushing modal screens."""
 
@@ -57,6 +60,7 @@ class ModalTestApp(App[None]):
 # HelpScreen
 # ---------------------------------------------------------------------------
 
+
 class TestHelpScreen:
     """Tests for the HelpScreen modal."""
 
@@ -124,12 +128,21 @@ class TestHelpScreen:
             await pilot.pause()
             assert isinstance(app.screen, HelpScreen)
 
-            for key in ('j', 'k', 'down', 'up', 'space', 'backspace',
-                        'pagedown', 'pageup'):
+            for key in (
+                'j',
+                'k',
+                'down',
+                'up',
+                'space',
+                'backspace',
+                'pagedown',
+                'pageup',
+            ):
                 await pilot.press(key)
                 await pilot.pause()
-                assert isinstance(app.screen, HelpScreen), \
+                assert isinstance(app.screen, HelpScreen), (
                     f'{key!r} unexpectedly closed the help screen'
+                )
 
     @pytest.mark.asyncio
     async def test_content_rendered(self) -> None:
@@ -149,6 +162,7 @@ class TestHelpScreen:
 # ConfirmScreen
 # ---------------------------------------------------------------------------
 
+
 class TestConfirmScreen:
     """Tests for the ConfirmScreen modal."""
 
@@ -201,8 +215,9 @@ class TestConfirmScreen:
             # body lines + hint line + possibly title
             rendered = [_static_text(s) for s in statics]
             for line in body:
-                assert any(line in r for r in rendered), \
+                assert any(line in r for r in rendered), (
                     f'{line!r} not found in rendered statics'
+                )
 
     @pytest.mark.asyncio
     async def test_subject_shown(self) -> None:
@@ -235,6 +250,7 @@ class TestConfirmScreen:
 # TrailerScreen
 # ---------------------------------------------------------------------------
 
+
 class TestTrailerScreen:
     """Tests for the TrailerScreen modal."""
 
@@ -374,6 +390,7 @@ class TestTrailerScreen:
 # NoteScreen
 # ---------------------------------------------------------------------------
 
+
 class TestNoteScreen:
     """Tests for the NoteScreen modal."""
 
@@ -440,6 +457,7 @@ class TestNoteScreen:
 # PriorReviewScreen
 # ---------------------------------------------------------------------------
 
+
 class TestPriorReviewScreen:
     """Tests for the PriorReviewScreen modal."""
 
@@ -475,6 +493,7 @@ class TestPriorReviewScreen:
 # RevisionChoiceScreen
 # ---------------------------------------------------------------------------
 
+
 class TestRevisionChoiceScreen:
     """Tests for the RevisionChoiceScreen modal."""
 
@@ -532,6 +551,7 @@ class TestRevisionChoiceScreen:
 # SnoozeScreen
 # ---------------------------------------------------------------------------
 
+
 class TestSnoozeScreen:
     """Tests for the SnoozeScreen modal."""
 
@@ -667,6 +687,7 @@ class TestSnoozeScreen:
 # SetStateScreen
 # ---------------------------------------------------------------------------
 
+
 class TestSetStateScreen:
     """Tests for the SetStateScreen modal."""
 
@@ -756,6 +777,7 @@ class TestSetStateScreen:
 # LimitScreen
 # ---------------------------------------------------------------------------
 
+
 class TestLimitScreen:
     """Tests for the LimitScreen modal."""
 
@@ -819,6 +841,7 @@ class TestLimitScreen:
 # ActionScreen
 # ---------------------------------------------------------------------------
 
+
 class TestActionScreen:
     """Tests for the ActionScreen modal."""
 
@@ -888,7 +911,9 @@ class TestActionScreen:
         results: List[Optional[str]] = []
 
         async with app.run_test() as pilot:
-            app.push_screen(ActionScreen(self._actions(), shortcuts=self._SHORTCUTS), results.append)
+            app.push_screen(
+                ActionScreen(self._actions(), shortcuts=self._SHORTCUTS), results.append
+            )
             await pilot.pause()
 
             # 'T' is the shortcut for 'take'
@@ -902,7 +927,9 @@ class TestActionScreen:
         results: List[Optional[str]] = []
 
         async with app.run_test() as pilot:
-            app.push_screen(ActionScreen(self._actions(), shortcuts=self._SHORTCUTS), results.append)
+            app.push_screen(
+                ActionScreen(self._actions(), shortcuts=self._SHORTCUTS), results.append
+            )
             await pilot.pause()
 
             await pilot.press('r')
@@ -915,7 +942,9 @@ class TestActionScreen:
         results: List[Optional[str]] = []
 
         async with app.run_test() as pilot:
-            app.push_screen(ActionScreen(self._actions(), shortcuts=self._SHORTCUTS), results.append)
+            app.push_screen(
+                ActionScreen(self._actions(), shortcuts=self._SHORTCUTS), results.append
+            )
             await pilot.pause()
 
             await pilot.press('x')
@@ -927,6 +956,7 @@ class TestActionScreen:
 # UpdateRevisionScreen
 # ---------------------------------------------------------------------------
 
+
 class TestUpdateRevisionScreen:
     """Tests for the UpdateRevisionScreen modal."""
 
diff --git a/src/tests/test_tui_review.py b/src/tests/test_tui_review.py
index 3222989..fb6fae0 100644
--- a/src/tests/test_tui_review.py
+++ b/src/tests/test_tui_review.py
@@ -8,6 +8,7 @@
 Tests the shell-return reconciliation logic that detects and handles
 cosmetic commit edits (e.g. reworded subjects via git rebase -i).
 """
+
 from typing import Any, Dict, List, Tuple
 
 import pytest
@@ -20,6 +21,7 @@ from b4.review_tui._review_app import ReviewApp
 # Helpers
 # ---------------------------------------------------------------------------
 
+
 def _create_review_branch_with_patches(
     gitdir: str,
     change_id: str,
@@ -53,8 +55,7 @@ def _create_review_branch_with_patches(
     # Create patch commits
     patch_shas: List[str] = []
     for msg in patch_messages:
-        ecode, _ = b4.git_run_command(
-            gitdir, ['commit', '--allow-empty', '-m', msg])
+        ecode, _ = b4.git_run_command(gitdir, ['commit', '--allow-empty', '-m', msg])
         assert ecode == 0
         ecode, sha = b4.git_run_command(gitdir, ['rev-parse', 'HEAD'])
         assert ecode == 0
@@ -63,10 +64,12 @@ def _create_review_branch_with_patches(
     # Build tracking metadata
     patches_meta: List[Dict[str, Any]] = []
     for i, _sha in enumerate(patch_shas):
-        patches_meta.append({
-            'header-info': {'msgid': f'{change_id}-patch{i + 1}@example.com'},
-            'followups': [],
-        })
+        patches_meta.append(
+            {
+                'header-info': {'msgid': f'{change_id}-patch{i + 1}@example.com'},
+                'followups': [],
+            }
+        )
 
     trk: Dict[str, Any] = {
         'series': {
@@ -90,8 +93,7 @@ def _create_review_branch_with_patches(
     commit_msg = f'{subject}\n\n{b4.review.make_review_magic_json(trk)}'
 
     # Create tracking commit (empty)
-    ecode, _ = b4.git_run_command(
-        gitdir, ['commit', '--allow-empty', '-m', commit_msg])
+    ecode, _ = b4.git_run_command(gitdir, ['commit', '--allow-empty', '-m', commit_msg])
     assert ecode == 0
 
     return branch_name, patch_shas
@@ -110,18 +112,17 @@ def _build_session(gitdir: str, branch_name: str) -> Dict[str, Any]:
     else:
         range_spec = f'{base_commit}..{branch_name}~1'
 
-    ecode, out = b4.git_run_command(
-        gitdir, ['rev-list', '--reverse', range_spec])
+    ecode, out = b4.git_run_command(gitdir, ['rev-list', '--reverse', range_spec])
     assert ecode == 0
     commit_shas = out.strip().splitlines()
 
     ecode, out = b4.git_run_command(
-        gitdir, ['log', '--reverse', '--format=%s', range_spec])
+        gitdir, ['log', '--reverse', '--format=%s', range_spec]
+    )
     assert ecode == 0
     commit_subjects = out.strip().splitlines()
 
-    ecode, out = b4.git_run_command(
-        gitdir, ['rev-parse', '--short', 'HEAD'])
+    ecode, out = b4.git_run_command(gitdir, ['rev-parse', '--short', 'HEAD'])
     abbrev_len = len(out.strip()) if ecode == 0 else 7
 
     sha_map: Dict[str, Tuple[str, int]] = {}
@@ -142,7 +143,7 @@ def _build_session(gitdir: str, branch_name: str) -> Dict[str, Any]:
         'commit_subjects': commit_subjects,
         'sha_map': sha_map,
         'abbrev_len': abbrev_len,
-        'default_identity': f"{usercfg.get('name', 'Test')} <{usercfg.get('email', 'test@example.com')}>",
+        'default_identity': f'{usercfg.get("name", "Test")} <{usercfg.get("email", "test@example.com")}>',
         'usercfg': usercfg,
         'cover_subject_clean': series.get('subject', ''),
     }
@@ -150,29 +151,26 @@ def _build_session(gitdir: str, branch_name: str) -> Dict[str, Any]:
 
 def _save_tracking_msg(gitdir: str) -> str:
     """Save the tracking commit message from HEAD."""
-    ecode, msg = b4.git_run_command(
-        gitdir, ['log', '-1', '--format=%B', 'HEAD'])
+    ecode, msg = b4.git_run_command(gitdir, ['log', '-1', '--format=%B', 'HEAD'])
     assert ecode == 0
     return msg.strip()
 
 
-def _rewrite_patches(gitdir: str, base_sha: str,
-                     new_subjects: List[str], trk_msg: str) -> None:
+def _rewrite_patches(
+    gitdir: str, base_sha: str, new_subjects: List[str], trk_msg: str
+) -> None:
     """Reset to base and recreate patches + tracking commit.
 
     Hard-resets to *base_sha*, creates one --allow-empty commit per
     subject in *new_subjects*, then recreates the tracking commit
     from *trk_msg*.
     """
-    ecode, _ = b4.git_run_command(
-        gitdir, ['reset', '--hard', base_sha])
+    ecode, _ = b4.git_run_command(gitdir, ['reset', '--hard', base_sha])
     assert ecode == 0
     for subj in new_subjects:
-        ecode, _ = b4.git_run_command(
-            gitdir, ['commit', '--allow-empty', '-m', subj])
+        ecode, _ = b4.git_run_command(gitdir, ['commit', '--allow-empty', '-m', subj])
         assert ecode == 0
-    ecode, _ = b4.git_run_command(
-        gitdir, ['commit', '--allow-empty', '-m', trk_msg])
+    ecode, _ = b4.git_run_command(gitdir, ['commit', '--allow-empty', '-m', trk_msg])
     assert ecode == 0
 
 
@@ -180,6 +178,7 @@ def _rewrite_patches(gitdir: str, base_sha: str,
 # Tests
 # ---------------------------------------------------------------------------
 
+
 class TestReconcileAfterShell:
     """Tests for _reconcile_after_shell tracking fixup."""
 
@@ -187,7 +186,8 @@ class TestReconcileAfterShell:
     async def test_no_changes(self, gitdir: str) -> None:
         """No-op when commits are unchanged after shell return."""
         branch, patch_shas = _create_review_branch_with_patches(
-            gitdir, 'reconcile-noop', ['patch 1', 'patch 2'])
+            gitdir, 'reconcile-noop', ['patch 1', 'patch 2']
+        )
         session = _build_session(gitdir, branch)
 
         app = ReviewApp(session)
@@ -203,8 +203,8 @@ class TestReconcileAfterShell:
     async def test_reworded_commits(self, gitdir: str) -> None:
         """Tracking is updated after commit messages are reworded."""
         branch, patch_shas = _create_review_branch_with_patches(
-            gitdir, 'reconcile-reword',
-            ['original subject 1', 'original subject 2'])
+            gitdir, 'reconcile-reword', ['original subject 1', 'original subject 2']
+        )
         session = _build_session(gitdir, branch)
         base_sha = session['base_commit']
 
@@ -216,9 +216,9 @@ class TestReconcileAfterShell:
 
             # Simulate rewording both commits (as git rebase -i would)
             trk_msg = _save_tracking_msg(gitdir)
-            _rewrite_patches(gitdir, base_sha,
-                             ['reworded subject 1', 'reworded subject 2'],
-                             trk_msg)
+            _rewrite_patches(
+                gitdir, base_sha, ['reworded subject 1', 'reworded subject 2'], trk_msg
+            )
 
             app._reconcile_after_shell(old_shas)
 
@@ -229,8 +229,7 @@ class TestReconcileAfterShell:
             assert app._series['first-patch-commit'] == app._commit_shas[0]
             assert app._series['first-patch-commit'] != patch_shas[0]
             # Subjects should reflect the reword
-            assert app._commit_subjects == ['reworded subject 1',
-                                            'reworded subject 2']
+            assert app._commit_subjects == ['reworded subject 1', 'reworded subject 2']
             # sha_map should be updated
             assert len(app._sha_map) == 2
 
@@ -238,8 +237,8 @@ class TestReconcileAfterShell:
     async def test_single_reword_preserves_unchanged(self, gitdir: str) -> None:
         """Only the reworded commit gets a new SHA; unchanged ones keep theirs."""
         branch, _patch_shas = _create_review_branch_with_patches(
-            gitdir, 'reconcile-partial',
-            ['keep this one', 'change this one'])
+            gitdir, 'reconcile-partial', ['keep this one', 'change this one']
+        )
         session = _build_session(gitdir, branch)
 
         app = ReviewApp(session)
@@ -250,14 +249,15 @@ class TestReconcileAfterShell:
             # Reword only the second commit: reset to after first patch,
             # then recreate second + tracking
             trk_msg = _save_tracking_msg(gitdir)
-            ecode, _ = b4.git_run_command(
-                gitdir, ['reset', '--hard', old_shas[0]])
+            ecode, _ = b4.git_run_command(gitdir, ['reset', '--hard', old_shas[0]])
             assert ecode == 0
             ecode, _ = b4.git_run_command(
-                gitdir, ['commit', '--allow-empty', '-m', 'changed subject 2'])
+                gitdir, ['commit', '--allow-empty', '-m', 'changed subject 2']
+            )
             assert ecode == 0
             ecode, _ = b4.git_run_command(
-                gitdir, ['commit', '--allow-empty', '-m', trk_msg])
+                gitdir, ['commit', '--allow-empty', '-m', trk_msg]
+            )
             assert ecode == 0
 
             app._reconcile_after_shell(old_shas)
@@ -273,8 +273,8 @@ class TestReconcileAfterShell:
     async def test_patch_count_mismatch(self, gitdir: str) -> None:
         """Warns and does not update when patch count changes."""
         branch, patch_shas = _create_review_branch_with_patches(
-            gitdir, 'reconcile-mismatch',
-            ['patch 1', 'patch 2', 'patch 3'])
+            gitdir, 'reconcile-mismatch', ['patch 1', 'patch 2', 'patch 3']
+        )
         session = _build_session(gitdir, branch)
         base_sha = session['base_commit']
 
@@ -286,9 +286,7 @@ class TestReconcileAfterShell:
 
             # Simulate squashing: recreate with fewer patches
             trk_msg = _save_tracking_msg(gitdir)
-            _rewrite_patches(gitdir, base_sha,
-                             ['patch 1', 'squashed 2+3'],
-                             trk_msg)
+            _rewrite_patches(gitdir, base_sha, ['patch 1', 'squashed 2+3'], trk_msg)
 
             # Reconcile should NOT update tracking
             app._reconcile_after_shell(old_shas)
@@ -301,8 +299,8 @@ class TestReconcileAfterShell:
     async def test_tracking_commit_persisted(self, gitdir: str) -> None:
         """The on-disk tracking commit is amended with new first-patch-commit."""
         branch, _patch_shas = _create_review_branch_with_patches(
-            gitdir, 'reconcile-persist',
-            ['persist patch 1', 'persist patch 2'])
+            gitdir, 'reconcile-persist', ['persist patch 1', 'persist patch 2']
+        )
         session = _build_session(gitdir, branch)
         base_sha = session['base_commit']
 
@@ -313,9 +311,9 @@ class TestReconcileAfterShell:
 
             # Reword both patches
             trk_msg = _save_tracking_msg(gitdir)
-            _rewrite_patches(gitdir, base_sha,
-                             ['reworded persist 1', 'reworded persist 2'],
-                             trk_msg)
+            _rewrite_patches(
+                gitdir, base_sha, ['reworded persist 1', 'reworded persist 2'], trk_msg
+            )
 
             app._reconcile_after_shell(old_shas)
 
diff --git a/src/tests/test_tui_tracking.py b/src/tests/test_tui_tracking.py
index 80004e8..76ff353 100644
--- a/src/tests/test_tui_tracking.py
+++ b/src/tests/test_tui_tracking.py
@@ -10,6 +10,7 @@ Uses real SQLite databases (via b4.review.tracking) and git repos
 core user workflows: series listing, navigation, filtering,
 status transitions, and modal interactions.
 """
+
 import pathlib
 from typing import Any, Dict, List, Optional
 from unittest.mock import patch
@@ -36,6 +37,7 @@ from b4.review_tui._tracking_app import TrackedSeriesItem, TrackingApp
 # older builds (e.g. Fedora 43 package) still use Static.renderable.
 # ---------------------------------------------------------------------------
 
+
 def _static_text(widget: Any) -> str:
     """Return the text content of a Static widget across Textual versions."""
     if hasattr(widget, 'content'):
@@ -47,6 +49,7 @@ def _static_text(widget: Any) -> str:
 # Helpers
 # ---------------------------------------------------------------------------
 
+
 def _seed_db(identifier: str, series_list: List[Dict[str, Any]]) -> None:
     """Create and populate a tracking database with test series."""
     conn = tracking.init_db(identifier)
@@ -76,20 +79,27 @@ def _seed_db(identifier: str, series_list: List[Dict[str, Any]]) -> None:
             conn.execute(
                 'UPDATE series SET message_count = ?, seen_message_count = ? '
                 'WHERE change_id = ? AND revision = ?',
-                (mc, s.get('seen_message_count', mc),
-                 s['change_id'], s.get('revision', 1)),
+                (
+                    mc,
+                    s.get('seen_message_count', mc),
+                    s['change_id'],
+                    s.get('revision', 1),
+                ),
             )
             conn.commit()
     conn.close()
 
 
-def _create_review_branch(gitdir: str, change_id: str,
-                          identifier: str = 'test-project',
-                          revision: int = 1,
-                          status: str = 'reviewing',
-                          subject: str = 'Test series',
-                          sender_name: str = 'Test Author',
-                          sender_email: str = 'test@example.com') -> str:
+def _create_review_branch(
+    gitdir: str,
+    change_id: str,
+    identifier: str = 'test-project',
+    revision: int = 1,
+    status: str = 'reviewing',
+    subject: str = 'Test series',
+    sender_name: str = 'Test Author',
+    sender_email: str = 'test@example.com',
+) -> str:
     """Create a fake b4 review branch with a proper tracking commit.
 
     Returns the branch name.
@@ -128,13 +138,15 @@ def _create_review_branch(gitdir: str, change_id: str,
     assert ecode == 0
     tree = tree.strip()
     ecode, new_sha = b4.git_run_command(
-        gitdir, ['commit-tree', tree, '-p', base_sha],
+        gitdir,
+        ['commit-tree', tree, '-p', base_sha],
         stdin=commit_msg.encode(),
     )
     assert ecode == 0
     new_sha = new_sha.strip()
     ecode, _ = b4.git_run_command(
-        gitdir, ['update-ref', f'refs/heads/{branch_name}', new_sha])
+        gitdir, ['update-ref', f'refs/heads/{branch_name}', new_sha]
+    )
     assert ecode == 0
     return branch_name
 
@@ -177,6 +189,7 @@ SAMPLE_SERIES: List[Dict[str, Any]] = [
 # Tests
 # ---------------------------------------------------------------------------
 
+
 class TestTrackingAppStartup:
     """Tests for the TrackingApp startup and series listing."""
 
@@ -204,7 +217,9 @@ class TestTrackingAppStartup:
             assert len(list(lv.children)) == 3
 
     @pytest.mark.asyncio
-    async def test_title_shows_identifier_and_count(self, tmp_path: pathlib.Path) -> None:
+    async def test_title_shows_identifier_and_count(
+        self, tmp_path: pathlib.Path
+    ) -> None:
         _seed_db('test-title', SAMPLE_SERIES)
 
         app = TrackingApp('test-title')
@@ -292,6 +307,7 @@ class TestTrackingLimit:
             assert isinstance(app.screen, LimitScreen)
 
             from textual.widgets import Input
+
             inp = app.screen.query_one('#limit-input', Input)
             inp.value = 'drm'
             await pilot.press('enter')
@@ -315,6 +331,7 @@ class TestTrackingLimit:
             await pilot.pause()
 
             from textual.widgets import Input
+
             inp = app.screen.query_one('#limit-input', Input)
             inp.value = 'Charlie'
             await pilot.press('enter')
@@ -337,13 +354,16 @@ class TestTrackingLimit:
             await pilot.press('l')
             await pilot.pause()
             from textual.widgets import Input
+
             inp = app.screen.query_one('#limit-input', Input)
             inp.value = 'alpha'
             await pilot.press('enter')
             await pilot.pause()
 
             lv = app.query_one('#tracking-list', ListView)
-            assert len([c for c in lv.children if isinstance(c, TrackedSeriesItem)]) == 1
+            assert (
+                len([c for c in lv.children if isinstance(c, TrackedSeriesItem)]) == 1
+            )
 
             # Clear the filter
             await pilot.press('l')
@@ -354,7 +374,9 @@ class TestTrackingLimit:
             await pilot.pause()
 
             lv = app.query_one('#tracking-list', ListView)
-            assert len([c for c in lv.children if isinstance(c, TrackedSeriesItem)]) == 3
+            assert (
+                len([c for c in lv.children if isinstance(c, TrackedSeriesItem)]) == 3
+            )
 
     @pytest.mark.asyncio
     async def test_limit_title_shows_count(self, tmp_path: pathlib.Path) -> None:
@@ -367,6 +389,7 @@ class TestTrackingLimit:
             await pilot.press('l')
             await pilot.pause()
             from textual.widgets import Input
+
             inp = app.screen.query_one('#limit-input', Input)
             inp.value = 'alpha'
             await pilot.press('enter')
@@ -382,19 +405,22 @@ class TestTrackingLimitPrefixes:
     @pytest.mark.asyncio
     async def test_limit_by_status(self, tmp_path: pathlib.Path) -> None:
         """s:snoozed should show only snoozed series."""
-        _seed_db('test-limit-status', [
-            {
-                'change_id': 'ls-new',
-                'subject': '[PATCH] new one',
-                'message_id': 'lsn@ex.com',
-            },
-            {
-                'change_id': 'ls-snoozed',
-                'subject': '[PATCH] snoozed one',
-                'status': 'snoozed',
-                'message_id': 'lss@ex.com',
-            },
-        ])
+        _seed_db(
+            'test-limit-status',
+            [
+                {
+                    'change_id': 'ls-new',
+                    'subject': '[PATCH] new one',
+                    'message_id': 'lsn@ex.com',
+                },
+                {
+                    'change_id': 'ls-snoozed',
+                    'subject': '[PATCH] snoozed one',
+                    'status': 'snoozed',
+                    'message_id': 'lss@ex.com',
+                },
+            ],
+        )
 
         app = TrackingApp('test-limit-status')
         async with app.run_test(size=(120, 30)) as pilot:
@@ -402,6 +428,7 @@ class TestTrackingLimitPrefixes:
             await pilot.press('l')
             await pilot.pause()
             from textual.widgets import Input
+
             inp = app.screen.query_one('#limit-input', Input)
             inp.value = 's:snoozed'
             await pilot.press('enter')
@@ -444,7 +471,9 @@ class TestTrackingStatusGroups:
     """Tests for status grouping and display."""
 
     @pytest.mark.asyncio
-    async def test_actionable_before_non_actionable(self, tmp_path: pathlib.Path) -> None:
+    async def test_actionable_before_non_actionable(
+        self, tmp_path: pathlib.Path
+    ) -> None:
         """Actionable series (new) should appear before non-actionable (snoozed).
 
         We use only statuses that don't require a real review branch
@@ -546,6 +575,7 @@ class TestTrackingQuit:
 # Tests with real git repos (review branches)
 # ---------------------------------------------------------------------------
 
+
 class TestTrackingWithReviewBranch:
     """Tests that use the gitdir fixture for real review branches."""
 
@@ -555,12 +585,17 @@ class TestTrackingWithReviewBranch:
         identifier = 'test-reviewing'
         change_id = 'test-review-branch-1'
         _create_review_branch(gitdir, change_id, identifier=identifier)
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] series with review branch',
-            'status': 'reviewing',
-            'message_id': 'review-branch-1@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] series with review branch',
+                    'status': 'reviewing',
+                    'message_id': 'review-branch-1@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -576,12 +611,17 @@ class TestTrackingWithReviewBranch:
         identifier = 'test-review-exit'
         change_id = 'test-exit-branch'
         _create_review_branch(gitdir, change_id, identifier=identifier)
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] exit test',
-            'status': 'reviewing',
-            'message_id': 'exit@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] exit test',
+                    'status': 'reviewing',
+                    'message_id': 'exit@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -597,12 +637,17 @@ class TestTrackingWithReviewBranch:
         identifier = 'test-enter-review'
         change_id = 'test-enter-branch'
         _create_review_branch(gitdir, change_id, identifier=identifier)
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] enter test',
-            'status': 'reviewing',
-            'message_id': 'enter@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] enter test',
+                    'status': 'reviewing',
+                    'message_id': 'enter@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -616,14 +661,20 @@ class TestTrackingWithReviewBranch:
         """Pressing 'r' on a waiting series should change it to reviewing."""
         identifier = 'test-wait-review'
         change_id = 'test-waiting-branch'
-        _create_review_branch(gitdir, change_id, identifier=identifier,
-                              status='waiting')
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] waiting test',
-            'status': 'waiting',
-            'message_id': 'waiting@ex.com',
-        }])
+        _create_review_branch(
+            gitdir, change_id, identifier=identifier, status='waiting'
+        )
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] waiting test',
+                    'status': 'waiting',
+                    'message_id': 'waiting@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -637,8 +688,8 @@ class TestTrackingWithReviewBranch:
             # Verify status was updated in DB
             conn = tracking.get_db(identifier)
             cursor = conn.execute(
-                'SELECT status FROM series WHERE change_id = ?',
-                (change_id,))
+                'SELECT status FROM series WHERE change_id = ?', (change_id,)
+            )
             row = cursor.fetchone()
             conn.close()
             assert row[0] == 'reviewing'
@@ -649,14 +700,19 @@ class TestTrackingWithReviewBranch:
         identifier = 'test-seen'
         change_id = 'test-seen-branch'
         _create_review_branch(gitdir, change_id, identifier=identifier)
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] seen test',
-            'status': 'reviewing',
-            'message_id': 'seen@ex.com',
-            'message_count': 10,
-            'seen_message_count': 3,
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] seen test',
+                    'status': 'reviewing',
+                    'message_id': 'seen@ex.com',
+                    'message_count': 10,
+                    'seen_message_count': 3,
+                }
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -668,7 +724,8 @@ class TestTrackingWithReviewBranch:
             conn = tracking.get_db(identifier)
             cursor = conn.execute(
                 'SELECT message_count, seen_message_count FROM series WHERE change_id = ?',
-                (change_id,))
+                (change_id,),
+            )
             row = cursor.fetchone()
             conn.close()
             assert row[0] == row[1]  # seen should equal total
@@ -680,11 +737,16 @@ class TestTrackingActionMenu:
     @pytest.mark.asyncio
     async def test_action_menu_for_new_series(self, tmp_path: pathlib.Path) -> None:
         """New series should show review/abandon/snooze actions."""
-        _seed_db('test-action-new', [{
-            'change_id': 'new-action-1',
-            'subject': '[PATCH] new action test',
-            'message_id': 'action-new@ex.com',
-        }])
+        _seed_db(
+            'test-action-new',
+            [
+                {
+                    'change_id': 'new-action-1',
+                    'subject': '[PATCH] new action test',
+                    'message_id': 'action-new@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp('test-action-new')
         async with app.run_test(size=(120, 30)) as pilot:
@@ -696,6 +758,7 @@ class TestTrackingActionMenu:
             # Check available actions
             lv = app.screen.query_one('#action-list', ListView)
             from b4.review_tui._modals import ActionItem
+
             actions = [c.key for c in lv.children if isinstance(c, ActionItem)]
             assert 'review' in actions
             assert 'abandon' in actions
@@ -715,12 +778,17 @@ class TestTrackingActionMenu:
         identifier = 'test-action-reviewing'
         change_id = 'reviewing-action-1'
         _create_review_branch(gitdir, change_id, identifier=identifier)
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] reviewing action test',
-            'status': 'reviewing',
-            'message_id': 'action-rev@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] reviewing action test',
+                    'status': 'reviewing',
+                    'message_id': 'action-rev@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -731,6 +799,7 @@ class TestTrackingActionMenu:
 
             lv = app.screen.query_one('#action-list', ListView)
             from b4.review_tui._modals import ActionItem
+
             actions = [c.key for c in lv.children if isinstance(c, ActionItem)]
             assert 'take' in actions
             assert 'rebase' in actions
@@ -742,12 +811,17 @@ class TestTrackingActionMenu:
     @pytest.mark.asyncio
     async def test_action_menu_for_snoozed(self, tmp_path: pathlib.Path) -> None:
         """Snoozed series should show unsnooze/abandon actions."""
-        _seed_db('test-action-snoozed', [{
-            'change_id': 'snoozed-action-1',
-            'subject': '[PATCH] snoozed action test',
-            'status': 'snoozed',
-            'message_id': 'action-snz@ex.com',
-        }])
+        _seed_db(
+            'test-action-snoozed',
+            [
+                {
+                    'change_id': 'snoozed-action-1',
+                    'subject': '[PATCH] snoozed action test',
+                    'status': 'snoozed',
+                    'message_id': 'action-snz@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp('test-action-snoozed')
         async with app.run_test(size=(120, 30)) as pilot:
@@ -758,6 +832,7 @@ class TestTrackingActionMenu:
 
             lv = app.screen.query_one('#action-list', ListView)
             from b4.review_tui._modals import ActionItem
+
             actions = [c.key for c in lv.children if isinstance(c, ActionItem)]
             assert 'unsnooze' in actions
             assert 'abandon' in actions
@@ -770,11 +845,16 @@ class TestTrackingActionMenu:
     @pytest.mark.asyncio
     async def test_enter_on_new_opens_action_menu(self, tmp_path: pathlib.Path) -> None:
         """Enter on a 'new' series should open action menu (not review)."""
-        _seed_db('test-enter-new', [{
-            'change_id': 'enter-new-1',
-            'subject': '[PATCH] enter new test',
-            'message_id': 'enter-new@ex.com',
-        }])
+        _seed_db(
+            'test-enter-new',
+            [
+                {
+                    'change_id': 'enter-new-1',
+                    'subject': '[PATCH] enter new test',
+                    'message_id': 'enter-new@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp('test-enter-new')
         async with app.run_test(size=(120, 30)) as pilot:
@@ -792,20 +872,27 @@ class TestTrackingUpgradeNewSeries:
 
     @pytest.mark.asyncio
     async def test_action_menu_shows_upgrade_for_new_with_newer(
-            self, tmp_path: pathlib.Path) -> None:
+        self, tmp_path: pathlib.Path
+    ) -> None:
         """New series with a newer revision available should offer upgrade."""
         identifier = 'test-upgrade-new'
         change_id = 'upgrade-new-1'
         conn = tracking.init_db(identifier)
         tracking.add_series_to_db(
-            conn, change_id=change_id, revision=12,
+            conn,
+            change_id=change_id,
+            revision=12,
             subject='[PATCH v12] test upgrade',
-            sender_name='Test', sender_email='t@ex.com',
+            sender_name='Test',
+            sender_email='t@ex.com',
             sent_at='2026-01-15T10:00:00+00:00',
-            message_id='v12@ex.com', num_patches=2)
+            message_id='v12@ex.com',
+            num_patches=2,
+        )
         # Add v13 to the revisions table so has_newer is set
-        tracking.add_revision(conn, change_id, 13, 'v13@ex.com',
-                              subject='[PATCH v13] test upgrade')
+        tracking.add_revision(
+            conn, change_id, 13, 'v13@ex.com', subject='[PATCH v13] test upgrade'
+        )
         conn.close()
 
         app = TrackingApp(identifier)
@@ -816,6 +903,7 @@ class TestTrackingUpgradeNewSeries:
             assert isinstance(app.screen, ActionScreen)
             lv = app.screen.query_one('#action-list', ListView)
             from b4.review_tui._modals import ActionItem
+
             actions = [c.key for c in lv.children if isinstance(c, ActionItem)]
             assert 'upgrade' in actions
             assert 'review' in actions
@@ -823,13 +911,19 @@ class TestTrackingUpgradeNewSeries:
 
     @pytest.mark.asyncio
     async def test_action_menu_no_upgrade_without_newer(
-            self, tmp_path: pathlib.Path) -> None:
+        self, tmp_path: pathlib.Path
+    ) -> None:
         """New series without newer revisions should not offer upgrade."""
-        _seed_db('test-upgrade-none', [{
-            'change_id': 'upgrade-none-1',
-            'subject': '[PATCH] no newer test',
-            'message_id': 'only@ex.com',
-        }])
+        _seed_db(
+            'test-upgrade-none',
+            [
+                {
+                    'change_id': 'upgrade-none-1',
+                    'subject': '[PATCH] no newer test',
+                    'message_id': 'only@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp('test-upgrade-none')
         async with app.run_test(size=(120, 30)) as pilot:
@@ -839,30 +933,38 @@ class TestTrackingUpgradeNewSeries:
             assert isinstance(app.screen, ActionScreen)
             lv = app.screen.query_one('#action-list', ListView)
             from b4.review_tui._modals import ActionItem
+
             actions = [c.key for c in lv.children if isinstance(c, ActionItem)]
             assert 'upgrade' not in actions
             await pilot.press('escape')
 
     @pytest.mark.asyncio
-    async def test_upgrade_switches_revision(
-            self, tmp_path: pathlib.Path) -> None:
+    async def test_upgrade_switches_revision(self, tmp_path: pathlib.Path) -> None:
         """Upgrade on a new series should update the DB to the newer revision."""
         identifier = 'test-upgrade-switch'
         change_id = 'upgrade-switch-1'
         conn = tracking.init_db(identifier)
         tracking.add_series_to_db(
-            conn, change_id=change_id, revision=12,
+            conn,
+            change_id=change_id,
+            revision=12,
             subject='[PATCH v12] switch test',
-            sender_name='Test', sender_email='t@ex.com',
+            sender_name='Test',
+            sender_email='t@ex.com',
             sent_at='2026-01-15T10:00:00+00:00',
-            message_id='v12@ex.com', num_patches=2)
+            message_id='v12@ex.com',
+            num_patches=2,
+        )
         # Set message counts so we can verify they get reset
         conn.execute(
             'UPDATE series SET message_count = 6, seen_message_count = 4'
-            ' WHERE change_id = ?', (change_id,))
+            ' WHERE change_id = ?',
+            (change_id,),
+        )
         conn.commit()
-        tracking.add_revision(conn, change_id, 13, 'v13@ex.com',
-                              subject='[PATCH v13] switch test')
+        tracking.add_revision(
+            conn, change_id, 13, 'v13@ex.com', subject='[PATCH v13] switch test'
+        )
         conn.close()
 
         app = TrackingApp(identifier)
@@ -875,6 +977,7 @@ class TestTrackingUpgradeNewSeries:
             # Select 'upgrade' — it should be in the list
             lv = app.screen.query_one('#action-list', ListView)
             from b4.review_tui._modals import ActionItem
+
             for child in lv.children:
                 if isinstance(child, ActionItem) and child.key == 'upgrade':
                     lv.index = lv.children.index(child)
@@ -887,7 +990,9 @@ class TestTrackingUpgradeNewSeries:
             cursor = conn.execute(
                 'SELECT revision, message_id, message_count,'
                 ' seen_message_count FROM series'
-                ' WHERE change_id = ?', (change_id,))
+                ' WHERE change_id = ?',
+                (change_id,),
+            )
             row = cursor.fetchone()
             conn.close()
             assert row is not None
@@ -904,11 +1009,16 @@ class TestTrackingSnooze:
     async def test_snooze_new_series(self, tmp_path: pathlib.Path) -> None:
         """Snoozing a new series should update the database."""
         identifier = 'test-snooze'
-        _seed_db(identifier, [{
-            'change_id': 'snooze-test-1',
-            'subject': '[PATCH] snooze me',
-            'message_id': 'snooze@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': 'snooze-test-1',
+                    'subject': '[PATCH] snooze me',
+                    'message_id': 'snooze@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -937,7 +1047,8 @@ class TestTrackingSnooze:
             conn = tracking.get_db(identifier)
             cursor = conn.execute(
                 'SELECT status, snoozed_until FROM series WHERE change_id = ?',
-                ('snooze-test-1',))
+                ('snooze-test-1',),
+            )
             row = cursor.fetchone()
             conn.close()
             assert row[0] == 'snoozed'
@@ -947,11 +1058,16 @@ class TestTrackingSnooze:
     async def test_snooze_cancel(self, tmp_path: pathlib.Path) -> None:
         """Cancelling snooze should leave the series unchanged."""
         identifier = 'test-snooze-cancel'
-        _seed_db(identifier, [{
-            'change_id': 'snooze-cancel-1',
-            'subject': '[PATCH] do not snooze',
-            'message_id': 'nosnooze@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': 'snooze-cancel-1',
+                    'subject': '[PATCH] do not snooze',
+                    'message_id': 'nosnooze@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -968,8 +1084,8 @@ class TestTrackingSnooze:
             # Verify status unchanged
             conn = tracking.get_db(identifier)
             cursor = conn.execute(
-                'SELECT status FROM series WHERE change_id = ?',
-                ('snooze-cancel-1',))
+                'SELECT status FROM series WHERE change_id = ?', ('snooze-cancel-1',)
+            )
             row = cursor.fetchone()
             conn.close()
             assert row[0] == 'new'
@@ -980,12 +1096,17 @@ class TestTrackingSnooze:
         identifier = 'test-snooze-branch'
         change_id = 'snooze-branch-1'
         _create_review_branch(gitdir, change_id, identifier=identifier)
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] snooze branch test',
-            'status': 'reviewing',
-            'message_id': 'snzbr@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] snooze branch test',
+                    'status': 'reviewing',
+                    'message_id': 'snzbr@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -1004,15 +1125,14 @@ class TestTrackingSnooze:
             # Verify DB
             conn = tracking.get_db(identifier)
             cursor = conn.execute(
-                'SELECT status FROM series WHERE change_id = ?',
-                (change_id,))
+                'SELECT status FROM series WHERE change_id = ?', (change_id,)
+            )
             row = cursor.fetchone()
             conn.close()
             assert row[0] == 'snoozed'
 
             # Verify tracking commit was updated
-            _cover_text, trk = b4.review.load_tracking(
-                gitdir, f'b4/review/{change_id}')
+            _cover_text, trk = b4.review.load_tracking(gitdir, f'b4/review/{change_id}')
             assert trk['series']['status'] == 'snoozed'
             assert 'snoozed' in trk['series']
             assert trk['series']['snoozed']['previous_state'] == 'reviewing'
@@ -1025,20 +1145,23 @@ class TestTrackingAbandon:
     async def test_abandon_new_series(self, tmp_path: pathlib.Path) -> None:
         """Abandoning a new series should remove it from the DB."""
         identifier = 'test-abandon'
-        _seed_db(identifier, [
-            {
-                'change_id': 'keep-1',
-                'subject': '[PATCH] keep me',
-                'sent_at': '2026-03-10T11:00:00+00:00',
-                'message_id': 'keep@ex.com',
-            },
-            {
-                'change_id': 'abandon-1',
-                'subject': '[PATCH] abandon me',
-                'sent_at': '2026-03-10T12:00:00+00:00',
-                'message_id': 'abandon@ex.com',
-            },
-        ])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': 'keep-1',
+                    'subject': '[PATCH] keep me',
+                    'sent_at': '2026-03-10T11:00:00+00:00',
+                    'message_id': 'keep@ex.com',
+                },
+                {
+                    'change_id': 'abandon-1',
+                    'subject': '[PATCH] abandon me',
+                    'sent_at': '2026-03-10T12:00:00+00:00',
+                    'message_id': 'abandon@ex.com',
+                },
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -1073,11 +1196,16 @@ class TestTrackingAbandon:
     async def test_abandon_cancel(self, tmp_path: pathlib.Path) -> None:
         """Cancelling abandon should leave the series intact."""
         identifier = 'test-abandon-cancel'
-        _seed_db(identifier, [{
-            'change_id': 'noabandon-1',
-            'subject': '[PATCH] do not abandon',
-            'message_id': 'noabandon@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': 'noabandon-1',
+                    'subject': '[PATCH] do not abandon',
+                    'message_id': 'noabandon@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -1094,8 +1222,8 @@ class TestTrackingAbandon:
             # Still in DB
             conn = tracking.get_db(identifier)
             cursor = conn.execute(
-                'SELECT change_id FROM series WHERE change_id = ?',
-                ('noabandon-1',))
+                'SELECT change_id FROM series WHERE change_id = ?', ('noabandon-1',)
+            )
             assert cursor.fetchone() is not None
             conn.close()
 
@@ -1104,14 +1232,18 @@ class TestTrackingAbandon:
         """Abandoning a series with a review branch should delete the branch."""
         identifier = 'test-abandon-branch'
         change_id = 'abandon-branch-1'
-        branch_name = _create_review_branch(
-            gitdir, change_id, identifier=identifier)
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] abandon with branch',
-            'status': 'reviewing',
-            'message_id': 'abr@ex.com',
-        }])
+        branch_name = _create_review_branch(gitdir, change_id, identifier=identifier)
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] abandon with branch',
+                    'status': 'reviewing',
+                    'message_id': 'abr@ex.com',
+                }
+            ],
+        )
 
         # Verify branch exists before
         assert b4.git_branch_exists(gitdir, branch_name)
@@ -1135,8 +1267,8 @@ class TestTrackingAbandon:
             # DB should be clean
             conn = tracking.get_db(identifier)
             cursor = conn.execute(
-                'SELECT change_id FROM series WHERE change_id = ?',
-                (change_id,))
+                'SELECT change_id FROM series WHERE change_id = ?', (change_id,)
+            )
             assert cursor.fetchone() is None
             conn.close()
 
@@ -1150,12 +1282,17 @@ class TestTrackingWaiting:
         identifier = 'test-waiting'
         change_id = 'waiting-test-1'
         _create_review_branch(gitdir, change_id, identifier=identifier)
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] wait for v2',
-            'status': 'reviewing',
-            'message_id': 'wait@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] wait for v2',
+                    'status': 'reviewing',
+                    'message_id': 'wait@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -1170,15 +1307,14 @@ class TestTrackingWaiting:
             # Verify DB status
             conn = tracking.get_db(identifier)
             cursor = conn.execute(
-                'SELECT status FROM series WHERE change_id = ?',
-                (change_id,))
+                'SELECT status FROM series WHERE change_id = ?', (change_id,)
+            )
             row = cursor.fetchone()
             conn.close()
             assert row[0] == 'waiting'
 
             # Verify tracking commit
-            _cover_text, trk = b4.review.load_tracking(
-                gitdir, f'b4/review/{change_id}')
+            _cover_text, trk = b4.review.load_tracking(gitdir, f'b4/review/{change_id}')
             assert trk['series']['status'] == 'waiting'
 
     @pytest.mark.asyncio
@@ -1186,11 +1322,16 @@ class TestTrackingWaiting:
         """Marking a new (unimported) series as waiting should update DB only."""
         identifier = 'test-new-waiting'
         change_id = 'new-waiting-1'
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] needs v2',
-            'message_id': 'newwait@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] needs v2',
+                    'message_id': 'newwait@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -1205,8 +1346,8 @@ class TestTrackingWaiting:
             # Verify DB status changed
             conn = tracking.get_db(identifier)
             cursor = conn.execute(
-                'SELECT status FROM series WHERE change_id = ?',
-                (change_id,))
+                'SELECT status FROM series WHERE change_id = ?', (change_id,)
+            )
             row = cursor.fetchone()
             conn.close()
             assert row[0] == 'waiting'
@@ -1216,7 +1357,9 @@ class TestTrackingDetailPanel:
     """Tests for the detail panel shown on series highlight."""
 
     @pytest.mark.asyncio
-    async def test_detail_panel_shows_on_highlight(self, tmp_path: pathlib.Path) -> None:
+    async def test_detail_panel_shows_on_highlight(
+        self, tmp_path: pathlib.Path
+    ) -> None:
         _seed_db('test-detail', SAMPLE_SERIES)
 
         app = TrackingApp('test-detail')
@@ -1224,6 +1367,7 @@ class TestTrackingDetailPanel:
             await pilot.pause()
 
             from textual.containers import Vertical
+
             panel = app.query_one('#details-panel', Vertical)
             # Panel should have non-zero height (auto-shown on first highlight)
             assert panel.styles.height is not None
@@ -1240,11 +1384,14 @@ class TestTrackingDetailPanel:
             await pilot.pause()
 
             from textual.containers import Vertical
+
             panel = app.query_one('#details-panel', Vertical)
             assert panel.styles.height.value == 0  # type: ignore[union-attr]
 
     @pytest.mark.asyncio
-    async def test_detail_panel_updates_on_navigation(self, tmp_path: pathlib.Path) -> None:
+    async def test_detail_panel_updates_on_navigation(
+        self, tmp_path: pathlib.Path
+    ) -> None:
         """Navigating to a different series should update the detail panel."""
         _seed_db('test-detail-nav', SAMPLE_SERIES)
 
@@ -1274,30 +1421,34 @@ class TestTrackingMultipleSeries:
         """App should correctly display a mix of new and reviewing series."""
         identifier = 'test-mixed'
         change_id_rev = 'mixed-reviewing-1'
-        _create_review_branch(gitdir, change_id_rev, identifier=identifier,
-                              subject='Reviewing series')
-        _seed_db(identifier, [
-            {
-                'change_id': change_id_rev,
-                'subject': '[PATCH] reviewing series',
-                'status': 'reviewing',
-                'sent_at': '2026-03-10T12:00:00+00:00',
-                'message_id': 'rev@ex.com',
-            },
-            {
-                'change_id': 'mixed-new-1',
-                'subject': '[PATCH] new series',
-                'sent_at': '2026-03-10T11:00:00+00:00',
-                'message_id': 'new@ex.com',
-            },
-            {
-                'change_id': 'mixed-snoozed-1',
-                'subject': '[PATCH] snoozed series',
-                'status': 'snoozed',
-                'sent_at': '2026-03-10T10:00:00+00:00',
-                'message_id': 'snz@ex.com',
-            },
-        ])
+        _create_review_branch(
+            gitdir, change_id_rev, identifier=identifier, subject='Reviewing series'
+        )
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id_rev,
+                    'subject': '[PATCH] reviewing series',
+                    'status': 'reviewing',
+                    'sent_at': '2026-03-10T12:00:00+00:00',
+                    'message_id': 'rev@ex.com',
+                },
+                {
+                    'change_id': 'mixed-new-1',
+                    'subject': '[PATCH] new series',
+                    'sent_at': '2026-03-10T11:00:00+00:00',
+                    'message_id': 'new@ex.com',
+                },
+                {
+                    'change_id': 'mixed-snoozed-1',
+                    'subject': '[PATCH] snoozed series',
+                    'status': 'snoozed',
+                    'sent_at': '2026-03-10T10:00:00+00:00',
+                    'message_id': 'snz@ex.com',
+                },
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -1318,23 +1469,27 @@ class TestTrackingMultipleSeries:
         """Navigate to a non-first series and enter review mode."""
         identifier = 'test-nav-review'
         change_id = 'nav-review-target'
-        _create_review_branch(gitdir, change_id, identifier=identifier,
-                              subject='Target series')
-        _seed_db(identifier, [
-            {
-                'change_id': 'nav-review-first',
-                'subject': '[PATCH] first (new)',
-                'sent_at': '2026-03-10T12:00:00+00:00',
-                'message_id': 'first@ex.com',
-            },
-            {
-                'change_id': change_id,
-                'subject': '[PATCH] target (reviewing)',
-                'status': 'reviewing',
-                'sent_at': '2026-03-10T11:00:00+00:00',
-                'message_id': 'target@ex.com',
-            },
-        ])
+        _create_review_branch(
+            gitdir, change_id, identifier=identifier, subject='Target series'
+        )
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': 'nav-review-first',
+                    'subject': '[PATCH] first (new)',
+                    'sent_at': '2026-03-10T12:00:00+00:00',
+                    'message_id': 'first@ex.com',
+                },
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] target (reviewing)',
+                    'status': 'reviewing',
+                    'sent_at': '2026-03-10T11:00:00+00:00',
+                    'message_id': 'target@ex.com',
+                },
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -1357,18 +1512,21 @@ class TestTrackingSnoozeRemembersChoice:
     async def test_snooze_remembers_last_input(self, tmp_path: pathlib.Path) -> None:
         """Second snooze should pre-populate with the first snooze's input."""
         identifier = 'test-snooze-memory'
-        _seed_db(identifier, [
-            {
-                'change_id': 'mem-1',
-                'subject': '[PATCH] first',
-                'message_id': 'mem1@ex.com',
-            },
-            {
-                'change_id': 'mem-2',
-                'subject': '[PATCH] second',
-                'message_id': 'mem2@ex.com',
-            },
-        ])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': 'mem-1',
+                    'subject': '[PATCH] first',
+                    'message_id': 'mem1@ex.com',
+                },
+                {
+                    'change_id': 'mem-2',
+                    'subject': '[PATCH] second',
+                    'message_id': 'mem2@ex.com',
+                },
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -1388,12 +1546,17 @@ class TestTrackingSnoozeRemembersChoice:
             # Move to the other (non-snoozed) series before snoozing it.
             # The cursor may still be on the just-snoozed item, so press
             # down then up to ensure we land on a non-snoozed item.
-            first_cid = app._selected_series.get('change_id') if app._selected_series else None
+            first_cid = (
+                app._selected_series.get('change_id') if app._selected_series else None
+            )
             if app._selected_series and app._selected_series.get('status') == 'snoozed':
                 await pilot.press('down')
                 await pilot.pause()
                 # If down didn't change, try up
-                if app._selected_series and app._selected_series.get('change_id') == first_cid:
+                if (
+                    app._selected_series
+                    and app._selected_series.get('change_id') == first_cid
+                ):
                     await pilot.press('up')
                     await pilot.pause()
 
@@ -1414,12 +1577,11 @@ class TestTrackingSnoozeRemembersChoice:
 # Lifecycle / state-machine tests
 # ---------------------------------------------------------------------------
 
+
 def _get_db_status(identifier: str, change_id: str) -> str:
     """Read the current status of a series from the tracking database."""
     conn = tracking.get_db(identifier)
-    cursor = conn.execute(
-        'SELECT status FROM series WHERE change_id = ?',
-        (change_id,))
+    cursor = conn.execute('SELECT status FROM series WHERE change_id = ?', (change_id,))
     row = cursor.fetchone()
     conn.close()
     assert row is not None, f'Series {change_id} not found in DB'
@@ -1456,16 +1618,22 @@ class TestSeriesLifecycle:
         branch_name = f'b4/review/{change_id}'
 
         # Seed series as 'reviewing' with a real review branch
-        _create_review_branch(gitdir, change_id, identifier=identifier,
-                              status='reviewing')
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] lifecycle test series',
-            'sender_name': 'Lifecycle Author',
-            'sender_email': 'lifecycle@example.com',
-            'status': 'reviewing',
-            'message_id': 'lifecycle@ex.com',
-        }])
+        _create_review_branch(
+            gitdir, change_id, identifier=identifier, status='reviewing'
+        )
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] lifecycle test series',
+                    'sender_name': 'Lifecycle Author',
+                    'sender_email': 'lifecycle@example.com',
+                    'status': 'reviewing',
+                    'message_id': 'lifecycle@ex.com',
+                }
+            ],
+        )
 
         # === Phase 1: reviewing → waiting ===
         app = TrackingApp(identifier)
@@ -1564,8 +1732,8 @@ class TestSeriesLifecycle:
         # The real 'take' flow needs suspend + am + editor, so we seed.
         conn = tracking.get_db(identifier)
         conn.execute(
-            'UPDATE series SET status = ? WHERE change_id = ?',
-            ('accepted', change_id))
+            'UPDATE series SET status = ? WHERE change_id = ?', ('accepted', change_id)
+        )
         conn.commit()
         conn.close()
         # Also update the tracking commit
@@ -1589,14 +1757,17 @@ class TestSeriesLifecycle:
             await pilot.press('escape')
 
         # === Phase 7: accepted → archived (mock _archive_branch) ===
-        def _mock_archive(self_app: TrackingApp, cid: str,
-                          rev: Optional[int], rbranch: str,
-                          pw_series_id: Optional[int] = None,
-                          notify: bool = True) -> bool:
+        def _mock_archive(
+            self_app: TrackingApp,
+            cid: str,
+            rev: Optional[int],
+            rbranch: str,
+            pw_series_id: Optional[int] = None,
+            notify: bool = True,
+        ) -> bool:
             """Simplified archive: just update DB status."""
             aconn = tracking.get_db(self_app._identifier)
-            tracking.update_series_status(aconn, cid, 'archived',
-                                          revision=rev)
+            tracking.update_series_status(aconn, cid, 'archived', revision=rev)
             aconn.close()
             return True
 
@@ -1623,11 +1794,16 @@ class TestSeriesLifecycle:
         """A new series can be snoozed without ever entering review."""
         identifier = 'test-lifecycle-snooze-new'
         change_id = 'direct-snooze-1'
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] snooze from new',
-            'message_id': 'ds@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] snooze from new',
+                    'message_id': 'ds@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -1649,14 +1825,20 @@ class TestSeriesLifecycle:
         """A thanked series can only be archived."""
         identifier = 'test-lifecycle-thanked'
         change_id = 'thanked-series-1'
-        _create_review_branch(gitdir, change_id, identifier=identifier,
-                              status='thanked')
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] thanked ready for archive',
-            'status': 'thanked',
-            'message_id': 'thanked@ex.com',
-        }])
+        _create_review_branch(
+            gitdir, change_id, identifier=identifier, status='thanked'
+        )
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] thanked ready for archive',
+                    'status': 'thanked',
+                    'message_id': 'thanked@ex.com',
+                }
+            ],
+        )
 
         # Verify action menu: only 'archive' should be available
         app = TrackingApp(identifier)
@@ -1674,14 +1856,20 @@ class TestSeriesLifecycle:
         """Accepted series should show review, thank, abandon, and archive."""
         identifier = 'test-lifecycle-accepted'
         change_id = 'accepted-menu-1'
-        _create_review_branch(gitdir, change_id, identifier=identifier,
-                              status='accepted')
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] accepted series menu test',
-            'status': 'accepted',
-            'message_id': 'acc@ex.com',
-        }])
+        _create_review_branch(
+            gitdir, change_id, identifier=identifier, status='accepted'
+        )
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] accepted series menu test',
+                    'status': 'accepted',
+                    'message_id': 'acc@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -1697,12 +1885,17 @@ class TestSeriesLifecycle:
         """A 'gone' series (branch deleted externally) should allow
         review and abandon."""
         identifier = 'test-lifecycle-gone'
-        _seed_db(identifier, [{
-            'change_id': 'gone-1',
-            'subject': '[PATCH] gone series',
-            'status': 'gone',
-            'message_id': 'gone@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': 'gone-1',
+                    'subject': '[PATCH] gone series',
+                    'status': 'gone',
+                    'message_id': 'gone@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -1725,8 +1918,9 @@ class TestSeriesLifecycle:
         change_id = 'snooze-wait-1'
         branch_name = f'b4/review/{change_id}'
         # Create branch with 'snoozed' status + snoozed metadata
-        _create_review_branch(gitdir, change_id, identifier=identifier,
-                              status='snoozed')
+        _create_review_branch(
+            gitdir, change_id, identifier=identifier, status='snoozed'
+        )
         # Manually inject snoozed.previous_state into tracking commit
         cover_text, trk = b4.review.load_tracking(gitdir, branch_name)
         trk['series']['snoozed'] = {
@@ -1735,12 +1929,17 @@ class TestSeriesLifecycle:
         }
         b4.review.save_tracking_ref(gitdir, branch_name, cover_text, trk)
 
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] waiting then snoozed',
-            'status': 'snoozed',
-            'message_id': 'sw@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] waiting then snoozed',
+                    'status': 'snoozed',
+                    'message_id': 'sw@ex.com',
+                }
+            ],
+        )
 
         # Unsnooze should restore to 'waiting'
         app = TrackingApp(identifier)
@@ -1764,14 +1963,20 @@ class TestSeriesLifecycle:
         for status in ('reviewing', 'snoozed'):
             identifier = f'test-lifecycle-abandon-{status}'
             change_id = f'abandon-{status}'
-            _create_review_branch(gitdir, change_id, identifier=identifier,
-                                  status=status)
-            _seed_db(identifier, [{
-                'change_id': change_id,
-                'subject': f'[PATCH] abandon from {status}',
-                'status': status,
-                'message_id': f'ab-{status}@ex.com',
-            }])
+            _create_review_branch(
+                gitdir, change_id, identifier=identifier, status=status
+            )
+            _seed_db(
+                identifier,
+                [
+                    {
+                        'change_id': change_id,
+                        'subject': f'[PATCH] abandon from {status}',
+                        'status': status,
+                        'message_id': f'ab-{status}@ex.com',
+                    }
+                ],
+            )
 
             app = TrackingApp(identifier)
             async with app.run_test(size=(120, 30)) as pilot:
@@ -1789,16 +1994,18 @@ class TestSeriesLifecycle:
             # Verify series removed from DB
             conn = tracking.get_db(identifier)
             cursor = conn.execute(
-                'SELECT change_id FROM series WHERE change_id = ?',
-                (change_id,))
-            assert cursor.fetchone() is None, \
+                'SELECT change_id FROM series WHERE change_id = ?', (change_id,)
+            )
+            assert cursor.fetchone() is None, (
                 f'Series should be gone after abandon from {status}'
+            )
             conn.close()
 
             # Verify branch deleted
             branch_name = f'b4/review/{change_id}'
-            assert not b4.git_branch_exists(gitdir, branch_name), \
+            assert not b4.git_branch_exists(gitdir, branch_name), (
                 f'Branch should be deleted after abandon from {status}'
+            )
 
 
 @patch('b4.review.tracking.get_review_target_branches', return_value=['master'])
@@ -1806,15 +2013,22 @@ class TestTargetBranch:
     """Tests for per-series target branch tracking."""
 
     @pytest.mark.asyncio
-    async def test_set_target_branch_from_new(self, _mock_branches: Any, gitdir: str) -> None:
+    async def test_set_target_branch_from_new(
+        self, _mock_branches: Any, gitdir: str
+    ) -> None:
         """Press t on a new series, type a branch, confirm — DB is updated."""
         identifier = 'test-target-new'
         change_id = 'target-new-1'
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] target branch test',
-            'message_id': 'target-new@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] target branch test',
+                    'message_id': 'target-new@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -1840,17 +2054,24 @@ class TestTargetBranch:
             assert target == 'master'
 
     @pytest.mark.asyncio
-    async def test_set_target_branch_from_reviewing(self, _mock_branches: Any, gitdir: str) -> None:
+    async def test_set_target_branch_from_reviewing(
+        self, _mock_branches: Any, gitdir: str
+    ) -> None:
         """Set target on a reviewing series — tracking commit updated too."""
         identifier = 'test-target-rev'
         change_id = 'target-rev-1'
         _create_review_branch(gitdir, change_id, identifier=identifier)
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] target reviewing test',
-            'status': 'reviewing',
-            'message_id': 'target-rev@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] target reviewing test',
+                    'status': 'reviewing',
+                    'message_id': 'target-rev@ex.com',
+                }
+            ],
+        )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
@@ -1882,11 +2103,16 @@ class TestTargetBranch:
         """Verify detail panel shows Target: row when target is set."""
         identifier = 'test-target-detail'
         change_id = 'target-detail-1'
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] target detail test',
-            'message_id': 'target-detail@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] target detail test',
+                    'message_id': 'target-detail@ex.com',
+                }
+            ],
+        )
         # Set target in DB
         conn = tracking.get_db(identifier)
         tracking.update_target_branch(conn, change_id, 'sound/for-next')
@@ -1905,11 +2131,16 @@ class TestTargetBranch:
         """Ctrl+d in modal clears the target branch."""
         identifier = 'test-target-clear'
         change_id = 'target-clear-1'
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH] clear target test',
-            'message_id': 'target-clear@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH] clear target test',
+                    'message_id': 'target-clear@ex.com',
+                }
+            ],
+        )
         # Set target first
         conn = tracking.get_db(identifier)
         tracking.update_target_branch(conn, change_id, 'old-branch')
@@ -1940,14 +2171,17 @@ class TestTargetBranch:
 # Helpers for update-revision tests
 # ---------------------------------------------------------------------------
 
-def _make_mock_lser(revision: int = 2, expected: int = 1,
-                    complete: bool = False) -> b4.LoreSeries:
+
+def _make_mock_lser(
+    revision: int = 2, expected: int = 1, complete: bool = False
+) -> b4.LoreSeries:
     """Build a minimal LoreSeries usable by _on_update_* callbacks.
 
     Patches list contains a single MagicMock with msgid and body
     attributes so the Phase 3 metadata extraction succeeds.
     """
     from unittest.mock import MagicMock
+
     lser = b4.LoreSeries(revision, expected)
     lser.complete = complete
     lser.fromname = 'Test Author'
@@ -1960,29 +2194,45 @@ def _make_mock_lser(revision: int = 2, expected: int = 1,
     return lser
 
 
-def _setup_update_test(gitdir: str, identifier: str,
-                       change_id: str,
-                       current_rev: int = 1,
-                       target_rev: int = 2) -> str:
+def _setup_update_test(
+    gitdir: str,
+    identifier: str,
+    change_id: str,
+    current_rev: int = 1,
+    target_rev: int = 2,
+) -> str:
     """Seed a DB + review branch for update-revision tests.
 
     Returns the review branch name.
     """
     branch = _create_review_branch(
-        gitdir, change_id, identifier=identifier,
-        revision=current_rev, status='reviewing')
-    _seed_db(identifier, [{
-        'change_id': change_id,
-        'subject': f'[PATCH v{current_rev}] update test',
-        'revision': current_rev,
-        'status': 'reviewing',
-        'message_id': f'v{current_rev}@ex.com',
-    }])
+        gitdir,
+        change_id,
+        identifier=identifier,
+        revision=current_rev,
+        status='reviewing',
+    )
+    _seed_db(
+        identifier,
+        [
+            {
+                'change_id': change_id,
+                'subject': f'[PATCH v{current_rev}] update test',
+                'revision': current_rev,
+                'status': 'reviewing',
+                'message_id': f'v{current_rev}@ex.com',
+            }
+        ],
+    )
     # Register the target revision so _do_update_revision can look it up
     conn = tracking.get_db(identifier)
-    tracking.add_revision(conn, change_id, target_rev,
-                          f'v{target_rev}@ex.com',
-                          subject=f'[PATCH v{target_rev}] update test')
+    tracking.add_revision(
+        conn,
+        change_id,
+        target_rev,
+        f'v{target_rev}@ex.com',
+        subject=f'[PATCH v{target_rev}] update test',
+    )
     conn.close()
     return branch
 
@@ -2003,16 +2253,22 @@ class TestUpdateRevisionWorkflow:
         identifier = 'test-update-nomsgid'
         change_id = 'update-nomsgid-1'
         _create_review_branch(gitdir, change_id, identifier=identifier)
-        _seed_db(identifier, [{
-            'change_id': change_id,
-            'subject': '[PATCH v1] no msgid test',
-            'status': 'reviewing',
-            'message_id': 'v1@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': change_id,
+                    'subject': '[PATCH v1] no msgid test',
+                    'status': 'reviewing',
+                    'message_id': 'v1@ex.com',
+                }
+            ],
+        )
         # Register v2 without a message-id
         conn = tracking.get_db(identifier)
-        tracking.add_revision(conn, change_id, 2, '',
-                              subject='[PATCH v2] no msgid test')
+        tracking.add_revision(
+            conn, change_id, 2, '', subject='[PATCH v2] no msgid test'
+        )
         conn.close()
 
         app = TrackingApp(identifier)
@@ -2022,9 +2278,12 @@ class TestUpdateRevisionWorkflow:
             app._do_update_revision(change_id, 1, 2)
             await pilot.pause()
             # Should stay on the main screen, not a WorkerScreen
-            assert not isinstance(app.screen,
-                                  __import__('b4.review_tui._modals',
-                                             fromlist=['WorkerScreen']).WorkerScreen)
+            assert not isinstance(
+                app.screen,
+                __import__(
+                    'b4.review_tui._modals', fromlist=['WorkerScreen']
+                ).WorkerScreen,
+            )
 
     # --- Phase 2: _on_update_prepared (base selection screen) ------------
 
@@ -2032,32 +2291,42 @@ class TestUpdateRevisionWorkflow:
     async def test_prepared_none_is_noop(self, tmp_path: pathlib.Path) -> None:
         """A None result (worker cancelled) should do nothing."""
         identifier = 'test-update-none'
-        _seed_db(identifier, [{
-            'change_id': 'noop-1',
-            'subject': '[PATCH] noop',
-            'message_id': 'noop@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': 'noop-1',
+                    'subject': '[PATCH] noop',
+                    'message_id': 'noop@ex.com',
+                }
+            ],
+        )
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
             await pilot.pause()
             app._on_update_prepared(
-                None, 'noop-1', 1, 2, 'v2@ex.com', 'subj',
-                'b4/review/noop-1')
+                None, 'noop-1', 1, 2, 'v2@ex.com', 'subj', 'b4/review/noop-1'
+            )
             await pilot.pause()
             # No BaseSelectionScreen should be pushed
             from b4.review_tui._modals import BaseSelectionScreen
+
             assert not isinstance(app.screen, BaseSelectionScreen)
 
     @pytest.mark.asyncio
-    async def test_prepared_pushes_base_selection(
-            self, tmp_path: pathlib.Path) -> None:
+    async def test_prepared_pushes_base_selection(self, tmp_path: pathlib.Path) -> None:
         """Successful worker result should push BaseSelectionScreen."""
         identifier = 'test-update-base'
-        _seed_db(identifier, [{
-            'change_id': 'base-1',
-            'subject': '[PATCH] base select',
-            'message_id': 'base@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': 'base-1',
+                    'subject': '[PATCH] base select',
+                    'message_id': 'base@ex.com',
+                }
+            ],
+        )
         lser = _make_mock_lser()
         ambytes = b'fake mbox'
         result = (lser, ambytes, 'abc123456789', 'Guessed base: foo', 1)
@@ -2066,39 +2335,58 @@ class TestUpdateRevisionWorkflow:
         async with app.run_test(size=(120, 30)) as pilot:
             await pilot.pause()
             app._on_update_prepared(
-                result, 'base-1', 1, 2, 'v2@ex.com',
-                '[PATCH v2] base select', 'b4/review/base-1')
+                result,
+                'base-1',
+                1,
+                2,
+                'v2@ex.com',
+                '[PATCH v2] base select',
+                'b4/review/base-1',
+            )
             await pilot.pause()
             from b4.review_tui._modals import BaseSelectionScreen
+
             assert isinstance(app.screen, BaseSelectionScreen)
 
     # --- Phase 3: _on_update_base_selected (apply + swap) ----------------
 
     @pytest.mark.asyncio
-    async def test_base_selected_none_cancels(
-            self, tmp_path: pathlib.Path) -> None:
+    async def test_base_selected_none_cancels(self, tmp_path: pathlib.Path) -> None:
         """Passing None as base_sha should cancel the update."""
         identifier = 'test-update-cancel'
-        _seed_db(identifier, [{
-            'change_id': 'cancel-1',
-            'subject': '[PATCH] cancel',
-            'message_id': 'cancel@ex.com',
-        }])
+        _seed_db(
+            identifier,
+            [
+                {
+                    'change_id': 'cancel-1',
+                    'subject': '[PATCH] cancel',
+                    'message_id': 'cancel@ex.com',
+                }
+            ],
+        )
         lser = _make_mock_lser()
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
             await pilot.pause()
             app._on_update_base_selected(
-                None, lser, b'mbox', 1, 'cancel-1', 1, 2,
-                'v2@ex.com', 'subj', 'b4/review/cancel-1')
+                None,
+                lser,
+                b'mbox',
+                1,
+                'cancel-1',
+                1,
+                2,
+                'v2@ex.com',
+                'subj',
+                'b4/review/cancel-1',
+            )
             await pilot.pause()
             # App should still be running — not exited
             assert app.is_running
 
     @pytest.mark.asyncio
-    async def test_apply_failure_preserves_old_branch(
-            self, gitdir: str) -> None:
+    async def test_apply_failure_preserves_old_branch(self, gitdir: str) -> None:
         """When git-am fails the old review branch must remain intact."""
         identifier = 'test-update-fail'
         change_id = 'update-fail-1'
@@ -2106,8 +2394,7 @@ class TestUpdateRevisionWorkflow:
         upgrade_branch = f'b4/review/_tmp-{change_id}-v2-upgrade'
 
         # Snapshot old branch HEAD before the attempt
-        ecode, old_head = b4.git_run_command(
-            gitdir, ['rev-parse', review_branch])
+        ecode, old_head = b4.git_run_command(gitdir, ['rev-parse', review_branch])
         assert ecode == 0
         old_head = old_head.strip()
 
@@ -2116,21 +2403,34 @@ class TestUpdateRevisionWorkflow:
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
             await pilot.pause()
-            with patch.object(app, 'suspend', return_value=__import__(
-                    'contextlib').nullcontext()), \
-                 patch.object(app, 'exit'), \
-                 patch('b4.review_tui._tracking_app._wait_for_enter'), \
-                 patch('b4.git_fetch_am_into_repo',
-                       side_effect=RuntimeError('apply failed')):
+            with (
+                patch.object(
+                    app, 'suspend', return_value=__import__('contextlib').nullcontext()
+                ),
+                patch.object(app, 'exit'),
+                patch('b4.review_tui._tracking_app._wait_for_enter'),
+                patch(
+                    'b4.git_fetch_am_into_repo',
+                    side_effect=RuntimeError('apply failed'),
+                ),
+            ):
                 app._on_update_base_selected(
-                    'HEAD', lser, b'mbox', 1, change_id, 1, 2,
-                    'v2@ex.com', 'subj', review_branch)
+                    'HEAD',
+                    lser,
+                    b'mbox',
+                    1,
+                    change_id,
+                    1,
+                    2,
+                    'v2@ex.com',
+                    'subj',
+                    review_branch,
+                )
             await pilot.pause()
 
         # Old review branch must still exist with unchanged HEAD
         assert b4.git_branch_exists(gitdir, review_branch)
-        ecode, cur_head = b4.git_run_command(
-            gitdir, ['rev-parse', review_branch])
+        ecode, cur_head = b4.git_run_command(gitdir, ['rev-parse', review_branch])
         assert ecode == 0
         assert cur_head.strip() == old_head
 
@@ -2140,24 +2440,22 @@ class TestUpdateRevisionWorkflow:
         # DB should still show original revision
         conn = tracking.get_db(identifier)
         cursor = conn.execute(
-            'SELECT revision, status FROM series WHERE change_id = ?',
-            (change_id,))
+            'SELECT revision, status FROM series WHERE change_id = ?', (change_id,)
+        )
         row = cursor.fetchone()
         conn.close()
         assert row[0] == 1
         assert row[1] == 'reviewing'
 
     @pytest.mark.asyncio
-    async def test_conflict_abort_preserves_old_branch(
-            self, gitdir: str) -> None:
+    async def test_conflict_abort_preserves_old_branch(self, gitdir: str) -> None:
         """When user aborts conflict resolution the old branch stays."""
         identifier = 'test-update-abort'
         change_id = 'update-abort-1'
         review_branch = _setup_update_test(gitdir, identifier, change_id)
         upgrade_branch = f'b4/review/_tmp-{change_id}-v2-upgrade'
 
-        ecode, old_head = b4.git_run_command(
-            gitdir, ['rev-parse', review_branch])
+        ecode, old_head = b4.git_run_command(gitdir, ['rev-parse', review_branch])
         assert ecode == 0
         old_head = old_head.strip()
 
@@ -2167,23 +2465,35 @@ class TestUpdateRevisionWorkflow:
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
             await pilot.pause()
-            with patch.object(app, 'suspend', return_value=__import__(
-                    'contextlib').nullcontext()), \
-                 patch.object(app, 'exit'), \
-                 patch('b4.review_tui._tracking_app._wait_for_enter'), \
-                 patch('b4.git_fetch_am_into_repo',
-                       side_effect=conflict), \
-                 patch('b4.review_tui._tracking_app._resolve_worktree_am_conflict',
-                       return_value=False):
+            with (
+                patch.object(
+                    app, 'suspend', return_value=__import__('contextlib').nullcontext()
+                ),
+                patch.object(app, 'exit'),
+                patch('b4.review_tui._tracking_app._wait_for_enter'),
+                patch('b4.git_fetch_am_into_repo', side_effect=conflict),
+                patch(
+                    'b4.review_tui._tracking_app._resolve_worktree_am_conflict',
+                    return_value=False,
+                ),
+            ):
                 app._on_update_base_selected(
-                    'HEAD', lser, b'mbox', 1, change_id, 1, 2,
-                    'v2@ex.com', 'subj', review_branch)
+                    'HEAD',
+                    lser,
+                    b'mbox',
+                    1,
+                    change_id,
+                    1,
+                    2,
+                    'v2@ex.com',
+                    'subj',
+                    review_branch,
+                )
             await pilot.pause()
 
         # Old review branch must be untouched
         assert b4.git_branch_exists(gitdir, review_branch)
-        ecode, cur_head = b4.git_run_command(
-            gitdir, ['rev-parse', review_branch])
+        ecode, cur_head = b4.git_run_command(gitdir, ['rev-parse', review_branch])
         assert ecode == 0
         assert cur_head.strip() == old_head
 
@@ -2191,8 +2501,7 @@ class TestUpdateRevisionWorkflow:
         assert not b4.git_branch_exists(gitdir, upgrade_branch)
 
     @pytest.mark.asyncio
-    async def test_successful_upgrade_renames_branch(
-            self, gitdir: str) -> None:
+    async def test_successful_upgrade_renames_branch(self, gitdir: str) -> None:
         """On success the upgrade branch replaces the old review branch."""
         identifier = 'test-update-ok'
         change_id = 'update-ok-1'
@@ -2205,56 +2514,80 @@ class TestUpdateRevisionWorkflow:
         assert ecode == 0
         base = base.strip()
 
-        def _fake_create(topdir: str, branch: str, base_commit: str,
-                         lser_arg: b4.LoreSeries, linkurl: str,
-                         linkmask: str, num_prereqs: int = 0,
-                         identifier: Optional[str] = None,
-                         status: str = 'reviewing',
-                         **kwargs: Any) -> None:
+        def _fake_create(
+            topdir: str,
+            branch: str,
+            base_commit: str,
+            lser_arg: b4.LoreSeries,
+            linkurl: str,
+            linkmask: str,
+            num_prereqs: int = 0,
+            identifier: Optional[str] = None,
+            status: str = 'reviewing',
+            **kwargs: Any,
+        ) -> None:
             """Simulate create_review_branch by making a real branch."""
             branch_suffix = branch.removeprefix('b4/review/')
-            _create_review_branch(topdir, branch_suffix,
-                                  identifier=identifier or 'test',
-                                  revision=2, status='reviewing')
-
-        def _mock_archive(self_app: TrackingApp, cid: str,
-                          rev: Optional[int], rbranch: str,
-                          pw_series_id: Optional[int] = None,
-                          notify: bool = True) -> bool:
+            _create_review_branch(
+                topdir,
+                branch_suffix,
+                identifier=identifier or 'test',
+                revision=2,
+                status='reviewing',
+            )
+
+        def _mock_archive(
+            self_app: TrackingApp,
+            cid: str,
+            rev: Optional[int],
+            rbranch: str,
+            pw_series_id: Optional[int] = None,
+            notify: bool = True,
+        ) -> bool:
             """Delete branch + mark archived in DB."""
             b4.git_run_command(gitdir, ['branch', '-D', rbranch])
             aconn = tracking.get_db(self_app._identifier)
-            tracking.update_series_status(aconn, cid, 'archived',
-                                          revision=rev)
+            tracking.update_series_status(aconn, cid, 'archived', revision=rev)
             aconn.close()
             return True
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
             await pilot.pause()
-            with patch.object(app, 'suspend', return_value=__import__(
-                    'contextlib').nullcontext()), \
-                 patch('b4.review_tui._tracking_app._wait_for_enter'), \
-                 patch('b4.git_fetch_am_into_repo'), \
-                 patch('b4.review.create_review_branch',
-                       side_effect=_fake_create), \
-                 patch('b4.review.get_review_branch_patch_ids',
-                       return_value=[]), \
-                 patch('b4.review.load_tracking',
-                       return_value=('', {'series': {}, 'patches': []})), \
-                 patch('b4.review.reanchor_patch_comments'), \
-                 patch('b4.review.save_tracking_ref'), \
-                 patch.object(TrackingApp, '_archive_branch',
-                              _mock_archive):
+            with (
+                patch.object(
+                    app, 'suspend', return_value=__import__('contextlib').nullcontext()
+                ),
+                patch('b4.review_tui._tracking_app._wait_for_enter'),
+                patch('b4.git_fetch_am_into_repo'),
+                patch('b4.review.create_review_branch', side_effect=_fake_create),
+                patch('b4.review.get_review_branch_patch_ids', return_value=[]),
+                patch(
+                    'b4.review.load_tracking',
+                    return_value=('', {'series': {}, 'patches': []}),
+                ),
+                patch('b4.review.reanchor_patch_comments'),
+                patch('b4.review.save_tracking_ref'),
+                patch.object(TrackingApp, '_archive_branch', _mock_archive),
+            ):
                 app._on_update_base_selected(
-                    base, lser, b'mbox', 1, change_id, 1, 2,
-                    'v2@ex.com', '[PATCH v2] update test',
-                    review_branch)
+                    base,
+                    lser,
+                    b'mbox',
+                    1,
+                    change_id,
+                    1,
+                    2,
+                    'v2@ex.com',
+                    '[PATCH v2] update test',
+                    review_branch,
+                )
             await pilot.pause()
 
             # Upgrade branch should be gone (was renamed)
             assert not b4.git_branch_exists(
-                gitdir, f'b4/review/_tmp-{change_id}-v2-upgrade')
+                gitdir, f'b4/review/_tmp-{change_id}-v2-upgrade'
+            )
             # Upgrade branch should have been renamed to review branch
             assert b4.git_branch_exists(gitdir, review_branch)
 
@@ -2263,7 +2596,8 @@ class TestUpdateRevisionWorkflow:
             cursor = conn.execute(
                 'SELECT revision, status FROM series'
                 ' WHERE change_id = ? AND revision = 2',
-                (change_id,))
+                (change_id,),
+            )
             row = cursor.fetchone()
             conn.close()
             assert row is not None
@@ -2273,8 +2607,7 @@ class TestUpdateRevisionWorkflow:
             assert app.is_running
 
     @pytest.mark.asyncio
-    async def test_archive_failure_leaves_both_branches(
-            self, gitdir: str) -> None:
+    async def test_archive_failure_leaves_both_branches(self, gitdir: str) -> None:
         """If archiving fails, both branches are left for manual recovery."""
         identifier = 'test-update-archfail'
         change_id = 'update-archfail-1'
@@ -2283,34 +2616,48 @@ class TestUpdateRevisionWorkflow:
 
         lser = _make_mock_lser()
 
-        def _fake_create(topdir: str, branch: str, *args: Any,
-                         **kwargs: Any) -> None:
+        def _fake_create(topdir: str, branch: str, *args: Any, **kwargs: Any) -> None:
             branch_suffix = branch.removeprefix('b4/review/')
-            _create_review_branch(topdir, branch_suffix,
-                                  identifier=identifier,
-                                  revision=2, status='reviewing')
+            _create_review_branch(
+                topdir,
+                branch_suffix,
+                identifier=identifier,
+                revision=2,
+                status='reviewing',
+            )
 
         app = TrackingApp(identifier)
         async with app.run_test(size=(120, 30)) as pilot:
             await pilot.pause()
-            with patch.object(app, 'suspend', return_value=__import__(
-                    'contextlib').nullcontext()), \
-                 patch.object(app, 'exit'), \
-                 patch('b4.review_tui._tracking_app._wait_for_enter'), \
-                 patch('b4.git_fetch_am_into_repo'), \
-                 patch('b4.review.create_review_branch',
-                       side_effect=_fake_create), \
-                 patch('b4.review.get_review_branch_patch_ids',
-                       return_value=[]), \
-                 patch('b4.review.load_tracking',
-                       return_value=('', {'series': {}, 'patches': []})), \
-                 patch('b4.review.reanchor_patch_comments'), \
-                 patch('b4.review.save_tracking_ref'), \
-                 patch.object(TrackingApp, '_archive_branch',
-                              return_value=False):
+            with (
+                patch.object(
+                    app, 'suspend', return_value=__import__('contextlib').nullcontext()
+                ),
+                patch.object(app, 'exit'),
+                patch('b4.review_tui._tracking_app._wait_for_enter'),
+                patch('b4.git_fetch_am_into_repo'),
+                patch('b4.review.create_review_branch', side_effect=_fake_create),
+                patch('b4.review.get_review_branch_patch_ids', return_value=[]),
+                patch(
+                    'b4.review.load_tracking',
+                    return_value=('', {'series': {}, 'patches': []}),
+                ),
+                patch('b4.review.reanchor_patch_comments'),
+                patch('b4.review.save_tracking_ref'),
+                patch.object(TrackingApp, '_archive_branch', return_value=False),
+            ):
                 app._on_update_base_selected(
-                    'HEAD', lser, b'mbox', 1, change_id, 1, 2,
-                    'v2@ex.com', 'subj', review_branch)
+                    'HEAD',
+                    lser,
+                    b'mbox',
+                    1,
+                    change_id,
+                    1,
+                    2,
+                    'v2@ex.com',
+                    'subj',
+                    review_branch,
+                )
             await pilot.pause()
 
         # Both branches should exist — user can recover manually
@@ -2334,7 +2681,9 @@ class TestLoadSeriesCaching:
             assert app._cached_revision_counts is not None
 
     @pytest.mark.asyncio
-    async def test_caches_survive_db_poll_no_change(self, tmp_path: pathlib.Path) -> None:
+    async def test_caches_survive_db_poll_no_change(
+        self, tmp_path: pathlib.Path
+    ) -> None:
         """Caches should persist when _check_db_changed finds no change."""
         _seed_db('cache-nochg', SAMPLE_SERIES)
 
@@ -2348,7 +2697,9 @@ class TestLoadSeriesCaching:
             assert id(app._cached_branch_tips) == tips_id
 
     @pytest.mark.asyncio
-    async def test_full_invalidation_clears_all_caches(self, tmp_path: pathlib.Path) -> None:
+    async def test_full_invalidation_clears_all_caches(
+        self, tmp_path: pathlib.Path
+    ) -> None:
         """_invalidate_caches() without change_id clears everything."""
         _seed_db('cache-full-inv', SAMPLE_SERIES)
 
@@ -2364,7 +2715,8 @@ class TestLoadSeriesCaching:
 
     @pytest.mark.asyncio
     async def test_selective_invalidation_keeps_other_caches(
-            self, tmp_path: pathlib.Path) -> None:
+        self, tmp_path: pathlib.Path
+    ) -> None:
         """_invalidate_caches(change_id) only evicts that ART entry."""
         _seed_db('cache-sel-inv', SAMPLE_SERIES)
 
@@ -2398,8 +2750,11 @@ class TestLoadSeriesCaching:
         async with app.run_test(size=(120, 30)) as pilot:
             await pilot.pause()
             # Find the charlie series and check its stashed revisions
-            charlie = [s for s in app._all_series
-                       if s.get('change_id') == 'test-change-charlie']
+            charlie = [
+                s
+                for s in app._all_series
+                if s.get('change_id') == 'test-change-charlie'
+            ]
             assert len(charlie) == 1
             revs = charlie[0].get('_revisions', [])
             assert len(revs) == 2
@@ -2408,16 +2763,19 @@ class TestLoadSeriesCaching:
     @pytest.mark.asyncio
     async def test_snoozed_until_in_series(self, tmp_path: pathlib.Path) -> None:
         """_load_series should include snoozed_until from the DB."""
-        series = [{
-            'change_id': 'test-snooze-detail',
-            'subject': '[PATCH] snooze test',
-            'sender_name': 'Tester',
-            'status': 'snoozed',
-        }]
+        series = [
+            {
+                'change_id': 'test-snooze-detail',
+                'subject': '[PATCH] snooze test',
+                'sender_name': 'Tester',
+                'status': 'snoozed',
+            }
+        ]
         _seed_db('cache-snooze', series)
         conn = tracking.get_db('cache-snooze')
-        tracking.snooze_series(conn, 'test-snooze-detail',
-                               '2026-06-01T00:00:00', revision=1)
+        tracking.snooze_series(
+            conn, 'test-snooze-detail', '2026-06-01T00:00:00', revision=1
+        )
         conn.close()
 
         app = TrackingApp('cache-snooze')

-- 
2.53.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH b4 v2 05/11] Fix tests under uv with complex git config
  2026-04-19 15:59 [PATCH b4 v2 00/11] Enable stricter local checks Tamir Duberstein
                   ` (3 preceding siblings ...)
  2026-04-19 15:59 ` [PATCH b4 v2 04/11] Add ruff format check to CI Tamir Duberstein
@ 2026-04-19 16:00 ` Tamir Duberstein
  2026-04-19 16:00 ` [PATCH b4 v2 06/11] Fix typings in misc/ Tamir Duberstein
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Tamir Duberstein @ 2026-04-19 16:00 UTC (permalink / raw)
  To: Kernel.org Tools; +Cc: Konstantin Ryabitsev, Tamir Duberstein

Add pytest-asyncio to the dev group so pytest can run async TUI
tests.
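
What pytest-asyncio provides can be pictured with a stdlib-only sketch: without the plugin, pytest would only warn that an `async def` test coroutine was never awaited; the plugin effectively runs it on an event loop (hypothetical test name, not from this repo):

```python
import asyncio

# A miniature `async def` test in the style of the TUI tests; the
# await is a stand-in for pilot.pause().
async def test_tui_reacts() -> None:
    await asyncio.sleep(0)
    assert True

# Roughly what pytest-asyncio does per collected async test:
asyncio.run(test_tui_reacts())
```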

Pin git-filter-repo to unreleased commit 4697eeb for the multiline
git config parser fix requested in
https://github.com/newren/git-filter-repo/issues/638.

That parser fix is the only functional change since v2.47.0:
https://github.com/newren/git-filter-repo/compare/v2.47.0...4697eeb37b7c3c30b0492e344f6b89f7139cef26

Inject commit.gpgsign=false through the test fixture so synthetic git
commits do not hang on local GPG/pinentry configuration. Also disable
attestation through MAIN_CONFIG so tests keep the old can_patatt=false
behavior after patatt becomes an unconditional dependency.

As a drive-by, route the test b4 globals, pytest sentinel, and XDG
env overrides through monkeypatch so each test gets automatic
cleanup.
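
The cleanup guarantee monkeypatch gives is the usual save/restore-in-finally pattern; a stdlib-only stand-in (hypothetical helper, not b4 code):

```python
import os
from contextlib import contextmanager
from typing import Iterator

# Minimal equivalent of monkeypatch.setenv: set the variable, then
# restore the prior state no matter how the test body exits.
@contextmanager
def setenv(name: str, value: str) -> Iterator[None]:
    old = os.environ.get(name)
    os.environ[name] = value
    try:
        yield
    finally:
        if old is None:
            os.environ.pop(name, None)
        else:
            os.environ[name] = old

with setenv('XDG_DATA_HOME', '/tmp/xdg-example'):
    assert os.environ['XDG_DATA_HOME'] == '/tmp/xdg-example'
# Restored after the block, just as monkeypatch undoes at teardown.
assert os.environ.get('XDG_DATA_HOME') != '/tmp/xdg-example'
```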

Add pytest to the CI script.

Signed-off-by: Tamir Duberstein <tamird@kernel.org>
---
 ci.sh                 |  1 +
 pyproject.toml        |  6 ++++--
 src/tests/conftest.py | 39 +++++++++++++++++++++++++++++----------
 3 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/ci.sh b/ci.sh
index ddd4cff..7632e85 100755
--- a/ci.sh
+++ b/ci.sh
@@ -5,3 +5,4 @@ set -eu
 uv run ruff format --check
 uv run ruff check
 uv run mypy .
+uv run pytest --durations=20
diff --git a/pyproject.toml b/pyproject.toml
index 0c4f024..959d168 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -24,7 +24,9 @@ classifiers = [
 ]
 dependencies = [
     "requests>=2.24,<3.0",
-    "git-filter-repo>=2.30,<3.0",
+    # Use unreleased fix for multiline git config values.
+    # https://github.com/newren/git-filter-repo/issues/638
+    "git-filter-repo @ git+https://github.com/newren/git-filter-repo.git@4697eeb37b7c3c30b0492e344f6b89f7139cef26",
     "dkimpy>=1.0,<2.0",
     "patatt>=0.6,<2.0",
     "ezgb>=0.1",
@@ -68,7 +70,7 @@ b4 = "b4.command:cmd"
 asyncio_mode = "auto"
 asyncio_default_fixture_loop_scope = "function"
 filterwarnings = "ignore:.*(pyopenssl|invalid escape sequence).*:DeprecationWarning"
-norecursedirs = ["tests/helpers", "patatt"]
+norecursedirs = ["tests/helpers", "ezgb", "liblore", "patatt"]
 log_file = "pytest.log"
 log_file_level = "DEBUG"
 log_file_format = "%(asctime)s [%(levelname)8s] %(message)s (%(filename)s:%(lineno)s)"
diff --git a/src/tests/conftest.py b/src/tests/conftest.py
index 0825373..48b2573 100644
--- a/src/tests/conftest.py
+++ b/src/tests/conftest.py
@@ -1,3 +1,4 @@
+import copy
 import os
 import pathlib
 import sys
@@ -9,20 +10,38 @@ import b4
 
 
 @pytest.fixture(scope='function', autouse=True)
-def settestdefaults(tmp_path: pathlib.Path) -> None:
+def settestdefaults(
+    monkeypatch: pytest.MonkeyPatch,
+    tmp_path: pathlib.Path,
+) -> None:
     topdir = b4.git_get_toplevel()
     if topdir and topdir != os.getcwd():
         os.chdir(topdir)
-    b4.can_network = False
-    b4.MAIN_CONFIG = dict(b4.DEFAULT_CONFIG)
-    b4.USER_CONFIG = {
-        'name': 'Test Override',
-        'email': 'test-override@example.com',
-    }
-    os.environ['XDG_DATA_HOME'] = str(tmp_path)
-    os.environ['XDG_CACHE_HOME'] = str(tmp_path)
+    monkeypatch.setattr(b4, 'can_network', False)
+    monkeypatch.setattr(
+        b4,
+        'MAIN_CONFIG',
+        {
+            **copy.deepcopy(b4.DEFAULT_CONFIG),
+            'attestation-policy': 'off',
+        },
+    )
+    monkeypatch.setattr(
+        b4,
+        'USER_CONFIG',
+        {
+            'name': 'Test Override',
+            'email': 'test-override@example.com',
+        },
+    )
+    monkeypatch.setenv('XDG_DATA_HOME', str(tmp_path))
+    monkeypatch.setenv('XDG_CACHE_HOME', str(tmp_path))
+    git_config_count = int(os.environ.get('GIT_CONFIG_COUNT', '0'))
+    monkeypatch.setenv('GIT_CONFIG_COUNT', str(git_config_count + 1))
+    monkeypatch.setenv(f'GIT_CONFIG_KEY_{git_config_count}', 'commit.gpgsign')
+    monkeypatch.setenv(f'GIT_CONFIG_VALUE_{git_config_count}', 'false')
     # This lets us avoid execvp-ing from inside b4 when testing
-    sys._running_in_pytest = True  # type: ignore[attr-defined]
+    monkeypatch.setattr(sys, '_running_in_pytest', True, raising=False)
 
 
 @pytest.fixture(scope='function')

-- 
2.53.0



* [PATCH b4 v2 06/11] Fix typings in misc/
  2026-04-19 15:59 [PATCH b4 v2 00/11] Enable stricter local checks Tamir Duberstein
                   ` (4 preceding siblings ...)
  2026-04-19 16:00 ` [PATCH b4 v2 05/11] Fix tests under uv with complex git config Tamir Duberstein
@ 2026-04-19 16:00 ` Tamir Duberstein
  2026-04-19 16:00 ` [PATCH b4 v2 07/11] Enable mypy unreachable warnings Tamir Duberstein
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Tamir Duberstein @ 2026-04-19 16:00 UTC (permalink / raw)
  To: Kernel.org Tools; +Cc: Konstantin Ryabitsev, Tamir Duberstein

This allows mypy to run over the whole repo.
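
The recursive JSON alias this adds to send-receive.py follows the standard forward-reference pattern, and the isinstance checks are how the untyped request fields get narrowed; a self-contained sketch (the function name is illustrative, not the patch's):

```python
from typing import Mapping, Sequence, Union

# Recursive alias covering the shapes json.loads can return (sans None),
# as introduced in send-receive.py; the string forms are forward refs.
JSON = Union[str, int, float, bool, Sequence['JSON'], Mapping[str, 'JSON']]

def identity_of(jdata: Mapping[str, JSON]) -> str:
    # isinstance narrows Optional[JSON] down to str for the type checker,
    # mirroring the validation added in auth_new().
    identity = jdata.get('identity')
    if not isinstance(identity, str):
        raise ValueError('Invalid authentication request')
    return identity

assert identity_of({'identity': 'dev@example.com'}) == 'dev@example.com'
```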

Signed-off-by: Tamir Duberstein <tamird@kernel.org>
---
 misc/retrieve_lore_thread.py |  15 +++-
 misc/send-receive.py         | 198 +++++++++++++++++++++++++++----------------
 pyproject.toml               |   9 +-
 3 files changed, 143 insertions(+), 79 deletions(-)

diff --git a/misc/retrieve_lore_thread.py b/misc/retrieve_lore_thread.py
index aad586b..6a5ad96 100644
--- a/misc/retrieve_lore_thread.py
+++ b/misc/retrieve_lore_thread.py
@@ -1,7 +1,14 @@
 import sys
+from typing import TYPE_CHECKING
 
-from instructor import OpenAISchema
-from pydantic import Field
+from pydantic import BaseModel, Field
+
+# TODO(https://github.com/567-labs/instructor/pull/2246): remove this once the
+# PR is merged and released.
+if TYPE_CHECKING:
+    OpenAISchema = BaseModel
+else:
+    from instructor import OpenAISchema
 
 # This is needed for now while the minimization bits aren't released
 sys.path.insert(0, '/home/user/work/git/korg/b4/src')
@@ -16,8 +23,8 @@ class Function(OpenAISchema):
 
     message_id: str = Field(
         ...,
-        example='20240228-foo-bar-baz@localhost',
-        descriptions='Message-ID of the thread to retrieve from lore.kernel.org',
+        examples=['20240228-foo-bar-baz@localhost'],
+        description='Message-ID of the thread to retrieve from lore.kernel.org',
     )
 
     class Config:
diff --git a/misc/send-receive.py b/misc/send-receive.py
index 22a5d99..c0dbfe2 100644
--- a/misc/send-receive.py
+++ b/misc/send-receive.py
@@ -16,11 +16,13 @@ import textwrap
 from configparser import ConfigParser, ExtendedInterpolation
 from email import charset, utils
 from string import Template
-from typing import List, Tuple, Union
+from typing import List, Mapping, Optional, Sequence, Tuple, Union
 
 import ezpi
 import falcon
 import sqlalchemy as sa
+from sqlalchemy.engine import Connection, Engine
+from sqlalchemy.sql import tuple_
 
 import patatt
 
@@ -34,9 +36,11 @@ DB_VERSION = 1
 logger = logging.getLogger('b4-send-receive')
 logger.setLevel(logging.DEBUG)
 
+JSON = Union[str, int, float, bool, Sequence['JSON'], Mapping[str, 'JSON']]
+
 
 class SendReceiveListener(object):
-    def __init__(self, _engine, _config) -> None:
+    def __init__(self, _engine: Engine, _config: ConfigParser) -> None:
         self._engine = _engine
         self._config = _config
         # You shouldn't use this in production
@@ -89,27 +93,25 @@ class SendReceiveListener(object):
         conn.execute(q)
         conn.close()
 
-    def on_get(self, req, resp):
+    def on_get(self, req: falcon.Request, resp: falcon.Response) -> None:
         resp.status = falcon.HTTP_200
         resp.content_type = falcon.MEDIA_TEXT
         resp.text = "We don't serve GETs here\n"
 
-    def send_error(self, resp, message: str) -> None:
+    def send_error(self, resp: falcon.Response, message: str) -> None:
         resp.status = falcon.HTTP_500
         logger.critical('Returning error: %s', message)
         resp.text = json.dumps({'result': 'error', 'message': message})
 
-    def send_success(self, resp, message: str) -> None:
+    def send_success(self, resp: falcon.Response, message: str) -> None:
         resp.status = falcon.HTTP_200
         logger.debug('Returning success: %s', message)
         resp.text = json.dumps({'result': 'success', 'message': message})
 
-    def get_smtp(
-        self,
-    ) -> Tuple[Union[smtplib.SMTP, smtplib.SMTP_SSL, None], Tuple[str, str]]:
+    def get_smtp(self) -> Tuple[smtplib.SMTP, Tuple[str, str]]:
         sconfig = self._config['sendemail']
         server = sconfig.get('smtpserver', 'localhost')
-        port = sconfig.get('smtpserverport', 0)
+        port = sconfig.getint('smtpserverport', 0)
         encryption = sconfig.get('smtpencryption')
 
         logger.debug('Connecting to %s:%s', server, port)
@@ -142,48 +144,57 @@ class SendReceiveListener(object):
             # We assume you know what you're doing if you don't need encryption
             smtp = smtplib.SMTP(server, port)
 
-        frompair = utils.getaddresses([sconfig.get('from')])[0]
+        afrom = sconfig.get('from')
+        assert afrom is not None
+        frompair = utils.getaddresses([afrom])[0]
         return smtp, frompair
 
-    def auth_new(self, jdata, resp) -> None:
+    def auth_new(self, jdata: Mapping[str, JSON], resp: falcon.Response) -> None:
         # Is it already authorized?
         conn = self._engine.connect()
         md = sa.MetaData()
-        identity = jdata.get('identity')
-        selector = jdata.get('selector')
+        identity: Optional[JSON] = jdata.get('identity')
+        selector: Optional[JSON] = jdata.get('selector')
+        pubkey: Optional[JSON] = jdata.get('pubkey')
+        if (
+            not isinstance(identity, str)
+            or not isinstance(selector, str)
+            or not isinstance(pubkey, str)
+        ):
+            self.send_error(resp, message='Invalid authentication request')
+            return
         logger.info('New authentication request for %s/%s', identity, selector)
-        pubkey = jdata.get('pubkey')
         t_auth = sa.Table('auth', md, autoload=True, autoload_with=self._engine)
-        q = sa.select([t_auth.c.auth_id]).where(
+        select_auth = sa.select(t_auth.c.auth_id).where(
             t_auth.c.identity == identity,
             t_auth.c.selector == selector,
             t_auth.c.verified == 1,
         )
-        rp = conn.execute(q)
+        rp = conn.execute(select_auth)
         if len(rp.fetchall()):
             self.send_error(
                 resp, message='i=%s;s=%s is already authorized' % (identity, selector)
             )
             return
         # delete any existing challenges for this and create a new one
-        q = sa.delete(t_auth).where(
+        delete_auth = sa.delete(t_auth).where(
             t_auth.c.identity == identity,
             t_auth.c.selector == selector,
             t_auth.c.verified == 0,
         )
-        conn.execute(q)
+        conn.execute(delete_auth)
         # create new challenge
         import uuid
 
         cstr = str(uuid.uuid4())
-        q = sa.insert(t_auth).values(
+        insert_auth = sa.insert(t_auth).values(
             identity=identity,
             selector=selector,
             pubkey=pubkey,
             challenge=cstr,
             verified=0,
         )
-        conn.execute(q)
+        conn.execute(insert_auth)
         logger.info('Created new challenge for %s/%s: %s', identity, selector, cstr)
         conn.close()
         smtp, frompair = self.get_smtp()
@@ -218,44 +229,54 @@ class SendReceiveListener(object):
         destaddrs = [identity]
         alwaysbcc = self._config['main'].get('alwayscc')
         if alwaysbcc:
-            destaddrs += [x[1] for x in utils.getaddresses(alwaysbcc)]
+            destaddrs += [x[1] for x in utils.getaddresses([alwaysbcc])]
         logger.info('Sending challenge to %s', identity)
         smtp.sendmail(fromaddr, [identity], bdata)
         smtp.close()
         self.send_success(resp, message=f'Challenge generated and sent to {identity}')
 
-    def validate_message(self, conn, t_auth, bdata, verified=1) -> Tuple[str, str, int]:
+    def validate_message(
+        self,
+        conn: Connection,
+        t_auth: sa.Table,
+        bdata: bytes,
+        verified: int = 1,
+    ) -> Tuple[str, str, int]:
         # Returns auth_id of the matching record
         pm = patatt.PatattMessage(bdata)
         if not pm.signed:
             raise patatt.ValidationError('Message is not signed')
 
-        auth_id = identity = selector = pubkey = None
-        for ds in pm.get_sigs():
-            selector = 'default'
-            identity = ''
-            i = ds.get_field('i')
-            if i:
-                identity = i.decode()
-            s = ds.get_field('s')
-            if s:
-                selector = s.decode()
-            logger.debug('i=%s; s=%s', identity, selector)
-            q = sa.select([t_auth.c.auth_id, t_auth.c.pubkey]).where(
-                t_auth.c.identity == identity,
-                t_auth.c.selector == selector,
-                t_auth.c.verified == verified,
+        identity_selector_pairs = [
+            (
+                ''
+                if (i := ds.get_field('i')) is None
+                else i.decode()
+                if isinstance(i, bytes)
+                else i,
+                'default'
+                if (s := ds.get_field('s')) is None
+                else s.decode()
+                if isinstance(s, bytes)
+                else s,
             )
-            rp = conn.execute(q)
-            res = rp.fetchall()
-            if res:
-                auth_id, pubkey = res[0]
-                break
-
-        if not auth_id:
+            for ds in pm.get_sigs()
+        ]
+        logger.debug('is_pairs=%s', identity_selector_pairs)
+        q = sa.select(
+            t_auth.c.identity, t_auth.c.selector, t_auth.c.auth_id, t_auth.c.pubkey
+        ).where(
+            tuple_(t_auth.c.identity, t_auth.c.selector).in_(identity_selector_pairs),
+            t_auth.c.verified == verified,
+        )
+        rp = conn.execute(q)
+        rows = rp.fetchall()
+        if not rows:
             logger.debug('Did not find a matching identity!')
             raise patatt.NoKeyError('No match for this identity')
 
+        identity, selector, auth_id, pubkey = rows[0]
+
         logger.debug(
             'Found matching %s/%s with auth_id=%s', identity, selector, auth_id
         )
@@ -263,9 +284,9 @@ class SendReceiveListener(object):
 
         return identity, selector, auth_id
 
-    def auth_verify(self, jdata, resp) -> None:
+    def auth_verify(self, jdata: Mapping[str, JSON], resp: falcon.Response) -> None:
         msg = jdata.get('msg')
-        if msg.find('\nverify:') < 0:
+        if not isinstance(msg, str) or msg.find('\nverify:') < 0:
             self.send_error(resp, message='Invalid verification message')
             return
         conn = self._engine.connect()
@@ -287,8 +308,10 @@ class SendReceiveListener(object):
         )
 
         # Now compare the challenge to what we received
-        q = sa.select([t_auth.c.challenge]).where(t_auth.c.auth_id == auth_id)
-        rp = conn.execute(q)
+        select_challenge = sa.select(t_auth.c.challenge).where(
+            t_auth.c.auth_id == auth_id
+        )
+        rp = conn.execute(select_challenge)
         res = rp.fetchall()
         challenge = res[0][0]
         if msg.find(f'\nverify:{challenge}') < 0:
@@ -304,20 +327,20 @@ class SendReceiveListener(object):
             selector,
             auth_id,
         )
-        q = (
+        update_auth = (
             sa.update(t_auth)
             .where(t_auth.c.auth_id == auth_id)
             .values(challenge=None, verified=1)
         )
-        conn.execute(q)
+        conn.execute(update_auth)
         conn.close()
         self.send_success(
             resp, message='Challenge verified for %s/%s' % (identity, selector)
         )
 
-    def auth_delete(self, jdata, resp) -> None:
+    def auth_delete(self, jdata: Mapping[str, JSON], resp: falcon.Response) -> None:
         msg = jdata.get('msg')
-        if msg.find('\nauth-delete') < 0:
+        if not isinstance(msg, str) or msg.find('\nauth-delete') < 0:
             self.send_error(resp, message='Invalid key delete message')
             return
         conn = self._engine.connect()
@@ -333,14 +356,14 @@ class SendReceiveListener(object):
         logger.info(
             'Deleting record for %s/%s with auth_id=%s', identity, selector, auth_id
         )
-        q = sa.delete(t_auth).where(t_auth.c.auth_id == auth_id)
-        conn.execute(q)
+        delete_auth = sa.delete(t_auth).where(t_auth.c.auth_id == auth_id)
+        conn.execute(delete_auth)
         conn.close()
         self.send_success(
             resp, message='Record deleted for %s/%s' % (identity, selector)
         )
 
-    def clean_header(self, hdrval: str) -> str:
+    def clean_header(self, hdrval: Optional[str]) -> str:
         if hdrval is None:
             return ''
 
@@ -386,7 +409,11 @@ class SendReceiveListener(object):
             return all([ord(c) < 128 for c in strval])
 
     def wrap_header(
-        self, hdr, width: int = 75, nl: str = '\r\n', transform: str = 'preserve'
+        self,
+        hdr: Tuple[str, str],
+        width: int = 75,
+        nl: str = '\r\n',
+        transform: str = 'preserve',
     ) -> bytes:
         hname, hval = hdr
         if hname.lower() in ('to', 'cc', 'from', 'x-original-from'):
@@ -467,11 +494,17 @@ class SendReceiveListener(object):
             )
         bdata += nl.encode()
         payload = msg.get_payload(decode=True)
+        assert isinstance(payload, bytes)
         for bline in payload.split(b'\n'):
             bdata += re.sub(rb'[\r\n]*$', b'', bline) + nl.encode()
         return bdata
 
-    def receive(self, jdata, resp, reflect: bool = False) -> None:
+    def receive(
+        self,
+        jdata: Mapping[str, JSON],
+        resp: falcon.Response,
+        reflect: bool = False,
+    ) -> None:
         servicename = self._config['main'].get('myname')
         if not servicename:
             servicename = 'Web Endpoint'
@@ -479,6 +512,9 @@ class SendReceiveListener(object):
         if not umsgs:
             self.send_error(resp, message='Missing the messages array')
             return
+        if not isinstance(umsgs, Sequence):
+            self.send_error(resp, message='Invalid messages array')
+            return
         logger.debug('Received a request for %s messages', len(umsgs))
 
         diffre = re.compile(
@@ -497,6 +533,9 @@ class SendReceiveListener(object):
         # First, validate all messages
         seenid = identity = selector = validfrom = None
         for umsg in umsgs:
+            if not isinstance(umsg, str):
+                self.send_error(resp, message='Invalid message payload')
+                return
             bdata = umsg.encode()
             try:
                 identity, selector, auth_id = self.validate_message(conn, t_auth, bdata)
@@ -535,6 +574,7 @@ class SendReceiveListener(object):
                     passes = False
             if passes:
                 payload = msg.get_payload(decode=True)
+                assert isinstance(payload, bytes)
                 if not (diffre.search(payload) or diffstatre.search(payload)):
                     passes = False
 
@@ -556,7 +596,9 @@ class SendReceiveListener(object):
 
             # Make sure that From: matches the validated identity. We allow + expansion,
             # such that foo+listname@example.com is allowed for foo@example.com
-            allfroms = utils.getaddresses([str(x) for x in msg.get_all('from')])
+            froms = msg.get_all('from')
+            assert froms is not None
+            allfroms = utils.getaddresses(froms)
             # Allow only a single From: address
             if len(allfroms) > 1:
                 self.send_error(
@@ -606,6 +648,10 @@ class SendReceiveListener(object):
             )
             msgs.append((msg, destaddrs))
 
+        # The loop above always runs at least once, since we checked that
+        # umsgs is non-empty, so identity is assigned by this point.
+        assert identity is not None
+
         conn.close()
         # All signatures verified. Prepare messages for sending.
         cfgdomains = self._config['main'].get('mydomains')
@@ -620,16 +666,12 @@ class SendReceiveListener(object):
         if _bcc:
             bccaddrs.update([x[1] for x in utils.getaddresses([_bcc])])
 
-        repo = listid = None
-        if (
-            'public-inbox' in self._config
-            and self._config['public-inbox'].get('repo')
-            and not reflect
-        ):
-            repo = self._config['public-inbox'].get('repo')
-            listid = self._config['public-inbox'].get('listid')
-            if not os.path.isdir(repo):
-                repo = None
+        repo_and_listid = None
+        if 'public-inbox' in self._config and not reflect:
+            public_inbox = self._config['public-inbox']
+            if (repo := public_inbox.get('repo')) is not None and os.path.isdir(repo):
+                if (listid := public_inbox.get('listid')) is not None:
+                    repo_and_listid = (repo, listid)
 
         if reflect:
             logger.info('Reflecting %s messages back to %s', len(msgs), identity)
@@ -640,7 +682,8 @@ class SendReceiveListener(object):
 
         for msg, destaddrs in msgs:
             subject = self.clean_header(msg.get('Subject'))
-            if repo:
+            if repo_and_listid is not None:
+                repo, listid = repo_and_listid
                 pmsg = copy.deepcopy(msg)
                 if pmsg.get('List-Id'):
                     pmsg.replace_header('List-Id', listid)
@@ -650,7 +693,9 @@ class SendReceiveListener(object):
                 logger.debug('Wrote %s to public-inbox at %s', subject, repo)
 
             origfrom = msg.get('From')
+            assert origfrom is not None
             origpair = utils.getaddresses([origfrom])[0]
+            assert origpair is not None
             origaddr = origpair[1]
             # Does it match one of our domains
             mydomain = False
@@ -688,6 +733,7 @@ class SendReceiveListener(object):
                     msg.add_header('Reply-To', f'<{origpair[1]}>')
 
                 body = msg.get_payload(decode=True)
+                assert isinstance(body, bytes)
                 # Add a From: header (if there isn't already one), but only if it's a patch
                 if diffre.search(body):
                     # Parse it as a message and see if we get a From: header
@@ -729,7 +775,8 @@ class SendReceiveListener(object):
                 logger.info('---DRYRUN MSG END---')
 
         smtp.close()
-        if repo:
+        if repo_and_listid is not None:
+            repo, _ = repo_and_listid
             # run it once after writing all messages
             logger.debug('Running public-inbox repo hook (if present)')
             ezpi.run_hook(repo)
@@ -740,7 +787,7 @@ class SendReceiveListener(object):
             resp, message=f'{sentaction} {len(msgs)} messages for {identity}/{selector}'
         )
 
-    def on_post(self, req, resp):
+    def on_post(self, req: falcon.Request, resp: falcon.Response) -> None:
         if not req.content_length:
             resp.status = falcon.HTTP_500
             resp.content_type = falcon.MEDIA_TEXT
@@ -748,15 +795,15 @@ class SendReceiveListener(object):
             return
         raw = req.bounded_stream.read()
         try:
-            jdata = json.loads(raw)
+            jdata: JSON = json.loads(raw)
         except Exception:
             resp.status = falcon.HTTP_500
             resp.content_type = falcon.MEDIA_TEXT
             resp.text = 'Failed to parse the request\n'
             return
-        action = jdata.get('action')
-        if not action:
+        if not isinstance(jdata, Mapping) or (action := jdata.get('action')) is None:
             logger.critical('Action not set from %s', req.remote_addr)
+            return
 
         logger.info('Action: %s; from: %s', action, req.remote_addr)
         if action == 'auth-new':
@@ -793,6 +840,9 @@ if gpgbin:
     patatt.GPGBIN = gpgbin
 
 dburl = parser['main'].get('dburl')
+if not dburl:
+    sys.stderr.write('main.dburl is not set in CONFIG\n')
+    sys.exit(1)
 # By default, recycle db connections after 5 min
 db_pool_recycle = parser['main'].getint('dbpoolrecycle', 300)
 engine = sa.create_engine(dburl, pool_recycle=db_pool_recycle)
diff --git a/pyproject.toml b/pyproject.toml
index 959d168..c5b4593 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -42,6 +42,13 @@ dev = [
     "ruff",
     "types-requests",
 ]
+misc = [
+    "ezpi",
+    "falcon",
+    "instructor",
+    "pydantic",
+    "sqlalchemy",
+]
 
 [project.optional-dependencies]
 completion = [
@@ -120,5 +127,5 @@ typeCheckingMode = "off"
 
 # Configure mypy in strict mode
 [tool.mypy]
-exclude = ["^ezgb/", "^liblore/", "^misc/", "^patatt/"]
+exclude = ["^ezgb/", "^liblore/", "^patatt/"]
 strict = true

-- 
2.53.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH b4 v2 07/11] Enable mypy unreachable warnings
  2026-04-19 15:59 [PATCH b4 v2 00/11] Enable stricter local checks Tamir Duberstein
                   ` (5 preceding siblings ...)
  2026-04-19 16:00 ` [PATCH b4 v2 06/11] Fix typings in misc/ Tamir Duberstein
@ 2026-04-19 16:00 ` Tamir Duberstein
  2026-04-19 16:00 ` [PATCH b4 v2 08/11] Enable and fix pyright diagnostics Tamir Duberstein
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Tamir Duberstein @ 2026-04-19 16:00 UTC (permalink / raw)
  To: Kernel.org Tools; +Cc: Konstantin Ryabitsev, Tamir Duberstein

Turn on warn_unreachable and remove dead branches that it exposes.

Some of those branches were stale null checks against non-optional APIs.
Others were test assertions that hit mypy's stale narrowing of mutable
attributes, so add targeted ignores with a reference to the upstream
issue.
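
The targeted ignores cover a pattern like this minimal sketch (the class
and attribute names are illustrative, not taken from the b4 test suite):
mypy narrows a mutable attribute, does not invalidate that narrowing
across a call that reassigns it, and then flags a later assertion as
unreachable under warn_unreachable even though it holds at runtime
(python/mypy#9457):

```python
from typing import Optional


class App:
    """Toy stand-in for a TUI app with a mutable `screen` attribute."""

    def __init__(self) -> None:
        self.screen: Optional[str] = 'help'

    def dismiss(self) -> None:
        # Reassigns the attribute mypy has already narrowed.
        self.screen = None


app = App()
assert app.screen is not None  # mypy narrows app.screen to str here
app.dismiss()                  # ...but the narrowing is not invalidated
# Under warn_unreachable mypy reports the next line as unreachable,
# although it is reached (and passes) at runtime, hence the ignore:
assert app.screen is None  # type: ignore[unreachable]
```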

Signed-off-by: Tamir Duberstein <tamird@kernel.org>
---
 pyproject.toml                   |  2 +-
 src/b4/__init__.py               |  6 ++----
 src/b4/mbox.py                   |  2 --
 src/b4/pr.py                     |  2 +-
 src/b4/review/tracking.py        |  3 ++-
 src/b4/review_tui/_common.py     |  4 ----
 src/b4/review_tui/_modals.py     |  2 --
 src/b4/review_tui/_review_app.py |  4 ++--
 src/b4/ty.py                     |  3 ---
 src/tests/test_tui_modals.py     | 12 +++++++++---
 src/tests/test_tui_tracking.py   | 12 +++++++++---
 11 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/pyproject.toml b/pyproject.toml
index c5b4593..b960994 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -125,7 +125,7 @@ quote-style = "single"
 [tool.pyright]
 typeCheckingMode = "off"
 
-# Configure mypy in strict mode
 [tool.mypy]
 exclude = ["^ezgb/", "^liblore/", "^patatt/"]
 strict = true
+warn_unreachable = true
diff --git a/src/b4/__init__.py b/src/b4/__init__.py
index 3c1c127..6b1789f 100644
--- a/src/b4/__init__.py
+++ b/src/b4/__init__.py
@@ -1190,9 +1190,7 @@ class LoreSeries:
         branches: Optional[List[str]] = None,
         maxdays: int = 30,
     ) -> Tuple[str, int, int]:
-        if self.indexes is None:
-            self.populate_indexes()
-        if self.indexes is None or not len(self.indexes):
+        if not self.indexes:
             raise IndexError('No indexes to check against')
 
         pdate = self.submission_date
@@ -1744,7 +1742,7 @@ class LoreMessage:
         # walk until we find the first text/plain part
         self.body, self.charset = LoreMessage.get_payload(self.msg)
 
-        if self.body is None:
+        if not self.body:
             # Woah, we didn't find any usable parts
             logger.debug('  No plain or patch parts found in message')
             logger.info('  Not plaintext: %s', self.full_subject)
diff --git a/src/b4/mbox.py b/src/b4/mbox.py
index 3198836..66b47ba 100644
--- a/src/b4/mbox.py
+++ b/src/b4/mbox.py
@@ -226,8 +226,6 @@ def make_am(msgs: List[EmailMessage], cmdargs: argparse.Namespace, msgid: str) -
         if cmdargs.cherrypick == '_':
             # We might want to pick a patch sent as a followup, so create a fake series
             # and add followups with diffs
-            if lser is None:
-                lser = b4.LoreSeries(revision=1, expected=1)
             for followup in lmbx.followups:
                 if followup.has_diff:
                     lser.add_patch(followup)
diff --git a/src/b4/pr.py b/src/b4/pr.py
index 7c0659a..b6a0c6b 100644
--- a/src/b4/pr.py
+++ b/src/b4/pr.py
@@ -92,7 +92,7 @@ def git_get_commit_id_from_repo_ref(repo: str, ref: str) -> Optional[str]:
 
 def parse_pr_data(msg: email.message.EmailMessage) -> Optional[b4.LoreMessage]:
     lmsg = b4.LoreMessage(msg)
-    if lmsg.body is None:
+    if not lmsg.body:
         logger.critical('Could not find a plain part in the message body')
         return None
 
diff --git a/src/b4/review/tracking.py b/src/b4/review/tracking.py
index ee0d218..dd9a8e5 100644
--- a/src/b4/review/tracking.py
+++ b/src/b4/review/tracking.py
@@ -229,13 +229,14 @@ def record_take_branch(gitdir: str, branch: str) -> None:
     metadata_dir = os.path.join(gitdir, REVIEW_METADATA_DIR)
     pathlib.Path(metadata_dir).mkdir(parents=True, exist_ok=True)
     metadata_path = get_repo_metadata_path(gitdir)
-    data: Dict[str, Any] = {}
     if os.path.exists(metadata_path):
         try:
             with open(metadata_path, 'r') as f:
                 data = json.load(f)
         except (json.JSONDecodeError, OSError):
             pass
+    else:
+        data = {}
     if not isinstance(data, dict):
         data = {}
     branches = data.get('recent-take-branches', [])
diff --git a/src/b4/review_tui/_common.py b/src/b4/review_tui/_common.py
index c6d5df3..33349fb 100644
--- a/src/b4/review_tui/_common.py
+++ b/src/b4/review_tui/_common.py
@@ -792,10 +792,6 @@ def gather_attestation_info(lser: b4.LoreSeries) -> Dict[str, Any]:
     apply_mismatches = 0
 
     if topdir:
-        # Ensure indexes are populated for applicability check
-        if lser.indexes is None:
-            lser.populate_indexes()
-
         if base_commit:
             base_exists = b4.git_commit_exists(topdir, base_commit)
 
diff --git a/src/b4/review_tui/_modals.py b/src/b4/review_tui/_modals.py
index c8521d6..dc72597 100644
--- a/src/b4/review_tui/_modals.py
+++ b/src/b4/review_tui/_modals.py
@@ -2256,8 +2256,6 @@ class TargetBranchScreen(ModalScreen[Optional[str]]):
             ifh = io.BytesIO()
             b4.save_git_am_mbox(am_msgs, ifh)
             ambytes = ifh.getvalue()
-            if lser.indexes is None:
-                lser.populate_indexes()
             return lser, ambytes
 
     def _check_applicability(self, branch: str) -> None:
diff --git a/src/b4/review_tui/_review_app.py b/src/b4/review_tui/_review_app.py
index e476044..bf6f9ab 100644
--- a/src/b4/review_tui/_review_app.py
+++ b/src/b4/review_tui/_review_app.py
@@ -1265,7 +1265,7 @@ class ReviewApp(CheckRunnerMixin, App[None]):
                 editor_text.encode(), filehint='reply.b4-review.eml'
             )
 
-        if result is None:
+        if not result:
             self.notify('Editor returned no content')
             return
         reply_text = result.decode(errors='replace')
@@ -1388,7 +1388,7 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         with self.suspend():
             result = b4.edit_in_editor(editor_text.encode(), filehint='note.txt')
 
-        if result is None:
+        if not result:
             self.notify('Editor returned no content')
             return
         raw_text = result.decode(errors='replace')
diff --git a/src/b4/ty.py b/src/b4/ty.py
index 45c7773..5918193 100644
--- a/src/b4/ty.py
+++ b/src/b4/ty.py
@@ -521,9 +521,6 @@ def send_messages(
             # This is a patch series
             msg = generate_am_thanks(gitdir, jsondata, branch, cmdargs)
 
-        if msg is None:
-            continue
-
         assert isinstance(jsondata['msgid'], str), 'msgid must be a string'
         msgids.append(jsondata['msgid'])
         assert isinstance(jsondata['patches'], list), 'patches must be a list'
diff --git a/src/tests/test_tui_modals.py b/src/tests/test_tui_modals.py
index 8f16781..4bb3069 100644
--- a/src/tests/test_tui_modals.py
+++ b/src/tests/test_tui_modals.py
@@ -88,7 +88,9 @@ class TestHelpScreen:
             await pilot.pause()
             # Should be back on the host screen
             assert not isinstance(app.screen, HelpScreen)
-            assert dismissed == [None]
+            # https://github.com/python/mypy/issues/9457:
+            # app.screen is stale-narrowed across await.
+            assert dismissed == [None]  # type: ignore[unreachable]
 
     @pytest.mark.asyncio
     async def test_question_mark_dismisses(self) -> None:
@@ -182,7 +184,9 @@ class TestConfirmScreen:
             await pilot.press('y')
             await pilot.pause()
             assert not isinstance(app.screen, ConfirmScreen)
-            assert results == [True]
+            # https://github.com/python/mypy/issues/9457:
+            # app.screen is stale-narrowed across await.
+            assert results == [True]  # type: ignore[unreachable]
 
     @pytest.mark.asyncio
     async def test_escape_cancels(self) -> None:
@@ -477,7 +481,9 @@ class TestPriorReviewScreen:
             await pilot.press('escape')
             await pilot.pause()
             assert not isinstance(app.screen, PriorReviewScreen)
-            assert results == [None]
+            # https://github.com/python/mypy/issues/9457:
+            # app.screen is stale-narrowed across await.
+            assert results == [None]  # type: ignore[unreachable]
 
     @pytest.mark.asyncio
     async def test_content_rendered(self) -> None:
diff --git a/src/tests/test_tui_tracking.py b/src/tests/test_tui_tracking.py
index 76ff353..a9491ad 100644
--- a/src/tests/test_tui_tracking.py
+++ b/src/tests/test_tui_tracking.py
@@ -1044,7 +1044,9 @@ class TestTrackingSnooze:
             assert not isinstance(app.screen, SnoozeScreen)
 
             # Verify DB was updated
-            conn = tracking.get_db(identifier)
+            # https://github.com/python/mypy/issues/9457:
+            # app.screen is stale-narrowed across await.
+            conn = tracking.get_db(identifier)  # type: ignore[unreachable]
             cursor = conn.execute(
                 'SELECT status, snoozed_until FROM series WHERE change_id = ?',
                 ('snooze-test-1',),
@@ -2161,7 +2163,9 @@ class TestTargetBranch:
             assert not isinstance(app.screen, TargetBranchScreen)
 
             # Verify DB cleared
-            conn = tracking.get_db(identifier)
+            # https://github.com/python/mypy/issues/9457:
+            # app.screen is stale-narrowed across await.
+            conn = tracking.get_db(identifier)  # type: ignore[unreachable]
             target = tracking.get_target_branch(conn, change_id)
             conn.close()
             assert target is None
@@ -2709,7 +2713,9 @@ class TestLoadSeriesCaching:
             assert app._cached_branch_tips is not None
             app._invalidate_caches()
             assert app._cached_branch_tips is None
-            assert app._cached_newest_revisions is None
+            # https://github.com/python/mypy/issues/9457:
+            # app._cached_branch_tips is stale-narrowed across a method call.
+            assert app._cached_newest_revisions is None  # type: ignore[unreachable]
             assert app._cached_revision_counts is None
             assert app._cached_art_counts is None
 

-- 
2.53.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH b4 v2 08/11] Enable and fix pyright diagnostics
  2026-04-19 15:59 [PATCH b4 v2 00/11] Enable stricter local checks Tamir Duberstein
                   ` (6 preceding siblings ...)
  2026-04-19 16:00 ` [PATCH b4 v2 07/11] Enable mypy unreachable warnings Tamir Duberstein
@ 2026-04-19 16:00 ` Tamir Duberstein
  2026-04-19 16:00 ` [PATCH b4 v2 09/11] Avoid duplicate map lookups Tamir Duberstein
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Tamir Duberstein @ 2026-04-19 16:00 UTC (permalink / raw)
  To: Kernel.org Tools; +Cc: Konstantin Ryabitsev, Tamir Duberstein

Enable Pyright in standard mode, add it to the dev dependency group, and
report unused imports.

Move lazy b4.review, b4.ty, and b4.ez imports to module scope so Pyright
can see package attributes without local import side effects, and add
explicit stdlib email submodule imports in tests for the same reason.

Tighten type annotations and assertions that Pyright reports on,
including logger and send-mail queue types, Message-ID revision values,
Patchwork state strings, JSON trackfile/msgid fields, and TUI callbacks
that may be dismissed with None.

Fix control-flow edges surfaced by reportPossiblyUnboundVariable:
duplicate list-id preference fallback, patch-id matching after skipped
duplicate diffs, trailer source logging without a backing message,
metadata JSON decoding, and review reply rendering.
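
The general shape of these fixes, as a hedged sketch (the helper and its
arguments are hypothetical, not b4 code): a name bound only on one branch
was read on every path, so the fix binds it up front.

```python
from typing import Optional


def pick_listid(cfg_listid: Optional[str], fallback: str) -> str:
    # Before (sketch): `listid` was bound only inside the `if`, so pyright
    # reported reportPossiblyUnboundVariable on the fallback path:
    #
    #     if cfg_listid:
    #         listid = cfg_listid
    #     return listid  # possibly unbound
    #
    # After: bind the name on every path before any read.
    listid = fallback
    if cfg_listid:
        listid = cfg_listid
    return listid
```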

Reject non-string -c/--config assignments explicitly and avoid using
ConfigDictT values as dict keys in tests.
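
The stricter -c/--config handling follows the shape below, extracted into
a standalone helper for illustration (the real code lives in the
ConfigOption argparse Action in src/b4/command.py): reject non-string
values outright, then split key=value with a 'true' default for bare keys.

```python
def parse_config_assignment(keyval: object) -> tuple[str, str]:
    # Explicitly reject non-string assignments rather than silently
    # ignoring them, matching the change in this patch.
    if not isinstance(keyval, str):
        raise TypeError(f'Expected a string config assignment, got {keyval!r}')
    if '=' in keyval:
        key, value = keyval.split('=', maxsplit=1)
    else:
        # A bare key acts as a boolean flag.
        key, value = keyval, 'true'
    return key, value
```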

Signed-off-by: Tamir Duberstein <tamird@kernel.org>
---
 ci.sh                              |  1 +
 misc/send-receive.py               |  2 ++
 pyproject.toml                     | 12 +++++++++---
 src/b4/__init__.py                 | 16 +++++++---------
 src/b4/command.py                  | 11 ++++++-----
 src/b4/dig.py                      |  4 ++--
 src/b4/ez.py                       | 14 +++++++++-----
 src/b4/review/_review.py           |  5 +++--
 src/b4/review/tracking.py          |  5 ++---
 src/b4/review_tui/_common.py       |  4 ----
 src/b4/review_tui/_modals.py       | 19 +++++++------------
 src/b4/review_tui/_pw_app.py       |  3 ++-
 src/b4/review_tui/_review_app.py   | 26 ++++++++++++++------------
 src/b4/review_tui/_tracking_app.py | 37 ++++++++++++++-----------------------
 src/b4/ty.py                       | 13 +++++++++----
 src/tests/test___init__.py         |  4 +++-
 src/tests/test_review_tracking.py  | 14 ++++----------
 17 files changed, 94 insertions(+), 96 deletions(-)

diff --git a/ci.sh b/ci.sh
index 7632e85..7e4e7a4 100755
--- a/ci.sh
+++ b/ci.sh
@@ -5,4 +5,5 @@ set -eu
 uv run ruff format --check
 uv run ruff check
 uv run mypy .
+uv run pyright
 uv run pytest --durations=20
diff --git a/misc/send-receive.py b/misc/send-receive.py
index c0dbfe2..11af5dc 100644
--- a/misc/send-receive.py
+++ b/misc/send-receive.py
@@ -3,8 +3,10 @@
 import copy
 import email
 import email.header
+import email.message
 import email.policy
 import email.quoprimime
+import email.utils
 import json
 import logging
 import logging.handlers
diff --git a/pyproject.toml b/pyproject.toml
index b960994..8167f53 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -37,6 +37,7 @@ dependencies = [
 dev = [
     "mypy",
     "pip-tools",
+    "pyright",
     "pytest",
     "pytest-asyncio",
     "ruff",
@@ -122,10 +123,15 @@ flake8-quotes.inline-quotes = "single"
 [tool.ruff.format]
 quote-style = "single"
 
-[tool.pyright]
-typeCheckingMode = "off"
-
 [tool.mypy]
 exclude = ["^ezgb/", "^liblore/", "^patatt/"]
 strict = true
 warn_unreachable = true
+
+[tool.pyright]
+# pyright automatically ignores virtual environments and other directories, but
+# once we specify `exclude`, we're on our own. See
+# https://github.com/microsoft/pyright/issues/9057#issuecomment-2366938099.
+exclude = [".venv", "ezgb", "liblore", "patatt"]
+typeCheckingMode = "standard"
+reportUnusedImport = true
diff --git a/src/b4/__init__.py b/src/b4/__init__.py
index 6b1789f..c427ee4 100644
--- a/src/b4/__init__.py
+++ b/src/b4/__init__.py
@@ -93,7 +93,7 @@ def _dkim_log_filter(record: logging.LogRecord) -> bool:
     return True
 
 
-logger = logging.getLogger('b4')
+logger: logging.Logger = logging.getLogger('b4')
 dkimlogger = logger.getChild('dkim')
 dkimlogger.addFilter(_dkim_log_filter)
 # Route liblore logging through b4's logger so debug mode covers it
@@ -1349,25 +1349,25 @@ class LoreSeries:
                         seenfiles.add(nfn)
                     # Try to grab full ref_id of this hash
                     try:
-                        ohash = git_revparse_obj(ofi)
+                        hash = git_revparse_obj(ofi)
                         logger.debug('  Found matching blob for: %s', ofn)
                         gitargs = [
                             'update-index',
                             '--add',
                             '--cacheinfo',
-                            f'{fmod},{ohash},{ofn}',
+                            f'{fmod},{hash},{ofn}',
                         ]
                     except RuntimeError:
                         logger.debug(
                             'Could not find matching blob for %s (%s)', ofn, ofi
                         )
                         try:
-                            chash = git_revparse_obj(f':{ofn}', topdir)
+                            hash = git_revparse_obj(f':{ofn}', topdir)
                             gitargs = [
                                 'update-index',
                                 '--add',
                                 '--cacheinfo',
-                                f'{fmod},{chash},{ofn}',
+                                f'{fmod},{hash},{ofn}',
                             ]
                         except RuntimeError:
                             logger.critical(
@@ -1378,9 +1378,7 @@ class LoreSeries:
                     ecode, out = git_run_command(None, gitargs)
                     if ecode > 0:
                         logger.critical(
-                            '  ERROR: Could not run update-index for %s (%s)',
-                            ofn,
-                            ohash,
+                            '  ERROR: Could not run update-index for %s (%s)', ofn, hash
                         )
                         return None, None
 
@@ -4972,7 +4970,7 @@ def send_mail(
     web_endpoint: Optional[str] = None,
     reflect: bool = False,
 ) -> Optional[int]:
-    tosend = list()
+    tosend: List[Tuple[Set[str], bytes, LoreSubject]] = list()
     if output_dir is not None:
         dryrun = True
 
diff --git a/src/b4/command.py b/src/b4/command.py
index 35ed44b..fa6e84c 100644
--- a/src/b4/command.py
+++ b/src/b4/command.py
@@ -279,12 +279,13 @@ class ConfigOption(argparse.Action):
             config = dict()
             setattr(namespace, self.dest, config)
 
-        if isinstance(keyval, str):
-            if '=' in keyval:
-                key, value = keyval.split('=', maxsplit=1)
-            else:
-                key, value = keyval, 'true'
+        if not isinstance(keyval, str):
+            raise TypeError(f'Expected a string config assignment, got {keyval!r}')
 
+        if '=' in keyval:
+            key, value = keyval.split('=', maxsplit=1)
+        else:
+            key, value = keyval, 'true'
         config[key] = value
 
 
diff --git a/src/b4/dig.py b/src/b4/dig.py
index 781d509..eac749c 100644
--- a/src/b4/dig.py
+++ b/src/b4/dig.py
@@ -297,7 +297,7 @@ def dig_commitish(cmdargs: argparse.Namespace) -> None:
         if not best_match:
             # Next, try to find by exact patch-id
             for lmsg in all_lmsgs:
-                if lmsg.git_patch_id == patch_id:
+                if lmsg.git_patch_id == patch_id:  # pyright: ignore[reportPossiblyUnboundVariable] # broken since 3ae277e9c7dd3e1df61a14884aabdd5834ad1201
                     logger.debug('matched by exact patch-id')
                     best_match = lmsg
                     break
@@ -354,7 +354,7 @@ def dig_commitish(cmdargs: argparse.Namespace) -> None:
                 continue
             if firstmsg is None:
                 firstmsg = lmsg
-            if lmsg.git_patch_id == patch_id:
+            if lmsg.git_patch_id == patch_id:  # pyright: ignore[reportPossiblyUnboundVariable] # broken since inception in 16329336c1c8faba853b11238a16249306742505
                 logger.debug('Matched by exact patch-id')
                 break
             if lmsg.subject == csubj:
diff --git a/src/b4/ez.py b/src/b4/ez.py
index 02589c5..3a59256 100644
--- a/src/b4/ez.py
+++ b/src/b4/ez.py
@@ -1481,7 +1481,7 @@ def update_trailers(cmdargs: argparse.Namespace) -> None:
                         fltr.lmsg.msgid, safe='@'
                     )
                 logger.info('  + %s', rendered)
-                logger.info('    via: %s', source)
+                logger.info('    via: %s', source)  # pyright: ignore[reportPossiblyUnboundVariable] # broken since 742e017c1b5b91d0e6fd6fca7decf73956b31487
             else:
                 logger.debug('  . %s', fltr.as_string(omit_extinfo=True))
 
@@ -1636,7 +1636,7 @@ def get_base_changeid_from_tag(tagname: str) -> Tuple[str, str, str]:
     return cover, base_commit, change_id
 
 
-def make_msgid_tpt(change_id: str, revision: str, domain: Optional[str] = None) -> str:
+def make_msgid_tpt(change_id: str, revision: int, domain: Optional[str] = None) -> str:
     if not domain:
         usercfg = b4.get_user_config()
         myemail = usercfg.get('email')
@@ -1800,7 +1800,7 @@ def get_prep_branch_as_patches(
 
     if prefixes is None:
         prefixes = list()
-    prefixes += tracking['series'].get('prefixes', list())
+    prefixes.extend(tracking['series'].get('prefixes', list()))
     base_commit, start_commit, end_commit = get_series_range(usebranch=usebranch)
     change_id = tracking['series'].get('change-id')
     revision = tracking['series'].get('revision')
@@ -1953,6 +1953,10 @@ def get_prep_branch_as_patches(
         'prerequisites': prerequisites,
         'signature': b4.get_email_signature(),
     }
+    # Possible type confusion here; revision is initially a string, but is
+    # then assigned an integer in the loop above. What happens if that loop
+    # runs zero times? Unclear.
+    assert isinstance(revision, int)
     if cover_template.find('${range_diff}') >= 0:
         if revision > 1:
             oldrev = revision - 1
@@ -2034,7 +2038,7 @@ def get_sent_tag_as_patches(
     csubject, cbody = get_cover_subject_body(cover)
     cbody = cbody.strip() + '\n-- \n' + b4.get_email_signature()
     prefixes = ['RESEND'] + csubject.get_extra_prefixes(exclude=['RESEND'])
-    msgid_tpt = make_msgid_tpt(change_id, str(revision))
+    msgid_tpt = make_msgid_tpt(change_id, revision)
     seriests = int(time.time())
     mailfrom = b4.get_mailfrom()
 
@@ -3178,7 +3182,7 @@ def force_revision(forceto: int) -> None:
 
 
 def range_diff_compare(
-    compareto: str, execvp: bool = True, range_diff_opts: Optional[str] = None
+    compareto: int, execvp: bool = True, range_diff_opts: Optional[str] = None
 ) -> Union[str, None]:
     _, tracking = load_cover()
     # Try the new format first
diff --git a/src/b4/review/_review.py b/src/b4/review/_review.py
index b061139..47098c1 100644
--- a/src/b4/review/_review.py
+++ b/src/b4/review/_review.py
@@ -21,6 +21,7 @@ import liblore.utils
 
 import b4
 import b4.mbox
+import b4.review
 import b4.review.tracking
 
 logger = b4.logger
@@ -2172,7 +2173,7 @@ def update_series_tracking(
 
 def cmd_tui(cmdargs: argparse.Namespace) -> None:
     try:
-        import b4.review_tui
+        import b4.review_tui as review_tui
     except ImportError:
         logger.critical('The TUI requires the textual library.')
         logger.critical('Install it with: pip install b4[tui]')
@@ -2197,7 +2198,7 @@ def cmd_tui(cmdargs: argparse.Namespace) -> None:
             logger.critical('Enroll with: b4 review enroll')
             sys.exit(1)
 
-    b4.review_tui.run_tracking_tui(
+    review_tui.run_tracking_tui(
         identifier,
         email_dryrun=cmdargs.email_dryrun,
         no_sign=cmdargs.no_sign,
diff --git a/src/b4/review/tracking.py b/src/b4/review/tracking.py
index dd9a8e5..fb41bf0 100644
--- a/src/b4/review/tracking.py
+++ b/src/b4/review/tracking.py
@@ -229,14 +229,13 @@ def record_take_branch(gitdir: str, branch: str) -> None:
     metadata_dir = os.path.join(gitdir, REVIEW_METADATA_DIR)
     pathlib.Path(metadata_dir).mkdir(parents=True, exist_ok=True)
     metadata_path = get_repo_metadata_path(gitdir)
+    data: object = {}
     if os.path.exists(metadata_path):
         try:
             with open(metadata_path, 'r') as f:
                 data = json.load(f)
         except (json.JSONDecodeError, OSError):
             pass
-    else:
-        data = {}
     if not isinstance(data, dict):
         data = {}
     branches = data.get('recent-take-branches', [])
@@ -814,7 +813,7 @@ def get_all_revisions_grouped(
     )
     result: dict[str, list[dict[str, Any]]] = {}
     for row in cursor.fetchall():
-        entry = dict(zip(cols, row))
+        entry: dict[str, Any] = dict(zip(cols, row))
         result.setdefault(row[0], []).append(entry)
     return result
 
diff --git a/src/b4/review_tui/_common.py b/src/b4/review_tui/_common.py
index 33349fb..84992e8 100644
--- a/src/b4/review_tui/_common.py
+++ b/src/b4/review_tui/_common.py
@@ -6,9 +6,6 @@
 __author__ = 'Konstantin Ryabitsev <konstantin@linuxfoundation.org>'
 
 import email.message
-import email.parser
-import email.policy
-import email.utils
 import json
 import os
 import tempfile
@@ -23,7 +20,6 @@ from rich.text import Text
 from textual.widgets import RichLog
 
 import b4
-import b4.mbox
 import b4.review
 import b4.review.tracking
 
diff --git a/src/b4/review_tui/_modals.py b/src/b4/review_tui/_modals.py
index dc72597..21f76ad 100644
--- a/src/b4/review_tui/_modals.py
+++ b/src/b4/review_tui/_modals.py
@@ -36,6 +36,9 @@ from textual.widgets import (
 from textual.worker import Worker, WorkerState
 
 import b4
+import b4.review
+import b4.review.tracking
+import b4.ty
 from b4.review_tui._common import (
     CI_CHECK_LABELS,
     JKListNavMixin,
@@ -978,8 +981,6 @@ class TakeConfirmScreen(ModalScreen[bool]):
 
     def _test_take(self) -> Tuple[bool, str]:
         """Test-apply review branch patches at the target base."""
-        import b4.review
-
         with _quiet_worker():
             topdir = b4.git_get_toplevel()
             if not topdir:
@@ -1477,8 +1478,6 @@ class QueueDeliveryScreen(
         self._cancelled = True
 
     def _do_deliver(self) -> Tuple[int, int, List[Tuple[str, int]]]:
-        import b4.ty
-
         def _on_progress(completed: int, total: int, status: str) -> None:
             if not self._cancelled:
                 self.app.call_from_thread(
@@ -1679,7 +1678,8 @@ class ViewSeriesScreen(_FetchViewerScreen):
             msgs = b4.review._retrieve_messages(self._message_id)
             return b4.review._get_lore_series(msgs)
 
-    def _show_result(self, lser: 'b4.LoreSeries') -> None:
+    def _show_result(self, result: 'b4.LoreSeries') -> None:
+        lser = result
         subject = lser.subject or '(no subject)'
         self.query_one('#fv-title', Static).update(subject)
         viewer = self.query_one('#fv-viewer', RichLog)
@@ -1724,13 +1724,12 @@ class CIChecksScreen(_FetchViewerScreen):
         self._series = series
 
     def _fetch(self) -> List[Dict[str, Any]]:
-        import b4.review
-
         with _quiet_worker():
             patch_ids = self._series.get('patch_ids', [])
             return b4.review.pw_fetch_checks(self._pwkey, self._pwurl, patch_ids)
 
-    def _show_result(self, checks: List[Dict[str, Any]]) -> None:
+    def _show_result(self, result: List[Dict[str, Any]]) -> None:
+        checks = result
         series_name = self._series.get('name') or '(no subject)'
         self.query_one('#fv-title', Static).update(f'CI checks \u2014 {series_name}')
         viewer = self.query_one('#fv-viewer', RichLog)
@@ -1989,8 +1988,6 @@ class RebaseScreen(ModalScreen[bool]):
 
     def _prepare_local(self) -> bytes:
         """Build mbox from the local review branch patches."""
-        import b4.review
-
         topdir = b4.git_get_toplevel()
         if not topdir:
             raise RuntimeError('Not in a git repository')
@@ -2760,8 +2757,6 @@ class UpdateAllScreen(ModalScreen[Dict[str, Any]]):
         self._cancelled = True
 
     def _do_updates(self) -> Dict[str, Any]:
-        import b4.review
-
         with _quiet_worker():
             # Rescan local review branches first so the DB reflects current
             # on-disk state before the network update runs.
diff --git a/src/b4/review_tui/_pw_app.py b/src/b4/review_tui/_pw_app.py
index 6efb55a..6ee5b80 100644
--- a/src/b4/review_tui/_pw_app.py
+++ b/src/b4/review_tui/_pw_app.py
@@ -515,8 +515,9 @@ class PwApp(App[None]):
         )
 
     def _on_apply_complete(
-        self, result: Tuple[int, int, str], item: 'PwSeriesItem'
+        self, result: Optional[Tuple[int, int, str]], item: 'PwSeriesItem'
     ) -> None:
+        assert result is not None
         ok, fail, new_state = result
         if fail:
             self.notify(f'{ok} updated, {fail} failed', severity='warning')
diff --git a/src/b4/review_tui/_review_app.py b/src/b4/review_tui/_review_app.py
index bf6f9ab..7cdc6e9 100644
--- a/src/b4/review_tui/_review_app.py
+++ b/src/b4/review_tui/_review_app.py
@@ -1226,7 +1226,6 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         existing_reply = review.get('reply', '')
 
         # Get the real diff for position resolution when parsing back
-        real_diff = ''
         if self._selected_idx > 0:
             patch_idx = self._selected_idx - 1
             if patch_idx >= len(self._commit_shas):
@@ -1238,18 +1237,11 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             if ecode > 0:
                 self.notify('Could not get diff', severity='error')
                 return
-
-        if existing_reply:
-            editor_text = existing_reply
-        else:
-            all_reviews = target.get('reviews', {})
-            my_email = str(self._usercfg.get('email', ''))
-            if self._selected_idx == 0:
-                # Cover letter reply
-                editor_text = b4.review._render_quoted_diff_with_comments(
-                    '', all_reviews, my_email, commit_msg=self._cover_text
-                )
+            if existing_reply:
+                editor_text = existing_reply
             else:
+                all_reviews = target.get('reviews', {})
+                my_email = str(self._usercfg.get('email', ''))
                 ecode, commit_msg = b4.git_run_command(
                     self._topdir, ['show', '--format=%B', '--no-patch', sha]
                 )
@@ -1259,6 +1251,16 @@ class ReviewApp(CheckRunnerMixin, App[None]):
                 editor_text = b4.review._render_quoted_diff_with_comments(
                     real_diff, all_reviews, my_email, commit_msg=commit_msg.strip()
                 )
+        else:
+            real_diff = ''
+            if existing_reply:
+                editor_text = existing_reply
+            else:
+                all_reviews = target.get('reviews', {})
+                my_email = str(self._usercfg.get('email', ''))
+                editor_text = b4.review._render_quoted_diff_with_comments(
+                    '', all_reviews, my_email, commit_msg=self._cover_text
+                )
 
         with self.suspend():
             result = b4.edit_in_editor(
diff --git a/src/b4/review_tui/_tracking_app.py b/src/b4/review_tui/_tracking_app.py
index 2c8087f..6b4151c 100644
--- a/src/b4/review_tui/_tracking_app.py
+++ b/src/b4/review_tui/_tracking_app.py
@@ -9,7 +9,6 @@ import copy
 import datetime
 import email.message
 import email.parser
-import email.policy
 import email.utils
 import io
 import json
@@ -29,9 +28,11 @@ from textual.widgets import Footer, Label, ListItem, ListView, Static
 from textual.worker import Worker, WorkerState
 
 import b4
+import b4.ez
 import b4.mbox
 import b4.review
 import b4.review.tracking
+import b4.ty
 from b4.review_tui._common import (
     CheckRunnerMixin,
     SeparatedFooter,
@@ -829,8 +830,6 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         b4.review.tracking.unsnooze_series(conn, cid, prev_status, revision=rev)
 
     def _load_series(self) -> None:
-        import b4.ty
-
         self._auto_wake_snoozed()
 
         all_series = b4.review.tracking.get_all_tracked_series(self._identifier)
@@ -1644,6 +1643,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                             rend,
                         )
 
+            _is_rt = bool(series.get('is_rethreaded'))
             try:
                 logger.info('Base: %s', base_commit)
                 b4.git_fetch_am_into_repo(
@@ -1655,7 +1655,6 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                 )
 
                 # Create the review branch
-                _is_rt = bool(series.get('is_rethreaded'))
                 b4.review.create_review_branch(
                     topdir,
                     branch_name,
@@ -2145,7 +2144,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
     def _on_newer_revision_acknowledged(
         self,
-        proceed: bool,
+        proceed: Optional[bool],
         target_branch: str,
         change_id: str,
         review_branch: str,
@@ -2220,7 +2219,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
     def _on_take_confirmed(
         self,
-        confirmed: bool,
+        confirmed: Optional[bool],
         change_id: str,
         review_branch: str,
         take_screen: 'TakeScreen',
@@ -2272,7 +2271,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
     def _on_cherrypick_confirmed(
         self,
-        confirmed: bool,
+        confirmed: Optional[bool],
         change_id: str,
         review_branch: str,
         take_screen: 'TakeScreen',
@@ -2323,7 +2322,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
     def _on_take_final(
         self,
-        confirmed: bool,
+        confirmed: Optional[bool],
         method: str,
         change_id: str,
         review_branch: str,
@@ -2949,7 +2948,10 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         )
 
     def _on_rebase_confirmed(
-        self, confirmed: bool, review_branch: str, rebase_screen: 'RebaseScreen'
+        self,
+        confirmed: Optional[bool],
+        review_branch: str,
+        rebase_screen: 'RebaseScreen',
     ) -> None:
         """Handle rebase confirmation result."""
         if not confirmed:
@@ -3452,7 +3454,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
     def _on_abandon_confirmed(
         self,
-        confirmed: bool,
+        confirmed: Optional[bool],
         change_id: str,
         review_branch: str,
         has_branch: bool,
@@ -3879,6 +3881,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                         )
 
             # --- 3. Apply to temporary upgrade branch ---
+            _is_rt = bool((self._selected_series or {}).get('is_rethreaded'))
             try:
                 logger.info('Base: %s', base_sha)
                 b4.git_fetch_am_into_repo(
@@ -3888,7 +3891,6 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
                     origin=linkurl,
                     am_flags=['-3'],
                 )
-                _is_rt = bool((self._selected_series or {}).get('is_rethreaded'))
                 b4.review.create_review_branch(
                     topdir,
                     upgrade_branch,
@@ -4240,8 +4242,6 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         import tarfile
         import time
 
-        import b4.ez
-
         topdir = b4.git_get_toplevel()
         if not topdir:
             if notify:
@@ -4325,7 +4325,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
     def _on_archive_confirmed(
         self,
-        confirmed: bool,
+        confirmed: Optional[bool],
         change_id: str,
         review_branch: str,
         has_branch: bool,
@@ -4347,9 +4347,6 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         """Compose and preview a thank-you reply for a taken series."""
         import argparse
 
-        import b4.review
-        import b4.ty
-
         series = self._selected_series
         if not series:
             return
@@ -4479,8 +4476,6 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         self, msg: email.message.EmailMessage, checkurl: str
     ) -> None:
         """Queue the thanks message for delivery once commits are public."""
-        import b4.ty
-
         series = self._selected_series
         if not series:
             return
@@ -4542,8 +4537,6 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
     def _refresh_queue_indicator(self) -> None:
         """Update the title-bar queue count and Q binding visibility."""
-        import b4.ty
-
         self._queue_count = b4.ty.get_queued_count(dryrun=self._email_dryrun)
         try:
             right = self.query_one('#title-right', Static)
@@ -4557,8 +4550,6 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
 
     def action_process_queue(self) -> None:
         """Show the queue modal and optionally deliver."""
-        import b4.ty
-
         entries = b4.ty.get_queued_messages(dryrun=self._email_dryrun)
         if not entries:
             self.notify('No queued thanks messages')
diff --git a/src/b4/ty.py b/src/b4/ty.py
index 5918193..cb33ee9 100644
--- a/src/b4/ty.py
+++ b/src/b4/ty.py
@@ -570,9 +570,11 @@ def send_messages(
         else:
             logger.info('Sent %s thank-you letters', outgoing)
             if pwstate:
+                assert isinstance(pwstate, str), 'pwstate must be a string'
                 b4.patchwork_set_state(msgids, pwstate)
     else:
         if pwstate and not cmdargs.dryrun:
+            assert isinstance(pwstate, str), 'pwstate must be a string'
             b4.patchwork_set_state(msgids, pwstate)
             logger.info('---')
         logger.debug('Wrote %s thank-you letters', outgoing)
@@ -677,12 +679,14 @@ def discard_selected(cmdargs: argparse.Namespace) -> None:
     logger.info('Discarding %s messages', len(listing))
     msgids: List[str] = list()
     for jsondata in listing:
-        assert isinstance(jsondata['trackfile'], str), 'trackfile must be a string'
-        fullpath = os.path.join(datadir, jsondata['trackfile'])
+        trackfile = jsondata['trackfile']
+        assert isinstance(trackfile, str), 'trackfile must be a string'
+        fullpath = os.path.join(datadir, trackfile)
         os.rename(fullpath, '%s.discarded' % fullpath)
         logger.info('  Discarded: %s', jsondata['subject'])
-        assert isinstance(jsondata['msgid'], str), 'msgid must be a string'
-        msgids.append(jsondata['msgid'])
+        msgid = jsondata['msgid']
+        assert isinstance(msgid, str), 'msgid must be a string'
+        msgids.append(msgid)
         patches = cast(List[Tuple[str, str, str, str]], jsondata['patches'])
         for pdata in patches:
             msgids.append(pdata[2])
@@ -692,6 +696,7 @@ def discard_selected(cmdargs: argparse.Namespace) -> None:
     if not pwstate:
         pwstate = config.get('pw-discard-state')
     if pwstate:
+        assert isinstance(pwstate, str), 'pwstate must be a string'
         b4.patchwork_set_state(msgids, pwstate)
 
     sys.exit(0)
diff --git a/src/tests/test___init__.py b/src/tests/test___init__.py
index c997059..ade79b2 100644
--- a/src/tests/test___init__.py
+++ b/src/tests/test___init__.py
@@ -1,5 +1,7 @@
 import email
-import email.parser
+import email.message
+import email.policy
+import email.utils
 import io
 import os
 import pathlib
diff --git a/src/tests/test_review_tracking.py b/src/tests/test_review_tracking.py
index e106312..290c991 100644
--- a/src/tests/test_review_tracking.py
+++ b/src/tests/test_review_tracking.py
@@ -4,7 +4,7 @@ import io
 import os
 import re
 from email.message import EmailMessage
-from typing import Any, Dict, List, Union
+from typing import Any, Dict
 from unittest import mock
 
 import pytest
@@ -1938,20 +1938,14 @@ class TestFollowupBlob:
 class TestPatchState:
     """Tests for _get_patch_state() and _set_patch_state()."""
 
-    _USERCFG: Dict[str, Union[str, List[str], None]] = {
-        'email': 'reviewer@example.com',
-        'name': 'Test Reviewer',
-    }
+    _EMAIL = 'reviewer@example.com'
+    _USERCFG: b4.ConfigDictT = {'email': _EMAIL, 'name': 'Test Reviewer'}
 
     def _make_target(self, review_data: Dict[str, Any] | None = None) -> Dict[str, Any]:
         """Return a minimal target dict, optionally with review data."""
         if review_data is None:
             return {}
-        return {
-            'reviews': {
-                self._USERCFG['email']: {'name': 'Test Reviewer', **review_data}
-            }
-        }
+        return {'reviews': {self._EMAIL: {'name': 'Test Reviewer', **review_data}}}
 
     def test_no_data(self) -> None:
         """Empty reviews dict → no state."""

-- 
2.53.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH b4 v2 09/11] Avoid duplicate map lookups
  2026-04-19 15:59 [PATCH b4 v2 00/11] Enable stricter local checks Tamir Duberstein
                   ` (7 preceding siblings ...)
  2026-04-19 16:00 ` [PATCH b4 v2 08/11] Enable and fix pyright diagnostics Tamir Duberstein
@ 2026-04-19 16:00 ` Tamir Duberstein
  2026-04-19 16:00 ` [PATCH b4 v2 10/11] Add ty and configuration Tamir Duberstein
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Tamir Duberstein @ 2026-04-19 16:00 UTC (permalink / raw)
  To: Kernel.org Tools; +Cc: Konstantin Ryabitsev, Tamir Duberstein

Use dict.get and defaultdict to avoid repeated membership checks before
indexing into the same map. This keeps the existing behavior while
making later type narrowing easier for mypy and pyright.
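The two idioms can be sketched in isolation with placeholder data (the map
names below echo the ones in the diff but carry toy values, not real b4
objects):

```python
from collections import defaultdict

# Before: check-then-insert needs two lookups on the same key.
#     if patchid not in trailer_map:
#         trailer_map[patchid] = []
#     trailer_map[patchid] += fmsgs
# After: defaultdict folds the membership check into the lookup itself.
trailer_map: defaultdict[str, list[str]] = defaultdict(list)
trailer_map['patch-id-1'].append('followup-a')
trailer_map['patch-id-1'].append('followup-b')

# Before: "key in map" followed by "map[key]" is another double lookup,
# and leaves the type checker unable to tie the two together.
# After: a single .get() plus a walrus binding narrows Optional in one step.
msgid_map: dict[str, str] = {'<a@example.org>': 'msg-a'}
if (pmsg := msgid_map.get('<a@example.org>')) is not None:
    print(pmsg)  # prints "msg-a"; pmsg is narrowed to str here

print(dict(trailer_map))  # prints {'patch-id-1': ['followup-a', 'followup-b']}
```

Behavior is unchanged in both cases; the payoff is that mypy and pyright see
a single expression to narrow instead of two independent subscripts.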

Signed-off-by: Tamir Duberstein <tamird@kernel.org>
---
 src/b4/__init__.py | 105 ++++++++++++++++++++++++++---------------------------
 src/b4/ez.py       |  10 ++---
 2 files changed, 56 insertions(+), 59 deletions(-)

diff --git a/src/b4/__init__.py b/src/b4/__init__.py
index c427ee4..72b1c97 100644
--- a/src/b4/__init__.py
+++ b/src/b4/__init__.py
@@ -28,6 +28,7 @@ import tempfile
 import textwrap
 import time
 import urllib.parse
+from collections import defaultdict
 from contextlib import contextmanager
 from email import charset
 from email.message import EmailMessage
@@ -198,7 +199,7 @@ class LoreMailbox:
     msgid_map: Dict[str, 'LoreMessage']
     series: Dict[int, 'LoreSeries']
     covers: Dict[int, 'LoreMessage']
-    trailer_map: Dict[str, List['LoreMessage']]
+    trailer_map: defaultdict[str, List['LoreMessage']]
     followups: List['LoreMessage']
     unknowns: List['LoreMessage']
 
@@ -206,7 +207,7 @@ class LoreMailbox:
         self.msgid_map = dict()
         self.series = dict()
         self.covers = dict()
-        self.trailer_map = dict()
+        self.trailer_map = defaultdict(list)
         self.followups = list()
         self.unknowns = list()
 
@@ -223,11 +224,6 @@ class LoreMailbox:
 
         return '\n'.join(out)
 
-    def get_by_msgid(self, msgid: str) -> Optional['LoreMessage']:
-        if msgid in self.msgid_map:
-            return self.msgid_map[msgid]
-        return None
-
     def partial_reroll(self, revision: int, sloppytrailers: bool) -> None:
         # Is it a partial reroll?
         # To qualify for a partial reroll:
@@ -244,11 +240,13 @@ class LoreMailbox:
         for patch in lser.patches:
             if patch is None:
                 continue
-            if patch.in_reply_to is None or patch.in_reply_to not in self.msgid_map:
+            if (
+                patch.in_reply_to is None
+                or (ppatch := self.msgid_map.get(patch.in_reply_to)) is None
+            ):
                 logger.debug('Patch not sent as a reply-to')
                 sane = False
                 break
-            ppatch = self.msgid_map[patch.in_reply_to]
             found = False
             while True:
                 if (
@@ -263,10 +261,10 @@ class LoreMailbox:
                 # Do we have another level up?
                 if (
                     ppatch.in_reply_to is None
-                    or ppatch.in_reply_to not in self.msgid_map
+                    or (npatch := self.msgid_map.get(ppatch.in_reply_to)) is None
                 ):
                     break
-                ppatch = self.msgid_map[ppatch.in_reply_to]
+                ppatch = npatch
 
             if not found:
                 sane = False
@@ -330,9 +328,7 @@ class LoreMailbox:
             qmsgs, ignore_msgids=set(self.msgid_map.keys())
         )
         for patchid, fmsgs in patchid_map.items():
-            if patchid not in self.trailer_map:
-                self.trailer_map[patchid] = list()
-            self.trailer_map[patchid] += fmsgs
+            self.trailer_map[patchid].extend(fmsgs)
 
     def get_latest_revision(self) -> Optional[int]:
         if not len(self.series):
@@ -356,10 +352,10 @@ class LoreMailbox:
             revision = self.get_latest_revision()
             if revision is None:
                 return None
-        elif revision not in self.series:
-            return None
 
-        lser = self.series[revision]
+        lser = self.series.get(revision)
+        if lser is None:
+            return None
 
         # Is it empty?
         empty = True
@@ -375,15 +371,15 @@ class LoreMailbox:
             self.partial_reroll(revision, sloppytrailers)
 
         # Grab our cover letter if we have one
-        if revision in self.covers:
-            lser.add_patch(self.covers[revision])
+        if (cover := self.covers.get(revision)) is not None:
+            lser.add_patch(cover)
             lser.has_cover = True
         else:
             # Let's find the first patch with an in-reply-to and see if that
             # is our cover letter
             for member in lser.patches:
                 if member is not None and member.in_reply_to is not None:
-                    potential = self.get_by_msgid(member.in_reply_to)
+                    potential = self.msgid_map.get(member.in_reply_to)
                     if (
                         potential is not None
                         and potential.has_diffstat
@@ -414,12 +410,13 @@ class LoreMailbox:
             if fmsg.in_reply_to is None:
                 # Check if there's something matching in References
                 for refid in fmsg.references:
-                    if refid in self.msgid_map and refid != fmsg.msgid:
-                        pmsg = self.msgid_map[refid]
+                    if (
+                        refid != fmsg.msgid
+                        and (pmsg := self.msgid_map.get(refid)) is not None
+                    ):
                         logger.debug('Found a references entry %s in msgid_map', refid)
                         break
-            elif fmsg.in_reply_to in self.msgid_map:
-                pmsg = self.msgid_map[fmsg.in_reply_to]
+            elif (pmsg := self.msgid_map.get(fmsg.in_reply_to)) is not None:
                 logger.debug('Found in-reply-to %s in msgid_map', fmsg.in_reply_to)
             if pmsg is None:
                 # Can't find the message we're replying to here
@@ -443,8 +440,6 @@ class LoreMailbox:
                         # previous revisions to current revision if patch id did
                         # not change
                         if pmsg.git_patch_id:
-                            if pmsg.git_patch_id not in self.trailer_map:
-                                self.trailer_map[pmsg.git_patch_id] = list()
                             self.trailer_map[pmsg.git_patch_id].append(fmsg)
                     pmsg.followup_trailers += trailers
                     break
@@ -452,15 +447,18 @@ class LoreMailbox:
                     # Could be a cover letter
                     pmsg.followup_trailers += trailers
                     break
-                if pmsg.in_reply_to and pmsg.in_reply_to in self.msgid_map:
+                if (
+                    pmsg.in_reply_to
+                    and (nmsg := self.msgid_map.get(pmsg.in_reply_to)) is not None
+                ):
                     # Avoid bad message id causing infinite loop
-                    if pmsg == self.msgid_map[pmsg.in_reply_to]:
+                    if pmsg == nmsg:
                         break
                     lvl += 1
                     for pltr in pmsg.trailers:
                         pltr.lmsg = pmsg
                         trailers.append(pltr)
-                    pmsg = self.msgid_map[pmsg.in_reply_to]
+                    pmsg = nmsg
                     continue
                 break
 
@@ -472,29 +470,28 @@ class LoreMailbox:
             logger.debug(
                 '  matching patch_id %s from: %s', lmsg.git_patch_id, lmsg.full_subject
             )
-            if lmsg.git_patch_id in self.trailer_map:
-                for fmsg in self.trailer_map[lmsg.git_patch_id]:
-                    logger.debug('  matched: %s', fmsg.msgid)
-                    fltrs, fmis = fmsg.get_trailers(sloppy=sloppytrailers)
-                    for fltr in fltrs:
-                        if fltr in lmsg.trailers:
-                            logger.debug('  trailer already exists')
-                            continue
-                        if fltr in lmsg.followup_trailers:
-                            logger.debug('  identical trailer received for this series')
-                            continue
-                        logger.debug(
-                            '  carrying over the trailer to this series (may be duplicate)'
-                        )
-                        logger.debug('  %s', lmsg.full_subject)
-                        logger.debug('    + %s', fltr.as_string())
-                        if fltr.lmsg:
-                            logger.debug('      via: %s', fltr.lmsg.msgid)
-                        lmsg.followup_trailers.append(fltr)
-                    for fltr in fmis:
-                        lser.trailer_mismatches.add(
-                            (fltr.name, fltr.value, fmsg.fromname, fmsg.fromemail)
-                        )
+            for fmsg in self.trailer_map.get(lmsg.git_patch_id, ()):
+                logger.debug('  matched: %s', fmsg.msgid)
+                fltrs, fmis = fmsg.get_trailers(sloppy=sloppytrailers)
+                for fltr in fltrs:
+                    if fltr in lmsg.trailers:
+                        logger.debug('  trailer already exists')
+                        continue
+                    if fltr in lmsg.followup_trailers:
+                        logger.debug('  identical trailer received for this series')
+                        continue
+                    logger.debug(
+                        '  carrying over the trailer to this series (may be duplicate)'
+                    )
+                    logger.debug('  %s', lmsg.full_subject)
+                    logger.debug('    + %s', fltr.as_string())
+                    if fltr.lmsg:
+                        logger.debug('      via: %s', fltr.lmsg.msgid)
+                    lmsg.followup_trailers.append(fltr)
+                for fltr in fmis:
+                    lser.trailer_mismatches.add(
+                        (fltr.name, fltr.value, fmsg.fromname, fmsg.fromemail)
+                    )
 
         return lser
 
@@ -529,7 +526,7 @@ class LoreMailbox:
                 if lmsg.revision_inferred and lmsg.in_reply_to:
                     # We have an inferred revision here.
                     # Do we have an upthread cover letter that specifies a revision?
-                    irt = self.get_by_msgid(lmsg.in_reply_to)
+                    irt = self.msgid_map.get(lmsg.in_reply_to)
                     if irt is not None and irt.has_diffstat and not irt.has_diff:
                         # Yes, this is very likely our cover letter
                         logger.debug('  fixed revision to v%s', irt.revision)
@@ -3796,7 +3793,7 @@ def get_config_from_git(
             chunks = key.split('.')
             cfgkey = chunks[-1].lower()
             if cfgkey in multivals:
-                if cfgkey not in gitconfig or gitconfig[cfgkey] is None:
+                if gitconfig.get(cfgkey) is None:
                     gitconfig[cfgkey] = list()
                 gitconfig[cfgkey].append(value)
             else:
diff --git a/src/b4/ez.py b/src/b4/ez.py
index 3a59256..d64e0bc 100644
--- a/src/b4/ez.py
+++ b/src/b4/ez.py
@@ -26,6 +26,7 @@ import textwrap
 import time
 import urllib.parse
 import uuid
+from collections import defaultdict
 from email.message import EmailMessage
 from string import Template
 from typing import Any, Dict, List, Optional, Set, Tuple, Union
@@ -2262,7 +2263,7 @@ def cmd_send(cmdargs: argparse.Namespace) -> None:
 
     seen: Set[str] = set()
     excludes: Set[str] = set()
-    pccs: Dict[str, List[Tuple[str, str]]] = dict()
+    pccs: defaultdict[str, List[Tuple[str, str]]] = defaultdict(list)
 
     if cmdargs.preview_to or cmdargs.no_trailer_to_cc:
         todests = list()
@@ -2285,10 +2286,9 @@ def cmd_send(cmdargs: argparse.Namespace) -> None:
                 if btr.addr[1] in seen:
                     continue
                 if commit:
-                    if commit not in pccs:
-                        pccs[commit] = list()
-                    if btr.addr not in pccs[commit]:
-                        pccs[commit].append(btr.addr)
+                    cpccs = pccs[commit]
+                    if btr.addr not in cpccs:
+                        cpccs.append(btr.addr)
                     continue
                 seen.add(btr.addr[1])
                 if btr.lname == 'to':

-- 
2.53.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH b4 v2 10/11] Add ty and configuration
  2026-04-19 15:59 [PATCH b4 v2 00/11] Enable stricter local checks Tamir Duberstein
                   ` (8 preceding siblings ...)
  2026-04-19 16:00 ` [PATCH b4 v2 09/11] Avoid duplicate map lookups Tamir Duberstein
@ 2026-04-19 16:00 ` Tamir Duberstein
  2026-04-19 16:00 ` [PATCH b4 v2 11/11] Enable pyright strict mode Tamir Duberstein
  2026-04-23  2:48 ` [PATCH b4 v2 00/11] Enable stricter local checks Konstantin Ryabitsev
  11 siblings, 0 replies; 14+ messages in thread
From: Tamir Duberstein @ 2026-04-19 16:00 UTC (permalink / raw)
  To: Kernel.org Tools; +Cc: Konstantin Ryabitsev, Tamir Duberstein

Enabling ty revealed usage that assumed Python >= 3.11, such as `|` unions
and a particular overload of `wsgiref.simple_server.make_server`.

Signed-off-by: Tamir Duberstein <tamird@kernel.org>
---
 ci.sh                              |   1 +
 misc/send-receive.py               |   2 +
 pyproject.toml                     |   9 +++-
 src/b4/__init__.py                 |  49 ++++++++++-------
 src/b4/command.py                  |  12 ++---
 src/b4/dig.py                      |   4 +-
 src/b4/ez.py                       |   2 +-
 src/b4/mbox.py                     |   5 +-
 src/b4/pr.py                       |   7 +++
 src/b4/review/checks.py            |  16 +++---
 src/b4/review_tui/_common.py       | 108 +++++++++++++++++++++++++++++++------
 src/b4/review_tui/_review_app.py   |   5 +-
 src/b4/review_tui/_tracking_app.py |   3 +-
 src/b4/tui/_common.py              |  18 ++++---
 src/b4/ty.py                       |   5 +-
 src/tests/test___init__.py         |   6 +--
 src/tests/test_tui_tracking.py     |   3 +-
 17 files changed, 185 insertions(+), 70 deletions(-)

diff --git a/ci.sh b/ci.sh
index 7e4e7a4..658f626 100755
--- a/ci.sh
+++ b/ci.sh
@@ -4,6 +4,7 @@ set -eu
 
 uv run ruff format --check
 uv run ruff check
+uv run ty check
 uv run mypy .
 uv run pyright
 uv run pytest --durations=20
diff --git a/misc/send-receive.py b/misc/send-receive.py
index 11af5dc..e59440b 100644
--- a/misc/send-receive.py
+++ b/misc/send-receive.py
@@ -803,6 +803,8 @@ class SendReceiveListener(object):
             resp.content_type = falcon.MEDIA_TEXT
             resp.text = 'Failed to parse the request\n'
             return
+        # TODO(https://github.com/astral-sh/ruff/pull/24458): remove this when ty understands conditional walrus.
+        action = None
         if not isinstance(jdata, Mapping) or (action := jdata.get('action')) is None:
             logger.critical('Action not set from %s', req.remote_addr)
             return
diff --git a/pyproject.toml b/pyproject.toml
index 8167f53..bde64bf 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -12,7 +12,7 @@ license = {file = "COPYING"}
 authors = [
     {name = "Konstantin Ryabitsev", email="konstantin@linuxfoundation.org"},
 ]
-requires-python = ">=3.9"
+requires-python = ">=3.11"
 classifiers = [
     "Environment :: Console",
     "Operating System :: POSIX :: Linux",
@@ -41,6 +41,7 @@ dev = [
     "pytest",
     "pytest-asyncio",
     "ruff",
+    "ty",
     "types-requests",
 ]
 misc = [
@@ -135,3 +136,9 @@ warn_unreachable = true
 exclude = [".venv", "ezgb", "liblore", "patatt"]
 typeCheckingMode = "standard"
 reportUnusedImport = true
+
+[tool.ty.src]
+exclude = ["ezgb/", "liblore/", "patatt/"]
+
+[tool.ty.rules]
+all = "error"
diff --git a/src/b4/__init__.py b/src/b4/__init__.py
index 72b1c97..739e1af 100644
--- a/src/b4/__init__.py
+++ b/src/b4/__init__.py
@@ -71,9 +71,9 @@ emlpolicy = email.policy.EmailPolicy(
 qspecials = re.compile(r'[()<>@,:;.\"\[\]]')
 
 # global setting allowing us to turn off networking
-can_network = True
+can_network: bool = True
 
-__VERSION__ = '0.16-dev'
+__VERSION__: str = '0.16-dev'
 PW_REST_API_VERSION = '1.2'
 
 
@@ -237,6 +237,8 @@ class LoreMailbox:
         # Are existing patches replies to previous revisions with the same counter?
         lser = self.series[revision]
         sane = True
+        # TODO(https://github.com/astral-sh/ruff/pull/24458): remove this when ty understands conditional walrus.
+        ppatch = None
         for patch in lser.patches:
             if patch is None:
                 continue
@@ -259,6 +261,8 @@ class LoreMailbox:
                     found = True
                     break
                 # Do we have another level up?
+                # TODO(https://github.com/astral-sh/ruff/pull/24458): remove this when ty understands conditional walrus.
+                npatch = None
                 if (
                     ppatch.in_reply_to is None
                     or (npatch := self.msgid_map.get(ppatch.in_reply_to)) is None
@@ -447,6 +451,8 @@ class LoreMailbox:
                     # Could be a cover letter
                     pmsg.followup_trailers += trailers
                     break
+                # TODO(https://github.com/astral-sh/ruff/pull/24458): remove this when ty understands conditional walrus.
+                nmsg = None
                 if (
                     pmsg.in_reply_to
                     and (nmsg := self.msgid_map.get(pmsg.in_reply_to)) is not None
@@ -1872,10 +1878,10 @@ class LoreMessage:
         # Identify all DKIM-Signature headers and try them in reverse order
         # until we come to a passing one
         dkhdrs = list()
-        for header in list(self.msg._headers):  # type: ignore[attr-defined]
+        for header in list(self.msg._headers):  # type: ignore[attr-defined]  # ty: ignore[unresolved-attribute]
             if header[0].lower() == 'dkim-signature':
                 dkhdrs.append(header)
-                self.msg._headers.remove(header)  # type: ignore[attr-defined]
+                self.msg._headers.remove(header)  # type: ignore[attr-defined]  # ty: ignore[unresolved-attribute]
         dkhdrs.reverse()
 
         seenatts = list()
@@ -1909,7 +1915,7 @@ class LoreMessage:
                 if isinstance(sh, str) and 'date' in sh.lower().split(':'):
                     signtime = self.date
 
-            self.msg._headers.append((hn, hval))  # type: ignore[attr-defined]
+            self.msg._headers.append((hn, hval))  # type: ignore[attr-defined]  # ty: ignore[unresolved-attribute]
             try:
                 res = dkim.verify(
                     self.msg.as_bytes(policy=emlpolicy), logger=dkimlogger
@@ -1928,7 +1934,7 @@ class LoreMessage:
                 self._attestors.append(attestor)
                 return
 
-            self.msg._headers.pop(-1)  # type: ignore[attr-defined]
+            self.msg._headers.pop(-1)  # type: ignore[attr-defined]  # ty: ignore[unresolved-attribute]
             seenatts.append(attestor)
 
         # No exact domain matches, so return everything we have
@@ -2248,12 +2254,12 @@ class LoreMessage:
                             self.fromname = xpair[0]
                             self.fromemail = xpair[1]
                             # Drop the reply-to header if it's exactly the same
-                            for header in list(self.msg._headers):  # type: ignore[attr-defined]
+                            for header in list(self.msg._headers):  # type: ignore[attr-defined]  # ty: ignore[unresolved-attribute]
                                 if (
                                     header[0].lower() == 'reply-to'
                                     and header[1].find(xpair[1]) > 0
                                 ):
-                                    self.msg._headers.remove(header)  # type: ignore[attr-defined]
+                                    self.msg._headers.remove(header)  # type: ignore[attr-defined]  # ty: ignore[unresolved-attribute]
 
                 has_passing = True
                 att_info: Dict[str, Any] = {
@@ -3622,8 +3628,8 @@ def git_run_command(
 
     U = TypeVar('U', str, bytes)
 
-    def _handle(_out: U, _err: U) -> Tuple[int, Union[str, bytes]]:
-        if logstderr and len(_err.strip()):
+    def _handle(_out: U, _err: U) -> Tuple[int, U]:
+        if logstderr and len(_err.strip()):  # ty:ignore[no-matching-overload, invalid-argument-type] # https://github.com/astral-sh/ty/issues/1503
             logger.debug('Stderr: %s', _err)
             _out += _err
 
@@ -5063,10 +5069,17 @@ def send_mail(
 
     if isinstance(smtp, list):
         # This is a local command
+
+        # This is a little crazy but it's possible, through multiple inheritance,
+        # for smtp to be a list of something other than str if it is also one of
+        # the other types in the union.
+        #
+        # https://github.com/astral-sh/ty/issues/1578
+        smtps = ' '.join(smtp)  # ty:ignore[no-matching-overload]
         if reflect:
-            logger.info('Reflecting via "%s"', ' '.join(smtp))
+            logger.info('Reflecting via "%s"', smtps)
         else:
-            logger.info('Sending via "%s"', ' '.join(smtp))
+            logger.info('Sending via "%s"', smtps)
         for destaddrs, bdata, lsubject in tosend:
             logger.info('  %s', lsubject.full_subject)
             if reflect:
@@ -5075,9 +5088,7 @@ def send_mail(
                 cmdargs = list(smtp) + list(destaddrs)
             ecode, _out, err = _run_command(cmdargs, stdin=bdata)
             if ecode > 0:
-                raise RuntimeError(
-                    'Error running %s: %s' % (' '.join(smtp), err.decode())
-                )
+                raise RuntimeError('Error running %s: %s' % (smtps, err.decode()))
             sent += 1
 
     elif smtp:
@@ -5815,11 +5826,11 @@ def mailbox_email_factory(fh: BinaryIO) -> EmailMessage:
 
 def get_msgs_from_mailbox_or_maildir(mbmd: str) -> List[EmailMessage]:
     if is_maildir(mbmd):
-        in_mdr = mailbox.Maildir(mbmd, factory=mailbox_email_factory)  # type: ignore[arg-type]
-        return [x[1] for x in in_mdr.items()]  # type: ignore[misc]
+        in_mdr = mailbox.Maildir(mbmd, factory=mailbox_email_factory)  # type: ignore[arg-type]  # ty: ignore[invalid-argument-type]
+        return [x[1] for x in in_mdr.items()]  # type: ignore[misc]  # ty: ignore[invalid-return-type]
 
-    in_mbx = mailbox.mbox(mbmd, factory=mailbox_email_factory)  # type: ignore[arg-type]
-    return [x[1] for x in in_mbx.items()]  # type: ignore[misc]
+    in_mbx = mailbox.mbox(mbmd, factory=mailbox_email_factory)  # type: ignore[arg-type]  # ty: ignore[invalid-argument-type]
+    return [x[1] for x in in_mbx.items()]  # type: ignore[misc]  # ty: ignore[invalid-return-type]
 
 
 def get_mailfrom() -> Tuple[str, str]:
diff --git a/src/b4/command.py b/src/b4/command.py
index fa6e84c..b49d377 100644
--- a/src/b4/command.py
+++ b/src/b4/command.py
@@ -270,7 +270,7 @@ class ConfigOption(argparse.Action):
         self,
         parser: argparse.ArgumentParser,
         namespace: argparse.Namespace,
-        keyval: Union[str, Sequence[Any], None],
+        values: Union[str, Sequence[Any], None],
         option_string: Optional[str] = None,
     ) -> None:
         config = getattr(namespace, self.dest, None)
@@ -279,13 +279,13 @@ class ConfigOption(argparse.Action):
             config = dict()
             setattr(namespace, self.dest, config)
 
-        if not isinstance(keyval, str):
-            raise TypeError(f'Expected a string config assignment, got {keyval!r}')
+        if not isinstance(values, str):
+            raise TypeError(f'Expected a string config assignment, got {values!r}')
 
-        if '=' in keyval:
-            key, value = keyval.split('=', maxsplit=1)
+        if '=' in values:
+            key, value = values.split('=', maxsplit=1)
         else:
-            key, value = keyval, 'true'
+            key, value = values, 'true'
         config[key] = value
 
 
diff --git a/src/b4/dig.py b/src/b4/dig.py
index eac749c..d93eb2b 100644
--- a/src/b4/dig.py
+++ b/src/b4/dig.py
@@ -297,7 +297,7 @@ def dig_commitish(cmdargs: argparse.Namespace) -> None:
         if not best_match:
             # Next, try to find by exact patch-id
             for lmsg in all_lmsgs:
-                if lmsg.git_patch_id == patch_id:  # pyright: ignore[reportPossiblyUnboundVariable] # broken since 3ae277e9c7dd3e1df61a14884aabdd5834ad1201
+                if lmsg.git_patch_id == patch_id:  # pyright: ignore[reportPossiblyUnboundVariable] # ty:ignore[possibly-unresolved-reference] # broken since 3ae277e9c7dd3e1df61a14884aabdd5834ad1201
                     logger.debug('matched by exact patch-id')
                     best_match = lmsg
                     break
@@ -354,7 +354,7 @@ def dig_commitish(cmdargs: argparse.Namespace) -> None:
                 continue
             if firstmsg is None:
                 firstmsg = lmsg
-            if lmsg.git_patch_id == patch_id:  # pyright: ignore[reportPossiblyUnboundVariable] # broken since inception in 16329336c1c8faba853b11238a16249306742505
+            if lmsg.git_patch_id == patch_id:  # pyright: ignore[reportPossiblyUnboundVariable] # ty:ignore[possibly-unresolved-reference] # broken since inception in 16329336c1c8faba853b11238a16249306742505
                 logger.debug('Matched by exact patch-id')
                 break
             if lmsg.subject == csubj:
diff --git a/src/b4/ez.py b/src/b4/ez.py
index d64e0bc..be3d2fe 100644
--- a/src/b4/ez.py
+++ b/src/b4/ez.py
@@ -1482,7 +1482,7 @@ def update_trailers(cmdargs: argparse.Namespace) -> None:
                         fltr.lmsg.msgid, safe='@'
                     )
                 logger.info('  + %s', rendered)
-                logger.info('    via: %s', source)  # pyright: ignore[reportPossiblyUnboundVariable] # broken since 742e017c1b5b91d0e6fd6fca7decf73956b31487
+                logger.info('    via: %s', source)  # pyright: ignore[reportPossiblyUnboundVariable] # ty:ignore[possibly-unresolved-reference] # broken since 742e017c1b5b91d0e6fd6fca7decf73956b31487
             else:
                 logger.debug('  . %s', fltr.as_string(omit_extinfo=True))
 
diff --git a/src/b4/mbox.py b/src/b4/mbox.py
index 66b47ba..65cff36 100644
--- a/src/b4/mbox.py
+++ b/src/b4/mbox.py
@@ -51,7 +51,7 @@ def save_msgs_as_mbox(
         added = 0
         if filterdupes:
             for emsg in mdr:
-                have_msgids.add(b4.LoreMessage.get_clean_msgid(emsg))  # type: ignore[arg-type]
+                have_msgids.add(b4.LoreMessage.get_clean_msgid(emsg))  # type: ignore[arg-type] # ty:ignore[invalid-argument-type] # this will go away when we update liblore
         for msg in msgs:
             if b4.LoreMessage.get_clean_msgid(msg) not in have_msgids:
                 added += 1
@@ -924,8 +924,7 @@ def refetch(dest: str) -> None:
 
     by_msgid: Dict[str, EmailMessage] = dict()
     for key, msg in mbox.items():
-        # We normally pass EmailMessage objects, but this works, too
-        msgid = b4.LoreMessage.get_clean_msgid(msg)  # type: ignore[arg-type]
+        msgid = b4.LoreMessage.get_clean_msgid(msg)  # type: ignore[arg-type] # ty:ignore[invalid-argument-type] # this will go away when we update liblore
         if not msgid:
             continue
         if msgid not in by_msgid:
diff --git a/src/b4/pr.py b/src/b4/pr.py
index b6a0c6b..4be366b 100644
--- a/src/b4/pr.py
+++ b/src/b4/pr.py
@@ -510,6 +510,7 @@ def main(cmdargs: argparse.Namespace) -> None:
 
         if msgs:
             if cmdargs.sendidentity:
+                assert isinstance(cmdargs.sendidentity, str)
                 # Pass exploded series via git-send-email
                 config = b4.get_config_from_git(
                     rf'sendemail\.{cmdargs.sendidentity}\..*'
@@ -522,6 +523,12 @@ def main(cmdargs: argparse.Namespace) -> None:
                     sys.exit(1)
                 # Make sure from is not overridden by current user
                 mailfrom = msgs[0].get('from')
+                if not isinstance(mailfrom, str):
+                    logger.critical(
+                        'Expected a string From header in exploded message, got %r',
+                        mailfrom,
+                    )
+                    sys.exit(1)
                 gitargs = [
                     'send-email',
                     '--identity',
diff --git a/src/b4/review/checks.py b/src/b4/review/checks.py
index a7e1ff0..36c0780 100644
--- a/src/b4/review/checks.py
+++ b/src/b4/review/checks.py
@@ -13,7 +13,7 @@ import pathlib
 import shlex
 import sqlite3
 from email.message import EmailMessage
-from typing import Any, Dict, List, Optional, Tuple
+from typing import Any, Dict, List, Optional, Tuple, Union
 
 import requests
 
@@ -163,12 +163,14 @@ def load_check_cmds() -> Tuple[List[str], List[str]]:
     """
     config = b4.get_main_config()
 
-    def _as_list(val: Any) -> List[str]:
-        if isinstance(val, str):
-            return [val]
-        if isinstance(val, list):
-            return list(val)
-        return []
+    def _as_list(val: Union[str, List[str], None]) -> List[str]:
+        match val:
+            case str():
+                return [val]
+            case list():
+                return val
+            case None:
+                return []
 
     perpatch = _as_list(config.get('review-perpatch-check-cmd'))
     if not perpatch:
diff --git a/src/b4/review_tui/_common.py b/src/b4/review_tui/_common.py
index 84992e8..441ea67 100644
--- a/src/b4/review_tui/_common.py
+++ b/src/b4/review_tui/_common.py
@@ -9,7 +9,20 @@ import email.message
 import json
 import os
 import tempfile
-from typing import Any, Dict, List, Optional, Set, Tuple
+from typing import (
+    TYPE_CHECKING,
+    Any,
+    Awaitable,
+    Callable,
+    Dict,
+    List,
+    Optional,
+    ParamSpec,
+    Protocol,
+    Set,
+    Tuple,
+    TypeVar,
+)
 
 import liblore.utils
 from rich import box
@@ -18,6 +31,7 @@ from rich.panel import Panel
 from rich.rule import Rule
 from rich.text import Text
 from textual.widgets import RichLog
+from textual.worker import Worker
 
 import b4
 import b4.review
@@ -78,6 +92,13 @@ from b4.tui._common import (
 
 logger = b4.logger
 
+if TYPE_CHECKING:
+    from b4.review_tui._modals import CheckLoadingScreen
+
+_CallFromThreadParams = ParamSpec('_CallFromThreadParams')
+_CallFromThreadReturn = TypeVar('_CallFromThreadReturn')
+_WorkerResult = TypeVar('_WorkerResult')
+
 
 def get_thread_msgs(
     topdir: str,
@@ -122,6 +143,61 @@ CI_CHECK_LABELS = {
 }
 
 
+class _CallFromThreadHost(Protocol):
+    def call_from_thread(
+        self,
+        callback: Callable[
+            _CallFromThreadParams,
+            _CallFromThreadReturn | Awaitable[_CallFromThreadReturn],
+        ],
+        *args: _CallFromThreadParams.args,
+        **kwargs: _CallFromThreadParams.kwargs,
+    ) -> _CallFromThreadReturn: ...
+
+
+class _CheckRunnerHost(Protocol):
+    _check_loading: Optional['CheckLoadingScreen']
+
+    @property
+    def app(self) -> _CallFromThreadHost: ...
+
+    def _get_check_context(self) -> Optional[Tuple[str, str, str]]: ...
+
+    def _run_checks(self, force: bool = ...) -> None: ...
+
+    def _dismiss_loading(self, msg: str = ..., severity: str = ...) -> None: ...
+
+    def _update_loading(self, text: str) -> None: ...
+
+    def _fetch_and_check(
+        self,
+        message_id: str,
+        series_subject: str,
+        change_id: str = '',
+        force: bool = ...,
+    ) -> None: ...
+
+    def notify(self, message: str, *, severity: str = ...) -> None: ...
+
+    def push_screen(
+        self,
+        screen: object,
+        callback: Optional[Callable[[Optional[str]], None]] = ...,
+    ) -> object: ...
+
+    def run_worker(
+        self,
+        work: Callable[[], _WorkerResult],
+        name: Optional[str] = ...,
+        group: str = ...,
+        description: str = ...,
+        exit_on_error: bool = ...,
+        start: bool = ...,
+        exclusive: bool = ...,
+        thread: bool = ...,
+    ) -> Worker[_WorkerResult]: ...
+
+
 class CheckRunnerMixin:
     """Mixin providing CI check execution for Textual App subclasses.
 
@@ -130,7 +206,7 @@ class CheckRunnerMixin:
     interaction (loading overlay, results modal) is handled here.
     """
 
-    _check_loading: Optional[Any] = None
+    _check_loading: Optional['CheckLoadingScreen']
 
     # -- interface for subclasses ------------------------------------------
 
@@ -143,26 +219,26 @@ class CheckRunnerMixin:
 
     # -- public action -----------------------------------------------------
 
-    def action_check(self) -> None:
+    def action_check(self: _CheckRunnerHost) -> None:
         """Run CI checks on the current series."""
         self._run_checks(force=False)
 
     # -- helpers -----------------------------------------------------------
 
-    def _run_checks(self, force: bool = False) -> None:
+    def _run_checks(self: _CheckRunnerHost, force: bool = False) -> None:
         """Show loading overlay and launch the check worker thread."""
         ctx = self._get_check_context()
         if ctx is None:
             return
         message_id, series_subject, change_id = ctx
         if not message_id:
-            self.notify('No message-id for this series', severity='error')  # type: ignore[attr-defined]
+            self.notify('No message-id for this series', severity='error')
             return
         from b4.review_tui._modals import CheckLoadingScreen
 
         self._check_loading = CheckLoadingScreen()
-        self.push_screen(self._check_loading)  # type: ignore[attr-defined]
-        self.run_worker(  # type: ignore[attr-defined]
+        self.push_screen(self._check_loading)
+        self.run_worker(
             lambda: self._fetch_and_check(
                 message_id, series_subject, change_id=change_id, force=force
             ),
@@ -170,28 +246,30 @@ class CheckRunnerMixin:
             thread=True,
         )
 
-    def _dismiss_loading(self, msg: str = '', severity: str = '') -> None:
+    def _dismiss_loading(
+        self: _CheckRunnerHost, msg: str = '', severity: str = ''
+    ) -> None:
         """Dismiss the check loading screen and optionally notify."""
 
         def _do() -> None:
             if self._check_loading is not None and self._check_loading.is_attached:
                 self._check_loading.dismiss(None)
             if msg:
-                self.notify(msg, severity=severity)  # type: ignore[attr-defined]
+                self.notify(msg, severity=severity)
 
-        self.app.call_from_thread(_do)  # type: ignore[attr-defined]
+        self.app.call_from_thread(_do)
 
-    def _update_loading(self, text: str) -> None:
+    def _update_loading(self: _CheckRunnerHost, text: str) -> None:
         """Update the loading screen status text."""
 
         def _do() -> None:
             if self._check_loading is not None and self._check_loading.is_attached:
                 self._check_loading.update_status(text)
 
-        self.app.call_from_thread(_do)  # type: ignore[attr-defined]
+        self.app.call_from_thread(_do)
 
     def _fetch_and_check(
-        self,
+        self: _CheckRunnerHost,
         message_id: str,
         series_subject: str,
         change_id: str = '',
@@ -376,14 +454,14 @@ class CheckRunnerMixin:
         def _push_modal() -> None:
             if self._check_loading is not None and self._check_loading.is_attached:
                 self._check_loading.dismiss(None)
-            self.push_screen(  # type: ignore[attr-defined]
+            self.push_screen(
                 TrackingCheckResultsScreen(
                     title, patch_labels, patch_subjects, tools_sorted, matrix
                 ),
                 callback=_on_result,
             )
 
-        self.app.call_from_thread(_push_modal)  # type: ignore[attr-defined]
+        self.app.call_from_thread(_push_modal)
 
 
 def _make_initials(name: str) -> str:
diff --git a/src/b4/review_tui/_review_app.py b/src/b4/review_tui/_review_app.py
index 7cdc6e9..8786fbb 100644
--- a/src/b4/review_tui/_review_app.py
+++ b/src/b4/review_tui/_review_app.py
@@ -49,6 +49,7 @@ from b4.review_tui._common import (
     reviewer_colours,
 )
 from b4.review_tui._modals import (
+    CheckLoadingScreen,
     FollowupReplyPreviewScreen,
     HelpScreen,
     NoteScreen,
@@ -294,7 +295,7 @@ class ReviewApp(CheckRunnerMixin, App[None]):
         self._collapsed_comment_lines: Dict[int, Tuple[str, int]] = {}
         self._reply_sent: bool = False
         self._hide_skipped: bool = False
-        self._check_loading: Optional[Any] = None
+        self._check_loading: Optional[CheckLoadingScreen] = None
 
     def _get_check_context(self) -> Optional[Tuple[str, str, str]]:
         message_id = self._series.get('header-info', {}).get('msgid', '')
@@ -354,6 +355,8 @@ class ReviewApp(CheckRunnerMixin, App[None]):
             widget.update(f' WARNING: newer version(s) available: {versions}')
             widget.styles.display = 'block'
         else:
+            # Textual infers StringEnumProperty from the default ("block"),
+            # so ty treats the valid "none" value as an invalid assignment.
             widget.styles.display = 'none'
 
     def _populate_patch_list(self) -> None:
diff --git a/src/b4/review_tui/_tracking_app.py b/src/b4/review_tui/_tracking_app.py
index 6b4151c..a5a9389 100644
--- a/src/b4/review_tui/_tracking_app.py
+++ b/src/b4/review_tui/_tracking_app.py
@@ -51,6 +51,7 @@ from b4.review_tui._modals import (
     ActionScreen,
     ArchiveConfirmScreen,
     BaseSelectionScreen,
+    CheckLoadingScreen,
     CherryPickScreen,
     HelpScreen,
     LimitScreen,
@@ -644,7 +645,7 @@ class TrackingApp(CheckRunnerMixin, App[Optional[str]]):
         self._last_snooze_source: str = ''
         self._last_snooze_input: str = ''
         # CI check modal state
-        self._check_loading: Optional[Any] = None
+        self._check_loading: Optional[CheckLoadingScreen] = None
         # Thanks queue count
         self._queue_count: int = 0
         # Show target branch binding only when configured
diff --git a/src/b4/tui/_common.py b/src/b4/tui/_common.py
index b788f35..cdd3818 100644
--- a/src/b4/tui/_common.py
+++ b/src/b4/tui/_common.py
@@ -13,7 +13,7 @@ import subprocess
 import tempfile
 import unicodedata
 from collections import defaultdict
-from typing import Any, Dict, List, Optional
+from typing import Any, Dict, List, Optional, Protocol
 
 from textual.app import ComposeResult
 from textual.binding import Binding
@@ -275,6 +275,12 @@ def _validate_addrs(text: str) -> Optional[str]:
     return None
 
 
+class _ListViewHost(Protocol):
+    _list_id: str
+
+    def query_one(self, selector: str, expect_type: type[ListView]) -> ListView: ...
+
+
 class JKListNavMixin:
     """Mixin providing j/k cursor navigation for a named ListView.
 
@@ -282,15 +288,13 @@ class JKListNavMixin:
     target :class:`ListView` (e.g. ``'#action-list'``).
     """
 
-    _list_id: str = ''
-
-    def action_cursor_down(self) -> None:
-        lv = self.query_one(self._list_id, ListView)  # type: ignore[attr-defined]
+    def action_cursor_down(self: _ListViewHost) -> None:
+        lv = self.query_one(self._list_id, ListView)
         if lv.index is not None and lv.index < len(lv.children) - 1:
             lv.index += 1
 
-    def action_cursor_up(self) -> None:
-        lv = self.query_one(self._list_id, ListView)  # type: ignore[attr-defined]
+    def action_cursor_up(self: _ListViewHost) -> None:
+        lv = self.query_one(self._list_id, ListView)
         if lv.index is not None and lv.index > 0:
             lv.index -= 1
 
diff --git a/src/b4/ty.py b/src/b4/ty.py
index cb33ee9..ba7f646 100644
--- a/src/b4/ty.py
+++ b/src/b4/ty.py
@@ -586,7 +586,10 @@ def list_tracked() -> List[JsonDictT]:
     # find all tracked bits
     tracked = list()
     datadir = b4.get_data_dir()
-    paths = sorted(Path(datadir).iterdir(), key=os.path.getmtime)
+    # Work around https://github.com/astral-sh/ty/issues/2799, which widens the
+    # sorted element type when the key function accepts a broader path-like type
+    # than Path.
+    paths = sorted(Path(datadir).iterdir(), key=lambda path: path.stat().st_mtime)
     for fullpath in paths:
         if fullpath.suffix not in ('.pr', '.am'):
             continue
diff --git a/src/tests/test___init__.py b/src/tests/test___init__.py
index ade79b2..3c4c2d0 100644
--- a/src/tests/test___init__.py
+++ b/src/tests/test___init__.py
@@ -1,5 +1,6 @@
 import email
 import email.message
+import email.parser
 import email.policy
 import email.utils
 import io
@@ -58,17 +59,12 @@ def test_save_git_am_mbox(
         if ismbox:
             msgs = b4.get_msgs_from_mailbox_or_maildir(f'{sampledir}/{source}.txt')
         else:
-            import email
-            import email.parser
-
             with open(f'{sampledir}/{source}.txt', 'rb') as fh:
                 msg = email.parser.BytesParser(
                     policy=b4.emlpolicy, _class=email.message.EmailMessage
                 ).parse(fh)
             msgs = [msg]
     else:
-        import email.message
-
         msgs = list()
         for x in range(0, 3):
             msg = email.message.EmailMessage()
diff --git a/src/tests/test_tui_tracking.py b/src/tests/test_tui_tracking.py
index a9491ad..d8feb6f 100644
--- a/src/tests/test_tui_tracking.py
+++ b/src/tests/test_tui_tracking.py
@@ -1388,7 +1388,8 @@ class TestTrackingDetailPanel:
             from textual.containers import Vertical
 
             panel = app.query_one('#details-panel', Vertical)
-            assert panel.styles.height.value == 0  # type: ignore[union-attr]
+            assert panel.styles.height is not None
+            assert panel.styles.height.value == 0
 
     @pytest.mark.asyncio
     async def test_detail_panel_updates_on_navigation(

-- 
2.53.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH b4 v2 11/11] Enable pyright strict mode
  2026-04-19 15:59 [PATCH b4 v2 00/11] Enable stricter local checks Tamir Duberstein
                   ` (9 preceding siblings ...)
  2026-04-19 16:00 ` [PATCH b4 v2 10/11] Add ty and configuration Tamir Duberstein
@ 2026-04-19 16:00 ` Tamir Duberstein
  2026-04-23  2:48 ` [PATCH b4 v2 00/11] Enable stricter local checks Konstantin Ryabitsev
  11 siblings, 0 replies; 14+ messages in thread
From: Tamir Duberstein @ 2026-04-19 16:00 UTC (permalink / raw)
  To: Kernel.org Tools; +Cc: Konstantin Ryabitsev, Tamir Duberstein

This catches a few impossible type assertions.

Signed-off-by: Tamir Duberstein <tamird@kernel.org>
---
 misc/review-ci-example.py |  6 +++---
 pyproject.toml            | 15 +++++++++++++--
 src/b4/__init__.py        |  9 +++++----
 src/b4/ez.py              | 29 +++++++++++++++--------------
 src/b4/review/tracking.py | 16 +++++++++-------
 src/b4/ty.py              |  6 +++---
 6 files changed, 48 insertions(+), 33 deletions(-)

diff --git a/misc/review-ci-example.py b/misc/review-ci-example.py
index dee0a5d..b2170a7 100755
--- a/misc/review-ci-example.py
+++ b/misc/review-ci-example.py
@@ -43,7 +43,7 @@ import sys
 
 def main() -> None:
     msg = email.message_from_binary_file(sys.stdin.buffer)
-    subject = msg.get('subject', '(no subject)')  # noqa: F841
+    subject = msg.get('subject', '(no subject)')  # noqa: F841  # pyright: ignore[reportUnusedVariable]
     msgid = msg.get('message-id', '').strip('<> ')
 
     # Example: read tracking data for commit-based CI lookups
@@ -51,9 +51,9 @@ def main() -> None:
     if tracking_file:
         with open(tracking_file) as fp:
             tracking = json.load(fp)
-        branch_tips = tracking.get('series', {}).get('branch-tips', [])
+        branch_tips = tracking.get('series', {}).get('branch-tips', [])  # pyright: ignore[reportUnusedVariable]
     else:
-        branch_tips = []  # noqa: F841
+        branch_tips = []  # noqa: F841  # pyright: ignore[reportUnusedVariable]
 
     # Seed the RNG with the message-id so results are stable across
     # repeated runs of the same message (simulates cached CI results).
diff --git a/pyproject.toml b/pyproject.toml
index bde64bf..b905090 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -134,8 +134,19 @@ warn_unreachable = true
 # once we specify `exclude`, we're on our own. See
 # https://github.com/microsoft/pyright/issues/9057#issuecomment-2366938099.
 exclude = [".venv", "ezgb", "liblore", "patatt"]
-typeCheckingMode = "standard"
-reportUnusedImport = true
+typeCheckingMode = "strict"
+
+# Overly strict rules.
+reportConstantRedefinition = false
+reportUnknownArgumentType = false
+reportUnknownLambdaType = false
+reportUnknownMemberType = false
+reportUnknownVariableType = false
+
+# False positives caused by underscore-prefixed functions used across files.
+# Might be good to clean this up.
+reportPrivateUsage = false
+reportUnusedFunction = false
 
 [tool.ty.src]
 exclude = ["ezgb/", "liblore/", "patatt/"]
diff --git a/src/b4/__init__.py b/src/b4/__init__.py
index 739e1af..e3452b0 100644
--- a/src/b4/__init__.py
+++ b/src/b4/__init__.py
@@ -62,7 +62,7 @@ ConfigDictT = Dict[str, Union[str, List[str], None]]
 
 charset.add_charset('utf-8', None)
 # Policy we use for saving mail locally
-emlpolicy = email.policy.EmailPolicy(
+emlpolicy: email.policy.EmailPolicy[EmailMessage] = email.policy.EmailPolicy(
     utf8=True, cte_type='8bit', max_line_length=None, message_factory=EmailMessage
 )
 
@@ -4399,7 +4399,9 @@ def git_range_to_patches(
         msg.set_charset('utf-8')
         # Clean From to remove any 7bit-safe encoding
         origfrom = LoreMessage.clean_header(msg.get('From'))
-        lsubject = LoreSubject(msg.get('Subject'), presubject=presubject)
+        lsubject = LoreSubject(
+            LoreMessage.clean_header(msg.get('Subject')), presubject=presubject
+        )
         lsubject.counter = counter + 1
         lsubject.expected = expected
         if revision is not None:
@@ -5820,8 +5822,7 @@ def get_git_bool(gitbool: str) -> bool:
 
 def mailbox_email_factory(fh: BinaryIO) -> EmailMessage:
     """Factory function to create EmailMessage objects"""
-    msg = email.parser.BytesParser(policy=emlpolicy, _class=EmailMessage).parse(fh)  # type: EmailMessage
-    return msg
+    return email.parser.BytesParser(policy=emlpolicy, _class=EmailMessage).parse(fh)
 
 
 def get_msgs_from_mailbox_or_maildir(mbmd: str) -> List[EmailMessage]:
diff --git a/src/b4/ez.py b/src/b4/ez.py
index be3d2fe..cd8a10c 100644
--- a/src/b4/ez.py
+++ b/src/b4/ez.py
@@ -2679,20 +2679,21 @@ def get_sent_tagname(
     tagbase: str, tagprefix: str, revstr: Union[str, int]
 ) -> Tuple[str, Optional[int]]:
     revision = None
-    if isinstance(revstr, int):
-        revision = revstr
-    elif isinstance(revstr, str):
-        try:
-            revision = int(revstr)
-        except ValueError:
-            matches = re.search(r'^v(\d+)$', revstr)
-            if not matches:
-                # assume we got a full tag name, so try to find the revision there
-                matches = re.search(r'v(\d+)$', revstr)
-                if matches:
-                    revision = int(matches.groups()[0])
-                return revstr.replace('refs/tags/', ''), revision
-            revision = int(matches.groups()[0])
+    match revstr:
+        case int():
+            revision = revstr
+        case str():
+            try:
+                revision = int(revstr)
+            except ValueError:
+                matches = re.search(r'^v(\d+)$', revstr)
+                if not matches:
+                    # assume we got a full tag name, so try to find the revision there
+                    matches = re.search(r'v(\d+)$', revstr)
+                    if matches:
+                        revision = int(matches.groups()[0])
+                    return revstr.replace('refs/tags/', ''), revision
+                revision = int(matches.groups()[0])
 
     if tagbase.startswith('b4/'):
         return f'{tagprefix}{tagbase[3:]}-v{revision}', revision
diff --git a/src/b4/review/tracking.py b/src/b4/review/tracking.py
index fb41bf0..1adb99a 100644
--- a/src/b4/review/tracking.py
+++ b/src/b4/review/tracking.py
@@ -1057,13 +1057,15 @@ def get_review_target_branches() -> list[str]:
     """Return all configured review-target-branch values."""
     config = b4.get_main_config()
     val = config.get('review-target-branch')
-    if val is None:
-        return []
-    if isinstance(val, list):
-        return [str(v) for v in val if v]
-    if isinstance(val, str) and val:
-        return [val]
-    return []
+    match val:
+        case None:
+            return []
+        case list():
+            return [v for v in val if v]
+        case str():
+            if val:
+                return [val]
+            return []
 
 
 def get_review_target_branch_default() -> Optional[str]:
diff --git a/src/b4/ty.py b/src/b4/ty.py
index ba7f646..4e096af 100644
--- a/src/b4/ty.py
+++ b/src/b4/ty.py
@@ -530,8 +530,8 @@ def send_messages(
 
         outgoing += 1
         if send_email:
-            if not fromaddr and isinstance(jsondata['myemail'], str):
-                fromaddr = jsondata['myemail']
+            if not fromaddr:
+                fromaddr = user_email
             logger.info(
                 '  Sending: %s', b4.LoreMessage.clean_header(msg.get('subject'))
             )
@@ -959,7 +959,7 @@ def get_branch_info(gitdir: Optional[str], branch: str) -> Dict[str, str]:
     BRANCH_INFO = dict()
 
     remotecfg = b4.get_config_from_git('branch\\.%s\\..*' % branch)
-    if remotecfg is None or 'remote' not in remotecfg:
+    if 'remote' not in remotecfg:
         # Did not find a matching branch entry, so look at remotes
         gitargs = ['remote', 'show']
         lines = b4.git_get_command_lines(gitdir, gitargs)

-- 
2.53.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH b4 v2 04/11] Add ruff format check to CI
  2026-04-19 15:59 ` [PATCH b4 v2 04/11] Add ruff format check to CI Tamir Duberstein
@ 2026-04-19 18:06   ` Tamir Duberstein
  0 siblings, 0 replies; 14+ messages in thread
From: Tamir Duberstein @ 2026-04-19 18:06 UTC (permalink / raw)
  To: Kernel.org Tools

On Sun, Apr 19, 2026 at 12:00 PM Tamir Duberstein <tamird@kernel.org> wrote:
>
> Enable ruff format checking in the b4 CI script and configure Ruff's
> formatter in pyproject.toml.
>
> Apply a one-time repo-wide format pass so the new check enforces the
> current style without leaving the branch permanently red.
>
> Signed-off-by: Tamir Duberstein <tamird@kernel.org>

This patch surfaced what looks like a bug in b4: some email-address-like
strings embedded in the patch body appear to have been picked up and
added to the Cc line.
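A purely illustrative guess at the mechanism (not a diagnosis of b4's
actual code, and the line below is hypothetical): running stdlib address
extraction over a patch-body line that happens to contain an address
inside a Python string literal produces exactly this kind of mangled
entry.

```python
from email.utils import getaddresses

# A line like this in a test file is ambiguous to an RFC 2822 parser:
# the ':' reads as a group delimiter and the quote characters end up
# glued onto the "address".
line = "reply: str = 'reply@example.com'"
pairs = getaddresses([line])
print(pairs)

# Whatever the parser returns (the exact result varies across Python
# versions), it is not the clean address a human would extract.
assert all(addr != 'reply@example.com' for _name, addr in pairs)
```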

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH b4 v2 00/11] Enable stricter local checks
  2026-04-19 15:59 [PATCH b4 v2 00/11] Enable stricter local checks Tamir Duberstein
                   ` (10 preceding siblings ...)
  2026-04-19 16:00 ` [PATCH b4 v2 11/11] Enable pyright strict mode Tamir Duberstein
@ 2026-04-23  2:48 ` Konstantin Ryabitsev
  11 siblings, 0 replies; 14+ messages in thread
From: Konstantin Ryabitsev @ 2026-04-23  2:48 UTC (permalink / raw)
  To: Kernel.org Tools, Tamir Duberstein
  Cc: "str = 'reply", "str = 'reviewer",
	"str = 'author", "str = 'patch1"


On Sun, 19 Apr 2026 11:59:55 -0400, Tamir Duberstein wrote:
> Enable stricter local checks
> 
> This series makes b4 local developer checks enforceable from the
> review TUI and makes the repo clean under ruff, mypy, pyright, and ty.
> 
> The early patches set ruff formatting and import behavior, make the
> test environment reproducible under uv, and type the misc helpers enough
> for whole-repo mypy. The middle patches tighten mypy and pyright, then
> add ty with all rules enabled and bump the Python requirement to 3.11
> because the code already uses 3.11-only syntax.
> 
> [...]

Applied, thanks!

[01/11] Add CI script
        commit: a08a591943b0d1f18d6590242fab649ac62616e7
[02/11] Add ruff checks to CI
        commit: 07fa553598c68805f3bb023903fc4f16d64a4f54
[03/11] Import dependencies unconditionally
        commit: 24faa874f9f245484a62c29d0094f251396d93cd
[04/11] Add ruff format check to CI
        commit: 0a0a42f6166f287f986f483502a1ef3a06ebc18d
[05/11] Fix tests under uv with complex git config
        commit: 9231a4ebe52d35fc42ea5f71a6551d79cf7f51c8
[06/11] Fix typings in misc/
        commit: 04c3e2a781b7e557d9be4525eaef5523d1ba35cf
[07/11] Enable mypy unreachable warnings
        commit: 6de7cb316dc3a8cbcaa5547930d0c5b97bb46789
[08/11] Enable and fix pyright diagnostics
        commit: c61f5cb97b461c90732c47130471e72cae069aa7
[09/11] Avoid duplicate map lookups
        commit: badbf476853eefe57c4f51593c1483a937428aee
[10/11] Add ty and configuration
        commit: b150595b630814ba95c1178319dfa5f46730adb1
[11/11] Enable pyright strict mode
        commit: 9186f8a042abb29c90db79fc70967d0d88236d10

Best regards,
-- 
KR



^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2026-04-23  2:48 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-19 15:59 [PATCH b4 v2 00/11] Enable stricter local checks Tamir Duberstein
2026-04-19 15:59 ` [PATCH b4 v2 01/11] Add CI script Tamir Duberstein
2026-04-19 15:59 ` [PATCH b4 v2 02/11] Add ruff checks to CI Tamir Duberstein
2026-04-19 15:59 ` [PATCH b4 v2 03/11] Import dependencies unconditionally Tamir Duberstein
2026-04-19 15:59 ` [PATCH b4 v2 04/11] Add ruff format check to CI Tamir Duberstein
2026-04-19 18:06   ` Tamir Duberstein
2026-04-19 16:00 ` [PATCH b4 v2 05/11] Fix tests under uv with complex git config Tamir Duberstein
2026-04-19 16:00 ` [PATCH b4 v2 06/11] Fix typings in misc/ Tamir Duberstein
2026-04-19 16:00 ` [PATCH b4 v2 07/11] Enable mypy unreachable warnings Tamir Duberstein
2026-04-19 16:00 ` [PATCH b4 v2 08/11] Enable and fix pyright diagnostics Tamir Duberstein
2026-04-19 16:00 ` [PATCH b4 v2 09/11] Avoid duplicate map lookups Tamir Duberstein
2026-04-19 16:00 ` [PATCH b4 v2 10/11] Add ty and configuration Tamir Duberstein
2026-04-19 16:00 ` [PATCH b4 v2 11/11] Enable pyright strict mode Tamir Duberstein
2026-04-23  2:48 ` [PATCH b4 v2 00/11] Enable stricter local checks Konstantin Ryabitsev

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox