terok_clearance
¶
Clearance hub + desktop notification library for terok.
Two unrelated wire formats live under this one package:
- `org.terok.Clearance1` over a unix-socket varlink transport — the hub (`ClearanceHub`) and the client library (`ClearanceClient`, `EventSubscriber`) that drive the per-container block / verdict / lifecycle flow.
- `org.freedesktop.Notifications` over D-Bus — the `DbusNotifier` wrapper that renders those events as desktop popups. Kept because that's the OS API; every other D-Bus path in this package (`org.terok.Shield1`) was removed in favour of the varlink transport.
CLEARANCE_INTERFACE_NAME = 'org.terok.Clearance1'
module-attribute
¶
__all__ = ['CLEARANCE_INTERFACE_NAME', 'CallbackNotifier', 'Clearance1Interface', 'ClearanceClient', 'ClearanceEvent', 'ClearanceHub', 'ContainerIdentity', 'ContainerInfo', 'ContainerInspector', 'DbusNotifier', 'EventSubscriber', 'IdentityResolver', 'InvalidAction', 'Notification', 'Notifier', 'NullInspector', 'NullNotifier', 'ShieldCliFailed', 'UnknownRequest', 'VerdictTupleMismatch', 'check_units_outdated', 'configure_logging', 'create_notifier', 'default_clearance_socket_path', 'install_notifier_service', 'read_installed_unit_version', 'serve', 'uninstall_notifier_service', 'uninstall_service', 'wait_for_shutdown_signal']
module-attribute
¶
__version__ = '0.0.0'
module-attribute
¶
ClearanceClient(*, socket_path=None)
¶
Thin async client for the Clearance1 varlink service.
Two async coroutines to drive:
- `start` — open the subscribe + RPC connections and begin relaying events to the user-supplied callback. Returns once both channels are live; events arrive via `on_event` from then on.
- `verdict` — RPC call; returns `True` if `terok-shield` applied the action, `False` on any refusal or shield failure. The refusal reason is logged at WARNING.
The callback runs on the same event loop as the rest of the client; exceptions it raises are logged and swallowed so one bad handler can't kill the stream for every subsequent event.
Remember the target socket; defaults to default_clearance_socket_path.
Source code in src/terok_clearance/client/client.py
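The log-and-swallow contract for the event callback can be sketched as follows. This is an illustrative, simplified (synchronous) re-implementation of the dispatch behaviour described above, not the library's actual code: one raising handler must not stop delivery of later events.

```python
import logging

def relay_events(events, on_event):
    """Deliver each event to on_event; log and swallow handler errors.

    Simplified sketch of the client's callback contract: a handler
    exception is logged, then delivery continues with the next event.
    """
    delivered = 0
    for event in events:
        try:
            on_event(event)
        except Exception:
            logging.getLogger(__name__).exception("on_event callback failed")
        delivered += 1
    return delivered
```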
start(on_event)
async
¶
Open both connections and begin relaying events to on_event.
The initial connect is awaited synchronously so callers see
start() return only after the subscription is live — a
hub that's down at startup still propagates as an exception.
Subsequent drops are handled by _run_stream's internal
reconnect loop so long-running consumers (TUI, notifier)
survive a systemctl restart terok-clearance without
restarting themselves.
Source code in src/terok_clearance/client/client.py
stop()
async
¶
Close both connections and await the stream task.
Source code in src/terok_clearance/client/client.py
poke_reconnect()
¶
Skip any in-flight reconnect back-off and retry immediately.
Idempotent; a no-op when the stream is healthy because the
event is only awaited inside _run_stream's back-off
window.
Source code in src/terok_clearance/client/client.py
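The back-off window that `poke_reconnect` short-circuits can be modelled with an `asyncio.Event`. A minimal sketch under assumed mechanics (this is not the real `_run_stream`): the stream sleeps on the event between reconnect attempts, and setting it skips whatever delay remains.

```python
import asyncio

class ReconnectBackoff:
    """Sketch of a poke-able back-off window."""

    def __init__(self) -> None:
        self._poke = asyncio.Event()

    def poke_reconnect(self) -> None:
        # Harmless outside the back-off window: nothing awaits the event then.
        self._poke.set()

    async def wait(self, delay: float) -> None:
        # Sleep for `delay`, or less if a poke arrives first.
        try:
            await asyncio.wait_for(self._poke.wait(), timeout=delay)
        except asyncio.TimeoutError:
            pass
        self._poke.clear()
```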
verdict(container, request_id, dest, action)
async
¶
Apply action (allow / deny) to dest via the hub's Verdict RPC.
Returns True when the hub accepted and applied the verdict,
False for any refusal (unknown request_id, tuple mismatch,
invalid action, shield-exec failure). Callers typically ignore
the return value and let the subsequent verdict_applied
event drive UI updates; refusal reasons are logged at WARNING.
Source code in src/terok_clearance/client/client.py
IdentityResolver(inspector)
¶
Compose a ContainerInspector lookup + task-meta YAML into an identity.
Callable: `resolver(container_id) -> ContainerIdentity`. Four soft-fail paths, all returning a degraded identity that keeps the notification pipeline usable:
- The inspector failed → empty `ContainerIdentity`; the subscriber falls back to the raw container ID.
- Container carries no terok annotations (a standalone container that happened to hit the firewall) → container-name-only.
- `ai.terok.task_meta_path` annotation absent → identity without `task_name` (project + task_id still present).
- `task_meta_path` YAML unreadable / missing / malformed → same as above; the name field is left empty.
Configure the resolver with a ContainerInspector implementation.
The inspector is required (no default) so the caller owns the
runtime-selection decision — clearance is runtime-neutral and
must not reach for a specific backend itself. The notifier
entry point picks an appropriate implementation at startup
(terok-sandbox's create_container_inspector when available,
NullInspector otherwise).
Source code in src/terok_clearance/client/identity_resolver.py
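The four soft-fail paths can be sketched like this. Everything here is a self-contained stand-in: the dataclasses are simplified, and the annotation keys other than `ai.terok.task_meta_path` (namely `ai.terok.project` and `ai.terok.task_id`) are assumed names, not confirmed by the source.

```python
from dataclasses import dataclass, field, replace
from typing import Callable, Mapping

@dataclass
class ContainerInfo:
    name: str = ""
    annotations: Mapping[str, str] = field(default_factory=dict)

@dataclass
class ContainerIdentity:
    container_name: str = ""
    project: str = ""
    task_id: str = ""
    task_name: str = ""

def resolve(inspect: Callable[[str], ContainerInfo],
            read_task_name: Callable[[str], str],
            container_id: str) -> ContainerIdentity:
    # Path 1: inspector failed -> empty identity; caller falls back to raw ID.
    info = inspect(container_id)
    if not info.name:
        return ContainerIdentity()
    ann = info.annotations
    # Path 2: no terok annotations -> container-name-only identity.
    ident = ContainerIdentity(
        container_name=info.name,
        project=ann.get("ai.terok.project", ""),    # assumed key name
        task_id=ann.get("ai.terok.task_id", ""),    # assumed key name
    )
    # Paths 3/4: task_meta_path absent or unreadable -> task_name stays empty.
    meta_path = ann.get("ai.terok.task_meta_path", "")
    if meta_path:
        try:
            ident = replace(ident, task_name=read_task_name(meta_path))
        except OSError:
            pass
    return ident
```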
__call__(container_id)
¶
Return the task-aware identity for container_id.
Source code in src/terok_clearance/client/identity_resolver.py
EventSubscriber(notifier, client=None, *, identity_resolver=None, socket_path=None)
¶
Bridge clearance-hub events into desktop notifications.
Owns the presentation-layer state a rendering client needs: live-block
dedup keyed on (container, target), the tracked ShieldDown
popup per container so ShieldUp can retire it, and verdict routing
through notifier action callbacks.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `notifier` | `Notifier` | Desktop notification backend (any `Notifier` implementation). | *required* |
| `client` | `ClearanceClient \| None` | Pre-configured `ClearanceClient`; one is built internally when omitted. | `None` |
| `identity_resolver` | `Callable[[str], ContainerIdentity] \| None` | Turns a short container ID into a `ContainerIdentity`. | `None` |
| `socket_path` | `Path \| None` | Clearance-socket override when `client` isn't supplied (tests). | `None` |
Initialise the subscriber with a notifier and transport.
Source code in src/terok_clearance/client/subscriber.py
start()
async
¶
Connect to the clearance hub and begin rendering its event stream.
stop()
async
¶
Drain pending tasks and close the transport.
Closes the client first so no new handler tasks are scheduled,
then awaits the currently-tracked tasks to settle (with their
own CancelledError suppressed). A bare sleep(0) would
yield only one loop turn — not enough for cancellation to
propagate through chained awaits — and tasks.clear() on its
own would drop references to tasks still writing to handles we
then close.
Source code in src/terok_clearance/client/subscriber.py
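The settle-then-clear ordering described for `stop` can be sketched as follows; this is an illustrative helper, not the subscriber's actual code. Each task is cancelled and then awaited individually so cancellation propagates through chained awaits before the references are dropped.

```python
import asyncio
import contextlib

async def drain(tasks: set) -> None:
    """Cancel tracked tasks and await their settlement.

    Awaiting each task (with its own CancelledError suppressed) is what
    a bare sleep(0) or tasks.clear() alone would not give you.
    """
    for task in tasks:
        task.cancel()
    for task in tasks:
        with contextlib.suppress(asyncio.CancelledError):
            await task
    tasks.clear()
```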
ContainerInfo(container_id='', name='', state='', annotations=_EMPTY_ANNOTATIONS)
dataclass
¶
What podman inspect tells us about one container.
Empty instance (ContainerInfo()) represents "not found" or
"lookup failed" — callers should treat missing fields as
best-effort and fall back to the raw container ID when they
don't have a better label.
container_id = ''
class-attribute
instance-attribute
¶
The short ID podman reported back, or empty on failure.
name = ''
class-attribute
instance-attribute
¶
The container's name without podman's leading / prefix.
state = ''
class-attribute
instance-attribute
¶
Lifecycle state: running, exited, created, etc. Empty when unknown.
annotations = field(default_factory=(lambda: _EMPTY_ANNOTATIONS))
class-attribute
instance-attribute
¶
Every OCI annotation podman recorded for this container.
Exposed as a read-only Mapping — cached instances are
shared across inspector callers, so mutating the underlying dict
would poison future lookups. Build with types.MappingProxyType
at construction time; callers (clearance's task-aware resolver,
anything else that cares) pluck out the keys they know about.
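The read-only construction can be shown in two lines with the stdlib; the key/value here is purely illustrative.

```python
import types

# Illustrative annotation dict; wrapped once at construction time so
# cached, shared instances cannot be mutated by any caller.
raw = {"ai.terok.task_meta_path": "/run/terok/task.yaml"}
annotations = types.MappingProxyType(raw)
```

Lookups behave like a normal mapping, while any attempted write raises `TypeError`.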
ClearanceEvent(type, container, request_id='', dest='', port=0, proto=0, domain='', action='', ok=False, reason='')
dataclass
¶
One event fanned out to every Subscribe() caller.
type + container are always populated; the remaining fields
are filled in per-kind and default to zero-values otherwise.
Known values of type (additional fields beyond container):
- `connection_blocked` — `request_id`, `dest`, `port`, `proto`, `domain`. Requires an operator verdict.
- `verdict_applied` — `request_id`, `action`, `ok`.
- `container_started` — no extras.
- `container_exited` — `reason`.
- `shield_up` / `shield_down` / `shield_down_all` — no extras.
Unknown values are forwarded unchanged so the wire format can grow without breaking clients pinned to older schemas.
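A decoding helper illustrating the forward-compatibility contract might look like this. The helper and its behaviour toward unknown payload keys are assumptions for illustration; the point is that an unrecognised `type` string still produces a valid event.

```python
from dataclasses import dataclass, fields

@dataclass
class ClearanceEvent:
    type: str
    container: str
    request_id: str = ""
    dest: str = ""
    port: int = 0
    proto: int = 0
    domain: str = ""
    action: str = ""
    ok: bool = False
    reason: str = ""

def event_from_wire(payload: dict) -> ClearanceEvent:
    # Keep only the fields this schema version knows; an unrecognised
    # `type` value passes through unchanged so older clients keep working.
    known = {f.name for f in fields(ClearanceEvent)}
    return ClearanceEvent(**{k: v for k, v in payload.items() if k in known})
```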
type
instance-attribute
¶
container
instance-attribute
¶
request_id = ''
class-attribute
instance-attribute
¶
dest = ''
class-attribute
instance-attribute
¶
port = 0
class-attribute
instance-attribute
¶
proto = 0
class-attribute
instance-attribute
¶
domain = ''
class-attribute
instance-attribute
¶
action = ''
class-attribute
instance-attribute
¶
ok = False
class-attribute
instance-attribute
¶
reason = ''
class-attribute
instance-attribute
¶
ContainerIdentity(container_name='', project='', task_id='', task_name='')
dataclass
¶
Host-side facts about a container, as much as the resolver found.
Terok-managed task containers carry project and task_id via
OCI annotations set at podman run time; task_name is looked
up live from terok's task metadata so a rename between block and
verdict is reflected in the resolved popup. Standalone containers
produce an instance with only container_name set (or empty
everywhere when podman inspect itself failed).
ContainerInspector
¶
Bases: Protocol
Callable that maps a container id to a ContainerInfo.
The protocol intentionally covers only the notification-rendering
use case — name + OCI annotations + lifecycle state. Broader
runtime operations (exec, mount, signals) live on
terok_sandbox.runtime.ContainerRuntime and are not part of
this contract.
Implementations MUST soft-fail: an unreachable runtime / missing
container / malformed metadata returns an empty ContainerInfo
rather than raising, so notification pipelines keep their fallback
label instead of crashing on a lookup hiccup.
__call__(container_id)
¶
Return the best-effort ContainerInfo for container_id.
NullInspector
¶
Always-empty ContainerInspector — the graceful-degradation default.
Installed when no runtime-aware package provides a concrete
backend. Every lookup returns ContainerInfo() so the
notifier still renders (raw container id, no enrichment).
__call__(_container_id)
¶
Return the universal empty ContainerInfo.
ClearanceHub(*, clearance_socket=None, reader_socket=None, verdict_client=None)
¶
Server for the org.terok.Clearance1 interface.
Owns three pieces of state:
- `_subscribers` — a set of bounded per-connection queues; the hub puts a `ClearanceEvent` on each one every time the reader ingester delivers an event. Slow clients see their oldest events dropped; fast clients aren't affected.
- `_live_verdicts` — the `request_id → (container, dest)` map the `Verdict` method checks for the authz binding.
- An `EventIngester` bound to the canonical reader socket.
Lifecycle: start brings everything up; stop tears
it down under individual timeouts so a flaky bus or a stuck
subscriber can't burn systemd's stop-sigterm deadline.
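The drop-oldest fan-out over bounded subscriber queues can be sketched in a few lines. This is an illustrative re-implementation of the policy stated above, assuming plain `asyncio.Queue` instances per connection.

```python
import asyncio

def fan_out(subscribers, event) -> None:
    """Deliver event to every bounded subscriber queue.

    A full queue (slow client) loses its oldest event, never the
    newest; other subscribers are unaffected.
    """
    for queue in subscribers:
        if queue.full():
            queue.get_nowait()  # sacrifice the oldest entry
        queue.put_nowait(event)
```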
Configure the two sockets and the verdict-helper client.
verdict_client is injected so tests can stub out shield exec
without spawning the helper process. Production callers leave
it defaulted — a fresh VerdictClient pointing at the
canonical helper socket.
Source code in src/terok_clearance/hub/server.py
start()
async
¶
Bring the ingester + varlink server online and accept clients.
Transactional: if the varlink bind fails after the ingester is already listening, the ingester is stopped before the exception propagates so a half-started hub doesn't leak a live reader-side socket on systemd restart paths.
Source code in src/terok_clearance/hub/server.py
stop()
async
¶
Close the varlink server + ingester; drain subscriber queues.
Source code in src/terok_clearance/hub/server.py
CallbackNotifier(on_notify=None, *, on_container_started=None, on_container_exited=None, on_shield_up=None, on_shield_down=None, on_shield_down_all=None)
¶
Notifier backend that delegates rendering to caller-supplied hooks.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `on_notify` | `Callable[[Notification], None] \| None` | Called for every posted `Notification`. | `None` |
| `on_container_started` | `Callable[[str], None] \| None` | Called for every `container_started` event with the container ID. | `None` |
| `on_container_exited` | `Callable[[str, str], None] \| None` | Called for every `container_exited` event with the container ID and reason. | `None` |
| `on_shield_up` | `Callable[[str], None] \| None` | Called for every `shield_up` event with the container ID. | `None` |
| `on_shield_down` | `Callable[[str], None] \| None` | Called for every `shield_down` event with the container ID. | `None` |
| `on_shield_down_all` | `Callable[[str], None] \| None` | Called for every `shield_down_all` event with the container ID. | `None` |
Bind optional notify and lifecycle callbacks.
Source code in src/terok_clearance/notifications/callback.py
notify(summary, body='', *, actions=(), timeout_ms=-1, hints=None, replaces_id=0, app_icon='', container_id='', container_name='', project='', task_id='', task_name='')
async
¶
Record the notification and invoke the on_notify hook.
Returns a monotonically increasing ID, or replaces_id for updates.
Source code in src/terok_clearance/notifications/callback.py
on_action(notification_id, callback)
async
¶
Store the action callback for later invocation.
close(notification_id)
async
¶
disconnect()
async
¶
invoke_action(notification_id, action_key)
¶
Invoke the stored callback for a user verdict.
This is the entry point for consumers that handle user input
(Allow/Deny) and need to route the decision back through
EventSubscriber to the D-Bus Verdict/Resolve method.
Source code in src/terok_clearance/notifications/callback.py
on_container_started(container)
¶
Forward a ContainerStarted lifecycle event to the consumer hook.
on_container_exited(container, reason)
¶
Forward a ContainerExited lifecycle event to the consumer hook.
on_shield_up(container)
¶
on_shield_down(container)
¶
Forward a ShieldDown signal (partial bypass) to the consumer hook.
on_shield_down_all(container)
¶
Forward a ShieldDownAll signal (full bypass) to the consumer hook.
Notification(nid, summary, body, actions, replaces_id, timeout_ms, container_id='', container_name='', project='', task_id='', task_name='')
dataclass
¶
Snapshot of a single notification posted by the subscriber.
The identity fields (container_id, container_name,
project, task_id, task_name) are presentation-layer
context the subscriber's identity_resolver produced — empty
strings when unresolved. The desktop DbusNotifier
discards all of them; the TUI uses the task triple to render a
Task column for terok-managed containers and falls back to the
container name for standalone ones.
nid
instance-attribute
¶
summary
instance-attribute
¶
body
instance-attribute
¶
actions
instance-attribute
¶
replaces_id
instance-attribute
¶
timeout_ms
instance-attribute
¶
container_id = ''
class-attribute
instance-attribute
¶
container_name = ''
class-attribute
instance-attribute
¶
project = ''
class-attribute
instance-attribute
¶
task_id = ''
class-attribute
instance-attribute
¶
task_name = ''
class-attribute
instance-attribute
¶
DbusNotifier(app_name='terok')
¶
Send desktop notifications over the D-Bus session bus.
The connection is established lazily on the first notify call.
Action callbacks are dispatched from the ActionInvoked signal;
stale callbacks are cleaned up automatically on NotificationClosed.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `app_name` | `str` | Application name sent with every notification. | `'terok'` |
Initialise with the given application name.
Source code in src/terok_clearance/notifications/desktop.py
connect()
async
¶
Idempotently open the session-bus connection and subscribe to signals.
Safe to call concurrently and repeatedly: the lock serialises racing callers so exactly one MessageBus is ever created for this notifier.
Source code in src/terok_clearance/notifications/desktop.py
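The serialisation contract can be sketched with the double-checked async-lock pattern. This is an illustrative stand-in (a counter and a placeholder object instead of a real `MessageBus`), showing how racing callers end up sharing exactly one connection.

```python
import asyncio

class LazyConnection:
    """Sketch of idempotent, lock-serialised connect()."""

    def __init__(self) -> None:
        self._lock = asyncio.Lock()
        self._bus = None
        self.created = 0

    async def connect(self) -> None:
        async with self._lock:
            if self._bus is None:        # only the first caller constructs
                await asyncio.sleep(0)   # stand-in for the real bus handshake
                self.created += 1
                self._bus = object()
```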
notify(summary, body='', *, actions=(), timeout_ms=-1, hints=None, replaces_id=0, app_icon='', container_id='', container_name='', project='', task_id='', task_name='')
async
¶
Send a desktop notification.
Freedesktop notifications render summary + body + actions only,
so the structured identity kwargs (container_id and the
terok task triple) are dropped on the floor here — callers are
expected to have folded the user-facing identity into body
already. The kwargs stay in the signature for
Notifier conformance so callers
don't have to branch on notifier kind.
Source code in src/terok_clearance/notifications/desktop.py
on_action(notification_id, callback)
async
¶
Register a callback for when the user clicks an action button.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `notification_id` | `int` | ID returned by `notify`. | *required* |
| `callback` | `Callable[[str], None]` | Called with the action key the user clicked. | *required* |
Source code in src/terok_clearance/notifications/desktop.py
close(notification_id)
async
¶
Close an active notification.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `notification_id` | `int` | ID returned by `notify`. | *required* |
Source code in src/terok_clearance/notifications/desktop.py
disconnect()
async
¶
Tear down the session-bus connection.
Source code in src/terok_clearance/notifications/desktop.py
NullNotifier
¶
Silent fallback that satisfies the Notifier protocol.
Every method is a no-op. notify always returns 0.
notify(summary, body='', *, actions=(), timeout_ms=-1, hints=None, replaces_id=0, app_icon='', container_id='', container_name='', project='', task_id='', task_name='')
async
¶
Accept and discard a notification, returning 0.
Source code in src/terok_clearance/notifications/null.py
on_action(notification_id, callback)
async
¶
close(notification_id)
async
¶
Notifier
¶
Bases: Protocol
Structural type for desktop notification backends.
Implementations must provide notify, on_action, close, and
disconnect. DbusNotifier talks to a real session bus;
NullNotifier silently discards everything for headless environments.
notify(summary, body='', *, actions=(), timeout_ms=-1, hints=None, replaces_id=0, app_icon='', container_id='', container_name='', project='', task_id='', task_name='')
async
¶
Send a desktop notification.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `summary` | `str` | Notification title. | *required* |
| `body` | `str` | Optional body text. | `''` |
| `actions` | `Sequence[tuple[str, str]]` | Action `(key, label)` pairs rendered as buttons. | `()` |
| `timeout_ms` | `int` | Expiration hint in milliseconds (`-1` for the server default). | `-1` |
| `hints` | `Mapping[str, Any] \| None` | Freedesktop hint dict. | `None` |
| `replaces_id` | `int` | Replace an existing notification in-place. | `0` |
| `app_icon` | `str` | Icon name or file path. | `''` |
| `container_id` | `str` | Presentation-layer hint: the 12-char podman container ID the event refers to. The desktop `DbusNotifier` ignores it. | `''` |
| `container_name` | `str` | Podman container name. | `''` |
| `project` | `str` | Terok project slug when the container is orchestrator-managed (from the OCI annotation set at `podman run` time). | `''` |
| `task_id` | `str` | Terok task ID (from the OCI annotation). | `''` |
| `task_name` | `str` | Human-readable task label from terok's metadata — mutable at any point in the task's life, so resolved live by callers, not snapshotted. Empty when unknown. | `''` |
Returns:
| Type | Description |
|---|---|
| `int` | Server-assigned notification ID (usable as `replaces_id` and with `on_action` / `close`). |
Source code in src/terok_clearance/notifications/protocol.py
on_action(notification_id, callback)
async
¶
Register a callback for when the user clicks an action button.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `notification_id` | `int` | ID returned by `notify`. | *required* |
| `callback` | `Callable[[str], None]` | Called with the action key the user clicked. | *required* |
Source code in src/terok_clearance/notifications/protocol.py
InvalidAction
¶
ShieldCliFailed
¶
Bases: TypedVarlinkErrorReply
terok-shield allow|deny exited non-zero or timed out.
Clients render this as the red "Allow failed" / "Deny failed"
popup variant: the user's click reached the hub but the firewall
didn't accept it, so the notification's premise ("you decided X")
is misleading. stderr is whatever terok-shield wrote,
truncated to a reasonable length by the hub.
UnknownRequest
¶
Bases: TypedVarlinkErrorReply
Verdict referenced a request_id the hub didn't emit.
Fail-closed against fabricated verdicts: a peer connected to the clearance socket synthesises a verdict for a block that was never broadcast. No binding, no action.
VerdictTupleMismatch
¶
Bases: TypedVarlinkErrorReply
(container, dest) don't match the hub's pending record.
Cheap defence against replay attackers who sniffed a request_id
on this connection but try to apply a verdict against a different
destination. expected_* are what the hub recorded when it
emitted connection_blocked; got_* are what the call
carried.
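The binding check against the pending-block map can be sketched as follows. This is an illustrative re-implementation under assumptions (the real refusal order and error payloads live in the hub; the exception classes here are bare stand-ins for the typed varlink errors).

```python
class InvalidAction(Exception): ...
class UnknownRequest(Exception): ...
class VerdictTupleMismatch(Exception): ...

def check_verdict(live: dict, request_id: str, container: str,
                  dest: str, action: str) -> None:
    """Validate a verdict against the request_id -> (container, dest) map.

    Raising is fail-closed: no matching binding, no shield action.
    """
    if action not in ("allow", "deny"):
        raise InvalidAction(action)
    if request_id not in live:
        raise UnknownRequest(request_id)
    expected = live[request_id]
    if expected != (container, dest):
        raise VerdictTupleMismatch(
            f"expected {expected}, got {(container, dest)}")
```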
Clearance1Interface(event_stream_factory, apply_verdict)
¶
Bases: VarlinkInterface
Varlink interface served by the clearance hub.
Two callables are injected so the state machine stays testable without a live varlink connection:
- `event_stream_factory` — returns a fresh `AsyncIterator` yielding `ClearanceEvent` instances. The hub owns one per connected subscriber so backpressure is local to the slow client.
- `apply_verdict` — validates the triple and, on success, shells out to `terok-shield`. Raises a typed varlink error for any refusal path; returns `True` only when the shield invocation itself succeeded.
Bind the per-subscriber event stream factory and the verdict callable.
Source code in src/terok_clearance/wire/interface.py
Subscribe()
async
¶
Stream hub events to this caller until the connection closes.
Every yield is forwarded immediately with continues=true;
the stream ends only when the client disconnects. A buffered
(delay_generator=True) stream would hold the first event
until a second arrives, breaking the "something just happened"
liveness contract operators expect from a notification channel.
Source code in src/terok_clearance/wire/interface.py
Verdict(*, container, request_id, dest, action)
async
¶
Apply action (allow / deny) to dest for container.
Returns True when terok-shield accepted the verdict.
Raises UnknownRequest,
VerdictTupleMismatch,
InvalidAction, or
ShieldCliFailed on the
four refusal paths — clients get a typed error they can render
without stringly-matching the message.
Source code in src/terok_clearance/wire/interface.py
serve()
async
¶
Run the hub service until SIGINT/SIGTERM.
The entry point terok-clearance serve hands off here. Blocks forever
on a signal-set asyncio.Event; systemd's SIGTERM flips it,
then stop tears down the server under a timeout.
Source code in src/terok_clearance/hub/server.py
create_notifier(app_name='terok')
async
¶
Return a connected DbusNotifier, or a NullNotifier on failure.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `app_name` | `str` | Application name sent with every notification. | `'terok'` |

Returns:
| Type | Description |
|---|---|
| `Notifier` | A connected `DbusNotifier`, or a `NullNotifier` when the session bus is unavailable. |
Source code in src/terok_clearance/notifications/factory.py
check_units_outdated()
¶
Return a one-line drift warning if any installed unit is stale, else None.
Checks hub + verdict together (they're installed as a pair by
install_service) plus the notifier independently (headless
hosts may install it later, or not at all). None is returned
when neither pair nor notifier is installed (headless host, or
no setup command has run yet); a one-sided hub/verdict pair is
reported as stale so the operator is prompted to restore it. A
legacy terok-dbus.service on disk counts as "stale" so the
operator is prompted to rerun setup and get the split pair.
Source code in src/terok_clearance/runtime/installer.py
install_notifier_service(bin_path=None)
¶
Render + write the notifier unit into the user systemd directory.
Paired with install_service: headless hosts that installed
the hub + verdict pair can opt into the desktop notifier later by
calling only this function. Daemon-reloads once at the end.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `bin_path` | `Path \| list[str] \| None` | | `None` |

Returns:
| Type | Description |
|---|---|
| `Path` | The on-disk path of the written unit file. |
Source code in src/terok_clearance/runtime/installer.py
read_installed_unit_version()
¶
Return the hub unit's # terok-clearance-hub-version: stamp, or None.
None is either "unit not installed" or "unit installed without
a marker" (the pre-split legacy unit) — check_units_outdated
differentiates between those in its operator-facing message.
Source code in src/terok_clearance/runtime/installer.py
uninstall_notifier_service()
¶
Disable + unlink the notifier unit; daemon-reload once.
Symmetric teardown for install_notifier_service. Soft-fail
on every step so a half-installed tree still ends up clean.
Source code in src/terok_clearance/runtime/installer.py
uninstall_service()
¶
Disable + unlink both new units + any pre-split legacy leftover.
Symmetric teardown for install_service — terok uninstall
calls this instead of rolling its own systemctl + unlink sequence.
Daemon-reloads once at the end so systemd's in-memory registry
drops the now-missing units. All individual steps soft-fail so a
half-installed tree still ends up clean.
Source code in src/terok_clearance/runtime/installer.py
configure_logging(level=logging.INFO)
¶
Send INFO-level logs to stderr so journald / systemd pick them up.
Source code in src/terok_clearance/runtime/service.py
wait_for_shutdown_signal()
async
¶
Block the current task until SIGINT or SIGTERM arrives.
Source code in src/terok_clearance/runtime/service.py
default_clearance_socket_path()
¶
Return the canonical clearance-socket path under $XDG_RUNTIME_DIR.
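The path derivation might look like the sketch below. Both the `terok` subdirectory and the `clearance.sock` filename are assumptions for illustration; only the `$XDG_RUNTIME_DIR` anchoring (with the conventional `/run/user/<uid>` fallback) comes from the description above.

```python
import os
from pathlib import Path

SOCKET_NAME = "clearance.sock"  # assumed filename; the real one may differ

def default_clearance_socket_path() -> Path:
    """Return the clearance socket path under the user runtime dir."""
    runtime_dir = os.environ.get("XDG_RUNTIME_DIR") or f"/run/user/{os.getuid()}"
    return Path(runtime_dir) / "terok" / SOCKET_NAME
```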