Documentation Index

Fetch the complete documentation index at: https://docs.appliedaifoundation.org/llms.txt

Use this file to discover all available pages before exploring further.
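As a sketch, the index can be fetched and split into page entries with a few lines of Python. The URL is the one given above; the helper names (`parse_index`, `fetch_index`) are illustrative, not part of the documentation site's API:

```python
import urllib.request

INDEX_URL = "https://docs.appliedaifoundation.org/llms.txt"

def parse_index(text: str) -> list[str]:
    """Split an llms.txt listing into its non-empty entry lines."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def fetch_index(url: str = INDEX_URL) -> list[str]:
    """Download the documentation index and return its entries."""
    with urllib.request.urlopen(url) as resp:
        return parse_index(resp.read().decode("utf-8"))
```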

[Figure: Skills overview showing the four-category architecture (collectors, analyzers, document tools, and reports & utilities) with the collector → analyzer → review flow]

What skills are

Skills are modular, reusable capabilities that agents invoke to do real work. Each skill packages a workflow — what it reads, what it produces, when to use it — into a portable unit any agent can call. The same skill works equally well for a senior agent reviewing a vessel and a human operator running a one-off check. The library currently has 44 skills grouped into four areas: collectors that gather evidence, analyzers that turn that evidence into senior reviews, document tools that index and search the maritime knowledge base, and utilities for reports, images, and intelligence queries.
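The packaging described above can be pictured as a small descriptor. This is a hypothetical sketch; the field names and example values are illustrative, not the library's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """One portable unit: what it reads, what it produces, when to use it."""
    name: str
    category: str   # "collector", "analyzer", "documents", or "utilities"
    reads: list[str] = field(default_factory=list)
    produces: list[str] = field(default_factory=list)
    when_to_use: str = ""

# The same descriptor serves an agent doing fleet review and a human one-off check:
ae_collector = Skill(
    name="ae-performance-data-collector",
    category="collector",
    reads=["vessel telemetry database"],
    produces=["timestamped, append-only evidence record"],
    when_to_use="Snapshot auxiliary-engine performance for a single vessel.",
)
```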

How skills compose

Most maritime work follows a two-stage pattern:
  1. A data collector fetches the latest evidence for a vessel domain (engine telemetry, certificates, defects, etc.) and files it as a timestamped, append-only record.
  2. An analyzer reads that evidence, runs a deterministic analysis script, and produces a senior expert review with prioritised actions and an escalation decision.
Splitting the two stages means analyses are reproducible (re-runnable without re-fetching), auditable (every number traces to evidence), and cheap to operate (collection is a database query; analysis is a single model call).
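The two stages can be sketched in a few lines of Python. Everything here is an assumption for illustration: the `fetch`, `script`, and `model` callables and the `evidence/` layout are hypothetical, not names from the library:

```python
import datetime
import json
import pathlib

def collect(vessel: str, domain: str, fetch) -> pathlib.Path:
    """Stage 1: fetch the latest evidence and file it as a timestamped record."""
    record = {
        "vessel": vessel,
        "domain": domain,
        "collected_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "evidence": fetch(vessel),
    }
    folder = pathlib.Path("evidence") / vessel / domain
    folder.mkdir(parents=True, exist_ok=True)
    # Append-only: each snapshot gets its own file, nothing is overwritten.
    out = folder / f"{record['collected_at']}.json"
    out.write_text(json.dumps(record))
    return out

def analyze(evidence_path: pathlib.Path, script, model) -> dict:
    """Stage 2: a deterministic script computes the numbers; the model narrates."""
    record = json.loads(evidence_path.read_text())
    findings = script(record["evidence"])   # reproducible, re-runnable
    return {
        "review": model(findings),          # one model call
        "escalate": findings.get("escalate", False),
        "source": str(evidence_path),       # every number traces to evidence
    }
```

Because the analyzer reads from the filed record rather than the live source, re-running it yields the same numbers from the same evidence.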

Categories

Collectors — gather evidence

Per-vessel data collectors that snapshot a single domain:

Analyzers — produce senior reviews

Senior agents that turn collector evidence into TSI-grade reviews with escalation logic:

Documents & knowledge

Tools that turn the maritime corpus into searchable, citable knowledge:

Reports & utilities

Cross-cutting tools used during inspections, investigations, and reporting:

How a typical review flows

```
User asks "How are the AEs on POSUN?"
                 │
                 ▼
ae-performance-data-collector
   fetches telemetry, files evidence
                 │
                 ▼
ae-performance-analyzer
   runs analysis script
   writes senior review
   decides escalation
                 │
            ┌────┴────┐
            │         │
            OK     Escalation
                      │
                      ▼
          tsi-review picks up the case
```

The same pattern applies across every domain — engines, certificates, crew, voyages, finance — which means anyone reviewing fleet status sees the same shape of evidence regardless of who or what produced it.

Why script-driven analysis

Pattern detection, threshold logic, and cost calculation all live in deterministic analysis scripts — not in the language model. The model interprets results into a narrative review; it doesn’t recompute. Reviews stay consistent across vessels, auditable (every number traces to source evidence), and cheap to run.
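A toy example of what such a script might look like. The thresholds, bunker price, and field names below are assumed for illustration; the library's real scripts are domain-specific:

```python
# Assumed constants, not actual fleet thresholds.
FUEL_RATE_LIMIT_KG_H = 220.0   # per-engine alarm threshold (assumed)
FUEL_COST_USD_PER_KG = 0.65    # bunker price used for cost calculation (assumed)

def analyze_ae_telemetry(samples: list[dict]) -> dict:
    """Deterministic layer: threshold logic and cost math live here, not in the model."""
    over = [s for s in samples if s["fuel_rate_kg_h"] > FUEL_RATE_LIMIT_KG_H]
    excess_kg = sum(s["fuel_rate_kg_h"] - FUEL_RATE_LIMIT_KG_H for s in over)
    return {
        "samples": len(samples),
        "over_limit": len(over),
        "excess_cost_usd": round(excess_kg * FUEL_COST_USD_PER_KG, 2),
        # Deterministic escalation rule: flag when most samples breach the limit.
        "escalate": len(over) > len(samples) // 2,
    }
```

The model receives only the returned `findings` dictionary and writes the narrative review around it; running the script twice on the same evidence always yields the same numbers.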