A unified view of every defect against a vessel — from PSC findings to near-misses to open Conditions of Class — with severity-weighted age scoring and systemic-pattern detection.
“How many open defects do we have?” is the wrong question. The right one is “how many of them have been open longer than they should be, and how many of those are on the same equipment?”
A vessel collects defects from twenty different angles: PSC inspections, SIRE, CDI, charterers, owners, terminals, internal audits, navigational audits, flag state, external audits, ADI events, near-misses, Hull & Machinery, SCMM, the previous VIR, the most recent VIR, and Conditions of Class. The numbers add up fast: most fleets carry several hundred open items at any time. The work isn’t tracking the count. It’s separating the ones that need a Technical Superintendent’s attention from the ones that are just paperwork.

The Defect Surveillance pipeline pulls every defect source into one place, scores age and severity, clusters by equipment family, and tracks closure-rate trends. It produces a single ranked list (the items most likely to bite, sorted with the worst on top) alongside the systemic patterns that explain why this vessel keeps generating the same kind of defect.
A finding can come from any of these surfaces. Each one has its own filing format, its own due-date convention, its own severity vocabulary. The pipeline normalises them all into the same record shape:
| Source | Origin | Typical severity vocabulary | Where the data comes from |
| --- | --- | --- | --- |
| PSC | Port State Control inspection | Major / Detainable / Minor | Port-state-control regime portals (Paris MoU, Tokyo MoU, Indian Ocean MoU, Mediterranean MoU, Caribbean MoU, Black Sea MoU, Abuja MoU, USCG, Riyadh) |
Nineteen sources, one normalised store. A summary rollup flattens all of them into a single nested status structure:
```python
def flatten_json(imo, nested_data):
    """Flatten nested defect summary into per-status records."""
    records = []
    for key, values in nested_data.items():
        question = values.get("questionName")
        updated_at = values.get("updatedAt")
        status_counts = values.get("value", {}).get("status_counts", {})
        additional = values.get("value", {})

        # Per-source breakdown (Class, Shippalm, …)
        for source_key, source_value in additional.items():
            if isinstance(source_value, dict) and "status_counts" in source_value:
                for status, count in source_value["status_counts"].items():
                    records.append({
                        "imo": imo,
                        "questionName": question,
                        "source": source_key,
                        "status": status,
                        "count": count,
                        "updatedAt": updated_at,
                    })

        # Roll-up at the main level
        if status_counts:
            for status, count in status_counts.items():
                records.append({
                    "imo": imo,
                    "questionName": question,
                    "source": "Main",
                    "status": status,
                    "count": count,
                    "updatedAt": updated_at,
                })
    return records
```
Once flattened, every defect in the fleet has the same shape: vessel, source, severity, status, dates. Everything downstream operates on that shape — there is no per-source analysis logic.
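That common shape can be sketched as a small record type. The field names below are assumptions for illustration; the document only specifies the shape as vessel, source, severity, status, and dates.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DefectRecord:
    """Hypothetical normalised defect record; field names are illustrative."""
    imo: int                      # vessel identifier
    source: str                   # "PSC", "SIRE", "CDI", ...
    severity: str                 # normalised vocabulary, e.g. CRITICAL / MAJOR / MINOR
    status: str                   # e.g. OPEN / CLOSED
    opened: date
    closed: Optional[date] = None

    def age_days(self, today: date) -> int:
        """Age of the defect: time open, or time to closure if closed."""
        end = self.closed or today
        return (end - self.opened).days
```

Everything downstream (age scoring, clustering, closure-rate trends) can then be written once against this record, with no per-source branches.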
A spike of CRITICAL items in the Chronic tier is the single largest escalation trigger in the pipeline — items that stay open that long typically reflect a closure-process gap, not an isolated incident.
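A minimal sketch of severity-weighted age scoring and tiering. Only the Chronic cutoff (>180 days) appears in the text; the weights and the lower tiers here are assumptions for illustration.

```python
# Illustrative constants -- the pipeline's real weights and tier
# boundaries (other than Chronic > 180d) are not given in the document.
SEVERITY_WEIGHT = {"CRITICAL": 5.0, "MAJOR": 3.0, "MINOR": 1.0}

def age_tier(age_days: int) -> str:
    """Bucket a defect by how long it has been open."""
    if age_days > 180:
        return "Chronic"      # the >180d tier named in the text
    if age_days > 90:
        return "Stale"        # assumed intermediate tier
    return "Fresh"            # assumed

def age_score(severity: str, age_days: int) -> float:
    """Severity-weighted age: older and more severe ranks higher."""
    return SEVERITY_WEIGHT.get(severity, 1.0) * age_days

# A CRITICAL item open 200 days outranks a MINOR one open 400 days.
assert age_score("CRITICAL", 200) > age_score("MINOR", 400)
```

Sorting the normalised records by this score descending yields the ranked list described above.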
Every defect is tagged with an equipment family (engine, cargo system, navigation, accommodation, hull, safety, environmental). Three or more open defects on the same family flips the verdict from “isolated incidents” to “systemic issue”:

$$\text{Cluster signal}(f) = \begin{cases} \text{Systemic} & \text{if } N_{\text{open}}(f) \ge 3 \\ \text{Isolated} & \text{otherwise} \end{cases}$$

A systemic cluster on the cargo system is a different conversation than three unrelated defects across three families.
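The cluster rule reduces to a count per family against the threshold of three. A sketch, assuming each open defect carries a `family` tag as described:

```python
from collections import Counter

SYSTEMIC_THRESHOLD = 3  # from the formula: >= 3 open defects on one family

def cluster_signals(open_defects):
    """Map each equipment family to 'Systemic' or 'Isolated'."""
    counts = Counter(d["family"] for d in open_defects)
    return {fam: ("Systemic" if n >= SYSTEMIC_THRESHOLD else "Isolated")
            for fam, n in counts.items()}

defects = [{"family": "cargo system"}] * 6 + [{"family": "safety"}] * 2
print(cluster_signals(defects))
# {'cargo system': 'Systemic', 'safety': 'Isolated'}
```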
The pipeline tracks closure rate period-on-period:

$$\text{Closure rate} = \frac{\text{Defects closed in period}}{\text{Open at start} + \text{Opened in period}}$$

Two patterns matter:
- **Falling closure rate + rising open count** — operational drift. The team is opening defects faster than closing them. Usually points to a resource gap or a process that’s stopped working.
- **Stable closure rate + rising open count** — capacity problem. The team is keeping pace but new defects are arriving faster than expected, often after a change of crew or after a heavy port-call window.
Closure-rate verdict is reported alongside the absolute open count so the reviewer sees direction and magnitude together.
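The two patterns can be sketched as a small classifier. The formula is the one above; the 20% drop threshold that separates “falling” from “stable” is an assumption for illustration, not the pipeline’s actual cutoff.

```python
def closure_rate(closed: int, open_at_start: int, opened: int) -> float:
    """Closure rate = closed in period / (open at start + opened in period)."""
    denom = open_at_start + opened
    return closed / denom if denom else 0.0

def trend_verdict(rate_now: float, rate_prev: float,
                  open_now: int, open_prev: int) -> str:
    """Classify the period-on-period pattern (thresholds assumed)."""
    if open_now <= open_prev:
        return "Steady"
    if rate_now < 0.8 * rate_prev:   # closure rate fell materially
        return "Operational drift"
    return "Capacity problem"        # rate held, but open count still rose

# Numbers in the spirit of the illustrative sweep below: 6 closed against
# 82 open at start + 11 newly opened, while the prior rate was 0.18.
rate = closure_rate(6, 82, 11)              # ~0.065
verdict = trend_verdict(rate, 0.18, 87, 82)  # "Operational drift"
```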
A weekly defect sweep produces this view (numbers are illustrative):
| Metric | Value |
| --- | --- |
| Open defects | 87 |
| New this period | 11 |
| Closed this period | 6 |
| Closure rate | 0.063 (down from 0.18 last period) |
| CRITICAL open | 4 |
| CRITICAL in Chronic tier (>180d) | 2 |
Equipment clusters:
| Family | Open count | Cluster signal |
| --- | --- | --- |
| Cargo system | 6 | Systemic |
| Engine | 3 | Systemic (just crossed threshold) |
| Safety | 2 | Isolated |
| Accommodation | 5 | Systemic |
| Other | 71 | — |
Stuck defects:
| Blocker | Count |
| --- | --- |
| Awaiting parts | 14 |
| Awaiting port | 8 |
| Awaiting vendor | 5 |
| Awaiting class approval | 2 |
Verdict: HIGH. Two CRITICAL findings open more than 6 months, falling closure rate, three systemic clusters. The pipeline:
- Tags `escalation_required: true`, priority CRITICAL.
- Routes to the TSI with a one-page brief: top 10 ranked defects, three systemic clusters, and the unblock list of 14 parts orders to chase.
- Generates a recommendation set focused on the systemic patterns, not the individual items — closing the cargo-system cluster is more valuable than closing five random defects.
A surprising amount of defect-management work is data hygiene — duplicate findings filed against multiple sources, near-misses miscategorised as ADI, equipment family left blank. The pipeline surfaces these issues but does not silently fix them; a Technical Superintendent should know when their fleet’s defect taxonomy is drifting.
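A sketch of what “surfacing, not fixing” might look like for two of the hygiene problems named above. The record fields (`id`, `description`, `family`) and the duplicate heuristic are assumptions for illustration, not the pipeline’s actual checks.

```python
def hygiene_flags(records):
    """Surface (but do not fix) common defect-taxonomy problems."""
    flags = []
    seen = {}  # (imo, normalised description) -> first source it was filed under
    for r in records:
        if not r.get("family"):
            flags.append(("MISSING_FAMILY", r["id"]))
        # Same vessel + same description filed under two sources:
        # likely a duplicate finding (heuristic, assumed).
        key = (r["imo"], r["description"].strip().lower())
        if key in seen and seen[key] != r["source"]:
            flags.append(("POSSIBLE_DUPLICATE", r["id"]))
        seen.setdefault(key, r["source"])
    return flags
```

The output is a worklist for a human reviewer, which keeps the Technical Superintendent aware that the taxonomy is drifting rather than papering over it.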
The cleanest signal of fleet-wide defect-management health is the closure-rate trend. A vessel can carry a lot of open defects and still be healthy if the closure rate is steady and clusters are isolated; a vessel can carry few open defects and be unhealthy if the closure rate is collapsing.