The Coordination Fingerprint
Three capture windows, one recurrent crew, and one very specific amplification signature. A post-mortem on what the latest zerosixty run actually shows.
Run v3 · post-mortem
Three capture windows on the same list, five days apart, give us the first defensible answer to v1's unanswered question: does the pattern persist?
- capture window · 2024-10-26 → 2026-04-22
- batch · batch_exporter-1776856502784__members-1776856522346
- by · Analysis Team
When v1 went out in April, it described a dense overlap graph, a handful of central handles, and a nagging feeling that one account — @Briassulis, the top amplified target — was receiving a suspicious amount of attention from the same small crew. Two more captures later we can say three things with numbers, not intuition.
From one snapshot to a time series
v1 was a single 13-hour window. v2 widened the capture to a rolling timeline, and v3 added another five days on top. Critically, v2 was the run where the intelligence layer first shipped — role typology, cohort detection, amplified-target ranking, cascade propagation, recurrent first-retweeter profiles. That means v3 is the first revision we can write where cross-window persistence is a question the pipeline can answer for itself.
- v1 · 2026-04-15 · 197 active accounts. Single 13-hour window. Overlap graph + ML baseline only. Identified a dense core around six handles; flagged five analytic gaps.
- v2 · 2026-04-17 · 307 active accounts. Intelligence layer lands: roles, cohorts, amplified-target ranking, cascade propagation, first-retweeter profiles. First cohort detected (54 members).
- v3 · 2026-04-22 · 367 active accounts. Fresh batch on the same list. Every signal recomputed from scratch. Cohort persistence and amplification fingerprint both testable for the first time.
- Members · 653 (+49 vs v1)
- Tweets · 7,092 (+4,893 vs v1)
- Retweets · 6,002 (plus 1,090 originals + quotes)
- Largest component · 135 accounts (1,103 edges · 4,768 shared RTs)
The rest of the sample is a handful of stray pairs. One dominant component, same shape as v1, now nearly double the size because the window is longer.
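The component census above is reproducible with a few lines of union-find over the shared-retweet edge list. The edges below are toy data, not run output:

```python
from collections import defaultdict

def components(edges):
    """Union-find over an undirected edge list; returns component sizes,
    largest first."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b in edges:
        union(a, b)

    sizes = defaultdict(int)
    for node in parent:
        sizes[find(node)] += 1
    return sorted(sizes.values(), reverse=True)

# Toy graph: one four-node component and one stray pair.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("x", "y")]
print(components(edges))  # → [4, 2]
```

On the run's data this would return one 135-account component followed by the stray pairs.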
The crew v1 pointed at is still the crew
v1 described the core as a six-handle spine:
nickthegreek5, Track11John, giannMak, 46spiros, AgelisDimitris,
angelikitsakani. v3's weighted-degree ranking reproduces that list and
adds chrysib11 and Briassulis at the top. The unsupervised ML anomaly
ranker, which re-runs from the raw feature matrix each time, independently
places nickthegreek5 (#1), 46spiros (#2), and Track11John (#3) as the top three most unusual behavioural profiles. Two different lenses, same answer.
The cohort detector is the sharper tool. It runs label-propagation on a time-tight co-retweet subgraph — accounts only connect if they co-retweeted the same originals with at least two of those within 60 minutes of each other — and then recursively splits anything oversized. Between v2 and v3, that detector's first cohort evolved like this:
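The detector's two stages can be sketched under the thresholds quoted above (60-minute window, at least two time-tight co-retweets). The recursive oversize split is omitted, and the deterministic tie-breaking in the label-propagation step is an assumption, not the pipeline's rule:

```python
from collections import defaultdict, Counter
from itertools import combinations

WINDOW_S = 60 * 60   # 60-minute co-retweet window (the run's default)
MIN_TIGHT = 2        # ≥2 time-tight co-retweets required for an edge

def tight_edges(retweets):
    """retweets: list of (account, original_id, unix_ts).
    Two accounts connect if they co-retweeted the same originals with at
    least MIN_TIGHT of those co-retweets within WINDOW_S of each other."""
    by_original = defaultdict(list)
    for acct, orig, ts in retweets:
        by_original[orig].append((acct, ts))
    tight = Counter()
    for hits in by_original.values():
        for (a, ta), (b, tb) in combinations(hits, 2):
            if a != b and abs(ta - tb) <= WINDOW_S:
                tight[frozenset((a, b))] += 1
    return {pair for pair, n in tight.items() if n >= MIN_TIGHT}

def label_propagation(edges, rounds=10):
    """Plain asynchronous label propagation: each node adopts the most
    common label among its neighbours, ties broken by smallest label."""
    neigh = defaultdict(set)
    for pair in edges:
        a, b = tuple(sorted(pair))
        neigh[a].add(b)
        neigh[b].add(a)
    labels = {n: n for n in neigh}
    for _ in range(rounds):
        for n in sorted(neigh):  # fixed order keeps the sketch deterministic
            counts = Counter(labels[m] for m in neigh[n])
            best = max(counts.values())
            labels[n] = min(l for l, c in counts.items() if c == best)
    return labels
```

Accounts sharing the same final label form one cohort; anything oversized would then be split recursively.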
- v2 · 2026-04-17 · 54 members · top target @Technomagos
- v3 · 2026-04-22 · 64 members · top target @ellada24 · 10 newly surfaced
54 out of 54 kept. Zero dropped. Ten added. That is not a shape that rearranges run-to-run — it is a recurrent crew.
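The persistence arithmetic is plain set algebra. The member names below are stand-ins, not the cohort's actual handles:

```python
def cohort_diff(prev, curr):
    """Membership diff between two captures of the same cohort."""
    prev, curr = set(prev), set(curr)
    return {
        "kept": len(prev & curr),
        "dropped": len(prev - curr),
        "added": len(curr - prev),
        "jaccard": len(prev & curr) / len(prev | curr),
    }

v2_members = {f"acct_{i}" for i in range(54)}        # stand-ins for the 54
v3_members = v2_members | {f"new_{i}" for i in range(10)}  # ten newly surfaced
print(cohort_diff(v2_members, v3_members))
# → {'kept': 54, 'dropped': 0, 'added': 10, 'jaccard': 0.84375}
```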
The top target of the cohort shifted from @Technomagos in v2 to
@ellada24 in v3, which tracks the wider window pulling in more
ellada24-sourced cascades; @Technomagos is still among the top targets
the cohort co-amplifies, just no longer the most frequent. The mix of who
the crew boosts changes week to week; the crew itself doesn't.
What this isn't
This is a coordination map, not a proof-of-automation or proof-of-payment report. The cohort detector's thresholds are sensible defaults, not ground truth. Re-running with stricter settings would produce a smaller, denser crew. Everything below should be read as a reviewable fingerprint, not a verdict.
One target, one crew, one speed
The hardest signal in the entire run is inbound pressure on
@Briassulis. Not because of the volume — 84 retweets is not large on
its own — but because of the shape.
Amplification fingerprint
@Briassulis
Track11John alone carries 66 of 84 inbound retweets — 78.6 percent. The remaining seven amplifiers share the other 18. Eight people account for all of it, and the same eight show up across this target's top cascades (repeat-crew overlap 0.70). Eighty of the 84 inbound retweets arrive within fifteen minutes of the original.
- Amplification HHI, v3 · 0.636 · concentrated (one crew carries most of it). Herfindahl-Hirschman index across amplifiers; 1.0 = one account does it all.
- Amplification HHI, v2 (5 days earlier) · 0.680 · same metric. The dial barely moved.
| Metric | v1 | v2 | v3 |
| --- | --- | --- | --- |
| Total inbound RTs | — (pre-ranking) | 81 | 84 |
| Unique amplifiers | — | 7 | 8 |
| Amplification HHI | — | 0.680 | 0.636 |
| Top amplifier | Track11John (raw edge count 66) | Track11John | Track11John |
| Top amplifier share | — | 81.5% | 78.6% |
| Repeat-crew overlap (Jaccard across the target's top cascades) | — | 0.700 | 0.700 |
| Within-15m inbound | — | 77 / 81 | 80 / 84 |
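The HHI row is reproducible from raw per-amplifier counts. The exact split behind 0.636 isn't published in this run, so the counts below are a hypothetical 66-of-84 arrangement:

```python
def hhi(counts):
    """Herfindahl-Hirschman index over amplifier retweet counts.
    1.0 = a single account carries everything."""
    total = sum(counts)
    return sum((c / total) ** 2 for c in counts)

# Hypothetical split: Track11John at 66 of 84, seven others sharing 18.
counts = [66, 4, 3, 3, 2, 2, 2, 2]
print(round(hhi(counts), 3))   # → 0.624
print(round(66 / 84, 3))       # top-amplifier share → 0.786
```

Any split with one account at 66 of 84 lands in the 0.62–0.64 band, which is the point: the index is dominated by the top amplifier's squared share.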
Why this is the signal
HHI 0.64 plus an 80% top-amplifier share plus a 0.70 repeat-crew overlap is the shape a reasonable analyst would expect from organised amplification rather than organic reach. Compare with @enikos_gr, whose 20 inbound retweets come from a single amplifier (@GiotaPapapetrou) at 100% — that's a one-person fan, not a crew. @Briassulis is a crew that behaves like a one-person fan, which is the unusual part.
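One way to compute the repeat-crew overlap, assuming it is the mean pairwise Jaccard of amplifier sets across the target's top cascades (the run's exact aggregation rule isn't stated). Handles are stand-ins:

```python
from itertools import combinations

def repeat_crew_overlap(cascades):
    """Mean pairwise Jaccard similarity of the amplifier sets behind a
    target's top cascades; 1.0 = the exact same crew every time."""
    pairs = list(combinations(cascades, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

# Three toy cascades with heavily overlapping amplifier sets.
c1 = {"t11", "nick", "sp46", "giann", "ag"}
c2 = {"t11", "nick", "sp46", "giann", "chry"}
c3 = {"t11", "nick", "sp46", "ag", "chry"}
print(round(repeat_crew_overlap([c1, c2, c3]), 3))  # → 0.667
```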
It's mostly not a burst
v1 carried an implicit assumption: coordinated activity should look like a synchronised burst. The v3 cascade propagation data refutes that for most of this run. The same cohort touches three very different cascade shapes:
- @ellada24 · sustained re-retweet · t→k10 2.3h · kurtosis 1.20 · same client 75%
- @kyranakis · sustained re-retweet · t→k10 2.8h · kurtosis 0.72 · same client 64%
- @Technomagos · sharp burst · t→k10 10m · kurtosis 4.70 · same client 53%
Two of the three are slow. @ellada24 takes 2.3 hours to reach ten
retweets, and 75% of those retweets come from the same client build —
a homogeneous device fingerprint on a sustained re-retweet. Only the
@Technomagos cascade fits the stereotype of a sharp synchronised spike
(kurtosis 4.7), and that one has a mixed client stack. The takeaway for
analysts is a correction of reflex: "coordinated" in this data more
often looks like a small crew re-retweeting the same targets over hours
from a narrow device stack, not twenty identical bot accounts firing in
sixty seconds.
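Both shape metrics are straightforward to compute from retweet timestamps. Whether the pipeline takes kurtosis over delays from the original or over binned arrival counts is not stated; this sketch uses raw delays with Fisher (excess) kurtosis:

```python
def cascade_shape(rt_ts, origin_ts, k=10):
    """Time from the original to the k-th retweet, plus excess kurtosis
    of the delay distribution (heavy positive kurtosis ≈ sharp burst)."""
    delays = sorted(t - origin_ts for t in rt_ts)
    t_to_k = delays[k - 1] if len(delays) >= k else None
    n = len(delays)
    mean = sum(delays) / n
    m2 = sum((d - mean) ** 2 for d in delays) / n
    m4 = sum((d - mean) ** 4 for d in delays) / n
    kurt = m4 / m2 ** 2 - 3  # Fisher (excess) kurtosis
    return t_to_k, kurt

# A tight burst: ten retweets inside two minutes, one straggler an hour on.
burst = [5, 10, 15, 20, 30, 40, 60, 80, 100, 120, 3600]
t10, kurt = cascade_shape(burst, origin_ts=0)
print(t10)        # seconds to the 10th retweet
print(kurt > 0)   # heavy tail → positive excess kurtosis
```

A sustained cascade spread evenly over hours gives a flat, negative-kurtosis delay distribution; the sign and magnitude of the excess kurtosis is what separates the @Technomagos spike from the @ellada24 slow drip.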
The shape of the 367 active accounts
The deterministic role classifier ran once per capture. In v3 it labelled every active account as one of seven roles. The distribution:
- Unclassified · 87 · 23.7%
- Mixed behaviour · 83 · 22.6%
- Amplifier suspects · 72 · 19.6%
- Retail users · 56 · 15.3%
- Media / business · 35 · 9.5%
- Source hubs · 33 · 9.0%
- Journalist / public figure · 1 · 0.3%
Two numbers matter here. The 72 amplifier_suspect accounts are the run's
primary review bucket — accounts with retweet-ratio ≥ 0.85, near-zero
originals, and typically elevated first-mover counts. The 87 unknown
accounts are a coverage gap, not a finding; they are mostly accounts with
empty descriptions that the regex classifier can't place. The single
journalist_public_figure label means the patterns are strict; if that
bucket should include Greek opinion columnists, the classifier's
vocabulary needs another pass.
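In the spirit of that deterministic classifier — the retweet-ratio ≥ 0.85 rule is quoted above, but the other thresholds and field names here are illustrative guesses, not the pipeline's configuration:

```python
def classify(acct):
    """Deterministic role rules, illustrative thresholds.
    acct: dict with 'retweets', 'originals', 'description'."""
    total = acct["retweets"] + acct["originals"]
    rt_ratio = acct["retweets"] / total if total else 0.0
    if rt_ratio >= 0.85 and acct["originals"] <= 2:
        return "amplifier_suspect"
    if acct["originals"] >= 50 and rt_ratio <= 0.2:
        return "source_hub"
    if not acct["description"]:
        return "unknown"  # the 23.7% coverage gap lives here
    return "mixed_behavior"

print(classify({"retweets": 340, "originals": 1, "description": ""}))
# → amplifier_suspect
```

Note the ordering: behavioural rules fire before the description check, so an empty-bio account with an extreme retweet ratio still lands in the review bucket rather than in unknown.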
The recurrent first-retweeter profile — which of these accounts is first on captured cascades most often — produces its own leaderboard:
Top recurrent first-retweeters · v3
How many cascades each account was first on, with the target-HHI of that first-retweeter activity.
- @GiotaPapapetrou · 179 firsts · retail_user · target-HHI 0.03 · top target @enikos_gr
- @Track11John · 143 firsts · mixed_behavior · target-HHI 0.20 · top target @Briassulis
- @46spiros · 109 firsts · amplifier_suspect · target-HHI 0.02 · top target @AnAthenianToLDN
- @aggelikiME25 · 106 firsts · mixed_behavior · target-HHI 0.02
- @paliakaravana59 · 105 firsts · amplifier_suspect · target-HHI 0.02
- @vythos70 · 98 firsts · mixed_behavior · target-HHI 0.02
Compare @GiotaPapapetrou, first on the most cascades (179) but spread
thin across targets (target-HHI 0.03), with @Track11John, who appears
first on only slightly fewer cascades (143) but with a target-HHI nearly
seven times higher — 0.20 — and leads specifically on @Briassulis. Two
very different lead-lag fingerprints from two accounts sitting in the
same cohort.
Scoring v1's gap list, honestly
v1 enumerated five missing layers. v3 has closed some and not others. Calling it:
| Layer | v1 | v2 | v3 |
| --- | --- | --- | --- |
| Lead-lag structure | missing | first-retweeter profiles per account | partial · no lead-lag matrix yet |
| Cross-day stability | missing | one capture | closed · 54/54 persistence |
| URL / domain / media reuse | missing | missing | missing · biggest outstanding layer |
| Component profiles over time | missing | per-capture only | partial · no cross-capture diff UI |
| Manual analyst labels | missing | schema only | zero applied |
The new layer surfaces gaps v1 couldn't even name: quote-retweet framing (currently dropped), a burst-vs-sustained threshold rule for tagging cascades, a cohort-to-cohort coupling score, and the role classifier's coverage of Greek opinion/commentator vocabulary (23.7% "unknown" is high).
The URL layer is the next wall
No URL extraction, no domain aggregation, no media-identifier reuse. This is the single biggest outstanding data layer. Anything making a "campaign" claim in the narrative sense — coordinated linking to a specific property — would need that layer before it could be published.
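A minimal sketch of what that layer would compute first, assuming URLs are pulled from tweet text and aggregated by hostname (no media-identifier reuse, no registered-domain collapsing — both would come later):

```python
import re
from collections import Counter
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")

def domain_counts(tweets):
    """Extract URLs from tweet text and aggregate by hostname."""
    domains = Counter()
    for text in tweets:
        for url in URL_RE.findall(text):
            host = urlparse(url).netloc.lower()
            if host.startswith("www."):
                host = host[4:]
            if host:
                domains[host] += 1
    return domains

tweets = ["see https://example.com/a",
          "also https://www.example.com/b",
          "and https://news.example.org/c"]
print(domain_counts(tweets))  # example.com ×2 · news.example.org ×1
```

Joining this table against the cohort membership is what would turn "same crew, same speed" into "same crew, same speed, same property".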
The views that back this article
Every number here is joinable back to a view in the visualizer. If you want to check a claim rather than trust the prose:
- Briassulis target: amplifier breakdown · role mix · originated cascades
- Cohort #1: 64 members · 217 cascades · target distribution
- Graph lab: colour by role / cohort / anomaly · pin 2+ accounts for shortest path
Direct links:
- Top amplified target — @Briassulis
- Cohort #1 detail
- Network graph lab
- Account browser · filter role=amplifier_suspect
- @GiotaPapapetrou profile — the classifier/network mismatch
Analyst reading
The coordination field v1 described structurally is the same field v3 describes structurally — and it now carries a small, stable fingerprint: an eight-person crew concentrating almost eighty percent of inbound retweet pressure on @Briassulis, with seventy-percent repeat-crew overlap across captures five days apart.
More carefully: the spine of v1 is the spine of v3; the cohort detector
recovers it as its first cohort; the ML anomaly ranker puts three of its
members in positions #1–#3; the Briassulis amplification pattern is stable
across both captures that could measure it; the shape of that amplification
is slow and device-homogeneous, not bursty; and the single most productive
disagreement inside the system — @GiotaPapapetrou's profile/network
mismatch — is worth a manual review before anything else.
Reviewable, not conclusive
This run is a coordination map, not a proof-of-anything report. HHI 0.64 plus 80% top-amplifier share plus 0.70 repeat-crew is a strong fingerprint worth investigating. It is not a verdict. "First retweeter" and "within 15m" are defined within the captured sample only — they are not platform-wide firsts. The thresholds used by the cohort detector and role classifier are tuned for this dataset and would want re-tuning on a different list.