
Analysis Article

The Coordination Fingerprint

Three capture windows, one recurrent crew, and one very specific amplification signature. A post-mortem on what the latest zerosixty run actually shows.

Apr 22, 2026, 16:00 UTC · run batch_exporter-1776856502784__members-1776856522346 · post-mortem · cohorts · amplification · cross-window

Run v3 · post-mortem

The coordination fingerprint

Three capture windows on the same list, spread across a single week, give us the first defensible answer to v1's unanswered question: does the pattern persist?

capture window ·
2024-10-26 → 2026-04-22
batch ·
batch_exporter-1776856502784__members-1776856522346
by ·
Analysis Team

When v1 went out in April, it described a dense overlap graph, a handful of central handles, and a nagging feeling that one account, @Briassulis, was receiving a suspicious amount of attention from the same small crew. Two more captures later, we can say three things with numbers, not intuition.

Three windows

From one snapshot to a time series

v1 was a single 13-hour window. v2 widened the capture to a rolling timeline, and v3 added another five days on top. Critically, v2 was the run where the intelligence layer first shipped — role typology, cohort detection, amplified-target ranking, cascade propagation, recurrent first-retweeter profiles. That means v3 is the first revision we can write where cross-window persistence is a question the pipeline can answer for itself.

  1. v1

    2026-04-15

    Single 13-hour window. Overlap graph + ML baseline only. Identified a dense core around six handles; flagged five analytic gaps.

    197 active accounts

  2. v2

    2026-04-17

    Intelligence layer lands: roles, cohorts, amplified-target ranking, cascade propagation, first-retweeter profiles. First cohort detected (54 members).

    307 active accounts

  3. v3

    2026-04-22

    Fresh batch on the same list. Every signal recomputed from scratch. Cohort persistence and amplification fingerprint both testable for the first time.

    367 active accounts

Members

653

+49 vs v1

Tweets

7,092

+4,893 vs v1

Retweets

6,002

1,090 originals + quotes

Largest component

135

1,103 edges · 4,768 shared RTs

The rest of the sample is a handful of stray pairs. One dominant component, same shape as v1, now nearly double the size because the window is longer.

The coordination spine

The crew v1 pointed at is still the crew

v1 described the core as a six-handle spine: nickthegreek5, Track11John, giannMak, 46spiros, AgelisDimitris, angelikitsakani. v3's weighted-degree ranking reproduces that list and adds chrysib11 and Briassulis at the top. The unsupervised ML anomaly ranker, which re-runs from the raw feature matrix each time, independently places nickthegreek5 (#1), 46spiros (#2), and Track11John (#3) as the three most unusual behavioural profiles. Two different lenses, same answer.

The cohort detector is the sharper tool. It runs label-propagation on a time-tight co-retweet subgraph — accounts only connect if they co-retweeted the same originals with at least two of those within 60 minutes of each other — and then recursively splits anything oversized. Between v2 and v3, that detector's first cohort evolved like this:
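The detector's internals aren't published in this article, but the two-stage rule it describes can be sketched in pure Python. The record shape `(account, original_id, unix_ts)` is an assumption, and the recursive splitting of oversized cohorts is omitted:

```python
from collections import defaultdict
from itertools import combinations

def cohort_edges(retweets, window_s=3600, min_shared=2):
    """Connect two accounts only if they co-retweeted the same originals
    with at least `min_shared` of those co-retweets landing within
    `window_s` seconds of each other.

    retweets: iterable of (account, original_id, unix_ts) tuples.
    """
    by_original = defaultdict(list)
    for account, original, ts in retweets:
        by_original[original].append((account, ts))

    tight = defaultdict(int)  # (a, b) -> count of time-tight co-retweets
    for hits in by_original.values():
        for (a, ta), (b, tb) in combinations(hits, 2):
            if a != b and abs(ta - tb) <= window_s:
                tight[tuple(sorted((a, b)))] += 1

    return {pair for pair, n in tight.items() if n >= min_shared}

def label_propagation(edges, rounds=10):
    """Synchronous label propagation: each node repeatedly adopts the
    most common label among its neighbours until labels stabilise."""
    neigh = defaultdict(set)
    for a, b in edges:
        neigh[a].add(b)
        neigh[b].add(a)
    labels = {n: n for n in neigh}
    for _ in range(rounds):
        changed = False
        for node in sorted(neigh):
            counts = defaultdict(int)
            for m in neigh[node]:
                counts[labels[m]] += 1
            best = max(sorted(counts), key=lambda lab: counts[lab])
            if best != labels[node]:
                labels[node] = best
                changed = True
        if not changed:
            break
    cohorts = defaultdict(set)
    for node, lab in labels.items():
        cohorts[lab].add(node)
    return list(cohorts.values())
```

Run on a toy sample with two disjoint crews, `cohort_edges` yields edges only inside each crew and `label_propagation` returns the two cohorts.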

Cohort persistence

v2 · 2026-04-17

54 members

target · @Technomagos

54 kept
+ 10 added
0 dropped

v3 · 2026-04-22

64 members

target · @ellada24

Newly surfaced

@Agie2a · @FrankiKostas · @Mirsini1946041 · @VforVolemenos · @communakios · @doubamari · @evangelia_re · @fanis_rc · @fifi_apostolou · @jparthen

54 out of 54 kept. Zero dropped. Ten added. That is not a shape that rearranges run-to-run — it is a recurrent crew.

v3 cohort detector · v2 → v3

The top target of the cohort shifted from @Technomagos in v2 to @ellada24 in v3, which tracks the wider window pulling in more ellada24-sourced cascades; @Technomagos is still among the top targets the cohort co-amplifies, just no longer the most frequent. The mix of who the crew boosts changes week to week; the crew itself doesn't.

@46spiros · @57lista · @ABCHeckles · @Aaagiiisss · @AgelisDimitris · @Agie2a · @Ah_riman · @ArisBJJ · @BatorGr · @Briassulis · @Catps66 · @DonConsclavios · @Fotinh5 · @FrankiKostas · @GiotaPapapetrou · @JSweetGR · @K_Sav215 · @LGkvas · @Loxagos_Mark · @MKioulafa · @MariaDecokeke · @MasenkaMm230158 · @MenoEdo · @Mirsini1946041 · @Myrtilo01 · @Nogate_T · @NonameTZ · @RockRock222555 · @SagamoreMount · @SenzakiKamo · @ThanosTzimeros · @TheMadoula · @Track11John · @TsigkaEfi · @VforVolemenos · @Xeniaxe19580370 · @anasiropiastos · @angelikitsakani · @angie22gr · @chrysib11 · @communakios · @doubamari · @economi94461498 · @evangelia_re · @fanis_rc · @fifi_apostolou · @g_m_theo · @galex1908 · @giannMak · @greek_paris_13 · @hatz_patty · @johnblandos · @jparthen · @klikr · @lafillemalgard1 · @lidakis_manos · @maus_miny · @mpantazo · @nickthegreek5 · @paliakaravana59 · @ptableart · @sinakatsarou · @vythos70 · @zneraida

What this isn't

This is a coordination map, not a proof-of-automation or proof-of-payment report. The cohort detector's thresholds are sensible defaults, not ground truth. Re-running with stricter settings would produce a smaller, denser crew. Everything below should be read as a reviewable fingerprint, not a verdict.

The fingerprint

One target, one crew, one speed

The hardest signal in the entire run is inbound pressure on @Briassulis. Not because of the volume — 84 retweets is not large on its own — but because of the shape.

Amplification fingerprint

@Briassulis

inbound 84 · amplifiers 8 · HHI 0.636 · repeat-crew 70% · ≤15m 80

  • @Track11John · 66 · 78.6% · mixed_behavior
  • @nickthegreek5 · 5 · 6.0%
  • @46spiros · 3 · 3.6%
  • @AgelisDimitris · 3 · 3.6%
  • @hatz_patty · 3 · 3.6%
  • @ArisBJJ · 2 · 2.4%
  • @Rena_Rethymno · 1 · 1.2%
  • @giannMak · 1 · 1.2%

Track11John alone carries 66 of 84 inbound retweets — 78.6 percent. The remaining seven amplifiers share the other 18. Eight people account for all of it, and the same eight show up across this target's top cascades (repeat-crew overlap 0.70). Eighty of the 84 inbound retweets arrive within fifteen minutes of the original.

HHI · v3

0.636

concentrated (one crew carries most of it)

Herfindahl-Hirschman index across amplifiers. 1.0 = one account does it all.

HHI · v2

0.680

concentrated (one crew carries most of it)

Same metric, 5 days earlier. The dial barely moved.
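The HHI itself is one line of arithmetic: squared amplifier shares, summed. A sketch over the v3 @Briassulis counts from the breakdown above (note the published 0.636 may be computed over a slightly different amplifier set or weighting; the raw counts land in the same concentrated band):

```python
def amplification_hhi(counts):
    """Herfindahl-Hirschman index over per-amplifier retweet counts.
    1.0 means one amplifier carries everything; 1/n means an even split."""
    total = sum(counts)
    return sum((c / total) ** 2 for c in counts)

# v3 @Briassulis inbound breakdown: 66, 5, 3, 3, 3, 2, 1, 1 (sums to 84)
briassulis_hhi = amplification_hhi([66, 5, 3, 3, 3, 2, 1, 1])
```

With Track11John's 66-of-84 share dominating the sum, the index sits deep in the concentrated band; an even eight-way split would score 1/8 = 0.125.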

| metric | v1 · 2026-04-15 | v2 · 2026-04-17 | v3 · 2026-04-22 |
| --- | --- | --- | --- |
| Total inbound RTs | pre-ranking | 81 | 84 |
| Unique amplifiers | — | 7 | 8 |
| Amplification HHI | — | 0.680 | 0.636 |
| Top amplifier | Track11John (raw edge count: 66) | Track11John | Track11John |
| Top amplifier share | — | 81.5% | 78.6% |
| Repeat-crew overlap | — | 0.700 | 0.700 |
| Within-15m inbound | — | 77 / 81 | 80 / 84 |

Repeat-crew overlap is the Jaccard similarity across the target's top cascades, and it is locked across captures.

Why this is the signal

HHI 0.64 plus an 80% top-amplifier share plus a 0.70 repeat-crew overlap is the shape a reasonable analyst would expect from organised amplification rather than organic reach. Compare @enikos_gr, whose 20 inbound retweets come from a single amplifier (@GiotaPapapetrou) at 100%: that is a one-person fan, not a crew. @Briassulis is a crew that behaves like a one-person fan, which is the unusual part.
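The repeat-crew figure can be read as the mean pairwise Jaccard similarity between the amplifier sets of a target's top cascades. The pipeline's exact definition isn't spelled out in the article, so this is one plausible reconstruction:

```python
from itertools import combinations

def repeat_crew_overlap(cascades):
    """Mean pairwise Jaccard similarity between the amplifier sets of a
    target's top cascades. 1.0 means the identical crew shows up every time.

    cascades: iterable of sets of amplifier handles, one set per cascade.
    """
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    pairs = list(combinations(cascades, 2))
    if not pairs:
        return 0.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

Under this reading, a 0.70 score means any two of the target's top cascades share roughly two thirds of their amplifiers.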

Shape of amplification

It's mostly not a burst

v1 carried an implicit assumption: coordinated activity should look like a synchronised burst. The v3 cascade propagation data refutes that for most of this run. The same cohort touches three very different cascade shapes:

  • @ellada24 · sustained re-retweet · t→k10 2.3h · kurtosis 1.20 · same client 75%
  • @kyranakis · sustained re-retweet · t→k10 2.8h · kurtosis 0.72 · same client 64%
  • @Technomagos · sharp burst · t→k10 10m · kurtosis 4.70 · same client 53%

Two of the three are slow. @ellada24 takes 2.3 hours to reach ten retweets, and 75% of those retweets come from the same client build — a homogeneous device fingerprint on a sustained re-retweet. Only the @Technomagos cascade fits the stereotype of a sharp synchronised spike (kurtosis 4.7), and that one has a mixed client stack. The takeaway for analysts is a correction of reflex: "coordinated" in this data more often looks like a small crew re-retweeting the same targets over hours from a narrow device stack, not twenty identical bot accounts firing in sixty seconds.
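A burst-vs-sustained tag could be derived from retweet arrival times alone. The article notes no threshold rule has shipped yet, so the cutoffs below (t→k10 within 15 minutes, excess kurtosis above 3) are illustrative only, and whether the pipeline's kurtosis figures are raw or excess is not stated:

```python
import statistics

def cascade_shape(arrival_s, k=10):
    """Classify a cascade from retweet arrival times (seconds after the
    original). Returns (time_to_k, excess_kurtosis, label). Thresholds
    are illustrative placeholders, not the pipeline's rule."""
    ts = sorted(arrival_s)
    t_to_k = ts[k - 1] if len(ts) >= k else None
    mu = statistics.fmean(ts)
    sd = statistics.pstdev(ts)
    # Population excess kurtosis: mean of z^4 minus 3 (0 for a normal).
    kurt = 0.0 if sd == 0 else sum(((t - mu) / sd) ** 4 for t in ts) / len(ts) - 3.0
    burst = t_to_k is not None and t_to_k <= 900 and kurt > 3.0
    return t_to_k, kurt, "sharp_burst" if burst else "sustained"
```

A tight cluster of arrivals with a straggling tail scores high on kurtosis and fast on t→k10; evenly spaced arrivals score negative kurtosis and tag as sustained.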

Who's in the room

The shape of the 367 active accounts

The deterministic role classifier ran once per capture. In v3 it labelled every active account as one of seven roles. The distribution:

367 accounts
  • Unclassified · 87 · 23.7%
  • Mixed behaviour · 83 · 22.6%
  • Amplifier suspects · 72 · 19.6%
  • Retail users · 56 · 15.3%
  • Media / business · 35 · 9.5%
  • Source hubs · 33 · 9.0%
  • Journalist / public figure · 1 · 0.3%
Roles are deterministic: description-keyword matches, listed_count thresholds, follower ratios, retweet ratios. They are coarse review labels, not judgements.

Two numbers matter here. 72 amplifier_suspect is the run's primary review bucket — accounts with retweet-ratio ≥ 0.85, near-zero originals, and typically elevated first-mover counts. 87 unknown is a coverage gap, not a finding; it's mostly accounts with empty descriptions that the regex classifier can't place. The single journalist_public_figure label means the patterns are strict; if that bucket should include Greek opinion-columnists, the classifier's vocabulary needs another pass.
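As described, the classifier is a cascade of regex and threshold checks. A sketch with hypothetical vocabulary and cutoffs; only the retweet-ratio ≥ 0.85 rule for amplifier_suspect is stated in the text, and the other thresholds and keyword lists here are illustrative:

```python
import re

def classify_role(acct):
    """Deterministic role labels: description-keyword regexes first,
    then ratio thresholds. Only retweet_ratio >= 0.85 is documented in
    the article; every other cutoff is an illustrative placeholder."""
    desc = (acct.get("description") or "").lower()
    if re.search(r"journalist|reporter|columnist|editor", desc):
        return "journalist_public_figure"
    if re.search(r"news|media|official|shop|company", desc):
        return "media_business"
    rt_ratio = acct.get("retweet_ratio", 0.0)
    if rt_ratio >= 0.85 and acct.get("original_count", 0) <= 5:
        return "amplifier_suspect"
    if acct.get("follower_ratio", 0.0) > 10 and acct.get("listed_count", 0) > 50:
        return "source_hub"
    if not desc:
        return "unknown"          # empty bio, nothing else fired
    if 0.35 <= rt_ratio < 0.85:
        return "mixed_behavior"
    if rt_ratio < 0.35:
        return "retail_user"
    return "unknown"
```

The ordering matters: ratio rules fire even on empty descriptions, so the unknown bucket holds only accounts that no keyword or threshold could place, which matches the article's reading of the 87 unclassified accounts.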

The recurrent first-retweeter profile — which of these accounts is first on captured cascades most often — produces its own leaderboard:

Top recurrent first-retweeters · v3

How many cascades each account was first on, with the target-HHI of that first-retweeter activity.

  • @GiotaPapapetrou · retail_user · HHI 0.03 · target @enikos_gr · first on 179
  • @Track11John · mixed_behavior · HHI 0.20 · target @Briassulis · first on 143
  • @46spiros · amplifier_suspect · HHI 0.02 · target @AnAthenianToLDN · first on 109
  • @aggelikiME25 · mixed_behavior · HHI 0.02 · first on 106
  • @paliakaravana59 · amplifier_suspect · HHI 0.02 · first on 105
  • @vythos70 · mixed_behavior · HHI 0.02 · first on 98

Compare the top two rows. @GiotaPapapetrou is first on 179 cascades with a target-HHI of 0.03, spread thin across many targets. @Track11John appears first on only slightly fewer (143) but with a target-HHI ten times higher, 0.20, and leads specifically on @Briassulis. Two very different lead-lag fingerprints from two accounts sitting in the same cohort.
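Both leaderboard columns, the first-count and the target-HHI of that first-retweeter activity, fall out of one pass over the cascades. Record shapes here are hypothetical:

```python
from collections import Counter, defaultdict

def first_retweeter_profile(cascades):
    """cascades: iterable of (target, [(retweeter, ts), ...]) records.
    Returns {account: (first_count, target_hhi)} where target_hhi is the
    HHI of the targets an account was first on: near 0 means the account
    leads on many different targets, near 1 means it leads on one."""
    firsts = defaultdict(Counter)  # account -> Counter of targets it led
    for target, retweets in cascades:
        if not retweets:
            continue
        first = min(retweets, key=lambda rt: rt[1])[0]
        firsts[first][target] += 1
    profile = {}
    for acct, targets in firsts.items():
        total = sum(targets.values())
        hhi = sum((n / total) ** 2 for n in targets.values())
        profile[acct] = (total, hhi)
    return profile
```

An account that is always first on one target scores target-HHI 1.0; one that leads once each on many targets approaches 0, which is the @GiotaPapapetrou-vs-@Track11John contrast in miniature.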

Still unanswered

Scoring v1's gap list, honestly

v1 enumerated five missing layers. v3 has closed some and not others. Calling it:

| metric | v1 gap | v2 | v3 |
| --- | --- | --- | --- |
| Lead-lag structure | missing | first-retweeter profiles per account | partial (no lead-lag matrix yet) |
| Cross-day stability | missing | one capture | closed (54/54 persistence) |
| URL / domain / media reuse | missing | missing | missing (biggest outstanding layer) |
| Component profiles over time | missing | per-capture only | partial (no cross-capture diff UI) |
| Manual analyst labels | missing | schema only | zero applied |
The new layer surfaces gaps v1 couldn't even name: quote-retweet framing (currently dropped), a burst-vs-sustained threshold rule for tagging cascades, a cohort-to-cohort coupling score, and the role classifier's coverage of Greek opinion/commentator vocabulary (23.7% "unknown" is high).

The URL layer is the next wall

No URL extraction, no domain aggregation, no media-identifier reuse. This is the single biggest outstanding data layer. Anything making a "campaign" claim in the narrative sense — coordinated linking to a specific property — would need that layer before it could be published.
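A first pass at that layer could be as small as a domain counter over shared links. This sketch assumes tweet records carry a list of expanded URLs; the `expanded_urls` field name is hypothetical:

```python
from collections import Counter
from urllib.parse import urlsplit

def domain_reuse(tweets):
    """Aggregate shared-link domains across tweets. A campaign-style
    signal would be one domain dominating a cohort's outbound links.

    tweets: iterable of dicts with an optional "expanded_urls" list.
    """
    domains = Counter()
    for tweet in tweets:
        for url in tweet.get("expanded_urls", []):
            host = urlsplit(url).hostname or ""
            host = host.removeprefix("www.")  # fold www. into the bare domain
            if host:
                domains[host] += 1
    return domains
```

Counting hostnames rather than full URLs is deliberate: coordinated linking to one property survives per-article URL variation, which a raw-URL counter would miss.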

How to read this on the site

The views that back this article

Every number here is joinable back to a view in the visualizer. If you want to check a claim rather than trust the prose:

  • Briassulis target: amplifier breakdown · role mix · originated cascades
  • Cohort #1: 64 members · 217 cascades · target distribution
  • Graph lab: colour by role / cohort / anomaly · pin 2+ accounts for shortest path

Analyst reading

The coordination field v1 described structurally is the same field v3 describes, and it now carries a small, stable fingerprint: an eight-person crew concentrating almost eighty percent of inbound retweet pressure on @Briassulis, with seventy-percent repeat-crew overlap across captures five days apart.

More carefully: the spine of v1 is the spine of v3; the cohort detector recovers it as its first cohort; the ML anomaly ranker puts three of its members in positions #1–#3; the Briassulis amplification pattern is stable across both captures that could measure it; the shape of that amplification is slow and device-homogeneous, not bursty; and the single most productive disagreement inside the system — @GiotaPapapetrou's profile/network mismatch — is worth a manual review before anything else.

Reviewable, not conclusive

This run is a coordination map, not a proof-of-anything report. HHI 0.64 plus 80% top-amplifier share plus 0.70 repeat-crew is a strong fingerprint worth investigating. It is not a verdict. "First retweeter" and "within 15m" are defined within the captured sample only — they are not platform-wide firsts. The thresholds used by the cohort detector and role classifier are tuned for this dataset and would want re-tuning on a different list.