# SKYNET Infrastructure Architecture

**Purpose:** This guide explains the complete data pipeline from the Bitget exchange to scanner engines. It does NOT cover trading strategies — only the plumbing that gets market data into your engine.
## Architecture Overview

```mermaid
graph TB
    subgraph Bitget["Bitget Exchange"]
        REST["REST API<br/>(market data)"]
        WS["WebSocket<br/>(real-time candles)"]
    end
    subgraph Layer1["Layer 1: Candle Cache Service"]
        CCS["candle_cache_service.mjs<br/>4-shard WS subscriber"]
        CACHE["/data/candle_cache.json<br/>(consolidated, all grans)"]
    end
    subgraph Layer2["Layer 2: Bridge"]
        BRIDGE["candle_cache_bridge.mjs<br/>(splits by granularity)"]
    end
    subgraph Layer3["Layer 3: Per-Granularity Cache"]
        C5m["/tmp/candle_cache_5m.json"]
        C1H["/tmp/candle_cache_1H.json"]
        C4H["/tmp/candle_cache_4H.json"]
    end
    subgraph Layer4["Layer 4: Core Modules"]
        READER["core/candle_cache.mjs<br/>readCandleCache()"]
        SIGNALS["core/signals.mjs<br/>RSI, EMA, slope"]
        TELEGRAM["core/telegram_notifier.mjs"]
        LOGGER["core/signal_logger.mjs"]
        GUARD["core/pump_guard.mjs"]
        REGISTRY["core/position_registry.mjs"]
    end
    subgraph Layer5["Layer 5: Your Scanner"]
        ENGINE["Your Engine.mjs"]
    end
    REST -->|fallback| ENGINE
    WS --> CCS
    CCS --> CACHE
    CACHE --> BRIDGE
    BRIDGE --> C5m
    BRIDGE --> C1H
    BRIDGE --> C4H
    C5m --> READER
    C1H --> READER
    C4H --> READER
    READER --> ENGINE
    SIGNALS --> ENGINE
    ENGINE --> TELEGRAM
    ENGINE --> LOGGER
    ENGINE --> GUARD
    ENGINE --> REGISTRY
```
## Pipeline Stages

| Layer | File | Role | PM2 Name |
|---|---|---|---|
| 0 | Bitget API | External data source | — |
| 1 | `candle_cache_service.mjs` | WebSocket subscriber + cache writer | `candle-cache` |
| 2 | `scripts/candle_cache_bridge.mjs` | Splits consolidated → per-granularity | `candle-bridge` |
| 3 | `core/candle_cache.mjs` | Reader module for scanners | — (imported) |
| 4 | `core/*.mjs` | Shared utilities (RSI, Telegram, logging) | — (imported) |
| 5 | `scripts/your_engine.mjs` | Your scanner logic | `your-engine` |
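The Layer 3 reader's contract can be sketched as follows. This is an illustrative sketch only, not the real `core/candle_cache.mjs`: the cache file shape (an `updatedAt` timestamp plus a `candles` map) and the `maxAgeMs` parameter are assumptions made for the example.

```javascript
import fs from 'node:fs';

// Hypothetical reader for a per-granularity cache file such as
// /tmp/candle_cache_5m.json. Stale-but-valid: the cache is always returned,
// even past maxAgeMs, but flagged so the caller can decide whether to fall
// back to the REST API.
function readCandleCache(path, maxAgeMs = 120_000) {
  const raw = fs.readFileSync(path, 'utf8');
  const cache = JSON.parse(raw);
  const ageMs = Date.now() - cache.updatedAt;
  return { ...cache, ageMs, stale: ageMs > maxAgeMs };
}
```

The point of returning the data either way is the "stale-but-valid" principle below: a scanner would rather act on slightly old candles than burn REST quota and risk 429s.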
## Key Principles

- No auth needed for market data — Bitget public endpoints require no API key
- Stale-but-valid — scanners prefer a stale cache over REST 429 errors
- Shard to survive — 4 WS connections × 50 symbols = 200 coins tracked
- Log every signal — `logSignal()` writes to `all_signals.jsonl` for HyperOpt
- Never fight pumps — `pump_guard.mjs` blocks SHORT signals during active pumps
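The "log every signal" principle amounts to appending one JSON object per line. A minimal sketch of that contract follows; the field names and the default path are illustrative, not the real `core/signal_logger.mjs` schema.

```javascript
import fs from 'node:fs';

// Hypothetical logSignal(): append one JSON line per decision so HyperOpt
// can replay the full signal history later. A timestamp is stamped on write.
function logSignal(signal, path = '/tmp/all_signals.jsonl') {
  const line = JSON.stringify({ ts: Date.now(), ...signal });
  fs.appendFileSync(path, line + '\n');
  return line;
}
```

JSONL is the right shape here because appends are atomic enough for a single writer and each line parses independently, so a half-written tail never corrupts the history.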
## Quick Start for a New OpenClaw

1. Read Bitget API — understand rate limits
2. Read Candle Cache — deploy Layers 1–3
3. Read Shared Utilities — understand what's available
4. Read Scanner Template — copy-paste boilerplate
5. Read PM2 Ecosystem — add your engine to the process list
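Once those steps are done, a single Layer 5 scan pass boils down to: read candles, compute an indicator, emit hits. The sketch below assumes candle rows shaped `[ts, open, high, low, close, volume]` and uses a plain first-average RSI — both are assumptions for illustration, not the real `core/signals.mjs` implementation.

```javascript
// Simple RSI over the first (period + 1) closes: average gains vs. losses,
// RSI = 100 - 100 / (1 + RS). All-gain series returns the max value 100.
function rsi(closes, period = 14) {
  let gain = 0, loss = 0;
  for (let i = 1; i <= period; i++) {
    const d = closes[i] - closes[i - 1];
    if (d >= 0) gain += d; else loss -= d;
  }
  if (loss === 0) return 100;
  return 100 - 100 / (1 + gain / loss);
}

// One scan pass over a symbol → candle-rows map: flag oversold symbols.
function scanOnce(candlesBySymbol, threshold = 30) {
  const hits = [];
  for (const [symbol, rows] of Object.entries(candlesBySymbol)) {
    const closes = rows.map((r) => r[4]); // close is column 4 (assumed shape)
    if (closes.length < 15) continue;     // need period + 1 closes
    const value = rsi(closes);
    if (value < threshold) hits.push({ symbol, rsi: value });
  }
  return hits;
}
```

A real engine would wrap `scanOnce` in a timer loop and hand hits to the Telegram notifier, signal logger, and pump guard from Layer 4.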
## File Locations on Server

```
/app/trading_engine/
├── candle_cache_service.mjs      # Layer 1: WS subscriber
├── scripts/
│   ├── candle_cache_bridge.mjs   # Layer 2: Bridge
│   └── your_engine.mjs           # Layer 5: Your scanner
├── core/
│   ├── candle_cache.mjs          # Layer 3: Reader
│   ├── signals.mjs               # Layer 4: RSI/EMA
│   ├── telegram_notifier.mjs     # Layer 4: Alerts
│   ├── signal_logger.mjs         # Layer 4: HyperOpt
│   ├── pump_guard.mjs            # Layer 4: Protection
│   └── position_registry.mjs     # Layer 4: Position tracking
├── data/
│   └── candle_cache.json         # Consolidated cache (~8MB)
└── ecosystem.config.cjs          # PM2 process definitions
```
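The PM2 names from the Pipeline Stages table map onto `ecosystem.config.cjs` entries roughly like this. This is a hedged sketch of the shape only — the real file on the server is authoritative, and options like restart policy or env vars are omitted here.

```javascript
// Sketch of ecosystem.config.cjs: one app entry per long-running process.
// Layers 3-4 are imported modules, so they have no PM2 entry.
module.exports = {
  apps: [
    { name: 'candle-cache',  script: 'candle_cache_service.mjs',        cwd: '/app/trading_engine' },
    { name: 'candle-bridge', script: 'scripts/candle_cache_bridge.mjs', cwd: '/app/trading_engine' },
    { name: 'your-engine',   script: 'scripts/your_engine.mjs',         cwd: '/app/trading_engine' },
  ],
};
```

With this in place, `pm2 start ecosystem.config.cjs` brings up the whole pipeline, and adding a new scanner is one more `apps` entry.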
## Stats (Live on DO)
- Symbols tracked: 200 (top by 24h volume)
- Volume floor: $1M 24h
- WebSocket shards: 4
- Granularities: 5m, 15m, 30m, 1H, 4H, 1Dutc
- 5m history: 200 candles (~16h)
- Cache flush interval: 10s
- Bridge split interval: 30s
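The bridge's 30-second split step is conceptually a regrouping of the consolidated cache. The sketch below assumes the consolidated file is keyed symbol → granularity → candles and regroups it granularity-first; the actual shape of `candle_cache.json` may differ.

```javascript
// Hypothetical Layer 2 split: turn { symbol: { gran: candles } } into
// { gran: { symbol: candles } }, so each /tmp/candle_cache_<gran>.json
// holds exactly one timeframe and scanners read only what they need.
function splitByGranularity(consolidated) {
  const perGran = {};
  for (const [symbol, grans] of Object.entries(consolidated)) {
    for (const [gran, candles] of Object.entries(grans)) {
      (perGran[gran] ??= {})[symbol] = candles;
    }
  }
  return perGran;
}
```

Splitting keeps each scanner's read small and cheap: a 5m-only engine parses a fraction of the ~8MB consolidated file instead of all granularities for all 200 symbols.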