TOP SECRET // SI // NOFORN — EYES ONLY

⚠ SIGINT AGGREGATE: NOMINAL — CLUSTER TELEMETRY SYNC ACTIVE — 6 NODES ONLINE — PROMETHEUS FEED LOCKED — ████████ SSE HANDSHAKE CONFIRMED — GPU THERMALS WITHIN SPEC — OBSIDIAN VAULT CACHE TTL 30s — NEXT POLL: 2000ms
Document ID: DEEP-TOP-0451-Ω
Classification: TS/SCI SIGINT
Status: ● ACTIVE
Date: ██/02/2026

Cluster Nodes: 6 · Poll Interval: 2s · Subsystems: 12+ · Max Workers: 16 · Backend: 2,017 lines · Frontend: 11,553 lines

Project: DEEPDISH TOP (Terminal Operations Panel)

Type: Full-stack real-time infrastructure monitoring dashboard

Aesthetic: Cypherpunk / Covert operations command center

Stack: Python 3.12 · Flask · Gunicorn + gevent · Docker · Vanilla JS

Host: Privileged Docker container with host PID/network namespace

Port: 5000 (host network mode)

01 Architecture Overview

DEEPDISH TOP is deployed as a privileged Docker container with full host namespace access — PID, network, and hardware. The container runs a Gunicorn WSGI server with gevent async workers, enabling Server-Sent Events and parallel metric collection across all subsystems.
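Gevent workers make Server-Sent Events practical because each connected client can hold a long-lived generator without tying up a worker thread. As a minimal sketch (the route name, payload, and bounded loop are illustrative, not the project's actual code):

```python
import json
import time

from flask import Flask, Response

app = Flask(__name__)

def collect_metrics():
    # Stand-in for the real collectors; returns one telemetry snapshot.
    return {"cpu_percent": 12.5, "ts": time.time()}

@app.route("/api/stream")
def stream():
    # Server-Sent Events: each message is "data: <json>\n\n". Under gevent
    # workers this generator can yield indefinitely without blocking others.
    def gen():
        for _ in range(3):  # bounded here for illustration; production loops forever
            yield f"data: {json.dumps(collect_metrics())}\n\n"
    return Response(gen(), mimetype="text/event-stream")
```

A browser-side EventSource would consume this stream message by message; the same pattern scales to the full metric payload.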

HOST OS → DOCKER (privileged) → GUNICORN + GEVENT → FLASK APP → :5000

Container Deployment

  • Base Image — python:3.12-slim
  • WSGI Server — Gunicorn 21.2.0 with gevent async workers (3 workers, 120s timeout)
  • Network Mode — host (shares the host network stack directly)
  • PID Namespace — pid: host (sees all host processes via nsenter -t 1)
  • Privilege — privileged: true (full hardware access: GPU, sensors, Docker socket)
  • Restart Policy — unless-stopped

Volume Mounts

  • /var/run/docker.sock → /var/run/docker.sock (ro) — Docker API access
  • /proc → /host/proc (ro) — Host process info (CPU, mem, disk)
  • /sys → /host/sys (ro) — Hardware sensor data
  • /etc/os-release → /etc/os-release (ro) — Host OS identification
  • /run/dbus → /run/dbus (ro) — D-Bus socket for systemd
  • /var/run/avahi-daemon → /var/run/avahi-daemon (ro) — mDNS resolution for .local hosts
  • /mnt/deepdish → /mnt/deepdish (ro) — NAS mount (DeepDish storage array)

Python Dependencies

  • flask 3.0.0 — Web framework
  • psutil 5.9.8 — System metrics collection
  • docker 7.0.0 — Docker Engine API client
  • gunicorn 21.2.0 — Production WSGI server
  • gevent 24.2.1 — Async worker for SSE/streaming
  • requests 2.31.0 — HTTP client for integrations

02 Backend — app.py (2,017 lines)

mDNS IPv4 Resolution Cache

A custom DNS resolution layer for .local hostnames on the LAN. Required because mDNS can return IPv6 or resolve to the wrong interface on dual-homed machines. Gathers candidate IPs from avahi-resolve, socket.getaddrinfo, and configurable fallbacks. Results are cached with a 120-second TTL and auto-invalidated on connectivity failure.
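A hedged sketch of such a TTL cache follows. The resolver is injectable so avahi-resolve, socket.getaddrinfo, and configured fallbacks can sit behind one callable; the class and method names are assumptions, not the actual app.py implementation:

```python
import socket
import time

class MDNSCache:
    """TTL-cached resolver for .local hostnames (illustrative sketch)."""

    def __init__(self, resolver=None, ttl=120.0):
        self.ttl = ttl
        self._cache = {}  # hostname -> (ip, expiry)
        # Injectable so avahi-resolve / getaddrinfo / fallbacks can be layered
        # behind a single callable, and so the cache is testable offline.
        self._resolve = resolver or self._getaddrinfo_v4

    @staticmethod
    def _getaddrinfo_v4(host):
        # Request AF_INET explicitly, since mDNS may hand back IPv6 first.
        infos = socket.getaddrinfo(host, None, socket.AF_INET)
        return infos[0][4][0]

    def resolve(self, host):
        entry = self._cache.get(host)
        if entry and entry[1] > time.monotonic():
            return entry[0]
        ip = self._resolve(host)
        self._cache[host] = (ip, time.monotonic() + self.ttl)
        return ip

    def invalidate(self, host):
        # Called on connectivity failure so the next resolve() re-queries.
        self._cache.pop(host, None)
```

The invalidate-on-failure hook is what keeps a stale dual-homed answer from persisting for a full TTL window.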

System Metric Collectors

All collectors read from the host via nsenter -t 1 (PID namespace) or /proc/1/root (mount namespace). Every metric is gathered in parallel using ThreadPoolExecutor.

  • get_system_info() — Kernel, arch, distro, CPU model, core counts, total RAM; cached indefinitely
  • get_cpu_info() — Per-core %, total %, frequency, load averages via psutil
  • get_memory_info() — Total/used/available RAM + swap via psutil
  • get_disk_info() — 3-strategy fallback: /proc/1/mounts → nsenter df → psutil; filters real filesystems only
  • get_network_info() — Per-NIC I/O, interface details, top 80 active connections
  • get_processes() — Top 50 processes by CPU%: PID, name, user, RSS/VMS, threads, FDs
  • get_gpu_info() — NVIDIA nvidia-smi: temps, utilization, VRAM, power, fan speed
  • get_temperatures() — psutil.sensors_temperatures(): current/high/critical thresholds
  • get_docker_containers() — All containers, with live stats fetched in parallel per running container
  • get_systemd_services() — Top 50 services via systemctl, with D-Bus fallback
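The parallel gathering step might look like the following sketch, where each collector runs in its own worker and a failing collector degrades to an error entry instead of sinking the whole snapshot (function name and error shape are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def gather_parallel(collectors, max_workers=12, timeout=10):
    """Run every collector concurrently and merge results into one snapshot.

    `collectors` maps a metric key to a zero-argument callable.
    """
    snapshot = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {key: pool.submit(fn) for key, fn in collectors.items()}
        for key, fut in futures.items():
            try:
                snapshot[key] = fut.result(timeout=timeout)
            except Exception as exc:
                # One slow or broken subsystem (e.g. missing nvidia-smi)
                # must not take down the whole poll cycle.
                snapshot[key] = {"error": str(exc)}
    return snapshot
```

With this shape, total latency per poll is bounded by the slowest collector rather than the sum of all of them.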

Cluster Node Monitoring

Polls Prometheus node_exporter (port 9100) on 6 LAN nodes in parallel via ThreadPoolExecutor(max_workers=6). Metrics parsed from Prometheus text format include CPU, memory, disk, network I/O, load averages, and uptime.

  • MINI — mini.local:9100
  • CHEESE — cheese.local:9100
  • PEPPER — pepper.local:9100
  • ONION — onion.local:9100
  • GARLIC — garlic.local:9100
  • SUPREME — supreme.local:9100
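The Prometheus text exposition format the poller consumes is line-oriented and simple to parse. A minimal sketch (the real parser presumably keeps label sets; this one drops them and keeps only a whitelist of metric names):

```python
def parse_prom_text(text, wanted=("node_load1", "node_memory_MemTotal_bytes")):
    """Parse Prometheus text format into {metric_name: float_value}."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blanks and the "# HELP" / "# TYPE" comment lines.
        if not line or line.startswith("#"):
            continue
        # Last whitespace-separated token is the value; everything before
        # it is the metric name plus an optional {label="..."} set.
        name_labels, _, value = line.rpartition(" ")
        name = name_labels.split("{", 1)[0]
        if name in wanted:
            out[name] = float(value)
    return out
```

Per node, one HTTP GET to :9100/metrics plus this parse yields CPU, memory, disk, network, load, and uptime in a single round trip.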

External Integrations

Native Planet (Urbit Machine)

Host: nativeplanet.local:9100
Prometheus metrics — root disk, data disk, WiFi, kernel info

Cloud Urbit Server

Host: 142.93.██.██ (urbit.tylerirl.com)
Online check via ICMP ping — extracts RTT from output

Epic Gamer PC

WiFi: 192.168.0.███ / ETH: 192.168.1.███
Dual-interface ICMP probe — reports responding interface

Home Assistant

Host: homeassistant.local:8123
HA version, entity states, lights, phone tracking for 2 users — battery, steps, activity, connection type

Obsidian Vault

Host: localhost:27124 (Local REST API plugin, HTTPS)
Vault stats, full-text search, note viewer, 30s cache TTL

OCS (Ocular Control System)

Host: chef.local:5001
MJPEG stream proxy, eye expressions, vision status, face tracking

DEEPDISH Intake Portal

Accepts text, URL, or image submissions. URLs auto-fetch <title> and meta description. Images are stored as base64 data URIs. All notes are saved to DEEPDISH/intake/ in the Obsidian vault with YAML frontmatter.
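One plausible way to render an intake submission as a note with YAML frontmatter is sketched below; the field names and tags are assumptions, and the actual vault schema may differ:

```python
from datetime import datetime, timezone

def build_intake_note(kind, content, source_url=None):
    """Render an intake submission as Markdown with YAML frontmatter."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    lines = [
        "---",
        f"type: {kind}",          # text | url | image
        f"captured: {stamp}",
    ]
    if source_url:
        lines.append(f"source: {source_url}")
    lines += ["tags: [intake, deepdish]", "---", "", content]
    return "\n".join(lines)
```

The rendered string would then be PUT to the Obsidian Local REST API under a DEEPDISH/intake/ path.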

03 API Endpoints

Metric Routes

  • GET /api/metrics — Fast local-only metrics (CPU, mem, disk, net, procs, docker, systemd, GPU, temps) — 12 workers
  • GET /api/metrics/integrations — Slow external integrations (cluster, NP, urbit, gamer, OCS, HA, obsidian) — 8 workers
  • GET /api/metrics/all — Combined local + integrations (chat context builder) — 16 workers

Management Routes

  • POST /api/docker/<name>/action — Docker control: start, stop, restart, remove
  • GET /api/docker/<name>/logs — Container log tail (default 80 lines)
  • POST /api/docker/prune — Remove all stopped containers
  • POST /api/systemd/<name>/action — Systemd control: start, stop, restart, enable, disable
  • GET /api/systemd/<name>/logs — journald log tail (default 60 lines)

Obsidian & OCS Routes

  • GET /api/obsidian/note — Fetch note content (JSON + tags + frontmatter)
  • GET /api/obsidian/search — Full-text vault search (top 20 results)
  • POST /api/portal/intake — Submit text/URL/image to vault intake
  • GET /api/ocs/eye/stream — MJPEG stream proxy
  • POST /api/ocs/eye/expression — Set eye expression + duration

04 Frontend — Main Dashboard (11,553 lines)

Layout Architecture

Three-column responsive grid with independent scrolling. Left column houses CPU/MEM, network I/O, disk, GPU, temps, storage, interfaces, Native Planet, and system info. Center column contains the cluster view, Obsidian vault, processes, Docker, Radgotchi virtual pet, systemd services, net flow matrix, and connections. Right column holds the OCS eye stream, DEEPDISH chat, topology visualization, Home Assistant, and event log.

Left Column (260px)

CPU/MEM · NET I/O · DISK I/O · GPU · TEMPS · STORAGE · INTERFACES · NATIVE PLANET · SYSTEM INFO

Center Column (flex)

CLUSTER · OBSIDIAN VAULT · PROCESSES · DOCKER · RADTECH · SYSTEMD · NET FLOW MATRIX · CONNECTIONS

Right Column (320px)

OCS OCULAR · DEEPDISH CHAT · TOPOLOGY · HOME ASSISTANT · EVENT LOG

Dual-Theme System (Threat Red / Intel Blue)

Two complete color themes toggled by clicking the "DEEPDISH" header text. A radial ripple animation expands from the click origin while all section panels flash with staggered 50ms delays. The class switch is delayed 300ms for dramatic reveal. Theme persists via localStorage.

Red Theme (Default)

--primary: #ff0a1e
--secondary: #cc2233
--accent: #ff3344
--terminal-green: #39ff14

Blue Theme

--primary: #0af0ff
--secondary: #2288cc
--accent: #44aaff
--terminal-green: #00ffcc

Visual Effects & Background Layers

  • Grid background — Animated CSS grid (40px cells) with slow diagonal drift (20s cycle)
  • Wireframe globe — Canvas-rendered rotating sphere (9 lat, 12 lon lines, equator highlight, 2 outer rings). Perspective projection, continuous requestAnimationFrame
  • CRT scanlines — Repeating linear gradient overlay (2px lines)
  • Kanji watermarks — 神 (kami) and 経 (kei) as fixed-position decorative text

Section Panel System

Every section uses a standardized .section component with corner accent lines, hex pattern overlays, alert pulsing, skeleton loading shimmer, collapse/expand toggles, and GPU-accelerated compositing (translateZ(0), contain: style paint).

Data Visualization

  • Sparklines — Inline SVG charts for CPU, memory, network, GPU (60-sample rolling history)
  • Progress bars — Color-coded utilization bars (green → orange → red) with animated sweep overlay
  • Cluster aggregates — 3 large metric cards (CPU/MEM/DISK) with sweep backgrounds, min/max per metric
  • Net flow matrix — Canvas-rendered heatmap of services vs remote hosts. Animated cells, pulse waves, scanline overlay, crosshair on hover
  • Obsidian topology — 50-node hex-grid canvas with data packets flowing along connections, comet trails, ambient glow orbs

05 Network Topology Visualization

Force-directed graph rendered on a dedicated canvas element. DEEPDISH sits as the center-pinned hub node with cluster nodes on an equidistant inner ring and external integrations on an outer ring. Full physics simulation with spring forces, repulsion, velocity damping, and boundary clamping.

Data packets flow along links as diamond and circle shapes with comet trails. Colors shift green → orange → red according to actual node load, and spawn rate is proportional to real utilization metrics. The threat level indicator changes with node online/offline status: NOMINAL → ELEVATED → CRITICAL.

  • host — DEEPDISH — Center-pinned hub (diamond SVG)
  • cluster — mini, cheese, pepper, onion, garlic, supreme — Equidistant inner ring (Apple logo)
  • tellurian — Native Planet — Outer ring
  • urbit — Cloud Urbit — Outer ring
  • pc — Epic Gamer — Outer ring (Windows logo)
  • homeassistant — Home Assistant — Outer ring (HA gear icon)
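The dashboard implements this simulation in canvas JavaScript, but the per-frame physics step translates directly. A sketch in Python, with all constants illustrative rather than the dashboard's tuned values:

```python
import math

def physics_step(nodes, links, dt=0.016, spring_k=0.02, rest=120.0,
                 repulsion=5000.0, damping=0.85, bounds=(800, 600)):
    """One integration step of the spring/repulsion/damping model.

    nodes: {id: {"x", "y", "vx", "vy", optional "pinned"}}; links: [(a, b)].
    """
    ids = list(nodes)
    forces = {i: [0.0, 0.0] for i in ids}
    # Pairwise inverse-square repulsion keeps nodes spread apart.
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            a, b = nodes[ids[i]], nodes[ids[j]]
            dx, dy = a["x"] - b["x"], a["y"] - b["y"]
            d2 = dx * dx + dy * dy or 1e-6
            d = math.sqrt(d2)
            fx, fy = (repulsion / d2) * dx / d, (repulsion / d2) * dy / d
            forces[ids[i]][0] += fx; forces[ids[i]][1] += fy
            forces[ids[j]][0] -= fx; forces[ids[j]][1] -= fy
    # Spring force pulls each link toward its rest length.
    for a_id, b_id in links:
        a, b = nodes[a_id], nodes[b_id]
        dx, dy = b["x"] - a["x"], b["y"] - a["y"]
        d = math.hypot(dx, dy) or 1e-6
        f = spring_k * (d - rest)
        fx, fy = f * dx / d, f * dy / d
        forces[a_id][0] += fx; forces[a_id][1] += fy
        forces[b_id][0] -= fx; forces[b_id][1] -= fy
    # Integrate with damping, clamp to canvas, keep pinned nodes fixed.
    for i in ids:
        n = nodes[i]
        if n.get("pinned"):
            continue
        n["vx"] = (n["vx"] + forces[i][0] * dt) * damping
        n["vy"] = (n["vy"] + forces[i][1] * dt) * damping
        n["x"] = min(max(n["x"] + n["vx"], 0.0), bounds[0])
        n["y"] = min(max(n["y"] + n["vy"], 0.0), bounds[1])
```

Pinning the hub and running this once per requestAnimationFrame is enough for the layout to settle into the ring structure described above.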

06 Radgotchi Virtual Pet System

A complete virtual pet IIFE module (RG) displayed in the RADTECH section. 25 sprite states, 12 CSS animations, military/SIGINT-themed status messages (e.g., "SIGINT NOMINAL", "OVERWATCH ACTIVE", "EXCESSIVE INPUT — CEASE").

Interaction Triggers

  ▸ Mouse near section → eye tracking
  ▸ Single click → random positive reaction
  ▸ 3+ rapid clicks → excited
  ▸ 5+ rapid clicks → angry
  ▸ Double click → spin trick
  ▸ Hover on dashboard sections → themed reaction (2.5s cooldown)

System-Aware Behavior

  ▸ CPU >90% → angry (critical mood)
  ▸ Memory >92% → warn state
  ▸ Temps >85°C → critical reaction
  ▸ Node offline → sad
  ▸ Node online → excited
  ▸ 120s idle → sleep / lonely
  ▸ Late night (23:00–06:00) → nap bias

07 OCS Eye & DEEPDISH Chat

MJPEG Eye Stream

OCS camera feed from chef.local:5001 proxied through Flask to avoid CORS. Health check every 10 seconds, auto-reconnect on error with 5-second delay. 7 expression controls (happy, sad, angry, bored, confused, surprised, suspicious). System events automatically trigger expressions with cooldown.

AI Chat Integration

  • LLM Endpoint — supreme.local:1234 (LM Studio)
  • Model — gemma-2-27b
  • Temperature — 0.7
  • Max Tokens — 300
  • Persona — DEEPDISH: black-box intelligence, SIGINT style, lowercase/clinical

The system prompt dynamically injects full live telemetry context — system info, CPU/memory/GPU/disk/network, Docker containers, systemd services, cluster nodes, Home Assistant, OCS, and Obsidian vault stats. Personal names are redacted (OPERATOR / ASSET-2) in chat history.
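A minimal redaction pass might look like the following; the names in the mapping are stand-ins, not the operators' actual identities:

```python
import re

# Personal-name -> codename mapping (stand-in values for illustration).
REDACTIONS = {"tyler": "OPERATOR", "alex": "ASSET-2"}

def redact(text):
    """Replace personal names with codenames, case-insensitively.

    Word boundaries prevent substring hits inside hostnames or handles.
    """
    for name, codename in REDACTIONS.items():
        text = re.sub(rf"\b{re.escape(name)}\b", codename, text,
                      flags=re.IGNORECASE)
    return text
```

Applying this to both outgoing prompts and stored chat history keeps names out of the LLM context and out of any persisted logs.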

08 Event Detection & Logging

The event system compares current vs previous data snapshots to detect state changes. Events are classified into 4 levels and trigger both Radgotchi reactions and OCS eye expression changes:

  • CPU >90% — ● CRITICAL
  • CPU >70% — ● WARNING
  • Memory >90% — ● CRITICAL
  • Disk >90% — ● WARNING
  • Temperature > critical threshold — ● CRITICAL
  • Docker container state change — ● INFO
  • Cluster node offline — ● WARNING
  • Systemd service failure — ● WARNING
  • Service recovery — ● OK
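The snapshot-diff approach can be sketched as follows; thresholds mirror the table above, while snapshot keys and message strings are illustrative:

```python
def detect_events(prev, curr):
    """Diff two telemetry snapshots into (level, message) events."""
    events = []
    # Threshold checks only need the current snapshot.
    cpu = curr.get("cpu_percent", 0)
    if cpu > 90:
        events.append(("CRITICAL", f"CPU {cpu:.0f}%"))
    elif cpu > 70:
        events.append(("WARNING", f"CPU {cpu:.0f}%"))
    if curr.get("mem_percent", 0) > 90:
        events.append(("CRITICAL", f"MEM {curr['mem_percent']:.0f}%"))
    # State changes need the previous snapshot as well.
    for node, online in curr.get("nodes", {}).items():
        was_online = prev.get("nodes", {}).get(node, True)
        if was_online and not online:
            events.append(("WARNING", f"node {node} OFFLINE"))
        elif not was_online and online:
            events.append(("OK", f"node {node} RECOVERED"))
    return events
```

Each returned event can then fan out to the log panel, a Radgotchi mood change, and an OCS eye expression in one place.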

09 Polling & Refresh Architecture

Split polling strategy separates fast local metrics from slow external integrations:

  • 2,000ms — /api/metrics + /api/metrics/integrations — Core telemetry polling (parallel)
  • 1,000ms — Column height sync
  • 3,000ms — Management logs — Docker/Systemd auto-refresh
  • 6,000ms — Radgotchi idle behavior check
  • 10,000ms — /api/ocs/eye/ping — OCS stream health check
  • rAF — Canvas elements — Globe, topology, net flow, Obsidian topo

10 Dashboard Preview

■ REC — CLASSIFIED DASHBOARD CAPTURE

11 Performance Optimizations

  • Split API polling — Local metrics (fast) and integrations (slow) fetched in separate parallel requests
  • ThreadPoolExecutor — Container stats, cluster nodes, OCS status all fetched in parallel
  • DOM cache — Map-based ID lookup cache avoids repeated getElementById calls
  • CSS containment — .section { contain: style paint } limits layout recalculation
  • GPU compositing — translateZ(0) + backface-visibility: hidden on animated elements
  • Static info cache — System info (kernel, CPU model) cached indefinitely after first fetch
  • Obsidian TTL cache — 30-second cache prevents API hammering on every poll cycle
  • mDNS cache — 120-second TTL with connectivity validation
  • Connection pooling — Shared requests.Session with keep-alive for all external HTTP
  • Conditional render — Row flash animations fire only on actual value changes
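The shared-session optimization is a small amount of code: one Session reused across all integrations avoids a fresh TCP (and TLS) handshake on every 2-second poll cycle. A sketch with illustrative pool sizes:

```python
import requests
from requests.adapters import HTTPAdapter

def make_pooled_session(pool_size=16, retries=1):
    """Build one keep-alive Session shared by all external HTTP calls."""
    session = requests.Session()
    adapter = HTTPAdapter(pool_connections=pool_size,  # distinct hosts
                          pool_maxsize=pool_size,      # sockets per host
                          max_retries=retries)
    # Mount the same adapter for both schemes so every integration
    # (Prometheus, HA, Obsidian, OCS, LM Studio) shares the pool.
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    return session
```

Constructed once at startup and passed to every collector, this also gives a single place to set timeouts and retry policy.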

12 Security Posture

  • Container runs privileged with host PID/network — required for full system visibility
  • HA token & Obsidian API key in .env — not committed to version control
  • Obsidian API uses self-signed HTTPS — SSL verification disabled
  • Frontend redacts non-local IPs (e.g., 142.███.███.███)
  • Chat history redacts personal names → OPERATOR / ASSET-2
  • Docker/Systemd management exposed without auth — trusted LAN only
Additional security protocols and network segmentation details are classified under Directive ████-██ and cannot be disclosed in this document. Access requires SCI compartment authorization.

13 External Service Dependencies

  • Prometheus (×6) — {node}.local:9100 — HTTP/Prometheus — Cluster node metrics
  • Native Planet — nativeplanet.local:9100 — HTTP/Prometheus — Urbit machine metrics
  • Home Assistant — homeassistant.local:8123 — HTTP REST — Smart home / phone tracking
  • Obsidian REST API — localhost:27124 — HTTPS REST — Vault access
  • OCS — chef.local:5001 — HTTP + MJPEG — Eye rendering / vision
  • LM Studio — supreme.local:1234 — OpenAI API — AI chat (gemma-2-27b)
  • Docker Engine — /var/run/docker.sock — Unix socket — Container management
  • systemd — D-Bus — D-Bus / nsenter — Service management

¹ All network addresses shown are partially redacted per standard operating procedure.

² Metric collection intervals are tunable — values shown reflect production configuration.

³ This document is auto-generated from system introspection. ████████████