v0.2 — open source platform

IoT +
GenAI +
Edge AI

One stack. Three layers. Zero compromise.

p4n4 is a Docker Compose platform that wires together Node-RED, InfluxDB, Grafana, Ollama, Letta, n8n, and Edge Impulse into a single self-hosted stack — from Raspberry Pi to GPU server.

bash — p4n4 init
$ p4n4 init my-factory-stack
─── Configuration ──────────────────────────
? InfluxDB organisation: ming
? Timezone: UTC
? InfluxDB admin password: (auto-generated)
? Grafana admin password: (auto-generated)
─── Generating secrets ─────────────────────
✓ InfluxDB token ✓ Grafana password
✓ Project scaffolded at ./my-factory-stack
$ p4n4 up
✓ mosquitto healthy :1883
✓ influxdb healthy :8086
✓ node-red healthy :1880
✓ grafana healthy :3000
$
3×
Cooperative Stacks
8+
Open Source Services
1 cmd
To Full Stack
$0
Cloud Dependency

Three stacks.
One network.

Built on the proven MING stack — Mosquitto, InfluxDB, Node-RED, Grafana — and extended with a GenAI layer and Edge AI inference. All services share a Docker bridge network, with Node-RED at the intersection wiring sensors, databases, and AI together.

Mosquitto :1883
Node-RED :1880
InfluxDB :8086
Grafana :3000
Ollama :11434
Letta :8283
n8n :5678
ei-runner :8080
Visual Flow Orchestration
Visual flow engine with bidirectional data movement, conditional logic, 400+ community nodes, and direct AI call-outs. Everything Telegraf does, and then some.
flow-based programming
Persistent AI Agents
Letta agents maintain core, archival, and recall memory across sessions. Your Site Monitor agent remembers every anomaly, every maintenance event, every operator conversation.
letta / memgpt
TinyML at the Sensor
Edge Impulse inference runs as a Docker container with an HTTP API. Classify vibration, audio, and images in <5ms before data ever reaches the broker.
edge impulse / tflite
Fully Self-Hosted
Zero cloud dependency at runtime. Models are pre-downloaded. Secrets stay local. Works air-gapped. Runs on a Raspberry Pi 5 or a rack-mount GPU server equally well.
on-premise
Composable by Design
Deploy only the IoT stack today. Add GenAI next month. Slot in Edge Impulse models as sensors are deployed. Each stack is independently manageable.
docker compose
Template Registry
Pull industry-specific configurations from Git. Publish your own. Short-name resolution through org indexes. OTA template upgrades with merge safety.
git-backed

The proven IoT foundation.
Now AI-native.

MING — Mosquitto, InfluxDB, Node-RED, Grafana — is a battle-tested open-source IoT stack used across industrial, agricultural, and smart-building deployments worldwide. p4n4 takes MING as its core and extends it with a complete GenAI layer and on-device Edge AI.

M
Mosquitto
MQTT Broker
Eclipse Mosquitto handles all device messaging. Publish/subscribe at scale, TLS-encrypted, ACL-controlled topic namespaces per site and device type.
I
InfluxDB
Time-Series Store
Purpose-built for sensor telemetry. Flux query language, data retention policies, and a REST API consumed by Grafana, Node-RED, and the full AI layer.
N
Node-RED
Flow Orchestrator
Visual, bidirectional flow engine. 400+ community nodes. Replaces Telegraf with full conditional logic, device actuation, and direct AI call-outs — all in a browser.
G
Grafana
Dashboards & Alerts
Real-time dashboards and unified alerting. Alert webhooks feed into n8n for AI-enriched notifications — plain-English anomaly summaries before they hit your inbox.
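The alert path described above (a Grafana webhook fed into n8n, then summarised in plain English) can be sketched as a small transform. The payload shape follows Grafana's unified-alerting webhook; the `summarise_alerts` name and summary wording are illustrative assumptions, not part of p4n4:

```python
def summarise_alerts(payload: dict) -> str:
    """Collapse a Grafana alerting webhook into one operator-readable line,
    ready to be enriched by the LLM layer or sent straight to chat/email.
    Illustrative sketch; not p4n4's actual n8n workflow."""
    firing = [a for a in payload.get("alerts", []) if a.get("status") == "firing"]
    if not firing:
        return "All clear: no firing alerts."
    parts = []
    for alert in firing:
        name = alert.get("labels", {}).get("alertname", "unknown alert")
        summary = alert.get("annotations", {}).get("summary", "")
        parts.append(f"{name} ({summary})" if summary else name)
    return f"{len(firing)} firing: " + "; ".join(parts)

payload = {"alerts": [{"status": "firing",
                       "labels": {"alertname": "MotorVibrationHigh"},
                       "annotations": {"summary": "RMS vibration above 2g on motor-m003"}}]}
print(summarise_alerts(payload))
# → 1 firing: MotorVibrationHigh (RMS vibration above 2g on motor-m003)
```

In a real deployment the string would become the LLM prompt or the notification body; the transform itself stays this small.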

Why p4n4 over TIGUITTO?

TIGUITTO is a solid starting point. p4n4 is where IoT projects go when they need intelligence, not just telemetry. The IoT stack is identical — what changes is everything on top of it.

Capability           | TIGUITTO / IoTStack                   | p4n4
---------------------|---------------------------------------|--------------------------------------------
Data ingestion       | Telegraf metrics-scraping agent only  | Node-RED visual, bidirectional flow engine
MQTT broker          | Eclipse Mosquitto                     | Eclipse Mosquitto
Time-series DB       | InfluxDB                              | InfluxDB
Visualisation        | Grafana                               | Grafana
Conditional logic    | Limited Telegraf processors only      | Full Node-RED function nodes + JS
Device actuation     | —                                     | MQTT out, HTTP out
Protocol bridging    | MQTT only                             | MQTT, HTTP, WebSocket, Modbus, OPC-UA…
AI integration       | —                                     | Ollama, Letta, n8n
Workflow automation  | —                                     | n8n — webhook, cron, AI chains
Agent memory         | —                                     | Letta / MemGPT — persistent across sessions
Local LLM inference  | —                                     | Ollama — phi3, mistral, llama3
Visual programming   | —                                     | Node-RED browser editor
Edge ML inference    | —                                     | Edge Impulse — <5ms, Docker container
Edge deployment      | Pi compatible                         | Pi 5 + optional GPU acceleration

p4n4 init.
That's it.

One interactive wizard scaffolds your entire project — compose files, configs, secrets, base flows, dashboards, and EI model wiring. All version-controlled, all idempotent.

p4n4 init
Interactive wizard. Scaffolds a complete project directory with secrets, configs, and base flows.
p4n4 template
Pull templates from Git or org indexes. Push your own. Upgrade with merge safety.
p4n4 up
Start the IoT stack. Add --ai or --edge to bring up additional stacks.
p4n4 status
Unified health view across all stacks and services with uptime and port listing.
p4n4 ei
Deploy an Edge Impulse model and run inference. Publishes results to MQTT.
p4n4 secret
Rotate secrets in .env with new random values. Shows a confirmation table before writing.
p4n4 validate
Pre-flight checks: Docker version, secrets, YAML validity, port conflicts, EI model presence.
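The secret handling that `p4n4 init` and `p4n4 secret` describe (fresh random values written into `.env`) can be sketched with the standard-library `secrets` module. The key names and the `rotate_secrets` helper are illustrative assumptions, not p4n4's actual implementation:

```python
import secrets

# Illustrative sketch of .env secret rotation: every managed key gets a
# fresh URL-safe random value (added if missing). Key names are examples,
# not p4n4's actual schema.
SECRET_KEYS = ["INFLUXDB_TOKEN", "GRAFANA_ADMIN_PASSWORD", "MQTT_PASSWORD"]

def rotate_secrets(env: dict) -> dict:
    """Return a copy of the env mapping with all secret keys replaced;
    non-secret keys (timezone, org name, ...) pass through untouched."""
    rotated = dict(env)
    for key in SECRET_KEYS:
        rotated[key] = secrets.token_urlsafe(32)
    return rotated

env = {"INFLUXDB_TOKEN": "old-token", "TZ": "UTC"}
new_env = rotate_secrets(env)
print(new_env["TZ"])                              # → UTC (untouched)
print(new_env["INFLUXDB_TOKEN"] != "old-token")   # → True (rotated)
```

Returning a new mapping rather than mutating in place is what makes the "confirmation table before writing" step cheap: old and new values can be diffed side by side before anything touches disk.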
p4n4 init — interactive wizard
$ p4n4 init
──────────────────────────────────
? Project name: my-factory-stack
? Site ID: factory-floor-1

─── Stacks ──────────────────────
? IoT stack? Y
? GenAI stack? y
? Edge stack? y

─── Secrets ─────────────────────
✓ MQTT password generated
✓ InfluxDB token generated
✓ n8n encryption key generated

✓ Scaffolded at ./my-factory-stack

Pull. Customise.
Deploy. Share.

Templates are Git repositories with a p4n4-template.toml manifest. Any Git host works. Org indexes add short-name resolution for teams.

01 — FIND
Discover a template
Search the community index or your organisation's private index.
p4n4 template search manufacturing
02 — APPLY
Apply to your project
Scaffolds a project directory with pre-configured stacks, flows, and dashboards.
p4n4 template apply factory-baseline
03 — RUN
Validate and start
Pre-flight checks, then bring up the stack.
p4n4 validate && p4n4 up
04 — SHARE
Publish your own
Secrets are scrubbed automatically. Tag a version. Register in your org index.
p4n4 template push origin --tag 1.0.0
Short name (org-scoped)
acme/factory-baseline
Resolved via the acme org index registered in ~/.p4n4/config.toml. Version defaults to latest.
Full Git URL
github.com/acme/factory-baseline@1.0.0
Pulls directly from any Git host. Supports HTTPS and SSH. Pin to any branch, tag, or commit.
Local path
./my-local-template
Use a local directory as a template. Ideal for developing and testing templates before publishing.
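The three resolution forms above can be sketched as a single lookup function. The index mapping and the exact URL shapes are assumptions inferred from the examples shown, not p4n4's actual resolver:

```python
# Sketch of template reference resolution: local path, full Git URL,
# or org-scoped short name looked up in a registered org index.
ORG_INDEX = {"acme": "github.com/acme"}  # would come from ~/.p4n4/config.toml

def resolve(ref: str) -> str:
    if ref.startswith("./") or ref.startswith("/"):
        return ref                        # local directory, used as-is
    name, _, version = ref.partition("@")
    version = version or "latest"         # short names default to latest
    if "." in name.split("/")[0]:
        return f"{name}@{version}"        # already a full Git URL (host has a dot)
    org, _, repo = name.partition("/")
    return f"{ORG_INDEX[org]}/{repo}@{version}"

print(resolve("acme/factory-baseline"))   # → github.com/acme/factory-baseline@latest
print(resolve("./my-local-template"))     # → ./my-local-template
```

The "host has a dot" heuristic is what lets short names and full URLs share one namespace without any explicit flag.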

Two AI layers.
One data fabric.

Edge Impulse handles raw-signal TinyML at sub-5 ms latency. Ollama and Letta handle natural language reasoning on aggregated context. They're complementary, not redundant.

Edge Impulse Inference
Raw signal → DSP → TFLite classifier → label + anomaly score
<5ms
Mosquitto + Node-RED
MQTT broker + flow engine bridges raw telemetry and EI results
<50ms
InfluxDB
Time-series store for raw telemetry and EI classification labels
persistent
Ollama LLM
Explain anomalies in plain English. phi3:mini to llama3:8b.
~2–8s
Letta Agents
Persistent agent memory. Accumulates fault history. Operator-facing chatbot.
stateful
MQTT inference flow
# Publish sensor data → sensors/raw
{
"device": "motor-m003",
"values": [0.12, -0.05, 1.01, 0.33]
}
─────────────────────────────────
# Result on → inference/results
{
"label": "normal",
"confidence": 0.964,
"anomaly_score": 0.18,
"latency_ms": 4.2,
"mode": "model"
}
Node-RED: if fault > 0.7 → Ollama
─────────────────────────────────
→ "Motor M003 shows early-stage bearing fault signature at 3.2 kHz. Last maintenance: 47 days ago. Recommend inspection within 72h."
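The conditional hand-off in the flow above (Node-RED forwarding high-fault classifications to Ollama) can be sketched in Python. The payload fields match the MQTT examples; the `FAULT_THRESHOLD` constant and the prompt wording are illustrative assumptions:

```python
FAULT_THRESHOLD = 0.7  # mirrors the "if fault > 0.7" branch in the flow above

def should_escalate(result: dict) -> bool:
    """Escalate to the LLM layer when a non-normal label is confident
    enough, or when the anomaly score itself crosses the threshold."""
    if result["label"] != "normal" and result["confidence"] > FAULT_THRESHOLD:
        return True
    return result["anomaly_score"] > FAULT_THRESHOLD

def build_prompt(device: str, result: dict) -> str:
    """Turn a classification into a plain-English question for the local LLM."""
    return (
        f"Sensor {device} was classified as '{result['label']}' "
        f"(confidence {result['confidence']:.2f}, "
        f"anomaly score {result['anomaly_score']:.2f}). "
        "Explain the likely cause and recommend an action."
    )

normal = {"label": "normal", "confidence": 0.964, "anomaly_score": 0.18}
fault = {"label": "bearing_fault", "confidence": 0.91, "anomaly_score": 0.83}
print(should_escalate(normal))  # → False — healthy readings never reach the LLM
print(should_escalate(fault))   # → True — this one gets a plain-English write-up
```

The point of the threshold gate is cost: the sub-5 ms classifier sees every message, while the 2–8 s LLM call only fires on the rare readings worth explaining.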

Runs on the hardware
you already have.

p4n4 targets ARM64 and x86-64. If it runs Docker and Docker Compose, it runs p4n4. The IoT stack runs comfortably on 1 GB RAM devices. Add the GenAI stack on 4 GB+ boards or any GPU server.

Raspberry Pi
Raspberry Pi 4
ARM Cortex-A72 · 2–8 GB RAM · USB 3.0
IoT GenAI Edge
4 GB+ variant recommended for the GenAI stack with lightweight models (phi3:mini).
Raspberry Pi 3B+
ARM Cortex-A53 · 1 GB RAM
IoT Edge
IoT + Edge stacks only. GenAI stack requires 4 GB+.
Raspberry Pi Zero 2 W
ARM Cortex-A53 · 512 MB RAM · Wi-Fi
Edge
Edge Impulse runner only. Ideal as a dedicated TinyML inference node.
NVIDIA Jetson
Jetson Nano (4 GB)
ARM Cortex-A57 · 4 GB · 128-core Maxwell GPU
IoT GenAI Edge
Use NVIDIA L4T base images. Ollama runs on CPU; GPU-accelerated EI inference works natively.
Jetson Xavier NX
ARM Carmel · 8 GB · 384-core Volta GPU
IoT GenAI Edge
Tensor Cores enable fast LLM inference. Strong choice for industrial vision pipelines.
Banana Pi & Rockchip
Banana Pi BPI-M7
RK3588 · 8–32 GB · NPU 6 TOPS
IoT GenAI Edge
Onboard NPU accelerates Edge Impulse models. PCIe 3.0 for NVMe storage.
Banana Pi BPI-M5 Pro
RK3576 · 4–16 GB · NPU 6 TOPS
IoT GenAI Edge
Cost-effective RK3576 board. Runs all stacks. Good Pi 5 alternative.
Rock 5B / Orange Pi 5 Plus
RK3588 · 4–32 GB · NPU 6 TOPS
IoT GenAI Edge
RK3588 NPU can run RKLLM for local LLM inference. Use arm64 Docker images.
x86-64 Servers & Mini PCs
Intel NUC / Beelink Mini PC
x86-64 · 8–32 GB RAM · Intel iGPU
IoT GenAI Edge
Compact, low-power on-premise hub. 16 GB+ recommended for GenAI stack.
Any Linux VPS / VM
x86-64 · 4 GB+ RAM · Ubuntu / Debian
IoT GenAI
Ideal for remote deployments and CI/CD staging. Edge stack requires hardware sensor access.
ARM64 & x86-64 — all Docker images are multi-arch
1 GB minimum for IoT-only stack · 4 GB+ for GenAI stack
Docker 24+ and Docker Compose v2 required
Air-gapped operation supported — no cloud dependency at runtime

Up in five minutes.

Requires Docker, Docker Compose, and Python 3.11+

quickstart
01
pip install p4n4
02
p4n4 init my-project # interactive wizard — prompts for org, timezone, secrets
03
cd my-project
04
p4n4 validate # pre-flight: docker, secrets, ports, EI models
05
p4n4 up # node-red :1880 grafana :3000 influxdb :8086
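The port-conflict part of `p4n4 validate` (step 04) can be sketched with a plain socket probe: a port is considered free if the process can bind it. The `STACK_PORTS` mapping and `port_is_free` helper are illustrative, not p4n4's actual pre-flight code:

```python
import socket

# Default ports from the stack overview; an occupied port here would
# make `docker compose up` fail later, so it's caught before starting.
STACK_PORTS = {"mosquitto": 1883, "node-red": 1880,
               "influxdb": 8086, "grafana": 3000}

def port_is_free(port: int) -> bool:
    """Try to bind the port on localhost; failure means something
    (another stack, a stray process) already holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind(("127.0.0.1", port))
            return True
        except OSError:
            return False

conflicts = [name for name, port in STACK_PORTS.items() if not port_is_free(port)]
print(conflicts)  # empty when no service already holds a stack port
```

Bind-probing reports what the OS will actually allow at startup, which is more reliable than parsing `netstat` output, at the cost of only checking the loopback interface.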