One stack. Three layers. Zero compromise.
p4n4 is a Docker Compose platform that wires together Node-RED, InfluxDB, Grafana, Ollama, Letta, n8n, and Edge Impulse into a single self-hosted stack — from Raspberry Pi to GPU server.
Built on the proven MING stack — Mosquitto, InfluxDB, Node-RED, Grafana — and extended with a GenAI layer and Edge AI inference. All services share a Docker bridge network, with Node-RED at the intersection wiring sensors, databases, and AI together.
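A minimal sketch of that shared bridge network in Compose terms — service names and image tags here are illustrative, not the stack's actual generated files:

```yaml
# Illustrative fragment only, not the compose file p4n4 generates.
networks:
  p4n4:
    driver: bridge

services:
  mosquitto:
    image: eclipse-mosquitto
    networks: [p4n4]
  influxdb:
    image: influxdb
    networks: [p4n4]
  nodered:
    image: nodered/node-red
    networks: [p4n4]
    # On a shared bridge network, Node-RED reaches every peer by
    # service name, e.g. mqtt://mosquitto:1883 or http://influxdb:8086
```

Because every container sits on the same bridge, no service needs to know another's IP — DNS-by-service-name is what lets Node-RED sit "at the intersection".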
MING is a battle-tested open-source IoT stack used across industrial, agricultural, and smart-building deployments worldwide. p4n4 takes MING as its core and extends it with a complete GenAI layer and on-device Edge AI.
TIGUITTO is a solid starting point. p4n4 is where IoT projects go when they need intelligence, not just telemetry. The IoT stack is identical — what changes is everything on top of it.
| Capability | TIGUITTO / IoTStack | p4n4 |
|---|---|---|
| Data ingestion | Telegraf metrics-scraping agent only | Node-RED visual, bidirectional flow engine |
| MQTT broker | Eclipse Mosquitto | Eclipse Mosquitto |
| Time-series DB | InfluxDB | InfluxDB |
| Visualisation | Grafana | Grafana |
| Conditional logic | Limited Telegraf processors only | Full Node-RED function nodes + JS |
| Device actuation | ✗ | ✓ MQTT out, HTTP out |
| Protocol bridging | MQTT only | MQTT, HTTP, WebSocket, Modbus, OPC-UA… |
| AI integration | ✗ | ✓ Ollama, Letta, n8n |
| Workflow automation | ✗ | ✓ n8n — webhook, cron, AI chains |
| Agent memory | ✗ | ✓ Letta / MemGPT — persistent across sessions |
| Local LLM inference | ✗ | ✓ Ollama — phi3, mistral, llama3 |
| Visual programming | ✗ | ✓ Node-RED browser editor |
| Edge ML inference | ✗ | ✓ Edge Impulse — <5ms, Docker container |
| Edge deployment | ✓ Pi compatible | ✓ Pi 5 + optional GPU acceleration |
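To make the "conditional logic" and "device actuation" rows concrete, here is a sketch of a Node-RED function node in plain JavaScript. The topic names and the 30 °C threshold are invented for the example; they are not part of p4n4's shipped flows.

```javascript
// Hypothetical Node-RED function node body: gate a temperature reading
// and actuate a fan over MQTT, or drop the message entirely.
function gateTemperature(msg) {
  const reading = JSON.parse(msg.payload); // e.g. {"temp": 31.2, "sensor": "bme280-1"}
  if (reading.temp > 30) {
    // Re-target the message so a downstream "mqtt out" node actuates a fan.
    return {
      topic: "actuators/fan/on",
      payload: JSON.stringify({ sensor: reading.sensor }),
    };
  }
  return null; // Node-RED convention: returning null drops the message.
}

// Standalone demo (outside Node-RED):
const out = gateTemperature({ payload: '{"temp": 31.2, "sensor": "bme280-1"}' });
console.log(out.topic); // actuators/fan/on
```

This is exactly the kind of bidirectional step Telegraf-only stacks can't express: the same flow that reads a sensor also publishes a command back to a device.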
One interactive wizard scaffolds your entire project — compose files, configs, secrets, base flows, dashboards, and EI model wiring. All version-controlled, all idempotent.
Pass `--ai` or `--edge` to bring up the additional stacks. Secrets land in `.env` with new random values, and a confirmation table is shown before anything is written.
Templates are Git repositories with a p4n4-template.toml manifest.
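The manifest schema isn't reproduced here; as a rough sketch, a `p4n4-template.toml` could look like the following — every key name below is an assumption, not the real schema:

```toml
# Hypothetical manifest — field names are illustrative only.
[template]
name = "greenhouse-monitor"
description = "MING base flows plus a soil-moisture dashboard"

[stacks]
ai = false    # whether the template expects the GenAI layer
edge = true   # whether it wires an Edge Impulse model
```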
Any Git host works. Org indexes add short-name resolution for teams.
Configuration lives in `~/.p4n4/config.toml`; the version defaults to `latest`.

Edge Impulse handles raw-signal TinyML at sub-100ms latency. Ollama and Letta handle natural-language reasoning on aggregated context. They're complementary, not redundant.
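One way to picture that split, again as a hypothetical Node-RED function node with two outputs — all names and the 60-sample window are invented for the sketch. Raw samples stay on the Edge Impulse path; only aggregated summaries are handed to the LLM side.

```javascript
// Hypothetical two-output router: per-sample data goes to the fast
// Edge Impulse path; once a window has accumulated, a compact summary
// goes to the Ollama/Letta path for reasoning.
function routeSignal(msg, windowSize = 60) {
  if (Array.isArray(msg.payload) && msg.payload.length >= windowSize) {
    // Enough history: summarise for the language-model side.
    const mean = msg.payload.reduce((a, b) => a + b, 0) / msg.payload.length;
    return [null, { payload: { summary: { mean, n: msg.payload.length } } }];
  }
  // Short or single sample: raw-signal path (Edge Impulse container).
  return [{ payload: msg.payload }, null];
}

const [edge, llm] = routeSignal({ payload: [1, 2, 3] });
console.log(edge !== null, llm === null); // true true
```

The point of the split is bandwidth and latency: millisecond-scale inference never waits on an LLM, and the LLM never sees raw waveforms.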
p4n4 targets ARM64 and x86-64. If it runs Docker and Docker Compose, it runs p4n4. The IoT stack runs comfortably on 1 GB RAM devices. Add the GenAI stack on 4 GB+ boards or any GPU server.
Requires Docker, Docker Compose, and Python 3.11+.