Over-the-air update system for FxBlox devices. Manages Docker-based services for IPFS storage, cluster pinning, and the Fula protocol on ARM64 hardware (RK3588/RK1).
Each FxBlox device runs five Docker containers orchestrated by docker-compose:
+---------------------+ +-------------------+ +---------------------+
| fxsupport | | kubo | | ipfs-cluster |
| (alpine, scripts) | | (ipfs/kubo:release| | (functionland/ |
| | | bridge network) | | ipfs-cluster) |
| Carries all config | | | | host network |
| and scripts in | | Ports: 4001, 5001 | | Ports: 9094-9096 |
| /linux/ directory | | 8081 | | |
+---------------------+ +-------------------+ +---------------------+
| | |
| docker cp to | IPFS block storage | Cluster state
| /usr/bin/fula/ | /uniondrive/ipfs_* | /uniondrive/ipfs-cluster/
v v v
+-------------------------------------------------------------------------+
| /uniondrive (merged storage) |
| union-drive.sh (mergerfs mount) |
+-------------------------------------------------------------------------+
| |
+--------+--------+ +-------------------+ +-------------+--------+
| go-fula | | watchtower | | fula.service |
| (functionland/ | | (auto-updates) | | (systemd) |
| go-fula) | | polls Docker Hub | | runs fula.sh |
| host network | | every 3600s | | |
| Port: 40001 | +-------------------+ +---------------------+
+------------------+
| Container | Image | Network | Purpose |
|---|---|---|---|
| `fula_fxsupport` | `functionland/fxsupport:release` | default | Carries scripts and config files. Acts as a delivery mechanism: fula.sh copies /linux/ from this container to /usr/bin/fula/ on the host. |
| `ipfs_host` | `ipfs/kubo:release` | bridge | IPFS node. Stores blocks on /uniondrive. Config template at kubo/config, runtime config at /home/pi/.internal/ipfs_data/config. |
| `ipfs_cluster` | `functionland/ipfs-cluster:release` | host | IPFS Cluster follower node. Manages pin replication across the network. Config at /uniondrive/ipfs-cluster/service.json. |
| `fula_go` | `functionland/go-fula:release` | host | Fula protocol: blockchain proxy, libp2p blox, WAP server. |
| `fula_updater` | `containrrr/watchtower:latest` | default | Polls Docker Hub hourly for updated images and auto-restarts containers. |
- kubo runs in Docker bridge networking (ports mapped: `4001:4001`, `5001:5001`)
- go-fula and ipfs-cluster run with `network_mode: "host"` (direct host networking)
- go-fula cannot reach kubo at `127.0.0.1` from the host network — it uses the `docker0` bridge IP (typically `172.17.0.1`), auto-detected in `go-fula/blox/kubo_proxy.go`
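A minimal shell sketch of that bridge-IP detection. The real implementation is Go code in `go-fula/blox/kubo_proxy.go`; parsing the output of `ip -4 addr show docker0` with sed here is an illustrative assumption:

```shell
# Extract the first IPv4 address from `ip -4 addr show docker0` output.
# Illustrative only: go-fula does this in Go, not shell.
bridge_ip() {
    # $1: captured output of `ip -4 addr show docker0`
    printf '%s\n' "$1" | sed -n 's/.*inet \([0-9.]*\)\/.*/\1/p' | head -n 1
}

sample='5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0'

bridge_ip "$sample"   # prints 172.17.0.1
```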
/uniondrive/ # mergerfs mount (union of all attached drives)
ipfs_datastore/blocks/ # IPFS block storage (flatfs)
ipfs_datastore/datastore/ # IPFS metadata (pebble)
ipfs-cluster/ # Cluster state, service.json, identity.json
ipfs_staging/ # IPFS staging directory
/home/pi/.internal/ # Device-internal state
ipfs_data/config # Deployed kubo config (runtime)
ipfs_config # Template copy (used by initipfs)
config.yaml # Fula device config (pool, authorizer, etc.)
/usr/bin/fula/ # On-host scripts and configs (copied from fxsupport)
fula.sh # Main orchestrator script
docker-compose.yml # Container definitions
.env # Image tags (GO_FULA, FX_SUPPROT, IPFS_CLUSTER)
.env.cluster # Cluster env var overrides
kubo/config # Kubo config template
kubo/kubo-container-init.d.sh # Kubo init script
ipfs-cluster/ipfs-cluster-container-init.d.sh # Cluster init script
update_kubo_config.py # Selective kubo config merger
union-drive.sh # UnionDrive mount management
bluetooth.py # BLE command handler
local_command_server.py # Local TCP command server
control_led.py # LED control
...
How code changes in this repo reach devices:
1. Push to GitHub repo
2. GitHub Actions builds Docker images (on release) OR manual build
3. Images pushed to Docker Hub (functionland/fxsupport:release, etc.)
4. Watchtower on device detects updated images (polls hourly)
5. fula.service triggers: fula.sh start
6. First restart() — runs with CURRENT on-device files
7. docker cp fula_fxsupport:/linux/. /usr/bin/fula/ — copies NEW files from updated fxsupport image
8. If fula.sh itself changed -> second restart() — runs with NEW files
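The self-update in steps 6-8 can be sketched as follows. The hash comparison and function name are illustrative assumptions, not the literal check fula.sh performs:

```shell
# Sketch: after docker cp, restart again only if fula.sh itself changed.
needs_second_restart() {
    # $1: sha256 of the script before docker cp, $2: path to the script after
    new_sum=$(sha256sum "$2" | cut -d' ' -f1)
    [ "$1" != "$new_sum" ]
}

f=$(mktemp)
echo "old fula.sh" > "$f"
before=$(sha256sum "$f" | cut -d' ' -f1)
echo "new fula.sh" > "$f"           # simulates docker cp replacing the file
if needs_second_restart "$before" "$f"; then
    echo "second restart required"  # printed, since the content changed
fi
rm -f "$f"
```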
On every `fula.sh start` (before `docker-compose up`):

- `update_kubo_config.py` (or an inline fallback in `fula.sh`) runs
- Reads the template (`/usr/bin/fula/kubo/config`) and the deployed config (`/home/pi/.internal/ipfs_data/config`)
- Merges only managed fields (Bootstrap, Peering, Swarm, Experimental, Routing, etc.) from the template into the deployed config
- Preserved fields (Identity, Datastore paths, API/Gateway addresses) are never touched
- Dynamic `StorageMax` is calculated as 80% of `/uniondrive` total space (minimum 800GB floor)
- Writes the updated deployed config, then runs `docker-compose up`
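The dynamic StorageMax rule above can be sketched as pure integer arithmetic; the real df call and rounding in update_kubo_config.py may differ:

```shell
# Sketch: StorageMax = 80% of total /uniondrive capacity, 800GB floor.
storage_max_gb() {
    total_gb=$1                      # total drive size in GB
    max=$(( total_gb * 80 / 100 ))   # 80% of capacity
    if [ "$max" -lt 800 ]; then      # never below the 800GB floor
        max=800
    fi
    echo "$max"
}

storage_max_gb 2000   # 2TB drive: prints 1600
storage_max_gb 500    # small drive: prints 800 (floor applies)
```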
- `.env.cluster` is loaded by docker-compose as `env_file` (injects env vars into the container)
- `.env.cluster` is also bind-mounted at `/.env.cluster` inside the container
- `ipfs-cluster-container-init.d.sh` runs as the entrypoint:
  - Waits for the pool name from `config.yaml`
  - Generates the cluster secret from the pool name
  - Runs `jq` to patch `service.json` with connection_manager, pubsub, batching, timeouts, and informer settings
  - Starts `ipfs-cluster-service daemon` with the bootstrap address
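A hypothetical sketch of the secret-generation step. The actual derivation in ipfs-cluster-container-init.d.sh may differ; what is certain is that IPFS Cluster expects `CLUSTER_SECRET` as a 64-character hex string:

```shell
# Hypothetical: derive a 32-byte (64 hex chars) cluster secret from
# the pool name, so all followers of a pool share the same secret.
pool_to_secret() {
    printf '%s' "$1" | sha256sum | cut -d' ' -f1
}

secret=$(pool_to_secret "my-pool")
echo "${#secret}"   # prints 64
```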
fula-ota/
docker/
build_and_push_images.sh # Builds all 3 images and pushes to Docker Hub
env_release.sh # Release env vars (image names, tags, branches)
env_test.sh # Test env vars (test tags)
env_release_amd64.sh # AMD64 release env vars
run.sh # Local dev docker-compose runner
fxsupport/
Dockerfile # alpine + COPY ./linux -> /linux
build.sh # buildx build + push
linux/ # All on-device scripts and configs
docker-compose.yml # Container orchestration
.env # Docker image tags
.env.cluster # Cluster env var overrides
fula.sh # Main orchestrator (start/stop/restart/rebuild)
union-drive.sh # mergerfs mount management
update_kubo_config.py # Selective kubo config merger
kubo/
config # Kubo config template
kubo-container-init.d.sh
ipfs-cluster/
ipfs-cluster-container-init.d.sh
bluetooth.py # BLE setup and command handling
local_command_server.py # TCP command server
control_led.py # LED control for device status
plugins/ # Optional plugin system (loyal-agent, streamr-node)
...
go-fula/
Dockerfile # Go build stage + alpine runtime
build.sh # Clones go-fula repo, buildx build + push
go-fula.sh # Container entrypoint
ipfs-cluster/
Dockerfile # Go build stage + alpine runtime
build.sh # Clones ipfs-cluster repo, buildx build + push
.github/workflows/
docker-image.yml # Release CI: builds all images + uploads .tar to GitHub release
docker-image-test.yml # Test CI
tests/ # Shell-based test scripts
install-ubuntu.sh # Ubuntu installation script
Install Docker:

```shell
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```

Optionally manage Docker as a non-root user (docs):

```shell
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
```

Install docker-compose:

```shell
sudo curl -L "https://github.com/docker/compose/releases/download/v2.16.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
```

Enable NetworkManager:

```shell
sudo systemctl start NetworkManager
sudo systemctl enable NetworkManager
```

Install Python and system dependencies:

```shell
sudo apt-get install gcc python3-dev python-is-python3 python3-pip
sudo apt-get install python3-gi python3-gi-cairo gir1.2-gtk-3.0
sudo apt install net-tools dnsmasq-base rfkill lshw
```

Raspberry Pi OS handles auto-mounting natively. On Armbian, set up automount:
```shell
sudo apt install net-tools dnsmasq-base rfkill git
sudo nano /usr/local/bin/automount.sh
```

```shell
#!/bin/bash
# Mount a newly attached partition under /media/pi/<sanitized-name>
MOUNTPOINT="/media/pi"
DEVICE="/dev/$1"
MOUNTNAME=$(echo "$1" | sed 's/[^a-zA-Z0-9]//g')
mkdir -p "${MOUNTPOINT}/${MOUNTNAME}"
FSTYPE=$(blkid -o value -s TYPE "${DEVICE}")
if [ "${FSTYPE}" = "ntfs" ]; then
    mount -t ntfs -o uid=pi,gid=pi,dmask=0000,fmask=0000 "${DEVICE}" "${MOUNTPOINT}/${MOUNTNAME}"
elif [ "${FSTYPE}" = "vfat" ]; then
    mount -t vfat -o uid=pi,gid=pi,dmask=0000,fmask=0000 "${DEVICE}" "${MOUNTPOINT}/${MOUNTNAME}"
else
    mount "${DEVICE}" "${MOUNTPOINT}/${MOUNTNAME}"
    chown pi:pi "${MOUNTPOINT}/${MOUNTNAME}"
fi
```

```shell
sudo chmod +x /usr/local/bin/automount.sh
sudo nano /etc/udev/rules.d/99-automount.rules
```

```
ACTION=="add", KERNEL=="sd[a-z][0-9]", TAG+="systemd", ENV{SYSTEMD_WANTS}="automount@%k.service"
ACTION=="add", KERNEL=="nvme[0-9]n[0-9]p[0-9]", TAG+="systemd", ENV{SYSTEMD_WANTS}="automount@%k.service"
ACTION=="remove", KERNEL=="sd[a-z][0-9]", RUN+="/bin/systemctl stop automount@%k.service"
ACTION=="remove", KERNEL=="nvme[0-9]n[0-9]p[0-9]", RUN+="/bin/systemctl stop automount@%k.service"
```

```shell
sudo nano /etc/systemd/system/automount@.service
```

```
[Unit]
Description=Automount disks
BindsTo=dev-%i.device
After=dev-%i.device

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/automount.sh %I
ExecStop=/usr/bin/sh -c '/bin/umount /media/pi/$(echo %I | sed "s/[^a-zA-Z0-9]//g"); /bin/rmdir /media/pi/$(echo %I | sed "s/[^a-zA-Z0-9]//g")'
```

```shell
sudo udevadm control --reload-rules
```
```shell
sudo systemctl daemon-reload
```

Then install fula-ota:

```shell
git clone https://github.com/functionland/fula-ota
cd fula-ota/docker/fxsupport/linux
sudo bash ./fula.sh rebuild
sudo bash ./fula.sh start
```

To build and push the release images:

```shell
cd docker
source env_release.sh
bash ./build_and_push_images.sh
```

This builds and pushes all three images:

- `functionland/fxsupport:release`
- `functionland/go-fula:release`
- `functionland/ipfs-cluster:release`
Use test-tagged images (e.g. test147) to validate changes on a device before promoting to release.
env_test.sh defaults to test147. Override at runtime without editing the file:
```shell
cd docker
# Use the default tag (test147):
source env_test.sh
# Or override:
TEST_TAG=test148 source env_test.sh
```

This exports all three image tags (fxsupport, go-fula, ipfs-cluster) with your tag and writes a matching .env file.
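What env_test.sh writes into .env can be sketched as follows. The function name is hypothetical; the real script also exports the tags into the shell environment:

```shell
# Sketch: emit the three image lines that env_test.sh writes to .env.
# Variable names mirror the .env shown elsewhere in this document.
write_env_sketch() {
    tag=$1
    printf 'GO_FULA=functionland/go-fula:%s\n' "$tag"
    printf 'FX_SUPPROT=functionland/fxsupport:%s\n' "$tag"
    printf 'IPFS_CLUSTER=functionland/ipfs-cluster:%s\n' "$tag"
}

write_env_sketch "${TEST_TAG:-test147}"
```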
Option A: GitHub Actions (recommended)
Trigger the docker-image-test.yml workflow manually from the Actions tab. This builds all three images (fxsupport, go-fula, ipfs-cluster) with the test tag configured in env_test.sh and pushes to Docker Hub.
Option B: Local build + push to Docker Hub
```shell
cd docker
source env_test.sh   # or: TEST_TAG=test148 source env_test.sh
bash ./build_and_push_images.sh
```

This builds and pushes all three images:

- `functionland/fxsupport:<tag>`
- `functionland/go-fula:<tag>`
- `functionland/ipfs-cluster:<tag>`
Option C: On-device build (no Docker Hub push)
```shell
# Build fxsupport (seconds — just file copies)
cd /tmp/fula-ota/docker/fxsupport
sudo docker build --load -t functionland/fxsupport:test147 .

# Build go-fula (requires Go compilation)
cd /tmp/fula-ota/docker/go-fula
git clone -b main https://github.com/functionland/go-fula
sudo docker build --load -t functionland/go-fula:test147 .

# Build ipfs-cluster (requires Go compilation)
cd /tmp/fula-ota/docker/ipfs-cluster
git clone -b master https://github.com/ipfs-cluster/ipfs-cluster
sudo docker build --load -t functionland/ipfs-cluster:test147 .
```

If you built with Options A or B (images pushed to Docker Hub), just update .env and restart:
```shell
# 1. Update .env to use test tags
sudo tee /usr/bin/fula/.env << 'EOF'
GO_FULA=functionland/go-fula:test147
FX_SUPPROT=functionland/fxsupport:test147
IPFS_CLUSTER=functionland/ipfs-cluster:test147
WPA_SUPLICANT_PATH=/etc
CURRENT_USER=pi
EOF

# 2. Restart fula — it will pull the test147 images from Docker Hub and start them
sudo systemctl restart fula
```

That's it. `fula.sh` runs `docker-compose pull --env-file .env`, which pulls whatever tags are in `.env`. Watchtower also respects the running container's tag — it checks for updates to `:test147`, not `:release`.

The `docker cp` step (which copies files from `fula_fxsupport` to `/usr/bin/fula/`) is also safe: the `fxsupport:test147` image was built with `env_test.sh`, so the `.env` baked inside it already has test tags.
If you built on-device (Option C), you must also block Docker Hub so fula.sh doesn't pull release images over your local builds:
```shell
# Block Docker Hub BEFORE restarting
sudo bash -c 'echo "127.0.0.1 index.docker.io registry-1.docker.io" >> /etc/hosts'
sudo systemctl restart fula
```

Verify the deployment:

```shell
# Check that the running images have the correct tags
sudo docker ps --format '{{.Names}}\t{{.Image}}'
# Expected: fula_go -> functionland/go-fula:test147, etc.

# Check logs
sudo docker logs fula_go --tail 50
sudo docker logs ipfs_host --tail 50
sudo docker logs ipfs_cluster --tail 50
```

To roll back to release:

```shell
# 1. If Docker Hub was blocked, unblock it
sudo sed -i '/index.docker.io/d' /etc/hosts

# 2. Restore the release .env
sudo tee /usr/bin/fula/.env << 'EOF'
GO_FULA=functionland/go-fula:release
FX_SUPPROT=functionland/fxsupport:release
IPFS_CLUSTER=functionland/ipfs-cluster:release
WPA_SUPLICANT_PATH=/etc
CURRENT_USER=pi
EOF

# 3. Restart fula (will pull release images)
sudo systemctl restart fula
```

Notes:

- **On-device builds need Docker Hub blocked**: `fula.sh` runs `docker-compose pull` on every restart if internet is available. This pulls from Docker Hub using the tags from `.env`. For remote-built test images (Options A/B), this is fine — it pulls your `:test147` images. But for on-device builds (Option C), the pull would overwrite your local images with the Docker Hub versions. Block Docker Hub in `/etc/hosts` for on-device builds.
- **Watchtower tag matching**: Watchtower checks for updates to the exact tag the container is running. Containers running `:test147` won't be replaced with `:release`.
- **`docker cp` and `.env`**: On every restart, `fula.sh` copies files from the `fula_fxsupport` container to `/usr/bin/fula/`, including `.env`. This is safe when using test images built through `env_test.sh` (the `.env` inside the image matches your test tags). Use `stop_docker_copy.txt` only if you manually edited `.env` on-device without rebuilding fxsupport.
- **`stop_docker_copy.txt` expires after 24 hours**: If needed, this file in `/home/pi/` blocks the `docker cp` step. It expires after 24h — touch it again to extend.
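The 24-hour expiry check on the flag file can be sketched as follows; fula.sh's exact test may differ, and the use of find's `-mmin` here is an assumption:

```shell
# Sketch: the flag blocks docker cp only while it is < 24h (1440 min) old.
copy_blocked() {
    # true (exit 0) if the flag file exists and was modified < 24h ago
    [ -n "$(find "$1" -mmin -1440 2>/dev/null)" ]
}

flag=$(mktemp)                  # stands in for /home/pi/stop_docker_copy.txt
if copy_blocked "$flag"; then
    echo "docker cp skipped"    # printed: the flag is fresh
fi
touch -t 202001010000 "$flag"   # pretend the flag is years old
copy_blocked "$flag" || echo "flag expired, docker cp runs"
rm -f "$flag"
```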
To test code changes from this repo on a device that is already running the production Docker images, without pushing to Docker Hub:
If you only changed files under docker/fxsupport/linux/ (scripts, configs, env files), this is the fastest approach since fxsupport is just an alpine image with file copies:
```shell
# On the device:
# 1. Get the latest code
cd /tmp && git clone --depth 1 https://github.com/functionland/fula-ota.git
# Or if already cloned:
cd /tmp/fula-ota && git pull

# 2. Build the fxsupport image locally (takes seconds)
cd /tmp/fula-ota/docker/fxsupport
sudo docker build --load -t functionland/fxsupport:release .

# 3. Stop watchtower so it doesn't pull the old image from Docker Hub
sudo docker stop fula_updater

# 4. Restart fula — docker cp will now copy YOUR files from the local image
sudo systemctl restart fula
```

If you also changed go-fula or ipfs-cluster code (Go compilation, takes longer):
```shell
# Build fxsupport (seconds)
cd /tmp/fula-ota/docker/fxsupport
sudo docker build --load -t functionland/fxsupport:release .

# Build ipfs-cluster (requires Go compilation)
cd /tmp/fula-ota/docker/ipfs-cluster
git clone -b master https://github.com/ipfs-cluster/ipfs-cluster
sudo docker build --load -t functionland/ipfs-cluster:release .

# Build go-fula (requires Go compilation)
cd /tmp/fula-ota/docker/go-fula
git clone -b main https://github.com/functionland/go-fula
sudo docker build --load -t functionland/go-fula:release .

# Stop watchtower, then restart
sudo docker stop fula_updater
sudo systemctl restart fula
```

For rapid iteration, copy files directly and prevent fula.sh from overwriting them via docker cp:
```shell
# 1. Copy the changed files
sudo cp /tmp/fula-ota/docker/fxsupport/linux/.env.cluster /usr/bin/fula/.env.cluster
sudo cp /tmp/fula-ota/docker/fxsupport/linux/ipfs-cluster/ipfs-cluster-container-init.d.sh /usr/bin/fula/ipfs-cluster/ipfs-cluster-container-init.d.sh
sudo cp /tmp/fula-ota/docker/fxsupport/linux/kubo/config /usr/bin/fula/kubo/config
sudo cp /tmp/fula-ota/docker/fxsupport/linux/fula.sh /usr/bin/fula/fula.sh
sudo cp /tmp/fula-ota/docker/fxsupport/linux/update_kubo_config.py /usr/bin/fula/update_kubo_config.py

# 2. Block docker cp from overwriting your files (valid for 24 hours)
touch /home/pi/stop_docker_copy.txt

# 3. Restart using your files
sudo /usr/bin/fula/fula.sh restart

# 4. When done testing, remove the block so normal OTA updates resume
rm /home/pi/stop_docker_copy.txt
```

Verify the deployed configuration:

```shell
# Kubo StorageMax (should be ~80% of drive size, min 800GB)
cat /home/pi/.internal/ipfs_data/config | jq '.Datastore.StorageMax'

# Kubo AcceleratedDHTClient
cat /home/pi/.internal/ipfs_data/config | jq '.Routing.AcceleratedDHTClient'
# Expected: true

# Cluster connection_manager
cat /uniondrive/ipfs-cluster/service.json | jq '.cluster.connection_manager'
# Expected: high_water: 400, low_water: 100

# Cluster concurrent_pins
cat /uniondrive/ipfs-cluster/service.json | jq '.pin_tracker.stateless.concurrent_pins'
# Expected: 5

# Cluster batching
cat /uniondrive/ipfs-cluster/service.json | jq '.consensus.crdt.batching'
# Expected: max_batch_size: 100, max_batch_age: "1m"

# IPFS repo stats
docker exec ipfs_host ipfs repo stat

# IPFS DHT status (should show a full routing table with AcceleratedDHTClient)
docker exec ipfs_host ipfs stats dht

# Cluster peers
docker exec ipfs_cluster ipfs-cluster-ctl peers ls | wc -l
```

| Command | Description |
|---|---|
| `start` | Start all containers (runs the config merge, then `docker-compose up`). |
| `restart` | Same as `start`. |
| `stop` | Stop all containers (`docker-compose down`). |
| `rebuild` | Full rebuild: install dependencies, copy files, `docker-compose build`. |
| `update` | Pull the latest Docker images. |
| `install` | Run the initial installer. |
| `help` | List all commands. |
Env vars injected into the ipfs-cluster container. Key settings:
| Variable | Value | Purpose |
|---|---|---|
| `CLUSTER_CONNMGR_HIGHWATER` | 400 | Max peer connections (prevents the 60-node ceiling) |
| `CLUSTER_CONNMGR_LOWWATER` | 100 | Connection pruning target |
| `CLUSTER_MONITORPINGINTERVAL` | 60s | How often peers ping the monitor |
| `CLUSTER_STATELESS_CONCURRENTPINS` | 5 | Parallel pin operations (lower for ARM I/O) |
| `CLUSTER_IPFSHTTP_PINTIMEOUT` | 15m0s | Timeout for individual pin operations |
| `CLUSTER_PINRECOVERINTERVAL` | 8m0s | How often to retry failed pins |
Template for IPFS node configuration. Managed fields are merged into the deployed config on every restart. Key settings:
- `AcceleratedDHTClient: true` - full Amino DHT routing table with parallel lookups
- `StorageMax: "800GB"` - static fallback; dynamically set to 80% of the drive on startup
- `ConnMgr.HighWater: 200` - IPFS swarm connection limit
- `Libp2pStreamMounting: true` - required for go-fula p2p protocol forwarding
Selectively merges managed fields from template into deployed config while preserving device-specific settings (Identity, Datastore). Runs on every fula.sh start.
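For illustration, the selective merge can be approximated with jq. Only two managed fields are shown here; update_kubo_config.py manages more, and the sample JSON below is made up for the demo:

```shell
# Sketch: managed fields (Bootstrap, Routing) come from the template,
# everything else (Identity, Datastore, ...) stays from the deployed config.
tpl=$(mktemp); dep=$(mktemp)
echo '{"Bootstrap":["new-peer"],"Routing":{"AcceleratedDHTClient":true}}' > "$tpl"
echo '{"Bootstrap":["old-peer"],"Routing":{"AcceleratedDHTClient":false},"Identity":{"PeerID":"device-specific"}}' > "$dep"

# $t[0] is the template object; assignments overwrite only the managed fields
jq --slurpfile t "$tpl" \
   '.Bootstrap = $t[0].Bootstrap | .Routing = $t[0].Routing' "$dep"
rm -f "$tpl" "$dep"
```

The output keeps the deployed `Identity` untouched while adopting the template's `Bootstrap` and `Routing` values.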
- Watchtower polls Docker Hub every 3600 seconds (1 hour). After pushing new images, devices update within the next polling cycle.
- `stop_docker_copy.txt` — when this file exists in `/home/pi/` and was modified within the last 24 hours, `fula.sh` skips the `docker cp` step. Useful for testing local file changes without them being overwritten.
- Kubo uses the upstream `ipfs/kubo:release` image (not custom-built). Only the config template and init script are customized.
- go-fula and ipfs-cluster are custom-built from source in their respective Dockerfiles using Go 1.25.
- All containers except kubo run with `privileged: true` and `CAP_ADD: ALL`.