Reference documentation for every application running across the homelab. Each entry covers what the app does, where it lives, how to access it, configuration notes, and any quirks worth remembering.
What it is: Web UI for managing Docker containers, images, volumes, and networks. The control plane for everything running in CT 100.
| URL | https://192.168.8.100:9443 |
| Host | CT 100 (192.168.8.100) |
| Image | portainer/portainer-ce:lts |
| Data | /var/lib/docker/volumes/portainer_data |
Notes:
- Use Portainer to inspect container logs, restart services, and update images (pull → recreate)
- All other Docker services are deployed via `docker-compose` files in `/opt/docker/` on CT 100
- Portainer itself is started via a standalone `docker run` command, not compose
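The exact run command isn't recorded here; a sketch of the typical Portainer CE invocation, with flags assumed to match the image and port in the table above rather than captured from the actual host:

```
docker run -d \
  --name portainer \
  --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:lts
```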
What it is: Self-hosted uptime monitoring dashboard. Pings services on a schedule and alerts via Gotify when something goes down.
| URL | http://192.168.8.100:3001 |
| Host | CT 100 (192.168.8.100) |
| Image | louislam/uptime-kuma:1 |
| Data | /opt/docker/uptime-kuma/data |
Monitored services:
- Proxmox Web UI (192.168.8.221:8006)
- Portainer (192.168.8.100:9443)
- Gotify (192.168.8.100:8070)
- N8N (192.168.8.100:5678)
- Pangolin VPS (pangolin.troglodyteconsulting.com)
- Mac Studio ping (192.168.8.180)
Notes:
- Notifications route to Gotify (push to phone)
- Add new monitors from the dashboard – no config file needed
What it is: Self-hosted push notification server. Uptime Kuma and other services send alerts here; the Gotify app on your phone receives them.
| URL | http://192.168.8.100:8070 |
| Host | CT 100 (192.168.8.100) |
| Image | gotify/server:latest |
| Data | /opt/docker/gotify/data |
Notes:
- Install the Gotify app on iPhone and point it at http://192.168.8.100:8070
- Each sending service (Uptime Kuma, N8N, etc.) needs its own application token – create it in the Gotify web UI under Apps
- Messages are stored and browsable in the web UI
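Sending a message is a single HTTP POST against Gotify's message endpoint – a sketch, using a placeholder app token (create a real one under Apps first):

```
# Push a test notification – AppTokenHere is a placeholder
curl -s "http://192.168.8.100:8070/message?token=AppTokenHere" \
  -F "title=Test alert" \
  -F "message=Hello from the homelab" \
  -F "priority=5"
```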
What it is: Visual workflow automation engine – think self-hosted Zapier/Make. Connects services together via trigger → action workflows.
| URL | http://192.168.8.100:5678 |
| Host | CT 100 (192.168.8.100) |
| Image | n8nio/n8n:latest |
| Data | /opt/docker/n8n/data |
Notes:
- Workflows are stored in the data volume – back this up before updating the image
- Can trigger on webhooks, schedules, or incoming messages
- Integrates with Gotify for sending alerts from custom workflows
- Accessible externally via Pangolin if a public webhook endpoint is needed
What it is: Automated music collection manager. Monitors wanted artists/albums, finds them on Usenet via NZBHydra2, and sends download requests to NZBGet on the seedbox.
| URL | http://192.168.8.100:8686 |
| Host | CT 100 (192.168.8.100) |
| Image | lscr.io/linuxserver/lidarr:nightly |
| Data | /opt/lidarr/data |
| Music | /mnt/music (bind mount → /nvmepool/music) |
Pipeline:
- Daily 4am cron triggers `MissingAlbumSearch` – Lidarr actively searches all monitored missing albums
- Lidarr queries Headphones VIP indexer (primary – proper `t=music` support) and NZBHydra2 (broken for music – altHUB doesn’t support `t=music`)
- Matched releases sent to NZBGet on seedbox (tunnel at 192.168.8.221:16789)
- NZBGet downloads to `completed/Music/` on seedbox at ~70 MB/s
- `seedbox-sync.sh` (every 15 min) pulls to `/nvmepool/ingest/Music/` on Proxmox
- Lidarr import picks up files and moves to `/mnt/music` (requires ≥80% MusicBrainz match)
- Navidrome rescans and adds to library
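The 4am search can also be kicked off by hand via Lidarr's command API – a sketch; the API-key value is a placeholder (the real key is under Settings → General in Lidarr):

```
curl -s -X POST "http://192.168.8.100:8686/api/v1/command" \
  -H "X-Api-Key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "MissingAlbumSearch"}'
```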
Notes:
- Image: `lscr.io/linuxserver/lidarr:nightly` (nightly required for plugin support)
- Quality profile: Lossless (FLAC) preferred
- Release profile blocks: Greatest Hits, Best Of, Collection, Anthology, etc. (prevents import loop)
- 114 monitored artists, ~3,819 missing albums (March 2026)
- Plugins planned: Tidal (TrevTV), Tubifarry (TypNull)
- See Music Pipeline page for full detail
What it is: Self-hosted audiobook and podcast server with a polished web UI and mobile apps.
| URL | http://192.168.8.100:13378 |
| Host | CT 100 (192.168.8.100) |
| Image | ghcr.io/advplyr/audiobookshelf:latest |
| Books | /mnt/audiobookshelf (bind mount → /nvmepool/audiobookshelf) |
| Data | /opt/docker/audiobookshelf/data |
Notes:
- iOS app available – connects to the local URL or via Pangolin tunnel for remote access
- Supports progress sync across devices
- Podcast feeds can be added directly – no separate podcast app needed
- Metadata fetched from Audible and Google Books automatically
What it is: Book tracking and discovery app – think self-hosted Goodreads backed by the Hardcover catalog.
| URL | http://192.168.8.100:8787 |
| Host | CT 100 (192.168.8.100) |
| Image | ghcr.io/pennydreadful/bookshelf:hardcover |
| Data | /mnt/bookshelf (bind mount → /nvmepool/bookshelf) |
Notes:
- Uses the Hardcover API for book metadata and cover art
- Track read/reading/want-to-read status
- Separate from Audiobookshelf – this is for physical/ebook tracking, not audio playback
What it is: Book and audiobook search tool that aggregates sources. Proxied outbound through the seedbox SOCKS5 tunnel for exit via Netherlands IP.
| URL | http://192.168.8.100:8084 |
| Host | CT 100 (192.168.8.100) |
| Image | ghcr.io/calibrain/shelfmark:latest |
| Proxy | SOCKS5 via 192.168.8.100:1080 → seedbox (ismene.usbx.me, NL exit) |
Notes:
- The SOCKS5 proxy is provided by `seedbox-socks.service` (autossh systemd service on CT 100)
- Shelfmark is also exposed as a Pangolin private resource – accessible remotely without opening a local port
- If searches fail, check that `seedbox-socks.service` is running: `systemctl status seedbox-socks.service` on CT 100
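A quick end-to-end check of the proxy path – the second command should print the seedbox's NL exit IP, not the home WAN IP (ifconfig.me is just one convenient echo service, not part of the stack):

```
systemctl status seedbox-socks.service --no-pager
curl -s --socks5-hostname 192.168.8.100:1080 https://ifconfig.me
```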
What it is: Self-hosted RSS feed aggregator with a clean web UI and API support for mobile clients.
| URL | http://192.168.8.100:8180 |
| Host | CT 100 (192.168.8.100) |
| Image | freshrss/freshrss:latest |
| Data | /opt/docker/freshrss/data |
Notes:
- Compatible with Fever and Google Reader APIs – most RSS apps on iOS connect via one of these
- OPML import/export supported for migrating feeds
- Feeds refresh on a configurable schedule (default every hour)
What it is: Self-hosted file sync, sharing, and collaboration platform. Runs with a MariaDB backend.
| URL | http://192.168.8.100:8280 |
| Host | CT 100 (192.168.8.100) |
| Images | nextcloud:latest + mariadb:11 (nextcloud-db) |
| Data | /opt/docker/nextcloud/data |
Notes:
- Database container (`nextcloud-db`) must be running for Nextcloud to function – they share a Docker network
- Desktop sync client points to http://192.168.8.100:8280
- For remote access, expose via Pangolin – do not open port 8280 to the public internet directly
- Run `occ` commands via: `docker exec -u www-data nextcloud php occ <command>`
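A few common `occ` invocations in that form (standard occ subcommands, shown as examples):

```
docker exec -u www-data nextcloud php occ status
docker exec -u www-data nextcloud php occ maintenance:mode --on   # before upgrades
docker exec -u www-data nextcloud php occ files:scan --all        # re-index externally changed files
```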
What it is: The remote access stack running on the SSDNodes VPS. Pangolin manages tunnels and access control, Gerbil handles WireGuard, and Traefik is the reverse proxy.
| Dashboard | https://pangolin.troglodyteconsulting.com |
| Host | VPS – 172.93.50.184 |
| Images | fosrl/pangolin:1.16.2, fosrl/gerbil:1.3.0, traefik:v3.6 |
Architecture:
- VPS runs Pangolin + Gerbil (WireGuard) + Traefik
- Proxmox host (192.168.8.221) runs Newt as a systemd service – creates outbound WireGuard tunnel to VPS
- All LAN devices reachable through the tunnel as Pangolin resources
- Farm Proxmox (192.168.0.191) runs its own Newt for the 192.168.0.x subnet
Private resources (accessible without exposing local ports):
- Proxmox Web UI
- Mac Studio
- Router
- Shelfmark
Notes:
- Newt on Proxmox is a systemd service: `systemctl status newt` on 192.168.8.221
- To add a new resource: Pangolin Dashboard → Sites → Proxmox → Add Resource
- Pangolin version: Community Edition 1.16.2 – check GitHub releases for updates
Services running directly on macOS – not containerized.
| Service | Port | URL | Notes |
|---|---|---|---|
| Hugo Hub | 1313 | http://192.168.8.180:1313 | This site – run with `hugo server` in the bee-hub directory |
| Paperless-NGX | 8100 | http://192.168.8.180:8100 | Document management – Docker on Mac |
| Life Archive API | 8900 | http://192.168.8.180:8900 | RAG search API for personal knowledge base |
| Life Archive MCP | 8901 | http://192.168.8.180:8901/mcp | MCP server – exposes Life Archive to Claude |
| Embed Server | 1235 | localhost:1235 | gte-Qwen2-7B on Apple MPS – local only |
| SyncThing | 8384 | http://192.168.8.180:8384 | File sync between devices |
Notes:
- Hugo Hub serves the bee-hub documentation site during development; rebuild with `hugo` to update `/public`
- Life Archive API and MCP server start via launch agents or manual scripts – check `~/scripts/` for startup commands
- Embed server (LM Studio or custom) must be running for Life Archive RAG embeddings to work
Remote Usenet server accessed via SSH tunnels on Proxmox.
| Service | Seedbox Port | Local Tunnel | Notes |
|---|---|---|---|
| NZBGet | 13036 | http://192.168.8.221:16789 | Usenet downloader |
| NZBHydra2 | 13033 | http://192.168.8.221:15076 | Indexer aggregator |
| SOCKS5 | – | 192.168.8.100:1080 | Outbound proxy for Shelfmark via CT 100 |
| SSH | 22 | – | ssh delgross@46.232.210.50 |
Tunnel management:
Both NZBGet and NZBHydra2 tunnels run as systemd services on Proxmox (`nzbget-tunnel.service`, `nzbhydra2-tunnel.service`). Check with `systemctl status nzbget-tunnel` on 192.168.8.221.
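The exact unit contents aren't captured here; an illustrative sketch of what such a tunnel service typically looks like (key setup, SSH options, and paths are assumptions – port mappings follow the table above):

```
[Unit]
Description=SSH tunnel to seedbox NZBGet
After=network-online.target

[Service]
# Forward local 16789 to the seedbox's NZBGet port (13036)
ExecStart=/usr/bin/ssh -N -o ServerAliveInterval=60 \
    -L 192.168.8.221:16789:localhost:13036 delgross@46.232.210.50
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```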
What it is: Lightweight web-based server management UI – CPU, memory, disk, storage, logs, services, terminal, and updates all in one browser tab. Runs on both Proxmox and the VPS.
| Instance | URL | Notes |
|---|---|---|
| Proxmox | https://192.168.8.221:9090 | System management for Proxmox host |
| VPS | https://172.93.50.184:9090 | System management for SSDNodes VPS |
Notes:
- Cockpit is socket-activated – `cockpit.service` shows inactive until you open the URL, which is normal. `cockpit.socket` is always listening on port 9090.
- Login with the system root credentials
- Useful for checking logs (`journalctl`), restarting services, monitoring disk/CPU, and running a quick terminal session without SSH
- Self-signed cert – browser will warn on first visit; just accept it
What it is: Open-source smart home automation platform. Runs at the Farm (Brownsville, 192.168.0.x subnet).
| Local URL | http://192.168.0.50:8123 |
| Remote URL | https://ha.troglodyteconsulting.com |
| Host | Farm Docker CT 100 (192.168.0.100) |
| Network | Farm subnet 192.168.0.x – separate from home 192.168.8.x |
Remote access:
- Exposed via Pangolin tunnel from Farm Proxmox (192.168.0.191)
- Farm Proxmox runs its own Newt as a systemd service (not Docker)
- Accessible at ha.troglodyteconsulting.com when the Farm tunnel is up
Notes:
- Farm Proxmox (192.168.0.191) is on a separate network and not always reachable from home – use the Pangolin remote URL when off-site or if the LAN route is unavailable
- Farm also runs its own Portainer (192.168.0.100:9443), Uptime Kuma (192.168.0.100:3001), and Gotify (192.168.0.100:8070)
- HA configuration lives in the Docker data volume on Farm CT 100 – back up before updates
What it is: Self-hosted document management system with OCR, auto-tagging, and full-text search. Runs on the Mac Studio via Docker.
| URL | http://192.168.8.180:8100 |
| Host | Mac Studio (192.168.8.180) |
| Compose file | ~/paperless-ngx/docker/docker-compose.yml |
| Images | paperless-ngx, postgres:16, redis:7, paperless-ai (stopped) |
Start/stop:
cd ~/paperless-ngx/docker
docker compose up -d
docker compose down
Consume folder: Drop files into ~/paperless-ngx/consume/ to ingest. Paperless OCRs, tags, and indexes automatically.
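For example (the `webserver` container name follows the stock Paperless compose file and is an assumption here):

```
cp ~/Downloads/scan.pdf ~/paperless-ngx/consume/
cd ~/paperless-ngx/docker && docker compose logs -f webserver   # watch OCR/ingest progress
```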
Key volumes:
| Host path | Purpose |
|---|---|
| `~/paperless-ngx/data/` | SQLite DB and search index |
| `~/paperless-ngx/media/` | Stored documents |
| `~/paperless-ngx/consume/` | Drop files here to ingest |
| `~/paperless-ngx/export/` | Bulk export output |
Notes:
- `paperless-ai` container is stopped – was used for AI auto-tagging, disabled after memory issues
- Integrated with Life Archive – Paperless documents are a source for the RAG pipeline
- Admin login: `delgross` / see secure notes
BeeDifferent Hub is a Hugo-powered personal reference site running as a persistent service on the Mac Studio. It’s the documentation layer for the entire stack – apps, homelab, AI tools, terminal workflow, and property systems.
Hugo is a static site generator โ it reads Markdown files from content/ and produces a complete HTML site. The dev server watches for file changes and rebuilds automatically, so editing any _index.md shows up in the browser within a second.
| Item | Value |
|---|---|
| Site root | ~/Sync/ED/homelab/bee_hub/ |
| Content | ~/Sync/ED/homelab/bee_hub/content/ |
| Theme | themes/bee-theme/ (custom) |
| Config | hugo.toml |
| Hugo version | v0.155.3+extended (Homebrew) |
| URL (LAN) | http://192.168.8.180:1313 |
| URL (local) | http://localhost:1313 |
| URL (remote) | Via Pangolin – MacStudio private resource |
Hugo runs as a persistent launchd agent that starts at login and restarts automatically if it crashes.
| Item | Value |
|---|---|
| Plist | ~/Library/LaunchAgents/com.beedifferent.hugo-hub.plist |
| Label | com.beedifferent.hugo-hub |
| Command | hugo server --port 1313 --bind 192.168.8.180 --baseURL http://192.168.8.180:1313 |
| Working dir | ~/Sync/ED/homelab/bee_hub |
| Log | ~/Library/Logs/hugo-hub.log |
| Error log | ~/Library/Logs/hugo-hub.err.log |
Service commands:
# Check status
launchctl list | grep hugo
# Stop
launchctl unload ~/Library/LaunchAgents/com.beedifferent.hugo-hub.plist
# Start
launchctl load ~/Library/LaunchAgents/com.beedifferent.hugo-hub.plist
# Restart
launchctl unload ~/Library/LaunchAgents/com.beedifferent.hugo-hub.plist && \
launchctl load ~/Library/LaunchAgents/com.beedifferent.hugo-hub.plist
# View logs
tail -50 ~/Library/Logs/hugo-hub.log
tail -20 ~/Library/Logs/hugo-hub.err.log
Note: Hugo’s dev server caches hugo.toml. Changes to the menu (adding/removing nav tabs) require a service restart to appear. Content changes (_index.md files) hot-reload automatically.
Every page is a directory containing an _index.md file. The top-level nav is defined in hugo.toml:
| Weight | Tab | URL |
|---|---|---|
| 1 | Mac Apps | /mac-apps/ |
| 2 | Terminal | /terminal/ |
| 3 | System Settings | /system-settings/ |
| 4 | Menu Bar | /menu-bar/ |
| 5 | AI | /ai/ |
| 6 | Homelab | /homelab/ |
| 7 | Misc | /misc/ |
| 8 | Automation | /automation/ |
| 9 | Mac Studio | /mac-studio/ |
| 10 | Tana | /tana/ |
| 11 | Meshtastic | /meshtastic/ |
| 12 | Docs | /docs/ |
Content tree (homelab section example):
content/homelab/
├── _index.md                 ← Category landing (sidebar list)
├── proxmox/_index.md         ← Proxmox VE reference
├── services/_index.md        ← Services directory
├── music-pipeline/_index.md
├── life-archive/_index.md
├── mcp-servers/_index.md
├── bee-hub/_index.md         ← This page
└── shelfmark/_index.md
Category landing page โ has a sidebar with sub-page links:
title: "Homelab"
subtitle: "Infrastructure, services, and remote access"
sidebar_sections:
- { url: "/homelab/proxmox/", name: "Proxmox VE" }
sidebar_links:
- { name: "External Link", url: "https://example.com" }
Individual page โ has quick-access link buttons at top:
title: "Music Pipeline"
page_links:
- { label: "Lidarr", url: "http://192.168.8.100:8686", external: true }
- { label: "Internal Link", url: "/homelab/services/", external: false }
The bee-theme provides shortcodes for structuring content. All shortcode files live in themes/bee-theme/layouts/shortcodes/.
| Shortcode | Purpose | Parameters |
|---|---|---|
| `section` | Collapsible content block | id, title |
| `app-card` | App info card with links | title, page, docs, shortcuts, cheatsheet |
| `app-grid` | Grid wrapper for app-cards | none |
| `cmd` | Styled terminal command block | none (content is the command) |
| `grid` | Two-column grid layout | none |
| `tip` | Tip/note callout box | none |
The app-card shortcode supports a docs parameter that renders a “Docs →” link directly on the card – already used throughout the Mac Apps pages.
Edit an existing page:
Open content/section/page/_index.md in any editor (BBEdit, VS Code, Obsidian). Hugo hot-reloads within ~1 second.
Add a new page:
mkdir -p ~/Sync/ED/homelab/bee_hub/content/homelab/new-page
# Create _index.md with frontmatter and content
# Add to parent _index.md sidebar_sections list
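A minimal `_index.md` for the new page, following the frontmatter patterns shown above (the title/subtitle values are placeholders):

```
---
title: "New Page"
subtitle: "One-line description"
---
```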
Add a new nav tab:
- Create `content/new-section/_index.md`
- Add an entry to `hugo.toml` with a `name`, `url`, and `weight`
- Restart the Hugo service – menu changes need a restart, content changes do not
Can I edit in Obsidian?
Yes โ all Hugo content is plain Markdown. Open ~/Sync/ED/homelab/bee_hub/content/ as an Obsidian vault. The YAML frontmatter block must stay intact. Hugo shortcodes show as raw text in Obsidian’s preview but edit safely.
The dev server serves content dynamically – no build step needed for day-to-day editing.
To regenerate the static /public/ folder:
cd ~/Sync/ED/homelab/bee_hub
hugo
# Output goes to /public/
The /public/ folder is included in the Mac Studio SYNC directory and gets pulled to Proxmox /nvmepool/sync/ by the nightly sync-mac.sh rsync job.
Installed April 19, 2026 on edge01 VPS (172.93.50.184). Replaces Pangolin/Traefik as the VPS’s reverse proxy and public web server.
Caddy is the single reverse proxy fronting public-facing services on the VPS. It handles:
- Automatic HTTPS – cert issuance and renewal via Let’s Encrypt. No certbot, no cron jobs, no manual work, ever.
- Static file serving – hosts the Bee Hub at troglodyteconsulting.com.
- Reverse proxy – routes subdomains to LAN services via NetBird mesh (once NetBird is set up).
- HTTP/3 support – out of the box on port 443/udp.
- Cloudflare DNS-01 challenge – for wildcard certs on `*.edmd.me` (via the custom build with the Cloudflare DNS module).
| Version | v2.11.2 |
| Custom build | Yes (xcaddy + github.com/caddy-dns/cloudflare) |
| Binary | /usr/bin/caddy |
| Config | /etc/caddy/Caddyfile |
| Env file | /etc/caddy/cloudflare.env (CF_API_TOKEN) |
| Data dir | /var/lib/caddy/ |
| Web root | /var/www/bee-hub/ |
| Service | systemctl {status,reload,restart} caddy |
| Logs | journalctl -u caddy |
When you add a site block to the Caddyfile:
n8n.troglodyteconsulting.com {
    reverse_proxy 192.168.8.100:5678
}
On systemctl reload caddy, Caddy:
- Notices the domain is public and has no cert yet
- Contacts Let’s Encrypt via ACME
- Proves ownership – HTTP-01 challenge (default) or DNS-01 (for wildcards)
- Receives and installs the cert
- Starts serving HTTPS on 443
- Redirects HTTP → HTTPS automatically
All of that in seconds. Renewals (at 30 days remaining) happen silently in the background. You never touch certs again.
What Caddy handles forever:
- Initial cert request
- Automatic renewal
- OCSP stapling
- Fallback from Let’s Encrypt to ZeroSSL if LE is down
- Modern TLS (1.3, correct ciphers, HSTS)
- Certificate hot-reload without dropping connections
Every site block is independent. The starter Caddyfile looks like:
{
    # Global options
    email doctor@edwarddelgrosso.com
}

# Main Bee Hub site – static files
troglodyteconsulting.com, www.troglodyteconsulting.com {
    root * /var/www/bee-hub
    file_server
    encode gzip zstd
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains"
        X-Content-Type-Options "nosniff"
        Referrer-Policy "strict-origin-when-cross-origin"
        -Server
    }
}
# Subdomain reverse-proxy (requires NetBird mesh)
# n8n.troglodyteconsulting.com {
#     reverse_proxy 192.168.8.100:5678
# }

# Wildcard via Cloudflare DNS-01 (needs CF_API_TOKEN)
# *.edmd.me {
#     tls {
#         dns cloudflare {env.CF_API_TOKEN}
#     }
#     @lidarr host lidarr.edmd.me
#     handle @lidarr { reverse_proxy 192.168.8.100:8686 }
# }

# Catch-all 404 for unknown Host headers
:80, :443 {
    respond "Not configured" 404
}
Adding a new service is three lines:
newservice.troglodyteconsulting.com {
    reverse_proxy 192.168.8.100:PORT
}
Then: systemctl reload caddy. Cert issued within seconds, service live.
For *.edmd.me wildcard certs, Caddy uses DNS-01 challenge via Cloudflare’s API.
- API token lives in `/etc/caddy/cloudflare.env` as `CF_API_TOKEN=...` (mode 600, caddy:caddy ownership)
- Scope – Edit zone DNS on both `edmd.me` and `troglodyteconsulting.com` zones
- Referenced in the Caddyfile as `{env.CF_API_TOKEN}` inside the `tls` block:
*.edmd.me {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
    # ... site blocks ...
}
Creating a new token – dash.cloudflare.com/profile/api-tokens, pick the “Edit zone DNS” template, select the target zones.
Rotation โ after replacing the token, systemctl reload caddy picks up the new env file.
Test config before reload (always):
caddy validate --config /etc/caddy/Caddyfile --adapter caddyfile
Reload (graceful, no dropped connections):
systemctl reload caddy
Watch cert issuance in real time:
journalctl -u caddy -f | grep -iE 'cert|acme|tls'
List loaded modules (including cloudflare):
caddy list-modules | grep -iE 'cloudflare|dns'
Certificates stored in:
/var/lib/caddy/.local/share/caddy/certificates/acme-v02.api.letsencrypt.org-directory/
Common troubleshooting:
| Symptom | Likely cause | Fix |
|---|---|---|
| HTTP 404 on domain but DNS resolves | Site block missing from Caddyfile | Add block, reload |
| Cert issuance fails | Port 80 blocked OR DNS hasn’t propagated | Check UFW, dig the domain |
| `{env.CF_API_TOKEN}` is empty | `cloudflare.env` not loaded by systemd | Check the systemd unit's `EnvironmentFile=` directive |
| Reload silently fails | Syntax error in Caddyfile | caddy validate first |
| Slow first request after reload | Cert being issued | Check journal; second request is fast |
- No certbot, ever. Caddy manages all ACME interactions internally. Any certbot tutorial you find online does not apply.
- Port 80 must be open for HTTP-01 challenge. UFW allows it on edge01.
- Cloudflare proxy (orange cloud) is OK – DNS-01 doesn’t require the domain to resolve directly to the VPS. HTTP-01 requires it though, so prefer DNS-01 when Cloudflare proxy is on.
- Caddyfile syntax is indentation-loose but block braces matter. Always `caddy validate` before reload.
- Env vars in the Caddyfile use `{env.VAR_NAME}` syntax – note the dot separator, not underscore.
- Reverse-proxying to LAN services (192.168.8.x) only works over NetBird. Without the mesh, the VPS can’t reach LAN IPs.
- HTTP/3 works out of the box once port 443/udp is open in UFW. No extra config.
- `systemctl reload` vs `restart` – always prefer reload. Restart drops in-flight connections; reload does a graceful handoff.
- Deploy from Mac uses `/Users/bee/Sync/ED/homelab/bee_hub/deploy-vps.sh` – now targeting `root@172.93.50.184:/var/www/bee-hub`. Cron runs every 30 min.
- Cloudflare token rotation after any suspected exposure. Template: Edit zone DNS, both zones.
| Caddy | nginx + certbot | Traefik | |
|---|---|---|---|
| Cert mgmt | Automatic, zero config | Manual (certbot + cron) | Automatic |
| Config lines per site | ~3 | ~15-20 | YAML, more verbose |
| Systemd unit | 1 | 2 (nginx + certbot.timer) | 1 |
| HTTP/3 | Out of the box | Requires build flags | Config flag |
| Docker-native | No (but doesn’t need to be) | No | Yes, via labels |
| Best for | Self-hosted reverse proxy, mixed static + reverse-proxy | Heavy custom rewrite rules, fine-grained caching | Docker Compose stacks with many services |
Chose Caddy because edge01 is primarily a reverse proxy for a handful of LAN services plus the static Bee Hub site. Caddy’s zero-config HTTPS eliminates the certbot-renewal maintenance burden that plagued the previous Traefik/Pangolin setup.
All cron jobs, launchd agents, and persistent systemd services across every machine. Last updated: 2026-04-18
Health check script updated Apr 18: now monitors nvmepool, Biggest, backups, offsite (was referencing retired pool names). CWA cleanup cron re-added to CT100.
Mac Studio (192.168.8.180)
User Crontab (crontab -e as bee)
| Schedule | Command | Purpose |
|---|---|---|
| `*/30 * * * *` | `/Users/bee/Sync/ED/homelab/bee_hub/deploy-vps.sh` | Build Hugo site and rsync public/ to the VPS web root – every 30 min |
Homebrew Services
| Service | Purpose | Port |
|---|---|---|
| `syncthing` | Peer-to-peer file sync – hub-and-spoke via Proxmox | 22000 (sync), 8384 (UI, localhost only) |
launchd Agents (~/Library/LaunchAgents/)
All agents have RunAtLoad: true and KeepAlive: true.
| Plist | Script/Binary | Port | Purpose |
|---|---|---|---|
| `com.beedifferent.embed-server` | `~/Sync/ED/life_archive/embed_server.py` | 1235 | gte-Qwen2-7B embedding server on MPS |
| `com.beedifferent.hugo-hub` | `/opt/homebrew/bin/hugo server` | 1313 | Bee Hub docs site on LAN |
| `com.beedifferent.life-archive-api` | `~/Sync/ED/life_archive/http_api.py` | 8900 | Life Archive RAG FastAPI |
| `com.beedifferent.life-archive-mcp-http` | `~/Sync/ED/life_archive/mcp_server_http.py` | 8901 | Life Archive MCP HTTP server |
launchctl list | grep beedifferent
launchctl kickstart -k gui/$(id -u)/com.beedifferent.hugo-hub
Proxmox VE Host (192.168.8.221)
Root Crontab
| Schedule | Command | Purpose |
|---|---|---|
| `*/15 * * * *` | `seedbox-sync.sh` | Pull from seedbox (nzbget Music+Books + general complete) |
| `*/15 * * * *` | `system-health-check.sh` | Disk, ZFS, backup, USB monitoring → Gotify |
| `0 1 * * *` | `backup-nvmepool-nightly.sh` | rsync nvmepool → Biggest/nvmepool-backup (nightly) |
| `0 3 1 * *` | `zpool scrub Biggest` | Monthly scrub – 1st of month |
| `0 3 8 * *` | `zpool scrub nvmepool` | Monthly scrub – 8th |
| `0 3 22 * *` | `zpool scrub backups` | Monthly scrub – 22nd |
| `0 4 * * *` | `curl … MissingAlbumSearch` | Lidarr missing album search via API |
Removed 2026-04-13:
- `sync-mac.sh` (daily 2am) – was failing with rsync error 12
- `sync-seedbox.sh` (every 2 min) – consolidated into seedbox-sync.sh
- Old `seedbox-sync.sh` (every 30 min) – replaced with consolidated version at 15 min
Systemd Services
| Service | Purpose |
|---|---|
| `syncthing@root.service` | File sync hub – UI at :8384 |
| `nzbget-tunnel.service` | SSH tunnel :16789 → seedbox NZBGet |
| `nzbhydra2-tunnel.service` | SSH tunnel :15076 → seedbox NZBHydra2 |
| `newt.service` | Pangolin WireGuard tunnel to VPS |
ZFS Auto-Snapshots (Proxmox built-in)
| Frequency | Keep |
|---|---|
| Every 15 min | 4 |
| Hourly | 24 |
| Daily | 31 |
| Weekly | 8 |
| Monthly | 12 |
Proxmox vzdump Backup
Daily 2:00 AM, all VMs/CTs, snapshot mode, zstd compression, 3 copies retained → /backups/dump
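The scheduled job is configured through the Proxmox backup UI; a roughly equivalent manual invocation, sketched with the `--prune-backups` retention flag as one way to express "3 copies retained":

```
vzdump --all --mode snapshot --compress zstd \
  --dumpdir /backups/dump --prune-backups keep-last=3
```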
CT 100 โ Docker Host (192.168.8.100)
Root Crontab
| Schedule | Command | Purpose |
|---|---|---|
| `*/30 * * * *` | `beet-full-pipeline.sh` | Beets pipeline: import, tag, art |
| `*/5 * * * *` | `kiwix-watcher.sh` | Kiwix content watcher |
| `0 5 * * *` | `clean-cwa-processed.sh` | Clean CWA processed_books older than 7 days |
Note: Beets pipeline was briefly disabled Apr 13 but re-enabled.
Systemd Services
| Service | Purpose |
|---|---|
| `docker.service` | Docker runtime for all containers |
| `seedbox-socks.service` | autossh SOCKS5 :1080 → seedbox (NL exit) |
SMB Shares (Proxmox, all in /etc/samba/smb.conf)
| Share | Path | Access |
|---|---|---|
| Review | /Biggest/Maple | read-write |
| Sync | /nvmepool/sync | read-only |
| Music | /nvmepool/music | read-write |
| Books | /nvmepool/books | read-write |
| Movies | /nvmepool/movies | read-write |
| Video | /nvmepool/video | read-write |
| Seedbox | – | REMOVED (Birch destroyed) |
| Media Staging | /Biggest/media-staging | read-write |
| backups | /backuppool | read-only |
| nvmepool-backup | /Biggest/nvmepool-backup | read-only |
| Possible Delete | /Biggest/Possible Delete | read-write |
All shares: valid users = bee, no registry shares (migrated 2026-04-13).
Quick Reference โ All Schedules
| Time | Machine | Job |
|---|---|---|
| Every 15 min | Proxmox | seedbox-sync.sh |
| Every 15 min | Proxmox | system-health-check.sh |
| Every 5 min | CT 100 | kiwix-watcher.sh |
| Every 30 min | CT 100 | beet-full-pipeline.sh |
| Every 30 min | Mac Studio | deploy-vps.sh – hugo build + push |
| Hourly | Proxmox | ZFS hourly snapshot (keep 24) |
| Daily 1:00 AM | Proxmox | backup-nvmepool-nightly.sh |
| Daily 2:00 AM | Proxmox | vzdump – backup all VMs/CTs |
| Daily 4:00 AM | Proxmox | Lidarr MissingAlbumSearch |
| Daily 5:00 AM | CT 100 | clean-cwa-processed.sh |
| Daily | Proxmox | ZFS daily snapshot (keep 31) |
| Weekly | Proxmox | ZFS weekly snapshot (keep 8) |
| 1st of month | Proxmox | zpool scrub Biggest |
| 8th of month | Proxmox | zpool scrub nvmepool |
| 22nd of month | Proxmox | zpool scrub backups |
| Monthly | Proxmox | ZFS monthly snapshot (keep 12) |
The Farm runs on a separate network (192.168.0.x) connected back to home via a dedicated Pangolin WireGuard tunnel. Farm Proxmox runs Newt as a systemd service, tunneling the entire 192.168.0.x subnet.
| Device | IP | URL | Notes |
|---|---|---|---|
| Farm Proxmox | 192.168.0.191 | https://192.168.0.191:8006 | Hypervisor – runs Newt as systemd service |
| Farm Docker CT 100 | 192.168.0.100 | https://192.168.0.100:9443 | Portainer |
| Home Assistant | 192.168.0.50 | http://192.168.0.50:8123 / ha.troglodyteconsulting.com | Smart home |
| Weather Station | 192.168.0.x | – | Orchard weather monitor |
Network: 192.168.0.x – separate from home (192.168.8.x). Not directly bridged. Access via Pangolin tunnel or physical presence.
| Service | Port | URL |
|---|---|---|
| Portainer | 9443 | https://192.168.0.100:9443 |
| Home Assistant | 8123 | http://192.168.0.50:8123 |
| Uptime Kuma | 3001 | http://192.168.0.100:3001 |
| Gotify | 8070 | http://192.168.0.100:8070 |
Farm connects to the Pangolin VPS via its own Newt instance running as a systemd service on Farm Proxmox (192.168.0.191).
Check tunnel status (when on Farm network or via Proxmox SSH):
ssh root@192.168.0.191 "systemctl status newt"
If tunnel is down:
ssh root@192.168.0.191 "systemctl restart newt"
Farm resources are configured as a separate site in the Pangolin dashboard at pangolin.troglodyteconsulting.com.
Home Assistant is the automation hub for the Farm – smart plugs, sensors, automations, and the weather station feed.
| Local URL | http://192.168.0.50:8123 |
| Remote URL | https://ha.troglodyteconsulting.com |
| Config backup | Docker volume on Farm CT 100 – back up before HA updates |
Update HA: Portainer → home-assistant container → Recreate with latest image (or use HA Settings → System → Updates).
| Size | 93 acres |
| Location | Southwestern Pennsylvania – Zone 6b |
| Primary goal | Pollinator paradise – ecological, agricultural, and conservation development |
| Beekeeping | Active hives on property |
Infrastructure projects: solar-powered PoE for remote sensors, Meshtastic nodes for off-grid communication coverage across the property.
Self-hosted RSS feed reader with full-text article retrieval. Running on Proxmox CT 100 (docker-host). Last updated: March 29, 2026.
FreshRSS aggregates RSS/Atom feeds into a single web-based reader. The instance runs as a Docker container on CT 100 alongside a Readability service for full-text content extraction. Feeds are checked every 30 minutes via built-in cron.
| Setting | Value |
|---|---|
| URL | http://192.168.8.100:8180 |
| Host | CT 100 (192.168.8.100) |
| Image | freshrss/freshrss:latest |
| Port | 8180 → 80 |
| Timezone | America/New_York |
| Cron | Minutes 1 and 31 (every 30 min) |
| User | delgross |
Compose file lives at /opt/freshrss/docker-compose.yml on CT 100. Both services share the freshrss_default network so FreshRSS can reach the Readability container by hostname.
services:
  freshrss:
    image: freshrss/freshrss:latest
    container_name: freshrss
    restart: unless-stopped
    ports:
      - "8180:80"
    volumes:
      - freshrss_data:/var/www/FreshRSS/data
      - freshrss_extensions:/var/www/FreshRSS/extensions
    environment:
      - TZ=America/New_York
      - CRON_MIN=1,31
  readability:
    image: phpdockerio/readability-js-server
    container_name: readability
    restart: unless-stopped

volumes:
  freshrss_data:
  freshrss_extensions:
Volumes:
| Volume | Container path | Purpose |
|---|---|---|
| `freshrss_data` | /var/www/FreshRSS/data | User config, SQLite database, logs |
| `freshrss_extensions` | /var/www/FreshRSS/extensions | All installed extensions |
By default, many RSS feeds only include a truncated summary. FreshRSS uses the Article Full Text (Af_Readability) extension to fetch the complete article content from the original website when new articles arrive. This runs the Fivefilters Readability.php library directly inside the FreshRSS container – no external service required.
How it works:
- FreshRSS fetches a feed and finds new articles (every 30 min via cron)
- For each new article in an enabled category, the extension makes an HTTP request to the article’s original URL
- Readability.php extracts the main article content, stripping navigation, ads, and other clutter
- The extracted full text replaces the truncated summary in FreshRSS
Configuration: All 5 categories are enabled for full-text extraction. The config is stored in /var/www/FreshRSS/data/users/delgross/config.php as:
```php
'ext_af_readability_categories' => '{"1":true,"2":true,"3":true,"4":true,"5":true}',
```
Important notes:
- Only applies to new articles fetched after the extension is enabled. Existing truncated articles are not retroactively updated.
- Some websites rate-limit rapid requests. The extension fetches each article URL sequentially without delay, so high-volume feeds from a single site may occasionally have missing content.
- To reprocess old articles for a feed: delete the feed’s articles in FreshRSS, then let the next cron cycle refetch them.
A standalone readability-js-server container is also deployed alongside FreshRSS. This is used by the xExtension-Readable extension (installed but not currently active) and provides an alternative full-text extraction backend via a Node.js API.
| Setting | Value |
|---|---|
| Container | readability |
| Image | phpdockerio/readability-js-server |
| Internal URL | http://readability:3000 (accessible from FreshRSS on same Docker network) |
| External port | None (internal only) |
This service is available if you want to switch from the built-in Readability.php library to the external readability-js parser. To activate: enable the Readable extension in FreshRSS settings, set the Readability Host to http://readability:3000, and select which feeds/categories to process.
Extensions are stored in the freshrss_extensions Docker volume (host path: /var/lib/docker/volumes/freshrss_freshrss_extensions/_data/).
| Extension | Status | Description |
|---|---|---|
| Af_Readability | ✅ Enabled | Full-text article extraction using FiveFilters Readability.php; no external service needed. Applied to all categories. |
| Clickable Links | ✅ Enabled | Makes URLs in article content clickable. |
| Title-Wrap | ✅ Enabled | Wraps long article titles in the feed list. |
| Readable | Installed (not enabled) | Alternative full-text extraction via external Readability/Mercury/FiveFilters services. Available as a backup โ uses the readability container. |
| ArticleSummary | Installed (not enabled) | AI-powered article summarization via OpenAI-compatible APIs. |
| Kagi Summarizer | Installed (not enabled) | Summarize articles using Kagi Universal Summarizer. |
| ThreePanesView | Installed (not enabled) | Three-column layout with article preview pane. |
| YouTube Video Feed | Installed (not enabled) | Displays YouTube videos inline in feeds. |
| User CSS | Installed (not enabled) | Custom CSS styling for the FreshRSS UI. |
To install new extensions, clone or copy the extension folder into the extensions volume on CT 100:
```shell
cd /var/lib/docker/volumes/freshrss_freshrss_extensions/_data/
git clone https://github.com/<author>/<extension>.git
```
| Category | Feed |
|---|---|
| Bees | Bee Culture Magazine |
| Gardening | Regenerative Flower Farming Blog |
| Gardening | The Spruce · Home Design Ideas and How Tos |
| Tech | MakeUseOf |
| Uncategorized | FreshRSS releases |
| Uncategorized | New in Feedly |
| Woodworking | Popular Woodworking |
Total: 7 feeds across 5 categories.
Updating FreshRSS:
```shell
cd /opt/freshrss
docker compose pull
docker compose up -d
```
Backup: User data (config, database, extension settings) lives in the freshrss_data named volume. Back up /var/lib/docker/volumes/freshrss_freshrss_data/_data/ before major updates.
OPML export: FreshRSS supports OPML import/export from the web UI under Settings → Import/Export. Feed URLs remain clean (no rewriting) since full-text retrieval is handled by the extension, not URL manipulation.
Adding full-text to a new category: If you add a new category in FreshRSS, update the ext_af_readability_categories value in /var/www/FreshRSS/data/users/delgross/config.php (inside the container) to include the new category ID, or toggle it on from the extension's configuration page in the FreshRSS web UI under Settings → Extensions → Article Full Text.
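Because the category map is plain JSON, the toggle can also be scripted. A minimal sketch; the `enable_category` helper is illustrative, not part of FreshRSS:

```python
import json

def enable_category(config_value: str, category_id: int) -> str:
    """Return the ext_af_readability_categories JSON string with one
    more category ID switched on."""
    categories = json.loads(config_value)
    categories[str(category_id)] = True
    # Compact separators match the single-line style stored in config.php.
    return json.dumps(categories, separators=(",", ":"))

current = '{"1":true,"2":true,"3":true,"4":true,"5":true}'
print(enable_category(current, 6))
# {"1":true,"2":true,"3":true,"4":true,"5":true,"6":true}
```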
Network scan + router DHCP client list, March 11, 2026: 32 devices on 192.168.8.x
| IP | Device | Hostname | MAC |
|---|---|---|---|
| 192.168.8.1 | GL.iNet GL-MT3000 (Beryl AX) | console.gl-inet.com | 94:83:C4:4B:17:8C |
| 192.168.8.100 | Docker CT 100 | – | BC:24:11:6B:8C:C6 |
| 192.168.8.180 | Mac Studio (Ethernet) | Mac-Studio.lan | 1C:1D:D3:E1:A1:EC |
| 192.168.8.182 | Mac Studio (Wi-Fi) | Mac-Studio.local | 5E:6A:4C:A1:2E:01 |
| 192.168.8.221 | Proxmox VE | – | 58:47:CA:7A:90:6B |
DHCP range: .100โ.249. Static reservations: .180 (Mac Studio) and .221 (Proxmox).
| Service | Port | URL |
|---|---|---|
| Portainer | 9443 | https://192.168.8.100:9443 |
| Uptime Kuma | 3001 | http://192.168.8.100:3001 |
| Gotify | 8070 | http://192.168.8.100:8070 |
| N8N | 5678 | http://192.168.8.100:5678 |
| Audiobookshelf | 13378 | http://192.168.8.100:13378 |
| Navidrome | 4533 | http://192.168.8.100:4533 |
| Lidarr | 8686 | http://192.168.8.100:8686 |
| Bookshelf | 8787 | http://192.168.8.100:8787 |
| FreshRSS | 8180 | http://192.168.8.100:8180 |
| Nextcloud | 8280 | http://192.168.8.100:8280 |
| Service | Port | URL |
|---|---|---|
| SSH | 22 | ssh bee@192.168.8.180 |
| Screen Sharing | 5900 | vnc://192.168.8.180 |
| Hugo Hub | 1313 | http://192.168.8.180:1313 |
| SyncThing | 8384 | http://192.168.8.180:8384 |
| Embed Server | 1235 | http://localhost:1235 (local only) |
| Service | Port | URL |
|---|---|---|
| Proxmox Web UI | 8006 | https://192.168.8.221:8006 |
| Cockpit | 9090 | https://192.168.8.221:9090 |
| Pangolin Newt | – | Systemd service: tunnel agent (outbound to VPS) |
| SSH | 22 | ssh root@192.168.8.221 |
| SMB Share | 445 | \\192.168.8.221\shared (user: bee) |
| IP | Node | Hostname | MAC |
|---|---|---|---|
| 192.168.8.140 | eero (main) | eero.lan | 80:DA:13:69:78:92 |
| 192.168.8.123 | eero #2 | eero-d066.local | 64:97:14:CC:1E:CD |
| 192.168.8.203 | eero #3 | eero-3y3p.local | 30:57:8E:F9:12:4B |
| 192.168.8.212 | eero #4 | eero-f8js.local | 80:DA:13:30:78:CB |
| 192.168.8.169 | eero #5 | eero-kchd.local | 9C:0B:05:FC:F7:B2 |
| IP | Device | Hostname | MAC |
|---|---|---|---|
| 192.168.8.115 | Sonos Speaker | sonos000E583E7186.local | 00:0E:58:3E:71:86 |
| 192.168.8.202 | Sonos Speaker (Living Room) | Sonos-804AF24F0200.local | 80:4A:F2:4F:02:00 |
| 192.168.8.101 | Office (2) | Office-2.local | A8:51:AB:2D:B3:8D |
| 192.168.8.116 | YouTube TV (Google TV Streamer) | b27f60d2-…local | 90:CA:FA:B5:AD:62 |
| 192.168.8.188 | Google Home device | fd5a0cba-…local | 22:40:9D:39:D6:20 |
| 192.168.8.233 | TiVo Stream 4K | TiVo-Stream-4K.lan | 00:11:D9:B6:13:14 |
| 192.168.8.141 | WiiM Ultra 9F58 | WiiM-Ultra-9F58.lan | 00:22:6C:31:6C:18 |
| 192.168.8.177 | Amazon device | none.local | 08:12:A5:B2:74:E4 |
| IP | Device | Hostname | MAC |
|---|---|---|---|
| 192.168.8.224 | Homey (smart home hub) | BRW5CF370CD7319.lan | 5C:F3:70:CD:73:19 |
| 192.168.8.187 | Tuya Smart device | – | 10:D5:61:91:95:FF |
| 192.168.8.245 | Weather Station | Orchard-Weather.lan | 8C:4B:14:DA:24:98 |
| IP | Device | Hostname | MAC |
|---|---|---|---|
| 192.168.8.190 | Office Printer (Brother HL-L3280CDW) | BRW14AC606430F6.local | 14:AC:60:64:30:F6 |
| 192.168.8.240 | Brother HL-L2460DW | BRN94DDF81BDF9A.lan | 94:DD:F8:1B:DF:9A |
| IP | Device | Hostname | MAC |
|---|---|---|---|
| 192.168.8.219 | ED’s iPhone | EdwardDosiPhone.lan | A8:81:7E:AE:6D:9C |
| 192.168.8.216 | iPhone | iPhone.lan | 36:87:14:43:B2:72 |
| 192.168.8.220 | iPhone | iPhone.local | FE:2A:1A:8E:41:24 |
| 192.168.8.110 | Apple Watch | Watch.lan | 4E:66:C3:C7:94:24 |
| 192.168.8.126 | Lois’s iPad (5) | iPad.lan | 6A:D0:65:C9:4B:74 |
| 192.168.8.234 | Lois’s iPad 2 | Lois-Ipad-2.lan | 10:B5:88:08:E7:D0 |
| IP | Hostname | MAC |
|---|---|---|
| 192.168.8.109 | – | 7C:A6:B0:A6:73:9A |
| 192.168.8.127 | – | 9A:2C:FB:19:B5:0D |
| 192.168.8.170 | – | A2:7B:07:D2:98:DA |
| Service | Port | URL |
|---|---|---|
| Pangolin Dashboard | 443 | https://pangolin.troglodyteconsulting.com |
| VPS Cockpit | 9090 | https://172.93.50.184:9090 |
| SSH | 22 | ssh admin@172.93.50.184 |
Farm runs on 192.168.0.x (separate from home’s 192.168.8.x). Connected via Pangolin WireGuard tunnel through VPS.
| IP | Device | Description |
|---|---|---|
| 192.168.0.50 | Home Assistant | Smart home automation (ha.troglodyteconsulting.com) |
| 192.168.0.100 | Farm Docker CT 100 | Portainer, Uptime Kuma, Gotify |
| 192.168.0.191 | Farm Proxmox VE | Farm hypervisor + Pangolin Newt (systemd) |
Personal information retrieval system: 278K documents across 7 sources, queryable via a RAG pipeline on the Mac Studio.
Life Archive is a full RAG (Retrieval-Augmented Generation) pipeline that indexes ~278K personal records spanning decades (Evernote notes, emails, magazine archives, Tana nodes, and Paperless-NGX documents) into a searchable knowledge base. It runs entirely on the Mac Studio using local embeddings (gte-Qwen2-7B on Apple MPS) and LanceDB for vector storage.
What it answers: "What did I write about X?", "When did I meet Y?", "What happened during Z trip?", and any other question against a lifetime of personal documents.
Key paths:
| Path | Content |
|---|---|
| `~/Sync/ED/life_archive/` | Project root: all code, configs, data |
| `~/Sync/ED/life_archive/.venv/` | Python virtual environment |
| `~/Sync/ED/life_archive/lancedb_data/` | LanceDB vector database (~50 GB) |
| `~/Sync/ED/life_archive/knowledge_graph.db` | SQLite knowledge graph (~356 MB) |
Data flow:
- Source extraction: raw documents parsed from Evernote exports, email archives, magazine PDFs, Tana JSON, and the Paperless-NGX API
- Enrichment: text cleaning, section splitting, paragraph chunking, QA pair generation
- Embedding: gte-Qwen2-7B encodes text into dense vectors (local, MPS-accelerated)
- Storage: LanceDB tables for docs, sections, paragraphs, QA pairs; SQLite for the knowledge graph
- Query: multi-strategy retrieval with fusion and reranking
Retrieval strategies (all run in parallel per query):
| Strategy | What it does |
|---|---|
| Dense vectors | Semantic similarity search against paragraph embeddings |
| SPLADE keywords | Sparse keyword matching for exact terms |
| QA pairs | Matches against pre-generated question-answer pairs |
| Knowledge graph | Entity and relationship lookup |
| HyDE | Hypothetical Document Embedding: generates a synthetic answer, then searches for similar real content |
Results from all strategies are fused via Reciprocal Rank Fusion (RRF), then reranked with a cross-encoder model for final ordering.
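The fusion step fits in a few lines of Python. This is a generic RRF sketch, not the code from query.py; the doc IDs and the common k=60 constant are illustrative:

```python
from collections import defaultdict

def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: each strategy's ranked list contributes
    1/(k + rank) per document; summed scores give the fused order."""
    scores = defaultdict(float)
    for ranked_ids in rankings:
        for rank, doc_id in enumerate(ranked_ids, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense  = ["d3", "d1", "d7"]   # dense-vector hits
splade = ["d1", "d9", "d3"]   # sparse keyword hits
qa     = ["d1", "d3", "d2"]   # QA-pair hits
print(rrf_fuse([dense, splade, qa])[:3])
# ['d1', 'd3', 'd9'] — d1 wins by appearing near the top of all three lists
```

The fused list then goes to the cross-encoder for final reranking.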
Four persistent services on the Mac Studio, managed via launchd (Paperless-NGX runs manually):
| Service | Port | launchd Label | Purpose |
|---|---|---|---|
| Embed Server | 1235 | `com.beedifferent.embed-server` | gte-Qwen2-7B on MPS; generates embeddings |
| Life Archive API | 8900 | `com.beedifferent.life-archive-api` | FastAPI HTTP wrapper for remote queries |
| MCP HTTP Server | 8901 | `com.beedifferent.life-archive-mcp-http` | Streamable HTTP MCP server for remote Claude clients |
| Paperless-NGX | 8100 | (manual / runserver) | Document ingestion and OCR |
All launchd plists are in ~/Library/LaunchAgents/.
API endpoints (port 8900):
| Method | Path | Description |
|---|---|---|
| POST | `/search` | Full RAG search with all retrieval strategies |
| POST | `/entity` | Knowledge graph entity lookup |
| POST | `/temporal` | Temporal anchor search (events, dates, periods) |
| GET | `/stats` | Database statistics |
| GET | `/health` | Service health check |
| GET | `/docs` | Interactive Swagger UI |
MCP endpoint (port 8901): http://192.168.8.180:8901/mcp, using the Streamable HTTP transport for Claude Desktop, Claude Code, or any MCP client.
Remote access:
| Service | Pangolin VPN Address |
|---|---|
| Life Archive API | 100.96.128.19:8900 |
| MCP HTTP Server | 100.96.128.20:8901 |
LanceDB (as of 2026-03-12):
| Table | Rows |
|---|---|
| Documents | 74,041 |
| Paragraphs | 2,689,330 |
| Sections | 714,451 |
| QA pairs | 289,356 |
| Communities | 0 (GraphRAG not run) |
| Total size | ~63 GB |
Knowledge Graph:
| Table | Count |
|---|---|
| Entities | 276,348 |
| Relationships | 230,855 |
| Doc-entity links | 1,153,312 |
| Assets | 456,321 |
| Temporal anchors | 391,565 |
| Entity aliases | 167 |
| Correspondents | 18,385 |
| DB size | ~368 MB |
Entity types: person (92,377) · org (85,519) · thing (52,346) · location (46,106)
Source breakdown:
| Source | Docs in LanceDB | Notes |
|---|---|---|
| magazine_article | 28,309 | ✅ loaded |
| paperless_doc | 22,555 | ✅ loaded |
| tana_node | 14,807 | ✅ loaded |
| evernote_pdf | 5,069 | ✅ loaded |
| evernote_note | 3,301 | ✅ loaded |
| epub_articles | 0 | vectors exist (17 GB), not yet loaded |
| emails | 0 | enriched but not embedded (157K records) |
The Life Archive is also available as MCP tools inside Claude Code and Cowork, enabling natural-language queries without the HTTP API.
| Tool | Purpose |
|---|---|
| `life_archive_search` | Full RAG search; the main query interface |
| `life_archive_entity_lookup` | Find people, orgs, locations in the knowledge graph |
| `life_archive_temporal_search` | Search for events, dates, time periods |
| `life_archive_stats` | Database health and statistics |
| `life_archive_graph_explore` | Deep-dive any entity: connections, source docs, aliases |
| `life_archive_graph_traverse` | Multi-hop graph walk: map the neighborhood of any entity |
| `life_archive_graph_search` | Find entities by name, filter by type |
Two transport modes:
| Transport | Server | Use Case |
|---|---|---|
| stdio | `mcp_server.py` | Local: spawned on demand by Claude Code/Cowork on the Mac Studio |
| Streamable HTTP | `mcp_server_http.py` | Remote: any MCP client on the network or over Pangolin VPN |
Remote MCP client config (Claude Desktop / Claude Code):
```json
"mcpServers": {
  "life-archive": {
    "url": "http://100.96.128.20:8901/mcp"
  }
}
```
All scripts live in ~/Sync/ED/life_archive/:
| Script | Purpose |
|---|---|
| `query.py` | Core query engine (`LifeArchiveQuery` class) |
| `http_api.py` | FastAPI HTTP wrapper |
| `embed_server.py` | Embedding server (gte-Qwen2-7B on MPS) |
| `load_lancedb.py` | Loads extracted data into LanceDB tables |
| `load_knowledge_graph.py` | Builds SQLite knowledge graph from extracted entities |
| `resolve_entities.py` | Fuzzy dedup of knowledge graph entities |
| `retry_entity_resolution.py` | Retry failed entity resolution batches |
| `eval_queries.py` | Evaluation framework for query quality |
| `mcp_server.py` | MCP stdio server for Claude integration |
| `mcp_server_http.py` | MCP streamable HTTP server for remote access (port 8901) |
Check service status:
```shell
launchctl list | grep beedifferent
```
Restart embed server:
```shell
launchctl kickstart -k gui/$(id -u)/com.beedifferent.embed-server
```
Restart Life Archive API:
```shell
launchctl kickstart -k gui/$(id -u)/com.beedifferent.life-archive-api
```
Test API health:
```shell
curl http://localhost:8900/health
```
Run a search via API:
```shell
curl -X POST http://localhost:8900/search \
  -H "Content-Type: application/json" \
  -d '{"query": "beekeeping notes from 2023"}'
```
View logs:
```shell
tail -f ~/Sync/ED/life_archive/http_api.stdout.log
tail -f ~/Sync/ED/life_archive/http_api.stderr.log
```
Load new data into LanceDB:
```shell
cd ~/Sync/ED/life_archive
.venv/bin/python load_lancedb.py --source <source_name>
```
Rebuild knowledge graph:
```shell
cd ~/Sync/ED/life_archive
.venv/bin/python load_knowledge_graph.py
```
The knowledge graph is exposed as a live API that any client can query: Claude, Obsidian, Tana, local LLMs, browsers, scripts. Three endpoints provide entity exploration, multi-hop traversal, and search, all with source document links back to the original archive content.
Live endpoints (port 8900):
| Endpoint | Method | Purpose |
|---|---|---|
| `/graph/explore` | POST | Full entity deep-dive: info, connections, source docs, aliases |
| `/graph/traverse` | POST | Multi-hop subgraph: walk N hops from any starting entity |
| `/graph/search` | POST | Find entities by name, filter by type |
| `/docs` | GET | Interactive Swagger UI for all endpoints |
Web explorer: http://192.168.8.180:1313/kg/ serves an interactive D3.js force-directed graph backed by the live API.
Example: Explore an entity
```shell
curl -X POST http://192.168.8.180:8900/graph/explore \
  -H "Content-Type: application/json" \
  -d '{"entity": "thomas brown", "max_connections": 20, "max_sources": 5}'
```
Returns: entity info, all connections with relationship labels, source documents with titles and summaries, total document count.
Example: Traverse the graph (2 hops from Colorado)
```shell
curl -X POST http://192.168.8.180:8900/graph/traverse \
  -H "Content-Type: application/json" \
  -d '{"entity": "colorado", "depth": 2, "max_per_hop": 15}'
```
Returns: full subgraph of nodes and edges reachable within N hops. Each node tagged with hop distance from root.
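Under the hood this is a bounded breadth-first walk. A toy sketch of the idea; the edge list, entity names, and limits here are made up, and the real logic lives in graph_api.py:

```python
from collections import deque

def traverse(edges, start, depth, max_per_hop=15):
    """Walk outward from `start`, tagging each node with its hop
    distance, up to `depth` hops and `max_per_hop` neighbors per node."""
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, []).append(b)
        neighbors.setdefault(b, []).append(a)
    hops = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if hops[node] == depth:
            continue  # don't expand past the requested depth
        for nxt in neighbors.get(node, [])[:max_per_hop]:
            if nxt not in hops:
                hops[nxt] = hops[node] + 1
                queue.append(nxt)
    return hops

edges = [("colorado", "denver"), ("denver", "union station"),
         ("colorado", "boulder")]
print(traverse(edges, "colorado", depth=2))
# {'colorado': 0, 'denver': 1, 'boulder': 1, 'union station': 2}
```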
Example: Search entities
```shell
curl -X POST http://192.168.8.180:8900/graph/search \
  -H "Content-Type: application/json" \
  -d '{"query": "brown", "entity_type": "person", "limit": 10}'
```
MCP tools (same functionality): `life_archive_graph_explore`, `life_archive_graph_traverse`, and `life_archive_graph_search` are available via both the stdio and HTTP MCP servers. Any Claude session or MCP-compatible LLM can call them.
Client compatibility:
| Client | How to connect |
|---|---|
| Claude (Code/Cowork) | MCP tools: already registered, just ask in natural language |
| Local LLM (LM Studio, etc.) | Point MCP client at http://192.168.8.180:8901/mcp |
| Obsidian | HTTP API via Templater/Dataview, or Obsidian notes export (export_kg_obsidian.py) |
| Tana | API integration to /graph/explore endpoint |
| Browser | Swagger UI at /docs or web explorer at /kg/ |
| Scripts | curl / Python requests / any HTTP client |
Key files:
| File | Purpose |
|---|---|
| `graph_api.py` | Shared graph traversal logic (`KnowledgeGraphAPI` class) |
| `http_api.py` | FastAPI HTTP endpoints (port 8900) |
| `mcp_server.py` | MCP stdio server with graph tools |
| `mcp_server_http.py` | MCP HTTP server with graph tools (port 8901) |
| `export_kg_obsidian.py` | Export KG to Obsidian vault as markdown notes with wikilinks |
| `export_kg_d3.py` | Export KG to JSON for D3.js visualization |
The knowledge graph can be exported to GEXF format for interactive exploration in Gephi or Cosmograph.
Export script: ~/Sync/ED/life_archive/export_kg_gexf.py
Pre-built exports (in ~/Sync/ED/life_archive/exports/):
| File | Nodes | Edges | Size | Use case |
|---|---|---|---|---|
| `life_archive_kg_full.gexf` | 276K | 231K | 173 MB | Full graph (Gephi or Cosmograph) |
| `life_archive_kg_top5000.gexf` | 5K | 38K | 13 MB | Curated subset; best for first exploration |
Color scheme:
| Entity Type | Color |
|---|---|
| Person | Blue |
| Organization | Red |
| Location | Green |
| Thing | Yellow |
| Concept | Purple |
Node sizes scale logarithmically by mention count.
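A rough sketch of that scaling; the base and scale constants here are illustrative, the actual values live in export_kg_gexf.py:

```python
import math

def node_size(mentions: int, base: float = 4.0, scale: float = 6.0) -> float:
    """Logarithmic sizing keeps a 10,000-mention entity readable next
    to a 1-mention one, instead of 10,000x larger."""
    return base + scale * math.log10(mentions + 1)

for m in (1, 10, 100, 10_000):
    print(m, round(node_size(m), 1))
```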
Viewing in Gephi:
- Install: `brew install --cask gephi`
- File → Open → choose a `.gexf` export
- Layout → ForceAtlas 2 → Run (let it settle 30–60 sec) → Stop
- Appearance → Nodes → Color → Partition → `entity_type`
- Statistics → Modularity → Run, then color by modularity class to see communities
- Use the Data Laboratory tab to search/filter entities by name
Viewing in Cosmograph:
- Go to cosmograph.app
- Drag and drop the `.gexf` file
- WebGL renders instantly and supports the full 276K-node graph
Custom exports:
```shell
cd ~/Sync/ED/life_archive

# Only people and orgs
python3 export_kg_gexf.py --types person org

# Entities mentioned 5+ times
python3 export_kg_gexf.py --min-mentions 5

# Top 10,000 by mention count
python3 export_kg_gexf.py --top 10000
```
Last updated: 2026-03-24
| Item | Status |
|---|---|
| LanceDB loaded | ✅ 74K docs, 2.69M paragraphs |
| Knowledge graph | ✅ 276K entities, 231K relationships |
| Services running | ✅ API :8900, MCP :8901, Embed :1235 |
| Eval baseline | ✅ 1.91/3.0 avg quality (2026-03-15) |
| epub_articles in LanceDB | ❌ Vectors exist, not loaded |
| Emails embedded | ❌ 157K records deferred |
| Contextual re-embedding | ⚠️ Pending: RunPod run needed |
Contextual re-embedding is the most important pending item. All existing embeddings were generated without document-level context prefixed to chunks. The new runpod_embed.py adds this (an estimated 35–50% retrieval improvement). The previous RunPod run (2026-03-17 to 2026-03-21) failed at source 3/7 with an OOM error. Scripts were fixed on 2026-03-21 and are ready for a new pod.
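The idea behind contextual re-embedding, sketched in miniature. The prefix template below is an assumption for illustration; runpod_embed.py defines the real format:

```python
def contextualize(chunk: str, doc_title: str, doc_summary: str) -> str:
    """Prefix document-level context so the chunk's embedding encodes
    where the passage came from, not just its local wording."""
    return f"Document: {doc_title}\nContext: {doc_summary}\nPassage: {chunk}"

text = contextualize(
    "We requeened hive 3 after the fall inspection.",
    "Beekeeping journal, October 2023",
    "Seasonal notes on hive health and requeening.",
)
# At index time, embed contextualize(chunk, ...) instead of the bare
# chunk; queries are embedded unchanged.
print(text)
```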
See ~/Sync/ED/TASKS.md for step-by-step next actions.
| Task | Status | Notes |
|---|---|---|
| Entity resolution | Done | 37 groups merged (7 original + 30 via Claude Sonnet), 177 aliases |
| Graph API + traversal | Done | /graph/explore, /graph/traverse, /graph/search + MCP tools |
| Email body embedding | Deferred | 157K email bodies not yet embedded (headers indexed) |
| Evaluation set | Framework ready | eval_queries.py exists, needs execution |
| Rule-based query routing | Planned | Replace LLM router with deterministic rules |
| New Paperless doc extraction | Planned | Process recently ingested 1,115 Evernote imports |
MCP (Model Context Protocol) servers extend Claude with tools: file access, web search, calendar, Tana, Life Archive, and more. Config lives at ~/Library/Application Support/Claude/claude_desktop_config.json.
These run as local processes spawned by Claude Desktop via stdio transport.
| Server | Package / Command | Purpose |
|---|---|---|
| Life Archive | Streamable HTTP: `http://192.168.8.180:8901/mcp` | RAG search, entity lookup, knowledge graph across 278K personal documents |
| Tana Local | `npx tana-local` | Read/write Tana nodes, search, import Tana Paste |
| Desktop Commander | `npx @wonderwhy-er/desktop-commander` | File system access, terminal, process management on the Mac |
| mcp-obsidian | Local vault bridge | Read/write Obsidian notes |
| Exa | `exa-mcp` | Semantic web search and code context retrieval |
| sequential-thinking | npm package | Dynamic multi-step reasoning tool |
| voicemode | Local | Voice conversation and service management |
| PDF Tools | Local | Fill, analyze, extract, view PDFs |
| Read and Send iMessages | Local | iMessage access via contacts |
| life-archive | MCP via HTTP | Exposed as a Claude.ai connected server |
These are connected via the Claude.ai interface and available in web/desktop sessions.
| Server | URL | Purpose |
|---|---|---|
| S&P Global | https://kfinance.kensho.com/integrations/mcp | Financial data and market intelligence |
| Sentry | https://mcp.sentry.dev/mcp | Error tracking |
| Gmail | https://gmail.mcp.claude.com/mcp | Email reading, searching, draft creation |
| Mermaid Chart | https://mcp.mermaidchart.com/mcp | Diagram creation and validation |
| Melon | https://mcp.melon.com/mcp | Task/productivity |
| BioRender | https://mcp.services.biorender.com/mcp | Scientific figure and illustration generation |
| Hugging Face | https://huggingface.co/mcp | ML models and datasets |
| Google Calendar | https://gcal.mcp.claude.com/mcp | Calendar events, scheduling |
| Tool | Type | Purpose |
|---|---|---|
| Claude in Chrome | Browser agent | Interact with web pages, forms, tabs |
| Claude Code | CLI | Agentic coding in terminal |
| Cowork | Desktop app | File and task automation for non-developers |
Location: ~/Library/Application Support/Claude/claude_desktop_config.json
Template structure:
```json
{
  "mcpServers": {
    "life-archive": {
      "url": "http://192.168.8.180:8901/mcp"
    },
    "desktop-commander": {
      "command": "npx",
      "args": ["-y", "@wonderwhy-er/desktop-commander"]
    },
    "tana-local": {
      "command": "npx",
      "args": ["-y", "tana-local"],
      "env": {
        "TANA_API_TOKEN": "YOUR_TOKEN_HERE"
      }
    }
  }
}
```
After editing, restart Claude Desktop to reload servers.
The Tana Local MCP server requires the OAuth origin to be set to http://127.0.0.1 (not localhost) in Tana settings → API & Integrations → MCP.
If Tana tools return auth errors, check:
- OAuth origin is `http://127.0.0.1` (not `localhost`; they're treated as different origins)
- API token in the config matches the one in Tana settings
- Claude Desktop was restarted after config changes
Known issue: `search_nodes` has a persistent JSON serialization bug. Workaround: use `get_children` to navigate nodes instead.
The Life Archive MCP server runs on the Mac Studio and is accessible from any network via Pangolin VPN:
| Context | URL |
|---|---|
| Local (Mac Studio) | http://localhost:8901/mcp |
| LAN | http://192.168.8.180:8901/mcp |
| Remote (Pangolin VPN) | http://100.96.128.20:8901/mcp |
Add to any MCP client config (Claude Code, Cowork, custom) pointing at the appropriate URL.
Server not appearing in Claude:
- Check JSON syntax in config file (no trailing commas, valid quotes)
- Restart Claude Desktop completely (Quit from menu bar, relaunch)
- Check Claude Desktop logs: `~/Library/Logs/Claude/`
Life Archive MCP not responding:
```shell
launchctl list | grep beedifferent
curl http://localhost:8901/mcp
```
Tana connection refused:
- Verify the `tana-local` npm package is installed: `npm list -g tana-local`
- Check the API token is correct in the config
- OAuth origin must be `http://127.0.0.1` in Tana settings
Automated music acquisition, tagging, and streaming, running on Proxmox CT 100 (docker-host). Last updated: April 13, 2026.
Music flows through a fully automated chain: Lidarr searches for wanted albums via Headphones VIP indexer, sends grabs to NZBGet on the remote seedbox, rsync pulls completed downloads to Proxmox, Lidarr imports and organizes them, and Navidrome serves the final library for streaming.
Data flow:
| Step | Component | Detail |
|---|---|---|
| 1 | Lidarr (Docker, CT 100) | Searches Headphones VIP via MissingAlbumSearch (daily 4am cron). RSS sync runs every 15 min but grabs 0; Headphones VIP RSS feeds random new releases, not library-specific albums. |
| 2 | NZBGet (seedbox `ismene.usbx.me`) | Downloads grabbed NZBs to `~/downloads/nzbget/completed/Music/` at ~70 MB/s average. |
| 3 | `seedbox-sync.sh` (cron, every 15 min) | rsync pulls `completed/Music/`, `completed/Books/`, and `complete/` → `/nvmepool/ingest/` on Proxmox. Uses `--remove-source-files` to clean the seedbox as files transfer. Partial resuming enabled. |
| 4 | LXC bind mount | `/nvmepool/ingest` → `/mnt/seedbox` inside CT 100. |
| 5 | Docker bind mount | `/mnt/seedbox` → `/downloads` inside the Lidarr container. |
| 6 | Lidarr import | Monitors `/downloads/Music/` and imports matched albums to `/music/`, renaming per the configured format. Requires ≥80% MusicBrainz metadata match. |
| 7 | Navidrome (Docker, CT 100) | Serves /mnt/music to audio clients (Feishin on desktop, Subsonic apps on mobile). |
| Setting | Value |
|---|---|
| URL | http://192.168.8.100:8686 |
| Image | lscr.io/linuxserver/lidarr:nightly |
| Version | 3.1.2.4928 (nightly, required for plugin support) |
| API key | 3dc17d20ca664be4ac90fb89004f91b8 |
| Config mount | /opt/lidarr/data → /config |
| Downloads mount | /mnt/seedbox → /downloads |
| Music mount | /mnt/music → /music |
| Monitored artists | 241 |
| Missing albums | ~10,000+ (93 artists added April 13) |
Indexers:
| Indexer | Status | Notes |
|---|---|---|
| Headphones VIP | ✅ Working | Primary source. Dedicated music Usenet indexer with proper t=music support. |
| NZBHydra2 | ✅ Working | Usenet indexer aggregator on the seedbox. API key: BM7BCBP0RNBFIIRTJ32LGN9G4M. Base URL in Lidarr: http://192.168.8.221:15076/nzbhydra2. Note: altHUB (the underlying indexer) lacks music-search caps, so music-specific results are limited, but general searches work. |
Download client: NZBGet at 192.168.8.221:16789 (SSH tunnel to seedbox port 13036).
Quality profiles: Any, Lossless (preferred), Standard. Set artists to Lossless for FLAC.
Release profile, ignored terms (compilations and greatest-hits releases are blocked to prevent an import loop):

```
Greatest Hits,Best Of,The Essential,The Very Best
Collection,Definitive,Complete,Anthology,Ultimate
YouTube rip,SoundCloud rip,ytrip,camrip,Rock Mix,Rancho Texicano
```
Track naming format: `{Album Title} ({Release Year})/{Artist Name} - {Album Title} - {track:00} - {Track Title}`
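To see what that format produces, here is a hypothetical rendering in Python. Lidarr expands these tokens itself; the `render_track_path` helper and the sample metadata are illustrative only:

```python
def render_track_path(meta: dict, track_format: str) -> str:
    """Map Lidarr naming tokens onto str.format fields.
    Lidarr's {track:00} (zero-padded) becomes Python's {track:02d}."""
    fmt = (track_format
           .replace("{Album Title}", "{album}")
           .replace("{Release Year}", "{year}")
           .replace("{Artist Name}", "{artist}")
           .replace("{track:00}", "{track:02d}")
           .replace("{Track Title}", "{title}"))
    return fmt.format(**meta)

fmt = ("{Album Title} ({Release Year})/{Artist Name} - {Album Title}"
       " - {track:00} - {Track Title}")
print(render_track_path(
    {"album": "Blue", "year": 1971, "artist": "Joni Mitchell",
     "track": 5, "title": "All I Want"}, fmt))
# Blue (1971)/Joni Mitchell - Blue - 05 - All I Want
```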
Lidarr has no built-in scheduled MissingAlbumSearch; this is a known design gap. The RSS sync (every 15 min) only grabs random new releases from the Headphones VIP feed, not library-specific missing albums. A Proxmox cron job compensates:
| Cron job | Schedule | Purpose |
|---|---|---|
| `seedbox-sync.sh` | Every 15 min | Consolidated pull from the seedbox: both `nzbget/completed/Music/` + `Books/` and `downloads/complete/` in one script |
| MissingAlbumSearch API call | Daily 4:00 AM | Actively searches all missing albums against all indexers; the primary download trigger |
Trigger MissingAlbumSearch manually:
```shell
curl -s -X POST 'http://192.168.8.100:8686/api/v1/command?apikey=3dc17d20ca664be4ac90fb89004f91b8' \
  -H 'Content-Type: application/json' -d '{"name":"MissingAlbumSearch"}'
```
Beets 2.7.1 is installed on CT 100 at /usr/local/bin/beet. Config at /root/.config/beets/config.yaml. Primary use is importing new music from the seedbox and handling compilations that Lidarr’s 80% threshold rejects.
| Setting | Value |
|---|---|
| Music directory | /mnt/music |
| Library DB | /root/.config/beets/library.db (~29 MB) |
| Import mode | move: yes, quiet: yes, quiet_fallback: asis |
| Match threshold | 0.15 (distance); much looser than Lidarr's 80% |
| Duplicate action | skip |
| Compilations path | Compilations/$album/$track $title |
Plugins: musicbrainz, fetchart, embedart, lastgenre, scrub, chroma
Pipeline script: /usr/local/bin/beet-full-pipeline.sh, DISABLED as of April 13, 2026. The initial bulk import is complete: 2,112 albums / 33,654 tracks / 1.4 TiB across 197 artists. The 30-minute cron was removed because the import had finished but was re-scanning 2,662 folders of duplicates/junk every 3 hours in an infinite loop. 134 potential hi-res/deluxe keepers were preserved in /mnt/seedbox/Music-Duplicates/ for manual review; 658 non-keepers and 1,870 junk-only folders were deleted.
Pipeline behavior:
- Checks for new files in `/mnt/seedbox/Music`
- If found, runs `beet import -q /mnt/seedbox/Music` to match, tag, and move to `/mnt/music`
- Cleans up empty directories left behind
- If no new files, exits immediately
Manual maintenance commands (run on CT 100 only when needed, not scheduled):
```shell
# Fetch missing cover art for albums without artwork
/usr/local/bin/beet fetchart

# Embed cover art into audio file metadata
/usr/local/bin/beet embedart

# Tag genres from Last.fm
/usr/local/bin/beet lastgenre

# Re-catalog any files in /mnt/music not yet in the beets DB
/usr/local/bin/beet import -qA /mnt/music
```
Using beets for compilations (albums blocked by Lidarr’s release profile):
```shell
# SSH into CT 100 and run:
/usr/local/bin/beet import /downloads/Music/SomeAlbum/
```
Beets matches with 15% threshold, auto-fetches art, tags genre from Last.fm, moves to Compilations/ folder. Navidrome picks up automatically.
| Service | Image | Port | Purpose |
|---|---|---|---|
| Navidrome | `deluan/navidrome:latest` | 4533 | Music streaming server (Subsonic-compatible). Scan schedule: 24h (filesystem watcher handles real-time). Config: env vars only, no toml. Volumes: `/var/lib/navidrome:/data`, `/mnt/music:/music:ro` |
| Lidarr | `lscr.io/linuxserver/lidarr:nightly` | 8686 | Music collection manager and Usenet requester. Note: Lidarr has a known memory leak; restart periodically if RAM usage exceeds 2 GB (`docker restart lidarr`) |
Two Lidarr plugins are planned to supplement the Usenet pipeline. Both require the nightly branch (already active).
| Plugin | GitHub | Purpose | Status |
|---|---|---|---|
| Tidal (TrevTV) | Lidarr.Plugin.Tidal | Direct Tidal downloads: FLAC lossless, Dolby Atmos, vast catalog. Best fix for compilations/greatest hits. Requires an active Tidal subscription. | Planned |
| Tubifarry (TypNull) | Tubifarry | YouTube fallback (128–256 kbps AAC) + Soulseek via Slskd. Use as a last resort only. | Planned |
Installing a plugin: System → Plugins in the Lidarr UI → paste the GitHub URL → Install → Restart.
Tidal auth flow (quirky): Add indexer → enter data path → Test (will error) → Cancel → refresh the page → re-open the Tidal indexer → copy the OAuth URL → log into Tidal in the browser → copy the redirect URL → paste it back into Lidarr. Redirect URLs are single-use.
| Path (CT 100) | Host ZFS dataset | Purpose |
|---|---|---|
| /mnt/music | nvmepool/music | Tagged music library (Navidrome source): 33,654 tracks, 241 artists (197 from beets + 93 added to Lidarr April 13) |
| /mnt/seedbox | nvmepool/ingest | Seedbox landing zone (rsync target) |
| /mnt/seedbox/Music/ | – | Empty; beets import complete. New Lidarr grabs land here. |
| /mnt/seedbox/Music-Duplicates/ | – | 134 potential hi-res/deluxe keepers pending review |
| /mnt/seedbox/Books/ | – | Book library |
Seedbox state (as of 2026-04-13):
- completed/Music/: Active; the consolidated seedbox-sync.sh runs every 15 min and pulls from both the nzbget and general complete paths
- NZBGet: idle when the queue is empty, averages ~70 MB/s when downloading
- Beets import complete: /mnt/seedbox/Music/ is empty and ready for new Lidarr grabs
| Symptom | Cause | Fix |
|---|---|---|
| Navidrome UI slow to display music | Beets pipeline re-cataloging entire library every 30 min, causing constant Navidrome rescans | Fixed April 2026: pipeline now only imports new seedbox files. Navidrome scan reduced to 24h (filesystem watcher handles real-time) |
| Lidarr using 4GB+ RAM | Memory leak after extended uptime (13+ days observed) | docker restart lidarr; drops back to ~200MB |
| NZBGet idle, nothing downloading | MissingAlbumSearch completed its run, queue empty | Trigger manually (see Automation section) or wait for the 4am cron |
| Queue full, NZBGet idle | 60-item queue full of importFailed items blocking new grabs | Blocklist and remove importFailed items via Activity → Queue |
| importFailed: album match not close enough | Lidarr’s 80% MusicBrainz threshold not met | For compilations: use beets. For others: blocklist and let Lidarr find a different release |
| /downloads/Music/ empty in container | Docker bind mount broken (Docker started before LXC mount) | docker restart lidarr |
| RSS Sync: 0 grabbed | Normal; Headphones VIP RSS feeds random new music | Expected behavior. Grabs only come from MissingAlbumSearch |
| importFailed: permissions error | File ownership issue from NZBGet | Check /downloads/Music/ permissions inside the container |
| Stale lockfile blocking sync | Previous rsync killed mid-run | rm /tmp/sync-seedbox.lock on Proxmox |
| Item | Value |
|---|---|
| Provider | SSDNodes |
| IP Address | 172.93.50.184 |
| SSH | ssh admin@172.93.50.184 |
| Dashboard | pangolin.troglodyteconsulting.com |
| Cockpit | https://172.93.50.184:9090 |
| Components | Pangolin + Gerbil (WireGuard) + Traefik |
| Version | v1.16.2 Community Edition |
| License | e33f66aa-416a-4ec7-9ffc-46fb5e2af290 |
| Site | Identifier | Status | Network | Newt Location |
|---|---|---|---|---|
| Proxmox (Home) | clueless-long-nosed-snake | Online | 192.168.8.0/24 | Proxmox PVE host (192.168.8.221), systemd service |
| Farm (Brownsville) | lovely-sunbeam-snake | Online | 192.168.0.0/24 | Farm Proxmox PVE (192.168.0.191), systemd service (v1.10.1) |
| Seedbox | unwilling-caecilia-nigricans | Online | – | Remote |
Home uses 192.168.8.x and Farm uses 192.168.0.x: separate subnets connected via Pangolin WireGuard tunnels through the VPS. Newt runs as a bare-metal systemd service on each site’s Proxmox PVE host (not in Docker).
Managed via Portainer. Running on Proxmox LXC Container 100 (Ubuntu 24.04, 4 cores, 8 GB RAM).
| Service | Port | Web Interface | Description |
|---|---|---|---|
| Portainer | 9443 | https://192.168.8.100:9443 | Container management UI |
| Uptime Kuma | 3001 | http://192.168.8.100:3001 | Uptime monitoring dashboard |
| Gotify | 8070 | http://192.168.8.100:8070 | Push notification server |
| N8N | 5678 | http://192.168.8.100:5678 | Automation / workflow engine |
| Audiobookshelf | 13378 | http://192.168.8.100:13378 | Audiobook & podcast server |
| Navidrome | 4533 | http://192.168.8.100:4533 | Music streaming server |
| Lidarr | 8686 | http://192.168.8.100:8686 | Music collection manager |
| Bookshelf | 8787 | http://192.168.8.100:8787 | Book tracking (Hardcover) |
| Shelfmark | 8084 | http://192.168.8.100:8084 | Pangolin private resource: ebook search. Outbound traffic via SOCKS5 tunnel to seedbox (NL). No public access. |
Note: Pangolin Newt no longer runs in Docker on CT 100. It runs as a systemd service on the Proxmox PVE host (192.168.8.221).
Shelfmark networking: All search/download traffic exits via the Netherlands seedbox (ismene.usbx.me / 46.232.210.50) through a persistent SOCKS5 tunnel (autossh systemd service seedbox-socks.service on CT 100, port 1080). The Shelfmark docker-compose sets PROXY_MODE=socks5 and SOCKS5_PROXY=socks5://172.25.0.1:1080. Accessible remotely only via Pangolin VPN (private resource, not public).
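The tunnel unit probably looks something like the following. This is a sketch of what seedbox-socks.service could contain, reconstructed from the description above — the ExecStart flags and keepalive values are assumptions, not the actual unit file:

```ini
# /etc/systemd/system/seedbox-socks.service (hypothetical sketch)
[Unit]
Description=Persistent SOCKS5 tunnel to seedbox (NL)
After=network-online.target

[Service]
# -D 0.0.0.0:1080 opens the dynamic SOCKS5 proxy that Shelfmark/Prowlarr use
ExecStart=/usr/bin/autossh -M 0 -N -D 0.0.0.0:1080 \
    -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
    ismene.usbx.me
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```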
Managed via Portainer. Running on Farm Proxmox LXC Container 100 at 192.168.0.191. Farm subnet is 192.168.0.x (separate from home’s 192.168.8.x).
| Service | Port | Web Interface | Description |
|---|---|---|---|
| Portainer | 9443 | https://192.168.0.100:9443 | Container management UI |
| Uptime Kuma | 3001 | http://192.168.0.100:3001 | Uptime monitoring dashboard |
| Gotify | 8070 | http://192.168.0.100:8070 | Push notification server |
Note: Pangolin Newt runs as a systemd service on the Farm Proxmox PVE host (192.168.0.191), not in Docker. Home Assistant runs on a separate device at 192.168.0.50.
| Resource | Site | Destination IP | Web Interface | Description |
|---|---|---|---|---|
| Proxmox VE (Home) | Proxmox | 192.168.8.221 | https://192.168.8.221:8006 | Minisforum i9-13900H, 128 GB |
| Mac Studio | Proxmox | 192.168.8.180 | – | M3 Ultra, 256 GB (runs Pangolin client for farm access) |
| Router (Home) | Proxmox | 192.168.8.1 | http://192.168.8.1 | GL.iNet GL-MT3000 (Beryl AX) |
| Home Assistant | Farm | 192.168.0.50 | http://192.168.0.50:8123 / ha.troglodyteconsulting.com | Smart home automation |
| Proxmox VE (Farm) | Farm | 192.168.0.191 | https://192.168.0.191:8006 | Farm hypervisor |
| Target | Site | Command |
|---|---|---|
| VPS (Pangolin Server) | Direct | ssh admin@172.93.50.184 |
| Home Proxmox | Proxmox | ssh root@192.168.8.221 |
| Mac Studio | Proxmox | ssh bee@192.168.8.180 |
| Home Router | Proxmox | ssh root@192.168.8.1 |
| Home Docker CT 100 | Proxmox | ssh root@192.168.8.100 |
| Farm Proxmox | Farm | ssh root@192.168.0.191 |
| Farm Docker CT 100 | Farm | ssh root@192.168.0.100 |
Requires Pangolin VPN connection for all except VPS direct access. Home uses 192.168.8.x, Farm uses 192.168.0.x. Mac Studio also runs Pangolin client for farm resource access.
VPS
| Service | URL |
|---|---|
| Pangolin Dashboard | https://pangolin.troglodyteconsulting.com |
| VPS Cockpit | https://172.93.50.184:9090 |
Home (Proxmox Site)
| Service | URL |
|---|---|
| Proxmox VE | https://192.168.8.221:8006 |
| Portainer | https://192.168.8.100:9443 |
| Uptime Kuma | http://192.168.8.100:3001 |
| Gotify | http://192.168.8.100:8070 |
| N8N | http://192.168.8.100:5678 |
| Audiobookshelf | http://192.168.8.100:13378 |
| Navidrome | http://192.168.8.100:4533 |
| Lidarr | http://192.168.8.100:8686 |
| Bookshelf | http://192.168.8.100:8787 |
| Router | http://192.168.8.1 |
Farm (Brownsville โ 192.168.0.x)
| Service | URL |
|---|---|
| Home Assistant | http://192.168.0.50:8123 / ha.troglodyteconsulting.com |
| Proxmox VE | https://192.168.0.191:8006 |
| Portainer | https://192.168.0.100:9443 |
| Uptime Kuma | http://192.168.0.100:3001 |
| Gotify | http://192.168.0.100:8070 |
Proxmox VE 9.1.1, Intel i9-13900H (20 threads), 128 GB RAM, kernel 6.17.2-1-pve
| Drive | Size | Type | ZFS Pool | Purpose |
|---|---|---|---|---|
| nvme0n1 | 1.8 TB | NVMe | – | Boot (PVE root + LVM-thin) |
| nvme1n1 | 3.6 TB | NVMe | nvmepool (stripe) | VMs, containers, sync, music, movies, books, photos, video |
| nvme2n1 | 3.6 TB | NVMe | nvmepool (stripe) | – |
| nvme3n1 | 3.6 TB | NVMe | nvmepool (stripe) | – |
| sda | 465.8 GB | NVMe (USB) | backups | Vzdump backups, ISOs (Crucial P5 500GB in Sabrent USB enclosure, installed Apr 19 2026) |
| – | 2× 18.2 TB | HDD (TB3) | Biggest mirror-0 | Archive/backup mirror (ORICO 9858T3 Thunderbolt 3 enclosure) |
| – | 3× 4 TB | HDD (TB3) | – | 3 free bays in ORICO 9858T3 Thunderbolt 3 enclosure (Birch pool retired Apr 2026) |
Retired (Apr 2026): BIGGIE (Seagate 5TB USB), Big (932GB SSD), Birch (3×4TB RAIDZ1; pool destroyed, seedbox sync moved to nvmepool/ingest). Nextcloud removed.
| Pool | Size | Used | Health | Key Datasets |
|---|---|---|---|---|
| nvmepool | 10.9 TB | ~6.4 TB (59%) | ONLINE | sync, music, movies, books, photos, video, audiobookshelf, bookshelf, tv, ingest, container-data, vms |
| Biggest | 18.2 TB | ~16.2 TB (89%) | ONLINE | Maple (Amigo, Monte, Ichabod: archive data), nvmepool-backup (nightly rsync of nvmepool), Kiwix |
| Birch | – | – | – | RETIRED Apr 2026; pool destroyed. Seedbox sync moved to nvmepool/ingest. 3 free drive bays available in the ORICO enclosure. |
| backups | 464 GB | – | ONLINE | dump, isos. Crucial P5 500GB (CT500P5SSD8, serial 21022FE3A911) in Sabrent USB enclosure (Realtek bridge 0bda:9210). Replaced failed Samsung 980 1TB on Apr 19 2026 (the original Samsung lasted 6 days). |
| offsite | 18.2 TB | ~10.8 TB (59%) | ONLINE | maple (Biggest/Maple mirror), nvmepool-data (nvmepool backup copy), ct100-backups, seedbox |
Dataset breakdown (nvmepool):
| Dataset | Used | Mount | Purpose |
|---|---|---|---|
| nvmepool/sync | 1.87 TB | /nvmepool/sync | Mac Studio SYNC mirror |
| nvmepool/music | 2.35 TB | /nvmepool/music | Music library (Navidrome + Plex) |
| nvmepool/movies | 1.83 TB | /nvmepool/movies | Movie library (Plex) |
| nvmepool/audiobookshelf | 24.7 GB | /nvmepool/audiobookshelf | Audiobook library |
| nvmepool/bookshelf | 6.24 GB | /nvmepool/bookshelf | Readarr app data |
| nvmepool/books | 33.2 GB | /nvmepool/books | Calibre-Web library |
| nvmepool/photos | 1.40 TB | /nvmepool/photos | Photo library (Plex + Immich external library) |
| nvmepool/video | 27.9 GB | /nvmepool/video | Video library (Plex) |
| nvmepool/tv | 187 GB | /nvmepool/tv | TV library (Plex + Sonarr) |
| nvmepool/ingest | varies | /nvmepool/ingest | Seedbox download landing zone (replaces retired Birch pool) |
| nvmepool/container-data | 38.0 GB | /nvmepool/container-data | Large container configs (Lidarr, Plex, CWA, Sonarr, Immich DB + uploads); moved off the CT100 rootfs Apr 2026 |
| nvmepool/vms | 95.4 GB | /nvmepool/vms | VM/CT disk images |
Dataset breakdown (Biggest):
| Dataset | Used | Contents |
|---|---|---|
| Biggest/Maple | 10.1 TB | Amigo (Cell Photos, ISO, TV, Video), Ichabod (Movies, Music, Databases, Podcasts), Monte (Dropbox, Mystuff, PDF, Photos) |
| Biggest/nvmepool-backup | 5.81 TB | Nightly rsync mirror of all nvmepool datasets |
| Biggest/Kiwix | 99 GB | Offline reference content (Wikipedia, Stack Exchange, Gutenberg); zstd compressed |
| Biggest/media-staging | empty | General staging area on mirrored drives |
Speedy, TimeMachineOne, Ichabod/Sort, Amigo/delgross, Amigo/Youtube, and Possible Delete were all deleted (Apr 7 and Apr 16 2026). Special vdev (Optane 110GB) and cache SSD (465GB) removed from the pool.
Dataset breakdown (offsite):
| Dataset | Used | Contents |
|---|---|---|
| offsite/maple | 6.23 TB | Mirror of Biggest/Maple; irreplaceable archive data |
| offsite/nvmepool-data | 4.55 TB | Backup copy of nvmepool media |
| offsite/ct100-backups | empty | CT100 vzdump backup destination |
| offsite/seedbox | empty | Seedbox data backup destination |
The offsite pool is a single 18.2 TB drive that travels intermittently to the farm for geographic redundancy. Manual sync before each departure.
CT 100: docker-host (primary media/apps container)
| Setting | Value |
|---|---|
| OS | Debian 12 (LXC) |
| Cores | 4 |
| RAM | 16 GB |
| Swap | 4 GB |
| Root disk | 48 GB on nvme-data (expanded from 32 GB Apr 2026) |
| IP | 192.168.8.100 |
| Features | Nesting, keyctl, privileged (unprivileged: 0); required for stable Docker networking |
| Autostart | Yes |
Bind mounts into CT 100:
| Host path | Container mount | Purpose |
|---|---|---|
| /nvmepool/ingest | /mnt/seedbox | Seedbox downloads landing (Music + Books) |
| /nvmepool/books | /mnt/books | Calibre-Web library |
| /nvmepool/music | /mnt/music | Music library |
| /nvmepool/audiobookshelf | /mnt/audiobookshelf | Audiobookshelf data |
| /nvmepool/bookshelf | /mnt/bookshelf | Readarr app data |
| /nvmepool/movies | /mnt/movies | Movie library |
| /nvmepool/photos | /mnt/photos | Photo library |
| /nvmepool/video | /mnt/video | Video library |
| /nvmepool/tv | /mnt/tv | TV library |
| /nvmepool/container-data | /mnt/container-data | Large container configs (Lidarr, Plex, CWA) |
| /Biggest/Kiwix | /mnt/kiwix | Kiwix ZIM file storage (offline Wikipedia, etc.) |
CT 101: immich (dedicated Immich photo management host, created Apr 18 2026)
| Setting | Value |
|---|---|
| OS | Debian 12 (LXC) |
| Cores | 8 (bumped from 4 for faster initial ML scan) |
| RAM | 8 GB |
| Swap | 2 GB |
| Root disk | 32 GB on nvme-data |
| IP | 192.168.8.103 (originally .101, changed Apr 18 due to IP conflict with office-2.lan) |
| Features | Nesting, keyctl |
| Autostart | Yes |
| MAC | BC:24:11:D5:67:E8 |
Bind mounts into CT 101:
| Host path | Container mount | Purpose |
|---|---|---|
| /nvmepool/photos | /mnt/photos | Immich external library (read-only, 1.4 TB) |
| /nvmepool/container-data/immich | /mnt/immich-data | Immich uploads, Postgres DB, thumbnails, model cache |
Docker-specific notes: IPv6 disabled in /etc/docker/daemon.json (required: ghcr.io was causing connection resets because CT101 has no IPv6 default route). DNS set to 8.8.8.8 + 1.1.1.1.
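The daemon.json described above would look roughly like this — a sketch using the standard Docker daemon keys for those two settings, not the file verbatim:

```json
{
  "ipv6": false,
  "dns": ["8.8.8.8", "1.1.1.1"]
}
```

Changes take effect after `systemctl restart docker` inside CT 101.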
| Service | Image | Port | URL | Status |
|---|---|---|---|---|
| Plex | linuxserver/plex | 32400 | http://192.168.8.100:32400/web | Up |
| Calibre-Web (CWA) | calibre-web-automated | 8083 | http://192.168.8.100:8083 | Up |
| Portainer | portainer-ce:lts | 9443 | https://192.168.8.100:9443 | Up |
| Uptime Kuma | uptime-kuma:1 | 3001 | http://192.168.8.100:3001 | Up |
| Gotify | gotify/server | 8070 | http://192.168.8.100:8070 | Up |
| Gotify-Telegram Bridge | custom (Python) | – | – | Up |
| N8N | n8n:latest | 5678 | http://192.168.8.100:5678 | Up |
| Audiobookshelf | audiobookshelf:latest | 13378 | http://192.168.8.100:13378 | Up |
| Navidrome | navidrome:latest | 4533 | http://192.168.8.100:4533 | Up |
| Lidarr | lidarr:nightly | 8686 | http://192.168.8.100:8686 | Up |
| Bookshelf | bookshelf:hardcover | 8787 | http://192.168.8.100:8787 | Up |
| Shelfmark | shelfmark | 8084 | http://192.168.8.100:8084 | Up |
| Radarr | linuxserver/radarr | 7878 | http://192.168.8.100:7878 | Up |
| Sonarr | linuxserver/sonarr | 8989 | http://192.168.8.100:8989 | Up |
| Prowlarr | prowlarr | 9696 | http://192.168.8.100:9696 | Up |
| FreshRSS | freshrss | 8180 | http://192.168.8.100:8180 | Up |
| Kiwix | ghcr.io/kiwix/kiwix-serve | 8380 | http://192.168.8.100:8380 | Up |
| Wallabag | wallabag/wallabag | 8480 | http://192.168.8.100:8480 | Up |
| Wallabag DB | mariadb:11 | – | internal | Up |
| Wallabag Redis | redis:7-alpine | – | internal | Up |
| ConvertX | ghcr.io/c4illin/convertx | 3100 | http://192.168.8.100:3100 | Up |
| Aurral | ghcr.io/lklynet/aurral | 3002 | http://192.168.8.100:3002 | Up |
| Recyclarr | ghcr.io/recyclarr/recyclarr | – | headless | Up |
| Dozzle | amir20/dozzle | 9999 | http://192.168.8.100:9999 | Up |
| Homepage | gethomepage.dev | 3000 | http://192.168.8.100:3000 | Up |
| FlareSolverr | flaresolverr | 8191 | http://192.168.8.100:8191 | Up |
| Watchtower | containrrr/watchtower | – | headless | Up |
| Prometheus | prom/prometheus | 9090 | http://192.168.8.100:9090 | Up |
| Grafana | grafana/grafana | 3200 | http://192.168.8.100:3200 | Up |
| node-exporter | prom/node-exporter | – | internal | Up |
| cAdvisor | gcr.io/cadvisor | – | internal | Up |
| weather-exporter | custom | – | internal | Up |
| Service | Image | Port | URL | Status |
|---|---|---|---|---|
| Immich Server | ghcr.io/immich-app/immich-server:release | 2283 | http://192.168.8.103:2283 | Up |
| Immich ML | ghcr.io/immich-app/immich-machine-learning:release | – | internal | Up |
| Immich Postgres | ghcr.io/immich-app/postgres:14-vectorchord0.4.3-pgvectors0.2.0 | – | internal | Up |
| Immich Redis | redis:6.2-alpine | – | internal | Up |
Immich is a self-hosted photo and video management platform (Google Photos alternative). Deployed as a 4-container stack on CT 101 via Docker Compose at /opt/immich/. External library points at /nvmepool/photos (1.4 TB, ~134K files) in read-only mode so originals are never modified. Immich’s own data (uploads, thumbnails, transcoded video, Postgres DB, ML model cache) lives in /nvmepool/container-data/immich/. Admin account created on first web access. DB password stored in /opt/immich/.env. Image tag locked to :release.
Plex serves movies, music, photos, video, and audiobooks from nvmepool. Plexamp (iOS/Mac client) connects to it for music. Uses network_mode: host.
Radarr manages the movie library at /mnt/movies (nvmepool/movies). Searches via Prowlarr indexers, downloads via seedbox, auto-renames and organizes movies for Plex. API key: b117993eb50f465ea485654bc0118861. Compose at /opt/radarr/docker-compose.yml.
Filebot (v5.2.1) is installed as a system package on CT100 (/bin/filebot) for ad-hoc movie/media renaming. Not containerized.
Calibre-Web Automated (CWA) serves the book library from /mnt/books (nvmepool/books). Auto-ingests books dropped into /mnt/books/ingest, auto-converts 28 formats to epub, fetches metadata, detects duplicates. Calibre bundled. Default login: admin / admin123. Image: crocodilestick/calibre-web-automated:latest.
Kiwix serves offline reference content (Wikipedia, Stack Exchange, Project Gutenberg, etc.) from /mnt/kiwix (Biggest/Kiwix; zstd compressed, 5.6TB available). ZIM files are downloaded manually from library.kiwix.org. A cron-based watcher (/usr/local/bin/kiwix-watcher.sh, every 5 min) detects new/changed ZIMs via an MD5 hash of the file list and restarts the container to pick them up. Compose at /opt/kiwix/docker-compose.yml. Starter ZIM: wikipedia_en_simple_all_nopic_2026-02.zim (922 MB).
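The watcher logic can be sketched as follows. This is a reconstruction from the description above, not the actual kiwix-watcher.sh; the directory, state-file path, and `docker_cmd` parameter are assumptions (the parameter exists so the logic is testable without Docker):

```shell
#!/bin/sh
# Sketch: restart kiwix-serve when the ZIM file listing changes (MD5 check).
check_zims() {
    zim_dir="$1" state="$2" docker_cmd="${3:-docker}"
    # Hash the listing (names, sizes, mtimes) rather than file contents,
    # so the check is cheap even for multi-GB ZIMs.
    current=$(ls -l "$zim_dir"/*.zim 2>/dev/null | md5sum | cut -d' ' -f1)
    previous=$(cat "$state" 2>/dev/null)
    if [ "$current" != "$previous" ]; then
        echo "$current" > "$state"
        $docker_cmd restart kiwix
    fi
}
# Cron would run something like: check_zims /mnt/kiwix /var/tmp/kiwix-zims.md5
```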
Wallabag is a self-hosted read-it-later service (alternative to Pocket/Instapaper). Stack: Wallabag app + MariaDB 11 (wallabag-db) + Redis 7 (wallabag-redis), all on dedicated wallabag-net bridge network. Compose at /opt/wallabag/docker-compose.yml. Secrets (DB password, Symfony secret) saved in /opt/wallabag/credentials.txt (root-only, chmod 600). Data persisted in named Docker volumes (wallabag-db, wallabag-redis, wallabag-images). Default admin account needs to be created on first visit. Browser extensions for Firefox/Chrome and mobile apps (iOS/Android) support direct capture.
ConvertX is a self-hosted file converter supporting 1000+ formats via FFmpeg, Pandoc, LibreOffice, GraphicsMagick, Inkscape, and more. Compose at /opt/convertx/docker-compose.yml. Data persisted in named volume convertx-data. Account registration disabled after first account creation (ACCOUNT_REGISTRATION=false). Converted files auto-delete after 24 hours (AUTO_DELETE_EVERY_N_HOURS=24). HTTP_ALLOWED=true set for local HTTP access.
| Share | Path | Access | Purpose |
|---|---|---|---|
| Review | /Biggest/Maple | read/write, user: bee | Archive data on mirrored drives (Amigo, Ichabod, Monte) |
| Sync | /nvmepool/sync | read-only, user: bee | Mac Studio SYNC mirror |
| Music | /nvmepool/music | read/write, user: bee | Music library (33,654 tracks) |
| Books | /nvmepool/books | read/write, user: bee | Book library |
| Movies | /nvmepool/movies | read/write, user: bee | Movie library |
| Video | /nvmepool/video | read/write, user: bee | Video library |
| Seedbox | – | – | – |
| Media Staging | /Biggest/media-staging | read/write, user: bee | Staging area on mirrored drives |
| backups | /backuppool | read-only, user: bee | Proxmox dumps/ISOs |
| nvmepool-backup | /Biggest/nvmepool-backup | read-only, user: bee | Nightly nvmepool backup |
All shares configured in /etc/samba/smb.conf (no registry shares). valid users = bee, ownership standardized to bee:bee across all datasets. Apple vfs objects = fruit streams_xattr for macOS compatibility.
Mac Finder access: smb://192.168.8.221/<share_name> or via Network → PVE (Avahi/mDNS advertised).
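Each share in /etc/samba/smb.conf follows the same shape. A sketch of one entry under the conventions described above (illustrative; not copied from the real file):

```ini
[Music]
   path = /nvmepool/music
   valid users = bee
   read only = no
   ; macOS compatibility, as noted above
   vfs objects = fruit streams_xattr
```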
The seedbox is a remote Usenet server at ismene.usbx.me (IP 46.232.210.50). NZBGet runs on the seedbox and downloads to categorized folders. Two SSH tunnels on Proxmox expose the seedbox UIs locally, and cron scripts pull completed files down.
Data flow:
- Lidarr/Radarr request albums/movies → sent to NZBGet on the seedbox
- NZBGet downloads and sorts into completed/Music/, completed/Books/, etc.
- seedbox-sync.sh (every 15 min) pulls Music, Books, and general completed downloads to /nvmepool/ingest/
- Lidarr/beets process and move finished files to nvmepool/music
- Plex/Navidrome serve from nvmepool
Mac Studio Sync:
| Script | Schedule | Source | Destination | Notes |
|---|---|---|---|---|
| sync-mac.sh | DISABLED (Apr 13, 2026) | bee@192.168.8.180:/Users/bee/SYNC/ | /nvmepool/sync/ | Was failing with rsync protocol error (exit 12). Syncthing may cover this path. |
Backups:
| Job | Schedule | Scope | Compression | Retention | Storage |
|---|---|---|---|---|---|
| vzdump-daily | 2:00 AM | All VMs/CTs | zstd | 3 copies | backup-hdd (/backups/dump/dump/) |
| Docker prune | Sundays 4:00 AM | CT100 | – | – | Cleans dangling containers, networks, images |
| Radarr start | Midnight | CT100 | – | – | Starts Radarr for nightly indexer hits |
| Radarr stop | 5:00 AM | CT100 | – | – | Stops Radarr to limit downloads to off-hours |
| CWA processed cleanup | 5:00 AM | CT100 | – | – | Clears calibre-web/processed_books |
| Kiwix ZIM watcher | Every 5 min | CT100 | – | – | Restarts kiwix-serve when the ZIM file list changes (MD5 hash check) |
Offsite Backup:
A 20TB Seagate Exos (ST20000NM002C, serial ZXA0FLHC) in an ASMT105x USB 3.2 enclosure serves as the offsite backup drive. Formatted as ZFS pool offsite with zstd compression, atime=off, xattr=sa, ashift=12. It negotiates USB 3.2 Gen 2 (10 Gbps SuperSpeed Plus) on Bus 6 Port 1. It is critical to plug into the correct USB-A port: the other USB-A ports on the Minisforum Venus are USB 2.0 and will bottleneck transfers to ~42 MB/s. On the USB 3 port, rsync hits ~200 MB/s sustained (bottlenecked by the spinning disk's sequential write speed).
| Dataset | Source | Contents |
|---|---|---|
| offsite/nvmepool-data | /Biggest/nvmepool-backup/ | Mirror of nvmepool (music, movies, books, sync, etc.) |
| offsite/maple | /Biggest/Maple/ | Unique archive data (Amigo, Ichabod, Monte) |
| offsite/seedbox | – | Seedbox downloads (placeholder; seedbox data now lands on nvmepool/ingest) |
| offsite/ct100-backups | /backups/dump/ | Vzdump CT100 backups |
Script: /usr/local/bin/offsite-backup.sh (rsync with --delete for incremental updates). Workflow: connect drive → zpool import offsite → offsite-backup.sh → zpool export offsite → disconnect and take offsite.
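The connect/import/sync/export workflow above can be wrapped in one helper. A sketch only — the function name is made up, and the `zpool_cmd`/`backup_cmd` parameters are added here so the flow can be exercised without real hardware:

```shell
#!/bin/sh
# Sketch: offsite rotation — import the pool, run the backup, export cleanly.
offsite_rotate() {
    zpool_cmd="${1:-zpool}"
    backup_cmd="${2:-/usr/local/bin/offsite-backup.sh}"
    "$zpool_cmd" import offsite || return 1   # drive must be attached first
    "$backup_cmd"               || return 1   # rsync with --delete
    "$zpool_cmd" export offsite || return 1   # clean export before unplugging
    echo "offsite synced and exported; safe to disconnect"
}
# On the Proxmox host: offsite_rotate
```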
Health Monitoring (v2, updated Apr 18 2026):
Script: /usr/local/bin/system-health-check.sh runs every 15 min via /etc/cron.d/system-health-check and pushes alerts to Gotify. Checks: root disk space, all 4 active ZFS pools (nvmepool, Biggest, backups, offsite: health, suspension, capacity, removed/faulted vdevs), backup age/location, USB hub errors and pool suspension events, snapshot counts, and key services (pveproxy, pvedaemon, smbd). Daily summary at 7 AM.
ZFS Maintenance:
| Task | Schedule | Pool |
|---|---|---|
| Auto-snapshot | Every 15 min (keep 4 frequent, 24 hourly, 31 daily, 8 weekly, 12 monthly) | All |
| Scrub Biggest | 1st of month, 3 AM | Biggest |
| Scrub nvmepool | 8th of month, 3 AM | nvmepool |
| Scrub backups | 22nd of month, 3 AM | backups |
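The scrub schedule above maps to system cron entries along these lines (a sketch; the cron file path and zpool binary location are assumptions):

```shell
# /etc/cron.d/zfs-scrub (hypothetical)
0 3 1 * *   root /usr/sbin/zpool scrub Biggest
0 3 8 * *   root /usr/sbin/zpool scrub nvmepool
0 3 22 * *  root /usr/sbin/zpool scrub backups
```

Staggering the dates keeps only one pool scrubbing at a time.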
| Service | Config |
|---|---|
| UFW | Active; default DROP on INPUT. Allowed: SSH (22), Proxmox (8006), SMB (445, 139), VNC (5900-5999), Spice (3128) |
| Fail2Ban | Active; jails: proxmox, sshd |
| SSH | Key-based auth to seedbox (id_ed25519) and Mac Studio (id_rsa) |
Uptime Kuma (http://192.168.8.100:3001) runs 61 monitors covering:
| Category | Monitors | Check Interval |
|---|---|---|
| Internet connectivity | Google, Cloudflare, DNS 8.8.8.8 | 60s |
| Network infrastructure | Router, CT100 ping | 60-120s |
| CT100 Docker services | Plex, Navidrome, CWA, Portainer, Gotify, FreshRSS, N8N, Audiobookshelf, Lidarr, Bookshelf, Shelfmark, Prowlarr, Radarr, Sonarr, Dozzle, FlareSolverr, Homepage, Prometheus, Grafana, Wallabag, Kiwix, ConvertX | 120s |
| CT101 Docker services | Immich | 60s |
| Proxmox host | Web UI, Cockpit, SMB, Syncthing, NZBGet tunnel, NZBHydra2 tunnel | 120-300s |
| Mac Studio | Ping, SSH, Life Archive API, Paperless-NGX, Syncthing, LM Studio, Embed Server, Hugo Bee Hub | 120-300s |
| VPS | Ping, Pangolin Dashboard, Bee Hub (VPS), DNS resolution | 120-300s |
| SSL certificates | Pangolin, Home Assistant | 3600s |
| Keyword health checks | Plex API, Navidrome API, Portainer API | 300s |
| Farm | Home Assistant (via Pangolin) | 300s |
| Seedbox | SSH | 300s |
Notification chain: Uptime Kuma → Gotify → Telegram bridge → @beenetworkbot
Gotify-Telegram Bridge (Docker, /opt/gotify-telegram/):
Polls Gotify every 10 seconds for new messages and forwards them to Telegram with priority-based emojis (🔴 critical, 🟡 warning, 🟢 info). All Gotify sources are forwarded: Uptime Kuma alerts, health check script alerts, and any other Gotify notifications.
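The priority-to-emoji mapping presumably looks something like this. A sketch only: Gotify priorities run 0-10, but the exact thresholds the bridge uses are assumptions, not read from the bridge's code:

```shell
#!/bin/sh
# Sketch: map a Gotify message priority (0-10) to the bridge's Telegram emoji.
emoji_for() {
    case "$1" in
        8|9|10)  echo "🔴" ;;   # critical
        4|5|6|7) echo "🟡" ;;   # warning
        *)       echo "🟢" ;;   # info (0-3 or unknown)
    esac
}
```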
| Setting | Value |
|---|---|
| Telegram Bot | @beenetworkbot |
| Telegram Chat ID | 5289824155 |
| Gotify App Token | ARCkVc0wf001L.e |
| Gotify Client Token | COXHgqAwb_mZdz0 |
Health Check Script (/usr/local/bin/system-health-check.sh):
Runs every 15 min via cron. Monitors root disk space, all 4 ZFS pools (nvmepool, Biggest, backups, offsite: health/suspended/capacity/vdevs), backup age, USB hub errors, snapshot counts, and key services. Daily summary at 7 AM. Alerts via Gotify → Telegram. Updated Apr 18 2026.
| Method | Command / URL |
|---|---|
| Web UI | https://192.168.8.221:8006 |
| Cockpit | https://192.168.8.221:9090 |
| SSH | ssh root@192.168.8.221 |
| SMB (Music) | smb://192.168.8.221/Music (user: bee) |
| SMB (Movies) | smb://192.168.8.221/Movies (user: bee) |
| SMB (Books) | smb://192.168.8.221/Books (user: bee) |
| SMB (Seedbox) | smb://192.168.8.221/Seedbox (user: bee) |
| SMB (Review) | smb://192.168.8.221/Review (user: bee) |
| NZBGet UI | http://192.168.8.221:16789 (tunneled from seedbox) |
| NZBHydra2 UI | http://192.168.8.221:15076 (tunneled from seedbox) |
| Plex | http://192.168.8.100:32400/web |
| Calibre-Web | http://192.168.8.100:8083 |
The Pangolin client on your laptop creates a WireGuard tunnel to the Pangolin VPS. Through that tunnel you can reach every service on the home and farm LANs using their normal IP addresses โ nothing is exposed to the public internet.
Laptop (hotel/airport) → WireGuard → Pangolin VPS (172.93.50.184)
  → Newt (Proxmox PVE 192.168.8.221) → Home LAN (192.168.8.x)
  → Newt (Farm PVE 192.168.0.191) → Farm LAN (192.168.0.x)
Pangolin uses resource-based access control: clients can only reach resources that an admin has explicitly defined in the dashboard, not entire subnets. This keeps everything private: no public URLs, no open ports on the home router.
At home, disconnect the Pangolin client. It routes 192.168.8.0/24 through the tunnel, which conflicts with direct LAN access. Only connect when you’re away (see At Home vs. Away below).
| Component | Location | Role |
|---|---|---|
| Pangolin client | Your laptop | WireGuard VPN; connects you to the home/farm networks |
| Pangolin client | Mac Studio (192.168.8.180) | Provides farm LAN access from home network |
| Pangolin | VPS (172.93.50.184) | Auth, coordination, relay |
| Gerbil | VPS | WireGuard server |
| Newt | Proxmox PVE (192.168.8.221) | Tunnel endpoint for home LAN (systemd service) |
| Newt | Farm PVE (192.168.0.191) | Tunnel endpoint for farm LAN (systemd service, v1.10.1) |
The Pangolin client is already installed on the MacBook. To connect:

```shell
pangolin
```

Credentials are saved in ~/Library/Application Support/olm-client/config.json from the first run. No need to re-enter them.
Verify it’s working:

```shell
ping 192.168.8.180   # Mac Studio
ping 192.168.8.100   # Docker CT 100
ping 192.168.0.50    # Farm Home Assistant
```

If pings work, open any service URL in a browser. You’re on the network.
Once connected, use the same LAN URLs you’d use at home.
Mac Studio (192.168.8.180)
| Service | URL |
|---|---|
| Life Archive Search | http://192.168.8.180:8900 |
| Life Archive MCP | http://192.168.8.180:8901/mcp |
| Paperless-NGX | http://192.168.8.180:8100 |
| Hugo Docs | http://192.168.8.180:1313 |
| SyncThing | http://192.168.8.180:8384 |
| SSH | ssh bee@192.168.8.180 |
| Screen Sharing | vnc://192.168.8.180 |
Docker: CT 100 (192.168.8.100)
| Service | URL |
|---|---|
| Portainer | https://192.168.8.100:9443 |
| Uptime Kuma | http://192.168.8.100:3001 |
| Navidrome | http://192.168.8.100:4533 |
| Audiobookshelf | http://192.168.8.100:13378 |
| N8N | http://192.168.8.100:5678 |
| Gotify | http://192.168.8.100:8070 |
| Lidarr | http://192.168.8.100:8686 |
| FreshRSS | http://192.168.8.100:8180 |
Proxmox (192.168.8.221)
| Service | URL |
|---|---|
| Proxmox VE | https://192.168.8.221:8006 |
| SSH | ssh root@192.168.8.221 |
Farm: Brownsville (192.168.0.x)
| Service | URL |
|---|---|
| Home Assistant | http://192.168.0.50:8123 |
To use Life Archive tools from Claude on your laptop while away, add the MCP HTTP server to your Claude config:
Claude Desktop (~/Library/Application Support/Claude/claude_desktop_config.json):
```json
{
  "mcpServers": {
    "life-archive": {
      "url": "http://192.168.8.180:8901/mcp"
    }
  }
}
```
Claude Code (~/.claude/settings.json):
```json
{
  "mcpServers": {
    "life-archive": {
      "url": "http://192.168.8.180:8901/mcp"
    }
  }
}
```
This gives you life_archive_search, life_archive_entity_lookup, life_archive_temporal_search, and life_archive_stats, same as at home. Requires the Pangolin client to be connected.
Navidrome speaks the Subsonic API. Use a Subsonic-compatible app on your phone with the Pangolin client running.
iOS apps: play:Sub, Amperfy, iSub
| Field | Value |
|---|---|
| Server URL | http://192.168.8.100:4533 |
| Username | Your Navidrome username |
| Password | Your Navidrome password |
Requires the Pangolin client running on the phone (if available) or on the same network as a device running it.
Do this before leaving:
- Disconnect the Pangolin client on the laptop (not needed at home)
- Verify all services are running: http://192.168.8.100:3001 (Uptime Kuma)
- Verify the Pangolin Dashboard shows the Proxmox Home site Online
- Verify the Pangolin Dashboard shows the Farm Brownsville site Online
- Add the Life Archive MCP config to Claude on the laptop (see section above)
- Bookmark this page: http://192.168.8.180:1313/homelab/remote-access/
Once in KC:
- Connect the Pangolin client: pangolin
- Test: ping 192.168.8.180
- Open any service URL to confirm
The Pangolin client routes 192.168.8.0/24 through the WireGuard tunnel because the Proxmox Home site defines that as its network. This is by design: it’s how clients reach home services when they’re away.
When you’re already on the home LAN, this creates a conflict: traffic to 192.168.8.x goes through the tunnel instead of staying local. There is no split-tunnel or local-network-detection feature in Pangolin yet (community request pending).
The rule is simple:
- At home (laptop): Disconnect the Pangolin client. You’re already on the LAN.
- Away (KC, travel, anywhere else): Connect the Pangolin client. Everything works by LAN IP through the tunnel.
- Mac Studio: Runs its own Pangolin client permanently so it can reach farm resources (192.168.0.x) from the home LAN. This is separate from the laptop client.
| Problem | Fix |
|---|---|
| Can’t reach anything | Is the Pangolin client connected? Run pangolin |
| Pings work but browser won’t load | Service is stopped; SSH in and restart it |
| Everything down at once | CT 100 is probably down; open Proxmox at https://192.168.8.221:8006 and restart CT 100 |
| Farm services unreachable | Farm Newt offline; check the Pangolin Dashboard. Requires an on-site fix if farm internet is down |
| Pangolin client won’t connect | Check the VPS: ssh admin@172.93.50.184 and make sure Pangolin/Gerbil are running |
| Works at home but not away | You were testing with the Pangolin client off; connect it |
| LAN broken while Pangolin is on at home | Disconnect the client; you’re on the LAN already (see At Home vs. Away) |
Master directory of all web-accessible services across Home, Mac Studio, Proxmox, VPS, Seedbox, and Farm.
Managed via Portainer. Running on Proxmox LXC Container 100 (Ubuntu 24.04, 4 cores, 16 GB RAM).
| Service | Port | URL | Description |
|---|---|---|---|
| Portainer | 9443 | https://192.168.8.100:9443 | Container management UI |
| Gotify | 8070 | http://192.168.8.100:8070 | Push notification server – forwards all alerts to Telegram via bridge |
| Gotify-Telegram Bridge | – | – | Polls Gotify, forwards to Telegram @beenetworkbot (chat ID: 5289824155) |
| Uptime Kuma | 3001 | http://192.168.8.100:3001 | 60 monitors – services, infrastructure, SSL certs, keyword health checks, cron push monitors. Alerts via Gotify → Telegram |
| N8N | 5678 | http://192.168.8.100:5678 | Automation / workflow engine |
| Audiobookshelf | 13378 | http://192.168.8.100:13378 | Audiobook & podcast server |
| Navidrome | 4533 | http://192.168.8.100:4533 | Music streaming (Subsonic-compatible) |
| Lidarr | 8686 | http://192.168.8.100:8686 | Music collection manager |
| Bookshelf | 8787 | http://192.168.8.100:8787 | Book tracking (Hardcover) |
| Shelfmark | 8084 | http://192.168.8.100:8084 | Book & audiobook search – SOCKS5 proxy via seedbox (NL), Pangolin private resource |
| Radarr | 7878 | http://192.168.8.100:7878 | Movie collection manager – automated search, download, rename. Connected to Prowlarr + seedbox |
| Sonarr | 8989 | http://192.168.8.100:8989 | TV show collection manager – automated search, download, rename. Connected to Prowlarr + seedbox |
| Prowlarr | 9696 | http://192.168.8.100:9696 | Indexer aggregator – SOCKS5 proxy via seedbox (NL), feeds Shelfmark + Lidarr + Radarr + Sonarr + Bookshelf |
| FreshRSS | 8180 | http://192.168.8.100:8180 | RSS feed reader |
| Plex | 32400 | http://192.168.8.100:32400/web | Media server – movies, music, TV, photos, video, audiobooks. Plexamp for music on iOS/Mac |
| Calibre-Web (CWA) | 8083 | http://192.168.8.100:8083 | Book library – auto-ingest, auto-convert, duplicate detection, metadata fetch |
| Kiwix | 8380 | http://192.168.8.100:8380 | Offline Wikipedia / reference content server – ZIMs stored on Biggest/Kiwix (mounted /mnt/kiwix). Auto-restarts when new ZIMs are added via cron watcher |
| Wallabag | 8480 | http://192.168.8.100:8480 | Read-it-later / article archive. MariaDB + Redis stack. Browser extensions and mobile apps available |
| ConvertX | 3100 | http://192.168.8.100:3100 | Self-hosted file converter – 1000+ formats via FFmpeg, Pandoc, LibreOffice, GraphicsMagick. Auto-deletes files after 24h |
| Aurral | 3002 | http://192.168.8.100:3002 | Music discovery and request manager for Lidarr – library-aware recommendations, playlist flows, artist discovery via Last.fm |
| Recyclarr | – | – | Headless TRaSH Guides sync – automatically updates Radarr + Sonarr quality profiles and custom formats daily |
Two Seagate 20TB drives (ST20000NE000) in ZFS mirror in the ORICO 9858T3 Thunderbolt 3 enclosure. Special vdev (Optane) and cache SSD removed Apr 7 2026. Pool is now a clean 2-drive mirror. Currently at 89% capacity (16.2 TB used).
| Dataset | Size | Contents |
|---|---|---|
| `Biggest/Maple` | 10.1 TB | Amigo (Cell Photos, ISO, TV, Video), Ichabod (Movies, Music, Databases, Podcasts), Monte (Dropbox, Mystuff, PDF, Photos) |
| `Biggest/nvmepool-backup` | 5.81 TB | Nightly rsync mirror of all nvmepool datasets |
| `Biggest/Kiwix` | 99 GB | Offline reference content (Wikipedia, Stack Exchange, Gutenberg) |
Deleted Apr 7: Speedy, TimeMachineOne (4.3TB), Ichabod/Sort (232GB), Amigo/delgross (4TB), Amigo/Youtube (148GB). Pool went from 90% → 41%.
| Service | Port | URL | Description |
|---|---|---|---|
| Proxmox VE | 8006 | https://192.168.8.221:8006 | Hypervisor web UI |
| Cockpit | 9090 | https://192.168.8.221:9090 | System admin panel |
| Syncthing | 8384 | http://192.168.8.221:8384 | File sync hub – always-on relay for Mac Studio and MacBook |
| NZBGet (tunneled) | 16789 | http://192.168.8.221:16789 | Usenet downloader (seedbox tunnel) |
| NZBHydra2 (tunneled) | 15076 | http://192.168.8.221:15076/nzbhydra2 | Usenet indexer aggregator (seedbox tunnel) |
| Pangolin Newt | – | – | Systemd service – tunnel agent (outbound to VPS) |
| SMB Shares | 445 | `smb://192.168.8.221/<share>` | Network file shares: Movies, TV, Music, Books, Video, Sync (read-only), Review (Biggest/Maple), nvmepool, Biggest, offsite, backups |
| Service | Port | URL | Description |
|---|---|---|---|
| Hugo Hub | 1313 | http://192.168.8.180:1313 | BeeDifferent documentation site |
| SyncThing | 8384 | http://192.168.8.180:8384 | File sync between devices (GUI bound to 0.0.0.0 for network access) |
| Paperless-NGX | 8100 | http://192.168.8.180:8100 | Document management system |
| Life Archive API | 8900 | http://192.168.8.180:8900 | Life Archive RAG search API |
| Life Archive MCP | 8901 | http://192.168.8.180:8901/mcp | MCP server for remote Claude clients |
| Embed Server | 1235 | http://localhost:1235 | gte-Qwen2-7B on MPS (local only) |
| SSH | 22 | `ssh bee@192.168.8.180` | Remote shell |
| Screen Sharing | 5900 | vnc://192.168.8.180 | macOS VNC |
| Service | Port | URL | Description |
|---|---|---|---|
| Pangolin Dashboard | 443 | https://pangolin.troglodyteconsulting.com | Tunnel management UI |
| Cockpit | 9090 | https://172.93.50.184:9090 | VPS system admin panel |
| SSH | 22 | `ssh admin@172.93.50.184` | Remote shell |
Services on the seedbox are accessed via SSH tunnels through Proxmox (192.168.8.221) and CT 100 (192.168.8.100).
| Service | Local Tunnel | URL | Description |
|---|---|---|---|
| NZBGet | 192.168.8.221:16789 | http://192.168.8.221:16789 | Usenet downloader |
| NZBHydra2 | 192.168.8.221:15076 | http://192.168.8.221:15076/nzbhydra2 | Usenet indexer aggregator (base URL: /nzbhydra2) |
| SOCKS5 Proxy | 192.168.8.100:1080 | – | Shelfmark outbound traffic exit (autossh, systemd seedbox-socks.service on CT 100) |
NZBHydra2 runs on the seedbox at port 13033 internally, exposed via SSH tunnel to Proxmox port 15076. Auth: user delgross. The /nzbhydra2 base path is required for all access; bare http://192.168.8.221:15076 returns 404. API key for Lidarr/external access: BM7BCBP0RNBFIIRTJ32LGN9G4M.
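NZBHydra2 speaks the standard Newznab API under that base path, so external tools can query it directly with the key above. A sketch building query URLs (`t=caps` and `t=search` are standard Newznab operations; the helper name is hypothetical):

```python
from urllib.parse import urlencode

BASE = "http://192.168.8.221:15076/nzbhydra2"  # the base path is required
API_KEY = "BM7BCBP0RNBFIIRTJ32LGN9G4M"

def newznab_url(t: str, **params: str) -> str:
    """Build a Newznab API URL against the tunneled NZBHydra2 instance."""
    query = {"apikey": API_KEY, "t": t, **params}
    return f"{BASE}/api?{urlencode(query)}"

# Capabilities check (standard Newznab t=caps operation):
caps = newznab_url("caps")
# Book search, category 7000 = Books in the altHUB scheme:
search = newznab_url("search", q="dune", cat="7000")
```

Fetch either URL with curl or urllib; omitting the /nzbhydra2 prefix gives the 404 noted above.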
Farm runs on the 192.168.0.x subnet (separate from home’s 192.168.8.x). Pangolin routes via WireGuard tunnel through VPS. Farm Proxmox is at 192.168.0.191, Docker CT 100 at 192.168.0.100.
| Service | Port | URL | Description |
|---|---|---|---|
| Home Assistant | 8123 | http://192.168.0.50:8123 / ha.troglodyteconsulting.com | Smart home automation |
| Proxmox VE | 8006 | https://192.168.0.191:8006 | Farm hypervisor |
| Portainer | 9443 | https://192.168.0.100:9443 | Farm container management |
| Uptime Kuma | 3001 | http://192.168.0.100:3001 | Farm uptime monitoring |
| Gotify | 8070 | http://192.168.0.100:8070 | Farm push notifications |
Pangolin Newt runs as a systemd service on Farm Proxmox PVE (192.168.0.191), not in Docker.
| Device | IP | URL | Description |
|---|---|---|---|
| Router (GL.iNet Beryl AX) | 192.168.8.1 | http://192.168.8.1 | Home router admin |
| Homey | 192.168.8.224 | – | Smart home hub |
| Weather Station | 192.168.8.245 | – | Orchard weather monitor |
Shelfmark is a self-hosted book and audiobook search aggregator. It searches multiple sources simultaneously and returns results in a unified interface. All outbound traffic exits via the seedbox in the Netherlands through a persistent SOCKS5 tunnel.
| Setting | Value |
|---|---|
| URL | http://192.168.8.100:8084 |
| Host | CT 100 (192.168.8.100) |
| Image | ghcr.io/calibrain/shelfmark:latest |
| Version | v1.2.0 (build 2026-03-07) |
| Port | 8084 |
| Compose file | /opt/shelfmark/docker-compose.yml (on CT 100) |
| Proxy mode | SOCKS5 via 172.25.0.1:1080 (Docker bridge to CT 100 host) |
| Proxy exit | ismene.usbx.me (Netherlands, UltraSeedbox) |
| Remote access | Pangolin private resource only – not publicly accessible |
| Metadata provider | Hardcover |
Docker volumes:
| Container path | Host path (CT 100) | ZFS dataset | Purpose |
|---|---|---|---|
| `/books` | `/mnt/seedbox/Books` | `nvmepool/ingest` | Downloaded books – shared with Readarr (Bookshelf) |
| `/config` | `/opt/shelfmark/config` | CT 100 rootfs | Settings, users.db, cover cache |
Previously `/books` pointed to `/opt/shelfmark/books` (isolated from Readarr). Changed 2026-03-31 to `/mnt/seedbox/Books` so Shelfmark downloads land directly in the seedbox Books folder where Readarr can see them.
Docker environment:
```
PROXY_MODE=socks5
SOCKS5_PROXY=socks5://172.25.0.1:1080
NO_PROXY=localhost,127.0.0.1,192.168.8.*,172.25.0.*
TZ=America/New_York
PUID=0
PGID=0
```
PUID/PGID set to 0 (root): The seedbox Books folder receives files via rsync with UID 1040, which Shelfmark’s default appuser (UID 1000) cannot write to. Running as root prevents "destination not writable" errors during post-processing. Changed 2026-04-02.
Shelfmark routes all search and download traffic through the seedbox to avoid region-based restrictions. The chain is:
```
Shelfmark (CT 100 Docker)
  → SOCKS5 to 172.25.0.1:1080 (Docker bridge → CT 100 host)
seedbox-socks.service (autossh, CT 100 systemd)
  → SSH tunnel to delgross@46.232.210.50
ismene.usbx.me (Netherlands exit, UltraSeedbox)
  → Book search sources (Anna's Archive, Libgen, etc.)
```
Check proxy tunnel status:
```bash
# SSH into CT 100 first
ssh root@192.168.8.221
pct exec 100 -- bash

# Then:
systemctl status seedbox-socks.service
```
Restart proxy tunnel:
```bash
systemctl restart seedbox-socks.service
```
If Shelfmark searches fail or time out: The SOCKS5 tunnel is almost always the culprit. Check the service status above. The tunnel reconnects automatically via autossh, but occasionally needs a manual restart after seedbox connectivity issues.
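Beyond systemctl, you can confirm the tunnel endpoint is actually answering SOCKS5 traffic. The no-auth greeting is two fixed byte sequences (RFC 1928), so a raw TCP probe is enough. This is an illustrative sketch, not part of the existing tooling:

```python
import socket

GREETING = b"\x05\x01\x00"  # SOCKS version 5, 1 auth method offered, 0x00 = no auth

def socks5_alive(host: str = "192.168.8.100", port: int = 1080,
                 timeout: float = 3.0) -> bool:
    """Return True if a SOCKS5 server answers the no-auth greeting."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(GREETING)
            reply = s.recv(2)
    except OSError:
        return False
    return reply == b"\x05\x00"  # version 5, method 0x00 accepted

if __name__ == "__main__":
    print("tunnel up" if socks5_alive() else "tunnel down")
```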
Prowlarr runs on CT 100 as an indexer aggregator, providing Usenet-based book search as an additional release source alongside the direct download sources (Anna’s Archive, Libgen, etc.). Prowlarr routes all traffic through the same SOCKS5 seedbox tunnel as Shelfmark.
| Setting | Value |
|---|---|
| URL | http://192.168.8.100:9696 |
| Image | lscr.io/linuxserver/prowlarr:latest |
| Compose file | /opt/prowlarr/docker-compose.yml (on CT 100) |
| API Key | 2adb6f9d248840bcadc0ab93222b78fd |
| Auth | Forms (user: bee), disabled for local addresses |
| SOCKS5 proxy | 172.26.0.1:1080 (Prowlarr Docker bridge gateway → CT 100 host tunnel) |
| Bypass | 192.168.8.*,localhost,127.0.0.1,172.26.0.* |
Indexers configured:
| Indexer | Type | API Key | Book categories |
|---|---|---|---|
| altHUB | Newznab (Usenet) | f0d9327bc1db3011025b40176ec6955a | 7000 (Books), 107020 (Ebook), 107030 (Comics), 107010 (Mags) |
Shelfmark connection: Enabled in Shelfmark Settings → Prowlarr with auto-expand search on. When Shelfmark searches for a book, it queries both direct_download and prowlarr sources simultaneously. Prowlarr found 115 additional books that direct download sources couldn’t locate.
Note: Prowlarr’s Docker network (prowlarr_default) uses gateway 172.26.0.1, which is different from Shelfmark’s network (shelfmark_default, gateway 172.25.0.1). Both reach the same SOCKS tunnel on 0.0.0.0:1080 on the CT 100 host, just via different Docker bridge IPs.
A Python script at ~/Sync/ED/homelab/book_library/kindle_to_shelfmark.py automates bulk importing from the Kindle library into Shelfmark. It reads the Kindle NZB results JSON (1,773 books with titles, authors, and ASINs) and for each book: searches Shelfmark’s Hardcover metadata provider, finds downloadable releases, and queues the best epub for download.
First full run (2026-03-31):
| Metric | Count |
|---|---|
| Total processed | 1,773 |
| Metadata found | 1,732 (97.7%) |
| Metadata not found | 41 |
| Releases found | 1,273 (73.5% of metadata matches) |
| Releases not found | 459 |
| Queued for download | 1,349 (1,234 first run + 115 Prowlarr retry) |
| Queue failures | 39 (mostly duplicates already in queue) |
Usage:
```bash
cd ~/Sync/ED/homelab/book_library

# Dry run – search only, don't download
python3 kindle_to_shelfmark.py --dry-run --skip-existing

# Full run – queue all downloads
python3 kindle_to_shelfmark.py --skip-existing --delay 5

# Resume after interruption
python3 kindle_to_shelfmark.py --skip-existing --resume

# Process in batches
python3 kindle_to_shelfmark.py --skip-existing --limit 50
```
Files:
| File | Purpose |
|---|---|
| `kindle_to_shelfmark.py` | Import script |
| `kindle_nzb_results.json` | Source data – 1,773 Kindle books with ASIN, title, author |
| `shelfmark_state.json` | Resume state – tracks which ASINs have been processed |
| `shelfmark_import_*.log` | Timestamped log files for each run |
How the script works:
- For each Kindle book, search the Shelfmark metadata API (`/api/metadata/search`) using title + author
- Find the best match by title word overlap (≥40% threshold)
- Search for downloadable releases (`/api/releases`) using the matched provider/book_id
- Score releases – prefer epub format and a reasonable file size (0.5–100 MB)
- Queue the best release via `/api/releases/download`
- Save state after every 20 books for resume capability
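The match and scoring rules above reduce to small pure functions. A sketch using the thresholds from the list (function names and exact weights are illustrative, not the script’s actual code):

```python
def title_overlap(kindle_title: str, candidate_title: str) -> float:
    """Fraction of the Kindle title's words that appear in the candidate title."""
    src = set(kindle_title.lower().split())
    dst = set(candidate_title.lower().split())
    return len(src & dst) / len(src) if src else 0.0

def is_match(kindle_title: str, candidate_title: str,
             threshold: float = 0.40) -> bool:
    """Accept a metadata result when word overlap meets the 40% threshold."""
    return title_overlap(kindle_title, candidate_title) >= threshold

def score_release(fmt: str, size_mb: float) -> int:
    """Higher is better: prefer epub, reward a plausible ebook size."""
    score = 0
    if fmt.lower() == "epub":
        score += 10
    if 0.5 <= size_mb <= 100:  # the 0.5–100 MB window from the list above
        score += 5
    return score
```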
Shelfmark has no authentication enabled (auth_mode: none). All API endpoints are accessible without credentials.
| Endpoint | Method | Purpose |
|---|---|---|
| `/api/health` | GET | Health check |
| `/api/metadata/search?query=...&limit=N` | GET | Search book metadata (Hardcover) |
| `/api/releases?provider=...&book_id=...` | GET | Search downloadable releases for a book |
| `/api/releases/download` | POST | Queue a release for download |
| `/api/localdownload` | GET | List locally downloaded books |
| `/api/downloads/active` | GET | Active download queue |
| `/api/config` | GET | Current configuration |
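Since no auth is enabled, a thin stdlib-only client covers the whole table. A sketch; the POST body fields for the download endpoint are assumed from the parameter names above, not verified against the Shelfmark source:

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

BASE = "http://192.168.8.100:8084"

def api_url(path: str, **params: str) -> str:
    """Build a Shelfmark API URL; no credentials needed (auth_mode: none)."""
    return f"{BASE}{path}" + (f"?{urlencode(params)}" if params else "")

def get(path: str, **params):
    """GET an endpoint and decode the JSON body."""
    with urlopen(api_url(path, **params), timeout=10) as r:
        return json.load(r)

def queue_download(provider: str, book_id: str):
    """POST a release to the download queue (body field names assumed)."""
    body = json.dumps({"provider": provider, "book_id": book_id}).encode()
    req = Request(api_url("/api/releases/download"), data=body,
                  headers={"Content-Type": "application/json"})
    with urlopen(req, timeout=30) as r:
        return json.load(r)
```

Usage: `get("/api/health")` for a liveness check, `get("/api/metadata/search", query="dune", limit="5")` to search Hardcover metadata.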
Shelfmark searches multiple sources in priority order until a download succeeds.
Fast sources (tried first):
| Source | Status | Notes |
|---|---|---|
| AA Fast Downloads | ✓ Active | Requires donator key. Dedicated fast servers, typically 2–4 MB/s. |
| Library Genesis | ✓ Active | Default mirrors: libgen.gl, .li, .bz, .la, .vg |
Slow sources (fallback):
| Source | Status | Notes |
|---|---|---|
| AA Slow (No Waitlist) | ✓ Active | Partner servers, no countdown |
| AA Slow (Waitlist) | ✓ Active | Partner servers with countdown timer |
| Welib | ✓ Active | Alternative mirror, requires Cloudflare bypass |
| Zlib | ✓ Active | Z-Library mirror, requires Cloudflare bypass |
Additional sources:
| Source | Status | Notes |
|---|---|---|
| Prowlarr (altHUB) | ✓ Active | Usenet indexer, finds books not in direct download sources |
Anna’s Archive donator key is configured in Settings → Download Sources. This unlocks the AA Fast Downloads tier with dedicated servers instead of the free mirrors that crawl at 3–10 KB/s.
DNS-over-HTTPS (DoH) is disabled in Shelfmark’s network config (/opt/shelfmark/config/plugins/network.json, USE_DOH: false). DoH via Quad9 was returning 400 errors for several domains when routed through the SOCKS tunnel, causing all post-processing to fail. System DNS resolution works correctly as a fallback.
Shelfmark is configured as a Pangolin private resource โ it’s reachable remotely without opening any public ports. Connect via the Pangolin VPN client and access it at http://192.168.8.100:8084 as if you’re on the home network.
It does not have a public subdomain; it is intentionally kept private since it’s a search aggregation tool.
Peer-to-peer file synchronization across Mac Studio, MacBook, and Proxmox: hub-and-spoke topology with Proxmox as the always-on relay. Installed March 27, 2026.
Syncthing provides real-time, encrypted, peer-to-peer file synchronization without relying on cloud services. It replaces iCloud Drive sync for app configuration files (Typinator, BetterTouchTool, etc.) which proved unreliable โ iCloud aggressively evicts files, struggles with frequently-updated small configs, and silently creates conflict copies instead of merging.
Topology: Hub-and-spoke
| Node | Role | IP | Syncthing Port | Web UI |
|---|---|---|---|---|
| Proxmox (pve) | Hub – always-on relay | 192.168.8.221 | 22000 | http://192.168.8.221:8384 |
| Mac Studio | Spoke | 192.168.8.180 | 22000 | http://127.0.0.1:8384 |
| MacBook | Spoke | 192.168.8.160 | 22000 | http://127.0.0.1:8384 |
Both Macs sync exclusively to Proxmox. They do not connect to each other directly. This means changes sync even when one Mac is asleep or powered off โ Proxmox holds the canonical copy and relays changes when the other Mac comes online.
Data flow:
```
Mac Studio ↔ Proxmox (always-on hub) ↔ MacBook
                  │
     /nvmepool/sync/SyncConfigs
     (canonical copy on ZFS)
```
Device IDs:
| Device | ID | Addresses |
|---|---|---|
| Proxmox (pve) | `FXMOTJR-XYM6RAO-NIY7KE6-4RPX2M4-NCMYYSG-KZX577Y-6QYSLHH-WL3HVAD` | `tcp://192.168.8.221:22000` |
| Mac Studio | `UXJMRP2-N2ZX2B7-KWI6OO2-IT7W5LC-GESRIJC-JV2DGTD-FEND4MO-F46LPAS` | `tcp://192.168.8.180:22000` |
| MacBook | `VLIWDBL-5VD3VSC-XQTOXYS-EGU3NSB-BCHQOML-ACRYBI3-VFT2SPR-WDNBEAF` | `tcp://192.168.8.160:22000` |
All devices have autoAcceptFolders: true to simplify adding new shared folders.
| Folder ID | Label | Purpose | Shared With |
|---|---|---|---|
| `app-configs` | App Configs | Typinator, BetterTouchTool, and other app configuration sync | All three devices |
Folder paths per device:
| Device | Path |
|---|---|
| Proxmox | /nvmepool/sync/SyncConfigs |
| Mac Studio | ~/Sync/SyncConfigs |
| MacBook | ~/Sync/SyncConfigs |
All folders use Send & Receive mode – changes on any device propagate to all others via Proxmox.
Proxmox (Debian):
| Setting | Value |
|---|---|
| Version | 1.29.5 |
| Install method | apt install syncthing |
| Service | syncthing@root.service (systemd) |
| Config location | /root/.local/state/syncthing/config.xml |
| API key | RzHyGwQhmkvb9A4burcfWGHThGcoThqM |
| Web UI | http://0.0.0.0:8384 (LAN-accessible) |
Start/stop/restart:
```bash
systemctl start syncthing@root
systemctl stop syncthing@root
systemctl restart syncthing@root
systemctl status syncthing@root
```
Mac Studio (macOS):
| Setting | Value |
|---|---|
| Version | 2.0.15 |
| Install method | brew install syncthing |
| Service | Homebrew launchd (homebrew.mxcl.syncthing) |
| Config location | ~/Library/Application Support/Syncthing/config.xml |
| API key | CCuJcwA9wTsfDecNXtymtZwfpQvYWAU7 |
| Web UI | http://127.0.0.1:8384 (localhost only) |
Start/stop/restart:
```bash
brew services start syncthing
brew services stop syncthing
brew services restart syncthing
brew services list | grep syncthing
```
MacBook (macOS):
| Setting | Value |
|---|---|
| Install method | brew install syncthing |
| Service | Homebrew launchd (homebrew.mxcl.syncthing) |
| Web UI | http://127.0.0.1:8384 (localhost only) |
Same brew services commands as Mac Studio.
Syncthing uses three ports. All were opened on Proxmox via UFW:
| Port | Protocol | Purpose | UFW Rule |
|---|---|---|---|
| 8384 | TCP | Web UI | ufw allow 8384/tcp comment 'Syncthing Web UI' |
| 22000 | TCP | Sync protocol (file transfer) | ufw allow 22000/tcp comment 'Syncthing sync' |
| 21027 | UDP | Local discovery (LAN device detection) | ufw allow 21027/udp comment 'Syncthing discovery' |
On the Macs, no firewall changes are needed โ macOS will prompt on first run and Syncthing only listens on localhost for the web UI.
Global discovery and relaying are enabled by default but not needed on the LAN. All devices are configured with explicit tcp://IP:22000 addresses for direct LAN connections. Global discovery and relaying serve as fallback if a device connects from outside the home network.
The primary use case is syncing app configs between both Macs, replacing unreliable iCloud sync.
Typinator:
- Quit Typinator on both Macs
- Open Typinator Preferences → Advanced → Data Folder
- Point it at `~/Sync/SyncConfigs/Typinator/`
- Do the same on the other Mac
- Relaunch – configs now sync via Syncthing
BetterTouchTool:
- Open BTT Preferences → Sync
- Set the sync folder to `~/Sync/SyncConfigs/BTT/`
- Repeat on the other Mac
Adding new apps:
For apps with a built-in “data folder” or “sync folder” setting, point it at a subfolder inside ~/Sync/SyncConfigs/. For apps without a custom path setting, use a symbolic link:
```bash
# Quit the app first, then:
mv ~/Library/Application\ Support/AppName ~/Sync/SyncConfigs/AppName
ln -s ~/Sync/SyncConfigs/AppName ~/Library/Application\ Support/AppName
```
Note: Sandboxed App Store apps may not follow symlinks. Check that the app works after symlinking before relying on it.
Syncthing is included in the Proxmox health monitoring system. The system-health-check.sh script (runs every 15 minutes, pushes to Gotify) monitors:
- ZFS pool health for `nvmepool` (where SyncConfigs lives)
- Disk space on all pools
- Proxmox service availability
The Syncthing web UIs on each device also show connection status, sync progress, and any file conflicts.
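Connection status can also be pulled from Syncthing’s REST API (GET /rest/system/connections, authenticated with the X-API-Key header). A sketch against the hub using the Proxmox key from the table above; the response-field names follow Syncthing’s documented schema:

```python
import json
from urllib.request import Request, urlopen

HUB = "http://192.168.8.221:8384"
API_KEY = "RzHyGwQhmkvb9A4burcfWGHThGcoThqM"  # Proxmox API key from the table above

def rest_request(path: str) -> Request:
    """Build an authenticated request against Syncthing's REST API."""
    return Request(f"{HUB}{path}", headers={"X-API-Key": API_KEY})

def connected_devices() -> list[str]:
    """Device IDs the hub currently holds an open connection to."""
    with urlopen(rest_request("/rest/system/connections"), timeout=10) as r:
        data = json.load(r)
    return [dev for dev, c in data.get("connections", {}).items()
            if c.get("connected")]
```

With both Macs awake, `connected_devices()` should return the two spoke device IDs from the Device IDs table.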
Gotify alert configuration:
| Setting | Value |
|---|---|
| Gotify URL | http://192.168.8.100:8070 |
| App name | System Alerts |
| Token | ARCkVc0wf001L.e |
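Any script can raise the same alert path by POSTing to Gotify’s standard /message endpoint with an application token. A sketch using the System Alerts token from the table (the helper name is illustrative):

```python
import json
from urllib.request import Request, urlopen

GOTIFY_URL = "http://192.168.8.100:8070"
TOKEN = "ARCkVc0wf001L.e"  # "System Alerts" app token from the table above

def build_alert(title: str, message: str, priority: int = 5) -> Request:
    """Build a Gotify POST /message request (token passed as a query parameter)."""
    body = json.dumps({"title": title, "message": message, "priority": priority})
    return Request(f"{GOTIFY_URL}/message?token={TOKEN}",
                   data=body.encode(),
                   headers={"Content-Type": "application/json"},
                   method="POST")

if __name__ == "__main__":
    with urlopen(build_alert("test", "hello from the homelab"), timeout=10) as r:
        print(r.status)  # 200 means the message was accepted
```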
Device shows "Disconnected":
- Check if Syncthing is running on the remote device (`brew services list | grep syncthing` or `systemctl status syncthing@root`)
- Verify the device address is set to `tcp://IP:22000` (not just `dynamic`)
- Check UFW on Proxmox: `ufw status | grep -E '22000|21027'`
- Restart Syncthing: `brew services restart syncthing` or `systemctl restart syncthing@root`
Files not syncing:
- Check the Syncthing web UI for errors or conflicts
- Verify the folder path exists on both devices
- Check folder type is "Send & Receive" on all devices
- Look for `.stignore` files that might be filtering content
Conflict files:
Syncthing creates .sync-conflict-YYYYMMDD-HHMMSS files when the same file is modified on multiple devices simultaneously. Resolve by keeping the correct version and deleting the conflict copy.
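Hunting stray conflict copies across the synced folder can be scripted. A small sketch (hypothetical helper, matching on the `.sync-conflict-` infix Syncthing uses in conflict filenames):

```python
from pathlib import Path

def is_conflict_copy(name: str) -> bool:
    """Syncthing names conflict copies like 'file.sync-conflict-20260101-120000.ext'."""
    return ".sync-conflict-" in name

def conflict_files(folder: str = "~/Sync/SyncConfigs") -> list[Path]:
    """All conflict copies under the synced folder, sorted by path."""
    root = Path(folder).expanduser()
    return sorted(p for p in root.rglob("*") if is_conflict_copy(p.name))

if __name__ == "__main__":
    for p in conflict_files():
        print(p)  # review each, keep the correct version, delete the copy
```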
Reset a device’s Syncthing config:
```bash
# Mac (will regenerate on next start)
brew services stop syncthing
rm -rf ~/Library/Application\ Support/Syncthing/
brew services start syncthing

# Proxmox
systemctl stop syncthing@root
rm -rf /root/.local/state/syncthing/
systemctl start syncthing@root
```
After resetting, you’ll need to re-add devices and shared folders.
Problem: Typinator and BetterTouchTool configuration files were not syncing reliably between Mac Studio and MacBook despite using their built-in iCloud sync. iCloud Drive was identified as the root cause: it aggressively evicts files to free local storage, handles frequently-updated small files poorly, and creates silent conflict copies.
Solution: Syncthing was deployed with Proxmox as the always-on hub. This provides: real-time LAN sync without cloud dependency, no file eviction, proper conflict detection with visible conflict files, and ZFS-backed storage on the hub with automatic snapshots every 15 minutes.
Why not direct Mac-to-Mac sync: Both Macs would need to be powered on simultaneously for sync to occur. With Proxmox as the hub, one Mac can be off โ changes queue on Proxmox and sync when the other Mac comes online.
Installed: March 27, 2026.