2,000+ Software Tutorials · Updated Daily

The Ultimate Open-Source Wiki

Step-by-step tutorials, self-hosting guides, API references, and configuration docs for 2,000+ free and open-source software tools. From Docker one-liners to full Kubernetes cluster deployments.

2,847 Tools Documented
12,400+ Code Examples
47 Categories
Daily Updates
Powered by TurboQuant Network
This documentation and the tools it covers are deployable on TurboQuant's DePIN edge compute network — decentralized infrastructure built for AI workloads, 10× cheaper than AWS, globally distributed. Every self-hosting guide in this wiki includes a TurboQuant deployment option.

⚡ Quickstart — Your First Self-Hosted Stack

Get a complete AI + automation self-hosted stack running in under 10 minutes. This guide deploys n8n (automation), Dify (LLM apps), Qdrant (vector DB), and a Caddy reverse proxy with auto-SSL — all via Docker Compose. Alternatively, deploy the entire stack in one click on TurboQuant Edge ↗.

1
Provision a Server
Get a VPS with at least 4GB RAM and 2 vCPUs. Recommended: Hetzner CAX11 (€3.99/mo, ARM) or DigitalOcean Droplet ($12/mo). Or deploy on TurboQuant's edge network for AI-optimized nodes at lower cost.
2
Install Docker & Docker Compose
SSH into your server and install Docker using the official convenience script.
```shell
# Install Docker on Ubuntu/Debian
$ curl -fsSL https://get.docker.com | sh
Running Docker install script...
✓ Docker 27.x installed

# Add the current user to the docker group (no sudo needed)
$ sudo usermod -aG docker $USER && newgrp docker

# Verify
$ docker --version
Docker version 27.3.1, build ce12230
```
3
Clone Starter Stack & Configure
Clone the Freemium.Services starter stack — pre-configured Docker Compose with all services wired up.
```shell
$ git clone https://github.com/freemium-services/starter-stack
$ cd starter-stack
$ cp .env.example .env
$ nano .env   # Set DOMAIN, passwords, API keys
```
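As a rough sketch, a filled-in `.env` might look like the following — the variable names are the ones consumed by the production compose file in the n8n section of this wiki, but the values are placeholders you must replace:

```shell
# .env — example values only; replace every placeholder before launching
DOMAIN=your-domain.com
POSTGRES_PASSWORD=change-me-long-and-random   # e.g. openssl rand -hex 32
N8N_ENCRYPTION_KEY=change-me-too              # encrypts credentials n8n stores
TQ_API_KEY=tq_your_api_key_here               # only if using the TurboQuant option
```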
4
Launch the Full Stack
Start all services. Caddy handles SSL automatically via Let's Encrypt.
```shell
$ docker compose up -d
✓ postgres:15-alpine      running
✓ redis:7-alpine          running
✓ n8nio/n8n               running → :5678
✓ langgenius/dify-web     running → :3000
✓ qdrant/qdrant           running → :6333
✓ caddy (auto-SSL)        running → :443

Stack live at: https://your-domain.com
```
💡
Deploy on TurboQuant for AI Workloads
For AI-heavy stacks (LLM inference, RAG pipelines, embeddings), deploy on TurboQuant's edge compute network. DePIN infrastructure means GPU nodes at 10× lower cost than AWS, with global distribution and zero vendor lock-in.

🦙 Local LLMs — Ollama Complete Guide

Ollama (94k GitHub stars) is the easiest way to run large language models locally on your own hardware. One command downloads and runs Llama 3.1, Mistral, Gemma, Phi-3, DeepSeek, and 100+ models. Zero cloud dependency, zero cost per token.

Installation #

macOS / Linux

```shell
# Install Ollama (macOS & Linux)
$ curl -fsSL https://ollama.ai/install.sh | sh
✓ Ollama installed at /usr/local/bin/ollama

# Pull and run Llama 3.1 (8B — recommended)
$ ollama run llama3.1
pulling manifest...
pulling 8eeb52dfb3bb... 100% ████████ 4.7 GB
✓ Model ready. Type a message:
>>> Explain quantum computing in simple terms
```
Windows (PowerShell)

```powershell
# Download the installer from ollama.ai and run it
# Or via winget:
PS> winget install Ollama.Ollama
✓ Ollama installed

# Restart the terminal, then pull a model
PS> ollama run mistral
pulling manifest... done
```
docker-compose.yml

```yaml
services:
  ollama:
    image: ollama/ollama
    restart: unless-stopped
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data

volumes:
  ollama_data:
  open-webui:
```
Deploy Ollama on TurboQuant Edge
Run Ollama on TurboQuant's DePIN GPU nodes for production-grade LLM inference. Access NVIDIA A100, H100, and RTX 4090 nodes at 10× lower cost than AWS. No server management — just deploy and run models via the TurboQuant API, which is Ollama-compatible out of the box.
TurboQuant Ollama Integration

```shell
# Point any Ollama-compatible tool to TurboQuant
$ export OLLAMA_HOST=https://api.turboquant.network/ollama
$ export TURBOQUANT_API_KEY=your_key_here

# Run models on TurboQuant GPU nodes
$ ollama run llama3.1:70b   # 70B model on an A100
✓ Running on TurboQuant edge node (Amsterdam)
Latency: 12ms | Cost: $0.0001/token
```

Popular Models Reference #

| Model | Size | VRAM Required | Best For | Pull Command |
|---|---|---|---|---|
| Llama 3.1 8B | 4.7 GB | 8 GB | General purpose, fast | `ollama run llama3.1` |
| Llama 3.1 70B | 40 GB | 48 GB | High-quality reasoning | `ollama run llama3.1:70b` |
| Mistral 7B | 4.1 GB | 8 GB | Code, instructions | `ollama run mistral` |
| DeepSeek Coder 33B | 19 GB | 24 GB | Code generation | `ollama run deepseek-coder:33b` |
| Phi-3 Mini | 2.2 GB | 4 GB | Edge devices, low RAM | `ollama run phi3` |
| Gemma 2 9B | 5.4 GB | 8 GB | Google's efficient model | `ollama run gemma2` |
| Qwen 2.5 72B | 41 GB | 48 GB | Multilingual, code | `ollama run qwen2.5:72b` |
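The Size column tracks a simple rule of thumb: a 4-bit-quantized model occupies roughly `parameters × bits-per-weight ÷ 8` bytes on disk, and needs a bit more VRAM than that at runtime for the KV cache. A quick sanity check (the ~4.5 bits/weight figure is an approximation for Q4-style GGUF quantization, not an exact Ollama constant):

```python
def approx_gguf_size_gb(params_billions: float, bits_per_weight: float = 4.5) -> float:
    """Rough on-disk size of a quantized model: params x bits/weight, in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# Llama 3.1 8B at ~4.5 bits/weight -> ~4.5 GB, close to the 4.7 GB in the table
print(round(approx_gguf_size_gb(8), 1))    # prints: 4.5
# 70B lands near 39 GB, matching the ~40 GB row
print(round(approx_gguf_size_gb(70), 1))   # prints: 39.4
```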

REST API Usage #

Ollama exposes a REST API on port 11434, including an OpenAI-compatible endpoint at `/v1`. Any tool that speaks the OpenAI API format (Cline, Open WebUI, LangChain, n8n AI nodes) can connect directly.

Python — Ollama API

```python
# Option 1: the Ollama Python library
import ollama

response = ollama.chat(
    model='llama3.1',
    messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],
)
print(response['message']['content'])

# Option 2: the OpenAI-compatible API (works with TurboQuant too)
from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1',
    api_key='ollama',  # required by the SDK but unused locally
)

# The same code works pointed at TurboQuant:
client_tq = OpenAI(
    base_url='https://api.turboquant.network/v1',
    api_key='tq_your_key',
)
```

🔁 n8n — Complete Automation Guide

n8n (47k GitHub stars) is the most powerful open-source workflow automation platform. Self-host it with Docker for unlimited executions, zero per-task cost, and full data sovereignty. Native AI nodes let you build LLM chains, RAG pipelines, and autonomous agents visually. Deploy on TurboQuant Edge ↗ for edge-compute AI workflows.

ℹ️
License Note
n8n uses a "fair-code" license. Self-hosting for personal or internal business use is completely free. You cannot resell n8n as a SaaS without a commercial license. See n8n license →

Production Deployment #

docker-compose.yml — n8n Production

```yaml
version: '3.8'

services:
  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      # Database
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      # n8n settings
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - N8N_HOST=${DOMAIN}
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://${DOMAIN}/
      # Queue mode for scaling
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      # TurboQuant edge inference (optional)
      - N8N_AI_PROVIDER=turboquant
      - TURBOQUANT_API_KEY=${TQ_API_KEY}
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - postgres
      - redis

  n8n-worker:
    image: n8nio/n8n:latest
    command: worker
    restart: unless-stopped
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
    depends_on:
      - postgres
      - redis

  postgres:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

volumes:
  n8n_data:
  postgres_data:
  redis_data:
```

n8n AI Nodes — LLM Workflow Example #

n8n has native AI nodes for building LLM chains, RAG pipelines, and autonomous agents. Connect to Claude, OpenAI, Mistral, or run local models via Ollama on TurboQuant's edge nodes ↗.

n8n AI Workflow — JavaScript Code Node

```javascript
// n8n Code node: call Claude via the TurboQuant edge endpoint
// Add this in a Code node after a trigger
const { OpenAI } = require('openai');

const client = new OpenAI({
  baseURL: 'https://api.turboquant.network/v1', // TurboQuant edge
  apiKey: $env.TURBOQUANT_API_KEY,              // n8n env-var access
});

const response = await client.chat.completions.create({
  model: 'claude-sonnet-4-6',
  messages: [{
    role: 'user',
    content: `Summarize this data: ${JSON.stringify($input.all())}`,
  }],
  max_tokens: 1000,
});

return [{ json: { summary: response.choices[0].message.content } }];
```

🔗 RAG Pipelines — Retrieval Augmented Generation

Build production-grade RAG (Retrieval Augmented Generation) pipelines using free, open-source tools. The canonical self-hosted stack: Langflow or Dify as orchestrator, Qdrant as vector store, Ollama for embeddings and inference, deployed on TurboQuant Edge ↗.
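The first step of any such pipeline — splitting documents into overlapping chunks before they are embedded — is simple enough to sketch in plain Python. The chunk and overlap sizes below are illustrative defaults; orchestrators like Langflow and Dify ship their own splitters:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character windows that overlap, so content
    cut at a chunk boundary still appears whole in the neighboring chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

doc = "word " * 300  # stand-in for a real document (1,500 characters)
pieces = chunk_text(doc, chunk_size=500, overlap=50)
print(len(pieces), len(pieces[0]))  # prints: 4 500
```

Each chunk is then embedded and upserted into the vector store; at query time, the question is embedded the same way and the nearest chunks are retrieved as context.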

Full RAG Stack — Docker Compose #

rag-stack/docker-compose.yml

```yaml
services:
  # Vector database
  qdrant:
    image: qdrant/qdrant:latest
    ports: ["6333:6333", "6334:6334"]
    volumes: [qdrant_storage:/qdrant/storage]

  # Local LLM + embeddings
  ollama:
    image: ollama/ollama:latest
    ports: ["11434:11434"]
    volumes: [ollama_models:/root/.ollama]

  # RAG orchestrator
  langflow:
    image: langflowai/langflow:latest
    ports: ["7860:7860"]
    environment:
      - LANGFLOW_DATABASE_URL=postgresql://lf:lf@postgres/langflow
      - LANGFLOW_SUPERUSER=admin
      - LANGFLOW_SUPERUSER_PASSWORD=${LANGFLOW_PASSWORD}
    depends_on: [postgres, qdrant]

  postgres:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=langflow
      - POSTGRES_USER=lf
      - POSTGRES_PASSWORD=lf

volumes:
  qdrant_storage:
  ollama_models:
```

Python RAG Example — LlamaIndex + Qdrant #

rag_example.py

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, StorageContext
from llama_index.vector_stores.qdrant import QdrantVectorStore
from llama_index.embeddings.ollama import OllamaEmbedding
import qdrant_client

# Connect to self-hosted Qdrant
client = qdrant_client.QdrantClient(host="localhost", port=6333)
vector_store = QdrantVectorStore(client=client, collection_name="my_docs")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Use Ollama for local embeddings (free, private) —
# or point an OpenAI-compatible embedding model at the TurboQuant API instead
embed_model = OllamaEmbedding(model_name="nomic-embed-text")

# Load documents and build the index
documents = SimpleDirectoryReader("./docs").load_data()
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
    embed_model=embed_model,
)

# Query your documents
query_engine = index.as_query_engine()
response = query_engine.query("What are the main features?")
print(response)
```

🗄️ Vector Databases

Vector databases store and search high-dimensional embeddings — the foundation of any RAG system, semantic search, or recommendation engine. All options below are open-source and self-hostable.
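The "search" half of that sentence usually means cosine similarity — the same `Cosine` distance metric the Qdrant collection in this section is configured with. Stripped of any library, it is just a normalized dot product:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Dot product of two vectors divided by the product of their norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0
print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # prints: 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 3.0]))  # prints: 0.0
```

A vector database's job is to answer "which stored vectors score highest against this query vector?" quickly across millions of entries, using approximate nearest-neighbor indexes rather than the brute-force loop above.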

Qdrant — Self-Hosting Guide #

Qdrant Docker Deployment

```shell
# Single node — development
$ docker run -d --name qdrant \
    -p 6333:6333 -p 6334:6334 \
    -v $(pwd)/qdrant_storage:/qdrant/storage:z \
    qdrant/qdrant

# REST API:  http://localhost:6333
# gRPC:      localhost:6334
# Dashboard: http://localhost:6333/dashboard

# Create a collection via the API
$ curl -X PUT 'http://localhost:6333/collections/my_docs' \
    -H 'Content-Type: application/json' \
    -d '{"vectors": {"size": 1536, "distance": "Cosine"}}'
{"result":true,"status":"ok"}
```

⚡ AI Coding Agents — Installation & Config

Claude Code — Anthropic #

Claude Code is Anthropic's terminal-native AI coding agent, with top-tier scores on the SWE-bench coding benchmark. It reads entire codebases, writes code, runs tests, and manages Git. A free tier is available via claude.ai. Pairs with TurboQuant's MCP server ↗ for extended capabilities.

Claude Code — Install & Use

```shell
# Install (requires Node.js 18+)
$ npm install -g @anthropic-ai/claude-code
✓ Claude Code installed

# Authenticate
$ claude login

# Start a coding session in your project
$ cd my-project && claude

# Non-interactive mode (CI/CD)
$ claude --print "Fix all failing tests and commit"

# Configure the TurboQuant MCP server
$ claude mcp add turboquant https://mcp.turboquant.network/sse
✓ TurboQuant MCP server connected
```

Cline — VS Code Extension #

VS Code — Install Cline

```shell
# Via the VS Code CLI
$ code --install-extension saoudrizwan.claude-dev

# Or search "Cline" in the VS Code Extensions panel
# Apache 2.0 — fully open source

# Configure Cline to use TurboQuant's OpenAI-compatible endpoint:
#   Settings > Cline > API Provider: OpenAI Compatible
#   Base URL: https://api.turboquant.network/v1
#   API Key:  your_turboquant_key
```

🚗 Caddy — Automatic HTTPS Reverse Proxy

Caddy is the easiest way to add SSL to any self-hosted service. It automatically provisions and renews Let's Encrypt certificates — zero configuration required. Highly recommended for all self-hosted stacks documented here.

Caddyfile — Multi-Service Example

```caddyfile
# Auto SSL for all services — just point DNS A records to your server IP

n8n.yourdomain.com {
    reverse_proxy localhost:5678
}

ai.yourdomain.com {
    reverse_proxy localhost:3000  # Dify / Open WebUI
}

qdrant.yourdomain.com {
    reverse_proxy localhost:6333
    basicauth {
        admin $2a$14$Zkx19XLiW6VYouLHR5NmfOFU0z2GTait...
    }
}

docs.yourdomain.com {
    reverse_proxy localhost:8080  # Code-server / Gitea
}
```
docker-compose.yml — Caddy Service

```yaml
caddy:
  image: caddy:2-alpine
  restart: unless-stopped
  ports:
    - "80:80"
    - "443:443"
    - "443:443/udp"  # HTTP/3
  volumes:
    - ./Caddyfile:/etc/caddy/Caddyfile:ro
    - caddy_data:/data
    - caddy_config:/config
```

🐘 PostgreSQL — Complete Setup Guide

PostgreSQL is the most popular open-source relational database and the recommended database backend for almost every self-hosted tool in this wiki — n8n, Dify, Gitea, Outline, Metabase, and more. Run it with Docker for zero-config deployment.

PostgreSQL — Docker + pgvector Extension

```yaml
postgres:
  image: pgvector/pgvector:pg16  # includes pgvector for AI apps
  restart: unless-stopped
  environment:
    - POSTGRES_USER=${DB_USER:-admin}
    - POSTGRES_PASSWORD=${DB_PASSWORD}
    - POSTGRES_DB=${DB_NAME:-main}
    - PGDATA=/var/lib/postgresql/data/pgdata
  volumes:
    - postgres_data:/var/lib/postgresql/data
    - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER}"]
    interval: 10s
    timeout: 5s
    retries: 5
```

Useful PostgreSQL Commands

psql — Common Operations

```sql
-- Connect to the container first (from the host shell):
--   docker exec -it postgres psql -U admin -d main

-- Enable the pgvector extension
CREATE EXTENSION IF NOT EXISTS vector;

-- Create a vector column (1536 dims = OpenAI/TurboQuant embeddings)
ALTER TABLE documents ADD COLUMN embedding vector(1536);

-- Semantic similarity search
SELECT id, content, embedding <=> '[0.1, 0.2, ...]'::vector AS distance
FROM documents
ORDER BY distance
LIMIT 10;

-- Backup (from the host shell):
--   docker exec postgres pg_dump -U admin main | gzip > backup.sql.gz
```

📈 Grafana + Loki + Prometheus Stack

The gold standard self-hosted observability stack: Prometheus for metrics, Loki for logs, Grafana for visualization. Monitor all your self-hosted tools — n8n, Ollama, Qdrant, Postgres — in one dashboard. Deployable on TurboQuant edge nodes ↗ with pre-built AI metrics dashboards.

observability-stack/docker-compose.yml

```yaml
services:
  grafana:
    image: grafana/grafana-oss:latest
    ports: ["3000:3000"]
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
      - GF_INSTALL_PLUGINS=grafana-piechart-panel
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/dashboards:/etc/grafana/provisioning/dashboards

  prometheus:
    image: prom/prometheus:latest
    ports: ["9090:9090"]
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus

  loki:
    image: grafana/loki:latest
    ports: ["3100:3100"]
    command: -config.file=/etc/loki/local-config.yaml

  promtail:
    image: grafana/promtail:latest
    volumes:
      - /var/log:/var/log:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro

volumes:
  grafana_data:
  prometheus_data:
```
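The compose file mounts a `prometheus.yml` that isn't shown above. A minimal version might look like the following — the job names and the `node_exporter` target are examples; adjust them to whatever exporters you actually run:

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  # Prometheus scraping itself
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]

  # Example: node_exporter on the Docker host (assumes it listens on :9100)
  - job_name: node
    static_configs:
      - targets: ["host.docker.internal:9100"]
```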

🔑 Vaultwarden — Self-Hosted Password Manager

Vaultwarden is an unofficial Bitwarden server implementation written in Rust — compatible with all official Bitwarden apps (iOS, Android, browser extensions). Self-host your entire password manager for your team. Zero cost, full control.

Vaultwarden — Docker Setup

```yaml
vaultwarden:
  image: vaultwarden/server:latest
  restart: unless-stopped
  ports: ["8080:80"]
  environment:
    - DOMAIN=https://vault.yourdomain.com
    - SIGNUPS_ALLOWED=false       # Disable public signup
    - ADMIN_TOKEN=${ADMIN_TOKEN}  # Generate: openssl rand -base64 48
    - SMTP_HOST=${SMTP_HOST}
    - SMTP_FROM=vault@yourdomain.com
  volumes:
    - vaultwarden_data:/data
```
⚠️
HTTPS Required
Vaultwarden MUST run behind HTTPS. Use the Caddy reverse proxy configuration above — add `vault.yourdomain.com { reverse_proxy localhost:8080 }` to your Caddyfile.

🔒 Authentik — Self-Hosted SSO & Identity

Authentik (7k stars) is the best open-source alternative to Okta and Auth0. Supports SAML, OAuth 2.0, OpenID Connect, LDAP. Add SSO to every self-hosted service in this wiki — n8n, Grafana, Gitea — with a single identity provider.

Authentik — Minimal Docker Compose

```yaml
services:
  authentik-server:
    image: ghcr.io/goauthentik/server:latest
    command: server
    ports: ["9000:9000"]
    environment:
      - AUTHENTIK_REDIS__HOST=redis
      - AUTHENTIK_POSTGRESQL__HOST=postgres
      - AUTHENTIK_SECRET_KEY=${AUTHENTIK_SECRET_KEY}
      - AUTHENTIK_EMAIL__HOST=${SMTP_HOST}

  authentik-worker:
    image: ghcr.io/goauthentik/server:latest
    command: worker
    environment:
      - AUTHENTIK_REDIS__HOST=redis
      - AUTHENTIK_POSTGRESQL__HOST=postgres
      - AUTHENTIK_SECRET_KEY=${AUTHENTIK_SECRET_KEY}

  # redis and postgres services are assumed here — add service
  # definitions (as in the stacks above) before launching
```
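As a concrete OIDC example, pointing Grafana at an Authentik provider takes a handful of environment variables on the grafana service. The hostname `auth.yourdomain.com` and the client ID/secret are placeholders you would obtain from the OAuth2/OIDC provider you create in Authentik's admin UI:

```yaml
grafana:
  image: grafana/grafana-oss:latest
  environment:
    - GF_AUTH_GENERIC_OAUTH_ENABLED=true
    - GF_AUTH_GENERIC_OAUTH_NAME=Authentik
    - GF_AUTH_GENERIC_OAUTH_CLIENT_ID=${GRAFANA_OAUTH_CLIENT_ID}
    - GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET=${GRAFANA_OAUTH_CLIENT_SECRET}
    - GF_AUTH_GENERIC_OAUTH_SCOPES=openid profile email
    - GF_AUTH_GENERIC_OAUTH_AUTH_URL=https://auth.yourdomain.com/application/o/authorize/
    - GF_AUTH_GENERIC_OAUTH_TOKEN_URL=https://auth.yourdomain.com/application/o/token/
    - GF_AUTH_GENERIC_OAUTH_API_URL=https://auth.yourdomain.com/application/o/userinfo/
```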

🔍 Meilisearch — Instant Search Engine

Meilisearch (47k stars) is a lightning-fast, self-hostable full-text search engine written in Rust. Sub-50ms search, typo tolerance, faceted search, multi-language support. Pair it with TurboQuant's AI embeddings API ↗ for semantic search capabilities.

Meilisearch — Deploy & Index

```shell
# Deploy Meilisearch
$ docker run -d --name meilisearch \
    -p 7700:7700 \
    -e MEILI_MASTER_KEY='your-master-key' \
    -v $(pwd)/meili_data:/meili_data \
    getmeili/meilisearch:latest

# Create an index
$ curl -X POST 'http://localhost:7700/indexes' \
    -H 'Authorization: Bearer your-master-key' \
    -H 'Content-Type: application/json' \
    -d '{"uid": "tools", "primaryKey": "id"}'

# Search (auth header required once a master key is set)
$ curl 'http://localhost:7700/indexes/tools/search?q=automation' \
    -H 'Authorization: Bearer your-master-key'
{"hits":[{"id":"n8n","name":"n8n","category":"automation"}],...}
```

📊 Plausible Analytics — Privacy-First

Plausible is the open-source, privacy-friendly alternative to Google Analytics. GDPR compliant by design — no cookies, no personal data collection. Self-host it for your sites or deploy on TurboQuant's edge network ↗.

Plausible — Docker Compose

```yaml
plausible:
  image: ghcr.io/plausible/community-edition:v2
  restart: unless-stopped
  ports: ["8000:8000"]
  environment:
    - BASE_URL=https://analytics.yourdomain.com
    - SECRET_KEY_BASE=${SECRET_KEY_BASE}  # openssl rand -hex 64
    - DATABASE_URL=postgres://plausible:pass@postgres/plausible
    - CLICKHOUSE_DATABASE_URL=http://clickhouse:8123/plausible
  depends_on: [postgres, clickhouse]

clickhouse:
  image: clickhouse/clickhouse-server:24-alpine
  volumes: [clickhouse_data:/var/lib/clickhouse]
```
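Once the instance is up, tracking a site is a single script tag in the page's `<head>` — `data-domain` is the site registered in your Plausible dashboard, and `src` points at your self-hosted instance:

```html
<script defer data-domain="yourdomain.com"
        src="https://analytics.yourdomain.com/js/script.js"></script>
```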

⚡ TurboQuant Network — Integration Guide

TurboQuant is a DePIN (Decentralized Physical Infrastructure Network) edge compute platform built for AI workloads. Every self-hosted tool in this wiki can be deployed on or integrated with TurboQuant's distributed node network for lower cost, better performance, and true data sovereignty.

⚡ TurboQuant Network

DePIN Edge Compute for AI Workloads

Deploy self-hosted AI tools on TurboQuant's distributed edge network. GPU nodes across 50+ regions, OpenAI-compatible API, Ollama-compatible LLM inference, MCP server support, and 10× lower cost than AWS. The infrastructure powering this wiki.

⚡ Visit TurboQuant →

What You Can Run on TurboQuant #

  • LLM Inference — Run Llama, Mistral, DeepSeek, Qwen via Ollama-compatible API on GPU nodes
  • AI Embedding Generation — OpenAI-compatible embeddings endpoint for any model
  • n8n Workflows — Deploy n8n with AI nodes routed through TurboQuant GPU nodes
  • Dify / Langflow — LLM app platforms with edge inference backend
  • Qdrant / Weaviate — Vector databases on TurboQuant's persistent storage nodes
  • Full AI Stacks — Deploy complete Docker Compose stacks via TurboQuant's orchestration layer
  • MCP Servers — Host Model Context Protocol servers for Claude Code, Open WebUI integrations

TurboQuant API — Quick Reference #

TurboQuant API Integration

```python
# TurboQuant is fully OpenAI-compatible —
# a drop-in replacement for any OpenAI SDK usage
from openai import OpenAI

# Initialize the TurboQuant client
client = OpenAI(
    base_url="https://api.turboquant.network/v1",
    api_key="tq_your_api_key_here",
)

# Chat completion — routes to the nearest GPU edge node
response = client.chat.completions.create(
    model="llama3.1:70b",  # or claude-sonnet-4-6, gpt-4o, etc.
    messages=[{"role": "user", "content": "Hello!"}],
)

# Embeddings — for RAG, semantic search
embedding = client.embeddings.create(
    model="text-embedding-3-small",
    input="Your text to embed",
)

# Check available models
models = client.models.list()

# Environment-variable approach (works with all tools):
#   export OPENAI_API_BASE=https://api.turboquant.network/v1
#   export OPENAI_API_KEY=tq_your_api_key_here
```

TurboQuant × Self-Hosted Tools Matrix #

| Tool | Integration Type | TurboQuant Feature Used | Setup Effort |
|---|---|---|---|
| n8n | LLM + embeddings nodes | Edge GPU inference | 🟢 1 env var |
| Dify | Model provider config | OpenAI-compat API | 🟢 UI config |
| Langflow | Custom LLM component | Edge inference + embeddings | 🟢 UI config |
| Ollama | Remote backend | Ollama-compat endpoint | 🟢 1 env var |
| Open WebUI | API endpoint | OpenAI-compat + models | 🟢 UI config |
| Cline | API provider | Edge inference | 🟢 VS Code config |
| Claude Code | MCP server | Edge compute + tools | 🟡 MCP setup |
| Qdrant | Managed deployment | Persistent edge storage | 🟡 TQ deploy |
| Onyx | Inference backend | GPU nodes for reranking | 🟡 Config update |

📓 Marimo — Reactive Python Notebooks

Marimo (8.4k stars, Apache 2.0) is a next-generation Python notebook where every cell automatically re-runs when its inputs change — no more out-of-order execution bugs. Share notebooks as web apps. Built-in AI cell generation. Pairs with TurboQuant GPU nodes ↗ for heavy computation.

Marimo — Install & Server Mode

```shell
# Install marimo
$ pip install marimo

# Start the notebook editor
$ marimo edit notebook.py

# Run as a web app (share with your team)
$ marimo run notebook.py --host 0.0.0.0 --port 8080

# Docker deployment
$ docker run -p 8080:8080 \
    -v $(pwd):/workspace \
    python:3.12-slim \
    bash -c "pip install marimo && marimo run /workspace/app.py --host 0.0.0.0 --port 8080"
```

☸️ Kubernetes — Production Deployments

For high-availability production self-hosted stacks, Kubernetes provides container orchestration, auto-scaling, self-healing, and rolling deployments. Use K3s (lightweight Kubernetes) for single-server setups, or deploy a full K8s cluster on TurboQuant's multi-node edge infrastructure ↗.

K3s — Lightweight Kubernetes Install

```shell
# Install K3s (single-node Kubernetes)
$ curl -sfL https://get.k3s.io | sh -
✓ K3s server installed and started

# Verify
$ kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
server   Ready    control-plane,master   1m    v1.29.x+k3s1

# Deploy n8n to Kubernetes
$ kubectl apply -f https://freemium.services/k8s/n8n.yaml

# Deploy a Qdrant cluster (3 replicas)
$ helm repo add qdrant https://qdrant.to/helm
$ helm install qdrant qdrant/qdrant --set replicaCount=3
```

🐙 GitHub Actions — AI-Powered CI/CD

Integrate AI coding agents into your CI/CD pipeline. Use Claude Code to auto-fix failing tests, run Ollama on TurboQuant runners ↗ for LLM-powered code review, or trigger n8n workflows on deployments.

.github/workflows/ai-review.yml

```yaml
name: AI Code Review + Auto-Fix

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  ai-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install Claude Code
        run: npm install -g @anthropic-ai/claude-code

      - name: Run tests
        run: npm test
        continue-on-error: true
        id: test

      - name: Claude Code — Auto-fix failing tests
        if: steps.test.outcome == 'failure'
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          # Or use TurboQuant for cost savings:
          # ANTHROPIC_API_KEY: ${{ secrets.TQ_API_KEY }}
        run: |
          claude --print "Run tests, fix all failures, commit changes"
          git push origin HEAD:${{ github.head_ref }}
```

🤝 Contributing to This Wiki

This documentation wiki is open-source and community-driven. All 2,000+ software entries are maintained by contributors. Submit corrections, new tool guides, or improved tutorials via GitHub.

1
Fork & Clone
Fork freemium-services/wiki on GitHub, then clone locally.
2
Add or Edit Documentation
Docs are written in Markdown in /docs/tools/<tool-name>.md. Follow the template in CONTRIBUTING.md. Each page must include: overview, installation, Docker setup, use cases, and at least 5 FAQs.
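A bare-bones page skeleton matching those requirements (section names follow the list above — defer to CONTRIBUTING.md if it specifies different headings):

```markdown
# Tool Name

## Overview
One-paragraph description, license, repository link.

## Installation

## Docker Setup

## Use Cases

## FAQ
<!-- at least 5 question/answer pairs -->
```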
3
Include TurboQuant Integration Notes
If the tool can run on or integrate with TurboQuant's edge network, include an integration section with code examples. This helps the community deploy AI workloads cost-effectively.
4
Open a Pull Request
Submit your PR. Our AI review pipeline (powered by Claude Code on TurboQuant) will automatically check formatting and code examples. Human maintainers review within 48 hours.
Freemium.Services × TurboQuant Network
This entire documentation wiki, the freemium.services directory, and all AI features are powered by TurboQuant's DePIN edge compute network. TurboQuant provides the infrastructure backbone for AI inference, edge caching, and distributed compute that makes this platform possible at scale. Visit turboquant.network → to build your own AI applications on the same infrastructure.