Step-by-step tutorials, self-hosting guides, API references, and configuration docs for 2,000+ free and open-source software tools. From Docker one-liners to full Kubernetes cluster deployments.
2,847 Tools Documented · 12,400+ Code Examples · 47 Categories · Daily Updates
⚡
Powered by TurboQuant Network
This documentation and the tools it covers are deployable on TurboQuant's DePIN edge compute network — decentralized infrastructure built for AI workloads, 10× cheaper than AWS, globally distributed. Every self-hosting guide in this wiki includes a TurboQuant deployment option.
⚡ Quickstart — Your First Self-Hosted Stack
Get a complete AI + automation self-hosted stack running in under 10 minutes. This guide deploys n8n (automation), Dify (LLM apps), Qdrant (vector DB), and a Caddy reverse proxy with auto-SSL — all via Docker Compose. Alternatively, deploy the entire stack in one click on TurboQuant Edge ↗.
1
Provision a Server
Get a VPS with at least 4GB RAM and 2 vCPUs. Recommended: Hetzner CAX11 (€3.99/mo, ARM) or DigitalOcean Droplet ($12/mo). Or deploy on TurboQuant's edge network for AI-optimized nodes at lower cost.
2
Install Docker & Docker Compose
SSH into your server and install Docker using the official convenience script.
```bash
# Install Docker on Ubuntu/Debian
$ curl -fsSL https://get.docker.com | sh
Running Docker install script...
✓ Docker 27.x installed

# Add current user to docker group (no sudo needed)
$ sudo usermod -aG docker $USER && newgrp docker

# Verify
$ docker --version
Docker version 27.3.1, build ce12230
```
3
Clone Starter Stack & Configure
Clone the Freemium.Services starter stack — pre-configured Docker Compose with all services wired up.
```bash
$ git clone https://github.com/freemium-services/starter-stack
$ cd starter-stack
$ cp .env.example .env
$ nano .env   # Set DOMAIN, passwords, API keys
```
4
Launch the Full Stack
Start all services with `docker compose up -d`. Caddy obtains and renews SSL certificates automatically via Let's Encrypt.
For AI-heavy stacks (LLM inference, RAG pipelines, embeddings), deploy on TurboQuant's edge compute network. DePIN infrastructure means GPU nodes at 10× lower cost than AWS, with global distribution and zero vendor lock-in.
🦙 Local LLMs — Ollama Complete Guide
Ollama (94k GitHub stars) is the easiest way to run large language models locally on your own hardware. One command downloads and runs Llama 3.1, Mistral, Gemma, Phi-3, DeepSeek, and 100+ models. Zero cloud dependency, zero cost per token.
```bash
# Install Ollama (macOS & Linux)
$ curl -fsSL https://ollama.ai/install.sh | sh
✓ Ollama installed at /usr/local/bin/ollama

# Pull and run Llama 3.1 (8B — recommended)
$ ollama run llama3.1
pulling manifest...
pulling 8eeb52dfb3bb... 100% ████████ 4.7 GB
✓ Model ready. Type a message:
>>> Explain quantum computing in simple terms
```
Windows (PowerShell)
```powershell
# Download installer from ollama.ai and run
# Or via winget:
PS> winget install Ollama.Ollama
✓ Ollama installed

# Restart terminal, then pull a model
PS> ollama run mistral
pulling manifest... done
```
Run Ollama on TurboQuant's DePIN GPU nodes for production-grade LLM inference. Access NVIDIA A100, H100, and RTX 4090 nodes at 10× lower cost than AWS. No server management — just deploy and run models via the TurboQuant API, which is Ollama-compatible out of the box.
TurboQuant Ollama Integration
```bash
# Point any Ollama-compatible tool to TurboQuant
$ export OLLAMA_HOST=https://api.turboquant.network/ollama
$ export TURBOQUANT_API_KEY=your_key_here

# Run models on TurboQuant GPU nodes
$ ollama run llama3.1:70b   # 70B model on A100
✓ Running on TurboQuant edge node (Amsterdam)
Latency: 12ms | Cost: $0.0001/token
```
Ollama serves a REST API on port 11434, including an OpenAI-compatible endpoint at `/v1`. Any tool that speaks the OpenAI API format (Cline, Open WebUI, LangChain, n8n AI nodes) can connect directly.
Python — Ollama API
```python
# Option 1: Use the Ollama Python library
import ollama

response = ollama.chat(
    model='llama3.1',
    messages=[{'role': 'user', 'content': 'Why is the sky blue?'}]
)
print(response['message']['content'])

# Option 2: OpenAI-compatible API (works with TurboQuant too)
from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1',
    api_key='ollama'  # required but unused locally
)

# The same code works pointed at TurboQuant:
client_tq = OpenAI(
    base_url='https://api.turboquant.network/v1',
    api_key='tq_your_key'
)
```
🔁 n8n — Complete Automation Guide
n8n (47k GitHub stars) is the most powerful open-source workflow automation platform. Self-host it with Docker for unlimited executions, zero per-task cost, and full data sovereignty. Native AI nodes let you build LLM chains, RAG pipelines, and autonomous agents visually. Deploy on TurboQuant Edge ↗ for edge-compute AI workflows.
ℹ️
License Note
n8n uses a "fair-code" license. Self-hosting for personal or internal business use is completely free. You cannot resell n8n as a SaaS without a commercial license. See n8n license →
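As a starting point, a minimal Docker Compose service for self-hosting n8n might look like the sketch below. The image and environment variable names follow n8n's official Docker docs; the domain and timezone values are placeholders you should change, and the `n8n_data` named volume must be declared at the top level of your compose file.

```yaml
# Minimal n8n service — add under "services:" in docker-compose.yml
n8n:
  image: docker.n8n.io/n8nio/n8n
  restart: unless-stopped
  ports:
    - "5678:5678"
  environment:
    - N8N_HOST=n8n.yourdomain.com          # placeholder domain
    - WEBHOOK_URL=https://n8n.yourdomain.com/
    - GENERIC_TIMEZONE=Europe/Berlin       # placeholder timezone
  volumes:
    - n8n_data:/home/node/.n8n             # persists workflows and credentials
```

Put this behind the Caddy reverse proxy described later in this guide so webhooks are reachable over HTTPS.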
n8n has native AI nodes for building LLM chains, RAG pipelines, and autonomous agents. Connect to Claude, OpenAI, Mistral, or run local models via Ollama on TurboQuant's edge nodes ↗.
n8n AI Workflow — JavaScript Code Node
```javascript
// n8n Code node: call Claude via the TurboQuant edge endpoint.
// Add this in a Code node after a trigger.
// Self-hosted n8n must allow the module: NODE_FUNCTION_ALLOW_EXTERNAL=openai
const { OpenAI } = require('openai');

const client = new OpenAI({
  baseURL: 'https://api.turboquant.network/v1', // TurboQuant edge
  apiKey: $env.TURBOQUANT_API_KEY,
});

const response = await client.chat.completions.create({
  model: 'claude-sonnet-4-6',
  messages: [{
    role: 'user',
    content: `Summarize this data: ${JSON.stringify($input.all())}`,
  }],
  max_tokens: 1000,
});

return [{ json: { summary: response.choices[0].message.content } }];
```
🔗 RAG Pipelines — Retrieval Augmented Generation
Build production-grade RAG (Retrieval Augmented Generation) pipelines using free, open-source tools. The canonical self-hosted stack: Langflow or Dify as orchestrator, Qdrant as vector store, Ollama for embeddings and inference, deployed on TurboQuant Edge ↗.
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, StorageContext
from llama_index.vector_stores.qdrant import QdrantVectorStore
from llama_index.embeddings.ollama import OllamaEmbedding
import qdrant_client

# Connect to self-hosted Qdrant
client = qdrant_client.QdrantClient(host="localhost", port=6333)
vector_store = QdrantVectorStore(client=client, collection_name="my_docs")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Use Ollama for local embeddings (free, private)
embed_model = OllamaEmbedding(model_name="nomic-embed-text")

# Or use the TurboQuant API for cloud embeddings:
# from llama_index.embeddings.openai import OpenAIEmbedding
# embed_model = OpenAIEmbedding(api_key="tq_...", api_base="https://api.turboquant.network/v1")

# Load documents and build the index
documents = SimpleDirectoryReader("./docs").load_data()
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
    embed_model=embed_model,
)

# Query your documents
query_engine = index.as_query_engine()
response = query_engine.query("What are the main features?")
print(response)
```
🗄️ Vector Databases
Vector databases store and search high-dimensional embeddings — the foundation of any RAG system, semantic search, or recommendation engine. All options below are open-source and self-hostable.
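To see what a vector database does under the hood, here is a toy, stdlib-only sketch of brute-force cosine-similarity search, the operation that engines like Qdrant accelerate with approximate-nearest-neighbor indexes such as HNSW. The document IDs and 3-dimensional "embeddings" below are made-up illustrative data.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity = dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query: list[float], corpus: dict[str, list[float]], top_k: int = 2):
    """Brute-force nearest-neighbor: score every vector, sort, take top_k."""
    scored = [(doc_id, cosine_similarity(query, vec)) for doc_id, vec in corpus.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy 3-dimensional "embeddings" (real ones are hundreds to thousands of dims)
corpus = {
    "docker-guide": [0.9, 0.1, 0.0],
    "k8s-guide":    [0.8, 0.2, 0.1],
    "cooking-blog": [0.0, 0.1, 0.9],
}
print(search([1.0, 0.0, 0.0], corpus))
# The two infrastructure docs rank above the cooking blog
```

Brute force is O(n) per query; a real vector database trades a little recall for sub-linear query time over millions of vectors.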
🤖 Claude Code & Cline — AI Coding Agents
Claude Code is Anthropic's terminal-native AI coding agent, with top-tier scores on the SWE-bench benchmark. It reads entire codebases, writes code, runs tests, and manages Git. Free tier via claude.ai. Pairs with TurboQuant's MCP server ↗ for extended capabilities.
Claude Code — Install & Use
```bash
# Install (requires Node.js 18+)
$ npm install -g @anthropic-ai/claude-code
✓ Claude Code installed

# Authenticate
$ claude login

# Start a coding session in your project
$ cd my-project && claude

# Non-interactive mode (CI/CD)
$ claude --print "Fix all failing tests and commit"

# Configure TurboQuant MCP server
$ claude mcp add turboquant https://mcp.turboquant.network/sse
✓ TurboQuant MCP server connected
```
Cline — VS Code Extension
```bash
# Install Cline via the VS Code CLI
$ code --install-extension saoudrizwan.claude-dev
# Or search "Cline" in the VS Code Extensions panel
# Apache 2.0 — fully open source

# Configure Cline to use Ollama models on TurboQuant:
# Settings > Cline > API Provider: OpenAI Compatible
# Base URL: https://api.turboquant.network/v1
# API Key: your_turboquant_key
```
🚗 Caddy — Automatic HTTPS Reverse Proxy
Caddy is the easiest way to add SSL to any self-hosted service. It automatically provisions and renews Let's Encrypt certificates — zero configuration required. Highly recommended for all self-hosted stacks documented here.
Caddyfile — Multi-Service Example
```caddyfile
# Auto SSL for all services — just point DNS A records to your server IP

n8n.yourdomain.com {
    reverse_proxy localhost:5678
}

ai.yourdomain.com {
    reverse_proxy localhost:3000   # Dify / Open WebUI
}

qdrant.yourdomain.com {
    reverse_proxy localhost:6333
    basicauth {
        admin $2a$14$Zkx19XLiW6VYouLHR5NmfOFU0z2GTait...
    }
}

docs.yourdomain.com {
    reverse_proxy localhost:8080   # Code-server / Gitea
}
```
PostgreSQL is the most popular open-source relational database and the recommended database backend for almost every self-hosted tool in this wiki — n8n, Dify, Gitea, Outline, Metabase, and more. Run it with Docker for zero-config deployment.
PostgreSQL — Docker + pgvector Extension
```yaml
postgres:
  image: pgvector/pgvector:pg16   # includes pgvector for AI apps
  restart: unless-stopped
  environment:
    - POSTGRES_USER=${DB_USER:-admin}
    - POSTGRES_PASSWORD=${DB_PASSWORD}
    - POSTGRES_DB=${DB_NAME:-main}
    - PGDATA=/var/lib/postgresql/data/pgdata
  volumes:
    - postgres_data:/var/lib/postgresql/data
    - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER}"]
    interval: 10s
    timeout: 5s
    retries: 5
```
Useful PostgreSQL Commands
psql — Common Operations
```sql
-- Connect to the container (from the host shell):
--   docker exec -it postgres psql -U admin -d main

-- Enable pgvector extension
CREATE EXTENSION IF NOT EXISTS vector;

-- Create vector column (1536 dims = OpenAI/TurboQuant embeddings)
ALTER TABLE documents ADD COLUMN embedding vector(1536);

-- Semantic similarity search
SELECT id, content, embedding <=> '[0.1, 0.2, ...]'::vector AS distance
FROM documents
ORDER BY distance
LIMIT 10;

-- Backup (from the host shell):
--   docker exec postgres pg_dump -U admin main | gzip > backup.sql.gz
```
📈 Grafana + Loki + Prometheus Stack
The gold standard self-hosted observability stack: Prometheus for metrics, Loki for logs, Grafana for visualization. Monitor all your self-hosted tools — n8n, Ollama, Qdrant, Postgres — in one dashboard. Deployable on TurboQuant edge nodes ↗ with pre-built AI metrics dashboards.
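A minimal Docker Compose sketch of the stack is below, using the official images. The ports are the defaults; dashboards, Prometheus scrape targets, and log shipping to Loki still need per-service configuration, and the `prometheus.yml` referenced here is a file you create yourself.

```yaml
# Minimal observability stack — add under "services:" in docker-compose.yml
prometheus:
  image: prom/prometheus:latest
  restart: unless-stopped
  volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro   # your scrape targets
  ports:
    - "9090:9090"

loki:
  image: grafana/loki:latest
  restart: unless-stopped
  ports:
    - "3100:3100"

grafana:
  image: grafana/grafana:latest
  restart: unless-stopped
  environment:
    - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
  ports:
    - "3001:3000"   # 3000 may already be taken by Dify / Open WebUI
  volumes:
    - grafana_data:/var/lib/grafana   # declare this named volume at top level
```

In Grafana, add Prometheus (http://prometheus:9090) and Loki (http://loki:3100) as data sources to wire the three together.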
Vaultwarden is an unofficial Bitwarden server implementation written in Rust — compatible with all official Bitwarden apps (iOS, Android, browser extensions). Self-host your entire password manager for your team. Zero cost, full control.
Vaultwarden MUST run behind HTTPS. Use the Caddy reverse proxy configuration above — add vault.yourdomain.com { reverse_proxy localhost:8080 } to your Caddyfile.
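A minimal Compose service for Vaultwarden, using the official `vaultwarden/server` image, might look like this. The `DOMAIN` value must match your Caddy hostname, and binding the port to 127.0.0.1 keeps the plain-HTTP port off the public internet while Caddy terminates TLS.

```yaml
# Minimal Vaultwarden service — add under "services:" in docker-compose.yml
vaultwarden:
  image: vaultwarden/server:latest
  restart: unless-stopped
  environment:
    - DOMAIN=https://vault.yourdomain.com   # must match your Caddy hostname
    - SIGNUPS_ALLOWED=false                 # enable temporarily to create your account
  volumes:
    - ./vw-data:/data                       # vault database and attachments
  ports:
    - "127.0.0.1:8080:80"                   # localhost only; Caddy handles HTTPS
```

After first setup, keep `SIGNUPS_ALLOWED=false` so strangers cannot register accounts on your instance.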
🔒 Authentik — Self-Hosted SSO & Identity
Authentik (7k stars) is the best open-source alternative to Okta and Auth0. Supports SAML, OAuth 2.0, OpenID Connect, LDAP. Add SSO to every self-hosted service in this wiki — n8n, Grafana, Gitea — with a single identity provider.
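Authentik ships an official Compose file with four services (server, worker, PostgreSQL, Redis). The sketch below is a trimmed illustration of its shape rather than a drop-in config; for real deployments use the generated docker-compose.yml from authentik's own documentation.

```yaml
# Trimmed illustration of authentik's compose layout — not a complete config
postgresql:
  image: postgres:16-alpine
  environment:
    - POSTGRES_USER=authentik
    - POSTGRES_PASSWORD=${PG_PASS}
    - POSTGRES_DB=authentik

redis:
  image: redis:alpine

server:
  image: ghcr.io/goauthentik/server:latest
  command: server
  environment:
    - AUTHENTIK_SECRET_KEY=${AUTHENTIK_SECRET_KEY}
    - AUTHENTIK_POSTGRESQL__HOST=postgresql
    - AUTHENTIK_POSTGRESQL__PASSWORD=${PG_PASS}
    - AUTHENTIK_REDIS__HOST=redis
  ports:
    - "9000:9000"

worker:
  image: ghcr.io/goauthentik/server:latest
  command: worker
```

The server handles the web UI and auth flows; the worker runs background tasks, so both containers are required.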
Meilisearch (47k stars) is a lightning-fast, self-hostable full-text search engine written in Rust. Sub-50ms search, typo tolerance, faceted search, multi-language support. Pair it with TurboQuant's AI embeddings API ↗ for semantic search capabilities.
Meilisearch — Deploy & Index
```bash
# Deploy Meilisearch
$ docker run -d --name meilisearch \
    -p 7700:7700 \
    -e MEILI_MASTER_KEY='your-master-key' \
    -v $(pwd)/meili_data:/meili_data \
    getmeili/meilisearch:latest

# Create an index
$ curl -X POST 'http://localhost:7700/indexes' \
    -H 'Authorization: Bearer your-master-key' \
    -H 'Content-Type: application/json' \
    -d '{"uid": "tools", "primaryKey": "id"}'

# Search
$ curl 'http://localhost:7700/indexes/tools/search?q=automation'
{"hits":[{"id":"n8n","name":"n8n","category":"automation"}],...}
```
📊 Plausible Analytics — Privacy-First
Plausible is the open-source, privacy-friendly alternative to Google Analytics. GDPR compliant by design — no cookies, no personal data collection. Self-host it for your sites or deploy on TurboQuant's edge network ↗.
TurboQuant is a DePIN (Decentralized Physical Infrastructure Network) edge compute platform built for AI workloads. Every self-hosted tool in this wiki can be deployed on or integrated with TurboQuant's distributed node network for lower cost, better performance, and true data sovereignty.
⚡ TurboQuant Network
DePIN Edge Compute for AI Workloads
Deploy self-hosted AI tools on TurboQuant's distributed edge network. GPU nodes across 50+ regions, OpenAI-compatible API, Ollama-compatible LLM inference, MCP server support, and 10× lower cost than AWS. The infrastructure powering this wiki.
Marimo (8.4k stars, Apache 2.0) is a next-generation Python notebook where every cell automatically re-runs when its inputs change — no more out-of-order execution bugs. Share notebooks as web apps. Built-in AI cell generation. Pairs with TurboQuant GPU nodes ↗ for heavy computation.
Marimo — Install & Server Mode
```bash
# Install marimo
$ pip install marimo

# Start notebook editor
$ marimo edit notebook.py

# Run as web app (share with team)
$ marimo run notebook.py --host 0.0.0.0 --port 8080

# Docker deployment
$ docker run -p 8080:8080 \
    -v $(pwd):/workspace \
    python:3.12-slim \
    bash -c "pip install marimo && marimo run /workspace/app.py --host 0.0.0.0"
```
☸️ Kubernetes — Production Deployments
For high-availability production self-hosted stacks, Kubernetes provides container orchestration, auto-scaling, self-healing, and rolling deployments. Use K3s (lightweight Kubernetes) for single-server setups, or deploy a full K8s cluster on TurboQuant's multi-node edge infrastructure ↗.
K3s — Lightweight Kubernetes Install
```bash
# Install K3s (single-node Kubernetes)
$ curl -sfL https://get.k3s.io | sh -
✓ K3s server installed and started

# Verify
$ kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
server   Ready    control-plane,master   1m    v1.29.x+k3s1

# Deploy n8n to Kubernetes
$ kubectl apply -f https://freemium.services/k8s/n8n.yaml

# Deploy Qdrant cluster (3 replicas)
$ helm repo add qdrant https://qdrant.to/helm
$ helm install qdrant qdrant/qdrant --set replicaCount=3
```
🐙 GitHub Actions — AI-Powered CI/CD
Integrate AI coding agents into your CI/CD pipeline. Use Claude Code to auto-fix failing tests, run Ollama on TurboQuant runners ↗ for LLM-powered code review, or trigger n8n workflows on deployments.
.github/workflows/ai-review.yml
```yaml
name: AI Code Review + Auto-Fix

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  ai-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install Claude Code
        run: npm install -g @anthropic-ai/claude-code

      - name: Run tests
        id: test
        run: npm test
        continue-on-error: true

      - name: Claude Code — Auto-fix failing tests
        if: steps.test.outcome == 'failure'
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          # Or use TurboQuant for cost savings:
          # ANTHROPIC_API_KEY: ${{ secrets.TQ_API_KEY }}
        run: |
          claude --print "Run tests, fix all failures, commit changes"
          git push origin HEAD:${{ github.head_ref }}
```
🤝 Contributing to This Wiki
This documentation wiki is open-source and community-driven. All 2,000+ software entries are maintained by contributors. Submit corrections, new tool guides, or improved tutorials via GitHub.
1
Fork & Clone
Fork freemium-services/wiki on GitHub, then clone locally.
2
Add or Edit Documentation
Docs are written in Markdown in /docs/tools/<tool-name>.md. Follow the template in CONTRIBUTING.md. Each page must include: overview, installation, Docker setup, use cases, and at least 5 FAQs.
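The exact template lives in CONTRIBUTING.md; as a rough illustration, a page skeleton covering the required sections might look like:

```markdown
# <Tool Name>

## Overview
One-paragraph description, license, GitHub stars, homepage link.

## Installation
Step-by-step install instructions for the primary platform.

## Docker Setup
docker run / docker-compose example with volumes and env vars.

## Use Cases
Two or three concrete scenarios with configuration notes.

## FAQ
At least 5 question/answer pairs.
```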
3
Include TurboQuant Integration Notes
If the tool can run on or integrate with TurboQuant's edge network, include an integration section with code examples. This helps the community deploy AI workloads cost-effectively.
4
Open a Pull Request
Submit your PR. Our AI review pipeline (powered by Claude Code on TurboQuant) will automatically check formatting and code examples. Human maintainers review within 48 hours.
⚡
Freemium.Services × TurboQuant Network
This entire documentation wiki, the freemium.services directory, and all AI features are powered by TurboQuant's DePIN edge compute network.
TurboQuant provides the infrastructure backbone for AI inference, edge caching, and distributed compute that makes this platform possible at scale.
Visit turboquant.network → to build your own AI applications on the same infrastructure.