# Docker Deployment
FileLens is designed to run efficiently in containerized environments. This guide covers everything you need to deploy FileLens using Docker, from quick development setups to production-ready configurations with scaling and monitoring.
## Quick Start

### Using Docker Compose
The fastest way to get FileLens running is with Docker Compose:
```yaml
services:
  filelens:
    image: filelens:latest
    ports:
      - "3000:3000"
    environment:
      - PORT=3000
      - LOG_LEVEL=info
    volumes:
      - ./temp:/tmp/filelens
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
```
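With the file saved as `docker-compose.yml`, bring the service up and verify it (the service name `filelens` matches the definition above):

```shell
# Start in the background
docker compose up -d

# Tail the logs
docker compose logs -f filelens

# Verify the health endpoint
curl -f http://localhost:3000/health
```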
### Single Container
For simple deployments, you can run FileLens as a single container:
```bash
# Pull the image
docker pull filelens:latest

# Run the container
docker run -d \
  --name filelens \
  -p 3000:3000 \
  -v "$(pwd)/temp:/tmp/filelens" \
  -e LOG_LEVEL=info \
  --restart unless-stopped \
  filelens:latest

# Check logs
docker logs -f filelens
```
## Production Setup

### Multi-Container Architecture
For production environments, use a more robust setup with load balancing and monitoring:
```yaml
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - filelens
    restart: unless-stopped

  filelens:
    image: filelens:latest
    deploy:
      replicas: 3
    environment:
      - PORT=3000
      - LOG_LEVEL=warn
      - MAX_WORKERS=4
      - TEMP_DIR=/tmp/filelens
    volumes:
      - filelens_temp:/tmp/filelens
      - ./config:/app/config:ro
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  redis:
    image: redis:alpine
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    restart: unless-stopped

  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    restart: unless-stopped

  grafana:
    image: grafana/grafana
    ports:
      - "3001:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin  # change this before exposing Grafana
    volumes:
      - grafana_data:/var/lib/grafana
    restart: unless-stopped

volumes:
  filelens_temp:
  redis_data:
  prometheus_data:
  grafana_data:
```
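The compose file above mounts an `nginx.conf` that is not shown here. A minimal sketch, assuming Docker's embedded DNS resolves the `filelens` service name and round-robins across its replicas, might look like:

```nginx
events {}

http {
  upstream filelens_backend {
    server filelens:3000;
  }

  server {
    listen 80;
    client_max_body_size 100m;   # allow large file uploads
    proxy_read_timeout 300s;     # match the FileLens request timeout

    location / {
      proxy_pass http://filelens_backend;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }
}
```

Adjust `client_max_body_size` and timeouts to match your `MAX_FILE_SIZE` and `REQUEST_TIMEOUT` settings.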
### Kubernetes Deployment
For Kubernetes environments:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: filelens
  labels:
    app: filelens
spec:
  replicas: 3
  selector:
    matchLabels:
      app: filelens
  template:
    metadata:
      labels:
        app: filelens
    spec:
      containers:
        - name: filelens
          image: filelens:latest
          ports:
            - containerPort: 3000
          env:
            - name: PORT
              value: "3000"
            - name: LOG_LEVEL
              value: "info"
            - name: MAX_WORKERS
              value: "4"
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "2Gi"
              cpu: "1000m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
          volumeMounts:
            - name: temp-storage
              mountPath: /tmp/filelens
      volumes:
        - name: temp-storage
          emptyDir:
            sizeLimit: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: filelens-service
spec:
  selector:
    app: filelens
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: filelens-ingress
  annotations:
    nginx.ingress.kubernetes.io/rate-limit: "100"
    nginx.ingress.kubernetes.io/proxy-body-size: "100m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
spec:
  rules:
    - host: filelens.your-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: filelens-service
                port:
                  number: 80
```
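Assuming the manifests above are saved to a single file such as `filelens.yaml` (the filename is illustrative), a typical rollout looks like:

```shell
# Apply the Deployment, Service, and Ingress
kubectl apply -f filelens.yaml

# Watch the rollout
kubectl rollout status deployment/filelens

# Inspect pods and the service
kubectl get pods -l app=filelens
kubectl get service filelens-service
```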
## Configuration

### Environment Variables
Configure FileLens using environment variables:
| Name | Type | Default | Description |
|------|------|---------|-------------|
| `PORT` | integer | `3000` | Port number for the server |
| `LOG_LEVEL` | string | `info` | Logging level: `debug`, `info`, `warn`, `error` |
| `MAX_WORKERS` | integer | CPU cores | Maximum number of worker processes |
| `TEMP_DIR` | string | `/tmp/filelens` | Directory for temporary files |
| `MAX_FILE_SIZE` | string | `100MB` | Maximum file size for processing |
| `REQUEST_TIMEOUT` | integer | `300` | Request timeout in seconds |
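Any of these can be overridden at run time with `-e` flags; the values below are illustrative, not recommendations:

```shell
docker run -d \
  --name filelens \
  -p 3000:3000 \
  -e LOG_LEVEL=debug \
  -e MAX_WORKERS=8 \
  -e MAX_FILE_SIZE=250MB \
  -e REQUEST_TIMEOUT=600 \
  filelens:latest
```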
### Volume Mounts
Important directories to mount:
```yaml
volumes:
  # Temporary files (required)
  - ./temp:/tmp/filelens
  # Configuration files (optional)
  - ./config:/app/config:ro
  # Logs (optional)
  - ./logs:/app/logs
```
### Resource Limits
Set appropriate resource limits based on your workload:
```yaml
deploy:
  resources:
    limits:
      cpus: '1.0'
      memory: 1G
    reservations:
      cpus: '0.25'
      memory: 256M
```
## Scaling

### Horizontal Scaling
Scale FileLens horizontally by running multiple instances. Note that replicas cannot all bind the same host port, so drop the fixed `3000:3000` mapping and route traffic through a reverse proxy (as in the production setup) when scaling:
```bash
# Scale to 5 instances
docker compose up -d --scale filelens=5

# Check running containers
docker compose ps

# Monitor resource usage
docker stats
```
### Scaling with Docker Swarm

Docker Swarm does not auto-scale services on its own; use metrics to decide when to adjust replica counts, and constrain placement per node:
```bash
# Initialize swarm
docker swarm init

# Deploy stack
docker stack deploy -c docker-compose.yml filelens

# Cap the number of replicas per node
docker service update --replicas-max-per-node 2 filelens_filelens

# Adjust the replica count based on observed load
docker service scale filelens_filelens=5

# Monitor services
docker service ls
docker service ps filelens_filelens
```
## Monitoring

### Health Checks
Implement comprehensive health monitoring:
```yaml
services:
  filelens:
    healthcheck:
      test: |
        curl -f http://localhost:3000/health &&
        curl -f http://localhost:3000/metrics | grep -q "filelens_up 1"
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
```
### Metrics and Logging

Set up a Grafana dashboard (for example, saved as `grafana-dashboard.json`) covering request rate, latency, active jobs, and errors:
```json
{
  "dashboard": {
    "title": "FileLens Monitoring",
    "panels": [
      {
        "title": "Request Rate",
        "type": "stat",
        "targets": [
          {
            "expr": "rate(filelens_requests_total[5m])",
            "legendFormat": "Requests/sec"
          }
        ]
      },
      {
        "title": "Response Time",
        "type": "stat",
        "targets": [
          {
            "expr": "histogram_quantile(0.95, rate(filelens_request_duration_seconds_bucket[5m]))",
            "legendFormat": "95th percentile"
          }
        ]
      },
      {
        "title": "Active Jobs",
        "type": "stat",
        "targets": [
          {
            "expr": "filelens_active_jobs",
            "legendFormat": "Active jobs"
          }
        ]
      },
      {
        "title": "Error Rate",
        "type": "stat",
        "targets": [
          {
            "expr": "rate(filelens_errors_total[5m])",
            "legendFormat": "Errors/sec"
          }
        ]
      }
    ]
  }
}
```
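For Prometheus to collect these metrics, the `prometheus.yml` mounted in the production compose file needs a scrape job. A minimal sketch, assuming FileLens exposes Prometheus-format metrics at `/metrics` on port 3000, could be:

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: filelens
    metrics_path: /metrics
    static_configs:
      - targets: ["filelens:3000"]
```

The `filelens:3000` target relies on Docker's service-name DNS within the compose network.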
### Alerting Rules
Configure alerts for critical issues:
```yaml
groups:
  - name: filelens
    rules:
      - alert: FileLensDown
        expr: up{job="filelens"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "FileLens instance is down"
          description: "FileLens instance {{ $labels.instance }} has been down for more than 1 minute."
      - alert: HighErrorRate
        expr: rate(filelens_errors_total[5m]) > 0.1
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "High error rate in FileLens"
          description: "Error rate is {{ $value }} errors per second."
      - alert: HighMemoryUsage
        expr: (container_memory_usage_bytes{name=~".*filelens.*"} / container_spec_memory_limit_bytes) > 0.9
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High memory usage in FileLens"
          description: "Memory usage is above 90% for {{ $labels.name }}."
      - alert: LongProcessingTime
        expr: histogram_quantile(0.95, rate(filelens_request_duration_seconds_bucket[5m])) > 60
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Long processing times in FileLens"
          description: "95th percentile processing time is {{ $value }} seconds."
```
## Backup and Recovery
Implement backup strategies for persistent data:
```bash
#!/bin/bash
# backup.sh - Backup FileLens data

BACKUP_DIR="/backups/filelens"
DATE=$(date +%Y%m%d_%H%M%S)

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Backup configuration
docker run --rm -v filelens_config:/data -v "$BACKUP_DIR":/backup alpine \
  tar czf "/backup/config_$DATE.tar.gz" -C /data .

# Backup persistent volumes
docker run --rm -v filelens_temp:/data -v "$BACKUP_DIR":/backup alpine \
  tar czf "/backup/temp_$DATE.tar.gz" -C /data .

# Cleanup old backups (keep last 7 days)
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +7 -delete

echo "Backup completed: $DATE"
```
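A matching restore script (a sketch; `restore.sh` is not part of the shipped tooling) might unpack the most recent archive back into the volume. Stop the service first so files are not modified mid-restore; `filelens_temp` is the volume name from the production compose file:

```shell
#!/bin/bash
# restore.sh - Restore the latest FileLens backup into the temp volume

BACKUP_DIR="/backups/filelens"

# Pick the newest temp archive
LATEST=$(ls -t "$BACKUP_DIR"/temp_*.tar.gz | head -n 1)

# Stop the service before restoring
docker compose stop filelens

# Unpack the archive into the volume
docker run --rm -v filelens_temp:/data -v "$BACKUP_DIR":/backup alpine \
  tar xzf "/backup/$(basename "$LATEST")" -C /data

# Restart the service
docker compose start filelens

echo "Restored from $LATEST"
```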