This guide provides detailed instructions for deploying and configuring K8sMed in both local and Kubernetes environments. It covers different AI provider options, configuration parameters, and troubleshooting tips.
K8sMed is an AI-powered Kubernetes troubleshooting assistant designed to diagnose issues, provide natural language explanations, and generate actionable remediation commands. It can be deployed locally as a CLI tool or within a Kubernetes cluster.
kubectl installed and configured with access to a Kubernetes cluster
git clone https://github.com/k8smed/k8smed.git
cd k8smed
make build
This will create a binary at bin/kubectl-k8smed.
# Option 1: Move to a location in your PATH
sudo mv bin/kubectl-k8smed /usr/local/bin/
# Option 2: Add the bin directory to your PATH
export PATH=$PATH:$(pwd)/bin
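Either way, a quick check that your shell can resolve the binary avoids confusing "command not found" errors later (a minimal sketch):

```shell
# Check that kubectl-k8smed is resolvable from PATH
if command -v kubectl-k8smed >/dev/null 2>&1; then
  echo "kubectl-k8smed: found"
else
  echo "kubectl-k8smed: not found; re-check your PATH"
fi
```

Because kubectl discovers any executable named kubectl-* on PATH as a plugin, a successful check also means `kubectl k8smed ...` invocations should resolve.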
K8sMed can be configured using environment variables:
# For OpenAI
export OPENAI_API_KEY="your-api-key"
export K8SMED_AI_PROVIDER="openai"
export K8SMED_AI_MODEL="gpt-4" # Options: gpt-3.5-turbo, gpt-4, etc.
# For LocalAI/Ollama
export K8SMED_AI_PROVIDER="localai"
export K8SMED_AI_MODEL="llama2" # Or your model name
export K8SMED_AI_ENDPOINT="http://localhost:11434/v1" # Your LocalAI/Ollama endpoint
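Endpoint typos (for example a stray trailing slash) are a common source of connection errors later, so it can help to normalize the value before exporting it. A small sketch; the `strip_trailing_slash` helper is made up for illustration:

```shell
# strip_trailing_slash: normalize an endpoint URL so ".../v1/" and ".../v1" behave alike
strip_trailing_slash() {
  printf '%s\n' "${1%/}"
}

export K8SMED_AI_ENDPOINT="$(strip_trailing_slash "http://localhost:11434/v1/")"
echo "$K8SMED_AI_ENDPOINT"   # http://localhost:11434/v1
```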
# Analyze a specific pod
kubectl-k8smed analyze pod mypod -n mynamespace
# Analyze with detailed explanations
kubectl-k8smed analyze "pod mypod has CrashLoopBackOff" --explain
# Anonymize sensitive information
kubectl-k8smed analyze deployment myapp -n mynamespace --anonymize
K8sMed can be deployed within your Kubernetes cluster to provide centralized troubleshooting capabilities.
# Build the image
docker build -t yourusername/k8smed:latest .
# Push to a registry (if deploying to a remote cluster)
docker push yourusername/k8smed:latest
# For kind
kind load docker-image yourusername/k8smed:latest
# For minikube
minikube image load yourusername/k8smed:latest
Create the necessary ConfigMap with your AI provider settings:
cat > deploy/manifests/configmap.yaml << EOF
apiVersion: v1
data:
  ai_endpoint: "https://api.openai.com/v1" # For OpenAI
  ai_model: "gpt-4"
  ai_provider: "openai"
kind: ConfigMap
metadata:
  name: k8smed-config
  namespace: k8smed-system
EOF
Create a Secret for your API key:
kubectl create secret generic k8smed-secrets \
--namespace=k8smed-system \
--from-literal=openai_api_key=your-api-key \
--dry-run=client -o yaml > deploy/manifests/secret.yaml
# Create namespace, RBAC, ConfigMap, Secret and Deployment
kubectl apply -f deploy/manifests/
kubectl get pods -n k8smed-system
Once deployed, you can use K8sMed by executing commands in the pod:
# Get the pod name
K8SMED_POD=$(kubectl get pods -n k8smed-system -o jsonpath='{.items[0].metadata.name}')
# Analyze a pod
kubectl exec -it -n k8smed-system $K8SMED_POD -- kubectl-k8smed analyze pod problematic-pod
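If you use this regularly, the two steps above can be wrapped in a small shell function (a sketch; the `k8smed` function name is arbitrary):

```shell
# k8smed: run kubectl-k8smed inside the in-cluster pod, forwarding all arguments
k8smed() {
  local pod
  pod=$(kubectl get pods -n k8smed-system -o jsonpath='{.items[0].metadata.name}')
  kubectl exec -it -n k8smed-system "$pod" -- kubectl-k8smed "$@"
}
```

Usage: `k8smed analyze pod problematic-pod`.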
K8sMed supports multiple AI providers, each with its own configuration requirements.
For OpenAI (GPT-3.5, GPT-4), you need:
# For local usage (environment variables)
export OPENAI_API_KEY="your-api-key"
export K8SMED_AI_PROVIDER="openai"
export K8SMED_AI_MODEL="gpt-4" # Options: gpt-3.5-turbo, gpt-4, etc.
# For Kubernetes (ConfigMap)
apiVersion: v1
data:
  ai_endpoint: "https://api.openai.com/v1"
  ai_model: "gpt-4"
  ai_provider: "openai"
kind: ConfigMap
metadata:
  name: k8smed-config
  namespace: k8smed-system
And a Secret for the API key:
apiVersion: v1
data:
  openai_api_key: "base64-encoded-api-key"
kind: Secret
metadata:
  name: k8smed-secrets
  namespace: k8smed-system
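The openai_api_key value under data must be base64-encoded. A quick round-trip check, using a placeholder key, shows the encoding Kubernetes expects:

```shell
# Encode the key the way Secret data expects (printf avoids a trailing newline)
encoded=$(printf '%s' "your-api-key" | base64)
echo "$encoded"
# Decode it back to confirm nothing was mangled
printf '%s' "$encoded" | base64 -d   # your-api-key
```

Alternatively, you can put the plain-text value under a Secret's stringData field and let Kubernetes do the encoding for you.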
For LocalAI or Ollama (self-hosted models):
# For local usage (environment variables)
export K8SMED_AI_PROVIDER="localai"
export K8SMED_AI_MODEL="llama2" # Or your model name
export K8SMED_AI_ENDPOINT="http://localhost:11434/v1" # Your LocalAI/Ollama endpoint
# For Kubernetes (ConfigMap)
apiVersion: v1
data:
  ai_endpoint: "http://localhost:11434/v1" # localhost here resolves inside the pod; for a host-local server, see the ngrok section below
  ai_model: "llama2"
  ai_provider: "localai"
kind: ConfigMap
metadata:
  name: k8smed-config
  namespace: k8smed-system
For Kubernetes access to locally hosted models, use ngrok to expose your LocalAI endpoint:
1. Start your LocalAI/Ollama server locally.
2. Expose it with ngrok:
ngrok http 11434 # Adjust port as needed
3. Update the ConfigMap to point at the ngrok URL:
apiVersion: v1
data:
  ai_endpoint: "https://your-ngrok-url.ngrok-free.app/v1/chat/completions"
  ai_model: "your-model-name"
  ai_provider: "localai"
kind: ConfigMap
metadata:
  name: k8smed-config
  namespace: k8smed-system
Note: For LocalAI with OpenAI-compatible API, you must include the full path to the chat completions endpoint. The exact path may vary based on your LocalAI implementation.
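As a sketch of what that full endpoint and a minimal OpenAI-compatible request look like (the hostname and model name are placeholders):

```shell
# Full endpoint = ngrok forwarding URL + chat-completions path
NGROK_URL="https://your-ngrok-url.ngrok-free.app"
AI_ENDPOINT="${NGROK_URL}/v1/chat/completions"

# Minimal OpenAI-compatible chat request body
BODY='{"model": "your-model-name", "messages": [{"role": "user", "content": "why is my pod pending?"}]}'

echo "$AI_ENDPOINT"
# To probe the endpoint manually (requires the tunnel to be up):
#   curl -s "$AI_ENDPOINT" -H 'Content-Type: application/json' -d "$BODY"
```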
# Analyze a pod
kubectl-k8smed analyze pod mypod
# Analyze a deployment
kubectl-k8smed analyze deployment mydeployment
# Analyze a service
kubectl-k8smed analyze service myservice
# Ask about a problem with specific symptoms
kubectl-k8smed analyze "why is my pod in CrashLoopBackOff"
# Get remediation steps for a specific issue
kubectl-k8smed analyze "how to fix ImagePullBackOff in my deployment"
# Start an interactive troubleshooting session
kubectl-k8smed interactive
If you see errors like Error getting LLM response: connection refused or Error getting LLM response: no completions returned, verify that your AI endpoint is reachable and that K8SMED_AI_ENDPOINT (or the ConfigMap's ai_endpoint) includes the full path your provider expects, as described in the note above.
If the K8sMed pod has ErrImagePull or ImagePullBackOff status, make sure the image was pushed to a registry the cluster can reach, or loaded into the local cluster with kind load or minikube image load as shown above.
If you see Error: forbidden: User "system:serviceaccount:..." cannot get resource errors, confirm that the RBAC manifests in deploy/manifests/ were applied, since K8sMed's ServiceAccount needs read access to the resources it analyzes.
Q: Can K8sMed work without an LLM?
A: Yes! While the LLM integration provides enhanced analysis and natural language interaction, K8sMed has built-in analyzers that can identify common issues like CrashLoopBackOff, ImagePullBackOff, and resource constraints without requiring an LLM.
Q: Which AI models work best with K8sMed?
A: GPT-4 and similar high-capability models generally provide the most accurate and detailed analysis. However, smaller models like Gemma-3-4B-IT can still provide useful insights for common Kubernetes issues.
Q: What is the recommended way to use K8sMed across a team?
A: The recommended approach is to deploy K8sMed in your Kubernetes cluster with appropriate RBAC permissions. This allows multiple team members to use the tool without individual setup.
Q: What data does K8sMed send to the AI provider?
A: K8sMed collects resource data from your cluster to send to the configured LLM. If you're using OpenAI, this data will be sent to their API. For sensitive environments, we recommend using the --anonymize flag to remove sensitive information.
Q: How can I contribute to K8sMed?
A: We welcome contributions! Please check our CONTRIBUTING.md guide and feel free to open issues or pull requests in the GitHub repository.