# Ollama Integration

Run open-source LLMs locally with Ollama for privacy and cost savings.

## Setup

### 1. Install Ollama

```bash
curl -fsSL https://ollama.ai/install.sh | sh
```
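After installation, Ollama serves an HTTP API on port 11434 by default. One quick way to confirm the server is up is to hit its root endpoint, which returns a short plain-text status. The snippet below is a minimal sketch that assumes Node 18+ (for the built-in `fetch`):

```typescript
// Minimal health check for a local Ollama server (assumes Node 18+ built-in fetch).
const OLLAMA_URL = "http://localhost:11434";

async function checkOllama(): Promise<void> {
  try {
    const res = await fetch(OLLAMA_URL);
    // The root endpoint replies with a plain-text status, e.g. "Ollama is running".
    console.log(await res.text());
  } catch {
    console.error(`No Ollama server reachable at ${OLLAMA_URL} - is it installed and running?`);
  }
}

checkOllama();
```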

### 2. Pull a Model

```bash
ollama pull llama2
```
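To verify the pull succeeded, you can list the models available locally through Ollama's `/api/tags` endpoint. A sketch (only the `name` field of the response is used here; Node 18+ `fetch` assumed):

```typescript
// List locally pulled models via Ollama's /api/tags endpoint.
interface OllamaModel {
  name: string; // e.g. "llama2:latest"
}

async function listModels(baseUrl = "http://localhost:11434"): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  const data = (await res.json()) as { models: OllamaModel[] };
  return data.models.map((m) => m.name);
}

listModels().then((names) => console.log("Available models:", names));
```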

### 3. Configure CognitiveX

Add the following to your `.env` file:

```bash
COGNITIVEX_API_KEY=your_key
OLLAMA_BASE_URL=http://localhost:11434
```
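It can help to validate this configuration at startup rather than failing mid-request. The snippet below is an illustrative sketch that reads the two variables named above from the environment; the fail-fast logic and the default base URL are assumptions, not part of the CognitiveX API:

```typescript
// Read the CognitiveX configuration from the environment and fail fast if incomplete.
// Variable names follow the .env example above; the validation itself is illustrative.
const apiKey = process.env.COGNITIVEX_API_KEY;
const ollamaBaseUrl = process.env.OLLAMA_BASE_URL ?? "http://localhost:11434";

if (!apiKey) {
  throw new Error("COGNITIVEX_API_KEY is not set - check your .env file");
}

console.log(`Using Ollama at ${ollamaBaseUrl}`);
```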

## Usage

```typescript
// Assumes `client` is an already-initialized CognitiveX client.
const result = await client.cognition.reason({
  query: "Your question",
  model: "llama2",
  provider: "ollama"
});
```
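A local model can be unavailable (for example, if the Ollama daemon isn't running), so you may want to wrap the call with a fallback. The sketch below reuses the `reason()` call and `client` from above; the error handling and the hosted `"openai"` / `"gpt-4"` fallback values are illustrative assumptions, not part of the documented API:

```typescript
// Hypothetical fallback wrapper around the reason() call shown above.
// The "openai" provider and "gpt-4" model here are illustrative assumptions.
async function reasonWithFallback(query: string) {
  try {
    return await client.cognition.reason({ query, model: "llama2", provider: "ollama" });
  } catch (err) {
    console.warn("Ollama call failed, falling back to a hosted provider:", err);
    return client.cognition.reason({ query, model: "gpt-4", provider: "openai" });
  }
}
```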

## Benefits

- **Privacy**: data never leaves your machine
- **Cost**: no per-token API fees
- **Offline**: works without an internet connection
- **Control**: full control over which models you run and when they change