OpenAI Integration

Use OpenAI models (GPT-4, GPT-4 Turbo, GPT-3.5 Turbo) with CognitiveX for powerful reasoning and generation capabilities.

Supported Models

GPT-4

Most capable model. Best for complex reasoning, analysis, and generation.

GPT-4 Turbo

Faster and more cost-effective than GPT-4, with a 128K context window.

GPT-3.5 Turbo

Fast and economical. Great for simple tasks and high-volume applications.

Setup

1. Get OpenAI API Key

Sign up at platform.openai.com and create an API key.

2. Configure CognitiveX

Create a `.env` file:

```bash
COGNITIVEX_API_KEY=your_cognitivex_key
OPENAI_API_KEY=your_openai_key
```
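Both keys must be present at runtime. A minimal startup check can fail fast with a clear message instead of a confusing auth error later. The `requireEnv` helper below is illustrative, not part of the SDK:

```typescript
// Fail fast if a required key is missing from the environment.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage (variable names match the .env above):
// const client = new CognitiveXClient({ apiKey: requireEnv("COGNITIVEX_API_KEY") });
```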

3. Use in Code

```typescript
import { CognitiveXClient } from '@cognitivex/sdk';

const client = new CognitiveXClient({
  apiKey: process.env.COGNITIVEX_API_KEY
});

// Use GPT-4
const result = await client.cognition.reason({
  query: "Analyze this complex problem...",
  model: "gpt-4",
  temperature: 0.7
});
```
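OpenAI-backed calls can fail transiently (rate limits, timeouts). The SDK's own retry behavior is not documented here, so a generic wrapper is one way to hedge; `withRetry` and its backoff numbers are illustrative assumptions, not SDK features:

```typescript
// Retry an async call with exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait baseDelayMs, 2x, 4x, ... between attempts.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}

// Usage (client as constructed above):
// const result = await withRetry(() =>
//   client.cognition.reason({ query: "...", model: "gpt-4" })
// );
```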

Model Selection Guide

| Use Case | Recommended Model | Cost |
|---|---|---|
| Complex reasoning | gpt-4 | $$$ |
| Long context (128K) | gpt-4-turbo | $$ |
| Simple Q&A | gpt-3.5-turbo | $ |
| High volume tasks | gpt-3.5-turbo | $ |
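The table above can be encoded as a small routing helper. `pickModel` and its use-case labels are hypothetical, not part of the SDK, and adaptive routing (see Best Practices) would replace a hand-rolled mapping like this in production:

```typescript
type UseCase = "complex-reasoning" | "long-context" | "simple-qa" | "high-volume";

// Map a use case to the recommended model from the table above.
function pickModel(useCase: UseCase): string {
  switch (useCase) {
    case "complex-reasoning":
      return "gpt-4";
    case "long-context":
      return "gpt-4-turbo";
    case "simple-qa":
    case "high-volume":
      return "gpt-3.5-turbo";
  }
}
```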

Examples

Simple Query

```typescript
const result = await client.cognition.reason({
  query: "What are the benefits of serverless?",
  model: "gpt-3.5-turbo"
});
```

Complex Analysis

```typescript
const result = await client.cognition.reason({
  query: "Analyze legal implications of...",
  model: "gpt-4",
  temperature: 0.2, // Lower temperature for more focused output
  reflection: true  // Capture the model's reasoning
});
```

Long Context

```typescript
// Store long documents
const docs = await client.memory.store({
  content: longDocument, // Up to 128K tokens
  tags: ["legal"]
});

// Query with full context
const result = await client.cognition.reason({
  query: "Summarize key points",
  model: "gpt-4-turbo",
  context: [docs.id]
});
```
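Even with a 128K window, documents can exceed the limit. A rough pre-check using the common ~4 characters-per-token heuristic for English text can decide whether to split before storing. The exact tokenizer differs per model, so treat these helpers as estimates only:

```typescript
// Rough token estimate: ~4 characters per token for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Split text into chunks that each fit under a token budget.
function chunkByTokens(text: string, maxTokens: number): string[] {
  const maxChars = maxTokens * 4;
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  return chunks;
}
```

Chunking on fixed character offsets can split mid-sentence; a production version would break on paragraph or sentence boundaries.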

Best Practices

  • Use GPT-3.5 Turbo for simple tasks to save costs
  • Reserve GPT-4 for complex reasoning and analysis
  • Lower temperature (0.1-0.3) for factual tasks
  • Higher temperature (0.7-0.9) for creative tasks
  • Enable caching for repeated queries
  • Monitor token usage with analytics
  • Use adaptive routing to automatically select best model
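The caching advice above can be sketched as a thin memoization layer keyed on model plus query. This is an in-memory illustration of the idea, not the SDK's built-in caching:

```typescript
// Cache async results keyed by a string; repeated keys skip the call.
function cachedCall<T>(fn: (key: string) => Promise<T>): (key: string) => Promise<T> {
  const cache = new Map<string, Promise<T>>();
  return (key: string): Promise<T> => {
    if (!cache.has(key)) {
      cache.set(key, fn(key));
    }
    return cache.get(key)!;
  };
}

// Usage: key on model + query so identical requests hit the cache.
// const reason = cachedCall((key) => client.cognition.reason(JSON.parse(key)));
// const result = await reason(JSON.stringify({ model: "gpt-3.5-turbo", query: "..." }));
```

Caching the promise (rather than the resolved value) also de-duplicates concurrent identical requests.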