AI SDK
Official Python and Node.js SDKs for building AI agents on the Lexia platform
The Lexia AI SDK provides a clean, production-ready interface for building intelligent agents that integrate seamlessly with the Lexia platform. Available for both Python and Node.js, it handles real-time streaming, user memory, and environment variables so you can focus on your AI logic.
Python: lexia (PyPI) | Node.js: @lexia/sdk (npm)
Quick Guide
For experienced developers who want to get started immediately
Prerequisites
- Python >=3.8 or Node.js >=14.0.0
- A Lexia account and API key
- Basic knowledge of async/await
How Lexia Works
Lexia provides the user interface; you only build the backend logic (process_message). The FastAPI app is created and served by Lexia utilities, so you don't write any endpoints manually. The Lexia SDK handles all communication, streaming, and error logging automatically.
Auth headers, tenant IDs, and API routing are automatically handled by the Lexia SDK.
[Lexia UI (Dev or Cloud)]
        ↓ (API request)
[FastAPI App via Lexia SDK]
        ↓
   process_message()
        ↓
[Your AI Logic (OpenAI, etc.)]
Key Points:
- Lexia provides the user interface; you only build the backend logic (process_message)
- The FastAPI app is created and served by Lexia utilities; you don't write any endpoints manually
- Auth headers, tenant IDs, and API routing are handled automatically by the Lexia SDK
- The same code works in both Dev Mode and Production; only the streaming protocol differs
- The UI comes built-in for Dev Mode (localhost:3000) and is hosted automatically in the Cloud for Production (workspace.lexiaplatform.com)
Installation
# Python
pip install lexia
# Node.js
npm install @lexia/sdk
What Code Do You Actually Write?
You only write the process_message function. Lexia calls it automatically:
async def process_message(data: ChatMessage):
    session = lexia.begin(data)
    session.stream("Hello from your AI agent!")
    session.close()
Basic Pattern (3 lines)
Python:
from lexia import lexia, ChatMessage

async def process_message(data: ChatMessage):
    session = lexia.begin(data)
    session.stream("Hello from your AI agent!")
    session.close()
Node.js:
const { LexiaHandler } = require('@lexia/sdk');

const lexia = new LexiaHandler();

async function processMessage(data) {
  await lexia.completeResponse(data, "Hello from your AI agent!");
}
Key Methods
| Method | Purpose |
|---|---|
| session.stream(content) | Send text to frontend |
| session.close() | Complete response |
| session.start_loading(type) | Show loading indicator |
| session.image(url) | Send image |
| session.error(message) | Send error |
Variables & Memory
vars = Variables(data.variables)
api_key = vars.get_openai_key()
memory = MemoryHelper(data.memory)
user_name = memory.get_name()
Dev Mode
export LEXIA_DEV_MODE=true
python main.py
End-to-End Local Development
Exact local dev flow:
1. lexia login - Authenticate with Lexia
2. lexia kickstart python - Generate a project template
3. python main.py - Start your AI agent server
4. Open http://localhost:3000 - Test with the Lexia UI
How to Debug
All requests go through add_standard_endpoints, so you can log or set a breakpoint inside process_message:
async def process_message(data: ChatMessage):
    print(f"Received: {data.message}")  # Log incoming data
    session = lexia.begin(data)
    # Set breakpoint here in your IDE
    session.stream("Response")
    session.close()
That's it! Your AI agent is ready. Scroll down for comprehensive details and advanced features.
Comprehensive Guide
Everything you need to know about building with the Lexia AI SDK
This guide covers all aspects of the SDK, from basic concepts to advanced features. Each section builds on the previous one, so you can follow along step-by-step or jump to specific topics.
Installation
Python SDK
pip install lexia
Note: The Python SDK requires Python 3.8 or later. Optional extras are available:
pip install lexia[web]      # FastAPI + Uvicorn for web applications
pip install lexia[dev]      # Development tools (pytest, black, flake8)
pip install lexia[web,dev]  # Everything
Node.js SDK
npm install @lexia/sdk
Note: The Node.js SDK requires Node.js 14.0.0 or later. For Express applications, install Express separately:
npm install express
Verify Installation
Python:
import lexia
print(f"Lexia SDK version: {lexia.__version__}")
Node.js:
const { LexiaHandler } = require('@lexia/sdk');
console.log('Lexia SDK loaded successfully');
Basic Usage
The SDK provides a simple interface for processing messages and sending responses back to the Lexia platform. Let's start with the fundamentals.
Python (Session API)
The Python SDK uses a session-based approach that automatically handles streaming and response aggregation:
from lexia import lexia, ChatMessage

async def process_message(data: ChatMessage):
    session = lexia.begin(data)
    try:
        # Your AI logic here
        response = f"Processed: {data.message}"
        session.stream(response)
        session.close()
    except Exception as e:
        session.error(str(e), exception=e)
Node.js
The Node.js SDK uses a more traditional approach with explicit method calls:
const { LexiaHandler } = require('@lexia/sdk');

const lexia = new LexiaHandler();

async function processMessage(data) {
  try {
    const response = `Processed: ${data.message}`;
    await lexia.completeResponse(data, response);
  } catch (error) {
    await lexia.sendError(data, error.message, null, error);
  }
}
Note: Always wrap your processing logic in try-catch blocks to handle errors gracefully.
Advanced Features
Loading States (Python)
Show users when processing takes time, especially for operations like image generation:
session = lexia.begin(data)
# Show loading for image generation
session.start_loading("image")
await generate_image()
session.end_loading("image")
session.image("https://example.com/image.jpg")
session.close()
Supported loading types: "thinking", "image", "code", "search"
OpenAI Integration
Connect to OpenAI's API using environment variables from Lexia:
from openai import OpenAI
from lexia import Variables

async def process_message(data: ChatMessage):
    session = lexia.begin(data)

    # Get API key from Lexia config
    vars = Variables(data.variables)
    api_key = vars.get_openai_key()
    if not api_key:
        session.error("OpenAI API key not configured")
        return

    # Use OpenAI
    client = OpenAI(api_key=api_key)
    response = client.chat.completions.create(
        model=data.model,
        messages=[{"role": "user", "content": data.message}]
    )
    session.stream(response.choices[0].message.content)
    session.close()
User Personalization
Create personalized responses using user memory data:
from lexia import MemoryHelper

async def process_message(data: ChatMessage):
    session = lexia.begin(data)
    memory = MemoryHelper(data.memory)

    # Personalized greeting
    if memory.has_name():
        greeting = f"Hello {memory.get_name()}!"
    else:
        greeting = "Hello!"

    # Add context based on user goals
    if memory.has_goals():
        goals = ", ".join(memory.get_goals())
        greeting += f" I see you're working on: {goals}."

    session.stream(greeting)
    session.close()
Complete Examples
Python: Full OpenAI Agent
Here's a complete example that combines all the features we've covered:
from openai import OpenAI
from lexia import lexia, ChatMessage, Variables, MemoryHelper

async def process_message(data: ChatMessage):
    session = lexia.begin(data)
    try:
        # Get API key and user data
        vars = Variables(data.variables)
        memory = MemoryHelper(data.memory)
        api_key = vars.get_openai_key()
        if not api_key:
            session.error("OpenAI API key not configured")
            return

        # Build personalized context
        context = []
        if memory.has_name():
            context.append(f"User: {memory.get_name()}")
        if memory.has_goals():
            context.append(f"Goals: {', '.join(memory.get_goals())}")

        system_message = "You are a helpful AI assistant."
        if context:
            system_message += "\n\nUser Context:\n" + "\n".join(context)

        # Show loading indicator
        session.start_loading("thinking")

        # Call OpenAI
        client = OpenAI(api_key=api_key)
        response = client.chat.completions.create(
            model=data.model,
            messages=[
                {"role": "system", "content": system_message},
                {"role": "user", "content": data.message}
            ]
        )

        # Hide loading and respond
        session.end_loading("thinking")
        session.stream(response.choices[0].message.content)
        session.close()
    except Exception as e:
        session.error(f"Error: {str(e)}", exception=e)
Node.js: Full OpenAI Agent
const { LexiaHandler, Variables, MemoryHelper } = require('@lexia/sdk');
const OpenAI = require('openai');

const lexia = new LexiaHandler();

async function processMessage(data) {
  try {
    // Get API key and user data
    const vars = new Variables(data.variables);
    const memory = new MemoryHelper(data.memory);
    const apiKey = vars.get('OPENAI_API_KEY');
    if (!apiKey) {
      await lexia.sendError(data, 'OpenAI API key not configured');
      return;
    }

    // Build personalized context
    const context = [];
    if (memory.hasName()) {
      context.push(`User: ${memory.getName()}`);
    }
    if (memory.hasGoals()) {
      context.push(`Goals: ${memory.getGoals().join(', ')}`);
    }

    let systemMessage = "You are a helpful AI assistant.";
    if (context.length > 0) {
      systemMessage += `\n\nUser Context:\n${context.join('\n')}`;
    }

    // Call OpenAI
    const client = new OpenAI({ apiKey });
    const response = await client.chat.completions.create({
      model: data.model,
      messages: [
        { role: 'system', content: systemMessage },
        { role: 'user', content: data.message }
      ]
    });

    // Respond
    await lexia.completeResponse(data, response.choices[0].message.content);
  } catch (error) {
    await lexia.sendError(data, `Error: ${error.message}`, null, error);
  }
}
Core Concepts
Understanding these concepts will help you build more effective AI agents with the Lexia SDK.
Architecture Overview
The Lexia SDK sits between your AI logic and the Lexia platform, handling communication and data transformation:
┌─────────────────┐
│  Your AI Agent  │  ← Your custom logic
│  (OpenAI, etc)  │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│  Lexia Handler  │  ← This SDK
│   (lexia/sdk)   │
└────────┬────────┘
         │
    ┌────┴────┐
    │         │
    ▼         ▼
┌──────────┐ ┌───────────┐
│Centrifugo│ │ Lexia API │  ← Lexia Platform
│ (Stream) │ │ (Backend) │
└──────────┘ └───────────┘
Key Components
| Component | Purpose |
|---|---|
| LexiaHandler | Main interface for all Lexia communication |
| ChatMessage | Model for incoming requests |
| ChatResponse | Model for responses |
| Variables | Helper for accessing environment variables |
| MemoryHelper | Helper for accessing user memory data |
| Session | Python-only interface for response handling |
| DevStreamClient | Dev mode streaming (no Centrifugo) |
| CentrifugoClient | Production real-time streaming |
| APIClient | HTTP communication with Lexia backend |
Session API (Python)
Python only: The Session API provides a clean interface for handling AI responses with automatic streaming and aggregation.
Starting a Session
from lexia import LexiaHandler, ChatMessage

lexia = LexiaHandler()

async def process_message(data: ChatMessage):
    # Start a session
    session = lexia.begin(data)
Session Methods
session.stream(content) - Stream Content
Stream a chunk of AI response in real-time. The SDK automatically aggregates all streamed content for the final response.
session = lexia.begin(data)

# Stream chunks as they're generated
for chunk in ai_stream:
    session.stream(chunk)
session.close(usage_info=None) - Complete Response
Finalize the response and send to Lexia platform. Automatically aggregates all streamed content.
session = lexia.begin(data)
for chunk in ai_stream:
    session.stream(chunk)

# Close with usage info
usage = {
    'prompt_tokens': 100,
    'completion_tokens': 50,
    'total_tokens': 150
}
full_text = session.close(usage_info=usage)
session.error(error_message, exception=None) - Send Error
Send error notification to Lexia platform with automatic logging.
session = lexia.begin(data)
try:
    # Your AI processing
    result = process_ai(data)
    session.stream(result)
    session.close()
except Exception as e:
    session.error("Processing failed", exception=e)
session.start_loading(kind) / session.end_loading(kind) - Loading Indicators
Display loading indicators to the user during processing.
session = lexia.begin(data)

# Show thinking indicator
session.start_loading("thinking")

# Process...
result = complex_computation()

# Hide indicator
session.end_loading("thinking")
session.stream(result)
session.close()
Loading types: "thinking", "image", "code", "search"
session.image(url) - Send Image
Send an image URL with Lexia-specific markers for proper display.
session = lexia.begin(data)

# Show loading
session.start_loading("image")

# Generate image
image_url = generate_image(data.message)

# Hide loading
session.end_loading("image")

# Send image
session.image(image_url)
session.close()
Working with Variables
Lexia sends environment variables (API keys, configs, etc.) with each request. Both SDKs provide easy access to these variables.
Variables Helper Class
Python:
from lexia import Variables, ChatMessage

async def process_message(data: ChatMessage):
    # Create Variables helper
    vars = Variables(data.variables)

    # Get any variable
    openai_key = vars.get("OPENAI_API_KEY")
    anthropic_key = vars.get("ANTHROPIC_API_KEY")
    db_url = vars.get("DATABASE_URL")
    custom_var = vars.get("MY_CUSTOM_VAR")

    # Check if a variable exists
    if vars.has("OPENAI_API_KEY"):
        # Use it
        pass

    # List all variable names
    all_names = vars.list_names()
    print(f"Available variables: {all_names}")

    # Convert to dictionary
    vars_dict = vars.to_dict()
Node.js:
const { Variables } = require('@lexia/sdk');

async function processMessage(data) {
  // Create variables helper
  const vars = new Variables(data.variables);

  // Get any variable
  const openaiKey = vars.get('OPENAI_API_KEY');
  const anthropicKey = vars.get('ANTHROPIC_API_KEY');
  const dbUrl = vars.get('DATABASE_URL');

  // Check if a variable exists
  if (vars.has('OPENAI_API_KEY')) {
    const key = vars.get('OPENAI_API_KEY');
  }

  // Get all variable names
  const allNames = vars.listNames();
  console.log(`Available variables: ${allNames}`);

  // Convert to object
  const varsDict = vars.toDict();
}
Common Patterns
Multiple AI Providers
Python:
vars = Variables(data.variables)

# Try multiple providers
openai_key = vars.get("OPENAI_API_KEY")
anthropic_key = vars.get("ANTHROPIC_API_KEY")
groq_key = vars.get("GROQ_API_KEY")

if openai_key:
    # Use OpenAI
    client = OpenAI(api_key=openai_key)
elif anthropic_key:
    # Use Anthropic
    client = Anthropic(api_key=anthropic_key)
elif groq_key:
    # Use Groq
    client = Groq(api_key=groq_key)
else:
    session.error("No AI API key provided")
    return
Node.js:
const vars = new Variables(data.variables);

const openaiKey = vars.get('OPENAI_API_KEY');
const anthropicKey = vars.get('ANTHROPIC_API_KEY');
const groqKey = vars.get('GROQ_API_KEY');

if (openaiKey) {
  // Use OpenAI
  const client = new OpenAI({ apiKey: openaiKey });
} else if (anthropicKey) {
  // Use Anthropic
  const client = new Anthropic({ apiKey: anthropicKey });
} else if (groqKey) {
  // Use Groq
  const client = new Groq({ apiKey: groqKey });
} else {
  await lexia.sendError(data, "No AI API key provided");
  return;
}
User Memory System
Lexia provides user memory for personalized AI responses. This allows your agents to remember user preferences, goals, and past interactions.
Memory Structure
User memory data follows this structure:
{
  "name": "John Doe",
  "goals": ["Learn Python", "Build AI apps"],
  "location": "San Francisco, CA",
  "interests": ["AI", "Programming", "Music"],
  "preferences": ["Prefers code examples", "Likes detailed explanations"],
  "past_experiences": ["Built a web scraper", "Completed ML course"]
}
MemoryHelper Class
Python:
from lexia import MemoryHelper, ChatMessage

async def process_message(data: ChatMessage):
    # Create memory helper
    memory = MemoryHelper(data.memory)

    # Get user information
    name = memory.get_name()                     # "John Doe"
    goals = memory.get_goals()                   # ["Learn Python", "Build AI apps"]
    location = memory.get_location()             # "San Francisco, CA"
    interests = memory.get_interests()           # ["AI", "Programming", "Music"]
    preferences = memory.get_preferences()       # ["Prefers code examples", ...]
    experiences = memory.get_past_experiences()  # ["Built a web scraper", ...]

    # Check if data exists
    if memory.has_name():
        greeting = f"Hello {name}!"
    else:
        greeting = "Hello!"

    # Check if memory is empty
    if memory.is_empty():
        response = "This is a new user with no memory"
    else:
        response = f"Welcome back, {name}!"
Node.js:
const { MemoryHelper } = require('@lexia/sdk');

async function processMessage(data) {
  // Create memory helper
  const memory = new MemoryHelper(data.memory);

  // Get user information
  const name = memory.getName();
  const goals = memory.getGoals();
  const location = memory.getLocation();
  const interests = memory.getInterests();
  const preferences = memory.getPreferences();
  const experiences = memory.getPastExperiences();

  // Check if data exists
  let greeting;
  if (memory.hasName()) {
    greeting = `Hello ${name}!`;
  } else {
    greeting = "Hello!";
  }

  // Check if memory is empty
  let response;
  if (memory.isEmpty()) {
    response = "This is a new user with no memory";
  } else {
    response = `Welcome back, ${name}!`;
  }
}
Personalized Responses
Python:
async def process_message(data: ChatMessage):
    memory = MemoryHelper(data.memory)

    # Build personalized context
    context_parts = []
    if memory.has_name():
        context_parts.append(f"User: {memory.get_name()}")
    if memory.has_goals():
        goals = ", ".join(memory.get_goals())
        context_parts.append(f"Goals: {goals}")
    if memory.has_interests():
        interests = ", ".join(memory.get_interests())
        context_parts.append(f"Interests: {interests}")

    # Add to system message
    if context_parts:
        user_context = "\n".join(context_parts)
        system_message = f"""You are a helpful AI assistant.

User Context:
{user_context}

Please personalize your responses based on this context."""
    else:
        system_message = "You are a helpful AI assistant."
Node.js:
async function processMessage(data) {
  const memory = new MemoryHelper(data.memory);

  // Build personalized context
  const contextParts = [];
  if (memory.hasName()) {
    contextParts.push(`User: ${memory.getName()}`);
  }
  if (memory.hasGoals()) {
    const goals = memory.getGoals().join(', ');
    contextParts.push(`Goals: ${goals}`);
  }
  if (memory.hasInterests()) {
    const interests = memory.getInterests().join(', ');
    contextParts.push(`Interests: ${interests}`);
  }

  // Add to system message
  let systemMessage;
  if (contextParts.length > 0) {
    const userContext = contextParts.join('\n');
    systemMessage = `You are a helpful AI assistant.

User Context:
${userContext}

Please personalize your responses based on this context.`;
  } else {
    systemMessage = "You are a helpful AI assistant.";
  }
}
Streaming Responses
Python (Session API)
The Python SDK's Session API automatically handles streaming and aggregation:
from openai import OpenAI
from lexia import LexiaHandler, ChatMessage, Variables

lexia = LexiaHandler()

async def process_message(data: ChatMessage):
    session = lexia.begin(data)
    try:
        vars = Variables(data.variables)
        api_key = vars.get("OPENAI_API_KEY")
        client = OpenAI(api_key=api_key)

        # Stream response from OpenAI
        stream = client.chat.completions.create(
            model=data.model,
            messages=[{"role": "user", "content": data.message}],
            stream=True
        )

        # Stream each chunk (automatically aggregated)
        for chunk in stream:
            if chunk.choices[0].delta.content:
                session.stream(chunk.choices[0].delta.content)

        # Complete (uses aggregated content)
        session.close()
    except Exception as e:
        session.error(f"Error: {str(e)}", exception=e)
Node.js
The Node.js SDK does not aggregate streamed chunks for you; the simplest pattern is to collect the full response and send it once with completeResponse:
const { LexiaHandler, Variables } = require('@lexia/sdk');
const OpenAI = require('openai');

const lexia = new LexiaHandler();

async function processMessage(data) {
  try {
    const vars = new Variables(data.variables);
    const apiKey = vars.get('OPENAI_API_KEY');
    const client = new OpenAI({ apiKey });

    // Get response from OpenAI (non-streaming)
    const response = await client.chat.completions.create({
      model: data.model,
      messages: [{ role: 'user', content: data.message }]
    });
    const aiResponse = response.choices[0].message.content;

    // Complete the response
    await lexia.completeResponse(data, aiResponse);
  } catch (error) {
    await lexia.sendError(data, `Error: ${error.message}`, null, error);
  }
}
Error Handling
Always handle errors gracefully to provide a good user experience and proper logging.
Python
from lexia import LexiaHandler, ChatMessage, Variables

lexia = LexiaHandler()

async def process_message(data: ChatMessage):
    session = lexia.begin(data)
    try:
        vars = Variables(data.variables)

        # Validate required variables
        api_key = vars.get("OPENAI_API_KEY")
        if not api_key:
            session.error("OPENAI_API_KEY is required but not provided")
            return

        # Validate message
        if not data.message.strip():
            session.error("Message cannot be empty")
            return

        # Proceed with processing
        result = call_ai(api_key, data.message)
        session.stream(result)
        session.close()
    except Exception as e:
        session.error(str(e), exception=e)
Node.js
const { LexiaHandler, Variables } = require('@lexia/sdk');

const lexia = new LexiaHandler();

async function processMessage(data) {
  try {
    const vars = new Variables(data.variables);

    // Validate required variables
    const apiKey = vars.get('OPENAI_API_KEY');
    if (!apiKey) {
      await lexia.sendError(data, 'OPENAI_API_KEY is required but not provided');
      return;
    }

    // Validate message
    if (!data.message.trim()) {
      await lexia.sendError(data, 'Message cannot be empty');
      return;
    }

    // Proceed with processing
    const result = await callAI(apiKey, data.message);
    await lexia.completeResponse(data, result);
  } catch (error) {
    await lexia.sendError(data, error.message, null, error);
  }
}
Error Logging to Lexia Backend
When you call session.error() or lexia.sendError(), the SDK automatically:
- Streams the error to the frontend (visible to the user)
- Persists the error to the Lexia backend API
- Sends a detailed error log with:
  - Error message (max 1000 chars)
  - Stack trace (max 5000 chars)
  - Error level (error, warning, info, critical)
  - Additional context (UUID, conversation_id, etc.)
Dev Mode vs Production
The SDKs support two modes: Development and Production. Your code works identically in both modes!
Technical Differences
| Mode | UI Location | Streaming Protocol |
|---|---|---|
| Dev Mode | Local UI (localhost:3000) | SSE (Server-Sent Events) |
| Production | Cloud UI (workspace.lexiaplatform.com) | WebSocket (Centrifugo) |
Visual Flow
Dev Mode:
Lexia Local UI (http://localhost:3000)
        ↓ API
FastAPI on http://localhost:5001
Production:
Lexia Cloud UI (workspace.lexiaplatform.com)
        ↓ WebSocket
Your deployed Agent (containerized via Lexia Cloud)
Summary:
- Dev Mode: auto-runs local FastAPI + Lexia UI
- Prod Mode: Lexia Cloud hosts both, you deploy your agent once
- Same code, same SDK interface
What Happens When Deployed
When deployed, the same FastAPI app runs in Lexia Cloud and the Lexia Platform handles all routing, scaling, and UI streaming.
Production Mode
Uses Centrifugo for real-time WebSocket streaming.
Python:
lexia = LexiaHandler(dev_mode=False) # or just LexiaHandler()
Node.js:
const lexia = new LexiaHandler(false); // or just new LexiaHandler()
Features:
- Real-time WebSocket streaming via Centrifugo
- Production-grade reliability
- Scales to many concurrent users
- Requires Centrifugo server
Dev Mode
Uses in-memory storage and Server-Sent Events (SSE) for streaming.
Python:
lexia = LexiaHandler(dev_mode=True)
Node.js:
const lexia = new LexiaHandler(true);
Features:
- No external dependencies (no Centrifugo needed)
- Real-time console output for debugging
- SSE streaming to frontend
- Simpler local development
Enabling Dev Mode
Option 1: Direct Parameter
Python:
from lexia import LexiaHandler
lexia = LexiaHandler(dev_mode=True)
Node.js:
const { LexiaHandler } = require('@lexia/sdk');
const lexia = new LexiaHandler(true);
Option 2: Environment Variable
export LEXIA_DEV_MODE=true
Python:
lexia = LexiaHandler() # Auto-detects LEXIA_DEV_MODE
Node.js:
const lexia = new LexiaHandler(); // Auto-detects LEXIA_DEV_MODE
Comparison
| Feature | Production | Dev Mode |
|---|---|---|
| Streaming Protocol | Centrifugo WebSocket | SSE / Polling |
| External Dependencies | Centrifugo Server | None |
| Console Output | No | Yes (real-time) |
| Setup Complexity | Medium | Low |
| Real-time Performance | Excellent | Good |
| Best For | Production deployment | Local development |
Best Practices
1. Always Use Try-Except
Python:
async def process_message(data: ChatMessage):
    session = lexia.begin(data)
    try:
        # Your processing
        result = process(data)
        session.stream(result)
        session.close()
    except Exception as e:
        session.error(str(e), exception=e)
Node.js:
async function processMessage(data) {
  try {
    // Your processing
    const result = await process(data);
    await lexia.completeResponse(data, result);
  } catch (error) {
    await lexia.sendError(data, error.message, null, error);
  }
}
2. Validate Inputs
Always validate API keys and message content before processing.
3. Use Session API (Python)
Recommended:
session = lexia.begin(data)
try:
    session.stream(chunk1)
    session.stream(chunk2)
    session.close()  # Auto-aggregates chunks
except Exception as e:
    session.error(str(e), exception=e)
4. Personalize with Memory
Use MemoryHelper to create personalized responses based on user data.
5. Provide Usage Info
Include token usage information when completing responses to help with monitoring and cost tracking.
6. Use Dev Mode for Development
export LEXIA_DEV_MODE=true
7. Log Important Events
Use proper logging to track processing steps and errors.
8. Implement Graceful Fallbacks
Support multiple AI providers and fall back gracefully when one fails.
Troubleshooting
Python Issues
ModuleNotFoundError: No module named 'lexia'
Solution:
pip install lexia
Verify installation:
pip list | grep lexia
ImportError: No module named 'fastapi'
Solution:
pip install lexia[web]
Node.js Issues
Cannot find module '@lexia/sdk'
Solution:
npm install @lexia/sdk
Missing Dependencies: Cannot find module 'express'
Solution:
npm install express
Common Issues (Both)
Variables Not Found
Ensure variables are sent in request:
{
  "variables": [
    {"name": "OPENAI_API_KEY", "value": "sk-..."}
  ]
}
Check variable names (case-sensitive):
Python:
vars = Variables(data.variables)
print(f"Available variables: {vars.list_names()}")
Node.js:
const vars = new Variables(data.variables);
console.log(`Available variables: ${vars.listNames()}`);
Memory Data Empty
Python:
memory = MemoryHelper(data.memory)
if memory.is_empty():
    print("Memory is empty")
else:
    print(f"Memory data: {memory.to_dict()}")
Node.js:
const memory = new MemoryHelper(data.memory);
if (memory.isEmpty()) {
  console.log("Memory is empty");
} else {
  console.log(`Memory data: ${JSON.stringify(memory.toDict())}`);
}
Dev Mode Not Activating
Python:
lexia = LexiaHandler(dev_mode=True)
print(f"Dev mode: {lexia.dev_mode}") # Should print: True
Node.js:
const lexia = new LexiaHandler(true);
console.log('Dev mode enabled');
Getting Help
- Check Documentation: Review this guide
- Enable Debug Logs: See what's happening internally (see the sketch below)
- Verify Request Data: Check incoming data structure
- Test Endpoints: Use /api/v1/health
- GitHub Issues: Report bugs or ask questions
Reference
The sections below recap the SDK's design philosophy and feature set, then cover loading indicators, image handling, and web framework integration in more depth.
Design Philosophy
The SDKs follow these principles:
- Platform Agnostic - Your AI logic stays independent of Lexia internals
- Clean Interface - Simple, intuitive methods that do one thing well
- Developer Friendly - Works identically in dev and production modes
- Type Safe - Pydantic models (Python) ensure data integrity
- Production Ready - Comprehensive error handling and logging
Core Features
- Real-time streaming to the Lexia frontend
- Backend communication with the Lexia API
- Data validation with models
- Environment variable management
- User memory handling for personalized responses
- Error handling and logging to the Lexia backend
- Dev mode for local development without Centrifugo
- FastAPI/Express integration with standard endpoints
Loading Indicators
Python only (via Session API):
session = lexia.begin(data)
try:
    # Show thinking indicator
    session.start_loading("thinking")

    # AI processing...
    result = think_hard(data.message)

    # Hide thinking indicator
    session.end_loading("thinking")

    # Show image generation indicator
    session.start_loading("image")

    # Generate image...
    image_url = generate_image(result)

    # Hide image indicator
    session.end_loading("image")

    # Send image
    session.image(image_url)
    session.close()
except Exception as e:
    session.error("Failed", exception=e)
Supported loading types:
"thinking"- General thinking/processing"image"- Image generation/processing"code"- Code generation/execution"search"- Web search operation
Image Handling
Python
from lexia import LexiaHandler

lexia = LexiaHandler()
session = lexia.begin(data)

# Generate or get image URL
image_url = "https://example.com/image.png"

# Send with Lexia markers (auto-wrapped)
session.image(image_url)

# Multiple images
for url in image_urls:
    session.image(url)

session.close()
Node.js
For Node.js, manually add Lexia markers:
const imageUrl = "https://example.com/image.png";
const markedImage = `[lexia.image.start]${imageUrl}[lexia.image.end]`;
await lexia.completeResponse(data, `Here's your image:\n${markedImage}`);
Web Framework Integration
Python (FastAPI)
create_lexia_app and add_standard_endpoints create /api/v1/send_message and SSE endpoints used by the Lexia UI. You only supply your AI logic.
from fastapi import FastAPI
from lexia import (
    LexiaHandler,
    ChatMessage,
    create_lexia_app,
    add_standard_endpoints
)

# Create Lexia handler
lexia = LexiaHandler()

# Create FastAPI app with defaults
app = create_lexia_app(
    title="My AI Agent",
    version="1.0.0",
    description="Custom AI agent powered by Lexia"
)

# Define your processing logic
async def process_message(data: ChatMessage):
    session = lexia.begin(data)
    try:
        # Your AI logic here
        session.stream("Response")
        session.close()
    except Exception as e:
        session.error(str(e), exception=e)

# Add all Lexia-standard endpoints:
#   /api/v1/send_message (main chat)
#   /api/v1/health (health check)
#   /api/v1/stream/{channel} (Dev Mode SSE)
add_standard_endpoints(
    app,
    lexia_handler=lexia,
    process_message_func=process_message
)

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
These utilities register all the routes Lexia UI expects. You never need to write HTTP endpoints yourself.
Node.js (Express)
const express = require('express');
const {
  createLexiaApp,
  addStandardEndpoints,
  LexiaHandler
} = require('@lexia/sdk');

// Create Express app
const app = createLexiaApp({
  title: 'My AI Agent',
  version: '1.0.0'
});

// Initialize Lexia handler
const lexia = new LexiaHandler();

// Define your processing logic
async function processMessage(data) {
  try {
    // Your AI logic here
    const response = "Response from AI";
    await lexia.completeResponse(data, response);
  } catch (error) {
    await lexia.sendError(data, error.message, null, error);
  }
}

// Add standard endpoints
addStandardEndpoints(app, {
  lexiaHandler: lexia,
  processMessageFunc: processMessage
});

// Start server
app.listen(8000, () => {
  console.log('Server running on port 8000');
});
Standard Endpoints
When you call add_standard_endpoints(), these endpoints are added:
| Endpoint | Method | Description |
|---|---|---|
| / | GET | Root endpoint with service info |
| /api/v1/health | GET | Health check |
| /api/v1/send_message | POST | Main chat endpoint |
| /api/v1/stream/{channel} | GET | SSE stream (dev mode) |
| /api/v1/poll/{channel} | GET | Polling endpoint (dev mode) |
| /docs | GET | Auto-generated API docs (Python only) |
Package Information
Python SDK
- Package Name: lexia
- Version: 1.2.7
- Python Support: >=3.8, <4.0
- License: MIT
- Repository: https://github.com/Xalantico/lexia-pip
- PyPI: https://pypi.org/project/lexia/
Node.js SDK
- Package Name: @lexia/sdk
- Version: 1.0.0
- Node Support: >=14.0.0
- License: MIT
- Repository: https://github.com/Xalantico/lexia-npm
- npm: https://www.npmjs.com/package/@lexia/sdk
Happy Building!
For questions or support, visit our GitHub repositories.