Copy this prompt and paste it into any AI chat (ChatGPT, Claude, Gemini) for personalized, step-by-step setup help. For complete feature documentation, share this URL with your AI: helixailabs.com/valence_docs.html
Provider Setup Guides
- OpenAI — GPT-4o, GPT-4 Turbo, and o1 reasoning models. The most widely used AI platform.
- Anthropic — Claude 4, Claude 3.5 Sonnet, and Claude 3 Haiku. Strong at coding, analysis, and long-context tasks.
- Google — Gemini 2.0 Flash and Gemini Pro. Fast, multimodal, and has the most generous free tier of any major provider.
- Azure OpenAI — Enterprise-grade OpenAI models hosted on Azure. Requires an Azure subscription and resource deployment.
- xAI — Grok-2 and Grok-3 models. Real-time knowledge and competitive reasoning performance.
- OpenRouter — Unified gateway to 100+ models from many providers. One API key, many models. Great for experimentation.
- Ollama — Run open-source models on your own hardware. No API key needed, no usage costs, complete privacy. Requires a decent GPU for best results.
- Download and install Ollama from ollama.com/download
- Open a terminal and run ollama pull llama3
- In Valence, select Ollama as your provider — it connects automatically
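Under the hood, the Ollama connection is just a local HTTP server on port 11434. The sketch below (standard library only) shows the kind of reachability check you can run yourself before selecting Ollama in Valence — the /api/tags endpoint is Ollama's real model-listing route, but the script itself is illustrative, not Valence's code:

```python
# Check whether a local Ollama server is reachable and list pulled models.
# Ollama serves an HTTP API on localhost:11434 by default; GET /api/tags
# returns a JSON object with a "models" array.
import json
import urllib.error
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/tags"

def list_local_models(url: str = OLLAMA_URL) -> list[str]:
    """Return the names of locally pulled models, or [] if the server is down."""
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            payload = json.load(resp)
        return [m["name"] for m in payload.get("models", [])]
    except (urllib.error.URLError, OSError):
        return []

models = list_local_models()
if models:
    print("Ollama is running; models:", ", ".join(models))
else:
    print("Ollama is not reachable on localhost:11434 — is it installed and running?")
```

If the script reports the server unreachable, start Ollama (or re-run the installer) before selecting it as a provider.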
Local Memory
Local Memory gives Valence persistent memory across conversations. When enabled, the AI can remember things you've discussed and recall them later — all stored locally on your machine, never sent to Helix servers.
- Open Settings → Memory
- Toggle Enable Local Memory on
- Choose a Save Mode:
• Summary — AI summarizes conversations into compact memories (recommended)
• Full — saves complete message history (uses more disk space)
- Set Auto-Save Interval — how many AI responses between automatic saves (0 = manual only)
- Enable Recall Triggers to let the AI automatically search memory when you ask about past conversations
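The auto-save interval behaves like a simple counter over assistant replies. A minimal sketch of that rule (illustrative only, not Valence's actual implementation):

```python
# Auto-save rule as described in the settings above: save once the number of
# AI responses since the last save reaches the interval; an interval of 0
# disables auto-save entirely (manual saves only).

def should_autosave(responses_since_save: int, interval: int) -> bool:
    return interval > 0 and responses_since_save >= interval

print(should_autosave(3, 3))    # interval reached -> save
print(should_autosave(2, 3))    # not yet
print(should_autosave(50, 0))   # 0 = manual only, never auto-saves
```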
Browse, search, and manage all your saved memories from Settings → Memory.
- Browse — memories are listed with title, date, summary, and topic tags. They load 20 at a time; click Show More to see the rest.
- Filter — type in the search box to instantly filter memories by title, summary, or topics.
- Deep Search — press Enter or click Search to run an AI-powered deep search across all memory content.
- Delete — click the trash icon on any memory to remove it permanently.
- Manual Save — click Save Current Chat to Memory to save the active conversation at any time.
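The difference between the instant filter and deep search is that the filter is a plain substring match over the fields you can see, while deep search asks the AI to read memory content. A hypothetical sketch of the filter side (field names are assumptions for illustration):

```python
# Illustrative substring filter over saved memories, matching the Browse view's
# visible fields: title, summary, and topic tags. Case-insensitive.

def filter_memories(memories: list[dict], query: str) -> list[dict]:
    q = query.lower()
    return [
        m for m in memories
        if q in m["title"].lower()
        or q in m["summary"].lower()
        or any(q in tag.lower() for tag in m["topics"])
    ]

memories = [
    {"title": "Kyoto trip", "summary": "Planning a spring visit", "topics": ["travel"]},
    {"title": "Rust notes", "summary": "Borrow checker basics", "topics": ["coding", "rust"]},
]
print([m["title"] for m in filter_memories(memories, "rust")])    # ['Rust notes']
```

A query that only appears inside a conversation's full text would miss here — that is the case deep search covers.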
Importing Conversations
Import your entire ChatGPT history into Valence's local memory, either via browser login (fastest) or from a ZIP export.
- Open Settings → Memory → Import Conversations
- Option A — Browser Login (recommended): Click Import from ChatGPT. A browser window opens. Log in to your OpenAI account, then confirm the import. Valence fetches your conversations automatically via the API.
- Option B — ZIP Export: Click Import from ZIP file. First, export your data from ChatGPT Settings → Data Controls → Export Data (the download link arrives by email, usually within a few minutes). Then select the downloaded ZIP file.
- Conversations are converted to local memories with titles, summaries, and topic tags.
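For the ZIP route, the export contains a conversations.json file where each conversation stores its messages in a "mapping" of nodes. That schema is not formally documented by OpenAI and may change; the sketch below assumes the commonly observed layout and shows roughly what an importer has to do to flatten one conversation into plain messages:

```python
# Flatten one conversation from a ChatGPT export's conversations.json into
# (role, text) pairs. Field names ("mapping", "author", "content", "parts")
# are assumptions based on the commonly observed export schema.

def flatten_conversation(conv: dict) -> list[tuple[str, str]]:
    """Pull (role, text) pairs out of one exported conversation, in node order."""
    messages = []
    for node in conv.get("mapping", {}).values():
        msg = node.get("message")
        if not msg:
            continue  # root/system nodes often carry no message
        role = msg.get("author", {}).get("role", "unknown")
        parts = msg.get("content", {}).get("parts", [])
        text = " ".join(p for p in parts if isinstance(p, str)).strip()
        if text:
            messages.append((role, text))
    return messages

# Tiny inline sample in the assumed export shape:
sample = {
    "title": "Trip planning",
    "mapping": {
        "n1": {"message": {"author": {"role": "user"},
                           "content": {"parts": ["Best time to visit Kyoto?"]}}},
        "n2": {"message": {"author": {"role": "assistant"},
                           "content": {"parts": ["Spring or autumn."]}}},
    },
}
for role, text in flatten_conversation(sample):
    print(f"{role}: {text}")
```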
Import your Claude conversation history into Valence's local memory via browser login.
- Open Settings → Memory → Import Conversations
- Click Import from Claude
- A browser window opens. Log in to your Anthropic account, then confirm the import.
- Valence fetches your conversations and indexes them as local memories.
Import your Google Gemini conversations. Gemini doesn't offer a bulk API, so Valence navigates through your conversations one at a time.
- Open Settings → Memory → Import Conversations
- Click Import from Gemini
- A browser window opens. Log in to your Google account, then confirm the import.
- Valence reads conversations one at a time — this may take a few minutes depending on how many you have.
- Progress updates appear in real time as each conversation is imported.
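Because there is no bulk API, the import reduces to a sequential loop: fetch one conversation, record it, report progress, repeat. A hypothetical sketch of that pattern (the fetch function is a placeholder, not a real Gemini API call):

```python
# Sequential import with per-item progress reporting — the pattern forced on
# any importer when the source offers no bulk-export API. `fetch` stands in
# for whatever retrieves a single conversation.
from typing import Callable

def import_sequentially(conversation_ids: list[str],
                        fetch: Callable[[str], dict]) -> list[dict]:
    total = len(conversation_ids)
    imported = []
    for i, conv_id in enumerate(conversation_ids, start=1):
        imported.append(fetch(conv_id))        # one round-trip per conversation
        print(f"Imported {i}/{total}: {conv_id}")
    return imported

ids = ["chat-a", "chat-b", "chat-c"]
import_sequentially(ids, lambda cid: {"id": cid})
```

One round-trip per conversation is why a large Gemini history takes minutes where a bulk export takes seconds.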
Ready to Chat?
Once you have an API key, open Valence → Settings → API Keys and paste it in. You'll be chatting in seconds.
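A quick way to catch a mis-pasted key before saving it: most providers use a recognizable key prefix. The prefixes below reflect common conventions (an assumption — providers can change formats at any time), and the helper itself is illustrative, not part of Valence:

```python
# Rough sanity check for an API key before pasting it into Settings → API Keys.
# Prefixes are common provider conventions, not guaranteed formats.

KEY_PREFIXES = {
    "OpenAI": "sk-",
    "Anthropic": "sk-ant-",
    "Google": "AIza",
}

def guess_provider(key: str) -> str:
    """Guess which provider a key belongs to from its prefix, if any."""
    # Check the longest prefix first so "sk-ant-" wins over "sk-".
    for provider, prefix in sorted(KEY_PREFIXES.items(), key=lambda kv: -len(kv[1])):
        if key.startswith(prefix):
            return provider
    return "unknown"

print(guess_provider("sk-ant-examplekey123"))   # Anthropic
```

If the guess doesn't match the provider you selected, double-check which dashboard the key came from.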
Explore Valence Features