Product / Android · Development alpha

Valence Mobile

Cross-model AI for Android.

Carry the same seven-provider Valence stack — cloud, enterprise, and on-device local — into voice, widgets, quick settings, and local memory without adding a Helix middleman between you and the models.

Same Valence stack, tuned for touch.

The Android version keeps the cross-model core intact while adapting the working surface to pockets, commutes, quick capture, and short bursts of attention.

Continuity

Cross-model continuity

Start reasoning with Claude, get a second opinion from GPT-4o, summarize with Gemini, and keep the same thread moving from your phone.

  • Switch providers without dropping context
  • @mentions make rerouting fast on mobile
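One way the @mention handoff can work is a thin routing layer: the leading `@provider` token picks the target model and the rest of the message travels unchanged, so the thread's context never has to be rebuilt. A minimal sketch, with illustrative provider keys that are assumptions rather than Valence's actual identifiers:

```java
import java.util.Map;

// Minimal sketch of @mention rerouting: a leading "@provider" token
// selects the target model; everything after it is the prompt.
// The provider map below is illustrative, not Valence's real routing table.
public class MentionRouter {
    static final Map<String, String> PROVIDERS = Map.of(
            "claude", "anthropic",
            "gpt-4o", "openai",
            "gemini", "google");

    /** Returns {provider, prompt}; falls back to the current provider. */
    static String[] route(String message, String currentProvider) {
        if (message.startsWith("@")) {
            int space = message.indexOf(' ');
            if (space > 1) {
                String tag = message.substring(1, space).toLowerCase();
                String provider = PROVIDERS.get(tag);
                if (provider != null) {
                    return new String[]{provider, message.substring(space + 1)};
                }
            }
        }
        return new String[]{currentProvider, message};
    }
}
```

Because the prompt text is passed through untouched, switching from Claude to GPT-4o mid-thread is just a change of destination, not a restart.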

Memory

Local memory on device

Conversations are indexed into local memory on the handset so #recall and #remember stay close to the work instead of depending on a remote proxy.

  • Search past conversations by topic or keyword
  • Useful for quick capture and later retrieval
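The shape of that on-device index can be sketched simply: #remember appends a snippet to a local store, and #recall becomes a search over it with no remote proxy in the loop. A real build would use a persistent, ranked index; this in-memory keyword version only illustrates the pattern and is an assumption, not Valence's implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Sketch of on-device memory: snippets are captured locally and
// searched locally, so recall never depends on a remote service.
public class LocalMemory {
    private final List<String> entries = new ArrayList<>();

    // #remember: capture a snippet into the local index.
    public void remember(String snippet) {
        entries.add(snippet);
    }

    // #recall: return every stored snippet mentioning the keyword.
    public List<String> recall(String keyword) {
        String needle = keyword.toLowerCase(Locale.ROOT);
        List<String> hits = new ArrayList<>();
        for (String entry : entries) {
            if (entry.toLowerCase(Locale.ROOT).contains(needle)) {
                hits.add(entry);
            }
        }
        return hits;
    }
}
```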

Grounding

Google Search grounding

Gemini can lean on real-time web search when freshness matters, returning cited results instead of pretending static knowledge is enough.

  • Useful for current events, pricing, and live references
  • Source-aware answers inside the same surface

Input

Voice-first capture

Use speech input for hands-free prompting when typing is friction. Voice works with the same provider choice instead of locking you to a single assistant stack.

  • Fast prompt capture on the move
  • Pairs naturally with default assistant integration

Attachments

Files and images in context

Attach photos or files directly to prompts for models that can reason over images and mixed content, which is especially useful on a device with a camera always in reach.

  • Gallery and camera workflows fit naturally
  • Good for capture, analysis, and field notes

Trust boundary

No middleman

API keys are encrypted with AES-256 through Android Keystore and sent directly to providers. Conversations stay on the device unless you choose a remote model.

  • Your keys stay under your control
  • Ollama remains the path for local-first execution
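The encrypt-at-rest posture described above can be sketched with standard AES-256-GCM. On device, the `SecretKey` would come from the AndroidKeyStore provider so raw key material never leaves the hardware; this sketch generates one in memory purely so it runs anywhere, and is an illustration of the approach, not Valence's code:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

// Illustrative AES-256-GCM credential encryption. On Android the key
// would be created inside the AndroidKeyStore; here it is in-memory
// so the sketch is runnable on any JVM.
public class KeyVault {
    private static final int GCM_TAG_BITS = 128;
    private static final int IV_BYTES = 12;

    static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256); // AES-256
        return kg.generateKey();
    }

    // Prepends the random IV to the ciphertext so decrypt can recover it.
    static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        byte[] ct = c.doFinal(plaintext);
        byte[] out = new byte[IV_BYTES + ct.length];
        System.arraycopy(iv, 0, out, 0, IV_BYTES);
        System.arraycopy(ct, 0, out, IV_BYTES, ct.length);
        return out;
    }

    static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key,
               new GCMParameterSpec(GCM_TAG_BITS, blob, 0, IV_BYTES));
        byte[] ct = new byte[blob.length - IV_BYTES];
        System.arraycopy(blob, IV_BYTES, ct, 0, ct.length);
        return c.doFinal(ct);
    }
}
```

Using GCM gives authenticated encryption, so a tampered blob fails to decrypt instead of silently yielding a corrupted API key.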

Seven providers, zero mobile downgrade.

Valence Mobile keeps the same model menu and local-first posture as the desktop product, so moving between devices does not mean downgrading the workflow.

OpenAI

GPT-4o, GPT-4, GPT-3.5

Useful for fast generalist answers, multimodal prompts, and broad app utility.

Cloud

Anthropic

Claude 4, Claude 3.5 Sonnet, Haiku

Strong reasoning and careful review when the mobile prompt still needs depth.

Cloud

Google

Gemini 2.0, Gemini Pro + Search

Pairs well with search-grounded mobile questions and live information requests.

Cloud

Azure OpenAI

Enterprise GPT deployments

Fits organizations that need Azure-hosted deployments and tighter platform controls.

Enterprise

xAI

Grok-3

Adds another response style when multiple viewpoints improve the decision.

Cloud

OpenRouter

Multi-model gateway

Helpful when the mobile workflow benefits from broader model access under one provider umbrella.

Gateway

Ollama

Local models via network

The private path when the phone talks to local infrastructure instead of public APIs.

Local

7 · Provider families on Android
0 · Helix middleman servers
AES-256 · Key encryption via Android Keystore
100% · On-device storage posture

Integrated with the OS, not trapped inside it.

Valence Mobile is built with .NET MAUI and uses Android-native APIs where they reduce friction: launch surfaces, quick actions, background durability, and assistant access.

Widget

Home screen widget

Keep a Valence entry point on the home screen for quick chat starts, voice capture, and at-a-glance continuity.

Quick access

Quick Settings tile

Drop straight into Valence from the Android notification shade without hunting through the launcher.

Assistant

Default assistant handoff

Long-press home and launch directly into Valence so voice entry can target any provider, not just a single bundled model.

Continuity

Bubble notifications

Keep responses close while staying in another app, then expand the thread when you need to re-enter the full workspace.

Launch paths

App shortcuts

Expose quick actions from the app icon for new chat, current thread return, and fast voice capture.

Durability

Background saves

Conversations and memory persist when Android backgrounds the app, which matters on devices aggressive about lifecycle cleanup.
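The durability pattern here is simple: whenever Android signals that the app is about to be backgrounded, the open thread is flushed to app-private storage and restored on relaunch. A minimal sketch, assuming a plain text-file store (file name and format are illustrative, not Valence's actual schema):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Sketch of the background-save posture: flush the open thread to
// app-private storage before the process can be reclaimed, then
// restore it on the next launch.
public class ConversationStore {
    private final Path file;

    ConversationStore(Path appPrivateDir) {
        this.file = appPrivateDir.resolve("thread.txt");
    }

    // On device, this would be called from the onStop lifecycle hook.
    void save(List<String> messages) throws IOException {
        Files.write(file, messages);
    }

    List<String> restore() throws IOException {
        return Files.exists(file) ? Files.readAllLines(file) : List.of();
    }
}
```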

Useful between desks, not just on one.

A few moments where the Android version changes how the product fits into daily work: quick capture, voice access, continuity, and privacy on the move.

On-the-go research

Ask current questions while commuting or between meetings, then switch providers when one answer needs a deeper second pass.

  • Google Search grounding with citations
  • Cross-model follow-up without rebuilding context
  • Past threads remain recallable on device

Voice-first interaction

When typing is the bottleneck, capture prompts by voice and keep the provider choice open instead of defaulting to a single assistant stack.

  • Speech input for any connected provider
  • Default assistant routing into Valence
  • Good for quick questions and field notes

Quick capture and remember

Use #remember when an idea lands, then pull it back later with #recall from phone or desktop.

  • Memory capture while the thought is fresh
  • Semantic retrieval later in the day
  • Works well for travel, meetings, and errands

Desktop continuity

Keep the same providers, profiles, and logic on both platforms so leaving the desk does not reset the system.

  • Conversation import and export
  • Settings and profile portability
  • Shared product language across devices

Cost-smart mobile AI

Route quick tasks to cheaper models, reserve premium models for harder questions, and lean on Ollama when local infrastructure is available.

  • Choose model cost per task
  • No Valence subscription on top
  • Provider choice stays explicit

Private and secure

Local storage, Android Keystore, and direct provider calls keep the trust boundary readable even on a device you carry everywhere.

  • AES-256 credential encryption
  • Zero Helix telemetry layer in the loop
  • Ollama path for local-first execution

AI that goes where you go.

Valence Mobile extends the Helix product posture to Android: provider choice, local control, native touchpoints, and direct calls to the models you trust.