Desktop Features
Local AI Inference
Run AI models directly on your hardware. Supports NVIDIA, AMD, and Apple Silicon GPUs for fast local processing.
Distributed Inference
Contribute idle GPU and CPU cycles to the Better AI network. Earn token credits for your hardware contributions — up to $0.50 per GPU-hour.
System Tray & Auto-Updates
Runs quietly in your system tray. Automatic updates ensure you always have the latest features and security patches.
Offline Capable
Access AI capabilities even without an internet connection. Local models and cached responses keep you productive anywhere.
System Requirements
- Windows: Windows 10 or later, 64-bit, 4GB RAM minimum (8GB recommended for local inference)
- macOS: macOS 12 Monterey or later, Apple Silicon or Intel
- GPU (optional): NVIDIA GTX 1060+ or AMD RX 580+ for GPU inference; Apple Silicon M1+ for macOS
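The requirements above can be pre-checked programmatically before installing. Below is a minimal, hypothetical sketch of such a check (the installer itself may do something different); it reads total RAM via POSIX `os.sysconf`, so it covers macOS and Linux but not Windows, and the threshold constants simply mirror the numbers listed above:

```python
import os
import platform

MIN_RAM_GB = 4          # stated minimum
RECOMMENDED_RAM_GB = 8  # recommended for local inference

def total_ram_gb() -> float:
    """Total physical RAM in GiB (POSIX systems only)."""
    page_size = os.sysconf("SC_PAGE_SIZE")
    page_count = os.sysconf("SC_PHYS_PAGES")
    return page_size * page_count / (1024 ** 3)

def check_requirements() -> dict:
    """Report OS, RAM, and whether the stated thresholds are met."""
    ram = total_ram_gb()
    return {
        "os": platform.system(),
        "ram_gb": round(ram, 1),
        "meets_minimum": ram >= MIN_RAM_GB,
        "recommended_for_inference": ram >= RECOMMENDED_RAM_GB,
    }

if __name__ == "__main__":
    print(check_requirements())
```

A Windows build of such a check would instead query `GlobalMemoryStatusEx` (e.g. via `ctypes`); GPU detection is vendor-specific and is omitted here.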