Getting Started Guide
Follow these simple steps to set up and use our service.
Step 4: Configure LLM
Choose between a cloud-hosted LLM and local, on-device inference for privacy.
- Select your preferred LLM option in Settings
- For cloud: Add your API key if you have one
- For local inference: Enable WebGPU in Chrome flags (you can verify it's on with the check below)
- Test your configuration with a sample prompt
Tip: Local inference keeps all your data on your device for maximum privacy, but requires a modern GPU for best performance.
Settings are configured within the Chrome extension after installation.
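If you want to confirm WebGPU is actually enabled before testing a prompt, a quick check like the sketch below works from any Chrome page. This is a minimal illustration, not part of the extension; it assumes the @webgpu/types definitions when compiled as TypeScript (the same code runs as plain JavaScript in the DevTools console).

```typescript
// Minimal WebGPU availability check (assumes @webgpu/types for TypeScript).
async function checkWebGpu(): Promise<void> {
  if (!navigator.gpu) {
    console.log("WebGPU not exposed: enable it in chrome://flags and restart Chrome.");
    return;
  }
  // requestAdapter() resolves to null when no usable GPU is found.
  const adapter = await navigator.gpu.requestAdapter();
  console.log(
    adapter
      ? "GPU adapter found: local inference should work."
      : "No GPU adapter: fall back to cloud inference."
  );
}

void checkWebGpu();
```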
Quick Troubleshooting
I can't see the WebGPU option in Chrome flags
Make sure you're using Google Chrome version 113 or newer; WebGPU first shipped in Chrome 113, so the flag won't appear in older versions.
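If you're not sure which version you're on, chrome://version shows the exact build, or a rough sketch like this one (userAgent parsing is approximate, not authoritative) prints the major version:

```typescript
// Rough check: parse the Chrome major version out of the user agent string.
const match = navigator.userAgent.match(/Chrome\/(\d+)/);
const major = match ? Number(match[1]) : 0;
console.log(
  major >= 113
    ? `Chrome ${major}: WebGPU should be available.`
    : `Chrome ${major || "unknown"}: update to 113 or newer for WebGPU.`
);
```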
The extension isn't working after I enabled WebGPU
Restart Chrome completely after enabling WebGPU: close all Chrome windows, then reopen the browser. The flag only takes effect after a full restart.
My GPU isn't powerful enough for local inference
You can still use our service with cloud-based inference. Just add your API key in the settings.
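For context, a cloud request is just an authenticated HTTPS call made with the key you save in Settings. The sketch below is illustrative only: the endpoint and model name are assumptions for demonstration, since the extension's actual backend isn't specified in this guide.

```typescript
// Illustrative only: assumes an OpenAI-compatible chat endpoint and a
// hypothetical model name; your key is sent as a standard Bearer token.
async function cloudPrompt(apiKey: string, prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // the key you add in Settings
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // assumption: any chat model your key can access
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```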