Configure AI providers, manage API keys, and secure credential storage.
Guardian requires authentication credentials to communicate with AI providers. This guide covers how to configure, manage, and secure your authentication setup.
Guardian supports multiple AI providers for code analysis:
| Provider | Description | Best For |
|---|---|---|
| Ollama | Local or self-hosted models | Privacy-focused, offline use |
| OpenAI | Hosted GPT models | General purpose, fast responses |
| Anthropic | Hosted Claude models | Deep code analysis, security review |
| Google Gemini | Hosted Gemini models | High speed, multimodal |
| Hosted Models | Provider-hosted AI models | Managed infrastructure |
Open Settings (gear icon) and configure your provider. Guardian automatically validates your credentials when you start monitoring or use Guru.
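Credential validation typically amounts to an authenticated request that the provider either accepts or rejects. A minimal sketch of that check, assuming an OpenAI-style Bearer-token API (the function names and the use of the `/v1/models` endpoint are illustrative, not Guardian's actual implementation):

```python
import urllib.error
import urllib.request


def build_auth_request(url: str, api_key: str) -> urllib.request.Request:
    """Build a request carrying the key as a Bearer token (OpenAI-style auth)."""
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Bearer {api_key}")
    return req


def validate_key(api_key: str, url: str = "https://api.openai.com/v1/models") -> bool:
    """Return True if the provider accepts the key, False if it rejects it."""
    try:
        with urllib.request.urlopen(build_auth_request(url, api_key), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as e:
        if e.code in (401, 403):  # invalid or expired key
            return False
        raise  # anything else (rate limit, server error) is a different problem
```

A 401/403 response maps to the "Invalid API Key" error in the troubleshooting table below, so it is treated as a definitive "no" rather than retried.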
Guardian uses platform-native secure storage:
| Platform | Storage Method | Encryption |
|---|---|---|
| macOS | Keychain | OS-managed |
| Windows | Credential Manager (DPAPI) | OS-managed |
| Linux | Secret Service API | OS-managed |
Security Note: API keys are never stored in plain text files or exposed in logs.
For local AI models using Ollama:
1. Pull a model: `ollama pull llama3.2`
2. Set the endpoint: `http://127.0.0.1:11434` (Local) or your cloud endpoint

If you have a hosted Ollama instance:
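Before pointing Guardian at an endpoint, it helps to confirm the Ollama server is actually reachable and which models it has pulled. A quick sketch using Ollama's `/api/tags` endpoint (the function names here are illustrative):

```python
import json
import urllib.error
import urllib.request


def ollama_reachable(endpoint: str = "http://127.0.0.1:11434") -> bool:
    """Return True if an Ollama server answers at the given endpoint."""
    try:
        with urllib.request.urlopen(f"{endpoint}/api/tags", timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


def list_models(endpoint: str = "http://127.0.0.1:11434") -> list:
    """Return the names of models the server has pulled (e.g. 'llama3.2')."""
    with urllib.request.urlopen(f"{endpoint}/api/tags", timeout=5) as resp:
        return [m["name"] for m in json.load(resp)["models"]]
```

If `ollama_reachable()` returns `False` for a local setup, make sure `ollama serve` is running; for a hosted instance, substitute your cloud endpoint URL.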
Guardian automatically handles rate limits with exponential backoff. If you still encounter rate limiting, wait and try again, or reduce how frequently analysis runs.
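Exponential backoff retries a rate-limited call with delays that double on each attempt, plus a little random jitter so many clients don't retry in lockstep. Guardian's exact retry policy isn't documented here; a minimal sketch of the technique (the `RateLimitError` class and parameter values are illustrative):

```python
import random
import time


class RateLimitError(Exception):
    """Stand-in for a provider's HTTP 429 'too many requests' response."""


def with_backoff(call, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Run `call`, retrying on RateLimitError with exponentially growing delays."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            delay = min(base_delay * 2 ** attempt, max_delay)
            time.sleep(delay + random.uniform(0, delay * 0.1))  # jitter
```

Capping the delay at `max_delay` keeps a long outage from producing multi-minute waits between attempts.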
Regularly rotate your API keys for security: generate a new key in your provider's dashboard, update it in Guardian's Settings, then revoke the old key.
| Error | Cause | Solution |
|---|---|---|
| "Invalid API Key" | Incorrect or expired key | Verify key in provider dashboard |
| "Rate Limited" | Too many requests | Wait and try again |
| "Network Error" | Connectivity issue | Check firewall/proxy settings |
| "Model Not Found" | Invalid model name | Update to supported model |
To clear stored credentials: