Authentication
Configure AI providers for release-governance scans, manage API keys, and secure credential storage.
Overview
Guardian authentication is part of the release-governance pipeline. GitHub device authorization unlocks monitoring, and provider credentials determine how Guardian evaluates AI-generated changes before release.
Supported Providers
Guardian supports multiple AI providers for governance-oriented code analysis:
| Provider | Description | Best For |
|---|---|---|
| Ollama | Local or self-hosted models | Privacy-focused, offline use |
| OpenAI | Hosted models | General purpose, fast responses |
| Anthropic | Hosted models | Deep code analysis, security review |
| Google Gemini | Hosted models | High speed, multimodal |
| GitHub Models | GitHub-hosted models | Managed infrastructure |
GitHub Sign-In
- Click Sign In with GitHub when prompted.
- Follow the device authorization flow in your browser.
- Return to Guardian and click Check Now if needed.
Provider Setup
Follow the steps below to configure your AI provider.
Step 1: Obtain API Keys
- OpenAI: Visit platform.openai.com
- Anthropic: Visit console.anthropic.com
- Google: Visit aistudio.google.com
- GitHub Models: Generate a fine-grained GitHub token with the `models:read` permission (see the in-app Settings note)
- Ollama: No API key required for local installation
Step 2: Configure in Guardian
Open Settings (gear icon) and configure your provider:
- Select your preferred Provider from the dropdown
- Enter your API Key (if required)
- Choose a Model from the available options
- Click Save to apply
Step 3: Validate Connection
Guardian automatically validates your credentials when you start monitoring or use Guru.
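Guardian runs this check for you, but if you want to verify a hosted key by hand, a minimal sketch against OpenAI's public `/v1/models` endpoint could look like the following (the helper names are illustrative, not Guardian's internals; other providers use different endpoints):

```python
import os
import urllib.error
import urllib.request

OPENAI_MODELS_URL = "https://api.openai.com/v1/models"  # public OpenAI endpoint

def auth_header(api_key: str) -> dict:
    """Build the Bearer-token header that OpenAI-style APIs expect."""
    return {"Authorization": f"Bearer {api_key}"}

def validate_openai_key(api_key: str) -> bool:
    """Return True if the key can list models; a 401 response means a bad key."""
    req = urllib.request.Request(OPENAI_MODELS_URL, headers=auth_header(api_key))
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

if __name__ == "__main__":
    key = os.environ.get("OPENAI_API_KEY", "")
    print("valid" if key and validate_openai_key(key) else "invalid or missing")
```

If the check fails, the Troubleshooting table below maps the common error messages to fixes.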
Credential Storage
Guardian uses platform-native secure storage:
| Platform | Storage Method | Encryption |
|---|---|---|
| macOS | Keychain | OS-managed |
Security Note: API keys are never stored in plain text files or exposed in logs.
Ollama Configuration
For local AI models using Ollama:
Local Setup
- Install Ollama from ollama.com
- Pull a model: `ollama pull llama3.2`
- In Guardian, select the Ollama provider
- Set Base URL to `http://localhost:11434` (Local) or your cloud endpoint
Cloud/Remote Setup
If you have a hosted Ollama instance:
- Select Ollama provider
- Choose "Cloud" from the Base URL dropdown or enter your custom endpoint
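As a rough illustration of what a connectivity check against either a local or remote Ollama endpoint involves, this sketch queries Ollama's `/api/tags` endpoint, which lists the models you have pulled (the helper names are illustrative, not Guardian's internals):

```python
import json
import urllib.request

def normalize_base_url(url: str) -> str:
    """Strip a trailing slash so endpoint paths join cleanly."""
    return url.rstrip("/")

def list_ollama_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Query Ollama's /api/tags endpoint and return the pulled model names."""
    url = normalize_base_url(base_url) + "/api/tags"
    with urllib.request.urlopen(url, timeout=5) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]
```

Point `base_url` at your cloud endpoint instead of `localhost` to check a remote instance.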
Rate Limit Management
Guardian automatically handles rate limits with exponential backoff. If you encounter rate limiting:
- Wait a few moments and try again
- Consider using a different provider temporarily
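Exponential backoff of the kind described above can be sketched as follows; this is illustrative, not Guardian's actual retry code, and the `RuntimeError` stands in for whatever rate-limit exception a provider client raises:

```python
import random
import time

def backoff_delays(retries: int, base: float = 1.0, cap: float = 60.0) -> list[float]:
    """Exponential backoff schedule: base * 2^attempt seconds, capped."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(retries)]

def call_with_backoff(fn, retries: int = 5):
    """Retry fn after rate-limit errors, sleeping with jitter between attempts."""
    for delay in backoff_delays(retries):
        try:
            return fn()
        except RuntimeError:  # stand-in for a provider's rate-limit error
            time.sleep(delay * random.uniform(0.5, 1.5))
    return fn()  # final attempt; let any remaining error propagate
```

The jitter spreads retries out so concurrent clients do not all hit the provider again at the same instant.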
API Key Rotation
Regularly rotate your API keys for security:
- Generate new key from provider dashboard
- Update in Guardian Settings
- Test by using Guru or starting monitoring
- Revoke old key from provider dashboard
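The ordering above matters: the new key is tested before the old one is revoked, so a failed rotation never leaves you without a working key. A small sketch of that guard (illustrative only; the `validate` callback stands in for any check, such as a test request to the provider):

```python
def rotate_key(old_key: str, new_key: str, validate) -> str:
    """Return new_key only after it passes validation; otherwise raise,
    so the caller keeps old_key and can safely defer revoking it."""
    if not validate(new_key):
        raise ValueError("new key failed validation; keep using the old key")
    return new_key
```

Only after `rotate_key` succeeds should the old key be revoked in the provider dashboard.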
Troubleshooting
Common Issues
| Error | Cause | Solution |
|---|---|---|
| "Invalid API Key" | Incorrect or expired key | Verify key in provider dashboard |
| "Rate Limited" | Too many requests | Wait and try again |
| "Network Error" | Connectivity issue | Check firewall/proxy settings |
| "Model Not Found" | Invalid model name | Update to supported model |
Reset Credentials
To clear stored credentials:
- Open Settings
- Click "Clear" next to the API Key field
- Enter your new credentials
Best Practices
- Use Separate Keys: One key per environment (dev, staging, prod)
- Monitor Usage: Check provider dashboards for usage
- Rotate Quarterly: Change keys every 3 months
- Limit Scope: Use provider's API key restrictions if available
- Never Commit: Keep keys out of version control
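The last two points can be illustrated with a short sketch: read the key from an environment variable rather than from source code, and mask it before it goes anywhere near logs (`mask_key` is a hypothetical helper, not part of Guardian):

```python
import os

def mask_key(key: str, visible: int = 4) -> str:
    """Show only the last few characters of a secret, e.g. for logs or UI."""
    if len(key) <= visible:
        return "*" * len(key)
    return "*" * (len(key) - visible) + key[-visible:]

# Read the key from the environment instead of hardcoding it in source.
api_key = os.environ.get("OPENAI_API_KEY", "")
print(mask_key(api_key) if api_key else "OPENAI_API_KEY not set")
```

Keeping keys in environment variables (or a secrets manager) means nothing sensitive ever lands in version control.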