Guru: AI Assistant
How to use the AI assistant for code analysis, explanations, and fix suggestions.
What is Guru?
Guru is Guardian's built-in AI assistant. It provides context-aware explanations of code issues, suggests fixes, and helps you understand your codebase better.
Key Features
| Feature | Description |
|---|---|
| Context-Aware Analysis | Understands your codebase structure |
| Fix Suggestions | Provides actionable code changes |
| Explanations | Describes why an issue matters |
| Web Search | Can search the web for additional context |
Using Guru
Opening the Chat
The Guru chat panel is accessible from the main interface. Simply type your question in the chat input at the bottom of the screen.
Asking Questions
Guru understands natural language queries:
"Why is this function flagged as a security issue?"
"How can I fix the N+1 query problem here?"
"What's the best way to refactor this class?"
"Explain the architectural impact of this change"
Web Search
Guardian can use Tavily to pull up-to-date information from the web. Web search is opt-in and requires a Tavily API key.
Enable it in Settings > Web Search. You can also force web search for a single message by adding /web or @web:
"/web What are the latest React best practices?"
"Explain the latest TypeScript 5 features @web"
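Detecting the `/web` and `@web` markers only takes simple string checks; the sketch below is an assumption about how such parsing could work, not Guardian's actual implementation (the helper name and stripping behavior are invented):

```python
def parse_web_flag(message: str) -> tuple[bool, str]:
    """Detect an explicit web-search request (hypothetical helper).

    Returns (force_web, cleaned_message) with the marker removed.
    """
    text = message.strip()
    if text.startswith("/web "):
        return True, text[len("/web "):].strip()
    if "@web" in text:
        return True, text.replace("@web", "").strip()
    return False, text
```

Either marker forces web search for that single message; without one, the setting in Settings > Web Search applies.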
How It Works
- If no Tavily key is stored, web search requests will fail (web search is optional; you can keep it off).
- If your message includes a URL, Guardian prefers Tavily Extract (focused extraction from the page).
- Otherwise Guardian uses Tavily Search (top results + short answer).
- The Tavily request contains your question (not your code). The extracted/search results are appended into the Guru context so your AI provider can cite them.
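The routing rule above (URL present → Extract, otherwise → Search) can be sketched in a few lines; the function and return labels are illustrative assumptions, not Guardian's real API:

```python
import re

# Matches any http(s) URL in the message (illustrative pattern)
URL_RE = re.compile(r"https?://\S+")

def route_tavily_request(question: str) -> str:
    """Pick the Tavily operation for a query (sketch only)."""
    if URL_RE.search(question):
        return "extract"   # focused extraction from the linked page
    return "search"        # top results plus a short answer
```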
Search Depth
You can control depth in Settings > Web Search:
- Basic: best default (fast and relevant)
- Advanced: highest recall (slower)
- Fast / Ultra-fast: quickest response (smaller result set)
- Auto: Guardian picks depth from your query (for example, “latest”, “pricing”, “CVE”, “security advisory” tends to trigger advanced depth)
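Auto depth selection amounts to a keyword heuristic; a minimal sketch, assuming the hint words listed above trigger advanced depth (the exact word list and function are hypothetical):

```python
# Hint words the docs mention as tending to trigger advanced depth
ADVANCED_HINTS = ("latest", "pricing", "cve", "security advisory")

def pick_search_depth(query: str) -> str:
    """Map a query to a Tavily search depth (hypothetical heuristic)."""
    q = query.lower()
    if any(hint in q for hint in ADVANCED_HINTS):
        return "advanced"  # highest recall, slower
    return "basic"         # fast and relevant default
```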
Best Practices (Better Results, Lower Cost)
- Keep queries short and specific (Guardian truncates Tavily queries to ~400 characters).
- Split complex questions into a few focused requests instead of one long prompt.
- Use `site:example.com` to restrict sources when you already know the best domain.
- Prefer a URL plus a clear instruction when you want one page summarized (Extract is tighter than broad Search).
Applying Fixes
When Guru suggests a fix, ask it to return the FULL updated file content (no diff markers and no markdown). Then:
- Use the FIX action in the Monitor view when a proposed fix is available.
- Or write the proposal to `.guardian-proposals/fix_proposals.jsonl` and use Reviews to apply and track it.
Tip: Always review fixes before applying. Guru is helpful but suggestions should be verified.
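Appending a proposal to the JSONL file means writing one JSON object per line; the field names in this sketch are illustrative assumptions, not Guardian's documented schema:

```python
import json
from pathlib import Path

def append_fix_proposal(file_path: str, new_content: str, reason: str) -> None:
    """Append one fix proposal as a single JSON line (hypothetical schema)."""
    proposals = Path(".guardian-proposals")
    proposals.mkdir(exist_ok=True)
    record = {
        "file": file_path,       # file the fix applies to (assumed field name)
        "content": new_content,  # FULL updated file content, no diff markers
        "reason": reason,        # why the change is proposed
    }
    with open(proposals / "fix_proposals.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```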
Guru Capabilities
Code Analysis
Guru can analyze code for:
- Security vulnerabilities
- Architectural patterns
- Performance issues
- Code style problems
Explanations
Ask Guru to explain:
- Why a pattern is problematic
- The impact of an issue
- Alternative approaches
- Best practices for your stack
Refactoring
Guru can suggest:
- Function extraction
- Interface improvements
- Dependency patterns
- Test structure improvements
Configuration
Provider Settings
Navigate to Settings to configure Guru:
| Setting | Description |
|---|---|
| Provider | AI model provider (Ollama, OpenAI, etc.) |
| Model | Specific model version |
| API Key | Your provider API key |
Web Search Settings
In Settings, you can configure Tavily for web search:
| Setting | Description |
|---|---|
| Tavily API Key | Enable web search capabilities |
Chat History
Guru maintains conversation history per project:
- Conversations persist across sessions
- Use Clear Chat to start fresh
- History helps Guru understand context
Best Practices
Effective Prompting
Good prompts are specific:
"Explain why the useEffect in UserProfile.tsx causes a memory leak"
Avoid vague prompts:
"What's wrong with my code?"
Workflow Integration
- Review Phase: Use Guru to understand issues
- Fix Phase: Apply suggested fixes or use as reference
- Verify Phase: Check if the fix resolves the issue
Learning from Guru
Guru explanations are educational:
- Read the "why" explanations
- Understand the principles behind suggestions
- Apply learnings to new code proactively
Semantic Search
Guardian includes semantic search to find similar code patterns across your codebase:
How It Works
When you scan your project, Guardian creates semantic embeddings of findings and stores them locally. This enables similarity-based retrieval.
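Similarity-based retrieval typically ranks stored finding embeddings by cosine similarity to the query embedding; this is a minimal sketch of that idea, not Guardian's internal code:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Score in [-1, 1]; higher means more semantically similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_matches(query_vec, finding_vecs, k=3):
    """Return the k stored findings most similar to the query.

    finding_vecs: iterable of (finding_id, embedding) pairs.
    """
    scored = [(cosine_similarity(query_vec, v), fid) for fid, v in finding_vecs]
    return sorted(scored, reverse=True)[:k]
```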
Using Semantic Search
Trigger semantic search by using these keywords in your query:
"Find similar issues to this"
"Show me critical patterns like this"
"Are there semantic matches for this vulnerability?"
Search Triggers
Guru automatically uses semantic search when you include:
- English: "similar", "like this", "resemble", "pattern", "semantic"
What You'll See
When semantic matches are found, Guru includes them in the context:
```
### Semantic Similarity Matches
- `src/auth.ts` [Critical] similarity=0.87 (openai:text-embedding-3-small)
  - preview: SQL injection in user input validation
- `src/db.rs` [Critical] similarity=0.82 (openai:text-embedding-3-small)
  - preview: Raw SQL concatenation with user input
```
Embedding Providers
Guardian supports multiple embedding strategies:
| Provider | Model | Use Case |
|---|---|---|
| Auto (default) | openai:text-embedding-3-small or ollama:nomic-embed-text | Uses OpenAI when a valid key exists; otherwise starts with Ollama and falls back to local hash |
| OpenAI | text-embedding-3-small | Best quality, requires API key |
| Ollama | nomic-embed-text | Local/offline option |
| Local Hash | Deterministic fallback | Offline mode, no external calls |
Configure via environment variables:
```shell
GUARDIAN_EMBED_MODE=ollama            # or "openai", "local", "auto"
GUARDIAN_EMBED_PROVIDER=openai        # legacy alias for GUARDIAN_EMBED_MODE
GUARDIAN_EMBED_MODEL=text-embedding-3-small
GUARDIAN_EMBED_MODEL_OLLAMA=nomic-embed-text
GUARDIAN_OFFLINE=1                    # forces the local hash fallback
```
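How these variables combine can be sketched as a resolution function; the precedence shown (offline flag first, then mode, then the legacy alias) is an assumption inferred from the variable names, not Guardian's documented order:

```python
import os

def resolve_embed_mode(env=None) -> str:
    """Resolve the embedding mode from environment variables (sketch).

    Assumed precedence: GUARDIAN_OFFLINE > GUARDIAN_EMBED_MODE >
    GUARDIAN_EMBED_PROVIDER (legacy alias) > "auto" default.
    """
    env = os.environ if env is None else env
    if env.get("GUARDIAN_OFFLINE") == "1":
        return "local"  # forced deterministic hash fallback
    mode = env.get("GUARDIAN_EMBED_MODE") or env.get("GUARDIAN_EMBED_PROVIDER")
    return mode or "auto"  # auto: OpenAI if keyed, else Ollama, else local hash
```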
Privacy
- Embeddings are stored locally in `~/.guardian/memory.db`
- No code leaves your machine for embedding generation (local hash mode)
- OpenAI/Ollama modes send only the analyzed code snippet
Limitations
Guru is powerful but has limits:
| Limitation | Workaround |
|---|---|
| Large files may be truncated | Focus on specific sections |
| May not know proprietary patterns | Provide context in prompts |
| Suggestions need review | Always verify before applying |
| Rate limits apply | Configure provider limits |
| Semantic search requires prior scans | Run a scan first to build the index |
Privacy Considerations
When using Guru:
- Code snippets are sent to your configured AI provider
- Choose providers with appropriate data policies
- Avoid sending secrets or PII in prompts
- Review provider terms for your compliance needs
Troubleshooting
| Issue | Solution |
|---|---|
| No response | Check API key in Settings |
| Generic answers | Include more context in your question |
| Slow responses | Try a faster model |
| Incorrect suggestions | Provide more specific context |