AI Code Review
Get instant, intelligent code reviews powered by GPT-4 or Claude
AI Code Review uses advanced language models to automatically review your pull requests, catching bugs, security issues, and code quality problems before they reach production.
📖 Table of Contents
- Overview
- Setup
- Using AI Review
- Understanding Review Output
- Customizing Reviews
- Cost Considerations
- Best Practices
- Troubleshooting
Overview
What AI Review Catches
🔒 Security Vulnerabilities
- SQL injection risks
- XSS (Cross-Site Scripting) vectors
- Authentication bypasses
- Secrets/credentials in code
- Insecure dependencies
⚡ Performance Issues
- N+1 database queries
- Inefficient algorithms (O(n²) → O(n log n))
- Memory leaks
- Blocking operations in async code
- Missing indexes
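The N+1 query pattern is one of the most common performance findings: one query to fetch a list, then one more query per item. A minimal sketch of the anti-pattern and its fix, using a hypothetical stub `db` that just counts issued queries (not a real driver):

```typescript
// Stub database that counts issued queries, standing in for a real driver.
let queryCount = 0;
const db = {
  query(_sql: string, _params?: unknown[]): unknown[] {
    queryCount++;
    return [];
  },
};

// ❌ N+1 pattern: one query per user.
function postsN1(userIds: number[]): void {
  for (const id of userIds) {
    db.query("SELECT * FROM posts WHERE user_id = ?", [id]);
  }
}

// ✅ One batched query covers all users.
function postsBatched(userIds: number[]): void {
  db.query("SELECT * FROM posts WHERE user_id IN (?)", [userIds]);
}

postsN1([1, 2, 3]);
const n1Queries = queryCount; // 3
queryCount = 0;
postsBatched([1, 2, 3]);
const batchedQueries = queryCount; // 1
```

For three users the N+1 version issues three queries where one would do; on a real list of hundreds of rows the difference dominates request latency.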
📝 Code Quality
- Best practice violations
- Code smells (long functions, deep nesting)
- Incomplete error handling
- Missing input validation
- Poor naming conventions
🐛 Potential Bugs
- Null/undefined pointer exceptions
- Race conditions
- Off-by-one errors
- Logic flaws
- Type mismatches
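Off-by-one errors are a good example of what the reviewer flags: a loop bound of `<=` reads one element past the end. A self-contained illustration:

```typescript
// ❌ Off-by-one: `<=` iterates one index past the end,
// so the final assignment reads items[items.length] (undefined).
function lastItemBuggy(items: string[]): string | undefined {
  let last: string | undefined;
  for (let i = 0; i <= items.length; i++) {
    last = items[i];
  }
  return last;
}

// ✅ `<` stops at the last valid index.
function lastItemFixed(items: string[]): string | undefined {
  let last: string | undefined;
  for (let i = 0; i < items.length; i++) {
    last = items[i];
  }
  return last;
}

const data = ["a", "b", "c"];
const buggy = lastItemBuggy(data); // undefined
const fixed = lastItemFixed(data); // "c"
```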
How It Works
```
1. Developer opens/updates PR
        ↓
2. OpenCodeHub sends diff to AI
        ↓
3. AI analyzes code changes
        ↓
4. AI posts inline comments + summary
        ↓
5. Developer addresses feedback
```

Privacy note: Code is sent to the AI provider (OpenAI/Anthropic). Use on-premise models for sensitive code.
Configure AI Provider
Option 1: OpenAI (GPT-4)

- Get an API key from platform.openai.com
- Add to .env:

```
AI_PROVIDER=openai
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx
OPENAI_MODEL=gpt-4-turbo-preview  # or gpt-4, gpt-3.5-turbo
```

- Restart OpenCodeHub
Option 2: Anthropic (Claude)

- Get an API key from console.anthropic.com
- Add to .env:

```
AI_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxx
ANTHROPIC_MODEL=claude-3-opus-20240229  # or claude-3-sonnet
```

- Restart OpenCodeHub
Option 3: Azure OpenAI

```
AI_PROVIDER=azure-openai
AZURE_OPENAI_API_KEY=xxxxx
AZURE_OPENAI_ENDPOINT=https://yourinstance.openai.azure.com/
AZURE_OPENAI_DEPLOYMENT=your-deployment-name
```

Option 4: Self-Hosted (Coming Soon)

OpenCodeHub will support local models via Ollama:

```
AI_PROVIDER=ollama
OLLAMA_ENDPOINT=http://localhost:11434
OLLAMA_MODEL=codellama:13b
```

Enable for Repository
Via Web UI:

- Navigate to Repository → Settings → AI Review
- Toggle “Enable AI Code Review”
- Configure settings:
  - Trigger: On PR open, On push, or Manual only
  - Auto-comment: Post comments automatically
  - Severity threshold: All, Warning+, or Critical only
  - Max files: Limit review to N files (cost control)
- Click Save
Via API:

```
curl -X PATCH https://git.yourcompany.com/api/repos/123/ai-review \
  -H "Authorization: Bearer $TOKEN" \
  -d '{
    "enabled": true,
    "trigger": "on_pr_open",
    "auto_comment": true,
    "severity_threshold": "warning"
  }'
```

Via CLI:

```
och repo config ai-review --enable \
  --trigger on_pr_open \
  --auto-comment
```

Using AI Review
Automatic Reviews

If enabled, AI reviews trigger automatically:

```
# 1. Create PR
git push origin feature-branch
# → AI review starts automatically

# 2. Update PR
git push origin feature-branch
# → AI re-reviews changed files only
```

You’ll see:
- ✅ Status check: “AI Review: In Progress” → “AI Review: Complete”
- 💬 Inline comments: On specific lines with issues
- 📊 Summary comment: Overall assessment
Manual Reviews
Request AI review on-demand:
Via Web UI:
- Open PR page
- Click “Request AI Review” button
- Wait ~30-60 seconds
- Review appears as comments
Via CLI:

```
# Review specific PR
och review ai 125

# Wait for the review to complete
och review ai 125 --wait
```

Via API:

```
curl -X POST https://git.yourcompany.com/api/repos/OWNER/REPO/pulls/125/ai-review \
  -H "Authorization: Bearer $TOKEN"
```

Understanding Review Output
Severity Levels

AI assigns severity to each finding:
| Severity | Action | Meaning | Example |
|---|---|---|---|
| 🔴 CRITICAL | Must fix | Security vulnerability, data loss risk | SQL injection, exposed API key |
| 🟡 WARNING | Should fix | Performance issue, bad practice | N+1 query, missing error handling |
| 🔵 INFO | Consider | Suggestion, refactoring opportunity | Better variable name, code simplification |
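The `severity_threshold` setting filters findings against this ranking. A hypothetical sketch of the filtering logic (illustrative only; the actual filtering happens server-side in OpenCodeHub):

```typescript
// Severity levels ranked so a threshold can be compared numerically.
type Severity = "critical" | "warning" | "info";
const rank: Record<Severity, number> = { critical: 3, warning: 2, info: 1 };

const findings: { severity: Severity; message: string }[] = [
  { severity: "critical", message: "SQL injection" },
  { severity: "warning", message: "N+1 query" },
  { severity: "info", message: "Rename variable" },
];

// With threshold "warning", info-level findings are suppressed.
const threshold: Severity = "warning";
const reported = findings.filter((f) => rank[f.severity] >= rank[threshold]);
// reported contains the critical and warning findings only.
```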
Example Review Comment

🔴 CRITICAL: SQL Injection Vulnerability

**File:** `src/api/users.ts`
**Line:** 45

**Issue:** User input is directly concatenated into the SQL query without sanitization.

**Current Code:**

```typescript
const query = `SELECT * FROM users WHERE email = '${userEmail}'`;
```

**Risk:** An attacker could inject malicious SQL:

```
userEmail = "' OR '1'='1"
→ Query becomes: SELECT * FROM users WHERE email = '' OR '1'='1'
→ Returns ALL users (authentication bypass)
```

**Fix:** Use parameterized queries or an ORM:

```typescript
const query = `SELECT * FROM users WHERE email = ?`;
const results = await db.query(query, [userEmail]);

// Or with an ORM:
const user = await db.users.findOne({ email: userEmail });
```

References:
Summary Comment

AI posts an overall summary:

```markdown
## 🤖 AI Code Review Summary

**Overall Assessment:** ⚠️ Needs Attention

### Statistics
- Files reviewed: 8
- Critical issues: 2
- Warnings: 5
- Info: 3

### Critical Issues
1. 🔴 SQL Injection in `src/api/users.ts:45`
2. 🔴 Hardcoded API key in `src/config.ts:12`

### Recommendations
1. Add input validation to all API endpoints
2. Move secrets to environment variables
3. Consider adding integration tests for auth flows

### Positive Notes
- ✅ Good error handling in payment service
- ✅ Well-structured database migrations
- ✅ Comprehensive TypeScript types

**Estimated fix time:** 2-3 hours
```

Customizing Reviews
Review Rules

Create .opencodehub/ai-review.yml in your repository:

```yaml
# AI Review Configuration

# Which files to review
include:
  - "src/**/*.ts"
  - "src/**/*.tsx"
  - "lib/**/*.js"

# Which files to skip
exclude:
  - "**/*.test.ts"
  - "**/*.spec.ts"
  - "**/generated/**"
  - "dist/**"

# Focus areas (weighted 1-10)
focus:
  security: 10      # Highest priority
  performance: 8
  bugs: 8
  code_quality: 5
  documentation: 3

# Severity thresholds
report:
  critical: always
  warning: always
  info: on_request  # Only if --verbose flag used

# Custom rules
custom_rules:
  - name: "No console.log in production"
    pattern: 'console\.log\('
    severity: warning
    message: "Remove console.log before merging"

  - name: "Require error handling"
    pattern: 'await.*\(.*\)'
    check: must_have_try_catch
    severity: warning

# AI instructions
instructions: >
  Pay special attention to:
  1. Authentication and authorization logic
  2. Database query efficiency
  3. Input validation
  4. Proper error handling

  Our coding standards:
  - TypeScript strict mode
  - Functional programming preferred
  - Maximum function length: 50 lines
  - Maximum cyclomatic complexity: 10
```

Custom Prompts
Use Settings → AI Review Rules to add repository‑specific prompts and checks. Rules are applied automatically when AI reviews run.
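The regex in a custom rule like “No console.log in production” (see the Review Rules config above) can be sanity-checked locally before committing it. A small sketch, assuming OpenCodeHub applies the pattern line by line:

```typescript
// Regex from the hypothetical "No console.log in production" rule.
const pattern = /console\.log\(/;

const sampleLines = [
  'console.log("debug");', // should be flagged
  'logger.info("ok");',    // should pass
];

const violations = sampleLines.filter((line) => pattern.test(line));
// violations contains only the console.log line.
```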
Language-Specific Reviews
AI automatically adapts to the programming language:
- TypeScript/JavaScript: Type safety, async/await, React best practices
- Python: PEP 8, type hints, Django/Flask patterns
- Go: Go idioms, error handling, concurrency
- Rust: Borrow checker, unsafe code, idiomatic Rust
- Java: Design patterns, Spring best practices, thread safety
Cost Considerations
Pricing

AI review costs depend on provider and model:
OpenAI (GPT-4 Turbo):
- ~$0.01 per 1K tokens (input)
- ~$0.03 per 1K tokens (output)
- Typical PR: 500-2000 tokens = $0.02-0.10 per review
Anthropic (Claude 3 Opus):
- ~$0.015 per 1K tokens (input)
- ~$0.075 per 1K tokens (output)
- Typical PR: $0.03-0.15 per review
Example monthly cost:
- 50 PRs/month × $0.05/review = $2.50/month
- 500 PRs/month × $0.05/review = $25/month
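As a back-of-envelope check, the per-review figures above follow directly from the token prices (list prices assumed above; actual pricing varies by provider and date):

```typescript
// Rough per-review cost estimate, using the GPT-4 Turbo prices quoted above:
// ~$0.01 per 1K input tokens, ~$0.03 per 1K output tokens.
function reviewCost(inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1000) * 0.01 + (outputTokens / 1000) * 0.03;
}

// A typical PR: 1500 input tokens (the diff) plus 500 output tokens (the review).
const cost = reviewCost(1500, 500); // ≈ $0.03
```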
Cost Optimization
To reduce cost:

1. Keep PRs small and focused.
2. Avoid very large diffs in a single review.
3. Skip test files:

```yaml
exclude:
  - "**/*.test.*"
  - "**/*.spec.*"
```

4. Review only on-request:

```yaml
trigger: manual  # Don't auto-review every push
```

5. Use a severity threshold:

```yaml
severity_threshold: warning  # Skip "info" level comments
```

Best Practices
💡 When to Use AI Review

✅ Good Use Cases:
- Security-sensitive code (auth, payments)
- Complex algorithms
- Database queries
- API endpoints
- Third-party integrations
❌ Less Useful:
- Generated code
- Configuration files
- Simple refactoring
- Documentation-only changes
💡 Responding to AI Feedback
1. Don’t auto-fix everything
- AI can be wrong
- Context matters
- Use judgment
2. Engage in discussion
```
@ai-review That's a good catch, but in this case we're using a
trusted source and the data is already sanitized earlier in the pipeline.
See line 23 where we validate input.
```

3. Track false positives

```yaml
ignore:
  - file: src/utils/legacy.ts
    reason: "Legacy code, requires major refactor"
    until: 2024-12-31
```

💡 Combining with Human Review
AI complements humans; it doesn’t replace them:

1. AI reviews first (catches obvious issues)
2. Developer fixes critical/warning findings
3. Human reviewer approves

This workflow:
- Reduces human reviewer burden
- Catches more bugs
- Faster feedback cycles
Troubleshooting
“AI review taking too long”

Cause: Large PR (many files/lines).
Solution:
```
# Check review status
och review status 125

# Retry the review
och review ai 125
```

“AI review failed”
Causes:
- API key invalid/expired
- Rate limit exceeded
- Timeout (very large PR)
Solutions:
```
# 1. Verify API key
echo $OPENAI_API_KEY

# 2. Check rate limits
curl https://api.openai.com/v1/usage \
  -H "Authorization: Bearer $OPENAI_API_KEY"

# 3. Split large PR or review manually.
```

“Too many false positives”
Solution: Use Settings → AI Review Rules to refine prompts and reduce false positives.
“Missing obvious bugs”

Cause: AI missed context or an edge case.

Solution: Add custom instructions in Settings → AI Review Rules so reviews incorporate your conventions.
Advanced: CI/CD Integration
Block merge on critical findings

```yaml
name: AI Code Review
on: [pull_request]

jobs:
  ai-review:
    runs-on: ubuntu-latest
    steps:
      - name: Run AI Review
        run: och review ai ${{ github.event.pull_request.number }}
        env:
          OCH_TOKEN: ${{ secrets.OCH_TOKEN }}

      - name: Check for critical issues
        env:
          OCH_TOKEN: ${{ secrets.OCH_TOKEN }}
        run: |
          REVIEW=$(curl -s \
            -H "Authorization: Bearer $OCH_TOKEN" \
            "https://git.yourcompany.com/api/repos/OWNER/REPO/pulls/${{ github.event.pull_request.number }}/ai-review/latest")
          SEVERITY=$(echo "$REVIEW" | jq -r '.data.overallSeverity')
          if [ "$SEVERITY" = "critical" ]; then
            echo "❌ Critical issues found"
            exit 1
          fi
```