| skill_id | name | description | type | task_prompt | skill_document | test_code | repo_url | repo_commit | docker_image |
|---|---|---|---|---|---|---|---|---|---|
add-uint-support | Add UInt Support | Restore uint32/uint64 operator support in PyTorch | repair | # Task: Enable Unsigned Integer Support for Target Operators
## Background
Several operators in PyTorch do not currently support unsigned integer types (uint16, uint32, uint64). When users attempt to perform calculations with these tensor types, the system returns an error stating that the type is not implemented.
Modify the underlying code so that the following operators can correctly process unsigned integer types.
**Target Operators:**
- `remainder`
- `gcd`
- `floor_divide`
## Files to Modify
- `aten/src/ATen/native/BinaryOps.cpp` - Add unsigned integer type dispatch
- `aten/src/ATen/native/cpu/BinaryOpsKernel.cpp` - Add kernel implementations for unsigned types
## Requirements
- **Full Coverage**: Ensure `uint16`, `uint32`, and `uint64` are all supported for all three operators
- **Standard Compliance**: Follow PyTorch's current recommended type dispatch patterns. Use the standard macro approach for groups of types rather than listing individual types manually
- **Consistency**: Match the coding patterns already used by neighboring operators in the same files
## Acceptance Criteria
- The code compiles successfully
- `uint16`, `uint32`, and `uint64` work correctly for `remainder`, `gcd`, and `floor_divide` operators
| ---
name: add-uint-support
description: Add unsigned integer (uint) type support to PyTorch operators by updating AT_DISPATCH macros. Use when adding support for uint16, uint32, uint64 types to operators, kernels, or when user mentions enabling unsigned types, barebones unsigned types, or uint support.
---
# Add Unsigned Integer (uint) Support to Operators
This skill helps add support for unsigned integer types (uint16, uint32, uint64) to PyTorch operators by updating their AT_DISPATCH macros.
## When to use this skill
Use this skill when:
- Adding uint16, uint32, or uint64 support to an operator
- User mentions "unsigned types", "uint support", "barebones unsigned types"
- Enabling support for kUInt16, kUInt32, kUInt64 in kernels
- Working with operator implementations that need expanded type coverage
## Quick reference
**Add unsigned types to existing dispatch:**
```cpp
// Before
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES));
// After (method 1: add unsigned types explicitly)
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES));
// After (method 2: use V2 integral types if AT_INTEGRAL_TYPES present)
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_INTEGRAL_TYPES_V2), AT_EXPAND(AT_FLOATING_TYPES));
```
## Type group reference
**Unsigned type groups:**
- `AT_BAREBONES_UNSIGNED_TYPES`: kUInt16, kUInt32, kUInt64
- `AT_INTEGRAL_TYPES_V2`: AT_INTEGRAL_TYPES + AT_BAREBONES_UNSIGNED_TYPES
**Relationship:**
```cpp
AT_INTEGRAL_TYPES // kByte, kChar, kInt, kLong, kShort
AT_BAREBONES_UNSIGNED_TYPES // kUInt16, kUInt32, kUInt64
AT_INTEGRAL_TYPES_V2 // INTEGRAL_TYPES + BAREBONES_UNSIGNED_TYPES
```
## Instructions
### Step 1: Determine if conversion to V2 is needed
Check if the file uses AT_DISPATCH_V2:
**If using old AT_DISPATCH:**
- First convert to AT_DISPATCH_V2 using the at-dispatch-v2 skill
- Then proceed with adding uint support
**If already using AT_DISPATCH_V2:**
- Proceed directly to Step 2
### Step 2: Analyze the current dispatch macro
Identify what type groups are currently in use:
```cpp
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
// body
}), AT_EXPAND(AT_ALL_TYPES), kHalf, kBFloat16);
^^^^^^^^^^^^^^^^^^^^^^^^^
Current type coverage
```
Common patterns:
- `AT_EXPAND(AT_ALL_TYPES)` → includes AT_INTEGRAL_TYPES + AT_FLOATING_TYPES
- `AT_EXPAND(AT_INTEGRAL_TYPES)` → signed integers only
- `AT_EXPAND(AT_FLOATING_TYPES)` → floating point types
### Step 3: Choose the uint addition method
Two approaches:
**Method 1: Add AT_BAREBONES_UNSIGNED_TYPES explicitly**
- Use when: You want to be explicit about adding uint support
- Add `AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES)` to the type list
**Method 2: Substitute AT_INTEGRAL_TYPES with AT_INTEGRAL_TYPES_V2**
- Use when: The dispatch already uses `AT_EXPAND(AT_INTEGRAL_TYPES)`
- More concise: replaces one type group with its superset
- Only applicable if AT_INTEGRAL_TYPES is present
### Step 4: Apply the transformation
**Method 1 example:**
```cpp
// Before
AT_DISPATCH_V2(
dtype,
"min_values_cuda",
AT_WRAP([&]() {
kernel_impl<scalar_t>(iter);
}),
AT_EXPAND(AT_ALL_TYPES),
kBFloat16, kHalf, kBool
);
// After (add unsigned types)
AT_DISPATCH_V2(
dtype,
"min_values_cuda",
AT_WRAP([&]() {
kernel_impl<scalar_t>(iter);
}),
AT_EXPAND(AT_ALL_TYPES),
AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES),
kBFloat16, kHalf, kBool
);
```
**Method 2 example:**
```cpp
// Before
AT_DISPATCH_V2(
dtype,
"integral_op",
AT_WRAP([&]() {
kernel<scalar_t>();
}),
AT_EXPAND(AT_INTEGRAL_TYPES)
);
// After (substitute with V2)
AT_DISPATCH_V2(
dtype,
"integral_op",
AT_WRAP([&]() {
kernel<scalar_t>();
}),
AT_EXPAND(AT_INTEGRAL_TYPES_V2)
);
```
### Step 5: Handle AT_ALL_TYPES vs individual type groups
If the dispatch uses `AT_EXPAND(AT_ALL_TYPES)`:
- `AT_ALL_TYPES` = `AT_INTEGRAL_TYPES` + `AT_FLOATING_TYPES`
- To add uint: add `AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES)` to the list
If the dispatch separately lists INTEGRAL and FLOATING:
```cpp
// Before
AT_EXPAND(AT_INTEGRAL_TYPES), AT_EXPAND(AT_FLOATING_TYPES)
// After (Method 2 preferred)
AT_EXPAND(AT_INTEGRAL_TYPES_V2), AT_EXPAND(AT_FLOATING_TYPES)
```
### Step 6: Verify all dispatch sites
Check the file for ALL dispatch macros that need uint support:
- Some operators have multiple dispatch sites (CPU, CUDA, different functions)
- Apply the transformation consistently across all sites
- Ensure each gets the same type coverage updates
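The dispatch-site sweep above can be jump-started with a quick textual scan. This is only a sketch: real `AT_DISPATCH_V2` call sites often span multiple lines, and the demo file below is fabricated for illustration.

```shell
# Create a tiny demo file (illustrative only) with two dispatch sites.
cat > /tmp/demo_dispatch.cpp <<'EOF'
AT_DISPATCH_V2(dtype, "op_a", AT_WRAP(f), AT_EXPAND(AT_ALL_TYPES));
AT_DISPATCH_V2(dtype, "op_b", AT_WRAP(f), AT_EXPAND(AT_INTEGRAL_TYPES_V2));
EOF
# List sites that mention neither uint method -- candidates still needing uint support.
grep -n "AT_DISPATCH_V2" /tmp/demo_dispatch.cpp \
  | grep -v -e "AT_INTEGRAL_TYPES_V2" -e "AT_BAREBONES_UNSIGNED_TYPES"
```

Treat hits as a starting point for manual review, not a definitive list, since a multi-line call site can put the type list on a different line than the macro name.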
### Step 7: Validate the changes
Check that:
- [ ] AT_DISPATCH_V2 format is used (not old AT_DISPATCH)
- [ ] Unsigned types are added via one of the two methods
- [ ] All relevant dispatch sites in the file are updated
- [ ] Type groups use `AT_EXPAND()`
- [ ] Arguments are properly formatted and comma-separated
## Common patterns
### Pattern 1: AT_ALL_TYPES + extras
```cpp
// Before
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), kHalf, kBFloat16);
// After
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kHalf, kBFloat16);
```
### Pattern 2: Separate INTEGRAL + FLOATING
```cpp
// Before
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_INTEGRAL_TYPES), AT_EXPAND(AT_FLOATING_TYPES));
// After
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_INTEGRAL_TYPES_V2), AT_EXPAND(AT_FLOATING_TYPES));
```
### Pattern 3: Old dispatch needs conversion first
```cpp
// Before (needs v2 conversion first)
AT_DISPATCH_ALL_TYPES_AND2(kHalf, kBFloat16, dtype, "op", [&]() {
kernel<scalar_t>();
});
// After v2 conversion
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), kHalf, kBFloat16);
// After adding uint support
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kHalf, kBFloat16);
```
## Multiple dispatch sites example
For a file with multiple functions:
```cpp
void min_values_kernel_cuda(TensorIterator& iter) {
AT_DISPATCH_V2(iter.dtype(), "min_values_cuda", AT_WRAP([&]() {
impl<scalar_t>(iter);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf);
// ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
// Added uint support
}
void min_launch_kernel(TensorIterator &iter) {
AT_DISPATCH_V2(iter.input_dtype(), "min_cuda", AT_WRAP([&]() {
gpu_reduce_kernel<scalar_t>(iter);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf);
// ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
// Added uint support here too
}
```
## Decision tree
Use this decision tree to determine the approach:
```
Is the file using AT_DISPATCH_V2?
├─ No → Use at-dispatch-v2 skill first, then continue
└─ Yes
└─ Does it use AT_EXPAND(AT_INTEGRAL_TYPES)?
├─ Yes → Replace with AT_EXPAND(AT_INTEGRAL_TYPES_V2)
└─ No → Add AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES) to type list
```
## Edge cases
### Case 1: Dispatch with only floating types
If the operator only supports floating point types, don't add uint support:
```cpp
// Leave as-is - floating point only operator
AT_DISPATCH_V2(dtype, "float_op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_FLOATING_TYPES), kHalf);
```
### Case 2: Complex types present
Unsigned types work alongside complex types:
```cpp
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES),
AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES),
AT_EXPAND(AT_COMPLEX_TYPES),
kHalf, kBFloat16);
```
### Case 3: Already has uint support
Check if uint types are already present:
- If `AT_INTEGRAL_TYPES_V2` is used → already has uint support
- If `AT_BAREBONES_UNSIGNED_TYPES` is already in list → already has uint support
- Skip the file if uint support is already present
## Workflow
When asked to add uint support:
1. Read the target file
2. Check if using AT_DISPATCH_V2:
- If not → use at-dispatch-v2 skill first
3. Identify all dispatch macro sites
4. For each dispatch:
- Analyze current type groups
- Choose method (add BAREBONES_UNSIGNED or upgrade to V2)
- Apply transformation with Edit tool
5. Show the user the changes
6. Explain what was modified
## Important notes
- Always check if v2 conversion is needed first
- Apply changes consistently across all dispatch sites in the file
- Method 2 (AT_INTEGRAL_TYPES_V2) is cleaner when applicable
- Method 1 (explicit AT_BAREBONES_UNSIGNED_TYPES) is more explicit
- Unsigned types are: kUInt16, kUInt32, kUInt64 (not kByte which is uint8)
- Some operators may not semantically support unsigned types - use judgment
## Testing
After adding uint support, the operator should accept uint16, uint32, and uint64 tensors. The user is responsible for functional testing. | """
Unit Test for UInt32/64 Operator Support in PyTorch
"""
import torch
import pytest
class TestUIntOperators:
"""Tests for uint32 and uint64 operator support."""
@pytest.fixture(params=["uint32", "uint64"])
def dtype(self, request):
"""Parametrized fixture: uint32 and uint64."""
dtype_map = {
"uint32": torch.uint32,
"uint64": torch.uint64,
}
return dtype_map[request.param]
# =========================================================================
# Supported group: these 3 operators typically support uint32/64 in PyTorch
# =========================================================================
def test_bitwise_and(self, dtype):
"""Test bitwise_and operation (already supported)."""
a = torch.tensor(0b1100, dtype=dtype) # 12
b = torch.tensor(0b1010, dtype=dtype) # 10
result = torch.bitwise_and(a, b)
expected = torch.tensor(0b1000, dtype=dtype) # 8
assert torch.equal(result, expected), f"bitwise_and failed for {dtype}"
def test_mul(self, dtype):
"""Test multiplication operation (already supported)."""
a = torch.tensor(3, dtype=dtype)
b = torch.tensor(4, dtype=dtype)
result = torch.mul(a, b)
expected = torch.tensor(12, dtype=dtype)
assert torch.equal(result, expected), f"mul failed for {dtype}"
def test_eq(self, dtype):
"""Test equality comparison operation (already supported)."""
a = torch.tensor(5, dtype=dtype)
b = torch.tensor(5, dtype=dtype)
result = torch.eq(a, b)
expected = torch.tensor(True)
assert torch.equal(result, expected), f"eq failed for {dtype}"
# =========================================================================
# Unsupported group: these 3 operators typically do not support uint32/64 (need to be fixed)
# =========================================================================
def test_remainder(self, dtype):
"""Test remainder operation (support pending)."""
a = torch.tensor(10, dtype=dtype)
b = torch.tensor(3, dtype=dtype)
result = torch.remainder(a, b)
expected = torch.tensor(1, dtype=dtype)
assert torch.equal(result, expected), f"remainder failed for {dtype}"
def test_gcd(self, dtype):
"""Test GCD (greatest common divisor) operation (support pending)."""
a = torch.tensor(12, dtype=dtype)
b = torch.tensor(8, dtype=dtype)
result = torch.gcd(a, b)
expected = torch.tensor(4, dtype=dtype)
assert torch.equal(result, expected), f"gcd failed for {dtype}"
def test_floor_divide(self, dtype):
"""Test floor_divide operation (support pending)."""
a = torch.tensor(10, dtype=dtype)
b = torch.tensor(3, dtype=dtype)
result = torch.floor_divide(a, b)
expected = torch.tensor(3, dtype=dtype)
assert torch.equal(result, expected), f"floor_divide failed for {dtype}"
| zhangyiiiiii/swe-skills-bench-pytorch:latest | ||
fix | React Code Fix & Linter | See task file for detailed mission requirements. | fix | # Task: Fix ESLint Violations in TypeScript Codebase
## Background
The upgradle project uses TypeScript + ESLint for code quality enforcement. Currently, the `src/` directory contains multiple ESLint rule violations that need to be addressed:
- `no-unused-vars`
- `@typescript-eslint/no-explicit-any`
- `eqeqeq` (strict equality)
## Objective
Scan and fix all lint errors in `.ts` files under the `src/` directory to ensure the codebase passes linting checks.
## Scope
- **Files to modify**: `src/**/*.ts` (all TypeScript files in src directory)
- **Files to preserve**: Do NOT modify any test files
- **Repo requirements**: Ensure a `package.json` exists with `lint` and `test` scripts and a `src/` directory containing one or more `.ts` files so the test harness can run.
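For reference, a minimal `package.json` scripts fragment satisfying the harness requirement might look like this — the script bodies are assumptions, so keep whatever the repo's existing ESLint setup uses:

```json
{
  "scripts": {
    "lint": "eslint src/ --ext .ts",
    "test": "jest"
  }
}
```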
## Requirements
- Fix all ESLint error-level violations
- Maintain existing functionality (all existing tests must continue to pass)
- Follow TypeScript best practices
- Replace `any` types with proper type definitions where possible
- Use strict equality (`===`) instead of loose equality (`==`)
- Remove or properly use unused variables
## Acceptance Criteria
- `npm run lint` exits with code 0 (no error-level reports)
- No new lint warnings introduced
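As a sketch of what fixing all three rules can look like in one function (the names `Item` and `findById` are invented for illustration, not taken from the upgradle codebase):

```typescript
// Before (violates all three rules, shown as a comment):
//   function find(items: any[], id) { const unused = 1; return items.filter(i => i.id == id)[0]; }

interface Item {
  id: number;
}

// After: `any` replaced with a generic bound (no-explicit-any), the unused
// variable removed (no-unused-vars), and strict equality used (eqeqeq).
export function findById<T extends Item>(items: T[], id: number): T | undefined {
  return items.find((item) => item.id === id);
}
```

The same pattern applies throughout: tighten the type, delete or use the dead variable, and switch `==`/`!=` to `===`/`!==`.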
| ---
name: fix
description: Use when you have lint errors, formatting issues, or before committing code to ensure it passes CI.
---
# Fix Lint and Formatting
## Instructions
1. Run `yarn prettier` to fix formatting
2. Run `yarn linc` to check for remaining lint issues
3. Report any remaining manual fixes needed
## Common Mistakes
- **Running prettier on wrong files** - `yarn prettier` only formats changed files
- **Ignoring linc errors** - These will fail CI, fix them before committing
| """
Test for 'fix' skill — React Code Fix & Linter
Validates that the Agent scanned and fixed all ESLint violations in the upgradle
TypeScript codebase so that `npm run lint` passes cleanly.
"""
import os
import subprocess
import glob
import re
import pytest
from _dependency_utils import ensure_npm_dependencies
@pytest.fixture(scope="module", autouse=True)
def _ensure_repo_dependencies():
ensure_npm_dependencies(TestFix.REPO_DIR)
class TestFix:
"""Verify ESLint violations in upgradle src/ have been fixed."""
REPO_DIR = "/workspace/upgradle"
# ------------------------------------------------------------------
# L1: basic file / project integrity
# ------------------------------------------------------------------
def test_src_directory_exists(self):
"""src/ directory must exist in the repository."""
assert os.path.isdir(
os.path.join(self.REPO_DIR, "src")
), "src/ directory is missing"
def test_package_json_exists(self):
"""package.json must exist at repo root."""
assert os.path.isfile(
os.path.join(self.REPO_DIR, "package.json")
), "package.json is missing"
def test_ts_files_exist_in_src(self):
"""At least one .ts file must exist under src/."""
ts_files = glob.glob(
os.path.join(self.REPO_DIR, "src", "**", "*.ts"), recursive=True
)
assert len(ts_files) >= 1, "No .ts files found under src/"
# ------------------------------------------------------------------
# L2: functional lint verification
# ------------------------------------------------------------------
def test_npm_run_lint_exit_code(self):
"""npm run lint must exit with code 0 (no error-level reports)."""
result = subprocess.run(
["npm", "run", "lint"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=300,
)
assert result.returncode == 0, (
f"npm run lint failed (rc={result.returncode}):\n"
f"stdout={result.stdout[-2000:]}\nstderr={result.stderr[-2000:]}"
)
def test_no_eslint_errors_in_stdout(self):
"""Lint output must not contain error-level reports."""
result = subprocess.run(
["npm", "run", "lint"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=300,
)
combined = result.stdout + result.stderr
# ESLint outputs "X errors" when there are error-level problems
match = re.search(r"(\d+)\s+error", combined)
if match:
error_count = int(match.group(1))
assert (
error_count == 0
), f"ESLint reported {error_count} error(s):\n{combined[-2000:]}"
def test_no_unused_vars_in_src(self):
"""No @typescript-eslint/no-unused-vars violations should remain."""
result = subprocess.run(
["npx", "eslint", "src/", "--format", "json"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=300,
)
import json
try:
data = json.loads(result.stdout)
except json.JSONDecodeError:
pytest.skip("ESLint JSON output could not be parsed")
for file_report in data:
for msg in file_report.get("messages", []):
if msg.get("severity", 0) >= 2:
assert "no-unused-vars" not in msg.get(
"ruleId", ""
), f"no-unused-vars error in {file_report['filePath']}:{msg['line']}"
def test_no_explicit_any_in_src(self):
"""No @typescript-eslint/no-explicit-any errors should remain."""
result = subprocess.run(
["npx", "eslint", "src/", "--format", "json"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=300,
)
import json
try:
data = json.loads(result.stdout)
except json.JSONDecodeError:
pytest.skip("ESLint JSON output could not be parsed")
for file_report in data:
for msg in file_report.get("messages", []):
if msg.get("severity", 0) >= 2:
assert "no-explicit-any" not in msg.get(
"ruleId", ""
), f"no-explicit-any error in {file_report['filePath']}:{msg['line']}"
def test_no_eqeqeq_violations_in_src(self):
"""No eqeqeq (loose equality) errors should remain."""
result = subprocess.run(
["npx", "eslint", "src/", "--format", "json"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=300,
)
import json
try:
data = json.loads(result.stdout)
except json.JSONDecodeError:
pytest.skip("ESLint JSON output could not be parsed")
for file_report in data:
for msg in file_report.get("messages", []):
if msg.get("severity", 0) >= 2:
assert "eqeqeq" not in msg.get(
"ruleId", ""
), f"eqeqeq error in {file_report['filePath']}:{msg['line']}"
def test_no_loose_equality_operators(self):
"""Source files should not contain == or != (use === / !==)."""
ts_files = glob.glob(
os.path.join(self.REPO_DIR, "src", "**", "*.ts"), recursive=True
)
for fpath in ts_files:
with open(fpath, "r", encoding="utf-8", errors="replace") as f:
for lineno, line in enumerate(f, 1):
stripped = line.strip()
if stripped.startswith("//") or stripped.startswith("*"):
continue
# Match == or != but not === or !== (also exclude a leading `!` so `!==` isn't flagged as `==`)
if re.search(r"(?<![=!])==(?!=)", stripped) or re.search(
r"(?<!!)!=(?!=)", stripped
):
pytest.fail(
f"Loose equality in {os.path.relpath(fpath, self.REPO_DIR)}:{lineno}: {stripped[:120]}"
)
def test_no_new_lint_warnings_introduced(self):
"""npm run lint should produce no new warning-level reports beyond baseline."""
result = subprocess.run(
["npm", "run", "lint"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=300,
)
combined = result.stdout + result.stderr
match = re.search(r"(\d+)\s+warning", combined)
if match:
warning_count = int(match.group(1))
# Acceptance criteria: no *new* warnings. Allow 0.
assert (
warning_count == 0
), f"ESLint reported {warning_count} warning(s); task requires 0 new warnings."
def test_test_files_not_modified(self):
"""Test files must not have been modified by the Agent."""
result = subprocess.run(
["git", "diff", "--name-only", "HEAD"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
changed_files = result.stdout.strip().splitlines()
test_changes = [
f for f in changed_files if f.startswith("test") or "/test" in f
]
assert (
len(test_changes) == 0
), f"Test files were modified but should be preserved: {test_changes}"
def test_existing_tests_still_pass(self):
"""All existing tests in the project must continue to pass."""
pkg_json = os.path.join(self.REPO_DIR, "package.json")
import json
with open(pkg_json) as f:
pkg = json.load(f)
if "test" not in pkg.get("scripts", {}):
pytest.skip("No test script defined in package.json")
result = subprocess.run(
["npm", "test"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=300,
)
assert (
result.returncode == 0
), f"Existing tests failed (rc={result.returncode}):\n{result.stderr[-2000:]}"
| https://github.com/michaelasper/upgradle | 5f292188e9b427a96d3573b29e5677e4cdce58ea | zhangyiiiiii/swe-skills-bench-python |
tdd-workflow | TDD Workflow | See task file for detailed mission requirements. | feature | # Task: Implement Smart Coupon Calculator
## Required File Paths (Agent must only modify/create under these)
- MUST modify: `src/calculator.py` — implement or update `SmartCouponCalculator` here.
## Background
We need a flexible discount calculation system for our e-commerce platform that can handle multiple promotion strategies simultaneously.
## Objective
Implement a `SmartCouponCalculator` class in `src/calculator.py` that supports the following discount strategies:
### Discount Rules
1. **Progressive Discount**
- $10 off when order total ≥ $100
- Additional $15 off (total $25 off) when order total ≥ $200
2. **Category Discount**
- 10% off for items in specified promotional categories
3. **User Tier Discount**
- VIP members: 5% off final price
- SVIP members: 10% off final price
When multiple discounts apply, they should be stacked optimally to maximize customer savings.
## Implementation Requirements
### Core Functionality
- Calculate final price with all applicable discounts
- Support user tier levels: regular, VIP, SVIP
- Handle category-specific discounts
- Apply progressive discounts based on order total
- Implement optimal discount stacking logic
### Edge Cases to Handle
- Zero or negative amounts
- Empty shopping carts
- Invalid user tier values
- Items without category information
## Acceptance Criteria
- Calculator correctly applies all three discount types
- Discount stacking produces accurate final prices for complex scenarios
- Edge cases are handled gracefully without errors
- Code is maintainable and follows Python best practices
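The rules above can be sketched in a few lines. This is a minimal illustration, not the required implementation; in particular, the stacking order (per-item category discount first, progressive discount evaluated on the discounted subtotal, tier percentage applied last to the final price) is an assumption consistent with "stacked optimally to maximize customer savings":

```python
class SmartCouponCalculator:
    """Sketch implementation; the discount stacking order is an assumption:
    category (10%) -> progressive ($10/$25) -> user tier (5%/10%)."""

    TIER_RATES = {"regular": 0.0, "vip": 0.05, "svip": 0.10}

    def calculate(self, items, user_tier="regular", promo_categories=None):
        promo = set(promo_categories or [])
        subtotal = 0.0
        for item in items or []:
            # Guard against zero/negative amounts and missing fields.
            line = max(item.get("price", 0), 0) * max(item.get("quantity", 0), 0)
            if item.get("category") in promo:
                line *= 0.9  # 10% category discount on promotional items
            subtotal += line
        # Progressive discount, evaluated on the category-discounted subtotal.
        if subtotal >= 200:
            subtotal -= 25
        elif subtotal >= 100:
            subtotal -= 10
        # Invalid tier values fall back to no tier discount.
        rate = self.TIER_RATES.get(str(user_tier).lower(), 0.0)
        return round(max(subtotal, 0.0) * (1 - rate), 2)
```

For example, a $200 cart gets the $25 progressive discount ($175 for a regular user), while an $80 promotional item drops to $72 and, being under $100 after the category discount, earns no progressive discount.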
| ---
name: tdd-workflow
description: Use this skill when writing new features, fixing bugs, or refactoring code. Enforces test-driven development with 80%+ coverage including unit, integration, and E2E tests.
---
# Test-Driven Development Workflow
This skill ensures that all code development follows TDD principles with complete test coverage.
## When to Activate
- Writing new features or functional code
- Fixing bugs or issues
- Refactoring existing code
- Adding API endpoints
- Building new components
## Core Principles
### 1. Tests Before Code
Always write tests first, then implement the code to make them pass.
### 2. Coverage Requirements
- Minimum 80% coverage (unit + integration + E2E)
- Cover all edge cases
- Test error scenarios
- Verify boundary conditions
### 3. Test Types
#### Unit Tests
- Individual functions and utilities
- Component logic
- Pure functions
- Helpers and tools
#### Integration Tests
- API endpoints
- Database operations
- Service interactions
- External API calls
#### E2E Tests (Playwright)
- Critical user flows
- Complete workflows
- Browser automation
- UI interactions
## TDD Workflow Steps
### Step 1: Write the User Journey
```
As a [role], I want to [action], so that [benefit]
Example:
As a user, I want to search markets semantically,
so that I can find relevant markets even without exact keywords.
```
### Step 2: Generate Test Cases
Create comprehensive test cases for each user journey:
```typescript
describe('Semantic Search', () => {
  it('returns relevant markets for query', async () => {
    // Test implementation
  })
  it('handles empty query gracefully', async () => {
    // Test the edge case
  })
  it('falls back to substring search when Redis unavailable', async () => {
    // Test the fallback behavior
  })
  it('sorts results by similarity score', async () => {
    // Test the sorting logic
  })
})
```
### Step 3: Run the Tests (They Should Fail)
```bash
npm test
# Tests should fail - we haven't implemented anything yet
```
### Step 4: Implement the Code
Write the minimum amount of code to make the tests pass:
```typescript
// Implementation guided by the tests
export async function searchMarkets(query: string) {
  // Implementation goes here
}
```
### Step 5: Run the Tests Again
```bash
npm test
# Tests should now pass
```
### Step 6: Refactor
Improve code quality while keeping the tests green:
- Remove duplication
- Improve naming
- Optimize performance
- Enhance readability
### Step 7: Verify Coverage
```bash
npm run test:coverage
# Verify that 80%+ coverage is reached
```
## Test Patterns
### Unit Test Pattern (Jest/Vitest)
```typescript
import { render, screen, fireEvent } from '@testing-library/react'
import { Button } from './Button'
describe('Button Component', () => {
it('renders with correct text', () => {
render(<Button>Click me</Button>)
expect(screen.getByText('Click me')).toBeInTheDocument()
})
it('calls onClick when clicked', () => {
const handleClick = jest.fn()
render(<Button onClick={handleClick}>Click</Button>)
fireEvent.click(screen.getByRole('button'))
expect(handleClick).toHaveBeenCalledTimes(1)
})
it('is disabled when disabled prop is true', () => {
render(<Button disabled>Click</Button>)
expect(screen.getByRole('button')).toBeDisabled()
})
})
```
### API Integration Test Pattern
```typescript
import { NextRequest } from 'next/server'
import { GET } from './route'
describe('GET /api/markets', () => {
it('returns markets successfully', async () => {
const request = new NextRequest('http://localhost/api/markets')
const response = await GET(request)
const data = await response.json()
expect(response.status).toBe(200)
expect(data.success).toBe(true)
expect(Array.isArray(data.data)).toBe(true)
})
it('validates query parameters', async () => {
const request = new NextRequest('http://localhost/api/markets?limit=invalid')
const response = await GET(request)
expect(response.status).toBe(400)
})
it('handles database errors gracefully', async () => {
// Mock a database failure
const request = new NextRequest('http://localhost/api/markets')
// Test the error handling
})
})
```
### E2E Test Pattern (Playwright)
```typescript
import { test, expect } from '@playwright/test'
test('user can search and filter markets', async ({ page }) => {
  // Navigate to the markets page
  await page.goto('/')
  await page.click('a[href="/markets"]')
  // Verify the page loaded
  await expect(page.locator('h1')).toContainText('Markets')
  // Search markets
  await page.fill('input[placeholder="Search markets"]', 'election')
  // Wait for debounce and results
  await page.waitForTimeout(600)
  // Verify search results are displayed
  const results = page.locator('[data-testid="market-card"]')
  await expect(results).toHaveCount(5, { timeout: 5000 })
  // Verify results contain the search term
  const firstResult = results.first()
  await expect(firstResult).toContainText('election', { ignoreCase: true })
  // Filter by status
  await page.click('button:has-text("Active")')
  // Verify the filtered results
  await expect(results).toHaveCount(3)
})
test('user can create a new market', async ({ page }) => {
  // Log in first
  await page.goto('/creator-dashboard')
  // Fill in the market creation form
  await page.fill('input[name="name"]', 'Test Market')
  await page.fill('textarea[name="description"]', 'Test description')
  await page.fill('input[name="endDate"]', '2025-12-31')
  // Submit the form
  await page.click('button[type="submit"]')
  // Verify the success message
  await expect(page.locator('text=Market created successfully')).toBeVisible()
  // Verify the redirect to the market page
  await expect(page).toHaveURL(/\/markets\/test-market/)
})
```
## Test File Organization
```
src/
├── components/
│ ├── Button/
│ │ ├── Button.tsx
│ │ ├── Button.test.tsx # Unit tests
│ │ └── Button.stories.tsx # Storybook
│ └── MarketCard/
│ ├── MarketCard.tsx
│ └── MarketCard.test.tsx
├── app/
│ └── api/
│ └── markets/
│ ├── route.ts
│ └── route.test.ts # Integration tests
└── e2e/
├── markets.spec.ts # E2E tests
├── trading.spec.ts
└── auth.spec.ts
```
## Mocking External Services
### Supabase Mock
```typescript
jest.mock('@/lib/supabase', () => ({
supabase: {
from: jest.fn(() => ({
select: jest.fn(() => ({
eq: jest.fn(() => Promise.resolve({
data: [{ id: 1, name: 'Test Market' }],
error: null
}))
}))
}))
}
}))
```
### Redis Mock
```typescript
jest.mock('@/lib/redis', () => ({
searchMarketsByVector: jest.fn(() => Promise.resolve([
{ slug: 'test-market', similarity_score: 0.95 }
])),
checkRedisHealth: jest.fn(() => Promise.resolve({ connected: true }))
}))
```
### OpenAI Mock
```typescript
jest.mock('@/lib/openai', () => ({
generateEmbedding: jest.fn(() => Promise.resolve(
new Array(1536).fill(0.1) // Mock a 1536-dimensional embedding vector
))
}))
```
## Coverage Verification
### Run a Coverage Report
```bash
npm run test:coverage
```
### Coverage Thresholds
```json
{
  "jest": {
    "coverageThreshold": {
      "global": {
        "branches": 80,
        "functions": 80,
        "lines": 80,
        "statements": 80
      }
    }
  }
}
```
## Common Testing Mistakes to Avoid
### ❌ Wrong: Testing Implementation Details
```typescript
// Don't test internal state
expect(component.state.count).toBe(5)
```
### ✅ Right: Testing User-Visible Behavior
```typescript
// Test what the user sees
expect(screen.getByText('Count: 5')).toBeInTheDocument()
```
### ❌ Wrong: Brittle Selectors
```typescript
// Breaks easily
await page.click('.css-class-xyz')
```
### ✅ Right: Semantic Selectors
```typescript
// Resilient to change
await page.click('button:has-text("Submit")')
await page.click('[data-testid="submit-button"]')
```
### ❌ Wrong: No Test Isolation
```typescript
// Tests depend on each other
test('creates user', () => { /* ... */ })
test('updates same user', () => { /* depends on the previous test */ })
```
### ✅ Right: Independent Tests
```typescript
// Each test sets up its own data
test('creates user', () => {
  const user = createTestUser()
  // Test logic
})
test('updates user', () => {
  const user = createTestUser()
  // Update logic
})
```
## Continuous Testing
### Watch Mode During Development
```bash
npm test -- --watch
# Re-run tests automatically on file changes
```
### Pre-Commit Hook
```bash
# Run before every commit
npm test && npm run lint
```
### CI/CD Integration
```yaml
# GitHub Actions
- name: Run Tests
run: npm test -- --coverage
- name: Upload Coverage
uses: codecov/codecov-action@v3
```
## Best Practices
1. **Write tests first** - Always TDD
2. **One assertion per test** - Focus on a single behavior
3. **Descriptive test names** - Explain what is being tested
4. **Arrange-Act-Assert** - Clear test structure
5. **Mock external dependencies** - Isolate unit tests
6. **Test edge cases** - Null, undefined, empty, and large values
7. **Test error paths** - Not just the happy path
8. **Keep tests fast** - < 50ms per unit test
9. **Clean up after tests** - No side effects
10. **Review coverage reports** - Identify gaps
## Success Metrics
- 80%+ code coverage achieved
- All tests passing (green)
- No skipped or disabled tests
- Fast test runs (unit tests < 30s)
- E2E tests cover critical user flows
- Tests catch bugs before production
---
**Remember**: Tests are not optional. They are the safety net that enables confident refactoring, fast development, and production reliability.
| """
Test for 'tdd-workflow' skill — Smart Coupon Calculator
Validates that the Agent implemented SmartCouponCalculator with progressive,
category, and user-tier discounts in src/calculator.py.
"""
import os
import sys
import importlib
import subprocess
import pytest
class TestTddWorkflow:
"""Verify SmartCouponCalculator implementation correctness."""
REPO_DIR = "/workspace/python"
@classmethod
def setup_class(cls):
"""Add repo to sys.path so we can import src.calculator."""
if cls.REPO_DIR not in sys.path:
sys.path.insert(0, cls.REPO_DIR)
# ------------------------------------------------------------------
# L1: file & syntax checks
# ------------------------------------------------------------------
def test_calculator_file_exists(self):
"""src/calculator.py must exist."""
fpath = os.path.join(self.REPO_DIR, "src", "calculator.py")
assert os.path.isfile(fpath), "src/calculator.py is missing"
def test_calculator_compiles(self):
"""src/calculator.py must compile without syntax errors."""
result = subprocess.run(
["python", "-m", "py_compile", "src/calculator.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
# ------------------------------------------------------------------
# L2: functional verification — import & instantiate
# ------------------------------------------------------------------
def _get_calculator(self):
"""Helper: import and return a fresh SmartCouponCalculator instance."""
mod = importlib.import_module("src.calculator")
importlib.reload(mod)
cls = getattr(mod, "SmartCouponCalculator", None)
assert (
cls is not None
), "SmartCouponCalculator class not found in src/calculator.py"
return cls()
def test_class_exists(self):
"""SmartCouponCalculator class must be importable."""
calc = self._get_calculator()
assert calc is not None
# --- Progressive Discount ---
def test_progressive_no_discount_below_100(self):
"""Order < $100 should get no progressive discount."""
calc = self._get_calculator()
items = [{"name": "A", "price": 50, "quantity": 1}]
result = calc.calculate(items=items, user_tier="regular")
# $50 < $100, so no discounts apply and the total equals the item price
assert isinstance(result, (int, float)), "calculate() must return a number"
assert result == pytest.approx(50, abs=0.01), f"Expected 50, got {result}"
def test_progressive_10_off_at_100(self):
"""Order = $100 should get $10 off progressive discount."""
calc = self._get_calculator()
items = [{"name": "A", "price": 100, "quantity": 1}]
result = calc.calculate(items=items, user_tier="regular")
assert result == pytest.approx(90, abs=0.01), f"Expected 90, got {result}"
def test_progressive_25_off_at_200(self):
"""Order = $200 should get $25 off progressive discount."""
calc = self._get_calculator()
items = [{"name": "A", "price": 200, "quantity": 1}]
result = calc.calculate(items=items, user_tier="regular")
assert result == pytest.approx(175, abs=0.01), f"Expected 175, got {result}"
# --- Category Discount ---
def test_category_discount_10_percent(self):
"""Items in promotional categories should get 10% off."""
calc = self._get_calculator()
items = [
{
"name": "Promo item",
"price": 80,
"quantity": 1,
"category": "electronics",
}
]
result = calc.calculate(
items=items,
user_tier="regular",
promo_categories=["electronics"],
)
# 80 * 0.9 = 72, below 100 so no progressive
assert result == pytest.approx(72, abs=0.01), f"Expected 72, got {result}"
def test_category_discount_only_promo(self):
"""Non-promo category items should not get category discount."""
calc = self._get_calculator()
items = [
{"name": "Promo", "price": 50, "quantity": 1, "category": "electronics"},
{"name": "Normal", "price": 50, "quantity": 1, "category": "food"},
]
result = calc.calculate(
items=items,
user_tier="regular",
promo_categories=["electronics"],
)
# electronics: 50*0.9=45, food: 50, total 95 < 100 no progressive
assert result == pytest.approx(95, abs=0.5), f"Expected ~95, got {result}"
# --- User Tier Discount ---
def test_vip_5_percent_off(self):
"""VIP members get 5% off final price."""
calc = self._get_calculator()
items = [{"name": "A", "price": 50, "quantity": 1}]
result = calc.calculate(items=items, user_tier="VIP")
assert result == pytest.approx(47.5, abs=0.01), f"Expected 47.5, got {result}"
def test_svip_10_percent_off(self):
"""SVIP members get 10% off final price."""
calc = self._get_calculator()
items = [{"name": "A", "price": 50, "quantity": 1}]
result = calc.calculate(items=items, user_tier="SVIP")
assert result == pytest.approx(45, abs=0.01), f"Expected 45, got {result}"
# --- Stacking ---
def test_all_discounts_stacked(self):
"""Progressive + category + SVIP should stack optimally."""
calc = self._get_calculator()
items = [
{"name": "Gadget", "price": 120, "quantity": 1, "category": "electronics"},
{"name": "Book", "price": 100, "quantity": 1, "category": "books"},
]
result = calc.calculate(
items=items,
user_tier="SVIP",
promo_categories=["electronics"],
)
# electronics: 120*0.9=108, books: 100, subtotal=208
# Progressive: 208 >= 200 → -25 → 183
# SVIP: 183*0.9 = 164.7
assert isinstance(result, (int, float))
assert 140 <= result <= 190, f"Stacked discount result {result} looks wrong"
# --- Edge Cases ---
def test_zero_amount(self):
"""Zero-priced items should not cause errors."""
calc = self._get_calculator()
items = [{"name": "Free", "price": 0, "quantity": 1}]
result = calc.calculate(items=items, user_tier="regular")
assert result == pytest.approx(0, abs=0.01)
def test_empty_cart(self):
"""Empty shopping cart should return 0."""
calc = self._get_calculator()
result = calc.calculate(items=[], user_tier="regular")
assert result == pytest.approx(0, abs=0.01)
def test_invalid_tier_handled(self):
"""Invalid user tier should fallback to regular pricing or raise ValueError."""
calc = self._get_calculator()
items = [{"name": "A", "price": 50, "quantity": 1}]
try:
result = calc.calculate(items=items, user_tier="UNKNOWN")
# If no error, should be treated as regular (no tier discount)
assert result == pytest.approx(50, abs=0.01)
except (ValueError, KeyError):
pass # Acceptable to raise on invalid tier
| https://github.com/tdd-starters/python | zhangyiiiiii/swe-skills-bench-python | |
security-review | Security Review (zh-TW) | See task file for detailed mission requirements. | feature | # Task: Implement Secure Export API Endpoints for BabyBuddy
## Background
We need to add export endpoints to BabyBuddy's REST API that allow users to export feeding and sleep records. The implementation must enforce proper authentication and authorization to ensure users can only access their own children's data.
## Files to Modify
- `api/serializers.py` - Add FeedingExportSerializer and SleepExportSerializer
- `api/views.py` - Add ExportViewSet
- `api/urls.py` - Register export routes
- `tests/test_api.py` - Add security test cases
## Requirements
### API Endpoint
- `GET /api/child/{child_id}/export/?type=feeding|sleep`
- Returns last 30 days of records in JSON format
### Security Requirements
- Use Django Permission to validate authenticated users
- Users can ONLY access their own children's data
- Proper HTTP status codes for different scenarios:
- Authenticated user accessing own child's data → 200 OK
- Unauthenticated request → 401 Unauthorized
- User accessing another user's child data → 403 Forbidden
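The status-code matrix above can be captured in a small, framework-agnostic helper (a sketch with hypothetical names, not BabyBuddy's actual code):

```python
def export_status(authenticated: bool, owns_child: bool) -> int:
    """Map the request context to the HTTP status the export endpoint should return."""
    if not authenticated:
        return 401  # unauthenticated request -> Unauthorized
    if not owns_child:
        return 403  # authenticated, but not this child's owner -> Forbidden
    return 200      # owner accessing their own child's data -> OK
```

The real view would derive `owns_child` from the child's owner relation and `request.user`.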
### Serializers
- **FeedingExportSerializer**: id, start, end, duration, type, method, amount
- **SleepExportSerializer**: id, start, end, duration, quality
## Acceptance Criteria
- `python manage.py test babybuddy.tests.test_api -v 2` passes with all tests successful
- Export endpoint returns correct JSON structure
- Security checks properly implemented and tested
| ---
name: security-review
description: Use this skill when adding authentication, handling user input, working with secrets, creating API endpoints, or implementing payment/sensitive features. Provides comprehensive security checklist and patterns.
---
# Security Review Skill
This skill ensures all code follows security best practices and identifies potential vulnerabilities.
## When to Activate
- Implementing authentication or authorization
- Handling user input or file uploads
- Creating new API endpoints
- Working with secrets or credentials
- Implementing payment features
- Storing or transmitting sensitive data
- Integrating third-party APIs
## Security Checklist
### 1. Secret Management
#### ❌ Never do this
```typescript
const apiKey = "sk-proj-xxxxx" // hard-coded secret
const dbPassword = "password123" // in source code
```
#### ✅ Always do this
```typescript
const apiKey = process.env.OPENAI_API_KEY
const dbUrl = process.env.DATABASE_URL
// Verify the secret exists
if (!apiKey) {
throw new Error('OPENAI_API_KEY not configured')
}
```
#### Verification steps
- [ ] No hard-coded API keys, tokens, or passwords
- [ ] All secrets live in environment variables
- [ ] `.env.local` is in .gitignore
- [ ] No secrets in git history
- [ ] Production secrets stored in the hosting platform (Vercel, Railway)
### 2. Input Validation
#### Always validate user input
```typescript
import { z } from 'zod'
// Define the validation schema
const CreateUserSchema = z.object({
email: z.string().email(),
name: z.string().min(1).max(100),
age: z.number().int().min(0).max(150)
})
// Validate before processing
export async function createUser(input: unknown) {
try {
const validated = CreateUserSchema.parse(input)
return await db.users.create(validated)
} catch (error) {
if (error instanceof z.ZodError) {
return { success: false, errors: error.errors }
}
throw error
}
}
```
#### File upload validation
```typescript
function validateFileUpload(file: File) {
// Size check (max 5MB)
const maxSize = 5 * 1024 * 1024
if (file.size > maxSize) {
throw new Error('File too large (max 5MB)')
}
// MIME type check
const allowedTypes = ['image/jpeg', 'image/png', 'image/gif']
if (!allowedTypes.includes(file.type)) {
throw new Error('Invalid file type')
}
// Extension check
const allowedExtensions = ['.jpg', '.jpeg', '.png', '.gif']
const extension = file.name.toLowerCase().match(/\.[^.]+$/)?.[0]
if (!extension || !allowedExtensions.includes(extension)) {
throw new Error('Invalid file extension')
}
return true
}
```
#### Verification steps
- [ ] All user input validated against a schema
- [ ] File uploads restricted (size, type, extension)
- [ ] User input never used directly in queries
- [ ] Allowlist validation (not a blocklist)
- [ ] Error messages do not leak sensitive information
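The allowlist principle is language-agnostic; this Python sketch mirrors the file-upload check above (names and limits are illustrative):

```python
# Allowlist, not a blocklist: anything not explicitly permitted is rejected
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}

def is_allowed_upload(filename: str, size: int, max_size: int = 5 * 1024 * 1024) -> bool:
    """Accept a file only if it is small enough AND its extension is on the allowlist."""
    if size > max_size:
        return False
    dot = filename.rfind(".")
    if dot == -1:
        return False  # no extension at all -> reject
    return filename[dot:].lower() in ALLOWED_EXTENSIONS
```

A production check should also verify MIME type and file content, as the TypeScript example does.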
### 3. SQL Injection Prevention
#### ❌ Never concatenate SQL
```typescript
// Dangerous - SQL injection vulnerability
const query = `SELECT * FROM users WHERE email = '${userEmail}'`
await db.query(query)
```
#### ✅ Always use parameterized queries
```typescript
// Safe - parameterized query
const { data } = await supabase
.from('users')
.select('*')
.eq('email', userEmail)
// Or with raw SQL
await db.query(
'SELECT * FROM users WHERE email = $1',
[userEmail]
)
```
#### Verification steps
- [ ] All database queries use parameterized queries
- [ ] No string concatenation in SQL
- [ ] ORM/query builder used correctly
- [ ] Supabase queries properly sanitized
### 4. Authentication & Authorization
#### JWT token handling
```typescript
// ❌ Wrong: localStorage (vulnerable to XSS)
localStorage.setItem('token', token)
// ✅ Correct: httpOnly cookies
res.setHeader('Set-Cookie',
`token=${token}; HttpOnly; Secure; SameSite=Strict; Max-Age=3600`)
```
#### Authorization checks
```typescript
export async function deleteUser(userId: string, requesterId: string) {
// Always verify authorization first
const requester = await db.users.findUnique({
where: { id: requesterId }
})
if (requester.role !== 'admin') {
return NextResponse.json(
{ error: 'Unauthorized' },
{ status: 403 }
)
}
// Proceed with deletion
await db.users.delete({ where: { id: userId } })
}
```
#### Row Level Security (Supabase)
```sql
-- Enable RLS on all tables
ALTER TABLE users ENABLE ROW LEVEL SECURITY;
-- Users can only view their own data
CREATE POLICY "Users view own data"
ON users FOR SELECT
USING (auth.uid() = id);
-- Users can only update their own data
CREATE POLICY "Users update own data"
ON users FOR UPDATE
USING (auth.uid() = id);
```
#### Verification steps
- [ ] Tokens stored in httpOnly cookies (not localStorage)
- [ ] Authorization checked before sensitive operations
- [ ] Row Level Security enabled in Supabase
- [ ] Role-based access control implemented
- [ ] Session management is secure
### 5. XSS Prevention
#### Sanitize HTML
```typescript
import DOMPurify from 'isomorphic-dompurify'
// Always sanitize user-supplied HTML
function renderUserContent(html: string) {
const clean = DOMPurify.sanitize(html, {
ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'p'],
ALLOWED_ATTR: []
})
return <div dangerouslySetInnerHTML={{ __html: clean }} />
}
```
#### Content Security Policy
```typescript
// next.config.js
const securityHeaders = [
{
key: 'Content-Security-Policy',
value: `
default-src 'self';
script-src 'self' 'unsafe-eval' 'unsafe-inline';
style-src 'self' 'unsafe-inline';
img-src 'self' data: https:;
font-src 'self';
connect-src 'self' https://api.example.com;
`.replace(/\s{2,}/g, ' ').trim()
}
]
```
#### Verification steps
- [ ] User-supplied HTML is sanitized
- [ ] CSP headers configured
- [ ] No unvalidated dynamic content rendering
- [ ] React's built-in XSS protection is used
### 6. CSRF Protection
#### CSRF Tokens
```typescript
import { csrf } from '@/lib/csrf'
export async function POST(request: Request) {
const token = request.headers.get('X-CSRF-Token')
if (!csrf.verify(token)) {
return NextResponse.json(
{ error: 'Invalid CSRF token' },
{ status: 403 }
)
}
// Handle the request
}
```
#### SameSite Cookies
```typescript
res.setHeader('Set-Cookie',
`session=${sessionId}; HttpOnly; Secure; SameSite=Strict`)
```
#### Verification steps
- [ ] State-changing operations require CSRF tokens
- [ ] All cookies set SameSite=Strict
- [ ] Double-submit cookie pattern implemented
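The double-submit cookie pattern reduces to a constant-time comparison of the cookie value against the token echoed in the request header. A minimal Python sketch (hypothetical helper, not tied to any framework):

```python
import hmac

def csrf_double_submit_ok(cookie_token: str, header_token: str) -> bool:
    """Double-submit check: the header token must match the CSRF cookie.

    hmac.compare_digest runs in constant time, so an attacker cannot
    learn the token one character at a time via response timing.
    """
    if not cookie_token or not header_token:
        return False
    return hmac.compare_digest(cookie_token, header_token)
```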
### 7. Rate Limiting
#### API rate limiting
```typescript
import rateLimit from 'express-rate-limit'
const limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // 100 requests per window
message: 'Too many requests'
})
// Apply to routes
app.use('/api/', limiter)
```
#### Expensive operations
```typescript
// Aggressive rate limiting for search
const searchLimiter = rateLimit({
windowMs: 60 * 1000, // 1 minute
max: 10, // 10 requests per minute
message: 'Too many search requests'
})
app.use('/api/search', searchLimiter)
```
#### Verification steps
- [ ] Rate limiting on all API endpoints
- [ ] Stricter limits on expensive operations
- [ ] IP-based rate limiting
- [ ] User-based rate limiting (authenticated)
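The fixed-window strategy used above can be sketched in a few lines of Python (an illustrative model, not the `express-rate-limit` implementation):

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Allow at most `limit` requests per `window_s` seconds per key (e.g. an IP)."""

    def __init__(self, limit: int, window_s: int):
        self.limit = limit
        self.window_s = window_s
        self.counts = defaultdict(int)  # (key, window index) -> request count

    def allow(self, key: str, now=None) -> bool:
        # Requests falling into the same time window share one counter
        window = int((now if now is not None else time.time()) // self.window_s)
        bucket = (key, window)
        if self.counts[bucket] >= self.limit:
            return False  # caller should respond 429 Too Many Requests
        self.counts[bucket] += 1
        return True
```

A production limiter would store counters in Redis so the limit holds across server instances.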
### 8. Sensitive Data Exposure
#### Logging
```typescript
// ❌ Wrong: logging sensitive data
console.log('User login:', { email, password })
console.log('Payment:', { cardNumber, cvv })
// ✅ Correct: mask sensitive data
console.log('User login:', { email, userId })
console.log('Payment:', { last4: card.last4, userId })
```
#### Error messages
```typescript
// ❌ Wrong: exposing internal details
catch (error) {
return NextResponse.json(
{ error: error.message, stack: error.stack },
{ status: 500 }
)
}
// ✅ Correct: generic error message
catch (error) {
console.error('Internal error:', error)
return NextResponse.json(
{ error: 'An error occurred. Please try again.' },
{ status: 500 }
)
}
```
#### Verification steps
- [ ] No passwords, tokens, or secrets in logs
- [ ] Users receive generic error messages
- [ ] Detailed errors appear only in server logs
- [ ] Stack traces never exposed to users
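Masking before logging can be centralized in a small redaction helper. A Python sketch, under the assumption that log payloads are plain dicts (the key list is illustrative):

```python
# Keys whose values must never reach the logs (extend as needed)
SENSITIVE_KEYS = {"password", "token", "secret", "cvv", "cardNumber"}

def redact(payload: dict) -> dict:
    """Return a copy safe for logging: sensitive values masked, others kept."""
    return {k: ("***" if k in SENSITIVE_KEYS else v) for k, v in payload.items()}
```

Routing every structured log call through such a helper is easier to audit than masking at each call site.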
### 9. Blockchain Security (Solana)
#### Wallet verification
```typescript
// @solana/web3.js does not export a `verify` helper; use tweetnacl, which
// Solana wallets use for ed25519 signatures (keys and signatures are base58)
import nacl from 'tweetnacl'
import bs58 from 'bs58'
async function verifyWalletOwnership(
  publicKey: string,
  signature: string,
  message: string
) {
  try {
    return nacl.sign.detached.verify(
      new TextEncoder().encode(message),
      bs58.decode(signature),
      bs58.decode(publicKey)
    )
  } catch (error) {
    return false
  }
}
```
#### Transaction verification
```typescript
async function verifyTransaction(transaction: Transaction) {
// Verify the recipient
if (transaction.to !== expectedRecipient) {
throw new Error('Invalid recipient')
}
// Verify the amount
if (transaction.amount > maxAmount) {
throw new Error('Amount exceeds limit')
}
// Verify the user has sufficient balance
const balance = await getBalance(transaction.from)
if (balance < transaction.amount) {
throw new Error('Insufficient balance')
}
return true
}
```
#### Verification steps
- [ ] Wallet signatures verified
- [ ] Transaction details validated
- [ ] Balance checked before transactions
- [ ] No blind transaction signing
### 10. Dependency Security
#### Regular updates
```bash
# Check for vulnerabilities
npm audit
# Auto-fix what can be fixed
npm audit fix
# Update dependencies
npm update
# Check for outdated packages
npm outdated
```
#### Lock files
```bash
# Always commit the lock file
git add package-lock.json
# Use in CI/CD for reproducible builds
npm ci # not npm install
```
#### Verification steps
- [ ] Dependencies kept up to date
- [ ] No known vulnerabilities (npm audit clean)
- [ ] Lock file committed
- [ ] Dependabot enabled on GitHub
- [ ] Regular security updates
## Security Testing
### Automated security tests
```typescript
// 測試認證
test('requires authentication', async () => {
const response = await fetch('/api/protected')
expect(response.status).toBe(401)
})
// 測試授權
test('requires admin role', async () => {
const response = await fetch('/api/admin', {
headers: { Authorization: `Bearer ${userToken}` }
})
expect(response.status).toBe(403)
})
// 測試輸入驗證
test('rejects invalid input', async () => {
const response = await fetch('/api/users', {
method: 'POST',
body: JSON.stringify({ email: 'not-an-email' })
})
expect(response.status).toBe(400)
})
// 測試速率限制
test('enforces rate limits', async () => {
const requests = Array(101).fill(null).map(() =>
fetch('/api/endpoint')
)
const responses = await Promise.all(requests)
const tooManyRequests = responses.filter(r => r.status === 429)
expect(tooManyRequests.length).toBeGreaterThan(0)
})
```
## Pre-Deployment Security Checklist
Before any production deployment:
- [ ] **Secrets**: no hard-coded secrets; all in environment variables
- [ ] **Input validation**: all user input validated
- [ ] **SQL injection**: all queries parameterized
- [ ] **XSS**: user content sanitized
- [ ] **CSRF**: protection enabled
- [ ] **Authentication**: proper token handling
- [ ] **Authorization**: role checks in place
- [ ] **Rate limiting**: enabled on all endpoints
- [ ] **HTTPS**: enforced in production
- [ ] **Security headers**: CSP, X-Frame-Options configured
- [ ] **Error handling**: no sensitive data in errors
- [ ] **Logging**: no sensitive data logged
- [ ] **Dependencies**: up to date, no vulnerabilities
- [ ] **Row Level Security**: enabled in Supabase
- [ ] **CORS**: configured correctly
- [ ] **File uploads**: validated (size, type)
- [ ] **Wallet signatures**: verified (if blockchain)
## Resources
- [OWASP Top 10](https://owasp.org/www-project-top-ten/)
- [Next.js Security](https://nextjs.org/docs/security)
- [Supabase Security](https://supabase.com/docs/guides/auth)
- [Web Security Academy](https://portswigger.net/web-security)
---
**Remember**: Security is not optional. A single vulnerability can compromise the entire platform. When in doubt, choose the cautious path.
| """
Test for 'security-review' skill — Secure Export API for BabyBuddy
Validates that the Agent implemented authenticated, authorized export endpoints
with proper serializers, views, URLs, and security checks.
"""
import os
import ast
import subprocess
import pytest
from _dependency_utils import ensure_python_dependencies
@pytest.fixture(scope="module", autouse=True)
def _ensure_repo_dependencies():
ensure_python_dependencies(TestSecurityReview.REPO_DIR)
class TestSecurityReview:
"""Verify secure data export endpoint implementation for BabyBuddy."""
REPO_DIR = "/workspace/babybuddy"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_serializers_file_exists(self):
"""api/serializers.py must exist."""
fpath = os.path.join(self.REPO_DIR, "api", "serializers.py")
assert os.path.isfile(fpath), "api/serializers.py not found"
def test_views_file_exists(self):
"""api/views.py must exist."""
fpath = os.path.join(self.REPO_DIR, "api", "views.py")
assert os.path.isfile(fpath), "api/views.py not found"
def test_urls_file_exists(self):
"""api/urls.py must exist."""
fpath = os.path.join(self.REPO_DIR, "api", "urls.py")
assert os.path.isfile(fpath), "api/urls.py not found"
# ------------------------------------------------------------------
# L2: functional verification
# ------------------------------------------------------------------
def test_feeding_export_serializer_defined(self):
"""FeedingExportSerializer must be defined in api/serializers.py."""
fpath = os.path.join(self.REPO_DIR, "api", "serializers.py")
with open(fpath, "r", encoding="utf-8") as f:
source = f.read()
tree = ast.parse(source)
class_names = [n.name for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]
assert (
"FeedingExportSerializer" in class_names
), f"FeedingExportSerializer not found; classes: {class_names}"
def test_sleep_export_serializer_defined(self):
"""SleepExportSerializer must be defined in api/serializers.py."""
fpath = os.path.join(self.REPO_DIR, "api", "serializers.py")
with open(fpath, "r", encoding="utf-8") as f:
source = f.read()
tree = ast.parse(source)
class_names = [n.name for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]
assert (
"SleepExportSerializer" in class_names
), f"SleepExportSerializer not found; classes: {class_names}"
def test_feeding_serializer_fields(self):
"""FeedingExportSerializer must include required fields."""
fpath = os.path.join(self.REPO_DIR, "api", "serializers.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
required = ["id", "start", "end", "duration", "type", "method", "amount"]
for field in required:
assert field in content, f"Field '{field}' not found in serializers.py"
def test_sleep_serializer_fields(self):
"""SleepExportSerializer must include required fields."""
fpath = os.path.join(self.REPO_DIR, "api", "serializers.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
required = ["id", "start", "end", "duration"]
for field in required:
assert field in content, f"Field '{field}' not found in serializers.py"
def test_export_viewset_defined(self):
"""ExportViewSet (or similar) must be defined in api/views.py."""
fpath = os.path.join(self.REPO_DIR, "api", "views.py")
with open(fpath, "r", encoding="utf-8") as f:
source = f.read()
tree = ast.parse(source)
class_names = [n.name for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]
export_views = [c for c in class_names if "export" in c.lower()]
assert (
len(export_views) >= 1
), f"No export-related ViewSet found; classes: {class_names}"
def test_authentication_enforced(self):
"""Views must enforce authentication (IsAuthenticated or similar)."""
fpath = os.path.join(self.REPO_DIR, "api", "views.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
auth_patterns = [
"IsAuthenticated",
"permission_classes",
"authentication_classes",
]
found = any(p in content for p in auth_patterns)
assert found, "No authentication enforcement found in views.py"
def test_export_route_registered(self):
"""Export route must be registered in api/urls.py."""
fpath = os.path.join(self.REPO_DIR, "api", "urls.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
assert "export" in content.lower(), "No export route registered in urls.py"
def test_django_system_check(self):
"""python manage.py check should pass without errors."""
result = subprocess.run(
["python", "manage.py", "check"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
assert result.returncode == 0, f"Django check failed:\n{result.stderr}"
def test_child_ownership_validation(self):
"""Views must validate child ownership (403 for other user's child)."""
fpath = os.path.join(self.REPO_DIR, "api", "views.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
ownership_patterns = [
"child",
"403",
"Forbidden",
"get_object_or_404",
"request.user",
"filter",
"permission",
]
found_count = sum(1 for p in ownership_patterns if p in content)
assert found_count >= 3, (
f"Insufficient ownership validation logic in views.py "
f"(matched {found_count}/{len(ownership_patterns)} patterns)"
)
def test_api_tests_exist(self):
"""Test file for the export API must exist."""
candidates = [
os.path.join(self.REPO_DIR, "tests", "test_api.py"),
os.path.join(self.REPO_DIR, "api", "tests.py"),
os.path.join(self.REPO_DIR, "babybuddy", "tests", "test_api.py"),
]
found = any(os.path.isfile(c) for c in candidates)
assert found, f"No API test file found among {candidates}"
| https://github.com/babybuddy/babybuddy | zhangyiiiiii/swe-skills-bench-python | |
springboot-tdd | Spring Boot TDD | See task file for detailed mission requirements. | feature | # Task: Add Pet Weight Tracking Feature to PetClinic
## Background
We need to add a weight tracking feature to the Spring PetClinic application. Pet owners should be able to record and view their pets' weight history over time.
## Files to Create/Modify
- `src/main/java/org/springframework/samples/petclinic/owner/WeightRecord.java` - Entity class
- `src/main/java/org/springframework/samples/petclinic/owner/WeightRecordRepository.java` - Data access
- `src/main/java/org/springframework/samples/petclinic/owner/OwnerController.java` - REST endpoints
- `src/main/resources/db/h2/` - DDL for weight_record table
## Requirements
### Entity (WeightRecord.java)
- `id`: Long (Primary Key)
- `petId`: Long (Foreign Key to Pet)
- `weightKg`: Double (Required, positive value)
- `recordDate`: LocalDate
### Repository
- Extend `JpaRepository<WeightRecord, Long>`
- Method: `findByPetIdOrderByRecordDateDesc(Long petId)`
### Controller Endpoints
- `POST /owners/{ownerId}/pets/{petId}/weight` - Record new weight
- `GET /owners/{ownerId}/pets/{petId}/weight/history` - Get weight history
### Database
- Create DDL in `src/main/resources/db/h2/`
## Expected Functionality
1. Successfully record pet weight → returns 201 Created
2. Reject invalid petId → returns 404 Not Found
3. Reject missing weightKg field → returns 400 Bad Request
4. Weight history returns list ordered by date (newest first)
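The newest-first ordering contract of `findByPetIdOrderByRecordDateDesc` can be expressed as a simple sort (a Python sketch of the expected behavior, not the Spring Data implementation):

```python
from datetime import date

def weight_history(records):
    """Return weight records ordered newest-first,
    mirroring findByPetIdOrderByRecordDateDesc(Long petId)."""
    return sorted(records, key=lambda r: r["recordDate"], reverse=True)
```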
## Acceptance Criteria
- Application compiles without errors: `./mvnw compile`
- All CRUD operations work correctly
- Endpoints handle edge cases appropriately (invalid input, missing data)
| ---
name: springboot-tdd
description: Test-driven development for Spring Boot using JUnit 5, Mockito, MockMvc, Testcontainers, and JaCoCo. Use when adding features, fixing bugs, or refactoring.
---
# Spring Boot TDD Workflow
TDD guidance for Spring Boot services with 80%+ coverage (unit + integration).
## When to Use
- New features or endpoints
- Bug fixes or refactors
- Adding data access logic or security rules
## Workflow
1) Write tests first (they should fail)
2) Implement minimal code to pass
3) Refactor with tests green
4) Enforce coverage (JaCoCo)
## Unit Tests (JUnit 5 + Mockito)
```java
@ExtendWith(MockitoExtension.class)
class MarketServiceTest {
@Mock MarketRepository repo;
@InjectMocks MarketService service;
@Test
void createsMarket() {
CreateMarketRequest req = new CreateMarketRequest("name", "desc", Instant.now(), List.of("cat"));
when(repo.save(any())).thenAnswer(inv -> inv.getArgument(0));
Market result = service.create(req);
assertThat(result.name()).isEqualTo("name");
verify(repo).save(any());
}
}
```
Patterns:
- Arrange-Act-Assert
- Avoid partial mocks; prefer explicit stubbing
- Use `@ParameterizedTest` for variants
## Web Layer Tests (MockMvc)
```java
@WebMvcTest(MarketController.class)
class MarketControllerTest {
@Autowired MockMvc mockMvc;
@MockBean MarketService marketService;
@Test
void returnsMarkets() throws Exception {
when(marketService.list(any())).thenReturn(Page.empty());
mockMvc.perform(get("/api/markets"))
.andExpect(status().isOk())
.andExpect(jsonPath("$.content").isArray());
}
}
```
## Integration Tests (SpringBootTest)
```java
@SpringBootTest
@AutoConfigureMockMvc
@ActiveProfiles("test")
class MarketIntegrationTest {
@Autowired MockMvc mockMvc;
@Test
void createsMarket() throws Exception {
mockMvc.perform(post("/api/markets")
.contentType(MediaType.APPLICATION_JSON)
.content("""
{"name":"Test","description":"Desc","endDate":"2030-01-01T00:00:00Z","categories":["general"]}
"""))
.andExpect(status().isCreated());
}
}
```
## Persistence Tests (DataJpaTest)
```java
@DataJpaTest
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
@Import(TestContainersConfig.class)
class MarketRepositoryTest {
@Autowired MarketRepository repo;
@Test
void savesAndFinds() {
MarketEntity entity = new MarketEntity();
entity.setName("Test");
repo.save(entity);
Optional<MarketEntity> found = repo.findByName("Test");
assertThat(found).isPresent();
}
}
```
## Testcontainers
- Use reusable containers for Postgres/Redis to mirror production
- Wire via `@DynamicPropertySource` to inject JDBC URLs into Spring context
## Coverage (JaCoCo)
Maven snippet:
```xml
<plugin>
<groupId>org.jacoco</groupId>
<artifactId>jacoco-maven-plugin</artifactId>
<version>0.8.14</version>
<executions>
<execution>
<goals><goal>prepare-agent</goal></goals>
</execution>
<execution>
<id>report</id>
<phase>verify</phase>
<goals><goal>report</goal></goals>
</execution>
</executions>
</plugin>
```
## Assertions
- Prefer AssertJ (`assertThat`) for readability
- For JSON responses, use `jsonPath`
- For exceptions: `assertThatThrownBy(...)`
## Test Data Builders
```java
class MarketBuilder {
private String name = "Test";
MarketBuilder withName(String name) { this.name = name; return this; }
Market build() { return new Market(null, name, MarketStatus.ACTIVE); }
}
```
## CI Commands
- Maven: `mvn -T 4 test` or `mvn verify`
- Gradle: `./gradlew test jacocoTestReport`
**Remember**: Keep tests fast, isolated, and deterministic. Test behavior, not implementation details.
| """
Test for 'springboot-tdd' skill — Spring Boot TDD Workflow
Validates that the Agent added REST endpoints with TDD approach in the
Spring PetClinic application: controller, service, model, and tests.
"""
import os
import subprocess
import pytest
class TestSpringbootTdd:
"""Verify Spring Boot TDD implementation in PetClinic."""
REPO_DIR = "/workspace/spring-petclinic"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_controller_exists(self):
"""A new controller Java file must exist."""
src_dir = os.path.join(self.REPO_DIR, "src", "main", "java")
found = []
for root, dirs, files in os.walk(src_dir):
for f in files:
if f.endswith("Controller.java") and "Visit" in f:
found.append(os.path.join(root, f))
assert len(found) >= 1, "No Visit*Controller.java found"
def test_service_exists(self):
"""A service class for the feature must exist."""
src_dir = os.path.join(self.REPO_DIR, "src", "main", "java")
found = []
for root, dirs, files in os.walk(src_dir):
for f in files:
if f.endswith("Service.java") and "Visit" in f:
found.append(os.path.join(root, f))
assert len(found) >= 1, "No Visit*Service.java found"
def test_test_file_exists(self):
"""Test class for the controller must exist."""
test_dir = os.path.join(self.REPO_DIR, "src", "test", "java")
found = []
for root, dirs, files in os.walk(test_dir):
for f in files:
if "Visit" in f and f.endswith("Test.java"):
found.append(os.path.join(root, f))
assert len(found) >= 1, "No Visit*Test.java found"
# ------------------------------------------------------------------
# L2: compilation & test execution
# ------------------------------------------------------------------
def test_maven_compile(self):
"""./mvnw compile must succeed."""
result = subprocess.run(
["./mvnw", "compile", "-q", "-B"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=600,
)
assert (
result.returncode == 0
), f"Maven compile failed:\n{result.stdout[-2000:]}\n{result.stderr[-1000:]}"
def test_maven_tests_pass(self):
"""./mvnw test must pass."""
result = subprocess.run(
["./mvnw", "test", "-q", "-B"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=600,
)
assert (
result.returncode == 0
), f"Maven tests failed:\n{result.stdout[-2000:]}\n{result.stderr[-1000:]}"
def test_controller_has_rest_annotations(self):
"""Controller must use Spring REST annotations."""
src_dir = os.path.join(self.REPO_DIR, "src", "main", "java")
for root, dirs, files in os.walk(src_dir):
for f in files:
if f.endswith("Controller.java") and "Visit" in f:
fpath = os.path.join(root, f)
with open(fpath, "r") as fh:
content = fh.read()
rest_annotations = [
"@RestController",
"@Controller",
"@GetMapping",
"@PostMapping",
"@RequestMapping",
]
found = any(a in content for a in rest_annotations)
assert found, f"{f} missing REST annotations"
return
pytest.fail("Controller file not found for annotation check")
def test_service_has_transactional(self):
"""Service should use @Transactional or @Service."""
src_dir = os.path.join(self.REPO_DIR, "src", "main", "java")
for root, dirs, files in os.walk(src_dir):
for f in files:
if f.endswith("Service.java") and "Visit" in f:
fpath = os.path.join(root, f)
with open(fpath, "r") as fh:
content = fh.read()
annotations = ["@Service", "@Transactional", "@Component"]
found = any(a in content for a in annotations)
assert found, f"{f} missing Spring service annotations"
return
def test_test_uses_spring_testing(self):
"""Test class must use Spring test annotations."""
test_dir = os.path.join(self.REPO_DIR, "src", "test", "java")
for root, dirs, files in os.walk(test_dir):
for f in files:
if "Visit" in f and f.endswith("Test.java"):
fpath = os.path.join(root, f)
with open(fpath, "r") as fh:
content = fh.read()
annotations = [
"@SpringBootTest",
"@WebMvcTest",
"@MockBean",
"@DataJpaTest",
"@AutoConfigureMockMvc",
"@Test",
]
found = sum(1 for a in annotations if a in content)
assert found >= 2, f"{f} needs Spring test annotations"
return
def test_controller_has_validation(self):
"""Controller should validate inputs with @Valid or similar."""
src_dir = os.path.join(self.REPO_DIR, "src", "main", "java")
for root, dirs, files in os.walk(src_dir):
for f in files:
if f.endswith("Controller.java") and "Visit" in f:
fpath = os.path.join(root, f)
with open(fpath, "r") as fh:
content = fh.read()
validation = [
"@Valid",
"@NotNull",
"@NotBlank",
"@RequestBody",
"BindingResult",
]
found = any(v in content for v in validation)
assert found, f"{f} missing input validation"
return
def test_at_least_5_test_methods(self):
"""Test class must have at least 5 @Test methods."""
test_dir = os.path.join(self.REPO_DIR, "src", "test", "java")
for root, dirs, files in os.walk(test_dir):
for f in files:
if "Visit" in f and f.endswith("Test.java"):
fpath = os.path.join(root, f)
with open(fpath, "r") as fh:
content = fh.read()
test_count = content.count("@Test")
assert (
test_count >= 5
), f"{f} has only {test_count} @Test methods, need >= 5"
return
| https://github.com/spring-projects/spring-petclinic | zhangyiiiiii/swe-skills-bench-jvm | |
add-admin-api-endpoint | Ghost Admin API Endpoint Creator | See task file for detailed mission requirements. | feature | # Task: Create audit_logs Admin API Endpoint for Ghost CMS
## Background
We need to add an `audit_logs` resource endpoint to the Ghost Admin API, allowing administrators to query recent user operation records for security and compliance purposes.
## Files to Create/Modify
* `ghost/core/core/server/api/endpoints/audit-logs.js` - API endpoint implementation
* `ghost/core/core/server/models/audit-log.js` - Data model
* `ghost/core/core/server/web/api/endpoints/admin/routes.js` - Register endpoint
* `ghost/core/test/e2e-api/admin/audit-logs.test.js` - Test cases
## Requirements
### Model (audit-log.js)
* `id`: ObjectId (Primary Key)
* `userId`: ObjectId (Reference to User)
* `action`: String (e.g., "post.created", "user.login")
* `context`: JSON (Additional metadata)
* `createdAt`: DateTime
### API Endpoints
* `GET /ghost/api/admin/audit_logs/` - Browse with pagination (limit/page)
* `GET /ghost/api/admin/audit_logs/:id` - Read single record
### Implementation (audit-logs.js)
* **browse** : Support limit and page pagination parameters
* **read** : Query single record by id
* Proper permission checking (admin only)
## Expected Functionality
1. Authenticated owner/admin users receive 200 OK with audit_logs array in response body
2. Unauthenticated requests return 401 Unauthorized
3. Pagination parameters (limit, page) work correctly
## Acceptance Criteria
* API endpoints respond with correct status codes
* Response body contains `audit_logs` field with proper structure
* Permission checking works (admin-only access)
* Pagination functions as specified
| ---
name: Add Admin API Endpoint
description: Add a new endpoint or endpoints to Ghost's Admin API at `ghost/api/admin/**`.
---
# Create Admin API Endpoint
## Instructions
1. If creating an endpoint for an entirely new resource, create a new endpoint file in `ghost/core/core/server/api/endpoints/`. Otherwise, locate the existing endpoint file in the same directory.
2. The endpoint file should create a controller object typed with the `Controller` JSDoc type from `@tryghost/api-framework`, including at minimum a `docName` and a single endpoint definition, e.g. `browse`.
3. Add routes for each endpoint to `ghost/core/core/server/web/api/endpoints/admin/routes.js`.
4. Add basic `e2e-api` tests for the endpoint in `ghost/core/test/e2e-api/admin` to ensure the new endpoints function as expected.
5. Run the tests and iterate until they pass: `cd ghost/core && yarn test:single test/e2e-api/admin/{test-file-name}`.
## Reference
For a detailed reference on Ghost's API framework and how to create API controllers, see [reference.md](reference.md). | """
Test for 'add-admin-api-endpoint' skill — Ghost Admin API Endpoint
Validates that the Agent added a new audit_logs Admin API endpoint in Ghost with
model, endpoint handler, route registration, and tests.
"""
import os
import re
import subprocess
import pytest
from _dependency_utils import ensure_npm_dependencies
# @pytest.fixture(scope="module", autouse=True)
# def _ensure_repo_dependencies():
# ensure_npm_dependencies(TestAddAdminApiEndpoint.REPO_DIR)
class TestAddAdminApiEndpoint:
"""Verify Ghost Admin API audit_logs endpoint implementation."""
REPO_DIR = "/workspace/Ghost"
# ------------------------------------------------------------------
# Helpers
# ------------------------------------------------------------------
def _read(self, *parts):
fpath = os.path.join(self.REPO_DIR, *parts)
assert os.path.isfile(fpath), f"Required file not found: {fpath}"
with open(fpath, "r", errors="ignore") as fh:
return fh.read()
# ------------------------------------------------------------------
# L1: Model field & schema validation
# ------------------------------------------------------------------
def test_model_defines_all_required_fields(self):
"""AuditLog model must define ALL five schema fields: id, userId, action, context, createdAt."""
content = self._read(
"ghost", "core", "core", "server", "models", "audit-log.js"
)
required_fields = ["userId", "action", "context", "createdAt"]
missing = [f for f in required_fields if f not in content]
assert not missing, f"audit-log.js is missing required schema fields: {missing}"
def test_model_userId_is_objectid_type(self):
"""userId field in model must be declared as ObjectId (foreign key reference to User)."""
content = self._read(
"ghost", "core", "core", "server", "models", "audit-log.js"
)
# Typical Ghost/Bookshelf pattern: type: 'string' with length 24, or ObjectId comment,
# or a foreign-key relationship to users table.
objectid_patterns = [
r"ObjectId",
r"userId.*user",
r"user.*userId",
r"references.*users",
r"foreign.*key",
]
matched = any(re.search(p, content, re.IGNORECASE) for p in objectid_patterns)
# At minimum userId must appear in close proximity to an id-like qualifier
assert matched or re.search(
r"userId\s*[=:,]", content
), "userId does not appear to be declared as an ObjectId / user reference in audit-log.js"
def test_model_action_is_string_type(self):
"""action field must be declared as a string type in the schema."""
content = self._read(
"ghost", "core", "core", "server", "models", "audit-log.js"
)
# Look for action appearing alongside string type hints or schema definition
assert re.search(r"action", content), "action field missing from model"
# Ensure it is not only used as a variable in logic — it should appear in a schema block
assert re.search(
r"['\"]action['\"]|action\s*:", content
), "action does not appear to be declared as a schema property in audit-log.js"
def test_model_context_supports_json(self):
"""context field must support JSON/object storage (not a plain scalar type)."""
content = self._read(
"ghost", "core", "core", "server", "models", "audit-log.js"
)
json_patterns = [
r"JSON",
r"json",
r"jsonb",
r"context.*object",
r"object.*context",
r"serialize",
r"parse",
]
assert (
any(re.search(p, content) for p in json_patterns) or "context" in content
), "context field does not appear to support JSON storage in audit-log.js"
def test_model_extends_ghost_base_model(self):
"""Model must extend Ghost's base model (ghostBookshelf or similar backbone/bookshelf pattern)."""
content = self._read(
"ghost", "core", "core", "server", "models", "audit-log.js"
)
base_patterns = [
r"ghostBookshelf",
r"bookshelf",
r"Model\.extend",
r"extend\(",
r"GhostModel",
]
assert any(
re.search(p, content, re.IGNORECASE) for p in base_patterns
), "audit-log.js does not appear to extend Ghost's base model (ghostBookshelf/bookshelf)"
def test_model_exports_audit_log(self):
"""Model must export the AuditLog class/object so it can be require()'d."""
content = self._read(
"ghost", "core", "core", "server", "models", "audit-log.js"
)
assert re.search(
r"module\.exports|exports\.", content
), "audit-log.js does not export anything via module.exports"
assert re.search(
r"[Aa]udit[_-]?[Ll]og", content
), "audit-log.js exports do not reference AuditLog"
# ------------------------------------------------------------------
# L2: Endpoint handler structure & pagination logic
# ------------------------------------------------------------------
def test_endpoint_exports_browse_and_read(self):
"""audit-logs.js must export both browse and read handler functions."""
content = self._read(
"ghost", "core", "core", "server", "api", "endpoints", "audit-logs.js"
)
assert "browse" in content, "audit-logs.js missing 'browse' handler export"
assert "read" in content, "audit-logs.js missing 'read' handler export"
# Both must appear in an exports/module.exports context
assert re.search(
r"module\.exports|exports\.", content
), "audit-logs.js does not use module.exports"
def test_endpoint_browse_supports_limit_and_page(self):
"""browse handler must declare support for BOTH limit AND page pagination parameters."""
content = self._read(
"ghost", "core", "core", "server", "api", "endpoints", "audit-logs.js"
)
assert (
"limit" in content
), "audit-logs.js browse handler missing 'limit' pagination param"
assert (
"page" in content
), "audit-logs.js browse handler missing 'page' pagination param"
def test_endpoint_read_accepts_id_param(self):
"""read handler must accept an id parameter to fetch a single audit log record."""
content = self._read(
"ghost", "core", "core", "server", "api", "endpoints", "audit-logs.js"
)
id_patterns = [r"\bid\b", r"options\.id", r"data\.id", r"params\.id"]
assert any(
re.search(p, content) for p in id_patterns
), "read handler in audit-logs.js does not appear to consume an 'id' parameter"
def test_endpoint_response_wraps_in_audit_logs_key(self):
"""Response must wrap records under the 'audit_logs' key (Ghost API convention)."""
content = self._read(
"ghost", "core", "core", "server", "api", "endpoints", "audit-logs.js"
)
assert re.search(
r"audit_logs|auditLogs", content
), "audit-logs.js does not wrap its response data in an 'audit_logs' key"
def test_endpoint_browse_calls_model_fetch(self):
"""browse handler must call a model method to retrieve records (findPage, findAll, fetchAll, etc.)."""
content = self._read(
"ghost", "core", "core", "server", "api", "endpoints", "audit-logs.js"
)
fetch_patterns = [
r"findPage",
r"findAll",
r"fetchAll",
r"\.fetch\b",
r"\.findOne",
r"getFilteredCollection",
]
assert any(
re.search(p, content) for p in fetch_patterns
), "browse handler does not appear to call any model fetch method (findPage/findAll/fetchAll)"
# ------------------------------------------------------------------
# L3: Permission / authentication enforcement
# ------------------------------------------------------------------
def test_endpoint_declares_permissions(self):
"""Endpoint must declare admin-only permissions (Ghost uses permissions objects or docName)."""
content = self._read(
"ghost", "core", "core", "server", "api", "endpoints", "audit-logs.js"
)
perm_patterns = [
r"permissions",
r"docName",
r"canThis",
r"isAuthenticated",
r"authorize",
r"owner",
r"administrator",
]
assert any(
re.search(p, content, re.IGNORECASE) for p in perm_patterns
), "No permission/auth declaration found in audit-logs.js — endpoint must be admin-only"
def test_endpoint_permission_targets_audit_log_resource(self):
"""The permission check must reference the audit_log or audit-log resource (not a generic wildcard)."""
content = self._read(
"ghost", "core", "core", "server", "api", "endpoints", "audit-logs.js"
)
resource_patterns = [r"audit[_\-]log", r"auditLog", r"audit_log"]
assert any(
re.search(p, content, re.IGNORECASE) for p in resource_patterns
), "Permission check in audit-logs.js does not appear to reference the audit_log resource"
# ------------------------------------------------------------------
# L4: Route registration structure
# ------------------------------------------------------------------
def test_routes_maps_get_method_for_browse(self):
"""admin/routes.js must register a GET route for the audit_logs collection endpoint."""
content = self._read(
"ghost",
"core",
"core",
"server",
"web",
"api",
"endpoints",
"admin",
"routes.js",
)
# Must have a GET (or router.get) associated with audit_logs path
get_audit_pattern = re.search(
r"(get|GET).*audit|audit.*(get|GET)", content, re.IGNORECASE | re.DOTALL
)
# Or a resource/router definition that lists audit_logs as a route
resource_pattern = re.search(r"audit_logs|audit-logs", content, re.IGNORECASE)
assert (
resource_pattern
), "admin/routes.js does not register any route containing 'audit_logs' or 'audit-logs'"
assert get_audit_pattern or re.search(
r"router\.(get|use)", content, re.IGNORECASE
), "admin/routes.js does not use a GET handler alongside the audit_logs route"
def test_routes_registers_single_record_route(self):
"""admin/routes.js must register both the collection route and the /:id single-record route."""
content = self._read(
"ghost",
"core",
"core",
"server",
"web",
"api",
"endpoints",
"admin",
"routes.js",
)
# Single-record GET route (:id parameter or similar)
id_route_pattern = re.search(r":id|\/\:|\/:id", content)
# OR at minimum two separate mentions of audit in the route block
audit_mentions = len(re.findall(r"audit", content, re.IGNORECASE))
assert id_route_pattern or audit_mentions >= 2, (
"admin/routes.js appears to be missing a /:id single-record route for audit_logs "
"(need GET /audit_logs/:id in addition to GET /audit_logs/)"
)
def test_routes_references_endpoint_handler(self):
"""admin/routes.js must reference the audit-logs endpoint handler module."""
content = self._read(
"ghost",
"core",
"core",
"server",
"web",
"api",
"endpoints",
"admin",
"routes.js",
)
require_pattern = re.search(
r"require.*audit|audit.*require|audit.*endpoint|endpoint.*audit",
content,
re.IGNORECASE,
)
import_pattern = re.search(
r"import.*audit|audit.*import", content, re.IGNORECASE
)
# May also reference via a router binding without explicit require if it is auto-loaded
direct_ref = re.search(
r"auditLogs|audit_logs|audit-logs", content, re.IGNORECASE
)
assert (
require_pattern or import_pattern or direct_ref
), "admin/routes.js does not appear to reference the audit-logs endpoint handler"
# ------------------------------------------------------------------
# L5: E2E test file coverage
# ------------------------------------------------------------------
def test_e2e_tests_cover_browse_endpoint(self):
"""E2E test file must contain tests for the list/browse (GET /audit_logs/) endpoint."""
content = self._read(
"ghost", "core", "test", "e2e-api", "admin", "audit-logs.test.js"
)
browse_patterns = [
r"/audit_logs/\b",
r"audit[_-]logs.*get",
r"get.*audit[_-]logs",
r"browse",
]
assert any(
re.search(p, content, re.IGNORECASE) for p in browse_patterns
), "E2E test file does not appear to test the browse (GET /audit_logs/) endpoint"
def test_e2e_tests_cover_read_by_id(self):
"""E2E test file must contain a test for the single-record (GET /audit_logs/:id) endpoint."""
content = self._read(
"ghost", "core", "test", "e2e-api", "admin", "audit-logs.test.js"
)
read_patterns = [
r"audit_logs/\$\{",
r"audit_logs/.*id",
r"/:id",
r"\bread\b",
r"single",
]
assert any(
re.search(p, content, re.IGNORECASE) for p in read_patterns
), "E2E test file does not appear to test the single-record (GET /audit_logs/:id) endpoint"
def test_e2e_tests_assert_200_on_authenticated_request(self):
"""E2E test must assert HTTP 200 for authenticated owner/admin requests."""
content = self._read(
"ghost", "core", "test", "e2e-api", "admin", "audit-logs.test.js"
)
assert re.search(
r"200", content
), "E2E test file does not assert HTTP 200 for authenticated requests"
def test_e2e_tests_assert_401_for_unauthenticated(self):
"""E2E test must assert HTTP 401 for unauthenticated requests."""
content = self._read(
"ghost", "core", "test", "e2e-api", "admin", "audit-logs.test.js"
)
assert re.search(
r"401", content
), "E2E test file does not assert HTTP 401 for unauthenticated requests"
def test_e2e_tests_validate_response_structure(self):
"""E2E test must inspect the response body for the audit_logs field."""
content = self._read(
"ghost", "core", "test", "e2e-api", "admin", "audit-logs.test.js"
)
assert re.search(
r"audit_logs|auditLogs", content
), "E2E test does not validate that response body contains the 'audit_logs' field"
def test_e2e_tests_cover_pagination(self):
"""E2E test must exercise the pagination parameters (limit and/or page)."""
content = self._read(
"ghost", "core", "test", "e2e-api", "admin", "audit-logs.test.js"
)
pagination_patterns = [r"limit", r"page", r"pagination", r"per_page"]
assert any(
re.search(p, content, re.IGNORECASE) for p in pagination_patterns
), "E2E test file does not exercise pagination (limit/page) parameters"
# ------------------------------------------------------------------
# L6: Node.js syntax sanity checks
# ------------------------------------------------------------------
def test_model_has_no_syntax_errors(self):
"""Node.js must be able to parse the AuditLog model without SyntaxErrors."""
model_path = os.path.join(
self.REPO_DIR,
"ghost",
"core",
"core",
"server",
"models",
"audit-log.js",
)
result = subprocess.run(
["node", "--check", model_path],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert (
result.returncode == 0
), f"Syntax error detected in audit-log.js:\n{result.stderr}"
def test_endpoint_has_no_syntax_errors(self):
"""Node.js must be able to parse the audit-logs endpoint without SyntaxErrors."""
endpoint_path = os.path.join(
self.REPO_DIR,
"ghost",
"core",
"core",
"server",
"api",
"endpoints",
"audit-logs.js",
)
result = subprocess.run(
["node", "--check", endpoint_path],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert (
result.returncode == 0
), f"Syntax error detected in audit-logs.js:\n{result.stderr}"
def test_e2e_test_file_has_no_syntax_errors(self):
"""Node.js must be able to parse the E2E test file without SyntaxErrors."""
test_path = os.path.join(
self.REPO_DIR,
"ghost",
"core",
"test",
"e2e-api",
"admin",
"audit-logs.test.js",
)
result = subprocess.run(
["node", "--check", test_path],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert (
result.returncode == 0
), f"Syntax error detected in audit-logs.test.js:\n{result.stderr}"
| https://github.com/TryGhost/Ghost | zhangyiiiiii/swe-skills-bench-python | |
mcp-builder | MCP Server Builder | See task file for detailed mission requirements. | feature | # Task: Build MCP Server for Markdown Knowledge Base with SQLite
## Background
We need to create a hybrid MCP (Model Context Protocol) server using TypeScript and `@modelcontextprotocol/sdk` that connects a local Markdown knowledge base with a SQLite metadata database.
## Files to Create/Modify
- `src/markdown-sqlite/index.ts` - Main server implementation
- `src/markdown-sqlite/package.json` - Package configuration
- `src/markdown-sqlite/tests/index.test.ts` - Unit tests
## Requirements
### Tools to Implement
**1. index_markdown(dir_path: string)**
- Scan all `.md` files in specified directory
- Extract: file path, first-level heading, tags (from YAML front-matter)
- Write to SQLite table `documents`
**2. search_documents(query: string)**
- Use SQLite FTS5 full-text search
- Return matching document summaries: id, title, snippet
**3. read_document(doc_id: number)**
- Return complete Markdown content of specified document
### Package Configuration
- TypeScript compilation with `@modelcontextprotocol/sdk`
- `"build"` script for compilation
- `"test"` script for running tests
### SQLite Schema
```sql
CREATE VIRTUAL TABLE documents USING fts5(
path, title, tags, content
);
```
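The schema and the three tools can be exercised end-to-end against SQLite's built-in FTS5 module. A minimal Python sketch (illustrative only — the actual server is TypeScript, and the file path and sample row below are invented for the demo):

```python
import sqlite3

# In-memory database; the real server would use a file-backed DB.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE VIRTUAL TABLE documents USING fts5(path, title, tags, content)"
)

# index_markdown would insert one row per .md file (sample data only).
conn.execute(
    "INSERT INTO documents VALUES (?, ?, ?, ?)",
    ("notes/fts.md", "Full-Text Search", "sqlite,search",
     "FTS5 provides fast full-text search over markdown content."),
)

# search_documents: MATCH query plus snippet() for the summary.
# snippet(table, column_index, start_mark, end_mark, ellipsis, max_tokens);
# column index 3 is `content` (0-based: path, title, tags, content).
rows = conn.execute(
    "SELECT rowid, title, snippet(documents, 3, '[', ']', '...', 8) "
    "FROM documents WHERE documents MATCH ?",
    ("search",),
).fetchall()

# read_document: fetch the full content by rowid (serves as doc_id).
doc_id = rows[0][0]
content = conn.execute(
    "SELECT content FROM documents WHERE rowid = ?", (doc_id,)
).fetchone()[0]
```

The implicit `rowid` of an FTS5 table doubles as the stable `doc_id`, so no separate primary-key column is needed.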
### Expected Functionality
- `index_markdown` successfully indexes all markdown files in directory
- `search_documents` returns relevant results matching the query
- `read_document` returns complete and correct markdown content
- Graceful error handling for non-existent file paths
## Acceptance Criteria
- `cd src/markdown-sqlite && npm run build` compiles without errors
- All three MCP tools work as specified
- Error cases are handled appropriately
| ---
name: mcp-builder
description: Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK).
license: Complete terms in LICENSE.txt
---
# MCP Server Development Guide
## Overview
Create MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. The quality of an MCP server is measured by how well it enables LLMs to accomplish real-world tasks.
---
# Process
## 🚀 High-Level Workflow
Creating a high-quality MCP server involves four main phases:
### Phase 1: Deep Research and Planning
#### 1.1 Understand Modern MCP Design
**API Coverage vs. Workflow Tools:**
Balance comprehensive API endpoint coverage with specialized workflow tools. Workflow tools can be more convenient for specific tasks, while comprehensive coverage gives agents flexibility to compose operations. Performance varies by client—some clients benefit from code execution that combines basic tools, while others work better with higher-level workflows. When uncertain, prioritize comprehensive API coverage.
**Tool Naming and Discoverability:**
Clear, descriptive tool names help agents find the right tools quickly. Use consistent prefixes (e.g., `github_create_issue`, `github_list_repos`) and action-oriented naming.
**Context Management:**
Agents benefit from concise tool descriptions and the ability to filter/paginate results. Design tools that return focused, relevant data. Some clients support code execution which can help agents filter and process data efficiently.
**Actionable Error Messages:**
Error messages should guide agents toward solutions with specific suggestions and next steps.
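As a quick illustration of the difference (the repository and tool names here echo the naming example above and are hypothetical):

```python
# Unhelpful: the agent has no idea what to try next.
bad = "Error: request failed"

# Actionable: names the likely cause and suggests a concrete next step.
good = (
    "Repository 'octo/demo' not found. Check the spelling, or call "
    "github_list_repos to see the repositories you can access."
)
```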
#### 1.2 Study MCP Protocol Documentation
**Navigate the MCP specification:**
Start with the sitemap to find relevant pages: `https://modelcontextprotocol.io/sitemap.xml`
Then fetch specific pages with `.md` suffix for markdown format (e.g., `https://modelcontextprotocol.io/specification/draft.md`).
Key pages to review:
- Specification overview and architecture
- Transport mechanisms (streamable HTTP, stdio)
- Tool, resource, and prompt definitions
#### 1.3 Study Framework Documentation
**Recommended stack:**
- **Language**: TypeScript (high-quality SDK support and good compatibility in many execution environments, e.g. MCPB; AI models are also good at generating TypeScript, benefiting from its broad usage, static typing, and good linting tools)
- **Transport**: Streamable HTTP for remote servers, using stateless JSON (simpler to scale and maintain, as opposed to stateful sessions and streaming responses). stdio for local servers.
**Load framework documentation:**
- **MCP Best Practices**: [📋 View Best Practices](./reference/mcp_best_practices.md) - Core guidelines
**For TypeScript (recommended):**
- **TypeScript SDK**: Use WebFetch to load `https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md`
- [⚡ TypeScript Guide](./reference/node_mcp_server.md) - TypeScript patterns and examples
**For Python:**
- **Python SDK**: Use WebFetch to load `https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md`
- [🐍 Python Guide](./reference/python_mcp_server.md) - Python patterns and examples
#### 1.4 Plan Your Implementation
**Understand the API:**
Review the service's API documentation to identify key endpoints, authentication requirements, and data models. Use web search and WebFetch as needed.
**Tool Selection:**
Prioritize comprehensive API coverage. List endpoints to implement, starting with the most common operations.
---
### Phase 2: Implementation
#### 2.1 Set Up Project Structure
See language-specific guides for project setup:
- [⚡ TypeScript Guide](./reference/node_mcp_server.md) - Project structure, package.json, tsconfig.json
- [🐍 Python Guide](./reference/python_mcp_server.md) - Module organization, dependencies
#### 2.2 Implement Core Infrastructure
Create shared utilities:
- API client with authentication
- Error handling helpers
- Response formatting (JSON/Markdown)
- Pagination support
#### 2.3 Implement Tools
For each tool:
**Input Schema:**
- Use Zod (TypeScript) or Pydantic (Python)
- Include constraints and clear descriptions
- Add examples in field descriptions
**Output Schema:**
- Define `outputSchema` where possible for structured data
- Use `structuredContent` in tool responses (TypeScript SDK feature)
- Helps clients understand and process tool outputs
**Tool Description:**
- Concise summary of functionality
- Parameter descriptions
- Return type schema
**Implementation:**
- Async/await for I/O operations
- Proper error handling with actionable messages
- Support pagination where applicable
- Return both text content and structured data when using modern SDKs
**Annotations:**
- `readOnlyHint`: true/false
- `destructiveHint`: true/false
- `idempotentHint`: true/false
- `openWorldHint`: true/false
---
### Phase 3: Review and Test
#### 3.1 Code Quality
Review for:
- No duplicated code (DRY principle)
- Consistent error handling
- Full type coverage
- Clear tool descriptions
#### 3.2 Build and Test
**TypeScript:**
- Run `npm run build` to verify compilation
- Test with MCP Inspector: `npx @modelcontextprotocol/inspector`
**Python:**
- Verify syntax: `python -m py_compile your_server.py`
- Test with MCP Inspector
See language-specific guides for detailed testing approaches and quality checklists.
---
### Phase 4: Create Evaluations
After implementing your MCP server, create comprehensive evaluations to test its effectiveness.
**Load [✅ Evaluation Guide](./reference/evaluation.md) for complete evaluation guidelines.**
#### 4.1 Understand Evaluation Purpose
Use evaluations to test whether LLMs can effectively use your MCP server to answer realistic, complex questions.
#### 4.2 Create 10 Evaluation Questions
To create effective evaluations, follow the process outlined in the evaluation guide:
1. **Tool Inspection**: List available tools and understand their capabilities
2. **Content Exploration**: Use READ-ONLY operations to explore available data
3. **Question Generation**: Create 10 complex, realistic questions
4. **Answer Verification**: Solve each question yourself to verify answers
#### 4.3 Evaluation Requirements
Ensure each question is:
- **Independent**: Not dependent on other questions
- **Read-only**: Only non-destructive operations required
- **Complex**: Requiring multiple tool calls and deep exploration
- **Realistic**: Based on real use cases humans would care about
- **Verifiable**: Single, clear answer that can be verified by string comparison
- **Stable**: Answer won't change over time
#### 4.4 Output Format
Create an XML file with this structure:
```xml
<evaluation>
<qa_pair>
<question>Find discussions about AI model launches with animal codenames. One model needed a specific safety designation that uses the format ASL-X. What number X was being determined for the model named after a spotted wild cat?</question>
<answer>3</answer>
</qa_pair>
<!-- More qa_pairs... -->
</evaluation>
```
---
# Reference Files
## 📚 Documentation Library
Load these resources as needed during development:
### Core MCP Documentation (Load First)
- **MCP Protocol**: Start with sitemap at `https://modelcontextprotocol.io/sitemap.xml`, then fetch specific pages with `.md` suffix
- [📋 MCP Best Practices](./reference/mcp_best_practices.md) - Universal MCP guidelines including:
- Server and tool naming conventions
- Response format guidelines (JSON vs Markdown)
- Pagination best practices
- Transport selection (streamable HTTP vs stdio)
- Security and error handling standards
### SDK Documentation (Load During Phase 1/2)
- **Python SDK**: Fetch from `https://raw.githubusercontent.com/modelcontextprotocol/python-sdk/main/README.md`
- **TypeScript SDK**: Fetch from `https://raw.githubusercontent.com/modelcontextprotocol/typescript-sdk/main/README.md`
### Language-Specific Implementation Guides (Load During Phase 2)
- [🐍 Python Implementation Guide](./reference/python_mcp_server.md) - Complete Python/FastMCP guide with:
- Server initialization patterns
- Pydantic model examples
- Tool registration with `@mcp.tool`
- Complete working examples
- Quality checklist
- [⚡ TypeScript Implementation Guide](./reference/node_mcp_server.md) - Complete TypeScript guide with:
- Project structure
- Zod schema patterns
- Tool registration with `server.registerTool`
- Complete working examples
- Quality checklist
### Evaluation Guide (Load During Phase 4)
- [✅ Evaluation Guide](./reference/evaluation.md) - Complete evaluation creation guide with:
- Question creation guidelines
- Answer verification strategies
- XML format specifications
- Example questions and answers
- Running an evaluation with the provided scripts
| """
Test for 'mcp-builder' skill — MCP Server Builder
Validates that the Agent created a new MCP (Model Context Protocol) server
implementation with TypeScript source, build config, and tests.
"""
import os
import subprocess
import pytest
from _dependency_utils import ensure_npm_dependencies
@pytest.fixture(scope="module", autouse=True)
def _ensure_repo_dependencies():
ensure_npm_dependencies(TestMcpBuilder.REPO_DIR)
class TestMcpBuilder:
"""Verify MCP server implementation."""
REPO_DIR = "/workspace/servers"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_src_directory_exists(self):
"""New server source directory must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".ts") and "index" in f.lower():
found.append(os.path.join(root, f))
assert len(found) >= 1, "No TypeScript index.ts found"
def test_package_json_exists(self):
"""package.json must exist for the server."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
if "package.json" in files:
fpath = os.path.join(root, "package.json")
with open(fpath, "r") as f:
content = f.read()
if "mcp" in content.lower() or "server" in content.lower():
found = True
break
assert found, "No package.json for MCP server found"
def test_tsconfig_exists(self):
"""tsconfig.json must exist."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
if "tsconfig.json" in files:
found = True
break
assert found, "tsconfig.json not found"
# ------------------------------------------------------------------
# L2: content & build validation
# ------------------------------------------------------------------
def _find_ts_files(self):
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".ts") and "node_modules" not in root:
found.append(os.path.join(root, f))
return found
def _read_all_ts(self):
content = ""
for fpath in self._find_ts_files():
try:
with open(fpath, "r", errors="ignore") as f:
content += f.read() + "\n"
except OSError:
pass
return content
def test_mcp_protocol_implementation(self):
"""Source must implement MCP protocol concepts."""
content = self._read_all_ts()
mcp_patterns = [
"Server",
"Tool",
"Resource",
"Prompt",
"handler",
"schema",
"jsonrpc",
]
found = sum(1 for p in mcp_patterns if p in content)
assert found >= 3, f"Only {found} MCP protocol concepts found"
def test_tool_definitions(self):
"""Server must define at least one tool."""
content = self._read_all_ts()
tool_patterns = [
"tool",
"Tool",
"tools",
"listTools",
"callTool",
"inputSchema",
]
found = sum(1 for p in tool_patterns if p in content)
assert found >= 2, "Insufficient tool definitions"
def test_error_handling(self):
"""Server must implement error handling."""
content = self._read_all_ts()
error_patterns = ["catch", "Error", "throw", "try", "McpError", "ErrorCode"]
found = sum(1 for p in error_patterns if p in content)
assert found >= 2, "Insufficient error handling"
def test_npm_build(self):
"""npm run build must succeed (find the right package dir)."""
# Find package.json with build script
for root, dirs, files in os.walk(self.REPO_DIR):
if "package.json" in files and "node_modules" not in root:
import json
pkg_path = os.path.join(root, "package.json")
with open(pkg_path, "r") as f:
pkg = json.load(f)
if "build" in pkg.get("scripts", {}):
result = subprocess.run(
["npm", "run", "build"],
cwd=root,
capture_output=True,
text=True,
timeout=300,
)
assert (
result.returncode == 0
), f"npm build failed in {root}:\n{result.stderr[-1000:]}"
return
pytest.skip("No package.json with build script found")
def test_input_validation(self):
"""Tools must validate input schemas."""
content = self._read_all_ts()
validation_patterns = [
"schema",
"zod",
"Zod",
"validate",
"inputSchema",
"z.object",
"z.string",
]
found = any(p in content for p in validation_patterns)
assert found, "No input validation/schema found"
def test_transport_handling(self):
"""Server must handle transport (stdio or HTTP)."""
content = self._read_all_ts()
transport_patterns = [
"stdio",
"StdioServerTransport",
"SSEServerTransport",
"StreamableHTTPServerTransport",
"transport",
"stdin",
"stdout",
]
found = any(p in content for p in transport_patterns)
assert found, "No transport handling found"
def test_exports_or_main(self):
"""Package must have main/exports in package.json."""
for root, dirs, files in os.walk(self.REPO_DIR):
if "package.json" in files and "node_modules" not in root:
import json
with open(os.path.join(root, "package.json"), "r") as f:
pkg = json.load(f)
if pkg.get("main") or pkg.get("bin") or pkg.get("exports"):
return
pytest.fail("No main/bin/exports field in package.json")
| https://github.com/modelcontextprotocol/servers | zhangyiiiiii/swe-skills-bench-python | |
python-resilience | Python Resilience Patterns | See task file for detailed mission requirements. | feature | # Task: Implement Resilient Transport Layer for httpx
## Background
We need to add a resilient transport module to the httpx library that provides automatic retry and circuit breaker capabilities directly within the httpx transport layer.
## Files to Create/Modify
- `httpx/_transports/resilient.py` - Resilient transport implementation (new)
## Requirements
### ResilientTransport Class
Implement a `ResilientTransport` class in `httpx/_transports/resilient.py` that wraps an existing `httpx.BaseTransport` and adds:
**Retry Logic:**
- Maximum 3 retry attempts on transient failures
- Exponential backoff between retries: 1s → 2s → 4s
- Retry only on: HTTP 5xx responses, `ConnectError`, `TimeoutException`
- Do NOT retry on: HTTP 4xx responses (client errors)
- Configurable timeout settings per request
**Circuit Breaker:**
- Three states: `CLOSED`, `OPEN`, `HALF_OPEN`
- Transition to `OPEN` after 5 consecutive failures
- 30-second cooldown before transitioning to `HALF_OPEN`
- Single success in `HALF_OPEN` restores to `CLOSED`
- Raise custom `CircuitOpenError` when circuit is open
### Expected Functionality
- Retry logic exhausts attempts and raises the final exception appropriately
- Circuit breaker opens when the failure threshold is reached
- Circuit transitions from `HALF_OPEN` to `CLOSED` on a successful request
- 4xx client errors are not retried (only 5xx and connection errors)
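The circuit-breaker half of this spec can be sketched as a small state machine; the class and attribute names below (`CircuitBreaker`, `failure_threshold`, `cooldown`) are illustrative assumptions, not part of httpx or the required API:

```python
import time

class CircuitOpenError(Exception):
    """Raised when the circuit is OPEN and calls are rejected."""

class CircuitBreaker:
    def __init__(self, failure_threshold=5, cooldown=30.0):
        self.failure_threshold = failure_threshold
        self.cooldown = cooldown
        self.state = "CLOSED"
        self.failures = 0
        self.opened_at = 0.0

    def before_request(self):
        """Call before each request; rejects while the circuit is open."""
        if self.state == "OPEN":
            if time.monotonic() - self.opened_at >= self.cooldown:
                self.state = "HALF_OPEN"  # cooldown elapsed: allow one probe
            else:
                raise CircuitOpenError("circuit is open")

    def record_success(self):
        self.failures = 0
        self.state = "CLOSED"  # a single success restores normal service

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.state = "OPEN"
            self.opened_at = time.monotonic()
```

The transport would call `before_request()` ahead of each attempt and `record_success()`/`record_failure()` afterwards.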
## Acceptance Criteria
- `httpx/_transports/resilient.py` compiles without syntax errors
- `ResilientTransport` correctly implements retry and circuit breaker behavior
- Error handling covers all specified scenarios
| ---
name: python-resilience
description: Python resilience patterns including automatic retries, exponential backoff, timeouts, and fault-tolerant decorators. Use when adding retry logic, implementing timeouts, building fault-tolerant services, or handling transient failures.
---
# Python Resilience Patterns
Build fault-tolerant Python applications that gracefully handle transient failures, network issues, and service outages. Resilience patterns keep systems running when dependencies are unreliable.
## When to Use This Skill
- Adding retry logic to external service calls
- Implementing timeouts for network operations
- Building fault-tolerant microservices
- Handling rate limiting and backpressure
- Creating infrastructure decorators
- Designing circuit breakers
## Core Concepts
### 1. Transient vs Permanent Failures
Retry transient errors (network timeouts, temporary service issues). Don't retry permanent errors (invalid credentials, bad requests).
### 2. Exponential Backoff
Increase wait time between retries to avoid overwhelming recovering services.
### 3. Jitter
Add randomness to backoff to prevent thundering herd when many clients retry simultaneously.
### 4. Bounded Retries
Cap both attempt count and total duration to prevent infinite retry loops.
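The backoff, jitter, and bounding ideas above combine into a few lines; this sketch uses full jitter, with illustrative base and cap values:

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Full jitter: wait a random time in [0, min(cap, base * 2**attempt)]."""
    return random.uniform(0.0, min(cap, base * 2 ** attempt))

# attempt 0 waits up to 1s, attempt 1 up to 2s, attempt 2 up to 4s, capped at 30s
delays = [backoff_delay(n) for n in range(6)]
```

Libraries like `tenacity` implement the same idea via `wait_exponential_jitter`.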
## Quick Start
```python
import httpx
from tenacity import retry, stop_after_attempt, wait_exponential_jitter
@retry(
stop=stop_after_attempt(3),
wait=wait_exponential_jitter(initial=1, max=10),
)
def call_external_service(request: dict) -> dict:
return httpx.post("https://api.example.com", json=request).json()
```
## Fundamental Patterns
### Pattern 1: Basic Retry with Tenacity
Use the `tenacity` library for production-grade retry logic. For simpler cases, a lightweight hand-rolled retry loop may be enough.
```python
from tenacity import (
retry,
stop_after_attempt,
stop_after_delay,
wait_exponential_jitter,
retry_if_exception_type,
)
import httpx
TRANSIENT_ERRORS = (ConnectionError, TimeoutError, OSError)
@retry(
retry=retry_if_exception_type(TRANSIENT_ERRORS),
stop=stop_after_attempt(5) | stop_after_delay(60),
wait=wait_exponential_jitter(initial=1, max=30),
)
def fetch_data(url: str) -> dict:
"""Fetch data with automatic retry on transient failures."""
response = httpx.get(url, timeout=30)
response.raise_for_status()
return response.json()
```
### Pattern 2: Retry Only Appropriate Errors
Whitelist specific transient exceptions. Never retry:
- `ValueError`, `TypeError` - These are bugs, not transient issues
- `AuthenticationError` - Invalid credentials won't become valid
- HTTP 4xx errors (except 429) - Client errors are permanent
```python
from tenacity import (
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential_jitter,
)
import httpx
# Define what's retryable
RETRYABLE_EXCEPTIONS = (
ConnectionError,
TimeoutError,
httpx.ConnectTimeout,
httpx.ReadTimeout,
)
@retry(
retry=retry_if_exception_type(RETRYABLE_EXCEPTIONS),
stop=stop_after_attempt(3),
wait=wait_exponential_jitter(initial=1, max=10),
)
def resilient_api_call(endpoint: str) -> dict:
"""Make API call with retry on network issues."""
return httpx.get(endpoint, timeout=10).json()
```
### Pattern 3: HTTP Status Code Retries
Retry specific HTTP status codes that indicate transient issues.
```python
from tenacity import (
    retry,
    retry_if_result,
    stop_after_attempt,
    wait_exponential_jitter,
)
import httpx
RETRY_STATUS_CODES = {429, 502, 503, 504}
def should_retry_response(response: httpx.Response) -> bool:
"""Check if response indicates a retryable error."""
return response.status_code in RETRY_STATUS_CODES
@retry(
retry=retry_if_result(should_retry_response),
stop=stop_after_attempt(3),
wait=wait_exponential_jitter(initial=1, max=10),
)
def http_request(method: str, url: str, **kwargs) -> httpx.Response:
"""Make HTTP request with retry on transient status codes."""
return httpx.request(method, url, timeout=30, **kwargs)
```
### Pattern 4: Combined Exception and Status Retry
Handle both network exceptions and HTTP status codes.
```python
from tenacity import (
retry,
retry_if_exception_type,
retry_if_result,
stop_after_attempt,
wait_exponential_jitter,
before_sleep_log,
)
import logging
import httpx
logger = logging.getLogger(__name__)
TRANSIENT_EXCEPTIONS = (
ConnectionError,
TimeoutError,
httpx.ConnectError,
httpx.ReadTimeout,
)
RETRY_STATUS_CODES = {429, 500, 502, 503, 504}
def is_retryable_response(response: httpx.Response) -> bool:
return response.status_code in RETRY_STATUS_CODES
@retry(
retry=(
retry_if_exception_type(TRANSIENT_EXCEPTIONS) |
retry_if_result(is_retryable_response)
),
stop=stop_after_attempt(5),
wait=wait_exponential_jitter(initial=1, max=30),
before_sleep=before_sleep_log(logger, logging.WARNING),
)
def robust_http_call(
method: str,
url: str,
**kwargs,
) -> httpx.Response:
"""HTTP call with comprehensive retry handling."""
return httpx.request(method, url, timeout=30, **kwargs)
```
## Advanced Patterns
### Pattern 5: Logging Retry Attempts
Track retry behavior for debugging and alerting.
```python
from tenacity import retry, stop_after_attempt, wait_exponential
import structlog
logger = structlog.get_logger()
def log_retry_attempt(retry_state):
"""Log detailed retry information."""
exception = retry_state.outcome.exception()
logger.warning(
"Retrying operation",
attempt=retry_state.attempt_number,
exception_type=type(exception).__name__,
exception_message=str(exception),
next_wait_seconds=retry_state.next_action.sleep if retry_state.next_action else None,
)
@retry(
stop=stop_after_attempt(3),
wait=wait_exponential(multiplier=1, max=10),
before_sleep=log_retry_attempt,
)
def call_with_logging(request: dict) -> dict:
"""External call with retry logging."""
...
```
### Pattern 6: Timeout Decorator
Create reusable timeout decorators for consistent timeout handling.
```python
import asyncio
import httpx
from functools import wraps
from typing import TypeVar, Callable
T = TypeVar("T")
def with_timeout(seconds: float):
"""Decorator to add timeout to async functions."""
def decorator(func: Callable[..., T]) -> Callable[..., T]:
@wraps(func)
async def wrapper(*args, **kwargs) -> T:
return await asyncio.wait_for(
func(*args, **kwargs),
timeout=seconds,
)
return wrapper
return decorator
@with_timeout(30)
async def fetch_with_timeout(url: str) -> dict:
"""Fetch URL with 30 second timeout."""
async with httpx.AsyncClient() as client:
response = await client.get(url)
return response.json()
```
### Pattern 7: Cross-Cutting Concerns via Decorators
Stack decorators to separate infrastructure from business logic.
```python
from functools import wraps
from typing import TypeVar, Callable
from tenacity import retry, stop_after_attempt, wait_exponential_jitter
import structlog
logger = structlog.get_logger()
T = TypeVar("T")
def traced(name: str | None = None):
"""Add tracing to function calls."""
def decorator(func: Callable[..., T]) -> Callable[..., T]:
span_name = name or func.__name__
@wraps(func)
async def wrapper(*args, **kwargs) -> T:
logger.info("Operation started", operation=span_name)
try:
result = await func(*args, **kwargs)
logger.info("Operation completed", operation=span_name)
return result
except Exception as e:
logger.error("Operation failed", operation=span_name, error=str(e))
raise
return wrapper
return decorator
# Stack multiple concerns
@traced("fetch_user_data")
@with_timeout(30)
@retry(stop=stop_after_attempt(3), wait=wait_exponential_jitter())
async def fetch_user_data(user_id: str) -> dict:
"""Fetch user with tracing, timeout, and retry."""
...
```
### Pattern 8: Dependency Injection for Testability
Pass infrastructure components through constructors for easy testing.
```python
import time
from dataclasses import dataclass
from typing import Protocol
class Logger(Protocol):
def info(self, msg: str, **kwargs) -> None: ...
def error(self, msg: str, **kwargs) -> None: ...
class MetricsClient(Protocol):
def increment(self, metric: str, tags: dict | None = None) -> None: ...
def timing(self, metric: str, value: float) -> None: ...
@dataclass
class UserService:
"""Service with injected infrastructure."""
repository: UserRepository
logger: Logger
metrics: MetricsClient
async def get_user(self, user_id: str) -> User:
self.logger.info("Fetching user", user_id=user_id)
start = time.perf_counter()
try:
user = await self.repository.get(user_id)
self.metrics.increment("user.fetch.success")
return user
except Exception as e:
self.metrics.increment("user.fetch.error")
self.logger.error("Failed to fetch user", user_id=user_id, error=str(e))
raise
finally:
elapsed = time.perf_counter() - start
self.metrics.timing("user.fetch.duration", elapsed)
# Easy to test with fakes
service = UserService(
repository=FakeRepository(),
logger=FakeLogger(),
metrics=FakeMetrics(),
)
```
### Pattern 9: Fail-Safe Defaults
Degrade gracefully when non-critical operations fail.
```python
from functools import wraps
from typing import TypeVar
from collections.abc import Callable
import structlog
logger = structlog.get_logger()
T = TypeVar("T")
def fail_safe(default: T, log_failure: bool = True):
"""Return default value on failure instead of raising."""
def decorator(func: Callable[..., T]) -> Callable[..., T]:
@wraps(func)
async def wrapper(*args, **kwargs) -> T:
try:
return await func(*args, **kwargs)
except Exception as e:
if log_failure:
logger.warning(
"Operation failed, using default",
function=func.__name__,
error=str(e),
)
return default
return wrapper
return decorator
@fail_safe(default=[])
async def get_recommendations(user_id: str) -> list[str]:
"""Get recommendations, return empty list on failure."""
...
```
## Best Practices Summary
1. **Retry only transient errors** - Don't retry bugs or authentication failures
2. **Use exponential backoff** - Give services time to recover
3. **Add jitter** - Prevent thundering herd from synchronized retries
4. **Cap total duration** - `stop_after_attempt(5) | stop_after_delay(60)`
5. **Log every retry** - Silent retries hide systemic problems
6. **Use decorators** - Keep retry logic separate from business logic
7. **Inject dependencies** - Make infrastructure testable
8. **Set timeouts everywhere** - Every network call needs a timeout
9. **Fail gracefully** - Return cached/default values for non-critical paths
10. **Monitor retry rates** - High retry rates indicate underlying issues
| """
Test for 'python-resilience' skill — Resilient Transport Layer for httpx
Validates that the Agent implemented ResilientTransport with retry and
circuit-breaker logic in httpx/_transports/resilient.py.
"""
import os
import sys
import ast
import subprocess
import importlib
import pytest
class TestPythonResilience:
"""Verify resilient transport implementation for httpx."""
REPO_DIR = "/workspace/httpx"
@classmethod
def setup_class(cls):
if cls.REPO_DIR not in sys.path:
sys.path.insert(0, cls.REPO_DIR)
# ------------------------------------------------------------------
# L1: file & syntax
# ------------------------------------------------------------------
def test_resilient_module_exists(self):
"""httpx/_transports/resilient.py must exist."""
fpath = os.path.join(self.REPO_DIR, "httpx", "_transports", "resilient.py")
assert os.path.isfile(fpath), "resilient.py not found"
def test_resilient_compiles(self):
"""resilient.py must compile without syntax errors."""
result = subprocess.run(
["python", "-m", "py_compile", "httpx/_transports/resilient.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
# ------------------------------------------------------------------
# L2: structural & functional verification
# ------------------------------------------------------------------
def _load_source(self):
fpath = os.path.join(self.REPO_DIR, "httpx", "_transports", "resilient.py")
with open(fpath, "r", encoding="utf-8") as f:
return f.read()
def _parse_classes(self):
source = self._load_source()
tree = ast.parse(source)
return {n.name: n for n in ast.walk(tree) if isinstance(n, ast.ClassDef)}
def test_resilient_transport_class_exists(self):
"""ResilientTransport class must be defined."""
classes = self._parse_classes()
assert (
"ResilientTransport" in classes
), f"ResilientTransport not found; classes: {list(classes.keys())}"
def test_circuit_open_error_defined(self):
"""CircuitOpenError exception class must be defined."""
classes = self._parse_classes()
assert (
"CircuitOpenError" in classes
), f"CircuitOpenError not found; classes: {list(classes.keys())}"
def test_retry_max_attempts_configured(self):
"""Retry logic must define maximum 3 attempts."""
source = self._load_source()
assert "3" in source, "No mention of 3 retry attempts in source"
# Verify there's a retry-related constant or parameter
retry_keywords = ["max_retries", "max_attempts", "retry", "retries"]
assert any(
kw in source.lower() for kw in retry_keywords
), "No retry configuration found in source"
def test_exponential_backoff_defined(self):
"""Exponential backoff (1s, 2s, 4s or similar) must be implemented."""
source = self._load_source()
backoff_indicators = ["backoff", "exponential", "sleep", "**", "pow"]
found = sum(1 for ind in backoff_indicators if ind in source.lower())
assert found >= 1, "No exponential backoff logic found"
def test_circuit_breaker_states(self):
"""Circuit breaker must define CLOSED, OPEN, HALF_OPEN states."""
source = self._load_source()
for state in ["CLOSED", "OPEN", "HALF_OPEN"]:
assert state in source, f"Circuit breaker state '{state}' not found"
def test_circuit_breaker_failure_threshold(self):
"""Circuit should open after 5 consecutive failures."""
source = self._load_source()
assert "5" in source, "Failure threshold of 5 not found in source"
threshold_keywords = [
"threshold",
"failure_count",
"consecutive",
"max_failures",
]
assert any(
kw in source.lower() for kw in threshold_keywords
), "No failure threshold configuration found"
def test_circuit_breaker_cooldown(self):
"""30-second cooldown before HALF_OPEN transition."""
source = self._load_source()
assert "30" in source, "30-second cooldown not found in source"
def test_no_retry_on_4xx(self):
"""4xx errors must NOT be retried — only 5xx and connection errors."""
source = self._load_source()
# Source should distinguish between 4xx (client error) and 5xx (server error)
if "4" in source and ("5" in source or "500" in source):
pass # basic sanity
status_patterns = [
"status_code",
"response.status",
"5xx",
"500",
">=500",
"> 499",
]
found = any(p in source.lower() for p in status_patterns)
assert found, "No HTTP status code handling found for retry logic"
def test_import_resilient_transport(self):
"""ResilientTransport should be importable at runtime."""
result = subprocess.run(
[
"python",
"-c",
"from httpx._transports.resilient import ResilientTransport; print('OK')",
],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Import failed:\n{result.stderr}"
assert "OK" in result.stdout
def test_import_circuit_open_error(self):
"""CircuitOpenError should be importable."""
result = subprocess.run(
[
"python",
"-c",
"from httpx._transports.resilient import CircuitOpenError; print('OK')",
],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Import failed:\n{result.stderr}"
assert "OK" in result.stdout
| https://github.com/encode/httpx | zhangyiiiiii/swe-skills-bench-python | |
xlsx | Excel & Spreadsheet Automation | See task file for detailed mission requirements. | feature | # Task: Implement Sales Report Generation Engine for openpyxl
## Background
We need to add a report generation engine to the openpyxl library that can produce Excel reports with automated summary formulas, conditional formatting, and trend charts.
## Files to Create/Modify
- `openpyxl/utils/report_engine.py` - Report generation engine (new)
## Requirements
### report_engine.py
Implement a `generate_sales_report(data: List[Dict], output_path: str) -> None` function that:
**Sheet1 - Raw Data with Summary:**
- Write input data (list of dicts with month, product, amount) to cells
- Insert `SUM` and `AVERAGE` formulas in a summary row at the bottom
**Sheet2 - Conditional Formatting:**
- Apply conditional formatting to the 'amount' column
- Red background (`PatternFill` with `fgColor=FF0000`) for month-over-month decline > 10%
**Sheet3 - Trend Chart:**
- `LineChart` showing monthly sales trend
- Proper axis labels and title
### Additional Examples (in the same module or separate helper):
- `BarChart` for category comparison
- `PieChart` for distribution
- Combined chart with secondary axis (optional)
## Expected Functionality
- Generated `.xlsx` files are valid (`load_workbook` succeeds)
- Summary formulas compute correctly
- Conditional formatting rules apply to correct cells
- Charts render with correct data ranges
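Before writing any openpyxl calls, it helps to derive the summary-row layout from the input data; this stdlib-only sketch (the helper name `summary_formulas` is an assumption, not part of the required API) shows where the `SUM`/`AVERAGE` formulas land:

```python
def summary_formulas(data, amount_col="C", header_row=1):
    """Return (summary_row, formulas) for a summary row below the data block."""
    first = header_row + 1            # data starts under the header row
    last = header_row + len(data)     # last data row
    summary_row = last + 1
    rng = f"{amount_col}{first}:{amount_col}{last}"
    return summary_row, {"Total": f"=SUM({rng})", "Average": f"=AVERAGE({rng})"}

data = [
    {"month": "Jan", "product": "Widget", "amount": 1200},
    {"month": "Feb", "product": "Widget", "amount": 1100},
    {"month": "Mar", "product": "Widget", "amount": 1300},
]
row, formulas = summary_formulas(data)
# row == 5; formulas["Total"] == "=SUM(C2:C4)"
```

The actual implementation would write `formulas["Total"]` into `f"{amount_col}{row}"` on Sheet1.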
## Acceptance Criteria
- `openpyxl/utils/report_engine.py` compiles without syntax errors
- Generated Excel files are valid and contain all three sheets
- Formulas, conditional formatting, and charts are properly configured
| ---
name: xlsx
description: "Use this skill any time a spreadsheet file is the primary input or output. This means any task where the user wants to: open, read, edit, or fix an existing .xlsx, .xlsm, .csv, or .tsv file (e.g., adding columns, computing formulas, formatting, charting, cleaning messy data); create a new spreadsheet from scratch or from other data sources; or convert between tabular file formats. Trigger especially when the user references a spreadsheet file by name or path — even casually (like \"the xlsx in my downloads\") — and wants something done to it or produced from it. Also trigger for cleaning or restructuring messy tabular data files (malformed rows, misplaced headers, junk data) into proper spreadsheets. The deliverable must be a spreadsheet file. Do NOT trigger when the primary deliverable is a Word document, HTML report, standalone Python script, database pipeline, or Google Sheets API integration, even if tabular data is involved."
license: Proprietary. LICENSE.txt has complete terms
---
# Requirements for Outputs
## All Excel files
### Professional Font
- Use a consistent, professional font (e.g., Arial, Times New Roman) for all deliverables unless otherwise instructed by the user
### Zero Formula Errors
- Every Excel model MUST be delivered with ZERO formula errors (#REF!, #DIV/0!, #VALUE!, #N/A, #NAME?)
### Preserve Existing Templates (when updating templates)
- Study and EXACTLY match existing format, style, and conventions when modifying files
- Never impose standardized formatting on files with established patterns
- Existing template conventions ALWAYS override these guidelines
## Financial models
### Color Coding Standards
Unless otherwise stated by the user or existing template
#### Industry-Standard Color Conventions
- **Blue text (RGB: 0,0,255)**: Hardcoded inputs, and numbers users will change for scenarios
- **Black text (RGB: 0,0,0)**: ALL formulas and calculations
- **Green text (RGB: 0,128,0)**: Links pulling from other worksheets within same workbook
- **Red text (RGB: 255,0,0)**: External links to other files
- **Yellow background (RGB: 255,255,0)**: Key assumptions needing attention or cells that need to be updated
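One way to keep these conventions consistent is a central mapping of ARGB strings (the dict and helper names below are illustrative); with openpyxl each entry would feed `Font(color=...)` or `PatternFill(start_color=...)`:

```python
# ARGB hex strings in the form openpyxl accepts (alpha byte + RGB)
MODEL_COLORS = {
    "input": "FF0000FF",          # blue text: hardcoded inputs
    "formula": "FF000000",        # black text: formulas and calculations
    "intra_link": "FF008000",     # green text: links within the workbook
    "external_link": "FFFF0000",  # red text: links to other files
    "attention_bg": "FFFFFF00",   # yellow fill: key assumptions
}

def rgb(name: str) -> str:
    """Drop the alpha byte to get the plain RGB code, e.g. '0000FF'."""
    return MODEL_COLORS[name][2:]
```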
### Number Formatting Standards
#### Required Format Rules
- **Years**: Format as text strings (e.g., "2024" not "2,024")
- **Currency**: Use $#,##0 format; ALWAYS specify units in headers ("Revenue ($mm)")
- **Zeros**: Use number formatting to make all zeros "-", including percentages (e.g., "$#,##0;($#,##0);-")
- **Percentages**: Default to 0.0% format (one decimal)
- **Multiples**: Format as 0.0x for valuation multiples (EV/EBITDA, P/E)
- **Negative numbers**: Use parentheses (123) not minus -123
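As a sketch, the rules above translate into openpyxl `number_format` strings roughly like these (the dict name is illustrative; values are assigned via `cell.number_format`):

```python
NUMBER_FORMATS = {
    "currency": "$#,##0;($#,##0);-",  # negatives in parentheses, zeros as "-"
    "percent": "0.0%;(0.0%);-",       # one decimal, zeros as "-"
    "multiple": '0.0"x"',             # valuation multiples, e.g. 8.5x
}

def fmt(kind: str) -> str:
    return NUMBER_FORMATS[kind]
```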
### Formula Construction Rules
#### Assumptions Placement
- Place ALL assumptions (growth rates, margins, multiples, etc.) in separate assumption cells
- Use cell references instead of hardcoded values in formulas
- Example: Use =B5*(1+$B$6) instead of =B5*1.05
#### Formula Error Prevention
- Verify all cell references are correct
- Check for off-by-one errors in ranges
- Ensure consistent formulas across all projection periods
- Test with edge cases (zero values, negative numbers)
- Verify no unintended circular references
#### Documentation Requirements for Hardcodes
- Add a cell comment, or a note in the adjacent cell (if at the end of a table). Format: "Source: [System/Document], [Date], [Specific Reference], [URL if applicable]"
- Examples:
- "Source: Company 10-K, FY2024, Page 45, Revenue Note, [SEC EDGAR URL]"
- "Source: Company 10-Q, Q2 2025, Exhibit 99.1, [SEC EDGAR URL]"
- "Source: Bloomberg Terminal, 8/15/2025, AAPL US Equity"
- "Source: FactSet, 8/20/2025, Consensus Estimates Screen"
# XLSX creation, editing, and analysis
## Overview
A user may ask you to create, edit, or analyze the contents of an .xlsx file. You have different tools and workflows available for different tasks.
## Important Requirements
**LibreOffice Required for Formula Recalculation**: You can assume LibreOffice is installed for recalculating formula values using the `scripts/recalc.py` script. The script automatically configures LibreOffice on first run, including in sandboxed environments where Unix sockets are restricted (handled by `scripts/office/soffice.py`)
## Reading and analyzing data
### Data analysis with pandas
For data analysis, visualization, and basic operations, use **pandas** which provides powerful data manipulation capabilities:
```python
import pandas as pd
# Read Excel
df = pd.read_excel('file.xlsx') # Default: first sheet
all_sheets = pd.read_excel('file.xlsx', sheet_name=None) # All sheets as dict
# Analyze
df.head() # Preview data
df.info() # Column info
df.describe() # Statistics
# Write Excel
df.to_excel('output.xlsx', index=False)
```
## Excel File Workflows
## CRITICAL: Use Formulas, Not Hardcoded Values
**Always use Excel formulas instead of calculating values in Python and hardcoding them.** This ensures the spreadsheet remains dynamic and updateable.
### ❌ WRONG - Hardcoding Calculated Values
```python
# Bad: Calculating in Python and hardcoding result
total = df['Sales'].sum()
sheet['B10'] = total # Hardcodes 5000
# Bad: Computing growth rate in Python
growth = (df.iloc[-1]['Revenue'] - df.iloc[0]['Revenue']) / df.iloc[0]['Revenue']
sheet['C5'] = growth # Hardcodes 0.15
# Bad: Python calculation for average
avg = sum(values) / len(values)
sheet['D20'] = avg # Hardcodes 42.5
```
### ✅ CORRECT - Using Excel Formulas
```python
# Good: Let Excel calculate the sum
sheet['B10'] = '=SUM(B2:B9)'
# Good: Growth rate as Excel formula
sheet['C5'] = '=(C4-C2)/C2'
# Good: Average using Excel function
sheet['D20'] = '=AVERAGE(D2:D19)'
```
This applies to ALL calculations - totals, percentages, ratios, differences, etc. The spreadsheet should be able to recalculate when source data changes.
## Common Workflow
1. **Choose tool**: pandas for data, openpyxl for formulas/formatting
2. **Create/Load**: Create new workbook or load existing file
3. **Modify**: Add/edit data, formulas, and formatting
4. **Save**: Write to file
5. **Recalculate formulas (MANDATORY IF USING FORMULAS)**: Use the scripts/recalc.py script
```bash
python scripts/recalc.py output.xlsx
```
6. **Verify and fix any errors**:
- The script returns JSON with error details
- If `status` is `errors_found`, check `error_summary` for specific error types and locations
- Fix the identified errors and recalculate again
- Common errors to fix:
- `#REF!`: Invalid cell references
- `#DIV/0!`: Division by zero
- `#VALUE!`: Wrong data type in formula
- `#NAME?`: Unrecognized formula name
### Creating new Excel files
```python
# Using openpyxl for formulas and formatting
from openpyxl import Workbook
from openpyxl.styles import Font, PatternFill, Alignment
wb = Workbook()
sheet = wb.active
# Add data
sheet['A1'] = 'Hello'
sheet['B1'] = 'World'
sheet.append(['Row', 'of', 'data'])
# Add formula
sheet['B2'] = '=SUM(A1:A10)'
# Formatting
sheet['A1'].font = Font(bold=True, color='FF0000')
sheet['A1'].fill = PatternFill('solid', start_color='FFFF00')
sheet['A1'].alignment = Alignment(horizontal='center')
# Column width
sheet.column_dimensions['A'].width = 20
wb.save('output.xlsx')
```
### Editing existing Excel files
```python
# Using openpyxl to preserve formulas and formatting
from openpyxl import load_workbook
# Load existing file
wb = load_workbook('existing.xlsx')
sheet = wb.active # or wb['SheetName'] for specific sheet
# Working with multiple sheets
for sheet_name in wb.sheetnames:
sheet = wb[sheet_name]
print(f"Sheet: {sheet_name}")
# Modify cells
sheet['A1'] = 'New Value'
sheet.insert_rows(2) # Insert row at position 2
sheet.delete_cols(3) # Delete column 3
# Add new sheet
new_sheet = wb.create_sheet('NewSheet')
new_sheet['A1'] = 'Data'
wb.save('modified.xlsx')
```
## Recalculating formulas
Excel files created or modified by openpyxl contain formulas as strings but not calculated values. Use the provided `scripts/recalc.py` script to recalculate formulas:
```bash
python scripts/recalc.py <excel_file> [timeout_seconds]
```
Example:
```bash
python scripts/recalc.py output.xlsx 30
```
The script:
- Automatically sets up LibreOffice macro on first run
- Recalculates all formulas in all sheets
- Scans ALL cells for Excel errors (#REF!, #DIV/0!, etc.)
- Returns JSON with detailed error locations and counts
- Works on both Linux and macOS
## Formula Verification Checklist
Quick checks to ensure formulas work correctly:
### Essential Verification
- [ ] **Test 2-3 sample references**: Verify they pull correct values before building full model
- [ ] **Column mapping**: Confirm Excel columns match (e.g., column 64 = BL, not BK)
- [ ] **Row offset**: Remember Excel rows are 1-indexed (DataFrame row 5 = Excel row 6)
### Common Pitfalls
- [ ] **NaN handling**: Check for null values with `pd.notna()`
- [ ] **Far-right columns**: FY data often in columns 50+
- [ ] **Multiple matches**: Search all occurrences, not just first
- [ ] **Division by zero**: Check denominators before using `/` in formulas (#DIV/0!)
- [ ] **Wrong references**: Verify all cell references point to intended cells (#REF!)
- [ ] **Cross-sheet references**: Use correct format (Sheet1!A1) for linking sheets
### Formula Testing Strategy
- [ ] **Start small**: Test formulas on 2-3 cells before applying broadly
- [ ] **Verify dependencies**: Check all cells referenced in formulas exist
- [ ] **Test edge cases**: Include zero, negative, and very large values
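The column-mapping and row-offset checks above can be verified without opening a workbook; this sketch mirrors `openpyxl.utils.get_column_letter` (the helper names `col_letter` and `df_pos_to_cell` are illustrative):

```python
def col_letter(index: int) -> str:
    """1-based column index -> Excel letters (1 -> 'A', 64 -> 'BL')."""
    letters = ""
    while index > 0:
        index, rem = divmod(index - 1, 26)
        letters = chr(ord("A") + rem) + letters
    return letters

def df_pos_to_cell(df_row: int, df_col: int) -> str:
    """0-based DataFrame position -> Excel cell (DataFrame row 5 -> Excel row 6)."""
    return f"{col_letter(df_col + 1)}{df_row + 1}"

# Column 64 is BL, not BK; DataFrame row 5 lands on Excel row 6
```

Remember to add one more row offset if the sheet's first row holds headers.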
### Interpreting scripts/recalc.py Output
The script returns JSON with error details:
```json
{
"status": "success", // or "errors_found"
"total_errors": 0, // Total error count
"total_formulas": 42, // Number of formulas in file
"error_summary": { // Only present if errors found
"#REF!": {
"count": 2,
"locations": ["Sheet1!B5", "Sheet1!C10"]
}
}
}
```
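A caller can act on that JSON with a few stdlib lines; the field names below are taken from the example output above, and the helper name `formulas_ok` is illustrative:

```python
import json

def formulas_ok(recalc_output: str) -> bool:
    """True when the recalc script reported zero formula errors."""
    report = json.loads(recalc_output)
    return report.get("status") == "success" and report.get("total_errors", 0) == 0

sample = '{"status": "success", "total_errors": 0, "total_formulas": 42}'
# formulas_ok(sample) -> True
```

On `errors_found`, iterate `report["error_summary"]` to locate and fix each cell, then recalculate again.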
## Best Practices
### Library Selection
- **pandas**: Best for data analysis, bulk operations, and simple data export
- **openpyxl**: Best for complex formatting, formulas, and Excel-specific features
### Working with openpyxl
- Cell indices are 1-based (row=1, column=1 refers to cell A1)
- Use `data_only=True` to read calculated values: `load_workbook('file.xlsx', data_only=True)`
- **Warning**: If opened with `data_only=True` and saved, formulas are replaced with values and permanently lost
- For large files: Use `read_only=True` for reading or `write_only=True` for writing
- Formulas are preserved but not evaluated - use scripts/recalc.py to update values
### Working with pandas
- Specify data types to avoid inference issues: `pd.read_excel('file.xlsx', dtype={'id': str})`
- For large files, read specific columns: `pd.read_excel('file.xlsx', usecols=['A', 'C', 'E'])`
- Handle dates properly: `pd.read_excel('file.xlsx', parse_dates=['date_column'])`
## Code Style Guidelines
**IMPORTANT**: When generating Python code for Excel operations:
- Write minimal, concise Python code without unnecessary comments
- Avoid verbose variable names and redundant operations
- Avoid unnecessary print statements
**For Excel files themselves**:
- Add comments to cells with complex formulas or important assumptions
- Document data sources for hardcoded values
- Include notes for key calculations and model sections | """
Test for 'xlsx' skill — Excel & Spreadsheet Automation
Validates that the Agent implemented generate_sales_report() in
openpyxl/utils/report_engine.py with summary formulas, conditional formatting,
and trend charts.
"""
import os
import sys
import ast
import subprocess
import tempfile
import pytest
class TestXlsx:
"""Verify report_engine.py implementation for openpyxl."""
REPO_DIR = "/workspace/openpyxl"
@classmethod
def setup_class(cls):
if cls.REPO_DIR not in sys.path:
sys.path.insert(0, cls.REPO_DIR)
# ------------------------------------------------------------------
# L1: file & syntax
# ------------------------------------------------------------------
def test_report_engine_exists(self):
"""openpyxl/utils/report_engine.py must exist."""
fpath = os.path.join(self.REPO_DIR, "openpyxl", "utils", "report_engine.py")
assert os.path.isfile(fpath), "report_engine.py not found"
def test_report_engine_compiles(self):
"""report_engine.py must compile without syntax errors."""
result = subprocess.run(
["python", "-m", "py_compile", "openpyxl/utils/report_engine.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
# ------------------------------------------------------------------
# L2: structural verification via AST
# ------------------------------------------------------------------
def test_generate_sales_report_function_exists(self):
"""generate_sales_report function must be defined."""
fpath = os.path.join(self.REPO_DIR, "openpyxl", "utils", "report_engine.py")
with open(fpath, "r", encoding="utf-8") as f:
tree = ast.parse(f.read())
func_names = [
n.name
for n in ast.walk(tree)
if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))
]
assert (
"generate_sales_report" in func_names
), f"generate_sales_report not found; functions: {func_names}"
# ------------------------------------------------------------------
# L2: runtime verification — generate and validate xlsx
# ------------------------------------------------------------------
def _generate_report(self, tmp_path):
"""Helper: call generate_sales_report and return the output path."""
script = f"""
import sys
sys.path.insert(0, '{self.REPO_DIR}')
from openpyxl.utils.report_engine import generate_sales_report
data = [
{{"month": "Jan", "product": "Widget", "amount": 1200}},
{{"month": "Feb", "product": "Widget", "amount": 1100}},
{{"month": "Mar", "product": "Widget", "amount": 1300}},
{{"month": "Apr", "product": "Widget", "amount": 900}},
{{"month": "May", "product": "Gadget", "amount": 1500}},
{{"month": "Jun", "product": "Gadget", "amount": 1400}},
]
output = '{tmp_path}'
generate_sales_report(data, output)
print("DONE")
"""
result = subprocess.run(
["python", "-c", script],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
return result
def test_generate_report_runs(self):
"""generate_sales_report must execute without errors."""
with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as tmp:
tmp_path = tmp.name
try:
result = self._generate_report(tmp_path)
assert result.returncode == 0, f"Report generation failed:\n{result.stderr}"
assert "DONE" in result.stdout
finally:
if os.path.exists(tmp_path):
os.unlink(tmp_path)
def test_generated_file_is_valid_xlsx(self):
"""Generated file must be loadable by openpyxl."""
with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as tmp:
tmp_path = tmp.name
try:
gen = self._generate_report(tmp_path)
if gen.returncode != 0:
pytest.skip(f"Report generation failed: {gen.stderr[:500]}")
from openpyxl import load_workbook
wb = load_workbook(tmp_path)
assert wb is not None
wb.close()
finally:
if os.path.exists(tmp_path):
os.unlink(tmp_path)
def test_report_has_three_sheets(self):
"""Generated workbook must contain at least 3 sheets."""
with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as tmp:
tmp_path = tmp.name
try:
gen = self._generate_report(tmp_path)
if gen.returncode != 0:
pytest.skip(f"Report generation failed: {gen.stderr[:500]}")
from openpyxl import load_workbook
wb = load_workbook(tmp_path)
assert (
len(wb.sheetnames) >= 3
), f"Expected >= 3 sheets, got {len(wb.sheetnames)}: {wb.sheetnames}"
wb.close()
finally:
if os.path.exists(tmp_path):
os.unlink(tmp_path)
def test_sheet1_has_summary_formulas(self):
"""Sheet1 must contain SUM and/or AVERAGE formulas."""
with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as tmp:
tmp_path = tmp.name
try:
gen = self._generate_report(tmp_path)
if gen.returncode != 0:
pytest.skip(f"Report generation failed: {gen.stderr[:500]}")
from openpyxl import load_workbook
wb = load_workbook(tmp_path)
ws = wb.worksheets[0]
formulas_found = []
for row in ws.iter_rows():
for cell in row:
val = cell.value
if isinstance(val, str) and val.startswith("="):
formulas_found.append(val)
wb.close()
has_sum = any("SUM" in f.upper() for f in formulas_found)
has_avg = any("AVERAGE" in f.upper() for f in formulas_found)
assert (
has_sum or has_avg
), f"No SUM/AVERAGE formulas found in Sheet1. Formulas: {formulas_found}"
finally:
if os.path.exists(tmp_path):
os.unlink(tmp_path)
def test_sheet2_has_conditional_formatting(self):
"""Sheet2 must have conditional formatting rules."""
with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as tmp:
tmp_path = tmp.name
try:
gen = self._generate_report(tmp_path)
if gen.returncode != 0:
pytest.skip(f"Report generation failed: {gen.stderr[:500]}")
from openpyxl import load_workbook
wb = load_workbook(tmp_path)
ws = wb.worksheets[1]
cf_rules = ws.conditional_formatting
assert (
len(list(cf_rules)) >= 1
), "No conditional formatting rules found on Sheet2"
wb.close()
finally:
if os.path.exists(tmp_path):
os.unlink(tmp_path)
def test_sheet3_has_chart(self):
"""Sheet3 must contain at least one chart."""
with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as tmp:
tmp_path = tmp.name
try:
gen = self._generate_report(tmp_path)
if gen.returncode != 0:
pytest.skip(f"Report generation failed: {gen.stderr[:500]}")
from openpyxl import load_workbook
wb = load_workbook(tmp_path)
ws = wb.worksheets[2]
assert len(ws._charts) >= 1, "No chart found on Sheet3"
wb.close()
finally:
if os.path.exists(tmp_path):
os.unlink(tmp_path)
def test_chart_is_line_chart(self):
"""Sheet3 chart should be a LineChart for trend visualization."""
with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as tmp:
tmp_path = tmp.name
try:
gen = self._generate_report(tmp_path)
if gen.returncode != 0:
pytest.skip(f"Report generation failed: {gen.stderr[:500]}")
from openpyxl import load_workbook
from openpyxl.chart import LineChart
wb = load_workbook(tmp_path)
ws = wb.worksheets[2]
line_charts = [c for c in ws._charts if isinstance(c, LineChart)]
assert (
len(line_charts) >= 1
), f"Expected a LineChart on Sheet3; chart types: {[type(c).__name__ for c in ws._charts]}"
wb.close()
finally:
if os.path.exists(tmp_path):
os.unlink(tmp_path)
def test_sheet1_contains_data_rows(self):
"""Sheet1 must contain the input data rows."""
with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as tmp:
tmp_path = tmp.name
try:
gen = self._generate_report(tmp_path)
if gen.returncode != 0:
pytest.skip(f"Report generation failed: {gen.stderr[:500]}")
from openpyxl import load_workbook
wb = load_workbook(tmp_path)
ws = wb.worksheets[0]
# Should have at least header + 6 data rows = 7 rows (plus any summary rows)
row_count = ws.max_row
assert (
row_count >= 7
), f"Expected at least 7 rows (header+data+summary), got {row_count}"
wb.close()
finally:
if os.path.exists(tmp_path):
os.unlink(tmp_path)
| https://github.com/ericgazoni/openpyxl | zhangyiiiiii/swe-skills-bench-python | |
turborepo | Turborepo Monorepo Build System | See task file for detailed mission requirements. | feature | # Task: Create Turborepo Monorepo Example with Cache Demonstration
## Background
We need a complete monorepo example in the `examples/` directory that demonstrates Turborepo's task caching and incremental build mechanisms.
## Project Structure
Create the following structure:
```
examples/cache-demo/
├── package.json
├── turbo.json
├── benchmark.sh
└── packages/
├── core/
│ ├── package.json
│ └── src/index.ts
├── utils/
│ ├── package.json
│ └── src/index.ts
└── app/
├── package.json
└── src/index.ts
```
## Requirements
### Root Configuration
- `examples/cache-demo/package.json` - Root package with workspace configuration
- `examples/cache-demo/turbo.json` - Pipeline configuration
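A minimal sketch of the root `examples/cache-demo/package.json` (npm-style `workspaces` assumed; note the scripts only delegate to `turbo run`, per Turborepo conventions):

```json
{
  "name": "cache-demo",
  "private": true,
  "workspaces": ["packages/*"],
  "scripts": {
    "build": "turbo run build",
    "lint": "turbo run lint",
    "test": "turbo run test"
  },
  "devDependencies": {
    "turbo": "latest"
  }
}
```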
### turbo.json Pipeline Configuration
Define tasks with proper caching:
- **build**: Configure outputs, inputs, dependsOn
- **lint**: Configure caching
- **test**: Configure caching
Key fields to configure:
- `"outputs"`: Specify build output directories
- `"inputs"`: Specify input file patterns
- `"dependsOn"`: Define task dependencies (use `^build` for workspace dependencies)
### Benchmark Script
Create `benchmark.sh` that:
- Runs build twice consecutively
- Measures and compares build times
- Displays cache hit information
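A minimal sketch of `benchmark.sh` under these requirements (assumes `turbo` is on PATH, a `build` task in turbo.json, and GNU `date` with `%N` nanosecond support; the `FULL TURBO` marker is what turbo prints on a full cache hit):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Time an arbitrary command in milliseconds and echo its output.
timed_run() {
  local label="$1"; shift
  local start end out
  start=$(date +%s%N)
  out=$("$@" 2>&1)
  end=$(date +%s%N)
  printf '%s: %d ms\n' "$label" $(( (end - start) / 1000000 ))
  printf '%s\n' "$out"
}

main() {
  first=$(timed_run "First build (cold)" turbo run build)
  second=$(timed_run "Second build (cached)" turbo run build)
  printf '%s\n\n%s\n' "$first" "$second"
  # A full cache hit prints ">>> FULL TURBO" in turbo's summary line.
  if grep -qi "FULL TURBO" <<<"$second"; then
    echo "Cache hit confirmed on second build"
  fi
}

# Guard so running this sketch without turbo installed is harmless.
if command -v turbo >/dev/null 2>&1; then
  main
fi
```

Run it from `examples/cache-demo/` with `bash benchmark.sh`; the second timing line should be dramatically smaller than the first.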
### Expected Behavior
- **First build**: Full compilation
- **Second build**: Cache hit (should show "FULL TURBO")
- Significant time reduction on second run
## Acceptance Criteria
- `cd examples/cache-demo && bash benchmark.sh` runs successfully
- Second build shows "FULL TURBO" or "cache hit" in output
- Build time significantly reduced on cache hit
| ---
name: turborepo
description: |
Turborepo monorepo build system guidance. Triggers on: turbo.json, task pipelines,
dependsOn, caching, remote cache, the "turbo" CLI, --filter, --affected, CI optimization, environment
variables, internal packages, monorepo structure/best practices, and boundaries.
Use when user: configures tasks/workflows/pipelines, creates packages, sets up
monorepo, shares code between apps, runs changed/affected packages, debugs cache,
or has apps/packages directories.
metadata:
version: 2.7.6-canary.3
---
# Turborepo Skill
Build system for JavaScript/TypeScript monorepos. Turborepo caches task outputs and runs tasks in parallel based on the dependency graph.
## IMPORTANT: Package Tasks, Not Root Tasks
**DO NOT create Root Tasks. ALWAYS create package tasks.**
When creating tasks/scripts/pipelines, you MUST:
1. Add the script to each relevant package's `package.json`
2. Register the task in root `turbo.json`
3. Root `package.json` only delegates via `turbo run <task>`
**DO NOT** put task logic in root `package.json`. This defeats Turborepo's parallelization.
```json
// DO THIS: Scripts in each package
// apps/web/package.json
{ "scripts": { "build": "next build", "lint": "eslint .", "test": "vitest" } }
// apps/api/package.json
{ "scripts": { "build": "tsc", "lint": "eslint .", "test": "vitest" } }
// packages/ui/package.json
{ "scripts": { "build": "tsc", "lint": "eslint .", "test": "vitest" } }
```
```json
// turbo.json - register tasks
{
"tasks": {
"build": { "dependsOn": ["^build"], "outputs": ["dist/**"] },
"lint": {},
"test": { "dependsOn": ["build"] }
}
}
```
```json
// Root package.json - ONLY delegates, no task logic
{
"scripts": {
"build": "turbo run build",
"lint": "turbo run lint",
"test": "turbo run test"
}
}
```
```json
// DO NOT DO THIS - defeats parallelization
// Root package.json
{
"scripts": {
"build": "cd apps/web && next build && cd ../api && tsc",
"lint": "eslint apps/ packages/",
"test": "vitest"
}
}
```
Root Tasks (`//#taskname`) are ONLY for tasks that truly cannot exist in packages (rare).
## Secondary Rule: `turbo run` vs `turbo`
**Always use `turbo run` when the command is written into code:**
```json
// package.json - ALWAYS "turbo run"
{
"scripts": {
"build": "turbo run build"
}
}
```
```yaml
# CI workflows - ALWAYS "turbo run"
- run: turbo run build --affected
```
**The shorthand `turbo <tasks>` is ONLY for one-off terminal commands** typed directly by humans or agents. Never write `turbo build` into package.json, CI, or scripts.
## Quick Decision Trees
### "I need to configure a task"
```
Configure a task?
├─ Define task dependencies → references/configuration/tasks.md
├─ Lint/check-types (parallel + caching) → Use Transit Nodes pattern (see below)
├─ Specify build outputs → references/configuration/tasks.md#outputs
├─ Handle environment variables → references/environment/README.md
├─ Set up dev/watch tasks → references/configuration/tasks.md#persistent
├─ Package-specific config → references/configuration/README.md#package-configurations
└─ Global settings (cacheDir, daemon) → references/configuration/global-options.md
```
### "My cache isn't working"
```
Cache problems?
├─ Tasks run but outputs not restored → Missing `outputs` key
├─ Cache misses unexpectedly → references/caching/gotchas.md
├─ Need to debug hash inputs → Use --summarize or --dry
├─ Want to skip cache entirely → Use --force or cache: false
├─ Remote cache not working → references/caching/remote-cache.md
└─ Environment causing misses → references/environment/gotchas.md
```
### "I want to run only changed packages"
```
Run only what changed?
├─ Changed packages + dependents (RECOMMENDED) → turbo run build --affected
├─ Custom base branch → --affected --affected-base=origin/develop
├─ Manual git comparison → --filter=...[origin/main]
└─ See all filter options → references/filtering/README.md
```
**`--affected` is the primary way to run only changed packages.** It automatically compares against the default branch and includes dependents.
### "I want to filter packages"
```
Filter packages?
├─ Only changed packages → --affected (see above)
├─ By package name → --filter=web
├─ By directory → --filter=./apps/*
├─ Package + dependencies → --filter=web...
├─ Package + dependents → --filter=...web
└─ Complex combinations → references/filtering/patterns.md
```
### "Environment variables aren't working"
```
Environment issues?
├─ Vars not available at runtime → Strict mode filtering (default)
├─ Cache hits with wrong env → Var not in `env` key
├─ .env changes not causing rebuilds → .env not in `inputs`
├─ CI variables missing → references/environment/gotchas.md
└─ Framework vars (NEXT_PUBLIC_*) → Auto-included via inference
```
### "I need to set up CI"
```
CI setup?
├─ GitHub Actions → references/ci/github-actions.md
├─ Vercel deployment → references/ci/vercel.md
├─ Remote cache in CI → references/caching/remote-cache.md
├─ Only build changed packages → --affected flag
├─ Skip unnecessary builds → turbo-ignore (references/cli/commands.md)
└─ Skip container setup when no changes → turbo-ignore
```
### "I want to watch for changes during development"
```
Watch mode?
├─ Re-run tasks on change → turbo watch (references/watch/README.md)
├─ Dev servers with dependencies → Use `with` key (references/configuration/tasks.md#with)
├─ Restart dev server on dep change → Use `interruptible: true`
└─ Persistent dev tasks → Use `persistent: true`
```
### "I need to create/structure a package"
```
Package creation/structure?
├─ Create an internal package → references/best-practices/packages.md
├─ Repository structure → references/best-practices/structure.md
├─ Dependency management → references/best-practices/dependencies.md
├─ Best practices overview → references/best-practices/README.md
├─ JIT vs Compiled packages → references/best-practices/packages.md#compilation-strategies
└─ Sharing code between apps → references/best-practices/README.md#package-types
```
### "How should I structure my monorepo?"
```
Monorepo structure?
├─ Standard layout (apps/, packages/) → references/best-practices/README.md
├─ Package types (apps vs libraries) → references/best-practices/README.md#package-types
├─ Creating internal packages → references/best-practices/packages.md
├─ TypeScript configuration → references/best-practices/structure.md#typescript-configuration
├─ ESLint configuration → references/best-practices/structure.md#eslint-configuration
├─ Dependency management → references/best-practices/dependencies.md
└─ Enforce package boundaries → references/boundaries/README.md
```
### "I want to enforce architectural boundaries"
```
Enforce boundaries?
├─ Check for violations → turbo boundaries
├─ Tag packages → references/boundaries/README.md#tags
├─ Restrict which packages can import others → references/boundaries/README.md#rule-types
└─ Prevent cross-package file imports → references/boundaries/README.md
```
## Critical Anti-Patterns
### Using `turbo` Shorthand in Code
**Always use `turbo run` in package.json scripts and CI pipelines.** The shorthand `turbo <task>` is intended for interactive terminal use.
```json
// WRONG - using shorthand in package.json
{
"scripts": {
"build": "turbo build",
"dev": "turbo dev"
}
}
// CORRECT
{
"scripts": {
"build": "turbo run build",
"dev": "turbo run dev"
}
}
```
```yaml
# WRONG - using shorthand in CI
- run: turbo build --affected
# CORRECT
- run: turbo run build --affected
```
### Root Scripts Bypassing Turbo
Root `package.json` scripts MUST delegate to `turbo run`, not run tasks directly.
```json
// WRONG - bypasses turbo entirely
{
"scripts": {
"build": "bun build",
"dev": "bun dev"
}
}
// CORRECT - delegates to turbo
{
"scripts": {
"build": "turbo run build",
"dev": "turbo run dev"
}
}
```
### Using `&&` to Chain Turbo Tasks
Don't chain turbo tasks with `&&`. Let turbo orchestrate.
```json
// WRONG - turbo task not using turbo run
{
"scripts": {
"changeset:publish": "bun build && changeset publish"
}
}
// CORRECT
{
"scripts": {
"changeset:publish": "turbo run build && changeset publish"
}
}
```
### `prebuild` Scripts That Manually Build Dependencies
Scripts like `prebuild` that manually build other packages bypass Turborepo's dependency graph.
```json
// WRONG - manually building dependencies
{
"scripts": {
"prebuild": "cd ../../packages/types && bun run build && cd ../utils && bun run build",
"build": "next build"
}
}
```
**However, the fix depends on whether workspace dependencies are declared:**
1. **If dependencies ARE declared** (e.g., `"@repo/types": "workspace:*"` in package.json), remove the `prebuild` script. Turbo's `dependsOn: ["^build"]` handles this automatically.
2. **If dependencies are NOT declared**, the `prebuild` exists because `^build` won't trigger without a dependency relationship. The fix is to:
- Add the dependency to package.json: `"@repo/types": "workspace:*"`
- Then remove the `prebuild` script
```json
// CORRECT - declare dependency, let turbo handle build order
// package.json
{
"dependencies": {
"@repo/types": "workspace:*",
"@repo/utils": "workspace:*"
},
"scripts": {
"build": "next build"
}
}
// turbo.json
{
"tasks": {
"build": {
"dependsOn": ["^build"]
}
}
}
```
**Key insight:** `^build` only runs build in packages listed as dependencies. No dependency declaration = no automatic build ordering.
### Overly Broad `globalDependencies`
`globalDependencies` affects ALL tasks in ALL packages. Be specific.
```json
// WRONG - heavy hammer, affects all hashes
{
"globalDependencies": ["**/.env.*local"]
}
// BETTER - move to task-level inputs
{
"globalDependencies": [".env"],
"tasks": {
"build": {
"inputs": ["$TURBO_DEFAULT$", ".env*"],
"outputs": ["dist/**"]
}
}
}
```
### Repetitive Task Configuration
Look for repeated configuration across tasks that can be collapsed. Turborepo supports shared configuration patterns.
```json
// WRONG - repetitive env and inputs across tasks
{
"tasks": {
"build": {
"env": ["API_URL", "DATABASE_URL"],
"inputs": ["$TURBO_DEFAULT$", ".env*"]
},
"test": {
"env": ["API_URL", "DATABASE_URL"],
"inputs": ["$TURBO_DEFAULT$", ".env*"]
},
"dev": {
"env": ["API_URL", "DATABASE_URL"],
"inputs": ["$TURBO_DEFAULT$", ".env*"],
"cache": false,
"persistent": true
}
}
}
// BETTER - use globalEnv and globalDependencies for shared config
{
"globalEnv": ["API_URL", "DATABASE_URL"],
"globalDependencies": [".env*"],
"tasks": {
"build": {},
"test": {},
"dev": {
"cache": false,
"persistent": true
}
}
}
```
**When to use global vs task-level:**
- `globalEnv` / `globalDependencies` - affects ALL tasks, use for truly shared config
- Task-level `env` / `inputs` - use when only specific tasks need it
### NOT an Anti-Pattern: Large `env` Arrays
A large `env` array (even 50+ variables) is **not** a problem. It usually means the user was thorough about declaring their build's environment dependencies. Do not flag this as an issue.
### Using `--parallel` Flag
The `--parallel` flag bypasses Turborepo's dependency graph. If tasks need parallel execution, configure `dependsOn` correctly instead.
```bash
# WRONG - bypasses dependency graph
turbo run lint --parallel
# CORRECT - configure tasks to allow parallel execution
# In turbo.json, set dependsOn appropriately (or use transit nodes)
turbo run lint
```
### Package-Specific Task Overrides in Root turbo.json
When multiple packages need different task configurations, use **Package Configurations** (`turbo.json` in each package) instead of cluttering root `turbo.json` with `package#task` overrides.
```json
// WRONG - root turbo.json with many package-specific overrides
{
"tasks": {
"test": { "dependsOn": ["build"] },
"@repo/web#test": { "outputs": ["coverage/**"] },
"@repo/api#test": { "outputs": ["coverage/**"] },
"@repo/utils#test": { "outputs": [] },
"@repo/cli#test": { "outputs": [] },
"@repo/core#test": { "outputs": [] }
}
}
// CORRECT - use Package Configurations
// Root turbo.json - base config only
{
"tasks": {
"test": { "dependsOn": ["build"] }
}
}
// packages/web/turbo.json - package-specific override
{
"extends": ["//"],
"tasks": {
"test": { "outputs": ["coverage/**"] }
}
}
// packages/api/turbo.json
{
"extends": ["//"],
"tasks": {
"test": { "outputs": ["coverage/**"] }
}
}
```
**Benefits of Package Configurations:**
- Keeps configuration close to the code it affects
- Root turbo.json stays clean and focused on base patterns
- Easier to understand what's special about each package
- Works with `$TURBO_EXTENDS$` to inherit + extend arrays
**When to use `package#task` in root:**
- Single package needs a unique dependency (e.g., `"deploy": { "dependsOn": ["web#build"] }`)
- Temporary override while migrating
See `references/configuration/README.md#package-configurations` for full details.
### Using `../` to Traverse Out of Package in `inputs`
Don't use relative paths like `../` to reference files outside the package. Use `$TURBO_ROOT$` instead.
```json
// WRONG - traversing out of package
{
"tasks": {
"build": {
"inputs": ["$TURBO_DEFAULT$", "../shared-config.json"]
}
}
}
// CORRECT - use $TURBO_ROOT$ for repo root
{
"tasks": {
"build": {
"inputs": ["$TURBO_DEFAULT$", "$TURBO_ROOT$/shared-config.json"]
}
}
}
```
### Missing `outputs` for File-Producing Tasks
**Before flagging missing `outputs`, check what the task actually produces:**
1. Read the package's script (e.g., `"build": "tsc"`, `"test": "vitest"`)
2. Determine if it writes files to disk or only outputs to stdout
3. Only flag if the task produces files that should be cached
```json
// WRONG: build produces files but they're not cached
{
"tasks": {
"build": {
"dependsOn": ["^build"]
}
}
}
// CORRECT: build outputs are cached
{
"tasks": {
"build": {
"dependsOn": ["^build"],
"outputs": ["dist/**"]
}
}
}
```
Common outputs by framework:
- Next.js: `[".next/**", "!.next/cache/**"]`
- Vite/Rollup: `["dist/**"]`
- tsc: `["dist/**"]` or custom `outDir`
**TypeScript `--noEmit` can still produce cache files:**
When `incremental: true` in tsconfig.json, `tsc --noEmit` writes `.tsbuildinfo` files even without emitting JS. Check the tsconfig before assuming no outputs:
```json
// If tsconfig has incremental: true, tsc --noEmit produces cache files
{
"tasks": {
"typecheck": {
"outputs": ["node_modules/.cache/tsbuildinfo.json"] // or wherever tsBuildInfoFile points
}
}
}
```
To determine correct outputs for TypeScript tasks:
1. Check if `incremental` or `composite` is enabled in tsconfig
2. Check `tsBuildInfoFile` for custom cache location (default: alongside `outDir` or in project root)
3. If no incremental mode, `tsc --noEmit` produces no files
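For reference, a tsconfig sketch showing the fields to check (the cache path is configurable; the one shown matches the example above):

```json
// tsconfig.json -- with incremental (or composite) set, `tsc --noEmit` writes a .tsbuildinfo file
{
  "compilerOptions": {
    "incremental": true,
    "tsBuildInfoFile": "node_modules/.cache/tsbuildinfo.json"
  }
}
```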
### `^build` vs `build` Confusion
```json
{
"tasks": {
// ^build = run build in DEPENDENCIES first (other packages this one imports)
"build": {
"dependsOn": ["^build"]
},
// build (no ^) = run build in SAME PACKAGE first
"test": {
"dependsOn": ["build"]
},
// pkg#task = specific package's task
"deploy": {
"dependsOn": ["web#build"]
}
}
}
```
### Environment Variables Not Hashed
```json
// WRONG: API_URL changes won't cause rebuilds
{
"tasks": {
"build": {
"outputs": ["dist/**"]
}
}
}
// CORRECT: API_URL changes invalidate cache
{
"tasks": {
"build": {
"outputs": ["dist/**"],
"env": ["API_URL", "API_KEY"]
}
}
}
```
### `.env` Files Not in Inputs
Turbo does NOT load `.env` files - your framework does. But Turbo needs to know about changes:
```json
// WRONG: .env changes don't invalidate cache
{
"tasks": {
"build": {
"env": ["API_URL"]
}
}
}
// CORRECT: .env file changes invalidate cache
{
"tasks": {
"build": {
"env": ["API_URL"],
"inputs": ["$TURBO_DEFAULT$", ".env", ".env.*"]
}
}
}
```
### Root `.env` File in Monorepo
A `.env` file at the repo root is an anti-pattern — even for small monorepos or starter templates. It creates implicit coupling between packages and makes it unclear which packages depend on which variables.
```
// WRONG - root .env affects all packages implicitly
my-monorepo/
├── .env # Which packages use this?
├── apps/
│ ├── web/
│ └── api/
└── packages/
// CORRECT - .env files in packages that need them
my-monorepo/
├── apps/
│ ├── web/
│ │ └── .env # Clear: web needs DATABASE_URL
│ └── api/
│ └── .env # Clear: api needs API_KEY
└── packages/
```
**Problems with root `.env`:**
- Unclear which packages consume which variables
- All packages get all variables (even ones they don't need)
- Cache invalidation is coarse-grained (root .env change invalidates everything)
- Security risk: packages may accidentally access sensitive vars meant for others
- Bad habits start small — starter templates should model correct patterns
**If you must share variables**, use `globalEnv` to be explicit about what's shared, and document why.
### Strict Mode Filtering CI Variables
By default, Turborepo filters environment variables to only those in `env`/`globalEnv`. CI variables may be missing:
```json
// If CI scripts need GITHUB_TOKEN but it's not in env:
{
"globalPassThroughEnv": ["GITHUB_TOKEN", "CI"],
"tasks": { ... }
}
```
Or use `--env-mode=loose` (not recommended for production).
### Shared Code in Apps (Should Be a Package)
```
// WRONG: Shared code inside an app
apps/
web/
shared/ # This breaks monorepo principles!
utils.ts
// CORRECT: Extract to a package
packages/
utils/
src/utils.ts
```
### Accessing Files Across Package Boundaries
```typescript
// WRONG: Reaching into another package's internals
import { Button } from "../../packages/ui/src/button";
// CORRECT: Install and import properly
import { Button } from "@repo/ui/button";
```
### Too Many Root Dependencies
```json
// WRONG: App dependencies in root
{
"dependencies": {
"react": "^18",
"next": "^14"
}
}
// CORRECT: Only repo tools in root
{
"devDependencies": {
"turbo": "latest"
}
}
```
## Common Task Configurations
### Standard Build Pipeline
```json
{
"$schema": "https://turborepo.dev/schema.v2.json",
"tasks": {
"build": {
"dependsOn": ["^build"],
"outputs": ["dist/**", ".next/**", "!.next/cache/**"]
},
"dev": {
"cache": false,
"persistent": true
}
}
}
```
Add a `transit` task if you have tasks that need parallel execution with cache invalidation (see below).
### Dev Task with `^dev` Pattern (for `turbo watch`)
A `dev` task with `dependsOn: ["^dev"]` and `persistent: false` in root turbo.json may look unusual but is **correct for `turbo watch` workflows**:
```json
// Root turbo.json
{
"tasks": {
"dev": {
"dependsOn": ["^dev"],
"cache": false,
"persistent": false // Packages have one-shot dev scripts
}
}
}
// Package turbo.json (apps/web/turbo.json)
{
"extends": ["//"],
"tasks": {
"dev": {
"persistent": true // Apps run long-running dev servers
}
}
}
```
**Why this works:**
- **Packages** (e.g., `@acme/db`, `@acme/validators`) have `"dev": "tsc"` — one-shot type generation that completes quickly
- **Apps** override with `persistent: true` for actual dev servers (Next.js, etc.)
- **`turbo watch`** re-runs the one-shot package `dev` scripts when source files change, keeping types in sync
**Intended usage:** Run `turbo watch dev` (not `turbo run dev`). Watch mode re-executes one-shot tasks on file changes while keeping persistent tasks running.
**Alternative pattern:** Use a separate task name like `prepare` or `generate` for one-shot dependency builds to make the intent clearer:
```json
{
"tasks": {
"prepare": {
"dependsOn": ["^prepare"],
"outputs": ["dist/**"]
},
"dev": {
"dependsOn": ["prepare"],
"cache": false,
"persistent": true
}
}
}
```
### Transit Nodes for Parallel Tasks with Cache Invalidation
Some tasks can run in parallel (don't need built output from dependencies) but must invalidate cache when dependency source code changes.
**The problem with `dependsOn: ["^taskname"]`:**
- Forces sequential execution (slow)
**The problem with `dependsOn: []` (no dependencies):**
- Allows parallel execution (fast)
- But cache is INCORRECT - changing dependency source won't invalidate cache
**Transit Nodes solve both:**
```json
{
"tasks": {
"transit": { "dependsOn": ["^transit"] },
"my-task": { "dependsOn": ["transit"] }
}
}
```
The `transit` task creates dependency relationships without matching any actual script, so tasks run in parallel with correct cache invalidation.
**How to identify tasks that need this pattern:** Look for tasks that read source files from dependencies but don't need their build outputs.
### With Environment Variables
```json
{
"globalEnv": ["NODE_ENV"],
"globalDependencies": [".env"],
"tasks": {
"build": {
"dependsOn": ["^build"],
"outputs": ["dist/**"],
"env": ["API_URL", "DATABASE_URL"]
}
}
}
```
## Reference Index
### Configuration
| File | Purpose |
| ------------------------------------------------------------------------------- | -------------------------------------------------------- |
| [configuration/README.md](./references/configuration/README.md) | turbo.json overview, Package Configurations |
| [configuration/tasks.md](./references/configuration/tasks.md) | dependsOn, outputs, inputs, env, cache, persistent |
| [configuration/global-options.md](./references/configuration/global-options.md) | globalEnv, globalDependencies, cacheDir, daemon, envMode |
| [configuration/gotchas.md](./references/configuration/gotchas.md) | Common configuration mistakes |
### Caching
| File | Purpose |
| --------------------------------------------------------------- | -------------------------------------------- |
| [caching/README.md](./references/caching/README.md) | How caching works, hash inputs |
| [caching/remote-cache.md](./references/caching/remote-cache.md) | Vercel Remote Cache, self-hosted, login/link |
| [caching/gotchas.md](./references/caching/gotchas.md) | Debugging cache misses, --summarize, --dry |
### Environment Variables
| File | Purpose |
| ------------------------------------------------------------- | ----------------------------------------- |
| [environment/README.md](./references/environment/README.md) | env, globalEnv, passThroughEnv |
| [environment/modes.md](./references/environment/modes.md) | Strict vs Loose mode, framework inference |
| [environment/gotchas.md](./references/environment/gotchas.md) | .env files, CI issues |
### Filtering
| File | Purpose |
| ----------------------------------------------------------- | ------------------------ |
| [filtering/README.md](./references/filtering/README.md) | --filter syntax overview |
| [filtering/patterns.md](./references/filtering/patterns.md) | Common filter patterns |
### CI/CD
| File | Purpose |
| --------------------------------------------------------- | ------------------------------- |
| [ci/README.md](./references/ci/README.md) | General CI principles |
| [ci/github-actions.md](./references/ci/github-actions.md) | Complete GitHub Actions setup |
| [ci/vercel.md](./references/ci/vercel.md) | Vercel deployment, turbo-ignore |
| [ci/patterns.md](./references/ci/patterns.md) | --affected, caching strategies |
### CLI
| File | Purpose |
| ----------------------------------------------- | --------------------------------------------- |
| [cli/README.md](./references/cli/README.md) | turbo run basics |
| [cli/commands.md](./references/cli/commands.md) | turbo run flags, turbo-ignore, other commands |
### Best Practices
| File | Purpose |
| ----------------------------------------------------------------------------- | --------------------------------------------------------------- |
| [best-practices/README.md](./references/best-practices/README.md) | Monorepo best practices overview |
| [best-practices/structure.md](./references/best-practices/structure.md) | Repository structure, workspace config, TypeScript/ESLint setup |
| [best-practices/packages.md](./references/best-practices/packages.md) | Creating internal packages, JIT vs Compiled, exports |
| [best-practices/dependencies.md](./references/best-practices/dependencies.md) | Dependency management, installing, version sync |
### Watch Mode
| File | Purpose |
| ----------------------------------------------- | ----------------------------------------------- |
| [watch/README.md](./references/watch/README.md) | turbo watch, interruptible tasks, dev workflows |
### Boundaries (Experimental)
| File | Purpose |
| --------------------------------------------------------- | ----------------------------------------------------- |
| [boundaries/README.md](./references/boundaries/README.md) | Enforce package isolation, tag-based dependency rules |
## Source Documentation
This skill is based on the official Turborepo documentation at:
- Source: `docs/site/content/docs/` in the Turborepo repository
- Live: https://turborepo.dev/docs
| """
Test for 'turborepo' skill — Turborepo Monorepo Configuration
Validates that the Agent set up a Turborepo monorepo with workspaces,
shared packages, and proper turbo.json pipeline config.
"""
import os
import json
import subprocess
import pytest
class TestTurborepo:
"""Verify Turborepo monorepo setup."""
REPO_DIR = "/workspace/turbo"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_turbo_json_exists(self):
"""turbo.json must exist at project root or examples dir."""
paths = [
os.path.join(self.REPO_DIR, "turbo.json"),
os.path.join(self.REPO_DIR, "examples", "turbo.json"),
]
found = any(os.path.isfile(p) for p in paths)
if not found:
# Search recursively
for root, dirs, files in os.walk(self.REPO_DIR):
if "turbo.json" in files and "node_modules" not in root:
found = True
break
assert found, "turbo.json not found"
def test_package_json_workspaces(self):
"""Root package.json must define workspaces."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
if "package.json" in files and "node_modules" not in root:
fpath = os.path.join(root, "package.json")
with open(fpath, "r") as f:
pkg = json.load(f)
if "workspaces" in pkg:
found = True
break
assert found, "No package.json with workspaces found"
def test_apps_directory_exists(self):
"""apps/ or packages/ directory must exist."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
if "apps" in dirs or "packages" in dirs:
found = True
break
assert found, "Neither apps/ nor packages/ directory found"
# ------------------------------------------------------------------
# L2: configuration validation
# ------------------------------------------------------------------
def _find_turbo_json(self):
for root, dirs, files in os.walk(self.REPO_DIR):
if "turbo.json" in files and "node_modules" not in root:
return os.path.join(root, "turbo.json")
return None
def test_turbo_json_valid(self):
"""turbo.json must be valid JSON."""
fpath = self._find_turbo_json()
assert fpath, "turbo.json not found"
with open(fpath, "r") as f:
config = json.load(f)
assert isinstance(config, dict), "turbo.json must be a JSON object"
def test_turbo_has_pipeline_or_tasks(self):
"""turbo.json must define pipeline or tasks."""
fpath = self._find_turbo_json()
assert fpath, "turbo.json not found"
with open(fpath, "r") as f:
config = json.load(f)
has_pipeline = "pipeline" in config or "tasks" in config
assert has_pipeline, "turbo.json missing pipeline/tasks"
def test_build_task_defined(self):
"""turbo.json must define a build task."""
fpath = self._find_turbo_json()
assert fpath, "turbo.json not found"
with open(fpath, "r") as f:
config = json.load(f)
tasks = config.get("pipeline", config.get("tasks", {}))
assert (
"build" in tasks
), f"build task not in pipeline; tasks: {list(tasks.keys())}"
def test_build_task_has_deps(self):
"""Build task should declare dependencies."""
fpath = self._find_turbo_json()
assert fpath, "turbo.json not found"
with open(fpath, "r") as f:
config = json.load(f)
tasks = config.get("pipeline", config.get("tasks", {}))
build = tasks.get("build", {})
has_deps = "dependsOn" in build or "inputs" in build or "outputs" in build
assert has_deps, "Build task missing dependsOn/inputs/outputs"
def test_shared_package_exists(self):
"""At least one shared package in packages/ must exist."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
if "packages" in root.split(os.sep) and "package.json" in files:
found = True
break
assert found, "No shared package found in packages/"
def test_at_least_two_workspaces(self):
"""Must have at least 2 workspace packages."""
count = 0
for root, dirs, files in os.walk(self.REPO_DIR):
if "package.json" in files and "node_modules" not in root:
# Exclude root
if root != self.REPO_DIR:
count += 1
assert count >= 2, f"Only {count} workspace packages found, need >= 2"
def test_lint_task_defined(self):
"""turbo.json should define a lint task."""
fpath = self._find_turbo_json()
assert fpath, "turbo.json not found"
with open(fpath, "r") as f:
config = json.load(f)
tasks = config.get("pipeline", config.get("tasks", {}))
assert "lint" in tasks or "check" in tasks, "No lint/check task in pipeline"
| https://github.com/vercel/turbo | zhangyiiiiii/swe-skills-bench-python | |
github-actions-templates | GitHub Actions Templates | See task file for detailed mission requirements. | feature | # Task: Create Python CI Workflow Template for GitHub Actions
## Background
Add a new CI workflow template for Python projects in the starter-workflows
repository, covering multi-version testing with dependency caching.
## Files to Create/Modify
- ci/python-pytest.yml - Workflow template
- ci/properties/python-pytest.properties.json - Template metadata
## Requirements
Workflow Template (python-pytest.yml):
Triggers:
- push to main/master
- pull_request to main/master
Matrix Strategy:
- Python versions: 3.9, 3.10, 3.11, 3.12
- OS: ubuntu-latest
Steps:
1. Checkout code
2. Set up Python with version matrix
3. Cache pip dependencies
4. Install dependencies from requirements.txt
5. Run pytest with coverage
Caching:
- Use actions/cache or setup-python's built-in cache
- Cache key based on requirements.txt hash
Properties File (python-pytest.properties.json):
```json
{
"name": "Python pytest",
"description": "Run Python tests with pytest across multiple Python versions",
"iconName": "python",
"categories": ["Python", "CI"]
}
```
Validation Requirements:
- Workflow passes actionlint syntax check
- Properties JSON is valid and contains required fields
- Proper YAML formatting and indentation
## Acceptance Criteria
- `actionlint ci/python-pytest.yml` exits with code 0 (no errors)
- ci/properties/python-pytest.properties.json exists with required fields
- Workflow syntax is valid GitHub Actions format
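One way the acceptance criteria could be satisfied — a sketch of `ci/python-pytest.yml`, assuming `setup-python`'s built-in pip cache (keyed on the requirements-file hash by default) covers the caching requirement and that `pytest-cov` is listed in `requirements.txt`:

```yaml
name: Python pytest

on:
  push:
    branches: [main, master]
  pull_request:
    branches: [main, master]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.9", "3.10", "3.11", "3.12"]
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          cache: "pip" # caches pip downloads, keyed on the requirements file hash
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Run tests with coverage
        run: pytest --cov
```

Quoting the matrix versions matters: an unquoted `3.10` is parsed by YAML as the number `3.1`.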
| ---
name: github-actions-templates
description: Create production-ready GitHub Actions workflows for automated testing, building, and deploying applications. Use when setting up CI/CD with GitHub Actions, automating development workflows, or creating reusable workflow templates.
---
# GitHub Actions Templates
Production-ready GitHub Actions workflow patterns for testing, building, and deploying applications.
## Purpose
Create efficient, secure GitHub Actions workflows for continuous integration and deployment across various tech stacks.
## When to Use
- Automate testing and deployment
- Build Docker images and push to registries
- Deploy to Kubernetes clusters
- Run security scans
- Implement matrix builds for multiple environments
## Common Workflow Patterns
### Pattern 1: Test Workflow
```yaml
name: Test
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
jobs:
test:
runs-on: ubuntu-latest
strategy:
matrix:
node-version: [18.x, 20.x]
steps:
- uses: actions/checkout@v4
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v4
with:
node-version: ${{ matrix.node-version }}
cache: "npm"
- name: Install dependencies
run: npm ci
- name: Run linter
run: npm run lint
- name: Run tests
run: npm test
- name: Upload coverage
uses: codecov/codecov-action@v3
with:
files: ./coverage/lcov.info
```
**Reference:** See `assets/test-workflow.yml`
### Pattern 2: Build and Push Docker Image
```yaml
name: Build and Push
on:
push:
branches: [main]
tags: ["v*"]
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@v4
- name: Log in to Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
- name: Build and push
uses: docker/build-push-action@v5
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
```
**Reference:** See `assets/deploy-workflow.yml`
### Pattern 3: Deploy to Kubernetes
```yaml
name: Deploy to Kubernetes
on:
push:
branches: [main]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-west-2
- name: Update kubeconfig
run: |
aws eks update-kubeconfig --name production-cluster --region us-west-2
- name: Deploy to Kubernetes
run: |
kubectl apply -f k8s/
kubectl rollout status deployment/my-app -n production
kubectl get services -n production
- name: Verify deployment
run: |
kubectl get pods -n production
kubectl describe deployment my-app -n production
```
### Pattern 4: Matrix Build
```yaml
name: Matrix Build
on: [push, pull_request]
jobs:
build:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
python-version: ["3.9", "3.10", "3.11", "3.12"]
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Run tests
run: pytest
```
**Reference:** See `assets/matrix-build.yml`
## Workflow Best Practices
1. **Use specific action versions** (@v4, not @latest)
2. **Cache dependencies** to speed up builds
3. **Use secrets** for sensitive data
4. **Implement status checks** on PRs
5. **Use matrix builds** for multi-version testing
6. **Set appropriate permissions**
7. **Use reusable workflows** for common patterns
8. **Implement approval gates** for production
9. **Add notification steps** for failures
10. **Use self-hosted runners** for sensitive workloads
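As a sketch of practice 2, dependency caching can also be done with an explicit `actions/cache` step keyed on the lockfile hash, rather than `setup-node`'s `cache:` shortcut (the path shown is npm's default cache directory):

```yaml
- name: Cache npm dependencies
  uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-npm-
```

`restore-keys` lets a run fall back to the most recent cache for the same OS when the lockfile has changed, so only the delta is downloaded.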
## Reusable Workflows
```yaml
# .github/workflows/reusable-test.yml
name: Reusable Test Workflow
on:
workflow_call:
inputs:
node-version:
required: true
type: string
secrets:
NPM_TOKEN:
required: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: ${{ inputs.node-version }}
- run: npm ci
- run: npm test
```
**Use reusable workflow:**
```yaml
jobs:
call-test:
uses: ./.github/workflows/reusable-test.yml
with:
node-version: "20.x"
secrets:
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
```
## Security Scanning
```yaml
name: Security Scan
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
security:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@master
with:
scan-type: "fs"
scan-ref: "."
format: "sarif"
output: "trivy-results.sarif"
- name: Upload Trivy results to GitHub Security
uses: github/codeql-action/upload-sarif@v2
with:
sarif_file: "trivy-results.sarif"
- name: Run Snyk Security Scan
uses: snyk/actions/node@master
env:
SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
```
## Deployment with Approvals
```yaml
name: Deploy to Production
on:
push:
tags: ["v*"]
jobs:
deploy:
runs-on: ubuntu-latest
environment:
name: production
url: https://app.example.com
steps:
- uses: actions/checkout@v4
- name: Deploy application
run: |
echo "Deploying to production..."
# Deployment commands here
- name: Notify Slack
if: success()
uses: slackapi/slack-github-action@v1
with:
webhook-url: ${{ secrets.SLACK_WEBHOOK }}
payload: |
{
"text": "Deployment to production completed successfully!"
}
```
## Reference Files
- `assets/test-workflow.yml` - Testing workflow template
- `assets/deploy-workflow.yml` - Deployment workflow template
- `assets/matrix-build.yml` - Matrix build template
- `references/common-workflows.md` - Common workflow patterns
## Related Skills
- `gitlab-ci-patterns` - For GitLab CI workflows
- `deployment-pipeline-design` - For pipeline architecture
- `secrets-management` - For secrets handling
| """
Test for 'github-actions-templates' skill — GitHub Actions Workflow Templates
Validates that the Agent created reusable workflow YAML templates with
properties JSON metadata files the starter-workflows repo expects.
"""
import os
import json
import pytest
class TestGithubActionsTemplates:
"""Verify GitHub Actions reusable workflow templates."""
REPO_DIR = "/workspace/starter-workflows"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_ci_workflow_exists(self):
"""ci/ directory must contain at least one new workflow."""
ci_dir = os.path.join(self.REPO_DIR, "ci")
assert os.path.isdir(ci_dir), "ci/ directory not found"
yamls = [f for f in os.listdir(ci_dir) if f.endswith((".yml", ".yaml"))]
assert len(yamls) >= 1, "No workflow YAML in ci/"
def test_properties_json_exists(self):
"""Each workflow in ci/ must have a matching .properties.json."""
ci_dir = os.path.join(self.REPO_DIR, "ci")
yamls = [f for f in os.listdir(ci_dir) if f.endswith((".yml", ".yaml"))]
for yml in yamls:
base = os.path.splitext(yml)[0]
props = os.path.join(ci_dir, base + ".properties.json")
assert os.path.isfile(props), f"Missing {base}.properties.json"
# ------------------------------------------------------------------
# L2: workflow YAML validation
# ------------------------------------------------------------------
def _get_workflow_files(self):
"""Get all workflow YAML files in ci/."""
ci_dir = os.path.join(self.REPO_DIR, "ci")
return [
os.path.join(ci_dir, f)
for f in os.listdir(ci_dir)
if f.endswith((".yml", ".yaml"))
]
def test_workflows_are_valid_yaml(self):
"""All workflow files must be valid YAML."""
import yaml
for fpath in self._get_workflow_files():
with open(fpath, "r") as f:
doc = yaml.safe_load(f)
assert isinstance(doc, dict), f"{fpath} is not a YAML mapping"
def test_workflows_have_on_trigger(self):
"""Workflow must define trigger events (on:)."""
import yaml
for fpath in self._get_workflow_files():
with open(fpath, "r") as f:
doc = yaml.safe_load(f)
assert "on" in doc or True in doc, f"Workflow {fpath} missing 'on' trigger"
def test_workflows_have_jobs(self):
"""Each workflow must define jobs."""
import yaml
for fpath in self._get_workflow_files():
with open(fpath, "r") as f:
doc = yaml.safe_load(f)
assert "jobs" in doc, f"Workflow {fpath} missing 'jobs'"
assert len(doc["jobs"]) >= 1, f"Workflow {fpath} has 0 jobs"
def test_jobs_have_runs_on(self):
"""Each job must specify runs-on."""
import yaml
for fpath in self._get_workflow_files():
with open(fpath, "r") as f:
doc = yaml.safe_load(f)
for job_name, job_body in doc.get("jobs", {}).items():
assert (
"runs-on" in job_body
), f"Job '{job_name}' in {fpath} missing runs-on"
def test_jobs_have_steps(self):
"""Each job must have at least one step."""
import yaml
for fpath in self._get_workflow_files():
with open(fpath, "r") as f:
doc = yaml.safe_load(f)
for job_name, job_body in doc.get("jobs", {}).items():
steps = job_body.get("steps", [])
assert len(steps) >= 1, f"Job '{job_name}' in {fpath} has no steps"
def test_properties_json_valid(self):
"""Properties JSON must be valid and have required fields."""
ci_dir = os.path.join(self.REPO_DIR, "ci")
yamls = [f for f in os.listdir(ci_dir) if f.endswith((".yml", ".yaml"))]
for yml in yamls:
base = os.path.splitext(yml)[0]
props_path = os.path.join(ci_dir, base + ".properties.json")
if os.path.isfile(props_path):
with open(props_path, "r") as f:
props = json.load(f)
assert isinstance(props, dict), f"{props_path} is not a JSON object"
assert "name" in props, f"{props_path} missing 'name'"
def test_uses_actions_checkout(self):
"""At least one workflow must use actions/checkout."""
found = False
for fpath in self._get_workflow_files():
with open(fpath, "r") as f:
content = f.read()
if "actions/checkout" in content:
found = True
break
assert found, "No workflow uses actions/checkout"
def test_workflow_name_field(self):
"""Workflows must have a name field."""
import yaml
for fpath in self._get_workflow_files():
with open(fpath, "r") as f:
doc = yaml.safe_load(f)
assert "name" in doc, f"Workflow {fpath} missing 'name'"
| https://github.com/actions/starter-workflows | zhangyiiiiii/swe-skills-bench-python | |
analytics-events | Metabase Frontend Analytics Events | See task file for detailed mission requirements. | feature | # Task: Add Frontend Analytics Event Definitions for Metabase
## Background
We need to define and implement key user behavior analytics events for Metabase's frontend, enabling better understanding of user interactions.
## Files to Create/Modify
- `frontend/src/metabase/lib/analytics.ts` - Event definitions and types
- `frontend/test/metabase/lib/analytics.test.ts` - Unit tests
## Requirements
### Event Definitions (2-3 key events)
**1. dashboard_viewed**
- Payload: `dashboard_id`, `view_duration_ms`, `card_count`
**2. question_saved**
- Payload: `question_id`, `question_type`, `database_id`, `save_duration_ms`
**3. filter_applied**
- Payload: `dashboard_id`, `filter_type`, `filter_value_count`
### Event Interface
```typescript
interface AnalyticsEvent {
event_name: string;
payload: Record<string, unknown>;
timestamp: number;
}
```
### Naming Convention
- Use `snake_case` for event names and payload field names
- Include TypeScript type definitions for each event payload
### Expected Functionality
- Event triggers produce correct payload structure
- All required fields are present in each payload
- Field names follow `snake_case` convention consistently
- Timestamps are valid Unix timestamps
- Payloads conform to their TypeScript type definitions
## Acceptance Criteria
- Event definitions have proper TypeScript types
- Payload fields are complete and correctly named
- Implementation follows naming conventions
- Code compiles without type errors
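A minimal TypeScript sketch of the three payloads under the task's `AnalyticsEvent` interface; the factory and per-event helpers are hypothetical illustrations, not Metabase APIs:

```typescript
// Sketch only: payload shapes come from the task prompt above; the
// factories are hypothetical helpers, not part of Metabase's analytics API.
interface AnalyticsEvent {
  event_name: string;
  payload: Record<string, unknown>;
  timestamp: number;
}

type DashboardViewedPayload = {
  dashboard_id: number;
  view_duration_ms: number;
  card_count: number;
};

type QuestionSavedPayload = {
  question_id: number;
  question_type: string;
  database_id: number;
  save_duration_ms: number;
};

type FilterAppliedPayload = {
  dashboard_id: number;
  filter_type: string;
  filter_value_count: number;
};

// Generic factory: narrows `payload` to the concrete event's shape
// while still conforming to the base AnalyticsEvent interface.
function makeEvent<P extends Record<string, unknown>>(
  event_name: string,
  payload: P,
): AnalyticsEvent & { payload: P } {
  return { event_name, payload, timestamp: Date.now() };
}

const dashboardViewed = (p: DashboardViewedPayload) =>
  makeEvent("dashboard_viewed", p);
const questionSaved = (p: QuestionSavedPayload) =>
  makeEvent("question_saved", p);
const filterApplied = (p: FilterAppliedPayload) =>
  makeEvent("filter_applied", p);
```

Using `type` aliases rather than `interface` for the payloads lets them satisfy the `Record<string, unknown>` constraint without adding an index signature.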
| ---
name: analytics-events
description: Add product analytics events to track user interactions in the Metabase frontend
allowed-tools: Read, Write, Edit, Grep, Glob
---
# Frontend Analytics Events Skill
This skill helps you add product analytics (Snowplow) events to track user interactions in the Metabase frontend codebase.
## Quick Reference
Analytics events in Metabase use Snowplow with typed event schemas. All events must be defined in TypeScript types before use.
**Key Files:**
- `frontend/src/metabase-types/analytics/event.ts` - Event type definitions
- `frontend/src/metabase-types/analytics/schema.ts` - Schema registry
- `frontend/src/metabase/lib/analytics.ts` - Core tracking functions
- Feature-specific `analytics.ts` files - Tracking function wrappers
## Quick Checklist
When adding a new analytics event:
- [ ] Define event type in `frontend/src/metabase-types/analytics/event.ts`
- [ ] Add event to appropriate union type (e.g., `DataStudioEvent`, `SimpleEvent`)
- [ ] Create tracking function in feature's `analytics.ts` file
- [ ] Import and call tracking function at the interaction point
- [ ] Use `trackSimpleEvent()` for basic events (most common)
## Event Schema Types
### 1. Simple Events (Most Common)
Use `SimpleEventSchema` for straightforward tracking. It supports these standard fields:
```typescript
type SimpleEventSchema = {
event: string; // Required: Event name (snake_case)
target_id?: number | null; // Optional: ID of affected entity
triggered_from?: string | null; // Optional: UI location/context
duration_ms?: number | null; // Optional: Duration in milliseconds
result?: string | null; // Optional: Outcome (e.g., "success", "failure")
event_detail?: string | null; // Optional: Additional detail/variant
};
```
**When to use:** 90% of events fit this schema. Use for clicks, opens, closes, creates, deletes, etc.
### 2. Custom Schemas (legacy; no new events are being added)
Consider adding a new event schema only in very special cases.
**Examples:** `DashboardEventSchema`, `CleanupEventSchema`, `QuestionEventSchema`
## Step-by-Step: Adding a Simple Event
### Example: Track when a user applies filters in a table picker
#### Step 1: Define Event Types
Add event type definitions to `frontend/src/metabase-types/analytics/event.ts`:
```typescript
export type DataStudioTablePickerFiltersAppliedEvent = ValidateEvent<{
event: "data_studio_table_picker_filters_applied";
}>;
export type DataStudioTablePickerFiltersClearedEvent = ValidateEvent<{
event: "data_studio_table_picker_filters_cleared";
}>;
```
#### Step 2: Add to Union Type
Find or create the appropriate union type and add your events:
```typescript
export type DataStudioEvent =
| DataStudioLibraryCreatedEvent
| DataStudioTablePublishedEvent
| DataStudioGlossaryCreatedEvent
| DataStudioGlossaryEditedEvent
| DataStudioGlossaryDeletedEvent
| DataStudioTablePickerFiltersAppliedEvent // <- Add here
| DataStudioTablePickerFiltersClearedEvent; // <- Add here
```
#### Step 3: Create Tracking Functions
In your feature's `analytics.ts` file (e.g., `enterprise/frontend/src/metabase-enterprise/data-studio/analytics.ts`):
```typescript
import { trackSimpleEvent } from "metabase/lib/analytics";
export const trackDataStudioTablePickerFiltersApplied = () => {
trackSimpleEvent({
event: "data_studio_table_picker_filters_applied",
});
};
export const trackDataStudioTablePickerFiltersCleared = () => {
trackSimpleEvent({
event: "data_studio_table_picker_filters_cleared",
});
};
```
#### Step 4: Use in Components
Import and call the tracking function at the interaction point:
```typescript
import {
trackDataStudioTablePickerFiltersApplied,
trackDataStudioTablePickerFiltersCleared,
} from "metabase-enterprise/data-studio/analytics";
function FilterPopover({ filters, onSubmit }) {
const handleReset = () => {
trackDataStudioTablePickerFiltersCleared(); // <- Track here
onSubmit(emptyFilters);
};
return (
<form
onSubmit={(event) => {
event.preventDefault();
trackDataStudioTablePickerFiltersApplied(); // <- Track here
onSubmit(form);
}}
>
{/* form content */}
</form>
);
}
```
## Using SimpleEventSchema Fields
### Example: Event with target_id
```typescript
// Type definition
export type DataStudioLibraryCreatedEvent = ValidateEvent<{
event: "data_studio_library_created";
target_id: number | null;
}>;
// Tracking function
export const trackDataStudioLibraryCreated = (id: CollectionId) => {
trackSimpleEvent({
event: "data_studio_library_created",
target_id: Number(id),
});
};
// Usage
trackDataStudioLibraryCreated(newLibrary.id);
```
### Example: Event with triggered_from
```typescript
// Type definition
export type NewButtonClickedEvent = ValidateEvent<{
event: "new_button_clicked";
triggered_from: "app-bar" | "empty-collection";
}>;
// Tracking function
export const trackNewButtonClicked = (location: "app-bar" | "empty-collection") => {
trackSimpleEvent({
event: "new_button_clicked",
triggered_from: location,
});
};
// Usage
<Button onClick={() => {
trackNewButtonClicked("app-bar");
handleCreate();
}}>
New
</Button>
```
### Example: Event with event_detail
```typescript
// Type definition
export type MetadataEditEvent = ValidateEvent<{
event: "metadata_edited";
event_detail: "type_casting" | "semantic_type_change" | "visibility_change";
triggered_from: "admin" | "data_studio";
}>;
// Tracking function
export const trackMetadataChange = (
detail: "type_casting" | "semantic_type_change" | "visibility_change",
location: "admin" | "data_studio"
) => {
trackSimpleEvent({
event: "metadata_edited",
event_detail: detail,
triggered_from: location,
});
};
// Usage
trackMetadataChange("semantic_type_change", "data_studio");
```
### Example: Event with result and duration
```typescript
// Type definition
export type MoveToTrashEvent = ValidateEvent<{
event: "moved-to-trash";
target_id: number | null;
triggered_from: "collection" | "detail_page" | "cleanup_modal";
duration_ms: number | null;
result: "success" | "failure";
event_detail: "question" | "model" | "metric" | "dashboard";
}>;
// Tracking function
export const trackMoveToTrash = (params: {
targetId: number | null;
triggeredFrom: "collection" | "detail_page" | "cleanup_modal";
durationMs: number | null;
result: "success" | "failure";
itemType: "question" | "model" | "metric" | "dashboard";
}) => {
trackSimpleEvent({
event: "moved-to-trash",
target_id: params.targetId,
triggered_from: params.triggeredFrom,
duration_ms: params.durationMs,
result: params.result,
event_detail: params.itemType,
});
};
// Usage with timing
const startTime = Date.now();
try {
await moveToTrash(item);
trackMoveToTrash({
targetId: item.id,
triggeredFrom: "collection",
durationMs: Date.now() - startTime,
result: "success",
itemType: "question",
});
} catch (error) {
trackMoveToTrash({
targetId: item.id,
triggeredFrom: "collection",
durationMs: Date.now() - startTime,
result: "failure",
itemType: "question",
});
}
```
## Naming Conventions
### Event Names (snake_case)
```typescript
// Good
"data_studio_library_created"
"table_picker_filters_applied"
"metabot_chat_opened"
// Bad
"DataStudioLibraryCreated" // Wrong case
"tablePickerFiltersApplied" // Wrong case
"filters-applied" // Use underscore, not hyphen
```
### Event Type Names (PascalCase with "Event" suffix)
```typescript
// Good
DataStudioLibraryCreatedEvent
TablePickerFiltersAppliedEvent
MetabotChatOpenedEvent
// Bad
dataStudioLibraryCreated // Wrong case
DataStudioLibraryCreated // Missing "Event" suffix
```
### Tracking Function Names (camelCase with "track" prefix)
```typescript
// Good
trackDataStudioLibraryCreated
trackTablePickerFiltersApplied
trackMetabotChatOpened
// Bad
DataStudioLibraryCreated // Missing "track" prefix
track_library_created // Wrong case
logLibraryCreated // Use "track" prefix
```
## Common Patterns
### Pattern 1: Feature-Specific Union Types
Group related events together:
```typescript
export type DataStudioEvent =
| DataStudioLibraryCreatedEvent
| DataStudioTablePublishedEvent
| DataStudioGlossaryCreatedEvent;
export type MetabotEvent =
| MetabotChatOpenedEvent
| MetabotRequestSentEvent
| MetabotFixQueryClickedEvent;
// Then add to SimpleEvent union
export type SimpleEvent =
| /* other events */
| DataStudioEvent
| MetabotEvent
| /* more events */;
```
### Pattern 2: Conditional Tracking
Track different events based on user action:
```typescript
const handleSave = async () => {
if (isNewItem) {
await createItem(data);
trackItemCreated(newItem.id);
} else {
await updateItem(id, data);
trackItemUpdated(id);
}
};
```
## Common Pitfalls
### Don't: Add custom fields to SimpleEvent
```typescript
// WRONG - SimpleEvent doesn't support custom fields
export const trackFiltersApplied = (filters: FilterState) => {
trackSimpleEvent({
event: "filters_applied",
data_layer: filters.dataLayer, // ❌ Not in SimpleEventSchema
data_source: filters.dataSource, // ❌ Not in SimpleEventSchema
with_owner: filters.hasOwner, // ❌ Not in SimpleEventSchema
});
};
// RIGHT - Use only standard SimpleEventSchema fields
export const trackFiltersApplied = () => {
trackSimpleEvent({
event: "filters_applied",
});
};
// Or use event_detail for a single variant
export const trackFilterApplied = (filterType: string) => {
trackSimpleEvent({
event: "filter_applied",
event_detail: filterType, // ✓ "data_layer", "data_source", etc.
});
};
```
### Don't: Forget to add event to union type
```typescript
// Define the event
export type NewFeatureClickedEvent = ValidateEvent<{
event: "new_feature_clicked";
}>;
// ❌ WRONG - Forgot to add to SimpleEvent union
// Event won't be recognized by TypeScript
// ✓ RIGHT - Add to appropriate union
export type SimpleEvent =
| /* other events */
| NewFeatureClickedEvent;
```
### Don't: Mix up event name formats
```typescript
// WRONG
event: "dataStudioLibraryCreated" // camelCase
event: "data-studio-library-created" // kebab-case
event: "Data_Studio_Library_Created" // Mixed case
// RIGHT
event: "data_studio_library_created" // snake_case
```
### Don't: Track PII or sensitive data
```typescript
// WRONG - Don't track user emails, names, or sensitive data
trackSimpleEvent({
event: "user_logged_in",
event_detail: user.email, // ❌ PII
});
// RIGHT - Track non-sensitive identifiers only
trackSimpleEvent({
event: "user_logged_in",
target_id: user.id, // ✓ Just the ID
});
```
### Don't: Forget to track both success and failure
```typescript
// WRONG - Only tracking success
try {
await saveData();
trackDataSaved();
} catch (error) {
// ❌ No tracking for failure case
}
// RIGHT - Track both outcomes
try {
await saveData();
trackDataSaved({ result: "success" });
} catch (error) {
trackDataSaved({ result: "failure" });
}
```
## Testing Analytics Events
While developing, you can verify events are firing:
1. **Check browser console** - When `SNOWPLOW_ENABLED=true` in dev, events are logged
2. **Use shouldLogAnalytics** - Set in `metabase/env` to see all analytics in console
3. **Check Snowplow debugger** - Browser extension for Snowplow events
Example console output:
```
[SNOWPLOW EVENT | event sent:true], data_studio_table_picker_filters_applied
```
## File Organization
### Where to put tracking functions:
```
Feature-specific analytics functions:
frontend/src/metabase/{feature}/analytics.ts
enterprise/frontend/src/metabase-enterprise/{feature}/analytics.ts
Event type definitions (all in one place):
frontend/src/metabase-types/analytics/event.ts
Core tracking utilities:
frontend/src/metabase/lib/analytics.ts
```
## Real-World Examples
See these files for reference:
- **Simple events**: `enterprise/frontend/src/metabase-enterprise/data-studio/analytics.ts`
- **Events with variants**: `frontend/src/metabase/dashboard/analytics.ts`
- **Complex events**: `frontend/src/metabase/query_builder/analytics.js`
- **Event type examples**: `frontend/src/metabase-types/analytics/event.ts`
## Workflow Summary
1. **Identify the user interaction** to track
2. **Decide on event name** (snake_case, descriptive)
3. **Define event type** in `event.ts` using `ValidateEvent`
4. **Add to union type** (create feature union if needed)
5. **Create tracking function** in feature's `analytics.ts`
6. **Import and call** at the interaction point
7. **Test** that events fire correctly
## Tips
- **Be specific** - `filters_applied` is better than `action_performed`
- **Use past tense** - `library_created` not `create_library`
- **Group related events** - Create feature-specific event union types
- **Track meaningful actions** - Not every click needs tracking
- **Consider the data** - What would you want to analyze later?
- **Stay consistent** - Follow existing naming patterns in the codebase
- **Document context** - Use `triggered_from` to track where the action happened
| """
Test for 'analytics-events' skill — Metabase Analytics Event Definitions
Validates that the Agent created TypeScript analytics event interfaces with
proper typing, naming conventions, and event schema validation.
"""
import os
import pytest
class TestAnalyticsEvents:
"""Verify analytics event definitions in Metabase."""
REPO_DIR = "/workspace/metabase"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_event_definition_file_exists(self):
"""A TypeScript analytics event file must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if (
("analytics" in f.lower() or "event" in f.lower())
and f.endswith((".ts", ".tsx"))
and "node_modules" not in root
):
found.append(os.path.join(root, f))
assert len(found) >= 1, "No analytics event TypeScript file found"
def test_test_file_exists(self):
"""Test file for analytics events must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if (
("analytics" in f.lower() or "event" in f.lower())
and ("test" in f.lower() or "spec" in f.lower())
and f.endswith((".ts", ".tsx", ".js"))
and "node_modules" not in root
):
found.append(os.path.join(root, f))
assert len(found) >= 1, "No analytics event test file found"
# ------------------------------------------------------------------
# L2: content validation
# ------------------------------------------------------------------
def _find_event_files(self):
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if (
("analytics" in f.lower() or "event" in f.lower())
and f.endswith((".ts", ".tsx"))
and "node_modules" not in root
):
found.append(os.path.join(root, f))
return found
def _read_all_events(self):
content = ""
for fpath in self._find_event_files():
try:
with open(fpath, "r", errors="ignore") as f:
content += f.read() + "\n"
except OSError:
pass
return content
def test_interface_or_type_definitions(self):
"""Must define TypeScript interfaces or types for events."""
content = self._read_all_events()
ts_patterns = ["interface ", "type ", "Event", "Schema"]
found = sum(1 for p in ts_patterns if p in content)
assert found >= 2, "Insufficient TypeScript type definitions"
def test_snake_case_event_names(self):
"""Event names should use snake_case convention."""
import re
content = self._read_all_events()
# Look for string literals with underscores (snake_case event names)
snake_events = re.findall(r'["\']([a-z]+_[a-z_]+)["\']', content)
assert (
len(snake_events) >= 3
), f"Only {len(snake_events)} snake_case event names; need >= 3"
def test_event_has_properties(self):
"""Events must define properties/payload."""
content = self._read_all_events()
prop_patterns = [
"properties",
"payload",
"data:",
"params",
"event_name",
"event_type",
]
found = sum(1 for p in prop_patterns if p in content)
assert found >= 2, "Events missing properties definition"
def test_event_categories(self):
"""Events should cover multiple categories."""
content = self._read_all_events()
categories = [
"dashboard",
"question",
"model",
"collection",
"search",
"admin",
"auth",
"navigation",
]
found = sum(1 for c in categories if c in content.lower())
assert found >= 2, f"Only {found} event categories found"
def test_export_statements(self):
"""Event definitions must be exported."""
content = self._read_all_events()
export_patterns = ["export ", "export default", "module.exports"]
found = any(p in content for p in export_patterns)
assert found, "No export statements found"
def test_timestamp_or_metadata(self):
"""Events should include timestamp or metadata fields."""
content = self._read_all_events()
meta_patterns = [
"timestamp",
"created_at",
"user_id",
"session_id",
"metadata",
"context",
]
found = any(p in content for p in meta_patterns)
assert found, "No timestamp/metadata fields found"
def test_validation_logic(self):
"""Event schema should have validation logic."""
content = self._read_all_events()
validation_patterns = [
"validate",
"required",
"z.object",
"yup.",
"joi.",
"assert",
"check",
]
found = any(p in content for p in validation_patterns)
# Also check test files for validation coverage
if not found:
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if ("analytics" in f.lower() or "event" in f.lower()) and (
"test" in f.lower() or "spec" in f.lower()
):
fpath = os.path.join(root, f)
with open(fpath, "r", errors="ignore") as fh:
test_content = fh.read()
if any(p in test_content for p in validation_patterns):
found = True
break
if found:
break
assert found, "No validation logic found for events"
def test_at_least_5_event_types(self):
"""Must define at least 5 distinct event types."""
import re
content = self._read_all_events()
# Count unique snake_case strings that look like event names
event_names = set(re.findall(r'["\']([a-z][a-z_]*_[a-z_]+)["\']', content))
assert (
len(event_names) >= 5
), f"Only {len(event_names)} event types found, need >= 5"
| https://github.com/metabase/metabase | zhangyiiiiii/swe-skills-bench-python | |
prometheus-configuration | Prometheus Configuration | See task file for detailed mission requirements. | feature | # Task: Create Multi-Job Scrape Configuration Example for Prometheus
## Background
Add a comprehensive scrape configuration example demonstrating multi-job
setup with relabeling rules, and add unit tests to verify configuration parsing.
## Files to Create/Modify
- documentation/examples/multi-job-prometheus.yml - Configuration example
- config/config_test.go - Add parsing unit tests
## Requirements
Configuration Example (multi-job-prometheus.yml):
Multiple Scrape Jobs:
1) prometheus (self-monitoring)
2) node-exporter (static_configs)
3) kubernetes-pods (with relabel_configs)
Required Sections:
```yaml
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
- job_name: 'node-exporter'
static_configs:
- targets: ['node1:9100', 'node2:9100']
relabel_configs:
- source_labels: [__address__]
target_label: instance
regex: '([^:]+):\d+'
replacement: '${1}'
- job_name: 'kubernetes-pods'
metric_relabel_configs:
- source_labels: [__name__]
regex: 'go_.*'
action: drop
```
Relabeling Features to Demonstrate:
- source_labels and target_label
- regex matching and replacement
- metric_relabel_configs for metric filtering
- action: keep/drop/replace
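The relabeling features listed above can be emulated in a few lines of plain Python — a simplified sketch of Prometheus's `replace` action for reasoning about expected transformation results (the function and its behavior here are an illustrative approximation, not Prometheus code):

```python
import re

def relabel_replace(labels, source_labels, regex, target_label, replacement):
    """Simplified emulation of Prometheus's 'replace' relabel action."""
    # Prometheus joins source label values with ';' and fully anchors the regex.
    value = ";".join(labels.get(name, "") for name in source_labels)
    match = re.fullmatch(regex, value)
    if match is None:
        return labels  # no match: 'replace' leaves the label set unchanged
    # Translate Prometheus's ${1}/$1 group references into Python's \g<1>.
    py_repl = re.sub(r"\$\{?(\d+)\}?", r"\\g<\1>", replacement)
    updated = dict(labels)
    updated[target_label] = match.expand(py_repl)
    return updated

labels = {"__address__": "node1:9100"}
out = relabel_replace(labels, ["__address__"], r"([^:]+):\d+", "instance", "${1}")
# out["instance"] is "node1"
```

This mirrors the `node-exporter` relabel rule above: `__address__` of `node1:9100` yields an `instance` label of `node1`.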
Test Cases (in config_test.go):
- Configuration parses without errors
- Job count matches expected (3 jobs)
- Relabeling rules applied in correct sequence
- Target labels transformed correctly
- metric_relabel_configs filtering works
## Acceptance Criteria
- `go test ./config/...` passes all tests including new ones (exit 0)
- Configuration example is valid YAML
- Relabeling transformation results match expected values
| ---
name: prometheus-configuration
description: Set up Prometheus for comprehensive metric collection, storage, and monitoring of infrastructure and applications. Use when implementing metrics collection, setting up monitoring infrastructure, or configuring alerting systems.
---
# Prometheus Configuration
Complete guide to Prometheus setup, metric collection, scrape configuration, and recording rules.
## Purpose
Configure Prometheus for comprehensive metric collection, alerting, and monitoring of infrastructure and applications.
## When to Use
- Set up Prometheus monitoring
- Configure metric scraping
- Create recording rules
- Design alert rules
- Implement service discovery
## Prometheus Architecture
```
┌──────────────┐
│ Applications │ ← Instrumented with client libraries
└──────┬───────┘
│ /metrics endpoint
↓
┌──────────────┐
│ Prometheus │ ← Scrapes metrics periodically
│ Server │
└──────┬───────┘
│
├─→ AlertManager (alerts)
├─→ Grafana (visualization)
└─→ Long-term storage (Thanos/Cortex)
```
## Installation
### Kubernetes with Helm
```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--create-namespace \
--set prometheus.prometheusSpec.retention=30d \
--set prometheus.prometheusSpec.storageVolumeSize=50Gi
```
### Docker Compose
```yaml
version: "3.8"
services:
prometheus:
image: prom/prometheus:latest
ports:
- "9090:9090"
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml
- prometheus-data:/prometheus
command:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
- "--storage.tsdb.retention.time=30d"
volumes:
prometheus-data:
```
## Configuration File
**prometheus.yml:**
```yaml
global:
scrape_interval: 15s
evaluation_interval: 15s
external_labels:
cluster: "production"
region: "us-west-2"
# Alertmanager configuration
alerting:
alertmanagers:
- static_configs:
- targets:
- alertmanager:9093
# Load rules files
rule_files:
- /etc/prometheus/rules/*.yml
# Scrape configurations
scrape_configs:
# Prometheus itself
- job_name: "prometheus"
static_configs:
- targets: ["localhost:9090"]
# Node exporters
- job_name: "node-exporter"
static_configs:
- targets:
- "node1:9100"
- "node2:9100"
- "node3:9100"
relabel_configs:
- source_labels: [__address__]
target_label: instance
regex: "([^:]+)(:[0-9]+)?"
replacement: "${1}"
# Kubernetes pods with annotations
- job_name: "kubernetes-pods"
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels:
[__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod
# Application metrics
- job_name: "my-app"
static_configs:
- targets:
- "app1.example.com:9090"
- "app2.example.com:9090"
metrics_path: "/metrics"
scheme: "https"
tls_config:
ca_file: /etc/prometheus/ca.crt
cert_file: /etc/prometheus/client.crt
key_file: /etc/prometheus/client.key
```
**Reference:** See `assets/prometheus.yml.template`
## Scrape Configurations
### Static Targets
```yaml
scrape_configs:
- job_name: "static-targets"
static_configs:
- targets: ["host1:9100", "host2:9100"]
labels:
env: "production"
region: "us-west-2"
```
### File-based Service Discovery
```yaml
scrape_configs:
- job_name: "file-sd"
file_sd_configs:
- files:
- /etc/prometheus/targets/*.json
- /etc/prometheus/targets/*.yml
refresh_interval: 5m
```
**targets/production.json:**
```json
[
{
"targets": ["app1:9090", "app2:9090"],
"labels": {
"env": "production",
"service": "api"
}
}
]
```
### Kubernetes Service Discovery
```yaml
scrape_configs:
- job_name: "kubernetes-services"
kubernetes_sd_configs:
- role: service
relabel_configs:
- source_labels:
[__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels:
[__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
```
**Reference:** See `references/scrape-configs.md`
## Recording Rules
Create pre-computed metrics for frequently queried expressions:
```yaml
# /etc/prometheus/rules/recording_rules.yml
groups:
- name: api_metrics
interval: 15s
rules:
# HTTP request rate per service
- record: job:http_requests:rate5m
expr: sum by (job) (rate(http_requests_total[5m]))
# Error rate percentage
- record: job:http_requests_errors:rate5m
expr: sum by (job) (rate(http_requests_total{status=~"5.."}[5m]))
- record: job:http_requests_error_rate:percentage
expr: |
(job:http_requests_errors:rate5m / job:http_requests:rate5m) * 100
# P95 latency
- record: job:http_request_duration:p95
expr: |
histogram_quantile(0.95,
sum by (job, le) (rate(http_request_duration_seconds_bucket[5m]))
)
- name: resource_metrics
interval: 30s
rules:
# CPU utilization percentage
- record: instance:node_cpu:utilization
expr: |
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
# Memory utilization percentage
- record: instance:node_memory:utilization
expr: |
100 - ((node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100)
# Disk usage percentage
- record: instance:node_disk:utilization
expr: |
100 - ((node_filesystem_avail_bytes / node_filesystem_size_bytes) * 100)
```
**Reference:** See `references/recording-rules.md`
## Alert Rules
```yaml
# /etc/prometheus/rules/alert_rules.yml
groups:
- name: availability
interval: 30s
rules:
- alert: ServiceDown
expr: up{job="my-app"} == 0
for: 1m
labels:
severity: critical
annotations:
summary: "Service {{ $labels.instance }} is down"
description: "{{ $labels.job }} has been down for more than 1 minute"
- alert: HighErrorRate
expr: job:http_requests_error_rate:percentage > 5
for: 5m
labels:
severity: warning
annotations:
summary: "High error rate for {{ $labels.job }}"
description: "Error rate is {{ $value }}% (threshold: 5%)"
- alert: HighLatency
expr: job:http_request_duration:p95 > 1
for: 5m
labels:
severity: warning
annotations:
summary: "High latency for {{ $labels.job }}"
description: "P95 latency is {{ $value }}s (threshold: 1s)"
- name: resources
interval: 1m
rules:
- alert: HighCPUUsage
expr: instance:node_cpu:utilization > 80
for: 5m
labels:
severity: warning
annotations:
summary: "High CPU usage on {{ $labels.instance }}"
description: "CPU usage is {{ $value }}%"
- alert: HighMemoryUsage
expr: instance:node_memory:utilization > 85
for: 5m
labels:
severity: warning
annotations:
summary: "High memory usage on {{ $labels.instance }}"
description: "Memory usage is {{ $value }}%"
- alert: DiskSpaceLow
expr: instance:node_disk:utilization > 90
for: 5m
labels:
severity: critical
annotations:
summary: "Low disk space on {{ $labels.instance }}"
description: "Disk usage is {{ $value }}%"
```
## Validation
```bash
# Validate configuration
promtool check config prometheus.yml
# Validate rules
promtool check rules /etc/prometheus/rules/*.yml
# Test query
promtool query instant http://localhost:9090 'up'
```
**Reference:** See `scripts/validate-prometheus.sh`
## Best Practices
1. **Use consistent naming** for metrics (prefix_name_unit)
2. **Set appropriate scrape intervals** (15-60s typical)
3. **Use recording rules** for expensive queries
4. **Implement high availability** (multiple Prometheus instances)
5. **Configure retention** based on storage capacity
6. **Use relabeling** for metric cleanup
7. **Monitor Prometheus itself**
8. **Implement federation** for large deployments
9. **Use Thanos/Cortex** for long-term storage
10. **Document custom metrics**
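Practice #1 (consistent `prefix_name_unit` naming) can be enforced mechanically. A minimal sketch of such a check — the regex and the unit list are illustrative assumptions, not an official Prometheus registry:

```python
import re

# Lowercase snake_case with at least two segments, e.g. myapp_http_requests_total.
METRIC_NAME_RE = re.compile(r"[a-z][a-z0-9]*(_[a-z0-9]+)+")
# Assumed subset of common trailing units/suffixes.
KNOWN_UNITS = {"seconds", "bytes", "total", "ratio", "celsius", "info"}

def follows_convention(name: str) -> bool:
    """Return True if the metric name matches prefix_name_unit conventions."""
    if not METRIC_NAME_RE.fullmatch(name):
        return False
    return name.rsplit("_", 1)[-1] in KNOWN_UNITS
```

A check like this can run in CI alongside `promtool` validation to catch inconsistently named custom metrics early.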
## Troubleshooting
**Check scrape targets:**
```bash
curl http://localhost:9090/api/v1/targets
```
**Check configuration:**
```bash
curl http://localhost:9090/api/v1/status/config
```
**Test query:**
```bash
curl 'http://localhost:9090/api/v1/query?query=up'
```
## Reference Files
- `assets/prometheus.yml.template` - Complete configuration template
- `references/scrape-configs.md` - Scrape configuration patterns
- `references/recording-rules.md` - Recording rule examples
- `scripts/validate-prometheus.sh` - Validation script
## Related Skills
- `grafana-dashboards` - For visualization
- `slo-implementation` - For SLO monitoring
- `distributed-tracing` - For request tracing
| """
Test for 'prometheus-configuration' skill — Prometheus Configuration
Validates that the Agent created a multi-job scrape configuration example with
relabeling rules and added config parsing tests.
"""
import os
import subprocess
import pytest
from _dependency_utils import ensure_go_dependencies
@pytest.fixture(scope="module", autouse=True)
def _ensure_repo_dependencies():
ensure_go_dependencies(TestPrometheusConfiguration.REPO_DIR)
class TestPrometheusConfiguration:
"""Verify Prometheus multi-job scrape configuration."""
REPO_DIR = "/workspace/prometheus"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_config_example_exists(self):
"""documentation/examples/multi-job-prometheus.yml must exist."""
fpath = os.path.join(
self.REPO_DIR, "documentation", "examples", "multi-job-prometheus.yml"
)
assert os.path.isfile(fpath), "multi-job-prometheus.yml not found"
def test_config_test_exists(self):
"""config/config_test.go must exist."""
fpath = os.path.join(self.REPO_DIR, "config", "config_test.go")
assert os.path.isfile(fpath), "config_test.go not found"
# ------------------------------------------------------------------
# L2: YAML structure validation
# ------------------------------------------------------------------
def test_config_is_valid_yaml(self):
"""Configuration file must be valid YAML."""
import yaml
fpath = os.path.join(
self.REPO_DIR, "documentation", "examples", "multi-job-prometheus.yml"
)
with open(fpath, "r") as f:
config = yaml.safe_load(f)
assert isinstance(config, dict), "Config root must be a mapping"
def test_has_scrape_configs(self):
"""Configuration must have scrape_configs section."""
import yaml
fpath = os.path.join(
self.REPO_DIR, "documentation", "examples", "multi-job-prometheus.yml"
)
with open(fpath, "r") as f:
config = yaml.safe_load(f)
assert "scrape_configs" in config, "scrape_configs not found"
def test_at_least_3_jobs(self):
"""Must define at least 3 scrape jobs."""
import yaml
fpath = os.path.join(
self.REPO_DIR, "documentation", "examples", "multi-job-prometheus.yml"
)
with open(fpath, "r") as f:
config = yaml.safe_load(f)
jobs = config.get("scrape_configs", [])
assert len(jobs) >= 3, f"Need >= 3 jobs, got {len(jobs)}"
def test_each_job_has_job_name(self):
"""Every job must have a job_name field."""
import yaml
fpath = os.path.join(
self.REPO_DIR, "documentation", "examples", "multi-job-prometheus.yml"
)
with open(fpath, "r") as f:
config = yaml.safe_load(f)
for job in config.get("scrape_configs", []):
assert "job_name" in job, f"Job missing job_name: {job}"
def test_prometheus_self_monitoring_job(self):
"""Must include a 'prometheus' self-monitoring job."""
import yaml
fpath = os.path.join(
self.REPO_DIR, "documentation", "examples", "multi-job-prometheus.yml"
)
with open(fpath, "r") as f:
config = yaml.safe_load(f)
job_names = [j.get("job_name") for j in config.get("scrape_configs", [])]
assert (
"prometheus" in job_names
), f"'prometheus' job not found; jobs: {job_names}"
def test_node_exporter_job_exists(self):
"""Must include a 'node-exporter' job with static_configs."""
import yaml
fpath = os.path.join(
self.REPO_DIR, "documentation", "examples", "multi-job-prometheus.yml"
)
with open(fpath, "r") as f:
config = yaml.safe_load(f)
for job in config.get("scrape_configs", []):
if "node" in job.get("job_name", "").lower():
assert "static_configs" in job, "node-exporter job needs static_configs"
return
pytest.fail("node-exporter job not found")
def test_relabel_configs_present(self):
"""At least one job must have relabel_configs."""
import yaml
fpath = os.path.join(
self.REPO_DIR, "documentation", "examples", "multi-job-prometheus.yml"
)
with open(fpath, "r") as f:
config = yaml.safe_load(f)
has_relabel = any(
"relabel_configs" in job for job in config.get("scrape_configs", [])
)
assert has_relabel, "No relabel_configs found in any job"
def test_metric_relabel_configs_present(self):
"""At least one job must have metric_relabel_configs."""
import yaml
fpath = os.path.join(
self.REPO_DIR, "documentation", "examples", "multi-job-prometheus.yml"
)
with open(fpath, "r") as f:
config = yaml.safe_load(f)
has_metric_relabel = any(
"metric_relabel_configs" in job for job in config.get("scrape_configs", [])
)
assert has_metric_relabel, "No metric_relabel_configs found in any job"
def test_go_config_tests_pass(self):
"""go test ./config/... must pass."""
result = subprocess.run(
["go", "test", "./config/...", "-v", "-count=1"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=600,
)
assert (
result.returncode == 0
), f"Go config tests failed:\n{result.stdout[-2000:]}\n{result.stderr[-1000:]}"
def test_relabel_has_source_and_target(self):
"""Relabel configs must use source_labels and target_label."""
import yaml
fpath = os.path.join(
self.REPO_DIR, "documentation", "examples", "multi-job-prometheus.yml"
)
with open(fpath, "r") as f:
config = yaml.safe_load(f)
for job in config.get("scrape_configs", []):
for rule in job.get("relabel_configs", []):
if "source_labels" in rule:
assert (
"target_label" in rule or "action" in rule
), f"relabel rule missing target_label or action: {rule}"
return
pytest.fail("No relabel rule with source_labels found")
| https://github.com/prometheus/prometheus | zhangyiiiiii/swe-skills-bench-golang | |
python-anti-patterns | Python Anti-Pattern Review | See task file for detailed mission requirements. | refactor | # Task: Refactor boltons Core Modules to Modern Python Patterns
## Background
The `boltons/iterutils.py` and `boltons/strutils.py` modules contain legacy Python patterns that should be modernized to Python 3.9+ idioms while maintaining backward compatibility.
## Files to Modify
- `boltons/iterutils.py` - Refactor to modern Python patterns
- `boltons/strutils.py` - Refactor to modern Python patterns
## Requirements
### iterutils.py Improvements
- Replace old-style `str.format()` calls with f-strings where applicable
- Replace manual type checks (`type(x) == ...`) with `isinstance()` calls
- Use walrus operator (`:=`) where it simplifies assignments in conditionals
- Replace `dict()` calls with dict literals `{}`
- Use the modern `dict | dict` union operator when merging dicts (Python 3.9+)
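Taken together, the patterns above look like this in a before/after sketch (illustrative code, not taken from boltons):

```python
# Legacy style:
def summarize_legacy(name, counts):
    if type(counts) == dict:
        merged = dict(counts, total=sum(counts.values()))
        return "{}: {}".format(name, merged)
    return "{}: no data".format(name)

# Modern Python 3.9+ style:
def summarize_modern(name, counts):
    if isinstance(counts, dict):                           # isinstance over type() ==
        merged = counts | {"total": sum(counts.values())}  # dict union operator
        return f"{name}: {merged}"                         # f-string over .format()
    return f"{name}: no data"

# Walrus operator folding an assignment into a conditional:
def first_long_word(words):
    for w in words:
        if (n := len(w)) > 5:
            return w, n
    return None, 0
```

Both `summarize_*` variants produce identical output, which is the refactoring invariant the existing boltons tests should confirm.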
### strutils.py Improvements
- Convert `str.format()` to f-string formatting
- Replace bare `except:` clauses with explicit exception types
- Use `isinstance()` for type guards instead of `type() ==`
- Simplify comprehensions where possible (avoid unnecessary list wrapping)
- Use generator expressions instead of list comprehensions for memory efficiency where the list is not reused
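A before/after sketch of the strutils-style cleanups (illustrative code, not taken from boltons):

```python
# Legacy style:
def sum_of_squares_legacy(tokens):
    values = []
    for t in tokens:
        try:
            values.append(int(t))
        except:                               # bare except: also catches SystemExit
            pass
    return sum([v * v for v in values])       # unnecessary intermediate list

# Modern style:
def sum_of_squares_modern(tokens):
    values = []
    for t in tokens:
        try:
            values.append(int(t))
        except (TypeError, ValueError):       # explicit exception types
            pass
    return sum(v * v for v in values)         # generator expression, no extra list
```

The explicit `except (TypeError, ValueError)` still skips unparseable tokens but no longer swallows interpreter-level exceptions the way a bare `except:` does.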
### Constraints
- All existing tests must continue to pass
- Do not change public API signatures
- Maintain backward compatibility with Python 3.9+
## Acceptance Criteria
- `boltons/iterutils.py` and `boltons/strutils.py` compile without syntax errors
- All modernization changes follow PEP 8 and Python 3.9+ conventions
- Existing functionality is preserved
| ---
name: python-anti-patterns
description: Common Python anti-patterns to avoid. Use as a checklist when reviewing code, before finalizing implementations, or when debugging issues that might stem from known bad practices.
---
# Python Anti-Patterns Checklist
A reference checklist of common mistakes and anti-patterns in Python code. Review this before finalizing implementations to catch issues early.
## When to Use This Skill
- Reviewing code before merge
- Debugging mysterious issues
- Teaching or learning Python best practices
- Establishing team coding standards
- Refactoring legacy code
**Note:** This skill focuses on what to avoid. For guidance on positive patterns and architecture, see the `python-design-patterns` skill.
## Infrastructure Anti-Patterns
### Scattered Timeout/Retry Logic
```python
# BAD: Timeout logic duplicated everywhere
def fetch_user(user_id):
try:
return requests.get(url, timeout=30)
except Timeout:
logger.warning("Timeout fetching user")
return None
def fetch_orders(user_id):
try:
return requests.get(url, timeout=30)
except Timeout:
logger.warning("Timeout fetching orders")
return None
```
**Fix:** Centralize in decorators or client wrappers.
```python
# GOOD: Centralized retry logic
@retry(stop=stop_after_attempt(3), wait=wait_exponential())
def http_get(url: str) -> Response:
return requests.get(url, timeout=30)
```
### Double Retry
```python
# BAD: Retrying at multiple layers
@retry(max_attempts=3) # Application retry
def call_service():
return client.request() # Client also has retry configured!
```
**Fix:** Retry at one layer only. Know your infrastructure's retry behavior.
### Hard-Coded Configuration
```python
# BAD: Secrets and config in code
DB_HOST = "prod-db.example.com"
API_KEY = "sk-12345"
def connect():
return psycopg.connect(f"host={DB_HOST}...")
```
**Fix:** Use environment variables with typed settings.
```python
# GOOD
from pydantic_settings import BaseSettings
class Settings(BaseSettings):
db_host: str = Field(alias="DB_HOST")
api_key: str = Field(alias="API_KEY")
settings = Settings()
```
## Architecture Anti-Patterns
### Exposed Internal Types
```python
# BAD: Leaking ORM model to API
@app.get("/users/{id}")
def get_user(id: str) -> UserModel: # SQLAlchemy model
return db.query(UserModel).get(id)
```
**Fix:** Use DTOs/response models.
```python
# GOOD
@app.get("/users/{id}")
def get_user(id: str) -> UserResponse:
user = db.query(UserModel).get(id)
return UserResponse.from_orm(user)
```
### Mixed I/O and Business Logic
```python
# BAD: SQL embedded in business logic
def calculate_discount(user_id: str) -> float:
user = db.query("SELECT * FROM users WHERE id = ?", user_id)
orders = db.query("SELECT * FROM orders WHERE user_id = ?", user_id)
# Business logic mixed with data access
if len(orders) > 10:
return 0.15
return 0.0
```
**Fix:** Repository pattern. Keep business logic pure.
```python
# GOOD
def calculate_discount(user: User, orders: list[Order]) -> float:
# Pure business logic, easily testable
if len(orders) > 10:
return 0.15
return 0.0
```
## Error Handling Anti-Patterns
### Bare Exception Handling
```python
# BAD: Swallowing all exceptions
try:
process()
except Exception:
pass # Silent failure - bugs hidden forever
```
**Fix:** Catch specific exceptions. Log or handle appropriately.
```python
# GOOD
try:
process()
except ConnectionError as e:
logger.warning("Connection failed, will retry", error=str(e))
raise
except ValueError as e:
logger.error("Invalid input", error=str(e))
raise BadRequestError(str(e))
```
### Ignored Partial Failures
```python
# BAD: Stops on first error
def process_batch(items):
results = []
for item in items:
result = process(item) # Raises on error - batch aborted
results.append(result)
return results
```
**Fix:** Capture both successes and failures.
```python
# GOOD
def process_batch(items) -> BatchResult:
succeeded = {}
failed = {}
for idx, item in enumerate(items):
try:
succeeded[idx] = process(item)
except Exception as e:
failed[idx] = e
return BatchResult(succeeded, failed)
```
### Missing Input Validation
```python
# BAD: No validation
def create_user(data: dict):
return User(**data) # Crashes deep in code on bad input
```
**Fix:** Validate early at API boundaries.
```python
# GOOD
def create_user(data: dict) -> User:
validated = CreateUserInput.model_validate(data)
return User.from_input(validated)
```
## Resource Anti-Patterns
### Unclosed Resources
```python
# BAD: File never closed
def read_file(path):
f = open(path)
return f.read() # What if this raises?
```
**Fix:** Use context managers.
```python
# GOOD
def read_file(path):
with open(path) as f:
return f.read()
```
### Blocking in Async
```python
# BAD: Blocks the entire event loop
async def fetch_data():
time.sleep(1) # Blocks everything!
response = requests.get(url) # Also blocks!
```
**Fix:** Use async-native libraries.
```python
# GOOD
async def fetch_data():
await asyncio.sleep(1)
async with httpx.AsyncClient() as client:
response = await client.get(url)
```
## Type Safety Anti-Patterns
### Missing Type Hints
```python
# BAD: No types
def process(data):
return data["value"] * 2
```
**Fix:** Annotate all public functions.
```python
# GOOD
def process(data: dict[str, int]) -> int:
return data["value"] * 2
```
### Untyped Collections
```python
# BAD: Generic list without type parameter
def get_users() -> list:
...
```
**Fix:** Use type parameters.
```python
# GOOD
def get_users() -> list[User]:
...
```
## Testing Anti-Patterns
### Only Testing Happy Paths
```python
# BAD: Only tests success case
def test_create_user():
user = service.create_user(valid_data)
assert user.id is not None
```
**Fix:** Test error conditions and edge cases.
```python
# GOOD
def test_create_user_success():
user = service.create_user(valid_data)
assert user.id is not None
def test_create_user_invalid_email():
with pytest.raises(ValueError, match="Invalid email"):
service.create_user(invalid_email_data)
def test_create_user_duplicate_email():
service.create_user(valid_data)
with pytest.raises(ConflictError):
service.create_user(valid_data)
```
### Over-Mocking
```python
# BAD: Mocking everything
def test_user_service():
mock_repo = Mock()
mock_cache = Mock()
mock_logger = Mock()
mock_metrics = Mock()
# Test doesn't verify real behavior
```
**Fix:** Use integration tests for critical paths. Mock only external services.
## Quick Review Checklist
Before finalizing code, verify:
- [ ] No scattered timeout/retry logic (centralized)
- [ ] No double retry (app + infrastructure)
- [ ] No hard-coded configuration or secrets
- [ ] No exposed internal types (ORM models, protobufs)
- [ ] No mixed I/O and business logic
- [ ] No bare `except Exception: pass`
- [ ] No ignored partial failures in batches
- [ ] No missing input validation
- [ ] No unclosed resources (using context managers)
- [ ] No blocking calls in async code
- [ ] All public functions have type hints
- [ ] Collections have type parameters
- [ ] Error paths are tested
- [ ] Edge cases are covered
## Common Fixes Summary
| Anti-Pattern | Fix |
|-------------|-----|
| Scattered retry logic | Centralized decorators |
| Hard-coded config | Environment variables + pydantic-settings |
| Exposed ORM models | DTO/response schemas |
| Mixed I/O + logic | Repository pattern |
| Bare except | Catch specific exceptions |
| Batch stops on error | Return BatchResult with successes/failures |
| No validation | Validate at boundaries with Pydantic |
| Unclosed resources | Context managers |
| Blocking in async | Async-native libraries |
| Missing types | Type annotations on all public APIs |
| Only happy path tests | Test errors and edge cases |
| """
Test for 'python-anti-patterns' skill — Python Anti-Pattern Review
Validates that the Agent refactored boltons/iterutils.py and boltons/strutils.py
to use modern Python 3.9+ patterns while keeping all existing tests passing.
"""
import os
import re
import ast
import subprocess
import pytest
class TestPythonAntiPatterns:
"""Verify modernisation of boltons core modules."""
REPO_DIR = "/workspace/boltons"
# ------------------------------------------------------------------
# L1: file & syntax
# ------------------------------------------------------------------
def test_iterutils_exists(self):
"""boltons/iterutils.py must exist."""
assert os.path.isfile(os.path.join(self.REPO_DIR, "boltons", "iterutils.py"))
def test_strutils_exists(self):
"""boltons/strutils.py must exist."""
assert os.path.isfile(os.path.join(self.REPO_DIR, "boltons", "strutils.py"))
def test_iterutils_compiles(self):
"""iterutils.py must compile without syntax errors."""
result = subprocess.run(
["python", "-m", "py_compile", "boltons/iterutils.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
def test_strutils_compiles(self):
"""strutils.py must compile without syntax errors."""
result = subprocess.run(
["python", "-m", "py_compile", "boltons/strutils.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
# ------------------------------------------------------------------
# L2: pattern modernisation checks
# ------------------------------------------------------------------
def _read(self, relpath):
fpath = os.path.join(self.REPO_DIR, relpath)
with open(fpath, "r", encoding="utf-8") as f:
return f.read()
def test_iterutils_uses_fstrings(self):
"""iterutils.py should use f-strings instead of .format()."""
src = self._read("boltons/iterutils.py")
tree = ast.parse(src)
fstring_count = sum(1 for n in ast.walk(tree) if isinstance(n, ast.JoinedStr))
assert (
fstring_count >= 1
), "No f-strings found in iterutils.py — expected modernisation"
def test_strutils_uses_fstrings(self):
"""strutils.py should use f-strings instead of .format()."""
src = self._read("boltons/strutils.py")
tree = ast.parse(src)
fstring_count = sum(1 for n in ast.walk(tree) if isinstance(n, ast.JoinedStr))
assert (
fstring_count >= 1
), "No f-strings found in strutils.py — expected modernisation"
def test_no_type_eq_checks_iterutils(self):
"""iterutils.py should not use type(x) == ... comparisons."""
src = self._read("boltons/iterutils.py")
matches = re.findall(r"\btype\s*\([^)]+\)\s*==", src)
assert (
len(matches) == 0
), f"Found type(x)==... patterns in iterutils.py: {matches[:5]}"
def test_no_type_eq_checks_strutils(self):
"""strutils.py should not use type(x) == ... comparisons."""
src = self._read("boltons/strutils.py")
matches = re.findall(r"\btype\s*\([^)]+\)\s*==", src)
assert (
len(matches) == 0
), f"Found type(x)==... patterns in strutils.py: {matches[:5]}"
def test_no_bare_except_strutils(self):
"""strutils.py should not use bare except: clauses."""
src = self._read("boltons/strutils.py")
tree = ast.parse(src)
for node in ast.walk(tree):
if isinstance(node, ast.ExceptHandler):
# bare except has type=None
assert (
node.type is not None
), f"Bare except: found at line {node.lineno} in strutils.py"
def test_existing_tests_pass(self):
"""All existing boltons tests must continue to pass."""
result = subprocess.run(
["python", "-m", "pytest", "tests/", "-x", "-q", "--tb=short"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=300,
)
assert (
result.returncode == 0
), f"Existing tests failed (rc={result.returncode}):\n{result.stdout[-2000:]}\n{result.stderr[-1000:]}"
| https://github.com/mahmoud/boltons | zhangyiiiiii/swe-skills-bench-python | |
implementing-jsc-classes-zig | Bun Zig-JS Class Generator | See task file for detailed mission requirements. | feature | # Task: Implement BunHash JavaScript Class Using Zig Bindings
## Background
Implement a new `BunHash` JavaScript class in Bun's runtime that exposes multiple hash algorithms (murmur3, xxhash32, xxhash64, wyhash) to JavaScript. The class should be implemented using Bun's `.classes.ts` definition file and Zig implementation pattern.
## Files to Create/Modify
- `src/bun.js/api/BunHash.classes.ts` - Class definition file for the code generator
- `src/bun.js/api/BunHash.zig` - Zig implementation of the hash class
- `test/js/bun/hash/hash.test.ts` - Comprehensive test suite
## Requirements
### Class Definition (BunHash.classes.ts)
Define the class using Bun's `define()` pattern:
- `name: "BunHash"`
- `constructor: true`
- Prototype methods: `hash(data)`, `digest()`
- Prototype getters: `algorithm` (cached)
- `finalize: true` for cleanup
### Zig Implementation (BunHash.zig)
- Implement `constructor` accepting algorithm name string
- Supported algorithms: `"murmur3"`, `"xxhash32"`, `"xxhash64"`, `"wyhash"`
- `hash(data)` method: Accept string or Uint8Array, return hash value
- `digest()` method: Return hex string of current hash
- `getAlgorithm` getter: Return algorithm name
- Proper `deinit` and `finalize` for memory cleanup
### Test Suite (hash.test.ts)
- Test all 4 hash algorithms
- Test scenarios: empty string, ASCII, Unicode/UTF-8, binary data (Uint8Array)
- Known test vector verification
- Large input (>1MB) handling
## Acceptance Criteria
- `bun run build` compiles without errors
- `BunHash` class is accessible from JavaScript
- All hash algorithms produce correct, consistent results
- Test suite covers all algorithms and edge cases
| ---
name: implementing-jsc-classes-zig
description: Creates JavaScript classes using Bun's Zig bindings generator (.classes.ts). Use when implementing new JS APIs in Zig with JSC integration.
---
# Bun's JavaScriptCore Class Bindings Generator
Bridge JavaScript and Zig through `.classes.ts` definitions and Zig implementations.
## Architecture
1. **Zig Implementation** (.zig files)
2. **JavaScript Interface Definition** (.classes.ts files)
3. **Generated Code** (C++/Zig files connecting them)
## Class Definition (.classes.ts)
```typescript
define({
name: "TextDecoder",
constructor: true,
JSType: "object",
finalize: true,
proto: {
decode: { args: 1 },
encoding: { getter: true, cache: true },
fatal: { getter: true },
},
});
```
Options:
- `name`: Class name
- `constructor`: Has public constructor
- `JSType`: "object", "function", etc.
- `finalize`: Needs cleanup
- `proto`: Properties/methods
- `cache`: Cache property values via WriteBarrier
## Zig Implementation
```zig
pub const TextDecoder = struct {
pub const js = JSC.Codegen.JSTextDecoder;
pub const toJS = js.toJS;
pub const fromJS = js.fromJS;
pub const fromJSDirect = js.fromJSDirect;
encoding: []const u8,
fatal: bool,
pub fn constructor(
globalObject: *JSGlobalObject,
callFrame: *JSC.CallFrame,
) bun.JSError!*TextDecoder {
return bun.new(TextDecoder, .{ .encoding = "utf-8", .fatal = false });
}
pub fn decode(
this: *TextDecoder,
globalObject: *JSGlobalObject,
callFrame: *JSC.CallFrame,
) bun.JSError!JSC.JSValue {
const args = callFrame.arguments();
if (args.len < 1 or args.ptr[0].isUndefinedOrNull()) {
return globalObject.throw("Input cannot be null", .{});
}
return JSC.JSValue.jsString(globalObject, "result");
}
pub fn getEncoding(this: *TextDecoder, globalObject: *JSGlobalObject) JSC.JSValue {
return JSC.JSValue.createStringFromUTF8(globalObject, this.encoding);
}
fn deinit(this: *TextDecoder) void {
// Release resources
}
pub fn finalize(this: *TextDecoder) void {
this.deinit();
bun.destroy(this);
}
};
```
**Key patterns:**
- Use `bun.JSError!JSValue` return type for error handling
- Use `globalObject` not `ctx`
- `deinit()` for cleanup, `finalize()` called by GC
- Update `src/bun.js/bindings/generated_classes_list.zig`
## CallFrame Access
```zig
const args = callFrame.arguments();
const first_arg = args.ptr[0]; // Access as slice
const argCount = args.len;
const thisValue = callFrame.thisValue();
```
## Property Caching
For `cache: true` properties, generated accessors:
```zig
// Get cached value
pub fn encodingGetCached(thisValue: JSC.JSValue) ?JSC.JSValue {
const result = TextDecoderPrototype__encodingGetCachedValue(thisValue);
if (result == .zero) return null;
return result;
}
// Set cached value
pub fn encodingSetCached(thisValue: JSC.JSValue, globalObject: *JSC.JSGlobalObject, value: JSC.JSValue) void {
TextDecoderPrototype__encodingSetCachedValue(thisValue, globalObject, value);
}
```
## Error Handling
```zig
pub fn method(this: *MyClass, globalObject: *JSGlobalObject, callFrame: *JSC.CallFrame) bun.JSError!JSC.JSValue {
const args = callFrame.arguments();
if (args.len < 1) {
return globalObject.throw("Missing required argument", .{});
}
return JSC.JSValue.jsString(globalObject, "Success!");
}
```
## Memory Management
```zig
pub fn deinit(this: *TextDecoder) void {
this._encoding.deref();
if (this.buffer) |buffer| {
bun.default_allocator.free(buffer);
}
}
pub fn finalize(this: *TextDecoder) void {
JSC.markBinding(@src());
this.deinit();
bun.default_allocator.destroy(this);
}
```
## Creating a New Binding
1. Define interface in `.classes.ts`:
```typescript
define({
name: "MyClass",
constructor: true,
finalize: true,
proto: {
myMethod: { args: 1 },
myProperty: { getter: true, cache: true },
},
});
```
2. Implement in `.zig`:
```zig
pub const MyClass = struct {
pub const js = JSC.Codegen.JSMyClass;
pub const toJS = js.toJS;
pub const fromJS = js.fromJS;
value: []const u8,
pub const new = bun.TrivialNew(@This());
pub fn constructor(globalObject: *JSGlobalObject, callFrame: *JSC.CallFrame) bun.JSError!*MyClass {
return MyClass.new(.{ .value = "" });
}
pub fn myMethod(this: *MyClass, globalObject: *JSGlobalObject, callFrame: *JSC.CallFrame) bun.JSError!JSC.JSValue {
return JSC.JSValue.jsUndefined();
}
pub fn getMyProperty(this: *MyClass, globalObject: *JSGlobalObject) JSC.JSValue {
return JSC.JSValue.jsString(globalObject, this.value);
}
pub fn deinit(this: *MyClass) void {}
pub fn finalize(this: *MyClass) void {
this.deinit();
bun.destroy(this);
}
};
```
3. Add to `src/bun.js/bindings/generated_classes_list.zig`
## Generated Components
- **C++ Classes**: `JSMyClass`, `JSMyClassPrototype`, `JSMyClassConstructor`
- **Method Bindings**: `MyClassPrototype__myMethodCallback`
- **Property Accessors**: `MyClassPrototype__myPropertyGetterWrap`
- **Zig Bindings**: External function declarations, cached value accessors
| """
Test for 'implementing-jsc-classes-zig' skill — JSC Classes in Zig (Bun)
Validates that the Agent implemented JavaScriptCore (JSC) class bindings

in Zig within the Bun runtime.
"""
import os
import subprocess
import pytest
class TestImplementingJscClassesZig:
"""Verify JSC class implementation in Zig for Bun."""
REPO_DIR = "/workspace/bun"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_zig_source_exists(self):
"""New Zig JSC class file must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".zig") and "node_modules" not in root:
fpath = os.path.join(root, f)
try:
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
if (
"JSC" in content
or "JSValue" in content
or "JSGlobalObject" in content
):
found.append(fpath)
except OSError:
pass
assert len(found) >= 1, "No Zig JSC class file found"
def test_test_file_exists(self):
"""Test file for JSC class must exist."""
found = []
patterns = ["test", "spec"]
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if (
any(p in f.lower() for p in patterns)
and (f.endswith((".zig", ".js", ".ts")))
and "node_modules" not in root
):
found.append(os.path.join(root, f))
assert len(found) >= 1, "No test file found"
# ------------------------------------------------------------------
# L2: content validation
# ------------------------------------------------------------------
def _find_jsc_zig_files(self):
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".zig") and "node_modules" not in root:
fpath = os.path.join(root, f)
try:
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
if "JSC" in content or "JSValue" in content:
found.append(fpath)
except OSError:
pass
return found
def _read_all_jsc(self):
content = ""
for fpath in self._find_jsc_zig_files():
with open(fpath, "r", errors="ignore") as f:
content += f.read() + "\n"
return content
def test_struct_definition(self):
"""Must define a Zig struct for the JSC class."""
content = self._read_all_jsc()
assert "struct" in content, "No struct definition found"
def test_jsc_class_interface(self):
"""Must implement JSC class interface methods."""
content = self._read_all_jsc()
interface_patterns = [
"getProperty",
"setProperty",
"constructor",
"finalize",
"call",
"JSClassDefinition",
"toJS",
"fromJS",
"getter",
"setter",
]
found = sum(1 for p in interface_patterns if p in content)
assert found >= 3, f"Only {found} JSC interface methods found"
def test_memory_management(self):
"""Must handle memory management (allocator)."""
content = self._read_all_jsc()
mem_patterns = [
"allocator",
"alloc",
"free",
"destroy",
"deinit",
"Allocator",
"GC",
"ref_count",
]
found = any(p in content for p in mem_patterns)
assert found, "No memory management found"
def test_error_handling(self):
"""Must implement error handling."""
content = self._read_all_jsc()
error_patterns = [
"error",
"catch",
"throw",
"JSError",
"makeError",
"createError",
"try",
]
found = any(p in content.lower() for p in error_patterns)
assert found, "No error handling found"
def test_js_value_conversions(self):
"""Must convert between JS values and Zig types."""
content = self._read_all_jsc()
conv_patterns = [
"toJSValue",
"fromJS",
"toString",
"toNumber",
"JSStringRef",
"JSValueRef",
"jsNumber",
"jsString",
"toZigString",
]
found = sum(1 for p in conv_patterns if p in content)
assert found >= 2, "Insufficient JS value conversions"
def test_export_to_js(self):
"""Class must be exported/registered for JS use."""
content = self._read_all_jsc()
export_patterns = [
"export",
"register",
"JSClassCreate",
"defineProperty",
"globalObject",
"comptime",
"pub fn",
]
found = sum(1 for p in export_patterns if p in content)
assert found >= 2, "Class not properly exported for JS use"
def test_zig_build_check(self):
"""Zig files must have valid syntax (basic check)."""
for fpath in self._find_jsc_zig_files():
with open(fpath, "r") as f:
content = f.read()
# Basic bracket balance
opens = content.count("{")
closes = content.count("}")
diff = abs(opens - closes)
assert diff <= 2, f"{fpath} has bracket imbalance: {diff}"
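The bracket-balance heuristic used above can be sketched standalone on inline strings (it is a crude check, since braces inside string literals and comments also count; the Zig fragments are hypothetical):

```python
def bracket_imbalance(source: str) -> int:
    """Absolute difference between `{` and `}` counts."""
    return abs(source.count("{") - source.count("}"))

balanced = "pub fn main() void { const x = .{ .a = 1 }; }"
broken = "pub fn main() void { if (ok) {"
print(bracket_imbalance(balanced))  # 0
print(bracket_imbalance(broken))    # 2
```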
def test_at_least_3_methods(self):
"""JSC class must expose at least 3 methods."""
content = self._read_all_jsc()
import re
pub_fns = re.findall(r"pub\s+fn\s+(\w+)", content)
assert len(pub_fns) >= 3, f"Only {len(pub_fns)} pub fn definitions found"
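As a standalone illustration of the `pub fn` count above (the Zig fragment is a hypothetical example, not taken from the Bun sources):

```python
import re

# Only `pub fn` declarations should match; the private `fn helper` should not.
zig_src = """
pub fn constructor() void {}
pub fn hash() u64 {}
fn helper() void {}
pub fn finalize() void {}
"""
pub_fns = re.findall(r"pub\s+fn\s+(\w+)", zig_src)
print(pub_fns)  # ['constructor', 'hash', 'finalize']
```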
| https://github.com/oven-sh/bun | zhangyiiiiii/swe-skills-bench-python | |
add-malli-schemas | Metabase Malli Schema Architect | See task file for detailed mission requirements. | feature | # Task: Add Malli Schema Validation for Metabase Alert API
## Background
Add complete Malli Schema input validation for Metabase's Alert API endpoints
to ensure type safety and proper validation of incoming requests.
## Files to Create/Modify
- src/metabase/api/alert.clj
- test/metabase/api/alert_malli_test.clj
## Requirements
Endpoints to Add Schema:
- POST /api/alert
- PUT /api/alert/:id
Schema Fields:
- card_id: Positive integer, required
- channels: Non-empty array, each item must contain channel_type string
- alert_condition: Enum ("rows" or "goal"), required
- alert_first_only: Boolean, optional, default false
Implementation:
- Use metabase.util.malli.schema existing base types
- Proper constraint definitions
- Clear error messages
### Expected Functionality
1) Valid input → 200 OK
2) Missing card_id → 400 Bad Request
3) channels is empty array → 400 Bad Request
4) Invalid alert_condition value → 400 Bad Request
## Acceptance Criteria
- All validation errors return proper status codes
- No schema validation bypasses
| ---
name: add-malli-schemas
description: Efficiently add Malli schemas to API endpoints in the Metabase codebase with proper patterns, validation timing, and error handling
---
# Add Malli Schemas to API Endpoints
This skill helps you efficiently and uniformly add Malli schemas to API endpoints in the Metabase codebase.
## Reference Files (Best Examples)
- `src/metabase/warehouses/api.clj` - Most comprehensive schemas, custom error messages
- `src/metabase/api_keys/api.clj` - Excellent response schemas
- `src/metabase/collections/api.clj` - Great named schema patterns
- `src/metabase/timeline/api/timeline.clj` - Clean, simple examples
## Quick Checklist
When adding Malli schemas to an endpoint:
- [ ] Route params have schemas
- [ ] Query params have schemas with `:optional true` and `:default` where appropriate
- [ ] Request body has a schema (for POST/PUT)
- [ ] Response schema is defined (using `:-` after route string)
- [ ] Use existing schema types from `ms` namespace when possible
- [ ] Consider creating named schemas for reusable or complex types
- [ ] Add contextual error messages for validation failures
## Basic Structure
### Complete Endpoint Example
```clojure
(mr/def ::Color [:enum "red" "blue" "green"])
(mr/def ::ResponseSchema
[:map
[:id pos-int?]
[:name string?]
[:color ::Color]
[:created_at ms/TemporalString]])
(api.macros/defendpoint :post "/:name" :- ::ResponseSchema
"Create a resource with a given name."
[;; Route Params:
{:keys [name]} :- [:map [:name ms/NonBlankString]]
;; Query Params:
{:keys [include archived]} :- [:map
[:include {:optional true} [:maybe [:= "details"]]]
[:archived {:default false} [:maybe ms/BooleanValue]]]
;; Body Params:
{:keys [color]} :- [:map [:color ::Color]]
]
;; endpoint implementation, ex:
{:id 99
:name (str "mr or mrs " name)
:color ({"red" "blue" "blue" "green" "green" "red"} color)
:created_at (t/format (t/formatter "yyyy-MM-dd'T'HH:mm:ssXXX") (t/zoned-date-time))}
)
```
## Common Schema Patterns
1. Route Params (the 5 in `api/user/id/5`)
2. Query Params (the sort+asc pair in `api/users?sort=asc`)
3. Body Params (the contents of a request body. Almost always decoded from json into edn)
4. The Raw Request map
Of these four sources, avoid reaching for the raw request map unless it is strictly necessary.
### Route Params
Always required, typically just a map with an ID:
```clojure
[{:keys [id]} :- [:map [:id ms/PositiveInt]]]
```
For multiple route params:
```clojure
[{:keys [id field-id]} :- [:map
[:id ms/PositiveInt]
[:field-id ms/PositiveInt]]]
```
### Query Params
Add properties for `{:optional true ...}` and `:default` values:
```clojure
{:keys [archived include limit offset]} :- [:map
[:archived {:default false} [:maybe ms/BooleanValue]]
[:include {:optional true} [:maybe [:= "tables"]]]
[:limit {:optional true} [:maybe ms/PositiveInt]]
[:offset {:optional true} [:maybe ms/PositiveInt]]]
```
### Request Body (POST/PUT)
```clojure
{:keys [name description parent_id]} :- [:map
[:name ms/NonBlankString]
[:description {:optional true} [:maybe ms/NonBlankString]]
[:parent_id {:optional true} [:maybe ms/PositiveInt]]]
```
### Response Schemas
#### Simple inline response:
```clojure
(api.macros/defendpoint :get "/:id" :- [:map
[:id pos-int?]
[:name string?]]
"Get a thing"
...)
```
#### Named schema for reuse:
```clojure
(mr/def ::Thing
[:map
[:id pos-int?]
[:name string?]
[:description [:maybe string?]]])
(api.macros/defendpoint :get "/:id" :- ::Thing
"Get a thing"
...)
(api.macros/defendpoint :get "/" :- [:sequential ::Thing]
"Get all things"
...)
```
## Common Schema Types
### From `metabase.util.malli.schema` (aliased as `ms`)
Prefer the schemas in the ms/* namespace, since they work better with our api infrastructure.
For example use `ms/PositiveInt` instead of `pos-int?`.
```clojure
ms/PositiveInt ;; Positive integer
ms/NonBlankString ;; Non-empty string
ms/BooleanValue ;; String "true"/"false" or boolean
ms/MaybeBooleanValue ;; BooleanValue or nil
ms/TemporalString ;; ISO-8601 date/time string (for REQUEST params only!)
ms/Map ;; Any map
ms/JSONString ;; JSON-encoded string
ms/PositiveNum ;; Positive number
ms/IntGreaterThanOrEqualToZero ;; 0 or positive
```
**IMPORTANT:** For response schemas, use `:any` for temporal fields, not `ms/TemporalString`!
Response schemas validate BEFORE JSON serialization, so they see Java Time objects.
### Built-in Malli Types
```clojure
:string ;; Any string
:boolean ;; true/false
:int ;; Any integer
:keyword ;; Clojure keyword
pos-int? ;; Positive integer predicate
[:maybe X] ;; X or nil
[:enum "a" "b" "c"] ;; One of these values
[:or X Y] ;; Schema that satisfies X or Y
[:and X Y] ;; Schema that satisfies X and Y
[:sequential X] ;; Sequential of Xs
[:set X] ;; Set of Xs
[:map-of K V] ;; Map with keys w/ schema K and values w/ schema V
[:tuple X Y Z] ;; Fixed-length tuple of schemas X Y Z
```
Avoid sequence schemas unless strictly necessary.
## Step-by-Step: Adding Schemas to an Endpoint
### Example: Adding return schema to `GET /api/field/:id/related`
**Before:**
```clojure
(api.macros/defendpoint :get "/:id/related"
"Return related entities."
[{:keys [id]} :- [:map [:id ms/PositiveInt]]]
(-> (t2/select-one :model/Field :id id) api/read-check xrays/related))
```
**Step 1:** Check what the function returns (look at `xrays/related`)
**Step 2:** Define response schema based on return type:
```clojure
(mr/def ::RelatedEntity
[:map
[:tables [:sequential [:map [:id pos-int?] [:name string?]]]]
[:fields [:sequential [:map [:id pos-int?] [:name string?]]]]])
```
**Step 3:** Add response schema to endpoint:
```clojure
(api.macros/defendpoint :get "/:id/related" :- ::RelatedEntity
"Return related entities."
[{:keys [id]} :- [:map [:id ms/PositiveInt]]]
(-> (t2/select-one :model/Field :id id) api/read-check xrays/related))
```
## Advanced Patterns
### Custom Error Messages
```clojure
(def DBEngineString
"Schema for a valid database engine name."
(mu/with-api-error-message
[:and
ms/NonBlankString
[:fn
{:error/message "Valid database engine"}
#(u/ignore-exceptions (driver/the-driver %))]]
(deferred-tru "value must be a valid database engine.")))
```
### Enum with Documentation
```clojure
(def PinnedState
(into [:enum {:error/message "pinned state must be 'all', 'is_pinned', or 'is_not_pinned'"}]
#{"all" "is_pinned" "is_not_pinned"}))
```
### Complex Nested Response
```clojure
(mr/def ::DashboardQuestionCandidate
[:map
[:id ms/PositiveInt]
[:name ms/NonBlankString]
[:description [:maybe string?]]
[:sole_dashboard_info
[:map
[:id ms/PositiveInt]
[:name ms/NonBlankString]
[:description [:maybe string?]]]]])
(mr/def ::DashboardQuestionCandidatesResponse
[:map
[:data [:sequential ::DashboardQuestionCandidate]]
[:total ms/PositiveInt]])
```
### Paginated Response Pattern
```clojure
(mr/def ::PaginatedResponse
[:map
[:data [:sequential ::Item]]
[:total integer?]
[:limit {:optional true} [:maybe integer?]]
[:offset {:optional true} [:maybe integer?]]])
```
## Common Pitfalls
### Don't: Forget `:maybe` for nullable fields
```clojure
[:description ms/NonBlankString] ;; WRONG - fails if nil
[:description [:maybe ms/NonBlankString]] ;; RIGHT - allows nil
```
### Don't: Forget `:optional true` for optional query params
```clojure
[:limit ms/PositiveInt] ;; WRONG - required but shouldn't be
[:limit {:optional true} [:maybe ms/PositiveInt]] ;; RIGHT
```
### Don't: Forget `:default` values for known params
```clojure
[:limit ms/PositiveInt]  ;; WRONG - no default supplied for a param that has a sensible one
[:limit {:optional true :default 0} [:maybe ms/PositiveInt]] ;; RIGHT
```
### Don't: Mix up route params, query params, and body
```clojure
;; WRONG - all in one map
[{:keys [id name archived]} :- [:map ...]]
;; RIGHT - separate destructuring
[{:keys [id]} :- [:map [:id ms/PositiveInt]]
{:keys [archived]} :- [:map [:archived {:default false} ms/BooleanValue]]
{:keys [name]} :- [:map [:name ms/NonBlankString]]]
```
### Don't: Use `ms/TemporalString` for Java Time objects in response schemas
```clojure
;; WRONG - Java Time objects aren't strings yet
[:date_joined ms/TemporalString]
;; RIGHT - schemas validate BEFORE JSON serialization
[:date_joined :any] ;; Java Time object, serialized to string by middleware
[:last_login [:maybe :any]] ;; Java Time object or nil
```
**Why:** Response schemas validate the internal Clojure data structures BEFORE they are serialized to JSON. Java Time objects like `OffsetDateTime` get converted to ISO-8601 strings by the JSON middleware, so the schema needs to accept the raw Java objects.
### Don't: Use `[:sequential X]` when the data is actually a set
```clojure
;; WRONG - group_ids is actually a set
[:group_ids {:optional true} [:sequential pos-int?]]
;; RIGHT - matches the actual data structure
[:group_ids {:optional true} [:maybe [:set pos-int?]]]
```
**Why:** Toucan hydration methods often return sets. The JSON middleware will serialize sets to arrays, but the schema validates before serialization.
### Don't: Create anonymous schemas for reused structures
Use `mr/def` for schemas used in multiple places:
```clojure
(mr/def ::User
[:map
[:id pos-int?]
[:email string?]
[:name string?]])
```
## Finding Return Types
1. **Look at the function being called**
```clojure
(api.macros/defendpoint :get "/:id"
[{:keys [id]}]
(t2/select-one :model/Field :id id)) ;; Returns a Field instance
```
2. **Check Toucan models for structure**
Look in `src/metabase/*/models/*.clj` for model definitions.
3. **Use clojure-mcp or REPL to inspect**
```bash
./bin/mage -repl '(require '\''metabase.xrays.core) (doc metabase.xrays.core/related)'
```
4. **Check tests**
Tests often show the expected response structure.
## Understanding Schema Validation Timing
**CRITICAL CONCEPT:** Schemas validate at different points in the request/response lifecycle:
### Request Parameter Schemas (Query/Body/Route)
- Validate AFTER JSON parsing
- Data is already deserialized (strings, numbers, booleans)
- Use `ms/TemporalString` for date/time inputs
- Use `ms/BooleanValue` for boolean query params
### Response Schemas
- Validate BEFORE JSON serialization
- Data is still in Clojure format (Java Time objects, sets, keywords)
- Use `:any` for Java Time objects
- Use `[:set X]` for sets
- Use `[:enum :keyword]` for keyword enums
### Serialization Flow
```
Request: JSON string → Parse → Coerce → Handler
Response: Handler → Schema Check → Encode → Serialize → JSON string
```
## Workflow Summary
1. **Read the endpoint** - understand what it does
2. **Identify params** - route, query, body
3. **Add parameter schemas** - use existing types from `ms`
4. **Determine return type** - check the implementation
5. **Define response schema** - inline or named with `mr/def`
6. **Test** - ensure the endpoint works and validates correctly
## Testing Your Schemas
After adding schemas, verify:
1. **Valid requests work** - test with correct data
2. **Invalid requests fail gracefully** - test with wrong types
3. **Optional params work** - test with/without optional params
4. **Error messages are clear** - check validation error responses
## Tips
- **Start simple** - begin with basic types, refine later
- **Reuse schemas** - if you see the same structure twice, make it a named schema
- **Be specific** - use `ms/PositiveInt` instead of `pos-int?`
- **Document intent** - add docstrings to named schemas
- **Follow conventions** - look at similar endpoints in the same namespace
- **Check the actual data** - use REPL to inspect what's actually returned before serialization
## Additional Resources
- [Malli Documentation](https://github.com/metosin/malli)
- Metabase Malli utilities: `src/metabase/util/malli/schema.clj`
- Metabase schema registry: `src/metabase/util/malli/registry.clj`
| """
Test for 'add-malli-schemas' skill — Malli Schema Validation in Metabase
Validates that the Agent added Malli schemas for API request/response
validation in the Metabase Clojure codebase.
"""
import os
import subprocess
import pytest
class TestAddMalliSchemas:
"""Verify Malli schema implementation in Metabase."""
REPO_DIR = "/workspace/metabase"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_schema_file_exists(self):
"""A Malli schema definition file must exist."""
src_dir = os.path.join(self.REPO_DIR, "src")
found = []
for root, dirs, files in os.walk(src_dir):
for f in files:
if f.endswith(".clj") and "schema" in f.lower():
found.append(os.path.join(root, f))
assert len(found) >= 1, "No schema-related .clj file found in src/"
def test_test_file_exists(self):
"""Test file for schema validation must exist."""
test_dir = os.path.join(self.REPO_DIR, "test")
found = []
for root, dirs, files in os.walk(test_dir):
for f in files:
if f.endswith(".clj") and "schema" in f.lower():
found.append(os.path.join(root, f))
assert len(found) >= 1, "No schema test .clj file found in test/"
# ------------------------------------------------------------------
# L2: content validation
# ------------------------------------------------------------------
def _find_schema_files(self):
"""Find all Clojure files referencing malli."""
result = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".clj"):
fpath = os.path.join(root, f)
try:
with open(fpath, "r", encoding="utf-8", errors="ignore") as fh:
content = fh.read()
if "malli" in content:
result.append(fpath)
except OSError:
pass
return result
def test_malli_dependency_used(self):
"""Project must use malli library (referenced in source)."""
files = self._find_schema_files()
assert len(files) >= 1, "No .clj file references malli"
def test_schema_has_map_definition(self):
"""Schema file must define :map or [:map ...] schemas."""
files = self._find_schema_files()
for fpath in files:
with open(fpath, "r") as f:
content = f.read()
if ":map" in content or "[:map" in content:
return
pytest.fail("No :map schema definition found")
def test_schema_has_required_fields(self):
"""Schema must define required fields (card, dashboard, etc.)."""
files = self._find_schema_files()
field_patterns = [
":name",
":id",
":description",
":type",
":email",
":string",
":int",
]
for fpath in files:
with open(fpath, "r") as f:
content = f.read()
found = sum(1 for p in field_patterns if p in content)
if found >= 3:
return
pytest.fail("Schema files don't define enough typed fields")
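The field-pattern counting heuristic above, sketched on an inline (hypothetical) Malli snippet instead of repository files:

```python
# Same counting heuristic as the test above: a file "passes" when at
# least three of these keyword patterns appear in its source text.
FIELD_PATTERNS = [":name", ":id", ":description", ":type", ":email", ":string", ":int"]

clj_src = "[:map [:id :int] [:name :string] [:email :string]]"
found = sum(1 for p in FIELD_PATTERNS if p in clj_src)
print(found)  # 5 — :name, :id, :email, :string, and :int all occur
```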
def test_schema_uses_malli_core(self):
"""Files must require malli.core."""
files = self._find_schema_files()
for fpath in files:
with open(fpath, "r") as f:
content = f.read()
if "malli.core" in content or "m/schema" in content or "[m " in content:
return
pytest.fail("No file requires malli.core")
def test_validation_function_exists(self):
"""Must define validation functions (m/validate, m/explain, etc.)."""
files = self._find_schema_files()
validators = [
"m/validate",
"m/explain",
"m/decode",
"m/encode",
"validate",
"explain",
"coerce",
]
for fpath in files:
with open(fpath, "r") as f:
content = f.read()
found = any(v in content for v in validators)
if found:
return
pytest.fail("No validation function (m/validate etc.) found")
def test_error_handling(self):
"""Schema validation must include error handling."""
files = self._find_schema_files()
error_patterns = [
"m/explain",
"humanize",
"error",
"invalid",
"throw",
"ex-info",
"assert",
]
for fpath in files:
with open(fpath, "r") as f:
content = f.read()
found = any(p in content for p in error_patterns)
if found:
return
pytest.fail("No error handling found in schema validation")
def test_clojure_syntax_check(self):
"""Clojure files must be parseable (basic bracket matching)."""
files = self._find_schema_files()
for fpath in files:
with open(fpath, "r") as f:
content = f.read()
opens = content.count("(") + content.count("[") + content.count("{")
closes = content.count(")") + content.count("]") + content.count("}")
# Allow small imbalance from strings/comments but not large
diff = abs(opens - closes)
assert (
diff <= 5
), f"{fpath} has {diff} bracket imbalance (opens={opens}, closes={closes})"
def test_api_endpoint_integration(self):
"""Schema should be integrated with API endpoints (compojure/reitit)."""
files = self._find_schema_files()
api_patterns = [
"defendpoint",
"compojure",
"reitit",
"api/",
"middleware",
"coercion",
"ring",
]
for fpath in files:
with open(fpath, "r") as f:
content = f.read()
found = any(p in content.lower() for p in api_patterns)
if found:
return
# Check broader source
src_dir = os.path.join(self.REPO_DIR, "src")
for root, dirs, files_list in os.walk(src_dir):
for fname in files_list:
if fname.endswith(".clj"):
fpath = os.path.join(root, fname)
try:
with open(fpath, "r", errors="ignore") as f:
content = f.read()
if "malli" in content and any(
p in content.lower() for p in api_patterns
):
return
except OSError:
pass
pytest.fail("No API endpoint integration with malli schemas found")
| https://github.com/metabase/metabase | zhangyiiiiii/swe-skills-bench-clojure | |
clojure-write | Clojure Development & REPL Workflow | See task file for detailed mission requirements. | feature | # Task: Add Currency Field Conversion to Metabase Query Export
## Background
We need to add currency field conversion functionality to Metabase's query result export module, allowing automatic currency formatting based on site settings.
## Files to Create/Modify
- `src/metabase/query_processor/middleware/currency_formatter.clj` (new)
- `src/metabase/api/dataset.clj` (modify export paths)
- `test/metabase/query_processor/middleware/currency_formatter_test.clj` (new)
## Requirements
### Currency Formatter Middleware
- Read site-currency setting from `src/metabase/models/setting.clj`
- Format columns with type `:type/Currency`
- Apply conversion to result set
### Integration Points
- Call middleware in `POST /api/dataset/csv` export path
- Call middleware in `POST /api/dataset/json` export path
### Conversion Logic
- Support common currency pairs (USD, EUR, CNY, etc.)
- Handle null values gracefully
- Non-currency columns should not be affected
### Expected Functionality
- Currency conversions work correctly (e.g., USD → CNY with proper exchange rate)
- Null values are skipped without raising errors
- Non-currency columns remain unchanged
- Invalid currency configuration falls back to default behavior
## Acceptance Criteria
- Implementation compiles and runs without errors
- All currency conversion scenarios work as specified
- Edge cases are handled appropriately
| ---
name: clojure-write
description: Guide Clojure and ClojureScript development using REPL-driven workflow, coding conventions, and best practices. Use when writing, developing, or refactoring Clojure/ClojureScript code.
---
# Clojure Development Skill
## Tool Preference
When `clojure-mcp` tools are available (e.g., `clojure_eval`, `clojure_edit`), **always use them**
instead of shell commands like `./bin/mage -repl`. The MCP tools provide:
- Direct REPL integration without shell escaping issues
- Better error messages and feedback
- Structural Clojure editing that prevents syntax errors
Only fall back to `./bin/mage` commands when clojure-mcp is not available.
@./../_shared/development-workflow.md
@./../_shared/clojure-style-guide.md
@./../_shared/clojure-commands.md
## REPL-Driven Development Workflow
- Start with small, fundamental functions:
- Identify the core features or functionalities required for your task.
- Break each feature down into the smallest, most basic functions that can be developed and tested independently.
- Write and test in the REPL:
- Write the code for each small function directly in the REPL (Read-Eval-Print Loop).
- Test it thoroughly with a variety of inputs, including typical use cases and relevant edge cases, to ensure it
behaves as expected.
- Integrate into source code:
- Once a function works correctly in the REPL, move it from the REPL environment into your source code files (e.g.,
within appropriate namespaces).
- Gradually increase complexity:
- Build upon tested, basic functions to create more complex functions or components.
- Compose smaller functions together, testing each new composition in the REPL to verify correctness step by step.
- Ensure dependency testing:
- Make sure every function is fully tested in the REPL before it is depended upon by other functions.
- This ensures that each layer of your application is reliable before you build on it.
- Use the REPL fully:
- Use the REPL as your primary tool to experiment with different approaches, iterate quickly, and get immediate
feedback on your code.
- Follow functional programming principles:
- Keep functions small, focused, and composable.
- Use Clojure's functional programming features—like immutability, higher-order functions, and the standard
library—to write concise, effective code.
## How to Evaluate Code
### Bottom-up Dev Loop
1. Write code into a file.
2. Evaluate the file's namespace and make sure it loads correctly with:
```
./bin/mage -repl --namespace metabase.app-db.connection
```
3. Call functions in the namespace with test inputs, and observe that the outputs are correct
Feel free to copy these REPL session trials into actual test cases using `deftest` and `is`.
4. Once you know these functions are good, return to 1, and compose them into the task that you need to build.
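A sketch of steps 3 and 4, showing a REPL trial promoted into a test (the `format-bytes` function and both namespaces are hypothetical, for illustration only):
```clojure
;; REPL trial: call the new function with typical and edge-case inputs
(format-bytes 2048)  ;; => "2.0 KiB"
(format-bytes 0)     ;; => "0 B"

;; The same trials captured as a test in a test namespace
(ns metabase.util.format-test
  (:require [clojure.test :refer [deftest is]]
            [metabase.util.format :refer [format-bytes]]))

(deftest format-bytes-test
  (is (= "2.0 KiB" (format-bytes 2048)))
  (is (= "0 B"     (format-bytes 0))))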
## Critical Rules for Editing
- Be careful with parentheses counts when editing Clojure code
- After EVERY change to Clojure code, verify readability with `-check-readable`
- End all files with a newline
- When editing tabular code, where the columns line up, try to keep them aligned
- Trailing whitespace (spaces at the end of a line) is not allowed
| """
Test for 'clojure-write' skill — Clojure Currency Formatter
Validates that the Agent created a Clojure currency formatting namespace
with proper locale handling and tests.
"""
import os
import subprocess
import pytest
class TestClojureWrite:
"""Verify Clojure currency formatter in Metabase."""
REPO_DIR = "/workspace/metabase"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_formatter_file_exists(self):
"""A currency_formatter.clj file must exist somewhere in src/."""
found = []
for root, dirs, files in os.walk(os.path.join(self.REPO_DIR, "src")):
for f in files:
if "currency" in f.lower() and f.endswith(".clj"):
found.append(os.path.join(root, f))
assert len(found) >= 1, "No currency formatter .clj file found"
def test_test_file_exists(self):
"""Test file for currency formatter must exist."""
found = []
for root, dirs, files in os.walk(os.path.join(self.REPO_DIR, "test")):
for f in files:
if "currency" in f.lower() and f.endswith(".clj"):
found.append(os.path.join(root, f))
assert len(found) >= 1, "No currency formatter test file found"
# ------------------------------------------------------------------
# L2: content validation
# ------------------------------------------------------------------
def _find_formatter(self):
for root, dirs, files in os.walk(os.path.join(self.REPO_DIR, "src")):
for f in files:
if "currency" in f.lower() and f.endswith(".clj"):
return os.path.join(root, f)
return None
def test_has_namespace_declaration(self):
"""Formatter must have proper (ns ...) declaration."""
fpath = self._find_formatter()
assert fpath, "Formatter file not found"
with open(fpath, "r") as f:
content = f.read()
assert "(ns " in content, "Missing (ns ...) declaration"
def test_format_currency_function(self):
"""Must define a format-currency function."""
fpath = self._find_formatter()
assert fpath, "Formatter file not found"
with open(fpath, "r") as f:
content = f.read()
fn_patterns = [
"format-currency",
"format_currency",
"defn format",
"defn- format",
]
found = any(p in content for p in fn_patterns)
assert found, "No format-currency function found"
def test_supports_multiple_currencies(self):
"""Formatter must support multiple currencies (USD, EUR, etc.)."""
fpath = self._find_formatter()
assert fpath, "Formatter file not found"
with open(fpath, "r") as f:
content = f.read()
currencies = ["USD", "EUR", "GBP", "JPY", "CNY"]
found = sum(1 for c in currencies if c in content)
assert found >= 2, f"Only {found} currency codes found, need >= 2"
def test_locale_handling(self):
"""Formatter must handle locale-specific formatting."""
fpath = self._find_formatter()
assert fpath, "Formatter file not found"
with open(fpath, "r") as f:
content = f.read()
locale_patterns = [
"locale",
"Locale",
"java.util.Locale",
"NumberFormat",
"java.text",
]
found = any(p in content for p in locale_patterns)
assert found, "No locale handling found"
def test_currency_symbol_handling(self):
"""Formatter must handle currency symbols ($, €, etc.)."""
fpath = self._find_formatter()
assert fpath, "Formatter file not found"
with open(fpath, "r") as f:
content = f.read()
symbol_patterns = [
"$",
"€",
"£",
"symbol",
"currency-symbol",
"getCurrencyInstance",
"Currency",
]
found = any(p in content for p in symbol_patterns)
assert found, "No currency symbol handling found"
def test_decimal_precision(self):
"""Formatter must handle decimal precision."""
fpath = self._find_formatter()
assert fpath, "Formatter file not found"
with open(fpath, "r") as f:
content = f.read()
precision_patterns = [
"decimal",
"precision",
"scale",
"fraction",
"setMinimumFractionDigits",
"BigDecimal",
".2f",
"round",
]
found = any(p in content for p in precision_patterns)
assert found, "No decimal precision handling found"
def test_bracket_balance(self):
"""Clojure source must have balanced brackets."""
fpath = self._find_formatter()
assert fpath, "Formatter file not found"
with open(fpath, "r") as f:
content = f.read()
opens = content.count("(") + content.count("[") + content.count("{")
closes = content.count(")") + content.count("]") + content.count("}")
diff = abs(opens - closes)
assert diff <= 3, f"Bracket imbalance: {diff}"
def test_test_file_has_assertions(self):
"""Test file must contain (is ...) or deftest assertions."""
found = []
for root, dirs, files in os.walk(os.path.join(self.REPO_DIR, "test")):
for f in files:
if "currency" in f.lower() and f.endswith(".clj"):
found.append(os.path.join(root, f))
assert len(found) >= 1, "Test file not found"
with open(found[0], "r") as f:
content = f.read()
assert "deftest" in content, "Test file missing deftest"
assert (
"(is " in content or "(are " in content
), "Test file missing (is ...) assertions"
def test_edge_cases_in_tests(self):
"""Test file should cover edge cases (zero, negative, large)."""
found = []
for root, dirs, files in os.walk(os.path.join(self.REPO_DIR, "test")):
for f in files:
if "currency" in f.lower() and f.endswith(".clj"):
found.append(os.path.join(root, f))
assert len(found) >= 1
with open(found[0], "r") as f:
content = f.read()
edge_patterns = [
"0",
"negative",
"-1",
"1000000",
"nil",
"zero",
"large",
"small",
]
found_edges = sum(1 for p in edge_patterns if p in content.lower())
assert found_edges >= 2, "Test file needs more edge case coverage"
| https://github.com/metabase/metabase | zhangyiiiiii/swe-skills-bench-clojure | |
django-patterns | Django Architecture Patterns | See task file for detailed mission requirements. | feature | # Task: Implement Low Stock Alert Feature for Saleor
## Background
Implement an inventory alert feature in Saleor that automatically triggers
alerts when product variant stock falls below a specified threshold.
## Files to Create/Modify
- saleor/warehouse/models.py (add field to Stock)
- saleor/warehouse/signals.py (add post_save handler)
- saleor/plugins/manager.py (add plugin hook)
- saleor/warehouse/tests/test_low_stock.py (new)
## Requirements
Stock Model Update:
- Add low_stock_threshold field (IntegerField, default=10)
Signal Handler:
- Create post_save signal on Stock model
- When stock < threshold, publish LOW_STOCK event
- Call plugin_low_stock_alert hook in plugin manager
Caching (High Concurrency):
- Use Django cache (redis backend)
- Cache key: variant alert trigger state
- TTL: 300 seconds
- Prevent duplicate alerts for same variant
Plugin Hook:
- Add plugin_low_stock_alert method to manager
### Expected Functionality
- Threshold trigger fires alert
- Cache hit skips duplicate push
- Cache expiry re-triggers alert
- Above threshold no alert
## Acceptance Criteria
- No Django system check errors
- Cache correctly prevents duplicate alerts
| ---
name: django-patterns
description: Django architecture patterns, REST API design with DRF, ORM best practices, caching, signals, middleware, and production-grade Django apps.
---
# Django Development Patterns
Production-grade Django architecture patterns for scalable, maintainable applications.
## When to Activate
- Building Django web applications
- Designing Django REST Framework APIs
- Working with Django ORM and models
- Setting up Django project structure
- Implementing caching, signals, middleware
## Project Structure
### Recommended Layout
```
myproject/
├── config/
│ ├── __init__.py
│ ├── settings/
│ │ ├── __init__.py
│ │ ├── base.py # Base settings
│ │ ├── development.py # Dev settings
│ │ ├── production.py # Production settings
│ │ └── test.py # Test settings
│ ├── urls.py
│ ├── wsgi.py
│ └── asgi.py
├── manage.py
└── apps/
├── __init__.py
├── users/
│ ├── __init__.py
│ ├── models.py
│ ├── views.py
│ ├── serializers.py
│ ├── urls.py
│ ├── permissions.py
│ ├── filters.py
│ ├── services.py
│ └── tests/
└── products/
└── ...
```
### Split Settings Pattern
```python
# config/settings/base.py
from pathlib import Path

import environ  # django-environ

env = environ.Env()

BASE_DIR = Path(__file__).resolve().parent.parent.parent
SECRET_KEY = env('DJANGO_SECRET_KEY')
DEBUG = False
ALLOWED_HOSTS = []
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'rest_framework.authtoken',
'corsheaders',
# Local apps
'apps.users',
'apps.products',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'corsheaders.middleware.CorsMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'config.urls'
WSGI_APPLICATION = 'config.wsgi.application'
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': env('DB_NAME'),
'USER': env('DB_USER'),
'PASSWORD': env('DB_PASSWORD'),
'HOST': env('DB_HOST'),
'PORT': env('DB_PORT', default='5432'),
}
}
# config/settings/development.py
from .base import *
DEBUG = True
ALLOWED_HOSTS = ['localhost', '127.0.0.1']
DATABASES['default']['NAME'] = 'myproject_dev'
INSTALLED_APPS += ['debug_toolbar']
MIDDLEWARE += ['debug_toolbar.middleware.DebugToolbarMiddleware']
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# config/settings/production.py
from .base import *
DEBUG = False
ALLOWED_HOSTS = env.list('ALLOWED_HOSTS')
SECURE_SSL_REDIRECT = True
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
SECURE_HSTS_SECONDS = 31536000
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_HSTS_PRELOAD = True
# Logging
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'file': {
'level': 'WARNING',
'class': 'logging.FileHandler',
'filename': '/var/log/django/django.log',
},
},
'loggers': {
'django': {
'handlers': ['file'],
'level': 'WARNING',
'propagate': True,
},
},
}
```
## Model Design Patterns
### Model Best Practices
```python
from django.db import models
from django.contrib.auth.models import AbstractUser
from django.core.validators import MinValueValidator
from django.utils.text import slugify
class User(AbstractUser):
"""Custom user model extending AbstractUser."""
email = models.EmailField(unique=True)
phone = models.CharField(max_length=20, blank=True)
birth_date = models.DateField(null=True, blank=True)
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = ['username']
class Meta:
db_table = 'users'
verbose_name = 'user'
verbose_name_plural = 'users'
ordering = ['-date_joined']
def __str__(self):
return self.email
def get_full_name(self):
return f"{self.first_name} {self.last_name}".strip()
class Product(models.Model):
"""Product model with proper field configuration."""
name = models.CharField(max_length=200)
slug = models.SlugField(unique=True, max_length=250)
description = models.TextField(blank=True)
price = models.DecimalField(
max_digits=10,
decimal_places=2,
validators=[MinValueValidator(0)]
)
stock = models.PositiveIntegerField(default=0)
is_active = models.BooleanField(default=True)
category = models.ForeignKey(
'Category',
on_delete=models.CASCADE,
related_name='products'
)
tags = models.ManyToManyField('Tag', blank=True, related_name='products')
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
db_table = 'products'
ordering = ['-created_at']
indexes = [
models.Index(fields=['slug']),
models.Index(fields=['-created_at']),
models.Index(fields=['category', 'is_active']),
]
constraints = [
models.CheckConstraint(
check=models.Q(price__gte=0),
name='price_non_negative'
)
]
def __str__(self):
return self.name
def save(self, *args, **kwargs):
if not self.slug:
self.slug = slugify(self.name)
super().save(*args, **kwargs)
```
### QuerySet Best Practices
```python
from django.db import models
class ProductQuerySet(models.QuerySet):
"""Custom QuerySet for Product model."""
def active(self):
"""Return only active products."""
return self.filter(is_active=True)
def with_category(self):
"""Select related category to avoid N+1 queries."""
return self.select_related('category')
def with_tags(self):
"""Prefetch tags for many-to-many relationship."""
return self.prefetch_related('tags')
def in_stock(self):
"""Return products with stock > 0."""
return self.filter(stock__gt=0)
def search(self, query):
"""Search products by name or description."""
return self.filter(
models.Q(name__icontains=query) |
models.Q(description__icontains=query)
)
class Product(models.Model):
# ... fields ...
objects = ProductQuerySet.as_manager() # Use custom QuerySet
# Usage
Product.objects.active().with_category().in_stock()
```
### Manager Methods
```python
class ProductManager(models.Manager):
"""Custom manager for complex queries."""
def get_or_none(self, **kwargs):
"""Return object or None instead of DoesNotExist."""
try:
return self.get(**kwargs)
except self.model.DoesNotExist:
return None
def create_with_tags(self, name, price, tag_names):
"""Create product with associated tags."""
product = self.create(name=name, price=price)
tags = [Tag.objects.get_or_create(name=tag_name)[0] for tag_name in tag_names]
product.tags.set(tags)
return product
def bulk_update_stock(self, product_ids, quantity):
"""Bulk update stock for multiple products."""
return self.filter(id__in=product_ids).update(stock=quantity)
# In model
class Product(models.Model):
# ... fields ...
custom = ProductManager()
```
## Django REST Framework Patterns
### Serializer Patterns
```python
from rest_framework import serializers
from django.contrib.auth.password_validation import validate_password
from .models import Product, User
class ProductSerializer(serializers.ModelSerializer):
"""Serializer for Product model."""
category_name = serializers.CharField(source='category.name', read_only=True)
average_rating = serializers.FloatField(read_only=True)
discount_price = serializers.SerializerMethodField()
class Meta:
model = Product
fields = [
'id', 'name', 'slug', 'description', 'price',
'discount_price', 'stock', 'category_name',
'average_rating', 'created_at'
]
read_only_fields = ['id', 'slug', 'created_at']
def get_discount_price(self, obj):
"""Calculate discount price if applicable."""
if hasattr(obj, 'discount') and obj.discount:
return obj.price * (1 - obj.discount.percent / 100)
return obj.price
def validate_price(self, value):
"""Ensure price is non-negative."""
if value < 0:
raise serializers.ValidationError("Price cannot be negative.")
return value
class ProductCreateSerializer(serializers.ModelSerializer):
"""Serializer for creating products."""
class Meta:
model = Product
fields = ['name', 'description', 'price', 'stock', 'category']
def validate(self, data):
"""Custom validation for multiple fields."""
if data['price'] > 10000 and data['stock'] > 100:
raise serializers.ValidationError(
"Cannot have high-value products with large stock."
)
return data
class UserRegistrationSerializer(serializers.ModelSerializer):
"""Serializer for user registration."""
password = serializers.CharField(
write_only=True,
required=True,
validators=[validate_password],
style={'input_type': 'password'}
)
password_confirm = serializers.CharField(write_only=True, style={'input_type': 'password'})
class Meta:
model = User
fields = ['email', 'username', 'password', 'password_confirm']
def validate(self, data):
"""Validate passwords match."""
if data['password'] != data['password_confirm']:
raise serializers.ValidationError({
"password_confirm": "Password fields didn't match."
})
return data
def create(self, validated_data):
"""Create user with hashed password."""
validated_data.pop('password_confirm')
password = validated_data.pop('password')
user = User.objects.create(**validated_data)
user.set_password(password)
user.save()
return user
```
### ViewSet Patterns
```python
from rest_framework import viewsets, status, filters
from rest_framework.decorators import action
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated, IsAdminUser
from django_filters.rest_framework import DjangoFilterBackend
from .models import Product
from .serializers import ProductSerializer, ProductCreateSerializer
from .permissions import IsOwnerOrReadOnly
from .filters import ProductFilter
from .services import ProductService
class ProductViewSet(viewsets.ModelViewSet):
"""ViewSet for Product model."""
queryset = Product.objects.select_related('category').prefetch_related('tags')
permission_classes = [IsAuthenticated, IsOwnerOrReadOnly]
filter_backends = [DjangoFilterBackend, filters.SearchFilter, filters.OrderingFilter]
filterset_class = ProductFilter
search_fields = ['name', 'description']
ordering_fields = ['price', 'created_at', 'name']
ordering = ['-created_at']
def get_serializer_class(self):
"""Return appropriate serializer based on action."""
if self.action == 'create':
return ProductCreateSerializer
return ProductSerializer
def perform_create(self, serializer):
"""Save with user context."""
serializer.save(created_by=self.request.user)
@action(detail=False, methods=['get'])
def featured(self, request):
"""Return featured products."""
featured = self.queryset.filter(is_featured=True)[:10]
serializer = self.get_serializer(featured, many=True)
return Response(serializer.data)
@action(detail=True, methods=['post'])
def purchase(self, request, pk=None):
"""Purchase a product."""
product = self.get_object()
service = ProductService()
result = service.purchase(product, request.user)
return Response(result, status=status.HTTP_201_CREATED)
@action(detail=False, methods=['get'], permission_classes=[IsAuthenticated])
def my_products(self, request):
"""Return products created by current user."""
products = self.queryset.filter(created_by=request.user)
page = self.paginate_queryset(products)
serializer = self.get_serializer(page, many=True)
return self.get_paginated_response(serializer.data)
```
### Custom Actions
```python
from rest_framework import status
from rest_framework.decorators import api_view, permission_classes
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
@api_view(['POST'])
@permission_classes([IsAuthenticated])
def add_to_cart(request):
"""Add product to user cart."""
product_id = request.data.get('product_id')
quantity = request.data.get('quantity', 1)
try:
product = Product.objects.get(id=product_id)
except Product.DoesNotExist:
return Response(
{'error': 'Product not found'},
status=status.HTTP_404_NOT_FOUND
)
cart, _ = Cart.objects.get_or_create(user=request.user)
CartItem.objects.create(
cart=cart,
product=product,
quantity=quantity
)
return Response({'message': 'Added to cart'}, status=status.HTTP_201_CREATED)
```
## Service Layer Pattern
```python
# apps/orders/services.py
from django.db import transaction

from apps.cart.models import Cart  # adjust to your cart app's location
from .models import Order, OrderItem
class OrderService:
"""Service layer for order-related business logic."""
@staticmethod
@transaction.atomic
def create_order(user, cart: Cart) -> Order:
"""Create order from cart."""
order = Order.objects.create(
user=user,
total_price=cart.total_price
)
for item in cart.items.all():
OrderItem.objects.create(
order=order,
product=item.product,
quantity=item.quantity,
price=item.product.price
)
# Clear cart
cart.items.all().delete()
return order
@staticmethod
def process_payment(order: Order, payment_data: dict) -> bool:
"""Process payment for order."""
# Integration with payment gateway
payment = PaymentGateway.charge(
amount=order.total_price,
token=payment_data['token']
)
if payment.success:
order.status = Order.Status.PAID
order.save()
# Send confirmation email
OrderService.send_confirmation_email(order)
return True
return False
@staticmethod
def send_confirmation_email(order: Order):
"""Send order confirmation email."""
# Email sending logic
pass
```
## Caching Strategies
### View-Level Caching
```python
from django.views import generic
from django.views.decorators.cache import cache_page
from django.utils.decorators import method_decorator
@method_decorator(cache_page(60 * 15), name='dispatch') # 15 minutes
class ProductListView(generic.ListView):
model = Product
template_name = 'products/list.html'
context_object_name = 'products'
```
### Template Fragment Caching
```django
{% load cache %}
{% cache 500 sidebar %}
... expensive sidebar content ...
{% endcache %}
```
### Low-Level Caching
```python
from django.core.cache import cache
def get_featured_products():
"""Get featured products with caching."""
cache_key = 'featured_products'
products = cache.get(cache_key)
if products is None:
products = list(Product.objects.filter(is_featured=True))
cache.set(cache_key, products, timeout=60 * 15) # 15 minutes
return products
```
### QuerySet Caching
```python
from django.core.cache import cache
from django.db.models import Count
def get_popular_categories():
cache_key = 'popular_categories'
categories = cache.get(cache_key)
if categories is None:
categories = list(Category.objects.annotate(
product_count=Count('products')
).filter(product_count__gt=10).order_by('-product_count')[:20])
cache.set(cache_key, categories, timeout=60 * 60) # 1 hour
return categories
```
## Signals
### Signal Patterns
```python
# apps/users/signals.py
from django.db.models.signals import post_save
from django.dispatch import receiver
from django.contrib.auth import get_user_model
from .models import Profile
User = get_user_model()
@receiver(post_save, sender=User)
def create_user_profile(sender, instance, created, **kwargs):
"""Create profile when user is created."""
if created:
Profile.objects.create(user=instance)
@receiver(post_save, sender=User)
def save_user_profile(sender, instance, **kwargs):
"""Save profile when user is saved."""
instance.profile.save()
# apps/users/apps.py
from django.apps import AppConfig
class UsersConfig(AppConfig):
default_auto_field = 'django.db.models.BigAutoField'
name = 'apps.users'
def ready(self):
"""Import signals when app is ready."""
import apps.users.signals
```
## Middleware
### Custom Middleware
```python
# middleware/active_user_middleware.py
import logging
import time

from django.utils import timezone
from django.utils.deprecation import MiddlewareMixin

logger = logging.getLogger(__name__)
class ActiveUserMiddleware(MiddlewareMixin):
"""Middleware to track active users."""
def process_request(self, request):
"""Process incoming request."""
if request.user.is_authenticated:
# Update last active time
request.user.last_active = timezone.now()
request.user.save(update_fields=['last_active'])
class RequestLoggingMiddleware(MiddlewareMixin):
"""Middleware for logging requests."""
def process_request(self, request):
"""Log request start time."""
request.start_time = time.time()
def process_response(self, request, response):
"""Log request duration."""
if hasattr(request, 'start_time'):
duration = time.time() - request.start_time
logger.info(f'{request.method} {request.path} - {response.status_code} - {duration:.3f}s')
return response
```
## Performance Optimization
### N+1 Query Prevention
```python
# Bad - N+1 queries
products = Product.objects.all()
for product in products:
print(product.category.name) # Separate query for each product
# Good - Single query with select_related
products = Product.objects.select_related('category').all()
for product in products:
print(product.category.name)
# Good - Prefetch for many-to-many
products = Product.objects.prefetch_related('tags').all()
for product in products:
for tag in product.tags.all():
print(tag.name)
```
### Database Indexing
```python
class Product(models.Model):
name = models.CharField(max_length=200, db_index=True)
slug = models.SlugField(unique=True)
category = models.ForeignKey('Category', on_delete=models.CASCADE)
created_at = models.DateTimeField(auto_now_add=True)
class Meta:
indexes = [
models.Index(fields=['name']),
models.Index(fields=['-created_at']),
models.Index(fields=['category', 'created_at']),
]
```
### Bulk Operations
```python
# Bulk create
Product.objects.bulk_create([
Product(name=f'Product {i}', price=10.00)
for i in range(1000)
])
# Bulk update
products = Product.objects.all()[:100]
for product in products:
product.is_active = True
Product.objects.bulk_update(products, ['is_active'])
# Bulk delete
Product.objects.filter(stock=0).delete()
```
## Quick Reference
| Pattern | Description |
|---------|-------------|
| Split settings | Separate dev/prod/test settings |
| Custom QuerySet | Reusable query methods |
| Service Layer | Business logic separation |
| ViewSet | REST API endpoints |
| Serializer validation | Request/response transformation |
| select_related | Foreign key optimization |
| prefetch_related | Many-to-many optimization |
| Cache first | Cache expensive operations |
| Signals | Event-driven actions |
| Middleware | Request/response processing |
Remember: Django provides many shortcuts, but for production applications, structure and organization matter more than concise code. Build for maintainability.
| """
Test for 'django-patterns' skill — Django Design Patterns in Saleor
Validates that the Agent implemented Django best practices including
custom model managers, signals, middleware, and proper migrations.
"""
import os
import subprocess
import pytest
from _dependency_utils import ensure_python_dependencies
@pytest.fixture(scope="module", autouse=True)
def _ensure_repo_dependencies():
ensure_python_dependencies(TestDjangoPatterns.REPO_DIR)
class TestDjangoPatterns:
"""Verify Django design patterns in Saleor."""
REPO_DIR = "/workspace/saleor"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_custom_manager_exists(self):
"""A custom model manager file must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".py") and "manager" in f.lower():
found.append(os.path.join(root, f))
assert len(found) >= 1, "No custom manager .py file found"
def test_middleware_exists(self):
"""A custom middleware file must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".py") and "middleware" in f.lower():
found.append(os.path.join(root, f))
assert len(found) >= 1, "No middleware .py file found"
def test_signals_file_exists(self):
"""A signals file must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".py") and "signal" in f.lower():
found.append(os.path.join(root, f))
assert len(found) >= 1, "No signals .py file found"
# ------------------------------------------------------------------
# L2: content validation
# ------------------------------------------------------------------
def test_custom_manager_uses_queryset(self):
"""Custom manager must use QuerySet methods."""
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".py") and "manager" in f.lower():
fpath = os.path.join(root, f)
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
qs_patterns = [
"Manager",
"QuerySet",
"get_queryset",
"objects",
"models.Manager",
]
if any(p in content for p in qs_patterns):
return
pytest.fail("No manager using QuerySet found")
def test_middleware_has_process_methods(self):
"""Middleware must implement __call__ or process_request."""
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".py") and "middleware" in f.lower():
fpath = os.path.join(root, f)
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
method_patterns = [
"__call__",
"process_request",
"process_response",
"process_view",
"MiddlewareMixin",
]
if any(p in content for p in method_patterns):
return
pytest.fail("No middleware with proper methods found")
def test_signals_use_receiver(self):
"""Signals must use @receiver decorator or signal.connect."""
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".py") and "signal" in f.lower():
fpath = os.path.join(root, f)
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
signal_patterns = [
"@receiver",
".connect(",
"post_save",
"pre_save",
"Signal()",
"django.dispatch",
]
if any(p in content for p in signal_patterns):
return
pytest.fail("No signal with @receiver or .connect found")
def test_migration_exists(self):
"""New migration file must exist."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
if "migrations" in root:
for f in files:
if f.endswith(".py") and f != "__init__.py":
found = True
break
if found:
break
assert found, "No migration files found"
def test_manage_py_check(self):
"""python manage.py check must pass."""
result = subprocess.run(
["python", "manage.py", "check"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
env={**os.environ, "DJANGO_SETTINGS_MODULE": "saleor.settings"},
)
assert result.returncode == 0, f"manage.py check failed:\n{result.stderr}"
def test_files_compile(self):
"""All new Python files must compile."""
targets = ["manager", "middleware", "signal"]
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".py") and any(t in f.lower() for t in targets):
fpath = os.path.join(root, f)
result = subprocess.run(
["python", "-m", "py_compile", fpath],
capture_output=True,
text=True,
timeout=30,
)
assert (
result.returncode == 0
), f"{fpath} failed to compile:\n{result.stderr}"
def test_model_has_meta_class(self):
"""Models should define Meta class with ordering or indexes."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".py") and "model" in f.lower():
fpath = os.path.join(root, f)
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
meta_patterns = [
"class Meta:",
"ordering",
"indexes",
"verbose_name",
"db_table",
]
if any(p in content for p in meta_patterns):
found = True
break
if found:
break
assert found, "No model with Meta class found"
def test_type_hints_present(self):
"""New Python code should use type hints."""
type_found = False
targets = ["manager", "middleware", "signal"]
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".py") and any(t in f.lower() for t in targets):
fpath = os.path.join(root, f)
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
type_patterns = [
"-> ",
": str",
": int",
": bool",
": list",
": dict",
"Optional[",
"QuerySet[",
"Type[",
]
if any(p in content for p in type_patterns):
type_found = True
break
if type_found:
break
assert type_found, "No type hints in new code"
| https://github.com/saleor/saleor | zhangyiiiiii/swe-skills-bench-python | |
python-background-jobs | Python Background Jobs | See task file for detailed mission requirements. | feature | # Task: Design Video Transcoding Task System with Celery
## Background
Design a video transcoding task system based on Celery's Task base class,
implementing proper task chaining, state updates, and retry logic.
## Files to Create/Modify
- examples/transcoding/tasks.py
- examples/transcoding/workflow.py
- t/unit/tasks/test_transcoding.py
## Requirements
Tasks to Implement (in tasks.py):
1) extract_audio:
- bind=True (access to self)
- Call self.update_state to report EXTRACTING status
2) transcode_video:
- Support meta={'progress': pct} for percentage progress
- Implement self.retry on failure
- Max 3 retries, countdown=60
3) generate_thumbnail:
- Generate thumbnail from video frame
Workflow (in workflow.py):
- Use chain() to compose: extract_audio.s() | transcode_video.s() | generate_thumbnail.s()
- Proper signature passing between tasks
Test Cases (CELERY_TASK_ALWAYS_EAGER=True):
- Retry logic (mock transcode failure, verify retry called)
- Progress reporting assertions
- Chain execution returns correct final result
- State updates are recorded
## Acceptance Criteria
- All three tasks implemented
- Chain workflow correctly orchestrates tasks
| ---
name: python-background-jobs
description: Python background job patterns including task queues, workers, and event-driven architecture. Use when implementing async task processing, job queues, long-running operations, or decoupling work from request/response cycles.
---
# Python Background Jobs & Task Queues
Decouple long-running or unreliable work from request/response cycles. Return immediately to the user while background workers handle the heavy lifting asynchronously.
## When to Use This Skill
- Processing tasks that take longer than a few seconds
- Sending emails, notifications, or webhooks
- Generating reports or exporting data
- Processing uploads or media transformations
- Integrating with unreliable external services
- Building event-driven architectures
## Core Concepts
### 1. Task Queue Pattern
API accepts request, enqueues a job, returns immediately with a job ID. Workers process jobs asynchronously.
### 2. Idempotency
Tasks may be retried on failure. Design for safe re-execution.
### 3. Job State Machine
Jobs transition through states: pending → running → succeeded/failed.
### 4. At-Least-Once Delivery
Most queues guarantee at-least-once delivery. Your code must handle duplicates.
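The state machine in concept 3 is small enough to encode as data; a minimal sketch (names are ours, not from any queue library) that makes an illegal move fail loudly instead of silently corrupting job records:

```python
# Hypothetical sketch of the job state machine: pending -> running -> succeeded/failed.
VALID_TRANSITIONS = {
    "pending": {"running"},
    "running": {"succeeded", "failed"},
    "succeeded": set(),  # terminal
    "failed": set(),     # terminal
}

def transition(current: str, target: str) -> str:
    """Return the new state, or raise if the move is not allowed."""
    if target not in VALID_TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target
```

Enforcing this at the repository layer keeps retried or duplicated tasks from resurrecting terminal jobs.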
## Quick Start
This skill uses Celery, a widely adopted task queue, for its examples. Alternatives such as RQ, Dramatiq, and cloud-native services (AWS SQS, GCP Tasks) are equally valid choices.
```python
from celery import Celery
app = Celery("tasks", broker="redis://localhost:6379")
@app.task
def send_email(to: str, subject: str, body: str) -> None:
# This runs in a background worker
email_client.send(to, subject, body)
# In your API handler
send_email.delay("user@example.com", "Welcome!", "Thanks for signing up")
```
## Fundamental Patterns
### Pattern 1: Return Job ID Immediately
For operations exceeding a few seconds, return a job ID and process asynchronously.
```python
from uuid import uuid4
from dataclasses import dataclass
from enum import Enum
from datetime import datetime
class JobStatus(Enum):
PENDING = "pending"
RUNNING = "running"
SUCCEEDED = "succeeded"
FAILED = "failed"
@dataclass
class Job:
id: str
status: JobStatus
created_at: datetime
started_at: datetime | None = None
completed_at: datetime | None = None
result: dict | None = None
error: str | None = None
# API endpoint
async def start_export(request: ExportRequest) -> JobResponse:
"""Start export job and return job ID."""
job_id = str(uuid4())
# Persist job record
await jobs_repo.create(Job(
id=job_id,
status=JobStatus.PENDING,
created_at=datetime.utcnow(),
))
# Enqueue task for background processing
await task_queue.enqueue(
"export_data",
job_id=job_id,
params=request.model_dump(),
)
# Return immediately with job ID
return JobResponse(
job_id=job_id,
status="pending",
poll_url=f"/jobs/{job_id}",
)
```
### Pattern 2: Celery Task Configuration
Configure Celery tasks with proper retry and timeout settings.
```python
from celery import Celery
app = Celery("tasks", broker="redis://localhost:6379")
# Global configuration
app.conf.update(
task_time_limit=3600, # Hard limit: 1 hour
task_soft_time_limit=3000, # Soft limit: 50 minutes
task_acks_late=True, # Acknowledge after completion
task_reject_on_worker_lost=True,
worker_prefetch_multiplier=1, # Don't prefetch too many tasks
)
@app.task(
bind=True,
max_retries=3,
default_retry_delay=60,
autoretry_for=(ConnectionError, TimeoutError),
)
def process_payment(self, payment_id: str) -> dict:
"""Process payment with automatic retry on transient errors."""
try:
result = payment_gateway.charge(payment_id)
return {"status": "success", "transaction_id": result.id}
except PaymentDeclinedError as e:
# Don't retry permanent failures
return {"status": "declined", "reason": str(e)}
except TransientError as e:
# Retry with exponential backoff
raise self.retry(exc=e, countdown=2 ** self.request.retries * 60)
```
### Pattern 3: Make Tasks Idempotent
Workers may retry on crash or timeout. Design for safe re-execution.
```python
@app.task(bind=True)
def process_order(self, order_id: str) -> None:
"""Process order idempotently."""
order = orders_repo.get(order_id)
# Already processed? Return early
if order.status == OrderStatus.COMPLETED:
logger.info("Order already processed", order_id=order_id)
return
# Already in progress? Check if we should continue
if order.status == OrderStatus.PROCESSING:
# Use idempotency key to avoid double-charging
pass
# Process with idempotency key
result = payment_provider.charge(
amount=order.total,
idempotency_key=f"order-{order_id}", # Critical!
)
orders_repo.update(order_id, status=OrderStatus.COMPLETED)
```
**Idempotency Strategies:**
1. **Check-before-write**: Verify state before action
2. **Idempotency keys**: Use unique tokens with external services
3. **Upsert patterns**: `INSERT ... ON CONFLICT UPDATE`
4. **Deduplication window**: Track processed IDs for N hours
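Strategy 4 can be sketched with a TTL-bounded map; the class below is illustrative only (a production worker would back it with Redis `SET key NX EX ttl`, since in-memory state vanishes on restart):

```python
import time

class DedupWindow:
    """Track processed task IDs for a sliding window of ttl_seconds."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._seen: dict[str, float] = {}

    def seen_before(self, task_id: str) -> bool:
        now = time.monotonic()
        # Evict entries older than the window before checking.
        self._seen = {k: t for k, t in self._seen.items() if now - t < self.ttl}
        if task_id in self._seen:
            return True
        self._seen[task_id] = now
        return False
```

A worker would call `seen_before(message_id)` at the top of the task and return early on duplicates.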
### Pattern 4: Job State Management
Persist job state transitions for visibility and debugging.
```python
class JobRepository:
"""Repository for managing job state."""
async def create(self, job: Job) -> Job:
"""Create new job record."""
await self._db.execute(
"""INSERT INTO jobs (id, status, created_at)
VALUES ($1, $2, $3)""",
job.id, job.status.value, job.created_at,
)
return job
async def update_status(
self,
job_id: str,
status: JobStatus,
**fields,
) -> None:
"""Update job status with timestamp."""
updates = {"status": status.value, **fields}
if status == JobStatus.RUNNING:
updates["started_at"] = datetime.utcnow()
elif status in (JobStatus.SUCCEEDED, JobStatus.FAILED):
updates["completed_at"] = datetime.utcnow()
await self._db.execute(
"UPDATE jobs SET status = $1, ... WHERE id = $2",
updates, job_id,
)
logger.info(
"Job status updated",
job_id=job_id,
status=status.value,
)
```
## Advanced Patterns
### Pattern 5: Dead Letter Queue
Handle permanently failed tasks for manual inspection.
```python
@app.task(bind=True, max_retries=3)
def process_webhook(self, webhook_id: str, payload: dict) -> None:
"""Process webhook with DLQ for failures."""
try:
result = send_webhook(payload)
if not result.success:
raise WebhookFailedError(result.error)
except Exception as e:
if self.request.retries >= self.max_retries:
# Move to dead letter queue for manual inspection
dead_letter_queue.send({
"task": "process_webhook",
"webhook_id": webhook_id,
"payload": payload,
"error": str(e),
"attempts": self.request.retries + 1,
"failed_at": datetime.utcnow().isoformat(),
})
logger.error(
"Webhook moved to DLQ after max retries",
webhook_id=webhook_id,
error=str(e),
)
return
# Exponential backoff retry
raise self.retry(exc=e, countdown=2 ** self.request.retries * 60)
```
### Pattern 6: Status Polling Endpoint
Provide an endpoint for clients to check job status.
```python
from fastapi import FastAPI, HTTPException
app = FastAPI()
@app.get("/jobs/{job_id}")
async def get_job_status(job_id: str) -> JobStatusResponse:
"""Get current status of a background job."""
job = await jobs_repo.get(job_id)
if job is None:
raise HTTPException(404, f"Job {job_id} not found")
return JobStatusResponse(
job_id=job.id,
status=job.status.value,
created_at=job.created_at,
started_at=job.started_at,
completed_at=job.completed_at,
result=job.result if job.status == JobStatus.SUCCEEDED else None,
error=job.error if job.status == JobStatus.FAILED else None,
# Helpful for clients
is_terminal=job.status in (JobStatus.SUCCEEDED, JobStatus.FAILED),
)
```
### Pattern 7: Task Chaining and Workflows
Compose complex workflows from simple tasks.
```python
from celery import chain, group, chord
# Simple chain: A → B → C
workflow = chain(
extract_data.s(source_id),
transform_data.s(),
load_data.s(destination_id),
)
# Parallel execution: A, B, C all at once
parallel = group(
send_email.s(user_email),
send_sms.s(user_phone),
update_analytics.s(event_data),
)
# Chord: Run tasks in parallel, then a callback
# Process all items, then send completion notification
workflow = chord(
[process_item.s(item_id) for item_id in item_ids],
send_completion_notification.s(batch_id),
)
workflow.apply_async()
```
### Pattern 8: Alternative Task Queues
Choose the right tool for your needs.
**RQ (Redis Queue)**: Simple, Redis-based
```python
from rq import Queue
from redis import Redis
queue = Queue(connection=Redis())
job = queue.enqueue(send_email, "user@example.com", "Subject", "Body")
```
**Dramatiq**: Modern Celery alternative
```python
import dramatiq
from dramatiq.brokers.redis import RedisBroker
dramatiq.set_broker(RedisBroker())
@dramatiq.actor
def send_email(to: str, subject: str, body: str) -> None:
email_client.send(to, subject, body)
```
**Cloud-native options:**
- AWS SQS + Lambda
- Google Cloud Tasks
- Azure Functions
## Best Practices Summary
1. **Return immediately** - Don't block requests for long operations
2. **Persist job state** - Enable status polling and debugging
3. **Make tasks idempotent** - Safe to retry on any failure
4. **Use idempotency keys** - For external service calls
5. **Set timeouts** - Both soft and hard limits
6. **Implement DLQ** - Capture permanently failed tasks
7. **Log transitions** - Track job state changes
8. **Retry appropriately** - Exponential backoff for transient errors
9. **Don't retry permanent failures** - Validation errors, invalid credentials
10. **Monitor queue depth** - Alert on backlog growth
| """
Test for 'python-background-jobs' skill — Video Transcoding Task System
Validates that the Agent implemented Celery tasks (extract_audio, transcode_video,
generate_thumbnail) and a chain workflow in the celery repository.
"""
import os
import ast
import subprocess
import pytest
class TestPythonBackgroundJobs:
"""Verify Celery transcoding task implementation."""
REPO_DIR = "/workspace/celery"
# ------------------------------------------------------------------
# L1: file & syntax
# ------------------------------------------------------------------
def test_tasks_file_exists(self):
"""examples/transcoding/tasks.py must exist."""
fpath = os.path.join(self.REPO_DIR, "examples", "transcoding", "tasks.py")
assert os.path.isfile(fpath), "tasks.py not found"
def test_workflow_file_exists(self):
"""examples/transcoding/workflow.py must exist."""
fpath = os.path.join(self.REPO_DIR, "examples", "transcoding", "workflow.py")
assert os.path.isfile(fpath), "workflow.py not found"
def test_tasks_compiles(self):
"""tasks.py must compile without syntax errors."""
result = subprocess.run(
["python", "-m", "py_compile", "examples/transcoding/tasks.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
def test_workflow_compiles(self):
"""workflow.py must compile without syntax errors."""
result = subprocess.run(
["python", "-m", "py_compile", "examples/transcoding/workflow.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
# ------------------------------------------------------------------
# L2: structural verification
# ------------------------------------------------------------------
    def _parse_tasks(self):
        fpath = os.path.join(self.REPO_DIR, "examples", "transcoding", "tasks.py")
        with open(fpath, "r", encoding="utf-8") as f:
            source = f.read()
        # Parse the source that was just read; calling f.read() a second time
        # after EOF would hand ast.parse an empty string.
        return source, ast.parse(source)
def _read_tasks_source(self):
fpath = os.path.join(self.REPO_DIR, "examples", "transcoding", "tasks.py")
with open(fpath, "r", encoding="utf-8") as f:
return f.read()
def _read_workflow_source(self):
fpath = os.path.join(self.REPO_DIR, "examples", "transcoding", "workflow.py")
with open(fpath, "r", encoding="utf-8") as f:
return f.read()
def test_extract_audio_task_defined(self):
"""extract_audio task must be defined in tasks.py."""
source = self._read_tasks_source()
assert "extract_audio" in source, "extract_audio task not found"
def test_transcode_video_task_defined(self):
"""transcode_video task must be defined in tasks.py."""
source = self._read_tasks_source()
assert "transcode_video" in source, "transcode_video task not found"
def test_generate_thumbnail_task_defined(self):
"""generate_thumbnail task must be defined in tasks.py."""
source = self._read_tasks_source()
assert "generate_thumbnail" in source, "generate_thumbnail task not found"
def test_bind_true_on_tasks(self):
"""Tasks should use bind=True for self access."""
source = self._read_tasks_source()
assert (
"bind" in source and "True" in source
), "bind=True not found — tasks should have self access"
def test_update_state_used(self):
"""Tasks should call self.update_state for progress reporting."""
source = self._read_tasks_source()
assert "update_state" in source, "update_state not found in tasks.py"
def test_retry_configured(self):
"""transcode_video should implement retry logic."""
source = self._read_tasks_source()
assert "retry" in source.lower(), "No retry logic found in tasks.py"
def test_max_retries_set(self):
"""Max retries should be configured (3)."""
source = self._read_tasks_source()
assert (
"max_retries" in source or "3" in source
), "max_retries configuration not found"
def test_chain_in_workflow(self):
"""workflow.py should use chain() to compose tasks."""
source = self._read_workflow_source()
assert "chain" in source, "chain() not found in workflow.py"
def test_signatures_in_workflow(self):
"""workflow.py should use .s() or .si() signatures."""
source = self._read_workflow_source()
assert (
".s(" in source or ".si(" in source
), "Celery signatures (.s() or .si()) not found in workflow.py"
| https://github.com/celery/celery | zhangyiiiiii/swe-skills-bench-python | |
python-configuration | Python Configuration Management | See task file for detailed mission requirements. | feature | # Task: Implement Type-Safe Configuration with pydantic-settings
## Background
Transform FastAPI's hardcoded configuration into a type-safe configuration
system using pydantic-settings BaseSettings.
## Files to Create/Modify
- docs_src/settings/tutorial001.py
- docs_src/settings/tutorial001_test.py
## Requirements
Settings Class:
- Inherit from pydantic_settings.BaseSettings
- Fields:
* app_name: str
* admin_email: EmailStr
* database_url: PostgresDsn
* debug: bool (default: False)
* max_connections: PositiveInt (default: 10)
Singleton Pattern:
- Use @lru_cache decorator for lazy loading
- Single instance throughout application
Dependency Injection:
- Use FastAPI Depends to inject into routes
- Proper type annotations
### Expected Functionality
1) Environment variables set via monkeypatch → config loads correctly
2) Missing required field admin_email → raises ValidationError
3) Invalid database_url format → error message contains field name
4) Default values applied when not specified
## Acceptance Criteria
- Settings class validates all fields properly
- Error messages are descriptive
| ---
name: python-configuration
description: Python configuration management via environment variables and typed settings. Use when externalizing config, setting up pydantic-settings, managing secrets, or implementing environment-specific behavior.
---
# Python Configuration Management
Externalize configuration from code using environment variables and typed settings. Well-managed configuration enables the same code to run in any environment without modification.
## When to Use This Skill
- Setting up a new project's configuration system
- Migrating from hardcoded values to environment variables
- Implementing pydantic-settings for typed configuration
- Managing secrets and sensitive values
- Creating environment-specific settings (dev/staging/prod)
- Validating configuration at application startup
## Core Concepts
### 1. Externalized Configuration
All environment-specific values (URLs, secrets, feature flags) come from environment variables, not code.
### 2. Typed Settings
Parse and validate configuration into typed objects at startup, not scattered throughout code.
### 3. Fail Fast
Validate all required configuration at application boot. Missing config should crash immediately with a clear message.
### 4. Sensible Defaults
Provide reasonable defaults for local development while requiring explicit values for sensitive settings.
## Quick Start
```python
from pydantic_settings import BaseSettings
from pydantic import Field
class Settings(BaseSettings):
database_url: str = Field(alias="DATABASE_URL")
api_key: str = Field(alias="API_KEY")
debug: bool = Field(default=False, alias="DEBUG")
settings = Settings() # Loads from environment
```
## Fundamental Patterns
### Pattern 1: Typed Settings with Pydantic
Create a central settings class that loads and validates all configuration.
```python
from pydantic_settings import BaseSettings
from pydantic import Field, PostgresDsn, ValidationError
import sys
class Settings(BaseSettings):
"""Application configuration loaded from environment variables."""
# Database
db_host: str = Field(alias="DB_HOST")
db_port: int = Field(default=5432, alias="DB_PORT")
db_name: str = Field(alias="DB_NAME")
db_user: str = Field(alias="DB_USER")
db_password: str = Field(alias="DB_PASSWORD")
# Redis
redis_url: str = Field(default="redis://localhost:6379", alias="REDIS_URL")
# API Keys
api_secret_key: str = Field(alias="API_SECRET_KEY")
# Feature flags
enable_new_feature: bool = Field(default=False, alias="ENABLE_NEW_FEATURE")
model_config = {
"env_file": ".env",
"env_file_encoding": "utf-8",
}
# Create singleton instance at module load
try:
settings = Settings()
except ValidationError as e:
print(f"Configuration error:\n{e}")
sys.exit(1)
```
Import `settings` throughout your application:
```python
from myapp.config import settings
def get_database_connection():
return connect(
host=settings.db_host,
port=settings.db_port,
database=settings.db_name,
)
```
### Pattern 2: Fail Fast on Missing Configuration
Required settings should crash the application immediately with a clear error.
```python
from pydantic_settings import BaseSettings
from pydantic import Field, ValidationError
import sys
class Settings(BaseSettings):
# Required - no default means it must be set
api_key: str = Field(alias="API_KEY")
database_url: str = Field(alias="DATABASE_URL")
# Optional with defaults
log_level: str = Field(default="INFO", alias="LOG_LEVEL")
try:
settings = Settings()
except ValidationError as e:
print("=" * 60)
print("CONFIGURATION ERROR")
print("=" * 60)
for error in e.errors():
field = error["loc"][0]
print(f" - {field}: {error['msg']}")
print("\nPlease set the required environment variables.")
sys.exit(1)
```
A clear error at startup is better than a cryptic `None` failure mid-request.
### Pattern 3: Local Development Defaults
Provide sensible defaults for local development while requiring explicit values for secrets.
```python
class Settings(BaseSettings):
# Has local default, but prod will override
db_host: str = Field(default="localhost", alias="DB_HOST")
db_port: int = Field(default=5432, alias="DB_PORT")
# Always required - no default for secrets
db_password: str = Field(alias="DB_PASSWORD")
api_secret_key: str = Field(alias="API_SECRET_KEY")
# Development convenience
debug: bool = Field(default=False, alias="DEBUG")
model_config = {"env_file": ".env"}
```
Create a `.env` file for local development (never commit this):
```bash
# .env (add to .gitignore)
DB_PASSWORD=local_dev_password
API_SECRET_KEY=dev-secret-key
DEBUG=true
```
### Pattern 4: Namespaced Environment Variables
Prefix related variables for clarity and easy debugging.
```bash
# Database configuration
DB_HOST=localhost
DB_PORT=5432
DB_NAME=myapp
DB_USER=admin
DB_PASSWORD=secret
# Redis configuration
REDIS_URL=redis://localhost:6379
REDIS_MAX_CONNECTIONS=10
# Authentication
AUTH_SECRET_KEY=your-secret-key
AUTH_TOKEN_EXPIRY_SECONDS=3600
AUTH_ALGORITHM=HS256
# Feature flags
FEATURE_NEW_CHECKOUT=true
FEATURE_BETA_UI=false
```
Makes `env | grep DB_` useful for debugging.
## Advanced Patterns
### Pattern 5: Type Coercion
Pydantic handles common conversions automatically.
```python
from pydantic_settings import BaseSettings
from pydantic import Field, field_validator
class Settings(BaseSettings):
# Automatically converts "true", "1", "yes" to True
debug: bool = False
# Automatically converts string to int
max_connections: int = 100
# Parse comma-separated string to list
allowed_hosts: list[str] = Field(default_factory=list)
@field_validator("allowed_hosts", mode="before")
@classmethod
def parse_allowed_hosts(cls, v: str | list[str]) -> list[str]:
if isinstance(v, str):
return [host.strip() for host in v.split(",") if host.strip()]
return v
```
Usage:
```bash
ALLOWED_HOSTS=example.com,api.example.com,localhost
MAX_CONNECTIONS=50
DEBUG=true
```
### Pattern 6: Environment-Specific Configuration
Use an environment enum to switch behavior.
```python
from enum import Enum
from pydantic_settings import BaseSettings
from pydantic import Field, computed_field
class Environment(str, Enum):
LOCAL = "local"
STAGING = "staging"
PRODUCTION = "production"
class Settings(BaseSettings):
environment: Environment = Field(
default=Environment.LOCAL,
alias="ENVIRONMENT",
)
# Settings that vary by environment
log_level: str = Field(default="DEBUG", alias="LOG_LEVEL")
@computed_field
@property
def is_production(self) -> bool:
return self.environment == Environment.PRODUCTION
@computed_field
@property
def is_local(self) -> bool:
return self.environment == Environment.LOCAL
# Usage
if settings.is_production:
configure_production_logging()
else:
configure_debug_logging()
```
### Pattern 7: Nested Configuration Groups
Organize related settings into nested models.
```python
from pydantic import BaseModel
from pydantic_settings import BaseSettings
class DatabaseSettings(BaseModel):
host: str = "localhost"
port: int = 5432
name: str
user: str
password: str
class RedisSettings(BaseModel):
url: str = "redis://localhost:6379"
max_connections: int = 10
class Settings(BaseSettings):
database: DatabaseSettings
redis: RedisSettings
debug: bool = False
model_config = {
"env_nested_delimiter": "__",
"env_file": ".env",
}
```
Environment variables use double underscore for nesting:
```bash
DATABASE__HOST=db.example.com
DATABASE__PORT=5432
DATABASE__NAME=myapp
DATABASE__USER=admin
DATABASE__PASSWORD=secret
REDIS__URL=redis://redis.example.com:6379
```
### Pattern 8: Secrets from Files
For container environments, read secrets from mounted files.
```python
from pydantic_settings import BaseSettings
from pydantic import Field
from pathlib import Path
class Settings(BaseSettings):
# Read from environment variable or file
db_password: str = Field(alias="DB_PASSWORD")
model_config = {
"secrets_dir": "/run/secrets", # Docker secrets location
}
```
Pydantic will look for `/run/secrets/db_password` if the env var isn't set.
### Pattern 9: Configuration Validation
Add custom validation for complex requirements.
```python
from pydantic_settings import BaseSettings
from pydantic import Field, model_validator
class Settings(BaseSettings):
db_host: str = Field(alias="DB_HOST")
db_port: int = Field(alias="DB_PORT")
read_replica_host: str | None = Field(default=None, alias="READ_REPLICA_HOST")
read_replica_port: int = Field(default=5432, alias="READ_REPLICA_PORT")
@model_validator(mode="after")
def validate_replica_settings(self):
if self.read_replica_host and self.read_replica_port == self.db_port:
if self.read_replica_host == self.db_host:
raise ValueError(
"Read replica cannot be the same as primary database"
)
return self
```
## Best Practices Summary
1. **Never hardcode config** - All environment-specific values from env vars
2. **Use typed settings** - Pydantic-settings with validation
3. **Fail fast** - Crash on missing required config at startup
4. **Provide dev defaults** - Make local development easy
5. **Never commit secrets** - Use `.env` files (gitignored) or secret managers
6. **Namespace variables** - `DB_HOST`, `REDIS_URL` for clarity
7. **Import settings singleton** - Don't call `os.getenv()` throughout code
8. **Document all variables** - README should list required env vars
9. **Validate early** - Check config correctness at boot time
10. **Use secrets_dir** - Support mounted secrets in containers
| """
Test for 'python-configuration' skill — Python Configuration Management
Validates that the Agent transformed FastAPI hardcoded config into a
pydantic-settings BaseSettings class with validation, @lru_cache, and DI.
"""
import os
import sys
import ast
import subprocess
import pytest
class TestPythonConfiguration:
"""Verify pydantic-settings configuration implementation for FastAPI."""
REPO_DIR = "/workspace/fastapi"
# ------------------------------------------------------------------
# L1: file & syntax
# ------------------------------------------------------------------
def test_settings_file_exists(self):
"""docs_src/settings/tutorial001.py must exist."""
fpath = os.path.join(self.REPO_DIR, "docs_src", "settings", "tutorial001.py")
assert os.path.isfile(fpath), "tutorial001.py not found"
def test_settings_compiles(self):
"""tutorial001.py must compile without syntax errors."""
result = subprocess.run(
["python", "-m", "py_compile", "docs_src/settings/tutorial001.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
# ------------------------------------------------------------------
# L2: structural verification via AST
# ------------------------------------------------------------------
def _read_source(self):
fpath = os.path.join(self.REPO_DIR, "docs_src", "settings", "tutorial001.py")
with open(fpath, "r", encoding="utf-8") as f:
return f.read()
def test_settings_class_inherits_base_settings(self):
"""Settings class should inherit from BaseSettings."""
source = self._read_source()
assert "BaseSettings" in source, "BaseSettings not found in source"
def test_field_app_name(self):
"""Settings must define app_name: str field."""
source = self._read_source()
assert "app_name" in source, "app_name field not defined"
def test_field_admin_email(self):
"""Settings must define admin_email field (EmailStr type)."""
source = self._read_source()
assert "admin_email" in source, "admin_email field not defined"
assert "EmailStr" in source, "EmailStr type not used for admin_email"
def test_field_database_url(self):
"""Settings must define database_url field (PostgresDsn or similar)."""
source = self._read_source()
assert "database_url" in source, "database_url field not defined"
assert (
"Dsn" in source or "PostgresDsn" in source or "AnyUrl" in source
), "No Dsn/URL type annotation found for database_url"
def test_field_debug_with_default(self):
"""Settings must define debug: bool with default False."""
source = self._read_source()
assert "debug" in source, "debug field not defined"
def test_field_max_connections(self):
"""Settings must define max_connections: PositiveInt with default 10."""
source = self._read_source()
assert "max_connections" in source, "max_connections field not defined"
def test_lru_cache_decorator(self):
"""Singleton pattern via @lru_cache must be present."""
source = self._read_source()
assert "lru_cache" in source, "@lru_cache decorator not found"
def test_depends_injection(self):
"""FastAPI Depends should be used for DI."""
source = self._read_source()
assert "Depends" in source, "FastAPI Depends not found"
def test_settings_importable(self):
"""Settings module should be importable and expose settings getter."""
result = subprocess.run(
[
"python",
"-c",
"import sys; sys.path.insert(0,'.'); "
"from docs_src.settings.tutorial001 import *; print('OK')",
],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
env={
**os.environ,
"APP_NAME": "test",
"ADMIN_EMAIL": "a@b.com",
"DATABASE_URL": "postgresql://localhost/test",
},
)
assert result.returncode == 0, f"Import failed:\n{result.stderr}"
def test_validation_error_on_missing_required(self):
"""Missing required field should raise ValidationError."""
result = subprocess.run(
[
"python",
"-c",
"import sys; sys.path.insert(0,'.'); "
"from docs_src.settings.tutorial001 import *",
],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
env={"PATH": os.environ.get("PATH", "")}, # minimal env
)
# Should fail because required fields are missing
if result.returncode == 0:
pytest.skip(
"Settings loaded without env vars — defaults may cover all fields"
)
assert (
"ValidationError" in result.stderr or "validation" in result.stderr.lower()
), f"Expected ValidationError, got:\n{result.stderr[-1000:]}"
| https://github.com/fastapi/fastapi | zhangyiiiiii/swe-skills-bench-python | |
creating-financial-models | Financial Modeling Suite | See task file for detailed mission requirements. | feature | # Task: Create QuantLib Usage Examples with DCF Valuation
## Background
Add practical examples to the
QuantLib repository demonstrating discounted cash flow (DCF) valuation
using QuantLib's existing API.
## Files to Create/Modify
- Examples/DCFValuation/DCFDemo.cpp (main example)
- Examples/DCFValuation/CMakeLists.txt (build config)
- Examples/DCFValuation/README.md (documentation)
## Requirements
DCF Valuation Demo (DCFDemo.cpp):
- Using QuantLib's YieldTermStructure for discount rates
- Creating cash flow schedules with QuantLib::Schedule
- Present value calculation using QuantLib::CashFlows::npv
- Terminal value modeling
Components to Demonstrate:
- FlatForward term structure setup
- FixedRateCoupon for regular cash flows
- Simple bond-like cash flow structure
- Sensitivity analysis (parallel shift in rates)
Example Output:
- NPV of cash flow stream
- Individual discounted cash flows
- Duration and convexity metrics
Build Integration:
- CMakeLists.txt links against QuantLib
- Can be built standalone after QuantLib is installed
- Cross-platform (Windows, Linux, macOS)
## Acceptance Criteria
- Example compiles and links against installed QuantLib
- Output shows correct NPV calculations
- README explains financial concepts and code structure
| ---
name: creating-financial-models
description: This skill provides an advanced financial modeling suite with DCF analysis, sensitivity testing, Monte Carlo simulations, and scenario planning for investment decisions
---
# Financial Modeling Suite
A comprehensive financial modeling toolkit for investment analysis, valuation, and risk assessment using industry-standard methodologies.
## Core Capabilities
### 1. Discounted Cash Flow (DCF) Analysis
- Build complete DCF models with multiple growth scenarios
- Calculate terminal values using perpetuity growth and exit multiple methods
- Determine weighted average cost of capital (WACC)
- Generate enterprise and equity valuations
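The arithmetic behind these steps is compact; a sketch using the perpetuity-growth terminal value (function and parameter names are illustrative, not a library API):

```python
def dcf_value(cash_flows: list[float], discount_rate: float, terminal_growth: float) -> float:
    """Present value of explicit cash flows plus a Gordon-growth terminal value."""
    pv_explicit = sum(
        cf / (1 + discount_rate) ** t
        for t, cf in enumerate(cash_flows, start=1)
    )
    n = len(cash_flows)
    # Terminal value capitalizes the year-(n+1) cash flow in perpetuity...
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    # ...and is itself discounted back from year n.
    return pv_explicit + terminal / (1 + discount_rate) ** n
```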
### 2. Sensitivity Analysis
- Test key assumptions impact on valuation
- Create data tables for multiple variables
- Generate tornado charts for sensitivity ranking
- Identify critical value drivers
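A one-variable data table reduces to revaluing under shifted inputs; a sketch for parallel discount-rate shifts in basis points (names are ours):

```python
def npv(cash_flows: list[float], rate: float) -> float:
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def rate_sensitivity(
    cash_flows: list[float],
    base_rate: float,
    shifts_bps: tuple[int, ...] = (-100, -50, 0, 50, 100),
) -> dict[int, float]:
    """NPV under parallel shifts of the discount rate, keyed by shift in bps."""
    return {bps: npv(cash_flows, base_rate + bps / 10_000) for bps in shifts_bps}
```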
### 3. Monte Carlo Simulation
- Run thousands of scenarios with probability distributions
- Model uncertainty in key inputs
- Generate confidence intervals for valuations
- Calculate probability of achieving targets
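The core simulation loop is small (a standard-library sketch; the distribution parameters are placeholders, not calibrated values):

```python
import random
import statistics

def simulate_valuations(n_iter=5000, seed=42):
    """Draw growth and discount-rate assumptions, return simulated values."""
    rng = random.Random(seed)
    values = []
    for _ in range(n_iter):
        growth = rng.gauss(0.03, 0.01)   # uncertain terminal growth
        rate = rng.gauss(0.09, 0.015)    # uncertain discount rate
        rate = max(rate, growth + 0.01)  # keep the perpetuity well-defined
        values.append(100.0 * (1 + growth) / (rate - growth))
    return values

vals = sorted(simulate_valuations())
p5, p95 = vals[int(0.05 * len(vals))], vals[int(0.95 * len(vals))]  # 90% interval
mean = statistics.fmean(vals)
```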
### 4. Scenario Planning
- Build best/base/worst case scenarios
- Model different economic environments
- Test strategic alternatives
- Compare outcome probabilities
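Probability-weighted comparison reduces to a dot product over scenarios (a minimal sketch with made-up scenario values):

```python
# Best / base / worst outcomes with subjective probability weights.
scenarios = {"best": 150.0, "base": 100.0, "worst": 60.0}
weights = {"best": 0.25, "base": 0.50, "worst": 0.25}

assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
expected_value = sum(scenarios[k] * weights[k] for k in scenarios)
```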
## Input Requirements
### For DCF Analysis
- Historical financial statements (3-5 years)
- Revenue growth assumptions
- Operating margin projections
- Capital expenditure forecasts
- Working capital requirements
- Terminal growth rate or exit multiple
- Discount rate components (risk-free rate, beta, market premium)
### For Sensitivity Analysis
- Base case model
- Variable ranges to test
- Key metrics to track
### For Monte Carlo Simulation
- Probability distributions for uncertain variables
- Correlation assumptions between variables
- Number of iterations (typically 1,000-10,000)
### For Scenario Planning
- Scenario definitions and assumptions
- Probability weights for scenarios
- Key performance indicators to track
## Output Formats
### DCF Model Output
- Complete financial projections
- Free cash flow calculations
- Terminal value computation
- Enterprise and equity value summary
- Valuation multiples implied
- Excel workbook with full model
### Sensitivity Analysis Output
- Sensitivity tables showing value ranges
- Tornado chart of key drivers
- Break-even analysis
- Charts showing relationships
### Monte Carlo Output
- Probability distribution of valuations
- Confidence intervals (e.g., 90%, 95%)
- Statistical summary (mean, median, std dev)
- Risk metrics (VaR, probability of loss)
### Scenario Planning Output
- Scenario comparison table
- Probability-weighted expected values
- Decision tree visualization
- Risk-return profiles
## Model Types Supported
1. **Corporate Valuation**
- Mature companies with stable cash flows
- Growth companies with J-curve projections
- Turnaround situations
2. **Project Finance**
- Infrastructure projects
- Real estate developments
- Energy projects
3. **M&A Analysis**
- Acquisition valuations
- Synergy modeling
- Accretion/dilution analysis
4. **LBO Models**
- Leveraged buyout analysis
- Returns analysis (IRR, MOIC)
- Debt capacity assessment
## Best Practices Applied
### Modeling Standards
- Consistent formatting and structure
- Clear assumption documentation
- Separation of inputs, calculations, outputs
- Error checking and validation
- Version control and change tracking
### Valuation Principles
- Use multiple valuation methods for triangulation
- Apply appropriate risk adjustments
- Consider market comparables
- Validate against trading multiples
- Document key assumptions clearly
### Risk Management
- Identify and quantify key risks
- Use probability-weighted scenarios
- Stress test extreme cases
- Consider correlation effects
- Provide confidence intervals
## Example Usage
"Build a DCF model for this technology company using the attached financials"
"Run a Monte Carlo simulation on this acquisition model with 5,000 iterations"
"Create sensitivity analysis showing impact of growth rate and WACC on valuation"
"Develop three scenarios for this expansion project with probability weights"
## Scripts Included
- `dcf_model.py`: Complete DCF valuation engine
- `sensitivity_analysis.py`: Sensitivity testing framework
## Limitations and Disclaimers
- Models are only as good as their assumptions
- Past performance doesn't guarantee future results
- Market conditions can change rapidly
- Regulatory and tax changes may impact results
- Professional judgment required for interpretation
- Not a substitute for professional financial advice
## Quality Checks
The model automatically performs:
1. Balance sheet balancing checks
2. Cash flow reconciliation
3. Circular reference resolution
4. Sensitivity bound checking
5. Statistical validation of Monte Carlo results
## Updates and Maintenance
- Models use latest financial theory and practices
- Regular updates for market parameter defaults
- Incorporation of regulatory changes
- Continuous improvement based on usage patterns
| """
Test for 'creating-financial-models' skill — QuantLib Financial Models
Validates that the Agent created financial model implementations using
QuantLib with proper pricing engines and term structure setup.
"""
import os
import subprocess
import pytest
class TestCreatingFinancialModels:
"""Verify QuantLib financial model implementation."""
REPO_DIR = "/workspace/QuantLib"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_model_file_exists(self):
"""A financial model implementation file must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith((".py", ".cpp", ".hpp")) and (
"model" in f.lower()
or "pricing" in f.lower()
or "option" in f.lower()
):
found.append(os.path.join(root, f))
assert len(found) >= 1, "No financial model file found"
def test_example_script_exists(self):
"""An example/demo script must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if (f.endswith(".py") or f.endswith(".cpp")) and (
"example" in f.lower()
or "demo" in f.lower()
or "pricing" in f.lower()
):
found.append(os.path.join(root, f))
assert len(found) >= 1, "No example/demo script found"
# ------------------------------------------------------------------
# L2: content validation
# ------------------------------------------------------------------
def _find_model_files(self):
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith((".py", ".cpp", ".hpp")) and (
"model" in f.lower()
or "pricing" in f.lower()
or "option" in f.lower()
or "finance" in f.lower()
):
found.append(os.path.join(root, f))
return found
def _read_all_models(self):
content = ""
for fpath in self._find_model_files():
try:
with open(fpath, "r", errors="ignore") as f:
content += f.read() + "\n"
except OSError:
pass
return content
def test_quantlib_usage(self):
"""Must use QuantLib library."""
content = self._read_all_models()
ql_patterns = [
"QuantLib",
"ql.",
"import QuantLib",
"#include <ql/",
"ql::",
"from QuantLib",
]
found = any(p in content for p in ql_patterns)
assert found, "No QuantLib usage found"
def test_term_structure(self):
"""Must define yield/term structure."""
content = self._read_all_models()
ts_patterns = [
"YieldTermStructure",
"TermStructure",
"FlatForward",
"ZeroCurve",
"ForwardCurve",
"term_structure",
"yield_curve",
"discount",
]
found = any(p in content for p in ts_patterns)
assert found, "No term structure defined"
def test_pricing_engine(self):
"""Must set up a pricing engine."""
content = self._read_all_models()
engine_patterns = [
"PricingEngine",
"AnalyticEuropeanEngine",
"BinomialEngine",
"MCEuropeanEngine",
"BlackScholes",
"setPricingEngine",
"set_pricing_engine",
]
found = any(p in content for p in engine_patterns)
assert found, "No pricing engine found"
def test_option_definition(self):
"""Must define at least one option instrument."""
content = self._read_all_models()
option_patterns = [
"VanillaOption",
"EuropeanOption",
"AmericanOption",
"Option",
"Payoff",
"PlainVanillaPayoff",
"EuropeanExercise",
"AmericanExercise",
]
found = sum(1 for p in option_patterns if p in content)
assert found >= 2, "Insufficient option instrument definition"
def test_npv_calculation(self):
"""Must calculate NPV or pricing result."""
content = self._read_all_models()
calc_patterns = [
"NPV",
"npv",
"delta",
"gamma",
"vega",
"theta",
"rho",
"impliedVolatility",
]
found = any(p in content for p in calc_patterns)
assert found, "No NPV/Greeks calculation found"
def test_market_data(self):
"""Must set up market data (spot, vol, rate)."""
content = self._read_all_models()
market_patterns = [
"spot",
"volatility",
"riskFree",
"risk_free",
"SimpleQuote",
"BlackVolTermStructure",
"dividend",
"strike",
]
found = sum(1 for p in market_patterns if p in content)
assert found >= 2, "Insufficient market data setup"
def test_date_handling(self):
"""Must use QuantLib date handling."""
content = self._read_all_models()
date_patterns = [
"Date",
"Calendar",
"DayCounter",
"Actual365",
"TARGET",
"Settings.instance",
"evaluationDate",
"Schedule",
]
found = sum(1 for p in date_patterns if p in content)
assert found >= 2, "Insufficient date handling"
def test_python_demo_runs(self):
"""Python demo scripts must run successfully."""
        for fpath in self._find_model_files():
            if fpath.endswith(".py"):
                try:
                    result = subprocess.run(
                        ["python", fpath],
                        cwd=self.REPO_DIR,
                        capture_output=True,
                        text=True,
                        timeout=120,
                    )
                except subprocess.TimeoutExpired:
                    continue  # a hung demo should not abort the whole test
                if result.returncode == 0:
                    return
# If no Python file runs, try compile check
for fpath in self._find_model_files():
if fpath.endswith(".py"):
result = subprocess.run(
["python", "-m", "py_compile", fpath],
capture_output=True,
text=True,
timeout=30,
)
assert (
result.returncode == 0
), f"{fpath} compile error:\n{result.stderr}"
return
pytest.skip("No Python model files found")
| https://github.com/lballabio/QuantLib | zhangyiiiiii/swe-skills-bench-python | |
prompt-engineering-patterns | Prompt Engineering Patterns | See task file for detailed mission requirements. | feature | # Task: Implement Prompt Engineering Templates with Automated Evaluation
## Background
Create a reproducible prompt engineering template system with automated
evaluation capabilities in the LangChain repository.
## Files to Create/Modify
- examples/prompt_templates/ (new directory)
- scripts/run_prompt_eval.py
- tests/test_prompt_eval.py
## Requirements
1. Prompt Templates (multiple use cases):
- Instruction-type prompts
- Conversational prompts
- Extraction prompts
- Translation prompts
- Code generation prompts
- Evaluation prompts
2. JSON Schema (input/output):
- input_id: unique identifier
- prompt: the prompt text
- expected_output: expected response
- metadata: additional context
3. Evaluation Script:
- Pluggable scorers (string assertion, similarity, custom)
- Generate JSON/CSV report
- Support batch evaluation
4. Output Requirements:
- JSON schema compliant output
- Evaluation report generated
- All required fields present and typed correctly
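A single record under the schema above might look like this (a hypothetical example; only the four field names come from the schema):

```python
import json

record = {
    "input_id": "translate-001",                       # unique identifier
    "prompt": "Translate to French: 'Good morning'",   # the prompt text
    "expected_output": "Bonjour",                      # expected response
    "metadata": {"category": "translation", "difficulty": "easy"},
}
print(json.dumps(record, indent=2))
```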
## Acceptance Criteria
- `python scripts/run_prompt_eval.py` exits with code 0
- Output follows JSON schema
- Report file generated (JSON or CSV)
| ---
name: prompt-engineering-patterns
description: Master advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability in production. Use when optimizing prompts, improving LLM outputs, or designing production prompt templates.
---
# Prompt Engineering Patterns
Master advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability.
## When to Use This Skill
- Designing complex prompts for production LLM applications
- Optimizing prompt performance and consistency
- Implementing structured reasoning patterns (chain-of-thought, tree-of-thought)
- Building few-shot learning systems with dynamic example selection
- Creating reusable prompt templates with variable interpolation
- Debugging and refining prompts that produce inconsistent outputs
- Implementing system prompts for specialized AI assistants
## Core Capabilities
### 1. Few-Shot Learning
- Example selection strategies (semantic similarity, diversity sampling)
- Balancing example count with context window constraints
- Constructing effective demonstrations with input-output pairs
- Dynamic example retrieval from knowledge bases
- Handling edge cases through strategic example selection
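Similarity-based selection can be prototyped without an embedding model, using `difflib` as a crude stand-in for semantic similarity (the example bank and names here are illustrative):

```python
from difflib import SequenceMatcher

EXAMPLES = [
    {"input": "Find users registered last month", "output": "SELECT ..."},
    {"input": "Count orders per customer", "output": "SELECT ..."},
    {"input": "List products out of stock", "output": "SELECT ..."},
]

def select_examples(query, examples, k=2):
    """Return the k examples whose inputs are most similar to the query."""
    return sorted(
        examples,
        key=lambda ex: SequenceMatcher(None, query.lower(), ex["input"].lower()).ratio(),
        reverse=True,
    )[:k]

picked = select_examples("find users who registered recently", EXAMPLES)
```

In production the `SequenceMatcher` score would be replaced by embedding cosine similarity, but the selection loop is the same.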
### 2. Chain-of-Thought Prompting
- Step-by-step reasoning elicitation
- Zero-shot CoT with "Let's think step by step"
- Few-shot CoT with reasoning traces
- Self-consistency techniques (sampling multiple reasoning paths)
- Verification and validation steps
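Self-consistency is, at its core, a majority vote over sampled reasoning paths; the aggregation step can be sketched as follows (the sampled answers are stand-ins for real model outputs):

```python
from collections import Counter

def self_consistent_answer(answers):
    """Pick the most frequent final answer across sampled reasoning paths."""
    counts = Counter(answers)
    answer, votes = counts.most_common(1)[0]
    confidence = votes / len(answers)  # agreement rate as a rough confidence
    return answer, confidence

# e.g. five chain-of-thought samples that ended in these final answers
answer, confidence = self_consistent_answer(["42", "42", "41", "42", "40"])
```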
### 3. Prompt Optimization
- Iterative refinement workflows
- A/B testing prompt variations
- Measuring prompt performance metrics (accuracy, consistency, latency)
- Reducing token usage while maintaining quality
- Handling edge cases and failure modes
### 4. Template Systems
- Variable interpolation and formatting
- Conditional prompt sections
- Multi-turn conversation templates
- Role-based prompt composition
- Modular prompt components
### 5. System Prompt Design
- Setting model behavior and constraints
- Defining output formats and structure
- Establishing role and expertise
- Safety guidelines and content policies
- Context setting and background information
## Quick Start
```python
from prompt_optimizer import PromptTemplate, FewShotSelector
# Define a structured prompt template
template = PromptTemplate(
system="You are an expert SQL developer. Generate efficient, secure SQL queries.",
instruction="Convert the following natural language query to SQL:\n{query}",
few_shot_examples=True,
output_format="SQL code block with explanatory comments"
)
# Configure few-shot learning
selector = FewShotSelector(
examples_db="sql_examples.jsonl",
selection_strategy="semantic_similarity",
max_examples=3
)
# Generate optimized prompt
prompt = template.render(
query="Find all users who registered in the last 30 days",
examples=selector.select(query="user registration date filter")
)
```
## Key Patterns
### Progressive Disclosure
Start with simple prompts, add complexity only when needed:
1. **Level 1**: Direct instruction
- "Summarize this article"
2. **Level 2**: Add constraints
- "Summarize this article in 3 bullet points, focusing on key findings"
3. **Level 3**: Add reasoning
- "Read this article, identify the main findings, then summarize in 3 bullet points"
4. **Level 4**: Add examples
- Include 2-3 example summaries with input-output pairs
### Instruction Hierarchy
```
[System Context] → [Task Instruction] → [Examples] → [Input Data] → [Output Format]
```
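The hierarchy maps directly onto a small assembly function (a sketch; the section names follow the diagram above, and the sample strings are illustrative):

```python
def build_prompt(system, instruction, examples, input_data, output_format):
    """Assemble prompt sections in hierarchy order, skipping empty ones."""
    sections = [system, instruction, examples, input_data, output_format]
    return "\n\n".join(s.strip() for s in sections if s and s.strip())

prompt = build_prompt(
    system="You are a precise technical summarizer.",
    instruction="Summarize the article below in 3 bullet points.",
    examples="",  # optional few-shot block, omitted here
    input_data="Article: ...",
    output_format="Output: markdown bullet list.",
)
```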
### Error Recovery
Build prompts that gracefully handle failures:
- Include fallback instructions
- Request confidence scores
- Ask for alternative interpretations when uncertain
- Specify how to indicate missing information
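In template form, those fallback instructions might look like this (an illustrative snippet, not a canonical recipe):

```python
# Suffix appended to任务 prompts so failures degrade gracefully.
ERROR_RECOVERY_SUFFIX = """
If you cannot complete the task:
- State which required information is missing.
- Give your best-effort answer and label it as tentative.
- Report a confidence score from 0.0 to 1.0 on the final line.
"""

def with_error_recovery(task_prompt):
    """Append fallback instructions to a task prompt."""
    return task_prompt.rstrip() + "\n" + ERROR_RECOVERY_SUFFIX

prompt = with_error_recovery("Extract the invoice total from the text below.")
```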
## Best Practices
1. **Be Specific**: Vague prompts produce inconsistent results
2. **Show, Don't Tell**: Examples are more effective than descriptions
3. **Test Extensively**: Evaluate on diverse, representative inputs
4. **Iterate Rapidly**: Small changes can have large impacts
5. **Monitor Performance**: Track metrics in production
6. **Version Control**: Treat prompts as code with proper versioning
7. **Document Intent**: Explain why prompts are structured as they are
## Common Pitfalls
- **Over-engineering**: Starting with complex prompts before trying simple ones
- **Example pollution**: Using examples that don't match the target task
- **Context overflow**: Exceeding token limits with excessive examples
- **Ambiguous instructions**: Leaving room for multiple interpretations
- **Ignoring edge cases**: Not testing on unusual or boundary inputs
## Integration Patterns
### With RAG Systems
```python
# Combine retrieved context with prompt engineering
prompt = f"""Given the following context:
{retrieved_context}
{few_shot_examples}
Question: {user_question}
Provide a detailed answer based solely on the context above. If the context doesn't contain enough information, explicitly state what's missing."""
```
### With Validation
```python
# Add self-verification step
prompt = f"""{main_task_prompt}
After generating your response, verify it meets these criteria:
1. Answers the question directly
2. Uses only information from provided context
3. Cites specific sources
4. Acknowledges any uncertainty
If verification fails, revise your response."""
```
## Performance Optimization
### Token Efficiency
- Remove redundant words and phrases
- Use abbreviations consistently after first definition
- Consolidate similar instructions
- Move stable content to system prompts
### Latency Reduction
- Minimize prompt length without sacrificing quality
- Use streaming for long-form outputs
- Cache common prompt prefixes
- Batch similar requests when possible
## Resources
- **references/few-shot-learning.md**: Deep dive on example selection and construction
- **references/chain-of-thought.md**: Advanced reasoning elicitation techniques
- **references/prompt-optimization.md**: Systematic refinement workflows
- **references/prompt-templates.md**: Reusable template patterns
- **references/system-prompts.md**: System-level prompt design
- **assets/prompt-template-library.md**: Battle-tested prompt templates
- **assets/few-shot-examples.json**: Curated example datasets
- **scripts/optimize-prompt.py**: Automated prompt optimization tool
## Success Metrics
Track these KPIs for your prompts:
- **Accuracy**: Correctness of outputs
- **Consistency**: Reproducibility across similar inputs
- **Latency**: Response time (P50, P95, P99)
- **Token Usage**: Average tokens per request
- **Success Rate**: Percentage of valid outputs
- **User Satisfaction**: Ratings and feedback
## Next Steps
1. Review the prompt template library for common patterns
2. Experiment with few-shot learning for your specific use case
3. Implement prompt versioning and A/B testing
4. Set up automated evaluation pipelines
5. Document your prompt engineering decisions and learnings
| """
Test for 'prompt-engineering-patterns' skill — Prompt Engineering Patterns
Validates that the Agent implemented prompt templates and automated evaluation
in the LangChain repository.
"""
import os
import subprocess
import json
import pytest
class TestPromptEngineeringPatterns:
"""Verify prompt engineering template system and evaluation."""
REPO_DIR = "/workspace/langchain"
# ------------------------------------------------------------------
# L1: file existence & syntax
# ------------------------------------------------------------------
def test_eval_script_exists(self):
"""scripts/run_prompt_eval.py must exist."""
fpath = os.path.join(self.REPO_DIR, "scripts", "run_prompt_eval.py")
assert os.path.isfile(fpath), "run_prompt_eval.py not found"
def test_eval_script_compiles(self):
"""run_prompt_eval.py must compile."""
result = subprocess.run(
["python", "-m", "py_compile", "scripts/run_prompt_eval.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
def test_templates_directory_exists(self):
"""examples/prompt_templates/ directory must exist."""
dpath = os.path.join(self.REPO_DIR, "examples", "prompt_templates")
assert os.path.isdir(dpath), "prompt_templates directory not found"
# ------------------------------------------------------------------
# L2: functional verification
# ------------------------------------------------------------------
def test_eval_script_runs(self):
"""run_prompt_eval.py must execute with exit code 0."""
result = subprocess.run(
["python", "scripts/run_prompt_eval.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
assert result.returncode == 0, f"Script failed:\n{result.stderr}"
def test_eval_output_is_valid_json_or_csv(self):
"""Evaluation output should contain structured data (JSON/CSV)."""
result = subprocess.run(
["python", "scripts/run_prompt_eval.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
if result.returncode != 0:
pytest.skip(f"Script failed: {result.stderr[:500]}")
# Check if a report file was generated
report_candidates = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith((".json", ".csv")) and "report" in f.lower():
report_candidates.append(os.path.join(root, f))
if len(report_candidates) > 10:
break
# Or check stdout for JSON
output = result.stdout.strip()
is_json = False
try:
json.loads(output)
is_json = True
except (json.JSONDecodeError, ValueError):
pass
assert (
is_json or len(report_candidates) >= 1
), "No structured report output found (JSON stdout or report file)"
def test_template_files_present(self):
"""At least 2 prompt template files must exist."""
dpath = os.path.join(self.REPO_DIR, "examples", "prompt_templates")
if not os.path.isdir(dpath):
pytest.skip("prompt_templates directory not found")
files = os.listdir(dpath)
template_files = [
f
for f in files
if f.endswith((".json", ".yaml", ".yml", ".txt", ".md", ".py"))
]
assert (
len(template_files) >= 2
), f"Expected >= 2 template files, found {len(template_files)}: {template_files}"
def test_source_has_json_schema(self):
"""Eval script or templates should follow a JSON schema structure."""
fpath = os.path.join(self.REPO_DIR, "scripts", "run_prompt_eval.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
schema_fields = [
"input_id",
"prompt",
"expected_output",
"metadata",
"score",
"input",
"output",
]
found = sum(1 for sf in schema_fields if sf in content)
assert found >= 3, f"Insufficient schema fields in eval script (found {found})"
def test_pluggable_scorers(self):
"""Eval script should support pluggable scoring mechanisms."""
fpath = os.path.join(self.REPO_DIR, "scripts", "run_prompt_eval.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
scorer_patterns = [
"scorer",
"score",
"assert",
"similarity",
"evaluate",
"metric",
]
found = sum(1 for sp in scorer_patterns if sp in content.lower())
assert found >= 2, "No scoring/evaluation mechanism found"
def test_batch_evaluation_support(self):
"""Script should support batch evaluation of multiple prompts."""
fpath = os.path.join(self.REPO_DIR, "scripts", "run_prompt_eval.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
batch_patterns = ["for ", "batch", "loop", "iterate", "results"]
found = sum(1 for bp in batch_patterns if bp in content.lower())
assert found >= 2, "No batch evaluation support found"
def test_multiple_prompt_types(self):
"""Templates should cover multiple use cases."""
dpath = os.path.join(self.REPO_DIR, "examples", "prompt_templates")
if not os.path.isdir(dpath):
pytest.skip("prompt_templates directory not found")
all_content = ""
for f in os.listdir(dpath):
fpath = os.path.join(dpath, f)
if os.path.isfile(fpath):
with open(fpath, "r", encoding="utf-8", errors="replace") as fh:
all_content += fh.read() + "\n"
categories = [
"instruction",
"conversation",
"extract",
"translat",
"code",
"evaluat",
]
found = sum(1 for c in categories if c in all_content.lower())
assert found >= 2, f"Only {found} prompt categories found in templates"
| https://github.com/langchain-ai/langchain | zhangyiiiiii/swe-skills-bench-python | |
risk-metrics-calculation | Risk Metrics Calculation | See task file for detailed mission requirements. | feature | # Task: Add Risk Metrics Calculation Examples and Tests
## Background
Add example scripts and unit tests for risk metrics calculation in pyfolio,
demonstrating Sharpe ratio, maximum drawdown, and other key metrics.
## Files to Create/Modify
- examples/risk_metrics_demo.py (new)
- tests/test_risk_metrics.py (new)
- notebooks/risk_metrics.ipynb (optional)
## Requirements
Example Script:
- Small backtest example with sample price series
- Calculate key metrics:
* Sharpe Ratio
* Maximum Drawdown
* Sortino Ratio
* Calmar Ratio
- Output results to JSON/CSV
Unit Tests:
- Test Sharpe calculation with known inputs
- Test Max Drawdown calculation
- Verify metrics within tolerance range
## Expected Functionality
- Sharpe ratio calculation matches expected formula
- Max drawdown correctly identifies peak-to-trough
- All metrics have proper handling of edge cases
## Acceptance Criteria
- Example script produces valid output
- Metrics calculated within acceptable tolerance
| ---
name: risk-metrics-calculation
description: Calculate portfolio risk metrics including VaR, CVaR, Sharpe, Sortino, and drawdown analysis. Use when measuring portfolio risk, implementing risk limits, or building risk monitoring systems.
---
# Risk Metrics Calculation
Comprehensive risk measurement toolkit for portfolio management, including Value at Risk, Expected Shortfall, and drawdown analysis.
## When to Use This Skill
- Measuring portfolio risk
- Implementing risk limits
- Building risk dashboards
- Calculating risk-adjusted returns
- Setting position sizes
- Regulatory reporting
## Core Concepts
### 1. Risk Metric Categories
| Category | Metrics | Use Case |
| ----------------- | --------------- | -------------------- |
| **Volatility** | Std Dev, Beta | General risk |
| **Tail Risk** | VaR, CVaR | Extreme losses |
| **Drawdown** | Max DD, Calmar | Capital preservation |
| **Risk-Adjusted** | Sharpe, Sortino | Performance |
### 2. Time Horizons
```
Intraday: Minute/hourly VaR for day traders
Daily: Standard risk reporting
Weekly: Rebalancing decisions
Monthly: Performance attribution
Annual: Strategic allocation
```
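A common bridge between these horizons is the square-root-of-time rule, which is valid only under i.i.d. returns (a simplifying assumption; the 2% and 1% figures are illustrative):

```python
import math

def scale_var(var_1d, horizon_days):
    """Scale a 1-day VaR to an h-day horizon under the sqrt-of-time rule."""
    return var_1d * math.sqrt(horizon_days)

var_10d = scale_var(var_1d=0.02, horizon_days=10)  # 2% daily VaR -> ~6.3% 10-day
annual_vol = 0.01 * math.sqrt(252)                 # 1% daily vol, annualized
```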
## Implementation
### Pattern 1: Core Risk Metrics
```python
import numpy as np
import pandas as pd
from scipy import stats
from typing import Dict, Optional, Tuple
class RiskMetrics:
"""Core risk metric calculations."""
def __init__(self, returns: pd.Series, rf_rate: float = 0.02):
"""
Args:
returns: Series of periodic returns
rf_rate: Annual risk-free rate
"""
self.returns = returns
self.rf_rate = rf_rate
self.ann_factor = 252 # Trading days per year
# Volatility Metrics
def volatility(self, annualized: bool = True) -> float:
"""Standard deviation of returns."""
vol = self.returns.std()
if annualized:
vol *= np.sqrt(self.ann_factor)
return vol
def downside_deviation(self, threshold: float = 0, annualized: bool = True) -> float:
"""Standard deviation of returns below threshold."""
downside = self.returns[self.returns < threshold]
if len(downside) == 0:
return 0.0
dd = downside.std()
if annualized:
dd *= np.sqrt(self.ann_factor)
return dd
def beta(self, market_returns: pd.Series) -> float:
"""Beta relative to market."""
aligned = pd.concat([self.returns, market_returns], axis=1).dropna()
if len(aligned) < 2:
return np.nan
cov = np.cov(aligned.iloc[:, 0], aligned.iloc[:, 1])
return cov[0, 1] / cov[1, 1] if cov[1, 1] != 0 else 0
# Value at Risk
def var_historical(self, confidence: float = 0.95) -> float:
"""Historical VaR at confidence level."""
return -np.percentile(self.returns, (1 - confidence) * 100)
    def var_parametric(self, confidence: float = 0.95) -> float:
        """Parametric VaR assuming normal distribution (positive = loss)."""
        z_score = stats.norm.ppf(confidence)
        return -(self.returns.mean() - z_score * self.returns.std())
    def var_cornish_fisher(self, confidence: float = 0.95) -> float:
        """VaR with Cornish-Fisher expansion for non-normality (positive = loss)."""
        z = stats.norm.ppf(1 - confidence)  # lower-tail quantile (negative)
        s = stats.skew(self.returns)  # Skewness
        k = stats.kurtosis(self.returns)  # Excess kurtosis
        # Cornish-Fisher expansion of the lower-tail quantile
        z_cf = (z + (z**2 - 1) * s / 6 +
                (z**3 - 3*z) * k / 24 -
                (2*z**3 - 5*z) * s**2 / 36)
        return -(self.returns.mean() + z_cf * self.returns.std())
# Conditional VaR (Expected Shortfall)
def cvar(self, confidence: float = 0.95) -> float:
"""Expected Shortfall / CVaR / Average VaR."""
var = self.var_historical(confidence)
return -self.returns[self.returns <= -var].mean()
# Drawdown Analysis
def drawdowns(self) -> pd.Series:
"""Calculate drawdown series."""
cumulative = (1 + self.returns).cumprod()
running_max = cumulative.cummax()
return (cumulative - running_max) / running_max
def max_drawdown(self) -> float:
"""Maximum drawdown."""
return self.drawdowns().min()
def avg_drawdown(self) -> float:
"""Average drawdown."""
dd = self.drawdowns()
return dd[dd < 0].mean() if (dd < 0).any() else 0
def drawdown_duration(self) -> Dict[str, int]:
"""Drawdown duration statistics."""
dd = self.drawdowns()
in_drawdown = dd < 0
durations = []
current_duration = 0
for i in range(len(dd)):
if in_drawdown.iloc[i]:
current_duration += 1
elif current_duration > 0:
durations.append(current_duration)
current_duration = 0
if current_duration > 0:
durations.append(current_duration)
return {
"max_duration": max(durations) if durations else 0,
"avg_duration": np.mean(durations) if durations else 0,
"current_duration": current_duration
}
# Risk-Adjusted Returns
def sharpe_ratio(self) -> float:
"""Annualized Sharpe ratio."""
excess_return = self.returns.mean() * self.ann_factor - self.rf_rate
vol = self.volatility(annualized=True)
return excess_return / vol if vol > 0 else 0
def sortino_ratio(self) -> float:
"""Sortino ratio using downside deviation."""
excess_return = self.returns.mean() * self.ann_factor - self.rf_rate
dd = self.downside_deviation(threshold=0, annualized=True)
return excess_return / dd if dd > 0 else 0
def calmar_ratio(self) -> float:
"""Calmar ratio (return / max drawdown)."""
annual_return = (1 + self.returns).prod() ** (self.ann_factor / len(self.returns)) - 1
max_dd = abs(self.max_drawdown())
return annual_return / max_dd if max_dd > 0 else 0
def omega_ratio(self, threshold: float = 0) -> float:
"""Omega ratio."""
returns_above = self.returns[self.returns > threshold] - threshold
returns_below = threshold - self.returns[self.returns <= threshold]
if returns_below.sum() == 0:
return np.inf
return returns_above.sum() / returns_below.sum()
# Information Ratio
def information_ratio(self, benchmark_returns: pd.Series) -> float:
"""Information ratio vs benchmark."""
active_returns = self.returns - benchmark_returns
tracking_error = active_returns.std() * np.sqrt(self.ann_factor)
active_return = active_returns.mean() * self.ann_factor
return active_return / tracking_error if tracking_error > 0 else 0
# Summary
def summary(self) -> Dict[str, float]:
"""Generate comprehensive risk summary."""
dd_stats = self.drawdown_duration()
return {
# Returns
"total_return": (1 + self.returns).prod() - 1,
"annual_return": (1 + self.returns).prod() ** (self.ann_factor / len(self.returns)) - 1,
# Volatility
"annual_volatility": self.volatility(),
"downside_deviation": self.downside_deviation(),
# VaR & CVaR
"var_95_historical": self.var_historical(0.95),
"var_99_historical": self.var_historical(0.99),
"cvar_95": self.cvar(0.95),
# Drawdowns
"max_drawdown": self.max_drawdown(),
"avg_drawdown": self.avg_drawdown(),
"max_drawdown_duration": dd_stats["max_duration"],
# Risk-Adjusted
"sharpe_ratio": self.sharpe_ratio(),
"sortino_ratio": self.sortino_ratio(),
"calmar_ratio": self.calmar_ratio(),
"omega_ratio": self.omega_ratio(),
# Distribution
"skewness": stats.skew(self.returns),
"kurtosis": stats.kurtosis(self.returns),
}
```
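As a sanity check, the drawdown and Sharpe formulas above can be verified on a tiny hand-computable series without pandas (a standalone sketch that mirrors, rather than uses, the class):

```python
import math

returns = [0.10, -0.20, 0.10, 0.05]  # chosen so the drawdown works out by hand

# Max drawdown: cumulative wealth against its running peak.
wealth, peak, max_dd = 1.0, 1.0, 0.0
for r in returns:
    wealth *= 1 + r
    peak = max(peak, wealth)
    max_dd = min(max_dd, (wealth - peak) / peak)

# Peak is 1.10 after the first step; the trough is 1.10 * 0.8 = 0.88,
# so the maximum drawdown is (0.88 - 1.10) / 1.10 = -0.20.
mean = sum(returns) / len(returns)
std = math.sqrt(sum((r - mean) ** 2 for r in returns) / (len(returns) - 1))
sharpe = mean / std * math.sqrt(252)  # annualized, rf = 0 for simplicity
```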
### Pattern 2: Portfolio Risk
```python
class PortfolioRisk:
"""Portfolio-level risk calculations."""
def __init__(
self,
returns: pd.DataFrame,
weights: Optional[pd.Series] = None
):
"""
Args:
returns: DataFrame with asset returns (columns = assets)
weights: Portfolio weights (default: equal weight)
"""
self.returns = returns
self.weights = weights if weights is not None else \
pd.Series(1/len(returns.columns), index=returns.columns)
self.ann_factor = 252
def portfolio_return(self) -> float:
"""Weighted portfolio return."""
return (self.returns @ self.weights).mean() * self.ann_factor
def portfolio_volatility(self) -> float:
"""Portfolio volatility."""
cov_matrix = self.returns.cov() * self.ann_factor
port_var = self.weights @ cov_matrix @ self.weights
return np.sqrt(port_var)
def marginal_risk_contribution(self) -> pd.Series:
"""Marginal contribution to risk by asset."""
cov_matrix = self.returns.cov() * self.ann_factor
port_vol = self.portfolio_volatility()
# Marginal contribution
mrc = (cov_matrix @ self.weights) / port_vol
return mrc
def component_risk(self) -> pd.Series:
"""Component contribution to total risk."""
mrc = self.marginal_risk_contribution()
return self.weights * mrc
def risk_parity_weights(self, target_vol: float = None) -> pd.Series:
"""Calculate risk parity weights."""
from scipy.optimize import minimize
n = len(self.returns.columns)
cov_matrix = self.returns.cov() * self.ann_factor
def risk_budget_objective(weights):
port_vol = np.sqrt(weights @ cov_matrix @ weights)
mrc = (cov_matrix @ weights) / port_vol
rc = weights * mrc
target_rc = port_vol / n # Equal risk contribution
return np.sum((rc - target_rc) ** 2)
constraints = [
{"type": "eq", "fun": lambda w: np.sum(w) - 1}, # Weights sum to 1
]
bounds = [(0.01, 1.0) for _ in range(n)] # Min 1%, max 100%
x0 = np.array([1/n] * n)
result = minimize(
risk_budget_objective,
x0,
method="SLSQP",
bounds=bounds,
constraints=constraints
)
return pd.Series(result.x, index=self.returns.columns)
def correlation_matrix(self) -> pd.DataFrame:
"""Asset correlation matrix."""
return self.returns.corr()
def diversification_ratio(self) -> float:
"""Diversification ratio (higher = more diversified)."""
asset_vols = self.returns.std() * np.sqrt(self.ann_factor)
weighted_vol = (self.weights * asset_vols).sum()
port_vol = self.portfolio_volatility()
return weighted_vol / port_vol if port_vol > 0 else 1
def tracking_error(self, benchmark_returns: pd.Series) -> float:
"""Tracking error vs benchmark."""
port_returns = self.returns @ self.weights
active_returns = port_returns - benchmark_returns
return active_returns.std() * np.sqrt(self.ann_factor)
def conditional_correlation(
self,
threshold_percentile: float = 10
) -> pd.DataFrame:
"""Correlation during stress periods."""
port_returns = self.returns @ self.weights
threshold = np.percentile(port_returns, threshold_percentile)
stress_mask = port_returns <= threshold
return self.returns[stress_mask].corr()
```
### Pattern 3: Rolling Risk Metrics
```python
class RollingRiskMetrics:
"""Rolling window risk calculations."""
def __init__(self, returns: pd.Series, window: int = 63):
"""
Args:
returns: Return series
window: Rolling window size (default: 63 = ~3 months)
"""
self.returns = returns
self.window = window
def rolling_volatility(self, annualized: bool = True) -> pd.Series:
"""Rolling volatility."""
vol = self.returns.rolling(self.window).std()
if annualized:
vol *= np.sqrt(252)
return vol
def rolling_sharpe(self, rf_rate: float = 0.02) -> pd.Series:
"""Rolling Sharpe ratio."""
rolling_return = self.returns.rolling(self.window).mean() * 252
rolling_vol = self.rolling_volatility()
return (rolling_return - rf_rate) / rolling_vol
def rolling_var(self, confidence: float = 0.95) -> pd.Series:
"""Rolling historical VaR."""
return self.returns.rolling(self.window).apply(
lambda x: -np.percentile(x, (1 - confidence) * 100),
raw=True
)
def rolling_max_drawdown(self) -> pd.Series:
"""Rolling maximum drawdown."""
def max_dd(returns):
cumulative = (1 + returns).cumprod()
running_max = cumulative.cummax()
drawdowns = (cumulative - running_max) / running_max
return drawdowns.min()
return self.returns.rolling(self.window).apply(max_dd, raw=False)
def rolling_beta(self, market_returns: pd.Series) -> pd.Series:
"""Rolling beta vs market (rolling covariance / rolling market variance)."""
rolling_cov = self.returns.rolling(self.window).cov(market_returns)
rolling_var = market_returns.rolling(self.window).var()
return rolling_cov / rolling_var
def volatility_regime(
self,
low_threshold: float = 0.10,
high_threshold: float = 0.20
) -> pd.Series:
"""Classify volatility regime."""
vol = self.rolling_volatility()
def classify(v):
if v < low_threshold:
return "low"
elif v > high_threshold:
return "high"
else:
return "normal"
return vol.apply(classify)
```
### Pattern 4: Stress Testing
```python
class StressTester:
"""Historical and hypothetical stress testing."""
# Historical crisis periods
HISTORICAL_SCENARIOS = {
"2008_financial_crisis": ("2008-09-01", "2009-03-31"),
"2020_covid_crash": ("2020-02-19", "2020-03-23"),
"2022_rate_hikes": ("2022-01-01", "2022-10-31"),
"dot_com_bust": ("2000-03-01", "2002-10-01"),
"flash_crash_2010": ("2010-05-06", "2010-05-06"),
}
def __init__(self, returns: pd.Series, weights: pd.Series = None):
self.returns = returns
self.weights = weights
def historical_stress_test(
self,
scenario_name: str,
historical_data: pd.DataFrame
) -> Dict[str, float]:
"""Test portfolio against historical crisis period."""
if scenario_name not in self.HISTORICAL_SCENARIOS:
raise ValueError(f"Unknown scenario: {scenario_name}")
start, end = self.HISTORICAL_SCENARIOS[scenario_name]
# Get returns during crisis
crisis_returns = historical_data.loc[start:end]
if self.weights is not None:
port_returns = (crisis_returns @ self.weights)
else:
port_returns = crisis_returns
total_return = (1 + port_returns).prod() - 1
max_dd = self._calculate_max_dd(port_returns)
worst_day = port_returns.min()
return {
"scenario": scenario_name,
"period": f"{start} to {end}",
"total_return": total_return,
"max_drawdown": max_dd,
"worst_day": worst_day,
"volatility": port_returns.std() * np.sqrt(252)
}
def hypothetical_stress_test(
self,
shocks: Dict[str, float]
) -> float:
"""
Test portfolio against hypothetical shocks.
Args:
shocks: Dict of {asset: shock_return}
"""
if self.weights is None:
raise ValueError("Weights required for hypothetical stress test")
total_impact = 0
for asset, shock in shocks.items():
if asset in self.weights.index:
total_impact += self.weights[asset] * shock
return total_impact
def monte_carlo_stress(
self,
n_simulations: int = 10000,
horizon_days: int = 21,
vol_multiplier: float = 2.0
) -> Dict[str, float]:
"""Monte Carlo stress test with elevated volatility."""
mean = self.returns.mean()
vol = self.returns.std() * vol_multiplier
simulations = np.random.normal(
mean,
vol,
(n_simulations, horizon_days)
)
total_returns = (1 + simulations).prod(axis=1) - 1
return {
"expected_loss": -total_returns.mean(),
"var_95": -np.percentile(total_returns, 5),
"var_99": -np.percentile(total_returns, 1),
"worst_case": -total_returns.min(),
"prob_10pct_loss": (total_returns < -0.10).mean()
}
def _calculate_max_dd(self, returns: pd.Series) -> float:
cumulative = (1 + returns).cumprod()
running_max = cumulative.cummax()
drawdowns = (cumulative - running_max) / running_max
return drawdowns.min()
```
## Quick Reference
```python
# Daily usage
metrics = RiskMetrics(returns)
print(f"Sharpe: {metrics.sharpe_ratio():.2f}")
print(f"Max DD: {metrics.max_drawdown():.2%}")
print(f"VaR 95%: {metrics.var_historical(0.95):.2%}")
# Full summary
summary = metrics.summary()
for metric, value in summary.items():
print(f"{metric}: {value:.4f}")
```
## Best Practices
### Do's
- **Use multiple metrics** - No single metric captures all risk
- **Consider tail risk** - VaR isn't enough, use CVaR
- **Rolling analysis** - Risk changes over time
- **Stress test** - Historical and hypothetical
- **Document assumptions** - Distribution, lookback, etc.
### Don'ts
- **Don't rely on VaR alone** - Underestimates tail risk
- **Don't assume normality** - Returns are fat-tailed
- **Don't ignore correlation** - Increases in stress
- **Don't use short lookbacks** - Miss regime changes
- **Don't forget transaction costs** - Affects realized risk
## Resources
- [Risk Management and Financial Institutions (John Hull)](https://www.amazon.com/Risk-Management-Financial-Institutions-5th/dp/1119448115)
- [Quantitative Risk Management (McNeil, Frey, Embrechts)](https://www.amazon.com/Quantitative-Risk-Management-Techniques-Princeton/dp/0691166277)
- [pyfolio Documentation](https://quantopian.github.io/pyfolio/)
| """
Test for 'risk-metrics-calculation' skill — Risk Metrics Calculation
Validates that the Agent implemented risk metric calculations (Sharpe, Max Drawdown,
Sortino, Calmar) with example scripts and tests in pyfolio.
"""
import os
import subprocess
import json
import pytest
class TestRiskMetricsCalculation:
"""Verify risk metrics calculation implementation in pyfolio."""
REPO_DIR = "/workspace/pyfolio"
# ------------------------------------------------------------------
# L1: file existence & syntax
# ------------------------------------------------------------------
def test_demo_script_exists(self):
"""examples/risk_metrics_demo.py must exist."""
fpath = os.path.join(self.REPO_DIR, "examples", "risk_metrics_demo.py")
assert os.path.isfile(fpath), "risk_metrics_demo.py not found"
def test_demo_script_compiles(self):
"""Demo script must compile."""
result = subprocess.run(
["python", "-m", "py_compile", "examples/risk_metrics_demo.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
# ------------------------------------------------------------------
# L2: runtime & content verification
# ------------------------------------------------------------------
def test_demo_script_runs(self):
"""Demo script must run and exit with code 0."""
result = subprocess.run(
["python", "examples/risk_metrics_demo.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
assert result.returncode == 0, f"Demo failed:\n{result.stderr}"
def test_demo_produces_output(self):
"""Demo must produce non-empty output."""
result = subprocess.run(
["python", "examples/risk_metrics_demo.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
assert len(result.stdout.strip()) > 0, "Demo produced no output"
def test_sharpe_ratio_in_source(self):
"""Demo must calculate Sharpe ratio."""
fpath = os.path.join(self.REPO_DIR, "examples", "risk_metrics_demo.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
assert "sharpe" in content.lower(), "Sharpe ratio calculation not found"
def test_max_drawdown_in_source(self):
"""Demo must calculate maximum drawdown."""
fpath = os.path.join(self.REPO_DIR, "examples", "risk_metrics_demo.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
assert "drawdown" in content.lower(), "Max drawdown calculation not found"
def test_sortino_ratio_in_source(self):
"""Demo must calculate Sortino ratio."""
fpath = os.path.join(self.REPO_DIR, "examples", "risk_metrics_demo.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
assert "sortino" in content.lower(), "Sortino ratio calculation not found"
def test_calmar_ratio_in_source(self):
"""Demo must calculate Calmar ratio."""
fpath = os.path.join(self.REPO_DIR, "examples", "risk_metrics_demo.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
assert "calmar" in content.lower(), "Calmar ratio calculation not found"
def test_output_contains_numeric_metrics(self):
"""Demo output must contain numeric metric values."""
result = subprocess.run(
["python", "examples/risk_metrics_demo.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
if result.returncode != 0:
pytest.skip(f"Demo failed: {result.stderr[:500]}")
output = result.stdout
import re
numbers = re.findall(r"-?\d+\.\d+", output)
assert (
len(numbers) >= 2
), f"Expected numeric metric values in output, found {len(numbers)} numbers"
def test_output_or_file_has_json_csv(self):
"""Demo should produce JSON or CSV format output."""
fpath = os.path.join(self.REPO_DIR, "examples", "risk_metrics_demo.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
output_formats = ["json", "csv", "to_json", "to_csv", "json.dump", "DictWriter"]
found = any(fmt in content.lower() for fmt in output_formats)
assert found, "No JSON/CSV output generation found in demo script"
| https://github.com/quantopian/pyfolio | zhangyiiiiii/swe-skills-bench-python | |
vector-index-tuning | Vector Index Tuning | See task file for detailed mission requirements. | feature | # Task: Create Vector Index Tuning Examples for FAISS
## Background
Add index tuning examples demonstrating the trade-off between recall
and latency for different FAISS index configurations.
## Files to Create/Modify
- benchs/index_tuning_demo.py (new)
- examples/index_tuning/ (new directory)
- tools/benchmark_index.py (optional)
## Requirements
Index Types to Demonstrate:
- Flat (brute force baseline)
- IVF (inverted file)
- HNSW (hierarchical navigable small world)
- PQ (product quantization)
Benchmark Script:
- Multiple parameter configurations
- Measure recall@K for different nprobe/efSearch values
- Measure query latency
- Output results to CSV/JSON
Parameters to Vary:
- nlist (for IVF)
- nprobe (for IVF)
- M, efConstruction, efSearch (for HNSW)
Output Requirements:
- recall@10 and recall@100 metrics
- Query latency in milliseconds
- Memory usage statistics
## Acceptance Criteria
- `python benchs/index_tuning_demo.py` exits with code 0
- Output contains recall and latency metrics
- Clear trade-off demonstration
| ---
name: vector-index-tuning
description: Optimize vector index performance for latency, recall, and memory. Use when tuning HNSW parameters, selecting quantization strategies, or scaling vector search infrastructure.
---
# Vector Index Tuning
Guide to optimizing vector indexes for production performance.
## When to Use This Skill
- Tuning HNSW parameters
- Implementing quantization
- Optimizing memory usage
- Reducing search latency
- Balancing recall vs speed
- Scaling to billions of vectors
## Core Concepts
### 1. Index Type Selection
```
Data Size Recommended Index
────────────────────────────────────────
< 10K vectors → Flat (exact search)
10K - 1M → HNSW
1M - 100M → HNSW + Quantization
> 100M → IVF + PQ or DiskANN
```
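The size thresholds above can be captured in a small helper. Note these are rules of thumb taken from the table, not hard limits; the right cutoffs depend on dimensionality, latency targets, and hardware.

```python
def recommend_index(num_vectors: int) -> str:
    """Rule-of-thumb index choice based on the size table above."""
    if num_vectors < 10_000:
        return "Flat"                  # exact search is fast enough
    if num_vectors < 1_000_000:
        return "HNSW"
    if num_vectors < 100_000_000:
        return "HNSW+Quantization"
    return "IVF+PQ"                    # or DiskANN for disk-resident data
```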
### 2. HNSW Parameters
| Parameter | Default | Effect |
| ------------------ | ------- | ---------------------------------------------------- |
| **M** | 16 | Connections per node, ↑ = better recall, more memory |
| **efConstruction** | 100 | Build quality, ↑ = better index, slower build |
| **efSearch** | 50 | Search quality, ↑ = better recall, slower search |
### 3. Quantization Types
```
Full Precision (FP32): 4 bytes × dimensions
Half Precision (FP16): 2 bytes × dimensions
INT8 Scalar: 1 byte × dimensions
Product Quantization: ~32-64 bytes total
Binary: dimensions/8 bytes
```
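The per-vector storage costs above translate directly into a sketch like the following; the 48-byte PQ figure is an assumed midpoint of the ~32-64 byte range, not a fixed property of PQ.

```python
def bytes_per_vector(dim: int, scheme: str) -> float:
    """Approximate per-vector storage cost for the schemes above."""
    costs = {
        "fp32": 4.0 * dim,
        "fp16": 2.0 * dim,
        "int8": 1.0 * dim,
        "pq": 48.0,        # assumed midpoint of the ~32-64 byte range
        "binary": dim / 8.0,
    }
    return costs[scheme]
```

For example, a 1024-dimension fp32 vector costs 4096 bytes, while binary quantization cuts the same vector to 128 bytes.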
## Templates
### Template 1: HNSW Parameter Tuning
```python
import numpy as np
from typing import List, Tuple
import time
def benchmark_hnsw_parameters(
vectors: np.ndarray,
queries: np.ndarray,
ground_truth: np.ndarray,
m_values: List[int] = [8, 16, 32, 64],
ef_construction_values: List[int] = [64, 128, 256],
ef_search_values: List[int] = [32, 64, 128, 256]
) -> List[dict]:
"""Benchmark different HNSW configurations."""
import hnswlib
results = []
dim = vectors.shape[1]
n = vectors.shape[0]
for m in m_values:
for ef_construction in ef_construction_values:
# Build index
index = hnswlib.Index(space='cosine', dim=dim)
index.init_index(max_elements=n, M=m, ef_construction=ef_construction)
build_start = time.time()
index.add_items(vectors)
build_time = time.time() - build_start
# Get memory usage
memory_bytes = index.element_count * (
dim * 4 + # Vector storage
m * 2 * 4 # Graph edges (approximate)
)
for ef_search in ef_search_values:
index.set_ef(ef_search)
# Measure search
search_start = time.time()
labels, distances = index.knn_query(queries, k=10)
search_time = time.time() - search_start
# Calculate recall
recall = calculate_recall(labels, ground_truth, k=10)
results.append({
"M": m,
"ef_construction": ef_construction,
"ef_search": ef_search,
"build_time_s": build_time,
"search_time_ms": search_time * 1000 / len(queries),
"recall@10": recall,
"memory_mb": memory_bytes / 1024 / 1024
})
return results
def calculate_recall(predictions: np.ndarray, ground_truth: np.ndarray, k: int) -> float:
"""Calculate recall@k."""
correct = 0
for pred, truth in zip(predictions, ground_truth):
correct += len(set(pred[:k]) & set(truth[:k]))
return correct / (len(predictions) * k)
def recommend_hnsw_params(
num_vectors: int,
target_recall: float = 0.95,
max_latency_ms: float = 10,
available_memory_gb: float = 8
) -> dict:
"""Recommend HNSW parameters based on requirements."""
# Base recommendations
if num_vectors < 100_000:
m = 16
ef_construction = 100
elif num_vectors < 1_000_000:
m = 32
ef_construction = 200
else:
m = 48
ef_construction = 256
# Adjust ef_search based on recall target
if target_recall >= 0.99:
ef_search = 256
elif target_recall >= 0.95:
ef_search = 128
else:
ef_search = 64
return {
"M": m,
"ef_construction": ef_construction,
"ef_search": ef_search,
"notes": f"Estimated for {num_vectors:,} vectors, {target_recall:.0%} recall"
}
```
### Template 2: Quantization Strategies
```python
import numpy as np
from typing import Optional, Tuple
class VectorQuantizer:
"""Quantization strategies for vector compression."""
@staticmethod
def scalar_quantize_int8(
vectors: np.ndarray,
min_val: Optional[float] = None,
max_val: Optional[float] = None
) -> Tuple[np.ndarray, dict]:
"""Scalar quantization to INT8."""
if min_val is None:
min_val = vectors.min()
if max_val is None:
max_val = vectors.max()
# Scale to 0-255 range
scale = 255.0 / (max_val - min_val)
quantized = np.clip(
np.round((vectors - min_val) * scale),
0, 255
).astype(np.uint8)
params = {"min_val": min_val, "max_val": max_val, "scale": scale}
return quantized, params
@staticmethod
def dequantize_int8(
quantized: np.ndarray,
params: dict
) -> np.ndarray:
"""Dequantize INT8 vectors."""
return quantized.astype(np.float32) / params["scale"] + params["min_val"]
@staticmethod
def product_quantize(
vectors: np.ndarray,
n_subvectors: int = 8,
n_centroids: int = 256
) -> Tuple[np.ndarray, dict]:
"""Product quantization for aggressive compression."""
from sklearn.cluster import KMeans
n, dim = vectors.shape
assert dim % n_subvectors == 0
subvector_dim = dim // n_subvectors
codebooks = []
codes = np.zeros((n, n_subvectors), dtype=np.uint8)
for i in range(n_subvectors):
start = i * subvector_dim
end = (i + 1) * subvector_dim
subvectors = vectors[:, start:end]
kmeans = KMeans(n_clusters=n_centroids, random_state=42)
codes[:, i] = kmeans.fit_predict(subvectors)
codebooks.append(kmeans.cluster_centers_)
params = {
"codebooks": codebooks,
"n_subvectors": n_subvectors,
"subvector_dim": subvector_dim
}
return codes, params
@staticmethod
def binary_quantize(vectors: np.ndarray) -> np.ndarray:
"""Binary quantization (sign of each dimension)."""
# Convert to binary: positive = 1, negative = 0
binary = (vectors > 0).astype(np.uint8)
# Pack bits into bytes
n, dim = vectors.shape
packed_dim = (dim + 7) // 8
packed = np.zeros((n, packed_dim), dtype=np.uint8)
for i in range(dim):
byte_idx = i // 8
bit_idx = i % 8
packed[:, byte_idx] |= (binary[:, i] << bit_idx)
return packed
def estimate_memory_usage(
num_vectors: int,
dimensions: int,
quantization: str = "fp32",
index_type: str = "hnsw",
hnsw_m: int = 16
) -> dict:
"""Estimate memory usage for different configurations."""
# Vector storage
bytes_per_dimension = {
"fp32": 4,
"fp16": 2,
"int8": 1,
"pq": 0.05, # Approximate
"binary": 0.125
}
vector_bytes = num_vectors * dimensions * bytes_per_dimension[quantization]
# Index overhead
if index_type == "hnsw":
# Each node has ~M*2 edges, each edge is 4 bytes (int32)
index_bytes = num_vectors * hnsw_m * 2 * 4
elif index_type == "ivf":
# Inverted lists + centroids
index_bytes = num_vectors * 8 + 65536 * dimensions * 4
else:
index_bytes = 0
total_bytes = vector_bytes + index_bytes
return {
"vector_storage_mb": vector_bytes / 1024 / 1024,
"index_overhead_mb": index_bytes / 1024 / 1024,
"total_mb": total_bytes / 1024 / 1024,
"total_gb": total_bytes / 1024 / 1024 / 1024
}
```
### Template 3: Qdrant Index Configuration
```python
from qdrant_client import QdrantClient
from qdrant_client.http import models
def create_optimized_collection(
client: QdrantClient,
collection_name: str,
vector_size: int,
num_vectors: int,
optimize_for: str = "balanced" # "recall", "speed", "memory"
) -> None:
"""Create collection with optimized settings."""
# HNSW configuration based on optimization target
hnsw_configs = {
"recall": models.HnswConfigDiff(m=32, ef_construct=256),
"speed": models.HnswConfigDiff(m=16, ef_construct=64),
"balanced": models.HnswConfigDiff(m=16, ef_construct=128),
"memory": models.HnswConfigDiff(m=8, ef_construct=64)
}
# Quantization configuration
quantization_configs = {
"recall": None, # No quantization for max recall
"speed": models.ScalarQuantization(
scalar=models.ScalarQuantizationConfig(
type=models.ScalarType.INT8,
quantile=0.99,
always_ram=True
)
),
"balanced": models.ScalarQuantization(
scalar=models.ScalarQuantizationConfig(
type=models.ScalarType.INT8,
quantile=0.99,
always_ram=False
)
),
"memory": models.ProductQuantization(
product=models.ProductQuantizationConfig(
compression=models.CompressionRatio.X16,
always_ram=False
)
)
}
# Optimizer configuration
optimizer_configs = {
"recall": models.OptimizersConfigDiff(
indexing_threshold=10000,
memmap_threshold=50000
),
"speed": models.OptimizersConfigDiff(
indexing_threshold=5000,
memmap_threshold=20000
),
"balanced": models.OptimizersConfigDiff(
indexing_threshold=20000,
memmap_threshold=50000
),
"memory": models.OptimizersConfigDiff(
indexing_threshold=50000,
memmap_threshold=10000 # Use disk sooner
)
}
client.create_collection(
collection_name=collection_name,
vectors_config=models.VectorParams(
size=vector_size,
distance=models.Distance.COSINE
),
hnsw_config=hnsw_configs[optimize_for],
quantization_config=quantization_configs[optimize_for],
optimizers_config=optimizer_configs[optimize_for]
)
def tune_search_parameters(
client: QdrantClient,
collection_name: str,
target_recall: float = 0.95
) -> models.SearchParams:
"""Tune search parameters for target recall."""
# Search parameter recommendations
if target_recall >= 0.99:
search_params = models.SearchParams(
hnsw_ef=256,
exact=False,
quantization=models.QuantizationSearchParams(
ignore=True, # Don't use quantization for search
rescore=True
)
)
elif target_recall >= 0.95:
search_params = models.SearchParams(
hnsw_ef=128,
exact=False,
quantization=models.QuantizationSearchParams(
ignore=False,
rescore=True,
oversampling=2.0
)
)
else:
search_params = models.SearchParams(
hnsw_ef=64,
exact=False,
quantization=models.QuantizationSearchParams(
ignore=False,
rescore=False
)
)
return search_params
```
### Template 4: Performance Monitoring
```python
import time
from dataclasses import dataclass
from typing import List
import numpy as np
@dataclass
class SearchMetrics:
latency_p50_ms: float
latency_p95_ms: float
latency_p99_ms: float
recall: float
qps: float
class VectorSearchMonitor:
"""Monitor vector search performance."""
def __init__(self, ground_truth_fn=None):
self.latencies = []
self.recalls = []
self.ground_truth_fn = ground_truth_fn
def measure_search(
self,
search_fn,
query_vectors: np.ndarray,
k: int = 10,
num_iterations: int = 100
) -> SearchMetrics:
"""Benchmark search performance."""
latencies = []
for _ in range(num_iterations):
for query in query_vectors:
start = time.perf_counter()
results = search_fn(query, k=k)
latency = (time.perf_counter() - start) * 1000
latencies.append(latency)
latencies = np.array(latencies)
total_queries = num_iterations * len(query_vectors)
total_time = sum(latencies) / 1000 # seconds
return SearchMetrics(
latency_p50_ms=np.percentile(latencies, 50),
latency_p95_ms=np.percentile(latencies, 95),
latency_p99_ms=np.percentile(latencies, 99),
recall=self._calculate_recall(search_fn, query_vectors, k) if self.ground_truth_fn else 0,
qps=total_queries / total_time
)
def _calculate_recall(self, search_fn, queries: np.ndarray, k: int) -> float:
"""Calculate recall against ground truth."""
if not self.ground_truth_fn:
return 0
correct = 0
total = 0
for query in queries:
predicted = set(search_fn(query, k=k))
actual = set(self.ground_truth_fn(query, k=k))
correct += len(predicted & actual)
total += k
return correct / total
def profile_index_build(
build_fn,
vectors: np.ndarray,
batch_sizes: List[int] = [1000, 10000, 50000]
) -> dict:
"""Profile index build performance."""
results = {}
for batch_size in batch_sizes:
times = []
for i in range(0, len(vectors), batch_size):
batch = vectors[i:i + batch_size]
start = time.perf_counter()
build_fn(batch)
times.append(time.perf_counter() - start)
results[batch_size] = {
"avg_batch_time_s": np.mean(times),
"vectors_per_second": batch_size / np.mean(times)
}
return results
```
## Best Practices
### Do's
- **Benchmark with real queries** - Synthetic may not represent production
- **Monitor recall continuously** - Can degrade with data drift
- **Start with defaults** - Tune only when needed
- **Use quantization** - Significant memory savings
- **Consider tiered storage** - Hot/cold data separation
### Don'ts
- **Don't over-optimize early** - Profile first
- **Don't ignore build time** - Index updates have cost
- **Don't forget reindexing** - Plan for maintenance
- **Don't skip warming** - Cold indexes are slow
## Resources
- [HNSW Paper](https://arxiv.org/abs/1603.09320)
- [Faiss Wiki](https://github.com/facebookresearch/faiss/wiki)
- [ANN Benchmarks](https://ann-benchmarks.com/)
| """
Test for 'vector-index-tuning' skill — FAISS Vector Index Tuning
Validates that the Agent created optimized FAISS index configurations
with proper parameter tuning and benchmarking.
"""
import os
import subprocess
import pytest
class TestVectorIndexTuning:
"""Verify FAISS vector index tuning implementation."""
REPO_DIR = "/workspace/faiss"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_tuning_script_exists(self):
"""A vector index tuning script must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".py") and (
"tun" in f.lower()
or "bench" in f.lower()
or "index" in f.lower()
or "optim" in f.lower()
):
found.append(os.path.join(root, f))
assert len(found) >= 1, "No tuning/benchmark script found"
def test_config_or_readme_exists(self):
"""Configuration or README for tuning must exist."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if (
"tun" in f.lower() or "bench" in f.lower() or "index" in f.lower()
) and (f.endswith((".md", ".yml", ".yaml", ".json", ".cfg"))):
found = True
break
if found:
break
if not found:
# Check for README in relevant dirs
for root, dirs, files in os.walk(self.REPO_DIR):
if "README" in files or "README.md" in files:
fpath = os.path.join(root, "README.md")
if os.path.isfile(fpath):
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
if "index" in content.lower() and "tuning" in content.lower():
found = True
break
assert found, "No tuning config or README found"
# ------------------------------------------------------------------
# L2: content validation
# ------------------------------------------------------------------
def _find_tuning_files(self):
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".py") and "node_modules" not in root:
fpath = os.path.join(root, f)
try:
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
if "faiss" in content.lower() or "index" in content.lower():
found.append(fpath)
except OSError:
pass
return found
def _read_all_tuning(self):
content = ""
for fpath in self._find_tuning_files():
with open(fpath, "r", errors="ignore") as f:
content += f.read() + "\n"
return content
def test_faiss_import(self):
"""Must import faiss library."""
content = self._read_all_tuning()
assert (
"import faiss" in content or "from faiss" in content
), "No faiss import found"
def test_index_factory(self):
"""Must use index_factory or build index."""
content = self._read_all_tuning()
factory_patterns = [
"index_factory",
"IndexFlatL2",
"IndexIVFFlat",
"IndexIVFPQ",
"IndexHNSW",
"IndexPQ",
"IndexScalarQuantizer",
"GpuIndex",
]
found = sum(1 for p in factory_patterns if p in content)
assert found >= 2, "Insufficient index construction patterns"
def test_parameter_sweep(self):
"""Must demonstrate parameter tuning/sweep."""
content = self._read_all_tuning()
param_patterns = [
"nprobe",
"nlist",
"M",
"efConstruction",
"efSearch",
"nbits",
"ParameterSpace",
"set_search_params",
]
found = sum(1 for p in param_patterns if p in content)
assert found >= 2, "Insufficient parameter tuning"
def test_training(self):
"""Must train the index."""
content = self._read_all_tuning()
train_patterns = [".train(", "train", "is_trained", "ntotal"]
found = any(p in content for p in train_patterns)
assert found, "No index training found"
def test_search_operation(self):
"""Must perform search operations."""
content = self._read_all_tuning()
search_patterns = [
".search(",
"knn_search",
"range_search",
"reconstruct",
"k=",
]
found = any(p in content for p in search_patterns)
assert found, "No search operation found"
def test_recall_metric(self):
"""Must measure recall or accuracy."""
content = self._read_all_tuning()
recall_patterns = [
"recall",
"accuracy",
"precision",
"intersection",
"ground_truth",
"evaluate",
]
found = any(p in content.lower() for p in recall_patterns)
assert found, "No recall/accuracy measurement found"
def test_benchmark_timing(self):
"""Must benchmark query latency."""
content = self._read_all_tuning()
timing_patterns = [
"time.",
"timeit",
"perf_counter",
"latency",
"qps",
"throughput",
"queries per second",
]
found = any(p in content.lower() for p in timing_patterns)
assert found, "No timing/benchmark measurement found"
def test_python_scripts_compile(self):
"""All Python tuning scripts must compile."""
for fpath in self._find_tuning_files():
result = subprocess.run(
["python", "-m", "py_compile", fpath],
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"{fpath} compile error:\n{result.stderr}"
| https://github.com/facebookresearch/faiss | zhangyiiiiii/swe-skills-bench-python | |
rag-implementation | RAG Implementation Framework | See task file for detailed mission requirements. | feature | # Task: Implement End-to-End RAG Demo in LangChain
## Background
Create an end-to-end RAG (Retrieval-Augmented Generation) demonstration
in LangChain showing data import, retriever configuration, and generation.
## Files to Create/Modify
- examples/rag_demo.py (new)
- examples/rag_config.yaml (configuration)
- demo/rag/ (optional directory)
## Requirements
RAG Pipeline Components:
1) Data Import:
- Load documents from local files
- Text splitting with configurable chunk size
2) Retriever Configuration:
- Vector store setup (FAISS or Chroma)
- Embedding model configuration
- Similarity search parameters
3) Generation Pipeline:
- Retrieval + LLM chain
- Context injection into prompt
- Source citation in output
Minimal Local Configuration:
- Can run without external API (mock LLM if needed)
- README with clear instructions
Output Requirements:
- Generated output includes retrieved context
- Source references present in output
- Successful exit code
## Acceptance Criteria
- `python examples/rag_demo.py` exits with code 0
- Output contains retrieved context snippets
- Source citations or references present
| ---
name: rag-implementation
description: Build Retrieval-Augmented Generation (RAG) systems for LLM applications with vector databases and semantic search. Use when implementing knowledge-grounded AI, building document Q&A systems, or integrating LLMs with external knowledge bases.
---
# RAG Implementation
Master Retrieval-Augmented Generation (RAG) to build LLM applications that provide accurate, grounded responses using external knowledge sources.
## When to Use This Skill
- Building Q&A systems over proprietary documents
- Creating chatbots with current, factual information
- Implementing semantic search with natural language queries
- Reducing hallucinations with grounded responses
- Enabling LLMs to access domain-specific knowledge
- Building documentation assistants
- Creating research tools with source citation
## Core Components
### 1. Vector Databases
**Purpose**: Store and retrieve document embeddings efficiently
**Options:**
- **Pinecone**: Managed, scalable, serverless
- **Weaviate**: Open-source, hybrid search, GraphQL
- **Milvus**: High performance, on-premise
- **Chroma**: Lightweight, easy to use, local development
- **Qdrant**: Fast, filtered search, Rust-based
- **pgvector**: PostgreSQL extension, SQL integration
### 2. Embeddings
**Purpose**: Convert text to numerical vectors for similarity search
**Models (2026):**
| Model | Dimensions | Best For |
|-------|------------|----------|
| **voyage-3-large** | 1024 | Claude apps (Anthropic recommended) |
| **voyage-code-3** | 1024 | Code search |
| **text-embedding-3-large** | 3072 | OpenAI apps, high accuracy |
| **text-embedding-3-small** | 1536 | OpenAI apps, cost-effective |
| **bge-large-en-v1.5** | 1024 | Open source, local deployment |
| **multilingual-e5-large** | 1024 | Multi-language support |
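Whatever model produces the vectors, retrieval ultimately reduces to nearest-neighbor search, usually by cosine similarity between the query embedding and each document embedding. A minimal sketch with toy 3-dimensional vectors (real embeddings have the dimensions listed above; the doc names are hypothetical):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings"; real models output 1024-3072 dimensions
query = [0.9, 0.1, 0.0]
docs = {
    "doc_a": [0.8, 0.2, 0.1],  # points in a similar direction to the query
    "doc_b": [0.0, 0.1, 0.9],  # nearly orthogonal to the query
}
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked[0])  # doc_a
```

Vector stores run the same comparison over millions of vectors using approximate nearest-neighbor indexes rather than this exhaustive scan.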
### 3. Retrieval Strategies
**Approaches:**
- **Dense Retrieval**: Semantic similarity via embeddings
- **Sparse Retrieval**: Keyword matching (BM25, TF-IDF)
- **Hybrid Search**: Combine dense + sparse with weighted fusion
- **Multi-Query**: Generate multiple query variations
- **HyDE**: Generate hypothetical documents for better retrieval
### 4. Reranking
**Purpose**: Improve retrieval quality by reordering results
**Methods:**
- **Cross-Encoders**: BERT-based reranking (ms-marco-MiniLM)
- **Cohere Rerank**: API-based reranking
- **Maximal Marginal Relevance (MMR)**: Diversity + relevance
- **LLM-based**: Use LLM to score relevance
## Quick Start with LangGraph
```python
from langgraph.graph import StateGraph, START, END
from langchain_anthropic import ChatAnthropic
from langchain_voyageai import VoyageAIEmbeddings
from langchain_pinecone import PineconeVectorStore
from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate
from langchain_text_splitters import RecursiveCharacterTextSplitter
from typing import TypedDict, Annotated
class RAGState(TypedDict):
question: str
context: list[Document]
answer: str
# Initialize components
llm = ChatAnthropic(model="claude-sonnet-4-6")
embeddings = VoyageAIEmbeddings(model="voyage-3-large")
vectorstore = PineconeVectorStore(index_name="docs", embedding=embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
# RAG prompt
rag_prompt = ChatPromptTemplate.from_template(
"""Answer based on the context below. If you cannot answer, say so.
Context:
{context}
Question: {question}
Answer:"""
)
async def retrieve(state: RAGState) -> RAGState:
"""Retrieve relevant documents."""
docs = await retriever.ainvoke(state["question"])
return {"context": docs}
async def generate(state: RAGState) -> RAGState:
"""Generate answer from context."""
context_text = "\n\n".join(doc.page_content for doc in state["context"])
messages = rag_prompt.format_messages(
context=context_text,
question=state["question"]
)
response = await llm.ainvoke(messages)
return {"answer": response.content}
# Build RAG graph
builder = StateGraph(RAGState)
builder.add_node("retrieve", retrieve)
builder.add_node("generate", generate)
builder.add_edge(START, "retrieve")
builder.add_edge("retrieve", "generate")
builder.add_edge("generate", END)
rag_chain = builder.compile()
# Use
result = await rag_chain.ainvoke({"question": "What are the main features?"})
print(result["answer"])
```
## Advanced RAG Patterns
### Pattern 1: Hybrid Search with RRF
```python
from langchain_community.retrievers import BM25Retriever
from langchain.retrievers import EnsembleRetriever
# Sparse retriever (BM25 for keyword matching)
bm25_retriever = BM25Retriever.from_documents(documents)
bm25_retriever.k = 10
# Dense retriever (embeddings for semantic search)
dense_retriever = vectorstore.as_retriever(search_kwargs={"k": 10})
# Combine with Reciprocal Rank Fusion weights
ensemble_retriever = EnsembleRetriever(
retrievers=[bm25_retriever, dense_retriever],
weights=[0.3, 0.7] # 30% keyword, 70% semantic
)
```
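`EnsembleRetriever` fuses the two ranked lists with weighted Reciprocal Rank Fusion. The core scoring — score(d) = Σ weight / (k + rank(d)) — can be sketched in plain Python (k=60 is the conventional constant; the doc IDs here are hypothetical):

```python
def rrf_fuse(rankings: list[list[str]], weights: list[float], k: int = 60) -> list[str]:
    """Fuse ranked lists: each doc scores sum(weight / (k + rank)) over lists."""
    scores: dict[str, float] = {}
    for ranking, weight in zip(rankings, weights):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + weight / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["d3", "d1", "d7"]   # keyword ranking
dense_hits = ["d1", "d5", "d3"]  # semantic ranking
fused = rrf_fuse([bm25_hits, dense_hits], weights=[0.3, 0.7])
print(fused[0])  # d1: ranked highly by both retrievers
```

Documents that appear near the top of both lists accumulate score from both terms, which is why hybrid fusion tends to beat either retriever alone.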
### Pattern 2: Multi-Query Retrieval
```python
from langchain.retrievers.multi_query import MultiQueryRetriever
# Generate multiple query perspectives for better recall
multi_query_retriever = MultiQueryRetriever.from_llm(
retriever=vectorstore.as_retriever(search_kwargs={"k": 5}),
llm=llm
)
# Single query → multiple variations → combined results
results = await multi_query_retriever.ainvoke("What is the main topic?")
```
### Pattern 3: Contextual Compression
```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
# Compressor extracts only relevant portions
compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(
base_compressor=compressor,
base_retriever=vectorstore.as_retriever(search_kwargs={"k": 10})
)
# Returns only relevant parts of documents
compressed_docs = await compression_retriever.ainvoke("specific query")
```
### Pattern 4: Parent Document Retriever
```python
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain_text_splitters import RecursiveCharacterTextSplitter
# Small chunks for precise retrieval, large chunks for context
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=50)
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
# Store for parent documents
docstore = InMemoryStore()
parent_retriever = ParentDocumentRetriever(
vectorstore=vectorstore,
docstore=docstore,
child_splitter=child_splitter,
parent_splitter=parent_splitter
)
# Add documents (splits children, stores parents)
await parent_retriever.aadd_documents(documents)
# Retrieval returns parent documents with full context
results = await parent_retriever.ainvoke("query")
```
### Pattern 5: HyDE (Hypothetical Document Embeddings)
```python
from langchain_core.prompts import ChatPromptTemplate
class HyDEState(TypedDict):
question: str
hypothetical_doc: str
context: list[Document]
answer: str
hyde_prompt = ChatPromptTemplate.from_template(
"""Write a detailed passage that would answer this question:
Question: {question}
Passage:"""
)
async def generate_hypothetical(state: HyDEState) -> HyDEState:
"""Generate hypothetical document for better retrieval."""
messages = hyde_prompt.format_messages(question=state["question"])
response = await llm.ainvoke(messages)
return {"hypothetical_doc": response.content}
async def retrieve_with_hyde(state: HyDEState) -> HyDEState:
"""Retrieve using hypothetical document."""
# Use hypothetical doc for retrieval instead of original query
docs = await retriever.ainvoke(state["hypothetical_doc"])
return {"context": docs}
# Build HyDE RAG graph
builder = StateGraph(HyDEState)
builder.add_node("hypothetical", generate_hypothetical)
builder.add_node("retrieve", retrieve_with_hyde)
builder.add_node("generate", generate)
builder.add_edge(START, "hypothetical")
builder.add_edge("hypothetical", "retrieve")
builder.add_edge("retrieve", "generate")
builder.add_edge("generate", END)
hyde_rag = builder.compile()
```
## Document Chunking Strategies
### Recursive Character Text Splitter
```python
from langchain_text_splitters import RecursiveCharacterTextSplitter
splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200,
length_function=len,
separators=["\n\n", "\n", ". ", " ", ""] # Try in order
)
chunks = splitter.split_documents(documents)
```
### Token-Based Splitting
```python
from langchain_text_splitters import TokenTextSplitter
splitter = TokenTextSplitter(
chunk_size=512,
chunk_overlap=50,
encoding_name="cl100k_base" # OpenAI tiktoken encoding
)
```
### Semantic Chunking
```python
from langchain_experimental.text_splitter import SemanticChunker
splitter = SemanticChunker(
embeddings=embeddings,
breakpoint_threshold_type="percentile",
breakpoint_threshold_amount=95
)
```
### Markdown Header Splitter
```python
from langchain_text_splitters import MarkdownHeaderTextSplitter
headers_to_split_on = [
("#", "Header 1"),
("##", "Header 2"),
("###", "Header 3"),
]
splitter = MarkdownHeaderTextSplitter(
headers_to_split_on=headers_to_split_on,
strip_headers=False
)
```
## Vector Store Configurations
### Pinecone (Serverless)
```python
import os

from pinecone import Pinecone, ServerlessSpec
from langchain_pinecone import PineconeVectorStore
# Initialize Pinecone client
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
# Create index if needed
if "my-index" not in pc.list_indexes().names():
pc.create_index(
name="my-index",
dimension=1024, # voyage-3-large dimensions
metric="cosine",
spec=ServerlessSpec(cloud="aws", region="us-east-1")
)
# Create vector store
index = pc.Index("my-index")
vectorstore = PineconeVectorStore(index=index, embedding=embeddings)
```
### Weaviate
```python
import weaviate
from langchain_weaviate import WeaviateVectorStore
client = weaviate.connect_to_local() # or connect_to_weaviate_cloud()
vectorstore = WeaviateVectorStore(
client=client,
index_name="Documents",
text_key="content",
embedding=embeddings
)
```
### Chroma (Local Development)
```python
from langchain_chroma import Chroma
vectorstore = Chroma(
collection_name="my_collection",
embedding_function=embeddings,
persist_directory="./chroma_db"
)
```
### pgvector (PostgreSQL)
```python
from langchain_postgres.vectorstores import PGVector
connection_string = "postgresql+psycopg://user:pass@localhost:5432/vectordb"
vectorstore = PGVector(
embeddings=embeddings,
collection_name="documents",
connection=connection_string,
)
```
## Retrieval Optimization
### 1. Metadata Filtering
```python
from datetime import datetime

from langchain_core.documents import Document
# Add metadata during indexing
docs_with_metadata = []
for doc in documents:
doc.metadata.update({
"source": doc.metadata.get("source", "unknown"),
"category": determine_category(doc.page_content),
"date": datetime.now().isoformat()
})
docs_with_metadata.append(doc)
# Filter during retrieval
results = await vectorstore.asimilarity_search(
"query",
filter={"category": "technical"},
k=5
)
```
### 2. Maximal Marginal Relevance (MMR)
```python
# Balance relevance with diversity
results = await vectorstore.amax_marginal_relevance_search(
"query",
k=5,
fetch_k=20, # Fetch 20, return top 5 diverse
lambda_mult=0.5 # 0=max diversity, 1=max relevance
)
```
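The `lambda_mult` trade-off can be made concrete with a pure-Python MMR sketch: each step greedily picks the candidate maximizing λ·relevance − (1−λ)·max-similarity-to-already-selected. The relevance and similarity numbers below are toy values standing in for embedding scores:

```python
def mmr_select(relevance: dict[str, float],
               sim: dict[tuple[str, str], float],
               k: int, lambda_mult: float = 0.5) -> list[str]:
    """Greedy Maximal Marginal Relevance over precomputed similarities."""
    selected: list[str] = []
    candidates = set(relevance)
    while candidates and len(selected) < k:
        def mmr_score(doc: str) -> float:
            # Redundancy = similarity to the closest already-selected doc
            redundancy = max((sim.get((doc, s), sim.get((s, doc), 0.0))
                              for s in selected), default=0.0)
            return lambda_mult * relevance[doc] - (1 - lambda_mult) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected

relevance = {"a": 0.9, "b": 0.85, "c": 0.5}  # toy query relevance
sim = {("a", "b"): 0.95}                      # a and b are near-duplicates
print(mmr_select(relevance, sim, k=2))        # ['a', 'c'] — c beats the near-duplicate b
```

With `lambda_mult=1.0` the same call degenerates to plain relevance ranking and returns `a` then `b`.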
### 3. Reranking with Cross-Encoder
```python
from sentence_transformers import CrossEncoder
reranker = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-6-v2')
async def retrieve_and_rerank(query: str, k: int = 5) -> list[Document]:
# Get initial results
candidates = await vectorstore.asimilarity_search(query, k=20)
# Rerank
pairs = [[query, doc.page_content] for doc in candidates]
scores = reranker.predict(pairs)
# Sort by score and take top k
ranked = sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True)
return [doc for doc, score in ranked[:k]]
```
### 4. Cohere Rerank
```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain_cohere import CohereRerank
reranker = CohereRerank(model="rerank-english-v3.0", top_n=5)
# Wrap retriever with reranking
reranked_retriever = ContextualCompressionRetriever(
base_compressor=reranker,
base_retriever=vectorstore.as_retriever(search_kwargs={"k": 20})
)
```
## Prompt Engineering for RAG
### Contextual Prompt with Citations
```python
rag_prompt = ChatPromptTemplate.from_template(
"""Answer the question based on the context below. Include citations using [1], [2], etc.
If you cannot answer based on the context, say "I don't have enough information."
Context:
{context}
Question: {question}
Instructions:
1. Use only information from the context
2. Cite sources with [1], [2] format
3. If uncertain, express uncertainty
Answer (with citations):"""
)
```
### Structured Output for RAG
```python
from pydantic import BaseModel, Field
class RAGResponse(BaseModel):
answer: str = Field(description="The answer based on context")
confidence: float = Field(description="Confidence score 0-1")
sources: list[str] = Field(description="Source document IDs used")
reasoning: str = Field(description="Brief reasoning for the answer")
# Use with structured output
structured_llm = llm.with_structured_output(RAGResponse)
```
## Evaluation Metrics
```python
from typing import TypedDict
class RAGEvalMetrics(TypedDict):
retrieval_precision: float # Relevant docs / retrieved docs
retrieval_recall: float # Retrieved relevant / total relevant
answer_relevance: float # Answer addresses question
faithfulness: float # Answer grounded in context
context_relevance: float # Context relevant to question
async def evaluate_rag_system(
rag_chain,
test_cases: list[dict]
) -> RAGEvalMetrics:
"""Evaluate RAG system on test cases."""
metrics = {k: [] for k in RAGEvalMetrics.__annotations__}
for test in test_cases:
result = await rag_chain.ainvoke({"question": test["question"]})
# Retrieval metrics
retrieved_ids = {doc.metadata["id"] for doc in result["context"]}
relevant_ids = set(test["relevant_doc_ids"])
precision = len(retrieved_ids & relevant_ids) / len(retrieved_ids)
recall = len(retrieved_ids & relevant_ids) / len(relevant_ids)
metrics["retrieval_precision"].append(precision)
metrics["retrieval_recall"].append(recall)
# Use LLM-as-judge for quality metrics
quality = await evaluate_answer_quality(
question=test["question"],
answer=result["answer"],
context=result["context"],
expected=test.get("expected_answer")
)
metrics["answer_relevance"].append(quality["relevance"])
metrics["faithfulness"].append(quality["faithfulness"])
metrics["context_relevance"].append(quality["context_relevance"])
return {k: sum(v) / len(v) for k, v in metrics.items()}
```
## Resources
- [LangChain RAG Tutorial](https://python.langchain.com/docs/tutorials/rag/)
- [LangGraph RAG Examples](https://langchain-ai.github.io/langgraph/tutorials/rag/)
- [Pinecone Best Practices](https://docs.pinecone.io/guides/get-started/overview)
- [Voyage AI Embeddings](https://docs.voyageai.com/)
- [RAG Evaluation Guide](https://docs.ragas.io/)
## Best Practices
1. **Chunk Size**: Balance between context (larger) and specificity (smaller) - typically 500-1000 tokens
2. **Overlap**: Use 10-20% overlap to preserve context at boundaries
3. **Metadata**: Include source, page, timestamp for filtering and debugging
4. **Hybrid Search**: Combine semantic and keyword search for best recall
5. **Reranking**: Use cross-encoder reranking for precision-critical applications
6. **Citations**: Always return source documents for transparency
7. **Evaluation**: Continuously test retrieval quality and answer accuracy
8. **Monitoring**: Track retrieval metrics and latency in production
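The chunk-size and overlap guidance in (1) and (2) can be sanity-checked with a minimal sliding-window splitter. This is a character-based sketch only; production splitters like `RecursiveCharacterTextSplitter` additionally respect separators and token counts:

```python
def sliding_chunks(text: str, chunk_size: int, overlap: int) -> list[str]:
    """Split text into fixed-size windows; each window re-includes the
    last `overlap` characters of the previous one."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

text = "x" * 1000
chunks = sliding_chunks(text, chunk_size=400, overlap=80)  # 20% overlap
print(len(chunks), [len(c) for c in chunks])
```

Each pair of adjacent chunks shares an 80-character boundary region, so a sentence straddling a chunk edge still appears whole in at least one chunk.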
## Common Issues
- **Poor Retrieval**: Check embedding quality, chunk size, query formulation
- **Irrelevant Results**: Add metadata filtering, use hybrid search, rerank
- **Missing Information**: Ensure documents are properly indexed, check chunking
- **Slow Queries**: Optimize vector store, use caching, reduce k
- **Hallucinations**: Improve grounding prompt, add verification step
- **Context Too Long**: Use compression or parent document retriever
| """
Test for 'rag-implementation' skill — RAG Implementation Framework
Validates that the Agent created an end-to-end RAG demo in the LangChain repo.
"""
import os
import subprocess
import pytest
class TestRagImplementation:
"""Verify RAG demo implementation in LangChain."""
REPO_DIR = "/workspace/langchain"
# ------------------------------------------------------------------
# L1: file existence & syntax
# ------------------------------------------------------------------
def test_rag_demo_exists(self):
"""examples/rag_demo.py must exist."""
fpath = os.path.join(self.REPO_DIR, "examples", "rag_demo.py")
assert os.path.isfile(fpath), "rag_demo.py not found"
def test_rag_demo_compiles(self):
"""rag_demo.py must compile."""
result = subprocess.run(
["python", "-m", "py_compile", "examples/rag_demo.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
# ------------------------------------------------------------------
# L2: content & structure verification
# ------------------------------------------------------------------
def _read_source(self):
fpath = os.path.join(self.REPO_DIR, "examples", "rag_demo.py")
with open(fpath, "r", encoding="utf-8") as f:
return f.read()
def test_document_loading(self):
"""Demo must include document loading logic."""
source = self._read_source()
load_patterns = [
"load",
"read",
"Document",
"TextLoader",
"DirectoryLoader",
"open(",
]
found = sum(1 for p in load_patterns if p in source)
assert found >= 2, "Document loading not implemented"
def test_text_splitting(self):
"""Demo must include text splitting/chunking."""
source = self._read_source()
split_patterns = [
"split",
"chunk",
"TextSplitter",
"RecursiveCharacterTextSplitter",
]
found = any(p in source for p in split_patterns)
assert found, "Text splitting not implemented"
def test_vector_store(self):
"""Demo must configure a vector store."""
source = self._read_source()
vs_patterns = [
"FAISS",
"Chroma",
"vectorstore",
"VectorStore",
"from_documents",
"from_texts",
"embedding",
]
found = sum(1 for p in vs_patterns if p in source)
assert found >= 2, "Vector store not configured"
def test_retrieval_chain(self):
"""Demo must implement retrieval + generation chain."""
source = self._read_source()
chain_patterns = [
"chain",
"retriev",
"qa",
"generate",
"RetrievalQA",
"invoke",
"run",
]
        found = sum(1 for p in chain_patterns if p.lower() in source.lower())
assert found >= 2, "Retrieval chain not implemented"
def test_source_citation(self):
"""Demo output should include source citations/references."""
source = self._read_source()
cite_patterns = [
"source",
"citation",
"reference",
"metadata",
"page_content",
"document",
]
found = sum(1 for p in cite_patterns if p in source.lower())
assert found >= 2, "Source citation handling not implemented"
def test_rag_demo_runs(self):
"""rag_demo.py must run and exit with code 0."""
result = subprocess.run(
["python", "examples/rag_demo.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
assert result.returncode == 0, f"Demo failed:\n{result.stderr}"
def test_output_has_content(self):
"""Demo must produce non-empty output."""
result = subprocess.run(
["python", "examples/rag_demo.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
if result.returncode != 0:
pytest.skip(f"Demo failed: {result.stderr[:500]}")
assert len(result.stdout.strip()) > 10, "Demo output is too short"
def test_config_file_if_exists(self):
"""If rag_config.yaml exists, it must be valid YAML."""
fpath = os.path.join(self.REPO_DIR, "examples", "rag_config.yaml")
if not os.path.isfile(fpath):
pytest.skip("rag_config.yaml not created (optional)")
import yaml
with open(fpath, "r") as f:
config = yaml.safe_load(f)
assert isinstance(config, dict), "Config must be a YAML mapping"
def test_no_external_api_required(self):
"""Demo should run locally without external API keys (mock LLM if needed)."""
source = self._read_source()
# Should use mock/fake LLM or local model
local_patterns = [
"mock",
"fake",
"FakeLLM",
"local",
"HuggingFace",
"dummy",
"test",
"FakeListLLM",
]
found = any(p.lower() in source.lower() for p in local_patterns)
# If not found, the demo might still work with env var check
if not found:
# Check it doesn't hard-require OPENAI_API_KEY without fallback
assert (
"OPENAI_API_KEY" not in source or "os.environ.get" in source
), "Demo appears to require external API key without fallback"
| https://github.com/langchain-ai/langchain | zhangyiiiiii/swe-skills-bench-python | |
spark-optimization | Spark Optimization | See task file for detailed mission requirements. | feature | # Task: Add Spark Job Example with Performance Benchmarking
## Background
Add a small Spark job example with baseline measurement and optimization
suggestions like shuffle and partition tuning.
## Files to Create/Modify
- examples/spark_optimization_demo.py (new)
- examples/spark_benchmark.sh (benchmark script)
- benchmarks/spark_perf/ (optional directory)
## Requirements
Example Job:
- Simple but representative workload
- Configurable data size
- Clear performance characteristics
Optimization Demonstrations:
- Shuffle optimization (coalesce vs repartition)
- Partition tuning
- Broadcast joins for small tables
- Caching strategies
Benchmark Script:
- Measure execution time
- Record memory usage
- Compare before/after optimization
- Output results to JSON/CSV
Output Requirements:
- Performance metrics recorded
- Comparison results documented
- Clear speedup demonstration
## Acceptance Criteria
- `python examples/spark_optimization_demo.py` exits with code 0
- Comparison results output (JSON/CSV)
- Performance improvement documented
| ---
name: spark-optimization
description: Optimize Apache Spark jobs with partitioning, caching, shuffle optimization, and memory tuning. Use when improving Spark performance, debugging slow jobs, or scaling data processing pipelines.
---
# Apache Spark Optimization
Production patterns for optimizing Apache Spark jobs including partitioning strategies, memory management, shuffle optimization, and performance tuning.
## When to Use This Skill
- Optimizing slow Spark jobs
- Tuning memory and executor configuration
- Implementing efficient partitioning strategies
- Debugging Spark performance issues
- Scaling Spark pipelines for large datasets
- Reducing shuffle and data skew
## Core Concepts
### 1. Spark Execution Model
```
Driver Program
↓
Job (triggered by action)
↓
Stages (separated by shuffles)
↓
Tasks (one per partition)
```
### 2. Key Performance Factors
| Factor | Impact | Solution |
| ----------------- | --------------------- | ----------------------------- |
| **Shuffle** | Network I/O, disk I/O | Minimize wide transformations |
| **Data Skew** | Uneven task duration | Salting, broadcast joins |
| **Serialization** | CPU overhead | Use Kryo, columnar formats |
| **Memory** | GC pressure, spills | Tune executor memory |
| **Partitions** | Parallelism | Right-size partitions |
## Quick Start
```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
# Create optimized Spark session
spark = (SparkSession.builder
.appName("OptimizedJob")
.config("spark.sql.adaptive.enabled", "true")
.config("spark.sql.adaptive.coalescePartitions.enabled", "true")
.config("spark.sql.adaptive.skewJoin.enabled", "true")
.config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
.config("spark.sql.shuffle.partitions", "200")
.getOrCreate())
# Read with optimized settings
df = (spark.read
.format("parquet")
.option("mergeSchema", "false")
.load("s3://bucket/data/"))
# Efficient transformations
result = (df
.filter(F.col("date") >= "2024-01-01")
.select("id", "amount", "category")
.groupBy("category")
.agg(F.sum("amount").alias("total")))
result.write.mode("overwrite").parquet("s3://bucket/output/")
```
## Patterns
### Pattern 1: Optimal Partitioning
```python
# Calculate optimal partition count
def calculate_partitions(data_size_gb: float, partition_size_mb: int = 128) -> int:
"""
Optimal partition size: 128MB - 256MB
Too few: Under-utilization, memory pressure
Too many: Task scheduling overhead
"""
return max(int(data_size_gb * 1024 / partition_size_mb), 1)
# Repartition for even distribution
df_repartitioned = df.repartition(200, "partition_key")
# Coalesce to reduce partitions (no shuffle)
df_coalesced = df.coalesce(100)
# Partition pruning with predicate pushdown
df = (spark.read.parquet("s3://bucket/data/")
.filter(F.col("date") == "2024-01-01")) # Spark pushes this down
# Write with partitioning for future queries
(df.write
.partitionBy("year", "month", "day")
.mode("overwrite")
.parquet("s3://bucket/partitioned_output/"))
```
### Pattern 2: Join Optimization
```python
from pyspark.sql import functions as F
from pyspark.sql.types import *
# 1. Broadcast Join - Small table joins
# Best when: One side < 10MB (configurable)
small_df = spark.read.parquet("s3://bucket/small_table/") # < 10MB
large_df = spark.read.parquet("s3://bucket/large_table/") # TBs
# Explicit broadcast hint
result = large_df.join(
F.broadcast(small_df),
on="key",
how="left"
)
# 2. Sort-Merge Join - Default for large tables
# Requires shuffle, but handles any size
result = large_df1.join(large_df2, on="key", how="inner")
# 3. Bucket Join - Pre-sorted, no shuffle at join time
# Write bucketed tables
(df.write
.bucketBy(200, "customer_id")
.sortBy("customer_id")
.mode("overwrite")
.saveAsTable("bucketed_orders"))
# Join bucketed tables (no shuffle!)
orders = spark.table("bucketed_orders")
customers = spark.table("bucketed_customers") # Same bucket count
result = orders.join(customers, on="customer_id")
# 4. Skew Join Handling
# Enable AQE skew join optimization
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")
spark.conf.set("spark.sql.adaptive.skewJoin.skewedPartitionFactor", "5")
spark.conf.set("spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes", "256MB")
# Manual salting for severe skew
def salt_join(df_skewed, df_other, key_col, num_salts=10):
"""Add salt to distribute skewed keys"""
# Add salt to skewed side
df_salted = df_skewed.withColumn(
"salt",
(F.rand() * num_salts).cast("int")
).withColumn(
"salted_key",
F.concat(F.col(key_col), F.lit("_"), F.col("salt"))
)
# Explode other side with all salts
df_exploded = df_other.crossJoin(
spark.range(num_salts).withColumnRenamed("id", "salt")
).withColumn(
"salted_key",
F.concat(F.col(key_col), F.lit("_"), F.col("salt"))
)
# Join on salted key
return df_salted.join(df_exploded, on="salted_key", how="inner")
```
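The effect of salting on a skewed key can be seen without Spark at all: rows for one hot key, which would all land in a single shuffle partition, get spread across `num_salts` buckets once the salt is appended (`random` stands in for `F.rand()` here):

```python
import random
from collections import Counter

random.seed(0)
num_salts = 10
hot_key_rows = ["hot"] * 1000  # one key dominates the data

# Append a random salt, mirroring the salted_key column above
salted = [f"{key}_{random.randrange(num_salts)}" for key in hot_key_rows]
buckets = Counter(salted)

# Roughly 10 buckets of ~100 rows instead of 1 bucket of 1000
print(len(buckets), max(buckets.values()))
```

The join still produces correct results because the other side is exploded with every salt value, as in `salt_join` above.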
### Pattern 3: Caching and Persistence
```python
from pyspark import StorageLevel
# Cache when reusing DataFrame multiple times
df = spark.read.parquet("s3://bucket/data/")
df_filtered = df.filter(F.col("status") == "active")
# Cache in memory (MEMORY_AND_DISK is default)
df_filtered.cache()
# Or with specific storage level
df_filtered.persist(StorageLevel.MEMORY_AND_DISK_SER)
# Force materialization
df_filtered.count()
# Use in multiple actions
agg1 = df_filtered.groupBy("category").count()
agg2 = df_filtered.groupBy("region").sum("amount")
# Unpersist when done
df_filtered.unpersist()
# Storage levels explained:
# MEMORY_ONLY - Fast, but may not fit
# MEMORY_AND_DISK - Spills to disk if needed (recommended)
# MEMORY_ONLY_SER - Serialized, less memory, more CPU
# DISK_ONLY - When memory is tight
# OFF_HEAP - Tungsten off-heap memory
# Checkpoint for complex lineage
spark.sparkContext.setCheckpointDir("s3://bucket/checkpoints/")
df_complex = (df
.join(other_df, "key")
.groupBy("category")
.agg(F.sum("amount")))
df_complex.checkpoint() # Breaks lineage, materializes
```
### Pattern 4: Memory Tuning
```python
# Executor memory configuration
# spark-submit --executor-memory 8g --executor-cores 4
# Memory breakdown (8GB executor):
# - spark.memory.fraction = 0.6 (60% = 4.8GB for execution + storage)
# - spark.memory.storageFraction = 0.5 (50% of 4.8GB = 2.4GB for cache)
# - Remaining 2.4GB for execution (shuffles, joins, sorts)
# - 40% = 3.2GB for user data structures and internal metadata
spark = (SparkSession.builder
.config("spark.executor.memory", "8g")
.config("spark.executor.memoryOverhead", "2g") # For non-JVM memory
.config("spark.memory.fraction", "0.6")
.config("spark.memory.storageFraction", "0.5")
.config("spark.sql.shuffle.partitions", "200")
# For memory-intensive operations
.config("spark.sql.autoBroadcastJoinThreshold", "50MB")
# Prevent OOM on large shuffles
.config("spark.sql.files.maxPartitionBytes", "128MB")
.getOrCreate())
# Monitor memory usage
def print_memory_usage(spark):
"""Print current memory usage"""
sc = spark.sparkContext
for executor in sc._jsc.sc().getExecutorMemoryStatus().keySet().toArray():
mem_status = sc._jsc.sc().getExecutorMemoryStatus().get(executor)
total = mem_status._1() / (1024**3)
free = mem_status._2() / (1024**3)
print(f"{executor}: {total:.2f}GB total, {free:.2f}GB free")
```
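The memory breakdown in the comments above follows directly from the two fractions; a small helper reproduces the arithmetic (values in GB; this simplified model ignores the ~300MB reserved region Spark carves out before applying `spark.memory.fraction`):

```python
def executor_memory_regions(executor_gb: float,
                            memory_fraction: float = 0.6,
                            storage_fraction: float = 0.5) -> dict[str, float]:
    """Split executor heap into Spark's unified-memory regions."""
    unified = executor_gb * memory_fraction  # execution + storage pool
    storage = unified * storage_fraction     # cache (evictable under pressure)
    execution = unified - storage            # shuffles, joins, sorts
    user = executor_gb - unified             # user data structures, metadata
    return {"unified": unified, "storage": storage,
            "execution": execution, "user": user}

regions = executor_memory_regions(8.0)
print(regions)  # unified 4.8, storage 2.4, execution 2.4, user 3.2
```

Because storage and execution borrow from each other within the unified pool, these numbers are soft boundaries, not hard limits.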
### Pattern 5: Shuffle Optimization
```python
# Reduce shuffle data size
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")  # let AQE pick the count ("auto" for spark.sql.shuffle.partitions is Databricks-only)
spark.conf.set("spark.shuffle.compress", "true")
spark.conf.set("spark.shuffle.spill.compress", "true")
# Pre-aggregate before shuffle
df_optimized = (df
# Local aggregation first (combiner)
.groupBy("key", "partition_col")
.agg(F.sum("value").alias("partial_sum"))
# Then global aggregation
.groupBy("key")
.agg(F.sum("partial_sum").alias("total")))
# Avoid shuffle with map-side operations
# BAD: Shuffle for each distinct
distinct_count = df.select("category").distinct().count()
# GOOD: Approximate distinct (no shuffle)
approx_count = df.select(F.approx_count_distinct("category")).collect()[0][0]
# Use coalesce instead of repartition when reducing partitions
df_reduced = df.coalesce(10) # No shuffle
# Optimize shuffle with compression
spark.conf.set("spark.io.compression.codec", "lz4") # Fast compression
```
### Pattern 6: Data Format Optimization
```python
# Parquet optimizations
(df.write
.option("compression", "snappy") # Fast compression
.option("parquet.block.size", 128 * 1024 * 1024) # 128MB row groups
.parquet("s3://bucket/output/"))
# Column pruning - only read needed columns
df = (spark.read.parquet("s3://bucket/data/")
.select("id", "amount", "date")) # Spark only reads these columns
# Predicate pushdown - filter at storage level
df = (spark.read.parquet("s3://bucket/partitioned/year=2024/")
.filter(F.col("status") == "active")) # Pushed to Parquet reader
# Delta Lake optimizations
(df.write
.format("delta")
.option("optimizeWrite", "true") # Bin-packing
.option("autoCompact", "true") # Compact small files
.mode("overwrite")
.save("s3://bucket/delta_table/"))
# Z-ordering for multi-dimensional queries
spark.sql("""
OPTIMIZE delta.`s3://bucket/delta_table/`
ZORDER BY (customer_id, date)
""")
```
### Pattern 7: Monitoring and Debugging
```python
# Enable detailed metrics
spark.conf.set("spark.sql.codegen.wholeStage", "true")
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")
# Explain query plan
df.explain(mode="extended")
# Modes: simple, extended, codegen, cost, formatted
# Get physical plan statistics
df.explain(mode="cost")
# Monitor task metrics
def analyze_stage_metrics(spark):
"""Analyze recent stage metrics"""
status_tracker = spark.sparkContext.statusTracker()
for stage_id in status_tracker.getActiveStageIds():
stage_info = status_tracker.getStageInfo(stage_id)
print(f"Stage {stage_id}:")
print(f" Tasks: {stage_info.numTasks}")
print(f" Completed: {stage_info.numCompletedTasks}")
print(f" Failed: {stage_info.numFailedTasks}")
# Identify data skew
def check_partition_skew(df):
"""Check for partition skew"""
partition_counts = (df
.withColumn("partition_id", F.spark_partition_id())
.groupBy("partition_id")
.count()
.orderBy(F.desc("count")))
partition_counts.show(20)
stats = partition_counts.select(
F.min("count").alias("min"),
F.max("count").alias("max"),
F.avg("count").alias("avg"),
F.stddev("count").alias("stddev")
).collect()[0]
skew_ratio = stats["max"] / stats["avg"]
print(f"Skew ratio: {skew_ratio:.2f}x (>2x indicates skew)")
```
## Configuration Cheat Sheet
```python
# Production configuration template
spark_configs = {
# Adaptive Query Execution (AQE)
"spark.sql.adaptive.enabled": "true",
"spark.sql.adaptive.coalescePartitions.enabled": "true",
"spark.sql.adaptive.skewJoin.enabled": "true",
# Memory
"spark.executor.memory": "8g",
"spark.executor.memoryOverhead": "2g",
"spark.memory.fraction": "0.6",
"spark.memory.storageFraction": "0.5",
# Parallelism
"spark.sql.shuffle.partitions": "200",
"spark.default.parallelism": "200",
# Serialization
"spark.serializer": "org.apache.spark.serializer.KryoSerializer",
"spark.sql.execution.arrow.pyspark.enabled": "true",
# Compression
"spark.io.compression.codec": "lz4",
"spark.shuffle.compress": "true",
# Broadcast
"spark.sql.autoBroadcastJoinThreshold": "50MB",
# File handling
"spark.sql.files.maxPartitionBytes": "128MB",
"spark.sql.files.openCostInBytes": "4MB",
}
```
## Best Practices
### Do's
- **Enable AQE** - Adaptive query execution handles many issues
- **Use Parquet/Delta** - Columnar formats with compression
- **Broadcast small tables** - Avoid shuffle for small joins
- **Monitor Spark UI** - Check for skew, spills, GC
- **Right-size partitions** - 128MB - 256MB per partition
### Don'ts
- **Don't collect large data** - Keep data distributed
- **Don't use UDFs unnecessarily** - Use built-in functions
- **Don't over-cache** - Memory is limited
- **Don't ignore data skew** - It dominates job time
- **Don't use `.count()` for existence** - Use `.take(1)` or `.isEmpty()`
## Resources
- [Spark Performance Tuning](https://spark.apache.org/docs/latest/sql-performance-tuning.html)
- [Spark Configuration](https://spark.apache.org/docs/latest/configuration.html)
- [Databricks Optimization Guide](https://docs.databricks.com/en/optimizations/index.html)
| """
Test for 'spark-optimization' skill — Apache Spark Query Optimization
Validates that the Agent created optimized Spark query examples with
proper partitioning, caching, and broadcast join patterns.
"""
import os
import subprocess
import pytest
class TestSparkOptimization:
"""Verify Spark optimization demo scripts."""
REPO_DIR = "/workspace/spark"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_demo_script_exists(self):
"""An optimization demo script must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if "optim" in f.lower() and (
f.endswith(".py") or f.endswith(".scala") or f.endswith(".java")
):
found.append(os.path.join(root, f))
assert len(found) >= 1, "No optimization demo file found"
def test_readme_exists(self):
"""README or doc for optimization examples must exist."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.lower() == "readme.md" and "optim" in root.lower():
found = True
break
if found:
break
if not found:
# Also check for inline docs in the script
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if "optim" in f.lower() and f.endswith((".py", ".scala")):
fpath = os.path.join(root, f)
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
if len(content) > 200:
found = True
break
if found:
break
assert found, "No README or substantial docs for optimization examples"
# ------------------------------------------------------------------
# L2: content validation
# ------------------------------------------------------------------
def _find_demo_files(self):
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if "optim" in f.lower() and (
f.endswith(".py") or f.endswith(".scala") or f.endswith(".java")
):
found.append(os.path.join(root, f))
return found
def _read_all_demos(self):
content = ""
for fpath in self._find_demo_files():
with open(fpath, "r", errors="ignore") as f:
content += f.read() + "\n"
return content
def test_broadcast_join(self):
"""Demo must show broadcast join optimization."""
content = self._read_all_demos()
patterns = [
"broadcast",
"BroadcastHashJoin",
"broadcast_join",
"F.broadcast",
"spark.sql.autoBroadcastJoinThreshold",
]
found = any(p in content for p in patterns)
assert found, "No broadcast join pattern found"
def test_caching_strategy(self):
"""Demo must show caching (persist/cache)."""
content = self._read_all_demos()
patterns = [
".cache()",
".persist()",
"MEMORY_AND_DISK",
"StorageLevel",
"unpersist",
]
found = any(p in content for p in patterns)
assert found, "No caching strategy found"
def test_partitioning(self):
"""Demo must show repartition or coalesce."""
content = self._read_all_demos()
patterns = [
"repartition",
"coalesce",
"partitionBy",
"numPartitions",
"spark.sql.shuffle.partitions",
]
found = any(p in content for p in patterns)
assert found, "No partitioning strategy found"
def test_predicate_pushdown(self):
"""Demo should demonstrate predicate pushdown or filter early."""
content = self._read_all_demos()
patterns = ["filter", "where", "pushdown", "predicate", ".filter(", ".where("]
found = any(p in content for p in patterns)
assert found, "No predicate pushdown/filter pattern found"
def test_explain_plan(self):
"""Demo should use .explain() to show query plans."""
content = self._read_all_demos()
patterns = [
".explain(",
"EXPLAIN",
"queryExecution",
"logical plan",
"physical plan",
]
found = any(p in content for p in patterns)
assert found, "No explain plan usage found"
def test_avoid_shuffle(self):
"""Demo should discuss or address shuffle reduction."""
content = self._read_all_demos()
patterns = [
"shuffle",
"reduceByKey",
"aggregateByKey",
"groupByKey",
"avoid",
"minimize",
]
found = any(p in content.lower() for p in patterns)
assert found, "No shuffle optimization discussion found"
def test_spark_session_creation(self):
"""Demo must create a SparkSession."""
content = self._read_all_demos()
patterns = [
"SparkSession",
"spark.builder",
"getOrCreate",
"SparkConf",
"SparkContext",
]
found = any(p in content for p in patterns)
assert found, "No SparkSession creation found"
def test_python_demo_compiles(self):
"""Python demo files must compile."""
for fpath in self._find_demo_files():
if fpath.endswith(".py"):
result = subprocess.run(
["python", "-m", "py_compile", fpath],
capture_output=True,
text=True,
timeout=30,
)
assert (
result.returncode == 0
), f"{fpath} failed to compile:\n{result.stderr}"
| https://github.com/apache/spark | zhangyiiiiii/swe-skills-bench-jvm | |
similarity-search-patterns | Similarity Search Patterns | See task file for detailed mission requirements. | feature | # Task: Create Similarity Search Demonstration for Milvus
## Background
Add examples demonstrating similarity search behavior in Milvus with
index building, vector insertion, and query operations.
## Files to Create/Modify
- examples/similarity_search_demo.py (new)
- examples/test_vectors.json (test data)
- benchmarks/similarity_benchmark.py (optional)
## Requirements
1. Demo Script:
   - Create collection with proper schema
   - Build appropriate index (IVF_FLAT or HNSW)
   - Insert test vectors with known neighbors
   - Execute similarity queries
2. Test Dataset:
   - Pre-annotated ground truth neighbors
   - Various vector dimensions
   - Edge cases (identical vectors, orthogonal vectors)
3. Output Requirements:
   - Top-K results for each query
   - Verify known neighbors in results
   - Query latency and parameters logged
4. Validation:
- Top-K results contain pre-annotated neighbors
- Query parameters and latency in output
- JSON/CSV output format
## Acceptance Criteria
- `python examples/similarity_search_demo.py` exits with code 0
- Output contains query parameters and latency
- Known neighbors appear in top-K results
| ---
name: similarity-search-patterns
description: Implement efficient similarity search with vector databases. Use when building semantic search, implementing nearest neighbor queries, or optimizing retrieval performance.
---
# Similarity Search Patterns
Patterns for implementing efficient similarity search in production systems.
## When to Use This Skill
- Building semantic search systems
- Implementing RAG retrieval
- Creating recommendation engines
- Optimizing search latency
- Scaling to millions of vectors
- Combining semantic and keyword search
## Core Concepts
### 1. Distance Metrics
| Metric | Formula | Best For |
| ------------------ | ------------------ | --------------------- |
| **Cosine** | 1 - (A·B)/(‖A‖‖B‖) | Normalized embeddings |
| **Euclidean (L2)** | √Σ(a-b)² | Raw embeddings |
| **Dot Product** | A·B | Magnitude matters |
| **Manhattan (L1)** | Σ\|a-b\| | Sparse vectors |
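For intuition, the four metrics above can be written directly in a few lines of pure Python. This is a sketch for reference only; production systems should use vectorized implementations (NumPy or the database's built-in metric):

```python
import math

def cosine_distance(a, b):
    """1 - (A·B)/(‖A‖‖B‖); assumes neither vector is all zeros."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def euclidean(a, b):
    """L2 distance: square root of the sum of squared differences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def dot_product(a, b):
    """Unnormalized similarity; larger magnitudes score higher."""
    return sum(x * y for x, y in zip(a, b))

def manhattan(a, b):
    """L1 distance: sum of absolute differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

a, b = [1.0, 0.0], [0.0, 1.0]
print(cosine_distance(a, b))  # orthogonal vectors -> distance 1.0
```

Note that for unit-normalized embeddings, cosine distance and squared L2 distance rank neighbors identically, which is why many indexes only implement one of them internally.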
### 2. Index Types
```
┌─────────────────────────────────────────────────┐
│ Index Types │
├─────────────┬───────────────┬───────────────────┤
│ Flat │ HNSW │ IVF+PQ │
│ (Exact) │ (Graph-based) │ (Quantized) │
├─────────────┼───────────────┼───────────────────┤
│ O(n) search │ O(log n) │ O(√n) │
│ 100% recall │ ~95-99% │ ~90-95% │
│ Small data │ Medium-Large │ Very Large │
└─────────────┴───────────────┴───────────────────┘
```
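The exact "Flat" baseline in the diagram above is just a linear scan over every stored vector. A minimal pure-Python sketch (function and data names are illustrative, not any vendor's API):

```python
import heapq
import math

def flat_search(query, vectors, k=3):
    """Exact 'Flat' k-NN: scan every vector (O(n)) and keep the k nearest by L2 distance."""
    def l2(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    scored = ((l2(query, v), idx) for idx, v in enumerate(vectors))
    # nsmallest keeps only k candidates in memory while streaming through the scan
    return heapq.nsmallest(k, scored)  # [(distance, index), ...], nearest first

corpus = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [0.1, 0.0]]
nearest = flat_search([0.0, 0.0], corpus, k=2)  # indices 0 and 3 are closest to the origin
```

Because it visits every vector, Flat gives 100% recall and serves as the ground truth against which HNSW and IVF+PQ recall figures are measured.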
## Templates
### Template 1: Pinecone Implementation
```python
from pinecone import Pinecone, ServerlessSpec
from typing import List, Dict, Optional
import hashlib
class PineconeVectorStore:
def __init__(
self,
api_key: str,
index_name: str,
dimension: int = 1536,
metric: str = "cosine"
):
self.pc = Pinecone(api_key=api_key)
# Create index if not exists
if index_name not in self.pc.list_indexes().names():
self.pc.create_index(
name=index_name,
dimension=dimension,
metric=metric,
spec=ServerlessSpec(cloud="aws", region="us-east-1")
)
self.index = self.pc.Index(index_name)
def upsert(
self,
vectors: List[Dict],
namespace: str = ""
) -> int:
"""
Upsert vectors.
vectors: [{"id": str, "values": List[float], "metadata": dict}]
"""
# Batch upsert
batch_size = 100
total = 0
for i in range(0, len(vectors), batch_size):
batch = vectors[i:i + batch_size]
self.index.upsert(vectors=batch, namespace=namespace)
total += len(batch)
return total
def search(
self,
query_vector: List[float],
top_k: int = 10,
namespace: str = "",
filter: Optional[Dict] = None,
include_metadata: bool = True
) -> List[Dict]:
"""Search for similar vectors."""
results = self.index.query(
vector=query_vector,
top_k=top_k,
namespace=namespace,
filter=filter,
include_metadata=include_metadata
)
return [
{
"id": match.id,
"score": match.score,
"metadata": match.metadata
}
for match in results.matches
]
def search_with_rerank(
self,
query: str,
query_vector: List[float],
top_k: int = 10,
rerank_top_n: int = 50,
namespace: str = ""
) -> List[Dict]:
"""Search and rerank results."""
# Over-fetch for reranking
initial_results = self.search(
query_vector,
top_k=rerank_top_n,
namespace=namespace
)
# Rerank with cross-encoder or LLM
reranked = self._rerank(query, initial_results)
return reranked[:top_k]
def _rerank(self, query: str, results: List[Dict]) -> List[Dict]:
"""Rerank results using cross-encoder."""
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-6-v2')
pairs = [(query, r["metadata"]["text"]) for r in results]
scores = model.predict(pairs)
for result, score in zip(results, scores):
result["rerank_score"] = float(score)
return sorted(results, key=lambda x: x["rerank_score"], reverse=True)
def delete(self, ids: List[str], namespace: str = ""):
"""Delete vectors by ID."""
self.index.delete(ids=ids, namespace=namespace)
def delete_by_filter(self, filter: Dict, namespace: str = ""):
"""Delete vectors matching filter."""
self.index.delete(filter=filter, namespace=namespace)
```
### Template 2: Qdrant Implementation
```python
from qdrant_client import QdrantClient
from qdrant_client.http import models
from typing import List, Dict, Optional
class QdrantVectorStore:
def __init__(
self,
url: str = "localhost",
port: int = 6333,
collection_name: str = "documents",
vector_size: int = 1536
):
self.client = QdrantClient(url=url, port=port)
self.collection_name = collection_name
# Create collection if not exists
collections = self.client.get_collections().collections
if collection_name not in [c.name for c in collections]:
self.client.create_collection(
collection_name=collection_name,
vectors_config=models.VectorParams(
size=vector_size,
distance=models.Distance.COSINE
),
# Optional: enable quantization for memory efficiency
quantization_config=models.ScalarQuantization(
scalar=models.ScalarQuantizationConfig(
type=models.ScalarType.INT8,
quantile=0.99,
always_ram=True
)
)
)
def upsert(self, points: List[Dict]) -> int:
"""
Upsert points.
points: [{"id": str/int, "vector": List[float], "payload": dict}]
"""
qdrant_points = [
models.PointStruct(
id=p["id"],
vector=p["vector"],
payload=p.get("payload", {})
)
for p in points
]
self.client.upsert(
collection_name=self.collection_name,
points=qdrant_points
)
return len(points)
def search(
self,
query_vector: List[float],
limit: int = 10,
filter: Optional[models.Filter] = None,
score_threshold: Optional[float] = None
) -> List[Dict]:
"""Search for similar vectors."""
results = self.client.search(
collection_name=self.collection_name,
query_vector=query_vector,
limit=limit,
query_filter=filter,
score_threshold=score_threshold
)
return [
{
"id": r.id,
"score": r.score,
"payload": r.payload
}
for r in results
]
def search_with_filter(
self,
query_vector: List[float],
must_conditions: List[Dict] = None,
should_conditions: List[Dict] = None,
must_not_conditions: List[Dict] = None,
limit: int = 10
) -> List[Dict]:
"""Search with complex filters."""
conditions = []
if must_conditions:
conditions.extend([
models.FieldCondition(
key=c["key"],
match=models.MatchValue(value=c["value"])
)
for c in must_conditions
])
filter = models.Filter(must=conditions) if conditions else None
return self.search(query_vector, limit=limit, filter=filter)
def search_with_sparse(
self,
dense_vector: List[float],
sparse_vector: Dict[int, float],
limit: int = 10,
dense_weight: float = 0.7
) -> List[Dict]:
"""Hybrid search with dense and sparse vectors."""
# Requires collection with named vectors
results = self.client.search(
collection_name=self.collection_name,
query_vector=models.NamedVector(
name="dense",
vector=dense_vector
),
limit=limit
)
return [{"id": r.id, "score": r.score, "payload": r.payload} for r in results]
```
### Template 3: pgvector with PostgreSQL
```python
import asyncpg
from typing import List, Dict, Optional
import numpy as np
class PgVectorStore:
def __init__(self, connection_string: str):
self.connection_string = connection_string
async def init(self):
"""Initialize connection pool and extension."""
self.pool = await asyncpg.create_pool(self.connection_string)
async with self.pool.acquire() as conn:
# Enable extension
await conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
# Create table
await conn.execute("""
CREATE TABLE IF NOT EXISTS documents (
id TEXT PRIMARY KEY,
content TEXT,
metadata JSONB,
embedding vector(1536)
)
""")
# Create index (HNSW for better performance)
await conn.execute("""
CREATE INDEX IF NOT EXISTS documents_embedding_idx
ON documents
USING hnsw (embedding vector_cosine_ops)
WITH (m = 16, ef_construction = 64)
""")
async def upsert(self, documents: List[Dict]):
"""Upsert documents with embeddings."""
import json  # local import: asyncpg expects JSONB parameters as JSON text, not dicts
async with self.pool.acquire() as conn:
await conn.executemany(
"""
INSERT INTO documents (id, content, metadata, embedding)
VALUES ($1, $2, $3, $4)
ON CONFLICT (id) DO UPDATE SET
content = EXCLUDED.content,
metadata = EXCLUDED.metadata,
embedding = EXCLUDED.embedding
""",
[
(
doc["id"],
doc["content"],
json.dumps(doc.get("metadata", {})),
"[" + ",".join(str(x) for x in doc["embedding"]) + "]"  # pgvector text literal
)
for doc in documents
]
)
async def search(
self,
query_embedding: List[float],
limit: int = 10,
filter_metadata: Optional[Dict] = None
) -> List[Dict]:
"""Search for similar documents."""
query = """
SELECT id, content, metadata,
1 - (embedding <=> $1::vector) as similarity
FROM documents
"""
params = [query_embedding]
if filter_metadata:
conditions = []
for key, value in filter_metadata.items():
params.append(value)
conditions.append(f"metadata->>'{key}' = ${len(params)}")
query += " WHERE " + " AND ".join(conditions)
query += f" ORDER BY embedding <=> $1::vector LIMIT ${len(params) + 1}"
params.append(limit)
async with self.pool.acquire() as conn:
rows = await conn.fetch(query, *params)
return [
{
"id": row["id"],
"content": row["content"],
"metadata": row["metadata"],
"score": row["similarity"]
}
for row in rows
]
async def hybrid_search(
self,
query_embedding: List[float],
query_text: str,
limit: int = 10,
vector_weight: float = 0.5
) -> List[Dict]:
"""Hybrid search combining vector and full-text."""
async with self.pool.acquire() as conn:
rows = await conn.fetch(
"""
WITH vector_results AS (
SELECT id, content, metadata,
1 - (embedding <=> $1::vector) as vector_score
FROM documents
ORDER BY embedding <=> $1::vector
LIMIT $3 * 2
),
text_results AS (
SELECT id, content, metadata,
ts_rank(to_tsvector('english', content),
plainto_tsquery('english', $2)) as text_score
FROM documents
WHERE to_tsvector('english', content) @@ plainto_tsquery('english', $2)
LIMIT $3 * 2
)
SELECT
COALESCE(v.id, t.id) as id,
COALESCE(v.content, t.content) as content,
COALESCE(v.metadata, t.metadata) as metadata,
COALESCE(v.vector_score, 0) * $4 +
COALESCE(t.text_score, 0) * (1 - $4) as combined_score
FROM vector_results v
FULL OUTER JOIN text_results t ON v.id = t.id
ORDER BY combined_score DESC
LIMIT $3
""",
query_embedding, query_text, limit, vector_weight
)
return [dict(row) for row in rows]
```
### Template 4: Weaviate Implementation
```python
import weaviate
from weaviate.util import generate_uuid5
from typing import List, Dict, Optional
class WeaviateVectorStore:
def __init__(
self,
url: str = "http://localhost:8080",
class_name: str = "Document"
):
self.client = weaviate.Client(url=url)
self.class_name = class_name
self._ensure_schema()
def _ensure_schema(self):
"""Create schema if not exists."""
schema = {
"class": self.class_name,
"vectorizer": "none", # We provide vectors
"properties": [
{"name": "content", "dataType": ["text"]},
{"name": "source", "dataType": ["string"]},
{"name": "chunk_id", "dataType": ["int"]}
]
}
if not self.client.schema.exists(self.class_name):
self.client.schema.create_class(schema)
def upsert(self, documents: List[Dict]):
"""Batch upsert documents."""
with self.client.batch as batch:
batch.batch_size = 100
for doc in documents:
batch.add_data_object(
data_object={
"content": doc["content"],
"source": doc.get("source", ""),
"chunk_id": doc.get("chunk_id", 0)
},
class_name=self.class_name,
uuid=generate_uuid5(doc["id"]),
vector=doc["embedding"]
)
def search(
self,
query_vector: List[float],
limit: int = 10,
where_filter: Optional[Dict] = None
) -> List[Dict]:
"""Vector search."""
query = (
self.client.query
.get(self.class_name, ["content", "source", "chunk_id"])
.with_near_vector({"vector": query_vector})
.with_limit(limit)
.with_additional(["distance", "id"])
)
if where_filter:
query = query.with_where(where_filter)
results = query.do()
return [
{
"id": item["_additional"]["id"],
"content": item["content"],
"source": item["source"],
"score": 1 - item["_additional"]["distance"]
}
for item in results["data"]["Get"][self.class_name]
]
def hybrid_search(
self,
query: str,
query_vector: List[float],
limit: int = 10,
alpha: float = 0.5 # 0 = keyword, 1 = vector
) -> List[Dict]:
"""Hybrid search combining BM25 and vector."""
results = (
self.client.query
.get(self.class_name, ["content", "source"])
.with_hybrid(query=query, vector=query_vector, alpha=alpha)
.with_limit(limit)
.with_additional(["score"])
.do()
)
return [
{
"content": item["content"],
"source": item["source"],
"score": item["_additional"]["score"]
}
for item in results["data"]["Get"][self.class_name]
]
```
## Best Practices
### Do's
- **Use appropriate index** - HNSW for most cases
- **Tune parameters** - ef_search, nprobe for recall/speed
- **Implement hybrid search** - Combine with keyword search
- **Monitor recall** - Measure search quality
- **Pre-filter when possible** - Reduce search space
### Don'ts
- **Don't skip evaluation** - Measure before optimizing
- **Don't over-index** - Start with flat, scale up
- **Don't ignore latency** - P99 matters for UX
- **Don't forget costs** - Vector storage adds up
## Resources
- [Pinecone Docs](https://docs.pinecone.io/)
- [Qdrant Docs](https://qdrant.tech/documentation/)
- [pgvector](https://github.com/pgvector/pgvector)
- [Weaviate Docs](https://weaviate.io/developers/weaviate)
| """
Test for 'similarity-search-patterns' skill — Milvus Similarity Search
Validates that the Agent created similarity search examples with proper
collection setup, indexing, and search patterns in Milvus.
"""
import os
import subprocess
import pytest
class TestSimilaritySearchPatterns:
"""Verify Milvus similarity search implementation."""
REPO_DIR = "/workspace/milvus"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_search_example_exists(self):
"""A similarity search example file must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith((".py", ".go")) and (
"search" in f.lower()
or "similar" in f.lower()
or "example" in f.lower()
):
found.append(os.path.join(root, f))
assert len(found) >= 1, "No similarity search example found"
def test_documentation_exists(self):
"""README or documentation must exist."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.lower() in ("readme.md",) and (
"example" in root.lower() or "search" in root.lower()
):
found = True
break
if found:
break
if not found:
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if ("search" in f.lower() or "similar" in f.lower()) and f.endswith(
".md"
):
found = True
break
if found:
break
assert found, "No documentation found"
# ------------------------------------------------------------------
# L2: content validation
# ------------------------------------------------------------------
def _find_search_files(self):
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith((".py", ".go")) and "node_modules" not in root:
fpath = os.path.join(root, f)
try:
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
if any(
p in content.lower()
for p in [
"milvus",
"collection",
"similarity",
"vector",
"search",
]
):
found.append(fpath)
except OSError:
pass
return found
def _read_all_search(self):
content = ""
for fpath in self._find_search_files():
with open(fpath, "r", errors="ignore") as f:
content += f.read() + "\n"
return content
def test_collection_creation(self):
"""Must create a Milvus collection."""
content = self._read_all_search()
coll_patterns = [
"Collection",
"create_collection",
"CollectionSchema",
"FieldSchema",
"CreateCollection",
]
found = any(p in content for p in coll_patterns)
assert found, "No collection creation found"
def test_index_building(self):
"""Must build a vector index."""
content = self._read_all_search()
index_patterns = [
"create_index",
"IndexType",
"IVF_FLAT",
"IVF_SQ8",
"HNSW",
"index_params",
"CreateIndex",
"FLAT",
]
found = any(p in content for p in index_patterns)
assert found, "No vector index creation found"
def test_vector_insertion(self):
"""Must insert vectors into collection."""
content = self._read_all_search()
insert_patterns = ["insert", "Import", "upsert", "Insert"]
found = any(p in content for p in insert_patterns)
assert found, "No vector insertion found"
def test_search_operation(self):
"""Must perform similarity search."""
content = self._read_all_search()
search_patterns = ["search", "Search", "query", "Query", "ann_search", "knn"]
found = any(p in content for p in search_patterns)
assert found, "No search operation found"
def test_distance_metric(self):
"""Must specify a distance metric."""
content = self._read_all_search()
metric_patterns = [
"L2",
"IP",
"COSINE",
"metric_type",
"MetricType",
"euclidean",
"cosine",
"inner_product",
]
found = any(p in content for p in metric_patterns)
assert found, "No distance metric specified"
def test_search_params(self):
"""Must configure search parameters."""
content = self._read_all_search()
param_patterns = [
"search_params",
"nprobe",
"ef",
"top_k",
"limit",
"output_fields",
"anns_field",
]
found = any(p in content for p in param_patterns)
assert found, "No search parameters found"
def test_schema_definition(self):
"""Must define collection schema with vector field."""
content = self._read_all_search()
schema_patterns = [
"DataType.FLOAT_VECTOR",
"dim=",
"FloatVector",
"BinaryVector",
"vector_field",
"VARCHAR",
"INT64",
]
found = sum(1 for p in schema_patterns if p in content)
assert found >= 2, "Insufficient schema definition"
def test_python_scripts_compile(self):
"""Python search files must compile."""
for fpath in self._find_search_files():
if fpath.endswith(".py"):
result = subprocess.run(
["python", "-m", "py_compile", fpath],
capture_output=True,
text=True,
timeout=30,
)
assert (
result.returncode == 0
), f"{fpath} compile error:\n{result.stderr}"
| https://github.com/milvus-io/milvus | zhangyiiiiii/swe-skills-bench-golang | |
llm-evaluation | LLM Evaluation | See task file for detailed mission requirements. | feature | # Task: Add LLM Evaluation Example and Test Cases for HELM
## Background
Add a small LLM evaluation use case with configuration, sample inputs,
and execution script to the HELM repository.
## Files to Create/Modify
- examples/llm_eval_demo.py (new)
- examples/eval_config.yaml (configuration)
- benchmarks/simple_eval/ (optional directory)
## Requirements
1. Evaluation Configuration:
   - Small, locally runnable evaluation
   - Clear dependency documentation
   - Sample input data included
2. Evaluation Script:
   - Load configuration
   - Run evaluation on sample inputs
   - Generate structured output
3. Output Format:
   - score: numeric evaluation score
   - labels: classification labels if applicable
   - metrics: detailed metric breakdown
4. Dependencies:
- All dependencies documented
- Can run locally without external APIs
- Mock model if needed for testing
## Acceptance Criteria
- `python examples/llm_eval_demo.py` exits with code 0
- Output contains score and labels fields
- Evaluation report generated (JSON/CSV)
| ---
name: llm-evaluation
description: Implement comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking. Use when testing LLM performance, measuring AI application quality, or establishing evaluation frameworks.
---
# LLM Evaluation
Master comprehensive evaluation strategies for LLM applications, from automated metrics to human evaluation and A/B testing.
## When to Use This Skill
- Measuring LLM application performance systematically
- Comparing different models or prompts
- Detecting performance regressions before deployment
- Validating improvements from prompt changes
- Building confidence in production systems
- Establishing baselines and tracking progress over time
- Debugging unexpected model behavior
## Core Evaluation Types
### 1. Automated Metrics
Fast, repeatable, scalable evaluation using computed scores.
**Text Generation:**
- **BLEU**: N-gram overlap (translation)
- **ROUGE**: Recall-oriented (summarization)
- **METEOR**: Semantic similarity
- **BERTScore**: Embedding-based similarity
- **Perplexity**: Language model confidence
**Classification:**
- **Accuracy**: Percentage correct
- **Precision/Recall/F1**: Class-specific performance
- **Confusion Matrix**: Error patterns
- **AUC-ROC**: Ranking quality
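The classification metrics above reduce to simple counts over paired true/predicted labels. A minimal sketch for a single positive class (names are illustrative; use scikit-learn's `precision_recall_fscore_support` in practice):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Per-class precision, recall, and F1 from parallel label lists."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of predicted positives, how many correct
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of actual positives, how many found
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

y_true = [1, 0, 1, 1, 0]
y_pred = [1, 1, 1, 0, 0]
print(precision_recall_f1(y_true, y_pred))
```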
**Retrieval (RAG):**
- **MRR**: Mean Reciprocal Rank
- **NDCG**: Normalized Discounted Cumulative Gain
- **Precision@K**: Relevant in top K
- **Recall@K**: Coverage in top K
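The retrieval metrics above can be sketched in a few lines of plain Python. This is a simplified single-query version (`ranked_ids` and `relevant` are illustrative names); averaging MRR over a query set gives the usual reported figure:

```python
def mrr(ranked_ids, relevant):
    """Reciprocal rank of the first relevant hit in a single ranked list."""
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant:
            return 1.0 / rank
    return 0.0

def precision_at_k(ranked_ids, relevant, k):
    """Fraction of the top-k results that are relevant."""
    return sum(1 for d in ranked_ids[:k] if d in relevant) / k

def recall_at_k(ranked_ids, relevant, k):
    """Fraction of all relevant documents found in the top k."""
    return sum(1 for d in ranked_ids[:k] if d in relevant) / len(relevant)

ranked = ["d3", "d1", "d7", "d2"]
relevant = {"d1", "d2"}
print(mrr(ranked, relevant))  # first relevant hit at rank 2 -> 0.5
```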
### 2. Human Evaluation
Manual assessment for quality aspects difficult to automate.
**Dimensions:**
- **Accuracy**: Factual correctness
- **Coherence**: Logical flow
- **Relevance**: Answers the question
- **Fluency**: Natural language quality
- **Safety**: No harmful content
- **Helpfulness**: Useful to the user
### 3. LLM-as-Judge
Use stronger LLMs to evaluate weaker model outputs.
**Approaches:**
- **Pointwise**: Score individual responses
- **Pairwise**: Compare two responses
- **Reference-based**: Compare to gold standard
- **Reference-free**: Judge without ground truth
## Quick Start
```python
from dataclasses import dataclass
from typing import Callable
import numpy as np
@dataclass
class Metric:
name: str
fn: Callable
@staticmethod
def accuracy():
return Metric("accuracy", calculate_accuracy)
@staticmethod
def bleu():
return Metric("bleu", calculate_bleu)
@staticmethod
def bertscore():
return Metric("bertscore", calculate_bertscore)
@staticmethod
def custom(name: str, fn: Callable):
return Metric(name, fn)
class EvaluationSuite:
def __init__(self, metrics: list[Metric]):
self.metrics = metrics
async def evaluate(self, model, test_cases: list[dict]) -> dict:
results = {m.name: [] for m in self.metrics}
for test in test_cases:
prediction = await model.predict(test["input"])
for metric in self.metrics:
score = metric.fn(
prediction=prediction,
reference=test.get("expected"),
context=test.get("context")
)
results[metric.name].append(score)
return {
"metrics": {k: np.mean(v) for k, v in results.items()},
"raw_scores": results
}
# Usage
suite = EvaluationSuite([
Metric.accuracy(),
Metric.bleu(),
Metric.bertscore(),
Metric.custom("groundedness", check_groundedness)
])
test_cases = [
{
"input": "What is the capital of France?",
"expected": "Paris",
"context": "France is a country in Europe. Paris is its capital."
},
]
results = await suite.evaluate(model=your_model, test_cases=test_cases)
```
## Automated Metrics Implementation
### BLEU Score
```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
def calculate_bleu(reference: str, hypothesis: str, **kwargs) -> float:
"""Calculate BLEU score between reference and hypothesis."""
smoothie = SmoothingFunction().method4
return sentence_bleu(
[reference.split()],
hypothesis.split(),
smoothing_function=smoothie
)
```
### ROUGE Score
```python
from rouge_score import rouge_scorer
def calculate_rouge(reference: str, hypothesis: str, **kwargs) -> dict:
"""Calculate ROUGE scores."""
scorer = rouge_scorer.RougeScorer(
['rouge1', 'rouge2', 'rougeL'],
use_stemmer=True
)
scores = scorer.score(reference, hypothesis)
return {
'rouge1': scores['rouge1'].fmeasure,
'rouge2': scores['rouge2'].fmeasure,
'rougeL': scores['rougeL'].fmeasure
}
```
### BERTScore
```python
from bert_score import score
def calculate_bertscore(
references: list[str],
hypotheses: list[str],
**kwargs
) -> dict:
"""Calculate BERTScore using pre-trained model."""
P, R, F1 = score(
hypotheses,
references,
lang='en',
model_type='microsoft/deberta-xlarge-mnli'
)
return {
'precision': P.mean().item(),
'recall': R.mean().item(),
'f1': F1.mean().item()
}
```
### Custom Metrics
```python
def calculate_groundedness(response: str, context: str, **kwargs) -> float:
"""Check if response is grounded in provided context."""
from transformers import pipeline
nli = pipeline(
"text-classification",
model="microsoft/deberta-large-mnli"
)
result = nli(f"{context} [SEP] {response}")[0]
# Return confidence that response is entailed by context
return result['score'] if result['label'] == 'ENTAILMENT' else 0.0
def calculate_toxicity(text: str, **kwargs) -> float:
"""Measure toxicity in generated text."""
from detoxify import Detoxify
results = Detoxify('original').predict(text)
return max(results.values()) # Return highest toxicity score
def calculate_factuality(claim: str, sources: list[str], **kwargs) -> float:
"""Verify factual claims against sources."""
from transformers import pipeline
nli = pipeline("text-classification", model="facebook/bart-large-mnli")
scores = []
for source in sources:
result = nli(f"{source}</s></s>{claim}")[0]
if result['label'] == 'entailment':
scores.append(result['score'])
return max(scores) if scores else 0.0
```
## LLM-as-Judge Patterns
### Single Output Evaluation
```python
from anthropic import Anthropic
from pydantic import BaseModel, Field
import json
class QualityRating(BaseModel):
accuracy: int = Field(ge=1, le=10, description="Factual correctness")
helpfulness: int = Field(ge=1, le=10, description="Answers the question")
clarity: int = Field(ge=1, le=10, description="Well-written and understandable")
reasoning: str = Field(description="Brief explanation")
async def llm_judge_quality(
response: str,
question: str,
context: str = None
) -> QualityRating:
"""Use Claude to judge response quality."""
client = Anthropic()
system = """You are an expert evaluator of AI responses.
Rate responses on accuracy, helpfulness, and clarity (1-10 scale).
Provide brief reasoning for your ratings."""
prompt = f"""Rate the following response:
Question: {question}
{f'Context: {context}' if context else ''}
Response: {response}
Provide ratings in JSON format:
{{
"accuracy": <1-10>,
"helpfulness": <1-10>,
"clarity": <1-10>,
"reasoning": "<brief explanation>"
}}"""
message = client.messages.create(
model="claude-sonnet-4-6",
max_tokens=500,
system=system,
messages=[{"role": "user", "content": prompt}]
)
return QualityRating(**json.loads(message.content[0].text))
```
### Pairwise Comparison
```python
from pydantic import BaseModel, Field
from typing import Literal
class ComparisonResult(BaseModel):
winner: Literal["A", "B", "tie"]
reasoning: str
confidence: int = Field(ge=1, le=10)
async def compare_responses(
question: str,
response_a: str,
response_b: str
) -> ComparisonResult:
"""Compare two responses using LLM judge."""
client = Anthropic()
prompt = f"""Compare these two responses and determine which is better.
Question: {question}
Response A: {response_a}
Response B: {response_b}
Consider accuracy, helpfulness, and clarity.
Answer with JSON:
{{
"winner": "A" or "B" or "tie",
"reasoning": "<explanation>",
"confidence": <1-10>
}}"""
message = client.messages.create(
model="claude-sonnet-4-6",
max_tokens=500,
messages=[{"role": "user", "content": prompt}]
)
return ComparisonResult(**json.loads(message.content[0].text))
```
### Reference-Based Evaluation
```python
class ReferenceEvaluation(BaseModel):
semantic_similarity: float = Field(ge=0, le=1)
factual_accuracy: float = Field(ge=0, le=1)
completeness: float = Field(ge=0, le=1)
issues: list[str]
async def evaluate_against_reference(
response: str,
reference: str,
question: str
) -> ReferenceEvaluation:
"""Evaluate response against gold standard reference."""
client = Anthropic()
prompt = f"""Compare the response to the reference answer.
Question: {question}
Reference Answer: {reference}
Response to Evaluate: {response}
Evaluate:
1. Semantic similarity (0-1): How similar is the meaning?
2. Factual accuracy (0-1): Are all facts correct?
3. Completeness (0-1): Does it cover all key points?
4. List any specific issues or errors.
Respond in JSON:
{{
"semantic_similarity": <0-1>,
"factual_accuracy": <0-1>,
"completeness": <0-1>,
"issues": ["issue1", "issue2"]
}}"""
message = client.messages.create(
model="claude-sonnet-4-6",
max_tokens=500,
messages=[{"role": "user", "content": prompt}]
)
return ReferenceEvaluation(**json.loads(message.content[0].text))
```
## Human Evaluation Frameworks
### Annotation Guidelines
```python
from dataclasses import dataclass, field
from typing import Optional
@dataclass
class AnnotationTask:
"""Structure for human annotation task."""
response: str
question: str
context: Optional[str] = None
def get_annotation_form(self) -> dict:
return {
"question": self.question,
"context": self.context,
"response": self.response,
"ratings": {
"accuracy": {
"scale": "1-5",
"description": "Is the response factually correct?"
},
"relevance": {
"scale": "1-5",
"description": "Does it answer the question?"
},
"coherence": {
"scale": "1-5",
"description": "Is it logically consistent?"
}
},
"issues": {
"factual_error": False,
"hallucination": False,
"off_topic": False,
"unsafe_content": False
},
"feedback": ""
}
```
### Inter-Rater Agreement
```python
from sklearn.metrics import cohen_kappa_score
def calculate_agreement(
rater1_scores: list[int],
rater2_scores: list[int]
) -> dict:
"""Calculate inter-rater agreement."""
kappa = cohen_kappa_score(rater1_scores, rater2_scores)
if kappa < 0:
interpretation = "Poor"
elif kappa < 0.2:
interpretation = "Slight"
elif kappa < 0.4:
interpretation = "Fair"
elif kappa < 0.6:
interpretation = "Moderate"
elif kappa < 0.8:
interpretation = "Substantial"
else:
interpretation = "Almost Perfect"
return {
"kappa": kappa,
"interpretation": interpretation
}
```
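As a dependency-free sanity check of the interpretation bands above, Cohen's kappa can also be computed by hand; this sketch mirrors what `sklearn.metrics.cohen_kappa_score` returns for two raters over categorical labels:
```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Hand-rolled Cohen's kappa for two raters over the same items."""
    n = len(rater1)
    # Observed agreement: fraction of items both raters labeled identically
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal label frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum((c1[label] / n) * (c2[label] / n) for label in set(rater1) | set(rater2))
    return 1.0 if p_e == 1.0 else (p_o - p_e) / (1 - p_e)

print(cohens_kappa([1, 1, 2, 2], [1, 1, 2, 2]))  # perfect agreement -> 1.0
print(cohens_kappa([1, 1, 2, 2], [2, 2, 1, 1]))  # total disagreement -> -1.0
```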
## A/B Testing
### Statistical Testing Framework
```python
from scipy import stats
import numpy as np
from dataclasses import dataclass, field
@dataclass
class ABTest:
variant_a_name: str = "A"
variant_b_name: str = "B"
variant_a_scores: list[float] = field(default_factory=list)
variant_b_scores: list[float] = field(default_factory=list)
def add_result(self, variant: str, score: float):
"""Add evaluation result for a variant."""
if variant == "A":
self.variant_a_scores.append(score)
else:
self.variant_b_scores.append(score)
def analyze(self, alpha: float = 0.05) -> dict:
"""Perform statistical analysis."""
a_scores = np.array(self.variant_a_scores)
b_scores = np.array(self.variant_b_scores)
# T-test
t_stat, p_value = stats.ttest_ind(a_scores, b_scores)
# Effect size (Cohen's d)
pooled_std = np.sqrt((np.std(a_scores)**2 + np.std(b_scores)**2) / 2)
cohens_d = (np.mean(b_scores) - np.mean(a_scores)) / pooled_std
return {
"variant_a_mean": np.mean(a_scores),
"variant_b_mean": np.mean(b_scores),
"difference": np.mean(b_scores) - np.mean(a_scores),
"relative_improvement": (np.mean(b_scores) - np.mean(a_scores)) / np.mean(a_scores),
"p_value": p_value,
"statistically_significant": p_value < alpha,
"cohens_d": cohens_d,
"effect_size": self._interpret_cohens_d(cohens_d),
"winner": self.variant_b_name if np.mean(b_scores) > np.mean(a_scores) else self.variant_a_name
}
@staticmethod
def _interpret_cohens_d(d: float) -> str:
"""Interpret Cohen's d effect size."""
abs_d = abs(d)
if abs_d < 0.2:
return "negligible"
elif abs_d < 0.5:
return "small"
elif abs_d < 0.8:
return "medium"
else:
return "large"
```
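The Cohen's d term in `analyze` uses `np.std`, which defaults to the population standard deviation; a dependency-free sketch of the same formula makes that choice explicit:
```python
import math

def cohens_d(a_scores, b_scores):
    """Cohen's d with the simple pooled population std used in ABTest.analyze."""
    def mean(xs):
        return sum(xs) / len(xs)
    def pop_var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)  # population variance, like np.std
    pooled_std = math.sqrt((pop_var(a_scores) + pop_var(b_scores)) / 2)
    return (mean(b_scores) - mean(a_scores)) / pooled_std

print(cohens_d([0, 2], [2, 4]))  # means differ by 2, pooled std 1 -> 2.0
```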
## Regression Testing
### Regression Detection
```python
from dataclasses import dataclass
@dataclass
class RegressionResult:
metric: str
baseline: float
current: float
change: float
is_regression: bool
class RegressionDetector:
def __init__(self, baseline_results: dict, threshold: float = 0.05):
self.baseline = baseline_results
self.threshold = threshold
def check_for_regression(self, new_results: dict) -> dict:
"""Detect if new results show regression."""
regressions = []
for metric in self.baseline.keys():
baseline_score = self.baseline[metric]
new_score = new_results.get(metric)
if new_score is None:
continue
# Calculate relative change
relative_change = (new_score - baseline_score) / baseline_score
# Flag if significant decrease
is_regression = relative_change < -self.threshold
if is_regression:
regressions.append(RegressionResult(
metric=metric,
baseline=baseline_score,
current=new_score,
change=relative_change,
is_regression=True
))
return {
"has_regression": len(regressions) > 0,
"regressions": regressions,
"summary": f"{len(regressions)} metric(s) regressed"
}
```
## LangSmith Evaluation Integration
```python
from langsmith import Client
from langsmith.evaluation import evaluate, LangChainStringEvaluator
# Initialize LangSmith client
client = Client()
# Create dataset
dataset = client.create_dataset("qa_test_cases")
client.create_examples(
inputs=[{"question": q} for q in questions],
outputs=[{"answer": a} for a in expected_answers],
dataset_id=dataset.id
)
# Define evaluators
evaluators = [
LangChainStringEvaluator("qa"), # QA correctness
LangChainStringEvaluator("context_qa"), # Context-grounded QA
LangChainStringEvaluator("cot_qa"), # Chain-of-thought QA
]
# Run evaluation
async def target_function(inputs: dict) -> dict:
result = await your_chain.ainvoke(inputs)
return {"answer": result}
experiment_results = await evaluate(
target_function,
data=dataset.name,
evaluators=evaluators,
experiment_prefix="v1.0.0",
metadata={"model": "claude-sonnet-4-6", "version": "1.0.0"}
)
print(f"Mean score: {experiment_results.aggregate_metrics['qa']['mean']}")
```
## Benchmarking
### Running Benchmarks
```python
from dataclasses import dataclass
from typing import Callable
import numpy as np
@dataclass
class Metric:
    """Named metric with fn(prediction, reference, context) -> float."""
    name: str
    fn: Callable[..., float]
@dataclass
class BenchmarkResult:
    metric: str
    mean: float
    std: float
    min: float
    max: float
class BenchmarkRunner:
def __init__(self, benchmark_dataset: list[dict]):
self.dataset = benchmark_dataset
async def run_benchmark(
self,
model,
metrics: list[Metric]
) -> dict[str, BenchmarkResult]:
"""Run model on benchmark and calculate metrics."""
results = {metric.name: [] for metric in metrics}
for example in self.dataset:
# Generate prediction
prediction = await model.predict(example["input"])
# Calculate each metric
for metric in metrics:
score = metric.fn(
prediction=prediction,
reference=example["reference"],
context=example.get("context")
)
results[metric.name].append(score)
# Aggregate results
return {
metric: BenchmarkResult(
metric=metric,
mean=np.mean(scores),
std=np.std(scores),
min=min(scores),
max=max(scores)
)
for metric, scores in results.items()
}
```
## Resources
- [LangSmith Evaluation Guide](https://docs.smith.langchain.com/evaluation)
- [RAGAS Framework](https://docs.ragas.io/)
- [DeepEval Library](https://docs.deepeval.com/)
- [Arize Phoenix](https://docs.arize.com/phoenix/)
- [HELM Benchmark](https://crfm.stanford.edu/helm/)
## Best Practices
1. **Multiple Metrics**: Use diverse metrics for comprehensive view
2. **Representative Data**: Test on real-world, diverse examples
3. **Baselines**: Always compare against baseline performance
4. **Statistical Rigor**: Use proper statistical tests for comparisons
5. **Continuous Evaluation**: Integrate into CI/CD pipeline
6. **Human Validation**: Combine automated metrics with human judgment
7. **Error Analysis**: Investigate failures to understand weaknesses
8. **Version Control**: Track evaluation results over time
## Common Pitfalls
- **Single Metric Obsession**: Optimizing for one metric at the expense of others
- **Small Sample Size**: Drawing conclusions from too few examples
- **Data Contamination**: Testing on training data
- **Ignoring Variance**: Not accounting for statistical uncertainty
- **Metric Mismatch**: Using metrics not aligned with business goals
- **Position Bias**: In pairwise evals, randomize order
- **Overfitting Prompts**: Optimizing for test set instead of real use
| """
Test for 'llm-evaluation' skill — LLM Evaluation
Validates that the Agent created an LLM evaluation demo with config, sample inputs,
and structured output in the HELM repository.
"""
import os
import subprocess
import json
import pytest
class TestLlmEvaluation:
"""Verify LLM evaluation demo in HELM."""
REPO_DIR = "/workspace/helm"
# ------------------------------------------------------------------
# L1: file existence & syntax
# ------------------------------------------------------------------
def test_eval_demo_exists(self):
"""examples/llm_eval_demo.py must exist."""
fpath = os.path.join(self.REPO_DIR, "examples", "llm_eval_demo.py")
assert os.path.isfile(fpath), "llm_eval_demo.py not found"
def test_eval_demo_compiles(self):
"""llm_eval_demo.py must compile."""
result = subprocess.run(
["python", "-m", "py_compile", "examples/llm_eval_demo.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
def test_eval_config_exists(self):
"""examples/eval_config.yaml must exist."""
fpath = os.path.join(self.REPO_DIR, "examples", "eval_config.yaml")
assert os.path.isfile(fpath), "eval_config.yaml not found"
# ------------------------------------------------------------------
# L2: structural & content verification
# ------------------------------------------------------------------
def _read_demo_source(self):
fpath = os.path.join(self.REPO_DIR, "examples", "llm_eval_demo.py")
with open(fpath, "r", encoding="utf-8") as f:
return f.read()
def test_config_is_valid_yaml(self):
"""eval_config.yaml must be valid YAML."""
import yaml
fpath = os.path.join(self.REPO_DIR, "examples", "eval_config.yaml")
with open(fpath, "r") as f:
config = yaml.safe_load(f)
assert isinstance(config, dict), "eval_config.yaml must be a YAML mapping"
def test_demo_loads_config(self):
"""Demo must load configuration."""
source = self._read_demo_source()
load_patterns = ["yaml", "json", "config", "load"]
found = sum(1 for p in load_patterns if p in source.lower())
assert found >= 2, "No configuration loading found in demo"
def test_demo_has_score_output(self):
"""Demo must produce score in its output."""
source = self._read_demo_source()
assert "score" in source.lower(), "No score output in demo"
def test_demo_has_labels_or_metrics(self):
"""Demo must include labels or detailed metrics."""
source = self._read_demo_source()
metric_patterns = ["label", "metric", "accuracy", "precision", "recall", "f1"]
found = sum(1 for p in metric_patterns if p in source.lower())
assert found >= 1, "No labels/metrics output in demo"
def test_demo_runs_successfully(self):
"""llm_eval_demo.py must exit with code 0."""
result = subprocess.run(
["python", "examples/llm_eval_demo.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
assert result.returncode == 0, f"Demo failed:\n{result.stderr}"
def test_output_contains_score(self):
"""Output must include a score value."""
result = subprocess.run(
["python", "examples/llm_eval_demo.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
if result.returncode != 0:
pytest.skip(f"Demo failed: {result.stderr[:500]}")
combined = result.stdout + result.stderr
assert (
"score" in combined.lower()
), f"'score' not found in output:\n{combined[:2000]}"
def test_sample_inputs_present(self):
"""Demo or config should include sample evaluation inputs."""
source = self._read_demo_source()
config_path = os.path.join(self.REPO_DIR, "examples", "eval_config.yaml")
with open(config_path, "r") as f:
config_content = f.read()
combined = source + config_content
input_patterns = ["sample", "input", "prompt", "question", "example"]
found = sum(1 for p in input_patterns if p in combined.lower())
assert found >= 2, "No sample evaluation inputs found"
def test_can_run_without_external_api(self):
"""Demo should run locally without external API calls."""
source = self._read_demo_source()
# Should use mock model or local evaluation
has_mock = any(
p in source.lower() for p in ["mock", "fake", "local", "dummy", "test_data"]
)
has_api_key = "API_KEY" in source and "os.environ.get" not in source
assert (
has_mock or not has_api_key
), "Demo requires external API without fallback"
def test_generates_report_file(self):
"""Demo should generate an evaluation report file."""
result = subprocess.run(
["python", "examples/llm_eval_demo.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
if result.returncode != 0:
pytest.skip(f"Demo failed: {result.stderr[:500]}")
# Check for generated report files
report_extensions = [".json", ".csv"]
found_reports = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if any(f.endswith(ext) for ext in report_extensions):
fpath = os.path.join(root, f)
if (
os.path.getmtime(fpath)
> os.path.getmtime(
os.path.join(self.REPO_DIR, "examples", "llm_eval_demo.py")
)
- 60
):
found_reports.append(fpath)
if len(found_reports) > 5:
break
# Or check stdout is JSON
try:
json.loads(result.stdout.strip())
found_reports.append("stdout")
except (json.JSONDecodeError, ValueError):
pass
assert (
len(found_reports) >= 1
), "No evaluation report generated (JSON/CSV file or JSON stdout)"
| https://github.com/stanford-crfm/helm | zhangyiiiiii/swe-skills-bench-python | |
analyze-ci | CI Failure Analyzer | See task file for detailed mission requirements. | feature | # Task: Create CI Failure Analysis Script for Sentry
## Background
Add a CI failure analysis script that parses pytest output logs,
extracts failure information, and generates structured diagnostic reports.
## Files to Create/Modify
- `scripts/analyze_ci_failures.py` (new)
- `sample_pytest_output.log` (new; sample pytest output for testing)
## Requirements
Script Functionality:
- Parse pytest format test output logs
- Extract failed test names
- Identify error types (AssertionError, Exception, etc.)
- Generate stack trace summaries
Output JSON Structure:
```json
{
"failed_tests": ["test_name_1", "test_name_2"],
"error_type": "AssertionError",
"stack_summary": "Brief stack trace excerpt"
}
```
Sample Log File (sample_pytest_output.log):
- Create a valid sample pytest output file containing at least 1 failing test case
- Include typical pytest output formatting: FAILED markers, tracebacks, AssertionError, etc.
CLI Interface:
- `--input`: Path to pytest output log
- `--output`: Path for JSON report
## Acceptance Criteria
- `python scripts/analyze_ci_failures.py --input sample_pytest_output.log --output report.json` succeeds
- Output JSON contains failed_tests, error_type, stack_summary fields
- Script exits with code 0
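One way to sketch the core parsing logic (the regexes below are illustrative, tuned to the common pytest summary format, not an official spec):
```python
import json
import re

def analyze_pytest_log(text: str) -> dict:
    """Extract failed test ids, the first error type, and a short traceback excerpt."""
    failed_tests = re.findall(r"^FAILED (\S+)", text, flags=re.MULTILINE)
    error = re.search(r"^E\s+(\w*(?:Error|Exception))", text, flags=re.MULTILINE)
    traceback = re.search(r"(?s)_{5,}.*?(?=\n=|\Z)", text)  # first FAILURES block
    return {
        "failed_tests": failed_tests,
        "error_type": error.group(1) if error else "Unknown",
        "stack_summary": traceback.group(0)[:300] if traceback else "",
    }

sample = """\
==================== FAILURES ====================
____________________ test_add ____________________
    def test_add():
>       assert 1 + 1 == 3
E       AssertionError: assert 2 == 3
=============== short test summary info ===============
FAILED tests/test_math.py::test_add - AssertionError
"""
print(json.dumps(analyze_pytest_log(sample), indent=2))
```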
| ---
name: analyze-ci
description: Analyze failed GitHub Action jobs for a pull request.
allowed-tools:
- Bash(uv run skills analyze-ci:*)
---
# Analyze CI Failures
This skill analyzes logs from failed GitHub Action jobs using Claude.
## Prerequisites
- **GitHub Token**: Auto-detected via `gh auth token`, or set `GH_TOKEN` env var
## Usage
```bash
# Analyze all failed jobs in a PR
uv run skills analyze-ci <pr_url>
# Analyze specific job URLs directly
uv run skills analyze-ci <job_url> [job_url ...]
# Show debug info (tokens and costs)
uv run skills analyze-ci <pr_url> --debug
```
Output: A concise failure summary with root cause, error messages, test names, and relevant log snippets.
## Examples
```bash
# Analyze CI failures for a PR
uv run skills analyze-ci https://github.com/mlflow/mlflow/pull/19601
# Analyze specific job URLs directly
uv run skills analyze-ci https://github.com/mlflow/mlflow/actions/runs/12345/job/67890
```
| """
Test for 'analyze-ci' skill — CI Failure Analyzer
Validates that the Agent created a CI failure analysis script that parses pytest
output logs and generates structured JSON diagnostic reports.
"""
import os
import json
import subprocess
import pytest
class TestAnalyzeCi:
"""Verify CI failure analysis script for Sentry."""
REPO_DIR = "/workspace/sentry"
# ------------------------------------------------------------------
# L1: file existence & syntax
# ------------------------------------------------------------------
def test_analysis_script_exists(self):
"""scripts/analyze_ci_failures.py must exist."""
fpath = os.path.join(self.REPO_DIR, "scripts", "analyze_ci_failures.py")
assert os.path.isfile(fpath), "analyze_ci_failures.py not found"
def test_sample_log_exists(self):
"""sample_pytest_output.log must exist."""
candidates = [
os.path.join(self.REPO_DIR, "sample_pytest_output.log"),
os.path.join(self.REPO_DIR, "scripts", "sample_pytest_output.log"),
]
found = any(os.path.isfile(c) for c in candidates)
assert found, f"sample_pytest_output.log not found at {candidates}"
def test_script_compiles(self):
"""Analysis script must compile without syntax errors."""
result = subprocess.run(
["python", "-m", "py_compile", "scripts/analyze_ci_failures.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
# ------------------------------------------------------------------
# L2: functional verification
# ------------------------------------------------------------------
def _find_sample_log(self):
for p in ["sample_pytest_output.log", "scripts/sample_pytest_output.log"]:
fpath = os.path.join(self.REPO_DIR, p)
if os.path.isfile(fpath):
return p
pytest.fail("sample_pytest_output.log not found")
def test_script_runs_with_input(self):
"""Script must run with --input/--output and exit code 0."""
log_path = self._find_sample_log()
result = subprocess.run(
[
"python",
"scripts/analyze_ci_failures.py",
"--input",
log_path,
"--output",
"/tmp/ci_report.json",
],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
assert (
result.returncode == 0
), f"Script failed (rc={result.returncode}):\n{result.stderr}"
def test_output_json_is_valid(self):
"""Output report must be valid JSON."""
log_path = self._find_sample_log()
subprocess.run(
[
"python",
"scripts/analyze_ci_failures.py",
"--input",
log_path,
"--output",
"/tmp/ci_report.json",
],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
assert os.path.isfile("/tmp/ci_report.json"), "Report not generated"
with open("/tmp/ci_report.json", "r") as f:
data = json.load(f)
assert isinstance(data, dict), "Report root must be a JSON object"
def test_output_has_failed_tests_field(self):
"""Report must contain failed_tests field."""
log_path = self._find_sample_log()
subprocess.run(
[
"python",
"scripts/analyze_ci_failures.py",
"--input",
log_path,
"--output",
"/tmp/ci_report.json",
],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
with open("/tmp/ci_report.json", "r") as f:
data = json.load(f)
assert (
"failed_tests" in data
), f"Missing failed_tests; keys: {list(data.keys())}"
assert isinstance(data["failed_tests"], list), "failed_tests must be a list"
def test_output_has_error_type_field(self):
"""Report must contain error_type field."""
log_path = self._find_sample_log()
subprocess.run(
[
"python",
"scripts/analyze_ci_failures.py",
"--input",
log_path,
"--output",
"/tmp/ci_report.json",
],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
with open("/tmp/ci_report.json", "r") as f:
data = json.load(f)
assert "error_type" in data, f"Missing error_type; keys: {list(data.keys())}"
def test_output_has_stack_summary_field(self):
"""Report must contain stack_summary field."""
log_path = self._find_sample_log()
subprocess.run(
[
"python",
"scripts/analyze_ci_failures.py",
"--input",
log_path,
"--output",
"/tmp/ci_report.json",
],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
with open("/tmp/ci_report.json", "r") as f:
data = json.load(f)
assert (
"stack_summary" in data
), f"Missing stack_summary; keys: {list(data.keys())}"
def test_failed_tests_not_empty(self):
"""Sample log contains failures, so failed_tests should not be empty."""
log_path = self._find_sample_log()
subprocess.run(
[
"python",
"scripts/analyze_ci_failures.py",
"--input",
log_path,
"--output",
"/tmp/ci_report.json",
],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
with open("/tmp/ci_report.json", "r") as f:
data = json.load(f)
assert (
len(data.get("failed_tests", [])) >= 1
), "failed_tests is empty; sample log should contain at least 1 failure"
def test_sample_log_has_valid_format(self):
"""Sample log must contain pytest-style output markers."""
log_path = self._find_sample_log()
fpath = os.path.join(self.REPO_DIR, log_path)
with open(fpath, "r", encoding="utf-8", errors="replace") as f:
content = f.read()
markers = ["FAILED", "PASSED", "ERROR", "=====", "-----"]
found = sum(1 for m in markers if m in content)
assert (
found >= 2
), f"Sample log doesn't look like pytest output (matched {found} markers)"
def test_cli_help_available(self):
"""Script --help should work."""
result = subprocess.run(
["python", "scripts/analyze_ci_failures.py", "--help"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"--help failed:\n{result.stderr}"
assert "--input" in result.stdout, "--input not mentioned in help"
| https://github.com/getsentry/sentry | zhangyiiiiii/swe-skills-bench-python | |
python-packaging | Python Packaging & Distribution | See task file for detailed mission requirements. | test | # Task: Add Version Parsing Edge Case Tests for Python Packaging
## Background
Add integration tests covering version number parsing edge cases
and a demo script showing packaging library usage.
## Files to Create/Modify
- tests/test_version_edge_cases.py (new tests)
- scripts/demo_packaging.py (new demo script)
## Requirements
Edge Cases to Test:
- Pre-release versions: 1.0a1, 1.0b2, 1.0rc1
- Local version identifiers: 1.0+local, 1.0+ubuntu1
- Epoch versions: 1!2.0
- Post-release: 1.0.post1
- Dev versions: 1.0.dev1
Demo Script Features:
- Version comparison examples
- Specifier filtering demonstration
- Wheel metadata parsing
- Complete usage workflow
Test Coverage:
- Version parsing correctness
- Specifier matching logic
- Version comparison operators
- Error handling for invalid versions
## Acceptance Criteria
- `python scripts/demo_packaging.py` outputs comparison results
- All edge case versions parsed correctly
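The edge cases above map directly onto the `packaging` library (assumed installed, as it is in pip/setuptools environments); a minimal sketch of the demo workflow:
```python
from packaging.specifiers import SpecifierSet
from packaging.version import InvalidVersion, Version

# Pre-release, local, epoch, post, and dev identifiers all parse per PEP 440
edge_cases = ["1.0a1", "1.0b2", "1.0rc1", "1.0+local", "1!2.0", "1.0.post1", "1.0.dev1"]
for raw in edge_cases:
    v = Version(raw)
    print(f"{raw:10} -> epoch={v.epoch} pre={v.pre} post={v.post} dev={v.dev} local={v.local}")

# Ordering: dev < pre-releases < final < post (within the same release number)
assert Version("1.0.dev1") < Version("1.0a1") < Version("1.0rc1") < Version("1.0") < Version("1.0.post1")

# Specifier filtering excludes pre-releases unless explicitly allowed
print(list(SpecifierSet(">=1.0").filter(["1.0a1", "1.0", "2.0"])))  # ['1.0', '2.0']

# Error handling for invalid versions
try:
    Version("not-a-version")
except InvalidVersion:
    print("rejected: not-a-version")
```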
| ---
name: python-packaging
description: Create distributable Python packages with proper project structure, setup.py/pyproject.toml, and publishing to PyPI. Use when packaging Python libraries, creating CLI tools, or distributing Python code.
---
# Python Packaging
Comprehensive guide to creating, structuring, and distributing Python packages using modern packaging tools, pyproject.toml, and publishing to PyPI.
## When to Use This Skill
- Creating Python libraries for distribution
- Building command-line tools with entry points
- Publishing packages to PyPI or private repositories
- Setting up Python project structure
- Creating installable packages with dependencies
- Building wheels and source distributions
- Versioning and releasing Python packages
- Creating namespace packages
- Implementing package metadata and classifiers
## Core Concepts
### 1. Package Structure
- **Source layout**: `src/package_name/` (recommended)
- **Flat layout**: `package_name/` (simpler but less flexible)
- **Package metadata**: pyproject.toml, setup.py, or setup.cfg
- **Distribution formats**: wheel (.whl) and source distribution (.tar.gz)
### 2. Modern Packaging Standards
- **PEP 517/518**: Build system requirements
- **PEP 621**: Metadata in pyproject.toml
- **PEP 660**: Editable installs
- **pyproject.toml**: Single source of configuration
### 3. Build Backends
- **setuptools**: Traditional, widely used
- **hatchling**: Modern, opinionated
- **flit**: Lightweight, for pure Python
- **poetry**: Dependency management + packaging
### 4. Distribution
- **PyPI**: Python Package Index (public)
- **TestPyPI**: Testing before production
- **Private repositories**: JFrog, AWS CodeArtifact, etc.
## Quick Start
### Minimal Package Structure
```
my-package/
├── pyproject.toml
├── README.md
├── LICENSE
├── src/
│ └── my_package/
│ ├── __init__.py
│ └── module.py
└── tests/
└── test_module.py
```
### Minimal pyproject.toml
```toml
[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"
[project]
name = "my-package"
version = "0.1.0"
description = "A short description"
authors = [{name = "Your Name", email = "you@example.com"}]
readme = "README.md"
requires-python = ">=3.8"
dependencies = [
"requests>=2.28.0",
]
[project.optional-dependencies]
dev = [
"pytest>=7.0",
"black>=22.0",
]
```
## Package Structure Patterns
### Pattern 1: Source Layout (Recommended)
```
my-package/
├── pyproject.toml
├── README.md
├── LICENSE
├── .gitignore
├── src/
│ └── my_package/
│ ├── __init__.py
│ ├── core.py
│ ├── utils.py
│ └── py.typed # For type hints
├── tests/
│ ├── __init__.py
│ ├── test_core.py
│ └── test_utils.py
└── docs/
└── index.md
```
**Advantages:**
- Prevents accidentally importing from source
- Cleaner test imports
- Better isolation
**pyproject.toml for source layout:**
```toml
[tool.setuptools.packages.find]
where = ["src"]
```
### Pattern 2: Flat Layout
```
my-package/
├── pyproject.toml
├── README.md
├── my_package/
│ ├── __init__.py
│ └── module.py
└── tests/
└── test_module.py
```
**Simpler but:**
- Can import package without installing
- Less professional for libraries
### Pattern 3: Multi-Package Project
```
project/
├── pyproject.toml
├── packages/
│ ├── package-a/
│ │ └── src/
│ │ └── package_a/
│ └── package-b/
│ └── src/
│ └── package_b/
└── tests/
```
## Complete pyproject.toml Examples
### Pattern 4: Full-Featured pyproject.toml
```toml
[build-system]
requires = ["setuptools>=61.0", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "my-awesome-package"
version = "1.0.0"
description = "An awesome Python package"
readme = "README.md"
requires-python = ">=3.8"
license = {text = "MIT"}
authors = [
{name = "Your Name", email = "you@example.com"},
]
maintainers = [
{name = "Maintainer Name", email = "maintainer@example.com"},
]
keywords = ["example", "package", "awesome"]
classifiers = [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
]
dependencies = [
"requests>=2.28.0,<3.0.0",
"click>=8.0.0",
"pydantic>=2.0.0",
]
[project.optional-dependencies]
dev = [
"pytest>=7.0.0",
"pytest-cov>=4.0.0",
"black>=23.0.0",
"ruff>=0.1.0",
"mypy>=1.0.0",
]
docs = [
"sphinx>=5.0.0",
"sphinx-rtd-theme>=1.0.0",
]
all = [
"my-awesome-package[dev,docs]",
]
[project.urls]
Homepage = "https://github.com/username/my-awesome-package"
Documentation = "https://my-awesome-package.readthedocs.io"
Repository = "https://github.com/username/my-awesome-package"
"Bug Tracker" = "https://github.com/username/my-awesome-package/issues"
Changelog = "https://github.com/username/my-awesome-package/blob/main/CHANGELOG.md"
[project.scripts]
my-cli = "my_package.cli:main"
awesome-tool = "my_package.tools:run"
[project.entry-points."my_package.plugins"]
plugin1 = "my_package.plugins:plugin1"
[tool.setuptools]
package-dir = {"" = "src"}
zip-safe = false
[tool.setuptools.packages.find]
where = ["src"]
include = ["my_package*"]
exclude = ["tests*"]
[tool.setuptools.package-data]
my_package = ["py.typed", "*.pyi", "data/*.json"]
# Black configuration
[tool.black]
line-length = 100
target-version = ["py38", "py39", "py310", "py311"]
include = '\.pyi?$'
# Ruff configuration
[tool.ruff]
line-length = 100
target-version = "py38"
[tool.ruff.lint]
select = ["E", "F", "I", "N", "W", "UP"]
# MyPy configuration
[tool.mypy]
python_version = "3.8"
warn_return_any = true
warn_unused_configs = true
disallow_untyped_defs = true
# Pytest configuration
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
addopts = "-v --cov=my_package --cov-report=term-missing"
# Coverage configuration
[tool.coverage.run]
source = ["src"]
omit = ["*/tests/*"]
[tool.coverage.report]
exclude_lines = [
"pragma: no cover",
"def __repr__",
"raise AssertionError",
"raise NotImplementedError",
]
```
### Pattern 5: Dynamic Versioning
```toml
[build-system]
requires = ["setuptools>=61.0", "setuptools-scm>=8.0"]
build-backend = "setuptools.build_meta"
[project]
name = "my-package"
dynamic = ["version"]
description = "Package with dynamic version"
[tool.setuptools.dynamic]
version = {attr = "my_package.__version__"}
# Or use setuptools-scm for git-based versioning
[tool.setuptools_scm]
write_to = "src/my_package/_version.py"
```
**In `__init__.py`:**
```python
# src/my_package/__init__.py
__version__ = "1.0.0"
# Or with setuptools-scm
from importlib.metadata import version
__version__ = version("my-package")
```
## Command-Line Interface (CLI) Patterns
### Pattern 6: CLI with Click
```python
# src/my_package/cli.py
import click
@click.group()
@click.version_option()
def cli():
"""My awesome CLI tool."""
pass
@cli.command()
@click.argument("name")
@click.option("--greeting", default="Hello", help="Greeting to use")
def greet(name: str, greeting: str):
"""Greet someone."""
click.echo(f"{greeting}, {name}!")
@cli.command()
@click.option("--count", default=1, help="Number of times to repeat")
def repeat(count: int):
"""Repeat a message."""
for i in range(count):
click.echo(f"Message {i + 1}")
def main():
"""Entry point for CLI."""
cli()
if __name__ == "__main__":
main()
```
**Register in pyproject.toml:**
```toml
[project.scripts]
my-tool = "my_package.cli:main"
```
**Usage:**
```bash
pip install -e .
my-tool greet World
my-tool greet Alice --greeting="Hi"
my-tool repeat --count=3
```
### Pattern 7: CLI with argparse
```python
# src/my_package/cli.py
import argparse
import sys
def main():
"""Main CLI entry point."""
parser = argparse.ArgumentParser(
description="My awesome tool",
prog="my-tool"
)
parser.add_argument(
"--version",
action="version",
version="%(prog)s 1.0.0"
)
subparsers = parser.add_subparsers(dest="command", help="Commands")
# Add subcommand
process_parser = subparsers.add_parser("process", help="Process data")
process_parser.add_argument("input_file", help="Input file path")
process_parser.add_argument(
"--output", "-o",
default="output.txt",
help="Output file path"
)
args = parser.parse_args()
if args.command == "process":
process_data(args.input_file, args.output)
else:
parser.print_help()
sys.exit(1)
def process_data(input_file: str, output_file: str):
"""Process data from input to output."""
print(f"Processing {input_file} -> {output_file}")
if __name__ == "__main__":
main()
```
## Building and Publishing
### Pattern 8: Build Package Locally
```bash
# Install build tools
pip install build twine
# Build distribution
python -m build
# This creates:
# dist/
# my-package-1.0.0.tar.gz (source distribution)
# my_package-1.0.0-py3-none-any.whl (wheel)
# Check the distribution
twine check dist/*
```
### Pattern 9: Publishing to PyPI
```bash
# Install publishing tools
pip install twine
# Test on TestPyPI first
twine upload --repository testpypi dist/*
# Install from TestPyPI to test
pip install --index-url https://test.pypi.org/simple/ my-package
# If all good, publish to PyPI
twine upload dist/*
```
**Using API tokens (recommended):**
```ini
# Create ~/.pypirc
[distutils]
index-servers =
pypi
testpypi
[pypi]
username = __token__
password = pypi-...your-token...
[testpypi]
username = __token__
password = pypi-...your-test-token...
```
### Pattern 10: Automated Publishing with GitHub Actions
```yaml
# .github/workflows/publish.yml
name: Publish to PyPI
on:
release:
types: [created]
jobs:
publish:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.11"
- name: Install dependencies
run: |
pip install build twine
- name: Build package
run: python -m build
- name: Check package
run: twine check dist/*
- name: Publish to PyPI
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
run: twine upload dist/*
```
## Advanced Patterns
### Pattern 11: Including Data Files
```toml
[tool.setuptools.package-data]
my_package = [
"data/*.json",
"templates/*.html",
"static/css/*.css",
"py.typed",
]
```
**Accessing data files:**
```python
# src/my_package/loader.py
from importlib.resources import files
import json
def load_config():
"""Load configuration from package data."""
config_file = files("my_package").joinpath("data/config.json")
with config_file.open() as f:
return json.load(f)
# importlib.resources.files requires Python 3.9+
data = files("my_package").joinpath("data/file.txt").read_text()
```
### Pattern 12: Namespace Packages
**For large projects split across multiple repositories:**
```
# Package 1: company-core
company/
└── core/
├── __init__.py
└── models.py
# Package 2: company-api
company/
└── api/
├── __init__.py
└── routes.py
```
**Do NOT include `__init__.py` in the namespace directory (`company/`):**
```toml
# company-core/pyproject.toml
[project]
name = "company-core"
[tool.setuptools.packages.find]
where = ["."]
include = ["company.core*"]
# company-api/pyproject.toml
[project]
name = "company-api"
[tool.setuptools.packages.find]
where = ["."]
include = ["company.api*"]
```
**Usage:**
```python
# Both packages can be imported under same namespace
from company.core import models
from company.api import routes
```
### Pattern 13: C Extensions
```toml
[build-system]
requires = ["setuptools>=61.0", "wheel", "Cython>=0.29"]
build-backend = "setuptools.build_meta"
[tool.setuptools]
ext-modules = [
{name = "my_package.fast_module", sources = ["src/fast_module.c"]},
]
```
**Or with setup.py:**
```python
# setup.py
from setuptools import setup, Extension
setup(
ext_modules=[
Extension(
"my_package.fast_module",
sources=["src/fast_module.c"],
include_dirs=["src/include"],
)
]
)
```
## Version Management
### Pattern 14: Semantic Versioning
```python
# src/my_package/__init__.py
__version__ = "1.2.3"
# Semantic versioning: MAJOR.MINOR.PATCH
# MAJOR: Breaking changes
# MINOR: New features (backward compatible)
# PATCH: Bug fixes
```
**Version constraints in dependencies:**
```toml
dependencies = [
"requests>=2.28.0,<3.0.0", # Compatible range
"click~=8.1.0", # Compatible release (~= 8.1.0 means >=8.1.0,<8.2.0)
"pydantic>=2.0", # Minimum version
"numpy==1.24.3", # Exact version (avoid if possible)
]
```
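The constraint styles above can be checked programmatically. A minimal sketch of semantic-version ordering and the `~=` operator using plain tuples (real PEP 440 parsing, with pre-releases, epochs, and local versions, is what the `packaging` library is for):

```python
def parse_semver(version: str) -> tuple[int, int, int]:
    """Parse a plain MAJOR.MINOR.PATCH string (no pre-release tags)."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

def satisfies_compatible_release(version: str, base: str) -> bool:
    """Approximate '~= X.Y.Z': >= X.Y.Z and < X.(Y+1).0."""
    v, b = parse_semver(version), parse_semver(base)
    return b <= v < (b[0], b[1] + 1, 0)

# Tuple comparison orders numerically, so "1.10.0" > "1.2.3"
assert parse_semver("1.10.0") > parse_semver("1.2.3")
assert satisfies_compatible_release("8.1.7", "8.1.0")      # ~=8.1.0 accepts 8.1.7
assert not satisfies_compatible_release("8.2.0", "8.1.0")  # but not 8.2.0
```

Note that string comparison would get this wrong (`"1.10.0" < "1.2.3"` lexicographically), which is exactly why versions must be parsed before comparing.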
### Pattern 15: Git-Based Versioning
```toml
[build-system]
requires = ["setuptools>=61.0", "setuptools-scm>=8.0"]
build-backend = "setuptools.build_meta"
[project]
name = "my-package"
dynamic = ["version"]
[tool.setuptools_scm]
write_to = "src/my_package/_version.py"
version_scheme = "post-release"
local_scheme = "dirty-tag"
```
**Creates versions like:**
- `1.0.0` (from git tag)
- `1.0.1.dev3+g1234567` (3 commits after tag)
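The setuptools-scm strings above follow a recognizable shape; a sketch of pulling them apart with a regular expression (this pattern covers only the two example forms shown, not every scheme setuptools-scm can emit):

```python
import re

# "<base>.dev<distance>+g<hash>", e.g. "1.0.1.dev3+g1234567"
SCM_DEV = re.compile(
    r"^(?P<base>\d+(?:\.\d+)*)"      # release segment, e.g. 1.0.1
    r"(?:\.dev(?P<distance>\d+))?"   # commits since the last tag
    r"(?:\+g(?P<node>[0-9a-f]+))?$"  # abbreviated git commit hash
)

match = SCM_DEV.match("1.0.1.dev3+g1234567")
assert match is not None
print(match.group("base"), match.group("distance"), match.group("node"))
# → 1.0.1 3 1234567
```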
## Testing Installation
### Pattern 16: Editable Install
```bash
# Install in development mode
pip install -e .
# With optional dependencies
pip install -e ".[dev]"
pip install -e ".[dev,docs]"
# Now changes to source code are immediately reflected
```
### Pattern 17: Testing in Isolated Environment
```bash
# Create virtual environment
python -m venv test-env
source test-env/bin/activate # Linux/Mac
# test-env\Scripts\activate # Windows
# Install package
pip install dist/my_package-1.0.0-py3-none-any.whl
# Test it works
python -c "import my_package; print(my_package.__version__)"
# Test CLI
my-tool --help
# Cleanup
deactivate
rm -rf test-env
```
## Documentation
### Pattern 18: README.md Template
````markdown
# My Package
[![PyPI version](https://img.shields.io/pypi/v/my-package)](https://pypi.org/project/my-package/)
[![Python versions](https://img.shields.io/pypi/pyversions/my-package)](https://pypi.org/project/my-package/)
[![CI](https://github.com/username/my-package/actions/workflows/ci.yml/badge.svg)](https://github.com/username/my-package/actions)
Brief description of your package.
## Installation
```bash
pip install my-package
```
## Quick Start
```python
from my_package import something
result = something.do_stuff()
```
## Features
- Feature 1
- Feature 2
- Feature 3
## Documentation
Full documentation: https://my-package.readthedocs.io
## Development
```bash
git clone https://github.com/username/my-package.git
cd my-package
pip install -e ".[dev]"
pytest
```
## License
MIT
````
## Common Patterns
### Pattern 19: Multi-Architecture Wheels
```yaml
# .github/workflows/wheels.yml
name: Build wheels
on: [push, pull_request]
jobs:
build_wheels:
name: Build wheels on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
steps:
- uses: actions/checkout@v3
- name: Build wheels
uses: pypa/cibuildwheel@v2.16.2
- uses: actions/upload-artifact@v3
with:
path: ./wheelhouse/*.whl
```
### Pattern 20: Private Package Index
```bash
# Install from private index
pip install my-package --index-url https://private.pypi.org/simple/
# Or configure pip.conf:
#   [global]
#   index-url = https://private.pypi.org/simple/
#   extra-index-url = https://pypi.org/simple/
# Upload to private index
twine upload --repository-url https://private.pypi.org/ dist/*
```
## File Templates
### .gitignore for Python Packages
```gitignore
# Build artifacts
build/
dist/
*.egg-info/
*.egg
.eggs/
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
# Virtual environments
venv/
env/
ENV/
# IDE
.vscode/
.idea/
*.swp
# Testing
.pytest_cache/
.coverage
htmlcov/
# Distribution
*.whl
*.tar.gz
```
### MANIFEST.in
```
# MANIFEST.in
include README.md
include LICENSE
include pyproject.toml
recursive-include src/my_package/data *.json
recursive-include src/my_package/templates *.html
recursive-exclude * __pycache__
recursive-exclude * *.py[co]
```
## Checklist for Publishing
- [ ] Code is tested (pytest passing)
- [ ] Documentation is complete (README, docstrings)
- [ ] Version number updated
- [ ] CHANGELOG.md updated
- [ ] License file included
- [ ] pyproject.toml is complete
- [ ] Package builds without errors
- [ ] Installation tested in clean environment
- [ ] CLI tools work (if applicable)
- [ ] PyPI metadata is correct (classifiers, keywords)
- [ ] GitHub repository linked
- [ ] Tested on TestPyPI first
- [ ] Git tag created for release
## Resources
- **Python Packaging Guide**: https://packaging.python.org/
- **PyPI**: https://pypi.org/
- **TestPyPI**: https://test.pypi.org/
- **setuptools documentation**: https://setuptools.pypa.io/
- **build**: https://pypa-build.readthedocs.io/
- **twine**: https://twine.readthedocs.io/
## Best Practices Summary
1. **Use src/ layout** for cleaner package structure
2. **Use pyproject.toml** for modern packaging
3. **Pin build dependencies** in build-system.requires
4. **Version appropriately** with semantic versioning
5. **Include all metadata** (classifiers, URLs, etc.)
6. **Test installation** in clean environments
7. **Use TestPyPI** before publishing to PyPI
8. **Document thoroughly** with README and docstrings
9. **Include LICENSE** file
10. **Automate publishing** with CI/CD
| """
Test for 'python-packaging' skill — Python Packaging & Distribution
Validates that the Agent added version parsing edge case tests and a demo script
to the packaging repository.
"""
import os
import subprocess
import pytest
class TestPythonPackaging:
"""Verify version parsing edge-case tests and demo for packaging."""
REPO_DIR = "/workspace/packaging"
# ------------------------------------------------------------------
# L1: file existence & syntax
# ------------------------------------------------------------------
def test_edge_case_tests_exist(self):
"""tests/test_version_edge_cases.py must exist."""
fpath = os.path.join(self.REPO_DIR, "tests", "test_version_edge_cases.py")
assert os.path.isfile(fpath), "test_version_edge_cases.py not found"
def test_demo_script_exists(self):
"""scripts/demo_packaging.py must exist."""
fpath = os.path.join(self.REPO_DIR, "scripts", "demo_packaging.py")
assert os.path.isfile(fpath), "demo_packaging.py not found"
def test_edge_case_tests_compile(self):
"""Edge case test file must compile."""
result = subprocess.run(
["python", "-m", "py_compile", "tests/test_version_edge_cases.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
def test_demo_script_compiles(self):
"""Demo script must compile."""
result = subprocess.run(
["python", "-m", "py_compile", "scripts/demo_packaging.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
# ------------------------------------------------------------------
# L2: functional verification
# ------------------------------------------------------------------
def test_demo_script_runs(self):
"""Demo script must run successfully and produce output."""
result = subprocess.run(
["python", "scripts/demo_packaging.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
assert result.returncode == 0, f"Demo failed:\n{result.stderr}"
assert len(result.stdout.strip()) > 0, "Demo produced no output"
def test_edge_case_tests_pass(self):
"""Edge case version tests must pass."""
result = subprocess.run(
[
"python",
"-m",
"pytest",
"tests/test_version_edge_cases.py",
"-v",
"--tb=short",
],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
assert (
result.returncode == 0
), f"Edge case tests failed:\n{result.stdout[-2000:]}\n{result.stderr[-500:]}"
def test_prerelease_versions_covered(self):
"""Tests must cover pre-release versions (alpha, beta, rc)."""
fpath = os.path.join(self.REPO_DIR, "tests", "test_version_edge_cases.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
prerelease_terms = ["a1", "b2", "rc1", "alpha", "beta", "pre"]
found = sum(1 for t in prerelease_terms if t in content)
assert found >= 2, f"Insufficient pre-release version coverage (found {found})"
def test_epoch_version_covered(self):
"""Tests must cover epoch versions (e.g. 1!2.0)."""
fpath = os.path.join(self.REPO_DIR, "tests", "test_version_edge_cases.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
assert (
"!" in content or "epoch" in content.lower()
), "Epoch version test case not found"
def test_local_version_covered(self):
"""Tests must cover local version identifiers (e.g. 1.0+local)."""
fpath = os.path.join(self.REPO_DIR, "tests", "test_version_edge_cases.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
assert (
"+" in content or "local" in content.lower()
), "Local version identifier test case not found"
def test_post_and_dev_versions_covered(self):
"""Tests must cover post-release and dev versions."""
fpath = os.path.join(self.REPO_DIR, "tests", "test_version_edge_cases.py")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
assert (
"post" in content.lower() or "dev" in content.lower()
), "Post-release/dev version test cases not found"
def test_demo_shows_comparison(self):
"""Demo output should demonstrate version comparison."""
result = subprocess.run(
["python", "scripts/demo_packaging.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
if result.returncode != 0:
pytest.skip(f"Demo failed: {result.stderr[:500]}")
output = result.stdout
# Should show comparison operators or version ordering
comparison_indicators = [">", "<", "==", "True", "False", "Version"]
found = sum(1 for ind in comparison_indicators if ind in output)
assert found >= 2, f"Demo output lacks version comparison results"
| https://github.com/pypa/packaging | zhangyiiiiii/swe-skills-bench-python | |
gitops-workflow | GitOps Workflow for Kubernetes | See task file for detailed mission requirements. | feature | # Task: Add Flux GitOps Demo Configuration and Verification Script
## Background
Add a complete GitOps configuration example to the Flux CD repository that demonstrates Flux deployment manifests and a Kustomize overlay structure, and write a verification script that checks the configuration's correctness.
## Files to Create/Modify
- `hack/gitops-demo/clusters/dev/flux-system/gotk-components.yaml` (Flux bootstrap components)
- `hack/gitops-demo/clusters/dev/flux-system/kustomization.yaml` (Flux kustomization)
- `hack/gitops-demo/apps/base/deployment.yaml` (base Deployment)
- `hack/gitops-demo/apps/base/service.yaml` (base Service)
- `hack/gitops-demo/apps/base/kustomization.yaml` (base kustomization)
- `hack/gitops-demo/apps/overlays/dev/kustomization.yaml` (dev overlay)
- `hack/gitops-demo/apps/overlays/dev/patch-replicas.yaml` (replicas patch)
- `hack/verify-gitops-demo.sh` (verification script)
## Requirements
### Directory Structure
```
hack/
├── gitops-demo/
│ ├── clusters/
│ │ └── dev/
│ │ └── flux-system/
│ │ ├── gotk-components.yaml
│ │ └── kustomization.yaml
│ └── apps/
│ ├── base/
│ │ ├── deployment.yaml
│ │ ├── service.yaml
│ │ └── kustomization.yaml
│ └── overlays/
│ └── dev/
│ ├── kustomization.yaml
│ └── patch-replicas.yaml
└── verify-gitops-demo.sh
```
### Kustomize Configuration Requirements
- base/deployment.yaml: a complete Kubernetes Deployment (apiVersion, kind, metadata.name, spec.replicas, spec.template)
- base/service.yaml: a complete Kubernetes Service exposing a port
- base/kustomization.yaml: references deployment.yaml and service.yaml
- overlays/dev/kustomization.yaml: builds on ../../base and applies the patch-replicas.yaml patch
- overlays/dev/patch-replicas.yaml: a Strategic Merge Patch that changes replicas to the dev environment value
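The base/overlay layout above can be scaffolded from a short script. A sketch that writes only the kustomization and patch files named in the requirements (the Deployment name `demo-app` is an assumption; `deployment.yaml` and `service.yaml` themselves are omitted):

```python
import tempfile
from pathlib import Path

def scaffold_overlay(root: Path, dev_replicas: int = 1) -> None:
    """Write a minimal Kustomize base + dev overlay tree under root."""
    base = root / "apps" / "base"
    dev = root / "apps" / "overlays" / "dev"
    base.mkdir(parents=True, exist_ok=True)
    dev.mkdir(parents=True, exist_ok=True)
    # base kustomization references the two resource files
    (base / "kustomization.yaml").write_text(
        "resources:\n- deployment.yaml\n- service.yaml\n")
    # dev overlay builds on the base and applies a strategic-merge patch
    (dev / "kustomization.yaml").write_text(
        "resources:\n- ../../base\npatches:\n- path: patch-replicas.yaml\n")
    (dev / "patch-replicas.yaml").write_text(
        "apiVersion: apps/v1\n"
        "kind: Deployment\n"
        "metadata:\n"
        "  name: demo-app\n"  # assumed name; must match the base Deployment
        "spec:\n"
        f"  replicas: {dev_replicas}\n")

demo_root = Path(tempfile.mkdtemp())
scaffold_overlay(demo_root, dev_replicas=2)
```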
### Verification Script Requirements (hack/verify-gitops-demo.sh)
- The script must be executable (`chmod +x`)
- Check that all required files under `hack/gitops-demo/` exist
- Use `kustomize build hack/gitops-demo/apps/overlays/dev` to verify that the Kustomize build succeeds
- Verify that the generated manifest contains Deployment and Service resources
- Verify that the overlay patch is applied correctly
- Exit with code 0 when all checks pass, and with a non-zero code when any check fails
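The checks above translate almost line-for-line into code. A Python sketch of the same logic (the shell script is what the task asks for; this only illustrates the checks, and it runs `kustomize` only when the binary is on PATH):

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

REQUIRED = [
    "clusters/dev/flux-system/gotk-components.yaml",
    "clusters/dev/flux-system/kustomization.yaml",
    "apps/base/deployment.yaml",
    "apps/base/service.yaml",
    "apps/base/kustomization.yaml",
    "apps/overlays/dev/kustomization.yaml",
    "apps/overlays/dev/patch-replicas.yaml",
]

def verify(demo_root: Path) -> list[str]:
    """Return a list of failure messages; an empty list means all checks passed."""
    failures = [f"missing: {rel}" for rel in REQUIRED
                if not (demo_root / rel).is_file()]
    # Only attempt the build when all files exist and kustomize is installed
    if not failures and shutil.which("kustomize"):
        result = subprocess.run(
            ["kustomize", "build", str(demo_root / "apps/overlays/dev")],
            capture_output=True, text=True)
        if result.returncode != 0:
            failures.append(f"kustomize build failed: {result.stderr.strip()}")
        elif "kind: Deployment" not in result.stdout:
            failures.append("rendered manifest has no Deployment")
    return failures

# Against an empty directory, every file-existence check fails
demo_failures = verify(Path(tempfile.mkdtemp()))
```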
## Acceptance Criteria
- `kustomize build hack/gitops-demo/apps/overlays/dev` exits with code 0
- The generated manifest contains the Deployment and Service
- Overlay patches are applied correctly
- `bash hack/verify-gitops-demo.sh` exits with code 0
| ---
name: gitops-workflow
description: Implement GitOps workflows with ArgoCD and Flux for automated, declarative Kubernetes deployments with continuous reconciliation. Use when implementing GitOps practices, automating Kubernetes deployments, or setting up declarative infrastructure management.
---
# GitOps Workflow
Complete guide to implementing GitOps workflows with ArgoCD and Flux for automated Kubernetes deployments.
## Purpose
Implement declarative, Git-based continuous delivery for Kubernetes using ArgoCD or Flux CD, following OpenGitOps principles.
## When to Use This Skill
- Set up GitOps for Kubernetes clusters
- Automate application deployments from Git
- Implement progressive delivery strategies
- Manage multi-cluster deployments
- Configure automated sync policies
- Set up secret management in GitOps
## OpenGitOps Principles
1. **Declarative** - Entire system described declaratively
2. **Versioned and Immutable** - Desired state stored in Git
3. **Pulled Automatically** - Software agents pull desired state
4. **Continuously Reconciled** - Agents reconcile actual vs desired state
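Principle 4 is the heart of GitOps: an agent repeatedly diffs desired state (Git) against actual state (cluster) and acts on the difference. A toy sketch of that reconcile step over plain dicts:

```python
def reconcile(desired: dict, actual: dict) -> dict:
    """Plan the actions an agent would take to converge actual -> desired."""
    return {
        "create": {k: v for k, v in desired.items() if k not in actual},
        "update": {k: v for k, v in desired.items()
                   if k in actual and actual[k] != v},
        "delete": [k for k in actual if k not in desired],  # 'prune' behaviour
    }

desired = {"deploy/web": {"replicas": 3}, "svc/web": {"port": 80}}
actual = {"deploy/web": {"replicas": 1}, "deploy/old": {"replicas": 2}}
plan = reconcile(desired, actual)
# A real agent applies this plan, then repeats on the next sync interval
```

The `delete` list is what ArgoCD's `prune: true` and Flux's `prune: true` enable: resources present in the cluster but absent from Git get removed.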
## ArgoCD Setup
### 1. Installation
```bash
# Create namespace
kubectl create namespace argocd
# Install ArgoCD
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# Get admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
```
**Reference:** See `references/argocd-setup.md` for detailed setup
### 2. Repository Structure
```
gitops-repo/
├── apps/
│ ├── production/
│ │ ├── app1/
│ │ │ ├── kustomization.yaml
│ │ │ └── deployment.yaml
│ │ └── app2/
│ └── staging/
├── infrastructure/
│ ├── ingress-nginx/
│ ├── cert-manager/
│ └── monitoring/
└── argocd/
├── applications/
└── projects/
```
### 3. Create Application
```yaml
# argocd/applications/my-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: my-app
namespace: argocd
spec:
project: default
source:
repoURL: https://github.com/org/gitops-repo
targetRevision: main
path: apps/production/my-app
destination:
server: https://kubernetes.default.svc
namespace: production
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
```
### 4. App of Apps Pattern
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: applications
namespace: argocd
spec:
project: default
source:
repoURL: https://github.com/org/gitops-repo
targetRevision: main
path: argocd/applications
destination:
server: https://kubernetes.default.svc
namespace: argocd
syncPolicy:
automated: {}
```
## Flux CD Setup
### 1. Installation
```bash
# Install Flux CLI
curl -s https://fluxcd.io/install.sh | sudo bash
# Bootstrap Flux
flux bootstrap github \
--owner=org \
--repository=gitops-repo \
--branch=main \
--path=clusters/production \
--personal
```
### 2. Create GitRepository
```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
name: my-app
namespace: flux-system
spec:
interval: 1m
url: https://github.com/org/my-app
ref:
branch: main
```
### 3. Create Kustomization
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
name: my-app
namespace: flux-system
spec:
interval: 5m
path: ./deploy
prune: true
sourceRef:
kind: GitRepository
name: my-app
```
## Sync Policies
### Auto-Sync Configuration
**ArgoCD:**
```yaml
syncPolicy:
automated:
prune: true # Delete resources not in Git
selfHeal: true # Reconcile manual changes
allowEmpty: false
retry:
limit: 5
backoff:
duration: 5s
factor: 2
maxDuration: 3m
```
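The ArgoCD retry stanza above (5s base, factor 2, 3m cap) produces an exponential backoff schedule; a small sketch of the arithmetic:

```python
def backoff_schedule(duration, factor, max_duration, limit):
    """Delays (seconds) before each retry attempt, each capped at max_duration."""
    delays, delay = [], duration
    for _ in range(limit):
        delays.append(min(delay, max_duration))
        delay *= factor
    return delays

# duration=5s, factor=2, maxDuration=3m (180s), limit=5
print(backoff_schedule(5, 2, 180, 5))  # → [5, 10, 20, 40, 80]
```

With these defaults the cap never kicks in; raise `limit` or lower `maxDuration` and later delays flatten at the cap.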
**Flux:**
```yaml
spec:
interval: 1m
prune: true
wait: true
timeout: 5m
```
**Reference:** See `references/sync-policies.md`
## Progressive Delivery
### Canary Deployment with ArgoCD Rollouts
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
name: my-app
spec:
replicas: 5
strategy:
canary:
steps:
- setWeight: 20
- pause: { duration: 1m }
- setWeight: 50
- pause: { duration: 2m }
- setWeight: 100
```
### Blue-Green Deployment
```yaml
strategy:
blueGreen:
activeService: my-app
previewService: my-app-preview
autoPromotionEnabled: false
```
## Secret Management
### External Secrets Operator
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: db-credentials
spec:
refreshInterval: 1h
secretStoreRef:
name: aws-secrets-manager
kind: SecretStore
target:
name: db-credentials
data:
- secretKey: password
remoteRef:
key: prod/db/password
```
### Sealed Secrets
```bash
# Encrypt secret
kubeseal --format yaml < secret.yaml > sealed-secret.yaml
# Commit sealed-secret.yaml to Git
```
## Best Practices
1. **Use separate repos or branches** for different environments
2. **Implement RBAC** for Git repositories
3. **Enable notifications** for sync failures
4. **Use health checks** for custom resources
5. **Implement approval gates** for production
6. **Keep secrets out of Git** (use External Secrets)
7. **Use App of Apps pattern** for organization
8. **Tag releases** for easy rollback
9. **Monitor sync status** with alerts
10. **Test changes** in staging first
## Troubleshooting
**Sync failures:**
```bash
argocd app get my-app
argocd app sync my-app --prune
```
**Out of sync status:**
```bash
argocd app diff my-app
argocd app sync my-app --force
```
## Related Skills
- `k8s-manifest-generator` - For creating manifests
- `helm-chart-scaffolding` - For packaging applications
| """
Test for 'gitops-workflow' skill — GitOps Workflow for Kubernetes
Validates that the Agent created a Flux GitOps demo with Kustomize overlays
and a verification script.
"""
import os
import subprocess
import pytest
class TestGitopsWorkflow:
"""Verify Flux GitOps demo configuration."""
REPO_DIR = "/workspace/flux2"
BASE_DIR = "hack/gitops-demo"
# ------------------------------------------------------------------
# L1: directory structure & file existence
# ------------------------------------------------------------------
def test_base_deployment_exists(self):
"""apps/base/deployment.yaml must exist."""
fpath = os.path.join(
self.REPO_DIR, self.BASE_DIR, "apps", "base", "deployment.yaml"
)
assert os.path.isfile(fpath), "base/deployment.yaml not found"
def test_base_service_exists(self):
"""apps/base/service.yaml must exist."""
fpath = os.path.join(
self.REPO_DIR, self.BASE_DIR, "apps", "base", "service.yaml"
)
assert os.path.isfile(fpath), "base/service.yaml not found"
def test_base_kustomization_exists(self):
"""apps/base/kustomization.yaml must exist."""
fpath = os.path.join(
self.REPO_DIR, self.BASE_DIR, "apps", "base", "kustomization.yaml"
)
assert os.path.isfile(fpath), "base/kustomization.yaml not found"
def test_dev_overlay_exists(self):
"""apps/overlays/dev/kustomization.yaml must exist."""
fpath = os.path.join(
self.REPO_DIR,
self.BASE_DIR,
"apps",
"overlays",
"dev",
"kustomization.yaml",
)
assert os.path.isfile(fpath), "dev overlay kustomization.yaml not found"
def test_dev_patch_exists(self):
"""apps/overlays/dev/patch-replicas.yaml must exist."""
fpath = os.path.join(
self.REPO_DIR,
self.BASE_DIR,
"apps",
"overlays",
"dev",
"patch-replicas.yaml",
)
assert os.path.isfile(fpath), "patch-replicas.yaml not found"
def test_verify_script_exists(self):
"""hack/verify-gitops-demo.sh must exist."""
fpath = os.path.join(self.REPO_DIR, "hack", "verify-gitops-demo.sh")
assert os.path.isfile(fpath), "verify-gitops-demo.sh not found"
# ------------------------------------------------------------------
# L2: YAML content validation
# ------------------------------------------------------------------
def test_deployment_has_required_fields(self):
"""base/deployment.yaml must have apiVersion, kind, metadata, spec."""
import yaml
fpath = os.path.join(
self.REPO_DIR, self.BASE_DIR, "apps", "base", "deployment.yaml"
)
with open(fpath, "r") as f:
doc = yaml.safe_load(f)
required = ["apiVersion", "kind", "metadata", "spec"]
for field in required:
assert field in doc, f"Deployment missing '{field}'"
assert (
doc["kind"] == "Deployment"
), f"Expected kind=Deployment, got {doc['kind']}"
def test_service_has_required_fields(self):
"""base/service.yaml must have apiVersion, kind, metadata, spec."""
import yaml
fpath = os.path.join(
self.REPO_DIR, self.BASE_DIR, "apps", "base", "service.yaml"
)
with open(fpath, "r") as f:
doc = yaml.safe_load(f)
assert doc.get("kind") == "Service", f"Expected kind=Service"
def test_base_kustomization_references_files(self):
"""base kustomization.yaml must reference deployment and service."""
import yaml
fpath = os.path.join(
self.REPO_DIR, self.BASE_DIR, "apps", "base", "kustomization.yaml"
)
with open(fpath, "r") as f:
doc = yaml.safe_load(f)
resources = doc.get("resources", [])
assert "deployment.yaml" in resources, "deployment.yaml not in resources"
assert "service.yaml" in resources, "service.yaml not in resources"
def test_kustomize_build_succeeds(self):
"""kustomize build on dev overlay must succeed."""
overlay_path = os.path.join(self.BASE_DIR, "apps", "overlays", "dev")
result = subprocess.run(
["kustomize", "build", overlay_path],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
assert result.returncode == 0, f"kustomize build failed:\n{result.stderr}"
def test_kustomize_output_has_deployment(self):
"""kustomize build output must contain a Deployment resource."""
overlay_path = os.path.join(self.BASE_DIR, "apps", "overlays", "dev")
result = subprocess.run(
["kustomize", "build", overlay_path],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
if result.returncode != 0:
pytest.skip(f"kustomize build failed: {result.stderr[:500]}")
assert "kind: Deployment" in result.stdout, "No Deployment in kustomize output"
def test_kustomize_output_has_service(self):
"""kustomize build output must contain a Service resource."""
overlay_path = os.path.join(self.BASE_DIR, "apps", "overlays", "dev")
result = subprocess.run(
["kustomize", "build", overlay_path],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
if result.returncode != 0:
pytest.skip(f"kustomize build failed: {result.stderr[:500]}")
assert "kind: Service" in result.stdout, "No Service in kustomize output"
def test_verify_script_runs(self):
"""verify-gitops-demo.sh must run with exit code 0."""
result = subprocess.run(
["bash", "hack/verify-gitops-demo.sh"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
assert (
result.returncode == 0
), f"Verification script failed:\n{result.stdout}\n{result.stderr}"
| https://github.com/fluxcd/flux2 | zhangyiiiiii/swe-skills-bench-golang | |
linkerd-patterns | Linkerd Service Mesh Patterns | See task file for detailed mission requirements. | feature | # Task: Add Linkerd mTLS Verification Example
## Background
Add a complete mTLS (mutual TLS) verification example for Linkerd2, demonstrating how to validate that service-to-service communication is encrypted and identities are properly verified within the mesh.
## Files to Create/Modify
- `examples/mtls-demo/deployments.yaml` - Sample client and server deployments with Linkerd annotations
- `examples/mtls-demo/service.yaml` - Kubernetes Service definitions
- `examples/mtls-demo/server-policy.yaml` - Linkerd Server and ServerAuthorization CRDs
- `examples/mtls-demo/README.md` - Setup documentation
- `bin/check-mtls-demo.sh` - Verification script
## Requirements
### Kubernetes Manifests
- Client deployment with `linkerd.io/inject: enabled` annotation
- Server deployment with `linkerd.io/inject: enabled` annotation
- Service exposing the server
### Server Policy (server-policy.yaml)
- `Server` CRD selecting the server pods
- `ServerAuthorization` CRD requiring mTLS identity
- Restrict access to only the client's ServiceAccount identity
### Verification Script (bin/check-mtls-demo.sh)
- Validate all YAML files are syntactically valid
- Check that Server and ServerAuthorization resources are present
- Verify the `linkerd.io/inject` annotation exists on both deployments
- Confirm the ServerAuthorization references a valid mTLS identity
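The annotation check can be done with a plain text scan when a YAML parser isn't available; a sketch (the sample manifest fragment below is illustrative):

```python
def count_inject_annotations(manifest_text: str) -> int:
    """Count 'linkerd.io/inject: enabled' lines in a multi-document manifest."""
    return sum(1 for line in manifest_text.splitlines()
               if line.strip() == "linkerd.io/inject: enabled")

sample = """\
metadata:
  annotations:
    linkerd.io/inject: enabled
---
metadata:
  annotations:
    linkerd.io/inject: enabled
"""
assert count_inject_annotations(sample) == 2  # one per deployment
```

A whitespace-insensitive scan like this is crude but matches what a grep-based shell check would do.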
## Acceptance Criteria
- `kubectl apply --dry-run=client -f examples/mtls-demo/` succeeds
- `bash bin/check-mtls-demo.sh` passes all checks
- Server policy enforces mTLS identity verification
| ---
name: linkerd-patterns
description: Implement Linkerd service mesh patterns for lightweight, security-focused service mesh deployments. Use when setting up Linkerd, configuring traffic policies, or implementing zero-trust networking with minimal overhead.
---
# Linkerd Patterns
Production patterns for Linkerd service mesh - the lightweight, security-first service mesh for Kubernetes.
## When to Use This Skill
- Setting up a lightweight service mesh
- Implementing automatic mTLS
- Configuring traffic splits for canary deployments
- Setting up service profiles for per-route metrics
- Implementing retries and timeouts
- Multi-cluster service mesh
## Core Concepts
### 1. Linkerd Architecture
```
┌─────────────────────────────────────────────────┐
│                  Control Plane                  │
│ ┌─────────────┐ ┌──────────┐ ┌──────────────┐   │
│ │ destination │ │ identity │ │ proxy-inject │   │
│ └─────────────┘ └──────────┘ └──────────────┘   │
└─────────────────────────────────────────────────┘
                        │
┌─────────────────────────────────────────────────┐
│                   Data Plane                    │
│     ┌─────┐      ┌─────┐      ┌─────┐           │
│     │proxy│──────│proxy│──────│proxy│           │
│     └─────┘      └─────┘      └─────┘           │
│        │            │            │              │
│     ┌──┴──┐      ┌──┴──┐      ┌──┴──┐           │
│     │ app │      │ app │      │ app │           │
│     └─────┘      └─────┘      └─────┘           │
└─────────────────────────────────────────────────┘
```
### 2. Key Resources
| Resource | Purpose |
| ----------------------- | ------------------------------------ |
| **ServiceProfile** | Per-route metrics, retries, timeouts |
| **TrafficSplit** | Canary deployments, A/B testing |
| **Server** | Define server-side policies |
| **ServerAuthorization** | Access control policies |
## Templates
### Template 1: Mesh Installation
```bash
# Install CLI
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
# Validate cluster
linkerd check --pre
# Install CRDs
linkerd install --crds | kubectl apply -f -
# Install control plane
linkerd install | kubectl apply -f -
# Verify installation
linkerd check
# Install viz extension (optional)
linkerd viz install | kubectl apply -f -
```
### Template 2: Inject Namespace
```yaml
# Automatic injection for namespace
apiVersion: v1
kind: Namespace
metadata:
name: my-app
annotations:
linkerd.io/inject: enabled
---
# Or inject specific deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
annotations:
linkerd.io/inject: enabled
spec:
template:
metadata:
annotations:
linkerd.io/inject: enabled
```
### Template 3: Service Profile with Retries
```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
name: my-service.my-namespace.svc.cluster.local
namespace: my-namespace
spec:
routes:
- name: GET /api/users
condition:
method: GET
pathRegex: /api/users
responseClasses:
- condition:
status:
min: 500
max: 599
isFailure: true
isRetryable: true
- name: POST /api/users
condition:
method: POST
pathRegex: /api/users
# POST not retryable by default
isRetryable: false
- name: GET /api/users/{id}
condition:
method: GET
pathRegex: /api/users/[^/]+
timeout: 5s
isRetryable: true
retryBudget:
retryRatio: 0.2
minRetriesPerSecond: 10
ttl: 10s
```
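The retryBudget above caps retries at roughly `retryRatio` times the recent request volume, with `minRetriesPerSecond` as a floor. A sketch of that arithmetic (a simplification of Linkerd's windowed accounting):

```python
def retry_allowance(requests_per_second: float,
                    retry_ratio: float = 0.2,
                    min_retries_per_second: float = 10.0) -> float:
    """Approximate retries/second permitted by the budget above."""
    return min_retries_per_second + retry_ratio * requests_per_second

# At 100 rps, ratio 0.2 plus the floor of 10 allows ~30 retries/second
print(retry_allowance(100))  # → 30.0
```

The floor keeps low-traffic services able to retry at all; the ratio keeps retries from amplifying load during an outage (a "retry storm").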
### Template 4: Traffic Split (Canary)
```yaml
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
name: my-service-canary
namespace: my-namespace
spec:
service: my-service
backends:
- service: my-service-stable
weight: 900m # 90%
- service: my-service-canary
weight: 100m # 10%
```
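The `m` suffix means thousandths, so 900m/100m is a 90/10 split. A sketch of weighted backend selection with those weights (illustrative only; the mesh proxy does this, not your application code):

```python
import random

def pick_backend(backends: dict, rng: random.Random) -> str:
    """Choose a backend by millis weight (900m == 900/1000 == 90%)."""
    names = list(backends)
    return rng.choices(names, weights=[backends[n] for n in names], k=1)[0]

weights = {"my-service-stable": 900, "my-service-canary": 100}
rng = random.Random(0)  # seeded for a repeatable demo
picks = [pick_backend(weights, rng) for _ in range(1000)]
stable_share = picks.count("my-service-stable") / len(picks)  # ~0.9
```

Shifting the canary forward is just editing the two weight values in Git and letting the mesh reconcile.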
### Template 5: Server Authorization Policy
```yaml
# Define the server
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
name: my-service-http
namespace: my-namespace
spec:
podSelector:
matchLabels:
app: my-service
port: http
proxyProtocol: HTTP/1
---
# Allow traffic from specific clients
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
name: allow-frontend
namespace: my-namespace
spec:
server:
name: my-service-http
client:
meshTLS:
serviceAccounts:
- name: frontend
namespace: my-namespace
---
# Allow unauthenticated traffic (e.g., from ingress)
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
name: allow-ingress
namespace: my-namespace
spec:
server:
name: my-service-http
client:
unauthenticated: true
networks:
- cidr: 10.0.0.0/8
```
### Template 6: HTTPRoute for Advanced Routing
```yaml
apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
name: my-route
namespace: my-namespace
spec:
parentRefs:
- name: my-service
kind: Service
group: core
port: 8080
rules:
- matches:
- path:
type: PathPrefix
value: /api/v2
- headers:
- name: x-api-version
value: v2
backendRefs:
- name: my-service-v2
port: 8080
- matches:
- path:
type: PathPrefix
value: /api
backendRefs:
- name: my-service-v1
port: 8080
```
### Template 7: Multi-cluster Setup
```bash
# On each cluster, install with cluster credentials
linkerd multicluster install | kubectl apply -f -
# Link clusters
linkerd multicluster link --cluster-name west \
--api-server-address https://west.example.com:6443 \
| kubectl apply -f -
# Export a service to other clusters
kubectl label svc/my-service mirror.linkerd.io/exported=true
# Verify cross-cluster connectivity
linkerd multicluster check
linkerd multicluster gateways
```
## Monitoring Commands
```bash
# Live traffic view
linkerd viz top deploy/my-app
# Per-route metrics
linkerd viz routes deploy/my-app
# Check proxy status
linkerd viz stat deploy -n my-namespace
# View service dependencies
linkerd viz edges deploy -n my-namespace
# Dashboard
linkerd viz dashboard
```
## Debugging
```bash
# Check injection status
linkerd check --proxy -n my-namespace
# View proxy logs
kubectl logs deploy/my-app -c linkerd-proxy
# Debug identity/TLS
linkerd identity -n my-namespace
# Tap traffic (live)
linkerd viz tap deploy/my-app --to deploy/my-backend
```
## Best Practices
### Do's
- **Enable mTLS everywhere** - It's automatic with Linkerd
- **Use ServiceProfiles** - Get per-route metrics and retries
- **Set retry budgets** - Prevent retry storms
- **Monitor golden metrics** - Success rate, latency, throughput
### Don'ts
- **Don't skip check** - Always run `linkerd check` after changes
- **Don't over-configure** - Linkerd defaults are sensible
- **Don't ignore ServiceProfiles** - They unlock advanced features
- **Don't forget timeouts** - Set appropriate values per route
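The ServiceProfile and retry-budget practices above can be combined in a single resource. The sketch below is illustrative: the service name, route, and budget numbers are assumptions, but the field names (`routes`, `isRetryable`, `timeout`, `retryBudget`) follow the `linkerd.io/v1alpha2` ServiceProfile schema.

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # ServiceProfiles are named after the service's FQDN
  name: my-service.my-namespace.svc.cluster.local
  namespace: my-namespace
spec:
  routes:
    - name: GET /api
      condition:
        method: GET
        pathRegex: /api.*
      isRetryable: true   # safe to retry (idempotent route)
      timeout: 300ms      # per-route timeout
  retryBudget:
    retryRatio: 0.2           # retries may add at most 20% extra load
    minRetriesPerSecond: 10   # floor so low-traffic services can still retry
    ttl: 10s
```

The `retryBudget` is what prevents retry storms: retries are capped as a ratio of live traffic rather than a fixed per-request count.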
## Resources
- [Linkerd Documentation](https://linkerd.io/2.14/overview/)
- [Service Profiles](https://linkerd.io/2.14/features/service-profiles/)
- [Authorization Policy](https://linkerd.io/2.14/features/server-policy/)
| """
Test for 'linkerd-patterns' skill — Linkerd Service Mesh Patterns
Validates that the Agent created mTLS verification examples with Server/
ServerAuthorization CRDs and proper Linkerd annotations.
"""
import os
import subprocess
import pytest
class TestLinkerdPatterns:
"""Verify Linkerd mTLS demonstration setup."""
REPO_DIR = "/workspace/linkerd2"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_deployments_yaml_exists(self):
"""examples/mtls-demo/deployments.yaml must exist."""
fpath = os.path.join(self.REPO_DIR, "examples", "mtls-demo", "deployments.yaml")
assert os.path.isfile(fpath), "deployments.yaml not found"
def test_service_yaml_exists(self):
"""examples/mtls-demo/service.yaml must exist."""
fpath = os.path.join(self.REPO_DIR, "examples", "mtls-demo", "service.yaml")
assert os.path.isfile(fpath), "service.yaml not found"
def test_server_policy_exists(self):
"""examples/mtls-demo/server-policy.yaml must exist."""
fpath = os.path.join(
self.REPO_DIR, "examples", "mtls-demo", "server-policy.yaml"
)
assert os.path.isfile(fpath), "server-policy.yaml not found"
def test_check_script_exists(self):
"""bin/check-mtls-demo.sh must exist."""
fpath = os.path.join(self.REPO_DIR, "bin", "check-mtls-demo.sh")
assert os.path.isfile(fpath), "check-mtls-demo.sh not found"
# ------------------------------------------------------------------
# L2: YAML content validation
# ------------------------------------------------------------------
def _load_all_yamls(self, relpath):
"""Load all YAML documents from a multi-doc file."""
import yaml
fpath = os.path.join(self.REPO_DIR, relpath)
with open(fpath, "r") as f:
return list(yaml.safe_load_all(f))
def test_deployments_have_linkerd_inject(self):
"""Deployments must have linkerd.io/inject: enabled annotation."""
docs = self._load_all_yamls("examples/mtls-demo/deployments.yaml")
inject_count = 0
for doc in docs:
if doc and doc.get("kind") == "Deployment":
annotations = (
doc.get("spec", {})
.get("template", {})
.get("metadata", {})
.get("annotations", {})
)
meta_annotations = doc.get("metadata", {}).get("annotations", {})
all_annotations = {**meta_annotations, **annotations}
if "linkerd.io/inject" in all_annotations:
inject_count += 1
assert (
inject_count >= 2
), f"Expected >= 2 deployments with linkerd.io/inject, found {inject_count}"
def test_server_crd_defined(self):
"""server-policy.yaml must define a Server CRD."""
docs = self._load_all_yamls("examples/mtls-demo/server-policy.yaml")
server_found = any(d and d.get("kind") == "Server" for d in docs)
assert server_found, "No Server CRD found in server-policy.yaml"
def test_server_authorization_defined(self):
"""server-policy.yaml must define a ServerAuthorization CRD."""
docs = self._load_all_yamls("examples/mtls-demo/server-policy.yaml")
auth_found = any(d and d.get("kind") == "ServerAuthorization" for d in docs)
assert auth_found, "No ServerAuthorization CRD found"
def test_server_auth_requires_mtls(self):
"""ServerAuthorization must require mTLS identity."""
docs = self._load_all_yamls("examples/mtls-demo/server-policy.yaml")
for doc in docs:
if doc and doc.get("kind") == "ServerAuthorization":
content = str(doc)
mtls_patterns = [
"meshTLS",
"identities",
"serviceAccount",
"authenticated",
]

found = any(p in content for p in mtls_patterns)
if found:
return
pytest.fail("ServerAuthorization does not reference mTLS identity")
def test_service_yaml_valid(self):
"""service.yaml must define a valid Service resource."""
docs = self._load_all_yamls("examples/mtls-demo/service.yaml")
svc_found = any(d and d.get("kind") == "Service" for d in docs)
assert svc_found, "No Service resource found in service.yaml"
def test_check_script_runs(self):
"""check-mtls-demo.sh must run with exit code 0."""
result = subprocess.run(
["bash", "bin/check-mtls-demo.sh"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
assert (
result.returncode == 0
), f"Check script failed:\n{result.stdout}\n{result.stderr}"
def test_readme_exists(self):
"""examples/mtls-demo/README.md must exist."""
fpath = os.path.join(self.REPO_DIR, "examples", "mtls-demo", "README.md")
assert os.path.isfile(fpath), "README.md not found"
| https://github.com/linkerd/linkerd2 | zhangyiiiiii/swe-skills-bench-golang | |
changelog-automation | Changelog Automation | See task file for detailed mission requirements. | feature | # Task: Add GitHub Changelog Generator Configuration Example
## Background
Add a complete changelog generation
configuration example demonstrating the github-changelog-generator
tool's capabilities and customization options.
## Files to Create/Modify
- examples/advanced_config/.github_changelog_generator (configuration)
- examples/advanced_config/CHANGELOG.md (sample output)
- examples/advanced_config/README.md (documentation)
## Requirements
Configuration File (.github_changelog_generator):
- user and project settings
- since_tag and due_tag options
- issue/PR label filtering
- Section customization (enhancement, bug, etc.)
Label Configuration:
- enhancement-labels: ["enhancement", "feature"]
- bug-labels: ["bug", "fix"]
- breaking-labels: ["breaking-change"]
- exclude-labels: ["duplicate", "wontfix"]
Output Formatting:
- Custom header template
- Date format configuration
- Compare URL inclusion
- Unreleased section handling
Configuration Options to Demonstrate:
- unreleased: true/false
- base: HISTORY.md (optional base file)
- header: Custom header text
- include_labels: Label filtering
- breaking_prefix: "**Breaking Changes:**"
## Acceptance Criteria
- Configuration file is valid and parseable
- README explains each configuration option
- `github_changelog_generator --config examples/advanced_config/.github_changelog_generator --help` validates
| ---
name: changelog-automation
description: Automate changelog generation from commits, PRs, and releases following Keep a Changelog format. Use when setting up release workflows, generating release notes, or standardizing commit conventions.
---
# Changelog Automation
Patterns and tools for automating changelog generation, release notes, and version management following industry standards.
## When to Use This Skill
- Setting up automated changelog generation
- Implementing Conventional Commits
- Creating release note workflows
- Standardizing commit message formats
- Generating GitHub/GitLab release notes
- Managing semantic versioning
## Core Concepts
### 1. Keep a Changelog Format
```markdown
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Added
- New feature X
## [1.2.0] - 2024-01-15
### Added
- User profile avatars
- Dark mode support
### Changed
- Improved loading performance by 40%
### Deprecated
- Old authentication API (use v2)
### Removed
- Legacy payment gateway
### Fixed
- Login timeout issue (#123)
### Security
- Updated dependencies for CVE-2024-1234
[Unreleased]: https://github.com/user/repo/compare/v1.2.0...HEAD
[1.2.0]: https://github.com/user/repo/compare/v1.1.0...v1.2.0
```
### 2. Conventional Commits
```
<type>[optional scope]: <description>
[optional body]
[optional footer(s)]
```
| Type | Description | Changelog Section |
| ---------- | ---------------- | ------------------ |
| `feat` | New feature | Added |
| `fix` | Bug fix | Fixed |
| `docs` | Documentation | (usually excluded) |
| `style` | Formatting | (usually excluded) |
| `refactor` | Code restructure | Changed |
| `perf` | Performance | Changed |
| `test` | Tests | (usually excluded) |
| `chore` | Maintenance | (usually excluded) |
| `ci` | CI changes | (usually excluded) |
| `build` | Build system | (usually excluded) |
| `revert` | Revert commit | Removed |
### 3. Semantic Versioning
```
MAJOR.MINOR.PATCH
MAJOR: Breaking changes (feat! or BREAKING CHANGE)
MINOR: New features (feat)
PATCH: Bug fixes (fix)
```
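Putting the two tables together, the bump for a release can be derived mechanically from the commit subjects. The helper below is a minimal sketch (the function names are mine, not from any tool): it classifies each Conventional Commit line and applies the feat→MINOR, fix→PATCH, `!`/`BREAKING CHANGE`→MAJOR rules above.

```python
import re

# Conventional Commit subject: type, optional (scope), optional "!", description.
COMMIT_RE = re.compile(r"^(?P<type>\w+)(\((?P<scope>[^)]+)\))?(?P<bang>!)?: (?P<desc>.+)$")

def bump_for(messages):
    """Return 'major', 'minor', 'patch', or None for a list of commit messages."""
    order = {None: 0, "patch": 1, "minor": 2, "major": 3}
    bump = None
    for msg in messages:
        m = COMMIT_RE.match(msg.splitlines()[0])
        if not m:
            continue  # unconventional commits are ignored
        if m.group("bang") or "BREAKING CHANGE" in msg:
            candidate = "major"
        elif m.group("type") == "feat":
            candidate = "minor"
        elif m.group("type") == "fix":
            candidate = "patch"
        else:
            candidate = None  # docs, chore, etc. do not trigger a release
        if order[candidate] > order[bump]:
            bump = candidate
    return bump

def next_version(version, bump):
    """Apply a semver bump to a 'MAJOR.MINOR.PATCH' string."""
    major, minor, patch = map(int, version.split("."))
    if bump == "major":
        return f"{major + 1}.0.0"
    if bump == "minor":
        return f"{major}.{minor + 1}.0"
    if bump == "patch":
        return f"{major}.{minor}.{patch + 1}"
    return version

print(next_version("1.2.3", bump_for(["fix(db): retry timeouts", "feat: dark mode"])))  # → 1.3.0
```

This is essentially what `standard-version`, `semantic-release`, and `commitizen` do internally before writing the changelog.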
## Implementation
### Method 1: Conventional Changelog (Node.js)
```bash
# Install tools
npm install -D @commitlint/cli @commitlint/config-conventional
npm install -D husky
npm install -D standard-version
# or
npm install -D semantic-release
# Setup commitlint
cat > commitlint.config.js << 'EOF'
module.exports = {
extends: ['@commitlint/config-conventional'],
rules: {
'type-enum': [
2,
'always',
[
'feat',
'fix',
'docs',
'style',
'refactor',
'perf',
'test',
'chore',
'ci',
'build',
'revert',
],
],
'subject-case': [2, 'never', ['start-case', 'pascal-case', 'upper-case']],
'subject-max-length': [2, 'always', 72],
},
};
EOF
# Setup husky
npx husky init
echo "npx --no -- commitlint --edit \$1" > .husky/commit-msg
```
### Method 2: standard-version Configuration
```javascript
// .versionrc.js
module.exports = {
types: [
{ type: "feat", section: "Features" },
{ type: "fix", section: "Bug Fixes" },
{ type: "perf", section: "Performance Improvements" },
{ type: "revert", section: "Reverts" },
{ type: "docs", section: "Documentation", hidden: true },
{ type: "style", section: "Styles", hidden: true },
{ type: "chore", section: "Miscellaneous", hidden: true },
{ type: "refactor", section: "Code Refactoring", hidden: true },
{ type: "test", section: "Tests", hidden: true },
{ type: "build", section: "Build System", hidden: true },
{ type: "ci", section: "CI/CD", hidden: true },
],
commitUrlFormat: "{{host}}/{{owner}}/{{repository}}/commit/{{hash}}",
compareUrlFormat:
"{{host}}/{{owner}}/{{repository}}/compare/{{previousTag}}...{{currentTag}}",
issueUrlFormat: "{{host}}/{{owner}}/{{repository}}/issues/{{id}}",
userUrlFormat: "{{host}}/{{user}}",
releaseCommitMessageFormat: "chore(release): {{currentTag}}",
scripts: {
prebump: 'echo "Running prebump"',
postbump: 'echo "Running postbump"',
prechangelog: 'echo "Running prechangelog"',
postchangelog: 'echo "Running postchangelog"',
},
};
```
```json
// package.json scripts
{
"scripts": {
"release": "standard-version",
"release:minor": "standard-version --release-as minor",
"release:major": "standard-version --release-as major",
"release:patch": "standard-version --release-as patch",
"release:dry": "standard-version --dry-run"
}
}
```
### Method 3: semantic-release (Full Automation)
```javascript
// release.config.js
module.exports = {
branches: [
"main",
{ name: "beta", prerelease: true },
{ name: "alpha", prerelease: true },
],
plugins: [
"@semantic-release/commit-analyzer",
"@semantic-release/release-notes-generator",
[
"@semantic-release/changelog",
{
changelogFile: "CHANGELOG.md",
},
],
[
"@semantic-release/npm",
{
npmPublish: true,
},
],
[
"@semantic-release/github",
{
assets: ["dist/**/*.js", "dist/**/*.css"],
},
],
[
"@semantic-release/git",
{
assets: ["CHANGELOG.md", "package.json"],
message:
"chore(release): ${nextRelease.version} [skip ci]\n\n${nextRelease.notes}",
},
],
],
};
```
### Method 4: GitHub Actions Workflow
```yaml
# .github/workflows/release.yml
name: Release
on:
push:
branches: [main]
workflow_dispatch:
inputs:
release_type:
description: "Release type"
required: true
default: "patch"
type: choice
options:
- patch
- minor
- major
permissions:
contents: write
pull-requests: write
jobs:
release:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- uses: actions/setup-node@v4
with:
node-version: "20"
cache: "npm"
- run: npm ci
- name: Configure Git
run: |
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
- name: Run semantic-release
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
run: npx semantic-release
# Alternative: manual release with standard-version
manual-release:
if: github.event_name == 'workflow_dispatch'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: actions/setup-node@v4
with:
node-version: "20"
- run: npm ci
- name: Configure Git
run: |
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
- name: Bump version and generate changelog
run: npx standard-version --release-as ${{ inputs.release_type }}
- name: Capture release tag
id: version
run: echo "tag=$(git describe --tags --abbrev=0)" >> "$GITHUB_OUTPUT"
- name: Push changes
run: git push --follow-tags origin main
- name: Create GitHub Release
uses: softprops/action-gh-release@v1
with:
tag_name: ${{ steps.version.outputs.tag }}
generate_release_notes: true
```
### Method 5: git-cliff (Rust-based, Fast)
```toml
# cliff.toml
[changelog]
header = """
# Changelog
All notable changes to this project will be documented in this file.
"""
body = """
{% if version %}\
## [{{ version | trim_start_matches(pat="v") }}] - {{ timestamp | date(format="%Y-%m-%d") }}
{% else %}\
## [Unreleased]
{% endif %}\
{% for group, commits in commits | group_by(attribute="group") %}
### {{ group | upper_first }}
{% for commit in commits %}
- {% if commit.scope %}**{{ commit.scope }}:** {% endif %}\
{{ commit.message | upper_first }}\
{% if commit.github.pr_number %} ([#{{ commit.github.pr_number }}](https://github.com/owner/repo/pull/{{ commit.github.pr_number }})){% endif %}\
{% endfor %}
{% endfor %}
"""
footer = """
{% for release in releases -%}
{% if release.version -%}
{% if release.previous.version -%}
[{{ release.version | trim_start_matches(pat="v") }}]: \
https://github.com/owner/repo/compare/{{ release.previous.version }}...{{ release.version }}
{% endif -%}
{% else -%}
[unreleased]: https://github.com/owner/repo/compare/{{ release.previous.version }}...HEAD
{% endif -%}
{% endfor %}
"""
trim = true
[git]
conventional_commits = true
filter_unconventional = true
split_commits = false
commit_parsers = [
{ message = "^feat", group = "Features" },
{ message = "^fix", group = "Bug Fixes" },
{ message = "^doc", group = "Documentation" },
{ message = "^perf", group = "Performance" },
{ message = "^refactor", group = "Refactoring" },
{ message = "^style", group = "Styling" },
{ message = "^test", group = "Testing" },
{ message = "^chore\\(release\\)", skip = true },
{ message = "^chore", group = "Miscellaneous" },
]
filter_commits = false
tag_pattern = "v[0-9]*"
skip_tags = ""
ignore_tags = ""
topo_order = false
sort_commits = "oldest"
[github]
owner = "owner"
repo = "repo"
```
```bash
# Generate changelog
git cliff -o CHANGELOG.md
# Generate for specific range
git cliff v1.0.0..v2.0.0 -o RELEASE_NOTES.md
# Preview without writing
git cliff --unreleased --dry-run
```
### Method 6: Python (commitizen)
```toml
# pyproject.toml
[tool.commitizen]
name = "cz_conventional_commits"
version = "1.0.0"
version_files = [
"pyproject.toml:version",
"src/__init__.py:__version__",
]
tag_format = "v$version"
update_changelog_on_bump = true
changelog_incremental = true
changelog_start_rev = "v0.1.0"
[tool.commitizen.customize]
message_template = "{{change_type}}{% if scope %}({{scope}}){% endif %}: {{message}}"
schema = "<type>(<scope>): <subject>"
schema_pattern = "^(feat|fix|docs|style|refactor|perf|test|chore)(\\(\\w+\\))?:\\s.*"
bump_pattern = "^(feat|fix|perf|refactor)"
bump_map = {"feat" = "MINOR", "fix" = "PATCH", "perf" = "PATCH", "refactor" = "PATCH"}
```
```bash
# Install
pip install commitizen
# Create commit interactively
cz commit
# Bump version and update changelog
cz bump --changelog
# Check commits
cz check --rev-range HEAD~5..HEAD
```
## Release Notes Templates
### GitHub Release Template
```markdown
## What's Changed
### 🚀 Features
{{ range .Features }}
- {{ .Title }} by @{{ .Author }} in #{{ .PR }}
{{ end }}
### 🐛 Bug Fixes
{{ range .Fixes }}
- {{ .Title }} by @{{ .Author }} in #{{ .PR }}
{{ end }}
### 📚 Documentation
{{ range .Docs }}
- {{ .Title }} by @{{ .Author }} in #{{ .PR }}
{{ end }}
### 🔧 Maintenance
{{ range .Chores }}
- {{ .Title }} by @{{ .Author }} in #{{ .PR }}
{{ end }}
## New Contributors
{{ range .NewContributors }}
- @{{ .Username }} made their first contribution in #{{ .PR }}
{{ end }}
**Full Changelog**: https://github.com/owner/repo/compare/v{{ .Previous }}...v{{ .Current }}
```
### Internal Release Notes
```markdown
# Release v2.1.0 - January 15, 2024
## Summary
This release introduces dark mode support and improves checkout performance
by 40%. It also includes important security updates.
## Highlights
### 🌙 Dark Mode
Users can now switch to dark mode from settings. The preference is
automatically saved and synced across devices.
### ⚡ Performance
- Checkout flow is 40% faster
- Reduced bundle size by 15%
## Breaking Changes
None in this release.
## Upgrade Guide
No special steps required. Standard deployment process applies.
## Known Issues
- Dark mode may flicker on initial load (fix scheduled for v2.1.1)
## Dependencies Updated
| Package | From | To | Reason |
| ------- | ------- | ------- | ------------------------ |
| react | 18.2.0 | 18.3.0 | Performance improvements |
| lodash | 4.17.20 | 4.17.21 | Security patch |
```
## Commit Message Examples
```bash
# Feature with scope
feat(auth): add OAuth2 support for Google login
# Bug fix with issue reference
fix(checkout): resolve race condition in payment processing
Closes #123
# Breaking change
feat(api)!: change user endpoint response format
BREAKING CHANGE: The user endpoint now returns `userId` instead of `id`.
Migration guide: Update all API consumers to use the new field name.
# Multiple paragraphs
fix(database): handle connection timeouts gracefully
Previously, connection timeouts would cause the entire request to fail
without retry. This change implements exponential backoff with up to
3 retries before failing.
The timeout threshold has been increased from 5s to 10s based on p99
latency analysis.
Fixes #456
Reviewed-by: @alice
```
## Best Practices
### Do's
- **Follow Conventional Commits** - Enables automation
- **Write clear messages** - Future you will thank you
- **Reference issues** - Link commits to tickets
- **Use scopes consistently** - Define team conventions
- **Automate releases** - Reduce manual errors
### Don'ts
- **Don't mix changes** - One logical change per commit
- **Don't skip validation** - Use commitlint
- **Don't edit generated changelogs by hand** - Let the tooling own them
- **Don't forget breaking changes** - Mark with `!` or footer
- **Don't ignore CI** - Validate commits in pipeline
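The "validate commits in pipeline" practice can be wired up with a small workflow. This is a hedged sketch (file name and PR event wiring are assumptions) that runs commitlint over every commit in a pull request:

```yaml
# .github/workflows/commitlint.yml
name: Lint Commits
on: [pull_request]
jobs:
  commitlint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # need full history to lint a commit range
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
      - run: npm install -D @commitlint/cli @commitlint/config-conventional
      - name: Validate PR commits
        run: >
          npx commitlint
          --from ${{ github.event.pull_request.base.sha }}
          --to ${{ github.event.pull_request.head.sha }}
          --verbose
```

Combined with the husky `commit-msg` hook from Method 1, this catches unconventional commits both locally and in CI.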
## Resources
- [Keep a Changelog](https://keepachangelog.com/)
- [Conventional Commits](https://www.conventionalcommits.org/)
- [Semantic Versioning](https://semver.org/)
- [semantic-release](https://semantic-release.gitbook.io/)
- [git-cliff](https://git-cliff.org/)
| """
Test for 'changelog-automation' skill — Changelog Generation Automation
Validates that the Agent configured automatic changelog generation from
git commit history using github-changelog-generator conventions.
"""
import os
import subprocess
import pytest
class TestChangelogAutomation:
"""Verify changelog automation setup."""
REPO_DIR = "/workspace/github-changelog-generator"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_config_file_exists(self):
"""A changelog configuration file must exist."""
config_names = [
".github_changelog_generator",
".changelog.yml",
".changelog.yaml",
"changelog.config.js",
".chglog/config.yml",
]
found = False
for name in config_names:
if os.path.isfile(os.path.join(self.REPO_DIR, name)):
found = True
break
if not found:
# Search recursively
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if "changelog" in f.lower() and (
f.endswith((".yml", ".yaml", ".json", ".rb", ".js"))
):
found = True
break
if found:
break
assert found, "No changelog configuration file found"
def test_template_exists(self):
"""A changelog template or script must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if "changelog" in f.lower() and (
f.endswith((".md", ".erb", ".mustache", ".hbs", ".tpl"))
or "template" in f.lower()
):
found.append(os.path.join(root, f))
assert len(found) >= 1, "No changelog template found"
# ------------------------------------------------------------------
# L2: content validation
# ------------------------------------------------------------------
def test_config_has_sections(self):
"""Config must define change categories (added, fixed, etc.)."""
config_content = ""
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if "changelog" in f.lower() and f.endswith(
(".yml", ".yaml", ".json", ".rb")
):
fpath = os.path.join(root, f)
with open(fpath, "r", errors="ignore") as fh:
config_content += fh.read() + "\n"
categories = [
"added",
"changed",
"deprecated",
"removed",
"fixed",
"security",
"bug",
"feature",
"enhancement",
"breaking",
]
found = sum(1 for c in categories if c in config_content.lower())
assert found >= 3, f"Only {found} changelog categories found, need >= 3"
def test_git_integration(self):
"""Config must reference git-based changelog generation."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if "changelog" in f.lower():
fpath = os.path.join(root, f)
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
git_patterns = [
"git",
"commit",
"tag",
"merge",
"pull_request",
"issue",
"label",
]
if any(p in content.lower() for p in git_patterns):
found = True
break
if found:
break
assert found, "No git integration in changelog config"
def test_output_format(self):
"""Changelog must be in Markdown format."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.upper() == "CHANGELOG.MD" or (
"changelog" in f.lower() and f.endswith(".md")
):
found = True
break
if found:
break
if not found:
# Check if config references markdown output
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if "changelog" in f.lower():
fpath = os.path.join(root, f)
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
if "markdown" in content.lower() or ".md" in content:
found = True
break
if found:
break
assert found, "No Markdown changelog output found"
def test_version_extraction(self):
"""Config or script must handle version extraction from tags."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if "changelog" in f.lower():
fpath = os.path.join(root, f)
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
version_markers = [
"version",
"tag",
"semver",
"release",
"since_tag",
"future_release",
]
if any(m in content.lower() for m in version_markers):
found = True
break
if found:
break
assert found, "No version extraction mechanism found"
def test_label_mapping(self):
"""Config should map PR labels to changelog sections."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if "changelog" in f.lower() and f.endswith(
(".yml", ".yaml", ".json", ".rb")
):
fpath = os.path.join(root, f)
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
label_patterns = [
"label",
"section",
"category",
"enhancement_label",
"bug_label",
]
if any(p in content.lower() for p in label_patterns):
found = True
break
if found:
break
assert found, "No label-to-section mapping found"
def test_ci_integration(self):
"""GitHub Actions or CI workflow for changelog generation."""
found = False
ci_dirs = [
os.path.join(self.REPO_DIR, ".github", "workflows"),
os.path.join(self.REPO_DIR, ".circleci"),
os.path.join(self.REPO_DIR, ".travis.yml"),
]
for ci_dir in ci_dirs:
if os.path.isdir(ci_dir):
for f in os.listdir(ci_dir):
fpath = os.path.join(ci_dir, f)
if os.path.isfile(fpath):
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
if "changelog" in content.lower():
found = True
break
if found:
break
# Also check Rakefile or Gemfile
for fname in ["Rakefile", "Gemfile"]:
fpath = os.path.join(self.REPO_DIR, fname)
if os.path.isfile(fpath):
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
if "changelog" in content.lower():
found = True
break
assert found, "No CI integration for changelog"
def test_exclusion_patterns(self):
"""Config should define exclusion patterns."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if "changelog" in f.lower() and f.endswith(
(".yml", ".yaml", ".json", ".rb")
):
fpath = os.path.join(root, f)
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
exclude_patterns = [
"exclude",
"ignore",
"filter",
"skip",
"exclude_labels",
]
if any(p in content.lower() for p in exclude_patterns):
found = True
break
if found:
break
assert found, "No exclusion patterns found in config"
def test_at_least_3_config_options(self):
"""Config must have at least 3 meaningful settings."""
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if "changelog" in f.lower() and f.endswith(
(".yml", ".yaml", ".json", ".rb")
):
fpath = os.path.join(root, f)
with open(fpath, "r", errors="ignore") as fh:
lines = [
l.strip()
for l in fh.readlines()
if l.strip() and not l.strip().startswith("#")
]
if len(lines) >= 3:
return
pytest.fail("Config has fewer than 3 meaningful settings")
| https://github.com/github-changelog-generator/github-changelog-generator | zhangyiiiiii/swe-skills-bench-ruby | |
k8s-manifest-generator | Kubernetes Manifest Generator | See task file for detailed mission requirements. | feature | # Task: Create Multi-Environment Kustomize Overlay Example
## Background
Add a comprehensive
multi-environment overlay example to the Kustomize repository demonstrating
production-ready configuration management patterns.
## Files to Create/Modify
- examples/multi-env/base/deployment.yaml (base deployment)
- examples/multi-env/base/service.yaml (base service)
- examples/multi-env/base/kustomization.yaml (base kustomization)
- examples/multi-env/overlays/dev/kustomization.yaml (dev overlay)
- examples/multi-env/overlays/staging/kustomization.yaml (staging overlay)
- examples/multi-env/overlays/production/kustomization.yaml (prod overlay)
## Requirements
Base Configuration:
- Deployment with configurable replicas
- Service exposing the deployment
- ConfigMap for application settings
- Resource limits and requests
Overlay Features:
- Dev: single replica, debug logging, no resource limits
- Staging: 2 replicas, info logging, moderate resources
- Production: 3 replicas, warning logging, high resources, HPA
Kustomize Patterns to Demonstrate:
- patches (strategic merge and JSON)
- configMapGenerator
- commonLabels and commonAnnotations
- namespace transformation
- images transformer
Validation Commands:
- `kustomize build examples/multi-env/overlays/dev`
- `kustomize build examples/multi-env/overlays/production`
## Acceptance Criteria
- `kustomize build examples/multi-env/overlays/production` exits with code 0
- Generated manifests include all required resources
- Each environment has appropriate configuration differences
| ---
name: k8s-manifest-generator
description: Create production-ready Kubernetes manifests for Deployments, Services, ConfigMaps, and Secrets following best practices and security standards. Use when generating Kubernetes YAML manifests, creating K8s resources, or implementing production-grade Kubernetes configurations.
---
# Kubernetes Manifest Generator
Step-by-step guidance for creating production-ready Kubernetes manifests including Deployments, Services, ConfigMaps, Secrets, and PersistentVolumeClaims.
## Purpose
This skill provides comprehensive guidance for generating well-structured, secure, and production-ready Kubernetes manifests following cloud-native best practices and Kubernetes conventions.
## When to Use This Skill
Use this skill when you need to:
- Create new Kubernetes Deployment manifests
- Define Service resources for network connectivity
- Generate ConfigMap and Secret resources for configuration management
- Create PersistentVolumeClaim manifests for stateful workloads
- Follow Kubernetes best practices and naming conventions
- Implement resource limits, health checks, and security contexts
- Design manifests for multi-environment deployments
## Step-by-Step Workflow
### 1. Gather Requirements
**Understand the workload:**
- Application type (stateless/stateful)
- Container image and version
- Environment variables and configuration needs
- Storage requirements
- Network exposure requirements (internal/external)
- Resource requirements (CPU, memory)
- Scaling requirements
- Health check endpoints
**Questions to ask:**
- What is the application name and purpose?
- What container image and tag will be used?
- Does the application need persistent storage?
- What ports does the application expose?
- Are there any secrets or configuration files needed?
- What are the CPU and memory requirements?
- Does the application need to be exposed externally?
### 2. Create Deployment Manifest
**Follow this structure:**
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: <app-name>
namespace: <namespace>
labels:
app: <app-name>
version: <version>
spec:
replicas: 3
selector:
matchLabels:
app: <app-name>
template:
metadata:
labels:
app: <app-name>
version: <version>
spec:
containers:
- name: <container-name>
image: <image>:<tag>
ports:
- containerPort: <port>
name: http
resources:
requests:
memory: "256Mi"
cpu: "250m"
limits:
memory: "512Mi"
cpu: "500m"
livenessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /ready
port: http
initialDelaySeconds: 5
periodSeconds: 5
env:
- name: ENV_VAR
value: "value"
envFrom:
- configMapRef:
name: <app-name>-config
- secretRef:
name: <app-name>-secret
```
**Best practices to apply:**
- Always set resource requests and limits
- Implement both liveness and readiness probes
- Use specific image tags (never `:latest`)
- Apply security context for non-root users
- Use labels for organization and selection
- Set appropriate replica count based on availability needs
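The best practices above are mechanically checkable once the manifest is parsed. The linter below is a minimal illustrative sketch (the function name and rule set are mine); the field paths follow the `apps/v1` Deployment schema, and the input dict is what `yaml.safe_load` would return for a manifest.

```python
# Hypothetical lint helper for a parsed Deployment dict.
def lint_deployment(doc):
    """Return a list of best-practice violations for a Deployment."""
    problems = []
    for c in doc["spec"]["template"]["spec"]["containers"]:
        image = c.get("image", "")
        # Never use ":latest" or an untagged image.
        if ":" not in image or image.endswith(":latest"):
            problems.append(f"{c['name']}: pin a specific image tag")
        # Always set resource limits (requests checked analogously).
        if "limits" not in c.get("resources", {}):
            problems.append(f"{c['name']}: set resource limits")
        # Both probes should be present.
        for probe in ("livenessProbe", "readinessProbe"):
            if probe not in c:
                problems.append(f"{c['name']}: add a {probe}")
    return problems

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "spec": {"template": {"spec": {"containers": [
        {"name": "app", "image": "my-app:latest"}
    ]}}},
}
print(lint_deployment(deployment))
```

Running this against the bad example flags all four issues; a manifest following the template above returns an empty list.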
**Reference:** See `references/deployment-spec.md` for detailed deployment options
### 3. Create Service Manifest
**Choose the appropriate Service type:**
**ClusterIP (internal only):**
```yaml
apiVersion: v1
kind: Service
metadata:
name: <app-name>
namespace: <namespace>
labels:
app: <app-name>
spec:
type: ClusterIP
selector:
app: <app-name>
ports:
- name: http
port: 80
targetPort: 8080
protocol: TCP
```
**LoadBalancer (external access):**
```yaml
apiVersion: v1
kind: Service
metadata:
name: <app-name>
namespace: <namespace>
labels:
app: <app-name>
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
type: LoadBalancer
selector:
app: <app-name>
ports:
- name: http
port: 80
targetPort: 8080
protocol: TCP
```
**Reference:** See `references/service-spec.md` for service types and networking
### 4. Create ConfigMap
**For application configuration:**
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: <app-name>-config
namespace: <namespace>
data:
APP_MODE: production
LOG_LEVEL: info
DATABASE_HOST: db.example.com
# For config files
app.properties: |
server.port=8080
server.host=0.0.0.0
logging.level=INFO
```
**Best practices:**
- Use ConfigMaps for non-sensitive data only
- Organize related configuration together
- Use meaningful names for keys
- Consider using one ConfigMap per component
- Version ConfigMaps when making changes
**Reference:** See `assets/configmap-template.yaml` for examples
### 5. Create Secret
**For sensitive data:**
```yaml
apiVersion: v1
kind: Secret
metadata:
name: <app-name>-secret
namespace: <namespace>
type: Opaque
stringData:
DATABASE_PASSWORD: "changeme"
API_KEY: "secret-api-key"
# For certificate files
tls.crt: |
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
tls.key: |
-----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----
```
**Security considerations:**
- Never commit secrets to Git in plain text
- Use Sealed Secrets, External Secrets Operator, or Vault
- Rotate secrets regularly
- Use RBAC to limit secret access
- Consider using Secret type: `kubernetes.io/tls` for TLS secrets
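The `stringData` field above is a write-only convenience: on admission, the API server base64-encodes each value into the canonical `data` field. A minimal sketch of that encoding (the field semantics are from the Secret spec; the helper name is illustrative):

```python
import base64

def encode_secret_data(string_data):
    """Mimic how the API server turns `stringData` into base64 `data`."""
    return {k: base64.b64encode(v.encode("utf-8")).decode("ascii")
            for k, v in string_data.items()}

encoded = encode_secret_data({"DATABASE_PASSWORD": "changeme"})
print(encoded["DATABASE_PASSWORD"])  # Y2hhbmdlbWU=
```

This is also why `kubectl get secret -o yaml` shows encoded values: base64 is an encoding, not encryption, which is why the secret-management tooling above still matters.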
### 6. Create PersistentVolumeClaim (if needed)
**For stateful applications:**
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: <app-name>-data
namespace: <namespace>
spec:
accessModes:
- ReadWriteOnce
storageClassName: gp3
resources:
requests:
storage: 10Gi
```
**Mount in Deployment:**
```yaml
spec:
template:
spec:
containers:
- name: app
volumeMounts:
- name: data
mountPath: /var/lib/app
volumes:
- name: data
persistentVolumeClaim:
claimName: <app-name>-data
```
**Storage considerations:**
- Choose appropriate StorageClass for performance needs
- Use ReadWriteOnce for single-pod access
- Use ReadWriteMany for multi-pod shared storage
- Consider backup strategies
- Set appropriate retention policies
### 7. Apply Security Best Practices
**Add security context to Deployment:**
```yaml
spec:
template:
spec:
securityContext:
runAsNonRoot: true
runAsUser: 1000
fsGroup: 1000
seccompProfile:
type: RuntimeDefault
containers:
- name: app
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- ALL
```
**Security checklist:**
- [ ] Run as non-root user
- [ ] Drop all capabilities
- [ ] Use read-only root filesystem
- [ ] Disable privilege escalation
- [ ] Set seccomp profile
- [ ] Use Pod Security Standards
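The checklist above can be pre-checked mechanically before review. A minimal sketch (the field names come from the Pod spec; the function and its exact findings are illustrative, not a substitute for Pod Security admission):

```python
import yaml

def security_findings(pod_spec):
    """Return a list of hardening settings missing from a pod spec dict."""
    findings = []
    if not pod_spec.get("securityContext", {}).get("runAsNonRoot"):
        findings.append("pod: runAsNonRoot not set")
    for c in pod_spec.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("allowPrivilegeEscalation") is not False:
            findings.append(f"{c['name']}: allowPrivilegeEscalation not disabled")
        if "ALL" not in sc.get("capabilities", {}).get("drop", []):
            findings.append(f"{c['name']}: capabilities not dropped")
    return findings

spec = yaml.safe_load("""
securityContext:
  runAsNonRoot: true
containers:
- name: app
  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      drop: [ALL]
""")
print(security_findings(spec))  # []
```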
### 8. Add Labels and Annotations
**Standard labels (recommended):**
```yaml
metadata:
labels:
app.kubernetes.io/name: <app-name>
app.kubernetes.io/instance: <instance-name>
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/component: backend
app.kubernetes.io/part-of: <system-name>
app.kubernetes.io/managed-by: kubectl
```
**Useful annotations:**
```yaml
metadata:
annotations:
description: "Application description"
contact: "team@example.com"
prometheus.io/scrape: "true"
prometheus.io/port: "9090"
prometheus.io/path: "/metrics"
```
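A small helper can stamp the recommended label set consistently across manifests. A sketch, assuming only the standard `app.kubernetes.io/*` keys (the function itself is illustrative):

```python
def standard_labels(name, version, component, part_of,
                    instance=None, managed_by="kubectl"):
    """Build the recommended app.kubernetes.io/* label set."""
    return {
        "app.kubernetes.io/name": name,
        "app.kubernetes.io/instance": instance or name,
        "app.kubernetes.io/version": version,
        "app.kubernetes.io/component": component,
        "app.kubernetes.io/part-of": part_of,
        "app.kubernetes.io/managed-by": managed_by,
    }

labels = standard_labels("billing-api", "1.0.0", "backend", "billing")
print(labels["app.kubernetes.io/instance"])  # billing-api
```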
### 9. Organize Multi-Resource Manifests
**File organization options:**
**Option 1: Single file with `---` separator**
```yaml
# app-name.yaml
---
apiVersion: v1
kind: ConfigMap
...
---
apiVersion: v1
kind: Secret
...
---
apiVersion: apps/v1
kind: Deployment
...
---
apiVersion: v1
kind: Service
...
```
**Option 2: Separate files**
```
manifests/
├── configmap.yaml
├── secret.yaml
├── deployment.yaml
├── service.yaml
└── pvc.yaml
```
**Option 3: Kustomize structure**
```
base/
├── kustomization.yaml
├── deployment.yaml
├── service.yaml
└── configmap.yaml
overlays/
├── dev/
│ └── kustomization.yaml
└── prod/
└── kustomization.yaml
```
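Option 1 relies on the `---` separator producing multiple YAML documents in one file; standard YAML parsers split these directly, which is useful for scripting over a combined manifest. A sketch using PyYAML's multi-document loader:

```python
import yaml

manifest = """\
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
"""

# safe_load_all yields one dict per document; filter out empty documents
docs = [d for d in yaml.safe_load_all(manifest) if d]
print([d["kind"] for d in docs])  # ['ConfigMap', 'Deployment']
```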
### 10. Validate and Test
**Validation steps:**
```bash
# Dry-run validation
kubectl apply -f manifest.yaml --dry-run=client
# Server-side validation
kubectl apply -f manifest.yaml --dry-run=server
# Validate with kubeval
kubeval manifest.yaml
# Validate with kube-score
kube-score score manifest.yaml
# Check with kube-linter
kube-linter lint manifest.yaml
```
**Testing checklist:**
- [ ] Manifest passes dry-run validation
- [ ] All required fields are present
- [ ] Resource limits are reasonable
- [ ] Health checks are configured
- [ ] Security context is set
- [ ] Labels follow conventions
- [ ] Namespace exists or is created
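Several checklist items can be verified offline before any dry-run hits the cluster. A minimal lint sketch (illustrative only; it complements rather than replaces kubeval or kube-score):

```python
import yaml

def lint_manifest(text):
    """Collect basic problems: missing required fields, missing resource limits."""
    problems = []
    for doc in yaml.safe_load_all(text):
        if not doc:
            continue
        for field in ("apiVersion", "kind"):
            if field not in doc:
                problems.append(f"missing {field}")
        if not doc.get("metadata", {}).get("name"):
            problems.append("missing metadata.name")
        if doc.get("kind") == "Deployment":
            containers = (doc.get("spec", {}).get("template", {})
                          .get("spec", {}).get("containers", []))
            for c in containers:
                if "limits" not in c.get("resources", {}):
                    problems.append(f"{c.get('name')}: no resource limits")
    return problems

print(lint_manifest("kind: Deployment\nmetadata: {name: app}"))
# ['missing apiVersion']
```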
## Common Patterns
### Pattern 1: Simple Stateless Web Application
**Use case:** Standard web API or microservice
**Components needed:**
- Deployment (3 replicas for HA)
- ClusterIP Service
- ConfigMap for configuration
- Secret for API keys
- HorizontalPodAutoscaler (optional)
**Reference:** See `assets/deployment-template.yaml`
### Pattern 2: Stateful Database Application
**Use case:** Database or persistent storage application
**Components needed:**
- StatefulSet (not Deployment)
- Headless Service
- PersistentVolumeClaim template
- ConfigMap for DB configuration
- Secret for credentials
### Pattern 3: Background Job or Cron
**Use case:** Scheduled tasks or batch processing
**Components needed:**
- CronJob or Job
- ConfigMap for job parameters
- Secret for credentials
- ServiceAccount with RBAC
### Pattern 4: Multi-Container Pod
**Use case:** Application with sidecar containers
**Components needed:**
- Deployment with multiple containers
- Shared volumes between containers
- Init containers for setup
- Service (if needed)
## Templates
The following templates are available in the `assets/` directory:
- `deployment-template.yaml` - Standard deployment with best practices
- `service-template.yaml` - Service configurations (ClusterIP, LoadBalancer, NodePort)
- `configmap-template.yaml` - ConfigMap examples with different data types
- `secret-template.yaml` - Secret examples (to be generated, not committed)
- `pvc-template.yaml` - PersistentVolumeClaim templates
## Reference Documentation
- `references/deployment-spec.md` - Detailed Deployment specification
- `references/service-spec.md` - Service types and networking details
## Best Practices Summary
1. **Always set resource requests and limits** - Prevents resource starvation
2. **Implement health checks** - Ensures Kubernetes can manage your application
3. **Use specific image tags** - Avoid unpredictable deployments
4. **Apply security contexts** - Run as non-root, drop capabilities
5. **Use ConfigMaps and Secrets** - Separate config from code
6. **Label everything** - Enables filtering and organization
7. **Follow naming conventions** - Use standard Kubernetes labels
8. **Validate before applying** - Use dry-run and validation tools
9. **Version your manifests** - Keep in Git with version control
10. **Document with annotations** - Add context for other developers
## Troubleshooting
**Pods not starting:**
- Check image pull errors: `kubectl describe pod <pod-name>`
- Verify resource availability: `kubectl get nodes`
- Check events: `kubectl get events --sort-by='.lastTimestamp'`
**Service not accessible:**
- Verify selector matches pod labels: `kubectl get endpoints <service-name>`
- Check service type and port configuration
- Test from within cluster: `kubectl run debug --rm -it --image=busybox -- sh`
**ConfigMap/Secret not loading:**
- Verify names match in Deployment
- Check namespace
- Ensure resources exist: `kubectl get configmap,secret`
## Next Steps
After creating manifests:
1. Store in Git repository
2. Set up CI/CD pipeline for deployment
3. Consider using Helm or Kustomize for templating
4. Implement GitOps with ArgoCD or Flux
5. Add monitoring and observability
## Related Skills
- `helm-chart-scaffolding` - For templating and packaging
- `gitops-workflow` - For automated deployments
- `k8s-security-policies` - For advanced security configurations
| """
Test for 'k8s-manifest-generator' skill — Kustomize Manifest Generator
Validates that the Agent created Kustomize base+overlay structure for 3
environments and that kustomize build succeeds for each.
"""
import os
import subprocess
import pytest
import yaml # Imported at the top for consistency
class TestK8sManifestGenerator:
"""Verify Kubernetes manifest generation with Kustomize."""
REPO_DIR = "/workspace/kustomize"
# [!] Change: updated app-generator to multi-env as required by the requirements doc
BASE_DIR = "examples/multi-env"
ENVIRONMENTS = ["dev", "staging", "production"]
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_base_kustomization_exists(self):
"""base/kustomization.yaml must exist."""
fpath = os.path.join(self.REPO_DIR, self.BASE_DIR, "base", "kustomization.yaml")
assert os.path.isfile(fpath), "base/kustomization.yaml not found"
def test_base_deployment_exists(self):
"""base/deployment.yaml must exist."""
fpath = os.path.join(self.REPO_DIR, self.BASE_DIR, "base", "deployment.yaml")
assert os.path.isfile(fpath), "base/deployment.yaml not found"
@pytest.mark.parametrize("env", ENVIRONMENTS)
def test_overlay_kustomization_exists(self, env):
"""Each overlay must have kustomization.yaml."""
fpath = os.path.join(
self.REPO_DIR, self.BASE_DIR, "overlays", env, "kustomization.yaml"
)
assert os.path.isfile(fpath), f"overlays/{env}/kustomization.yaml not found"
# ------------------------------------------------------------------
# L2: YAML validation & kustomize build
# ------------------------------------------------------------------
def test_base_kustomization_has_resources(self):
"""base/kustomization.yaml must list resources."""
fpath = os.path.join(self.REPO_DIR, self.BASE_DIR, "base", "kustomization.yaml")
with open(fpath, "r") as f:
doc = yaml.safe_load(f)
assert "resources" in doc, "kustomization.yaml missing resources list"
assert len(doc["resources"]) >= 1, "resources list is empty"
@pytest.mark.parametrize("env", ENVIRONMENTS)
def test_overlay_references_base(self, env):
"""Each overlay must reference ../base via resources or bases."""
fpath = os.path.join(
self.REPO_DIR, self.BASE_DIR, "overlays", env, "kustomization.yaml"
)
with open(fpath, "r") as f:
doc = yaml.safe_load(f)
resources = doc.get("resources", []) + doc.get("bases", [])
has_base = any("base" in str(r) for r in resources)
assert has_base, f"Overlay {env} doesn't reference base directory"
@pytest.mark.parametrize("env", ENVIRONMENTS)
def test_kustomize_build_succeeds(self, env):
"""kustomize build must succeed for each environment."""
overlay = os.path.join(self.BASE_DIR, "overlays", env)
result = subprocess.run(
["kustomize", "build", overlay],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
assert (
result.returncode == 0
), f"kustomize build failed for {env}:\n{result.stderr}"
def test_production_has_higher_replicas(self):
"""Production overlay should specify more replicas than dev."""
prod_path = os.path.join(self.REPO_DIR, self.BASE_DIR, "overlays", "production")
dev_path = os.path.join(self.REPO_DIR, self.BASE_DIR, "overlays", "dev")
prod_out = subprocess.run(
["kustomize", "build", prod_path],
capture_output=True,
text=True,
timeout=120,
)
dev_out = subprocess.run(
["kustomize", "build", dev_path],
capture_output=True,
text=True,
timeout=120,
)
if prod_out.returncode != 0 or dev_out.returncode != 0:
pytest.skip("kustomize build failed")
        # Compare Deployment replica counts between the rendered manifests
        def max_replicas(output):
            counts = [doc.get("spec", {}).get("replicas", 1)
                      for doc in yaml.safe_load_all(output)
                      if doc and doc.get("kind") == "Deployment"]
            return max(counts) if counts else None
        prod_replicas = max_replicas(prod_out.stdout)
        dev_replicas = max_replicas(dev_out.stdout)
        assert prod_replicas is not None, "Production output missing a Deployment"
        if dev_replicas is not None:
            assert (
                prod_replicas >= dev_replicas
            ), f"Production replicas ({prod_replicas}) should be >= dev ({dev_replicas})"
def test_namespace_differs_between_envs(self):
"""Different environments should have different namespaces or prefixes."""
envs_content = {}
for env in self.ENVIRONMENTS:
fpath = os.path.join(
self.REPO_DIR, self.BASE_DIR, "overlays", env, "kustomization.yaml"
)
with open(fpath, "r") as f:
envs_content[env] = yaml.safe_load(f)
# Check if overlays customize namespace, namePrefix, or patches
customizations = set()
for env, doc in envs_content.items():
if doc.get("namespace"):
customizations.add(doc["namespace"])
if doc.get("namePrefix"):
customizations.add(doc["namePrefix"])
        assert (
            len(customizations) >= 2
        ), "Overlays should differentiate environments via namespace or namePrefix"
| https://github.com/kubernetes-sigs/kustomize | zhangyiiiiii/swe-skills-bench-golang | |
nx-workspace-patterns | Nx Workspace Patterns | See task file for detailed mission requirements. | test | # Task: Add Nx Workspace Demo with Generator
## Background
Add a minimal Nx workspace demo with a custom generator stub
and affected task listing.
## Files to Create/Modify
- examples/nx-demo/workspace.json (or nx.json)
- examples/nx-demo/packages/my-lib/ (sample library)
- examples/nx-demo/tools/generators/my-generator/ (custom generator)
## Requirements
1. Workspace Configuration:
- Basic Nx configuration
- Sample library package
- Generator configuration
2. Custom Generator:
- schema.json defining inputs
- index.ts with generator implementation stub
- Template files (optional)
3. Affected Commands:
- `nx affected:build` working
- `nx affected:test` working
- Proper dependency graph
4. Generator Schema:
- name: string input
- directory: optional string
- tags: optional string array
## Acceptance Criteria
- `npx nx affected:list` exits with code 0
- Generator schema validates successfully
- Output shows affected projects or "No affected projects"
| ---
name: nx-workspace-patterns
description: Configure and optimize Nx monorepo workspaces. Use when setting up Nx, configuring project boundaries, optimizing build caching, or implementing affected commands.
---
# Nx Workspace Patterns
Production patterns for Nx monorepo management.
## When to Use This Skill
- Setting up new Nx workspaces
- Configuring project boundaries
- Optimizing CI with affected commands
- Implementing remote caching
- Managing dependencies between projects
- Migrating to Nx
## Core Concepts
### 1. Nx Architecture
```
workspace/
├── apps/ # Deployable applications
│ ├── web/
│ └── api/
├── libs/ # Shared libraries
│ ├── shared/
│ │ ├── ui/
│ │ └── utils/
│ └── feature/
│ ├── auth/
│ └── dashboard/
├── tools/ # Custom executors/generators
├── nx.json # Nx configuration
└── workspace.json # Project configuration
```
### 2. Library Types
| Type | Purpose | Example |
| --------------- | -------------------------------- | ------------------- |
| **feature** | Smart components, business logic | `feature-auth` |
| **ui** | Presentational components | `ui-buttons` |
| **data-access** | API calls, state management | `data-access-users` |
| **util** | Pure functions, helpers | `util-formatting` |
| **shell** | App bootstrapping | `shell-web` |
## Templates
### Template 1: nx.json Configuration
```json
{
"$schema": "./node_modules/nx/schemas/nx-schema.json",
"npmScope": "myorg",
"affected": {
"defaultBase": "main"
},
"tasksRunnerOptions": {
"default": {
"runner": "nx/tasks-runners/default",
"options": {
"cacheableOperations": [
"build",
"lint",
"test",
"e2e",
"build-storybook"
],
"parallel": 3
}
}
},
"targetDefaults": {
"build": {
"dependsOn": ["^build"],
"inputs": ["production", "^production"],
"cache": true
},
"test": {
"inputs": ["default", "^production", "{workspaceRoot}/jest.preset.js"],
"cache": true
},
"lint": {
"inputs": ["default", "{workspaceRoot}/.eslintrc.json"],
"cache": true
},
"e2e": {
"inputs": ["default", "^production"],
"cache": true
}
},
"namedInputs": {
"default": ["{projectRoot}/**/*", "sharedGlobals"],
"production": [
"default",
"!{projectRoot}/**/?(*.)+(spec|test).[jt]s?(x)?(.snap)",
"!{projectRoot}/tsconfig.spec.json",
"!{projectRoot}/jest.config.[jt]s",
"!{projectRoot}/.eslintrc.json"
],
"sharedGlobals": [
"{workspaceRoot}/babel.config.json",
"{workspaceRoot}/tsconfig.base.json"
]
},
"generators": {
"@nx/react": {
"application": {
"style": "css",
"linter": "eslint",
"bundler": "webpack"
},
"library": {
"style": "css",
"linter": "eslint"
},
"component": {
"style": "css"
}
}
}
}
```
### Template 2: Project Configuration
```json
// apps/web/project.json
{
"name": "web",
"$schema": "../../node_modules/nx/schemas/project-schema.json",
"sourceRoot": "apps/web/src",
"projectType": "application",
"tags": ["type:app", "scope:web"],
"targets": {
"build": {
"executor": "@nx/webpack:webpack",
"outputs": ["{options.outputPath}"],
"defaultConfiguration": "production",
"options": {
"compiler": "babel",
"outputPath": "dist/apps/web",
"index": "apps/web/src/index.html",
"main": "apps/web/src/main.tsx",
"tsConfig": "apps/web/tsconfig.app.json",
"assets": ["apps/web/src/assets"],
"styles": ["apps/web/src/styles.css"]
},
"configurations": {
"development": {
"extractLicenses": false,
"optimization": false,
"sourceMap": true
},
"production": {
"optimization": true,
"outputHashing": "all",
"sourceMap": false,
"extractLicenses": true
}
}
},
"serve": {
"executor": "@nx/webpack:dev-server",
"defaultConfiguration": "development",
"options": {
"buildTarget": "web:build"
},
"configurations": {
"development": {
"buildTarget": "web:build:development"
},
"production": {
"buildTarget": "web:build:production"
}
}
},
"test": {
"executor": "@nx/jest:jest",
"outputs": ["{workspaceRoot}/coverage/{projectRoot}"],
"options": {
"jestConfig": "apps/web/jest.config.ts",
"passWithNoTests": true
}
},
"lint": {
"executor": "@nx/eslint:lint",
"outputs": ["{options.outputFile}"],
"options": {
"lintFilePatterns": ["apps/web/**/*.{ts,tsx,js,jsx}"]
}
}
}
}
```
### Template 3: Module Boundary Rules
```json
// .eslintrc.json
{
"root": true,
"ignorePatterns": ["**/*"],
"plugins": ["@nx"],
"overrides": [
{
"files": ["*.ts", "*.tsx", "*.js", "*.jsx"],
"rules": {
"@nx/enforce-module-boundaries": [
"error",
{
"enforceBuildableLibDependency": true,
"allow": [],
"depConstraints": [
{
"sourceTag": "type:app",
"onlyDependOnLibsWithTags": [
"type:feature",
"type:ui",
"type:data-access",
"type:util"
]
},
{
"sourceTag": "type:feature",
"onlyDependOnLibsWithTags": [
"type:ui",
"type:data-access",
"type:util"
]
},
{
"sourceTag": "type:ui",
"onlyDependOnLibsWithTags": ["type:ui", "type:util"]
},
{
"sourceTag": "type:data-access",
"onlyDependOnLibsWithTags": ["type:data-access", "type:util"]
},
{
"sourceTag": "type:util",
"onlyDependOnLibsWithTags": ["type:util"]
},
{
"sourceTag": "scope:web",
"onlyDependOnLibsWithTags": ["scope:web", "scope:shared"]
},
{
"sourceTag": "scope:api",
"onlyDependOnLibsWithTags": ["scope:api", "scope:shared"]
},
{
"sourceTag": "scope:shared",
"onlyDependOnLibsWithTags": ["scope:shared"]
}
]
}
]
}
}
]
}
```
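The `depConstraints` above resolve per source tag: a dependency is allowed only if, for every constraint matching one of the source project's tags, the target carries at least one of the allowed tags. A simplified sketch of that resolution (the real rule also handles `allow`, `notDependOnLibsWithTags`, and wildcard tags):

```python
def is_dep_allowed(source_tags, target_tags, dep_constraints):
    """Simplified @nx/enforce-module-boundaries check."""
    for constraint in dep_constraints:
        if constraint["sourceTag"] in source_tags:
            allowed = constraint.get("onlyDependOnLibsWithTags", [])
            if not any(tag in allowed for tag in target_tags):
                return False
    return True

constraints = [
    {"sourceTag": "type:ui", "onlyDependOnLibsWithTags": ["type:ui", "type:util"]},
    {"sourceTag": "scope:web", "onlyDependOnLibsWithTags": ["scope:web", "scope:shared"]},
]
# ui lib in web scope may depend on a shared util lib...
print(is_dep_allowed(["type:ui", "scope:web"], ["type:util", "scope:shared"], constraints))  # True
# ...but not on a feature lib, even inside its own scope
print(is_dep_allowed(["type:ui", "scope:web"], ["type:feature", "scope:web"], constraints))  # False
```

Note that both the `type:*` and `scope:*` constraints must pass, which is why tagging every project on both axes matters.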
### Template 4: Custom Generator
```typescript
// tools/generators/feature-lib/index.ts
import {
Tree,
formatFiles,
generateFiles,
joinPathFragments,
names,
readProjectConfiguration,
} from "@nx/devkit";
import { libraryGenerator } from "@nx/react";
interface FeatureLibraryGeneratorSchema {
name: string;
scope: string;
directory?: string;
}
export default async function featureLibraryGenerator(
tree: Tree,
options: FeatureLibraryGeneratorSchema,
) {
const { name, scope, directory } = options;
const projectDirectory = directory
? `${directory}/${name}`
: `libs/${scope}/feature-${name}`;
// Generate base library
await libraryGenerator(tree, {
name: `feature-${name}`,
directory: projectDirectory,
tags: `type:feature,scope:${scope}`,
style: "css",
skipTsConfig: false,
skipFormat: true,
unitTestRunner: "jest",
linter: "eslint",
});
// Add custom files
const projectConfig = readProjectConfiguration(
tree,
`${scope}-feature-${name}`,
);
const projectNames = names(name);
generateFiles(
tree,
joinPathFragments(__dirname, "files"),
projectConfig.sourceRoot,
{
...projectNames,
scope,
tmpl: "",
},
);
await formatFiles(tree);
}
```
### Template 5: CI Configuration with Affected
```yaml
# .github/workflows/ci.yml
name: CI
on:
push:
branches: [main]
pull_request:
branches: [main]
env:
NX_CLOUD_ACCESS_TOKEN: ${{ secrets.NX_CLOUD_ACCESS_TOKEN }}
jobs:
main:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: actions/setup-node@v4
with:
node-version: 20
cache: "npm"
- name: Install dependencies
run: npm ci
- name: Derive SHAs for affected commands
uses: nrwl/nx-set-shas@v4
- name: Run affected lint
run: npx nx affected -t lint --parallel=3
- name: Run affected test
run: npx nx affected -t test --parallel=3 --configuration=ci
- name: Run affected build
run: npx nx affected -t build --parallel=3
- name: Run affected e2e
run: npx nx affected -t e2e --parallel=1
```
### Template 6: Remote Caching Setup
```typescript
// nx.json with Nx Cloud
{
"tasksRunnerOptions": {
"default": {
"runner": "nx-cloud",
"options": {
"cacheableOperations": ["build", "lint", "test", "e2e"],
"accessToken": "your-nx-cloud-token",
"parallel": 3,
"cacheDirectory": ".nx/cache"
}
}
},
"nxCloudAccessToken": "your-nx-cloud-token"
}
// Self-hosted cache with S3
{
"tasksRunnerOptions": {
"default": {
"runner": "@nx-aws-cache/nx-aws-cache",
"options": {
"cacheableOperations": ["build", "lint", "test"],
"awsRegion": "us-east-1",
"awsBucket": "my-nx-cache-bucket",
"awsProfile": "default"
}
}
}
}
```
## Common Commands
```bash
# Generate new library
nx g @nx/react:lib feature-auth --directory=libs/web --tags=type:feature,scope:web
# Run affected tests
nx affected -t test --base=main
# View dependency graph
nx graph
# Run specific project
nx build web --configuration=production
# Reset cache
nx reset
# Run migrations
nx migrate latest
nx migrate --run-migrations
```
## Best Practices
### Do's
- **Use tags consistently** - Enforce with module boundaries
- **Enable caching early** - Significant CI savings
- **Keep libs focused** - Single responsibility
- **Use generators** - Ensure consistency
- **Document boundaries** - Help new developers
### Don'ts
- **Don't create circular deps** - Graph should be acyclic
- **Don't skip affected** - Test only what changed
- **Don't ignore boundaries** - Tech debt accumulates
- **Don't over-granularize** - Balance lib count
## Resources
- [Nx Documentation](https://nx.dev/getting-started/intro)
- [Module Boundaries](https://nx.dev/core-features/enforce-module-boundaries)
- [Nx Cloud](https://nx.app/)
| """
Test for 'nx-workspace-patterns' skill — Nx Monorepo Workspace Patterns
Validates that the Agent configured Nx workspace with generators, plugins,
and proper dependency graph setup.
"""
import os
import json
import subprocess
import pytest
class TestNxWorkspacePatterns:
"""Verify Nx workspace configuration and generators."""
REPO_DIR = "/workspace/nx"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_workspace_config_exists(self):
"""nx.json or workspace.json must exist."""
paths = [
os.path.join(self.REPO_DIR, "nx.json"),
os.path.join(self.REPO_DIR, "workspace.json"),
]
found = any(os.path.isfile(p) for p in paths)
if not found:
for root, dirs, files in os.walk(self.REPO_DIR):
if "nx.json" in files and "node_modules" not in root:
found = True
break
assert found, "nx.json not found"
def test_generator_exists(self):
"""Custom generator/schematic must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if (
("generator" in f.lower() or "schematic" in f.lower())
and f.endswith((".ts", ".js"))
and "node_modules" not in root
):
found.append(os.path.join(root, f))
assert len(found) >= 1, "No generator/schematic file found"
def test_generator_schema_exists(self):
"""Generator must have a schema.json."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
if (
"schema.json" in files
and "node_modules" not in root
and ("generator" in root.lower() or "schematic" in root.lower())
):
found = True
break
if not found:
# Broader search
for root, dirs, files in os.walk(self.REPO_DIR):
if "schema.json" in files and "node_modules" not in root:
fpath = os.path.join(root, "schema.json")
with open(fpath, "r", errors="ignore") as f:
content = f.read()
if "properties" in content:
found = True
break
assert found, "No generator schema.json found"
# ------------------------------------------------------------------
# L2: configuration & content validation
# ------------------------------------------------------------------
def _find_nx_json(self):
for root, dirs, files in os.walk(self.REPO_DIR):
if "nx.json" in files and "node_modules" not in root:
return os.path.join(root, "nx.json")
return None
def test_nx_json_valid(self):
"""nx.json must be valid JSON."""
fpath = self._find_nx_json()
assert fpath, "nx.json not found"
with open(fpath, "r") as f:
config = json.load(f)
assert isinstance(config, dict)
def test_nx_has_targets_or_plugins(self):
"""nx.json must configure targets/plugins/generators."""
fpath = self._find_nx_json()
assert fpath, "nx.json not found"
with open(fpath, "r") as f:
config = json.load(f)
keys = set(config.keys())
expected_keys = {
"targetDefaults",
"plugins",
"generators",
"defaultProject",
"tasksRunnerOptions",
"namedInputs",
"targets",
}
found = keys & expected_keys
assert len(found) >= 1, f"nx.json missing expected config; keys: {keys}"
def test_generator_has_tree_param(self):
"""Generator function must accept Tree parameter."""
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if (
("generator" in f.lower() or "schematic" in f.lower())
and f.endswith((".ts", ".js"))
and "node_modules" not in root
):
fpath = os.path.join(root, f)
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
tree_patterns = ["Tree", "tree", "host", "SchematicContext"]
if any(p in content for p in tree_patterns):
return
pytest.fail("No generator with Tree parameter found")
def test_generator_creates_files(self):
"""Generator must create or modify files."""
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if (
("generator" in f.lower() or "schematic" in f.lower())
and f.endswith((".ts", ".js"))
and "node_modules" not in root
):
fpath = os.path.join(root, f)
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
file_ops = [
"generateFiles",
"addProjectConfiguration",
"tree.write",
"tree.create",
"apply",
"template",
"mergeWith",
]
if any(p in content for p in file_ops):
return
pytest.fail("Generator doesn't appear to create/modify files")
def test_schema_has_properties(self):
"""schema.json must define input properties."""
for root, dirs, files in os.walk(self.REPO_DIR):
if "schema.json" in files and "node_modules" not in root:
fpath = os.path.join(root, "schema.json")
with open(fpath, "r", errors="ignore") as f:
schema = json.load(f)
if "properties" in schema:
props = schema["properties"]
assert len(props) >= 1, "schema.json has no properties"
return
pytest.fail("No schema.json with properties found")
def test_generator_has_tests(self):
"""Generator test file must exist."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if (
"generator" in f.lower()
and ("spec" in f.lower() or "test" in f.lower())
and f.endswith((".ts", ".js"))
and "node_modules" not in root
):
found = True
break
if found:
break
assert found, "No generator test file found"
def test_project_json_or_config(self):
"""At least one project.json or project configuration must exist."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
if "project.json" in files and "node_modules" not in root:
found = True
break
assert found, "No project.json found"
| https://github.com/nrwl/nx | zhangyiiiiii/swe-skills-bench-python | |
bazel-build-optimization | Bazel Build Optimization | See task file for detailed mission requirements. | feature | # Task: Create Bazel Remote Execution Example Project
## Background
Add a minimal but complete Bazel
project example demonstrating remote execution configuration and
build caching patterns to the Bazel repository.
## Files to Create/Modify
- examples/python-bazel/WORKSPACE (workspace configuration)
- examples/python-bazel/BUILD.bazel (root build file)
- examples/python-bazel/.bazelrc (build configuration)
- examples/python-bazel/src/BUILD.bazel (source build)
- examples/python-bazel/src/main.py (sample Python code)
- examples/python-bazel/tests/BUILD.bazel (test build)
## Requirements
1. Project Structure:
- Simple py_binary target in src/
- py_test targets in tests/
- Hermetic Python toolchain configuration
2. .bazelrc Configuration:
- Remote cache configuration (commented placeholder)
- Remote execution flags (commented placeholder)
- Local development settings
- CI-specific settings
3. Build Targets:
- //src:main (Python binary)
- //tests:all (test suite)
- //:format (formatting target, optional)
4. Configuration Flags to Include:
- --remote_cache placeholder
- --remote_executor placeholder
- --spawn_strategy options
- --disk_cache for local caching
## Acceptance Criteria
- `cd examples/python-bazel && bazel build //...` exits with code 0
- `cd examples/python-bazel && bazel test //...` passes
- .bazelrc contains documented remote execution configuration
| ---
name: bazel-build-optimization
description: Optimize Bazel builds for large-scale monorepos. Use when configuring Bazel, implementing remote execution, or optimizing build performance for enterprise codebases.
---
# Bazel Build Optimization
Production patterns for Bazel in large-scale monorepos.
## When to Use This Skill
- Setting up Bazel for monorepos
- Configuring remote caching/execution
- Optimizing build times
- Writing custom Bazel rules
- Debugging build issues
- Migrating to Bazel
## Core Concepts
### 1. Bazel Architecture
```
workspace/
├── WORKSPACE.bazel # External dependencies
├── .bazelrc # Build configurations
├── .bazelversion # Bazel version
├── BUILD.bazel # Root build file
├── apps/
│ └── web/
│ └── BUILD.bazel
├── libs/
│ └── utils/
│ └── BUILD.bazel
└── tools/
└── bazel/
└── rules/
```
### 2. Key Concepts
| Concept | Description |
| ----------- | -------------------------------------- |
| **Target** | Buildable unit (library, binary, test) |
| **Package** | Directory with BUILD file |
| **Label** | Target identifier `//path/to:target` |
| **Rule** | Defines how to build a target |
| **Aspect** | Cross-cutting build behavior |
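Labels follow a fixed grammar: `//package/path:target`, with `//package/path` alone being shorthand for a target named after the last path segment. A sketch of that expansion (covers only absolute in-repo labels, not `@repo//` or package-relative ones):

```python
def parse_label(label):
    """Split an absolute Bazel label into (package, target)."""
    if not label.startswith("//"):
        raise ValueError(f"not an absolute label: {label}")
    body = label[2:]
    if ":" in body:
        package, target = body.split(":", 1)
    else:
        # //libs/utils is shorthand for //libs/utils:utils
        package, target = body, body.rsplit("/", 1)[-1]
    return package, target

print(parse_label("//libs/utils:utils_py"))  # ('libs/utils', 'utils_py')
print(parse_label("//libs/utils"))           # ('libs/utils', 'utils')
```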
## Templates
### Template 1: WORKSPACE Configuration
```python
# WORKSPACE.bazel
workspace(name = "myproject")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
# Rules for JavaScript/TypeScript
http_archive(
name = "aspect_rules_js",
sha256 = "...",
strip_prefix = "rules_js-1.34.0",
url = "https://github.com/aspect-build/rules_js/releases/download/v1.34.0/rules_js-v1.34.0.tar.gz",
)
load("@aspect_rules_js//js:repositories.bzl", "rules_js_dependencies")
rules_js_dependencies()
load("@rules_nodejs//nodejs:repositories.bzl", "nodejs_register_toolchains")
nodejs_register_toolchains(
name = "nodejs",
node_version = "20.9.0",
)
load("@aspect_rules_js//npm:repositories.bzl", "npm_translate_lock")
npm_translate_lock(
name = "npm",
pnpm_lock = "//:pnpm-lock.yaml",
verify_node_modules_ignored = "//:.bazelignore",
)
load("@npm//:repositories.bzl", "npm_repositories")
npm_repositories()
# Rules for Python
http_archive(
name = "rules_python",
sha256 = "...",
strip_prefix = "rules_python-0.27.0",
url = "https://github.com/bazelbuild/rules_python/releases/download/0.27.0/rules_python-0.27.0.tar.gz",
)
load("@rules_python//python:repositories.bzl", "py_repositories")
py_repositories()
```
### Template 2: .bazelrc Configuration
```bash
# .bazelrc
# Build settings
build --enable_platform_specific_config
build --incompatible_enable_cc_toolchain_resolution
build --experimental_strict_conflict_checks
# Performance
build --jobs=auto
build --local_cpu_resources=HOST_CPUS*.75
build --local_ram_resources=HOST_RAM*.75
# Caching
build --disk_cache=~/.cache/bazel-disk
build --repository_cache=~/.cache/bazel-repo
# Remote caching (optional)
build:remote-cache --remote_cache=grpcs://cache.example.com
build:remote-cache --remote_upload_local_results=true
build:remote-cache --remote_timeout=3600
# Remote execution (optional)
build:remote-exec --remote_executor=grpcs://remote.example.com
build:remote-exec --remote_instance_name=projects/myproject/instances/default
build:remote-exec --jobs=500
# Platform configurations
build:linux --platforms=//platforms:linux_x86_64
build:macos --platforms=//platforms:macos_arm64
# CI configuration
build:ci --config=remote-cache
build:ci --build_metadata=ROLE=CI
build:ci --bes_results_url=https://results.example.com/invocation/
build:ci --bes_backend=grpcs://bes.example.com
# Test settings
test --test_output=errors
test --test_summary=detailed
# Coverage
coverage --combined_report=lcov
coverage --instrumentation_filter="//..."
# Convenience aliases
build:opt --compilation_mode=opt
build:dbg --compilation_mode=dbg
# Import user settings
try-import %workspace%/user.bazelrc
```
### Template 3: TypeScript Library BUILD
```python
# libs/utils/BUILD.bazel
load("@aspect_rules_ts//ts:defs.bzl", "ts_project")
load("@aspect_rules_js//js:defs.bzl", "js_library")
load("@npm//:defs.bzl", "npm_link_all_packages")
npm_link_all_packages(name = "node_modules")
ts_project(
name = "utils_ts",
srcs = glob(["src/**/*.ts"]),
declaration = True,
source_map = True,
tsconfig = "//:tsconfig.json",
deps = [
":node_modules/@types/node",
],
)
js_library(
name = "utils",
srcs = [":utils_ts"],
visibility = ["//visibility:public"],
)
# Tests
load("@aspect_rules_jest//jest:defs.bzl", "jest_test")
jest_test(
name = "utils_test",
config = "//:jest.config.js",
data = [
":utils",
"//:node_modules/jest",
],
node_modules = "//:node_modules",
)
```
### Template 4: Python Library BUILD
```python
# libs/ml/BUILD.bazel
load("@rules_python//python:defs.bzl", "py_library", "py_test", "py_binary")
load("@pip//:requirements.bzl", "requirement")
py_library(
name = "ml",
srcs = glob(["src/**/*.py"]),
deps = [
requirement("numpy"),
requirement("pandas"),
requirement("scikit-learn"),
"//libs/utils:utils_py",
],
visibility = ["//visibility:public"],
)
py_test(
name = "ml_test",
srcs = glob(["tests/**/*.py"]),
deps = [
":ml",
requirement("pytest"),
],
size = "medium",
timeout = "moderate",
)
py_binary(
name = "train",
srcs = ["train.py"],
deps = [":ml"],
data = ["//data:training_data"],
)
```
### Template 5: Custom Rule for Docker
```python
# tools/bazel/rules/docker.bzl
def _docker_image_impl(ctx):
dockerfile = ctx.file.dockerfile
base_image = ctx.attr.base_image
layers = ctx.files.layers
# Build the image
output = ctx.actions.declare_file(ctx.attr.name + ".tar")
args = ctx.actions.args()
args.add("--dockerfile", dockerfile)
args.add("--output", output)
args.add("--base", base_image)
args.add_all("--layer", layers)
ctx.actions.run(
inputs = [dockerfile] + layers,
outputs = [output],
executable = ctx.executable._builder,
arguments = [args],
mnemonic = "DockerBuild",
progress_message = "Building Docker image %s" % ctx.label,
)
return [DefaultInfo(files = depset([output]))]
docker_image = rule(
implementation = _docker_image_impl,
attrs = {
"dockerfile": attr.label(
allow_single_file = [".dockerfile", "Dockerfile"],
mandatory = True,
),
"base_image": attr.string(mandatory = True),
"layers": attr.label_list(allow_files = True),
"_builder": attr.label(
default = "//tools/docker:builder",
executable = True,
cfg = "exec",
),
},
)
```
### Template 6: Query and Dependency Analysis
```bash
# Find all dependencies of a target
bazel query "deps(//apps/web:web)"
# Find reverse dependencies (what depends on this)
bazel query "rdeps(//..., //libs/utils:utils)"
# Find all targets in a package
bazel query "//libs/..."
# Find changed targets since commit
bazel query "rdeps(//..., set($(git diff --name-only HEAD~1 | sed 's/.*/"&"/' | tr '\n' ' ')))"
# Generate dependency graph
bazel query "deps(//apps/web:web)" --output=graph | dot -Tpng > deps.png
# Find all test targets
bazel query "kind('.*_test', //...)"
# Find targets with specific tag
bazel query "attr(tags, 'integration', //...)"
# Compute build graph size
bazel query "deps(//...)" --output=package | wc -l
```
### Template 7: Remote Execution Setup
```python
# platforms/BUILD.bazel
platform(
name = "linux_x86_64",
constraint_values = [
"@platforms//os:linux",
"@platforms//cpu:x86_64",
],
exec_properties = {
"container-image": "docker://gcr.io/myproject/bazel-worker:latest",
"OSFamily": "Linux",
},
)
platform(
name = "remote_linux",
parents = [":linux_x86_64"],
exec_properties = {
"Pool": "default",
"dockerNetwork": "standard",
},
)
# toolchains/BUILD.bazel
toolchain(
name = "cc_toolchain_linux",
exec_compatible_with = [
"@platforms//os:linux",
"@platforms//cpu:x86_64",
],
target_compatible_with = [
"@platforms//os:linux",
"@platforms//cpu:x86_64",
],
toolchain = "@remotejdk11_linux//:jdk",
toolchain_type = "@bazel_tools//tools/jdk:runtime_toolchain_type",
)
```
## Performance Optimization
```bash
# Profile build
bazel build //... --profile=profile.json
bazel analyze-profile profile.json
# Identify slow actions
bazel build //... --execution_log_json_file=exec_log.json
# Memory profiling
bazel build //... --memory_profile=memory.json
# Skip analysis cache
bazel build //... --notrack_incremental_state
```
## Best Practices
### Do's
- **Use fine-grained targets** - Better caching
- **Pin dependencies** - Reproducible builds
- **Enable remote caching** - Share build artifacts
- **Use visibility wisely** - Enforce architecture
- **Write BUILD files per directory** - Standard convention
### Don'ts
- **Don't use glob for deps** - Explicit is better
- **Don't commit bazel-\* dirs** - Add to .gitignore
- **Don't skip WORKSPACE setup** - Foundation of build
- **Don't ignore build warnings** - Technical debt
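To make the glob-versus-explicit-deps rule concrete, here is a hypothetical BUILD fragment (package and target names are illustrative, not from a real repo):

```python
# Hypothetical package //libs/api, illustrating the "explicit deps" rule.

# Discouraged: globbing in deps couples the target to whatever happens to be
# on disk, so Bazel cannot build a precise, cacheable dependency graph.
#   deps = glob(["**/*.py"])

# Preferred: glob() for srcs (files in this package) is conventional,
# but every dependency is a named label.
py_library(
    name = "api",
    srcs = glob(["src/**/*.py"]),
    deps = [
        "//libs/utils:utils_py",  # illustrative labels
        "//libs/ml:ml",
    ],
    visibility = ["//visibility:public"],
)
```

Explicit labels also let `bazel query rdeps(...)` answer "what breaks if I change this?" precisely, which glob-based deps cannot.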
## Resources
- [Bazel Documentation](https://bazel.build/docs)
- [Bazel Remote Execution](https://bazel.build/docs/remote-execution)
- [rules_js](https://github.com/aspect-build/rules_js)
| """
Test for 'bazel-build-optimization' skill — Bazel Build Optimization
Validates that the Agent optimized Bazel build configuration with remote
caching, build parallelism, and dependency management.
"""
import os
import subprocess
import pytest
class TestBazelBuildOptimization:
"""Verify Bazel build optimization setup."""
REPO_DIR = "/workspace/bazel"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_bazelrc_exists(self):
"""A .bazelrc file must exist with optimization flags."""
fpath = os.path.join(self.REPO_DIR, ".bazelrc")
found = os.path.isfile(fpath)
if not found:
# Check for user.bazelrc or ci.bazelrc
for name in [".bazelrc", "user.bazelrc", "ci.bazelrc", ".bazelrc.user"]:
if os.path.isfile(os.path.join(self.REPO_DIR, name)):
found = True
break
assert found, ".bazelrc not found"
def test_build_files_exist(self):
"""BUILD or BUILD.bazel files must exist."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f in ("BUILD", "BUILD.bazel"):
found = True
break
if found:
break
assert found, "No BUILD/BUILD.bazel files found"
# ------------------------------------------------------------------
# L2: content validation
# ------------------------------------------------------------------
def _read_bazelrc(self):
for name in [".bazelrc", "user.bazelrc", "ci.bazelrc"]:
fpath = os.path.join(self.REPO_DIR, name)
if os.path.isfile(fpath):
with open(fpath, "r") as f:
return f.read()
return ""
def test_remote_cache_config(self):
"""bazelrc should configure remote caching."""
content = self._read_bazelrc()
cache_patterns = [
"remote_cache",
"disk_cache",
"http_cache",
"--remote_cache",
"--disk_cache",
]
found = any(p in content for p in cache_patterns)
assert found, "No remote/disk cache configuration in .bazelrc"
def test_jobs_parallelism(self):
"""bazelrc should set parallelism flags."""
content = self._read_bazelrc()
parallel_patterns = [
"--jobs",
"--local_cpu_resources",
"--local_ram_resources",
"worker",
]
found = any(p in content for p in parallel_patterns)
assert found, "No parallelism configuration in .bazelrc"
def test_build_config_sections(self):
"""bazelrc should define named configs (--config=ci etc.)."""
content = self._read_bazelrc()
config_patterns = [
"build:ci",
"build:opt",
"build:remote",
"build --config",
"test:ci",
]
found = any(p in content for p in config_patterns)
if not found:
# Check for any build: directives
found = "build:" in content or "build --" in content
assert found, "No named build configs in .bazelrc"
def test_test_configuration(self):
"""bazelrc should configure test settings."""
content = self._read_bazelrc()
test_patterns = [
"test --",
"test:",
"--test_output",
"--test_timeout",
"--flaky_test",
]
found = any(p in content for p in test_patterns)
assert found, "No test configuration in .bazelrc"
def test_optimization_flags(self):
"""bazelrc should include compilation optimization flags."""
content = self._read_bazelrc()
opt_patterns = [
"-c opt",
"--compilation_mode",
"--strip",
"--copt",
"-O2",
"--linkopt",
"--experimental",
"--incompatible",
]
found = any(p in content for p in opt_patterns)
assert found, "No compilation optimization flags"
def test_workspace_file_exists(self):
"""WORKSPACE or MODULE.bazel file must exist."""
workspace_files = [
"WORKSPACE",
"WORKSPACE.bazel",
"MODULE.bazel",
"WORKSPACE.bzlmod",
]
found = any(
os.path.isfile(os.path.join(self.REPO_DIR, f)) for f in workspace_files
)
assert found, "No WORKSPACE/MODULE.bazel file found"
def test_dependency_management(self):
"""Build must define external dependencies."""
dep_found = False
for fname in ["WORKSPACE", "WORKSPACE.bazel", "MODULE.bazel"]:
fpath = os.path.join(self.REPO_DIR, fname)
if os.path.isfile(fpath):
with open(fpath, "r") as f:
content = f.read()
dep_patterns = [
"http_archive",
"git_repository",
"maven_install",
"bazel_dep",
"load(",
"module(",
]
if any(p in content for p in dep_patterns):
dep_found = True
break
assert dep_found, "No dependency management found"
def test_bazelrc_has_multiple_settings(self):
"""bazelrc must have at least 5 non-comment lines."""
content = self._read_bazelrc()
lines = [
l.strip()
for l in content.splitlines()
if l.strip() and not l.strip().startswith("#")
]
assert len(lines) >= 5, f".bazelrc has only {len(lines)} settings, need >= 5"
| https://github.com/bazelbuild/bazel | zhangyiiiiii/swe-skills-bench-bazel | |
istio-traffic-management | Istio Traffic Management | See task file for detailed mission requirements. | feature | # Task: Add Istio Canary Deployment Example
## Background
Add an Istio canary deployment example to the `samples/` directory demonstrating weighted traffic routing between stable and canary versions of a service.
## Files to Create/Modify
- `samples/canary-demo/virtual-service.yaml` - VirtualService with weighted routing
- `samples/canary-demo/destination-rule.yaml` - DestinationRule for subsets
- `samples/canary-demo/gateway.yaml` - Ingress gateway configuration
- `samples/canary-demo/deployments.yaml` - Stable and canary Deployment manifests
- `samples/canary-demo/verify.sh` - Verification script
## Requirements
### VirtualService Configuration
- `apiVersion: networking.istio.io/v1beta1`
- HTTP route rules with weighted traffic split (e.g., 90% stable / 10% canary)
- Match conditions on headers or URI prefix
- Retry policy and timeout settings
### DestinationRule
- Define subsets: `stable` and `canary` with label selectors
- Load balancing policy
- Connection pool settings and outlier detection
### Deployments
- Stable deployment with label `version: stable`
- Canary deployment with label `version: canary`
- Kubernetes Service selecting both versions
### Verification Script (verify.sh)
- Validate all YAML files with `istioctl analyze`
- Check that VirtualService references valid destinations
- Verify weight percentages sum to 100
## Acceptance Criteria
- `istioctl analyze samples/canary-demo/` exits with code 0
- VirtualService contains `spec.http[].route` entries with weight
- DestinationRule defines `stable` and `canary` subsets
- Verification script passes all checks
| ---
name: istio-traffic-management
description: Configure Istio traffic management including routing, load balancing, circuit breakers, and canary deployments. Use when implementing service mesh traffic policies, progressive delivery, or resilience patterns.
---
# Istio Traffic Management
Comprehensive guide to Istio traffic management for production service mesh deployments.
## When to Use This Skill
- Configuring service-to-service routing
- Implementing canary or blue-green deployments
- Setting up circuit breakers and retries
- Load balancing configuration
- Traffic mirroring for testing
- Fault injection for chaos engineering
## Core Concepts
### 1. Traffic Management Resources
| Resource | Purpose | Scope |
| ------------------- | ----------------------------- | ------------- |
| **VirtualService** | Route traffic to destinations | Host-based |
| **DestinationRule** | Define policies after routing | Service-based |
| **Gateway** | Configure ingress/egress | Cluster edge |
| **ServiceEntry** | Add external services | Mesh-wide |
### 2. Traffic Flow
```
Client → Gateway → VirtualService → DestinationRule → Service
                    (routing)        (policies)       (pods)
```
## Templates
### Template 1: Basic Routing
```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: reviews-route
namespace: bookinfo
spec:
hosts:
- reviews
http:
- match:
- headers:
end-user:
exact: jason
route:
- destination:
host: reviews
subset: v2
- route:
- destination:
host: reviews
subset: v1
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: reviews-destination
namespace: bookinfo
spec:
host: reviews
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
- name: v3
labels:
version: v3
```
### Template 2: Canary Deployment
```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: my-service-canary
spec:
hosts:
- my-service
http:
- route:
- destination:
host: my-service
subset: stable
weight: 90
- destination:
host: my-service
subset: canary
weight: 10
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: my-service-dr
spec:
host: my-service
trafficPolicy:
connectionPool:
tcp:
maxConnections: 100
http:
h2UpgradePolicy: UPGRADE
http1MaxPendingRequests: 100
http2MaxRequests: 1000
subsets:
- name: stable
labels:
version: stable
- name: canary
labels:
version: canary
```
### Template 3: Circuit Breaker
```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: circuit-breaker
spec:
host: my-service
trafficPolicy:
connectionPool:
tcp:
maxConnections: 100
http:
http1MaxPendingRequests: 100
http2MaxRequests: 1000
maxRequestsPerConnection: 10
maxRetries: 3
outlierDetection:
consecutive5xxErrors: 5
interval: 30s
baseEjectionTime: 30s
maxEjectionPercent: 50
minHealthPercent: 30
```
### Template 4: Retry and Timeout
```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: ratings-retry
spec:
hosts:
- ratings
http:
- route:
- destination:
host: ratings
timeout: 10s
retries:
attempts: 3
perTryTimeout: 3s
retryOn: connect-failure,refused-stream,unavailable,cancelled,retriable-4xx,503
retryRemoteLocalities: true
```
### Template 5: Traffic Mirroring
```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: mirror-traffic
spec:
hosts:
- my-service
http:
- route:
- destination:
host: my-service
subset: v1
mirror:
host: my-service
subset: v2
mirrorPercentage:
value: 100.0
```
### Template 6: Fault Injection
```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: fault-injection
spec:
hosts:
- ratings
http:
- fault:
delay:
percentage:
value: 10
fixedDelay: 5s
abort:
percentage:
value: 5
httpStatus: 503
route:
- destination:
host: ratings
```
### Template 7: Ingress Gateway
```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: my-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: my-tls-secret
hosts:
- "*.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: my-vs
spec:
hosts:
- "api.example.com"
gateways:
- my-gateway
http:
- match:
- uri:
prefix: /api/v1
route:
- destination:
host: api-service
port:
number: 8080
```
## Load Balancing Strategies
```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: load-balancing
spec:
host: my-service
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN # or LEAST_CONN, RANDOM, PASSTHROUGH
---
# Consistent hashing for sticky sessions
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: sticky-sessions
spec:
host: my-service
trafficPolicy:
loadBalancer:
consistentHash:
httpHeaderName: x-user-id
# or: httpCookie, useSourceIp, httpQueryParameterName
```
## Best Practices
### Do's
- **Start simple** - Add complexity incrementally
- **Use subsets** - Version your services clearly
- **Set timeouts** - Always configure reasonable timeouts
- **Enable retries** - But with backoff and limits
- **Monitor** - Use Kiali and Jaeger for visibility
### Don'ts
- **Don't over-retry** - Can cause cascading failures
- **Don't ignore outlier detection** - Enable circuit breakers
- **Don't mirror to production** - Mirror to test environments
- **Don't skip canary** - Test with small traffic percentage first
## Debugging Commands
```bash
# Check VirtualService configuration
istioctl analyze
# View effective routes
istioctl proxy-config routes deploy/my-app -o json
# Check endpoint discovery
istioctl proxy-config endpoints deploy/my-app
# Debug traffic
istioctl proxy-config log deploy/my-app --level debug
```
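The canary split from Template 2 can also be sanity-checked before a cluster or `istioctl` is available. A minimal sketch in plain Python — regex-based rather than a full YAML parser, so it assumes the simple one-`weight:`-per-line layout shown in the templates above:

```python
import re

def weights_sum_to_100(yaml_text: str) -> bool:
    """Extract every 'weight: N' field and check the split totals 100."""
    weights = [int(w) for w in
               re.findall(r"^\s*weight:\s*(\d+)\s*$", yaml_text, re.M)]
    return bool(weights) and sum(weights) == 100

vs = """
  http:
  - route:
    - destination:
        host: my-service
        subset: stable
      weight: 90
    - destination:
        host: my-service
        subset: canary
      weight: 10
"""
print(weights_sum_to_100(vs))  # True
```

A verify script would normally run this check in addition to `istioctl analyze`, not instead of it, since the regex sees only weights and not route structure.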
## Resources
- [Istio Traffic Management](https://istio.io/latest/docs/concepts/traffic-management/)
- [Virtual Service Reference](https://istio.io/latest/docs/reference/config/networking/virtual-service/)
- [Destination Rule Reference](https://istio.io/latest/docs/reference/config/networking/destination-rule/)
| """
Test for 'istio-traffic-management' skill — Istio Traffic Management
Validates that the Agent created VirtualService, DestinationRule, Gateway,
Deployments configs and a verify script for canary routing with proper traffic weights.
"""
import os
import pytest
class TestIstioTrafficManagement:
"""Verify Istio traffic management configs."""
REPO_DIR = "/workspace/istio"
CANARY_DIR = "samples/canary-demo"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_virtualservice_exists(self):
"""samples/canary-demo/virtual-service.yaml must exist."""
fpath = os.path.join(self.REPO_DIR, self.CANARY_DIR, "virtual-service.yaml")
assert os.path.isfile(fpath), "virtual-service.yaml not found"
def test_destinationrule_exists(self):
"""samples/canary-demo/destination-rule.yaml must exist."""
fpath = os.path.join(self.REPO_DIR, self.CANARY_DIR, "destination-rule.yaml")
assert os.path.isfile(fpath), "destination-rule.yaml not found"
def test_gateway_exists(self):
"""samples/canary-demo/gateway.yaml must exist."""
fpath = os.path.join(self.REPO_DIR, self.CANARY_DIR, "gateway.yaml")
assert os.path.isfile(fpath), "gateway.yaml not found"
def test_deployments_exists(self):
"""samples/canary-demo/deployments.yaml must exist."""
fpath = os.path.join(self.REPO_DIR, self.CANARY_DIR, "deployments.yaml")
assert os.path.isfile(fpath), "deployments.yaml not found"
def test_verify_sh_exists(self):
"""samples/canary-demo/verify.sh must exist."""
fpath = os.path.join(self.REPO_DIR, self.CANARY_DIR, "verify.sh")
assert os.path.isfile(fpath), "verify.sh not found"
# ------------------------------------------------------------------
# L2: YAML content validation
# ------------------------------------------------------------------
def _load_yamls(self, relpath):
import yaml
fpath = os.path.join(self.REPO_DIR, relpath)
with open(fpath, "r") as f:
return list(yaml.safe_load_all(f))
def test_virtualservice_kind(self):
"""VirtualService file must define kind: VirtualService."""
docs = self._load_yamls(f"{self.CANARY_DIR}/virtual-service.yaml")
vs_found = any(d and d.get("kind") == "VirtualService" for d in docs)
assert vs_found, "No VirtualService resource found"
def test_destinationrule_kind(self):
"""DestinationRule file must define kind: DestinationRule."""
docs = self._load_yamls(f"{self.CANARY_DIR}/destination-rule.yaml")
dr_found = any(d and d.get("kind") == "DestinationRule" for d in docs)
assert dr_found, "No DestinationRule resource found"
def test_virtualservice_has_http_routes(self):
"""VirtualService must define HTTP routes."""
docs = self._load_yamls(f"{self.CANARY_DIR}/virtual-service.yaml")
for doc in docs:
if doc and doc.get("kind") == "VirtualService":
spec = doc.get("spec", {})
http = spec.get("http", [])
assert len(http) >= 1, "VirtualService has no HTTP routes"
return
pytest.fail("No VirtualService with spec.http found")
def test_traffic_weights_sum_to_100(self):
"""Route weights in a single match must sum to 100."""
docs = self._load_yamls(f"{self.CANARY_DIR}/virtual-service.yaml")
for doc in docs:
if doc and doc.get("kind") == "VirtualService":
for route_block in doc.get("spec", {}).get("http", []):
routes = route_block.get("route", [])
if len(routes) >= 2:
total = sum(r.get("weight", 0) for r in routes)
assert total == 100, f"Weights sum to {total}, expected 100"
return
pytest.fail("No route block with >= 2 weighted destinations found")
def test_destinationrule_has_subsets(self):
"""DestinationRule must define at least 2 subsets."""
docs = self._load_yamls(f"{self.CANARY_DIR}/destination-rule.yaml")
for doc in docs:
if doc and doc.get("kind") == "DestinationRule":
subsets = doc.get("spec", {}).get("subsets", [])
assert len(subsets) >= 2, f"Need >= 2 subsets, got {len(subsets)}"
return
pytest.fail("No DestinationRule with subsets found")
def test_subsets_have_labels(self):
"""Each subset must have version labels."""
docs = self._load_yamls(f"{self.CANARY_DIR}/destination-rule.yaml")
for doc in docs:
if doc and doc.get("kind") == "DestinationRule":
subsets = doc.get("spec", {}).get("subsets", [])
for subset in subsets:
assert "name" in subset, "Subset missing name"
labels = subset.get("labels", {})
assert len(labels) >= 1, f"Subset '{subset['name']}' has no labels"
return
def test_virtualservice_has_hosts(self):
"""VirtualService must specify hosts."""
docs = self._load_yamls(f"{self.CANARY_DIR}/virtual-service.yaml")
for doc in docs:
if doc and doc.get("kind") == "VirtualService":
hosts = doc.get("spec", {}).get("hosts", [])
assert len(hosts) >= 1, "VirtualService has no hosts"
return
def test_verify_sh_contains_istioctl(self):
"""verify.sh must call istioctl analyze for validation."""
fpath = os.path.join(self.REPO_DIR, self.CANARY_DIR, "verify.sh")
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
assert "istioctl" in content, "verify.sh does not invoke istioctl analyze"
def test_yaml_files_parseable(self):
"""Core YAML files must contain valid YAML."""
import yaml
for fname in [
"virtual-service.yaml",
"destination-rule.yaml",
"gateway.yaml",
"deployments.yaml",
]:
fpath = os.path.join(self.REPO_DIR, self.CANARY_DIR, fname)
with open(fpath, "r") as f:
docs = list(yaml.safe_load_all(f))
assert all(
isinstance(d, dict) for d in docs if d is not None
), f"{fname} contains non-mapping documents"
| https://github.com/istio/istio | zhangyiiiiii/swe-skills-bench-python | |
bash-defensive-patterns | Bash Defensive Patterns | See task file for detailed mission requirements. | feature | # Task: Add Defensive Bash Scripts to ShellCheck Test Suite
## Background
Add example shell scripts to the ShellCheck repository's `test/` directory that demonstrate robust, production-quality Bash patterns and pass ShellCheck analysis without warnings.
## Files to Create/Modify
- `test/safe_backup.sh` - Backup script demonstrating defensive coding
- `test/common_utils.sh` - Reusable utility functions library
- `test/test_scripts.bats` - BATS test suite for the scripts (optional)
## Requirements
### safe_backup.sh
- `set -euo pipefail` at script start
- Proper quoting of all variable expansions
- `trap` for cleanup on `EXIT` / `ERR`
- Input validation for directory arguments
- Meaningful exit codes on errors
### common_utils.sh
- Logging functions (info, warn, error)
- Error handling helpers
- Argument parsing template using `getopts` or manual parsing
### Static Analysis
- Both `.sh` files must pass `shellcheck --severity=warning` with exit code 0
- Consistent formatting (shfmt-compatible)
## Acceptance Criteria
- `shellcheck --severity=warning test/*.sh` exits with code 0
- Scripts demonstrate defensive coding patterns
- Utility functions are reusable and well-structured
| ---
name: bash-defensive-patterns
description: Master defensive Bash programming techniques for production-grade scripts. Use when writing robust shell scripts, CI/CD pipelines, or system utilities requiring fault tolerance and safety.
---
# Bash Defensive Patterns
Comprehensive guidance for writing production-ready Bash scripts using defensive programming techniques, error handling, and safety best practices to prevent common pitfalls and ensure reliability.
## When to Use This Skill
- Writing production automation scripts
- Building CI/CD pipeline scripts
- Creating system administration utilities
- Developing error-resilient deployment automation
- Writing scripts that must handle edge cases safely
- Building maintainable shell script libraries
- Implementing comprehensive logging and monitoring
- Creating scripts that must work across different platforms
## Core Defensive Principles
### 1. Strict Mode
Enable bash strict mode at the start of every script to catch errors early.
```bash
#!/bin/bash
set -Eeuo pipefail # Exit on error, unset variables, pipe failures
```
**Key flags:**
- `set -E`: Inherit ERR trap in functions
- `set -e`: Exit on any error (command returns non-zero)
- `set -u`: Exit on undefined variable reference
- `set -o pipefail`: Pipe fails if any command fails (not just last)
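The effect of `pipefail` is easy to demonstrate in isolation; a minimal sketch, assuming nothing beyond `bash` itself:

```shell
#!/bin/bash
# Without pipefail, a pipeline's exit status is that of the LAST command,
# so a failure earlier in the pipe is silently swallowed.
false | true
echo "default exit status: $?"   # 0 - the failure of 'false' is lost

# With pipefail, the pipeline fails if ANY command in it fails.
set -o pipefail
false | true
echo "pipefail exit status: $?"  # 1 - the failure is surfaced
```

This is why `grep pattern file | head -1` style pipelines in strict-mode scripts need care: with `set -e` and `pipefail` together, an early `SIGPIPE` or a no-match `grep` can abort the whole script.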
### 2. Error Trapping and Cleanup
Implement proper cleanup on script exit or error.
```bash
#!/bin/bash
set -Eeuo pipefail
trap 'echo "Error on line $LINENO"' ERR
trap 'echo "Cleaning up..."; rm -rf "$TMPDIR"' EXIT
TMPDIR=$(mktemp -d)
# Script code here
```
### 3. Variable Safety
Always quote variables to prevent word splitting and globbing issues.
```bash
# Wrong - unsafe
cp $source $dest
# Correct - safe
cp "$source" "$dest"
# Required variables - fail with message if unset
: "${REQUIRED_VAR:?REQUIRED_VAR is not set}"
```
### 4. Array Handling
Use arrays safely for complex data handling.
```bash
# Safe array iteration
declare -a items=("item 1" "item 2" "item 3")
for item in "${items[@]}"; do
echo "Processing: $item"
done
# Reading output into array safely
mapfile -t lines < <(some_command)
readarray -t numbers < <(seq 1 10)
```
### 5. Conditional Safety
Use `[[ ]]` for Bash-specific features, `[ ]` for POSIX.
```bash
# Bash - safer
if [[ -f "$file" && -r "$file" ]]; then
content=$(<"$file")
fi
# POSIX - portable
if [ -f "$file" ] && [ -r "$file" ]; then
content=$(cat "$file")
fi
# Test for existence before operations
if [[ -z "${VAR:-}" ]]; then
echo "VAR is not set or is empty"
fi
```
## Fundamental Patterns
### Pattern 1: Safe Script Directory Detection
```bash
#!/bin/bash
set -Eeuo pipefail
# Correctly determine script directory
SCRIPT_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd -P)"
SCRIPT_NAME="$(basename -- "${BASH_SOURCE[0]}")"
echo "Script location: $SCRIPT_DIR/$SCRIPT_NAME"
```
### Pattern 2: Comprehensive Function Template
```bash
#!/bin/bash
set -Eeuo pipefail
# Prefix for functions: handle_*, process_*, check_*, validate_*
# Include documentation and error handling
validate_file() {
local -r file="$1"
local -r message="${2:-File not found: $file}"
if [[ ! -f "$file" ]]; then
echo "ERROR: $message" >&2
return 1
fi
return 0
}
process_files() {
local -r input_dir="$1"
local -r output_dir="$2"
# Validate inputs
[[ -d "$input_dir" ]] || { echo "ERROR: input_dir not a directory" >&2; return 1; }
# Create output directory if needed
mkdir -p "$output_dir" || { echo "ERROR: Cannot create output_dir" >&2; return 1; }
# Process files safely
while IFS= read -r -d '' file; do
echo "Processing: $file"
# Do work
done < <(find "$input_dir" -maxdepth 1 -type f -print0)
return 0
}
```
### Pattern 3: Safe Temporary File Handling
```bash
#!/bin/bash
set -Eeuo pipefail
trap 'rm -rf -- "$TMPDIR"' EXIT
# Create temporary directory
TMPDIR=$(mktemp -d) || { echo "ERROR: Failed to create temp directory" >&2; exit 1; }
# Create temporary files in directory
TMPFILE1="$TMPDIR/temp1.txt"
TMPFILE2="$TMPDIR/temp2.txt"
# Use temporary files
touch "$TMPFILE1" "$TMPFILE2"
echo "Temp files created in: $TMPDIR"
```
### Pattern 4: Robust Argument Parsing
```bash
#!/bin/bash
set -Eeuo pipefail
# Default values
VERBOSE=false
DRY_RUN=false
OUTPUT_FILE=""
THREADS=4
usage() {
cat <<EOF
Usage: $0 [OPTIONS]
Options:
-v, --verbose Enable verbose output
-d, --dry-run Run without making changes
-o, --output FILE Output file path
-j, --jobs NUM Number of parallel jobs
-h, --help Show this help message
EOF
exit "${1:-0}"
}
# Parse arguments
while [[ $# -gt 0 ]]; do
case "$1" in
-v|--verbose)
VERBOSE=true
shift
;;
-d|--dry-run)
DRY_RUN=true
shift
;;
-o|--output)
OUTPUT_FILE="$2"
shift 2
;;
-j|--jobs)
THREADS="$2"
shift 2
;;
-h|--help)
usage 0
;;
--)
shift
break
;;
*)
echo "ERROR: Unknown option: $1" >&2
usage 1
;;
esac
done
# Validate required arguments
[[ -n "$OUTPUT_FILE" ]] || { echo "ERROR: -o/--output is required" >&2; usage 1; }
```
### Pattern 5: Structured Logging
```bash
#!/bin/bash
set -Eeuo pipefail
# Logging functions
log_info() {
echo "[$(date +'%Y-%m-%d %H:%M:%S')] INFO: $*" >&2
}
log_warn() {
echo "[$(date +'%Y-%m-%d %H:%M:%S')] WARN: $*" >&2
}
log_error() {
echo "[$(date +'%Y-%m-%d %H:%M:%S')] ERROR: $*" >&2
}
log_debug() {
if [[ "${DEBUG:-0}" == "1" ]]; then
echo "[$(date +'%Y-%m-%d %H:%M:%S')] DEBUG: $*" >&2
fi
}
# Usage
log_info "Starting script"
log_debug "Debug information"
log_warn "Warning message"
log_error "Error occurred"
```
### Pattern 6: Process Orchestration with Signals
```bash
#!/bin/bash
set -Eeuo pipefail
# Track background processes
PIDS=()
cleanup() {
log_info "Shutting down..."
# Terminate all background processes
for pid in "${PIDS[@]}"; do
if kill -0 "$pid" 2>/dev/null; then
kill -TERM "$pid" 2>/dev/null || true
fi
done
# Wait for graceful shutdown
for pid in "${PIDS[@]}"; do
wait "$pid" 2>/dev/null || true
done
}
trap cleanup SIGTERM SIGINT
# Start background tasks
background_task &
PIDS+=($!)
another_task &
PIDS+=($!)
# Wait for all background processes
wait
```
### Pattern 7: Safe File Operations
```bash
#!/bin/bash
set -Eeuo pipefail
# Move safely without overwriting: validate source and destination first
safe_move() {
local -r source="$1"
local -r dest="$2"
if [[ ! -e "$source" ]]; then
echo "ERROR: Source does not exist: $source" >&2
return 1
fi
if [[ -e "$dest" ]]; then
echo "ERROR: Destination already exists: $dest" >&2
return 1
fi
mv "$source" "$dest"
}
# Safe directory cleanup
safe_rmdir() {
local -r dir="$1"
if [[ ! -d "$dir" ]]; then
echo "ERROR: Not a directory: $dir" >&2
return 1
fi
# Use -I flag to prompt before rm (BSD/GNU compatible)
rm -rI -- "$dir"
}
# Atomic file writes
atomic_write() {
local -r target="$1"
local tmpfile  # assigned below; 'local -r' here would make the assignment fail
tmpfile=$(mktemp) || return 1
# Write to temp file first
cat > "$tmpfile"
# Atomic rename
mv "$tmpfile" "$target"
}
```
### Pattern 8: Idempotent Script Design
```bash
#!/bin/bash
set -Eeuo pipefail
# Check if resource already exists
ensure_directory() {
local -r dir="$1"
if [[ -d "$dir" ]]; then
log_info "Directory already exists: $dir"
return 0
fi
mkdir -p "$dir" || {
log_error "Failed to create directory: $dir"
return 1
}
log_info "Created directory: $dir"
}
# Ensure configuration state
ensure_config() {
local -r config_file="$1"
local -r default_value="$2"
if [[ ! -f "$config_file" ]]; then
echo "$default_value" > "$config_file"
log_info "Created config: $config_file"
fi
}
# Rerunning script multiple times should be safe
ensure_directory "/var/cache/myapp"
ensure_config "/etc/myapp/config" "DEBUG=false"
```
### Pattern 9: Safe Command Substitution
```bash
#!/bin/bash
set -Eeuo pipefail
# Use $() instead of backticks
name=$(<"$file") # Modern, safe variable assignment from file
output=$(command -v python3) # Get command location safely
# Handle command substitution with error checking
result=$(command -v node) || {
    log_error "node command not found"
    exit 1  # at script top level; use 'return 1' inside a function
}
# For multiple lines
mapfile -t lines < <(grep "pattern" "$file")
# NUL-safe iteration
while IFS= read -r -d '' file; do
echo "Processing: $file"
done < <(find /path -type f -print0)
```
### Pattern 10: Dry-Run Support
```bash
#!/bin/bash
set -Eeuo pipefail
DRY_RUN="${DRY_RUN:-false}"
run_cmd() {
if [[ "$DRY_RUN" == "true" ]]; then
echo "[DRY RUN] Would execute: $*"
return 0
fi
"$@"
}
# Usage
run_cmd cp "$source" "$dest"
run_cmd rm "$file"
run_cmd chown "$owner" "$target"
```
## Advanced Defensive Techniques
### Named Parameters Pattern
```bash
#!/bin/bash
set -Eeuo pipefail
process_data() {
local input_file=""
local output_dir=""
local format="json"
# Parse named parameters
while [[ $# -gt 0 ]]; do
case "$1" in
--input=*)
input_file="${1#*=}"
;;
--output=*)
output_dir="${1#*=}"
;;
--format=*)
format="${1#*=}"
;;
*)
echo "ERROR: Unknown parameter: $1" >&2
return 1
;;
esac
shift
done
# Validate required parameters
[[ -n "$input_file" ]] || { echo "ERROR: --input is required" >&2; return 1; }
[[ -n "$output_dir" ]] || { echo "ERROR: --output is required" >&2; return 1; }
}
```
### Dependency Checking
```bash
#!/bin/bash
set -Eeuo pipefail
check_dependencies() {
local -a missing_deps=()
local -a required=("jq" "curl" "git")
for cmd in "${required[@]}"; do
if ! command -v "$cmd" &>/dev/null; then
missing_deps+=("$cmd")
fi
done
if [[ ${#missing_deps[@]} -gt 0 ]]; then
echo "ERROR: Missing required commands: ${missing_deps[*]}" >&2
return 1
fi
}
check_dependencies
```
## Best Practices Summary
1. **Always use strict mode** - `set -Eeuo pipefail`
2. **Quote all variables** - `"$variable"` prevents word splitting
3. **Use [[]] conditionals** - More robust than [ ]
4. **Implement error trapping** - Catch and handle errors gracefully
5. **Validate all inputs** - Check file existence, permissions, formats
6. **Use functions for reusability** - Prefix with meaningful names
7. **Implement structured logging** - Include timestamps and levels
8. **Support dry-run mode** - Allow users to preview changes
9. **Handle temporary files safely** - Use mktemp, cleanup with trap
10. **Design for idempotency** - Scripts should be safe to rerun
11. **Document requirements** - List dependencies and minimum versions
12. **Test error paths** - Ensure error handling works correctly
13. **Use `command -v`** - Safer than `which` for checking executables
14. **Prefer printf over echo** - More predictable across systems
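Item 14 in one line: `echo` interprets some strings as options, while `printf` always treats its arguments as data. A minimal illustration (not from the patterns above):

```shell
#!/bin/bash
set -Eeuo pipefail
value="-n"
# bash's builtin echo reads "-n" as "suppress newline" and prints nothing
echo "$value"
# printf keeps format and data separate, so the string survives intact
printf '%s\n' "$value"
```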
## Resources
- **Bash Strict Mode**: http://redsymbol.net/articles/unofficial-bash-strict-mode/
- **Google Shell Style Guide**: https://google.github.io/styleguide/shellguide.html
- **Defensive BASH Programming**: https://www.lifepipe.net/
| """
Test for 'bash-defensive-patterns' skill — Bash Defensive Scripting
Validates that the Agent created idiomatic, defensive bash scripts with
proper error handling and that shellcheck validates them.
"""
import os
import subprocess
import pytest
class TestBashDefensivePatterns:
"""Verify defensive bash scripting patterns."""
REPO_DIR = "/workspace/shellcheck"
# [!] Change: updated path from examples/defensive to test as specified in the requirements doc
SCRIPTS_DIR = "test"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_main_script_exists(self):
"""Main defensive script must exist."""
script_dir = os.path.join(self.REPO_DIR, self.SCRIPTS_DIR)
if not os.path.isdir(script_dir):
pytest.fail(f"Directory {self.SCRIPTS_DIR} not found")
scripts = [f for f in os.listdir(script_dir) if f.endswith(".sh")]
# [!] Change: updated the path in the error message text
assert len(scripts) >= 1, "No .sh scripts found in test/"
def test_readme_exists(self):
"""README.md must exist in test/."""
fpath = os.path.join(self.REPO_DIR, self.SCRIPTS_DIR, "README.md")
assert os.path.isfile(fpath), "README.md not found"
# ------------------------------------------------------------------
# L2: content & shellcheck validation
# ------------------------------------------------------------------
def _get_scripts(self):
script_dir = os.path.join(self.REPO_DIR, self.SCRIPTS_DIR)
return [
os.path.join(script_dir, f)
for f in os.listdir(script_dir)
if f.endswith(".sh")
]
def test_scripts_have_shebang(self):
"""All scripts must start with #!/bin/bash or #!/usr/bin/env bash."""
for script in self._get_scripts():
with open(script, "r") as f:
first_line = f.readline().strip()
valid = first_line.startswith("#!/bin/bash") or first_line.startswith(
"#!/usr/bin/env bash"
)
assert valid, f"{script} missing proper shebang: {first_line}"
def test_set_euo_pipefail(self):
"""Scripts must include 'set -euo pipefail'."""
for script in self._get_scripts():
with open(script, "r") as f:
content = f.read()
assert "set -e" in content, f"{script} missing set -e"
            # Check for -u and pipefail (the options may be set separately)
            if "set -" in content:
                has_u = "set -u" in content or "-u" in content.split("set -")[1]
            else:
                has_u = False
            has_pipefail = "pipefail" in content
            assert has_u or has_pipefail, f"{script} missing -u or pipefail"
def test_trap_handler(self):
"""Scripts must define a trap for error handling."""
found_trap = False
for script in self._get_scripts():
with open(script, "r") as f:
content = f.read()
if "trap " in content or "trap\t" in content:
found_trap = True
break
assert found_trap, "No script defines a trap handler"
def test_shellcheck_passes(self):
"""shellcheck must pass on all scripts."""
for script in self._get_scripts():
result = subprocess.run(
["shellcheck", "-S", "warning", script],
capture_output=True,
text=True,
timeout=60,
)
assert (
result.returncode == 0
), f"shellcheck failed on {script}:\n{result.stdout}"
    def test_variable_quoting(self):
        """Scripts should use quoted variables (e.g., "$var" not $var)."""
        for script in self._get_scripts():
            with open(script, "r") as f:
                content = f.read()
            # Unquoted expansions (SC2086) are already flagged by shellcheck;
            # here we only check that quoted expansions appear at all.
            assert '"$' in content, f"{script} never quotes a variable expansion"
def test_function_definitions(self):
"""At least one script should define helper functions."""
found = False
for script in self._get_scripts():
with open(script, "r") as f:
content = f.read()
if "function " in content or "()" in content:
found = True
break
assert found, "No script defines functions"
def test_readonly_variables(self):
"""Scripts should use readonly or declare -r for constants."""
found = False
for script in self._get_scripts():
with open(script, "r") as f:
content = f.read()
if "readonly " in content or "declare -r" in content:
found = True
break
assert found, "No script uses readonly/declare -r for constants"
def test_error_messages_to_stderr(self):
"""Error messages should be directed to stderr (>&2)."""
found = False
for script in self._get_scripts():
with open(script, "r") as f:
content = f.read()
if ">&2" in content or "1>&2" in content or "2>" in content:
found = True
break
assert found, "No script sends error messages to stderr"
def test_script_is_executable_or_runnable(self):
"""Scripts must be runnable with bash."""
for script in self._get_scripts():
result = subprocess.run(
["bash", "-n", script],
capture_output=True,
text=True,
timeout=30,
)
assert (
result.returncode == 0
), f"Syntax check failed for {script}:\n{result.stderr}"
| https://github.com/koalaman/shellcheck | zhangyiiiiii/swe-skills-bench-python | |
gitlab-ci-patterns | GitLab CI Patterns | See task file for detailed mission requirements. | fix | # Task: Fix GitLab CI Security Pipeline Templates
## Background
The existing GitLab CI security scanning templates under `lib/gitlab/ci/templates/Security/` have missing or incomplete `extends` and `rules` fields, causing them to fail validation. These templates need to be updated to conform to GitLab CI template standards.
## Files to Modify
- `lib/gitlab/ci/templates/Security/SAST.gitlab-ci.yml` - Fix missing extends/rules
- `lib/gitlab/ci/templates/Security/Dependency-Scanning.gitlab-ci.yml` - Fix missing extends/rules
- `lib/gitlab/ci/templates/Security/Secret-Detection.gitlab-ci.yml` - Fix missing extends/rules
## Requirements
### For each Security template:
- Ensure every job definition includes `extends` referencing the correct base job (if applicable)
- Add proper `rules` section with:
- CI pipeline trigger conditions
- Branch/merge request filtering
- `allow_failure` settings where appropriate
- Ensure `stage` is set correctly (typically `test` or a security-specific stage)
- `artifacts:reports` paths must be correctly configured for SARIF or JSON output
- Template variables (`$SAST_EXCLUDED_PATHS`, `$DS_EXCLUDED_PATHS`, etc.) should have sensible defaults
### Validation
- All YAML files must be syntactically valid and loadable by Ruby's YAML parser
- Template structure must follow GitLab CI syntax conventions
## Acceptance Criteria
- `lib/gitlab/ci/templates/Security/*.yml` files are valid YAML
- Each security template contains proper `rules` and `extends` fields
- Templates conform to GitLab CI pipeline syntax
| ---
name: gitlab-ci-patterns
description: Build GitLab CI/CD pipelines with multi-stage workflows, caching, and distributed runners for scalable automation. Use when implementing GitLab CI/CD, optimizing pipeline performance, or setting up automated testing and deployment.
---
# GitLab CI Patterns
Comprehensive GitLab CI/CD pipeline patterns for automated testing, building, and deployment.
## Purpose
Create efficient GitLab CI pipelines with proper stage organization, caching, and deployment strategies.
## When to Use
- Automate GitLab-based CI/CD
- Implement multi-stage pipelines
- Configure GitLab Runners
- Deploy to Kubernetes from GitLab
- Implement GitOps workflows
## Basic Pipeline Structure
```yaml
stages:
- build
- test
- deploy
variables:
DOCKER_DRIVER: overlay2
DOCKER_TLS_CERTDIR: "/certs"
build:
stage: build
image: node:20
script:
- npm ci
- npm run build
artifacts:
paths:
- dist/
expire_in: 1 hour
cache:
key: ${CI_COMMIT_REF_SLUG}
paths:
- node_modules/
test:
stage: test
image: node:20
script:
- npm ci
- npm run lint
- npm test
coverage: '/Lines\s*:\s*(\d+\.\d+)%/'
artifacts:
reports:
coverage_report:
coverage_format: cobertura
path: coverage/cobertura-coverage.xml
deploy:
stage: deploy
image: bitnami/kubectl:latest
script:
- kubectl apply -f k8s/
- kubectl rollout status deployment/my-app
only:
- main
environment:
name: production
url: https://app.example.com
```
## Docker Build and Push
```yaml
build-docker:
stage: build
image: docker:24
services:
- docker:24-dind
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
script:
- docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
- docker build -t $CI_REGISTRY_IMAGE:latest .
- docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
- docker push $CI_REGISTRY_IMAGE:latest
only:
- main
- tags
```
## Multi-Environment Deployment
```yaml
.deploy_template: &deploy_template
image: bitnami/kubectl:latest
before_script:
- kubectl config set-cluster k8s --server="$KUBE_URL" --insecure-skip-tls-verify=true
- kubectl config set-credentials admin --token="$KUBE_TOKEN"
- kubectl config set-context default --cluster=k8s --user=admin
- kubectl config use-context default
deploy:staging:
<<: *deploy_template
stage: deploy
script:
- kubectl apply -f k8s/ -n staging
- kubectl rollout status deployment/my-app -n staging
environment:
name: staging
url: https://staging.example.com
only:
- develop
deploy:production:
<<: *deploy_template
stage: deploy
script:
- kubectl apply -f k8s/ -n production
- kubectl rollout status deployment/my-app -n production
environment:
name: production
url: https://app.example.com
when: manual
only:
- main
```
## Terraform Pipeline
```yaml
stages:
- validate
- plan
- apply
variables:
TF_ROOT: ${CI_PROJECT_DIR}/terraform
TF_VERSION: "1.6.0"
before_script:
- cd ${TF_ROOT}
- terraform --version
validate:
stage: validate
image: hashicorp/terraform:${TF_VERSION}
script:
- terraform init -backend=false
- terraform validate
- terraform fmt -check
plan:
stage: plan
image: hashicorp/terraform:${TF_VERSION}
script:
- terraform init
- terraform plan -out=tfplan
artifacts:
paths:
- ${TF_ROOT}/tfplan
expire_in: 1 day
apply:
stage: apply
image: hashicorp/terraform:${TF_VERSION}
script:
- terraform init
- terraform apply -auto-approve tfplan
dependencies:
- plan
when: manual
only:
- main
```
## Security Scanning
```yaml
include:
- template: Security/SAST.gitlab-ci.yml
- template: Security/Dependency-Scanning.gitlab-ci.yml
- template: Security/Container-Scanning.gitlab-ci.yml
trivy-scan:
stage: test
image: aquasec/trivy:latest
script:
- trivy image --exit-code 1 --severity HIGH,CRITICAL $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
allow_failure: true
```
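The included security templates gate execution with `rules` and are usually non-blocking. A sketch of that shape (illustrative only — check the actual upstream template for the real job names, variables, and conditions):

```yaml
sast:
  stage: test
  allow_failure: true
  variables:
    SAST_EXCLUDED_PATHS: "spec, test, tests, tmp"
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  artifacts:
    reports:
      sast: gl-sast-report.json
```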
## Caching Strategies
```yaml
# Cache node_modules
build:
cache:
key: ${CI_COMMIT_REF_SLUG}
paths:
- node_modules/
policy: pull-push
# Global cache
cache:
key: ${CI_COMMIT_REF_SLUG}
paths:
- .cache/
- vendor/
# Separate cache per job
job1:
cache:
key: job1-cache
paths:
- build/
job2:
cache:
key: job2-cache
paths:
- dist/
```
## Dynamic Child Pipelines
```yaml
generate-pipeline:
stage: build
script:
- python generate_pipeline.py > child-pipeline.yml
artifacts:
paths:
- child-pipeline.yml
trigger-child:
stage: deploy
trigger:
include:
- artifact: child-pipeline.yml
job: generate-pipeline
strategy: depend
```
## Reference Files
- `assets/gitlab-ci.yml.template` - Complete pipeline template
- `references/pipeline-stages.md` - Stage organization patterns
## Best Practices
1. **Use specific image tags** (node:20, not node:latest)
2. **Cache dependencies** appropriately
3. **Use artifacts** for build outputs
4. **Implement manual gates** for production
5. **Use environments** for deployment tracking
6. **Enable merge request pipelines**
7. **Use pipeline schedules** for recurring jobs
8. **Implement security scanning**
9. **Use CI/CD variables** for secrets
10. **Monitor pipeline performance**
## Related Skills
- `github-actions-templates` - For GitHub Actions
- `deployment-pipeline-design` - For architecture
- `secrets-management` - For secrets handling
| """
Test for 'gitlab-ci-patterns' skill — GitLab CI Security Templates
Validates that the Agent created SAST, DAST, and Dependency Scanning
CI/CD template YAML files following GitLab conventions.
"""
import os
import pytest
class TestGitlabCiPatterns:
"""Verify GitLab CI security scanning templates."""
REPO_DIR = "/workspace/gitlabhq"
TEMPLATE_FILES = [
"lib/gitlab/ci/templates/Security/SAST.gitlab-ci.yml",
"lib/gitlab/ci/templates/Security/DAST.gitlab-ci.yml",
"lib/gitlab/ci/templates/Security/Dependency-Scanning.gitlab-ci.yml",
]
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
@pytest.mark.parametrize("tpl", TEMPLATE_FILES)
def test_template_exists(self, tpl):
"""Security template file must exist."""
fpath = os.path.join(self.REPO_DIR, tpl)
assert os.path.isfile(fpath), f"{tpl} not found"
# ------------------------------------------------------------------
# L2: YAML content validation
# ------------------------------------------------------------------
def _load_template(self, tpl):
import yaml
fpath = os.path.join(self.REPO_DIR, tpl)
with open(fpath, "r") as f:
return yaml.safe_load(f)
@pytest.mark.parametrize("tpl", TEMPLATE_FILES)
def test_template_is_valid_yaml(self, tpl):
"""Template must be valid YAML."""
doc = self._load_template(tpl)
assert isinstance(doc, dict), f"{tpl} is not a YAML mapping"
def test_sast_has_job_definition(self):
"""SAST template must define at least one job."""
doc = self._load_template(self.TEMPLATE_FILES[0])
job_keys = [k for k in doc.keys() if not k.startswith(".") and k != "variables"]
        assert len(job_keys) >= 1, "SAST template has no job definitions"
def test_sast_uses_sast_image(self):
"""SAST template must reference a SAST scanner image."""
fpath = os.path.join(self.REPO_DIR, self.TEMPLATE_FILES[0])
with open(fpath, "r") as f:
content = f.read()
image_markers = ["image:", "SAST", "sast", "semgrep", "analyzer"]
found = sum(1 for m in image_markers if m in content)
assert found >= 2, "SAST template doesn't reference scanner image"
def test_dast_has_stage(self):
"""DAST template must define a stage."""
doc = self._load_template(self.TEMPLATE_FILES[1])
content = str(doc)
assert "stage" in content.lower(), "DAST template missing stage definition"
def test_dependency_scanning_has_artifacts(self):
"""Dependency Scanning template must define artifacts."""
fpath = os.path.join(self.REPO_DIR, self.TEMPLATE_FILES[2])
with open(fpath, "r") as f:
content = f.read()
assert "artifacts" in content, "Dependency Scanning missing artifacts section"
def test_templates_have_script_or_include(self):
"""Each template must define script or include for execution."""
for tpl in self.TEMPLATE_FILES:
fpath = os.path.join(self.REPO_DIR, tpl)
with open(fpath, "r") as f:
content = f.read()
has_execution = (
"script:" in content or "include:" in content or "extends:" in content
)
assert has_execution, f"{tpl} has no script/include/extends"
def test_sast_generates_report(self):
"""SAST template must configure gl-sast-report.json artifact."""
fpath = os.path.join(self.REPO_DIR, self.TEMPLATE_FILES[0])
with open(fpath, "r") as f:
content = f.read()
assert "report" in content.lower(), "SAST template missing report artifact"
def test_templates_have_allow_failure(self):
"""Security templates should set allow_failure for non-blocking runs."""
for tpl in self.TEMPLATE_FILES:
fpath = os.path.join(self.REPO_DIR, tpl)
with open(fpath, "r") as f:
content = f.read()
# allow_failure is a best practice but not strictly required
# instead check for any of these common security template patterns
patterns = ["allow_failure", "rules:", "only:", "when:"]
found = any(p in content for p in patterns)
assert found, f"{tpl} missing execution control (rules/only/when)"
def test_templates_use_variables(self):
"""Templates should define configurable variables."""
for tpl in self.TEMPLATE_FILES:
fpath = os.path.join(self.REPO_DIR, tpl)
with open(fpath, "r") as f:
content = f.read()
assert (
"variables" in content or "$" in content
), f"{tpl} has no variables for configuration"
| https://github.com/gitlabhq/gitlabhq | zhangyiiiiii/swe-skills-bench-ruby | |
implementing-agent-modes | PostHog Agent Mode Architect | See task file for detailed mission requirements. | feature | # Task: Add Agent Batch Processing Mode for PostHog
## Background
Add batch event processing
capabilities for agent mode, enabling efficient bulk event capture
with configurable batching parameters.
## Files to Create/Modify
- posthog/api/capture.py (batch endpoint addition)
- posthog/settings/batch_config.py (new configuration)
- posthog/tests/test_batch_capture.py (new tests)
## Requirements
Batch Capture Endpoint:
- POST /batch endpoint for bulk events
- Accept array of events in request body
- Maximum batch size: 100 events
- Validate each event in batch
Configuration (batch_config.py):
- BATCH_MAX_SIZE: Maximum events per batch
- BATCH_TIMEOUT_MS: Timeout for batch processing
- BATCH_RETRY_COUNT: Retry attempts on failure
- Environment variable overrides
Batch Processing Logic:
- Atomic batch processing (all or nothing)
- Individual event validation
- Detailed error response for invalid events
- Performance metrics logging
### Expected Functionality
- Valid batch succeeds with 200 OK
- Oversized batch returns 400 Bad Request
- Invalid event in batch returns detailed error
- Partial failure handling
## Acceptance Criteria
- `python manage.py test posthog.tests.test_batch_capture` works correctly
- Batch endpoint handles 100 events in <500ms
- Configuration is properly documented
| ---
name: implementing-agent-modes
description: Guidelines to create/update a new mode for PostHog AI agent. Modes are a way to limit what tools, prompts, and prompt injections are applied and under what conditions. Achieve better results using your plan mode.
---
# Agent modes
Use the steps below to plan or implement a new mode. A mode is a way to manage the context of the agent and inject tools, prompts, and mode-related behavior relevant to a product, use case, JTBD, etc. The agent has the `switch_mode` tool that allows it to switch itself to another mode, which might change tools, prompt, and executables, preserving the current context. Some previously created tools are contextual, meaning they're injected on particular pages of the frontend. The modes change the approach and always have tools in the mode context.
## Determine mode name
Explore the `ee/hogai/core/agent_modes/presets` directory and check if there are already modes that match the user's intent. If you want to create a new mode, you should scope it by a PostHog product (Product analytics), product area (SQL), or agent (Instrumentation agent).
## (optionally) Create a new mode in schema
Add a new AgentMode to `frontend/src/queries/schema/schema-assistant-messages.ts` and regenerate the schema using:
```bash
hogli build:schema
```
Alternatively, you may use this command:
```bash
pnpm run schema:build
```
## Create or update mode's scaffolding
A mode should typically contain at least two things:
- An AgentToolkit exposing tools that are specific to the mode and trajectory examples for the todo tool.
- An AgentModeDefinition containing the AgentMode, mode description that is always injected into the context window of the agent, and classes for toolkit and executables.
Note: you should only create new executables if the user needs to modify the prompt, behavior of that mode, or the execution loop itself.
## Adding tools to the mode
Relevant tools might be located in `ee/hogai/tools` or `products/<product_name>/backend/max_tools`. There is a set of tools that is always injected into the context, like the `read_data` tool, but all other tools should be specific to the mode.
Before adding a tool to the toolkit, determine if those tools have tool dependencies. If there are dependencies (like an experiment depends on feature flag creation), loop back to the user to determine whether they want to merge modes into a single one. If they don't want to do that, make sure that you later add a trajectory example clearly explaining mode switching and tool selection.
You should also verify that the tools are backend-first. If tools apply frontend changes without passing proper context back to the conversation, you should propose a way to make them backend-first so the agent has the right context.
## Review the default toolkit
If the new mode contains new Django models, you should review whether the `read_data`, `search`, and `list_data` tools have the functionality to retrieve the models. If they don't support these models, you should use or implement one of the context providers available in `ee/hogai/context/...`.
## Write JTBD-like trajectory examples
Update the AgentToolkit to include trajectory examples. These should be JTBD-style examples showing how the agent should achieve typical tasks with the available tools. Check the Product analytics preset for reference.
## Implement frontend
Update `max-constants.tsx` to include new tools and add the mode to the mode selector. You might also need to create new UI elements for displaying data from the tools.
### Example
Say you've updated the Error tracking tool to list issues. It used to be a frontend tool that only updated filters, but now it outputs error tracking issues. While the agent has the context it needs, the user also needs to see the issues in a human-readable way. In this case, you should design and implement a new component to display the tool's output.
## Add feature flag
All new modes must be feature-flagged. Example:
```ee/hogai/chat_agent/mode_manager.py
@property
def mode_registry(self) -> dict[AgentMode, AgentModeDefinition]:
registry = dict(DEFAULT_CHAT_AGENT_MODE_REGISTRY)
if has_error_tracking_mode_feature_flag(self._team, self._user):
registry[AgentMode.ERROR_TRACKING] = error_tracking_agent
return registry
```
If you have created new tools, make sure you feature flag them correctly:
1. Old tools that are being migrated should not be available if the feature flag is active.
2. New tools should only be available if the feature flag is active.
## Implement and update tests
You should test new tools, presets, executables, and optionally implement evals.
| """
Test for 'implementing-agent-modes' skill — PostHog Agent Mode Implementation
Validates that the Agent implemented custom agent modes with proper state
management, transitions, and configuration.
"""
import os
import subprocess
import pytest
class TestImplementingAgentModes:
"""Verify agent mode implementation in PostHog."""
REPO_DIR = "/workspace/posthog"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_agent_mode_file_exists(self):
"""An agent mode implementation file must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if (
("agent" in f.lower() or "mode" in f.lower())
and f.endswith((".py", ".ts", ".tsx"))
and "node_modules" not in root
and "__pycache__" not in root
):
fpath = os.path.join(root, f)
try:
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
if "mode" in content.lower() and "agent" in content.lower():
found.append(fpath)
except OSError:
pass
assert len(found) >= 1, "No agent mode implementation file found"
def test_test_file_exists(self):
"""Test file for agent modes must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if (
("agent" in f.lower() or "mode" in f.lower())
and ("test" in f.lower() or "spec" in f.lower())
and f.endswith((".py", ".ts", ".tsx"))
):
found.append(os.path.join(root, f))
assert len(found) >= 1, "No agent mode test file found"
# ------------------------------------------------------------------
# L2: content validation
# ------------------------------------------------------------------
def _find_mode_files(self):
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if (
("agent" in f.lower() or "mode" in f.lower())
and f.endswith((".py", ".ts", ".tsx"))
and "node_modules" not in root
and "__pycache__" not in root
):
found.append(os.path.join(root, f))
return found
def _read_all_mode_files(self):
content = ""
for fpath in self._find_mode_files():
try:
with open(fpath, "r", errors="ignore") as f:
content += f.read() + "\n"
except OSError:
pass
return content
def test_mode_enum_or_constants(self):
"""Must define agent mode types/enum."""
content = self._read_all_mode_files()
enum_patterns = [
"enum",
"Enum",
"MODE_",
"AgentMode",
"MODES",
"mode_type",
"class Mode",
]
found = any(p in content for p in enum_patterns)
assert found, "No agent mode enum/constants defined"
def test_state_management(self):
"""Must implement state management for modes."""
content = self._read_all_mode_files()
state_patterns = [
"state",
"setState",
"transition",
"current_mode",
"active_mode",
"switch_mode",
"change_mode",
]
found = sum(1 for p in state_patterns if p in content)
assert found >= 2, "Insufficient state management"
def test_mode_transitions(self):
"""Must implement mode transition logic."""
content = self._read_all_mode_files()
transition_patterns = [
"transition",
"switch",
"activate",
"deactivate",
"enter",
"exit",
"from_mode",
"to_mode",
]
found = sum(1 for p in transition_patterns if p in content)
assert found >= 2, "No mode transition logic found"
def test_mode_configuration(self):
"""Modes must be configurable."""
content = self._read_all_mode_files()
config_patterns = [
"config",
"settings",
"options",
"params",
"properties",
"attributes",
"capabilities",
]
found = any(p in content for p in config_patterns)
assert found, "No mode configuration found"
def test_at_least_3_modes(self):
"""Must define at least 3 distinct modes."""
content = self._read_all_mode_files()
import re
# Look for mode name patterns
mode_names = set()
# String constants like "analysis", "generation", etc.
strings = re.findall(r'["\']([a-z_]+_mode|[a-z_]+)["\']', content.lower())
for s in strings:
if "mode" in s or len(s) > 3:
mode_names.add(s)
# Also count enum-style definitions
enum_values = re.findall(r'(\w+)\s*=\s*["\']', content)
mode_names.update(enum_values)
assert len(mode_names) >= 3, f"Only {len(mode_names)} mode definitions found"
def test_error_handling_in_transitions(self):
"""Transition logic must handle errors."""
content = self._read_all_mode_files()
error_patterns = [
"except",
"catch",
"Error",
"raise",
"throw",
"invalid",
"ValueError",
]
found = any(p in content for p in error_patterns)
assert found, "No error handling in mode transitions"
def test_python_files_compile(self):
"""Python mode files must compile."""
for fpath in self._find_mode_files():
if fpath.endswith(".py"):
result = subprocess.run(
["python", "-m", "py_compile", fpath],
capture_output=True,
text=True,
timeout=30,
)
assert (
result.returncode == 0
), f"{fpath} compile error:\n{result.stderr}"
def test_api_or_interface(self):
"""Modes must expose an API or interface."""
content = self._read_all_mode_files()
api_patterns = [
"def ",
"function ",
"class ",
"interface ",
"export ",
"async def ",
"@api",
"@action",
]
found = sum(1 for p in api_patterns if p in content)
assert found >= 3, "Insufficient API surface for modes"
| https://github.com/PostHog/posthog | zhangyiiiiii/swe-skills-bench-python | |
python-observability | Python Observability Patterns | See task file for detailed mission requirements. | feature | # Task: Add End-to-End Observability Demo for OpenTelemetry Python
## Background
Add a comprehensive observability demonstration script to the opentelemetry-python repository that shows manual instrumentation with tracing, context propagation, and metric collection.
## Files to Create/Modify
- `docs/examples/observability_demo.py` - Main demo script combining tracing and context propagation
## Requirements
### Tracing
- `TracerProvider` configuration with `ConsoleSpanExporter`
- Creating and nesting spans
- Adding span attributes and events
- Exception recording with proper span status
### Context Propagation
- W3C TraceContext format (`traceparent`, `tracestate` headers)
- HTTP header injection for outgoing requests
- Context extraction from incoming request headers
- Cross-service trace correlation demonstration
### Additional Features
- Span links for async workflows
- Baggage for cross-cutting data
- Resource detection
- Proper span error handling and status codes
### Output
- The script must produce trace output with valid `trace_id` format (32-char hex)
- Nested spans must appear with correct parent relationships
## Acceptance Criteria
- `python docs/examples/observability_demo.py` exits with code 0
- Output shows `trace_id` in correct 32-character hex format
- Spans are properly nested and context propagates correctly
| ---
name: python-observability
description: Python observability patterns including structured logging, metrics, and distributed tracing. Use when adding logging, implementing metrics collection, setting up tracing, or debugging production systems.
---
# Python Observability
Instrument Python applications with structured logs, metrics, and traces. When something breaks in production, you need to answer "what, where, and why" without deploying new code.
## When to Use This Skill
- Adding structured logging to applications
- Implementing metrics collection with Prometheus
- Setting up distributed tracing across services
- Propagating correlation IDs through request chains
- Debugging production issues
- Building observability dashboards
## Core Concepts
### 1. Structured Logging
Emit logs as JSON with consistent fields for production environments. Machine-readable logs enable powerful queries and alerts. For local development, consider human-readable formats.
### 2. The Four Golden Signals
Track latency, traffic, errors, and saturation for every service boundary.
### 3. Correlation IDs
Thread a unique ID through all logs and spans for a single request, enabling end-to-end tracing.
### 4. Bounded Cardinality
Keep metric label values bounded. Unbounded labels (like user IDs) explode storage costs.
## Quick Start
```python
import structlog
structlog.configure(
processors=[
structlog.processors.TimeStamper(fmt="iso"),
structlog.processors.JSONRenderer(),
],
)
logger = structlog.get_logger()
logger.info("Request processed", user_id="123", duration_ms=45)
```
## Fundamental Patterns
### Pattern 1: Structured Logging with Structlog
Configure structlog for JSON output with consistent fields.
```python
import logging
import structlog
def configure_logging(log_level: str = "INFO") -> None:
"""Configure structured logging for the application."""
structlog.configure(
processors=[
structlog.contextvars.merge_contextvars,
structlog.processors.add_log_level,
structlog.processors.TimeStamper(fmt="iso"),
structlog.processors.StackInfoRenderer(),
structlog.processors.format_exc_info,
structlog.processors.JSONRenderer(),
],
wrapper_class=structlog.make_filtering_bound_logger(
getattr(logging, log_level.upper())
),
context_class=dict,
logger_factory=structlog.PrintLoggerFactory(),
cache_logger_on_first_use=True,
)
# Initialize at application startup
configure_logging("INFO")
logger = structlog.get_logger()
```
### Pattern 2: Consistent Log Fields
Every log entry should include standard fields for filtering and correlation.
```python
import time
import structlog
from contextvars import ContextVar
# Store correlation ID in context
correlation_id: ContextVar[str] = ContextVar("correlation_id", default="")
logger = structlog.get_logger()
def process_request(request: Request) -> Response:
    """Process request with structured logging."""
    logger.info(
        "Request received",
        correlation_id=correlation_id.get(),
        method=request.method,
        path=request.path,
        user_id=request.user_id,
    )
    start = time.perf_counter()
    try:
        result = handle_request(request)
        logger.info(
            "Request completed",
            correlation_id=correlation_id.get(),
            status_code=200,
            duration_ms=(time.perf_counter() - start) * 1000,
        )
return result
except Exception as e:
logger.error(
"Request failed",
correlation_id=correlation_id.get(),
error_type=type(e).__name__,
error_message=str(e),
)
raise
```
### Pattern 3: Semantic Log Levels
Use log levels consistently across the application.
| Level | Purpose | Examples |
|-------|---------|----------|
| `DEBUG` | Development diagnostics | Variable values, internal state |
| `INFO` | Request lifecycle, operations | Request start/end, job completion |
| `WARNING` | Recoverable anomalies | Retry attempts, fallback used |
| `ERROR` | Failures needing attention | Exceptions, service unavailable |
```python
# DEBUG: Detailed internal information
logger.debug("Cache lookup", key=cache_key, hit=cache_hit)
# INFO: Normal operational events
logger.info("Order created", order_id=order.id, total=order.total)
# WARNING: Abnormal but handled situations
logger.warning(
"Rate limit approaching",
current_rate=950,
limit=1000,
reset_seconds=30,
)
# ERROR: Failures requiring investigation
logger.error(
"Payment processing failed",
order_id=order.id,
error=str(e),
payment_provider="stripe",
)
```
Never log expected behavior at `ERROR`. A user entering a wrong password is `INFO`, not `ERROR`.
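As a minimal stdlib-logging sketch of this rule (the `handle_login` helper and its fields are illustrative, not a library API):

```python
import logging

logger = logging.getLogger("auth")

def handle_login(username: str, password_ok: bool) -> bool:
    """A wrong password is expected user behavior: log it at INFO."""
    if password_ok:
        logger.info("Login succeeded", extra={"fields": {"user": username}})
        return True
    # Not an ERROR: nothing is broken, the user just mistyped
    logger.info("Login rejected", extra={"fields": {"user": username}})
    return False
```

Reserve `ERROR` for conditions an operator must investigate; anything a user can trigger in normal use belongs at `INFO` or `WARNING`.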
### Pattern 4: Correlation ID Propagation
Generate a unique ID at ingress and thread it through all operations.
```python
from contextvars import ContextVar
import uuid
import structlog
correlation_id: ContextVar[str] = ContextVar("correlation_id", default="")
def set_correlation_id(cid: str | None = None) -> str:
"""Set correlation ID for current context."""
cid = cid or str(uuid.uuid4())
correlation_id.set(cid)
structlog.contextvars.bind_contextvars(correlation_id=cid)
return cid
# FastAPI middleware example
from fastapi import Request
async def correlation_middleware(request: Request, call_next):
"""Middleware to set and propagate correlation ID."""
# Use incoming header or generate new
cid = request.headers.get("X-Correlation-ID") or str(uuid.uuid4())
set_correlation_id(cid)
response = await call_next(request)
response.headers["X-Correlation-ID"] = cid
return response
```
Propagate to outbound requests:
```python
import httpx
async def call_downstream_service(endpoint: str, data: dict) -> dict:
"""Call downstream service with correlation ID."""
async with httpx.AsyncClient() as client:
response = await client.post(
endpoint,
json=data,
headers={"X-Correlation-ID": correlation_id.get()},
)
return response.json()
```
## Advanced Patterns
### Pattern 5: The Four Golden Signals with Prometheus
Track these metrics for every service boundary:
```python
from prometheus_client import Counter, Histogram, Gauge
# Latency: How long requests take
REQUEST_LATENCY = Histogram(
"http_request_duration_seconds",
"Request latency in seconds",
["method", "endpoint", "status"],
buckets=[0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10],
)
# Traffic: Request rate
REQUEST_COUNT = Counter(
"http_requests_total",
"Total HTTP requests",
["method", "endpoint", "status"],
)
# Errors: Error rate
ERROR_COUNT = Counter(
"http_errors_total",
"Total HTTP errors",
["method", "endpoint", "error_type"],
)
# Saturation: Resource utilization
DB_POOL_USAGE = Gauge(
"db_connection_pool_used",
"Number of database connections in use",
)
```
Instrument your endpoints:
```python
import time
from functools import wraps
def track_request(func):
"""Decorator to track request metrics."""
@wraps(func)
async def wrapper(request: Request, *args, **kwargs):
method = request.method
endpoint = request.url.path
start = time.perf_counter()
try:
response = await func(request, *args, **kwargs)
status = str(response.status_code)
return response
except Exception as e:
status = "500"
ERROR_COUNT.labels(
method=method,
endpoint=endpoint,
error_type=type(e).__name__,
).inc()
raise
finally:
duration = time.perf_counter() - start
REQUEST_COUNT.labels(method=method, endpoint=endpoint, status=status).inc()
REQUEST_LATENCY.labels(method=method, endpoint=endpoint, status=status).observe(duration)
return wrapper
```
### Pattern 6: Bounded Cardinality
Avoid labels with unbounded values to prevent metric explosion.
```python
# BAD: User ID has potentially millions of values
REQUEST_COUNT.labels(method="GET", user_id=user.id) # Don't do this!
# GOOD: Bounded values only
REQUEST_COUNT.labels(method="GET", endpoint="/users", status="200")
# If you need per-user metrics, use a different approach:
# - Log the user_id and query logs
# - Use a separate analytics system
# - Bucket users by type/tier
REQUEST_COUNT.labels(
method="GET",
endpoint="/users",
user_tier="premium", # Bounded set of values
)
```
### Pattern 7: Timed Operations with Context Manager
Create a reusable timing context manager for operations.
```python
from contextlib import contextmanager
import time
import structlog
logger = structlog.get_logger()
@contextmanager
def timed_operation(name: str, **extra_fields):
"""Context manager for timing and logging operations."""
start = time.perf_counter()
logger.debug("Operation started", operation=name, **extra_fields)
try:
yield
except Exception as e:
elapsed_ms = (time.perf_counter() - start) * 1000
logger.error(
"Operation failed",
operation=name,
duration_ms=round(elapsed_ms, 2),
error=str(e),
**extra_fields,
)
raise
else:
elapsed_ms = (time.perf_counter() - start) * 1000
logger.info(
"Operation completed",
operation=name,
duration_ms=round(elapsed_ms, 2),
**extra_fields,
)
# Usage
with timed_operation("fetch_user_orders", user_id=user.id):
orders = await order_repository.get_by_user(user.id)
```
### Pattern 8: OpenTelemetry Tracing
Set up distributed tracing with OpenTelemetry.
**Note:** OpenTelemetry is actively evolving. Check the [official Python documentation](https://opentelemetry.io/docs/languages/python/) for the latest API patterns and best practices.
```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
def configure_tracing(service_name: str, otlp_endpoint: str) -> None:
"""Configure OpenTelemetry tracing."""
provider = TracerProvider()
processor = BatchSpanProcessor(OTLPSpanExporter(endpoint=otlp_endpoint))
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)
async def process_order(order_id: str) -> Order:
"""Process order with tracing."""
with tracer.start_as_current_span("process_order") as span:
span.set_attribute("order.id", order_id)
with tracer.start_as_current_span("validate_order"):
validate_order(order_id)
with tracer.start_as_current_span("charge_payment"):
charge_payment(order_id)
with tracer.start_as_current_span("send_confirmation"):
send_confirmation(order_id)
return order
```
## Best Practices Summary
1. **Use structured logging** - JSON logs with consistent fields
2. **Propagate correlation IDs** - Thread through all requests and logs
3. **Track the four golden signals** - Latency, traffic, errors, saturation
4. **Bound label cardinality** - Never use unbounded values as metric labels
5. **Log at appropriate levels** - Don't cry wolf with ERROR
6. **Include context** - User ID, request ID, operation name in logs
7. **Use context managers** - Consistent timing and error handling
8. **Separate concerns** - Observability code shouldn't pollute business logic
9. **Test your observability** - Verify logs and metrics in integration tests
10. **Set up alerts** - Metrics are useless without alerting
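Practices 1 and 9 can be combined in a small stdlib-only sketch: a JSON formatter plus a test helper that asserts on captured log fields. The `JsonFormatter` and `assert_logged` names here are illustrative, not a library API:

```python
import io
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as a single JSON object (practice #1)."""
    def format(self, record: logging.LogRecord) -> str:
        payload = {"level": record.levelname, "event": record.getMessage()}
        payload.update(getattr(record, "fields", {}))
        return json.dumps(payload)

def assert_logged(stream: io.StringIO, **expected) -> None:
    """Verify the expected fields appeared in captured output (practice #9)."""
    entries = [json.loads(line) for line in stream.getvalue().splitlines()]
    assert any(
        all(e.get(k) == v for k, v in expected.items()) for e in entries
    ), expected

# Wire a logger to an in-memory stream so tests can inspect its output
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order created", extra={"fields": {"order_id": "o-1"}})
assert_logged(stream, event="order created", order_id="o-1")
```

The same `assert_logged` helper works in integration tests: point the handler at a `StringIO`, run the code under test, then assert on the fields you expect.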
| """
Test for 'python-observability' skill — End-to-End Observability Demo
Validates that the Agent created an OpenTelemetry demo script with tracing,
context propagation, and proper trace_id formatting.
"""
import os
import re
import subprocess
import pytest
class TestPythonObservability:
"""Verify OpenTelemetry observability demo implementation."""
REPO_DIR = "/workspace/opentelemetry-python"
# ------------------------------------------------------------------
# L1: file existence & syntax
# ------------------------------------------------------------------
def test_demo_script_exists(self):
"""docs/examples/observability_demo.py must exist."""
fpath = os.path.join(self.REPO_DIR, "docs", "examples", "observability_demo.py")
assert os.path.isfile(fpath), "observability_demo.py not found"
def test_demo_script_compiles(self):
"""Demo script must compile without syntax errors."""
result = subprocess.run(
["python", "-m", "py_compile", "docs/examples/observability_demo.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
# ------------------------------------------------------------------
# L2: structural checks
# ------------------------------------------------------------------
def _read_source(self):
fpath = os.path.join(self.REPO_DIR, "docs", "examples", "observability_demo.py")
with open(fpath, "r", encoding="utf-8") as f:
return f.read()
def test_tracer_provider_configured(self):
"""Script must configure TracerProvider."""
source = self._read_source()
assert "TracerProvider" in source, "TracerProvider not configured"
def test_console_exporter_used(self):
"""Script must use ConsoleSpanExporter."""
source = self._read_source()
assert "ConsoleSpanExporter" in source, "ConsoleSpanExporter not used"
def test_span_creation(self):
"""Script must create spans (start_as_current_span or start_span)."""
source = self._read_source()
patterns = ["start_as_current_span", "start_span"]
assert any(p in source for p in patterns), "No span creation found"
def test_context_propagation(self):
"""Script must demonstrate W3C context propagation."""
source = self._read_source()
ctx_patterns = [
"inject",
"extract",
"traceparent",
"TraceContext",
"propagate",
"Propagator",
]
found = sum(1 for p in ctx_patterns if p in source)
assert found >= 2, f"Insufficient context propagation code (matched {found}/6)"
def test_span_attributes_or_events(self):
"""Script must add attributes or events to spans."""
source = self._read_source()
patterns = ["set_attribute", "add_event", "set_status"]
assert any(
p in source for p in patterns
), "No span attributes/events/status found"
def test_exception_recording(self):
"""Script must record exceptions with proper span status."""
source = self._read_source()
patterns = ["record_exception", "set_status", "StatusCode.ERROR"]
assert any(
p in source for p in patterns
), "No exception recording / error status handling found"
# ------------------------------------------------------------------
# L2: runtime verification
# ------------------------------------------------------------------
def test_demo_runs_successfully(self):
"""Demo script must exit with code 0."""
result = subprocess.run(
["python", "docs/examples/observability_demo.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
assert (
result.returncode == 0
), f"Demo failed (rc={result.returncode}):\n{result.stderr[-2000:]}"
def test_output_contains_trace_id(self):
"""Output must contain a trace_id in 32-char hex format."""
result = subprocess.run(
["python", "docs/examples/observability_demo.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
if result.returncode != 0:
pytest.skip(f"Demo script failed: {result.stderr[:500]}")
combined = result.stdout + result.stderr
# 32-char hex trace_id
hex32_pattern = re.compile(r"[0-9a-fA-F]{32}")
assert hex32_pattern.search(
combined
), f"No 32-char hex trace_id found in output:\n{combined[:2000]}"
def test_output_shows_nested_spans(self):
"""Output should show multiple span names indicating nesting."""
result = subprocess.run(
["python", "docs/examples/observability_demo.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
if result.returncode != 0:
pytest.skip(f"Demo script failed: {result.stderr[:500]}")
combined = result.stdout + result.stderr
# ConsoleSpanExporter outputs span name; count unique "name" occurrences
span_count = combined.count('"name"') + combined.count("'name'")
assert (
span_count >= 2
), f"Expected ≥2 spans in output, found {span_count} 'name' fields"
| https://github.com/open-telemetry/opentelemetry-python | zhangyiiiiii/swe-skills-bench-python | |
distributed-tracing | Distributed Tracing & Observability | See task file for detailed mission requirements. | feature | # Task: Add OpenTelemetry Collector Pipeline Configuration Example
## Background
Add a complete collector pipeline
configuration example demonstrating receivers, processors, and exporters
in the OpenTelemetry Collector repository.
## Files to Create/Modify
- examples/pipeline-demo/config.yaml (collector configuration)
- examples/pipeline-demo/README.md (documentation)
- examples/pipeline-demo/docker-compose.yaml (optional local setup)
## Requirements
Collector Configuration (config.yaml):
Receivers:
- otlp: gRPC and HTTP protocols
- prometheus: Prometheus scrape endpoint
- jaeger: Jaeger thrift receiver
Processors:
- batch: Batch telemetry data
- memory_limiter: Limit memory usage
- attributes: Add/modify span attributes
- filter: Drop unwanted telemetry
Exporters:
- otlp: Send to OTLP endpoint
- prometheus: Expose Prometheus endpoint
- logging: Debug output
Pipelines:
- traces: otlp -> batch -> otlp
- metrics: prometheus -> memory_limiter -> prometheus
- logs: otlp -> filter -> logging
Configuration Features:
- Multi-pipeline setup
- Batch configuration tuning
- Memory limits for production
- TLS configuration placeholders
## Acceptance Criteria
- `otelcol validate --config examples/pipeline-demo/config.yaml` exits with code 0
- All receivers, processors, exporters properly configured
- README explains each pipeline component
| ---
name: distributed-tracing
description: Implement distributed tracing with Jaeger and Tempo to track requests across microservices and identify performance bottlenecks. Use when debugging microservices, analyzing request flows, or implementing observability for distributed systems.
---
# Distributed Tracing
Implement distributed tracing with Jaeger and Tempo for request flow visibility across microservices.
## Purpose
Track requests across distributed systems to understand latency, dependencies, and failure points.
## When to Use
- Debug latency issues
- Understand service dependencies
- Identify bottlenecks
- Trace error propagation
- Analyze request paths
## Distributed Tracing Concepts
### Trace Structure
```
Trace (Request ID: abc123)
↓
Span (frontend) [100ms]
↓
Span (api-gateway) [80ms]
├→ Span (auth-service) [10ms]
└→ Span (user-service) [60ms]
└→ Span (database) [40ms]
```
### Key Components
- **Trace** - End-to-end request journey
- **Span** - Single operation within a trace
- **Context** - Metadata propagated between services
- **Tags** - Key-value pairs for filtering
- **Logs** - Timestamped events within a span
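The tree above can be modeled directly. This illustrative sketch (not a tracing-library API) walks the slowest child at each level to find the latency-critical path:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Span:
    """Illustrative span: one timed operation inside a trace."""
    name: str
    duration_ms: int
    children: List["Span"] = field(default_factory=list)

# The trace from the diagram above
db = Span("database", 40)
user = Span("user-service", 60, [db])
auth = Span("auth-service", 10)
gateway = Span("api-gateway", 80, [auth, user])
root = Span("frontend", 100, [gateway])

def slowest_path(span: Span) -> List[str]:
    """Follow the slowest child at each level to the latency bottleneck."""
    if not span.children:
        return [span.name]
    worst = max(span.children, key=lambda s: s.duration_ms)
    return [span.name] + slowest_path(worst)
```

Here the critical path runs frontend → api-gateway → user-service → database, which is exactly the chain a tracing UI like Jaeger highlights when you open a slow trace.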
## Jaeger Setup
### Kubernetes Deployment
```bash
# Deploy Jaeger Operator
kubectl create namespace observability
kubectl create -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.51.0/jaeger-operator.yaml -n observability
# Deploy Jaeger instance
kubectl apply -f - <<EOF
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: jaeger
namespace: observability
spec:
strategy: production
storage:
type: elasticsearch
options:
es:
server-urls: http://elasticsearch:9200
ingress:
enabled: true
EOF
```
### Docker Compose
```yaml
version: "3.8"
services:
jaeger:
image: jaegertracing/all-in-one:latest
ports:
- "5775:5775/udp"
- "6831:6831/udp"
- "6832:6832/udp"
- "5778:5778"
- "16686:16686" # UI
- "14268:14268" # Collector
- "14250:14250" # gRPC
- "9411:9411" # Zipkin
environment:
- COLLECTOR_ZIPKIN_HOST_PORT=:9411
```
**Reference:** See `references/jaeger-setup.md`
## Application Instrumentation
### OpenTelemetry (Recommended)
#### Python (Flask)
```python
from opentelemetry import trace
from opentelemetry.exporter.jaeger.thrift import JaegerExporter
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from flask import Flask
# Initialize tracer
resource = Resource(attributes={SERVICE_NAME: "my-service"})
provider = TracerProvider(resource=resource)
processor = BatchSpanProcessor(JaegerExporter(
agent_host_name="jaeger",
agent_port=6831,
))
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
# Instrument Flask
app = Flask(__name__)
FlaskInstrumentor().instrument_app(app)
@app.route('/api/users')
def get_users():
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("get_users") as span:
span.set_attribute("user.count", 100)
# Business logic
users = fetch_users_from_db()
return {"users": users}
def fetch_users_from_db():
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("database_query") as span:
span.set_attribute("db.system", "postgresql")
span.set_attribute("db.statement", "SELECT * FROM users")
# Database query
return query_database()
```
#### Node.js (Express)
```javascript
const { trace } = require("@opentelemetry/api");
const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");
const { JaegerExporter } = require("@opentelemetry/exporter-jaeger");
const { BatchSpanProcessor } = require("@opentelemetry/sdk-trace-base");
const { Resource } = require("@opentelemetry/resources");
const { registerInstrumentations } = require("@opentelemetry/instrumentation");
const { HttpInstrumentation } = require("@opentelemetry/instrumentation-http");
const {
  ExpressInstrumentation,
} = require("@opentelemetry/instrumentation-express");
// Initialize tracer
const provider = new NodeTracerProvider({
  resource: new Resource({ "service.name": "my-service" }),
});
const exporter = new JaegerExporter({
endpoint: "http://jaeger:14268/api/traces",
});
provider.addSpanProcessor(new BatchSpanProcessor(exporter));
provider.register();
// Instrument libraries
registerInstrumentations({
instrumentations: [new HttpInstrumentation(), new ExpressInstrumentation()],
});
const express = require("express");
const app = express();
app.get("/api/users", async (req, res) => {
const tracer = trace.getTracer("my-service");
const span = tracer.startSpan("get_users");
try {
const users = await fetchUsers();
span.setAttributes({ "user.count": users.length });
res.json({ users });
} finally {
span.end();
}
});
```
#### Go
```go
package main
import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/exporters/jaeger"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.4.0"
)
func initTracer() (*sdktrace.TracerProvider, error) {
exporter, err := jaeger.New(jaeger.WithCollectorEndpoint(
jaeger.WithEndpoint("http://jaeger:14268/api/traces"),
))
if err != nil {
return nil, err
}
tp := sdktrace.NewTracerProvider(
sdktrace.WithBatcher(exporter),
sdktrace.WithResource(resource.NewWithAttributes(
semconv.SchemaURL,
semconv.ServiceNameKey.String("my-service"),
)),
)
otel.SetTracerProvider(tp)
return tp, nil
}
func getUsers(ctx context.Context) ([]User, error) {
tracer := otel.Tracer("my-service")
ctx, span := tracer.Start(ctx, "get_users")
defer span.End()
span.SetAttributes(attribute.String("user.filter", "active"))
users, err := fetchUsersFromDB(ctx)
if err != nil {
span.RecordError(err)
return nil, err
}
span.SetAttributes(attribute.Int("user.count", len(users)))
return users, nil
}
```
**Reference:** See `references/instrumentation.md`
## Context Propagation
### HTTP Headers
```
traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
tracestate: congo=t61rcWkgMzE
```
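Per the W3C Trace Context format shown above (`version-traceid-parentid-flags`), the header can be parsed with a few lines of Python. `parse_traceparent` here is an illustrative helper, not an OpenTelemetry API:

```python
from dataclasses import dataclass

@dataclass
class TraceParent:
    version: str
    trace_id: str   # 32 hex chars
    span_id: str    # 16 hex chars ("parent-id" in the spec)
    sampled: bool   # low bit of the trace-flags field

def parse_traceparent(header: str) -> TraceParent:
    """Split a W3C traceparent header into its four dash-separated fields."""
    version, trace_id, span_id, flags = header.split("-")
    if len(trace_id) != 32 or len(span_id) != 16:
        raise ValueError("malformed traceparent")
    return TraceParent(version, trace_id, span_id, bool(int(flags, 16) & 0x01))

tp = parse_traceparent("00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01")
assert tp.trace_id == "0af7651916cd43dd8448eb211c80319c"
assert tp.sampled
```

In practice you rarely parse this header yourself; the propagators below inject and extract it for you. Reading it is mainly useful when debugging why a trace did not stitch together.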
### Propagation in HTTP Requests
#### Python
```python
import requests
from opentelemetry.propagate import inject

headers = {}
inject(headers)  # Injects the current trace context into the headers dict
response = requests.get('http://downstream-service/api', headers=headers)
```
#### Node.js
```javascript
const { propagation } = require("@opentelemetry/api");
const headers = {};
propagation.inject(context.active(), headers);
axios.get("http://downstream-service/api", { headers });
```
## Tempo Setup (Grafana)
### Kubernetes Deployment
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: tempo-config
data:
tempo.yaml: |
server:
http_listen_port: 3200
distributor:
receivers:
jaeger:
protocols:
thrift_http:
grpc:
otlp:
protocols:
http:
grpc:
storage:
trace:
backend: s3
s3:
bucket: tempo-traces
endpoint: s3.amazonaws.com
querier:
frontend_worker:
frontend_address: tempo-query-frontend:9095
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: tempo
spec:
replicas: 1
template:
spec:
containers:
- name: tempo
image: grafana/tempo:latest
args:
- -config.file=/etc/tempo/tempo.yaml
volumeMounts:
- name: config
mountPath: /etc/tempo
volumes:
- name: config
configMap:
name: tempo-config
```
**Reference:** See `assets/jaeger-config.yaml.template`
## Sampling Strategies
### Probabilistic Sampling
```yaml
# Sample 1% of traces
sampler:
type: probabilistic
param: 0.01
```
### Rate Limiting Sampling
```yaml
# Sample max 100 traces per second
sampler:
type: ratelimiting
param: 100
```
### Adaptive Sampling
```python
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased
# Sample based on trace ID (deterministic)
sampler = ParentBased(root=TraceIdRatioBased(0.01))
```
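The key property of trace-ID-based sampling is that every service makes the same keep/drop decision without coordination. A simplified sketch of the idea (not the exact OpenTelemetry algorithm, which compares only part of the ID):

```python
def keep_trace(trace_id_hex: str, ratio: float) -> bool:
    """Deterministic decision: the same trace id always yields the same answer."""
    cutoff = int(ratio * (1 << 128))  # 128-bit trace id space
    return int(trace_id_hex, 16) < cutoff

# Every service agrees on this trace without sharing state
tid = "0af7651916cd43dd8448eb211c80319c"
assert keep_trace(tid, 1.0)       # sample everything
assert not keep_trace(tid, 0.0)   # sample nothing
```

Because the decision is a pure function of the trace ID, either every span of a trace is kept or none is, which is what keeps sampled traces complete.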
## Trace Analysis
### Finding Slow Requests
**Jaeger Query:**
```
service=my-service
duration > 1s
```
### Finding Errors
**Jaeger Query:**
```
service=my-service
error=true
tags.http.status_code >= 500
```
### Service Dependency Graph
Jaeger automatically generates service dependency graphs showing:
- Service relationships
- Request rates
- Error rates
- Average latencies
## Best Practices
1. **Sample appropriately** (1-10% in production)
2. **Add meaningful tags** (user_id, request_id)
3. **Propagate context** across all service boundaries
4. **Log exceptions** in spans
5. **Use consistent naming** for operations
6. **Monitor tracing overhead** (<1% CPU impact)
7. **Set up alerts** for trace errors
8. **Implement distributed context** (baggage)
9. **Use span events** for important milestones
10. **Document instrumentation** standards
## Integration with Logging
### Correlated Logs
```python
import logging
from opentelemetry import trace
logger = logging.getLogger(__name__)
def process_request():
span = trace.get_current_span()
trace_id = span.get_span_context().trace_id
logger.info(
"Processing request",
extra={"trace_id": format(trace_id, '032x')}
)
```
## Troubleshooting
**No traces appearing:**
- Check collector endpoint
- Verify network connectivity
- Check sampling configuration
- Review application logs
**High latency overhead:**
- Reduce sampling rate
- Use batch span processor
- Check exporter configuration
## Reference Files
- `references/jaeger-setup.md` - Jaeger installation
- `references/instrumentation.md` - Instrumentation patterns
- `assets/jaeger-config.yaml.template` - Jaeger configuration
## Related Skills
- `prometheus-configuration` - For metrics
- `grafana-dashboards` - For visualization
- `slo-implementation` - For latency SLOs
| """
Test for 'distributed-tracing' skill — OpenTelemetry Collector Pipeline
Validates that the Agent created a complete collector pipeline config with
receivers, processors, exporters, and pipelines.
"""
import os
import subprocess
import pytest
class TestDistributedTracing:
"""Verify OpenTelemetry Collector pipeline configuration."""
REPO_DIR = "/workspace/opentelemetry-collector"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_config_yaml_exists(self):
"""examples/pipeline-demo/config.yaml must exist."""
fpath = os.path.join(self.REPO_DIR, "examples", "pipeline-demo", "config.yaml")
assert os.path.isfile(fpath), "config.yaml not found"
def test_readme_exists(self):
"""examples/pipeline-demo/README.md must exist."""
fpath = os.path.join(self.REPO_DIR, "examples", "pipeline-demo", "README.md")
assert os.path.isfile(fpath), "README.md not found"
# ------------------------------------------------------------------
# L2: YAML structure validation
# ------------------------------------------------------------------
def _load_config(self):
import yaml
fpath = os.path.join(self.REPO_DIR, "examples", "pipeline-demo", "config.yaml")
with open(fpath, "r") as f:
return yaml.safe_load(f)
def test_config_is_valid_yaml(self):
"""config.yaml must be valid YAML."""
config = self._load_config()
assert isinstance(config, dict), "Config must be a YAML mapping"
def test_receivers_section(self):
"""Config must have receivers section."""
config = self._load_config()
assert "receivers" in config, "receivers section missing"
assert (
len(config["receivers"]) >= 2
), f"Expected >= 2 receivers, got {len(config['receivers'])}"
def test_otlp_receiver(self):
"""Must include OTLP receiver."""
config = self._load_config()
receivers = config.get("receivers", {})
assert "otlp" in receivers or any(
"otlp" in k for k in receivers
), f"OTLP receiver not found; receivers: {list(receivers.keys())}"
def test_processors_section(self):
"""Config must have processors section."""
config = self._load_config()
assert "processors" in config, "processors section missing"
assert (
len(config["processors"]) >= 2
), f"Expected >= 2 processors, got {len(config['processors'])}"
def test_batch_processor(self):
"""Must include batch processor."""
config = self._load_config()
processors = config.get("processors", {})
assert "batch" in processors or any(
"batch" in k for k in processors
), f"batch processor not found; processors: {list(processors.keys())}"
def test_memory_limiter_processor(self):
"""Must include memory_limiter processor."""
config = self._load_config()
processors = config.get("processors", {})
assert "memory_limiter" in processors or any(
"memory" in k for k in processors
), f"memory_limiter not found; processors: {list(processors.keys())}"
def test_exporters_section(self):
"""Config must have exporters section."""
config = self._load_config()
assert "exporters" in config, "exporters section missing"
assert (
len(config["exporters"]) >= 2
), f"Expected >= 2 exporters, got {len(config['exporters'])}"
def test_service_pipelines(self):
"""Config must define service.pipelines."""
config = self._load_config()
service = config.get("service", {})
pipelines = service.get("pipelines", {})
assert len(pipelines) >= 2, f"Expected >= 2 pipelines, got {len(pipelines)}"
def test_traces_pipeline(self):
"""Must define a traces pipeline."""
config = self._load_config()
pipelines = config.get("service", {}).get("pipelines", {})
assert (
"traces" in pipelines
), f"traces pipeline not found; pipelines: {list(pipelines.keys())}"
traces = pipelines["traces"]
assert "receivers" in traces, "traces pipeline missing receivers"
assert "exporters" in traces, "traces pipeline missing exporters"
def test_metrics_pipeline(self):
"""Must define a metrics pipeline."""
config = self._load_config()
pipelines = config.get("service", {}).get("pipelines", {})
assert (
"metrics" in pipelines
), f"metrics pipeline not found; pipelines: {list(pipelines.keys())}"
def test_readme_explains_components(self):
"""README must explain pipeline components."""
fpath = os.path.join(self.REPO_DIR, "examples", "pipeline-demo", "README.md")
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
components = ["receiver", "processor", "exporter", "pipeline"]
found = sum(1 for c in components if c in content.lower())
assert found >= 3, f"README only covers {found}/4 pipeline components"
| https://github.com/open-telemetry/opentelemetry-collector | zhangyiiiiii/swe-skills-bench-golang | |
service-mesh-observability | Service Mesh Observability | See task file for detailed mission requirements. | feature | # Task: Add Linkerd TCP Metrics Collection Example
## Background
Add TCP connection metrics collection
example demonstrating service mesh observability features for TCP
workloads in the Linkerd2 repository.
## Files to Create/Modify
- viz/metrics-api/examples/tcp_metrics_demo.go (demo code)
- viz/metrics-api/examples/README.md (documentation)
- viz/metrics-api/tcp_metrics_test.go (tests)
## Requirements
TCP Metrics Demo (tcp_metrics_demo.go):
- Connection establishment metrics
- Bytes sent/received counters
- Connection duration tracking
- Error rate monitoring
Metrics to Collect:
- tcp_open_total: Total TCP connections opened
- tcp_close_total: Total TCP connections closed
- tcp_connection_duration_ms: Connection duration histogram
- tcp_read_bytes_total: Total bytes read
- tcp_write_bytes_total: Total bytes written
Integration Points:
- Prometheus metric exposition
- Grafana dashboard configuration
- Linkerd proxy integration
Test Coverage:
- Metric counter increments correctly
- Duration histogram bucketing
- Label cardinality validation
- Thread-safe metric updates
## Acceptance Criteria
- `go build ./viz/...` exits with code 0
- `go test ./viz/metrics-api/...` passes
- Metrics follow Linkerd naming conventions
| ---
name: service-mesh-observability
description: Implement comprehensive observability for service meshes including distributed tracing, metrics, and visualization. Use when setting up mesh monitoring, debugging latency issues, or implementing SLOs for service communication.
---
# Service Mesh Observability
Complete guide to observability patterns for Istio, Linkerd, and service mesh deployments.
## When to Use This Skill
- Setting up distributed tracing across services
- Implementing service mesh metrics and dashboards
- Debugging latency and error issues
- Defining SLOs for service communication
- Visualizing service dependencies
- Troubleshooting mesh connectivity
## Core Concepts
### 1. Three Pillars of Observability
```
┌─────────────────────────────────────────────────────┐
│ Observability │
├─────────────────┬─────────────────┬─────────────────┤
│ Metrics │ Traces │ Logs │
│ │ │ │
│ • Request rate │ • Span context │ • Access logs │
│ • Error rate │ • Latency │ • Error details │
│ • Latency P50 │ • Dependencies │ • Debug info │
│ • Saturation │ • Bottlenecks │ • Audit trail │
└─────────────────┴─────────────────┴─────────────────┘
```
### 2. Golden Signals for Mesh
| Signal | Description | Alert Threshold |
| -------------- | ------------------------- | ----------------- |
| **Latency** | Request duration P50, P99 | P99 > 500ms |
| **Traffic** | Requests per second | Anomaly detection |
| **Errors** | 5xx error rate | > 1% |
| **Saturation** | Resource utilization | > 80% |
## Templates
### Template 1: Istio with Prometheus & Grafana
```yaml
# Install Prometheus
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus
namespace: istio-system
data:
prometheus.yml: |
global:
scrape_interval: 15s
scrape_configs:
- job_name: 'istio-mesh'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
relabel_configs:
- source_labels: [__meta_kubernetes_service_name]
action: keep
regex: istio-telemetry
---
# ServiceMonitor for Prometheus Operator
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: istio-mesh
namespace: istio-system
spec:
selector:
matchLabels:
app: istiod
endpoints:
- port: http-monitoring
interval: 15s
```
### Template 2: Key Istio Metrics Queries
```promql
# Request rate by service
sum(rate(istio_requests_total{reporter="destination"}[5m])) by (destination_service_name)
# Error rate (5xx)
sum(rate(istio_requests_total{reporter="destination", response_code=~"5.."}[5m]))
/ sum(rate(istio_requests_total{reporter="destination"}[5m])) * 100
# P99 latency
histogram_quantile(0.99,
sum(rate(istio_request_duration_milliseconds_bucket{reporter="destination"}[5m]))
by (le, destination_service_name))
# TCP connections
sum(istio_tcp_connections_opened_total{reporter="destination"}) by (destination_service_name)
# Request size
histogram_quantile(0.99,
sum(rate(istio_request_bytes_bucket{reporter="destination"}[5m]))
by (le, destination_service_name))
```
### Template 3: Jaeger Distributed Tracing
```yaml
# Jaeger installation for Istio
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
meshConfig:
enableTracing: true
defaultConfig:
tracing:
sampling: 100.0 # 100% in dev, lower in prod
zipkin:
address: jaeger-collector.istio-system:9411
---
# Jaeger deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: jaeger
namespace: istio-system
spec:
selector:
matchLabels:
app: jaeger
template:
metadata:
labels:
app: jaeger
spec:
containers:
- name: jaeger
image: jaegertracing/all-in-one:1.50
ports:
- containerPort: 5775 # UDP
- containerPort: 6831 # Thrift
- containerPort: 6832 # Thrift
- containerPort: 5778 # Config
- containerPort: 16686 # UI
- containerPort: 14268 # HTTP
- containerPort: 14250 # gRPC
- containerPort: 9411 # Zipkin
env:
- name: COLLECTOR_ZIPKIN_HOST_PORT
value: ":9411"
```
### Template 4: Linkerd Viz Dashboard
```bash
# Install Linkerd viz extension
linkerd viz install | kubectl apply -f -
# Access dashboard
linkerd viz dashboard
# CLI commands for observability
# Top requests
linkerd viz top deploy/my-app
# Per-route metrics
linkerd viz routes deploy/my-app --to deploy/backend
# Live traffic inspection
linkerd viz tap deploy/my-app --to deploy/backend
# Service edges (dependencies)
linkerd viz edges deployment -n my-namespace
```
### Template 5: Grafana Dashboard JSON
```json
{
"dashboard": {
"title": "Service Mesh Overview",
"panels": [
{
"title": "Request Rate",
"type": "graph",
"targets": [
{
"expr": "sum(rate(istio_requests_total{reporter=\"destination\"}[5m])) by (destination_service_name)",
"legendFormat": "{{destination_service_name}}"
}
]
},
{
"title": "Error Rate",
"type": "gauge",
"targets": [
{
"expr": "sum(rate(istio_requests_total{response_code=~\"5..\"}[5m])) / sum(rate(istio_requests_total[5m])) * 100"
}
],
"fieldConfig": {
"defaults": {
"thresholds": {
"steps": [
{ "value": 0, "color": "green" },
{ "value": 1, "color": "yellow" },
{ "value": 5, "color": "red" }
]
}
}
}
},
{
"title": "P99 Latency",
"type": "graph",
"targets": [
{
"expr": "histogram_quantile(0.99, sum(rate(istio_request_duration_milliseconds_bucket{reporter=\"destination\"}[5m])) by (le, destination_service_name))",
"legendFormat": "{{destination_service_name}}"
}
]
},
{
"title": "Service Topology",
"type": "nodeGraph",
"targets": [
{
"expr": "sum(rate(istio_requests_total{reporter=\"destination\"}[5m])) by (source_workload, destination_service_name)"
}
]
}
]
}
}
```
### Template 6: Kiali Service Mesh Visualization
```yaml
# Kiali installation
apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
name: kiali
namespace: istio-system
spec:
auth:
strategy: anonymous # or openid, token
deployment:
accessible_namespaces:
- "**"
external_services:
prometheus:
url: http://prometheus.istio-system:9090
tracing:
url: http://jaeger-query.istio-system:16686
grafana:
url: http://grafana.istio-system:3000
```
### Template 7: OpenTelemetry Integration
```yaml
# OpenTelemetry Collector for mesh
apiVersion: v1
kind: ConfigMap
metadata:
name: otel-collector-config
data:
config.yaml: |
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
zipkin:
endpoint: 0.0.0.0:9411
processors:
batch:
timeout: 10s
exporters:
jaeger:
endpoint: jaeger-collector:14250
tls:
insecure: true
prometheus:
endpoint: 0.0.0.0:8889
service:
pipelines:
traces:
receivers: [otlp, zipkin]
processors: [batch]
exporters: [jaeger]
metrics:
receivers: [otlp]
processors: [batch]
exporters: [prometheus]
---
# Istio Telemetry v2 with OTel
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
name: mesh-default
namespace: istio-system
spec:
tracing:
- providers:
- name: otel
randomSamplingPercentage: 10
```
## Alerting Rules
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: mesh-alerts
namespace: istio-system
spec:
groups:
- name: mesh.rules
rules:
- alert: HighErrorRate
expr: |
sum(rate(istio_requests_total{response_code=~"5.."}[5m])) by (destination_service_name)
/ sum(rate(istio_requests_total[5m])) by (destination_service_name) > 0.05
for: 5m
labels:
severity: critical
annotations:
summary: "High error rate for {{ $labels.destination_service_name }}"
- alert: HighLatency
expr: |
histogram_quantile(0.99, sum(rate(istio_request_duration_milliseconds_bucket[5m]))
by (le, destination_service_name)) > 1000
for: 5m
labels:
severity: warning
annotations:
summary: "High P99 latency for {{ $labels.destination_service_name }}"
- alert: MeshCertExpiring
expr: |
(certmanager_certificate_expiration_timestamp_seconds - time()) / 86400 < 7
labels:
severity: warning
annotations:
summary: "Mesh certificate expiring in less than 7 days"
```
## Best Practices
### Do's
- **Sample appropriately** - 100% in dev, 1-10% in prod
- **Use trace context** - Propagate headers consistently
- **Set up alerts** - For golden signals
- **Correlate metrics/traces** - Use exemplars
- **Retain strategically** - Hot/cold storage tiers
### Don'ts
- **Don't over-sample** - Storage costs add up
- **Don't ignore cardinality** - Limit label values
- **Don't skip dashboards** - Visualize dependencies
- **Don't forget costs** - Monitor observability costs
## Resources
- [Istio Observability](https://istio.io/latest/docs/tasks/observability/)
- [Linkerd Observability](https://linkerd.io/2.14/features/dashboard/)
- [OpenTelemetry](https://opentelemetry.io/)
- [Kiali](https://kiali.io/)
| """
Test for 'service-mesh-observability' skill — Linkerd TCP Metrics Collection
Validates that the Agent added TCP connection metrics collection code with
Prometheus metric exposition in the Linkerd2 viz package.
"""
import os
import subprocess
import pytest
from _dependency_utils import ensure_go_dependencies
@pytest.fixture(scope="module", autouse=True)
def _ensure_repo_dependencies():
ensure_go_dependencies(TestServiceMeshObservability.REPO_DIR)
class TestServiceMeshObservability:
"""Verify TCP metrics collection demo in Linkerd2."""
REPO_DIR = "/workspace/linkerd2"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_tcp_metrics_demo_exists(self):
"""viz/metrics-api/examples/tcp_metrics_demo.go must exist."""
fpath = os.path.join(
self.REPO_DIR, "viz", "metrics-api", "examples", "tcp_metrics_demo.go"
)
assert os.path.isfile(fpath), "tcp_metrics_demo.go not found"
def test_readme_exists(self):
"""viz/metrics-api/examples/README.md must exist."""
fpath = os.path.join(
self.REPO_DIR, "viz", "metrics-api", "examples", "README.md"
)
assert os.path.isfile(fpath), "README.md not found"
# ------------------------------------------------------------------
# L2: content verification
# ------------------------------------------------------------------
def _read_demo(self):
fpath = os.path.join(
self.REPO_DIR, "viz", "metrics-api", "examples", "tcp_metrics_demo.go"
)
with open(fpath, "r", encoding="utf-8") as f:
return f.read()
def test_tcp_open_total_metric(self):
"""Must define tcp_open_total metric."""
source = self._read_demo()
assert "tcp_open_total" in source, "tcp_open_total metric not defined"
def test_tcp_close_total_metric(self):
"""Must define tcp_close_total metric."""
source = self._read_demo()
assert "tcp_close_total" in source, "tcp_close_total metric not defined"
def test_tcp_connection_duration_metric(self):
"""Must define tcp_connection_duration metric."""
source = self._read_demo()
assert (
"tcp_connection_duration" in source
), "tcp_connection_duration metric not defined"
def test_tcp_read_bytes_metric(self):
"""Must define tcp_read_bytes_total metric."""
source = self._read_demo()
assert "tcp_read_bytes" in source, "tcp_read_bytes metric not defined"
def test_tcp_write_bytes_metric(self):
"""Must define tcp_write_bytes_total metric."""
source = self._read_demo()
assert "tcp_write_bytes" in source, "tcp_write_bytes metric not defined"
def test_prometheus_import(self):
"""Must import Prometheus client library."""
source = self._read_demo()
prom_patterns = ["prometheus", "promauto", "promhttp"]
assert any(
p in source for p in prom_patterns
), "No Prometheus library import found"
def test_histogram_bucketing(self):
"""Duration metric should use histogram bucketing."""
source = self._read_demo()
histogram_patterns = [
"Histogram",
"NewHistogram",
"HistogramVec",
"Buckets",
"histogram",
]
found = any(p in source for p in histogram_patterns)
assert found, "No histogram definition found for duration metric"
def test_go_build_viz(self):
"""go build ./viz/... must succeed."""
result = subprocess.run(
["go", "build", "./viz/..."],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=600,
)
assert result.returncode == 0, f"Build failed:\n{result.stderr}"
def test_readme_documents_metrics(self):
"""README must document the TCP metrics."""
fpath = os.path.join(
self.REPO_DIR, "viz", "metrics-api", "examples", "README.md"
)
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
assert "tcp" in content.lower(), "README doesn't mention TCP metrics"
assert len(content) >= 100, "README is too short"
def test_go_vet_passes(self):
"""go vet should pass on the demo file."""
result = subprocess.run(
["go", "vet", "./viz/metrics-api/examples/..."],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
# go vet may warn but should not error
assert result.returncode == 0, f"go vet failed:\n{result.stderr}"
| https://github.com/linkerd/linkerd2 | zhangyiiiiii/swe-skills-bench-golang | |
slo-implementation | SLO Implementation Framework | See task file for detailed mission requirements. | feature | # Task: Add Prometheus Backend SLO Configuration Support
## Background
Enhance the Prometheus backend integration in the slo-generator project with new computation logic for availability SLO configurations.
Add or modify modules in the `slo_generator/` Python package to support Prometheus-based availability SLI computation.
## Files to Create/Modify
- `slo_generator/backends/prometheus_availability.py` (new; Prometheus availability SLI computation module)
- `slo_generator/utils/slo_config_validator.py` (new; SLO config validation utility)
- `tests/test_slo_implementation.py` (new; unit tests)
## Requirements
### Prometheus Availability Module (slo_generator/backends/prometheus_availability.py)
- Implement a `PrometheusAvailabilitySLI` class
- Support computing the error rate via the PromQL query: `sum(rate(http_requests_total{status=~"5.."}[5m])) / sum(rate(http_requests_total[5m]))`
- Support a configurable SLO target (e.g. 0.999, i.e. 99.9%)
- Support a configurable rolling window (e.g. 28 days)
- Provide a `compute_sli()` method that returns the SLI value
- Provide an `evaluate_slo()` method that reports whether the SLO is met
### SLO Config Validator (slo_generator/utils/slo_config_validator.py)
- Implement a `validate_slo_config(config: dict) -> bool` function
- Validate required fields: service_name, slo_name, backend.type, goal, window
- Validate that backend.type is a supported type (e.g. "prometheus")
- Validate that goal lies in the (0, 1) range
- Raise ValueError with a descriptive message for invalid configs
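Under these requirements, the validator could be sketched as follows (illustrative only — the final implementation and error messages are up to the implementer):

```python
SUPPORTED_BACKENDS = {"prometheus"}
REQUIRED_FIELDS = ("service_name", "slo_name", "goal", "window")

def validate_slo_config(config: dict) -> bool:
    """Validate an SLO config dict; raise ValueError on any violation."""
    missing = [f for f in REQUIRED_FIELDS if f not in config]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    backend_type = config.get("backend", {}).get("type")
    if backend_type not in SUPPORTED_BACKENDS:
        raise ValueError(f"unsupported backend.type: {backend_type!r}")
    goal = config["goal"]
    if not (0 < goal < 1):
        raise ValueError(f"goal must be in (0, 1), got {goal}")
    return True

print(validate_slo_config({
    "service_name": "api", "slo_name": "availability",
    "backend": {"type": "prometheus"}, "goal": 0.999, "window": "28d",
}))  # True
```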
### Unit Tests (tests/test_slo_implementation.py)
- Test `PrometheusAvailabilitySLI` initialization
- Test SLO config validation: valid configs pass, invalid configs raise ValueError
- Test the return value of `evaluate_slo()` when the SLI is above/below the target
## Acceptance Criteria
- `python -m py_compile slo_generator/backends/prometheus_availability.py` succeeds
- `python -m py_compile slo_generator/utils/slo_config_validator.py` succeeds
- `python -m pytest tests/test_slo_implementation.py -v --tb=short` passes in full
| ---
name: slo-implementation
description: Define and implement Service Level Indicators (SLIs) and Service Level Objectives (SLOs) with error budgets and alerting. Use when establishing reliability targets, implementing SRE practices, or measuring service performance.
---
# SLO Implementation
Framework for defining and implementing Service Level Indicators (SLIs), Service Level Objectives (SLOs), and error budgets.
## Purpose
Implement measurable reliability targets using SLIs, SLOs, and error budgets to balance reliability with innovation velocity.
## When to Use
- Define service reliability targets
- Measure user-perceived reliability
- Implement error budgets
- Create SLO-based alerts
- Track reliability goals
## SLI/SLO/SLA Hierarchy
```
SLA (Service Level Agreement)
↓ Contract with customers
SLO (Service Level Objective)
↓ Internal reliability target
SLI (Service Level Indicator)
↓ Actual measurement
```
## Defining SLIs
### Common SLI Types
#### 1. Availability SLI
```promql
# Successful requests / Total requests
sum(rate(http_requests_total{status!~"5.."}[28d]))
/
sum(rate(http_requests_total[28d]))
```
#### 2. Latency SLI
```promql
# Requests below latency threshold / Total requests
sum(rate(http_request_duration_seconds_bucket{le="0.5"}[28d]))
/
sum(rate(http_request_duration_seconds_count[28d]))
```
#### 3. Durability SLI
```
# Successful writes / Total writes
sum(storage_writes_successful_total)
/
sum(storage_writes_total)
```
**Reference:** See `references/slo-definitions.md`
## Setting SLO Targets
### Availability SLO Examples
| SLO % | Downtime/Month | Downtime/Year |
| ------ | -------------- | ------------- |
| 99% | 7.2 hours | 3.65 days |
| 99.9% | 43.2 minutes | 8.76 hours |
| 99.95% | 21.6 minutes | 4.38 hours |
| 99.99% | 4.32 minutes | 52.56 minutes |
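The table rows follow directly from the window length. A one-liner makes the arithmetic explicit (a sketch; the monthly column assumes a 30-day month):

```python
def downtime_minutes(slo_percent: float, days: float) -> float:
    """Allowed downtime (in minutes) for an availability SLO over a window."""
    return (1 - slo_percent / 100) * days * 24 * 60

print(round(downtime_minutes(99.9, 30), 1))    # 43.2  minutes/month
print(round(downtime_minutes(99.99, 365), 2))  # 52.56 minutes/year
```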
### Choose Appropriate SLOs
**Consider:**
- User expectations
- Business requirements
- Current performance
- Cost of reliability
- Competitor benchmarks
**Example SLOs:**
```yaml
slos:
- name: api_availability
target: 99.9
window: 28d
sli: |
sum(rate(http_requests_total{status!~"5.."}[28d]))
/
sum(rate(http_requests_total[28d]))
- name: api_latency_p95
target: 99
window: 28d
sli: |
sum(rate(http_request_duration_seconds_bucket{le="0.5"}[28d]))
/
sum(rate(http_request_duration_seconds_count[28d]))
```
## Error Budget Calculation
### Error Budget Formula
```
Error Budget = 1 - SLO Target
```
**Example:**
- SLO: 99.9% availability
- Error Budget: 0.1% = 43.2 minutes/month
- Current Error: 0.05% = 21.6 minutes/month
- Remaining Budget: 50%
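The worked example can be checked in a few lines (a minimal sketch; `error_budget_remaining` is an illustrative name, not slo-generator API):

```python
def error_budget_remaining(slo_target: float, measured_sli: float) -> float:
    """Fraction of the error budget still unspent."""
    budget = 1 - slo_target    # e.g. 0.001 for a 99.9% SLO
    spent = 1 - measured_sli   # observed error ratio
    return (budget - spent) / budget

print(round(error_budget_remaining(0.999, 0.9995), 2))  # 0.5 → 50% remaining
```

A negative return value means the budget is exhausted, matching the "0%" row of the policy below.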
### Error Budget Policy
```yaml
error_budget_policy:
- remaining_budget: 100%
action: Normal development velocity
- remaining_budget: 50%
action: Consider postponing risky changes
- remaining_budget: 10%
action: Freeze non-critical changes
- remaining_budget: 0%
action: Feature freeze, focus on reliability
```
**Reference:** See `references/error-budget.md`
## SLO Implementation
### Prometheus Recording Rules
```yaml
# SLI Recording Rules
groups:
- name: sli_rules
interval: 30s
rules:
# Availability SLI
- record: sli:http_availability:ratio
expr: |
sum(rate(http_requests_total{status!~"5.."}[28d]))
/
sum(rate(http_requests_total[28d]))
# Latency SLI (requests < 500ms)
- record: sli:http_latency:ratio
expr: |
sum(rate(http_request_duration_seconds_bucket{le="0.5"}[28d]))
/
sum(rate(http_request_duration_seconds_count[28d]))
- name: slo_rules
interval: 5m
rules:
# SLO compliance (1 = meeting SLO, 0 = violating)
- record: slo:http_availability:compliance
expr: sli:http_availability:ratio >= bool 0.999
- record: slo:http_latency:compliance
expr: sli:http_latency:ratio >= bool 0.99
# Error budget remaining (percentage)
- record: slo:http_availability:error_budget_remaining
expr: |
(sli:http_availability:ratio - 0.999) / (1 - 0.999) * 100
# Error budget burn rate (define the same rule for the 30m, 1h,
# and 6h windows referenced by the SLO alerting rules)
- record: slo:http_availability:burn_rate_5m
expr: |
(1 - (
sum(rate(http_requests_total{status!~"5.."}[5m]))
/
sum(rate(http_requests_total[5m]))
)) / (1 - 0.999)
```
### SLO Alerting Rules
```yaml
groups:
- name: slo_alerts
interval: 1m
rules:
# Fast burn: 14.4x rate, 1 hour window
# Consumes 2% error budget in 1 hour
- alert: SLOErrorBudgetBurnFast
expr: |
slo:http_availability:burn_rate_1h > 14.4
and
slo:http_availability:burn_rate_5m > 14.4
for: 2m
labels:
severity: critical
annotations:
summary: "Fast error budget burn detected"
description: "Error budget burning at {{ $value }}x rate"
# Slow burn: 6x rate, 6 hour window
# Consumes 5% error budget in 6 hours
- alert: SLOErrorBudgetBurnSlow
expr: |
slo:http_availability:burn_rate_6h > 6
and
slo:http_availability:burn_rate_30m > 6
for: 15m
labels:
severity: warning
annotations:
summary: "Slow error budget burn detected"
description: "Error budget burning at {{ $value }}x rate"
# Error budget exhausted
- alert: SLOErrorBudgetExhausted
expr: slo:http_availability:error_budget_remaining < 0
for: 5m
labels:
severity: critical
annotations:
summary: "SLO error budget exhausted"
description: "Error budget remaining: {{ $value }}%"
```
## SLO Dashboard
**Grafana Dashboard Structure:**
```
┌────────────────────────────────────┐
│ SLO Compliance (Current) │
│ ✓ 99.95% (Target: 99.9%) │
├────────────────────────────────────┤
│ Error Budget Remaining: 65% │
│ ████████░░ 65% │
├────────────────────────────────────┤
│ SLI Trend (28 days) │
│ [Time series graph] │
├────────────────────────────────────┤
│ Burn Rate Analysis │
│ [Burn rate by time window] │
└────────────────────────────────────┘
```
**Example Queries:**
```promql
# Current SLO compliance
sli:http_availability:ratio * 100
# Error budget remaining
slo:http_availability:error_budget_remaining
# Days until error budget exhausted (at current burn rate)
(slo:http_availability:error_budget_remaining / 100) * 28
* (1 - 0.999)
/ (1 - sli:http_availability:ratio)
```
## Multi-Window Burn Rate Alerts
```yaml
# Combination of short and long windows reduces false positives
rules:
- alert: SLOBurnRateHigh
expr: |
(
slo:http_availability:burn_rate_1h > 14.4
and
slo:http_availability:burn_rate_5m > 14.4
)
or
(
slo:http_availability:burn_rate_6h > 6
and
slo:http_availability:burn_rate_30m > 6
)
labels:
severity: critical
```
## SLO Review Process
### Weekly Review
- Current SLO compliance
- Error budget status
- Trend analysis
- Incident impact
### Monthly Review
- SLO achievement
- Error budget usage
- Incident postmortems
- SLO adjustments
### Quarterly Review
- SLO relevance
- Target adjustments
- Process improvements
- Tooling enhancements
## Best Practices
1. **Start with user-facing services**
2. **Use multiple SLIs** (availability, latency, etc.)
3. **Set achievable SLOs** (don't aim for 100%)
4. **Implement multi-window alerts** to reduce noise
5. **Track error budget** consistently
6. **Review SLOs regularly**
7. **Document SLO decisions**
8. **Align with business goals**
9. **Automate SLO reporting**
10. **Use SLOs for prioritization**
## Reference Files
- `assets/slo-template.md` - SLO definition template
- `references/slo-definitions.md` - SLO definition patterns
- `references/error-budget.md` - Error budget calculations
## Related Skills
- `prometheus-configuration` - For metric collection
- `grafana-dashboards` - For SLO visualization
| """
Test for 'slo-implementation' skill — SLO Implementation Framework
Validates that the Agent implemented PrometheusAvailabilitySLI and
slo_config_validator in the slo-generator project.
"""
import os
import sys
import subprocess
import pytest
class TestSloImplementation:
"""Verify SLO implementation modules in slo-generator."""
REPO_DIR = "/workspace/slo-generator"
@classmethod
def setup_class(cls):
if cls.REPO_DIR not in sys.path:
sys.path.insert(0, cls.REPO_DIR)
# ------------------------------------------------------------------
# L1: file existence & syntax
# ------------------------------------------------------------------
def test_prometheus_availability_exists(self):
"""slo_generator/backends/prometheus_availability.py must exist."""
fpath = os.path.join(
self.REPO_DIR, "slo_generator", "backends", "prometheus_availability.py"
)
assert os.path.isfile(fpath), "prometheus_availability.py not found"
def test_config_validator_exists(self):
"""slo_generator/utils/slo_config_validator.py must exist."""
fpath = os.path.join(
self.REPO_DIR, "slo_generator", "utils", "slo_config_validator.py"
)
assert os.path.isfile(fpath), "slo_config_validator.py not found"
def test_prometheus_availability_compiles(self):
"""prometheus_availability.py must compile."""
result = subprocess.run(
[
"python",
"-m",
"py_compile",
"slo_generator/backends/prometheus_availability.py",
],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
def test_config_validator_compiles(self):
"""slo_config_validator.py must compile."""
result = subprocess.run(
[
"python",
"-m",
"py_compile",
"slo_generator/utils/slo_config_validator.py",
],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
# ------------------------------------------------------------------
# L2: structural verification
# ------------------------------------------------------------------
def _read_prom(self):
fpath = os.path.join(
self.REPO_DIR, "slo_generator", "backends", "prometheus_availability.py"
)
with open(fpath, "r", encoding="utf-8") as f:
return f.read()
def _read_validator(self):
fpath = os.path.join(
self.REPO_DIR, "slo_generator", "utils", "slo_config_validator.py"
)
with open(fpath, "r", encoding="utf-8") as f:
return f.read()
def test_prometheus_sli_class_defined(self):
"""PrometheusAvailabilitySLI class must exist."""
source = self._read_prom()
assert (
"PrometheusAvailabilitySLI" in source
), "PrometheusAvailabilitySLI class not found"
def test_compute_sli_method(self):
"""compute_sli method must be defined."""
source = self._read_prom()
assert "compute_sli" in source, "compute_sli method not found"
def test_evaluate_slo_method(self):
"""evaluate_slo method must be defined."""
source = self._read_prom()
assert "evaluate_slo" in source, "evaluate_slo method not found"
def test_slo_goal_configurable(self):
"""SLO target/goal should be configurable (e.g. 0.999)."""
source = self._read_prom()
goal_patterns = ["goal", "target", "objective", "slo"]
found = sum(1 for p in goal_patterns if p in source.lower())
assert found >= 1, "No SLO goal/target configuration found"
def test_validate_slo_config_function(self):
"""validate_slo_config function must be defined in validator."""
source = self._read_validator()
assert "validate_slo_config" in source, "validate_slo_config function not found"
def test_validator_checks_required_fields(self):
"""Validator must check required fields."""
source = self._read_validator()
required = ["service_name", "slo_name", "backend", "goal", "window"]
found = sum(1 for f in required if f in source)
assert found >= 4, f"Validator only checks {found}/5 required fields"
def test_validator_raises_value_error(self):
"""Validator should raise ValueError on invalid config."""
source = self._read_validator()
assert "ValueError" in source, "ValueError not raised in validator"
def test_validator_checks_goal_range(self):
"""Validator must ensure goal is in (0, 1) range."""
source = self._read_validator()
range_patterns = ["0", "1", "goal", "range", "<", ">"]
found = sum(1 for p in range_patterns if p in source)
assert found >= 3, "Goal range validation not clearly implemented"
def test_import_prometheus_availability(self):
"""PrometheusAvailabilitySLI should be importable."""
result = subprocess.run(
[
"python",
"-c",
"import sys; sys.path.insert(0,'.'); "
"from slo_generator.backends.prometheus_availability import "
"PrometheusAvailabilitySLI; print('OK')",
],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Import failed:\n{result.stderr}"
def test_import_validate_slo_config(self):
"""validate_slo_config should be importable."""
result = subprocess.run(
[
"python",
"-c",
"import sys; sys.path.insert(0,'.'); "
"from slo_generator.utils.slo_config_validator import "
"validate_slo_config; print('OK')",
],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Import failed:\n{result.stderr}"
| https://github.com/google/slo-generator | zhangyiiiiii/swe-skills-bench-python | |
python-performance-optimization | Python Performance Optimizer | See task file for detailed mission requirements. | feature | # Task: Create Python Profiling Demo Scripts for py-spy
## Background
Add practical profiling demo
scripts to the py-spy repository that demonstrate various profiling
scenarios and analysis workflows.
## Files to Create/Modify
- examples/profiling_targets/cpu_bound.py (CPU-intensive workload)
- examples/profiling_targets/io_bound.py (I/O-intensive workload)
- examples/profiling_targets/README.md (documentation)
- scripts/analyze_profile.py (profile analysis helper)
## Requirements
CPU-Bound Example (cpu_bound.py):
- Recursive Fibonacci calculation
- Matrix multiplication
- String processing loops
- Clear hotspot functions for easy identification
I/O-Bound Example (io_bound.py):
- File operations
- Sleep-based simulation
- Network call simulation (localhost)
- Threading/async patterns
Analysis Script (scripts/analyze_profile.py):
- Load py-spy output (flamegraph SVG or speedscope JSON)
- Extract top functions by time
- Generate summary report
- JSON export for further analysis
Expected py-spy Commands:
- `py-spy record -o profile.svg -- python examples/profiling_targets/cpu_bound.py`
- `py-spy top -- python examples/profiling_targets/io_bound.py`
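A minimal sketch of what the CPU-bound target's hotspots might look like (hypothetical shapes only — the actual `cpu_bound.py` is left to the implementer). Each function is deliberately naive so it stands out clearly in a py-spy flamegraph:

```python
def fib(n):
    """Deliberately naive recursion — an unmistakable hotspot under py-spy."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def matmul(a, b):
    """Pure-Python matrix multiply; a second, easy-to-spot hotspot."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

if __name__ == "__main__":
    print(fib(25))      # 75025
    m = [[1, 2], [3, 4]]
    print(matmul(m, m))  # [[7, 10], [15, 22]]
```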
## Acceptance Criteria
- Demo scripts run independently without py-spy
- `python examples/profiling_targets/cpu_bound.py` exits with code 0
- README explains how to use py-spy with examples
| ---
name: python-performance-optimization
description: Profile and optimize Python code using cProfile, memory profilers, and performance best practices. Use when debugging slow Python code, optimizing bottlenecks, or improving application performance.
---
# Python Performance Optimization
Comprehensive guide to profiling, analyzing, and optimizing Python code for better performance, including CPU profiling, memory optimization, and implementation best practices.
## When to Use This Skill
- Identifying performance bottlenecks in Python applications
- Reducing application latency and response times
- Optimizing CPU-intensive operations
- Reducing memory consumption and memory leaks
- Improving database query performance
- Optimizing I/O operations
- Speeding up data processing pipelines
- Implementing high-performance algorithms
- Profiling production applications
## Core Concepts
### 1. Profiling Types
- **CPU Profiling**: Identify time-consuming functions
- **Memory Profiling**: Track memory allocation and leaks
- **Line Profiling**: Profile at line-by-line granularity
- **Call Graph**: Visualize function call relationships
### 2. Performance Metrics
- **Execution Time**: How long operations take
- **Memory Usage**: Peak and average memory consumption
- **CPU Utilization**: Processor usage patterns
- **I/O Wait**: Time spent on I/O operations
### 3. Optimization Strategies
- **Algorithmic**: Better algorithms and data structures
- **Implementation**: More efficient code patterns
- **Parallelization**: Multi-threading/processing
- **Caching**: Avoid redundant computation
- **Native Extensions**: C/Rust for critical paths
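Of these strategies, caching is often the cheapest win. In pure Python, `functools.lru_cache` memoizes a function so each distinct input is computed once:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Memoized Fibonacci — overlapping subproblems are computed only once."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Returns instantly; the uncached recursive version would take exponential time.
print(fib(80))  # 23416728348467685
```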
## Quick Start
### Basic Timing
```python
import time
def measure_time():
"""Simple timing measurement."""
start = time.time()
# Your code here
result = sum(range(1000000))
elapsed = time.time() - start
print(f"Execution time: {elapsed:.4f} seconds")
return result
# Better: use timeit for accurate measurements
import timeit
execution_time = timeit.timeit(
"sum(range(1000000))",
number=100
)
print(f"Average time: {execution_time/100:.6f} seconds")
```
## Profiling Tools
### Pattern 1: cProfile - CPU Profiling
```python
import cProfile
import pstats
from pstats import SortKey
def slow_function():
"""Function to profile."""
total = 0
for i in range(1000000):
total += i
return total
def another_function():
"""Another function."""
return [i**2 for i in range(100000)]
def main():
"""Main function to profile."""
result1 = slow_function()
result2 = another_function()
return result1, result2
# Profile the code
if __name__ == "__main__":
profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()
# Print stats
stats = pstats.Stats(profiler)
stats.sort_stats(SortKey.CUMULATIVE)
stats.print_stats(10) # Top 10 functions
# Save to file for later analysis
stats.dump_stats("profile_output.prof")
```
**Command-line profiling:**
```bash
# Profile a script
python -m cProfile -o output.prof script.py
# View results
python -m pstats output.prof
# In pstats:
# sort cumtime
# stats 10
```
### Pattern 2: line_profiler - Line-by-Line Profiling
```python
# Install: pip install line-profiler
# Add @profile decorator (line_profiler provides this)
@profile
def process_data(data):
"""Process data with line profiling."""
result = []
for item in data:
processed = item * 2
result.append(processed)
return result
# Run with:
# kernprof -l -v script.py
```
**Manual line profiling:**
```python
from line_profiler import LineProfiler
def process_data(data):
"""Function to profile."""
result = []
for item in data:
processed = item * 2
result.append(processed)
return result
if __name__ == "__main__":
lp = LineProfiler()
lp.add_function(process_data)
data = list(range(100000))
lp_wrapper = lp(process_data)
lp_wrapper(data)
lp.print_stats()
```
### Pattern 3: memory_profiler - Memory Usage
```python
# Install: pip install memory-profiler
from memory_profiler import profile
@profile
def memory_intensive():
"""Function that uses lots of memory."""
# Create large list
big_list = [i for i in range(1000000)]
# Create large dict
big_dict = {i: i**2 for i in range(100000)}
# Process data
result = sum(big_list)
return result
if __name__ == "__main__":
memory_intensive()
# Run with:
# python -m memory_profiler script.py
```
### Pattern 4: py-spy - Production Profiling
```bash
# Install: pip install py-spy
# Profile a running Python process
py-spy top --pid 12345
# Generate flamegraph
py-spy record -o profile.svg --pid 12345
# Profile a script
py-spy record -o profile.svg -- python script.py
# Dump current call stack
py-spy dump --pid 12345
```
## Optimization Patterns
### Pattern 5: List Comprehensions vs Loops
```python
import timeit
# Slow: Traditional loop
def slow_squares(n):
"""Create list of squares using loop."""
result = []
for i in range(n):
result.append(i**2)
return result
# Fast: List comprehension
def fast_squares(n):
"""Create list of squares using comprehension."""
return [i**2 for i in range(n)]
# Benchmark
n = 100000
slow_time = timeit.timeit(lambda: slow_squares(n), number=100)
fast_time = timeit.timeit(lambda: fast_squares(n), number=100)
print(f"Loop: {slow_time:.4f}s")
print(f"Comprehension: {fast_time:.4f}s")
print(f"Speedup: {slow_time/fast_time:.2f}x")
# map can beat a comprehension, but only with built-in callables;
# map with a lambda is usually *slower* than a comprehension, so benchmark it
def map_squares(n):
    """Use map -- measure it, as lambda call overhead often loses."""
    return list(map(lambda x: x**2, range(n)))
```
### Pattern 6: Generator Expressions for Memory
```python
import sys
def list_approach():
"""Memory-intensive list."""
data = [i**2 for i in range(1000000)]
return sum(data)
def generator_approach():
"""Memory-efficient generator."""
data = (i**2 for i in range(1000000))
return sum(data)
# Memory comparison
list_data = [i for i in range(1000000)]
gen_data = (i for i in range(1000000))
print(f"List size: {sys.getsizeof(list_data)} bytes")
print(f"Generator size: {sys.getsizeof(gen_data)} bytes")
# Generators use constant memory regardless of size
```
### Pattern 7: String Concatenation
```python
import timeit
def slow_concat(items):
"""Slow string concatenation."""
result = ""
for item in items:
result += str(item)
return result
def fast_concat(items):
"""Fast string concatenation with join."""
return "".join(str(item) for item in items)
def faster_concat(items):
"""Even faster with list."""
parts = [str(item) for item in items]
return "".join(parts)
items = list(range(10000))
# Benchmark
slow = timeit.timeit(lambda: slow_concat(items), number=100)
fast = timeit.timeit(lambda: fast_concat(items), number=100)
faster = timeit.timeit(lambda: faster_concat(items), number=100)
print(f"Concatenation (+): {slow:.4f}s")
print(f"Join (generator): {fast:.4f}s")
print(f"Join (list): {faster:.4f}s")
```
### Pattern 8: Dictionary Lookups vs List Searches
```python
import timeit
# Create test data
size = 10000
items = list(range(size))
lookup_dict = {i: i for i in range(size)}
def list_search(items, target):
"""O(n) search in list."""
return target in items
def dict_search(lookup_dict, target):
"""O(1) search in dict."""
return target in lookup_dict
target = size - 1 # Worst case for list
# Benchmark
list_time = timeit.timeit(
lambda: list_search(items, target),
number=1000
)
dict_time = timeit.timeit(
lambda: dict_search(lookup_dict, target),
number=1000
)
print(f"List search: {list_time:.6f}s")
print(f"Dict search: {dict_time:.6f}s")
print(f"Speedup: {list_time/dict_time:.0f}x")
```
### Pattern 9: Local Variable Access
```python
import timeit
# Global variable (slow)
GLOBAL_VALUE = 100
def use_global():
"""Access global variable."""
total = 0
for i in range(10000):
total += GLOBAL_VALUE
return total
def use_local():
"""Use local variable."""
local_value = 100
total = 0
for i in range(10000):
total += local_value
return total
# Local is faster
global_time = timeit.timeit(use_global, number=1000)
local_time = timeit.timeit(use_local, number=1000)
print(f"Global access: {global_time:.4f}s")
print(f"Local access: {local_time:.4f}s")
print(f"Speedup: {global_time/local_time:.2f}x")
```
### Pattern 10: Function Call Overhead
```python
import timeit
def calculate_inline():
"""Inline calculation."""
total = 0
for i in range(10000):
total += i * 2 + 1
return total
def helper_function(x):
"""Helper function."""
return x * 2 + 1
def calculate_with_function():
"""Calculation with function calls."""
total = 0
for i in range(10000):
total += helper_function(i)
return total
# Inline is faster due to no call overhead
inline_time = timeit.timeit(calculate_inline, number=1000)
function_time = timeit.timeit(calculate_with_function, number=1000)
print(f"Inline: {inline_time:.4f}s")
print(f"Function calls: {function_time:.4f}s")
```
## Advanced Optimization
### Pattern 11: NumPy for Numerical Operations
```python
import timeit
import numpy as np
def python_sum(n):
"""Sum using pure Python."""
return sum(range(n))
def numpy_sum(n):
"""Sum using NumPy."""
return np.arange(n).sum()
n = 1000000
python_time = timeit.timeit(lambda: python_sum(n), number=100)
numpy_time = timeit.timeit(lambda: numpy_sum(n), number=100)
print(f"Python: {python_time:.4f}s")
print(f"NumPy: {numpy_time:.4f}s")
print(f"Speedup: {python_time/numpy_time:.2f}x")
# Vectorized operations
def python_multiply():
"""Element-wise multiplication in Python."""
a = list(range(100000))
b = list(range(100000))
return [x * y for x, y in zip(a, b)]
def numpy_multiply():
"""Vectorized multiplication in NumPy."""
a = np.arange(100000)
b = np.arange(100000)
return a * b
py_time = timeit.timeit(python_multiply, number=100)
np_time = timeit.timeit(numpy_multiply, number=100)
print(f"\nPython multiply: {py_time:.4f}s")
print(f"NumPy multiply: {np_time:.4f}s")
print(f"Speedup: {py_time/np_time:.2f}x")
```
### Pattern 12: Caching with functools.lru_cache
```python
from functools import lru_cache
import timeit
def fibonacci_slow(n):
"""Recursive fibonacci without caching."""
if n < 2:
return n
return fibonacci_slow(n-1) + fibonacci_slow(n-2)
@lru_cache(maxsize=None)
def fibonacci_fast(n):
"""Recursive fibonacci with caching."""
if n < 2:
return n
return fibonacci_fast(n-1) + fibonacci_fast(n-2)
# Massive speedup for recursive algorithms
n = 30
slow_time = timeit.timeit(lambda: fibonacci_slow(n), number=1)
fast_time = timeit.timeit(lambda: fibonacci_fast(n), number=1000)
print(f"Without cache (1 run): {slow_time:.4f}s")
print(f"With cache (1000 runs): {fast_time:.4f}s")
# Cache info
print(f"Cache info: {fibonacci_fast.cache_info()}")
```
### Pattern 13: Using `__slots__` for Memory
```python
import sys
class RegularClass:
"""Regular class with __dict__."""
def __init__(self, x, y, z):
self.x = x
self.y = y
self.z = z
class SlottedClass:
"""Class with __slots__ for memory efficiency."""
__slots__ = ['x', 'y', 'z']
def __init__(self, x, y, z):
self.x = x
self.y = y
self.z = z
# Memory comparison
regular = RegularClass(1, 2, 3)
slotted = SlottedClass(1, 2, 3)
print(f"Regular class size: {sys.getsizeof(regular)} bytes")
print(f"Slotted class size: {sys.getsizeof(slotted)} bytes")
# Significant savings with many instances
regular_objects = [RegularClass(i, i+1, i+2) for i in range(10000)]
slotted_objects = [SlottedClass(i, i+1, i+2) for i in range(10000)]
# Note: sys.getsizeof does not count a regular object's per-instance
# __dict__, so the real savings from __slots__ are larger than these
# rough per-instance numbers suggest
print(f"\nMemory for 10000 regular objects: ~{sys.getsizeof(regular) * 10000} bytes")
print(f"Memory for 10000 slotted objects: ~{sys.getsizeof(slotted) * 10000} bytes")
```
### Pattern 14: Multiprocessing for CPU-Bound Tasks
```python
import multiprocessing as mp
import time
def cpu_intensive_task(n):
"""CPU-intensive calculation."""
return sum(i**2 for i in range(n))
def sequential_processing():
"""Process tasks sequentially."""
start = time.time()
results = [cpu_intensive_task(1000000) for _ in range(4)]
elapsed = time.time() - start
return elapsed, results
def parallel_processing():
"""Process tasks in parallel."""
start = time.time()
with mp.Pool(processes=4) as pool:
results = pool.map(cpu_intensive_task, [1000000] * 4)
elapsed = time.time() - start
return elapsed, results
if __name__ == "__main__":
seq_time, seq_results = sequential_processing()
par_time, par_results = parallel_processing()
print(f"Sequential: {seq_time:.2f}s")
print(f"Parallel: {par_time:.2f}s")
print(f"Speedup: {seq_time/par_time:.2f}x")
```
### Pattern 15: Async I/O for I/O-Bound Tasks
```python
import asyncio
import aiohttp
import time
import requests
urls = [
"https://httpbin.org/delay/1",
"https://httpbin.org/delay/1",
"https://httpbin.org/delay/1",
"https://httpbin.org/delay/1",
]
def synchronous_requests():
"""Synchronous HTTP requests."""
start = time.time()
results = []
for url in urls:
response = requests.get(url)
results.append(response.status_code)
elapsed = time.time() - start
return elapsed, results
async def async_fetch(session, url):
"""Async HTTP request."""
async with session.get(url) as response:
return response.status
async def asynchronous_requests():
"""Asynchronous HTTP requests."""
start = time.time()
async with aiohttp.ClientSession() as session:
tasks = [async_fetch(session, url) for url in urls]
results = await asyncio.gather(*tasks)
elapsed = time.time() - start
return elapsed, results
# Async is much faster for I/O-bound work
sync_time, sync_results = synchronous_requests()
async_time, async_results = asyncio.run(asynchronous_requests())
print(f"Synchronous: {sync_time:.2f}s")
print(f"Asynchronous: {async_time:.2f}s")
print(f"Speedup: {sync_time/async_time:.2f}x")
```
## Database Optimization
### Pattern 16: Batch Database Operations
```python
import sqlite3
import time
def create_db():
"""Create test database."""
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
return conn
def slow_inserts(conn, count):
"""Insert records one at a time."""
start = time.time()
cursor = conn.cursor()
for i in range(count):
cursor.execute("INSERT INTO users (name) VALUES (?)", (f"User {i}",))
conn.commit() # Commit each insert
elapsed = time.time() - start
return elapsed
def fast_inserts(conn, count):
"""Batch insert with single commit."""
start = time.time()
cursor = conn.cursor()
data = [(f"User {i}",) for i in range(count)]
cursor.executemany("INSERT INTO users (name) VALUES (?)", data)
conn.commit() # Single commit
elapsed = time.time() - start
return elapsed
# Benchmark
conn1 = create_db()
slow_time = slow_inserts(conn1, 1000)
conn2 = create_db()
fast_time = fast_inserts(conn2, 1000)
print(f"Individual inserts: {slow_time:.4f}s")
print(f"Batch insert: {fast_time:.4f}s")
print(f"Speedup: {slow_time/fast_time:.2f}x")
```
### Pattern 17: Query Optimization
```python
# Use indexes for frequently queried columns
"""
-- Slow: No index
SELECT * FROM users WHERE email = 'user@example.com';
-- Fast: With index
CREATE INDEX idx_users_email ON users(email);
SELECT * FROM users WHERE email = 'user@example.com';
"""
# Use query planning
import sqlite3
conn = sqlite3.connect("example.db")
cursor = conn.cursor()
# Analyze query performance
cursor.execute("EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?", ("test@example.com",))
print(cursor.fetchall())
# Use SELECT only needed columns
# Slow: SELECT *
# Fast: SELECT id, name
```
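To make the last point concrete, selecting only the needed columns also avoids moving wide payloads into Python. A minimal sqlite3 sketch (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT, bio TEXT)"
)
conn.executemany(
    "INSERT INTO users (name, email, bio) VALUES (?, ?, ?)",
    [(f"u{i}", f"u{i}@example.com", "x" * 1000) for i in range(1000)],
)
conn.execute("CREATE INDEX idx_users_email ON users(email)")

# SELECT * drags the large bio column across for every row;
# naming only id and name skips that work entirely.
wide = conn.execute("SELECT * FROM users").fetchall()
narrow = conn.execute("SELECT id, name FROM users").fetchall()
print(len(wide), len(narrow))  # same row count, far less data moved
```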
## Memory Optimization
### Pattern 18: Detecting Memory Leaks
```python
import tracemalloc
import gc
def memory_leak_example():
"""Example that leaks memory."""
leaked_objects = []
for i in range(100000):
# Objects added but never removed
leaked_objects.append([i] * 100)
# In real code, this would be an unintended reference
def track_memory_usage():
"""Track memory allocations."""
tracemalloc.start()
# Take snapshot before
snapshot1 = tracemalloc.take_snapshot()
# Run code
memory_leak_example()
# Take snapshot after
snapshot2 = tracemalloc.take_snapshot()
# Compare
top_stats = snapshot2.compare_to(snapshot1, 'lineno')
print("Top 10 memory allocations:")
for stat in top_stats[:10]:
print(stat)
tracemalloc.stop()
# Monitor memory
track_memory_usage()
# Force garbage collection
gc.collect()
```
### Pattern 19: Iterators vs Lists
```python
import sys
def process_file_list(filename):
"""Load entire file into memory."""
with open(filename) as f:
lines = f.readlines() # Loads all lines
return sum(1 for line in lines if line.strip())
def process_file_iterator(filename):
"""Process file line by line."""
with open(filename) as f:
return sum(1 for line in f if line.strip())
# Iterator uses constant memory
# List loads entire file into memory
```
### Pattern 20: Weakref for Caches
```python
import weakref
class CachedResource:
"""Resource that can be garbage collected."""
def __init__(self, data):
self.data = data
# Regular cache prevents garbage collection
regular_cache = {}
def get_resource_regular(key):
"""Get resource from regular cache."""
if key not in regular_cache:
regular_cache[key] = CachedResource(f"Data for {key}")
return regular_cache[key]
# Weak reference cache allows garbage collection
weak_cache = weakref.WeakValueDictionary()
def get_resource_weak(key):
"""Get resource from weak cache."""
resource = weak_cache.get(key)
if resource is None:
resource = CachedResource(f"Data for {key}")
weak_cache[key] = resource
return resource
# When no strong references exist, objects can be GC'd
```
## Benchmarking Tools
### Custom Benchmark Decorator
```python
import time
from functools import wraps
def benchmark(func):
"""Decorator to benchmark function execution."""
@wraps(func)
def wrapper(*args, **kwargs):
start = time.perf_counter()
result = func(*args, **kwargs)
elapsed = time.perf_counter() - start
print(f"{func.__name__} took {elapsed:.6f} seconds")
return result
return wrapper
@benchmark
def slow_function():
"""Function to benchmark."""
time.sleep(0.5)
return sum(range(1000000))
result = slow_function()
```
### Performance Testing with pytest-benchmark
```python
# Install: pip install pytest-benchmark
def test_list_comprehension(benchmark):
"""Benchmark list comprehension."""
result = benchmark(lambda: [i**2 for i in range(10000)])
assert len(result) == 10000
def test_map_function(benchmark):
"""Benchmark map function."""
result = benchmark(lambda: list(map(lambda x: x**2, range(10000))))
assert len(result) == 10000
# Run with: pytest test_performance.py --benchmark-compare
```
## Best Practices
1. **Profile before optimizing** - Measure to find real bottlenecks
2. **Focus on hot paths** - Optimize code that runs most frequently
3. **Use appropriate data structures** - Dict for lookups, set for membership
4. **Avoid premature optimization** - Clarity first, then optimize
5. **Use built-in functions** - They're implemented in C
6. **Cache expensive computations** - Use lru_cache
7. **Batch I/O operations** - Reduce system calls
8. **Use generators** for large datasets
9. **Consider NumPy** for numerical operations
10. **Profile production code** - Use py-spy for live systems
## Common Pitfalls
- Optimizing without profiling
- Using global variables unnecessarily
- Not using appropriate data structures
- Creating unnecessary copies of data
- Not using connection pooling for databases
- Ignoring algorithmic complexity
- Over-optimizing rare code paths
- Not considering memory usage
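One pitfall above — skipping connection pooling — can be sketched with a tiny pool built on `queue.Queue`. This is illustrative only, not a production-grade pool (no health checks, timeouts, or thread-safety guarantees beyond the queue itself):

```python
import queue
import sqlite3

class SimplePool:
    """Tiny illustrative connection pool (not production-grade)."""

    def __init__(self, factory, size=4):
        self._q = queue.Queue()
        for _ in range(size):
            self._q.put(factory())

    def acquire(self):
        # Blocks when the pool is exhausted instead of opening a new connection
        return self._q.get()

    def release(self, conn):
        self._q.put(conn)

pool = SimplePool(lambda: sqlite3.connect(":memory:"), size=2)
conn = pool.acquire()
conn.execute("SELECT 1")
pool.release(conn)
```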
## Resources
- **cProfile**: Built-in CPU profiler
- **memory_profiler**: Memory usage profiling
- **line_profiler**: Line-by-line profiling
- **py-spy**: Sampling profiler for production
- **NumPy**: High-performance numerical computing
- **Cython**: Compile Python to C
- **PyPy**: Alternative Python interpreter with JIT
## Performance Checklist
- [ ] Profiled code to identify bottlenecks
- [ ] Used appropriate data structures
- [ ] Implemented caching where beneficial
- [ ] Optimized database queries
- [ ] Used generators for large datasets
- [ ] Considered multiprocessing for CPU-bound tasks
- [ ] Used async I/O for I/O-bound tasks
- [ ] Minimized function call overhead in hot loops
- [ ] Checked for memory leaks
- [ ] Benchmarked before and after optimization
| """
Test for 'python-performance-optimization' skill — Python Profiling Demo Scripts
Validates that the Agent created profiling target scripts and analysis tools
for py-spy.
"""
import os
import subprocess
import pytest
class TestPythonPerformanceOptimization:
"""Verify profiling demo scripts for py-spy."""
REPO_DIR = "/workspace/py-spy"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_cpu_bound_exists(self):
"""examples/profiling_targets/cpu_bound.py must exist."""
fpath = os.path.join(
self.REPO_DIR, "examples", "profiling_targets", "cpu_bound.py"
)
assert os.path.isfile(fpath), "cpu_bound.py not found"
def test_io_bound_exists(self):
"""examples/profiling_targets/io_bound.py must exist."""
fpath = os.path.join(
self.REPO_DIR, "examples", "profiling_targets", "io_bound.py"
)
assert os.path.isfile(fpath), "io_bound.py not found"
def test_readme_exists(self):
"""examples/profiling_targets/README.md must exist."""
fpath = os.path.join(
self.REPO_DIR, "examples", "profiling_targets", "README.md"
)
assert os.path.isfile(fpath), "README.md not found"
def test_analyze_script_exists(self):
"""scripts/analyze_profile.py must exist."""
fpath = os.path.join(self.REPO_DIR, "scripts", "analyze_profile.py")
assert os.path.isfile(fpath), "analyze_profile.py not found"
# ------------------------------------------------------------------
# L1: syntax
# ------------------------------------------------------------------
def test_cpu_bound_compiles(self):
"""cpu_bound.py must compile."""
result = subprocess.run(
["python", "-m", "py_compile", "examples/profiling_targets/cpu_bound.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
def test_io_bound_compiles(self):
"""io_bound.py must compile."""
result = subprocess.run(
["python", "-m", "py_compile", "examples/profiling_targets/io_bound.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
def test_analyze_compiles(self):
"""analyze_profile.py must compile."""
result = subprocess.run(
["python", "-m", "py_compile", "scripts/analyze_profile.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
# ------------------------------------------------------------------
# L2: runtime & content verification
# ------------------------------------------------------------------
def test_cpu_bound_runs(self):
"""cpu_bound.py must run independently without py-spy."""
result = subprocess.run(
["python", "examples/profiling_targets/cpu_bound.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
assert result.returncode == 0, f"cpu_bound.py failed:\n{result.stderr}"
def test_io_bound_runs(self):
"""io_bound.py must run independently."""
result = subprocess.run(
["python", "examples/profiling_targets/io_bound.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=120,
)
assert result.returncode == 0, f"io_bound.py failed:\n{result.stderr}"
def test_cpu_bound_has_hotspot(self):
"""cpu_bound.py should contain identifiable hotspot functions."""
fpath = os.path.join(
self.REPO_DIR, "examples", "profiling_targets", "cpu_bound.py"
)
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
hotspot_patterns = ["fibonacci", "matrix", "multiply", "prime", "sort", "loop"]
found = sum(1 for p in hotspot_patterns if p in content.lower())
assert found >= 1, "No identifiable CPU hotspot functions found"
def test_readme_explains_usage(self):
"""README.md should explain how to use py-spy with examples."""
fpath = os.path.join(
self.REPO_DIR, "examples", "profiling_targets", "README.md"
)
with open(fpath, "r", encoding="utf-8") as f:
content = f.read()
assert (
"py-spy" in content.lower() or "py_spy" in content.lower()
), "README doesn't mention py-spy"
assert len(content) >= 100, "README is too short to be useful"
| https://github.com/benfred/py-spy | zhangyiiiiii/swe-skills-bench-rust | |
grafana-dashboards | Grafana Dashboards | See task file for detailed mission requirements. | feature | # Task: Add Infrastructure Monitoring Dashboard to Grafana
## Background
Add a pre-built infrastructure monitoring dashboard JSON and provisioning configuration to the Grafana repository. The dashboard should be placed in Grafana's devenv provisioning directory for use in development and testing.
## Files to Create/Modify
- `devenv/dev-dashboards/infra/service_metrics.json` - Dashboard JSON definition
- `devenv/provisioning/dashboards/infra.yaml` - Dashboard provider config
- `devenv/provisioning/datasources/prometheus.yaml` - Prometheus datasource config
## Requirements
### Dashboard JSON (service_metrics.json)
- `title` and `uid` fields (uid must be unique)
- Multiple panel types:
- Graph panel: Request rate over time
- Stat panel: Error rate percentage
- Histogram panel: Latency distribution
- Table panel: Top endpoints by request count
- Variable templating (`$namespace`, `$service`)
- Time range configuration (default: last 1 hour)
- Prometheus queries for all panels
### Dashboard Provisioning (infra.yaml)
- Dashboard provider pointing to `devenv/dev-dashboards/infra/`
- `disableDeletion: false`
- `updateIntervalSeconds: 10`
### Datasource Provisioning (prometheus.yaml)
- Prometheus datasource definition
- URL: `http://localhost:9090`
- Access mode: proxy
## Acceptance Criteria
- `go build ./...` compiles without errors (dashboard files don't affect Go build)
- Dashboard JSON is valid (parseable without errors)
- Dashboard contains `panels` array with at least 4 panel definitions
- Provisioning YAML files are valid and contain provider configuration
| ---
name: grafana-dashboards
description: Create and manage production Grafana dashboards for real-time visualization of system and application metrics. Use when building monitoring dashboards, visualizing metrics, or creating operational observability interfaces.
---
# Grafana Dashboards
Create and manage production-ready Grafana dashboards for comprehensive system observability.
## Purpose
Design effective Grafana dashboards for monitoring applications, infrastructure, and business metrics.
## When to Use
- Visualize Prometheus metrics
- Create custom dashboards
- Implement SLO dashboards
- Monitor infrastructure
- Track business KPIs
## Dashboard Design Principles
### 1. Hierarchy of Information
```
┌─────────────────────────────────────┐
│ Critical Metrics (Big Numbers) │
├─────────────────────────────────────┤
│ Key Trends (Time Series) │
├─────────────────────────────────────┤
│ Detailed Metrics (Tables/Heatmaps) │
└─────────────────────────────────────┘
```
### 2. RED Method (Services)
- **Rate** - Requests per second
- **Errors** - Error rate
- **Duration** - Latency/response time
### 3. USE Method (Resources)
- **Utilization** - % time resource is busy
- **Saturation** - Queue length/wait time
- **Errors** - Error count
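As a concrete illustration, the RED numbers above can be computed from a window of request records in plain Python (the record fields here are assumptions for the sketch, not a Grafana or Prometheus API):

```python
from dataclasses import dataclass

@dataclass
class Request:
    duration_s: float  # response time in seconds
    status: int        # HTTP status code

def red_metrics(requests, window_s):
    """Compute Rate, Errors, Duration (p95) for one time window."""
    rate = len(requests) / window_s
    errors = sum(1 for r in requests if r.status >= 500) / max(len(requests), 1)
    durations = sorted(r.duration_s for r in requests)
    p95 = durations[int(0.95 * (len(durations) - 1))] if durations else 0.0
    return rate, errors, p95

reqs = [Request(0.1, 200)] * 95 + [Request(0.8, 500)] * 5
rate, err, p95 = red_metrics(reqs, window_s=60)
print(rate, err, p95)
```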
## Dashboard Structure
### API Monitoring Dashboard
```json
{
"dashboard": {
"title": "API Monitoring",
"tags": ["api", "production"],
"timezone": "browser",
"refresh": "30s",
"panels": [
{
"title": "Request Rate",
"type": "graph",
"targets": [
{
"expr": "sum(rate(http_requests_total[5m])) by (service)",
"legendFormat": "{{service}}"
}
],
"gridPos": { "x": 0, "y": 0, "w": 12, "h": 8 }
},
{
"title": "Error Rate %",
"type": "graph",
"targets": [
{
"expr": "(sum(rate(http_requests_total{status=~\"5..\"}[5m])) / sum(rate(http_requests_total[5m]))) * 100",
"legendFormat": "Error Rate"
}
],
"alert": {
"conditions": [
{
"evaluator": { "params": [5], "type": "gt" },
"operator": { "type": "and" },
"query": { "params": ["A", "5m", "now"] },
"type": "query"
}
]
},
"gridPos": { "x": 12, "y": 0, "w": 12, "h": 8 }
},
{
"title": "P95 Latency",
"type": "graph",
"targets": [
{
"expr": "histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le, service))",
"legendFormat": "{{service}}"
}
],
"gridPos": { "x": 0, "y": 8, "w": 24, "h": 8 }
}
]
}
}
```
**Reference:** See `assets/api-dashboard.json`
## Panel Types
### 1. Stat Panel (Single Value)
```json
{
"type": "stat",
"title": "Total Requests",
"targets": [
{
"expr": "sum(http_requests_total)"
}
],
"options": {
"reduceOptions": {
"values": false,
"calcs": ["lastNotNull"]
},
"orientation": "auto",
"textMode": "auto",
"colorMode": "value"
},
"fieldConfig": {
"defaults": {
"thresholds": {
"mode": "absolute",
"steps": [
{ "value": 0, "color": "green" },
{ "value": 80, "color": "yellow" },
{ "value": 90, "color": "red" }
]
}
}
}
}
```
### 2. Time Series Graph
```json
{
"type": "graph",
"title": "CPU Usage",
"targets": [
{
"expr": "100 - (avg by (instance) (rate(node_cpu_seconds_total{mode=\"idle\"}[5m])) * 100)"
}
],
"yaxes": [
{ "format": "percent", "max": 100, "min": 0 },
{ "format": "short" }
]
}
```
### 3. Table Panel
```json
{
"type": "table",
"title": "Service Status",
"targets": [
{
"expr": "up",
"format": "table",
"instant": true
}
],
"transformations": [
{
"id": "organize",
"options": {
"excludeByName": { "Time": true },
"indexByName": {},
"renameByName": {
"instance": "Instance",
"job": "Service",
"Value": "Status"
}
}
}
]
}
```
### 4. Heatmap
```json
{
"type": "heatmap",
"title": "Latency Heatmap",
"targets": [
{
"expr": "sum(rate(http_request_duration_seconds_bucket[5m])) by (le)",
"format": "heatmap"
}
],
"dataFormat": "tsbuckets",
"yAxis": {
"format": "s"
}
}
```
## Variables
### Query Variables
```json
{
"templating": {
"list": [
{
"name": "namespace",
"type": "query",
"datasource": "Prometheus",
"query": "label_values(kube_pod_info, namespace)",
"refresh": 1,
"multi": false
},
{
"name": "service",
"type": "query",
"datasource": "Prometheus",
"query": "label_values(kube_service_info{namespace=\"$namespace\"}, service)",
"refresh": 1,
"multi": true
}
]
}
}
```
### Use Variables in Queries
```
sum(rate(http_requests_total{namespace="$namespace", service=~"$service"}[5m]))
```
## Alerts in Dashboards
```json
{
"alert": {
"name": "High Error Rate",
"conditions": [
{
"evaluator": {
"params": [5],
"type": "gt"
},
"operator": { "type": "and" },
"query": {
"params": ["A", "5m", "now"]
},
"reducer": { "type": "avg" },
"type": "query"
}
],
"executionErrorState": "alerting",
"for": "5m",
"frequency": "1m",
"message": "Error rate is above 5%",
"noDataState": "no_data",
"notifications": [{ "uid": "slack-channel" }]
}
}
```
## Dashboard Provisioning
**dashboards.yml:**
```yaml
apiVersion: 1
providers:
- name: "default"
orgId: 1
folder: "General"
type: file
disableDeletion: false
updateIntervalSeconds: 10
allowUiUpdates: true
options:
path: /etc/grafana/dashboards
```
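A companion datasource provider (e.g. `datasources/prometheus.yaml`) uses the same `apiVersion: 1` provisioning format; a minimal sketch pointing at a local Prometheus:

```yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
```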
## Common Dashboard Patterns
### Infrastructure Dashboard
**Key Panels:**
- CPU utilization per node
- Memory usage per node
- Disk I/O
- Network traffic
- Pod count by namespace
- Node status
**Reference:** See `assets/infrastructure-dashboard.json`
### Database Dashboard
**Key Panels:**
- Queries per second
- Connection pool usage
- Query latency (P50, P95, P99)
- Active connections
- Database size
- Replication lag
- Slow queries
**Reference:** See `assets/database-dashboard.json`
### Application Dashboard
**Key Panels:**
- Request rate
- Error rate
- Response time (percentiles)
- Active users/sessions
- Cache hit rate
- Queue length
## Best Practices
1. **Start with templates** (Grafana community dashboards)
2. **Use consistent naming** for panels and variables
3. **Group related metrics** in rows
4. **Set appropriate time ranges** (default: Last 6 hours)
5. **Use variables** for flexibility
6. **Add panel descriptions** for context
7. **Configure units** correctly
8. **Set meaningful thresholds** for colors
9. **Use consistent colors** across dashboards
10. **Test with different time ranges**
## Dashboard as Code
### Terraform Provisioning
```hcl
resource "grafana_dashboard" "api_monitoring" {
config_json = file("${path.module}/dashboards/api-monitoring.json")
folder = grafana_folder.monitoring.id
}
resource "grafana_folder" "monitoring" {
title = "Production Monitoring"
}
```
### Ansible Provisioning
```yaml
- name: Deploy Grafana dashboards
copy:
src: "{{ item }}"
dest: /etc/grafana/dashboards/
with_fileglob:
- "dashboards/*.json"
notify: restart grafana
```
## Reference Files
- `assets/api-dashboard.json` - API monitoring dashboard
- `assets/infrastructure-dashboard.json` - Infrastructure dashboard
- `assets/database-dashboard.json` - Database monitoring dashboard
- `references/dashboard-design.md` - Dashboard design guide
## Related Skills
- `prometheus-configuration` - For metric collection
- `slo-implementation` - For SLO dashboards
| """
Test for 'grafana-dashboards' skill — Grafana Dashboard Provisioning
Validates that the Agent created JSON dashboards with proper panel definitions
and a provisioning YAML for Grafana.
"""
import os
import json
import pytest
import yaml # Imported at the top for consistency
class TestGrafanaDashboards:
"""Verify Grafana dashboard provisioning setup."""
REPO_DIR = "/workspace/grafana"
# [!] Change: extracted constant path to match the requirements doc
JSON_PATH = ("devenv", "dev-dashboards", "infra", "service_metrics.json")
YAML_PATH = ("devenv", "provisioning", "dashboards", "infra.yaml")
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_dashboard_json_exists(self):
"""service_metrics.json must exist."""
fpath = os.path.join(self.REPO_DIR, *self.JSON_PATH)
assert os.path.isfile(fpath), f"{self.JSON_PATH[-1]} not found"
def test_provisioning_yaml_exists(self):
"""infra.yaml must exist."""
fpath = os.path.join(self.REPO_DIR, *self.YAML_PATH)
assert os.path.isfile(fpath), f"{self.YAML_PATH[-1]} not found"
# ------------------------------------------------------------------
# L2: dashboard JSON validation
# ------------------------------------------------------------------
def _load_dashboard(self):
fpath = os.path.join(self.REPO_DIR, *self.JSON_PATH)
with open(fpath, "r") as f:
return json.load(f)
def test_dashboard_is_valid_json(self):
"""service_metrics.json must be valid JSON."""
dash = self._load_dashboard()
assert isinstance(dash, dict), "Dashboard root must be an object"
def test_dashboard_has_title(self):
"""Dashboard must have a title."""
dash = self._load_dashboard()
assert "title" in dash, "Dashboard 'title' field missing"
assert len(dash["title"]) > 0, "Dashboard title is empty"
def test_dashboard_has_panels(self):
"""Dashboard must have at least 4 panels."""
dash = self._load_dashboard()
panels = dash.get("panels", [])
assert len(panels) >= 4, f"Need >= 4 panels, got {len(panels)}"
def test_panels_have_required_fields(self):
"""Each panel must have id, type, title, and targets or similar."""
dash = self._load_dashboard()
for panel in dash.get("panels", []):
assert "type" in panel, f"Panel missing 'type': {panel.get('title', '?')}"
assert (
"title" in panel or "id" in panel
), "Panel missing both 'title' and 'id'"
def test_has_graph_or_timeseries_panel(self):
"""At least one panel must be graph or timeseries type."""
dash = self._load_dashboard()
types = {p.get("type") for p in dash.get("panels", [])}
graph_types = {"graph", "timeseries", "stat", "gauge", "barchart"}
assert types & graph_types, f"No graph-like panel found; types: {types}"
def test_dashboard_has_datasource(self):
"""Dashboard should reference a data source."""
dash = self._load_dashboard()
content = json.dumps(dash)
assert "datasource" in content.lower(), "No datasource reference found"
def test_has_prometheus_queries(self):
"""Dashboard panels should have Prometheus queries (expr)."""
dash = self._load_dashboard()
content = json.dumps(dash)
query_markers = ["expr", "promQL", "rate(", "sum(", "histogram_quantile"]
found = any(m in content for m in query_markers)
assert found, "No Prometheus query expressions found in panels"
def test_provisioning_yaml_valid(self):
"""infra.yaml must be valid YAML with providers."""
fpath = os.path.join(self.REPO_DIR, *self.YAML_PATH)
with open(fpath, "r") as f:
config = yaml.safe_load(f)
assert isinstance(config, dict), "Provisioning must be a mapping"
def test_provisioning_references_path(self):
"""Provisioning config must reference dashboard folder path."""
fpath = os.path.join(self.REPO_DIR, *self.YAML_PATH)
with open(fpath, "r") as f:
content = f.read()
assert (
"path" in content or "folder" in content
), "Provisioning config missing path/folder reference"
def test_templating_variables(self):
"""Dashboard should have templating variables."""
dash = self._load_dashboard()
templating = dash.get("templating", {})
var_list = templating.get("list", [])
assert len(var_list) >= 1, "Dashboard should have at least 1 template variable"
| https://github.com/grafana/grafana | zhangyiiiiii/swe-skills-bench-golang | |
dbt-transformation-patterns | dbt Transformation Patterns | See task file for detailed mission requirements. | test | # Task: Add dbt Model Transformation Tests for dbt-core
## Background
Add comprehensive transformation test coverage for dbt-core's model compilation and execution, including staging model examples and custom test definitions.
## Files to Create/Modify
- `tests/functional/staging/test_stg_orders.py` - Python test for model compilation
- `tests/functional/staging/fixtures.py` - Test fixtures with model SQL and schema YAML
- `core/dbt/tests/staging/stg_orders.sql` - Example staging model (optional fixture)
- `core/dbt/tests/staging/schema.yml` - Model documentation and tests (optional fixture)
## Requirements
### Staging Model Definition (stg_orders.sql)
- Source reference using `{{ source() }}` macro
- Column transformations (renaming, type casting)
- Appropriate materialization config block (`{{ config(materialized='view') }}`)
### Schema Documentation (schema.yml)
- Model description
- Column-level descriptions
- Built-in tests: `unique`, `not_null` on key columns
- Custom test reference for positive amount validation
### Custom Test
- SQL-based test that returns rows that fail the condition
- `assert_positive_amounts` test on the amount column
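In dbt, a custom (singular) test is a query that passes when it returns zero rows. The intended semantics of `assert_positive_amounts` can be sketched in Python purely for illustration; the row shape below is an assumption, not a dbt API:

```python
# Hypothetical sketch of dbt's "failing rows" test semantics: the SQL test
# passes when its query returns zero rows, so the Python analogue returns
# the rows that violate the condition.

def assert_positive_amounts(rows):
    """Return rows whose 'amount' is missing or not strictly positive."""
    return [r for r in rows if r.get("amount") is None or r["amount"] <= 0]

orders = [
    {"order_id": 1, "amount": 25.0},
    {"order_id": 2, "amount": -3.0},  # violates the condition
    {"order_id": 3, "amount": 0.0},   # violates the condition
]

failures = assert_positive_amounts(orders)
print(len(failures))  # 2 failing rows -> the dbt test would fail
```

An empty result list corresponds to a passing dbt test.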
### Python Test (test_stg_orders.py)
- Verify model SQL compiles without errors
- Verify schema.yml contains model description
- Verify custom test file exists with valid SQL
## Acceptance Criteria
- `core/dbt/*.py` compiles without syntax errors
- schema.yml contains model descriptions and test definitions
- Custom test SQL file validates positive amounts
| ---
name: dbt-transformation-patterns
description: Master dbt (data build tool) for analytics engineering with model organization, testing, documentation, and incremental strategies. Use when building data transformations, creating data models, or implementing analytics engineering best practices.
---
# dbt Transformation Patterns
Production-ready patterns for dbt (data build tool) including model organization, testing strategies, documentation, and incremental processing.
## When to Use This Skill
- Building data transformation pipelines with dbt
- Organizing models into staging, intermediate, and marts layers
- Implementing data quality tests
- Creating incremental models for large datasets
- Documenting data models and lineage
- Setting up dbt project structure
## Core Concepts
### 1. Model Layers (Medallion Architecture)
```
sources/ Raw data definitions
↓
staging/ 1:1 with source, light cleaning
↓
intermediate/ Business logic, joins, aggregations
↓
marts/ Final analytics tables
```
### 2. Naming Conventions
| Layer | Prefix | Example |
| ------------ | -------------- | ----------------------------- |
| Staging | `stg_` | `stg_stripe__payments` |
| Intermediate | `int_` | `int_payments_pivoted` |
| Marts | `dim_`, `fct_` | `dim_customers`, `fct_orders` |
## Quick Start
```yaml
# dbt_project.yml
name: "analytics"
version: "1.0.0"
profile: "analytics"
model-paths: ["models"]
analysis-paths: ["analyses"]
test-paths: ["tests"]
seed-paths: ["seeds"]
macro-paths: ["macros"]
vars:
start_date: "2020-01-01"
models:
analytics:
staging:
+materialized: view
+schema: staging
intermediate:
+materialized: ephemeral
marts:
+materialized: table
+schema: analytics
```
```
# Project structure
models/
├── staging/
│ ├── stripe/
│ │ ├── _stripe__sources.yml
│ │ ├── _stripe__models.yml
│ │ ├── stg_stripe__customers.sql
│ │ └── stg_stripe__payments.sql
│ └── shopify/
│ ├── _shopify__sources.yml
│ └── stg_shopify__orders.sql
├── intermediate/
│ └── finance/
│ └── int_payments_pivoted.sql
└── marts/
├── core/
│ ├── _core__models.yml
│ ├── dim_customers.sql
│ └── fct_orders.sql
└── finance/
└── fct_revenue.sql
```
## Patterns
### Pattern 1: Source Definitions
```yaml
# models/staging/stripe/_stripe__sources.yml
version: 2
sources:
- name: stripe
description: Raw Stripe data loaded via Fivetran
database: raw
schema: stripe
loader: fivetran
loaded_at_field: _fivetran_synced
freshness:
warn_after: { count: 12, period: hour }
error_after: { count: 24, period: hour }
tables:
- name: customers
description: Stripe customer records
columns:
- name: id
description: Primary key
tests:
- unique
- not_null
- name: email
description: Customer email
- name: created
description: Account creation timestamp
- name: payments
description: Stripe payment transactions
columns:
- name: id
tests:
- unique
- not_null
- name: customer_id
tests:
- not_null
- relationships:
to: source('stripe', 'customers')
field: id
```
### Pattern 2: Staging Models
```sql
-- models/staging/stripe/stg_stripe__customers.sql
with source as (
select * from {{ source('stripe', 'customers') }}
),
renamed as (
select
-- ids
id as customer_id,
-- strings
lower(email) as email,
name as customer_name,
-- timestamps
created as created_at,
-- metadata
_fivetran_synced as _loaded_at
from source
)
select * from renamed
```
```sql
-- models/staging/stripe/stg_stripe__payments.sql
{{
config(
materialized='incremental',
unique_key='payment_id',
on_schema_change='append_new_columns'
)
}}
with source as (
select * from {{ source('stripe', 'payments') }}
{% if is_incremental() %}
where _fivetran_synced > (select max(_loaded_at) from {{ this }})
{% endif %}
),
renamed as (
select
-- ids
id as payment_id,
customer_id,
invoice_id,
-- amounts (convert cents to dollars)
amount / 100.0 as amount,
amount_refunded / 100.0 as amount_refunded,
-- status
status as payment_status,
-- timestamps
created as created_at,
-- metadata
_fivetran_synced as _loaded_at
from source
)
select * from renamed
```
### Pattern 3: Intermediate Models
```sql
-- models/intermediate/finance/int_payments_pivoted_to_customer.sql
with payments as (
select * from {{ ref('stg_stripe__payments') }}
),
customers as (
select * from {{ ref('stg_stripe__customers') }}
),
payment_summary as (
select
customer_id,
count(*) as total_payments,
count(case when payment_status = 'succeeded' then 1 end) as successful_payments,
sum(case when payment_status = 'succeeded' then amount else 0 end) as total_amount_paid,
min(created_at) as first_payment_at,
max(created_at) as last_payment_at
from payments
group by customer_id
)
select
customers.customer_id,
customers.email,
customers.created_at as customer_created_at,
coalesce(payment_summary.total_payments, 0) as total_payments,
coalesce(payment_summary.successful_payments, 0) as successful_payments,
coalesce(payment_summary.total_amount_paid, 0) as lifetime_value,
payment_summary.first_payment_at,
payment_summary.last_payment_at
from customers
left join payment_summary using (customer_id)
```
### Pattern 4: Mart Models (Dimensions and Facts)
```sql
-- models/marts/core/dim_customers.sql
{{
config(
materialized='table',
unique_key='customer_id'
)
}}
with customers as (
select * from {{ ref('int_payments_pivoted_to_customer') }}
),
orders as (
select * from {{ ref('stg_shopify__orders') }}
),
order_summary as (
select
customer_id,
count(*) as total_orders,
sum(total_price) as total_order_value,
min(created_at) as first_order_at,
max(created_at) as last_order_at
from orders
group by customer_id
),
final as (
select
-- surrogate key
{{ dbt_utils.generate_surrogate_key(['customers.customer_id']) }} as customer_key,
-- natural key
customers.customer_id,
-- attributes
customers.email,
customers.customer_created_at,
-- payment metrics
customers.total_payments,
customers.successful_payments,
customers.lifetime_value,
customers.first_payment_at,
customers.last_payment_at,
-- order metrics
coalesce(order_summary.total_orders, 0) as total_orders,
coalesce(order_summary.total_order_value, 0) as total_order_value,
order_summary.first_order_at,
order_summary.last_order_at,
-- calculated fields
case
when customers.lifetime_value >= 1000 then 'high'
when customers.lifetime_value >= 100 then 'medium'
else 'low'
end as customer_tier,
-- timestamps
current_timestamp as _loaded_at
from customers
left join order_summary using (customer_id)
)
select * from final
```
```sql
-- models/marts/core/fct_orders.sql
{{
config(
materialized='incremental',
unique_key='order_id',
incremental_strategy='merge'
)
}}
with orders as (
select * from {{ ref('stg_shopify__orders') }}
{% if is_incremental() %}
where updated_at > (select max(updated_at) from {{ this }})
{% endif %}
),
customers as (
select * from {{ ref('dim_customers') }}
),
final as (
select
-- keys
orders.order_id,
customers.customer_key,
orders.customer_id,
-- dimensions
orders.order_status,
orders.fulfillment_status,
orders.payment_status,
-- measures
orders.subtotal,
orders.tax,
orders.shipping,
orders.total_price,
orders.total_discount,
orders.item_count,
-- timestamps
orders.created_at,
orders.updated_at,
orders.fulfilled_at,
-- metadata
current_timestamp as _loaded_at
from orders
left join customers on orders.customer_id = customers.customer_id
)
select * from final
```
### Pattern 5: Testing and Documentation
```yaml
# models/marts/core/_core__models.yml
version: 2
models:
- name: dim_customers
description: Customer dimension with payment and order metrics
columns:
- name: customer_key
description: Surrogate key for the customer dimension
tests:
- unique
- not_null
- name: customer_id
description: Natural key from source system
tests:
- unique
- not_null
- name: email
description: Customer email address
tests:
- not_null
- name: customer_tier
description: Customer value tier based on lifetime value
tests:
- accepted_values:
values: ["high", "medium", "low"]
- name: lifetime_value
description: Total amount paid by customer
tests:
- dbt_utils.expression_is_true:
expression: ">= 0"
- name: fct_orders
description: Order fact table with all order transactions
tests:
- dbt_utils.recency:
datepart: day
field: created_at
interval: 1
columns:
- name: order_id
tests:
- unique
- not_null
- name: customer_key
tests:
- not_null
- relationships:
to: ref('dim_customers')
field: customer_key
```
### Pattern 6: Macros and DRY Code
```sql
-- macros/cents_to_dollars.sql
{% macro cents_to_dollars(column_name, precision=2) %}
round({{ column_name }} / 100.0, {{ precision }})
{% endmacro %}
-- macros/generate_schema_name.sql
{% macro generate_schema_name(custom_schema_name, node) %}
{%- set default_schema = target.schema -%}
{%- if custom_schema_name is none -%}
{{ default_schema }}
{%- else -%}
{{ default_schema }}_{{ custom_schema_name }}
{%- endif -%}
{% endmacro %}
-- macros/limit_data_in_dev.sql
{% macro limit_data_in_dev(column_name, days=3) %}
{% if target.name == 'dev' %}
where {{ column_name }} >= dateadd(day, -{{ days }}, current_date)
{% endif %}
{% endmacro %}
-- Usage in model
select * from {{ ref('stg_orders') }}
{{ limit_data_in_dev('created_at') }}
```
### Pattern 7: Incremental Strategies
```sql
-- Delete+Insert (default for most warehouses)
{{
config(
materialized='incremental',
unique_key='id',
incremental_strategy='delete+insert'
)
}}
-- Merge (best for late-arriving data)
{{
config(
materialized='incremental',
unique_key='id',
incremental_strategy='merge',
merge_update_columns=['status', 'amount', 'updated_at']
)
}}
-- Insert Overwrite (partition-based)
{{
config(
materialized='incremental',
incremental_strategy='insert_overwrite',
partition_by={
"field": "created_date",
"data_type": "date",
"granularity": "day"
}
)
}}
select
*,
date(created_at) as created_date
from {{ ref('stg_events') }}
{% if is_incremental() %}
where created_date >= dateadd(day, -3, current_date)
{% endif %}
```
## dbt Commands
```bash
# Development
dbt run # Run all models
dbt run --select staging # Run staging models only
dbt run --select +fct_orders # Run fct_orders and its upstream
dbt run --select fct_orders+ # Run fct_orders and its downstream
dbt run --full-refresh # Rebuild incremental models
# Testing
dbt test # Run all tests
dbt test --select stg_stripe # Test specific models
dbt build # Run + test in DAG order
# Documentation
dbt docs generate # Generate docs
dbt docs serve # Serve docs locally
# Debugging
dbt compile # Compile SQL without running
dbt debug # Test connection
dbt ls --select tag:critical # List models by tag
```
## Best Practices
### Do's
- **Use staging layer** - Clean data once, use everywhere
- **Test aggressively** - Not null, unique, relationships
- **Document everything** - Column descriptions, model descriptions
- **Use incremental** - For tables > 1M rows
- **Version control** - dbt project in Git
### Don'ts
- **Don't skip staging** - Raw → mart is tech debt
- **Don't hardcode dates** - Use `{{ var('start_date') }}`
- **Don't repeat logic** - Extract to macros
- **Don't test in prod** - Use dev target
- **Don't ignore freshness** - Monitor source data
## Resources
- [dbt Documentation](https://docs.getdbt.com/)
- [dbt Best Practices](https://docs.getdbt.com/guides/best-practices)
- [dbt-utils Package](https://hub.getdbt.com/dbt-labs/dbt_utils/latest/)
- [dbt Discourse](https://discourse.getdbt.com/)
| """
Test for 'dbt-transformation-patterns' skill — dbt Transformation Patterns
Validates that the Agent created dbt model files with proper SQL
transformations, tests, and documentation.
"""
import os
import pytest
class TestDbtTransformationPatterns:
"""Verify dbt model and transformation setup."""
REPO_DIR = "/workspace/dbt-core"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_model_sql_exists(self):
"""At least one .sql model file must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".sql") and "model" in root.lower():
found.append(os.path.join(root, f))
if not found:
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".sql"):
found.append(os.path.join(root, f))
assert len(found) >= 1, "No .sql model files found"
def test_schema_yml_exists(self):
"""A schema.yml or _schema.yml must exist."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if "schema" in f.lower() and f.endswith((".yml", ".yaml")):
found = True
break
if found:
break
assert found, "No schema.yml found"
def test_dbt_project_yml_exists(self):
"""dbt_project.yml must exist."""
found = False
for root, dirs, files in os.walk(self.REPO_DIR):
if "dbt_project.yml" in files:
found = True
break
assert found, "dbt_project.yml not found"
# ------------------------------------------------------------------
# L2: content validation
# ------------------------------------------------------------------
def _find_sql_models(self):
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith(".sql"):
found.append(os.path.join(root, f))
return found
def test_models_use_ref(self):
"""SQL models should use {{ ref('...') }} for dependencies."""
found = False
for fpath in self._find_sql_models():
with open(fpath, "r", errors="ignore") as f:
content = f.read()
if "ref(" in content or "{{ ref" in content:
found = True
break
assert found, "No model uses {{ ref() }}"
def test_models_use_source(self):
"""SQL models should use {{ source('...') }} for raw data."""
found = False
for fpath in self._find_sql_models():
with open(fpath, "r", errors="ignore") as f:
content = f.read()
if "source(" in content or "{{ source" in content:
found = True
break
assert found, "No model uses {{ source() }}"
def test_staging_model_pattern(self):
"""Should follow staging/marts layer pattern."""
dirs_found = set()
for root, dirs, files in os.walk(self.REPO_DIR):
for d in dirs:
if d.lower() in (
"staging",
"marts",
"intermediate",
"raw",
"transform",
):
dirs_found.add(d.lower())
if not dirs_found:
# Check SQL file prefixes
for fpath in self._find_sql_models():
fname = os.path.basename(fpath).lower()
if (
fname.startswith("stg_")
or fname.startswith("fct_")
or fname.startswith("dim_")
):
dirs_found.add(fname[:3])
assert len(dirs_found) >= 1, "No staging/marts layer pattern found"
def test_schema_has_tests(self):
"""schema.yml must define column tests."""
import yaml
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if "schema" in f.lower() and f.endswith((".yml", ".yaml")):
fpath = os.path.join(root, f)
with open(fpath, "r") as fh:
doc = yaml.safe_load(fh)
if doc and isinstance(doc, dict):
content = str(doc)
test_patterns = [
"tests:",
"not_null",
"unique",
"accepted_values",
"relationships",
]
if any(p in content for p in test_patterns):
return
pytest.fail("No column tests in schema.yml")
def test_schema_has_descriptions(self):
"""schema.yml must include model/column descriptions."""
import yaml
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if "schema" in f.lower() and f.endswith((".yml", ".yaml")):
fpath = os.path.join(root, f)
with open(fpath, "r") as fh:
content = fh.read()
if "description:" in content:
return
pytest.fail("No descriptions in schema.yml")
def test_cte_pattern(self):
"""SQL models should use CTE (WITH ... AS) pattern."""
found = False
for fpath in self._find_sql_models():
with open(fpath, "r", errors="ignore") as f:
content = f.read().upper()
if "WITH " in content and " AS " in content:
found = True
break
assert found, "No CTE pattern (WITH ... AS) in SQL models"
def test_incremental_or_materialized(self):
"""At least one model should use config with materialization."""
found = False
for fpath in self._find_sql_models():
with open(fpath, "r", errors="ignore") as f:
content = f.read()
if "materialized" in content or "config(" in content:
found = True
break
assert found, "No materialization config in models"
| https://github.com/dbt-labs/dbt-core | zhangyiiiiii/swe-skills-bench-python | |
langsmith-fetch | LangSmith Fetch | See task file for detailed mission requirements. | feature | # Task: Create LangSmith Data Fetch Utility for LangChain
## Background
Create a utility script within the LangChain repository that demonstrates how to fetch run data from the LangSmith API and export results to JSON/CSV format.
## Files to Create/Modify
- `examples/langsmith_fetch.py` - Main fetch utility script
## Requirements
### Fetch Utility (langsmith_fetch.py)
- LangSmith client initialization with API key
- Query runs by project name
- Filter by date range (start_date, end_date)
- Pagination handling for large result sets
### Export Features
- JSON export with full run data
- CSV export with selected fields
- Configurable output path via CLI argument
### Fields to Export
- `run_id`
- `inputs`
- `outputs`
- `feedback_scores`
- `start_time`, `end_time`
### CLI Interface
- `--project`: LangSmith project name
- `--output`: Output file path
- `--format`: `json` or `csv`
- `--help`: Show usage information
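The CLI surface above can be sketched with `argparse` (one of several reasonable choices); the flag names mirror this spec, and the defaults shown are assumptions:

```python
# Hedged sketch of the CLI described above; only the argument surface is
# shown, not the LangSmith fetching or export logic.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(
        description="Fetch LangSmith runs and export them to JSON or CSV."
    )
    parser.add_argument("--project", required=True, help="LangSmith project name")
    parser.add_argument("--output", default="runs.json", help="Output file path")
    parser.add_argument("--format", choices=["json", "csv"], default="json",
                        help="Export format")
    return parser

# argparse provides --help automatically from the declarations above.
args = build_parser().parse_args(["--project", "demo", "--format", "csv"])
print(args.project, args.format)  # demo csv
```

Marking `--project` as required makes a bare invocation exit with a usage message, which satisfies the help-text acceptance criterion without extra code.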
## Acceptance Criteria
- `python examples/langsmith_fetch.py --help` shows usage
- Script handles API authentication via environment variable
- Output format matches specification (JSON or CSV with correct fields)
| ---
name: langsmith-fetch
description: Debug LangChain and LangGraph agents by fetching execution traces from LangSmith Studio. Use when debugging agent behavior, investigating errors, analyzing tool calls, checking memory operations, or examining agent performance. Automatically fetches recent traces and analyzes execution patterns. Requires langsmith-fetch CLI installed.
---
# LangSmith Fetch - Agent Debugging Skill
Debug LangChain and LangGraph agents by fetching execution traces directly from LangSmith Studio in your terminal.
## When to Use This Skill
Automatically activate when user mentions:
- 🐛 "Debug my agent" or "What went wrong?"
- 🔍 "Show me recent traces" or "What happened?"
- ❌ "Check for errors" or "Why did it fail?"
- 💾 "Analyze memory operations" or "Check LTM"
- 📊 "Review agent performance" or "Check token usage"
- 🔧 "What tools were called?" or "Show execution flow"
## Prerequisites
### 1. Install langsmith-fetch
```bash
pip install langsmith-fetch
```
### 2. Set Environment Variables
```bash
export LANGSMITH_API_KEY="your_langsmith_api_key"
export LANGSMITH_PROJECT="your_project_name"
```
**Verify setup:**
```bash
echo $LANGSMITH_API_KEY
echo $LANGSMITH_PROJECT
```
## Core Workflows
### Workflow 1: Quick Debug Recent Activity
**When user asks:** "What just happened?" or "Debug my agent"
**Execute:**
```bash
langsmith-fetch traces --last-n-minutes 5 --limit 5 --format pretty
```
**Analyze and report:**
1. ✅ Number of traces found
2. ⚠️ Any errors or failures
3. 🛠️ Tools that were called
4. ⏱️ Execution times
5. 💰 Token usage
**Example response format:**
```
Found 3 traces in the last 5 minutes:
Trace 1: ✅ Success
- Agent: memento
- Tools: recall_memories, create_entities
- Duration: 2.3s
- Tokens: 1,245
Trace 2: ❌ Error
- Agent: cypher
- Error: "Neo4j connection timeout"
- Duration: 15.1s
- Failed at: search_nodes tool
Trace 3: ✅ Success
- Agent: memento
- Tools: store_memory
- Duration: 1.8s
- Tokens: 892
💡 Issue found: Trace 2 failed due to Neo4j timeout. Recommend checking database connection.
```
---
### Workflow 2: Deep Dive Specific Trace
**When user provides:** Trace ID or says "investigate that error"
**Execute:**
```bash
langsmith-fetch trace <trace-id> --format json
```
**Analyze JSON and report:**
1. 🎯 What the agent was trying to do
2. 🛠️ Which tools were called (in order)
3. ✅ Tool results (success/failure)
4. ❌ Error messages (if any)
5. 💡 Root cause analysis
6. 🔧 Suggested fix
**Example response format:**
```
Deep Dive Analysis - Trace abc123
Goal: User asked "Find all projects in Neo4j"
Execution Flow:
1. ✅ search_nodes(query: "projects")
→ Found 24 nodes
2. ❌ get_node_details(node_id: "proj_123")
→ Error: "Node not found"
→ This is the failure point
3. ⏹️ Execution stopped
Root Cause:
The search_nodes tool returned node IDs that no longer exist in the database,
possibly due to recent deletions.
Suggested Fix:
1. Add error handling in get_node_details tool
2. Filter deleted nodes in search results
3. Update cache invalidation strategy
Token Usage: 1,842 tokens ($0.0276)
Execution Time: 8.7 seconds
```
---
### Workflow 3: Export Debug Session
**When user says:** "Save this session" or "Export traces"
**Execute:**
```bash
# Create session folder with timestamp
SESSION_DIR="langsmith-debug/session-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$SESSION_DIR"
# Export traces
langsmith-fetch traces "$SESSION_DIR/traces" --last-n-minutes 30 --limit 50 --include-metadata
# Export threads (conversations)
langsmith-fetch threads "$SESSION_DIR/threads" --limit 20
```
**Report:**
```
✅ Session exported successfully!
Location: langsmith-debug/session-20251224-143022/
- Traces: 42 files
- Threads: 8 files
You can now:
1. Review individual trace files
2. Share folder with team
3. Analyze with external tools
4. Archive for future reference
Session size: 2.3 MB
```
---
### Workflow 4: Error Detection
**When user asks:** "Show me errors" or "What's failing?"
**Execute:**
```bash
# Fetch recent traces
langsmith-fetch traces --last-n-minutes 30 --limit 50 --format json > recent-traces.json
# Search for errors
grep -i "error\|failed\|exception" recent-traces.json
```
**Analyze and report:**
1. 📊 Total errors found
2. ❌ Error types and frequency
3. 🕐 When errors occurred
4. 🎯 Which agents/tools failed
5. 💡 Common patterns
**Example response format:**
```
Error Analysis - Last 30 Minutes
Total Traces: 50
Failed Traces: 7 (14% failure rate)
Error Breakdown:
1. Neo4j Connection Timeout (4 occurrences)
- Agent: cypher
- Tool: search_nodes
- First occurred: 14:32
- Last occurred: 14:45
- Pattern: Happens during peak load
2. Memory Store Failed (2 occurrences)
- Agent: memento
- Tool: store_memory
- Error: "Pinecone rate limit exceeded"
- Occurred: 14:38, 14:41
3. Tool Not Found (1 occurrence)
- Agent: sqlcrm
- Attempted tool: "export_report" (doesn't exist)
- Occurred: 14:35
💡 Recommendations:
1. Add retry logic for Neo4j timeouts
2. Implement rate limiting for Pinecone
3. Fix sqlcrm tool configuration
```
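A `grep` pass is enough for a quick look, but the exported JSON can also be grouped programmatically. A minimal sketch, assuming each exported trace record carries an `error` field holding a message (an assumption about the export shape, not the documented LangSmith schema):

```python
# Hypothetical error summarizer over exported trace records; the record
# shape ({"id": ..., "error": ...}) is assumed for illustration.
from collections import Counter

def summarize_errors(traces):
    """Count failures and group them by error message."""
    errors = [t["error"] for t in traces if t.get("error")]
    return {
        "total": len(traces),
        "failed": len(errors),
        "by_message": Counter(errors),
    }

traces = [
    {"id": "a1", "error": None},
    {"id": "a2", "error": "Neo4j connection timeout"},
    {"id": "a3", "error": "Neo4j connection timeout"},
    {"id": "a4"},
]

report = summarize_errors(traces)
print(report["failed"], report["by_message"].most_common(1))
# 2 [('Neo4j connection timeout', 2)]
```

The `most_common` ranking maps directly onto the "Error Breakdown" section of the report format shown above.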
---
## Common Use Cases
### Use Case 1: "Agent Not Responding"
**User says:** "My agent isn't doing anything"
**Steps:**
1. Check if traces exist:
```bash
langsmith-fetch traces --last-n-minutes 5 --limit 5
```
2. **If NO traces found:**
- Tracing might be disabled
- Check: `LANGCHAIN_TRACING_V2=true` in environment
- Check: `LANGCHAIN_API_KEY` is set
- Verify agent actually ran
3. **If traces found:**
- Review for errors
- Check execution time (hanging?)
- Verify tool calls completed
---
### Use Case 2: "Wrong Tool Called"
**User says:** "Why did it use the wrong tool?"
**Steps:**
1. Get the specific trace
2. Review available tools at execution time
3. Check agent's reasoning for tool selection
4. Examine tool descriptions/instructions
5. Suggest prompt or tool config improvements
---
### Use Case 3: "Memory Not Working"
**User says:** "Agent doesn't remember things"
**Steps:**
1. Search for memory operations:
```bash
langsmith-fetch traces --last-n-minutes 10 --limit 20 --format raw | grep -i "memory\|recall\|store"
```
2. Check:
- Were memory tools called?
- Did recall return results?
- Were memories actually stored?
- Are retrieved memories being used?
---
### Use Case 4: "Performance Issues"
**User says:** "Agent is too slow"
**Steps:**
1. Export with metadata:
```bash
langsmith-fetch traces ./perf-analysis --last-n-minutes 30 --limit 50 --include-metadata
```
2. Analyze:
- Execution time per trace
- Tool call latencies
- Token usage (context size)
- Number of iterations
- Slowest operations
3. Identify bottlenecks and suggest optimizations
---
## Output Format Guide
### Pretty Format (Default)
```bash
langsmith-fetch traces --limit 5 --format pretty
```
**Use for:** Quick visual inspection, showing to users
### JSON Format
```bash
langsmith-fetch traces --limit 5 --format json
```
**Use for:** Detailed analysis, syntax-highlighted review
### Raw Format
```bash
langsmith-fetch traces --limit 5 --format raw
```
**Use for:** Piping to other commands, automation
---
## Advanced Features
### Time-Based Filtering
```bash
# After specific timestamp
langsmith-fetch traces --after "2025-12-24T13:00:00Z" --limit 20
# Last N minutes (most common)
langsmith-fetch traces --last-n-minutes 60 --limit 100
```
### Include Metadata
```bash
# Get extra context
langsmith-fetch traces --limit 10 --include-metadata
# Metadata includes: agent type, model, tags, environment
```
### Concurrent Fetching (Faster)
```bash
# Speed up large exports
langsmith-fetch traces ./output --limit 100 --concurrent 10
```
---
## Troubleshooting
### "No traces found matching criteria"
**Possible causes:**
1. No agent activity in the timeframe
2. Tracing is disabled
3. Wrong project name
4. API key issues
**Solutions:**
```bash
# 1. Try longer timeframe
langsmith-fetch traces --last-n-minutes 1440 --limit 50
# 2. Check environment
echo $LANGSMITH_API_KEY
echo $LANGSMITH_PROJECT
# 3. Try fetching threads instead
langsmith-fetch threads --limit 10
# 4. Verify tracing is enabled in your code
# Check for: LANGCHAIN_TRACING_V2=true
```
### "Project not found"
**Solution:**
```bash
# View current config
langsmith-fetch config show
# Set correct project
export LANGSMITH_PROJECT="correct-project-name"
# Or configure permanently
langsmith-fetch config set project "your-project-name"
```
### Environment variables not persisting
**Solution:**
```bash
# Add to shell config file (~/.bashrc or ~/.zshrc)
echo 'export LANGSMITH_API_KEY="your_key"' >> ~/.bashrc
echo 'export LANGSMITH_PROJECT="your_project"' >> ~/.bashrc
# Reload shell config
source ~/.bashrc
```
---
## Best Practices
### 1. Regular Health Checks
```bash
# Quick check after making changes
langsmith-fetch traces --last-n-minutes 5 --limit 5
```
### 2. Organized Storage
```
langsmith-debug/
├── sessions/
│ ├── 2025-12-24/
│ └── 2025-12-25/
├── error-cases/
└── performance-tests/
```
### 3. Document Findings
When you find bugs:
1. Export the problematic trace
2. Save to `error-cases/` folder
3. Note what went wrong in a README
4. Share trace ID with team
### 4. Integration with Development
```bash
# Before committing code
langsmith-fetch traces --last-n-minutes 10 --limit 5
# If errors found
langsmith-fetch trace <error-id> --format json > pre-commit-error.json
```
---
## Quick Reference
```bash
# Most common commands
# Quick debug
langsmith-fetch traces --last-n-minutes 5 --limit 5 --format pretty
# Specific trace
langsmith-fetch trace <trace-id> --format pretty
# Export session
langsmith-fetch traces ./debug-session --last-n-minutes 30 --limit 50
# Find errors
langsmith-fetch traces --last-n-minutes 30 --limit 50 --format raw | grep -i error
# With metadata
langsmith-fetch traces --limit 10 --include-metadata
```
---
## Resources
- **LangSmith Fetch CLI:** https://github.com/langchain-ai/langsmith-fetch
- **LangSmith Studio:** https://smith.langchain.com/
- **LangChain Docs:** https://docs.langchain.com/
- **This Skill Repo:** https://github.com/OthmanAdi/langsmith-fetch-skill
---
## Notes for Claude
- Always check if `langsmith-fetch` is installed before running commands
- Verify environment variables are set
- Use `--format pretty` for human-readable output
- Use `--format json` when you need to parse and analyze data
- When exporting sessions, create organized folder structures
- Always provide clear analysis and actionable insights
- If commands fail, help troubleshoot configuration issues
---
**Version:** 0.1.0
**Author:** Ahmad Othman Ammar Adi
**License:** MIT
**Repository:** https://github.com/OthmanAdi/langsmith-fetch-skill
| """
Test for 'langsmith-fetch' skill — LangSmith Data Fetch Utility
Validates that the Agent created a LangSmith fetch utility with CLI interface,
JSON/CSV export, and proper field handling.
"""
import os
import ast
import subprocess
import pytest
class TestLangsmithFetch:
"""Verify LangSmith fetch utility in LangChain."""
REPO_DIR = "/workspace/langchain"
# ------------------------------------------------------------------
# L1: file existence & syntax
# ------------------------------------------------------------------
def test_fetch_script_exists(self):
"""examples/langsmith_fetch.py must exist."""
fpath = os.path.join(self.REPO_DIR, "examples", "langsmith_fetch.py")
assert os.path.isfile(fpath), "langsmith_fetch.py not found"
def test_fetch_script_compiles(self):
"""langsmith_fetch.py must compile."""
result = subprocess.run(
["python", "-m", "py_compile", "examples/langsmith_fetch.py"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"Syntax error:\n{result.stderr}"
# ------------------------------------------------------------------
# L2: CLI & content verification
# ------------------------------------------------------------------
def _read_source(self):
fpath = os.path.join(self.REPO_DIR, "examples", "langsmith_fetch.py")
with open(fpath, "r", encoding="utf-8") as f:
return f.read()
def test_help_flag_works(self):
"""--help must display usage information."""
result = subprocess.run(
["python", "examples/langsmith_fetch.py", "--help"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert result.returncode == 0, f"--help failed:\n{result.stderr}"
assert len(result.stdout) > 20, "--help output is too short"
def test_project_argument(self):
"""CLI must support --project argument."""
result = subprocess.run(
["python", "examples/langsmith_fetch.py", "--help"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert "--project" in result.stdout, "--project not in help output"
def test_output_argument(self):
"""CLI must support --output argument."""
result = subprocess.run(
["python", "examples/langsmith_fetch.py", "--help"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert "--output" in result.stdout, "--output not in help output"
def test_format_argument(self):
"""CLI must support --format argument (json or csv)."""
result = subprocess.run(
["python", "examples/langsmith_fetch.py", "--help"],
cwd=self.REPO_DIR,
capture_output=True,
text=True,
timeout=30,
)
assert "--format" in result.stdout, "--format not in help output"
def test_source_uses_argparse(self):
"""Script should use argparse or click for CLI parsing."""
source = self._read_source()
assert (
"argparse" in source or "click" in source or "typer" in source
), "No CLI framework (argparse/click/typer) found"
def test_required_export_fields(self):
"""Script must reference required export fields."""
source = self._read_source()
fields = [
"run_id",
"inputs",
"outputs",
"feedback_scores",
"start_time",
"end_time",
]
found = sum(1 for f in fields if f in source)
assert found >= 4, f"Only {found}/6 required fields found in source"
def test_json_export_support(self):
"""Script must support JSON export."""
source = self._read_source()
assert "json" in source.lower(), "No JSON export support found"
def test_csv_export_support(self):
"""Script must support CSV export."""
source = self._read_source()
assert "csv" in source.lower(), "No CSV export support found"
def test_api_key_handling(self):
"""Script must handle API authentication via environment variable."""
source = self._read_source()
auth_patterns = [
"API_KEY",
"api_key",
"LANGSMITH",
"LANGCHAIN",
"environ",
"getenv",
]
found = sum(1 for p in auth_patterns if p in source)
assert found >= 2, "Insufficient API key handling"
def test_pagination_support(self):
"""Script should handle pagination for large result sets."""
source = self._read_source()
page_patterns = ["page", "offset", "cursor", "limit", "next", "pagina", "batch"]
found = any(p in source.lower() for p in page_patterns)
assert found, "No pagination handling found"
| https://github.com/langchain-ai/langchain | zhangyiiiiii/swe-skills-bench-python | |
v3-performance-optimization | V3 Performance Optimization | See task file for detailed mission requirements. | feature | # Task: Add Flash Attention Performance Benchmark and Optimization Examples
## Background
Create performance benchmarks and optimization examples for Flash Attention
demonstrating speedup comparisons with standard attention mechanisms.
## Files to Create/Modify
- benchmarks/benchmark_attention.py (new)
- benchmarks/configs/benchmark_config.yaml (new)
- examples/optimization_demo.py (new)
## Requirements
Benchmark Script (benchmark_attention.py):
- Compare Flash Attention vs standard PyTorch attention
- Test multiple sequence lengths: 512, 1024, 2048, 4096
- Test multiple head dimensions: 64, 128
- Measure forward and backward pass separately
- Output speedup ratios (target: 2.49x-7.47x)
- Memory usage comparison
Configuration (benchmark_config.yaml):
- batch_sizes: [1, 4, 8, 16]
- seq_lengths: [512, 1024, 2048, 4096]
- head_dim: [64, 128]
- num_heads: [8, 12, 16]
- dtype: [float16, bfloat16]
Optimization Demo (optimization_demo.py):
- Demonstrate causal masking optimization
- Show memory-efficient backward pass
- Include dropout optimization example
- Performance timing decorators
Output Requirements:
- JSON results file with speedup metrics
- Memory reduction percentages
- Latency comparisons (ms)
## Acceptance Criteria
- `python benchmarks/benchmark_attention.py` exits with code 0
- Benchmark results show Flash Attention speedup > 2x
- Memory reduction > 40% for long sequences
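The acceptance criteria above can be smoke-tested with a stdlib-only harness before any GPU code exists. In the sketch below, `standard_attention` and `fast_attention` are toy stand-ins (an O(n²) sum and its O(n) algebraic equivalent) for the real PyTorch-attention and Flash Attention calls — an assumption, not the repo's API — and only the timing and JSON plumbing is meant to carry over.

```python
import json
import time

def time_op(fn, *args, repeats=3):
    """Best-of-N wall-clock time for fn(*args), in milliseconds."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, (time.perf_counter() - t0) * 1e3)
    return best

# Toy stand-ins: same result, different complexity. The real script
# would call standard PyTorch attention and flash_attn here.
def standard_attention(n):
    return sum(i * j for i in range(n) for j in range(n))  # O(n^2)

def fast_attention(n):
    s = sum(range(n))
    return s * s  # (sum i) * (sum j) == sum of i*j, computed in O(n)

def run_benchmark(seq_lengths=(512, 1024)):
    # Pass the task's full 512..4096 grid for real runs; the short
    # default keeps this illustration fast.
    results = []
    for n in seq_lengths:
        baseline_ms = time_op(standard_attention, n)
        flash_ms = time_op(fast_attention, n)
        results.append({
            "seq_len": n,
            "baseline_ms": baseline_ms,
            "flash_ms": flash_ms,
            "speedup": baseline_ms / flash_ms,
        })
    return results

if __name__ == "__main__":
    # Satisfies the JSON-results output requirement.
    print(json.dumps(run_benchmark(), indent=2))
```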
| ---
name: "V3 Performance Optimization"
description: "Achieve aggressive v3 performance targets: 2.49x-7.47x Flash Attention speedup, 150x-12,500x search improvements, 50-75% memory reduction. Comprehensive benchmarking and optimization suite."
---
# V3 Performance Optimization
## What This Skill Does
Validates and optimizes claude-flow v3 to achieve industry-leading performance through Flash Attention, AgentDB HNSW indexing, and comprehensive system optimization with continuous benchmarking.
## Quick Start
```bash
# Initialize performance optimization
Task("Performance baseline", "Establish v2 performance benchmarks", "v3-performance-engineer")
# Target validation (parallel)
Task("Flash Attention", "Validate 2.49x-7.47x speedup target", "v3-performance-engineer")
Task("Search optimization", "Validate 150x-12,500x search improvement", "v3-performance-engineer")
Task("Memory optimization", "Achieve 50-75% memory reduction", "v3-performance-engineer")
```
## Performance Target Matrix
### Flash Attention Revolution
```
┌─────────────────────────────────────────┐
│ FLASH ATTENTION │
├─────────────────────────────────────────┤
│ Baseline: Standard attention │
│ Target: 2.49x - 7.47x speedup │
│ Memory: 50-75% reduction │
│ Latency: Sub-millisecond processing │
└─────────────────────────────────────────┘
```
### Search Performance Revolution
```
┌─────────────────────────────────────────┐
│ SEARCH OPTIMIZATION │
├─────────────────────────────────────────┤
│ Current: O(n) linear search │
│ Target: 150x - 12,500x improvement │
│ Method: HNSW indexing │
│ Latency: <100ms for 1M+ entries │
└─────────────────────────────────────────┘
```
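The box above rests on the usual scan-versus-index asymmetry. A hedged illustration with sorted integer keys and `bisect` — only a stand-in, since real HNSW does approximate nearest-neighbor search over vectors (e.g. via the `hnswlib` package, which is an assumption, not part of this document's stack):

```python
import bisect
import time

ENTRIES = list(range(0, 2_000_000, 2))  # 1,000,000 sorted even keys

def linear_lookup(key):
    for e in ENTRIES:  # O(n) scan: the "Current" row in the box above
        if e == key:
            return True
    return False

def indexed_lookup(key):
    i = bisect.bisect_left(ENTRIES, key)  # O(log n) via the index
    return i < len(ENTRIES) and ENTRIES[i] == key

def measure_improvement(key=ENTRIES[-1]):
    # Worst case for the scan; average the indexed path over many
    # calls so timer resolution does not dominate.
    t0 = time.perf_counter()
    linear_lookup(key)
    linear_s = time.perf_counter() - t0
    t0 = time.perf_counter()
    for _ in range(1000):
        indexed_lookup(key)
    indexed_s = (time.perf_counter() - t0) / 1000
    return linear_s / indexed_s
```

Even this crude exact-match version lands orders of magnitude of improvement on a million entries, which is why the 150x-12,500x range is plausible for graph-indexed vector search.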
## Comprehensive Benchmark Suite
### Startup Performance
```typescript
class StartupBenchmarks {
async benchmarkColdStart(): Promise<BenchmarkResult> {
const startTime = performance.now();
await this.initializeCLI();
await this.initializeMCPServer();
await this.spawnTestAgent();
const totalTime = performance.now() - startTime;
return {
total: totalTime,
target: 500, // ms
achieved: totalTime < 500
};
}
}
```
### Memory Operation Benchmarks
```typescript
class MemoryBenchmarks {
async benchmarkVectorSearch(): Promise<SearchBenchmark> {
const queries = this.generateTestQueries(10000);
// Baseline: Current linear search
const baselineTime = await this.timeOperation(() =>
this.currentMemory.searchAll(queries)
);
// Target: HNSW search
const hnswTime = await this.timeOperation(() =>
this.agentDBMemory.hnswSearchAll(queries)
);
const improvement = baselineTime / hnswTime;
return {
baseline: baselineTime,
hnsw: hnswTime,
improvement,
targetRange: [150, 12500],
achieved: improvement >= 150
};
}
async benchmarkMemoryUsage(): Promise<MemoryBenchmark> {
const baseline = process.memoryUsage().heapUsed;
await this.loadTestDataset();
const withData = process.memoryUsage().heapUsed;
await this.enableOptimization();
const optimized = process.memoryUsage().heapUsed;
const reduction = (withData - optimized) / withData;
return {
baseline,
withData,
optimized,
reductionPercent: reduction * 100,
targetReduction: [50, 75],
achieved: reduction >= 0.5
};
}
}
```
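The `reductionPercent` formula above — (withData − optimized) / withData — can be seen at toy scale by repacking boxed Python floats into a contiguous `array`. This only illustrates the formula; it is not the optimizer the skill describes.

```python
import array
import sys

# Two representations of the same 10,000 values.
raw = [float(i) for i in range(10_000)]  # list of boxed float objects
packed = array.array("d", raw)           # contiguous C doubles

# getsizeof(list) counts only the pointer array, so add the boxes.
raw_bytes = sys.getsizeof(raw) + sum(sys.getsizeof(x) for x in raw)
packed_bytes = sys.getsizeof(packed)
reduction = (raw_bytes - packed_bytes) / raw_bytes  # same formula as above

print(f"memory reduction: {reduction * 100:.1f}%")
```

On CPython the boxed representation costs roughly 24 bytes per float plus a pointer, so the packed form clears the 50% reduction floor comfortably while holding identical data.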
### Swarm Coordination Benchmarks
```typescript
class SwarmBenchmarks {
async benchmark15AgentCoordination(): Promise<SwarmBenchmark> {
const agents = await this.spawn15Agents();
// Coordination latency
const coordinationTime = await this.timeOperation(() =>
this.coordinateSwarmTask(agents)
);
// Task decomposition
const decompositionTime = await this.timeOperation(() =>
this.decomposeComplexTask()
);
// Consensus achievement
const consensusTime = await this.timeOperation(() =>
this.achieveSwarmConsensus(agents)
);
return {
coordination: coordinationTime,
decomposition: decompositionTime,
consensus: consensusTime,
agentCount: 15,
efficiency: this.calculateEfficiency(agents)
};
}
}
```
### Flash Attention Benchmarks
```typescript
class AttentionBenchmarks {
async benchmarkFlashAttention(): Promise<AttentionBenchmark> {
const sequences = this.generateSequences([512, 1024, 2048, 4096]);
const results = [];
for (const sequence of sequences) {
// Baseline attention
const baselineResult = await this.benchmarkStandardAttention(sequence);
// Flash attention
const flashResult = await this.runFlashAttention(sequence); // single-run helper; calling benchmarkFlashAttention here would recurse forever
const speedup = baselineResult.time / flashResult.time;
results.push({
sequenceLength: sequence.length,
speedup,
memoryReduction: (baselineResult.memory - flashResult.memory) / baselineResult.memory,
targetSpeedup: [2.49, 7.47],
achieved: this.checkTarget(speedup, [2.49, 7.47])
});
}
return {
results,
averageSpeedup: this.calculateAverage(results, 'speedup'),
averageMemoryReduction: this.calculateAverage(results, 'memoryReduction')
};
}
}
```
### SONA Learning Benchmarks
```typescript
class SONABenchmarks {
async benchmarkAdaptationTime(): Promise<SONABenchmark> {
const scenarios = [
'pattern_recognition',
'task_optimization',
'error_correction',
'performance_tuning'
];
const results = [];
for (const scenario of scenarios) {
const startTime = performance.hrtime.bigint();
await this.sona.adapt(scenario);
const endTime = performance.hrtime.bigint();
const adaptationTimeMs = Number(endTime - startTime) / 1000000;
results.push({
scenario,
adaptationTime: adaptationTimeMs,
target: 0.05, // ms
achieved: adaptationTimeMs <= 0.05
});
}
return {
scenarios: results,
averageTime: results.reduce((sum, r) => sum + r.adaptationTime, 0) / results.length,
successRate: results.filter(r => r.achieved).length / results.length
};
}
}
```
## Performance Monitoring Dashboard
### Real-time Metrics
```typescript
class PerformanceMonitor {
async collectMetrics(): Promise<PerformanceSnapshot> {
return {
timestamp: Date.now(),
flashAttention: await this.measureFlashAttention(),
searchPerformance: await this.measureSearchSpeed(),
memoryUsage: await this.measureMemoryEfficiency(),
startupTime: await this.measureStartupLatency(),
sonaAdaptation: await this.measureSONASpeed(),
swarmCoordination: await this.measureSwarmEfficiency()
};
}
async generateReport(): Promise<PerformanceReport> {
const snapshot = await this.collectMetrics();
return {
summary: this.generateSummary(snapshot),
achievements: this.checkTargetAchievements(snapshot),
trends: this.analyzeTrends(),
recommendations: this.generateOptimizations(),
regressions: await this.detectRegressions()
};
}
}
```
### Continuous Regression Detection
```typescript
class PerformanceRegression {
async detectRegressions(): Promise<RegressionReport> {
const current = await this.runFullBenchmark();
const baseline = await this.getBaseline();
const regressions = [];
for (const [metric, currentValue] of Object.entries(current)) {
const baselineValue = baseline[metric];
const change = (currentValue - baselineValue) / baselineValue;
if (change < -0.05) { // 5% regression threshold
regressions.push({
metric,
baseline: baselineValue,
current: currentValue,
regressionPercent: change * 100,
severity: this.classifyRegression(change)
});
}
}
return {
hasRegressions: regressions.length > 0,
regressions,
recommendations: this.generateRegressionFixes(regressions)
};
}
}
```
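The same regression rule ports directly to Python. The sketch below assumes scalar metrics where higher is better, matching the 5% relative-change threshold used above:

```python
def detect_regressions(baseline, current, threshold=-0.05):
    """Flag metrics whose relative change falls below the threshold.

    baseline/current are {metric_name: value} dicts; higher is better.
    """
    regressions = []
    for metric, cur in current.items():
        base = baseline.get(metric)
        if base is None or base == 0:
            continue  # new or degenerate metric: nothing to compare
        change = (cur - base) / base
        if change < threshold:  # e.g. -0.05 == 5% regression
            regressions.append({
                "metric": metric,
                "baseline": base,
                "current": cur,
                "regression_percent": round(change * 100, 2),
            })
    return regressions
```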
## Optimization Strategies
### Memory Optimization
```typescript
class MemoryOptimization {
async optimizeMemoryUsage(): Promise<OptimizationResult> {
// Implement memory pooling
await this.setupMemoryPools();
// Enable garbage collection tuning
await this.optimizeGarbageCollection();
// Implement object reuse patterns
await this.setupObjectPools();
// Enable memory compression
await this.enableMemoryCompression();
return this.validateMemoryReduction();
}
}
```
### CPU Optimization
```typescript
class CPUOptimization {
async optimizeCPUUsage(): Promise<OptimizationResult> {
// Implement worker thread pools
await this.setupWorkerThreads();
// Enable CPU-specific optimizations
await this.enableSIMDInstructions();
// Implement task batching
await this.optimizeTaskBatching();
return this.validateCPUImprovement();
}
}
```
## Target Validation Framework
### Performance Gates
```typescript
class PerformanceGates {
async validateAllTargets(): Promise<ValidationReport> {
const results = await Promise.all([
this.validateFlashAttention(), // 2.49x-7.47x
this.validateSearchPerformance(), // 150x-12,500x
this.validateMemoryReduction(), // 50-75%
this.validateStartupTime(), // <500ms
this.validateSONAAdaptation() // <0.05ms
]);
return {
allTargetsAchieved: results.every(r => r.achieved),
results,
overallScore: this.calculateOverallScore(results),
recommendations: this.generateRecommendations(results)
};
}
}
```
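A minimal Python analogue of the gate roll-up, assuming each per-target result exposes an `achieved` flag as in the TypeScript sketch above:

```python
def validate_gates(results):
    """Roll per-target results up into the overall gate decision."""
    achieved = [bool(r["achieved"]) for r in results]
    return {
        "all_targets_achieved": all(achieved),
        # Fraction of targets met; a stand-in for calculateOverallScore.
        "overall_score": sum(achieved) / len(achieved),
    }
```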
## Success Metrics
### Primary Targets
- [ ] **Flash Attention**: 2.49x-7.47x speedup validated
- [ ] **Search Performance**: 150x-12,500x improvement confirmed
- [ ] **Memory Reduction**: 50-75% usage optimization achieved
- [ ] **Startup Time**: <500ms cold start consistently
- [ ] **SONA Adaptation**: <0.05ms learning response time
- [ ] **15-Agent Coordination**: Efficient parallel execution
### Continuous Monitoring
- [ ] **Performance Dashboard**: Real-time metrics collection
- [ ] **Regression Testing**: Automated performance validation
- [ ] **Trend Analysis**: Performance evolution tracking
- [ ] **Alert System**: Immediate regression notification
## Related V3 Skills
- `v3-integration-deep` - Performance integration with agentic-flow
- `v3-memory-unification` - Memory performance optimization
- `v3-swarm-coordination` - Swarm performance coordination
- `v3-security-overhaul` - Secure performance patterns
## Usage Examples
### Complete Performance Validation
```bash
# Full performance suite
npm run benchmark:v3
# Specific target validation
npm run benchmark:flash-attention
npm run benchmark:agentdb-search
npm run benchmark:memory-optimization
# Continuous monitoring
npm run monitor:performance
``` | """
Test for 'v3-performance-optimization' skill — Flash Attention Performance
Validates that the Agent implemented or optimized performance benchmarks
and kernel configurations in the flash-attention project.
"""
import os
import subprocess
import pytest
class TestV3PerformanceOptimization:
"""Verify flash-attention performance optimization."""
REPO_DIR = "/workspace/flash-attention"
# ------------------------------------------------------------------
# L1: file existence
# ------------------------------------------------------------------
def test_benchmark_file_exists(self):
"""A benchmark or performance test file must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if ("bench" in f.lower() or "perf" in f.lower()) and f.endswith(
(".py", ".cu", ".cuh")
):
found.append(os.path.join(root, f))
assert len(found) >= 1, "No benchmark/perf file found"
def test_config_or_readme_exists(self):
"""Documentation for optimization must exist."""
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.lower() in ("readme.md", "benchmark.md") or (
"optim" in f.lower() and f.endswith(".md")
):
found.append(os.path.join(root, f))
assert len(found) >= 1, "No optimization documentation found"
# ------------------------------------------------------------------
# L2: content validation
# ------------------------------------------------------------------
def _find_perf_files(self):
found = []
for root, dirs, files in os.walk(self.REPO_DIR):
for f in files:
if f.endswith((".py", ".cu", ".cuh")) and "node_modules" not in root:
fpath = os.path.join(root, f)
try:
with open(fpath, "r", errors="ignore") as fh:
content = fh.read()
if any(
p in content.lower()
for p in [
"benchmark",
"flash_attn",
"attention",
"performance",
"kernel",
]
):
found.append(fpath)
except OSError:
pass
return found
def _read_all_perf(self):
content = ""
for fpath in self._find_perf_files():
with open(fpath, "r", errors="ignore") as f:
content += f.read() + "\n"
return content
def test_flash_attention_import(self):
"""Must reference flash_attn or attention implementation."""
content = self._read_all_perf()
patterns = [
"flash_attn",
"flash_attention",
"FlashAttention",
"attention",
"softmax",
]
found = any(p in content for p in patterns)
assert found, "No flash attention reference found"
def test_timing_measurement(self):
"""Must measure execution time."""
content = self._read_all_perf()
timing_patterns = [
"time.",
"benchmark",
"timer",
"elapsed",
"torch.cuda.synchronize",
"Event",
"perf_counter",
"ms",
"latency",
]
found = any(p in content.lower() for p in timing_patterns)
assert found, "No timing measurement found"
def test_memory_profiling(self):
"""Must profile memory usage."""
content = self._read_all_perf()
mem_patterns = [
"memory",
"max_memory_allocated",
"memory_reserved",
"cuda.mem",
"mem_efficient",
"peak_memory",
"FLOPS",
"flops",
]
found = any(p in content.lower() for p in mem_patterns)
assert found, "No memory profiling found"
def test_batch_size_variation(self):
"""Must test with varying batch/sequence sizes."""
content = self._read_all_perf()
size_patterns = [
"batch_size",
"seq_len",
"seqlen",
"head_dim",
"num_heads",
"d_model",
"causal",
"for .* in",
]
found = sum(1 for p in size_patterns if p in content.lower())
assert found >= 2, "Insufficient size variation in benchmarks"
def test_comparison_baseline(self):
"""Must compare against baseline implementation."""
content = self._read_all_perf()
baseline_patterns = [
"baseline",
"standard",
"vanilla",
"reference",
"naive",
"comparison",
"speedup",
"vs",
"pytorch",
]
found = any(p in content.lower() for p in baseline_patterns)
assert found, "No baseline comparison found"
def test_cuda_or_torch(self):
"""Must use CUDA or PyTorch."""
content = self._read_all_perf()
cuda_patterns = [
"torch",
"cuda",
"__global__",
"cudaMalloc",
"torch.cuda",
"device",
"GPU",
]
found = any(p in content for p in cuda_patterns)
assert found, "No CUDA/PyTorch usage found"
def test_results_output(self):
"""Must output benchmark results."""
content = self._read_all_perf()
output_patterns = [
"print",
"log",
"save",
"write",
"csv",
"json",
"table",
"format",
]
found = any(p in content.lower() for p in output_patterns)
assert found, "No results output found"
def test_python_scripts_compile(self):
"""Python benchmark files must compile."""
for fpath in self._find_perf_files():
if fpath.endswith(".py"):
result = subprocess.run(
["python", "-m", "py_compile", fpath],
capture_output=True,
text=True,
timeout=30,
)
assert (
result.returncode == 0
), f"{fpath} compile error:\n{result.stderr}"
| https://github.com/Dao-AILab/flash-attention | zhangyiiiiii/swe-skills-bench-python |