# Agent Prompts
Copy-paste-ready prompts for each stage of the workflow.
## Stage 2: Pick Up — Agent Prompt
Copy and paste this when starting work on an issue:
You're picking up issue #<number>: <title>
Steps:
1. Read /.issues/<number>-<title>.md completely
2. Read all depends-on issues:
<list issue numbers here>
3. Read related code:
<list files here>
4. Write your Plan in ## Agent Working Notes > Plan:
- Approach: how will you solve this?
- Files to touch: which files will you change?
- Risks: what could go wrong?
5. Change status to `in-progress`
6. Wait for human acknowledgment before proceeding
Remember: You must understand the dependencies before starting. Don't assume they work a certain way.
## Stage 3: Implementation — Status Log Prompt
After each significant step, log it:
Add to ## Status Log:
YYYY-MM-DD HH:MM - <What you did>
<Why you did it this way>
<Any decisions made>
Example:
2024-04-18 14:30 - Implemented delete button component
Decided to put it in SettingsPage.tsx rather than separate component
since it's only used in one place. Button uses danger variant for emphasis.
## Stage 3: Implementation — Scope Change Prompt
If scope needs to change:
Stop implementation immediately.
Add to ## Status Log:
---
Scope change detected: <description>
Original scope: <what was scoped>
New requirement: <what was found>
Proposed solution:
<your proposed approach>
Status: Waiting for human approval before proceeding.
---
## Stage 3: Implementation — Blocker Prompt
If blocked:
Add to ## Blockers:
Blocker: <description>
Proposed solution: <your proposed approach>
Status: Waiting for resolution.
Do not proceed until this is resolved.
## Stage 4: Self-Verification Prompt
Copy and paste this template, then fill in each item:
## Self-Verification
Complete before claiming done:
- [ ] Smoke test ran:
Command: <command>
Result: <pass/fail>
If fail: <what failed, what you did about it>
- [ ] Acceptance criteria checked:
<List each criterion and result>
- [ ] Criterion 1: <pass/fail> — <notes>
- [ ] Criterion 2: <pass/fail> — <notes>
- [ ] Screenshot taken (if UI change):
<path to screenshot> or N/A
- [ ] Dependent issues still work:
- Issue #<number>: <what you tested> — <pass/fail>
- Issue #<number>: <what you tested> — <pass/fail>
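Example (values are hypothetical, continuing the delete-button scenario from the Status Log example above):

- [x] Smoke test ran:
  Command: pnpm test
  Result: pass
- [x] Acceptance criteria checked:
  - [x] Criterion 1: pass (delete button renders on SettingsPage)
  - [x] Criterion 2: pass (danger variant used for emphasis)
- [x] Screenshot taken (if UI change):
  screenshots/delete-button.png
- [x] Dependent issues still work:
  - Issue #12: re-ran its verification steps (pass)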
## Stage 5: Request Review Prompt
After completing self-verification:
Add to ## Agent Working Notes > Human Review Notes:
Agent requests review.
All self-verification checks passed. Ready for approval to close.
Files changed: <list>
Tests added: <list if any>
Screenshots: <path if applicable>
## Full Issue Implementation Prompt
For the agent to use from start to finish:
## Your Task
Implement issue #<number>: <title>
## Instructions
1. **Read the issue file**: `/.issues/<number>-<title>.md`
2. **Read dependencies**: Issues listed in `depends-on`
3. **Read related code**: Files that will be affected
4. **Write your Plan** in the issue file's `## Agent Working Notes > Plan`
5. **Wait for acknowledgment** ("ok" from human)
6. **Implement** the feature or fix
7. **Log decisions** in `## Status Log` as you go
8. **If scope changes or blocked**: Stop, document, wait
9. **Self-verify** using the template in Stage 4
10. **Request review** when done
## Key Rules
- Think out loud in the issue file
- Verify before claiming done
- Ask if unsure — don't assume
- Dependencies must be verified, not assumed
- Pre-commit hook will check your issue file — keep it complete
## Do NOT
- Skip the issue file format
- Skip self-verification
- Proceed without human acknowledgment on plans
- Ignore scope changes
- Assume dependencies work without checking
## Dependency Verification Prompt
When checking if a dependency works:
## Dependency Check: Issue #<number>
Issue: <title>
Status: <status>
Output: <what this issue produces>
Verification performed:
- Ran tests: <command> — <pass/fail>
- Checked integration: <what you tested>
- Screenshots (if UI): <path or N/A>
Result: <dependency works / dependency has issues>
If dependency has issues:
- Document what doesn't work
- Write proposed fix in ## Blockers
- Do not proceed until resolved
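Example (issue number, commands, and endpoint are hypothetical):

## Dependency Check: Issue #12
Issue: Settings API endpoint
Status: done
Output: GET/PUT /api/settings, consumed by SettingsPage.tsx
Verification performed:
- Ran tests: pnpm test (pass)
- Checked integration: saved and reloaded settings through the UI
- Screenshots (if UI): N/A
Result: dependency works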
## Issue Creation Prompt
For creating a new issue:
## Create New Issue
Create a new issue file in /.issues/
1. Check /.issues/INDEX.md for the next available number
2. Create /.issues/<number>-<slug>.md using the format in /docs/workflow/ISSUE-FORMAT.md
3. Fill in:
- All frontmatter
- ## What — what needs to be built or fixed
- ## Why — why it matters
- ## Acceptance Criteria — testable checklist
- ## Verification — how to test it
- Leave ## Agent Working Notes empty
4. Add to /.issues/INDEX.md
5. Set status to `open`
Issue number: <next number>
Issue slug: <descriptive-kebab-case>
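A skeleton of the resulting file might look like the sketch below. The authoritative format lives in /docs/workflow/ISSUE-FORMAT.md; the frontmatter fields shown are only the ones this guide mentions, and the real format may include more:

```
---
status: open
depends-on: []
completed:
---

## What
## Why
## Acceptance Criteria
## Verification
## Agent Working Notes
```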
## Pre-commit Hook Failure Response
If the pre-commit hook fails:
The pre-commit hook rejected your commit due to issue file errors.
Error: <error message from hook>
Fix the issue file(s) before committing:
- Missing frontmatter fields
- Missing required sections
- Invalid status values
- Missing Plan when status is in-progress
- Missing completed date when status is done
- Agent Working Notes not deleted when status is done
After fixing, try committing again.
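For reference, here is a minimal sketch of the kind of validation the hook performs. This is illustrative only, not the actual hook: the status values and frontmatter field names are assumptions based on the rules listed above.

```typescript
// validate-issue.ts: sketch of issue-file validation (illustrative only)
import { readFileSync } from "node:fs";

const REQUIRED_SECTIONS = ["## What", "## Why", "## Acceptance Criteria", "## Verification"];
const VALID_STATUSES = ["open", "in-progress", "done"]; // assumed set

export function validateIssueFile(path: string): string[] {
  const text = readFileSync(path, "utf8");
  const errors: string[] = [];

  // Frontmatter: status must exist and be a known value
  const status = text.match(/^status:\s*(\S+)/m)?.[1];
  if (!status) errors.push("Missing frontmatter field: status");
  else if (!VALID_STATUSES.includes(status)) errors.push(`Invalid status value: ${status}`);

  // Required sections must be present
  for (const section of REQUIRED_SECTIONS) {
    if (!text.includes(section)) errors.push(`Missing required section: ${section}`);
  }

  // Status-dependent rules from the list above
  if (status === "in-progress" && !text.includes("Plan:")) {
    errors.push("Missing Plan while status is in-progress");
  }
  if (status === "done") {
    if (!/^completed:\s*\d{4}-\d{2}-\d{2}/m.test(text)) {
      errors.push("Missing completed date while status is done");
    }
    if (text.includes("## Agent Working Notes")) {
      errors.push("Agent Working Notes not deleted while status is done");
    }
  }
  return errors;
}
```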
## Integration Testing: Initial Survey Prompt
When starting a new project or adopting this workflow for the first time:
Survey this codebase and fill in integration-tests/README.md:
1. Identify all systems in this project
2. Identify all integration points between systems
3. For each integration point, list what should be tested:
- Happy path
- Data integrity
- Error handling
- Auth (if applicable)
- Timing/async
4. List any known gaps in integration testing
5. List any tests that should exist but don't yet
Read docs/workflow/INTEGRATION-TESTING.md for guidance on what to test.
Output: Update integration-tests/README.md with your findings.
Human will review and approve.
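The Integration Points table itself isn't specified in this guide; one plausible shape, with assumed column names, is:

| Integration point | Systems involved | Tests exist? | Test file |
| --- | --- | --- | --- |
| frontend <-> API | web app, API server | No | (planned) |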
## Integration Testing: Per-Issue Prompt
When working on an issue that touches integration points:
This issue involves integration between systems.
Before implementing:
1. Read integration-tests/README.md
2. Identify which integration points this issue affects
3. Check if tests exist for those integration points
4. If tests don't exist, plan to create them
During implementation:
5. Implement the feature/fix
6. Add or update integration tests (a sketch follows this prompt) to cover:
- Happy path (does the connection work?)
- Error handling (what if something fails?)
- Any edge cases specific to this integration
After implementation:
7. Run integration tests: pnpm run test:integration
8. If tests fail, fix before claiming done
9. Document test changes in the issue file
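As a concrete reference for step 6, a minimal integration test might look like the sketch below. It assumes Vitest as the runner and a hypothetical settings endpoint; substitute your project's actual runner and integration points.

```typescript
// integration-tests/tests/frontend-api.test.ts (hypothetical integration point)
// Requires Node 18+ for the global fetch.
import { describe, it, expect } from "vitest";

const BASE_URL = process.env.API_URL ?? "http://localhost:3000";

describe("frontend <-> API: settings", () => {
  it("happy path: saved settings can be read back", async () => {
    const save = await fetch(`${BASE_URL}/api/settings`, {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ theme: "dark" }),
    });
    expect(save.ok).toBe(true);

    const read = await fetch(`${BASE_URL}/api/settings`);
    expect((await read.json()).theme).toBe("dark");
  });

  it("error handling: malformed payload is rejected", async () => {
    const res = await fetch(`${BASE_URL}/api/settings`, {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: "not json",
    });
    expect(res.status).toBe(400);
  });
});
```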
## Integration Testing: Human Bug → Regression Test Prompt
When human finds a bug during manual testing:
Human found an integration bug: <description>
1. Add to integration-tests/README.md > Human Findings table:
| <date> | <bug description> | No | <proposed test file> |
2. Implement a regression test that would catch this bug:
- The test must fail now (demonstrating the bug)
- The same test must pass once the bug is fixed (a sketch follows this prompt)
3. Add the test to the appropriate test file in integration-tests/tests/
4. Document in the issue file:
- What bug was found
- What regression test was added
- How to verify the test works
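A sketch of such a regression test, continuing the hypothetical settings example from the previous section (Vitest assumed):

```typescript
// integration-tests/tests/frontend-api.test.ts (hypothetical human finding)
import { it, expect } from "vitest";

const BASE_URL = process.env.API_URL ?? "http://localhost:3000";

// Regression: human found that an empty settings payload was silently
// accepted. This test fails while the bug exists and passes once fixed.
it("regression: empty settings payload is rejected", async () => {
  const res = await fetch(`${BASE_URL}/api/settings`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({}),
  });
  expect(res.status).toBe(400);
});
```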
## Integration Testing: Adding Tests to Issue Prompt
For issues that introduce new integration points:
This issue introduces a new integration point or modifies an existing one.
Add to integration-tests/README.md:
1. New entry in Integration Points table (if new)
2. What to test for this integration point
3. List of tests to implement
Then implement the tests in integration-tests/tests/:
- One test file per integration point (a sample layout follows this prompt)
- Test functions named clearly
- Happy path first, then edge cases
Run tests and verify they pass before claiming done.
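A layout following these conventions might look like this (file names are hypothetical):

```
integration-tests/tests/
  frontend-api.test.ts    # one file per integration point
  api-database.test.ts
  api-auth.test.ts
```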