AI Collaboration
MuseMVP's AI collaboration system: built-in rules, AI-friendly structure, and docs/manual division.
MuseMVP treats AI collaboration as engineering infrastructure. Beyond built-in rule and skill entrypoints like AGENTS.md and .agents, the repository layout itself is designed for AI readability. This page focuses on the stable collaboration framework; high-frequency tactics and tool frontiers are maintained in the MVP Manual.
Start with the split: Docs vs MVP Manual
Documentation (/docs)
Stable knowledge: architecture boundaries, directory conventions, layering responsibilities, and pre-commit checks. Use this as the team baseline and onboarding reference.
MVP Manual (/manual)
Fast-moving content: latest AI IDE usage patterns, prompt strategy updates, model capability changes, and real-world troubleshooting.
Read together
This page covers the long-lived collaboration framework, while the AI IDE Live Usage Summary tracks rapidly changing tactical practices.
Built-in AI Collaboration Infrastructure in MuseMVP
- `AGENTS.md`: defines mandatory engineering constraints and reduces off-spec AI output.
- `.agents/skills`: modularizes frequent workflows for on-demand reuse.
- `src/modules` + backend layering: makes it easier for AI to place orchestration vs data-access logic correctly.
Do not turn rules into an encyclopedia
Rules should be short, executable, and verifiable. Overlong rule files lower the hit rate of critical constraints.
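As an illustration of "short, executable, and verifiable," a hypothetical `AGENTS.md` rule block might look like this (the exact rules in your repository will differ):

```md
## Layering (mandatory)
- Routes must not import from `src/backend/database/queries` directly.
- All user-facing strings live in `src/i18n/translations/*/mvp.json` (en and zh).
- Before handoff: run `pnpm type-check && pnpm build` and report any failure.
```

Each line names one constraint and one way to check it, which keeps the hit rate of critical rules high.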
Why this structure is AI-friendly
Business-domain colocation
`src/modules/*` colocates components, hooks, and lib logic, so AI can implement and refactor within one coherent context.
Strict backend layering
`routes -> modules/lib(orchestrator) -> queries` keeps responsibilities explicit and reduces cross-layer mistakes.
Type-safe request chain
Hono RPC + TypeScript surfaces frontend/backend mismatches at compile time.
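A minimal sketch of the idea in plain TypeScript (not the actual Hono RPC API): one shared type drives both the handler and the client, so a shape change on either side fails to compile on the other:

```typescript
// Shared contract: both sides import this single type
type UpgradeResponse = { ok: boolean; plan: "free" | "pro" };

// Backend handler must satisfy the shared type
function upgradeHandler(): UpgradeResponse {
  return { ok: true, plan: "pro" };
}

// Frontend caller is typed against the same contract;
// if the backend renames `plan`, this code stops compiling.
async function callUpgrade(
  fetcher: () => Promise<UpgradeResponse>
): Promise<string> {
  const res = await fetcher();
  return res.ok ? `upgraded to ${res.plan}` : "upgrade failed";
}
```

Hono RPC achieves the same effect by inferring the response types of the route definitions and exposing them to the typed client, so no hand-written shared type is needed.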
Fixed i18n destination
User-facing copy is centralized in `src/i18n/translations/*/mvp.json`, making bilingual sync predictable.
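Because the destination is fixed, keeping en/zh in sync reduces to a key diff. The sketch below is a generic helper, not a MuseMVP utility; it compares two in-memory objects rather than reading the real `mvp.json` files:

```typescript
type Messages = Record<string, unknown>;

// Return keys present in `base` but missing from `other`,
// recursing into nested objects using dot-joined paths.
function findMissingKeys(base: Messages, other: Messages, prefix = ""): string[] {
  const missing: string[] = [];
  for (const [key, value] of Object.entries(base)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (!(key in other)) {
      missing.push(path);
    } else if (typeof value === "object" && value !== null) {
      missing.push(...findMissingKeys(value as Messages, other[key] as Messages, path));
    }
  }
  return missing;
}

const en = { billing: { downgrade: "Downgrade to free", upgrade: "Upgrade" } };
const zh = { billing: { upgrade: "升级" } };

console.log(findMissingKeys(en, zh)); // dot paths missing from the zh copy
```

Running such a diff in both directions (en vs zh, then zh vs en) makes bilingual drift visible before review.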
Recommended AI Collaboration Flow (Stable)
Provide context first, do not ask AI to code immediately
Specify target files, affected modules, and whether auth/billing/i18n is involved.
Ask AI for a change plan first
The plan should include file paths, layer-specific changes, and verification commands.
Implement by layer
Recommended order: `queries -> modules/lib -> routes -> api-client -> UI`.
Require AI self-verification
At minimum, run type-check and build, then report failures.
Do final human review
Focus on permission boundaries, billing flows, i18n sync, and error branches.
Example prompt:
Context:
- Feature: src/modules/muse-billing
- Route: src/backend/api/routes/upgrade
- Query: src/backend/database/queries/billing-contracts.ts
- i18n: src/i18n/translations/{en,zh}/mvp.json
Task:
Add a "downgrade to free" endpoint and a frontend action button.
Requirements:
1) Strictly follow modules/lib orchestration + queries data-access layering
2) Sync user-facing copy in both en/zh mvp.json files
3) Run pnpm type-check && pnpm build after implementation
Pre-PR Verification (Mandatory)
| Check | Requirement |
|---|---|
| Type correctness | `pnpm type-check` passes |
| Build integrity | `pnpm build` passes |
| Code quality | `pnpm check` returns no errors |
| Layer boundaries | Follows `queries -> modules/lib -> routes -> api-client -> UI` |
| i18n synchronization | User-facing text is updated in both en and zh |
pnpm type-check
pnpm build
pnpm check
Put verification commands in your prompt
Having AI run verification before handoff is one of the most effective ways to reduce rework.
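The checks above can also be scripted. A hedged sketch (the command names match this page, but the runner is a generic helper, not part of MuseMVP); it takes an injected `run` function so the pnpm commands can be stubbed in tests or swapped for other tooling:

```typescript
type CheckResult = { command: string; passed: boolean };

// Run each verification command via the injected runner and
// collect pass/fail results; the runner returns an exit code.
function runChecks(commands: string[], run: (cmd: string) => number): CheckResult[] {
  return commands.map((command) => ({ command, passed: run(command) === 0 }));
}

const prePrChecks = ["pnpm type-check", "pnpm build", "pnpm check"];

// In a real script, `run` would be something like
// (cmd) => child_process.spawnSync(cmd, { shell: true }).status ?? 1.
const results = runChecks(prePrChecks, () => 0);
console.log(results.every((r) => r.passed) ? "ready for PR" : "fix failures first");
```

Handing the AI this exact command list, and asking it to report each exit code, makes "self-verified" a checkable claim rather than a promise.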
For real-time tactics and tool frontiers: use the MVP Manual
/docs is for stable methodology. When you need "what works this week" for AI coding (new model comparisons, latest agent tooling, prompt patterns, and field-tested fixes), go directly to the MVP Manual.