Add Copilot instructions and MCP configuration files

Kenny Cheng 2025-10-10 19:42:57 +08:00
parent 7217404a2d
commit ca9a891921
2 changed files with 146 additions and 0 deletions

.github/copilot-instructions.md (vendored, new file, +120 lines)

@@ -0,0 +1,120 @@
# AI Coding Agent Instructions
## Project Overview
This is an OpenPose-based image generation server that integrates with ComfyUI. The system converts pose coordinates into skeleton images and sends them to ComfyUI for final AI image generation.
## Architecture & Key Components
### Core Service Flow
1. **Flask API** (`app.py`) receives pose coordinates via REST endpoints
2. **OpenPose Generation** (`openpose_gen.py`) converts coordinates to skeleton images
3. **ComfyUI Integration** uploads images and queues generation workflows
4. **Skeleton Library** (`skeleton_lib.py`) handles pose drawing with different formats (COCO, Body25)
### Critical Dependencies
- **ComfyUI Server**: External service at `localhost:8188` for AI image generation
- **OpenCV**: For image processing and skeleton rendering
- **Flask**: REST API server
- **requests-toolbelt**: For multipart file uploads to ComfyUI
## Development Patterns
### Coordinate System Convention
- Input coordinates are flat arrays: `[x1, y1, confidence1, x2, y2, confidence2, ...]`
- Use `coordinates_to_keypoints()` to convert to `Keypoint` objects
- Support both single pose (`/gen_image`) and multi-pose (`/gen_group_pic`) workflows
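A minimal sketch of that conversion, using a stand-in `Keypoint` (the real class lives in `skeleton_lib.py` and its fields may differ):
```python
from dataclasses import dataclass

@dataclass
class Keypoint:  # stand-in for the Keypoint type in skeleton_lib.py
    x: float
    y: float
    confidence: float

def coordinates_to_keypoints(coordinates):
    """Convert a flat [x1, y1, conf1, x2, y2, conf2, ...] list into Keypoint objects."""
    # Walk the flat list three values at a time: x, y, confidence.
    return [
        Keypoint(x, y, conf)
        for x, y, conf in zip(coordinates[0::3], coordinates[1::3], coordinates[2::3])
    ]
```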
### File Naming & Counters
- Output images use incremental counters: `body_pose_output0000.png`, `body_pose_output0001.png`
- Each function maintains its own static counter using the `hasattr()` pattern
- Circular queue naming for ComfyUI uploads with hash-based names
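The static-counter idiom looks roughly like this (the function name is illustrative; in the repo the counters live inside `save_bodypose()` and related functions):
```python
import os
import cv2

def save_example_image(canvas, output_dir="output"):
    # hasattr() static-counter pattern: the counter is stored on the function
    # object itself, so it persists across calls without a global variable.
    if not hasattr(save_example_image, "counter"):
        save_example_image.counter = 0
    os.makedirs(output_dir, exist_ok=True)
    path = os.path.join(output_dir, f"body_pose_output{save_example_image.counter:04d}.png")
    cv2.imwrite(path, canvas)
    save_example_image.counter += 1
    return path
```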
### MCP Integration
- Use `context7` defined in `.vscode/mcp.json` for coding tasks
- Use `python-language-server` MCP for Python coding assistance
- Use `analyzer` MCP for code analysis tasks
### ComfyUI Workflow Integration
- Workflow templates stored as JSON: `fencerAPI.json`, `group_pic.json`
- Modify seed values for randomization: `prompt["3"]["inputs"]["seed"] = random.randint(0, 10000000000)`
- Reference uploaded images by name in workflow nodes: `prompt["17"]["inputs"]['image'] = openpose_image_name`
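A hedged sketch of how a template is typically loaded and patched before queuing (node IDs `"3"` and `"17"` come from the bullets above and will differ between workflows):
```python
import json
import random

def load_and_patch_workflow(openpose_image_name, template_path="fencerAPI.json"):
    # Load the workflow exported from ComfyUI.
    with open(template_path, "r", encoding="utf-8") as f:
        prompt = json.load(f)
    # Randomize the sampler seed so repeated requests produce different images.
    prompt["3"]["inputs"]["seed"] = random.randint(0, 10000000000)
    # Point the image-loading node at the file previously uploaded to ComfyUI.
    prompt["17"]["inputs"]["image"] = openpose_image_name
    return prompt
```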
### Error Handling Pattern
```python
if not coordinates or not canvas_size:
    return jsonify({"status": "error", "message": "Missing data"}), 422
```
## Key Functions to Understand
### `save_bodypose()` / `save_bodypose_mulit()`
- Converts coordinates to skeleton images using CV2
- Creates output directory if missing
- Returns image path for ComfyUI upload
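A rough, simplified sketch of that flow (the `Keypoint` fields, 0-based limb indexing, and drawing details are assumptions; the real rendering lives in `skeleton_lib.py`):
```python
import os
import cv2
import numpy as np

def save_skeleton_image(keypoints, canvas_size, limb_seq, colors,
                        filename="body_pose_output0000.png", output_dir="output"):
    # canvas_size is [width, height]; OpenCV images are (height, width, channels).
    canvas = np.zeros((canvas_size[1], canvas_size[0], 3), dtype=np.uint8)
    # Draw each limb as a line between its two keypoints, skipping low-confidence points.
    for (a, b), color in zip(limb_seq, colors):
        ka, kb = keypoints[a], keypoints[b]
        if ka.confidence > 0 and kb.confidence > 0:
            cv2.line(canvas, (int(ka.x), int(ka.y)), (int(kb.x), int(kb.y)), color, 4)
    os.makedirs(output_dir, exist_ok=True)
    path = os.path.join(output_dir, filename)
    cv2.imwrite(path, canvas)
    return path  # the caller passes this path on to the ComfyUI upload step
```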
### `upload_image_circular_queue()`
- Manages unique image names per user/session using a SHA256 hash
- Implements circular queue to prevent infinite file accumulation
- Essential for ComfyUI integration
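The naming scheme can be pictured like this (queue size and exact format are assumptions; check the function itself for the real details):
```python
import hashlib

def circular_image_name(pid, counter, queue_size=10):
    # Derive a stable, user-specific prefix from the session/user id, then cycle
    # through a fixed number of slots so old uploads are overwritten on the
    # ComfyUI side instead of accumulating forever.
    digest = hashlib.sha256(pid.encode("utf-8")).hexdigest()[:16]
    return f"{digest}_{counter % queue_size}.png"
```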
### `queue_prompt()`
- Sends workflow JSON to ComfyUI `/prompt` endpoint
- Triggers actual AI image generation
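A minimal sketch, assuming ComfyUI's standard `/prompt` API which accepts a JSON body of the form `{"prompt": <workflow dict>}` (the repo's implementation may differ in details such as `client_id` handling):
```python
import requests

COMFYUI_URL = "http://localhost:8188"

def queue_prompt(prompt):
    # POST the patched workflow dict; ComfyUI schedules generation and returns a prompt_id.
    response = requests.post(f"{COMFYUI_URL}/prompt", json={"prompt": prompt})
    response.raise_for_status()
    return response.json()
```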
## Development Workflow
### Testing API Endpoints
```bash
# Single pose generation
curl -X POST -H "Content-Type: application/json" \
-d '{"coordinates": [x1,y1,conf1,...], "canvas_size": [width,height], "pid": "user123"}' \
http://localhost:5000/gen_image
```
### Running the Server
```bash
python app.py # Starts Flask in debug mode on localhost:5000
```
### Skeleton Format Support
- **COCO format**: 18 keypoints (default for single poses)
- **Body25 format**: 25 keypoints (used in the `main()` function)
- Use corresponding `limbSeq` and `colors` arrays from `skeleton_lib.py`
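A small selection helper, sketched with assumed attribute names (check `skeleton_lib.py` for the actual identifiers):
```python
import skeleton_lib as skel

def skeleton_config(fmt="coco"):
    # Attribute names below are assumptions; use whatever limbSeq/colors arrays
    # skeleton_lib.py actually exposes for each format.
    if fmt == "coco":    # 18 keypoints, default for single poses
        return skel.coco_limbSeq, skel.coco_colors
    if fmt == "body25":  # 25 keypoints, used in main()
        return skel.body25_limbSeq, skel.body25_colors
    raise ValueError(f"Unsupported skeleton format: {fmt}")
```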
## Integration Points
### ComfyUI Server Requirements
- Must be running on `localhost:8188`
- Requires `/upload/image` and `/prompt` endpoints
- Workflow JSON files must match ComfyUI node structure
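A hedged sketch of the upload call with `requests-toolbelt` (form field names follow ComfyUI's standard `/upload/image` endpoint and should be verified against the running server):
```python
import requests
from requests_toolbelt import MultipartEncoder

COMFYUI_URL = "http://localhost:8188"

def upload_image(image_path, upload_name):
    # Stream the skeleton PNG to ComfyUI as multipart/form-data.
    with open(image_path, "rb") as f:
        encoder = MultipartEncoder(fields={
            "image": (upload_name, f, "image/png"),
            "overwrite": "true",  # assumed flag: reuse circular-queue slot names
        })
        response = requests.post(
            f"{COMFYUI_URL}/upload/image",
            data=encoder,
            headers={"Content-Type": encoder.content_type},
        )
    response.raise_for_status()
    return response.json()
```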
### Output Directory Structure
```
output/ # Generated skeleton images
embeddings/ # ComfyUI embeddings and models
script_examples/ # API usage examples
```
## Common Modifications
### Adding New API Endpoints
1. Define new Flask route in `app.py`
2. Create corresponding handler function
3. Follow existing patterns for input validation, image generation, and ComfyUI queuing
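A minimal Flask sketch following that pattern (the endpoint name is hypothetical, and in `app.py` the existing `app` instance and helpers would be reused):
```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/gen_custom_image", methods=["POST"])  # hypothetical endpoint
def gen_custom_image():
    data = request.get_json(silent=True) or {}
    coordinates = data.get("coordinates")
    canvas_size = data.get("canvas_size")
    # Validate input exactly like the existing endpoints do.
    if not coordinates or not canvas_size:
        return jsonify({"status": "error", "message": "Missing data"}), 422
    # 1. Render the skeleton image (save_bodypose-style helper).
    # 2. Upload it to ComfyUI via upload_image_circular_queue().
    # 3. Patch the workflow JSON and hand it to queue_prompt().
    return jsonify({"status": "ok"})
```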
### Adding New Pose Formats
1. Define new `limbSeq` and `colors` in `skeleton_lib.py`
2. Update coordinate conversion in `coordinates_to_keypoints()`
3. Modify canvas drawing in `save_bodypose()`
### New ComfyUI Workflows
1. Export workflow from ComfyUI as JSON
2. Save in project root (e.g., `new_workflow.json`)
3. Create API function following `gen_fencer_prompt()` pattern
4. Add Flask endpoint in `app.py`
### Debugging ComfyUI Integration
- Check ComfyUI server status at `http://localhost:8188`
- Verify uploaded images in the ComfyUI interface
- Monitor workflow queue for errors
- Use `script_examples/` for isolated testing

.vscode/mcp.json (vendored, new file, +26 lines)

@@ -0,0 +1,26 @@
{
  "servers": {
    "github-mcp": {
      "type": "http",
      "url": "https://api.githubcopilot.com/mcp"
    },
    "python-language-server": {
      "command": "mcp-language-server",
      "args": [
        "--workspace",
        "${workspaceFolder}",
        "--lsp",
        "pylsp"
      ]
    },
    "analyzer": {
      "command": "uvx",
      "args": ["mcp-server-analyzer"]
    },
    "context7": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}