diff --git a/docs/ai-agent.md b/docs/ai-agent.md
index 65ab1df..6ba7193 100644
--- a/docs/ai-agent.md
+++ b/docs/ai-agent.md
@@ -23,7 +23,7 @@ After creating a project, you have two options to add tests: **A. Generate new tests automatically** - Start a [NEW ANALYSIS](concepts/analysis-process.md) to let the system create tests for you.
-- Learn about [🛠️ Tools & Assertions](tools-and-assertions.md) to understand how the agent interacts with your app.
+- Learn about [🛠️ Tools & Assertions](concepts/tools-and-assertions.md) to understand how the agent interacts with your app.
 **B. Integrate your existing tests**
diff --git a/docs/concepts/analysis-inputs.md b/docs/concepts/analysis-inputs.md
index 5837b30..2ce9e6e 100644
--- a/docs/concepts/analysis-inputs.md
+++ b/docs/concepts/analysis-inputs.md
@@ -51,15 +51,15 @@ Technical Notes: Step 1: Add an user story:
-![Add an user story](../../img/concepts/analysis-inputs/add-user-story.png)
+![Add a user story](../img/concepts/analysis-inputs/add-user-story.png)
 Step 2: Prompt for user story:
-![Prompt for user story](../../img/concepts/analysis-inputs/prompt-user-story.png)
+![Prompt for user story](../img/concepts/analysis-inputs/prompt-user-story.png)
 Step 3: Test case generated:
-![Test case generated](../../img/concepts/analysis-inputs/tests-generated.png)
+![Test case generated](../img/concepts/analysis-inputs/tests-generated.png)
 ### Figma Frame Example
diff --git a/docs/concepts/analysis-process.md b/docs/concepts/analysis-process.md
index 41baf67..b4b9266 100644
--- a/docs/concepts/analysis-process.md
+++ b/docs/concepts/analysis-process.md
@@ -6,7 +6,7 @@ Below is a step-by-step overview of how your application is analyzed and how you ## New Analysis
-![New analysis](../../img/analysis/2025-04-16_04-39.png)
+![New analysis](../img/analysis/2025-04-16_04-39.png)
 Before starting the analysis, you can tailor how source data is collected and provide specific instructions for the crawler:
diff --git
a/docs/concepts/tools-and-assertions.md b/docs/concepts/tools-and-assertions.md index dccc064..6e0f1e8 100644 --- a/docs/concepts/tools-and-assertions.md +++ b/docs/concepts/tools-and-assertions.md @@ -131,7 +131,7 @@ After a successful agent run, Wopee.io emits **deterministic test code** that ex ## Related topics -- [Getting Started with Wopee.io Agent testing](/ai-agent) -- [Analysis Process](/concepts/analysis-process) -- [Prompting Guidelines](/concepts/prompting-guidelines) -- [Project Context](/guides/project-context) +- [Getting Started with Wopee.io Agent testing](../ai-agent.md) +- [Analysis Process](analysis-process.md) +- [Prompting Guidelines](prompting-guidelines.md) +- [Project Context](../guides/project-context.md) diff --git a/docs/guides/http-tools.md b/docs/guides/http-tools.md new file mode 100644 index 0000000..cc62edf --- /dev/null +++ b/docs/guides/http-tools.md @@ -0,0 +1,409 @@ +# 🔌 HTTP Request Tool + +Blend UI and API testing in one seamless flow with the new HTTP Request Tool. Trigger API calls directly within your web tests without context switching or additional setup. + +## Overview + +The HTTP Request Tool allows you to perform API calls as part of your web testing workflow. This enables you to: + +- **Test API endpoints** within the same test flow as UI interactions +- **Validate data** fetched from APIs before or after UI actions +- **Set up test data** via API calls before UI testing begins +- **Verify backend state** after UI operations complete + +## What this page covers + +How to use HTTP requests in your tests, supported HTTP methods, request/response handling, and best practices for combining API and UI testing. 
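Because the tool's JSON configuration follows the standard Fetch `RequestInit` shape, you can reason about a config with the same primitives that ship in Node.js 18+ and modern browsers. A minimal sketch (illustrative only, not the tool's internal implementation; the pet-store URL and the token are placeholders):

```javascript
// Illustrative sketch: how a RequestInit-style JSON config (as accepted by
// the HTTP Request Tool) maps onto the standard Fetch API.
// The URL and token below are placeholders, not real endpoints or credentials.

const init = {
  method: "GET",
  headers: {
    Accept: "application/json",
    Authorization: "Bearer YOUR_TOKEN_HERE", // placeholder token
  },
};

// Wrapping the config in a standard Request object normalizes and validates
// it (method casing, header names) before any network call is made.
const request = new Request("https://petstore.swagger.io/v2/pet/123", init);

console.log(request.method);                // "GET"
console.log(request.headers.get("accept")); // "application/json"

// Sending the request is then a single fetch call (network access required):
// const response = await fetch(request);
// console.log(response.status, await response.json());
```

Constructing a `Request` up front is a cheap way to sanity-check a config, since invalid methods or malformed header names throw immediately instead of failing mid-test.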
+ +--- + +## Supported HTTP methods + +The HTTP Request Tool supports all standard HTTP methods: + +- **GET** – Retrieve data from an endpoint +- **POST** – Send data to create new resources +- **PUT** – Update existing resources +- **PATCH** – Partial updates to resources +- **DELETE** – Remove resources +- **HEAD** – Get headers without response body +- **OPTIONS** – Get allowed methods for a resource + +--- + +## Making HTTP requests + +### Basic syntax + +Use the HTTP Request Tool with natural language instructions: + +```prompt +Make GET request to https://petstore.swagger.io/v2/pet/123 +``` + +```prompt +Send POST request to https://petstore.swagger.io/v2/pet with body {"name": "Fluffy", "status": "available"} +``` + +### JSON Configuration + +The HTTP Request Tool uses a JSON object to configure requests, following the [RequestInit interface](https://developer.mozilla.org/en-US/docs/Web/API/RequestInit) standard. You can specify: + +```json +{ + "method": "POST", + "headers": { + "Content-Type": "application/json", + "Authorization": "Bearer YOUR_TOKEN_HERE" + }, + "body": "{\"name\": \"John Doe\"}" +} +``` + +### Request configuration options + +The tool supports all standard RequestInit properties: + +- **Headers**: Content-Type, Authorization, and custom headers +- **Request body**: JSON, form data, and other content types +- **Query parameters**: URL parameters and query strings +- **Authentication**: Bearer tokens, API keys, and basic auth +- **Cache control**: Cache behavior and policies +- **Credentials**: Cookie and authentication handling +- **Redirect handling**: Follow, error, or manual redirect behavior + +--- + +## Common use cases + +### Data validation + +Verify API responses before or after UI interactions: + +```prompt +Test pet information flow: +- Make GET request to https://petstore.swagger.io/v2/pet/123 +- Verify response status is 200 +- Navigate to pet details page +- Verify displayed data matches API response +``` + +### Test data 
setup + +Create test data via API before UI testing: + +```prompt +Set up test scenario: +- Send POST request to https://petstore.swagger.io/v2/pet +- Verify pet created successfully +- Navigate to application +- Test pet search with created pet data +``` + +### Backend verification + +Confirm backend state after UI operations: + +```prompt +Test pet order placement: +- Navigate to pet store checkout page +- Fill order form and submit +- Make GET request to https://petstore.swagger.io/v2/store/order/latest +- Verify order status is "placed" +``` + +--- + +## Request and response handling + +### JSON Configuration Examples + +The HTTP Request Tool supports comprehensive JSON configuration following the RequestInit standard: + +#### Basic POST request +```json +{ + "method": "POST", + "headers": { + "Content-Type": "application/json" + }, + "body": "{\"name\": \"Fluffy\", \"status\": \"available\"}" +} +``` + +#### Request with authentication +```json +{ + "method": "GET", + "headers": { + "Authorization": "Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..." + } +} +``` + +#### Request with custom headers and cache control +```json +{ + "method": "GET", + "headers": { + "X-Custom-Header": "custom-value", + "Accept": "application/json" + }, + "cache": "no-cache", + "credentials": "include" +} +``` + +### Request body formats + +The HTTP Request Tool supports multiple content types: + +**JSON payloads:** +```prompt +Send POST request to https://petstore.swagger.io/v2/store/order with JSON body: +{ + "petId": 123, + "quantity": 2, + "shipDate": "2024-01-15T10:30:00Z", + "status": "placed" +} +``` + +**Form data:** +```prompt +Send POST request to https://petstore.swagger.io/v2/user with form data: +username=testuser +firstName=John +lastName=Doe +email=john@example.com +``` + +### RequestInit Properties + +The tool supports all standard RequestInit properties: + +- **method**: HTTP method (GET, POST, PUT, DELETE, etc.) 
+- **headers**: Request headers object +- **body**: Request body (string, FormData, Blob, etc.) +- **cache**: Cache behavior (`default`, `no-store`, `reload`, `no-cache`, `force-cache`) +- **credentials**: Credential handling (`omit`, `same-origin`, `include`) +- **mode**: Request mode (`cors`, `no-cors`, `same-origin`, `navigate`) +- **redirect**: Redirect handling (`follow`, `error`, `manual`) +- **referrer**: Referrer URL +- **referrerPolicy**: Referrer policy +- **integrity**: Subresource integrity hash +- **keepalive**: Keep request alive after page unload +- **signal**: AbortSignal for request cancellation + +### Response validation + +Validate API responses using assertions: + +```prompt +Make GET request to https://petstore.swagger.io/v2/pet/123 +- Verify response status is 200 +- Verify response contains field "name" +- Verify response field "status" matches "available" or "sold" +``` + +### Error handling + +Test error scenarios and edge cases: + +```prompt +Test API error handling: +- Make GET request to https://petstore.swagger.io/v2/pet/999999 +- Verify response status is 404 +- Verify response contains error message +``` + +--- + +## Authentication + +### API keys + +```json +{ + "method": "GET", + "headers": { + "api_key": "special-key" + } +} +``` + +### Bearer tokens + +```json +{ + "method": "GET", + "headers": { + "Authorization": "Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..." + } +} +``` + +### Custom headers + +```json +{ + "method": "GET", + "headers": { + "Authorization": "Bearer your-token-here", + "X-Custom-Header": "custom-value", + "Accept": "application/json" + } +} +``` + +### Credential handling + +```json +{ + "method": "GET", + "credentials": "include", + "headers": { + "Content-Type": "application/json" + } +} +``` + +--- + +## Best practices + +!!! 
tip "HTTP Request Tool best practices" + + - **Test realistic scenarios**: Use API calls that mirror real user workflows + - **Validate responses**: Always verify API response status and content + - **Handle authentication**: Include proper authentication for protected endpoints + - **Test error cases**: Verify API behavior for invalid requests and edge cases + - **Combine with UI testing**: Use API calls to set up data or verify backend state + - **Use meaningful assertions**: Verify specific response fields and values + - **Test different HTTP methods**: Cover GET, POST, PUT, DELETE operations as needed + +### Advanced RequestInit Options + +#### Cache control +```json +{ + "method": "GET", + "cache": "no-cache", + "headers": { + "Cache-Control": "no-cache" + } +} +``` + +#### Request timeout and abort +```json +{ + "method": "POST", + "body": "{\"data\": \"example\"}", + "headers": { + "Content-Type": "application/json" + }, + "keepalive": true +} +``` + +#### Cross-origin requests +```json +{ + "method": "GET", + "mode": "cors", + "credentials": "include", + "headers": { + "Accept": "application/json" + } +} +``` + +### When to use HTTP requests + +**Good use cases:** +- Setting up test data before UI testing +- Verifying backend state after UI operations +- Testing API endpoints as part of end-to-end flows +- Validating data consistency between UI and API + +**Avoid when:** +- Pure UI testing is sufficient +- API testing can be done separately +- You need complex API testing scenarios (use dedicated API testing tools) + +--- + +## Example workflows + +### E-commerce order flow + +```prompt +Test complete pet order workflow: +1. Make POST request to https://petstore.swagger.io/v2/user +2. Navigate to pet store page +3. Add pet to cart +4. Make GET request to verify cart contents +5. Proceed to checkout +6. Complete purchase +7. Make GET request to verify order status +``` + +### User authentication flow + +```prompt +Test user login with API verification: +1. 
Send POST request to https://petstore.swagger.io/v2/user/login with credentials
+2. Verify response contains valid session
+3. Navigate to user profile page
+4. Verify user is logged in
+5. Make GET request to verify user session
+```
+
+### Data synchronization test
+
+```prompt
+Test UI-API data sync:
+1. Navigate to pet store settings page
+2. Update pet information
+3. Make GET request to verify changes saved
+4. Navigate to pet details page
+5. Verify displayed data matches API response
+```
+
+---
+
+## Troubleshooting
+
+### Common issues
+
+**Request fails:**
+- Verify URL is correct and accessible
+- Check authentication credentials
+- Ensure proper headers are set
+- Verify network connectivity
+
+**Response validation fails:**
+- Check response format and structure
+- Verify field names and data types
+- Ensure response contains expected data
+- Review API documentation for correct response format
+
+**Authentication errors:**
+- Verify API key or token is valid
+- Check token expiration
+- Ensure proper authentication header format
+- Verify user has required permissions
+
+!!! note "Need help?"
+
+    If you encounter issues with HTTP requests, contact our support team at [help@wopee.io](mailto:help@wopee.io) or visit our [community discussions](https://github.com/orgs/Wopee-io/discussions).
+
+---
+
+## Related topics
+
+- [Tools & Assertions](../concepts/tools-and-assertions.md)
+- [Getting Started with Wopee.io Agent testing](../ai-agent.md)
+- [Project Context](project-context.md)
+- [Analysis Process](../concepts/analysis-process.md)
diff --git a/docs/guides/upload-files.md b/docs/guides/upload-files.md
index 244bb02..3486c38 100644
--- a/docs/guides/upload-files.md
+++ b/docs/guides/upload-files.md
@@ -26,7 +26,7 @@ You can create tests that use file uploads in several ways: Create new user story and use prompt to generate test that uses file upload.
-![Add new user story](img/guides/upload-files/add-stories.png) +![Add new user story](../img/guides/upload-files/add-stories.png) !!! tip "Use prompt" @@ -40,13 +40,13 @@ Create new user story and use prompt to generate test that uses file upload. - Verify that the message `File Uploaded!` was displayed. ``` - ![Generate test with prompt](img/guides/upload-files/prompt-new-test.png) + ![Generate test with prompt](../img/guides/upload-files/prompt-new-test.png) #### Method 2: Manual test creation You can also create tests manually that include file upload steps: -![Create new test with file upload](img/guides/upload-files/new-test-upload-file.png) +![Create new test with file upload](../img/guides/upload-files/new-test-upload-file.png) 1. Navigate to your project in [Wopee Commander](https://cmd.wopee.io): Project > Analysis > Test 2. Click "Add new user story" or "Add new test" for existing user story diff --git a/docs/guides/wopee-mcp.md b/docs/guides/wopee-mcp.md new file mode 100644 index 0000000..d13b6a5 --- /dev/null +++ b/docs/guides/wopee-mcp.md @@ -0,0 +1,566 @@ +# 🚀 Wopee.io MCP + +(early preview) + +## Overview + +Wopee.io MCP is a Model Context Protocol server for integrating with the Wopee testing platform. This server provides tools for dispatching analysis, generating app context, user stories, test cases, and running test executions. + +Now, you can use Wopee.io MCP to integrate with your favorite IDE and use Wopee.io AI Agents to increase your productivity and speed up your testing while developing your web app. 
+ +## Features + +- **Dispatch Analysis**: Start analysis of web applications to understand their structure and behavior +- **Dispatch Agent**: Execute tests for specific projects and suites +- **Generate App Context**: Create detailed application context based on analysis results +- **Generate General User Stories**: Generate high-level user stories from analysis data +- **Generate User Stories**: Generate detailed user stories and acceptance criteria from analysis data +- **Generate Test Cases**: Generate comprehensive test cases from analysis and user stories +- **Get App Context**: Retrieve existing app context for a project and suite +- **Get User Stories**: Retrieve existing user stories for a project and suite +- **Get Test Cases**: Retrieve existing test cases for a project and suite +- **Fetch Analysis Suites**: Fetch all analysis suites for a project + +## Quick Start Guide + +### 🚀 One-Click Installation + +**For VS Code / Cursor:** +1. `Ctrl+Shift+P` → "MCP: Install Server" +2. Enter: `wopee-mcp` +3. 
Add your API key to the `.env` file
+
+### 🛠 Available Tools
+
+| Tool | Purpose | Example |
+|------|---------|---------|
+| `wopee_dispatch_analysis` | Start app analysis | `@wopee wopee_dispatch_analysis Project UUID: project-123` |
+| `wopee_dispatch_agent` | Execute tests | `@wopee wopee_dispatch_agent Project UUID: project-123 Suite UUID: suite-123` |
+| `wopee_generate_app_context` | Generate app context | `@wopee wopee_generate_app_context Project UUID: project-123 Suite UUID: suite-123` |
+| `wopee_generate_general_user_stories` | Generate general user stories | `@wopee wopee_generate_general_user_stories Project UUID: project-123 Suite UUID: suite-123` |
+| `wopee_generate_user_stories` | Generate detailed user stories | `@wopee wopee_generate_user_stories Project UUID: project-123 Suite UUID: suite-123` |
+| `wopee_generate_test_cases` | Generate test cases | `@wopee wopee_generate_test_cases Project UUID: project-123 Suite UUID: suite-123` |
+| `wopee_get_app_context` | Get existing app context | `@wopee wopee_get_app_context Project UUID: project-123 Suite UUID: suite-123` |
+| `wopee_get_user_stories` | Get existing user stories | `@wopee wopee_get_user_stories Project UUID: project-123 Suite UUID: suite-123` |
+| `wopee_get_test_cases` | Get existing test cases | `@wopee wopee_get_test_cases Project UUID: project-123 Suite UUID: suite-123` |
+| `wopee_fetch_analysis_suites` | Fetch all analysis suites | `@wopee wopee_fetch_analysis_suites Project UUID: project-123` |
+
+### 🔧 Manual Installation
+
+```bash
+npm install -g wopee-mcp
+```
+
+### ⚙️ Configuration
+
+Set environment variables:
+```bash
+export WOPEE_API_KEY=your_api_key_here
+export WOPEE_PROJECT_UUID=your_project_uuid_here
+```
+
+### Prerequisites
+
+Before using the Wopee MCP server, ensure you have:
+
+1. **VS Code** with the MCP extension installed, or **Cursor**; the server can also be used with ChatGPT, Claude, or any other AI agent that supports MCP.
+2.
A **Wopee API key** from [wopee.io](https://wopee.io) +3. **Node.js 18+** installed on your system + +## Configuration + +The server loads configuration from a `.env` file in the project root directory (where `package.json` is located). + +### Environment Variables + +- `WOPEE_API_KEY` (required): Your Wopee API key +- `WOPEE_PROJECT_UUID` (required): Your Wopee project UUID +- `WOPEE_API_URL` (optional): Wopee API endpoint (defaults to `https://api.wopee.io/`) + +### Setting up .env file + +1. **Copy the example file:** + ```bash + cp env.example .env + ``` + +2. **Edit the .env file in the project root:** + ```bash + # Wopee API Configuration + WOPEE_API_KEY=your_actual_api_key_here + WOPEE_PROJECT_UUID=your_project_uuid_here + WOPEE_API_URL=https://api.dev.wopee.io/ + ``` + +3. **For MCP integration, update your `mcp.json`:** + ```json + { + "mcpServers": { + "wopee": { + "command": "npx", + "args": ["wopee-mcp@latest"], + "env": {} + } + } + } + ``` + + **Note:** The server automatically loads API keys from the `.env` file in the project root. No need to hardcode them in the MCP configuration. + +## Usage + +Once configured, you can use the Wopee tools in your chat interface. Simply type `@wopee` followed by the tool name and required parameters. + +### Quick Examples + +**Start Analysis:** +``` +@wopee wopee_dispatch_analysis Project UUID: project-123 +``` + +**Generate Test Cases:** +``` +@wopee wopee_generate_test_cases Project UUID: project-123 Suite UUID: suite-123 +``` + +**Execute Tests:** +``` +@wopee wopee_dispatch_agent Project UUID: project-123 Suite UUID: suite-123 +``` + +## Real-World Usage Examples + +### Complete Testing Workflow + +Here's a typical workflow from analysis to test execution: + +#### 1. Start Analysis +``` +Dispatch analysis +``` + +#### 2. Generate Application Context +``` +Generate app context +``` + +#### 3. Create User Stories +``` +Generate user stories +``` + +#### 4. 
Generate Test Cases
+```
+Generate test cases
+```
+
+#### 5. Review Generated Tests
+```
+Give me all the generated tests in tabular format
+```
+
+#### 6. Execute Tests
+```
+Dispatch agent to run test TC001 from US001
+```
+
+### Advanced Usage Examples
+
+#### Multi-Language Support
+```
+Generate user stories with additional instructions: "All outputs have to be in Portuguese"
+```
+
+```
+Generate test cases with additional instructions: "Focus on field validations, make sure to test all fields. Use USD and EUR currencies. Focus on payment flows."
+```
+
+#### Custom Analysis Instructions
+```
+Dispatch analysis with additional instructions: "All outputs have to be in Czech"
+```
+
+```
+Generate test cases with a focus on security testing, such as password validation, email validation, etc.
+```
+
+### Data Retrieval Examples
+
+#### Check Analysis Status
+```
+What is the status of my analysis?
+```
+
+```
+Show me all available analysis suites
+```
+
+#### Get Specific Test Cases
+```
+Give me all tests from analysis A001
+```
+
+```
+Show me test cases from analysis A007
+```
+
+#### View User Stories
+```
+Give me a list of the user stories in bullet-point format
+```
+
+```
+Give me the same table with the user stories, but also add a column with the number of tests per story
+```
+
+### Test Execution Examples
+
+#### Run Specific Tests
+```
+Dispatch all tests for user story US001
+```
+
+```
+Dispatch agent to run all tests for user story US001
+```
+
+#### Monitor Execution
+```
+What about now?
+```
+
+```
+What are the current test execution results?
+```
+
+## Common Workflows
+
+### 1. New Project Setup
+1. **Dispatch analysis** for your application
+2. **Generate app context** to understand the application
+3. **Generate user stories** based on the analysis
+4. **Generate test cases** from user stories
+5. **Review and organize** the generated content
+
+### 2. Test Execution Workflow
+1. **Check available tests** in your analysis
+2.
**Select specific tests** to execute +3. **Dispatch agent** to run selected tests +4. **Monitor execution status** and results +5. **Review test outcomes** and iterate + +### 3. Multi-Analysis Comparison +1. **Fetch all analysis suites** for your project +2. **Compare test cases** from different analyses +3. **Check statuses** of all analyses +4. **Select best performing** analysis for execution + +## Tips and Best Practices + +### 1. Use Descriptive Analysis Names +- Include the application name and version +- Add date or iteration information +- Example: `"E-commerce App v2.1 - Payment Testing"` + +### 2. Provide Clear Instructions +- Be specific about language requirements +- Include focus areas for testing +- Example: `"Focus on user authentication and payment flows"` + +### 3. Monitor Progress +- Check analysis status regularly +- Wait for completion before proceeding +- Use status queries to track progress + +### 4. Organize by Analysis +- Keep related tests in the same analysis +- Use consistent naming conventions +- Document analysis purposes + +### 5. Test Execution +- Start with single test cases +- Monitor execution status +- Scale up to multiple tests once stable + +## Available Tools + +### 1. wopee_dispatch_analysis + +Start a new analysis for a given URL. + +**Parameters:** +- `projectUuid` (string, required): UUID of the project +- `iterations` (number, required): Number of analysis iterations +- `suiteAnalysisConfig` (object, required): Configuration for the analysis + +**Example:** +```json +{ + "projectUuid": "project-123", + "iterations": 5, + "suiteAnalysisConfig": { + "startingUrl": "https://example.com", + "username": "testuser", + "password": "testpass", + "cookiesPreference": "ACCEPT_ALL" + } +} +``` + +### 2. wopee_dispatch_agent + +Execute tests for specific projects and suites. 
+ +**Parameters:** +- `projectUuid` (string, required): UUID of the project +- `suiteUuid` (string, required): UUID of the test suite +- `analysisIdentifier` (string, required): Analysis identifier +- `testCases` (array, required): Array of test cases to execute + +**Example:** +```json +{ + "projectUuid": "project-123", + "suiteUuid": "suite-123", + "analysisIdentifier": "analysis-123", + "testCases": [ + { + "testCaseId": "test-1", + "userStoryId": "story-1" + } + ] +} +``` + +### 3. wopee_generate_app_context + +Generate application context based on analysis results. + +**Parameters:** +- `projectUuid` (string, required): UUID of the project +- `suiteUuid` (string, required): UUID of the test suite +- `extraPrompt` (string, optional): Optional prompt to modify the app context generation + +**Example:** +```json +{ + "projectUuid": "project-123", + "suiteUuid": "suite-123", + "extraPrompt": "Focus on user authentication flows" +} +``` + +### 4. wopee_generate_general_user_stories + +Generate high-level user stories from analysis data. + +**Parameters:** +- `projectUuid` (string, required): UUID of the project +- `suiteUuid` (string, required): UUID of the test suite +- `extraPrompt` (string, optional): Optional prompt to modify the user story generation + +**Example:** +```json +{ + "projectUuid": "project-123", + "suiteUuid": "suite-123", + "extraPrompt": "Include high-level business requirements" +} +``` + +### 5. wopee_generate_user_stories + +Generate detailed user stories and acceptance criteria from analysis data. + +**Parameters:** +- `projectUuid` (string, required): UUID of the project +- `suiteUuid` (string, required): UUID of the test suite +- `extraPrompt` (string, optional): Optional prompt to modify the user story generation + +**Example:** +```json +{ + "projectUuid": "project-123", + "suiteUuid": "suite-123", + "extraPrompt": "Include edge cases and error scenarios" +} +``` + +### 6. 
wopee_generate_test_cases + +Generate comprehensive test cases from analysis and user stories. + +**Parameters:** +- `projectUuid` (string, required): UUID of the project +- `suiteUuid` (string, required): UUID of the test suite +- `extraPrompt` (string, optional): Optional prompt to modify the test case generation +- `selectedUserStories` (array, optional): Array of selected user story IDs + +**Example:** +```json +{ + "projectUuid": "project-123", + "suiteUuid": "suite-123", + "extraPrompt": "Generate comprehensive test coverage", + "selectedUserStories": ["story-1", "story-2"] +} +``` + +### 7. wopee_get_app_context + +Get existing app context for a project and suite. + +**Parameters:** +- `projectUuid` (string, required): UUID of the project +- `suiteUuid` (string, required): UUID of the test suite + +**Example:** +```json +{ + "projectUuid": "project-123", + "suiteUuid": "suite-123" +} +``` + +### 8. wopee_get_user_stories + +Get existing user stories for a project and suite. + +**Parameters:** +- `projectUuid` (string, required): UUID of the project +- `suiteUuid` (string, required): UUID of the test suite + +**Example:** +```json +{ + "projectUuid": "project-123", + "suiteUuid": "suite-123" +} +``` + +### 9. wopee_get_test_cases + +Get existing test cases for a project and suite. + +**Parameters:** +- `projectUuid` (string, required): UUID of the project +- `suiteUuid` (string, required): UUID of the test suite + +**Example:** +```json +{ + "projectUuid": "project-123", + "suiteUuid": "suite-123" +} +``` + +### 10. wopee_fetch_analysis_suites + +Fetch all analysis suites for a given project. 
+ +**Parameters:** +- `projectUuid` (string, required): UUID of the project + +**Example:** +```json +{ + "projectUuid": "project-123" +} +``` + +**Response:** +Returns an array of analysis suites with detailed information including: +- Suite UUID, name, and type +- Upload and execution status +- Analysis identifier +- Suite running status +- Generation state for app context, user stories, and test cases +- Creation and update timestamps + +## Example Session Flow + +Here's a complete example session showing the typical workflow: + +``` +User: Dispatch analysis + +Assistant: ✅ Analysis successfully dispatched! Analysis UUID: 008ccdd2-91a3-432e-8086-020bf337eaa7 + +User: Generate app context + +Assistant: ✅ App context generated successfully! + +User: Generate user stories + +Assistant: ✅ User stories generated successfully! + +User: Give me all the generated tests in tabular format + +Assistant: 📊 Here are all your generated test cases... + +User: Dispatch agent to run test TC001 from US001 + +Assistant: ✅ Test execution started successfully! +``` + +## 🆘 Troubleshooting + +### Common Issues + +- **Command not found**: `npm install -g wopee-mcp` +- **API key error**: Check environment variables +- **Connection failed**: Verify internet and API key +- **Tools not showing**: Restart editor + +### Error Handling + +#### Common Error Messages + +1. **"User not found" error:** + - Check API key configuration + - Verify project permissions + - Contact support if persistent + +2. **"Analysis not found" error:** + - Verify project UUID + - Check analysis completion status + - Ensure analysis exists + +3. 
**"Test execution failed" error:** + - Check test case validity + - Verify application accessibility + - Review test steps + +### Getting Help + +- **Check logs**: Look in the MCP server output panel +- **Verify installation**: Run `wopee-mcp --help` in terminal +- **Test connection**: Use the `wopee_dispatch_analysis` tool with a simple URL + +## Response Format + +All tools return responses in the following format: + +```json +{ + "success": true, + "data": { /* tool-specific data */ }, + "message": "Success message", + "error": "Error message (only present if success is false)" +} +``` + +## Error Handling + +The server provides detailed error messages for: +- Invalid parameters +- GraphQL API errors +- Network connectivity issues +- Configuration problems + +!!! note "Need help?" + + For support and further information, please refer to the [npm package page](https://www.npmjs.com/package/wopee-mcp) or contact the package maintainers. + +*Note: This package is currently in early preview; features and functionalities are subject to change.* + diff --git a/docs/index.md b/docs/index.md index 8428079..d3c17e8 100644 --- a/docs/index.md +++ b/docs/index.md @@ -44,8 +44,8 @@ The output is deterministic test code you can run anywhere. 
No LLMs or Wopee.io ## Start with bot testing : Step by step -- See detailed [📙 Getting Started with bot testing](bot.md) section -- Learn about [🛠️ Tools & Assertions](tools-and-assertions.md) and how the agent works +- See detailed [📙 Getting Started with bot testing](ai-agent.md) section +- Learn about [🛠️ Tools & Assertions](concepts/tools-and-assertions.md) and how the agent works - Understand [📖 Vocabulary](glossary.md) of 🐒 ## Wopee.io Integrations diff --git a/docs/security/enterprise-connectivity.md b/docs/security/enterprise-connectivity.md index 083e908..702e0dd 100644 --- a/docs/security/enterprise-connectivity.md +++ b/docs/security/enterprise-connectivity.md @@ -68,6 +68,6 @@ Run Wopee.io agents within your internal network. ## Related Pages -- [Quick Start Guide](/docs/get-started/quickstart.md) - Initial setup -- [Stability Guide](/docs/troubleshooting/stability.md) - Troubleshooting -- [Billing and Licensing](/docs/billing-and-licensing.md) - Enterprise plans +- [Getting Started Guide](../ai-agent.md) - Initial setup +- [Pilot Projects](../pilot-projects.md) - More about pilot projects +- [Start now!](https://wopee.io/book-demo) - Get started with your pilot project diff --git a/mkdocs.yml b/mkdocs.yml index 50624b6..c9c4d7b 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -25,6 +25,8 @@ nav: - 📥 Download files: guides/download-files.md - 🗄️ Browser local storage: guides/browser-local-storage.md - 🌐 Project context: guides/project-context.md + - 🔌 HTTP Request Tool: guides/http-tools.md + - 🚀 Wopee.io MCP: guides/wopee-mcp.md - Other docs: - 🧑‍✈️ Pilot projects: pilot-projects.md - 🔒 Enterprise Connectivity: security/enterprise-connectivity.md @@ -98,7 +100,8 @@ plugins: "integrations/robot-framework/01-getting-started.md": "robot-framework/01-getting-started.md" "robot-framework.md": "robot-framework/01-getting-started.md" "rf.md": "robot-framework/01-getting-started.md" - "getting-started.md": "bot.md" + "getting-started.md": "ai-agent.md" + "bot.md": 
"ai-agent.md" markdown_extensions: - attr_list