Available Functions

Pie’s MCP server exposes 37 tools for regular users (plus 3 admin-only tools). Each tool can be invoked through natural conversation - just describe what you want and your AI assistant will call the right tool.

Looking for structured tool schemas? See the MCP Tool Reference for all tools with parameter types, required fields, and technical details.


AI-Powered Bug Fixing - The Killer Workflow

The most powerful way to use Pie’s MCP: connect Pie to your AI coding agent and let it fix bugs automatically.

Pie captures full context for every issue - reproduction steps, expected vs. observed behavior, screenshots, and DOM snapshots. When an AI coding agent (like Claude Code) has access to Pie via MCP, it can pull an issue, understand exactly what broke, fix the code, and open a PR.

Fix a Single Issue

Prompt: Get issue ISS-142 from Pie and fix it in my codebase

The agent calls get_issue to fetch full details (repro steps, assertions, expected/observed behavior, screenshots), locates the relevant code, applies a targeted fix, and creates a PR.
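
Under the hood, this is a single MCP tool call. Here is a minimal sketch using the MCP Python SDK - the pie-mcp launch command and the issue_id parameter name are placeholders for illustration; see the MCP Tool Reference and your own server config for the real values:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def fetch_issue() -> None:
    # Launch command is hypothetical -- substitute your actual Pie MCP
    # server configuration.
    server = StdioServerParameters(command="pie-mcp", args=[])

    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # "issue_id" is an assumed parameter name; see the MCP Tool
            # Reference for the real get_issue schema.
            result = await session.call_tool("get_issue", {"issue_id": "ISS-142"})
            for item in result.content:
                if getattr(item, "text", None):
                    print(item.text)  # repro steps, assertions, etc.

asyncio.run(fetch_issue())
```

In practice your coding agent makes this call for you - the sketch only shows what the prompt translates to on the wire.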

Batch-Fix All Issues

Prompt: Get all approved issues from Pie. For each one, fix it and create a separate PR.

The agent calls get_issues with status filter, iterates through each issue, and creates isolated fixes. Pie provides enough context per issue that the agent can work through them autonomously.
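
Sketched as a loop, reusing a session opened as in the previous example (the status value is illustrative):

```python
from mcp import ClientSession

async def batch_fix(session: ClientSession) -> None:
    # "status" is the filter the docs mention; its exact values and the
    # shape of the response are assumptions.
    result = await session.call_tool("get_issues", {"status": "approved"})
    for item in result.content:
        issue_context = getattr(item, "text", "")
        # Hand each issue's context to the coding agent for an isolated
        # fix and a separate PR.
```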

Triage and Fix in One Session

Prompt: Get all pending issues from Pie, help me triage them, then fix the ones I approve

Combines issue review with automated fixing - approve the real bugs, reject the false positives, and let the agent fix everything you approved.

Fix with Context from Test Results

Prompt: Get the failed test results from the latest run, pull the related issues, and fix them

Chains get_results (filtered to failures) → get_issue (for each failed test’s issue) → code fix → PR. The agent gets both the test execution details and the issue context.
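
A rough sketch of the chain, with a hypothetical extract_issue_id helper standing in for the parsing the agent would do itself:

```python
import re

from mcp import ClientSession

def extract_issue_id(text: str) -> str | None:
    """Hypothetical helper: pull an ISS-style ID out of a result blob."""
    match = re.search(r"ISS-\d+", text)
    return match.group(0) if match else None

async def fix_failed_tests(session: ClientSession) -> None:
    # The "status" filter is an assumption; see the Tool Reference.
    failures = await session.call_tool("get_results", {"status": "failed"})
    for item in failures.content:
        issue_id = extract_issue_id(getattr(item, "text", ""))
        if issue_id:
            issue = await session.call_tool("get_issue", {"issue_id": issue_id})
            # ...hand both the execution details and the issue context
            # to the coding agent for the fix and PR...
```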

Why this works: Unlike generic bug reports, Pie issues include exact reproduction steps, expected vs. observed behavior with specific assertions, screenshots, and DOM snapshots - the same context a senior QA engineer would provide. AI agents don’t need to guess what went wrong.



Test Case Management

Tools: get_testcases, create_custom_testcase, run_specific_testcases, update_testcase, archive_testcases, unarchive_testcases, add_testcase_to_suite

List Test Cases

Prompt: List all my test cases

Fetches all active test cases with IDs, titles, and descriptions. You can filter by status, archived state, or test suite.

Prompt: Show me the detailed steps and assertions for test case 42

Fetches full details for a specific test case including every step and assertion.

Create a Custom Test Case

Prompt: Create a test case that verifies the checkout flow - add an item to cart, proceed to checkout, enter payment details, and confirm the order

Creates a new test case from your natural language description. The test will be automatically generated, queued, and executed. Provide clear objectives, steps, and expected outcomes in your prompt for best results.
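
As a raw tool call, the same request might look like this sketch ("prompt" is an assumed parameter name; reuse the session setup from "Fix a Single Issue"):

```python
from mcp import ClientSession

async def create_checkout_test(session: ClientSession) -> None:
    # "prompt" is an assumed parameter name for the natural-language
    # description; check the Tool Reference for the real schema.
    await session.call_tool("create_custom_testcase", {
        "prompt": (
            "Verify the checkout flow: add an item to the cart, proceed "
            "to checkout, enter payment details, and confirm the order."
        ),
    })
```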

Run Existing Test Cases

Prompt: Run the login test

Finds the test case by description and executes it. If you know the IDs, you can reference them directly:

Prompt: Run test cases 12, 15, and 23
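
A minimal sketch of the direct call, assuming the tool accepts a list of IDs:

```python
from mcp import ClientSession

async def run_by_id(session: ClientSession) -> None:
    # Assumes the tool accepts a list of test case IDs under
    # "testcase_ids"; the parameter name is illustrative.
    await session.call_tool("run_specific_testcases", {"testcase_ids": [12, 15, 23]})
```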

Update a Test Case

Prompt: Update test case 42 - change the title to "Login with SSO" and add an assertion that the user sees the dashboard

Modifies an existing test case’s title, description, steps, assertions, or feature group assignment.

Archive and Unarchive

Prompt: Archive test cases 5, 8, and 12

Marks test cases as archived (they won’t run in future test executions).

Prompt: Show me all archived test cases and unarchive the ones related to payments

Add to Main Suite

Prompt: Add test case 42 to the main test suite

Moves a custom test case out of “in-creation” status into the main suite so it runs in normal test executions.

Find Duplicates

Prompt: List the duplicate test cases. Give me an overview of which to keep and which to archive.

Analyzes all test cases, identifies duplicates based on test steps and assertions, and recommends which to keep and which to archive.

Import from CSV

Prompt: Use this attached CSV to create test case prompts for [App Name], then create them in Pie

Validates the CSV format, generates a test case prompt for each row, and creates all the test cases automatically.

Expand Coverage

Prompt: List the deep dive test cases on [feature name] and add them

Retrieves in-depth test cases for the specified feature and adds them to your suite, expanding coverage of that area.


Issue Management

Tools: get_issues, get_issue, approve_issues, reject_issues, resolve_issues

Review Issues

Prompt: List the issues

Fetches all active issues from your latest test run with severity, type, and descriptions.

Prompt: Show me the details for issue ISS-123

Gets full details including test case info, steps, assertions, and comments.

Filter Issues

Prompt: Show me all approved issues

Prompt: List issues that were first found in run R-456

Prompt: Show me all issues including resolved and rejected ones

Approve Issues

Prompt: Approve issues ISS-101 and ISS-102 as valid bugs

Confirms issues as valid after admin review. Changes triage state from “pending” to “approved”.

Prompt: Undo the approval on ISS-101

Reverses a previous approval back to “pending”.

Reject Issues

Prompt: Reject ISS-103 as a false positive because the expected behavior changed in the latest release

Marks issues as false positives with a reason. Also rejects associated findings.

Prompt: Undo the rejection on ISS-103

Reverses a rejection, restoring the issue and unarchiving associated test cases.

Resolve Issues

Prompt: Resolve ISS-104 and ISS-105 - these have been fixed in the latest build

Closes issues as fixed/resolved.

Cross-Run Analysis

Prompt: Check for issues approved in the latest run, and compare them to resolved issues from previous runs

Compares current issues against historical runs to identify recurring issues and resolution trends.

Find Duplicate Issues

Prompt: List the duplicate issues, suggest which one to archive

Identifies duplicates and recommends which to keep based on detail quality.


Test Results & Findings

Tools: get_results, get_findings, approve_findings, reject_findings

View Results

Prompt: Show me the results from the latest test run

Fetches simplified results with test IDs, titles, pass/fail status, and reasoning.

Prompt: Show me the detailed execution steps for test case 42

Gets step-by-step execution details including actions taken and assertions checked.

Prompt: List all failed tests from run R-456

Filters results by status and run.

View Findings

Prompt: Show me the findings from the latest run

Findings are run-specific bug detections that emerge from individual test cases. Issues are the de-duplicated, app-level version of findings.

Prompt: Show me pending findings for test case 42

Approve or Reject Findings

Prompt: Approve findings F-201 and F-202

Prompt: Reject finding F-203 - this is expected behavior after the redesign


Test Runs

Tools: create_run, run_discovery, get_runs

View Run History

Prompt: Show me all test runs

Lists all runs with IDs, build info, status, and timestamps.

Create a New Run

Prompt: Create a new test run

Prompt: Create a run with build B-789 on iOS 18.2
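
Sketched as tool calls - the build and os parameter names are guesses based on the prompt above, not confirmed schema:

```python
from mcp import ClientSession

async def start_runs(session: ClientSession) -> None:
    # A default run with no arguments, then one pinned to a build and
    # OS. "build" and "os" are assumed names based on the prompt above.
    await session.call_tool("create_run", {})
    await session.call_tool("create_run", {"build": "B-789", "os": "iOS 18.2"})
```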

Trigger Discovery

Prompt: Run discovery on my app

Starts an automated discovery process that explores your app and generates test cases. Only works when the app has no existing test cases.


Test Suites

Tools: create_test_suite, get_test_suites, update_test_suite, delete_test_suite

List Suites

Prompt: Show me all test suites

Returns suite names, IDs, and test case counts.

Create a Suite

Prompt: Create a test suite called "Smoke Tests" with test cases 1, 5, 12, and 20

Prompt: Create a "Regression Suite" with custom instructions: focus on payment flows

Update a Suite

Prompt: Add test cases 25 and 30 to the Smoke Tests suite

Prompt: Remove test case 5 from the Smoke Tests suite

Prompt: Rename the Smoke Tests suite to "Critical Path Tests"

Delete a Suite

Prompt: Delete the old Regression Suite

Removes the suite but does NOT delete the test cases - they’re just unlinked.


Key Features

Tools: manage_key_features, get_available_icons

Key features (also called “groups”) organize test cases into logical categories like “Login”, “Checkout”, or “Profile Management”.

View All Features

Prompt: List the key features and provide an overview

Shows all features with test coverage summaries and test case counts.

Create a Feature

Prompt: Create a key feature called "Onboarding Flow" with description "New user registration and setup"

Update a Feature

Prompt: Rename the "Login" feature to "Authentication" and update its description

Delete a Feature

Prompt: Delete the "Legacy Checkout" feature

Get Feature Tests

Prompt: List the test cases under [Key Feature Name]

Shows all test cases associated with the specified feature.


Credentials

Tools: get_credentials, create_credential

Manage login credentials that Pie uses when testing flows behind authentication screens.

List Credentials

Prompt: Show me all saved credentials

Returns credential IDs, names, usernames, and default status. Passwords are never returned.

Create a Credential

Prompt: Create a credential called "Admin User" with username admin@example.com and password TestPass123

Prompt: Create a default credential for the test account

If a credential is marked as the default, all other credentials lose their default status.
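
A sketch of the underlying call, with assumed field names:

```python
from mcp import ClientSession

async def add_admin_credential(session: ClientSession) -> None:
    # Field names are assumptions, not confirmed schema. Pie stores the
    # password but never returns it from get_credentials.
    await session.call_tool("create_credential", {
        "name": "Admin User",
        "username": "admin@example.com",
        "password": "TestPass123",
        "is_default": True,  # demotes any previously default credential
    })
```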


Scripts

Tools: get_scripts, create_script

Scripts are shell commands (typically curl commands) that Pie executes during test runs. They let tests interact with your backend - fetching test data, creating users, generating OTPs, or bypassing verification steps. Reference them in test steps using #{script-name} syntax, and Pie’s AI agent executes them at the right moment during the test flow.

List Scripts

Prompt: Show me all available scripts

Returns the ID, name, and shell command for each script.

Create a Script

Prompt: Create a script called "get-unique-phone" that runs: curl -X GET "https://api.example.com/test-phone" -H "Authorization: Bearer TOKEN"

The script can then be referenced in any test case as #{get-unique-phone}.

Use Scripts in Test Cases

Prompt: Create a test case: Sign up with a new phone number from #{get-unique-phone}, complete the OTP flow using #{generate-otp}, and verify the user lands on the dashboard

Scripts are referenced with #{script-name} in test prompts. Pie’s AI agent executes each script when the test reaches the relevant step, captures the output, and uses the returned data (phone numbers, OTPs, user credentials, etc.) in the following test steps.
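
Putting the two halves together as raw tool calls - a hedged sketch with assumed parameter names:

```python
from mcp import ClientSession

async def script_then_test(session: ClientSession) -> None:
    # Step 1: register the script. "name" and "command" are assumed
    # parameter names; the curl command mirrors the example above.
    await session.call_tool("create_script", {
        "name": "get-unique-phone",
        "command": 'curl -X GET "https://api.example.com/test-phone" '
                   '-H "Authorization: Bearer TOKEN"',
    })
    # Step 2: reference it with #{script-name} inside a test case prompt.
    await session.call_tool("create_custom_testcase", {
        "prompt": "Sign up with a new phone number from "
                  "#{get-unique-phone} and verify the user lands on the "
                  "dashboard.",
    })
```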

Common Script Patterns

Generating unique test data

Prompt: Create a script called "create-test-user" with: curl -X POST "https://staging-api.example.com/test/users" -H "Authorization: Bearer TOKEN" -H "Content-Type: application/json" -d '{"role": "premium"}'

Bypassing OTP/verification

Prompt: Create a script called "generate-otp" with: curl -X POST "https://staging-api.example.com/test/otp" -H "Authorization: Bearer TOKEN" -d '{"phone": "{{use the phone number from signup}}"}'

Fetching environment-specific data

Prompt: Create a script called "get-valid-card" with: curl -X GET "https://staging-api.example.com/test/payment-cards" -H "Authorization: Bearer TOKEN"

Scripts support parameterization with {{placeholder}} syntax - values from earlier test steps or previous script outputs are automatically substituted. See Script Parameterization for details.


Advanced Testing

Tools: trigger_rediscovery, get_step_doms, generate_tests_on_local, start_test_monitoring

Trigger Rediscovery

Prompt: Rediscover test case 42 - the navigation menu has been redesigned

Re-runs the test with awareness of UI changes and automatically updates the test case if needed. Provide context about what changed for better accuracy.
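
A sketch with illustrative parameter names:

```python
from mcp import ClientSession

async def rediscover(session: ClientSession) -> None:
    # "testcase_id" and "context" are illustrative parameter names; the
    # context string gives the rediscovery a hint about what changed.
    await session.call_tool("trigger_rediscovery", {
        "testcase_id": 42,
        "context": "The navigation menu has been redesigned.",
    })
```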

Inspect Step DOMs

Prompt: Show me the DOM content for steps 1 and 3 of test case 42

Fetches the actual HTML at each step of a test execution. Useful for debugging test failures.

Local Testing

Prompt: Generate a command to test my local app at http://localhost:3000

Produces a terminal command template for running test exploration against a localhost URL.

Monitor Test Completion

Prompt: Check the results for test case 42 after execution

Fetches results and findings for a completed test execution. Only call after the test has finished running.


App Information

Tools: get_app, health_check

View App Config

Prompt: Show me my app details

Returns app ID, name, platform, configuration, and settings.

Health Check

Prompt: Check if Pie is connected

Verifies connectivity and authentication with the Pie API.
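
The simplest possible sketch - useful as a first smoke test of your MCP setup (assumes health_check takes no arguments):

```python
from mcp import ClientSession

async def ping(session: ClientSession) -> None:
    # Assumes health_check takes no arguments; a handy first call to
    # verify connectivity and authentication before anything else.
    result = await session.call_tool("health_check", {})
    print(result.content)
```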


Reporting

These prompts combine multiple tools to generate comprehensive reports:

Weekly Summary

Prompt: Create a testing summary including total tests, pass rates, new issues, and critical blockers

Generates a report with total tests executed, pass/fail percentages, new issues, critical blockers, and week-over-week trends.

Coverage Analysis

Prompt: Show me test coverage for all key features and identify gaps

Provides coverage percentage per feature, identifies insufficient coverage, and recommends improvements.

Trend Analysis

Prompt: Compare the last 3 test runs and identify trends in failure rates

Compares recent runs, analyzes failure trends, and highlights improvements or regressions.