Turns scattered customer feedback from multiple sources into a scored, ranked backlog, automatically.
Customer feedback is scattered across App Store reviews, G2, Typeform, support tickets, and sales calls, each in a different format with different signal strengths. Synthesizing it into a prioritized backlog is manual, time-consuming, and still subjective: high-severity enterprise complaints get buried next to casual wishlist items, revenue signal is ignored, and effort is a gut call.
This pipeline automates that synthesis end-to-end.
Multi-source ingestion pipeline that normalizes customer feedback into a unified schema, uses Claude to perform thematic clustering, scores each cluster across four weighted dimensions, and emits a prioritized backlog report to Markdown, Slack, or Notion.
Input sources: App Store reviews (live via iTunes RSS), G2 reviews, Typeform responses, CSV upload
Triggers: Manual execution, Slack slash command (/customer-fr)
Output: A scored, ranked report with executive summary, theme-by-theme breakdown, representative quotes, and recommended next actions. Posts to Slack, saves to Markdown, or writes to Notion.
[Trigger: Manual, Cron, or Slack /customer-fr]
↓
Parallel ingestion
├── iTunes RSS API (live App Store reviews - no auth required)
├── G2 Reviews (public RSS feed)
├── Typeform (webhook or poll)
└── CSV / Sheets (upload or Google Sheets node)
↓
Merge + Normalize
(unified schema: text, source, date, rating per item)
↓
Claude - Thematic Clustering
(groups all feedback into named themes; assigns each item to a theme)
↓
Claude - Multi-Dimensional Scoring
(scores each theme: frequency · sentiment · revenue signal · effort)
↓
Sort by composite score
↓
Claude - Report Generation
(executive summary, ranked themes, representative quotes, recommendations)
↓
Switch: Output Mode (set in Config)
├── markdown → local file (via HTTP listener)
├── slack → Incoming Webhook with Block Kit formatting
└── notion → Notion API page in target database
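The Merge + Normalize stage above can be sketched as plain functions (in the workflow this logic lives in an n8n Code node). The unified schema fields come from the diagram; the source-specific field paths and CSV column names are assumptions about each feed's shape:

```javascript
// Reduce every branch to the unified schema: { text, source, date, rating }.
function normalizeAppStore(entry) {
  // Field paths assume the iTunes RSS JSON feed shape.
  return {
    text: entry.content?.label ?? "",
    source: "app_store",
    date: entry.updated?.label ?? null,
    rating: Number(entry["im:rating"]?.label) || null,
  };
}

function normalizeCsvRow(row) {
  // CSV columns (feedback, submitted_at, stars) are illustrative names.
  return {
    text: row.feedback ?? "",
    source: "csv",
    date: row.submitted_at ?? null,
    rating: row.stars ? Number(row.stars) : null,
  };
}

// Merge: concatenate all normalized branches and drop empty items.
function mergeFeedback(...branches) {
  return branches.flat().filter((item) => item.text.trim().length > 0);
}
```

Each additional source (G2, Typeform, Zendesk) only needs its own `normalize*` adapter; everything downstream of the Merge node is source-agnostic.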
Clustering over summarization. Most feedback tools summarize: they tell you what people said. This pipeline clusters: it identifies which customers said the same thing in different ways and groups them together. Summarization flattens signal; clustering preserves it. Each cluster surfaces item count, source distribution, and verbatim feedback per item.
RICE-adjacent scoring, not sentiment alone. Sentiment tells you how people feel. It doesn't tell you what to build next. Each theme gets scored across four dimensions: frequency (how many items), sentiment (positive demand vs. frustration-driven churn risk), revenue signal (enterprise accounts, upgrade blockers, churned customers), and effort hint. The composite score produces a ranked list that maps directly to how a PM would actually prioritize, not just what's loudest.
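The composite can be pictured as a weighted sum. The weights and 0-10 dimension scale below are illustrative assumptions for the sketch, not the values baked into the scoring prompt:

```javascript
// Illustrative composite-score calculation over the four dimensions.
// Assumes each dimension arrives normalized to 0-10 from the scoring stage.
const WEIGHTS = { frequency: 0.35, sentiment: 0.25, revenue: 0.3, effort: 0.1 };

function compositeScore(theme) {
  // Effort is inverted: lower effort should raise the score.
  const raw =
    WEIGHTS.frequency * theme.frequency +
    WEIGHTS.sentiment * theme.sentiment +
    WEIGHTS.revenue * theme.revenue +
    WEIGHTS.effort * (10 - theme.effort);
  return Math.round(raw * 10); // scale the 0-10 composite to 0-100
}
```

The sort stage then just runs `themes.sort((a, b) => b.score - a.score)`.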
Chained prompts, not one big prompt. The three Claude calls build on each other: clustering produces structured theme definitions, scoring operates on those definitions (not raw feedback), and the report generates from scored data. Each stage emits auditable output before the next prompt executes, making failures easy to isolate.
Config node pattern. All credentials and settings live in a single Code node at the start of the workflow. Every downstream node references $node['Config'].json. Changing the API key, output mode, or target app ID is a one-node operation: no hunting through 19 nodes to update a value.
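A sketch of the Config Code node body (any field names beyond those mentioned elsewhere in this README are assumptions):

```javascript
// n8n Code node: single source of truth for credentials and settings.
// Downstream nodes read values via $node['Config'].json,
// e.g. {{ $node['Config'].json.apiKey }} in an HTTP Request header.
return [
  {
    json: {
      apiKey: "sk-ant-...",      // Anthropic API key
      appId: "1252015962",       // App Store app ID (default: Notion)
      outputMode: "markdown",    // 'markdown' | 'slack' | 'notion'
      slackWebhookUrl: "",       // Slack output mode only
      notionApiKey: "",          // Notion output mode only
      notionDatabaseId: "",
    },
  },
];
```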
# Feature Request Intelligence Report
Generated: 2025-01-14 | Sources: 47 items across 4 sources
## Executive Summary
Customers are most frustrated by the lack of bulk export functionality,
with churn risk elevated among enterprise accounts. Mobile performance
issues represent a high-frequency complaint skewing heavily 1-star.
Collaboration features are requested with positive sentiment - users
want them rather than being blocked by their absence.
## Prioritized Themes
### 1. Bulk Export / Data Portability - Score: 91/100
- **Frequency:** 18 mentions across App Store, G2, and Typeform
- **Sentiment:** Strongly negative - described as a "dealbreaker" repeatedly
- **Revenue signal:** 4 enterprise accounts cited this in churned feedback
- **Effort hint:** API work + permissions model - medium complexity
- **Representative quotes:**
> "We can't use this at scale without a way to export everything at once." - G2, 2 stars
> "I've been waiting 2 years for bulk export. Moving to a competitor." - App Store, 1 star
### 2. Mobile Performance - Score: 78/100
- **Frequency:** 14 mentions (App Store-heavy)
- **Sentiment:** Negative - frustration, not feature requests
- **Revenue signal:** Low (consumer segment; no enterprise signals detected)
- **Effort hint:** Platform-level performance work - high complexity
- **Representative quotes:**
> "Crashes every time I try to open a large file on iPhone." - App Store, 1 star
### 3. Real-Time Collaboration - Score: 61/100
- **Frequency:** 9 mentions across G2 and Typeform
- **Sentiment:** Positive - framed as excitement, not frustration
- **Revenue signal:** 2 mid-market accounts mentioned in expansion feedback
- **Effort hint:** Infrastructure-level change - high complexity
## Recommended Next Actions
1. Prioritize bulk export: high frequency, confirmed revenue signal, active churn risk
2. Investigate mobile crash reports before next App Store release cycle
3. Add collaboration to roadmap with a public timeline - positive sentiment makes it a win to announce
Three Claude calls, chained: each builds on structured output from the last.
| Step | Prompt | Input | Output |
|---|---|---|---|
| 1 | `prompts/clustering-prompt.md` | Raw feedback array (text, source, date, rating) | Named theme clusters with item assignments |
| 2 | `prompts/scoring-prompt.md` | Theme clusters + constituent items | Each theme scored: frequency · sentiment · revenue signal · effort |
| 3 | `prompts/report-prompt.md` | Scored + sorted clusters | Full Markdown report with exec summary, ranked themes, quotes, recommendations |
Claude doesn't see raw feedback at the scoring stage: it works from the clustering output. This reduces noise and keeps scoring focused on theme-level signal rather than individual item variation.
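For illustration, the hand-off from stage 1 to stage 2 looks roughly like this (field names are assumptions; the prompt templates define the real contract — the point is that scoring sees theme definitions plus item references, not raw feedback text):

```javascript
// Approximate shape of the clustering stage's output, as consumed by scoring.
const clusteringOutput = {
  themes: [
    {
      name: "Bulk Export / Data Portability",
      description: "Requests to export all data at once",
      itemIds: [3, 7, 12, 18], // indices into the normalized feedback array
      sources: { app_store: 6, g2: 8, typeform: 4 },
    },
  ],
};
```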
| Source | Method | Auth | Status |
|---|---|---|---|
| App Store reviews | iTunes RSS API | None | ✅ Live |
| G2 reviews | HTTP Request (public RSS) | None | ✅ Live |
| CSV / Google Sheets | Google Sheets node | OAuth | ✅ Live |
| Typeform | Typeform trigger | API key | ✅ Live |
| Zendesk | Zendesk Search API | API key + email | ✅ Live |
| Sleekplan | Sleekplan API | API key | 🔲 Planned |
| Claude | Anthropic API | API key | ✅ Live |
| Slack (trigger) | Slash command `/customer-fr` | Bot token | ✅ Live |
| Slack (output) | `chat.postMessage` API | Bot token | ✅ Live |
| Notion (output) | Notion API | Integration token | ✅ Live |
- Core Pipeline: Sample data (22 rows, no external credentials) → Claude clustering + scoring + report → Markdown output
- Multi-Source Ingestion: Live App Store RSS (iTunes API) + G2/Typeform sample → Merge node → same Claude pipeline
- Multi-Output Routing: Switch node routes output to Markdown, Slack, or Notion based on `outputMode`
- Zendesk Ingestion: Live Zendesk ticket pull via Search API → normalized to unified schema → parallel branch feeding Merge node
- Slack Trigger: `/customer-fr` slash command triggers the pipeline from any channel, acks within 3 seconds, runs async, posts the report back to the channel with Block Kit formatting
| # | Source | Notes |
|---|---|---|
| 1 | Sleekplan | Ingest feature votes and feedback posts via Sleekplan API; weight by vote count as a frequency signal |
Feeds the existing Merge + Normalize stage with no changes to the Claude pipeline or output layer.
- n8n running locally or on Cloud
- `file-writer.ps1` running on `localhost:9998` (Markdown output mode only; see File writing workaround below)
- Import `workflows/phase-1-csv-to-report.json` into n8n
- Open the Config node and set your Anthropic API key
- Run `file-writer.ps1` (Markdown mode only)
- Click Execute Workflow
- Report appears in `outputs/YYYY-MM-DD-feature-request-report.md`
- Import `workflows/phase-2-multi-source.json`
- Open Config and set `apiKey` and `appId` to your target app
  - App ID is the number after `/id` in any App Store URL: `https://apps.apple.com/us/app/notion/id1252015962`
  - Default app ID (Notion: 1252015962) runs without modification
- Run `file-writer.ps1` and click Execute Workflow
- Import `workflows/phase-3-multi-output.json`
- Open Config and set:
  - `outputMode`: `'markdown'` | `'slack'` | `'notion'`
  - `slackWebhookUrl`: Incoming Webhook URL (Slack mode)
  - `notionApiKey` + `notionDatabaseId` (Notion mode)
Slack: api.slack.com/apps → New App → Incoming Webhooks → Add to Workspace → copy URL
Notion: notion.so/my-integrations → New Integration → copy token → share your database with the integration → copy database ID from its URL
- Import `workflows/phase-4-zendesk.json`
- Open Config and set:
  - `zendeskEmail`: your Zendesk agent email
  - `zendeskToken`: your Zendesk API token (Admin → Apps & Integrations → Zendesk API → Add API token)
  - `zendeskSubdomain`: the part before `.zendesk.com` in your URL
  - `zendeskQuery`: search query to filter tickets (default: `type:ticket created>2025-01-01`)
- Set all other fields (Anthropic key, output mode, etc.) as in phase 3
- Click Execute Workflow
Query examples:
- All tickets since a date: `type:ticket created>2025-01-01`
- Tagged tickets only: `type:ticket tags:product-feedback`
- Bad CSAT tickets: `type:ticket status:solved satisfaction:bad`
- Solved bugs since date: `type:ticket tags:bug created>2025-01-01 status:solved`
Rating mapping: CSAT score takes priority (good=5, bad=1). Falls back to ticket priority (urgent=1, high=2, normal=3, low=4).
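The mapping above, written out as a function (the Zendesk `satisfaction_rating.score` and `priority` fields are standard ticket fields; the function name is ours):

```javascript
// Zendesk → unified-schema rating: CSAT first, then ticket priority.
function zendeskRating(ticket) {
  const csat = ticket.satisfaction_rating?.score; // 'good' | 'bad' | ...
  if (csat === "good") return 5;
  if (csat === "bad") return 1;
  const byPriority = { urgent: 1, high: 2, normal: 3, low: 4 };
  return byPriority[ticket.priority] ?? null; // null when neither is set
}
```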
- Import `workflows/phase-slack-trigger.json`
- Open Config and set:
  - `apiKey`: your Anthropic API key
  - `slackBotToken`: your Slack bot token (`xoxb-...`)
  - `appId`: App Store app ID (optional, for the iTunes RSS source)
- Publish the workflow (production webhooks only register on published workflows)
Slack App setup:
- Create a Slack App at api.slack.com/apps
- Add bot scopes: `chat:write`, `commands` (OAuth & Permissions)
- Install to workspace, copy the Bot User OAuth Token
- Create slash command `/customer-fr` pointing to `https://<your-n8n-host>/webhook/fri-slack-trigger`
- Invite the bot to channels where you want to use the command: `/invite @YourBotName`
Slash command parameters:
- `/customer-fr`: all sources, last 30 days
- `/customer-fr ios`: filter feedback by keyword
- `/customer-fr last 7 days`: custom time range
- `/customer-fr ios last 14 days`: both filters combined
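The parameter grammar above can be parsed with a small function. This is an illustrative sketch, not the workflow's actual parsing code; it assumes Slack delivers only the text after the command name:

```javascript
// "/customer-fr ios last 7 days" → { keyword: "ios", days: 7 }
function parseCommand(text = "") {
  const match = text.match(/last\s+(\d+)\s+days?/i);
  const days = match ? Number(match[1]) : 30; // default window: 30 days
  // Whatever remains after removing the time range is the keyword filter.
  const keyword = text.replace(/last\s+\d+\s+days?/i, "").trim() || null;
  return { keyword, days };
}
```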
Local development: n8n must be reachable from Slack. Use ngrok (`ngrok http 5678`) to expose your local instance. Avoid localtunnel; it drops connections frequently.
Important n8n v2 notes:
- "Publish" is how n8n v2 activates workflows. Production webhook URLs only work on published workflows.
- Every `n8n import:workflow` deactivates the workflow. You must republish after each import.
- Production webhook executions don't appear on the canvas. Check the Executions list in the sidebar.
- The workflow uses the Config node pattern for API keys (key passed as header), not n8n's credential system. This avoids credential ID mismatch issues when importing across n8n instances.
n8n's Code node sandbox blocks `fs`. Markdown output works by POSTing to a local HTTP listener instead:

`powershell -ExecutionPolicy Bypass -File "path\to\file-writer.ps1"`

Bring your own `file-writer.ps1`: any HTTP listener that accepts `POST { filename, content }` and writes the file to disk will work.
workflows/
phase-1-csv-to-report.json - Core pipeline: sample data to report
phase-2-multi-source.json - Live App Store + G2 ingestion
phase-3-multi-output.json - Multi-output routing (Markdown/Slack/Notion)
phase-4-zendesk.json - Zendesk ticket ingestion
phase-slack-trigger.json - Slack /customer-fr slash command trigger
*-with-creds.json - Gitignored variants with credentials baked in
prompts/ - Claude prompt templates: clustering, scoring, report
samples/ - Sample input CSV + example output report
outputs/ - Generated reports (gitignored)
docs/ - Architecture notes and implementation decisions
file-writer.ps1 - Local HTTP listener for Markdown file output
