Augmenting Product Ops with AI: Building Scaffolding for Scale

Context

At CompanyCam (a visual communication platform serving trades like roofing, plumbing, and construction), our Product & Engineering organization has been growing quickly. In 2025, we crossed the $100M ARR milestone, expanded our product portfolio, and deepened our bets on AI-powered workflows, a CRM wedge, and mid-market growth.

As Director of Product Design & Research, and later of Product Operations, I have been responsible for creating the systems, rituals, and infrastructure that help us learn faster and build smarter. But as our product suite and customer base scaled, so did the friction: from manual research synthesis to inconsistent visibility into user feedback. I began experimenting with AI to remove that friction, reduce operational drag, and strengthen our culture of continuous discovery.

This isn’t a story about chasing shiny tools. It’s a story about using AI as scaffolding: the quiet structure that supports better, faster, more human work.

Challenge

Product Ops sits at the intersection of people, process, and product. My team’s challenge was to keep discovery lightweight and continuous, even as we scaled.

We had:

  • Dozens of ongoing discovery efforts across trios (PMs, Designers, and Tech Leads)

  • Multiple streams of customer feedback pouring in from Intercom, UserTesting, and Gong

  • Manual effort required to synthesize feedback, summarize interviews, and route insights

  • A desire to democratize discovery, but not overwhelm teams with new tools or complexity

In short: We needed to reduce friction without reducing rigor.

AI offered a path to do that, but only if it was implemented intentionally, with the same care we apply to product design itself.

Approach: Building AI Scaffolding

I approached AI not as a single solution, but as an ecosystem of lightweight tools and automations designed to:

  1. Automate repetitive operational tasks

  2. Accelerate sense-making and synthesis

  3. Scale enablement across Product, Design, and Engineering

Here’s how I did it.

⚙️ 1. Automating Customer Feedback Summaries

Every day, hundreds of customer conversations close in Intercom. Buried in those threads are product insights, but finding them used to be a full-time job.

I built an automation using Zapier + GPT-4.1 Nano that summarizes each closed Intercom conversation, identifies the most relevant customer quote, and routes it to a feature-specific Slack channel such as:

  • #feedback-reputation

  • #reports-to-pages-feedback

  • #mid-market-feedback

This turned a noisy firehose into a curated stream of insights. Designers and PMs now see real, usable feedback in context, not buried in tickets.
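The summarize-and-route step can be sketched in a few lines. The real pipeline runs in Zapier with a GPT-4.1 Nano step; in this sketch the summarization call is stubbed and the keyword-based routing table is a simplified assumption (the `#feedback-general` fallback channel is hypothetical).

```python
# Sketch of the Intercom -> Slack routing step. The LLM summarization is
# stubbed; keyword routing stands in for whatever logic the Zap actually uses.

FEEDBACK_CHANNELS = {
    "reputation": "#feedback-reputation",
    "report": "#reports-to-pages-feedback",
    "mid-market": "#mid-market-feedback",
}
DEFAULT_CHANNEL = "#feedback-general"  # hypothetical catch-all


def summarize(conversation: str) -> str:
    """Placeholder for the GPT-4.1 Nano step: in the real Zap, the model
    returns a short summary plus the single most relevant customer quote."""
    return conversation[:200]


def route(conversation: str) -> str:
    """Pick a feature-specific Slack channel by scanning for keywords."""
    text = conversation.lower()
    for keyword, channel in FEEDBACK_CHANNELS.items():
        if keyword in text:
            return channel
    return DEFAULT_CHANNEL


convo = "Customer asked how reports sync to pages for their crew."
print(route(convo))  # "#reports-to-pages-feedback"
```

The key design choice is that routing happens per conversation at close time, so each channel receives a steady trickle of relevant summaries rather than a daily dump.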

Impact:

  • Reduced manual triage time by 60%

  • Improved visibility of customer needs across Product & Engineering

  • Created a shared, always-on pulse of customer sentiment

🧠 2. Streamlining Research Synthesis with AI Snapshots

I designed a custom GPT to help teams synthesize customer interviews using Teresa Torres’ Interview Snapshot format. By uploading a transcript (e.g., .vtt file), the GPT automatically:

  • Extracts key quotes

  • Summarizes insights

  • Suggests potential “Opportunities” for the team’s Opportunity Solution Tree

This gave every trio — not just researchers — the ability to capture high-quality insights without starting from scratch.
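Most of the heavy lifting happens inside the custom GPT, but the transcript-preparation step is simple enough to sketch: strip the WebVTT headers, cue numbers, and timestamps so only the spoken dialogue goes into the prompt. The snapshot prompt shown here is an illustrative simplification of the actual GPT instructions.

```python
# Sketch: flatten a .vtt transcript into plain text, then wrap it in a
# simplified Interview Snapshot prompt. The prompt wording is an assumption.

import re

TIMESTAMP = re.compile(r"\d{2}:\d{2}(:\d{2})?\.\d{3} --> ")


def vtt_to_text(vtt: str) -> str:
    """Drop the WEBVTT header, cue numbers, and timing lines."""
    lines = []
    for line in vtt.splitlines():
        line = line.strip()
        if not line or line == "WEBVTT" or TIMESTAMP.match(line):
            continue
        if line.isdigit():  # bare cue identifiers
            continue
        lines.append(line)
    return "\n".join(lines)


SNAPSHOT_PROMPT = """Using Teresa Torres' Interview Snapshot format, read the
transcript below and produce: (1) key quotes, (2) a summary of insights,
(3) candidate Opportunities for the Opportunity Solution Tree.

Transcript:
{transcript}"""

sample = """WEBVTT

1
00:00:01.000 --> 00:00:04.000
Interviewer: How do you usually get paid?

2
00:00:04.500 --> 00:00:09.000
Customer: Mostly checks, which means chasing people for weeks."""

print(SNAPSHOT_PROMPT.format(transcript=vtt_to_text(sample)))
```

Cleaning the transcript first keeps the prompt short and stops the model from quoting timestamps back as "key quotes."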

Impact:

  • Democratized discovery documentation

  • Reduced synthesis time from hours to minutes

  • Increased adoption of Continuous Discovery Habits across trios

  • Reduced research rework — teammates could scan concise snapshots in Dovetail instead of rewatching full recordings

  • Helped stakeholders quickly understand what customers were actually saying

🧾 3. Using AI to Strengthen Tool Decisions

As we evaluated whether to consolidate our research platforms, I used ChatGPT and Perplexity to run a structured comparison between Maze + UserInterviews and UserTesting.

The AI helped me analyze seat pricing, feature overlap, and workflow friction, surfacing the insight that UserTesting could replace both at a lower cost and with broader seat access.

Impact:

  • Informed our decision to streamline the research stack

  • Saved the company approximately $15K per year

  • Increased researcher autonomy by reducing tool-access bottlenecks

📰 4. Creating an AI Digest for the Leadership Team

To stay ahead of the rapidly changing AI landscape, I built a daily 9 a.m. AI Digest that summarizes:

  • Key AI industry trends

  • Developments in construction tech and field services

  • Practical tool updates relevant to Product

This digest is automatically delivered via ChatGPT and Slack, ensuring that leadership decisions are informed by current, credible context.
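The delivery step can be sketched as packing those three sections into a Slack message payload. The digest content itself comes from a scheduled ChatGPT task; the Block Kit structure below is a minimal sketch, and the section items are placeholders.

```python
# Sketch: pack digest sections into a Slack Block Kit payload for a webhook.
# Items are placeholders; the real content comes from a scheduled GPT task.

import json


def build_digest_payload(sections: dict) -> dict:
    """Turn {section title: list of bullet strings} into Block Kit blocks."""
    blocks = [{"type": "header",
               "text": {"type": "plain_text", "text": "AI Digest"}}]
    for title, items in sections.items():
        bullets = "\n".join(f"• {item}" for item in items)
        blocks.append({"type": "section",
                       "text": {"type": "mrkdwn",
                                "text": f"*{title}*\n{bullets}"}})
    return {"blocks": blocks}


payload = build_digest_payload({
    "AI industry trends": ["Example trend item"],
    "Construction tech & field services": ["Example development"],
    "Tool updates for Product": ["Example tool note"],
})
print(json.dumps(payload, indent=2))
# Delivery is one HTTP POST of this payload to a Slack incoming webhook URL.
```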

🧑‍🔬 5. Testing Product Copy with a Synthetic User

When we began refining the messaging for CompanyCam’s Payments feature, I wanted a way to test our copy against realistic customer reactions, without waiting for full campaigns to launch or scheduling rounds of interviews every time we made a change.

So, I built a synthetic user testing environment inside Claude.

Using our established CompanyCam personas, I trained a Claude project to role-play as these users based on real customer insights from our research library. Each persona was grounded in authentic payment-related attitudes around cash flow, fees, simplicity, and trust: the same themes we hear repeatedly from our customers.

Then I extended the project so it could dynamically:

  1. Ask which persona(s) and company size(s) I want to test with

  2. Role-play as those users in response to marketing, onboarding, or product copy

  3. Provide structured feedback on:

    • Resonance (“Does this speak to my priorities?”)

    • Tone and language fit (“Would I trust this message?”)

    • Pain-point alignment (“Does this solve the right problem?”)

    • Purchase influence (“Would this make me more likely to use Payments?”)

Example output:

“As an admin at a mid-sized company, this headline worries me — it highlights credit card fees before value. Lead with how this helps me get paid faster.”

“Field worker here — this sounds complicated. I need two-button simple, not five steps. Show me the flow, not just the feature.”
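The persona setup behind those role-plays can be sketched as a system-prompt builder. The persona descriptions and rubric below are illustrative placeholders, not our actual research-library data; the real project lives inside Claude and is grounded in real customer insights.

```python
# Sketch of the synthetic-user setup: select personas, compose a role-play
# system prompt with the feedback rubric. Persona text here is illustrative.

PERSONAS = {
    "admin": "Office admin at a mid-sized contractor; cares about cash flow, "
             "fees, and getting paid faster.",
    "field_worker": "Field worker; values two-button simplicity and distrusts "
                    "multi-step workflows.",
}

RUBRIC = [
    "Resonance: does this speak to my priorities?",
    "Tone and language fit: would I trust this message?",
    "Pain-point alignment: does this solve the right problem?",
    "Purchase influence: would this make me more likely to use Payments?",
]


def build_system_prompt(persona_keys: list, company_size: str) -> str:
    """Compose role-play instructions for the selected personas."""
    persona_text = "\n".join(f"- {PERSONAS[k]}" for k in persona_keys)
    rubric_text = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(RUBRIC))
    return (f"Role-play as the following CompanyCam customer(s) at a "
            f"{company_size} company:\n{persona_text}\n\n"
            f"React to the copy you are given, then score it on:\n"
            f"{rubric_text}")


print(build_system_prompt(["admin"], "mid-sized"))
```

Keeping the rubric in the system prompt is what makes the feedback structured and comparable across copy variants, rather than free-form model opinion.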

Impact

This synthetic user became a living mirror for our messaging decisions. It allowed our teams to:

  • Validate product and marketing copy before release

  • Pressure-test assumptions about tone, trust, and comprehension

  • Align messaging with real-world motivations like cash flow reliability, speed, and ease

  • Give Product Marketing and Lifecycle Marketing teams faster, more confident starting points for A/B testing

Instead of replacing real testing, it helped us get to better test options sooner. By using AI to rule out weak directions early, our marketing teams could focus their experiments on impactful variations — the ones most likely to move the needle with real users.

It also served as a lightweight enablement tool for PMs and designers, who could quickly “talk” to our personas when crafting launch materials or writing release notes.

By grounding synthetic feedback in real behavioral patterns, not just guesswork, I created a fast, ethical, and scalable way to sense-check our storytelling that complements, not replaces, human research. We’re using AI to accelerate good research, helping teams spend less time guessing and more time testing what actually matters.

Tool Stack

Prompt-Based Tools: ChatGPT (4.1 / 5), Claude, Perplexity, Gemini, Notion AI
Automation & Integration: Zapier + GPT-4.1 Nano, Make, n8n
Product Discovery Enablement: Custom GPTs (Interview Snapshot, AI Digest), UserTesting, Dovetail AI

Impact

By intentionally layering AI into our operational workflows, we:

  • Reduced repetitive, manual tasks that slowed discovery

  • Made customer feedback more visible, timely, and actionable

  • Strengthened cross-team learning loops

  • Enabled faster, smarter testing for Product and Marketing teams

  • Positioned Product Ops as the connective tissue for safe, scalable AI adoption

AI didn’t replace our curiosity; it amplified it. It helped us focus less on admin work and more on understanding our users and shaping strategy.

Reflection

The real power of AI isn’t automation for automation’s sake. It’s about creating space for creativity, curiosity, and deeper thinking.

By using AI as scaffolding rather than structure, I gave my team the breathing room to do their best work. We’re now exploring how to embed these patterns into broader systems, from AI-powered feedback routing to internal “AI Playbooks” that help others build responsibly.

This work reminded me that operational excellence isn’t about doing more. It’s about making room for what matters most. Sometimes the smartest way to do that is to let the machines do the boring parts.
