Piper Morgan - AI Product Management Assistant

What We've Learned: Building AI Systems That Actually Work

Insights from three months of intensive AI development by Christian Crumlish - discoveries that counter conventional AI wisdom and demonstrate systematic human-AI collaboration.

Meet Christian: The Context Behind These Discoveries

Christian Crumlish is a product management professional with deep expertise in civic technology, systematic methodology development, and AI integration. Through building Piper Morgan, he's demonstrating how AI can systematically enhance rather than replace human PM expertise.

  • Director of Product Management with civic technology focus at Kind Systems
  • AI Integration Pioneer developing practical human-AI collaboration patterns
  • Systematic Excellence Advocate proving rigorous methodology accelerates development
  • Building-in-Public Practitioner sharing every decision and lesson learned

Vision for PM × AI

"AI doesn't replace PM judgment—it amplifies it systematically. Through transparent methodology development and ethical-first architecture, we're proving that human-AI collaboration can achieve capabilities neither could reach alone."

Current Mission: Demonstrate that AI-augmented product management, done with systematic excellence, creates compound value that transforms strategic work.

Why This Context Matters

The insights shared below come from hands-on experience building an AI system systematically while maintaining PM judgment and strategic thinking. This isn't theoretical AI advice—these are battle-tested patterns from actual development work.

Building-in-Public Community

635+ PM professionals following this systematic methodology development

The Biggest Surprise: Verification Accelerates Rather Than Slows Development

What everyone assumes

Checking AI work takes extra time and slows down the "AI speed advantage."

What we discovered

Systematic verification actually makes AI development faster, not slower.

The evidence

Our 15-minute ADR (Architecture Decision Record) migration pattern became our most reliable development accelerator. Instead of debugging mysterious failures hours later, we catch misalignment immediately and course-correct in real time.

Strategic insight for leaders: AI tools that encourage verification aren't slower - they're more sustainable. Budget for systematic checking up front and save time on debugging later.
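
For readers who want something concrete, here is a minimal sketch of what a verification checkpoint can look like in Python. The decision IDs, folder names, and checks are hypothetical illustrations, not Piper Morgan's actual tooling.

```python
"""Minimal sketch of a verification checkpoint run between AI work sessions.

The decision records and checks below are illustrative placeholders.
"""
from dataclasses import dataclass
from pathlib import Path
from typing import Callable


@dataclass
class DecisionCheck:
    """Pairs an architecture decision with a fast, automatable sanity check."""
    decision_id: str   # e.g. a hypothetical "ADR-007"
    description: str
    passes: Callable[[], bool]


def no_raw_sql_outside_repositories() -> bool:
    """Example check: flag raw SQL outside a hypothetical repositories/ folder."""
    offenders = [
        path for path in Path("src").rglob("*.py")
        if "repositories" not in path.parts
        and "SELECT " in path.read_text(errors="ignore")
    ]
    return not offenders


CHECKS = [
    DecisionCheck(
        decision_id="ADR-007",
        description="Persistence stays isolated behind the repository layer",
        passes=no_raw_sql_outside_repositories,
    ),
]

if __name__ == "__main__":
    failures = [check for check in CHECKS if not check.passes()]
    for check in failures:
        print(f"MISALIGNED with {check.decision_id}: {check.description}")
    print("Verification passed" if not failures else f"{len(failures)} check(s) failed")
```

A checkpoint like this takes minutes to run, which is what makes it an accelerator rather than a tax: misalignment surfaces immediately instead of hours later.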

The Context Problem: AI Memory vs. AI Understanding

What everyone assumes

AI tools with longer context windows solve the "AI forgets what we were doing" problem.

What we discovered

Context length doesn't solve context quality. AI can "remember" everything and still misunderstand what you're trying to accomplish.

Strategic insight

When evaluating AI tools, ask about role clarity and handoff protocols, not just context window size. A focused AI assistant beats a confused AI encyclopedia.
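
To make "role clarity and handoff protocols" concrete, here is an illustrative Python sketch of a session handoff brief. The field names and example values are assumptions for the sake of the example, not a real Piper Morgan interface.

```python
"""Sketch of a session handoff brief: explicit role and goal framing instead of
relying on a long raw transcript. All field names are illustrative."""
from dataclasses import dataclass, field


@dataclass
class HandoffBrief:
    role: str                      # who the assistant is acting as this session
    objective: str                 # the single outcome this session is accountable for
    decisions_already_made: list[str] = field(default_factory=list)
    out_of_scope: list[str] = field(default_factory=list)

    def to_prompt_preamble(self) -> str:
        """Render the brief as a compact preamble to prepend to the session."""
        lines = [
            f"Role: {self.role}",
            f"Objective: {self.objective}",
            "Decisions already made (do not reopen):",
            *[f"  - {decision}" for decision in self.decisions_already_made],
            "Out of scope for this session:",
            *[f"  - {item}" for item in self.out_of_scope],
        ]
        return "\n".join(lines)


brief = HandoffBrief(
    role="Implementation assistant for the intake workflow",
    objective="Add validation to the issue-creation endpoint",
    decisions_already_made=["Input validation lives in the data models"],
    out_of_scope=["Changing the GitHub integration"],
)
print(brief.to_prompt_preamble())
```

The point is not the data structure itself but the discipline it enforces: a short, explicit statement of role and objective does more for context quality than another hundred thousand tokens of transcript.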

The Skill Evolution Reality: "I Am Not a Programmer" Still True

What everyone assumes

Using AI for development means you need to become technical or AI will replace non-technical roles.

What we discovered

The most valuable skill is systematic thinking about problems, not technical implementation. Six months of AI-assisted development and I'm still fundamentally a PM orchestrating intelligent tools.

Strategic insight

Invest in systematic thinking skills and process clarity, not just AI tool training. The professionals who thrive with AI are the ones who can clearly define what success looks like.

The Integration Paradox: Simple Tools, Complex Orchestration

What everyone assumes

AI will simplify workflows by replacing multiple tools with one intelligent assistant.

What we discovered

The most powerful AI applications use multiple specialized tools with sophisticated coordination, not one general-purpose tool.

Strategic insight

Plan for AI tool portfolios, not AI tool replacement. The organizations that win with AI will be those that excel at multi-tool coordination, not those that find the one perfect AI solution.
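
A hedged sketch of what multi-tool coordination can look like in code: a thin routing layer that sends each task type to a specialized tool. The tool names and routing rules below are placeholders, not any specific product's API.

```python
"""Sketch of multi-tool coordination: route each task to a specialized tool
rather than asking one general assistant to do everything."""
from typing import Callable


def draft_with_writing_tool(task: str) -> str:
    return f"[drafting tool] {task}"


def review_with_code_tool(task: str) -> str:
    return f"[code analysis tool] {task}"


def summarize_with_retrieval_tool(task: str) -> str:
    return f"[retrieval + summarization tool] {task}"


# The orchestration layer is where the product thinking lives:
# which tool, in what order, with what handoff between them.
ROUTES: dict[str, Callable[[str], str]] = {
    "draft": draft_with_writing_tool,
    "code_review": review_with_code_tool,
    "research": summarize_with_retrieval_tool,
}


def route(task_type: str, task: str) -> str:
    handler = ROUTES.get(task_type)
    if handler is None:
        raise ValueError(f"No tool registered for task type: {task_type!r}")
    return handler(task)


print(route("research", "Summarize last week's user interviews"))
```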

The Documentation Discovery: AI Changes What's Worth Capturing

What everyone assumes

AI eliminates the need for documentation because AI can figure things out.

What we discovered

AI makes certain types of documentation incredibly valuable and other types obsolete. The skill is knowing which is which.

Strategic insight

Invest in decision documentation and pattern capture, not detailed process manuals. AI can recreate the "how" if you've clearly documented the "what" and "why."
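
As one possible shape for that decision documentation, here is an illustrative Python sketch of a lightweight decision record; the fields and example content are assumptions rather than Piper Morgan's actual schema.

```python
"""Sketch of a lightweight decision record: capture the 'what' and 'why' so an
AI assistant (or a future teammate) can reconstruct the 'how'."""
from dataclasses import dataclass
from datetime import date


@dataclass
class DecisionRecord:
    title: str           # what was decided
    context: str         # why the decision was needed
    decision: str        # the choice that was made
    consequences: str    # trade-offs accepted, and what this rules out
    decided_on: date

    def to_markdown(self) -> str:
        return "\n".join([
            f"# {self.title} ({self.decided_on.isoformat()})",
            f"## Context\n{self.context}",
            f"## Decision\n{self.decision}",
            f"## Consequences\n{self.consequences}",
        ])


record = DecisionRecord(
    title="Use a message queue between intake and analysis",
    context="Synchronous calls coupled two workflows that change at different rates.",
    decision="Publish intake events to a queue; the analysis service consumes them.",
    consequences="Adds operational overhead; decouples deploys and absorbs load spikes.",
    decided_on=date.today(),
)
print(record.to_markdown())
```

Notice what the record does not contain: step-by-step implementation detail. That is exactly the part AI can regenerate on demand when the decision and its rationale are clear.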

The Quality Paradox: Higher Standards, Faster Delivery

What everyone assumes

AI development means accepting "good enough" quality in exchange for speed.

What we discovered

AI enables higher quality standards because systematic approaches scale better than manual approaches.

Strategic insight

Use AI adoption as an opportunity to raise quality standards, not lower them. Organizations with strong systematic practices will see the biggest AI benefits.

The evidence: Our test-driven development patterns work better with AI than without because AI excels at systematic implementation of clear specifications. We catch more edge cases and handle more scenarios because AI doesn't get bored with thoroughness.
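
Here is a small illustration of that test-first pattern, assuming Python and the standard unittest module: the human writes the specification as tests, including the boring edge cases, and the assistant implements against them. The function and cases are invented for the example.

```python
"""Sketch of the test-first pattern: tests are the specification the human owns;
the implementation is the part an assistant can fill in."""
import unittest


def normalize_priority(raw: str) -> str:
    """The kind of implementation an assistant would produce from the tests below."""
    cleaned = raw.strip().lower()
    aliases = {"p0": "critical", "p1": "high", "p2": "medium", "p3": "low"}
    if cleaned in aliases:
        return aliases[cleaned]
    if cleaned in aliases.values():
        return cleaned
    raise ValueError(f"Unknown priority: {raw!r}")


class TestNormalizePriority(unittest.TestCase):
    # The edge cases are the specification: cheap for the human to state,
    # cheap for the AI to satisfy exhaustively.
    def test_aliases_map_to_canonical_names(self):
        self.assertEqual(normalize_priority("P1"), "high")

    def test_whitespace_and_case_are_ignored(self):
        self.assertEqual(normalize_priority("  Critical "), "critical")

    def test_unknown_values_fail_loudly(self):
        with self.assertRaises(ValueError):
            normalize_priority("urgent-ish")


if __name__ == "__main__":
    unittest.main()
```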

The Human Factor: AI Makes Soft Skills More Important, Not Less

What everyone assumes

AI reduces the importance of communication, stakeholder management, and other "soft skills" because AI handles more of the technical work.

What we discovered

AI amplifies the impact of human judgment, strategic thinking, and relationship skills. Technical execution becomes easier; strategic alignment becomes more critical.

Strategic insight

AI adoption requires investment in strategic thinking and communication skills, not just technical training. The bottleneck shifts from implementation capacity to alignment quality.

The evidence: Our biggest project risks come from misaligned objectives or unclear requirements, not technical implementation failures. AI makes it easier to build the wrong thing efficiently.

The Iteration Insight: AI Changes How You Learn From Mistakes

What everyone assumes

AI reduces the importance of learning from mistakes because AI handles more of the error-prone work.

What we discovered

AI changes what types of mistakes you make and how quickly you can learn from them. Pattern recognition becomes more valuable than error prevention.

Strategic insight

Build organizational capabilities for rapid iteration and course correction, not just careful planning. AI environments reward adaptive learning over comprehensive foresight.

The evidence: We make different mistakes now - more strategic misalignment, fewer technical bugs. But we catch and correct strategic mistakes much faster because AI enables rapid iteration on approach, not just implementation.

The Transparency Advantage: Showing Your Work Builds Confidence

What everyone assumes

AI development should hide the complexity and present clean final results.

What we discovered

Transparent AI development processes build more stakeholder confidence than polished presentations of AI output.

The evidence

Our building-in-public approach generated more professional interest and credibility than traditional product development approaches. People trust AI-augmented work more when they can see the systematic thinking behind it.

Strategic insight: Invest in process transparency and documentation systems. The ability to show how AI-augmented decisions were made becomes a strategic differentiator.

The Scale Reality: Patterns That Work at Small Scale Need Conscious Design to Scale

What everyone assumes

AI patterns that work for individual contributors automatically scale to teams and organizations.

What we discovered

Scaling AI collaboration requires explicit coordination design, not just tool deployment. The patterns that work for one PM need systematic adaptation for teams.

The evidence

Our multi-agent coordination patterns work well for individual development but require additional handoff protocols and alignment mechanisms when multiple people are involved.

Strategic insight: Plan for AI coordination systems, not just AI tool rollouts. The organizations that scale AI successfully will be those that invest in collaboration design, not just capability deployment.
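
One way to make those handoff protocols explicit, sketched in Python with illustrative field names (not an actual Piper Morgan interface): each handoff names an owner, the state of the work, and what must be confirmed before the next person or agent continues.

```python
"""Sketch of an explicit handoff protocol for scaling individual AI patterns to a team."""
from dataclasses import dataclass, field
from enum import Enum


class WorkState(str, Enum):
    DRAFTED = "drafted"      # AI-generated, not yet reviewed
    VERIFIED = "verified"    # checked against requirements by a human
    BLOCKED = "blocked"      # needs a decision before work continues


@dataclass
class Handoff:
    from_owner: str
    to_owner: str
    state: WorkState
    summary: str
    confirm_before_continuing: list[str] = field(default_factory=list)

    def is_ready(self) -> bool:
        """A handoff is ready only when verified and nothing is left to confirm."""
        return self.state is WorkState.VERIFIED and not self.confirm_before_continuing


handoff = Handoff(
    from_owner="pm",
    to_owner="engineer",
    state=WorkState.DRAFTED,
    summary="AI-drafted acceptance criteria for the intake workflow",
    confirm_before_continuing=["Edge cases reviewed against the original user report"],
)
print("ready" if handoff.is_ready() else "not ready: review required before pickup")
```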

What This Means for Your AI Strategy

If You're Just Getting Started

  • Start with verification habits: Build systematic checking into your AI workflow from day one. Quality-first approaches pay dividends immediately.
  • Focus on problem definition: The biggest AI wins come from clear specifications, not clever prompting. Invest in requirements thinking.
  • Plan for coordination: Even if you're working individually, design your AI workflow for eventual team collaboration.

If You're Already Using AI

  • Audit for systematic approaches: Which of your current AI uses follow repeatable patterns? Which are ad hoc? Systematize what's working.
  • Evaluate your tool portfolio: Are you trying to make one AI tool do everything, or are you strategically deploying different tools for different tasks?
  • Document strategic decisions: Capture the "why" behind AI approaches, not just the "how." Future you (and your team) will thank you.

If You're Leading AI Adoption

  • Invest in process design: The biggest AI ROI comes from systematic approaches to AI collaboration, not just tool deployment.
  • Plan for skill evolution: Budget for strategic thinking development, not just AI tool training. The bottleneck is alignment quality, not implementation capacity.
  • Build transparency systems: Stakeholder confidence in AI-augmented work comes from visible systematic processes, not just impressive outputs.

The Bottom Line

Three months of intensive AI development taught us that the future isn't human vs. AI or even human + AI. It's systematic human intelligence orchestrating systematic AI capabilities.

The professionals and organizations that understand this distinction will build sustainable competitive advantages while others cycle through AI tools looking for magic solutions.

Want to see how these insights apply in practice? Follow the Building Piper Morgan series where we document ongoing discoveries - including the inevitable mistakes and course corrections.

Ready to test these patterns in your own work? Get involved - we're always looking for perspectives from other practitioners who are building systematically with AI.