Client Conversation Analysis - Uncovered Issues

Analysis of client conversations reveals critical technical and business issues NOT covered in the current technical discovery

Sources: 3 client meetings (Aug 27, Sep 15, Sep 26, 2025)


🚨 CRITICAL GAPS DISCOVERED

Gap #1: AI Analysis System NOT Working 🔴 CRITICAL

From Transcript (Sep 26 - Francis):

"First of all, not all the ads have AI feedback. I have no idea why. I've looked at a few and maybe of the few, like three out of ten, you know, that are giving feedback, one maybe... What happens to the other seven that have no feedback? What's causing the gap?"

"There's definitely no grounding [in AI suggestions]. And also, if we were to give feedback, there is no tracing of how well changes based on the feedback."

"When we sampled it... I don't think there's any usage of vector databases, and I don't think there's any usage of RAG, because I'm looking at all this, this is all just Python. You're not using any AI tooling."

What We Missed in Technical Analysis:

  • ❌ We documented that the AI services (triangle, violin, ipu) exist
  • ❌ We didn't analyze whether they WORK
  • ❌ We didn't check the AI feedback completion rate (only ~30%!)
  • ❌ We didn't examine the grounding/RAG architecture
  • ❌ We didn't trace AI recommendation effectiveness

NEW SCOPE REQUIRED:

  1. AI Effectiveness Analysis (see the sketch after this list):

    • Measure: What % of ads actually get AI feedback? (Client says ~30%)
    • Measure: What % of AI feedback is grounded/accurate?
    • Analyze: Why do 70% of ads get no feedback?
    • Document: Current AI architecture (appears to be missing RAG/vector DB)
  2. AI Architecture Audit:

    • Check for vector database usage (chromadb mentioned but not detected)
    • Check for RAG (Retrieval Augmented Generation) patterns
    • Check for prompt engineering and grounding
    • Check compliance rule integration into AI
  3. Create: 02-Technical/ai-system-analysis.md - Deep dive into AI effectiveness
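
To make the completion-rate measurement concrete, a minimal sketch of the kind of query we'd start from, assuming hypothetical table names (ads, ai_feedback) that we would confirm against the real schema first:

import psycopg2  # assumes read access to a Postgres replica

# Hypothetical table/column names -- to be verified against the actual schema.
COMPLETION_QUERY = """
SELECT
    COUNT(DISTINCT a.id) AS total_ads,
    COUNT(DISTINCT f.ad_id) AS ads_with_feedback
FROM ads a
LEFT JOIN ai_feedback f ON f.ad_id = a.id;
"""

with psycopg2.connect("dbname=complyai-api") as conn:
    with conn.cursor() as cur:
        cur.execute(COMPLETION_QUERY)
        total, with_feedback = cur.fetchone()
        pct = 100.0 * with_feedback / total if total else 0.0
        print(f"{with_feedback}/{total} ads have AI feedback ({pct:.1f}%)")

If the client's ~30% figure holds, this one number becomes the baseline metric for every AI fix that follows.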


Gap #2: Data Quality Crisis 🔴 CRITICAL

From Transcript (Sep 26 - Francis):

"We're starting to work with data issues. A lot of everybody's kind of like, what is going on with data? We don't trust it. There's duplications. It's like unmanageable."

"I know this is stuff that Jordan and I have honestly questioned around. How is the data actually coming into our Postgres? I wouldn't need to double... We'll figure it out. We question like the quality around the data coming in because we see a lot of discrepancies."

"We have data touchpoints with no context."

What We Missed:

  • ❌ We created a database schema analysis but didn't check data QUALITY
  • ❌ Didn't analyze data pipeline (Facebook → Postgres → BigQuery)
  • ❌ Didn't check for duplications
  • ❌ Didn't validate data integrity
  • ❌ Didn't examine ETL process quality

NEW SCOPE REQUIRED:

  1. Data Quality Audit (see the duplicate-detection sketch after this list):

    • Analyze: Duplication rates in PostgreSQL
    • Check: Data integrity constraints
    • Examine: Facebook API → Postgres ETL quality
    • Measure: Data discrepancy rates
    • Map: Complete data lineage (source → destination)
  2. Create: 02-Technical/data-quality-audit.md

  3. Tool: Enhance database-schema-analyzer.py with data quality checks
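
A minimal sketch of the duplicate-rate check, again with hypothetical names (an ads table keyed on a Facebook ad id) to verify before running:

import psycopg2

# Hypothetical: a duplicate is two or more rows sharing one facebook_ad_id.
DUP_QUERY = """
SELECT facebook_ad_id, COUNT(*) AS copies
FROM ads
GROUP BY facebook_ad_id
HAVING COUNT(*) > 1
ORDER BY copies DESC
LIMIT 20;
"""

with psycopg2.connect("dbname=complyai-api") as conn:
    with conn.cursor() as cur:
        cur.execute(DUP_QUERY)
        rows = cur.fetchall()
        print(f"{len(rows)} duplicated ad ids shown (limit 20)")
        for fb_id, copies in rows:
            print(f"  {fb_id}: {copies} copies")

The same GROUP BY / HAVING pattern, repeated per table, gives the duplication rates Francis is asking about.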


Gap #3: Customer Retention/Churn Crisis 🔴 CRITICAL

From Transcript (Sep 26 - Francis):

"Right now, where we're at is, who cares if you sign up, you're going to churn in 30 [days]."

"We need to focus on some retention and what it's going to take to get there."

Product Value Diagram (described in transcript):

  1. Quality of processing - Not adding value
  2. Feedback and insights - Should be AI-based, isn't
  3. Resolution speed - How to improve
  4. Stability, control, insights - Deep understanding

Drop-off happens at steps 1-2 (first 60 days)

What We Missed:

  • ❌ We created a Customer Success Framework BUT didn't analyze CURRENT churn
  • ❌ Didn't measure actual 30-60 day retention
  • ❌ Didn't analyze WHY customers churn
  • ❌ Didn't examine product value delivery in first 60 days

NEW SCOPE REQUIRED:

  1. Churn Analysis (see the retention-query sketch after this list):

    • Measure: Actual 30-day retention rate
    • Measure: 60-day retention rate
    • Analyze: Drop-off points in customer journey
    • Identify: Why customers leave (data quality? AI feedback?)
  2. Product Value Analysis:

    • Audit: Steps 1-2 of their retention diagram
    • Measure: AI feedback quality impact on retention
    • Analyze: Feedback loop effectiveness
  3. Create: 03-Strategy/churn-analysis-current-state.md
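
A minimal retention-query sketch, assuming hypothetical columns (users.signed_up_at, users.last_active_at); real event-level data would sharpen this, but it's enough for a baseline:

import psycopg2

# Hypothetical schema: users(signed_up_at, last_active_at).
RETENTION_QUERY = """
SELECT
    COUNT(*) AS cohort_size,
    COUNT(*) FILTER (WHERE last_active_at >= signed_up_at + INTERVAL '30 days') AS retained_30d,
    COUNT(*) FILTER (WHERE last_active_at >= signed_up_at + INTERVAL '60 days') AS retained_60d
FROM users
WHERE signed_up_at < NOW() - INTERVAL '60 days';  -- only cohorts old enough to measure
"""

with psycopg2.connect("dbname=complyai-api") as conn:
    with conn.cursor() as cur:
        cur.execute(RETENTION_QUERY)
        size, r30, r60 = cur.fetchone()
        if size:
            print(f"cohort={size}  30-day={r30 / size:.0%}  60-day={r60 / size:.0%}")

Correlating these cohorts with AI-feedback coverage (Gap #1) would test Francis's theory that steps 1-2 drive the churn.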


Gap #4: Staging Environment Issues 🔴 HIGH

From Transcript (Sep 26 - Francis):

"Staging DB and production DB should be similar, but I don't think they are."

"I heard it was empty... Only for staging. I saw that on the list that there's no staging [DB]."

"They're connecting sandbox API from meta into our staging, which for whatever reason was never connected before."

What We Missed:

  • ❌ Didn't check staging vs. production environment parity
  • ❌ Didn't validate staging database exists/has data
  • ❌ Didn't examine deployment pipeline (staging → production)
  • ❌ Didn't check Meta sandbox configuration

NEW SCOPE REQUIRED:

  1. Environment Parity Analysis (see the schema-diff sketch after this list):

    • Validate: Does staging DB exist and have data?
    • Compare: Staging vs. production configuration differences
    • Check: Meta sandbox vs. production API configuration
    • Audit: Deployment process (how code goes staging → prod)
  2. Create: 02-Technical/E-DevOps/environment-parity-audit.md
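
A first-pass parity check can diff the two schemas directly; the DSNs below are placeholders for the real staging/production connection strings:

import psycopg2

def table_names(dsn):
    """Return the set of public tables in the database behind dsn."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT table_name FROM information_schema.tables "
                "WHERE table_schema = 'public'"
            )
            return {row[0] for row in cur.fetchall()}

# Placeholder DSNs -- the real connection strings live with the client.
prod = table_names("dbname=complyai-api host=prod-db")
staging = table_names("dbname=complyai-api host=staging-db")

print("missing from staging:", sorted(prod - staging))
print("only in staging:    ", sorted(staging - prod))

If staging really is empty, this script answers it in one run; a per-table row-count comparison follows the same pattern.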


Gap #5: No Monitoring/Observability 🔴 HIGH

From Transcript (Sep 15 - Charles asks):

"Application monitoring, analytics. I assume there's some stuff in there?"

Francis responds:

"No, none. We haven't really set any of that up. Aside from that one DC was using... Google Analytics has been set up."

What We Missed:

  • ❌ We assumed monitoring exists (Datadog/New Relic)
  • ❌ We didn't verify the observability stack
  • ❌ There is no APM (Application Performance Monitoring)
  • ❌ There is no error tracking (we assumed Sentry; not confirmed)
  • ❌ There is no performance monitoring

NEW SCOPE REQUIRED:

  1. Observability Gap Analysis:

    • Document: Complete lack of monitoring
    • Recommend: APM stack (Datadog, New Relic, or Sentry; see the sketch after this list)
    • Design: Monitoring implementation plan
    • Estimate: Cost and effort
  2. Create: 02-Technical/F-Performance/observability-gap-analysis.md
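
For scale: the error-tracking piece of this stack is a few lines per service. A minimal Sentry sketch for one of the Python services (the DSN is a placeholder; a Datadog or New Relic setup is comparably small):

import sentry_sdk

sentry_sdk.init(
    dsn="https://examplekey@o0.ingest.sentry.io/0",  # placeholder DSN
    environment="production",
    traces_sample_rate=0.1,  # sample 10% of transactions for performance data
)

# Unhandled exceptions are now reported automatically; handled ones can be
# sent explicitly from except blocks:
try:
    1 / 0  # stand-in for a failing service call
except ZeroDivisionError as exc:
    sentry_sdk.capture_exception(exc)

The real cost is not the code but agreeing on alert routing and sampling, which is what the implementation plan should pin down.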


Gap #6: AI Compliance Feedback Loop Broken 🔴 CRITICAL

From Transcript (Sep 26 - Francis):

"If we're talking to our partners... here are the keywords that you can use. These are words that you can't use... We haven't done any work on the feedback loop."

"Not all the ads have AI feedback. Of the few that do, only 1 in 3 are actually giving valid feedback."

"You've got policymakers who are just kind of like, let's write the legal stuff, right? Like lawmakers... And then you've got a separate force that polices everything and runs the AI. Those two don't talk."

"No grounding. No tracing of how well changes based on feedback work."

What This Reveals:

  • Their core product (AI compliance checking) has a ~70% failure rate!
  • Policymakers ≠ AI system (disconnected)
  • No feedback validation (did AI suggestions work?)
  • Missing keyword/rule database

NEW SCOPE REQUIRED:

  1. AI Policy Integration Analysis:

    • Map: How compliance rules get into AI system
    • Document: Policy-to-AI pipeline (currently broken)
    • Analyze: Keyword database and rule management
    • Design: Grounded AI architecture (RAG-based; see the chromadb sketch after this list)
  2. Feedback Loop Analysis:

    • Measure: AI suggestion → customer action → outcome tracking
    • Design: Feedback validation system
    • Create: Grounding mechanism (rules database → AI)
  3. Create: 02-Technical/ai-compliance-feedback-analysis.md
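
Since chromadb was mentioned but not detected in the code (Gap #1), a minimal sketch of what a grounded, RAG-based rules lookup could look like; the rule ids and texts here are invented for illustration:

import chromadb

client = chromadb.Client()  # in-memory here; persistent storage in practice
rules = client.create_collection("compliance_rules")

# Invented example rules -- in reality these come from the policymakers' rulebook.
rules.add(
    ids=["meta-claims-014", "meta-housing-001"],
    documents=[
        "Ads must not make unverifiable income claims.",
        "Housing ads must not target by age or zip code.",
    ],
)

# At feedback time: retrieve the most relevant rules and include them in the
# prompt, so every AI suggestion cites the rule it is grounded in.
hits = rules.query(query_texts=["Earn $5,000/week from home!"], n_results=2)
print(hits["ids"][0], hits["documents"][0])

This is the missing link between the policymakers and "the separate force that runs the AI": the rulebook becomes the retrieval corpus.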


Gap #7: Communication & Context Gaps 🟡 HIGH

From Transcript (Sep 26 - Charles):

"We want to scan code, front end, messages... where is the context gaps of something comes in via Zapier that somebody put on a credit card that we didn't know about... starts to get mapped."

"My guess is you guys are gonna have a lot of translation gaps where it's a technical person and a non-technical person that are trying to figure out how to do something and they just don't have the skill set."

What We Missed:

  • ❌ Didn't analyze cross-team communication patterns
  • ❌ Didn't map knowledge silos
  • ❌ Didn't examine tech ↔ business translation issues
  • ❌ Didn't check Slack for communication quality

NEW SCOPE REQUIRED:

  1. Communication Analysis:

    • Analyze Slack channels for communication patterns (see the sketch after this list)
    • Identify knowledge silos (who knows what?)
    • Map information flow (technical → product → customers)
    • Document translation bottlenecks
  2. Create: 02-Technical/communication-analysis.md
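
A sketch of the Slack side, assuming a bot token with the channels:history scope (token and channel id below are placeholders):

from slack_sdk import WebClient

client = WebClient(token="xoxb-placeholder")  # bot token with channels:history

def recent_messages(channel_id, limit=200):
    """Pull recent messages from one channel for pattern analysis."""
    return client.conversations_history(channel=channel_id, limit=limit)["messages"]

# Crude first pass: who is asking questions, and how often. A question-heavy
# channel between technical and non-technical people is a candidate
# translation gap.
question_counts = {}
for msg in recent_messages("C0123456789"):
    if "?" in msg.get("text", ""):
        user = msg.get("user", "unknown")
        question_counts[user] = question_counts.get(user, 0) + 1

print(sorted(question_counts.items(), key=lambda kv: -kv[1]))

Even this crude count, run across channels, starts the silo map; topic modeling or an LLM pass can refine it later.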


Gap #8: Ticketing & Support Process Broken 🟡 HIGH

From Transcript (Sep 26 - Francis):

"Tickets are a function of scraping the conversation, like somebody actually manually viewing it and then maybe ticketing it."

"We're not using like an AI to recognize, hey, this is a question, ticket it."

"Natalie, Maria, and Nick... can't share product info, they need to interact on Slack just to deliver updates. If you're talking about tickets... somebody manually viewing it."

What We Missed:

  • ❌ We didn't analyze the customer support process
  • ❌ Ticketing is fully manual (no automation)
  • ❌ The knowledge base is incomplete (described as still "building")
  • ❌ Support is scattered across Slack + ClickUp

NEW SCOPE REQUIRED:

  1. Support Process Analysis:

    • Document: Current support workflow (manual!)
    • Measure: Response times, ticket volume
    • Analyze: Knowledge base completeness
    • Design: Automated ticketing system (see the classifier sketch after this list)
  2. Create: 02-Technical/customer-support-process-analysis.md
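
The "recognize, hey, this is a question, ticket it" step doesn't have to start with an LLM. A heuristic first pass over Slack messages (markers below are illustrative) would already surface candidates and create the labeled data an LLM classifier needs:

# Illustrative markers -- tune against real support messages.
QUESTION_MARKERS = ("?", "how do i", "why is", "can't", "doesn't work", "error", "broken")

def looks_like_ticket(text: str) -> bool:
    """Crude first-pass classifier: flag messages that read like support issues."""
    lowered = text.lower()
    return any(marker in lowered for marker in QUESTION_MARKERS)

# In the real pipeline this runs on the Slack stream and opens a ClickUp task
# for anything flagged, replacing the manual "somebody viewing it" step.
for msg in ("My ads stopped processing?", "Thanks, all good!"):
    print(msg, "->", "ticket" if looks_like_ticket(msg) else "skip")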


Gap #9: Meta/Facebook Integration Problems 🟡 MEDIUM

From Transcript (Sep 26 - Francis):

"They're connecting sandbox API from meta into our staging, which for whatever reason was never connected before."

Also noted: the team needs full Meta Business Manager operations access (hooks, client accounts)

What We Missed:

  • ❌ Documented Facebook API usage, didn't check sandbox vs. production
  • ❌ Didn't verify Meta Business Manager configuration
  • ❌ Didn't check API permissions and scopes
  • ❌ Staging environment not properly configured

NEW SCOPE REQUIRED:

  1. Meta Integration Audit:
    • Verify: Sandbox vs. production API setup
    • Check: Business Manager permissions
    • Document: API scopes and access levels
    • Test: Staging Meta integration

Gap #10: Team Structure & Resource Planning 🟡 MEDIUM

From Transcripts:

Tech Team (Sep 15):

  • Ralph (head of product + tech coordination)
  • Avi (senior dev - backend, fixing microservices issues)
  • Jim (senior dev - Pacific time, Manhattan Beach area)
  • Alex (junior frontend)
  • Edison (UI/UX under Ralph's "bootcamp")

Finance/Ops:

  • Francis (CEO)
  • Jordan (finance + operations)
  • RG (bookkeeper → elevating, learning APIs/automation with n8n)
  • Maria (customer success)

Customer Success:

  • Natalie (account management, knowledge base in ClickUp)
  • Heidi (customer support)
  • Nick (product/support interface?)

Ralph's Staffing Requests (mentioned but on hold):

  • Product manager
  • QA engineer
  • Product designer
  • Product marketer
  • Senior frontend engineer

What We Missed:

  • ❌ Didn't analyze team capacity vs. workload
  • ❌ Didn't validate Ralph's resource requests
  • ❌ Didn't assess skill gaps
  • ❌ Team working across 3 time zones (LA, Manila, various)

NEW SCOPE REQUIRED:

  1. Team Capacity Analysis:

    • Current team: 12 people identified (listed above)
    • Requested additions: 5 roles
    • Validate: Are these needed or can we optimize with current team?
    • Analyze: Skill distribution and gaps
  2. Create: 00-Project-Management/team-capacity-analysis.md


📊 Additional Scope Items from Conversations

Scope Item #1: Data Pipeline Analysis 🔴

Issue: "How is the data actually coming into our Postgres?" - They don't know!

Analysis Needed:

  • Facebook API → Postgres pipeline
  • Postgres → BigQuery sync (Maestro service; row-count drift sketch below)
  • Data transformations and quality at each step
  • ETL tool identification (unknown to client)
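
A minimal drift check on the Postgres → BigQuery leg, with placeholder table names on both sides (the Maestro service owns the real sync):

import psycopg2
from google.cloud import bigquery

PG_SQL = "SELECT COUNT(*) FROM ads"                         # hypothetical table
BQ_SQL = "SELECT COUNT(*) AS n FROM `project.dataset.ads`"  # hypothetical table

with psycopg2.connect("dbname=complyai-api") as conn:
    with conn.cursor() as cur:
        cur.execute(PG_SQL)
        pg_count = cur.fetchone()[0]

bq_count = next(iter(bigquery.Client().query(BQ_SQL).result())).n

print(f"postgres={pg_count}  bigquery={bq_count}  drift={pg_count - bq_count}")

Run per table on a schedule, this turns "we see a lot of discrepancies" into a number that can be alerted on.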

Deliverable: 02-Technical/data-pipeline-analysis.md


Scope Item #2: 60-Day Customer Retention Deep Dive 🔴

Issue: "Who cares if you sign up, you're going to churn in 30 [days]"

Francis's Retention Diagram (4 steps):

  1. Quality of processing ← Current gap
  2. Feedback and insights (AI-based) ← Current gap
  3. Stability, control, insights
  4. Scale and growth

Drop-off: Steps 1-2 (first 60 days)

Analysis Needed:

  • Actual churn metrics (30-day, 60-day)
  • User journey analysis (step-by-step drop-off)
  • Product value delivery assessment
  • AI feedback impact on retention

Deliverable: 03-Strategy/A-Growth/customer-retention-analysis.md


Scope Item #3: Compliance Rule Management 🔴

Issue: Policymakers and AI system are disconnected

Quote:

"You've got policymakers who write the legal stuff... And then you've got a separate force that runs the AI. Those two don't talk."

"If we were to give feedback, there is no tracing of how well changes based on the feedback work."

Analysis Needed:

  • How compliance rules are authored
  • How rules get into AI system (manual? automated?)
  • Rule versioning and updates
  • AI training on new rules
  • Feedback validation (did suggestion prevent violation?)

Deliverable: 02-Technical/compliance-rule-pipeline-analysis.md


Scope Item #4: Monitoring Implementation Plan 🔴

Issue: "No monitoring set up. None."

Current State: Only Google Analytics for web traffic

Missing:

  • Application Performance Monitoring (APM)
  • Error tracking (Sentry, Rollbar)
  • Infrastructure monitoring (CloudWatch, Datadog)
  • User behavior analytics (Mixpanel, Amplitude)
  • Uptime monitoring (Pingdom, StatusPage)

Deliverable: 02-Technical/F-Performance/monitoring-implementation-plan.md


Scope Item #5: Staging Environment Audit 🔴

Issue: "Staging DB might be empty" / "Meta sandbox wasn't connected"

Problems Identified:

  • Staging database potentially empty or misconfigured
  • Meta sandbox API not connected to staging
  • Production vs. staging parity unknown

Analysis Needed:

  • Validate staging environment exists and works
  • Check database parity (schema, sample data)
  • Verify Meta sandbox integration
  • Test full staging deployment pipeline

Deliverable: 02-Technical/E-DevOps/staging-environment-audit.md


Scope Item #6: Support & Ticketing System 🟡

Issue: Manual ticketing process

Current Process:

  1. Customer message in Slack
  2. Human reads and decides if it's a ticket
  3. Manually create ticket in ClickUp
  4. No automation, no AI categorization

Opportunity: Automate with AI ticket classification

Deliverable: 02-Technical/support-automation-analysis.md


Scope Item #7: Knowledge Base & Documentation 🟡

Issue: Knowledge base "being built" in ClickUp but incomplete

Current State:

  • Natalie managing in ClickUp
  • Incomplete
  • Not properly accessible
  • No search functionality

Deliverable: 02-Technical/knowledge-base-analysis.md


🎯 Prioritized New Analysis Tasks

Week 1 (CRITICAL) - Add to Current Analysis

| Priority | Task | Effort | Output Document |
| --- | --- | --- | --- |
| 🔴 P1 | AI Effectiveness Audit | 1 day | ai-system-analysis.md |
| 🔴 P1 | Data Quality Audit | 1 day | data-quality-audit.md |
| 🔴 P1 | Churn Analysis | 1 day | customer-retention-analysis.md |
| 🔴 P1 | Compliance Rule Pipeline | 1 day | compliance-rule-pipeline-analysis.md |

Week 2 (HIGH) - Additional Deep Dives

| Priority | Task | Effort | Output Document |
| --- | --- | --- | --- |
| 🟡 P2 | Monitoring Plan | 1 day | monitoring-implementation-plan.md |
| 🟡 P2 | Staging Audit | 0.5 day | staging-environment-audit.md |
| 🟡 P2 | Data Pipeline Analysis | 1 day | data-pipeline-analysis.md |
| 🟡 P2 | Team Capacity Analysis | 0.5 day | team-capacity-analysis.md |

Week 3 (MEDIUM) - Process Analysis

| Priority | Task | Effort | Output Document |
| --- | --- | --- | --- |
| 🟢 P3 | Support Process | 0.5 day | support-automation-analysis.md |
| 🟢 P3 | Knowledge Base | 0.5 day | knowledge-base-analysis.md |
| 🟢 P3 | Communication Patterns | 0.5 day | communication-analysis.md |

💡 Key Business Insights from Conversations

Product Focus:

  • Ad compliance is core business (Facebook ad policy compliance)
  • Video ad support recently added (Avi's work)
  • Analytics dashboard being built (Jim's focus on WAU/MAU)
  • Self-serve sign-up just launched (removed sales friction)

Customer Pain Points:

  • Not all ads get AI feedback (70% failure rate!)
  • Feedback given is not grounded (no rule references)
  • Can't trace if AI suggestions actually work
  • Churn at 30-60 days (value not delivered fast enough)

Technical Debt:

  • Microservices don't scale well (Avi trying to fix)
  • Latency issues in services
  • Data quality problems (duplications, discrepancies)
  • No monitoring ("we haven't set any of that up")
  • Staging environment broken/empty

Team Dynamics:

  • Ralph: Product + Tech lead (coordinating devs)
  • Avi: Senior dev (fixing backend microservice issues)
  • Jim: Backend dev (analytics features, knows all APIs)
  • Alex: Junior frontend (UI work)
  • Edison: UI/UX (under Ralph's mentorship)

Immediate (This Week)

Critical Analysis Gaps:

  1. AI Effectiveness Audit

    • Measure AI feedback completion rate
    • Analyze why 70% of ads get no feedback
    • Check for vector DB/RAG usage
    • Document grounding mechanism (or lack thereof)
  2. Data Quality Audit

    • Run duplicate detection queries
    • Check data integrity
    • Analyze Facebook → Postgres pipeline
    • Measure data discrepancy rates
  3. Churn Analysis

    • Pull actual retention metrics
    • Analyze drop-off points
    • Correlate AI feedback quality with retention
    • Document customer journey gaps

Next Week

High-Value Analysis:

  4. Monitoring Implementation Plan

    • Design APM stack recommendation
    • Cost/effort estimates
    • Implementation roadmap
  5. Staging Environment Audit

    • Validate staging DB exists
    • Check Meta sandbox integration
    • Environment parity assessment

📁 Updated Deliverables List

NEW Technical Deliverables (Based on Conversations):

Critical (Week 1):

  1. 02-Technical/ai-system-analysis.md - AI effectiveness audit
  2. 02-Technical/data-quality-audit.md - Data integrity analysis
  3. 03-Strategy/customer-retention-analysis.md - Churn deep dive
  4. 02-Technical/compliance-rule-pipeline-analysis.md - Policy → AI integration

High Priority (Week 2):

  5. 02-Technical/F-Performance/monitoring-implementation-plan.md
  6. 02-Technical/E-DevOps/staging-environment-audit.md
  7. 02-Technical/data-pipeline-analysis.md
  8. 00-Project-Management/team-capacity-analysis.md

Medium Priority (Week 3):

  9. 02-Technical/support-automation-analysis.md
  10. 02-Technical/knowledge-base-analysis.md
  11. 02-Technical/communication-analysis.md


What We've Already Covered Well

From Conversations - Already Addressed:

  • ✅ Repository analysis (13 repos) - Covered
  • ✅ Developer metrics (3 devs) - Covered
  • ✅ Service consolidation (musical naming = microservices) - Covered
  • ✅ Dependency mapping (external APIs) - Covered
  • ✅ 77% Python (optimization opportunities) - Covered
  • ✅ No PR culture (direct commits) - Covered
  • ✅ Branch protection gaps - Covered
  • ✅ CI/CD gaps - Covered

🎯 Critical Client Needs Summary

What Francis/Team Actually Care About (from conversations):

  1. Fix AI system (70% of ads get no feedback!) ← NOT in current analysis
  2. Data trust (duplications, discrepancies) ← Partially covered
  3. Customer retention (30-60 day churn) ← Framework exists, no current state
  4. Feedback loop (AI suggestions → outcomes) ← NOT covered
  5. Monitoring/visibility (they have none!) ← NOT covered
  6. Staging environment (broken/empty) ← NOT covered
  7. Scale microservices (Avi working on) ← Mentioned, not analyzed
  8. Reduce manual processes (support, ticketing) ← NOT covered

Suggested Week 1 email to the client:

"Based on our conversation analysis and technical discovery, we've identified several critical issues beyond our initial scope that require immediate attention:

  1. AI Feedback System: Only 30% of ads receiving feedback (70% failure rate)
  2. Data Quality Crisis: Duplications and discrepancies preventing data trust
  3. Customer Churn: Need to measure and address 30-60 day retention
  4. No Monitoring: Zero APM/observability in production
  5. Staging Environment: Potentially broken or misconfigured

We're expanding our analysis to cover these areas this week and will have findings by [date]."


🔧 Tools Needed for New Analysis

AI Analysis:

# Check for vector database
grep -r "chromadb\|pinecone\|weaviate\|qdrant" 01-Discovery/repositories/*/

# Check for RAG patterns
grep -r "retrieval\|embedding\|vector" 01-Discovery/repositories/*/

# Check AI frameworks
grep -r "langchain\|llamaindex\|haystack" 01-Discovery/repositories/*/

Data Quality:

# Connect to PostgreSQL
python 07-Tools/database-schema-analyzer.py --database complyai-api --analyze-quality

# Run duplicate detection
# (Would need database access)

Churn Analysis:

# Query customer data
# SELECT retention metrics from PostgreSQL
# (Would need database access + analytics)

Next Steps

Immediate:

  1. Create 4 critical analysis documents (AI, data, churn, compliance)
  2. Run enhanced analysis tools
  3. Report findings to client

This Week:

  4. Complete high-priority analyses
  5. Update technical discovery with new findings
  6. Prepare recommendations

Next Week:

  7. Present comprehensive findings
  8. Prioritize remediations
  9. Update Growth Acceleration Framework with new insights


Document Version: 1.0
Created: November 4, 2025
Based On: 3 client meeting transcripts
Priority: CRITICAL - Gaps in current analysis identified

🚨 These conversation insights reveal critical product/technical issues our automated analysis missed!