Overview · How Flockr uses AI — In every business layer
Claude operates across every layer of the business, from the first line of integration code to the conversation an operator has with their live data.
Signal — the AI assistant embedded in the portal
Signal is the stand-out capability of the Flockr platform. Where the rest of Flockr automates signal evaluation, message selection, and attribution measurement, Signal makes all of it conversational. It is the interface through which a commercial team — not just a data analyst — can engage with demand intelligence, act on it, understand it, and explain it.
Every answer is grounded in three layers of knowledge: how Flockr works, the live state of the specific account, and the account’s actual data. Signal is accessible from every page via a persistent panel that does not reset when navigating.
Layer 1 — How Flockr works
Complete knowledge of every concept, metric, and feature — sourced from the 17-file RAG knowledge base built directly from source code. Signal can explain the exposure tier attribution model, why attention signals are ineligible on the cart drawer, what a momentum BREAKOUT state means, or how the slot budget is calculated.
Layer 2 — Client's configuration
Signal knows the specific account’s setup: which surfaces are active, which display variants are configured, whether any pipeline overrides are in place, and what mode the account is running. Answers are account-specific, not generic.
Layer 3 — Client's live data
Signal queries the account’s analytics directly — returning conversational analytics, e.g. the conversion rate lift over a date range, which signal families are firing most, or tracing any product through the pipeline.
Example conversations:

- Understanding a metric
- Why a product isn’t being messaged
- Live attribution data
- Catalogue monitoring
- Evaluating an Action Layer recommendation
- Writing a team brief
- Understanding scoring changes
- Cross-surface coordination
Client onboarding — site analysis and integration generation
Integrating Flockr into a client’s website is an AI workflow. Claude Code analyses the live website and generates the site-specific integration configuration — a task that previously required manual front-end engineering work.
What Claude Code does
- Site analysis — inspects URL structure, DOM patterns, page type detection, product ID locations, and how each surface renders in the client’s specific theme.
- Placement selection — identifies the optimal DOM injection target per surface with separate mobile and desktop handling.
- Event model — maps the client’s data layer into Flockr’s configurable event model, normalising their shopper behaviour events into a consistent schema.
- Dynamic content handling — adapts to each site’s live DOM behaviour, including dynamic panels, asynchronous content, and real-time UI changes.
- Integration module — produces a complete, site-specific integration — every piece of configuration a new storefront needs to go live.
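As a rough illustration of the output, the site-specific integration module can be thought of as a structured configuration object. This is a hypothetical sketch — the field names, selectors, and event names are invented for illustration, not Flockr’s actual schema:

```python
# Hypothetical sketch of the kind of site-specific configuration the
# AI-generated integration module produces. All names are illustrative.
SITE_INTEGRATION = {
    "page_detection": {
        # URL patterns used to classify page types on this storefront
        "product": r"^/products/[\w-]+$",
        "collection": r"^/collections/[\w-]+$",
    },
    "product_id": {
        # Where the product ID lives in this theme's DOM
        "selector": "[data-product-id]",
        "attribute": "data-product-id",
    },
    "placements": {
        # Injection targets per surface, with separate mobile/desktop handling
        "pdp_primary": {
            "desktop": ".product__title",
            "mobile": ".product__title--mobile",
        },
        "cart_drawer": {
            "desktop": "#CartDrawer .cart__items",
            "mobile": "#CartDrawer .cart__items",
        },
    },
    "events": {
        # Normalises this client's data-layer events into a consistent schema
        "add_to_cart": {"source_event": "dl_add_to_cart", "product_id_path": "items.0.id"},
        "purchase": {"source_event": "dl_purchase", "product_id_path": "items.0.id"},
    },
}
```

Everything in this object is site-specific; nothing in it touches the shared core described below.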
AI-generated integration, independently maintained core
The shared core handles the universal mechanics — session tracking, message rendering, visibility, attribution, and slot allocation. The AI-generated integration module handles everything site-specific.
Core upgrades roll out independently of any client configuration, so improvements reach the entire fleet without re-touching individual storefronts. Every integration is reviewed and tested in preview modes before going live.
What used to require days of front-end engineering is now a single AI-assisted session. Integration scales without scaling the team.
Development — governance model
Adopting AI in a development workflow requires three things working together: persistent architectural memory, a clear source-authority model, and automated compliance checks. Without all three, codebases drift. With them, AI-built systems stay coherent indefinitely.
The dual-tool pattern
Claude Code handles all implementation. Claude Chat handles strategy, architecture, and review. Chat produces a structured spec with a reference-document checklist; Code implements it and updates every reference document in the same commit.
Architecture memory
A single versioned file carries the full system context Claude Code needs across sessions: the bundle pipeline, data stores, feature flags, signal thresholds, eligibility rules, and coding conventions. Its accuracy is enforced by the audit cycle.
Tiered source authority
| Tier | Source | Authority |
|---|---|---|
| Tier 1 | Source code | Always correct. Docs that conflict with code are wrong. |
| Tier 2 | Primary written knowledge | Sidebar content catalogue (172+ entries), information architecture, architecture memory |
| Tier 3 | Structural / diagnostic | Sidebar coverage index, known-issues register, glossary |
| Tier 4 | Visual verification | 29 checkpoint screenshots — screenshot wins over written description |
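The core resolution rule — the most authoritative source wins, and docs that conflict with code are wrong — can be sketched as a simple lookup. This is an illustrative simplification (it treats authority as strictly ordered by tier number), not Flockr code:

```python
# Illustrative sketch of the tiered source-authority rule: when sources
# disagree, the most authoritative tier wins. Tier names follow the table
# above; the function itself is hypothetical.
TIER_AUTHORITY = {
    "source_code": 1,            # Tier 1: always correct
    "written_knowledge": 2,      # Tier 2: primary written knowledge
    "structural_diagnostic": 3,  # Tier 3: indices, registers, glossary
    "visual_verification": 4,    # Tier 4: checkpoint screenshots
}

def resolve_conflict(claims: dict) -> str:
    """Given {source_kind: claimed_value}, return the value from the
    most authoritative source."""
    best_source = min(claims, key=lambda s: TIER_AUTHORITY[s])
    return claims[best_source]

# Example: documentation and source code disagree about a formula —
# source code is Tier 1, so it wins.
winner = resolve_conflict({
    "written_knowledge": "products / 8",
    "source_code": "ceil(products / 6)",
})
```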
Automated compliance checks
- Table consistency check — validates that every data table in the portal conforms to the standard interaction pattern: correct controls, sortable headers, pagination, and filtering. Blocks deploys on violation.
- Explanatory coverage check — verifies 100% info-text coverage across 172+ column and section headers. Every metric in the portal has a definition.
- Production audit trail — every signal selection decision is logged with its eligibility, scoring, slot allocation, and rejection reasons.
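A deploy-blocking check of this kind is straightforward to sketch. The following is a minimal illustration in the spirit of the explanatory-coverage check — the data shape and function names are assumptions, not Flockr’s implementation:

```python
# Minimal sketch of a deploy-blocking coverage check: every column or
# section header must carry info text. Data shape is illustrative.
def check_info_text_coverage(headers: list) -> list:
    """Return the header IDs missing a definition; an empty list
    means 100% coverage and the deploy may proceed."""
    return [h["id"] for h in headers if not h.get("info_text")]

headers = [
    {"id": "conversion_rate_lift", "info_text": "Lift vs the unexposed tier."},
    {"id": "slot_budget", "info_text": None},  # missing definition
]
missing = check_info_text_coverage(headers)
deploy_blocked = bool(missing)  # True -> CI fails the build
```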
Checkpoint and review cycle
A single checkpoint command rotates the 29-screenshot review set to a prior version, captures the current portal states, generates a visual changelog from committed changes, and advances the checkpoint marker.
A saved review prompt is then applied fresh per checkpoint in Claude Chat. Findings produce a structured spec with a mandatory reference-document checklist, which Claude Code implements alongside every reference-document update in the same commit.
The closed loop — AI development that stays coherent over time
Two Claude surfaces, separate authority, one governed system
Most teams using AI in development treat it as a faster pair programmer. The codebase still drifts, the docs fall behind, and the AI doesn’t retain what it built two weeks ago. Flockr’s workflow is different: neither surface is trusted to review its own work, state lives in committed files rather than conversation, and every session reviews every layer — not just what changed.
What makes this different
Separate authority. Claude Chat holds visual context — the portal screenshots, the information architecture, the glossary, and the sidebar content catalogue. It catches visual inconsistency, terminology drift, and conceptual confusion. Claude Code holds the source code. It catches threshold drift, missing explanations, stale eligibility rules. Neither is asked to do the other’s job.
State in files, not conversation. Specs, changelogs, audit reports, rebuild logs, and the knowledge base are all committed files. Any session with either surface can be resumed or replaced without losing context.
The hard gate
Before Signal’s knowledge base can be regenerated, a standing prompt checks the audit report for unresolved conflicts. If any exist, generation refuses to proceed. This single gate prevents Signal from ever giving answers grounded in stale documentation.
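The gate itself reduces to a single predicate over the audit report. A sketch, with an invented report format:

```python
# Sketch of the hard gate: knowledge-base regeneration refuses to run
# while the audit report lists unresolved conflicts. Report shape is
# illustrative, not Flockr's actual format.
def can_regenerate(audit_report: dict) -> bool:
    unresolved = [c for c in audit_report["conflicts"] if not c["resolved"]]
    return len(unresolved) == 0

report = {"conflicts": [
    {"id": "slot-budget-formula", "resolved": True},
    {"id": "momentum-threshold", "resolved": False},
]}
# One unresolved conflict -> regeneration refuses to proceed.
allowed = can_regenerate(report)
```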
Real conflicts caught in production
- Documentation gave a formula of 1/8; the actual formula is ceil(products / 6). Caught by audit.
- rank === 1 in code, while documentation described it as top-10. Caught by audit.

This is a transferable model. The mechanism — two surfaces with complementary context, bridged by committed files, gated by prompts — applies to any software project. The specific files and standing prompts would change; the governance structure would not.
Knowledge base — automated content generation from source
Signal answers questions by retrieving from a structured knowledge base. Every chunk is written by Claude directly from the project’s source-authority tiers — not hand-authored — and regenerated whenever the audit cycle passes clean. This guarantees the knowledge base stays synchronised with the running system indefinitely.
How the knowledge base is generated
A standing prompt instructs Claude to study the source code, schemas, and portal UI, then write the knowledge base as a series of self-contained, query-optimised chunks. The prompt enforces a four-tier source authority (source code → written documentation → structural indices → visual verification) and refuses to run if the audit report contains unresolved conflicts. Every value, threshold, formula, and rule in the knowledge base is traceable to a Tier 1 source. When the system changes, regeneration is a single command — there is no drift period where Signal gives outdated answers.
What Signal knows
The knowledge base is organised into 18 files covering every topic an operator could ask about. Content is structured for retrieval: each H2 heading echoes the likely question, and each chunk is written to stand alone so the retrieval system can surface the right answer without needing surrounding context.
| Topic area | What Signal can answer |
|---|---|
| What Flockr is | Positioning, how Flockr connects to a store, data mode vs live mode, why demand signals influence conversion |
| Portal navigation and pages | What every page shows, every KPI card, every chart, every tab — across Home, Demand, Analytics, Conversion, Messages, Realtime, and Settings |
| Signal families and eligibility | All 11 signal families, the complete eligibility matrix (11 families × 8 surfaces), timeframe restrictions, lifecycle states, suppression rules, why a product might have no qualifying signal |
| Selection engine and scoring | All 13 pipeline steps, the full scoring formula with every weight, slot allocation logic, elimination reason codes, the coverage funnel |
| Message system | Every message code Flockr produces, surfaces and placements, slot budget, message variants, breakpoints, rank badge logic |
| Attribution and conversion | The exposure tier model, conversion rate lift methodology, incremental revenue calculation, the dose-response chart |
| Demand intelligence | Lifecycle classification, momentum calculation, scarcity and overstock detection, live demand events, category rankings |
| Data layer | End-to-end pipeline, every event type, the analytics views, join keys, the difference between data mode and live mode field-by-field |
| Pipeline configuration | Surfaces and placements, eligibility, thresholds, scoring weights, client modes |
| Troubleshooting | Twelve structured diagnostic topics — why activation rate is low, why a product isn’t messaged, why lift isn’t showing, how to verify Flockr is working |
| Glossary | Every term used anywhere in the portal, with canonical definitions — ~130 entries |
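The chunk structure described above — an H2 that echoes the likely question, followed by a body that stands alone — can be illustrated with a single example. The chunk content below is invented for illustration, not taken from Flockr’s actual knowledge base:

```python
# Illustrative shape of one query-optimised knowledge-base chunk: the H2
# echoes the operator's likely question, and the body is self-contained
# so retrieval needs no surrounding context.
CHUNK = """\
## Why is a product not being messaged?

A product receives no message when no signal family passes eligibility
for the current surface, or when it loses slot allocation to
higher-scoring products. Check the elimination reason codes in the
coverage funnel for the specific step that removed it.
"""

def heading(chunk: str) -> str:
    """Extract the question-style H2 a retriever would match against."""
    return next(line for line in chunk.splitlines() if line.startswith("## "))
```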
Action Layer — demand intelligence to commercial decision
The Action Layer surfaces a single recommended action per product across four areas of the Demand page: Scarcity risk table, Fading tab, Momentum leaderboard, and the Live demand event feed (HIGH and CRITICAL events only).
Eight actions and their inputs
| Action | Urgency | Derives from |
|---|---|---|
| Restock immediately | Critical | Runway <5 days, demand Accelerating or Stable |
| Restock soon | High | Runway 5–14 days, demand Accelerating or Stable |
| Feature now | High | 3H + 7D momentum elevated, or top-3 category entry |
| Promote | Medium | Fading views, purchases Holding |
| Investigate | Medium | Cliff decline in 1–2 days, or sudden anomaly |
| Monitor | Medium | Short runway, fading demand |
| Watch | Low | Single-grain momentum (3H only) |
| Consider markdown | Low–Med | Views, ATB, purchases declining 7+ days |
Classification logic
- Scarcity risk — two inputs: runway bucket × 3-day demand trend.
- Fading demand — decline magnitude, purchase trend (Holding / Declining / Collapsed), days declining. Lifecycle modifier: a Discovering product with collapsed purchase shows Investigate not Consider markdown.
- Momentum leaderboard — 3H and 7D grains. Compound note “Low stock — act urgently” when Feature now product has active scarcity.
- Live demand feed — Feature now on lifecycle breakout; Restock immediately on accelerating low stock; Investigate on rank exit.
When a product row is expanded, the action appears with a one-sentence explanation: “Promote — views have declined 47% since peak but purchase rate is holding.” Generated from classification inputs, not a fixed string.
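The scarcity branch of this logic can be sketched directly from the actions table: runway bucket crossed with demand trend, plus an explanation generated from the same inputs. The function and trend labels are illustrative, and the fallback behaviour for unlisted combinations is an assumption:

```python
# Sketch of scarcity-risk classification (runway bucket x demand trend)
# following the actions table above, plus a generated explanation.
# Thresholds come from the table; the function itself is illustrative.
def scarcity_action(runway_days: float, demand_trend: str):
    if demand_trend in ("Accelerating", "Stable"):
        if runway_days < 5:
            return "Restock immediately"   # Critical
        if runway_days <= 14:
            return "Restock soon"          # High
    if demand_trend == "Fading" and runway_days <= 14:
        return "Monitor"                   # short runway, fading demand
    return None  # no scarcity action for this combination (assumed)

def explain(action: str, runway_days: float, demand_trend: str) -> str:
    # Generated from classification inputs, not a fixed string.
    return (f"{action} — roughly {runway_days:.0f} days of stock left "
            f"with {demand_trend.lower()} demand.")

act = scarcity_action(3, "Accelerating")
msg = explain(act, 3, "Accelerating")
```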
Signal explains any classification and helps communicate decisions to the team. See Section H.
Adaptive scoring — self-optimising signal selection
The scoring model learns continuously from each client’s own conversion data. Five dimensions are optimised simultaneously without manual configuration.
- Adaptive timeframe selection — learns which time windows convert per signal family per surface. If hour12 and hour1 purchase signals convert at the same rate, the penalty for the longer window is reduced.
- Signal weight optimisation — per-client, per-surface weight distributions adjust from observed conversion data. Compounds over time.
- Placement tuning — learns which families perform better in primary versus secondary position on PDP.
- Journey-stage weighting — cart pressure and scarcity gain weight at the decided stage. Newness signals lose weight as shoppers progress from browsing to considering.
- Cross-surface orchestration — session context via session_id informs weight adjustments within a session. A signal family seen and not acted on twice is weighted down on subsequent pages.
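The first of these — adaptive timeframe selection — can be sketched as a small update rule: when a longer window converts about as well as a shorter one, the penalty on the longer window shrinks. The learning rate, tolerance, and bounds below are invented for illustration:

```python
# Sketch of adaptive timeframe selection: if hour12 signals convert at
# roughly the same rate as hour1 signals, the recency penalty applied to
# the longer window is reduced. Learning rate and tolerance are assumed.
def adjust_timeframe_penalty(penalty: float,
                             cr_short: float,
                             cr_long: float,
                             learning_rate: float = 0.1) -> float:
    """Nudge the longer window's penalty toward zero when its observed
    conversion rate matches the shorter window's."""
    if cr_long >= cr_short * 0.95:           # long window converts about as well
        penalty *= (1.0 - learning_rate)     # relax the penalty
    else:
        penalty = min(1.0, penalty * (1.0 + learning_rate))
    return penalty

# hour12 converting at ~the same rate as hour1 -> penalty relaxes
p = adjust_timeframe_penalty(penalty=0.4, cr_short=0.031, cr_long=0.030)
```

Run per signal family per surface, this compounds over time in exactly the way the weight-optimisation bullet describes.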
The adaptive model is treated as a Tier 1 source. Weight changes above a defined threshold surface as conflicts in the audit cycle — Signal’s knowledge of scoring behaviour stays current automatically as the model learns.
Extensibility — the pattern beyond e-commerce
The architecture is not catalogue-specific. The core pattern applies wherever there are items to surface and decisions to influence.
Concept translation to a CMS context
| Flockr concept | CMS equivalent |
|---|---|
| Product | Any managed content item — article, page, asset |
| Purchase / attention signal | Read completion, scroll depth, click-through, share |
| Surface | Article feed, related content panel, search, email digest |
| Slot | Position in a feed, recommendation slot, featured block |
| Conversion | Time on site, subscription, download, form completion |
What transfers without modification
The scoring formula and its seven dimensions, the slot allocation model, the exposure-tier attribution methodology, and the governance approach are all directly transferable. The signal taxonomy and surface definitions are the variables; the system architecture is the invariant.
The closed-loop model is also portable
Architecture memory, tiered source authority, audit-and-rebuild cycle, and automated compliance checks are applicable to any software project. A CMS engineering team could adopt the same structure with their own domain-specific architecture memory.
Why this system matters
This operating model directly impacts scalability, defensibility, and product differentiation.
Scalable integration model
Client onboarding scales without proportional engineering cost. What required days of front-end work is now a single AI session — repeatable across any Shopify storefront.
Persistent system knowledge
System knowledge persists beyond individuals. Architecture memory, the audit cycle, and committed artefacts mean the codebase and its documentation stay coherent regardless of team changes.
Product differentiation
Signal transforms a reporting tool into a conversational intelligence layer. No competitor offers an operator-facing AI assistant grounded in live behavioural data, account configuration, and verified system knowledge simultaneously.