How We Use AI: The Complete Operating Model  ·  Confidential  ·  April 2026
Prepared for prospective acquirer

AI-native infrastructure for real-time demand intelligence

Note for Core Media

The development and governance models are not specific to Flockr’s domain. They represent a transferable approach to building software with AI without losing quality control or codebase coherence. The closed-loop system is an advanced pattern for governed AI development that any engineering team could adopt.

Signal

Signal is the stand-out capability. It knows how Flockr works, your specific configuration, and queries your live analytics directly — turning the portal from a reporting tool into a conversational intelligence layer.

Overview · How Flockr uses AI — In every business layer

Claude operates across every layer of the business, from the first line of integration code to the conversation an operator has with their live data.

Claude Code
All code — the data pipeline, operator portal, browser integration, client-specific modules, analytics, and automated quality checks
Claude Chat
Structured project knowledge drives product decisions, reviews, audits, spec generation, and documentation governance
Claude API
Powers Signal — the AI assistant embedded in the Flockr portal, account-specific, with live data access and persistence across sessions

Signal — the AI assistant embedded in the portal

Claude API  ·  Three knowledge layers  ·  Persistent across every page  ·  Source-attributed answers

Signal is the stand-out capability of the Flockr platform. Where the rest of Flockr automates signal evaluation, message selection, and attribution measurement, Signal makes all of it conversational. It is the interface through which a commercial team — not just a data analyst — can engage with demand intelligence, act on it, understand it, and explain it.

Every answer is grounded in three layers of knowledge: how Flockr works, the live state of the specific account, and the account’s actual data. Signal is accessible from every page via a persistent panel that does not reset when navigating.

Layer 1 — How Flockr works

Complete knowledge of every concept, metric, and feature — sourced from the 17-file RAG knowledge base built directly from source code. Signal can explain the exposure tier attribution model, why attention signals are ineligible on the cart drawer, what a momentum BREAKOUT state means, or how the slot budget is calculated.

Layer 2 — Client's configuration

Signal knows the specific account’s setup: which surfaces are active, which display variants are configured, whether any pipeline overrides are in place, and what mode the account is running. Answers are account-specific, not generic.

Layer 3 — Client's live data

Signal queries the account’s analytics directly and answers conversationally — the conversion rate lift over a date range, which signal families are firing most, or a trace of any product through the pipeline.
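
The three layers described above can be pictured as one grounded prompt. The sketch below is illustrative only — the function and field names are hypothetical, not Flockr’s actual implementation:

```python
# Illustrative sketch (hypothetical names): how a Signal-style assistant
# might assemble its three knowledge layers into one grounded prompt.

def build_signal_context(question: str,
                         kb_chunks: list[str],        # Layer 1: retrieved knowledge-base chunks
                         account_config: dict,        # Layer 2: the account's live configuration
                         analytics_rows: list[dict]   # Layer 3: live analytics query results
                         ) -> str:
    """Combine the three layers so every answer can cite its source."""
    sections = [
        "## How Flockr works (knowledge base)",
        *kb_chunks,
        "## This account's configuration",
        "\n".join(f"{k}: {v}" for k, v in account_config.items()),
        "## Live analytics for this question",
        "\n".join(str(row) for row in analytics_rows),
        "## Question",
        question,
    ]
    return "\n\n".join(sections)

prompt = build_signal_context(
    "What does demand coverage mean?",
    kb_chunks=["Demand coverage is the share of evaluated products with a qualifying signal."],
    account_config={"mode": "live", "surfaces": ["PDP", "cart"]},
    analytics_rows=[{"metric": "demand_coverage", "value": 0.94}],
)
```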

Understanding a metric

You: What does demand coverage mean?
Signal: Demand coverage is the percentage of products evaluated by Flockr that had at least one qualifying signal. If Flockr evaluated 4,200 products and 3,948 had at least one qualifying signal, demand coverage would be 94%. A low figure typically means either signal data is sparse or thresholds are set higher than your traffic volume supports.
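
The arithmetic behind Signal’s answer is a single ratio, sketched here for concreteness:

```python
# Demand coverage as defined above: qualifying products / evaluated products.
def demand_coverage(evaluated: int, qualifying: int) -> float:
    if evaluated == 0:
        return 0.0
    return qualifying / evaluated

# The example from Signal's answer: 3,948 of 4,200 products qualify.
print(round(demand_coverage(4200, 3948) * 100))  # -> 94
```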

Why a product isn’t being messaged

You: Why isn’t the Logo Hoody Black getting any messages?
Signal: The Logo Hoody Black has fading demand — views down 15% since peak and no purchases in the last 7 days. Purchase not qualifying (22 views in last 24h, zero purchases); Momentum below threshold (ratio 0.36×); Attention: 0 views in last hour — below minimum of 5 for hour1. No signal family is meeting its minimum threshold. Flockr doesn’t fabricate urgency when no genuine signal exists.
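
The per-family threshold checks Signal describes can be sketched as below. The threshold values and field names here are illustrative assumptions, not Flockr’s actual configuration:

```python
# Hypothetical sketch of per-family qualification checks; the thresholds
# shown are illustrative, not Flockr's real values.
from dataclasses import dataclass

@dataclass
class ProductStats:
    views_24h: int
    purchases_24h: int
    views_1h: int
    momentum_ratio: float

def qualifying_families(p: ProductStats) -> list[str]:
    families = []
    if p.purchases_24h >= 1:        # purchase family needs a recent purchase
        families.append("purchase")
    if p.momentum_ratio >= 1.0:     # momentum needs its ratio at/above threshold
        families.append("momentum")
    if p.views_1h >= 5:             # attention needs a minimum of hour-1 views
        families.append("attention")
    return families

# The Logo Hoody Black example: no family qualifies, so no message is shown.
hoody = ProductStats(views_24h=22, purchases_24h=0, views_1h=0, momentum_ratio=0.36)
print(qualifying_families(hoody))  # -> []
```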

Live attribution data

You: What was our conversion rate lift over the last 14 days?
Signal: Your conversion rate lift for the last 14 days is +29%. Flockr session order rate was 1.2% versus a baseline of 0.9%. Your tier 5 (cart visible) sessions are showing the strongest lift at approximately 2.1×, consistent with the dose-response pattern if Flockr is influencing purchase decisions at the cart stage.
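
As a sketch, the relative lift Signal reports can be computed from the two session order rates (this is the generic formula, not necessarily Flockr’s exact methodology, which may round or weight the underlying rates):

```python
# Relative conversion rate lift from treated vs baseline session order rates.
def conversion_lift(treated_rate: float, baseline_rate: float) -> float:
    """Return relative lift as a fraction, e.g. 0.29 for +29%."""
    if baseline_rate <= 0:
        raise ValueError("baseline rate must be positive")
    return treated_rate / baseline_rate - 1.0
```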

Catalogue monitoring

You: Are there any products I should be paying attention to right now?
Signal: 448 products are currently at scarcity risk — 19 are high priority, both low in stock and still attracting active demand. You have 5 demand spikes detected in the last 24 hours: three are the same pattern — purchases jumping from 1 to 3 on sets-category products. This is the early signal pattern that precedes a BUILDING momentum state.

Evaluating an Action Layer recommendation

You: The Logo Hoody is showing “Consider markdown” — is that right?
Signal: The classification comes from three inputs: a 15% decline since peak, 4 days declining, and a Collapsed purchase trend. The product has 32 units remaining — if you run a markdown and demand recovers, you have limited stock to capitalise on. Promote might clarify whether the decline is structural before committing to a price change.

Writing a team brief

You: Can you write a brief for our buying team on the restock situation?
Signal: 19 products require immediate restock action. All have under 5 days of stock remaining at current sell-through rates and are showing stable or accelerating demand. Flockr is already activating scarcity messaging for qualifying products, which is expected to increase purchase velocity — meaning runway may shorten faster than the current estimate.

Understanding scoring changes

You: Why is scarcity winning so many slots on PDP lately?
Signal: The adaptive model has observed that scarcity signals on PDP are converting at a significantly higher rate than purchase signals. The proofStrength component for scarcity on PDP has been tuned upward. Check the Click rate by signal type table on Analytics Signals, filtered to PDP. If scarcity’s visible CTR is materially higher than purchase’s, the model is correct.

Cross-surface coordination

You: Is Flockr coordinating what a shopper sees across pages?
Signal: The adaptive model coordinates across surfaces within a session. Each selection is made independently per page load, but scoring weights are informed by what the shopper has already seen. If a shopper has seen and ignored a scarcity message twice in the same session, the model reduces scarcity’s weight for subsequent page loads. The orchestration layer affects which family wins the slot — not whether the underlying signal is genuine.

Client onboarding — site analysis and integration generation

Integrating Flockr into a client’s website is an AI workflow. Claude Code analyses the live website and generates the site-specific integration configuration — a task that previously required manual front-end engineering work.

What Claude Code does

  • Site analysis — inspects URL structure, DOM patterns, page type detection, product ID locations, and how each surface renders in the client’s specific theme.
  • Placement selection — identifies the optimal DOM injection target per surface with separate mobile and desktop handling.
  • Event model — Flockr has a configurable event model that adapts to each client’s data layer, normalising their shopper behaviour events into a consistent schema.
  • Dynamic content handling — adapts to each site’s live DOM behaviour, including dynamic panels, asynchronous content, and real-time UI changes.
  • Integration module — produces a complete, site-specific integration — every piece of configuration a new storefront needs to go live.
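
To make the output concrete, a generated site-specific integration module might look something like the sketch below. The field names and selectors are hypothetical — they illustrate the shape of the configuration, not Flockr’s actual schema:

```python
# Hypothetical shape of an AI-generated, site-specific integration module.
# Every field name and selector here is illustrative, not Flockr's schema.
SITE_INTEGRATION = {
    "site": "example-storefront.myshopify.com",
    "page_detection": {
        "product": {"url_pattern": r"/products/([\w-]+)"},
        "collection": {"url_pattern": r"/collections/([\w-]+)"},
    },
    "placements": {
        "pdp_primary": {
            "selector_desktop": ".product__info > .price",  # separate mobile/desktop targets
            "selector_mobile": ".product-mobile__price",
            "inject": "after",
        },
    },
    "event_mapping": {  # normalise the client's data layer into one schema
        "dl_view_item": "product_view",
        "dl_add_to_cart": "add_to_bag",
    },
    "dynamic_content": {"observe_mutations": True, "debounce_ms": 150},
}
```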

AI-generated integration, independently maintained core

The shared core handles the universal mechanics — session tracking, message rendering, visibility, attribution, and slot allocation. The AI-generated integration module handles everything site-specific.

Core upgrades roll out independently of any client configuration, so improvements reach the entire fleet without re-touching individual storefronts. Every integration is reviewed and tested in preview modes before going live.

What used to require days of front-end engineering is now a single AI-assisted session. Integration scales without scaling the team.

Development and governance model

Adopting AI in a development workflow requires three things working together: persistent architectural memory, a clear source-authority model, and automated compliance checks. Without all three, codebases drift. With them, AI-built systems stay coherent indefinitely.

The dual-tool pattern

Claude Code handles all implementation. Claude Chat handles strategy, architecture, and review. Chat produces a structured spec with a reference-document checklist; Code implements it and updates every reference document in the same commit.

Architecture memory

A single versioned file carries the full system context Claude Code needs across sessions: the bundle pipeline, data stores, feature flags, signal thresholds, eligibility rules, and coding conventions. Its accuracy is enforced by the audit cycle.

Tiered source authority

Tier  ·  Source  ·  Authority
Tier 1  ·  Source code  ·  Always correct. Docs that conflict with code are wrong.
Tier 2  ·  Primary written knowledge  ·  Sidebar content catalogue (172+ entries), information architecture, architecture memory
Tier 3  ·  Structural / diagnostic  ·  Sidebar coverage index, known-issues register, glossary
Tier 4  ·  Visual verification  ·  29 checkpoint screenshots — screenshot wins over written description

Automated compliance checks

  • Table consistency check — validates that every data table in the portal conforms to the standard interaction pattern: correct controls, sortable headers, pagination, and filtering. Blocks deploys on violation.
  • Explanatory coverage check — verifies 100% info-text coverage across 172+ column and section headers. Every metric in the portal has a definition.
  • Production audit trail — every signal selection decision is logged with its eligibility, scoring, slot allocation, and rejection reasons.
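
A deploy-blocking check of this kind can be sketched in a few lines. The rule names below are hypothetical stand-ins for the pattern described above, not Flockr’s actual check:

```python
# Sketch of a deploy-blocking table consistency check (rule names hypothetical):
# every portal table must declare sortable headers, pagination, and filtering.
REQUIRED_FEATURES = {"sortable_headers", "pagination", "filtering"}

def check_tables(tables: dict[str, dict]) -> list[str]:
    """Return one violation string per non-conforming table; empty means pass."""
    violations = []
    for name, spec in tables.items():
        missing = REQUIRED_FEATURES - {k for k, v in spec.items() if v}
        if missing:
            violations.append(f"{name}: missing {sorted(missing)}")
    return violations

# A CI step would fail the deploy whenever this list is non-empty.
print(check_tables({
    "scarcity_risk": {"sortable_headers": True, "pagination": False, "filtering": True},
}))  # -> ["scarcity_risk: missing ['pagination']"]
```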

Checkpoint and review cycle

A single checkpoint command rotates the 29-screenshot review set to a prior version, captures the current portal states, generates a visual changelog from committed changes, and advances the checkpoint marker.

A saved review prompt is then applied fresh per checkpoint in Claude Chat. Findings produce a structured spec with a mandatory reference-document checklist, which Claude Code implements alongside every reference-document update in the same commit.

The closed loop — AI development that stays coherent over time

Two Claude surfaces, separate authority, one governed system

Most teams using AI in development treat it as a faster pair programmer. The codebase still drifts, the docs fall behind, and the AI doesn’t retain what it built two weeks ago. Flockr’s workflow is different: neither surface is trusted to review its own work, state lives in committed files rather than conversation, and every session reviews every layer — not just what changed.

What makes this different

Separate authority. Claude Chat holds visual context — the portal screenshots, the information architecture, the glossary, and the sidebar content catalogue. It catches visual inconsistency, terminology drift, and conceptual confusion. Claude Code holds the source code. It catches threshold drift, missing explanations, stale eligibility rules. Neither is asked to do the other’s job.

State in files, not conversation. Specs, changelogs, audit reports, rebuild logs, and the knowledge base are all committed files. Any session with either surface can be resumed or replaced without losing context.

The hard gate

Before Signal’s knowledge base can be regenerated, a standing prompt checks the audit report for unresolved conflicts. If any exist, generation refuses to proceed. This single gate prevents Signal from ever giving answers grounded in stale documentation.

Real conflicts caught in production

  • An out-of-date comment in the browser integration stated the message slot budget was 1/8; the actual formula is ceil(products / 6). Caught by audit.
  • The “most viewed” badge rule required rank === 1 in code, while documentation described it as top-10. Caught by audit.
  • Momentum signal eligibility on the cart drawer was marked disabled in architecture memory but enabled in source code. Caught by audit.
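
The corrected slot budget formula from the first conflict is a one-liner:

```python
# The slot budget formula the audit confirmed: ceil(products / 6).
import math

def slot_budget(products: int) -> int:
    return math.ceil(products / 6)

print(slot_budget(8))  # -> 2 (8 products on the page allow 2 message slots)
```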

This is a transferable model. The mechanism — two surfaces with complementary context, bridged by committed files, gated by prompts — applies to any software project. The specific files and standing prompts would change; the governance structure would not.

Knowledge base — automated content generation from source

Signal answers questions by retrieving from a structured knowledge base. Every chunk is written by Claude directly from the project’s source-authority tiers — not hand-authored — and regenerated whenever the audit cycle passes clean. This guarantees the knowledge base stays synchronised with the running system indefinitely.

How the knowledge base is generated
A standing prompt instructs Claude to study the source code, schemas, and portal UI, then write the knowledge base as a series of self-contained, query-optimised chunks. The prompt enforces a four-tier source authority (source code → written documentation → structural indices → visual verification) and refuses to run if the audit report contains unresolved conflicts. Every value, threshold, formula, and rule in the knowledge base is traceable to a Tier 1 source. When the system changes, regeneration is a single command — there is no drift period where Signal gives outdated answers.

What Signal knows

The knowledge base is organised into 18 files covering every topic an operator could ask about. Content is structured for retrieval: each H2 heading echoes the likely question, and each chunk is written to stand alone so the retrieval system can surface the right answer without needing surrounding context.

Topic area  ·  What Signal can answer
What Flockr is  ·  Positioning, how Flockr connects to a store, data mode vs live mode, why demand signals influence conversion
Portal navigation and pages  ·  What every page shows, every KPI card, every chart, every tab — across Home, Demand, Analytics, Conversion, Messages, Realtime, and Settings
Signal families and eligibility  ·  All 11 signal families, the complete eligibility matrix (11 families × 8 surfaces), timeframe restrictions, lifecycle states, suppression rules, why a product might have no qualifying signal
Selection engine and scoring  ·  All 13 pipeline steps, the full scoring formula with every weight, slot allocation logic, elimination reason codes, the coverage funnel
Message system  ·  Every message code Flockr produces, surfaces and placements, slot budget, message variants, breakpoints, rank badge logic
Attribution and conversion  ·  The exposure tier model, conversion rate lift methodology, incremental revenue calculation, the dose-response chart
Demand intelligence  ·  Lifecycle classification, momentum calculation, scarcity and overstock detection, live demand events, category rankings
Data layer  ·  End-to-end pipeline, every event type, the analytics views, join keys, the difference between data mode and live mode field-by-field
Pipeline configuration  ·  Surfaces and placements, eligibility, thresholds, scoring weights, client modes
Troubleshooting  ·  Twelve structured diagnostic topics — why activation rate is low, why a product isn’t messaged, why lift isn’t showing, how to verify Flockr is working
Glossary  ·  Every term used anywhere in the portal, with canonical definitions — ~130 entries

Action Layer — demand intelligence to commercial decision

The Action Layer surfaces a single recommended action per product across four areas of the Demand page: Scarcity risk table, Fading tab, Momentum leaderboard, and the Live demand event feed (HIGH and CRITICAL events only).

Eight actions and their inputs

Action  ·  Urgency  ·  Derives from
Restock immediately  ·  Critical  ·  Runway <5 days, demand Accelerating or Stable
Restock soon  ·  High  ·  Runway 5–14 days, demand Accelerating or Stable
Feature now  ·  High  ·  3H + 7D momentum elevated, or top-3 category entry
Promote  ·  Medium  ·  Fading views, purchases Holding
Investigate  ·  Medium  ·  Cliff decline in 1–2 days, or sudden anomaly
Monitor  ·  Medium  ·  Short runway, fading demand
Watch  ·  Low  ·  Single-grain momentum (3H only)
Consider markdown  ·  Low–Med  ·  Views, ATB, purchases declining 7+ days

Classification logic

  • Scarcity risk — two inputs: runway bucket × 3-day demand trend.
  • Fading demand — decline magnitude, purchase trend (Holding / Declining / Collapsed), days declining. Lifecycle modifier: a Discovering product with collapsed purchase shows Investigate not Consider markdown.
  • Momentum leaderboard — 3H and 7D grains. Compound note “Low stock — act urgently” when Feature now product has active scarcity.
  • Live demand feed — Feature now on lifecycle breakout; Restock immediately on accelerating low stock; Investigate on rank exit.

When a product row is expanded, the action appears with a one-sentence explanation: “Promote — views have declined 47% since peak but purchase rate is holding.” Generated from classification inputs, not a fixed string.

Signal explains any classification and helps communicate decisions to the team. See Section H.

Adaptive scoring — self-optimising signal selection

The scoring model learns continuously from each client’s own conversion data. Five dimensions are optimised simultaneously without manual configuration.

  • Adaptive timeframe selection — learns which time windows convert per signal family per surface. If hour12 and hour1 purchase signals convert at the same rate, the penalty for the longer window is reduced.
  • Signal weight optimisation — per-client, per-surface weight distributions adjust from observed conversion data. Compounds over time.
  • Placement tuning — learns which families perform better in primary versus secondary position on PDP.
  • Journey-stage weighting — cart pressure and scarcity gain weight at the decided stage. Newness signals lose weight as shoppers progress from browsing to considering.
  • Cross-surface orchestration — session context via session_id informs weight adjustments within a session. A signal family seen and not acted on twice is weighted down on subsequent pages.
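
One simple way such weight optimisation could work is a damped multiplicative update toward each family’s observed conversion ratio. This is an illustrative assumption, not Flockr’s actual formula:

```python
# Illustrative multiplicative weight update (NOT Flockr's actual formula):
# nudge a family's per-surface weight toward its observed conversion ratio.
def update_weight(current: float, observed_cvr: float, surface_avg_cvr: float,
                  learning_rate: float = 0.1) -> float:
    """Blend the current weight with (weight * CVR ratio) at the learning rate."""
    target_ratio = observed_cvr / surface_avg_cvr if surface_avg_cvr > 0 else 1.0
    return current * (1 - learning_rate) + current * target_ratio * learning_rate

# A family converting at 2x the surface average drifts upward, compounding
# over successive observation windows.
w = 1.0
for _ in range(10):
    w = update_weight(w, observed_cvr=0.04, surface_avg_cvr=0.02)
print(round(w, 2))  # -> 2.59
```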

The adaptive model is treated as a Tier 1 source. Weight changes above a defined threshold surface as conflicts in the audit cycle — Signal’s knowledge of scoring behaviour stays current automatically as the model learns.

Extensibility — the pattern beyond e-commerce

The architecture is not catalogue-specific. The core pattern applies wherever there are items to surface and decisions to influence.

Concept translation to a CMS context

Flockr concept  ·  CMS equivalent
Product  ·  Any managed content item — article, page, asset
Purchase / attention signal  ·  Read completion, scroll depth, click-through, share
Surface  ·  Article feed, related content panel, search, email digest
Slot  ·  Position in a feed, recommendation slot, featured block
Conversion  ·  Time on site, subscription, download, form completion

What transfers without modification

The scoring formula and its seven dimensions, the slot allocation model, the exposure-tier attribution methodology, and the governance approach are all directly transferable. The signal taxonomy and surface definitions are the variables; the system architecture is the invariant.

The closed-loop model is also portable

Architecture memory, tiered source authority, audit-and-rebuild cycle, and automated compliance checks are applicable to any software project. A CMS engineering team could adopt the same structure with their own domain-specific architecture memory.

Why this system matters

This operating model directly impacts scalability, defensibility, and product differentiation.

Scalable integration model

Client onboarding scales without proportional engineering cost. What required days of front-end work is now a single AI session — repeatable across any Shopify storefront.

Persistent system knowledge

System knowledge persists beyond individuals. Architecture memory, the audit cycle, and committed artefacts mean the codebase and its documentation stay coherent regardless of team changes.

Product differentiation

Signal transforms a reporting tool into a conversational intelligence layer. No competitor offers an operator-facing AI assistant grounded in live behavioural data, account configuration, and verified system knowledge simultaneously.