Anomalies

The Anomalies page (Anomalies in the sidebar) flags unusual spikes or drops in feature usage so you can investigate without watching every metric manually. Beacon runs anomaly detection daily and surfaces statistically unusual days as anomaly cards.

Prerequisites

  • Plan: Available on all plans, including Trial.
  • Permission: view_alerts to see anomalies. manage_alerts is additionally required to acknowledge anomalies, mute features, and configure sensitivity overrides.

How detection works

Once per day (overnight UTC), Beacon scans the previous day’s event counts for every distinct feature in your data — one count per product, category, and name combination. For each feature, it compares yesterday’s count to a baseline built from the recent rolling average and how much that feature typically varies. If the count is far enough from the baseline — measured as a z-score — that day becomes an anomaly.

Two values drive the decision:

  • Baseline mean — average count for this feature across the recent baseline window.
  • Baseline standard deviation — how variable that feature normally is.

A z-score whose absolute value exceeds the threshold (default 2.0, configurable per feature) marks the day as anomalous. Higher absolute z-scores indicate larger departures from normal, in either direction: unusual spikes and unusual drops both qualify.
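The decision above can be sketched in a few lines. This is an illustrative model only; the window length and function name are hypothetical, not Beacon's actual implementation:

```python
import statistics

def detect(counts, threshold=2.0):
    """Compare yesterday's count (the last entry) against a baseline
    built from the preceding days; return the z-score if its absolute
    value exceeds the threshold, else None."""
    *history, yesterday = counts
    mean = statistics.mean(history)
    std = statistics.pstdev(history)
    if std == 0:
        return None  # flat baseline: no meaningful z-score
    z = (yesterday - mean) / std
    return z if abs(z) > threshold else None

# A feature that normally fires ~100 times a day, then jumps to 160,
# yields a large positive z-score; a drop to 40 yields a large negative one.
detect([98, 102, 100, 99, 101, 100, 160])
```

Population standard deviation (`pstdev`) is used here as a modeling choice; a feature with zero historical variability is skipped rather than divided by zero.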

Cadence: anomalies for today don’t appear until tomorrow’s detection pass runs. If a feature you watch is spiking right now, expect to see it after the next overnight pass.

Read the list

The default view shows unacknowledged anomalies, sorted by detection day (newest first), then by absolute z-score (largest first).
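That ordering amounts to a two-key sort. A hypothetical sketch (the record shape is assumed, not Beacon's actual data model):

```python
from datetime import date

# Illustrative records: (detection_day, z_score, feature_name)
anomalies = [
    (date(2024, 5, 2), -2.4, "export_csv"),
    (date(2024, 5, 3),  2.1, "login"),
    (date(2024, 5, 3), -4.8, "sync"),
]

# Newest detection day first, then largest absolute z-score first.
anomalies.sort(key=lambda a: (a[0], abs(a[1])), reverse=True)
# Order is now: sync (May 3, |z| = 4.8), login (May 3), export_csv (May 2)
```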

Each anomaly card shows:

  • Feature identifier — source_app / category / name.
  • Detection day — the bucket day whose count was flagged as anomalous.
  • Observed count — actual event count for that day.
  • Baseline mean ± std dev — what was expected.
  • Z-score — how many standard deviations the observed count was from the baseline. Positive = spike, negative = drop.

Filters

  • Product — restrict to one product. (Selecting one also enables the Mute Product button — see below.)
  • Show acknowledged — toggle to include anomalies you’ve already reviewed.

Acknowledge an anomaly

If you have manage_alerts, each card shows an Acknowledge button. Clicking it marks the anomaly as reviewed and hides it from the default view; toggle Show acknowledged to see them again. Acknowledgment is informational — it doesn’t suppress future detection.

Mute a feature

If a feature regularly produces anomalies you don’t care about (a noisy debug event, a known-variable metric), you can mute it. Click Mute on an anomaly card and choose:

  • A duration in days, or
  • Indefinite (no expiry).

Muted features are excluded from future anomaly cards until the mute expires or you unmute. Manage active mutes via the Active mutes section at the top of the page (requires manage_alerts).
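The expiry rule can be modeled as a simple date check, assuming a mute stores an optional expiry date (all names here are hypothetical):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Mute:
    feature: str                 # product / category / name key
    expires_on: Optional[date]   # None means indefinite

def is_muted(mute: Optional[Mute], today: date) -> bool:
    """A muted feature is excluded from anomaly cards until expiry."""
    if mute is None:
        return False
    return mute.expires_on is None or today <= mute.expires_on

# A mute set to expire June 1 is active mid-window and lapses after:
m = Mute("app/debug/noisy_event", expires_on=date(2024, 6, 1))
is_muted(m, date(2024, 5, 15))   # active
is_muted(m, date(2024, 6, 2))    # expired
```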

Product-level mute

Select a product in the filter, then click Mute Product to mute every feature for that product at once. Useful during a known event volume change (a release, a load test, a third-party integration outage) when you don’t want noise across the whole product.

Sensitivity overrides

The default z-score threshold of 2.0 works for most features. If a particular feature is too noisy (lots of false-positive anomalies) or too quiet (real anomalies don’t trigger), click Sensitivity on a card and set a custom threshold for that specific product / category / name.

Higher thresholds = fewer anomalies (only the most extreme deviations trigger). Lower thresholds = more anomalies (more sensitive to small departures). Reset the override to return that feature to the global default.
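Conceptually, an override is a per-feature lookup that falls back to the global default. A sketch with hypothetical names:

```python
DEFAULT_THRESHOLD = 2.0

# Illustrative per-feature overrides keyed by (product, category, name).
overrides = {("app", "export", "csv"): 3.0}   # noisy feature: less sensitive

def threshold_for(product, category, name):
    return overrides.get((product, category, name), DEFAULT_THRESHOLD)

# A day with |z| = 2.5 triggers under the default but not the override:
abs(2.5) > threshold_for("app", "login", "success")   # 2.5 > 2.0: anomaly
abs(2.5) > threshold_for("app", "export", "csv")      # 2.5 < 3.0: no anomaly
```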

Common questions

The list is empty. Either no anomalies have been detected (good — your feature usage is steady), or your data history is too short to compute baselines. The detector needs enough days of past data to compute a meaningful mean and standard deviation; brand-new features take a few days to accumulate a baseline.

A spike I expected isn’t on the list. Three reasons it might be suppressed: (1) the feature is muted — check Active mutes; (2) a sensitivity override on that feature uses a high threshold; (3) the change wasn’t large enough relative to the baseline variability. Features that normally swing widely have wider baselines, so a spike within that band doesn’t qualify.

Drops show up alongside spikes. Yes — z-scores can be negative. A feature that usually fires 1000 times a day and suddenly drops to 50 is just as worth investigating as one that jumps to 5000.

Can I get alerts via email or Slack? Not from this page today. The Anomalies page is the visual surface. Programmatic alerting (email / webhook) is on the roadmap as a separate Alert Rules feature.

Next

You’ve reached the end of Exploring Your Data. Continue with Products & Versions in the sidebar, or revisit the Glossary.