Monitor a metric
You monitor metrics to stay on top of the data quality issues they uncover. A monitor generates incidents when a metric value falls outside normal thresholds, which you can set explicitly or have set for you by anomaly detection.
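The core idea can be sketched in a few lines of Python. This is an illustration of the concept only, not Lightup's implementation; the `Incident` class, the `check` function, and the threshold values are all hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Incident:
    """A record of a metric value that fell outside its thresholds."""
    timestamp: str
    value: float

# Hypothetical thresholds; on a real monitor you set these explicitly
# or let anomaly detection determine them.
LOWER, UPPER = 100.0, 500.0

def check(timestamp: str, value: float) -> Optional[Incident]:
    """Return an Incident when the value is outside [LOWER, UPPER]."""
    if value < LOWER or value > UPPER:
        return Incident(timestamp, value)
    return None

print(check("2024-01-01T00:00", 620.0))  # out of bounds: an Incident
print(check("2024-01-01T01:00", 250.0))  # within bounds: None
```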
Monitoring a metric takes three main steps:
- Create the monitor
- Train the monitor
- Start monitoring
Create a monitor
When you create a monitor, you specify its metric and monitor type, and optionally add an alerting channel and set advanced detection settings.
Decide which monitor to use
Lightup supports three monitor types. Available monitors depend on the metric type, and you'll only see options that work with the metric you've chosen to monitor. You can create basic monitors quickly from Explorer. For more advanced monitors, start from the Monitors List.
| Use case | Monitor type | Start in |
|---|---|---|
| I know what metric values or changes are normal and want to set thresholds myself | Manual thresholds | Explorer, Monitors List |
| I want Lightup to automatically determine what's normal and set thresholds for me | Value out of expectations | Explorer, Monitors List |
| My metric values vary period over period, and I want to know when there's a sudden change | Sharp change | Monitors List |
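To make the distinction between the three monitor types concrete, here is a minimal Python sketch. The detection rules shown (fixed bounds, a mean ± k·stddev band, and a relative period-over-period delta) are simple stand-ins for illustration, not Lightup's actual detection logic, and the sample history is hypothetical.

```python
from statistics import mean, stdev

# Illustrative history of metric values (hypothetical data).
history = [100, 102, 98, 101, 99, 103, 97, 100]

def manual_thresholds(value, lower, upper):
    """Manual thresholds: you decide what's normal."""
    return not (lower <= value <= upper)

def value_out_of_expectations(value, history, k=3.0):
    """Anomaly detection on the value itself: flag values far from the
    trained mean (a mean +/- k*stddev band is one simple stand-in)."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) > k * sigma

def sharp_change(value, history, max_delta=0.2):
    """Flag a sudden period-over-period change relative to the last value."""
    previous = history[-1]
    return abs(value - previous) / previous > max_delta

print(manual_thresholds(140, lower=90, upper=110))  # True: outside your bounds
print(value_out_of_expectations(140, history))      # True: far from the mean
print(sharp_change(104, history))                   # False: only a 4% move
```

Note that the first rule needs no history at all, while the other two only work once enough history exists to characterize "normal" — which is why anomaly detection monitors require training and manual-threshold monitors do not.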
Train a monitor
Training prepares a monitor to identify out-of-bounds metric values, that is, to detect incidents. If you enter manual thresholds, no training is needed: the thresholds are specified, not detected. Training an anomaly detection monitor takes some time, depending on the length and number of the training periods you add.
Anomaly detection monitors need metric history to train on, which you specify by adding one or more training periods. The training periods you add should contain enough historical metric values for the monitor to discern what normal looks like. Monitors for incremental metrics can query the datasource to get this history. Monitors for Full Table metrics and metadata metrics can't get historical metric values by querying the datasource, so let those metrics run for a while before you train. Review the training period options for default monitors to get an idea of how much metric history is sufficient for the monitor to train successfully.
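The role of training periods can be sketched as follows. The history, the date ranges, and the simple mean/stddev profile are all hypothetical; the point is that training reads values only from the periods you choose (letting you exclude known-bad stretches) and fails when there isn't enough history yet.

```python
from statistics import mean, stdev

# Hypothetical historical metric values, keyed by date, as supplied by
# the datasource or by accumulated metric runs.
metric_history = {
    "2024-01-01": 100.0, "2024-01-02": 98.0, "2024-01-03": 103.0,
    "2024-01-04": 101.0, "2024-01-05": 97.0, "2024-01-06": 250.0,  # outage day
    "2024-01-07": 99.0,  "2024-01-08": 102.0,
}

# Training periods: date ranges whose values teach the monitor what
# "normal" looks like. Here we skip the known outage on 2024-01-06.
training_periods = [("2024-01-01", "2024-01-05"), ("2024-01-07", "2024-01-08")]

def train(history, periods):
    """Build a simple profile (mean and stddev) from the training periods."""
    values = [v for start, end in periods
              for ts, v in history.items() if start <= ts <= end]
    if len(values) < 2:
        raise ValueError("not enough history; let the metric run longer first")
    return {"mean": mean(values), "stdev": stdev(values)}

profile = train(metric_history, training_periods)
print(profile["mean"])  # 100.0
```

The "not enough history" error mirrors why Full Table and metadata metrics should run for a while before training: without accumulated values, there is nothing to build a profile from.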
Training sliced metrics
When you add a monitor to a sliced metric, each slice gets its own monitor and can generate its own incidents. Training creates a separate profile for each slice value that's in the metric when you add the monitor. When new slice values appear in the metric, you'll need to train the monitor on the resulting new slices before they can generate any incidents.
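A small sketch of the per-slice idea, with hypothetical slice values and data: training builds one profile per slice value present in the history, and a slice value that appears later has no profile, so it can't generate incidents until you retrain.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical sliced metric history: (slice_value, metric_value) pairs,
# e.g. a row count sliced by region.
history = [
    ("us-east", 100.0), ("us-east", 98.0), ("us-east", 102.0),
    ("eu-west", 40.0),  ("eu-west", 42.0), ("eu-west", 38.0),
]

def train_slices(history):
    """Train a separate profile for every slice value seen in the history."""
    by_slice = defaultdict(list)
    for slice_value, metric_value in history:
        by_slice[slice_value].append(metric_value)
    return {s: {"mean": mean(v), "stdev": stdev(v)} for s, v in by_slice.items()}

profiles = train_slices(history)

# A slice value absent at training time has no profile yet, so it can't
# generate incidents until the monitor is trained on the new slice.
print("ap-south" in profiles)  # False
```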
Preview a monitor
Once you've configured, saved, and trained a monitor, you can preview it. Preview shows what the monitor would detect without requiring it to be live. You can view sample data first to help you decide what date range to preview.
Start monitoring
After you save and train a monitor, it's paused by default. To start monitoring, set Monitor Status to Live on the Define tab. The monitor begins logging incidents from that moment forward.
If you also want the monitor to check for incidents in the past, set Backfill Incidents Starting to log incidents back to that date.
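One way to picture backfill, as a hypothetical sketch (the data, thresholds, and `backfill_incidents` function are illustrative, not a Lightup API): the monitor re-evaluates already-recorded metric values from the start date onward and logs incidents for any that were out of bounds.

```python
from datetime import date

# Hypothetical past metric values, one per day.
past_values = {
    date(2024, 3, 1): 100.0,
    date(2024, 3, 2): 700.0,  # would have been an incident
    date(2024, 3, 3): 101.0,
}

LOWER, UPPER = 90.0, 110.0  # assumed thresholds

def backfill_incidents(values, start):
    """Re-evaluate past values from `start` onward, as Backfill Incidents
    Starting would, and return the dates whose values were out of bounds."""
    return [d for d, v in sorted(values.items())
            if d >= start and not (LOWER <= v <= UPPER)]

print(backfill_incidents(past_values, date(2024, 3, 1)))  # [date(2024, 3, 2)]
print(backfill_incidents(past_values, date(2024, 3, 3)))  # []
```

Choosing a later start date simply excludes older values from re-evaluation, which is why no incidents appear before the date you set.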