User Guide
Review data quality summaries
See how well your data quality measures up, scored overall and per quality dimension
Every metric measures a data quality dimension: accuracy, completeness, timeliness, or custom. The Dashboards -> Data Quality tab helps you see at a glance where you have room to improve data quality, overall and for each data quality dimension.
  1. Data quality summary scores - Summary scores, overall and by data quality dimension, for the most recent 24-hour period (12:00 AM - 11:59:59 PM), ignoring data within the evaluation delay period.
  2. Percentage of passing monitors by day - Line graph showing two weeks' history of the currently selected summary score. A monitor passes if it logs no incidents; a monitor fails as soon as it logs an incident.
  3. 2-Week Monitor Summary - Table of failed monitors with columns for monitor, metric, and a heatmap of incidents. On the heatmap, red cells indicate days that produced incidents. Up to ten failing monitors are listed, ordered by the number of days with at least one logged incident.

Review data quality summary scores and history

The Data Quality tab contains scores for each dimension and an overall score representing all metrics. Scores are calculated daily. Each score is a simple ratio expressed as a percent, indicating how many relevant monitors passed (logged no incidents) for the day. Below the scores, a chart displays two weeks' score history and a per-metric heatmap that shows which monitors recorded failures on which days (for the same two weeks).
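The score arithmetic described above can be sketched as follows. This is an illustration only, not the product's internal implementation; the function name and the monitor names in the example are hypothetical.

```python
def daily_score(monitor_incident_counts):
    """Daily score = passing monitors / total monitors, as a percent.

    A monitor passes the day if it logged zero incidents that day.
    monitor_incident_counts: dict mapping monitor name -> incident count.
    """
    total = len(monitor_incident_counts)
    if total == 0:
        return None  # no relevant monitors, so no score for the day
    passing = sum(1 for n in monitor_incident_counts.values() if n == 0)
    return round(100 * passing / total, 1)

# Example: 3 of 4 monitors logged no incidents, so the day scores 75.0%.
counts = {"null_ratio": 0, "row_count": 2, "schema_drift": 0, "freshness": 0}
print(daily_score(counts))  # -> 75.0
```

Note that a single incident fails a monitor for the whole day, so the score reflects how many monitors stayed clean, not how many incidents occurred.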
  1. Select a score. The chart (passing monitors) and heatmap (failing monitors) update to provide details about the selected score's related monitors.
  2. For a definition of the score's dimension, move your cursor over the (i) icon next to the score's name.
  3. The chart shows the score for each day over the preceding two weeks - the number of passing monitors divided by the total number of monitors.
  4. The heatmap lists up to ten failing monitors and the metric for each, ordered by the highest count of failing days in the two weeks leading up to the score's date. On the heatmap, days with failures (one or more incidents) show as red and passing days (no incidents logged) show as gray.
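The heatmap's ordering rule can be sketched like this: count the days on which each monitor logged at least one incident, then take the worst offenders first, capped at ten. The function and data shape here are hypothetical illustrations, not the product's API.

```python
from collections import Counter

def top_failing_monitors(incident_log, limit=10):
    """Order failing monitors by number of days with at least one incident.

    incident_log: iterable of (day, monitor) pairs, one per logged incident.
    Returns up to `limit` monitor names, most failing days first.
    """
    failing_days = Counter()
    # Deduplicate (day, monitor) pairs so several incidents on the same day
    # still count as just one failing day for that monitor.
    for day, monitor in set(incident_log):
        failing_days[monitor] += 1
    return [monitor for monitor, _ in failing_days.most_common(limit)]

incidents = [
    ("Mon", "null_ratio"), ("Mon", "null_ratio"),  # two incidents, one day
    ("Tue", "null_ratio"),
    ("Mon", "row_count"),
]
print(top_failing_monitors(incidents))  # -> ['null_ratio', 'row_count']
```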