Every metric measures a data quality dimension: accuracy, completeness, timeliness, or custom. The Dashboards > Data Quality tab helps you see at a glance where you have room to improve data quality, overall and for each data quality dimension.
- Data quality summary scores (depicted above) - Summary scores, overall and by data quality dimension, for the most recent 24-hour period (12:00 AM - 11:59:59 PM), ignoring data within the evaluation delay period. Scroll down to see two weeks of history for the selected summary score. When you open the Dashboard, the Overall summary score is selected, and the history includes all data quality dimensions. Choose a different summary score to see its history instead.
- Percentage of passing monitors by day (item 1, below) - Line graph showing two weeks' history of the currently selected summary score. A monitor passes if it logs no incidents; a monitor fails as soon as it logs an incident. The Y-axis values automatically adjust to the range of displayed values.
- 2-Week Monitor Summary (item 2, below) - Table of failed monitors with columns for monitor, metric, and a heatmap of incidents. On the heatmaps, red cells indicate days that produced incidents. Up to ten failing monitors are listed, ordered by number of days with at least one logged incident.
The Data Quality tab contains scores for each dimension and an overall score representing all metrics. Scores are calculated daily. Each score is a simple ratio expressed as a percent, indicating how many relevant monitors passed (logged no incidents) for the day. Below the scores, a chart displays two weeks' score history, and a per-metric heatmap shows which monitors recorded failures on which days over the same two weeks.
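The score calculation described above can be sketched as follows. This is a minimal illustration, not the product's actual implementation; the function name and the incident-count input are hypothetical.

```python
def daily_score(incidents_by_monitor: dict[str, int]) -> float:
    """Daily summary score: the percentage of monitors that passed
    (logged zero incidents) out of all relevant monitors that day.

    `incidents_by_monitor` maps each monitor's name to the number of
    incidents it logged for the day (hypothetical input shape).
    """
    total = len(incidents_by_monitor)
    if total == 0:
        return 100.0  # assumption: with no relevant monitors, nothing failed
    passing = sum(1 for count in incidents_by_monitor.values() if count == 0)
    return 100.0 * passing / total

# Example: 3 of 4 monitors logged no incidents, so the score is 75.0
score = daily_score({"null_rate": 0, "row_count": 2, "freshness": 0, "schema": 0})
```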
- On the Dashboard > Data Quality tab, select a summary score. The chart (passing monitors) and heatmap (failing monitors) change to provide details about the selected score's related monitors.
- For a definition of the score's dimension, move your cursor over the (i) icon next to the score's name.
- The chart shows the score for each day over the preceding two weeks: the number of passing monitors divided by the total number of monitors.
- The heatmap lists up to ten failing monitors and the metric for each, ordered by the highest count of failing days in the two weeks leading up to the score's date. On the heatmap, days with failures (one or more incidents) show as red and passing days (no incidents logged) show as gray.
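The heatmap's ranking rule (up to ten failing monitors, ordered by the count of days with at least one incident) can be sketched like this. A minimal illustration under assumed inputs; the function name and data shape are hypothetical, not the product's API.

```python
def top_failing_monitors(
    daily_incidents: dict[str, list[int]], limit: int = 10
) -> list[tuple[str, int]]:
    """Rank monitors by the number of failing days in the window.

    `daily_incidents` maps each monitor's name to a list of per-day
    incident counts over the two-week window (hypothetical shape).
    A day fails if it logged one or more incidents. Monitors with no
    failing days are excluded; at most `limit` monitors are returned,
    ordered by the highest count of failing days first.
    """
    failing_days = {
        monitor: sum(1 for count in counts if count > 0)
        for monitor, counts in daily_incidents.items()
    }
    failing = [(m, days) for m, days in failing_days.items() if days > 0]
    failing.sort(key=lambda item: item[1], reverse=True)
    return failing[:limit]
```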
If your workspace has an Alation integration set up, you can review DQ Health information about tables in supported data sources. For an introduction, see Alation's Help page, View Data Health.