Request data exports

Prerequisites and process for initiating ongoing data exports

📘

The process described on this page requires a Lightup deployment that uses S3 or Azure Blob storage, as detailed below.

Upon request, Lightup will begin regular exports of your data quality metrics to your cloud-based storage (currently, S3 or Azure Blob). You can then use the data exports for custom analysis, custom dashboard generation, and archiving purposes. If you use S3 for cloud storage, you can create a Redshift database and import the data there.

What data we export

When exporting is turned on, each day we export a summary of all metric data points collected that day, along with indicators of whether an incident was detected for each data point. Each export is written to a new CSV file in your cloud storage (S3 bucket or Azure Blob container), with one row per data point and the following columns.

📘

Items listed below have the format columnName - value source: value data type.

  • eventTs - datapoint timestamp: seconds from Unix Epoch
  • slice - datapoint slice: JSON object
  • value - datapoint value (what it represents depends on the type of metric): floating point number
  • metricUuid - datapoint metric uuid in Lightup db: string
  • metricName - datapoint metric name: string
  • metricType - datapoint metric type (the value of config.aggregation.type): string
  • metricAggregationWindow - datapoint metric aggregation window (hourly, daily, etc.): string
  • metricTags - datapoint metric tags: Array[string]
  • sourceUuid - datapoint metric source uuid in Lightup db: string
  • sourceName - datapoint metric source name: string
  • schemaName - datapoint metric schema name: string
  • tableUuid - datapoint metric table uuid in Lightup db: string
  • tableName - datapoint metric table name: string
  • columnUuid - datapoint metric column uuid in Lightup db: string
  • columnName - datapoint metric column name: string
  • incidentCount - number of active incidents associated with the datapoint's metric at eventTs: integer
  • workspaceId - workspace id of the metric: uuid
  • monitorUuid - uuid of the monitor if datapoint was monitored: string
  • monitorName - name of the monitor if datapoint was monitored: string
  • monitorTags - monitor tags if datapoint was monitored: string
  • monitoredValue - processed value that is compared with monitor bounds: floating point number
  • monitorLowerBound - lower bound of the monitor if datapoint was monitored: floating point number
  • monitorUpperBound - upper bound of the monitor if datapoint was monitored: floating point number
  • dataExtractionComments - description of any errors encountered during data extraction: string
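For illustration, the start of a hypothetical export row might look like the following (the values are invented and the trailing columns are omitted):

```csv
eventTs,slice,value,metricUuid,metricName,...
1700006400,"{""region"": ""us-east""}",42.0,8d3f...,orders_row_count,...
```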

Start getting data exports

Email [email protected] and provide the following information, based on your cloud storage account. There are currently two options: S3 and Azure Blob.

  • Start exports to S3
  • Start exports to Azure Blob

S3 exports - parameters

Include these details in your message:

  • S3 Bucket name where you want the export files (Required)
  • AWS access key ID (Optional)
  • AWS secret access key (Optional)
  • Region name (Optional)

Azure Blob exports - parameters

Include these details in your message:

  • Azure Blob Container name where you want the export files (Required)
  • Account name (Required)
  • Account key (Required)

Import S3 CSV files into a Redshift database

If you use S3 cloud storage for Lightup, you can create a Redshift database and import the data from your CSV files, unlocking numerous analytical options.
The export schema is subject to planned changes; if you create a database per the following steps, account for this in your maintenance plans.

  1. Create a Redshift database. For syntax, see Amazon Redshift - Create database.
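    For example, a minimal sketch (the database name lightup_exports is illustrative):

    ```sql
    -- Create a database to hold the Lightup export data; the name is an example.
    CREATE DATABASE lightup_exports;
    ```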
  2. Open the Redshift query editor in the AWS console.
  3. Make sure your Redshift IAM role is allowed to read from the source S3 bucket. For help using IAM roles, see Using IAM roles.
  4. Create a table using the following SQL:

    ```sql
    CREATE TABLE datapointsexport (
        eventTs timestamp,
        slice super,
        value float,
        metricUuid varchar,
        metricName varchar,
        metricType varchar,
        metricAggregationWindow varchar,
        metricTags super,
        sourceUuid varchar,
        sourceName varchar,
        schemaName varchar,
        tableUuid varchar,
        tableName varchar,
        columnUuid varchar,
        columnName varchar,
        incidentCount int,
        workspaceId varchar,
        monitorUuid varchar,
        monitorName varchar,
        monitorTags varchar,
        monitoredValue float,
        monitorLowerBound float,
        monitorUpperBound float,
        incidentData varchar,
        userDescription varchar,
        dataExtractionComments varchar
    );
    ```
  5. To import a CSV file, run the following SQL, replacing 's3://lightup-datapoints-dump' with the path to your S3 bucket (append the CSV file's name to load a single file) and the iam_role value with the ARN of your own role:

    ```sql
    COPY datapointsexport
    FROM 's3://lightup-datapoints-dump'
    iam_role 'arn:aws:iam::231612517276:role/RedshiftCopy'
    csv
    IGNOREHEADER 1
    DELIMITER ','
    EMPTYASNULL
    TIMEFORMAT 'epochsecs';
    ```
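Once the data is loaded, you can run custom analysis directly in Redshift. As a minimal sketch (table and column names follow the CREATE TABLE statement above), the following query rolls up total data points and incident-flagged data points per metric per day:

```sql
-- Daily roll-up per metric: total data points and those with active incidents.
SELECT
    metricName,
    DATE_TRUNC('day', eventTs) AS export_day,
    COUNT(*) AS datapoints,
    SUM(CASE WHEN incidentCount > 0 THEN 1 ELSE 0 END) AS datapoints_with_incidents
FROM datapointsexport
GROUP BY metricName, DATE_TRUNC('day', eventTs)
ORDER BY export_day, metricName;
```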

For information about automating imports from S3 files into Redshift, see A Zero-Administration Amazon Redshift Database Loader.