Redash
Module redash
This plugin extracts the following:
- Redash dashboards and queries/visualizations
- Redash chart table lineages (disabled by default)
CLI based Ingestion
Install the Plugin
pip install 'acryl-datahub[redash]'
Starter Recipe
Check out the following recipe to get started with ingestion! See below for full configuration options.
For general pointers on writing and running a recipe, see our main recipe guide.
source:
  type: "redash"
  config:
    connect_uri: http://localhost:5000/
    api_key: REDASH_API_KEY

    # Optional configurations
    # api_page_limit: 1  # default: no limit on dashboards and charts API pagination
    # skip_draft: true  # default: true, only ingest published dashboards and charts
    # dashboard_patterns:
    #   deny:
    #     - ^denied dashboard.*
    #   allow:
    #     - .*allowed dashboard.*
    # chart_patterns:
    #   deny:
    #     - ^denied chart.*
    #   allow:
    #     - .*allowed chart.*
    # parse_table_names_from_sql: false
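To run the recipe, save it to a file and pass it to the DataHub CLI (the filename redash.yml here is just an example):
datahub ingest -c redash.yml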
Config Details
Note that a . is used in the table below to denote nested fields in the YAML recipe (for example, dashboard_patterns.allow refers to the allow list nested under dashboard_patterns).
Field [Required] | Type | Description | Default | Notes |
---|---|---|---|---|
api_key | string | Redash user API key. | REDASH_API_KEY | |
api_page_limit | integer | Limit on number of pages queried for ingesting dashboards and charts API during pagination. | 9223372036854775807 | |
connect_uri | string | Redash base URL. | http://localhost:5000 | |
page_size | integer | Limit on number of items to be queried at once. | 25 | |
parallelism | integer | Parallelism to use while processing. | 1 | |
parse_table_names_from_sql | boolean | See note below. | False | |
skip_draft | boolean | Only ingest published dashboards and charts. | True | |
sql_parser | string | custom SQL parser. See note below for details. | datahub.utilities.sql_parser.DefaultSQLParser | |
env | string | Environment to use in namespace when constructing URNs. | PROD | |
chart_patterns | AllowDenyPattern | regex patterns for charts to filter for ingestion. | {'allow': ['.*'], 'deny': [], 'ignoreCase': True} | |
chart_patterns.allow | array(string) | List of regex patterns to include in ingestion. | ['.*'] | |
chart_patterns.deny | array(string) | List of regex patterns to exclude from ingestion. | [] | |
chart_patterns.ignoreCase | boolean | Whether to ignore case sensitivity during pattern matching. | True | |
dashboard_patterns | AllowDenyPattern | regex patterns for dashboards to filter for ingestion. | {'allow': ['.*'], 'deny': [], 'ignoreCase': True} | |
dashboard_patterns.allow | array(string) | List of regex patterns to include in ingestion. | ['.*'] | |
dashboard_patterns.deny | array(string) | List of regex patterns to exclude from ingestion. | [] | |
dashboard_patterns.ignoreCase | boolean | Whether to ignore case sensitivity during pattern matching. | True | |
The JSONSchema for this configuration is inlined below.
{
  "title": "RedashConfig",
  "type": "object",
  "properties": {
    "connect_uri": {
      "title": "Connect Uri",
      "description": "Redash base URL.",
      "default": "http://localhost:5000",
      "type": "string"
    },
    "api_key": {
      "title": "Api Key",
      "description": "Redash user API key.",
      "default": "REDASH_API_KEY",
      "type": "string"
    },
    "dashboard_patterns": {
      "title": "Dashboard Patterns",
      "description": "regex patterns for dashboards to filter for ingestion.",
      "default": {
        "allow": [
          ".*"
        ],
        "deny": [],
        "ignoreCase": true
      },
      "allOf": [
        {
          "$ref": "#/definitions/AllowDenyPattern"
        }
      ]
    },
    "chart_patterns": {
      "title": "Chart Patterns",
      "description": "regex patterns for charts to filter for ingestion.",
      "default": {
        "allow": [
          ".*"
        ],
        "deny": [],
        "ignoreCase": true
      },
      "allOf": [
        {
          "$ref": "#/definitions/AllowDenyPattern"
        }
      ]
    },
    "skip_draft": {
      "title": "Skip Draft",
      "description": "Only ingest published dashboards and charts.",
      "default": true,
      "type": "boolean"
    },
    "page_size": {
      "title": "Page Size",
      "description": "Limit on number of items to be queried at once.",
      "default": 25,
      "type": "integer"
    },
    "api_page_limit": {
      "title": "Api Page Limit",
      "description": "Limit on number of pages queried for ingesting dashboards and charts API during pagination.",
      "default": 9223372036854775807,
      "type": "integer"
    },
    "parallelism": {
      "title": "Parallelism",
      "description": "Parallelism to use while processing.",
      "default": 1,
      "type": "integer"
    },
    "parse_table_names_from_sql": {
      "title": "Parse Table Names From Sql",
      "description": "See note below.",
      "default": false,
      "type": "boolean"
    },
    "sql_parser": {
      "title": "Sql Parser",
      "description": "custom SQL parser. See note below for details.",
      "default": "datahub.utilities.sql_parser.DefaultSQLParser",
      "type": "string"
    },
    "env": {
      "title": "Env",
      "description": "Environment to use in namespace when constructing URNs.",
      "default": "PROD",
      "type": "string"
    }
  },
  "additionalProperties": false,
  "definitions": {
    "AllowDenyPattern": {
      "title": "AllowDenyPattern",
      "description": "A class to store allow deny regexes",
      "type": "object",
      "properties": {
        "allow": {
          "title": "Allow",
          "description": "List of regex patterns to include in ingestion",
          "default": [
            ".*"
          ],
          "type": "array",
          "items": {
            "type": "string"
          }
        },
        "deny": {
          "title": "Deny",
          "description": "List of regex patterns to exclude from ingestion.",
          "default": [],
          "type": "array",
          "items": {
            "type": "string"
          }
        },
        "ignoreCase": {
          "title": "Ignorecase",
          "description": "Whether to ignore case sensitivity during pattern matching.",
          "default": true,
          "type": "boolean"
        }
      },
      "additionalProperties": false
    }
  }
}
Note: the integration can use an SQL parser to try to parse the tables that a chart depends on. This parsing is disabled by default, but can be enabled by setting parse_table_names_from_sql: true. The default parser is based on the sqllineage package. As this package does not officially support all of the SQL dialects that Redash supports, the result might not be correct. You can, however, implement a custom parser and use it by setting the sql_parser configuration value. A custom SQL parser must inherit from datahub.utilities.sql_parser.SQLParser and must be made available to DataHub, for example by installing it as a package. The sql_parser configuration value then needs to be set to the module_name.ClassName of the parser.
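For illustration, below is a minimal sketch of such a custom parser. The module and class names (my_parser.MyRedashSqlParser) and the regex-based table extraction are purely hypothetical, and the sketch assumes the SQLParser base class receives the SQL query string in its constructor and expects get_tables() and get_columns() to be implemented; check datahub.utilities.sql_parser in your installed DataHub version for the exact interface.

# my_parser.py -- a hypothetical custom parser; a minimal sketch, not a production implementation.
import re
from typing import List

from datahub.utilities.sql_parser import SQLParser


class MyRedashSqlParser(SQLParser):
    """Toy parser that pulls table names out of FROM/JOIN clauses with a regex."""

    def __init__(self, sql_query: str) -> None:
        # Assumption: the source instantiates the parser with the raw SQL string;
        # adjust the signature to match the SQLParser base class in your version.
        self._sql_query = sql_query

    def get_tables(self) -> List[str]:
        # Naive extraction: collect identifiers that directly follow FROM or JOIN.
        matches = re.findall(r'\b(?:FROM|JOIN)\s+([\w."]+)', self._sql_query, re.IGNORECASE)
        return sorted({m.strip('"') for m in matches})

    def get_columns(self) -> List[str]:
        # Column-level parsing is not needed for this sketch; return an empty list.
        return []

Once the module is importable by DataHub (for example, installed into the same virtual environment), the recipe would set sql_parser: my_parser.MyRedashSqlParser.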
Code Coordinates
- Class Name: datahub.ingestion.source.redash.RedashSource
- Browse on GitHub
Questions
If you've got any questions on configuring ingestion for Redash, feel free to ping us on our Slack!