Amazon Redshift

This document outlines the integration with Amazon Redshift, enabling streamlined metadata management through features such as crawling, profiling, querying, data preview, and lineage building (both automatic and manual). It also supports impact analysis and ensures secure authentication via Credential Manager.

Overview

Connector Details

Connector Category: Data Warehouse
Connector Version: Release 7.1
Releases Supported (Available from): Release 4
Connectivity (how the connection is established with Amazon Redshift): JDBC driver
Verified Amazon Redshift Version: 1.0.109768

Note: The Amazon Redshift connector has been validated against the version listed above under "Verified Amazon Redshift Version" and is expected to be compatible with other supported Amazon Redshift versions. If there are any issues with validation or metadata crawling, please submit a support ticket for investigation and feedback.

Connector Features

The connector supports the following features:

  • Crawling
  • Delta Crawling
  • Profiling
  • Query Sheet
  • Data Preview
  • Auto Lineage
  • Manual Lineage
  • Secure Authentication via Credential Manager
  • Data Quality
  • DAM (Data Access Management)
  • Bridge

Metadata Mapping

The following objects are crawled from Amazon Redshift and mapped to the corresponding UI assets.

| Redshift Object | Redshift Property | OvalEdge Attribute | OvalEdge Category | OvalEdge Type |
| Table | Table Name | Table | Tables | Table |
| Table | Table Type | Type | Tables | Table |
| Table | Table Comments | Source Description | Descriptions | Table |
| Columns | Column Name | Column | Table Columns | Columns |
| Columns | Data Type | Column Type | Table Columns | Columns |
| Columns | Description | Source Description | Table Columns | Columns |
| Columns | Ordinal Position | Column Position | Table Columns | Columns |
| Columns | Length | Data Type Size | Table Columns | Columns |
| Views | View Name | View | Tables | Views |
| Views | Text | View Query | Views | Views |
| Procedures | Routine_name | Name | Procedures | Procedures |
| Procedures | Description | Source Description | Descriptions | Procedures |
| Procedures | Routine_definition | Procedure | Procedures | Procedures |
| Functions | Routine_name | Name | Functions | Functions |
| Functions | Routine_definition | Function | Functions | Functions |
| Functions | Description | Source Description | Descriptions | Functions |

Set up a Connection

Prerequisites

The following are the prerequisites to establish a connection:

External Supporting Files

A supporting file is required to enable connectivity with Amazon Redshift using the Redshift JDBC driver. The supporting files are available for download as a ZIP archive; extract it and use the files appropriate to your installation environment. To download the ZIP file, click here.

| File Name | Description |
| JDBC driver file (RedshiftJDBC42) | Use this file to enable connectivity with Amazon Redshift without relying on the AWS SDK. Place the file in the Third Party Jars folder. |
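After placing the driver jar, it can help to confirm it actually landed in the expected folder before validating the connection. A minimal sketch (the folder path is deployment-specific and passed in as an argument; the `RedshiftJDBC*` name pattern follows the file name listed above):

```python
from pathlib import Path

def find_redshift_driver(jars_dir):
    """List Redshift JDBC driver jars (e.g. RedshiftJDBC42.jar) in a folder."""
    return sorted(p.name for p in Path(jars_dir).glob("RedshiftJDBC*.jar"))
```

If the returned list is empty, the driver is missing from the Third Party Jars folder and connection validation will fail with a "Failed to load driver class" error (see Connectivity Troubleshooting below).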

Whitelisting Ports

Ensure the inbound port “5439” is whitelisted for OvalEdge to connect to the Amazon Redshift database.
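A quick way to verify the whitelisting from the OvalEdge host is to attempt a plain TCP connection to the Redshift endpoint. A minimal sketch (host name and timeout are illustrative; this only checks network reachability, not credentials):

```python
import socket

def redshift_port_reachable(host, port=5439, timeout=3.0):
    """Attempt a TCP connection to check that the port is open and whitelisted."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A `False` result typically points to a firewall or security-group rule blocking port 5439, or an unreachable host.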

Service Account User Permissions

👨‍💻 Who can provide these permissions? These permissions are typically granted by the Amazon Redshift administrator, as users may not have the required access to assign them independently.

| Operations | Objects | System Tables | Access Permission |
| Crawling & Profiling | Schema | pg_catalog.pg_namespace | USAGE |
| Crawling & Profiling | Schema | information_schema, svv_external_schemas | USAGE |
| Crawling & Profiling | Tables | information_schema.tables | SELECT |
| Crawling & Profiling | Tables | pg_catalog.pg_namespace | SELECT |
| Crawling & Profiling | Table Columns | pg_catalog.pg_namespace | SELECT, UPDATE |
| Crawling & Profiling | Table Columns | information_schema.columns | SELECT, UPDATE |
| Crawling & Profiling | Table Columns | information_schema.table_constraints | SELECT, UPDATE |
| Crawling & Profiling | Table Columns | svv_external_columns | SELECT, UPDATE |
| Crawling & Profiling | Table Columns | information_schema.key_column_usage | SELECT, UPDATE |
| Crawling & Profiling | Table Columns | pg_catalog.pg_description | SELECT, UPDATE |
| Crawling, Profiling, & Lineage Building | Views | pg_catalog.pg_class | SELECT |
| Crawling, Profiling, & Lineage Building | Views | pg_catalog.pg_namespace | SELECT |
| Crawling & Lineage Building | Functions, Stored Procedures source code | pg_catalog.pg_class, pg_proc_info | EXECUTE |
| Crawling & Lineage Building | Functions, Stored Procedures source code | pg_catalog.pg_namespace | EXECUTE |
| Crawling | Relationships | information_schema.table_constraints, information_schema.key_column_usage, information_schema.referential_constraints | REFERENCES |
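The privileges above are granted by the Redshift administrator. As an illustrative sketch only (not an official grant script; the user name and the subset of objects shown are hypothetical, and exact GRANT syntax should be confirmed with your administrator), a small helper can render the statements from (privilege, object) pairs taken from the table:

```python
def grant_statements(user, privileges):
    """Render GRANT statements from (privilege, object) pairs -- a sketch;
    confirm exact syntax and object names with your Redshift administrator."""
    return [f"GRANT {priv} ON {obj} TO {user};" for priv, obj in privileges]

# Illustrative subset of the permissions table above.
service_account_privs = [
    ("USAGE", "SCHEMA information_schema"),
    ("SELECT", "information_schema.tables"),
    ("SELECT", "pg_catalog.pg_description"),
]
```

For example, `grant_statements("oesauser", service_account_privs)` produces one `GRANT ... TO oesauser;` statement per pair, which can be reviewed before an administrator runs them.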

Connection Configuration Steps

  1. Log into OvalEdge, go to Administration > Connectors, click + (New Connector), search for Redshift, and complete the required parameters.

Note: Fields marked with an asterisk (*) are mandatory for establishing a connection.

Connector Type: By default, "Redshift" is displayed as the selected connector type.

Credential Manager*: Select the desired credential manager from the drop-down list. Relevant parameters are displayed based on your selection. Supported credential managers:

  • OE Credential Manager
  • AWS Secrets Manager
  • HashiCorp Vault
  • Azure Key Vault

License Add Ons: The Auto Lineage, Data Quality, and Data Access add-ons are supported.

  • Select the Auto Lineage Add-On checkbox to build data lineage automatically.
  • Select the Data Quality Add-On checkbox to identify data quality issues using data quality rules and anomaly detection.
  • Select the Data Access Add-On checkbox to enforce connector access via OvalEdge with the Data Access Management (DAM) feature enabled.

Connector Name*: Enter a unique name for the Amazon Redshift connection (Example: "Redshift_Prod").

Connector Environment: Select the environment (Example: PROD, STG) configured for the connector.

Connector Description: Enter a brief description of the connector.

Server*: Enter the Amazon Redshift database server name or IP address (Example: xxxx-redshift.xxxx4ijtzasl.xx-south-1.rds.xxxx.com or 192.xxx.1.xx).

Port*: By default, the Amazon Redshift port number "5439" is auto-populated. If required, modify it to match the custom port configured for your Redshift cluster.

Database*: Enter the name of the Redshift database to which the service account user has access.

Driver*: By default, the Redshift driver details are auto-populated.

Username*: Enter the service account username set up to access the Amazon Redshift database (Example: "oesauser").

Password*: Enter the password associated with the service account user.

Connection String: Configure the connection string for the Amazon Redshift database.

  • Automatic Mode: The system generates a connection string based on the provided credentials.
  • Manual Mode: Enter a valid connection string manually. Replace placeholders with actual database details; {sid} refers to the database name.

Plugin Server: Enter the server name when running as a plugin server.

Plugin Port: Enter the port number on which the plugin is running.

Default Governance Roles*: Select the appropriate users or teams for each governance role from the drop-down list. All users configured in the security settings are available for selection.

Admin Roles*: Select one or more users from the drop-down list for Integration Admin and Security & Governance Admin. All users configured in the security settings are available for selection.

No. of Archive Objects*: Shows the number of recent metadata changes to a dataset at the source. By default, it is off. To enable it, toggle the Archive button and specify the number of objects to archive. Example: setting it to 4 retrieves the last four changes, displayed in the "Version" column of the Metadata Changes module.

Select Bridge*: If applicable, select the bridge from the drop-down list. The drop-down list displays all active configured bridges; these bridges facilitate communication between data sources and the system without requiring changes to firewall rules.

  2. After entering all connection details, the following actions can be performed:

    1. Click Validate to verify the connection.

    2. Click Save to store the connection for future use.

    3. Click Save & Configure to apply additional settings before saving.

  3. The saved connection will appear on the Connectors home page.
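In automatic mode, the Connection String field is assembled from the Server, Port, and Database values entered above. A minimal sketch of that shape (the exact template your deployment uses may include additional driver options; the host name below is an example):

```python
def redshift_connection_string(server, port, sid):
    """Assemble a Redshift JDBC URL of the shape automatic mode typically
    produces; {sid} is the database name, per the field description above."""
    return f"jdbc:redshift://{server}:{port}/{sid}"
```

For instance, a server of example-cluster.redshift.amazonaws.com with the default port 5439 and database "dev" yields jdbc:redshift://example-cluster.redshift.amazonaws.com:5439/dev.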

Manage Connector Operations

Crawl/Profile

The Crawl/Profile button allows users to select one or more schemas for crawling and profiling.

  1. Navigate to the Connectors page and click Crawl/Profile.

  2. Select the schemas to be crawled.

  3. The Crawl option is selected by default. To perform both operations, select the Crawl & Profile radio button.

  4. Click Run to collect metadata from the connected source and load it into the Data Catalog.

  5. After a successful crawl, the information appears in the Data Catalog > Databases tab.

The Schedule checkbox allows automated crawling and profiling at defined intervals, from a minute to a year.

  1. Click the Schedule checkbox to enable the Select Period drop-down.

  2. Select a time period for the operation from the drop-down menu.

  3. Click Schedule to initiate metadata collection from the connected source.

  4. The system will automatically execute the selected operation (Crawl or Crawl & Profile) at the scheduled time.

Other Operations

The Connectors page provides a centralized view of all configured connectors, along with their health status.

Managing connectors includes:

  • Connectors Health: Displays the current status of each connector using a green icon for active connections and a red icon for inactive connections, helping to monitor the connectivity with data sources.

  • Viewing: Click the Eye icon next to the connector name to view connector details, including databases, tables, columns, and codes.

Nine Dots Menu Options:

To view, edit, validate, build lineage, configure, or delete connectors, click on the Nine Dots menu.

  • Edit Connector: Update and revalidate the data source.

  • Validate Connector: Check the connection's integrity.

  • Settings: Modify connector settings.

    • Crawler: Configure data extraction.

    • Profiler: Customize data profiling rules and methods.

    • Query Policies: Define query execution rules based on roles.

    • Access Instructions: Include notes on how to access the data.

    • Business Glossary Settings: Manage term associations at the connector level.

    • Anomaly Detection Settings: Configure anomaly detection preferences at the connector level.

    • Others: Configure notification recipients for metadata changes.

  • Build Lineage: Automatically build data lineage using source code parsing.

  • Delete Connector: Remove a connector with confirmation.

Connectivity Troubleshooting

If incorrect parameters are entered, error messages may appear. Ensure all inputs are accurate to resolve these issues. If issues persist, contact the assigned support team.

1. Error Message: Error while validating connection: Error while validating Redshift Connection : Failed to load driver class com.amazon.redshift.jdbc.Driver in either of HikariConfig class loader or Thread context classloader

Description: Connection validation failed because the Redshift JDBC driver is missing.

Resolution: Download the Redshift JDBC driver from Amazon's official site and upload it under Admin > Drivers, then retry the connection.

2. Error Message: Error while validating connection: Error while validating Redshift Connection : Failed to obtain JDBC Connection; nested exception is java.sql.SQLException: [Amazon](500150) Error setting/closing connection: UnknownHostException

Description: Connection validation failed due to an UnknownHostException, indicating that the hostname or IP is incorrect or the Redshift server is not reachable.

Resolution: Verify that the provided host/IP is correct and ensure the Redshift server is up and accessible from the network.

3. Error Message: Error while validating connection: Error while validating Redshift Connection : Failed to obtain JDBC Connection; nested exception is java.sql.SQLException: [Amazon](500310) Invalid operation: password authentication failed for user "ovaledge1";

Description: Connection validation failed due to incorrect credentials.

Resolution: Check and update the username or password. Ensure the credentials are correct and have access to the Redshift cluster.

4. Error Message: Error while validating connection: Error while validating Redshift Connection : Failed to obtain JDBC Connection; nested exception is java.sql.SQLException: [Amazon](500310) Invalid operation: database "ovaledge" does not exist;

Description: Connection failed because the specified database does not exist.

Resolution: Verify and correct the database name in the connection settings.


Copyright © 2025, OvalEdge LLC, Peachtree Corners GA USA
