Release 7.2.3

OvalEdge Release 7.2.3 is a service update that focuses on improving AI-driven data insights and data quality monitoring. This release introduces significant updates to askEdgi, Data Quality, and the Data Classification Recommendation (DCR) AI Model, as well as several usability and governance enhancements across other modules.

Key Highlights:

  • askEdgi Enhancements

    • askEdgi now offers better AI configuration, workspace management, and marketplace integration.

    • Users can perform natural language queries, analyze datasets, create reusable recipes, and manage AI-enriched workflows with improved control and security.

    • It supports multiple deployment editions (Public, SaaS, On-Prem) and introduces new controls for subscriptions, token tracking, and governance settings.

  • Data Quality Enhancements

    • A new Rules Summary view provides a comprehensive overview of all rule executions and scheduled jobs.

    • The Rule Execution ID dropdown enables users to filter dashboards by specific executions, facilitating better tracking and comparison.

    • These updates simplify monitoring and improve visibility into rule performance and scheduling.

  • Data Classification Recommendation (DCR) AI Model

    • The DCR engine now supports multiple algorithms—LLM (Semantic), Cosine Similarity, Levenshtein Similarity, and Fuzzy Logic—for higher accuracy.

    • Users can configure scoring logic, manage pattern or data matching, and set thresholds for recommendations.

    • This gives complete control over how data classifications are suggested and validated.

  • Data Catalog

    • Administrators can now set custom weightages for curation scores to match governance standards.

    • Index crawling for RDBMS connectors automatically captures index details for better metadata visibility.

  • Business Glossary

    • When a term is moved to a new domain, both the term and related objects automatically inherit the new governance roles.

    • A configurable Notes field is added to related objects for collaboration and clarity.

  • Tags and Terms

    • Tags and Terms in the tree view are now sorted alphabetically, making it easier to locate and assign them.

  • Service Desk

    • When stewards or governance roles change, SLA alerts and approvals are automatically updated, ensuring proper request routing.

    • A new job aligns all date and time columns with the database time zone for consistent SLA tracking.

  • Jobs

    • Users can now run INIT-state jobs on a priority queue, preventing delays and improving reliability.

    • Large output logs can now be downloaded without memory errors.

  • Connectors

    • A new Connector Summary Page gives a complete view of connector health, profiling status, and recent job activity.

    • Connection Pool settings are now configurable, and Credential Manager visibility is added for easier administration.

  • Upload File or Folder

    • Duplicate uploads to the same directory are now restricted, preventing accidental overwrites in shared environments.

  • Advanced Jobs and System Settings

    • New Advanced Jobs have been added to support operations such as Synapse label fetching and table archival.

    • Several System Settings are introduced for askEdgi, Data Classification, Jobs, and Dashboard configurations, giving administrators better control and flexibility.

Release Details:

Release Type: Service Release

Release Version: Release 7.2.3

Build <Release. Build Number. Release Stamp>: Release7.2.3.7230.8b8e7a4

Build Date: 16 October, 2025

Home

New & Improved

Browser Extension – Display Certification Status of Data Objects

The Browser Extension now displays the certification status of data objects in both the List View and Detailed View.

Users can quickly identify whether a data object is certified, requires caution, or has a violation. This enhancement improves transparency and governance by allowing users to view certification details directly within the extension without navigating to other modules.

Previously, the extension did not display certification indicators, requiring users to verify certification details manually. In this update, certification status indicators are displayed next to each data object in the List View, and the Detailed View includes a dedicated badge at the top that clearly indicates the certification status of the selected object.

Browser Extension – Display Stakeholders for Data Objects

The Browser Extension now displays all governance roles—such as Steward, Owner, Custodian, and other configured roles—for each data object and business glossary term accessed through the plug-in.

Previously, only the Steward information appeared in the summary view, limiting visibility into other governance roles.

With this enhancement, the extension shows all assigned roles along with user details. If a user holds multiple roles, all are displayed together to ensure transparency. This improvement helps users easily identify responsible stakeholders, promotes accountability, and facilitates faster collaboration.

Tags

New & Improved

Improved Tags and Terms Selection with Alphabetical Sorting

The Tree View for Tags and Terms previously displayed objects based on their creation date. Now, the Tree View displays Tags and Terms in alphabetical order at every hierarchy level:

  • Tags: Master Tags → Root Tags → Child Tags

  • Terms: Domain → Categories → Sub-Categories → Terms

This alphabetical ordering is also applied in the Assigning component, ensuring a consistent and user-friendly experience when locating or assigning Tags and Terms.

Change Management

  • Alphabetical sorting is applied at all hierarchy levels for Tags and Terms in the Tree View.

    What Changed

Previously, in Tags and Business Glossary, Tags and Terms in the Tree component were sorted by creation date. Now, alphabetical sorting is applied at all hierarchy levels for Tags (Master, Root, Child) and Terms (Domain, Categories, Subcategories, Terms).

    Affected Users: Administrators and Business Users

    👉 For more details, see Improved Tags and Terms Selection.

Data Catalog

New & Improved

Configurable Weights for Curation Score

In Data Catalog and Business Glossary, the Metadata Curation Score was calculated using fixed, predefined weights for various metadata attributes. This limited flexibility for users who prioritize different attributes when evaluating metadata quality. With this enhancement, users can now configure weightages for metadata attributes to align the scoring model with governance standards. The system automatically ensures that the total weightage equals 100, maintaining scoring consistency and accuracy across evaluations.
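To make the weighting model concrete, here is a minimal sketch of a weighted curation score whose attribute weights must total 100. The attribute names and weight values are hypothetical examples, not OvalEdge's actual scoring attributes.

```python
# Hypothetical attribute weights; the system enforces a total of 100.
ATTRIBUTE_WEIGHTS = {
    "description": 40,
    "tags": 20,
    "terms": 20,
    "steward_assigned": 20,
}

def curation_score(attributes_present: dict) -> float:
    """Return a 0-100 score from boolean attribute-completeness flags."""
    total = sum(ATTRIBUTE_WEIGHTS.values())
    if total != 100:
        raise ValueError(f"Weights must total 100, got {total}")
    return float(sum(
        weight for attr, weight in ATTRIBUTE_WEIGHTS.items()
        if attributes_present.get(attr, False)
    ))

# An object with only a description and tags curated scores 40 + 20 = 60.
score = curation_score({"description": True, "tags": True})
```

Raising the weight of an attribute an organization cares about (say, description) makes uncurated descriptions pull the score down harder, which is the flexibility this enhancement adds.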

Enhanced System Views and Catalog Options for Codes in Data Catalog

In the Data Catalog, a new system view (System View - Temporary Cataloged Codes) has been introduced to display only temporarily cataloged codes. The existing system view now displays only permanently cataloged codes, and the Catalog column is no longer visible in system views.

Users can create custom views that include both permanent and temporary codes. The Catalog column, which can be added to any custom view, has been renamed to 'Is Permanently Cataloged' for better clarity.

Additionally, in Query Sheet → Queries History, the Catalog option has been renamed to Permanently Catalog.

Index Crawling

In the Data Catalog module, index crawling has been implemented for RDBMS connectors. During metadata crawling, database indexes are automatically discovered, and their attributes are captured and integrated into table and column metadata to enhance visibility and governance.

Key capabilities include:

  • Automatic Index Discovery: Identifies primary, unique, composite, and full-text indexes.

  • Metadata Enrichment: Adds index type, uniqueness, and usage details to tables and columns.

  • Connector-Level Configuration: Index crawling can be enabled or adjusted per connector.

  • Framework Integration: Operates within the existing connector framework without additional setup.
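As a rough illustration of the discovery step, the sketch below classifies index metadata rows of the kind a crawler might read from an RDBMS catalog view (for example, MySQL's information_schema.statistics). The column names, row shape, and classification rules are assumptions for illustration, not OvalEdge's implementation.

```python
# Hypothetical sketch: classify one index from catalog-view rows
# (one dict per indexed column; field names are assumptions).
def classify_index(rows):
    """Return a summary dict for a single index."""
    first = rows[0]
    if first["index_type"] == "FULLTEXT":
        kind = "full-text"
    elif first["index_name"] == "PRIMARY":
        kind = "primary"
    elif not first["non_unique"]:
        kind = "unique"
    else:
        kind = "non-unique"
    if len(rows) > 1:          # spans multiple columns
        kind += ", composite"
    return {
        "name": first["index_name"],
        "columns": [r["column_name"] for r in rows],
        "kind": kind,
    }

# Example: a two-column primary key is reported as a composite primary index.
idx = classify_index([
    {"index_name": "PRIMARY", "index_type": "BTREE", "non_unique": 0,
     "column_name": "order_id"},
    {"index_name": "PRIMARY", "index_type": "BTREE", "non_unique": 0,
     "column_name": "line_no"},
])
```

The resulting summary (name, columns, kind) is the sort of index attribute that would then be attached to table and column metadata.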

Detail Lineage - Selective Node Enhancement

Detail Lineage now supports adding selective nodes to summary lineage, improving visualization and analysis efficiency.

The following updates have been implemented:

  • Users can include only relevant nodes in lineage views to focus on specific data flows.

  • Visual clutter is reduced, allowing clearer interpretation of data movement.

  • The enhancement enables faster analysis of complex lineage scenarios while maintaining contextual accuracy.

Fixed

Last Crawl Date Now Includes Timezone

In Data Catalog | Reports, an issue occurred where the exported Sample Report lacked the timezone information in the Last Crawl Date column. This issue has been resolved. The Last Crawl Date column now includes the timezone information.

Temporary File Connections Without Data Are Hidden

In Data Catalog, an issue occurred where temporary file connections appeared in the Connector Name filter list even though they had no associated data. Selecting these filters returned no results. This issue has been resolved. Temporary file connections without associated data are now automatically hidden, preventing empty filter results and improving user experience.

Tableau-Filtered Reports Load in Preview

In Data Catalog | Reports, an issue occurred where reports filtered using the Tableau connector in any open "view" report did not load in the preview tab and remained stuck. This issue has been resolved. Reports now load correctly in the preview tab after applying the Tableau connector filter.

Exclude Deactivated Columns in Description Downloads

In Data Catalog, an issue occurred where downloading descriptions from the object summary also included deactivated columns in the exported file. This behavior was observed across all objects. This issue has been resolved. The download now excludes descriptions for deactivated columns, ensuring only active columns are included.

Business Glossary

New & Improved

Term and Associated Objects Align with New Domain Roles

In the Business Glossary, an improvement has been made to ensure that Governance Roles remain consistent when a Term is moved from one Domain to another. Previously, only the Term Governance Roles were updated based on the new Domain, while their associated objects retained their old roles.

With this enhancement, both the Term and its associated objects now automatically inherit the Governance Roles of the new Domain, ensuring alignment and reducing the need for manual adjustments. This functionality applies only when the Copy Governance Roles policy is enabled for the Term and is permitted on the associated objects.

Configurable Notes for Related Objects

In Business Glossary | Term summary, a 'Notes' column has been added for the related object section. The 'Notes' column is configurable via the system setting 'notes.relatedobjects' (True/False), with the default set to False (disabled). When enabled, users can add free-text notes for each related object to explain logic or context.

Key capabilities include:

  • Notes appear in the Relationship Diagram Quick View with relationship details.

  • Notes can be edited, viewed, or deleted.

Fixed

Update Governance Roles Functionality Corrected

In Business Glossary (Terms – Update Governance Roles), an issue occurred where the Update Governance Role functionality did not work as expected. The fields sometimes appeared empty when revisiting the Governance Roles. This issue has been resolved. Updates to Governance Roles now display correctly, ensuring accurate governance information in the Business Glossary.

Warning Message Before Deleting Categories/SubCategories with Terms

In Domain Security, previously, there was no warning message when deleting a category or subcategory with associated terms, which could lead to unintended data loss.

Now, users see a warning message with two options when deleting a category or subcategory:

  1. Delete the category and move all terms to the parent domain. Subcategories are deleted, but their terms are moved to the same parent domain.

  2. Delete the category and permanently delete all associated terms and subcategories.

Users can select the desired option and operate safely, ensuring better control and preventing accidental deletions.

askEdgi

New & Improved

askEdgi is a conversational AI assistant embedded within OvalEdge, designed to deliver governance-first, context-aware insights. Unlike traditional AI tools, askEdgi understands metadata, business terms, stewardship information, and governance policies, ensuring secure, compliant, and accurate analytics. It enables users to discover data, analyze datasets, enrich them with AI, save workflows as reusable recipes, and access guided platform support.

askEdgi operates across Public, SaaS, and On-Prem deployments, with functionality tailored to each environment.

askEdgi Editions & Deployment Models

  • Public Edition: Provides open access for users to perform metadata search, analyze public datasets or uploaded files, and create, execute, and share recipes. Users can also publish recipes to the Marketplace with monetization options. The workspace supports Smartᵝ, Discoveryᵝ, and Analytics modes, enabling end-to-end analysis, workflow management, and seamless recipe lifecycle operations.

  • SaaS – Data Analytics: Full analytics on catalog and connected sources, AI enrichment, and recipe management with internal sharing. Marketplace recipes can be executed as per policy.

  • SaaS – Metadata Analytics: Metadata-focused insights from the OvalEdge catalog. Recipes operate on metadata only, with optional Marketplace access for metadata workflows.

  • On-Prem Edition: Operates entirely within client infrastructure, supporting metadata analytics only. Recipes are limited to catalog metadata, with strict governance and no external AI enrichment.

Modes of Operation

  • Smartᵝ Mode: Ask questions in natural language. Edgi interprets intent, locates relevant data, applies the correct analysis, and delivers complete answers.

  • Discoveryᵝ Mode: Focus on data discovery across catalogs, glossary terms, projects, and templates, with options to add datasets to the Workspace.

  • Analytics Mode: Analyze datasets in the Workspace, generating summaries, tables, or charts, and applying AI functions when needed.

Key Features

1. Search Across Metadata

askEdgi interprets natural language queries and returns context-aware results across multiple metadata domains, including:

  • Data Catalog: Tables, columns, schemas, files

  • Business Glossary: Terms, categories, subcategories

  • Projects: Active projects

  • Service Desk Templates

2. Perform Data Analysis

askEdgi enables natural language analysis directly in the Workspace:

  • Add datasets from the catalog

  • Upload external files (CSV, Parquet up to 100 MB)

  • Ask analytical questions

  • Generate tables, summaries, and visual charts (bar, line, pie, scatter)

3. AI Enrichment Functions

AI enrichment enhances datasets in the Workspace using natural language prompts:

  • Calculated columns (e.g., return_rate, profit margin)

  • Classifications (High/Medium/Low risk)

  • Sentiment, intent, and emotion analysis

  • Text classification and proofreading

  • Prompt analysis to refine queries

4. Recipes

Recipes capture repeatable workflows:

  • Save datasets, joins, AI enrichments, and visualizations

  • Assign name, description, and steps

  • Rerun with updated data

  • Share internally or publish in the Marketplace (Public)

Recipe types include:

  • Diagnostic Recipes: Assess data quality and compliance readiness

  • Success Metrics/Adoption Recipes: Measure governance adoption

  • Business Recipes: Analyze business objectives, e.g., sales, churn

5. Knowledge Agent

The embedded knowledge agent provides step-by-step guidance on using OvalEdge features, accelerating onboarding, reducing support dependency, and building confidence in platform exploration.

6. Track & Monetisation (Public Only)

In the Public instance, askEdgi enables recipe consumption, subscriptions, usage tracking, and monetisation:

  • Marketplace for free or paid recipes

  • Subscription plans: Free Trial, Business, Business Plus, Enterprise

  • Track AI token usage, AWS compute, and recipe executions

  • Secure payments via Stripe

  • Enforce spend limits and plan-specific caps

7. Security & Privacy

askEdgi follows a strict governance-first architecture:

  • Direct, controlled data access with temporary workspaces

  • Hybrid AI model: in-house governance layer + trusted LLMs for language understanding

  • Metadata classification, PII handling, and role-based access checks

  • Encryption in transit and at rest

  • Admin configurations for fine-tuned queries and AI behavior

  • Full transparency into data access, usage, and ownership

Service Desk

New & Improved

SLA Alerts and Approvals Now Auto-Updated with Steward Changes

Previously, when the steward of a domain was changed in Service Desk, the update was not reflected in existing service requests. As a result, SLA notifications continued to be sent to the old steward, causing delays in approvals and confusion in the request management process.

Now, when a governance role is changed, the system automatically updates the governance role user in the existing service requests, which ensures that SLA alerts and approvals are always routed to the correct, active governance role user.

Align Service Desk Date Columns with Database Time Zone

The Advanced Job ‘Update Service Desk Date Columns from UTC to Database Time Zone’ ensures that all date and time columns in Service Desk are correctly aligned with the application and database time zones, improving consistency in display and SLA calculations.

Scenarios:

  • Application and Database Time Zones differ: Running the job converts all Service Desk columns to the application time zone.

  • Application and Database Time Zones are the Same, but not UTC: Before migrating to R7.2.x, time columns were displayed in UTC, whereas SLA calculations used a different time zone. After running the job, all time columns and SLA calculations are aligned to the application and database time zone.

  • Application and Database Time Zones in UTC: The job does not need to be run, as the time zones are already consistent.

This enhancement ensures accurate time displays and SLA calculations across Service Desk.
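The core conversion the job performs can be sketched as follows: reinterpret timestamps stored as naive UTC in the database/application time zone. The time zone name and sample timestamp below are assumptions for illustration.

```python
# Hypothetical sketch of the UTC-to-database-time-zone conversion.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

DB_TIME_ZONE = "America/New_York"  # assumed database/application time zone

def to_db_timezone(utc_naive: datetime) -> datetime:
    """Treat a naive timestamp as UTC and convert it to the DB time zone."""
    return utc_naive.replace(tzinfo=timezone.utc).astimezone(ZoneInfo(DB_TIME_ZONE))

# A request created at 14:30 UTC displays as 10:30 in New York (EDT, UTC-4).
created_local = to_db_timezone(datetime(2025, 10, 16, 14, 30))
```

Running the same conversion over every Service Desk date column is what brings displayed times and SLA calculations into agreement.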

Data Classification Recommendations

New & Improved

Data Classification Recommendation Enhancement

The Data Classification Recommendation (DCR) framework has been enhanced into a fully configurable, algorithm-driven engine that allows users to customize how glossary-term recommendations are generated, scored, and automated, providing greater control, transparency, and flexibility.

Key Enhancements

  1. Algorithm Selection (New)

    1. Users can now choose the desired Name-Matching Algorithm within the AI Model:

      1. LLM (Semantic) – Context-aware, high accuracy

      2. Cosine Similarity – Embedding-based vector comparison

      3. Levenshtein Similarity – Character-level distance

      4. Fuzzy Logic – Lightweight, typo-tolerant

    2. Each model can run with the algorithm best suited to its dataset, allowing better precision testing and fine-tuned model accuracy.

  2. Heuristic & Configurable Scoring Controls (New)

    1. A new heuristics layer enables users to design their own scoring strategy:

      1. Turn Data and Pattern scoring ON/OFF as needed.

      2. Define whether to consider or ignore regex-based patterns at the term level.

      3. Decide which score components to boost, penalize, or omit from the final calculation.

    2. This gives stewards the ability to tune DCR accuracy dynamically for each model or domain.

  3. Pattern & Data Matching Toggles (New Controls)

    1. Dedicated toggles allow complete control over content-based recommendations:

      1. Data Matching ON/OFF – Compare or skip actual data-value similarity.

      2. Pattern Matching ON/OFF – Include or exclude structural format checks (e.g., DDD-DD-DDDD for SSN).

    2. These toggles help optimize runtime and precision based on available profiling depth.

  4. Label Enhancements

    1. Clearer field labels and placeholders (e.g., “Minimum Smart Score for Recommendation”, “Auto-Acceptance Threshold”).

  5. UI and Usability Improvements

    1. Added Tag-based Object Filtering and Regex Exclusion/Exception for precise model scope control.

    2. Updated tooltips and help text for score fields and algorithm descriptions.
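To illustrate how two of the listed name-matching algorithms differ, the sketch below implements classic Levenshtein similarity (character-level distance) and a character-bigram cosine similarity (vector comparison). These are generic textbook formulations, offered as an assumption about the general techniques named above, not OvalEdge's exact scoring formulas; the LLM and Fuzzy Logic options are omitted.

```python
from collections import Counter
from math import sqrt

def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def levenshtein_similarity(a: str, b: str) -> float:
    """Normalize edit distance into a 0..1 similarity."""
    longest = max(len(a), len(b)) or 1
    return 1 - levenshtein(a, b) / longest

def cosine_similarity(a: str, b: str, n: int = 2) -> float:
    """Cosine similarity between character n-gram count vectors."""
    grams = lambda s: Counter(s[i:i + n] for i in range(len(s) - n + 1))
    va, vb = grams(a.lower()), grams(b.lower())
    dot = sum(va[g] * vb[g] for g in va)
    norm = (sqrt(sum(v * v for v in va.values()))
            * sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0
```

Character-level distance rewards near-identical spellings, while n-gram cosine tolerates reordering and affixes, which is why letting each model pick the algorithm best suited to its dataset can improve precision.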

Change Management

  • The AI Recommendation Model now provides advanced configuration options for algorithm selection, scoring, and object filtering, enabling precise control over recommendation accuracy and scope.

    What Changed

    Previously, the AI Recommendation Model relied solely on Fuzzy Logic for Name Score calculation and lacked configurable scoring parameters or advanced filtering options. The Business Glossary also used a single regex field for both name and data matching.

    In the current version, users can choose from multiple algorithms—Cosine Similarity, LLM (Semantic), Levenshtein Similarity, or Fuzzy Logic—to determine how name comparisons are performed. The model setup now supports Tag Selection and Regex-based inclusion/exclusion, allowing users to control which objects are included in recommendations. Scoring logic is fully configurable, with options such as Data Type Match Boost and Heuristic Scoring for enhanced accuracy.

    Additionally, the Business Glossary now includes separate configuration fields for Object Name Matching Regex and Object Data Matching Regex, each with its own Boost Score, providing more granular control and precision in term-level scoring.

    Affected Users: Administrators, Authors, and Data Stewards.

Data Classification Recommendation

New & Improved

Configurable Object Limit for DCR AI Models

In Data Classification Recommendation (DCR), a new system setting, dcr.object.limit, has been introduced to allow administrators to configure the maximum number of objects that can be added to a Data Classification Recommendation AI model. Previously, the limit was fixed at 20 objects, which restricted the ability to create bulk AI recommendation models for different data sets. With this improvement, the default limit remains 20, but it can now be increased up to 99 through system settings.

Certification Policy

Fixed

Caution Tag Propagates to Downstream Objects

In the Certification Policy, an issue occurred where the caution tag was applied only to the directly associated object when a Data Quality (DQ) rule failed, instead of propagating to downstream objects. This issue has been resolved. The caution tag now correctly applies to both the associated object and all downstream objects (table types), ensuring accurate visibility of data quality impact across the lineage.
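The corrected behavior amounts to a breadth-first walk over the lineage graph from the failed object. The sketch below illustrates that propagation with a hypothetical lineage structure; it is not OvalEdge's implementation.

```python
from collections import deque

# Hypothetical lineage graph: object -> downstream objects.
LINEAGE = {
    "raw.orders": ["stg.orders"],
    "stg.orders": ["mart.sales", "mart.refunds"],
    "mart.sales": [],
    "mart.refunds": [],
}

def propagate_caution(failed_object: str) -> set:
    """Tag the failed object and every object downstream of it."""
    tagged, queue = {failed_object}, deque([failed_object])
    while queue:
        for child in LINEAGE.get(queue.popleft(), []):
            if child not in tagged:
                tagged.add(child)
                queue.append(child)
    return tagged
```

With this traversal, a DQ failure on stg.orders tags stg.orders plus both downstream marts, rather than stg.orders alone.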

Data Quality

New & Improved

Rules Summary

A new Rules Summary feature is introduced in Data Quality Rules, providing a centralized view of all rule executions, object executions, and upcoming scheduled jobs. It enables users to monitor rule performance, execution history, and future schedules in one place.

The Rules Summary page includes three tabs:

  • Rule Executions: Displays the complete execution history of all data quality rules across rule types (Table, Column, File, and Code). Key details include Rule Name, Object Type, Execution ID, Result, Passed/Failed/Undetermined Objects, Start/End Time, Duration, and Run By. Users can filter or sort by Rule Execution ID and Object Execution ID for quick navigation.

  • Object Execution Results: Provides detailed insights for each executed object, including Rule Name, Object Type, Execution IDs, Connection Name, Schema/Folder/Object/Attribute, Row Counts (Passed/Failed/Total), Result Value, and Run On. Supports filtering by Rule Execution ID and Object Execution ID to access specific results.

  • Upcoming Executions: Displays future scheduled data quality jobs, sourced from Jobs > Jobs Summary > Upcoming Jobs, showing the scheduled date, time, and associated rules for easy monitoring of upcoming runs.

Rule Execution ID Dropdown and Dashboard Header Updates

The Data Quality Rule Dashboard now includes a Rule Execution ID dropdown to view results for a specific rule execution. When an Execution ID is selected, all dashboard components—Execution Overview, Object Results Distribution, Failure Insights, and Quality Trends—automatically refresh to display the corresponding data.

By default, the dashboard displays the most recent execution. Additionally, section headers have been updated to remove the word “Last” to ensure consistent and clear labeling across the dashboard.

Job Workflows and Schedules Columns in Data Quality Rules

The Data Quality module now includes enhanced navigation features to improve usability and streamline access to related information. Two new optional columns, Job Workflows and Schedules, have been added to the Data Quality Rules table. These columns display the count of job workflows and schedules associated with each rule. Each count serves as a navigational link, redirecting users to the respective Job Workflows or Schedules page, automatically filtered by the selected Data Quality Rule.

Change Management

  • The Data Quality Rule Dashboard now includes a Rule Execution ID dropdown, allowing users to view results for specific rule executions rather than just the most recent one.

    What Changed

    Previously, the Data Quality Rule Dashboard only displayed the most recent execution statistics and results, limiting users' ability to analyze past executions.

    In the current version, a new Rule Execution ID dropdown allows users to select a specific execution, which automatically refreshes all related dashboard components—Execution Overview, Object Results Distribution, Failure Insights, and Quality Trends—to display the corresponding data. Section headers have also been updated to remove the word “Last,” ensuring consistent and clear labeling.

    Affected Users: Administrators and Business Users

    👉 For more details, see Data Quality - Rule Execution ID Dropdown.

Query Sheet

Fixed

Support for Parameterized Queries and Custom Configurations

In the Query Sheet, an issue occurred where custom configurations and parameterized queries were not supported. This issue has been resolved. Users can now define parameterized values, and when executing a query using these parameters, the system automatically retrieves the corresponding configuration values, restoring full support for custom configurations and parameterized queries.
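The restored behavior, substituting stored configuration values into placeholders at execution time, can be sketched as below. The `${name}` placeholder syntax and the parameter names are assumptions for illustration.

```python
import re

# Hypothetical stored configuration values for query parameters.
CONFIG = {"region": "EMEA", "min_amount": "1000"}

def resolve_parameters(query: str, config: dict) -> str:
    """Replace ${name} placeholders with their configured values."""
    def lookup(match):
        key = match.group(1)
        if key not in config:
            raise KeyError(f"No configuration value for parameter '{key}'")
        return config[key]
    return re.sub(r"\$\{(\w+)\}", lookup, query)

sql = resolve_parameters(
    "SELECT * FROM sales WHERE region = '${region}' AND amount > ${min_amount}",
    CONFIG,
)
```

Failing loudly on an unknown parameter, rather than silently leaving the placeholder in place, is one way to surface misconfigured queries early.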

Jobs

New & Improved

INIT-State Jobs Can Be Queued for Priority Execution

In Jobs, users can now select the ‘Run On Priority Queue’ option from Manage Jobs to prioritize jobs in the INIT state. Any number of jobs can be marked for the priority queue, but they are prioritized only after the currently executing jobs are complete.

The Run Now configuration defines the size of a separate queue for priority jobs, ensuring they can run even when they would otherwise remain stuck in the INIT state due to the ‘ovaledge.running.jobs.count’ limitation. For example, if the configuration limit is set to 3, a maximum of three jobs can be set to run on the priority queue at a time.
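The interplay between the bounded priority queue and the normal queue can be sketched as follows. The class, limit value, and method names are illustrative assumptions, not OvalEdge internals.

```python
from collections import deque

RUN_NOW_LIMIT = 3  # analogous to the Run Now configuration count

class JobScheduler:
    """Toy scheduler: priority jobs drain before normal jobs."""
    def __init__(self):
        self.priority = deque()
        self.normal = deque()

    def submit(self, job: str, run_on_priority: bool = False) -> bool:
        if run_on_priority:
            if len(self.priority) >= RUN_NOW_LIMIT:
                return False          # priority queue is full
            self.priority.append(job)
        else:
            self.normal.append(job)
        return True

    def next_job(self):
        if self.priority:
            return self.priority.popleft()
        return self.normal.popleft() if self.normal else None
```

Because the priority queue is drawn from first, an INIT-state job promoted via Run On Priority Queue jumps ahead of the normal backlog as soon as a slot frees up.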

Job Log Page Metrics

In Jobs, the Job Log 'Page Metrics' provides a detailed overview of key performance indicators and diagnostic data for each job execution. The Page Metrics button is associated with every Job Log linked to a specific Job.

It captures critical metrics, such as Job Title, Job Duration, and Velocity (processing speed), along with counts of Application Database calls, Source System calls, and Asynchronous calls, which help in understanding system interaction and load. Additionally, it tracks SQL Error Counts, Log Error Counts, and Slow Queries Count, providing insight into potential issues that may impact performance or data quality. Each metric entry also includes a Page Request ID and the corresponding Page URL, allowing administrators to trace and troubleshoot specific requests within the application.

Fixed

INIT State Issue Fixed for Report Downloads

In Jobs, an issue occurred where downloading Data Catalog reports in CSV format caused jobs to remain stuck in the INIT state. This issue has been resolved, and report downloads now complete successfully.

Large Output Logs Download Without Errors

In Jobs, an issue occurred where jobs with large output failed to download logs due to memory exhaustion and timeout errors. This issue has been resolved, and logs for all jobs, including those with large output, can now be downloaded successfully.

Reliable Execution of Data Quality Scheduled Jobs

In Jobs, an issue occurred where Data Quality Scheduled Jobs sometimes showed inconsistent execution statuses, such as ‘Partial Success’, while rerunning the same job displayed ‘Passed’. In some cases, jobs failed due to connector pooling timeout errors.

These issues have been resolved. Data Quality Scheduled jobs now execute reliably according to their scheduled periods, ensuring consistent and accurate job statuses.

Impact Analysis

Fixed

Loader Added for Downloading Impacted Objects

In Impact Analysis | Download Impacted Objects, an issue occurred that caused delays when downloading reports, and users repeatedly clicking the download button sometimes crashed the production Tomcat server. This issue has been resolved. A loader has been added to the download pop-up to indicate progress and prevent multiple clicks.

Change Management

  • In Impact Analysis, the download process now runs as a background job, allowing users to download the impacted objects as a ZIP file from the Job Logs.

    What Changed

In Impact Analysis, the impacted objects were previously downloaded directly using the Download button. Now, when the Download button is clicked, a job is initiated in the background, and the data can then be downloaded as a ZIP file from the Job Logs.

    Affected Users: Admin Users

Load Metadata from Files

Fixed

UserID Column Displays Updated Details

In the Business Glossary LMDF Template, an issue occurred where the UserID column did not display updated information after downloading the template with metadata. This issue has been resolved. The UserID column now correctly reflects the updated details in the downloaded template.

Data & Metadata Changes

New & Improved

Analyze Data Using Report Type Filter

In Data & Metadata Changes, the Reports tab now includes a Report Type filter column, enabling users to filter and analyze data by report type. This enhancement improves data visibility, making it easier to locate and work with specific types of reports.

Fixed

Consistent Last Meta Sync Date Across Reports

In Data and Metadata Changes (Reports Tab), an issue occurred where the Last Meta Sync Date values were inconsistent with those shown in the Data Catalog. This issue has been resolved. The Last Meta Sync Date column now displays consistent and accurate values across both the Data and Metadata Changes (Reports Tab) and the Data Catalog (Reports Tab).

Upload File or Folder

New & Improved

Duplicate Upload Restriction for NFS Connector

In the Upload File or Folder module, users could previously re-upload the same file or folder multiple times to the same location using the NFS Connector. This behavior could cause unintentional overwriting and confusion due to the absence of validation or warning messages.

With this enhancement, the system now validates uploads to prevent duplicates within the same directory. If a file or folder with the same name already exists, the system displays an error message and restricts the upload. Users must rename the file or folder before re-uploading.

This improvement ensures data integrity, prevents accidental overwrites, and maintains clarity in shared storage environments.
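As an illustration only, duplicate-name validation of this kind can be sketched as follows. This is a hypothetical Python sketch, not the product's actual implementation; the function name and error message are assumptions.

```python
from pathlib import Path

def validate_upload(target_dir: str, upload_name: str) -> None:
    """Reject an upload when a file or folder with the same name
    already exists in the target directory (hypothetical sketch)."""
    destination = Path(target_dir) / upload_name
    if destination.exists():
        raise FileExistsError(
            f"'{upload_name}' already exists in '{target_dir}'. "
            "Rename the file or folder before re-uploading."
        )
```

The check treats files and folders uniformly, mirroring the behavior described above: an existing name blocks the upload until the user renames the item.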

Connectors

New & Improved

Salesforce | Column Relationship Type Display

In Salesforce, the column relationship type is now displayed to provide better visibility into field dependencies and metadata understanding.

The following updates have been implemented:

  • Relationship types such as Master-detail, Lookup, Many-to-many, External lookup, Indirect lookup, and Hierarchical are now visible for Salesforce columns.

  • The enhancement helps stewards identify incorrect relationships that could affect data quality or calculations.

Configurable Connection Pool

In the Connectors module, connection pooling settings are now configurable directly within the application for improved flexibility and performance management.

The following updates have been implemented:

  • A new Connection Pooling tab is available under Administrator > Connectors > Settings.

  • Administrators can configure parameters such as maximum pool size, idle time, connection timeout, validation timeout, and leak detection threshold.

  • Default values are applied automatically and can be modified with proper validation controls.

  • All configured values are stored and applied during the next job execution without affecting active sessions.

  • The update standardizes connection pool management across all RDBMS connectors, improving scalability and monitoring through logs and metrics.
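To show how the listed parameters and validation controls might relate to each other, here is a minimal Python sketch of a pool-settings object. The field names mirror the parameters above; the default values and validation rules are assumptions for illustration, not the application's actual defaults.

```python
from dataclasses import dataclass

@dataclass
class PoolSettings:
    # Hypothetical defaults; the application applies its own defaults.
    max_pool_size: int = 10
    idle_time_ms: int = 600_000
    connection_timeout_ms: int = 30_000
    validation_timeout_ms: int = 5_000
    leak_detection_threshold_ms: int = 0  # 0 disables leak detection

    def validate(self) -> None:
        """Reject obviously inconsistent values before they are stored."""
        if self.max_pool_size < 1:
            raise ValueError("max_pool_size must be at least 1")
        if self.connection_timeout_ms <= 0:
            raise ValueError("connection_timeout_ms must be positive")
        if self.validation_timeout_ms > self.connection_timeout_ms:
            raise ValueError(
                "validation_timeout_ms cannot exceed connection_timeout_ms"
            )
```

Validating before persisting matches the described flow: values are stored with validation controls and only take effect at the next job execution.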

Connector Summary Page

In the Connectors module, the Connector Summary Page is now available to provide a centralized view of a connector, enabling efficient monitoring, management, and analysis of connector performance and data quality.

It displays:

  • Connector description, authentication type, credential manager, environment, and governance roles (Data Steward, Data Owner, Data Custodian, Integration Admin, Security Admin).

  • Crawl details, including last crawl status, job reference, duration, and objects (new, updated, deleted).

  • Profiling details such as coverage %, last profile status, job reference, and top profiled tables (table name, row count, null %, distinct %) with pagination.

  • Recent job activity with job ID, type, triggered by, status, duration, and error message; supports pagination.

  • All scheduled jobs with type, frequency, scheduled run time, and associated workflows.

Credential Manager Visibility

In the Connectors module, a new field is now available to display the enabled credential manager, improving visibility and operational efficiency.

Implemented Features:

  • The Credential Manager column has been added to the connector list view.

  • The column displays the configured credential manager, such as Vault, AWS Secret Manager, Azure Key Vault, or HashiCorp Vault.

  • If no credential manager is configured, the field displays Empty.

  • Eliminates the need to access connector settings to view credential details, streamlining navigation and management.

Azure Synapse | Classification Metadata Fields

In Azure Synapse, all classification-related metadata fields are now fetched and mapped to improve data categorization and understanding.

Implemented Features

  • Support has been added to fetch all classification fields for Synapse objects instead of only a single field.

  • Each classification field is mapped to connector-level custom fields for accurate data representation.

  • Alias columns are added for all custom field types to assist in mapping and identification.

  • Advanced jobs now validate and fetch classifications for all required schema tables.

  • Audits are added for alias fields to support tracking and reference.

Service Desk Administration

Change Management

  • Governance role changes now automatically update users/teams in both new and existing service request workflows.

    What Changed

    In Service Desk Templates, previously, when the governance role of an object was updated, the associated user/team in the workflow of existing service requests was not automatically updated; the change applied only to newly created service requests. Now, when the governance role of an object is updated, the associated user/team in the workflow of existing service requests is updated automatically as well.

    Affected Users: All users (only users assigned a governance role)

    👉 For more details, see SLA Alerts and Approvals.

  • Integration details have been moved from the Service Desk Administration module to a separate component under the main Administration module.

    What Changed

    In Service Desk Administration, integration details were previously included under the Service Desk Administration module. They are now available as a separate component under the main Administration module.

    Affected Users: Author license users

Advanced Jobs

New & Improved

Extended Functionalities of Alation to OvalEdge URL Migration Job

The migration process previously converted Alation URLs to the OvalEdge format by reading the description field from the Wiki table only. This functionality has been extended. The migration job now also reads data from the Storyline table, and any Alation URLs present in story content are automatically converted to OvalEdge URLs. This enhancement ensures a more complete and accurate URL migration.

Alation URL Conversion Failed Due to Missing Double Quotes

When running the Alation URL conversion advanced job, links failed to migrate correctly because double quotes were missing around attribute values in the HTML. The issue was resolved by correcting the link format, re-uploading the descriptions, and re-running the migration advanced job. URLs now convert successfully into the OvalEdge format.
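A repair of this kind, adding the missing double quotes around `href` values and then rewriting the host, can be sketched in Python. The hostnames, regex, and function are hypothetical illustrations, not the actual migration job; in practice an HTML parser would be a safer choice than a regex.

```python
import re

ALATION_HOST = "alation.example.com"    # hypothetical source host
OVALEDGE_HOST = "ovaledge.example.com"  # hypothetical target host

def repair_and_convert(html: str) -> str:
    # Add the double quotes that were missing around href attribute values;
    # already-quoted attributes are left untouched.
    html = re.sub(r'href=([^"\s>]+)', r'href="\1"', html)
    # Rewrite links from the Alation host to the OvalEdge host.
    return html.replace(ALATION_HOST, OVALEDGE_HOST)
```

A malformed link such as `<a href=http://alation.example.com/table/1>` becomes a properly quoted OvalEdge link after both steps.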

Advanced Jobs

The latest release introduces a set of new advanced jobs that allow users to update module- or feature-specific data using advanced algorithms.

Name
Description

Merging Temp Lineage Tables in Bulk

This job merges temporary lineage tables into their original tables within a specified schema in bulk.

Fetching Labels for Synapse Columns

This job helps fetch Synapse labels at both table and column levels.

Table Archival

This job archives table data according to the specified retention period. For example, if the retention period is set to 5 days, the most recent 5 days of data are retained, while all older data is deleted and archived in CSV format under the specified temppath/table_archival/ directory.
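The retention logic described above can be sketched as follows. This is an illustrative Python sketch under assumed names (`archive_table`, a `date_key` column, in-memory rows), not the actual advanced job.

```python
import csv
from datetime import datetime, timedelta
from pathlib import Path

def archive_table(rows, date_key, retention_days, temp_path, table_name):
    """Keep rows within the retention window; write older rows to CSV
    under <temp_path>/table_archival/ (hypothetical sketch)."""
    cutoff = datetime.now() - timedelta(days=retention_days)
    retained = [r for r in rows if r[date_key] >= cutoff]
    expired = [r for r in rows if r[date_key] < cutoff]
    if expired:
        out_dir = Path(temp_path) / "table_archival"
        out_dir.mkdir(parents=True, exist_ok=True)
        with (out_dir / f"{table_name}.csv").open("w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(expired[0].keys()))
            writer.writeheader()
            writer.writerows(expired)
    return retained
```

With a 5-day retention period, rows newer than the cutoff survive and everything older lands in the CSV archive, matching the example in the table above.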

System Settings

The latest release introduces new system settings that enhance user control over the application's behavior.

Key
Description
Impacted Modules

askedgi.recipe.marketplace.base.url

Configure the URL of the recipe marketplace used for editing or generating public/shareable links.

Parameters:

The default value is Empty.

Enter the required marketplace URL

askEdgi, Recipe

askedgi.recipe.marketplace.password

Configure the password used to authenticate with the recipe marketplace APIs.

Parameters:

The default value is Empty.

Enter the required marketplace password.

askEdgi, Recipe

askedgi.recipe.marketplace.url

Configure the base URL of the recipe marketplace for API calls.

Parameters:

The default value is Empty.

Enter the required marketplace URL.

askEdgi, Recipe

askedgi.recipe.marketplace.username

Configure the username used to authenticate with the recipe marketplace APIs.

Parameters:

The default value is Empty.

Enter the required marketplace username.

askEdgi, Recipe

askedgi.s3.public.connector

Configure the S3 public connector ID used for S3-to-S3 copy operations in askEdgi. When a connection matches this ID, the system uses copy mode instead of download-to-local mode.

Parameters:

The default value is Empty.

Enter the required S3 connector ID.

askEdgi

askedgi.workspace.objects.limit

Configure the maximum number of objects allowed in a workspace.

Parameters:

The default value is 50.

Enter the required workspace object limit.

askEdgi

askedgiplus.enable

Configure to enable or disable askEdgi Plus.

Parameters:

The default value is False.

If set to True, askEdgi Plus is enabled.

If set to False, askEdgi Plus is disabled.

askEdgi

max.concurrent.bridgeclient.tasks

Configure the maximum number of client-side data processing tasks allowed to run simultaneously.

Parameters:

The default value is 5.

Enter the required task limit.

askEdgi

max.concurrent.bridgeserver.tasks

Configure the maximum number of server-side data processing tasks allowed to run simultaneously.

Parameters:

The default value is 3.

Enter the required task limit.

askEdgi

oe.edgi.free.plan.price.id

Configure the Stripe price ID for the free plan.

Parameters:

The default value is Empty.

Enter the required Stripe price ID.

askEdgi, Recipe

ovaledge.edgi.host.type

Configure the application hosting type.

Parameters:

The default value is Empty.

Enter the hosting type (e.g., public).

askEdgi

askedgi.edition

This value is used for choosing the askEdgi edition: Catalog Insights or Insights & Analytics

Parameters:

Available editions: Catalog Insights / Insights & Analytics

The default value is '.

Catalog Insights: Catalog search with metadata analytics across tables and files

Insights & Analytics: Full analytics on top of files and catalog tables for deeper analysis

askEdgi

askedgi.ovaledge.recipe.connection.id

Configure the default OvalEdge recipe connection ID.

askEdgi, Recipe

askedgi.metadata.analytics.role

Define the role permitted to perform metadata analytics in askEdgi (SaaS v2 and On-Prem variants).

A role from any license level can be selected.

As a best practice, configure a role associated with a limited number of users (ideally 4–5) to ensure stable performance of workspaces.

Parameters:

The default role is OE_ADMIN.

askEdgi

dcr.object.limit

Set the maximum number of objects that can be added to the scope of the DCR Model.

Parameters:

Default Value: 20

Max Value: 100

Data Classification Recommendations

jobs.priority.running.count

This configuration defines the size of a separate queue for priority jobs, ensuring they can run even when they are stuck in the INIT state due to the ovaledge.running.jobs.count limitation.

Jobs

ai.config

This configuration stores the AI configuration. Selecting it opens a pop-up to configure the AI settings.

Parameters:

AI Config

The default value is the standard configuration without the API keys.

AI

notes.relatedobjects

When this configuration is turned on, a Notes section appears in the Related Objects of the Business Glossary.

Business Glossary

restrict.newuser.saml.login

When set to true, this configuration restricts new users from logging in via SAML.

Note: A user must be created in advance through the login process or administrative setup.

Parameters:

The default value is false.

Login

dashboard.apache.superset.enabled

This configuration enables the integration of the Apache Superset external application when set to True.

Parameters:

The default value is false.

Dashboards

dashboard.apache.superset.base_url

This configuration specifies the required Apache Superset base URL.

Dashboards

dashboard.apache.superset.user_name

This configuration specifies the username used to authenticate with the configured Apache Superset base URL.

Dashboards

dashboard.apache.superset.password

This configuration specifies the password used to authenticate with the configured Apache Superset base URL.

Dashboards

dashboard.external.dashboard.admin

This configuration defines the user role authorized to access the Apache Superset external application.

Parameters:

The default value is OE_Admin.

Dashboards
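Settings such as max.concurrent.bridgeclient.tasks cap how many data processing tasks run at once. How such a cap could be enforced is sketched below with a semaphore; this is an illustrative Python sketch, not the product's implementation, and the function name is an assumption. The limit of 5 mirrors the documented default.

```python
import threading

MAX_CONCURRENT_TASKS = 5  # mirrors the default of max.concurrent.bridgeclient.tasks

_slots = threading.Semaphore(MAX_CONCURRENT_TASKS)

def run_task(task):
    """Run a task only when one of the limited slots is free;
    excess tasks block until a slot is released."""
    with _slots:
        return task()
```

Any number of tasks can be submitted concurrently, but at most five execute at the same time; the rest wait for a free slot.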

Known Vulnerabilities

This release addresses known security vulnerabilities identified by Trivy scans. These issues originate from third-party open-source libraries packaged with the product images and do not come from OvalEdge’s own custom code.

Standard Image

The scan identified a total of 3 High severity vulnerabilities. No Critical, Medium, or Low severity issues were reported. The vulnerabilities are in core web libraries and include:

  • Netty (CVE-2025-55163 - HTTP/2 "MadeYouReset" DoS, and CVE-2025-24970 - SslHandler Native Crash).

  • Tomcat (CVE-2025-48989 - HTTP/2 "MadeYouReset" DoS).

These issues primarily pose a Denial of Service (DoS) risk by causing resource exhaustion or a native application crash.

Big Data Image

The scan identified 1 Critical vulnerability and a total of 11 unique High severity CVEs in third-party libraries packaged with the Big Data container.

  • 1 Critical vulnerability was identified: CVE-2018-1282 in Hive JDBC. This is an improper input validation flaw that could allow a remote attacker to execute arbitrary SQL commands (SQL Injection), potentially leading to data theft or modification.

  • 11 High-severity vulnerabilities were detected, including issues in big data and utility components:

    • Hadoop (CVE-2021-33036 - YARN Privilege Escalation).

    • Protobuf-Java (CVE-2024-7254, CVE-2022-3509 & CVE-2022-3510, CVE-2021-22569).

    • Gson (CVE-2022-25647 - Deserialization of Untrusted Data).

    • Netty (CVE-2019-16869 - HTTP Request Smuggling).

    • Commons IO (CVE-2024-47554 - Denial of Service).

    • Jetty (CVE-2024-13009 - Gzip Request Buffer Flaw).

    • Hive (CVE-2015-7521 - Authorization Bypass, CVE-2015-1772 - LDAP Auth Bypass, CVE-2016-3083 - Improper Certificate Validation/MITM).

These issues could allow unauthorized access (authorization/authentication bypass), privilege escalation, denial-of-service (DoS) attacks, or compromise connection integrity.

Action Plan

The identified vulnerabilities, including both critical and high-severity issues, will be remediated in the upcoming release. Fixes will be applied through updated libraries and components to ensure the product images include the latest security patches.


Copyright © 2025, OvalEdge LLC, Peachtree Corners, GA, USA.
