Crawling
Crawling is the process of collecting metadata from a connected data source. This metadata includes objects such as databases, tables, columns, reports, files, code, APIs, and their attributes.
Data sources can include databases, data lakes, and reporting tools. Once crawling is complete, the collected metadata is available in the Data Catalog.
Example: For a SQL Server data source, the system connects to the server, crawls each table, and retrieves metadata such as Table Name, Column Name, Data Type, Data Type Size, Title, and Description. The screenshot below shows the metadata collected from a sample database table in a SQL Server data source.
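The crawl step described above can be sketched in code. The snippet below is a minimal, hypothetical illustration of column-metadata crawling, not OvalEdge's actual implementation: it uses Python's standard-library sqlite3 module as a stand-in database so the example is self-contained, whereas a real SQL Server crawler would connect through a driver such as pyodbc and query the INFORMATION_SCHEMA.COLUMNS catalog view. The function and table names are assumptions for this example.

```python
import sqlite3

def crawl_table_metadata(conn, table_name):
    """Collect column-level metadata for one table.

    Stand-in for a real crawler: against SQL Server you would instead run
    SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
    FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = ?
    """
    columns = []
    # PRAGMA table_info yields (cid, name, type, notnull, default, pk)
    for _cid, name, dtype, _notnull, _default, _pk in conn.execute(
        f"PRAGMA table_info({table_name})"
    ):
        columns.append(
            {"table": table_name, "column": name, "data_type": dtype}
        )
    return columns

# Build a sample table, then crawl it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name VARCHAR(50))")
metadata = crawl_table_metadata(conn, "customers")
for entry in metadata:
    print(entry)
```

The output is one record per column, which is the kind of structured metadata that a crawler loads into the Data Catalog.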
Copyright © 2025, OvalEdge LLC, Peachtree Corners GA USA