Trustworthy Model
This article defines how askEdgi ensures trustworthy AI behavior, including data governance, explainability, auditability, and responsible use of artificial intelligence when analyzing customer-provided data.
askEdgi is designed to assist authorized users in analyzing, transforming, and understanding data they are permitted to access. Trustworthiness is achieved through deterministic execution, strong governance controls, transparent explanations, and comprehensive auditability.
Applies To: askEdgi SaaS Platform
Scope and System Boundaries
askEdgi operates within the following boundaries:
In scope
User-uploaded files (e.g., spreadsheets, CSVs)
Data accessed via approved connectors
Deterministic computation in sandboxed execution environments
AI-assisted planning and explanation
Workspace-level governance and access controls
Out of scope
Autonomous decision-making
Scoring or profiling individuals
Unapproved data enrichment
Silent modification of customer data
askEdgi does not access external data sources unless explicitly configured and authorized by the customer.
Definitions
Customer Content: Data, files, prompts, generated code, analysis outputs, and derived artifacts associated with a customer workspace.
Personal Data: Any information relating to an identified or identifiable individual.
System Metadata: Security logs, performance telemetry, and operational metrics excluding raw customer datasets.
Execution Sandbox: Isolated compute environment used for deterministic operations.
Model Provider: Third-party AI model API used for interpretation and explanation.
Trustworthy AI Principles
askEdgi adheres to the following principles:
Groundedness & Accuracy
All numeric, statistical, and analytical results in askEdgi are generated through deterministic execution engines, such as SQL and Python, operating within controlled execution sandboxes. Language models are not used to compute values, derive metrics, or perform calculations. This separation ensures that analytical outputs are reproducible, verifiable, and consistent across executions, providing users with reliable results that can be independently validated.
Language models in askEdgi are limited to interpreting user intent expressed in natural language and generating human-readable explanations of executed results. They do not perform computations, generate synthetic values, or infer missing data. All explanations are grounded in executed logic and validated outputs, ensuring that narratives accurately reflect what was actually run and returned by the system.
When the available data, metadata, or context is insufficient to answer a user’s question accurately, askEdgi does not attempt to infer or guess results. Instead, the system explicitly signals uncertainty and prompts the user for clarification, additional context, or dataset selection. This behavior prevents misleading outputs and reinforces responsible, accuracy-first analytics.
Explainability
askEdgi delivers explainability through a layered explanation model that caters to both business and technical users.
Each analysis may include a high-level summary describing the outcome in plain language, followed by detailed technical explanations that outline transformations, filters, joins, and assumptions applied during execution.
Where applicable, executable artifacts such as SQL queries or Python logic are available to support full reproducibility and independent review.
askEdgi provides layered explanations:
Summary: Plain-language result description
Technical Detail: Transformations, filters, joins, assumptions
Reproducibility: Executable SQL, Python, or formula steps
Auditability
askEdgi maintains comprehensive auditability by logging all material user and system actions, including data access, query execution, recipe runs, and workspace operations. Each log entry is timestamped and associated with a specific user identity and workspace context. These logs support governance reviews, incident investigations, and compliance audits, and align with OvalEdge platform audit and retention policies.
Governance & Least Privilege
askEdgi enforces governance through role-based access control and strict workspace isolation. Users can only access data sources, connectors, and actions that are explicitly permitted by their assigned roles and underlying OvalEdge catalog permissions. Workspace boundaries ensure that analysis artifacts, uploaded files, and execution results are isolated per user, preventing unauthorized access across users or teams.
Safety & Policy Enforcement
askEdgi applies a strict instruction hierarchy in which system and governance policies always take precedence over user input and dataset content. All analytical execution occurs within sandboxed environments that isolate compute workloads and prevent persistence beyond approved storage. Output controls further restrict response formats and data exposure, reducing the risk of misuse or unintended data leakage.
Explainability & Provenance Model
Each analysis run may include:
Data sources used (dataset IDs, connector references)
Row and column counts accessed
Filters, joins, aggregations applied
Missing-data handling strategy
Versioned execution artifacts
Analysis run identifier
Users may request a full “show your work” view at any time.
Hallucination Mitigation
askEdgi mitigates hallucinations by:
Separating planning from execution
Validating narrative explanations against executed results
Refusing to guess or fabricate values
Enforcing schema awareness and type checking
Data Governance Controls
Workspace isolation
RBAC (Owner, Admin, Analyst, Viewer)
Connector allow lists
Read-only default mode
Explicit approval for write-back operations
Optional sensitive-data redaction rules
Human-in-the-Loop Controls
askEdgi does not perform automatic write-back to source systems or downstream platforms. All actions that could export, modify, or persist results require explicit user initiation and confirmation. This ensures that human oversight remains central to all impactful operations and prevents unintended changes.
AI Threat Mitigation
All user input, dataset content, and contextual data are treated as untrusted input within askEdgi. System-level instructions and governance policies override any instructions embedded in data or user prompts. Tool invocation and execution paths are validated against policy checks to prevent prompt injection and unauthorized behavior.
Prompt Injection
Data treated as untrusted input
System instructions override data instructions
Tool invocation requires policy checks
Data Exfiltration
Output size limits
Aggregation-first responses
Raw dumps are disabled by default
Sensitive-field redaction
Cross-Tenant Leakage
Tenant-scoped identity and encryption
Strict authorization on all access paths
Model Governance
askEdgi maintains governance over AI model usage through versioned model registries and controlled prompt templates. Changes to prompts and model configurations follow defined change-control processes and are tested to ensure accuracy, safety, and consistency before release. Monitoring mechanisms are in place to detect unexpected behavior following model updates.
Compliance Alignment
This policy aligns with:
OECD AI Principles
NIST AI Risk Management Framework
GDPR and applicable data protection laws
Review Cycle
The Trustworthy Model is reviewed on an annual basis or whenever there is a material change to askEdgi’s architecture, AI usage, or governance controls. This review process ensures that documented controls remain accurate, aligned with platform behavior, and consistent with evolving regulatory and governance expectations.
Copyright © 2026, OvalEdge LLC, Peachtree Corners, GA USA
Last updated
Was this helpful?

