Data Quality Control Mechanisms
Overview of the mechanisms that we employ to track, monitor, and alert on data deviations, ensuring the quality and reliability of our data.
Overview
At SAP LeanIX, we prioritize the integrity and reliability of our data. High data quality is essential for making informed business decisions, and by continuously monitoring and alerting on data deviations, we ensure that our customers can trust the data they rely on for their business operations.
This document outlines the mechanisms that we employ to track, monitor, and alert on data deviations in our Gainsight environment to ensure that our data remains reliable and consistent for all stakeholders.
For any questions or further information, please reach out to your Customer Success Manager.
Data Quality Monitoring Framework
Our data quality monitoring framework is designed to detect and alert on deviations that might indicate potential issues. The following sections describe how we ensure robust data quality control.
Data Source Integration
We integrate data from multiple sources into Gainsight, ensuring comprehensive data coverage. The following table lists our primary data sources.
| Data Source | Usage | Additional Information |
| --- | --- | --- |
| Salesforce | Used to connect with our CRM system, allowing our Sales team to view Gainsight data | Salesforce Connector |
| Snowflake | Used to retrieve product usage data and customer engagement metrics, as well as track the adoption of SAP LeanIX | Snowflake Connector |
| Zendesk | Used to ingest support tickets and track their statuses | Zendesk Connector |
| Productboard | Used to collect and monitor product ideas sent to our Product team | Configure Product Requests |
| Gainsight Customer Community | Used to track engagement in the SAP LeanIX Community | Customer Communities and Gainsight Integration - Admin Guide |
Automated Alert Mechanisms
We've established automated systems in Snowflake to identify significant data changes at both the regional and workspace levels, enhancing data quality and reliability. As a backup, we've also set up a webhook that takes over if Snowpipe, Snowflake's data ingestion service, fails.
Our primary focus is identifying the following:
- Daily row count deviations: Monitoring the number of rows for each data source and table in the last 7 days.
- Percentage change alerts: If the row count decreases by more than 25% from the previous day, an alert is triggered. This threshold balances sensitivity and practicality: it surfaces significant anomalies without flooding the system with alerts for minor fluctuations that fall within the normal range of data variability. Deviations under 25% are considered acceptable, as they typically reflect natural variation in data volume rather than issues needing immediate attention, which lets us focus our resources on substantial, potentially problematic changes.
- Weekend exception: To avoid false positives due to typically lower weekend activity, no alerts are generated for data collected on weekends (Saturdays and Sundays, CEST time zone).
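To make the alert rules above concrete, here is a minimal, illustrative sketch in Python of the day-over-day check. This is not our production Snowflake implementation; the function name, threshold constant, and example counts are assumptions for illustration only.

```python
from datetime import date

# Illustrative sketch of the percentage-change alert rule, not the
# production Snowflake implementation.
THRESHOLD = 0.25  # alert on a day-over-day row-count drop of more than 25%


def should_alert(today_count: int, yesterday_count: int, check_date: date) -> bool:
    """Return True if a row-count deviation alert should be raised."""
    # Weekend exception: skip Saturdays (weekday 5) and Sundays (weekday 6)
    # to avoid false positives from typically lower weekend activity.
    if check_date.weekday() >= 5:
        return False
    if yesterday_count == 0:
        return False  # no baseline to compare against
    drop = (yesterday_count - today_count) / yesterday_count
    return drop > THRESHOLD
```

For example, a drop from 1,000 rows to 600 rows (a 40% decrease) on a weekday would trigger an alert, while the same drop on a Saturday, or a smaller 20% decrease on a weekday, would not.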
Customer Communication
In the event of a data deviation affecting all customers, our team promptly investigates the issue and communicates any findings and resolutions to the customers. Our goal is to ensure transparency and maintain the highest standards of data quality.
Continuous Improvement
We're committed to continuous improvement of our data quality processes. This involves regular reviews, updates, and enhancements to our monitoring systems to adapt to new challenges and ensure ongoing reliability.