A modern data warehouse should be able to meet the requirements of agile BI projects and support new data sources. Traditional ETL tools and data warehouse architectures do not have the agility, ease of use and level of automation required for this.
The areto Data Chef is an innovative DWH tool that optimizes the data warehouse creation process end-to-end. It significantly reduces the time, cost and risk of data warehouse projects. areto Data Chef supports Exasol and Snowflake analytical databases to enable near-real-time data warehousing with continuous and incremental data collection.
As an open source tool, areto Data Chef largely automates data integration. Data is integrated into the DWH in just a few steps without user intervention or complex interface adjustments.
Data is extracted via JDBC (Java); in addition, data can be loaded into the database from .csv files. A built-in repository spares the user long and complex configuration work: areto Data Chef stores all metadata and information about the running processes and can be integrated into monitoring solutions as required.
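As a rough illustration of a flat-file stage load, the following Python sketch parses a delimited .csv source into rows ready for a stage-table insert. The function name and the sample columns are invented for this example; they do not represent Data Chef's actual interface.

```python
import csv
import io

def stage_csv(text: str, delimiter: str = ",") -> list:
    """Parse delimited text into dict rows ready for a stage-table insert.

    Illustrative only: a real loader would stream from a file or JDBC
    result set rather than from an in-memory string.
    """
    reader = csv.DictReader(io.StringIO(text), delimiter=delimiter)
    return [dict(row) for row in reader]

# Hypothetical sample source file content
sample = "customer_id,name\n1001,Acme GmbH\n1002,Example AG\n"
rows = stage_csv(sample)
```

Each row arrives as a column-name-to-value mapping, which keeps the downstream stage insert independent of column order in the source file.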
The areto Data Chef is based on the Data Vault 2.0 modeling method and independently generates the database objects, the process control, and the three layers of a DWH (stage, core, and mart) from a set of mapping files. The mappings form the basis for organizing the data in the DWH: they define the hub, link, and satellite objects used in Data Vault. In addition, the user can define the hard and soft business rules to be applied during loading.
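To give an idea of how mapping-driven generation works in principle, the sketch below turns a small mapping description into a hub's CREATE TABLE statement. The mapping schema, table names, and column types here are invented for illustration and are not Data Chef's actual mapping-file format.

```python
# Hypothetical mapping description for one hub and one satellite.
mapping = {
    "hub": {"name": "hub_customer", "business_key": "customer_id"},
    "satellite": {"name": "sat_customer", "attributes": ["name", "city"]},
}

def hub_ddl(hub: dict) -> str:
    """Generate a CREATE TABLE statement for a Data Vault hub.

    A hub carries the business key plus the standard Data Vault
    metadata columns (hash key, load date, record source).
    """
    return (
        f"CREATE TABLE {hub['name']} (\n"
        f"  hash_key CHAR(32) PRIMARY KEY,\n"
        f"  {hub['business_key']} VARCHAR(100) NOT NULL,\n"
        f"  load_date TIMESTAMP NOT NULL,\n"
        f"  record_source VARCHAR(100) NOT NULL\n"
        f")"
    )

ddl = hub_ddl(mapping["hub"])
```

Because all hubs share the same shape, one generator covers every hub in the model; the same pattern extends to links and satellites.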
Other features of Data Chef include the automatic generation of hash keys and the historization of data in the DWH. This makes the origin and every change of the data visible and traceable for the user.
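A common Data Vault 2.0 convention is to derive the hash key from the normalized business key(s): trim, upper-case, join with a delimiter, then hash. The sketch below follows that convention with MD5; Data Chef's exact normalization rules and hash function may differ.

```python
import hashlib

def hash_key(*business_keys: str) -> str:
    """Compute a Data Vault 2.0 style hash key.

    Normalization (trim + upper-case) makes the key stable across
    sources that format the same business key slightly differently.
    """
    normalized = "||".join(k.strip().upper() for k in business_keys)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()
```

The same business key therefore always produces the same hash key, which is what allows hubs, links, and satellites to be loaded independently and in parallel.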
Using areto Data Chef saves up to 50% of the usual implementation effort. Data quality improves because manual user input is reduced, and thanks to Data Vault 2.0 the DWH can be flexibly extended and adapted, leaving it well prepared for new requirements.
The areto Data Chef lets users concentrate on the business side of the requirements when implementing BI projects; the technical implementation no longer demands their attention. Even with little technical knowledge, users can implement a complete data integration and attach it to existing systems. In this way, a DWH can be extended in very little time and with consistent quality.
Agile BI projects place new demands on the data model underlying a data warehouse. Since the DWH gains additional functionality with each iteration, the data model must grow with these requirements. Existing data models and their loading processes should remain unchanged where possible so that tests do not have to be re-run.
Using Data Vault 2.0, information is divided into key values (“hubs”), references (“links”), and context attributes (“satellites”). This structuring allows the highest possible degree of standardization of the data model and loading processes: an extension of the data model does not affect existing structures. Once the user has defined the mappings, Data Chef generates the corresponding database objects according to Data Vault: hubs, links, and satellites.
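The division described above can be sketched as follows: one source row is split into a hub row (business key plus metadata) and a satellite row (the descriptive attributes), joined by the shared hash key. This is a simplified illustration under assumed column names; real mappings would also cover links between hubs.

```python
import hashlib
from datetime import datetime, timezone

def md5_key(*parts: str) -> str:
    """Data Vault style hash key over normalized business key parts."""
    joined = "||".join(p.strip().upper() for p in parts)
    return hashlib.md5(joined.encode("utf-8")).hexdigest()

def split_record(row: dict, business_key: str, source: str) -> dict:
    """Split one source row into a hub row and a satellite row.

    The hub keeps only the business key and metadata; all descriptive
    attributes go to the satellite, linked via the same hash key.
    """
    hk = md5_key(row[business_key])
    now = datetime.now(timezone.utc)
    hub_row = {
        "hash_key": hk,
        business_key: row[business_key],
        "load_date": now,
        "record_source": source,
    }
    sat_row = {
        "hash_key": hk,
        "load_date": now,
        **{k: v for k, v in row.items() if k != business_key},
    }
    return {"hub": hub_row, "satellite": sat_row}

result = split_record(
    {"customer_id": "1001", "name": "Acme GmbH"}, "customer_id", "crm"
)
```

Because hubs never carry descriptive attributes, adding a new attribute later only means adding a satellite; the existing hub and its loading process stay untouched, which is exactly the extensibility argument above.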
Data in the DWH is separated into a Raw Vault (integrated raw data) and a Business Vault (transformed data built on top of the Raw Vault). Combined with comprehensive historization, this ensures full auditability of the data.
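Historization in Data Vault is typically insert-only: a new satellite version is written only when the descriptive attributes actually changed, which is usually detected by comparing a hash over those attributes (a "hashdiff"). The sketch below shows that standard pattern; it is an assumption for illustration, not Data Chef's documented mechanism.

```python
import hashlib
from typing import Optional

def hashdiff(attributes: dict) -> str:
    """Hash over all descriptive attributes, used to detect changes.

    Sorting the keys makes the hash independent of attribute order.
    """
    payload = "||".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.md5(payload.encode("utf-8")).hexdigest()

def needs_new_satellite_row(incoming: dict, latest: Optional[dict]) -> bool:
    """Insert-only historization: load a new satellite version only
    when the incoming attributes differ from the latest loaded ones."""
    if latest is None:
        return True  # first delivery for this business key
    return hashdiff(incoming) != hashdiff(latest)
```

Since old versions are never updated or deleted, every past state of the data remains queryable, which is what makes the full audit trail possible.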
Supplying data to Snowflake with Kafka, Snowpipe, and Data Chef
See for yourself what areto Data Chef can do. We would be happy to advise you on the optimal use of the solution.
Phone: +49 221 66 95 75-0