Activity
[Contribution activity heatmap, Dec–Nov]

Memberships

Data Innovators Exchange

Public • 327 members • Free

6 contributions to Data Innovators Exchange
Seeking tips on Data Mesh & Fabric Organization
"I'm currently diving deeper into the concepts of Data Mesh and Data Fabric, especially from an organizational and strategic perspective. Does anyone have good resources or reading recommendations on this topic? I'm particularly interested in the organizational aspects but also open to resources that focus more on the architectural approaches. Thanks in advance!
6 likes • 3 comments • New comment 27d ago
4 likes • 27d
There is an open-source project on data contracts that covers not only metadata specifications but also transformations, with examples for a testable interface: https://datacontract.com/
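As a rough idea of what such a contract looks like, here is a minimal sketch from memory of the Data Contract Specification; the id, model, and field names are made up, and the authoritative schema lives on datacontract.com:

```yaml
# Minimal, hypothetical data contract sketch (see datacontract.com
# for the authoritative Data Contract Specification schema).
dataContractSpecification: 1.1.0
id: urn:datacontract:sales:orders        # hypothetical identifier
info:
  title: Orders
  version: 1.0.0
  owner: sales-team                      # assumed owning domain team
models:
  orders:
    type: table
    fields:
      order_id:
        type: string
        required: true                   # business key of the order
      order_total:
        type: decimal
        description: Gross order amount
```

The companion CLI can lint such a file and test it against the actual data source, which is what makes the interface testable rather than just documented.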
How to connect multiple dbt core projects
dbt Cloud has "dbt mesh" to connect your dbt projects. dbt Core has no native functionality for that, but there is a Python package called "dbt-loom", developed by Nicholas Yager, that connects your dbt projects (minimal config sketched below). We have had it in place for almost a year and it works really well. This functionality is crucial when implementing a Data Mesh paradigm. Have a nice weekend! https://github.com/nicholasyager/dbt-loom
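Roughly, the wiring looks like this; the project name and path here are made up, and the dbt-loom README has the authoritative config schema. The downstream project gets a dbt_loom.config.yml pointing at the upstream project's compiled manifest:

```yaml
# dbt_loom.config.yml in the downstream project (hypothetical paths).
manifests:
  - name: upstream_project          # must match the upstream dbt project name
    type: file                      # remote manifest types (e.g. s3, gcs) are
    config:                         # also listed in the README
      path: ../upstream_project/target/manifest.json
```

Downstream models can then reference upstream public models with the two-argument ref, e.g. {{ ref('upstream_project', 'some_model') }}, and dbt-loom injects the upstream nodes at parse time.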
5
2
New comment Oct 21
0 likes • Oct 19
I kind of get the criticism that data mesh levels at the data warehouse: there are just too many undocumented interfaces within a data warehouse. The data mesh folks come from domain-driven design and microservices, and there undocumented interfaces don't make any sense. But if you have a monolith like SAP, you need to have all those interfaces. So I like this connection. Thank you. Now comes my question: how do you document the new interfaces? Does dbt-loom offer any support for this?
Data Vault Modeling: Where are you doing it?
At some point in each Data Vault project, the source data is analyzed and a Data Vault 2.0 model is drafted. I have seen a lot of different places where teams draft their Data Vault model: Excel sheets, drawing tools like Draw.io or Miro.com, dedicated modeling tools, directly inside their automation tools, or with pen and paper. My past experience highlighted the three most important things to consider when deciding where to model:
- Must fit into the tool stack: if your automation tool offers Data Vault modeling, do it in there
- Ease of use: simplicity is important to streamline initial modeling, changes, and additions
- Persistent and centralized: all components of the whole model should be stored in a central place, to help other modelers identify what has been done already
Where are you currently designing the Data Vault model? What are your experiences? Let me know!
4 likes • 15 comments • New comment Aug 28
2 likes • Aug 15
@Tim Kirschke that is more complicated. I usually start with a business object model, like the one in Mermaid, and then map the sources to it, very much like John Giles suggested: talk to the business, look at the sources. I gave a presentation with a more detailed view of this kind of work (early vs. late integration), detailing what John did and standardizing it further. You could also have seen that kind of generation in the Innovator solution we built ages ago. The rest is more or less generated...
1 like • Aug 28
@Jaroslaw Syrokosz ChatGPT can do it, too. I once gave it the whole description of Willibald and asked for a data model in Mermaid. It worked, although it missed a lot of the model...
Architecture: Local optima versus maintainability
Hello, I am new to this forum. I love data modelling, Data Vault, and architecture, so I thought I'd make my introduction with some thoughts on architecture (just because someone compared data modelling to pineapple on pizza: not for everyone. And yes, I disagree).

A data warehouse has an architecture, and ideally exactly one location for each activity. The activity is carried out for all sources at that point. This reduces the maintenance effort, because you don't have to know the data intimately or remember all the details; it is sufficient to visualise the activity in order to determine where the necessary corrections need to be made.

Data Vault also comes with an architecture. A good architecture includes configuration options for the interfaces, the Business Vault, the Data Mart and, if you are particularly adventurous, special Data Vault patterns.

And no matter how good the architecture is, there are often 'local optima' during development. If we only deviate from the warehouse guidelines for this one small activity, we save ourselves a lot of work. And the exception for this source is small. Actually non-existent. Everyone can remember that. Two years later, there are exceptions for every data source and you have to know the data in detail again to be able to make changes successfully. The maintenance effort increases. Five years in at the latest, only a few people will still be able to make changes, and everyone will be talking about a redesign. All the small gains in speed are forgotten.

This does not mean that the architecture must not change. In fact, it should change. The only question is: will everyone benefit, or just this one data source? If everyone benefits, the change should be made for everyone; only then is it a universal change. If you only change things for specific situations, you end up with many individually customised solutions, special solutions that need to be known before the existing implementation can be adapted.

Maintaining an architecture requires a lot of effort. It is the only way to ensure that processes and the allocation of tasks to individual areas stay clear. The effort is worth it, as it brings disproportionately high gains in maintainability and in the onboarding of new or temporary employees, and it enables quick or temporary moves between teams in the DWH.
8 likes • 6 comments • New comment Aug 21
2 likes • Aug 14
The less a colleague has to think before they can start changing stuff, the lower the maintenance cost. You always have to think a lot before taking action in our line of work, but that should be because of the content, not the environment. And, again, change things if they are not good, but make the change work for everyone. That will help productivity as well.
1 like • Aug 21
@Volker Nürnberg that is right. However, in the case of business rules, automation is still lacking. And especially with the differentiation of hard vs. soft rules, aka early vs. late integration, there is a lot that can go wrong...
Comparing 3 Types of Data Modelling
What are the pros and cons in your experience?
📊 **Normalized Data Modeling**
⭐ **Star Schema**
🔗 **Data Vault**
Michael Kahn breaks down the differences between these 3 approaches. His channel has racked up over 4 million views, and he is in the Top 5 online educators for data pipeline engineering, helping small teams deliver better analytics, faster. 👉 [Watch and Comment Now!] #DataModeling #DataEngineering #BigData #Analytics #TechTalk
7 likes • 7 comments • New comment Aug 9
5 likes • Aug 6
The versus part is strange, but it seems this video is one in a longer series where he also compares the architectures of the approaches, and then it's 3NF warehouse vs. star schema vs. Data Vault. This video is very high-level. Funny thing: when I build a data warehouse I use all 3 modelling techniques. A 3NF model of the core warehouse for the general picture and for talking to business, Data Vault for the core warehouse itself, and a star schema for the data mart...
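To make that concrete, here is a hypothetical sketch of one and the same customer entity in all three styles; the table and column names are invented for illustration, not taken from the video:

```sql
-- One "customer" entity sketched in all three styles (hypothetical names).

-- 1) 3NF core model: one table per entity, business key kept unique.
CREATE TABLE customer (
    customer_id  INTEGER PRIMARY KEY,
    customer_no  VARCHAR(20) NOT NULL UNIQUE,   -- business key
    name         VARCHAR(100),
    email        VARCHAR(100)
);

-- 2) Data Vault: the business key lives in a hub, context in a satellite.
CREATE TABLE hub_customer (
    hub_customer_hk  CHAR(32) PRIMARY KEY,      -- hash of the business key
    customer_no      VARCHAR(20) NOT NULL UNIQUE,
    load_date        TIMESTAMP NOT NULL,
    record_source    VARCHAR(50) NOT NULL
);

CREATE TABLE sat_customer_details (
    hub_customer_hk  CHAR(32) NOT NULL REFERENCES hub_customer,
    load_date        TIMESTAMP NOT NULL,
    name             VARCHAR(100),
    email            VARCHAR(100),
    PRIMARY KEY (hub_customer_hk, load_date)    -- full history per key
);

-- 3) Star schema data mart: denormalized dimension for reporting (SCD2).
CREATE TABLE dim_customer (
    customer_sk  INTEGER PRIMARY KEY,           -- surrogate key
    customer_no  VARCHAR(20) NOT NULL,
    name         VARCHAR(100),
    email        VARCHAR(100),
    valid_from   TIMESTAMP NOT NULL,
    valid_to     TIMESTAMP                      -- NULL while current
);
```

The 3NF model carries the shared business meaning, the vault absorbs change and keeps full history, and the dimension makes the data fast and simple to query.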
Michael Müller
@michael-muller-2499
Data Vault Professional with both certificates; Board of Directors, Deutschsprachige Data Vault User Group (DDVUG e.V.)

Active 2d ago
Joined Jul 9, 2024