
Memberships

Learn Microsoft Fabric

Public • 4.2k members • Free

23 contributions to Learn Microsoft Fabric
Estimating Fabric cost or sizing
Any pointers on how to estimate Fabric CUs, licensing, and SKU sizing for an organization that is considering a move to Fabric, please?
1 like • 2 comments • New comment 15h ago
1 like • 19h
Hello Amit! The only cost estimator I know of is this one: Pricing Calculator | Microsoft Azure. For any other aspect of a transition to Microsoft Fabric, there are so many variables to take into account that it is hard to define common guidelines 🙂 Let's wait for other comments and see if someone has some helpful info!
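For a first back-of-the-envelope number, a tiny script can help once you have picked candidate SKUs. A minimal sketch in Python; the hourly rates below are placeholders, NOT official pricing, so always confirm them in the Pricing Calculator:

```python
# Rough Fabric capacity cost estimate. The pay-as-you-go $/hour rates
# below are illustrative placeholders -- check the Azure Pricing
# Calculator for real, region-specific prices.
HOURLY_RATE_PER_SKU = {
    "F2": 0.36,    # placeholder rate
    "F64": 11.52,  # placeholder rate
}

def monthly_cost(sku: str, hours_per_day: float, days: int = 30) -> float:
    """Cost of a capacity that runs hours_per_day and is paused otherwise."""
    return HOURLY_RATE_PER_SKU[sku] * hours_per_day * days

print(f"F64 at 10h/day: ${monthly_cost('F64', 10):,.2f}/month")
```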
PBI expertise for DP-600
DP-600: why is the emphasis of the exam so much on PBI? Someone like me, who is not from a PBI background, finds it difficult to answer specific PBI DAX expression and formula questions; for those you need experience working on PBI reports. How do I upskill? Any suggestions, please?
1 like • 4 comments • New comment 3d ago
I think the new DP-600 exam was meant to replace the old (and now retired) DP-500 exam (Azure Enterprise Data Analyst Associate certification). As far as I remember, the DP-500 was also centered on Power BI and some advanced topics like Tabular Editor, the XMLA endpoint, semantic modeling, etc., along with questions on how to design and implement enterprise-scale analytics solutions using Microsoft Azure (in particular, Microsoft Purview and Azure Synapse Analytics) and Power BI (Desktop and Service). So I think that's why Microsoft has included a lot of Power BI questions in the DP-600 exam 🙂
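As for upskilling on DAX without building full reports: Fabric notebooks can execute DAX against an existing semantic model through the semantic-link (sempy) library, which makes it easy to practice expressions interactively. A minimal sketch, assuming a Fabric notebook; the "Sales Model" name and the [Sales Amount] measure are hypothetical placeholders:

```python
import sempy.fabric as fabric

# Run a DAX query against a semantic model from a Fabric notebook.
# "Sales Model" and [Sales Amount] are placeholders -- swap in a
# model and measure from your own workspace.
df = fabric.evaluate_dax(
    "Sales Model",
    """
    EVALUATE
    SUMMARIZECOLUMNS('Date'[Year], "Total Sales", [Sales Amount])
    """,
)
print(df)
```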
Do you use "Semantic Model" as data mart?
I have a gold-level data warehouse, and the Power BI users have access to the data warehouse endpoint to create their own semantic models for reporting. I regard the semantic model as a data mart, but someone insists that I should create a data mart per subject area. What's your opinion?
4 likes • 3 comments • New comment 9d ago
3 likes • 9d
Hi Jerry, like you, I consider semantic models to be data marts in some ways, because I can create different semantic models for specific areas using tables residing in the same database. In the same way, each model can be customized based on the analytics needs of each area. In my opinion, data marts in Power BI Service can be considered a preliminary, small version of Fabric Data Warehouses 😄
3 likes • 9d
@Jerry Lee that is exactly what my team and I suggested (and did) for a project before Fabric: developers in each department/area created their own semantic models to support their analysis 🙂
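As a side note, if you want to take stock of how many semantic models (i.e., "data marts") already exist per subject area, the semantic-link library can list them from a Fabric notebook. A minimal sketch, assuming the sempy package that Fabric notebooks provide:

```python
import sempy.fabric as fabric

# List the semantic models visible in the current workspace; each row
# is one model, so you can check coverage per subject area.
datasets = fabric.list_datasets()
print(datasets.head())
```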
Future of Power Query vs Python for ETL in Fabric
Power Query (Dataflows Gen2) has a user-friendly GUI to generate ETL code, and hand-coding is occasionally required to handle tricky cases. In contrast, implementing ETL in Python mostly involves hand-writing code. What is the future direction in MS Fabric in terms of Power Query vs Python? Is the Power Query engine being improved so that Python will not be required? Or is Python going to be the de facto ETL language in Fabric? Instead of investing time and effort to master both of these languages, is it worthwhile to focus on one and master it?
5 likes • 7 comments • New comment 6d ago
5 likes • 13d
Hello Surm! In my opinion, having a general understanding of both Power Query and Python is the best option, but I understand your point 🙂 Based on my experience, Python is far more powerful than Power Query when it comes to heavy and complex data manipulations and transformations. In this context, Fabric notebooks have the Data Wrangler, which can help less experienced users apply transformations to data without writing any Python code. Of course, the Data Wrangler is NOT like Power Query, and right now it can be used only for data preparation (i.e., cleaning, manipulation, ...). I hope Microsoft will improve this into a complete Python experience for handling complex ETL processes because, in my opinion (and based on my personal experience), Power Query is OK, but it is not suitable for complex operations (to me, it cannot be considered a 100% real ETL tool) 🙂
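To give an idea, the Data Wrangler generates plain pandas code behind the scenes. A minimal sketch of the kind of cleaning step it produces; the column names and values here are made-up examples:

```python
import pandas as pd

# Toy data standing in for a messy source table (hypothetical values).
df = pd.DataFrame({
    "customer": [" Alice ", "Bob", None],
    "amount": ["10.5", "20", "not_a_number"],
})

# Typical Data Wrangler-style cleaning steps, expressed as pandas:
df["customer"] = df["customer"].str.strip()                  # trim whitespace
df = df.dropna(subset=["customer"])                          # drop missing keys
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")  # safe numeric cast

print(df)
```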
Seeking Best Practices for Migrating to Microsoft Fabric
Hi all, I'm still new to this community and Microsoft Fabric, and I want to thank you all for letting me be a part of it. 🙏 I am currently considering migrating from a traditional Microsoft Power BI solution to the Microsoft Fabric platform, and I have a few questions that I hope you can answer. I've already watched a ton of videos on the topic; they inspire a lot of great thoughts but don't quite give me a "solution" or a possible best practice to follow.

We are an organization with many different systems and companies, which means we have many different data ingestions. Our data sources are primarily on-prem SQL Servers, Microsoft 365 CRM, API calls to the HR system, and a few Excel sheets on SharePoint.

I'm particularly thinking about best practices for the architecture in Fabric. What should we use for data ingestion, transformation, and storage: Dataflows, notebooks, or pipelines, or maybe a combination? And what about the medallion architecture: is that for data ingestion to files in a lakehouse, or directly as Delta tables in a lakehouse or warehouse? And how do you then proceed from there to the other steps in the architecture?

I hope you can get back to me on my considerations so I can create a structured roadmap from the start and avoid redoing the data architecture later on. Thanks in advance.
0 likes • 2 comments • New comment 14d ago
1 like • 14d
Hello Tobias, based on the Microsoft documentation and suggestions, and on my personal experience:

- Data ingestion: the tool to use depends on what you want to achieve. Pipelines are great if you want a simple and quick copy-data activity (source --> Fabric). However, if data transformations are involved, Dataflows and Notebooks are the best choices. Note that, based on my experience, Spark notebooks are way more powerful than Power Query when it comes to complex transformations, table joins, etc.
- Medallion architecture: the answer is "it depends" 😄 It depends on the data journey you want to set up and how many layers you want to build to prepare data for analysis and reporting. In other words, the "classic" three-layer medallion architecture (as also suggested by Microsoft) can be further customized depending on several factors (e.g., what are the data validation rules? Do I need a different Lakehouse for each department? Do I have different rules for each department's data?). As for the tools to use between each layer, I would again say "it depends" (e.g., they can be notebooks, or stored procedures if you go with Warehouses); see the sketch below.

In general, there are no straight answers when it comes to implementing a Fabric solution, but you can narrow down the options by collecting information on what you need to do with your data 🙂 What do you think?
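To make the "tools between each layer" point concrete, here is a minimal bronze-to-silver sketch using a notebook. It assumes a Fabric notebook (with its built-in `spark` session) and a hypothetical bronze table called `bronze_orders`:

```python
from pyspark.sql import functions as F

# Read the raw (bronze) table from the Lakehouse. `spark` is the
# session Fabric notebooks provide out of the box; "bronze_orders"
# is a hypothetical table name.
bronze = spark.read.table("bronze_orders")

silver = (
    bronze.filter(F.col("order_id").isNotNull())     # a basic validation rule
    .dropDuplicates(["order_id"])                    # de-duplicate on the key
    .withColumn("ingested_at", F.current_timestamp())
)

# Persist the cleaned data as a Delta table in the silver layer.
silver.write.mode("overwrite").format("delta").saveAsTable("silver_orders")
```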
Samuele Campitiello
Level 3 • 29 points to level up
@samuele-campitiello-2521
Power BI Developer / Data Visualization Engineer - From Stio with fury

Active 7h ago
Joined Feb 19, 2024
Turin (Italy)