8 contributions to Learn Microsoft Fabric
Official DP-700 course released on MSLearn
Microsoft has released the official self-paced course for DP-700. This course will eventually become the recommended self-paced material on the Fabric Data Engineer Associate certification page. So, if you plan on taking the exam once it goes GA, you might as well take advantage of this course and start preparing now.

Check it out here 👉🏽 Course DP-700T00-A: Microsoft Fabric Data Engineer

Also, find some helpful DP-700 discussions below:
- DP-700 beta exam experiences and tips (by Ali Stoops)
- DP 700 - Study Guide & Learn Collection

When are you planning to take the DP-700 exam? Let us know how you’re preparing below 👇🏽
1 like • 3d
I wonder if Microsoft will offer a free voucher to take this certification?
DirectLake
Hi guys, just wanted to ask about partitioning with Direct Lake. I already have a very large Delta table, roughly 60 million rows. Every hour I append data to this table using a notebook. I have partitioned the table by year and month (so roughly 84 partitions). I assume the benefit of partitioning is that the append is easier and the OPTIMIZE function doesn't have to compact the whole 60 million rows, only the appended files inside the latest year + month combination.

However, the Microsoft guide tells me I should avoid using partitions if my goal is to use the Delta table for a semantic model (which it is): https://learn.microsoft.com/en-us/fabric/get-started/direct-lake-understand-storage#table-partitioning

"Important: If the main purpose of a Delta table is to serve as a data source for semantic models (and secondarily, other query workloads), it's usually better to avoid partitioning in preference for optimizing the load of columns into memory."

Questions:
1. Should I avoid using the partition?
2. What examples are there of why we need to partition?

Any help will be much appreciated. Thanks
0 likes • Nov 15
Thanks @Mohammad Eljawad. It should make appending easier, as the new parquet files are added to the latest partition (Year = 2024 and Month = 11). I'd still like to understand whether partitioning helps when using Direct Lake in Power BI?
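For anyone reading this thread later, here is a minimal sketch of the hourly partitioned-append pattern described above, assuming a Fabric PySpark notebook; the table name (sales), date column (OrderDate), and source paths are hypothetical placeholders, not the poster's actual setup:

```python
# Minimal sketch of the hourly append pattern discussed above, assuming a
# Fabric PySpark notebook. Table name, column names, and the source paths
# are hypothetical placeholders.
from pyspark.sql import functions as F

# One-time setup: create the Delta table partitioned by Year and Month
# (roughly 84 partitions for 7 years of data).
# history = spark.read.parquet("Files/history/")
# (history
#     .withColumn("Year", F.year("OrderDate"))
#     .withColumn("Month", F.month("OrderDate"))
#     .write.format("delta")
#     .partitionBy("Year", "Month")
#     .saveAsTable("sales"))

# Hourly job: read the new rows and append them. Because the table is
# partitioned, the new parquet files land only in the current Year/Month
# folder; older partitions are untouched.
incoming = (spark.read.parquet("Files/landing/latest_hour/")
            .withColumn("Year", F.year("OrderDate"))
            .withColumn("Month", F.month("OrderDate")))

incoming.write.format("delta").mode("append").saveAsTable("sales")

# Compaction can then be scoped to the partition that received new files,
# instead of rewriting the whole 60M-row table.
spark.sql("OPTIMIZE sales WHERE Year = 2024 AND Month = 11")
```

Per the Microsoft guidance quoted in the post, the read side (Direct Lake) generally doesn't benefit from partitioning, so this sketch only illustrates the write/maintenance side of the question.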
Studying for DP-600
Just wondering how long you guys spent studying for the DP-600? What resources did you use?
2 likes • Nov 13
Hi @Emily Gurr, I would recommend working through the study guide: https://learn.microsoft.com/en-us/credentials/certifications/resources/study-guides/dp-600
When I sat the exam, the learning modules/labs on the Microsoft website were really useful: https://learn.microsoft.com/en-us/training/courses/dp-600t00
And finally, make sure you do the practice exam available from Microsoft: https://learn.microsoft.com/en-us/credentials/certifications/fabric-analytics-engineer-associate/practice/assessment?assessment-type=practice&assessmentId=90&practice-assessment-type=certification
Estimating Capacity Size
Hey everyone, I am currently using a Fabric Trial license (FT1) and I was wondering what the best license to get is, given my current consumption. I have attached a screenshot of my Fabric Capacity Metrics, and I can see the highest total usage occurred on 1st October at 10:31: I used 91.27 CU (Interactive CU: 9.97, Background CU: 81.3) in a 30-second period. This seems to indicate I need an F4 SKU, as 91.27 / 30 ≈ 3.04 CU/s.

However, I notice that my background consumption was highest a few minutes later, at 83.87 CU in a 30-second period, whereas my interactive CU was highest on 10th October, at 78.48 CU in a 30-second period. The sum of these two highs is 162.35 CU, which would indicate I need an F8 SKU, as 162.35 / 30 ≈ 5.41 CU/s.

Which SKU do you think I need? Furthermore, if I want to reduce my consumption, how would I go about doing this?
- For background operations, when I drill through at the highest consumption point I see multiple runs of my notebook for different periods. Why?
- For interactive operations, I see a query which ran 5 minutes before the drill-through time. Why?

Any help would be much appreciated.
1 like • Oct 16
Thanks @Eivind Haugen. I also figured out that smoothing causes different time periods to appear in the time point detail page. Specifically:
- For interactive jobs run by users: capacity consumption is typically smoothed over a minimum of 5 minutes, or longer, to reduce short-term temporal spikes.
- For scheduled or background jobs: capacity consumption is spread over 24 hours, eliminating the concern for job scheduling or contention.
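A small sketch of the sizing arithmetic discussed in this thread, assuming the 30-second peaks quoted in the post and the standard Fabric F-SKU sizes (F2 = 2 CU/s, doubling up to F2048); treat it as a rough estimate to sanity-check against the Capacity Metrics app, not a definitive sizing method:

```python
# Rough SKU estimate from peak Capacity Unit (CU) consumption, using the
# figures quoted in the post above. F-SKU sizes double from F2 upward.
F_SKUS = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048]  # CU/s per SKU

def smallest_sku(cu_total: float, window_seconds: int = 30) -> int:
    """Return the smallest F-SKU whose CU/s covers the observed peak."""
    cu_per_second = cu_total / window_seconds
    return next(size for size in F_SKUS if size >= cu_per_second)

# Peak combined usage in a single 30-second window: 91.27 CU -> ~3.04 CU/s.
print(smallest_sku(91.27))           # 4  (an F4 covers this peak)

# Conservative case: summing the separate interactive and background peaks,
# 78.48 + 83.87 = 162.35 CU -> ~5.41 CU/s.
print(smallest_sku(78.48 + 83.87))   # 8  (an F8 would be needed)
```

The 162.35 CU figure is the conservative case, since the interactive and background peaks occurred in different 30-second windows rather than at the same time point.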
Request for Feedback: Resume with 1.5 Years of Data Consulting Experience
I'd appreciate your feedback on my resume, which reflects 1.5 years of experience as a Data Consultant. I was accepted by a major company, but the CEO, who didn't interview me, mentioned wanting someone with more experience. I also had an interview with another company, but they offered a lowball deal of about $5 per hour, with 40% taxes on top of that. I've been applying to many positions, but I'm still not getting interviews. Below is the cover letter I've been using:

Dear Hiring Manager xxxx,

I am writing to express my interest in the Data Analyst position at xxxx. With experience in data consulting and a history of improving analytics solutions, I believe I can contribute well to your team.

In my current role as a Data Consultant, I have the opportunity to work closely with clients, gaining an understanding of their unique data requirements and providing continuous support throughout the project lifecycle. My solid experience in reverse engineering legacy Power BI reports, optimizing data models, and generating enhanced reports to meet evolving business needs has refined my skills in Power BI, SQL, and reverse engineering. Additionally, my grasp of DAX has allowed me to introduce new KPIs and improve analytics, thereby enhancing decision-making processes for clients. Coupled with my Microsoft DP-600 and PL-300 certifications, these skills equip me with the necessary expertise to excel in this role.

Moreover, I am committed to continuous learning and staying updated on best practices, and I am currently exploring Microsoft Azure (DP-203 certification). I am eager to leverage my capabilities to drive impactful insights and solutions for clients.

Thank you for considering my application. I am excited about the opportunity to further discuss how my experience and skills align with the needs of your team. Please find my resume attached for your review.

Sincerely,
xxx
Data Consultant
0 likes • Oct 10
@Robert Lavigne For Fabric, what would you suggest our GitHub accounts should contain? As of right now, I have been using GitHub simply to connect to my Fabric workspace: https://github.com/Krishan36
Krishan Patel
@krishan-patel-9709
Senior BI Analyst working at the University of London

Joined Oct 2, 2024