
Memberships

Learn Microsoft Fabric

Public • 5.5k • Free

Fabric Dojo 织物

Private • 205 • $39/m

7 contributions to Learn Microsoft Fabric
DirectLake
Hi guys, I just wanted to ask about partitioning with Direct Lake. I have a very large Delta table, roughly 60 million rows, and every hour I append data to it using a notebook. I have partitioned the table by year and month (so roughly 84 partitions). My assumption is that partitioning makes the append easier: OPTIMIZE doesn't have to rewrite the full 60 million rows, only the appended files inside the latest year + month combination.

However, the Microsoft guide says I should avoid partitions if my goal is to use the Delta table as a source for a semantic model (which it is). Microsoft reference: https://learn.microsoft.com/en-us/fabric/get-started/direct-lake-understand-storage#table-partitioning

"Important: If the main purpose of a Delta table is to serve as a data source for semantic models (and, secondarily, other query workloads), it's usually better to avoid partitioning in preference for optimizing the load of columns into memory."

Questions:
1. Should I avoid using the partition?
2. What examples are there of why we need to partition?

Any help will be much appreciated. Thanks
1
3
New comment 6d ago
0 likes • 7d
Thanks @Mohammad Eljawad. It should make appending easier, as each new Parquet file lands in the latest partition (Year = 2024, Month = 11). I would still like to understand: does partitioning help when using Direct Lake in Power BI?
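To make the append-to-a-partition idea concrete, here is a minimal PySpark sketch of the kind of hourly load described above. It is my own illustration, not the poster's actual notebook: the table name (`sales`), landing path, and column names are assumptions.

```python
# Hourly append to a Delta table partitioned by Year and Month in a Fabric Lakehouse.
# Runs inside a Fabric notebook, where `spark` is already defined.
from pyspark.sql import functions as F

new_rows = (
    spark.read.format("parquet")
    .load("Files/landing/latest_hour/")          # hypothetical landing folder
    .withColumn("Year", F.year("EventDate"))
    .withColumn("Month", F.month("EventDate"))
)

# One-time creation (commented out): physically partition the table by Year and Month.
# new_rows.write.format("delta").partitionBy("Year", "Month").saveAsTable("sales")

# Hourly append: only files under the current Year/Month partition are added,
# so compaction can be scoped to that partition instead of touching all 60M rows.
new_rows.write.format("delta").mode("append").saveAsTable("sales")

# OPTIMIZE accepts a WHERE clause on partition columns to limit the rewrite.
spark.sql("OPTIMIZE sales WHERE Year = 2024 AND Month = 11")
```

Whether those partitions help or hurt Direct Lake itself is exactly the trade-off the Microsoft guidance quoted above is warning about; the sketch only shows why partitioning makes the write side cheaper.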
Studying for DP-600
Just wondering how long you guys spent studying for the DP-600? What resources did you use?
3
5
New comment 7d ago
2 likes • 9d
Hi @Emily Gurr, I would recommend working through the study guide: https://learn.microsoft.com/en-us/credentials/certifications/resources/study-guides/dp-600
When I sat the exam, the learning modules/labs on the Microsoft website were really useful: https://learn.microsoft.com/en-us/training/courses/dp-600t00
And finally, make sure you do the practice exam available from Microsoft: https://learn.microsoft.com/en-us/credentials/certifications/fabric-analytics-engineer-associate/practice/assessment?assessment-type=practice&assessmentId=90&practice-assessment-type=certification
Estimating Capacity Size
Hey everyone, I am currently on a Fabric trial license (FT1) and I was wondering which license is best given my current consumption. I have attached a screenshot of my Fabric Capacity Metrics. The highest total usage occurred on 1st October at 10:31: 91.27 CU (Interactive: 9.97, Background: 81.3) in a 30-second period. That seems to indicate I need an F4 SKU, as 91.27 / 30 ≈ 3.04 CU/s.

However, my background consumption peaked a few minutes later at 83.87 CU in a 30-second period, whereas my interactive consumption peaked on 10th October at 78.48 CU in a 30-second period. The sum of those two highs is 162.35 CU, which would indicate an F8 SKU, as 162.35 / 30 ≈ 5.41 CU/s.

Which SKU do you think I need? And if I want to reduce my consumption, how would I go about it? For background operations, when I drill through at the highest consumption point I see multiple runs of my notebook for different periods. Why? For interactive operations, I see a query that ran 5 minutes before the drill-through time. Why?

Any help would be much appreciated.
1
3
New comment Oct 16
Estimating Capacity Size
1 like • Oct 16
Thanks @Eivind Haugen. I also figured out that smoothing causes different time periods to appear on the time point detail page. Specifically:
- For interactive jobs run by users: capacity consumption is typically smoothed over a minimum of 5 minutes, or longer, to reduce short-term temporal spikes.
- For scheduled or background jobs: capacity consumption is spread over 24 hours, eliminating concerns about job scheduling or contention.
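For anyone following the SKU arithmetic in the post above, here is a small sketch of the calculation. It is my own illustration rather than an official calculator; the idea is simply that the metrics app reports CU consumed per 30-second time point, while an F SKU is rated in CU per second (F4 = 4 CU/s, and so on).

```python
# Map a peak 30-second CU reading from the Fabric Capacity Metrics app
# to the smallest F SKU whose per-second rating covers it.
F_SKUS = [2, 4, 8, 16, 32, 64]  # CU/s ratings for F2, F4, F8, ... (larger SKUs exist)

def smallest_sku(cu_in_30s: float) -> str:
    cu_per_second = cu_in_30s / 30
    for cu in F_SKUS:
        if cu >= cu_per_second:
            return f"F{cu}"
    return "larger than F64"

# Peak observed in a single 30-second window (interactive + background together):
print(smallest_sku(91.27))           # ~3.04 CU/s -> F4
# Worst case if the interactive and background peaks ever coincided:
print(smallest_sku(78.48 + 83.87))   # ~5.41 CU/s -> F8
```

The choice between the two answers comes down to whether you size for the peak actually observed (F4) or for the pessimistic case where both peaks land in the same window (F8), bearing in mind that smoothing and bursting already absorb short spikes.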
Request for Feedback: Resume with 1.5 Years of Data Consulting Experience
I'd appreciate your feedback on my resume, which reflects 1.5 years of experience as a Data Consultant. I was accepted by a major company, but the CEO, who didn't interview me, mentioned wanting someone with more experience. I also had an interview with another company, but they offered a lowball deal of about $5 per hour, with 40% taxes on top of that. I've been applying to many positions, but I'm still not getting interviews. I'm using the following as part of my cover letter:

Dear Hiring Manager xxxx,

I am writing to express my interest in the Data Analyst position at xxxx. With experience in data consulting and a history of improving analytics solutions, I believe I can contribute well to your team.

In my current role as a Data Consultant, I work closely with clients, gaining an understanding of their unique data requirements and providing continuous support throughout the project lifecycle. My solid experience in reverse engineering legacy Power BI reports, optimizing data models, and generating enhanced reports to meet evolving business needs has refined my skills in Power BI, SQL, and reverse engineering. Additionally, my grasp of DAX has allowed me to introduce new KPIs and improve analytics, thereby enhancing decision-making processes for clients. Coupled with my Microsoft DP-600 and PL-300 certifications, these skills equip me with the expertise needed to excel in this role.

Moreover, I am committed to continuous learning and staying up to date on best practices, and I am currently exploring Microsoft Azure (DP-203 certification). I am eager to leverage my capabilities to drive impactful insights and solutions for clients.

Thank you for considering my application. I am excited about the opportunity to further discuss how my experience and skills align with the needs of your team. Please find my resume attached for your review.

Sincerely,
xxx
Data Consultant
2
7
New comment Oct 10
Request for Feedback: Resume with 1.5 Years of Data Consulting Experience
0 likes โ€ข Oct 10
@Robert Lavigne For Fabric, what would you suggest our GitHub accounts should contain? Right now I have simply been using GitHub to connect to my Fabric workspace: https://github.com/Krishan36
Built My Very First Fabric Solution
Hi Community, I had previously developed a Power BI report on the Premier League: Premier League Report. The report ingests data from two separate web sources:
- 24/25 Match Results
- 24/25 Fixtures
I passed DP-600 back in June, but it was a presentation delivered by Will to the London Fabric User Group that inspired me to start building in Fabric. In the attached slides I have tried to outline how using Microsoft Fabric has improved my analytical solution:
- Increased resilience & robustness
- Built-in validation & quality checks
- Leveraging Git integration for enhanced collaboration
- Enhanced analytical capabilities
Please let me know your thoughts and any suggestions or improvements you might have.
23
8
New comment Oct 14
1 like โ€ข Oct 8
Thanks Will. 🙂 My biggest learning was leveraging notebooks to return exit values, which I can then use to orchestrate my pipeline (a sketch of that pattern is below). Future developments:
- I still think the collaboration part could be further enhanced using deployment pipelines and the Fabric REST APIs, but it got very complex for me. (https://learn.microsoft.com/en-us/fabric/cicd/manage-deployment)
- I could also test whether every team's image URL is online by using pipeline controls to iterate through the links.
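As a reference for the exit-value pattern mentioned above, here is a minimal sketch of a Fabric notebook returning a small JSON payload to the pipeline that ran it. It is not the actual notebook from the solution: the table name, columns, and activity name are assumptions for illustration.

```python
# Fabric notebook: return a validation result to the calling pipeline as an exit value.
# The pipeline's notebook activity can then branch (If/Switch) on this output.
import json
from notebookutils import mssparkutils

# Hypothetical validation: count the rows loaded into the fixtures table.
row_count = spark.sql("SELECT COUNT(*) AS c FROM fixtures").collect()[0]["c"]

result = {
    "status": "success" if row_count > 0 else "empty",
    "rows_loaded": row_count,
}

# The pipeline reads this string from the notebook activity's output,
# e.g. @activity('Run fixtures notebook').output.result.exitValue
mssparkutils.notebook.exit(json.dumps(result))
```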
Krishan Patel
3
32 points to level up
@krishan-patel-9709
Senior BI Analyst working at the University of London

Active 49m ago
Joined Oct 2, 2024