24 contributions to Learn Microsoft Fabric
Access keyvault from notebooks using Workspace Identity
Hi everyone! Is it possible to access a Key Vault secret using the workspace identity of the workspace the notebook is executed from? I mean, a workspace identity can be granted access to the AKV, which I know is possible, but will a notebook that runs in the same workspace inherit that access? Should I do that, develop using a service account that has individual access, or use a managed identity? I'm a bit lost on this one. What would be a good practice here? Thanks!
1
2
New comment 14d ago
2 likes • 26d
Currently it's not supported to use the workspace identity to get a secret from a key vault. Notebooks run under your identity when you run them manually; scheduled notebooks run under the identity of whoever created the schedule. You are also not able to access the workspace identity from within a notebook. This is the status for now, but it is a much-requested feature, so it will probably come one day. For now, use notebookutils to get the secret under your own identity, or use the Azure SDK / MSAL packages to access the key vault with a manually created service principal.
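A minimal sketch of the service-principal approach, assuming a service principal that already has "get" permission on the vault (all names and ids below are placeholders, and the `azure-identity` / `azure-keyvault-secrets` packages are assumed to be installed in the notebook environment):

```python
def vault_url(vault_name: str) -> str:
    """Build the Key Vault endpoint URL from the vault's short name."""
    return f"https://{vault_name}.vault.azure.net"

def get_secret_with_spn(vault_name: str, tenant_id: str,
                        client_id: str, client_secret: str,
                        secret_name: str) -> str:
    """Read one secret using a service principal (client-credentials flow)."""
    # Imported lazily so the sketch is readable without the packages installed.
    from azure.identity import ClientSecretCredential
    from azure.keyvault.secrets import SecretClient

    credential = ClientSecretCredential(tenant_id, client_id, client_secret)
    client = SecretClient(vault_url=vault_url(vault_name), credential=credential)
    return client.get_secret(secret_name).value

# Inside a Fabric notebook, the simpler route runs under your own identity:
# from notebookutils import credentials
# value = credentials.getSecret(vault_url("my-vault"), "my-secret-name")
```

Keep in mind the scheduling caveat above: whichever identity the notebook runs under is the one the vault's access policy must cover.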
October leaderboard winners! 🥇🥈🥉
A big congrats to the following top contributors in the community for the month of October: 🥇 @Krishan Patel 🥈 @Lukasz Obst 🥉 @Viv B Thank you all for your contributions to the community, you have all won a free pass to Fabric Dojo for the month of November 🥳 For everyone else in the community: if you want a one-month free pass to Fabric Dojo (worth $39), then it's simple, just finish in the top three in the leaderboard for the month of November (this month!). How do you climb the leaderboard? Read this
24
9
New comment Nov 5
2 likes • Nov 1
Thanks @Sundar Sriram Garimella and @Mubaraq Abdulmaleek
Estimating Capacity Size
Hey everyone, I am currently using a Fabric Trial License (FT1) and I was wondering what the best license to get is, given my current consumption. I have attached a screenshot of my Fabric Capacity Metrics, and I can see the highest total usage occurred on 1st October @ 10:31: I used 91.27 CU (Interactive CU: 9.97, Background CU: 81.3) in a 30-second period. This seems to indicate I need an F4 SKU, as 91.27/30 = 3.04...

However, I notice that my background consumption was highest a few minutes later, at 83.87 CU in a 30-second period, whereas my interactive CU was highest on 10th October, at 78.48 CU in a 30-second period. The sum of these two highs is 162.35 CU, which would indicate I need an F8 SKU, as 162.35/30 = 5.41... Which SKU do you think I need?

Furthermore, if I want to reduce my consumption, how would I go about doing this? For background operations, when I drill through at the highest consumption point I see multiple runs of my notebook for different periods. Why? For interactive operations, I see a query which ran 5 minutes before the drill-through time. Why? Any help would be much appreciated.
1
3
New comment Oct 16
1 like • Oct 14
I would argue that these point-in-time occurrences where you exceed the F4 CUs per 30-second window should be covered by smoothing and throttling: https://learn.microsoft.com/en-us/fabric/enterprise/throttling#future-smoothed-consumption You can accumulate up to 10 minutes of "debt" without having to worry too much, as long as it never exceeds this threshold.
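The back-of-the-envelope sizing in the question can be sketched as a small calculation (a sketch only: the F-SKU baseline CU/s list and the 30-second window follow the numbers discussed in the thread, not any official API):

```python
# Fabric F-SKU sizes expressed as baseline CU per second (F2..F512).
FABRIC_SKUS = [2, 4, 8, 16, 32, 64, 128, 256, 512]

def required_sku(cu_in_window: float, window_seconds: int = 30) -> int:
    """Smallest F-SKU whose baseline CU/s covers the observed window average."""
    cu_per_second = cu_in_window / window_seconds
    for sku in FABRIC_SKUS:
        if sku >= cu_per_second:
            return sku
    raise ValueError("consumption exceeds the largest listed SKU")

# 91.27 CU over 30 s  -> ~3.04 CU/s -> F4
# 162.35 CU over 30 s -> ~5.41 CU/s -> F8
```

Note that this is the raw window average; smoothing means short spikes above the baseline do not immediately cause throttling, which is why sizing for the occasional peak is usually unnecessary.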
Delta Parquet for dummies?
Hi all. We are currently using a C# library to convert JSON to Parquet. The JSON is highly complex, and to ensure that Power BI can easily access the data without having to do a million steps, we are expanding the records and tables to essentially flatten the resulting Parquet. As part of my initial exploration of Fabric, I understand that Delta Parquet is the way to go for these files, and that we probably need to use Spark to get there. However, I have been able to find very little information (maybe I don't know what to look for) about Delta Parquet, whether we need to use Spark, or how Power BI uses these files differently from straight-up Parquet. I'm grateful in advance for any pointers to information, videos, etc., that would be helpful for me to understand this important topic.
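As a sketch of the flattening step described above (the Spark/Delta write in the trailing comments is illustrative; `records` and the table name are hypothetical):

```python
def flatten(record: dict, prefix: str = "") -> dict:
    """Recursively flatten nested JSON objects into dot-separated columns."""
    out = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, name + "."))
        else:
            out[name] = value
    return out

# In a Fabric Spark notebook, the flattened rows could then be saved as a
# Delta table, which Power BI (Direct Lake / SQL endpoint) can read directly:
# df = spark.createDataFrame([flatten(r) for r in records])
# df.write.format("delta").mode("overwrite").saveAsTable("my_flat_table")
```

A Delta table is still Parquet files underneath, plus a transaction log; that log is what gives you updates, schema enforcement, and time travel on top of plain Parquet.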
1
8
New comment Oct 10
0 likes • Oct 10
@Sharad Verma @Gyan Penrose-Kafka Yes, but one also needs to care about the number of files one has. Especially with many updates, table maintenance needs to be done from time to time.
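That maintenance typically means compacting many small files and cleaning up old, unreferenced ones. A sketch of the statements, with a hypothetical table name (the 168-hour retention shown is Delta Lake's default minimum):

```python
def maintenance_sql(table_name: str, retain_hours: int = 168) -> list:
    """Build the Delta maintenance statements for a table."""
    return [
        f"OPTIMIZE {table_name}",                            # compact small files
        f"VACUUM {table_name} RETAIN {retain_hours} HOURS",  # drop old file versions
    ]

# In a Fabric Spark notebook you would then run:
# for stmt in maintenance_sql("sales_flat"):
#     spark.sql(stmt)
```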
0 likes • Oct 10
@Gyan Penrose-Kafka For time-series data I would also consider an Eventhouse with a KQL database, as it makes everything easier. Also consider what @Sharad Verma wrote.
Pipelines (different tenants)
Is it possible to use pipelines between workspaces of different tenants?
2
3
New comment Oct 11
1 like • Oct 9
Do you mean:
1. Call a pipeline A in tenant A from within a pipeline B in tenant B, or
2. Have a pipeline copy data from a place in tenant A to tenant B?
Lukasz Obst
Level 3 · 18 points to level up
@lukasz-obst-6089
Data Engineer

Active 1d ago
Joined Apr 16, 2024