How to Set Up Agentic Semantic Model Development for Power BI Using GitHub Copilot on Windows
I was inspired and in awe after watching the video that Rui Romano posted on LinkedIn, where he showed how agentic model development works (202603 AgenticE2E FabCon). It looked like all I had to do was have the right tools, edit a few files, and let the LLMs do the rest! In…
Setting up Power BI Version Control with Azure DevOps
In this blog post I show a way to set up version control for Power BI semantic models (and reports) using the PBIP (Power BI Project) format, Azure DevOps (Azure Repos), and VS Code. This approach treats your semantic model as readable text files (JSON/TMDL), enabling proper Git diffing, branching, merging, and collaboration, something binary .pbix files don’t support well. Prerequisites Power BI…
How to create a Power BI Semantic Model online (No need for PBI Desktop)
This feature has been in the service for quite a while, so I thought I would blog about how you can create a Power BI semantic model using only the web interface. This means you no longer need Power BI Desktop, or even a Windows PC, to get going. This is quite a significant change because at times…
Direct Lake or Import – which Semantic Model has the best performance?
In this blog post I am going to show you how I completed the automated testing and the results, comparing Direct Lake, Import and DirectQuery to see which one appears to perform best. As always, your results may vary or be different to my tests below. I would highly recommend that you use the…
Comparing Microsoft Direct Lake vs Import – Which Semantic Model performs best?
I was recently part of a discussion (one I have heard many times) about which semantic model to use in Microsoft Fabric. That conversation was the inspiration for this blog post, where I am going to compare Microsoft Direct Lake (DL) to an Import semantic model. The goal is to first explain how I set up and configured the…
Backing Up Your Microsoft Fabric Workspace: A Notebook-Driven Approach to Disaster Recovery
In the high-stakes world of data architecture, where downtime can cascade into real business disruptions, I’ve learned that even the most robust platforms have their blind spots. Just last month, while collaborating with a client’s Architecture team on their disaster recovery strategy, we uncovered a subtle but critical gap in Microsoft Fabric: while OneLake thoughtfully mirrors data across multiple regions…
How to Access a Former Employee’s Power BI “My Workspace” and Recover Reports
One of the common challenges I’ve seen in organizations is when a team member leaves and their Power BI reports are stored in their personal My Workspace. These reports often contain valuable datasets and dashboards that are still in use or need to be maintained. So, how do you access and recover these reports? In this blog post, I’ll walk…
Power BI MCP Tuner Server – does it reduce capacity requirements? – Part 4
This blog post is about using MCP to tune DAX, and then using the automated load testing to see whether it reduces capacity usage. I originally did not plan this post, but after viewing the details from Justin Martin – DAX Performance Tuner | LinkedIn, I had to give it a go. It was then easy for me to test if the…
Running and viewing Automated Load Testing Results – Part 3
This blog post is going to detail how I run the load test and then view the load testing results to determine how the capacity coped as I increased the number of users. I will also demonstrate how I automated the load testing without having to run it manually! Please find the previous posts in the series below, in case you missed them. This is the…
Automating Load Testing: Setting Up Your Fabric Lakehouse and Notebooks – Part 2
In today’s blog post I am going to show you how to set up the Lakehouse and Fabric notebooks so that they are ready to be used with the JSON file we created in the previous blog post. Series Details Part 1: Capturing Real Queries with Performance Analyzer Part 2 (This blog post): Setting Up…