Setting up Power BI Version Control with Azure DevOps

This blog post shows a way to set up version control for Power BI semantic models (and reports) using the PBIP (Power BI Project) format, Azure DevOps (Azure Repos), and VS Code. This approach treats your semantic model as readable text files (JSON/TMDL), enabling proper Git diffing, branching, merging, and collaboration, something binary .pbix files don’t support well. Prerequisites Power BI…

Microsoft Fabric: Why Warehouse Beats Lakehouse by 233% in Speed and 278% in Capacity Savings

After my previous blog post on the different semantic model options, and while working with a Fabric customer at the same time, I got to thinking about which is faster and which consumes less capacity when ingesting data into Power BI: querying a Lakehouse via the SQL Endpoint, or querying the Warehouse directly. Below you will find the information which…

How Much of Your Fabric Capacity Is Really Being Eaten by Background Jobs? (The 24-Hour Smoothing Trick Explained)

I was recently working with a customer, and one of the questions they had was: “We are going to be running an ingestion process, and we want to know how much Fabric capacity it will consume.” The challenge with this question is that in Fabric, background capacity usage gets smoothed over 24 hours. For example, when looking at the Capacity…
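As a rough illustration of the 24-hour smoothing mentioned above, here is a minimal sketch of the arithmetic. The CU-second figure and the F64 SKU size (64 CUs) are illustrative assumptions for the example, not numbers taken from the post:

```python
def smoothed_cu_rate(job_cu_seconds: float, window_hours: float = 24.0) -> float:
    """Spread a background job's total CU-seconds evenly over the
    smoothing window, returning the resulting CU-per-second rate."""
    return job_cu_seconds / (window_hours * 3600)

def pct_of_capacity(cu_rate: float, sku_cus: float) -> float:
    """Express a CU-per-second rate as a percentage of a capacity SKU."""
    return 100.0 * cu_rate / sku_cus

# Hypothetical ingestion job that consumed 86,400 CU-seconds in total.
rate = smoothed_cu_rate(86_400)
print(rate)                           # 1.0 CU per second after smoothing
print(pct_of_capacity(rate, 64))      # 1.5625 % of an F64 capacity
```

This is why a heavy background job can look deceptively small on the utilization chart: its cost is amortized across the full 24-hour window rather than shown at the moment it ran.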

Backing Up Your Microsoft Fabric Workspace: A Notebook-Driven Approach to Disaster Recovery

In the high-stakes world of data architecture, where downtime can cascade into real business disruptions, I’ve learned that even the most robust platforms have their blind spots. Just last month, while collaborating with a client’s Architecture team on their disaster recovery strategy, we uncovered a subtle but critical gap in Microsoft Fabric: while OneLake thoughtfully mirrors data across multiple regions…