Load testing is essential when working with Microsoft Fabric capacity. With limited resources, deploying a Power BI report without testing can lead to performance issues, downtime, and frustrated users. In this series, I’ll show you how to automate load testing using Fabric Notebooks, making the process faster, easier, and repeatable.

Inspired by Phil Seamark’s approach, this method eliminates manual complexity and allows you to capture real user queries for accurate testing.

Series Details

Part 1: Capturing Real Queries with Performance Analyzer

Part 2: Setting Up Your Fabric Lakehouse and Notebooks

Part 3: Running and Analyzing Load Tests, and Automating the Process

Why complete load testing?

Fabric capacity provides a fixed amount of resources. If your reports exceed those limits, you risk:

  • Slowing down or crashing your entire capacity.
  • Temporary capacity disablement, impacting all users.

By load testing, you can:

  • Validate performance before production deployment.
  • Plan for scaling or optimization.
  • Re-test after changes to measure improvements.

Capturing Real Queries with Performance Analyzer

The first step is to capture real queries using Performance Analyzer in Power BI Desktop. I will explain below how to do this. It is certainly a lot easier than the old approach, where I had to manually find the relevant columns, tables, and so on.

The goal of capturing the queries is to replicate how users will interact with the Power BI report. This means the report should be fully designed and ready for end users.

Before starting, make sure you have your semantic model created along with its associated report.

NOTE: In this example, my semantic model is using Direct Lake mode.

Here is an example of my Power BI report which I am going to use to capture the queries.

To capture the queries, I will complete the steps below.

  • In the ribbon, click on Optimize and then click on Performance analyzer.
  • In the Performance Analyzer flyout, click on Start recording.
  • I can then see that Performance Analyzer has started recording.
  • Now I will interact with the report in the same way that an end user would.
    • In my example I will interact with the visuals, slicers and Date Slicer.
  • I clicked on a visual and could see how long it took for each of the visuals to get its results from the DAX query.
    • NOTE: The long durations on the visuals are because this is the first time I am interacting with my Direct Lake semantic model, so it must load the required columns into memory.
    • Subsequent interactions are quicker.
  • I keep interacting with the report, and I can see the results in the Performance Analyzer flyout.
  • I recommend capturing enough interactions to reflect what you would expect from end users; this should be planned beforehand.
  • Once I had enough queries, I clicked on Stop.

Exporting the Performance Analyzer Queries

Now that I have captured all the queries, I need to export and save the Performance Analyzer results so they can be used in the load testing (a short sketch after these steps shows how the exported file might be read later in a notebook).

  • I click on Export.
  • I am then prompted to save the JSON file.
  • I give the file a meaningful name.
    • NOTE: Make sure that the filename does not have any spaces or special characters; this makes the file easier to reference in the notebook later.
  • I then clicked on Save.
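
To give an idea of how this exported file will be used, here is a minimal sketch (in Python, the language of Fabric Notebooks) of reading the JSON in a notebook and pulling out the DAX query text. The file path, event name, and metric key below are assumptions based on a typical Performance Analyzer export rather than the exact code from this series; inspect your own file to confirm the structure, and see Part 2 for the full setup.

```python
import json

# Assumed location: the exported file uploaded to the Files area of a Lakehouse
# attached to the notebook (covered in Part 2) - adjust the path to your setup.
file_path = "/lakehouse/default/Files/PerformanceAnalyzerQueries.json"

with open(file_path, "r", encoding="utf-8") as f:
    perf_data = json.load(f)

# The export contains a list of events; the DAX text is assumed to live in the
# event metrics (commonly an "Execute DAX Query" event with a "QueryText" key).
# Check your own export for the exact names before relying on them.
dax_queries = [
    event["metrics"]["QueryText"]
    for event in perf_data.get("events", [])
    if event.get("name") == "Execute DAX Query"
    and "QueryText" in event.get("metrics", {})
]

print(f"Found {len(dax_queries)} captured DAX queries")
```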

Summary

In this blog post I have shown you how to capture the queries you want to use to load test your semantic model, and how to export them to a file that will be used in future blog posts.

In Part 2, we’ll set up a Fabric Lakehouse and create notebooks to run these queries at scale.
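
As a small preview, here is a minimal sketch of how a single captured query could be replayed from a Fabric notebook using the semantic-link (sempy) library. The dataset name and DAX string are placeholders, and this is only an illustration of the idea rather than the exact approach the series will use.

```python
# Minimal sketch: replay one captured DAX query against a semantic model
# using semantic-link (sempy) from a Fabric notebook.
import sempy.fabric as fabric

# Placeholder values - replace with your own semantic model name and a query
# taken from the exported Performance Analyzer file.
dataset_name = "MySemanticModel"
dax_query = """
EVALUATE
    ROW("Example", 1)
"""

# evaluate_dax runs the DAX query and returns the result as a dataframe,
# which is enough to confirm that a captured query executes successfully.
result = fabric.evaluate_dax(dataset=dataset_name, dax_string=dax_query)
display(result)
```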

Thanks for following along 😊