This blog post details how I run the load test and then review the results to determine how the capacity coped as I increased the number of users.

It also demonstrates how I automated the load testing so that it no longer has to be run manually!

In case this is the first time you have come across this series and you want to understand what has been done previously, the earlier posts are listed below.

Series Details

Part 1: Capturing Real Queries with Performance Analyzer

Part 2: Setting Up Your Fabric Lakehouse and Notebooks

Part 3 (This blog post): Running, Analyzing Load Tests and Automation of Load Testing

Part 4: Using the DAX Tuner MCP to fix slow DAX measures

Running the Load Test

To run the load test, I need to understand what I am going to test for.

This means doing some planning beforehand: working with the business users to understand how they are going to use the reports, as well as how many people will be using them concurrently.

In my example I am going to complete the following:

  • I have already created the JSON file which replicates how the users will interact with the report.
  • I am going to run the job twice, once with 20 concurrent users and a second time with 50 concurrent users.
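The JSON file is essentially a recording of the DAX queries the report visuals send, with some think-time between them. As a purely hypothetical illustration (the field names below are mine, not the exact schema the notebook expects), it could look something like this:

```python
import json

# Hypothetical shape of the load-test definition file.
# Field names are illustrative only, not the notebook's actual schema.
load_test_definition = {
    "report_name": "Sales Report",
    "queries": [
        {
            "visual": "Quantity by Product",
            "dax": "EVALUATE SUMMARIZECOLUMNS('Product'[Product], \"Qty\", [Quantity])",
            "think_time_seconds": 5,
        },
        {
            "visual": "Sales by Product",
            "dax": "EVALUATE SUMMARIZECOLUMNS('Product'[Product], \"Sales\", [Sales])",
            "think_time_seconds": 10,
        },
    ],
}

print(json.dumps(load_test_definition, indent=2))
```

The point is that each entry pairs a real captured query with a pause that mimics how long a user would look at the visual before interacting again.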

NOTE: I will be using the Notebook I tested with in my previous blog post.

Running load test for 20 concurrent threads (which I equate to users)

I updated the JSON file and the settings shown below.

I then clicked on Run All to start the notebook.

Once it started I could then see that all 20 concurrent sessions were running.
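Conceptually, the notebook starts one session per simulated user and replays the recorded queries in parallel. A minimal sketch of that pattern, where `run_user_session` is a hypothetical stand-in for replaying the DAX queries against the semantic model:

```python
from concurrent.futures import ThreadPoolExecutor
import random
import time

CONCURRENT_THREADS = 20  # one thread per simulated user

def run_user_session(user_id: int) -> dict:
    """Hypothetical stand-in for replaying the recorded DAX queries."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))  # placeholder for query execution
    return {"user": user_id, "duration_s": time.perf_counter() - start}

# Fire all simulated users at once and collect their timings.
with ThreadPoolExecutor(max_workers=CONCURRENT_THREADS) as pool:
    results = list(pool.map(run_user_session, range(CONCURRENT_THREADS)))

print(f"{len(results)} sessions completed")
```

Scaling the test up to 50 users is then just a matter of changing `CONCURRENT_THREADS`, which is what the next section does.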


Once completed I could see all the notebooks had run successfully.


Running load test for 50 concurrent threads (which I equate to users)

As shown below I changed the concurrent_threads to 50

I initially got an error when trying to run the 50 concurrent threads.

The reason for this is that on line 91 the concurrency was still set to 25, as shown below.

I changed this to 50 and it then ran successfully.

I then ran the load testing a few more times to ensure I had consistent results.

Load Test Results

I then created a semantic model from the CSV files that captured the results.

As shown below, the CSV files were saved into my Lakehouse.

There is a separate CSV file for each number of concurrent threads.
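Because each test run lands in its own CSV file, they need to be combined and tagged with the concurrency level before modelling. A small pandas sketch of that step, using inline data in place of the real Lakehouse files (the column names here are assumptions):

```python
from io import StringIO

import pandas as pd

# Hypothetical result files, one per concurrency level; in the Lakehouse
# these would instead be read with pd.read_csv("/lakehouse/default/Files/...").
csv_20 = "visual,duration_ms\nQuantity by Product,850\nSales by Product,790\n"
csv_50 = "visual,duration_ms\nQuantity by Product,1450\nSales by Product,1320\n"

frames = []
for threads, raw in [(20, csv_20), (50, csv_50)]:
    df = pd.read_csv(StringIO(raw))
    df["concurrent_threads"] = threads  # tag each row with its test run
    frames.append(df)

results = pd.concat(frames, ignore_index=True)
print(results)
```

Tagging each row with its thread count is what later lets the report slice DAX durations by concurrency level.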

I also used the new functionality to create the semantic model using the Fabric/Power BI Service. Here is the documentation if you are interested: https://learn.microsoft.com/en-au/power-bi/transform-model/service-edit-data-models

I also used a trick that I have previously blogged about for ingesting files from the Lakehouse into a semantic model (How to get data from a Fabric Lakehouse File into Power BI Desktop – Using Scanner API JSON – FourMoo | Microsoft Fabric | Power BI).

The reason I did this was so that I could use Power Query in the web to load the CSV files into my semantic model.


Yes, it is possible to load the CSV files into a Lakehouse table, but I wanted to see if I could build everything by working ONLY within the semantic model.

I was happy to see that this is possible, and I really enjoyed the experience. Especially when creating measures: I could simply create a measure, refresh my Power BI report, and consume the measure immediately.

Ok enough rambling, here is what the load testing looks like.

Fabric Capacity Metrics App

Below is how much capacity the load testing consumed.


Overall, it never got above 10% of my F64 capacity, which is good to see.

Power BI Report Analysis

Even though the load test did not use a significant amount of capacity, it is still important to understand which visuals were consuming the most of it.

A special thanks to Phil Seamark for answering my questions about the captured logs, which helped me create a report that made sense and was easy to understand.


As shown above, the visual called “Quantity by Product” had the highest duration, closely followed by “Sales by Product”.

As suggested by Phil, I also put in the 90th, 75th, and 50th percentiles. This gives a great overview of the DAX query performance.
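To show what those percentiles represent, here is a small sketch that computes the 50th/75th/90th percentile duration per visual with pandas; the durations are made-up sample values, and in my report the equivalent calculation was done with DAX measures:

```python
import pandas as pd

# Hypothetical per-query durations (ms) captured during a load test.
df = pd.DataFrame({
    "visual": ["Quantity by Product"] * 4 + ["Sales by Product"] * 4,
    "duration_ms": [800, 900, 1200, 2000, 700, 750, 900, 1100],
})

# 50th/75th/90th percentile duration per visual.
pct = (
    df.groupby("visual")["duration_ms"]
      .quantile([0.5, 0.75, 0.9])
      .unstack()
)
print(pct)
```

Looking at the spread between the 50th and 90th percentiles is useful: a big gap means a visual is usually fine but occasionally very slow, which a simple average would hide.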

I am now in the position to make some changes to my DAX measures and re-test to see if I can reduce the DAX duration times.

Automating the Load Testing

One further note I wanted to make is that in the past, when using the PowerShell load testing, I had to wait until after hours and then manually run the load test.

Because I am using a notebook, I can now schedule the testing to run after hours without having to stay up or work late.

I created a data pipeline using my notebook where I had modified it to accept parameters as shown below.
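For the pipeline to pass values in, the notebook needs a parameters cell. As a rough sketch (the variable names are my assumptions, and in a Fabric notebook this cell would be tagged as the parameters cell so the pipeline can override the values at run time):

```python
# Parameters cell: a data pipeline can override these defaults at run time.
concurrent_threads = 20                   # e.g. overridden to 50 by the pipeline
load_test_file = "load_test.json"         # hypothetical definition file name

print(f"Running load test with {concurrent_threads} threads using {load_test_file}")
```

With the defaults in place, the notebook still runs interactively exactly as before; the pipeline simply supplies different values when it invokes it on a schedule.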

I then scheduled it to run at 04:50 AM when I knew there would be minimal load on my Fabric Capacity.

When I ran the above, it failed.

The reason it failed is that I was trying to run too many concurrent notebooks. There is a limitation, which I found documented here: Fabric Notebook known limitation – Microsoft Fabric | Microsoft Learn

To resolve this, I split the notebooks into smaller parallel tasks, as shown below.
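The idea of the fix can be sketched as simple batching: instead of launching every notebook run at once, split them into chunks that each stay under the limit (the limit value below is assumed for illustration, not the documented number):

```python
# Sketch: split notebook runs into batches so the number of concurrently
# running notebooks stays under the platform limit.
MAX_PARALLEL = 25  # assumed limit, for illustration only

runs = list(range(50))  # 50 notebook runs to schedule
batches = [runs[i:i + MAX_PARALLEL] for i in range(0, len(runs), MAX_PARALLEL)]

for n, batch in enumerate(batches, start=1):
    print(f"Batch {n}: {len(batch)} notebooks run in parallel")
```

In the data pipeline this corresponds to two groups of parallel notebook activities that run one group after the other.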

This then ran successfully.

Summary

In this blog post I explained and showed how I did the load testing using notebooks, and how I created a Power BI semantic model and report to better understand the DAX query durations and how they affected my Fabric Capacity.

In Part 4, I am going to attempt to use the DAX Performance Tuner from Justin Martin to see if it can improve the DAX measures, which I will then re-test. I will also blog about setting up the MCP server (hopefully I can get it working, as it will be my first attempt).

BONUS

If you are interested in getting the Power Query syntax I used to get the data from my Lakehouse, you can find the M Code below.

Thanks for following along 😊