In today’s blog post I am going to show you how to set up the Lakehouse and the Fabric notebooks, and how to configure them to work with the JSON file we created in the previous blog post.

Series Details

Part 1: Capturing Real Queries with Performance Analyzer

Part 2 (This blog post): Setting Up Your Fabric Lakehouse and Notebooks

Part 3: Running, Analyzing Load Tests and Automation of Load Testing

Where to get a copy of the Notebooks

Phil Seamark has very graciously allowed a copy of the notebooks to be hosted at the GitHub link below. I recommend always getting the notebooks from the GitHub location to make sure you have the latest version.

fabric-toolbox/tools/FabricLoadTestTool at main · microsoft/fabric-toolbox

Next, download the two notebooks: RunPerfScenario and RunLoadTest.

In the steps below I will explain what each of the notebooks does and what needs to be configured.

Setting up the Lakehouse and JSON file

The first step is to make sure that I have my Lakehouse configured and the JSON file uploaded into the right location.

NOTE: It is critical that you create the exact folders below, because this is where the notebooks look for the JSON file.

As shown below, I already have a Lakehouse called FM_LH.


Creating the Sub Folders

I then go into my Lakehouse and navigate to the Files section as shown below.

  1. Next to Files, I click on the 3 dots
  2. I then click on New Subfolder
  3. I give the Subfolder the name of “PerfScenarios”


I click Create

Next, I need to create another subfolder.


  1. I click on my PerfScenarios folder
  2. I click on New subfolder
  3. I give it the name of “Queries”


I click Create

The final subfolder to create is called “Logs”, which is where all the logs from the load testing will be saved.

  1. I click on my PerfScenarios folder
  2. I click on New subfolder
  3. I give it the name of “Logs”

I can now see my folder structure.
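
If you prefer scripting to clicking, the same folder structure can also be created from a notebook cell. Below is a minimal sketch of my own (it is not part of Phil’s notebooks), assuming the Lakehouse is attached as the default Lakehouse:

    # Minimal sketch: create the folder structure from a Fabric notebook cell.
    # Assumes FM_LH is attached as the default Lakehouse, so relative
    # "Files/..." paths resolve against it.
    import notebookutils

    for folder in ["Files/PerfScenarios/Queries", "Files/PerfScenarios/Logs"]:
        notebookutils.fs.mkdirs(folder)  # creates parent folders as needed

    # Verify the Queries and Logs subfolders now exist.
    print(notebookutils.fs.ls("Files/PerfScenarios"))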

Uploading my JSON file

Now that I have created the folders, I need to upload the JSON file which I created in Part 1 of this series.

In the Lakehouse I complete the following steps to upload my JSON file:

  1. I click on Get Data
  2. I then click on Upload files
  3. I then click on the Folder icon as shown below.

Next, I find my JSON file

  1. I click on the JSON file name
  2. I make sure it has been selected
  3. I click on Open to load the location into the Upload Files section in my Lakehouse

I then click on Upload.


Once completed, I can see my JSON file has been uploaded successfully.
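
As a quick sanity check, the upload can also be verified from a notebook cell through the default Lakehouse’s local mount point. This is just an illustrative sketch of mine, using the file name from this post:

    import json
    import os

    # Sketch: list the Queries folder through the default Lakehouse mount.
    path = "/lakehouse/default/Files/PerfScenarios/Queries"
    print(os.listdir(path))

    # Optionally confirm the uploaded file parses as valid JSON.
    with open(os.path.join(path, "PowerBIPerformanceData_2025_10_17_DL.json")) as f:
        data = json.load(f)
    print("JSON file loaded successfully")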


Notebook – RunPerfScenario

This notebook runs performance testing scenarios against Power BI models using DAX queries.

NOTE: Nothing needs to be changed in this notebook.

An overview of what it does is below.

Cell 1

This is the parameter cell, which is used to receive the parameters passed in by the second notebook.

NOTE: You can leave these default values as is.

Cell 3

This is where all the magic happens (magic written by Phil Seamark): it creates the functions to be used in Cell 4 below.

  • It runs a DAX query against the semantic model.
  • It loads the queries from the JSON file.
  • It captures the metadata for each of the queries.
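
To give a feel for the mechanics, here is a minimal sketch of my own (not Phil’s actual code) showing the two core operations that Cell 3 wraps into functions: loading the query file and running a DAX query with the semantic-link (sempy) library. The dataset and workspace names are the ones I use later in this post:

    import json

    import sempy.fabric as fabric  # semantic-link, available in Fabric notebooks

    # Load the Performance Analyzer export from the default Lakehouse mount.
    query_file = ("/lakehouse/default/Files/PerfScenarios/Queries/"
                  "PowerBIPerformanceData_2025_10_17_DL.json")
    with open(query_file) as f:
        perf_data = json.load(f)

    # Run a single DAX query against the semantic model.
    result = fabric.evaluate_dax(
        dataset="WWI_DL",
        dax_string='EVALUATE ROW("Check", 1)',  # placeholder query
        workspace="Fabric_FourMoo",
    )
    display(result)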

Cell 4

This is where the notebook connects to the Semantic Model and does the following:

  • Connects to the XMLA endpoint and completes the authentication.
    • NOTE: Make sure that the account being used has access to the XMLA endpoint for the Semantic Model.
  • Loads the queries from the JSON file.
  • Captures the results and writes them to a CSV file in the Lakehouse.
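
For illustration only, writing results to the Logs folder created earlier could look something like the sketch below. The column names are made up for this example and will not match the notebook’s actual schema:

    from datetime import datetime

    import pandas as pd

    # Sketch: collect per-query timings and write them to the Logs folder.
    results = pd.DataFrame(
        [{"query_name": "Query1", "duration_ms": 1234}]  # illustrative row
    )
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    results.to_csv(
        f"/lakehouse/default/Files/PerfScenarios/Logs/loadtest_{stamp}.csv",
        index=False,
    )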

Changing the Default Lakehouse

When I opened the notebook, I could see that it was attached to a different Lakehouse and not mapped to mine.

To fix this I took the following steps:


  1. I clicked on Add data items
  2. I clicked on Existing data sources

I selected my Lakehouse and clicked on Connect


Next, I clicked on the three dots next to my Lakehouse “FM_LH” and clicked on “Set as default lakehouse”


Finally, I removed the previous Lakehouse by clicking on the three dots next to the Lakehouse with the red x and clicked on “Remove”


I could now see my Lakehouse successfully connected to the Notebook.

Notebook – RunLoadTest

This is the notebook where I will configure the load test parameters, which will run the actual load test.

Below is an overview of what it does and what I needed to change to run the load test.

Cell 2

This is where it sets the default vCore allocation for the Python notebook to 4 vCores.
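
For reference, session resources in a Fabric Python notebook are typically set with the %%configure cell magic. Below is my assumption of what a 4 vCore configuration could look like, so check the notebook itself for the exact cell:

    %%configure -f
    {
        "vCores": 4
    }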

Cell 3

This is the cell where I had to update my parameters as well as run the load test.

  • Line 13 – load_test_name
    • This is the name I want to give my load test.
    • I called it “WWI_DL”
      • I would recommend giving the test a meaningful name, so you know what semantic model was being tested.
  • Line 14 – dataset
    • This is the name of the Semantic Model
    • I put in “WWI_DL”
  • Line 15 – workspace
    • This is the name of my workspace where the semantic model is.
    • I put in “Fabric_FourMoo”
    • NOTE: I recommend always putting in the workspace name to avoid potential duplicate semantic model names.
  • Line 16 – queryfile
    • This is the location of where my JSON file has been stored.
    • I put in “/lakehouse/default/Files/PerfScenarios/Queries/PowerBIPerformanceData_2025_10_17_DL.json”
  • Line 17 – concurrent_threads
    • This is how many threads, or in other words how many query executions, run in parallel. It is similar to simulating concurrent users.
    • I put in “1” for my initial test to make sure it works
    • I would change this to a higher number when doing the real load test.
  • Line 18 – delay_sec
    • This is how long to wait before running another DAX query.
    • I typically set this to 25 seconds, since a real user would spend time looking at the report and what has changed before making additional changes.
  • Line 19 – iterations
    • This is how many times you want the test to run.
    • I typically set this between 10 and 20, to ensure all the queries are run and to see if the engine starts using its memory cache to answer queries.

The rest of the items can be left as is.
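
To show how the values above fit together, here is a sketch of the parameter assignments using the values from my test (the exact variable names and layout in Phil’s notebook may differ slightly):

    # Parameter values used for my initial test (see the list above).
    load_test_name     = "WWI_DL"
    dataset            = "WWI_DL"
    workspace          = "Fabric_FourMoo"
    queryfile          = ("/lakehouse/default/Files/PerfScenarios/Queries/"
                          "PowerBIPerformanceData_2025_10_17_DL.json")
    concurrent_threads = 1   # increase for the real load test
    delay_sec          = 25  # pause in seconds between DAX queries
    iterations         = 10  # how many times the scenario runs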

It then uses the previous notebook (RunPerfScenario) to run the load test and stores the results as CSV files in the Lakehouse.
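
Under the hood, one Fabric notebook can invoke another with notebookutils. Here is a minimal sketch of mine showing that pattern; the actual parameter wiring in RunLoadTest may differ:

    import notebookutils

    # Sketch: run the RunPerfScenario notebook with a 30-minute timeout,
    # passing parameters into its parameter cell.
    exit_value = notebookutils.notebook.run(
        "RunPerfScenario",
        1800,  # timeout in seconds
        {"dataset": "WWI_DL", "workspace": "Fabric_FourMoo"},
    )
    print(exit_value)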

Cell 4

This is where it then displays the results.
I plan on creating a Power BI report to view the results in a future blog post.
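
As a rough idea of what displaying the results can look like, here is a sketch of mine that reads the CSV logs back from the Lakehouse. The file layout is assumed from the folders created earlier:

    import glob

    import pandas as pd

    # Sketch: read all load-test result CSVs from the Logs folder and combine them.
    log_files = glob.glob("/lakehouse/default/Files/PerfScenarios/Logs/*.csv")
    results = pd.concat((pd.read_csv(f) for f in log_files), ignore_index=True)
    display(results)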

Changing the Default Lakehouse

As with the RunPerfScenario notebook, when I opened this notebook I could see that it was attached to a different Lakehouse and not mapped to mine.

I fixed this by following the same steps as before: I clicked on Add data items and then Existing data sources, selected my Lakehouse “FM_LH” and clicked on Connect, set it as the default Lakehouse via the three dots, and removed the previous Lakehouse (the one with the red x).

I could now see my Lakehouse successfully connected to the Notebook.

Testing the Notebook

I am now ready to run and test the notebook to make sure it works as expected.

I clicked on Run all

The notebook ran, and I could see that my initial test completed successfully.


I got an error when I ran it for the first time

When I was writing this blog post I got an error on my first run, and I thought it would be good to show you how to troubleshoot if you get an error too.

The notebook ran and I thought it was going well.

Then I got the dreaded error as shown below.

It can sometimes be hard to figure out what caused the error.

I then clicked on the Snapshot notebook name


I then got to see the snapshot information in the notebook.

I scrolled down until I could see an error. As shown below, the error was that it could not find the Semantic Model name I had put in the parameters.


I then went back to find the semantic model, which I had renamed.

I then updated my parameters, and the load test ran successfully.

Summary

In this blog post I went through setting up the two notebooks so that they are ready to run automated load testing. I also completed an initial test to make sure everything works as expected.

In Part 3, I will run the load test and analyse the results.

Thanks for following along 😊