Power BI RoundUp – Help Shape the Future of Power BI – On-Premise Update for Nov 2018 – Modeling View – Analysis Services is 20 years old!

Here is my weekly roundup, and whilst it has not been as busy as the past few weeks, there are still some awesome updates.

Power BI – Help Shape the Future of Power BI

I always think it is really important to provide feedback, especially when the Power BI team is asking for it.

You can click on the link above to take the Survey.

Personally, this gives me the opportunity to give feedback and hopefully help improve and drive the product forward.

You can find the details here: Help shape the future of Power BI

Power BI – On-Premise Data Gateway Update for Nov 2018

You can click on the image above to download the latest version of the On-Premise Data Gateway.

This month's update includes bug fixes, improvements and the November version of the Mashup Engine, which runs all the Power Query steps.

All the details can be found here: On-premises data gateway November update is now available

Power BI – Modeling View

As shown above, there is a new Modeling View in Power BI Desktop, and this blog post gives a quick overview of the new features, as well as a GIF showing it in action.

Blog details can be found here: Modeling View in Power BI Desktop

Analysis Services is 20 years old

Whilst it might not be obvious to some people why this is so significant, it is because the underlying analytical engine in Power BI Desktop is a version of Analysis Services.

The video is awesome to watch, featuring people from Microsoft as well as community members who have been using Analysis Services for a VERY long time.

I really enjoyed watching the video and would recommend it.

All the details can be found here: Analysis Services is 20 years old!

What runs under the cover when I open Power BI Desktop

I am always interested in what happens under the covers when I open Power BI Desktop. So I did a little digging, and I got the inspiration from Marco Russo when he did his webinar, as well as when he presented at the Queensland Power BI User Group, on “My Power BI report is slow: what should I do?”

What happens when I click on Power BI Desktop.exe?

This is the list of associated processes that are opened when Power BI Desktop is started, and below I explain what each of them does.

This is what it looks like for me on Windows 10 when I have a look in Task Manager under Processes:

  • CefSharp.BrowserSubprocess
    • This is here because reports ultimately run in the Power BI Service, which is essentially a website, and all the visuals are rendered in a browser.
    • My understanding is that Power BI Desktop uses this embedded browser to simulate how the report will run in the Power BI Service.
    • As per Marco’s Webinar, if this consumes a lot of memory or CPU this is potentially why your Power BI Report is slow.
  • Console Window Host
    • UPDATE (06 Sep 2018) – I got a reply from Amanda Cofsky from the Microsoft Power BI team, who said that the Console Window Host is the “Analysis Services Engine Console Output”, which is generally used by the Microsoft engineers for debugging purposes.
  • Microsoft Mashup Evaluation Container
    • This is the Power Query Engine.
    • This is responsible for processing all the steps in the Power Query Editor, which gets data from my sources, transforms it and then loads it into my data model.
    • When I look at a server where I have the On-Premise Data Gateway installed, I will see a lot of instances of the Microsoft Mashup Evaluation Container running. This is because this is where my data gets loaded and transformed into tables before being sent to the Power BI Service.
  • Power BI Desktop
    • This is the executable which is the starting point and container for all the processes that are run within Power BI Desktop.
  • Microsoft SQL Server Analysis Services (msmdsrv.exe)
    • This is where all the magic happens: it is an analytical data engine which leverages in-memory technology to achieve incredible compression using the xVelocity engine, and blazing fast query response times by loading all the data into memory.
    • This is where all the data gets loaded from Power Query into the data model.
    • This process can have the highest memory usage.
    • If I have an expensive DAX measure which must get most of its data from the storage engine, I will see an increase in memory and CPU utilization during the evaluation of the DAX measure, which once again, as per Marco’s Webinar, is a great indicator as to why my Power BI Report is slow.
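
If you want to have a look at that Analysis Services engine yourself, here is a minimal sketch: connect SSMS or DAX Studio to the local Analysis Services instance that Power BI Desktop starts, open a DMV query window against it, and run something like the query below to see which objects are using the memory.

-- Assumes a connection to the local Analysis Services instance running behind Power BI Desktop
SELECT *
FROM $SYSTEM.DISCOVER_OBJECT_MEMORY_USAGE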

I hope that this has given you some insight into what runs under the cover in Power BI Desktop, and shown that there are quite a few moving parts working together seamlessly to make the report creation and development experience so fast when developing Power BI Reports.

As always, if there are any questions or you have more insights into the details above, please let me know and I will happily update this blog post.

Thanks for reading!

SSAS (SQL Server Analysis Services) – Getting all partition information from SSAS Database

I had a requirement where I wanted to find out and keep a constant record of exactly how my partitions were set up and created within SQL Server Analysis Services (SSAS). So below is a script that I found somewhere (if I find the source I will put it in here) and how I inserted the results into a SQL Server table so that I can use them for the creation and dropping of SSAS partitions.

 

Example: We are going to get back all our SSAS partition information from the AdventureWorksDW2012 cube.

 

You can get a copy of this here: http://msftdbprodsamples.codeplex.com/releases/view/55330

  • I used the AdventureWorksDW2012 Data File and AdventureWorks Multidimensional Models SQL Server 2012.

 

Installation of ASSP for Analysis Services

1. If you have not got it installed already, you will first need to install ASSP for Analysis Services.
2. You can get the files from here: https://asstoredprocedures.codeplex.com/releases/view/79180
3. Then, in order to install it, you can follow these easy installation instructions: https://asstoredprocedures.codeplex.com/wikipage?title=Installation%20Instructions

 

Configuration and getting the partition data

 

1. The first thing that we did was to check that when running our MDX query it would return the required results from our cube.
2. In SQL Server Management Studio (SSMS) we went into our Analysis Services instance, and then to our AdventureWorksDW2012 cube.
   a. Then right click, select New Query and then MDX.
   b. (screenshot: New Query > MDX)
   c. Once this opens, run the following query:
      call assp.DiscoverXmlMetadata("Partition")
   d. Now execute the query and you should see the results as shown below.
   e. (screenshot: partition rows returned by the query)
   f. NOTE: There are a whole host more columns with a lot of valuable information.
      i. Below is often what I am most interested in, and the columns are on the right hand side.
   g. (screenshot: the columns of most interest, on the right hand side)

3. Then, in order to store this information, we are going to use SSIS to insert data from an MDX query into a SQL Server table.
   a. You can use this blog post below:
   b. SSIS 2012 – INSERTING DATA INTO A SQL SERVER TABLE FROM AN MDX QUERY
4. Then we created our table with the following syntax, in order to get the data into our SQL Server table as our destination.

CREATE TABLE [Mart].[TD_SSAS_PartitionDetails](
       [Name] [varchar](300) NULL,
       [ID] [varchar](300) NULL,
       [CreatedTimeStamp] [varchar](300) NULL,
       [LastSchemaUpdate] [varchar](300) NULL,
       [Description] [varchar](300) NULL,
       [LastProcessed] [varchar](300) NULL,
       [State] [varchar](300) NULL,
       [Type] [varchar](300) NULL,
       [AggregationPrefix] [varchar](300) NULL,
       [StorageMode] [varchar](300) NULL,
       [CurrentStorageMode] [varchar](300) NULL,
       [StringStoresCompatibilityLevel] [varchar](300) NULL,
       [CurrentStringStoresCompatibilityLevel] [varchar](300) NULL,
       [ProcessingMode] [varchar](300) NULL,
       [ProcessingPriority] [varchar](300) NULL,
       [StorageLocation] [varchar](300) NULL,
       [RemoteDataSourceID] [varchar](300) NULL,
       [Slice] [varchar](300) NULL,
       [EstimatedRows] [varchar](300) NULL,
       [AggregationDesignID] [varchar](300) NULL,
       [EstimatedSize] [varchar](300) NULL,
       [Parent_MeasureGroupID] [varchar](300) NULL,
       [Parent_CubeID] [varchar](300) NULL,
       [Parent_DatabaseID] [varchar](300) NULL
) ON [PRIMARY]
GO

   a. Once completed, our SSIS Data Flow task looked like the following:
   b. (screenshot: SSIS Data Flow task)
5. Now once we run the above SSIS package, we see our data in our SQL Server table from step 4 above.
6. NOTE: If you want to keep a historical record you could then take this data and put it into a Fact-style table.
   a. But for our purposes we just truncated the above table and inserted our data again daily (a rough T-SQL sketch of this reload follows below).
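
As a rough sketch of that daily reload, below is a T-SQL alternative to scheduling the SSIS package. It assumes a linked server to the Analysis Services instance (named SSAS_LINKED here purely for illustration) and that the column names returned by the ASSP call line up with the table above, so treat it as a starting point rather than exactly what we used.

-- SSAS_LINKED is a hypothetical linked server pointing at the Analysis Services instance
TRUNCATE TABLE [Mart].[TD_SSAS_PartitionDetails];

INSERT INTO [Mart].[TD_SSAS_PartitionDetails] ([Name], [ID], [LastProcessed], [EstimatedRows], [Parent_MeasureGroupID], [Parent_CubeID], [Parent_DatabaseID])
SELECT [Name], [ID], [LastProcessed], [EstimatedRows], [Parent_MeasureGroupID], [Parent_CubeID], [Parent_DatabaseID]
FROM OPENQUERY(SSAS_LINKED, 'call assp.DiscoverXmlMetadata("Partition")');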

 

Usage for SSAS Partition Details

  • The main reason for getting our SSAS partition details into a SQL Server table is so that we could then get a list of the current partitions for our SSAS database and cube.
  • We could then use the partition details to find out our Max and Min partitions, and also how many partitions we have (see the query sketch after this list).
  • Another use, based on our Max partition, is to ensure that we can create partitions for the data that we are going to load going forward.
  • Likewise, we could also find our Min partition in order to drop older partitions of data.
  • And finally, because all our data is stored in a SQL Server table, we could use this data within SSIS, using XMLA, to dynamically create partitions.
    • You can use this blog post below as a reference: SSIS – CREATING NEW PARTITIONS IN SQL SERVER ANALYSIS SERVICES (SSAS) WITH XMLA AND ANALYSIS SERVICES DDL
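
As a small worked example of the first two bullets, here is a minimal query sketch against the table we created earlier. It assumes the partition [Name] sorts correctly (for example, it ends in a YYYYMM suffix), so adjust it to your own naming convention.

-- Max and Min partitions, plus a partition count, per measure group
SELECT [Parent_MeasureGroupID],
       MAX([Name]) AS MaxPartition,
       MIN([Name]) AS MinPartition,
       COUNT(*) AS PartitionCount
FROM [Mart].[TD_SSAS_PartitionDetails]
GROUP BY [Parent_MeasureGroupID];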