Making sense of challenging data topics one step at a time.

Author: Dominick Raimato

Simulation Integration with Power Apps

Simulating scenarios with Power BI is a common way for organizations to get the most out of their data. Being able to manipulate elements of the data to simulate an outcome brings deeper insights and accelerates decision making. Everyone asks “what if”, but Power BI paired with Power Apps can help you build the simulation and put it into action.

About two years ago, I wrote an article on how to simulate scenarios with numeric parameters. That solution still applies and works well for most scenarios. However, it has a drawback: if you have a lot of inputs, you can end up with a painful user experience. The constant recalculation of your measures can bog down the report and eat away at your capacity.

While not a solution for every scenario, we can use Power Apps to achieve the same outcome. In our scenario for this article, we want to drive profitability by manipulating a profit and loss statement. We want to adjust inputs such as revenue, expenses, headcount, and taxes. Because of the numerous inputs that drive a profit and loss statement, this is a perfect example for Power Apps.

Preparing the Report Elements

To start, we are using an existing report that contains a profit and loss statement. We have a matrix visual that shows the lines of the statement and their associated values. You will notice that each line is labeled with a section letter and line number, which makes it easy to identify each element:

Screenshot of a report canvas in Power BI Desktop with a profit and loss statement.
Profit and loss statement within Power BI Desktop

To set up our simulation, we need measures that aggregate the data and bring it into the app. You will note that I created a Simulation App Measures query and added a measure for each line item. This will make it easy to bring the data into Power Apps.

Lastly, you will note that I left space on the right-hand side of the canvas to make it easy to add our application. While we are adding a Power App, it behaves like any other visualization and requires the appropriate space. While it is perfectly fine to place the app on its own page, we want to keep it close to our profit and loss statement for comparison.

Preparing the Power App

While it seems simple, we need to take a few steps to integrate Power Apps into our report. The first step is to select the Power Apps visual in the visualizations pane. Once you add the visual, it provides the instructions for the next few steps:

Screenshot of adding a Power App visual to Power BI Desktop
Adding the Power App visual to our report canvas.

Once we have the visual in place, we need to add fields from our data pane to the visual. This data is the basis for your simulation. Once the data has been selected, we can choose which Power Apps environment we want to use. Next, if we have an existing app, we can select it here. However, we have not created the app yet, so we are going to create one from scratch. When you do this, you will be warned that a new window is being opened so you can edit the app with Power Apps Studio.

Creating Your Simulation App

Now, there are a lot of components that go into a Power App. Because of the complexities that go into creating a Power App, I am going to direct you to Microsoft Learn to get all of these details. However, I am going to go over a few basics to help you get started.

Basic App Settings

There are a number of settings you will need to put in place. Simple examples are the name of the app, the icon for the app, and the orientation of the application. If you go to the three dots at the end of the menu and select Settings, you can set all of these items. You can also set other items such as background color or images:

Screenshot of Power Apps Studio and how to get to the app settings.
Adjust settings for your app within Power Apps Studio

You will also note that there is an option for version history. If you are new to Power Apps, this is a great tool in the event you make a mistake. You can easily roll back your version if you break something and need to go back. Just remember to save versions periodically with comments so you know where you are in your versioning.

Using Data for Simulation

If you are familiar with Power Apps, you know you can bring data from various sources into a single application. However, if you go to the data view, you will find it empty. But you need data to build your simulation, so where is it? There is an item in your tree view called “PowerBIIntegration” which holds your data in a tabular format:

Screenshot of Power Apps Studio showing the PowerBIIntegration data source for simulation.
The PowerBIIntegration provides the simulation data for our app.

Since we put only measures in this app and no dimensions, the table only has a single row in it. However, we still need to aggregate our data. To do this, you will need to write a little code in Power Apps to display the desired value in a text box. For example, if you want to aggregate the total gross revenue, you will need this code to achieve the outcome:

Text(Sum([@PowerBIIntegration].Data, A1), "$#,0")

This code aggregates the data from [@PowerBIIntegration] and formats it properly for display. You do not have to go as far with the formatting as I do, but I really like to make sure it is correct. It requires a little extra work, but it goes a long way.

Testing, Saving, and Publishing the Simulation App

Once you have something ready, you can click the play button on the menu bar to test your application. This will allow you to test the functionality of your application and ensure it works. You also can hold down your ALT key to test elements on the app canvas. Once you have it tested, it is time to publish.

Screenshot of Power Apps Studio highlighting the play button, save button, and publish button.
Test, save, and publish your app from Power Apps Studio

Once it is ready, click the save button to ensure your changes are captured. Next, click the publish button to push the changes live. This makes the app viewable inside of Power BI and available for your report consumers to use. Depending on the environment settings, you might need to share the application with others, but that can easily be done using the same Entra ID or Active Directory security group you leveraged to provide access to the report in the Power BI Service. When you are finished, the report canvas will have the app integrated right on the page:

Screenshot of the simulation Power App inside of Power BI Desktop.
The final product – Simulation app inside of the Power BI Report

Common Questions with Power Apps

Every time I bring up this solution, I receive a lot of questions. However, they are often variations of the same three topics that are in this section. I wanted to address them as I am guessing you have the same questions as well.

Question 1 – Licensing and Cost

We all know Microsoft does not give much out for free. However, with how this app is designed, there is no additional impact to the cost of your licensing. Power Apps is using seeded capabilities from your base Microsoft 365 licensing and therefore does not cost you anything extra. While licensing changes often, I recommend you check out Michael Heath’s blog post on Power Platform licensing if you are new to it.

Question 2 – App Use Outside of Power BI

A lot of people ask about being able to leverage the app outside of Power BI. The answer on this is a hard no. We must keep in mind that the app is leveraging data from your Power BI report and filtering based upon the report context. Therefore, this app is not usable outside of the report. However, you could easily re-deploy this app in another report assuming you leverage the same measure names to load the data into the app.

Question 3 – Power Apps and Writeback

The last question I receive is if Power Apps provides capabilities for writeback to data sources. The short answer is yes, but depending on your data source additional licensing might be required. There are tools out there that allow you to perform write back with less complexity, but Power Apps absolutely works for this. It is up to you if it is worth the effort and cost to make it happen, but it is absolutely possible.

Conclusion

People forget that Power BI is actually part of the Power Platform suite, and with that they miss the opportunities to integrate these apps together. This example shows how you can integrate Power Apps to extend your report’s capabilities with little effort. If you are not an expert, do not be afraid to use Copilot for Power Apps to help you out as well!

Have you ever integrated a Power App into a Power BI report? What was your use case? Did you have any challenges in making it happen? Tell me in the comments below!

How do I connect to REST APIs with Power BI? (Part 5)

In preparing for the Global Power BI and Fabric Summit this year, I got a little nostalgic. I went back to my presentation and early blog posts on connecting to REST APIs. Two years, several blog posts, and a book on Power Query later, I felt it was time to add a fifth part to this series. I realized that I left out one additional step I could take to optimize my query a little more to allow me to squeeze the most out of my API call limits.

I will be picking up where we left off with part 4 of how to connect to data with REST APIs. However, if you are new to my blog, you can start from part 1 and get caught up. If you recall, I am limited to 100 API calls a day with my scenario. If I am watching more than 100 stocks, I might need to consider moving to the next subscription level. However, I have an option to squeeze even more out and potentially return 1,000 stock symbols. This is a bonus as I do not need to increase my subscription!

What Did Dom Miss in Part 4?

To note what I missed, I need to go back to part 2 on REST APIs. You must Read the Fine Manual! In reading the documentation, I realized that I could batch multiple stock symbols at once. Instead of making individual calls for each symbol, I can group ten of them in a single call.

Screenshot of the Yahoo Finance REST APIs documentation
Guess I need to take my own advice?

With this knowledge in hand, I set out to take my query one step further and get more out of my subscription. It is not as straightforward as it may seem, but with a little manipulation we can make it work.

Prepare my List

We already have a spreadsheet of stock symbols brought into the query. We need to start by removing all of the columns except the stock symbol: right-click on the symbol column and select Remove Other Columns. Once that is complete, insert an index column that starts from 1. Your query should look like this:

Screenshot of the symbol column with the index column next to it.
Make sure your index starts from 1!
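
If you prefer to see these steps in the advanced editor, they might look something like this (a sketch only; the Source step name is an assumption, and the symbol column is assumed to be named Symbol):

  // Keep only the stock symbol column, then add a one-based index
  #"Removed Other Columns" = Table.SelectColumns(Source, {"Symbol"}),
  #"Added Index" = Table.AddIndexColumn(#"Removed Other Columns", "Index", 1, 1)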

Next, we need to add a batch column. We need to identify every tenth row and assign it a batch number, which we will do with a custom column. Using the modulo function, we can spot every tenth row: if it is the tenth row, the remainder will be zero. When the remainder is zero, we divide the index by 10; otherwise, we return null. The formula looks like this:

if 
  Number.Mod([Index], 10) = 0 
  then Number.IntegerDivide([Index], 10) 
  else null

Next, we just need to use the Fill Up feature to make sure every row is assigned to a batch. The last few rows will still be null, which is not an issue as they will form their own batch. The result looks like this:

Screenshot of our stock symbol list with the associated batch number.
Our stock symbols associated with a batch
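
Behind the scenes, Fill Up is a single step. A sketch, assuming the custom column above was named Batch and the previous step was named accordingly:

  // Fill the Batch value upward so rows 1-9 inherit the batch number from row 10,
  // rows 11-19 from row 20, and so on
  #"Filled Up" = Table.FillUp(#"Added Batch", {"Batch"})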

With our batch in place, we can remove the index column. Next, we need to group our symbols together.

Use Group By to Bring Symbols Together

Going back to our documentation, each symbol must be separated by a comma. We will use the Group By function to combine ten rows at a time into a single cell. Select the batch column and choose Group By in the Transform ribbon. Using the Sum aggregation, we are going to summarize our symbol column. Naturally, this will result in an error, but we will fix that next.

Screenshot of the group by function in Power Query
Setting up the group by function

Next, we will need to update the M code in the advanced editor. We need to replace the List.Sum() function with the Text.Combine() function. Our code for this step will look like this:

Table.Group(#"Removed Columns", {"Batch"}, 
  {{"Symbols", each Text.Combine([Symbol], ","), 
  type nullable text}})

The result is an updated query with our stock symbols combined:

Screenshot of the combined stock symbols
Stock symbols combined in a single cell by batch

Now that the hard part is over, we just need to apply our function.

Functions and REST APIs

We already created a function in part 3 of my REST APIs series. We just need to apply it to our new query. Go to the Add Column ribbon and select Invoke Custom Function. Using the Symbols column as the parameter, we can query all of the symbols in the cell.

Invoking our custom function that uses REST APIs
Invoking the Get-StockInfo function
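
Under the hood, invoking the custom function is just a Table.AddColumn step. A sketch, assuming the grouped step is named Grouped Rows and the new column is named StockInfo:

  // Call the Get-StockInfo function from part 3 once per batch of symbols
  #"Invoked Custom Function" = Table.AddColumn(
    #"Grouped Rows", "StockInfo", each #"Get-StockInfo"([Symbols]))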

All that is left to do is expand the results so they can be leveraged in the model:

Screenshot of our results from the REST APIs
Stock data from our endpoint
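
The expansion itself is a Table.ExpandTableColumn step along these lines, assuming the function returns a table of results (a record column would use Table.ExpandRecordColumn instead). The field names here are assumptions; use whatever fields your endpoint actually returns:

  // Expand the nested results into regular columns
  #"Expanded StockInfo" = Table.ExpandTableColumn(
    #"Invoked Custom Function", "StockInfo",
    {"symbol", "price", "volume"}, {"Symbol", "Price", "Volume"})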

And just like that, we turned twenty-five API calls into three. Not too shabby!

Conclusion

It is extra work, but it is worthwhile if you can save a few dollars and a few minutes on your refresh. It may seem excessive, but it might make the difference. Besides, I could not resist a post where I tell people to Read the Fine Manual as I know my friend Mike will get a good laugh out of it again!

Have you ever used the Group By function in Power BI? Or extended a query like this? Did I miss another way to optimize this query? If so, tell me in the comments below!

FabCon Takeaways

Did you attend FabCon during the last week of March in Las Vegas, NV? If you were not able to make it, you missed out: the week was packed with great knowledge, exciting announcements, and wonderful data professionals to bounce ideas off of. I was lucky enough to attend with my colleague and friend, Michael Heath. At every break when we would reunite, we were so excited to share what we had just learned.

Picture of Michael Heath and Dominick Raimato at FabCon.
Michael and I connecting in person at FabCon!

I know not everyone got to attend this event. And even for those of us who were there, it was a lot of information to take in over three days. I wanted to summarize some of my top takeaways. I mean, it is difficult to distill an entire conference into a single blog post, but I wanted to at least highlight some of the key moments that made me go aha!

FabCon Takeaway #1 – Out of the Box

While I had heard this many times before, I was finally able to wrap my head around it during the conference. Fabric is an all-inclusive platform that simplifies the management of data for all kinds of projects. Whether it is warehousing, reporting, or data science, the platform is ready to use.

Like Buying a Car

The best analogy I heard all week was the concept of buying a car. Before Fabric, you needed to pick out each part individually. You needed to identify your storage medium, ETL tool, virtual network components, and data science tools. This is the equivalent of going to the auto parts store, picking each component out one by one, and then assembling the car yourself. While some of you might choose to do this, it is a lot of work. You need not only the skills to do the data work, but also the experience to set up your Azure environment.

Instead, Fabric allows you to buy the car right off the lot. When you set up your workspace, you can add components without the hassle of configuration. It just works, which is a really nice feeling. Even better, because you are using a capacity license, it is easier to estimate costs. I know there are calculators out there for Azure, but you always forget something when you are attempting to build a budget. Fabric eliminates that challenge with a clear pricing structure. This is a huge step forward for self-service data science outside of the traditional IT function.

Microsoft has been pushing their “5×5” approach with Fabric: 5 seconds to sign up and 5 minutes to your first wow. While I appreciate the ambition, I think there is a little way to go in this space. If you have a guide to help you, I believe it to be true. However, if you are new to Fabric, I do not find it quite that intuitive yet. Long story short, make sure you do a little learning before you try it on your own.

What About Data Engineers?

Does this mean they are out of a job? Not at all! Data engineers will still be essential. For me, I think it redirects their efforts into more valuable workloads. Instead of fighting with Azure, they are focused on getting value out of the platform. It also opens the door to data engineers and scientists with deep domain knowledge in the business. That is a game changer!

Does this mean our existing data infrastructure is obsolete? Also no! These tools are still essential. If anything, this might allow less critical workloads to be shifted to Fabric, reducing administrative burden. That allows you to focus on the mission critical workloads within your existing infrastructure. While Microsoft would like you to move your workloads to Fabric, not everything has to go.

FabCon Takeaway #2 – CI/CD Is Close

If you are unfamiliar, CI/CD stands for Continuous Integration and Continuous Deployment. To handle this, you need a mechanism to manage source code for the components of Fabric. While it is not 100% there yet, the future is quite bright. Some might not like how this is done, but it is a huge step forward.

The integration with Azure DevOps is slick, as is the ability to create a feature branch and merge it back to the main branch. With a few clicks, you can create a new workspace, replicate your components from the main branch, and sync it to Git. When you are finished with your feature, you can easily merge it back in. For me, the nicest part is that you can retire the feature workspace as part of the merge.

I also love the fact that you can choose your deployment path. If you would prefer to use Deployment Pipelines in Fabric, you can do so. However, if you would rather have Git repos for Dev, Test, and Prod, you can do that too. This gives you the flexibility to manage and control your deployments as you see fit.

Stay tuned for an article on this in the near future!

FabCon Takeaway #3 – Integration Points

For me, the biggest question I kept asking was how well I could integrate these components into other solutions. Creating a machine learning model is fantastic, but useless if you cannot integrate it with other applications. The answer here looks very promising!

I need to investigate how this works further, but the promise was pretty clear. The idea is that your data scientists can perform their analysis and train a model with Microsoft Fabric. Once completed, they can leverage it via endpoints and provide predictions. I have done this with Azure ML in the past, but the process was a tad clunky. I am hoping this experience gets better with Fabric.

Another teaser was the ability to build your own custom Copilot chat bots within Fabric. This is not using Copilot to create a model, but rather training a Copilot on your data. Eventually, this data could find its way into other tools. To be honest, something like this has already been happening with the Q&A feature in Power BI. However, if Microsoft can streamline this into a single Copilot experience, it would be huge. Being able to query your data without having to go into Fabric would go a long way.

Anything Else from FabCon?

Yes, there is a ton more to talk about, but these were the standouts for me. What is hard to put into a post is how outstanding the conversations with other professionals were. Meeting some of the conquering heroes of the Power BI world is always fun, but eating lunch with different people was where the biggest insights were gained. Learning about everyone’s unique scenarios allows you to refocus your vision of the platform. I always recommend people attend conferences for that reason alone. That is where you gain the most interesting insights!

Also, since everyone asks this question, Michael and I were able to enjoy some of the fun around us. It was a busy week at the conference and work did not slow down in our absence. While we did not get to any shows, we did get to unwind with other professionals and even meet one of the best Elvis impersonators I have seen in my lifetime.

Michael and I posing with The King

Conclusion

This coming year is going to be exciting when it comes to Microsoft Fabric. Between what was announced at FabCon and what is coming in the near future, things look bright! I look forward to sharing more insights over the coming months.

Did you make it to FabCon? What were your key insights? Anything I missed in my summary? Tell me in the comments below!

Get Ready for the DP-600 Exam

I took the Implementing Analytics Solutions Using Microsoft Fabric (DP-600) exam recently and it was a unique experience for me. This was the first time I took a beta exam from Microsoft. It was also the first time I took an exam under the new rules that allow you to search Microsoft Learn. It was a different experience, but a successful one as I did pass.

It would be unethical to discuss specific questions and details around the exam. However, there are a few things you can do to make sure you are prepared for this exam. A little preparation can go a long way!

Content Breakdown for the Exam

This exam assesses your knowledge of Microsoft Fabric. As a user of Power BI, you might find parts of this exam easier, because Power BI is a big part of Fabric. The breakdown of the exam is as follows:

  • Plan, implement, and manage a solution for data analytics (10–15%)
  • Prepare and serve data (40–45%)
  • Implement and manage semantic models (20–25%)
  • Explore and analyze data (20–25%)

With this knowledge in hand, it is easier to understand what you need to focus on as you prepare for the exam. Clearly, prepare and serve data is a hefty portion of it, and if you have not touched Fabric, you will not perform well. But all is not lost, as your existing Power BI knowledge will still help you be successful!

Studying for the DP-600

Ideally, everyone would attend a training for the DP-600. However, that is not always realistic. My work schedule has been chaotic, so even if I wanted to attend a class, how would I do it?!

To start, Microsoft has created a few learning paths that can get you on the right track. They are designed to walk you through the core components of Microsoft Fabric and are linked from the DP-600 page. It might be best to begin with the DP-600 Study Guide from Microsoft. This will help you wrap your head around the core components of the exam and points you to the additional learning paths that cover each area.

However, you will want to get some hands-on experience with Microsoft Fabric. You can fire up a free trial if you have an active Microsoft 365 environment in place. To get started, sign into Power BI, go to your account in the upper right-hand corner, and start the trial:

Screenshot of account view where you can initiate a trial of Microsoft Fabric.

You will have sixty days to explore Microsoft Fabric at no cost. Make sure you are ready to commit to your learning to ensure you do not run out of time before you take the exam!

Practice Makes Perfect!

I am always prompted to add a practice exam when I sign up for an exam. But Microsoft also has a free practice assessment you can take to get a baseline of your knowledge. Take a look at this screenshot from the DP-600 exam page:

Screenshot of the scheduling pane for the DP-600 exam

This practice exam runs you through fifty multiple choice questions. It does not mimic the actual exam, as there are no case studies, and the real exam has some questions that are not multiple choice. It is not an exact representation of the exam, but it does a good job of giving you a baseline of your knowledge.

If you have not taken many certification exams, you might want to invest in the paid practice exam. In general, most exams are broken into three sections. The majority of the exam is a collection of questions that you can review before moving on to the next section; once you move on, you cannot go back. The same is true for the lab section. Finally, there is a section where you cannot go back and update previous answers at all. That causes anxiety for some people, so taking a paid practice exam can help prepare you for that experience.

Regardless of the route you choose, I love that you can check the answer on the spot. When you do it, they provide a link to the documentation on Microsoft Learn that explains why the answer is correct. Sometimes you are just guessing and get lucky, but this will help you confirm why you chose the right answer.

Anything Else to Consider?

The resources above helped me pass the exam. However, you might need to know a little more than what is outlined above. Being multi-lingual will help you be successful with this exam. You need to know your DAX, M (both code and GUI), T-SQL, and PySpark syntax and best practices. You might not be an expert in all of them, but a strong command of the basics is critical.

Microsoft Learn was helpful for me as there were a few items that I got stuck on. In fact, my wife commented on the fact that I spent more time on this exam than any others I have taken in the past. Having Microsoft Learn at my disposal helped a lot, but it was not a replacement for my existing knowledge. While it will assist you, relying on it will result in you running out of time.

Conclusion

I felt like this exam was one of the better exams I have taken in the recent past. I feel like so many certification exams are about trying to trick you. The broad amount of content in this exam allowed it to be both challenging and meaningful. It may seem weird, but I really enjoyed this exam! If you are looking to continue your analytics journey, you will want to add this exam to your list!

Have you taken the DP-600 yet? What did you think about it? Did you find it meaningful? Tell me in the comments below!

Drill Through for Deeper Insights with Power BI

It is a pet peeve of mine and it drives me nuts. I hate it when someone tries to fit as many visuals on the canvas as humanly possible. It is cluttered, ugly, and hurts the performance of your report. But how do we get around this issue? The answer is simple! Add a drill through to your report!

I will be honest – I was confused by the initial setup, like you might be. It is a little intimidating at first, but once you understand how to configure the drill through, it is easy to implement. As I said at the end of my article about bookmarks in Power BI, you will probably get things wrong the first time. However, with some guidance, I hope you will be able to implement them with ease!

Preparing Your Report

Generally, this starts with a basic report page with a goal. Right now, I have a Sales Data report page that has a lot of information. However, I want to drill down on a manufacturer and learn more about the products being sold. If you look at our current Sales Data tab, you will note that there is some product information, but not the context I want:

Screenshot of the Sales Data report page with a line chart comparing year over year of sales data, revenue by state on a map, a table of the top 25 products and their manufacturer by revenue, and a bar chart of quantity sold by manufacturer.
Current state of the Sales Data report

Our first challenge is to create a meaningful drilldown page that shows information by manufacturer. In my scenario, I wanted to see the top products by both revenue and quantity. I also wanted to see product segments and have the ability to filter by category. The result is a drilldown page that looks like this:

Screenshot of the manufacturer drilldown that shows two tables that return the top 20 products by revenue and quantity respectively. There is a donut chart showing distribution of revenue by segment and a tile slicer for product category.
Manufacturer drilldown report page

You will note that I did not place a slicer for manufacturer on the canvas. This is because the drill through will handle the filtering for us. We will tackle that next.

Add Drill Through Logic

With our detailed report created, we need to enable the drill through functionality. On the report page, we must add the field we want to use for the drill through. Since we want to look at the manufacturer, we will add that field to our visualization pane under drill through:

Adding the manufacturer field from the data panel to the drill through section for the visualization panel.
Adding the drill through field

One thing you might not have noticed right away is the back arrow that was added to the page in the upper left hand corner. This allows you to return to the original page when you drill down on your data. By default, this is set to just go back to the last page. However, you might want to hard code a page navigation. This will ensure you return to the correct page and is a requirement if you allow drill through on multiple levels:

Screenshot of the back button that is added to the page along with the configuration of the button.
Configuring the back button added to the report canvas

You will note that the drill through gives us two additional settings. The first setting, which is on by default, is to keep all filters. This simply maintains the filtering from the page where the drill through occurred, so if that page is filtered to a specific year, the filter is maintained.

The cross-report drill through allows you to make this report page available to other reports. When published to the Power BI service, other reports in the workspace can drill through to this page and see the detail. This allows you to reuse the page across multiple reports, which is very convenient.

Testing the Drill Through

With everything in place, it is time to test our drill through. Going back to the Sales Data page, we can right-click on one of the bars in our quantity by manufacturer visual. From there, we can see the drill through menu option with our new report page:

Screenshot of the drill through menu on our quantity by manufacturer visualization.
Testing our drill through settings

If you have the ability to select your new page, then you did it right! Go ahead and click on it and you will see the new detail page filtered on the manufacturer:

Screenshot of the drill through report page filtered on a manufacturer.
Drill through report page filtered for Van Arsdel

It takes a little work to configure, but the results are beautiful. It does a lot to help clean up your canvas while allowing you to get more detail on a specific dimension. With a clean look like this, it is easier to consume your report and dig further for details.

Anything Else to Consider?

What if you want to use other fields for the drill through on the same report? You can easily facilitate that by adding them to the drill through field. This will give you the ability to use the same page for multiple dimensions to reduce effort.

In addition, consider the level of detail on your page when designing it. The ability to reuse it over and over again will eliminate a lot of repeated effort. If you leverage the cross-report option, your drill through page can become an extremely helpful addition to any report you create.

Conclusion

This is one of the unsung heroes of Power BI reporting. It is hidden in plain sight and not always easy to create. But with a little effort, it can really transform your report. It is worth the time, as it will take the experience of your reports to the next level!

So, have you used drill throughs on your reports? What are your use cases? Did I miss anything? If so, tell me in the comments below!

Refresh from Files Without a Power BI Gateway?!

I cannot begin to tell you how many people I meet who think they need a Power BI Gateway to refresh their dataset using a file as a source. Yes, if you have selected a file data source, you will need to use a Power BI Gateway. However, there is a way around this challenge. It takes a little extra work to prepare your dataset, but you can make it work.

Before everyone starts moaning about using files as a data source, sometimes this is unavoidable. I worked with a third-party business travel platform that did not provide an endpoint for me to connect directly to the data. As a result, I needed to export data on a weekly basis to feed my dashboard. Since then, that platform has resolved this issue.

I also have a current scenario where I do not have access to the APIs for an ERP solution. Because of roles and permissions, I am not allowed to access this data. A creative solution is to export the data through a scheduled job that sends a CSV file to my email. From there, I can use Power Automate to position the data so it can be accessed by the Power BI Service without using a gateway. Is it ugly? No doubt. Somewhat obnoxious of a process? You betcha! But does it work? Absolutely!

What is the Power BI Gateway?

The Power BI Gateway might be new to you. If you are not an administrator of your Power BI environment, you probably do not know about it. The concept is simple – the gateway brokers access to your data sources that reside on premises within your network. Instead of opening up firewall ports or creating crazy firewall rules, you can deploy a gateway to broker the connection.

There are two types of gateways that can be used. The enterprise gateway provides an enterprise-wide connection for your data sources with the controls to manage access to them. Generally, you install the gateway on a server that has access to your data sources. Once it is in place, you configure your data connections within the Power BI Service by adding credentials and delegating access. The biggest benefit, besides the controls, is that you can provide redundancy for your gateway.

The other type of gateway is the personal gateway, which is deployed by an individual user. The data source must be accessible from the device where the personal gateway is installed, and controls of the data source are essentially inherited by the user. This is a quick way to get a gateway in place, but it creates risk because it is tied to an individual user with no oversight. In addition, your device must remain on for the refresh to be successful, which does not always work out.

Regardless of the type, the Power BI Gateway allows you to connect to any data source that resides on premises. While files work with gateways, they are a bit of a burden to manage. Unlike other sources such as SQL Servers, they have a tendency to move around or change. If we can handle this without the gateway, it will be easier for end users to manage without having to involve an administrator in the process.

Bypassing the Power BI Gateway

In short, the easiest way to bypass using a Power BI Gateway for your files is to use SharePoint Online. Since it is part of Microsoft 365 and already in the cloud, you do not need a gateway to connect to your file. Start by making sure your file is in SharePoint and open it in the desktop app from the site. From there, go to the File ribbon, select Info, and click Copy Path.

Once you have a file path, create a new source in Power Query and select the web connector. Paste your URL into the field, but remove the ?web=1 at the end of the path. Click OK and use your Organizational Account to access your file. From there, you will be prompted to go through and access the data based on the file structure. I talk about the web connector with REST APIs, but the concept is the same if you have never used this connector before.

Screenshot of the Web Connector configured to connect to a file inside of SharePoint Online without the Power BI Gateway.
Configuring the Web Connector for your file in SharePoint

Now, this process works well for Excel Workbooks and CSV files as they can be opened with Microsoft Excel. But what about JSON, XML, or Parquet files? You can get around this challenge by simply adding any Microsoft Office file into the directory to get the file path and replace the file name with the correct one. This will save you the effort of building out the URL.
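
For example, the same Web.Contents pattern feeds the other connectors directly. A sketch, using the same site path as the example below and hypothetical file names; keep only the source you actually need:

  let
    // CSV file pulled straight from SharePoint (hypothetical file name)
    CsvData = Csv.Document(Web.Contents(
      "https://<tenant>.sharepoint.com/sites/NYCTaxi/Shared%20Documents/Taxi%20Data/trips.csv")),
    // JSON file pulled the same way (hypothetical file name)
    JsonData = Json.Document(Web.Contents(
      "https://<tenant>.sharepoint.com/sites/NYCTaxi/Shared%20Documents/Taxi%20Data/zones.json")),
    // Parquet file pulled the same way (hypothetical file name)
    ParquetData = Parquet.Document(Web.Contents(
      "https://<tenant>.sharepoint.com/sites/NYCTaxi/Shared%20Documents/Taxi%20Data/trips.parquet"))
  in
    CsvData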

But I Already Have Done So Much!

So often, I have people tell me that they have spent hours transforming their query already and fear they need to start from scratch. But believe it or not, it is easy to resolve. Let’s start with this block of code in your advanced editor:

let
  Source = Excel.Workbook(
    File.Contents("c:\data\taxi_zone_lookup.xlsx"),
    null,
    true
  ),
  ExcelData = Source{[Item = "TaxiZoneTable", Kind = "Table"]}[Data]
in
  ExcelData

We can see from the code block above that we are accessing the file from a folder in the root of our hard drive. Once the file is inside SharePoint, we can replace File.Contents("c:\data\taxi_zone_lookup.xlsx") with Web.Contents and the SharePoint URL:

let
  Source = Excel.Workbook(
    Web.Contents("https://<tenant>.sharepoint.com/sites/NYCTaxi/Shared%20Documents/Taxi%20Data/taxi_zone_lookup.xlsx"),
    null,
    true
  ),
  ExcelData = Source{[Item = "TaxiZoneTable", Kind = "Table"]}[Data]
in
  ExcelData

You can see it was a simple swap in the code. With the change in place, we did not need to make any additional changes to the downstream code. While this code block is small, a query with a lot of downstream transformations would otherwise require considerable rework.

Anything Else to Consider?

A big benefit to moving to the cloud location is that you can easily share your Power BI Desktop file for development work without having to update the file path. You will need to share the file with users who will be working directly with the source, but that is easy to manage.

In addition, you can easily add files to your data source without having to get an administrator involved. With either type of Power BI Gateway, you would likely need help: the enterprise gateway requires a platform administrator to configure the connection, and a personal gateway might require someone with administrator rights to install the software on your machine.

Conclusion

I wish more people talked about managing file refreshes this way. This is a great option, yet Microsoft tends to keep quiet about it. In fact, you can leverage this method to provide a refreshable report from a form you created. Patrick from Guy in a Cube created a video on how to achieve this. Hard to believe how a few small changes could make your life so much easier!

So have you ever created a refreshable interface with Power BI like this? Do you have a use case for this? Tell me about it in the comments below!

Artificial Intelligence is Available Inside of Power Query?!

Unless you have lived under a rock in the past year, you know that artificial intelligence is a popular topic. Whether it is Copilot, ChatGPT, Bard, or OpenAI, artificial intelligence is here to stay. It is time to embrace it and learn how to use it for our benefit.

But surely, the use of artificial intelligence inside of Power Query is brand new, right? Wrong! I remember seeing it available five years ago and it is a powerful tool to integrate with your models! I am not going to be able to go into all of the details of artificial intelligence in this article, but rather help you understand how to integrate it within your existing queries.

What Is Artificial Intelligence?

In the past few years, Artificial Intelligence (AI) has become a force in society. It permeates the world we live in and has become an integral part of how things are done. We see examples of it in our lives through things like chat bots, self-driving cars, and even social media. While some may find it intimidating, the reality is that AI has the power to make our lives easier.

Artificial Intelligence describes technologies that perform tasks that once required human intelligence. When properly deployed, AI models can recognize patterns, solve problems, make decisions, and understand natural language. These models do not organically exist; they are created through extensive training to ensure the results you receive are valid. Some of the work we perform with Power Query may result in the creation of a model, while other work will simply leverage one.

While generative artificial intelligence (GenAI) is all the rage right now, we will not be using it for our queries. Instead, we will be integrating models that have been trained to provide a response. You might be the creator of these models while others are creating them for you. The result is the opportunity to bring back a predictive response based upon your query.

How to Get Started?

While Power Query is available in both Microsoft Excel and Power BI, this functionality is only found in Power BI. While you can leverage similar AI models in Microsoft Excel, it is done outside of Power Query. This is an important item to note if you prefer using Power Query inside of Excel.

Second, you must have some form of premium workspace to enable this functionality. This can be a Premium Per User, Premium Capacity, or Azure Power BI Embedded environment. If you already have one of these environments at your disposal, you are set. If not, check out my article on licensing strategies around Power BI. I recommend spinning up an Azure Power BI Embedded SKU or activating a Premium Per User trial if you want to try these features at a minimal cost.

What if you do not have a Power BI environment available to you? Unfortunately, you will not be able to take advantage of this functionality inside of Power Query. I recommend you work with your organization to spin up a trial to explore this functionality.

Pre-Built Models in Power Query

Power Query allows us to leverage pre-built models. Leveraging Azure Cognitive Services, we can perform text and image analysis on our queries. We can do this from the add column ribbon inside of Power Query:

Screenshot of the add column ribbon inside of Power Query with an arrow pointing to the text analytics function.
Activating the Text Analytics function

If you are doing this for the first time, you will be prompted to allow the analysis of your data. Once you have done that, you can select which model you wish to use:

Screenshot of the Text Analytics screen with options to select the detect language, extract key phrases, and score sentiment models.  Next, you can select the column to analyze and language associated with the text.
Selecting a Text Analytics model for your analysis

As you can see, you can select multiple pre-built models to perform your analysis. With text-based analysis, you should consider both the text and the language being used to ensure the proper analysis is performed. By default, it will use your tenant’s default language. If your column uses the same language consistently, you can hard code it in this screen. However, if you have multiple languages, you might need to use the detect language model to bolster your analysis.

You can also identify tags for images you ingest with Power Query. This is not easily performed and is only practical for small images, so you will likely not use the vision model often. However, you can try it out for yourself:

Screenshot of the tag images model in the vision module from Azure Cognitive Services.
Configuring the tag images model

These models can provide quick analysis and meaningful results with minimal effort. I find that combining these results with other dimensions allows me to sort through responses and find the most meaningful ones in surveys.

Custom Artificial Intelligence Models

If you have created a custom model using Azure Machine Learning, you can leverage it within Power Query. Leveraging a public dataset around diabetes, I created a custom model with Azure Machine Learning. With an endpoint created, I can access it within Power Query with a few clicks:

Screenshot of the home ribbon in Power Query with an arrow pointing to the Azure Machine Learning function.
Activating the Azure Machine Learning function

Once I have selected Azure Machine Learning, I can select which endpoint I wish to use. I have already deployed my model and selected it below:

Screenshot of an Azure Machine Learning endpoint being configured for Power Query.
Configuring my predictive model in Power Query

As you can see, there are multiple inputs my endpoint needs before it can return a prediction. I map each input to a column in my query, and a value must be supplied for each field for this to work.

It is important to consider that Azure Machine Learning uses a consumption pricing model. Every time you refresh your query, it requests new predictions. Keep this in mind when building out your queries, as you could rack up some extra costs without realizing it!

Any Other Options?

If you are a user of R or Python, you can leverage custom algorithms and models inside of Power Query as well. You need to deploy software on your device to make this work, which can be prohibitive if you do not have admin access on your machine. I go into detail on these tools in Chapter 11 of my book, but decided not to discuss them in this post. Primarily, I wanted this to be about easy, out-of-the-box implementation.

Conclusion

It is amazing how artificial intelligence has made its way into Power Query. Many people have no clue it is there or what is needed to activate it. With a little effort, you can really transform your queries in a few clicks. I recommend you try it out to see how you can harness the value of artificial intelligence in your queries!

So did you know about this feature in Power Query? Have you used it before? Are you going to try it out? Tell me about it in the comments below!

Self Service Analytics Does Not Equal Easy

This is a difficult post to write. From a personal perspective, I really struggle with this topic. In my daily work, I assist organizations with enabling a self service data strategy. Collaborating with them, we spend hours finding the right path that matches the organization’s needs and goals with its realities. I am a huge advocate for self service analytics and believe in it. Because of this, I want to be ridiculously helpful with everyone I encounter.

Sadly, I run into people on a regular basis that want “a little help” to perform some analysis with Power BI or Excel. The problem with this is that the ask is quite complicated. The reality of my situation is that I can make the ask happen quickly. I can spend an hour or two on a report and it will look beautiful. Having worked with Power BI for almost ten years, I have a pile of experience to make it happen efficiently. But the requestor wants me to help them build it themselves. This takes three to five times longer because I am guiding someone through the process. And I am all for doing this because it helps enable another person to use Power BI or Excel.

While I am all for helping people, I find that their interest in learning about these tools is often non-existent. They have a goal in mind that needs to be completed and they think Power BI or Excel is an easy way out. It is a self service analytics tool, so it should be easy! And yet they find the opposite. Without taking the time to understand the basics, they end up lost.

What makes this hard for myself and others is that it cheapens the skills we possess as data professionals. Data is not easy. Personally, I find this approach to be a little insulting. So how do we fix this problem?

Setting Expectations for Self Service Analytics

I started responding to every request I receive with clear expectations around self service analytics. I explain the challenges and the skills they need to possess before they embark on such a project. Sometimes this scares them off, but it is a reasonable way to find alignment.

I often share the basics they will need to learn to be successful, starting with data modeling. We talk about things like building a star schema, using a date table, and managing relationships. Sometimes a flat table from Excel or SharePoint is fine, but other times they want more depth and dimension in their reports. Without this in place, they will never achieve their goal.

Next, we discuss how to build out measures. Like every beginner, including myself at one time, they rely on implicit measures. These measures provide basic aggregations and get the job done. However, the requests are often more complex, such as time based analysis or custom filtering. For that, they are going to need to learn how to write DAX expressions for explicit measures.

Lastly, we will discuss what report design looks like. Often, I have a theme file or template for the organization available, so this helps with branding. But it is important to address best practices with data visualizations. This avoids confusion and provides a better consumer experience.

Assign Some Homework

With expectations set, it is time to assign some homework. Self service analytics requires some investment. But before helping anyone, I assign some basic training. The concept is simple – if someone does not know the basics, they are never going to understand the advanced topics.

Microsoft Learn is my go to for training others around the basics of Power BI. Because it is free, the only real investment is time. It covers a lot of the basics of the tool and allows me to focus on the more advanced topics. I, just like you, need to protect my time. The more I can push to other resources, the easier it is for me to stay focused on my work.

There are two benefits to this method. First, I can ask for a badge validating that they completed the training. Second, it helps gauge the interest of someone who is asking for help. The course linked above takes about three hours to complete. If they cannot invest three hours in themselves, then why should you invest three hours in them? The answer to that question is up to you, but it sets expectations for the relationship you will have through this engagement.

Create Working Sessions

At this point, if they have completed their homework, I recommend creating working sessions to set boundaries. I start with setting up 45 minute sessions with a clear agenda. Ideally, a weekly working session is best. Naturally, you might need to schedule more frequent sessions based upon the business requirements. It is important that you agree upon the cadence to avoid conflict in the future.

For each working session, start with a recap of the previous session: review the previous topic and any progress made since you last met. Then, work through the assigned topic for the current week. Make sure the session is recorded so it can be referenced afterwards. Lastly, decide what the topic of the next working session will be. This helps keep future sessions focused and your timeline compact.

I recommend that you continue assigning homework. For example, if you have an upcoming working session that will focus on a particular DAX statement, provide resources ahead of time. If you have a measure that needs to use the SWITCH() function, send them to an article like this one to help them understand the concept ahead of time. You can also assign work to be completed in their report before the next session to streamline your working sessions.

As a rule of thumb, I work to keep sessions scheduled. An ad-hoc working session often ends up being a mess. You also have other deliverables and expectations from your leadership that could get in the way. If someone needs time with me between sessions, I generally ask them to schedule it. I feel like the “Can we hop on a quick call? I have a quick question!” conversation always comes at the most inopportune time. Setting these boundaries goes a long way to ensuring you can maintain a balance in your workday.

Provide Some Learning

It never hurts to provide additional learning and training, and there are several books and sites out there to help. If you need help with Power Query, you can check out my book as an option. If it is DAX, I recommend The Definitive Guide to DAX by Marco Russo and Alberto Ferrari. Lastly, you can suggest Now You See It by Stephen Few if they need help with data visualization.

At some point, they will need to embrace self service analytics on their own. By providing materials, you help them focus on their learning and sustain their own solutions. These resources will provide assistance both during your working sessions and in the future.

If self service analytics is a goal, investment in learning is key. I see customers who continue to struggle with this because they do not invest in training and learning. When you are working with an individual, at some point they will need to invest more in themselves to achieve that goal. If you do not help them head down that path, you will be stuck helping them forever.

Conclusion

I love helping people discover the world of self service analytics, and I hope you do too! Just make sure you help others in a sustainable manner. These types of relationships are good to have, but you need to manage them well. Be ridiculously helpful – just take care of yourself as well!

Have you encountered this situation? Do you struggle with people who think self service analytics is easy? If so, tell me about it in the comments below!

Custom Headers with Calculation Groups

In working with a customer a few weeks back, I ran into another odd scenario. This was a new ask for me, but it made sense in a post COVID-19 world. Normally when I build out time intelligence with Calculation Groups, I only perform comparisons between the previous year and the current year. However, this customer wanted to go back a few years to see the post COVID-19 impact on their metrics. Naturally, Calculation Groups make quick work of a request like this. However, there was no built-in way to create custom headers, and they found the generic ones confusing:

Screenshot of generic headers assigned from calculation groups.
Matrix with generic column headers

The use of generic headers led to confusion when reading the visualization. On top of that, they had a fiscal year that started on July 1st, which made it more complicated to understand which fiscal year they were currently in. In short, they wanted dynamic column headers in the matrix visual that aligned with the associated fiscal year. If you were looking at the 2023-2024 fiscal year, the labels needed to translate accordingly:

Calculation Group Label    Custom Column Label
CYm3                       2020-2021
CYm2                       2021-2022
CYm1                       2022-2023
CY                         2023-2024
YoY                        Change from Last Year
YoY_Pct                    % Change from Last Year
CYm3 Year End              2020-2021 YE
CYm3 Variance YE           2020-2021 Var YE
CYm2 Year End              2021-2022 YE
CYm2 Variance YE           2021-2022 Var YE
CYm1 Year End              2022-2023 YE
CYm1 Variance YE           2022-2023 Var YE
Generic versus custom labels

Well, this was a challenge, and after all of my research I found a possible approach. While I was able to get it to work, I am not 100% sure it is the best solution. However, it worked for my situation, and while the performance is not the greatest, it got the job done.

To get started, you need to understand Calculation Groups and Tabular Editor. If you have not used them before, the folks over at SQLBI have a great series on how to use Calculation Groups. And if you have not already done so, head over to TabularEditor.com to download the latest version to create your Calculation Groups. For our scenario today, we are going to assume you already have your Time Intelligence measures set up. Our focus is on applying the logic for the headers.

Creating the Column Headers

There are a few challenges with the creation of these column headers. One, they must map to a calculation group item. Two, they must follow the naming convention outlined above. And three, they must be properly sequenced.

We are building our logic inside of Power Query under the assumption that we do not have access to the data source to enact this solution. This lets us see how versatile Power Query can be for our projects. However, you are welcome to perform this logic upstream in SQL Server if you have one available.

If you are not comfortable with Power Query, I suggest you check out my book, which will give you the knowledge and experience to create this table.

Reference your date table

The first step in the process is to right-click on our date table and reference it in a new query. From there, we will right-click on the Fiscal Year column and remove all other columns. Once completed, we will remove the duplicates in the fiscal year column so our results look like this:

Screenshot of our custom column headers query starting with the fiscal year data extracted from bi_date.
Referencing bi_date for our custom column headers

This is the easiest way to ensure that your custom column headers will align with your current date table. This will eliminate any issues with it being out of sync in the future.

Add mappings for Calculation Groups

Our next step is to add our mappings so our custom columns will leverage the right calculation group measures. Referencing the table above, we will add a new column for each calculation group item. We will also fill in the column with the sequence value in the order we wish to display these items.

For each measure, go to the Add Column ribbon and select Custom Column. Change the column name from “Custom” to the label from the Calculation Group. Then add the sequence number in the formula box below:

Screenshot of creating a custom column for each measure from Calculation Groups
Create a custom column for each measure

When finished, your list of labels should look something like this:

Screenshot of all measures from Calculation Groups added into the query as columns.
All measures loaded as columns

The last thing we need to do is unpivot our columns so they become rows in the dataset. Right-click on the Fiscal Year column and select “Unpivot Other Columns”. This will turn the other columns into rows. Rename the new columns Measure and MeasureSequence respectively:

Screenshot of measures from the Calculation Groups unpivoted into rows
Measures unpivoted into rows for use

Now that this has been done, it is time to build out our logic for the labels.

Assigning labels with a conditional column

At this point, you have two choices. Because of the complex nature of our labels, you will eventually need to write them out in the Advanced Editor. The question is whether you would prefer to get the conditional column logic in place first. If you are not comfortable writing the logic by hand, you can start with the Conditional Column graphical interface and refine the results afterward. Otherwise, you can write the if/then/else statements directly in the Advanced Editor.

I am not going to get into the weeds with this code as a number of functions were used. However, I will provide a full printout of the code for this query when we are finished. In the end, our goal is to have our labels show up like this:

Screenshot of custom column header labels for our calculation groups
Creating custom column header labels

This process will take some time, but the trick is to key your logic off the measure sequence. Using that value makes the creation of the column headers much easier. It takes some practice, but you will get there quickly.

Address sequence of columns

This was the trickiest part. We will implement this solution using conditional columns, but there is a catch. We will be sorting our custom header column by this new sequence column, which means every distinct header value must map to exactly one distinct sequence value. So how do we make that work?

The trick is grouping. CY, CYm1, CYm2, and CYm3 are part of group one. YoY is group two, YoY_Pct is group three, and the year end and year end variance columns are part of group four.

Now, you might already be concerned about a collision. And you are correct, there is one at this moment. But we are going to do one more thing to fix it. For groups one and four, we are going to add the fiscal year as a decimal value, which lets us keep consistent headers while maintaining the sequencing. For example, with a 2023-2024 fiscal year, the CYm3 header gets a sequence of (2023 - 3) / 10,000 + 1 = 1.2020, while its year end and variance counterparts land at 4.2020 and 4.20201. I promise you that I have spent hours figuring this out and I do not know of a better way to achieve this outcome. If you think there is a better way to do it, give it a try. But I know for a fact that this outcome works:

Screenshot of the header sequence column for our custom headers.
Sequence column added for our custom headers

Now that we have this in place, we can look at the final code.

Final Code – Custom Column Headers

As you can tell, this was not an easy process. It took a lot of custom coding to make it work. However, the results will speak for themselves shortly. Regardless, here is the code we used to achieve our outcome:

let
  // Reference the date table and keep one row per fiscal year
  Source = bi_date, 
  #"Removed Other Columns" = Table.SelectColumns(
    Source, 
    {"FiscalYear"}
  ), 
  #"Removed Duplicates" = Table.Distinct(#"Removed Other Columns"), 
  #"Added CY - 3" = Table.AddColumn(#"Removed Duplicates", "CYm3", each 0, Int64.Type), 
  #"Added CY - 2" = Table.AddColumn(#"Added CY - 3", "CYm2", each 1, Int64.Type), 
  #"Added CY - 1" = Table.AddColumn(#"Added CY - 2", "CYm1", each 2, Int64.Type), 
  #"Added CY" = Table.AddColumn(#"Added CY - 1", "CY", each 3, Int64.Type), 
  #"Added YoY" = Table.AddColumn(#"Added CY", "YoY", each 4, Int64.Type), 
  #"Added YoY%" = Table.AddColumn(#"Added YoY", "YoY_Pct", each 5, Int64.Type), 
  #"Added CY - 3 YE" = Table.AddColumn(#"Added YoY%", "CYm3 Year End", each 6, Int64.Type), 
  #"Added CY - 3 Var YE" = Table.AddColumn(
    #"Added CY - 3 YE", 
    "CYm3 Variance YE", 
    each 7, 
    Int64.Type
  ), 
  #"Added CY - 2 YE" = Table.AddColumn(
    #"Added CY - 3 Var YE", 
    "CYm2 Year End", 
    each 8, 
    Int64.Type
  ), 
  #"Added CY - 2 Var YE" = Table.AddColumn(
    #"Added CY - 2 YE", 
    "CYm2 Variance YE", 
    each 9, 
    Int64.Type
  ), 
  #"Added CY - 1 YE" = Table.AddColumn(
    #"Added CY - 2 Var YE", 
    "CYm1 Year End", 
    each 10, 
    Int64.Type
  ), 
  #"Added CY - 1 Var YE" = Table.AddColumn(
    #"Added CY - 1 YE", 
    "CYm1 Variance YE", 
    each 11, 
    Int64.Type
  ),
    #"Unpivoted Columns" = Table.UnpivotOtherColumns(
    #"Added CY - 1 Var YE", 
    {"FiscalYear"}, 
    "Measure", 
    "MeasureSequence"
  ), 
  #"Added ColumnHeaders" = Table.AddColumn(
    #"Unpivoted Columns", 
    "ColumnHeaders", 
    each 
      if [MeasureSequence] = 0 then
        Number.ToText([FiscalYear] - 3)
          & "-"
          & Number.ToText([FiscalYear] - 2)
      else if [MeasureSequence] = 1 then
        Number.ToText([FiscalYear] - 2)
          & "-"
          & Number.ToText([FiscalYear] - 1)
      else if [MeasureSequence] = 2 then
        Number.ToText([FiscalYear] - 1)
          & "-"
          & Number.ToText([FiscalYear])
      else if [MeasureSequence] = 3 then
        Number.ToText([FiscalYear])
          & "-"
          & Number.ToText([FiscalYear] + 1)
      else if [MeasureSequence] = 4 then
        "Change from Last Year"
      else if [MeasureSequence] = 5 then
        "% Change from Last Year"
      else if [MeasureSequence] = 6 then
        Number.ToText([FiscalYear] - 3)
          & "-"
          & Number.ToText([FiscalYear] - 2)
          & " YE"
      else if [MeasureSequence] = 7 then
        Number.ToText([FiscalYear] - 3)
          & "-"
          & Number.ToText([FiscalYear] - 2)
          & " Var YE"
      else if [MeasureSequence] = 8 then
        Number.ToText([FiscalYear] - 2)
          & "-"
          & Number.ToText([FiscalYear] - 1)
          & " YE"
      else if [MeasureSequence] = 9 then
        Number.ToText([FiscalYear] - 2)
          & "-"
          & Number.ToText([FiscalYear] - 1)
          & " Var YE"
      else if [MeasureSequence] = 10 then
        Number.ToText([FiscalYear] - 1)
          & "-"
          & Number.ToText([FiscalYear])
          & " YE"
      else if [MeasureSequence] = 11 then
        Number.ToText([FiscalYear] - 1)
          & "-"
          & Number.ToText([FiscalYear])
          & " Var YE"
      else
        "Error", 
    type text
  ), 
  #"Added HeaderSequence" = Table.AddColumn(
    #"Added ColumnHeaders", 
    "HeaderSequence", 
    each 
      if [MeasureSequence] = 0 then
        (([FiscalYear] - 3) / 10000) + 1
      else if [MeasureSequence] = 1 then
        (([FiscalYear] - 2) / 10000) + 1
      else if [MeasureSequence] = 2 then
        (([FiscalYear] - 1) / 10000) + 1
      else if [MeasureSequence] = 3 then
        (([FiscalYear]) / 10000) + 1
      else if [MeasureSequence] = 4 then
        2
      else if [MeasureSequence] = 5 then
        3
      else if [MeasureSequence] = 6 then
        (([FiscalYear] - 3) / 10000) + 4
      else if [MeasureSequence] = 7 then
        (([FiscalYear] - 3) / 10000) + 4.00001
      else if [MeasureSequence] = 8 then
        (([FiscalYear] - 2) / 10000) + 4
      else if [MeasureSequence] = 9 then
        (([FiscalYear] - 2) / 10000) + 4.00001
      else if [MeasureSequence] = 10 then
        (([FiscalYear] - 1) / 10000) + 4
      else if [MeasureSequence] = 11 then
        (([FiscalYear] - 1) / 10000) + 4.00001
      else
        0, 
    type number
  )
in
    #"Added HeaderSequence"

It probably looks worse than you imagined. However, keep in mind that much of this code exists to ensure a solid experience with our custom headers.

At this point, we will close and apply our query changes and load them into our data model.

Preparing the data model

At this point, we have created our custom headers. Next, we must map them for use with our Calculation Groups and fit them into our model. We will need to build a relationship between our date table and our custom headers table and create a custom measure to leverage our calculation group. While we are in the data model, it is also a good time to set the ColumnHeaders column to sort by the HeaderSequence column so the matrix respects the ordering we just built.

Connect date and custom headers table

Our first task is to connect our date table with our custom column headers table. This will filter the custom column headers table down to only the headers we need for a selected fiscal year. We will simply connect the fiscal year columns on both tables to build the relationship.

Now, you might be concerned about the many-to-many relationship being established. While often discouraged, there are times when they are required, and this is one of those moments. We will go ahead and click OK to create the relationship:

Screenshot of the many to many relationship being created between the date and custom column headers tables.
Connecting the date and custom column headers tables

With this relationship in place, we can now focus on creating a measure to return our desired results!

Custom measure for Calculation Groups

The last step before we create our visualization is to create a measure that leverages our calculation groups, but is driven from the custom column headers table. A fun fact that you might not realize is that you can use your calculation groups as a filter in a CALCULATE() function. We are going to use the measure names we stored in the custom column headers table to perform that exact filtering!

The formula is simple. We capture the calculation group item name that is in context from the custom column headers table, then use CALCULATE() over our Revenue measure to apply it. The result is a measure that looks like this:

Total Revenue = 
VAR SelectedItem = MAX ( CustomColumnHeaders[Measure] )
RETURN
    CALCULATE (
        [Revenue],
        TimeIntelligence[Name] = SelectedItem
    )

With our measures set, we just need to create our matrix.

Add custom headers to our matrix

To complete the process, we need to simply update our matrix. For the column headers, replace the time intelligence field with the custom headers from our custom column headers table. We then need to replace our measure with our new custom measure to ensure the data comes back correctly. The result is the following:

Screenshot of our custom column headers associated with our calculation groups.
Custom column headers for our matrix

This is quite the process to set up, but it can be rewarding when it works out. It will stretch your skills in a few areas within Power BI. However, the results speak for themselves.

Anything else to consider?

Keep in mind that if you go this route, keeping this logic in sync with your calculation groups will grow increasingly complex. If you need to add a new calculation item, it will need to be added manually in Power Query, which adds effort to every update.

My recommendation is that you take the time upfront to establish all of the requirements. This will help you avoid rework in the future.

Conclusion

First of all, if you read all of this and understood it – congratulations! This is a very complex topic and not easy to digest. You might need to re-read this article a few times to fully comprehend what happened, but I promise it will work out. What is important is that you have a grasp of the key elements.

So have you ever run into a requirement like this? Do you have a more efficient way to achieve this outcome? Do you plan on using something like this in the future? Tell me about it in the comments below!

Dynamically Compare Customers in a Report

I was working with a customer recently and she came up with an interesting use case for Power BI. The idea was simple – she needed to compare a manufacturer against the top three manufacturers within a period of time. Thinking it was pretty straightforward, I suggested she just use a bar chart, filter down to those manufacturers, and call it a day. If only it were that simple!

She came back and explained that she needed to dynamically select the chosen manufacturer, and in a perfect world, she would also like to change the threshold of top manufacturers. Naturally, this whole scenario became far more complex! Being someone who does not back down from a challenge, I decided to give it a shot!

To help break down how this will work, I decided to use DAX Studio. It is a free tool that you can download and install from here. This is not a prerequisite, but it does make explaining the measure easier.

Part 1 – Design the measure in DAX Studio

The first step in the process is to build out our base query. To start, we will pull back a list of manufacturers and the revenue associated with them. We will return that query ordered with the highest-revenue manufacturer at the top:

DAX Studio window with a list of manufacturers and their associated revenue totals.
List of all manufacturers and their associated revenue totals
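
If you are following along, a minimal sketch of that starting query might look like this, assuming the same bi_manufacturer table and [Revenue] measure that appear in the final code below:

-- Every manufacturer with its total revenue, highest revenue first
EVALUATE
ADDCOLUMNS (
    SUMMARIZE ( bi_manufacturer, bi_manufacturer[Manufacturer] ),
    "Revenue", [Revenue]
)
ORDER BY [Revenue] DESC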

Next, we need to limit the list to only the top number of manufacturers selected. We will add a variable with a hard coded value of three for now. However, we will make it dynamic later on. We also need to add a filter to our list of manufacturers so only the top three return. We will use RANKX() to achieve that result.

Query in DAX Studio updated to only return the top three manufacturers and their associated revenue totals.
Returning the top three manufacturers by revenue

Next, we want to declare a manufacturer and return the associated revenue. For now, we will hard code the manufacturer Fama and make it dynamic later. We will then append that manufacturer and its revenue to our top three list.

Screenshot of DAX Studio showing the top three manufacturers revenue plus our revenue for Fama.
Final code from DAX Studio

If you are following along at home, the final code for what we put together in DAX Studio is as follows:

DEFINE
    VAR TopRank = 3
    VAR SelectedManufacturer = "Fama"
    VAR ListOfManufacturers =
        ADDCOLUMNS (
            SUMMARIZE ( bi_manufacturer, bi_manufacturer[Manufacturer] ),
            "Revenue", [Revenue]
        )
    VAR FilteredList =
        FILTER (
            ListOfManufacturers,
            RANKX ( ListOfManufacturers, [Revenue],, DESC ) <= TopRank
        )
    VAR SelectedManufacturerRevenue =
        CALCULATE ( [Revenue], bi_manufacturer[Manufacturer] = SelectedManufacturer )
    VAR UnifiedList =
        UNION (
            FilteredList,
            { ( SelectedManufacturer, SelectedManufacturerRevenue ) }
        )

EVALUATE
UnifiedList
ORDER BY [Revenue] DESC

With that set, we can now turn our attention back to Power BI to prepare our report for the next step.

Part 2 – Preparing Power BI

There are two elements we must prepare for this scenario. First, we must prepare an independent manufacturer list. To avoid any conflicts with filtering, we will create it as a new query that parallels our existing bi_manufacturer table and use it in our calculation.

Next, we will need to add our top manufacturer threshold selection. Right now, the value three is hard coded in our formula, but we want it to be dynamic. We will use a what-if parameter to achieve that goal.

Preparing the manufacturer list

The next step in our process is to prepare a list of distinct manufacturers. You could choose to create a DAX table to achieve this, but I suggest performing this in Power Query. I simply referenced my bi_manufacturer table, removed my extra columns, and updated the name to ManufacturerList.

Image of my prepared manufacturer list inside of Power Query.
List of manufacturers in Power Query
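
If you would rather take the DAX table route mentioned above, a minimal sketch of an equivalent calculated table might look like this, again assuming the bi_manufacturer table used throughout:

-- Calculated table with one row per distinct manufacturer
ManufacturerList =
DISTINCT ( bi_manufacturer[Manufacturer] )

Either way, keep this table disconnected from the rest of the model so that selecting a manufacturer in the slicer does not filter your revenue directly.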

Next, I will need to create a measure to identify the selected manufacturer. This will allow me to pick my manufacturer from a slicer. And if a single value has not been selected, it will return an empty string instead.

Selected Manufacturer = SELECTEDVALUE(ManufacturerList[Manufacturer], "")

With all of this in place, we just need to add a slicer on our report page.

Screenshot of our Top N Compare report canvas with the Manufacturer slicer present and set to Fama.
Top N Compare report page with Fama selected from the ManufacturerList

With this in place, it is time to add our threshold measure.

Preparing the Top N selector

In the past, I wrote an article on leveraging What If Parameters in Power BI. In that scenario, we used decimal values for percentages. This time, we will use whole numbers. However, a small change has occurred since that article was published: there are now two types of parameters to choose from, and we will be selecting the numeric range parameter to achieve our goal. Head to the Modeling ribbon and select the Numeric range option from the New parameter dropdown. You will need to update the highlighted fields below:

Screenshot of selecting and updating a numeric range parameter in Power BI.
Adding a numeric range parameter in Power BI

Leaving the add slicer option checked, click create. This will generate a new table and measure that work in concert with the new slicer. We do not have to do anything extra at this point, but it is worth noting that you could also create these elements manually if you prefer.
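
For reference, the generated pieces follow the standard what-if parameter pattern. Here is a rough sketch assuming a minimum of 1, a maximum of 10, an increment of 1, and a default of 3 – your names and values will match whatever you enter in the dialog:

-- Generated parameter table (illustrative values)
Top N Manufacturers = GENERATESERIES ( 1, 10, 1 )

-- Generated measure that reads the slicer selection
Top N Manufacturers Value = SELECTEDVALUE ( 'Top N Manufacturers'[Top N Manufacturers], 3 )

The [Top N Manufacturers Value] measure is what we will reference in our DAX code in the next part.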

Part 3 – Modify DAX Studio code for use in Power BI

Time for the grand finale! With all the hard work behind us, the last few steps are about making some small adjustments to our DAX Studio code. We will start by creating a new measure and adapting our code for use in Power BI.

Taking the code from above, the first step is to remove the word DEFINE from the block. Next, we will replace our hard coded values for the top rank and the selected manufacturer with the [Top N Manufacturers Value] and [Selected Manufacturer] measures. If you keep a dedicated report measures table, note that [Top N Manufacturers Value] will not appear there unless you move it – check under the new parameter table we just created.

Next, we will add a new variable at the top called ManufacturerLine and use it to bridge our revenue to the right manufacturer in the visual. The code will look like this:

VAR ManufacturerLine = MAX(bi_manufacturer[Manufacturer])

Lastly, we will change our EVALUATE to a RETURN and update the value to perform a SUMX() on the revenue of our unified list where the ManufacturerLine value matches the Manufacturer. Our final code for our measure should look like this:


Top N Revenue = 
VAR TopRank = [Top N Manufacturers Value]
VAR SelectedManufacturer = [Selected Manufacturer]
VAR ManufacturerLine =
    MAX ( bi_manufacturer[Manufacturer] )
VAR ListOfManufacturers =
    ADDCOLUMNS (
        SUMMARIZE ( bi_manufacturer, bi_manufacturer[Manufacturer] ),
        "Revenue", [Revenue]
    )
VAR FilteredList =
    FILTER (
        ListOfManufacturers,
        RANKX ( ListOfManufacturers, [Revenue],, DESC ) <= TopRank
    )
VAR SelectedManufacturerRevenue =
    CALCULATE ( [Revenue], bi_manufacturer[Manufacturer] = SelectedManufacturer )
VAR UnifiedList =
    UNION (
        FilteredList,
        { ( SelectedManufacturer, SelectedManufacturerRevenue ) }
    )
RETURN
    SUMX ( FILTER ( UnifiedList, [Manufacturer] = ManufacturerLine ), [Revenue] )

At this point, I added my Top N Revenue measure to my table to compare the results, and I discovered two issues. One, it returns revenue for every manufacturer, not just the Top N and the selected manufacturer. Two, it doubles the revenue for Fama:

Screenshot of issues from our new Top N Revenue measure.
Top N Revenue measure not working properly

To fix every value coming back, we will simply add an ALL() around our original list of manufacturers. Without it, the list is filtered to the single manufacturer on each row of the visual, so every manufacturer ranks first against a list of one and passes the filter. With ALL(), the ranking is performed against the full list of manufacturers, so only the true Top N survive:


Top N Revenue = 
VAR TopRank = [Top N Manufacturers Value]
VAR SelectedManufacturer = [Selected Manufacturer]
VAR ManufacturerLine =
    MAX ( bi_manufacturer[Manufacturer] )
VAR ListOfManufacturers =
    ADDCOLUMNS (
        SUMMARIZE ( ALL ( bi_manufacturer ), bi_manufacturer[Manufacturer] ),
        "Revenue", [Revenue]
    )
VAR FilteredList =
    FILTER (
        ListOfManufacturers,
        RANKX ( ListOfManufacturers, [Revenue],, DESC ) <= TopRank
    )
VAR SelectedManufacturerRevenue =
    CALCULATE ( [Revenue], bi_manufacturer[Manufacturer] = SelectedManufacturer )
VAR UnifiedList =
    UNION (
        FilteredList,
        { ( SelectedManufacturer, SelectedManufacturerRevenue ) }
    )
RETURN
    SUMX ( FILTER ( UnifiedList, [Manufacturer] = ManufacturerLine ), [Revenue] )

To fix the doubled revenue, we need to handle the collision where our selected manufacturer is also in the top list. In that case, the UNION() contains that manufacturer twice, so the SUMX() adds its revenue twice. We just need to add some logic to our return statement to check whether the manufacturer is already present in the top list. If it is, we simply return the value from the top list and move on.

Top N Revenue = 
VAR TopRank = [Top N Manufacturers Value]
VAR SelectedManufacturer = [Selected Manufacturer]
VAR ManufacturerLine =
    MAX ( bi_manufacturer[Manufacturer] )
VAR ListOfManufacturers =
    ADDCOLUMNS (
        SUMMARIZE ( ALL ( bi_manufacturer ), bi_manufacturer[Manufacturer] ),
        "Revenue", [Revenue]
    )
VAR FilteredList =
    FILTER (
        ListOfManufacturers,
        RANKX ( ListOfManufacturers, [Revenue],, DESC ) <= TopRank
    )
VAR SelectedManufacturerRevenue =
    CALCULATE ( [Revenue], bi_manufacturer[Manufacturer] = SelectedManufacturer )
VAR UnifiedList =
    UNION (
        FilteredList,
        { ( SelectedManufacturer, SelectedManufacturerRevenue ) }
    )
RETURN
    IF (
        SUMX ( FILTER ( FilteredList, [Manufacturer] = ManufacturerLine ), [Revenue] ) > 0,
        SUMX ( FILTER ( FilteredList, [Manufacturer] = ManufacturerLine ), [Revenue] ),
        SUMX ( FILTER ( UnifiedList, [Manufacturer] = ManufacturerLine ), [Revenue] )
    )

And the results speak for themselves:

Screenshot of the Top N Revenue values corrected.
Top N Revenue working properly

Is it a lot of work to enact? Yes. But is there an opportunity to extract valuable insights from this type of report? Also yes.

Where to go next?

If you want to take this to the next level, consider adding some conditional formatting. Go back and read my article on creating conditional formatting with a column chart. This will allow you to call out your selected manufacturer much more easily. It is a small adjustment, but it could go a long way toward enhancing your report design.

It is worth considering that my queries to achieve this outcome were very simple. Depending on what measures you are comparing, this could become a very cumbersome solution. It is important that you take the time to review your report’s performance and ensure it does not overwhelm your capacity in the Power BI Service. Use Performance Analyzer and continue to tune your measures to avoid any issues rendering this data.

Conclusion

This is a bit of a challenging project. Even while writing this one, I was getting exhausted. It seems simple in concept, but it requires a lot of effort to achieve the desired outcome. I hope that if you run into a scenario with this requirement, you will find this solution helpful!

So, have you seen anything like this before? Have you run into challenges like this? If so, make sure to share in the comments below!
