
Author: Dominick Raimato

Copilot Reports with Microsoft Fabric – Part 3

We have been on a journey over the past two weeks to deploy Copilot Reports with the new user activity reporting endpoint. While the journey has taken some time, we have arrived at the final step: deploying our reports! You might have already done some of this yourself, but I developed a template to make your life easier.

At the end of this article, you will have a report deployed to help you track week-over-week utilization of Copilot for Microsoft 365 within your organization. As a bonus, we will implement a few other reports to help you along the way on your adoption journey.

By the way, if you have not deployed Copilot for Microsoft 365 yet, you will still want to deploy this report. There is a special page that will identify potential top users of Microsoft 365 within your organization. These users are generally primed to participate in a Copilot for Microsoft 365 pilot program and can potentially serve as champions for your organization.

First Things First – I Made an Error

If you have been following this series, please note that I discovered an error in our Dataflow code. I resolved this error on November 16, 2024 at 9:30 AM Eastern time (New York City time) and updated my GitHub repository. If you applied the code before that correction was made, you will need to edit the CopilotData_Raw table. Open the advanced editor and replace the contents with the following code:

let
  GetCopilotData = (Params) =>
    let
      Path = if Text.Length(Params) > 0 then Params else "$top=1000", 
      Source = Json.Document(
        Web.Contents(
          "https://graph.microsoft.com/beta/reports/getMicrosoft365CopilotUsageUserDetail(period='D30')", 
          [RelativePath = "?" & Path, Headers = [Authorization = "Bearer " & #"Get-AccessToken"()]]
        )
      ), 
      NextList = @Source[value], 
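      // Follow @odata.nextLink recursively until no more pages of results remain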
      result = try
        @NextList & @GetCopilotData(Text.AfterDelimiter(Source[#"@odata.nextLink"], "?"))
      otherwise
        @NextList
    in
      result, 
  CopilotData = GetCopilotData(""), 
  #"Converted to table" = Table.FromList(
    CopilotData, 
    Splitter.SplitByNothing(), 
    null, 
    null, 
    ExtraValues.Error
  ), 
  #"Expanded Column1" = Table.ExpandRecordColumn(
    #"Converted to table", 
    "Column1", 
    {
      "reportRefreshDate", 
      "userPrincipalName", 
      "displayName", 
      "lastActivityDate", 
      "copilotChatLastActivityDate", 
      "microsoftTeamsCopilotLastActivityDate", 
      "wordCopilotLastActivityDate", 
      "excelCopilotLastActivityDate", 
      "powerPointCopilotLastActivityDate", 
      "outlookCopilotLastActivityDate", 
      "oneNoteCopilotLastActivityDate", 
      "loopCopilotLastActivityDate", 
      "copilotActivityUserDetailsByPeriod"
    }, 
    {
      "reportRefreshDate", 
      "userPrincipalName", 
      "displayName", 
      "lastActivityDate", 
      "copilotChatLastActivityDate", 
      "microsoftTeamsCopilotLastActivityDate", 
      "wordCopilotLastActivityDate", 
      "excelCopilotLastActivityDate", 
      "powerPointCopilotLastActivityDate", 
      "outlookCopilotLastActivityDate", 
      "oneNoteCopilotLastActivityDate", 
      "loopCopilotLastActivityDate", 
      "copilotActivityUserDetailsByPeriod"
    }
  ), 
  #"Removed columns" = Table.RemoveColumns(
    #"Expanded Column1", 
    {"displayName", "copilotActivityUserDetailsByPeriod"}
  ), 
  #"Renamed columns" = Table.RenameColumns(
    #"Removed columns", 
    {
      {"lastActivityDate", "Last Activity Date"}, 
      {"copilotChatLastActivityDate", "Copilot Chat"}, 
      {"microsoftTeamsCopilotLastActivityDate", "Copilot for Teams"}, 
      {"wordCopilotLastActivityDate", "Copilot for Word"}, 
      {"excelCopilotLastActivityDate", "Copilot for Excel"}, 
      {"powerPointCopilotLastActivityDate", "Copilot for PowerPoint"}, 
      {"outlookCopilotLastActivityDate", "Copilot for Outlook"}, 
      {"oneNoteCopilotLastActivityDate", "Copilot for OneNote"}, 
      {"loopCopilotLastActivityDate", "Copilot for Loop"}
    }
  )
in
  #"Renamed columns"

In short, I missed the paging requirement for this endpoint. I assumed that this endpoint was like the other Graph reports endpoints. But then again, we all know what happens when we assume! Regardless, with this small change in place, you are ready to collect data for your entire organization.

If you are not up to speed, go back to my first post in this series and start collecting data to build your Copilot reports.

Deploying the Copilot Reports Template

To get started, go to my GitHub repository and download the Copilot Utilization Report template. It is a .pbit file that we will connect to the semantic model built off your Lakehouse. Once you have downloaded the template, you can open it. You will immediately receive an “error”, but that is simply because you need to connect to your semantic model:

Screenshot of connecting our Copilot Reports to our semantic model attached to our Lakehouse.
Connecting to our semantic model associated with our Lakehouse.

With the template connected, we can save the report and deploy it by publishing it to Power BI! Before we get too far, let’s take a quick tour of the report.

Copilot Reports – High Level Overview

The template comes with nine pages. Three focus on Copilot activity, four focus on collaboration behaviors within Microsoft 365, and one is the Collaboration Score page. There is also a report template page available for you to create additional custom reports.

Each page has a slicer panel to filter based upon your user information from Entra ID. With the caveat that your user information must be accurate, you can easily filter to see how users leverage tools within Microsoft 365. You can filter by department, company, office location, or even job title. You can also filter based upon the licenses assigned to users.

In addition to slicers, each page has a help button that, when activated, provides an overlay to help you understand the report visualizations. This will help you and any future consumers understand how the report should be interpreted and how values are calculated. Because of this, I will not go into great detail here, but will instead give you a high-level overview of the pages themselves.

How to Use the Copilot Reports

There are three pages dedicated to Copilot utilization. The first page, Copilot Activity, dives into the utilization of Copilot over the past week and allows you to trend utilization over time. Remember, we can only capture the date when someone last used Copilot, and Microsoft releases the data somewhat inconsistently, so it is best to smooth things out and compare utilization week over week.

The Active Copilot Users page allows us to quickly identify which users used Copilot over the past week. In addition, it shows how many applications each user leveraged Copilot within. For example, if you used Copilot for Microsoft 365 within Microsoft Teams, Word, and PowerPoint, you would have an app count of three. This can help you find users for a focus group to learn more about use cases.

The Inactive Copilot Users page allows you to identify who is not using their assigned license. During a pilot, we want to make sure that everyone is leveraging Copilot, and this report helps us track and activate dormant licenses. It also helps us find users who have used Copilot for Microsoft 365 in the past but have not used it in over two weeks. This might prompt you to reassign licenses to users who have more time to spend exploring Copilot.

While not a perfect science, these three reports will help you drive a quality pilot program. Beyond your pilot, they will help you drive broader adoption across the organization as well.

How to Use the Activity Analysis Reports

Why do I care about utilization of Microsoft 365 tools? Remember, Copilot for Microsoft 365 is using data from your tenant. The better you use the platform, the better your experience with Copilot for Microsoft 365 will be!

Each page is dedicated to the core utilization metrics available from Microsoft Graph. While these are high-level metrics, they are enough to understand general trends over the past thirty days. Our goal is to see balance between applications and the drivers of quality collaboration.

How to Use the Collaboration Score Report

There is a difference between using software and adopting software. Even though Microsoft Teams has been the standard since Skype for Business was retired in 2020, it is still not well adopted in many organizations. The concept of the Collaboration Score is to see how well people are using their collaboration tools. But how does it work?

The Collaboration Score focuses on behaviors that lead to sharing knowledge. You could have people who work in their OneDrive and private Teams Chats all day, but they are siloed. Even if they send 500 messages a day, they will have a low score. The reason? They do not collaborate with others in shared spaces!

The better users leverage centralized collaboration repositories, the broader the corpus of knowledge will be. In addition, when new people join your organization, they will be able to draw upon that knowledge right away with Copilot. If it is siloed, it might take a while for that knowledge to be shared through individual collaboration tools like private chats and OneDrive.

What the Heck is a Collaboration Score?

I credit the Collaboration Score to my friend and colleague Kristen Hubler. She recently wrote a great article on using versus adopting software. We collaborated to find a way to identify top users of Microsoft 365. What started as a hunch proved out time after time when it came to Copilot.

The concept was simple – the more diverse a user's Microsoft 365 utilization, the better their Copilot experience. The higher the Collaboration Score, the higher their self-reported use case count and savings. In fact, my favorite thing to do is temper “Copilot sucks” feedback with Collaboration Scores. If a user says that and has a Collaboration Score of 10, it is likely because they do not use their tools well. However, if they collaborate well, there is a better corpus of knowledge for Copilot to draw upon.

In addition to reflecting the data Copilot will have access to, the score shows a user’s propensity to adopt new technology. It has been four years since Microsoft Teams became the official standard for collaboration. If some of your users have not been adopting these tools, they might struggle with Copilot.

What does this all mean? You might need to work on your adoption of Microsoft 365 in addition to Copilot. However, that can be part of your overall rollout of Copilot for Microsoft 365 as you go forward.

Next Steps

With these reports in place, you can now track utilization of Copilot for Microsoft 365 and drive adoption. If your adoption is low, start working with your organizational change management team to get the most out of your investment. Technology professionals often overlook the behavior changes required by a tool like Copilot. However, having the data can help shape that roadmap.

After you have deployed this report, drop some comments below on how it is going! I want to know what you discovered with your data and some changes you might make! Also, please do not hesitate to provide feedback as I will likely be releasing new versions of this report template over time.

Copilot Reports with Microsoft Fabric – Part 2

Copilot Reports have been a challenge for a while, but Microsoft Fabric has made life so much easier! In last week’s article, we went through the process to start harvesting data from our endpoints. This week is about getting the model in place so we can deploy an easy-to-use report to track utilization trends.

While this process is a little tedious, it should make your life much easier. With a report already designed around the key metrics and outcomes, it should only take minutes for you to find some key insights. Just make sure you pay attention to the details and everything should work out smoothly!

Add Tables to your Semantic Model

The first step is adding tables to your default semantic model. This is a critical step to ensure you have the right data available to you. We will start by going to your workspace and opening the SQL analytics endpoint tied to your Lakehouse:

Screenshot highlighting the SQL analytics endpoint for your Lakehouse
SQL analytics endpoint within your Fabric Workspace

Once opened, we can go to the Modeling tab and manage your default semantic model. When the new window appears, select the tables holding your data within the Lakehouse and click Confirm:

Screenshot of the steps to add tables to your default semantic model within Microsoft Fabric
Adding tables to your Semantic Model

Now that we have our tables in place, it is time to set up some relationships to enable our reports.

Building Your Relationships

Now that we have our tables selected, we need to connect them. If you have been using Power BI already, you know how this works. In our case, we need to drag our tables onto the model layout so we can use them for our Copilot Reports:

Screenshot of the model layout for Copilot Reports within Microsoft Fabric
Tables added to our model layout

You will note that I added all tables except one. The collaboration_levels table was intentionally omitted because it will not have any relationships. This table is purely for labeling and will not directly interact with the tables displayed in this layout.

Connecting the Date Dimension for Copilot Reports

To get started, we will connect our date_table to the different activity tables. To keep this simple, all of these relationships use the same column name, so we just need to connect the dots. Connect date_table[date] to the following columns with a one-to-many relationship:

  • onedrive_user_activity[Report_Refresh_Date]
  • teams_user_activity[Report_Refresh_Date]
  • sharepoint_user_activity[Report_Refresh_Date]
  • viva_engage_user_activity[Report_Refresh_Date]
  • copilot_last_activity[Report_Refresh_Date]
Screenshot of setting up relationships in the model layout for Copilot Reports
Creating a relationship between tables

You will note that I left one table out of the mix. We will still create a relationship with the copilot_data table, but this one will be a little different: we need to make sure the relationship is inactive:

Screenshot of the special relationship between the date_table and copilot_data
Make sure your relationship with copilot_data is inactive!

This might not make sense at the moment, but it will work out later. I promise! With these relationships in place, your model layout should look something like this:

Screenshot of our model layout with our date table in place.
Model view with date_table connected to all activity tables

With our date dimension in place, it is time to build our user data.

Connecting the User Dimension for Copilot Reports

Our graph_users table holds a lot of valuable information for deeper analysis. For example, we can provide dimensional analysis based on a user’s department, office, company, manager, or even job title! This will only be as effective as the data associated with your users within Entra ID. However, many organizations find this a great place to start.
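To illustrate why this dimension is worth wiring up, here is a small sketch of the kind of question it unlocks once the activity data and graph_users share the User_Principal_Name key we are about to connect. It is run against the SQL analytics endpoint and assumes your graph_users table exposes a department column; the column names in your Lakehouse may differ slightly:

-- Active Teams users by department across the collected activity rows
-- Assumes graph_users has a "department" column; adjust names to match your Lakehouse
SELECT
    g.department,
    COUNT(DISTINCT t.User_Principal_Name) AS Active_Teams_Users
FROM teams_user_activity AS t
INNER JOIN graph_users AS g
    ON t.User_Principal_Name = g.User_Principal_Name
WHERE t.Last_Activity_Date IS NOT NULL
GROUP BY g.department
ORDER BY Active_Teams_Users DESC;

The same pattern works for office location, company, or job title once the relationships below are in place.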

The process of building these relationships is far easier. Both sides of each relationship use the [User_Principal_Name] column, so we just need to build out the connections. To make this happen, we will create a one-to-many relationship from graph_users[User_Principal_Name] to the following columns:

  • onedrive_user_activity[User_Principal_Name]
  • teams_user_activity[User_Principal_Name]
  • sharepoint_user_activity[User_Principal_Name]
  • viva_engage_user_activity[User_Principal_Name]

Wait a minute! What about the Copilot activity data tables?! Yes, I did exclude them for the moment. The reason is that we need to create a bi-directional relationship between our user dimension and these tables, as it will aid in some of our reporting:

Screenshot of a bi-directional relationship for our Copilot Reports model layout.
Creating a bi-directional relationship for our Copilot utilization tables

We are getting close, but we have one last relationship to create.

Connecting the License Dimension for Copilot Reports

The last dimension we are going to provide is the ability to filter users based upon the licenses assigned to them. For some organizations, there is a split between enterprise and frontline licenses. Others want to be hyper-focused on only Copilot for Microsoft 365 users.

Regardless of your use case, this could provide some additional insight. To make things easier, we will simply create a relationship between the licensing_data and graph_users tables using the [User_Principal_Name] column on both sides. This will also be a bi-directional relationship.

With our relationships in place, our model should look something like this:

Screenshot of our final model layout for our Copilot Reports
Our final model layout for our Copilot Reports

At this point, please double and triple check your relationships to make sure nothing is missing or incorrectly mapped. If you do not have this in place, next week’s article is going to be painful, as you will be coming back to this one.
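If you would like a sanity check beyond eyeballing the model layout, you can run a couple of quick queries against the SQL analytics endpoint. This is optional and just a sketch; it assumes the column names used above, including a date_table column literally named date. Both queries should return zero rows if your keys line up:

-- Activity dates with no matching row in the date dimension (expect zero rows)
SELECT DISTINCT t.Report_Refresh_Date
FROM teams_user_activity AS t
LEFT JOIN date_table AS d
    ON t.Report_Refresh_Date = d.[date]
WHERE d.[date] IS NULL;

-- Activity rows whose user is missing from graph_users (expect zero rows)
SELECT DISTINCT t.User_Principal_Name
FROM teams_user_activity AS t
LEFT JOIN graph_users AS g
    ON t.User_Principal_Name = g.User_Principal_Name
WHERE g.User_Principal_Name IS NULL;

Repeat the same pattern against the other activity tables if you want to be thorough.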

Modeling Choices for our Copilot Reports

So why did I make some of the choices I did with these relationships? Well, part of this is because of the challenges I have already experienced with the stock labels. For example, I saw some volatility in the labeling of the individual copilots in the copilot_data table prior to the release of the API endpoints. To avoid having to rework the model later, I have taken a few steps to ensure this is not an issue going forward.

I also fully expect this solution to become obsolete at some point. When the Copilot Reports endpoint was announced, I was hoping it would provide more granular results. For whatever reason, that has not happened yet. However, I fully expect this endpoint to eventually fall in line with the other reporting endpoints. I have no clue on the timeline and will not speculate, but I have to imagine it will happen. Therefore, I hope to one day write an article that says this series is obsolete and you should be using the new version of the endpoint.

Fun Fact About This Article

I think one of the biggest aspects of leveraging Microsoft Fabric is the power to build everything within a web browser. My favorite part of writing this article is that I did it completely at 37,000 feet in the air! If you are a fan of Chick Corea, I had 500 Miles High playing on repeat in my head as I wrote it.

While the soundtrack is fun, I just wanted to highlight the power and ability to manage this project entirely while in the air. While working on an airplane might not be your thing, it is still amazing to see what we can get done with a limited internet connection. Naturally, your mileage may vary by airline, but it is still pretty cool in my book!

Next Steps

At this point, we are harvesting our data and we have our model set. By next week, we should have three weeks of data that we can trend and start to provide better Copilot for Microsoft 365 data points to review. I am so excited to share this and look forward to publishing that article soon!

Copilot Reports with Microsoft Fabric – Part 1

Over the past year, I have been helping customers with their Copilot for Microsoft 365 deployments. The number one complaint I have received from customers has to do with the ability to provide quality Copilot Reports. If you are reading this, you are probably not alone. However, some recent changes have come about that will help us get better reporting.

Microsoft recently released reporting APIs for Copilot user activity. While these endpoints are still in preview with a target release of November 2024, we can start taking advantage of them today. The biggest challenge is that, in order to provide quality reporting, we need to start populating data now. To make this a reality, this week’s post will be 100% focused on populating data in your Fabric Lakehouse.

It is important to note that I cover a lot of the mechanics behind this solution in previous posts. I have linked to the appropriate posts where applicable but wanted to highlight this now. There are a lot of moving pieces, and there is some prerequisite knowledge required for you to be successful. Please take the time to reference those posts when needed.

Elephant in the Room with Copilot Reports

Before we get started, we need to address the headwinds with Copilot Reports. If you know me personally, you know I am a balls and strikes kind of person. I call things as I see them and sometimes that means calling out some of the issues that come with new solutions.

Challenges with the Data

If you are a data person like me, you probably find the reports available from Microsoft frustrating. The Viva Insights reports detail specific behaviors, but you cannot drill down far enough to see where use cases are being leveraged. The only way to look at individual user activity has been the Microsoft 365 admin reports for Copilot, which is the same data these API endpoints pull at this time. While the individual detail is nice, it does not resolve the frustration we have been seeing.

Unlike the endpoints we explored in my previous series, these endpoints have two drawbacks. First, I cannot identify activity for a specific date; I can only pull for a period of activity. Second, I do not get a count of activity for each use of Copilot. Instead, I just get the date Copilot was last used. Most of my customers have been vocal about this second point.

Overcoming the Challenges

I will not speculate on why Microsoft has chosen this path. Instead, I pushed to find a way to work around the challenge and make something happen. Instead of focusing on individual days of data, we will look at data week over week. Is this an imperfect system? Absolutely! But I would rather focus on what I can do than what I cannot do. Previously, I did this by exporting the data manually on a weekly basis. Annoying? Yes, but now we can automate it with the advent of the new reporting endpoint. And I was even able to enhance other elements of my analysis as a result!

This all means our reporting will simply look at user activity week over week with Copilot. This will help us drive the value we seek in some meaningful way instead of guessing at how people are leveraging Copilot. My customers have found this insight valuable in driving better adoption which results in stronger utilization.

Foundations of Copilot Reports

If you followed along with my series on Graph API reports, you should already be familiar with the key steps for this project. While we did a lot of this in that series, we will be revisiting parts of it again. To simplify the process, we will walk through the required steps at a high level. If you need a primer, please go back to my previous series for the details on how to prepare a Lakehouse in Microsoft Fabric.

Build a Fabric Lakehouse

To get started, sign into Microsoft Fabric and create a workspace for your Copilot Reports. You will likely want to share these reports with others, so it might make sense to create a dedicated workspace for this. The key point is that we want a new workspace to streamline the data collection process, as it will be changing.

Screenshot of my new Fabric Lakehouse for my Copilot Reports
New Fabric Lakehouse for my Copilot Reports

With our Lakehouse in place, we are ready for the next step in the process.

Creating Tables in the Lakehouse

Just like we did before, we will use a Fabric Notebook to create our tables. To get started, download the Create Graph Tables notebook I created for you. Remember that you must swap your context to the Data Engineering view before you can import the notebook.

Screenshot of the Data Engineering view inside of Microsoft Fabric where we will import our notebook.
Swapping to the Data Engineering view to import your Notebook

Once imported, you can open the notebook and connect it to your new Lakehouse. From there, you can run the notebook. This will take a few minutes, but it should make parts of the process much easier for you long term.

Screenshot of our notebook creating tables in our Fabric Lakehouse for our Copilot Reports
Creating tables in our Fabric Lakehouse

With our tables created, we need to start harvesting data from Microsoft Graph.

Harvesting Data for our Copilot Reports

Now that we have our foundation in place, it is all about getting our data liberated from Graph and into the Lakehouse. To get this process started, download the Copilot Reporting Template I created for your Dataflow. Once downloaded, create a Dataflow Gen2 and import your template:

Screenshot of the process to import a Power Query template in a Dataflow.
Importing my Power Query template in a Dataflow

Once imported, you will need to update your parameters with your Service Principal information. If you have never done this before, follow the steps I outlined in my Reporting with the Graph API post a few weeks back to ensure you have one configured properly. If you do, just update the parameters:

Screenshot of the Dataflow with the required parameters that need to be updated for your Copilot Reports
Update your parameters with your Service Principal details

Now that you have updated your Service Principal information, you need to update your connection properties, ensuring all endpoints use an anonymous authentication method since we are using the Service Principal to authenticate. From there, you just need to map your data to your new Lakehouse:

Screenshot of us mapping data from our Dataflow to our new Fabric Lakehouse
Mapping our data to our Lakehouse

We will need to map each query to its Lakehouse table with the following behaviors:

Query Name               Lakehouse Table Name          Behavior
GraphUsers               graph_users                   Replace
Date_Table               date_table                    Replace
LicensingData            licensing_data                Replace
TeamsUserActivity        teams_user_activity           Append
OneDriveUserActivity     onedrive_user_activity        Append
SharePointUserActivity   sharepoint_user_activity      Append
VivaEngageUserActivity   viva_engage_user_activity     Append
CopilotData              copilot_data                  Append
CopilotLastActivity      copilot_last_activity         Append
Table mappings for our Dataflow to our Fabric Lakehouse

Once this is completed, we can publish our Dataflow.

Next Steps

Now that we have all of this in place, it is time to set up the refresh. If you remember from the beginning, we will be looking at this data on a weekly basis. Therefore, we will want to set up the refresh to pull the data weekly. For myself, I set up the refresh to occur Sunday mornings at 9 AM, as that is a quiet time of the week. It also gives me the best shot at pulling the data on the same day of the week. Remember, we are at the mercy of Microsoft and when they release the data, so we have to somewhat hope for the best in this situation.
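Once a couple of refreshes have run, a quick query against the SQL analytics endpoint can confirm that the appends are landing as expected. This is just an optional spot check against one of the appended tables (teams_user_activity shown here; the other activity tables follow the same Report_Refresh_Date pattern):

-- Row counts per report refresh date; expect a new batch of dates after each weekly refresh
SELECT Report_Refresh_Date, COUNT(*) AS Row_Count
FROM teams_user_activity
GROUP BY Report_Refresh_Date
ORDER BY Report_Refresh_Date DESC;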

With the refreshes running weekly, we will start to collect some meaningful data we can use. Next week, we will work on preparing the semantic model to allow us to connect our Power BI report. Until then, we will keep harvesting data to help bring better value to our report!

Microsoft Graph API Reporting in Fabric – Part 3

Welcome back to my series on the Microsoft Graph API. We have spent a lot of time laying the groundwork for our data. This article is all about getting the data into our Lakehouse so we can start trending it.

As a quick reminder, the purpose of us having to go through this exercise is because we must retain data on our own. The Microsoft Graph API will only furnish detailed data over the past thirty days. Any further back and it is lost. Therefore, if we want to trend data over three months, we must warehouse it.

In this article, we will finalize our queries and load them into our Lakehouse. To get there, we are going to go through some deep levels of Power Query M. However, hang in there until the end; I should be able to make it easier for you!

Getting Teams User Activity Detail

To get started, we are going to have to write out some code. While you can just copy/paste the code from this article, I think it is worth walking through it together to help you understand the method behind the madness. It seems complicated – and it really is. But understanding the rationale behind what was done will help you with your own projects. To help us on this journey, you will need to create three more parameters to simplify the process:

Parameter Name        Parameter Type    Parameter Value
LakehouseId           Text              Value of Lakehouse ID from URL
WorkspaceId           Text              Value of Workspace ID from URL
HistoricalDayCount    Decimal Number    List of values (5, 15, 27) with the default set to 5
Parameters required for capturing Microsoft Graph API data

With your parameters in place, it is time to write out our query. Open a new blank query and strap in!

Start With a Function

Because we are pulling data for a single date at a time, we will need to create a function. We have done this before, but this time we are going to embed it within an existing query:

let
  GraphDataFunction = (ExportDate as date) =>
    let
      GraphData = Web.Contents("https://graph.microsoft.com/v1.0/reports/", 
        [RelativePath="getTeamsUserActivityUserDetail(date=" & Date.ToText(ExportDate, "yyyy-MM-dd") & ")",
        Headers=[Accept="application/json",Authorization="Bearer " & #"Get-AuthToken"()]])
    in
      GraphData
in
  GraphDataFunction

We need to call a few things out. First and foremost, you will note we are using the RelativePath option. This is because the date changes with each call of the function, so we must use this option to avoid errors.

Next, you will notice the Authorization header is using our Get-AuthToken function we created in part 2 of this series. This is key to ensure we use our token to complete the authentication process.

Checking the Lakehouse for Existing Data

We will need to ensure we only query Graph data that is not already in the Lakehouse. Because of how finicky the Graph API can be, we cannot guarantee data will be available the next day. Sometimes it is a few days behind. We also do not want duplicate data to be added to the Lakehouse.

To prevent this from happening, we will query the Lakehouse and the tables we created. By using the parameters we created, we will call our Lakehouse and pull our historical data so we can transform it:

let
  GraphDataFunction = (ExportDate as date) =>
    let
      GraphData = Web.Contents("https://graph.microsoft.com/v1.0/reports/", 
        [RelativePath="getTeamsUserActivityUserDetail(date=" & Date.ToText(ExportDate, "yyyy-MM-dd") & ")",
        Headers=[Accept="application/json",Authorization="Bearer " & #"Get-AuthToken"()]])
    in
      GraphData, 
  LakehouseConnection = Lakehouse.Contents(null){[workspaceId = WorkspaceId]}[Data]{[lakehouseId = LakehouseId]}[Data],
  HistoricalExports = LakehouseConnection{[Id = "teams_user_activity", ItemKind = "Table"]}[Data],
  ReportRefreshDates = Table.SelectColumns(HistoricalExports, {"Report_Refresh_Date"}),
  FilterRefreshDates = Table.SelectRows(ReportRefreshDates, each [Report_Refresh_Date] <> null and [Report_Refresh_Date] <> ""),
  HistoricalDatesExported = Table.Distinct(FilterRefreshDates, {"Report_Refresh_Date"})
in
  HistoricalDatesExported

These five lines of code connect to our teams_user_activity table, remove unnecessary columns, remove duplicates, and shape the result into a single usable column of previously exported dates. This will be empty for right now, but rows will appear as we move forward.

Create a Range of Dates and Merge with Historical Data

Next, we will pull data from the last few days. To make life easier on us, we will limit the pull to five days to speed up our development process. We just need to use today’s date as a base and create a range going back five days that we can use:

let
  GraphDataFunction = (ExportDate as date) =>
    let
      GraphData = Web.Contents("https://graph.microsoft.com/v1.0/reports/", 
        [RelativePath="getTeamsUserActivityUserDetail(date=" & Date.ToText(ExportDate, "yyyy-MM-dd") & ")",
        Headers=[Accept="application/json",Authorization="Bearer " & #"Get-AuthToken"()]])
    in
      GraphData, 
  LakehouseConnection = Lakehouse.Contents(null){[workspaceId = WorkspaceId]}[Data]{[lakehouseId = LakehouseId]}[Data],
  HistoricalExports = LakehouseConnection{[Id = "teams_user_activity", ItemKind = "Table"]}[Data],
  ReportRefreshDates = Table.SelectColumns(HistoricalExports, {"Report_Refresh_Date"}),
  FilterRefreshDates = Table.SelectRows(ReportRefreshDates, each [Report_Refresh_Date] <> null and [Report_Refresh_Date] <> ""),
  HistoricalDatesExported = Table.Distinct(FilterRefreshDates, {"Report_Refresh_Date"}),
  TodaysDate = Date.AddDays(Date.From(DateTime.LocalNow()), -5),
  ListDatesForExport = List.Dates(TodaysDate, 5, #duration(1, 0, 0, 0)),
  TableOfDates = Table.FromList(ListDatesForExport, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
  RenameDateColumn = Table.RenameColumns(TableOfDates, {{"Column1", "ExportDates"}}),
  SetProperDataType = Table.TransformColumnTypes(RenameDateColumn, {{"ExportDates", type date}}),
  MergeHistoricalData = Table.NestedJoin(SetProperDataType, {"ExportDates"}, HistoricalDatesExported, {"Report_Refresh_Date"}, "HistoricalDatesExported", JoinKind.LeftOuter),
  ExpandHistoricalDates = Table.ExpandTableColumn(MergeHistoricalData, "HistoricalDatesExported", {"Report_Refresh_Date"}, {"Report_Refresh_Date"}),
  IdentifyMissingData = Table.SelectRows(ExpandHistoricalDates, each ( [Report_Refresh_Date] = null))
in
  IdentifyMissingData

Once we have a list of dates in place, we merge it with our historical data from the HistoricalDatesExported step. From there, expand the values and keep only the rows where the matched Report_Refresh_Date is null, so only the dates where we are missing data remain.

Execute the Microsoft Graph API Function

Reaching all the way back to the first step, GraphDataFunction, we are going to pass our list of missing dates through the function and pull our data:

let
  GraphDataFunction = (ExportDate as date) =>
    let
      GraphData = Web.Contents("https://graph.microsoft.com/v1.0/reports/", 
        [RelativePath="getTeamsUserActivityUserDetail(date=" & Date.ToText(ExportDate, "yyyy-MM-dd") & ")",
        Headers=[Accept="application/json",Authorization="Bearer " & #"Get-AuthToken"()]])
    in
      GraphData, 
  LakehouseConnection = Lakehouse.Contents(null){[workspaceId = WorkspaceId]}[Data]{[lakehouseId = LakehouseId]}[Data],
  HistoricalExports = LakehouseConnection{[Id = "teams_user_activity", ItemKind = "Table"]}[Data],
  ReportRefreshDates = Table.SelectColumns(HistoricalExports, {"Report_Refresh_Date"}),
  FilterRefreshDates = Table.SelectRows(ReportRefreshDates, each [Report_Refresh_Date] <> null and [Report_Refresh_Date] <> ""),
  HistoricalDatesExported = Table.Distinct(FilterRefreshDates, {"Report_Refresh_Date"}),
  TodaysDate = Date.AddDays(Date.From(DateTime.LocalNow()), -5),
  ListDatesForExport = List.Dates(TodaysDate, 5, #duration(1, 0, 0, 0)),
  TableOfDates = Table.FromList(ListDatesForExport, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
  RenameDateColumn = Table.RenameColumns(TableOfDates, {{"Column1", "ExportDates"}}),
  SetProperDataType = Table.TransformColumnTypes(RenameDateColumn, {{"ExportDates", type date}}),
  MergeHistoricalData = Table.NestedJoin(SetProperDataType, {"ExportDates"}, HistoricalDatesExported, {"Report_Refresh_Date"}, "HistoricalDatesExported", JoinKind.LeftOuter),
  ExpandHistoricalDates = Table.ExpandTableColumn(MergeHistoricalData, "HistoricalDatesExported", {"Report_Refresh_Date"}, {"Report_Refresh_Date"}),
  IdentifyMissingData = Table.SelectRows(ExpandHistoricalDates, each ( [Report_Refresh_Date] = null)),
  ExecuteGraphFunction = Table.AddColumn(IdentifyMissingData, "Invoked custom function", each GraphDataFunction([ExportDates])),
  FilterMissingData = Table.SelectRows(ExecuteGraphFunction, each [Attributes]?[Hidden]? <> true),
  MergeExportedFiles = Table.AddColumn(FilterMissingData, "Transform file", each #"Transform file"([Invoked custom function], 33)),
  RemoveNonDataColumns = Table.SelectColumns(MergeExportedFiles, {"Transform file"}),
  ExpandResults = Table.ExpandTableColumn(RemoveNonDataColumns, "Transform file", {"Report Refresh Date", " User Id", " User Principal Name", " Last Activity Date", " Is Deleted", " Deleted Date", " Assigned Products", " Team Chat Message Count", " Private Chat Message Count", " Call Count", " Meeting Count", " Meetings Organized Count", " Meetings Attended Count", " Ad Hoc Meetings Organized Count", " Ad Hoc Meetings Attended Count", " Scheduled One-time Meetings Organized Count", " Scheduled One-time Meetings Attended Count", " Scheduled Recurring Meetings Organized Count", " Scheduled Recurring Meetings Attended Count", " Audio Duration In Seconds", " Video Duration In Seconds", " Screen Share Duration In Seconds", " Has Other Action", " Urgent Messages", " Post Messages", " Tenant Display Name", " Shared Channel Tenant Display Names", " Reply Messages", " Is Licensed", " Report Period"}),
  FilterMissingResults = Table.SelectRows(ExpandResults, each [Report Refresh Date] <> null)
in
  FilterMissingResults

After we execute the function, we combine the responses and expand the results. To do this, we use a custom function titled Transform file to combine the files seamlessly:

(Parameter as binary, ColumnCount as number) => let
  Source = Csv.Document(Parameter, [Delimiter = ",", Columns = ColumnCount, Encoding = 65001, QuoteStyle = QuoteStyle.None]),
  #"Promoted headers" = Table.PromoteHeaders(Source, [PromoteAllScalars = true])
in
  #"Promoted headers"

The result should be something that looks like this:

Screenshot of Microsoft Graph API report data.
Screenshot of the Microsoft Graph API report data for Teams

Wow – that was a whole lot for a single endpoint. Only four more applications left! Frustrated that you will have to do this again? Well, instead of being frustrated, let’s make it a little easier.

Turning Microsoft Graph API Reports into a Flexible Function

We have worked so hard to get the first query in place. But if we think through things a little further, we can simplify the process for future endpoints. Instead of going through this entire process over and over again, we can convert this to a function and pass data to make it easier for us to use.

Prepare Your Endpoint Data

To simplify our process, we will start by creating a table we can reference. This will hold the application name, the Microsoft Graph API endpoint, the Fabric Lakehouse table, the number of columns returned by the endpoint, and the column mappings. Create a blank query called Endpoints and add the following code:

let
  Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("3VXbbtswDP0Vwc/pQ9cvCOIMDZCtRt0FGIIiECzOIWpTgURny75+lOPc5aB+3YttkkeUeXRILZfJtNZYJaMEwnvVeHArXTBukXfiLYFbwLhz/ZB4Crxf8vgkj1fYWMfqFX458GuVaoaRCjCVOaQCN7pS33UtzhT9ptK7zpp5lUIFDGZ0+OgWz7Vnddiw8+VARk1sQzySrQrALZxMfQx9A2CkUk0c6JDwyj0jBifVnSJj77EksTNnTVOwD/nagjJwaE3yPlomLwSpkx2lWivY8Bkj6gCLc/U4iKth7CwQfgvgxampwQD9itWRn3xHxbVrrZ24WjpIV9UuFp3+iUY/xVibIrNILFX7YGyCEWPtBI3z9uU/4W2Bvt0i0+UwLt9A115I4PCOMdgCAhtRAp/uN+nM9DAZ46uf3Ugl4bfUZK1Z2s/7s6plp60sisYmwttNM1+YXo6r1IR/b9rbqzGzzImz7jbq2RZ3Vl4DrhPkxRpMUwWNEDww1nAn2T1wf2KZZo1zoc7PZI6gb4puDFqVNk4zWhKxyvAsLBkfNGigJ5QXDoD2Go4jnrX8Ga9FKkEVlkQ3rgQ6HqFAMuvPzTcgLfHLud91iRw+EVQxzL4LpH1OmUR4cyyAfFDebYssRFBqSqWARexbsVbQWrF2+anruqdXBg7py8JaSM7764o7gZNIu3/+BL7AxC6yOX4MuaTe/wE=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [Application = _t, Lakehouse_Table = _t, Graph_Endpoint = _t, Column_Count = _t, Column_Mapping = _t]),
  #"Changed column type" = Table.TransformColumnTypes(Source, {{"Column_Count", Int64.Type}, {"Application", type text}, {"Lakehouse_Table", type text}, {"Graph_Endpoint", type text}, {"Column_Mapping", type text}})
in
  #"Changed column type"

You can choose to pull this data from somewhere else if it is easier for you, but this table is easy to maintain in this scenario.

Convert our Query to a Function

Our next step is to take the first query we created in this article and make it a function. We will start by adding a parameter for the application at the top and filtering the Endpoints table with the value passed in the parameter:

(Application as text) =>
let
  EndpointTable = Endpoints,
  EndpointData = Table.SelectRows(EndpointTable, each [Application] = Application),
  GraphDataFunction = (ExportDate as date) =>
    let
      GraphData = Web.Contents("https://graph.microsoft.com/v1.0/reports/", 
        [RelativePath=EndpointData{0}[Graph_Endpoint] & "(date=" & Date.ToText(ExportDate, "yyyy-MM-dd") & ")",
        Headers=[Accept="application/json",Authorization="Bearer " & #"Get-AuthToken"()]])
    in
      GraphData, 

  //Endpoint = "teams_user_activity",
  LakehouseConnection = Lakehouse.Contents(null){[workspaceId = WorkspaceId]}[Data]{[lakehouseId = LakehouseId]}[Data],
  HistoricalExports = LakehouseConnection{[Id = EndpointData{0}[Lakehouse_Table], ItemKind = "Table"]}[Data],
  ReportRefreshDates = Table.SelectColumns(HistoricalExports, {"Report_Refresh_Date"}),
  FilterRefreshDates = Table.SelectRows(ReportRefreshDates, each [Report_Refresh_Date] <> null and [Report_Refresh_Date] <> ""),
  HistoricalDatesExported = Table.Distinct(FilterRefreshDates, {"Report_Refresh_Date"}),
  TodaysDate = Date.AddDays(Date.From(DateTime.LocalNow()), -HistoricalDayCount),
  ListDatesForExport = List.Dates(TodaysDate, HistoricalDayCount, #duration(1, 0, 0, 0)),
  TableOfDates = Table.FromList(ListDatesForExport, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
  RenameDateColumn = Table.RenameColumns(TableOfDates, {{"Column1", "ExportDates"}}),
  SetProperDataType = Table.TransformColumnTypes(RenameDateColumn, {{"ExportDates", type date}}),
  MergeHistoricalData = Table.NestedJoin(SetProperDataType, {"ExportDates"}, HistoricalDatesExported, {"Report_Refresh_Date"}, "HistoricalDatesExported", JoinKind.LeftOuter),
  ExpandHistoricalDates = Table.ExpandTableColumn(MergeHistoricalData, "HistoricalDatesExported", {"Report_Refresh_Date"}, {"Report_Refresh_Date"}),
  IdentifyMissingData = Table.SelectRows(ExpandHistoricalDates, each ( [Report_Refresh_Date] = null)),
  ExecuteGraphFunction = Table.AddColumn(IdentifyMissingData, "Invoked custom function", each GraphDataFunction([ExportDates])),
  FilterMissingData = Table.SelectRows(ExecuteGraphFunction, each [Attributes]?[Hidden]? <> true),
  MergeExportedFiles = Table.AddColumn(FilterMissingData, "Transform file", each #"Transform file"([Invoked custom function], EndpointData{0}[Column_Count])),
  RemoveNonDataColumns = Table.SelectColumns(MergeExportedFiles, {"Transform file"}),
  ExpandResults = Table.ExpandTableColumn(RemoveNonDataColumns, "Transform file", Text.Split(EndpointData{0}[Column_Mapping], ", ")),
  FilterMissingResults = Table.SelectRows(ExpandResults, each [Report Refresh Date] <> null)
in
  FilterMissingResults

From there, we pull values from the EndpointData step and inject them into our query in the following places:

  • In the GraphDataFunction step, place a dynamic endpoint name
  • In the HistoricalExports step, place a dynamic Lakehouse table name
  • In the MergeExportedFiles step, place a dynamic column count in the Transform file function
  • In the ExpandResults step, place dynamic column mappings and wrap it with a Text.Split() function

Now that this is completed, we can call the function by typing in the application name and clicking Invoke:

Screenshot of us invoking our Microsoft Graph API reports function
Invoking our Microsoft Graph API reports function.

With this in place, we can invoke our function for OneDrive, SharePoint, Email, and Viva Engage with a few clicks!

Get Current Users from Entra ID

For our last step, we just need to pull our users out of Entra ID into our Dataflow. To make this easier, I like to use a function. I created a function titled Get-Users with the following code and invoked it:

() =>
let
  GetUserDetail = (Params) =>
  let    
    Path = if Text.Length(Params) > 0  then Params
      else "$select=displayName,givenName,jobTitle,mail,officeLocation,surname,userPrincipalName,id,accountEnabled,userType,companyName,department,employeeType&$expand=manager",  
    Source = Json.Document(Web.Contents("https://graph.microsoft.com/v1.0/users",[RelativePath = "?" & Path, Headers=[Authorization="Bearer " & #"Get-AuthToken"()]])),  
    NextList = @Source[value], result = try @NextList & @GetUserDetail(Text.AfterDelimiter(Source[#"@odata.nextLink"], "?")) otherwise @NextList    
  in
    result,
  UserDetail = GetUserDetail("")
in
  UserDetail

You will notice the function inside of the function. This is because we will need to iterate through the entire user list to receive all users. Because the API limits us to batches of 1,000, we will need to work through this a few times to get the results we desire.

HELP! THIS IS TOO MUCH!

Is your head spinning? Mine too. In fact, this was a lot to even get coded. And if I ever want to reuse this, I do not want to go through this level of effort again. I am probably going to be like that math teacher you had where you learned how to do everything the hard way to pass an exam, and then the next chapter they teach you an easier way to solve the problem. I hated it when that happened to me, but I am still going to do it to you.

To streamline this, I created a Power Query template for this project. You can download the template and import it into your new Dataflow:

Screenshot of the Microsoft Graph API Power Query template being imported into a dataflow.
Importing a Power Query template for our Microsoft Graph API reports

Once imported, you will need to update the following parameters:

  • TenantId
  • AppId
  • ClientSecret
  • WorkspaceId
  • LakehouseId

This should streamline the process considerably if you are just getting started!

From Dataflow to Lakehouse

We now just need to map our data into our Lakehouse. This will take a little diligence, but it should move along pretty quickly. We simply need to match the queries and columns from our Dataflow to tables within our Lakehouse.

Adding Teams Data to the Lakehouse

Starting with our Teams user activity data, we will go to the bottom right-hand corner of the screen and click the + icon to add it to our Lakehouse. Once you have a connection to your Lakehouse, we will use an existing table and navigate to the table where your data is stored:

Screenshot of placing Microsoft Graph API data into our Lakehouse
Adding Teams data into our Lakehouse

Once we have selected our table, we need to map our columns. We also need to update the setting that will allow us to append our data:

Mapping our Microsoft Graph API report fields into our Fabric Lakehouse
Mapping columns and appending our data

Now we just need to save our settings. Once we publish and refresh our dataflow, our data will be added into the Lakehouse. We just need to repeat these steps for OneDrive, SharePoint, Email, and Viva Engage.

Adding Graph Users to the Lakehouse

If you remember from my last article, we created all of the tables for our Microsoft Graph API report endpoints. However, we did not need to create anything for our users. Why is that?!

The answer is simple – we pre-created the activity tables so we could optimize the refresh experience and eliminate data duplication for those dates, since the release of report data is a moving target and hard to hit. With our users, it is a different story, as we will replace the data on each refresh. So, just like we did before, we will go ahead and add our Lakehouse as a data destination. However, this time we will create a new table. Once in place, we can make adjustments to our mappings and settings:

Screenshot of our Entra ID users mapping to the Fabric Lakehouse
Mapping Entra ID users to the Lakehouse

Once completed, we can click save and move to publishing our Dataflow.

One More Thing Before We Publish!

The last thing we need to do is adjust our HistoricalDayCount parameter from 5 days to 27 days. This will ensure that all the historical data available is captured in the refresh and loaded into the Lakehouse:

Screenshot of the HistoricalDayCount parameter being updated to 27 days.
Update our HistoricalDayCount parameter to 27 days

Once completed, we can click publish. It will take a few minutes to validate the Dataflow. After that, we just need to set up a daily refresh and our data will start to populate into the Lakehouse!
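After the refresh has run for a few days, you may want a quick way to confirm that the duplicate protection is working. The sketch below, run against the SQL analytics endpoint, looks for any user appearing more than once for the same report refresh date and should return zero rows:

-- Each user should appear at most once per report refresh date
SELECT Report_Refresh_Date, User_Principal_Name, COUNT(*) AS Row_Count
FROM teams_user_activity
GROUP BY Report_Refresh_Date, User_Principal_Name
HAVING COUNT(*) > 1;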

Next Steps and Conclusion

Alright, that was a ton of information for a single article. It was hard to split up, but I wanted to keep it together since these pieces are all interconnected. It is a lot, but it is complete. Now that you have your data, you can create your model and measures and start building out your reports.

In other news, I have been able to get the endpoint for Copilot. However, there are some data quality challenges that I want to address before doing a full write-up. In the meantime, this will get you started with understanding how complicated this level of reporting can be.

Did you make it to the end? Were you successful? If so, tell me about it in the comments below!

Microsoft Graph API Reporting in Fabric – Part 2

In my last post, I went over the why and covered the building blocks for connecting to the Microsoft Graph API for reporting. With this in place, it is time to prepare the landing zone for our data in Microsoft Fabric! To achieve this, we will need to set up our Fabric environment so we can start to build out our solution.

Before we get started, if you do not have Microsoft Fabric, do not fear! You can start a free Microsoft Fabric trial for 60 days. When you sign into Power BI, you can click on your user avatar and start your trial. If this is not available to you, it might be because your Fabric Administrator has turned off the trial feature. You might need to submit a request to re-enable it.

Creating Your Lakehouse

To get things started, we need to create a new workspace that has a Fabric capacity assigned to it. Just like you would before with Power BI, create a new workspace inside of Microsoft Fabric and ensure it is tied to your capacity.

Once you have your workspace in place, it is time to build your Lakehouse. If you go to the New button in the upper left-hand corner of the workspace window, a screen will appear where you can select a Lakehouse:

Screenshot of the screen where you can select a Lakehouse to deploy.
Deploying a Lakehouse in Microsoft Fabric

From there, you can give your Lakehouse a name and click create. You will be redirected to your new Lakehouse. Once created, we will need to extract the Workspace ID and Lakehouse ID from the URL. The easiest way to capture this is to copy the URL in the address bar to the notepad where you kept your Service Principal data in part 1 of this series. Once you capture this information, you can close your Lakehouse:

https://app.powerbi.com/groups/<WorkspaceId>/lakehouses/<LakehouseId>?experience=power-bi

Preparing Your Lakehouse

Now that you have a Lakehouse in place, it is time to prepare the tables. While there is an opportunity to create these tables directly from our Dataflow, we will want to create a few ahead of time. This is simply to reduce the amount of effort on your side. The goal is to make things easy, but the process can be complex, and this should make it a little more seamless for you.

While I am going to give you a Notebook you can upload, we will pause and create one from scratch so you understand the process. Click New item in your workspace, scroll down to Prepare data, and select Notebook:

Screenshot of adding a Notebook to your Fabric Workspace.
Adding a Notebook to your Fabric Workspace

We will add our Lakehouse to the Notebook on the left-hand side. Once in place, we can add our PySpark code into a cell:

Screenshot of our Notebook in Microsoft Fabric that has been connected to a Lakehouse and PySpark code to create a table.
Preparing our Notebook to build tables in our Lakehouse

Using documentation from the Microsoft Graph API, I was able to identify the endpoints and their associated fields. This allowed me to create a block of code that would create our teams_user_activity table:

#Build out teams_user_activity table in the Lakehouse

from pyspark.sql.types import StructType, StructField, IntegerType, StringType, DateType, BooleanType
schema = StructType([
    StructField("Report_Refresh_Date", DateType(), True),
    StructField("User_Id", StringType(), True),
    StructField("User_Principal_Name", StringType(), True),
    StructField("Last_Activity_Date", DateType(), True),
    StructField("Is_Deleted", BooleanType(), True),
    StructField("Deleted_Date", StringType(), True),
    StructField("Assigned_Products", StringType(), True),
    StructField("Team_Chat_Message_Count", IntegerType(), True),
    StructField("Private_Chat_Message_Count", IntegerType(), True),
    StructField("Call_Count", IntegerType(), True),
    StructField("Meeting_Count", IntegerType(), True),
    StructField("Meetings_Organized_Count", IntegerType(), True),
    StructField("Meetings_Attended_Count", IntegerType(), True),
    StructField("Ad_Hoc_Meetings_Organized_Count", IntegerType(), True),
    StructField("Ad_Hoc_Meetings_Attended_Count", IntegerType(), True),
    StructField("Scheduled_One-time_Meetings_Organized_Count", IntegerType(), True),
    StructField("Scheduled_One-time_Meetings_Attended_Count", IntegerType(), True),
    StructField("Scheduled_Recurring_Meetings_Organized_Count", IntegerType(), True),
    StructField("Scheduled_Recurring_Meetings_Attended_Count", IntegerType(), True),
    StructField("Audio_Duration_In_Seconds", IntegerType(), True),
    StructField("Video_Duration_In_Seconds", IntegerType(), True),
    StructField("Screen_Share_Duration_In_Seconds", IntegerType(), True),
    StructField("Has_Other_Action", StringType(), True),
    StructField("Urgent_Messages", IntegerType(), True),
    StructField("Post_Messages", IntegerType(), True),
    StructField("Tenant_Display_Name", StringType(), True),
    StructField("Shared_Channel_Tenant_Display_Names", StringType(), True),
    StructField("Reply_Messages", IntegerType(), True),
    StructField("Is_Licensed", StringType(), True),
    StructField("Report_Period", IntegerType(), True)
])
df = spark.createDataFrame([], schema)
df.write.format("delta").saveAsTable("teams_user_activity")

In our scenario, we will need five tables for Teams, OneDrive, SharePoint, Email, and Viva Engage. To simplify the process for you, I have created a notebook you can upload into Fabric and run. You just need to connect your Lakehouse and select Run all. To perform this task, you must switch to the Data Engineering context of Fabric and select Import on your workspace:

Screenshot of the process to import a notebook into your Fabric Workspace
Importing a Notebook into your Microsoft Fabric Workspace

Once you have run the script successfully, you can go to your SQL analytics endpoint for the Lakehouse and verify the tables have been created and are empty with the following queries:

SELECT * FROM teams_user_activity ORDER BY Report_Refresh_Date DESC;

SELECT * FROM onedrive_user_activity ORDER BY Report_Refresh_Date DESC;

SELECT * FROM sharepoint_user_activity ORDER BY Report_Refresh_Date DESC;

SELECT * FROM email_user_activity ORDER BY Report_Refresh_Date DESC;

SELECT * FROM viva_engage_user_activity ORDER BY Report_Refresh_Date DESC

Preparing Your Dataflow

After creating your Lakehouse and preparing your tables, you can create a Dataflow in the same workspace. This will query the data and push it to the right tables within our Lakehouse. However, you need to be careful to select a Dataflow Gen2 as opposed to a Dataflow:

Screenshot of the screen where you can select a Dataflow Gen2 to deploy.

Adding a Dataflow Gen2 to your Fabric workspace

This is critical for ensuring our data makes it into the Lakehouse later on, so be careful with your selection! Once you select it, you can rename your Dataflow in the upper left-hand corner:

Screenshot of renaming your Dataflow in Microsoft Fabric.
Renaming your Dataflow

With your Lakehouse and Dataflow created, it is time to start getting some data.

Connecting to the Microsoft Graph API

If you remember from my last article, we will be using application permissions with our app registration. As a result, we must acquire an OAuth2 token to facilitate the authentication process. Luckily, we made it easy for you to deploy this in your solution.

Adding Parameters in Power Query

I covered the basics of Parameters in Power Query in an older article. Instead of going through the details, I will just highlight what we need to have created.

To facilitate our authentication, we need to create three parameters that will be used to hold data from our app registration. The three parameters we need are as follows:

Parameter Name    Parameter Type    Parameter Value
TenantId          Text              Directory (tenant) ID
AppId             Text              Application (client) ID
ClientSecret      Text              Client secret value
Parameters for our authentication process

Remember, capitalization counts! If you copy/paste the parameters above, you will not have an issue with the next step. It seems easy, but take great care with this process!

Creating an Authentication Function

Once you have your parameters set, you need to create a new blank query. Making this connection happen requires you to build the query out in code. However, I have made that easier for you. Just copy the code below and paste it into the advanced editor of your new query:

() =>
  let
    // Build the form-encoded body for the client credentials grant
    ContentBody = "Content-Type=application/x-www-form-urlencoded"
        & "&scope=https://graph.microsoft.com/.default"
        & "&grant_type=client_credentials"
        & "&client_id=" & AppId
        & "&client_secret=" & ClientSecret, 
    // POST the body to the token endpoint for our tenant and return the access token
    Source = Json.Document(
      Web.Contents(
        "https://login.microsoftonline.com/" & TenantId & "/oauth2/v2.0/token", 
        [
          Content = Text.ToBinary(ContentBody), 
          Headers = [
            Accept = "application/json", 
            #"Content-Type" = "application/x-www-form-urlencoded"
          ]
        ]
      )
    )[access_token]
  in
    Source

Once you have pasted the code into the query, click Next. After that, rename the query to Get-AuthToken. Again, capitalization is important, so take care with this step!

Now that we have our authentication function in place, it is time to take the next step towards building out our Microsoft Graph API functions to pull user information and Teams user activity.

Conclusion and Next Steps

There is a lot of setup involved in this process. Part of the reason is the complexity of the pull we need to perform, but this groundwork also helps us avoid duplicates and reduces the chance of throttling against the endpoints. I know this seems like a lot of work, but I promise it will pay off in the end!

In my next post, we will finally grab our Graph API data and load it into our Lakehouse! Stay tuned to see how it happens!

Microsoft Graph API Reporting in Fabric – Part 1

The Microsoft Graph API library has been around since November 2015 and continues to expand. One of the key endpoint collections centers around utilization reports for the Microsoft 365 ecosystem. These reports can be leveraged to drive better overall adoption of the Microsoft 365 platform and to understand trends in use.

Leveraging the Microsoft Graph API, the Microsoft 365 Utilization Metrics app has been the easiest solution to deploy to work through this challenge. With a few clicks in AppSource, you can deploy this app within Power BI, update some details on the semantic model, and hit refresh to understand utilization. And while it does show data for the past year, customers frequently want more out of their data.

People forget about the Microsoft Graph API until new technology finds its way into the ecosystem. Enter stage left, Copilot for Microsoft 365, the newest belle at the ball! With new technology coming at a premium price point, companies want to understand utilization to ensure value. The $30 per user per month cost of Copilot suddenly has people wanting to confirm that users are actually leveraging the tool. If not, they want to reclaim the license and reassign it to someone else.

This gave me the idea to revisit this challenge and see how we could leverage Fabric to make it a reality. While the Microsoft 365 Roadmap has this rolling out soon, a little preparation will allow you to deliver this data when it is finally available.

What are the Microsoft Graph API Reports?

The Microsoft Graph API reports are a collection of endpoints that provide utilization data for users. Aggregated by action, these reports provide visibility into how Microsoft 365 is leveraged. While you can pull data for 7-, 30-, 90-, and 180-day periods, the data is aggregated by user. This poses a challenge in getting data into a meaningful format if you are looking to track trends over time.

While those periods work, there is another option: most endpoints allow you to specify an individual date for your query. However, you can only collect that data for the past 30 days. This is a significant restriction and can limit your ability to trend data over a longer period of time.
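As an illustration, here is a minimal Python sketch of what a date-based pull looks like against one of these endpoints. It assumes your app registration already has the Reports.Read.All permission and that you have acquired an access token; the token value is a placeholder.

import requests
from datetime import date, timedelta

# Placeholder: acquire this token via the client credentials flow for your app registration
access_token = "<access token>"

# Report endpoints accept either a period (D7/D30/D90/D180) or a single date
# within the last 30 days, for example the Teams user activity detail report
report_date = (date.today() - timedelta(days=2)).isoformat()
url = f"https://graph.microsoft.com/v1.0/reports/getTeamsUserActivityUserDetail(date={report_date})"

response = requests.get(url, headers={"Authorization": f"Bearer {access_token}"})
response.raise_for_status()

# The report endpoints return CSV content by default
print(response.text[:500])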

In the past, if you did not have a data warehousing strategy in place, you were severely limited by what Power BI alone could do. With the advent of Microsoft Fabric, we can simplify this process and enable you to track this data with minimal effort!

Preparing Access to Microsoft Graph API Reports

To get ready for the data warehousing process, you must deploy an app registration inside of Entra ID. To get started, sign into Entra ID, navigate to Applications, select App registrations, and start a new registration. Name your new app and keep everything else the same:

Screenshot of an app being registered in Entra ID
Registering an app via Entra ID

Once you click register, copy the Application (Client) ID and Tenant ID from the overview page and save them for later. Next, go to API permissions and click add a permission. You will select Microsoft Graph and then application permissions. From there, you will need to add the following permissions:

Permission Name     Purpose
User.Read.All       Provides dimensional data around users within your organization from Entra ID. While User.Read has already been added to the app, it will not work in this situation.
Reports.Read.All    Provides access to all reporting endpoints for Microsoft 365. This includes Microsoft Teams, SharePoint, OneDrive, Exchange, and Viva Engage.
Details about API permissions for your app

Once these permissions are assigned, grant admin consent for your organization. The permissions you added should have green checkmarks next to them:

Screenshot of API permissions granted for your application
Admin consent granted for API permissions

Lastly, you need to generate a client secret. Go to Certificates & Secrets and add a new client secret. Once created, make sure you copy the secret value and save it. You cannot view it again later, so if you lose it you will need to create a new one.

Turn off Report Masking

As a security feature, the Microsoft Graph API has user masking available for this data. This helps protect the anonymity of user activity in the tenant. However, if we want to track Copilot for Microsoft 365 utilization, we cannot be anonymous. For some of you, this will not apply. However, this process could require some approvals, so asking early will ensure you can get the report masking turned off ahead of time.

To turn off report masking, go to the Microsoft 365 Admin Center and expand Settings on the left side of the screen. From there, select Org settings and then select Reports. A panel will appear on the right side of the screen. Uncheck the option titled Display concealed user, group, and site names in all reports:

Turning off the report masking in the Microsoft 365 admin center
Turn off the report masking in the Microsoft 365 Admin Center

It is important to note that this feature is either on or off for everyone in the organization and for all reports. There are no granular controls. Therefore, make sure you understand the implications of turning off this setting and communicate it to fellow Microsoft 365 administrators who might be using these reporting features, whether through the API endpoints or in their respective administrator portals.

What else do I need to know?

This project will require you to understand how to use REST APIs in Dataflows. If you have not read my series on leveraging REST APIs in Power BI, this would be a good place to start. We will need to build a number of connections leveraging this method.

Next, we will have to navigate an authentication process that will require us to use the POST method. I have also written an article on navigating this challenge. This will be a little more complicated, but the code found in this article will be helpful for understanding how we will build a function to drive authentication for these endpoints.

Lastly, you will find yourself writing some M code. If you are new to this, the code will be provided to assist you. However, you will want to learn a little more about the language so you understand the method behind the madness.

Conclusion and Next Steps

This setup only takes a few minutes, but it will go a long way in preparing our environment. As mentioned above, if you are not familiar with using REST APIs in Power Query, please take some time to learn about them. I promise you that you will thank me later. This can get very complicated, so understanding the process ahead of time goes a long way!

This will be enough to get things started for now. In my next article, we will explore setting up our Fabric environment to query this data and prepare it for the data warehouse.

Simulation Integration with Power Apps

Simulating scenarios with Power BI is a common use case for organizations that need to get the most out of their data. Being able to manipulate elements of the data to simulate an outcome brings deeper insights and accelerates decision making. Everyone asks “what if”, but Power BI paired with Power Apps can help you build the simulation and put it into action.

About two years ago, I wrote an article on how to simulate scenarios with numeric parameters. That solution still applies and works well for most scenarios. However, there is a drawback: if you have a lot of inputs, you could end up with a painful user experience. The constant recalculation of your measures can bog down the report and eat away at your capacity.

While not a solution for every scenario, we can use Power Apps to achieve the same outcome. In our scenario for this article, we want to drive profitability by manipulating a profit and loss statement. We want to adjust inputs such as revenue, expenses, headcount, and taxes. Because of the numerous inputs that drive a profit and loss statement, this is a perfect example for Power Apps.

Preparing the Report Elements

To start, we are using an existing report that contains a profit and loss statement. We have a matrix visual that shows the lines of the statement and their associated values. If you notice, each line is labeled with a section letter and line number. This makes it easy to identify each element:

Screenshot of a report canvas in Power BI Desktop with a profit and loss statement.
Profit and loss statement within Power BI Desktop

To set up our simulation, we need measures to aggregate the data and bring it into the app. You will note that I created a Simulation App Measures query and added a measure for each line item. This will make it easy to bring the data into Power Apps.

Lastly, you will note that I left space on the right-hand side of the canvas to make it easy to add our application. While we are adding a Power App, it is like any other visualization and requires the appropriate space. While it is perfectly fine to place the app on its own page, we want to keep it close to our profit and loss statement for comparison.

Preparing the Power App

While it seems simple, we need to take a few steps to integrate Power Apps into our report. The first step is to select the Power App visual in the visualization pane. Once you add the Power App visual, it provides instructions for the next few steps:

Screenshot of adding a Power App visual to Power BI Desktop
Adding the Power App visual to our report canvas.

Once we have the visual in place, we need to add fields from our data pane to the visual. This data is the base for supporting your simulation. Once the data has been selected, we can select which Power Apps environment we want to use. Next, if we have an existing app, we can select it here. However, we have not created the app yet so we are going to create one from scratch. When you do this, you will be warned that a new window is being opened so you can edit the app with Power Apps Studio.

Creating Your Simulation App

Now, there are a lot of components that go into a Power App. Because of the complexities that go into creating a Power App, I am going to direct you to Microsoft Learn to get all of these details. However, I am going to go over a few basics to help you get started.

Basic App Settings

There are a number of settings you will need to get in place. Simple examples include the name of the app, the icon for the app, and the orientation of the application. If you go to the three dots at the end of the menu and select settings, you can set all of these items. You can also set other items such as background colors or images:

Screenshot of Power Apps Studio and how to get to the app settings.
Adjust settings for your app within Power Apps Studio

You will also note that there is an option for version history. If you are new to Power Apps, this is a great tool to assist you in the event you make a mistake. You can easily roll back your version if you break something and need to go back. Just remember to save versions periodically with comments so you know where you are in your versioning.

Using Data for Simulation

If you are familiar with Power Apps, you know you can bring data from various sources into a single application. However, if you go to the data view, you will find it empty. You need data to build your simulation, so where is it? There is an item in your tree view called “PowerBIIntegration” which holds your data in a tabular format:

Screenshot of Power Apps Studio showing the PowerBIIntegration data source for simulation.
The PowerBIIntegration provides the simulation data for our app.

Since we put only measures into this app and no dimensions, the table has only a single row in it. However, we still need to provide the aggregation of our data. To do this, you will need to write a little code in Power Apps to get the desired value to appear in a text box. For example, if you want to aggregate the total gross revenue, you will need this code to achieve the outcome:

Text(Sum([@PowerBIIntegration].Data, A1), "$#,0")

This code aggregates the data from [@PowerBIIntegration] and formats it properly for display. You do not have to go this far with the formatting, but I really like to make sure it is correct. It requires a little extra work, but it goes a long way.

Testing, Saving, and Publishing the Simulation App

Once you have something ready, you can click the play button on the menu bar to test your application and ensure it works. You can also hold down the ALT key to test individual elements on the app canvas. Once you have it tested, it is time to publish.

Screenshot of Power Apps Studio highlighting the play button, save button, and publish button.
Test, save, and publish your reports from Power Apps Studio

Once it is ready, click the save button to ensure your changes are captured. Next, click the publish button to push the changes live. This makes the app viewable inside of Power BI and available for your report consumers to use. Depending on the environment settings, you might need to share the application with others, but that can easily be done using the same Entra or Active Directory security group you leveraged to provide access to the report in the Power BI Service. When you are finished, the report canvas will have the app integrated right on the page:

Screenshot of the simulation Power App inside of Power BI Desktop.
The final product – Simulation app inside of the Power BI Report

Common Questions with Power Apps

Every time I bring up this solution, I receive a lot of questions. However, they are often variations of the same three topics that are in this section. I wanted to address them as I am guessing you have the same questions as well.

Question 1 – Licensing and Cost

We all know Microsoft does not give much away for free. However, with how this app is designed, there is no additional impact to your licensing costs. Power Apps is using seeded capabilities from your base Microsoft 365 licensing and therefore does not cost you anything extra. While licensing changes often, I recommend you check out Michael Heath’s blog post on Power Platform licensing if you are new to it.

Question 2 – App Use Outside of Power BI

A lot of people ask about being able to leverage the app outside of Power BI. The answer on this is a hard no. We must keep in mind that the app is leveraging data from your Power BI report and filtering based upon the report context. Therefore, this app is not usable outside of the report. However, you could easily re-deploy this app in another report assuming you leverage the same measure names to load the data into the app.

Question 3 – Power Apps and Writeback

The last question I receive is whether Power Apps provides writeback capabilities to data sources. The short answer is yes, but depending on your data source, additional licensing might be required. There are tools out there that allow you to perform writeback with less complexity, but Power Apps absolutely works for this. It is up to you whether it is worth the effort and cost to make it happen, but it is absolutely possible.

Conclusion

People forget that Power BI is actually part of the Power Platform suite, and many miss the opportunities to integrate these apps together. This example shows how you can integrate Power Apps to extend your report's capabilities with little effort. If you are not an expert, do not be afraid to use Copilot for Power Apps to help you out as well!

Have you ever integrated a Power App into a Power BI report? What was your use case? Did you have any challenges in making it happen? Tell me in the comments below!

How do I connect to REST APIs with Power BI? (Part 5)

In preparing for the Global Power BI and Fabric Summit this year, I got a little nostalgic. I went back to my presentation and early blog posts on connecting to REST APIs. Two years, several blog posts, and a book on Power Query later, I felt it was time to add a fifth part to this series. I realized that I left out one additional step that could optimize my query a little more and allow me to squeeze the most out of my API call limits.

I will be picking up where we left off with part 4 of how to connect to data with REST APIs. However, if you are new to my blog, you can start from part 1 and get caught up. If you recall, I am limited to 100 API calls a day with my scenario. If I am watching more than 100 stocks, I might need to consider moving to the next subscription level. However, I have an option to squeeze even more out and potentially return 1,000 stock symbols. This is a bonus as I do not need to increase my subscription!

What Did Dom Miss in Part 4?

To explain what I missed, I need to go back to part 2 on REST APIs. You must Read the Fine Manual! In reading the documentation, I realized that I could batch multiple stock symbols at once. Instead of making an individual call for each symbol, I can group ten of them into a single call.

Screenshot of the Yahoo Finance REST APIs documentation
Guess I need to take my own advice?

With this knowledge in hand, I set out to take my query one step further and get more out of my subscription. It is not as straightforward as it may seem, but with a little manipulation we can make it work.

Prepare my List

We already have a spreadsheet of stock symbols brought in. We need to start by removing all of the columns except the stock symbol. Just right-click on the symbol column and remove the other columns. Once that is complete, insert an index column that starts from 1. Your query should look like this:

Screenshot of the symbol column with the index column next to it.
Make sure your index starts from 1!

Next, we need to add a batch column. We need to identify every tenth row and assign it a batch number, which we will do with a custom column. Using modulo division, we identify every tenth row: if the index is a multiple of ten, the remainder is zero, and we divide the index by 10 to get the batch number. Otherwise, we return a null value. The formula looks like this:

if 
  Number.Mod([Index], 10) = 0 
  then Number.IntegerDivide([Index], 10) 
  else null

Next, we just need to use the fill up feature to make sure the remaining rows are assigned to a batch. The last few rows will still be null, which is not an issue as they will form their own batch. The result looks like this:

Screenshot of our stock symbol list with the associated batch number.
Our stock symbols associated with a batch

With our batch in place, we can remove the index column. Next, we need to group our symbols together.

Use Group By to Bring Symbols Together

Going back to our documentation, each symbol must be separated by a comma. We will use the group by function to bring ten rows at a time together into a single column. Select the batch column and then select the group by function on the transform ribbon. Using the sum aggregation, we are going to summarize our symbol column. Naturally, this will result in an error, but we will fix that next.

Screenshot of the group by function in Power Query
Setting up the group by function

Next, we will need to update the M code in the advanced editor. We need to replace the List.Sum() function with the Text.Combine() function. Our code for this step will look like this:

Table.Group(#"Removed Columns", {"Batch"}, 
  {{"Symbols", each Text.Combine([Symbol], ","), 
  type nullable text}})

The result is an updated query with our stock symbols combined:

Screenshot of the combined stock symbols
Stock symbols combined in a single cell by batch
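If it helps to see the whole batching idea in one place, here is a small Python sketch of the same logic using pandas. It is not part of the Power Query solution, and the symbol list is made up, but it mirrors the index, batch, and combine steps we just walked through.

import pandas as pd

# Hypothetical list of 25 stock symbols standing in for the spreadsheet
symbols = pd.DataFrame({"Symbol": [f"STOCK{i}" for i in range(1, 26)]})

# A 1-based index, then a batch number for every block of ten rows; this mirrors
# the modulo and fill up steps (the final partial batch gets its own number here
# instead of null, which produces the same grouping)
symbols["Index"] = range(1, len(symbols) + 1)
symbols["Batch"] = (symbols["Index"] - 1) // 10 + 1

# Combine each batch into a single comma-separated string, just like Text.Combine()
batches = symbols.groupby("Batch")["Symbol"].apply(lambda s: ",".join(s)).reset_index(name="Symbols")
print(batches)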

Now that the hard part is over, we just need to apply our function.

Functions and REST APIs

We already created a function in part 3 of my REST APIs series, so we just need to apply it to our new query. Go to the add column ribbon and invoke a custom function. Using the Symbols column as the parameter, we can query all of the symbols in the cell.

Invoking our custom function that uses REST APIs
Invoking the Get-StockInfo function

All that is left to do is expand the results so they can be leveraged in the model:

Screenshot of our results from the REST APIs
Stock data from our endpoint

And just like that, we turned twenty-five API calls into three. Not too shabby!

Conclusion

It is extra work, but it is worthwhile if you can save a few dollars and a few minutes on your refresh. It may seem excessive, but it might make the difference. Besides, I could not resist a post where I tell people to Read the Fine Manual, as I know my friend Mike will get a good laugh out of it again!

Have you ever used the group by function in Power BI? Have you ever extended it like this? Did I miss another way to optimize this query? If so, tell me in the comments below!

FabCon Takeaways

Did you attend FabCon during the last week of March in Las Vegas, NV? If you were not able to attend, it was packed with a lot of great knowledge, exciting announcements, and wonderful data professionals to bounce ideas off of all week. I was lucky enough to attend with my colleague and friend, Michael Heath. At every break when we would reunite, we were so excited to share what we had just learned.

Picture of Michael Heath and Dominick Raimato at FabCon.
Michael and I connecting in person at FabCon!

I know not everyone got to attend this event. And even for those of us who were there, it was a lot of information to take in over three days. I wanted to summarize some of my top takeaways. I mean, it is difficult to distill an entire conference into a single blog post, but I wanted to at least highlight some of the key moments that made me go aha!

FabCon Takeaway #1 – Out of the Box

While I had heard this many times before, I finally wrapped my head around it during the conference. Fabric is an all-inclusive platform that simplifies the management of data for all kinds of projects. Whether it is warehousing, reporting, or data science, the platform is ready to use.

Like Buying a Car

The best analogy I heard all week was the concept of buying a car. Before Fabric, you needed to pick out each part individually. You needed to identify your storage medium, ETL tool, virtual network components, and data science tools. This is the equivalent of going to the auto parts store, picking each component out one by one, and then assembling them to build a car. While some of you might choose to do this, it is a lot of work. You need not only the skills to do the data work, but also the experience to set up your Azure environment.

Instead, Fabric allows you to buy the car right off the lot. When you set up your workspace, you can add components without the hassle of configuration. It just works, which is a really nice feeling. Even better, because you are using a capacity license, it is easier to estimate costs. I know there are calculators out there for Azure, but you always forget something when you are attempting to build a budget. Fabric eliminates that challenge with a clear pricing structure. This is a huge step forward for self-service data science outside of the traditional IT function.

Microsoft has been pushing its “5×5” approach with Fabric: 5 seconds to sign up and 5 minutes to your first wow. While I appreciate this approach, I think there is a little work left to do in this space. If you have a guide to help you, I believe it holds true. However, if you are new to Fabric, I do not find it quite that intuitive yet. Long story short, make sure you do a little learning before you try it on your own.

What About Data Engineers?

Does this mean they are out of a job? Not at all! Data engineers will still be essential. For me, I think it redirects their efforts into more valuable workloads. Instead of fighting with Azure, they are focused on getting value out of the platform. It also opens the door to data engineers and scientists with deep domain knowledge in the business. That is a game changer!

Does this mean our existing data infrastructure is obsolete? Also no! These tools are still essential. If anything, this might allow less important workloads to be shifted to Fabric, reducing administrative burden. That allows you to focus on the mission-critical workloads within your existing infrastructure. While Microsoft would like you to move your workloads to Fabric, not everything has to go.

FabCon Takeaway #2 – CI/CD Is Close

If you are unfamiliar, CI/CD stands for Continuous Integration and Continuous Deployment. To handle this, you need a mechanism to manage source code for your Fabric components. While it is not 100% there yet, the future is quite bright. Some might not like how this is done, but it is a huge step forward.

The integration with Azure DevOps is slick. The ability to create a feature branch and merge it back to the main branch works smoothly. With a few clicks, you can create a new workspace, replicate your components from the main branch, and sync them to Git. When you are finished with your feature, you can easily merge it back in. But for me, the best part is that you can retire the feature workspace as part of the merge.

I also love the fact that you can choose your deployment path. If you prefer to use Deployment Pipelines in Fabric, you can do so. However, if you would rather have Git repos for Dev, Test, and Prod, you can do that too. This gives you the flexibility to manage and control your deployments as you see fit.

Stay tuned for an article on this in the near future!

FabCon Takeaway #3 – Integration Points

For me, the biggest question I kept asking was how well I could integrate these components into other solutions. Creating a machine learning model is fantastic, but it is useless if you cannot integrate it with other applications. This is huge!

I need to investigate how this works further, but the promise was pretty clear. The idea is that your data scientists can perform their analysis and train a model with Microsoft Fabric. Once completed, they can leverage it via endpoints and provide predictions. I have done this with Azure ML in the past, but the process was a tad clunky. I am hoping this experience gets better with Fabric.

Another teaser was the ability to build your own custom Copilot chat bots within Fabric. This is not using Copilot to create a model, but rather training a Copilot on your data. Eventually, this data could find its way into other tools. To be honest, something like this has already been happening with the Q&A feature in Power BI. However, if Microsoft can streamline this into a single Copilot experience, it would be huge. Being able to query your data without having to go into Fabric would go a long way.

Anything Else from FabCon?

Yes, there is a ton more to talk about, but these were the standouts for me. What is hard to put into a post is how outstanding the conversations with other professionals were. Meeting some of the conquering heroes of the Power BI world is always fun, but eating lunch with different people was where the biggest insights were gained. Learning about everyone’s unique scenarios allows you to refocus your vision of the platform. I always recommend people attend conferences for that reason alone. That is where you gain the most interesting insights!

Also, since everyone asks this question, Michael and I were able to enjoy some of the fun around us. It was a busy week at the conference and work did not slow down in our absence. While we did not get to any shows, we did get to unwind with other professionals and even meet one of the best Elvis impersonators I have seen in my lifetime.

Michael and I posing with The King

Conclusion

This coming year is going to be exciting when it comes to Microsoft Fabric. Between what was announced at FabCon and what is still to come, the future is bright! I look forward to sharing more insights over the coming months.

Did you make it to FabCon? What were your key insights? Anything I missed in my summary? Tell me in the comments below!

Get Ready for the DP-600 Exam

I took the Implementing Analytics Solutions Using Microsoft Fabric (DP-600) exam recently, and it was a unique experience for me. This was the first time I took a beta exam from Microsoft. It was also the first time I took an exam under the new rules that allow you to search Microsoft Learn. It was a different experience, but a successful one as I did pass.

It would be unethical to discuss specific questions and details around the exam. However, there are a few things you can do to make sure you are prepared for this exam. A little preparation can go a long way!

Content Breakdown for the Exam

This exam assesses your knowledge of Microsoft Fabric. As a user of Power BI, you might find elements of this exam easier, because Power BI is a big part of Fabric and you will find elements of it in there. The breakdown of the exam is as follows:

  • Plan, implement, and manage a solution for data analytics (10–15%)
  • Prepare and serve data (40–45%)
  • Implement and manage semantic models (20–25%)
  • Explore and analyze data (20–25%)

With this knowledge in hand, it is easier to understand what you need to focus on when preparing for the exam. Clearly, prepare and serve data is a hefty portion of the exam, and if you have not touched Fabric, you will not perform well. But all is not lost, as the knowledge you already have from Power BI will still help you be successful!

Studying for the DP-600

Ideally, everyone would attend a training for the DP-600. However, that is not always realistic. My work schedule has been chaotic, so even if I wanted to attend a class, how would I do it?!

To start, Microsoft has created some learning paths that can get you on the right track. They are designed to walk you through the core components of Microsoft Fabric and are available on the DP-600 page. It might be best to begin with the DP-600 Study Guide from Microsoft. This will help you wrap your head around the core components of the exam, and it also links to the additional learning paths to help you out.

However, you will want to get some hands-on experience with Microsoft Fabric. You can fire up a free trial if you have an active Microsoft 365 environment in place. To get started, sign into Power BI, go to your account in the upper right-hand corner, and start the trial:

Screenshot of account view where you can initiate a trial of Microsoft Fabric.

You will have sixty days to explore Microsoft Fabric at no cost. Make sure you are ready to commit to your learning to ensure you do not run out of time before you take the exam!

Practice Makes Perfect!

I am always prompted to add a paid practice exam when I sign up for an exam. However, Microsoft also offers a free practice assessment you can take to get a baseline of your knowledge. Take a look at this screenshot from the DP-600 exam page:

Screenshot of the scheduling pane for the DP-600 exam

This practice assessment runs you through fifty multiple-choice questions. It does not mimic the actual exam, as there are no case studies, and the real exam includes some questions that are not multiple choice. It is not an exact representation of the exam, but it does a good job of giving you a baseline of your knowledge.

If you have not taken many certification exams, you might want to invest in the paid practice exam. In general, most exams are broken into three sections. The majority of the exam is a collection of questions that you can come back and review before moving on to the next section; however, once you move on, you cannot go back. The same is true for the lab section. Then there is a final section where you cannot go back to update previous answers. That causes anxiety for some people, so taking a paid practice exam can help prepare you for that experience.

Regardless of the route you choose, I love that you can check the answer on the spot. When you do it, they provide a link to the documentation on Microsoft Learn that explains why the answer is correct. Sometimes you are just guessing and get lucky, but this will help you confirm why you chose the right answer.

Anything Else to Consider?

The resources above helped me pass the exam. However, you might need to know a little more than what is outlined above. Being multilingual will help you be successful on this exam. You need to know your DAX, M (both the code and the GUI), T-SQL, and PySpark syntax and best practices. You might not be an expert in all of them, but a strong command of the basics is critical.

Microsoft Learn was helpful for me, as there were a few items I got stuck on. In fact, my wife commented that I spent more time on this exam than any other I have taken in the past. Having Microsoft Learn at my disposal helped a lot, but it was not a replacement for my existing knowledge. While it will assist you, relying on it will result in you running out of time.

Conclusion

I felt like this exam was one of the better exams I have taken in the recent past. So many certification exams feel like they are trying to trick you, but the broad amount of content in this exam allowed it to be both challenging and meaningful. It may seem weird, but I really enjoyed this exam! If you are looking to continue your analytics journey, you will want to add it to your list!

Have you taken the DP-600 yet? What did you think about it? Did you find it meaningful? Tell me in the comments below!
