Recover Power BI reports or models that you can’t download

DOWNLOAD A REPORT OR MODEL FROM FABRIC

…if the Power BI “Download this file” option is greyed out and you’re unable to download it from the user interface.


TL;DR: 2024 was an inflection point in many ways, not only for Power BI and Fabric, but also for me, personally. Regarding Power BI and Fabric, there are some things…

  • I’m excited about, like semantic-link-labs, PBIR, Git integration updates, and metric sets.

  • I’m cautiously optimistic about, like the new visual updates, DAX functions, Copilot in the DAX Query view, TMDL scripting, and writeback (or “translytic apps”, the vaguest name for a feature since “AI skills”).

  • I’m hesitant or confused about, like OrgApps, visual calculations, and Direct Lake semantic model storage mode.

  • I’m disappointed and frustrated about, like the Copilot experiences for semantic models and reports, the overall UI/UX of Power BI Desktop and Fabric, the “era of AI” and “analytics engineer” messaging, and the lack of any meaningful Apple Watch or WearOS integration for the Power BI Mobile App.

Regarding data-goblins.com and myself, personally…

  • I lost ±20% of my productive hours this year due to issues with childcare and our new house after moving.

  • I started making video content for SQLBI, which was difficult to learn and adjust to.


2024 has been a unique year in many ways, bringing many changes and developments. Indeed, the frenetic pace of releases, announcements, and new features for Power BI has continued from last year.

The purpose of this article is twofold:

1. To explain how to get PBIX or PBIP files from a workspace when you can't download them with the UI.
2. To demonstrate a use-case for semantic-link-labs in Fabric notebooks with Power BI reports.

This article is informative; I'm not trying to sell you Fabric, nor endorsing a tool, feature, or approach.
This article is one of three examples of things I like to do with semantic-link-labs and reports.

 

NEW STUFF IN POWER BI: LOOKING TO THE FUTURE

There are many new features or changes for Power BI that have been released or announced in 2024. Some of these changes are exclusive to Fabric (meaning you can only use them in a Fabric capacity), although many are general Power BI changes that anyone can use.

 

FEATURES I’M EXCITED ABOUT

 

What is semantic-link-labs?

Semantic-link-labs is a Python package created by Michael Kovalsky that provides simple functions to perform a variety of tasks or automations in Fabric from a notebook. The name implies that the library uses Semantic Link and semantic models (it does), but it lets you do so much more across many items and scopes.

Why am I excited by semantic-link-labs for Power BI?

  • Report automation. My biggest frustration with Power BI right now is how inefficient it is to make and manage useful Power BI reports. Semantic-link-labs will help you streamline many tasks to manage reports, especially once the new Power BI enhanced report (PBIR) format becomes available.

  • Benefits for Pro and PPU environments. You can use semantic-link-labs in a notebook from a small Fabric workspace (e.g. an F2 capacity) to interact with items in any other workspace (e.g. Pro, PPU). This means that any org can use semantic-link-labs to help manage/scale Power BI; there’s no chunky price tag requirement.

  • Simplicity and scale. Semantic-link-labs can empower a maturing Power BI developer to achieve exponential increases in their productivity. This isn’t just a tool, it’s a force multiplier. The feeling when I use semantic-link-labs is similar to when I used Tabular Editor with semantic models for the first time; the feeling that “I will never be able to go back to the old ways”. Examples of things that are faster and easier with semantic-link-labs (a short taste follows this list):

    • Working with the Fabric APIs.

    • Changing and centrally managing Power BI report themes.

    • Custom auditing and monitoring solutions.

    • and many, many more…
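To give you a small taste, a minimal first notebook cell might look like the following. Note that the install line assumes you’re not using a custom Fabric environment, and `list_workspaces` is just one example of the Semantic Link functions that the labs library builds upon.

```python
# Install semantic-link-labs in the notebook session
# (a custom Fabric environment is the tidier long-term option).
%pip install semantic-link-labs

import sempy.fabric as fabric  # Semantic Link itself
import sempy_labs as labs      # semantic-link-labs

# For example: list the workspaces that you can access.
display(fabric.list_workspaces())
```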

Do I have any concerns about semantic-link-labs?

Since semantic-link-labs has such a wide scope and scale, I’m a bit worried that it will spread itself too thin across too much of Fabric. The risk here is that semantic-link-labs falls into the trap of “jack of all trades, master of none”, where it has the breadth to do many interesting things on the surface, but lacks the depth to address deeper or more complete needs. Examples might be undocumented exceptions or limitations, or a need for a user to extend the functionality with their own, more complex code. If this happens, then there’s also a higher likelihood that we’ll observe new updates causing bugs or disruptions to solutions that use semantic-link-labs.

A second concern pertains to adoption. Semantic-link-labs needs more content and documentation to guide people on the use-cases and effective use of the tool. This tool is simply too valuable to be left to fall into the “pit of shiny objects” where many other Fabric features lie. This pit is where people and documentation focus only on tools and technology, with demos too shallow to explain how they can help with real-world problems.


What is PBIR?

The Power BI enhanced report (PBIR) format is a new format for report metadata. Previously, report metadata consisted of a heavily nested and escaped Layout JSON file, which was difficult to use and read. Further, Microsoft didn’t officially support reading or changing this file (although I’ve been doing it anyways for the last 4 years). Now, with PBIR, the metadata is not only easier to read and use, but this is also officially supported. Soon, we will have a renaissance of tools for reports, similar to tools for semantic models.

Why am I excited by PBIR for Power BI?

  • Report automation. PBIR will make it easier to streamline and even automate changes to reports, by manipulating the metadata using code and tools. An example is with semantic-link-labs. But there will be more.

  • Visual templates. Using PBIR, it’s possible to have full or partial templates for visuals and pages. Again, tools in the future will make this easier, but you can already do this today with your own, custom solutions.

  • Source control. Using the PBIR format, it will be easier to track, manage, and merge changes to reports. This is not only because the report metadata is simpler to read, but also because the metadata is split across multiple files in a folder structure. I think this will be true for enterprise teams using Git repositories, as well as smaller or self-service teams who use features like OneDrive refresh and Deployment Pipelines.
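To illustrate the source control point, the PBIR folder structure looks roughly like this (a simplified sketch from memory; see Microsoft’s PBIR documentation for the authoritative layout):

```
MyReport.Report/
├── definition.pbir
└── definition/
    ├── report.json
    ├── version.json
    └── pages/
        ├── pages.json
        └── <pageName>/
            ├── page.json
            └── visuals/
                └── <visualName>/
                    └── visual.json
```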

Do I have any concerns about PBIR?

  • Is the technical debt of the report metadata just too high? While the PBIR format is objectively cleaner, it’s still very, very difficult for the average person to read and understand. The JSON is still heavily nested, property and visual names aren’t always clear, and each page and visual has a meaningless ID field as a name (although you can change it, yourself). My concern is that the meaning of changes (when tracking them in source control) will be lost in this complexity; and, as such, few people will use or understand the metadata for functional source control. Further, there are some glaring flaws in the metadata that don’t seem possible to address, like:

    • Storing data points in report metadata.

    • Referencing table names when measures are used in visuals.

    • Visuals “remembering” config when you change visual types.

    There are clear improvements with PBIR that will make more possible; for that alone, I’m thankful. However, I’m worried that it might not be enough without the help of extra tools on top.


What is Git integration?

Git integration lets you integrate remote Git repositories with a Fabric, PPU, or Premium workspace. Git integration isn’t new; it was announced and released in preview last year, too. However, 2024 brought several new features and updates.

Why am I excited by the Git integration updates for Power BI?

  • Lower barrier to source control. Using Git integration, it’s easier to get started with source control for your Power BI content, since you can commit and sync changes directly from the workspace.

  • GitHub or DevOps. You can use Git integration with both GitHub Enterprise and Azure DevOps.

Do I have any concerns about Git integration?

  • Not really.



What is TMDL scripting?

TMDL (Tabular Model Definition Language) is a human-readable format for semantic model metadata, designed to be much easier to read, edit, and diff than the heavily nested JSON of a model.bim file. TMDL scripting means working with this format as code: scripting out model objects, editing them, and applying the changes back to the model.
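For a sense of what that looks like, a table in TMDL reads roughly like this (a hand-written sketch, not output from any specific tool; real TMDL files indent with tabs):

```
table Sales

	measure 'Total Sales' = SUM(Sales[Amount])
		formatString: #,0

	column Amount
		dataType: double
		summarizeBy: sum
```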

Why am I excited by TMDL scripting for Power BI?

  • Model automation. TMDL makes it easier to streamline and even automate changes to semantic models, since the metadata is designed to be read and written as code. Tools like Tabular Editor already support TMDL, and more will follow.

Do I have any concerns about TMDL scripting?

  • Will Copilot hallucinate a lot?

The first block of code retrieves the definition of the target report. In a new code cell, add and adjust the following code, replacing “ProReport” with the name of the report that you want to copy, and “ProWorkspace” with its workspace name. The target workspace that contains the target report does not have to use a Fabric capacity license mode; it can be Premium, PPU, or Pro.
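A sketch of that cell is below. I’m assuming the `get_report_definition` function in the labs report module here, so double-check the current semantic-link-labs documentation for the exact function name and signature.

```python
import sempy_labs.report as rep

report_name = "ProReport"        # replace with your report's name
workspace_name = "ProWorkspace"  # replace with its workspace's name

# Returns a dataframe with one row per definition "part":
# the part's file path and its base64-encoded payload.
definition = rep.get_report_definition(
    report=report_name,
    workspace=workspace_name,
)
display(definition)
```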

When you run this code (after starting your spark session), it will retrieve the report definition from the target report in the target workspace. The output of the code is a variable called definition which is a dataframe that contains the base64 payload of each “part” of the report definition. These are metadata files which describe how the report looks and which semantic model it should connect to. In the next step, we’re going to write these files to the lakehouse so we can retrieve and use them.

Note that only the workspace with the notebook, lakehouse, and environment needs to be in a Fabric capacity. The target workspace where you published the report doesn’t need to be on Fabric capacity or even be Premium (P SKU) or Premium-Per-User (PPU).

In the example above, the report is published to a workspace that uses the “Pro” license mode, which you can also see below:

Continuing on, we only need one more notebook cell. This cell contains code that loops through the previous dataframe and, for each row, writes the corresponding file to the lakehouse:
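A sketch of that loop follows. The column names (`path`, `payload`) are assumptions based on the dataframe described above, so adjust them to match what `display(definition)` shows you.

```python
import base64
import os

# The notebook's default lakehouse is mounted at /lakehouse/default.
target_dir = f"/lakehouse/default/Files/{report_name}"

for _, row in definition.iterrows():
    file_path = os.path.join(target_dir, row["path"])  # assumed column name
    os.makedirs(os.path.dirname(file_path), exist_ok=True)
    with open(file_path, "wb") as f:
        # Each payload is base64-encoded; decode it back to raw bytes.
        f.write(base64.b64decode(row["payload"]))      # assumed column name
```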

Each file is written to a custom directory with the name of the report. We now have the definition files and we can use them to “restore” our report that we couldn’t download; it’s all downhill from here.


STEP 4: GET THE DEFINITION FILES FROM YOUR COMPUTER

To retrieve the definition files, we will use OneLake Explorer. OneLake Explorer provides a OneDrive-like interface to access Fabric data locally if you have a Windows machine.

If you haven’t yet, download and install OneLake Explorer, and complete its setup. Once you do this, you should see a folder structure that resembles the workspaces you have access to, for any workspaces that have data in OneLake. Inside this folder structure, you can find the report metadata files that we just created, virtualized and available from our desktop.

You can now navigate through these folders in File Explorer on your computer to find this metadata:
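For example, the synced files typically land under a path like the following. The workspace, lakehouse, and report names are from this walkthrough; your root folder may differ depending on your OneLake Explorer setup.

```
C:\Users\<you>\OneLake - Microsoft\<WorkspaceName>\<LakehouseName>.Lakehouse\Files\ProReport\
```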


STEP 5: CREATE AND SAVE AN EMPTY PBIP FILE

Next, you should open Power BI Desktop and create a new Blank report. This empty file is where we are going to add the report metadata files. First, we need to save this report as a PBIP file, so that it uses the same report metadata file format as the files we retrieved. If you haven’t yet, you should enable the new PBIP format.

DO NOT enable the new PBIR enhanced report metadata format while it’s in preview, because the files we retrieved use a different (older) format. The preview PBIR format also comes with many limitations. You should only enable this format if the published report is also using it.

The following diagram shows you where to find the preview settings for the PBIP format:

Enable the new Power BI Project (.pbip) save option to continue. Don’t enable PBIR.


STEP 6: OVERWRITE THE EMPTY PBIP FILE’S METADATA TO FINISH

Finally, you want to copy the metadata files from OneLake Explorer to the empty PBIP file. Specifically, you want to copy these files into the “.Report” folder, which contains the corresponding report metadata, overwriting the files of the same name.

Copy the report files from OneLake Explorer into the empty PBIP .Report folder

Once you do this, you can open the recovered report in Power BI Desktop by double-clicking the .pbip file. From here, you can continue working in Power BI Desktop, or save the report as a PBIX file, if you wish.

The recovered report open as a Power BI Project file in Power BI Desktop. We can continue working, or save the file as a Power BI Desktop (PBIX) file, first. Note that the restored report contains a live connection to the same semantic model as the original, target report.

To change this data source, you have to either go to “File > Options and Settings > Data Source Settings”, or from the “Home” ribbon, select “Transform Data” and “Data Source Settings”.

You’ve now recovered the report that you could not download from the service. Some suggested clean-up from here includes:

  • Removing the report metadata files from the lakehouse.

  • Testing the recovered report in Power BI Desktop

  • Testing the recovered report after publishing it to a workspace.

  • Marking the original, target report for archival and deletion, once you’ve tested the recovered report.

  • Pausing the Fabric pay-as-you-go capacity, if you will only use it for this (so it doesn’t cost you money).

This approach can save you a lot of time. You can automate it further by creating all of the other “supporting” files for the PBIP as you write to the lakehouse, which means that you can skip the last two steps. I’m sharing the full set of steps rather than the abridged, more efficient version, so that the approach is clear.
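If you script those supporting files, the .pbip pointer file itself is tiny. The sketch below is from memory, so copy the structure from a PBIP that Power BI Desktop actually generated rather than trusting it verbatim:

```json
{
  "version": "1.0",
  "artifacts": [
    {
      "report": {
        "path": "ProReport.Report"
      }
    }
  ],
  "settings": {
    "enableAutoRecovery": true
  }
}
```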

Don't forget to pause your capacity if it's not in use:

Don't forget to pause your Fabric capacity if it's pay-as-you-go and not in use. Otherwise you pay for each minute that it's running, which can cost you a lot of money.

 

PART 2: RECOVER A MODEL

The previous sections only explained how to recover a report. If you need a data model, then you need a different approach, because you need to retrieve your model metadata. Thankfully, this is still possible.

You don't need Power BI Desktop to manage a data model:

If your model remains in the service, that can also be fine. It’s valid if you or your users prefer to use Power BI Desktop, but you can also manage a published data model in several ways:

1. Edit data models in the Power BI service, in the web browser.
2. (Fabric, Premium, PPU only) Manage data models via XMLA endpoints. You can do this easily by using external tools like Tabular Editor or ALM Toolkit (the endpoint format is shown after this list).
3. (Advanced) Deploy model changes (made to a PBIX file) to a workspace via REST APIs.
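For reference (relevant to option 2 above), a workspace’s XMLA endpoint connection string follows this pattern, where you replace the workspace name with your own (spaces are allowed):

```
powerbi://api.powerbi.com/v1.0/myorg/Your Workspace Name
```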

 

APPROACH 1: USING TABULAR EDITOR (FABRIC, PREMIUM, OR PPU)

In the first approach, the model is on a Fabric, Premium, or PPU license mode workspace, and you recover the metadata using Tabular Editor, overwriting the model.bim file in an empty PBIP file. The scenario diagram for this approach is below:

How to recover a model from a Fabric, Premium, or PPU workspace using Tabular Editor and PBIP files.

I’m currently sick, so I’m abridging this article and summarizing the steps below. You can find a full write-up of the approach in the Tabular Editor docs, here.

  • Step 1: If the model is in a workspace that’s on Fabric capacity, Premium capacity, or uses the Premium Per User license mode, then you can connect to it using XMLA endpoints. This lets you connect to a model with external tools to get access to advanced features or productivity enhancements. Click here to find out where to get the connection string for your workspace.

  • Step 2: Connect to the model from Tabular Editor. If your model has incremental refresh enabled or uses automatic aggregations, you need to disable that stuff.

  • Step 3: Once connected, you can then prepare to save your model metadata.

  • Step 4: Create an empty PBIP file where you’ll overwrite the model metadata. This is the same as described in the previous section.

  • Step 5: The model metadata in a PBIP file is located in the .Dataset folder. You can overwrite this model.bim file from Tabular Editor.

    If you’ve enabled the option to “Store semantic model metadata using TMDL format” then this metadata is in the TMDL format, and you’ll need to ensure that the serialization settings of Tabular Editor align with the serialization done by Power BI Desktop. Serialization just means how the tool breaks apart the metadata into individual files, something which is handy for source control. If you don’t know what any of that means, don’t worry about it; just use the .bim format and ensure that you’ve saved the empty PBIP with the TMDL option disabled.

  • Step 6: You need to open the PBIP file where you’ve just overwritten the model.bim. The first thing that you should do is go into the “Power Query” UI (“Edit Data” or “Transform Data”) and check each query. Generally, it’s best if you let each query preview load. Then, click “Close & Apply”. If it’s an import model, you’ll now load the data to the model. You can save the file as a PBIP or as a PBIX and continue.

This approach works fine; I’ve done it dozens—likely hundreds—of times. However, it’s not officially supported, so make sure that you validate the model before continuing. Notably, this approach only works if you have XMLA endpoints enabled and the model is published to a Fabric, Premium, or PPU workspace. If the model is on a Pro workspace or you prefer an approach similar to the previous section, you can use semantic-link-labs.

 

APPROACH 2: USING NOTEBOOKS AND SEMANTIC-LINK-LABS

To recover a model, you can also take an approach identical to what we did with the report metadata, earlier. This approach differs only slightly in that the code is a bit simpler. Notably, if the model has incremental refresh, then you will need to disable it first, which might be complex.

The scenario diagram for this approach is below:

How to recover a model from a workspace using notebooks and semantic-link-labs.

The steps for this approach are below:

  • Step 1: Create a Fabric workspace and add a notebook, lakehouse, and environment item. You can also re-use the ones that you used for recovering a report (or something else). The Fabric capacity can be a trial or a smaller F2 SKU.

  • Step 2: Ensure that you install semantic-link-labs before proceeding. This is best done in a custom environment.

  • Step 3: In the notebook, retrieve the .bim definition from the target model in the target workspace. The target workspace can be a Pro workspace. The code for this looks something like the below:
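Here’s a sketch of that cell. I’m assuming the `get_semantic_model_bim` function and its `save_to_file_system` parameter, so verify both against the current semantic-link-labs documentation.

```python
import sempy_labs as labs

# Retrieves the model.bim definition of the target model and, with
# save_to_file_system=True, writes it to the attached lakehouse.
labs.get_semantic_model_bim(
    dataset="5E-Monsters-BasicRules",  # the target model's name
    workspace="ProWorkspace",          # its workspace (Pro license mode is fine)
    save_to_file_system=True,
)
```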

Notebook cell to get the model.bim of the model “5E-Monsters-BasicRules” from the “ProWorkspace”. It saves it to the lakehouse as <model name>.bim (so 5E-Monsters-BasicRules.bim in this case). You can then retrieve the .bim using OneLake Explorer.

  • Step 4: Retrieve the .bim file using OneLake Explorer.

  • Step 5: Create and save a new, empty PBIP file. Ensure that the “TMDL” option is disabled.

  • Step 6: The model metadata in a PBIP file is located in the .Dataset folder. Copy and paste the model.bim file from OneLake Explorer to the PBIP, overwriting the original model.bim.

  • Step 7: Same as with the other approach, you need to open the PBIP file where you’ve just overwritten the model.bim. The first thing that you should do is go into the “Power Query” UI (“Edit Data” or “Transform Data”) and check each query. Generally, it’s best if you let each query preview load. Then, click “Close & Apply”. If it’s an import model, you’ll now load the data to the model. You can save the file as a PBIP or as a PBIX and continue.

    After Step 7, you should also perform cleanup by removing the model.bim from the lakehouse and pausing your capacity, if it’s not needed.

This approach is fairly straightforward if you have access to a Fabric notebook and lakehouse. Note that you might also try retrieving the model.bim using the REST APIs and taking a similar approach, but I didn’t test that, because those APIs drive me nuts.

Like Approach 1, this is also not officially supported, but it’s a good workaround.

 

APPROACH 3: USING GIT INTEGRATION

If the model is published to a Fabric, Premium, or PPU workspace, you can also use Git integration. This approach is described by Marc Leijveld on his blog. I also think that this approach should work with reports, to be honest. It too is not officially supported, but it’s good to have multiple options available.

 

APPROACH 4: USING VS CODE AND THE FABRIC STUDIO EXTENSION

Mathias Thierbach let me know that this approach (copying/replacing PBIP/PBIR metadata) also works if you use the Fabric Studio VS Code extension created by Gerhard Brueckl. This extension is available for free and uses the Fabric REST APIs. I haven’t tried this approach out myself, but if you use VS Code and this extension, it could be the best option for you.

 

APPROACH 5: USING POWERSHELL

James Bartlett also has an approach to do this via PowerShell that leverages pbi.tools and the Power BI REST APIs. James details the requirements to run and use the script in the script itself; if you are familiar with PowerShell, this could be the best option for you.

Fun fact: To my knowledge (and James’ credit) this is actually the first approach that was capable of doing this!


PART 3: RECOVER BOTH A MODEL AND REPORT

For most people, parts 1-2 should let you recover what you need. However, some may prefer that the model and report are in the same file. In this case, you simply combine Part 1 with one of the approaches in Part 2, adding the report and model metadata to the same empty PBIP file. Then, you can continue with that file once you’ve handled the data (i.e. loaded it for an import model).

However, you will need to make one additional adjustment to the report definition file: specifically, the datasetReference key/property. You must remove the part of the definition that says “byConnection” and instead add new information that says “byPath”. The latter refers to the local model in the same file, while the former refers to a live connection to the published semantic model.

An example of this is below:
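Here’s a sketch of that change in definition.pbir. The exact byConnection properties vary, so treat the “before” block as illustrative; the safest reference is a PBIP that Power BI Desktop saved itself.

Before, with a live connection to the published model:

```json
{
  "version": "1.0",
  "datasetReference": {
    "byConnection": {
      "connectionString": "Data Source=powerbi://api.powerbi.com/v1.0/myorg/ProWorkspace;Initial Catalog=ProModel",
      "pbiModelDatabaseName": "ProModel",
      "connectionType": "pbiServiceXmlaStyleLive"
    }
  }
}
```

After, pointing to the local model in the same PBIP:

```json
{
  "version": "1.0",
  "datasetReference": {
    "byPath": {
      "path": "../EmptyPBIFileName.Dataset"
    }
  }
}
```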

When you want to change a report connection from a remote source (live connection to a published model) to a local source (the model in the same file as the PBIP) you need to change it from “byConnection” to “byPath”. This shows an example of a connection change from the “ProModel” in the “ProWorkspace” to the model in the same file called “EmptyPBIFileName”.

You’re now free to continue working with your new, thick boi report. Note that I haven’t tested this specific approach in some time; you might need to take additional, minor steps.

 

TO CONCLUDE

In certain circumstances, you might need to open a report or model in Power BI Desktop, but you can’t download it from the service. In these cases, the Download this file option is greyed out, and you might seem stuck. However, you can recover the report with semantic-link-labs, using a Fabric notebook and lakehouse (together with OneLake Explorer) to retrieve and use the report metadata. This approach is a pretty straightforward workaround, and it works even if the report is published to a workspace not on Fabric capacity.

To recover a model, you can use Tabular Editor (to save the .bim or TMDL model metadata) or Git integration if the model is published to a Fabric, Premium, or PPU workspace. Alternatively, you can use the same approach as recovering the report, but retrieve the model metadata (.bim) instead of the report metadata.


Potential conflict-of-interest disclaimer:

In the interest of transparency, I declare here any potential conflicts-of-interest about products that I write about.

Tabular Editor and Tabular Tools: I am paid by Tabular Editor to produce trainings and learning material.

Microsoft Fabric and Power BI: I am part of the Microsoft MVP program, which you can read about here. The MVP Program rewards community contributions, such as the articles I write in my spare time. The program has benefits such as “early access to Microsoft products” and technical subscriptions, like those for Visual Studio and Microsoft 365. It is also a source of valuable community engagement and inter-personal support from many talented individuals around the world.

I am also paid by Microsoft part-time to produce documentation and learning content.

I share my own personal opinions, criticisms, and thoughts without influence or endorsement from Microsoft, Tabular Editor, and Tabular Tools, and do my best to remain objective and unbiased.
