Managing Power BI reports or models using semantic-link-labs

MANAGE AND AUTOMATE REPORTS OR MODELS

…by using semantic-link-labs in a notebook in Microsoft Fabric.


TL;DR: Use semantic-link-labs in a Fabric notebook to streamline a number of report or model management tasks and enhance your productivity. Install the library, write some simple Python code, and use or extend the existing functionality. Semantic-link-labs complements existing tools and opens a number of new, interesting possibilities, particularly for reports. Much more will be possible when the Power BI enhanced report format (PBIR) is out of preview.


Thus far, the part of Microsoft Fabric that I’ve personally found the most interesting is not Copilot, Direct Lake, or its data warehousing capabilities, but the combination of notebooks and simple file/table storage via Lakehouses. Specifically, I mean the library semantic link and its “expansion pack” semantic-link-labs, spearheaded by Michael Kovalsky. These tools help you build, manage, use, and audit the various items in Fabric from a Python notebook, including Power BI semantic models and reports.

Semantic-link-labs provides a lot of convenient functions that you can use to automate and streamline certain tasks during Power BI development, for both models and reports. I’m particularly interested in the reporting functionality, because that’s where I typically lose the most time, and because there’s a drought of tools to address this area.

The purpose of this article is twofold:

1. To introduce semantic-link-labs and explain its various use-cases for a Power BI professional.
2. To explain how different tools complement one another in the lifecycle management of a model or report.

This article is informative; I'm not trying to sell you Fabric, nor endorsing a tool, feature, or approach.

Note: I'm sick as I write this article, so certain things might not be up to my usual style or standards; sorry for that, but I know that I won't be motivated to write this tomorrow.

 

GET STARTED WITH SEMANTIC-LINK-LABS

If you learn better by doing rather than reading, I’ve written up two articles that describe use-cases of semantic-link-labs:

  1. Recover reports or models that you can’t download from the Power BI service.

  2. View, copy, and modify multiple visuals, pages, or Power BI reports at once.

 

WHAT YOU NEED TO START WORKING WITH SEMANTIC-LINK-LABS

You need the following:

  1. A Fabric workspace where you can create a new notebook (and ideally also a lakehouse).

  2. Either a new environment where you can install the semantic-link-labs library, or you have to install it in each notebook before you use the library (%pip install semantic-link-labs).

  3. Familiarity with the semantic-link-labs docs, where you can find useful functions to apply to your scenario.

The functions that you use in semantic-link-labs are fairly basic, so you don’t need an extensive grasp of Python. However, if you do know Python, chances are that you’ll be able to extend this functionality significantly. And even if your understanding of Python is limited, you can make good use of tools that leverage LLMs to help you write the code you need; just make sure you research and understand it first!
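As a rough sketch of what this looks like in practice: in Fabric, semantic link returns plain pandas DataFrames, so everything downstream is ordinary pandas. The report names and dates below are made up, and the Fabric-only calls are shown as comments so that the pandas logic can stand on its own.

```python
# In a Fabric notebook, you would first install the library in the session:
#   %pip install semantic-link-labs
# and then list items with semantic link, e.g.:
#   import sempy.fabric as fabric
#   reports = fabric.list_reports()   # DataFrame of reports in the workspace
#
# Below, a stand-in DataFrame mimics that kind of result so the filtering
# logic can be shown (and run) outside of Fabric.
import pandas as pd

reports = pd.DataFrame(
    {
        "Name": ["Sales Overview", "Ops Dashboard", "Scratchpad"],
        "Modified Date": pd.to_datetime(["2024-05-01", "2023-01-15", "2022-11-30"]),
    }
)

# Example task: find reports not modified since an (illustrative) cutoff date.
cutoff = pd.Timestamp("2023-06-01")
stale = reports[reports["Modified Date"] < cutoff]
print(stale["Name"].tolist())  # → ['Ops Dashboard', 'Scratchpad']
```

From there, any pandas technique you know (grouping, joining, exporting to a lakehouse table) applies directly to the returned metadata.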

 

WHERE SEMANTIC-LINK-LABS AND NOTEBOOKS FIT INTO POWER BI

When you develop models and reports in Power BI, you typically use more tools than just Power BI Desktop and the Power BI service. Many different tools support Power BI development, from official tools like the various parts of Fabric or Microsoft 365, to third-party tools that address specific problems during the model or report creation process.

I’d argue that semantic-link-labs is unique for several reasons:

  • You can use it in a similar way for both Power BI semantic models and reports.

  • The notebook for semantic-link-labs must be in a Fabric workspace, but the target items that you manage or view don’t necessarily have to be in a Fabric or even a Premium or PPU workspace. This is a huge benefit, because it means that people can use a small F2 capacity with a workspace for automating and managing items in their other Pro or PPU workspaces, for instance.

  • It has a wide breadth of possible use-cases; you can use it across many stages of model or report lifecycle management.

  • It’s a library tailored for use in Fabric notebooks, making it useful for documenting and re-using notebooks across scenarios, which lends itself well to atomic design.

  • It’s a notebook-based tool that requires you to write Python; it doesn’t have a user interface. Since notebooks can be scheduled, managed, and monitored in Fabric, this lends itself well to automation.

Here are some examples of things you can do with semantic-link-labs:

  • Build: View or make changes to model and report metadata. For instance, you can apply standalone DAX templates from a notebook.

  • Test: Run DAX queries or a Best Practice Analyzer against a model on a schedule (or in response to a trigger, with some set-up) to automatically detect anomalies or deviations from tolerable ranges.

  • Deploy: Copy models and reports between workspaces, or use other programmatic techniques to orchestrate or assist deployment, e.g. by using the REST APIs.

  • Manage: Analyze, browse, and modify both models and reports; typically, this involves using code to streamline repetitive tasks across multiple models or reports. You can also make changes to other Fabric items, like lakehouses, and even to workspaces.

  • Audit/Optimize: Get an overview of items, like the number of reports or semantic models, or run best practice analyzers across multiple models in a workspace or tenant.

  • Monitor: Gather metadata and statistics about models and reports to monitor changes, quality, and (with other supporting tools or APIs) usage in a custom monitoring solution with alerting.
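The "Test" bullet above can be sketched briefly. In Fabric, `fabric.evaluate_dax` returns a DataFrame of query results; the tolerance check itself is plain Python and runs anywhere. The model name, measure, and thresholds below are illustrative, not from the source.

```python
# In Fabric, you would fetch a measure value on a schedule, e.g.:
#   import sempy.fabric as fabric
#   result = fabric.evaluate_dax(
#       dataset="Sales Model",  # assumed model name
#       dax_string='EVALUATE ROW("Total", [Total Sales])',
#   )
#   actual = result.iloc[0, 0]
#
# The anomaly check itself is independent of Fabric:
def within_tolerance(actual: float, expected: float, pct: float) -> bool:
    """True if actual is within pct (e.g. 0.05 = 5%) of expected."""
    return abs(actual - expected) <= abs(expected) * pct

# Illustrative values: today's total vs. an expected baseline, 5% tolerance.
assert within_tolerance(actual=10_350.0, expected=10_000.0, pct=0.05)
assert not within_tolerance(actual=12_000.0, expected=10_000.0, pct=0.05)
print("checks passed")
```

A scheduled notebook running checks like this (and alerting on failure) is one simple way to apply the "detect deviations from tolerable ranges" idea.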

However, I think it’s also important to understand that notebooks, semantic link, and semantic-link-labs don’t replace existing tools. Rather, these new tools complement the ones you already have. To illustrate this, consider the following sections, which give an overview of some common tools that I typically use during both model and report development.

A clarification Kobold appears. Clearing its throat, it pulls out a scroll:
You don't need tools other than Power BI Desktop to make a good semantic model or report in Power BI. However, you will generally find that these tools help you save time, give you more options, and make the process more convenient and less painful overall.

Goblin Tip:

You can also use Semantic Link to connect to and consume a semantic model. For example, you might use the tables and measures in a model and combine this business logic with other, disparate data sources for a specific analysis.

You can also of course use notebooks to connect to and transform data sources, either data that's already in Fabric, or external data sources that you want to transform and land in OneLake.

I've written an article discussing these various use-cases in brief, which you can find here.

 

TOOLS DURING MODEL DEVELOPMENT

The following is a high-level overview of the process of developing a model, and the various tools that I consider or use at each step. Some niche tools are excluded for conciseness. Notebooks and semantic link / semantic-link-labs are indicated with ▼.

Note that this is a subjective overview of how I see it; you might see it differently or use other tools that aren’t listed here, and that’s fine. The point is that different tools are used at different stages, and that semantic-link-labs covers a wide breadth of use-cases.

Different tools that I consider along each step in the model development process. This is a high-level, subjective overview, intended to illustrate that different tools fill different niches and have different purposes.

Dashed lines indicate a tool that is used very situationally. For instance, I generally only use ALM Toolkit when I really need to compare two semantic models and identify what's changed. Likewise, I only use Power BI Desktop after testing a model if I really have to; otherwise, I generally avoid using Power BI Desktop once the model is ready for deployment.

As you can see in the diagram, some tools fill a very specific niche (like Figma, which I use to design model wireframes or plan logic), while others apply across much of the model lifecycle (like Tabular Editor or semantic-link-labs).

Additionally, note that these tools are used in parallel; overlap does not imply redundancy. For instance, several tools overlap during the build stage, but each has its own place:

  • Power BI Desktop: For import models, I use the Power Query editor to do transformations, if necessary, because it is the most appropriate tool to do this. The Power Query user interface is effective, and you can add custom code or functions, if necessary, as well as organize queries and add comments. I might also use Power BI Desktop to set up some visuals to aid in validation as I build.

  • Tabular Editor: For all models, I use Tabular Editor to do the majority of development. Tabular Editor is helpful because you can see the model as you work. In Tabular Editor 3, you can also organize and separate windows, so you can create a whole workspace that responds to the context of what you select and what you’re doing. The development tasks I do in Tabular Editor include creating relationships, adding DAX, and applying templates and patterns that I’ve saved as C# scripts. Finally, I also organize the model and do some basic validation in Tabular Editor, either with C# scripts or by using the advanced features in Tabular Editor 3. If I only have access to Tabular Editor 2, I usually split development more evenly between Power BI Desktop and Tabular Editor 2.

  • Copilot: Honestly, I do not use Copilot today, but I can foresee future scenarios where if I have access to it, I might use it to generate occasional DAX queries or code, generate descriptions, or add comments to measures. This would be after my validation and very situational.

  • Bravo: For import models, I use Bravo to add date table templates, particularly if the date table differs from my standard template which I use most of the time. Bravo also can be a convenient way to apply time intelligence patterns or get an overview of model size.

  • Semantic link / semantic-link-labs: For certain models, I might use these libraries in a notebook to automate testing as I develop, running periodic DAX queries and comparing to other data sources. I would also use them to migrate an import model to Direct Lake, if necessary, since there is an entire process designed for this.

This is even better exemplified during the test phase:

  • Power BI Desktop: I use it for visual-driven testing, either because it’s more effective or because the visuals will be used in reports.

  • Tabular Editor: I use it for ad hoc testing, or deep testing of the model and DAX. This is particularly helpful with the DAX queries and DAX debugger of Tabular Editor 3 if the DAX is complicated. For import models, I lean heavily on TE3’s integration of the VertiPaq Analyzer, because I can get a picture of the model size, take immediate action to check values, make adjustments to the Power Query (M) of table partitions, and see the effect on the VertiPaq statistics.

  • Semantic link / semantic-link-labs: I use them for automating testing, comparing data sources, or scaling these tests over multiple models, when necessary. One nice example is setting the VertiPaq annotations, which stores the VertiPaq statistics as model annotations that the Best Practice Analyzer can pick up automatically, either from the notebook or in Tabular Editor.

  • DAX Studio: I use it for performance testing and optimization of DAX evaluation times, typically of (cleaned-up) queries from visuals (via the performance analyzer).
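One concrete shape this automated testing can take is comparing row counts per table between the model and its source. In Fabric, both sides would come from real queries (`fabric.evaluate_dax` against the model, Spark SQL against the source); the comparison itself is ordinary pandas. The table names and counts below are invented for illustration.

```python
# In Fabric, the two sides would come from real queries, e.g.:
#   model_counts  = fabric.evaluate_dax(dataset="Sales Model", dax_string=...)
#   source_counts = spark.sql("SELECT ... GROUP BY ...").toPandas()
# Here, two small stand-in DataFrames show the comparison itself.
import pandas as pd

model_counts = pd.DataFrame({"Table": ["Sales", "Customer"], "Rows": [1_000, 250]})
source_counts = pd.DataFrame({"Table": ["Sales", "Customer"], "Rows": [1_000, 260]})

# Join on table name and flag any table where the counts disagree.
merged = model_counts.merge(source_counts, on="Table", suffixes=("_model", "_source"))
merged["Match"] = merged["Rows_model"] == merged["Rows_source"]
mismatches = merged.loc[~merged["Match"], "Table"].tolist()
print(mismatches)  # → ['Customer']
```

A notebook like this can run after every refresh, writing results to a lakehouse table or raising an alert when `mismatches` is non-empty.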

As discussed in previous articles, automated testing is an important part of ensuring a quality solution and following DataOps principles, which can help you avoid issues before they become a problem for users.

Again, this is not intended to be representative of the general situation, or to advocate for a particular approach. It is just an explanation of my subjective thoughts about how these tools fit together.

 

TOOLS DURING REPORT DEVELOPMENT

The following is a high-level overview of the process of developing a report, and the various tools that I consider or use at each step. Some niche tools are excluded for conciseness. Notebooks and semantic link / semantic-link-labs are indicated with ▼.

To reiterate: this is a subjective overview of how I see it; you might see it differently or use other tools that aren’t listed here, and that’s fine. This isn’t an all-encompassing overview. The point is that different tools are used at different stages, and that semantic-link-labs covers a wide breadth of use-cases.

Different tools that I consider along each step in the report development process. This is a high-level, subjective overview, intended to illustrate that different tools fill different niches and have different purposes.

Dashed lines indicate a tool that is used very situationally. For instance, I very rarely, if ever, use report BPAs, because they often focus on specific technical things and don't test well for meaningful criteria that users find important. Exceptionally, I might use them when there are many reports to test or audit. Also, Copilot is included but not highlighted in any stage, because I don't see a meaningful use of Copilot during any stage of report development compared to other (non-AI) tools or approaches, which are still faster and more convenient.

Note that "optimize" for reports includes only changes to the report and not the underlying model, in this diagram.

Notably, the same pattern is reflected in the report development diagram; certain tools have particular niches, but overlap doesn’t imply redundancy. What’s unique to the report situation is that most tools are very niche; the only tools that apply across a broader spectrum are Power BI Desktop and semantic-link-labs. Unlike model development, which already has a rich ecosystem of tools that address various problems, report development doesn’t have many tools available.

The Layout file (report JSON metadata) in the PBIX is a big mess, which makes it hard to create good tools for reports.

In fact, until recently, it wasn’t really conceivable to start building tools to work with Power BI reports. Most “tools” were actually custom visuals, or were very limited in scope. This was because the Power BI report metadata was practically unworkable; only brave souls like Mathias Thierbach with pbi.tools were able to break that ice and show us the possibilities. Now, however, the Power BI Project (PBIP) file format, and particularly the new Power BI enhanced report format (PBIR), open up many possibilities.

Most tools for Power BI reports are quite specific:

  • Figma: used for design and for creating or translating prototypes into visuals.

  • Deneb: used for creating custom visuals or using custom visual templates.

  • Tabular Editor: for using advanced C# scripts to apply certain templates (SVG visuals) or even modify the PBIX or PBIP metadata programmatically (even if this is unsupported).

  • PBI Explorer and PBI Inspector: for aiding in report auditing or optimization; PBI Explorer in particular surfaces things like hidden visuals and interactions better than Power BI Desktop.

But here is where semantic-link-labs really stands out. Using semantic-link-labs, you can do all kinds of things with reports that weren’t possible before; there are so many areas where this tool is breaking new ground and opening new possibilities. This is particularly true of the new functions that only work on PBIR files; the PBIR format is currently in preview and has a lot of limitations, but it will eventually become the standard format for reports.

Here are some examples of ways you can use semantic-link-labs throughout report development:
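To make one such use-case concrete: under PBIR, each visual is stored as a small JSON document, so bulk inspection of a report reduces to walking JSON. The dicts below are simplified stand-ins (real PBIR files have more fields), and the semantic-link-labs call is shown only as a comment, since it requires Fabric and a PBIR-format report.

```python
# With semantic-link-labs, report internals can be listed from a notebook,
# e.g. (commented out; requires Fabric and a PBIR-format report):
#   from sempy_labs.report import ReportWrapper
#   rpt = ReportWrapper(report="Sales Overview", workspace="Dev")  # assumed names
#
# Under PBIR, each visual is a small JSON document, so bulk inspection is
# just JSON traversal. These dicts are simplified stand-ins for that metadata.
from collections import Counter

visuals = [
    {"name": "a1", "visual": {"visualType": "card"}, "isHidden": False},
    {"name": "b2", "visual": {"visualType": "slicer"}, "isHidden": True},
    {"name": "c3", "visual": {"visualType": "card"}, "isHidden": False},
]

# Example bulk questions: how many visuals of each type, and which are hidden?
type_counts = Counter(v["visual"]["visualType"] for v in visuals)
hidden = [v["name"] for v in visuals if v["isHidden"]]
print(dict(type_counts), hidden)  # → {'card': 2, 'slicer': 1} ['b2']
```

The same traversal pattern scales to formatting changes, field replacements, or audits across every page of every report in a workspace.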

There are truly many, many possibilities.

 

TO CONCLUDE

Semantic-link-labs is a Python library that you can use in Fabric notebooks to help you with semantic model and report development. It presents a number of new possibilities to automate and streamline certain tasks, allowing you to improve your efficiency and productivity with Power BI.

Semantic-link-labs is a tool that you can use in parallel with other tools in the Power BI ecosystem, whether first- or third-party. If there is overlap, then you choose whichever tool best suits your situation or preference.

One area where semantic-link-labs shows particular promise is in the management of reports. Here, there has been a drought of tools that can help the average Power BI developer become more efficient and make the report creation process more convenient. There are already many use-cases, but as the library matures and the PBIR format becomes standard, this will become very interesting to see and use.


Potential conflict-of-interest disclaimer:

In the interest of transparency, I declare here any potential conflicts-of-interest about products that I write about.

Tabular Editor and Tabular Tools: I am paid by Tabular Editor to produce trainings and learning material.

Microsoft Fabric and Power BI: I am part of the Microsoft MVP program, which you can read about here. The MVP Program rewards community contributions, like articles such as these I write in my spare time. The program has benefits such as "early access to Microsoft products" and technical subscriptions like for Visual Studio and Microsoft 365. It is also a source of valuable community engagement and inter-personal support from many talented individuals around the world.

I am also paid by Microsoft part-time to produce documentation and learning content.

I share my own personal opinions, criticisms, and thoughts without influence or endorsement from Microsoft, Tabular Editor, and Tabular Tools, and do my best to remain objective and unbiased.
