Managing Power BI reports or models using semantic-link-labs

MANAGE AND AUTOMATE REPORTS OR MODELS

…by using semantic-link-labs in a notebook in Microsoft Fabric.


TL;DR: Use semantic-link-labs in a Fabric notebook to streamline a number of report or model management tasks and enhance your productivity. Install the library, write some simple Python code, and use or extend the existing functionality. Semantic-link-labs complements existing tools and opens up a number of new, interesting possibilities, particularly for reports. Much more will be possible when the Power BI enhanced report format (PBIR) is out of preview.


Thus far, the part of Microsoft Fabric that I’ve personally found the most interesting is not Copilot, Direct Lake, or its data warehousing capabilities, but a combination of notebooks and simple file/table storage via Lakehouses. Specifically, the library semantic link and its “expansion pack” semantic-link-labs, spearheaded by Michael Kovalsky. These tools help you build, manage, use, and audit the various items in Fabric from a Python notebook, including Power BI semantic models and reports.

Semantic-link-labs provides a lot of convenient functions that you can use to automate and streamline certain tasks during Power BI development, both of models and of reports. I'm particularly interested in the reporting functionality, because this is where I typically lose the most time, and because there is a drought of tools that address this area.

The purpose of this article is twofold:

1. To introduce semantic-link-labs and explain its various use-cases for a Power BI professional.
2. To explain how different tools complement one another in the lifecycle management of a model or report.

This article is informative; I'm not trying to sell you Fabric, nor am I endorsing any particular tool, feature, or approach.

Note: I'm currently sick as I write this article, so certain things might not be to my usual style or standards; sorry for that, but I know that I won't be motivated to write this tomorrow.

 

GET STARTED WITH SEMANTIC-LINK-LABS

If you learn better by doing rather than reading, I’ve written up two articles that describe use-cases of semantic-link-labs:

  1. Recover reports or models that you can’t download from the Power BI service.

  2. View, copy, and modify multiple visuals, pages, or Power BI reports at once.

 

WHAT YOU NEED TO START WORKING WITH SEMANTIC-LINK-LABS

You need the following:

  1. A Fabric workspace where you can create a new notebook (and ideally also a lakehouse).

  2. Either an environment where you can install the semantic-link-labs library, or else you install it in each notebook before you use the library (%pip install semantic-link-labs).

  3. Familiarity with the semantic-link-labs docs, which you can browse to find useful functions that apply to your scenario.

The functions that you use in semantic-link-labs are fairly basic, so you don't need an extensive grasp of the Python language. However, if you do know Python, chances are that you will be able to extend this functionality significantly. And even if your understanding of Python is limited, you can make good use of tools that leverage LLMs to help you write the code you need; just make sure you research and understand it first!
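To give a sense of how little code is involved, here's a minimal sketch of a first notebook cell. I'm assuming the list_datasets and list_reports functions from semantic link as they appear in the docs at the time of writing, so check them against the version you install.

```python
# A minimal sketch of getting started in a Fabric notebook.
# Install the library in the notebook session (skip this if it's already in your environment).
%pip install semantic-link-labs

import sempy.fabric as fabric  # semantic link
import sempy_labs as labs      # semantic-link-labs (used in later examples)

# List the semantic models and reports in the current workspace as DataFrames.
datasets = fabric.list_datasets()
reports = fabric.list_reports()

display(datasets)
display(reports)
```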

 

WHERE SEMANTIC-LINK-LABS AND NOTEBOOKS FIT INTO POWER BI

When you develop models and reports in Power BI, you typically use more tools than just Power BI Desktop and the Power BI service. Many different tools support Power BI development, from official tools like the various parts of Fabric or Microsoft 365, to third-party tools that address specific problems during the model or report creation process.

I'd argue that semantic-link-labs is unique for several reasons:

  • You can use it in a similar way for both Power BI semantic models and reports.

  • The notebook for semantic-link-labs must be in a Fabric workspace, but the target items that you manage or view don’t necessarily have to be in a Fabric or even a Premium or PPU workspace. This is a huge benefit, because it means that people can use a small F2 capacity with a workspace for automating and managing items in their other Pro or PPU workspaces, for instance.

  • It has a wide breadth of possible use-cases; you can use it across many stages of model or report lifecycle management.

  • It's a library tailored for use in Fabric notebooks, making it useful for documentation and re-use of notebooks across scenarios, which lends itself well to atomic design.

  • It's a notebook-based tool that requires you to write Python; it doesn't have a user interface. Since notebooks can be scheduled, managed, and monitored in Fabric, this lends itself well to automation.

Here are some examples of things you can do with semantic-link-labs:

  • Build: View or make changes to model and report metadata. For instance, you can apply standalone DAX templates from a notebook.

  • Test: Run DAX queries or a Best Practice Analyzer against a model on a schedule (or in response to a trigger, with some set-up) to automatically detect anomalies or deviations from tolerable ranges (see the sketch after this list).

  • Deploy: Copy models and reports between workspaces, or use other programmatic techniques to orchestrate or assist deployment, e.g. by using the REST APIs.

  • Manage: Analyze, browse, and modify both models and reports; typically, this involves using code to streamline repetitious tasks across multiple models or reports. You can also make changes to other Fabric items, like lakehouses, and to workspaces.

  • Audit/Optimize: Get an overview of items, like the number of reports or semantic models, or run best practice analyzers across multiple models in a workspace or tenant.

  • Monitor: Gather metadata and statistics about models and reports to monitor changes, quality, and (with other supporting tools or APIs) usage in a custom monitoring solution with alerting.
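To make the "Test" example above concrete, here's a minimal sketch of what such a scheduled notebook could look like. The model and workspace names are hypothetical, and I'm assuming the run_model_bpa and evaluate_dax functions as described in the docs at the time of writing; verify them against your installed version.

```python
# A minimal sketch of a scheduled "test" notebook; names are hypothetical and
# the run_model_bpa / evaluate_dax functions should be verified against the docs.
import sempy.fabric as fabric
import sempy_labs as labs

dataset = "Sales"            # hypothetical semantic model name
workspace = "Finance [Dev]"  # hypothetical workspace name

# Run the Best Practice Analyzer rules against the model.
labs.run_model_bpa(dataset=dataset, workspace=workspace)

# Run a DAX query and check a key figure against a tolerable range.
df = fabric.evaluate_dax(
    dataset=dataset,
    dax_string='EVALUATE ROW("Total Sales", [Total Sales])',
    workspace=workspace,
)
total_sales = df.iloc[0, 0]
assert total_sales > 0, "Total Sales returned an unexpected value."
```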

However, I think it's also important to understand that notebooks, semantic link, and semantic-link-labs don't replace existing tools. Rather, these new tools complement the existing ones that you already have. To illustrate this, consider the following sections, which give an overview of some common tools that I typically consider to help me during both model and report development.

A clarification Kobold appears. Clearing its throat, it pulls out a scroll:
You don't need any tools other than Power BI Desktop to make a good semantic model or report in Power BI. However, you will generally find that using these tools helps you save time, gives you more options, and makes the process more convenient and less painful overall.

Goblin Tip:

You can also use Semantic Link to connect to and consume a semantic model. For example, you might use the tables and measures in a model and combine this business logic with other, disparate data sources for a specific analysis.
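As a quick sketch of what that can look like (with hypothetical model, measure, and column names; evaluate_measure is part of semantic link, but check the signature for your version):

```python
# A minimal sketch of consuming a semantic model with semantic link;
# the model, measure, and column names below are hypothetical.
import sempy.fabric as fabric

# Evaluate an existing measure, grouped by model columns, and get the result
# back as a pandas DataFrame that you can join to other data sources.
df = fabric.evaluate_measure(
    dataset="Sales",
    measure="Total Sales",
    groupby_columns=["Date[Year]", "Product[Category]"],
)
display(df)
```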

You can also of course use notebooks to connect to and transform data sources, either data that's already in Fabric, or external data sources that you want to transform and land in OneLake.

I've written an article discussing these various use-cases in brief, which you can find here.

 

TOOLS DURING MODEL DEVELOPMENT

The following is a high-level overview of the process of developing a model, and the various tools that I consider or use at each step. Some niche tools are excluded for conciseness. Notebooks and semantic link / semantic-link-labs are indicated with ▼.

A clarification Kobold appears. Clearing its throat, it pulls out a scroll:
Note that this is a subjective overview of how I see it; you might see it differently or use other tools that aren’t listed here, and that’s fine. This is not an overview of all tools or their objective capabilities!

The point is that different tools are used at different stages, and that semantic-link-labs covers a wide breadth of use-cases.

Different tools that I consider along each step in the model development process. This is a high-level, subjective overview, intended to illustrate that different tools fill different niches and have different purposes.

Dashed lines indicate a tool that is used very situationally. For instance, I only generally use ALM toolkit when I really need to compare two semantic models and identify what's changed. Likewise, I only use Power BI Desktop after testing a model if I really have to; otherwise, I generally avoid using Power BI Desktop once the model is ready for deployment.

Model design involves the activities during the planning and requirements gathering phase of the semantic model lifecycle. For me, this means designing the model and thinking through the functionality, including possible DAX and in some cases even Power Query transformations. Tools at this stage help me illustrate how the model will look and work, and highlight key functionality or risks. I might make wireframes, mock-ups, or prototypes.

Design is not limited to reports.


Building a model involves actually adding objects, writing code, and so forth. Tools I use at this stage either do that for me manually (e.g. Power BI Desktop) or automatically by applying templates that I've been curating over the last months and years (e.g. Tabular Editor or semantic-link-labs). They might also just support the process.


Testing a model involves ensuring that the model performs as expected, both in terms of the results it returns and its performance / functionality. Tools I use at this stage support the testing process with myself, peers, or users, or they automate or facilitate streamlining tests by running queries against the model or validating results and metadata against baselines and rules.


Deploying a model involves copying its metadata into or between workspaces. Tools I use at this stage do this or facilitate it, either in full or partially, or manually or automatically.

(Of course, in more mature scenarios deployment involves more than this, but remember that this is a high-level overview.)


Managing a model involves checking or changing a model as it is being used throughout its lifecycle. A concrete example might be implementing incremental refresh, changing a TOM property, adding or changing object definitions, or (re-)organizing the model. This is a broad category. Model management might be reactive or proactive, and might be manual in response to a request, or automated in response to monitoring triggers. Tools I use here either support or facilitate these changes.
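For example, here's a minimal sketch of one such management task: adding a measure to a published model through the TOM wrapper in semantic-link-labs. The model, table, and measure names are hypothetical, and connect_semantic_model and add_measure should be verified against the current docs.

```python
# A minimal sketch of a "manage" task via the TOM wrapper in semantic-link-labs.
# Names are hypothetical; verify connect_semantic_model / add_measure in the docs.
from sempy_labs.tom import connect_semantic_model

# Open the published model for writing, make a change, and save on exit.
with connect_semantic_model(dataset="Sales", readonly=False) as tom:
    tom.add_measure(
        table_name="Sales",
        measure_name="Total Sales (Net)",
        expression="[Total Sales] - [Total Returns]",
    )
```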


Auditing/optimizing a model involves making targeted improvements to achieve better results—typically performance. Auditing is more about checking models you didn’t make, while optimizing could be something you do on your own model or someone else’s. Tools I use here help me get an overview of model contents, find bottlenecks or problems, and then solve those problems and make changes. This is by far the area where I use the most tools, because you’re sleuthing a bit and each scenario is different. Some optimizations might just involve turning off “Auto Date/Time” in Power BI Desktop, but most require a deeper investigation…


Monitoring a model involves gathering data about the model contents, metadata, or usage over time to inform certain decisions and actions. Tools I use here typically facilitate that data gathering, reporting, and alerting. Monitoring isn’t a one-off or ad hoc task; it is something that must be done regularly to get benefit.


As you can see in the diagram, some tools fill a very specific niche (like Figma, which I use to design model wireframes or plan logic), while others apply across much of the model lifecycle (like Tabular Editor or semantic-link-labs).

Additionally, note that these tools are used in parallel; overlap does not imply redundancy. For instance, several tools overlap during the build stage, but each has its own place:

  • Power BI Desktop: For import models, I use the Power Query editor to do transformations, if necessary, because it is the most appropriate tool to do this. The Power Query user interface is effective, and you can add custom code or functions, if necessary, as well as organize queries and add comments. I might also use Power BI Desktop to set up some visuals to aid in validation as I build.

  • Tabular Editor: For all models, I use Tabular Editor to do the majority of development. Tabular Editor is helpful because you can see the model as you work. In Tabular Editor 3, you can also organize and separate windows, so you can create a whole workspace that responds to the context of what you select and what you’re doing. The development tasks I do in Tabular Editor include creating relationships, adding DAX, and applying templates and patterns that I’ve saved as C# scripts. Finally, I also organize the model and do some basic validation in Tabular Editor, either with C# scripts or by using the advanced features in Tabular Editor 3. If I only have access to Tabular Editor 2, I usually split development more evenly between Power BI Desktop and Tabular Editor 2.

  • Copilot: Honestly, I do not use Copilot today, but I can foresee future scenarios where, if I have access to it, I might use it to generate occasional DAX queries or code, generate descriptions, or add comments to measures. This would all be subject to my own validation, and very situational.

  • Bravo: For import models, I use Bravo to add date table templates, particularly if the date table differs from my standard template which I use most of the time. Bravo also can be a convenient way to apply time intelligence patterns or get an overview of model size.

  • Semantic link / semantic-link-labs: For certain models, I might use these libraries in a notebook to automate testing as I develop, running periodic DAX queries and comparing to other data sources. I would also use them to migrate an import model to Direct Lake, if necessary, since there is an entire process designed for this.

This is even better exemplified during the test phase:

  • Power BI Desktop: I use it for visual-driven testing, either because it’s more effective or because the visuals will be used in reports.

  • Tabular Editor: I use it for ad hoc testing, or deep testing of the model and DAX. This is particularly helpful with the DAX queries and DAX debugger of Tabular Editor 3 if the DAX is complicated. For import models, I lean heavily on TE3’s integration of the VertiPaq Analyzer, because I can get a picture of the model size, take immediate action to check values, make adjustments to the Power Query (M) of table partitions, and see the effect on the VertiPaq statistics.

  • Semantic link / semantic-link-labs: I use them for automating testing, comparing data sources, or scaling these tests over multiple models, when necessary (see the sketch after this list). One nice example is setting the VertiPaq annotations, which stores the VertiPaq statistics as model annotations that the Best Practice Analyzer can pick up automatically, either from the notebook or in Tabular Editor.

  • DAX Studio: I use it for performance testing and optimization of DAX evaluation times, typically of (cleaned-up) queries from visuals (via the performance analyzer).
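Here's a minimal sketch of scaling a couple of these checks across every model in a workspace. The workspace name, the "Dataset Name" column, and the vertipaq_analyzer and evaluate_dax functions are assumptions based on the docs at the time of writing, so verify them for your version.

```python
# A minimal sketch of scaling simple checks across every model in a workspace;
# function names and the "Dataset Name" column are assumptions to verify in the docs.
import sempy.fabric as fabric
import sempy_labs as labs

workspace = "Finance [Dev]"  # hypothetical workspace name

for dataset in fabric.list_datasets(workspace=workspace)["Dataset Name"]:
    # Collect VertiPaq size statistics for the model.
    labs.vertipaq_analyzer(dataset=dataset, workspace=workspace)

    # Run a trivial DAX smoke test to confirm the model is queryable.
    fabric.evaluate_dax(
        dataset=dataset,
        dax_string='EVALUATE ROW("Check", 1)',
        workspace=workspace,
    )
```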

As discussed in previous articles, automated testing is an important part of ensuring a quality solution and following DataOps principles, which can help you avoid issues before they become a problem for users.

Again, this is not intended to be representative of the general situation, nor is it advocating for a particular approach. It's just an explanation of my subjective thoughts about how these tools fit together.

 

TOOLS DURING REPORT DEVELOPMENT

The following is a high-level overview of the process of developing a report, and the various tools that I consider or use at each step. Some niche tools are excluded for conciseness. Notebooks and semantic link / semantic-link-labs are indicated with ▼.

A clarification Kobold appears. Clearing its throat, it pulls out a scroll:
Again, this is a subjective overview of how I see it; you might see it differently or use other tools that aren’t listed here, and that’s fine. This is not an overview of all tools or their objective capabilities!

I'm just pointing out how much you can do with semantic-link-labs in notebooks.

Different tools that I consider along each step in the report development process. This is a high-level, subjective overview, intended to illustrate that different tools fill different niches and have different purposes.

Dashed lines indicate a tool that is used very situationally. For instance, I very rarely, if ever, use report BPAs, because they often focus on specific technical things and don't test well for meaningful criteria that users find important. Exceptionally, I might use these when there are many reports to test or audit. Also, Copilot is included, but not highlighted in any stage, because I don't see a meaningful use of Copilot during any stage in report development compared to other (non-AI) tools or approaches, which are still faster and more convenient.

Note that "optimize" for reports includes only changes to the report and not the underlying model, in this diagram.

Report design involves defining what the report will look like and how it will work during the planning and requirements gathering stage. This is in my opinion the most important part of a reporting project, but that’s because I have been taught to take a design-thinking or visual approach to projects. Tools that I use here help me create the design, be it making report wireframes, mock-ups, and prototypes, or planning how it will work.

In Power BI, since the report and model are inextricably linked, I have to use model tools during this step too to plan the model objects that will be needed. This is the main reason why I include model tools with a UI/UX like Power BI Desktop and Tabular Editor. Note that I exclude many design tools that I use for things like streamlining fonts, colors, themes, etc. I only include the most prominent tools here.

While it’s technically feasible to use code-based and notebook tools like semantic-link-labs in this step, that is something that really doesn’t work for me. For me, design must be visual. For you, maybe it’s different.

Design is not the same as creating visuals.


Building a report involves creating the visuals and functionality. This involves both making and changing the report as well as report-specific objects. Tools here help with or automate this process, or facilitate it in some capacity.


Testing a report involves validating that it works as expected; that it meets the reporting requirements and can effectively convey information to users. During this stage my preferred approach is to do playtesting with users and gather qualitative, subjective feedback. If it must be quantified, I use time-to-learn or test how long it takes them to complete tasks with the report. The latter is rare, though. Very occasionally I’ll use report BPAs, but I generally find these to be ineffective since they don’t reflect user perspectives well enough, and I don’t get sufficient ROI from defining my own rules. These rules neglect the subjective greyness of visualization.

In more sophisticated scenarios and organizations this might involve things like UI automation with RPA (i.e. Selenium, Power Automate Desktop) or other techniques…


Deploying a report involves copying its metadata into or between workspaces. Tools here facilitate this. Again… high-level overview…


Managing a report involves making changes to or updating a report as it's used. Tools here either facilitate these changes or help you find or do things. For instance, when managing reports, I use PBI Explorer to more easily view “gotchas” like visual-level filters or edit interactions. Some of these management tasks might be ad hoc and others could be automated or scheduled; some might be proactive and others might be reactive to requests or changes in the business or user community.


Auditing/optimizing a report involves making it more useful or effective for users. This might mean making it more performant, but it generally means optimizing for the amount of time it takes users to do something, find something, or answer a question. My goal is typically to reduce the amount of time that users spend on my reports as much as possible. In a perfect world, users spend as little time on my reports as possible so they can focus on their tasks and responsibilities in the business, because that is almost never “looking at reports”.

In the past in more sophisticated scenarios or organizations this might involve more customized tools…


Monitoring a report involves checking its contents or usage over time to inform certain decisions and actions. Tools I use here typically facilitate that data gathering, reporting, and alerting. Monitoring isn’t a one-off or ad hoc task; it is something that must be done regularly to get benefit.

Monitoring a report for me really focuses on usage and user sentiment / feedback.


Notably, the same pattern is reflected in the report development diagram; certain tools have particular niches, but overlap doesn’t imply redundancy. What’s unique to the report situation is that most tools are very niche; the only tools that apply across a broader spectrum are Power BI Desktop and semantic-link-labs. Unlike model development, which already has a rich ecosystem of tools that address various problems, report development doesn’t have many tools available.

The Layout file (report JSON metadata) in the PBIX is a big mess, which makes it hard to create good tools for reports.

In fact, until recently, it wasn’t even really conceivable to start building tools to work with Power BI reports. Most “tools” were actually custom visuals, or were very limited in scope. This was because the Power BI report metadata was practically unworkable; only brave souls like Mathias Thierbach with pbi.tools were able to break that ice and show us the possibilities. Now, however, the Power BI Project (PBIP) file format, and particularly the new Power BI enhanced report format (PBIR), open up many possibilities.

Most tools for Power BI reports are quite specific:

  • Figma: for design and for creating or translating prototypes into visuals.

  • Deneb: for creating custom visuals or using custom visual templates.

  • Tabular Editor: for planning and creating report-specific objects, or even using advanced C# scripts to apply certain templates (e.g. SVG visuals) or modify the PBIX or PBIP metadata programmatically (even if this is unsupported…).

  • PBI Explorer and PBI Inspector: for aiding in report auditing or optimization; particularly PBI Explorer, which shows things like hidden visuals, interactions, and so forth better than Power BI Desktop.

But here is where semantic-link-labs really stands out. Using semantic-link-labs, you can do all kinds of things with reports that weren’t possible before; there are so many areas where this tool is breaking new ground and opening new possibilities. This is particularly true of the new functions that only work on PBIR files; the PBIR format is currently in preview and has a lot of limitations, but it will eventually become the standard format for reports.

There are many examples of ways you can use semantic-link-labs throughout report development.
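For example, here's a minimal sketch that inventories the pages and visuals of a report saved in the PBIR format. The report and workspace names are hypothetical, and I'm assuming the ReportWrapper class and its list_pages / list_visuals methods as described in the library docs at the time of writing.

```python
# A minimal sketch of auditing a PBIR-format report with semantic-link-labs;
# the ReportWrapper class and its methods should be verified against the docs.
from sempy_labs.report import ReportWrapper

rpt = ReportWrapper(report="Sales Overview", workspace="Finance [Dev]")

# List the report's pages and visuals as DataFrames, e.g. to find hidden pages
# or to count visuals per page across many reports.
pages = rpt.list_pages()
visuals = rpt.list_visuals()

display(pages)
display(visuals)
```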

There are truly many, many possibilities.

Goblin Tip:

Don't get too attached to tools. Tools are just a means to an end. Use whatever best fits your scenario.

Some people develop strong attachments or biases toward tools. Remember that different people like and use different tools (or the same tools) in different ways, and that's perfectly fine. Depending on scenarios and preferences, different people might use different things.

Don't treat tools (like Qlik vs Tableau vs Power BI) like gaming consoles and debate like it's Nintendo vs. Sega or Playstation vs. Xbox. That's just silly, unhelpful, and kinda toxic.

 

TO CONCLUDE

Semantic-link-labs is a Python library that you can use in Fabric notebooks to help you with semantic model and report development. It presents a number of new possibilities to automate and streamline certain tasks, allowing you to improve your efficiency and productivity with Power BI.

Semantic-link-labs is a tool that you can use in parallel with other tools in the Power BI ecosystem, be it first- or third-party tools. If there is overlap, then you choose whichever tool best suits your situation or preference.

One area where semantic-link-labs shows particular promise is in the management of reports. Here, there has been a drought of tools that can help the average Power BI developer become more efficient and make the report creation process more convenient. There are already many use-cases, and as the library matures and the PBIR format becomes standard, it will only become more interesting to see and use.


Potential conflict-of-interest disclaimer:

In the interest of transparency, I declare here any potential conflicts of interest regarding products that I write about.

Tabular Editor and Tabular Tools: I am paid by Tabular Editor to produce trainings and learning material.

Microsoft Fabric and Power BI: I am part of the Microsoft MVP program, which you can read about here. The MVP program rewards community contributions, like the articles I write in my spare time. The program has benefits such as "early access to Microsoft products" and technical subscriptions like for Visual Studio and Microsoft 365. It is also a source of valuable community engagement and inter-personal support from many talented individuals around the world.

I am also paid by Microsoft part-time to produce documentation and learning content.

I share my own personal opinions, criticisms, and thoughts without influence or endorsement from Microsoft, Tabular Editor, and Tabular Tools, and do my best to remain objective and unbiased.
