Building Power BI Reports: Desktop vs Fabric
Why this comparison feels confusing
If you’re a Power BI report author who’s just getting into Microsoft Fabric, you’ve probably asked the same question I hear over and over: am I supposed to stop using Power BI Desktop now?
It’s a fair question. Power BI Desktop is a Windows app that has traditionally been the place where report authors do everything: get data, transform it, model it, and build the report. Microsoft even describes that “connect, shape/transform, then load” experience as part of how Power BI Desktop works with Power Query.
Fabric changes the feel of that workflow because Power BI is now also a first-class experience in the browser inside the Fabric portal. And that browser experience isn’t just “view and share” anymore. You can edit semantic models in the service, including using Power Query for import models and building reports directly from that same environment.
This shift also matters a lot for people who simply couldn’t rely on Power BI Desktop before. If you’re on a Mac, using a Chromebook, working from a locked-down corporate machine, or in an environment where installing desktop software requires jumping through weeks of approval hoops, the browser-based Power BI experience is a genuine unlock. For the first time, you can build reports and work with semantic models using nothing more than a modern browser. That alone explains why so many people are revisiting the “Desktop vs Fabric” question now—it’s no longer just about preference, it’s about access.
The goal here is simple: help you decide, when you’re starting a brand-new report, which authoring surface is going to feel smooth… and which one is going to make you mutter “why is that button gray?” (we’ve all been there).
The from-scratch paths
Let’s define “from scratch” the way report authors actually experience it:
You start with nothing, you need to acquire data, shape it, model it, and then build visuals on a canvas. That’s the full loop.
In Desktop, that loop is straightforward because it’s all in one place.
In Fabric (browser), there are two common “from scratch” starting points, and they matter:
One path is “create a new import semantic model in the service using Get data / Transform data, then create a report from it.” Microsoft documents this directly in the semantic model editing experience: you can add new import tables using Get data, shape them with Transform data (Power Query), and then create a report from that model.
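To make that first path concrete: whether you use Get data in Desktop or in the web editor, what gets produced per table is a single Power Query (M) expression. A minimal sketch of what Transform data steps look like under the hood — the server, database, and table names here are purely illustrative:

```powerquery-m
let
    // Hypothetical Azure SQL source — one of the connectors the web editor supports
    Source = Sql.Database("myserver.database.windows.net", "SalesDb"),
    SalesTable = Source{[Schema = "dbo", Item = "FactSales"]}[Data],
    // Each shaping action in Transform data adds a step like these
    KeptColumns = Table.SelectColumns(SalesTable, {"OrderDate", "CustomerKey", "Amount"}),
    Typed = Table.TransformColumnTypes(KeptColumns, {{"OrderDate", type date}, {"Amount", type number}})
in
    Typed
```

The point is that the artifact is the same in both surfaces; what differs is which connectors and editor features are available around it.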
The other path is “Create a quick report,” which is a simplified browser experience meant to get you moving fast (often by pasting data or starting from an existing semantic model). The quick report documentation is explicit about what it supports today and what it doesn’t.
Here’s the “picture in your head” diagram I recommend keeping around:
Desktop-first: Get Data (many connectors) → Power Query (full) → Model view (full) → Report view (full)

Fabric browser-first (import model route): Create semantic model (Get data) → Power Query online (import-only) → Web model editor (most core modeling) → Web report editor

Fabric browser-first (quick report route): Existing semantic model or paste data (limited) → Autogenerated visuals → Switch to full edit if needed
That last one is great for prototypes, demos, and “I just need to see something.” It is not the same thing as building a durable, repeatable solution. Microsoft even calls out that pasted data can’t be updated later without redoing the Create workflow.
Data import in practice
Power BI Desktop is still the “widest funnel” for data import. Desktop is built around connecting to one or many data sources through Power Query, and it’s where the connector ecosystem (including custom connectors) has been most complete historically.
In the Fabric browser authoring experience, you can absolutely bring in data for import models. But Microsoft lists several connector-related limitations for adding import tables and for enabling query editing and refresh in the web model editing experience. Custom connectors, along with a specific set of built-in connectors (including OLE DB, R, and Python), aren't supported for adding import tables in web editing, and models built on those connectors also don't support query editing in the web Power Query editor.
There’s also a “connection setup reality” that can surprise new Fabric authors: in the web Power Query experience, you can use existing personal cloud connections, but you can’t create new personal cloud connections inside the editor. That setup happens elsewhere.
And if you’re tempted to start with the quick report “Paste data” option, Microsoft is very explicit about the limits: no way to update the pasted data later, a 512 KB paste size cap, and other constraints like an eight-table limit and naming restrictions.
So the author takeaway is pretty clean:
If “from scratch” means “I don’t know what source I’ll need, and I might need some quirky connector or a custom connector,” Desktop is the safer, less-surprising start.
If “from scratch” means “my data is already in an environment Fabric understands well, and I’m building an import semantic model from supported connectors,” browser-first can work.
Data transformation: Desktop vs Fabric’s distributed approach
In Desktop, Power Query is the primary data transformation surface. It’s mature, predictable, and designed for shaping data before loading it into the model.
In Fabric (browser), the Power Query editor exists — but it’s not the only transformation option, and this is where the comparison changes.
Fabric doesn’t narrow transformation. It redistributes it.
Instead of assuming all shaping happens inside Power Query within the report tool, Fabric gives you multiple upstream transformation options:
- Dataflow Gen2 (Power Query at the service level)
- Spark notebooks for large-scale transformation
- T-SQL in the Warehouse
- Lakehouse table transformations
- Pipelines and orchestration
- Python-based or semantic link workflows
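The redistribution is easiest to see with Dataflow Gen2, which runs the same Power Query (M) language at the service level, upstream of any semantic model. A sketch of the kind of cleanup that moves out of the report and into a dataflow — all source and column names here are hypothetical:

```powerquery-m
let
    // Hypothetical staging source; in Dataflow Gen2 this shaping runs in the service,
    // once, and its output can feed many downstream semantic models
    Source = Sql.Database("myserver.database.windows.net", "StagingDb"),
    Orders = Source{[Schema = "stg", Item = "Orders"]}[Data],
    // Heavy, shared cleanup belongs here rather than in each report's queries
    NoNulls = Table.SelectRows(Orders, each [OrderId] <> null),
    Renamed = Table.RenameColumns(NoNulls, {{"cust_id", "CustomerId"}}),
    Typed = Table.TransformColumnTypes(Renamed, {{"OrderDate", type date}})
in
    Typed
```

Same language, different layer: the query is indistinguishable from one authored in Desktop, but it now lives in the platform rather than inside a report.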
In Desktop, Power Query is the workbench.
In Fabric, Power Query is one tool in a broader data platform.
The web Power Query editor is optimized for import models. It does not support Direct Lake or DirectQuery transformations, and dynamic data sources are not supported. You can edit parameters in the service, but you cannot create them there.
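Concretely, a Power Query parameter is just a query tagged with special metadata. You can adjust an existing parameter's current value in the service, but a definition like the sketch below can only be authored in Desktop (the parameter name, values, and exact metadata fields shown are illustrative of what Desktop generates, not a spec):

```powerquery-m
// A hypothetical parameter named "EnvironmentName" — creatable only in Desktop,
// though its current value can be edited in the web experience
"Prod" meta [
    IsParameterQuery = true,
    Type = "Text",
    IsParameterQueryRequired = true,
    List = {"Dev", "Test", "Prod"},  // allowed values offered in the parameter dropdown
    DefaultValue = "Dev"
]
```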
That sounds like a limitation — until you realize what Fabric expects.
Desktop assumes:
The report author shapes the data.
Fabric assumes:
The platform shapes the data. The author models and visualizes it.
If your organization is serious about Fabric, Power Query inside the report becomes a last-mile cleanup tool — not the primary ETL engine.
That’s not weaker. It’s architectural.
Data modeling and semantic model editing
For a long time, “modeling” was the easy part of this comparison: Desktop won.
That’s no longer true in a blanket way.
About a year ago, Power BI in Fabric added the ability to create and edit semantic models directly in the service: you can create measures, calculated columns, calculated tables, and relationships, set properties, and even define row-level security roles.
So yes, the web experience is a real modeling environment now.
But Microsoft also explicitly lists functional gaps between Desktop model view and the service. These include: not being able to change a table’s storage mode, no View as dialog, and other limitations like feature tables and certain data categories.
Another important difference shows up around relationship automation.
In Power BI Desktop, relationship creation can happen in two distinct ways when you load data. First, if you are connecting to a relational source like SQL Server or Azure SQL and that database has actual foreign key constraints defined, Desktop can read that metadata and automatically create relationships in the model during initial load (assuming auto-detect is enabled). In that case, it is not guessing — it is importing real source-defined relationships.
Second, when usable foreign key metadata does not exist — which is common in many warehouses, views, or lake environments — Desktop falls back to heuristics. It will attempt to infer relationships based on matching column names, compatible data types, and cardinality patterns (for example, detecting which side appears unique). This inference behavior is what most people think of as “auto-detect relationships.” It is convenient, but optional, and many experienced modelers disable it to avoid accidental joins.
In the Fabric browser experience, neither of those automatic behaviors occurs. When you import data using Power Query in the service, relationships defined in the underlying source system are not automatically brought into the semantic model. There is also no background auto-detect process that later scans for new relationships based on column names or data patterns. All relationships must be created explicitly by the author.
There’s also a workflow difference worth calling out: semantic model editing in the service uses AutoSave, and Microsoft notes changes are permanent with no option to undo in that experience.
And if you use DAX Query View: Microsoft documents that Desktop-saved DAX queries aren’t visible in the web DAX query view, and queries written in the web are discarded when you close the browser.
So modeling is now a “depends” conversation:
If your modeling needs stay within the web-supported feature set, the Fabric browser experience can work well.
If you need the full Desktop modeling surface area (and the deeper tooling workflows many modelers rely on), Desktop still feels like the more complete workbench.
Report visualization and the canvas experience
Here’s the part that surprises many people: the report canvas gap is smaller than the data prep and modeling gaps.
Microsoft describes editing view in the Power BI service as the place where you create and edit reports in the browser, similar to Report view in Desktop.
Microsoft also notes that the ribbon is the main part of the report editor that differs between Desktop and the service, and the actions available vary depending on what you select on the canvas.
In practical terms, if you’re building visuals, arranging a layout, formatting, and polishing a report, the browser experience is “close enough” that many authors can be productive quickly.
Custom visuals are not a major dividing line either. Microsoft documentation for importing visuals says you can import visuals from AppSource in both Power BI Desktop and the Power BI service, and you can also import from a file.
The place the browser report editor shines for brand-new authors is the quick report experience: paste data and let Power BI auto-generate visuals, then switch to full edit if you want. Just remember those quick report limitations (especially the “no way to update pasted data later” part).
Side-by-side comparison for building from scratch
The table below summarizes what matters most for report authors building a new report end-to-end.
| Stage | Power BI Desktop | Power BI in Fabric (browser) | Practical implication for new Fabric authors |
|---|---|---|---|
| Data import starting point | Get Data connects to many sources through Power Query. | Create supports quick reports (existing semantic model or paste data), and web semantic model editing supports Get data for import models. | Browser has two “modes”: quick reports (fast, limited) and import model creation/editing (more capable). |
| Connector breadth | Broader connector ecosystem, including custom connectors. | Web model editing can’t add import tables from custom connectors and a listed set (including OLE DB, R, Python). | If you suspect you’ll need niche connectors, Desktop is safer. |
| Paste / lightweight data entry | Not the typical primary path (most authors use Get Data). | Paste/manual entry is supported but capped at 512 KB and can’t be updated later; other limits apply. | Great for prototypes, fragile for maintained solutions. |
| Data transformation | Full Power Query authoring in Desktop. | Power Query editor exists for import semantic models; but only for import storage mode, not Direct Lake/DirectQuery. | If your Fabric work leans into Direct Lake, expect transformation to move upstream. |
| Dynamic data sources | Possible to author patterns that later hit service refresh constraints. | Dynamic data sources aren’t supported in the web Power Query editor. | Browser-first reduces certain risky patterns (sometimes by simply saying “no”). |
| Parameters | Create with Manage Parameters in Power Query. | Can edit/review parameter settings, but not create parameters. | Parameter-heavy transformation patterns push you toward Desktop authoring. |
| Core modeling | Full Model view; rich editing and inspection workflows. | Can create measures, calculated columns/tables, relationships, properties, and roles in the service. | Web modeling is real now for everyday modeling tasks. |
| Modeling gaps | Generally the “superset” experience. | Notable gaps: can’t change storage mode; View as dialog not supported; other listed gaps. | If you rely on those gaps, Desktop remains your main tool. |
| Relationship automation | Auto-detect relationships can import source-defined relationships and infer new ones using heuristics; behavior can be enabled or disabled. | No auto-detect feature; relationships are never inferred and must be explicitly defined. | Fabric favors deliberate modeling over convenience-driven guesses. |
| DAX query workflow | Desktop can save DAX queries with the model. | Web DAX query tabs are discarded on close; web doesn’t show Desktop-saved DAX queries. | For repeatable diagnostics, Desktop still feels more durable. |
| Report visualization | Report view in Desktop for authoring. | Editing view in the service for authoring; ribbon differs but core canvas editing is supported. | For visuals/layout, the gap is smaller than most people expect. |
| Custom visuals | Import visuals via AppSource or file. | Same ability to import visuals via AppSource or file in the service. | Not a major deciding factor for most report authors. |
How to choose without overthinking it
Here’s the bottom line I want you to walk away with:
If your “from scratch” report is going to live or die on data import and transformation flexibility, start in Desktop. You’ll get the widest connector coverage, the most mature Power Query patterns (including creating parameters), and fewer “this feature isn’t supported here” surprises. Desktop is still the most predictable, self-contained transformation workbench when you expect to do most of the shaping inside the report tool itself.
If your “from scratch” report is mostly about building visuals on top of a semantic model that already exists in the Fabric environment, the browser experience can be a perfectly legitimate authoring surface. The web report editor is capable, and web semantic model editing covers a lot of day-to-day modeling tasks now. And if your transformation work is happening upstream — in Dataflow Gen2, Spark, T-SQL, or Lakehouse processes — the browser experience fits naturally into that Fabric-first architecture.
And here’s the most Fabric-era statement I can make without drifting into platform debates:
It’s not really Desktop versus Fabric. It’s Desktop plus Fabric, and you choose the surface based on where your friction will be:
- If your friction is report-level data shaping (cleaning, merging, parameterizing inside the report), Desktop will usually feel smoother.
- If your friction is platform-level data engineering (Spark notebooks, Dataflow Gen2, T-SQL in a Warehouse, Lakehouse pipelines), that work belongs in Fabric — not in Desktop.
- If your friction is purely report-canvas design (building visuals, layout, formatting, iterating quickly), either surface can work — and the browser may be perfectly sufficient.
One last practical bridge worth knowing about: Microsoft documents that for Direct Lake models, the web experience can offer an “Edit in Desktop” path that launches live editing in Power BI Desktop (Windows-only), and Microsoft also documents how Desktop can create and live-edit Direct Lake semantic models via the OneLake catalog. That’s a big deal because it means you can stay in a Fabric-first architecture while still using Desktop as your authoring workbench when you need it.
Other feature differences worth knowing about
Beyond modeling and Power Query, there are a handful of features that still exist in Power BI Desktop (or related desktop tools) that don’t yet have full parity inside the Fabric browser experience. These aren’t deal-breakers for most authors—but they matter depending on your workflow.
Paginated reports are one example. These aren't created in Power BI Desktop at all—they're authored using Power BI Report Builder, a separate Windows tool. The service does offer a lightweight web experience for creating simple paginated reports from a semantic model, but full pixel-perfect authoring (invoices, regulatory forms, operational reports) is still a desktop story.
External Tools integration is another. Desktop supports launching tools like DAX Studio and Tabular Editor directly from the ribbon. Those tools are central to advanced modeling, diagnostics, and automation workflows. The Fabric browser experience has no equivalent because those tools rely on local integration.
Custom visuals discovery also feels different. You can use custom visuals in both Desktop and the service, but Desktop provides a richer, more integrated Visuals Marketplace experience for browsing and adding them directly from AppSource.
Q&A and natural language exploration exists in the broader Power BI service experience, but the tuning and authoring surface for it is still more mature in Desktop. If you’ve used Q&A heavily for model validation or exploration, you’ll notice the difference.
Power BI Goals (scorecards) live in the service ecosystem but aren’t part of the browser-based report authoring surface in Fabric. They operate alongside reports—not inside the modeling workflow.
None of these gaps mean the browser experience is weak. They simply reflect that Desktop evolved for deep authoring over many years, while Fabric’s browser surface is optimized for accessibility, shared modeling, and cloud-first workflows.