Imagine this scenario: a management meeting to discuss sales performance for the last quarter. The sales team presents numbers from CSV files, finance brings its own Excel version - and the numbers don't match. Instead of discussing strategy, everyone spends an hour debating whose data is correct. I've seen this firsthand, and I assume you have too. This is not an analytical problem. It's an organizational problem that only looks like one.
Where does this chaos come from?
In most organizations, data exists in multiple places at the same time - local files, SharePoint folders, Excel sheets sent via email or Teams, links labeled "this is the latest version". Each of these creates its own version of reality. And over time, those versions drift apart.
The result is predictable - Power BI reports break after someone renames a column, data consolidation takes days and is error-prone, sensitive files get shared unintentionally, and datasets are outdated because nobody refreshed them.
The traditional response is ETL - building pipelines that physically copy data between systems. It works, but it creates exactly the problems it tries to solve: more copies, more delays, more points of failure.
A shortcut that changes everything
In Microsoft Fabric, this problem is addressed through Shortcuts.
A Shortcut in OneLake is a virtual pointer to data that already exists - on SharePoint, OneDrive, Azure Storage or another Lakehouse.
It doesn’t copy data.
It doesn’t move it.
It simply tells Fabric: "the data is there - treat it as if it were here".
For a business user, nothing changes - a financial controller still saves a file in Excel on SharePoint as usual. But for the data platform, that file is instantly available in the Lakehouse.
This is Zero-Copy - no duplication, no transfer, no synchronization delays. Just a logical bridge between where data lives and where it is used.
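Shortcuts can be created in the Fabric UI, but also programmatically. As a rough sketch, the snippet below builds the request for the Fabric REST API's Create Shortcut endpoint, pointing at a hypothetical ADLS Gen2 source. All IDs are placeholders and the payload field names are my reading of the public API docs - verify them against the current API reference before relying on this:

```python
# Sketch: creating an ADLS Gen2 shortcut via the Fabric REST API.
# The endpoint path and payload shape follow the public API docs as I
# understand them; workspace, lakehouse and connection IDs are placeholders.
import json

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def build_shortcut_request(workspace_id: str, lakehouse_id: str,
                           name: str, location: str, subpath: str,
                           connection_id: str) -> tuple[str, dict]:
    """Return the (url, payload) pair for the Create Shortcut call."""
    url = f"{FABRIC_API}/workspaces/{workspace_id}/items/{lakehouse_id}/shortcuts"
    payload = {
        "path": "Files",           # where the shortcut appears in the Lakehouse
        "name": name,              # shortcut name shown in OneLake
        "target": {
            "adlsGen2": {          # the external source the shortcut points to
                "location": location,
                "subpath": subpath,
                "connectionId": connection_id,
            }
        },
    }
    return url, payload

url, payload = build_shortcut_request(
    "<workspace-guid>", "<lakehouse-guid>", "SalesData",
    "https://myaccount.dfs.core.windows.net", "/sales/2024",
    "<connection-guid>")
print(url)
print(json.dumps(payload, indent=2))
```

The actual call is a single authenticated POST of that payload - and note what is absent: no data movement, just a reference being registered.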
What does it look like in practice?
A financial controller saves a CSV file in a SharePoint folder. That's it. The file becomes visible in the Lakehouse through the Shortcut, and an automated process - Dataflow Gen2 or a Spark notebook - picks it up: it validates the data, checks column names and data types, and saves the result in Delta Lake format (Parquet-based), optimized for analytics and versioning.
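The validation step can be sketched in a few lines. In Fabric it would typically run as PySpark in a notebook or as a Dataflow Gen2 transformation; plain stdlib Python stands in here, and the column names are a hypothetical schema:

```python
# Sketch of the validation a notebook might run before writing to Delta:
# check the header matches the expected schema and that numeric fields
# parse, rejecting bad rows instead of silently loading them.
import csv
import io

EXPECTED_COLUMNS = ["date", "region", "revenue"]  # hypothetical schema

def validate_csv(text: str) -> tuple[list[dict], list[str]]:
    """Split incoming rows into (valid rows, error messages)."""
    reader = csv.DictReader(io.StringIO(text))
    if reader.fieldnames != EXPECTED_COLUMNS:
        return [], [f"schema mismatch: got {reader.fieldnames}"]
    valid, errors = [], []
    for lineno, row in enumerate(reader, start=2):  # line 1 is the header
        try:
            row["revenue"] = float(row["revenue"])  # type check
            valid.append(row)
        except ValueError:
            errors.append(f"line {lineno}: revenue not numeric: {row['revenue']!r}")
    return valid, errors

sample = "date,region,revenue\n2024-06-01,EMEA,1200.50\n2024-06-01,APAC,n/a\n"
valid, errors = validate_csv(sample)
print(len(valid), "valid rows;", len(errors), "rejected")  # 1 valid rows; 1 rejected
```

Only the rows that pass land in the Delta table; everything else is logged and surfaced back to the data owner.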
From that point, analysts no longer connect to slow Excel files. They connect Power BI to a centralized model in OneLake. Using Direct Lake, Power BI reads data directly from Delta files - without importing it. Reports load quickly, even with large datasets.
And the controller still works in Excel - nothing changes on their side.
One view, one source of truth
Back to the meeting scenario - if both teams used Shortcuts, their data would be processed through the same pipeline. Inconsistencies would be detected immediately - not during the meeting, but at the moment of data ingestion.
And the meeting could finally focus on strategy.
FAQ
Do Shortcuts only work with SharePoint and OneDrive?
No - they support multiple sources, including Azure Data Lake Storage (ADLS Gen2), Amazon S3, Google Cloud Storage and Dataverse.
What happens when I delete a Shortcut?
Nothing - the original data remains unchanged. A Shortcut is only a reference.
How is access controlled?
It depends on the target. For shortcuts within OneLake, the calling user's identity is passed through, so permissions defined on the source item still apply. For external sources such as ADLS Gen2 or S3, access is governed by the credentials stored in the shortcut's connection.
Can Shortcuts handle large data volumes?
Yes - especially with Direct Lake, which reads data directly without importing it.
What if someone changes the file structure?
A properly configured pipeline detects the issue, rejects invalid data, and prevents incorrect updates.
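A minimal sketch of such a check: compare the incoming header against the expected one (a hypothetical column set here) and report what changed, so the load can be rejected before a renamed column ever reaches a report:

```python
# Detect schema drift: which expected columns are missing, and which
# unexpected ones appeared (e.g. a column someone renamed in Excel).
EXPECTED = {"date", "region", "revenue"}  # hypothetical column set

def schema_drift(incoming: list[str]) -> dict:
    actual = set(incoming)
    return {
        "missing": sorted(EXPECTED - actual),     # columns reports rely on
        "unexpected": sorted(actual - EXPECTED),  # likely renames or additions
    }

drift = schema_drift(["date", "region", "sales_amount"])  # "revenue" renamed
print(drift)  # {'missing': ['revenue'], 'unexpected': ['sales_amount']}
```

If either list is non-empty, the pipeline stops the load and notifies the file's owner instead of overwriting good data.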
