Updates to data modeling section
Parent: f2e013c4e6
Commit: a7cbcbafb5
@@ -61,7 +61,7 @@ We'll begin by enabling the setting to allow for editing data models in the cloud
<font size="6">✅ Lab check</font>
-With the SQL endpoint we are able to run ad-hoc SQL queries atop of our tables and visualize our results all within a browser session. In the next section we want to focus on modeling our data and preparing it for analysis.
+With the SQL endpoint we are able to run ad-hoc SQL queries atop of our tables and visualize our results using DirectQuery mode all within a browser session. In the next section we want to focus on modeling our data and preparing it for analysis.
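As a point of reference, a quick sanity query of the kind this lab check describes might look like the sketch below. It assumes a hypothetical `fact_sale` table with a `SaleDate` column; substitute whichever Lakehouse tables and columns your workspace actually contains.

```sql
-- Hedged sketch of an ad-hoc query run in the browser against the SQL endpoint.
-- fact_sale and SaleDate are placeholder names, not the lab's confirmed schema.
SELECT TOP (10)
    SaleDate,
    COUNT(*) AS RowsLoaded
FROM fact_sale
GROUP BY SaleDate
ORDER BY SaleDate DESC;
```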
### Create relationships
@@ -73,10 +73,13 @@ It is also recommended to strive to deliver the **right number of tables** with th
---
-1. From the ribbon, select **Reporting** and then **New Power BI dataset**.
+1. From the ribbon, select **Reporting** and then **New semantic model**.
![New Power BI dataset](./Media/new-powerbi-dataset.png)
> [!NOTE]
> The **New semantic model** option is also visible from the Lakehouse explorer's **Home** tab.
1. In the **New dataset** window, update the **Name** to **SalesDirectLakeModel**. Select the following objects and then click **Confirm**:
| Object Name |
@@ -109,7 +112,9 @@ It is also recommended to strive to deliver the **right number of tables** with th
> The **Assume referential integrity** selection enables more efficient queries by using INNER JOIN statements rather than OUTER JOIN statements. This feature is only available when using Direct Lake and DirectQuery connectivity modes.
> Learn more about [Referential integrity](https://docs.microsoft.com/power-bi/connect-data/desktop-assume-referential-integrity)
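To make the note above concrete, the sketch below contrasts the two join shapes it describes. Table and column names (`fact_sale`, `dimension_customer`, `CustomerKey`) are placeholders for illustration only, not the lab's confirmed schema.

```sql
-- Without "Assume referential integrity": an OUTER JOIN keeps fact rows
-- whose CustomerKey has no match in the dimension table.
SELECT c.Customer, SUM(f.TotalIncludingTax) AS TotalSales
FROM fact_sale AS f
LEFT OUTER JOIN dimension_customer AS c
    ON f.CustomerKey = c.CustomerKey
GROUP BY c.Customer;

-- With "Assume referential integrity": every fact row is assumed to match,
-- so a cheaper INNER JOIN can be issued instead.
SELECT c.Customer, SUM(f.TotalIncludingTax) AS TotalSales
FROM fact_sale AS f
INNER JOIN dimension_customer AS c
    ON f.CustomerKey = c.CustomerKey
GROUP BY c.Customer;
```

If unmatched keys do exist, the two queries can return different totals, which is why the setting should only be enabled when the underlying data genuinely enforces the relationship.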
-1. Perform these same steps for each of the remaining tables and columns listed in the following table to create relationships.
+1. From the **Home** tab, select the **Manage relationships** option, and for each of the items listed below select **New relationship** and configure the relationships for each of the remaining tables and columns. Once complete select **Close**.
![Manage relationships](./Media/manage-relationships.png)
| Make this relationship active | From: Table 1 (column) | To: Table 2 (column) | Cardinality | Assume referential integrity | Cross filter direction |
| :----- |:----- | :------ | :----- | :----- | :----- |
@@ -3,7 +3,7 @@
✏️ Lab scenario
---
-In this section of the lab, our objective is to gather and merge daily files from a cloud directory. As the number of files in this directory will grow over time, it is essential to develop a data preparation solution that can accommodate this expansion. To achieve this, we need to create a future-proofed data preparation solution that can handle this growth1.
+In this section of the lab, our objective is to gather and merge daily files from a cloud directory. As the number of files in this directory will grow over time, it is essential to develop a data preparation solution that can accommodate this expansion. To achieve this, we need to create a future-proofed data preparation solution that can handle this growth.
Please note that the term *future-proofed* refers to a solution that is designed to remain effective and relevant as conditions change over time. For more information on future-proofing queries, you can refer to the [Power Query best practices](https://learn.microsoft.com/power-query/best-practices#future-proofing-queries) documentation.
@@ -30,7 +30,7 @@ We'll begin by navigating to a new, empty, or non-production workspace to check
## Lakehouse storage
-To begin, we will create a **Lakehouse**, which is a data architecture platform that enables the storage, management, and analysis of structured and unstructured data. This flexible and scalable solution allows organizations to handle large volumes of data using a variety of tools and frameworks to process and analyze that data.
+We'll start by creating a **Lakehouse**, which is a data architecture platform that enables the storage, management, and analysis of structured and unstructured data. This flexible and scalable solution allows organizations to handle large volumes of data using a variety of tools and frameworks to process and analyze that data.
Learn more about [lakehouses in Microsoft Fabric](https://learn.microsoft.com/fabric/data-engineering/lakehouse-overview)
Binary file not shown.
After: 260 KiB
Binary data
Day After Dashboard in a Day/Media/new-powerbi-dataset.png
Binary file not shown.
Before: 23 KiB | After: 19 KiB
Binary data
Day After Dashboard in a Day/Media/star-schema.png
Binary file not shown.
Before: 126 KiB | After: 177 KiB