update sample 7 inference pipeline

This commit is contained in:
Blanca Li 2020-12-08 21:38:20 +08:00
Parent 5f379210f1
Commit d8bee496b6
1 changed file: 3 additions and 3 deletions


@@ -78,7 +78,7 @@ After the model is trained, we would use the **Score Model** and **Evaluate Mode
For the **Feature Hashing** module, feature engineering in the scoring flow is as easy as in the training flow: use the **Feature Hashing** module directly to process the input text data.
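Feature hashing needs no state carried from training to scoring, which is why the module can be reused as-is. A minimal sketch of this property with scikit-learn's `HashingVectorizer` (an illustration of the idea, not the designer module's implementation):

```python
from sklearn.feature_extraction.text import HashingVectorizer

# Feature hashing is stateless: the same transform applies in both the
# training flow and the scoring flow, with no vocabulary to carry over.
hasher = HashingVectorizer(n_features=2**10, alternate_sign=False)

train_features = hasher.transform(["the quick brown fox", "lazy dog"])
score_features = hasher.transform(["the quick dog"])

# Both outputs live in the same fixed feature space.
print(train_features.shape[1] == score_features.shape[1])  # True
```

Because the hash function fixes the feature space up front, identical copies of the module in the two flows always produce compatible features.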
For the **Extract N-Gram Feature from Text** module, connect the **Result Vocabulary** output from the training dataflow to the **Input Vocabulary** on the scoring dataflow, and set the **Vocabulary mode** parameter to **ReadOnly**.
-[![Graph of n-gram score](./media/text-classification-wiki/n-gram.png)](./media/text-classification-wiki/n-gram.png)
+![Graph of n-gram score](./media/text-classification-wiki/n-gram.png)
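The effect of passing the training vocabulary to the scoring flow in **ReadOnly** mode can be sketched with scikit-learn's `CountVectorizer` (an analogy under the stated assumption, not the module's actual code): the scoring side reuses the learned vocabulary instead of learning a new one.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Training flow: learn the n-gram vocabulary from the training text.
train_texts = ["good movie", "bad movie", "good plot"]
trainer = CountVectorizer(ngram_range=(1, 2))
trainer.fit(train_texts)

# Scoring flow: reuse the learned vocabulary read-only, so scoring text
# is mapped into exactly the same feature space as training.
scorer = CountVectorizer(ngram_range=(1, 2), vocabulary=trainer.vocabulary_)
score_features = scorer.transform(["good movie with bad plot"])

# Same number of features as the training vocabulary; unseen terms
# (here "with") are simply ignored rather than added.
print(score_features.shape[1] == len(trainer.vocabulary_))  # True
```

Without this connection the scoring flow would build its own vocabulary, and the feature columns would no longer line up with the ones the model was trained on.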
After finishing this feature engineering step, **Score Model** can be used to generate predictions for the test dataset with the trained model. To check the result, select the output port of **Score Model** and then select **Visualize**.
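The train-then-score flow above can be sketched end to end in scikit-learn (a toy illustration with made-up data, not the sample's pipeline): fit a model on engineered n-gram features, then apply it to held-out text.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Train on labelled text (1 = positive, 0 = negative), then score
# the test set with the same fitted feature step and model.
train_texts = ["great film", "awful film", "great plot", "awful plot"]
labels = [1, 0, 1, 0]
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, labels)

preds = model.predict(["great acting", "awful acting"])
print(list(preds))
```

The pipeline object plays the role of the connected modules: the same fitted n-gram vocabulary and trained model are applied together at scoring time.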
@@ -89,11 +89,11 @@ To check the result, select the output port of the **Evaluate Model** and then s
After the training pipeline above is submitted successfully, you can register the output of the circled module as a dataset.
-:::image type="content" source="./media/text-classification-wiki/extract-n-gram-output-voc-register-dataset.png" alt-text="register dataset" border="true":::
+![register dataset of output vocabulary](./media/text-classification-wiki/extract-n-gram-output-voc-register-dataset.png)
Then you can create a real-time inference pipeline. After creating the inference pipeline, adjust it manually as follows:
-:::image type="content" source="./media/text-classification-wiki/extract-n-gram-inference-pipeline.png" alt-text="inference pipeline" border="true":::
+![inference pipeline](./media/text-classification-wiki/extract-n-gram-inference-pipeline.png)
Then submit the inference pipeline and deploy a real-time endpoint.