From 76a31a0676abefe11c04063b9526011116accd11 Mon Sep 17 00:00:00 2001
From: John Ehrlinger
Date: Mon, 7 Jan 2019 10:02:12 -0500
Subject: [PATCH 1/2] Add extra readme files.

---
 jobs/Readme.md    | 5 +++++
 scripts/Readme.md | 5 +++++
 2 files changed, 10 insertions(+)
 create mode 100644 jobs/Readme.md
 create mode 100644 scripts/Readme.md

diff --git a/jobs/Readme.md b/jobs/Readme.md
new file mode 100644
index 0000000..27e7388
--- /dev/null
+++ b/jobs/Readme.md
@@ -0,0 +1,5 @@
+# Azure Databricks job templates
+
+This folder contains the template used to create the Azure Databricks job that does the batch scoring.
+
+Instructions to use this file are located at https://github.com/Azure/BatchSparkScoringPredictiveMaintenance/blob/master/BatchScoringJob.md
diff --git a/scripts/Readme.md b/scripts/Readme.md
new file mode 100644
index 0000000..fa97b29
--- /dev/null
+++ b/scripts/Readme.md
@@ -0,0 +1,5 @@
+# Azure Databricks job customization script
+
+This folder contains the script that customizes the template file used to create the Azure Databricks job that does the batch scoring.
+
+Instructions to use this file are located at https://github.com/Azure/BatchSparkScoringPredictiveMaintenance/blob/master/BatchScoringJob.md

From 9870c5f7ed131d0d43ed07eda8c89e58a426b65f Mon Sep 17 00:00:00 2001
From: John Ehrlinger
Date: Mon, 7 Jan 2019 14:45:23 -0500
Subject: [PATCH 2/2] Update BatchScoringJob.md

---
 BatchScoringJob.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/BatchScoringJob.md b/BatchScoringJob.md
index 8b77b09..5f1995f 100644
--- a/BatchScoringJob.md
+++ b/BatchScoringJob.md
@@ -70,4 +70,4 @@ The `quartz_cron_expression` takes [Quartz cron](http://www.quartz-scheduler.org
 
 # Conclusion
 
-The actual work of this scenario is done through this Azure Databricks job. The job executes the `3_Scoring_Pipeline` notebook, which depends on a machine learning model existing on the Azure Databricks file storage. We created the model using the `2_Training_Pipeline` notebook.
+The actual work of this scenario is done through this Azure Databricks job. The job executes the `3_Scoring_Pipeline` notebook, which depends on a machine learning model existing on the Azure Databricks file storage. We created the model using the `2_Training_Pipeline` notebook which used the data downloaded with the `1_data_ingestion` notebook.
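For context on the second patch: its hunk header shows that `BatchScoringJob.md` documents a `quartz_cron_expression` for scheduling the batch scoring job. As a rough sketch of what the schedule stanza in such a job template might look like (the `schedule` field names follow the Databricks Jobs API; the cron value and timezone here are illustrative assumptions, not taken from this patch):

```json
{
  "name": "batch-scoring-job",
  "schedule": {
    "quartz_cron_expression": "0 0 6 * * ?",
    "timezone_id": "America/New_York"
  }
}
```

Quartz cron expressions include a leading seconds field, so `0 0 6 * * ?` would fire the job daily at 06:00 in the configured timezone.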