Local Tutorial Debugging using Spark logs
Dinesh Chandnani edited this page 2019-04-15 18:21:30 -07:00
If you run into issues with your job, you will want to dig into the job logs to determine the cause. The Spark jobs produce detailed logs; you can access them via the `docker logs` command.
In this tutorial, you'll learn to:
- View docker logs
Docker logs
- You can look at the logs by querying the running container via PowerShell.
- Launch PowerShell, then run the following command:

```
docker logs --tail 1000 dataxlocal
```

- If you want the logs to be continuously updated, use the `-f` (follow) flag:

```
docker logs -f --tail 1000 dataxlocal
```
This will help you diagnose issues, see exceptions and call stacks, and confirm that jobs are running properly.
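When the log volume is large, it can help to filter the output down to just errors and exceptions. A minimal sketch, assuming a Unix-style shell with `grep` available (on PowerShell, `Select-String` plays the same role); the filtering pattern is an illustrative assumption, not part of the tutorial:

```shell
# Fetch the last 1000 log lines from the dataxlocal container and keep only
# lines mentioning errors or exceptions (case-insensitive).
# 2>&1 merges stderr into stdout, since docker logs writes to both streams.
docker logs --tail 1000 dataxlocal 2>&1 | grep -iE "error|exception"
```

Matching lines typically include the start of a Java/Scala stack trace, which you can then inspect in full with the unfiltered `docker logs` output.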