Welcome to Data Accelerator Wiki!
Data Accelerator dramatically simplifies onboarding to streaming big data on Spark. Get started in minutes with a rich, easy-to-use experience for creating, editing, and managing streaming jobs via Flows. Teams across Microsoft use Data Accelerator every day to process streamed data from several products at Microsoft scale.
Installation guide
To unleash the full power of Data Accelerator, deploy it to Azure and check out the tutorials for cloud mode.
We have also enabled a "hello world" experience that you can try out locally by running a Docker container. When running locally there are no dependencies on Azure; however, the functionality is limited and is only meant to give you a cursory overview of Data Accelerator. To run Data Accelerator locally, follow these instructions and then check out the tutorials for local mode.
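As a rough sketch, the local run boils down to publishing the portal's port from a Docker container. The image name and port number below are assumptions for illustration; the linked instructions are authoritative.

```shell
# A minimal sketch of the local "hello world" run. The image name
# and portal port are ASSUMPTIONS -- see the linked instructions
# for the actual values.
IMAGE="mcr.microsoft.com/datax/dataaccelerator"  # assumed image name
PORT=2020                                        # assumed portal port

if command -v docker >/dev/null 2>&1; then
  # Map the portal port so http://localhost:2020 reaches the container.
  docker run -d -p "${PORT}:${PORT}" "${IMAGE}"
else
  echo "Docker is required to run Data Accelerator locally." >&2
fi
```

Once the container is up, the portal should be reachable in a browser on the published port of localhost.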
Tutorials
Learn how to use Data Accelerator step by step and get your big data pipeline set up in minutes. Data Accelerator provides all the tools necessary to go from simple to complex requirements, all within an easy-to-use portal.
The tutorials walk you through the tools and show you, step by step, how to create a Flow.
Versions
Here you'll find the list of versions, what is new in each release, and a description of the changes you need to apply to migrate to the new version.
- 1.2 - Databricks, Batch, and SQL Output support - Steps to upgrade an existing environment to v1.2 here
- 1.1 - Kafka support - Steps to upgrade an existing environment to v1.1 here
- 1.0 - Initial release
Contributing
If you are interested in fixing issues and contributing to the code base, we would love to partner with you. Try things out, join in the design conversations and make pull requests.
Feedback
- Request new features on GitHub
- Open a new issue on GitHub
- Ask a question on Stack Overflow
- Check out the contributing page to see the best places to log issues and start discussions.
Please also see our Code of Conduct.
FAQ
For common issues and questions, check out our FAQ.