Crawl GitHub APIs and store the discovered orgs, repos, commits, ...


# GHCrawler

A robust GitHub API crawler that walks a queue of GitHub entities, transitively retrieving and storing their contents. GHCrawler is great for:

  • Retrieving all GitHub entities related to an org, repo, or user
  • Efficiently storing the retrieved entities
  • Keeping the stored data up to date when used in conjunction with a GitHub event tracker

GHCrawler focuses on successively retrieving and walking GitHub resources supplied on a queue. Each resource is fetched, analyzed, stored, and plumbed for more resources to fetch. Discovered resources are themselves queued for further processing. The crawler is careful not to fetch the same resource repeatedly.
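The queue-walking pattern above can be sketched in a few lines. This is a minimal illustration, not GHCrawler's actual API: the in-memory `graph` object stands in for real GitHub API responses, and `crawl` shows the fetch/store/queue cycle with a `Set` guarding against repeated fetches.

```javascript
// Hypothetical resource graph simulating GitHub API link discovery:
// each "fetched" entity yields the entities it links to.
const graph = {
  'org:contoso': ['repo:contoso/app', 'repo:contoso/lib'],
  'repo:contoso/app': ['commit:abc123'],
  'repo:contoso/lib': [],
  'commit:abc123': []
};

function crawl(seed) {
  const queue = [seed];
  const seen = new Set(); // ensures each resource is fetched only once
  const stored = [];
  while (queue.length > 0) {
    const resource = queue.shift();
    if (seen.has(resource)) continue; // skip already-fetched resources
    seen.add(resource);
    stored.push(resource);            // "store" the fetched document
    const links = graph[resource] || []; // "analyze" it for more resources
    for (const link of links) queue.push(link); // queue discoveries
  }
  return stored;
}
```

Starting from `org:contoso`, this walks the org to its repos and then to their commits, visiting each entity exactly once. The real crawler does the same transitive walk against live GitHub API responses.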

## Examples

Coming...

## Contributing

The project team is more than happy to take contributions and suggestions.

To start working, run `npm install` in the repository folder to install the required dependencies.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.