* step 1 nnictl generate experimentId & merge folder
* step 2.1 modify .experiment structure
* step 2.2 add lock for .experiment rw in nnictl
* step 2.2 add filelock dependency
* step 2.2 remove uniqueString from main.js
* fix test bug
* fix test bug
* step 3.1 add experiment manager
* step 3.2 add getExperimentsInfo
* fix eslint
* add a simple file lock to support stale
* step 3.3 add test
* divide abstract experiment manager from manager
* experiment manager refactor
* support .experiment sync update status
* nnictl no longer uses rest api to update status or endtime
* nnictl no longer uses rest api to update status or endtime
* fix eslint
* support .experiment sync update endtime
* fix test
* fix setTimeout bug
* fix test
* adjust experiment endTime
* separate simple file lock class
* modify name
* add 'id' in .experiment
* update rest api format
* fix eslint
* fix issue in comments
* fix rest api format
* add indent in json in experiments manager
* fix unittest
* fix unittest
* refactor file lock
* fix eslint
* remove '__enter__' in filelock
* filelock support never expire
Co-authored-by: Ning Shang <nishang@microsoft.com>
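The simple file lock described in the commits above (stale detection, plus an optional never-expire mode) can be sketched as follows. This is only an illustration of the idea: the actual nnictl lock is implemented in Python, and the class name and API here are hypothetical.

```typescript
import * as fs from "fs";

// Minimal sketch of a stale-aware file lock (hypothetical API, not NNI code).
class SimpleFileLock {
    // staleMs <= 0 means the lock never expires ("filelock support never expire")
    constructor(private readonly lockPath: string, private readonly staleMs: number) {}

    public acquire(): boolean {
        try {
            // flag "wx" fails if the lock file already exists
            fs.writeFileSync(this.lockPath, String(process.pid), { flag: "wx" });
            return true;
        } catch {
            if (this.staleMs > 0) {
                const age = Date.now() - fs.statSync(this.lockPath).mtimeMs;
                if (age > this.staleMs) {
                    // holder is presumed dead: remove the stale lock and retry
                    fs.unlinkSync(this.lockPath);
                    return this.acquire();
                }
            }
            return false;
        }
    }

    public release(): void {
        fs.unlinkSync(this.lockPath);
    }
}
```

Guarding reads/writes of `.experiment` with such a lock is what lets multiple nnictl processes update status and endTime safely.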
Designed new interface to support reusable training service, currently only applies to OpenPAI, and default disabled.
Replace trial_keeper.py with trial_runner.py; a trial runner holds an environment, receives commands from the nni manager to run or stop a trial, and returns events to the nni manager.
Add a trial dispatcher, which inherits from the original training service interface. It is used to share as much code as possible across all training services while staying isolated from them.
Add an EnvironmentService interface to manage environments, including starting/stopping an environment and refreshing the status of environments.
Add a command channel on both the nni manager and trial runner sides; it supports different ways to pass messages between them. Currently supported channels are file and web sockets. Supported commands from the nni manager are: start, kill trial, and send new parameters. Supported events from the runner are: initialized (to support channels that don't know which runner connected), trial end, stdout (new type, including metrics as before), version check (new type), and gpu info (new type).
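To make the file channel concrete, here is a minimal sketch of how one side could append commands and the other poll for them. The framing (newline-delimited JSON) and all names here are illustrative assumptions, not the actual NNI protocol.

```typescript
import * as fs from "fs";

// Hypothetical command shape; the real NNI message format differs.
interface Command {
    type: string;      // e.g. "start", "kill_trial", "send_parameter"
    payload: unknown;
}

// Sender appends one JSON object per line to the channel file.
function sendCommand(channelFile: string, command: Command): void {
    fs.appendFileSync(channelFile, JSON.stringify(command) + "\n");
}

// Receiver polls the file and parses everything written since its last offset.
function receiveCommands(channelFile: string, fromOffset: number): [Command[], number] {
    const data = fs.readFileSync(channelFile, "utf8");
    const fresh = data.slice(fromOffset);
    const commands = fresh
        .split("\n")
        .filter((line) => line.length > 0)
        .map((line) => JSON.parse(line) as Command);
    return [commands, data.length];
}
```

A web-socket channel would expose the same send/receive surface but push messages instead of polling, which is what makes the channel abstraction swappable.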
Add a storage service that wraps a storage backend behind standard file operations, such as NFS, Azure storage, and so on.
Partially support running multiple trials in parallel on the runner side; this is not yet supported on the trial dispatcher side.
Other minor changes:
Add log_level to TS UT, so that UT can show debug-level logs.
Expose platform in start info.
Add RouterTrainingService to keep the original OpenPAI training service and support dynamic IOC binding.
Add more GPU info for future usage, including GPU memory total/free/used and GPU type.
Make some license information consistent.
Fix async/await problems with Array.forEach; this method doesn't actually support async callbacks.
Fix IT errors on downloading data, caused by my #2484.
Accelerate some run-loop patterns by reducing sleep seconds.
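The Array.forEach fix above is worth illustrating, since the bug is subtle: forEach returns before any async callback finishes. A minimal sketch (function names are illustrative, not NNI code):

```typescript
// Array.forEach does not await async callbacks: it returns immediately,
// so code after it can observe incomplete work.
async function sumWithForEach(xs: number[]): Promise<number> {
    let total = 0;
    xs.forEach(async (x) => {
        await Promise.resolve(); // any await defers the rest of the callback
        total += x;
    });
    return total; // 0: the deferred callbacks haven't run yet
}

// The fix: a plain for...of loop awaits each step in order.
async function sumWithForOf(xs: number[]): Promise<number> {
    let total = 0;
    for (const x of xs) {
        await Promise.resolve();
        total += x;
    }
    return total; // 6 for [1, 2, 3]
}
```

`Promise.all(xs.map(async ...))` is the alternative when the iterations are safe to run concurrently.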
In the current version of the makefile, users who run make dev-install get two packages, "nni-sdk" and "nnictl", installed in their environment.
Now the two packages are merged into one package, "nni".
trial_keeper used port 50070 to connect to the webhdfs server, and PAI used a mapping method to map port 50070 to port 5070 to reach the restful server; this approach is risky because PAI may not support this kind of mapping in a later release. Now the Pylon path (/webhdfs/api/v1) is used instead of port 50070 in trial_keeper's webhdfs client, and the path is transmitted via trainingService.
In this PR, we have these changes:
1. Change to use the webhdfs path instead of port 50070 in the hdfs client.
2. Change to use the new hdfs package "PythonWebHDFS", which I built to support Pylon. You can test the pai trainingService with the "sparksnail/nni:dev-pai" image.
3. Update some variable names according to comments.
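The URL change described above can be sketched as follows. The host and HDFS path here are made-up examples; only the Pylon path (/webhdfs/api/v1) comes from this PR, and the exact URL layout of the real client may differ.

```typescript
// Before: a dedicated webhdfs port that PAI had to map through.
function oldWebhdfsUrl(host: string, hdfsPath: string): string {
    return `http://${host}:50070/webhdfs/v1${hdfsPath}`;
}

// After: the Pylon reverse-proxy path, so no port mapping is required.
function newWebhdfsUrl(host: string, hdfsPath: string): string {
    return `http://${host}/webhdfs/api/v1${hdfsPath}`;
}
```

Routing through Pylon keeps trial_keeper working even if PAI stops exposing or mapping the raw webhdfs port.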
* Fix pypi package missing python module
* Fix pypi package missing python module
* fix bug in smartparam example (#322)
* Fix nnictl update trialnum and document (#326)
1. Fix the restful server for update
2. Update the nnictl document for update
3. Add tensorboard to the document
* Update the version numbers from 0.3.2 to 0.3.3