Mirror of https://github.com/mozilla/fleet.git

Fix documentation typos (#1682)

Parent: a5e1007146
Commit: d2a7e38c85
@@ -2,7 +2,7 @@
 * Added remote IP in the logs for all osqueryd/launcher requests. (#1653)
 
-* Fixed bugs that caused logs to sometimes be ommited from the logwriter. (#1636, #1617)
+* Fixed bugs that caused logs to sometimes be omitted from the logwriter. (#1636, #1617)
 
 * Fixed a bug where request bodies were not being explicitly closed. (#1613)
@@ -163,7 +163,7 @@ See https://wiki.mozilla.org/Security/Server_Side_TLS for more information on th
 * Improve platform detection accuracy.
 
-Previously Kolide was determing platform based on the OS of the system osquery was built on instead of the OS it was running on. Please note: Offline hosts may continue to report an erroneous platform until they check-in with Kolide.
+Previously Kolide was determining platform based on the OS of the system osquery was built on instead of the OS it was running on. Please note: Offline hosts may continue to report an erroneous platform until they check-in with Kolide.
 
 * Fix bugs where query links in the pack sidebar pointed to the wrong queries.
@@ -1,7 +1,7 @@
 CLI Documentation
 =================
 
-Kolide Fleet provides a server which allows you to manage and orchestrate an osquery deployment across of a set of workstations and servers. For certain use-cases, it makes sense to maintain the configuration and data of an osquery deployment in source-controlled files. It is also desireable to be able to manage these files with a familiar command-line tool. To facilitate this, we are working on an experimental CLI called `fleetctl`.
+Kolide Fleet provides a server which allows you to manage and orchestrate an osquery deployment across of a set of workstations and servers. For certain use-cases, it makes sense to maintain the configuration and data of an osquery deployment in source-controlled files. It is also desirable to be able to manage these files with a familiar command-line tool. To facilitate this, we are working on an experimental CLI called `fleetctl`.
 
 ### Warning: In Progress
@@ -18,7 +18,7 @@ Inspiration for the `fleetctl` command-line experience as well as the file forma
 The `fleetctl` tool is heavily inspired by the [`kubectl`](https://kubernetes.io/docs/user-guide/kubectl-overview/) tool. If you are familiar with `kubectl`, this will all feel very familiar to you. If not, some further explanation would likely be helpful.
 
-Fleet exposes the aspects of an osquery deployment as a set of "objects". Objects may be a query, a pack, a set of configuration options, etc. The documentaiton for [Declarative Management of Kubernetes Objects Using Configuration Files](https://kubernetes.io/docs/tutorials/object-management-kubectl/declarative-object-management-configuration/) says the following about the object lifecycle:
+Fleet exposes the aspects of an osquery deployment as a set of "objects". Objects may be a query, a pack, a set of configuration options, etc. The documentation for [Declarative Management of Kubernetes Objects Using Configuration Files](https://kubernetes.io/docs/tutorials/object-management-kubectl/declarative-object-management-configuration/) says the following about the object lifecycle:
 
 > Objects can be created, updated, and deleted by storing multiple object configuration files in a directory and using `kubectl apply` to recursively create and update those objects as needed.
@@ -93,7 +93,7 @@ When you reason about how to manage these config files, consider following the [
 - Group related objects into a single file whenever it makes sense. One file is often easier to manage than several. See the [config-single-file.yml](../../examples/config-single-file.yml) file as an example of this syntax.
 - Don’t specify default values unnecessarily – simple and minimal configs will reduce errors.
 
-All of these files can be concatenated together into [one file](../../examples/config-single-file.yml) (seperated by `---`), or they can be in [individual files with a directory structure](../../examples/config-many-files) like the following:
+All of these files can be concatenated together into [one file](../../examples/config-single-file.yml) (separated by `---`), or they can be in [individual files with a directory structure](../../examples/config-many-files) like the following:
 
 ```
 |-- config.yml
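The `---` separator above is standard multi-document YAML. As a rough sketch (not part of this commit), assuming `gopkg.in/yaml.v2` and its streaming decoder, and a hypothetical `kind` key, a Go consumer could walk the concatenated documents like this:

```
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	yaml "gopkg.in/yaml.v2"
)

func main() {
	// config-single-file.yml holds several YAML documents separated by "---".
	f, err := os.Open("config-single-file.yml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break // no more documents in the stream
		}
		if err != nil {
			log.Fatal(err)
		}
		// Each document would describe one object (query, pack, options, ...).
		// The "kind" key is illustrative, not Fleet's actual schema.
		fmt.Printf("loaded document: %v\n", doc["kind"])
	}
}
```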
@@ -29,7 +29,7 @@ fleet prepare db
 
 ## Running Fleet using Docker development infrastructure
 
-To start the Fleet server backed by the Docker development infrasturcture, run the Fleet binary as follows:
+To start the Fleet server backed by the Docker development infrastructure, run the Fleet binary as follows:
 
 ```
 fleet serve
@@ -79,7 +79,7 @@
 {"description": "The name of the class.","name": "class","options": {},"type": "TEXT_TYPE"},
 {"description": "Relative path to the class or instance.","name": "relative_path","options": {},"type": "TEXT_TYPE"}
 ],
-"description": "WMI CommandLineEventConsumer, which can be used for persistance on Windows. https://www.blackhat.com/docs/us-15/materials/us-15-Graeber-Abusing-Windows-Management-Instrumentation-WMI-To-Build-A-Persistent%20Asynchronous-And-Fileless-Backdoor-wp.pdf",
+"description": "WMI CommandLineEventConsumer, which can be used for persistence on Windows. https://www.blackhat.com/docs/us-15/materials/us-15-Graeber-Abusing-Windows-Management-Instrumentation-WMI-To-Build-A-Persistent%20Asynchronous-And-Fileless-Backdoor-wp.pdf",
 "examples": [
 "select filter,consumer,query,command_line_template from wmi_filter_consumer_binding wcb join wmi_cli_event_consumers wcec on consumer = wcec.__relpath join wmi_event_filters wef on wef.__relpath = wcb.filter;"
 ],
@@ -135,7 +135,7 @@
 {"description": "The name of the class.","name": "class","options": {},"type": "TEXT_TYPE"},
 {"description": "Relative path to the class or instance.","name": "relative_path","options": {},"type": "TEXT_TYPE"}
 ],
-"description": "WMI ActiveScriptEventConsumer, which can be used for persistance on Windows. https://www.blackhat.com/docs/us-15/materials/us-15-Graeber-Abusing-Windows-Management-Instrumentation-WMI-To-Build-A-Persistent%20Asynchronous-And-Fileless-Backdoor-wp.pdf",
+"description": "WMI ActiveScriptEventConsumer, which can be used for persistence on Windows. https://www.blackhat.com/docs/us-15/materials/us-15-Graeber-Abusing-Windows-Management-Instrumentation-WMI-To-Build-A-Persistent%20Asynchronous-And-Fileless-Backdoor-wp.pdf",
 "examples": [
 "select filter,consumer,query,scripting_engine,script_text from wmi_filter_consumer_binding wcb join wmi_script_event_consumers wsec on consumer = wsec.__relpath join wmi_event_filters wef on wef.__relpath = wcb.filter;"
 ],
@@ -188,7 +188,7 @@
 "blacklisted": false,
 "columns": [
 {"description": "The local owner of authorized_keys file","name": "uid","options": {"additional": true},"type": "BIGINT_TYPE"},
-{"description": "algorithim of key","name": "algorithm","options": {},"type": "TEXT_TYPE"},
+{"description": "algorithm of key","name": "algorithm","options": {},"type": "TEXT_TYPE"},
 {"description": "parsed authorized keys line","name": "key","options": {},"type": "TEXT_TYPE"},
 {"description": "Path to the authorized_keys file","name": "key_file","options": {},"type": "TEXT_TYPE"}
 ],
@@ -1220,7 +1220,7 @@
 {"description": "Platform information.","name": "platform_info","options": {},"type": "BIGINT_TYPE"},
 {"description": "Performance setting for the processor.","name": "perf_ctl","options": {},"type": "BIGINT_TYPE"},
 {"description": "Performance status for the processor.","name": "perf_status","options": {},"type": "BIGINT_TYPE"},
-{"description": "Bitfield controling enabled features.","name": "feature_control","options": {},"type": "BIGINT_TYPE"},
+{"description": "Bitfield controlling enabled features.","name": "feature_control","options": {},"type": "BIGINT_TYPE"},
 {"description": "Run Time Average Power Limiting power limit.","name": "rapl_power_limit","options": {},"type": "BIGINT_TYPE"},
 {"description": "Run Time Average Power Limiting energy status.","name": "rapl_energy_status","options": {},"type": "BIGINT_TYPE"},
 {"description": "Run Time Average Power Limiting power units.","name": "rapl_power_units","options": {},"type": "BIGINT_TYPE"}
@@ -1911,7 +1911,7 @@
 "columns": [
 {"description": "Daemon or agent service name","name": "label","options": {},"type": "TEXT_TYPE"},
 {"description": "Name of the override key","name": "key","options": {},"type": "TEXT_TYPE"},
-{"description": "Overriden value","name": "value","options": {},"type": "TEXT_TYPE"},
+{"description": "Overridden value","name": "value","options": {},"type": "TEXT_TYPE"},
 {"description": "User ID applied to the override, 0 applies to all","name": "uid","options": {},"type": "BIGINT_TYPE"},
 {"description": "Path to daemon or agent plist","name": "path","options": {},"type": "TEXT_TYPE"}
 ],
@@ -2084,7 +2084,7 @@
 {"description": "Real user ID of the user process using the file","name": "uid","options": {},"type": "BIGINT_TYPE"},
 {"description": "Effective user ID of the process using the file","name": "euid","options": {},"type": "BIGINT_TYPE"},
 {"description": "Real group ID of the process using the file","name": "gid","options": {},"type": "BIGINT_TYPE"},
-{"description": "Effective group ID of the processs using the file","name": "egid","options": {},"type": "BIGINT_TYPE"},
+{"description": "Effective group ID of the processes using the file","name": "egid","options": {},"type": "BIGINT_TYPE"},
 {"description": "Indicates the mode of the file","name": "mode","options": {},"type": "BIGINT_TYPE"},
 {"description": "User ID of the owner of the file","name": "owner_uid","options": {},"type": "BIGINT_TYPE"},
 {"description": "Group ID of the owner of the file","name": "owner_gid","options": {},"type": "BIGINT_TYPE"},
@@ -88,7 +88,7 @@ func testYARATransactions(t *testing.T, ds kolide.Datastore) {
 yaraSection, err := ds.YARASection()
 require.Nil(t, err)
 require.NotNil(t, yaraSection)
-// there shouldn't be any file paths because we rolled back the transaciton
+// there shouldn't be any file paths because we rolled back the transaction
 require.Len(t, yaraSection.FilePaths, 0)
 
 // try it again
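The rolled-back transaction in the comment above is the usual SQL pattern: stage writes, roll back, then verify nothing persisted. A minimal sketch with `database/sql`; the `file_paths` table and the MySQL driver are assumptions for illustration, not Fleet's actual schema:

```
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // driver registration only
)

// rollbackIsInvisible stages an insert inside a transaction, rolls it
// back, and confirms a later read does not see the staged row.
func rollbackIsInvisible(db *sql.DB) error {
	txn, err := db.Begin()
	if err != nil {
		return err
	}
	if _, err := txn.Exec(`INSERT INTO file_paths (path) VALUES (?)`, "/etc/yara"); err != nil {
		txn.Rollback()
		return err
	}
	if err := txn.Rollback(); err != nil { // discard the staged insert
		return err
	}

	var n int
	if err := db.QueryRow(`SELECT COUNT(*) FROM file_paths`).Scan(&n); err != nil {
		return err
	}
	log.Printf("rows visible after rollback: %d (want 0)", n)
	return nil
}

func main() {
	db, err := sql.Open("mysql", "user:pass@/fleet") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := rollbackIsInvisible(db); err != nil {
		log.Fatal(err)
	}
}
```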
@@ -72,7 +72,7 @@ func (d *Datastore) ResetOptions() (opts []kolide.Option, err error) {
 }
 err = txn.Commit()
 if err != nil {
-return nil, errors.Wrap(err, "commiting reset options")
+return nil, errors.Wrap(err, "committing reset options")
 }
 
 return opts, nil
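The corrected message feeds `errors.Wrap` from `github.com/pkg/errors`, which prefixes the underlying error with the failing operation so callers can see where it happened; a small sketch of the convention:

```
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

// commitOptions shows the wrap-on-commit idiom: annotate the error
// with the failing operation instead of returning it bare.
func commitOptions(commit func() error) error {
	if err := commit(); err != nil {
		return errors.Wrap(err, "committing reset options")
	}
	return nil
}

func main() {
	failing := func() error { return fmt.Errorf("lock wait timeout") }
	fmt.Println(commitOptions(failing))
	// Output: committing reset options: lock wait timeout
}
```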
@@ -15,7 +15,7 @@ type Checker interface {
 
 // Handler returns an http.Handler that checks the status of all the dependencies.
 // Handler responds with either:
-// 200 OK if the server can successfuly communicate with it's backends or
+// 200 OK if the server can successfully communicate with it's backends or
 // 500 if any of the backends are reporting an issue.
 func Handler(logger log.Logger, checkers map[string]Checker) http.HandlerFunc {
 return func(w http.ResponseWriter, r *http.Request) {
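A hedged sketch of the behavior those comments describe: 200 when every dependency responds, 500 otherwise. The single-method `Checker` shape is an assumption here, and the logger parameter is omitted:

```
package main

import (
	"fmt"
	"net/http"
)

// Checker reports the health of one backend dependency (assumed shape).
type Checker interface {
	HealthCheck() error
}

// healthHandler returns 200 OK if the server can successfully
// communicate with its backends, or 500 if any backend reports an issue.
func healthHandler(checkers map[string]Checker) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		for name, c := range checkers {
			if err := c.HealthCheck(); err != nil {
				http.Error(w, fmt.Sprintf("%s: %v", name, err), http.StatusInternalServerError)
				return
			}
		}
		w.WriteHeader(http.StatusOK)
	}
}

func main() {
	http.Handle("/healthz", healthHandler(map[string]Checker{}))
	http.ListenAndServe(":8080", nil)
}
```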
@@ -123,11 +123,11 @@ type AppConfig struct {
 // MetadataURL is a URL provided by the IDP which can be used to download
 // metadata
 MetadataURL string `db:"metadata_url"`
-// IDPName is a human freindly name for the IDP
+// IDPName is a human friendly name for the IDP
 IDPName string `db:"idp_name"`
 // EnableSSO flag to determine whether or not to enable SSO
 EnableSSO bool `db:"enable_sso"`
-// FIMInterval defines the interval when file integrity checks will occurr
+// FIMInterval defines the interval when file integrity checks will occur
 FIMInterval int `db:"fim_interval"`
 }
 
@@ -154,7 +154,7 @@ type SSOSettingsPayload struct {
 // MetadataURL is a URL provided by the IDP which can be used to download
 // metadata
 MetadataURL *string `json:"metadata_url"`
-// IDPName is a human freindly name for the IDP
+// IDPName is a human friendly name for the IDP
 IDPName *string `json:"idp_name"`
 // EnableSSO flag to determine whether or not to enable SSO
 EnableSSO *bool `json:"enable_sso"`
@@ -56,7 +56,7 @@ type ImportStatus struct {
 // SkipCount count of items that are skipped. The reasons for the omissions
 // can be found in Warnings.
 SkipCount int `json:"skip_count"`
-// Warnings groups catagories of warnings with one or more detail messages.
+// Warnings groups categories of warnings with one or more detail messages.
 Warnings map[WarningType][]string `json:"warnings"`
 // Messages contains an entry for each import attempt.
 Messages []string `json:"messages"`
@@ -13,7 +13,7 @@ var wrongTypeError = errors.New("argument missing or unexpected type")
 // UnmarshalJSON custom unmarshaling for PackNameMap will determine whether
 // the pack section of an osquery config file refers to a file path, or
 // pack details. Pack details are unmarshalled into into PackDetails structure
-// as oppossed to nested map[string]interface{}
+// as opposed to nested map[string]interface{}
 func (pnm PackNameMap) UnmarshalJSON(b []byte) error {
 var temp map[string]interface{}
 err := json.Unmarshal(b, &temp)
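A minimal sketch of the technique that comment describes: custom unmarshaling that branches on whether a pack entry is a file path (a JSON string) or inline pack details (a JSON object). The `PackDetails` fields below are assumed for illustration, not Fleet's actual definition:

```
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// PackDetails is a stand-in for the real structure; fields assumed.
type PackDetails struct {
	Queries map[string]interface{} `json:"queries"`
}

type PackNameMap map[string]interface{}

func (pnm PackNameMap) UnmarshalJSON(b []byte) error {
	var temp map[string]json.RawMessage
	if err := json.Unmarshal(b, &temp); err != nil {
		return err
	}
	for name, raw := range temp {
		// A string value is a file path reference...
		var path string
		if err := json.Unmarshal(raw, &path); err == nil {
			pnm[name] = path
			continue
		}
		// ...otherwise decode into PackDetails, as opposed to
		// leaving a nested map[string]interface{}.
		var details PackDetails
		if err := json.Unmarshal(raw, &details); err != nil {
			return errors.New("argument missing or unexpected type")
		}
		pnm[name] = details
	}
	return nil
}

func main() {
	pnm := PackNameMap{}
	data := []byte(`{"a": "/path/to/pack.conf", "b": {"queries": {}}}`)
	if err := json.Unmarshal(data, &pnm); err != nil {
		panic(err)
	}
	fmt.Printf("%#v\n", pnm)
}
```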
@@ -16,7 +16,7 @@ type logWriter struct {
 mtx sync.Mutex
 }
 
-// New creates a logwriter, path refers to file that will recieve log content
+// New creates a logwriter, path refers to file that will receive log content
 func New(path string) (io.WriteCloser, error) {
 file, err := os.OpenFile(path, os.O_WRONLY|os.O_APPEND|os.O_CREATE, 0644)
 if err != nil {
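For reference, the flag combination in `New` opens the file write-only, appends on every write, and creates it if missing with mode 0644. A tiny usage sketch of such a WriteCloser; the path is illustrative:

```
package main

import (
	"io"
	"os"
)

// openAppend mirrors the flags used by logwriter.New: write-only,
// append on every write, create if missing, mode 0644.
func openAppend(path string) (io.WriteCloser, error) {
	return os.OpenFile(path, os.O_WRONLY|os.O_APPEND|os.O_CREATE, 0644)
}

func main() {
	w, err := openAppend("/tmp/fleet-example.log")
	if err != nil {
		panic(err)
	}
	defer w.Close()
	w.Write([]byte("log line received\n")) // appended, never truncates
}
```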
@@ -45,7 +45,7 @@ func TestAuthenticate(t *testing.T) {
 user := tt.user
 ctx := context.Background()
 loggedIn, token, err := svc.Login(ctx, tt.username, tt.password)
-require.Nil(st, err, "login unsuccesful")
+require.Nil(st, err, "login unsuccessful")
 assert.Equal(st, user.ID, loggedIn.ID)
 assert.NotEmpty(st, token)
 
@@ -583,7 +583,7 @@ func TestRequirePasswordReset(t *testing.T) {
 // Log user in
 if tt.Enabled {
 _, _, err = svc.Login(ctx, tt.Username, tt.PlaintextPassword)
-require.Nil(t, err, "login unsuccesful")
+require.Nil(t, err, "login unsuccessful")
 sessions, err = svc.GetInfoAboutSessionsForUser(ctx, user.ID)
 require.Nil(t, err)
 require.Len(t, sessions, 1, "user should have one session")
@@ -69,7 +69,7 @@ go run package_metadata.go -repo /Users/$me/kolide_packages/ -git-tag=1.0.4
 ```
 
 7. Create a git commit commit with the updated package repos.
-The repo building scripts can be flaky, and occasionaly it's useful to use a `--reset HARD` flag with git to retry building the release.
+The repo building scripts can be flaky, and occasionally it's useful to use a `--reset HARD` flag with git to retry building the release.
 
 8. Push the release to gcloud. Pushing will override the contents of the gcs bucket and the release will be immediately available.