Because
- We have a demo app that can test the integration end to end; it has
both a frontend and a backend.
- We could stand up the experimenter, launch the experiment or rollout,
and push to RS; Cirrus can then sync with RS, the frontend can send a
request to the backend, and the backend can reach out to Cirrus to ask
for the feature config (see the sketch below)
- When the backend receives the response, it can send the response back
to the frontend
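
To make that request path concrete, here is a minimal sketch of the demo backend asking Cirrus for a feature config. The CIRRUS_URL default, the /v1/features/ endpoint path, and the payload shape are assumptions for illustration, not the documented Cirrus API.

```python
# Minimal sketch: demo backend fetching a feature config from Cirrus.
# The endpoint path, payload shape, and default URL are assumptions.
import os

import requests

CIRRUS_URL = os.environ.get("CIRRUS_URL", "http://cirrus:8001")  # hypothetical default


def get_feature_config(client_id: str) -> dict:
    """Ask Cirrus for the feature config requested by the frontend."""
    response = requests.post(
        f"{CIRRUS_URL}/v1/features/",  # assumed endpoint path
        json={"client_id": client_id, "context": {}},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```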
This commit
- Tests end-to-end integration between the experimenter, RS, Cirrus, and
Demo app
- Tests both the experiment and the rollout end to end
fixes #9387
* fix #7185 chore(project): update local dev authentication for Jetstream
* update env vars for integration; add troubleshooting tip
* add gcp project access info
Because
* With the settings we shipped (timeout=60s, refresh=30s, wait=45*2=90s), we expect the experiment to be marked as timed out after two invocations of the celery task, and the 90s wait gives us an extra 30s of slack
* However, the timeout calculation is detached from the celery schedule, so on the second invocation the elapsed time can be milliseconds below the timeout point, and the experiment then won't time out until the third invocation. Since that happens at 90s, which butts up right against the 90s the test waits, the timeout can randomly land just after the test ends; I verified that the test did in fact end just before the experiment was marked as timed out
This commit
* Changes the integration test timing to timeout=40s, refresh=20s, wait=60*2=120s, which should give enough time for three invocations of the task to occur (see the sketch below)
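
To make the race concrete, here is a small sketch of the timing arithmetic; the function and its arguments are illustrative only and not part of the test suite.

```python
# Minimal sketch of the race: the task runs every `interval` seconds on its own
# schedule, but the experiment launches `offset` seconds into that schedule, so
# each run at t = n * interval only sees an elapsed time of t - offset.
def first_timeout_invocation(offset: float, interval: float, timeout: float) -> float:
    """Return the wall-clock time of the first task run that sees the timeout."""
    t = interval
    while t - offset < timeout:
        t += interval
    return t


# Old settings: timeout=60s, refresh=30s, test waits 90s. If the experiment
# launches even a few ms after a scheduled run, the run at t=60s sees an
# elapsed time just under 60s, so the timeout only fires at t=90s -- right
# when the test stops waiting.
print(first_timeout_invocation(offset=0.001, interval=30, timeout=60))  # 90

# New settings: timeout=40s, refresh=20s, test waits 120s. The timeout fires
# by the third run at t=60s, well inside the 120s wait.
print(first_timeout_invocation(offset=0.001, interval=20, timeout=40))  # 60
```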
Because
* We've seen a lot of intermittent failures in CircleCI complaining about COMPOSE_HTTP_TIMEOUT being too low
This commit
* Let's try increasing it and see if that helps
Because
* We should be able to decrease the celery task timeouts to speed up the integration tests
This commit
* Turns them down to 5s to update and 10s to time out (see the sketch below)
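
For reference, a minimal sketch of how those two values could be surfaced as environment-driven settings; the names TASK_UPDATE_INTERVAL and TASK_TIMEOUT are hypothetical and not the project's real configuration.

```python
# Hypothetical setting names for illustration only; the real values live in
# the project's environment / Django settings.
import os

# How often the celery task refreshes state during the integration tests (seconds).
TASK_UPDATE_INTERVAL = int(os.environ.get("TASK_UPDATE_INTERVAL", "5"))

# How long before pending work is marked as timed out (seconds).
TASK_TIMEOUT = int(os.environ.get("TASK_TIMEOUT", "10"))
```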