Dealing with Test Flakiness
Test flakiness slows down the whole team by making the pipeline less reliable and wasting time on investigating failures that are not real regressions. Repeated false positives also cause people to care less about the health of the pipeline.
If you have a flaky test, you should disable it ASAP to keep the build green. Even if the test only failed a couple of times in the past month, not disabling it will cause more wasted effort and more false positives down the line. You can check how often the test is failing in the ADO dashboards.
Here are some strategies for dealing with flakiness:
- Timeouts: Timeouts are inherently flaky because they depend on CPU speed, core count, other running processes, etc. Polling is almost always a better approach (a minimal polling sketch appears after this list).
- Retries: Retrying a test can work around flakiness temporarily, but should generally not be relied on long term. If a test needs to be retried, an underlying error is being hidden, and depending on the flake rate a failure can still slip through. Retrying can be a reasonable strategy when an occasional failure is acceptable and the flake is not worth the effort to investigate.
- Async vs sync: If a test can be written in a synchronous way (mostly for unit tests) this is preferable.
- Reproducing locally: You might be able to reproduce the failure locally by wrapping the test in a loop:
  ```ts
  for (let i = 0; i < 100; i++) {
    test('the flaky test ' + i, async () => { ... });
  }
  ```
  - Wrap the code under test in `withVerboseLogs` to increase the amount of information you can get out of the test.
  - Also call `mocha.bail` before `mocha.run` in the appropriate `testrunner.js` so that the test run ends on the first failure and is easier to find in the output (a generic example appears after this list).
- Test against the product build: Some failures may only happen in the product build, not out of sources. Integration and smoke tests can both run against the product build:
  ```sh
  # Integration tests
  export INTEGRATION_TEST_ELECTRON_PATH=<install dir>
  ./scripts/test-integration.sh --build

  # Smoke tests
  yarn smoketest --build <install dir>
  ```
- Check the logs: Log files (client and server) are available as build artifacts in Azure Pipelines for all tests except the unit tests. Look for the "n published" link under Related on the build summary page.
  - If the logs don't provide any useful information, try improving the assert failure message to provide more information about the state when the test failed. You might need to wrap a timeout failure in a try/catch and add an `assert.fail` within the catch block (a sketch of this appears after this list).
- Playback Playwright traces: Browser smoke tests can be played back on https://trace.playwright.dev/ by downloading the `playwright-trace-*.zip` build artifacts under `logs-<platform>-*` -> `smoke-tests-browser`.
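To make the polling suggestion in the Timeouts bullet concrete, here is a minimal, hypothetical sketch; the `pollUntil` helper, its intervals, and `editorIsReady` are illustrative assumptions, not code from this repository:

```ts
import * as assert from 'assert';

// Placeholder for whatever condition the real test waits on.
declare function editorIsReady(): boolean;

// Hypothetical helper: poll for a condition instead of sleeping for a fixed time.
async function pollUntil(condition: () => boolean, timeoutMs = 10000, intervalMs = 50): Promise<void> {
	const start = Date.now();
	while (!condition()) {
		if (Date.now() - start > timeoutMs) {
			throw new Error(`Condition not met within ${timeoutMs}ms`);
		}
		await new Promise(resolve => setTimeout(resolve, intervalMs));
	}
}

test('editor becomes ready', async () => {
	// Instead of a fixed sleep such as: await new Promise(resolve => setTimeout(resolve, 2000));
	await pollUntil(() => editorIsReady());
	assert.ok(editorIsReady());
});
```

The overall timeout still acts as a safety net, but the test no longer depends on the machine finishing the work within a hard-coded sleep.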
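The `mocha.bail` note above refers to the programmatic mocha runner; the snippet below is a generic, hypothetical illustration of where the call sits relative to `mocha.run`, not the actual contents of this repository's `testrunner.js`:

```ts
import Mocha from 'mocha';

const mocha = new Mocha({ ui: 'tdd', timeout: 60000 });

// Stop the run on the first failure so the flake is easy to find in the output.
mocha.bail(true);

// Test files are added here in the real runner (omitted in this sketch).

mocha.run(failures => {
	process.exitCode = failures ? 1 : 0;
});
```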
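For the assert-message suggestion under "Check the logs", here is a hedged sketch; `openNotebookAndWaitForKernel` and `collectDebugState` are placeholders for whatever operation and state matter to the flaky test:

```ts
import * as assert from 'assert';

// Placeholders for the timeout-prone step and for state that helps diagnose the flake.
declare function openNotebookAndWaitForKernel(): Promise<void>;
declare function collectDebugState(): object;

test('opens the notebook', async () => {
	try {
		await openNotebookAndWaitForKernel();
	} catch (err) {
		// Fail with context instead of letting a bare timeout error bubble up.
		assert.fail(`Kernel never became ready. State: ${JSON.stringify(collectDebugState())}. Error: ${err}`);
	}
});
```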