- Removed 'commit' memory projection.
- Improved 'ram' memory projection:
  - The semaphore is no longer normalized to 100%. We use each pip's actual memory usage as its semaphore value.
  - The semaphore limit is now 90% of the RAM available at the end of the scheduling phase. Previously, we added BuildXL's own memory usage to the available RAM, hoping that the memory used during scheduling would be garbage-collected.
  - Due to a race between the UpdateStatus method and the logic that decides the RAM semaphore limit, the measured RAM size was sometimes 0, especially in builds where graph reloading is very fast. In those cases we fell back to a default of 100 GB per machine, which was far too high. With these changes, we ensure the RAM size is measured correctly before calculating the semaphore limit (sketched below this list).
- WorkerResourceChanged listeners were not added to the remote workers because the ChooseWorkerCpu constructor was called before the remote workers were added to the m_workers list. As a result, the ChooseWorkerCpu dispatcher was not triggered/unpaused when a pip finished on a remote worker. This is now fixed.
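A minimal sketch of the new limit calculation; `measure_available_ram_mb`, the retry loop, and the use of psutil are illustrative assumptions, not BuildXL's actual implementation:

```python
import time
import psutil  # assumed stand-in for the engine's RAM measurement

def measure_available_ram_mb() -> int:
    # May legitimately report 0 early on, before the first status
    # update has populated the performance counters.
    return psutil.virtual_memory().available // (1024 * 1024)

def compute_ram_semaphore_limit_mb() -> int:
    # Ensure RAM was actually measured before deciding the limit,
    # instead of racing UpdateStatus and falling back to 100 GB.
    available_mb = measure_available_ram_mb()
    while available_mb == 0:
        time.sleep(0.1)  # wait for a status update with real numbers
        available_mb = measure_available_ram_mb()
    # 90% of the RAM available at the end of the scheduling phase;
    # BuildXL's own memory usage is no longer added back in.
    return (available_mb * 9) // 10
```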
Related work items: #2192122
Implement breakaway processes for the interpose sandbox.
Ptrace handling is still missing (if a process under a ptraced process tree tries to break away, it won't).
Some of the related tests fail in CB scenarios where the tests run under reparse points the test infra is not aware of. The feature is Linux-oriented anyway (for JS customers), so the coverage we care about is maintained.
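Conceptually, breakaway in an LD_PRELOAD-based interpose sandbox means letting a matching child process run untracked. A rough sketch of that decision, with a hypothetical breakaway list and a Python stand-in for the native interpose layer (not BuildXL's actual code):

```python
import os

# Hypothetical configured breakaway image names (configured per pip).
BREAKAWAY_PROCESSES = {"node", "dotnet"}

def child_environment(image_path: str, env: dict[str, str]) -> dict[str, str]:
    """Decide the environment for a child process about to be exec'ed."""
    env = dict(env)
    if os.path.basename(image_path) in BREAKAWAY_PROCESSES:
        # Breakaway: drop the preloaded sandbox library so the child
        # (and its descendants) run untracked. As noted above, this
        # cannot work yet for processes under a ptraced tree.
        env.pop("LD_PRELOAD", None)
    return env
```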
Related work items: #2220026
Apparently, the base64 utility on Linux automatically adds a newline every 76 characters (the GNU coreutils default wrap). My speculation is that whatever tool reads that file does not expect this and reads only a single line, i.e., a partial token value, so when it tries to auth with that partial value it gets access denied.
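For illustration, Python's base64 module exposes both behaviors; on the command line, `base64 -w 0` similarly disables wrapping:

```python
import base64

token = b"x" * 100  # stand-in for a secret longer than one wrapped line

wrapped = base64.encodebytes(token)  # inserts a newline every 76 chars
single = base64.b64encode(token)     # no newlines at all

print(wrapped.count(b"\n"))  # > 1: a naive line-based reader truncates this
print(single.count(b"\n"))   # 0: safe to read with a single readline()
```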
Related work items: #2219577
Add the ability to restrict undeclared reads to particular scopes. For the JS resolvers, this translates into allowing pips to read sources only under the project roots of their transitive dependency closure.
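A minimal sketch of the scope check, assuming hypothetical `project_roots` and `dependencies` maps (the real logic does not take this form):

```python
from pathlib import Path

def allowed_scopes(pip, project_roots: dict, dependencies: dict) -> set[Path]:
    """Project roots of the pip's transitive dependency closure."""
    scopes, stack, seen = set(), [pip], set()
    while stack:
        p = stack.pop()
        if p in seen:
            continue
        seen.add(p)
        scopes.add(project_roots[p])
        stack.extend(dependencies[p])
    return scopes

def is_undeclared_read_allowed(path: Path, scopes: set[Path]) -> bool:
    # The undeclared read is allowed only if it falls under a scope.
    return any(path.is_relative_to(root) for root in scopes)
```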
Related work items: #2209427
SealDirectory pips take part in filtering. This makes sense for filters that target paths, but it falls apart for filters that target other properties, like a pip id.
The related bug demonstrates this well. Consider a graph with 2 process pips that each produce a SealDirectory:
P0 -> SD0
P1 -> SD1
A pip id filter of `id='p0'` targets p0 as expected. But prior to this change, the negated filter `~(id='p0')` would match p1, sd0, and sd1; and since p0 is a dependency of sd0, p0 would actually end up getting scheduled as well.
This change scopes the id filter to Process, CopyFile, and WriteFile pips. Because negation is implemented by pushing it down into the filter itself, `~(id='p0')` now matches only p1, as expected.
This isn't perfect, though: since SealDirectory pips still participate in filtering, other filter combinations may end up including them again.
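A toy model of the fix; pip kinds and the filter grammar are simplified, and none of these names come from the actual BuildXL filter classes:

```python
from dataclasses import dataclass

FILTERABLE_BY_ID = {"Process", "CopyFile", "WriteFile"}  # SealDirectory excluded

@dataclass(frozen=True)
class Pip:
    pip_id: str
    kind: str

def id_filter(target_id: str, negated: bool = False):
    # Negation is pushed down into the filter itself, so a negated id
    # filter still only ever matches the id-filterable pip kinds.
    def matches(pip: Pip) -> bool:
        if pip.kind not in FILTERABLE_BY_ID:
            return False
        return (pip.pip_id != target_id) if negated else (pip.pip_id == target_id)
    return matches

pips = [Pip("p0", "Process"), Pip("p1", "Process"),
        Pip("sd0", "SealDirectory"), Pip("sd1", "SealDirectory")]

assert [p.pip_id for p in pips if id_filter("p0")(p)] == ["p0"]
# Old behavior matched p1, sd0, sd1 (and scheduled p0 via sd0); now only p1:
assert [p.pip_id for p in pips if id_filter("p0", negated=True)(p)] == ["p1"]
```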
Related work items: #2213617
Add a LinuxPipDebugAnalyzer.
This analyzer generates the configuration required for VS Code to launch a debugger. It takes the pip semi-stable hash as input, generates the configuration, and writes it into a JSON file. Copy the configuration into VS Code's launch.json, then start debugging.
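A rough sketch of what such a generator might look like; the cppdbg/gdb field values and the function signature are illustrative assumptions, not the analyzer's actual output:

```python
import json

def write_debug_config(semi_stable_hash: str, executable: str,
                       args: list[str], working_dir: str,
                       out_path: str) -> None:
    # Shape follows VS Code's cppdbg launch configuration; the concrete
    # values would come from the pip identified by the semi-stable hash.
    config = {
        "name": f"Debug pip {semi_stable_hash}",
        "type": "cppdbg",
        "request": "launch",
        "program": executable,
        "args": args,
        "cwd": working_dir,
        "MIMode": "gdb",
    }
    with open(out_path, "w") as f:
        json.dump({"version": "0.2.0", "configurations": [config]}, f, indent=2)
```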
Related work items: #2207693
CancellationToken.CreateFailure() asserts that cancellation has been requested. However, there are two cancellation tokens in play in HistoricMetaDataCache: Context.CancellationToken and the scheduler's CancellationToken. Creating the failure from the wrong cancellation token throws a ContractException and crashes the build.
This PR uses the CancellationFailure constructor directly to avoid the ContractException. It doesn't matter which token requested the cancellation; either way the build will be cancelled.
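The gist of the anti-pattern and the fix, sketched in Python (names loosely mirror the C# ones; this is not the actual BuildXL code):

```python
class CancellationFailure(Exception):
    """Failure object representing a cancelled build."""

def create_failure(this_token_cancelled: bool) -> CancellationFailure:
    # Factory-style helper: asserts that the specific token it was
    # handed is the one that was cancelled -- the assertion that raced.
    assert this_token_cancelled, "cancellation not requested on this token"
    return CancellationFailure()

def on_cancellation(context_cancelled: bool, scheduler_cancelled: bool):
    # Old: create_failure(context_cancelled) crashed when only the
    # scheduler token was cancelled. New: construct the failure
    # directly, since either token being cancelled cancels the build.
    if context_cancelled or scheduler_cancelled:
        return CancellationFailure()
    return None
```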
Related work items: #2212685