This caused some deployment issues in CB, and a hotfix that CB needs got trapped between cleanup waves. Reverting so CB can get unblocked.
* Removed the now-unnecessary 'dependentPackageIdsToSkip'
* Removed the remaining unnecessary 'withQualifier' and the general netstandard2.0 forcing that happened on the cache specs side
* Reorganized aspnetcore assembly references. This could be simplified further in the future, but for now it is more self-contained.
* Some other minor cleanups
The next step is to make sure dependency versions are the right ones and to turn on the flag that enforces this from the NuGet resolver.
Revert "Merged PR 765049: DScript spec cleanup wave 2
* Removed the now-unnecessary 'dependentPackageIdsToSkip'
* Removed the remaining unnecessary 'withQualifier' and the general netstandard2.0 forcing that happened on the cache specs side
* Reorganized aspnetcore assembly references. This could be simplified further in the future, but for now it is more self-contained.
* Some other minor cleanups
The next step is to make sure dependency versions are the right ones and to turn on the flag that enforces this from the NuGet resolver."
Reverted commit `2b01690d`.
* Removed the now-unnecessary 'dependentPackageIdsToSkip'
* Removed the remaining unnecessary 'withQualifier' and the general netstandard2.0 forcing that happened on the cache specs side
* Reorganized aspnetcore assembly references. This could be simplified further in the future, but for now it is more self-contained.
* Some other minor cleanups
The next step is to make sure dependency versions are the right ones and to turn on the flag that enforces this from the NuGet resolver.
Mostly cache specs, but also touching some shared stuff:
* Unified some NuGet packages (System.Collections.Immutable, System.Memory, etc.) and bumped their versions to the latest. This triggered a bunch of other updates...
* Removed a lot of unnecessary 'withQualifier' calls
* Simplified some references by fixing the 'runtimes' folder behavior on the NuGet resolver (which previously forced a bunch of 'withWinRuntime' calls)
More cleanup passes are possible, but those will come later.
This PR forces:
- All gRPC clients in .NET 4.7.2 to use gRPC.Core (which is the only supported option)
- All gRPC clients in .NET >= 5 to use gRPC.NET
This is needed to:
- Simplify handling of HTTP clients (which we'll use to stop creating one HttpClient per request)
- Support encryption in the future.
The PR also fixes Microsoft.Extensions.Logging.Abstractions, which is a contentious NuGet package for a variety of reasons and gets in the way of doing this. That change can't be submitted separately because the issue is hard to repro that way.
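As a rough illustration of the direction this enables, here is a minimal sketch (assuming Grpc.Net.Client on .NET >= 5; the address, class name, and handler settings are illustrative, not the actual CASaaS wiring) of sharing one long-lived channel instead of creating an HttpClient per request:
```csharp
using System;
using System.Net.Http;
using Grpc.Net.Client;

public static class SharedChannelProvider
{
    // One channel (and one underlying connection pool) for the whole process,
    // instead of a new HttpClient per request.
    private static readonly GrpcChannel Channel = GrpcChannel.ForAddress(
        "https://localhost:5001", // illustrative address
        new GrpcChannelOptions
        {
            HttpHandler = new SocketsHttpHandler
            {
                // Let HTTP/2 multiplex concurrent calls, opening extra
                // connections only when the stream limit is reached.
                EnableMultipleHttp2Connections = true,
                PooledConnectionIdleTimeout = TimeSpan.FromMinutes(5),
            },
        });

    public static GrpcChannel Get() => Channel;
}
```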
I've recently updated ErrorProne.NET and added a few rules there.
This PR does the following:
1. Updates ErrorProne.NET to 0.6.1
2. Updates the compiler toolchain to the latest version
3. Fixes the following warnings:
## Structs with default equality and hashcode are used in hash tables
This is problematic since the default methods cause boxing and the hash function is very weak. For instance, the hash codes for `KeyValuePair<int, int>(1, 1)` and `(2, 2)` are the same.
```
Public\Src\FrontEnd\Core\Incrementality\FrontEndPublicFacadeAndAstProvider.cs(316,20): warning EPC25: The default 'ValueType.Equals' is used in Content.Equals(other.Content)
Public\Src\Utilities\PackedTable\StringTable.cs(128,30): warning EPC24: A struct 'BuildXL.Utilities.PackedTable.CharSpan' with a default Equals and GetHashCode implementation is used as a key in a hash table
Public\Src\Engine\Scheduler\Caching\HistoricMetadataCache.cs(1048,17): warning EPC24: A struct 'BuildXL.Scheduler.Cache.HistoricMetadataCache.Expirable<BuildXL.Scheduler.Cache.HistoricMetadataCache.PublishedEntry>' with a default Equals and GetHashCode implementation is used as a key in a hash table
Public\Src\Tools\Execution.Analyzer\Analyzers.Partner\GraphDiffAnalyzer.cs(391,26): warning EPC24: A struct 'System.Collections.Generic.KeyValuePair<string, (string, bool)>' with a default Equals and GetHashCode implementation is used as a key in a hash table
Public\Src\Tools\Execution.Analyzer\Analyzers.Partner\GraphDiffAnalyzer.cs(94,13): warning EPC24: A struct 'System.Collections.Generic.KeyValuePair<BuildXL.Pips.PipId, BuildXL.Pips.PipId>' with a default Equals and GetHashCode implementation is used as a key in a hash table
Public\Src\Tools\SymbolDaemon\SymbolDaemon.cs(559,13): warning EPC24: A struct 'BuildXL.Ipc.ExternalApi.SealedDirectoryFile' with a default Equals and GetHashCode implementation is used as a key in a hash table
```
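The typical fix, sketched below with an illustrative type (not one of the structs listed above), is to implement `IEquatable<T>` and override `GetHashCode` so hash-table lookups neither box nor collide:
```csharp
using System;
using System.Collections.Generic;

public readonly struct FileKey : IEquatable<FileKey>
{
    public FileKey(string path, int version)
    {
        Path = path;
        Version = version;
    }

    public string Path { get; }
    public int Version { get; }

    // Strongly-typed Equals avoids the boxing incurred by ValueType.Equals(object).
    public bool Equals(FileKey other) => Path == other.Path && Version == other.Version;

    public override bool Equals(object obj) => obj is FileKey other && Equals(other);

    // Mixes all fields, unlike the weak reflection-based default.
    // (HashCode.Combine is available on modern .NET targets.)
    public override int GetHashCode() => HashCode.Combine(Path, Version);
}

public static class FileKeyExample
{
    public static void Main()
    {
        var map = new Dictionary<FileKey, string> { [new FileKey(@"a\b.cs", 1)] = "hit" };
        Console.WriteLine(map[new FileKey(@"a\b.cs", 1)]); // lookup without boxing
    }
}
```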
## Linear Enumerable.Contains instead of HashSet.Contains
```
Public\Src\Utilities\UnitTests\TestUtilities.XUnit\XAssert.cs(272,69): warning EPC23: Linear search via Enumerable.Contains is used instead of an instance Contains method
Public\Src\Engine\UnitTests\Processes\SandboxedProcessTest.cs(1601,17): warning EPC23: Linear search via Enumerable.Contains is used instead of an instance Contains method
Public\Src\Engine\Processes\ProcessDumper.cs(230,21): warning EPC23: Linear search via Enumerable.Contains is used instead of an instance Contains method
```
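The pattern behind the warning, and the usual fix, looks roughly like this (illustrative names; on a plain sequence the LINQ extension scans linearly, while a `HashSet<T>` gives constant-time lookups):
```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class ContainsExample
{
    public static void Main()
    {
        string[] names = { "a", "b", "c" };
        string[] queries = { "c", "x", "a" };

        // Before: Enumerable.Contains walks the array on every lookup, O(n) each.
        int slowHits = queries.Count(q => names.Contains(q));

        // After: materialize once into a HashSet and use its instance Contains, O(1) each.
        var nameSet = new HashSet<string>(names);
        int fastHits = queries.Count(q => nameSet.Contains(q));

        Console.WriteLine($"{slowHits} {fastHits}"); // same answer, different cost
    }
}
```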
There were also a bunch of cases where we were using the default `ToString` implementation, which is not helpful.
This PR reverts the revert and adds a fix to prevent the issue from happening.
Revert "Merged PR 757989: Remove Azure Storage SDK V9"
This reverts commit 0e95536756.
The Linux build in the rolling build pipeline started to fail after this change.
Reverts !758549
Revert "Merged PR 757989: Remove Azure Storage SDK V9"
This reverts commit 0e95536756.
The Linux build in the rolling build pipeline started to fail after this change.
We're now getting Component Governance alerts about our usage of this NuGet package. This PR removes it from everywhere it remains.
This PR is risky because it changes the API used by CASaaS across several subsystems.
BuildXL looks up cache miss data based on a priority list of keys configured by the user. In practice, the best general setup is to use a chain of git parent commits. Today, this logic is replicated in the code that invokes bxl.exe for some of our customers. This PR provides it as a default experience without the end user needing to set it up manually.
The approach is heuristic: we try to pick candidate keys that might have been used to publish a fingerprint store for a build as "close" as possible to the one running.
For this, the candidates are picked as follows (a code sketch follows the example below):
1. Take the latest 5 commits from the current HEAD. We should get a match if this branch was built before; for example, for a PR we expect one build per 'iteration'. Because an iteration can have multiple commits, we should not use a very small number (say, just this commit and the one before it), but let's not go overboard either (hence 5).
2. For any additional branches (indicated by the user via an argument), the assumption is that builds run against every commit (this should typically be the `main` branch, or the target branch for a PR).
a) We retrieve the merge-base of HEAD and that branch, and use as keys the hashes of the merge-base and the 2 commits immediately before it.
b) Finally, as a last resort, we use the latest 3 hashes from that branch, assuming again that it was built recently.
An example: for a PR (branched from `h5` with a number of commits) merging into main, with this topology:
```
.--b5--b4--b3--b2--b1-- b0 [PR branch]
/
h8 -- h7 --h6--- h5 -- h4 -- h3 -- h2 -- h1 -- h0 [main]
```
The hashes for the keys would be (as described above):
1) `b0, b1, b2, b3, b4`
2a) `h5, h6, h7`
2b) `h0, h1, h2`
Duplicates are removed, so something like this:
```
.--b1-- b0 [PR branch]
/
h8 -- h7 --h6--- h5 -- h1 -- h0 [main]
```
would result in the candidates being `b0, b1, h5, h6, h7, h0, h1`.
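A minimal sketch of this selection, shelling out to git (the class, method names, and parameters are illustrative, not BuildXL's actual API):
```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

public static class CacheKeyCandidates
{
    // Returns candidate fingerprint-store keys, newest first, duplicates removed.
    public static IReadOnlyList<string> Compute(string repoRoot, IEnumerable<string> extraBranches)
    {
        var keys = new List<string>();

        // 1) The latest 5 commits reachable from HEAD (b0..b4 in the example above).
        keys.AddRange(Git(repoRoot, "rev-list --max-count=5 HEAD"));

        foreach (var branch in extraBranches)
        {
            // 2a) The merge-base with HEAD plus the 2 commits before it (h5, h6, h7).
            var mergeBase = Git(repoRoot, $"merge-base HEAD {branch}").FirstOrDefault();
            if (mergeBase != null)
            {
                keys.AddRange(Git(repoRoot, $"rev-list --max-count=3 {mergeBase}"));
            }

            // 2b) Last resort: the latest 3 commits on the branch itself (h0, h1, h2).
            keys.AddRange(Git(repoRoot, $"rev-list --max-count=3 {branch}"));
        }

        return keys.Distinct().ToList();
    }

    private static List<string> Git(string repoRoot, string arguments)
    {
        var psi = new ProcessStartInfo("git", arguments)
        {
            WorkingDirectory = repoRoot,
            RedirectStandardOutput = true,
            UseShellExecute = false,
        };
        using (var process = Process.Start(psi))
        {
            var output = process.StandardOutput.ReadToEnd();
            process.WaitForExit();
            return output.Split(new[] { '\n', '\r' }, StringSplitOptions.RemoveEmptyEntries).ToList();
        }
    }
}
```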
Related work items: #2116194
This PR adds support for [PublicAPIAnalyzers](https://github.com/dotnet/roslyn-analyzers) - the analyzers developed by the Roslyn team and used by the .NET community to enforce the stability of a library's public surface.
The idea behind the analyzer is that during the build, it checks the `PublicAPI.Shipped.txt` and `PublicAPI.Unshipped.txt` files, which record the project's public surface. If the public surface changes with respect to those files, the analyzer emits a warning.
The analyzers won't prevent you from making breaking changes, but they ensure those breaking changes are intentional (plus, some of them are not obvious at all: for instance, adding a default parameter is a runtime breaking change even though it is not a compile-time breaking change).
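To make the default-parameter example concrete, here is a minimal sketch (illustrative names) of why it breaks at runtime but not at compile time:
```csharp
// Version 1 of a hypothetical library ships:
//     public static void Log(string message) { ... }
//
// Version 2 replaces it with a defaulted overload:
public static class Logger
{
    public static void Log(string message, bool verbose = false) =>
        System.Console.WriteLine(verbose ? "[verbose] " + message : message);
}

public static class Caller
{
    public static void Main()
    {
        // Recompiling against v2 is fine: the compiler fills in the default here.
        Logger.Log("hello");

        // But an assembly compiled against v1 has a reference to Log(string)
        // baked into its IL. Run it against v2 without recompiling and the CLR
        // cannot find that exact signature: MissingMethodException at runtime,
        // even though the source still compiles cleanly.
    }
}
```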
To enable the analyzer, two steps are required:
1. Add `usePublicApianalyzer: true` to a project spec
2. Add `publicApiFiles` to a project spec.
The last part is required since there are multiple strategies that can be used to enforce API compatibility. For instance, the API surface might differ across target frameworks; in this case, a helper function `getFrameworkSpecificPublicApiFiles` can be used that returns different files depending on the target framework.
The analyzer also warns on some public API anti-patterns, such as default arguments for methods with overloads, or default arguments in general.
This PR also enables this analysis for the cache hashing project, since the stability of that project is very important to avoid runtime failures in BuildXL, Artifacts, and CloudBuild.
Related work items: #2112018
Revert "Merged PR 747309: Upgrade Azure.Identity to 1.10.2
Upgrades some dependent packages and removes a few unused ones
Related work items: #2115829"
Reverted commit `57b2faf4`.
Reverts !750711
Related work items: #2115829
Revert "Merged PR 747309: Upgrade Azure.Identity to 1.10.2
Upgrades some dependent packages and removes a few unused ones
Related work items: #2115829"
Reverted commit `57b2faf4`.
Related work items: #2115829
The purpose of upgrading XUnit is twofold:
1. To use the latest and greatest XUnit packages.
2. To attempt to relieve hanging XUnit pips.
For (2), we also make other changes:
- Split the Ninja UTs into multiple "smaller" pips, where each pip should run faster than the big one.
- Limit the number of XUnit pips that can run in parallel by using a semaphore. We suspect that the hangs are caused by too many XUnit pips running in parallel.
Example of successful validation: https://dev.azure.com/mseng/Domino/_build/results?buildId=23035493&view=results
Related work items: #2111327
These tests were added years ago to ensure there were no bad interactions between GVFS and BuildXL. They have not been run in quite a while. Removing them to clean up code and reduce pipeline execution time.
Removing old AD-based auth packages
----
#### AI-Generated Description
This change removes the dependency on the `Microsoft.IdentityModel.Clients.ActiveDirectory` NuGet package from the **config.dsc** file. This package was used for authentication with Azure services, but it is now deprecated and replaced by `Microsoft.Identity.Client`. The change also removes the corresponding binding redirect entry from the **BuildXLSdk.dsc** file.
Related work items: #2099904
- Uses Microsoft.Artifacts.Authentication for MSAL authentication in VssCredentialsFactory
- Does not use the WAM broker because we can't easily get a parent window handle when running under BuildXL
- Skips device code authentication to avoid having to display a message on the console.
Related work items: #2063548
Updating the following packages:
- Azure.Identity: `1.8.2` -> `1.10.0`
- Azure.Core: `1.31.0` -> `1.34.0`
- Azure.Security.KeyVault.Certificates: `4.0.3` -> `4.5.1`
- Azure.Security.KeyVault.Secrets: `4.0.3` -> `4.5.0`
Allows the set of changes reverted in https://mseng.visualstudio.com/Domino/_git/CloudBuild/pullrequest/736215?_a=files&path=/private/Deployment/ValidateCache/src/SecretRetriever.cs to merge, since this relies on the updated set of packages and simplifies auth for the cache validation task used in nightly quickbuild releases.
----
## AI-Generated Description
This change updates the **config.dsc** file by upgrading the versions of some NuGet packages related to Azure services, such as Azure.Core, Azure.Identity, Azure.Security.KeyVault.Secrets, and Azure.Security.KeyVault.Certificates.
Related work items: #2100527
Upgrade to RocksDbSharp 8.1.1-20230829.3 and remove a now-unnecessary hack. The new version of the package fixes the issue that required us to keep a reference to all `ColumnFamilyOptions` objects so that they would not be GC'd.
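For context, the removed hack looked roughly like this (a hypothetical sketch; the class, field, and method names are illustrative, not the actual RocksDbLifetimeDatabase code):
```csharp
using System.Collections.Generic;
using RocksDbSharp;

public sealed class ColumnFamilyOptionsCache
{
    // With the old package, the GC could collect a ColumnFamilyOptions wrapper
    // while the native database still referenced its memory, so every options
    // object was pinned in a list for the lifetime of the database.
    private readonly List<ColumnFamilyOptions> _keepAlive = new List<ColumnFamilyOptions>();

    public ColumnFamilyOptions Create()
    {
        var options = new ColumnFamilyOptions();
        _keepAlive.Add(options); // prevents premature collection; unnecessary after the upgrade
        return options;
    }
}
```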
----
## AI-Generated Description
This change:
- Updates the versions of `RocksDbSharp` and `RocksDbNative` NuGet packages in the **config.dsc** file.
- Removes some unused code and comments related to a bug in the RocksDb library in the **RocksDbLifetimeDatabase.cs** file.
Related work items: #2090673
Updates various SBOM-related packages to work with the new Component Detector.
These changes are necessary for the CloudBuild repo to update to the new version of Component Detector and SBOM tooling.
- Removes references to Microsoft.IdentityModel.Clients.ActiveDirectory in the downloader
- Removes old unused package: Microsoft.Rest.ClientRuntime.Azure.Authentication
Related work items: #2099917
As preparation for checkpointing, which requires that garbage collection happen for the whole storage account at once, it makes sense to first support garbage collecting multiple namespaces/universes in a single garbage collection run. The idea is that instead of accessing the DB directly, there is an IAccessor that limits the view of the database to a given namespace. In practice, this means each accessor has a unique set of RocksDB column families that it accesses. Other than that, the logic to create/manage the database stays the same.
Another change is that we can now update our view of the world in subsequent runs by reading Azure Storage's change feed. This is extremely important since otherwise nothing works: on the first run, since we touch everything, nothing is evictable; and by the second run, so much time has passed that without updating our view of things we might delete blobs with new references.
Finally, after both these changes, I also implemented checkpointing. The checkpoint and all its data live in separate containers in the 0th shard of the cache, because different-sized caches _are different caches_, regardless of whether they share accounts. Ideally we would never hit this case, since we're the ones resharding, but even today we already have the problem because some of our tests do not use all 100 accounts we've provisioned.
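A hypothetical sketch of the accessor shape described above (names are illustrative, not the actual CASaaS API); the key point is that all reads and writes go through column families derived from the namespace:
```csharp
using RocksDbSharp;

// Scopes all database access to the column families of one namespace.
public interface IAccessor
{
    byte[] Get(string baseColumnFamily, byte[] key);
    void Put(string baseColumnFamily, byte[] key, byte[] value);
}

public sealed class NamespaceAccessor : IAccessor
{
    private readonly RocksDb _db;
    private readonly string _namespace;

    public NamespaceAccessor(RocksDb db, string ns)
    {
        _db = db;
        _namespace = ns;
    }

    // Each namespace gets its own set of column families, e.g. "fingerprints/ns1",
    // so two accessors over the same database can never see each other's data.
    private ColumnFamilyHandle Resolve(string baseColumnFamily) =>
        _db.GetColumnFamily($"{baseColumnFamily}/{_namespace}");

    public byte[] Get(string baseColumnFamily, byte[] key) =>
        _db.Get(key, Resolve(baseColumnFamily));

    public void Put(string baseColumnFamily, byte[] key, byte[] value) =>
        _db.Put(key, value, Resolve(baseColumnFamily));
}
```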
Upgrade ADO and Artifact packages to version 19.224.
The motivation is to upgrade drop.app to the version that contains the drop domain related changes.
The RuntimeContracts and Microsoft.Identity.Client packages are also upgraded per the 19.224 version requirements.
Related work items: #2079209
Separate Guardian from the Roslyn analyzers. We now use the Roslyn analyzers from Microsoft in all builds as part of the build. Guardian will be enabled when the TOOLPATH_GUARDIAN variable exists.
Related work items: #2031143
Revert "Merged PR 719250: Use Microsoft.Artifacts.Authentication for cache/drop authentication"
This reverts commit dc8fb8ff65.
Reverting a change that's breaking dev cache authentication for users without VPN.