The previous shouldDeploy condition had not been included in the
deployment condition. This variable has been updated to "forceDeployment"
and included back in the deployment condition.
shouldDeploy has been updated to check whether release builds are the
latest version in their major version series.
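For illustration, the kind of "latest in its major version series" check described above could look like the following sketch using the `semver` package; the function name and inputs are hypothetical, not the pipeline's actual implementation:

```typescript
import * as semver from "semver";

/**
 * Illustrative only: returns true if `candidate` is the newest release
 * among `allReleases` that shares its major version.
 */
function isLatestInMajorSeries(candidate: string, allReleases: string[]): boolean {
	const major = semver.major(candidate);
	const latestInSeries = semver.maxSatisfying(allReleases, `${major}.x`);
	return latestInSeries === candidate;
}

// Example: 1.2.3 is the latest in the 1.x series even though 2.0.0 exists.
console.log(isLatestInMajorSeries("1.2.3", ["1.0.0", "1.2.3", "2.0.0"])); // true
```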
## Description
It is possible to load a summary into a detached SharedTree. In this
case, the detached revision that is used to generate sequence IDs for
commits while detached must be updated to ensure that it doesn't
duplicate any of the sequence IDs already used in the summary. This PR
fixes the issue and also adds an assert when sequencing in the
EditManager to ensure that we don't sequence regressive sequence IDs.
This fix also allows us to trim the trunk when summarizing (without
breaking our fuzz tests), which reduces summary sizes especially for
detached trees with many synchronous edits.
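As a rough illustration of the two behaviors described above (the names and numeric sequence IDs below are hypothetical simplifications, not SharedTree's actual types):

```typescript
// Hypothetical sketch: after loading a summary into a detached tree, bump the
// detached revision past whatever the summary already used, so newly generated
// sequence IDs never collide with it.
function loadSummaryIntoDetachedTree(
	maxSequenceIdInSummary: number,
	state: { detachedRevision: number; latestSequencedId: number },
): void {
	state.detachedRevision = Math.max(state.detachedRevision, maxSequenceIdInSummary + 1);
	state.latestSequencedId = maxSequenceIdInSummary;
}

// Hypothetical sketch of the new EditManager invariant: sequence IDs must be
// strictly increasing when commits are sequenced.
function sequenceCommit(state: { latestSequencedId: number }, sequenceId: number): void {
	if (sequenceId <= state.latestSequencedId) {
		throw new Error("Cannot sequence a commit with a regressive sequence ID");
	}
	state.latestSequencedId = sequenceId;
}
```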
## Description
Adjusts the `condition` parameter of
`include-conditionally-run-stress-test` to apply to the entire stage
rather than the test-running job. This improves clarity on the pipeline
overview page; see the most recent run vs. previous ones:
![image](https://github.com/user-attachments/assets/e344d02a-a617-4163-bbda-0bfaac276541)
This also fixes the `stressMode` parameter not being given options at
queue time, an issue introduced in #21702 that prevented the pipeline
from being queued.
---------
Co-authored-by: Abram Sanderson <absander@microsoft.com>
## Description
Users of `IntervalCollection` can specify a start and end `side` when
adding or changing an interval. Doing so changes the behavior for the
resulting `start` and `end` local references put into the merge tree:
using the default `Side.Before` causes references to slide forward when
the segment they exist on is removed, whereas using `Side.After` causes
references to slide backward.
In the common case, sliding of local reference positions is initiated by
removing a segment in merge-tree. However, `IntervalCollection` also
reuses this slide logic when intervals are changed/added concurrently to
a segment removal. This latter case did not correctly plumb through the
sliding preference to the helper function it used, which gave
undesirable merge semantics.
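For illustration, specifying sides when adding an interval might look like the sketch below; the exact call shape and exports are assumed from `@fluidframework/sequence` and may differ by version:

```typescript
import { Side, type ISharedString } from "@fluidframework/sequence";

// Illustrative only; the exact overloads may differ across package versions.
function addStickyComment(sharedString: ISharedString): void {
	const comments = sharedString.getIntervalCollection("comments");

	// Side.Before (the default) lets an endpoint slide forward when its
	// segment is removed; Side.After makes it slide backward instead.
	comments.add({
		start: { pos: 2, side: Side.After },
		end: { pos: 7, side: Side.Before },
	});
}
```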
Resolves
[AB#22191](https://dev.azure.com/fluidframework/235294da-091d-4c29-84fc-cdfc3d90890b/_workitems/edit/22191).
Co-authored-by: Abram Sanderson <absander@microsoft.com>
Collect more data (telemetry) to better understand the number of concurrent users in a document.
In addition to capturing quorum size, capture audience size, as well as how many socket connections there were in a session.
## Description
The tests unnecessarily initialize a SharedTree during test discovery.
This exercises many common code paths in SharedTree (creation, editing,
etc.). Breakpoints set in these areas will be hit during the discovery
of these memory tests, even if the memory tests are not being run. This
is annoying and confusing for development - for example, running any
(unrelated) single SharedTree test from the test browser will also run
the memory test initialization code and hit extraneous breakpoints.
This PR removes the unnecessary initialization code, which was (sneakily)
happening in a property initializer.
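For context, a minimal sketch of the pattern (hypothetical helpers, not the actual test code): a property initializer runs at construction/discovery time, whereas deferring the work keeps it inside the test run.

```typescript
// Hypothetical helpers standing in for the real SharedTree setup.
interface LargeTree {
	nodeCount: number;
}
function createAndEditLargeTree(): LargeTree {
	return { nodeCount: 10_000 };
}

// Anti-pattern: the property initializer runs as soon as the benchmark
// object is constructed, i.e. during test discovery.
class EagerTreeMemoryBenchmark {
	private readonly tree: LargeTree = createAndEditLargeTree();
	public run(): number {
		return this.tree.nodeCount;
	}
}

// Fix: defer the expensive work until the test actually runs.
class LazyTreeMemoryBenchmark {
	private tree: LargeTree | undefined;
	public run(): number {
		this.tree ??= createAndEditLargeTree();
		return this.tree.nodeCount;
	}
}
```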
## Description
Rename `toFlexSchema` to `toStoredSchema` to reflect that flex schema is no
longer a thing.
Rename/move tests to match what they are testing.
## Description
Updates our transitive dependencies on `path-to-regexp` to versions that
fixed https://nvd.nist.gov/vuln/detail/CVE-2024-45296 . Accomplished by
updating our direct dependencies on `sinon` to a mix of version 18 and
19, since that's the main way in which we get transitive dependencies on
`path-to-regexp`.
`@types/sinon` was also opportunistically updated to the latest version
where it wasn't already up to date.
## Description
Updates `tar` to version `6.2.1` to address
https://nvd.nist.gov/vuln/detail/CVE-2024-28863 . Done by adding a
`pnpm.overrides` entry `"tar": "^6.2.1"` to the package.json of each of
the affected packages, running `pnpm i --no-frozen-lockfile`, then
removing the override from package.json and running the same command
again.
Updates the configuration for build-tools packages to use fluid-build to
build.
The changes were modest since it was already using fluid-build for most
of the build. Only the root-level tasks needed to be hooked up, and the
individual package scripts needed to be updated.
[AB#17062](https://dev.azure.com/fluidframework/235294da-091d-4c29-84fc-cdfc3d90890b/_workitems/edit/17062)
## Description
This PR improves GraphCommits in a couple of ways.
1. It adds a protective assert to the properties of commits that have
been trimmed by the EditManager, so that any part of our system that
accesses those commits immediately throws an error. This will catch bugs
that try to hold on to stale graph commits much sooner than they would
otherwise be caught.
2. It removes the rollback property from GraphCommit and implements it
via a WeakMap instead (a minimal sketch follows below). This is
appropriate because the rollback property is only used by a single scoped
feature, so it need not be a property that is visible to all other code
that works with GraphCommits.
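A minimal sketch of the WeakMap side-table approach; the types and helpers below are illustrative, not the actual SharedTree code:

```typescript
// Hypothetical stand-in for the real GraphCommit type.
interface GraphCommit {
	readonly revision: string;
}

// Rollback information lives in a side table scoped to the one feature that
// needs it, rather than as a property visible on every GraphCommit.
const rollbackInfo = new WeakMap<GraphCommit, GraphCommit>();

function markAsRollbackOf(rollback: GraphCommit, original: GraphCommit): void {
	rollbackInfo.set(rollback, original);
}

function getRollbackOrigin(commit: GraphCommit): GraphCommit | undefined {
	return rollbackInfo.get(commit);
}
```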
## Description
When a SharedTree is detached and submits a change, this PR now schedules
the subsequent advancement of the EditManager's minimum sequence number
on the JS microtask queue rather than doing it immediately/synchronously.
Doing it synchronously is problematic because advancing the minimum
sequence number might cause trunk commits to be evicted; however, this
happens in an event callback context: `submitCommit` is called as a
result of the `afterChange` event being fired on the local branch. If
trunk eviction were to occur immediately, it would mean that
other listeners to the `afterChange` event might experience different
trunks (some before trimming, others after trimming). In practice, this
is a problem when generating revertibles, as revertibles respond to the
`afterChange` event by creating branches - but the bases of these
branches may or may not have already been trimmed by the aforementioned
behavior. Delaying the trimming guarantees that all listeners to the
`afterChange` event can safely reference any commits that they may
already have handles to.
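A minimal sketch of the deferral (hypothetical names; not the actual SharedTree code):

```typescript
// Illustrative sketch: defer the minimum sequence number advance so it runs
// after every `afterChange` listener in the current event dispatch has
// observed the same (untrimmed) trunk.
function scheduleMinimumSequenceNumberAdvance(editManager: {
	advanceMinimumSequenceNumber(): void;
}): void {
	queueMicrotask(() => {
		// Runs after the current synchronous call stack (and therefore after
		// all listeners of the in-flight `afterChange` event) has completed.
		editManager.advanceMinimumSequenceNumber();
	});
}
```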
Adds steps to the release tool to check that release notes have been
generated and that the per-package changelogs have been updated. Also
prompts the user for a bump type earlier in the process if needed.
The changelog step is not trivial to check, so I just prompt the user to
select whether they did it. Not ideal, but at least the tool will remind
the release driver that this step is needed. We can add a more complete
check in the future if needed.
---------
Co-authored-by: Joshua Smithrud <54606601+Josmithr@users.noreply.github.com>
## Description
Handle schema contravariantly when generating input types.
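For context, a small TypeScript sketch (unrelated to the actual tree schema types) of what treating an input type contravariantly means:

```typescript
// Illustrative only: with TypeScript variance annotations, a purely-input
// type parameter can be declared contravariant (`in`).
interface Inserter<in TInput> {
	insert: (value: TInput) => void;
}

const acceptsAnyNamed: Inserter<{ name: string }> = {
	insert: (value) => console.log(value.name),
};

// Safe: something that can insert any `{ name: string }` can certainly be
// used where only `{ name: string; breed: string }` values will be inserted.
const acceptsDogs: Inserter<{ name: string; breed: string }> = acceptsAnyNamed;
acceptsDogs.insert({ name: "Rex", breed: "Husky" });
```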
## Breaking Changes
No runtime changes, but typing tweaks could break users on non-exact
schema, and the enum schema APIs have been tweaked to avoid relying on
non-exact schema, see changeset.
## Description
While neither of these compile errors is in the `@public` API surface,
they could impact customers using the public API due to how `libCheck`
works and how Fluid Framework handles its API exports.
See changeset for details.
I have filed
https://dev.azure.com/fluidframework/internal/_workitems/edit/21856 to
internally track that this was not detected by the CI of FF or FF
examples.
## Reviewer guidance
This is part 3 of 3 of the op bunching feature. This part focuses on the
changes in the DDS. [Part
1](https://github.com/microsoft/FluidFramework/pull/22839) and [part
2](https://github.com/microsoft/FluidFramework/pull/22840).
## Problem
During op processing, the container runtime sends ops one at a time to
data stores, and data stores send them one at a time to DDSes. If a DDS
has received M contiguous ops as part of a
batch, the DDS is called M times to process them individually. This has
performance implications for some DDSes and they would benefit from
receiving and processing these M ops together.
Take SharedTree for example:
For each op received that contains a sequenced commit, all the pending
commits are processed by the rebaser. So, as the number of ops received
grows, so does the work to process pending commits. The following example
describes this clearly:
Currently if a shared tree client has N pending commits which have yet
to be sequenced, each time that client receives a sequenced commit
authored by another client (an op), it will update each of its pending
commits which takes at least O(N) work.
Instead, if it receives M commits at once, it could do a single update
pass on each pending commit instead of M per pending commit.
It can compose the M commits together into a single change to update
over, so it can potentially go from something like O(N * M) work to
O(N + M) work with batching.
## Solution - op bunching
The solution implemented here is a feature called "op bunching".
With this feature, contiguous ops in a grouped op batch that belong to a
data store / DDS will be bunched and sent to it in an array. The grouped
op is sent as an `ISequencedMessageEnvelope`, and the individual message
`contents` in it are sent as an array along with the
`clientSequenceNumber` and `localOpMetadata`.
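To make the delivery shape concrete, here is a hedged TypeScript sketch; the interfaces below are illustrative stand-ins rather than the exact runtime contracts (only `ISequencedMessageEnvelope`, `contents`, `clientSequenceNumber`, and `localOpMetadata` come from the description above):

```typescript
// Hypothetical shapes for illustration only; not the exact runtime contracts.
interface ISequencedMessageEnvelope {
	// Fields shared by every message in the bunch (assumed, simplified).
	sequenceNumber: number;
	minimumSequenceNumber: number;
	clientId: string | null;
}

// One entry per individual message that was bunched together.
interface IBunchedMessageContent {
	contents: unknown;
	clientSequenceNumber: number;
	localOpMetadata: unknown;
}

// What a data store / DDS receives: the shared envelope plus an array of
// per-message contents, so it can process the whole bunch in one call.
interface IMessageBunch {
	envelope: ISequencedMessageEnvelope;
	messageContents: readonly IBunchedMessageContent[];
}

// A DDS like SharedTree could compose the bunch into one change before
// rebasing its pending commits, doing one update pass instead of M.
function processBunch(bunch: IMessageBunch, applyComposed: (contents: unknown[]) => void): void {
	applyComposed(bunch.messageContents.map((m) => m.contents));
}
```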
The container runtime sends each data store its bunch of contiguous ops,
and the data store in turn sends each DDS its bunch. The DDS can choose
how to process these ops. SharedTree, for instance, would compose the
commits in all of these ops and update its pending commits with the
result.
Bunching only contiguous ops for a data store / DDS in a batch preserves
the behavior of processing ops in the sequence they were received.
A couple of behavior changes to note:
1. Op events - An implication of this change is the timing of "op"
events emitted by container runtime and data store runtime will change.
Currently, these layers emit an "op" event immediately after an op is
processed. With this change, an upper layer will only know when a bunch
has been processed by a lower layer. So, it will emit "op" events for
individual ops in the bunch after the entire bunch is processed.
From my understanding, this should be fine because we do not provide any
guarantee that the "op" event will be emitted immediately after an op is
processed. These events will be emitted in order of op processing and
(sometimes) after the op is processed.
Take delta manager / container runtime as an example. Delta manager
sends an op for processing to container runtime and emits the "op"
event. However, container runtime may choose to not process these ops
immediately but save them until an entire batch is received. This change
was made but was reverted due to some concerns not related to the topic
discussed here - https://github.com/microsoft/FluidFramework/pull/21785.
The change here is similar to the above behavior where an upper layer
doesn't know and shouldn't care what lower layers do with ops.
2. `metadata` property on message - With this PR, the metadata property
is removed from a message before it's sent to data stores and DDSes. This
is because we now send one common message (the grouped op) and an array
of contents. Individual messages within a grouped op carry batch begin
and end metadata, but those are only added by the runtime to resemble the
old batch messages. The data store and DDS don't care about them, so
removing them should be fine.
[AB#20123](https://dev.azure.com/fluidframework/235294da-091d-4c29-84fc-cdfc3d90890b/_workitems/edit/20123)
Adds a test that attempts to create a new datastore as synchronously
as possible. In the process of creating this test, I noticed a bug in
the handle resolution code, where handles to datastores would fail to
resolve unless custom request logic, like that in data object, is used.
It should not be necessary to use custom request logic, as it is a
pattern the runtime has been moving away from. To fix this, I've added
handling to the datastore runtime such that it returns the entrypoint
when the request comes via a handle and has no sub-path.
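A rough sketch of the behavior described above; all names here are hypothetical, not the actual datastore runtime code:

```typescript
// Illustrative request/response shapes (hypothetical, simplified).
interface IRequest {
	url: string;
	headers?: Record<string, unknown>;
}

interface IResponse {
	status: number;
	mimeType: string;
	value: unknown;
}

// If the request came via a handle and has no sub-path, resolve it to the
// data store's entrypoint instead of requiring custom request logic.
async function handleDataStoreRequest(
	request: IRequest,
	getEntryPoint: () => Promise<unknown>,
	fallback: (request: IRequest) => Promise<IResponse>,
): Promise<IResponse> {
	const subPath = request.url.replace(/^\//, "");
	const viaHandle = request.headers?.viaHandle === true;
	if (viaHandle && subPath === "") {
		return { status: 200, mimeType: "fluid/object", value: await getEntryPoint() };
	}
	return fallback(request);
}
```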
## Description
Update puppeteer to latest version so it brings in `ws@8.17.x` (instead
of `8.16`) to address https://nvd.nist.gov/vuln/detail/CVE-2024-37890.
The [breaking changes from `puppeteer` 22 to
23](https://github.com/puppeteer/puppeteer/releases/tag/puppeteer-v23.0.0)
don't affect us as far as I can tell, but I had to update
`jest-puppeteer` to its latest version (and it made sense to update
`jest-environment-puppeteer` as well) for jest tests to pass again.
Otherwise I was getting errors like this:
```console
FAIL dist/test/jest/buffer.spec.js
● Test suite failed to run
TypeError: global.context.isIncognito is not a function
at closeContext (node_modules/.pnpm/jest-environment-puppeteer@9.0.2_typescript@5.4.5/node_modules/jest-environment-puppeteer/dist/index.js:210:24)
```
## Description
This fixes a bug in which the sequence numbers associated with local
branches could be advanced when they shouldn't have been. This allowed
commits from those branches to be garbage collected while the branches
were still alive.
This change adds default implementations of the `IFluidRepo`,
`IWorkspace`, `IReleaseGroup`, and `IPackage` interfaces. These
implementations are sufficient for most uses in build-tools and
build-cli. They may be further extended for specific scenarios in future
changes.
I disabled the TypeDoc generation in this PR to make the code changes
easier to review. Once this series of PRs lands, we can enable TypeDoc
or api-markdown-documenter and regen the docs.
---------
Co-authored-by: Alex Villarreal <716334+alexvy86@users.noreply.github.com>
## Reviewer guidance
This is part 2 of 3 of the op bunching feature. This part focuses on the
changes in the data store. Part 1 - #22839
## Problem
During op processing, the container runtime sends ops one at a time to
data stores, and data stores send them one at a time to DDSes. If a DDS
has received M contiguous ops as part of a
batch, the DDS is called M times to process them individually. This has
performance implications for some DDSes and they would benefit from
receiving and processing these M ops together.
Take SharedTree for example:
For each op received that contains a sequenced commit, all the pending
commits are processed by the rebaser. So, as the number of ops received
grows, so does the work to process pending commits. The following example
describes this clearly:
Currently if a shared tree client has N pending commits which have yet
to be sequenced, each time that client receives a sequenced commit
authored by another client (an op), it will update each of its pending
commits which takes at least O(N) work.
Instead, if it receives M commits at once, it could do a single update
pass on each pending commit instead of M per pending commit.
It can compose the M commits together into a single change to update
over, so it can potentially go from something like O(N * M) work to
O(N + M) work with batching.
## Solution - op bunching
The solution implemented here is a feature called "op bunching".
With this feature, contiguous ops in a grouped op batch that belong to a
data store / DDS will be bunched and sent to it in an array. The grouped
op is sent as an `ISequencedMessageEnvelope`, and the individual message
`contents` in it are sent as an array along with the
`clientSequenceNumber` and `localOpMetadata`.
The container runtime sends each data store its bunch of contiguous ops,
and the data store in turn sends each DDS its bunch. The DDS can choose
how to process these ops. SharedTree, for instance, would compose the
commits in all of these ops and update its pending commits with the
result.
Bunching only contiguous ops for a data store / DDS in a batch preserves
the behavior of processing ops in the sequence they were received.
A couple of behavior changes to note:
1. Op events - An implication of this change is the timing of "op"
events emitted by container runtime and data store runtime will change.
Currently, these layers emit an "op" event immediately after an op is
processed. With this change, an upper layer will only know when a bunch
has been processed by a lower layer. So, it will emit "op" events for
individual ops in the bunch after the entire bunch is processed.
From my understanding, this should be fine because we do not provide any
guarantee that the "op" event will be emitted immediately after an op is
processed. These events will be emitted in order of op processing and
(sometimes) after the op is processed.
Take delta manager / container runtime as an example. Delta manager
sends an op for processing to container runtime and emits the "op"
event. However, container runtime may choose to not process these ops
immediately but save them until an entire batch is received. This change
was made but was reverted due to some concerns not related to the topic
discussed here - https://github.com/microsoft/FluidFramework/pull/21785.
The change here is similar to the above behavior where an upper layer
doesn't know and shouldn't care what lower layers do with ops.
2. `metadata` property on message - With this PR, the metadata property
is removed from a message before it's sent to data stores and DDSes. This
is because we now send one common message (the grouped op) and an array
of contents. Individual messages within a grouped op carry batch begin
and end metadata, but those are only added by the runtime to resemble the
old batch messages. The data store and DDS don't care about them, so
removing them should be fine.
[AB#20123](https://dev.azure.com/fluidframework/235294da-091d-4c29-84fc-cdfc3d90890b/_workitems/edit/20123)
Inquirer has been rewritten and released as `@inquirer/prompts`, so this
PR upgrades the few inquirer uses we have to the new version.
We also use another prompt library in build-cli, `prompts`, and I think
we can likely replace it with the new inquirer, but I left that for
another change.
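For reference, the rewritten API exposes each prompt as a standalone async function; a small sketch follows (the prompts shown are illustrative, not the actual build-cli code):

```typescript
// Minimal sketch of the rewritten inquirer API.
import { confirm, select } from "@inquirer/prompts";

async function promptForRelease(): Promise<void> {
	// Each prompt is now a standalone async function rather than a call to
	// inquirer.prompt() with a questions array.
	const bumpType = await select({
		message: "Select the bump type",
		choices: [{ value: "major" }, { value: "minor" }, { value: "patch" }],
	});

	const updatedChangelogs = await confirm({
		message: "Have the per-package changelogs been updated?",
	});

	console.log(bumpType, updatedChangelogs);
}
```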
## Reviewer guidance
This is part 1 of 3 of the op bunching feature. The PR with the
end-to-end feature is
https://github.com/microsoft/FluidFramework/pull/22686. It has been
broken down for simpler review process.
This part focuses on the changes in the Runtime layer.
Note - This change breaks the snapshot tests because old snapshots with
merge tree had "catchupOps" blobs which contained ops. Now that the
"metadata" property is not sent to DDSes, comparing snapshots fails
because the metadata property is not present in the latest snapshots for
these merge tree instances. A code change in the snapshot normalizer
accounts for this.
## Problem
During op processing, the container runtime sends ops one at a time to
data stores, and data stores send them one at a time to DDSes. If a DDS
has received M contiguous ops as part of a
batch, the DDS is called M times to process them individually. This has
performance implications for some DDSes and they would benefit from
receiving and processing these M ops together.
Take SharedTree for example:
For each op received that contains a sequenced commit, all the pending
commits are processed by the rebaser. So, as the number of ops received
grows, so does the work to process pending commits. The following example
describes this clearly:
Currently if a shared tree client has N pending commits which have yet
to be sequenced, each time that client receives a sequenced commit
authored by another client (an op), it will update each of its pending
commits which takes at least O(N) work.
Instead, if it receives M commits at once, it could do a single update
pass on each pending commit instead of M per pending commit.
It can compose the M commits together into a single change to update
over, so it can potentially go from something like O(N * M) work to
O(N + M) work with batching.
## Solution - op bunching
The solution implemented here is a feature called "op bunching".
With this feature, contiguous ops in a grouped op batch that belong to a
data store / DDS will be bunched and sent to it in an array. The grouped
op is sent as an `ISequencedMessageEnvelope`, and the individual message
`contents` in it are sent as an array along with the
`clientSequenceNumber` and `localOpMetadata`.
The container runtime sends each data store its bunch of contiguous ops,
and the data store in turn sends each DDS its bunch. The DDS can choose
how to process these ops. SharedTree, for instance, would compose the
commits in all of these ops and update its pending commits with the
result.
Bunching only contiguous ops for a data store / DDS in a batch preserves
the behavior of processing ops in the sequence they were received.
A couple of behavior changes to note:
1. Op events - An implication of this change is the timing of "op"
events emitted by container runtime and data store runtime will change.
Currently, these layers emit an "op" event immediately after an op is
processed. With this change, an upper layer will only know when a bunch
has been processed by a lower layer. So, it will emit "op" events for
individual ops in the bunch after the entire bunch is processed.
From my understanding, this should be fine because we do not provide any
guarantee that the "op" event will be emitted immediately after an op is
processed. These events will be emitted in order of op processing and
(sometimes) after the op is processed.
Take delta manager / container runtime as an example. Delta manager
sends an op for processing to container runtime and emits the "op"
event. However, container runtime may choose to not process these ops
immediately but save them until an entire batch is received. This change
was made but was reverted due to some concerns not related to the topic
discussed here - https://github.com/microsoft/FluidFramework/pull/21785.
The change here is similar to the above behavior where an upper layer
doesn't know and shouldn't care what lower layers do with ops.
2. `metadata` property on message - With this PR, the metadata property
is removed from a message before it's sent to data stores and DDSes. This
is because we now send one common message (the grouped op) and an array
of contents. Individual messages within a grouped op carry batch begin
and end metadata, but those are only added by the runtime to resemble the
old batch messages. The data store and DDS don't care about them, so
removing them should be fine.
This also results in the snapshot tests failing, as explained above, and
this PR contains a fix for that.
[AB#20123](https://dev.azure.com/fluidframework/235294da-091d-4c29-84fc-cdfc3d90890b/_workitems/edit/20123)
## Description
Clean up references to markdown-magic-template from lockfiles.
These projects have a `file:` reference to `@fluid-tools/markdown-magic`
in their package.json. The only way I could find to clean up the
lockfile entries corresponding to the dependency tree for the dependency
that we removed in `@fluid-tools/markdown-magic` was to manually delete
that line from the lockfile in these packages and run `pnpm i
--no-frozen-lockfile` after.
## Description
In preparation for adding `attendeeDisconnected` support for Presence,
we'll be using the same support Audience uses to monitor disconnects and
announce them.
This PR keeps the IAudienceEvents "removeMember" and "addMember" events
distinct so that interfaces with similar capabilities may be intersected.
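A small TypeScript sketch (with stand-in interfaces, not the actual Fluid types) of why distinct event names keep the intersection usable:

```typescript
// Illustrative event maps; names roughly mirror the description above.
interface IAudienceEventsLike {
	addMember: (clientId: string) => void;
	removeMember: (clientId: string) => void;
}

interface IPresenceEventsLike {
	attendeeJoined: (attendeeId: string) => void;
	attendeeDisconnected: (attendeeId: string) => void;
}

// Because no event name collides with a differently-typed member, the
// intersection is a usable event map containing all four events.
type CombinedEvents = IAudienceEventsLike & IPresenceEventsLike;

const listeners: CombinedEvents = {
	addMember: (id) => console.log("member added", id),
	removeMember: (id) => console.log("member removed", id),
	attendeeJoined: (id) => console.log("attendee joined", id),
	attendeeDisconnected: (id) => console.log("attendee disconnected", id),
};
```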
---------
Co-authored-by: Joshua Smithrud <54606601+Josmithr@users.noreply.github.com>
## Description
`markdown-magic-template` is not used at all as far as I can tell, and
it brings in a transitive dependency on `lodash.template: 4.5.0` which
is flagged with a CVE that can't be addressed easily because `lodash`
seems to have stopped publishing each of their functions as a separate
package after 4.5.0.
The latest release of jssm includes the fixes in
https://github.com/StoneCypher/jssm/pull/569, so our patch is no longer
needed and has been removed. However, the jssm-viz package has a similar
problem (open PR: https://github.com/StoneCypher/jssm-viz/pull/54) so I
patched it with type changes similar to what was done for jssm
originally.
If or when the jssm-viz PR is accepted or the issue is otherwise fixed,
we can remove the patch.
oclif-test underwent some major changes in its latest release. Tests in
build-tools have been updated with new patterns to account for the
changes. The changes were informed by [the oclif-test migration
guide](https://github.com/oclif/test/blob/main/MIGRATION.md).
Most of the conversion was straightforward except for tests that use
environment variables. For those, I used
[mocked-env](https://www.npmjs.com/package/mocked-env). The tests still
use mocha/chai.
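For reference, the mocked-env pattern looks roughly like this (an illustrative test, not one from build-tools):

```typescript
// Minimal sketch of mocking environment variables in a mocha test.
import { strict as assert } from "node:assert";
import mockedEnv from "mocked-env";

describe("a command that reads env vars", () => {
	let restore: () => void;

	beforeEach(() => {
		// Temporarily override variables; anything set to undefined is removed.
		restore = mockedEnv({ GITHUB_TOKEN: "fake-token", CI: undefined });
	});

	afterEach(() => {
		// Put the original environment back so tests stay isolated.
		restore();
	});

	it("sees the mocked values", () => {
		assert.equal(process.env.GITHUB_TOKEN, "fake-token");
		assert.equal(process.env.CI, undefined);
	});
});
```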
Upgrades @octokit/core to v5, which should have no effect on our uses.
The breaking changes are mostly related to dropping support for older
Node.js versions: https://github.com/octokit/core.js/releases/tag/v5.0.0
I could have gone to v6 but there are some other deps in the tree with
peer deps on v5, and the main change in v6 is that it went ESM-only.