## Description
Add changeset for removing the inbound and outbound queues on
IDeltaManager. Refer to this PR for extra details:
https://github.com/microsoft/FluidFramework/pull/22282
---------
Co-authored-by: Jatin Garg <jatingarg@Jatins-MacBook-Pro-2.local>
Co-authored-by: Tyler Butler <tyler@tylerbutler.com>
## Description
[AB#7202](https://dev.azure.com/fluidframework/internal/_workitems/edit/7202)
1. Remove deprecated inbound and outbound queues on `IDeltaManager`.
2. Move them to `IDeltaManagerFull` so that internal Fluid layers can
still use them, but apps cannot.
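The split described above can be sketched with simplified stand-ins. These are illustrative assumptions only, not the real Fluid interfaces: the app-facing surface drops the queues, while an internal-facing extension keeps them for Fluid's own layers.

```typescript
// Simplified stand-ins for the interface split; not the actual Fluid types.
interface QueueLike<T> {
	push(item: T): void;
	readonly length: number;
}

// App-facing surface: no queues exposed.
interface IDeltaManagerLike {
	readonly lastSequenceNumber: number;
}

// Internal-facing surface adds the queues back for Fluid's own layers.
interface IDeltaManagerFullLike extends IDeltaManagerLike {
	readonly inbound: QueueLike<string>;
	readonly outbound: QueueLike<string>;
}

function makeQueue<T>(): QueueLike<T> {
	const items: T[] = [];
	return {
		push: (item: T) => { items.push(item); },
		get length() { return items.length; },
	};
}

const deltaManager: IDeltaManagerFullLike = {
	lastSequenceNumber: 0,
	inbound: makeQueue<string>(),
	outbound: makeQueue<string>(),
};

// Apps are handed only the narrow surface; internal layers keep full access.
const appView: IDeltaManagerLike = deltaManager;
deltaManager.inbound.push("op");
console.log(appView.lastSequenceNumber); // 0
console.log(deltaManager.inbound.length); // 1
```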
---------
Co-authored-by: Jatin Garg <jatingarg@Jatins-MacBook-Pro-2.local>
Co-authored-by: Joshua Smithrud <54606601+Josmithr@users.noreply.github.com>
Co-authored-by: Tyler Butler <tyler@tylerbutler.com>
As part of ongoing improvements, several exposed internals that are
unnecessary for any supported scenarios and could lead to errors if used
have been removed. Since direct usage would likely result in errors, it
is not expected that these changes will impact any Fluid Framework
consumers.
Removed types:
- IMergeTreeTextHelper
- MergeNode
- ObliterateInfo
- PropertiesManager
- PropertiesRollback
- SegmentGroup
- SegmentGroupCollection
In addition to removing the above types, their exposures have also been
removed from interfaces and their implementations: `ISegment`,
`ReferencePosition`, and `ISerializableInterval`.
Removed functions:
- addProperties
- ack
Removed properties:
- propertyManager
- segmentGroups
The initial deprecations of the now changed or removed types were
announced in Fluid Framework v2.2.0:
[Fluid Framework
v2.2.0](https://github.com/microsoft/FluidFramework/blob/main/RELEASE_NOTES/2.2.0.md)
---------
Co-authored-by: Tyler Butler <tylerbu@microsoft.com>
Since we'll be releasing a lot of legacy API changes and removals in
2.10, I think a dedicated section in the release notes will be useful.
Incidentally this is why the sections were designed to be configurable.
## Description
Fixes
[AB#19784](https://dev.azure.com/fluidframework/235294da-091d-4c29-84fc-cdfc3d90890b/_workitems/edit/19784)
ContainerRuntime's 'batchBegin'/'batchEnd' events: Removing the
`contents` property on event arg `op`
The 'batchBegin'/'batchEnd' events on ContainerRuntime indicate when
processing of a batch begins/finishes.
The `contents` property on that arg is not useful or relevant when
reasoning over incoming changes at the batch level, so it has been
removed from the `op` event arg.
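A minimal sketch of a listener written against the event arg after this change. `BatchEventOp` is a simplified assumption here, not the actual Fluid type; the point is that batch-level code relies on op metadata, not `contents`.

```typescript
// Hypothetical stand-in for the batchBegin/batchEnd event arg `op` after
// the removal: `contents` is intentionally absent.
interface BatchEventOp {
	sequenceNumber: number;
}

const seen: number[] = [];

function onBatchBegin(op: BatchEventOp): void {
	// Reason over batch boundaries via op metadata (e.g. sequence numbers),
	// not over the contents of the first op.
	seen.push(op.sequenceNumber);
}

onBatchBegin({ sequenceNumber: 42 });
console.log(seen); // [42]
```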
## Breaking Changes
Yes, this is a breaking change. See the changeset.
---------
Co-authored-by: Kian Thompson <102998837+kian-thompson@users.noreply.github.com>
The `Client` class in the merge-tree package has been removed.
Additionally, types that directly or indirectly expose the merge-tree
`Client` class have also been removed.
The removed types were not meant to be used directly, and direct usage
was not supported:
- AttributionPolicy
- IClientEvents
- IMergeTreeAttributionOptions
- SharedSegmentSequence
- SharedStringClass
Some classes that referenced the `Client` class have been transitioned
to interfaces. Direct instantiation of these classes was not supported
or necessary for any supported scenario, so the change to an interface
should not impact usage. This applies to the following types:
- SequenceInterval
- SequenceEvent
- SequenceDeltaEvent
- SequenceMaintenanceEvent
The initial deprecations of the now changed or removed types were
announced in Fluid Framework v2.4.0:
[Several MergeTree Client Legacy APIs are now
deprecated](https://github.com/microsoft/FluidFramework/blob/main/RELEASE_NOTES/2.4.0.md#several-mergetree-client-legacy-apis-are-now-deprecated-22629)
---------
Co-authored-by: Tyler Butler <tylerbu@microsoft.com>
adds a new `changed` event to `TreeBranchEvents` that fires for both
local and remote changes
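The semantics can be sketched with a toy emitter. The event name mirrors the description above ("changed"), but the emitter itself is a hypothetical stand-in, not the real `TreeBranchEvents`.

```typescript
// Toy event emitter illustrating a "changed" event that fires for both
// local and remote edits.
type Listener = () => void;

class ToyBranchEvents {
	private readonly listeners = new Map<string, Listener[]>();

	public on(event: string, fn: Listener): void {
		const list = this.listeners.get(event) ?? [];
		list.push(fn);
		this.listeners.set(event, list);
	}

	public emit(event: string): void {
		for (const fn of this.listeners.get(event) ?? []) fn();
	}

	// Both local edits and incoming remote edits funnel into "changed".
	public applyLocalEdit(): void { this.emit("changed"); }
	public applyRemoteEdit(): void { this.emit("changed"); }
}

const events = new ToyBranchEvents();
let changedCount = 0;
events.on("changed", () => { changedCount += 1; });

events.applyLocalEdit();
events.applyRemoteEdit();
console.log(changedCount); // 2
```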
---------
Co-authored-by: Noah Encke <78610362+noencke@users.noreply.github.com>
Generates the release notes for 2.5. Most changesets have been updated
with minor wording and formatting changes. Command used to generate the
release notes:
```shell
pnpm flub generate releaseNotes -g client -t minor --outFile RELEASE_NOTES/2.5.0.md
```
---------
Co-authored-by: jzaffiro <110866475+jzaffiro@users.noreply.github.com>
1. `ISessionClient` method names updated for consistency to
`getConnectionId()` and `getConnectionStatus()`.
2. Implementation of `ISessionClient` moved to a full class object.
3. Changeset provided for Presence changes since 2.4.
4. Updated `id` to `ID` in comments (public and most internal).
No behavior is changed.
[AB#21446](https://dev.azure.com/fluidframework/235294da-091d-4c29-84fc-cdfc3d90890b/_workitems/edit/21446)
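The renamed surface from item 1 and the class move from item 2 can be sketched as follows. The method bodies and the `SessionClient` class below are illustrative stand-ins, not the real implementation.

```typescript
// Hypothetical sketch of the renamed ISessionClient surface.
type ConnectionStatus = "Connected" | "Disconnected";

interface ISessionClientLike {
	getConnectionId(): string;
	getConnectionStatus(): ConnectionStatus;
}

// Implementation moved to a full class object (item 2 above).
class SessionClient implements ISessionClientLike {
	public constructor(
		private readonly connectionId: string,
		private readonly status: ConnectionStatus,
	) {}

	public getConnectionId(): string {
		return this.connectionId;
	}

	public getConnectionStatus(): ConnectionStatus {
		return this.status;
	}
}

const client: ISessionClientLike = new SessionClient("c1", "Connected");
console.log(client.getConnectionId()); // "c1"
```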
---------
Co-authored-by: Willie Habimana <whabimana@microsoft.com>
Co-authored-by: Tyler Butler <tylerbu@microsoft.com>
## Description
Handle schema contravariantly when generating input types.
## Breaking Changes
No runtime changes, but typing tweaks could break users on non-exact
schema, and the enum schema APIs have been tweaked to avoid relying on
non-exact schema, see changeset.
## Description
While neither of these compile errors are in the `@public` API surface,
they could impact customers using the public API due to how `libCheck`
works and how Fluid Framework handles its API exports.
See changeset for details.
I have filed
https://dev.azure.com/fluidframework/internal/_workitems/edit/21856 to
internally track that this was not detected by the CI of FF or FF
examples.
## Reviewer guidance
This is part 3 of 3 of the op bunching feature. This part focuses on the
changes in the DDS. [Part
1](https://github.com/microsoft/FluidFramework/pull/22839) and [part
2](https://github.com/microsoft/FluidFramework/pull/22840).
## Problem
During op processing, the container runtime sends ops one at a time to
data stores, which in turn send them to DDSes. If a DDS has received M
contiguous ops as part of a
batch, the DDS is called M times to process them individually. This has
performance implications for some DDSes and they would benefit from
receiving and processing these M ops together.
Take shared tree for example:
For each op received which has a sequenced commit, all the pending
commits are processed by the rebaser. So, as the number of ops received
grows, so does the processing of pending commits. The following example
describes this clearly:
Currently if a shared tree client has N pending commits which have yet
to be sequenced, each time that client receives a sequenced commit
authored by another client (an op), it will update each of its pending
commits which takes at least O(N) work.
Instead, if it receives M commits at once, it could do a single update
pass on each pending commit instead of M per pending commit.
It can compose the M commits together into a single change to update
over, so it can potentially go from something like O(N * M) work to
O(N + M) work with batching.
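The complexity argument above can be made concrete with a toy cost model, counting "rebase units of work" when M remote commits arrive one at a time versus bunched. The cost model is an assumption for illustration only.

```typescript
// Toy cost model for the O(N * M) vs O(N + M) comparison above.

// Each of the M remote commits triggers an update of all N pending commits.
function costOneAtATime(pendingN: number, remoteM: number): number {
	return remoteM * pendingN;
}

// Compose the M commits once, then do a single pass over the N pending ones.
function costBunched(pendingN: number, remoteM: number): number {
	return remoteM + pendingN;
}

const n = 100; // pending local commits
const m = 50; // remote commits received

console.log(costOneAtATime(n, m)); // 5000
console.log(costBunched(n, m)); // 150
```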
## Solution - op bunching
The solution implemented here is a feature called "op bunching".
With this feature, contiguous ops in a grouped op batch that belong to a
data store / DDS will be bunched and sent to it in an array: the grouped
op is sent as an `ISequencedMessageEnvelope`, and the individual message
`contents` in it are sent as an array along with the
`clientSequenceNumber` and `localOpMetadata`.
The container runtime will send a bunch of contiguous ops to each data
store. The data store will send a bunch of contiguous ops to each DDS.
The DDS can choose how to process these ops. Shared tree, for instance,
would compose the commits in all these ops and update pending commits
with it.
Bunching only contiguous ops for a data store / DDS in a batch preserves
the behavior of processing ops in the sequence they were received.
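The delivery shape described above can be sketched as follows. The field names mirror the description (`contents`, `clientSequenceNumber`, `localOpMetadata`), but the types are simplified assumptions, not the real Fluid interfaces.

```typescript
// Simplified stand-ins for the bunched delivery shape; not the real types.
interface SequencedEnvelopeLike {
	sequenceNumber: number;
	clientId: string;
}

interface BunchedMessage {
	contents: unknown;
	clientSequenceNumber: number;
	localOpMetadata: unknown;
}

// A DDS-side handler receives the shared envelope once, plus an array of
// individual message payloads, instead of M separate process() calls.
// A DDS like SharedTree could compose all commits here in a single pass.
function processBunch(
	envelope: SequencedEnvelopeLike,
	messages: readonly BunchedMessage[],
): string {
	return `seq ${envelope.sequenceNumber}: ${messages.length} messages`;
}

const result = processBunch(
	{ sequenceNumber: 7, clientId: "remote" },
	[
		{ contents: "commit1", clientSequenceNumber: 1, localOpMetadata: undefined },
		{ contents: "commit2", clientSequenceNumber: 2, localOpMetadata: undefined },
	],
);
console.log(result); // "seq 7: 2 messages"
```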
A couple of behavior changes to note:
1. Op events - An implication of this change is the timing of "op"
events emitted by container runtime and data store runtime will change.
Currently, these layers emit an "op" event immediately after an op is
processed. With this change, an upper layer will only know when a bunch
has been processed by a lower layer. So, it will emit "op" events for
individual ops in the bunch after the entire bunch is processed.
From my understanding, this should be fine because we do not provide any
guarantee that the "op" event will be emitted immediately after an op is
processed. These events will be emitted in order of op processing and
(sometimes) after the op is processed.
Take delta manager / container runtime as an example. Delta manager
sends an op for processing to container runtime and emits the "op"
event. However, container runtime may choose to not process these ops
immediately but save them until an entire batch is received. This change
was made but was reverted due to some concerns not related to the topic
discussed here - https://github.com/microsoft/FluidFramework/pull/21785.
The change here is similar to the above behavior, where an upper layer
doesn't know and shouldn't care what lower layers do with ops.
2. `metadata` property on message - With this PR, the metadata property
is removed from a message before it's sent to data stores and DDS. This
is because we now send one common message (the grouped op) and an array
of contents. Individual messages within a grouped op have batch-begin
and batch-end metadata, but these are just added by the runtime to mimic
old batch messages. The data store and DDS don't care about them, so
removing them should be fine.
[AB#20123](https://dev.azure.com/fluidframework/235294da-091d-4c29-84fc-cdfc3d90890b/_workitems/edit/20123)
## Reviewer guidance
This is part 2 of 3 of the op bunching feature. This part focuses on the
changes in the data store. Part 1 - #22839
## Problem
During op processing, the container runtime sends ops one at a time to
data stores, which in turn send them to DDSes. If a DDS has received M
contiguous ops as part of a
batch, the DDS is called M times to process them individually. This has
performance implications for some DDSes and they would benefit from
receiving and processing these M ops together.
Take shared tree for example:
For each op received which has a sequenced commit, all the pending
commits are processed by the rebaser. So, as the number of ops received
grows, so does the processing of pending commits. The following example
describes this clearly:
Currently if a shared tree client has N pending commits which have yet
to be sequenced, each time that client receives a sequenced commit
authored by another client (an op), it will update each of its pending
commits which takes at least O(N) work.
Instead, if it receives M commits at once, it could do a single update
pass on each pending commit instead of M per pending commit.
It can compose the M commits together into a single change to update
over, so it can potentially go from something like O(N * M) work to
O(N + M) work with batching.
## Solution - op bunching
The solution implemented here is a feature called "op bunching".
With this feature, contiguous ops in a grouped op batch that belong to a
data store / DDS will be bunched and sent to it in an array: the grouped
op is sent as an `ISequencedMessageEnvelope`, and the individual message
`contents` in it are sent as an array along with the
`clientSequenceNumber` and `localOpMetadata`.
The container runtime will send a bunch of contiguous ops to each data
store. The data store will send a bunch of contiguous ops to each DDS.
The DDS can choose how to process these ops. Shared tree, for instance,
would compose the commits in all these ops and update pending commits
with it.
Bunching only contiguous ops for a data store / DDS in a batch preserves
the behavior of processing ops in the sequence they were received.
A couple of behavior changes to note:
1. Op events - An implication of this change is the timing of "op"
events emitted by container runtime and data store runtime will change.
Currently, these layers emit an "op" event immediately after an op is
processed. With this change, an upper layer will only know when a bunch
has been processed by a lower layer. So, it will emit "op" events for
individual ops in the bunch after the entire bunch is processed.
From my understanding, this should be fine because we do not provide any
guarantee that the "op" event will be emitted immediately after an op is
processed. These events will be emitted in order of op processing and
(sometimes) after the op is processed.
Take delta manager / container runtime as an example. Delta manager
sends an op for processing to container runtime and emits the "op"
event. However, container runtime may choose to not process these ops
immediately but save them until an entire batch is received. This change
was made but was reverted due to some concerns not related to the topic
discussed here - https://github.com/microsoft/FluidFramework/pull/21785.
The change here is similar to the above behavior, where an upper layer
doesn't know and shouldn't care what lower layers do with ops.
2. `metadata` property on message - With this PR, the metadata property
is removed from a message before it's sent to data stores and DDS. This
is because we now send one common message (the grouped op) and an array
of contents. Individual messages within a grouped op have batch-begin
and batch-end metadata, but these are just added by the runtime to mimic
old batch messages. The data store and DDS don't care about them, so
removing them should be fine.
[AB#20123](https://dev.azure.com/fluidframework/235294da-091d-4c29-84fc-cdfc3d90890b/_workitems/edit/20123)
- use `details` for all elements
- SignalLatency: shorten names now that data is packed into details
- SignalLost/SignalOutOfOrder: rename `trackingSequenceNumber` to
`expectedSequenceNumber`
- SignalOutOfOrder: avoid logging `contents.type` when there is a
chance it could be customer content.
- update tests:
- `logger.assertMatch` to use a true `inlineDetailsProp` argument
- explicit `reconnectCount` for SignalLatency
- explicit `expectedSequenceNumber` and
`clientBroadcastSignalSequenceNumber` for SignalLost
Generated release notes for the 2.4 release. The notes were generated using the following command:
```shell
flub generate releaseNotes -g client -t minor --outFile RELEASE_NOTES/2.4.0.md
```
A long time ago (5acfef448f) we added
support in ContainerRuntime to parse op contents if it's a string. The
intention was to stop parsing in DeltaManager once that change
saturated. This is that long overdue follow-up.
Taking this opportunity to make a few things hopefully clearer in
ContainerRuntime too:
* Highlighting where/how the serialization/deserialization of `contents`
happens
* Highlighting the different treatment/expectations for runtime vs.
non-runtime messages during the `process` flow
## Deprecations:
Deprecating use of `contents` on the event arg `op` for
`batchBegin`/`batchEnd` events; consumers still reading it are in for a
surprise. I added a changeset for this case.
## Description
This cleans up a section of the object node tests which used schema
types in a complicated generic way that's fragile and hard to work with.
When rewriting these tests, I focused the new ones on testing aspects
that actually have special logic and are likely to break instead of just
different value types. Thus the tests now cover the odd normalization
cases of numbers.
These tests found a couple of issues:
- unhydrated node handling of null was incorrect (see changeset)
- Some type errors were thrown for invalid user input. To help keep it
easy to tell which errors are our bugs and which are app bugs, I've made
these Fluid usage errors.
- A needless check for NaN was included where the check for
`Number.isFinite` would handle it correctly (`Number.isFinite` considers
NaN to not be finite; the comment already calls out NaN and it has test
coverage, so this seems like a safe change).
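The redundancy of the separate NaN check can be demonstrated directly: `Number.isFinite` already rejects NaN (as well as the infinities), so a NaN guard in front of it adds nothing.

```typescript
// Number.isFinite rejects NaN and the infinities on its own.
const values = [1.5, NaN, Infinity, -Infinity, 0];
const finiteOnly = values.filter((v) => Number.isFinite(v));

console.log(Number.isFinite(NaN)); // false
console.log(finiteOnly); // [1.5, 0]
```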
## Description
Optimizations based on a profile of BubbleBench.
This resulted in about 5% more bubbles (80 vs 76).
Given that this application's read costs scale with the square of the
bubble count (for the collision logic), that it contains writes, and
that most reads are of leaf values which are not impacted by this, this
seems like a pretty good win. Some real apps could probably show
significantly larger gains if they are less leaf-heavy or less
write-heavy.
## Description
Currently, unhydrated nodes can be edited but they do not emit any
change events. This PR fixes that by properly emitting events. Changes
include:
* Removing the `anchor` parameter from the relevant events on
`AnchorEvents`. This allows those events to be implemented by things
that don't have anchors (like `UnhydratedFlexTreeNode`) and things that
only sometimes have anchors (like `TreeNodeKernel`). Nobody uses that
parameter currently anyway because it is redundant (it's the same as the
object that the event is being registered on).
* Emitting a change event from `UnhydratedFlexTreeNode` when one of its
fields is edited. This is listened to by the `TreeNodeKernel`, which can
then emit its own corresponding event.
* Adding distinct types for the "unhydrated" and "hydrated" versions of
the state in `TreeNodeKernel`. This makes it more obvious what the state
transition is, and makes the type checking and safety more explicit.
* Adding a `lazy` function helper to better encapsulate the lazy-getting
of the events in TreeNodeKernel.
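A minimal sketch of what a `lazy` helper like the one described could look like; this is a guess at the shape, not the actual implementation.

```typescript
// Caching lazy-initializer: the factory runs only on first access, and the
// result is reused thereafter.
function lazy<T>(factory: () => T): () => T {
	let value: T | undefined;
	let computed = false;
	return () => {
		if (!computed) {
			value = factory();
			computed = true;
		}
		return value as T;
	};
}

let factoryCalls = 0;
const getEvents = lazy(() => {
	factoryCalls += 1;
	return { name: "events" };
});

getEvents();
getEvents();
console.log(factoryCalls); // 1
```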
In an effort to reduce exposure of the Client class in the merge-tree
package, this change additionally deprecates a number of types which
either directly or indirectly expose the merge-tree Client class.
Most of these types are not meant to be used directly, and direct use is
not supported:
- AttributionPolicy
- IClientEvents
- IMergeTreeAttributionOptions
- SharedSegmentSequence
- SharedStringClass
Some of the deprecations are for class constructors and in those cases
we plan to replace the class with an interface which has an equivalent
API. Direct instantiation of these classes is not currently supported or
necessary for any supported scenario, so the change to an interface
should not impact usage:
- SequenceInterval
- SequenceEvent
- SequenceDeltaEvent
- SequenceMaintenanceEvent
---------
Co-authored-by: jzaffiro <110866475+jzaffiro@users.noreply.github.com>
Co-authored-by: Tyler Butler <tyler@tylerbutler.com>
## Description
This corrects some behavior in the (currently alpha) branching APIs.
Currently, merging one or more commits into a target branch from a
source branch does not generate revertibles on the target branch. This
PR updates it so that each merge commit fires a "commitApplied" event
and generates the proper revertible.
Follow-up to PR #22538. These properties were not intended to be
mutable. While this change is technically breaking, the previous PR was
made after the `2.3` release was cut, so this change is safe to make
prior to `2.4`.
Also adds a changeset for the new functionality, which was also missed
in the previous PR.
## Description
Generate ChangeLogs for client 2.3.
I had to manually remove the package references from the changeset from
the propertyDDS package removal to avoid
https://github.com/changesets/changesets/issues/1403 before generating
these.
Generated using:
```
pnpm flub generate changelog -g client
```
Release notes will be generated by going to the commit before this
merged, and reprocessing the changesets.
We are concerned that holding batch messages in ContainerRuntime even
while DeltaManager advances its tracked sequence numbers through the
batch could have unintended consequences. So this PR restores the old
behavior of processing each message in a batch one-by-one, rather than
holding until the whole batch arrives.
Note that there's no change in behavior here for Grouped Batches.
### How the change works
PR #21785 switched the RemoteMessageProcessor from returning ungrouped
batch ops as they arrived, to holding them and finally returning the
whole batch once the last arrived. The downstream code was also updated
to take whole batches, whereas before it would take individual messages
and use the batch metadata to detect batch start/end.
Too many other changes were made after that PR to straight revert it.
Logic was added throughout CR and PSM that looks at info about that
batch which is found on the first message in the batch. So we can
reverse the change and process one-at-a-time, but we need a way to carry
around that "batch start info" with the first message in the batch.
So we are now modeling the result that RMP yields as one of three cases:
- A full batch of messages (could be from a single-message batch or a
Grouped Batch)
- The first message of a multi-message batch
- The next message in a multi-message batch
The first two cases include the "batch start info" needed for the recent
Offline work. The third case just indicates whether it's the last
message or not.
#22501 added some of the necessary structure, introducing the type for
"batch start info" and updating the downstream code to use that instead
of reading it off the old "Inbound Batch" type. This PR now adds those
other two cases to the RMP return type and handles processing them
throughout CR and PSM.
This is a workaround for a bug in the release notes generator - it
excludes changesets that apply to no packages.
The extra metadata will need to be deleted before generating changelogs.
If this is not done, `generate changelog` will fail with an error like:
```
Error: Command failed with exit code 1: pnpm exec changeset version
🦋 error TypeError: Cannot destructure property 'packageJson' of 'undefined' as it is undefined.
🦋 error at Object.shouldSkipPackage
(/home/tylerbu/code/FluidFramework/node_modules/.pnpm/@changesets+should-skip-package@0.1.1/node_modules/@changesets/should-skip-package/dist/changesets-should-skip-package.cjs.js:6:3)
🦋 error at getRelevantChangesets
(/home/tylerbu/code/FluidFramework/node_modules/.pnpm/@changesets+assemble-release-plan@6.0.4/node_modules/@changesets/assemble-release-plan/dist/changesets-assemble-release-plan.cjs.js:608:29)
🦋 error at Object.assembleReleasePlan [as default]
(/home/tylerbu/code/FluidFramework/node_modules/.pnpm/@changesets+assemble-release-plan@6.0.4/node_modules/@changesets/assemble-release-plan/dist/changesets-assemble-release-plan.cjs.js:536:30)
🦋 error at version (/home/tylerbu/code/FluidFramework/node_modules/.pnpm/@changesets+cli@2.27.8/node_modules/@changesets/cli/dist/changesets-cli.cjs.js:1281:60)
🦋 error at async run (/home/tylerbu/code/FluidFramework/node_modules/.pnpm/@changesets+cli@2.27.8/node_modules/@changesets/cli/dist/changesets-cli.cjs.js:1463:11)
```
## Description
Move the changeset "Make SharedTree usable with legacy APIs" into the
"tree" section.
It is more of a feature than a bug fix (the fact that this feature was
missing was more accidental than planned, but it still seems more tree
than bug).
## Description
#22272 intended to expose a `SharedTree` constant usable with the legacy
API (registry-based channel factories, data objects, etc). However,
`@fluidframework/tree` had no `@fluidframework/tree/legacy` nor a
`@fluidframework/tree/alpha` export, so the exposed API was not actually
usable. This rectifies that and makes `SharedTree` usable with the rest
of the legacy API (e.g. with `@fluidframework/aqueduct`).
---------
Co-authored-by: Abram Sanderson <absander@microsoft.com>