* Subarray checks that indices are distinct when debugging
* Renamed ConcatOpTests to StringConcatOpTests
* Crowdsourcing explains why accuracy and precision are NaN.
* Added build instructions for Visual Studio Code.
- Fixed automaton deserialization ignoring LogValueOverride
- Fixed SequenceDistribution.EnumerateSupport and TryEnumerateSupport having different side effects
- Added TryDeterminize, SetLogValueOverride, and ProjectOnTransducer methods to SequenceDistribution
- Added a parameterless overload of Automaton.TryDeterminize that returns the output of TryDeterminize(out TThis) and discards the information about whether that output is deterministic
* Immutable distribution interfaces
* DiscreteChar made immutable
* Automata made constant
* Automaton.GetLogValue optimized for cases of deterministic and epsilon-free automata
* Fixed Automaton.[Try]EnumerateSupport so that it won't produce duplicates for non-determinizable automata
* Introduced IWeightFunction, an interface for abstract weight functions used by SequenceDistribution
* Multi-representable weight function for sequence distributions that automatically switches between point mass, dictionary, and automaton representations as appropriate
* Early stops for automaton support enumeration
* Improved automata graphviz format
* Language writer correctly processes nested generics
* Incremented version to 0.4
* Subarray and GetItems factors and operators take IReadOnlyList instead of IList.
* IMatchboxRecommenderMapping uses IReadOnlyList instead of IList.
* Moved Subarray and GetItems factors from Factor class to Collection class.
* Moved variable factors from Factor class to Clone class.
* Conversion.IsAssignableFrom handles covariance.
* Util.GetElementType and IsIList include IReadOnlyList.
* Code cleanup
* Refactored MessageTransform.ConvertMethodInvoke
* Removed Collection.Sort
# problem
The Azure DevOps VSTest runner does not handle the Compiler Options test well because the test takes a very long time to run.
# solution
Run the different options in parallel, each in a separate process, so that the test finishes reliably and in a reasonable time.
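A minimal sketch of the approach; the filter name, the OPTION environment variable, and the runner invocation are illustrative assumptions, not the actual test code:
```
using System;
using System.Diagnostics;
using System.Threading.Tasks;

static class ParallelOptionRunner
{
    // Runs one compiler option in its own child process; a crash or hang
    // in the child cannot take down the whole test run.
    static Task RunOptionAsync(string option)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = "dotnet",
            Arguments = "test --filter FullyQualifiedName~CompilerOptionsTest",
            UseShellExecute = false,
        };
        startInfo.Environment["COMPILER_OPTION"] = option; // read by the test at startup
        var process = Process.Start(startInfo);
        return Task.Run(() => process.WaitForExit());
    }

    public static Task RunAllOptionsAsync(string[] options) =>
        Task.WhenAll(Array.ConvertAll(options, RunOptionAsync));
}
```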
InferNet.Infer is generic.
ModelBuilder uses the correct overload of InferNet.Infer.
CodeBuilder.Method checks for correct number of arguments.
FileArray implements IReadOnlyList.
* Added WordStrings and StringFormatTests.
* FactorManager does not allow point mass conversion of the return value argument of EP evidence methods (previously handled by MessageTransform).
* Code cleanup. Renamed IdentityComparer to ReferenceEqualityComparer.
The previous version was correct but had pathological (exponential) runtime for some forms of automata
with multiple branches with epsilon transitions. The code was also unnecessarily complex, because it tried
to compute unreachable states in the presence of loops.
Specific changes in this PR:
- `EnumerateSupport` implementation was moved into its own file - `Automaton.EnumerateSupport`
- The `ComputeEndStateReachability` method has been resurrected. It is invoked only if loops are detected;
  in all other cases it is trivial to check for end-state reachability during normal traversal
- The traversal loop has been split into steps, some of which were moved into their own local methods.
- A fast path for the non-branchy part of the automaton was implemented.
Removed Cancels attribute from PowerPlateOp.EnterAverageConditional
Sum_Expanded handles an empty array.
Dirichlet.GetMean handles zero pseudo-counts.
Moved TrueSkill tests into TrueSkillTests.
* Motif Finder uses SequenceCount = 70
* DiscreteFromDirichletOp.ProbsAverageConditional handles PiecewiseVector
* GammaPower.FromLogMeanMinusMeanLog handles infinite mean and negative power
* Tests use Assert.Throws instead of try/catch
* Test output is less verbose
* Added SimplestBackwardChainTest3
* Fixed IterativeProcessTransform when variable has QueryTypes.MarginalDividedByPrior and ConstrainEqualRandom(variable, observed)
* Discrete allows Dimension=0
* Fix long overflow in OperatorTests.Longs()
* Fixed printing big float arrays, extended series for ((exp(x) - 1) / x - 1) - 0.5
* GammaPower.GetLogProb uses ulp-based threshold
* Moved a constant for maximum terms in NormalCdfMomentRatio outside of method body
* Separate methods for Previous/NextDoubleWithPositiveDifference
* More magic constants replaced with ulp-based ones
* Fixed abs value comparison in GammaPower.GetLogProb
* Ulp-based constants in IsBetween.XAverageConditional
* Moved the definitions of Ulp1-dependent constants below that of Ulp1
* Replaced <= in GammaPower with <, as it used to be
Co-authored-by: Dmitry Kats <ratkillerx@hotmail.com>
IncrementTransform handles GetJaggedItemsOp and GetDeepJaggedItemsOp.
IndexingTransform gives a warning for unimplemented cases instead of throwing.
Improved code doc for Damp functions.
Changed "#if HAS_BINARY_FORMATTER" to "#if NETFULL"
Removed Assert.Timeout from performance tests
* RoslynDeclarationProvider uses all available source files to build the requested type declaration.
* Conversion.TryFindConversion looks for casts defined on the toType as well.
Co-authored-by: Dmitry Kats <ratkillerx@hotmail.com>
Removed c_digamma_small case from MMath.Digamma
Added tests for MMath.GammaLnSeries and XMinusLog1Plus
GenerateSeries also generates error bounds
Added CheckMathLibraries
- Ensured argument consistency in NormalCdfIntegralTest
- Fixed some test values
- Exp-Sinh quadrature in LogisticGaussian
- Arithmetic-precision-based constants in logistic gaussian
- Named const for Ulp(1)
- Generating series for gamma(x) - 1/x
- Fixed integer overflow in BGRat
- GenerateSeries can print arrays of bigfloats.
ComputeSpecialFunctionsTestValues.py computes more accurate test values, uses pure mpmath and more reliable quadrature, and computes values for NormalCdfRatioLn. Removed hard-coded cases.
MMath.Log1PlusExp uses a threshold based on machine epsilon.
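A sketch of the idea; the exact threshold constant in MMath is not reproduced here:
```
using System;

static class LogSketch
{
    // For x above the threshold, exp(-x) < 2^-53, so log(1 + exp(x))
    // rounds to x in double precision and the expensive path can be skipped.
    public static double Log1PlusExp(double x)
    {
        const double Threshold = 36.7368005696771; // 53 * log(2), illustrative cutoff
        return x > Threshold ? x : Math.Log(1 + Math.Exp(x));
    }
}
```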
Added PointMassEstimator.
PointMass overrides Equals.
Observed variables can have query types.
Removed the obsolete Output attribute.
Fixed build order.
Internal compiler changes:
* Added MarginalAnalysisTransform. VariableTransform, ChannelTransform, Channel2Transform no longer set marginal prototypes.
* VariableInformation can be attached to any declaration, not just variables.
* DeriveArrayVariable does not propagate MarginalPrototype attributes.
* VariableTransform and ChannelTransform create marginal channels for constants and parameters.
* ModelBuilder only attaches sizes to objects assignable from arrays.
* Added ObservedVariableMessages CompilerAttribute.
* Channel2Transform attaches DescriptionAttributes
- Moved arguments and expected result values for special function tests from inlined code to csv files.
- Added a python script to compute expected result values for tested special functions in high precision.
- Improved accuracy of NormalCdfLn for `x < -8`. The truncated series was too short; the test used to pass because the expected value itself was incorrect.
* Update to .NET Core 3.1
This update was a fairly trivial change of project properties, with only one caveat:
```
error CS0104: 'Range' is an ambiguous reference between 'Microsoft.ML.Probabilistic.Models.Range' and 'System.Range'
```
which required sprinkling the following line of code everywhere:
```
using Range = Microsoft.ML.Probabilistic.Models.Range;
```
* Update build definition to use more modern images
- Windows 2019 for .NET Core 3.1
- Mac OS 10.14 (Specifically https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops)
Co-authored-by: Tom Minka <8955276+tminka@users.noreply.github.com>
It really is not required: at every point it is known which automaton we are operating on.
Also, where only the state index was needed, we now store the int index instead of the fat State object,
which contains extra information.
The contract has been changed: `TryEnumerateSupport()` no longer throws if it encounters a non-enumerable automaton.
- The implementation of `TryEnumerateSupport` was changed a lot:
- It avoids recursion (and "stacked IEnumerables")
- An optimization has been added: traversing states with a single point-mass forward transition
(90+% of real-world cases) is cheaper, because it avoids some extra allocations
Also, the "is enumerable" status is cached. It is set proactively at automaton construction
in 2 common cases which are cheap to detect (see the sketch below):
- an automaton with self-loops can not be enumerated
- an automaton with only forward transitions (and thus no loops) can always be enumerated
In all other cases the flag is calculated lazily on the first enumeration attempt.
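A sketch of the two cheap checks, assuming a flat transition-list view of the automaton; names are illustrative, not the Infer.NET API:
```
using System.Collections.Generic;

enum EnumerableStatus { Unknown, Enumerable, NotEnumerable }

static class SupportChecks
{
    // A self-loop means the support is infinite; forward-only transitions
    // mean no loops are possible, so the support is finite.
    public static EnumerableStatus ClassifyAtConstruction(
        IReadOnlyList<(int Source, int Target)> transitions)
    {
        bool onlyForward = true;
        foreach (var (source, target) in transitions)
        {
            if (source == target) return EnumerableStatus.NotEnumerable; // self-loop
            if (target <= source) onlyForward = false;                   // back edge: loops possible
        }
        return onlyForward ? EnumerableStatus.Enumerable : EnumerableStatus.Unknown;
    }
}
```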
`System.Collections.Immutable` has `ImmutableArray`, which serves the same purpose as
`ReadOnlyArray` but has a different API. That type is available (without extra dependencies)
only on netcore, so it can't be used in Infer.NET, which has to support netframework.
Until netframework support can be dropped, a subset of `ImmutableArray` is reimplemented
in the Infer.NET codebase, as sketched below.
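A minimal sketch of such a reimplementation; the actual `ReadOnlyArray` API may differ:
```
using System.Collections;
using System.Collections.Generic;

// An immutable view over an array; construction is the only way to set content.
public readonly struct ReadOnlyArray<T> : IReadOnlyList<T>
{
    private readonly T[] array;

    public ReadOnlyArray(T[] array) => this.array = array;

    public T this[int index] => this.array[index];
    public int Count => this.array.Length;

    public IEnumerator<T> GetEnumerator() => ((IEnumerable<T>)this.array).GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => this.GetEnumerator();
}
```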
* Python tool to generate expressions for truncated series.
* Using the generated series in special functions
* Test projects set Optimize=true in Release configuration
Co-authored-by: Tom Minka <8955276+tminka@users.noreply.github.com>
Fixed an issue where DependencyAnalysisTransform would give incorrect SkipIfUniform dependencies, causing statements to be incorrectly pruned from the generated code.
Removed FactorManager.AnyItem. Added FactorManager.All.
GaussianProductOp.AAverageConditional handles uniform B.
MMath.ChooseLn handles more cases.
- Introduced an abstraction for truncated power series
- Introduced an abstraction for power series
- Made [Di|Tri|Tetra]Gamma[Ln] use it
- Added an internal interface to recompute the power series used in MMath and make them longer/shorter depending on the necessary precision. It can be used in tests.
Currently, power series computation is rather primitive (cut off each precomputed series at the point where it used to be cut as long as the precision is <= 53 bits; don't cut it off otherwise).
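A minimal sketch of what a truncated power series abstraction can look like; illustrative, not the actual MMath type:
```
// Coefficients are stored lowest order first; evaluation uses Horner's scheme.
public sealed class TruncatedPowerSeries
{
    public double[] Coefficients { get; }

    public TruncatedPowerSeries(double[] coefficients) => this.Coefficients = coefficients;

    public double Evaluate(double x)
    {
        double sum = 0;
        for (int i = this.Coefficients.Length - 1; i >= 0; i--)
        {
            sum = sum * x + this.Coefficients[i]; // Horner step
        }
        return sum;
    }
}
```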
o Marked each non-packing assembly with "IsPackable=false" because the default is to package the assemblies. Done this via a common.props import.
o Switched to using integrated CSProj NuGet properties. Done this via a nuget-properties.props import.
o Switched to using msbuild in release.yml rather than nuget, since nuget.exe does not support the new spec.
o Added a LearnersNuGet project to be the "csproj" host for the learners nuspec (since the learners nuget does not correspond to any particular existing csproj project).
o Updated ClickThroughModel.csproj, ClinicalTrial.csproj, Image_Classifier.csproj, and MontyHall.csproj to new-style csprojs because otherwise new msbuild commands reject them.
o Made release.yml msbuild calls multiprocess to speed them up.
o Factored out some common properties into the common.props file.
GammaPower creates a point mass whenever Rate would be infinite.
Added GammaPower.FromMeanAndMeanLog.
Improved numerical accuracy of GammaPower.GetLogProb, GetMean, and GetMode.
Added GammaPowerEstimator
GammaProductOp supports GammaPower distributions.
GammaProductOp handles uniform message from product.
Added GammaPowerProductOp_Laplace
Fixed PowerOp for GammaPower distributions.
Added PlusGammaOp for GammaPower distributions.
MMath.GammaUpper has an unregularized option.
Added TruncatedGamma.GetMeanPower.
PowerOp supports TruncatedGamma.
Swapped the argument order of (internal) MMath.LargestDoubleProduct and LargestDoubleRatio.
UnlimitedStatesComputation() is used to temporarily alter the maximal size of an automaton,
which is defined by MaxStateCount. Using it from different threads could mess up the limit.
Now each thread gets its own limit (see the sketch below).
Also, the default MaxStateCount limit is increased to 300k, because that is what the biggest String inference customer uses.
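A minimal sketch of per-thread limits using `[ThreadStatic]`; the names mirror the description above but the real implementation may differ:
```
using System;

public static class AutomatonLimits
{
    [ThreadStatic]
    private static int maxStateCount; // 0 means "not yet initialized on this thread"

    public static int MaxStateCount
    {
        get => maxStateCount == 0 ? (maxStateCount = 300_000) : maxStateCount;
        set => maxStateCount = value;
    }
}

// Temporarily lifts the current thread's limit; other threads are unaffected.
public sealed class UnlimitedStatesComputation : IDisposable
{
    private readonly int previousLimit;

    public UnlimitedStatesComputation()
    {
        this.previousLimit = AutomatonLimits.MaxStateCount;
        AutomatonLimits.MaxStateCount = int.MaxValue;
    }

    public void Dispose() => AutomatonLimits.MaxStateCount = this.previousLimit;
}
```
Typical usage would be `using (new UnlimitedStatesComputation()) { ... }`.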
After the recent refactoring that removed `ProbabilityOutsideRanges`, `DiscreteChar.Complement()`
started to work incorrectly when ranges immediately followed one another.
For example, DiscreteChar.Point('\0').Complement() was equal to the uniform distribution, i.e. it still included the '\0' char.
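A sketch of range complementation that handles adjacent ranges correctly; illustrative, not the `DiscreteChar` code:
```
using System.Collections.Generic;

static class RangeOps
{
    // 'ranges' must be sorted, non-overlapping, inclusive (Start, End) pairs.
    public static List<(char Start, char End)> Complement(
        IReadOnlyList<(char Start, char End)> ranges)
    {
        var result = new List<(char, char)>();
        int next = 0; // first char code not yet covered by the complement
        foreach (var (start, end) in ranges)
        {
            if (start > next) result.Add(((char)next, (char)(start - 1)));
            next = end + 1; // adjacent ranges only advance 'next', emitting no gap
        }
        if (next <= char.MaxValue) result.Add(((char)next, char.MaxValue));
        return result;
    }
}
```
With this, Complement of the single range ('\0', '\0') yields ('\u0001', char.MaxValue), correctly excluding '\0'.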
The automaton determinization procedure used to keep a running total of weights for each state.
That sum was maintained by adding the weights of the next open segment and subtracting the weights
of closed segments. If the weights of segments differed a lot (a LogValue difference bigger than 100),
then due to numerical issues the sum could become zero after subtraction, which led to dropped
transitions and truncation of the automaton's language.
Now the weight sum is recalculated from scratch each time. This also loses precision,
but importantly the precision is lost only for very large weights, not very small ones. So
no accidental zeroing of weights happens and the language is not truncated.
This doesn't really impact runtime, because `WeightedStateSet` construction enumerated all weights
anyway (to normalize them), so in the worst case the slowdown is at most a constant factor.
And in the average case (where we maintain 1 or 2 destination states per transition) the runtime
is actually better due to lower constant costs.
A new test (`AutomatonTests.Determinize11`) was added; it used to fail with the previous implementation.
To make this change I had to rewrite the code substantially, which in my opinion makes it easier to follow:
- The determinization procedure now makes use of the `CharSegmentsEnumerator` helper class,
which enumerates all char segments from multiple char distributions. These segments are
non-overlapping.
- `WeightedStateSetBuilder` now handles duplicate state indices in the `Add()` call. Previously deduplication
had to happen by accumulating sums in a `Dictionary<int, WeightSum>`.
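A small numeric illustration of the cancellation problem described above; the values are chosen only to show the effect:
```
using System;

double big = Math.Exp(100);  // weight of a segment being opened, huge in linear space
double small = 1.0;          // more than 100 apart from 'big' in log space

double runningSum = small;
runningSum += big;           // 'small' is absorbed: big + 1.0 == big in doubles
runningSum -= big;           // closing the big segment leaves exactly 0.0

Console.WriteLine(runningSum); // 0 - the small weight was silently lost
Console.WriteLine(small);      // recomputing the sum from scratch would keep it
```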
Fixed corner cases in MMath.LargestDoubleProduct and NormalCdfIntegral.
Improved accuracy of DoubleIsBetweenOp.
Loosened the tolerance of GaussianIsBetweenCRCC_IsMonotonicInXMean and GaussianIsBetweenCRCC_IsMonotonicInXPrecision, to be fixed later.
Fixed cases where MMath.NormalCdf, DoubleIsBetweenOp, IsPositiveOp, DoublePlusOp would throw.
JaggedSubarrayOp uses ForceProper.
PlusDoubleOp gracefully handles improper distributions on input.
Added MMath.NormalCdfIntegral, NormalCdfDiff, NormalCdfExtended.
Added ExtendedDouble class.
MeanVarianceAccumulator correctly handles weight=0.
Region.GetLogVolume has a lower bound.
Variables with PointEstimate do not use JaggedSubarrayWithMarginal.
Due to a refactoring mistake, `sampleProb / prob * intervalLength`
turned into `sampleProb / (prob * intervalLength)`, which is obviously incorrect.
Fixed that and added a rudimentary test for `DiscreteChar.Sample()`.
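For reference, the precedence difference; the values are arbitrary:
```
double sampleProb = 0.5, prob = 0.25, intervalLength = 4;

// '/' and '*' have equal precedence and associate left to right in C#:
double intended = sampleProb / prob * intervalLength;   // (0.5 / 0.25) * 4 == 8
double broken = sampleProb / (prob * intervalLength);   // 0.5 / 1.0 == 0.5
```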
Made the `ArgumentCountToNames` cache thread-safe.
After that, another exception appeared with the `ArgsToValidatingAutomaton` dictionary (a null reference exception). Making that dictionary concurrent fixed the error.
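A sketch of the fix using `ConcurrentDictionary`; the cache contents here are illustrative:
```
using System.Collections.Concurrent;

static class NameCache
{
    private static readonly ConcurrentDictionary<int, string[]> argumentCountToNames =
        new ConcurrentDictionary<int, string[]>();

    // GetOrAdd publishes exactly one value per key; the factory may race under
    // contention, but readers never observe a torn or partially built entry.
    public static string[] GetArgumentNames(int count) =>
        argumentCountToNames.GetOrAdd(count, c =>
        {
            var names = new string[c];
            for (int i = 0; i < c; i++) names[i] = "arg" + i;
            return names;
        });
}
```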
* Determinization doesn't change the language of the automaton anymore.
2 new tests were added which check that it doesn't happen.
Previously all very low probability transitions (with probability of less than e^-35) were removed.
That was done for 2 reasons:
- some probabilities were represented in linear space, and e^-35 is as low a resolution
as you can get with regular doubles. (That is, if one probability = 1 and another is e^-35,
then when you add them together, the second one is indistinguishable from zero.)
This was fixed when discrete char probabilities were moved to a log-space representation.
- trying to determinize some non-determinizable automata led to an explosion of low-probability
states and transitions, which caused very poor performance
(e.g. the `AutomatonNormalizationPerformance3` test). Now a smarter strategy for detecting
these non-determinizable automata is used: during traversal, all sets of states reachable from the root
are remembered. If the automaton comes to the same set of states but with different weights,
it stops immediately, because otherwise it would be caught in an infinite loop.
* `Equals()` and `GetHashCode()` for `WeightedStateSet` take into account only the high 32 bits of each weight
(see the sketch after this list). This, coupled with normalization of weights, allows reusing already-added states with
very close weights. This speeds up the "PropertyInferencePerformanceTest" 2.5x due to
smaller intermediate automata.
* Weighted sets of size one are handled specially in `TryDeterminize`: they don't need to be
determinized and can be copied into the result almost as-is. (Unless they have non-deterministic
transitions. A simple heuristic of "has different destination states" is used to detect that
and fall back to the slow general path.)
* The representation of `WeightedStateSet` was changed from an (int -> float) dictionary to a
sorted array of (int, float) pairs. As an optimization, the common case of a single-element
set does not allocate any arrays.
* Determinization code for `ListAutomaton` was removed, because it never worked.
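A sketch of the weight hashing described in the second bullet above; illustrative, not the actual `WeightedStateSet` code:
```
using System;

static class WeightHashing
{
    // Keeps the sign, the exponent, and the top 20 mantissa bits, so weights
    // that differ only in low-order rounding hash (and compare) as equal.
    public static int WeightHash(double weight)
    {
        long bits = BitConverter.DoubleToInt64Bits(weight);
        return (int)(bits >> 32);
    }
}
```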
Transition weights may become infinite when going from log space into value space, and
`SetToSum()`, which is used by `MergeParallelTransitions`, can't handle 2 infinite weights at once.
To solve this, transition weights are normalized by the max weight prior to being passed into `SetToSum()`.
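The normalization is the standard log-sum-exp trick; a sketch under illustrative names:
```
using System;
using System.Collections.Generic;

static class WeightOps
{
    // Max-normalization: every exponentiated term is <= 1, so no intermediate
    // value overflows to infinity even for huge log weights.
    public static double LogSumExp(IReadOnlyList<double> logWeights)
    {
        double max = double.NegativeInfinity;
        foreach (double w in logWeights) max = Math.Max(max, w);
        if (double.IsNegativeInfinity(max)) return max; // all weights are zero

        double sum = 0;
        foreach (double w in logWeights) sum += Math.Exp(w - max);
        return max + Math.Log(sum);
    }
}
```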
* The pair (FirstTransitionIndex, LastTransitionIndex) was changed to the pair
(FirstTransitionIndex, TransitionsCount).
* Introduced the Automaton.Builder.LinkedStateData struct, which
mirrors Automaton.StateData but represents transitions as a linked list.
Previously Automaton.StateData was reused for this purpose,
which was confusing.
In some edge cases StringAutomaton needs to represent extremely
low probabilities of character transitions. To be able to do that,
probabilities are stored not as double values but
as the `Weight` structs already used by automatons. Weight stores the logarithm
of the value instead of the value itself, as sketched below.
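A minimal sketch of such a log-space weight type; illustrative, not the exact `Weight` struct:
```
using System;

public readonly struct Weight
{
    public double LogValue { get; }

    private Weight(double logValue) => this.LogValue = logValue;

    public static Weight FromValue(double value) => new Weight(Math.Log(value));
    public static Weight FromLogValue(double logValue) => new Weight(logValue);

    public double Value => Math.Exp(this.LogValue);

    // exp(a) * exp(b) == exp(a + b): products of tiny probabilities never underflow.
    public static Weight operator *(Weight a, Weight b) => new Weight(a.LogValue + b.LogValue);
}
```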
TryDeterminize() used to try to do something even when it was known that the automaton was already determinized
or non-determinizable.
Because automaton state is immutable, it is possible to store the determinization state alongside it.
There are 3 states:
* Unknown - TryDeterminize() was never called for this automaton
* IsDeterminized - TryDeterminize() successfully determinized the automaton
* IsNonDeterminizable - TryDeterminize() was called but didn't succeed.
Because the determinization state depends on the maximum number of states,
the `TryDeterminize(int maxStatesCount)` overload was removed in favour of using the defaults;
that overload was never used in practice.
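A minimal sketch of the cached-state scheme described above; the class shape and DeterminizeCore are illustrative placeholders, not the real automaton API:
```
using System;

public enum DeterminizationState { Unknown, IsDeterminized, IsNonDeterminizable }

public class Automaton
{
    private DeterminizationState determinizationState = DeterminizationState.Unknown;

    public bool TryDeterminize()
    {
        // Both outcomes are stable for an immutable automaton, so they are cached.
        if (this.determinizationState != DeterminizationState.Unknown)
            return this.determinizationState == DeterminizationState.IsDeterminized;

        bool success = this.DeterminizeCore();
        this.determinizationState = success
            ? DeterminizationState.IsDeterminized
            : DeterminizationState.IsNonDeterminizable;
        return success;
    }

    private bool DeterminizeCore() => throw new NotImplementedException(); // placeholder
}
```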
Also, as an implementation detail, an enum type was exposed as part of the automaton quoting interface.
The compiler generated incorrect C# code for quoting enum constants; that has been fixed.
We finally reached a point where automatons are big enough that recursive implementations
of algorithms fail with `StackOverflowException`.
There are 4 tests which test operations with big automatons (100k states):
* `TryComputePointLargeAutomaton`
* `SetToProductLargeAutomaton`
* `GetLogNormalizerLargeAutomaton`
* `ProjectSourceLargeAutomaton`
All four used to fail before the code was rewritten to use an explicit stack instead of recursive calls.
All changes except one didn't alter the algorithms used; the code was almost mechanically changed
to use a stack, as sketched below.
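A minimal example of such a mechanical rewrite on a plain reachability traversal; the state representation is illustrative:
```
using System.Collections.Generic;

static class Traversal
{
    // Recursive version: overflows the call stack on ~100k-state chains.
    public static void VisitRecursive(int state, int[][] next, bool[] visited)
    {
        visited[state] = true;
        foreach (int n in next[state])
            if (!visited[n]) VisitRecursive(n, next, visited);
    }

    // Mechanical rewrite: the implicit call stack becomes an explicit Stack<int>.
    public static void VisitWithStack(int start, int[][] next, bool[] visited)
    {
        var stack = new Stack<int>();
        stack.Push(start);
        visited[start] = true;
        while (stack.Count > 0)
        {
            int state = stack.Pop();
            foreach (int n in next[state])
            {
                if (!visited[n])
                {
                    visited[n] = true;
                    stack.Push(n);
                }
            }
        }
    }
}
```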
The only exception is automaton simplification. The old code used recursion in a very non-trivial way.
It was rewritten from scratch using a different algorithm: instead of extracting generalized
sequences and then reinserting them, the new code merges states directly in the automaton.
(There's a comment at the beginning of the Simplify() method explaining all operations.)