Rules Engine Filter
Using the custom filter and operators, you can only configure simple expressions (like In, Equals, Greater than, etc.). Another drawback is joining expressions: if you configure multiple filters for a feature flag, the flag evaluates to true even if only one of the filter conditions evaluates to true. Hence you cannot control the final result by combining results from multiple filters.
For such complex scenarios you can use the RulesEngine filter. RulesEngine is an open-source library that can evaluate rules and complex conditions defined in JSON-based workflows. You can check the details of Rules Engine in its GitHub repository.
Integration
To utilize Rules Engine, you will need to create a filter of type RulesEngine and configure the name of the workflow/JSON file which contains the rule. The JSON file is kept in an Azure Storage blob. Tenants who are interested in using the Rules Engine filter must provide the storage connection string (with read permission) and the name of the container where the JSON files will be kept.
Each workflow in a rules engine can have multiple rules; the feature flag will be evaluated to true only if all the rules pass.
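As a sketch, a workflow file with two rules might look like the following (the workflow name, rule names, and property names are illustrative, not part of the service contract). The flag evaluates to true only when both expressions pass:

```json
[
  {
    "WorkflowName": "EnableNewDashboard",
    "Rules": [
      {
        "RuleName": "IsUkUser",
        "Expression": "Country == \"UK\""
      },
      {
        "RuleName": "IsFullTimeEmployee",
        "Expression": "EmpType == \"1\""
      }
    ]
  }
]
```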
Flight Context Input
The flight context object (header x-flightcontext) is passed as input to the Rules Engine workflow. Hence, if the workflow has an expression Country == "UK", then the property Country should be present in the flight context object.
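For example, a flight context object like the following (property names are illustrative) would satisfy the expression Country == "UK":

```json
{
  "Country": "UK",
  "EmpType": "1",
  "UserPrincipalName": "user@contoso.com"
}
```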
Check this scenario to see how a simple Rules Engine rule is evaluated in a feature flag.
Additional Operators
Rules Engine supports writing dynamic LINQ lambda expressions, so you can use simple operators (==, !=, >, etc.) and logical operators (||, &&). However, this restricts usage of Graph operators, such as checking whether a user belongs to a security group. We leverage ReSettings in Rules Engine to allow custom operators that cannot be written in lambda expressions. All the additional operators are exposed as part of the Operator static class. As of now, the following custom operators are allowed:
| Operator | Description | Example Usage |
|---|---|---|
| IsMember | Checks if the UPN is part of a security group. The UPN and the group ID are needed as parameters. | Operator.IsMember(UserPrincipalName, "HERO_GROUP_OBJECT_ID") |
| IsNotMember | Checks if the UPN is not part of a security group. The UPN and the group ID are needed as parameters. | Operator.IsNotMember(UserPrincipalName, "NON_HERO_GROUP_OBJECT_ID") |
| In | Checks if a given value belongs to a list of values. The values are given as a comma-separated string. | Operator.In(EmpType, "1,2,3,4") |
| NotIn | Checks if a given value does not belong to a list of values. The values are given as a comma-separated string. | Operator.NotIn(EmpType, "5,6,7,8") |
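A workflow mixing a custom operator with a plain lambda expression could be sketched as follows (the workflow name, rule names, properties, and the group object ID placeholder are illustrative). Both rules must pass for the flag to be true:

```json
[
  {
    "WorkflowName": "HeroFeatures",
    "Rules": [
      {
        "RuleName": "IsHeroGroupMember",
        "Expression": "Operator.IsMember(UserPrincipalName, \"HERO_GROUP_OBJECT_ID\")"
      },
      {
        "RuleName": "IsEligibleEmpType",
        "Expression": "Operator.In(EmpType, \"1,2,3\")"
      }
    ]
  }
]
```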
Check this scenario to see a Rules Engine rule with custom operators and multiple rules.
Complex Relations
By default, all the rules in a workflow are joined using the AND operator. Hence, for a feature flag to be true, all the rules must evaluate to true. To create more complex relations, you can either create a single rule with a complex lambda expression, or leverage LocalParams. Using LocalParams, you can assign a conditional expression to a variable and then join these parameters.
Check this scenario to understand how to utilize LocalParams.
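As a sketch of the LocalParams approach (rule, parameter, and property names are illustrative): each LocalParam binds a name to an expression, and the rule's main Expression can then join those names with operators such as ||, which is not possible across separate rules.

```json
[
  {
    "WorkflowName": "ComplexRelations",
    "Rules": [
      {
        "RuleName": "UkUserOrHeroMember",
        "LocalParams": [
          {
            "Name": "isUkUser",
            "Expression": "Country == \"UK\""
          },
          {
            "Name": "isHeroMember",
            "Expression": "Operator.IsMember(UserPrincipalName, \"HERO_GROUP_OBJECT_ID\")"
          }
        ],
        "Expression": "isUkUser || isHeroMember"
      }
    ]
  }
]
```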
Enabling Rules Engine Filter
Rules Engine filter is a tenant-specific feature and is enabled based on tenant requirements. The following steps are required:
- Create an Azure Storage account (you can re-use an existing account).
- Create a container to keep your JSON rules (we suggest creating a new container instead of re-using an existing one).
- Contact the Admin team (the team maintaining the deployed artifacts of the service) and send the following information:
  - Storage account connection string (with read permission on the container). We suggest not providing a SAS key, since the connection string would have to be updated when the SAS key expires.
  - Name of the container.
  - Cache duration. To improve performance, the JSON rule file is cached in memory. We suggest keeping a small duration in a pre-production environment for faster testing. We are working to make the cache invalidation process real-time; for an emergency cache refresh in Production, you will need to get the instance restarted.