Fixing security vulnerability.

This commit is contained in:
Irina Tarnavski 2020-11-23 12:00:04 -08:00
Parent 504e85ab62
Commit 45b37f846c
264 changed files with 11036 additions and 2397 deletions

View file

@ -2,12 +2,107 @@
## Main documents
* [FE README](./frontend/README.md)
* [FE Contribution guide](./frontend/CONTRIBUTION.md)
* [BE README](./backend/README.md)
* [BE Contribution guide](./backend/CONTRIBUTION.md)
* [Deployment README](./arm/README.md)
## Solution structure
![FunctionalSegregation](./documentation/pictures/MRStructureDiagrams-SolutionArchitecture.png)
## Business description
### Terms
**Order/Purchase/Transaction** :
The main object that describes a particular act of interaction between a Merchant and a User. It's stored in
Dynamics 365 Fraud Protection (DFP) and sometimes retrieved by the Manual Review tool (MR) for synchronization and local storage.
**Item** :
One element in MR system that represents a particular purchase.
**Decision** :
A reflection of the Purchase Status entity. Shows the decision about a particular purchase. It can be generated on the merchant side or in the MR tool.
**Enrichment** :
When a purchase event is consumed by the MR application, it has no information about the purchase, just a reference to it via the purchase ID. The process of filling the item with actual purchase data is called enrichment.
**Queue** :
A logical container in the storage dynamically filled by items based on some filters.
**Filter** :
A set of parameters that define a set of items in a queue. A filter is created alongside the queue.
**Escalation queue** :
A queue that contains items with ESCALATE or HOLD labels. This is just a specific view of the related main queue. Items in an escalation queue can be reviewed only by supervisors.
**Residual queue** :
A queue that consists of orders that do not match the filters of any existing queue.
**Locked queue** :
A queue that is sorted by one of the order fields. An analyst can review items only from the top of the sorted queue.
**Unlocked queue** :
A queue where an analyst can pick items in random order for review.
**Label** :
A mark for an order in the queue that is applied by an analyst or senior analyst as a result of a manual review.
Labels are divided into two groups: final labels that form decisions (GOOD, BAD, WATCH_INCONCLUSIVE, WATCH_NA)
and intermediate labels for internal usage in MR (ESCALATE and HOLD). Final labels form a resolution object.
**Resolution** :
A particular final decision that was made in the MR tool. It can be retrieved during the resolution lifetime.
**Tag** :
A short mark for specifying item specifics. Tags can be applied by analysts and are visible when browsing items and resolutions.
**Note** :
A comment left by an analyst on the order.
### Permissions
Manual Review has role-based access, which means every user should have a particular role to use particular features. There are three main roles:
* fraud analyst
* senior fraud analyst
* manager/administrator
All roles should be defined for the DFP Service principal in Azure AD.
Role assignments can be done both through the Azure portal and through the DFP User Access tab (the latter is preferred).
In addition to main roles, some privileges can be provided to users based on in-tool actions and assignments.
All frontend-facing APIs are protected with the OAuth 2.0 Implicit grant flow.
The frontend is responsible for routing the user to the Azure Active Directory login page and for extracting the token.
Once the token is obtained, the frontend attaches it to each call to the backend.
The backend uses stateless token processing with role enrichment from Azure AD (with caching).
Role permissions:
| The Analyst | The Senior Analyst | The Fraud Manager |
| ----------------------------------------------------------------------------- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------ |
| view queues assigned to them | view any queue | view any queue |
| | create queues | create queues |
| | assign people to any queue | assign people to any queue |
| | | update any queue (change name and deadline) where possible |
| | | delete any queue |
| view any order in queues visible to them | view any item | view any item |
| lock items in queues assigned to them in accordance with sorting settings | lock items in queues assigned to them in accordance with sorting settings | lock any order in any queue |
| label, tag, comment, unlock items locked by them | label, tag, comment, unlock items locked by them | label, tag, comment, unlock items locked by them |
| apply bulk decisions on items that are visible to the analyst | apply bulk decisions on any unlocked item (including already labeled) | apply bulk decisions on any item |
| | | search items among the queues |
| | | release any lock for any analyst (future feature) |
| | view demand/supply dashboard | view demand/supply dashboard |
| view performance dashboard for themselves (including per-queue activity view) | view performance dashboard for themselves (including per-queue activity view) | view performance dashboard for any analyst (including per-queue activity view) |
| | | view performance dashboard for any queue (including per-analyst activity view) |
| view historical queue settings for queues they participated in | view historical queue settings for any queue | view historical queue settings for any queue |
| | view historical analyst info | view historical analyst info |
Assignment-based permissions:
| Queue reviewer | Queue supervisor |
| -------------------- | ----------------------------------------------------------------------------------------- |
| lock items | lock items |
| | lock escalated items (in escalated queue) |
| process locked items | process locked items |
| | receive notifications about orders being escalated in a supervised queue (future feature) |
## Microsoft Open Source code of conduct
For additional information, see the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct).

View file

@ -60,13 +60,13 @@
"value": "S1"
},
"appBackendSku": {
"value": "B2"
"value": "B3"
},
"appBackendNumOfWorkers": {
"value": 1
"value": 2
},
"appJavaOpts": {
"value": "-Xms3072m -Xmx3072m"
"value": "-Xms6016m -Xmx6016m"
},
"mailUsername": {
"value": "dfp-mr-notificator@outlook.com"

View file

@ -202,8 +202,8 @@
],
"metricalertExceptionsName": "[concat(parameters('prefix'),'-server-exceptions-alert')]",
"metricalertExceptionsSecondaryName": "[concat(parameters('prefix'),'-secondary-server-exceptions-alert')]",
"scheduledqueryrulesTaskIdleTooLongName": "[concat(parameters('prefix'),'-task-idle-too-long-alert')]",
"scheduledqueryrulesTaskIdleTooLongSecondaryName": "[concat(parameters('prefix'),'-secondary-task-idle-too-long-alert')]",
"scheduledqueryrulesTaskIdleTooLongName": "[concat(parameters('prefix'),'-background-task-alert')]",
"scheduledqueryrulesTaskIdleTooLongSecondaryName": "[concat(parameters('prefix'),'-secondary-background-task-alert')]",
"metricalertNoIncomingEventsDfphubName": "[concat(parameters('prefix'),'-no-incomeing-events-dfp-hub-alert')]",
"metricalertNoIncomingEventsDfphubSecondaryName": "[concat(parameters('prefix'),'-secondary-no-incomeing-events-dfp-hub-alert')]",
"metricalertTraceWarnSeverityName": "[concat(parameters('prefix'),'-trace-severity-warn-alert')]",
@ -508,10 +508,10 @@
"[resourceId('microsoft.insights/actionGroups',parameters('actionGroupName'))]"
],
"properties": {
"description": "Backend Java application task idle for too long time",
"description": "Backend Java application task has issues",
"enabled": "true",
"source": {
"query": "traces\n| where message matches regex \"Task \\\\[.*\\\\] is idle for too long. Last execution was \\\\[.*\\\\] minutes ago with status message: \\\\[.*\\\\]\"\n| project taskname=extract(\"Task \\\\[(.*)\\\\] is idle for too long. Last execution was \\\\[.*\\\\] minutes ago with status message: \\\\[.*\\\\]\", 1, message),timestamp\n| summarize AggregatedValue = count() by bin(timestamp, 5m),taskname\n",
"query": "traces\n| where message matches regex \"Background task \\\\[.*\\\\] issue.*\"\n| project taskname=extract(\"Background task \\\\[(.*)\\\\] issue.*\", 1, message),timestamp\n| summarize AggregatedValue = count() by bin(timestamp, 5m),taskname\n",
"authorizedResources": [],
"dataSourceId": "[resourceId('microsoft.insights/components', parameters('appInsightName'))]",
"queryType": "ResultCount"
@ -550,10 +550,10 @@
"[resourceId('microsoft.insights/actionGroups',parameters('actionGroupName'))]"
],
"properties": {
"description": "Backend Java application task idle for too long time",
"description": "Backend Java application task has issues",
"enabled": "true",
"source": {
"query": "traces\n| where message matches regex \"Task \\\\[.*\\\\] is idle for too long. Last execution was \\\\[.*\\\\] minutes ago with status message: \\\\[.*\\\\]\"\n| project taskname=extract(\"Task \\\\[(.*)\\\\] is idle for too long. Last execution was \\\\[.*\\\\] minutes ago with status message: \\\\[.*\\\\]\", 1, message),timestamp\n| summarize AggregatedValue = count() by bin(timestamp, 5m),taskname\n",
"query": "traces\n| where message matches regex \"Background task \\\\[.*\\\\] issue.*\"\n| project taskname=extract(\"Background task \\\\[(.*)\\\\] issue.*\", 1, message),timestamp\n| summarize AggregatedValue = count() by bin(timestamp, 5m),taskname\n",
"authorizedResources": [],
"dataSourceId": "[resourceId('microsoft.insights/components', parameters('appInsightSecondaryName'))]",
"queryType": "ResultCount"

View file

@ -112,8 +112,7 @@
"defaultConsistencyLevel": "Strong"
}
},
"isMatchRegexp": "function isMatchRegexp(str, pattern) {let regex = RegExp(pattern); return regex.test(str);}",
"getBucketNumber": "function getBucketNumber(bucket_size, value) {return Math.floor(bucket_size / value);}"
"isMatchRegexp": "function isMatchRegexp(str, pattern) {let regex = RegExp(pattern); return regex.test(str);}"
},
"resources": [
{
@ -482,6 +481,28 @@
"options": {}
}
},
{
"type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers",
"apiVersion": "2020-04-01",
"name": "[concat(parameters('cosmosDbAccountName'), '/QueuesDB/LinkAnalysis')]",
"dependsOn": [
"[resourceId('Microsoft.DocumentDB/databaseAccounts/sqlDatabases', parameters('cosmosDbAccountName'), 'QueuesDB')]",
"[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('cosmosDbAccountName'))]"
],
"properties": {
"resource": {
"id": "LinkAnalysis",
"partitionKey": {
"paths": [
"/id"
],
"kind": "Hash"
},
"defaultTtl": -1
},
"options": {}
}
},
{
"type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers",
"apiVersion": "2020-04-01",
@ -592,23 +613,6 @@
"options": {}
}
},
{
"type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/userDefinedFunctions",
"apiVersion": "2020-04-01",
"name": "[concat(parameters('cosmosDbAccountName'), '/AnalyticsDB/ItemLabelActivities/getBucketNumber')]",
"dependsOn": [
"[resourceId('Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers', parameters('cosmosDbAccountName'), 'AnalyticsDB', 'ItemLabelActivities')]",
"[resourceId('Microsoft.DocumentDB/databaseAccounts/sqlDatabases', parameters('cosmosDbAccountName'), 'AnalyticsDB')]",
"[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('cosmosDbAccountName'))]"
],
"properties": {
"resource": {
"id": "getBucketNumber",
"body": "[variables('getBucketNumber')]"
},
"options": {}
}
},
{
"type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/userDefinedFunctions",
"apiVersion": "2020-04-01",
@ -625,24 +629,7 @@
},
"options": {}
}
},
{
"type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/userDefinedFunctions",
"apiVersion": "2020-04-01",
"name": "[concat(parameters('cosmosDbAccountName'), '/QueuesDB/Items/getBucketNumber')]",
"dependsOn": [
"[resourceId('Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers', parameters('cosmosDbAccountName'), 'QueuesDB', 'Items')]",
"[resourceId('Microsoft.DocumentDB/databaseAccounts/sqlDatabases', parameters('cosmosDbAccountName'), 'QueuesDB')]",
"[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('cosmosDbAccountName'))]"
],
"properties": {
"resource": {
"id": "getBucketNumber",
"body": "[variables('getBucketNumber')]"
},
"options": {}
}
}
}
],
"outputs": {
"CosmosDbAccountResourceId": {

backend/CONTRIBUTION.md: new file, 521 additions
View file

@ -0,0 +1,521 @@
# Manual Review Contribution guide (backend)
This document:
* contains low-level solution details
* is used as an onboarding guide for newcomers
* should be considered by contributors to pass PR (pull request) procedure
> The project is alive and rapidly evolving, so if you see any inaccuracies, please
notify the maintainers of the solution. We appreciate any feedback.
Summary:
* [Architecture description](#architecture-description)
* [Applications](#applications)
* [Storage](#storage)
* [External services](#external-services)
* [Internal layers](#internal-layers)
* [Security](#security)
* [Logging](#logging)
* [Data loss protection](#data-loss-protection)
* [Contribution rules](#contribution-rules)
* [Versioning](#versioning)
* [Non-functional requirements](#non-functional-requirements)
* [Style guide](#style-guide)
* [Logging requirements](#logging-requirements)
* [In-place solution list](#in-place-solution-list)
The [Architecture description](#architecture-description) section is intended for anyone who is
going to read the source code. There you can find the most important high-level description of the backend part of the solution.
Once you need to change something inside the code, it's crucial to become familiar with the [Contribution rules](#contribution-rules).
In case you implement some new feature or meet some unclear construction in the code, you can find the explanation
in the [In-place solution list](#in-place-solution-list). Also, take into account that any PR must be checked against each
principle described there.
## Architecture description
The Manual Review backend is a Web API that uses a microservice architecture, so each service can function apart from the others.
It also relies tightly on Azure services for security, logging, data storage, and data exchange.
The main high-level flow is:
1. Receive data about purchases from DFP.
2. Store the data as items in MR Storage, then enrich the items and distribute them among queues.
3. Users process items in queues.
4. Stream item processing events to analytics service.
5. Store event data in a way that suits analytical usage.
6. Users and 3rd-party applications observe analytical data.
7. Report processing results to DFP.
![Backend](../documentation/pictures/MRStructureDiagrams-Backend.png)
Different microservices are connected to each other either by asynchronous persistent messaging systems or
by synchronous protocols with a mandatory retry mechanism. Such a structure is intended to allow separate scaling
of the processing part and the analytical part. The asynchronous messaging should help to smooth the load.
### Applications
Currently, there are two applications in the backend part:
* mr-queues
> Queues is the main service of the project. It provides a REST API for managing queues, the items inside those queues, and item-based dictionaries.
For now, it also provides an API for other real-time processing functions (users, tokens, app settings), but in the future these should be moved to separate services.
* mr-analytics
> Analytics is a service for aggregating, computing, and retrieving analytics data through a REST API.
It retrieves purchases that are being processed by fraud analysts, sends them to DFP, and stores them in a database.
It also provides an API for generating dashboards based on the collected analytics data.
Applications are compiled into executable jar-files intended to run in Azure App Service instances.
Also, there are several libraries that contain common solution logic:
* azure-graph-client
* cosmos-utilities
* dfp-auth-starter
* durable-ehub-starter
* model
A detailed description can be found in the `README.md` file of each library module.
All modules are Gradle projects and can be built, tested, and launched with Gradle. The source code is
structured in accordance with Java/Spring best practices and was initially created by
[Spring Initializr](https://start.spring.io/).
To understand the correct construction of backend components, please review the [In-place solution list](#in-place-solution-list).
### Storage
The main solution storage is an instance of Azure Cosmos DB. We chose it because:
* It has a distributed structure and elastic scalability.
* It can perform complex queries with filtering, aggregation, and preprocessing (UDFs).
* It supports geo-redundancy.
As Cosmos DB is a NoSQL database, it requires careful attention to locking and consistency principles.
Please refer to the [In-place solution list](#in-place-solution-list) for more details about exact cases and approved patterns.
### External services
Several external services are used by the applications.
On initialization:
* Applications connect to Azure Key Vault to get the sensitive configuration. Please refer to the [In-place solution list](#in-place-solution-list)
to find which information should be stored there and how.
* Properties are retrieved from property files placed alongside the executable jar-files, but some dynamic
properties in the property files refer to environment variables that should be correctly defined during
App Service deployment. Property files are also separated by profiles. Please refer to the [In-place solution list](#in-place-solution-list)
for more information.
At runtime:
* Applications connect to Azure Active Directory to get user information. Check the [In-place solution list](#in-place-solution-list)
to find the rules of Azure AD usage.
* All logging and monitoring information is sent to Azure Application Insights. Check the [In-place solution list](#in-place-solution-list)
to find the support requirements.
* Applications actively exchange data with DFP. Check the [In-place solution list](#in-place-solution-list)
to find the support requirements.
* For some special cases, applications can communicate with other external services, e.g. a map service, an email disposability service, etc.
Check the [In-place solution list](#in-place-solution-list) to find the support requirements.
### Internal layers
Applications follow the Spring framework, and all components inside are beans. Execution starts
by calling `SpringApplication.run()` and then relies on declarative bean configuration described
by in-code annotations and property files. Annotations hide a drastically large amount of functionality
under the hood, so knowledge of the following tools is mandatory:
* Lombok
* Resilience4j
* Spring Data
* Spring Web
* Spring Core (beans, services, configs, schedulers, caches)
There are no dynamically created beans, but some beans contain their own lifecycle,
e.g. scheduled tasks, daemon workers, etc. Before putting any code inside, please review and
analyze the main purpose of the existing beans and only then decide where to place the code. A minimal sketch of such a self-scheduling bean is shown below.
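For illustration, a minimal sketch of such a self-scheduling bean (assumptions: Lombok and Spring scheduling are available; the task name and property key below are made up):

```java
import lombok.extern.slf4j.Slf4j;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;

@Slf4j
@Service
public class HeartbeatTask {

    // Spring triggers this method periodically; the bean owns its own lifecycle.
    // The property key and the default delay are illustrative.
    @Scheduled(fixedRateString = "${mr.tasks.heartbeat-delay-millis:300000}")
    public void run() {
        log.info("Heartbeat task executed.");
    }
}
```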
The main applications follow a layered architecture. Each bean must fit only one layer, but each
layer can contain several beans that work together to share responsibility:
![Backend](../documentation/pictures/MRStructureDiagrams-BELayers.png)
Applications are configured on startup by standard Spring mechanisms.
Controllers are responsible for the incoming request/command flow.
In some cases (Event Hub processors) they can be implicit and are managed only by the configuration.
The service layer is the richest and most complex layer. All business logic should be placed here.
Execution of services can be triggered in different ways:
* by external call
* by incoming event
* by scheduled task
Repositories are interfaces for the outgoing request/command flow. In some cases, clients play the role of repositories.
During runtime, execution information is continuously fed to monitoring systems. To interact with some of them, there are
beans like `MeterRegistry`. Only logging is accessed through static methods.
Also, there are `model` classes, which are not beans in the general meaning. All models are placed in a separate package and serve
cross-bean communication.
### Security
There are two layers of security for users:
* Endpoint level: whether the user has a role that is allowed to call particular endpoints.
This is a first-level defence barrier that ensures a user can't perform inappropriate
*actions*. Checked on the controller layer.
* Data level: whether the user has access to read/create/update a specific entity.
This is a second-level defence barrier that ensures user actions can't *impact
data* to which the user has restricted access in complex or implicit actions. Checked
on the public client layer.
Also, there are two ways to authenticate incoming requests:
1. by data inside the token (for external applications that connect to MR).
2. by DFP role, which is retrieved from the Azure Graph API based on identity info from the token.
Once an incoming request is authenticated, the auth info is available in `SecurityContextHolder` and
can be conveniently retrieved by `UserPrincipalUtility`. This class also allows mimicking the security context
for users that are offline (e.g. at the moment of retrieving data for alert sending).
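A rough sketch of reading the authenticated principal; this uses only the plain Spring Security API, and the exact helper methods on `UserPrincipalUtility` may differ:

```java
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

public final class AuthInfoExample {

    private AuthInfoExample() {
    }

    // After the security filters have run, the enriched authentication
    // object is available anywhere on the request thread.
    public static String currentUserId() {
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        return auth == null ? null : auth.getName();
    }
}
```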
### Logging
Logging is the most important component for troubleshooting, which is why we should use it
carefully.
By default, Spring Boot starters use the Logback framework as the logging provider.
Since we have Lombok, we should use `@Slf4j` instead of a direct logger definition, for example:
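(A minimal illustration; the service name is made up.)

```java
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

// @Slf4j generates the `log` field, so no manual
// LoggerFactory.getLogger(...) definition is needed.
@Slf4j
@Service
public class EnrichmentExampleService {

    public void enrich(String itemId) {
        log.info("Started enrichment for item [{}].", itemId);
    }
}
```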
In the cloud environment, logs flow into the Azure Application Insights (AI) resource alongside other
[telemetry](https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-in-process-agent#autocollected-requests-dependencies-logs-and-metrics).
AI aggregates logs from all instances of the solution in one place. In the AI resource
they are called _traces_ and can be queried from the `traces` table using
the [Kusto query language (KQL)](https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/log-query-overview).
Please refer to the [In-place solution list](#in-place-solution-list) to find answers to the most frequent questions.
Logging rules can be found in [Logging requirements](#logging-requirements).
### Data loss protection
The application works with financial data, and any information is highly valuable here. That's why we
can't rely on non-persisted eventing. On the other hand, the application should be fast and hide as many
delays from the user as possible. That's why we rely on background tasks for any long-term and data-sensitive
operations.
As background tasks should implement some repeating patterns, it's crucial that any instance of the application
continues working even if another has failed for some reason. That makes it difficult to avoid the case when several
instances of the application try to process the same action. In order to synchronize such work, we have
implemented tasks: special objects stored in Cosmos DB in the `Tasks` container. They are used as shared
locks:
* they have state (`READY`/`RUNNING`)
* they have different timestamps to detect freezes
* they contain extra information about how and when the previous execution has been done.
It's crucial to understand which actions should be implemented here and which restrictions exist.
Please refer to the [In-place solution list](#in-place-solution-list) for recommendations. A minimal claiming sketch is shown below.
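A minimal claiming sketch, assuming the `Task` entity from this commit with Lombok accessors and a hypothetical `TaskRepository`; the claiming logic is illustrative, not the exact solution code:

```java
import java.time.OffsetDateTime;

import com.griddynamics.msd365fp.manualreview.model.TaskStatus;
import com.microsoft.azure.spring.data.cosmosdb.exception.CosmosDBAccessException;

public class TaskClaimExample {

    private final TaskRepository taskRepository; // hypothetical Spring Data repository

    public TaskClaimExample(TaskRepository taskRepository) {
        this.taskRepository = taskRepository;
    }

    // Returns true only for the single instance that wins the optimistic write.
    public boolean tryClaim(Task task, String instanceId) {
        if (task.getStatus() != TaskStatus.READY) {
            return false; // another instance is already running the task
        }
        task.setStatus(TaskStatus.RUNNING);
        task.setInstanceId(instanceId);
        task.setCurrentRun(OffsetDateTime.now());
        try {
            // The @Version/_etag check makes this save fail
            // if a concurrent instance claimed the task first.
            taskRepository.save(task);
            return true;
        } catch (CosmosDBAccessException e) {
            return false; // lost the race; back off until the next attempt
        }
    }
}
```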
## Contribution rules
### Versioning
Any contribution should go through the pull request review process.
We follow a regular git flow for changes:
* Implement a bugfix or new functionality
* Commit any significant code change as a separate commit. Commits have to be descriptive
and contain only changes related to the story which you picked for development.
Each commit must contain a tag at the beginning of the commit message indicating
where changes were made (`[BE]` for the `backend`).
* Make sure it compiles
* Test it locally by launching it against the real environment
* Run the Gradle build to make sure Gradle can build it and the tests pass (the real environment is required)
* A pull request should have a tag indicating
where changes were made (`[BE]` for the `backend`). It should also
have a second tag referring to the Bug Tracking story, e.g. `[MDMR-123]`. For better visibility,
it's highly recommended to add GitHub labels to the PR as well.
* Commit and push the branch
* Create a PR to merge this branch to the target branch
### Non-functional requirements
* To pass the PR review, you must check the code against the [logging rules](#logging-requirements).
* Unit tests are not mandatory for now, but once you change the source code, the existing unit tests should execute without errors. If any fail, correct the unit tests.
* The code should follow the [In-place solution list](#in-place-solution-list).
### Style guide
* The source code formatting should follow the default IDE code style preferences. Currently it's checked by IntelliJ IDEA.
* The source code should follow the default Sonar rules. As many as possible of the warnings and errors
reported by the [SonarLint](https://plugins.jetbrains.com/plugin/7973-sonarlint) plugin should be resolved.
The current list of rules arbitrates between controversial styles of writing code
and should override Sonar rules if any intersect.
* All temporary features and code parts should be marked by TODO comments.
* Parameters with a big number of annotations should have the parameter declaration on a different line from the annotations. E.g.:
```java
@Operation(summary = "Get performance metrics for list of queues")
@GetMapping(value = "/labeling/queues", produces = MediaType.APPLICATION_JSON_VALUE)
@Secured({PREFIXED_MANAGER_ROLE})
public Set<ItemLabelingMetricsByQueueDTO> getItemLabelingMetricsByQueue(
        @Parameter(description = FROM_PARAM_DESCRIPTION, example = FROM_PARAM_EXAMPLE)
        @DateTimeFormat(iso = DateTimeFormat.ISO.DATE_TIME)
        @RequestParam
        OffsetDateTime from,
        @Parameter(description = TO_PARAM_DESCRIPTION, example = TO_PARAM_EXAMPLE)
        @DateTimeFormat(iso = DateTimeFormat.ISO.DATE_TIME)
        @RequestParam
        OffsetDateTime to,
        @Parameter(description = AGGREGATION_PARAM_DESCRIPTION)
        @Schema(type = "string", format = SWAGGER_DURATION_FORMAT, example = SWAGGER_DURATION_EXAMPLE)
        @RequestParam
        Duration aggregation,
        @Parameter(description = ANALYSTS_PARAM_DESCRIPTION)
        @RequestParam(value = "analyst", required = false)
        Set<String> analystIds,
        @Parameter(description = QUEUES_PARAM_DESCRIPTION)
        @RequestParam(value = "queue", required = false)
        Set<String> queueIds
) {
    return performanceService.getItemLabelingMetricsByQueue(from, to, aggregation, analystIds, queueIds);
}
```
* Order of annotations on classes should be (the first one is the closest to class declaration): `org.springframework` >
`org.projectlombok` > `io.swagger.core` > other
* API naming should follow [common best practices](https://restfulapi.net/resource-naming)
* Names of containers/tables should reflect their content (e.g. if a container stores RedHotChillyPepper
entities, then the name should be `RedHotChillyPeppers`, reflecting all attributes and in plural form)
* The IDE should be configured the following way:
* Class count to use import with '*': 5
* Names count to use static import with '*': 3
* Line length restriction for JavaDocs and comments: 80
* Line length restriction for code: 120
* It's highly recommended to use the `@Nonnull` and `@Nullable` annotations for argument description
### Logging requirements
1. Try to put a descriptive message inside a log. A message can be called descriptive when it shortly describes the outcome of
the execution block it refers to.
2. Try to avoid overused and repetitive messages.
3. When hesitating to choose a log level (and your case is not described in the [severities](#selecting-log-severity) guidelines),
tend to choose the lower severity.
4. Avoid printing sensitive information to the log messages.
5. Avoid using logs inside loops that don't operate with external resources (e.g. EventHub, CosmosDB, Active Directory); instead, accumulate the result and log it after the loop.
6. Every resolvable log argument should be wrapped in square brackets (no other symbols are allowed), so
your log should look like this: `log.warn("Error occurred during queue [{}] processing.", queue.getId())`. Also don't
forget to add a dot at the end of the sentence.
7. It's preferable to put long string arguments after the error explanation sentence:
`log.error("Error occurred during queues processing: [{}]", queueIds)`.
8. Resolvable arguments of a log which are reserved for another exception message should not be surrounded with brackets:
`log.warn("Expected error occurred. {}. Reverting changes.", ex.getMessage())`
9. Any code changes must be properly logged. Walk through [Selecting log severity](#selecting-log-severity) and check that
all cases described there have corresponding logging.
## In-place solution list
### Structuring of the feature
The following diagram should be used to determine the correct place for implementing logic:
![Backend](../documentation/pictures/MRStructureDiagrams-BELayerResponsibility.png)
Please make sure that your classes and their relations follow this structure.
Please make sure that class field names follow the patterns already applied in the application.
### DTO classes
Any interaction with the user should be done via DTO classes. In some cases it's enough to write a DTO only for the top-level class
(e.g. `Item` and `ItemDTO`).
**Pros:** It allows exchanging only required and allowed information.
**Cons:** The developer needs to write more classes.
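An illustrative sketch (the field set is an assumption, not the real `Item` model):

```java
import lombok.Builder;
import lombok.Data;

// Only fields the client is allowed to see; internal details of Item
// (locks, _etag, raw purchase payload) are deliberately absent.
@Data
@Builder
public class ItemDTO {
    private String id;
    private String label;
    private boolean active;
}
```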
### Model mapper usage
* For any data mappings where objects contain fields with the same names, it's required to use [Model Mapper](http://modelmapper.org/).
* Preconfigured Model Mapper beans are already included in existing applications.
* All complex mappings should be done in special services (see the sketch below).
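A minimal usage sketch (the assembler class is made up; the preconfigured `ModelMapper` bean comes from the application configuration):

```java
import lombok.RequiredArgsConstructor;
import org.modelmapper.ModelMapper;
import org.springframework.stereotype.Service;

@Service
@RequiredArgsConstructor
public class ItemDtoAssembler {

    private final ModelMapper modelMapper; // preconfigured bean

    // Fields of Item and ItemDTO with matching names are copied automatically.
    public ItemDTO toDto(Item item) {
        return modelMapper.map(item, ItemDTO.class);
    }
}
```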
### Database Locking
We use optimistic locking to prevent data being lost while writing to the database.
For more details, refer to the [official documentation](https://docs.microsoft.com/en-us/azure/cosmos-db/database-transactions-optimistic-concurrency).
Optimistic locking means that any write operation to the database can finish with an exception during normal functioning.
* Once the exception is received, the whole business logic should be repeated.
* A business operation shouldn't contain more than one write. Otherwise, repeating can cause problems.
* We use Resilience4j annotations like `@Retry` at the bean method level. That's why there are `thisService` beans
across the application. Make sure that you know the 'proxy' pattern and how annotation processing works in Spring (a sketch follows below).
Locking is based on the `_etag` property, and the same property is used for deduplication. Also, the guarantees of
etag consistency ([1](https://stackoverflow.com/questions/52499978/confusion-on-how-updates-work-with-weak-consistency-models),
[2](https://stackoverflow.com/questions/40662399/document-db-etag-optimistic-concurrency-with-session-consistency)) allow us to use
session consistency in Cosmos DB.
**Pros:** DB is scalable and many services can deal with it without freezing.
**Cons:** The developer should consider the possibility of getting an exception on any DB interaction.
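A hedged sketch of this retry-on-conflict pattern together with the `thisService` self-reference; class names and the Resilience4j instance name are illustrative:

```java
import io.github.resilience4j.retry.annotation.Retry;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Lazy;
import org.springframework.stereotype.Service;

@Service
public class QueueUpdateService {

    @Autowired
    private QueueRepository queueRepository; // assumed Spring Data repository

    // Self-reference through the Spring proxy so that @Retry is applied
    // even when the method is called from inside this class.
    @Lazy
    @Autowired
    private QueueUpdateService thisService;

    public void renameQueue(String queueId, String newName) {
        thisService.renameWithRetry(queueId, newName);
    }

    @Retry(name = "cosmosOptimisticUpdate") // instance configured in application.yml
    public void renameWithRetry(String queueId, String newName) {
        // Re-read inside the retried block so every attempt sees a fresh _etag.
        Queue queue = queueRepository.findById(queueId).orElseThrow();
        queue.setName(newName);
        queueRepository.save(queue); // may fail on an _etag conflict and trigger a retry
    }
}
```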
### Retries on Cosmos DB reads
As Cosmos DB has throughput restrictions, it can fail any read response or return incomplete results (for list requests).
In order to work with this, consider using `PageProcessingUtility`.
**Pros:** DB is scalable and many services can deal with it without freezing.
**Cons:** The developer should consider the possibility of getting an exception on any DB interaction.
### Custom queries to Cosmos DB
We use Spring data for simple request. For more efficient queries we use [custom methods](https://docs.spring.io/spring-data/jpa/docs/current/reference/html/#repositories.custom-implementations).
Please, refer on examples in code.
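A sketch of the split, with file boundaries shown as comments; the method and its stub body are illustrative, while real implementations build raw Cosmos SQL as `ItemLabelActivityRepositoryImpl` does later in this commit:

```java
import java.util.List;

import com.microsoft.azure.spring.data.cosmosdb.repository.CosmosRepository;

// ItemRepositoryCustomMethods.java: the hand-written part of the contract.
interface ItemRepositoryCustomMethods {
    List<Item> findUnenriched(int limit);
}

// ItemRepositoryImpl.java: picked up by Spring Data via the "Impl" suffix.
class ItemRepositoryImpl implements ItemRepositoryCustomMethods {
    @Override
    public List<Item> findUnenriched(int limit) {
        // A real implementation issues a raw Cosmos SQL query here.
        return List.of();
    }
}

// ItemRepository.java: the repository that services actually inject.
interface ItemRepository
        extends CosmosRepository<Item, String>, ItemRepositoryCustomMethods {
}
```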
### Event streaming
Once you implement streaming between services, make sure:
* Only the info for the current event is sent.
* The event can't be separated into several simpler events.
* Events are reusable and can be consumed by any new potential application.
* If events carry critical information, then there is a retry mechanism for both sending and receiving (a minimal sending sketch follows this list).
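A minimal sending sketch with the plain Azure SDK (the connection string and hub name are placeholders); in this solution, sending actually goes through the `durable-ehub-starter` module, which adds the persistence and retries described above:

```java
import java.util.List;

import com.azure.messaging.eventhubs.EventData;
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubProducerClient;

public class EventSendExample {

    public static void main(String[] args) {
        EventHubProducerClient producer = new EventHubClientBuilder()
                .connectionString("<event-hub-connection-string>", "<hub-name>")
                .buildProducerClient();
        // One self-contained event that carries only the info for the current change.
        producer.send(List.of(new EventData("{\"itemId\":\"i1\",\"label\":\"GOOD\"}")));
        producer.close();
    }
}
```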
### Swagger support
The project uses Swagger to ease debugging. Every time you implement a new endpoint,
make sure that it looks proper in [http://localhost:8080/swagger-ui/index.html?url=/v3/api-docs](http://localhost:8080/swagger-ui/index.html?url=/v3/api-docs)
### Swagger api.json generation
When you add/remove/change an endpoint, please regenerate api.json by building the respective module with Gradle.
One of the unit tests retrieves the Swagger definition from the [springdoc](https://springdoc.org/) endpoint and saves it to api.json
in the module root.
### External services calls
In order to make the application durable:
* Any call to an external system should be protected by a timeout (either implicit or explicit).
* If retrieving the information is required for normal functioning and there is no retry
behavior at the top layers, then the call to the external service should be wrapped in retries (a sketch follows this list).
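A sketch of protecting an external call with an explicit timeout and retries; the host, path, and Resilience4j instance name are illustrative:

```java
import java.time.Duration;

import io.github.resilience4j.retry.annotation.Retry;
import org.springframework.stereotype.Service;
import org.springframework.web.reactive.function.client.WebClient;

@Service
public class MapClientExample {

    private final WebClient webClient = WebClient.create("https://example-map-service.net");

    @Retry(name = "mapService") // instance configured in application.yml
    public String resolveAddress(String query) {
        return webClient.get()
                .uri(uriBuilder -> uriBuilder.path("/search").queryParam("q", query).build())
                .retrieve()
                .bodyToMono(String.class)
                .block(Duration.ofSeconds(5)); // explicit timeout protects the caller
    }
}
```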
### Properties
We use Spring application.yml files to store configuration. There are several profiles for different environments:
* default (no profile name) - contains all properties and default values for them.
* local - is used for local debugging; must be applied on top of the default profile.
* int - is used for testing environments; must be applied on top of the default profile; must contain all properties that impact behavior and can be configured on installation.
* int-secondary - is used to highlight properties specific to the secondary environment; must be applied on top of the int profile; should NOT be changed on installation.
* prod - is used for production environments; must be applied on top of the default profile; must contain all properties that impact behavior and can be configured on installation.
* prod-secondary - is used to highlight properties specific to the secondary environment; must be applied on top of the prod profile; should NOT be changed on installation.
Once you add a new property, you must check whether it should be added to each configuration file.
**Pros:** It allows configuring different environments/instances precisely.
**Cons:** The developer must manage all existing profiles.
### Security review
Once you implement a feature that involves any interaction with a user (e.g. an incoming HTTP request, alert sending),
you must check it against the following steps:
1. Check which users are able to use this interaction in general, using the table in the [README](../README.md).
2. Check that the implementation restricts any interaction by default for everyone except the Fraud Manager. For example,
for controllers it's done by the `@Secured({ADMIN_MANAGER_ROLE})` annotation at the class level.
3. Unblock access at the method level for other roles in accordance with the permission table.
4. Check whether the interaction requires any data from storage/other services.
5. If so, separate the service logic into `PublicService` and `PublicClient`, where any interaction with data passes through the
`PublicClient`.
6. Make sure that all methods in `PublicClient` are protected with one of the following annotations (a sketch follows this list):
* `@PostFilter`
* `@PreFilter`
* `@PostAuthorize`
* `@PreAuthorize`
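A hedged sketch of the `PublicClient` pattern; the SpEL condition, the role name, and the `assignedQueueIds` principal property are assumptions for illustration:

```java
import org.springframework.security.access.prepost.PostAuthorize;
import org.springframework.stereotype.Service;

@Service
public class PublicItemClient {

    private final ItemRepository itemRepository; // assumed Spring Data repository

    public PublicItemClient(ItemRepository itemRepository) {
        this.itemRepository = itemRepository;
    }

    // Second-level barrier: even when the endpoint itself is reachable, the
    // returned entity is checked against the caller's queue assignments.
    @PostAuthorize("hasRole('ADMIN_MANAGER') "
            + "or principal.assignedQueueIds.contains(returnObject.queueId)")
    public Item getItem(String id) {
        return itemRepository.findById(id).orElseThrow();
    }
}
```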
### Role enrichment
Each request that comes to the application passes through the security filters. They are configured in the `SecurityConfig` class in
each application. The main flow is based on the default `AADAppRoleStatelessAuthenticationFilter`
and the custom `DFPRoleExtractionFilter` from the `dfp-auth-starter` module. The latter contains the following logic:
* Correctness of the JWT token is checked automatically by the Azure Spring Security module.
* To separate user tokens (which should be enriched with DFP roles) from app tokens, the equality of the oid and sub claims
is checked. Equality of the claims is considered an indication of an application token.
* For every new user token, the application makes a call to Azure AD in order to retrieve DFP roles for the token owner.
The result of the lookup is stored in a local cache for 10 minutes (by default). The size of the cache is restricted to 500 entries (by default).
**Pros:** Seamless integration with DFP roles.
**Cons:** The backend must mimic Azure AD services for the frontend (e.g. photo retrieval and others).
### Usage of Application Insights agent
Integration with AI was made with the [Application Insights Java Agent](https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-in-process-agent).
It is configured through the `ApplicationInsights.json` configuration file
(the agent doesn't need to be in the classpath, but the config file has to be placed near the agent wherever it is) and
application arguments.
**Pros:** This approach allows seeing the biggest amount of data in AI. Also, it is recommended by Azure as a best practice.
**Cons:** There are some known issues:
* Can't send telemetry for requests with spaces in the url, e.g.: https://graph.microsoft.com/v1.0/servicePrincipals?$ filter=appId eq '935c35fb-d606-4e27-b2f8-802af975bea6'
[github issues](https://github.com/microsoft/ApplicationInsights-Java/issues/1290)
* Web client GET requests are not captured. [github issues](https://github.com/microsoft/ApplicationInsights-Java/issues/1276)
* The operation context is not created when a message comes from EventHub, so it's not possible right now to trace all
insights produced by an incoming message. [stackoverflow](https://stackoverflow.com/questions/63235493/how-to-preserve-operationid-when-sending-message-through-azure-eventhub)
* If a log contains exception info like `log.warn("Some warn about [{}]", id, exception)`, then such a log won't be stored as
a trace but as an exception, without the message info but with the stack trace.
### Selecting log severity
**ERROR** - `log.error` calls should be used when you need to explicitly mark unexpected errors (when an exception
couldn't/shouldn't be thrown) in the program. If there is an exception, then it must be logged as a separate
log entry. Logs of this severity always need to be turned on in the cloud instance.
> This log level should be always turned on.
**WARN** - `log.warn` calls should be used to mark potential errors in the program.
If there is an exception, then it's recommended to log it in a separate log entry. Logs
of this severity can be turned off only when the solution version is stable and needs to be cost-effective.
> Turning off this log level makes triage impossible for some cases.
**INFO** - `log.info` calls have to be used before:
- any attempt to make changes in the persisted state (an entry in the database, sending a message/event, posting changes to
connected systems by any protocol), with a short description of the changes and with obligatory indication of tracking information
(e.g. id, actor, etc.)
`log.info` calls have to be used after:
- any attempt to make changes in the persisted state, with a description of the results
- any business decision applied in the code or any condition check that impacts the result of the business operation
- any condition that restricts the user from getting information despite successful authorization
Logs of this severity can be turned off only when the solution version is stable and needs to be cost-effective.
> Turning off this log level makes triage impossible for some cases.
**DEBUG** - `log.debug` calls have to be used for debugging unexpected behavior. This is the hardest log level
to consider during development, and it always tends to be overused. Therefore it's preferable to use it when:
* it's obvious that the current block of code tends to have bugs in it
* it adjoins egress/ingress integration points (such as REST APIs, EventHub event streaming)
* verbose values that were not included in the INFO level of the same block of code need to be printed
> Logs of such severity always need to be turned off in cloud, unless they are written to the filesystem.
**TRACE** - `log.trace` calls are prohibited in the solution and should be avoided. You can use them in development, but
during pull request review they have to be removed.
### Choosing between traces and exceptions
Application Insights supports different types of entries. The main two which you can generate from code are traces and exceptions.
In our technical alerting we configure different allowed thresholds for them:
* for exceptions, we have bigger thresholds, about 100 entries per 5 minutes for each application instance.
* for traces, there is some threshold level for the `WARN` severity, so it's considered OK to have some warning messages periodically.
* for errors, the threshold is 1, and all errors should be investigated by the support team.
If you wish to log an exception, it should be done as below:
* `log.warn("Some warn about [{}]", id); log.warn("Warn about [{}] exception", id, exception)` - in this case you will have both trace and exception entries in AI.
* `log.warn("Some warn about [{}]: {}", id, exception.getMessage());` - in this case you will have only a trace event in AI, without a stack trace.
* `log.warn("Some warn about [{}]: {}", id, exception);` - in this case you will have only an exception event in AI, without a stack trace. Use it if the exception is
frequent and transient.
**Pros:**
* There is some exception background noise from libraries, so a big threshold on exceptions allows processing it quietly.
* A developer can precisely manage which information should trigger alerts.
**Cons:**
* The stack trace can be found only in exception entries.
* Sometimes it's tricky to correlate related exceptions and traces.

View file

@ -1,12 +1,9 @@
# Backend
This is a parent module of the Manual Review application. The application follows microservice architecture and contains
the following main executable modules:
* [mr-queues](./queues) that responsible for real-time item processing in the queue-based paradigm.
* [mr-analytics](./analytics) that responsible for post-processing analysis and reporting.
This is a parent module of the Manual Review backend (BE) applications.
Please, read more in [BE Contribution guide](./CONTRIBUTION.md).
The module combines all services and provides common
configurations like `.gitignore` and `settings.gradle`.
Below you can find the main technical information about building/launching the backend part.
## Getting Started

View file

@ -114,8 +114,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -128,8 +128,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -251,8 +251,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -265,8 +265,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -378,8 +378,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -392,8 +392,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -509,8 +509,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -523,8 +523,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -630,8 +630,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -644,8 +644,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -755,8 +755,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -769,8 +769,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -892,8 +892,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -906,8 +906,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -1024,8 +1024,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -1038,8 +1038,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -1158,8 +1158,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -1172,8 +1172,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -1290,8 +1290,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -1304,8 +1304,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -1456,8 +1456,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -1470,8 +1470,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -1632,8 +1632,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -1646,8 +1646,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -1812,8 +1812,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -1826,8 +1826,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -1982,8 +1982,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -1996,8 +1996,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -2148,8 +2148,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -2162,8 +2162,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -2314,8 +2314,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -2328,8 +2328,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -2478,8 +2478,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -2492,8 +2492,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -2633,8 +2633,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -2647,8 +2647,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -2797,8 +2797,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -2811,8 +2811,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -2952,8 +2952,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -2966,8 +2966,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -3127,8 +3127,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -3141,8 +3141,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -3247,8 +3247,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -3261,8 +3261,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -3362,8 +3362,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -3376,8 +3376,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -3488,8 +3488,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -3502,8 +3502,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -3650,8 +3650,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -3664,8 +3664,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -3803,8 +3803,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -3817,8 +3817,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -3924,8 +3924,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -3938,8 +3938,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -4048,8 +4048,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -4062,8 +4062,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -4171,8 +4171,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -4185,8 +4185,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -4309,8 +4309,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -4323,8 +4323,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -4447,8 +4447,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -4461,8 +4461,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -4578,8 +4578,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -4592,8 +4592,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -4716,8 +4716,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -4730,8 +4730,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -4839,8 +4839,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -4853,8 +4853,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -4977,8 +4977,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -4991,8 +4991,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -5107,8 +5107,8 @@
}
}
},
"400" : {
"description" : "Bad Request",
"500" : {
"description" : "Internal Server Error",
"content" : {
"application/json" : {
"schema" : {
@ -5121,8 +5121,8 @@
}
}
},
"500" : {
"description" : "Internal Server Error",
"400" : {
"description" : "Bad Request",
"content" : {
"application/json" : {
"schema" : {
@ -5463,6 +5463,14 @@
"badOverturned" : {
"type" : "integer",
"format" : "int32"
},
"goodInBatch" : {
"type" : "integer",
"format" : "int32"
},
"badInBatch" : {
"type" : "integer",
"format" : "int32"
}
}
},
@ -5793,8 +5801,49 @@
"type" : "string",
"format" : "date-time"
},
"previousRunSuccessfull" : {
"type" : "boolean"
"currentRun" : {
"type" : "string",
"format" : "date-time"
},
"previousSuccessfulRun" : {
"type" : "string",
"format" : "date-time"
},
"previousSuccessfulExecutionTime" : {
"type" : "object",
"properties" : {
"seconds" : {
"type" : "integer",
"format" : "int64"
},
"negative" : {
"type" : "boolean"
},
"zero" : {
"type" : "boolean"
},
"nano" : {
"type" : "integer",
"format" : "int32"
},
"units" : {
"type" : "array",
"items" : {
"type" : "object",
"properties" : {
"durationEstimated" : {
"type" : "boolean"
},
"dateBased" : {
"type" : "boolean"
},
"timeBased" : {
"type" : "boolean"
}
}
}
}
}
},
"lastFailedRunMessage" : {
"type" : "string"
@ -5840,15 +5889,15 @@
"type" : "integer",
"format" : "int64"
},
"nano" : {
"type" : "integer",
"format" : "int32"
"negative" : {
"type" : "boolean"
},
"zero" : {
"type" : "boolean"
},
"negative" : {
"type" : "boolean"
"nano" : {
"type" : "integer",
"format" : "int32"
},
"units" : {
"type" : "array",

View file

@ -6,12 +6,14 @@ package com.griddynamics.msd365fp.manualreview.analytics;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableScheduling;
import reactor.core.scheduler.Schedulers;
@SpringBootApplication
@EnableScheduling
public class AnalyticsApplication {
public static void main(String[] args) {
Schedulers.enableMetrics();
SpringApplication.run(AnalyticsApplication.class, args);
}
}

View file

@ -7,11 +7,18 @@ package com.griddynamics.msd365fp.manualreview.analytics.config;
import lombok.AccessLevel;
import lombok.NoArgsConstructor;
import java.time.Instant;
import java.time.OffsetDateTime;
import java.time.ZoneId;
@NoArgsConstructor(access = AccessLevel.PRIVATE)
@SuppressWarnings("java:S2386")
public class Constants {
public static final OffsetDateTime ELDEST_APPLICATION_DATE =
OffsetDateTime.ofInstant(Instant.ofEpochMilli(0), ZoneId.systemDefault());
public static final String DEFAULT_PAGE_REQUEST_SIZE_STR = "1000";
public static final int DEFAULT_PAGE_REQUEST_SIZE = 1000;


@ -23,6 +23,7 @@ public class ApplicationProperties {
private final Map<String, TaskProperties> tasks;
private final double taskResetTimeoutMultiplier;
private final double taskWarningTimeoutMultiplier;
private final double taskSuccessfulRunsTimeoutMultiplier;
private final TaskExecutor taskExecutor;
@AllArgsConstructor


@ -32,4 +32,8 @@ public class ItemLabelingMetricDTO {
private int goodOverturned = 0;
@Builder.Default
private int badOverturned = 0;
@Builder.Default
private int goodInBatch = 0;
@Builder.Default
private int badInBatch = 0;
}


@ -3,13 +3,18 @@
package com.griddynamics.msd365fp.manualreview.analytics.model.persistence;
import com.fasterxml.jackson.databind.annotation.JsonDeserialize;
import com.fasterxml.jackson.databind.annotation.JsonSerialize;
import com.griddynamics.msd365fp.manualreview.model.TaskStatus;
import com.griddynamics.msd365fp.manualreview.model.jackson.FlexibleDateFormatDeserializer;
import com.griddynamics.msd365fp.manualreview.model.jackson.ISOStringDateTimeSerializer;
import com.microsoft.azure.spring.data.cosmosdb.core.mapping.Document;
import com.microsoft.azure.spring.data.cosmosdb.core.mapping.PartitionKey;
import lombok.*;
import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.Version;
import java.time.Duration;
import java.time.OffsetDateTime;
import static com.griddynamics.msd365fp.manualreview.analytics.config.Constants.TASK_CONTAINER_NAME;
@ -25,8 +30,13 @@ public class Task {
@PartitionKey
private String id;
private TaskStatus status;
@JsonSerialize(using = ISOStringDateTimeSerializer.class)
private OffsetDateTime previousRun;
private Boolean previousRunSuccessfull;
@JsonSerialize(using = ISOStringDateTimeSerializer.class)
private OffsetDateTime currentRun;
@JsonSerialize(using = ISOStringDateTimeSerializer.class)
private OffsetDateTime previousSuccessfulRun;
private Duration previousSuccessfulExecutionTime;
private String lastFailedRunMessage;
private String instanceId;


@ -81,6 +81,24 @@ public interface ItemLabelActivityRepositoryCustomMethods {
final Set<String> analystIds,
final Set<String> queueIds);
/**
* Calculate batch decisions performance by provided query parameters.
* Buckets are separated by labels and merchantRuleDecisions.
* Bucket ids and bucket numbers are null
* because the method doesn't differentiate
* results by query parameters.
*
* @param startDateTime a time bound
* @param endDateTime a time bound
* @param analystIds a list of analyst ids for filtering;
* if it's empty, all analysts are counted
* @return the list of buckets
*/
List<ItemLabelingBucket> getBatchPerformance(
@NonNull final OffsetDateTime startDateTime,
@NonNull final OffsetDateTime endDateTime,
final Set<String> analystIds);
/**
* Calculate overall spent time by provided query parameters.
* Buckets are separated by labels.


@ -41,8 +41,9 @@ public class ItemLabelActivityRepositoryImpl implements ItemLabelActivityReposit
String.format(
"SELECT VALUE root FROM " +
"(SELECT c.label, c.merchantRuleDecision, count(c.label) AS cnt, c.queueId AS id, FLOOR((c.labeled-%1$s)/%3$s) AS bucket " +
"FROM c where " +
"FROM c WHERE " +
"(c.labeled BETWEEN %1$s AND %2$s) " +
"AND IS_DEFINED(c.queueId) AND NOT IS_NULL(c.queueId) " +
"%4$s " +
"%5$s " +
"group by c.queueId, FLOOR((c.labeled-%1$s)/%3$s), c.label, c.merchantRuleDecision) " +
@ -70,8 +71,9 @@ public class ItemLabelActivityRepositoryImpl implements ItemLabelActivityReposit
String.format(
"SELECT VALUE root FROM " +
"(SELECT c.label, c.merchantRuleDecision, count(c.label) AS cnt, c.analystId as id, FLOOR((c.labeled-%1$s)/%3$s) AS bucket " +
"FROM c where " +
"FROM c WHERE " +
"(c.labeled BETWEEN %1$s AND %2$s) " +
"AND IS_DEFINED(c.queueId) AND NOT IS_NULL(c.queueId) " +
"%4$s " +
"%5$s " +
"group by c.analystId, FLOOR((c.labeled-%1$s)/%3$s), c.label, c.merchantRuleDecision) " +
@ -99,8 +101,9 @@ public class ItemLabelActivityRepositoryImpl implements ItemLabelActivityReposit
String.format(
"SELECT VALUE root FROM " +
"(SELECT c.label, c.merchantRuleDecision, count(c.label) AS cnt " +
"FROM c where " +
"FROM c WHERE " +
"(c.labeled BETWEEN %1$s AND %2$s) " +
"AND IS_DEFINED(c.queueId) AND NOT IS_NULL(c.queueId) " +
"%3$s " +
"%4$s " +
"group by c.label, c.merchantRuleDecision) " +
@ -117,6 +120,30 @@ public class ItemLabelActivityRepositoryImpl implements ItemLabelActivityReposit
.collect(Collectors.toList());
}
@Override
public List<ItemLabelingBucket> getBatchPerformance(@NonNull final OffsetDateTime startDateTime,
@NonNull final OffsetDateTime endDateTime,
final Set<String> analystIds) {
return itemLabelActivityContainer.runCrossPartitionQuery(
String.format(
"SELECT VALUE root FROM " +
"(SELECT c.label, c.merchantRuleDecision, count(c.label) AS cnt " +
"FROM c WHERE " +
"(c.labeled BETWEEN %1$s AND %2$s) " +
"AND (NOT IS_DEFINED(c.queueId) OR IS_NULL(c.queueId)) " +
"%3$s " +
"group by c.label, c.merchantRuleDecision) " +
"AS root",
startDateTime.toEpochSecond(),
endDateTime.toEpochSecond(),
CollectionUtils.isEmpty(analystIds) ? "" :
String.format("AND c.analystId IN ('%1$s') ", String.join("','", analystIds))))
.map(cip -> itemLabelActivityContainer.castCosmosObjectToClassInstance(cip.toJson(), ItemLabelingBucket.class))
.filter(Optional::isPresent)
.map(Optional::get)
.collect(Collectors.toList());
}
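A note on the new aggregation: batch decisions are distinguished from queue-based ones purely by the absence of c.queueId, which is why the other queries above gained the IS_DEFINED/NOT IS_NULL guard. Below is a minimal sketch of how the optional analyst filter fragment is rendered; the ids are hypothetical and CollectionUtils is assumed to be Spring's helper.
import org.springframework.util.CollectionUtils;
import java.util.Set;
// With analystIds = {"a1", "a2"} the format/join pair renders
// "AND c.analystId IN ('a1','a2') "; an empty set collapses to "",
// so all analysts are counted, as the repository Javadoc states.
Set<String> analystIds = Set.of("a1", "a2");
String filter = CollectionUtils.isEmpty(analystIds) ? "" :
        String.format("AND c.analystId IN ('%1$s') ", String.join("','", analystIds));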
@Override
public List<LabelingTimeBucket> getSpentTime(@NonNull final OffsetDateTime startDateTime,
@NonNull final OffsetDateTime endDateTime,
@ -126,8 +153,9 @@ public class ItemLabelActivityRepositoryImpl implements ItemLabelActivityReposit
String.format(
"SELECT VALUE root FROM " +
"(SELECT c.label, sum(c.decisionApplyingDuration) AS totalDuration, count(c.labeled) AS cnt " +
"FROM c where " +
"FROM c WHERE " +
"(c.labeled BETWEEN %1$s AND %2$s) " +
"AND IS_DEFINED(c.queueId) AND NOT IS_NULL(c.queueId) " +
"%3$s " +
"%4$s " +
"group by c.label) " +
@ -160,13 +188,14 @@ public class ItemLabelActivityRepositoryImpl implements ItemLabelActivityReposit
+ " Count(1) as count "
+ "FROM ("
+ "SELECT "
+ " udf.getBucketNumber(c.riskScore,%1$s) as risk_score_bucket, "
+ " FLOOR(c.riskScore/%1$s) as risk_score_bucket, "
+ " c.label "
+ "FROM c "
+ "WHERE "
+ " (c.labeled BETWEEN %2$s AND %3$s)"
+ " AND IS_DEFINED(c.riskScore) "
+ " AND NOT IS_NULL(c.riskScore) "
+ " AND IS_DEFINED(c.queueId) AND NOT IS_NULL(c.queueId) "
+ " %4$s "
+ " %5$s "
+ " %6$s "


@ -34,6 +34,7 @@ public class ItemLockActivityRepositoryImpl implements ItemLockActivityRepositor
"(SELECT c.actionType, sum(c.released-c.locked) AS totalDuration, count(c.released) AS cnt " +
"FROM c where " +
"(c.released BETWEEN %1$s AND %2$s) " +
"AND IS_DEFINED(c.queueId) AND NOT IS_NULL(c.queueId) " +
"%3$s " +
"%4$s " +
"group by c.actionType) " +


@ -21,9 +21,12 @@ import org.modelmapper.ModelMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;
import java.time.Duration;
import java.time.OffsetDateTime;
import java.util.LinkedList;
import java.util.List;
import java.util.UUID;
@ -102,6 +105,7 @@ public class StreamService implements HealthCheckProcessor {
healthCheckRepository.save(hc);
});
List<Mono<Void>> sendings = new LinkedList<>();
processorRegistry.forEach((hub, client) -> {
for (int i = 0; i < healthCheckBatchSize; i++) {
HealthCheck healthCheck = HealthCheck.builder()
@ -114,20 +118,25 @@ public class StreamService implements HealthCheckProcessor {
.type(EVENT_HUB_CONSUMER)
.generatedBy(applicationProperties.getInstanceId())
.active(true)
.created(OffsetDateTime.now())
.ttl(healthCheckTtl.toSeconds())
._etag("new")
.build();
client.sendHealthCheckPing(healthCheck.getId(), () -> {
try {
healthCheckRepository.save(healthCheck);
} catch (CosmosDBAccessException e) {
log.debug("Receiver already inserted this [{}] health-check entry", healthCheck.getId());
}
});
sendings.add(client.sendHealthCheckPing(healthCheck.getId())
.doOnSuccess(v -> {
try {
healthCheck.setCreated(OffsetDateTime.now());
healthCheckRepository.save(healthCheck);
} catch (CosmosDBAccessException e) {
log.debug("Receiver already inserted this [{}] health-check entry", healthCheck.getId());
}
}));
healthCheckNum++;
}
});
Mono.zipDelayError(sendings, results -> results)
.subscribeOn(Schedulers.boundedElastic())
.subscribe();
return overdueHealthChecks.isEmpty();
}
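The collected sendings are flushed with Mono.zipDelayError, so every health-check send runs to completion and one failing hub does not cancel the others. A self-contained sketch of that semantics (illustration only, not project code):
import reactor.core.publisher.Mono;
import java.util.List;
// All three sources are subscribed; the failure of the second one
// surfaces only after the remaining Monos have completed.
List<Mono<String>> sendings = List.of(
        Mono.just("ok-1"),
        Mono.error(new IllegalStateException("send failed")),
        Mono.just("ok-2"));
try {
    Mono.zipDelayError(sendings, results -> results).block();
} catch (RuntimeException e) {
    System.out.println("delayed error: " + e.getMessage());
}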


@ -27,6 +27,7 @@ import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.StreamSupport;
import static com.griddynamics.msd365fp.manualreview.analytics.config.Constants.ELDEST_APPLICATION_DATE;
import static com.griddynamics.msd365fp.manualreview.analytics.config.Constants.INCORRECT_CONFIG_STATUS;
import static com.griddynamics.msd365fp.manualreview.analytics.config.ScheduledJobsConfig.*;
import static com.griddynamics.msd365fp.manualreview.model.TaskStatus.READY;
@ -148,43 +149,53 @@ public class TaskService {
// Restore task if it's stuck
if (task != null && !taskLaunched) {
restoreTaskIfStuck(task, taskProperties);
processTaskFreezes(task, taskProperties);
}
});
}
private boolean isTaskReadyForExecutionNow(Task task, ApplicationProperties.TaskProperties taskProperties) {
return task.getPreviousRun() == null ||
task.getPreviousRun().plus(taskProperties.getDelay()).isBefore(OffsetDateTime.now());
return READY.equals(task.getStatus()) &&
(task.getPreviousRun() == null ||
task.getPreviousRun().plus(taskProperties.getDelay()).isBefore(OffsetDateTime.now()));
}
@SuppressWarnings("java:S1854")
private void restoreTaskIfStuck(Task task, ApplicationProperties.TaskProperties taskProperties) {
Duration timeAfterPreviousRun;
if (task.getPreviousRun() != null) {
timeAfterPreviousRun = Duration.between(
task.getPreviousRun(), OffsetDateTime.now());
} else {
timeAfterPreviousRun = Duration.between(OffsetDateTime.MIN, OffsetDateTime.now());
}
private void processTaskFreezes(Task task, ApplicationProperties.TaskProperties taskProperties) {
Duration timeout = Objects.requireNonNullElse(taskProperties.getTimeout(), taskProperties.getDelay());
Duration acceptableDelayBeforeWarning = Duration.ofSeconds(
(long) (timeout.toSeconds() * applicationProperties.getTaskWarningTimeoutMultiplier()));
Duration acceptableDelayBeforeReset = Duration.ofSeconds(
(long) (timeout.toSeconds() * applicationProperties.getTaskResetTimeoutMultiplier()));
if (timeAfterPreviousRun.compareTo(acceptableDelayBeforeWarning) > 0) {
log.warn("Task [{}] is idle for too long. Last execution was [{}] minutes ago with status message: [{}]",
task.getId(), timeAfterPreviousRun.toMinutes(), task.getLastFailedRunMessage());
OffsetDateTime previousSuccessfulRun = Objects.requireNonNullElse(task.getPreviousSuccessfulRun(), ELDEST_APPLICATION_DATE);
OffsetDateTime previousRun = Objects.requireNonNullElse(task.getPreviousRun(), ELDEST_APPLICATION_DATE);
OffsetDateTime currentRun = Objects.requireNonNullElse(task.getCurrentRun(), ELDEST_APPLICATION_DATE);
OffsetDateTime now = OffsetDateTime.now();
Duration runWithoutSuccess = Duration.between(previousSuccessfulRun, now);
Duration acceptableDelayWithoutSuccessfulRuns = Duration.ofSeconds(
(long) (timeout.toSeconds() * applicationProperties.getTaskSuccessfulRunsTimeoutMultiplier()));
if (previousSuccessfulRun.isBefore(previousRun) &&
runWithoutSuccess.compareTo(acceptableDelayWithoutSuccessfulRuns) > 0) {
log.warn("Background task [{}] issue. No successful executions during [{}] minutes. Last Fail reason: [{}].",
task.getId(), runWithoutSuccess.toMinutes(), task.getLastFailedRunMessage());
}
if (!READY.equals(task.getStatus()) && timeAfterPreviousRun.compareTo(acceptableDelayBeforeReset) > 0) {
try {
log.info("Start [{}] task restore", task.getId());
task.setStatus(READY);
task.setLastFailedRunMessage("Restored after long downtime");
taskRepository.save(task);
log.info("Task [{}] has been restored", task.getId());
} catch (CosmosDBAccessException e) {
log.warn("Task [{}] recovering ended with a conflict: {}", task.getId(), e.getMessage());
if (!READY.equals(task.getStatus())) {
Duration currentRunDuration = Duration.between(currentRun, now);
Duration acceptableDelayBeforeWarning = Duration.ofSeconds(
(long) (timeout.toSeconds() * applicationProperties.getTaskWarningTimeoutMultiplier()));
Duration acceptableDelayBeforeReset = Duration.ofSeconds(
(long) (timeout.toSeconds() * applicationProperties.getTaskResetTimeoutMultiplier()));
if (currentRunDuration.compareTo(acceptableDelayBeforeWarning) > 0) {
log.warn("Background task [{}] issue. Idle for too long. Last execution was [{}] minutes ago with status message: [{}].",
task.getId(), currentRunDuration.toMinutes(), task.getLastFailedRunMessage());
}
if (currentRunDuration.compareTo(acceptableDelayBeforeReset) > 0) {
try {
log.info("Start [{}] task restore", task.getId());
task.setStatus(READY);
task.setLastFailedRunMessage("Restored after long downtime");
taskRepository.save(task);
log.info("Task [{}] has been restored", task.getId());
} catch (CosmosDBAccessException e) {
log.warn("Task [{}] recovering ended with a conflict: {}", task.getId(), e.getMessage());
}
}
}
}
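For orientation, a hedged numeric sketch of the three thresholds applied above, assuming a task timeout of PT10M and the multipliers from the shipped configs (warning 2.0, reset 4.0, successful-runs 8.0):
import java.time.Duration;
// Not-READY task: warning after 20 idle minutes, reset to READY after 40.
// Independent of status: a "no successful runs" warning after 80 minutes.
Duration timeout = Duration.ofMinutes(10);
Duration warnAfter = Duration.ofSeconds((long) (timeout.toSeconds() * 2.0));      // PT20M
Duration resetAfter = Duration.ofSeconds((long) (timeout.toSeconds() * 4.0));     // PT40M
Duration noSuccessAfter = Duration.ofSeconds((long) (timeout.toSeconds() * 8.0)); // PT1H20M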
@ -195,7 +206,7 @@ public class TaskService {
taskRepository.save(Task.builder()
.id(taskName)
._etag(taskName)
.previousRun(OffsetDateTime.now().minus(properties.getDelay()))
.previousRun(ELDEST_APPLICATION_DATE)
.status(READY)
.build());
log.info("Task [{}] has been initialized successfully.", taskName);
@ -234,10 +245,6 @@ public class TaskService {
*/
@SuppressWarnings("java:S2326")
private <T, E extends Exception> boolean executeTask(Task task) {
ApplicationProperties.TaskProperties taskProperties =
applicationProperties.getTasks().get(task.getId());
TaskExecution<Object, Exception> taskExecution = taskExecutions.get(task.getId());
// check possibility to execute
if (!READY.equals(task.getStatus())) {
return false;
@ -247,8 +254,12 @@ public class TaskService {
OffsetDateTime startTime = OffsetDateTime.now();
task.setStatus(RUNNING);
task.setInstanceId(applicationProperties.getInstanceId());
if (task.getPreviousRun() == null){
task.setPreviousRun(startTime);
task.setCurrentRun(startTime);
if (task.getPreviousRun() == null) {
task.setPreviousRun(ELDEST_APPLICATION_DATE);
}
if (task.getPreviousSuccessfulRun() == null) {
task.setPreviousSuccessfulRun(ELDEST_APPLICATION_DATE);
}
Task runningTask;
try {
@ -266,6 +277,9 @@ public class TaskService {
log.info("Task [{}] started its execution.", runningTask.getId());
// launch execution
ApplicationProperties.TaskProperties taskProperties =
applicationProperties.getTasks().get(task.getId());
TaskExecution<Object, Exception> taskExecution = taskExecutions.get(task.getId());
CompletableFuture
.supplyAsync(() -> {
try {
@ -278,20 +292,26 @@ public class TaskService {
Objects.requireNonNullElse(taskProperties.getTimeout(), taskProperties.getDelay()).toMillis(),
TimeUnit.MILLISECONDS)
.whenComplete((result, exception) -> {
Duration duration = Duration.between(startTime, OffsetDateTime.now());
runningTask.setStatus(READY);
runningTask.setPreviousRun(startTime);
runningTask.setPreviousRunSuccessfull(true);
if (exception != null) {
log.warn("Task [{}] finished its execution with an exception.",
runningTask.getId(), exception);
log.warn("Task [{}] finished its execution with an exception in [{}].",
runningTask.getId(), duration.toString());
log.warn("Task [{}] exception", runningTask.getId(), exception);
runningTask.setLastFailedRunMessage(exception.getMessage());
runningTask.setPreviousRunSuccessfull(false);
taskRepository.save(runningTask);
} else if (result.isEmpty()) {
log.info("Task [{}] finished its execution with empty result.", runningTask.getId());
} else {
log.info("Task [{}] finished its execution successfully. Result: [{}]",
runningTask.getId(), result.get());
runningTask.setPreviousSuccessfulRun(startTime);
runningTask.setPreviousSuccessfulExecutionTime(duration);
if (result.isEmpty()) {
log.info("Task [{}] finished its execution with empty result in [{}].",
runningTask.getId(), duration);
} else {
log.info("Task [{}] finished its execution successfully in [{}]. Result: [{}]",
runningTask.getId(), duration, result.get());
}
}
taskRepository.save(runningTask);
});


@ -36,6 +36,17 @@ public class PublicItemLabelingHistoryClient {
queueIds);
}
@PreAuthorize("@dataSecurityService.checkPermissionForQueuePerformanceReading(authentication, #analystIds)")
public List<ItemLabelingBucket> getBatchLabelingSummary(
@NonNull final OffsetDateTime from,
@NonNull final OffsetDateTime to,
final Set<String> analystIds) {
return labelActivityRepository.getBatchPerformance(
from,
to,
analystIds);
}
@PreAuthorize("@dataSecurityService.checkPermissionForQueuePerformanceReading(authentication, #analystIds)")
public List<ItemLabelingBucket> getItemLabelingHistoryGroupedByQueues(
@NonNull final OffsetDateTime from,


@ -66,6 +66,14 @@ public class PublicItemLabelingMetricService {
mapOverturnedDecisions(bucket, totalResult);
});
List<ItemLabelingBucket> batchDBResult = labelingClient.getBatchLabelingSummary(
from,
to,
analystIds);
batchDBResult.forEach(bucket -> {
mapBatchDecisions(bucket, totalResult);
});
calculateDerivedItemLabelingMetrics(totalResult);
return totalResult;
@ -327,6 +335,19 @@ public class PublicItemLabelingMetricService {
}
}
private void mapBatchDecisions(ItemLabelingBucket bucket, ItemLabelingMetricDTO performance) {
switch (bucket.getLabel()) {
case GOOD:
performance.setGoodInBatch(performance.getGoodInBatch() + bucket.getCnt());
break;
case BAD:
performance.setBadInBatch(performance.getBadInBatch() + bucket.getCnt());
break;
default:
break;
}
}
private void mapRiskScoreDistribution(LabelBucket labelBucket,
RiskScoreOverviewDTO.RiskScoreBucketDTO riskScoreBucketDTO) {
switch (labelBucket.getLabel()) {


@ -5,13 +5,14 @@ mr:
instance-type: prim
task-reset-timeout-multiplier: 4.0
task-warning-timeout-multiplier: 2.0
task-successful-runs-timeout-multiplier: 8.0
tasks:
prim-health-analysis-task:
enabled: true
delay: PT10M
delay: PT1M
sec-health-analysis-task:
enabled: false
delay: PT10M
delay: PT1M
resolution-send-task:
enabled: true
delay: PT1M
@ -54,12 +55,24 @@ azure:
token-cache-size: 500
token-cache-retention: PT10M
event-hub:
checkpoint-interval: PT3M
sending-timeout: PT10M
sending-retries: 3
health-check-ttl: PT24H
health-check-batch-size: 5
health-check-batch-size: 1
health-check-allowed-delay: PT60M
consumers:
item-lock-event-hub:
checkpoint-interval: PT1M
item-label-event-hub:
checkpoint-interval: PT1M
item-resolution-event-hub:
checkpoint-interval: PT1M
item-assignment-event-hub:
checkpoint-interval: PT1M
queue-size-event-hub:
checkpoint-interval: PT1M
queue-update-event-hub:
checkpoint-interval: PT1M
overall-size-event-hub:
checkpoint-interval: PT1M
swagger:
# the https://cors-anywhere.herokuapp.com/ prefix is only for dev environments


@ -5,13 +5,14 @@ mr:
instance-type: prim
task-reset-timeout-multiplier: 4.0
task-warning-timeout-multiplier: 2.0
task-successful-runs-timeout-multiplier: 8.0
tasks:
prim-health-analysis-task:
enabled: true
delay: PT10M
delay: PT1M
sec-health-analysis-task:
enabled: false
delay: PT10M
delay: PT1M
resolution-send-task:
enabled: true
delay: PT10M
@ -50,12 +51,24 @@ azure:
token-cache-size: 500
token-cache-retention: PT10M
event-hub:
checkpoint-interval: PT3M
sending-timeout: PT10M
sending-retries: 3
health-check-ttl: PT24H
health-check-batch-size: 5
health-check-batch-size: 1
health-check-allowed-delay: PT60M
consumers:
item-lock-event-hub:
checkpoint-interval: PT1M
item-label-event-hub:
checkpoint-interval: PT1M
item-resolution-event-hub:
checkpoint-interval: PT1M
item-assignment-event-hub:
checkpoint-interval: PT1M
queue-size-event-hub:
checkpoint-interval: PT1M
queue-update-event-hub:
checkpoint-interval: PT1M
overall-size-event-hub:
checkpoint-interval: PT1M
swagger:
token-url: https://login.microsoftonline.com/${CLIENT_TENANT_ID}/oauth2/v2.0/token


@ -15,13 +15,14 @@ mr:
instance-id: ${WEBSITE_INSTANCE_ID}
task-reset-timeout-multiplier: 4.0
task-warning-timeout-multiplier: 2.0
task-successful-runs-timeout-multiplier: 8.0
tasks:
prim-health-analysis-task:
enabled: true
delay: PT10M
delay: PT1M
sec-health-analysis-task:
enabled: false
delay: PT10M
delay: PT1M
resolution-send-task:
enabled: true
delay: PT1M
@ -82,34 +83,38 @@ azure:
connection-string: ${spring-cloud-azure-eventhub-connection-string:${EVENT_HUB_CONNECTION_STRING}}
checkpoint-storage-account: ${EVENT_HUB_OFFSET_STORAGE_NAME}
checkpoint-connection-string: DefaultEndpointsProtocol=https;AccountName=${EVENT_HUB_OFFSET_STORAGE_NAME};AccountKey=${spring-cloud-azure-eventhub-checkpoint-access-key:${EVENT_HUB_OFFSET_STORAGE_KEY}};EndpointSuffix=core.windows.net
checkpoint-interval: PT1M
sending-timeout: PT10M
sending-retries: 3
health-check-ttl: PT24H
health-check-batch-size: 5
health-check-batch-size: 1
health-check-allowed-delay: PT1H
consumers:
item-lock-event-hub:
destination: item-lock-event-hub
group: ${spring.application.name}
checkpoint-interval: PT1M
item-label-event-hub:
destination: item-label-event-hub
group: ${spring.application.name}
checkpoint-interval: PT1M
item-resolution-event-hub:
destination: item-resolution-event-hub
group: ${spring.application.name}
checkpoint-interval: PT1M
item-assignment-event-hub:
destination: item-assignment-event-hub
group: ${spring.application.name}
checkpoint-interval: PT1M
queue-size-event-hub:
destination: queue-size-event-hub
group: ${spring.application.name}
checkpoint-interval: PT1M
queue-update-event-hub:
destination: queue-update-event-hub
group: ${spring.application.name}
checkpoint-interval: PT1M
overall-size-event-hub:
destination: overall-size-event-hub
group: ${spring.application.name}
checkpoint-interval: PT1M
swagger:
auth-url: https://login.microsoftonline.com/${CLIENT_TENANT_ID}/oauth2/authorize?resource=${CLIENT_ID}


@ -1,6 +0,0 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
function getBucketNumber(value, bucket_size){
return Math.floor(value / bucket_size);
}
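The deleted getBucketNumber UDF is superseded by the inline FLOOR(c.riskScore/%1$s) expression in the risk-score query above. A small equivalence sketch:
// Both the removed JS UDF and the inline Cosmos expression compute
// floor(value / bucketSize) as the bucket index.
static long bucketNumber(double value, double bucketSize) {
    return (long) Math.floor(value / bucketSize);
}
// bucketNumber(237, 100) == 2, the same bucket FLOOR(237 / 100) yields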


@ -0,0 +1,24 @@
package com.griddynamics.msd365fp.manualreview.cosmos.utilities;
import lombok.experimental.UtilityClass;
import lombok.extern.slf4j.Slf4j;
import org.springframework.lang.NonNull;
@Slf4j
@UtilityClass
public class IdUtility {
public String encodeRestrictedChars(@NonNull String id) {
String result = id
.replace("%", ".25")
.replace("/", ".2F")
.replace("\\", ".5C")
.replace("?", ".3F")
.replace("#", ".23");
if (!result.equals(id)) {
log.error("Id [{}] contains one of restricted values (%, /, \\, ?, #)", id);
}
return result;
}
}
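A short usage sketch for the new utility (the id below is made up):
// "%" -> ".25", "/" -> ".2F", "?" -> ".3F", etc., so:
String safe = IdUtility.encodeRestrictedChars("orders/2020%11?23");
// safe == "orders.2F2020.2511.3F23"; ids without %, /, \, ? or # pass through unchanged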


@ -21,9 +21,6 @@ public class EventHubProperties {
private final String connectionString;
private final String checkpointStorageAccount;
private final String checkpointConnectionString;
private final Duration checkpointInterval;
private final Duration sendingTimeout;
private final long sendingRetries;
private final Map<String, ProducerProperties> producers;
private final Map<String, ConsumerProperties> consumers;
@ -32,6 +29,9 @@ public class EventHubProperties {
@ToString
public static class ProducerProperties {
private final String destination;
private final Duration sendingPeriod;
private final long sendingWorkers;
private final int bufferSize;
}
@AllArgsConstructor
@ -39,6 +39,7 @@ public class EventHubProperties {
public static class ConsumerProperties {
private final String destination;
private final String group;
private final Duration checkpointInterval;
}
}


@ -17,17 +17,18 @@ import com.griddynamics.msd365fp.manualreview.ehub.durable.model.HealthCheckProc
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tags;
import io.micrometer.core.instrument.Timer;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import reactor.core.scheduler.Schedulers;
import org.apache.commons.lang3.tuple.Pair;
import reactor.core.publisher.Mono;
import java.time.Duration;
import java.time.OffsetDateTime;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.Set;
import java.util.*;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;
@ -41,6 +42,9 @@ public class DurableEventHubProcessorClient<T> {
public static final int MAX_EHUB_PARTITIONS = 32;
public static final String MR_HEALTH_CHECK_PREFIX = "{\"mr-eh-health-check\":true,\"checkId\":\"";
public static final String MR_HEALTH_CHECK_SUFFIX = "\"}";
public static final int HEALTH_CHECK_QUEUE_CAPACITY = 100;
public static final int HEALTH_CHECK_WORKERS = 3;
public static final int HEALTH_CHECK_MAX_BATCH_SIZE = 1;
private final EventHubProperties properties;
private final String hubName;
@ -60,32 +64,59 @@ public class DurableEventHubProcessorClient<T> {
private final Map<String, OffsetDateTime> localCheckpoints = new ConcurrentHashMap<>();
private EventProcessorClient internalClient;
private EventHubProducerAsyncClient healthcheckClient;
private Counter healthCheckSendingCounter;
private Counter healthCheckSendingErrorCounter;
private final List<DurableEventHubProducerWorker> healthcheckProducerWorkers = new LinkedList<>();
private final LinkedBlockingQueue<Pair<EventData, CompletableFuture<Object>>> healthcheckQueue =
new LinkedBlockingQueue<>(HEALTH_CHECK_QUEUE_CAPACITY);
private final Counter healthcheckOfferingCounter;
private final Counter healthcheckSendingCounter;
private final Counter healthcheckErrorCounter;
private final Timer healthcheckSendingTimer;
public void sendHealthCheckPing(String id, Runnable callback) {
if (healthcheckClient != null) {
EventData data = new EventData(MR_HEALTH_CHECK_PREFIX + id + MR_HEALTH_CHECK_SUFFIX);
healthcheckClient.send(Set.of(data))
.timeout(properties.getSendingTimeout())
.retry(properties.getSendingRetries())
.doOnSuccess(res -> {
log.debug("Health-check [{}] has been successfully sent.", id);
healthCheckSendingCounter.increment();
callback.run();
})
.doOnError(e -> {
log.warn("Error during health-check [{}] sending.", id, e);
healthCheckSendingErrorCounter.increment();
})
.subscribeOn(Schedulers.elastic())
.subscribe();
} else {
log.warn("EH healthcheck is called before client initialization");
}
public DurableEventHubProcessorClient(final EventHubProperties properties,
final String hubName,
final ObjectMapper mapper,
final Class<T> klass,
final Consumer<T> eventProcessor,
final Consumer<Throwable> errorProcessor,
final HealthCheckProcessor healthcheckProcessor,
final MeterRegistry meterRegistry) {
this.properties = properties;
this.hubName = hubName;
this.mapper = mapper;
this.klass = klass;
this.eventProcessor = eventProcessor;
this.errorProcessor = errorProcessor;
this.healthcheckProcessor = healthcheckProcessor;
this.meterRegistry = meterRegistry;
this.healthcheckOfferingCounter = meterRegistry.counter(
"event-hub.health-check-offered",
Tags.of(HUB_TAG, hubName));
this.healthcheckSendingCounter = meterRegistry.counter(
"event-hub.health-check-sent",
Tags.of(HUB_TAG, hubName));
this.healthcheckErrorCounter = meterRegistry.counter(
"event-hub.health-check-sendingError",
Tags.of(HUB_TAG, hubName));
this.healthcheckSendingTimer = meterRegistry.timer(
"event-hub.health-check-sendingLatency",
Tags.of(HUB_TAG, hubName));
}
public Mono<Void> sendHealthCheckPing(String id) {
return Mono.just(new EventData(MR_HEALTH_CHECK_PREFIX + id + MR_HEALTH_CHECK_SUFFIX))
.flatMap(data -> {
CompletableFuture<Object> result = new CompletableFuture<>();
if (healthcheckQueue.offer(Pair.of(data, result))) {
healthcheckOfferingCounter.increment();
return Mono.fromFuture(result);
} else {
log.info("A health-check [{}] can't be offered for sending in hub [{}]", id, hubName);
return Mono.empty();
}
})
.then();
}
public synchronized void start() {
@ -121,24 +152,33 @@ public class DurableEventHubProcessorClient<T> {
internalClient = eventProcessorClientBuilder.buildEventProcessorClient();
}
if (healthcheckClient == null) {
healthCheckSendingCounter = meterRegistry.counter(
"event-hub.health-check-sending",
Tags.of(HUB_TAG, hubName));
healthCheckSendingErrorCounter = meterRegistry.counter(
"event-hub.health-check-sending-error",
Tags.of(HUB_TAG, hubName));
healthcheckClient = new EventHubClientBuilder()
.connectionString(
properties.getConnectionString(),
properties.getConsumers().get(hubName).getDestination())
.buildAsyncProducerClient();
while (healthcheckProducerWorkers.size() < HEALTH_CHECK_WORKERS) {
DurableEventHubProducerWorker worker = new DurableEventHubProducerWorker(
healthcheckQueue,
hubName,
HEALTH_CHECK_MAX_BATCH_SIZE,
Duration.ofSeconds(2),
healthcheckSendingCounter,
healthcheckErrorCounter,
healthcheckSendingTimer,
this::createNewClient);
worker.setDaemon(true);
worker.start();
healthcheckProducerWorkers.add(worker);
}
log.info("Start EventHub listening for [{}]", hubName);
internalClient.start();
}
private EventHubProducerAsyncClient createNewClient() {
return new EventHubClientBuilder()
.connectionString(
properties.getConnectionString(),
properties.getConsumers().get(hubName).getDestination())
.buildAsyncProducerClient();
}
protected void onReceive(EventContext eventContext) {
String partition = eventContext.getPartitionContext().getPartitionId();
Long sequenceNumber = eventContext.getEventData().getSequenceNumber();
@ -159,7 +199,7 @@ public class DurableEventHubProcessorClient<T> {
processingLagCounters.get(partition).increment(lag);
if (lag == 0 ||
localCheckpoints.get(partition)
.plus(properties.getCheckpointInterval())
.plus(properties.getConsumers().get(hubName).getCheckpointInterval())
.isBefore(received)) {
log.info("Updating checkpoint for partition [{}] in [{}] on sequence number [{}]",
partition,
@ -257,7 +297,7 @@ public class DurableEventHubProcessorClient<T> {
// prepare local variables
localCheckpoints.computeIfAbsent(
partition,
key -> OffsetDateTime.now().minus(properties.getCheckpointInterval()));
key -> OffsetDateTime.now().minus(properties.getConsumers().get(hubName).getCheckpointInterval()));
log.info("Started receiving on partition [{}] in [{}]", partition, hubName);
}


@ -14,26 +14,42 @@ import com.griddynamics.msd365fp.manualreview.model.event.Event;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tags;
import io.micrometer.core.instrument.Timer;
import lombok.Builder;
import lombok.extern.slf4j.Slf4j;
import reactor.core.scheduler.Schedulers;
import org.apache.commons.lang3.tuple.Pair;
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;
import java.util.Set;
import java.io.Closeable;
import java.time.Duration;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;
@Slf4j
public class DurableEventHubProducerClient {
public class DurableEventHubProducerClient implements Closeable {
public static final String HUB_TAG = "hub";
public static final int MAX_OFFERING_ATTEMPTS = 2;
public static final int MAX_BATCH_SIZE = 100;
public static final Duration MIN_OFFERING_BACKOFF = Duration.ofMillis(100);
private final EventHubProperties properties;
private final EventHubProperties.ProducerProperties hubProperties;
private final String hubName;
private final ObjectMapper mapper;
private final Counter processingCounter;
private final Counter offeringCounter;
private final Counter sendingCounter;
private final Counter errorCounter;
private final Timer sendingTimer;
private final LinkedBlockingQueue<Pair<EventData, CompletableFuture<Object>>> queue;
private final List<DurableEventHubProducerWorker> workers = new LinkedList<>();
private EventHubProducerAsyncClient internalClient;
@Builder
public DurableEventHubProducerClient(final EventHubProperties properties,
@ -41,29 +57,70 @@ public class DurableEventHubProducerClient {
final ObjectMapper mapper,
final MeterRegistry meterRegistry) {
this.properties = properties;
this.hubProperties = properties.getProducers().get(hubName);
this.queue = new LinkedBlockingQueue<>(this.hubProperties.getBufferSize());
this.hubName = hubName;
this.mapper = mapper;
this.processingCounter = meterRegistry.counter(
this.offeringCounter = meterRegistry.counter(
"event-hub.offered",
Tags.of(HUB_TAG, hubName));
this.sendingCounter = meterRegistry.counter(
"event-hub.sent",
Tags.of(HUB_TAG, hubName));
this.errorCounter = meterRegistry.counter(
"event-hub.sendingError",
Tags.of(HUB_TAG, hubName));
this.sendingTimer = meterRegistry.timer(
"event-hub.sendingLatency",
Tags.of(HUB_TAG, hubName));
}
public synchronized void start() {
internalClient = new EventHubClientBuilder()
while (workers.size() < hubProperties.getSendingWorkers()) {
DurableEventHubProducerWorker worker = new DurableEventHubProducerWorker(
queue,
hubName,
MAX_BATCH_SIZE,
hubProperties.getSendingPeriod(),
sendingCounter,
errorCounter,
sendingTimer,
this::createNewClient);
worker.setDaemon(true);
worker.start();
workers.add(worker);
}
}
private EventHubProducerAsyncClient createNewClient() {
return new EventHubClientBuilder()
.connectionString(
properties.getConnectionString(),
properties.getProducers().get(hubName).getDestination())
hubProperties.getDestination())
.buildAsyncProducerClient();
}
public boolean send(final Event event) {
if (internalClient == null) {
return false;
}
public Mono<Void> send(final Event event) {
return Mono.just(event)
.map(this::transformToEventData)
.flatMap(data -> {
CompletableFuture<Object> result = new CompletableFuture<>();
if (queue.offer(Pair.of(data, result))) {
offeringCounter.increment();
return Mono.just(result);
} else {
return Mono.error(new EventHubProducerOverloadedError());
}
})
.retryWhen(Retry.backoff(MAX_OFFERING_ATTEMPTS, MIN_OFFERING_BACKOFF))
.doOnError(e -> log.error("An event [{}] can't be offered for sending in hub [{}]",
event.getId(), hubName))
.flatMap(Mono::fromFuture)
.then();
}
private EventData transformToEventData(final Event event) {
EventData data;
try {
data = new EventData(mapper.writeValueAsString(event));
@ -72,22 +129,34 @@ public class DurableEventHubProducerClient {
hubName,
event.getId(),
event);
return false;
throw new EventHubProducerParsingError(e);
}
internalClient.send(Set.of(data))
.timeout(properties.getSendingTimeout())
.retry(properties.getSendingRetries())
.doOnSuccess(res -> processingCounter.increment())
.doOnError(e -> {
log.error("An error has occurred in hub [{}] during event [{}] sending: {}",
hubName,
event.getId(),
data.getBodyAsString());
errorCounter.increment();
})
.subscribeOn(Schedulers.elastic())
.subscribe();
return true;
return data;
}
@Override
public void close() {
workers.forEach(DurableEventHubProducerWorker::close);
workers.clear();
}
public static class EventHubProducerError extends RuntimeException {
public EventHubProducerError(final Throwable cause) {
super(cause);
}
public EventHubProducerError() {
super();
}
}
public static class EventHubProducerParsingError extends EventHubProducerError {
public EventHubProducerParsingError(final Throwable cause) {
super(cause);
}
}
public static class EventHubProducerOverloadedError extends EventHubProducerError {
}
}
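The send contract changed from a fire-and-forget boolean to a Mono that completes only after a background worker has flushed the batched event to Event Hub. A hedged usage sketch; producerClient and someEvent are illustrative names, not project code:
// Completion now means "actually sent", not merely "accepted";
// a persistently full queue surfaces as EventHubProducerOverloadedError.
producerClient.send(someEvent)
        .doOnError(e -> log.error("Event [{}] was not sent", someEvent.getId(), e))
        .subscribe();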


@ -0,0 +1,145 @@
package com.griddynamics.msd365fp.manualreview.ehub.durable.streaming;
import com.azure.messaging.eventhubs.EventData;
import com.azure.messaging.eventhubs.EventDataBatch;
import com.azure.messaging.eventhubs.EventHubProducerAsyncClient;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.Timer;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.tuple.Pair;
import reactor.core.publisher.Mono;
import java.io.Closeable;
import java.time.Duration;
import java.util.LinkedList;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Supplier;
import java.util.stream.Collectors;
@Slf4j
public class DurableEventHubProducerWorker extends Thread implements Closeable {
private final LinkedBlockingQueue<Pair<EventData, CompletableFuture<Object>>> queue;
private final String hubName;
private final Duration sendingPeriod;
private final Counter sendingCounter;
private final Counter errorCounter;
private final Timer sendingTimer;
private final Supplier<EventHubProducerAsyncClient> clientCreator;
private final int maxBatchSize;
private EventHubProducerAsyncClient localClient = null;
private final LinkedList<Pair<EventData, CompletableFuture<Object>>> buffer = new LinkedList<>();
private final LinkedList<Pair<EventData, CompletableFuture<Object>>> retryBuffer = new LinkedList<>();
private boolean closed = false;
public DurableEventHubProducerWorker(final LinkedBlockingQueue<Pair<EventData, CompletableFuture<Object>>> queue,
final String hubName,
final int maxBatchSize,
final Duration sendingPeriod,
final Counter sendingCounter,
final Counter errorCounter,
final Timer sendingTimer,
final Supplier<EventHubProducerAsyncClient> clientCreator) {
this.queue = queue;
this.hubName = hubName;
this.maxBatchSize = maxBatchSize;
this.sendingPeriod = sendingPeriod;
this.sendingCounter = sendingCounter;
this.errorCounter = errorCounter;
this.sendingTimer = sendingTimer;
this.clientCreator = clientCreator;
}
@Override
public void run() {
while (!closed) {
// prepare tools
if (localClient == null) {
localClient = clientCreator.get();
}
localClient.createBatch()
.map(this::collectDataForBatch)
.flatMap(this::sendBatch)
.onErrorResume(this::processSendingError)
.block();
}
}
private EventDataBatch collectDataForBatch(final EventDataBatch batch) {
boolean couldContinueBatching;
do {
couldContinueBatching = false;
Pair<EventData, CompletableFuture<Object>> event;
if (!retryBuffer.isEmpty()) {
event = retryBuffer.pollFirst();
} else {
event = queue.poll();
}
if (event != null) {
if (batch.tryAdd(event.getLeft())) {
buffer.add(event);
couldContinueBatching = buffer.size() < maxBatchSize;
} else {
retryBuffer.add(event);
}
}
} while (couldContinueBatching);
return batch;
}
private Mono<Void> sendBatch(final EventDataBatch batch) {
if (batch.getCount() > 0) {
Timer.Sample sample = Timer.start();
return localClient.send(batch)
.doOnSuccess(v -> {
sample.stop(sendingTimer);
buffer.forEach(pair -> pair.getRight().complete(""));
sendingCounter.increment(buffer.size());
buffer.clear();
});
} else {
return Mono.delay(sendingPeriod).then();
}
}
private Mono<Void> processSendingError(final Throwable e) {
// log the error
log.warn("An error has occurred in hub [{}] during event batch sending: {}",
hubName,
buffer.stream().map(Pair::getLeft).collect(Collectors.toList()));
log.warn("An error has occurred in hub [{}] during event batch sending.",
hubName, e);
errorCounter.increment();
// initiate client recreation
if (localClient != null) {
localClient.close();
localClient = null;
}
// try to send data back to queue
buffer.addAll(retryBuffer);
retryBuffer.clear();
for (Pair<EventData, CompletableFuture<Object>> event : buffer) {
if (!queue.offer(event)) {
retryBuffer.add(event);
}
}
buffer.clear();
return Mono.empty().then();
}
@Override
public void close() {
closed = true;
}
}


@ -29,6 +29,7 @@ public class DeviceContext implements Serializable {
private String deviceContextId;
private String provider;
private String deviceContextDC;
private String merchantFuzzyDeviceId;
private String externalDeviceId;
private String externalDeviceType;
private String userAgent;


@ -52,20 +52,20 @@ public class User implements Serializable {
@JsonDeserialize(using = FlexibleDateFormatDeserializer.class)
@JsonSerialize(using = ISOStringDateTimeSerializer.class)
private OffsetDateTime phoneNumberValidatedDate;
private BigDecimal totalSpend;
private BigDecimal totalTransactions;
private BigDecimal totalRefundAmount;
private BigDecimal totalChargebackAmount;
private BigDecimal totalDaysOfUse;
private BigDecimal last30DaysSpend;
private BigDecimal last30DaysTransactions;
private BigDecimal last30DaysRefundAmount;
private BigDecimal last30DaysChargebackAmount;
private BigDecimal last30DaysOfUse;
private BigDecimal monthlyAverageSpend;
private BigDecimal monthlyAverageTransactions;
private BigDecimal monthlyAverageRefundAmount;
private BigDecimal monthlyAverageChargebackAmount;
private BigDecimal totalSpend = BigDecimal.ZERO;
private BigDecimal totalTransactions = BigDecimal.ZERO;
private BigDecimal totalRefundAmount = BigDecimal.ZERO;
private BigDecimal totalChargebackAmount = BigDecimal.ZERO;
private BigDecimal totalDaysOfUse = BigDecimal.ZERO;
private BigDecimal last30DaysSpend = BigDecimal.ZERO;
private BigDecimal last30DaysTransactions = BigDecimal.ZERO;
private BigDecimal last30DaysRefundAmount = BigDecimal.ZERO;
private BigDecimal last30DaysChargebackAmount = BigDecimal.ZERO;
private BigDecimal last30DaysOfUse = BigDecimal.ZERO;
private BigDecimal monthlyAverageSpend = BigDecimal.ZERO;
private BigDecimal monthlyAverageTransactions = BigDecimal.ZERO;
private BigDecimal monthlyAverageRefundAmount = BigDecimal.ZERO;
private BigDecimal monthlyAverageChargebackAmount = BigDecimal.ZERO;
@JsonDeserialize(using = FlexibleDateFormatDeserializer.class)
@JsonSerialize(using = ISOStringDateTimeSerializer.class)
private OffsetDateTime measuresIngestionDateTimeUTC;
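Defaulting the measures to BigDecimal.ZERO keeps downstream arithmetic safe for users whose DFP measures have not been ingested yet. A hedged sketch, assuming the Lombok-generated accessors:
// Previously getTotalSpend() could return null and .add(...) would
// throw a NullPointerException; now the sum is simply 10.
User user = new User();
BigDecimal spend = user.getTotalSpend().add(BigDecimal.TEN);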


@ -24,6 +24,7 @@ public class DeviceContextNodeData extends NodeData {
private String deviceContextId;
private String provider;
private String merchantFuzzyDeviceId;
private String deviceContextDC;
private String userAgent;
private String screenResolution;


@ -0,0 +1,12 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.model.dfp.raw;
import com.fasterxml.jackson.annotation.JsonInclude;
import java.util.HashMap;
@JsonInclude(JsonInclude.Include.NON_NULL)
public class LinkAnalysisCountResponse extends HashMap<String, Integer> {
}


@ -0,0 +1,18 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.model.dfp.raw;
import com.fasterxml.jackson.annotation.JsonInclude;
import lombok.*;
import java.util.Set;
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
@JsonInclude(JsonInclude.Include.NON_NULL)
public class LinkAnalysisDetailsRequest {
private Set<String> purchaseIds;
}


@ -0,0 +1,58 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.model.dfp.raw;
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.databind.PropertyNamingStrategy;
import com.fasterxml.jackson.databind.annotation.JsonNaming;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.math.BigDecimal;
import java.time.OffsetDateTime;
import java.util.Set;
@Data
@JsonInclude(JsonInclude.Include.NON_NULL)
public class LinkAnalysisDetailsResponse {
Set<PurchaseDetails> purchaseDetails;
@Data
@JsonNaming(PropertyNamingStrategy.UpperCamelCaseStrategy.class)
public static class PurchaseDetails {
private String purchaseId;
private OffsetDateTime merchantLocalDate;
private BigDecimal totalAmount;
private BigDecimal totalAmountInUSD;
private BigDecimal salesTax;
private BigDecimal salesTaxInUSD;
private String currency;
private Integer riskScore;
private String merchantRuleDecision;
private String reasonCodes;
private User user;
private DeviceContext deviceContext;
private boolean userRestricted;
@NoArgsConstructor
@AllArgsConstructor
@Data
@JsonInclude(JsonInclude.Include.NON_NULL)
@JsonNaming(PropertyNamingStrategy.UpperCamelCaseStrategy.class)
public static class User {
private String email;
private String userId;
}
@NoArgsConstructor
@AllArgsConstructor
@Data
@JsonInclude(JsonInclude.Include.NON_NULL)
@JsonNaming(PropertyNamingStrategy.UpperCamelCaseStrategy.class)
public static class DeviceContext {
private String ipAdress;
}
}
}


@ -0,0 +1,19 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.model.dfp.raw;
import com.fasterxml.jackson.annotation.JsonInclude;
import lombok.Data;
import java.util.HashMap;
import java.util.Set;
@JsonInclude(JsonInclude.Include.NON_NULL)
public class LinkAnalysisFullResponse extends HashMap<String, LinkAnalysisFullResponse.FieldLinks> {
@Data
public static class FieldLinks {
private int purchaseCounts;
private Set<String> purchaseIds;
}
}


@ -0,0 +1,12 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.model.dfp.raw;
import com.fasterxml.jackson.annotation.JsonInclude;
import java.util.HashMap;
@JsonInclude(JsonInclude.Include.NON_NULL)
public class LinkAnalysisRequest extends HashMap<String, String> {
}


@ -0,0 +1,40 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.model.dfp.raw;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.HashMap;
import java.util.Map;
@Data
@NoArgsConstructor
@AllArgsConstructor
public class UserEmailListEntity {
private Map<String, UserEmailList> lists = new HashMap<>();
@Data
@NoArgsConstructor
@AllArgsConstructor
public static class UserEmailList {
private String value;
}
public String getCommon() {
return lists != null && lists.get("Common") != null ?
lists.get("Common").getValue() :
null;
}
public Boolean getCommonRestriction() {
String common = getCommon();
if (common != null) {
if (common.equals("Safe")) return false;
if (common.equals("Block")) return true;
}
return null;
}
}
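A usage sketch for the list-based restriction lookup (the map content is illustrative):
import java.util.Map;
// "Block" -> true, "Safe" -> false, any other value or a missing
// "Common" list -> null
UserEmailListEntity entity = new UserEmailListEntity(
        Map.of("Common", new UserEmailListEntity.UserEmailList("Block")));
Boolean restricted = entity.getCommonRestriction(); // Boolean.TRUE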


@ -0,0 +1,16 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.model.dfp.raw;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.springframework.util.CollectionUtils;
@Data
@NoArgsConstructor
@AllArgsConstructor
public class UserEmailListEntityRequest {
private String entityValue;
}


@ -3,6 +3,8 @@
package com.griddynamics.msd365fp.manualreview.model.event;
public interface Event {
import java.io.Serializable;
public interface Event extends Serializable {
String getId();
}

The diff for this file is not shown because of its large size.


@ -7,12 +7,14 @@ package com.griddynamics.msd365fp.manualreview.queues;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableScheduling;
import reactor.core.scheduler.Schedulers;
@SpringBootApplication
@EnableScheduling
public class QueuesApplication {
public static void main(String[] args) {
Schedulers.enableMetrics();
SpringApplication.run(QueuesApplication.class, args);
}


@ -7,6 +7,9 @@ import lombok.AccessLevel;
import lombok.NoArgsConstructor;
import java.time.Duration;
import java.time.Instant;
import java.time.OffsetDateTime;
import java.time.ZoneId;
@NoArgsConstructor(access = AccessLevel.PRIVATE)
@SuppressWarnings("java:S2386")
@ -15,7 +18,8 @@ public class Constants {
public static final int TOP_ELEMENT_IN_CONTAINER_PAGE_SIZE = 1;
public static final String TOP_ELEMENT_IN_CONTAINER_CONTINUATION = null;
public static final OffsetDateTime ELDEST_APPLICATION_DATE =
OffsetDateTime.ofInstant(Instant.ofEpochMilli(0), ZoneId.systemDefault());
public static final Duration DEFAULT_CACHE_INVALIDATION_INTERVAL = Duration.ZERO;
public static final long DEFAULT_CACHE_SIZE = 0;
@ -24,10 +28,12 @@ public class Constants {
public static final String DEFAULT_QUEUE_VIEW_PARAMETER_STR = "REGULAR";
public static final String DEFAULT_ITEM_PAGE_SIZE_STR = "20";
public static final int DEFAULT_ITEM_PAGE_SIZE = 20;
public static final int DEFAULT_ITEM_INFO_PAGE_SIZE = 100;
public static final String ITEMS_CONTAINER_NAME = "Items";
public static final String QUEUES_CONTAINER_NAME = "Queues";
public static final String TASK_CONTAINER_NAME = "Tasks";
public static final String LINK_ANALYSIS_CONTAINER_NAME = "LinkAnalysis";
public static final String HEALTH_CHECK_CONTAINER_NAME = "HealthChecks";
public static final String DICTIONARIES_CONTAINER_NAME = "Dictionaries";
public static final String SETTINGS_CONTAINER_NAME = "ConfigurableAppSettings";
@ -54,6 +60,8 @@ public class Constants {
public static final String MESSAGE_QUEUE_NOT_FOUND = "Queue not found";
public static final String MESSAGE_ITEM_NOT_FOUND = "Item not found";
public static final String MESSAGE_NOT_FOUND = "Not found";
public static final String MESSAGE_ITEM_IS_EMPTY = "Item is empty";
public static final String MESSAGE_INCORRECT_USER = "Incorrect user";
public static final String MESSAGE_NO_SUPERVISORS = "No one supervisor is found";
public static final String MESSAGE_INCORRECT_QUEUE_ASSIGNMENT = "The same person can't be a reviewer and a supervisor";
@ -69,11 +77,13 @@ public class Constants {
public static final String DICTIONARY_TASK_NAME = "dictionary-reconciliation-task";
public static final String ENRICHMENT_TASK_NAME = "item-enrichment-task";
public static final String QUEUE_ASSIGNMENT_TASK_NAME = "queue-assignment-reconciliation-task";
public static final String RESOLUTION_SENDING_TASK_NAME = "resolution-sending-task";
public static final String PRIM_HEALTH_ANALYSIS_TASK_NAME = "prim-health-analysis-task";
public static final String SEC_HEALTH_ANALYSIS_TASK_NAME = "sec-health-analysis-task";
public static final String SECURITY_SCHEMA_IMPLICIT = "mr_user_auth";
public static final String CLIENT_REGISTRATION_AZURE_DFP_API = "azure-dfp-api";
public static final String CLIENT_REGISTRATION_AZURE_DFP_LA_API = "azure-dfp-la-api";
public static final String RESIDUAL_QUEUE_NAME = "# Residual Queue";


@ -65,6 +65,25 @@ public class WebClientConfig {
.build();
}
@Bean
WebClient azureDFPLAAPIWebClient(OAuth2AuthorizedClientManager authorizedClientManager, ObjectMapper mapper) {
ServletOAuth2AuthorizedClientExchangeFilterFunction oauth2Client =
new ServletOAuth2AuthorizedClientExchangeFilterFunction(authorizedClientManager);
oauth2Client.setDefaultClientRegistrationId(Constants.CLIENT_REGISTRATION_AZURE_DFP_LA_API);
Consumer<ClientCodecConfigurer> clientCodecConfigurerConsumer = clientCodecConfigurer -> clientCodecConfigurer
.defaultCodecs()
.jackson2JsonEncoder(new Jackson2JsonEncoder(mapper, MediaType.APPLICATION_JSON));
return WebClient.builder()
.filter(logRequestFilter())
.apply(oauth2Client.oauth2Configuration())
.exchangeStrategies(ExchangeStrategies
.builder()
.codecs(clientCodecConfigurerConsumer)
.codecs(configurer -> configurer.customCodecs().registerWithDefaultConfig(new Jackson2JsonDecoder(mapper, MediaType.APPLICATION_OCTET_STREAM)))
.build())
.build();
}
@Bean
WebClient nonAuthorizingWebClient() {
return WebClient.builder()


@ -23,6 +23,7 @@ public class ApplicationProperties {
private final Map<String, TaskProperties> tasks;
private final double taskResetTimeoutMultiplier;
private final double taskWarningTimeoutMultiplier;
private final double taskSuccessfulRunsTimeoutMultiplier;
private final TaskExecutor taskExecutor;
@AllArgsConstructor


@ -5,12 +5,13 @@ package com.griddynamics.msd365fp.manualreview.queues.controller;
import com.griddynamics.msd365fp.manualreview.model.PageableCollection;
import com.griddynamics.msd365fp.manualreview.model.exception.BusyException;
import com.griddynamics.msd365fp.manualreview.model.exception.EmptySourceException;
import com.griddynamics.msd365fp.manualreview.model.exception.IncorrectConditionException;
import com.griddynamics.msd365fp.manualreview.model.exception.NotFoundException;
import com.griddynamics.msd365fp.manualreview.queues.model.ItemDataField;
import com.griddynamics.msd365fp.manualreview.queues.model.dto.*;
import com.griddynamics.msd365fp.manualreview.queues.model.persistence.SearchQuery;
import com.griddynamics.msd365fp.manualreview.queues.service.ItemService;
import com.griddynamics.msd365fp.manualreview.queues.service.PublicLinkAnalysisService;
import com.griddynamics.msd365fp.manualreview.queues.service.PublicItemService;
import com.griddynamics.msd365fp.manualreview.queues.service.SearchQueryService;
import io.swagger.v3.oas.annotations.Operation;
@ -22,9 +23,11 @@ import lombok.extern.slf4j.Slf4j;
import org.springframework.data.domain.Sort;
import org.springframework.http.MediaType;
import org.springframework.security.access.annotation.Secured;
import org.springframework.validation.annotation.Validated;
import org.springframework.web.bind.annotation.*;
import javax.validation.Valid;
import javax.validation.constraints.Positive;
import java.util.Collection;
import static com.griddynamics.msd365fp.manualreview.queues.config.Constants.*;
@ -34,6 +37,7 @@ import static com.griddynamics.msd365fp.manualreview.queues.config.Constants.*;
@Tag(name = "items", description = "The Item API for observing and processing orders.")
@Slf4j
@RequiredArgsConstructor
@Validated
@SecurityRequirement(name = SECURITY_SCHEMA_IMPLICIT)
@Secured({ADMIN_MANAGER_ROLE})
public class ItemController {
@ -43,6 +47,7 @@ public class ItemController {
private final PublicItemService publicItemService;
private final ItemService itemService;
private final SearchQueryService searchQueryService;
private final PublicLinkAnalysisService linkAnalysisService;
@Operation(summary = "Get item details by ID")
@GetMapping(value = "/{id}", produces = MediaType.APPLICATION_JSON_VALUE)
@ -82,6 +87,14 @@ public class ItemController {
publicItemService.labelItem(id, queueId, label);
}
@Operation(summary = "Set the label and release all items")
@PatchMapping(value = "/batch/label", consumes = MediaType.APPLICATION_JSON_VALUE)
@Secured({ADMIN_MANAGER_ROLE, SENIOR_ANALYST_ROLE, ANALYST_ROLE})
public BatchLabelReportDTO batchLabelItem(
@Valid @RequestBody final BatchLabelDTO batchLabel) throws NotFoundException, IncorrectConditionException, BusyException {
return publicItemService.batchLabelItem(batchLabel);
}
@Operation(summary = "Add the note to the specified item")
@PutMapping(value = "/{id}/note", consumes = MediaType.APPLICATION_JSON_VALUE)
@Secured({ADMIN_MANAGER_ROLE, SENIOR_ANALYST_ROLE, ANALYST_ROLE})
@ -106,7 +119,7 @@ public class ItemController {
@Operation(summary = "Save search query to the database")
@PostMapping(value = "/search-query",
consumes = MediaType.APPLICATION_JSON_VALUE)
@Secured({ADMIN_MANAGER_ROLE})
public String saveSearchQuery(
@Valid @RequestBody final ItemSearchQueryDTO itemSearchQueryDTO
@ -116,7 +129,7 @@ public class ItemController {
@Operation(summary = "Get search query by id")
@GetMapping(value = "/search-query/{id}",
produces = MediaType.APPLICATION_JSON_VALUE)
@Secured({ADMIN_MANAGER_ROLE})
public ItemSearchQueryDTO getSearchQuery(
@PathVariable("id") final String id
@ -126,7 +139,7 @@ public class ItemController {
@Operation(summary = "Execute saved search query and return search results")
@GetMapping(value = "/search-query/{id}/results",
produces = MediaType.APPLICATION_JSON_VALUE)
@Secured({ADMIN_MANAGER_ROLE})
public PageableCollection<ItemDTO> applySearchQuery(
@PathVariable("id") final String id,
@ -139,4 +152,56 @@ public class ItemController {
) throws NotFoundException, BusyException {
return itemService.searchForItems(id, size, continuation, sortingField, sortingOrder);
}
@Operation(summary = "Initiate link analysis for particular item")
@PostMapping(value = "/link-analysis",
consumes = MediaType.APPLICATION_JSON_VALUE,
produces = MediaType.APPLICATION_JSON_VALUE)
@Secured({ADMIN_MANAGER_ROLE, SENIOR_ANALYST_ROLE, ANALYST_ROLE})
public LinkAnalysisDTO initLinkAnalysis(
@Valid @RequestBody final LinkAnalysisCreationDTO linkAnalysisCreationDTO) throws NotFoundException, IncorrectConditionException, BusyException, EmptySourceException {
return linkAnalysisService.createLinkAnalysisEntry(linkAnalysisCreationDTO);
}
@Operation(summary = "Initiate link analysis for particular item")
@GetMapping(value = "/link-analysis/{id}",
produces = MediaType.APPLICATION_JSON_VALUE)
@Secured({ADMIN_MANAGER_ROLE, SENIOR_ANALYST_ROLE, ANALYST_ROLE})
public LinkAnalysisDTO getLinkAnalysisInfo(@PathVariable("id") String id) throws NotFoundException {
return linAnalysisService.getLinkAnalysisEntry(id);
}
@Operation(summary = "Get items form MR linked to the current item")
@GetMapping(value = "/link-analysis/{id}/mr-items",
produces = MediaType.APPLICATION_JSON_VALUE)
@Secured({ADMIN_MANAGER_ROLE, SENIOR_ANALYST_ROLE, ANALYST_ROLE})
public PageableCollection<LAItemDTO> getMRLinks(
@PathVariable("id")
String id,
@Parameter(description = "size of a page")
@RequestParam(required = false, defaultValue = DEFAULT_ITEM_PAGE_SIZE_STR)
@Positive
Integer size,
@Parameter(description = "continuation token from previous request")
@RequestParam(required = false)
String continuation) throws NotFoundException, BusyException {
return linAnalysisService.getMRItems(id, size, continuation);
}
@Operation(summary = "Get items form MR linked to the current item")
@GetMapping(value = "/link-analysis/{id}/dfp-items",
produces = MediaType.APPLICATION_JSON_VALUE)
@Secured({ADMIN_MANAGER_ROLE, SENIOR_ANALYST_ROLE, ANALYST_ROLE})
public PageableCollection<DFPItemDTO> getDFPLinks(
@PathVariable("id")
String id,
@Parameter(description = "size of a page")
@RequestParam(required = false, defaultValue = DEFAULT_ITEM_PAGE_SIZE_STR)
@Positive
Integer size,
@Parameter(description = "continuation token from previous request")
@RequestParam(required = false)
String continuation) throws NotFoundException {
return linAnalysisService.getDFPItems(id, size, continuation);
}
}
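Taken together, the new endpoints form a small client-visible flow. A minimal client-side sketch follows; the base URL, the /api/items prefix, and TOKEN are assumptions for illustration, not part of this commit.

WebClient client = WebClient.builder()
        .baseUrl("http://localhost:8080/api/items")
        .defaultHeader("Authorization", "Bearer " + TOKEN)
        .build();
// 1. Initiate the analysis over the chosen fields.
LinkAnalysisDTO analysis = client.post()
        .uri("/link-analysis")
        .bodyValue(new LinkAnalysisCreationDTO(
                "item-1", "queue-1", Set.of(LinkAnalysisField.EMAIL, LinkAnalysisField.BIN)))
        .retrieve()
        .bodyToMono(LinkAnalysisDTO.class)
        .block();
// 2. Page through linked MR items; repeat with the continuation token until it is null.
PageableCollection<LAItemDTO> page = client.get()
        .uri("/link-analysis/{id}/mr-items?size=20", analysis.getId())
        .retrieve()
        .bodyToMono(new ParameterizedTypeReference<PageableCollection<LAItemDTO>>() {})
        .block();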

View file

@ -52,10 +52,13 @@ import org.springframework.security.core.annotation.AuthenticationPrincipal;
import org.springframework.security.web.authentication.preauth.PreAuthenticatedAuthenticationToken;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;
import java.time.Duration;
import java.util.*;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
@ -70,7 +73,7 @@ import static com.griddynamics.msd365fp.manualreview.queues.config.Constants.SEC
@SecurityRequirement(name = SECURITY_SCHEMA_IMPLICIT)
@RequiredArgsConstructor
@Secured({ADMIN_MANAGER_ROLE})
@SuppressWarnings({"java:S5411", "java:S3776", "java:S3358"})
@SuppressWarnings({"java:S5411", "java:S3776", "java:S3358" })
public class TestingController {
private static final Faker faker = new Faker();
@ -118,6 +121,20 @@ public class TestingController {
itemEnrichmentService.enrichItem(id, true);
}
@Operation(summary = "Trigger forced item enrichment for ALL active items")
@PostMapping(value = "/items/enrichment")
public void enrichAllActiveItems() throws BusyException {
Collection<String> items = PageProcessingUtility.getAllPages(
c -> itemRepository.findActiveItemIds(300, c));
log.warn("We're trying to reenrich {} items", items.size());
Scheduler scheduler = Schedulers.fromExecutor(Executors.newSingleThreadExecutor());
Flux.fromIterable(items)
.doOnNext(item -> itemEnrichmentService.enrichItem(item, true))
.subscribeOn(scheduler)
.subscribe();
}
@Operation(summary = "Randomize scores for items in a queue")
@PostMapping(value = "/queue/{queueId}/score/randomize")
public void randomizeScore(@PathVariable final String queueId) throws BusyException {
@ -268,15 +285,19 @@ public class TestingController {
});
}
@Operation(summary = "Hard delete for all items by IDs")
@DeleteMapping(value = "/items")
@Operation(summary = "Hard delete for all entries by IDs")
@DeleteMapping(value = "/databases/{dbName}/container/{containerName}/entries")
public void deleteAllById(
@PathVariable("dbName") final String dbName,
@PathVariable("containerName") final String containerName,
@Parameter(hidden = true)
@AuthenticationPrincipal PreAuthenticatedAuthenticationToken principal,
@RequestBody List<Map<String, String>> ids) {
CosmosContainer container = cosmosClient.getDatabase(dbName).getContainer(containerName);
List<String> toDelete = ids.stream().map(mp -> mp.get("id")).collect(Collectors.toList());
itemRepository.deleteAll(itemRepository.findAllById(toDelete));
log.info("User [{}] has deleted items [{}].", UserPrincipalUtility.extractUserId(principal), toDelete);
toDelete.forEach(id ->
container.getItem(id, id).delete().block());
log.warn("User [{}] has deleted items [{}].", UserPrincipalUtility.extractUserId(principal), toDelete);
}
@Operation(summary = "Generate specified amount of purchases. Send them to the DFP. " +

View file

@ -0,0 +1,33 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.queues.model;
import com.griddynamics.msd365fp.manualreview.model.ItemEscalation;
import com.griddynamics.msd365fp.manualreview.model.ItemHold;
import com.griddynamics.msd365fp.manualreview.model.ItemLabel;
import com.griddynamics.msd365fp.manualreview.model.ItemLock;
import java.time.OffsetDateTime;
import java.util.Set;
public interface BasicItemInfo {
String getId();
OffsetDateTime getImported();
OffsetDateTime getEnriched();
boolean isActive();
ItemLabel getLabel();
Set<String> getQueueIds();
ItemLock getLock();
ItemEscalation getEscalation();
ItemHold getHold();
}

View file

@ -0,0 +1,17 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.queues.model;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
@NoArgsConstructor
@AllArgsConstructor
@Data
public class BatchUpdateResult {
private String itemId;
private boolean success;
private String reason;
}

View file

@ -0,0 +1,24 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.queues.model;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.griddynamics.msd365fp.manualreview.model.event.Event;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import java.io.Serializable;
@NoArgsConstructor
@AllArgsConstructor
@Data
@Slf4j
public class ItemEvent implements Serializable {
@JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, include = JsonTypeInfo.As.EXTERNAL_PROPERTY, property = "klass", visible = true)
private Event event;
private Class<? extends Event> klass;
private String sendingId;
}

View file

@ -369,6 +369,12 @@ public class ItemQuery {
return this;
}
public ItemQueryConstructor enriched() {
queryParts.add(
String.format("IS_DEFINED(%1$s.enriched) AND NOT IS_NULL(%1$s.enriched)", alias)
);
return this;
}
public ItemQueryConstructor enrichedAfter(OffsetDateTime time) {
if (time != null) {
@ -440,6 +446,13 @@ public class ItemQuery {
return this;
}
public ItemQueryConstructor hasEvents() {
queryParts.add(String.format(
"(ARRAY_LENGTH(%1$s.events) > 0)",
alias));
return this;
}
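Each constructor method contributes one Cosmos SQL predicate; a sketch of how the two additions compose (the same chaining pattern that findUnreportedItems uses further below):

// Sketch: select enriched items that still carry unsent events.
PageableCollection<Item> unreported = ItemQuery.constructor("i")
        .enriched()
        .and()
        .hasEvents()
        .constructSelectExecutor(itemsContainer)
        .execute(size, continuationToken);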
//TODO: rework
public ItemQueryConstructor notEscalation() {
queryParts.add(String.format(

View file

@ -0,0 +1,25 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.queues.model;
import lombok.AllArgsConstructor;
import lombok.Getter;
import java.io.Serializable;
@SuppressWarnings("unused")
@AllArgsConstructor
public enum LinkAnalysisField implements Serializable {
CREATION_DATE("creationDate"),
DISCOVERED_IP_ADDRESS("discoveredIPAddress"),
MERCHANT_FUZZY_DEVICE_ID("merchantFuzzyDeviceId"),
MERCHANT_PAYMENT_INSTRUMENT_ID("merchantPaymentInstrumentId"),
EMAIL("email"),
BIN("bin"),
HOLDER_NAME("holderName"),
ZIPCODE("zipcode");
@Getter
private String relatedLAName;
}

View file

@ -0,0 +1,22 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.queues.model.dto;
import com.griddynamics.msd365fp.manualreview.model.Label;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import javax.validation.constraints.NotNull;
import java.util.Set;
@AllArgsConstructor
@NoArgsConstructor
@Data
public class BatchLabelDTO {
@NotNull
private Label label;
@NotNull
private Set<String> itemIds;
}

View file

@ -0,0 +1,27 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.queues.model.dto;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.Collection;
import java.util.HashSet;
public class BatchLabelReportDTO extends HashSet<BatchLabelReportDTO.LabelResult> {
public BatchLabelReportDTO(final Collection<? extends LabelResult> c) {
super(c);
}
@AllArgsConstructor
@NoArgsConstructor
@Data
public static class LabelResult {
private String itemId;
private boolean success;
private String reason;
}
}
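An illustrative instance of the report, pairing per-item success flags with skip reasons (the IDs and the reason string are example values):

BatchLabelReportDTO report = new BatchLabelReportDTO(Set.of(
        new BatchLabelReportDTO.LabelResult("item-1", true, null),
        new BatchLabelReportDTO.LabelResult("item-2", false, "Item is locked by another analyst.")));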

View file

@ -0,0 +1,51 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.queues.model.dto;
import com.fasterxml.jackson.annotation.JsonInclude;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.math.BigDecimal;
import java.time.OffsetDateTime;
@NoArgsConstructor
@AllArgsConstructor
@Builder
@Data
@JsonInclude(JsonInclude.Include.NON_NULL)
public class DFPItemDTO {
private String purchaseId;
private OffsetDateTime merchantLocalDate;
private BigDecimal totalAmount;
private BigDecimal totalAmountInUSD;
private BigDecimal salesTax;
private BigDecimal salesTaxInUSD;
private String currency;
private Integer riskScore;
private String merchantRuleDecision;
private String reasonCodes;
private LAUser user;
private LADeviceContext deviceContext;
private Boolean userRestricted;
@NoArgsConstructor
@AllArgsConstructor
@Data
@JsonInclude(JsonInclude.Include.NON_NULL)
public static class LAUser {
private String email;
private String userId;
}
@NoArgsConstructor
@AllArgsConstructor
@Data
@JsonInclude(JsonInclude.Include.NON_NULL)
public static class LADeviceContext {
private String ipAdress;
}
}

View file

@ -0,0 +1,24 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.queues.model.dto;
import com.fasterxml.jackson.annotation.JsonInclude;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;
import javax.validation.constraints.NotNull;
@NoArgsConstructor
@AllArgsConstructor
@Builder
@Data
@JsonInclude(JsonInclude.Include.NON_NULL)
public class LAItemDTO {
@NotNull
private ItemDTO item;
private boolean availableForLabeling;
private Boolean userRestricted;
}

View file

@ -0,0 +1,22 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.queues.model.dto;
import com.griddynamics.msd365fp.manualreview.queues.model.LinkAnalysisField;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import javax.validation.constraints.NotNull;
import java.util.Set;
@AllArgsConstructor
@NoArgsConstructor
@Data
public class LinkAnalysisCreationDTO {
@NotNull
private String itemId;
private String queueId;
private Set<LinkAnalysisField> fields;
}

View file

@ -0,0 +1,32 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.queues.model.dto;
import com.griddynamics.msd365fp.manualreview.queues.model.LinkAnalysisField;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import javax.validation.constraints.NotNull;
import java.util.List;
import java.util.Set;
@AllArgsConstructor
@NoArgsConstructor
@Data
public class LinkAnalysisDTO {
@NotNull
private String id;
private int found;
private int foundInMR;
private Set<LinkAnalysisField> analysisFields;
private List<FieldLinks> fields;
@Data
public static class FieldLinks {
private LinkAnalysisField id;
private String value;
private int purchaseCounts;
}
}

View file

@ -23,6 +23,7 @@ import static com.griddynamics.msd365fp.manualreview.queues.config.Constants.EMA
public class EmailDomain implements Serializable {
@Id
@PartitionKey
private String id;
private String emailDomainName;
private DisposabilityCheck disposabilityCheck;

View file

@ -7,6 +7,8 @@ import com.fasterxml.jackson.annotation.JsonProperty;
import com.griddynamics.msd365fp.manualreview.model.*;
import com.griddynamics.msd365fp.manualreview.model.dfp.AssesmentResult;
import com.griddynamics.msd365fp.manualreview.model.dfp.MainPurchase;
import com.griddynamics.msd365fp.manualreview.queues.model.BasicItemInfo;
import com.griddynamics.msd365fp.manualreview.queues.model.ItemEvent;
import com.microsoft.azure.spring.data.cosmosdb.core.mapping.Document;
import com.microsoft.azure.spring.data.cosmosdb.core.mapping.PartitionKey;
import lombok.*;
@ -27,7 +29,7 @@ import static com.griddynamics.msd365fp.manualreview.queues.config.Constants.ITE
@Builder(toBuilder = true)
@EqualsAndHashCode(exclude = "_etag")
@Document(collection = ITEMS_CONTAINER_NAME)
public class Item implements Serializable {
public class Item implements BasicItemInfo, Serializable {
@Id
@PartitionKey
private String id;
@ -63,6 +65,10 @@ public class Item implements Serializable {
@Builder.Default
private Set<String> reviewers = new HashSet<>();
@Builder.Default
private Set<ItemEvent> events = new HashSet<>();
@Builder.Default
private long ttl = -1;

View file

@ -0,0 +1,75 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.queues.model.persistence;
import com.griddynamics.msd365fp.manualreview.queues.model.LinkAnalysisField;
import com.microsoft.azure.spring.data.cosmosdb.core.mapping.Document;
import com.microsoft.azure.spring.data.cosmosdb.core.mapping.PartitionKey;
import lombok.*;
import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.Version;
import org.springframework.lang.NonNull;
import java.time.OffsetDateTime;
import java.util.Comparator;
import java.util.Set;
import java.util.TreeSet;
import static com.griddynamics.msd365fp.manualreview.queues.config.Constants.LINK_ANALYSIS_CONTAINER_NAME;
@AllArgsConstructor
@NoArgsConstructor
@Data
@Builder
@EqualsAndHashCode(exclude = "_etag")
@Document(collection = LINK_ANALYSIS_CONTAINER_NAME)
public class LinkAnalysis {
@Id
@PartitionKey
private String id;
private String ownerId;
private Set<LinkAnalysisField> analysisFields;
private Set<FieldLinks> fields;
private int found;
private int foundInMR;
@Builder.Default
private TreeSet<MRItemInfo> mrPurchaseIds = new TreeSet<>();
@Builder.Default
private TreeSet<String> dfpPurchaseIds = new TreeSet<>();
@Data
@Builder
@AllArgsConstructor
@NoArgsConstructor
public static class FieldLinks {
private LinkAnalysisField id;
private String value;
private int purchaseCounts;
private Set<String> purchaseIds;
}
@Data
@Builder
@AllArgsConstructor
@NoArgsConstructor
public static class MRItemInfo implements Comparable<MRItemInfo> {
private String id;
private OffsetDateTime imported;
@Override
public int compareTo(@NonNull final MRItemInfo o) {
return Comparator
.comparing(MRItemInfo::getImported)
.thenComparing(MRItemInfo::getId)
.compare(this, o);
}
}
@Version
@SuppressWarnings("java:S116")
String _etag;
private long ttl;
}
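The TreeSet fields rely on MRItemInfo's natural ordering; a quick sketch of the effect (the timestamp is an example value):

TreeSet<LinkAnalysis.MRItemInfo> infos = new TreeSet<>();
OffsetDateTime t = OffsetDateTime.parse("2020-11-23T12:00:00Z");
infos.add(new LinkAnalysis.MRItemInfo("b", t));
infos.add(new LinkAnalysis.MRItemInfo("a", t));
// Equal import times, so the ID breaks the tie: iteration yields "a", then "b".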

View file

@ -3,13 +3,16 @@
package com.griddynamics.msd365fp.manualreview.queues.model.persistence;
import com.fasterxml.jackson.databind.annotation.JsonSerialize;
import com.griddynamics.msd365fp.manualreview.model.TaskStatus;
import com.griddynamics.msd365fp.manualreview.model.jackson.ISOStringDateTimeSerializer;
import com.microsoft.azure.spring.data.cosmosdb.core.mapping.Document;
import com.microsoft.azure.spring.data.cosmosdb.core.mapping.PartitionKey;
import lombok.*;
import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.Version;
import java.time.Duration;
import java.time.OffsetDateTime;
import java.util.Map;
@ -26,9 +29,14 @@ public class Task {
@PartitionKey
private String id;
private TaskStatus status;
private Map<String,String> variables;
@JsonSerialize(using = ISOStringDateTimeSerializer.class)
private OffsetDateTime previousRun;
private Boolean previousRunSuccessfull;
@JsonSerialize(using = ISOStringDateTimeSerializer.class)
private OffsetDateTime currentRun;
@JsonSerialize(using = ISOStringDateTimeSerializer.class)
private OffsetDateTime previousSuccessfulRun;
private Duration previousSuccessfulExecutionTime;
private Map<String, String> variables;
private String lastFailedRunMessage;
private String instanceId;

View file

@ -8,4 +8,6 @@ import com.microsoft.azure.spring.data.cosmosdb.repository.CosmosRepository;
public interface EmailDomainRepository extends CosmosRepository<EmailDomain, String> {
Iterable<EmailDomain> findByEmailDomainName(String emailDomainName);
}

View file

@ -50,6 +50,14 @@ public interface ItemRepositoryCustomMethods {
@Nullable final Boolean locked,
@Nullable final Boolean held);
PageableCollection<Item> findUnreportedItems(
final int size,
@Nullable final String continuationToken);
PageableCollection<String> findActiveItemIds(
final int size,
final String continuationToken);
PageableCollection<String> findUnenrichedItemIds(
final int size,
final String continuationToken);
@ -129,4 +137,14 @@ public interface ItemRepositoryCustomMethods {
@Nullable Set<String> tags,
int size,
@Nullable String continuationToken);
PageableCollection<BasicItemInfo> findEnrichedItemInfoByIds(
@NonNull final Set<String> ids,
final int size,
@Nullable final String continuationToken);
PageableCollection<Item> findEnrichedItemsByIds(
@NonNull final Set<String> ids,
final int size,
@Nullable final String continuationToken);
}

View file

@ -0,0 +1,11 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.queues.repository;
import com.griddynamics.msd365fp.manualreview.queues.model.persistence.LinkAnalysis;
import com.microsoft.azure.spring.data.cosmosdb.repository.CosmosRepository;
public interface LinkAnalysisRepository extends CosmosRepository<LinkAnalysis, String> {
}

View file

@ -93,6 +93,16 @@ public class ItemRepositoryCustomMethodsImpl implements ItemRepositoryCustomMeth
.execute(size, continuationToken);
}
@Override
public PageableCollection<Item> findUnreportedItems(
final int size,
@Nullable final String continuationToken) {
return ItemQuery.constructor("i")
.hasEvents()
.constructSelectExecutor(itemsContainer)
.execute(size, continuationToken);
}
@Override
public int countActiveItems() {
return ItemQuery.constructor("i")
@ -145,6 +155,22 @@ public class ItemRepositoryCustomMethodsImpl implements ItemRepositoryCustomMeth
.execute(size, continuationToken);
}
@Override
public PageableCollection<String> findActiveItemIds(
final int size,
final String continuationToken) {
ExtendedCosmosContainer.Page res = itemsContainer.runCrossPartitionPageableQuery(
"SELECT i.id FROM i WHERE i.active ORDER BY i._ts",
size,
continuationToken);
List<String> queriedItems = res.getContent()
.map(cip -> Optional.ofNullable((String) cip.get("id")))
.filter(Optional::isPresent)
.map(Optional::get)
.collect(Collectors.toList());
return new PageableCollection<>(queriedItems, res.getContinuationToken());
}
@Override
public PageableCollection<String> findUnenrichedItemIds(
final int size,
@ -176,6 +202,47 @@ public class ItemRepositoryCustomMethodsImpl implements ItemRepositoryCustomMeth
return new PageableCollection<>(queriedItems, res.getContinuationToken());
}
@Override
public PageableCollection<BasicItemInfo> findEnrichedItemInfoByIds(
@NonNull final Set<String> ids,
final int size,
@Nullable final String continuationToken) {
ExtendedCosmosContainer.Page res = itemsContainer.runCrossPartitionPageableQuery(
"SELECT i.id, i.imported, i.enriched, i.active, " +
"i.label, i.queueIds, i.lock, i.escalation, i.hold " +
"FROM i WHERE IS_DEFINED(i.enriched) AND NOT IS_NULL(i.enriched) " +
"AND i.id IN ('" +
String.join("','", ids) + "')",
size,
continuationToken);
List<BasicItemInfo> queriedItems = res.getContent()
.map(cip -> itemsContainer.castCosmosObjectToClassInstance(cip.toJson(), Item.class))
.filter(Optional::isPresent)
.map(Optional::get)
.collect(Collectors.toList());
return new PageableCollection<>(queriedItems, res.getContinuationToken());
}
@Override
public PageableCollection<Item> findEnrichedItemsByIds(
@NonNull final Set<String> ids,
final int size,
@Nullable final String continuationToken) {
ExtendedCosmosContainer.Page res = itemsContainer.runCrossPartitionPageableQuery(
"SELECT i " +
"FROM i WHERE IS_DEFINED(i.enriched) AND NOT IS_NULL(i.enriched) " +
"AND i.id IN ('" +
String.join("','", ids) + "')",
size,
continuationToken);
List<Item> queriedItems = res.getContent()
.map(cip -> itemsContainer.castCosmosObjectToClassInstance(cip.get("i"), Item.class))
.filter(Optional::isPresent)
.map(Optional::get)
.collect(Collectors.toList());
return new PageableCollection<>(queriedItems, res.getContinuationToken());
}
@Override
public Optional<Item> findItemById(
@NonNull final String id,
@ -348,7 +415,7 @@ public class ItemRepositoryCustomMethodsImpl implements ItemRepositoryCustomMeth
+ " Count(1) as count "
+ "FROM ("
+ "SELECT "
+ " udf.getBucketNumber(c.assessmentResult.RiskScore,%1$s) as risk_score_bucket "
+ " FLOOR(c.assessmentResult.RiskScore/%1$s) as risk_score_bucket "
+ "FROM c "
+ "WHERE "
+ " c.active "
@ -388,7 +455,8 @@ public class ItemRepositoryCustomMethodsImpl implements ItemRepositoryCustomMeth
//JOIN
.collectionInCollectionField(ItemDataField.TAGS, tags)//no ".and" because here we use JOIN part, not WHERE one
//WHERE
.inField(ItemDataField.ID, ids)
.enriched()
.and().inField(ItemDataField.ID, ids)
.and().queueIds(queueIds, residual)
.and().active(isActive)
.and().all(itemFilters)
@ -401,4 +469,5 @@ public class ItemRepositoryCustomMethodsImpl implements ItemRepositoryCustomMeth
.constructSelectExecutor(itemsContainer)
.execute(size, continuationToken);
}
}

View file

@ -3,9 +3,7 @@
package com.griddynamics.msd365fp.manualreview.queues.service;
import com.griddynamics.msd365fp.manualreview.model.dfp.raw.ExplorerEntity;
import com.griddynamics.msd365fp.manualreview.model.dfp.raw.ExplorerEntityRequest;
import com.griddynamics.msd365fp.manualreview.model.dfp.raw.Node;
import com.griddynamics.msd365fp.manualreview.model.dfp.raw.*;
import lombok.RequiredArgsConstructor;
import lombok.Setter;
import lombok.extern.slf4j.Slf4j;
@ -31,6 +29,24 @@ public class DFPExplorerService {
private WebClient dfpClient;
@Value("${azure.dfp.graph-explorer-url}")
private String dfpExplorerUrl;
@Value("${azure.dfp.user-email-list-url}")
private String userEmailListUrl;
@Cacheable(value = "user-email-list")
public UserEmailListEntity exploreUserEmailList(final String email) {
UserEmailListEntityRequest request = new UserEmailListEntityRequest(email);
log.info("Start User.Email list retrieving for [{}].", email);
UserEmailListEntity result = dfpClient
.post()
.uri(userEmailListUrl)
.body(Mono.just(request), UserEmailListEntityRequest.class)
.retrieve()
.bodyToMono(UserEmailListEntity.class)
.block();
log.info("User.Email list for [{}] has been retrieved successfully: [{}].", email,
result != null ? result.getCommon() : "null");
return result;
}
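A usage sketch, assuming getCommon() is the relevant accessor (the email address is an example value):

// Subsequent calls with the same email are served from the "user-email-list" cache.
UserEmailListEntity lists = dfpExplorerService.exploreUserEmailList("user@contoso.com");
if (lists != null) {
    log.info("Common list standing: [{}].", lists.getCommon());
}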
@Cacheable(value = "traversal-purchase", unless = "#result.isEmpty()")
public ExplorerEntity explorePurchase(final String id) {

View file

@ -4,12 +4,12 @@
package com.griddynamics.msd365fp.manualreview.queues.service;
import com.griddynamics.msd365fp.manualreview.dfpauth.util.UserPrincipalUtility;
import com.griddynamics.msd365fp.manualreview.queues.model.BasicItemInfo;
import com.griddynamics.msd365fp.manualreview.queues.model.QueueView;
import com.griddynamics.msd365fp.manualreview.queues.model.QueueViewType;
import com.griddynamics.msd365fp.manualreview.queues.model.persistence.Item;
import com.griddynamics.msd365fp.manualreview.queues.model.persistence.LinkAnalysis;
import com.griddynamics.msd365fp.manualreview.queues.model.persistence.Queue;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections4.SetUtils;
import org.javers.core.Javers;
import org.javers.core.JaversBuilder;
import org.javers.core.diff.Diff;
@ -20,7 +20,7 @@ import org.springframework.security.core.Authentication;
import org.springframework.stereotype.Service;
import org.springframework.util.CollectionUtils;
import java.util.Collections;
import java.util.Collection;
import java.util.List;
import java.util.Objects;
@ -58,8 +58,8 @@ public class DataSecurityService {
List<String> roles = Objects.requireNonNull(UserPrincipalUtility.extractUserRoles(authentication));
return roles.contains(ADMIN_MANAGER_ROLE) ||
roles.contains(SENIOR_ANALYST_ROLE) ||
userHasAccessToQueueViewAsSupervisor(queueView, actor) ||
userHasAccessToQueueViewAsReviewer(queueView, actor);
userAssignedToQueueViewAsSupervisor(queueView, actor) ||
userAssignedToQueueViewAsReviewer(queueView, actor);
}
public boolean checkPermissionForQueueReading(
@ -70,7 +70,8 @@ public class DataSecurityService {
List<String> roles = Objects.requireNonNull(UserPrincipalUtility.extractUserRoles(authentication));
return roles.contains(ADMIN_MANAGER_ROLE) ||
roles.contains(SENIOR_ANALYST_ROLE) ||
userHasAccessToQueue(queue, actor);
userAssignedToQueueAsSupervisor(queue, actor) ||
userAssignedToQueueAsReviewer(queue, actor);
}
public boolean checkPermissionForQueueCreation(@NonNull Authentication authentication) {
@ -103,33 +104,86 @@ public class DataSecurityService {
public boolean checkPermissionForItemReading(
@NonNull Authentication authentication,
@NonNull Item item,
@NonNull BasicItemInfo item,
@Nullable QueueView queueView) {
String actor = UserPrincipalUtility.extractUserId(authentication);
log.debug("User [{}] has attempted to read an item [{}] in queue view [{}].",
actor, item.getId(), queueView == null ? null : queueView.getViewId());
List<String> roles = Objects.requireNonNull(UserPrincipalUtility.extractUserRoles(authentication));
return (queueView == null || itemBelongsToQueue(item, queueView)) &&
(item.isActive() || roles.contains(ADMIN_MANAGER_ROLE)) &&
(userHasAccessToItemAsLockOwner(item, actor) ||
roles.contains(ADMIN_MANAGER_ROLE) ||
roles.contains(SENIOR_ANALYST_ROLE) ||
userHasAccessToQueueViewAsSupervisor(queueView, actor) ||
userHasAccessToQueueViewAsReviewer(queueView, actor));
return roles.contains(ADMIN_MANAGER_ROLE) ||
roles.contains(SENIOR_ANALYST_ROLE) ||
userHasAccessToItemAsLockOwner(item, actor) ||
userHasAccessToItemInQueueView(item, actor, queueView);
}
public boolean checkPermissionForItemReading(
@NonNull Authentication authentication,
@NonNull BasicItemInfo item,
@Nullable Collection<Queue> queues) {
String actor = UserPrincipalUtility.extractUserId(authentication);
log.debug("User [{}] has attempted to read an item [{}].",
actor, item.getId());
List<String> roles = Objects.requireNonNull(UserPrincipalUtility.extractUserRoles(authentication));
return roles.contains(ADMIN_MANAGER_ROLE) ||
roles.contains(SENIOR_ANALYST_ROLE) ||
userHasAccessToItemAsLockOwner(item, actor) ||
(queues != null && queues.stream().anyMatch(queue -> userHasAccessToItemInQueue(item, actor, queue)));
}
public boolean checkPermissionForItemUpdate(
@NonNull Authentication authentication,
@NonNull Item item) {
@NonNull BasicItemInfo item) {
String actor = UserPrincipalUtility.extractUserId(authentication);
log.info("User [{}] has attempted to modify the [{}] item.", actor, item.getId());
return userHasAccessToItemAsLockOwner(item, actor);
}
public boolean checkPermissionForItemUpdateWithoutLock(
@NonNull Authentication authentication,
@NonNull BasicItemInfo item,
@Nullable Collection<Queue> queues) {
return checkPermissionRestrictionForItemUpdateWithoutLock(authentication, item, queues) == null;
}
public String checkPermissionRestrictionForItemUpdateWithoutLock(
@NonNull Authentication authentication,
@NonNull BasicItemInfo item,
@Nullable Collection<Queue> queues) {
List<String> roles = Objects.requireNonNull(UserPrincipalUtility.extractUserRoles(authentication));
String actor = UserPrincipalUtility.extractUserId(authentication);
log.info("User [{}] has attempted to label item [{}] in batch.", actor, item.getId());
if (roles.contains(ADMIN_MANAGER_ROLE)) {
return null;
}
if (item.getLock() != null && item.getLock().getOwnerId() != null && !userHasAccessToItemAsLockOwner(item, actor)) {
return "Item is locked by another analyst.";
}
if (itemIsEscalated(item)) {
return "Item is escalated.";
}
if (roles.contains(SENIOR_ANALYST_ROLE)) {
return null;
}
if (!item.isActive()) {
return "Item is inactive.";
}
if (queues == null) {
return "Item cannot be updated without queue.";
} else if (queues.stream()
.filter(q -> userHasAccessToItemInQueue(item, actor, q))
.findAny()
.isEmpty()) {
return "Item is unavailable for current user.";
}
return null;
}
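The reason-returning variant lets a caller turn a denied update into a per-item report entry instead of a flat rejection; a sketch of the intended pairing with the batch-label report:

String restriction = dataSecurityService
        .checkPermissionRestrictionForItemUpdateWithoutLock(authentication, item, queues);
BatchLabelReportDTO.LabelResult result = new BatchLabelReportDTO.LabelResult(
        item.getId(), restriction == null, restriction);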
public boolean checkPermissionForItemLock(
@NonNull Authentication authentication,
@NonNull Item item,
@NonNull BasicItemInfo item,
@NonNull QueueView queueView) {
List<String> roles = Objects.requireNonNull(UserPrincipalUtility.extractUserRoles(authentication));
String actor = UserPrincipalUtility.extractUserId(authentication);
@ -137,50 +191,102 @@ public class DataSecurityService {
return (item.getLock() == null || item.getLock().getOwnerId() == null) &&
!queueView.getViewType().isAbstract() &&
(roles.contains(ADMIN_MANAGER_ROLE) ||
userHasAccessToQueueViewAsSupervisor(queueView, actor) ||
userHasAccessToQueueViewAsReviewer(queueView, actor));
userAssignedToQueueViewAsSupervisor(queueView, actor) ||
userAssignedToQueueViewAsReviewer(queueView, actor));
}
public boolean checkPermissionForLinkAnalysisCreation(
@NonNull Authentication authentication,
@NonNull LinkAnalysis entry) {
List<String> roles = Objects.requireNonNull(UserPrincipalUtility.extractUserRoles(authentication));
String actor = UserPrincipalUtility.extractUserId(authentication);
log.info("User [{}] attempt to create [{}] link analysis entry.", actor, entry.getId());
return roles.contains(ADMIN_MANAGER_ROLE) ||
(actor != null && actor.equals(entry.getOwnerId()));
}
public boolean checkPermissionForLinkAnalysisRead(
@NonNull Authentication authentication,
@NonNull LinkAnalysis entry) {
List<String> roles = Objects.requireNonNull(UserPrincipalUtility.extractUserRoles(authentication));
String actor = UserPrincipalUtility.extractUserId(authentication);
log.debug("User [{}] attempt to fread [{}] link analysis entry.", actor, entry.getId());
return roles.contains(ADMIN_MANAGER_ROLE) ||
(actor != null && actor.equals(entry.getOwnerId()));
}
private boolean itemBelongsToQueue(
@NonNull final Item item,
@NonNull final QueueView queueView) {
return (queueView.isResidual() && CollectionUtils.isEmpty(item.getQueueIds())) ||
(!queueView.isResidual() && item.getQueueIds().contains(queueView.getQueueId()));
@NonNull final BasicItemInfo item,
@Nullable final Queue queue) {
return queue != null && item.isActive() &&
((queue.isResidual() && CollectionUtils.isEmpty(item.getQueueIds())) ||
(!queue.isResidual() && item.getQueueIds().contains(queue.getId())));
}
private boolean userHasAccessToItemAsLockOwner(
@NonNull final Item item,
@NonNull final String actor) {
return item.getLock() != null &&
@NonNull final BasicItemInfo item,
@Nullable final String actor) {
return actor != null &&
item.getLock() != null &&
actor.equals(item.getLock().getOwnerId());
}
private boolean userHasAccessToQueue(
@NonNull final Queue queue,
@NonNull final String actor) {
return SetUtils.union(
Objects.requireNonNullElse(queue.getSupervisors(), Collections.emptySet()),
Objects.requireNonNullElse(queue.getReviewers(), Collections.emptySet()))
.contains(actor);
private boolean userHasAccessToItemInQueueView(
@NonNull final BasicItemInfo item,
@Nullable final String actor,
@Nullable final QueueView queueView) {
return queueView != null && itemBelongsToQueue(item, queueView.getQueue())
&& (userAssignedToQueueViewAsSupervisor(queueView, actor) ||
(!itemIsEscalated(item) && userAssignedToQueueViewAsReviewer(queueView, actor)));
}
private boolean userHasAccessToQueueViewAsSupervisor(
@Nullable final QueueView queueView,
@NonNull final String actor) {
return queueView != null &&
queueView.getSupervisors() != null &&
queueView.getSupervisors().contains(actor);
private boolean userHasAccessToItemInQueue(
@NonNull final BasicItemInfo item,
@Nullable final String actor,
@Nullable final Queue queue) {
return itemBelongsToQueue(item, queue)
&& (userAssignedToQueueAsSupervisor(queue, actor) ||
(!itemIsEscalated(item) && userAssignedToQueueAsReviewer(queue, actor)));
}
private boolean userHasAccessToQueueViewAsReviewer(
private boolean itemIsEscalated(
@NonNull final BasicItemInfo item) {
return item.getEscalation() != null;
}
private boolean userAssignedToQueueViewAsSupervisor(
@Nullable final QueueView queueView,
@NonNull final String actor) {
@Nullable final String actor) {
return queueView != null &&
queueView.getReviewers() != null &&
queueView.getReviewers().contains(actor) &&
userAssignedToQueueAsSupervisor(queueView.getQueue(), actor);
}
private boolean userAssignedToQueueAsSupervisor(
@Nullable final Queue queue,
@Nullable final String actor) {
return actor != null &&
queue != null &&
queue.getSupervisors() != null &&
queue.getSupervisors().contains(actor);
}
private boolean userAssignedToQueueViewAsReviewer(
@Nullable final QueueView queueView,
@Nullable final String actor) {
return queueView != null &&
userAssignedToQueueAsReviewer(queueView.getQueue(), actor) &&
QueueViewType.REGULAR.equals(queueView.getViewType());
}
private boolean userAssignedToQueueAsReviewer(
@Nullable final Queue queue,
@Nullable final String actor) {
return actor != null &&
queue != null &&
queue.getReviewers() != null &&
queue.getReviewers().contains(actor);
}
private boolean diffContainsNonPropertyChange(
@NonNull final Diff diff) {
return diff.getChangesByType(PropertyChange.class).size() < diff.getChanges().size();

View file

@ -3,6 +3,7 @@
package com.griddynamics.msd365fp.manualreview.queues.service;
import com.griddynamics.msd365fp.manualreview.cosmos.utilities.IdUtility;
import com.griddynamics.msd365fp.manualreview.queues.model.DictionaryType;
import com.griddynamics.msd365fp.manualreview.queues.model.dto.DictionaryValueDTO;
import com.griddynamics.msd365fp.manualreview.queues.model.persistence.DictionaryEntity;
@ -10,11 +11,10 @@ import com.griddynamics.msd365fp.manualreview.queues.repository.DictionaryReposi
import com.griddynamics.msd365fp.manualreview.queues.repository.ItemRepository;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.time.OffsetDateTime;
import java.util.*;
@ -55,7 +55,11 @@ public class DictionaryService {
DictionaryEntity dictionaryEntity = new DictionaryEntity();
dictionaryEntity.setType(type);
dictionaryEntity.setValue(valueDto.getValue());
dictionaryEntity.setId(String.format("%s:%s", type, valueDto.getValue()));
dictionaryEntity.setId(
IdUtility.encodeRestrictedChars(
String.format("%s:%s", type, valueDto.getValue())
)
);
dictionaryEntity.setTtl(type.getField() != null ? dictionaryTtl.toSeconds() : -1);
dictRepository.save(dictionaryEntity);
log.info("Dictionary entity was created: [{}]", dictionaryEntity);
@ -80,11 +84,12 @@ public class DictionaryService {
.collect(Collectors.toMap(DictionaryEntity::getValue, entity -> entity));
valuesFromData.stream()
.filter(Objects::nonNull)
.filter(value -> dictEntities.get(value) == null || dictEntities.get(value).getConfirmed() == null)
.forEach(value -> {
DictionaryEntity toSave = null;
toSave = dictEntities.getOrDefault(value, DictionaryEntity.builder()
.id(String.format("%s:%s", type, URLEncoder.encode(value, StandardCharsets.UTF_8).replace('%','.')))
.id(String.format("%s:%s", type, IdUtility.encodeRestrictedChars(value)))
.type(type)
.value(value)
.build());
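IdUtility.encodeRestrictedChars replaces the inline URLEncoder call shown above; judging from the code it supersedes, it escapes characters that are illegal in Cosmos DB document IDs ('/', '\', '?', '#') so arbitrary dictionary values embed safely. A sketch of the assumed behavior:

// Assumed behavior, mirroring the URLEncoder-based line it replaces:
// percent-encode the value, then swap '%' for an ID-safe character.
String id = String.format("%s:%s", type, IdUtility.encodeRestrictedChars("a/b?c"));
// e.g. "TAG:a.2Fb.3Fc"; the exact escape character is an implementation detail.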

View file

@ -3,6 +3,7 @@
package com.griddynamics.msd365fp.manualreview.queues.service;
import com.griddynamics.msd365fp.manualreview.cosmos.utilities.IdUtility;
import com.griddynamics.msd365fp.manualreview.model.DisposabilityCheck;
import com.griddynamics.msd365fp.manualreview.model.DisposabilityCheckServiceResponse;
import com.griddynamics.msd365fp.manualreview.queues.model.persistence.EmailDomain;
@ -16,10 +17,7 @@ import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import java.time.Duration;
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.Optional;
import java.util.*;
import java.util.stream.Collectors;
@Slf4j
@ -40,6 +38,7 @@ public class EmailDomainService {
@Retry(name = "cosmosOptimisticUpdate")
public void saveEmailDomain(String emailDomainName, DisposabilityCheck disposabilityCheck) {
EmailDomain emailDomain = EmailDomain.builder()
.id(IdUtility.encodeRestrictedChars(emailDomainName))
.emailDomainName(emailDomainName)
.disposabilityCheck(disposabilityCheck)
.ttl(emailDomainTtl.toSeconds())
@ -49,11 +48,13 @@ public class EmailDomainService {
public DisposabilityCheck checkDisposability(String emailDomain) {
//try to get information from cache
Optional<DisposabilityCheck> disposableDomain = getDisposabilityCheck(emailDomain);
if (disposableDomain.isPresent()) {
return disposableDomain.get();
Iterator<EmailDomain> emailDomainIterator = emailDomainRepository.findByEmailDomainName(emailDomain)
.iterator();
if (emailDomainIterator.hasNext()) {
return emailDomainIterator.next().getDisposabilityCheck();
}
//call third-party services
DisposabilityCheckServiceResponse responseFromKickbox = kickboxEmailDomainCheckProvider.check(emailDomain);
DisposabilityCheckServiceResponse responseFromNameApi = nameApiEmailDomainCheckProvider.check(emailDomain);
DisposabilityCheck result = mergeDisposabilityChecks(responseFromKickbox, responseFromNameApi);
@ -66,11 +67,6 @@ public class EmailDomainService {
return result;
}
private Optional<DisposabilityCheck> getDisposabilityCheck(final String emailDomainName) {
return emailDomainRepository.findById(emailDomainName)
.map(EmailDomain::getDisposabilityCheck);
}
private DisposabilityCheck mergeDisposabilityChecks(
DisposabilityCheckServiceResponse... disposabilityCheckServiceResponses
) {

View file

@ -73,6 +73,8 @@ public class ItemEnrichmentService {
private Integer maxEnrichmentAttempts;
@Setter(onMethod = @__({@Value("${mr.tasks.item-enrichment-task.history-depth}")}))
private int historyDepth;
@Setter(onMethod = @__({@Value("${azure.cosmosdb.default-ttl}")}))
private Duration defaultTtl;
private final GeodeticCalculator geoCalc = new GeodeticCalculator();
@ -124,6 +126,7 @@ public class ItemEnrichmentService {
if (item.getImported().plus(maxEnrichmentDelay).isBefore(OffsetDateTime.now())) {
log.error("Item [{}] can't be enriched during max delay period.", itemId);
item.setEnrichmentFailed(true);
item.setTtl(defaultTtl.toSeconds());
item.setEnrichmentFailReason("There is no purchase information in DFP during maximum delay period");
itemRepository.save(item);
} else {
@ -181,6 +184,7 @@ public class ItemEnrichmentService {
if (maxEnrichmentAttempts <= item.getEnrichmentAttempts()) {
log.error("Item [{}] can't be enriched during max attempts and marked as failed.", itemId);
item.setEnrichmentFailed(true);
item.setTtl(defaultTtl.toSeconds());
item.setEnrichmentFailReason("Can't be enriched during max attempts");
}
itemRepository.save(item);
@ -195,7 +199,7 @@ public class ItemEnrichmentService {
.collect(Collectors.toSet());
if (actualPurchaseHistory.size() < historyDepth) {
actualPurchaseHistory = mainPurchase.getPreviousPurchaseList().stream()
.sorted(Comparator.comparing(PreviousPurchase::getMerchantLocalDate))
.sorted(Comparator.comparing(PreviousPurchase::getMerchantLocalDate).reversed())
.limit(historyDepth)
.collect(Collectors.toSet());
}
@ -651,125 +655,128 @@ public class ItemEnrichmentService {
.collect(Collectors.toList()));
}
if (purchase.getPreviousPurchaseList() != null) {
Set<PreviousPurchase> lastWeekPreviousPurchases = purchase.getPreviousPurchaseList().stream()
.filter(pp -> pp.getMerchantLocalDate().isAfter(purchase.getMerchantLocalDate().minusWeeks(1)))
.collect(Collectors.toSet());
Set<PreviousPurchase> lastDayPreviousPurchases = lastWeekPreviousPurchases.stream()
.filter(pp -> pp.getMerchantLocalDate().isAfter(purchase.getMerchantLocalDate().minusDays(1)))
.collect(Collectors.toSet());
Set<PreviousPurchase> lastHourPreviousPurchases = lastDayPreviousPurchases.stream()
.filter(pp -> pp.getMerchantLocalDate().isAfter(purchase.getMerchantLocalDate().minusHours(1)))
.collect(Collectors.toSet());
calculatedFields.setTransactionCount(new Velocity<>(
(long) lastHourPreviousPurchases.size(),
(long) lastDayPreviousPurchases.size(),
(long) lastWeekPreviousPurchases.size()));
calculatedFields.setTransactionAmount(new Velocity<>(
getPurchaseSetSumAmount(lastHourPreviousPurchases),
getPurchaseSetSumAmount(lastDayPreviousPurchases),
getPurchaseSetSumAmount(lastWeekPreviousPurchases)));
Set<PreviousPurchase> lastHourRejectedPreviousPurchases = lastHourPreviousPurchases.stream()
.filter(pp -> REJECTED_TRANSACTION_STATUS.equalsIgnoreCase(pp.getLastMerchantStatus()))
.collect(Collectors.toSet());
Set<PreviousPurchase> lastDayRejectedPreviousPurchases = lastDayPreviousPurchases.stream()
.filter(pp -> REJECTED_TRANSACTION_STATUS.equalsIgnoreCase(pp.getLastMerchantStatus()))
.collect(Collectors.toSet());
Set<PreviousPurchase> lastWeekRejectedPreviousPurchases = lastWeekPreviousPurchases.stream()
.filter(pp -> REJECTED_TRANSACTION_STATUS.equalsIgnoreCase(pp.getLastMerchantStatus()))
.collect(Collectors.toSet());
calculatedFields.setRejectedTransactionCount(new Velocity<>(
(long) lastHourRejectedPreviousPurchases.size(),
(long) lastDayRejectedPreviousPurchases.size(),
(long) lastWeekRejectedPreviousPurchases.size()));
calculatedFields.setRejectedTransactionAmount(new Velocity<>(
getPurchaseSetSumAmount(lastHourRejectedPreviousPurchases),
getPurchaseSetSumAmount(lastDayRejectedPreviousPurchases),
getPurchaseSetSumAmount(lastWeekRejectedPreviousPurchases)));
Set<PreviousPurchase> lastHourFailedPreviousPurchases = lastHourPreviousPurchases.stream()
.filter(pp -> FAILED_TRANSACTION_STATUS.equalsIgnoreCase(pp.getLastMerchantStatus()))
.collect(Collectors.toSet());
Set<PreviousPurchase> lastDayFailedPreviousPurchases = lastDayPreviousPurchases.stream()
.filter(pp -> FAILED_TRANSACTION_STATUS.equalsIgnoreCase(pp.getLastMerchantStatus()))
.collect(Collectors.toSet());
Set<PreviousPurchase> lastWeekFailedPreviousPurchases = lastWeekPreviousPurchases.stream()
.filter(pp -> FAILED_TRANSACTION_STATUS.equalsIgnoreCase(pp.getLastMerchantStatus()))
.collect(Collectors.toSet());
calculatedFields.setFailedTransactionCount(new Velocity<>(
(long) lastHourFailedPreviousPurchases.size(),
(long) lastDayFailedPreviousPurchases.size(),
(long) lastWeekFailedPreviousPurchases.size()));
calculatedFields.setFailedTransactionAmount(new Velocity<>(
getPurchaseSetSumAmount(lastHourFailedPreviousPurchases),
getPurchaseSetSumAmount(lastDayFailedPreviousPurchases),
getPurchaseSetSumAmount(lastWeekFailedPreviousPurchases)));
Set<PreviousPurchase> lastHourSuccessfulPreviousPurchases = lastHourPreviousPurchases.stream()
.filter(pp -> APPROVED_TRANSACTION_STATUS.equalsIgnoreCase(pp.getLastMerchantStatus()))
.collect(Collectors.toSet());
Set<PreviousPurchase> lastDaySuccessfulPreviousPurchases = lastDayPreviousPurchases.stream()
.filter(pp -> APPROVED_TRANSACTION_STATUS.equalsIgnoreCase(pp.getLastMerchantStatus()))
.collect(Collectors.toSet());
Set<PreviousPurchase> lastWeekSuccessfulPreviousPurchases = lastWeekPreviousPurchases.stream()
.filter(pp -> APPROVED_TRANSACTION_STATUS.equalsIgnoreCase(pp.getLastMerchantStatus()))
.collect(Collectors.toSet());
calculatedFields.setSuccessfulTransactionCount(new Velocity<>(
(long) lastHourSuccessfulPreviousPurchases.size(),
(long) lastDaySuccessfulPreviousPurchases.size(),
(long) lastWeekSuccessfulPreviousPurchases.size()));
calculatedFields.setSuccessfulTransactionAmount(new Velocity<>(
getPurchaseSetSumAmount(lastHourSuccessfulPreviousPurchases),
getPurchaseSetSumAmount(lastDaySuccessfulPreviousPurchases),
getPurchaseSetSumAmount(lastWeekSuccessfulPreviousPurchases)));
calculatedFields.setUniquePaymentInstrumentCount(new Velocity<>(
getUniquePaymentInstrumentCount(lastHourPreviousPurchases),
getUniquePaymentInstrumentCount(lastDayPreviousPurchases),
getUniquePaymentInstrumentCount(lastWeekPreviousPurchases)));
if (purchase.getPaymentInstrumentList() != null) {
Set<String> currentPurchasePaymentInstrumentIds = purchase.getPaymentInstrumentList().stream()
.map(PaymentInstrument::getPaymentInstrumentId)
.collect(Collectors.toSet());
Set<PreviousPurchase> lastHourTransactionWithCurrentPaymentInstrument =
filterPreviousPurchasesByPIUsage(lastHourPreviousPurchases, currentPurchasePaymentInstrumentIds);
Set<PreviousPurchase> lastDayTransactionWithCurrentPaymentInstrument =
filterPreviousPurchasesByPIUsage(lastDayPreviousPurchases, currentPurchasePaymentInstrumentIds);
Set<PreviousPurchase> lastWeekTransactionWithCurrentPaymentInstrument =
filterPreviousPurchasesByPIUsage(lastWeekPreviousPurchases, currentPurchasePaymentInstrumentIds);
calculatedFields.setCurrentPaymentInstrumentTransactionCount(new Velocity<>(
(long) lastHourTransactionWithCurrentPaymentInstrument.size(),
(long) lastDayTransactionWithCurrentPaymentInstrument.size(),
(long) lastWeekTransactionWithCurrentPaymentInstrument.size()));
calculatedFields.setCurrentPaymentInstrumentTransactionAmount(new Velocity<>(
getPurchaseSetSumAmount(lastHourTransactionWithCurrentPaymentInstrument),
getPurchaseSetSumAmount(lastDayTransactionWithCurrentPaymentInstrument),
getPurchaseSetSumAmount(lastWeekTransactionWithCurrentPaymentInstrument)));
}
calculatedFields.setUniqueIPCountries(new Velocity<>(
countUniqueIPCountriesInPreviousPurchases(lastHourPreviousPurchases),
countUniqueIPCountriesInPreviousPurchases(lastDayPreviousPurchases),
countUniqueIPCountriesInPreviousPurchases(lastWeekPreviousPurchases)));
if (purchase.getPreviousPurchaseList() == null) {
purchase.setPreviousPurchaseList(new LinkedList<>());
}
Set<PreviousPurchase> lastWeekPreviousPurchases = purchase.getPreviousPurchaseList().stream()
.filter(pp -> pp.getMerchantLocalDate().isAfter(purchase.getMerchantLocalDate().minusWeeks(1)))
.collect(Collectors.toSet());
Set<PreviousPurchase> lastDayPreviousPurchases = lastWeekPreviousPurchases.stream()
.filter(pp -> pp.getMerchantLocalDate().isAfter(purchase.getMerchantLocalDate().minusDays(1)))
.collect(Collectors.toSet());
Set<PreviousPurchase> lastHourPreviousPurchases = lastDayPreviousPurchases.stream()
.filter(pp -> pp.getMerchantLocalDate().isAfter(purchase.getMerchantLocalDate().minusHours(1)))
.collect(Collectors.toSet());
calculatedFields.setTransactionCount(new Velocity<>(
(long) lastHourPreviousPurchases.size(),
(long) lastDayPreviousPurchases.size(),
(long) lastWeekPreviousPurchases.size()));
calculatedFields.setTransactionAmount(new Velocity<>(
getPurchaseSetSumAmount(lastHourPreviousPurchases),
getPurchaseSetSumAmount(lastDayPreviousPurchases),
getPurchaseSetSumAmount(lastWeekPreviousPurchases)));
Set<PreviousPurchase> lastHourRejectedPreviousPurchases = lastHourPreviousPurchases.stream()
.filter(pp -> REJECTED_TRANSACTION_STATUS.equalsIgnoreCase(pp.getLastMerchantStatus()))
.collect(Collectors.toSet());
Set<PreviousPurchase> lastDayRejectedPreviousPurchases = lastDayPreviousPurchases.stream()
.filter(pp -> REJECTED_TRANSACTION_STATUS.equalsIgnoreCase(pp.getLastMerchantStatus()))
.collect(Collectors.toSet());
Set<PreviousPurchase> lastWeekRejectedPreviousPurchases = lastWeekPreviousPurchases.stream()
.filter(pp -> REJECTED_TRANSACTION_STATUS.equalsIgnoreCase(pp.getLastMerchantStatus()))
.collect(Collectors.toSet());
calculatedFields.setRejectedTransactionCount(new Velocity<>(
(long) lastHourRejectedPreviousPurchases.size(),
(long) lastDayRejectedPreviousPurchases.size(),
(long) lastWeekRejectedPreviousPurchases.size()));
calculatedFields.setRejectedTransactionAmount(new Velocity<>(
getPurchaseSetSumAmount(lastHourRejectedPreviousPurchases),
getPurchaseSetSumAmount(lastDayRejectedPreviousPurchases),
getPurchaseSetSumAmount(lastWeekRejectedPreviousPurchases)));
Set<PreviousPurchase> lastHourFailedPreviousPurchases = lastHourPreviousPurchases.stream()
.filter(pp -> FAILED_TRANSACTION_STATUS.equalsIgnoreCase(pp.getLastMerchantStatus()))
.collect(Collectors.toSet());
Set<PreviousPurchase> lastDayFailedPreviousPurchases = lastDayPreviousPurchases.stream()
.filter(pp -> FAILED_TRANSACTION_STATUS.equalsIgnoreCase(pp.getLastMerchantStatus()))
.collect(Collectors.toSet());
Set<PreviousPurchase> lastWeekFailedPreviousPurchases = lastWeekPreviousPurchases.stream()
.filter(pp -> FAILED_TRANSACTION_STATUS.equalsIgnoreCase(pp.getLastMerchantStatus()))
.collect(Collectors.toSet());
calculatedFields.setFailedTransactionCount(new Velocity<>(
(long) lastHourFailedPreviousPurchases.size(),
(long) lastDayFailedPreviousPurchases.size(),
(long) lastWeekFailedPreviousPurchases.size()));
calculatedFields.setFailedTransactionAmount(new Velocity<>(
getPurchaseSetSumAmount(lastHourFailedPreviousPurchases),
getPurchaseSetSumAmount(lastDayFailedPreviousPurchases),
getPurchaseSetSumAmount(lastWeekFailedPreviousPurchases)));
Set<PreviousPurchase> lastHourSuccessfulPreviousPurchases = lastHourPreviousPurchases.stream()
.filter(pp -> APPROVED_TRANSACTION_STATUS.equalsIgnoreCase(pp.getLastMerchantStatus()))
.collect(Collectors.toSet());
Set<PreviousPurchase> lastDaySuccessfulPreviousPurchases = lastDayPreviousPurchases.stream()
.filter(pp -> APPROVED_TRANSACTION_STATUS.equalsIgnoreCase(pp.getLastMerchantStatus()))
.collect(Collectors.toSet());
Set<PreviousPurchase> lastWeekSuccessfulPreviousPurchases = lastWeekPreviousPurchases.stream()
.filter(pp -> APPROVED_TRANSACTION_STATUS.equalsIgnoreCase(pp.getLastMerchantStatus()))
.collect(Collectors.toSet());
calculatedFields.setSuccessfulTransactionCount(new Velocity<>(
(long) lastHourSuccessfulPreviousPurchases.size(),
(long) lastDaySuccessfulPreviousPurchases.size(),
(long) lastWeekSuccessfulPreviousPurchases.size()));
calculatedFields.setSuccessfulTransactionAmount(new Velocity<>(
getPurchaseSetSumAmount(lastHourSuccessfulPreviousPurchases),
getPurchaseSetSumAmount(lastDaySuccessfulPreviousPurchases),
getPurchaseSetSumAmount(lastWeekSuccessfulPreviousPurchases)));
calculatedFields.setUniquePaymentInstrumentCount(new Velocity<>(
getUniquePaymentInstrumentCount(lastHourPreviousPurchases),
getUniquePaymentInstrumentCount(lastDayPreviousPurchases),
getUniquePaymentInstrumentCount(lastWeekPreviousPurchases)));
if (purchase.getPaymentInstrumentList() == null) {
purchase.setPaymentInstrumentList(new LinkedList<>());
}
Set<String> currentPurchasePaymentInstrumentIds = purchase.getPaymentInstrumentList().stream()
.map(PaymentInstrument::getPaymentInstrumentId)
.collect(Collectors.toSet());
Set<PreviousPurchase> lastHourTransactionWithCurrentPaymentInstrument =
filterPreviousPurchasesByPIUsage(lastHourPreviousPurchases, currentPurchasePaymentInstrumentIds);
Set<PreviousPurchase> lastDayTransactionWithCurrentPaymentInstrument =
filterPreviousPurchasesByPIUsage(lastDayPreviousPurchases, currentPurchasePaymentInstrumentIds);
Set<PreviousPurchase> lastWeekTransactionWithCurrentPaymentInstrument =
filterPreviousPurchasesByPIUsage(lastWeekPreviousPurchases, currentPurchasePaymentInstrumentIds);
calculatedFields.setCurrentPaymentInstrumentTransactionCount(new Velocity<>(
(long) lastHourTransactionWithCurrentPaymentInstrument.size(),
(long) lastDayTransactionWithCurrentPaymentInstrument.size(),
(long) lastWeekTransactionWithCurrentPaymentInstrument.size()));
calculatedFields.setCurrentPaymentInstrumentTransactionAmount(new Velocity<>(
getPurchaseSetSumAmount(lastHourTransactionWithCurrentPaymentInstrument),
getPurchaseSetSumAmount(lastDayTransactionWithCurrentPaymentInstrument),
getPurchaseSetSumAmount(lastWeekTransactionWithCurrentPaymentInstrument)));
calculatedFields.setUniqueIPCountries(new Velocity<>(
countUniqueIPCountriesInPreviousPurchases(lastHourPreviousPurchases),
countUniqueIPCountriesInPreviousPurchases(lastDayPreviousPurchases),
countUniqueIPCountriesInPreviousPurchases(lastWeekPreviousPurchases)));
item.getPurchase().setCalculatedFields(calculatedFields);
}
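A worked example of the nested velocity windows above (hypothetical data):

// Current purchase at time T; previous purchases at T-30m, T-5h and T-3d.
// hour window -> {T-30m}             : count 1
// day window  -> {T-30m, T-5h}       : count 2
// week window -> {T-30m, T-5h, T-3d} : count 3
// so transactionCount becomes new Velocity<>(1L, 2L, 3L).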
@ -777,6 +784,7 @@ public class ItemEnrichmentService {
return lastHourPreviousPurchases.stream()
.filter(pp -> pp.getPaymentInstrumentList() != null)
.flatMap(pp -> pp.getPaymentInstrumentList().stream())
.filter(pi -> pi.getPaymentInstrumentId() != null)
.map(PaymentInstrument::getPaymentInstrumentId)
.distinct()
.count();
@ -784,7 +792,7 @@ public class ItemEnrichmentService {
private BigDecimal getPurchaseSetSumAmount(Set<? extends Purchase> purchases) {
return purchases.stream()
.map(Purchase::getTotalAmountInUSD)
.map(p -> Objects.requireNonNullElse(p.getTotalAmountInUSD(), BigDecimal.ZERO))
.reduce(BigDecimal::add)
.orElse(BigDecimal.ZERO);
}
@ -793,6 +801,7 @@ public class ItemEnrichmentService {
return purchases.stream()
.filter(pp -> pp.getPaymentInstrumentList() != null)
.filter(pp -> pp.getPaymentInstrumentList().stream()
.filter(pi -> pi.getPaymentInstrumentId() != null)
.map(PaymentInstrument::getPaymentInstrumentId)
.anyMatch(piIds::contains))
.collect(Collectors.toSet());

View file

@ -3,16 +3,20 @@
package com.griddynamics.msd365fp.manualreview.queues.service;
import com.griddynamics.msd365fp.manualreview.cosmos.utilities.IdUtility;
import com.griddynamics.msd365fp.manualreview.cosmos.utilities.PageProcessingUtility;
import com.griddynamics.msd365fp.manualreview.model.ItemLabel;
import com.griddynamics.msd365fp.manualreview.model.ItemLock;
import com.griddynamics.msd365fp.manualreview.model.ItemNote;
import com.griddynamics.msd365fp.manualreview.model.PageableCollection;
import com.griddynamics.msd365fp.manualreview.model.event.Event;
import com.griddynamics.msd365fp.manualreview.model.event.dfp.PurchaseEventBatch;
import com.griddynamics.msd365fp.manualreview.model.event.internal.ItemResolutionEvent;
import com.griddynamics.msd365fp.manualreview.model.event.type.LockActionType;
import com.griddynamics.msd365fp.manualreview.model.exception.BusyException;
import com.griddynamics.msd365fp.manualreview.model.exception.NotFoundException;
import com.griddynamics.msd365fp.manualreview.queues.model.ItemDataField;
import com.griddynamics.msd365fp.manualreview.queues.model.ItemEvent;
import com.griddynamics.msd365fp.manualreview.queues.model.dto.ItemDTO;
import com.griddynamics.msd365fp.manualreview.queues.model.persistence.Item;
import com.griddynamics.msd365fp.manualreview.queues.model.persistence.Queue;
@ -26,6 +30,7 @@ import lombok.RequiredArgsConstructor;
import lombok.Setter;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.SerializationUtils;
import org.apache.commons.lang3.tuple.ImmutablePair;
import org.modelmapper.ModelMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
@ -34,6 +39,8 @@ import org.springframework.lang.NonNull;
import org.springframework.lang.Nullable;
import org.springframework.stereotype.Service;
import org.springframework.util.CollectionUtils;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;
import java.time.Duration;
import java.time.OffsetDateTime;
@ -60,27 +67,28 @@ public class ItemService {
public void saveEmptyItem(PurchaseEventBatch eventBatch) {
eventBatch.forEach(event -> {
String id = event.getEventId();
log.info("Event [{}] has been received from the DFP rule [{}].", id, event.getRuleName());
String purchaseId = event.getEventId();
String itemId = IdUtility.encodeRestrictedChars(purchaseId);
log.info("Event [{}] has been received from the DFP rule [{}].", purchaseId, event.getRuleName());
if ("purchase".equalsIgnoreCase(event.getEventType())) {
// Create and save the item
Item item = Item.builder()
.id(id)
.id(itemId)
.active(false)
.imported(OffsetDateTime.now())
._etag(id)
._etag(UUID.randomUUID().toString())
.build();
try {
itemRepository.save(item);
log.info("Item [{}] has been saved to the storage.", id);
log.info("Item [{}] has been saved to the storage.", itemId);
} catch (CosmosDBAccessException e) {
log.info("Item [{}] has not been saved to the storage because it's already exist.", id);
log.info("Item [{}] has not been saved to the storage because it's already exist.", itemId);
} catch (Exception e) {
log.warn("Item [{}] has not been saved to the storage: {}", id, e.getMessage());
log.warn("Item [{}] has not been saved to the storage: {}", itemId, e.getMessage());
}
} else {
log.info("The event type of [{}] is [{}]. The event has been ignored.", id, event.getEventType());
log.info("The event type of [{}] is [{}]. The event has been ignored.", purchaseId, event.getEventType());
}
});
}
@ -290,6 +298,46 @@ public class ItemService {
}
}
/**
* Initializes resolution sending.
*
* @return true if resolution sending has been initiated
*/
public boolean sendResolutions() throws BusyException {
log.info("Start resolution sending.");
PageProcessingUtility.executeForAllPages(
continuation -> itemRepository.findUnreportedItems(
DEFAULT_ITEM_PAGE_SIZE,
continuation),
itemCollection -> {
Set<ImmutablePair<String, ItemEvent>> eventCollection = itemCollection.stream()
.flatMap(item -> item.getEvents().stream()
.filter(event -> ItemResolutionEvent.class.equals(event.getKlass()))
.map(event -> new ImmutablePair<>(item.getId(), event)))
.collect(Collectors.toSet());
log.info("Start resolution sending for [{}].", eventCollection);
Set<Mono<Void>> executions = eventCollection.stream()
.map(eventTuple -> streamService.sendItemResolvedEvent(eventTuple.getRight().getEvent())
.doOnSuccess(v -> thisService.deleteEventFromItem(eventTuple.getLeft(), eventTuple.getRight().getSendingId())))
.collect(Collectors.toSet());
Mono.zipDelayError(executions, r -> r)
.subscribeOn(Schedulers.boundedElastic())
.subscribe();
});
return true;
}
@Retry(name = "cosmosOptimisticUpdate")
protected void deleteEventFromItem(String itemId, String sendingId) {
Optional<Item> itemOptional = itemRepository.findById(itemId);
itemOptional.ifPresent(item -> {
if (item.getEvents().removeIf(event -> sendingId.equals(event.getSendingId()))) {
itemRepository.save(item);
log.info("Event [{}] were reported from item [{}].", sendingId, itemId);
}
});
}
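The flow introduced here is effectively a transactional outbox: labeling stores the resolution event on the item, this task drains unreported events, and each event is deleted only after a successful send, so failures are retried on the next run. A minimal sketch of that flow with simplified stand-in types (OutboxItem/OutboxEvent below are illustrations, not the real Item/ItemEvent classes):

import java.util.*;
import java.util.concurrent.CompletableFuture;

class OutboxEvent {
    final String sendingId = UUID.randomUUID().toString();
    final String payload;
    OutboxEvent(String payload) { this.payload = payload; }
}

class OutboxItem {
    final String id;
    final List<OutboxEvent> events = new ArrayList<>();
    OutboxItem(String id) { this.id = id; }
}

public class ResolutionOutboxSketch {
    // Labeling appends the event instead of sending it inline.
    static void label(OutboxItem item) {
        item.events.add(new OutboxEvent("resolution:" + item.id));
    }

    // The background task drains events; a successful send removes the event,
    // so anything that failed stays on the item and is retried on the next run.
    static void sendResolutions(OutboxItem item) {
        for (OutboxEvent e : List.copyOf(item.events)) {
            CompletableFuture
                    .runAsync(() -> System.out.println("sent " + e.payload)) // stand-in for the Event Hub send
                    .thenRun(() -> item.events.removeIf(x -> x.sendingId.equals(e.sendingId)))
                    .join();
        }
    }

    public static void main(String[] args) {
        OutboxItem item = new OutboxItem("item-1");
        label(item);
        sendResolutions(item);
        System.out.println("events left: " + item.events.size()); // prints 0
    }
}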
/**
* Unlocks all items with expired lock timestamps.
*

View file

@ -9,10 +9,14 @@ import com.griddynamics.msd365fp.manualreview.model.PageableCollection;
import com.griddynamics.msd365fp.manualreview.model.exception.BusyException;
import com.griddynamics.msd365fp.manualreview.model.exception.EmptySourceException;
import com.griddynamics.msd365fp.manualreview.model.exception.NotFoundException;
import com.griddynamics.msd365fp.manualreview.queues.model.BasicItemInfo;
import com.griddynamics.msd365fp.manualreview.queues.model.BatchUpdateResult;
import com.griddynamics.msd365fp.manualreview.queues.model.QueueView;
import com.griddynamics.msd365fp.manualreview.queues.model.QueueViewType;
import com.griddynamics.msd365fp.manualreview.queues.model.persistence.Item;
import com.griddynamics.msd365fp.manualreview.queues.model.persistence.Queue;
import com.griddynamics.msd365fp.manualreview.queues.repository.ItemRepository;
import com.microsoft.azure.spring.data.cosmosdb.exception.CosmosDBAccessException;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections4.CollectionUtils;
@ -26,10 +30,9 @@ import org.springframework.security.access.prepost.PreFilter;
import org.springframework.stereotype.Service;
import java.time.OffsetDateTime;
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.*;
import java.util.function.BiConsumer;
import java.util.function.Function;
import java.util.stream.Collectors;
import static com.griddynamics.msd365fp.manualreview.queues.config.Constants.*;
@ -40,10 +43,11 @@ import static com.griddynamics.msd365fp.manualreview.queues.config.Constants.*;
public class PublicItemClient {
private final ItemRepository itemRepository;
private final DataSecurityService dataSecurityService;
@PostFilter("@dataSecurityService.checkPermissionForItemReading(authentication, filterObject, #queueView)")
public PageableCollection<Item> getActiveItemPageableList(
public PageableCollection<Item> getQueueViewItemList(
@NonNull QueueView queueView,
int pageSize,
@Nullable String continuationToken) throws BusyException {
@ -71,6 +75,24 @@ public class PublicItemClient {
});
}
@PostFilter("@dataSecurityService.checkPermissionForItemReading(authentication, filterObject, #queues)")
public Collection<BasicItemInfo> getItemInfoByIds(
@NonNull Set<String> ids,
@Nullable Collection<Queue> queues) throws BusyException {
return PageProcessingUtility.getAllPages(
continuation -> itemRepository.findEnrichedItemInfoByIds(ids, DEFAULT_ITEM_INFO_PAGE_SIZE, continuation));
}
@PostFilter("@dataSecurityService.checkPermissionForItemReading(authentication, filterObject, #queues)")
public Collection<Item> getItemListByIds(
@NonNull Set<String> ids,
@Nullable Collection<Queue> queues) throws BusyException {
return PageProcessingUtility.getAllPages(
continuation -> itemRepository.findEnrichedItemsByIds(ids, DEFAULT_ITEM_PAGE_SIZE, continuation));
}
@PostFilter("@dataSecurityService.checkPermissionForItemReading(authentication, filterObject, #queueView)")
public PageableCollection<Item> getLockedItemPageableList(
@ -186,6 +208,48 @@ public class PublicItemClient {
}
public <T> Collection<BatchUpdateResult> batchUpdate(
@NonNull Set<String> ids,
@Nullable Collection<Queue> queues,
@NonNull Function<Item, T> modifier,
@NonNull BiConsumer<Item, T> postprocessor) throws BusyException {
Map<String, BatchUpdateResult> results = new HashMap<>();
PageProcessingUtility.executeForAllPages(continuationToken ->
itemRepository.findEnrichedItemsByIds(ids, DEFAULT_ITEM_PAGE_SIZE, continuationToken),
items -> {
for (Item item : items) {
String check = dataSecurityService.checkPermissionRestrictionForItemUpdateWithoutLock(UserPrincipalUtility.getAuth(), item, queues);
if (check != null) {
results.put(item.getId(), new BatchUpdateResult(item.getId(), false, check));
} else {
try {
T context = modifier.apply(item);
item = itemRepository.save(item);
postprocessor.accept(item, context);
results.put(item.getId(), new BatchUpdateResult(item.getId(), true, "Successfully updated."));
log.info("Item [{}] has been modified in batch operation.", item.getId());
} catch (CosmosDBAccessException e) {
results.put(item.getId(), new BatchUpdateResult(item.getId(), false, "Item has been modified by another process."));
} catch (Exception e) {
log.warn("Exception during bulk operation for item [{}]: {}", item.getId(), e.getMessage());
log.warn("Exception during bulk operation for item [{}]", item.getId(), e);
results.put(item.getId(), new BatchUpdateResult(item.getId(), false, "Internal exception."));
}
}
}
}
);
ids.stream()
.filter(id -> !results.containsKey(id))
.forEach(id -> results.put(id, new BatchUpdateResult(id, false, "Item not found.")));
return results.values();
}
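The modifier/postprocessor pair above encodes the optimistic-update contract: the modifier mutates the item and returns whatever context the postprocessor needs after a successful save, while a CosmosDBAccessException from save() is reported back as a per-item conflict instead of failing the whole batch. A hedged caller sketch (everything except the batchUpdate signature is illustrative):

// Sketch of a caller: capture pre-update state as context, emit an audit line after each save.
Collection<BatchUpdateResult> results = publicItemClient.batchUpdate(
        Set.of("item-1", "item-2"),
        queues,
        item -> {
            String before = String.valueOf(item.getLabel()); // context handed to the postprocessor
            item.getNotes().add(ItemNote.builder().note("bulk edit").build()); // illustrative mutation
            return before;
        },
        (saved, before) -> log.info("Item [{}] updated in batch; label was [{}].", saved.getId(), before));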
@PreAuthorize("@dataSecurityService.checkPermissionForItemLock(authentication, #item, #queueView)")
public void lockItem(@NonNull QueueView queueView, @NonNull Item item) {
item.lock(queueView.getQueueId(), queueView.getViewId(), UserPrincipalUtility.getUserId());

View file

@ -6,16 +6,15 @@ package com.griddynamics.msd365fp.manualreview.queues.service;
import com.griddynamics.msd365fp.manualreview.cosmos.utilities.PageProcessingUtility;
import com.griddynamics.msd365fp.manualreview.dfpauth.util.UserPrincipalUtility;
import com.griddynamics.msd365fp.manualreview.model.*;
import com.griddynamics.msd365fp.manualreview.model.event.internal.ItemResolutionEvent;
import com.griddynamics.msd365fp.manualreview.model.event.type.LockActionType;
import com.griddynamics.msd365fp.manualreview.model.exception.BusyException;
import com.griddynamics.msd365fp.manualreview.model.exception.EmptySourceException;
import com.griddynamics.msd365fp.manualreview.model.exception.IncorrectConditionException;
import com.griddynamics.msd365fp.manualreview.model.exception.NotFoundException;
import com.griddynamics.msd365fp.manualreview.queues.model.ItemEvent;
import com.griddynamics.msd365fp.manualreview.queues.model.QueueView;
import com.griddynamics.msd365fp.manualreview.queues.model.dto.ItemDTO;
import com.griddynamics.msd365fp.manualreview.queues.model.dto.LabelDTO;
import com.griddynamics.msd365fp.manualreview.queues.model.dto.NoteDTO;
import com.griddynamics.msd365fp.manualreview.queues.model.dto.TagDTO;
import com.griddynamics.msd365fp.manualreview.queues.model.dto.*;
import com.griddynamics.msd365fp.manualreview.queues.model.persistence.Item;
import com.griddynamics.msd365fp.manualreview.queues.model.persistence.Queue;
import io.github.resilience4j.retry.annotation.Retry;
@ -32,10 +31,7 @@ import org.springframework.stereotype.Service;
import java.time.Duration;
import java.time.OffsetDateTime;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Set;
import java.util.*;
import java.util.stream.Collectors;
import static com.griddynamics.msd365fp.manualreview.queues.config.Constants.*;
@ -68,7 +64,7 @@ public class PublicItemService {
@Nullable final String continuationToken) throws NotFoundException, BusyException {
QueueView queueView = publicQueueClient.getActiveQueueView(queueId);
PageableCollection<Item> queriedItems =
publicItemClient.getActiveItemPageableList(queueView, pageSize, continuationToken);
publicItemClient.getQueueViewItemList(queueView, pageSize, continuationToken);
return new PageableCollection<>(
queriedItems.getValues()
.stream()
@ -178,7 +174,7 @@ public class PublicItemService {
if (item.getLock() == null || item.getLock().getOwnerId() == null) {
throw new IncorrectConditionException(MESSAGE_ITEM_IS_NOT_LOCKED);
}
if (queueView != null && !queueView.getViewId().equals(item.getLock().getQueueViewId())){
if (queueView != null && !queueView.getViewId().equals(item.getLock().getQueueViewId())) {
throw new IncorrectConditionException(MESSAGE_ITEM_IS_NOT_LOCKED_IN_QUEUE);
}
Item oldItem = SerializationUtils.clone(item);
@ -206,10 +202,10 @@ public class PublicItemService {
if (item.getLock() == null || item.getLock().getOwnerId() == null) {
throw new IncorrectConditionException(MESSAGE_ITEM_IS_NOT_LOCKED);
}
if (queueView != null && !queueView.getViewId().equals(item.getLock().getQueueViewId())){
if (queueView != null && !queueView.getViewId().equals(item.getLock().getQueueViewId())) {
throw new IncorrectConditionException(MESSAGE_ITEM_IS_NOT_LOCKED_IN_QUEUE);
}
if (queueView == null){
if (queueView == null) {
queueView = publicQueueClient.getActiveQueueView(item.getLock().getQueueViewId());
}
Item oldItem = SerializationUtils.clone(item);
@ -239,13 +235,30 @@ public class PublicItemService {
case BAD:
case WATCH_NA:
case WATCH_INCONCLUSIVE:
labelResolution(item, oldItem);
labelResolution(item);
publicItemClient.updateItem(item, oldItem);
streamService.sendItemAssignmentEvent(item, oldItem.getQueueIds());
streamService.sendItemLockEvent(item, oldItem.getLock(), LockActionType.LABEL_APPLIED_RELEASE);
streamService.sendItemLabelEvent(item, oldItem);
break;
case ESCALATE:
labelEscalate(item, oldItem, queueView, actor);
labelEscalate(item, queueView, actor);
publicItemClient.updateItem(item, oldItem);
streamService.sendItemAssignmentEvent(item, oldItem.getQueueIds());
streamService.sendItemLockEvent(item, oldItem.getLock(), LockActionType.LABEL_APPLIED_RELEASE);
streamService.sendItemLabelEvent(item, oldItem);
break;
case HOLD:
labelHold(item, oldItem, queueView, actor);
labelHold(item, queueView, actor);
publicItemClient.updateItem(item, oldItem);
streamService.sendItemLockEvent(item, oldItem.getLock(), LockActionType.LABEL_APPLIED_RELEASE);
streamService.sendItemLabelEvent(item, oldItem);
break;
default:
throw new IncorrectConditionException(
@ -254,6 +267,47 @@ public class PublicItemService {
}
}
public BatchLabelReportDTO batchLabelItem(final BatchLabelDTO batchLabel) throws IncorrectConditionException, BusyException {
if (!Label.GOOD.equals(batchLabel.getLabel()) && !Label.BAD.equals(batchLabel.getLabel())) {
throw new IncorrectConditionException("Batch labeling accepts only GOOD/BAD labels");
}
String actor = UserPrincipalUtility.getUserId();
log.info("User [{}] is trying to label items [{}] with label [{}].", actor, batchLabel.getItemIds(), batchLabel.getLabel());
Collection<Queue> queues = publicQueueClient.getActiveQueueList(null);
Collection<BatchLabelReportDTO.LabelResult> results = publicItemClient.batchUpdate(
batchLabel.getItemIds(),
queues,
item -> {
Item oldItem = SerializationUtils.clone(item);
if (item.getLabel() == null) {
item.setLabel(new ItemLabel());
}
item.getLabel().label(batchLabel.getLabel(), actor, null, null);
item.getNotes().add(ItemNote.builder()
.created(OffsetDateTime.now())
.note(String.format("# Applied [%s] label in bulk operation", batchLabel.getLabel()))
.userId(UserPrincipalUtility.getUserId())
.build());
labelResolution(item);
return oldItem;
},
(item, oldItem) -> {
streamService.sendItemAssignmentEvent(item, oldItem.getQueueIds());
if (oldItem.getLock() != null && oldItem.getLock().getOwnerId()!=null) {
streamService.sendItemLockEvent(item, oldItem.getLock(), LockActionType.LABEL_APPLIED_RELEASE);
}
streamService.sendItemLabelEvent(item, oldItem);
}).stream()
.map(r -> modelMapper.map(r, BatchLabelReportDTO.LabelResult.class))
.collect(Collectors.toSet());
return new BatchLabelReportDTO(results);
}
@Retry(name = "cosmosOptimisticUpdate")
public void commentItem(final String id, final String queueId, final NoteDTO noteAssignment) throws NotFoundException, IncorrectConditionException {
// Get the item
@ -265,7 +319,7 @@ public class PublicItemService {
if (item.getLock() == null || item.getLock().getOwnerId() == null) {
throw new IncorrectConditionException(MESSAGE_ITEM_IS_NOT_LOCKED);
}
if (queueView != null && !queueView.getViewId().equals(item.getLock().getQueueViewId())){
if (queueView != null && !queueView.getViewId().equals(item.getLock().getQueueViewId())) {
throw new IncorrectConditionException(MESSAGE_ITEM_IS_NOT_LOCKED_IN_QUEUE);
}
Item oldItem = SerializationUtils.clone(item);
@ -290,7 +344,7 @@ public class PublicItemService {
if (item.getLock() == null || item.getLock().getOwnerId() == null) {
throw new IncorrectConditionException(MESSAGE_ITEM_IS_NOT_LOCKED);
}
if (queueView != null && !queueView.getViewId().equals(item.getLock().getQueueViewId())){
if (queueView != null && !queueView.getViewId().equals(item.getLock().getQueueViewId())) {
throw new IncorrectConditionException(MESSAGE_ITEM_IS_NOT_LOCKED_IN_QUEUE);
}
Item oldItem = SerializationUtils.clone(item);
@ -344,19 +398,18 @@ public class PublicItemService {
.collect(Collectors.toList());
}
private void labelResolution(Item item, Item oldItem) {
private void labelResolution(Item item) {
item.unlock();
item.deactivate(defaultTtl.toSeconds());
item.setQueueIds(Set.of());
publicItemClient.updateItem(item, oldItem);
streamService.sendItemAssignmentEvent(item, oldItem.getQueueIds());
streamService.sendItemResolvedEvent(item);
streamService.sendItemLockEvent(item, oldItem.getLock(), LockActionType.LABEL_APPLIED_RELEASE);
streamService.sendItemLabelEvent(item, oldItem);
item.getEvents().add(new ItemEvent(
streamService.createItemResolvedEvent(item),
ItemResolutionEvent.class,
UUID.randomUUID().toString()
));
}
private void labelEscalate(Item item, Item oldItem, QueueView queueView, String actor) {
private void labelEscalate(Item item, QueueView queueView, String actor) {
item.unlock();
item.setEscalation(ItemEscalation.builder()
.escalated(OffsetDateTime.now())
@ -364,14 +417,9 @@ public class PublicItemService {
.reviewerId(actor)
.build());
item.setQueueIds(Collections.singleton(queueView.getQueueId()));
publicItemClient.updateItem(item, oldItem);
streamService.sendItemAssignmentEvent(item, oldItem.getQueueIds());
streamService.sendItemLockEvent(item, oldItem.getLock(), LockActionType.LABEL_APPLIED_RELEASE);
streamService.sendItemLabelEvent(item, oldItem);
}
private void labelHold(Item item, Item oldItem, QueueView queueView, String actor) {
private void labelHold(Item item, QueueView queueView, String actor) {
item.unlock();
item.setHold(ItemHold.builder()
.held(OffsetDateTime.now())
@ -379,9 +427,5 @@ public class PublicItemService {
.queueViewId(queueView.getViewId())
.ownerId(actor)
.build());
publicItemClient.updateItem(item, oldItem);
streamService.sendItemLockEvent(item, oldItem.getLock(), LockActionType.LABEL_APPLIED_RELEASE);
streamService.sendItemLabelEvent(item, oldItem);
}
}

View file

@ -0,0 +1,34 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.queues.service;
import com.griddynamics.msd365fp.manualreview.model.exception.NotFoundException;
import com.griddynamics.msd365fp.manualreview.queues.model.persistence.LinkAnalysis;
import com.griddynamics.msd365fp.manualreview.queues.repository.LinkAnalysisRepository;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.security.access.prepost.PostAuthorize;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.stereotype.Service;
import static com.griddynamics.msd365fp.manualreview.queues.config.Constants.MESSAGE_NOT_FOUND;
@Slf4j
@Service
@RequiredArgsConstructor
public class PublicLinkAnalysisClient {
private final LinkAnalysisRepository linkAnalysisRepository;
@PreAuthorize("@dataSecurityService.checkPermissionForLinkAnalysisCreation(authentication, #entry)")
public void saveLinkAnalysisEntry(final LinkAnalysis entry) {
linkAnalysisRepository.save(entry);
}
@PostAuthorize("@dataSecurityService.checkPermissionForLinkAnalysisRead(authentication, returnObject)")
public LinkAnalysis getLinkAnalysisEntry(String id) throws NotFoundException {
return linkAnalysisRepository.findById(id).orElseThrow(() -> new NotFoundException(MESSAGE_NOT_FOUND));
}
}

View file

@ -0,0 +1,285 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
package com.griddynamics.msd365fp.manualreview.queues.service;
import com.griddynamics.msd365fp.manualreview.dfpauth.util.UserPrincipalUtility;
import com.griddynamics.msd365fp.manualreview.model.PageableCollection;
import com.griddynamics.msd365fp.manualreview.model.dfp.Address;
import com.griddynamics.msd365fp.manualreview.model.dfp.DeviceContext;
import com.griddynamics.msd365fp.manualreview.model.dfp.PaymentInstrument;
import com.griddynamics.msd365fp.manualreview.model.dfp.User;
import com.griddynamics.msd365fp.manualreview.model.dfp.raw.*;
import com.griddynamics.msd365fp.manualreview.model.exception.BusyException;
import com.griddynamics.msd365fp.manualreview.model.exception.EmptySourceException;
import com.griddynamics.msd365fp.manualreview.model.exception.IncorrectConditionException;
import com.griddynamics.msd365fp.manualreview.model.exception.NotFoundException;
import com.griddynamics.msd365fp.manualreview.queues.model.LinkAnalysisField;
import com.griddynamics.msd365fp.manualreview.queues.model.QueueView;
import com.griddynamics.msd365fp.manualreview.queues.model.dto.*;
import com.griddynamics.msd365fp.manualreview.queues.model.persistence.Item;
import com.griddynamics.msd365fp.manualreview.queues.model.persistence.LinkAnalysis;
import com.griddynamics.msd365fp.manualreview.queues.model.persistence.Queue;
import lombok.RequiredArgsConstructor;
import lombok.Setter;
import lombok.extern.slf4j.Slf4j;
import org.modelmapper.ModelMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;
import java.time.Duration;
import java.time.format.DateTimeFormatter;
import java.util.*;
import java.util.stream.Collectors;
import static com.griddynamics.msd365fp.manualreview.model.Constants.DFP_DATE_TIME_PATTERN;
import static com.griddynamics.msd365fp.manualreview.queues.config.Constants.MESSAGE_ITEM_IS_EMPTY;
@Slf4j
@Service
@RequiredArgsConstructor
public class PublicLinkAnalysisService {
public static final int LA_ITEM_DB_REQUEST_SIZE = 200;
public static final int LA_ITEM_DFP_REQUEST_SIZE = 25;
private final PublicItemClient publicItemClient;
private final PublicQueueClient publicQueueClient;
private final ModelMapper modelMapper;
private final PublicLinkAnalysisClient publicLinkAnalysisClient;
private final DataSecurityService dataSecurityService;
private final DFPExplorerService dfpExplorerService;
@Setter(onMethod = @__({@Autowired, @Qualifier("azureDFPLAAPIWebClient")}))
private WebClient dfpClient;
@Value("${azure.dfp.link-analysis-full-url}")
private String dfpLinkAnalysisFullUrl;
@Value("${azure.dfp.link-analysis-count-url}")
private String dfpLinkAnalysisCountUrl;
@Value("${azure.dfp.link-analysis-details-url}")
private String dfpLinkAnalysisDetailsUrl;
@Value("${mr.link-analysis.ttl}")
private Duration ttl;
@Value("${mr.link-analysis.check-user-restriction}")
private boolean checkUserRestriction;
public LinkAnalysisDTO createLinkAnalysisEntry(final LinkAnalysisCreationDTO request) throws NotFoundException, IncorrectConditionException, EmptySourceException, BusyException {
QueueView queueView = null;
if (request.getQueueId() != null) {
queueView = publicQueueClient.getActiveQueueView(request.getQueueId());
}
Item item = publicItemClient.getItem(request.getItemId(), queueView, null);
if (item.getPurchase() == null) {
throw new IncorrectConditionException(MESSAGE_ITEM_IS_EMPTY);
}
Set<String> intersectionIds = new HashSet<>();
LinkAnalysis linkAnalysis = LinkAnalysis.builder()
.id(UUID.randomUUID().toString())
.ownerId(UserPrincipalUtility.getUserId())
.analysisFields(request.getFields())
.fields(new HashSet<>())
.ttl(ttl.toSeconds())
.build();
// prepare request to DFP
PaymentInstrument pi = item.getPurchase().getPaymentInstrumentList().stream()
.max(Comparator.comparing(PaymentInstrument::getMerchantLocalDate))
.orElse(null);
DeviceContext dc = item.getPurchase().getDeviceContext();
User user = item.getPurchase().getUser();
Address ba = item.getPurchase().getAddressList().stream()
.filter(a -> "BILLING".equalsIgnoreCase(a.getType()))
.max(Comparator.comparing(Address::getAddressId))
.orElse(null);
LinkAnalysisRequest dfpRequest = new LinkAnalysisRequest();
if (user != null) {
dfpRequest.put(LinkAnalysisField.CREATION_DATE.getRelatedLAName(),
user.getCreationDate().format(DateTimeFormatter.ofPattern(DFP_DATE_TIME_PATTERN)));
dfpRequest.put(LinkAnalysisField.EMAIL.getRelatedLAName(), user.getEmail());
}
if (dc != null) {
dfpRequest.put(LinkAnalysisField.DISCOVERED_IP_ADDRESS.getRelatedLAName(), dc.getDiscoveredIPAddress());
dfpRequest.put(LinkAnalysisField.MERCHANT_FUZZY_DEVICE_ID.getRelatedLAName(), dc.getMerchantFuzzyDeviceId());
}
if (pi != null) {
dfpRequest.put(LinkAnalysisField.MERCHANT_PAYMENT_INSTRUMENT_ID.getRelatedLAName(), pi.getMerchantPaymentInstrumentId());
dfpRequest.put(LinkAnalysisField.HOLDER_NAME.getRelatedLAName(), pi.getHolderName());
dfpRequest.put(LinkAnalysisField.BIN.getRelatedLAName(), pi.getBin());
}
if (ba != null) {
dfpRequest.put(LinkAnalysisField.ZIPCODE.getRelatedLAName(), ba.getZipCode());
}
// request data from DFP
LinkAnalysisCountResponse dfpCountResults = dfpClient
.post()
.uri(dfpLinkAnalysisCountUrl)
.body(Mono.just(dfpRequest), LinkAnalysisRequest.class)
.retrieve()
.bodyToMono(LinkAnalysisCountResponse.class)
.block();
if (dfpCountResults == null) throw new EmptySourceException();
LinkAnalysisRequest dfpFullRequest = new LinkAnalysisRequest();
dfpFullRequest.putAll(dfpRequest);
for (LinkAnalysisField field : LinkAnalysisField.values()) {
if (!request.getFields().contains(field)) {
dfpFullRequest.remove(field.getRelatedLAName());
}
}
// request purchase Ids
LinkAnalysisFullResponse dfpFullResults;
if (!request.getFields().isEmpty()) {
dfpFullResults = dfpClient
.post()
.uri(dfpLinkAnalysisFullUrl)
.body(Mono.just(dfpFullRequest), LinkAnalysisRequest.class)
.retrieve()
.bodyToMono(LinkAnalysisFullResponse.class)
.block();
if (dfpFullResults == null) throw new EmptySourceException();
} else {
dfpFullResults = new LinkAnalysisFullResponse();
}
// map data to response
dfpFullResults.entrySet().stream()
.min(Comparator.comparing(e -> e.getValue().getPurchaseCounts()))
.ifPresent(e -> intersectionIds.addAll(dfpFullResults.get(e.getKey()).getPurchaseIds()));
for (LinkAnalysisField field : LinkAnalysisField.values()) {
LinkAnalysis.FieldLinks fieldLinks = LinkAnalysis.FieldLinks.builder()
.id(field)
.value(dfpRequest.get(field.getRelatedLAName()))
.build();
if (dfpFullResults.containsKey(field.getRelatedLAName())) {
fieldLinks.setPurchaseCounts(dfpFullResults.get(field.getRelatedLAName()).getPurchaseCounts());
fieldLinks.setPurchaseIds(dfpFullResults.get(field.getRelatedLAName()).getPurchaseIds());
intersectionIds.retainAll(dfpFullResults.get(field.getRelatedLAName()).getPurchaseIds());
} else if (dfpCountResults.containsKey(field.getRelatedLAName())) {
fieldLinks.setPurchaseCounts(dfpCountResults.get(field.getRelatedLAName()));
}
linkAnalysis.getFields().add(fieldLinks);
}
linkAnalysis.setFound(intersectionIds.size());
if (!request.getFields().isEmpty()) {
Collection<Queue> queues = publicQueueClient.getActiveQueueList(null);
for (int i = 0; i < intersectionIds.size(); i += LA_ITEM_DB_REQUEST_SIZE) {
Set<String> idsForRequest = intersectionIds.stream().skip(i).limit(LA_ITEM_DB_REQUEST_SIZE).collect(Collectors.toSet());
linkAnalysis.getMrPurchaseIds().addAll(publicItemClient.getItemInfoByIds(idsForRequest, queues).stream()
.map(itemInfo -> modelMapper.map(itemInfo, LinkAnalysis.MRItemInfo.class))
.collect(Collectors.toSet()));
}
intersectionIds.removeAll(linkAnalysis.getMrPurchaseIds().stream()
.map(LinkAnalysis.MRItemInfo::getId).collect(Collectors.toSet()));
linkAnalysis.getDfpPurchaseIds().addAll(intersectionIds);
linkAnalysis.setFoundInMR(linkAnalysis.getMrPurchaseIds().size());
}
publicLinkAnalysisClient.saveLinkAnalysisEntry(linkAnalysis);
return modelMapper.map(linkAnalysis, LinkAnalysisDTO.class);
}
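The intersection bookkeeping above is easy to misread: the candidate set is seeded from the field with the smallest match count and then narrowed with retainAll for every other field in the response. The same computation on toy data (field names and purchase ids invented for illustration):

import java.util.*;

public class IntersectionSketch {
    public static void main(String[] args) {
        Map<String, Set<String>> purchaseIdsByField = Map.of(
                "Email", Set.of("p1", "p2", "p3"),
                "BIN", Set.of("p2", "p3", "p4"),
                "ZipCode", Set.of("p2", "p5"));

        // Seed from the smallest result set, then keep only ids present for every field.
        Set<String> intersection = new HashSet<>(purchaseIdsByField.values().stream()
                .min(Comparator.comparingInt(Set::size))
                .orElse(Set.of()));
        purchaseIdsByField.values().forEach(intersection::retainAll);

        System.out.println(intersection); // [p2]
    }
}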
public PageableCollection<LAItemDTO> getMRItems(
final String id,
final Integer size,
final String continuation
) throws NotFoundException, BusyException {
LinkAnalysis linkAnalysis = publicLinkAnalysisClient.getLinkAnalysisEntry(id);
int start = continuation == null ? 0 : Integer.parseInt(continuation);
Set<String> idsForRequest = linkAnalysis.getMrPurchaseIds().stream()
.skip(start)
.limit(size)
.map(LinkAnalysis.MRItemInfo::getId)
.collect(Collectors.toSet());
int end = start + idsForRequest.size();
TreeSet<LAItemDTO> result = new TreeSet<>(Comparator
.comparing((LAItemDTO dto) -> dto.getItem().getImported())
.thenComparing((LAItemDTO dto) -> dto.getItem().getId()));
Collection<Queue> queues = publicQueueClient.getActiveQueueList(null);
for (int i = 0; i < idsForRequest.size(); i += LA_ITEM_DB_REQUEST_SIZE) {
Set<String> idsForLocalRequest = idsForRequest.stream().skip(i).limit(LA_ITEM_DB_REQUEST_SIZE).collect(Collectors.toSet());
result.addAll(publicItemClient.getItemListByIds(idsForLocalRequest, queues).stream()
.map(item -> LAItemDTO.builder()
.item(modelMapper.map(item, ItemDTO.class))
.availableForLabeling(dataSecurityService.checkPermissionForItemUpdateWithoutLock(
UserPrincipalUtility.getAuth(), item, queues))
.build())
.collect(Collectors.toList()));
}
if (checkUserRestriction) {
result.forEach(item -> {
if (item.getItem() != null &&
item.getItem().getPurchase() != null &&
item.getItem().getPurchase().getUser() != null &&
item.getItem().getPurchase().getUser().getEmail() != null) {
UserEmailListEntity userEmailLists = dfpExplorerService.exploreUserEmailList(item.getItem().getPurchase().getUser().getEmail());
item.setUserRestricted(userEmailLists.getCommonRestriction());
}
});
}
return new PageableCollection<>(result,
end < linkAnalysis.getMrPurchaseIds().size() ? String.valueOf(end) : null);
}
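Note on paging: the continuation token here is simply the numeric offset into the stored id list. For example, with size 20 and continuation "40" the call returns the ids at positions 40-59 and hands back "60" while more ids remain; a null token marks the last page.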
public PageableCollection<DFPItemDTO> getDFPItems(
final String id,
final Integer size,
final String continuation
) throws NotFoundException {
LinkAnalysis linkAnalysis = publicLinkAnalysisClient.getLinkAnalysisEntry(id);
int start = continuation == null ? 0 : Integer.parseInt(continuation);
Set<String> idsForRequest = linkAnalysis.getDfpPurchaseIds().stream()
.skip(start)
.limit(size)
.collect(Collectors.toSet());
int end = start + idsForRequest.size();
TreeSet<DFPItemDTO> result = new TreeSet<>(Comparator.comparing(DFPItemDTO::getPurchaseId));
for (int i = 0; i < idsForRequest.size(); i += LA_ITEM_DFP_REQUEST_SIZE) {
Set<String> idsForLocalRequest = idsForRequest.stream().skip(i).limit(LA_ITEM_DFP_REQUEST_SIZE).collect(Collectors.toSet());
LinkAnalysisDetailsResponse dfpDetailsResults = dfpClient
.post()
.uri(dfpLinkAnalysisDetailsUrl)
.body(Mono.just(new LinkAnalysisDetailsRequest(idsForLocalRequest)), LinkAnalysisDetailsRequest.class)
.retrieve()
.bodyToMono(LinkAnalysisDetailsResponse.class)
.block();
if (dfpDetailsResults != null && dfpDetailsResults.getPurchaseDetails() != null) {
result.addAll(dfpDetailsResults.getPurchaseDetails().stream()
.map(details -> modelMapper.map(details, DFPItemDTO.class))
.collect(Collectors.toSet()));
}
}
if (checkUserRestriction) {
result.forEach(details -> {
if (details.getUser() != null &&
details.getUser().getEmail() != null) {
UserEmailListEntity userEmailLists = dfpExplorerService.exploreUserEmailList(details.getUser().getEmail());
details.setUserRestricted(userEmailLists.getCommonRestriction());
}
});
}
return new PageableCollection<>(result,
end < linkAnalysis.getDfpPurchaseIds().size() ? String.valueOf(end) : null);
}
public LinkAnalysisDTO getLinkAnalysisEntry(final String id) throws NotFoundException {
return modelMapper.map(publicLinkAnalysisClient.getLinkAnalysisEntry(id), LinkAnalysisDTO.class);
}
}

View file

@ -29,13 +29,12 @@ import org.modelmapper.ModelMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;
import java.time.Duration;
import java.time.OffsetDateTime;
import java.util.Collection;
import java.util.List;
import java.util.Set;
import java.util.UUID;
import java.util.*;
import java.util.stream.Collectors;
import static com.griddynamics.msd365fp.manualreview.queues.config.Constants.DEFAULT_QUEUE_PAGE_SIZE;
@ -115,6 +114,7 @@ public class StreamService implements HealthCheckProcessor {
healthCheckRepository.save(hc);
});
List<Mono<Void>> sendings = new LinkedList<>();
processorRegistry.forEach((hub, client) -> {
for (int i = 0; i < healthCheckBatchSize; i++) {
HealthCheck healthCheck = HealthCheck.builder()
@ -127,20 +127,24 @@ public class StreamService implements HealthCheckProcessor {
.type(EVENT_HUB_CONSUMER)
.generatedBy(applicationProperties.getInstanceId())
.active(true)
.created(OffsetDateTime.now())
.ttl(healthCheckTtl.toSeconds())
._etag("new")
.build();
client.sendHealthCheckPing(healthCheck.getId(), () -> {
try {
healthCheckRepository.save(healthCheck);
} catch (CosmosDBAccessException e) {
log.debug("Receiver already inserted this [{}] health-check entry", healthCheck.getId());
}
});
sendings.add(client.sendHealthCheckPing(healthCheck.getId())
.doOnSuccess(v -> {
try {
healthCheck.setCreated(OffsetDateTime.now());
healthCheckRepository.save(healthCheck);
} catch (CosmosDBAccessException e) {
log.debug("Receiver already inserted this [{}] health-check entry", healthCheck.getId());
}
}));
healthCheckNum++;
}
});
Mono.zipDelayError(sendings, results -> results)
.subscribeOn(Schedulers.boundedElastic())
.subscribe();
return overdueHealthChecks.isEmpty();
}
@ -151,15 +155,11 @@ public class StreamService implements HealthCheckProcessor {
* @param channel the name of producer
* @return true when event was successfully sent
*/
public <T extends Event> boolean sendEvent(T event, String channel) {
public <T extends Event> Mono<Void> sendEvent(T event, String channel) {
log.info("Sending event to [{}] with body: [{}]", channel, event);
boolean success = producerRegistry.get(channel).send(event);
if (success) {
log.info("Event [{}] sending has been started successfully.", event.getId());
} else {
log.warn("Event [{}] has not been sent: [{}]", event.getId(), event);
}
return success;
return producerRegistry.get(channel).send(event)
.doOnSuccess(v -> log.info("Event [{}] sending has been started successfully.", event.getId()))
.doOnError(v -> log.error("Event [{}] has not been sent: [{}]", event.getId(), event));
}
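Because sendEvent now returns a Mono<Void>, callers decide how the send is scheduled: the wrappers below detach with subscribe(), while batch flows such as sendResolutions compose the Monos and delay errors. Two usage patterns in sketch form (the event objects and variables are placeholders):

// Fire-and-forget, as the send* wrappers do:
sendEvent(event, ITEM_LOCK_EVENT_HUB)
        .subscribeOn(Schedulers.boundedElastic())
        .subscribe();

// Composed: wait on a batch of sends, tolerating individual failures:
Mono.zipDelayError(List.of(
                sendEvent(eventA, ITEM_RESOLUTION_EVENT_HUB),
                sendEvent(eventB, ITEM_RESOLUTION_EVENT_HUB)),
        results -> results)
        .block();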
/**
@ -174,7 +174,9 @@ public class StreamService implements HealthCheckProcessor {
if (CollectionUtils.isEmpty(newIds)) {
newIds = getActiveResidualQueues().stream().map(Queue::getId).collect(Collectors.toSet());
}
sendItemAssignmentEvent(item, newIds, Set.of());
sendItemAssignmentEvent(item, newIds, Set.of())
.subscribeOn(Schedulers.boundedElastic())
.subscribe();
} catch (BusyException e) {
log.error("Event [{}] for item [{}] wasn't sent due to database overload",
ItemAssignmentEvent.class, item.getId());
@ -198,21 +200,23 @@ public class StreamService implements HealthCheckProcessor {
if (CollectionUtils.isEmpty(oldIds)) {
oldIds = getActiveResidualQueues().stream().map(Queue::getId).collect(Collectors.toSet());
}
sendItemAssignmentEvent(item, newIds, oldIds);
sendItemAssignmentEvent(item, newIds, oldIds)
.subscribeOn(Schedulers.boundedElastic())
.subscribe();
} catch (BusyException e) {
log.error("Event [{}] for item [{}] wasn't sent due to database overload",
ItemAssignmentEvent.class, item.getId());
}
}
private void sendItemAssignmentEvent(final Item item, final Set<String> newIds, final Set<String> oldIds) {
private Mono<Void> sendItemAssignmentEvent(final Item item, final Set<String> newIds, final Set<String> oldIds) {
ItemAssignmentEvent event = ItemAssignmentEvent.builder()
.id(item.getId())
.newQueueIds(newIds)
.oldQueueIds(oldIds)
.actioned(OffsetDateTime.now())
.build();
sendEvent(event, ITEM_ASSIGNMENT_EVENT_HUB);
return sendEvent(event, ITEM_ASSIGNMENT_EVENT_HUB);
}
public void sendItemLockEvent(Item item, ItemLock prevLock, LockActionType actionType) {
@ -232,17 +236,23 @@ public class StreamService implements HealthCheckProcessor {
event.setOwnerId(item.getLock().getOwnerId());
event.setLocked(item.getLock().getLocked());
}
sendEvent(event, ITEM_LOCK_EVENT_HUB);
sendEvent(event, ITEM_LOCK_EVENT_HUB)
.subscribeOn(Schedulers.boundedElastic())
.subscribe();
}
public boolean sendQueueSizeEvent(Queue queue) {
public void sendQueueSizeEvent(Queue queue) {
QueueSizeUpdateEvent event = modelMapper.map(queue, QueueSizeUpdateEvent.class);
return sendEvent(event, QUEUE_SIZE_EVENT_HUB);
sendEvent(event, QUEUE_SIZE_EVENT_HUB)
.subscribeOn(Schedulers.boundedElastic())
.subscribe();
}
public boolean sendOverallSizeEvent(int size) {
public void sendOverallSizeEvent(int size) {
OverallSizeUpdateEvent event = new OverallSizeUpdateEvent(size, OffsetDateTime.now());
return sendEvent(event, OVERALL_SIZE_EVENT_HUB);
sendEvent(event, OVERALL_SIZE_EVENT_HUB)
.subscribeOn(Schedulers.boundedElastic())
.subscribe();
}
public void sendItemLabelEvent(final Item item, final Item oldItem) {
@ -250,20 +260,30 @@ public class StreamService implements HealthCheckProcessor {
.id(item.getId())
.label(item.getLabel())
.assesmentResult(item.getAssessmentResult())
.decisionApplyingDuration(
.decisionApplyingDuration(oldItem.getLock() == null || oldItem.getLock().getLocked() == null ? Duration.ZERO :
Duration.between(oldItem.getLock().getLocked(), item.getLabel().getLabeled()))
.build();
sendEvent(event, ITEM_LABEL_EVENT_HUB);
sendEvent(event, ITEM_LABEL_EVENT_HUB)
.subscribeOn(Schedulers.boundedElastic())
.subscribe();
}
public void sendItemResolvedEvent(final Item item) {
ItemResolutionEvent event = modelMapper.map(item, ItemResolutionEvent.class);
sendEvent(event, ITEM_RESOLUTION_EVENT_HUB);
public ItemResolutionEvent createItemResolvedEvent(final Item item) {
ItemResolutionEvent res = modelMapper.map(item, ItemResolutionEvent.class);
res.setId(item.getId() + "-" + item.getLabel().getLabeled());
return res;
}
public Mono<Void> sendItemResolvedEvent(final Event event) {
return sendEvent(event, ITEM_RESOLUTION_EVENT_HUB);
}
public void sendQueueUpdateEvent(final Queue queue) {
QueueUpdateEvent event = modelMapper.map(queue, QueueUpdateEvent.class);
sendEvent(event, QUEUE_UPDATE_EVENT_HUB);
sendEvent(event, QUEUE_UPDATE_EVENT_HUB)
.subscribeOn(Schedulers.boundedElastic())
.subscribe();
}
//TODO: caching

View file

@ -41,7 +41,6 @@ import static com.griddynamics.msd365fp.manualreview.queues.config.Constants.*;
@Slf4j
@RequiredArgsConstructor
public class TaskService {
private final TaskRepository taskRepository;
private final ApplicationProperties applicationProperties;
private final ThreadPoolTaskExecutor taskExecutor;
@ -60,27 +59,28 @@ public class TaskService {
@PostConstruct
private void initializeTasks() {
this.taskExecutions = Map.of(
QUEUE_ASSIGNMENT_TASK_NAME, task ->
queueService.reconcileQueueAssignments(),
ENRICHMENT_TASK_NAME, task ->
itemEnrichmentService.enrichAllPoorItems(false),
OVERALL_SIZE_TASK_NAME, task ->
streamService.sendOverallSizeEvent(itemService.countActiveItems()),
QUEUE_SIZE_TASK_NAME, task ->
queueService.fetchSizesForQueues(),
RESIDUAL_QUEUE_TASK_NAME, task ->
queueService.reviseResidualQueue(),
ITEM_UNLOCK_TASK_NAME, task ->
itemService.unlockItemsByTimeout(),
DICTIONARY_TASK_NAME, task ->
dictionaryService.updateDictionariesByStorageData(
task.getPreviousRun(),
applicationProperties.getTasks().get(task.getId()).getDelay()),
ITEM_ASSIGNMENT_TASK_NAME, this::itemStateFetch,
PRIM_HEALTH_ANALYSIS_TASK_NAME, this::healthAnalysis,
SEC_HEALTH_ANALYSIS_TASK_NAME, this::healthAnalysis
);
this.taskExecutions = new HashMap<>();
this.taskExecutions.put(QUEUE_ASSIGNMENT_TASK_NAME, task ->
queueService.reconcileQueueAssignments());
this.taskExecutions.put(ENRICHMENT_TASK_NAME, task ->
itemEnrichmentService.enrichAllPoorItems(false));
this.taskExecutions.put(OVERALL_SIZE_TASK_NAME, task ->
reportOverallSize());
this.taskExecutions.put(QUEUE_SIZE_TASK_NAME, task ->
queueService.fetchSizesForQueues());
this.taskExecutions.put(RESIDUAL_QUEUE_TASK_NAME, task ->
queueService.reviseResidualQueue());
this.taskExecutions.put(ITEM_UNLOCK_TASK_NAME, task ->
itemService.unlockItemsByTimeout());
this.taskExecutions.put(RESOLUTION_SENDING_TASK_NAME, task ->
itemService.sendResolutions());
this.taskExecutions.put(DICTIONARY_TASK_NAME, task ->
dictionaryService.updateDictionariesByStorageData(
task.getPreviousSuccessfulRun(),
applicationProperties.getTasks().get(task.getId()).getDelay()));
this.taskExecutions.put(ITEM_ASSIGNMENT_TASK_NAME, this::itemStateFetch);
this.taskExecutions.put(PRIM_HEALTH_ANALYSIS_TASK_NAME, this::healthAnalysis);
this.taskExecutions.put(SEC_HEALTH_ANALYSIS_TASK_NAME, this::healthAnalysis);
Optional<Map.Entry<String, ApplicationProperties.TaskProperties>> incorrectTimingTask = applicationProperties.getTasks().entrySet().stream()
.filter(entry -> entry.getValue().getDelay() == null ||
@ -102,6 +102,11 @@ public class TaskService {
}
}
private boolean reportOverallSize() {
streamService.sendOverallSizeEvent(itemService.countActiveItems());
return true;
}
public List<Task> getAllTasks() {
List<Task> result = new ArrayList<>();
taskRepository.findAll().forEach(result::add);
@ -165,43 +170,53 @@ public class TaskService {
// Restore task if it's stuck
if (task != null && !taskLaunched) {
restoreTaskIfStuck(task, taskProperties);
processTaskFreezes(task, taskProperties);
}
});
}
private boolean isTaskReadyForExecutionNow(Task task, ApplicationProperties.TaskProperties taskProperties) {
return task.getPreviousRun() == null ||
task.getPreviousRun().plus(taskProperties.getDelay()).isBefore(OffsetDateTime.now());
return READY.equals(task.getStatus()) &&
(task.getPreviousRun() == null ||
task.getPreviousRun().plus(taskProperties.getDelay()).isBefore(OffsetDateTime.now()));
}
@SuppressWarnings("java:S1854")
private void restoreTaskIfStuck(Task task, ApplicationProperties.TaskProperties taskProperties) {
Duration timeAfterPreviousRun;
if (task.getPreviousRun() != null) {
timeAfterPreviousRun = Duration.between(
task.getPreviousRun(), OffsetDateTime.now());
} else {
timeAfterPreviousRun = Duration.between(OffsetDateTime.MIN, OffsetDateTime.now());
}
private void processTaskFreezes(Task task, ApplicationProperties.TaskProperties taskProperties) {
Duration timeout = Objects.requireNonNullElse(taskProperties.getTimeout(), taskProperties.getDelay());
Duration acceptableDelayBeforeWarning = Duration.ofSeconds(
(long) (timeout.toSeconds() * applicationProperties.getTaskWarningTimeoutMultiplier()));
Duration acceptableDelayBeforeReset = Duration.ofSeconds(
(long) (timeout.toSeconds() * applicationProperties.getTaskResetTimeoutMultiplier()));
if (timeAfterPreviousRun.compareTo(acceptableDelayBeforeWarning) > 0) {
log.warn("Task [{}] is idle for too long. Last execution was [{}] minutes ago with status message: [{}]",
task.getId(), timeAfterPreviousRun.toMinutes(), task.getLastFailedRunMessage());
OffsetDateTime previousSuccessfulRun = Objects.requireNonNullElse(task.getPreviousSuccessfulRun(), ELDEST_APPLICATION_DATE);
OffsetDateTime previousRun = Objects.requireNonNullElse(task.getPreviousRun(), ELDEST_APPLICATION_DATE);
OffsetDateTime currentRun = Objects.requireNonNullElse(task.getCurrentRun(), ELDEST_APPLICATION_DATE);
OffsetDateTime now = OffsetDateTime.now();
Duration runWithoutSuccess = Duration.between(previousSuccessfulRun, now);
Duration acceptableDelayWithoutSuccessfulRuns = Duration.ofSeconds(
(long) (timeout.toSeconds() * applicationProperties.getTaskSuccessfulRunsTimeoutMultiplier()));
if (previousSuccessfulRun.isBefore(previousRun) &&
runWithoutSuccess.compareTo(acceptableDelayWithoutSuccessfulRuns) > 0) {
log.warn("Background task [{}] issue. No successful executions during [{}] minutes. Last Fail reason: [{}].",
task.getId(), runWithoutSuccess.toMinutes(), task.getLastFailedRunMessage());
}
if (!READY.equals(task.getStatus()) && timeAfterPreviousRun.compareTo(acceptableDelayBeforeReset) > 0) {
try {
log.info("Start [{}] task restore", task.getId());
task.setStatus(READY);
task.setLastFailedRunMessage("Restored after long downtime");
taskRepository.save(task);
log.info("Task [{}] has been restored", task.getId());
} catch (CosmosDBAccessException e) {
log.warn("Task [{}] recovering ended with a conflict: {}", task.getId(), e.getMessage());
if (!READY.equals(task.getStatus())) {
Duration currentRunDuration = Duration.between(currentRun, now);
Duration acceptableDelayBeforeWarning = Duration.ofSeconds(
(long) (timeout.toSeconds() * applicationProperties.getTaskWarningTimeoutMultiplier()));
Duration acceptableDelayBeforeReset = Duration.ofSeconds(
(long) (timeout.toSeconds() * applicationProperties.getTaskResetTimeoutMultiplier()));
if (currentRunDuration.compareTo(acceptableDelayBeforeWarning) > 0) {
log.warn("Background task [{}] issue. Idle for too long. Last execution was [{}] minutes ago with status message: [{}].",
task.getId(), currentRunDuration.toMinutes(), task.getLastFailedRunMessage());
}
if (currentRunDuration.compareTo(acceptableDelayBeforeReset) > 0) {
try {
log.info("Start [{}] task restore", task.getId());
task.setStatus(READY);
task.setLastFailedRunMessage("Restored after long downtime");
taskRepository.save(task);
log.info("Task [{}] has been restored", task.getId());
} catch (CosmosDBAccessException e) {
log.warn("Task [{}] recovering ended with a conflict: {}", task.getId(), e.getMessage());
}
}
}
}
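Worked through with the shipped defaults (multipliers 2.0/4.0/8.0, and the timeout falling back to the task delay): a task with delay PT10M logs the stuck-run warning once the current run passes 20 minutes, is forced back to READY after 40 minutes, and raises the no-successful-run warning once 80 minutes elapse without a successful run.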
@ -212,7 +227,7 @@ public class TaskService {
taskRepository.save(Task.builder()
.id(taskName)
._etag(taskName)
.previousRun(OffsetDateTime.now().minus(properties.getDelay()))
.previousRun(ELDEST_APPLICATION_DATE)
.status(READY)
.build());
log.info("Task [{}] has been initialized successfully.", taskName);
@ -251,10 +266,6 @@ public class TaskService {
*/
@SuppressWarnings("java:S2326")
private <T, E extends Exception> boolean executeTask(Task task) {
ApplicationProperties.TaskProperties taskProperties =
applicationProperties.getTasks().get(task.getId());
TaskExecution<Object, Exception> taskExecution = taskExecutions.get(task.getId());
// check possibility to execute
if (!READY.equals(task.getStatus())) {
return false;
@ -264,8 +275,12 @@ public class TaskService {
OffsetDateTime startTime = OffsetDateTime.now();
task.setStatus(RUNNING);
task.setInstanceId(applicationProperties.getInstanceId());
if (task.getPreviousRun() == null){
task.setPreviousRun(startTime);
task.setCurrentRun(startTime);
if (task.getPreviousRun() == null) {
task.setPreviousRun(ELDEST_APPLICATION_DATE);
}
if (task.getPreviousSuccessfulRun() == null) {
task.setPreviousSuccessfulRun(ELDEST_APPLICATION_DATE);
}
Task runningTask;
try {
@ -283,6 +298,9 @@ public class TaskService {
log.info("Task [{}] started its execution.", runningTask.getId());
// launch execution
ApplicationProperties.TaskProperties taskProperties =
applicationProperties.getTasks().get(task.getId());
TaskExecution<Object, Exception> taskExecution = taskExecutions.get(task.getId());
CompletableFuture
.supplyAsync(() -> {
try {
@ -295,20 +313,26 @@ public class TaskService {
Objects.requireNonNullElse(taskProperties.getTimeout(), taskProperties.getDelay()).toMillis(),
TimeUnit.MILLISECONDS)
.whenComplete((result, exception) -> {
Duration duration = Duration.between(startTime, OffsetDateTime.now());
runningTask.setStatus(READY);
runningTask.setPreviousRun(startTime);
runningTask.setPreviousRunSuccessfull(true);
if (exception != null) {
log.warn("Task [{}] finished its execution with an exception.",
runningTask.getId(), exception);
log.warn("Task [{}] finished its execution with an exception in [{}].",
runningTask.getId(), duration.toString());
log.warn("Task [{}] exception", runningTask.getId(), exception);
runningTask.setLastFailedRunMessage(exception.getMessage());
runningTask.setPreviousRunSuccessfull(false);
taskRepository.save(runningTask);
} else if (result.isEmpty()) {
log.info("Task [{}] finished its execution with empty result.", runningTask.getId());
} else {
log.info("Task [{}] finished its execution successfully. Result: [{}]",
runningTask.getId(), result.get());
runningTask.setPreviousSuccessfulRun(startTime);
runningTask.setPreviousSuccessfulExecutionTime(duration);
if (result.isEmpty()) {
log.info("Task [{}] finished its execution with empty result in [{}].",
runningTask.getId(), duration);
} else {
log.info("Task [{}] finished its execution successfully in [{}]. Result: [{}]",
runningTask.getId(), duration, result.get());
}
}
taskRepository.save(runningTask);
});
@ -337,7 +361,7 @@ public class TaskService {
Map<String, String> variables = Optional.ofNullable(task.getVariables()).orElse(new HashMap<>());
String comprehensiveCheckTimeString = variables.get("comprehensiveCheckTime");
OffsetDateTime comprehensiveCheckTime = comprehensiveCheckTimeString == null
? OffsetDateTime.MIN
? ELDEST_APPLICATION_DATE
: OffsetDateTime.parse(comprehensiveCheckTimeString);
// check item states (with desired checking depth)
@ -346,9 +370,9 @@ public class TaskService {
variables.put("comprehensiveCheckTime", currentRunTime.toString());
} else {
itemService.reconcileItemAssignmentsForChangedQueues(
task.getPreviousRun().minus(partialCheckObservedPeriod));
task.getPreviousSuccessfulRun().minus(partialCheckObservedPeriod));
itemService.reconcileAssignmentsForNewItems(
task.getPreviousRun().minus(partialCheckObservedPeriod));
task.getPreviousSuccessfulRun().minus(partialCheckObservedPeriod));
}
task.setVariables(variables);

View file

@ -5,7 +5,11 @@ mr:
instance-type: prim
task-reset-timeout-multiplier: 4.0
task-warning-timeout-multiplier: 2.0
task-successful-runs-timeout-multiplier: 8.0
cache:
user-email-list:
invalidation-interval: PT10M
max-size: 500
traversal-purchase:
invalidation-interval: PT2M
max-size: 500
@ -16,12 +20,15 @@ mr:
invalidation-interval: PT1M
max-size: 200
tasks:
resolution-sending-task:
enabled: true
delay: PT2M
prim-health-analysis-task:
enabled: true
delay: PT10M
delay: PT1M
sec-health-analysis-task:
enabled: false
delay: PT10M
delay: PT1M
residual-queue-reconciliation-task:
enabled: true
delay: PT10M
@ -64,6 +71,10 @@ mr:
ttl: PT10M
search-query:
ttl: P30D
link-analysis:
ttl: P1D
# check-user-restriction: true
check-user-restriction: false
email-domain:
ttl: P7D
@ -88,19 +99,53 @@ azure:
Sandbox_ManualReviewSeniorAnalyst: SENIOR_ANALYST
Sandbox_ManualReviewAnalyst: ANALYST
dfp:
link-analysis-full-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics-int.com/knowledgegateway/v1.0/customersupport/connectedentities?queryType=full
link-analysis-count-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics-int.com/knowledgegateway/v1.0/customersupport/connectedentities?queryType=count
link-analysis-details-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics-int.com/knowledgegateway/v1.0/customersupport/purchasedetails
graph-explorer-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics-int.com/knowledgegateway/customersupport/v1.0/explorer/traversal
purchase-event-url: https://intz.api.dfp.dynamics-int.com/v1.0/merchantservices/events/Purchase
bank-event-url: https://intz.api.dfp.dynamics-int.com/v1.0/merchantservices/events/BankEvent
user-email-list-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics-int.com/knowledgegateway/v1.0/sparta/customersupport/lists/status/User.Email
purchase-event-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics-int.com/v1.0/merchantservices/events/Purchase
bank-event-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics-int.com/v1.0/merchantservices/events/BankEvent
dfp-auth:
token-cache-size: 500
token-cache-retention: PT10M
event-hub:
checkpoint-interval: PT3M
sending-timeout: PT10M
sending-retries: 3
sending-timeout: PT20S
health-check-ttl: PT24H
health-check-batch-size: 5
health-check-batch-size: 2
health-check-allowed-delay: PT60M
consumers:
dfp-hub:
checkpoint-interval: PT1M
producers:
item-lock-event-hub:
sending-period: PT1S
sending-workers: 4
buffer-size: 100
item-label-event-hub:
sending-period: PT1S
sending-workers: 4
buffer-size: 10
item-resolution-event-hub:
sending-period: PT1S
sending-workers: 6
buffer-size: 10
item-assignment-event-hub:
sending-period: PT2S
sending-workers: 6
buffer-size: 1000
queue-size-event-hub:
sending-period: PT1S
sending-workers: 4
buffer-size: 200
queue-update-event-hub:
sending-period: PT1S
sending-workers: 4
buffer-size: 100
overall-size-event-hub:
sending-period: PT1S
sending-workers: 4
buffer-size: 100
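A note on the new producer settings (semantics inferred from the key names, so treat this as an assumption rather than documented behavior): sending-period reads as the flush interval, sending-workers as the number of concurrent senders, and buffer-size as the per-hub in-memory queue depth; item-assignment-event-hub gets the deepest buffer (1000) and more workers (6) than the other hubs.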
swagger:
auth-url: https://login.microsoftonline.com/${CLIENT_TENANT_ID}/oauth2/authorize?resource=${CLIENT_ID}
@ -116,6 +161,8 @@ spring:
scope: https://graph.microsoft.com/.default
azure-dfp-api:
scope: https://api.dfp.microsoft-int.com/.default
azure-dfp-la-api:
scope: https://api.dfp.microsoft-int.com/.default
provider:
azure-maps-api:
token-uri: https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token
@ -123,6 +170,8 @@ spring:
token-uri: https://login.microsoftonline.com/${CLIENT_TENANT_ID}/oauth2/v2.0/token
azure-dfp-api:
token-uri: https://login.microsoftonline.com/${CLIENT_TENANT_ID}/oauth2/v2.0/token
azure-dfp-la-api:
token-uri: https://login.microsoftonline.com/${CLIENT_TENANT_ID}/oauth2/v2.0/token
resilience4j.retry:
instances:

View file

@ -3,6 +3,8 @@
mr:
tasks:
resolution-sending-task:
enabled: false
prim-health-analysis-task:
enabled: false
sec-health-analysis-task:

View file

@ -5,7 +5,11 @@ mr:
instance-type: prim
task-reset-timeout-multiplier: 4.0
task-warning-timeout-multiplier: 2.0
task-successful-runs-timeout-multiplier: 8.0
cache:
user-email-list:
invalidation-interval: PT10M
max-size: 500
traversal-purchase:
invalidation-interval: PT2M
max-size: 500
@ -16,12 +20,15 @@ mr:
invalidation-interval: PT1M
max-size: 200
tasks:
resolution-sending-task:
enabled: true
delay: PT2M
prim-health-analysis-task:
enabled: true
delay: PT10M
delay: PT1M
sec-health-analysis-task:
enabled: false
delay: PT10M
delay: PT1M
residual-queue-reconciliation-task:
enabled: true
delay: PT10M
@ -64,6 +71,10 @@ mr:
ttl: P14D
search-query:
ttl: P30D
link-analysis:
ttl: P1D
# check-user-restriction: true
check-user-restriction: false
email-domain:
ttl: P7D
@ -85,19 +96,53 @@ azure:
ManualReviewSeniorAnalyst: SENIOR_ANALYST
ManualReviewAnalyst: ANALYST
dfp:
link-analysis-full-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics.com/knowledgegateway/v1.0/customersupport/connectedentities?queryType=full
link-analysis-count-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics.com/knowledgegateway/v1.0/customersupport/connectedentities?queryType=count
link-analysis-details-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics.com/knowledgegateway/v1.0/customersupport/purchasedetails
graph-explorer-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics.com/knowledgegateway/customersupport/v1.0/explorer/traversal
purchase-event-url: https://api.dfp.dynamics.com/v1.0/merchantservices/events/Purchase
bank-event-url: https://api.dfp.dynamics.com/v1.0/merchantservices/events/BankEvent
user-email-list-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics.com/knowledgegateway/v1.0/sparta/customersupport/lists/status/User.Email
purchase-event-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics.com/v1.0/merchantservices/events/Purchase
bank-event-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics.com/v1.0/merchantservices/events/BankEvent
dfp-auth:
token-cache-size: 500
token-cache-retention: PT10M
event-hub:
checkpoint-interval: PT3M
sending-timeout: PT10M
sending-retries: 3
sending-timeout: PT20S
health-check-ttl: PT24H
health-check-batch-size: 5
health-check-batch-size: 2
health-check-allowed-delay: PT60M
consumers:
dfp-hub:
checkpoint-interval: PT1M
producers:
item-lock-event-hub:
sending-period: PT1S
sending-workers: 4
buffer-size: 100
item-label-event-hub:
sending-period: PT1S
sending-workers: 4
buffer-size: 10
item-resolution-event-hub:
sending-period: PT1S
sending-workers: 6
buffer-size: 10
item-assignment-event-hub:
sending-period: PT2S
sending-workers: 6
buffer-size: 1000
queue-size-event-hub:
sending-period: PT1S
sending-workers: 4
buffer-size: 200
queue-update-event-hub:
sending-period: PT1S
sending-workers: 4
buffer-size: 100
overall-size-event-hub:
sending-period: PT1S
sending-workers: 4
buffer-size: 100
swagger:
auth-url: https://login.microsoftonline.com/${CLIENT_TENANT_ID}/oauth2/authorize?resource=${CLIENT_ID}
@ -113,6 +158,8 @@ spring:
scope: https://graph.microsoft.com/.default
azure-dfp-api:
scope: https://api.dfp.microsoft.com/.default
azure-dfp-la-api:
scope: https://api.dfp.microsoft.com/.default
provider:
azure-maps-api:
token-uri: https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token
@ -120,6 +167,8 @@ spring:
token-uri: https://login.microsoftonline.com/${CLIENT_TENANT_ID}/oauth2/v2.0/token
azure-dfp-api:
token-uri: https://login.microsoftonline.com/${CLIENT_TENANT_ID}/oauth2/v2.0/token
azure-dfp-la-api:
token-uri: https://login.microsoftonline.com/${CLIENT_TENANT_ID}/oauth2/v2.0/token
resilience4j.retry:
instances:

View file

@ -9,7 +9,11 @@ mr:
instance-id: ${WEBSITE_INSTANCE_ID}
task-reset-timeout-multiplier: 4.0
task-warning-timeout-multiplier: 2.0
task-successful-runs-timeout-multiplier: 8.0
cache:
user-email-list:
invalidation-interval: PT10M
max-size: 500
traversal-purchase:
invalidation-interval: PT2M
max-size: 500
@ -20,12 +24,15 @@ mr:
invalidation-interval: PT1M
max-size: 200
tasks:
resolution-sending-task:
enabled: true
delay: PT2M
prim-health-analysis-task:
enabled: true
delay: PT10M
delay: PT1M
sec-health-analysis-task:
enabled: false
delay: PT10M
delay: PT1M
residual-queue-reconciliation-task:
enabled: true
delay: PT10M
@ -68,11 +75,14 @@ mr:
corepool-size: 5
max-pool-size: 10
queue-capacity: 25
timeout-seconds: PT5M
dictionary:
ttl: P14D
search-query:
ttl: P30D
link-analysis:
ttl: P1D
# check-user-restriction: true
check-user-restriction: false
email-domain:
ttl: P7D
disposable-email-checker:
@ -120,9 +130,13 @@ azure:
Sandbox_ManualReviewSeniorAnalyst: SENIOR_ANALYST
Sandbox_ManualReviewAnalyst: ANALYST
dfp:
link-analysis-full-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics-int.com/knowledgegateway/v1.0/customersupport/connectedentities?queryType=full
link-analysis-count-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics-int.com/knowledgegateway/v1.0/customersupport/connectedentities?queryType=count
link-analysis-details-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics-int.com/knowledgegateway/v1.0/customersupport/purchasedetails
graph-explorer-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics-int.com/knowledgegateway/customersupport/v1.0/explorer/traversal
purchase-event-url: https://intz.api.dfp.dynamics-int.com/v1.0/merchantservices/events/Purchase
bank-event-url: https://intz.api.dfp.dynamics-int.com/v1.0/merchantservices/events/BankEvent
user-email-list-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics-int.com/knowledgegateway/v1.0/sparta/customersupport/lists/status/User.Email
purchase-event-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics-int.com/v1.0/merchantservices/events/Purchase
bank-event-url: https://${CLIENT_TENANT_SHORT_NAME}-${CLIENT_TENANT_ID}.api.dfp.dynamics-int.com/v1.0/merchantservices/events/BankEvent
dfp-auth:
token-cache-size: 500
token-cache-retention: PT10M
@@ -134,31 +148,50 @@ azure:
connection-string: ${spring-cloud-azure-eventhub-connection-string:${EVENT_HUB_CONNECTION_STRING}}
checkpoint-storage-account: ${EVENT_HUB_OFFSET_STORAGE_NAME}
checkpoint-connection-string: DefaultEndpointsProtocol=https;AccountName=${EVENT_HUB_OFFSET_STORAGE_NAME};AccountKey=${spring-cloud-azure-eventhub-checkpoint-access-key:${EVENT_HUB_OFFSET_STORAGE_KEY}};EndpointSuffix=core.windows.net
checkpoint-interval: PT1M
sending-timeout: PT10M
sending-retries: 3
health-check-ttl: PT24H
health-check-batch-size: 5
health-check-batch-size: 2
health-check-allowed-delay: PT60M
consumers:
dfp-hub:
destination: dfp-hub
group: ${spring.application.name}
checkpoint-interval: PT1M
producers:
item-lock-event-hub:
destination: item-lock-event-hub
sending-period: PT1S
sending-workers: 4
buffer-size: 100
item-label-event-hub:
destination: item-label-event-hub
sending-period: PT1S
sending-workers: 4
buffer-size: 10
item-resolution-event-hub:
destination: item-resolution-event-hub
sending-period: PT1S
sending-workers: 6
buffer-size: 10
item-assignment-event-hub:
destination: item-assignment-event-hub
sending-period: PT2S
sending-workers: 6
buffer-size: 1000
queue-size-event-hub:
destination: queue-size-event-hub
sending-period: PT1S
sending-workers: 4
buffer-size: 200
queue-update-event-hub:
destination: queue-update-event-hub
sending-period: PT1S
sending-workers: 4
buffer-size: 100
overall-size-event-hub:
destination: overall-size-event-hub
sending-period: PT1S
sending-workers: 4
buffer-size: 100
swagger:
@@ -183,7 +216,12 @@ spring:
scope: https://graph.microsoft.com/.default
azure-dfp-api:
client-id: ${CLIENT_ID}
client-secret: ${client-secret:${CLIENT_SECRET}}
client-secret: ${client-secret:${CLIENT_SECRET}}
authorization-grant-type: client_credentials
scope: https://api.dfp.microsoft-int.com/.default
azure-dfp-la-api:
client-id: b2238e4d-a9ee-4939-ab8e-944f29fa7eb3
client-secret: daadc627-e96b-4e2d-a8b7-dd135d11e753
authorization-grant-type: client_credentials
scope: https://api.dfp.microsoft-int.com/.default
provider:
@@ -193,6 +231,8 @@ spring:
token-uri: https://login.microsoftonline.com/${CLIENT_TENANT_ID}/oauth2/v2.0/token
azure-dfp-api:
token-uri: https://login.microsoftonline.com/${CLIENT_TENANT_ID}/oauth2/v2.0/token
azure-dfp-la-api:
token-uri: https://login.microsoftonline.com/${CLIENT_TENANT_ID}/oauth2/v2.0/token
aop:
proxyTargetClass: true


@@ -1,6 +0,0 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
function getBucketNumber(value, bucket_size){
return Math.floor(value / bucket_size);
}

Binary file not displayed (new image, 349 KiB).

Binary data: documentation/pictures/MRStructureDiagrams-BELayers.png (new file; image not displayed, 338 KiB).

Binary data: documentation/pictures/MRStructureDiagrams-Backend.png (new file; image not displayed, 900 KiB).

frontend/CONTRIBUTION.md (new file, 207 lines)

@@ -0,0 +1,207 @@
# Manual Review Contribution guide (frontend)
This document:
* contains low-level solution details
* is used as an onboarding guide for newcomers
* describes how to contribute to the front-end (FE) part of the project
* should be followed by contributors to pass the PR (pull request) procedure
> The project is alive and rapidly evolving, so if you see any inaccuracies, please
notify the maintainers of the solution. We appreciate any feedback.
Summary:
* [Architecture description](#architecture-description)
+ [In-place solutions overview](#in-place-solutions-overview)
+ [Frontend application layers](#frontend-application-layers)
+ [Top-level folder structure](#frontend-folder-structure)
* [Contribution rules](#contribution-rules)
+ [Reporting a bug](#reporting-a-bug)
+ [Pull requests](#pull-requests)
+ [Copyright and licensing](#copyright-and-licensing)
* [Code style](#code-style)
+ [Pull request labels](#pull-request-labels)
+ [Git commit messages](#git-commit-messages)
+ [Code of conduct](#code-of-conduct)
* [FAQ](#faq)
## Architecture description
* The frontend is built using the Single Page Application (SPA) approach.
* It handles user authentication by implementing the [Microsoft identity platform and OAuth 2.0 authorization code flow](https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-auth-code-flow).
* The frontend itself has no runtime after it has been built for production, which allows it to be hosted as a static bundle on an Azure Storage Account
and exposed to the public via the Static Website approach, making it highly cost-efficient.
* The frontend communicates with both the Queues and Analytics Services via exposed REST APIs.
* The application starts by executing a bootstrap task queue. The order of invocation is important, since tasks depend on the results of previous ones (see the sketch below).
* The state management concept is used for storing the data.
* All component dependencies are provided following the dependency inversion principle from SOLID.
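To make the bootstrap flow concrete, here is a minimal sketch of a sequential task queue; the `BootstrapTask` shape and the runner function are illustrative assumptions, while the real tasks live in the `frontend/src/application-bootstrap/` folder:
```typescript
// Assumed task shape for illustration only; the actual bootstrap tasks
// (e.g. registerConfigurationTask) may expose a different interface.
interface BootstrapTask {
    execute(): Promise<void>;
}

// Tasks are awaited strictly in order, because later tasks depend on the
// results of earlier ones (e.g. services must be registered in the IoC
// container before any data pre-loading can resolve them).
async function runBootstrapQueue(tasks: BootstrapTask[]): Promise<void> {
    for (const task of tasks) {
        await task.execute();
    }
}
```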
### In-place solutions overview
1. [Typescript](https://www.typescriptlang.org/) is used as the language; it extends JavaScript with types, so all the data models have their own interfaces.
2. [React](https://reactjs.org/) is used as the SPA framework. In most cases we preferred class components over React hooks.
3. The [Sass](https://sass-lang.com/) preprocessor is used as a CSS extension language.
4. [React router](https://reactrouter.com/) is utilized for navigation. You can find the ROUTES_LIST in `frontend/app.tsx` and the navigation menu in `frontend/src/components/page-layout/left-navigation/left-navigation.tsx`.
5. [MobX](https://mobx.js.org/) is used for state management; files whose names end with `store` should be treated as MobX stores.
6. The dependency inversion principle is implemented with [InversifyJS](http://inversify.io/).
* This means all the services are registered in the IoC container during the application bootstrap (find more details in the `frontend/src/application-bootstrap/` folder; a registration sketch follows this list).
* Each service has a unique token, a `Symbol`-type value associated with the service (find the list in `frontend/src/types.ts`).
* If you want to use any service or MobX store in a React component, you need to inject it using the following pattern:
```
@resolve(TYPES.<service-token>)
<service-name-for-component>: <service-class-name>;
```
Code example:
```typescript
@resolve(TYPES.SEARCH_SCREEN_STORE)
private searchScreenStore!: SearchScreenStore;
```
7. [Create React App](https://github.com/facebook/create-react-app) was used for the application build.
However, the default configuration was ejected, and you can find the customised webpack configuration for both local development and the production build in the `frontend/config/` folder.
This was done in order to use parameter decorators, for example:
```typescript
import { inject, injectable } from 'inversify';
@injectable()
export class AwesomeStore {
constructor(@inject("SOME_SERVICE") private customerService: any) {}
}
```
At this point, Babel does not have proper support for parameter decorators ([Issue reference](https://github.com/babel/babel/issues/9838)). However, TypeScript does, so we changed the webpack
configuration to process `.ts` and `.tsx` files with `ts-loader` instead of the `babel-loader` that came out of the box with Create React App.
8. [Moment.js](https://momentjs.com/) is used in the utils functions to work with dates.
9. The application is integrated with [Azure Maps](https://docs.microsoft.com/en-us/azure/azure-maps/).
* The FE requests a token for the Azure Maps API from the Queues BE module (find more details in `frontend/src/utility-services/azure-maps-service.ts`).
* The map itself is used in the following component: `frontend/src/screens/review-console/item-details/parts/transaction-map/map/map.tsx`.
10. The [Fluent UI](https://developer.microsoft.com/en-us/fluentui#/controls/web) collection of UX controls was used for creating the components.
11. Some [Nivo](https://nivo.rocks/components) components were used for building the charts on the Dashboard pages.
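As referenced in item 6, here is a minimal registration sketch; `GreetingService` and its token are hypothetical, while the real tokens live in `frontend/src/types.ts` and the real bindings are created by the bootstrap tasks:
```typescript
import 'reflect-metadata';
import { Container, injectable } from 'inversify';

// Hypothetical token; the project keeps its token list in frontend/src/types.ts.
const TYPES = {
    GREETING_SERVICE: Symbol.for('GreetingService')
};

// Hypothetical service used only for illustration.
@injectable()
class GreetingService {
    greet(name: string): string {
        return `Hello, ${name}!`;
    }
}

// Bind the service class to its token in the IoC container; a component can
// then obtain it with @resolve(TYPES.GREETING_SERVICE), as shown above.
const container = new Container();
container.bind<GreetingService>(TYPES.GREETING_SERVICE).to(GreetingService);
```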
### Frontend application layers
![Architecture overview](../frontend/src/assets/doc/fe-architeture.png)
1. API services - interact with both the Queues and Analytics BE services using the [axios HTTP client](https://github.com/axios/axios). Usually, all the response (DTO) model interfaces are placed in the folder with the service implementation.
2. Domain services - work with the data source and implement business logic. They usually use data transformers to convert an API response (DTO) model to a view model (as sketched below).
3. View services - MobX stores with the computed and aggregated data for the view model.
4. As described in the [In-place solutions overview](#in-place-solutions-overview) section, these services are registered in the Inversify container and injected into components.
5. View components - React components responsible for rendering the view, handling DOM events, and styling.
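A condensed sketch of how the first two layers fit together; the DTO shape, the endpoint URL, and the transformer below are illustrative assumptions, not the project's actual models:
```typescript
import axios from 'axios';

// Hypothetical DTO returned by the BE; real DTO interfaces live next to the
// API service implementations in frontend/src/data-services.
interface ItemDTO {
    id: string;
    import_date: string;
}

// Hypothetical view model consumed by React components.
class ItemViewModel {
    constructor(public id: string, public importDate: Date) {}
}

// API service layer: a raw HTTP call returning a DTO (the endpoint is made up).
async function fetchItemDTO(id: string): Promise<ItemDTO> {
    const response = await axios.get<ItemDTO>(`/api/items/${id}`);
    return response.data;
}

// Data transformer: converts the API response (DTO) into a view model.
function toItemViewModel(dto: ItemDTO): ItemViewModel {
    return new ItemViewModel(dto.id, new Date(dto.import_date));
}

// Domain service layer: works with the data source and applies business logic.
async function getItem(id: string): Promise<ItemViewModel> {
    return toItemViewModel(await fetchItemDTO(id));
}
```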
### Frontend folder structure
```
./frontend
|+-- config // customised webpack configuration for both local development and the production build
| +-- jest // Jest configuration
|+-- public // has the entrypoint index.html, favicon, etc.
| +-- index.html
|+-- scripts // node scripts implementing the npm commands: `start, test, build`
|+-- src
| +-- application-bootstrap // task queue for bootstrapping the application: service registration tasks, some data pre-loading, etc.
| +-- assets // SVG pictures, icons
| +-- constants // different constants, enums
| +-- components // reusable components with common logic that are used in several places
| +-- data-services
| | +-- api-services // services for interacting with the BE
| | +-- data-transformers // transformers for converting API responses into view models
| | +-- domain-services // services that work with the data source and implement business logic
| | +-- interfaces // Typescript interfaces for both API and domain services
| +-- models // view data models (classes and interfaces)
| +-- screens // top-level React components for the route pages and their parts
| +-- styles // common SCSS variables for colors and sizes
| +-- utility-services // services not related to any business entity, like authentication-service, azure-maps-service, local-storage-service, etc.
| +-- utils // helper functions for working with dates, text, colors, etc.
| +-- view-services // MobX stores
```
## Contribution rules
We use [Github Flow](https://guides.github.com/introduction/flow/index.html), so all code changes happen through Pull Requests.
### Reporting a bug
We use GitHub issues to track public bugs. Report a bug by opening a new issue; it's that easy!
Please refer to the project's GitHub issues.
### Pull Requests
1. **Fork** the repo on GitHub
2. **Clone** the project to your own machine
3. **Commit changes** to your own branch
4. **Push** your work back up to your fork
5. **Create a PR** to merge this branch to the `development` branch
NOTE: Before opening the PR, please make sure you have read our Code style section ([Git commit messages](#git-commit-messages), [Pull request labels](#pull-request-labels)).
### Copyright and licensing
The project is licensed under the [MIT License](http://choosealicense.com/licenses/mit/).
When you submit code changes, your submissions are understood to be under the same [MIT License](http://choosealicense.com/licenses/mit/) that covers the project. Feel free to contact the maintainers if that's a concern.
Include a license header at the top of each new file.
````
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
````
* [TSX file license example](https://github.com/griddynamics/msd365fp-manual-review/blob/80509535eb9e9a25f0ce1137a1de5ce1503a4c3f/frontend/src/index.tsx#L1)
* [TS file license example](https://github.com/griddynamics/msd365fp-manual-review/blob/80509535eb9e9a25f0ce1137a1de5ce1503a4c3f/frontend/src/types.ts#L1)
## Code style
For linting the frontend code we use [ESLint](https://eslint.org/) configured along with [Airbnb's ESLint config with TypeScript support](https://www.npmjs.com/package/eslint-config-airbnb-typescript).
Configuration file: [./frontend/.eslintrc]( https://github.com/griddynamics/msd365fp-manual-review/blob/master/frontend/.eslintrc)
### Pull request labels
| Label name | Description |
| --- | --- |
| `FRONTEND` | Pull requests which add or change functionality in the `frontend` module. |
| `BACKEND` | Pull requests which add or change functionality in the `backend` module. |
| `HOT_FIX` | Pull requests which fix bugs and need to be merged into the `master` branch. |
| `BUG_FIX` | Pull requests which fix specific bugs and need to be merged into the `development` branch. |
| `WIP` | Pull requests which are still being worked on; more changes will follow. |
### Git commit messages
1. Commits have to be descriptive and contain only changes related to the story you picked for development.
2. Each commit must contain a tag at the beginning of the commit message indicating where the changes were made:
* `[BE]` for the `backend` module
* `[FE]` for the `frontend` module accordingly.
3. Limit the first line to 72 characters or fewer.
4. You can look at the commit history to find an example of a typical commit; a hypothetical example is also shown below.
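A hypothetical commit message that follows these rules:
```
[FE] Add icon-text component with tooltip support
```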
### Code of conduct
Find detailed information in the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct).
## FAQ
#### Components decoration with @autoBind and @observer
If you use both decorators on the same class, you may find a warning in the console. This may happen
because @autoBind changes the render function (e.g. `this.render = this.render.bind(this);`), and it cannot be observed by
MobX anymore.
> `The render function for an observer component (<Component Display Name>) was modified after MobX attached. This is not supported, since the new function can't be triggered by MobX.`
🚫 Will trigger a warning if the component is unmounted and mounted again.
```typescript jsx
import React, { Component } from 'react';
import { observer } from 'mobx-react';
import autoBind from 'autobind-decorator';
@observer
@autoBind
export class Button extends Component<{data: any}, never> {
onClick() { console.log(this.props.data); }
render() { return (<button onClick={this.onClick} />); }
}
```
✅ Will not trigger a warning (recommended approach)
```typescript jsx
import React, { Component } from 'react';
import { observer } from 'mobx-react';
import autoBind from 'autobind-decorator';
@observer
export class Button extends Component<{data: any}, never> {
@autoBind
onClick() { console.log(this.props.data); }
render() { return (<button onClick={this.onClick} />); }
}
```


@@ -10,18 +10,18 @@ Tools that have to be installed in your local environment.
- `yarn` 1.22.0
#### Quick Start
For quick project setup in development mode, you can execute the following commands.
> NOTE:
> In order to access a local back-end runtime instead of the cloud-deployed version,
> you need to specify the API_BASE_URL environment variable, for instance in a .env file.
> The dev URL is used by default; find details in `./src/setupProxy.js`.
```sh
> cd ./msd365fp-manual-review/frontend
> yarn
> yarn start
```
For quick project setup in development mode, you need to:
1. Specify several environment variables, for instance in a .env file (see [sample.env](./sample.env) as an example):
- API_BASE_URL - the base backend URL you want the frontend to communicate with; used in [setupProxy.js](./src/setupProxy.js)
- LOG_LEVEL - can be error | warn | info | debug | trace; debug is used by default in [development-configuration.ts](./src/utility-services/configuration/development-configuration.ts)
- CLIENT_ID, TENANT, MAP_CLIENT_ID, TOKEN_PERSIST_KEY, NONCE_PERSIST_KEY - used in [development-configuration.ts](./src/utility-services/configuration/development-configuration.ts)
2. Execute the following commands:
```sh
> cd ./msd365fp-manual-review/frontend
> yarn
> yarn start
```
#### Deployment
In order to perform a deployment, refer to the [deployment guide](../arm/README.md).
@@ -53,6 +53,5 @@ Your app is ready to be deployed!
See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information.
## Microsoft Open Source code of conduct
For additional information, see the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct).
## Contribution
Find detailed information in the [Contribution guide](./CONTRIBUTION.md).


@@ -1,15 +0,0 @@
## Why ejected create-react-app
In order to use parameter decorators, for example:
```typescript
import { inject, injectable } from 'inversify';
@injectable()
export class AwesomeStore {
constructor(@inject("SOME_SERVICE") private customerService: any) {}
}
```
At this point, Babel does not have proper support for parameter decorators; however, TypeScript does, so we changed the webpack
configuration to process .ts and .tsx files with `ts-loader` instead of the `babel-loader` that came out of the box with create-react-app.
[Issue reference](https://github.com/babel/babel/issues/9838)


@@ -1,36 +0,0 @@
## Components decoration
#### @autoBind and @observer
If you use both decorators on the same class, you may find a warning in the console. This may happen
because @autoBind changes the render function (e.g. `this.render = this.render.bind(this);`), and it cannot be observed by
MobX anymore.
> `The render function for an observer component (<Component Display Name>) was modified after MobX attached. This is not supported, since the new function can't be triggered by MobX.`
🚫 Will trigger a warning if the component is unmounted and mounted again.
```typescript jsx
import React, { Component } from 'react';
import { observer } from 'mobx-react';
import autoBind from 'autobind-decorator';
@observer
@autoBind
export class Button extends Component<{data: any}, never> {
onClick() { console.log(this.props.data); }
render() { return (<button onClick={this.onClick} />); }
}
```
✅ Will not trigger a warning (recommended approach)
```typescript jsx
import React, { Component } from 'react';
import { observer } from 'mobx-react';
import autoBind from 'autobind-decorator';
@observer
export class Button extends Component<{data: any}, never> {
@autoBind
onClick() { console.log(this.props.data); }
render() { return (<button onClick={this.onClick} />); }
}
```


@@ -88,7 +88,7 @@
"react-router-dom": "^5.1.2",
"reflect-metadata": "^0.1.13",
"resolve": "1.15.0",
"resolve-url-loader": "3.1.1",
"resolve-url-loader": "^3.1.2",
"sass-loader": "8.0.2",
"semver": "6.3.0",
"style-loader": "0.23.1",

frontend/sample.env (new file, 13 lines)

@@ -0,0 +1,13 @@
# Used in frontend/src/setupProxy.js
API_BASE_URL=https://dfp-manrev-dev.azurefd.net/api
#
# The rest are used in frontend/src/utility-services/configuration/development-configuration.ts
#
# LOG_LEVEL can be error | warn | info | debug | trace
LOG_LEVEL=debug
BASE_AUTH_URL=
CLIENT_ID=
TENANT=
TOKEN_PERSIST_KEY=
NONCE_PERSIST_KEY=
MAP_CLIENT_ID=


@@ -8,7 +8,8 @@ import {
Configuration,
DevelopmentConfiguration,
Logger,
ProductionConfiguration
ProductionConfiguration,
SEVERITY,
} from '../utility-services';
export const registerConfigurationTask = {
@@ -16,7 +17,15 @@ export const registerConfigurationTask = {
let configuration: Configuration;
if (process.env.NODE_ENV !== 'production') {
configuration = new DevelopmentConfiguration();
configuration = new DevelopmentConfiguration(
process.env.LOG_LEVEL as SEVERITY,
process.env.BASE_AUTH_URL,
process.env.CLIENT_ID,
process.env.TENANT,
process.env.TOKEN_PERSIST_KEY,
process.env.NONCE_PERSIST_KEY,
process.env.MAP_CLIENT_ID
);
} else {
const configurationApiService = new ConfigurationApiServiceImpl();
const loadedConfig = await configurationApiService.getConfiguration();


@@ -15,7 +15,13 @@ import {
AppStore,
QueuePerformanceStore,
LockedItemsStore,
AnalystOverturnedPerformanceStore, AlertsStore, AlertsMutationStore, ReportsModalStore
AnalystOverturnedPerformanceStore,
AlertsStore,
AlertsMutationStore,
ReportsModalStore,
LinkAnalysisStore,
LinkAnalysisDFPItemsStore,
LinkAnalysisMRItemsStore
} from '../view-services';
import { QueueMutationStore, QueueMutationModalStore } from '../view-services/essence-mutation-services';
import { ReviewPermissionStore } from '../view-services/review-permission-store';
@@ -102,6 +108,18 @@ export const registerViewServicesTask = {
.bind<FiltersStore>(TYPES.FILTERS_STORE)
.to(FiltersStore);
container
.bind<LinkAnalysisStore>(TYPES.LINK_ANALYSIS_STORE)
.to(LinkAnalysisStore);
container
.bind<LinkAnalysisDFPItemsStore>(TYPES.LINK_ANALYSIS_DFP_ITEMS_STORE)
.to(LinkAnalysisDFPItemsStore);
container
.bind<LinkAnalysisMRItemsStore>(TYPES.LINK_ANALYSIS_MR_ITEMS_STORE)
.to(LinkAnalysisMRItemsStore);
// ____ DASHBOARD STORES ____
container

Binary data: frontend/src/assets/doc/fe-architeture.png (new file; image not displayed, 496 KiB).


@@ -87,7 +87,6 @@ export class TextInCondition extends Component<TextInConditionComponentProps, Te
@autobind
addCustomItem(input: string) {
// eslint-disable-next-line @typescript-eslint/no-unused-vars
const { filterId, condition } = this.props;
this.filtersStore.postDictionaryValues(filterId, input);


@@ -0,0 +1,24 @@
@import "../../styles/variables";
.icon-text {
display: flex;
line-height: 20px;
&__icon {
vertical-align: middle;
margin-right: 4px;
cursor: default;
&-good {
color: $goodColor;
}
&-bad {
color: $badColor;
}
&-unknown {
color: $neutralTertiary;
}
}
}


@@ -0,0 +1,142 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
import React, { Component } from 'react';
import cn from 'classnames';
import { FontIcon } from '@fluentui/react/lib/Icon';
import { Text, ITextProps } from '@fluentui/react/lib/Text';
import { TooltipHost } from '@fluentui/react/lib/Tooltip';
import './icon-text.scss';
const CN = 'icon-text';
enum ICON_STYLES {
'GOOD' = 'GOOD',
'BAD' = 'BAD',
'UNKNOWN' = 'UNKNOWN'
}
interface IconInfo {
iconName: string,
value: any;
tooltipText?: string | JSX.Element,
}
interface Icons {
[ICON_STYLES.GOOD]?: IconInfo;
[ICON_STYLES.BAD]?: IconInfo;
[ICON_STYLES.UNKNOWN]?: IconInfo;
}
interface IconTextProps {
text?: string | JSX.Element | null,
textVariant: ITextProps['variant'],
placeholder: string | JSX.Element,
iconValue: any,
icons: Icons,
className?: string;
title?: string;
}
export class IconText extends Component<IconTextProps, never> {
// Picks the icon (GOOD/BAD/UNKNOWN) whose configured value matches the given
// one, wrapping it in a TooltipHost when tooltip text is provided.
renderIconWithTooltip(value: any, icons: Icons) {
const tooltipId = `${Math.random()}`;
if (icons.GOOD && icons.GOOD.tooltipText && icons.GOOD.value === value) {
return (
<TooltipHost
content={icons.GOOD.tooltipText}
id={tooltipId}
calloutProps={{ gapSpace: 0 }}
>
<FontIcon
iconName={icons.GOOD.iconName}
className={cn(`${CN}__icon`, `${CN}__icon-good`)}
aria-describedby={tooltipId}
/>
</TooltipHost>
);
}
if (icons.GOOD && icons.GOOD.value === value) {
return (
<FontIcon
iconName={icons.GOOD.iconName}
className={cn(`${CN}__icon`, `${CN}__icon-good`)}
/>
);
}
if (icons.BAD && icons.BAD.tooltipText && icons.BAD.value === value) {
return (
<TooltipHost
content={icons.BAD.tooltipText}
id={tooltipId}
calloutProps={{ gapSpace: 0 }}
>
<FontIcon
iconName={icons.BAD.iconName}
className={cn(`${CN}__icon`, `${CN}__icon-bad`)}
aria-describedby={tooltipId}
/>
</TooltipHost>
);
}
if (icons.BAD && icons.BAD.value === value) {
return (
<FontIcon
iconName={icons.BAD.iconName}
className={cn(`${CN}__icon`, `${CN}__icon-bad`)}
/>
);
}
if (icons.UNKNOWN && icons.UNKNOWN.tooltipText && icons.UNKNOWN.value === value) {
return (
<TooltipHost
content={icons.UNKNOWN.tooltipText}
id={tooltipId}
calloutProps={{ gapSpace: 0 }}
>
<FontIcon
iconName={icons.UNKNOWN.iconName}
className={cn(`${CN}__icon`, `${CN}__icon-unknown`)}
aria-describedby={tooltipId}
/>
</TooltipHost>
);
}
if (icons.UNKNOWN && icons.UNKNOWN.value === value) {
return (
<FontIcon
iconName={icons.UNKNOWN.iconName}
className={cn(`${CN}__icon`, `${CN}__icon-unknown`)}
/>
);
}
return null;
}
render() {
const {
text,
textVariant,
placeholder,
iconValue,
icons,
className,
title
} = this.props;
if (text === undefined || text === null) {
return placeholder;
}
return (
<div className={cn(CN, className)}>
{this.renderIconWithTooltip(iconValue, icons)}
<Text variant={textVariant} title={title}>{text}</Text>
</div>
);
}
}


@@ -1,6 +1,4 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
export function getCurrentDateTimeString() {
}
export * from './icon-text';


@@ -7,3 +7,4 @@ export * from './switch-tabs';
export * from './price';
export * from './pickers';
export * from './filters';
export * from './icon-text';


@@ -46,9 +46,10 @@ import './item-details-list.scss';
export interface ItemsDetailsListProps {
queueStore: QueueStore;
storeForItemsLoading: ItemsLoadable;
storeForItemsLoading: ItemsLoadable<Item>;
handleLoadMoreRowsClick: () => void;
handleSortingUpdate?: (sortingObject: ItemSortSettingsDTO) => void;
searchId?: string;
sortingObject?: ItemSortSettingsDTO;
selectedQueue?: Queue | null;
loadingMessage?: string;
@@ -249,15 +250,15 @@ export class ItemsDetailsList extends Component<ItemsDetailsListProps, never> {
@autobind
setItemToItemStore(selectedItem: Item) {
if (!selectedItem.active) {
this.history.push({ pathname: ROUTES.build.searchInactiveItemDetails(selectedItem.id) });
return;
}
const { selectedQueue, queueStore } = this.props;
const { queueStore, selectedQueue, searchId } = this.props;
const { lockedById, lockedOnQueueViewId, status } = selectedItem;
const { user } = this.user;
if (searchId && !selectedItem.active) {
this.history.push({ pathname: ROUTES.build.searchInactiveItemDetails(searchId, selectedItem.id) });
return;
}
const isEscalatedItem = status === ITEM_STATUS.ESCALATED || status === ITEM_STATUS.ON_HOLD;
const queues = isEscalatedItem ? queueStore.escalatedQueues : queueStore.queues;
@@ -276,10 +277,10 @@ export class ItemsDetailsList extends Component<ItemsDetailsListProps, never> {
this.history.push({ pathname });
}
if (!selectedQueue && queueViewId) {
if (searchId && !selectedQueue && queueViewId) {
const pathname = isLockedByCurrent && isLockedInTheCurrentQueue
? ROUTES.build.searchItemDetailsReviewConsole(queueViewId, selectedItem.id)
: ROUTES.build.searchItemDetails(queueViewId, selectedItem.id);
? ROUTES.build.searchItemDetailsReviewConsole(searchId, queueViewId, selectedItem.id)
: ROUTES.build.searchItemDetails(searchId, queueViewId, selectedItem.id);
this.history.push({ pathname });
}
@@ -412,7 +413,9 @@ export class ItemsDetailsList extends Component<ItemsDetailsListProps, never> {
renderQueues(item: Item) {
const { queueStore } = this.props;
const { queueIds, selectedQueueId } = item;
const selectedQueueName = queueStore.getQueueById(selectedQueueId!)?.name || '';
const selectedQueueName = selectedQueueId
? queueStore.getQueueById(selectedQueueId!)?.name || `Deleted queue (ID ${selectedQueueId!.substr(0, 8)}...)`
: 'Deleted queue';
const queuesOptions: IContextualMenuItem[] = this.composeQueueOptionsForItem(item, queueIds);
// The first option is the header "Open in", and only the second one is the real queue

Some files were not shown because too many files changed in this diff.