Add first version of the transversal ACA challenge up to the deployment of the statestore

This commit is contained in:
Parent: c8ad8c24a1
Commit: 3652379747
@@ -12,5 +12,5 @@ finefines:
license-key: HX783-5PN1G-CRJ4A-K2L7V
vehicle-information.address: ${VEHICLE_REGISTRATION_SERVICE_BASE_URL:http://${VEHICLE_REGISTRATION_SERVICE:localhost}:6002}/vehicleinfo/{licenseNumber}
# Used for service discovery when service-to-service invocation is used
# Used for service discovery when service invocation is used
vehicle-registration-service.name: ${VEHICLE_REGISTRATION_SERVICE:vehicleregistrationservice}
@@ -0,0 +1,15 @@
componentType: state.azure.cosmosdb
version: v1
metadata:
- name: url
  value: <YOUR_COSMOSDB_ACCOUNT_URL>
- name: masterKey
  value: <YOUR_COSMOSDB_MASTER_KEY>
- name: database
  value: dapr-workshop-java-database
- name: collection
  value: vehicle-state
- name: actorStateStore
  value: "true"
scopes:
- traffic-control-service
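
Once the Container Apps environment exists (see the infrastructure steps later in this challenge), a component file like the one above can be registered as a Dapr component of the environment. The sketch below assumes the file is saved as `dapr/components/aca-cosmosdb-statestore.yaml` and reuses the environment and resource-group names used elsewhere in this workshop; adjust the path and names to match your setup.

```bash
# Register the Cosmos DB state store as a Dapr component of the ACA environment
# (file name and component name are assumptions; the workshop may use different ones).
az containerapp env dapr-component set \
  --name cae-dapr-workshop-java \
  --resource-group rg-dapr-workshop-java \
  --dapr-component-name statestore \
  --yaml ./dapr/components/aca-cosmosdb-statestore.yaml
```
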
@@ -1,4 +1,3 @@
# pubsub.yaml for Azure Service Bus
componentType: pubsub.azure.servicebus
version: v1
metadata:
@@ -1,4 +1,3 @@
# pubsub.yaml for Azure Cache for Redis
componentType: pubsub.redis
version: v1
metadata:
@@ -3,3 +3,4 @@ _site
.jekyll-cache
.jekyll-metadata
vendor
*.local.*
@@ -0,0 +1,17 @@
Dapr is a portable, serverless, event-driven runtime that makes it easy for developers to build resilient, stateless and stateful microservices that run on the cloud and edge, and embraces the diversity of languages and developer frameworks.

Dapr codifies the *best practices* for building microservice applications into open, independent building blocks that enable you to build portable applications with the language and framework of your choice. Each building block is independent and you can use one, some, or all of them in your application.

![Dapr overview]({{ include.relativeAssetsPath }}images/overview.png)

## How it works

Dapr injects a sidecar (container or process) next to each compute unit. The sidecar interacts with event triggers and communicates with the compute unit via standard HTTP or gRPC protocols. This enables Dapr to support all existing and future programming languages without requiring you to import frameworks or libraries.

Dapr offers built-in state management, reliable messaging (at-least-once delivery), triggers and bindings through standard HTTP verbs or gRPC interfaces. This allows you to write stateless, stateful and actor-like services following the same programming paradigm. You can freely choose your consistency model, threading model and message delivery patterns.
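
Because the building blocks are exposed over plain HTTP by the sidecar, any language can use them without an SDK. The calls below are a minimal sketch against the Dapr HTTP API; the sidecar port 3500 is the default, and the component names (`pubsub`, `statestore`) and JSON payloads are illustrative only.

```bash
# Publish a message through the pub/sub building block
curl -X POST http://localhost:3500/v1.0/publish/pubsub/test \
  -H "Content-Type: application/json" \
  -d '{"licenseNumber": "RT-318-K", "excessSpeed": 15}'

# Save and then read state through the state management building block
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{"key": "RT-318-K", "value": {"licenseNumber": "RT-318-K"}}]'
curl http://localhost:3500/v1.0/state/statestore/RT-318-K
```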

Dapr runs natively on Kubernetes, as a self-hosted binary on your machine, on an IoT device, or as a container that can be injected into any system, in the cloud or on-premises.

Dapr uses pluggable component state stores and message buses such as Redis, as well as gRPC, to offer a wide range of communication methods, including direct Dapr-to-Dapr invocation using gRPC and async pub/sub with guaranteed delivery and at-least-once semantics.

{{ include.relativeAssetsPath }}
@@ -0,0 +1,26 @@
Make sure you have the following prerequisites installed on your machine:

- [Git](https://git-scm.com/)
- [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli)
- A code editor or IDE like:
  - [Visual Studio Code](https://code.visualstudio.com/)
  - [IntelliJ IDEA](https://www.jetbrains.com/idea/download/)
  - [Eclipse IDE](https://www.eclipse.org/downloads/)
- Download [Docker Desktop](https://www.docker.com/products/docker-desktop) or [Rancher Desktop](https://rancherdesktop.io/)
- [Install the Dapr CLI](https://docs.dapr.io/getting-started/install-dapr-cli/) and [initialize Dapr locally](https://docs.dapr.io/getting-started/install-dapr-selfhost/)
- [OpenJDK 17](https://learn.microsoft.com/en-us/java/openjdk/download#openjdk-17)
- [Apache Maven 3.8.x+](http://maven.apache.org/download.cgi)
  - Make sure that Maven uses the correct Java runtime by running `mvn -version`.
- Clone the source code repository:

  ```bash
  git clone https://github.com/Azure/dapr-java-workshop.git
  ```

**From now on, this folder is referred to as the 'source code' folder.**

{: .important-title }
> Powershell
>
> If you are using Powershell, you need to replace the `\` at the end of each line of a multiline command with a backtick (**`**).
>
@@ -0,0 +1,83 @@
The Spring for Apache Kafka (spring-kafka) project applies core Spring concepts to the development of Kafka-based messaging solutions. It provides a "template" as a high-level abstraction for sending messages. It also provides support for message-driven POJOs with `@KafkaListener` annotations and a "listener container".

## Kafka Publisher

1. The `TrafficControlService/src/main/java/dapr/traffic/fines/KafkaConfig.java` file defines a custom `JsonObjectSerializer` class that is used to serialize objects for Kafka publishing.

   ```java
   public JsonObjectSerializer() {
       super(customizedObjectMapper());
   }

   private static ObjectMapper customizedObjectMapper() {
       ObjectMapper mapper = JacksonUtils.enhancedObjectMapper();
       mapper.registerModule(new JavaTimeModule());
       mapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
       return mapper;
   }
   ```

2. The `TrafficControlService/src/main/java/dapr/traffic/fines/KafkaConfig.java` file also defines the `ProducerFactory` and `KafkaTemplate` beans.

   ```java
   @Bean
   public ProducerFactory<String, SpeedingViolation> producerFactory() {
       Map<String, Object> config = new HashMap<>();
       config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
       config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
       config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonObjectSerializer.class);
       return new DefaultKafkaProducerFactory(config);
   }

   @Bean
   public KafkaTemplate<String, SpeedingViolation> kafkaTemplate() {
       return new KafkaTemplate<String, SpeedingViolation>(producerFactory());
   }
   ```

3. The `TrafficControlService/src/main/java/dapr/traffic/fines/KafkaFineCollectionClient.java` class uses the `KafkaTemplate` to publish a fine to the `test` topic.

   ```java
   @Autowired
   private KafkaTemplate<String, SpeedingViolation> kafkaTemplate;

   @Override
   public void submitForFine(SpeedingViolation speedingViolation) {
       kafkaTemplate.send("test", speedingViolation);
   }
   ```

## Kafka Subscriber

1. The `FineCollectionService/src/main/java/dapr/fines/violation/KafkaConsumerConfig.java` file defines the consumer factory and the listener container factory for the Kafka listener.

   ```java
   @Bean
   public ConsumerFactory<String, SpeedingViolation> consumerFactory() {
       Map<String, Object> props = new HashMap<>();
       props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
       props.put(ConsumerConfig.GROUP_ID_CONFIG, "test");
       props.put(JsonObjectDeserializer.TRUSTED_PACKAGES, "*");
       return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(),
               new JsonObjectDeserializer());
   }

   @Bean
   public ConcurrentKafkaListenerContainerFactory<String, SpeedingViolation> kafkaListenerContainerFactory() {
       ConcurrentKafkaListenerContainerFactory<String, SpeedingViolation>
               factory = new ConcurrentKafkaListenerContainerFactory<>();
       factory.setConsumerFactory(consumerFactory());
       return factory;
   }
   ```

2. The `FineCollectionService/src/main/java/dapr/fines/violation/KafkaViolationConsumer.java` file implements the Kafka listener.

   ```java
   @KafkaListener(topics = "test", groupId = "test", containerFactory = "kafkaListenerContainerFactory")
   public void listen(SpeedingViolation violation) {
       violationProcessor.processSpeedingViolation(violation);
   }
   ```
@@ -0,0 +1,94 @@
In this assignment, you'll run the application to make sure everything works correctly.

## Assignment goals

To complete this assignment, you must reach the following goals:

- Apache Kafka is available - either run it as a Docker container (see below) or install and run it on your machine ([download](https://kafka.apache.org/downloads)).
- All services are running.
- The logging indicates that all services are working correctly.

## Step 1. Running Kafka using Docker or Rancher Desktop

From the root of the **source code** folder, run the following command to configure and start Kafka from your locally installed Docker or Rancher Desktop:

```bash
docker-compose up -d
```

This command reads the `docker-compose.yml` file located in the root folder, then downloads and runs the Kafka containers for this workshop.
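
For orientation, a single-broker Kafka setup in Docker Compose typically has the shape sketched below. The image names, versions, ports and environment values here are assumptions for illustration; the actual `docker-compose.yml` in the repository may differ, so use the file from the source code folder rather than this sketch.

```yaml
# Minimal sketch of a single-broker Kafka setup (assumed images and settings).
version: "3.8"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:7.3.0
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"            # matches the broker address used by the services
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```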

## Step 2. Run the VehicleRegistration service

1. Open the source code folder in your code editor.

2. Open a terminal window.

3. Make sure the current folder is `VehicleRegistrationService`.

4. Start the service:

   ```bash
   mvn spring-boot:run
   ```

   > If you receive an error here, please double-check whether or not you have installed all the [prerequisites]({{ site.baseurl }}{% link {{include.linkToPrerequisites}} %}) for the workshop!

## Step 3. Run the FineCollection service

1. Make sure the VehicleRegistrationService is running (result of step 2).

1. Open a **new** terminal window.

1. Make sure the current folder is `FineCollectionService`.

1. Start the service:

   ```bash
   mvn spring-boot:run
   ```

## Step 4. Run the TrafficControl service

1. Make sure the VehicleRegistrationService and FineCollectionService are running (results of steps 2 and 3).

2. Open a **new** terminal window and make sure the current folder is `TrafficControlService`.

3. Start the service:

   ```bash
   mvn spring-boot:run
   ```

## Step 5. Run the simulation

Now you're going to run the simulation that simulates cars driving on the highway. The simulation will simulate 3 entry and exit cameras (one for each lane).

1. Open a new terminal window and make sure the current folder is `Simulation`.

2. Start the simulation:

   ```bash
   mvn spring-boot:run
   ```

3. In the simulation window you should see something like this:

   ```bash
   2021-09-15 13:47:59.599 INFO 22875 --- [ main] dapr.simulation.SimulationApplication : Started SimulationApplication in 0.98 seconds (JVM running for 1.289)
   2021-09-15 13:47:59.603 INFO 22875 --- [pool-1-thread-2] dapr.simulation.Simulation : Start camera simulation for lane 1
   2021-09-15 13:47:59.603 INFO 22875 --- [pool-1-thread-1] dapr.simulation.Simulation : Start camera simulation for lane 0
   2021-09-15 13:47:59.603 INFO 22875 --- [pool-1-thread-3] dapr.simulation.Simulation : Start camera simulation for lane 2
   2021-09-15 13:47:59.679 INFO 22875 --- [pool-1-thread-2] dapr.simulation.Simulation : Simulated ENTRY of vehicle with license number 77-ZK-59 in lane 1
   2021-09-15 13:47:59.869 INFO 22875 --- [pool-1-thread-3] dapr.simulation.Simulation : Simulated ENTRY of vehicle with license number LF-613-D in lane 2
   2021-09-15 13:48:00.852 INFO 22875 --- [pool-1-thread-1] dapr.simulation.Simulation : Simulated ENTRY of vehicle with license number 12-LZ-KS in lane 0
   2021-09-15 13:48:04.797 INFO 22875 --- [pool-1-thread-2] dapr.simulation.Simulation : Simulated EXIT of vehicle with license number 77-ZK-59 in lane 0
   2021-09-15 13:48:04.894 INFO 22875 --- [pool-1-thread-3] dapr.simulation.Simulation : Simulated EXIT of vehicle with license number LF-613-D in lane 0
   ```

4. Also check the logging in all the other terminal windows. You should see all entry and exit events, and any speeding violations that were detected, in the logs.

Now you know the application runs correctly. It's time to start adding Dapr to the application.

## Next assignment

Make sure you stop all running processes and close all the terminal windows before proceeding to the next assignment. Stop a service or the simulation by pressing `Ctrl-C` in its terminal window.
@@ -0,0 +1,237 @@
In this assignment, you're going to replace the direct Spring Kafka producer and consumer implementation with Dapr **publish/subscribe** messaging to send messages from the TrafficControlService to the FineCollectionService.

With the Dapr pub/sub building block, you use a *topic* to send and receive messages. The producer sends messages to the topic and one or more consumers subscribe to this topic to receive those messages. First you are going to prepare the TrafficControlService so it can send messages using Dapr pub/sub.

Dapr provides two methods by which you can subscribe to topics:

* **Declaratively**, where subscriptions are defined in an external file.
* **Programmatically**, where subscriptions are defined in user code, using language-specific SDKs.

This example demonstrates a **programmatic** approach using Dapr's Java SDK; a declarative subscription is sketched below for comparison.
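
For comparison, a declarative subscription is just a small YAML file placed alongside the other Dapr components. The sketch below is illustrative only (the subscription name is an assumption) and is not used in this workshop:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
  name: speeding-violation-subscription   # illustrative name
spec:
  pubsubname: pubsub
  topic: test
  route: /collectfine
scopes:
- finecollectionservice
```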

If you want more detailed information, read the [overview of this building block](https://docs.dapr.io/developing-applications/building-blocks/pubsub/pubsub-overview/) in the Dapr documentation.

To complete this assignment, you must reach the following goals:

1. The TrafficControlService sends `SpeedingViolation` messages using the Dapr pub/sub building block.
2. The FineCollectionService receives `SpeedingViolation` messages using the Dapr pub/sub building block.
3. Kafka is used as the pub/sub message broker and runs as part of the solution, either in a Docker container or directly on your laptop.

## Instructions

1. Open the file `dapr/kafka-pubsub.yaml` in your code editor.

1. Inspect this file. As you can see, it specifies the type of the message broker to use (`pubsub.kafka`) and, in the `metadata` section, the information on how to connect to the Kafka server you started in step 1 (running on localhost on port `9092`).

   ```yaml
   apiVersion: dapr.io/v1alpha1
   kind: Component
   metadata:
     name: pubsub
     namespace: default
   spec:
     type: pubsub.kafka
     version: v1
     metadata:
     - name: brokers # Required. Kafka broker connection setting
       value: "localhost:9092"
     - name: consumerGroup # Optional. Used for input bindings.
       value: "test"
     - name: clientID # Optional. Used as client tracing ID by Kafka brokers.
       value: "my-dapr-app-id"
     - name: authType # Required.
     - name: authRequired
       value: "false"
     - name: maxMessageBytes # Optional.
       value: 1024
     - name: consumeRetryInterval # Optional.
       value: 200ms
     - name: version # Optional.
       value: 0.10.2.0
     - name: disableTls # Optional. Disable TLS. This is not safe for production!! You should read the `Mutual TLS` section for how to use TLS.
       value: "true"
   scopes:
   - trafficcontrolservice
   - finecollectionservice
   ```

   In the `scopes` section, you specify that only the TrafficControlService and FineCollectionService should use the pub/sub building block.

1. **Copy or Move** this file `dapr/kafka-pubsub.yaml` to the `dapr/components/` folder (when starting Dapr applications from the command line, you specify a folder such as `dapr/components/` where the Dapr component definitions are located). From the root folder, run the following commands:

   ```bash
   mkdir dapr/components
   cp dapr/kafka-pubsub.yaml dapr/components/
   ```

## Step 1: Publish messages in the TrafficControlService

1. Open the file **TrafficControlService/src/main/java/dapr/traffic/fines/DaprFineCollectionClient.java** in your code editor and inspect it.

2. It implements the `FineCollectionClient` interface.

   ```java
   public class DaprFineCollectionClient implements FineCollectionClient {
       private final DaprClient daprClient;

       public DaprFineCollectionClient(final DaprClient daprClient) {
           this.daprClient = daprClient;
       }

       @Override
       public void submitForFine(SpeedingViolation speedingViolation) {
           daprClient.publishEvent("pubsub", "test", speedingViolation).block();
       }
   }
   ```

3. Open the file `TrafficControlService/src/main/java/dapr/traffic/TrafficControlConfiguration.java` in your code editor.

   The default JSON serialization is not suitable for today's goal, so you need to customize the Jackson `ObjectMapper` that it uses. You do so by adding a static inner class to configure the JSON serialization:

   ```java
   static class JsonObjectSerializer extends DefaultObjectSerializer {
       public JsonObjectSerializer() {
           OBJECT_MAPPER.registerModule(new JavaTimeModule());
           OBJECT_MAPPER.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false);
       }
   }
   ```

4. **Comment out** the following `@Bean` method:

   ```java
   @Bean
   public FineCollectionClient fineCollectionClient() {
       return new KafkaFineCollectionClient();
   }
   ```

5. **Uncomment** the following `@Bean` method:

   ```java
   // @Bean
   // public FineCollectionClient fineCollectionClient(final DaprClient daprClient) {
   //     return new DaprFineCollectionClient(daprClient);
   // }
   ```

6. **Uncomment** the following `@Bean` method:

   ```java
   // @Bean
   // public DaprClient daprClient() {
   //     return new DaprClientBuilder()
   //             .withObjectSerializer(new JsonObjectSerializer())
   //             .build();
   // }
   ```

7. Check that all your code changes are correct by building the code. Execute the following command in the terminal window:

   ```bash
   mvn package
   ```

## Step 2: Receive messages in the FineCollectionService

Whenever a message is published to the `test` topic, Dapr delivers it to your service by calling the `POST` endpoint `/collectfine`. You will implement this endpoint and annotate it so that it is registered as the subscription for the `test` topic; see the example message envelope below.
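
Dapr wraps published messages in a CloudEvents envelope before delivering them to subscribers, which is why the endpoint below receives a `CloudEvent<SpeedingViolation>` and unwraps it with `event.getData()`. A delivered message looks roughly like the sketch below; the field values, and the names inside `data`, are placeholders rather than the actual `SpeedingViolation` schema:

```json
{
  "id": "8a6ac7a2-0000-0000-0000-000000000000",
  "specversion": "1.0",
  "type": "com.dapr.event.sent",
  "source": "trafficcontrolservice",
  "pubsubname": "pubsub",
  "topic": "test",
  "datacontenttype": "application/json",
  "data": {
    "licenseNumber": "RT-318-K",
    "excessSpeed": 15
  }
}
```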

1. Open the file `FineCollectionService/src/main/java/dapr/fines/violation/ViolationController.java` in your code editor.

2. Uncomment the code line below:

   ```java
   //@RestController
   ```

3. Uncomment the code snippet below:

   ```java
   // @PostMapping(path = "/collectfine")
   // @Topic(name = "test", pubsubName = "pubsub")
   // public ResponseEntity<Void> registerViolation(@RequestBody final CloudEvent<SpeedingViolation> event) {
   //     var violation = event.getData();
   //     violationProcessor.processSpeedingViolation(violation);
   //     return ResponseEntity.ok().build();
   // }
   ```

4. Open the file `FineCollectionService/src/main/java/dapr/fines/violation/KafkaViolationConsumer.java` in your code editor.

5. Comment out the `@KafkaListener` annotation line:

   ```java
   @KafkaListener(topics = "test", groupId = "test", containerFactory = "kafkaListenerContainerFactory")
   ```

6. Check that all your code changes are correct by building the code. Execute the following command in the terminal window:

   ```bash
   mvn package
   ```

Now you can test the application.

## Step 3: Test the application

You're going to start all the services now.

1. Make sure no services from previous tests are running (close the command-shell windows).

1. Open a terminal window and make sure the current folder is `VehicleRegistrationService`.

1. Enter the following command to run the VehicleRegistrationService:

   ```bash
   mvn spring-boot:run
   ```

1. Open a **new** terminal window and change the current folder to `FineCollectionService`.

1. Enter the following command to run the FineCollectionService with a Dapr sidecar:

   Ensure you have run the `dapr init` command before running the command below.

   ```bash
   dapr run --app-id finecollectionservice --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 --components-path ../dapr/components mvn spring-boot:run
   ```

1. Open a **new** terminal window and change the current folder to `TrafficControlService`.

1. Enter the following command to run the TrafficControlService with a Dapr sidecar:

   ```bash
   dapr run --app-id trafficcontrolservice --app-port 6000 --dapr-http-port 3600 --dapr-grpc-port 60000 --components-path ../dapr/components mvn spring-boot:run
   ```

1. Open a **new** terminal window and change the current folder to `Simulation`.

1. Start the simulation:

   ```bash
   mvn spring-boot:run
   ```

You should see the same logs as in **Assignment 1**. Obviously, the behavior of the application is exactly the same as before.

## Step 4: Debug Dapr applications in Eclipse

The steps below are tailored to debugging the TrafficControlService, but they are the same for debugging any Dapr application in Eclipse.

1. Click `Run > External Tools > External Tools Configurations...`.
2. Click the `New Launch Configuration` icon and set:
   * Name = trafficcontrolservice-dapr-debug
   * Location = c:\dapr\dapr.exe
   * Working Directory = ${workspace_loc:/TrafficControlService}
   * Arguments = run --app-id trafficcontrolservice --app-port 6000 --dapr-http-port 3600 --dapr-grpc-port 60000 --components-path ../dapr/components

   ![Eclipse External Tools Configuration]({{include.relativeAssetsPath}}images/eclipse-external-tools-configurations.png)

3. Apply.
4. Run.
5. Set breakpoints in your code as you normally would in Eclipse.
6. From the `Debug` menu, start the application either as a `Java Application` or as a `Spring Boot App`.
@@ -0,0 +1,132 @@
Stop the Simulation, TrafficControlService, FineCollectionService, and VehicleRegistrationService by pressing `Ctrl-C` in the respective terminal windows.

## Step 1: Create Azure Service Bus

In this assignment, you will use Azure Service Bus as the message broker with the Dapr pub/sub building block. You're going to create an Azure Service Bus namespace and a topic in it. To be able to do this, you need to have an Azure subscription. If you don't have one, you can create a free account at [https://azure.microsoft.com/free/](https://azure.microsoft.com/free/).

1. Login to Azure:

   ```bash
   az login
   ```

1. Create a resource group:

   ```bash
   az group create --name rg-dapr-workshop-java --location eastus
   ```

   A [resource group](https://learn.microsoft.com/azure/azure-resource-manager/management/manage-resource-groups-portal) is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. In this workshop, all the databases, all the microservices, etc. will be grouped into a single resource group.

1. An [Azure Service Bus](https://learn.microsoft.com/en-us/azure/service-bus-messaging/) namespace is a logical container for topics, queues, and subscriptions. This namespace needs to be globally unique. Use the following command to generate a unique name:

   - Linux/Unix shell:

     ```bash
     UNIQUE_IDENTIFIER=$(LC_ALL=C tr -dc a-z0-9 </dev/urandom | head -c 5)
     SERVICE_BUS="sb-dapr-workshop-java-$UNIQUE_IDENTIFIER"
     echo $SERVICE_BUS
     ```

   - Powershell:

     ```powershell
     $ACCEPTED_CHAR = [Char[]]'abcdefghijklmnopqrstuvwxyz0123456789'
     $UNIQUE_IDENTIFIER = (Get-Random -Count 5 -InputObject $ACCEPTED_CHAR) -join ''
     $SERVICE_BUS = "sb-dapr-workshop-java-$UNIQUE_IDENTIFIER"
     $SERVICE_BUS
     ```

1. Create a Service Bus messaging namespace:

   ```bash
   az servicebus namespace create --resource-group rg-dapr-workshop-java --name $SERVICE_BUS --location eastus
   ```

1. Create a Service Bus topic:

   ```bash
   az servicebus topic create --resource-group rg-dapr-workshop-java --namespace-name $SERVICE_BUS --name test
   ```

1. Create an authorization rule for the Service Bus topic:

   ```bash
   az servicebus topic authorization-rule create --resource-group rg-dapr-workshop-java --namespace-name $SERVICE_BUS --topic-name test --name DaprWorkshopJavaAuthRule --rights Manage Send Listen
   ```

1. Get the connection string for the Service Bus topic and copy it to the clipboard:

   ```bash
   az servicebus topic authorization-rule keys list --resource-group rg-dapr-workshop-java --namespace-name $SERVICE_BUS --topic-name test --name DaprWorkshopJavaAuthRule --query primaryConnectionString --output tsv
   ```

## Step 2: Configure the pub/sub component

1. Open the file `dapr/azure-servicebus-pubsub.yaml` in your code editor.

   ```yaml
   apiVersion: dapr.io/v1alpha1
   kind: Component
   metadata:
     name: pubsub
   spec:
     type: pubsub.azure.servicebus
     version: v1
     metadata:
     - name: connectionString # Required when not using Azure Authentication.
       value: "Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}"
   scopes:
   - trafficcontrolservice
   - finecollectionservice
   ```

   As you can see, you specify a different type of pub/sub component (`pubsub.azure.servicebus`), and in the `metadata` section you specify how to connect to the Azure Service Bus namespace created in step 1. For this workshop, you are going to use the connection string you copied in the previous step. You can also configure the component to use Azure Active Directory authentication. For more information, see [Azure Service Bus pub/sub component](https://docs.dapr.io/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics/).

   In the `scopes` section, you specify that only the `TrafficControlService` and `FineCollectionService` should use the pub/sub building block. To learn more about scopes, see [Application access to components with scopes](https://docs.dapr.io/operations/components/component-scopes/#application-access-to-components-with-scopes).

1. **Copy or Move** this file `dapr/azure-servicebus-pubsub.yaml` to the `dapr/components` folder.

1. **Replace** the `connectionString` value with the value you copied from the clipboard.

1. **Move** the files `dapr/components/kafka-pubsub.yaml` and `dapr/components/rabbit-pubsub.yaml` back to the `dapr/` folder if they are present in the components folder.

## Step 3: Test the application

You're going to start all the services now.

1. Make sure no services from previous tests are running (close the command-shell windows).

1. Open a terminal window and make sure the current folder is `VehicleRegistrationService`.

1. Enter the following command to run the VehicleRegistrationService:

   ```bash
   mvn spring-boot:run
   ```

1. Open a terminal window and change the current folder to `FineCollectionService`.

1. Enter the following command to run the FineCollectionService with a Dapr sidecar:

   ```bash
   dapr run --app-id finecollectionservice --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 --components-path ../dapr/components mvn spring-boot:run
   ```

1. Open a terminal window and change the current folder to `TrafficControlService`.

1. Enter the following command to run the TrafficControlService with a Dapr sidecar:

   ```bash
   dapr run --app-id trafficcontrolservice --app-port 6000 --dapr-http-port 3600 --dapr-grpc-port 60000 --components-path ../dapr/components mvn spring-boot:run
   ```

1. Open a terminal window and change the current folder to `Simulation`.

1. Start the simulation:

   ```bash
   mvn spring-boot:run
   ```

You should see the same logs as before and, obviously, the behavior of the application is exactly the same. But now, instead of being published and consumed via a Kafka topic, the messages are processed through Azure Service Bus.
@@ -0,0 +1,23 @@
1. Create an Application Insights resource:

   ```bash
   az monitor app-insights component create --app cae-dapr-workshop-java --location eastus --kind web -g rg-dapr-workshop-java --application-type web
   ```

   You may receive a message asking you to install the `application-insights` extension; if so, please install the extension for this exercise, as shown below.
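
   If the Azure CLI prompts you but does not install the extension automatically, you can add it yourself with the standard extension-management command (it only needs to be run once):

   ```bash
   az extension add --name application-insights
   ```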

1. Get the instrumentation key for the Application Insights resource and set it to the `INSTRUMENTATION_KEY` variable:

   - Linux/Unix shell:

     ```bash
     INSTRUMENTATION_KEY=$(az monitor app-insights component show --app cae-dapr-workshop-java -g rg-dapr-workshop-java --query instrumentationKey)
     echo $INSTRUMENTATION_KEY
     ```

   - PowerShell:

     ```powershell
     $INSTRUMENTATION_KEY = az monitor app-insights component show --app cae-dapr-workshop-java -g rg-dapr-workshop-java --query instrumentationKey
     $INSTRUMENTATION_KEY
     ```
@@ -0,0 +1,42 @@
<!-- Requires 'include.showObservability', which sets whether the Dapr telemetry
instructions are displayed or not -->
A [container apps environment](https://learn.microsoft.com/en-us/azure/container-apps/environment) acts as a secure boundary around our container apps. Containers deployed in the same environment use the same virtual network and write their logs to the same logging destination, in our case a Log Analytics workspace.

{% if include.showObservability %}

To create the container apps environment with Dapr service-to-service telemetry, you need to set the `--dapr-instrumentation-key` parameter to the Application Insights instrumentation key. Use the following command to create the container apps environment:

```bash
az containerapp env create \
  --resource-group rg-dapr-workshop-java \
  --location eastus \
  --name cae-dapr-workshop-java \
  --logs-workspace-id "$LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID" \
  --logs-workspace-key "$LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET" \
  --dapr-instrumentation-key "$INSTRUMENTATION_KEY"
```

{% else %}

{: .important-title }
> Dapr Telemetry
>
> If you want to enable Dapr telemetry, you need to create the container apps environment with Application Insights. You can follow these instructions instead of the instructions below: [(Optional) Observability with Dapr using Application Insights]({{ site.baseurl }}{% link modules/05-assignment-5-aks-aca/02-aca/2-observability.md %})
>

Create the container apps environment with the following command:

```bash
az containerapp env create \
  --resource-group rg-dapr-workshop-java \
  --location eastus \
  --name cae-dapr-workshop-java \
  --logs-workspace-id "$LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID" \
  --logs-workspace-key "$LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET"
```

{% endif %}

{: .note }
> Some Azure CLI commands can take a while to execute. Don't hesitate to look ahead at the next assignments and steps to see what you will have to do, then come back to this one when the command is done and execute the next one.
>
@@ -0,0 +1,114 @@
<!-- Requires 'stepNumber' as input: the number of the first step of this include.
Returns the number of the last step in this include -->
## Step {{stepNumber}} - Run the simulation

1. Set the following environment variable:

   - Linux/Unix shell:

     ```bash
     export TRAFFIC_CONTROL_SERVICE_BASE_URL=https://$TRAFFIC_CONTROL_SERVICE_FQDN
     ```

   - Powershell:

     ```powershell
     $env:TRAFFIC_CONTROL_SERVICE_BASE_URL = "https://$TRAFFIC_CONTROL_SERVICE_FQDN"
     ```

1. In the root folder of the simulation (`Simulation`), start the simulation:

   ```bash
   mvn spring-boot:run
   ```

{% assign stepNumber = stepNumber | plus: 1 %}
## Step {{stepNumber}} - Test the microservices running in ACA

You can access the logs of the container apps from the [Azure Portal](https://portal.azure.com/) or directly in a terminal window. The following steps show how to access the logs from the terminal window for each microservice.

{: .note }
> The logs can take a few minutes to appear in the Log Analytics Workspace. If the logs are not updated, open the log stream in the Azure Portal or stream them from the CLI as shown below.
>
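
As an alternative to querying Log Analytics, you can stream the console logs of a container app directly, for example for the traffic control service. The `az containerapp logs show` command is part of the same `containerapp` CLI extension used elsewhere in this module; if the flags differ in your CLI version, check `az containerapp logs show --help`.

```bash
# Stream the console logs of the traffic control service container app
az containerapp logs show \
  --name ca-traffic-control-service \
  --resource-group rg-dapr-workshop-java \
  --follow
```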

### Traffic Control Service

1. Run the following command to identify the running revision of the traffic control service container app:

   - Linux/Unix shell:

     ```bash
     TRAFFIC_CONTROL_SERVICE_REVISION=$(az containerapp revision list -n ca-traffic-control-service -g rg-dapr-workshop-java --query "[0].name" -o tsv)
     echo $TRAFFIC_CONTROL_SERVICE_REVISION
     ```

   - Powershell:

     ```powershell
     $TRAFFIC_CONTROL_SERVICE_REVISION = az containerapp revision list -n ca-traffic-control-service -g rg-dapr-workshop-java --query "[0].name" -o tsv
     $TRAFFIC_CONTROL_SERVICE_REVISION
     ```

2. Run the following command to get the last 10 lines of the traffic control service logs from the Log Analytics Workspace:

   ```bash
   az monitor log-analytics query \
     --workspace $LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID \
     --analytics-query "ContainerAppConsoleLogs_CL | where RevisionName_s == '$TRAFFIC_CONTROL_SERVICE_REVISION' | project TimeGenerated, Log_s | sort by TimeGenerated desc | take 10" \
     --out table
   ```

### Fine Collection Service

1. Run the following command to identify the running revision of the fine collection service container app:

   - Linux/Unix shell:

     ```bash
     FINE_COLLECTION_SERVICE_REVISION=$(az containerapp revision list -n ca-fine-collection-service -g rg-dapr-workshop-java --query "[0].name" -o tsv)
     echo $FINE_COLLECTION_SERVICE_REVISION
     ```

   - Powershell:

     ```powershell
     $FINE_COLLECTION_SERVICE_REVISION = az containerapp revision list -n ca-fine-collection-service -g rg-dapr-workshop-java --query "[0].name" -o tsv
     $FINE_COLLECTION_SERVICE_REVISION
     ```

2. Run the following command to get the last 10 lines of the fine collection service logs from the Log Analytics Workspace:

   ```bash
   az monitor log-analytics query \
     --workspace $LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID \
     --analytics-query "ContainerAppConsoleLogs_CL | where RevisionName_s == '$FINE_COLLECTION_SERVICE_REVISION' | project TimeGenerated, Log_s | sort by TimeGenerated desc | take 10" \
     --out table
   ```

### Vehicle Registration Service

1. Run the following command to identify the running revision of the vehicle registration service container app:

   - Linux/Unix shell:

     ```bash
     VEHICLE_REGISTRATION_SERVICE_REVISION=$(az containerapp revision list -n ca-vehicle-registration-service -g rg-dapr-workshop-java --query "[0].name" -o tsv)
     echo $VEHICLE_REGISTRATION_SERVICE_REVISION
     ```

   - Powershell:

     ```powershell
     $VEHICLE_REGISTRATION_SERVICE_REVISION = az containerapp revision list -n ca-vehicle-registration-service -g rg-dapr-workshop-java --query "[0].name" -o tsv
     $VEHICLE_REGISTRATION_SERVICE_REVISION
     ```

2. Run the following command to get the last 10 lines of the vehicle registration service logs from the Log Analytics Workspace:

   ```bash
   az monitor log-analytics query \
     --workspace $LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID \
     --analytics-query "ContainerAppConsoleLogs_CL | where RevisionName_s == '$VEHICLE_REGISTRATION_SERVICE_REVISION' | project TimeGenerated, Log_s | sort by TimeGenerated desc | take 10" \
     --out table
   ```
@@ -0,0 +1,169 @@
Now, let's create the infrastructure for our application, so you can later deploy the microservices to [Azure Container Apps](https://learn.microsoft.com/en-us/azure/container-apps/).

<!-- ----------------------------------------------------------------------- -->
<!--                          LOG ANALYTICS WORKSPACE                         -->
<!-- ----------------------------------------------------------------------- -->

### Log Analytics Workspace

[Log Analytics workspace](https://learn.microsoft.com/en-us/azure/azure-monitor/logs/log-analytics-workspace-overview) is the environment for Azure Monitor log data. Each workspace has its own data repository and configuration, and data sources and solutions are configured to store their data in a particular workspace. You will use the same workspace for most of the Azure resources you will be creating.

1. Create a Log Analytics workspace with the following command:

   ```bash
   az monitor log-analytics workspace create \
     --resource-group rg-dapr-workshop-java \
     --location eastus \
     --workspace-name log-dapr-workshop-java
   ```

1. Retrieve the Log Analytics client ID and client secret and store them in environment variables:

   - Linux/Unix shell:

     ```bash
     LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID=$(
       az monitor log-analytics workspace show \
         --resource-group rg-dapr-workshop-java \
         --workspace-name log-dapr-workshop-java \
         --query customerId \
         --output tsv | tr -d '[:space:]'
     )
     echo "LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID=$LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID"

     LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET=$(
       az monitor log-analytics workspace get-shared-keys \
         --resource-group rg-dapr-workshop-java \
         --workspace-name log-dapr-workshop-java \
         --query primarySharedKey \
         --output tsv | tr -d '[:space:]'
     )
     echo "LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET=$LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET"
     ```

   - Powershell:

     ```powershell
     $LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID="$(
       az monitor log-analytics workspace show `
         --resource-group rg-dapr-workshop-java `
         --workspace-name log-dapr-workshop-java `
         --query customerId `
         --output tsv
     )"
     Write-Output "LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID=$LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID"

     $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET="$(
       az monitor log-analytics workspace get-shared-keys `
         --resource-group rg-dapr-workshop-java `
         --workspace-name log-dapr-workshop-java `
         --query primarySharedKey `
         --output tsv
     )"
     Write-Output "LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET=$LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET"
     ```

<!-- ----------------------------------------------------------------------- -->
<!--                           APPLICATION INSIGHTS                           -->
<!-- ----------------------------------------------------------------------- -->

<!-- If observability is shown, Application Insights is required -->
{% if include.showObservability %}

### Application Insights

[Application Insights](https://learn.microsoft.com/en-us/azure/azure-monitor/app/app-insights-overview?tabs=java) is used to enable Dapr service-to-service telemetry. The telemetry is used to visualize the microservices communication in the Application Insights `Application Map`. When [creating the Azure Container Apps environment](https://learn.microsoft.com/en-us/cli/azure/containerapp/env?view=azure-cli-latest#az-containerapp-env-create), you can set the Application Insights instrumentation key that is used by Dapr to export service-to-service telemetry to Application Insights.

{% include 05-assignment-5-aks-aca/02-aca/0-1-setup-application-insights.md %}

{% endif %}

<!-- ----------------------------------------------------------------------- -->
<!--                             CONTAINER REGISTRY                           -->
<!-- ----------------------------------------------------------------------- -->

### Azure Container Registry

Later, you will be creating Docker containers and pushing them to the Azure Container Registry.

1. [Azure Container Registry](https://learn.microsoft.com/en-us/azure/container-registry/) is a private registry for hosting container images. Using the Azure Container Registry, you can store Docker images for all types of container deployments. This registry needs to be globally unique. Use the following command to generate a unique name:

   - Linux/Unix shell:

     ```bash
     UNIQUE_IDENTIFIER=$(LC_ALL=C tr -dc a-z0-9 </dev/urandom | head -c 5)
     CONTAINER_REGISTRY="crdaprworkshopjava$UNIQUE_IDENTIFIER"
     echo $CONTAINER_REGISTRY
     ```

   - Powershell:

     ```powershell
     $ACCEPTED_CHAR = [Char[]]'abcdefghijklmnopqrstuvwxyz0123456789'
     $UNIQUE_IDENTIFIER = (Get-Random -Count 5 -InputObject $ACCEPTED_CHAR) -join ''
     $CONTAINER_REGISTRY = "crdaprworkshopjava$UNIQUE_IDENTIFIER"
     $CONTAINER_REGISTRY
     ```

1. Create an Azure Container Registry with the following command:

   ```bash
   az acr create \
     --resource-group rg-dapr-workshop-java \
     --location eastus \
     --name "$CONTAINER_REGISTRY" \
     --workspace log-dapr-workshop-java \
     --sku Standard \
     --admin-enabled true
   ```

   Notice that you created the registry with admin rights (`--admin-enabled true`), which is not suited for real production use but works well for this workshop.

1. Update the registry to allow anonymous users to pull the images:

   ```bash
   az acr update \
     --resource-group rg-dapr-workshop-java \
     --name "$CONTAINER_REGISTRY" \
     --anonymous-pull-enabled true
   ```

   This can be handy if you want other attendees of the workshop to use your registry, but it is not suitable for production.

1. Get the URL of the Azure Container Registry and set it to the `CONTAINER_REGISTRY_URL` variable with the following command:

   - Linux/Unix shell:

     ```bash
     CONTAINER_REGISTRY_URL=$(
       az acr show \
         --resource-group rg-dapr-workshop-java \
         --name "$CONTAINER_REGISTRY" \
         --query "loginServer" \
         --output tsv
     )

     echo "CONTAINER_REGISTRY_URL=$CONTAINER_REGISTRY_URL"
     ```

   - Powershell:

     ```powershell
     $CONTAINER_REGISTRY_URL="$(
       az acr show `
         --resource-group rg-dapr-workshop-java `
         --name "$CONTAINER_REGISTRY" `
         --query "loginServer" `
         --output tsv
     )"

     Write-Output "CONTAINER_REGISTRY_URL=$CONTAINER_REGISTRY_URL"
     ```

<!-- ----------------------------------------------------------------------- -->
<!--                         CONTAINER APPS ENVIRONMENT                       -->
<!-- ----------------------------------------------------------------------- -->

### Azure Container Apps Environment

{% include 05-assignment-5-aks-aca/02-aca/0-2-setup-container-apps-env.md %}
@@ -0,0 +1,32 @@
In [Assignment 3 - Using Dapr for pub/sub with Azure Service Bus]({{ site.baseurl }}{% link {{include.linkToAssignment3}} %}), you copied the file `dapr/azure-servicebus-pubsub.yaml` to the `dapr/components` folder and updated the `connectionString` value. This file was used to deploy the `pubsub` Dapr component.

The [Dapr component schema for Azure Container Apps](https://learn.microsoft.com/en-us/azure/container-apps/dapr-overview?tabs=bicep1%2Cyaml#component-schema) is different from the standard Dapr component yaml schema: it has been slightly simplified. Hence the need for a new component yaml file.

1. Open the file `dapr/aca-azure-servicebus-pubsub.yaml` in your code editor.

   ```yaml
   componentType: pubsub.azure.servicebus
   version: v1
   metadata:
   - name: connectionString
     value: "Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}"
   scopes:
   - traffic-control-service
   - fine-collection-service
   ```

2. **Copy or Move** this file `dapr/aca-azure-servicebus-pubsub.yaml` to the `dapr/components` folder.

3. **Replace** the `connectionString` value with the value you set in `dapr/components/azure-servicebus-pubsub.yaml` in [Assignment 3 - Using Dapr for pub/sub with Azure Service Bus]({{ site.baseurl }}{% link {{include.linkToAssignment3}} %}).

4. Go to the root folder of the repository.

5. Enter the following command to deploy the `pubsub` Dapr component:

   ```bash
   az containerapp env dapr-component set \
     --name cae-dapr-workshop-java \
     --resource-group rg-dapr-workshop-java \
     --dapr-component-name pubsub \
     --yaml ./dapr/components/aca-azure-servicebus-pubsub.yaml
   ```
@@ -0,0 +1,150 @@
<!-- Requires 'stepNumber' as input: the number of the first step of this include.
Returns the number of the last step in this include -->
## Step {{stepNumber}} - Generate Docker images for the applications, and push them to ACR

Since you don't have any container images ready yet, we'll build and push container images to Azure Container Registry (ACR) to get things running.

1. Login to your ACR repository:

   ```bash
   az acr login --name $CONTAINER_REGISTRY
   ```

2. In the root folder of the VehicleRegistrationService microservice, run the following commands:

   ```bash
   mvn spring-boot:build-image
   docker tag vehicle-registration-service:1.0-SNAPSHOT "$CONTAINER_REGISTRY.azurecr.io/vehicle-registration-service:1.0"
   docker push "$CONTAINER_REGISTRY.azurecr.io/vehicle-registration-service:1.0"
   ```

3. In the root folder of the FineCollectionService microservice, run the following commands:

   ```bash
   mvn spring-boot:build-image
   docker tag fine-collection-service:1.0-SNAPSHOT "$CONTAINER_REGISTRY.azurecr.io/fine-collection-service:1.0"
   docker push "$CONTAINER_REGISTRY.azurecr.io/fine-collection-service:1.0"
   ```

4. In the root folder of the TrafficControlService microservice, run the following commands:

   ```bash
   mvn spring-boot:build-image
   docker tag traffic-control-service:1.0-SNAPSHOT "$CONTAINER_REGISTRY.azurecr.io/traffic-control-service:1.0"
   docker push "$CONTAINER_REGISTRY.azurecr.io/traffic-control-service:1.0"
   ```

{% assign stepNumber = stepNumber | plus: 1 %}
## Step {{stepNumber}} - Deploy the Container Apps

Now that you have created the container apps environment and pushed the images, you can create the container apps. A container app is a containerized application that is deployed to a container apps environment.

You will create three container apps, one for each of our Java services: `TrafficControlService`, `FineCollectionService` and `VehicleRegistrationService`.

1. Create a Container App for `VehicleRegistrationService` with the following command:

   ```bash
   az containerapp create \
     --name ca-vehicle-registration-service \
     --resource-group rg-dapr-workshop-java \
     --environment cae-dapr-workshop-java \
     --image "$CONTAINER_REGISTRY_URL/vehicle-registration-service:1.0" \
     --target-port 6002 \
     --ingress internal \
     --min-replicas 1 \
     --max-replicas 1
   ```

   Notice that internal ingress is enabled. This is because we want to provide access to the service only from within the container apps environment. FineCollectionService will be able to access the VehicleRegistrationService using the internal ingress FQDN.

1. Get the FQDN of `VehicleRegistrationService` and save it in a variable:

   - Linux/Unix shell:

     ```bash
     VEHICLE_REGISTRATION_SERVICE_FQDN=$(az containerapp show \
       --name ca-vehicle-registration-service \
       --resource-group rg-dapr-workshop-java \
       --query "properties.configuration.ingress.fqdn" \
       -o tsv)
     echo $VEHICLE_REGISTRATION_SERVICE_FQDN
     ```

   - Powershell:

     ```powershell
     $VEHICLE_REGISTRATION_SERVICE_FQDN = az containerapp show `
       --name ca-vehicle-registration-service `
       --resource-group rg-dapr-workshop-java `
       --query "properties.configuration.ingress.fqdn" `
       -o tsv
     $VEHICLE_REGISTRATION_SERVICE_FQDN
     ```

   Notice that the FQDN is in the format `<service-name>.internal.<unique-name>.<region>.azurecontainerapps.io`, where `internal` indicates that the service is only accessible from within the container apps environment, i.e. exposed with internal ingress.

1. Create a Container App for `FineCollectionService` with the following command:

   ```bash
   az containerapp create \
     --name ca-fine-collection-service \
     --resource-group rg-dapr-workshop-java \
     --environment cae-dapr-workshop-java \
     --image "$CONTAINER_REGISTRY_URL/fine-collection-service:1.0" \
     --min-replicas 1 \
     --max-replicas 1 \
     --enable-dapr \
     --dapr-app-id fine-collection-service \
     --dapr-app-port 6001 \
     --dapr-app-protocol http \
     --env-vars "VEHICLE_REGISTRATION_SERVICE_BASE_URL=https://$VEHICLE_REGISTRATION_SERVICE_FQDN"
   ```

1. Create a Container App for `TrafficControlService` with the following command:

   ```bash
   az containerapp create \
     --name ca-traffic-control-service \
     --resource-group rg-dapr-workshop-java \
     --environment cae-dapr-workshop-java \
     --image "$CONTAINER_REGISTRY_URL/traffic-control-service:1.0" \
     --target-port 6000 \
     --ingress external \
     --min-replicas 1 \
     --max-replicas 1 \
     --enable-dapr \
     --dapr-app-id traffic-control-service \
     --dapr-app-port 6000 \
     --dapr-app-protocol http
   ```

1. Get the FQDN of the traffic control service and save it in a variable:

   - Linux/Unix shell:

     ```bash
     TRAFFIC_CONTROL_SERVICE_FQDN=$(az containerapp show \
       --name ca-traffic-control-service \
       --resource-group rg-dapr-workshop-java \
       --query "properties.configuration.ingress.fqdn" \
       -o tsv)
     echo $TRAFFIC_CONTROL_SERVICE_FQDN
     ```

   - Powershell:

     ```powershell
     $TRAFFIC_CONTROL_SERVICE_FQDN = $(az containerapp show `
       --name ca-traffic-control-service `
       --resource-group rg-dapr-workshop-java `
       --query "properties.configuration.ingress.fqdn" `
       -o tsv)
     $TRAFFIC_CONTROL_SERVICE_FQDN
     ```

   Notice that the FQDN is in the format `<service-name>.<unique-name>.<region>.azurecontainerapps.io`, where `internal` is not present. Indeed, the traffic control service is exposed with external ingress, i.e. it is accessible from outside the container apps environment. It will be used by the simulation to test the application.

<!-- -------------------------------- TEST --------------------------------- -->

{% assign stepNumber = stepNumber | plus: 1 %}
{% include 05-assignment-5-aks-aca/02-aca/0-3-test-application.md %}
@@ -0,0 +1,7 @@
## Step {{stepNumber}}: View the telemetry in Application Insights

1. Open the Application Insights resource in the [Azure portal](https://portal.azure.com/).

1. Go to `Application Map`; you should see a diagram like the one below.

   ![Dapr Telemetry]({{ include.relativeAssetsPath }}images/dapr-telemetry.png)
@@ -0,0 +1,42 @@
<!-- Requires 'stepNumber' as input: the number of the first step of this include.
Returns the number of the last step in this include -->
## Step {{stepNumber}}: Use Dapr to invoke the Vehicle Registration Service from the Fine Collection Service

With Dapr, services can invoke other services using their application id. This is done by using the Dapr client to make calls to the Dapr sidecar, as sketched below. The Vehicle Registration Service will be started with a Dapr sidecar.
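
The sketch below shows what such a call can look like with the Dapr Java SDK's `invokeMethod`. It is illustrative only: the method path, the `VehicleInfo` placeholder type and its fields are assumptions, not the workshop's actual `DaprVehicleRegistrationClient`, which you will inspect in the next step.

```java
import io.dapr.client.DaprClient;
import io.dapr.client.domain.HttpExtension;

public class VehicleRegistrationInvocationSketch {

    // Placeholder return type; the real project defines its own vehicle model.
    public record VehicleInfo(String make, String model) {}

    private final DaprClient daprClient;

    public VehicleRegistrationInvocationSketch(final DaprClient daprClient) {
        this.daprClient = daprClient;
    }

    public VehicleInfo getVehicleInfo(final String licenseNumber) {
        // The first argument is the Dapr app id of the target service; the sidecar
        // resolves it and forwards the HTTP GET to that service.
        return daprClient.invokeMethod(
                "vehicleregistrationservice",       // dapr-app-id of the target
                "vehicleinfo/" + licenseNumber,     // method (path) on the target service
                null,                               // no request body for a GET
                HttpExtension.GET,
                VehicleInfo.class)
            .block();
    }
}
```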

1. Open the `FineCollectionService` project in your code editor and navigate to the `DaprVehicleRegistrationClient` class. This class implements the `VehicleRegistrationClient` interface and uses the Dapr client to invoke the Vehicle Registration Service. Inspect the implementation of this class.

2. Navigate to the `FineCollectionConfiguration` class to switch between the default and the Dapr implementation of the `VehicleRegistrationClient`.

3. **Uncomment** the following `@Bean` method:

   ```java
   // @Bean
   // public VehicleRegistrationClient vehicleRegistrationClient(final DaprClient daprClient) {
   //     return new DaprVehicleRegistrationClient(daprClient);
   // }
   ```

4. **Uncomment** the following `@Bean` method, if not already done:

   ```java
   // @Bean
   // public DaprClient daprClient() {
   //     return new DaprClientBuilder().build();
   // }
   ```

5. **Comment out** the following `@Bean` method:

   ```java
   @Bean
   public VehicleRegistrationClient vehicleRegistrationClient(final RestTemplate restTemplate) {
       return new DefaultVehicleRegistrationClient(restTemplate, vehicleInformationAddress);
   }
   ```

6. Check that all your code changes are correct by building the code. Execute the following command in the terminal window:

   ```bash
   mvn package
   ```
@ -0,0 +1,67 @@
|
|||
<!-- Require 'stepNumber' as input: the number of the first step of this include.
|
||||
Return the number of the last step in this include -->
|
||||
## Step {{stepNumber}}: Enable Dapr for Vehicle Registration Service
|
||||
|
||||
In this step, you will enable Dapr for the `VehicleRegistrationService` to be discoverable by the `FineCollectionService` using Dapr's service invocation building block.
|
||||
|
||||
The `FineCollectionService` Dapr sidecar uses the `dapr-app-id` of the Vehicle Registration Service to resolve the service invocation endpoint. The name (i.e. the `dapr-app-id`) of `VehicleRegistrationService` is set in the application properties of `FineCollectionService` (i.e. `application.yaml`) as shown below:
|
||||
|
||||
```yaml
|
||||
vehicle-registration-service.name: ${VEHICLE_REGISTRATION_SERVICE:vehicleregistrationservice}
|
||||
```
|
||||
|
||||
The default value is `vehicleregistrationservice`; it will be overridden with the environment variable `VEHICLE_REGISTRATION_SERVICE`, set to the `dapr-app-id` defined in the following step:
|
||||
|
||||
1. Open a **new** terminal and run the following command to enable Dapr for `VehicleRegistrationService`:
|
||||
|
||||
```bash
|
||||
az containerapp dapr enable \
|
||||
--name ca-vehicle-registration-service \
|
||||
--resource-group rg-dapr-workshop-java \
|
||||
--dapr-app-id vehicle-registration-service \
|
||||
--dapr-app-port 6002 \
|
||||
--dapr-app-protocol http
|
||||
```
|
||||
|
||||
1. Note the `dapr-app-id` you set in the previous step. It is used to resolve the service invocation endpoint.
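
You can optionally verify the Dapr settings of the container app; the output should show the `appId`, `appPort`, and `enabled: true`:

```bash
# Optional check of the Dapr configuration applied to the container app
az containerapp show \
  --name ca-vehicle-registration-service \
  --resource-group rg-dapr-workshop-java \
  --query properties.configuration.dapr
```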
|
||||
|
||||
{% assign stepNumber = stepNumber | plus: 1 %}
|
||||
## Step {{stepNumber}}: Build and redeploy fine collection service
|
||||
|
||||
In this step, you will rebuild and redeploy the `FineCollectionService` to use the `VehicleRegistrationService` service invocation endpoint.
|
||||
|
||||
1. Delete the image from the local Docker cache:
|
||||
|
||||
```bash
|
||||
docker rmi fine-collection-service:1.0-SNAPSHOT
|
||||
```
|
||||
|
||||
1. In the root folder of `FineCollectionService`, run the following command to build and push the image:
|
||||
|
||||
```bash
|
||||
mvn spring-boot:build-image
|
||||
docker tag fine-collection-service:1.0-SNAPSHOT "$CONTAINER_REGISTRY.azurecr.io/fine-collection-service:2.0"
|
||||
docker push "$CONTAINER_REGISTRY.azurecr.io/fine-collection-service:2.0"
|
||||
```
|
||||
|
||||
Where `$CONTAINER_REGISTRY` is the name of your Azure Container Registry.
|
||||
|
||||
1. Update `FineCollectionService` container app with the new image and with the environment variable to set the name (i.e. `dapr-app-id`) of `VehicleRegistrationService`:
|
||||
|
||||
```bash
|
||||
az containerapp update \
|
||||
--name ca-fine-collection-service \
|
||||
--resource-group rg-dapr-workshop-java \
|
||||
--image "$CONTAINER_REGISTRY.azurecr.io/fine-collection-service:2.0" \
|
||||
--set-env-vars "VEHICLE_REGISTRATION_SERVICE=vehicle-registration-service" "VEHICLE_REGISTRATION_SERVICE_BASE_URL=not-used"
|
||||
```
|
||||
|
||||
Where `$CONTAINER_REGISTRY` is the name of your Azure Container Registry. `VEHICLE_REGISTRATION_SERVICE_BASE_URL` is set to `not-used` because the base URL is no longer used once Dapr service invocation is in place; if it were used, this FQDN would not exist and the call would fail.
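
To double-check that the new revision carries the expected environment variables, you can optionally inspect the container app template:

```bash
# Optional: list the environment variables of the deployed container
az containerapp show \
  --name ca-fine-collection-service \
  --resource-group rg-dapr-workshop-java \
  --query "properties.template.containers[0].env"
```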
|
||||
|
||||
|
||||
<!-- -------------------------------- TEST --------------------------------- -->
|
||||
|
||||
{% assign stepNumber = stepNumber | plus: 1 %}
|
||||
{% include 05-assignment-5-aks-aca/02-aca/0-3-test-application.md %}
|
||||
|
||||
Check Application Map of Application Insights in Azure Portal to see the connection between the `FineCollectionService` and the `VehicleRegistrationService`.
|
|
@ -0,0 +1,57 @@
|
|||
1. Open a terminal window.
|
||||
|
||||
1. An Azure Cosmos DB account for the SQL API is a globally distributed, multi-model database service. The account name needs to be globally unique. Use the following command to generate a unique name (an optional availability check is sketched after these commands):
|
||||
|
||||
- Linux/Unix shell:
|
||||
|
||||
```bash
|
||||
UNIQUE_IDENTIFIER=$(LC_ALL=C tr -dc a-z0-9 </dev/urandom | head -c 5)
|
||||
COSMOS_DB="cosno-dapr-workshop-java-$UNIQUE_IDENTIFIER"
|
||||
echo $COSMOS_DB
|
||||
```
|
||||
|
||||
- Powershell:
|
||||
|
||||
```powershell
|
||||
$ACCEPTED_CHAR = [Char[]]'abcdefghijklmnopqrstuvwxyz0123456789'
|
||||
$UNIQUE_IDENTIFIER = (Get-Random -Count 5 -InputObject $ACCEPTED_CHAR) -join ''
|
||||
$COSMOS_DB = "cosno-dapr-workshop-java-$UNIQUE_IDENTIFIER"
|
||||
$COSMOS_DB
|
||||
```
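
Optionally, you can check whether the generated name is already taken before creating the account; `false` means the name is available:

```bash
# Optional: returns "false" if the account name is still available
az cosmosdb check-name-exists --name $COSMOS_DB
```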
|
||||
|
||||
1. Create a Cosmos DB account for SQL API:
|
||||
|
||||
```bash
|
||||
az cosmosdb create --name $COSMOS_DB --resource-group rg-dapr-workshop-java --locations regionName=eastus failoverPriority=0 isZoneRedundant=False
|
||||
```
|
||||
|
||||
{: .important }
|
||||
> The name of the Cosmos DB account must be unique across all Azure Cosmos DB accounts in the world. If you get an error that the name is already taken, try a different name. In the following steps, please update the name of the Cosmos DB account accordingly.
|
||||
|
||||
1. Create a SQL API database:
|
||||
|
||||
```bash
|
||||
az cosmosdb sql database create --account-name $COSMOS_DB --resource-group rg-dapr-workshop-java --name dapr-workshop-java-database
|
||||
```
|
||||
|
||||
1. Create a SQL API container:
|
||||
|
||||
```bash
|
||||
az cosmosdb sql container create --account-name $COSMOS_DB --resource-group rg-dapr-workshop-java --database-name dapr-workshop-java-database --name vehicle-state --partition-key-path /partitionKey --throughput 400
|
||||
```
|
||||
|
||||
{: .important }
|
||||
> The partition key path is `/partitionKey`, as mentioned in the [Dapr documentation](https://docs.dapr.io/reference/components-reference/supported-state-stores/setup-azure-cosmosdb/#setup-azure-cosmosdb).
|
||||
>
|
||||
|
||||
1. Get the Cosmos DB account URL and note it down. You will need it in the next step and when deploying to Azure.
|
||||
|
||||
```bash
|
||||
az cosmosdb show --name $COSMOS_DB --resource-group rg-dapr-workshop-java --query documentEndpoint -o tsv
|
||||
```
|
||||
|
||||
1. Get the master key and note it down. You will need it in the next step and when deploying to Azure (a sketch that captures both values into variables follows this list).
|
||||
|
||||
```bash
|
||||
az cosmosdb keys list --name $COSMOS_DB --resource-group rg-dapr-workshop-java --type keys --query primaryMasterKey -o tsv
|
||||
```
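
As a convenience, you can capture the two values directly into environment variables so they are easy to paste into the component file later. This is an optional sketch; the variable names `COSMOS_DB_URL` and `COSMOS_DB_MASTER_KEY` are arbitrary:

```bash
# Optional: store the account URL and master key in variables for later use
COSMOS_DB_URL=$(az cosmosdb show --name $COSMOS_DB --resource-group rg-dapr-workshop-java --query documentEndpoint -o tsv)
COSMOS_DB_MASTER_KEY=$(az cosmosdb keys list --name $COSMOS_DB --resource-group rg-dapr-workshop-java --type keys --query primaryMasterKey -o tsv)
echo "COSMOS_DB_URL=$COSMOS_DB_URL"
echo "COSMOS_DB_MASTER_KEY=$COSMOS_DB_MASTER_KEY"
```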
|
|
@ -0,0 +1,29 @@
|
|||
1. Open the `TrafficControlService` project in your code editor and navigate to the `DaprVehicleStateRepository` class. This class uses the Dapr client to store and retrieve the state of a vehicle. Inspect the implementation of this class (a sketch of the underlying state API calls is shown after this list).
|
||||
|
||||
1. Navigate to the `TrafficControlConfiguration` class to switch from the `InMemoryVehicleStateRepository` to the `DaprVehicleStateRepository`.
|
||||
|
||||
1. **Update** the `@Bean` method to instantiate `DaprVehicleStateRepository` instead of `InMemoryVehicleStateRepository`:
|
||||
|
||||
```java
|
||||
@Bean
|
||||
public VehicleStateRepository vehicleStateRepository(final DaprClient daprClient) {
|
||||
return new DaprVehicleStateRepository(daprClient);
|
||||
}
|
||||
```
|
||||
|
||||
1. **Uncomment** the following `@Bean` method, if not already done:
|
||||
|
||||
```java
|
||||
// @Bean
|
||||
// public DaprClient daprClient() {
|
||||
// return new DaprClientBuilder()
|
||||
// .withObjectSerializer(new JsonObjectSerializer())
|
||||
// .build();
|
||||
// }
|
||||
```
|
||||
|
||||
1. Check that all your code changes are correct by building the code. Execute the following command in the terminal window:
|
||||
|
||||
```bash
|
||||
mvn package
|
||||
```
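
To see what the `DaprVehicleStateRepository` does behind the scenes, you can call the sidecar's state API directly when running locally with a Dapr sidecar. This sketch is illustrative only: it assumes the default sidecar HTTP port `3500`, a state store component named `statestore`, and a placeholder key.

```bash
# Illustrative only: save and read a key/value pair through the Dapr state API.
# 3500 is the default Dapr HTTP port; "statestore" and the key are placeholders.
curl -X POST "http://localhost:3500/v1.0/state/statestore" \
  -H "Content-Type: application/json" \
  -d '[{"key": "RT-318-K", "value": {"licenseNumber": "RT-318-K"}}]'

curl "http://localhost:3500/v1.0/state/statestore/RT-318-K"
```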
|
|
@ -0,0 +1,39 @@
|
|||
<!-- Require 'stepNumber' as input: the number of the first step of this include.
|
||||
Return the number of the last step in this include -->
|
||||
## Step {{stepNumber}}: Build and redeploy traffic control service
|
||||
|
||||
In this step, you will rebuild and redeploy the `TrafficControlService` to use the Azure Cosmos DB state store instead of keeping the state in memory.
|
||||
|
||||
1. Delete the image from the local Docker cache:
|
||||
|
||||
```bash
|
||||
docker rmi traffic-control-service:1.0-SNAPSHOT
|
||||
```
|
||||
|
||||
1. In the root folder of `TrafficControlService`, run the following command to build and push the image:
|
||||
|
||||
```bash
|
||||
mvn spring-boot:build-image
|
||||
docker tag traffic-control-service:1.0-SNAPSHOT "$CONTAINER_REGISTRY.azurecr.io/traffic-control-service:2.0"
|
||||
docker push "$CONTAINER_REGISTRY.azurecr.io/traffic-control-service:2.0"
|
||||
```
|
||||
|
||||
Where `$CONTAINER_REGISTRY` is the name of your Azure Container Registry.
|
||||
|
||||
1. Update `TrafficControlService` container with the new image:
|
||||
|
||||
```bash
|
||||
az containerapp update \
|
||||
--name ca-traffic-control-service \
|
||||
--resource-group rg-dapr-workshop-java \
|
||||
--image "$CONTAINER_REGISTRY.azurecr.io/traffic-control-service:2.0"
|
||||
```
|
||||
|
||||
Where `$CONTAINER_REGISTRY` is the name of your Azure Container Registry.
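
Optionally, you can confirm that a new revision running the `2.0` image has been provisioned:

```bash
# Optional: list revisions of the traffic control container app
az containerapp revision list \
  --name ca-traffic-control-service \
  --resource-group rg-dapr-workshop-java \
  -o table
```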
|
||||
|
||||
<!-- -------------------------------- TEST --------------------------------- -->
|
||||
|
||||
{% assign stepNumber = stepNumber | plus: 1 %}
|
||||
{% include 05-assignment-5-aks-aca/02-aca/0-3-test-application.md %}
|
||||
|
||||
Check the Application Map of Application Insights in the Azure portal to see the connection between the `TrafficControlService` and the `aca-azure-cosmosdb-statestore` component. Also check the data stored in Cosmos DB in the Azure portal.
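
If the state store does not show up in the Application Map, a quick check is to list the Dapr components configured on the Container Apps environment; the state store component should appear alongside `pubsub` (component names may differ in your setup):

```bash
# Optional: list the Dapr components configured on the environment
az containerapp env dapr-component list \
  --name cae-dapr-workshop-java \
  --resource-group rg-dapr-workshop-java \
  -o table
```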
|
Binary image files added (not displayed): 20 KiB, 22 KiB, 25 KiB, 44 KiB.
|
@ -8,21 +8,9 @@ layout: default
|
|||
|
||||
# Dapr Overview
|
||||
|
||||
Dapr is a portable, serverless, event-driven runtime that makes it easy for developers to build resilient, stateless and stateful microservices that run on the cloud and edge and embraces the diversity of languages and developer frameworks.
|
||||
{% include 00-intro/1-dapr-overview.md relativeAssetsPath="../../assets/" %}
|
||||
|
||||
Dapr codifies the *best practices* for building microservice applications into open, independent, building blocks that enable you to build portable applications with the language and framework of your choice. Each building block is independent and you can use one, some, or all of them in your application.
|
||||
|
||||
![Dapr overview](../../assets/images/overview.png)
|
||||
|
||||
## How it works
|
||||
|
||||
Dapr injects a side-car (container or process) to each compute unit. The side-car interacts with event triggers and communicates with the compute unit via standard HTTP or gRPC protocols. This enables Dapr to support all existing and future programming languages without requiring you to import frameworks or libraries.
|
||||
|
||||
Dapr offers built-in state management, reliable messaging (at least once delivery), triggers and bindings through standard HTTP verbs or gRPC interfaces. This allows you to write stateless, stateful and actor-like services following the same programming paradigm. You can freely choose consistency model, threading model and message delivery patterns.
|
||||
|
||||
Dapr runs natively on Kubernetes, as a self hosted binary on your machine, on an IoT device, or as a container that can be injected into any system, in the cloud or on-premises.
|
||||
|
||||
Dapr uses pluggable component state stores and message buses such as Redis as well as gRPC to offer a wide range of communication methods, including direct dapr-to-dapr using gRPC and async Pub-Sub with guaranteed delivery and at-least-once semantics.
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Introduction]({{ site.baseurl }}{% link index.markdown %}){: .btn .mt-7 }
|
||||
|
|
|
@ -8,31 +8,9 @@ layout: default
|
|||
|
||||
# Prerequisites
|
||||
|
||||
Make sure you have the following prerequisites installed on your machine:
|
||||
{% include 00-intro/2-prerequisites.md %}
|
||||
|
||||
- Git ([download](https://git-scm.com/))
|
||||
- A code editor or IDE like:
|
||||
- Visual Studio Code ([download](https://code.visualstudio.com/))
|
||||
- IntelliJ IDEA ([download](https://www.jetbrains.com/idea/download/))
|
||||
- Eclipse IDE for Java Developers ([download](https://www.eclipse.org/downloads/))
|
||||
- Docker for desktop ([download](https://www.docker.com/products/docker-desktop)) or Rancher Desktop ([download](https://rancherdesktop.io/))
|
||||
- [Install the Dapr CLI](https://docs.dapr.io/getting-started/install-dapr-cli/) and [initialize Dapr locally](https://docs.dapr.io/getting-started/install-dapr-selfhost/)
|
||||
- Java 16 or above ([download](https://adoptopenjdk.net/?variant=openjdk16))
|
||||
- Apache Maven 3.6.3 or above is required; Apache Maven 3.8.1 is advised ([download](http://maven.apache.org/download.cgi))
|
||||
- Make sure that Maven uses the correct Java runtime by running `mvn -version`.
|
||||
- Clone the source code repository:
|
||||
|
||||
```bash
|
||||
git clone https://github.com/Azure/dapr-java-workshop.git
|
||||
```
|
||||
|
||||
**From now on, this folder is referred to as the 'source code' folder.**
|
||||
|
||||
{: .important-title }
|
||||
> Powershell
|
||||
>
|
||||
> If you are using Powershell, you need to replace `\` with **`** at the end of each line in multiline commands.
|
||||
>
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Dapr Overview]({{ site.baseurl }}{% link modules/00-intro/1-dapr-overview.md %}){: .btn .mt-7 }
|
||||
|
|
|
@ -20,89 +20,9 @@ has_toc: true
|
|||
{:toc}
|
||||
</details>
|
||||
|
||||
The Spring for Apache Kafka (spring-kafka) project applies core Spring concepts to the development of Kafka-based messaging solutions. It provides a "template" as a high-level abstraction for sending messages. It also provides support for Message-driven POJOs with @KafkaListener annotations and a "listener container".
|
||||
{% include 01-assignment-1-lab/1-spring-for-apache-kafka.md %}
|
||||
|
||||
## Kafka Publisher
|
||||
|
||||
1. The `TrafficControlService/src/main/java/dapr/traffic/fines/KafkaConfig.java` file defines a custom JSON serializer class used to serialize objects published to Kafka.
|
||||
|
||||
```java
|
||||
public JsonObjectSerializer() {
|
||||
super(customizedObjectMapper());
|
||||
}
|
||||
|
||||
private static ObjectMapper customizedObjectMapper() {
|
||||
ObjectMapper mapper = JacksonUtils.enhancedObjectMapper();
|
||||
mapper.registerModule(new JavaTimeModule());
|
||||
mapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
|
||||
return mapper;
|
||||
}
|
||||
```
|
||||
|
||||
2. The `TrafficControlService/src/main/java/dapr/traffic/fines/KafkaConfig.java` file also defines the `ProducerFactory` and `KafkaTemplate` beans.
|
||||
|
||||
```java
|
||||
@Bean
|
||||
public ProducerFactory<String, SpeedingViolation> producerFactory() {
|
||||
Map<String, Object> config = new HashMap<>();
|
||||
config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
|
||||
config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
|
||||
config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonObjectSerializer.class);
|
||||
return new DefaultKafkaProducerFactory(config);
|
||||
}
|
||||
|
||||
@Bean
|
||||
public KafkaTemplate<String, SpeedingViolation> kafkaTemplate() {
|
||||
return new KafkaTemplate<String, SpeedingViolation>(producerFactory());
|
||||
}
|
||||
```
|
||||
|
||||
3. The `TrafficControlService/src/main/java/dapr/traffic/fines/KafkaFineCollectionClient.java` class uses the `KafkaTemplate` to publish fines to the `test` topic.
|
||||
|
||||
```java
|
||||
@Autowired
|
||||
private KafkaTemplate<String, SpeedingViolation> kafkaTemplate;
|
||||
|
||||
@Override
|
||||
public void submitForFine(SpeedingViolation speedingViolation) {
|
||||
kafkaTemplate.send("test", speedingViolation);
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
## Kafka Subscriber
|
||||
|
||||
1. The `FineCollectionService/src/main/java/dapr/fines/violation/KafkaConsumerConfig.java` file defines the consumer factory and the listener container factory for the Kafka listener.
|
||||
|
||||
```java
|
||||
@Bean
|
||||
public ConsumerFactory<String, SpeedingViolation> consumerFactory() {
|
||||
Map<String, Object> props = new HashMap<>();
|
||||
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
|
||||
props.put(ConsumerConfig.GROUP_ID_CONFIG, "test");
|
||||
props.put(JsonObjectDeserializer.TRUSTED_PACKAGES, "*");
|
||||
return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(),
|
||||
new JsonObjectDeserializer());
|
||||
}
|
||||
|
||||
@Bean
|
||||
public ConcurrentKafkaListenerContainerFactory<String, SpeedingViolation> kafkaListenerContainerFactory() {
|
||||
ConcurrentKafkaListenerContainerFactory<String, SpeedingViolation>
|
||||
factory = new ConcurrentKafkaListenerContainerFactory<>();
|
||||
factory.setConsumerFactory(consumerFactory());
|
||||
return factory;
|
||||
}
|
||||
```
|
||||
|
||||
2. The `FineCollectionService/src/main/java/dapr/fines/violation/KafkaViolationConsumer.java` file implements the Kafka listener.
|
||||
|
||||
```java
|
||||
@KafkaListener(topics = "test", groupId = "test", containerFactory = "kafkaListenerContainerFactory")
|
||||
public void listen(SpeedingViolation violation) {
|
||||
|
||||
violationProcessor.processSpeedingViolation(violation);
|
||||
}
|
||||
```
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Prerequisites]({{ site.baseurl }}{% link modules/00-intro/2-prerequisites.md %}){: .btn .mt-7 }
|
||||
|
|
|
@ -20,101 +20,9 @@ has_toc: true
|
|||
{:toc}
|
||||
</details>
|
||||
|
||||
In this assignment, you'll run the application to make sure everything works correctly.
|
||||
{% include 01-assignment-1-lab/2-lab-instructions.md linkToPrerequisites="modules/00-intro/2-prerequisites.md" %}
|
||||
|
||||
## Assignment goals
|
||||
|
||||
To complete this assignment, you must reach the following goals:
|
||||
|
||||
- Apache Kafka - either run as a docker container (see below) or install and run on your machine ([download](https://kafka.apache.org/downloads))
|
||||
- All services are running.
|
||||
- The logging indicates that all services are working correctly.
|
||||
|
||||
## Step 1. Running Kafka using Docker or Rancher Desktop
|
||||
|
||||
From the root of **source code** folder, run the following command to configure and start Kafka from your locally installed Docker or Rancher Desktop:
|
||||
|
||||
```bash
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
This command will read the docker-compose.yml file located within the root folder and download and run Kafka containers for this workshop.
|
||||
|
||||
## Step 2. Run the VehicleRegistration service
|
||||
|
||||
1. Open the source code folder in your code editor.
|
||||
|
||||
2. Open a terminal window.
|
||||
|
||||
3. Make sure the current folder is `VehicleRegistrationService`.
|
||||
|
||||
4. Start the service:
|
||||
|
||||
```bash
|
||||
mvn spring-boot:run
|
||||
```
|
||||
|
||||
> If you receive an error here, please double-check whether or not you have installed all the [prerequisites](../Module0/index.md) for the workshop!
|
||||
|
||||
## Step 3. Run the FineCollection service
|
||||
|
||||
1. Make sure the VehicleRegistrationService is running (result of step 2).
|
||||
|
||||
1. Open a **new** terminal window.
|
||||
|
||||
1. Make sure the current folder is `FineCollectionService`.
|
||||
|
||||
1. Start the service:
|
||||
|
||||
```bash
|
||||
mvn spring-boot:run
|
||||
```
|
||||
|
||||
## Step 4. Run the TrafficControl service
|
||||
|
||||
1. Make sure the VehicleRegistrationService and FineCollectionService are running (results of steps 2 and 3).
|
||||
|
||||
2. Open a **new** terminal window and make sure the current folder is `TrafficControlService`.
|
||||
|
||||
3. Start the service:
|
||||
|
||||
```bash
|
||||
mvn spring-boot:run
|
||||
```
|
||||
|
||||
## Step 5. Run the simulation
|
||||
|
||||
Now you're going to run the simulation that simulates cars driving on the highway. It runs 3 entry and exit cameras (one for each lane).
|
||||
|
||||
1. Open a new terminal window and make sure the current folder is `Simulation`.
|
||||
|
||||
2. Start the simulation:
|
||||
|
||||
```bash
|
||||
mvn spring-boot:run
|
||||
```
|
||||
|
||||
3. In the simulation window you should see something like this:
|
||||
|
||||
```bash
|
||||
2021-09-15 13:47:59.599 INFO 22875 --- [ main] dapr.simulation.SimulationApplication : Started SimulationApplication in 0.98 seconds (JVM running for 1.289)
|
||||
2021-09-15 13:47:59.603 INFO 22875 --- [pool-1-thread-2] dapr.simulation.Simulation : Start camera simulation for lane 1
|
||||
2021-09-15 13:47:59.603 INFO 22875 --- [pool-1-thread-1] dapr.simulation.Simulation : Start camera simulation for lane 0
|
||||
2021-09-15 13:47:59.603 INFO 22875 --- [pool-1-thread-3] dapr.simulation.Simulation : Start camera simulation for lane 2
|
||||
2021-09-15 13:47:59.679 INFO 22875 --- [pool-1-thread-2] dapr.simulation.Simulation : Simulated ENTRY of vehicle with license number 77-ZK-59 in lane 1
|
||||
2021-09-15 13:47:59.869 INFO 22875 --- [pool-1-thread-3] dapr.simulation.Simulation : Simulated ENTRY of vehicle with license number LF-613-D in lane 2
|
||||
2021-09-15 13:48:00.852 INFO 22875 --- [pool-1-thread-1] dapr.simulation.Simulation : Simulated ENTRY of vehicle with license number 12-LZ-KS in lane 0
|
||||
2021-09-15 13:48:04.797 INFO 22875 --- [pool-1-thread-2] dapr.simulation.Simulation : Simulated EXIT of vehicle with license number 77-ZK-59 in lane 0
|
||||
2021-09-15 13:48:04.894 INFO 22875 --- [pool-1-thread-3] dapr.simulation.Simulation : Simulated EXIT of vehicle with license number LF-613-D in lane 0
|
||||
```
|
||||
|
||||
4. Also check the logging in all the other terminal windows. You should see all entry and exit events and any speeding violations that were detected in the logs.
|
||||
|
||||
Now you know the application runs correctly. It's time to start adding Dapr to the application.
|
||||
|
||||
## Next assignment
|
||||
|
||||
Make sure you stop all running processes and close all the terminal windows before proceeding to the next assignment. Stopping a service or the simulation is done by pressing `Ctrl-C` in the terminal window.
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Spring for Apache Kafka Usage]({{ site.baseurl }}{% link modules/01-assignment-1-lab/2-lab-instructions.md %}){: .btn .mt-7 }
|
||||
|
|
|
@ -7,4 +7,4 @@ layout: default
|
|||
|
||||
# Assignment 1 - Running Applications with Kafka without using Dapr
|
||||
|
||||
Details on running the application to make sure everything works correctly.
|
||||
Details on how to run the application to make sure everything works correctly.
|
|
@ -19,246 +19,12 @@ has_toc: true
|
|||
{:toc}
|
||||
</details>
|
||||
|
||||
In this assignment, you're going to replace direct Spring Kafka producer and consumer implementation with Dapr **publish/subscribe** messaging to send messages from the TrafficControlService to the FineCollectionService.
|
||||
{% include 02-assignment-2-dapr-pub-sub/1-dapr-pub-sub.md relativeAssetsPath="../../assets/" %}
|
||||
|
||||
With the Dapr pub/sub building block, you use a *topic* to send and receive messages. The producer sends messages to the topic and one or more consumers subscribe to this topic to receive those messages. First you are going to prepare the TrafficControlService so it can send messages using Dapr pub/sub.
|
||||
|
||||
Dapr provides two methods by which you can subscribe to topics:
|
||||
|
||||
* **Declaratively**, where subscriptions are defined in an external file.
|
||||
* **Programmatically**, where subscriptions are defined in user code, using language-specific SDKs.
|
||||
|
||||
This example demonstrates a **programmatic** approach using Dapr's Java SDK.
|
||||
|
||||
If you want to get more detailed information, read the [overview of this building block](https://docs.dapr.io/developing-applications/building-blocks/pubsub/pubsub-overview/) in the Dapr documentation.
|
||||
|
||||
To complete this assignment, you must reach the following goals:
|
||||
|
||||
1. The TrafficControlService sends `SpeedingViolation` messages using the Dapr pub/sub building block.
|
||||
2. The FineCollectionService receives `SpeedingViolation` messages using the Dapr pub/sub building block.
|
||||
3. Kafka is used as the pub/sub message broker and runs as part of the solution, either in a Docker container or directly on your laptop.
|
||||
|
||||
## Instructions
|
||||
|
||||
1. Open the file `dapr/kafka-pubsub.yaml` in your code editor.
|
||||
|
||||
1. Inspect this file. As you can see, it specifies the type of the message broker to use (`pubsub.kafka`) and specifies information on how to connect to the Kafka server you started in step 1 (running on localhost on port `9092`) in the `metadata` section.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: pubsub
|
||||
namespace: default
|
||||
spec:
|
||||
type: pubsub.kafka
|
||||
version: v1
|
||||
metadata:
|
||||
- name: brokers # Required. Kafka broker connection setting
|
||||
value: "localhost:9092"
|
||||
- name: consumerGroup # Optional. Used for input bindings.
|
||||
value: "test"
|
||||
- name: clientID # Optional. Used as client tracing ID by Kafka brokers.
|
||||
value: "my-dapr-app-id"
|
||||
- name: authType # Required.
|
||||
- name: authRequired
|
||||
value: "false"
|
||||
- name: maxMessageBytes # Optional.
|
||||
value: 1024
|
||||
- name: consumeRetryInterval # Optional.
|
||||
value: 200ms
|
||||
- name: version # Optional.
|
||||
value: 0.10.2.0
|
||||
- name: disableTls # Optional. Disable TLS. This is not safe for production!! You should read the `Mutual TLS` section for how to use TLS.
|
||||
value: "true"
|
||||
scopes:
|
||||
- trafficcontrolservice
|
||||
- finecollectionservice
|
||||
```
|
||||
|
||||
In the `scopes` section, you specify that only the TrafficControlService and FineCollectionService should use the pub/sub building block.
|
||||
|
||||
1. **Copy or Move** this file `dapr/kafka-pubsub.yaml` to `dapr/components/` folder (when starting Dapr applications from command line, you specify a folder `dapr/components/` where Dapr component definitions are located). From the root folder, run the following command:
|
||||
|
||||
```bash
|
||||
mkdir dapr/components
|
||||
cp dapr/kafka-pubsub.yaml dapr/components/
|
||||
```
|
||||
|
||||
## Step 1: Publish messages in the TrafficControlService
|
||||
|
||||
1. Open the file **TrafficControlService/src/main/java/dapr/traffic/fines/DaprFineCollectionClient.java** in your code editor and inspect it.
|
||||
|
||||
2. It implements the `FineCollectionClient` interface.
|
||||
|
||||
```java
|
||||
public class DaprFineCollectionClient implements FineCollectionClient{
|
||||
private final DaprClient daprClient;
|
||||
|
||||
public DaprFineCollectionClient(final DaprClient daprClient) {
|
||||
this.daprClient = daprClient;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void submitForFine(SpeedingViolation speedingViolation) {
|
||||
|
||||
|
||||
daprClient.publishEvent("pubsub", "test", speedingViolation).block();
|
||||
}
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
3. Open the file `TrafficControlService/src/main/java/dapr/traffic/TrafficControlConfiguration.java` in your code editor.
|
||||
|
||||
The default JSON serialization is not suitable for today's goal, so you need to customize the Jackson `ObjectMapper` that it uses. You do so by adding a static inner class to configure the JSON serialization:
|
||||
|
||||
```java
|
||||
static class JsonObjectSerializer extends DefaultObjectSerializer {
|
||||
public JsonObjectSerializer() {
|
||||
OBJECT_MAPPER.registerModule(new JavaTimeModule());
|
||||
OBJECT_MAPPER.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
4. **Comment out** following @Bean method:
|
||||
|
||||
```java
|
||||
@Bean
|
||||
public FineCollectionClient fineCollectionClient() {
|
||||
return new KafkaFineCollectionClient();
|
||||
}
|
||||
```
|
||||
|
||||
5. **Uncomment** following @Bean method:
|
||||
|
||||
```java
|
||||
// @Bean
|
||||
// public FineCollectionClient fineCollectionClient(final DaprClient daprClient) {
|
||||
// return new DaprFineCollectionClient(daprClient);
|
||||
// }
|
||||
```
|
||||
|
||||
6. **Uncomment** following @Bean method:
|
||||
|
||||
```java
|
||||
// @Bean
|
||||
// public DaprClient daprClient() {
|
||||
// return new DaprClientBuilder()
|
||||
// .withObjectSerializer(new JsonObjectSerializer())
|
||||
// .build();
|
||||
// }
|
||||
```
|
||||
|
||||
7. Check all your code changes are correct by building the code. Execute the following command in the terminal window:
|
||||
|
||||
```bash
|
||||
mvn package
|
||||
```
|
||||
|
||||
## Step 2: Receive messages in the FineCollectionService
|
||||
|
||||
Dapr first retrieves the subscriptions of your service and then delivers each message for the `test` topic with a `POST` to the `/collectfine` endpoint. You will implement this endpoint and register it as the subscriber for the `test` topic.
|
||||
|
||||
1. Open the file `FineCollectionService/src/main/java/dapr/fines/violation/ViolationController.java` in your code editor.
|
||||
|
||||
2. Uncomment the code line below:
|
||||
|
||||
```java
|
||||
//@RestController
|
||||
```
|
||||
|
||||
3. Uncomment the code snippet below:
|
||||
|
||||
```java
|
||||
// @PostMapping(path = "/collectfine")
|
||||
// @Topic(name = "test", pubsubName = "pubsub")
|
||||
// public ResponseEntity<Void> registerViolation(@RequestBody final CloudEvent<SpeedingViolation> event) {
|
||||
// var violation = event.getData();
|
||||
// violationProcessor.processSpeedingViolation(violation);
|
||||
// return ResponseEntity.ok().build();
|
||||
// }
|
||||
```
|
||||
|
||||
4. Open the file `FineCollectionService/src/main/java/dapr/fines/violation/KafkaViolationConsumer.java` in your code editor.
|
||||
|
||||
5. Comment out the `@KafkaListener` annotation line:
|
||||
|
||||
```java
|
||||
@KafkaListener(topics = "test", groupId = "test", containerFactory = "kafkaListenerContainerFactory")
|
||||
```
|
||||
|
||||
6. Check all your code changes are correct by building the code. Execute the following command in the terminal window:
|
||||
|
||||
```bash
|
||||
mvn package
|
||||
```
|
||||
|
||||
Now you can test the application.
|
||||
|
||||
## Step 3: Test the application
|
||||
|
||||
You're going to start all the services now.
|
||||
|
||||
1. Make sure no services from previous tests are running (close the command-shell windows).
|
||||
|
||||
1. Open the terminal window and make sure the current folder is `VehicleRegistrationService`.
|
||||
|
||||
1. Enter the following command to run the VehicleRegistrationService:
|
||||
|
||||
```bash
|
||||
mvn spring-boot:run
|
||||
```
|
||||
|
||||
1. Open a **new** terminal window and change the current folder to `FineCollectionService`.
|
||||
|
||||
1. Enter the following command to run the FineCollectionService with a Dapr sidecar:
|
||||
|
||||
Ensure you have run the `dapr init` command prior to running the command below.
|
||||
|
||||
```bash
|
||||
dapr run --app-id finecollectionservice --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 --components-path ../dapr/components mvn spring-boot:run
|
||||
```
|
||||
|
||||
1. Open a **new** terminal window and change the current folder to `TrafficControlService`.
|
||||
|
||||
1. Enter the following command to run the TrafficControlService with a Dapr sidecar:
|
||||
|
||||
```bash
|
||||
dapr run --app-id trafficcontrolservice --app-port 6000 --dapr-http-port 3600 --dapr-grpc-port 60000 --components-path ../dapr/components mvn spring-boot:run
|
||||
```
|
||||
|
||||
1. Open a **new** terminal window and change the current folder to `Simulation`.
|
||||
|
||||
1. Start the simulation:
|
||||
|
||||
```bash
|
||||
mvn spring-boot:run
|
||||
```
|
||||
|
||||
You should see the same logs as **Assignment 1**. Obviously, the behavior of the application is exactly the same as before.
|
||||
|
||||
## Step 4: Debug Dapr applications in Eclipse
|
||||
|
||||
The steps below are tailored to debug TrafficControlService, but would be the same for debugging any Dapr application in Eclipse.
|
||||
|
||||
1. Click `Run > External Tools > External Tools Configuration..`.
|
||||
2. Click `New Launch Configuration` icon.
|
||||
* Name = trafficcontrolservice-dapr-debug
|
||||
* Location = c:\dapr\dapr.exe
|
||||
* Working Directory = ${workspace_loc:/TrafficControlService}
|
||||
* Arguments = run --app-id trafficcontrolservice --app-port 6000 --dapr-http-port 3600 --dapr-grpc-port 60000 --components-path ../dapr/components
|
||||
|
||||
![Eclipse External Tools Configuration](../../assets/images/eclipse-external-tools-configurations.png)
|
||||
|
||||
3. Apply.
|
||||
4. Run.
|
||||
5. Set breakpoints in your code as you normally would in Eclipse.
|
||||
6. From `Debug` menu start the application either as a `Java Application` or as a `Spring Boot App`.
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Assignment 1 - Run without Dapr]({{ site.baseurl }}{% link modules/01-assignment-1-lab/1-spring-for-apache-kafka.md %}){: .btn .mt-7 }
|
||||
[< Assignment 1 - Run without Dapr]({{ site.baseurl }}{% link modules/01-assignment-1-lab/2-lab-instructions.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
<span class="fs-3">
|
||||
[Assignment 3 - Pub/sub with Azure Services >]({{ site.baseurl }}{% link modules/03-assignment-3-azure-pub-sub/index.md %}){: .btn .float-right .mt-7 }
|
||||
|
|
|
@ -20,138 +20,9 @@ has_toc: true
|
|||
{:toc}
|
||||
</details>
|
||||
|
||||
Stop the Simulation, TrafficControlService, FineCollectionService, and VehicleRegistrationService by pressing Ctrl-C in the respective terminal windows.
|
||||
{% include 03-assignment-3-azure-pub-sub/1-azure-service-bus.md %}
|
||||
|
||||
## Step 1: Create Azure Service Bus
|
||||
|
||||
In this assignment, you will use Azure Service Bus as the message broker with the Dapr pub/sub building block. You're going to create an Azure Service Bus namespace and a topic in it. To be able to do this, you need to have an Azure subscription. If you don't have one, you can create a free account at [https://azure.microsoft.com/free/](https://azure.microsoft.com/free/).
|
||||
|
||||
1. Login to Azure:
|
||||
|
||||
```bash
|
||||
az login
|
||||
```
|
||||
|
||||
1. Create a resource group:
|
||||
|
||||
```bash
|
||||
az group create --name rg-dapr-workshop-java --location eastus
|
||||
```
|
||||
|
||||
A [resource group](https://learn.microsoft.com/azure/azure-resource-manager/management/manage-resource-groups-portal) is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. In our workshop, all the databases, all the microservices, etc. will be grouped into a single resource group.
|
||||
|
||||
1. [Azure Service Bus](https://learn.microsoft.com/en-us/azure/service-bus-messaging/) Namespace is a logical container for topics, queues, and subscriptions. This namespace needs to be globally unique. Use the following command to generate a unique name:
|
||||
|
||||
- Linux/Unix shell:
|
||||
|
||||
```bash
|
||||
UNIQUE_IDENTIFIER=$(LC_ALL=C tr -dc a-z0-9 </dev/urandom | head -c 5)
|
||||
SERVICE_BUS="sb-dapr-workshop-java-$UNIQUE_IDENTIFIER"
|
||||
echo $SERVICE_BUS
|
||||
```
|
||||
|
||||
- Powershell:
|
||||
|
||||
```powershell
|
||||
$ACCEPTED_CHAR = [Char[]]'abcdefghijklmnopqrstuvwxyz0123456789'
|
||||
$UNIQUE_IDENTIFIER = (Get-Random -Count 5 -InputObject $ACCEPTED_CHAR) -join ''
|
||||
$SERVICE_BUS = "sb-dapr-workshop-java-$UNIQUE_IDENTIFIER"
|
||||
$SERVICE_BUS
|
||||
```
|
||||
|
||||
1. Create a Service Bus messaging namespace:
|
||||
|
||||
```bash
|
||||
az servicebus namespace create --resource-group rg-dapr-workshop-java --name $SERVICE_BUS --location eastus
|
||||
```
|
||||
|
||||
1. Create a Service Bus topic:
|
||||
|
||||
```bash
|
||||
az servicebus topic create --resource-group rg-dapr-workshop-java --namespace-name $SERVICE_BUS --name test
|
||||
```
|
||||
|
||||
1. Create authorization rules for the Service Bus topic:
|
||||
|
||||
```bash
|
||||
az servicebus topic authorization-rule create --resource-group rg-dapr-workshop-java --namespace-name $SERVICE_BUS --topic-name test --name DaprWorkshopJavaAuthRule --rights Manage Send Listen
|
||||
```
|
||||
|
||||
1. Get the connection string for the Service Bus topic and copy it to the clipboard:
|
||||
|
||||
```bash
|
||||
az servicebus topic authorization-rule keys list --resource-group rg-dapr-workshop-java --namespace-name $SERVICE_BUS --topic-name test --name DaprWorkshopJavaAuthRule --query primaryConnectionString --output tsv
|
||||
```
|
||||
|
||||
## Step 2: Configure the pub/sub component
|
||||
|
||||
1. Open the file `dapr/azure-servicebus-pubsub.yaml` in your code editor.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: pubsub
|
||||
spec:
|
||||
type: pubsub.azure.servicebus
|
||||
version: v1
|
||||
metadata:
|
||||
- name: connectionString # Required when not using Azure Authentication.
|
||||
value: "Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}"
|
||||
scopes:
|
||||
- trafficcontrolservice
|
||||
- finecollectionservice
|
||||
```
|
||||
|
||||
As you can see, you specify a different type of pub/sub component (`pubsub.azure.servicebus`) and you specify in the `metadata` section how to connect to Azure Service Bus created in step 1. For this workshop, you are going to use the connection string you copied in the previous step. You can also configure the component to use Azure Active Directory authentication. For more information, see [Azure Service Bus pub/sub component](https://docs.dapr.io/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics/).
|
||||
|
||||
In the `scopes` section, you specify that only the TrafficControlService and FineCollectionService should use the pub/sub building block.
|
||||
|
||||
1. **Copy or Move** this file `dapr/azure-servicebus-pubsub.yaml` to `dapr/components` folder.
|
||||
|
||||
1. **Replace** the `connectionString` value with the value you copied from the clipboard.
|
||||
|
||||
1. **Move** the files `dapr/components/kafka-pubsub.yaml` and `dapr/components/rabbit-pubsub.yaml` back to the `dapr/` folder if they are present in the components folder.
|
||||
|
||||
## Step 3: Test the application
|
||||
|
||||
You're going to start all the services now.
|
||||
|
||||
1. Make sure no services from previous tests are running (close the command-shell windows).
|
||||
|
||||
1. Open the terminal window and make sure the current folder is `VehicleRegistrationService`.
|
||||
|
||||
1. Enter the following command to run the VehicleRegistrationService with a Dapr sidecar:
|
||||
|
||||
```bash
|
||||
mvn spring-boot:run
|
||||
```
|
||||
|
||||
1. Open a terminal window and change the current folder to `FineCollectionService`.
|
||||
|
||||
1. Enter the following command to run the FineCollectionService with a Dapr sidecar:
|
||||
|
||||
```bash
|
||||
dapr run --app-id finecollectionservice --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 --components-path ../dapr/components mvn spring-boot:run
|
||||
```
|
||||
|
||||
1. Open a terminal window and change the current folder to `TrafficControlService`.
|
||||
|
||||
1. Enter the following command to run the TrafficControlService with a Dapr sidecar:
|
||||
|
||||
```bash
|
||||
dapr run --app-id trafficcontrolservice --app-port 6000 --dapr-http-port 3600 --dapr-grpc-port 60000 --components-path ../dapr/components mvn spring-boot:run
|
||||
```
|
||||
|
||||
1. Open a terminal window and change the current folder to `Simulation`.
|
||||
|
||||
1. Start the simulation:
|
||||
|
||||
```bash
|
||||
mvn spring-boot:run
|
||||
```
|
||||
|
||||
You should see the same logs as before. Obviously, the behavior of the application is exactly the same as before. But now, instead of being published and consumed via a Kafka topic, the messages are processed through Azure Service Bus.
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Assignment 2 - Run with Dapr]({{ site.baseurl }}{% link modules/02-assignment-2-dapr-pub-sub/index.md %}){: .btn .mt-7 }
|
||||
|
|
|
@ -117,7 +117,7 @@ In this assignment, you will use Azure Cache for Redis as the message broker wit
|
|||
|
||||
As you can see, you specify a different type of pub/sub component (`pubsub.redis`) and you specify in the `metadata` section how to connect to Azure Cache for Redis created in step 1. For this workshop, you are going to use the redis hostname, password and port you copied in the previous step. For more information, see [Redis Streams pub/sub component](https://docs.dapr.io/reference/components-reference/supported-pubsub/setup-redis-pubsub/).
|
||||
|
||||
In the `scopes` section, you specify that only the TrafficControlService and FineCollectionService should use the pub/sub building block.
|
||||
In the `scopes` section, you specify that only the `TrafficControlService` and `FineCollectionService` should use the pub/sub building block. To know more about scopes, see [Application access to components with scopes](https://docs.dapr.io/operations/components/component-scopes/#application-access-to-components-with-scopes).
|
||||
|
||||
1. **Copy or Move** this file `dapr/azure-redis-pubsub.yaml` to `dapr/components` folder.
|
||||
|
||||
|
@ -165,6 +165,8 @@ You're going to start all the services now.
|
|||
|
||||
You should see the same logs as before. Obviously, the behavior of the application is exactly the same as before. But now, instead of being published and consumed via a Kafka topic, the messages are processed through Redis Streams.
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Assignment 2 - Run with Dapr]({{ site.baseurl }}{% link modules/02-assignment-2-dapr-pub-sub/index.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
|
|
|
@ -11,6 +11,8 @@ With Dapr you can easily without code changes switch between different [pub/sub
|
|||
|
||||
All the assignments of this module can be done independently from each other. You can choose to do one or all of them.
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[Azure Service Bus]({{ site.baseurl }}{% link modules/03-assignment-3-azure-pub-sub/1-azure-service-bus.md %}){: .btn }
|
||||
</span>
|
||||
|
@ -18,6 +20,8 @@ All the assignments of this module can be done independently from each other. Yo
|
|||
[Azure Cache for Redis]({{ site.baseurl }}{% link modules/03-assignment-3-azure-pub-sub/2-azure-cache-redis.md %}){: .btn }
|
||||
</span>
|
||||
|
||||
<!-- ------------------------------ CHALLENGE ------------------------------ -->
|
||||
|
||||
{: .new-title }
|
||||
> Challenge
|
||||
>
|
||||
|
|
|
@ -44,6 +44,8 @@ From the list of telemetry items, click the `Show` button to view an individual
|
|||
|
||||
![Zipkin UI](../../assets/images/zipkin-screenshot.png)
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Assignment 3 - Pub/sub with Azure Services]({{ site.baseurl }}{% link modules/03-assignment-3-azure-pub-sub/index.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
|
|
|
@ -52,6 +52,8 @@ layout: default
|
|||
dapr.io/enabled: "true"
|
||||
```
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Deploy to AKS]({{ site.baseurl }}{% link modules/05-assignment-5-aks-aca/01-aks/index.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
|
|
|
@ -195,7 +195,7 @@ This is the end of the workshop!
|
|||
- [Prevent port collisions]({{ site.baseurl }}{% link modules/08-additional-topics/1-prevent-port-collisions.md %})
|
||||
- [Dapr and Service Meshes]({{ site.baseurl }}{% link modules/08-additional-topics/2-dapr-and-service-meshes.md %})
|
||||
- You can continue the workshop with the **bonus assignments** to learn more about other Dapr building blocks:
|
||||
- [Service-to-service invocation using Dapr]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-to-service-invocation/index.md %})
|
||||
- [Service invocation using Dapr]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-invocation/index.md %})
|
||||
- [Azure Cosmos DB as a state store]({{ site.baseurl }}{% link modules/09-bonus-assignments/02-state-store/index.md %})
|
||||
- [Azure Key Vault as a secret store]({{ site.baseurl }}{% link modules/09-bonus-assignments/03-secret-store/index.md %})
|
||||
|
||||
|
@ -205,6 +205,8 @@ This is the end of the workshop!
|
|||
> When the workshop is done, please follow the [cleanup instructions]({{ site.baseurl }}{% link modules/10-cleanup/index.md %}) to delete the resources created in this workshop.
|
||||
>
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Dapr Sidecar in k8's]({{ site.baseurl }}{% link modules/05-assignment-5-aks-aca/01-aks/1-dapr-sidecar-in-k8s.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
|
|
|
@ -142,6 +142,8 @@ Find the Application Map feature within the lefthand navigation of the Applicati
|
|||
|
||||
![Application Insights Application Map](../../assets/images/application-insights-application-map.png)
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Deploy to AKS with Dapr Extension]({{ site.baseurl }}{% link modules/05-assignment-5-aks-aca/01-aks/2-aks-instructions.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
|
|
|
@ -49,6 +49,8 @@ layout: default
|
|||
|
||||
7. Verify that all application pods are running by executing the following command: `kubectl get pods`.
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< (Optional) Observability]({{ site.baseurl }}{% link modules/05-assignment-5-aks-aca/01-aks/3-observability-with-open-telemetry.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
|
|
|
@ -12,6 +12,8 @@ In this assignment, you will deploy the 3 services and the simulation to Azure K
|
|||
|
||||
There are two more optional exercises in this assignment. The first one is to [setup observability in AKS using OpenTelemetry]({{ site.baseurl }}{% link modules/05-assignment-5-aks-aca/01-aks/3-observability-with-open-telemetry.md %}). The second one is to use [GitOps to deploy the application to AKS]({{ site.baseurl }}{% link modules/05-assignment-5-aks-aca/01-aks/4-gitops.md %}).
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Assignment 4 - Observability]({{ site.baseurl }}{% link modules/04-assignment-4-observability-zipkin/index.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
|
|
|
@ -29,175 +29,9 @@ This assignement is about deploying our microservices to [Azure Container Apps](
|
|||
> Either [Assignment 3 - Using Dapr for pub/sub with Azure Service Bus]({{ site.baseurl }}{% link modules/03-assignment-3-azure-pub-sub/1-azure-service-bus.md %}) or [Assignment 3 - Using Dapr for pub/sub with Azure Cache for Redis]({{ site.baseurl }}{% link modules/03-assignment-3-azure-pub-sub/2-azure-cache-redis.md %}) is a pre-requisite for this assignment.
|
||||
>
|
||||
|
||||
|
||||
## Setup
|
||||
|
||||
Now, let's create the infrastructure for our application, so you can later deploy our microservices to [Azure Container Apps](https://learn.microsoft.com/en-us/azure/container-apps/).
|
||||
|
||||
### Log Analytics Workspace
|
||||
|
||||
[Log Analytics workspace](https://learn.microsoft.com/en-us/azure/azure-monitor/logs/log-analytics-workspace-overview) is the environment for Azure Monitor log data. Each workspace has its own data repository and configuration, and data sources and solutions are configured to store their data in a particular workspace. You will use the same workspace for most of the Azure resources you will be creating.
|
||||
|
||||
1. Create a Log Analytics workspace with the following command:
|
||||
|
||||
```bash
|
||||
az monitor log-analytics workspace create \
|
||||
--resource-group rg-dapr-workshop-java \
|
||||
--location eastus \
|
||||
--workspace-name log-dapr-workshop-java
|
||||
```
|
||||
|
||||
1. Retrieve the Log Analytics workspace customer ID and shared key, and store them in environment variables:
|
||||
|
||||
- Linux/Unix shell:
|
||||
|
||||
```bash
|
||||
LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID=$(
|
||||
az monitor log-analytics workspace show \
|
||||
--resource-group rg-dapr-workshop-java \
|
||||
--workspace-name log-dapr-workshop-java \
|
||||
--query customerId \
|
||||
--output tsv | tr -d '[:space:]'
|
||||
)
|
||||
echo "LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID=$LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID"
|
||||
|
||||
LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET=$(
|
||||
az monitor log-analytics workspace get-shared-keys \
|
||||
--resource-group rg-dapr-workshop-java \
|
||||
--workspace-name log-dapr-workshop-java \
|
||||
--query primarySharedKey \
|
||||
--output tsv | tr -d '[:space:]'
|
||||
)
|
||||
echo "LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET=$LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET"
|
||||
```
|
||||
|
||||
- Powershell:
|
||||
|
||||
```powershell
|
||||
$LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID="$(
|
||||
az monitor log-analytics workspace show `
|
||||
--resource-group rg-dapr-workshop-java `
|
||||
--workspace-name log-dapr-workshop-java `
|
||||
--query customerId `
|
||||
--output tsv
|
||||
)"
|
||||
Write-Output "LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID=$LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID"
|
||||
|
||||
$LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET="$(
|
||||
az monitor log-analytics workspace get-shared-keys `
|
||||
--resource-group rg-dapr-workshop-java `
|
||||
--workspace-name log-dapr-workshop-java `
|
||||
--query primarySharedKey `
|
||||
--output tsv
|
||||
)"
|
||||
Write-Output "LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET=$LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET"
|
||||
```
|
||||
|
||||
### Azure Container Registry
|
||||
|
||||
Later, you will be creating Docker containers and pushing them to the Azure Container Registry.
|
||||
|
||||
1. [Azure Container Registry](https://learn.microsoft.com/en-us/azure/container-registry/) is a private registry for hosting container images. Using the Azure Container Registry, you can store Docker images for all types of container deployments. The registry name needs to be globally unique. Use the following command to generate a unique name:
|
||||
|
||||
- Linux/Unix shell:
|
||||
|
||||
```bash
|
||||
UNIQUE_IDENTIFIER=$(LC_ALL=C tr -dc a-z0-9 </dev/urandom | head -c 5)
|
||||
CONTAINER_REGISTRY="crdaprworkshopjava$UNIQUE_IDENTIFIER"
|
||||
echo $CONTAINER_REGISTRY
|
||||
```
|
||||
|
||||
- Powershell:
|
||||
|
||||
```powershell
|
||||
$ACCEPTED_CHAR = [Char[]]'abcdefghijklmnopqrstuvwxyz0123456789'
|
||||
$UNIQUE_IDENTIFIER = (Get-Random -Count 5 -InputObject $ACCEPTED_CHAR) -join ''
|
||||
$CONTAINER_REGISTRY = "crdaprworkshopjava$UNIQUE_IDENTIFIER"
|
||||
$CONTAINER_REGISTRY
|
||||
```
|
||||
|
||||
1. Create an Azure Container Registry with the following command:
|
||||
|
||||
```bash
|
||||
az acr create \
|
||||
--resource-group rg-dapr-workshop-java \
|
||||
--location eastus \
|
||||
--name "$CONTAINER_REGISTRY" \
|
||||
--workspace log-dapr-workshop-java \
|
||||
--sku Standard \
|
||||
--admin-enabled true
|
||||
```
|
||||
|
||||
Notice that you created the registry with admin rights (`--admin-enabled true`), which is not suited for real production but is fine for this workshop.
|
||||
|
||||
1. Update the registry to allow anonymous users to pull the images:
|
||||
|
||||
```bash
|
||||
az acr update \
|
||||
--resource-group rg-dapr-workshop-java \
|
||||
--name "$CONTAINER_REGISTRY" \
|
||||
--anonymous-pull-enabled true
|
||||
```
|
||||
|
||||
|
||||
This can be handy if you want other attendees of the workshop to use your registry, but this is not suitable for production.
|
||||
|
||||
1. Get the URL of the Azure Container Registry and set it to the `CONTAINER_REGISTRY_URL` variable with the following command:
|
||||
|
||||
- Linux/Unix shell:
|
||||
|
||||
```bash
|
||||
CONTAINER_REGISTRY_URL=$(
|
||||
az acr show \
|
||||
--resource-group rg-dapr-workshop-java \
|
||||
--name "$CONTAINER_REGISTRY" \
|
||||
--query "loginServer" \
|
||||
--output tsv
|
||||
)
|
||||
|
||||
echo "CONTAINER_REGISTRY_URL=$CONTAINER_REGISTRY_URL"
|
||||
```
|
||||
|
||||
- Powershell:
|
||||
|
||||
```powershell
|
||||
$CONTAINER_REGISTRY_URL="$(
|
||||
az acr show `
|
||||
--resource-group rg-dapr-workshop-java `
|
||||
--name "$CONTAINER_REGISTRY" `
|
||||
--query "loginServer" `
|
||||
--output tsv
|
||||
)"
|
||||
|
||||
Write-Output "CONTAINER_REGISTRY_URL=$CONTAINER_REGISTRY_URL"
|
||||
```
|
||||
|
||||
### Azure Container Apps environment
|
||||
|
||||
A [container apps environment](https://learn.microsoft.com/en-us/azure/container-apps/environment) acts as a secure boundary around our container apps. Containers deployed in the same environment use the same virtual network and write logs to the same logging destination, in our case a Log Analytics workspace.
|
||||
|
||||
|
||||
{: .important-title }
|
||||
> Dapr Telemetry
|
||||
>
|
||||
|
||||
> If you want to enable Dapr telemetry, you need to create the container apps environment with Application Insights. You can follow these instructions instead of the instructions below: [(Optional) Observability with Dapr using Application Insights]({{ site.baseurl }}{% link modules/05-assignment-5-aks-aca/02-aca/2-observability.md %})
|
||||
>
|
||||
|
||||
Create the container apps environment with the following command:
|
||||
|
||||
```bash
|
||||
az containerapp env create \
|
||||
--resource-group rg-dapr-workshop-java \
|
||||
--location eastus \
|
||||
--name cae-dapr-workshop-java \
|
||||
--logs-workspace-id "$LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID" \
|
||||
--logs-workspace-key "$LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET"
|
||||
```
|
||||
|
||||
{: .note }
|
||||
> Some Azure CLI commands can take some time to execute. Don't hesitate to have a look at the next assignments to know what you will have to do. And then, come back to this one when the command is done and execute the next one.
|
||||
>
|
||||
{% include 05-assignment-5-aks-aca/02-aca/1-setup.md showObservability=false %}
|
||||
|
||||
## Step 1 - Deploy Dapr Components
|
||||
|
||||
|
@ -205,49 +39,17 @@ You are going to deploy the `pubsub` Dapr component. This pubsub is either Azure
|
|||
|
||||
### Azure Service Bus
|
||||
|
||||
In [Assignment 3 - Using Dapr for pub/sub with Azure Service Bus]({{ site.baseurl }}{% link modules/03-assignment-3-azure-pub-sub/1-azure-service-bus.md %}), you copied the file `dapr/azure-servicebus-pubsub.yaml` to `dapr/components` folder and updated the `connectionString` value. This file is used to deploy the `pubsub` Dapr component.
|
||||
|
||||
The Dapr component structure for Azure Container Apps is different from the standard Dapr component yaml structure and hence the need for a new component yaml file.
|
||||
|
||||
1. Open the file `dapr/aca-azure-servicebus-pubsub.yaml` in your code editor.
|
||||
|
||||
```yaml
|
||||
# pubsub.yaml for Azure Service Bus
|
||||
componentType: pubsub.azure.servicebus
|
||||
version: v1
|
||||
metadata:
|
||||
- name: connectionString
|
||||
value: "Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}"
|
||||
scopes:
|
||||
- trafficcontrolservice
|
||||
- finecollectionservice
|
||||
```
|
||||
|
||||
2. **Copy or Move** this file `dapr/aca-azure-servicebus-pubsub.yaml` to the `dapr/components` folder.
|
||||
|
||||
3. **Replace** the `connectionString` value with the value you set in `dapr/components/azure-servicebus-pubsub.yaml` in [Assignment 3 - Using Dapr for pub/sub with Azure Service Bus]({{ site.baseurl }}{% link modules/03-assignment-3-azure-pub-sub/1-azure-service-bus.md %}).
|
||||
|
||||
4. Go to the root folder of the repository.
|
||||
|
||||
5. Enter the following command to deploy the `pubsub` Dapr component:
|
||||
|
||||
```bash
|
||||
az containerapp env dapr-component set \
|
||||
--name cae-dapr-workshop-java --resource-group rg-dapr-workshop-java \
|
||||
--dapr-component-name pubsub \
|
||||
--yaml ./dapr/components/aca-azure-servicebus-pubsub.yaml
|
||||
```
|
||||
{% include 05-assignment-5-aks-aca/02-aca/2-1-dapr-component-service-bus.md linkToAssignment3="modules/03-assignment-3-azure-pub-sub/1-azure-service-bus.md" %}
|
||||
|
||||
### Azure Cache for Redis
|
||||
|
||||
In [Assignment 3 - Using Dapr for pub/sub with Azure Cache for Redis]({{ site.baseurl }}{% link modules/03-assignment-3-azure-pub-sub/2-azure-cache-redis.md %}), you copied the file `dapr/aca-azure-redis-pubsub.yaml` to `dapr/components` folder and updated the `redisHost` and `redisPassword` values. This file is used to deploy the `pubsub` Dapr component.
|
||||
|
||||
The Dapr component structure for Azure Container Apps is different from the standard Dapr component yaml structure and hence the need for a new component yaml file.
|
||||
The [Dapr component schema for Azure Container Apps](https://learn.microsoft.com/en-us/azure/container-apps/dapr-overview?tabs=bicep1%2Cyaml#component-schema) is different from the standard Dapr component YAML schema: it has been slightly simplified, hence the need for a new component yaml file.
|
||||
|
||||
1. Open the file `dapr/aca-redis-pubsub.yaml` in your code editor.
|
||||
|
||||
```yaml
|
||||
# pubsub.yaml for Azure Cache for Redis
|
||||
componentType: pubsub.redis
|
||||
version: v1
|
||||
metadata:
|
||||
|
@ -277,256 +79,10 @@ The Dapr component structure for Azure Container Apps is different from the stan
|
|||
--yaml ./dapr/components/aca-redis-pubsub.yaml
|
||||
```
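Whichever message broker you chose, you can verify that the `pubsub` component is now registered in the container apps environment:

```bash
# List the Dapr components registered in the container apps environment.
az containerapp env dapr-component list \
  --name cae-dapr-workshop-java \
  --resource-group rg-dapr-workshop-java \
  --output table
```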
|
||||
|
||||
## Step 2 - Generate Docker images for applications, and push them to ACR
|
||||
<!-- ----------------------- BUILD, DEPLOY AND TEST ------------------------ -->
|
||||
|
||||
Since you don't have any container images ready yet, we'll build container images and push them to Azure Container Registry (ACR) to get things running.
|
||||
|
||||
1. Log in to your Azure Container Registry (ACR):
|
||||
|
||||
```bash
|
||||
az acr login --name $CONTAINER_REGISTRY
|
||||
```
|
||||
|
||||
2. In the root folder of the VehicleRegistrationService microservice, run the following commands:
|
||||
|
||||
```bash
|
||||
mvn spring-boot:build-image
|
||||
docker tag vehicle-registration-service:1.0-SNAPSHOT "$CONTAINER_REGISTRY.azurecr.io/vehicle-registration-service:latest"
|
||||
docker push $CONTAINER_REGISTRY.azurecr.io/vehicle-registration-service:latest
|
||||
```
|
||||
|
||||
3. In the root folder of the FineCollectionService microservice, run the following commands:
|
||||
|
||||
```bash
|
||||
mvn spring-boot:build-image
|
||||
docker tag fine-collection-service:1.0-SNAPSHOT "$CONTAINER_REGISTRY.azurecr.io/fine-collection-service:latest"
|
||||
docker push $CONTAINER_REGISTRY.azurecr.io/fine-collection-service:latest
|
||||
```
|
||||
|
||||
4. In the root folder of the TrafficControlService microservice, run the following commands:
|
||||
|
||||
```bash
|
||||
mvn spring-boot:build-image
|
||||
docker tag traffic-control-service:1.0-SNAPSHOT "$CONTAINER_REGISTRY.azurecr.io/traffic-control-service:latest"
|
||||
docker push $CONTAINER_REGISTRY.azurecr.io/traffic-control-service:latest
|
||||
```
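Once the three images are pushed, you can list the repositories in your container registry to confirm they arrived:

```bash
# List the repositories (images) stored in the Azure Container Registry.
az acr repository list --name $CONTAINER_REGISTRY --output table
```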
|
||||
|
||||
## Step 3 - Deploy the Container Apps
|
||||
|
||||
Now that you have created the container apps environment and pushed the images, you can create the container apps. A container app is a containerized application that is deployed to a container apps environment.
|
||||
|
||||
You will create three container apps, one for each of our Java services: TrafficControlService, FineCollectionService and VehicleRegistrationService.
|
||||
|
||||
1. Create a Container App for VehicleRegistrationService with the following command:
|
||||
|
||||
```bash
|
||||
az containerapp create \
|
||||
--name ca-vehicle-registration-service \
|
||||
--resource-group rg-dapr-workshop-java \
|
||||
--environment cae-dapr-workshop-java \
|
||||
--image "$CONTAINER_REGISTRY_URL/vehicle-registration-service:latest" \
|
||||
--target-port 6002 \
|
||||
--ingress internal \
|
||||
--min-replicas 1 \
|
||||
--max-replicas 1
|
||||
```
|
||||
|
||||
Notice that internal ingress is enabled. This is because we want the service to be accessible only from within the container apps environment. FineCollectionService will access VehicleRegistrationService through the internal ingress FQDN.
|
||||
|
||||
1. Get the FQDN of VehicleRegistrationService and save it in a variable:
|
||||
|
||||
- Linux/Unix shell:
|
||||
|
||||
```bash
|
||||
VEHICLE_REGISTRATION_SERVICE_FQDN=$(az containerapp show \
|
||||
--name ca-vehicle-registration-service \
|
||||
--resource-group rg-dapr-workshop-java \
|
||||
--query "properties.configuration.ingress.fqdn" \
|
||||
-o tsv)
|
||||
echo $VEHICLE_REGISTRATION_SERVICE_FQDN
|
||||
```
|
||||
|
||||
- Powershell:
|
||||
|
||||
```powershell
|
||||
$VEHICLE_REGISTRATION_SERVICE_FQDN = az containerapp show `
|
||||
--name ca-vehicle-registration-service `
|
||||
--resource-group rg-dapr-workshop-java `
|
||||
--query "properties.configuration.ingress.fqdn" `
|
||||
-o tsv
|
||||
$VEHICLE_REGISTRATION_SERVICE_FQDN
|
||||
```
|
||||
|
||||
Notice that the FQDN is in the format `<service-name>.internal.<unique-name>.<region>.azurecontainerapps.io`, where `internal` indicates that the service is only accessible from within the container apps environment, i.e. it is exposed with internal ingress.
|
||||
|
||||
1. Create a Container App for FineCollectionService with the following command:
|
||||
|
||||
```bash
|
||||
az containerapp create \
|
||||
--name ca-fine-collection-service \
|
||||
--resource-group rg-dapr-workshop-java \
|
||||
--environment cae-dapr-workshop-java \
|
||||
--image "$CONTAINER_REGISTRY_URL/fine-collection-service:latest" \
|
||||
--min-replicas 1 \
|
||||
--max-replicas 1 \
|
||||
--enable-dapr \
|
||||
--dapr-app-id fine-collection-service \
|
||||
--dapr-app-port 6001 \
|
||||
--dapr-app-protocol http \
|
||||
--env-vars "VEHICLE_REGISTRATION_SERVICE_BASE_URL=https://$VEHICLE_REGISTRATION_SERVICE_FQDN"
|
||||
```
|
||||
|
||||
1. Create a Container App for TrafficControlService with the following command:
|
||||
|
||||
```bash
|
||||
az containerapp create \
|
||||
--name ca-traffic-control-service \
|
||||
--resource-group rg-dapr-workshop-java \
|
||||
--environment cae-dapr-workshop-java \
|
||||
--image "$CONTAINER_REGISTRY_URL/traffic-control-service:latest" \
|
||||
--target-port 6000 \
|
||||
--ingress external \
|
||||
--min-replicas 1 \
|
||||
--max-replicas 1 \
|
||||
--enable-dapr \
|
||||
--dapr-app-id traffic-control-service \
|
||||
--dapr-app-port 6000 \
|
||||
--dapr-app-protocol http
|
||||
```
|
||||
|
||||
1. Get the FQDN of traffic control service and save it in a variable:
|
||||
|
||||
- Linux/Unix shell:
|
||||
|
||||
```bash
|
||||
TRAFFIC_CONTROL_SERVICE_FQDN=$(az containerapp show \
|
||||
--name ca-traffic-control-service \
|
||||
--resource-group rg-dapr-workshop-java \
|
||||
--query "properties.configuration.ingress.fqdn" \
|
||||
-o tsv)
|
||||
echo $TRAFFIC_CONTROL_SERVICE_FQDN
|
||||
```
|
||||
|
||||
- Powershell:
|
||||
|
||||
```powershell
|
||||
$TRAFFIC_CONTROL_SERVICE_FQDN = $(az containerapp show `
|
||||
--name ca-traffic-control-service `
|
||||
--resource-group rg-dapr-workshop-java `
|
||||
--query "properties.configuration.ingress.fqdn" `
|
||||
-o tsv)
|
||||
$TRAFFIC_CONTROL_SERVICE_FQDN
|
||||
```
|
||||
|
||||
Notice that the FQDN is in the format `<service-name>.<unique-name>.<region>.azurecontainerapps.io`, without the `internal` segment. The traffic control service is exposed with external ingress, i.e. it is accessible from outside the container apps environment, so the simulation can use it to test the application.
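Now that the three container apps are created, you can optionally confirm that Dapr is enabled on the traffic control and fine collection apps. The sketch below assumes the Dapr settings are exposed under `properties.configuration.dapr` in the CLI output:

```bash
# Show the Dapr configuration (app id, app port, protocol) of the traffic control app.
az containerapp show \
  --name ca-traffic-control-service \
  --resource-group rg-dapr-workshop-java \
  --query "properties.configuration.dapr"
```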
|
||||
|
||||
## Step 4 - Run the simulation
|
||||
|
||||
1. Set the following environment variable:
|
||||
|
||||
- Linux/Unix shell:
|
||||
|
||||
```bash
|
||||
export TRAFFIC_CONTROL_SERVICE_BASE_URL=https://$TRAFFIC_CONTROL_SERVICE_FQDN
|
||||
```
|
||||
|
||||
- Powershell:
|
||||
|
||||
```powershell
|
||||
$env:TRAFFIC_CONTROL_SERVICE_BASE_URL = "https://$TRAFFIC_CONTROL_SERVICE_FQDN"
|
||||
```
|
||||
|
||||
1. In the root folder of the simulation (`Simulation`), start the simulation:
|
||||
|
||||
```bash
|
||||
mvn spring-boot:run
|
||||
```
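If the simulation does not seem to generate any traffic, you can first verify that the traffic control service ingress is reachable from your machine. The check below uses the base URL exported in the Linux/Unix shell above; even an HTTP error code such as 404 proves the ingress is up, only a connection failure indicates a problem:

```bash
# Print only the HTTP status code returned by the traffic control service ingress.
curl -s -o /dev/null -w "%{http_code}\n" "$TRAFFIC_CONTROL_SERVICE_BASE_URL/"
```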
|
||||
|
||||
## Step 5 - Test the microservices running in ACA
|
||||
|
||||
You can access the logs of the container apps from the [Azure Portal](https://portal.azure.com/) or directly from a terminal window. The following steps show how to access the logs from the terminal for each microservice.
|
||||
|
||||
|
||||
### Traffic Control Service
|
||||
|
||||
1. Run the following command to identify the running revision of the traffic control service container app:
|
||||
|
||||
- Linux/Unix shell:
|
||||
|
||||
```bash
|
||||
TRAFFIC_CONTROL_SERVICE_REVISION=$(az containerapp revision list -n ca-traffic-control-service -g rg-dapr-workshop-java --query "[0].name" -o tsv)
|
||||
echo $TRAFFIC_CONTROL_SERVICE_REVISION
|
||||
```
|
||||
|
||||
- Powershell:
|
||||
|
||||
```powershell
|
||||
$TRAFFIC_CONTROL_SERVICE_REVISION = az containerapp revision list -n ca-traffic-control-service -g rg-dapr-workshop-java --query "[0].name" -o tsv
|
||||
$TRAFFIC_CONTROL_SERVICE_REVISION
|
||||
```
|
||||
|
||||
2. Run the following command to get the last 10 lines of traffic control service logs from Log Analytics Workspace:
|
||||
|
||||
```bash
|
||||
az monitor log-analytics query \
|
||||
--workspace $LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID \
|
||||
--analytics-query "ContainerAppConsoleLogs_CL | where RevisionName_s == '$TRAFFIC_CONTROL_SERVICE_REVISION' | project TimeGenerated, Log_s | sort by TimeGenerated desc | take 10" \
|
||||
--out table
|
||||
```
|
||||
|
||||
### Fine Collection Service
|
||||
|
||||
1. Run the following command to identify the running revision of the fine collection service container app:
|
||||
|
||||
- Linux/Unix shell:
|
||||
|
||||
```bash
|
||||
FINE_COLLECTION_SERVICE_REVISION=$(az containerapp revision list -n ca-fine-collection-service -g rg-dapr-workshop-java --query "[0].name" -o tsv)
|
||||
echo $FINE_COLLECTION_SERVICE_REVISION
|
||||
```
|
||||
|
||||
- Powershell:
|
||||
|
||||
```powershell
|
||||
$FINE_COLLECTION_SERVICE_REVISION = az containerapp revision list -n ca-fine-collection-service -g rg-dapr-workshop-java --query "[0].name" -o tsv
|
||||
$FINE_COLLECTION_SERVICE_REVISION
|
||||
```
|
||||
|
||||
2. Run the following command to get the last 10 lines of fine collection service logs from Log Analytics Workspace:
|
||||
|
||||
```bash
|
||||
az monitor log-analytics query \
|
||||
--workspace $LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID \
|
||||
--analytics-query "ContainerAppConsoleLogs_CL | where RevisionName_s == '$FINE_COLLECTION_SERVICE_REVISION' | project TimeGenerated, Log_s | sort by TimeGenerated desc | take 10" \
|
||||
--out table
|
||||
```
|
||||
|
||||
### Vehicle Registration Service
|
||||
|
||||
1. Run the following command to identify the running revision of the vehicle registration service container app:
|
||||
|
||||
- Linux/Unix shell:
|
||||
|
||||
```bash
|
||||
VEHICLE_REGISTRATION_SERVICE_REVISION=$(az containerapp revision list -n ca-vehicle-registration-service -g rg-dapr-workshop-java --query "[0].name" -o tsv)
|
||||
echo $VEHICLE_REGISTRATION_SERVICE_REVISION
|
||||
```
|
||||
|
||||
- Powershell:
|
||||
|
||||
```powershell
|
||||
$VEHICLE_REGISTRATION_SERVICE_REVISION = az containerapp revision list -n ca-vehicle-registration-service -g rg-dapr-workshop-java --query "[0].name" -o tsv
|
||||
$VEHICLE_REGISTRATION_SERVICE_REVISION
|
||||
```
|
||||
|
||||
2. Run the following command to get the last 10 lines of vehicle registration service logs from Log Analytics Workspace:
|
||||
|
||||
```bash
|
||||
az monitor log-analytics query \
|
||||
--workspace $LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID \
|
||||
--analytics-query "ContainerAppConsoleLogs_CL | where RevisionName_s == '$VEHICLE_REGISTRATION_SERVICE_REVISION' | project TimeGenerated, Log_s | sort by TimeGenerated desc | take 10" \
|
||||
--out table
|
||||
```
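As an alternative to querying Log Analytics, you can also stream the console logs of a container app directly from the CLI, which is convenient while the simulation is running. The example below targets the traffic control service; the same command works for the other services by changing the `--name` value:

```bash
# Stream the console logs of the traffic control container app.
az containerapp logs show \
  --name ca-traffic-control-service \
  --resource-group rg-dapr-workshop-java \
  --follow
```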
|
||||
{% assign stepNumber = 2 %}
|
||||
{% include 05-assignment-5-aks-aca/02-aca/3-build-deploy-test.md %}
|
||||
|
||||
## Next Steps
|
||||
|
||||
|
@ -538,7 +94,7 @@ This is the end of the workshop!
|
|||
- [Prevent port collisions]({{ site.baseurl }}{% link modules/08-additional-topics/1-prevent-port-collisions.md %})
|
||||
- [Dapr and Service Meshes]({{ site.baseurl }}{% link modules/08-additional-topics/2-dapr-and-service-meshes.md %})
|
||||
- You can continue the workshop with the **bonus assignments** to learn more about other Dapr building blocks:
|
||||
- [Service-to-service invocation using Dapr]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-to-service-invocation/index.md %})
|
||||
- [Service invocation using Dapr]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-invocation/index.md %})
|
||||
- [Azure Cosmos DB as a state store]({{ site.baseurl }}{% link modules/09-bonus-assignments/02-state-store/index.md %})
|
||||
- [Azure Key Vault as a secret store]({{ site.baseurl }}{% link modules/09-bonus-assignments/03-secret-store/index.md %})
|
||||
|
||||
|
@ -548,6 +104,8 @@ This is the end of the workshop!
|
|||
> When the workshop is done, please follow the [cleanup instructions]({{ site.baseurl }}{% link modules/10-cleanup/index.md %}) to delete the resources created in this workshop.
|
||||
>
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Deploy to ACA]({{ site.baseurl }}{% link modules/05-assignment-5-aks-aca/02-aca/index.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
|
|
|
@ -7,6 +7,7 @@ nav_order: 2
|
|||
layout: default
|
||||
|
||||
has_toc: true
|
||||
---
|
||||
|
||||
|
||||
# (Optional) Observability with Dapr using Application Insights
|
||||
|
@ -23,56 +24,19 @@ has_toc: true
|
|||
{:toc}
|
||||
</details>
|
||||
|
||||
|
||||
In this section, you will deploy Dapr service-to-service telemetry using Application Insights. When [creating the Azure Container Apps environment](https://learn.microsoft.com/en-us/cli/azure/containerapp/env?view=azure-cli-latest#az-containerapp-env-create), you can set Application Insights instrumentation key that is used by Dapr to export service-to-service telemetry to Application Insights.
|
||||
In this section, you will deploy Dapr service-to-service telemetry using [Application Insights](https://learn.microsoft.com/en-us/azure/azure-monitor/app/app-insights-overview?tabs=java). When [creating the Azure Container Apps environment](https://learn.microsoft.com/en-us/cli/azure/containerapp/env?view=azure-cli-latest#az-containerapp-env-create), you can set the Application Insights instrumentation key that Dapr uses to export service-to-service telemetry to Application Insights.
|
||||
|
||||
## Step 1: Create Application Insights resource
|
||||
|
||||
1. Create an Application Insights resource:
|
||||
|
||||
```bash
|
||||
az monitor app-insights component create --app cae-dapr-workshop-java --location eastus --kind web -g rg-dapr-workshop-java --application-type web
|
||||
```
|
||||
|
||||
You may be prompted to install the `application-insights` extension; if so, install it for this exercise.
|
||||
|
||||
1. Get the instrumentation key for the Application Insights and set it to the `INSTRUMENTATION_KEY` variable:
|
||||
|
||||
- Linux/Unix shell:
|
||||
|
||||
```bash
|
||||
INSTRUMENTATION_KEY=$(az monitor app-insights component show --app cae-dapr-workshop-java -g rg-dapr-workshop-java --query instrumentationKey)
|
||||
echo $INSTRUMENTATION_KEY
|
||||
```
|
||||
|
||||
- PowerShell:
|
||||
|
||||
```powershell
|
||||
|
||||
$INSTRUMENTATION_KEY = az monitor app-insights component show --app cae-dapr-workshop-java -g rg-dapr-workshop-java --query instrumentationKey
|
||||
|
||||
$INSTRUMENTATION_KEY
|
||||
```
|
||||
{% include 05-assignment-5-aks-aca/02-aca/0-1-setup-application-insights.md %}
|
||||
|
||||
## Step 2: Create Azure Container Apps environment
|
||||
|
||||
A [container apps environment](https://learn.microsoft.com/en-us/azure/container-apps/environment) acts as a secure boundary around our container apps. Containers deployed in the same environment share the same virtual network and write their logs to the same logging destination, in our case a Log Analytics workspace.
|
||||
|
||||
To create the container apps environment with Dapr service-to-service telemetry, you need to set `--dapr-instrumentation-key` parameter to the Application Insights instrumentation key. Use the following command to create the container apps environment:
|
||||
|
||||
```bash
|
||||
az containerapp env create \
|
||||
--resource-group rg-dapr-workshop-java \
|
||||
--location eastus \
|
||||
--name cae-dapr-workshop-java \
|
||||
--logs-workspace-id "$LOG_ANALYTICS_WORKSPACE_CUSTOMER_ID" \
|
||||
--logs-workspace-key "$LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET" \
|
||||
--dapr-instrumentation-key "$INSTRUMENTATION_KEY"
|
||||
```
|
||||
{% include 05-assignment-5-aks-aca/02-aca/0-2-setup-container-apps-env.md showObservability=true %}
|
||||
|
||||
## Step 3: Deploy the application
|
||||
|
||||
To deploy the application, follow all the instructions after the creation of the container apps environment in [Deploying Applications to Azure Container Apps (ACA) with Dapr]({{ site.baseurl }}{% link modules/05-assignment-5-aks-aca/02-aca/1-aca-instructions.md %}). After the completion of the deployment, you can see the service-to-service telemetry in the Application Insights as shown below.
|
||||
To deploy the application, follow all the instructions after the creation of the container apps environment in [Deploying Applications to Azure Container Apps (ACA) with Dapr]({{ site.baseurl }}{% link modules/05-assignment-5-aks-aca/02-aca/1-aca-instructions.md %}). After the deployment and testing are complete, you can see the service-to-service telemetry in Application Insights as shown below.
|
||||
|
||||
## Step 4: View the telemetry in Application Insights
|
||||
|
||||
|
@ -82,8 +46,8 @@ To deploy the application, follow all the instructions after the creation of the
|
|||
|
||||
![Dapr Telemetry](../../../assets/image/../images/dapr-telemetry.png)
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Deploy to ACA with Dapr]({{ site.baseurl }}{% link modules/05-assignment-5-aks-aca/02-aca/1-aca-instructions.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
|
||||
|
|
|
@ -16,6 +16,8 @@ In this assignment, you will deploy the 3 services and the simulation to [Azure
|
|||
> If you want to enable Dapr telemetry, you need to create the container apps environment with Application Insights. When creating the environment, you can follow the instructions in the optional exercise to [set up Dapr Telemetry in ACA using Application Insights]({{ site.baseurl }}{% link modules/05-assignment-5-aks-aca/02-aca/2-observability.md %}).
|
||||
>
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Assignment 4 - Observability]({{ site.baseurl }}{% link modules/04-assignment-4-observability-zipkin/index.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
|
|
|
@ -9,6 +9,8 @@ layout: default
|
|||
|
||||
In this assignment, you will deploy the 3 services and the simulation to [Azure Kubernetes Service (AKS)]({{ site.baseurl }}{% link modules/05-assignment-5-aks-aca/01-aks/index.md %}) and [Azure Container Apps (ACA)]({{ site.baseurl }}{% link modules/05-assignment-5-aks-aca/02-aca/index.md %}). You can choose to deploy to either AKS or ACA, but not both.
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[Azure Kubernetes Service (AKS)]({{ site.baseurl }}{% link modules/05-assignment-5-aks-aca/01-aks/index.md %}){: .btn }
|
||||
</span>
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
title: Invoke Vehicle Registration Service from Fine Collection Service
|
||||
parent: Service-to-service invocation using Dapr
|
||||
parent: Service invocation using Dapr
|
||||
grand_parent: Bonus Assignments
|
||||
has_children: false
|
||||
nav_order: 1
|
||||
|
@ -23,50 +23,15 @@ has_toc: true
|
|||
|
||||
In this assignment, you will use Dapr to invoke the `VehicleRegistrationService` from the `FineCollectionService`. You will use the [service invocation building block](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/service-invocation-overview/) provided by Dapr.
|
||||
|
||||
## Step 1: Use Dapr to invoke the Vehicle Registration Service from the Fine Collection Service
|
||||
<!-- ------------ STEP 1 - INVOKE VEHICLE REGISTRATION SERVICE ------------- -->
|
||||
|
||||
With Dapr, services can invoke other services using their application id. This is done by using the Dapr client to make calls to the Dapr sidecar. The Vehicle Registration Service will be started with a Dapr sidecar.
|
||||
|
||||
1. Open the `FineCollectionService` project in your code editor and navigate to the `DaprVehicleRegistrationClient` class. This class implements the `VehicleRegistrationClient` interface and uses the Dapr client to invoke the Vehicle Registration Service. Inspect the implementation of this class.
|
||||
|
||||
2. Navigate to the `FineCollectionConfiguration` class to switch between the default and Dapr implementation of the `VehicleRegistrationClient`.
|
||||
|
||||
3. **Uncomment** the following `@Bean` method:
|
||||
|
||||
```java
|
||||
// @Bean
|
||||
// public VehicleRegistrationClient vehicleRegistrationClient(final DaprClient daprClient) {
|
||||
// return new DaprVehicleRegistrationClient(daprClient);
|
||||
// }
|
||||
```
|
||||
|
||||
4. **Uncomment** the following `@Bean` method:
|
||||
|
||||
```java
|
||||
// @Bean
|
||||
// public DaprClient daprClient() {
|
||||
// return new DaprClientBuilder().build();
|
||||
// }
|
||||
```
|
||||
|
||||
5. **Comment out** the following `@Bean` method:
|
||||
|
||||
```java
|
||||
@Bean
|
||||
public VehicleRegistrationClient vehicleRegistrationClient(final RestTemplate restTemplate) {
|
||||
return new DefaultVehicleRegistrationClient(restTemplate, vehicleInformationAddress);
|
||||
}
|
||||
```
|
||||
|
||||
6. Check all your code-changes are correct by building the code. Execute the following command in the terminal window:
|
||||
|
||||
```bash
|
||||
mvn package
|
||||
```
|
||||
{% assign stepNumber = 1 %}
|
||||
{% include 09-bonus-assignments/01-service-invocation/1-use-dapr-to-invoke-vehicle-registration-service.md %}
|
||||
|
||||
Now you can test the application.
|
||||
|
||||
## Step 2: Test the application
|
||||
{% assign stepNumber = stepNumber | plus: 1 %}
|
||||
## Step {{stepNumber}}: Test the application
|
||||
|
||||
You're going to start all the services now.
|
||||
|
||||
|
@ -80,6 +45,14 @@ You're going to start all the services now.
|
|||
dapr run --app-id vehicleregistrationservice --app-port 6002 --dapr-http-port 3602 --dapr-grpc-port 60002 --components-path ../dapr/components mvn spring-boot:run
|
||||
```
|
||||
|
||||
The `FineCollectionService` Dapr sidecar uses the Vehicle Registration Service's `app-id` to resolve the service invocation endpoint. The name (i.e. the `app-id`) of `VehicleRegistrationService` is set in the application properties of `FineCollectionService` (i.e. `application.yaml`) as shown below:
|
||||
|
||||
```yaml
|
||||
vehicle-registration-service.name: ${VEHICLE_REGISTRATION_SERVICE:vehicleregistrationservice}
|
||||
```
|
||||
|
||||
The default value is `vehicleregistrationservice`, which can be overridden using the environment variable `VEHICLE_REGISTRATION_SERVICE`.
|
||||
|
||||
1. Open a **new** terminal window and change the current folder to `FineCollectionService`.
|
||||
|
||||
1. Enter the following command to run the FineCollectionService with a Dapr sidecar:
|
||||
|
@ -106,9 +79,11 @@ You're going to start all the services now.
|
|||
|
||||
You should see the same logs as before. Obviously, the behavior of the application is exactly the same as before.
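If you are curious how the sidecar resolves the call, you can invoke the Vehicle Registration Service by its `app-id` directly through the Dapr HTTP API while the services are running. This is only a sketch: the sidecar HTTP port `3601` (the port used when starting the FineCollectionService with `dapr run`), the `vehicleinfo/{licenseNumber}` route and the sample license number are assumptions, so adjust them to your setup:

```bash
# Ask a Dapr sidecar (HTTP port 3601 assumed) to invoke the app-id
# "vehicleregistrationservice" and call its /vehicleinfo/{licenseNumber} endpoint.
curl http://localhost:3601/v1.0/invoke/vehicleregistrationservice/method/vehicleinfo/XT-123-G
```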
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[Deploy to AKS]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-to-service-invocation/2-deploying-to-aks.md %}){: .btn }
|
||||
[Deploy to AKS]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-invocation/2-deploying-to-aks.md %}){: .btn }
|
||||
</span>
|
||||
<!-- <span class="fs-3">
|
||||
[Deploy to ACA]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-to-service-invocation/3-deploying-to-aca.md %}){: .btn }
|
||||
</span> -->
|
||||
<span class="fs-3">
|
||||
[Deploy to ACA]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-invocation/3-deploying-to-aca.md %}){: .btn }
|
||||
</span>
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
title: Deploying service-to-service invocation to Azure Kubernetes Service
|
||||
parent: Service-to-service invocation using Dapr
|
||||
title: Deploying service invocation to Azure Kubernetes Service
|
||||
parent: Service invocation using Dapr
|
||||
grand_parent: Bonus Assignments
|
||||
has_children: false
|
||||
nav_order: 2
|
||||
|
@ -8,7 +8,7 @@ layout: default
|
|||
has_toc: true
|
||||
---
|
||||
|
||||
# Deploying service-to-service invocation to Azure Kubernetes Service
|
||||
# Deploying service invocation to Azure Kubernetes Service
|
||||
|
||||
{: .no_toc }
|
||||
|
||||
|
@ -21,15 +21,15 @@ has_toc: true
|
|||
{:toc}
|
||||
</details>
|
||||
|
||||
In this assignment, you will deploy the service-to-service communication to Azure Kubernetes Service (AKS). You will use the [service invocation building block](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/service-invocation-overview/) provided by Dapr.
|
||||
In this assignment, you will deploy service invocation to Azure Kubernetes Service (AKS). You will use the [service invocation building block](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/service-invocation-overview/) provided by Dapr.
|
||||
|
||||
{: .important-title }
|
||||
> Pre-requisite
|
||||
>
|
||||
> The first part [Invoke Vehicle Registration Service from Fine Collection Service using Dapr]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-to-service-invocation/1-invoke-service-using-dapr.md %}) is a pre-requisite for this assignment.
|
||||
> The first part [Invoke Vehicle Registration Service from Fine Collection Service using Dapr]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-invocation/1-invoke-service-using-dapr.md %}) is a pre-requisite for this assignment.
|
||||
>
|
||||
|
||||
## Step 1: Deploy service-to-service communication to AKS
|
||||
## Step 1: Deploy service invocation to AKS
|
||||
|
||||
1. Open `deploy/vehicleregistrationservice.yaml` in your code editor and **uncomment** the following lines:
|
||||
|
||||
|
@ -51,7 +51,7 @@ In this assignment, you will deploy the service-to-service communication to Azur
|
|||
|
||||
Where `$CONTAINER_REGISTRY` is the name of your Azure Container Registry.
|
||||
|
||||
1. In the root folder of FineCollectionService microservice, run the following command:
|
||||
1. In the root folder of `FineCollectionService`, run the following command to build and push the image:
|
||||
|
||||
```bash
|
||||
mvn spring-boot:build-image
|
||||
|
@ -93,6 +93,8 @@ In this assignment, you will deploy the service-to-service communication to Azur
|
|||
> When the workshop is done, please follow the [cleanup instructions]({{ site.baseurl }}{% link modules/10-cleanup/index.md %}) to delete the resources created in this workshop.
|
||||
>
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Invoke Service using Dapr]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-to-service-invocation/1-invoke-service-using-dapr.md %}){: .btn .mt-7 }
|
||||
[< Invoke Service using Dapr]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-invocation/1-invoke-service-using-dapr.md %}){: .btn .mt-7 }
|
||||
</span>
|
|
@ -0,0 +1,45 @@
|
|||
---
|
||||
title: Deploying service invocation to Azure Container Apps
|
||||
parent: Service invocation using Dapr
|
||||
grand_parent: Bonus Assignments
|
||||
has_children: false
|
||||
nav_order: 3
|
||||
layout: default
|
||||
has_toc: true
|
||||
---
|
||||
|
||||
# Deploying service invocation to Azure Container Apps
|
||||
|
||||
{: .no_toc }
|
||||
|
||||
<details open markdown="block">
|
||||
<summary>
|
||||
Table of contents
|
||||
</summary>
|
||||
{: .text-delta }
|
||||
- TOC
|
||||
{:toc}
|
||||
</details>
|
||||
|
||||
In this assignment, you will deploy service invocation to Azure Container Apps (ACA). You will use the [service invocation building block](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/service-invocation-overview/) provided by Dapr.
|
||||
|
||||
{: .important-title }
|
||||
> Pre-requisite
|
||||
>
|
||||
> The first part [Invoke Vehicle Registration Service from Fine Collection Service using Dapr]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-invocation/1-invoke-service-using-dapr.md %}) is a pre-requisite for this assignment.
|
||||
>
|
||||
|
||||
{% assign stepNumber = 1 %}
|
||||
{% include 09-bonus-assignments/01-service-invocation/3-deploy-to-aca.md %}
|
||||
|
||||
{: .important-title }
|
||||
> Cleanup
|
||||
>
|
||||
> When the workshop is done, please follow the [cleanup instructions]({{ site.baseurl }}{% link modules/10-cleanup/index.md %}) to delete the resources created in this workshop.
|
||||
>
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Invoke Service using Dapr]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-invocation/1-invoke-service-using-dapr.md %}){: .btn .mt-7 }
|
||||
</span>
|
|
@ -1,12 +1,12 @@
|
|||
---
|
||||
title: Service-to-service invocation using Dapr
|
||||
title: Service invocation using Dapr
|
||||
parent: Bonus Assignments
|
||||
has_children: true
|
||||
nav_order: 1
|
||||
layout: default
|
||||
---
|
||||
|
||||
# Service-to-service invocation using Dapr
|
||||
# Service invocation using Dapr
|
||||
|
||||
This bonus assignment is about using Dapr to invoke the `VehicleRegistrationService` from the `FineCollectionService`. You will use the [service invocation building block](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/service-invocation-overview/) provided by Dapr.
|
||||
|
||||
|
@ -16,6 +16,8 @@ This bonus assignment is about using Dapr to invoke the `VehicleRegistrationServ
|
|||
> The first part is a pre-requisite for the deployment to Azure Kubernetes Service (AKS) and to Azure Container Apps (ACA).
|
||||
>
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[Let's start!]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-to-service-invocation/1-invoke-service-using-dapr.md %}){: .btn .mt-7 }
|
||||
[Let's start!]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-invocation/1-invoke-service-using-dapr.md %}){: .btn .mt-7 }
|
||||
</span>
|
|
@ -1,43 +0,0 @@
|
|||
---
|
||||
title: Deploying service-to-service invocation to Azure Container Apps
|
||||
parent: Service-to-service invocation using Dapr
|
||||
grand_parent: Bonus Assignments
|
||||
has_children: false
|
||||
nav_order: 3
|
||||
layout: default
|
||||
nav_exclude: true
|
||||
has_toc: true
|
||||
---
|
||||
|
||||
# Deploying service-to-service invocation to Azure Container Apps
|
||||
|
||||
{: .no_toc }
|
||||
|
||||
<details open markdown="block">
|
||||
<summary>
|
||||
Table of contents
|
||||
</summary>
|
||||
{: .text-delta }
|
||||
- TOC
|
||||
{:toc}
|
||||
</details>
|
||||
|
||||
In this assignment, you will deploy the service-to-service communication to Azure Container Apps (ACA). You will use the [service invocation building block](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/service-invocation-overview/) provided by Dapr.
|
||||
|
||||
{: .important-title }
|
||||
> Pre-requisite
|
||||
>
|
||||
> The first part [Invoke Vehicle Registration Service from Fine Collection Service using Dapr]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-to-service-invocation/1-invoke-service-using-dapr.md %}) is a pre-requisite for this assignment.
|
||||
>
|
||||
|
||||
**TODO: Update the documentation**
|
||||
|
||||
{: .important-title }
|
||||
> Cleanup
|
||||
>
|
||||
> When the workshop is done, please follow the [cleanup instructions]({{ site.baseurl }}{% link modules/10-cleanup/index.md %}) to delete the resources created in this workshop.
|
||||
>
|
||||
|
||||
<span class="fs-3">
|
||||
[< Invoke Service using Dapr]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-to-service-invocation/1-invoke-service-using-dapr.md %}){: .btn .mt-7 }
|
||||
</span>
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
title: Use Azure Cosmos DB to store the state of a vehicle using Dapr
|
||||
parent: Use Azure Cosmos DB as a state store
|
||||
title: Using Azure Cosmos DB to store the state of a vehicle with Dapr
|
||||
parent: Using Azure Cosmos DB as a state store
|
||||
grand_parent: Bonus Assignments
|
||||
has_children: false
|
||||
nav_order: 1
|
||||
|
@ -8,7 +8,7 @@ layout: default
|
|||
has_toc: true
|
||||
---
|
||||
|
||||
# Use Azure Cosmos DB to store the state of a vehicle using Dapr
|
||||
# Using Azure Cosmos DB to store the state of a vehicle with Dapr
|
||||
|
||||
{: .no_toc }
|
||||
|
||||
|
@ -21,76 +21,18 @@ has_toc: true
|
|||
{:toc}
|
||||
</details>
|
||||
|
||||
This bonus assignment is about using Azure Cosmos DB as a [state store](https://docs.dapr.io/operations/components/setup-state-store/) for the `TrafficControlService`. You will use the [Azure Cosmos DB state store component](https://docs.dapr.io/reference/components-reference/supported-state-stores/setup-azure-cosmosdb/) provided by Dapr.
|
||||
This bonus assignment is about using Azure Cosmos DB as a [state store](https://docs.dapr.io/operations/components/setup-state-store/) for the `TrafficControlService` instead of keeping the state in memory. You will use the [Azure Cosmos DB state store component](https://docs.dapr.io/reference/components-reference/supported-state-stores/setup-azure-cosmosdb/) provided by Dapr.
|
||||
|
||||
## Step 1: Create an Azure Cosmos DB
|
||||
|
||||
1. Open a terminal window.
|
||||
|
||||
1. An Azure Cosmos DB account for the SQL API is a globally distributed, multi-model database service. The account name needs to be globally unique. Use the following command to generate a unique name:
|
||||
|
||||
- Linux/Unix shell:
|
||||
|
||||
```bash
|
||||
UNIQUE_IDENTIFIER=$(LC_ALL=C tr -dc a-z0-9 </dev/urandom | head -c 5)
|
||||
COSMOS_DB="cosno-dapr-workshop-java-$UNIQUE_IDENTIFIER"
|
||||
echo $COSMOS_DB
|
||||
```
|
||||
|
||||
- Powershell:
|
||||
|
||||
```powershell
|
||||
$ACCEPTED_CHAR = [Char[]]'abcdefghijklmnopqrstuvwxyz0123456789'
|
||||
$UNIQUE_IDENTIFIER = (Get-Random -Count 5 -InputObject $ACCEPTED_CHAR) -join ''
|
||||
$COSMOS_DB = "cosno-dapr-workshop-java-$UNIQUE_IDENTIFIER"
|
||||
$COSMOS_DB
|
||||
```
|
||||
|
||||
1. Create a Cosmos DB account for SQL API:
|
||||
|
||||
```bash
|
||||
az cosmosdb create --name $COSMOS_DB --resource-group rg-dapr-workshop-java --locations regionName=eastus failoverPriority=0 isZoneRedundant=False
|
||||
```
|
||||
|
||||
{: .important }
|
||||
> The name of the Cosmos DB account must be unique across all Azure Cosmos DB accounts in the world. If you get an error that the name is already taken, try a different name. In the following steps, please update the name of the Cosmos DB account accordingly.
|
||||
|
||||
1. Create a SQL API database:
|
||||
|
||||
```bash
|
||||
az cosmosdb sql database create --account-name $COSMOS_DB --resource-group rg-dapr-workshop-java --name dapr-workshop-java-database
|
||||
```
|
||||
|
||||
1. Create a SQL API container:
|
||||
|
||||
```bash
|
||||
az cosmosdb sql container create --account-name $COSMOS_DB --resource-group rg-dapr-workshop-java --database-name dapr-workshop-java-database --name vehicle-state --partition-key-path /partitionKey --throughput 400
|
||||
```
|
||||
|
||||
{: .important }
|
||||
> The partition key path is `/partitionKey` as mentioned in the [Dapr documentation](https://docs.dapr.io/reference/components-reference/supported-state-stores/setup-azure-cosmosdb/#setup-azure-cosmosdb).
|
||||
>
|
||||
|
||||
1. Get the Cosmos DB account URL and note it down. You will need it in the next step and to deploy it to Azure.
|
||||
|
||||
```bash
|
||||
az cosmosdb show --name $COSMOS_DB --resource-group rg-dapr-workshop-java --query documentEndpoint -o tsv
|
||||
```
|
||||
|
||||
1. Get the master key and note it down. You will need it in the next step and to deploy it to Azure.
|
||||
|
||||
```bash
|
||||
az cosmosdb keys list --name $COSMOS_DB --resource-group rg-dapr-workshop-java --type keys --query primaryMasterKey -o tsv
|
||||
```
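Before wiring the account into Dapr, you can verify that the database and container were created as expected (a quick check using the variables from the previous steps):

```bash
# Confirm that the vehicle-state container exists in the workshop database.
az cosmosdb sql container show \
  --account-name $COSMOS_DB \
  --resource-group rg-dapr-workshop-java \
  --database-name dapr-workshop-java-database \
  --name vehicle-state \
  --query "name" \
  --output tsv
```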
|
||||
{% include 09-bonus-assignments/02-state-store/1-1-create-cosmos-db.md %}
|
||||
|
||||
## Step 2: Configure the Azure Cosmos DB state store component
|
||||
|
||||
1. Open the file `dapr/azure-cosmosdb-statestore.yaml` in your code editor.
|
||||
1. Open the file `dapr/azure-cosmosdb-statestore.yaml` in your code editor and look at the content of the file.
|
||||
|
||||
1. **Copy or Move** this file `dapr/azure-cosmosdb-statestore.yaml` to `dapr/components` folder.
|
||||
|
||||
1. **Move** the file `dapr/components/redis-statestore.yaml` back to `dapr/` folder.
|
||||
|
||||
1. **Replace** the following placeholders in the file `dapr/components/azure-cosmosdb-statestore.yaml` with the values you noted down in the previous step:
|
||||
|
||||
- `<YOUR_COSMOSDB_ACCOUNT_URL>` with the Cosmos DB account URL
|
||||
|
@ -98,35 +40,7 @@ This bonus assignment is about using Azure Cosmos DB as a [state store](https://
|
|||
|
||||
## Step 3: Add the Azure Cosmos DB state store to the `TrafficControlService`
|
||||
|
||||
1. Open the `TrafficControlService` project in your code editor and navigate to the `DaprVehicleStateRepository` class. This class uses the Dapr client to store and retrieve the state of a vehicle. Inspect the implementation of this class.
|
||||
|
||||
1. Navigate to the `TrafficControlConfiguration` class to switch from the `InMemoryVehicleStateRepository` to the `DaprVehicleStateRepository`.
|
||||
|
||||
1. **Update** the `@Bean` method to instantiate `DaprVehicleStateRepository` instead of `InMemoryVehicleStateRepository`:
|
||||
|
||||
```java
|
||||
@Bean
|
||||
public VehicleStateRepository vehicleStateRepository(final DaprClient daprClient) {
|
||||
return new DaprVehicleStateRepository(daprClient);
|
||||
}
|
||||
```
|
||||
|
||||
1. **Uncomment** the following `@Bean` method if not already done:
|
||||
|
||||
```java
|
||||
// @Bean
|
||||
// public DaprClient daprClient() {
|
||||
// return new DaprClientBuilder()
|
||||
// .withObjectSerializer(new JsonObjectSerializer())
|
||||
// .build();
|
||||
// }
|
||||
```
|
||||
|
||||
1. Check all your code-changes are correct by building the code. Execute the following command in the terminal window:
|
||||
|
||||
```bash
|
||||
mvn package
|
||||
```
|
||||
{% include 09-bonus-assignments/02-state-store/1-3-update-traffic-control-service.md %}
|
||||
|
||||
Now you can test the application.
|
||||
|
||||
|
@ -170,9 +84,11 @@ You're going to start all the services now.
|
|||
|
||||
You should see the same logs as before. Obviously, the behavior of the application is exactly the same as before.
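To convince yourself that the vehicle state now lives in Cosmos DB instead of memory, you can read it back through the Dapr state API of the TrafficControlService sidecar while the services are running. This is only a sketch with assumed values: the state store component name (`statestore`), the sidecar HTTP port (`3600`) and the key format all depend on your component file, your `dapr run` command and the simulation output, so adjust them accordingly:

```bash
# Read the stored state for one vehicle from the TrafficControlService sidecar.
# Replace RT-123-X with a license number that appeared in the simulation logs.
curl http://localhost:3600/v1.0/state/statestore/RT-123-X
```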
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[Deploy to AKS]({{ site.baseurl }}{% link modules/09-bonus-assignments/02-state-store/2-deploying-to-aks.md %}){: .btn }
|
||||
</span>
|
||||
<!-- <span class="fs-3">
|
||||
<span class="fs-3">
|
||||
[Deploy to ACA]({{ site.baseurl }}{% link modules/09-bonus-assignments/02-state-store/3-deploying-to-aca.md %}){: .btn }
|
||||
</span> -->
|
||||
</span>
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
title: Deploying Azure Cosmos DB state store to Azure Kubernetes Service
|
||||
parent: Use Azure Cosmos DB as a state store
|
||||
parent: Using Azure Cosmos DB as a state store
|
||||
grand_parent: Bonus Assignments
|
||||
has_children: false
|
||||
nav_order: 2
|
||||
|
@ -31,7 +31,7 @@ In this assignment, you will deploy the Azure Cosmos DB state store to Azure Kub
|
|||
> The account URL and the master key of the Azure Cosmos DB instance are required for this assignment. Please use the same Azure Cosmos DB instance as used in the first part of this assignment.
|
||||
>
|
||||
|
||||
### Step 1: Deploy Azure Cosmos DB state store to AKS
|
||||
## Step 1: Deploy Azure Cosmos DB state store to AKS
|
||||
|
||||
1. Create Kubernetes secret for the Azure Cosmos DB account URL and the master key using the following command:
|
||||
|
||||
|
@ -131,6 +131,8 @@ In this assignment, you will deploy the Azure Cosmos DB state store to Azure Kub
|
|||
> When the workshop is done, please follow the [cleanup instructions]({{ site.baseurl }}{% link modules/10-cleanup/index.md %}) to delete the resources created in this workshop.
|
||||
>
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Cosmos DB as a State store]({{ site.baseurl }}{% link modules/09-bonus-assignments/02-state-store/1-azure-cosmos-db-state-store.md %}){: .btn .mt-7 }
|
||||
[< Cosmos DB as a state store]({{ site.baseurl }}{% link modules/09-bonus-assignments/02-state-store/1-azure-cosmos-db-state-store.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
|
|
|
@ -1,11 +1,10 @@
|
|||
---
|
||||
title: Deploying Azure Cosmos DB state store to Azure Container Apps
|
||||
parent: Use Azure Cosmos DB as a state store
|
||||
parent: Using Azure Cosmos DB as a state store
|
||||
grand_parent: Bonus Assignments
|
||||
has_children: false
|
||||
nav_order: 3
|
||||
layout: default
|
||||
nav_exclude: true
|
||||
has_toc: true
|
||||
---
|
||||
|
||||
|
@ -29,8 +28,40 @@ In this assignment, you will deploy the Azure Cosmos DB state store to Azure Con
|
|||
>
|
||||
> The first part [Use Azure Cosmos DB to store the state of a vehicle using Dapr]({{ site.baseurl }}{% link modules/09-bonus-assignments/02-state-store/1-azure-cosmos-db-state-store.md %}) is a pre-requisite for this assignment.
|
||||
>
|
||||
> The account URL and the master key of the Azure Cosmos DB instance are required for this assignment. Please use the same Azure Cosmos DB instance as used in the first part of this assignment.
|
||||
>
|
||||
|
||||
**TODO: Update the documentation**
|
||||
## Step 1: Deploy Azure Cosmos DB state store component to ACA
|
||||
|
||||
1. Remove `azure-cosmosdb-statestore.yaml` from `dapr/components` folder.
|
||||
|
||||
1. Open the file `dapr/aca-azure-cosmosdb-statestore.yaml` in your code editor and compare its content with that of `dapr/azure-cosmosdb-statestore.yaml` from the previous assignment.
|
||||
|
||||
1. **Copy or Move** this file `dapr/aca-azure-cosmosdb-statestore.yaml` to `dapr/components` folder.
|
||||
|
||||
1. **Replace** the following placeholders in this file `dapr/components/aca-azure-cosmosdb-statestore.yaml` with the values you noted down in the previous assignment:
|
||||
|
||||
- `<YOUR_COSMOSDB_ACCOUNT_URL>` with the Cosmos DB account URL
|
||||
- `<YOUR_COSMOSDB_MASTER_KEY>` with the master key
|
||||
|
||||
1. Go to the root folder of the repository.
|
||||
|
||||
1. Enter the following command to deploy the `statestore` Dapr component:
|
||||
|
||||
```bash
|
||||
az containerapp env dapr-component set \
|
||||
--name cae-dapr-workshop-java \
|
||||
--resource-group rg-dapr-workshop-java \
|
||||
--dapr-component-name statestore \
|
||||
--yaml ./dapr/components/aca-azure-cosmosdb-statestore.yaml
|
||||
```
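You can confirm that the component was registered correctly with the following command, which returns the `statestore` component as stored in the environment:

```bash
# Show the statestore Dapr component registered in the container apps environment.
az containerapp env dapr-component show \
  --name cae-dapr-workshop-java \
  --resource-group rg-dapr-workshop-java \
  --dapr-component-name statestore
```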
|
||||
|
||||
<!-- ----------------------- BUILD, DEPLOY AND TEST ------------------------ -->
|
||||
|
||||
{% assign stepNumber = 2 %}
|
||||
{% include 09-bonus-assignments/02-state-store/3-deploy-to-aca.md %}
|
||||
|
||||
<!-- ------------------------------- CLEANUP ------------------------------- -->
|
||||
|
||||
{: .important-title }
|
||||
> Cleanup
|
||||
|
@ -38,7 +69,9 @@ In this assignment, you will deploy the Azure Cosmos DB state store to Azure Con
|
|||
> When the workshop is done, please follow the [cleanup instructions]({{ site.baseurl }}{% link modules/10-cleanup/index.md %}) to delete the resources created in this workshop.
|
||||
>
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Cosmos DB as a State store]({{ site.baseurl }}{% link modules/09-bonus-assignments/02-state-store/1-azure-cosmos-db-state-store.md %}){: .btn .mt-7 }
|
||||
[< Cosmos DB as a state store]({{ site.baseurl }}{% link modules/09-bonus-assignments/02-state-store/1-azure-cosmos-db-state-store.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
|
||||
|
|
|
@ -1,12 +1,12 @@
|
|||
---
|
||||
title: Use Azure Cosmos DB as a state store
|
||||
title: Using Azure Cosmos DB as a state store
|
||||
parent: Bonus Assignments
|
||||
has_children: true
|
||||
nav_order: 2
|
||||
layout: default
|
||||
---
|
||||
|
||||
# Use Azure Cosmos DB as a state store
|
||||
# Using Azure Cosmos DB as a state store
|
||||
|
||||
This bonus assignment is about using Azure Cosmos DB as a [state store](https://docs.dapr.io/operations/components/setup-state-store/) for the `TrafficControlService`. You will use the [Azure Cosmos DB state store component](https://docs.dapr.io/reference/components-reference/supported-state-stores/setup-azure-cosmosdb/) provided by Dapr.
|
||||
|
||||
|
@ -16,6 +16,8 @@ This bonus assignment is about using Azure Cosmos DB as a [state store](https://
|
|||
> The first part is a pre-requisite for the deployment to Azure Kubernetes Service (AKS) and to Azure Container Apps (ACA).
|
||||
>
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[Let's start!]({{ site.baseurl }}{% link modules/09-bonus-assignments/02-state-store/1-azure-cosmos-db-state-store.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
|
|
|
@ -143,6 +143,8 @@ To create a secret in the Azure Key Vault, use the following command and replace
|
|||
> When deployed to Azure Kubernetes Service or Azure Container Apps, managed identity can be used instead of client secret. See [Using Managed Service Identities](https://docs.dapr.io/developing-applications/integrations/azure/authenticating-azure/#using-managed-service-identities).
|
||||
>
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[Retrieve a secret in the application]({{ site.baseurl }}{% link modules/09-bonus-assignments/03-secret-store/2-azure-key-vault-secret-store-code.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
|
|
|
@ -152,6 +152,8 @@ You should see the same logs as **Assignment 1**. Obviously, the behavior of the
|
|||
> When the workshop is done, please follow the [cleanup instructions]({{ site.baseurl }}{% link modules/10-cleanup/index.md %}) to delete the resources created in this workshop.
|
||||
>
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Secret Store setup]({{ site.baseurl }}{% link modules/09-bonus-assignments/03-secret-store/1-azure-key-vault-secret-store-setup.md %}){: .btn .mt-7 }
|
||||
</span>
|
|
@ -112,12 +112,20 @@ You're going to start all the services now.
|
|||
|
||||
You should see the same logs as **Assignment 3** with Azure Service Bus. Obviously, the behavior of the application is exactly the same as before.
|
||||
|
||||
{: .new-title }
|
||||
> Challenge
|
||||
>
|
||||
> You can use the secret store to store the Cosmos DB master key as well. Try it out! More information on Cosmos DB as a state store can be found in [Bonus Assignment: State Store]({{ site.baseurl }}{% link modules/09-bonus-assignments/02-state-store/index.md %}).
|
||||
>
|
||||
|
||||
{: .important-title }
|
||||
> Cleanup
|
||||
>
|
||||
> When the workshop is done, please follow the [cleanup instructions]({{ site.baseurl }}{% link modules/10-cleanup/index.md %}) to delete the resources created in this workshop.
|
||||
>
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Secret Store setup]({{ site.baseurl }}{% link modules/09-bonus-assignments/03-secret-store/1-azure-key-vault-secret-store-setup.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
</span>
|
||||
|
|
|
@ -18,6 +18,8 @@ The [first part]({{ site.baseurl }}{% link modules/09-bonus-assignments/03-secre
|
|||
> The first part is a pre-requisite for the second and third part. The second and third part can be done in any order.
|
||||
>
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[Let's start!]({{ site.baseurl }}{% link modules/09-bonus-assignments/03-secret-store/1-azure-key-vault-secret-store-setup.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
|
|
|
@ -9,7 +9,7 @@ layout: default
|
|||
|
||||
The bonus assignments are optional and are not required to complete the workshop. They are provided as additional learning opportunities. The assignments cover several Dapr building blocks that are not covered by the main workshop:
|
||||
|
||||
- Service invocation: [Service-to-service invocation using Dapr]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-to-service-invocation/index.md %})
|
||||
- Service invocation: [Service invocation using Dapr]({{ site.baseurl }}{% link modules/09-bonus-assignments/01-service-invocation/index.md %})
|
||||
- State management: [Use Azure Cosmos DB as a state store]({{ site.baseurl }}{% link modules/09-bonus-assignments/02-state-store/index.md %})
|
||||
- Secrets Management: [Use Azure Key Vault as a secret store]({{ site.baseurl }}{% link modules/09-bonus-assignments/03-secret-store/index.md %})
|
||||
|
||||
|
|
|
@ -0,0 +1,21 @@
|
|||
---
|
||||
title: Dapr Overview
|
||||
parent: Basic Concepts and Prerequisites
|
||||
grand_parent: Azure Container Apps Challenge
|
||||
has_children: false
|
||||
nav_order: 1
|
||||
layout: default
|
||||
---
|
||||
|
||||
# Dapr Overview
|
||||
|
||||
{% include 00-intro/1-dapr-overview.md relativeAssetsPath="../../../assets/" %}
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< ACA Challenge]({{ site.baseurl }}{% link modules/11-aca-challenge/index.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
<span class="fs-3">
|
||||
[Prerequisites >]({{ site.baseurl }}{% link modules/11-aca-challenge/00-intro/2-prerequisites.md %}){: .btn .float-right .mt-7 }
|
||||
</span>
|
|
@ -0,0 +1,21 @@
|
|||
---
|
||||
title: Prerequisites
|
||||
parent: Basic Concepts and Prerequisites
|
||||
grand_parent: Azure Container Apps Challenge
|
||||
has_children: false
|
||||
nav_order: 2
|
||||
layout: default
|
||||
---
|
||||
|
||||
# Prerequisites
|
||||
|
||||
{% include 00-intro/2-prerequisites.md %}
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Dapr Overview]({{ site.baseurl }}{% link modules/11-aca-challenge/00-intro/1-dapr-overview.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
<span class="fs-3">
|
||||
[Assignment 1 - Run without Dapr >]({{ site.baseurl }}{% link modules/11-aca-challenge/01-assignment-1-lab/1-spring-for-apache-kafka.md %}){: .btn .float-right .mt-7 }
|
||||
</span>
|
|
@ -0,0 +1,11 @@
|
|||
---
|
||||
title: Basic Concepts and Prerequisites
|
||||
parent: Azure Container Apps Challenge
|
||||
has_children: true
|
||||
nav_order: 1
|
||||
layout: default
|
||||
---
|
||||
|
||||
# Basic Concepts and Prerequisites
|
||||
|
||||
Details on Dapr and Workshop Prerequisites.
|
|
@ -0,0 +1,33 @@
|
|||
---
|
||||
title: Spring for Apache Kafka Usage
|
||||
parent: Assignment 1 - Running Applications with Kafka without Dapr
|
||||
grand_parent: Azure Container Apps Challenge
|
||||
has_children: false
|
||||
nav_order: 1
|
||||
layout: default
|
||||
has_toc: true
|
||||
---
|
||||
|
||||
# Spring for Apache Kafka Usage
|
||||
|
||||
{: .no_toc }
|
||||
|
||||
<details open markdown="block">
|
||||
<summary>
|
||||
Table of contents
|
||||
</summary>
|
||||
{: .text-delta }
|
||||
- TOC
|
||||
{:toc}
|
||||
</details>
|
||||
|
||||
{% include 01-assignment-1-lab/1-spring-for-apache-kafka.md %}
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Prerequisites]({{ site.baseurl }}{% link modules/11-aca-challenge/00-intro/2-prerequisites.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
<span class="fs-3">
|
||||
[Running Applications without using Dapr >]({{ site.baseurl }}{% link modules/11-aca-challenge/01-assignment-1-lab/2-lab-instructions.md %}){: .btn .float-right .mt-7 }
|
||||
</span>
|
|
@ -0,0 +1,33 @@
|
|||
---
|
||||
title: Run without Dapr
|
||||
parent: Assignment 1 - Running Applications with Kafka without Dapr
|
||||
grand_parent: Azure Container Apps Challenge
|
||||
has_children: false
|
||||
nav_order: 2
|
||||
layout: default
|
||||
has_toc: true
|
||||
---
|
||||
|
||||
# Running Applications without using Dapr
|
||||
|
||||
{: .no_toc }
|
||||
|
||||
<details open markdown="block">
|
||||
<summary>
|
||||
Table of contents
|
||||
</summary>
|
||||
{: .text-delta }
|
||||
- TOC
|
||||
{:toc}
|
||||
</details>
|
||||
|
||||
{% include 01-assignment-1-lab/2-lab-instructions.md linkToPrerequisites="modules/11-aca-challenge/00-intro/2-prerequisites.md" %}
|
||||
|
||||
<!-- ----------------------------- NAVIGATION ------------------------------ -->
|
||||
|
||||
<span class="fs-3">
|
||||
[< Spring for Apache Kafka Usage]({{ site.baseurl }}{% link modules/11-aca-challenge/01-assignment-1-lab/1-spring-for-apache-kafka.md %}){: .btn .mt-7 }
|
||||
</span>
|
||||
<span class="fs-3">
|
||||
[Assignment 2 - Run with Dapr >]({{ site.baseurl }}{% link modules/11-aca-challenge/02-assignment-2-dapr-pub-sub/index.md %}){: .btn .float-right .mt-7 }
|
||||
</span>
|
|
@ -0,0 +1,11 @@
|
|||
---
|
||||
title: Assignment 1 - Running Applications with Kafka without Dapr
|
||||
parent: Azure Container Apps Challenge
|
||||
has_children: true
|
||||
nav_order: 2
|
||||
layout: default
|
||||
---
|
||||
|
||||
# Assignment 1 - Running Applications with Kafka without using Dapr
|
||||
|
||||
Details on how to run the application to make sure everything works correctly.
|
|
@@ -0,0 +1,32 @@
---
title: Assignment 2 - Using Dapr for pub/sub with Kafka
parent: Azure Container Apps Challenge
has_children: false
nav_order: 3
layout: default
has_toc: true
---

# Assignment 2 - Using Dapr for pub/sub with Kafka

{: .no_toc }

<details open markdown="block">
<summary>
Table of contents
</summary>
{: .text-delta }
- TOC
{:toc}
</details>

{% include 02-assignment-2-dapr-pub-sub/1-dapr-pub-sub.md relativeAssetsPath="../../../assets/" %}

<!-- ----------------------------- NAVIGATION ------------------------------ -->

<span class="fs-3">
[< Assignment 1 - Run without Dapr]({{ site.baseurl }}{% link modules/11-aca-challenge/01-assignment-1-lab/2-lab-instructions.md %}){: .btn .mt-7 }
</span>
<span class="fs-3">
[Assignment 3 - Pub/sub with Azure Service Bus >]({{ site.baseurl }}{% link modules/11-aca-challenge/03-assignment-3-azure-pub-sub/index.md %}){: .btn .float-right .mt-7 }
</span>

@@ -0,0 +1,32 @@
---
title: Assignment 3 - Using Dapr for pub/sub with Azure Service Bus
parent: Azure Container Apps Challenge
has_children: false
nav_order: 4
layout: default
has_toc: true
---

# Assignment 3 - Using Dapr for pub/sub with Azure Service Bus

{: .no_toc }

<details open markdown="block">
<summary>
Table of contents
</summary>
{: .text-delta }
- TOC
{:toc}
</details>

{% include 03-assignment-3-azure-pub-sub/1-azure-service-bus.md %}

<!-- ----------------------------- NAVIGATION ------------------------------ -->

<span class="fs-3">
[< Assignment 2 - Run with Dapr]({{ site.baseurl }}{% link modules/11-aca-challenge/02-assignment-2-dapr-pub-sub/index.md %}){: .btn .mt-7 }
</span>
<span class="fs-3">
[Assignment 4 - Deploy to Azure Container Apps >]({{ site.baseurl }}{% link modules/11-aca-challenge/04-deploy-to-aca/index.md %}){: .btn .float-right .mt-7 }
</span>

@@ -0,0 +1,54 @@
---
title: Assignment 4 - Deploying to Azure Container Apps
parent: Azure Container Apps Challenge
has_children: false
nav_order: 5
layout: default
has_toc: true
---

# Assignment 4 - Deploying to Azure Container Apps

{: .no_toc }

<details open markdown="block">
<summary>
Table of contents
</summary>
{: .text-delta }
- TOC
{:toc}
</details>

This assignment is about deploying the 3 microservices to [Azure Container Apps](https://learn.microsoft.com/en-us/azure/container-apps/) with Dapr enabled for pub/sub. This is the first deployment of the microservices to Azure. The next assignments provide step-by-step instructions for deploying the microservices with more Dapr building blocks. The camera simulation runs locally and is not deployed to Azure.

![Azure Container Apps Challenge - First Deployment](../../../assets/images/aca-deployment-1.png)

## Setup

{% include 05-assignment-5-aks-aca/02-aca/1-setup.md showObservability=true %}
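
The setup include above walks through the Azure resources required for this challenge. As a point of reference, a minimal sketch of the core Azure CLI commands could look like the following; the resource group `rg-dapr-workshop-java` and environment `cae-dapr-workshop-java` match the names used later in this challenge, while the location is only an example:

```bash
# Resource group and Container Apps environment (names match the rest of this challenge)
az group create \
  --name rg-dapr-workshop-java \
  --location westeurope

az containerapp env create \
  --name cae-dapr-workshop-java \
  --resource-group rg-dapr-workshop-java \
  --location westeurope
```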

## Step 1 - Deploy Dapr Component for pub/sub

You are going to deploy the `pubsub` Dapr component to use Azure Service Bus as the pub/sub message broker.

{% include 05-assignment-5-aks-aca/02-aca/2-1-dapr-component-service-bus.md linkToAssignment3="modules/11-aca-challenge/03-assignment-3-azure-pub-sub/index.md" %}
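
Deploying the component follows the same pattern used for the other Dapr components in this challenge. A minimal sketch, assuming the Azure Service Bus pub/sub component file is saved as `dapr/components/aca-azure-servicebus-pubsub.yaml` (the exact file name comes from the include above and may differ):

```bash
# Deploy the pubsub Dapr component (Azure Service Bus) to the Container Apps environment
az containerapp env dapr-component set \
  --name cae-dapr-workshop-java \
  --resource-group rg-dapr-workshop-java \
  --dapr-component-name pubsub \
  --yaml ./dapr/components/aca-azure-servicebus-pubsub.yaml
```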

<!-- ----------------------- BUILD, DEPLOY AND TEST ------------------------ -->

{% assign stepNumber = 2 %}
{% include 05-assignment-5-aks-aca/02-aca/3-build-deploy-test.md %}
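
For reference, each microservice ends up as a container app with the Dapr sidecar enabled. A sketch of what deploying a single service could look like; the image, app id, port and ingress settings below are placeholders, and the exact values come from the build/deploy include above:

```bash
# Example only: create one container app with the Dapr sidecar enabled
# (add --registry-server/--registry-username/--registry-password for a private registry)
az containerapp create \
  --name traffic-control-service \
  --resource-group rg-dapr-workshop-java \
  --environment cae-dapr-workshop-java \
  --image <YOUR_REGISTRY>/traffic-control-service:latest \
  --target-port 6000 \
  --ingress internal \
  --min-replicas 1 \
  --enable-dapr \
  --dapr-app-id traffic-control-service \
  --dapr-app-port 6000
```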

<!-- ---------------------------- OBSERVABILITY ---------------------------- -->

{% assign stepNumber = stepNumber | plus: 1 %}
{% include 05-assignment-5-aks-aca/02-aca/4-observability.md relativeAssetsPath="../../../assets/" %}
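
Besides the observability tooling described above, the Azure CLI can also stream the console logs of an app and its Dapr sidecar, which is handy for a quick check; the app name below is a placeholder:

```bash
# Stream console logs of a container app (replace the app name as needed)
az containerapp logs show \
  --name traffic-control-service \
  --resource-group rg-dapr-workshop-java \
  --follow
```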

<!-- ----------------------------- NAVIGATION ------------------------------ -->

<span class="fs-3">
[< Assignment 3 - Pub/sub with Azure Service Bus]({{ site.baseurl }}{% link modules/11-aca-challenge/03-assignment-3-azure-pub-sub/index.md %}){: .btn .mt-7 }
</span>
<span class="fs-3">
[Assignment 5 - Service invocation >]({{ site.baseurl }}{% link modules/11-aca-challenge/05-service-invocation/index.md %}){: .btn .float-right .mt-7 }
</span>

@@ -0,0 +1,44 @@
---
title: Assignment 5 - Service Invocation using Dapr
parent: Azure Container Apps Challenge
has_children: false
nav_order: 6
layout: default
has_toc: true
---

# Assignment 5 - Service Invocation using Dapr

{: .no_toc }

<details open markdown="block">
<summary>
Table of contents
</summary>
{: .text-delta }
- TOC
{:toc}
</details>

This assignment is about using Dapr to invoke the `VehicleRegistrationService` from the `FineCollectionService`. You will use the [service invocation building block](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/service-invocation-overview/) provided by Dapr. This is the second step toward the final state of the application for this challenge and is represented by the diagram below.

![Azure Container Apps Challenge - Second Deployment](../../../assets/images/aca-deployment-2.png)
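
To get a feel for what the service invocation building block does before touching any code, you can call one service through the Dapr sidecar of another. A sketch using plain HTTP, assuming a locally running sidecar on its default HTTP port 3500, the `vehicleregistrationservice` app id and a `/vehicleinfo/{licenseNumber}` endpoint (adjust these to your setup):

```bash
# Invoke the vehicleregistrationservice app through the local Dapr sidecar
# URL pattern: http://localhost:<daprHttpPort>/v1.0/invoke/<app-id>/method/<method>
curl http://localhost:3500/v1.0/invoke/vehicleregistrationservice/method/vehicleinfo/RT-318-K
```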

<!-- ------------ STEP 1 - INVOKE VEHICLE REGISTRATION SERVICE ------------- -->

{% assign stepNumber = 1 %}
{% include 09-bonus-assignments/01-service-invocation/1-use-dapr-to-invoke-vehicle-registration-service.md %}

<!-- ---------------------------- DEPLOY TO ACA ---------------------------- -->

{% assign stepNumber = stepNumber | plus: 1 %}
{% include 09-bonus-assignments/01-service-invocation/3-deploy-to-aca.md %}

<!-- ----------------------------- NAVIGATION ------------------------------ -->

<span class="fs-3">
[< Assignment 4 - Deploy to Azure Container Apps]({{ site.baseurl }}{% link modules/11-aca-challenge/04-deploy-to-aca/index.md %}){: .btn .mt-7 }
</span>
<span class="fs-3">
[Assignment 6 - Cosmos DB as a state store >]({{ site.baseurl }}{% link modules/11-aca-challenge/06-state-store/index.md %}){: .btn .float-right .mt-7 }
</span>

@@ -0,0 +1,70 @@
---
title: Assignment 6 - Using Azure Cosmos DB as a state store
parent: Azure Container Apps Challenge
has_children: false
nav_order: 7
layout: default
has_toc: true
---

# Assignment 6 - Using Azure Cosmos DB as a state store

{: .no_toc }

<details open markdown="block">
<summary>
Table of contents
</summary>
{: .text-delta }
- TOC
{:toc}
</details>

This assignment is about using Azure Cosmos DB as a [state store](https://docs.dapr.io/operations/components/setup-state-store/) for the `TrafficControlService` instead of keeping the state in memory. You will use the [Azure Cosmos DB state store component](https://docs.dapr.io/reference/components-reference/supported-state-stores/setup-azure-cosmosdb/) provided by Dapr. This is the third step toward the final state of the application for this challenge and is represented by the diagram below.

![Azure Container Apps Challenge - Third Deployment](../../../assets/images/aca-deployment-3.png)

## Step 1: Create an Azure Cosmos DB

{% include 09-bonus-assignments/02-state-store/1-1-create-cosmos-db.md %}
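
If you prefer the Azure CLI over the portal, a sketch of creating the Cosmos DB resources could look like the following. The account name is a placeholder you choose; the database and container names are assumed to match the ones referenced by the state store component (`dapr-workshop-java-database` and `vehicle-state`), and the Dapr Cosmos DB state store expects `/partitionKey` as the partition key path:

```bash
# Placeholder account name; Cosmos DB account names must be globally unique
COSMOSDB_ACCOUNT=<YOUR_COSMOSDB_ACCOUNT_NAME>

az cosmosdb create \
  --name $COSMOSDB_ACCOUNT \
  --resource-group rg-dapr-workshop-java

az cosmosdb sql database create \
  --account-name $COSMOSDB_ACCOUNT \
  --resource-group rg-dapr-workshop-java \
  --name dapr-workshop-java-database

az cosmosdb sql container create \
  --account-name $COSMOSDB_ACCOUNT \
  --resource-group rg-dapr-workshop-java \
  --database-name dapr-workshop-java-database \
  --name vehicle-state \
  --partition-key-path /partitionKey
```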

## Step 2: Deploy Azure Cosmos DB state store component to ACA

1. Open the file `dapr/aca-azure-cosmosdb-statestore.yaml` in your code editor and review its content.

1. **Copy or Move** the file `dapr/aca-azure-cosmosdb-statestore.yaml` to the `dapr/components` folder.

1. **Replace** the following placeholders in the file `dapr/components/aca-azure-cosmosdb-statestore.yaml` with the values you noted down in the previous step (a sketch for looking these values up with the Azure CLI follows this list):

    - `<YOUR_COSMOSDB_ACCOUNT_URL>` with the Cosmos DB account URL
    - `<YOUR_COSMOSDB_MASTER_KEY>` with the master key

1. Go to the root folder of the repository.

1. Enter the following command to deploy the `statestore` Dapr component:

    ```bash
    az containerapp env dapr-component set \
      --name cae-dapr-workshop-java \
      --resource-group rg-dapr-workshop-java \
      --dapr-component-name statestore \
      --yaml ./dapr/components/aca-azure-cosmosdb-statestore.yaml
    ```
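
If you want to look up the two placeholder values from the command line instead of the portal, a minimal sketch (the account name is a placeholder for the account created in step 1):

```bash
# Cosmos DB account URL
az cosmosdb show \
  --name <YOUR_COSMOSDB_ACCOUNT_NAME> \
  --resource-group rg-dapr-workshop-java \
  --query documentEndpoint -o tsv

# Primary master key
az cosmosdb keys list \
  --name <YOUR_COSMOSDB_ACCOUNT_NAME> \
  --resource-group rg-dapr-workshop-java \
  --query primaryMasterKey -o tsv
```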

## Step 3: Add the Azure Cosmos DB state store to the `TrafficControlService`

{% include 09-bonus-assignments/02-state-store/1-3-update-traffic-control-service.md %}
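
Once the `TrafficControlService` is wired to the state store, the Dapr state API is what it talks to under the hood. A sketch of the equivalent HTTP calls, assuming a locally running sidecar on port 3500 and the `statestore` component name used in this assignment:

```bash
# Save a value under a key via the local Dapr sidecar
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "RT-318-K", "value": { "licenseNumber": "RT-318-K" } }]'

# Read it back
curl http://localhost:3500/v1.0/state/statestore/RT-318-K
```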

<!-- ----------------------- BUILD, DEPLOY AND TEST ------------------------ -->

{% assign stepNumber = 4 %}
{% include 09-bonus-assignments/02-state-store/3-deploy-to-aca.md %}

<!-- ----------------------------- NAVIGATION ------------------------------ -->

<span class="fs-3">
[< Assignment 5 - Service invocation]({{ site.baseurl }}{% link modules/11-aca-challenge/05-service-invocation/index.md %}){: .btn .mt-7 }
</span>
<span class="fs-3">
[Assignment 5 - Service invocation >]({{ site.baseurl }}{% link modules/11-aca-challenge/05-service-invocation/index.md %}){: .btn .float-right .mt-7 }
</span>

@@ -0,0 +1,24 @@
---
title: Azure Container Apps Challenge
has_children: true
nav_order: 11
layout: default
---

# Azure Container Apps Challenge

This challenge brings together most of the topics covered in the workshop and the bonus assignments. You will:

- Deploy all 3 microservices to Azure Container Apps (ACA);
- Use Azure Service Bus as a pub/sub Dapr component for the communication between the Traffic Control Service and the Fine Collection Service;
- Use Azure Cosmos DB as a state store Dapr building block for the Traffic Control Service;
- Use the service invocation Dapr building block to invoke the Vehicle Registration Service from the Fine Collection Service;
- Use Azure Key Vault as a secret store Dapr building block for the Fine Collection Service.

The following diagram shows the architecture that represents the final state of this challenge:

![Architecture](../../assets/images/fine-collection-service-secret-store.png)

<span class="fs-3">
[Let's start!]({{ site.baseurl }}{% link modules/11-aca-challenge/00-intro/1-dapr-overview.md %}){: .btn .mt-7 }
</span>