This document describes the Incident Service and all of its functions. It begins with a high-level overview, followed by more detail on each topic.
The emergency response solution comprises several microservices, as described in the Emergency Response Demo Architecture guide. The whole solution runs on OpenShift, so whenever this document refers to the cluster, containers, etc., an OpenShift cluster is assumed.
As the diagram above shows, the Incident Service consists of several parts. Clients connect to a REST API, which enables other services to access information about incidents. An incident consists of one or more persons in need of help as well as a location. For all properties of an incident, please refer to the OpenAPI specification for the Incident Service, which also contains useful information about which services to implement and which message formats to exchange over Kafka. Each time an incident is created, an IncidentReportedEvent must be sent. Other services send an UpdateIncidentCommand to notify the Incident Service of any changes to an incident.
Finally, the Incident Service implements a health check API so that the OpenShift cluster can tell whether the service is up and running.
Please refer to the following sections for more details on each topic. Be aware that this document describes a full implementation of the service. For testing/evaluation purposes, you can create a minimal solution that gets the Incident Service working with the other components. Such a solution only needs to include:
- REST API implementation
- Kafka Integration
- Persistence, which can be replaced by an ephemeral solution (such as storing incidents in a LinkedList)
The service exposes a REST API as specified in the OpenAPI specification. It exposes five paths, and any implementation must comply with this interface.
Example of an incident JSON:
{
  "lat": "34.25184",
  "lon": "-77.89708",
  "numberOfPeople": 9,
  "victimName": "Mr Test",
  "victimPhoneNumber": "(651) 555-9526",
  "medicalNeeded": true
}
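As a minimal sketch of the receiving side, the snippet below parses an incident payload like the one above and checks that the expected fields are present. The field list follows the example; the authoritative list of properties is in the OpenAPI specification, and the helper name is illustrative.

```python
import json

# Fields taken from the example payload above; the OpenAPI specification
# for the Incident Service is the authoritative source.
REQUIRED_FIELDS = {"lat", "lon", "numberOfPeople", "victimName",
                   "victimPhoneNumber", "medicalNeeded"}

def parse_incident(raw: str) -> dict:
    """Parse an incident JSON payload and verify the expected fields exist."""
    incident = json.loads(raw)
    missing = REQUIRED_FIELDS - incident.keys()
    if missing:
        raise ValueError(f"incident payload missing fields: {sorted(missing)}")
    return incident

payload = '''{
  "lat": "34.25184",
  "lon": "-77.89708",
  "numberOfPeople": 9,
  "victimName": "Mr Test",
  "victimPhoneNumber": "(651) 555-9526",
  "medicalNeeded": true
}'''
incident = parse_incident(payload)
```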
The Incident Service needs some kind of persistence. There are no requirements on a specific storage solution, but keep in mind that the Incident Service is a microservice: any implementation must survive a restart, possibly on another node in the cluster, and multiple instances of the service may run simultaneously.
- All incidents received must be persisted and returned on later calls.
- Solutions that make it easy to move workloads between nodes are preferred.
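For testing/evaluation, the ephemeral shortcut mentioned earlier can be sketched as a simple in-memory store. Note that this deliberately violates the durability expectations above (data is lost on restart and not shared between instances); class and method names are illustrative.

```python
import threading
import uuid

class InMemoryIncidentStore:
    """Ephemeral incident store for testing/evaluation only.

    Data is lost on restart and is not shared between service instances,
    so this is not suitable for a full implementation.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._incidents = {}

    def create(self, incident: dict) -> dict:
        with self._lock:
            incident = dict(incident, id=str(uuid.uuid4()))
            self._incidents[incident["id"]] = incident
            return incident

    def get(self, incident_id: str):
        return self._incidents.get(incident_id)

    def list_all(self) -> list:
        return list(self._incidents.values())

store = InMemoryIncidentStore()
created = store.create({"victimName": "Mr Test", "numberOfPeople": 9})
```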
There are two relevant topics on the Kafka broker with which the Incident Service must integrate.
| Topic name | Send/Receive | Description | Format |
|---|---|---|---|
| topic-incident-event | Send | Notify other microservices that a new incident has been created. | IncidentReportedEvent |
| topic-incident-command | Receive | Receive notifications from other microservices when an incident has changed. | UpdateIncidentCommand |
To connect to the Kafka cluster, use the following bootstrap URL: kafka-cluster-kafka-bootstrap.emergency-response-demo.svc.cluster.local:9092
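For example, a properties-style client configuration might reference the bootstrap URL like this (the property key depends on your Kafka client or framework — e.g. Spring uses `spring.kafka.bootstrap-servers` — so treat the key below as illustrative):

```properties
# Bootstrap URL from the Emergency Response Demo OpenShift cluster
kafka.bootstrap.servers=kafka-cluster-kafka-bootstrap.emergency-response-demo.svc.cluster.local:9092
```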
Both the IncidentReportedEvent and UpdateIncidentCommand formats are included as data types in the OpenAPI specification for the Incident Service.
Example of IncidentReportedEvent:
{
  "id":"fe6bcee7-5447-4b0d-8c7f-a9ca4f10eb14",
  "messageType":"IncidentReportedEvent",
  "invokingService":"IncidentService",
  "timestamp":1573513503569,
  "body":{
    "id":"fe6bcee7-5447-4b0d-8c7f-a9ca4f10eb14",
    "lat":55.693615,
    "lon":12.567255,
    "numberOfPeople":12,
    "medicalNeeded":true,
    "timestamp":1573464689
  }
}
Example of UpdateIncidentCommand:
{
  "id":"ea9f2f52-2b56-448b-b388-b6ff16368050",
  "messageType":"UpdateIncidentCommand",
  "invokingService":"IncidentProcessService",
  "timestamp":1573815193958,
  "body":{
    "incident":{
      "id":"ea9f2f52-2b56-448b-b388-b6ff16368050",
      "status":"RESCUED"
    }
  }
}
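On the consuming side, a handler for messages from topic-incident-command might apply the fields carried in the command to the stored incident, as sketched below. The `incidents` dict stands in for whatever persistence the service uses, and the handler name is illustrative.

```python
import json

def handle_update_incident_command(raw: str, incidents: dict) -> dict:
    """Apply an UpdateIncidentCommand to the matching stored incident."""
    command = json.loads(raw)
    if command.get("messageType") != "UpdateIncidentCommand":
        raise ValueError(f"unexpected message type: {command.get('messageType')}")
    update = command["body"]["incident"]
    stored = incidents[update["id"]]
    # Copy every updated field except the identifier onto the stored incident.
    stored.update({k: v for k, v in update.items() if k != "id"})
    return stored

# Stand-in persistence with one known incident.
incidents = {
    "ea9f2f52-2b56-448b-b388-b6ff16368050": {
        "id": "ea9f2f52-2b56-448b-b388-b6ff16368050",
        "status": "REPORTED",
    }
}
raw = json.dumps({
    "id": "ea9f2f52-2b56-448b-b388-b6ff16368050",
    "messageType": "UpdateIncidentCommand",
    "invokingService": "IncidentProcessService",
    "timestamp": 1573815193958,
    "body": {"incident": {"id": "ea9f2f52-2b56-448b-b388-b6ff16368050",
                          "status": "RESCUED"}},
})
updated = handle_update_incident_command(raw, incidents)
```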
A full implementation must provide an endpoint /actuator/health that tells whether the service is up and running. The only requirement is that it returns HTTP status code 200 when the service is up.
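The contract is small enough to sketch with a hand-rolled HTTP server, as below. In practice a framework (Spring Boot Actuator, for example) typically provides this endpoint out of the box; the response body shown here is illustrative, since only the 200 status code is required.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Answer GET /actuator/health with 200 while the service is up."""

    def do_GET(self):
        if self.path == "/actuator/health":
            body = b'{"status":"UP"}'  # body is illustrative; only 200 is required
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

# Bind to port 0 so the OS picks a free port, then probe the endpoint.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/actuator/health").status
server.shutdown()
```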
Implement a /metrics endpoint that exposes metrics in the OpenMetrics format that Prometheus understands. Ask the tech lead for more details on this.
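To give a feel for the text format Prometheus scrapes, the snippet below renders a single counter. A real service would use a Prometheus client library (such as Micrometer or prometheus_client) rather than formatting by hand, and the metric name here is made up for illustration.

```python
def render_metrics(incidents_reported: int) -> str:
    """Render a Prometheus-style text exposition with one counter.

    The metric name is hypothetical; a client library would normally
    handle formatting, registration, and content-type negotiation.
    """
    lines = [
        "# HELP incidents_reported_total Total incidents reported.",
        "# TYPE incidents_reported_total counter",
        f"incidents_reported_total {incidents_reported}",
    ]
    return "\n".join(lines) + "\n"

output = render_metrics(42)
```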