Releases: Activiti/activiti-cloud-application
7.6.0-alpha.19
Release version 7.6.0-alpha.19
7.6.0-alpha.18
Release version 7.6.0-alpha.18
7.6.0-beta.11
Release version 7.6.0-beta.11
7.6.0-alpha.13
Release version 7.6.0-alpha.13
7.5.1
What's Changed
- fix(versions): update Activiti/activiti-cloud-application versions into 7.5.x by @alfresco-build in #739
- fix(versions): update Activiti/activiti-cloud-application versions into 7.5.x by @alfresco-build in #742
Full Changelog: 7.5.0...7.5.1
7.5.0
7.4.0
7.3.0
7.2.0
7.1.0-M17
Release 7.1.0-M17
You can consume all the Activiti artifacts for this release from Alfresco Nexus:
```xml
<repositories>
  <repository>
    <id>activiti-releases</id>
    <url>https://artifacts.alfresco.com/nexus/content/repositories/activiti-releases</url>
  </repository>
</repositories>
```
Activiti Cloud:
```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.activiti.cloud</groupId>
      <artifactId>activiti-cloud-dependencies</artifactId>
      <version>7.1.0-M17</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
  </dependencies>
</dependencyManagement>
```
Activiti Core:
```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.activiti</groupId>
      <artifactId>activiti-dependencies</artifactId>
      <version>7.1.0-M17</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
  </dependencies>
</dependencyManagement>
```
In the 7.1.0-M17 milestone you will find the following main fixes & features:
- Improve Query Consumer Event Handling Performance
- Support Horizontal Pod Autoscaling (HPA) on Activiti Cloud full chart
- Change name validation for BPMN process definition and model payload
- Add category field to process models
- JUEL: Add custom function 'list'
- Fix variables propagation logic
- Fix modeling app validation for subprocesses
- Update Spring Boot version to 2.6.2
Changes from previous milestones
Support Horizontal Pod Autoscaling (HPA) on Activiti Cloud full chart.
Kubernetes supports horizontal scalability through the Horizontal Pod Autoscaler (HPA) mechanism. In activiti-cloud-full-chart it is now possible to enable HPA for the runtime-bundle and activiti-cloud-query microservices.
Requirements
The HorizontalPodAutoscaler can fetch metrics from aggregated APIs that, for Kubernetes (metrics.k8s.io), are provided by an add-on named Metrics Server.
So, Metrics Server needs to be installed and running in order to use the HPA feature. Please refer to the Metrics Server documentation for installation instructions.
HPA Configuration
In the activiti-cloud-full-chart the HorizontalPodAutoscaler is disabled by default for backward compatibility. Please add the following configuration to your values.yaml to enable and use it:
```yaml
runtime-bundle:
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 6
    cpu: 90
    memory: "2000Mi"

activiti-cloud-query:
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 4
    cpu: 90
```
This configuration (present in the hpa-values.yaml file in this repository) enables HPA for both runtime-bundle and activiti-cloud-query.
⚠️ WARNING: the provided values are just an example. Please adjust the values to your specific use case.
Configuration Properties
| Name | Description | Default |
|---|---|---|
| `enabled` | enables the HPA feature | false |
| `minReplicas` | starting number of replicas to be spawned | |
| `maxReplicas` | max number of replicas to be spawned | |
| `cpu` | +1 replica over this average % CPU value | |
| `memory` | +1 replica over this average memory value | |
| `scalingPolicesEnabled` | enables the scaling policies | true |
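Putting the options together, a values.yaml sketch might look like the one below. The numbers are placeholders only, and it is assumed here that `scalingPolicesEnabled` sits in the same `hpa` block as the other properties:

```yaml
runtime-bundle:
  hpa:
    enabled: true                 # turn on the HorizontalPodAutoscaler
    minReplicas: 2                # start with two replicas
    maxReplicas: 8                # never scale beyond eight replicas
    cpu: 80                       # add a replica above 80% average CPU
    memory: "1500Mi"              # add a replica above this average memory usage
    scalingPolicesEnabled: false  # opt out of the default scaling policies
```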
Activiti Cloud Query and HPA
Activiti Cloud supports both the RabbitMQ and Kafka message brokers. Activiti Cloud Query is a consumer of the message broker, so we need to be extra careful when configuring automatic scalability in order to keep it working properly.
As a general rule, automatic horizontal scaling of the query consumers should be enabled only when Activiti Cloud has partitioning enabled.
Activiti Cloud Query and HPA with Kafka
In a partitioned installation, Kafka allows consumers to connect to one or more partitions, with a maximum ratio of 1:1 between partitions and consumers. So when configuring HPA, don't set maxReplicas to a value greater than the partitionCount.
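For example, if the query consumer is configured with four partitions, a matching `hpa` block could look like the sketch below (the partition count itself is configured elsewhere in the chart and is only referenced in the comment here):

```yaml
activiti-cloud-query:
  hpa:
    enabled: true
    minReplicas: 1
    maxReplicas: 4   # keep this at or below the configured partitionCount
    cpu: 90
```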
Activiti Cloud Query and HPA with RabbitMQ
When partitioning with RabbitMQ, the configuration will spawn one replica for every partition, so you should avoid activating the HorizontalPodAutoscaler in this case.
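In practice this means leaving the query `hpa` block at its default in a partitioned RabbitMQ setup, for example:

```yaml
activiti-cloud-query:
  hpa:
    enabled: false   # replica count is driven by the partition configuration instead
```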
Improved Query Event Handling Performance
- Optimized entity hashCode and equals to be consistent across all state transitions by using the entity identifier
- Changed Audit entity identity generator to use sequence generator
- Changed Query variable entity identity generator to use sequence generator
- Enabled Hibernate query batching for Audit and Query event handlers (see the sketch after this list)
- Optimized Audit event entities to be immutable to avoid extra update queries
- Optimized Audit event entities to use dynamic insert query behavior
- Optimized Query event entities to use dynamic insert and update query behavior
- Replaced Spring Data Repository with JPA EntityManager to reduce N+1 queries overhead
- Optimized Query event handler entity graph fetching to reduce N+1 select queries
- Optimized the handling order of task variables and task created events
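For reference, Hibernate JDBC batching of the kind mentioned above is normally driven by standard Spring/Hibernate properties such as the ones below; this is an illustrative sketch, and the batch size is an arbitrary placeholder rather than the value shipped with the release:

```yaml
spring:
  jpa:
    properties:
      hibernate:
        jdbc:
          batch_size: 50      # group inserts/updates into JDBC batches
        order_inserts: true   # reorder inserts so they can be batched together
        order_updates: true   # reorder updates likewise
```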
Query Performance Optimization Results M16 vs M17
7.1.0-M16
- Average Latency: 41ms
- 95p Latency: 62ms
- Throughput: 24tps
7.1.0-M17
- Average Latency: 3.6ms
- 95p Latency: 7ms
- Throughput: 277tps
Consistency on variables propagation
Previously, the propagation of variables to the process instance level was not consistent between user tasks and service tasks: while user tasks mapped variables directly from the local scope to the process level, service tasks also updated variables in the execution context before propagating them to the process level.
This was leading to some misbehaviour depending on the execution order (see Activiti/Activiti#3787).
Starting from this release, service tasks no longer update the execution context; instead, they update the process instance variables directly, based on the defined mappings.
Possible side effects: with the previous implementation, connector outputs were made available to the current execution even when no mapping was defined, meaning that connector outputs could be used directly in sequence flow expressions. After this change that is no longer the case: in order to use connector outputs in further expressions you'll need to define a mapping for them (either explicitly or using one of the MAP_ALL_OUTPUTS or MAP_ALL options).