All notable changes to this project will be documented in this file.
+The format is based on Keep a Changelog, +and this project adheres to Semantic Versioning.
+For release notes before version 3.1, please refer to https://github.com/Accenture/mercury
+platform-core engine updated with virtual thread
+Print out basic JVM information before startup for verification of base container image.
+Removed Maven Shade packager
+Updated open sources libraries to address security vulnerabilities
Enhanced Benchmark tool to support "Event over HTTP" protocol to evaluate performance
efficiency for communication between application containers using HTTP.
+N/A
+Updated open sources libraries
+Support two executable JAR packaging system: +1. Maven Shade packager +2. Spring Boot packager
+Starting from version 3.0.5, we have replaced Spring Boot packager with Maven Shade. +This avoids a classpath edge case for Spring Boot packager when running kafka-client +under Java 11 or higher.
+Maven Shade also results in smaller executable JAR size.
+N/A
+Updated open sources libraries
+The "/info/lib" admin endpoint has been enhanced to list library dependencies for executable JAR +generated by either Maven Shade or Spring Boot Packager.
+Improved ConfigReader to recognize both ".yml" and ".yaml" extensions and their uses are interchangeable.
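The interchangeable extension handling can be sketched as follows. This is a hypothetical helper for illustration, not the actual ConfigReader implementation:

```java
import java.util.List;

// Illustrative sketch: treat ".yml" and ".yaml" as interchangeable when resolving a config file.
public class ConfigPathResolver {
    // Given a requested path, return candidate paths to try in order
    public static List<String> candidates(String path) {
        if (path.endsWith(".yml")) {
            return List.of(path, path.substring(0, path.length() - 4) + ".yaml");
        }
        if (path.endsWith(".yaml")) {
            return List.of(path, path.substring(0, path.length() - 5) + ".yml");
        }
        return List.of(path);
    }

    public static void main(String[] args) {
        System.out.println(candidates("application.yml"));  // try .yml first, then .yaml
        System.out.println(candidates("application.yaml")); // try .yaml first, then .yml
    }
}
```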
+N/A
+N/A
+Updated open sources libraries
+N/A
+N/A
+N/A
In this release, we have replaced the Google HTTP Client with the Vert.x non-blocking WebClient.
We also tested compatibility up to OpenJDK version 20 and Maven 3.9.2.
+When "x-raw-xml" HTTP request header is set to "true", the AsyncHttpClient will skip the built-in +XML serialization so that your application can retrieve the original XML text.
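As a sketch of the caller side, a client could set the header like this. The example uses the JDK's HttpClient types rather than Mercury's AsyncHttpClient, and the URL is an assumed local endpoint:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Illustration: build a GET request that asks the service to skip built-in XML
// serialization and return the original XML text.
public class RawXmlRequest {
    public static HttpRequest build(String url) {
        return HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("x-raw-xml", "true") // skip built-in XML serialization
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = build("http://127.0.0.1:8085/api/hello/xml");
        System.out.println(req.headers().firstValue("x-raw-xml").orElse("not set"));
    }
}
```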
+Retire Google HTTP client
+Upgrade maven plugin versions.
This is a major release with some breaking changes. Please refer to Chapter-10 (Migration guide) for details.
This version brings the best of preemptive and cooperative multitasking to Java (versions 1.8 to 19) before
the Java 19 virtual thread feature becomes officially available.
+N/A
+N/A
+N/A
+google-http-client 1.42.3
+Improved unit tests to use assertThrows to evaluate exception
+In this version, REST automation code is moved to platform-core such that REST and Websocket +service can share the same port.
+N/A
+In this version, websocket notification example code has been removed from the REST automation system. +If your application uses this feature, please recover the code from version 2.5.0 and refactor it as a +separate library.
+N/A
+Simplify REST automation system by removing websocket notification example in REST automation.
+New Preload annotation class to automate pre-registration of LambdaFunction.
+Removed Spring framework and Tomcat dependencies from platform-core so that the core library can be applied +to legacy J2EE application without library conflict.
+Updated open sources libraries.
+Support more than one event stream cluster. User application can share the same event stream cluster +for pub/sub or connect to an alternative cluster for pub/sub use cases.
+N/A
+Cloud connector libraries update to Hazelcast 5.1.2
+Add tagging feature to handle language connector's routing and exception handling
+Remove language pack's pub/sub broadcast feature
+N/A
+Enhanced AsyncRequest to handle non-blocking fork-n-join
+N/A
+Upgrade Spring Boot from 2.6.3 to 2.6.6
+Add support of queue API in native pub/sub module for improved ESB compatibility
+N/A
+N/A
+N/A
+N/A
+N/A
+N/A
Added the distributed.trace.aggregation parameter in application.properties such that trace aggregation
may be disabled.
N/A
+N/A
A callback function can implement ServiceExceptionHandler to catch exceptions. It adds the onError() method.
+N/A
+Open sources library update - Vert.x 4.1.3, Netty 4.1.68-Final
+N/A
+"object.streams.io" route is removed from platform-core
+Vert.x is introduced as the in-memory event bus
+Version 1.13.0 is the last version that uses Akka as the in-memory event system.
+Legacy websocket notification example application
+N/A
+N/A
If predictable topic is set, application instances will report their predictable topics as "instance ID"
to the presence monitor. This improves visibility when a developer tests their application in "hybrid" mode,
i.e. running the app locally and connecting to the cloud remotely for event streams and cloud resources.
+N/A
+N/A
+N/A
+N/A
+Improved Kafka producer and consumer pairing
+New presence monitor's admin endpoint for the operator to force routing table synchronization ("/api/ping/now")
+N/A
+Improved routing table integrity check
Event stream systems like Kafka assume topics to be used long term.
This version adds support to reuse the same topic when an application instance restarts.
+You can create a predictable topic using unique application name and instance ID. +For example, with Kubernetes, you can use the POD name as the unique application instance topic.
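A minimal sketch of composing such a predictable topic name is shown below. The naming scheme here is illustrative only, not Mercury's actual topic format:

```java
// Hypothetical illustration: derive a predictable topic name from the application
// name and a stable instance ID (e.g. a Kubernetes POD name).
public class PredictableTopic {
    public static String topicFor(String appName, String instanceId) {
        // normalize to lower case and replace characters unsafe for the event stream
        return (appName + "." + instanceId).toLowerCase().replaceAll("[^a-z0-9.]", ".");
    }

    public static void main(String[] args) {
        System.out.println(topicFor("order-service", "pod-7d4f9c-0"));
    }
}
```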
+N/A
+N/A
+Automate trace for fork-n-join use case
+N/A
+N/A
+N/A
+N/A
+Improved distributed trace - set the "from" address in EventEnvelope automatically.
+N/A
+N/A
Application life-cycle management - user-provided main application(s) will be started after Spring Boot declares the
web application ready. This ensures that Spring autowiring is correct and dependencies are available.
Bugfix for locale - String.format(float) may use a comma as the decimal point in some locales, which breaks the
number parser. Replaced with BigDecimal decimal point scaling.
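The locale issue and the BigDecimal fix can be demonstrated with plain JDK classes (this is an illustration of the bug class, not Mercury's actual code):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.Locale;

// Demonstrates the locale-sensitive decimal point problem and the locale-independent fix.
public class DecimalPointDemo {
    // Locale-sensitive formatting: produces "3,14" in, for example, the German locale
    static String localeFormat(float value) {
        return String.format(Locale.GERMANY, "%.2f", value);
    }

    // Locale-independent alternative using BigDecimal scaling
    static String bigDecimalFormat(float value) {
        return BigDecimal.valueOf(value).setScale(2, RoundingMode.HALF_UP).toPlainString();
    }

    public static void main(String[] args) {
        System.out.println(localeFormat(3.14f));     // 3,14 - breaks a dot-based number parser
        System.out.println(bigDecimalFormat(3.14f)); // 3.14 - always uses a dot
    }
}
```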
+Bugfix for Tomcat 9.0.35 - Change Async servlet default timeout from 30 seconds to -1 so the system can handle the +whole life-cycle directly.
+N/A
For a large payload in an event, the payload is automatically segmented into 64 KB segments.
When there is more than one target application instance, the system ensures that the segments of the same event
are delivered to exactly the same target.
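The segmentation idea can be illustrated with a minimal sketch (the platform performs this internally; this is not the actual implementation):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: split a payload into 64 KB segments as described above.
public class PayloadSegmenter {
    private static final int SEGMENT_SIZE = 64 * 1024;

    public static List<byte[]> segment(byte[] payload) {
        List<byte[]> segments = new ArrayList<>();
        for (int offset = 0; offset < payload.length; offset += SEGMENT_SIZE) {
            int end = Math.min(offset + SEGMENT_SIZE, payload.length);
            segments.add(Arrays.copyOfRange(payload, offset, end));
        }
        return segments;
    }

    public static void main(String[] args) {
        byte[] payload = new byte[150 * 1024]; // 150 KB payload
        List<byte[]> parts = segment(payload);
        System.out.println(parts.size() + " segments"); // 64 + 64 + 22 KB
    }
}
```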
+N/A
+N/A
+N/A
For security reasons, upgraded log4j to version 2.13.2
+Use RestEasy JAX-RS library
For security reasons, removed Jersey JAX-RS library
+N/A
+For simplicity, retire route-substitution admin endpoint. Route substitution uses a simple static table in +route-substitution.yaml.
+N/A
+N/A
+SimpleRBAC class is retired
+N/A
+Retired proprietary config manager since we can use the "BeforeApplication" approach to load config from Kubernetes +configMap or other systems of config record.
+N/A
+N/A
Kafka-connector will shut down the application instance when the EventProducer cannot send events to Kafka.
This allows the infrastructure to restart the application instance automatically.
+N/A
+N/A
+N/A
+N/A
+Feature to disable PoJo deserialization so that caller can decide if the result set should be in PoJo or a Map.
+N/A
+Added HTTP relay feature in rest-automation project
+N/A
BeforeApplication annotation - this allows a user application to execute some setup logic before the main
application starts, e.g. modifying parameters in application.properties
Static HTML folder may be overridden when -html file_path is given when starting the JAR file.
N/A
+N/A
+Updated Spring Boot to v2.2.1
Multi-tenancy support for event streams (Hazelcast and Kafka).
This allows the use of a single event stream cluster for multiple non-prod environments.
Production must use a separate event stream cluster for security reasons.
+N/A
+N/A
+language pack API key obtained from environment variable
+N/A
+rest-core subproject has been merged with rest-spring
+N/A
+N/A
+Minor refactoring of kafka-connector and hazelcast-connector to ensure that they can coexist if you want to include +both of these dependencies in your project.
+This is for convenience of dev and testing. In production, please select only one cloud connector library to reduce +memory footprint.
+Add inactivity expiry timer to ObjectStreamIO so that house-keeper can clean up resources that are idle
+N/A
By default, the GSON serializer converts all numbers to double, resulting in unwanted decimal points for integer
and long values. To handle custom map serialization for correct representation of numbers, an unintended side
effect was introduced in earlier releases: a list of inner PoJo would be incorrectly serialized as a map,
resulting in a casting exception. This release resolves the issue.
+N/A
+N/A
+System log service
+Refactoring of Hazelcast event stream connector library to sync up with the new Kafka connector.
+Language-support service application for Python, Node.js and Go, etc. +Python language pack project is available at https://github.com/Accenture/mercury-python
+N/A
platform-core (project)
rest-spring
rest-spring
N/A
+N/A
Added retry logic in the persistent queue for when the OS cannot update local file metadata in real-time on
Windows-based machines.
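The retry idea can be sketched as a generic helper (illustrative only; the names and parameters here are hypothetical, not the persistent queue's actual code):

```java
import java.util.concurrent.Callable;

// Generic retry helper: re-run a task a limited number of times with a fixed delay,
// similar in spirit to retrying transient file metadata errors.
public class Retry {
    public static <T> T withRetries(Callable<T> task, int maxAttempts, long delayMs) {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be at least 1");
        }
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = new RuntimeException("attempt " + attempt + " failed", e);
                try {
                    Thread.sleep(delayMs); // back off before the next attempt
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw last;
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        String result = withRetries(() -> {
            if (++calls[0] < 3) {
                throw new IllegalStateException("metadata not ready");
            }
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```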
+N/A
+pom.xml changes - update with latest 3rd party open sources dependencies.
platform-core

#
# additional security to protect against model injection
# comma separated list of model packages that are considered safe to be used for object deserialization
#
#safe.data.models=com.accenture.models

rest-spring

The "/env" endpoint is added. See sample application.properties below:

#
# environment and system properties to be exposed to the "/env" admin endpoint
#
show.env.variables=USER, TEST
show.application.properties=server.port, cloud.connector
+N/A
+platform-core
Use Java Future and an elastic cached thread pool for executing user functions.
+N/A
+Hazelcast support is added. This includes two projects (hazelcast-connector and hazelcast-presence).
+Hazelcast-connector is a cloud connector library. Hazelcast-presence is the "Presence Monitor" for monitoring the +presence status of each application instance.
+platform-core
The "fixed resource manager" feature is removed because the same outcome can be achieved at the application level. +e.g. The application can broadcast requests to multiple application instances with the same route name and use a +callback function to receive response asynchronously. The services can provide resource metrics so that the caller +can decide which is the most available instance to contact.
+For simplicity, resources management is better left to the cloud platform or the application itself.
+N/A
+N/A
+ +In the interest of fostering an open and welcoming environment, we as +contributors and maintainers pledge to making participation in our project and +our community a harassment-free experience for everyone, regardless of age, body +size, disability, ethnicity, gender identity and expression, level of experience, +nationality, personal appearance, race, religion, or sexual identity and +orientation.
+Examples of behavior that contributes to creating a positive environment +include:
+Examples of unacceptable behavior by participants include:
+Project maintainers are responsible for clarifying the standards of acceptable +behavior and are expected to take appropriate and fair corrective action in +response to any instances of unacceptable behavior.
+Project maintainers have the right and responsibility to remove, edit, or +reject comments, commits, code, wiki edits, issues, and other contributions +that are not aligned to this Code of Conduct, or to ban temporarily or +permanently any contributor for other behaviors that they deem inappropriate, +threatening, offensive, or harmful.
+This Code of Conduct applies both within project spaces and in public spaces +when an individual is representing the project or its community. Examples of +representing a project or community include using an official project e-mail +address, posting via an official social media account, or acting as an appointed +representative at an online or offline event. Representation of a project may be +further defined and clarified by project maintainers.
+Instances of abusive, harassing, or otherwise unacceptable behavior may be +reported by contacting Kevin Bader (the current project maintainer). All +complaints will be reviewed and investigated and will result in a response that +is deemed necessary and appropriate to the circumstances. The project team is +obligated to maintain confidentiality with regard to the reporter of an incident. +Further details of specific enforcement policies may be posted separately.
+Project maintainers who do not follow or enforce the Code of Conduct in good +faith may face temporary or permanent repercussions as determined by other +members of the project's leadership.
+This Code of Conduct is adapted from the Contributor Covenant, version 1.4, +available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
+ +Thanks for taking the time to contribute!
+The following is a set of guidelines for contributing to Mercury and its packages, which are hosted +in the Accenture Organization on GitHub. These are mostly +guidelines, not rules. Use your best judgment, and feel free to propose changes to this document +in a pull request.
+This project and everyone participating in it is governed by our +Code of Conduct. By participating, you are expected to uphold this code. +Please report unacceptable behavior to Kevin Bader, who is the current project maintainer.
+We follow the standard GitHub workflow. +Before submitting a Pull Request:
Update the CHANGELOG.md file with your current change in the form of [Type of change e.g. Config, Kafka, etc.]
with a short description of what it is all about and a link to the issue or pull request,
and choose a suitable section (i.e., changed, added, fixed, removed, deprecated).

When we make a significant decision in how to write code, or how to maintain the project and
what we can or cannot support, we will document it using
Architecture Decision Records (ADR).
Take a look at the design notes for existing ADRs.
If you have a question around how we do things, check to see if it is documented
there. If it is not documented there, please ask us - chances are you're not the only one
wondering. Of course, also feel free to challenge the decisions by starting a discussion on the
mailing list.
As an organization, Accenture believes in building an inclusive workplace and contributing to a world where equality thrives. Certain terms or expressions can unintentionally harm, perpetuate damaging stereotypes, and insult people. Inclusive language avoids bias, slang terms, and word choices which express derision of groups of people based on race, gender, sexuality, or socioeconomic status. The Accenture North America Technology team created this guidebook to provide Accenture employees with a view into inclusive language and guidance for working to avoid its use—helping to ensure that we communicate with respect, dignity and fairness.
+How to use this guide?
+Accenture has over 514,000 employees from diverse backgrounds, who perform consulting and delivery work for an equally diverse set of clients and partners. When communicating with your colleagues and representing Accenture, consider the connotation, however unintended, of certain terms in your written and verbal communication. The guidelines are intended to help you recognize non-inclusive words and understand potential meanings that these words might convey. Our goal with these recommendations is not to require you to use specific words, but to ask you to take a moment to consider how your audience may be affected by the language you choose.
| Inclusive Categories | Non-inclusive term | Replacement | Explanation |
|---|---|---|---|
| Race, Ethnicity & National Origin | master | primary, client, source, leader | Using the terms “master/slave” in this context inappropriately normalizes and minimizes the very large magnitude that slavery and its effects have had in our history. |
| | slave | secondary, replica, follower | |
| | blacklist | deny list, block list | The term “blacklist” was first used in the early 1600s to describe a list of those who were under suspicion and thus not to be trusted, whereas “whitelist” referred to those considered acceptable. Accenture does not want to promote the association of “black” and negative, nor the connotation of “white” being the inverse, or positive. |
| | whitelist | allow list, approved list | |
| | native | original, core feature | Referring to “native” vs “non-native” to describe technology platforms carries overtones of minimizing the impact of colonialism on native people, and thus minimizes the negative associations the terminology has in the latter context. |
| | non-native | non-original, non-core feature | |
| Gender & Sexuality | man-hours | work-hours, business-hours | When people read the words ‘man’ or ‘he,’ people often picture males only. Usage of the male terminology subtly suggests that only males can perform certain work or hold certain jobs. Gender-neutral terms include the whole audience, and thus using terms such as “business executive” instead of “businessman,” or informally, “folks” instead of “guys” is preferable because it is inclusive. |
| | man-days | work-days, business-days | |
| Ability Status & (Dis)abilities | sanity check, insanity check | confidence check, quality check, rationality check | Using the “Human Engagement, People First” approach, putting people - all people - at the center is important. Denoting ability status in the context of inferior or problematic work implies that people with mental illnesses are inferior, wrong, or incorrect. |
| | dummy variables | indicator variables | |
| Violence | STONITH, kill, hit | conclude, cease, discontinue | Using the “Human Engagement, People First” approach, putting people - all people - at the center is important. Denoting ability status in the context of inferior or problematic work implies that people with mental illnesses are inferior, wrong, or incorrect. |
| | one throat to choke | single point of contact, primary contact | |
This guidebook is a living document and will be updated as terminology evolves. We encourage our users to provide feedback on the effectiveness of this document and we welcome additional suggestions. Contact us at Technology_ProjectElevate@accenture.com.
+ +The foundation library (platform-core) has been integrated with Java 21 virtual thread and +Kotlin suspend function features.
+When a user function makes a RPC call using virtual thread or suspend function, +the user function appears to be "blocked" so that the code can execute sequentially. +Behind the curtain, the function is actually "suspended".
This makes sequential code with RPC perform as well as reactive code.
More importantly, the sequential code represents the intent of the application clearly,
thus making the code easier to read and maintain.
+You can precisely control how your functions execute, using virtual threads, suspend functions +or kernel thread pools to yield the highest performance and throughput.
+We are using Gson for its minimalist design.
+We have customized the serialization behavior to be similar to Jackson and other serializers. +i.e. Integer and long values are kept without decimal points.
+For API functional compatibility with Jackson, we have added the writeValueAsString, +writeValueAsBytes and readValue methods.
+The convertValue method has been consolidated into the readValue method.
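The number handling can be illustrated without Gson: a deserializer that reads every JSON number as a double can restore integral values as long, for example (an illustration of the idea, not Mercury's actual serializer code):

```java
// Illustrative sketch: restore integral numbers that a JSON engine has widened to double,
// so that integer and long values are kept without decimal points.
public class NumberNormalizer {
    public static Object normalize(double value) {
        if (value == Math.rint(value) && !Double.isInfinite(value)
                && Math.abs(value) <= Long.MAX_VALUE) {
            return (long) value; // integral: keep without a decimal point
        }
        return value; // genuine floating point value
    }

    public static void main(String[] args) {
        System.out.println(normalize(42.0)); // 42, not 42.0
        System.out.println(normalize(3.5));  // 3.5
    }
}
```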
For efficiency and serialization performance, we use MsgPack as a schemaless binary transport for
EventEnvelope, which contains event metadata, headers and payload.
+For consistency, we have customized Spring Boot and Servlet serialization and exception handlers.
Mercury uses the temporary local file system (/tmp) as an overflow area for events when the
consumer is slower than the producer. This event buffering design means that the user application
does not have to handle back-pressure logic directly.
However, it does not restrict you from implementing your own flow-control logic.
In Mercury version 1, the Akka actor system was used as the in-memory event bus.
Since Mercury version 2, we have migrated from Akka to Eclipse Vert.x.
+In Mercury version 3, we extend the engine to be fully non-blocking with low-level control +of application performance and throughput.
+In Mercury version 3.1, the platform core engine is fully integrated with Java 21 virtual thread.
The platform-core includes a non-blocking HTTP and websocket server for standalone operation without
Spring Boot. The rest-spring-3 library is designed to turn your code into a Spring Boot application.

You may also use the platform-core library with a regular Spring Boot application without the
rest-spring-3 library if you prefer.
The following parameters are used by the system. You can define them in either the application.properties or +application.yml file.
+When you use both application.properties and application.yml, the parameters in application.properties will take +precedence.
| Key | Value (example) | Required |
|---|---|---|
| application.name | Application name | Yes |
| spring.application.name | Alias for application name | Yes*1 |
| info.app.version | major.minor.build (e.g. 1.0.0) | Yes |
| info.app.description | Something about your application | Yes |
| web.component.scan | your own package path or parent path | Yes |
| server.port | e.g. 8083 | Yes*1 |
| rest.automation | true if you want to enable automation | Optional |
| rest.server.port | e.g. 8085 | Optional |
| websocket.server.port | Alias for rest.server.port | Optional |
| static.html.folder | classpath:/public/ | Yes |
| mime.types | Map of file extensions to MIME types (application.yml only) | Optional |
| spring.web.resources.static-locations | (alias for static.html.folder) | Yes*1 |
| spring.mvc.static-path-pattern | /** | Yes*1 |
| jax.rs.application.path | /api | Optional* |
| show.env.variables | comma separated list of variable names | Optional |
| show.application.properties | comma separated list of property names | Optional |
| cloud.connector | kafka, none, etc. | Optional |
| cloud.services | e.g. some.interesting.service | Optional |
| snake.case.serialization | true (recommended) | Optional |
| safe.data.models | packages pointing to your PoJo classes | Optional |
| protect.info.endpoints | true to disable actuators. Default: true | Optional |
| trace.http.header | comma separated list. Default "X-Trace-Id" | Optional |
| index.redirection | comma separated list of URI paths | Optional* |
| index.page | default is index.html | Optional* |
| hsts.feature | default is true | Optional* |
| application.feature.route.substitution | default is false | Optional |
| route.substitution.file | points to a config file | Optional |
| application.feature.topic.substitution | default is false | Optional |
| topic.substitution.file | points to a config file | Optional |
| kafka.replication.factor | 3 | Kafka |
| cloud.client.properties | e.g. classpath:/kafka.properties | Connector |
| user.cloud.client.properties | e.g. classpath:/second-kafka.properties | Connector |
| default.app.group.id | groupId for the app instance. Default: appGroup | Connector |
| default.monitor.group.id | groupId for the presence-monitor. Default: monitorGroup | Connector |
| monitor.topic | topic for the presence-monitor. Default: service.monitor | Connector |
| app.topic.prefix | Default: multiplex (DO NOT change) | Connector |
| app.partitions.per.topic | Max Kafka partitions per topic. Default: 32 | Connector |
| max.virtual.topics | Max virtual topics = partitions * topics. Default: 288 | Connector |
| max.closed.user.groups | Number of closed user groups. Default: 10, range: 3 - 30 | Connector |
| closed.user.group | Closed user group. Default: 1 | Connector |
| transient.data.store | Default is "/tmp/reactive" | Optional |
| running.in.cloud | Default is false (set to true if containerized) | Optional |
| multicast.yaml | points to the multicast.yaml config file | Optional |
| journal.yaml | points to the journal.yaml config file | Optional |
| deferred.commit.log | Default is false (may be set to true in unit tests) | Optional |

`*` - when using the "rest-spring" library
You can place static HTML files (e.g. the HTML bundle for a UI program) in the "resources/public" folder or +in the local file system using the "static.html.folder" parameter.
+The system supports a bare minimal list of file extensions to MIME types. If your use case requires additional
+MIME type mapping, you may define them in the application.yml
configuration file under the mime.types
+section like this:
mime.types:
+ pdf: 'application/pdf'
+ doc: 'application/msword'
+
+Note that application.properties file cannot be used for the "mime.types" section because it only supports text +key-values.
If rest.automation=true and rest.server.port or server.port are configured, the system will start
a lightweight non-blocking HTTP server. If rest.server.port is not available, it will fall back to server.port.

If rest.automation=false and you have a websocket server endpoint annotated as WebsocketService, the system
will start a non-blocking Websocket server with a minimalist HTTP server that provides actuator services.
If websocket.server.port is not available, it will fall back to rest.server.port or server.port.

If you add a Spring Boot dependency, Spring Boot will use server.port to start Tomcat or a similar HTTP server.

The built-in lightweight non-blocking HTTP server and Spring Boot can co-exist when you configure
rest.server.port and server.port to use different ports.

Note that the websocket.server.port parameter is an alias of rest.server.port.
The system handles back-pressure automatically by overflowing events from memory to a transient data store.
As a cloud native best practice, the folder must be under "/tmp". The default is "/tmp/reactive".
The "running.in.cloud" parameter must be set to false when your apps are running in an IDE or on your laptop.
When running in Kubernetes, it can be set to true.
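A simplified sketch of the overflow concept is shown below. Mercury's elastic queue implementation is more sophisticated; this only illustrates the idea of keeping events in memory up to a limit and spilling the rest to a transient directory:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayDeque;
import java.util.Queue;

// Toy overflow buffer: hold events in memory up to a limit, spill the rest to disk.
public class OverflowBuffer {
    private final Queue<byte[]> memory = new ArrayDeque<>();
    private final int memoryLimit;
    private final Path spillDir;
    private int spilled = 0;

    public OverflowBuffer(int memoryLimit, Path spillDir) {
        this.memoryLimit = memoryLimit;
        this.spillDir = spillDir;
    }

    // keep the event in memory if there is room, otherwise spill it to a file
    public void offer(byte[] event) {
        if (memory.size() < memoryLimit) {
            memory.add(event);
        } else {
            try {
                Files.write(spillDir.resolve("event-" + (spilled++) + ".bin"), event);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }

    public int inMemory() { return memory.size(); }
    public int onDisk() { return spilled; }

    // produce 5 events with a memory limit of 2: 2 stay in memory, 3 spill to disk
    public static int[] demo() {
        try {
            Path dir = Files.createTempDirectory("reactive");
            OverflowBuffer buffer = new OverflowBuffer(2, dir);
            for (int i = 0; i < 5; i++) {
                buffer.offer(("event-" + i).getBytes());
            }
            return new int[] { buffer.inMemory(), buffer.onDisk() };
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        int[] counts = demo();
        System.out.println(counts[0] + " in memory, " + counts[1] + " on disk");
    }
}
```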
+PoJo may contain Java code. As a result, it is possible to inject malicious code that does harm when +deserializing a PoJo. This security risk applies to any JSON serialization engine.
+For added security and peace of mind, you may want to protect your PoJo package paths.
When the safe.data.models parameter is configured, the underlying serializers for JAX-RS, Spring RestController
and Servlets will respect this setting and enforce PoJo filtering.
If there is a genuine need to programmatically perform serialization, you may use the pre-configured serializer
so that the serialization behavior is consistent.

You can get an instance of the serializer with SimpleMapper.getInstance().getMapper().

The serializer may perform snake case or camel case serialization depending on the parameter
snake.case.serialization.

If you want to ensure snake case or camel case, you can select the serializer like this:
+SimpleObjectMapper snakeCaseMapper = SimpleMapper.getInstance().getSnakeCaseMapper();
+SimpleObjectMapper camelCaseMapper = SimpleMapper.getInstance().getCamelCaseMapper();
+
The trace.http.header parameter sets the HTTP header for the trace ID. When configured with more than one label,
the system will retrieve the trace ID from the corresponding HTTP header and propagate it through the transaction
that may be served by multiple services.

If a trace ID is present in an HTTP request, the system will use the same label to set the trace ID in the
HTTP response header.
+X-Trace-Id: a9a4e1ec-1663-4c52-b4c3-7b34b3e33697
+or
+X-Correlation-Id: a9a4e1ec-1663-4c52-b4c3-7b34b3e33697
+
+If you use the kafka-connector (cloud connector) and kafka-presence (presence monitor), you may want to +externalize kafka.properties like this:
+cloud.client.properties=file:/tmp/config/kafka.properties
+
+Note that "classpath" refers to embedded config file in the "resources" folder in your source code and "file" +refers to an external config file.
You may also use the embedded config file as a backup like this:
+cloud.client.properties=file:/tmp/config/kafka.properties,classpath:/kafka.properties
+
+To enable distributed trace logging, please set this in log4j2.xml:
+<logger name="org.platformlambda.core.services.DistributedTrace" level="INFO" />
+
+The platform-core includes built-in serializers for JSON and XML in the AsyncHttpClient, JAX-RS and +Spring RestController. The XML serializer is designed for simple use cases. If you need to handle more +complex XML data structure, you can disable the XML serializer by adding the following HTTP request +header.
+X-Raw-Xml=true
+
| Chapter-9 | Home | Appendix-II |
|---|---|---|
| API Overview | Table of Contents | Reserved route names |
The Mercury foundation code is written using the same core API and each function has a route name.
+The following route names are reserved. Please DO NOT use them in your application functions to avoid breaking +the system unintentionally.
| Route | Purpose | Modules |
|---|---|---|
| actuator.services | Actuator endpoint services | platform-core |
| elastic.queue.cleanup | Elastic event buffer clean up task | platform-core |
| distributed.tracing | Distributed tracing logger | platform-core |
| system.ws.server.cleanup | Websocket server cleanup service | platform-core |
| http.auth.handler | REST automation authentication router | platform-core |
| event.api.service | Event API service | platform-core |
| stream.to.bytes | Event API helper function | platform-core |
| system.service.registry | Distributed routing registry | Connector |
| system.service.query | Distributed routing query | Connector |
| cloud.connector.health | Cloud connector health service | Connector |
| cloud.manager | Cloud manager service | Connector |
| presence.service | Presence signal service | Connector |
| presence.housekeeper | Presence keep-alive service | Connector |
| cloud.connector | Cloud event emitter | Connector |
| init.multiplex.* | reserved for event stream startup | Connector |
| completion.multiplex.* | reserved for event stream clean up | Connector |
| async.http.request | HTTP request event handler | REST automation |
| async.http.response | HTTP response event handler | REST automation |
| cron.scheduler | Cron job scheduler | Simple Scheduler |
| init.service.monitor.* | reserved for event stream startup | Service monitor |
| completion.service.monitor.* | reserved for event stream clean up | Service monitor |
The following optional route names will be detected by the system for additional user defined features.
| Route | Purpose |
|---|---|
| additional.info | User application function to return information about your application status |
| distributed.trace.forwarder | Custom function to forward performance metrics to a telemetry system |
| transaction.journal.recorder | Custom function to record transaction request-response payloads into an audit DB |
The additional.info function, if implemented, will be invoked from the "/info" endpoint and its response
will be merged into the "/info" response.

For distributed.trace.forwarder and transaction.journal.recorder, please refer to Chapter-5
for details.
The following event headers are injected by the system as READ only metadata. They are available from the +input "headers". However, they are not part of the EventEnvelope.
| Header | Purpose |
|---|---|
| my_route | route name of your function |
| my_trace_id | trace ID, if any, for the incoming event |
| my_trace_path | trace path, if any, for the incoming event |
You can create a trackable PostOffice using the "headers" and the "instance" parameters in the input arguments
of your function. The FastRPC instance requires only the "headers" parameter.
+// Java
+PostOffice po = new PostOffice(headers, instance);
+
+// Kotlin
+val fastRPC = FastRPC(headers);
+
| Appendix-I | Home | Appendix-III |
|---|---|---|
| Application Configuration | Table of Contents | Actuator and HTTP client |
The following admin endpoints are available.

```
GET /info
GET /info/routes
GET /info/lib
GET /env
GET /health
GET /livenessprobe
POST /shutdown
```

| Endpoint | Purpose |
|---|---|
| /info | Describe the application |
| /info/routes | Show public routing table |
| /info/lib | List libraries packed with this executable |
| /env | List all private and public function route names and selected environment variables |
| /health | Application health check endpoint |
| /livenessprobe | Check if the application is running normally |
| /shutdown | Stop the application; an operator sends a POST request to this endpoint |
For the shutdown endpoint, you must provide an `X-App-Instance` HTTP header whose value is the "origin ID" of the application. You can get this value from the "/info" endpoint.

You can extend the "/health" endpoint by implementing and registering lambda functions to be added to the "health check" dependencies.
```properties
mandatory.health.dependencies=cloud.connector.health, demo.health
optional.health.dependencies=other.service.health
```

Your custom health service must respond to two request types, indicated by the event header "type": an "info" request that describes the service and a "health" request that performs the actual check.

A sample health service is available in the `DemoHealth` class of the lambda-example project as follows:
```java
@PreLoad(route="demo.health", instances=5)
public class DemoHealth implements LambdaFunction {

    private static final String TYPE = "type";
    private static final String INFO = "info";
    private static final String HEALTH = "health";

    @Override
    public Object handleEvent(Map<String, String> headers, Object input, int instance) {
        /*
         * The interface contract for a health check service includes both INFO and HEALTH responses
         */
        if (INFO.equals(headers.get(TYPE))) {
            Map<String, Object> result = new HashMap<>();
            result.put("service", "demo.service");
            result.put("href", "http://127.0.0.1");
            return result;
        }
        if (HEALTH.equals(headers.get(TYPE))) {
            /*
             * This is a placeholder for checking a downstream service.
             *
             * You may implement your own logic to test if a downstream service is running fine.
             * If running, just return a health status message.
             * Otherwise,
             *     throw new AppException(status, message)
             */
            return "demo.service is running fine";
        }
        throw new IllegalArgumentException("type must be info or health");
    }
}
```
The "async.http.request" function can be used as a non-blocking HTTP client.

To make an HTTP request to an external REST endpoint, you can create an HTTP request object using the `AsyncHttpRequest` class and make an async RPC call to the "async.http.request" function like this:

```java
PostOffice po = new PostOffice(headers, instance);
AsyncHttpRequest req = new AsyncHttpRequest();
req.setMethod("GET");
req.setHeader("accept", "application/json");
req.setUrl("/api/hello/world?hello world=abc");
req.setQueryParameter("x1", "y");
List<String> list = new ArrayList<>();
list.add("a");
list.add("b");
req.setQueryParameter("x2", list);
req.setTargetHost("http://127.0.0.1:8083");
EventEnvelope request = new EventEnvelope().setTo("async.http.request").setBody(req);
Future<EventEnvelope> res = po.asyncRequest(request, 5000);
res.onSuccess(response -> {
    // do something with the result
});
```
In a suspend function using KotlinLambdaFunction, the same logic may look like this:

```kotlin
val fastRPC = FastRPC(headers)
val req = AsyncHttpRequest()
req.setMethod("GET")
req.setHeader("accept", "application/json")
req.setUrl("/api/hello/world?hello world=abc")
req.setQueryParameter("x1", "y")
val list: MutableList<String> = ArrayList()
list.add("a")
list.add("b")
req.setQueryParameter("x2", list)
req.setTargetHost("http://127.0.0.1:8083")
val request = EventEnvelope().setTo("async.http.request").setBody(req)
val response = fastRPC.awaitRequest(request, 5000)
// do something with the result
```
In most cases, you can just set a HashMap into the request body and specify the content-type as JSON or XML. The system will perform serialization properly.

Example code may look like this:

```java
AsyncHttpRequest req = new AsyncHttpRequest();
req.setMethod("POST");
req.setHeader("accept", "application/json");
req.setHeader("content-type", "application/json");
req.setUrl("/api/book");
req.setTargetHost("https://service_provider_host");
// where mapOfKeyValues is a HashMap
req.setBody(mapOfKeyValues);
```
For larger payloads, you may use the streaming method. See sample code below:

```java
int len;
byte[] buffer = new byte[4096];
FileInputStream in = new FileInputStream(myFile);
ObjectStreamIO stream = new ObjectStreamIO(timeoutInSeconds);
ObjectStreamWriter out = stream.getOutputStream();
while ((len = in.read(buffer, 0, buffer.length)) != -1) {
    out.write(buffer, 0, len);
}
// closing the output stream sends an EOF signal to the stream
out.close();
// tell the HTTP client to read the input stream
req.setStreamRoute(stream.getInputStreamId());
```
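The 4 KB block-copy loop above uses only standard Java I/O around the Mercury object stream. As a self-contained sketch (with in-memory byte-array streams standing in for the file and the object stream, which are assumptions for illustration only), the buffering behavior can be exercised like this:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class BlockCopyDemo {
    public static void main(String[] args) throws IOException {
        // a 10,000-byte payload stands in for the file content
        byte[] payload = new byte[10000];
        InputStream in = new ByteArrayInputStream(payload);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int len;
        byte[] buffer = new byte[4096];
        int blocks = 0;
        // same loop shape as the streaming sample above
        while ((len = in.read(buffer, 0, buffer.length)) != -1) {
            out.write(buffer, 0, len);
            blocks++;
        }
        out.close();
        // 10,000 bytes are copied as blocks of 4096, 4096 and 1808 bytes
        System.out.println(blocks + " blocks, " + out.size() + " bytes");
    }
}
```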
If the content length is not given, the response body will be received as a stream.

Your application should check if the HTTP response header "stream" exists. Its value is an input "stream ID".

For simplicity and readability, we recommend using a "suspend function" to read the input byte-array stream. It may look like this:

```kotlin
val po = PostOffice(headers, instance)
val fastRPC = FastRPC(headers)

val req = EventEnvelope().setTo(streamId).setHeader("type", "read")
while (true) {
    val event = fastRPC.awaitRequest(req, 5000)
    if (event.status == 408) {
        // handle input stream timeout
        break
    }
    if ("eof" == event.headers["type"]) {
        po.send(streamId, Kv("type", "close"))
        break
    }
    if ("data" == event.headers["type"]) {
        val block = event.body
        if (block is ByteArray) {
            // handle the data block from the input stream
        }
    }
}
```
IMPORTANT: Do not set the "content-length" HTTP header. The system automatically computes the correct content-length for small payloads and uses the chunking method for large payloads.
| Appendix-II | Home |
|---|---|
| Reserved route names | Table of Contents |
Mercury version 3 is a toolkit for writing composable applications.

At the platform level, composable architecture refers to loosely coupled platform services, utilities, and business applications. With modular design, you can assemble platform components and applications to create new use cases or to adjust to an ever-changing business environment and requirements. Domain-Driven Design (DDD), Command Query Responsibility Segregation (CQRS) and Microservices patterns are popular tools that architects use to build composable architecture. You may deploy applications in containers, serverless platforms or by other means.

At the application level, a composable application is assembled from modular software components or functions that are self-contained and pluggable. You can mix and match functions to form new applications. You can retire outdated functions without adverse side effects to a production system. Multiple versions of a function can exist, and you can decide how to route user requests to different versions of a function. Applications become easier to design, develop, maintain, deploy, and scale.
> Figure 1 - Composable application architecture
As shown in Figure 1, a minimalist composable application consists of three user-defined components: a main application (the entry point), user functions containing business logic, and optionally an event orchestration function. They are supported by a composable event engine that provides REST automation, an in-memory event system ("event loop") and an optional localized pub/sub system.
Each application has an entry point. You may implement an entry point in a main application like this:

```java
@MainApplication
public class MainApp implements EntryPoint {
    // assumes an SLF4J logger, e.g.
    // private static final Logger log = LoggerFactory.getLogger(MainApp.class);
    public static void main(String[] args) {
        AppStarter.main(args);
    }
    @Override
    public void start(String[] args) {
        // your startup logic here
        log.info("Started");
    }
}
```
For a command-line use case, your main application ("MainApp") module would get the command-line arguments and send the request as an event to a business logic function for processing.

For a backend application, the MainApp is usually used to perform "initialization" or setup steps for your services.
Your user function module may look like this:

```java
@PreLoad(route = "hello.simple", instances = 10)
public class SimpleDemoEndpoint implements TypedLambdaFunction<AsyncHttpRequest, Object> {
    @Override
    public Object handleEvent(Map<String, String> headers, AsyncHttpRequest input, int instance) {
        // business logic here
        return result;
    }
}
```
Each function in a composable application should be implemented following the first principle of "input-process-output". It should be stateless and self-contained, i.e. it has no direct dependencies on any other functions in the composable application. Each function is addressable by a unique "route name" and you can use PoJo for input and output.

In the above example, the function is reachable with the route name "hello.simple". The input is an AsyncHttpRequest object, meaning that this function is a "Backend for Frontend (BFF)" module that is invoked by a REST endpoint.

When a function finishes processing, its output will be delivered to the next function.
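The "input-process-output" principle can be illustrated outside Mercury with plain Java. The handler shape below is a simplified stand-in for `handleEvent`, not the actual interface: a stateless function that maps its input to its output with no shared state.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;

public class InputProcessOutput {
    // loosely mirrors handleEvent(headers, input) -> output
    static final BiFunction<Map<String, String>, String, Map<String, Object>> echo =
        (headers, input) -> {
            Map<String, Object> result = new HashMap<>();
            result.put("headers", headers);
            result.put("body", input);
            return result;
        };

    public static void main(String[] args) {
        // the handler has no side effects: output depends only on input
        Map<String, Object> out = echo.apply(Map.of("my_route", "hello.simple"), "hello");
        System.out.println(out.get("body"));
    }
}
```

Because the function touches nothing outside its arguments, it can be tested in isolation and swapped for another implementation with the same contract.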
> Writing code following the first principle of "input-process-output" promotes Test-Driven Development (TDD) because the interface contract is clearly defined. Self-containment also makes code more readable.
A transaction may pass through one or more user functions. For example, you can write a user function to receive a request from a user, make requests to other user functions, and consolidate their responses before responding to the user.

Note that event orchestration is optional. In the most basic REST application, the REST automation system can send the user request to a function directly. When the function finishes processing, its output will be routed as an HTTP response to the user.

Event routing is done behind the curtain by the composable engine, which consists of the REST automation service, an in-memory event system ("event loop") and an optional localized pub/sub system.
REST automation creates REST endpoints by configuration rather than code. You can define a REST endpoint like this:

```yaml
  - service: "hello.world"
    methods: ['GET']
    url: "/api/hello/world"
    timeout: 10s
```
In this example, when an HTTP request is received at the URL path "/api/hello/world", the REST automation system will convert the HTTP request into an event for onward delivery to the user-defined function "hello.world". Your function will receive the HTTP request as input and return a result set that will be sent as an HTTP response to the user.

For more sophisticated business logic, you can write a function to receive the HTTP request and perform "event orchestration", i.e. you can do data transformation and send "events" to other user functions to process the request.
The composable engine encapsulates the Eclipse Vert.x event bus library for event routing. It exposes the "PostOffice" API for your orchestration function to send async or RPC events.

The in-memory event system is designed for point-to-point delivery. In some use cases, you may want a broadcast channel so that more than one function can receive the same event, for example, sending notification events to multiple functions. The optional local pub/sub system provides this multicast capability.

While REST is the most popular user-facing interface, there are other communication means such as event triggers in a serverless environment. You can write a function to listen to these external event triggers and send the events to your user-defined functions. This custom "adapter" pattern is illustrated as the dotted line path in Figure 1.
The first step is to build the Mercury libraries from source. To simplify the process, you may publish the libraries to your enterprise artifactory.

```shell
mkdir sandbox
cd sandbox
git clone https://github.com/Accenture/mercury-composable.git
cd mercury-composable
mvn clean install
```
The above sample script clones the Mercury open-source project and builds the libraries from source. The prerequisites are Maven 3.8.6 and OpenJDK 21 or higher.

This will build the Mercury libraries and the sample applications.

The `platform-core` project is the foundation library for writing composable applications.
Assuming you follow the suggested project directory above, you can run a sample composable application called "lambda-example" like this:

```shell
cd sandbox/mercury-composable/examples/lambda-example
java -jar target/lambda-example-3.1.1.jar
```

You will find the following console output when the app starts:

```
Exact API paths [/api/event, /api/hello/download, /api/hello/upload, /api/hello/world]
Wildcard API paths [/api/hello/download/(unknown), /api/hello/generic/{id}]
```
Application parameters are defined in the resources/application.properties file (or application.yml if you prefer). When `rest.automation=true` is defined, the system will parse the "rest.yaml" configuration for REST endpoints.

When REST automation is turned on, the system will start a lightweight non-blocking HTTP server. By default, it will search for the "rest.yaml" file from "/tmp/config/rest.yaml" and then from "classpath:/rest.yaml". Classpath refers to configuration files under the "resources" folder in your source code project.

To instruct the system to load from a specific path, you can add the `rest.automation.yaml` parameter. To select another server port, change the `rest.server.port` parameter.

```properties
rest.server.port=8085
rest.automation=true
rest.automation.yaml=classpath:/rest.yaml
```
To create a REST endpoint, you can add an entry in the "rest" section of the "rest.yaml" config file like this:

```yaml
  - service: "hello.download"
    methods: [ 'GET' ]
    url: "/api/hello/download"
    timeout: 20s
    cors: cors_1
    headers: header_1
    tracing: true
```
The above example creates the "/api/hello/download" endpoint to route requests to the "hello.download" function. We will elaborate more about REST automation in Chapter-3.

A function is executed when an event arrives. You can define a "route name" for each function. It is created by a class implementing one of the following interfaces:

- `TypedLambdaFunction` allows you to use PoJo or HashMap as input and output
- `LambdaFunction` is untyped, but it will transport PoJo from the caller to the input of your function
- `KotlinLambdaFunction` is a typed lambda function for coding in Kotlin

> TypedLambdaFunction and LambdaFunction are pure Java functional wrappers.
With the application started in a command terminal, please use a browser to point to: http://127.0.0.1:8085/api/hello/world

It will echo the HTTP headers from the browser like this:

```json
{
  "headers": {},
  "instance": 1,
  "origin": "20230324b709495174a649f1b36d401f43167ba9",
  "body": {
    "headers": {
      "sec-fetch-mode": "navigate",
      "sec-fetch-site": "none",
      "sec-ch-ua-mobile": "?0",
      "accept-language": "en-US,en;q=0.9",
      "sec-ch-ua-platform": "\"Windows\"",
      "upgrade-insecure-requests": "1",
      "sec-fetch-user": "?1",
      "accept": "text/html,application/xhtml+xml,application/xml,*/*",
      "sec-fetch-dest": "document",
      "user-agent": "Mozilla/5.0 Chrome/111.0.0.0"
    },
    "method": "GET",
    "ip": "127.0.0.1",
    "https": false,
    "url": "/api/hello/world",
    "timeout": 10
  }
}
```
The function is defined in the MainApp class in the source project with the following segment of code:

```java
LambdaFunction echo = (headers, input, instance) -> {
    log.info("echo #{} got a request", instance);
    Map<String, Object> result = new HashMap<>();
    result.put("headers", headers);
    result.put("body", input);
    result.put("instance", instance);
    result.put("origin", platform.getOrigin());
    return result;
};
// Register the above inline lambda function
platform.register("hello.world", echo, 10);
```

The Hello World function is written as an "inline lambda function". It is registered programmatically using the `platform.register` API.
The rest of the functions in the demo app are written using regular classes implementing the LambdaFunction, TypedLambdaFunction and KotlinLambdaFunction interfaces.

Let's examine the `SimpleDemoEndpoint` example under the "services" folder. It may look like this:

```java
@PreLoad(route = "hello.simple", instances = 10)
public class SimpleDemoEndpoint implements TypedLambdaFunction<AsyncHttpRequest, Object> {
    @Override
    public Object handleEvent(Map<String, String> headers, AsyncHttpRequest input, int instance) {
        // business logic here
    }
}
```
The `PreLoad` annotation assigns a route name to the Java class and registers it with the in-memory event system. The `instances` parameter tells the system to create a number of workers to serve concurrent requests.

> Note that you don't need a lot of workers to handle a large number of users and requests provided that your function can finish execution very quickly.
By default, the system will run the function as a "virtual thread". There are three function execution strategies (virtual thread, suspend function and kernel thread pool). We will explain the concept in Chapter-2.

In a composable application, a function is designed using the first principle of "input-process-output".

In the "hello.simple" function above, the input is an HTTP request expressed as a class of `AsyncHttpRequest`. You can ignore the `headers` input argument for the moment. We will cover it later.

The output is declared as "Object" so that the function can return any data structure using a HashMap or PoJo.

You may want to review the REST endpoint `/api/simple/{task}/*` in the rest.yaml config file to see how it is connected to the "hello.simple" function.

We take a minimalist approach to the rest.yaml syntax. The parser will detect any syntax errors. Please check the application log to ensure all REST endpoint entries in the rest.yaml file are valid.
Using the lambda-example as a template, let's create your first function by adding a function in the "services" package folder. You will give it the route name "my.first.function" in the "PreLoad" annotation.

> Note that a route name must use lower-case letters and numbers separated by the period character.

```java
@PreLoad(route = "my.first.function", instances = 10)
public class MyFirstFunction implements TypedLambdaFunction<AsyncHttpRequest, Object> {

    @Override
    public Object handleEvent(Map<String, String> headers, AsyncHttpRequest input, int instance) {
        // your business logic here
        return input;
    }
}
```
To connect this function with a REST endpoint, you can declare a new REST endpoint in the rest.yaml like this:

```yaml
  - service: "my.first.function"
    methods: [ 'GET' ]
    url: "/api/hello/my/function"
    timeout: 20s
    cors: cors_1
    headers: header_1
    tracing: true
```
If you do not add any business logic, the above function will echo the incoming HTTP request object back to the browser.

Now you can examine the input HTTP request object and perform some data transformation before returning a result. The AsyncHttpRequest class allows you to access data structures such as the HTTP method, URL, path parameters, query parameters, cookies, etc.

When you click the "rebuild" button in the IDE and run the "MainApp", the new function will be available in the application. Alternatively, you can run `mvn clean package` to generate a new executable JAR and run the JAR from the command line using "java -jar target/lambda-example-3.1.1.jar".

To test your new function, visit http://127.0.0.1:8085/api/hello/my/function
Your function automatically uses the in-memory event bus. The HTTP request from the browser is converted to an event by the system for delivery to your function as the "input" argument.

The underlying HTTP server is asynchronous and non-blocking, i.e. it does not consume CPU resources while waiting for a response.

This composable architecture allows you to design and implement applications so that you have precise control of performance and throughput. Performance tuning is much easier.

You can assemble related functions in a single composable application, and it can be compiled and built into a single "executable" for deployment using `mvn clean package`. The executable JAR is in the target folder.
A composable application is by definition cloud native. It is designed to be deployable using Kubernetes or serverless platforms.

A sample Dockerfile for your executable JAR may look like this:

```dockerfile
FROM eclipse-temurin:21.0.1_12-jdk
EXPOSE 8083
WORKDIR /app
COPY target/your-app-name.jar .
ENTRYPOINT ["java","-jar","your-app-name.jar"]
```

The above Dockerfile will fetch OpenJDK 21 packaged in "Ubuntu 22.04 LTS".
| Home | Chapter-2 |
|---|---|
| Table of Contents | Function Execution Strategy |
In a composable application, each function is self-contained with zero or minimal dependencies.

A function is a class that implements the LambdaFunction, TypedLambdaFunction or KotlinLambdaFunction interface. Within each function boundary, it may have private methods that are fully contained within the class.

As discussed in Chapter-1, a function may look like this:

```java
@PreLoad(route = "my.first.function", instances = 10)
public class MyFirstFunction implements TypedLambdaFunction<AsyncHttpRequest, Object> {

    @Override
    public Object handleEvent(Map<String, String> headers, AsyncHttpRequest input, int instance) {
        // your business logic here
        return input;
    }
}
```
A function is an event listener with the "handleEvent" method. The data structures of the input and output are defined by the API interface contract during the application design phase.

In the above example, the input is AsyncHttpRequest because this function is designed to handle an HTTP request event from a REST endpoint defined in the "rest.yaml" configuration file. We set the output as "Object" so that there is flexibility in returning a HashMap or a PoJo. You can also enforce the use of a PoJo by updating the output type.

A single transaction may involve multiple functions. For example, the user submits a form from a browser that sends an HTTP request to a function. In the MVC pattern, the function receiving the user's input is the "controller". It carries out input validation and forwards the event to a business logic function that performs some processing and then submits the event to a data persistence function (the "model") to save a record into the database.

In a cloud native application, the transaction flow may be more sophisticated than the typical "MVC" style. You can do "event orchestration" in the function receiving the HTTP request and then make event requests to various functions.

This "event orchestration" can be done in code using the "PostOffice" and/or "FastRPC" APIs.

To further reduce coding effort, you can perform "event orchestration" by configuration using "Event Script". Event Script is available as an optional enterprise add-on module from Accenture.
You can add an authentication function using the optional `authentication` tag in a service. In "rest.yaml", a service for a REST endpoint refers to a function in your application.

An authentication function can be written using a TypedLambdaFunction that takes an "AsyncHttpRequest" as input. Your authentication function can return a boolean value to indicate if the request should be accepted or rejected.

A typical authentication function may validate an HTTP header or cookie, e.g. forward the "Bearer token" from the "Authorization" header to your organization's OAuth 2.0 Identity Provider for validation.

To approve an incoming request, your custom authentication function can return `true`. Optionally, you can add "session" key-values by returning an EventEnvelope like this:

```java
return new EventEnvelope().setHeader("user_id", "A12345").setBody(true);
```

The above example approves the incoming request and returns a "session" variable ("user_id": "A12345") to the next task.

If your authentication function returns `false`, the user will receive an "HTTP-401 Unauthorized" error response. You can also control the status code and error message by throwing an `AppException` like this:

```java
throw new AppException(401, "Invalid credentials");
```
A composable application is assembled from a collection of modular functions. For example, data persistence functions and authentication functions are likely to be reusable in many applications.

```java
@PreLoad(route = "my.first.function", instances = 10)
```

In the above function, the parameter "instances" tells the system to reserve a number of workers for the function. Workers run on demand to handle concurrent user requests.

Note that you can use a smaller number of workers to handle many concurrent users if your function finishes processing very quickly. If not, you should reserve more workers to handle the workload.

Concurrency requires careful planning for optimal performance and throughput. Let's review the strategies for function execution.
A function is executed when an event arrives. There are three function execution strategies.

| Strategy | Advantage | Disadvantage |
|---|---|---|
| Virtual thread | Highest throughput in terms of concurrent users. Functionally similar to a suspend function. | N/A |
| Suspend function | Sequential "non-blocking" for RPC (request-response) that makes code easier to read and maintain | Requires coding in the Kotlin language |
| Kernel threads | Highest performance in terms of operations per second | Lower number of concurrent threads due to high context-switching overheads |
By default, the system will run your function as a virtual thread because this is the most efficient execution strategy.

The "Thread" object in the standard library will operate in a non-blocking fashion. This means it is safe to use the Thread.sleep() method. It will release control to the event loop when your function enters sleep, thus freeing CPU resources for other functions.
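The claim above can be observed with plain JDK 21 virtual threads. This sketch is outside Mercury and uses only the standard library: a thousand sleeping tasks complete in roughly the time of one sleep, because each `Thread.sleep()` releases its carrier kernel thread instead of blocking it.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualSleepDemo {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // requires Java 21+: one virtual thread per submitted task
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 1000; i++) {
                pool.submit(() -> {
                    try {
                        Thread.sleep(200);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
        } // close() waits for all submitted tasks to finish
        long elapsed = System.currentTimeMillis() - start;
        // 1000 x 200 ms sleeps finish far sooner than 200 seconds
        System.out.println(elapsed < 10000);
    }
}
```

With kernel threads, the same workload would need a thousand threads or run in many sequential rounds.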
We have added "request" methods to the PostOffice API to support synchronous RPC that leverages virtual thread suspend/resume functionality.

```java
Future<EventEnvelope> future = po.request(requestEvent, timeout);
EventEnvelope result = future.get();

// alternatively, you can do:
EventEnvelope result = po.request(requestEvent, timeout).get();
```
However, there are a few things you should avoid when using virtual threads. In particular, avoid `synchronized` blocks around blocking calls, because they can pin the virtual thread to its carrier kernel thread, and avoid heavy use of thread-local variables.
If you prefer writing business logic in Kotlin, you may use suspend functions.

Similar to a virtual thread, a suspend function is a coroutine that can be suspended and resumed. The best use case for a suspend function is the handling of "sequential non-blocking" request-response. This is the same as "async/await" in node.js and other programming languages.

To implement a "suspend function", you must implement the KotlinLambdaFunction interface and write code in Kotlin.

If you are new to Kotlin, please download and run the JetBrains IntelliJ IDE. The quickest way to get productive in Kotlin is to write a few statements of Java code in a placeholder class and then copy-n-paste the Java statements into the KotlinLambdaFunction's handleEvent method. IntelliJ will automatically convert Java code into Kotlin.

The automated code conversion is mostly accurate (roughly 90%). You may need some touch-up to polish the converted Kotlin code.
In a suspend function, you can use a set of "await" methods to make non-blocking request-response (RPC) calls. For example, to make an RPC call to another function, you can use the `awaitRequest` method.

Please refer to the `FileUploadDemo` class in the "examples/lambda-example" project.

```kotlin
val po = PostOffice(headers, instance)
val fastRPC = FastRPC(headers)

val req = EventEnvelope().setTo(streamId).setHeader(TYPE, READ)
while (true) {
    val event = fastRPC.awaitRequest(req, 5000)
    // handle the response event
    if (EOF == event.headers[TYPE]) {
        log.info("{} saved", file)
        awaitBlocking {
            out.close()
        }
        po.send(streamId, Kv(TYPE, CLOSE))
        break
    }
    if (DATA == event.headers[TYPE]) {
        val block = event.body
        if (block is ByteArray) {
            total += block.size
            log.info("Saving {} - {} bytes", filename, block.size)
            awaitBlocking {
                out.write(block)
            }
        }
    }
}
```
The above code segment has a "while" loop that makes RPC calls to continuously "fetch" blocks of data from a stream. The status of the stream is indicated in the event header "type". The loop exits when it detects the "End of Stream (EOF)" signal.

A suspend function is "suspended" while it is waiting for a response. When suspended, it does not consume CPU resources, so your application can handle a large number of concurrent users and requests.

Coroutines run in a "cooperative multitasking" manner. Technically, each function runs sequentially. However, when many functions are suspended while waiting, it appears that all functions are running concurrently.
You may notice that there is an `awaitBlocking` wrapper in the code segment. Sometimes, you cannot avoid blocking code. In the above example, Java's FileOutputStream is a blocking API. To ensure that a small piece of blocking code in a coroutine does not slow down the "event loop", you can apply the `awaitBlocking` wrapper method. The system will run the blocking code in a separate worker thread without blocking the event loop.
In addition to the "await" set of APIs, the `delay(milliseconds)` method puts your function to sleep in a non-blocking manner. The `yield()` method is useful when your function requires more time to execute complex business logic. You can add a `yield()` statement before you execute a block of code. The yield method releases control to the event loop so that other coroutines and suspend functions will not be blocked by a heavyweight function.

> Do not block your function because it may block all coroutines since they run in a single kernel thread.
The suspend function is a powerful way to write high-throughput applications. Your code is presented in a sequential flow that is easier to write and maintain.

You may want to try the demo "file upload" REST endpoint to see how a suspend function behaves. If you followed Chapter-1, your lambda-example application is already running. To test the file upload endpoint, here is a simple Python script:

```python
import requests
files = {'file': open('some_data_file.txt', 'rb')}
r = requests.post('http://127.0.0.1:8085/api/upload', files=files)
print(r.text)
```
This assumes you have the Python "requests" package installed. If not, please run `pip install requests` to install the dependency.

The uploaded file will be kept in the "/tmp/upload-download-demo" folder.

To download the file, point your browser to http://127.0.0.1:8085/api/download/some_data_file.txt. Your browser will usually save the file in the "Downloads" folder.
You may notice that the FileDownloadDemo class is written in Java using the interface `TypedLambdaFunction<AsyncHttpRequest, EventEnvelope>`. The FileDownloadDemo class will run using a kernel thread.

Note that each function is independent, and functions with different execution strategies can communicate through events.

The output of your function is an "EventEnvelope" so that you can set the HTTP response headers correctly, e.g. content type and filename.

When downloading a file, the FileDownloadDemo function will block while it is sending a large file. Therefore, you want it to run as a kernel thread.

For very large file downloads, you may want to write the FileDownloadDemo function using asynchronous programming with the `EventInterceptor` annotation, or implement a suspend function using KotlinLambdaFunction. A suspend function is non-blocking.
When you add the annotation "KernelThreadRunner" in a function declared as LambdaFunction or TypedLambdaFunction, +the function will be executed using a "kernel thread pool" and Java will run your function in native +"preemptive multitasking" mode.
+While preemptive multitasking fully utilizes the CPU, its context switching overheads may increase as the number of +kernel threads grow. As a rule of thumb, you should control the maximum number of kernel threads to less than 200.
+The parameter event.worker.pool has a default value of 100. You can change this value to match the actual CPU power in your environment. Keep the default value for best performance unless you have tested the limit in your environment.
+Note: When you have more concurrent requests, your application may slow down because some functions are blocked when the maximum number of concurrent kernel threads is reached.
+
You should reduce the number of "instances" (i.e. worker pool) for a function to a small number so that your application does not exceed the maximum limit of the event.worker.pool parameter.
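For example, the pool size discussed above can be tuned in application.properties. This is a sketch: the event.worker.pool default of 100 comes from the text above, while the per-function worker count ("instances") is set in the PreLoad annotation or at registration time, not here.

```properties
# shared kernel-thread pool for the event system (documented default is 100)
event.worker.pool=100
```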
Kernel threads are precious and finite resources. When your function is computationally intensive or makes external HTTP or database calls in a synchronous blocking manner, you may use it with a small number of worker instances.
+To release kernel thread resources rapidly, you should write "asynchronous" code, i.e. for event-driven programming, you can send an event to another function asynchronously, and you can create a callback function to listen for responses.
+For RPC calls, you can use the asyncRequest method to write asynchronous RPC calls. However, coding for the asynchronous RPC pattern is more challenging. For example, you may want to return a "pending" result immediately using HTTP-202. Your code will move on to execute using a "future" that will execute callback methods (onSuccess and onFailure).
+Another approach is to annotate the function as an EventInterceptor so that your function can respond to the user in a "future" callback.
For ease of programming, we recommend using virtual threads or suspend functions to handle synchronous RPC calls.
+Before the availability of virtual thread technology, the Java VM used kernel threads for code execution. If you have a lot of users hitting your service concurrently, multiple threads are created to serve the concurrent requests.
+When your code serving the requests makes blocking calls to other services, the kernel threads are kept busy while your user functions wait for responses. Kernel threads in the wait state still consume CPU time.
+If the blocking calls finish very quickly, this may not be an issue.
+However, when the blocking calls take longer to complete, the many outstanding kernel threads waiting for responses compete for CPU resources, resulting in higher internal friction in the JVM that makes your application run slower. This is not a productive use of computing resources.
+This type of performance issue caused by internal friction is very difficult to avoid. While event-driven and reactive programming using asynchronous processing and callbacks can address this artificial bottleneck, asynchronous code is harder to implement and maintain as application complexity increases.
+It would be ideal if we could write sequential code that does not block. Sequential code is much easier to write and read because it communicates the intent of the code clearly.
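To illustrate the difference in plain JDK code (the fetchUser and fetchOrders names are hypothetical, and CompletableFuture stands in for any asynchronous API), the same two-step logic can be written in callback style or sequential style:

```java
import java.util.concurrent.CompletableFuture;

public class SequentialVsCallback {
    // hypothetical async services for illustration only
    static CompletableFuture<String> fetchUser() {
        return CompletableFuture.supplyAsync(() -> "alice");
    }
    static CompletableFuture<String> fetchOrders(String user) {
        return CompletableFuture.supplyAsync(() -> user + " has 3 orders");
    }
    public static void main(String[] args) {
        // Callback style: the intent is spread across nested callbacks
        fetchUser().thenCompose(SequentialVsCallback::fetchOrders)
                   .thenAccept(System.out::println)
                   .join();
        // Sequential style: the same logic reads top to bottom
        String user = fetchUser().join();
        System.out.println(fetchOrders(user).join());
    }
}
```

Both variants produce the same result; the sequential form simply states the steps in order, which is the readability benefit discussed above.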
+Leveraging Java 21 virtual threads, Mercury 3.1 allows the developer to write code in a sequential manner. When code in your function makes an RPC call to another service using the PostOffice's "request" API, it returns a Java Future object, and the "Future" itself runs in a virtual thread. This means that when your code retrieves the RPC result using the "get" method, your code appears "blocked" while waiting for the response from the target service.
+Although your code appears to be "blocked", the virtual thread is actually "suspended". It will wake up when the response arrives. While a virtual thread is suspended, it does not consume CPU time, and the memory footprint for keeping the thread in suspended mode is very small. Virtual thread technology is designed to support tens of thousands, if not millions, of concurrent RPC requests in a single compute machine, container or serverless instance.
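The suspend-and-resume behavior can be seen outside of Mercury with a minimal plain-JDK sketch (requires Java 21; this is generic Java, not the Mercury API). The sleep call looks blocking, but it only suspends the virtual thread and frees the underlying carrier (kernel) thread:

```java
public class VirtualThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        // Thread.ofVirtual() requires Java 21 or higher
        Thread vt = Thread.ofVirtual().start(() -> {
            try {
                // appears to block, but only parks the virtual thread;
                // the carrier (kernel) thread is released to do other work
                Thread.sleep(100);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("virtual=" + Thread.currentThread().isVirtual());
        });
        vt.join();
    }
}
```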
+Mercury 3.1 supports mixed thread management - virtual threads, suspend functions and kernel threads.
+Functions running in different types of threads are loosely coupled through events. This functional isolation and encapsulation means that you can precisely control how your application performs for each functional logic block.
+Chapter-1 | +Home | +Chapter-3 | +
---|---|---|
Introduction | +Table of Contents | +REST automation | +
The platform-core foundation library contains a built-in non-blocking HTTP server that you can use to create REST endpoints. Under the hood, it uses the vertx web client and server libraries.
+The REST automation system is not a code generator. The REST endpoints in the rest.yaml file are handled by +the system directly - "Config is the code".
+We will use the "rest.yaml" sample configuration file in the "lambda-example" project to illustrate the configuration approach.
+The rest.yaml configuration has three sections: "rest" for endpoint definitions, "cors" for CORS headers and "headers" for header transformations.
+REST automation is optional. To turn on REST automation, add or update the following parameters in the +application.properties file (or application.yml if you like).
+rest.server.port=8085
+rest.automation=true
+rest.automation.yaml=classpath:/rest.yaml
+
+When rest.automation=true, you can configure the server port using rest.server.port or server.port.
REST automation can co-exist with Spring Boot. Please use rest.server.port for REST automation and server.port for Spring Boot.
The rest.automation.yaml parameter tells the system the location of the rest.yaml configuration file.
You can configure more than one location and the system will search them sequentially. The following example +tells the system to load rest.yaml from "/tmp/config/rest.yaml". If the file is not available, it will use +the rest.yaml in the project's resources folder.
+rest.automation.yaml=file:/tmp/config/rest.yaml, classpath:/rest.yaml
+
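If you prefer application.yml, the same parameter can be written in nested form. This is a sketch assuming the usual dotted-key to nested-YAML mapping; the search order of the two locations is the same as in the properties example above.

```yaml
rest:
  automation:
    yaml: "file:/tmp/config/rest.yaml, classpath:/rest.yaml"
```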
+The "rest" section of the rest.yaml configuration file may contain one or more REST endpoints.
+A REST endpoint may look like this:
+ - service: ["hello.world"]
+ methods: ['GET', 'PUT', 'POST', 'HEAD', 'PATCH', 'DELETE']
+ url: "/api/hello/world"
+ timeout: 10s
+ cors: cors_1
+ headers: header_1
+ threshold: 30000
+ tracing: true
+
+In this example, the URL for the REST endpoint is "/api/hello/world" and it accepts a list of HTTP methods.
+When an HTTP request is sent to the URL, the HTTP event will be sent to the function declared with the service route name "hello.world". The input event will be an "AsyncHttpRequest" object. Since the "hello.world" function is written as an inline LambdaFunction in the lambda-example application, the AsyncHttpRequest is converted to a HashMap.
To process the input as an AsyncHttpRequest object, the function must be written as a regular class. See the +"services" folder of the lambda-example for additional examples.
+The "timeout" value is the maximum time that REST endpoint will wait for a response from your function. +If there is no response within the specified time interval, the user will receive an HTTP-408 timeout exception.
+The "authentication" tag is optional. If configured, the route name given in the authentication tag will be used. +The input event will be delivered to a function with the authentication route name. In this example, it is +"v1.api.auth".
+Your custom authentication function may look like this:
+@PreLoad(route = "v1.api.auth", instances = 10)
+public class SimpleAuthentication implements TypedLambdaFunction<AsyncHttpRequest, Object> {
+
+ @Override
+ public Object handleEvent(Map<String, String> headers, AsyncHttpRequest input, int instance) {
+        // Your authentication logic here, e.g. validating the "Authorization" header.
+        // Return true to accept the request or false to reject it with HTTP-401.
+        return true;
+ }
+}
+
+Your authentication function can return a boolean value to indicate if the request should be accepted or rejected.
+If true, the system will send the HTTP request to the service. In this example, it is the "hello.world" function. +If false, the user will receive an "HTTP-401 Unauthorized" exception.
+Optionally, you can use the authentication function to return some session information after authentication. For example, your authentication function may forward the "Authorization" header of the incoming HTTP request to your organization's OAuth 2.0 Identity Provider for authentication.
+To return session information to the next function, the authentication function can return an EventEnvelope. +It can set the session information as key-values in the response event headers.
+In the lambda-example application, there is a demo authentication function in the AuthDemo class with the "v1.api.auth" route name. To demonstrate passing session information, the AuthDemo class sets the header "user=demo" in the result EventEnvelope.
+You can test this by visiting http://127.0.0.1:8085/api/hello/generic/1 to invoke the "hello.generic" function.
+The console will print:
+DistributedTrace:55 - trace={path=GET /api/hello/generic/1, service=v1.api.auth, success=true,
+ origin=20230326f84dd5f298b64be4901119ce8b6c18be, exec_time=0.056, start=2023-03-26T20:08:01.702Z,
+ from=http.request, id=aa983244cef7455cbada03c9c2132453, round_trip=1.347, status=200}
+HelloGeneric:56 - Got session information {user=demo}
+DistributedTrace:55 - trace={path=GET /api/hello/generic/1, service=hello.generic, success=true,
+ origin=20230326f84dd5f298b64be4901119ce8b6c18be, start=2023-03-26T20:08:01.704Z, exec_time=0.506,
+ from=v1.api.auth, id=aa983244cef7455cbada03c9c2132453, status=200}
+DistributedTrace:55 - trace={path=GET /api/hello/generic/1, service=async.http.response,
+ success=true, origin=20230326f84dd5f298b64be4901119ce8b6c18be, start=2023-03-26T20:08:01.705Z,
+ exec_time=0.431, from=hello.generic, id=aa983244cef7455cbada03c9c2132453, status=200}
+
+This illustrates that the HTTP request has been processed by the "v1.api.auth" function. The "hello.generic" function +is wired to the "/api/hello/generic/{id}" endpoint as follows:
+ - service: "hello.generic"
+ methods: ['GET']
+ url: "/api/hello/generic/{id}"
+ # Turn on authentication pointing to the "v1.api.auth" function
+ authentication: "v1.api.auth"
+ timeout: 20s
+ cors: cors_1
+ headers: header_1
+ tracing: true
+
+The tracing tag tells the system to turn on "distributed tracing". In the console log shown above, you see three lines of log from "distributed trace" showing that the HTTP request is processed by "v1.api.auth" and "hello.generic" before returning the result to the browser using the "async.http.response" function.
+Note: "async.http.response" is a built-in function that sends the HTTP response to the browser.
+
The optional cors and headers tags point to the corresponding CORS and HEADERS sections respectively.
For ease of development, you can define CORS headers using the CORS section like this.
+This is a convenient feature for development. For a cloud-native production system, CORS processing is most likely done at the API gateway level.
+You can define different sets of CORS headers using different IDs.
+cors:
+ - id: cors_1
+ options:
+ - "Access-Control-Allow-Origin: ${api.origin:*}"
+ - "Access-Control-Allow-Methods: GET, DELETE, PUT, POST, PATCH, OPTIONS"
+ - "Access-Control-Allow-Headers: Origin, Authorization, X-Session-Id, X-Correlation-Id,
+ Accept, Content-Type, X-Requested-With"
+ - "Access-Control-Max-Age: 86400"
+ headers:
+ - "Access-Control-Allow-Origin: ${api.origin:*}"
+ - "Access-Control-Allow-Methods: GET, DELETE, PUT, POST, PATCH, OPTIONS"
+ - "Access-Control-Allow-Headers: Origin, Authorization, X-Session-Id, X-Correlation-Id,
+ Accept, Content-Type, X-Requested-With"
+ - "Access-Control-Allow-Credentials: true"
+
+The HEADERS section is used to perform simple transformations on HTTP request and response headers. You can add, keep or drop headers for HTTP requests and responses. A sample HEADERS section is shown below.
+headers:
+ - id: header_1
+ request:
+ #
+ # headers to be inserted
+ # add: ["hello-world: nice"]
+ #
+      # keep and drop are mutually exclusive where keep has precedence over drop
+ # i.e. when keep is not empty, it will drop all headers except those to be kept
+ # when keep is empty and drop is not, it will drop only the headers in the drop list
+ # e.g.
+ # keep: ['x-session-id', 'user-agent']
+ # drop: ['Upgrade-Insecure-Requests', 'cache-control', 'accept-encoding', 'host', 'connection']
+ #
+ drop: ['Upgrade-Insecure-Requests', 'cache-control', 'accept-encoding', 'host', 'connection']
+
+ response:
+ #
+ # the system can filter the response headers set by a target service,
+ # but it cannot remove any response headers set by the underlying servlet container.
+ # However, you may override non-essential headers using the "add" directive.
+ # i.e. don't touch essential headers such as content-length.
+ #
+ # keep: ['only_this_header_and_drop_all']
+ # drop: ['drop_only_these_headers', 'another_drop_header']
+ #
+ # add: ["server: mercury"]
+ #
+ # You may want to add cache-control to disable browser and CDN caching.
+ # add: ["Cache-Control: no-cache, no-store", "Pragma: no-cache",
+ # "Expires: Thu, 01 Jan 1970 00:00:00 GMT"]
+ #
+ add:
+ - "Strict-Transport-Security: max-age=31536000"
+ - "Cache-Control: no-cache, no-store"
+ - "Pragma: no-cache"
+ - "Expires: Thu, 01 Jan 1970 00:00:00 GMT"
+
+Chapter-2 | +Home | +Chapter-4 | +
---|---|---|
Function execution strategies | +Table of Contents | +Event orchestration | +
In traditional programming, we can write modular software components and wire them together as a single application. +There are many ways to do that. You can rely on a "dependency injection" framework. In many cases, you would need +to write orchestration logic to coordinate how the various components talk to each other to process a transaction.
+In a composable application, you write modular functions using the first principle of "input-process-output".
+Functions communicate with each other using events, and each function has a "handleEvent" method to process "input" and return the result as "output". Writing software components according to this first principle makes Test-Driven Development (TDD) straightforward. You can write mock functions and unit tests before you put in the actual business logic.
+Mocking an event-driven function in a composable application is as simple as overriding the function's route name +with a mock function.
+There are two ways to register a function: programmatic and declarative.
+In programmatic registration, you can register a function like this:
+Platform platform = Platform.getInstance();
+platform.registerPrivate("my.function", new MyFunction(), 10);
+
+In the above example, you obtain a singleton instance of the Platform API class and use it to register a private function MyFunction with the route name "my.function".
In the declarative approach, you use the PreLoad annotation to register a class with an event handler.
Your function should implement the LambdaFunction, TypedLambdaFunction or KotlinLambdaFunction interface. While LambdaFunction is untyped, the event system can transport PoJos, and your function should test the object type and cast it to the correct PoJo.
+TypedLambdaFunction and KotlinLambdaFunction are typed, and you must declare the input and output classes +according to the input/output API contract of your function.
+For example, the SimpleDemoEndpoint has the "PreLoad" annotation to declare the route name and number of worker +instances.
+By default, LambdaFunction and TypedLambdaFunction are executed using "virtual threads" for the worker instances.
+To change a function to use a kernel thread, you can add the KernelThreadRunner annotation.
@KernelThreadRunner
+@PreLoad(route = "hello.simple", instances = 10)
+public class SimpleDemoEndpoint implements TypedLambdaFunction<AsyncHttpRequest, Object> {
+ @Override
+ public Object handleEvent(Map<String, String> headers, AsyncHttpRequest input, int instance) {
+ // business logic here
+ }
+}
+
+Once a function is created using the declarative method, you can override it with a mock function by using the +programmatic registration method in a unit test.
+When you use the programmatic registration approach, you can use the "register" or "registerPrivate" method to set the function as "public" or "private" respectively. For the declarative approach, the PreLoad annotation contains a parameter to define the visibility of the function.
// register a function as "public"
+platform.register("my.function", new MyFunction(), 10);
+
+// register a function as "private"
+platform.registerPrivate("my.function", new MyFunction(), 10);
+
+A private function is visible only to other functions in the same application memory space.
+A public function is accessible by functions in other application instances using service mesh or the "Event over HTTP" method. We will discuss inter-container communication in Chapter-7 and Chapter-8.
+To send an asynchronous event or make an event RPC call from one function to another, you can use the PostOffice APIs. In your function, you can obtain an instance of the PostOffice like this:
+@Override
+public Object handleEvent(Map<String, String> headers, AsyncHttpRequest input, int instance) {
+ PostOffice po = new PostOffice(headers, instance);
+    // e.g. po.send and po.asyncRequest for sending asynchronous events and making RPC calls
+    return null;
+}
+
+The PostOffice API detects if tracing is enabled in the incoming request. If yes, it will propagate tracing +information to "downstream" functions.
+The following communication patterns are supported:
+1. RPC ("request-response", best for interactivity)
+2. Asynchronous (e.g. drop-n-forget)
+3. Callback (e.g. progressive rendering)
+4. Pipeline (e.g. workflow application)
+5. Streaming (e.g. file transfer)
In enterprise applications, RPC is the most common pattern for making a call from one function to another.
+The "calling" function makes a request and waits for the response from the "called" function.
+In Mercury version 3, there are two types of RPC calls - "asynchronous" and "sequential non-blocking".
+You can use the asyncRequest method to make an asynchronous RPC call. Asynchronous means that the response will be delivered to the onSuccess or onFailure callback method.
Note that normal responses and exceptions are sent to the onSuccess method, and timeout exceptions to the onFailure method.
+If you set "timeoutException" to false, the timeout exception will be delivered to the onSuccess callback and +the onFailure callback will be ignored.
+Future<EventEnvelope> asyncRequest(final EventEnvelope event, long timeout)
+ throws IOException;
+Future<EventEnvelope> asyncRequest(final EventEnvelope event, long timeout,
+ boolean timeoutException) throws IOException;
+
+// example
+EventEnvelope request = new EventEnvelope().setTo(SERVICE).setBody(TEXT);
+Future<EventEnvelope> response = po.asyncRequest(request, 2000);
+response.onSuccess(result -> {
+ // handle the response event
+}).onFailure(ex -> {
+ // handle timeout exception
+});
+
+The timeout value is measured in milliseconds.
+A special version of RPC is the fork-n-join API. This allows you to make concurrent requests to multiple functions. +The system will consolidate all responses and return them as a list of events.
+Normal responses and user-defined exceptions are sent to the onSuccess method and timeout exceptions to the onFailure method. Your function will receive all responses or a timeout exception.
+If you set "timeoutException" to false, partial results will be delivered to the onSuccess method when one or more services fail to respond on time. The onFailure method is not required.
+Future<List<EventEnvelope>> asyncRequest(final List<EventEnvelope> event, long timeout)
+ throws IOException;
+
+Future<List<EventEnvelope>> asyncRequest(final List<EventEnvelope> event, long timeout,
+ boolean timeoutException) throws IOException;
+
+// example
+List<EventEnvelope> requests = new ArrayList<>();
+requests.add(new EventEnvelope().setTo(SERVICE1).setBody(TEXT1));
+requests.add(new EventEnvelope().setTo(SERVICE2).setBody(TEXT2));
+Future<List<EventEnvelope>> responses = po.asyncRequest(requests, 2000);
+responses.onSuccess(events -> {
+ // handle the response events
+}).onFailure(ex -> {
+ // handle timeout exception
+});
+
+When your function is a service by itself, asynchronous RPC and fork-n-join require different programming approaches.
+There are two ways to do that:
+1. Your function returns an immediate result and waits for the response(s) in the onSuccess or onFailure callback
+2. Your function is implemented as an "EventInterceptor"
+For the first approach, your function can return an immediate result telling the caller that your function would need +time to process the request. This works when the caller can be reached by a callback.
+For the second approach, your function is annotated with the EventInterceptor keyword. It can immediately return a "null" response that will be ignored by the event system. Your function can inspect the "replyTo" address and correlation ID in the incoming event and include them in a future response to the caller.
The easiest way to do synchronous RPC call in a "sequential non-blocking" way is to run your function in a virtual +thread (Note that this is the default behavior and you don't need to set anything).
+// for a single RPC call
+PostOffice po = new PostOffice(headers, instance);
+Future<EventEnvelope> future = po.request(requestEvent, timeoutInMillis);
+EventEnvelope result = future.get();
+
+// for a fork-n-join call
+PostOffice po = new PostOffice(headers, instance);
+Future<List<EventEnvelope>> future = po.request(requestEvents, timeoutInMillis);
+List<EventEnvelope> result = future.get();
+
+If you prefer coding in Kotlin, you can implement a "suspend function" using the KotlinLambdaFunction interface.
+The following code segment illustrates the creation of the "hello.world" function that makes a non-blocking RPC +call to "another.service".
+@PreLoad(route="hello.world", instances=10)
+class FileUploadDemo: KotlinLambdaFunction<AsyncHttpRequest, Any> {
+ override suspend fun handleEvent(headers: Map<String, String>, input: AsyncHttpRequest,
+ instance: Int): Any {
+ val fastRPC = FastRPC(headers)
+ // your business logic here...
+ val req = EventEnvelope().setTo("another.service").setBody(myPoJo)
+ return fastRPC.awaitRequest(req, 5000)
+ }
+}
+
+The API method signatures for non-blocking RPC and fork-n-join are as follows:
+@Throws(IOException::class)
+suspend fun awaitRequest(request: EventEnvelope, timeout: Long): EventEnvelope
+
+@Throws(IOException::class)
+suspend fun awaitRequest(requests: List<EventEnvelope>, timeout: Long): List<EventEnvelope>
+
+To make an asynchronous call from one function to another, use the send method.
void send(String to, Kv... parameters) throws IOException;
+void send(String to, Object body) throws IOException;
+void send(String to, Object body, Kv... parameters) throws IOException;
+void send(final EventEnvelope event) throws IOException;
+
+Kv is a key-value pair for holding one parameter.
+Asynchronous event calls are handled in the background so that your function can continue processing. +For example, sending a notification message to a user.
+You can declare another function as a "callback". When you send a request to another function, you can set the +"replyTo" address in the request event. When a response is received, your callback function will be invoked to +handle the response event.
+EventEnvelope req = new EventEnvelope().setTo("some.service")
+ .setBody(myPoJo).setReplyTo("my.callback");
+po.send(req);
+
+In the above example, you have a callback function with route name "my.callback". You send the request event +with a MyPoJo object as payload to the "some.service" function. When a response is received, the "my.callback" +function will get the response as input.
+Pipeline is a linked list of event calls. There are many ways to implement a pipeline. One way is to keep the pipeline plan in an event's header and pass the event across multiple functions, where each function sets the "replyTo" address from the pipeline plan. You should handle exception cases where a pipeline breaks in the middle of a transaction.
+An example of the pipeline header key-value may look like this:
+pipeline=service.1, service.2, service.3, service.4, service.5
+
+In the above example, when the pipeline event is received by a function, the function can check its position +in the pipeline by comparing its own route name with the pipeline plan.
+PostOffice po = new PostOffice(headers, instance);
+
+// some business logic here...
+String myRoute = po.getRoute();
+
+Suppose myRoute is "service.2", the function can send the response event to "service.3". +When "service.3" receives the event, it can send its response event to the next one. i.e. "service.4".
+When the event reaches the last service ("service.5"), the processing will complete.
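The position lookup described above can be sketched in plain Java. This helper is illustrative only (not part of the Mercury API) and assumes the pipeline plan arrives as a comma-separated header value:

```java
public class PipelineRouter {
    /**
     * Find the next service after myRoute in a comma-separated pipeline plan.
     * Returns null when myRoute is the last step or is not in the plan,
     * indicating the end of the pipeline.
     */
    public static String nextHop(String plan, String myRoute) {
        String[] steps = plan.split("\\s*,\\s*");
        for (int i = 0; i < steps.length - 1; i++) {
            if (steps[i].equals(myRoute)) {
                return steps[i + 1];
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String plan = "service.1, service.2, service.3, service.4, service.5";
        System.out.println(nextHop(plan, "service.2")); // service.3
        System.out.println(nextHop(plan, "service.5")); // null (end of pipeline)
    }
}
```

In a real function, myRoute would come from po.getRoute() as shown above, and the response event's "replyTo" address would be set to the returned route name.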
+If you set a function as a singleton (i.e. one worker instance), it will receive events in an orderly fashion. This way you can "stream" events to the function, and it will process the events one by one.
+Another means to do streaming is to create an "ObjectStreamIO" event stream like this:
+ObjectStreamIO stream = new ObjectStreamIO(60);
+ObjectStreamWriter out = new ObjectStreamWriter(stream.getOutputStreamId());
+out.write(messageOne);
+out.write(messageTwo);
+out.close();
+
+String streamId = stream.getInputStreamId();
+// pass the streamId to another function
+
+In the code segment above, your function creates an object event stream and writes two messages into the stream. It obtains the streamId of the event stream and sends it to another function. The other function can then read the data blocks in order.
+You must declare "end of stream" by closing the output stream. If you do not close an output stream, +it remains open and idle. If a function is trying to read an input stream using the stream ID and the +next data block is not available, it will time out.
+A stream will be automatically closed when the idle inactivity timer is reached. In the above example, +ObjectStreamIO(60) means an idle inactivity timer of 60 seconds.
+There are two ways to read an input event stream - asynchronous or sequential non-blocking.
+To read events from a stream, you can create an instance of the AsyncObjectStreamReader like this:
+AsyncObjectStreamReader in = new AsyncObjectStreamReader(stream.getInputStreamId(), 8000);
+Future<Object> block = in.get();
+block.onSuccess(b -> {
+ if (b != null) {
+ // process the data block
+ } else {
+ // end of stream. Do additional processing.
+ in.close();
+ }
+});
+
+The above illustrates reading the first block of data. The function would need to iteratively read the stream +until end of stream (i.e. when the stream returns null). As a result, asynchronous application code for stream +processing is more challenging to write.
+The industry trend is to use the sequential non-blocking method instead of "asynchronous callbacks" because your code will be much easier to read.
+You can use the awaitRequest method to read the next block of data from an event stream.
An example of reading a stream is shown in the FileUploadDemo Kotlin class in the lambda-example project. It uses a simple "while" loop to read the stream. When the function fetches the next block of data using the awaitRequest method, the function is suspended until the next data block or the "end of stream" signal is received. It may look like this:
+val po = PostOffice(headers, instance)
+val fastRPC = FastRPC(headers)
+
+val req = EventEnvelope().setTo(streamId).setHeader(TYPE, READ)
+while (true) {
+ val event = fastRPC.awaitRequest(req, 5000)
+ if (event.status == 408) {
+ // handle input stream timeout
+ break
+ }
+ if ("eof" == event.headers["type"]) {
+ po.send(streamId, Kv("type", "close"))
+ break
+ }
+ if ("data" == event.headers["type"]) {
+ val block = event.body
+ if (block is ByteArray) {
+ // handle the data block from the input stream
+ }
+ }
+}
+
+Since the code style is "sequential non-blocking", using a "while" loop does not block the "event loop" provided +that you are using an "await" API inside the while-loop.
+In this fashion, the intent of the code is clear. The sequential non-blocking method offers high throughput because it does not consume CPU resources while the function is waiting for a response from another function.
+We recommend sequential non-blocking style for more sophisticated event streaming logic.
+Note: "await" methods are only supported in KotlinLambdaFunction, which is a suspend function. When the Java 19 virtual thread feature becomes officially available, we will enhance the function execution strategies accordingly.
+
Once you have implemented modular functions in a self-contained manner, the best practice is to write one or more +functions to do "event orchestration".
+Think of the orchestration function as a music conductor who guides the whole team to perform.
+For event orchestration, your function can be the "conductor" that sends events to the individual functions so that +they operate together as a single application. To simplify design, the best practice is to apply event orchestration +for each transaction or use case. The event orchestration function also serves as a living documentation about how +your application works. It makes your code more readable.
+To automate event orchestration, there is an enterprise add-on module called "Event Script". This is the idea of "config over code" or "declarative programming". The primary purpose of "Event Script" is to reduce coding effort so that the team can focus on improving application design and code quality. Please contact your Accenture representative if you would like to evaluate this additional tool.
+In the next chapter, we will discuss the build, test and deploy process.
+
Chapter-3 | +Home | +Chapter-5 | +
---|---|---|
REST automation | +Table of Contents | +Build, test and deploy | +
The first step in writing an application is to create an entry point for your application.
+A minimalist main application template is shown as follows:
+@MainApplication
+public class MainApp implements EntryPoint {
+    private static final Logger log = LoggerFactory.getLogger(MainApp.class);
+
+ public static void main(String[] args) {
+ AppStarter.main(args);
+ }
+ @Override
+ public void start(String[] args) {
+ // your startup logic here
+ log.info("Started");
+ }
+}
+
+Note that the MainApplication annotation is mandatory. You must have at least one "main application" module.
+Note: Please adjust the parameter "web.component.scan" in application.properties to point to your user application package(s) in your source code project.
+
If your application does not require additional startup logic, you may just print a greeting message.
+The AppStarter.main() statement in the "main" method is used when you want to start your application within the IDE. You can right-click the main method and select "run".
You can also build and run the application from command line like this:
+cd sandbox/mercury-composable/examples/lambda-example
+mvn clean package
+java -jar target/lambda-example-3.1.1.jar
+
+The lambda-example is a sample application that you can use as a template to write your own code. Please review +the pom.xml and the source directory structure. The pom.xml is pre-configured to support Java and Kotlin.
+In the lambda-example project root, you will find the following directories:
+src/main/java
+src/main/kotlin
+src/test/java
+
+Note that a Kotlin unit test directory is not included because you can test all functions in Java unit tests.
+Since all functions are connected using the in-memory event bus, you can test any function by sending events +from a unit test module in Java. If you are comfortable with the Kotlin language, you may also set up Kotlin +unit tests accordingly. There is no harm having both types of unit tests in the same project.
+Since the source project contains both Java and Kotlin, we have replaced the javadoc maven plugin with the JetBrains "dokka" documentation engine for both Java and Kotlin. Javadoc is useful if you want to write and publish your own libraries.
+To generate Java and Kotlin source documentation, please run "mvn dokka:dokka". You may "cd" to the platform-core project to try the maven dokka command to generate some source documentation. The home page will be available in "target/dokka/index.html".
Please follow the step-by-step learning guide in Chapter-1 to write your own functions. You can then configure new REST endpoints to use your new functions.

In Chapter-1, we have discussed the three function execution strategies to optimize your application to the full potential of stability, performance and throughput.
In Chapter-3, we have presented the configuration syntax for the "rest.yaml" REST automation definition file. Please review the sample rest.yaml file in the lambda-example project. You may notice that it has an entry for HTTP forwarding. The following entry in the sample rest.yaml file illustrates an HTTP forwarding endpoint. In HTTP forwarding, you can replace the "service" route name with a direct HTTP target host. You can do "URL rewrite" to change the URL path to the target endpoint path. In the example below, `/api/v1/*` will be mapped to `/api/*` in the target endpoint.
```yaml
  - service: "http://127.0.0.1:${rest.server.port}"
    trust_all_cert: true
    methods: ['GET', 'PUT', 'POST']
    url: "/api/v1/*"
    url_rewrite: ['/api/v1', '/api']
    timeout: 20
    cors: cors_1
    headers: header_1
    tracing: true
```
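The "url_rewrite" rule is a simple prefix substitution. As an illustration only (the REST automation engine performs this internally, and this sketch is not its actual implementation), the mapping can be expressed in plain Java:

```java
public class UrlRewriteDemo {
    // Replace the matched URL prefix with the target prefix, e.g. /api/v1 -> /api
    static String rewrite(String path, String fromPrefix, String toPrefix) {
        if (path.startsWith(fromPrefix)) {
            return toPrefix + path.substring(fromPrefix.length());
        }
        // paths that do not match the prefix are left unchanged
        return path;
    }

    public static void main(String[] args) {
        // /api/v1/hello/world is forwarded as /api/hello/world
        System.out.println(rewrite("/api/v1/hello/world", "/api/v1", "/api"));
    }
}
```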
One feature of the REST automation "rest.yaml" configuration is that you can configure more than one function in the "service" section. In the following example, there are two function route names ("hello.world" and "hello.copy"). The first one, "hello.world", is the primary service provider. The second one, "hello.copy", will automatically receive a copy of each incoming event.

This feature allows you to write a new version of a function without disrupting current functionality. Once you are happy with the new version of the function, you can route the endpoint directly to the new version by updating the "rest.yaml" configuration file.
```yaml
  - service: ["hello.world", "hello.copy"]
```
Please refer to the "rpcTest" method in the "HelloWorldTest" class in the lambda-example to get started.

In unit tests, we want to start the main application so that all functions are ready for testing. First, we write a "TestBase" class that uses a BeforeClass setup method to start the main application like this:
```java
public class TestBase {

    private static final AtomicInteger seq = new AtomicInteger(0);

    @BeforeClass
    public static void setup() {
        if (seq.incrementAndGet() == 1) {
            AppStarter.main(new String[0]);
        }
    }
}
```
The atomic integer "seq" is used to ensure that the main application entry point is executed only once.
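This once-only pattern relies on nothing framework-specific. A minimal plain-Java sketch (with a counter standing in for the real AppStarter.main call) shows the behavior:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OnceOnlyDemo {
    private static final AtomicInteger seq = new AtomicInteger(0);
    static int startCount = 0;

    // Emulates the @BeforeClass setup: the body runs only on the first call
    static void setup() {
        if (seq.incrementAndGet() == 1) {
            startCount++; // stands in for AppStarter.main(new String[0])
        }
    }

    public static void main(String[] args) {
        setup();
        setup();
        setup();
        System.out.println("setup body executed " + startCount + " time(s)");
    }
}
```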
+Your first unit test may look like this:
```java
@SuppressWarnings("unchecked")
@Test
public void rpcTest() throws IOException, InterruptedException {
    Utility util = Utility.getInstance();
    BlockingQueue<EventEnvelope> bench = new ArrayBlockingQueue<>(1);
    String name = "hello";
    String address = "world";
    String telephone = "123-456-7890";
    DemoPoJo pojo = new DemoPoJo(name, address, telephone);
    PostOffice po = new PostOffice("unit.test", "12345", "POST /api/hello/world");
    EventEnvelope request = new EventEnvelope().setTo("hello.world")
            .setHeader("a", "b").setBody(pojo.toMap());
    po.asyncRequest(request, 800).onSuccess(bench::offer);
    EventEnvelope response = bench.poll(10, TimeUnit.SECONDS);
    assert response != null;
    Assert.assertEquals(HashMap.class, response.getBody().getClass());
    MultiLevelMap map = new MultiLevelMap((Map<String, Object>) response.getBody());
    Assert.assertEquals("b", map.getElement("headers.a"));
    Assert.assertEquals(name, map.getElement("body.name"));
    Assert.assertEquals(address, map.getElement("body.address"));
    Assert.assertEquals(telephone, map.getElement("body.telephone"));
    Assert.assertEquals(util.date2str(pojo.time), map.getElement("body.time"));
}
```
Note that the PostOffice instance can be created with tracing information in a unit test. The above example tells the system that the sender is "unit.test", the trace ID is 12345 and the trace path is "POST /api/hello/world".

For unit tests, we need to convert the asynchronous code into "synchronous" execution so that the test can run sequentially. A "BlockingQueue" is a good choice for this.
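The pattern can be illustrated with plain JDK classes. In this hedged sketch, a background task stands in for the asynchronous onSuccess callback, and the calling thread blocks on the queue until the result arrives or the poll times out:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AsyncToSyncDemo {
    // Block until the asynchronous callback delivers a result, or time out after 10 seconds
    static String syncCall(String payload) throws InterruptedException {
        BlockingQueue<String> bench = new ArrayBlockingQueue<>(1);
        // emulate an asynchronous callback (e.g. onSuccess) firing later on another thread
        CompletableFuture.runAsync(() -> bench.offer("echo: " + payload));
        return bench.poll(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(syncCall("hello world"));
    }
}
```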
The "hello.world" function is an echo function. The above unit test sends an event containing the key-value pair {"a":"b"} and a HashMap payload from the DemoPoJo. If the function is designed to handle a PoJo, we can send the PoJo directly instead of a Map.
> IMPORTANT: blocking code should only be used for unit tests. DO NOT use blocking code in your
> application code because it will block the event system and dramatically slow down
> your application.
The Utility and MultiLevelMap classes are convenient tools for unit tests. In the above example, we use the Utility class to convert a date object into a UTC timestamp, because a date object is serialized as a UTC timestamp in an event.
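The timestamps in the examples use ISO-8601 UTC with millisecond precision. The following sketch shows an equivalent conversion with java.time; it is an assumption that util.date2str behaves like this, so treat it as illustrative only:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.Date;

public class UtcTimestampDemo {
    // Format a Date as an ISO-8601 UTC timestamp with millisecond precision,
    // e.g. 2023-03-27T18:10:34.234Z ('X' prints "Z" for a zero UTC offset)
    static String date2str(Date date) {
        return DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSX")
                .withZone(ZoneOffset.UTC)
                .format(date.toInstant());
    }

    public static void main(String[] args) {
        Date d = Date.from(Instant.parse("2023-03-27T18:10:34.234Z"));
        System.out.println(date2str(d)); // 2023-03-27T18:10:34.234Z
    }
}
```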
The MultiLevelMap supports reading an element using the convenient "dot and bracket" format. For example, given a map like this:
```json
{
  "body":
  {
    "time": "2023-03-27T18:10:34.234Z",
    "hello": [1, 2, 3]
  }
}
```
| Example | Command | Result |
|:--:|---|---|
| 1 | map.getElement("body.time") | 2023-03-27T18:10:34.234Z |
| 2 | map.getElement("body.hello[2]") | 3 |
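To see how the "dot and bracket" lookup works conceptually, here is a minimal plain-Java sketch over nested maps and lists. This is an illustration only, not the actual MultiLevelMap implementation from platform-core:

```java
import java.util.List;
import java.util.Map;

public class DotPathDemo {
    // Resolve a "dot and bracket" path such as "body.hello[2]" against nested maps/lists
    @SuppressWarnings("unchecked")
    static Object getElement(Map<String, Object> map, String path) {
        Object current = map;
        for (String part : path.split("\\.")) {
            String key = part;
            int index = -1;
            int bracket = part.indexOf('[');
            if (bracket >= 0) {
                // split "hello[2]" into the key "hello" and the list index 2
                key = part.substring(0, bracket);
                index = Integer.parseInt(part.substring(bracket + 1, part.indexOf(']')));
            }
            current = ((Map<String, Object>) current).get(key);
            if (index >= 0) {
                current = ((List<Object>) current).get(index);
            }
        }
        return current;
    }

    public static void main(String[] args) {
        Map<String, Object> map = Map.of("body",
                Map.of("time", "2023-03-27T18:10:34.234Z", "hello", List.of(1, 2, 3)));
        System.out.println(getElement(map, "body.time"));     // 2023-03-27T18:10:34.234Z
        System.out.println(getElement(map, "body.hello[2]")); // 3
    }
}
```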
Let's do a unit test for a PoJo. This second unit test sends an RPC request to the "hello.pojo" function, which is designed to return a SamplePoJo object with some mock data.

Please refer to the "pojoRpcTest" method in the "PoJoTest" class in the lambda-example for details.

The unit test verifies that "hello.pojo" has correctly returned the SamplePoJo object with the pre-defined mock values.
```java
@Test
public void pojoTest() throws IOException, InterruptedException {
    Integer ID = 1;
    String NAME = "Simple PoJo class";
    String ADDRESS = "100 World Blvd, Planet Earth";
    BlockingQueue<EventEnvelope> bench = new ArrayBlockingQueue<>(1);
    PostOffice po = new PostOffice("unit.test", "20001", "GET /api/hello/pojo");
    EventEnvelope request = new EventEnvelope().setTo("hello.pojo").setHeader("id", "1");
    po.asyncRequest(request, 800).onSuccess(bench::offer);
    EventEnvelope response = bench.poll(10, TimeUnit.SECONDS);
    assert response != null;
    Assert.assertEquals(SamplePoJo.class, response.getBody().getClass());
    SamplePoJo pojo = response.getBody(SamplePoJo.class);
    Assert.assertEquals(ID, pojo.getId());
    Assert.assertEquals(NAME, pojo.getName());
    Assert.assertEquals(ADDRESS, pojo.getAddress());
}
```
Note that you can do a class "cast" or use the built-in casting API as shown below:

```java
SamplePoJo pojo = (SamplePoJo) response.getBody();
SamplePoJo pojo = response.getBody(SamplePoJo.class);
```
Testing Kotlin suspend functions is challenging. However, testing a suspend function through events is straightforward because of the loose coupling.

Let's do a unit test for the lambda-example's FileUploadDemo function. Its route name is "hello.upload".

Please refer to the "uploadTest" method in the "SuspendFunctionTest" class in the lambda-example for details.
```java
@SuppressWarnings("unchecked")
@Test
public void uploadTest() throws IOException, InterruptedException {
    String FILENAME = "unit-test-data.txt";
    BlockingQueue<EventEnvelope> bench = new ArrayBlockingQueue<>(1);
    Utility util = Utility.getInstance();
    PostOffice po = PostOffice.getInstance();
    int len = 0;
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    ObjectStreamIO stream = new ObjectStreamIO();
    ObjectStreamWriter out = new ObjectStreamWriter(stream.getOutputStreamId());
    for (int i=0; i < 10; i++) {
        String line = "hello world "+i+"\n";
        byte[] d = util.getUTF(line);
        out.write(d);
        bytes.write(d);
        len += d.length;
    }
    out.close();
    // emulate a multi-part file upload
    AsyncHttpRequest req = new AsyncHttpRequest();
    req.setMethod("POST");
    req.setUrl("/api/upload/demo");
    req.setTargetHost("http://127.0.0.1:8080");
    req.setHeader("accept", "application/json");
    req.setHeader("content-type", "multipart/form-data");
    req.setContentLength(len);
    req.setFileName(FILENAME);
    req.setStreamRoute(stream.getInputStreamId());
    // send the HTTP request event to the "hello.upload" function
    EventEnvelope request = new EventEnvelope().setTo("hello.upload").setBody(req);
    po.asyncRequest(request, 8000).onSuccess(bench::offer);
    EventEnvelope response = bench.poll(10, TimeUnit.SECONDS);
    assert response != null;
    Assert.assertEquals(HashMap.class, response.getBody().getClass());
    Map<String, Object> map = (Map<String, Object>) response.getBody();
    System.out.println(response.getBody());
    Assert.assertEquals(len, map.get("expected_size"));
    Assert.assertEquals(len, map.get("actual_size"));
    Assert.assertEquals(FILENAME, map.get("filename"));
    Assert.assertEquals("Upload completed", map.get("message"));
    // finally check that "hello.upload" has saved the test file
    File dir = new File("/tmp/upload-download-demo");
    File file = new File(dir, FILENAME);
    Assert.assertTrue(file.exists());
    Assert.assertEquals(len, file.length());
    // compare file content
    byte[] b = Utility.getInstance().file2bytes(file);
    Assert.assertArrayEquals(bytes.toByteArray(), b);
}
```
In the above unit test, we use the ObjectStreamIO to emulate a file stream and write 10 blocks of data into it. The unit test then makes an RPC call to "hello.upload" with the emulated HTTP request event.

The "hello.upload" function is a Kotlin suspend function. It will be executed when the event arrives. After saving the test file, it will return an HTTP response object that the unit test can validate.

In this fashion, you can create unit tests to exercise suspend functions in an event-driven manner.
The pom.xml is pre-configured to generate an executable JAR. The following is extracted from the pom.xml. The main class is `AutoStart`, which loads the "main application" and uses it as the entry point to run the application.
```xml
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <mainClass>org.platformlambda.core.system.AutoStart</mainClass>
    </configuration>
    <executions>
        <execution>
            <id>build-info</id>
            <goals>
                <goal>build-info</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```
A composable application is designed to be deployable with Kubernetes or serverless frameworks.

A sample Dockerfile for an executable JAR may look like this:
```dockerfile
FROM mcr.microsoft.com/openjdk/jdk:21-ubuntu
EXPOSE 8083
WORKDIR /app
COPY target/rest-spring-3-example-3.1.1.jar .
ENTRYPOINT ["java","-jar","rest-spring-3-example-3.1.1.jar"]
```
The system has a built-in distributed tracing feature. You can enable tracing for any REST endpoint by adding "tracing=true" to the endpoint definition in the "rest.yaml" configuration file.

You may also upload performance metrics from the distributed tracing data to your favorite telemetry dashboard. To do that, please implement a custom metrics function with the route name `distributed.trace.forwarder`. The input to the function will be a HashMap like this:
```text
trace={path=/api/upload/demo, service=hello.upload, success=true,
       origin=2023032731e2a5eeae8f4da09f3d9ac6b55fb0a4,
       exec_time=77.462, start=2023-03-27T19:38:30.061Z,
       from=http.request, id=12345, round_trip=132.296, status=200}
```
The system will detect if `distributed.trace.forwarder` is available. If yes, it will forward performance metrics from the distributed trace to your custom function.

Optionally, you may also implement a custom audit function named `transaction.journal.recorder` to monitor request-response payloads. To enable journaling, please add this to the application.properties file:
```properties
journal.yaml=classpath:/journal.yaml
```
and add the "journal.yaml" configuration file to the project's resources folder with content like this:
```yaml
journal:
  - "my.test.function"
  - "another.function"
```
In the above example, "my.test.function" and "another.function" will be monitored, and their request-response payloads will be forwarded to your custom audit function. The input to your audit function will be a HashMap containing the performance metrics data and a "journal" section with the request and response payloads in clear form.
> IMPORTANT: journal data may contain sensitive personally identifiable information and secrets.
> Please check security compliance before storing it in an access-restricted audit data store.
| Chapter-4 | Home | Chapter-6 |
|:--|:--:|--:|
| Event orchestration | Table of Contents | Spring Boot |
While the platform-core foundation code includes a lightweight non-blocking HTTP server, you can also turn your application into an executable Spring Boot application.

There are two ways to do that:

1. Add the platform-core library to a regular Spring Boot application
2. Use the `rest-spring-3` add-on library for a pre-configured Spring Boot experience

For option 1, the platform-core library can co-exist with Spring Boot. You can write code specific to Spring Boot and the Spring framework ecosystem. Please make sure you add the following startup code to your Spring Boot main application like this:
```java
@SpringBootApplication
public class MyMainApp extends SpringBootServletInitializer {

    public static void main(String[] args) {
        AppStarter.main(args);
        SpringApplication.run(MyMainApp.class, args);
    }
}
```
We suggest running `AppStarter.main` before the `SpringApplication.run` statement. This allows the platform-core foundation code to load the event-listener functions into memory before Spring Boot starts.

For option 2, you can add the `rest-spring-3` library to your application and turn it into a pre-configured Spring Boot 3 application.
The "rest-spring" library configures Spring Boot's serializers (XML and JSON) to behave consistently with the built-in lightweight non-blocking HTTP server.

If you want to disable the lightweight HTTP server, you can set `rest.automation=false` in application.properties. The REST automation engine and the lightweight HTTP server will then be turned off.
> IMPORTANT: the platform-core library assumes the application configuration files to be either
> application.yml or application.properties. If you use a custom Spring profile, please keep
> application.yml or application.properties for the platform-core. If you use the default Spring
> profile, both platform-core and Spring Boot will use the same configuration files.
You can customize your error page by copying the default `errorPage.html` from the platform-core's or rest-spring's resources folder into your source project. The default page is shown below.

This is the HTML error page that the platform-core or rest-spring library uses. You can update it to follow your corporate style guide. Please keep the parameters (status, message, path, warning) intact.
```html
<!DOCTYPE html>
<html lang="en">
<head>
    <title>HTTP Error</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<body>

<div>
    <h3>HTTP-${status}</h3>
    <div>${warning}</div><br/>
    <table>
        <tbody>
        <tr><td style="font-style: italic; width: 100px">Type</td><td>error</td></tr>
        <tr><td style="font-style: italic; width: 100px">Status</td><td>${status}</td></tr>
        <tr><td style="font-style: italic; width: 100px">Message</td><td>${message}</td></tr>
        <tr><td style="font-style: italic; width: 100px">Path</td><td>${path}</td></tr>
        </tbody>
    </table>

</div>
</body>
</html>
```
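The ${status}, ${message}, ${path} and ${warning} placeholders are filled in by the server at render time. Conceptually this is simple string substitution; the following sketch is illustrative only and is not the actual implementation:

```java
import java.util.Map;

public class ErrorPageDemo {
    // Substitute ${key} placeholders in a template with values from a map
    static String render(String template, Map<String, String> values) {
        String result = template;
        for (Map.Entry<String, String> entry : values.entrySet()) {
            result = result.replace("${" + entry.getKey() + "}", entry.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        String template = "<h3>HTTP-${status}</h3><div>${message}</div>";
        String page = render(template, Map.of("status", "404", "message", "Resource not found"));
        System.out.println(page); // <h3>HTTP-404</h3><div>Resource not found</div>
    }
}
```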
If you want to keep REST automation's lightweight HTTP server alongside Spring Boot's Tomcat or another application server, please add the following to your application.properties file:
```properties
server.port=8083
rest.server.port=8085
rest.automation=true
```
The platform-core and Spring Boot will use `rest.server.port` and `server.port` respectively.

Let's review the `rest-spring-3-example` demo application in the "examples/rest-spring-3-example" project. You can use the rest-spring-3-example as a template to create a Spring Boot application.
In addition to the REST automation engine that lets you create REST endpoints by configuration, you can also create REST endpoints programmatically.

We will examine an asynchronous REST endpoint with the `AsyncHelloWorld` class.
```java
@RestController
public class AsyncHelloWorld {
    private static final AtomicInteger seq = new AtomicInteger(0);

    @GetMapping(value = "/api/hello/world", produces={"application/json", "application/xml"})
    public Mono<Map<String, Object>> hello(HttpServletRequest request) {
        String traceId = Utility.getInstance().getUuid();
        PostOffice po = new PostOffice("hello.world.endpoint", traceId, "GET /api/hello/world");
        Map<String, Object> forward = new HashMap<>();

        Enumeration<String> headers = request.getHeaderNames();
        while (headers.hasMoreElements()) {
            String key = headers.nextElement();
            forward.put(key, request.getHeader(key));
        }
        // As a demo, just put the incoming HTTP headers as a payload and a parameter showing the sequence counter.
        // The echo service will return both.
        int n = seq.incrementAndGet();
        EventEnvelope req = new EventEnvelope();
        req.setTo("hello.world").setBody(forward).setHeader("seq", n);
        return Mono.create(callback -> {
            try {
                po.asyncRequest(req, 3000)
                    .onSuccess(event -> {
                        Map<String, Object> result = new HashMap<>();
                        result.put("status", event.getStatus());
                        result.put("headers", event.getHeaders());
                        result.put("body", event.getBody());
                        result.put("execution_time", event.getExecutionTime());
                        result.put("round_trip", event.getRoundTrip());
                        callback.success(result);
                    })
                    .onFailure(ex -> callback.error(new AppException(408, ex.getMessage())));
            } catch (IOException e) {
                callback.error(e);
            }
        });
    }
}
```
In this hello world REST endpoint, Spring Reactor runs the "hello" method asynchronously without waiting for a response.

The example code copies the incoming HTTP request headers and sends them as the request payload to the "hello.world" function. The function is defined in the MainApp like this:
```java
Platform platform = Platform.getInstance();
LambdaFunction echo = (headers, input, instance) -> {
    Map<String, Object> result = new HashMap<>();
    result.put("headers", headers);
    result.put("body", input);
    result.put("instance", instance);
    result.put("origin", platform.getOrigin());
    return result;
};
platform.register("hello.world", echo, 20);
```
When "hello.world" responds, its result set will be returned to the `onSuccess` method as a "future response". The "onSuccess" method then delivers the response to the browser through the reactive Mono callback.
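The "future response" pattern can be sketched with the JDK's CompletableFuture standing in for the framework's asyncRequest API (illustrative only; the real API returns a Vert.x Future):

```java
import java.util.concurrent.CompletableFuture;

public class FutureResponseDemo {
    // Emulate an asynchronous request that completes later with an echo response
    static CompletableFuture<String> asyncRequest(String payload) {
        return CompletableFuture.supplyAsync(() -> "echo: " + payload);
    }

    public static void main(String[] args) {
        asyncRequest("hello")
                // the callback runs when the response arrives, without blocking the caller
                .thenAccept(System.out::println)
                .join();
    }
}
```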
The `AsyncHelloConcurrent` class is the same as `AsyncHelloWorld` except that it performs a "fork-n-join" operation over multiple instances of the "hello.world" function.

Unlike "rest.yaml", which defines tracing by configuration, you can turn on tracing programmatically in a REST endpoint. To enable tracing, the function sets the trace ID and path in the PostOffice constructor.
When you try the endpoint at http://127.0.0.1:8083/api/hello/world, it will echo your HTTP request headers. In the command terminal, you will see tracing information in the console log like this:
```text
DistributedTrace:67 - trace={path=GET /api/hello/world, service=hello.world, success=true,
    origin=20230403364f70ebeb54477f91986289dfcd7b75, exec_time=0.249, start=2023-04-03T04:42:43.445Z,
    from=hello.world.endpoint, id=e12e871096ba4938b871ee72ef09aa0a, round_trip=20.018, status=200}
```
If you want to turn on the non-blocking websocket server, you can add the following configuration to application.properties:

```properties
server.port=8083
websocket.server.port=8085
```

The above assumes Spring Boot runs on port 8083 and the websocket server runs on port 8085.

> Note that "websocket.server.port" is an alias of "rest.server.port".
You can create a websocket service with a Java class like this:
```java
@WebSocketService("hello")
public class WsEchoDemo implements LambdaFunction {

    @Override
    public Object handleEvent(Map<String, String> headers, Object body, int instance) {
        // handle the incoming websocket events (type = open, close, bytes or string)
        return null;
    }
}
```
The above creates a websocket service at the "/ws/hello" server endpoint.

Please review the example code in the WsEchoDemo class in the rest-spring-2-example project for details.
If you want to use Spring Boot's Tomcat websocket server instead, you can disable the non-blocking websocket server feature by removing the `websocket.server.port` configuration and any websocket service classes with the `WebSocketService` annotation.

To try out the demo websocket server, visit http://127.0.0.1:8083 and select "Websocket demo".
The `rest-spring-3` subproject is a pre-configured Spring Boot 3 library. In "rest-spring-3", Spring WebFlux replaces JAX-RS as the asynchronous HTTP engine.
| Chapter-5 | Home | Chapter-7 |
|:--|:--:|--:|
| Build, test and deploy | Table of Contents | Event over HTTP |
The in-memory event system allows functions to communicate with each other in the same application memory space.

In composable architecture, applications are modular components in a network. Some transactions may require the services of more than one application. "Event over HTTP" extends the event system beyond a single application.

The Event API service (`event.api.service`) is a built-in function in the system. To enable "Event over HTTP", you must first turn on the REST automation engine with the following parameters in the application.properties file:
```properties
rest.server.port=8085
rest.automation=true
```
and then check that the following entry is configured in the "rest.yaml" endpoint definition file. If not, update "rest.yaml" accordingly. The "timeout" value is set to 60 seconds to fit common use cases.
```yaml
  - service: [ "event.api.service" ]
    methods: [ 'POST' ]
    url: "/api/event"
    timeout: 60s
    tracing: true
```
This will expose the Event API endpoint at port 8085 and URL "/api/event".

In Kubernetes, the Event API endpoint of each application is reachable through internal DNS, so there is no need to create an "ingress" for this purpose.

You may now test drive the Event API service.
First, build and run the lambda-example application on port 8085.

```shell
cd examples/lambda-example
java -jar target/lambda-example-3.1.1.jar
```
Second, build and run the rest-spring-3-example application.

```shell
cd examples/rest-spring-example-3
java -jar target/rest-spring-3-example-3.1.1.jar
```

The rest-spring-3-example application will run as a Spring Boot application on ports 8083 and 8086.

These two applications start independently.
You may point your browser to http://127.0.0.1:8083/api/pojo/http/1 to invoke the `HelloPojoEventOverHttp` endpoint service, which will in turn make an Event API call to the lambda-example's "hello.pojo" service.

You will see the following response in the browser. This means the rest-spring-3-example application has successfully made an Event API call to the lambda-example application using the Event API endpoint.
```json
{
  "id": 1,
  "name": "Simple PoJo class",
  "address": "100 World Blvd, Planet Earth",
  "date": "2023-03-27T23:17:19.257Z",
  "instance": 6,
  "seq": 66,
  "origin": "2023032791b6938a47614cf48779b1cf02fc89c4"
}
```
To examine how the application makes the Event API call, please refer to the `HelloPojoEventOverHttp` class in the rest-spring-3-example. The class is extracted below:
```java
@RestController
public class HelloPoJoEventOverHttp {

    @GetMapping("/api/pojo/http/{id}")
    public Mono<SamplePoJo> getPoJo(@PathVariable("id") Integer id) {
        AppConfigReader config = AppConfigReader.getInstance();
        String remotePort = config.getProperty("lambda.example.port", "8085");
        String remoteEndpoint = "http://127.0.0.1:"+remotePort+"/api/event";
        String traceId = Utility.getInstance().getUuid();
        PostOffice po = new PostOffice("hello.pojo.endpoint", traceId, "GET /api/pojo/http");
        EventEnvelope req = new EventEnvelope().setTo("hello.pojo").setHeader("id", id);
        return Mono.create(callback -> {
            try {
                EventEnvelope response = po.request(req, 3000, Collections.emptyMap(), remoteEndpoint, true).get();
                if (response.getBody() instanceof SamplePoJo result) {
                    callback.success(result);
                } else {
                    callback.error(new AppException(response.getStatus(), response.getError()));
                }
            } catch (IOException | ExecutionException | InterruptedException e) {
                callback.error(e);
            }
        });
    }
}
```
The method signatures of the Event API are shown as follows:
```java
// io.vertx.core.Future
public Future<EventEnvelope> asyncRequest(final EventEnvelope event, long timeout,
                                          Map<String, String> headers,
                                          String eventEndpoint, boolean rpc) throws IOException;

// java.util.concurrent.Future
public Future<EventEnvelope> request(final EventEnvelope event, long timeout,
                                     Map<String, String> headers,
                                     String eventEndpoint, boolean rpc) throws IOException;
```

```kotlin
suspend fun awaitRequest(request: EventEnvelope?, timeout: Long,
                         headers: Map<String, String>,
                         eventEndpoint: String, rpc: Boolean): EventEnvelope
```
Optionally, you may add security headers in the "headers" argument, e.g. the "Authorization" header.

The eventEndpoint is a fully qualified URL, e.g. http://peer/api/event

When the "rpc" boolean value is set to true, the response from the service of the peer application instance will be delivered. For a drop-n-forget use case, you can set the "rpc" value to false; the call will immediately return an HTTP-202 response.
While you can call the "Event-over-HTTP" APIs programmatically, it is more convenient to automate them with a configuration. This service abstraction means that user applications do not need to know where the target services are.

You can enable Event-over-HTTP configuration by adding this parameter to application.properties:
```properties
#
# Optional event-over-http target maps
#
event.over.http=classpath:/event-over-http.yaml
```
+and then create the configuration file "event-over-http.yaml" like this:
```yaml
event:
  http:
  - route: 'hello.pojo2'
    target: 'http://127.0.0.1:${lambda.example.port}/api/event'
  - route: 'event.http.test'
    target: 'http://127.0.0.1:${server.port}/api/event'
    # optional security headers
    headers:
      authorization: 'demo'
  - route: 'event.save.get'
    target: 'http://127.0.0.1:${server.port}/api/event'
    headers:
      authorization: 'demo'
```
+In the above example, there are three routes (hello.pojo2, event.http.test and event.save.get) with target URLs. +If additional authentication is required for the peer's "/api/event" endpoint, you may add a set of security +headers in each route.
When you send an asynchronous event or make an RPC call to the "event.save.get" service, it will be forwarded to the peer's "event-over-HTTP" endpoint (`/api/event`) accordingly.

You may also add variable references to the application.properties (or application.yaml) file, such as "server.port" in this example.
+An example in the rest-spring-3-example subproject is shown below to illustrate this service abstraction. +In this example, the remote Event-over-HTTP endpoint address is resolved from the event-over-http.yaml +configuration.
```java
@RestController
public class HelloPoJoEventOverHttpByConfig {

    @GetMapping("/api/pojo2/http/{id}")
    public Mono<SamplePoJo> getPoJo(@PathVariable("id") Integer id) {
        String traceId = Utility.getInstance().getUuid();
        PostOffice po = new PostOffice("hello.pojo.endpoint", traceId, "GET /api/pojo2/http");
        /*
         * "hello.pojo2" resides in the lambda-example and is reachable by "Event-over-HTTP".
         * In HelloPojoEventOverHttp.java, it demonstrates the use of Event-over-HTTP API.
         * In this example, it illustrates the use of the "Event-over-HTTP by configuration" feature.
         * Please see application.properties and event-over-http.yaml files for more details.
         */
        EventEnvelope req = new EventEnvelope().setTo("hello.pojo2").setHeader("id", id);
        return Mono.create(callback -> {
            try {
                EventEnvelope response = po.request(req, 3000, false).get();
                if (response.getBody() instanceof SamplePoJo result) {
                    callback.success(result);
                } else {
                    callback.error(new AppException(response.getStatus(), response.getError()));
                }
            } catch (IOException | ExecutionException | InterruptedException e) {
                callback.error(e);
            }
        });
    }
}
```
> Note: The configuration-based "event-over-HTTP" feature does not support the fork-n-join request API.
> You can achieve similar parallel processing using multiple calls to the "po.request" API,
> where each call returns a Java "Future".
The Event API exposes all public functions of an application instance to the network using a single REST endpoint.

The advantages of the Event API include:

The following configuration adds an authentication service to the Event API endpoint:
```yaml
  - service: [ "event.api.service" ]
    methods: [ 'POST' ]
    url: "/api/event"
    timeout: 60s
    authentication: "v1.api.auth"
    tracing: true
```
This enforces that every incoming request to the Event API endpoint is authenticated by the "v1.api.auth" service before being passed to the Event API service. You can plug in your own authentication service, such as OAuth 2.0 "bearer token" validation.

Please refer to Chapter-3 - REST automation for details.
| Chapter-6 | Home | Chapter-8 |
|:--|:--:|--:|
| Spring Boot | Table of Contents | Service mesh |
Service mesh is a dedicated infrastructure layer that facilitates inter-container communication using a "sidecar" and a "control plane".

Service mesh systems require additional administrative containers (PODs) for the "control plane" and "service discovery". The additional infrastructure requirements vary among products.

We will discuss using Kafka as a minimalist service mesh.

> Note: Service mesh is optional. You can use "event over HTTP" for inter-container
> communication if service mesh is not suitable.
Typically, a service mesh system uses a "sidecar" that sits next to the application container in the same POD to provide service discovery and network proxy services.

Instead of using a sidecar proxy, this system maintains a distributed routing table in each application instance. When a function requests the service of another function that is not in the same memory space, the "cloud.connector" module will bridge the event to the peer application through a network event system like Kafka.

As shown in the following table, if "service.1" and "service.2" are in the same memory space of an application, they will communicate using the in-memory event bus. If they are in different applications and the applications are configured with Kafka, the two functions will communicate via the "cloud.connector" service.
| In-memory event bus | Network event stream |
|---|---|
| "service.1" -> "service.2" | "service.1" -> "cloud.connector" -> "service.2" |
The system supports Kafka out of the box. To select Kafka, configure application.properties like this:

```properties
cloud.connector=kafka
```

The "cloud.connector" parameter can be set to "none" or "kafka". The default value of "cloud.connector" is "none". This means the application is not using any network event system "connector" and is thus running independently.
Let's set up a minimalist service mesh with Kafka to see how it works.

You need a Kafka cluster as the network event stream system. For development and testing, you can build and run a standalone Kafka server like this. Note that the `mvn clean package` command is optional because the executable JAR should already be available after the `mvn clean install` command in Chapter-1.
```shell
cd connectors/adapters/kafka/kafka-standalone
mvn clean package
java -jar target/kafka-standalone-3.1.1.jar
```
The standalone Kafka server will start at port 9092. You may adjust the "server.properties" in the kafka-standalone project when necessary.

When the Kafka server starts, it will create two temporary directories in the "/tmp" folder.

> The Kafka server is designed for development purposes only. The Kafka and Zookeeper data stores
> will be cleared when the server is restarted.
The "kafka-presence" is a "presence monitor" application. It is a minimalist "control plane" in service mesh +terminology.
+What is a presence monitor? A presence monitor is the control plane that assigns unique "topic" for each +user application instance.
+It monitors the "presence" of each application. If an application fails or stops, the presence monitor will +advertise the event to the rest of the system so that each application container will update its corresponding +distributed routing table, thus bypassing the failed application and its services.
+If an application has more than one container instance deployed, they will work together to share load evenly.
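A toy model of the topic issuance behavior may look like this. The class is hypothetical and is not the actual kafka-presence code; a production implementation would also recycle released topic slots:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of topic issuance: each connecting application instance,
// identified by its origin ID, is assigned the next virtual topic such as
// "multiplex.0001-000".
public class TopicIssuanceSketch {
    private final Map<String, String> assignments = new LinkedHashMap<>();

    // assign a virtual topic, re-using the existing one if the instance reconnects
    public String assign(String originId) {
        return assignments.computeIfAbsent(originId,
                k -> String.format("multiplex.0001-%03d", assignments.size()));
    }

    // release the topic when the instance leaves
    // (a real implementation would also track released slots for re-use)
    public void release(String originId) {
        assignments.remove(originId);
    }

    public static void main(String[] args) {
        TopicIssuanceSketch monitor = new TopicIssuanceSketch();
        System.out.println(monitor.assign("origin-a"));  // multiplex.0001-000
        System.out.println(monitor.assign("origin-b"));  // multiplex.0001-001
    }
}
```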
+You will start the presence monitor like this:
+cd connectors/adapters/kafka/kafka-presence
+java -jar target/kafka-presence-3.1.1.jar
+
+By default, the kafka-presence application will run at port 8080. A partial start-up log is shown below:
+AppStarter:344 - Modules loaded in 2,370 ms
+AppStarter:334 - Websocket server running on port-8080
+ServiceLifeCycle:73 - service.monitor, partition 0 ready
+HouseKeeper:72 - Registered monitor (me) 2023032896b12f9de149459f9c8b71ad8b6b49fa
+
+The presence monitor will use the topic "service.monitor" to connect to the Kafka server and register itself +as a presence monitor.
+The presence monitor is resilient. You can run more than one instance so that they back up each other. If you are not using Docker or Kubernetes, change the "server.port" parameter of the second instance to 8081 so that the two application instances can run on the same machine.
+Let's run the rest-spring-2-example (rest-spring-3-example) and lambda-example applications with +Kafka connector turned on.
+For demo purposes, the rest-spring-2-example and lambda-example are pre-configured with "kafka-connector". If you do not need these libraries, please remove them from the pom.xml build script.
+Since kafka-connector is pre-configured, we can start the two demo applications like this:
+cd examples/rest-spring-2-example
+java -Dcloud.connector=kafka -Dmandatory.health.dependencies=cloud.connector.health
+ -jar target/rest-spring-2-example-3.1.1.jar
+
+cd examples/lambda-example
+java -Dcloud.connector=kafka -Dmandatory.health.dependencies=cloud.connector.health
+ -jar target/lambda-example-3.1.1.jar
+
+The above command uses the "-D" parameters to configure the "cloud.connector" and "mandatory.health.dependencies".
+The parameter mandatory.health.dependencies=cloud.connector.health
tells the system to turn on the health check
+endpoint for the application.
For the rest-spring-2-example, the start-up log may look like this:
+AppStarter:344 - Modules loaded in 2,825 ms
+PresenceConnector:155 - Connected pc.abb4a4de.in, 127.0.0.1:8080,
+ /ws/presence/202303282583899cf43a49b98f0522492b9ca178
+EventConsumer:160 - Subscribed multiplex.0001.0
+ServiceLifeCycle:73 - multiplex.0001, partition 0 ready
+
+This means that the rest-spring-2-example has successfully connected to the presence monitor at port 8080. +It has subscribed to the topic "multiplex.0001" partition 0.
+For the lambda-example, the log may look like this:
+AppStarter:344 - Modules loaded in 2,742 ms
+PresenceConnector:155 - Connected pc.991a2be0.in, 127.0.0.1:8080,
+ /ws/presence/2023032808d82ebe2c0d4e5aa9ca96b3813bdd25
+EventConsumer:160 - Subscribed multiplex.0001.1
+ServiceLifeCycle:73 - multiplex.0001, partition 1 ready
+ServiceRegistry:242 - Peer 202303282583899cf43a49b98f0522492b9ca178 joins (rest-spring-2-example 3.0.0)
+ServiceRegistry:383 - hello.world (rest-spring-2-example, WEB.202303282583899cf43a49b98f0522492b9ca178) registered
+
+Notice that the lambda-example has discovered the rest-spring-2-example through Kafka and added "hello.world" to the distributed routing table.
+At this point, the rest-spring-2-example will find the lambda-example application as well:
+ServiceRegistry:242 - Peer 2023032808d82ebe2c0d4e5aa9ca96b3813bdd25 joins (lambda-example 3.0.0)
+ServiceRegistry:383 - hello.world (lambda-example,
+ APP.2023032808d82ebe2c0d4e5aa9ca96b3813bdd25) registered
+ServiceRegistry:383 - hello.pojo (lambda-example,
+ APP.2023032808d82ebe2c0d4e5aa9ca96b3813bdd25) registered
+
+This is real-time service discovery coordinated by the "kafka-presence" monitor application.
+Now you have created a minimalist event-driven service mesh.
+In Chapter-7, you have sent a request from the rest-spring-2-example to the lambda-example using +"Event over HTTP" without a service mesh.
+In this section, you can make the same request using service mesh.
+Please point your browser to http://127.0.0.1:8083/api/pojo/mesh/1 and you will see the following response:
+{
+ "id": 1,
+ "name": "Simple PoJo class",
+ "address": "100 World Blvd, Planet Earth",
+ "date": "2023-03-28T17:53:41.696Z",
+ "instance": 1,
+ "seq": 1,
+ "origin": "2023032808d82ebe2c0d4e5aa9ca96b3813bdd25"
+}
+
+You can check the service mesh status from the presence monitor's "/info" endpoint.
+You can visit http://127.0.0.1:8080/info and it will show something like this:
+{
+ "app": {
+ "name": "kafka-presence",
+ "description": "Presence Monitor",
+ "version": "3.0.0"
+ },
+ "personality": "RESOURCES",
+ "additional_info": {
+ "total": {
+ "topics": 2,
+ "virtual_topics": 2,
+ "connections": 2
+ },
+ "topics": [
+ "multiplex.0001 (32)",
+ "service.monitor (11)"
+ ],
+ "virtual_topics": [
+ "multiplex.0001-000 -> 202303282583899cf43a49b98f0522492b9ca178, rest-spring-2-example v3.0.0",
+ "multiplex.0001-001 -> 2023032808d82ebe2c0d4e5aa9ca96b3813bdd25, lambda-example v3.0.0"
+ ],
+ "connections": [
+ {
+ "elapsed": "25 minutes 12 seconds",
+ "created": "2023-03-28T17:43:13Z",
+ "origin": "2023032808d82ebe2c0d4e5aa9ca96b3813bdd25",
+ "name": "lambda-example",
+ "topic": "multiplex.0001-001",
+ "monitor": "2023032896b12f9de149459f9c8b71ad8b6b49fa",
+ "type": "APP",
+ "updated": "2023-03-28T18:08:25Z",
+ "version": "3.0.0",
+ "seq": 65,
+ "group": 1
+ },
+ {
+ "elapsed": "29 minutes 42 seconds",
+ "created": "2023-03-28T17:38:47Z",
+ "origin": "202303282583899cf43a49b98f0522492b9ca178",
+ "name": "rest-spring-2-example",
+ "topic": "multiplex.0001-000",
+ "monitor": "2023032896b12f9de149459f9c8b71ad8b6b49fa",
+ "type": "WEB",
+ "updated": "2023-03-28T18:08:29Z",
+ "version": "3.0.0",
+ "seq": 75,
+ "group": 1
+ }
+ ],
+ "monitors": [
+ "2023032896b12f9de149459f9c8b71ad8b6b49fa - 2023-03-28T18:08:46Z"
+ ]
+ },
+ "vm": {
+ "java_vm_version": "18.0.2.1+1",
+ "java_runtime_version": "18.0.2.1+1",
+ "java_version": "18.0.2.1"
+ },
+ "origin": "2023032896b12f9de149459f9c8b71ad8b6b49fa",
+ "time": {
+ "current": "2023-03-28T18:08:47.613Z",
+ "start": "2023-03-28T17:31:23.611Z"
+ }
+}
+
+In this example, it shows that there are two user applications (rest-spring-2-example and lambda-example) connected.
+The presence monitor has a "/health" endpoint.
+You can visit http://127.0.0.1:8080/health and it will show something like this:
+{
+ "upstream": [
+ {
+ "route": "cloud.connector.health",
+ "status_code": 200,
+ "service": "kafka",
+ "topics": "on-demand",
+ "href": "127.0.0.1:9092",
+ "message": "Loopback test took 3 ms; System contains 2 topics",
+ "required": true
+ }
+ ],
+ "origin": "2023032896b12f9de149459f9c8b71ad8b6b49fa",
+ "name": "kafka-presence",
+ "status": "UP"
+}
+
+Similarly, you can check the health status of the rest-spring-2-example application with http://127.0.0.1:8083/health
+{
+ "upstream": [
+ {
+ "route": "cloud.connector.health",
+ "status_code": 200,
+ "service": "kafka",
+ "topics": "on-demand",
+ "href": "127.0.0.1:9092",
+ "message": "Loopback test took 4 ms",
+ "required": true
+ }
+ ],
+ "origin": "202303282583899cf43a49b98f0522492b9ca178",
+ "name": "rest-spring-example",
+ "status": "UP"
+}
+
+It looks similar to the health status of the presence monitor. However, only the presence monitor shows the total +number of topics because it handles topic issuance to each user application instance.
+Additional actuator endpoints include:
+You can press "control-C" to stop an application. Let's stop the lambda-example application.
+Once you stop the lambda-example from the command line, the rest-spring-2-example will detect it:
+ServiceRegistry:278 - Peer 2023032808d82ebe2c0d4e5aa9ca96b3813bdd25 left (lambda-example 3.0.0)
+ServiceRegistry:401 - hello.world 2023032808d82ebe2c0d4e5aa9ca96b3813bdd25 unregistered
+ServiceRegistry:401 - hello.pojo 2023032808d82ebe2c0d4e5aa9ca96b3813bdd25 unregistered
+
+The rest-spring-2-example will update its distributed routing table automatically.
+You will also find log messages in the kafka-presence application like this:
+MonitorService:120 - Member 2023032808d82ebe2c0d4e5aa9ca96b3813bdd25 left
+TopicController:250 - multiplex.0001-001 released by 2023032808d82ebe2c0d4e5aa9ca96b3813bdd25,
+ lambda-example, 3.0.0
+
+When an application instance stops, the presence monitor will detect the event, remove it from the registry and +release the topic associated with the disconnected application instance.
+The presence monitor uses the "presence" feature of WebSocket, hence the name "presence monitor".
| Chapter-7 | Home | Chapter-9 |
|---|---|---|
| Event over HTTP | Table of Contents | API overview |
Each application has an entry point. You may implement an entry point in a main application like this:
+@MainApplication
+public class MainApp implements EntryPoint {
+ public static void main(String[] args) {
+ AppStarter.main(args);
+ }
+ @Override
+ public void start(String[] args) {
+ // your startup logic here
+ log.info("Started");
+ }
+}
+
+In your main application, you will implement the EntryPoint
interface to override the "start" method.
+Typically, a main application is used to initiate some application start up procedure.
If your application does not need any start-up logic, you can just print a message to indicate that your application has started.
+You may want to keep the static "main" method which can be used to run your application inside an IDE.
+The pom.xml build script is designed to run the AppStarter
start up function that will execute your main
+application's start method.
In some cases, your application may have more than one main application module. You can decide the sequence of
+execution using the "sequence" parameter in the MainApplication
annotation. The module with the smallest sequence
+number will run first.
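The ordering rule can be illustrated with a small sketch. The class names are invented, and the real AppStarter discovers annotated modules instead of taking a list:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Toy illustration of the start-up ordering rule: the module with the
// smallest "sequence" number runs first.
public class StartupOrderSketch {
    static class Module {
        final String name;
        final int sequence;
        Module(String name, int sequence) {
            this.name = name;
            this.sequence = sequence;
        }
    }

    // return module names in execution order
    public static List<String> startupOrder(List<Module> modules) {
        List<Module> sorted = new ArrayList<>(modules);
        sorted.sort(Comparator.comparingInt(m -> m.sequence));
        List<String> order = new ArrayList<>();
        for (Module m : sorted) {
            order.add(m.name);
        }
        return order;
    }

    public static void main(String[] args) {
        List<Module> modules = new ArrayList<>();
        modules.add(new Module("reporting.module", 5));
        modules.add(new Module("core.module", 1));
        System.out.println(startupOrder(modules));  // [core.module, reporting.module]
    }
}
```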
Sometimes, you may need to set up environment configuration before your main application starts.
+You can implement a BeforeApplication
module. Its syntax is similar to the MainApplication
.
@BeforeApplication
+public class EnvSetup implements EntryPoint {
+
+ @Override
+ public void start(String[] args) {
+ // your environment setup logic here
+ log.info("initialized");
+ }
+}
+
+The BeforeApplication
logic will run before your MainApplication
module. This is useful when you want to do
+special handling of environment variables. For example, decrypt an environment variable secret, construct an X.509
+certificate, and save it in the "/tmp" folder before your main application starts.
Mercury version 3 is an event engine that encapsulates Eclipse Vertx with Kotlin coroutines and suspend functions.
+A composable application is a collection of functions that communicate with each other through events. Each event is transported by an event envelope. Let's examine the envelope.
+There are 3 elements in an event envelope:
| Element | Type | Purpose |
|---|---|---|
| 1 | metadata | Includes unique ID, target function name, reply address, correlation ID, status, exception, trace ID and path |
| 2 | headers | User defined key-value pairs |
| 3 | body | Event payload (primitive, hash map or PoJo) |
+Headers and body are optional, but you must provide at least one of them. If the envelope does not have any headers or body, the system will send your event as a "ping" command to the target function. The response acknowledges that the target function exists. This ping/pong protocol tests the event loop or service mesh, which is useful for a DevSecOps admin dashboard.
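A minimal sketch of the "headers or body required" rule, using an invented stand-in class rather than the real EventEnvelope API:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the rule that an event without headers and body
// is delivered as a "ping" to the target function.
public class EnvelopeSketch {
    private final Map<String, String> headers = new HashMap<>();
    private Object body = null;

    public EnvelopeSketch setHeader(String key, String value) {
        headers.put(key, value);
        return this;
    }

    public EnvelopeSketch setBody(Object value) {
        body = value;
        return this;
    }

    // neither headers nor body: the system would treat this as a ping command
    public boolean isPing() {
        return headers.isEmpty() && body == null;
    }

    public static void main(String[] args) {
        System.out.println(new EnvelopeSketch().isPing());                   // true
        System.out.println(new EnvelopeSketch().setBody("hello").isPing());  // false
    }
}
```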
+To reject an incoming request, you can throw an AppException like this:
+// example-1
+throw new AppException(400, "My custom error message");
+// example-2
+throw new AppException(400, "My custom error message", ex);
+
+Example-1 - a simple exception with status code (400) and an error message
+Example-2 - includes a nested exception
+As a best practice, we recommend using error codes that are compatible with HTTP status codes.
+You can write a function in Java like this:
+@PreLoad(route = "hello.simple", instances = 10)
+public class SimpleDemoEndpoint implements TypedLambdaFunction<AsyncHttpRequest, Object> {
+ @Override
+ public Object handleEvent(Map<String, String> headers, AsyncHttpRequest input, int instance) {
+ // business logic here
+ return result;
+ }
+}
+
+By default, a Java function will run using a kernel thread. To tell the system that you want to run the function as
+a coroutine, you can add the CoroutineRunner
annotation.
The PreLoad
annotation tells the system to preload the function into memory and register it into the event loop.
+You must provide a "route name" and configure the number of concurrent workers ("instances").
Route name is used by the event loop to find your function in memory. A route name must use lowercase letters and numbers, and it must have at least one dot as a word separator. e.g. "hello.simple" is a proper route name but "HelloSimple" is not.
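The naming convention can be sketched as a validator. This is an assumption for illustration; the framework may accept additional characters:

```java
// Sketch of the route naming convention described above: lowercase letters
// and numbers, with at least one dot as a word separator.
public class RouteNameSketch {
    public static boolean isValidRoute(String route) {
        if (route == null || !route.contains(".")) {
            return false;
        }
        // every dot-separated segment must be non-empty lowercase alphanumeric
        for (String segment : route.split("\\.", -1)) {
            if (!segment.matches("[a-z0-9]+")) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValidRoute("hello.simple"));  // true
        System.out.println(isValidRoute("HelloSimple"));   // false
    }
}
```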
+You can implement your function using the LambdaFunction or TypedLambdaFunction. The latter allows you to define +the input and output classes.
+The system will map the event body into the input
argument and the event headers into the headers
argument.
+The instance
argument informs your function which worker is serving the current request.
Similarly, you can also write a "suspend function" in Kotlin like this:
+@PreLoad(route = "hello.world", instances = 10, isPrivate = false,
+ envInstances = "instances.hello.world")
+class HelloWorld : KotlinLambdaFunction<Any?, Map<String, Any>> {
+
+ @Throws(Exception::class)
+ override suspend fun handleEvent(headers: Map<String, String>, input: Any?,
+ instance: Int): Map<String, Any> {
+ // business logic here
+ return result
+ }
+}
+
+In the suspend function example above, you may notice the optional envInstances
parameter. This tells the system
+to use a parameter from the application.properties (or application.yml) to configure the number of workers for the
+function. When the parameter defined in "envInstances" is not found, the "instances" parameter is used as the
+default value.
There are reserved metadata items for route name ("my_route"), trace ID ("my_trace_id") and trace path ("my_trace_path") in the "headers" argument. They do not exist in the incoming event envelope. Instead, the system automatically inserts them as read-only metadata.
+They are used when your code wants to obtain an instance of PostOffice or FastRPC.
+To inspect all metadata, you can declare the input as "EventEnvelope". The system will map the whole event envelope +into the "input" argument. You can retrieve the replyTo address and other useful metadata.
+Note that the "replyTo" address is optional. It only exists when the caller is making an RPC call to your function. +If the caller sends an asynchronous request, the "replyTo" value is null.
+You can obtain a singleton instance of the Platform object to do the following:
+We recommend using the PreLoad
annotation in a class to declare the function route name, number of worker instances
+and whether the function is public or private.
In some use cases where you want to create and destroy functions on demand, you can register them programmatically.
+In the following example, it registers "my.function" using the MyFunction class as a public function and "another.function" with the AnotherFunction class as a private function. It then registers two Kotlin functions in public and private scope respectively.
+Platform platform = Platform.getInstance();
+
+// register a public function
+platform.register("my.function", new MyFunction(), 10);
+
+// register a private function
+platform.registerPrivate("another.function", new AnotherFunction(), 20);
+
+// register a public suspend function
+platform.registerKotlin("my.suspend.function", new MySuspendFunction(), 10);
+
+// register a private suspend function
+platform.registerKotlinPrivate("another.suspend.function", new AnotherSuspendFunction(), 10);
+
+A public function is visible to any application instance in the same network. When a function is declared as "public", the function is reachable through the EventAPI REST endpoint or a service mesh.
+A private function is invisible outside the memory space of the application instance in which it resides. This allows an application to encapsulate business logic according to domain boundary. You can assemble closely related functions as a composable application that can be deployed independently.
+In some use cases, you want to release a function on-demand when it is no longer required.
+platform.release("another.function");
+
+The above API will unload the function from memory and release it from the "event loop".
+You can check if a function with the named route has been deployed.
+if (platform.hasRoute("another.function")) {
+ // do something
+}
+
+Functions are registered asynchronously. Functions registered with the PreLoad annotation are available to your application when the MainApplication starts.
For functions that are registered on-demand, you can wait for the function to get ready like this:
+Future<Boolean> status = platform.waitForProvider("cloud.connector", 10);
+status.onSuccess(ready -> {
+ // business logic when "cloud.connector" is ready
+});
+
+Note that the "onFailure" method is not required; "onSuccess" receives true or false. In the above example, your application waits for up to 10 seconds. If the function (i.e. the "provider") is already available, the API will invoke the "onSuccess" method immediately.
+When an application instance starts, a unique ID is generated. We call this the "Origin ID".
+String originId = po.getOrigin();
+
+When running the application in a minimalist service mesh using Kafka or similar network event stream system, +the origin ID is used to uniquely identify the application instance.
+The origin ID is automatically appended to the "replyTo" address when making an RPC call over a network event stream so that the system can send the response event back to the "originator" or "calling" application instance.
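For illustration, here is a hypothetical generator in the spirit of the origin IDs shown in the logs above, i.e. a date prefix followed by a random hex string. The framework's actual algorithm is not documented here and may differ:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.UUID;

// Hypothetical sketch of generating a unique per-instance origin ID:
// a yyyyMMdd date prefix plus 32 random hex characters.
public class OriginIdSketch {
    public static String generateOriginId() {
        String datePrefix = LocalDate.now().format(DateTimeFormatter.BASIC_ISO_DATE);
        String random = UUID.randomUUID().toString().replace("-", "");
        return datePrefix + random;  // 8 + 32 = 40 characters
    }

    public static void main(String[] args) {
        System.out.println(generateOriginId());
    }
}
```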
+An application may have one of the following personalities:
+You can change the application personality like this:
+// the default value is "APP"
+ServerPersonality.getInstance().setType(ServerPersonality.Type.REST);
+
+The personality setting is for documentation purposes only. It does not affect the behavior of your application. It will appear in the application "/info" endpoint.
+You can obtain an instance of the PostOffice from the "headers" and "instance" arguments of your function.
+PostOffice po = new PostOffice(headers, instance);
+
+The PostOffice is the event manager that you can use to send asynchronous events or to make RPC requests. The constructor uses the read-only metadata in the "headers" argument of the "handleEvent" method of your function.
+You can send an asynchronous event like this.
+// example-1
+po.send("another.function", "test message");
+
+// example-2
+po.send("another.function", new Kv("some_key", "some_value"), new Kv("another_key", "another_value"));
+
+// example-3
+po.send("another.function", somePoJo, new Kv("some_key", "some_value"));
+
+// example-4
+EventEnvelope event = new EventEnvelope().setTo("another.function")
+ .setHeader("some_key", "some_value").setBody(somePoJo);
+po.send(event);
+
+// example-5
+po.sendLater(event, new Date(System.currentTimeMillis() + 5000));
+
+The first 3 APIs are convenience methods and the system will automatically create an EventEnvelope to hold the target route name, key-values and/or event payload.
+You can make an RPC call like this:
+// example-1
+EventEnvelope request = new EventEnvelope().setTo("another.function")
+ .setHeader("some_key", "some_value").setBody(somePoJo);
+Future<EventEnvelope> response = po.asyncRequest(request, 5000);
+response.onSuccess(result -> {
+ // result is the response event
+});
+response.onFailure(e -> {
+ // handle timeout exception
+});
+
+// example-2
+Future<EventEnvelope> response = po.asyncRequest(request, 5000, false);
+response.onSuccess(result -> {
+ // result is the response event
+ // Timeout exception is returned as a response event with status=408
+});
+
+// example-3 with the "rpc" boolean parameter set to true
+Future<EventEnvelope> response = po.asyncRequest(request, 5000, "http://peer/api/event", true);
+response.onSuccess(result -> {
+ // result is the response event
+});
+response.onFailure(e -> {
+ // handle timeout exception
+});
+
+"Event over HTTP" is an important topic. Please refer to Chapter 7 for more details.
+In a similar fashion, you can make a fork-n-join call that sends request events in parallel to more than one function.
+// example-1
+EventEnvelope request1 = new EventEnvelope().setTo("this.function")
+ .setHeader("hello", "world").setBody("test message");
+EventEnvelope request2 = new EventEnvelope().setTo("that.function")
+ .setHeader("good", "day").setBody(somePoJo);
+List<EventEnvelope> requests = new ArrayList<>();
+requests.add(request1);
+requests.add(request2);
+Future<List<EventEnvelope>> responses = po.asyncRequest(requests, 5000);
+responses.onSuccess(results -> {
+ // results contains the response events
+});
+responses.onFailure(e -> {
+ // handle timeout exception
+});
+
+// example-2
+Future<List<EventEnvelope>> responses = po.asyncRequest(requests, 5000, false);
+responses.onSuccess(results -> {
+ // results contains the response events.
+ // Partial result list is returned if one or more functions did not respond.
+});
+
+You can make a sequential non-blocking RPC call from one function to another. The FastRPC is similar to the PostOffice. +It is the event manager for KotlinLambdaFunction. You can create an instance of the FastRPC using the "headers" +parameters in the input arguments of your function.
+val fastRPC = FastRPC(headers)
+val request = EventEnvelope().setTo("another.function")
+ .setHeader("some_key", "some_value").setBody(somePoJo)
+// example-1
+val response = fastRPC.awaitRequest(request, 5000)
+// handle the response event
+
+// example-2 with the "rpc" boolean parameter set to true
+val response = fastRPC.awaitRequest(request, 5000, "http://peer/api/event", true)
+// handle the response event
+
+Note that timeout exception is returned as a regular event with status 408.
+Sequential non-blocking code is easier to read. Moreover, it handles more concurrent users and requests +without consuming a lot of CPU resources because it is "suspended" while waiting for a response from another function.
+You can make a sequential non-blocking fork-n-join call using the FastRPC API like this:
+val fastRPC = FastRPC(headers)
+val template = EventEnvelope().setTo("hello.world").setHeader("someKey", "someValue")
+val requests = ArrayList<EventEnvelope>()
+// create a list of 4 request events
+for (i in 0..3) {
+ requests.add(EventEnvelope(template.toBytes()).setBody(i).setCorrelationId("cid-$i"))
+}
+val responses: List<EventEnvelope> = fastRPC.awaitRequest(requests, 5000)
+// handle the response events
+
+In the above example, the function creates a list of request events from a template event with target service "hello.world". It sets the numbers 0 to 3 as the body of the individual events with unique correlation IDs.
+The response events contain the same set of correlation IDs so that your business logic can decide how to handle each individual response event.
+The result may be a partial list of response events if one or more functions failed to respond on time.
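The correlation-ID matching described above can be sketched like this, with a toy Event class standing in for the real EventEnvelope:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch showing how business logic may match fork-n-join responses back to
// their requests using correlation IDs, tolerating a partial response list.
public class CorrelationSketch {
    static class Event {
        final String correlationId;
        final Object body;
        Event(String correlationId, Object body) {
            this.correlationId = correlationId;
            this.body = body;
        }
    }

    // index a (possibly partial) response list by correlation ID
    public static Map<String, Object> byCorrelationId(List<Event> responses) {
        Map<String, Object> result = new HashMap<>();
        for (Event e : responses) {
            result.put(e.correlationId, e.body);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Event> responses = new ArrayList<>();
        responses.add(new Event("cid-1", "one"));
        responses.add(new Event("cid-0", "zero"));
        // "cid-2" and "cid-3" may be missing if those calls timed out
        System.out.println(byCorrelationId(responses).keySet());
    }
}
```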
+The PostOffice provides the "exists()" method that is similar to the "platform.hasRoute()" command.
+The difference is that the "exists()" method can discover functions of another application instance when running +in the "service mesh" mode.
+If your application is not deployed in a service mesh, the PostOffice's "exists" and Platform's "hasRoute" APIs +will provide the same result.
+boolean found = po.exists("another.function");
+if (found) {
+ // do something
+}
+
+If you want to know the route name and optional trace ID and path, you can use the following APIs.
+For example, if tracing is enabled, the trace ID will be available. You can put the trace ID in application log +messages. This would group log messages of the same transaction together when you search the trace ID from +a centralized logging dashboard such as Splunk.
+String myRoute = po.getRoute();
+String traceId = po.getTraceId();
+String tracePath = po.getTracePath();
+
+You can use the PostOffice instance to annotate a trace in your function like this:
+// annotate a trace with the key-value "hello:world"
+po.annotateTrace("hello", "world");
+
+This is useful when you want to attach transaction-specific information to the performance metrics. For example, the traces may be used in production transaction analytics.
+++IMPORTANT: do not annotate sensitive or secret information such as PII, PHI, PCI data because + the trace is visible in application log. It may also be forwarded to a centralized + telemetry dashboard.
+
Your function can access the main application configuration from the platform like this:
+AppConfigReader config = AppConfigReader.getInstance();
+// the value can be string or a primitive
+Object value = config.get("my.parameter");
+// the return value will be converted to a string
+String text = config.getProperty("my.parameter");
+
+The system uses the standard dot-bracket format for a parameter name.
+++e.g. "hello.world", "some.key[2]"
+
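A simplified resolver for the dot-bracket format might look like this. It is illustrative only; the real ConfigReader handles more cases:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of dot-bracket key resolution over nested maps and lists,
// e.g. "hello.world" or "some.key[2]".
public class DotBracketSketch {
    @SuppressWarnings("unchecked")
    public static Object get(Map<String, Object> config, String compositeKey) {
        Object current = config;
        // split "some.key[2]" into segments "some" and "key[2]"
        for (String segment : compositeKey.split("\\.")) {
            String key = segment;
            int index = -1;
            int bracket = segment.indexOf('[');
            if (bracket > 0 && segment.endsWith("]")) {
                key = segment.substring(0, bracket);
                index = Integer.parseInt(segment.substring(bracket + 1, segment.length() - 1));
            }
            if (!(current instanceof Map)) {
                return null;
            }
            current = ((Map<String, Object>) current).get(key);
            if (index >= 0) {
                if (!(current instanceof List) || index >= ((List<?>) current).size()) {
                    return null;
                }
                current = ((List<?>) current).get(index);
            }
        }
        return current;
    }

    public static void main(String[] args) {
        Map<String, Object> inner = new HashMap<>();
        inner.put("key", Arrays.asList("a", "b", "c"));
        Map<String, Object> config = new HashMap<>();
        config.put("some", inner);
        System.out.println(get(config, "some.key[2]"));  // c
    }
}
```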
You can override the main application configuration at run-time using the Java argument "-D".
+++e.g. "java -Dserver.port=8080 -jar myApp.jar"
+
Additional configuration files can be added with the ConfigReader
API like this:
// filePath should have location prefix "classpath:/" or "file:/"
+ConfigReader reader = new ConfigReader();
+reader.load(filePath);
+
+The configuration system supports environment variables and references to the main application configuration using the dollar-bracket syntax ${reference:default_value}.
++e.g. "some.key=${MY_ENV_VARIABLE}", "some.key=${my.key}"
+
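A sketch of the dollar-bracket resolution is shown below. The resolution order is an assumption for illustration (supplied properties first, then environment variables, then the default); the real ConfigReader's precedence may differ:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of ${reference:default_value} substitution: look up the reference
// in the supplied properties, then the environment, then fall back.
public class PlaceholderSketch {
    private static final Pattern REF = Pattern.compile("\\$\\{([^:}]+)(?::([^}]*))?}");

    public static String resolve(String value, Map<String, String> properties) {
        Matcher m = REF.matcher(value);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String key = m.group(1);
            String fallback = m.group(2) == null ? "" : m.group(2);
            String resolved = properties.containsKey(key)
                    ? properties.get(key)
                    : System.getenv().getOrDefault(key, fallback);
            m.appendReplacement(sb, Matcher.quoteReplacement(resolved));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("my.key", "8080");
        System.out.println(resolve("server.port=${my.key}", props));     // server.port=8080
        System.out.println(resolve("token=${NO_SUCH_VAR:none}", props));
    }
}
```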
As a best practice, we advocate a minimalist approach in API integration. +To build powerful composable applications, the above set of APIs is sufficient to perform +"event orchestration" where you write code to coordinate how the various functions work together as a +single "executable". Please refer to Chapter-4 for more details about event orchestration.
+Since Mercury is used in production installations, we will exercise our best effort to keep the core API stable.
+Other APIs in the toolkits are used internally to build the engine itself, and they may change from time to time. +They are mostly convenient methods and utilities. The engine is fully encapsulated and any internal API changes +are not likely to impact your applications.
+To further reduce coding effort, you can perform "event orchestration" by configuration using "Event Script". +It is available as an enterprise add-on module from Accenture.
+Mercury libraries are designed to co-exist with your favorite frameworks and tools. Inside a class implementing
+the LambdaFunction
, TypedLambdaFunction
or KotlinLambdaFunction
, you can use any coding style and frameworks
+as you like, including sequential, object-oriented and reactive programming styles.
Mercury version 3 has a built-in lightweight non-blocking HTTP server, but you can also use Spring Boot and other application server frameworks with it.
+A sample Spring Boot integration is provided in the "rest-spring" project. It is an optional feature, and you can +decide to use a regular Spring Boot application with Mercury or to pick the customized Spring Boot in the +"rest-spring" library.
+You can use the lambda-example
project as a template to start writing your own applications. It is preconfigured
+to support kernel threads, coroutine and suspend function.
This project is licensed under the Apache 2.0 open source license. We will update the public codebase after it passes regression tests and meets stability and performance benchmarks in our production systems.
+Mercury is developed as an engine for you to build the latest cloud native and composable applications. +While we are updating the technology frequently, the essential internals and the core APIs are stable.
+We are monitoring the progress of the upcoming Java 19 Virtual Thread feature and will include it in our API +when it becomes officially available.
+For enterprise clients, optional technical support is available. Please contact your Accenture representative
+for details.
+
| Chapter-8 | Home |
|---|---|
| Service mesh | Table of Contents |
Mercury version 3 is a toolkit for writing composable applications.
- Chapter 2 - Function Execution Strategies
- Chapter 4 - Event orchestration
- Chapter 5 - Build, test and deploy
- Appendix I - application.properties
+ + + +Reference engine for building "Composable architecture and applications".
+This project leverages the power of Java version 21 (LTS) virtual thread feature.
+Inside a user function, your code is running sequentially and synchronous request-response (RPC) calls +appear to be "blocking". However, your function is actually suspended when waiting for a response from +another function, database query or an external resource, thus reducing CPU consumption and +dramatically increasing application throughput to handle more transactions and users.
+If you need compatibility with older Java versions down to 1.8, please visit Mercury 3.0 at https://github.com/Accenture/mercury
+The Mercury project is created with one primary objective -
+to make software easy to write, read, test, deploy, scale and manage.
Mercury is the building block for writing event driven composable applications. +It is applicable for both green field system development and IT modernization projects.
+A legacy application may be modernized in 3 steps:
+Mercury can be used to decompose a large monolithic application into functional blocks where each legacy functional block can be encapsulated in a functional wrapper that is fully event driven. Once functional boundaries are identified, a set of functions can be grouped and deployed as a microservices unit. The deployed services can communicate with each other through events over the network. The end result is a set of cloud native services.
+Since each function is coupled with an event system, the amount of code within a functional wrapper is relatively +small and we can refactor or rewrite some logic using modern coding style. There is no restriction in coding +style inside a functional wrapper. You can write sequential code, object oriented code or reactive code.
+In this fashion, you can keep legacy code that is trouble free and focus your energy on re-inventing the application selectively and incrementally.
+With Mercury 3.1, you have complete control to precisely tune your application for optimal performance and throughput +using three function execution strategies:
+January 2024
+To get started with your first application, please refer to the Developer Guide.
+In cloud migration and IT modernization, we evaluate the application portfolio and recommend different disposition strategies based on the 7R migration methodology.
+7R: Retire, retain, re-host, re-platform, replace, re-architect and re-imagine.
+
+The most common observation during IT modernization discovery is that there are many complex monolithic applications +that are hard to modernize quickly.
+IT modernization is like moving into a new home. It would be the opportunity to clean up and to improve for +business agility and strategic competitiveness.
+Composable architecture is gaining visibility recently because it accelerates organization transformation towards +a cloud native future. We will discuss how we may reduce modernization risks with this approach.
+Composability applies to both platform and application levels.
+We can trace the root of composability to Service Oriented Architecture (SOA) in 2000 or a technical bulletin on +"Flow-Based Programming" by IBM in 1971. This is the idea that architecture and applications are built using +modular building blocks and each block is self-contained with predictable behavior.
+At the platform level, composable architecture refers to loosely coupled platform services, utilities, and +business applications. With modular design, you can assemble platform components and applications to create +new use cases or to adjust for ever-changing business environment and requirements. Domain driven design (DDD), +Command Query Responsibility Segregation (CQRS) and Microservices patterns are the popular tools that architects +use to build composable architecture. You may deploy applications in containers, serverless or other means.
+At the application level, a composable application means that an application is assembled from modular software
+components or functions that are self-contained and pluggable. You can mix-n-match functions to form new applications.
+You can retire outdated functions without adverse side effects to a production system. Multiple versions of a function
+can exist, and you can decide how to route user requests to different versions of a function. Applications become
+easier to design, develop, maintain, deploy, and scale.
+Composable architecture and applications contribute to business agility.
+Since 2014, the microservices architectural pattern has helped decompose big applications into smaller pieces of
+“self-contained” services. We also apply digital decoupling techniques to services and domains. Smaller is better.
+However, we are still writing code in the same old fashion: one method calls other methods directly. Functional and
+reactive programming techniques are means to run code in a non-blocking manner, for example Reactive Streams, Akka,
+Vertx, Quarkus Multi/Uni and Spring Reactive Flux/Mono. These are excellent tools, but they do not reduce the
+complexity of business applications.
+To make an application composable, the software components within a single application should be loosely coupled +where each component has zero or minimal dependencies.
+Unlike the traditional programming approach, a composable application is built from the top down. First, we describe
+a business transaction as an event flow. Second, from the event flow, we identify the individual functions for
+business logic. Third, we write a user story for each function and write its code in a self-contained manner.
+Finally, we write orchestration code to coordinate the event flow among the functions, so they work together
+as a single application.
+The individual functions become the building blocks for a composable application. We can mix-n-match different
+sets of functions to address different business use cases.
+Cloud native applications are deployed as containers or serverless functions. Ideally, they communicate using events. +For example, the CQRS design pattern is well accepted for building high performance cloud native applications.
+> Figure 1 - Cloud native applications use event streams to communicate
+
However, within a single application unit, the application is mostly built in a traditional way,
i.e. one function calls other functions and libraries directly, making the modules and libraries
tightly coupled. As a result, microservices may become smaller monolithic applications.
+To overcome this limitation, we can employ “event-driven design” to make the microservices application unit composable.
+An application unit is a collection of functions in memory and an “event bus” is the communication conduit to connect +the functions together to form a single executable.
+> Figure 2 – Functions use in-memory event bus to communicate
+
For a composable application, each function is written using the first principle of “input-process-output” where +input and output payloads are delivered as events. All input and output are immutable to reduce unintended bugs +and side effects.
+Since the input and output of each function are well-defined, test-driven development (TDD) comes naturally.
+It is also easier to define a user story for each function, and the developer does not need to study and integrate
+multiple levels of dependencies, resulting in higher quality code.
+> Figure 3 - The first principle of a function
+
What is a “function”? It is a discrete unit of business logic: for example, reading a record from a database and
performing some data transformation, or doing a calculation with a formula.
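Because each function follows the first principle of “input-process-output,” it can be written as a small self-contained unit and tested in isolation. A minimal sketch in plain Java; the discount rule and the class name are hypothetical, for illustration only, and are not part of Mercury:

```java
import java.util.Map;

// A self-contained "function": immutable input in, new output out.
// The discount rule below is hypothetical, for illustration only.
public class DiscountFunction {
    public static double handle(Map<String, Double> input) {
        double total = input.getOrDefault("total", 0.0);
        // process: 10% off orders of 100 or more
        return total >= 100.0 ? total * 0.9 : total;
    }
}
```

Because the input and output are fully defined by the signature, a unit test needs no mocks or framework setup.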
+> Figure 4 - Connecting output of one function to input of another
+
As shown in Figure 4, if function-1 wants to send a request to function-2, we can write “event orchestration code”
to put the output from function-1 into an event envelope and send it over an in-memory event bus. The event system
transports the event envelope to function-2, extracts the payload and submits it as “input” to function-2.
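The flow in Figure 4 can be sketched with a toy in-memory event bus. This is a minimal illustration only, not the Mercury API, and the route names are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiConsumer;

// Toy in-memory event bus: delivers a payload to the function
// registered under the target route.
class ToyEventBus {
    private final Map<String, BiConsumer<ToyEventBus, Object>> routes = new ConcurrentHashMap<>();

    void register(String route, BiConsumer<ToyEventBus, Object> fn) {
        routes.put(route, fn);
    }

    void send(String route, Object payload) {
        routes.get(route).accept(this, payload);  // payload becomes the "input"
    }
}

public class OrchestrationDemo {
    static String result;

    public static void main(String[] args) {
        ToyEventBus bus = new ToyEventBus();
        // function-2: receives the payload as its input
        bus.register("v1.function-2", (b, input) -> result = ((String) input).toUpperCase());
        // "event orchestration code": function-1 wraps its output and sends it onward
        bus.register("v1.function-1", (b, input) -> b.send("v1.function-2", input + " world"));
        bus.send("v1.function-1", "hello");
    }
}
```

The two functions never call each other directly; the only coupling between them is the event route name.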
+In event-driven application design, a function is executed when an event arrives as “input.” When a function +finishes processing, your application can command the event system to route the result set (“output”) as an +event to another function.
+> Figure 5 - Executing function through event flow
+
As shown in Figure 5, functions can send/receive events using an in-memory event bus (aka "event loop").
+This event-driven architecture provides the foundation to design and implement composable applications. +Each function is self-contained and loosely coupled by event flow.
+A function receiving an event needs to be executed. There are three ways to do that:
+
+1. Virtual thread
+2. Suspend function
+3. Kernel thread pool
+Many modern programming languages such as GoLang, Kotlin, Python and Node.js support “cooperative multitasking” +using “event loop” or “coroutine.” Instead of context switching at the kernel level, functions are executed orderly +by yielding to each other. The order of execution depends on the event flow of each business transaction.
+Since the functions run cooperatively, the overhead of context switching is low. “Event loop” or
+“coroutine” technology can usually support tens of thousands of “functions” running in “parallel.”
+Technically, the functions run sequentially; because each function finishes execution very quickly,
+they appear to run concurrently.
+In Java 19, the virtual thread feature was introduced as a preview feature.
+It is officially supported in Java 21 LTS.
+A virtual thread is more than a coroutine. It is comparable to the Kotlin suspend function that we will discuss below.
+However, it is implemented in the "Thread" API of the standard library, thereby reducing code complexity.
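A brief sketch of the standard Thread API with virtual threads (requires Java 21+; the task count and sleep time are illustrative). Thousands of tasks can block on simulated I/O concurrently without exhausting kernel threads:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Requires Java 21+. Each task gets its own cheap virtual thread; the
// blocking sleep parks the virtual thread, freeing its carrier (kernel) thread.
public class VirtualThreadDemo {
    public static int runBlockingTasks(int tasks) {
        AtomicInteger completed = new AtomicInteger();
        try (ExecutorService vt = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < tasks; i++) {
                vt.submit(() -> {
                    try {
                        Thread.sleep(50);  // simulated I/O wait
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        }  // close() waits for all submitted tasks to finish
        return completed.get();
    }
}
```

Running thousands of these blocking tasks completes in roughly the time of one sleep, because the waits overlap instead of occupying kernel threads.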
+> Figure 6 - Cooperative multitasking
+
+In a typical enterprise application, many functions spend most of their time waiting for responses.
+In preemptive multitasking, these functions occupy kernel threads and consume CPU time.
+Too many active kernel threads would slow the application to a crawl.
+“Suspend function” not only avoids overwhelming the CPU with excessive kernel threads but also turns
+synchronous request-response calls into high throughput non-blocking operations.
+As the name indicates, “suspend function” can be suspended and resumed. When it is suspended, it yields control +to the event loop so that other coroutines or suspend functions can run.
+In Node.js and GoLang, coroutine and suspend function are the same. Suspend function refers to the “async/await” +keywords or API of coroutine. In Kotlin, the suspend function extends a coroutine to have the suspend/resume ability.
+A function is suspended when it is waiting for a response from the network, a database or from another function. +It is resumed when a response is received.
+> Figure 7 - Improving throughput with suspend function
+
+As shown in Figure 7, a “suspend function” can suspend and resume multiple times during its execution.
+When it is suspended, it is not using any CPU time, so the application has more time to serve other functions.
+This mechanism is so efficient that it can significantly increase the throughput of the application,
+i.e. it can handle many concurrent users and process more requests.
+Java supports “preemptive multitasking” using kernel threads. Multiple functions can execute in parallel. +Preemptive multitasking leverages the multiple cores of a CPU to yield higher performance.
+Preemptive multitasking is performed at the kernel level, with the operating system doing the context switching.
+Because kernel-level context switching is expensive, the practical maximum number of kernel threads is small.
+As a rule of thumb, a moderately fast computer can support ~200 kernel threads.
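For computationally intensive work, a bounded kernel thread pool of roughly one worker per core is the right fit. A plain-Java sketch (standard `ExecutorService`, not Mercury's `KernelThreadRunner` mechanism) that splits a calculation across cores:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// CPU-bound work on a bounded kernel thread pool: roughly one worker per
// core keeps all cores busy without excessive context switching.
public class CpuBoundPool {
    public static long sumOfSquares(int n) {
        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            List<Future<Long>> parts = new ArrayList<>();
            for (int w = 0; w < workers; w++) {
                final int offset = w;
                // each worker sums a strided slice of 1..n
                parts.add(pool.submit(() -> {
                    long sum = 0;
                    for (long i = offset + 1; i <= n; i += workers) {
                        sum += i * i;
                    }
                    return sum;
                }));
            }
            long total = 0;
            for (Future<Long> part : parts) {
                total += part.get();
            }
            return total;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

Unlike the virtual-thread example, the pool size is deliberately capped: adding more kernel threads than cores would only increase context-switching overhead for CPU-bound work.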
+In Mercury 3.1, the default execution mode of a user function is "virtual thread". To use kernel thread pool for +a user function, you must add the "KernelThreadRunner" annotation explicitly.
+> Figure 8 - Multitasking of kernel threads at the hardware and OS level
+
+The ability to select an optimal function execution strategy for a function is critical to the success of a
+composable application. It gives the developer low-level control over how the application performs and scales.
+Without an optimal function execution strategy, performance tuning is usually an educated guess.
+In composable application architecture, each function is self-contained and stateless. We can predict the performance +of each function by selecting an optimal function execution strategy and evaluate it with unit tests and observability. +Predicting application performance and throughput at design and development time reduces modernization risks.
+The pros and cons of each function execution strategy are summarized below:
+| Strategy | Advantage | Disadvantage |
+|----------|-----------|--------------|
+| Virtual thread | Highest throughput in terms of concurrent users. Functionally similar to a suspend function. | N/A |
+| Suspend function | Sequential "non-blocking" RPC (request-response) that makes code easier to read and maintain | Requires coding in the Kotlin language |
+| Kernel threads | Highest performance in terms of operations per second | Lower number of concurrent threads due to high context-switching overhead |
As shown in the table above, performance and throughput are determined by function execution strategies.
+For example, single-threaded event-driven network proxies such as nginx support twenty times more concurrent
+connections than multithreaded application servers.
+If we simplify event-driven programming and support all three function execution strategies, we can design and +implement composable applications that deliver high performance and high throughput.
+The “virtual thread” feature in Java 21 is a good building block for function execution strategies. It is the most
+significant technological advancement for Java in recent years. It supports non-blocking sequential programming without
+explicitly using the “async” and “await” keywords. As a result, we believe that all current open source libraries
+that provide event loop functionality will evolve.
+To accelerate this evolution, we have implemented Mercury version 3.1 as an accelerator for building composable
+applications. It supports the two pillars of a composable application – an in-memory event bus and the selection of
+function execution strategies.
+It integrates with Eclipse Vertx to hide the complexity of event-driven programming and embraces the three function +execution strategies using virtual thread, suspend function and kernel thread pool.
+We can construct a composable application with self-contained functions that execute when events arrive. +There is a simple event API that we call the “Post Office” to support sequential non-blocking RPC, async, +drop and forget, callback, workflow, pipeline, streaming and interceptor patterns.
+Sequential non-blocking RPC reduces the effort in application modernization because we can directly port sequential +legacy code from a monolithic application to the new composable cloud native design.
+Earlier we discussed “event orchestration.” We have an accelerator called “Event Script” that provides
+“event orchestration” in configuration to eliminate the most tedious coding effort. Event Script creates a
+composable application in three steps: (1) the product owner and architect describe the business transaction as
+a flow of events, (2) the developer converts the flow chart into an event script and (3) the developer writes the
+individual functions for business logic. The system connects the various functions together and orchestrates the
+event flow as a single application.
+In traditional programming, we write code to call different methods and libraries. In event-driven
+programming, we write code to send events, and this is “event orchestration.” We can use events to make RPC calls
+just like in traditional programming. It is viable to port legacy orchestration logic into event orchestration code.
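An RPC call over events can be sketched with a toy request-response bus. This is an illustration of the pattern only, not Mercury's “Post Office” API, and the route name is hypothetical:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Toy request-response over events: the reply comes back through a future,
// so event-driven code reads like a sequential RPC from the caller's side.
class ToyRpcBus {
    private final Map<String, Function<Object, Object>> routes = new ConcurrentHashMap<>();

    void register(String route, Function<Object, Object> fn) {
        routes.put(route, fn);
    }

    CompletableFuture<Object> request(String route, Object payload) {
        // the handler's return value is routed back as the response event
        return CompletableFuture.supplyAsync(() -> routes.get(route).apply(payload));
    }
}

public class RpcDemo {
    public static Object call() {
        try {
            ToyRpcBus bus = new ToyRpcBus();
            bus.register("v1.quote", req -> "price of " + req + " is 42");
            // reads like a blocking RPC call in traditional programming
            return bus.request("v1.quote", "widget").get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

The caller and the handler stay decoupled by the route name, yet the call site keeps the familiar sequential request-response shape.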
+To further reduce coding effort, we can use Event Script to do “event orchestration.” This would replace code with +simple event flow configuration.
+Note: Event Script is outside the scope of this open source project.
+      Please contact your Accenture representative if you are interested in using
+      Event Script to further reduce coding effort for composable applications.
+
+The developer can use any coding style to write the individual functions, whether sequential, object-oriented
+or reactive. One may use any favorite frameworks or libraries. There are no restrictions.
+There is a learning curve in writing “event orchestration.” Since event orchestration supports sequential
+non-blocking RPC, the developer can port existing legacy code to the modern style with direct mapping.
+Typically, the learning curve is about two weeks; if you are familiar with event-driven programming, it
+would be shorter. To eliminate this learning curve, the developer may use Event Script, which replaces orchestration
+code with event flow configuration files. Event Script is designed to require virtually zero API integration for
+an exceptionally low learning curve.
+Composability applies to both platform and application levels. We can design and implement better cloud native +applications that are composable using event-driven design and the three function execution strategies.
+We can deliver applications that demonstrate both high performance and high throughput, an objective that has been
+technically challenging with traditional means. We can scientifically predict application performance and throughput
+at design and development time, thus saving time and ensuring consistent product quality.
+The composable approach also facilitates the migration of a monolithic application to cloud native by decomposing the
+application to the functional level and assembling the functions into microservices according to domain and functional
+boundaries. It reduces coding effort and application complexity, meaning lower project risk.
+Since Java has the largest set of enterprise-grade open source and commercial libraries, with easy access to a large
+pool of trained developers, the availability of virtual thread technology will keep Java the best option for
+application modernization and composable applications.
+This opens a new frontier of cloud native applications that are composable, scalable, and easy to maintain, +thus contributing to business agility.
+ +' + escapeHtml(summary) +'
' + noResultsText + '
'); + } +} + +function doSearch () { + var query = document.getElementById('mkdocs-search-query').value; + if (query.length > min_search_length) { + if (!window.Worker) { + displayResults(search(query)); + } else { + searchWorker.postMessage({query: query}); + } + } else { + // Clear results for short queries + displayResults([]); + } +} + +function initSearch () { + var search_input = document.getElementById('mkdocs-search-query'); + if (search_input) { + search_input.addEventListener("keyup", doSearch); + } + var term = getSearchTermFromLocation(); + if (term) { + search_input.value = term; + doSearch(); + } +} + +function onWorkerMessage (e) { + if (e.data.allowSearch) { + initSearch(); + } else if (e.data.results) { + var results = e.data.results; + displayResults(results); + } else if (e.data.config) { + min_search_length = e.data.config.min_search_length-1; + } +} + +if (!window.Worker) { + console.log('Web Worker API not supported'); + // load index in main thread + $.getScript(joinUrl(base_url, "search/worker.js")).done(function () { + console.log('Loaded worker'); + init(); + window.postMessage = function (msg) { + onWorkerMessage({data: msg}); + }; + }).fail(function (jqxhr, settings, exception) { + console.error('Could not load worker.js'); + }); +} else { + // Wrap search in a web worker + var searchWorker = new Worker(joinUrl(base_url, "search/worker.js")); + searchWorker.postMessage({init: true}); + searchWorker.onmessage = onWorkerMessage; +} diff --git a/docs/search/search_index.json b/docs/search/search_index.json new file mode 100644 index 00000000..eb4b8cdb --- /dev/null +++ b/docs/search/search_index.json @@ -0,0 +1 @@ +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Mercury 3.1 Reference engine for building \"Composable architecture and applications\". Pre-requisite This project leverages the power of Java version 21 (LTS) virtual thread feature. 
Inside a user function, your code is running sequentially and synchronous request-response (RPC) calls appear to be \"blocking\". However, your function is actually suspended when waiting for a response from another function, database query or an external resource, thus reducing CPU consumption and dramatically increasing application throughput to handle more transactions and users. If you need compatibility with lower Java version down to version 1.8, please visit Mercury 3.0 in https://github.com/Accenture/mercury Welcome to the Mercury project The Mercury project is created with one primary objective - to make software easy to write, read, test, deploy, scale and manage. Mercury is the building block for writing event driven composable applications. It is applicable for both green field system development and IT modernization projects. A legacy application may be modernized in 3 steps: Decompose Encapsulate Reinvent Mercury can be used to decompose a large monolithic application into functional blocks where each legacy functional block can be encapsulated in a functional wrapper that is fully event driven. Once functional boundary is identified, a set of functions can be grouped and deployed as a microservices unit. The deployed services can communicate with each other in events over the network. The end result is a set of cloud native services. Since each function is coupled with an event system, the amount of code within a functional wrapper is relatively small and we can refactor or rewrite some logic using modern coding style. There is no restriction in coding style inside a functional wrapper. You can write sequential code, object oriented code or reactive code. In this fashion, you can keep legacy code that is trouble free and focus your energy to re-invent the application selectively and incrementally. 
With Mercury 3.1, you have complete control to precisely tune your application for optimal performance and throughput using three function execution strategies: Virtual thread - this is the default execution style for a functional block in pure Java Suspend function - if you prefer coding in Kotlin, this allows you to define a function as a \"coroutine\" that is comparable to Java virtual thread. Kernel thread pool - you can define a function to run in a kernel thread pool for computationally intensive task or legacy code that must run in a kernel thread January 2024 Write your first composable application To get started with your first application, please refer to the Developer Guide . Introduction to composable architecture In cloud migration and IT modernization, we evaluate application portfolio and recommend different disposition strategies based on the 7R migration methodology. 7R: Retire, retain, re-host, re-platform, replace, re-architect and re-imagine. The most common observation during IT modernization discovery is that there are many complex monolithic applications that are hard to modernize quickly. IT modernization is like moving into a new home. It would be the opportunity to clean up and to improve for business agility and strategic competitiveness. Composable architecture is gaining visibility recently because it accelerates organization transformation towards a cloud native future. We will discuss how we may reduce modernization risks with this approach. Composability Composability applies to both platform and application levels. We can trace the root of composability to Service Oriented Architecture (SOA) in 2000 or a technical bulletin on \"Flow-Based Programming\" by IBM in 1971. This is the idea that architecture and applications are built using modular building blocks and each block is self-contained with predictable behavior. 
At the platform level, composable architecture refers to loosely coupled platform services, utilities, and business applications. With modular design, you can assemble platform components and applications to create new use cases or to adjust for ever-changing business environment and requirements. Domain driven design (DDD), Command Query Responsibility Segregation (CQRS) and Microservices patterns are the popular tools that architects use to build composable architecture. You may deploy applications in containers, serverless or other means. At the application level, a composable application means that an application is assembled from modular software components or functions that are self-contained and pluggable. You can mix-n-match functions to form new applications. You can retire outdated functions without adverse side effect to a production system. Multiple versions of a function can exist, and you can decide how to route user requests to different versions of a function. Applications would be easier to design, develop, maintain, deploy, and scale. Composable architecture and applications contribute to business agility. Building a composable application Microservices Since 2014, microservices architectural pattern helps to decompose a big application into smaller pieces of \u201cself-contained\u201d services. We also apply digital decoupling techniques to services and domains. Smaller is better. However, we are writing code in the same old fashion. One method is calling other methods directly. Functional and reactive programming techniques are means to run code in a non-blocking manner, for example Reactive Streams, Akka, Vertx, Quarkus Multi/Uni and Spring Reactive Flux/Mono. These are excellent tools, but they do not reduce the complexity of business applications. Composable application To make an application composable, the software components within a single application should be loosely coupled where each component has zero or minimal dependencies. 
Unlike traditional programming approach, composable application is built from the top down. First, we describe a business transaction as an event flow. Second, from the event flow, we identify individual functions for business logic. Third, we write user story for each function and write code in a self-contained manner. Finally, we write orchestration code to coordinate event flow among the functions, so they work together as a single application. The individual functions become the building block for a composable application. We can mix-n-match different sets of functions to address different business use cases. Event is the communication conduit Cloud native applications are deployed as containers or serverless functions. Ideally, they communicate using events. For example, the CQRS design pattern is well accepted for building high performance cloud native applications. Figure 1 - Cloud native applications use event streams to communicate However, within a single application unit, the application is mostly built in a traditional way. i.e. one function is calling other functions and libraries directly, thus making the modules and libraries tightly coupled. As a result, microservices may become smaller monolithic applications. To overcome this limitation, we can employ \u201cevent-driven design\u201d to make the microservices application unit composable. An application unit is a collection of functions in memory and an \u201cevent bus\u201d is the communication conduit to connect the functions together to form a single executable. Figure 2 \u2013 Functions use in-memory event bus to communicate In-memory event bus For a composable application, each function is written using the first principle of \u201cinput-process-output\u201d where input and output payloads are delivered as events. All input and output are immutable to reduce unintended bugs and side effects. 
Since input and output for each function is well-defined, test-driven development (TDD) can be done naturally. It is also easier to define a user story for each function and the developer does not need to study and integrate multiple levels of dependencies, resulting in higher quality code. Figure 3 - The first principle of a function What is a \u201cfunction\u201d? For example, reading a record from a database and performing some data transformation, doing a calculation with a formula, etc. Figure 4 - Connecting output of one function to input of another As shown in Figure 4, if function-1 wants to send a request to function-2, we can write \u201cevent orchestration code\u201d to put the output from function-1 into an event envelope and send it over an in-memory event bus. The event system will transport the event envelope to function-2, extract the payload and submit it as \u201cinput\u201d to function-2 Function execution strategy In event-driven application design, a function is executed when an event arrives as \u201cinput.\u201d When a function finishes processing, your application can command the event system to route the result set (\u201coutput\u201d) as an event to another function. Figure 5 - Executing function through event flow As shown in Figure 5, functions can send/receive events using an in-memory event bus (aka \"event loop\"). This event-driven architecture provides the foundation to design and implement composable applications. Each function is self-contained and loosely coupled by event flow. A function receiving an event needs to be executed. There are three ways to do that: Virtual thread Suspend function Kernel thread pool Virtual thread Many modern programming languages such as GoLang, Kotlin, Python and Node.js support \u201ccooperative multitasking\u201d using \u201cevent loop\u201d or \u201ccoroutine.\u201d Instead of context switching at the kernel level, functions are executed orderly by yielding to each other. 
The order of execution depends on the event flow of each business transaction. Since the functions are running cooperatively, the overheads of context switching are low. \u201cEvent loop\u201d or \u201cCoroutine\u201d technology usually can support tens of thousands of \u201cfunctions\u201d running in \u201cparallel.\u201d Technically, the functions are running sequentially. When each function finishes execution very quickly, they appear as running concurrently. In Java 19, the virtual thread feature was introduced as an experimental feature. It is officially supported in Java 21 LTS. Virtual thread is more than coroutine. It is comparable to Kotlin suspend function that we will discuss below. However, it is implemented in the \"Thread\" API of the standard library, therefore reducing code complexity. Figure 6 - Cooperative multitasking \u201cSuspend function\u201d In a typical enterprise application, many functions are waiting for responses most of the time. In preemptive multitasking, these functions are using kernel threads and consuming CPU time. Too many active kernel threads would turn the application into slow motion. \u201cSuspend function\u201d not only avoids overwhelming the CPU with excessive kernel threads but also leverages the synchronous request-response opportunity into high throughput non-blocking operation. As the name indicates, \u201csuspend function\u201d can be suspended and resumed. When it is suspended, it yields control to the event loop so that other coroutines or suspend functions can run. In Node.js and GoLang, coroutine and suspend function are the same. Suspend function refers to the \u201casync/await\u201d keywords or API of coroutine. In Kotlin, the suspend function extends a coroutine to have the suspend/resume ability. A function is suspended when it is waiting for a response from the network, a database or from another function. It is resumed when a response is received. 
Figure 7 - Improving throughput with suspend function As shown in Figure 8, a \u201csuspend function\u201d can suspend and resume multiple times during its execution. When it suspends, it is not using any CPU time, thus the application has more time to serve other functions. This mechanism is very efficient that it can significantly increase the throughput of the application. i.e. it can handle many concurrent users, and process more requests. Kernel thread pool Java supports \u201cpreemptive multitasking\u201d using kernel threads. Multiple functions can execute in parallel. Preemptive multitasking leverages the multiple cores of a CPU to yield higher performance. Preemptive multitasking is performed at the kernel level and the operating system is doing the context switching. As a result, the maximum number of kernel threads is small. As a rule of thumb, a moderately fast computer can support ~200 kernel threads. In Mercury 3.1, the default execution mode of a user function is \"virtual thread\". To use kernel thread pool for a user function, you must add the \"KernelThreadRunner\" annotation explicitly. Figure 8 - Multitasking of kernel threads at the hardware and OS level Performance and throughput The ability to select an optimal function execution strategy for a function is critical to the success of a composable application. This allows the developer to have low level control of how the application performs and scales. Without an optimal function execution strategy, performance tuning is usually an educated guess. In composable application architecture, each function is self-contained and stateless. We can predict the performance of each function by selecting an optimal function execution strategy and evaluate it with unit tests and observability. Predicting application performance and throughput at design and development time reduces modernization risks. 
The pros and cons of each function execution strategy are summarized below: Strategy Advantage Disadvantage Virtual thread Highest throughput in terms of concurrent users. Functionally similar to a suspend function. N/A Suspend function Sequential \"non-blocking\" for RPC (request-response) that makes code easier to read and maintain Requires coding in Kotlin language Kernel threads Highest performance in terms of operations per seconds Lower number of concurrent threads due to high context switching overheads As shown in the table above, performance and throughput are determined by function execution strategies. For example, single threaded event driven network proxies such as nginx support twenty times more concurrent connections than multithreading application servers. The best of both worlds If we simplify event-driven programming and support all three function execution strategies, we can design and implement composable applications that deliver high performance and high throughput. The \u201cvirtual thread\u201d feature in Java 21 is a good building block for function execution strategies. It is the most significant technological advancement for Java. It supports non-blocking sequential programming without explicitly using the \u201casync\u201d and \u201cawait\u201d keywords. As a result, we believe that all current open sources libraries that provide event loop functionality would evolve. To accelerate this evolution, we have implemented Mercury version 3.1 as an accelerator to build composable applications. It supports the two pillars of composable application \u2013 In-memory event bus and selection of function execution strategies. It integrates with Eclipse Vertx to hide the complexity of event-driven programming and embraces the three function execution strategies using virtual thread, suspend function and kernel thread pool. We can construct a composable application with self-contained functions that execute when events arrive. 
There is a simple event API that we call the \u201cPost Office\u201d to support sequential non-blocking RPC, async, drop and forget, callback, workflow, pipeline, streaming and interceptor patterns. Sequential non-blocking RPC reduces the effort in application modernization because we can directly port sequential legacy code from a monolithic application to the new composable cloud native design. Earlier we discussed \u201cevent orchestration.\u201d We have an accelerator called \u201cEvent Script\u201d that provides \u201cevent orchestration\u201d in configuration to eliminate the most tedious coding effort. Event Script creates a composable application in three steps: (1) the product owner and architect describe the business transaction as a flow of events, (2) the developer converts the flow chart into an event script and (3) the developer writes the individual functions for business logic. The system will connect the various functions together and orchestrate the event flow as a single application. What is \"event orchestration\"? In traditional programming, we write code to make calls to different methods and libraries. In event-driven programming, we write code to send events, and this is \u201cevent orchestration.\u201d We can use events to make RPC calls just like in traditional programming. It is viable to port legacy orchestration logic into event orchestration code. To further reduce coding effort, we can use Event Script to do \u201cevent orchestration.\u201d This would replace code with simple event flow configuration. Note: Event Script is outside the scope of this open source project. Please contact your Accenture representative if you are interested in using Event Script to further reduce coding effort for composable applications. How steep is the learning curve for a developer? The developer can use any coding style to write the individual functions, whether it is sequential, object-oriented, or reactive. One may use any favorite frameworks or libraries. 
There are no restrictions. There is a learning curve in writing \u201cevent orchestration.\u201d Since event orchestration supports sequential non-blocking RPC, the developer can port existing legacy code to the modern style with direct mapping. Typically, the learning curve is about two weeks. If you are familiar with event-driven programming, the learning curve would be lower. To eliminate this learning curve, the developer may use Event Script, which replaces orchestration code with event flow configuration files. Event Script is designed to require virtually zero API integration for an exceptionally low learning curve. Conclusion Composability applies to both platform and application levels. We can design and implement better cloud native applications that are composable using event-driven design and the three function execution strategies. We can deliver applications that demonstrate both high performance and high throughput, an objective that has been technically challenging with traditional means. We can scientifically predict application performance and throughput at design and development time, thus saving time and ensuring consistent product quality. The composable approach also facilitates the migration of monolithic applications to cloud native by decomposing each application to the functional level and assembling the functions into microservices according to domain and functional boundaries. It reduces coding effort and application complexity, meaning lower project risk. Since Java has the largest collection of enterprise-grade open source and commercial libraries with easy access to a large pool of trained developers, the availability of virtual thread technology would retain Java as the best option for application modernization and composable applications. 
This opens a new frontier of cloud native applications that are composable, scalable, and easy to maintain, thus contributing to business agility.","title":"Home"},{"location":"#mercury-31","text":"Reference engine for building \"Composable architecture and applications\".","title":"Mercury 3.1"},{"location":"#pre-requisite","text":"This project leverages the power of Java version 21 (LTS) virtual thread feature. Inside a user function, your code is running sequentially and synchronous request-response (RPC) calls appear to be \"blocking\". However, your function is actually suspended when waiting for a response from another function, database query or an external resource, thus reducing CPU consumption and dramatically increasing application throughput to handle more transactions and users. If you need compatibility with lower Java version down to version 1.8, please visit Mercury 3.0 in https://github.com/Accenture/mercury","title":"Pre-requisite"},{"location":"#welcome-to-the-mercury-project","text":"The Mercury project is created with one primary objective - to make software easy to write, read, test, deploy, scale and manage. Mercury is the building block for writing event driven composable applications. It is applicable for both green field system development and IT modernization projects. A legacy application may be modernized in 3 steps: Decompose Encapsulate Reinvent Mercury can be used to decompose a large monolithic application into functional blocks where each legacy functional block can be encapsulated in a functional wrapper that is fully event driven. Once functional boundary is identified, a set of functions can be grouped and deployed as a microservices unit. The deployed services can communicate with each other in events over the network. The end result is a set of cloud native services. 
Since each function is coupled with an event system, the amount of code within a functional wrapper is relatively small and we can refactor or rewrite some logic using modern coding style. There is no restriction in coding style inside a functional wrapper. You can write sequential code, object oriented code or reactive code. In this fashion, you can keep legacy code that is trouble free and focus your energy to re-invent the application selectively and incrementally. With Mercury 3.1, you have complete control to precisely tune your application for optimal performance and throughput using three function execution strategies: Virtual thread - this is the default execution style for a functional block in pure Java Suspend function - if you prefer coding in Kotlin, this allows you to define a function as a \"coroutine\" that is comparable to Java virtual thread. Kernel thread pool - you can define a function to run in a kernel thread pool for computationally intensive task or legacy code that must run in a kernel thread January 2024","title":"Welcome to the Mercury project"},{"location":"#write-your-first-composable-application","text":"To get started with your first application, please refer to the Developer Guide .","title":"Write your first composable application"},{"location":"#introduction-to-composable-architecture","text":"In cloud migration and IT modernization, we evaluate application portfolio and recommend different disposition strategies based on the 7R migration methodology. 7R: Retire, retain, re-host, re-platform, replace, re-architect and re-imagine. The most common observation during IT modernization discovery is that there are many complex monolithic applications that are hard to modernize quickly. IT modernization is like moving into a new home. It would be the opportunity to clean up and to improve for business agility and strategic competitiveness. 
Composable architecture is gaining visibility recently because it accelerates organization transformation towards a cloud native future. We will discuss how we may reduce modernization risks with this approach.","title":"Introduction to composable architecture"},{"location":"#composability","text":"Composability applies to both platform and application levels. We can trace the root of composability to Service Oriented Architecture (SOA) in 2000 or a technical bulletin on \"Flow-Based Programming\" by IBM in 1971. This is the idea that architecture and applications are built using modular building blocks and each block is self-contained with predictable behavior. At the platform level, composable architecture refers to loosely coupled platform services, utilities, and business applications. With modular design, you can assemble platform components and applications to create new use cases or to adjust for ever-changing business environment and requirements. Domain driven design (DDD), Command Query Responsibility Segregation (CQRS) and Microservices patterns are the popular tools that architects use to build composable architecture. You may deploy applications in containers, serverless or other means. At the application level, a composable application means that an application is assembled from modular software components or functions that are self-contained and pluggable. You can mix-n-match functions to form new applications. You can retire outdated functions without adverse side effect to a production system. Multiple versions of a function can exist, and you can decide how to route user requests to different versions of a function. Applications would be easier to design, develop, maintain, deploy, and scale. 
Composable architecture and applications contribute to business agility.","title":"Composability"},{"location":"#building-a-composable-application","text":"","title":"Building a composable application"},{"location":"#microservices","text":"Since 2014, microservices architectural pattern helps to decompose a big application into smaller pieces of \u201cself-contained\u201d services. We also apply digital decoupling techniques to services and domains. Smaller is better. However, we are writing code in the same old fashion. One method is calling other methods directly. Functional and reactive programming techniques are means to run code in a non-blocking manner, for example Reactive Streams, Akka, Vertx, Quarkus Multi/Uni and Spring Reactive Flux/Mono. These are excellent tools, but they do not reduce the complexity of business applications.","title":"Microservices"},{"location":"#composable-application","text":"To make an application composable, the software components within a single application should be loosely coupled where each component has zero or minimal dependencies. Unlike traditional programming approach, composable application is built from the top down. First, we describe a business transaction as an event flow. Second, from the event flow, we identify individual functions for business logic. Third, we write user story for each function and write code in a self-contained manner. Finally, we write orchestration code to coordinate event flow among the functions, so they work together as a single application. The individual functions become the building block for a composable application. We can mix-n-match different sets of functions to address different business use cases.","title":"Composable application"},{"location":"#event-is-the-communication-conduit","text":"Cloud native applications are deployed as containers or serverless functions. Ideally, they communicate using events. 
For example, the CQRS design pattern is well accepted for building high performance cloud native applications. Figure 1 - Cloud native applications use event streams to communicate However, within a single application unit, the application is mostly built in a traditional way. i.e. one function is calling other functions and libraries directly, thus making the modules and libraries tightly coupled. As a result, microservices may become smaller monolithic applications. To overcome this limitation, we can employ \u201cevent-driven design\u201d to make the microservices application unit composable. An application unit is a collection of functions in memory and an \u201cevent bus\u201d is the communication conduit to connect the functions together to form a single executable. Figure 2 \u2013 Functions use in-memory event bus to communicate","title":"Event is the communication conduit"},{"location":"#in-memory-event-bus","text":"For a composable application, each function is written using the first principle of \u201cinput-process-output\u201d where input and output payloads are delivered as events. All input and output are immutable to reduce unintended bugs and side effects. Since input and output for each function is well-defined, test-driven development (TDD) can be done naturally. It is also easier to define a user story for each function and the developer does not need to study and integrate multiple levels of dependencies, resulting in higher quality code. Figure 3 - The first principle of a function What is a \u201cfunction\u201d? For example, reading a record from a database and performing some data transformation, doing a calculation with a formula, etc. Figure 4 - Connecting output of one function to input of another As shown in Figure 4, if function-1 wants to send a request to function-2, we can write \u201cevent orchestration code\u201d to put the output from function-1 into an event envelope and send it over an in-memory event bus. 
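The Figure 4 pattern, where the output of one function becomes the input of another over an in-memory event bus, can be sketched in plain Java. This is an illustrative sketch only, not the Mercury API: the `EventBusDemo` class and the route names are hypothetical, and a real event bus would deliver events asynchronously.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal sketch of an in-memory event bus (illustrative only; not the
// actual Mercury API). Each function is registered under a route name
// and follows the input-process-output principle.
public class EventBusDemo {
    // route name -> self-contained function (input payload -> output payload)
    private static final Map<String, Function<String, String>> routes = new ConcurrentHashMap<>();

    static void register(String route, Function<String, String> f) {
        routes.put(route, f);
    }

    // deliver an event payload to the function registered under a route
    static String send(String route, String payload) {
        return routes.get(route).apply(payload);
    }

    public static String runDemo() {
        // function-1 transforms text; function-2 decorates it
        register("function-1", s -> s.toUpperCase());
        register("function-2", s -> "[" + s + "]");
        // connect output of function-1 to input of function-2 (Figure 4)
        String out1 = send("function-1", "hello");
        return send("function-2", out1);
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // prints [HELLO]
    }
}
```

Because each function only sees its own input and output, either route can be replaced or unit-tested in isolation, which is the point of the input-process-output principle.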
The event system will transport the event envelope to function-2, extract the payload and submit it as \u201cinput\u201d to function-2","title":"In-memory event bus"},{"location":"#function-execution-strategy","text":"In event-driven application design, a function is executed when an event arrives as \u201cinput.\u201d When a function finishes processing, your application can command the event system to route the result set (\u201coutput\u201d) as an event to another function. Figure 5 - Executing function through event flow As shown in Figure 5, functions can send/receive events using an in-memory event bus (aka \"event loop\"). This event-driven architecture provides the foundation to design and implement composable applications. Each function is self-contained and loosely coupled by event flow. A function receiving an event needs to be executed. There are three ways to do that: Virtual thread Suspend function Kernel thread pool","title":"Function execution strategy"},{"location":"#virtual-thread","text":"Many modern programming languages such as GoLang, Kotlin, Python and Node.js support \u201ccooperative multitasking\u201d using \u201cevent loop\u201d or \u201ccoroutine.\u201d Instead of context switching at the kernel level, functions are executed orderly by yielding to each other. The order of execution depends on the event flow of each business transaction. Since the functions are running cooperatively, the overheads of context switching are low. \u201cEvent loop\u201d or \u201cCoroutine\u201d technology usually can support tens of thousands of \u201cfunctions\u201d running in \u201cparallel.\u201d Technically, the functions are running sequentially. When each function finishes execution very quickly, they appear as running concurrently. In Java 19, the virtual thread feature was introduced as an experimental feature. It is officially supported in Java 21 LTS. Virtual thread is more than coroutine. 
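The cooperative behavior described above can be observed with the standard Java 21 `Thread` API alone. The sketch below (assuming a Java 21 runtime; it uses only `Executors.newVirtualThreadPerTaskExecutor` from the standard library) launches thousands of tasks that each "block" on a sleep; because they are virtual threads, the carrier kernel threads are released while the tasks are parked.

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Plain-Java (21+) illustration of suspend/resume with virtual threads:
// each task blocks on sleep, but only the virtual thread is parked,
// not a kernel thread.
public class VirtualThreadDemo {
    public static int runDemo(int tasks) throws InterruptedException {
        AtomicInteger done = new AtomicInteger();
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < tasks; i++) {
                exec.submit(() -> {
                    try {
                        // simulates waiting for a database or network response;
                        // the virtual thread suspends without holding a kernel thread
                        Thread.sleep(Duration.ofMillis(50));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    done.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to complete
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 10,000 concurrently "blocking" tasks would exhaust a kernel thread pool,
        // but virtual threads handle them comfortably
        System.out.println(runDemo(10_000));
    }
}
```

With ~200 kernel threads this workload would need many scheduling rounds; with virtual threads all tasks park concurrently and the whole batch finishes in roughly one sleep interval.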
It is comparable to the Kotlin suspend function that we will discuss below. However, it is implemented in the \"Thread\" API of the standard library, thereby reducing code complexity. Figure 6 - Cooperative multitasking","title":"Virtual thread"},{"location":"#suspend-function","text":"In a typical enterprise application, many functions are waiting for responses most of the time. In preemptive multitasking, these functions are using kernel threads and consuming CPU time. Too many active kernel threads would slow the application down significantly. \u201cSuspend function\u201d not only avoids overwhelming the CPU with excessive kernel threads but also turns synchronous request-response calls into high-throughput non-blocking operations. As the name indicates, a \u201csuspend function\u201d can be suspended and resumed. When it is suspended, it yields control to the event loop so that other coroutines or suspend functions can run. In Node.js and GoLang, coroutine and suspend function are the same. Suspend function refers to the \u201casync/await\u201d keywords or API of coroutine. In Kotlin, the suspend function extends a coroutine to have the suspend/resume ability. A function is suspended when it is waiting for a response from the network, a database or from another function. It is resumed when a response is received. Figure 7 - Improving throughput with suspend function As shown in Figure 7, a \u201csuspend function\u201d can suspend and resume multiple times during its execution. When it suspends, it is not using any CPU time, thus the application has more time to serve other functions. This mechanism is so efficient that it can significantly increase the throughput of the application, i.e. it can handle more concurrent users and process more requests.","title":"\u201cSuspend function\u201d"},{"location":"#kernel-thread-pool","text":"Java supports \u201cpreemptive multitasking\u201d using kernel threads. Multiple functions can execute in parallel. 
Preemptive multitasking leverages the multiple cores of a CPU to yield higher performance. Preemptive multitasking is performed at the kernel level and the operating system performs the context switching. As a result, the maximum number of kernel threads is small. As a rule of thumb, a moderately fast computer can support ~200 kernel threads. In Mercury 3.1, the default execution mode of a user function is \"virtual thread\". To use the kernel thread pool for a user function, you must add the \"KernelThreadRunner\" annotation explicitly. Figure 8 - Multitasking of kernel threads at the hardware and OS level","title":"Kernel thread pool"},{"location":"#performance-and-throughput","text":"The ability to select an optimal function execution strategy for a function is critical to the success of a composable application. This allows the developer to have low-level control of how the application performs and scales. Without an optimal function execution strategy, performance tuning is usually an educated guess. In composable application architecture, each function is self-contained and stateless. We can predict the performance of each function by selecting an optimal function execution strategy and evaluating it with unit tests and observability. Predicting application performance and throughput at design and development time reduces modernization risks. The pros and cons of each function execution strategy are summarized below: Strategy Advantage Disadvantage Virtual thread Highest throughput in terms of concurrent users. Functionally similar to a suspend function. N/A Suspend function Sequential \"non-blocking\" for RPC (request-response) that makes code easier to read and maintain Requires coding in the Kotlin language Kernel threads Highest performance in terms of operations per second Lower number of concurrent threads due to high context switching overhead As shown in the table above, performance and throughput are determined by function execution strategies. 
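The kernel-thread-pool strategy from the table above is normally applied to CPU-bound work. A minimal sketch, assuming nothing beyond the standard `java.util.concurrent` API (the `KernelPoolDemo` class and the workload are illustrative, not Mercury code): the pool is sized to the number of cores, because adding more kernel threads to compute-heavy work only increases context switching.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the kernel-thread-pool strategy: CPU-bound tasks are
// dispatched to a small fixed pool (about one kernel thread per core).
public class KernelPoolDemo {
    public static long runDemo() throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        try {
            List<Future<Long>> results = new ArrayList<>();
            for (int n = 1; n <= 8; n++) {
                final long upper = n * 1_000L;
                // computationally intensive task: sum of 1..upper
                results.add(pool.submit(() -> {
                    long sum = 0;
                    for (long i = 1; i <= upper; i++) sum += i;
                    return sum;
                }));
            }
            long total = 0;
            for (Future<Long> f : results) total += f.get(); // join all results
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo()); // prints 102018000
    }
}
```

This is the inverse trade-off of the virtual-thread strategy: fewer concurrent tasks, but each kernel thread keeps a CPU core busy with real computation.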
For example, single-threaded event-driven network proxies such as nginx can support twenty times more concurrent connections than multithreaded application servers.","title":"Performance and throughput"},{"location":"#the-best-of-both-worlds","text":"If we simplify event-driven programming and support all three function execution strategies, we can design and implement composable applications that deliver high performance and high throughput. The \u201cvirtual thread\u201d feature in Java 21 is a good building block for function execution strategies. It is arguably the most significant technological advancement for Java in recent years. It supports non-blocking sequential programming without explicitly using the \u201casync\u201d and \u201cawait\u201d keywords. As a result, we believe that all current open source libraries that provide event loop functionality will evolve. To accelerate this evolution, we have implemented Mercury version 3.1 as an accelerator to build composable applications. It supports the two pillars of composable applications \u2013 the in-memory event bus and the selection of function execution strategies. It integrates with Eclipse Vertx to hide the complexity of event-driven programming and embraces the three function execution strategies using virtual thread, suspend function and kernel thread pool. We can construct a composable application with self-contained functions that execute when events arrive. There is a simple event API that we call the \u201cPost Office\u201d to support sequential non-blocking RPC, async, drop and forget, callback, workflow, pipeline, streaming and interceptor patterns. Sequential non-blocking RPC reduces the effort in application modernization because we can directly port sequential legacy code from a monolithic application to the new composable cloud native design. 
Earlier we discussed \u201cevent orchestration.\u201d We have an accelerator called \u201cEvent Script\u201d that provides \u201cevent orchestration\u201d in configuration to eliminate the most tedious coding effort. Event Script creates a composable application in three steps: (1) the product owner and architect describe the business transaction as a flow of events, (2) the developer converts the flow chart into an event script and (3) the developer writes the individual functions for business logic. The system will connect the various functions together and orchestrate the event flow as a single application.","title":"The best of both worlds"},{"location":"#what-is-event-orchestration","text":"In traditional programming, we write code to make calls to different methods and libraries. In event-driven programming, we write code to send events, and this is \u201cevent orchestration.\u201d We can use events to make RPC calls just like in traditional programming. It is viable to port legacy orchestration logic into event orchestration code. To further reduce coding effort, we can use Event Script to do \u201cevent orchestration.\u201d This would replace code with simple event flow configuration. Note: Event Script is outside the scope of this open source project. Please contact your Accenture representative if you are interested in using Event Script to further reduce coding effort for composable applications.","title":"What is \"event orchestration\"?"},{"location":"#how-steep-is-the-learning-curve-for-a-developer","text":"The developer can use any coding style to write the individual functions, whether it is sequential, object-oriented, or reactive. One may use any favorite frameworks or libraries. There are no restrictions. There is a learning curve in writing \u201cevent orchestration.\u201d Since event orchestration supports sequential non-blocking RPC, the developer can port existing legacy code to the modern style with direct mapping. 
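Hand-written event orchestration can be sketched in plain Java as chained request-response events. This is an illustrative sketch, not the Mercury Post Office API or Event Script: the `OrchestrationDemo` class and the route names are hypothetical, and `CompletableFuture` stands in for a real non-blocking event system. Instead of one method calling another directly, the orchestrator sends an event to a named route and composes the replies.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of "event orchestration": the orchestrator sends events to
// named routes and chains the replies, instead of calling methods directly.
public class OrchestrationDemo {
    private static final Map<String, Function<String, String>> routes = new ConcurrentHashMap<>();

    static void register(String route, Function<String, String> f) {
        routes.put(route, f);
    }

    // non-blocking RPC: send an event to a route and get a future reply
    static CompletableFuture<String> request(String route, String payload) {
        return CompletableFuture.supplyAsync(() -> routes.get(route).apply(payload));
    }

    public static String runDemo() {
        // three self-contained functions for one business transaction
        register("validate.request", s -> s.trim());
        register("apply.business.rule", s -> s + "-approved");
        register("create.response", s -> "{\"status\":\"" + s + "\"}");
        // the event flow, expressed as sequential non-blocking steps
        return request("validate.request", "  order-1001  ")
                .thenCompose(v -> request("apply.business.rule", v))
                .thenCompose(r -> request("create.response", r))
                .join();
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // prints {"status":"order-1001-approved"}
    }
}
```

Because the steps are named routes rather than method calls, any step can be swapped, versioned or re-ordered without touching the others, which is what moving this chain into a flow configuration file would automate.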
Typically, the learning curve is about two weeks. If you are familiar with event-driven programming, the learning curve would be lower. To eliminate this learning curve, the developer may use Event Script, which replaces orchestration code with event flow configuration files. Event Script is designed to require virtually zero API integration for an exceptionally low learning curve.","title":"How steep is the learning curve for a developer?"},{"location":"#conclusion","text":"Composability applies to both platform and application levels. We can design and implement better cloud native applications that are composable using event-driven design and the three function execution strategies. We can deliver applications that demonstrate both high performance and high throughput, an objective that has been technically challenging with traditional means. We can scientifically predict application performance and throughput at design and development time, thus saving time and ensuring consistent product quality. The composable approach also facilitates the migration of monolithic applications to cloud native by decomposing each application to the functional level and assembling the functions into microservices according to domain and functional boundaries. It reduces coding effort and application complexity, meaning lower project risk. Since Java has the largest collection of enterprise-grade open source and commercial libraries with easy access to a large pool of trained developers, the availability of virtual thread technology would retain Java as the best option for application modernization and composable applications. This opens a new frontier of cloud native applications that are composable, scalable, and easy to maintain, thus contributing to business agility.","title":"Conclusion"},{"location":"CHANGELOG/","text":"Changelog Release notes All notable changes to this project will be documented in this file. The format is based on Keep a Changelog, and this project adheres to Semantic Versioning. 
For release notes before version 3.1, please refer to https://github.com/Accenture/mercury Version 3.1.1, 2/8/2024 Added AutoStart to run application as Spring Boot if the rest-spring-3 library is packaged in app Configurable \"Event over HTTP\" - automatically forward events over HTTP using a configuration Removed Bugfix: removed websocket client connection timeout that caused the first connection to drop after one minute Changed Open source library update (Spring Boot 3.2.2, Vertx 4.5.3 and MsgPack 0.9.8) Version 3.1.0, 1/5/2024 Added Full integration with Java 21 Virtual Thread Default execution mode is set to \"virtual thread\" KernelThreadRunner annotation added to provide optional support of kernel threads Removed Retired Spring Boot version 2 Hazelcast and ActiveMQ network connectors Changed platform-core engine updated with virtual thread Version 3.0.7, 12/23/2023 Added Print out basic JVM information before startup for verification of base container image. Removed Removed Maven Shade packager Changed Updated open source libraries to address security vulnerabilities Spring Boot 2/3 to version 2.7.18 and 3.2.1 respectively Tomcat 9.0.84 Vertx 4.5.1 Classgraph 4.8.165 Netty 4.1.104.Final slf4j API 2.0.9 log4j2 2.22.0 Kotlin 1.9.22 Artemis 2.31.2 Hazelcast 5.3.6 Guava 33.0.0-jre Version 3.0.6, 10/26/2023 Added Enhanced Benchmark tool to support \"Event over HTTP\" protocol to evaluate performance efficiency for communication between application containers using HTTP. Removed N/A Changed Updated open source libraries Spring Boot 2/3 to version 2.7.17 and 3.1.5 respectively Kafka-client 3.6.0 Version 3.0.5, 10/21/2023 Added Support for two executable JAR packaging systems: 1. Maven Shade packager 2. Spring Boot packager Starting from version 3.0.5, we have replaced the Spring Boot packager with Maven Shade. This avoids a classpath edge case for the Spring Boot packager when running kafka-client under Java 11 or higher. Maven Shade also results in a smaller executable JAR size. 
Removed N/A Changed Updated open source libraries Spring-Boot 2.7.16 / 3.1.4 classgraph 4.8.163 snakeyaml 2.2 kotlin 1.9.10 vertx 4.4.6 guava 32.1.3-jre msgpack 0.9.6 slf4j 2.0.9 zookeeper 3.7.2 The \"/info/lib\" admin endpoint has been enhanced to list library dependencies for executable JAR generated by either Maven Shade or Spring Boot Packager. Improved ConfigReader to recognize both \".yml\" and \".yaml\" extensions so that they are interchangeable. Version 3.0.4, 8/6/2023 Added N/A Removed N/A Changed Updated open source libraries Spring-Boot 2.7.14 / 3.1.2 Kafka-client 3.5.1 classgraph 4.8.161 guava 32.1.2-jre msgpack 0.9.5 Version 3.0.3, 6/27/2023 Added File extension to MIME type mapping for static HTML file handling Removed N/A Changed Open source library update - Kotlin version 1.9.0 Version 3.0.2, 6/9/2023 Added N/A Removed N/A Changed Consistent exception handling for Event API endpoint Open source lib update - Vertx 4.4.4, Spring Boot 2.7.13, Spring Boot 3.1.1, classgraph 4.8.160, guava 32.0.1-jre Version 3.0.1, 6/5/2023 In this release, we have replaced Google HTTP Client with the vertx non-blocking WebClient. We also tested compatibility up to OpenJDK version 20 and maven 3.9.2. Added When the \"x-raw-xml\" HTTP request header is set to \"true\", the AsyncHttpClient will skip the built-in XML serialization so that your application can retrieve the original XML text. Removed Retired Google HTTP client Changed Upgraded maven plugin versions. Version 3.0.0, 4/18/2023 This is a major release with some breaking changes. Please refer to Chapter-10 (Migration guide) for details. This version brings the best of preemptive and cooperative multitasking to Java (version 1.8 to 19) before the Java 19 virtual thread feature becomes officially available. 
Added Function execution engine supporting kernel thread pool, Kotlin coroutine and suspend function \"Event over HTTP\" service for inter-container communication Support for Spring Boot version 3 and WebFlux Sample code for a pre-configured Spring Boot 3 application Removed Remove blocking APIs from platform-core Retire PM2 process manager sample script due to compatibility issue Changed Refactor \"async.http.request\" to use vertx web client for non-blocking operation Update log4j2 version 2.20.0 and slf4j version 2.0.7 in platform-core Update JBoss RestEasy JAX_RS to version 3.15.6.Final in rest-spring Update vertx to 4.4.2 Update Spring Boot parent pom to 2.7.12 and 3.1.0 for spring boot 2 and 3 respectively Remove com.fasterxml.classmate dependency from rest-spring Version 2.8.0, 3/20/2023 Added N/A Removed N/A Changed Improved load balancing in cloud-connector Filter URI to avoid XSS attack Upgrade to SnakeYaml 2.0 and patch Spring Boot 2.6.8 for compatibility with it Upgrade to Vertx 4.4.0, classgraph 4.8.157, tomcat 9.0.73 Version 2.7.1, 12/22/2022 Added standalone benchmark report app client and server benchmark apps add timeout tag to RPC events Removed N/A Changed Updated open sources dependencies Netty 4.1.86.Final Tomcat 9.0.69 Vertx 4.3.6 classgraph 4.8.152 google-http-client 1.42.3 Improved unit tests to use assertThrows to evaluate exception Enhanced AsyncHttpRequest serialization Version 2.7.0, 11/11/2022 In this version, REST automation code is moved to platform-core such that REST and Websocket service can share the same port. Added AsyncObjectStreamReader is added for non-blocking read operation from an object stream. 
Support of LocalDateTime in SimpleMapper Add \"removeElement\" method to MultiLevelMap Automatically convert a map to a PoJo when the sender does not specify class in event body Removed N/A Changed REST automation becomes part of platform-core and it can co-exist with Spring Web in the rest-spring module Enforce Spring Boot lifecycle management such that user apps will start after Spring Boot has loaded all components Update netty to version 4.1.84.Final Version 2.6.0, 10/13/2022 In this version, websocket notification example code has been removed from the REST automation system. If your application uses this feature, please recover the code from version 2.5.0 and refactor it as a separate library. Added N/A Removed Simplify REST automation system by removing websocket notification example in REST automation. Changed Replace Tomcat websocket server with Vertx non-blocking websocket server library Update netty to version 4.1.79.Final Update kafka client to version 2.8.2 Update snake yaml to version 1.33 Update gson to version 2.9.1 Version 2.5.0, 9/10/2022 Added New Preload annotation class to automate pre-registration of LambdaFunction. Removed Removed Spring framework and Tomcat dependencies from platform-core so that the core library can be applied to legacy J2EE application without library conflict. Changed Bugfix for proper housekeeping of future events. Make Gson and MsgPack handling of integer/long consistent Updated open sources libraries. Eclipse vertx-core version 4.3.4 MsgPack version 0.9.3 Google httpclient version 1.42.2 SnakeYaml version 1.31 Version 2.3.6, 6/21/2022 Added Support more than one event stream cluster. User application can share the same event stream cluster for pub/sub or connect to an alternative cluster for pub/sub use cases. 
Removed N/A Changed Cloud connector libraries update to Hazelcast 5.1.2 Version 2.3.5, 5/30/2022 Added Add tagging feature to handle language connector's routing and exception handling Removed Remove language pack's pub/sub broadcast feature Changed Update Spring Boot parent to version 2.6.8 to fetch Netty 4.1.77 and Spring Framework 5.3.20 Streamlined language connector transport protocol for compatibility with both Python and Node.js Version 2.3.4, 5/14/2022 Added N/A Removed Remove swagger-ui distribution from api-playground such that developer can clone the latest version Changed Update application.properties (from spring.resources.static-locations to spring.web.resources.static-locations) Update log4j, Tomcat and netty library version using Spring parent 2.6.6 Version 2.3.3, 3/30/2022 Added Enhanced AsyncRequest to handle non-blocking fork-n-join Removed N/A Changed Upgrade Spring Boot from 2.6.3 to 2.6.6 Version 2.3.2, 2/21/2022 Added Add support of queue API in native pub/sub module for improved ESB compatibility Removed N/A Changed N/A Version 2.3.1, 2/19/2022 Added N/A Removed N/A Changed Update Vertx to version 4.2.4 Update Tomcat to version 9.0.58 Use Tomcat websocket server for presence monitors Bugfix - Simple Scheduler's leader election searches peers correctly Version 2.3.0, 1/28/2022 Added N/A Removed N/A Changed Update copyright notice Update Vertx to version 4.2.3 Bugfix - RSA key generator supporting key length from 1024 to 4096 bits CryptoAPI - support different AES algorithms and custom IV Update Spring Boot to version 2.6.3 Version 2.2.3, 12/29/2021 Added Transaction journaling Add parameter distributed.trace.aggregation in application.properties such that trace aggregation may be disabled. Removed N/A Changed Update JBoss RestEasy library to 3.15.3.Final Improved po.search(route) to scan local and remote service registries. Added \"remoteOnly\" selection. 
Fix bug in releasing presence monitor topic for specific closed user group Update Apache log4j to version 2.17.1 Update Spring Boot parent to version 2.6.1 Update Netty to version 4.1.72.Final Update Vertx to version 4.2.2 Convenient class \"UserNotification\" for backend service to publish events to the UI when REST automation is deployed Version 2.2.2, 11/12/2021 Added User defined API authentication functions can be selected using custom HTTP request header \"Exception chaining\" feature in EventEnvelope New \"deferred.commit.log\" parameter for backward compatibility with older PowerMock in unit tests Removed N/A Changed Improved and streamlined SimpleXmlParser to handle arrays Bugfix for file upload in Service Gateway (REST automation library) Update Tomcat library from 9.0.50 to 9.0.54 Update Spring Boot library to 2.5.6 Update GSON library to 2.8.9 Version 2.2.1, 10/1/2021 Added Callback function can implement ServiceExceptionHandler to catch exception. It adds the onError() method. Removed N/A Changed Open sources library update - Vert.x 4.1.3, Netty 4.1.68-Final Version 2.1.1, 9/10/2021 Added User defined PoJo and Generics mapping Standardized serializers for default case, snake_case and camelCase Support of EventEnvelope as input parameter in TypedLambdaFunction so application function can inspect event's metadata Application can subscribe to life cycle events of other application instances Removed N/A Changed Replace Tomcat websocket server engine with Vertx in presence monitor for higher performance Bugfix for MsgPack transport of integer, long, BigInteger and BigDecimal Version 2.1.0, 7/25/2021 Added Multicast - application can define a multicast.yaml config to relay events to more than one target service. 
StreamFunction - function that allows the application to control back-pressure Removed \"object.streams.io\" route is removed from platform-core Changed Elastic Queue - Refactored using Oracle Berkeley DB Object stream I/O - simplified design using the new StreamFunction feature Open sources library update - Spring Boot 2.5.2, Tomcat 9.0.50, Vert.x 4.1.1, Netty 4.1.66-Final Version 2.0.0, 5/5/2021 Vert.x is introduced as the in-memory event bus Added ActiveMQ and Tibco connectors Admin endpoints to stop, suspend and resume an application instance Handle edge case to detect stalled application instances Add \"isStreamingPubSub\" method to the PubSub interface Removed Event Node event stream emulator has been retired. You may use standalone Kafka server as a replacement for development and testing in your laptop. Multi-tenancy namespace configuration has been retired. It is replaced by the \"closed user group\" feature. Changed Refactored Kafka and Hazelcast connectors to support virtual topics and closed user groups. Updated ConfigReader to be consistent with Spring value substitution logic for application properties Replace Akka actor system with Vert.x event bus Common code for various cloud connectors consolidated into cloud core libraries Version 1.13.0, 1/15/2021 Version 1.13.0 is the last version that uses Akka as the in-memory event system. Version 1.12.66, 1/15/2021 Added A simple websocket notification service is integrated into the REST automation system Seamless migration feature is added to the REST automation system Removed Legacy websocket notification example application Changed N/A Version 1.12.65, 12/9/2020 Added \"kafka.pubsub\" is added as a cloud service File download example in the lambda-example project \"trace.log.header\" added to application.properties - when tracing is enabled, this inserts the trace-ID of the transaction in the log context. 
For more details, please refer to the Developer Guide Add API to pub/sub engine to support creation of topic with partitions TypedLambdaFunction is added so that developer can predefine input and output classes in a service without casting Removed N/A Changed Decouple Kafka pub/sub from kafka connector so that native pub/sub can be used when application is running in standalone mode Rename \"relay\" to \"targetHost\" in AsyncHttpRequest data model Enhanced routing table distribution by sending a complete list of route tables, thus reducing network admin traffic. Version 1.12.64, 9/28/2020 Added If a predictable topic is set, application instances will report their predictable topics as \"instance ID\" to the presence monitor. This improves visibility when a developer tests their application in \"hybrid\" mode, i.e. running the app locally and connecting to the cloud remotely for event streams and cloud resources. Removed N/A Changed N/A Version 1.12.63, 8/27/2020 Added N/A Removed N/A Changed Improved Kafka producer and consumer pairing Version 1.12.62, 8/12/2020 Added New presence monitor's admin endpoint for the operator to force routing table synchronization (\"/api/ping/now\") Removed N/A Changed Improved routing table integrity check Version 1.12.61, 8/8/2020 Added Event stream systems like Kafka assume a topic to be used long term. This version adds support to reuse the same topic when an application instance restarts. You can create a predictable topic using a unique application name and instance ID. For example, with Kubernetes, you can use the POD name as the unique application instance topic. Removed N/A Changed N/A Version 1.12.56, 8/4/2020 Added Automate trace for fork-n-join use case Removed N/A Changed N/A Version 1.12.55, 7/19/2020 Added N/A Removed N/A Changed Improved distributed trace - set the \"from\" address in EventEnvelope automatically. 
Version 1.12.54, 7/10/2020 Added N/A Removed N/A Changed Application life-cycle management - User provided main application(s) will be started after Spring Boot declares web application ready. This ensures correct Spring autowiring or dependencies are available. Bugfix for locale - String.format(float) returns a comma as the decimal point in some locales, which breaks the number parser. Replace with BigDecimal decimal point scaling. Bugfix for Tomcat 9.0.35 - Change Async servlet default timeout from 30 seconds to -1 so the system can handle the whole life-cycle directly. Version 1.12.52, 6/11/2020 Added new \"search\" method in Post Office to return a list of application instances for a service simple \"cron\" job scheduler as an extension project add \"sequence\" to MainApplication annotation for orderly execution when more than one MainApplication is available support \"Optional\" object in EventEnvelope so a LambdaFunction can read and return Optional Removed N/A Changed The rest-spring library has been updated to support both JAR and WAR deployment All pom.xml files updated accordingly PersistentWsClient will back off for 10 seconds when disconnected by remote host Version 1.12.50, 5/20/2020 Added Payload segmentation For a large payload in an event, the payload is automatically segmented into 64 KB segments. When there is more than one target application instance, the system ensures that the segments of the same event are delivered to exactly the same target. PersistentWsClient added - generalized persistent websocket client for Event Node, Kafka reporter and Hazelcast reporter. 
Removed N/A Changed Code cleaning to improve consistency Upgraded hibernate-validator to v6.1.5.Final and Hazelcast version 4.0.1 REST automation is provided as a library and an application to handle different use cases Version 1.12.40, 5/4/2020 Added N/A Removed N/A Changed For security reasons, upgrade log4j to version 2.13.2 Version 1.12.39, 5/3/2020 Added Use RestEasy JAX-RS library Removed For security reasons, removed Jersey JAX-RS library Changed Updated RestLoader to initialize RestEasy servlet dispatcher Support nested arrays in MultiLevelMap Version 1.12.36, 4/16/2020 Added N/A Removed For simplicity, retire route-substitution admin endpoint. Route substitution uses a simple static table in route-substitution.yaml. Changed N/A Version 1.12.35, 4/12/2020 Added N/A Removed SimpleRBAC class is retired Changed Improved ConfigReader and AppConfigReader with automatic key-value normalization for YAML and JSON files Improved pub/sub module in kafka-connector Version 1.12.34, 3/28/2020 Added N/A Removed Retired proprietary config manager since we can use the \"BeforeApplication\" approach to load config from Kubernetes configMap or other systems of config record. Changed Added \"isZero\" method to the SimpleMapper class Convert BigDecimal to string without scientific notation (i.e. toPlainString instead of toString) Corresponding unit tests added to verify behavior Version 1.12.32, 3/14/2020 Added N/A Removed N/A Changed Kafka-connector will shut down the application instance when the EventProducer cannot send events to Kafka. This would allow the infrastructure to restart the application instance automatically. Version 1.12.31, 2/26/2020 Added N/A Removed N/A Changed Kafka-connector now supports external service provider for Kafka properties and credentials. If your application implements a function with route name \"kafka.properties.provider\" before connecting to cloud, the kafka-connector will retrieve kafka credentials on demand. 
This addresses the case when Kafka credentials change after application start-up. Interceptors are designed to forward requests and thus they do not generate replies. However, if you implement a function as an EventInterceptor, your function can throw an exception just like a regular function and the exception will be returned to the calling function. This makes it easier to write interceptors. Version 1.12.30, 2/6/2020 Added Expose \"async.http.request\" as a PUBLIC function (\"HttpClient as a service\") Removed N/A Changed Improved Hazelcast client connection stability Improved Kafka native pub/sub Version 1.12.29, 1/10/2020 Added Rest-automation will transport X-Trace-Id from/to Http request/response, therefore extending distributed trace across systems that support the X-Trace-Id HTTP header. Added endpoint and service to shut down an application instance. Removed N/A Changed Updated SimpleXmlParser with XML External Entity (XXE) injection prevention. Bug fix for hazelcast recovery logic - when a hazelcast node is down, the app instance will restart the hazelcast client and reset routing table correctly. HSTS header insertion is optional so that we can disable it to avoid a duplicated header when the API gateway is doing it. Version 1.12.26, 1/4/2020 Added Feature to disable PoJo deserialization so that caller can decide if the result set should be in PoJo or a Map. 
Removed N/A Changed Simplified key management for Event Node AsyncHttpRequest case insensitivity for headers, cookies, path parameters and session key-values Make built-in configuration management optional Version 1.12.19, 12/28/2019 Added Added HTTP relay feature in rest-automation project Removed N/A Changed Improved hazelcast retry and peer discovery logic Refactored rest-automation's service gateway module to use AsyncHttpRequest Info endpoint to show routing table of a peer Version 1.12.17, 12/16/2019 Added Simple configuration management is added to event-node, hazelcast-presence and kafka-presence monitors Added BeforeApplication annotation - this allows user application to execute some setup logic before the main application starts. e.g. modifying parameters in application.properties Added API playground as a convenient standalone application to render OpenAPI 2.0 and 3.0 yaml and json files Added argument parser in rest-automation helper app to use a static HTML folder in the local file system if arguments -html file_path is given when starting the JAR file. Removed N/A Changed Kafka publisher timeout value changed from 10 to 20 seconds Log a warning when Kafka takes more than 5 seconds to send an event Version 1.12.14, 11/20/2019 Added getRoute() method is added to PostOffice to facilitate RBAC The route name of the current service is added to an outgoing event when the \"from\" field is not present Simple RBAC using YAML configuration instead of code Removed N/A Changed Updated Spring Boot to v2.2.1 Version 1.12.12, 10/26/2019 Added Multi-tenancy support for event streams (Hazelcast and Kafka). This allows the use of a single event stream cluster for multiple non-prod environments. For production, it must use a separate event stream cluster for security reason. 
Removed N/A Changed logging framework changed from logback to log4j2 (version 2.12.1) Use JSR-356 websocket annotated ClientEndpoint Improved websocket reconnection logic Version 1.12.9, 9/14/2019 Added Distributed tracing implemented in platform-core and rest-automation Improved HTTP header transformation for rest-automation Removed N/A Changed language pack API key obtained from environment variable Version 1.12.8, 8/15/2019 Added N/A Removed rest-core subproject has been merged with rest-spring Changed N/A Version 1.12.7, 7/15/2019 Added Periodic routing table integrity check (15 minutes) Set kafka read pointer to the beginning for new application instances except presence monitor REST automation helper application in the \"extensions\" project Support service discovery of multiple routes in the updated PostOffice's exists() method logback to set log level based on environment variable LOG_LEVEL (default is INFO) Removed N/A Changed Minor refactoring of kafka-connector and hazelcast-connector to ensure that they can coexist if you want to include both of these dependencies in your project. This is for convenience of dev and testing. In production, please select only one cloud connector library to reduce memory footprint. Version 1.12.4, 6/24/2019 Added Add inactivity expiry timer to ObjectStreamIO so that house-keeper can clean up resources that are idle Removed N/A Changed Disable HTML escape sequence for GSON serializer Bug fix for GSON serialization optimization Bug fix for Object Stream housekeeper By default, GSON serializer converts all numbers to double, resulting in unwanted decimal point for integer and long. To handle custom map serialization for correct representation of numbers, an unintended side effect was introduced in earlier releases. A list of inner PoJo would be incorrectly serialized as a map, resulting in a casting exception. This release resolves this issue. 
Version 1.12.1, 6/10/2019 Added Store-n-forward pub/sub API will be automatically enabled if the underlying cloud connector supports it. e.g. kafka ObjectStreamIO, a convenient wrapper class, to provide event stream I/O API. Object stream feature is now a standard feature instead of optional. Deferred delivery added to language connector. Removed N/A Changed N/A Version 1.11.40, 5/25/2019 Added Route substitution for simple versioning use case Add \"Strict Transport Security\" header if HTTPS (https://tools.ietf.org/html/rfc6797) Event stream connector for Kafka Distributed housekeeper feature for Hazelcast connector Removed System log service Changed Refactoring of Hazelcast event stream connector library to sync up with the new Kafka connector. Version 1.11.39, 4/30/2019 Added Language-support service application for Python, Node.js and Go, etc. Python language pack project is available at https://github.com/Accenture/mercury-python Removed N/A Changed replace Jackson serialization engine with Gson ( platform-core project) replace Apache HttpClient with Google Http Client ( rest-spring ) remove Jackson dependencies from Spring Boot ( rest-spring ) interceptor improvement Version 1.11.33, 3/25/2019 Added N/A Removed N/A Changed Move safe.data.models validation rules from EventEnvelope to SimpleMapper Apache fluent HTTP client downgraded to version 4.5.6 because the pom file in 4.5.7 is invalid Version 1.11.30, 3/7/2019 Added Added retry logic in persistent queue when OS cannot update local file metadata in real-time for Windows based machine. Removed N/A Changed pom.xml changes - update with latest 3rd party open sources dependencies. Version 1.11.29, 1/25/2019 Added platform-core Support for long running functions so that any long queries will not block the rest of the system. \"safe.data.models\" is available as an option in the application.properties. This is an additional security measure to protect against Jackson deserialization vulnerability. 
See example below: # # additional security to protect against model injection # comma separated list of model packages that are considered safe to be used for object deserialization # #safe.data.models=com.accenture.models rest-spring \"/env\" endpoint is added. See sample application.properties below: # # environment and system properties to be exposed to the \"/env\" admin endpoint # show.env.variables=USER, TEST show.application.properties=server.port, cloud.connector Removed N/A Changed platform-core Use Java Future and an elastic cached thread pool for executing user functions. Fixed N/A Version 1.11.28, 12/20/2018 Added Hazelcast support is added. This includes two projects (hazelcast-connector and hazelcast-presence). Hazelcast-connector is a cloud connector library. Hazelcast-presence is the \"Presence Monitor\" for monitoring the presence status of each application instance. Removed platform-core The \"fixed resource manager\" feature is removed because the same outcome can be achieved at the application level. e.g. The application can broadcast requests to multiple application instances with the same route name and use a callback function to receive response asynchronously. The services can provide resource metrics so that the caller can decide which is the most available instance to contact. For simplicity, resources management is better left to the cloud platform or the application itself. Changed N/A Fixed N/A","title":"Release notes"},{"location":"CHANGELOG/#changelog","text":"","title":"Changelog"},{"location":"CHANGELOG/#release-notes","text":"All notable changes to this project will be documented in this file. The format is based on Keep a Changelog , and this project adheres to Semantic Versioning . 
For release notes before version 3.1, please refer to https://github.com/Accenture/mercury","title":"Release notes"},{"location":"CHANGELOG/#version-311-282023","text":"","title":"Version 3.1.1, 2/8/2024"},{"location":"CHANGELOG/#added","text":"AutoStart to run application as Spring Boot if the rest-spring-3 library is packaged in app Configurable \"Event over HTTP\" - automatically forward events over HTTP using a configuration","title":"Added"},{"location":"CHANGELOG/#removed","text":"Bugfix: removed websocket client connection timeout that caused the first connection to drop after one minute","title":"Removed"},{"location":"CHANGELOG/#changed","text":"Open sources library update (Spring Boot 3.2.2, Vertx 4.5.3 and MsgPack 0.9.8)","title":"Changed"},{"location":"CHANGELOG/#version-310-152023","text":"","title":"Version 3.1.0, 1/5/2024"},{"location":"CHANGELOG/#added_1","text":"Full integration with Java 21 Virtual Thread Default execution mode is set to \"virtual thread\" KernelThreadRunner annotation added to provide optional support of kernel threads","title":"Added"},{"location":"CHANGELOG/#removed_1","text":"Retired Spring Boot version 2 Hazelcast and ActiveMQ network connectors","title":"Removed"},{"location":"CHANGELOG/#changed_1","text":"platform-core engine updated with virtual thread","title":"Changed"},{"location":"CHANGELOG/#version-307-12232023","text":"","title":"Version 3.0.7, 12/23/2023"},{"location":"CHANGELOG/#added_2","text":"Print out basic JVM information before startup for verification of base container image.","title":"Added"},{"location":"CHANGELOG/#removed_2","text":"Removed Maven Shade packager","title":"Removed"},{"location":"CHANGELOG/#changed_2","text":"Updated open sources libraries to address security vulnerabilities Spring Boot 2/3 to version 2.7.18 and 3.2.1 respectively Tomcat 9.0.84 Vertx 4.5.1 Classgraph 4.8.165 Netty 4.1.104.Final slf4j API 2.0.9 log4j2 2.22.0 Kotlin 1.9.22 Artemis 2.31.2 Hazelcast 5.3.6 Guava 
33.0.0-jre","title":"Changed"},{"location":"CHANGELOG/#version-306-10262023","text":"","title":"Version 3.0.6, 10/26/2023"},{"location":"CHANGELOG/#added_3","text":"Enhanced Benchmark tool to support \"Event over HTTP\" protocol to evaluate performance efficiency for communication between application containers using HTTP.","title":"Added"},{"location":"CHANGELOG/#removed_3","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_3","text":"Updated open sources libraries Spring Boot 2/3 to version 2.7.17 and 3.1.5 respectively Kafka-client 3.6.0","title":"Changed"},{"location":"CHANGELOG/#version-305-10212023","text":"","title":"Version 3.0.5, 10/21/2023"},{"location":"CHANGELOG/#added_4","text":"Support two executable JAR packaging systems: 1. Maven Shade packager 2. Spring Boot packager Starting from version 3.0.5, we have replaced Spring Boot packager with Maven Shade. This avoids a classpath edge case for Spring Boot packager when running kafka-client under Java 11 or higher. Maven Shade also results in smaller executable JAR size.","title":"Added"},{"location":"CHANGELOG/#removed_4","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_4","text":"Updated open sources libraries Spring-Boot 2.7.16 / 3.1.4 classgraph 4.8.163 snakeyaml 2.2 kotlin 1.9.10 vertx 4.4.6 guava 32.1.3-jre msgpack 0.9.6 slf4j 2.0.9 zookeeper 3.7.2 The \"/info/lib\" admin endpoint has been enhanced to list library dependencies for executable JAR generated by either Maven Shade or Spring Boot Packager. 
Improved ConfigReader to recognize both \".yml\" and \".yaml\" extensions and their uses are interchangeable.","title":"Changed"},{"location":"CHANGELOG/#version-304-862023","text":"","title":"Version 3.0.4, 8/6/2023"},{"location":"CHANGELOG/#added_5","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_5","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_5","text":"Updated open sources libraries Spring-Boot 2.7.14 / 3.1.2 Kafka-client 3.5.1 classgraph 4.8.161 guava 32.1.2-jre msgpack 0.9.5","title":"Changed"},{"location":"CHANGELOG/#version-303-6272023","text":"","title":"Version 3.0.3, 6/27/2023"},{"location":"CHANGELOG/#added_6","text":"File extension to MIME type mapping for static HTML file handling","title":"Added"},{"location":"CHANGELOG/#removed_6","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_6","text":"Open sources library update - Kotlin version 1.9.0","title":"Changed"},{"location":"CHANGELOG/#version-302-692023","text":"","title":"Version 3.0.2, 6/9/2023"},{"location":"CHANGELOG/#added_7","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_7","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_7","text":"Consistent exception handling for Event API endpoint Open sources lib update - Vertx 4.4.4, Spring Boot 2.7.13, Spring Boot 3.1.1, classgraph 4.8.160, guava 32.0.1-jre","title":"Changed"},{"location":"CHANGELOG/#version-301-652023","text":"In this release, we have replaced Google HTTP Client with vertx non-blocking WebClient. 
We also tested compatibility up to OpenJDK version 20 and maven 3.9.2.","title":"Version 3.0.1, 6/5/2023"},{"location":"CHANGELOG/#added_8","text":"When \"x-raw-xml\" HTTP request header is set to \"true\", the AsyncHttpClient will skip the built-in XML serialization so that your application can retrieve the original XML text.","title":"Added"},{"location":"CHANGELOG/#removed_8","text":"Retire Google HTTP client","title":"Removed"},{"location":"CHANGELOG/#changed_8","text":"Upgrade maven plugin versions.","title":"Changed"},{"location":"CHANGELOG/#version-300-4182023","text":"This is a major release with some breaking changes. Please refer to Chapter-10 (Migration guide) for details. This version brings the best of preemptive and cooperative multitasking to Java (version 1.8 to 19) before the Java 19 virtual thread feature becomes officially available.","title":"Version 3.0.0, 4/18/2023"},{"location":"CHANGELOG/#added_9","text":"Function execution engine supporting kernel thread pool, Kotlin coroutine and suspend function \"Event over HTTP\" service for inter-container communication Support for Spring Boot version 3 and WebFlux Sample code for a pre-configured Spring Boot 3 application","title":"Added"},{"location":"CHANGELOG/#removed_9","text":"Remove blocking APIs from platform-core Retire PM2 process manager sample script due to compatibility issue","title":"Removed"},{"location":"CHANGELOG/#changed_9","text":"Refactor \"async.http.request\" to use vertx web client for non-blocking operation Update log4j2 version 2.20.0 and slf4j version 2.0.7 in platform-core Update JBoss RestEasy JAX_RS to version 3.15.6.Final in rest-spring Update vertx to 4.4.2 Update Spring Boot parent pom to 2.7.12 and 3.1.0 for spring boot 2 and 3 respectively Remove com.fasterxml.classmate dependency from rest-spring","title":"Changed"},{"location":"CHANGELOG/#version-280-3202023","text":"","title":"Version 2.8.0, 
3/20/2023"},{"location":"CHANGELOG/#added_10","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_10","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_10","text":"Improved load balancing in cloud-connector Filter URI to avoid XSS attack Upgrade to SnakeYaml 2.0 and patch Spring Boot 2.6.8 for compatibility with it Upgrade to Vertx 4.4.0, classgraph 4.8.157, tomcat 9.0.73","title":"Changed"},{"location":"CHANGELOG/#version-271-12222022","text":"","title":"Version 2.7.1, 12/22/2022"},{"location":"CHANGELOG/#added_11","text":"standalone benchmark report app client and server benchmark apps add timeout tag to RPC events","title":"Added"},{"location":"CHANGELOG/#removed_11","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_11","text":"Updated open sources dependencies Netty 4.1.86.Final Tomcat 9.0.69 Vertx 4.3.6 classgraph 4.8.152 google-http-client 1.42.3 Improved unit tests to use assertThrows to evaluate exception Enhanced AsyncHttpRequest serialization","title":"Changed"},{"location":"CHANGELOG/#version-270-11112022","text":"In this version, REST automation code is moved to platform-core such that REST and Websocket service can share the same port.","title":"Version 2.7.0, 11/11/2022"},{"location":"CHANGELOG/#added_12","text":"AsyncObjectStreamReader is added for non-blocking read operation from an object stream. 
Support of LocalDateTime in SimpleMapper Add \"removeElement\" method to MultiLevelMap Automatically convert a map to a PoJo when the sender does not specify class in event body","title":"Added"},{"location":"CHANGELOG/#removed_12","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_12","text":"REST automation becomes part of platform-core and it can co-exist with Spring Web in the rest-spring module Enforce Spring Boot lifecycle management such that user apps will start after Spring Boot has loaded all components Update netty to version 4.1.84.Final","title":"Changed"},{"location":"CHANGELOG/#version-260-10132022","text":"In this version, websocket notification example code has been removed from the REST automation system. If your application uses this feature, please recover the code from version 2.5.0 and refactor it as a separate library.","title":"Version 2.6.0, 10/13/2022"},{"location":"CHANGELOG/#added_13","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_13","text":"Simplify REST automation system by removing websocket notification example in REST automation.","title":"Removed"},{"location":"CHANGELOG/#changed_13","text":"Replace Tomcat websocket server with Vertx non-blocking websocket server library Update netty to version 4.1.79.Final Update kafka client to version 2.8.2 Update snake yaml to version 1.33 Update gson to version 2.9.1","title":"Changed"},{"location":"CHANGELOG/#version-250-9102022","text":"","title":"Version 2.5.0, 9/10/2022"},{"location":"CHANGELOG/#added_14","text":"New Preload annotation class to automate pre-registration of LambdaFunction.","title":"Added"},{"location":"CHANGELOG/#removed_14","text":"Removed Spring framework and Tomcat dependencies from platform-core so that the core library can be applied to legacy J2EE application without library conflict.","title":"Removed"},{"location":"CHANGELOG/#changed_14","text":"Bugfix for proper housekeeping of future events. 
Make Gson and MsgPack handling of integer/long consistent Updated open sources libraries. Eclipse vertx-core version 4.3.4 MsgPack version 0.9.3 Google httpclient version 1.42.2 SnakeYaml version 1.31","title":"Changed"},{"location":"CHANGELOG/#version-236-6212022","text":"","title":"Version 2.3.6, 6/21/2022"},{"location":"CHANGELOG/#added_15","text":"Support more than one event stream cluster. User application can share the same event stream cluster for pub/sub or connect to an alternative cluster for pub/sub use cases.","title":"Added"},{"location":"CHANGELOG/#removed_15","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_15","text":"Cloud connector libraries update to Hazelcast 5.1.2","title":"Changed"},{"location":"CHANGELOG/#version-235-5302022","text":"","title":"Version 2.3.5, 5/30/2022"},{"location":"CHANGELOG/#added_16","text":"Add tagging feature to handle language connector's routing and exception handling","title":"Added"},{"location":"CHANGELOG/#removed_16","text":"Remove language pack's pub/sub broadcast feature","title":"Removed"},{"location":"CHANGELOG/#changed_16","text":"Update Spring Boot parent to version 2.6.8 to fetch Netty 4.1.77 and Spring Framework 5.3.20 Streamlined language connector transport protocol for compatibility with both Python and Node.js","title":"Changed"},{"location":"CHANGELOG/#version-234-5142022","text":"","title":"Version 2.3.4, 5/14/2022"},{"location":"CHANGELOG/#added_17","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_17","text":"Remove swagger-ui distribution from api-playground such that developer can clone the latest version","title":"Removed"},{"location":"CHANGELOG/#changed_17","text":"Update application.properties (from spring.resources.static-locations to spring.web.resources.static-locations) Update log4j, Tomcat and netty library version using Spring parent 2.6.6","title":"Changed"},{"location":"CHANGELOG/#version-233-3302022","text":"","title":"Version 2.3.3, 
3/30/2022"},{"location":"CHANGELOG/#added_18","text":"Enhanced AsyncRequest to handle non-blocking fork-n-join","title":"Added"},{"location":"CHANGELOG/#removed_18","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_18","text":"Upgrade Spring Boot from 2.6.3 to 2.6.6","title":"Changed"},{"location":"CHANGELOG/#version-232-2212022","text":"","title":"Version 2.3.2, 2/21/2022"},{"location":"CHANGELOG/#added_19","text":"Add support of queue API in native pub/sub module for improved ESB compatibility","title":"Added"},{"location":"CHANGELOG/#removed_19","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_19","text":"N/A","title":"Changed"},{"location":"CHANGELOG/#version-231-2192022","text":"","title":"Version 2.3.1, 2/19/2022"},{"location":"CHANGELOG/#added_20","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_20","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_20","text":"Update Vertx to version 4.2.4 Update Tomcat to version 9.0.58 Use Tomcat websocket server for presence monitors Bugfix - Simple Scheduler's leader election searches peers correctly","title":"Changed"},{"location":"CHANGELOG/#version-230-1282022","text":"","title":"Version 2.3.0, 1/28/2022"},{"location":"CHANGELOG/#added_21","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_21","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_21","text":"Update copyright notice Update Vertx to version 4.2.3 Bugfix - RSA key generator supporting key length from 1024 to 4096 bits CryptoAPI - support different AES algorithms and custom IV Update Spring Boot to version 2.6.3","title":"Changed"},{"location":"CHANGELOG/#version-223-12292021","text":"","title":"Version 2.2.3, 12/29/2021"},{"location":"CHANGELOG/#added_22","text":"Transaction journaling Add parameter distributed.trace.aggregation in application.properties such that trace aggregation may be 
disabled.","title":"Added"},{"location":"CHANGELOG/#removed_22","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_22","text":"Update JBoss RestEasy library to 3.15.3.Final Improved po.search(route) to scan local and remote service registries. Added \"remoteOnly\" selection. Fix bug in releasing presence monitor topic for specific closed user group Update Apache log4j to version 2.17.1 Update Spring Boot parent to version 2.6.1 Update Netty to version 4.1.72.Final Update Vertx to version 4.2.2 Convenient class \"UserNotification\" for backend service to publish events to the UI when REST automation is deployed","title":"Changed"},{"location":"CHANGELOG/#version-222-11122021","text":"","title":"Version 2.2.2, 11/12/2021"},{"location":"CHANGELOG/#added_23","text":"User defined API authentication functions can be selected using custom HTTP request header \"Exception chaining\" feature in EventEnvelope New \"deferred.commit.log\" parameter for backward compatibility with older PowerMock in unit tests","title":"Added"},{"location":"CHANGELOG/#removed_23","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_23","text":"Improved and streamlined SimpleXmlParser to handle arrays Bugfix for file upload in Service Gateway (REST automation library) Update Tomcat library from 9.0.50 to 9.0.54 Update Spring Boot library to 2.5.6 Update GSON library to 2.8.9","title":"Changed"},{"location":"CHANGELOG/#version-221-1012021","text":"","title":"Version 2.2.1, 10/1/2021"},{"location":"CHANGELOG/#added_24","text":"Callback function can implement ServiceExceptionHandler to catch exception. 
It adds the onError() method.","title":"Added"},{"location":"CHANGELOG/#removed_24","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_24","text":"Open sources library update - Vert.x 4.1.3, Netty 4.1.68-Final","title":"Changed"},{"location":"CHANGELOG/#version-211-9102021","text":"","title":"Version 2.1.1, 9/10/2021"},{"location":"CHANGELOG/#added_25","text":"User defined PoJo and Generics mapping Standardized serializers for default case, snake_case and camelCase Support of EventEnvelope as input parameter in TypedLambdaFunction so application function can inspect event's metadata Application can subscribe to life cycle events of other application instances","title":"Added"},{"location":"CHANGELOG/#removed_25","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_25","text":"Replace Tomcat websocket server engine with Vertx in presence monitor for higher performance Bugfix for MsgPack transport of integer, long, BigInteger and BigDecimal","title":"Changed"},{"location":"CHANGELOG/#version-210-7252021","text":"","title":"Version 2.1.0, 7/25/2021"},{"location":"CHANGELOG/#added_26","text":"Multicast - application can define a multicast.yaml config to relay events to more than one target service. 
StreamFunction - function that allows the application to control back-pressure","title":"Added"},{"location":"CHANGELOG/#removed_26","text":"\"object.streams.io\" route is removed from platform-core","title":"Removed"},{"location":"CHANGELOG/#changed_26","text":"Elastic Queue - Refactored using Oracle Berkeley DB Object stream I/O - simplified design using the new StreamFunction feature Open sources library update - Spring Boot 2.5.2, Tomcat 9.0.50, Vert.x 4.1.1, Netty 4.1.66-Final","title":"Changed"},{"location":"CHANGELOG/#version-200-552021","text":"Vert.x is introduced as the in-memory event bus","title":"Version 2.0.0, 5/5/2021"},{"location":"CHANGELOG/#added_27","text":"ActiveMQ and Tibco connectors Admin endpoints to stop, suspend and resume an application instance Handle edge case to detect stalled application instances Add \"isStreamingPubSub\" method to the PubSub interface","title":"Added"},{"location":"CHANGELOG/#removed_27","text":"Event Node event stream emulator has been retired. You may use standalone Kafka server as a replacement for development and testing in your laptop. Multi-tenancy namespace configuration has been retired. It is replaced by the \"closed user group\" feature.","title":"Removed"},{"location":"CHANGELOG/#changed_27","text":"Refactored Kafka and Hazelcast connectors to support virtual topics and closed user groups. 
Updated ConfigReader to be consistent with Spring value substitution logic for application properties Replace Akka actor system with Vert.x event bus Common code for various cloud connectors consolidated into cloud core libraries","title":"Changed"},{"location":"CHANGELOG/#version-1130-1152021","text":"Version 1.13.0 is the last version that uses Akka as the in-memory event system.","title":"Version 1.13.0, 1/15/2021"},{"location":"CHANGELOG/#version-11266-1152021","text":"","title":"Version 1.12.66, 1/15/2021"},{"location":"CHANGELOG/#added_28","text":"A simple websocket notification service is integrated into the REST automation system Seamless migration feature is added to the REST automation system","title":"Added"},{"location":"CHANGELOG/#removed_28","text":"Legacy websocket notification example application","title":"Removed"},{"location":"CHANGELOG/#changed_28","text":"N/A","title":"Changed"},{"location":"CHANGELOG/#version-11265-1292020","text":"","title":"Version 1.12.65, 12/9/2020"},{"location":"CHANGELOG/#added_29","text":"\"kafka.pubsub\" is added as a cloud service File download example in the lambda-example project \"trace.log.header\" added to application.properties - when tracing is enabled, this inserts the trace-ID of the transaction in the log context. 
For more details, please refer to the Developer Guide Add API to pub/sub engine to support creation of topic with partitions TypedLambdaFunction is added so that developer can predefine input and output classes in a service without casting","title":"Added"},{"location":"CHANGELOG/#removed_29","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_29","text":"Decouple Kafka pub/sub from kafka connector so that native pub/sub can be used when application is running in standalone mode Rename \"relay\" to \"targetHost\" in AsyncHttpRequest data model Enhanced routing table distribution by sending a complete list of route tables, thus reducing network admin traffic.","title":"Changed"},{"location":"CHANGELOG/#version-11264-9282020","text":"","title":"Version 1.12.64, 9/28/2020"},{"location":"CHANGELOG/#added_30","text":"If predictable topic is set, application instances will report their predictable topics as \"instance ID\" to the presence monitor. This improves visibility when a developer tests their application in \"hybrid\" mode. i.e. 
running the app locally and connecting to the cloud remotely for event streams and cloud resources.","title":"Added"},{"location":"CHANGELOG/#removed_30","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_30","text":"N/A","title":"Changed"},{"location":"CHANGELOG/#version-11263-8272020","text":"","title":"Version 1.12.63, 8/27/2020"},{"location":"CHANGELOG/#added_31","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_31","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_31","text":"Improved Kafka producer and consumer pairing","title":"Changed"},{"location":"CHANGELOG/#version-11262-8122020","text":"","title":"Version 1.12.62, 8/12/2020"},{"location":"CHANGELOG/#added_32","text":"New presence monitor's admin endpoint for the operator to force routing table synchronization (\"/api/ping/now\")","title":"Added"},{"location":"CHANGELOG/#removed_32","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_32","text":"Improved routing table integrity check","title":"Changed"},{"location":"CHANGELOG/#version-11261-882020","text":"","title":"Version 1.12.61, 8/8/2020"},{"location":"CHANGELOG/#added_33","text":"Event stream systems like Kafka assume a topic to be used long term. This version adds support to reuse the same topic when an application instance restarts. You can create a predictable topic using a unique application name and instance ID. 
For example, with Kubernetes, you can use the POD name as the unique application instance topic.","title":"Added"},{"location":"CHANGELOG/#removed_33","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_33","text":"N/A","title":"Changed"},{"location":"CHANGELOG/#version-11256-842020","text":"","title":"Version 1.12.56, 8/4/2020"},{"location":"CHANGELOG/#added_34","text":"Automate trace for fork-n-join use case","title":"Added"},{"location":"CHANGELOG/#removed_34","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_34","text":"N/A","title":"Changed"},{"location":"CHANGELOG/#version-11255-7192020","text":"","title":"Version 1.12.55, 7/19/2020"},{"location":"CHANGELOG/#added_35","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_35","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_35","text":"Improved distributed trace - set the \"from\" address in EventEnvelope automatically.","title":"Changed"},{"location":"CHANGELOG/#version-11254-7102020","text":"","title":"Version 1.12.54, 7/10/2020"},{"location":"CHANGELOG/#added_36","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_36","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_36","text":"Application life-cycle management - User provided main application(s) will be started after Spring Boot declares web application ready. This ensures correct Spring autowiring or dependencies are available. Bugfix for locale - String.format(float) returns comma as decimal point that breaks number parser. Replace with BigDecimal decimal point scaling. 
Bugfix for Tomcat 9.0.35 - Change Async servlet default timeout from 30 seconds to -1 so the system can handle the whole life-cycle directly.","title":"Changed"},{"location":"CHANGELOG/#version-11252-6112020","text":"","title":"Version 1.12.52, 6/11/2020"},{"location":"CHANGELOG/#added_37","text":"new \"search\" method in Post Office to return a list of application instances for a service simple \"cron\" job scheduler as an extension project add \"sequence\" to MainApplication annotation for orderly execution when more than one MainApplication is available support \"Optional\" object in EventEnvelope so a LambdaFunction can read and return Optional","title":"Added"},{"location":"CHANGELOG/#removed_37","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_37","text":"The rest-spring library has been updated to support both JAR and WAR deployment All pom.xml files updated accordingly PersistentWsClient will back off for 10 seconds when disconnected by remote host","title":"Changed"},{"location":"CHANGELOG/#version-11250-5202020","text":"","title":"Version 1.12.50, 5/20/2020"},{"location":"CHANGELOG/#added_38","text":"Payload segmentation For a large payload in an event, the payload is automatically segmented into 64 KB segments. When there is more than one target application instance, the system ensures that the segments of the same event are delivered to exactly the same target. 
PersistentWsClient added - generalized persistent websocket client for Event Node, Kafka reporter and Hazelcast reporter.","title":"Added"},{"location":"CHANGELOG/#removed_38","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_38","text":"Code cleaning to improve consistency Upgraded hibernate-validator to v6.1.5.Final and Hazelcast version 4.0.1 REST automation is provided as a library and an application to handle different use cases","title":"Changed"},{"location":"CHANGELOG/#version-11240-542020","text":"","title":"Version 1.12.40, 5/4/2020"},{"location":"CHANGELOG/#added_39","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_39","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_39","text":"For security reasons, upgrade log4j to version 2.13.2","title":"Changed"},{"location":"CHANGELOG/#version-11239-532020","text":"","title":"Version 1.12.39, 5/3/2020"},{"location":"CHANGELOG/#added_40","text":"Use RestEasy JAX-RS library","title":"Added"},{"location":"CHANGELOG/#removed_40","text":"For security reasons, removed Jersey JAX-RS library","title":"Removed"},{"location":"CHANGELOG/#changed_40","text":"Updated RestLoader to initialize RestEasy servlet dispatcher Support nested arrays in MultiLevelMap","title":"Changed"},{"location":"CHANGELOG/#version-11236-4162020","text":"","title":"Version 1.12.36, 4/16/2020"},{"location":"CHANGELOG/#added_41","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_41","text":"For simplicity, retire route-substitution admin endpoint. 
Route substitution uses a simple static table in route-substitution.yaml.","title":"Removed"},{"location":"CHANGELOG/#changed_41","text":"N/A","title":"Changed"},{"location":"CHANGELOG/#version-11235-4122020","text":"","title":"Version 1.12.35, 4/12/2020"},{"location":"CHANGELOG/#added_42","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_42","text":"SimpleRBAC class is retired","title":"Removed"},{"location":"CHANGELOG/#changed_42","text":"Improved ConfigReader and AppConfigReader with automatic key-value normalization for YAML and JSON files Improved pub/sub module in kafka-connector","title":"Changed"},{"location":"CHANGELOG/#version-11234-3282020","text":"","title":"Version 1.12.34, 3/28/2020"},{"location":"CHANGELOG/#added_43","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_43","text":"Retired proprietary config manager since we can use the \"BeforeApplication\" approach to load config from Kubernetes configMap or other systems of config record.","title":"Removed"},{"location":"CHANGELOG/#changed_43","text":"Added \"isZero\" method to the SimpleMapper class Convert BigDecimal to string without scientific notation (i.e. toPlainString instead of toString) Corresponding unit tests added to verify behavior","title":"Changed"},{"location":"CHANGELOG/#version-11232-3142020","text":"","title":"Version 1.12.32, 3/14/2020"},{"location":"CHANGELOG/#added_44","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_44","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_44","text":"Kafka-connector will shut down the application instance when the EventProducer cannot send an event to Kafka. 
This would allow the infrastructure to restart the application instance automatically.","title":"Changed"},{"location":"CHANGELOG/#version-11231-2262020","text":"","title":"Version 1.12.31, 2/26/2020"},{"location":"CHANGELOG/#added_45","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_45","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_45","text":"Kafka-connector now supports an external service provider for Kafka properties and credentials. If your application implements a function with route name \"kafka.properties.provider\" before connecting to the cloud, the kafka-connector will retrieve kafka credentials on demand. This addresses the case when kafka credentials change after application start-up. Interceptors are designed to forward requests and thus they do not generate replies. However, if you implement a function as an EventInterceptor, your function can throw an exception just like a regular function and the exception will be returned to the calling function. This makes it easier to write interceptors.","title":"Changed"},{"location":"CHANGELOG/#version-11230-262020","text":"","title":"Version 1.12.30, 2/6/2020"},{"location":"CHANGELOG/#added_46","text":"Expose \"async.http.request\" as a PUBLIC function (\"HttpClient as a service\")","title":"Added"},{"location":"CHANGELOG/#removed_46","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_46","text":"Improved Hazelcast client connection stability Improved Kafka native pub/sub","title":"Changed"},{"location":"CHANGELOG/#version-11229-1102020","text":"","title":"Version 1.12.29, 1/10/2020"},{"location":"CHANGELOG/#added_47","text":"Rest-automation will transport X-Trace-Id from/to Http request/response, therefore extending distributed trace across systems that support the X-Trace-Id HTTP header. 
Added endpoint and service to shutdown application instance.","title":"Added"},{"location":"CHANGELOG/#removed_47","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_47","text":"Updated SimpleXmlParser with XML External Entity (XXE) injection prevention. Bug fix for hazelcast recovery logic - when a hazelcast node is down, the app instance will restart the hazelcast client and reset routing table correctly. HSTS header insertion is optional so that we can disable it to avoid duplicated header when API gateway is doing it.","title":"Changed"},{"location":"CHANGELOG/#version-11226-142020","text":"","title":"Version 1.12.26, 1/4/2020"},{"location":"CHANGELOG/#added_48","text":"Feature to disable PoJo deserialization so that caller can decide if the result set should be in PoJo or a Map.","title":"Added"},{"location":"CHANGELOG/#removed_48","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_48","text":"Simplified key management for Event Node AsyncHttpRequest case insensitivity for headers, cookies, path parameters and session key-values Make built-in configuration management optional","title":"Changed"},{"location":"CHANGELOG/#version-11219-12282019","text":"","title":"Version 1.12.19, 12/28/2019"},{"location":"CHANGELOG/#added_49","text":"Added HTTP relay feature in rest-automation project","title":"Added"},{"location":"CHANGELOG/#removed_49","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_49","text":"Improved hazelcast retry and peer discovery logic Refactored rest-automation's service gateway module to use AsyncHttpRequest Info endpoint to show routing table of a peer","title":"Changed"},{"location":"CHANGELOG/#version-11217-12162019","text":"","title":"Version 1.12.17, 12/16/2019"},{"location":"CHANGELOG/#added_50","text":"Simple configuration management is added to event-node, hazelcast-presence and kafka-presence monitors Added BeforeApplication annotation - this allows user application to execute some setup logic before 
the main application starts. e.g. modifying parameters in application.properties Added API playground as a convenient standalone application to render OpenAPI 2.0 and 3.0 yaml and json files Added argument parser in rest-automation helper app to use a static HTML folder in the local file system if arguments -html file_path is given when starting the JAR file.","title":"Added"},{"location":"CHANGELOG/#removed_50","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_50","text":"Kafka publisher timeout value changed from 10 to 20 seconds Log a warning when Kafka takes more than 5 seconds to send an event","title":"Changed"},{"location":"CHANGELOG/#version-11214-11202019","text":"","title":"Version 1.12.14, 11/20/2019"},{"location":"CHANGELOG/#added_51","text":"getRoute() method is added to PostOffice to facilitate RBAC The route name of the current service is added to an outgoing event when the \"from\" field is not present Simple RBAC using YAML configuration instead of code","title":"Added"},{"location":"CHANGELOG/#removed_51","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_51","text":"Updated Spring Boot to v2.2.1","title":"Changed"},{"location":"CHANGELOG/#version-11212-10262019","text":"","title":"Version 1.12.12, 10/26/2019"},{"location":"CHANGELOG/#added_52","text":"Multi-tenancy support for event streams (Hazelcast and Kafka). This allows the use of a single event stream cluster for multiple non-prod environments. 
For production, it must use a separate event stream cluster for security reasons.","title":"Added"},{"location":"CHANGELOG/#removed_52","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_52","text":"logging framework changed from logback to log4j2 (version 2.12.1) Use JSR-356 websocket annotated ClientEndpoint Improved websocket reconnection logic","title":"Changed"},{"location":"CHANGELOG/#version-1129-9142019","text":"","title":"Version 1.12.9, 9/14/2019"},{"location":"CHANGELOG/#added_53","text":"Distributed tracing implemented in platform-core and rest-automation Improved HTTP header transformation for rest-automation","title":"Added"},{"location":"CHANGELOG/#removed_53","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_53","text":"language pack API key obtained from environment variable","title":"Changed"},{"location":"CHANGELOG/#version-1128-8152019","text":"","title":"Version 1.12.8, 8/15/2019"},{"location":"CHANGELOG/#added_54","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_54","text":"rest-core subproject has been merged with rest-spring","title":"Removed"},{"location":"CHANGELOG/#changed_54","text":"N/A","title":"Changed"},{"location":"CHANGELOG/#version-1127-7152019","text":"","title":"Version 1.12.7, 7/15/2019"},{"location":"CHANGELOG/#added_55","text":"Periodic routing table integrity check (15 minutes) Set kafka read pointer to the beginning for new application instances except presence monitor REST automation helper application in the \"extensions\" project Support service discovery of multiple routes in the updated PostOffice's exists() method logback to set log level based on environment variable LOG_LEVEL (default is INFO)","title":"Added"},{"location":"CHANGELOG/#removed_55","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_55","text":"Minor refactoring of kafka-connector and hazelcast-connector to ensure that they can coexist if you want to include both of these dependencies in your project. 
This is for convenience of dev and testing. In production, please select only one cloud connector library to reduce memory footprint.","title":"Changed"},{"location":"CHANGELOG/#version-1124-6242019","text":"","title":"Version 1.12.4, 6/24/2019"},{"location":"CHANGELOG/#added_56","text":"Add inactivity expiry timer to ObjectStreamIO so that house-keeper can clean up resources that are idle","title":"Added"},{"location":"CHANGELOG/#removed_56","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_56","text":"Disable HTML escape sequence for GSON serializer Bug fix for GSON serialization optimization Bug fix for Object Stream housekeeper By default, GSON serializer converts all numbers to double, resulting in an unwanted decimal point for integer and long. To handle custom map serialization for correct representation of numbers, an unintended side effect was introduced in earlier releases. A list of inner PoJo would be incorrectly serialized as a map, resulting in a casting exception. This release resolves this issue.","title":"Changed"},{"location":"CHANGELOG/#version-1121-6102019","text":"","title":"Version 1.12.1, 6/10/2019"},{"location":"CHANGELOG/#added_57","text":"Store-n-forward pub/sub API will be automatically enabled if the underlying cloud connector supports it. e.g. kafka ObjectStreamIO, a convenient wrapper class, to provide event stream I/O API. Object stream feature is now a standard feature instead of optional. 
Deferred delivery added to language connector.","title":"Added"},{"location":"CHANGELOG/#removed_57","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_57","text":"N/A","title":"Changed"},{"location":"CHANGELOG/#version-11140-5252019","text":"","title":"Version 1.11.40, 5/25/2019"},{"location":"CHANGELOG/#added_58","text":"Route substitution for simple versioning use case Add \"Strict Transport Security\" header if HTTPS (https://tools.ietf.org/html/rfc6797) Event stream connector for Kafka Distributed housekeeper feature for Hazelcast connector","title":"Added"},{"location":"CHANGELOG/#removed_58","text":"System log service","title":"Removed"},{"location":"CHANGELOG/#changed_58","text":"Refactoring of Hazelcast event stream connector library to sync up with the new Kafka connector.","title":"Changed"},{"location":"CHANGELOG/#version-11139-4302019","text":"","title":"Version 1.11.39, 4/30/2019"},{"location":"CHANGELOG/#added_59","text":"Language-support service application for Python, Node.js and Go, etc. 
Python language pack project is available at https://github.com/Accenture/mercury-python","title":"Added"},{"location":"CHANGELOG/#removed_59","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_59","text":"replace Jackson serialization engine with Gson ( platform-core project) replace Apache HttpClient with Google Http Client ( rest-spring ) remove Jackson dependencies from Spring Boot ( rest-spring ) interceptor improvement","title":"Changed"},{"location":"CHANGELOG/#version-11133-3252019","text":"","title":"Version 1.11.33, 3/25/2019"},{"location":"CHANGELOG/#added_60","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_60","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_60","text":"Move safe.data.models validation rules from EventEnvelope to SimpleMapper Apache fluent HTTP client downgraded to version 4.5.6 because the pom file in 4.5.7 is invalid","title":"Changed"},{"location":"CHANGELOG/#version-11130-372019","text":"","title":"Version 1.11.30, 3/7/2019"},{"location":"CHANGELOG/#added_61","text":"Added retry logic in persistent queue when OS cannot update local file metadata in real-time for Windows based machine.","title":"Added"},{"location":"CHANGELOG/#removed_61","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_61","text":"pom.xml changes - update with latest 3rd party open sources dependencies.","title":"Changed"},{"location":"CHANGELOG/#version-11129-1252019","text":"","title":"Version 1.11.29, 1/25/2019"},{"location":"CHANGELOG/#added_62","text":"platform-core Support for long running functions so that any long queries will not block the rest of the system. \"safe.data.models\" is available as an option in the application.properties. This is an additional security measure to protect against Jackson deserialization vulnerability. 
See example below: # # additional security to protect against model injection # comma separated list of model packages that are considered safe to be used for object deserialization # #safe.data.models=com.accenture.models rest-spring \"/env\" endpoint is added. See sample application.properties below: # # environment and system properties to be exposed to the \"/env\" admin endpoint # show.env.variables=USER, TEST show.application.properties=server.port, cloud.connector","title":"Added"},{"location":"CHANGELOG/#removed_62","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_62","text":"platform-core Use Java Future and an elastic cached thread pool for executing user functions.","title":"Changed"},{"location":"CHANGELOG/#fixed","text":"N/A","title":"Fixed"},{"location":"CHANGELOG/#version-11128-12202018","text":"","title":"Version 1.11.28, 12/20/2018"},{"location":"CHANGELOG/#added_63","text":"Hazelcast support is added. This includes two projects (hazelcast-connector and hazelcast-presence). Hazelcast-connector is a cloud connector library. Hazelcast-presence is the \"Presence Monitor\" for monitoring the presence status of each application instance.","title":"Added"},{"location":"CHANGELOG/#removed_63","text":"platform-core The \"fixed resource manager\" feature is removed because the same outcome can be achieved at the application level. e.g. The application can broadcast requests to multiple application instances with the same route name and use a callback function to receive response asynchronously. The services can provide resource metrics so that the caller can decide which is the most available instance to contact. 
For simplicity, resources management is better left to the cloud platform or the application itself.","title":"Removed"},{"location":"CHANGELOG/#changed_63","text":"N/A","title":"Changed"},{"location":"CHANGELOG/#fixed_1","text":"N/A","title":"Fixed"},{"location":"CODE_OF_CONDUCT/","text":"Contributor Covenant Code of Conduct Our Pledge In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. Our Standards Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Our Responsibilities Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. 
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. Scope This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting Kevin Bader (the current project maintainer). All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. 
Attribution This Code of Conduct is adapted from the Contributor Covenant , version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html","title":"Code of Conduct"},{"location":"CODE_OF_CONDUCT/#contributor-covenant-code-of-conduct","text":"","title":"Contributor Covenant Code of Conduct"},{"location":"CODE_OF_CONDUCT/#our-pledge","text":"In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.","title":"Our Pledge"},{"location":"CODE_OF_CONDUCT/#our-standards","text":"Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting","title":"Our Standards"},{"location":"CODE_OF_CONDUCT/#our-responsibilities","text":"Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. 
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.","title":"Our Responsibilities"},{"location":"CODE_OF_CONDUCT/#scope","text":"This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.","title":"Scope"},{"location":"CODE_OF_CONDUCT/#enforcement","text":"Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting Kevin Bader (the current project maintainer). All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.","title":"Enforcement"},{"location":"CODE_OF_CONDUCT/#attribution","text":"This Code of Conduct is adapted from the Contributor Covenant , version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html","title":"Attribution"},{"location":"CONTRIBUTING/","text":"Contributing to the Mercury framework Thanks for taking the time to contribute! 
The following is a set of guidelines for contributing to Mercury and its packages, which are hosted in the Accenture Organization on GitHub. These are mostly guidelines, not rules. Use your best judgment, and feel free to propose changes to this document in a pull request. Code of Conduct This project and everyone participating in it is governed by our Code of Conduct . By participating, you are expected to uphold this code. Please report unacceptable behavior to Kevin Bader, who is the current project maintainer. What should I know before I get started? We follow the standard GitHub workflow . Before submitting a Pull Request: Please write tests. Make sure you run all tests and check for warnings. Think about whether it makes sense to document the change in some way. For smaller, internal changes, inline documentation might be sufficient, while more visible ones might warrant a change to the developer's guide or the README . Update the CHANGELOG.md file with your current change in the form of [Type of change, e.g. Config, Kafka, etc.] with a short description of what it is all about and a link to the issue or pull request, and choose a suitable section (i.e., changed, added, fixed, removed, deprecated). Design Decisions When we make a significant decision in how to write code, or how to maintain the project and what we can or cannot support, we will document it using Architecture Decision Records (ADR) . Take a look at the design notes for existing ADRs. If you have a question about how we do things, check to see if it is documented there. If it is not documented there, please ask us - chances are you're not the only one wondering. Of course, also feel free to challenge the decisions by starting a discussion on the mailing list.","title":"Contribution"},{"location":"CONTRIBUTING/#contributing-to-the-mercury-framework","text":"Thanks for taking the time to contribute! 
The following is a set of guidelines for contributing to Mercury and its packages, which are hosted in the Accenture Organization on GitHub. These are mostly guidelines, not rules. Use your best judgment, and feel free to propose changes to this document in a pull request.","title":"Contributing to the Mercury framework"},{"location":"CONTRIBUTING/#code-of-conduct","text":"This project and everyone participating in it is governed by our Code of Conduct . By participating, you are expected to uphold this code. Please report unacceptable behavior to Kevin Bader, who is the current project maintainer.","title":"Code of Conduct"},{"location":"CONTRIBUTING/#what-should-i-know-before-i-get-started","text":"We follow the standard GitHub workflow . Before submitting a Pull Request: Please write tests. Make sure you run all tests and check for warnings. Think about whether it makes sense to document the change in some way. For smaller, internal changes, inline documentation might be sufficient, while more visible ones might warrant a change to the developer's guide or the README . Update the CHANGELOG.md file with your current change in the form of [Type of change, e.g. Config, Kafka, etc.] with a short description of what it is all about and a link to the issue or pull request, and choose a suitable section (i.e., changed, added, fixed, removed, deprecated).","title":"What should I know before I get started?"},{"location":"CONTRIBUTING/#design-decisions","text":"When we make a significant decision in how to write code, or how to maintain the project and what we can or cannot support, we will document it using Architecture Decision Records (ADR) . Take a look at the design notes for existing ADRs. If you have a question about how we do things, check to see if it is documented there. If it is not documented there, please ask us - chances are you're not the only one wondering. 
Of course, also feel free to challenge the decisions by starting a discussion on the mailing list.","title":"Design Decisions"},{"location":"INCLUSIVITY/","text":"TECHNOLOGY INCLUSIVE LANGUAGE GUIDEBOOK As an organization, Accenture believes in building an inclusive workplace and contributing to a world where equality thrives. Certain terms or expressions can unintentionally harm, perpetuate damaging stereotypes, and insult people. Inclusive language avoids bias, slang terms, and word choices which express derision of groups of people based on race, gender, sexuality, or socioeconomic status. The Accenture North America Technology team created this guidebook to provide Accenture employees with a view into non-inclusive language and guidance for working to avoid its use\u2014helping to ensure that we communicate with respect, dignity and fairness. How to use this guide? Accenture has over 514,000 employees from diverse backgrounds, who perform consulting and delivery work for an equally diverse set of clients and partners. When communicating with your colleagues and representing Accenture, consider the connotation, however unintended, of certain terms in your written and verbal communication. The guidelines are intended to help you recognize non-inclusive words and understand potential meanings that these words might convey. Our goal with these recommendations is not to require you to use specific words, but to ask you to take a moment to consider how your audience may be affected by the language you choose. Inclusive Categories Non-inclusive term Replacement Explanation Race, Ethnicity & National Origin master primary client source leader Using the terms \u201cmaster/slave\u201d in this context inappropriately normalizes and minimizes the very large magnitude that slavery and its effects have had in our history. 
slave secondary replica follower blacklist deny list block list The term \u201cblacklist\u201d was first used in the early 1600s to describe a list of those who were under suspicion and thus not to be trusted, whereas \u201cwhitelist\u201d referred to those considered acceptable. Accenture does not want to promote the association of \u201cblack\u201d and negative, nor the connotation of \u201cwhite\u201d being the inverse, or positive. whitelist allow list approved list native original core feature Referring to \u201cnative\u201d vs \u201cnon-native\u201d to describe technology platforms carries overtones of minimizing the impact of colonialism on native people, and thus minimizes the negative associations the terminology has in the latter context. non-native non-original non-core feature Gender & Sexuality man-hours work-hours business-hours When people read the words \u2018man\u2019 or \u2018he,\u2019 people often picture males only. Usage of the male terminology subtly suggests that only males can perform certain work or hold certain jobs. Gender-neutral terms include the whole audience, and thus using terms such as \u201cbusiness executive\u201d instead of \u201cbusinessman,\u201d or informally, \u201cfolks\u201d instead of \u201cguys\u201d is preferable because it is inclusive. man-days work-days business-days Ability Status & (Dis)abilities sanity check insanity check confidence check quality check rationality check Using the \u201cHuman Engagement, People First\u201d approach, putting people - all people - at the center is important. Denoting ability status in the context of inferior or problematic work implies that people with mental illnesses are inferior, wrong, or incorrect. dummy variables indicator variables Violence STONITH, kill, hit conclude cease discontinue Using the \u201cHuman Engagement, People First\u201d approach, putting people - all people - at the center is important. 
Denoting ability status in the context of inferior or problematic work implies that people with mental illnesses are inferior, wrong, or incorrect. one throat to choke single point of contact primary contact This guidebook is a living document and will be updated as terminology evolves. We encourage our users to provide feedback on the effectiveness of this document and we welcome additional suggestions. Contact us at Technology_ProjectElevate@accenture.com .","title":"Inclusivity"},{"location":"INCLUSIVITY/#technology-inclusive-language-guidebook","text":"As an organization, Accenture believes in building an inclusive workplace and contributing to a world where equality thrives. Certain terms or expressions can unintentionally harm, perpetuate damaging stereotypes, and insult people. Inclusive language avoids bias, slang terms, and word choices which express derision of groups of people based on race, gender, sexuality, or socioeconomic status. The Accenture North America Technology team created this guidebook to provide Accenture employees with a view into non-inclusive language and guidance for working to avoid its use\u2014helping to ensure that we communicate with respect, dignity and fairness. How to use this guide? Accenture has over 514,000 employees from diverse backgrounds, who perform consulting and delivery work for an equally diverse set of clients and partners. When communicating with your colleagues and representing Accenture, consider the connotation, however unintended, of certain terms in your written and verbal communication. The guidelines are intended to help you recognize non-inclusive words and understand potential meanings that these words might convey. Our goal with these recommendations is not to require you to use specific words, but to ask you to take a moment to consider how your audience may be affected by the language you choose. 
Inclusive Categories Non-inclusive term Replacement Explanation Race, Ethnicity & National Origin master primary client source leader Using the terms \u201cmaster/slave\u201d in this context inappropriately normalizes and minimizes the very large magnitude that slavery and its effects have had in our history. slave secondary replica follower blacklist deny list block list The term \u201cblacklist\u201d was first used in the early 1600s to describe a list of those who were under suspicion and thus not to be trusted, whereas \u201cwhitelist\u201d referred to those considered acceptable. Accenture does not want to promote the association of \u201cblack\u201d and negative, nor the connotation of \u201cwhite\u201d being the inverse, or positive. whitelist allow list approved list native original core feature Referring to \u201cnative\u201d vs \u201cnon-native\u201d to describe technology platforms carries overtones of minimizing the impact of colonialism on native people, and thus minimizes the negative associations the terminology has in the latter context. non-native non-original non-core feature Gender & Sexuality man-hours work-hours business-hours When people read the words \u2018man\u2019 or \u2018he,\u2019 people often picture males only. Usage of the male terminology subtly suggests that only males can perform certain work or hold certain jobs. Gender-neutral terms include the whole audience, and thus using terms such as \u201cbusiness executive\u201d instead of \u201cbusinessman,\u201d or informally, \u201cfolks\u201d instead of \u201cguys\u201d is preferable because it is inclusive. man-days work-days business-days Ability Status & (Dis)abilities sanity check insanity check confidence check quality check rationality check Using the \u201cHuman Engagement, People First\u201d approach, putting people - all people - at the center is important. 
Denoting ability status in the context of inferior or problematic work implies that people with mental illnesses are inferior, wrong, or incorrect. dummy variables indicator variables Violence STONITH, kill, hit conclude cease discontinue Using the \u201cHuman Engagement, People First\u201d approach, putting people - all people - at the center is important. Denoting ability status in the context of inferior or problematic work implies that people with mental illnesses are inferior, wrong, or incorrect. one throat to choke single point of contact primary contact This guidebook is a living document and will be updated as terminology evolves. We encourage our users to provide feedback on the effectiveness of this document and we welcome additional suggestions. Contact us at Technology_ProjectElevate@accenture.com .","title":"TECHNOLOGY INCLUSIVE LANGUAGE GUIDEBOOK"},{"location":"arch-decisions/DESIGN-NOTES/","text":"Design notes Support sequential synchronous RPC in a non-blocking fashion The foundation library (platform-core) has been integrated with Java 21 virtual thread and Kotlin suspend function features. When a user function makes an RPC call using virtual thread or suspend function, the user function appears to be \"blocked\" so that the code can execute sequentially. Behind the curtain, the function is actually \"suspended\". This makes sequential code with RPC perform as well as reactive code. More importantly, the sequential code represents the intent of the application clearly, thus making code easier to read and maintain. Low level control of function execution strategies You can precisely control how your functions execute, using virtual threads, suspend functions or kernel thread pools to yield the highest performance and throughput. Serialization Gson We are using Gson for its minimalist design. We have customized the serialization behavior to be similar to Jackson and other serializers. i.e. Integer and long values are kept without decimal points. 
For API functional compatibility with Jackson, we have added the writeValueAsString, writeValueAsBytes and readValue methods. The convertValue method has been consolidated into the readValue method. MsgPack For efficiency and serialization performance, we use MsgPack as a schemaless binary transport for EventEnvelope that contains event metadata, headers and payload. Custom JSON and XML serializers For consistency, we have customized Spring Boot and Servlet serialization and exception handlers. Reactive design Mercury uses the temporary local file system ( /tmp ) as an overflow area for events when the consumer is slower than the producer. This event buffering design means that the user application does not have to handle back-pressure logic directly. However, it does not restrict you from implementing your flow-control logic. In-memory event system In Mercury version 1, the Akka actor system is used as the in-memory event bus. Since Mercury version 2, we have migrated from Akka to Eclipse Vertx. In Mercury version 3, we extend the engine to be fully non-blocking with low-level control of application performance and throughput. In Mercury version 3.1, the platform core engine is fully integrated with Java 21 virtual thread. Spring Boot 3 The platform-core includes a non-blocking HTTP and websocket server for standalone operation without Spring Boot. The rest-spring-3 library is designed to turn your application into a Spring Boot application. You may also use the platform-core library with a regular Spring Boot application without the rest-spring-3 library if you prefer.","title":"Design notes"},{"location":"arch-decisions/DESIGN-NOTES/#design-notes","text":"","title":"Design notes"},{"location":"arch-decisions/DESIGN-NOTES/#support-sequential-synchronous-rpc-in-a-non-blocking-fashion","text":"The foundation library (platform-core) has been integrated with Java 21 virtual thread and Kotlin suspend function features. 
When a user function makes an RPC call using virtual thread or suspend function, the user function appears to be \"blocked\" so that the code can execute sequentially. Behind the curtain, the function is actually \"suspended\". This makes sequential code with RPC perform as well as reactive code. More importantly, the sequential code represents the intent of the application clearly, thus making code easier to read and maintain.","title":"Support sequential synchronous RPC in a non-blocking fashion"},{"location":"arch-decisions/DESIGN-NOTES/#low-level-control-of-function-execution-strategies","text":"You can precisely control how your functions execute, using virtual threads, suspend functions or kernel thread pools to yield the highest performance and throughput.","title":"Low level control of function execution strategies"},{"location":"arch-decisions/DESIGN-NOTES/#serialization","text":"","title":"Serialization"},{"location":"arch-decisions/DESIGN-NOTES/#gson","text":"We are using Gson for its minimalist design. We have customized the serialization behavior to be similar to Jackson and other serializers. i.e. Integer and long values are kept without decimal points. For API functional compatibility with Jackson, we have added the writeValueAsString, writeValueAsBytes and readValue methods. 
The convertValue method has been consolidated into the readValue method.","title":"Gson"},{"location":"arch-decisions/DESIGN-NOTES/#msgpack","text":"For efficiency and serialization performance, we use MsgPack as a schemaless binary transport for EventEnvelope that contains event metadata, headers and payload.","title":"MsgPack"},{"location":"arch-decisions/DESIGN-NOTES/#custom-json-and-xml-serializers","text":"For consistency, we have customized Spring Boot and Servlet serialization and exception handlers.","title":"Custom JSON and XML serializers"},{"location":"arch-decisions/DESIGN-NOTES/#reactive-design","text":"Mercury uses the temporary local file system ( /tmp ) as an overflow area for events when the consumer is slower than the producer. This event buffering design means that the user application does not have to handle back-pressure logic directly. However, it does not restrict you from implementing your flow-control logic.","title":"Reactive design"},{"location":"arch-decisions/DESIGN-NOTES/#in-memory-event-system","text":"In Mercury version 1, the Akka actor system is used as the in-memory event bus. Since Mercury version 2, we have migrated from Akka to Eclipse Vertx. In Mercury version 3, we extend the engine to be fully non-blocking with low-level control of application performance and throughput. In Mercury version 3.1, the platform core engine is fully integrated with Java 21 virtual thread.","title":"In-memory event system"},{"location":"arch-decisions/DESIGN-NOTES/#spring-boot-3","text":"The platform-core includes a non-blocking HTTP and websocket server for standalone operation without Spring Boot. The rest-spring-3 library is designed to turn your application into a Spring Boot application. You may also use the platform-core library with a regular Spring Boot application without the rest-spring-3 library if you prefer.","title":"Spring Boot 3"},{"location":"guides/APPENDIX-I/","text":"Application Configuration The following parameters are used by the system. 
You can define them in either the application.properties or application.yml file. When you use both application.properties and application.yml, the parameters in application.properties will take precedence. Key Value (example) Required application.name Application name Yes spring.application.name Alias for application name Yes*1 info.app.version major.minor.build (e.g. 1.0.0) Yes info.app.description Something about your application Yes web.component.scan your own package path or parent path Yes server.port e.g. 8083 Yes*1 rest.automation true if you want to enable automation Optional rest.server.port e.g. 8085 Optional websocket.server.port Alias for rest.server.port Optional static.html.folder classpath:/public/ Yes mime.types Map of file extensions to MIME types (application.yml only) Optional spring.web.resources.static-locations (alias for static.html.folder) Yes*1 spring.mvc.static-path-pattern /** Yes*1 jax.rs.application.path /api Optional* show.env.variables comma separated list of variable names Optional show.application.properties comma separated list of property names Optional cloud.connector kafka, none, etc. Optional cloud.services e.g. some.interesting.service Optional snake.case.serialization true (recommended) Optional safe.data.models packages pointing to your PoJo classes Optional protect.info.endpoints true to disable actuators. Default: true Optional trace.http.header comma separated list. Default \"X-Trace-Id\" Optional index.redirection comma separated list of URI paths Optional* index.page default is index.html Optional* hsts.feature default is true Optional* application.feature.route.substitution default is false Optional route.substitution.file points to a config file Optional application.feature.topic.substitution default is false Optional topic.substitution.file points to a config file Optional kafka.replication.factor 3 Kafka cloud.client.properties e.g. classpath:/kafka.properties Connector user.cloud.client.properties e.g. 
classpath:/second-kafka.properties Connector default.app.group.id groupId for the app instance. Default: appGroup Connector default.monitor.group.id groupId for the presence-monitor. Default: monitorGroup Connector monitor.topic topic for the presence-monitor. Default: service.monitor Connector app.topic.prefix Default: multiplex (DO NOT change) Connector app.partitions.per.topic Max Kafka partitions per topic. Default: 32 Connector max.virtual.topics Max virtual topics = partitions * topics. Default: 288 Connector max.closed.user.groups Number of closed user groups. Default: 10, range: 3 - 30 Connector closed.user.group Closed user group. Default: 1 Connector transient.data.store Default is \"/tmp/reactive\" Optional running.in.cloud Default is false (set to true if containerized) Optional multicast.yaml points to the multicast.yaml config file Optional journal.yaml points to the journal.yaml config file Optional deferred.commit.log Default is false (may be set to true in unit tests) Optional * - when using the \"rest-spring\" library Static HTML contents You can place static HTML files (e.g. the HTML bundle for a UI program) in the \"resources/public\" folder or in the local file system using the \"static.html.folder\" parameter. The system supports a minimal set of mappings from file extensions to MIME types. If your use case requires additional MIME type mappings, you may define them in the application.yml configuration file under the mime.types section like this: mime.types: pdf: 'application/pdf' doc: 'application/msword' Note that the application.properties file cannot be used for the \"mime.types\" section because it only supports text key-values. HTTP and websocket port assignment If rest.automation=true and rest.server.port or server.port are configured, the system will start a lightweight non-blocking HTTP server. If rest.server.port is not available, it will fall back to server.port . 
If rest.automation=false and you have a websocket server endpoint annotated as WebsocketService , the system will start a non-blocking Websocket server with a minimalist HTTP server that provides actuator services. If websocket.server.port is not available, it will fall back to rest.server.port or server.port . If you add the Spring Boot dependency, Spring Boot will use server.port to start Tomcat or a similar HTTP server. The built-in lightweight non-blocking HTTP server and Spring Boot can co-exist when you configure rest.server.port and server.port to use different ports. Note that the websocket.server.port parameter is an alias of rest.server.port . Transient data store The system handles back-pressure automatically by overflowing events from memory to a transient data store. As a cloud native best practice, the folder must be under \"/tmp\". The default is \"/tmp/reactive\". The \"running.in.cloud\" parameter must be set to false when your apps are running in an IDE or on your laptop. When running in Kubernetes, it can be set to true. The safe.data.models parameter A PoJo may contain Java code. As a result, it is possible to inject malicious code that does harm when deserializing a PoJo. This security risk applies to any JSON serialization engine. For added security and peace of mind, you may want to protect your PoJo package paths. When the safe.data.models parameter is configured, the underlying serializers for JAX-RS, Spring RestController and Servlets will respect this setting and enforce PoJo filtering. If there is a genuine need to programmatically perform serialization, you may use the pre-configured serializer so that the serialization behavior is consistent. You can get an instance of the serializer with SimpleMapper.getInstance().getMapper() . The serializer may perform snake case or camel case serialization depending on the parameter snake.case.serialization . 
If you want to ensure snake case or camel case, you can select the serializer like this: SimpleObjectMapper snakeCaseMapper = SimpleMapper.getInstance().getSnakeCaseMapper(); SimpleObjectMapper camelCaseMapper = SimpleMapper.getInstance().getCamelCaseMapper(); The trace.http.header parameter The trace.http.header parameter sets the HTTP header for trace ID. When configured with more than one label, the system will retrieve the trace ID from the corresponding HTTP header and propagate it through the transaction that may be served by multiple services. If a trace ID is present in an HTTP request, the system will use the same label to set the HTTP response trace ID header. X-Trace-Id: a9a4e1ec-1663-4c52-b4c3-7b34b3e33697 or X-Correlation-Id: a9a4e1ec-1663-4c52-b4c3-7b34b3e33697 Kafka specific configuration If you use the kafka-connector (cloud connector) and kafka-presence (presence monitor), you may want to externalize kafka.properties like this: cloud.client.properties=file:/tmp/config/kafka.properties Note that \"classpath\" refers to an embedded config file in the \"resources\" folder in your source code and \"file\" refers to an external config file. You may also use the embedded config file as a backup like this: cloud.client.properties=file:/tmp/config/kafka.properties,classpath:/kafka.properties Distributed trace To enable distributed trace logging, please set this in log4j2.xml: