All notable changes to this project will be documented in this file. The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
+N/A
+N/A
AppConfig will resolve key-values from system properties and environment variables at startup.
Eliminate the preload.yaml configuration file.
Support parsing of multiple environment variables and base system properties for a single key-value in the Config Reader.
+N/A
Upgraded to sync with Mercury-Composable for the foundational event-driven and Event-over-HTTP design. Tested with Node.js version 22.12.0 (LTS). Backward compatible with version 20.18.1 (LTS).
Event-over-HTTP compatibility tests were conducted with Mercury-Composable version 4.0.32.
+N/A
+N/A
+Ported composable core features from Mercury 3.0 Java version
+Threshold feature in REST automation
+N/A
Minimum viable product
+N/A
+N/A
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.

Examples of behavior that contributes to creating a positive environment include:

- Using welcoming and inclusive language
- Being respectful of differing viewpoints and experiences
- Gracefully accepting constructive criticism
- Focusing on what is best for the community
- Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

- The use of sexualized language or imagery and unwelcome sexual attention or advances
- Trolling, insulting/derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or electronic address, without explicit permission
- Other conduct which could reasonably be considered inappropriate in a professional setting

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting Kevin Bader (the current project maintainer). All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
Thanks for taking the time to contribute!

The following is a set of guidelines for contributing to Mercury and its packages, which are hosted in the Accenture Organization on GitHub. These are mostly guidelines, not rules. Use your best judgment, and feel free to propose changes to this document in a pull request.

This project and everyone participating in it is governed by our Code of Conduct. By participating, you are expected to uphold this code. Please report unacceptable behavior to Kevin Bader, who is the current project maintainer.

We follow the standard GitHub workflow. Before submitting a Pull Request:
- Update the CHANGELOG.md file with your current change in the form of [Type of change e.g. Config, Kafka, etc.] with a short description of what it is all about and a link to the issue or pull request, and choose a suitable section (i.e., changed, added, fixed, removed, deprecated).

When we make a significant decision in how to write code, or how to maintain the project and what we can or cannot support, we will document it using Architecture Decision Records (ADR). Take a look at the design notes for existing ADRs. If you have a question around how we do things, check to see if it is documented there. If it is not documented there, please ask us, because chances are you're not the only one wondering. Of course, also feel free to challenge the decisions by starting a discussion on the mailing list.
As an organization, Accenture believes in building an inclusive workplace and contributing to a world where equality thrives. Certain terms or expressions can unintentionally harm, perpetuate damaging stereotypes, and insult people. Inclusive language avoids bias, slang terms, and word choices which express derision of groups of people based on race, gender, sexuality, or socioeconomic status. The Accenture North America Technology team created this guidebook to provide Accenture employees with a view into inclusive language and guidance for avoiding non-inclusive terms, helping to ensure that we communicate with respect, dignity and fairness.

How to use this guide

Accenture has over 514,000 employees from diverse backgrounds, who perform consulting and delivery work for an equally diverse set of clients and partners. When communicating with your colleagues and representing Accenture, consider the connotation, however unintended, of certain terms in your written and verbal communication. The guidelines are intended to help you recognize non-inclusive words and understand the potential meanings these words might convey. Our goal with these recommendations is not to require you to use specific words, but to ask you to take a moment to consider how your audience may be affected by the language you choose.
| Inclusive Categories | Non-inclusive term | Replacement | Explanation |
|---|---|---|---|
| Race, Ethnicity & National Origin | master | primary, client, source, leader | Using the terms "master/slave" in this context inappropriately normalizes and minimizes the very large magnitude that slavery and its effects have had in our history. |
| | slave | secondary, replica, follower | |
| | blacklist | deny list, block list | The term "blacklist" was first used in the early 1600s to describe a list of those who were under suspicion and thus not to be trusted, whereas "whitelist" referred to those considered acceptable. Accenture does not want to promote the association of "black" and negative, nor the connotation of "white" being the inverse, or positive. |
| | whitelist | allow list, approved list | |
| | native | original, core feature | Referring to "native" vs "non-native" to describe technology platforms carries overtones of minimizing the impact of colonialism on native people, and thus minimizes the negative associations the terminology has in the latter context. |
| | non-native | non-original, non-core feature | |
| Gender & Sexuality | man-hours | work-hours, business-hours | When people read the words "man" or "he," people often picture males only. Usage of the male terminology subtly suggests that only males can perform certain work or hold certain jobs. Gender-neutral terms include the whole audience, and thus using terms such as "business executive" instead of "businessman," or informally, "folks" instead of "guys" is preferable because it is inclusive. |
| | man-days | work-days, business-days | |
| Ability Status & (Dis)abilities | sanity check, insanity check | confidence check, quality check, rationality check | Using the "Human Engagement, People First" approach, putting people - all people - at the center is important. Denoting ability status in the context of inferior or problematic work implies that people with mental illnesses are inferior, wrong, or incorrect. |
| | dummy variables | indicator variables | |
| Violence | STONITH, kill, hit | conclude, cease, discontinue | |
| | one throat to choke | single point of contact, primary contact | |
This guidebook is a living document and will be updated as terminology evolves. We encourage our users to provide feedback on the effectiveness of this document and we welcome additional suggestions. Contact us at Technology_ProjectElevate@accenture.com.
Modern applications are sophisticated. Navigating multiple layers of application logic, utilities and libraries makes code complex and difficult to read.

To make code readable and modular, we advocate the composable application design pattern.
Each function in a composable application is a building block of functionality. It is self-contained, stateless and independent of the rest of the application. You can write code using the first principle of "input-process-output".

Mercury is both a development methodology and a toolkit. It articulates the use of events between functions instead of tight coupling using direct method calls.
In Node.js, this is particularly important because it ensures that each function yields to the event loop without blocking the rest of the application, resulting in higher performance and throughput.

The system encapsulates the standard Node.js EventEmitter with a "manager and worker" pattern. Each worker of a function processes incoming events in order. This gives the developer the flexibility to implement the singleton pattern and parallel processing easily.
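The manager-and-worker idea can be illustrated with a simplified, self-contained sketch. This is an illustration of the concept only, not the actual engine code: a manager queues events and a fixed pool of workers drains the queue, each worker awaiting one event at a time so that ordering is preserved per worker. With one worker you get singleton behavior; with many workers you get parallel processing.

```typescript
// Simplified illustration of the "manager and worker" pattern (not the real engine):
// a manager dispatches queued events to a fixed pool of workers; each worker
// processes its events one at a time, so ordering is preserved per worker.
type Handler = (event: string) => Promise<string>;

class MiniManager {
  private queue: string[] = [];
  private results: string[] = [];
  constructor(private handler: Handler, private workers: number) {}

  submit(event: string) { this.queue.push(event); }

  // Start 'workers' concurrent loops; each loop takes the next queued
  // event and awaits the handler before taking another.
  async drain(): Promise<string[]> {
    const worker = async () => {
      let event: string | undefined;
      while ((event = this.queue.shift()) !== undefined) {
        this.results.push(await this.handler(event));
      }
    };
    await Promise.all(Array.from({ length: this.workers }, worker));
    return this.results;
  }
}

// With workers=1 this behaves like a singleton: strict ordering across all events.
const manager = new MiniManager(async e => `done:${e}`, 1);
['a', 'b', 'c'].forEach(e => manager.submit(e));
manager.drain().then(r => console.log(r.join(',')));  // done:a,done:b,done:c
```

Increasing the worker count trades strict global ordering for throughput, which mirrors the `instances` setting of a real composable function.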
It integrates natively with the standard Node.js stream library. For higher digital decoupling, the system provides a set of ObjectStream I/O APIs so that a producer can write to a stream before a consumer is ready.

To reduce memory footprint, the system uses the temporary local file system at "/tmp/node/streams" to hold the data blocks of a stream. The temporary data blocks are cleared automatically when a stream is read or closed.
The system supports a base configuration (application.yml) and the developer can use additional configuration files with the "ConfigReader" API. It follows a structured configuration approach similar to Java's Spring Boot.

The core engine does not depend on the local file system. This provides a path to support the Composable design in a browser application in future iterations.
The following parameters are reserved by the system. You can add your application parameters in the main application configuration file (`application.yml`) or apply additional configuration files using the `ConfigReader` API.
| Key | Value (example) | Required |
|---|---|---|
| application.name | Application name | Yes |
| info.app.version | major.minor.build (e.g. 1.0.0) | Yes |
| info.app.description | Something about your application | Yes |
| server.port | e.g. 8083 | Yes |
| static.html.folder | e.g. /tmp/html | Yes |
| yaml.rest.automation | Default value is classpath:/rest.yaml | Optional |
| yaml.mime.types | Optional config file | Optional |
| mime.types | Map of file extensions to MIME types | Optional |
| log.format | text or json | Optional |
| log.level | default 'info' | Optional |
| health.dependencies | e.g. 'database.health' | Optional |
You can place static HTML files (e.g. the HTML bundle for a UI program) in the "resources/public" folder or in the local file system using the "static.html.folder" parameter.

The system supports a minimal list of file extensions to MIME types. If your use case requires additional MIME type mapping, you may define them in the `application.yml` configuration file under the `mime.types` section like this:
```yaml
mime.types:
  pdf: 'application/pdf'
  doc: 'application/msword'
```
Alternatively, you can create a mime-types.yml file and point to it using the "yaml.mime.types" parameter.

The system uses a temp folder in "/tmp/node/streams" to hold temporary data blocks for streaming I/O.

The following route names are reserved by the system.
| Route | Purpose | Modules |
|---|---|---|
| distributed.tracing | Distributed tracing logger | core engine |
| async.http.request | HTTP response event handler | core engine |
| event.api.service | Event API handler | REST automation |
| actuator.services | Admin endpoints (/info, /health, /livenessprobe) | REST automation |
| Header | Purpose |
|---|---|
| X-Stream-Id | Temporal route name for streaming content |
| X-TTL | Time to live in milliseconds for streaming content |
| X-Async | When set to true, indicates a drop-n-forget request |
| X-Trace-Id | Allows the system to propagate a trace ID |
| Chapter-7 | Home | Appendix-II |
|---|---|---|
| Test Driven Development | Table of Contents | Async HTTP client |
The following admin endpoints are available.

```
GET /info
GET /health
GET /livenessprobe
```
| Endpoint | Purpose |
|---|---|
| /info | Describe the application |
| /health | Application health check endpoint |
| /livenessprobe | Check if application is running normally |
You can extend the "/health" endpoint by implementing composable functions to be added to the "health check" dependencies.

```
health.dependencies=database.health, cache.health
```
Your custom health service must respond to two types of requests: an "info" request (header type=info) that describes the service, and a "health" request (header type=health) that returns its health status.

A sample health service is available in the `health-check.ts` class of the hello world project as follows:
```typescript
import { preload, Composable, EventEnvelope, AppException } from 'mercury';

const TYPE = 'type';
const INFO = 'info';
const HEALTH = 'health';

export class DemoHealthCheck implements Composable {

  @preload('demo.health')
  initialize(): DemoHealthCheck {
    return this;
  }

  // Your service should be declared as an async function with input as EventEnvelope
  async handleEvent(evt: EventEnvelope) {
    const command = evt.getHeader(TYPE);
    if (command === INFO) {
      return {'service': 'demo.service', 'href': 'http://127.0.0.1'};
    }
    if (command === HEALTH) {
      // this is a dummy health check
      return {'status': 'demo.service is running fine'};
    }
    throw new AppException(400, 'Request type must be info or health');
  }
}
```
The "async.http.request" function can be used as a non-blocking HTTP client.

To make an HTTP request to an external REST endpoint, you can create an HTTP request object using the `AsyncHttpRequest` class and make an async RPC call to the "async.http.request" function like this:
```typescript
const po = new PostOffice(evt.getHeaders());
const req = new AsyncHttpRequest();
req.setMethod("GET");
req.setHeader("accept", "application/json");
req.setUrl("/api/hello/world?hello world=abc");
req.setQueryParameter("x1", "y");
const list = new Array<string>();
list.push("a");
list.push("b");
req.setQueryParameter("x2", list);
req.setTargetHost("http://127.0.0.1:8083");
const event = new EventEnvelope().setTo("async.http.request").setBody(req);
// po.request is asynchronous - await the promise to obtain the result
const result = await po.request(event, 5000);
// the result is an EventEnvelope
```
For most cases, you can just set a JSON object into the request body and specify content-type as JSON. Example code may look like this:
```typescript
const req = new AsyncHttpRequest();
req.setMethod("POST");
req.setHeader("accept", "application/json");
req.setHeader("content-type", "application/json");
req.setUrl("/api/book");
req.setTargetHost("https://service_provider_host");
req.setBody(jsonKeyValues);
```
For larger payloads, you may use the streaming method. See the sample code below:
```typescript
const stream = new ObjectStreamIO(timeoutInSeconds);
const out = stream.getOutputStream();
out.write(blockOne);
out.write(blockTwo);
// closing the output stream sends an EOF signal to the stream
out.close();
// tell the HTTP client to read the input stream
req.setStreamRoute(stream.getInputStreamId());
```
The AsyncHttpClient service (route name `async.http.request`) uses native Node.js streams to integrate with the underlying Axios HTTP client. It uses the temporary local file system (folder `/tmp/node/streams`) to reduce memory footprint. This makes the producer and consumer of a stream asynchronous, i.e. the producer can write data blocks into a stream before a consumer is available.
If the content length is not given, the response body will be received as a stream. Your application should check whether the HTTP response header "stream" exists. Its value is the input "stream ID".

Sample code to read a stream may look like this:
```typescript
static async downloadFile(streamId: string, filename: string) {
  let n = 0;
  let len = 0;
  const stream = new ObjectStreamReader(streamId, 5000);
  while (true) {
    try {
      const block = await stream.read();
      if (block) {
        n++;
        if (block instanceof Buffer) {
          len += block.length;
          log.info(`Received ${filename}, block-${n} - ${block.length} bytes`);
        }
      } else {
        log.info("EOF reached");
        break;
      }
    } catch (e) {
      const status = e instanceof AppException ? e.getStatus() : 500;
      log.error(`Exception - rc=${status}, message=${e.message}`);
      break;
    }
  }
  return len;
}
```
IMPORTANT: Do not set the "content-length" HTTP header because the system will automatically compute the correct content-length for small payloads. For large payloads, it will use the chunking method.
| Appendix-I | Home |
|---|---|
| Application config | Table of Contents |
Mercury version 4 is a toolkit for writing composable applications.
At the platform level, composable architecture refers to loosely coupled platform services, utilities, and business applications. With modular design, you can assemble platform components and applications to create new use cases or to adjust for an ever-changing business environment and requirements. Domain driven design (DDD), Command Query Responsibility Segregation (CQRS) and Microservices patterns are popular tools that architects use to build composable architecture. You may deploy applications in containers, as serverless functions, or by other means.

At the application level, a composable application is assembled from modular software components or functions that are self-contained and pluggable. You can mix-n-match functions to form new applications. You can retire outdated functions without adverse side effects to a production system. Multiple versions of a function can exist, and you can decide how to route user requests to different versions of a function. Applications would be easier to design, develop, maintain, deploy, and scale.
> Figure 1 - Composable application architecture
As shown in Figure 1, a minimalist composable application consists of three user defined components:
Event choreography: Instead of writing an orchestrator in code, you can deploy Event Script as an engine. Please refer to the composable-application example in the Mercury-Composable project. You can configure an Event-over-HTTP configuration file to connect the Java based Event Script engine to your Node.js application. You can package the Event Script application and your Node.js application into a single container for deployment. Alternatively, you can deploy your Node.js application as a serverless function in the cloud and the Event Script application can execute the serverless functions according to an event flow configuration.
The foundation library includes:
Each application has an entry point. You may implement an entry point in a main application like this:
```typescript
import { Logger, Platform, RestAutomation } from 'mercury';
import { ComposableLoader } from './preload/preload.js';

const log = Logger.getInstance();

async function main() {
  // Load composable functions into memory and initialize configuration management
  ComposableLoader.initialize();
  // start REST automation engine
  const server = new RestAutomation();
  server.start();
  // keep the server running
  const platform = Platform.getInstance();
  platform.runForever();
  log.info('Hello world application started');
}
// run the application
main();
```
For a command line use case, your main application module would get command line arguments and send the request as an event to a business logic function for processing.

For a backend application, the main application is usually used to do some "initialization" or setup steps for your services.
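For the command line use case above, a small helper can translate command line arguments into an event payload before it is wrapped in an EventEnvelope. The sketch below is hypothetical: the route name "cli.task" and the key=value argument convention are illustrative choices, not part of the framework.

```typescript
// Hypothetical mapping from CLI arguments to an event payload.
// The route name 'cli.task' is an example, not a reserved route.
interface CliEvent {
  to: string;                     // target route name
  body: Record<string, string>;   // key=value arguments
}

function buildCliEvent(args: string[]): CliEvent {
  const body: Record<string, string> = {};
  for (const arg of args) {
    const sep = arg.indexOf('=');
    if (sep > 0) {
      body[arg.substring(0, sep)] = arg.substring(sep + 1);
    }
  }
  return { to: 'cli.task', body };
}

// In a real main(), you would wrap this in an EventEnvelope and send it with
// the PostOffice, e.g. po.send(new EventEnvelope().setTo(evt.to).setBody(evt.body))
const evt = buildCliEvent(process.argv.slice(2));
console.log(JSON.stringify(evt));
```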
The `ComposableLoader.initialize()` statement will register your user functions into the event loop. There is no need to directly import each module in your application code.
Your user function module may look like this:
```typescript
import { preload, Composable, EventEnvelope } from 'mercury';

export class HelloWorldService implements Composable {

  @preload('hello.world', 10)
  initialize(): HelloWorldService {
    return this;
  }

  async handleEvent(event: EventEnvelope) {
    // your business logic here
    return someResult;
  }
}
```
Each function in a composable application should be implemented using the first principle of "input-process-output". It should be stateless and self-contained, i.e. it has no direct dependencies on any other functions in the composable application. Each function is addressable by a unique "route name". Input and output can be primitive values or JSON objects, transported using standard event envelopes.

In the above example, the unique "route name" of the function is "hello.world".
You can define `instances`, `isPublic` and `isInterceptor` in the `preload` annotation. The default values are instances=1, isPublic=false and isInterceptor=false. In the example, the number of instances is set to 10. You can set the number of instances from 1 to 500.
> Writing code in the first principle of "input-process-output" promotes Test Driven Development (TDD) because the interface contract is clearly defined. Self-containment means code is more readable.
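Because a composable function follows "input-process-output", its business logic can be unit tested without starting the platform. The sketch below uses a simplified stand-in for EventEnvelope (not the real class) to show the idea:

```typescript
// Simplified stand-in for EventEnvelope, for illustration only.
class FakeEvent {
  constructor(private headers: Record<string, string>) {}
  getHeader(key: string): string | undefined { return this.headers[key]; }
}

// A function written as input-process-output: no shared state, no side effects.
async function greetingHandler(evt: FakeEvent): Promise<string> {
  const name = evt.getHeader('name') ?? 'world';
  return `hello ${name}`;
}

// Test-driven style: assert on output for a given input.
async function runTests() {
  console.assert(await greetingHandler(new FakeEvent({name: 'alice'})) === 'hello alice');
  console.assert(await greetingHandler(new FakeEvent({})) === 'hello world');
}
runTests();
```

Because the handler's contract is just input and output, no HTTP server, event system, or mock transport is needed for the test.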
You can publish a set of composable functions as a library. To import your composable functions from a library, you may add the following in the application.yml configuration file. In this example, it tells the system to search for composable functions in the package called "mercury".
+#
+# To scan libraries for composable functions, use a comma separated text string
+# for a list of library dependencies.
+#
+web.component.scan: 'mercury'
+
The "mercury" package is actually the composable core library. To illustrate this feature, we have added a sample composable function called "no.op" in the NoOp.ts class. When you build the example app using "npm run build", the "preload" step will execute the "generate-preloader.js" script to generate the `preload.ts` class in the "src/preload" folder. The "no.op" composable function will simply echo input as output.
A worked example of the application.yml file is available in the examples/src/resources folder.
A transaction can pass through one or more user functions. In this case, you can write a user function that receives a request from a user, makes requests to other user functions, and consolidates the responses before responding to the user.
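The consolidation step often amounts to fanning out requests in parallel and merging the responses. The sketch below is self-contained: `callService` is a hypothetical stand-in for a real PostOffice RPC call, and the route names are illustrative.

```typescript
// Hypothetical stand-in for a PostOffice RPC call to a named route.
async function callService(route: string, payload: unknown): Promise<Record<string, unknown>> {
  // In a real application this would be something like:
  //   const result = await po.request(new EventEnvelope().setTo(route).setBody(payload), 5000);
  return { route, echo: payload };
}

// Orchestration: fan out to two downstream functions, then consolidate the responses.
async function handleUserRequest(payload: unknown): Promise<Record<string, unknown>> {
  const [profile, orders] = await Promise.all([
    callService('user.profile', payload),
    callService('user.orders', payload)
  ]);
  return { profile, orders };
}

handleUserRequest({ userId: 'u-123' }).then(r => console.log(JSON.stringify(r)));
```

Fanning out with `Promise.all` keeps the orchestration function non-blocking, which fits the event loop model described earlier.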
Note that event orchestration is optional. For example, you can create a BackEnd for FrontEnd (BFF) application simply by writing a composable function and linking it with the built-in REST automation system.

REST automation creates REST endpoints by configuration rather than code. You can define a REST endpoint like this:
```yaml
  - service: "hello.world"
    methods: ['GET']
    url: "/api/hello/world"
    timeout: 10s
```
In this example, when an HTTP request is received at the URL path "/api/hello/world", the REST automation system will convert the HTTP request into an event for onward delivery to the user defined function "hello.world". Your function will receive the HTTP request as input and return a result set that will be sent as an HTTP response to the user.
For more sophisticated business logic, we recommend the use of Event Script for event choreography, as discussed earlier.

The composable engine encapsulates the standard Node.js EventEmitter library for event routing. It exposes the "PostOffice" API for you to write your own event orchestration function to send async or RPC events.

The in-memory event system is designed for point-to-point delivery. In some use cases, you may like to have a broadcast channel so that more than one function can receive the same event, for example, sending notification events to multiple functions. The optional local pub/sub system provides this multicast capability.

While REST is the most popular user facing interface, there are other communication means such as event triggers in a serverless environment. You can write a function to listen to these external event triggers and send the events to your user defined functions. This custom "adapter" pattern is illustrated as the dotted line path in Figure 1.
To visualize what a Composable application is, let's try out the "Hello World" application in Chapter 2.
| Home | Chapter-2 |
|---|---|
| Table of Contents | Hello World application |
Getting started with the "hello world" application in the example sub-project.

You can clone the project like this:
```shell
cd sandbox
git clone https://github.com/Accenture/mercury-nodejs.git
cd mercury-nodejs
cd examples
```
Mercury for Node.js is written in TypeScript. Please install library dependencies using npm first:
```shell
npm install
```
When you enter `npm install`, it will fetch the configured Mercury library from GitHub using package-lock.json. To obtain the latest update, you can do `npm run pull`:
```shell
cd examples
npm run pull
```
If you want to use an earlier release, you can specify the release branch with a hash sign like this:
```shell
npm install https://github.com/Accenture/mercury-nodejs#release/v4.1.1
```
If you are using mercury-nodejs in your organization, we recommend publishing the mercury-nodejs core library to your corporate artifactory.
```shell
npm run build
```
When you build the example app using "npm run build", the "preload" step will execute the "generate-preloader.js" script to generate the `preload.ts` class in the "src/preload" folder. Then it will generate the "dist" folder containing the executable JavaScript files.

You can run the application using `node hello-world.js`. You will see log messages like this:
```shell
% npm run build
> examples@4.1.1 prebuild
> npm run lint
> examples@4.1.1 lint
> eslint . --fix
> examples@4.1.1 build
> npm run preload && tsc -p tsconfig.json && node copy-static-files.js
> examples@4.1.1 preload
> node generate-preloader.js
INFO Loading base configuration from /examples/src/resources/application.yml (config-reader.js:98)
INFO Scanning /examples/node_modules/mercury/dist (scanPackage:generate-preloader.js:19)
INFO Class NoOp (scanPackageJs:generate-preloader.js:71)
INFO Scanning /examples/src (main:generate-preloader.js:193)
INFO Class DemoAuth (scanSourceFolder:generate-preloader.js:95)
INFO Class DemoHealthCheck (scanSourceFolder:generate-preloader.js:95)
INFO Class HelloWorldService (scanSourceFolder:generate-preloader.js:95)
INFO Composable class loader (/preload/preload.ts) generated (generatePreLoader:generate-preloader.js:169)
% cd dist
% node hello-world.js
INFO Loading base configuration from /Users/eric.law/sandbox/mercury-nodejs/examples/dist/resources/application.yml (config-reader.js:98)
INFO Base configuration 2609990e76414441af65af27b65f2cdd (ComposableLoader.initialize:preload.js:40)
INFO Loading NoOp as no.op (descriptor.value:composable.js:18)
INFO Loading DemoAuth as v1.api.auth (descriptor.value:composable.js:18)
INFO Loading DemoHealthCheck as demo.health (descriptor.value:composable.js:18)
INFO Loading HelloWorldService as hello.world (descriptor.value:composable.js:18)
INFO Event system started - 9f2fa4a008534f19a1cb1a3dfe1e3af0 (platform.js:437)
INFO PRIVATE distributed.tracing registered (platform.js:213)
INFO PRIVATE async.http.request registered with 200 instances (platform.js:216)
INFO PRIVATE no.op registered (platform.js:213)
INFO PRIVATE v1.api.auth registered (platform.js:213)
INFO PRIVATE demo.health registered (platform.js:213)
INFO PUBLIC hello.world registered with 10 instances (platform.js:216)
INFO PRIVATE actuator.services registered with 10 instances (platform.js:216)
INFO PRIVATE event.api.service registered with 200 instances (platform.js:216)
INFO PRIVATE rest.automation.manager registered (platform.js:213)
INFO Loaded header_1, request headers, add=0, drop=5, keep=0 (RestEntry.loadHeaderEntry:routing.js:259)
INFO Loaded header_1, response headers, add=4, drop=0, keep=0 (RestEntry.loadHeaderEntry:routing.js:259)
INFO Loaded cors_1 cors headers (*) (RestEntry.loadCors:routing.js:276)
INFO POST /api/event -> event.api.service, timeout=60s, tracing=true (routing.js:513)
INFO OPTIONS /api/event -> event.api.service, timeout=60s (routing.js:507)
INFO GET /api/hello/world -> v1.api.auth -> hello.world, timeout=10s, tracing=true (routing.js:510)
INFO PUT /api/hello/world -> v1.api.auth -> hello.world, timeout=10s, tracing=true (routing.js:510)
INFO POST /api/hello/world -> v1.api.auth -> hello.world, timeout=10s, tracing=true (routing.js:510)
INFO HEAD /api/hello/world -> v1.api.auth -> hello.world, timeout=10s, tracing=true (routing.js:510)
INFO PATCH /api/hello/world -> v1.api.auth -> hello.world, timeout=10s, tracing=true (routing.js:510)
INFO DELETE /api/hello/world -> v1.api.auth -> hello.world, timeout=10s, tracing=true (routing.js:510)
INFO OPTIONS /api/hello/world -> hello.world, timeout=10s (routing.js:507)
INFO POST /api/hello/upload -> hello.world, timeout=15s, tracing=false (routing.js:513)
INFO OPTIONS /api/hello/upload -> hello.world, timeout=15s (routing.js:507)
INFO POST /api/hello/list -> hello.list, timeout=15s, tracing=false (routing.js:513)
INFO OPTIONS /api/hello/list -> hello.list, timeout=15s (routing.js:507)
INFO GET /api/simple/{task}/* -> hello.world, timeout=12s, tracing=false (routing.js:513)
INFO PUT /api/simple/{task}/* -> hello.world, timeout=12s, tracing=false (routing.js:513)
INFO POST /api/simple/{task}/* -> hello.world, timeout=12s, tracing=false (routing.js:513)
INFO OPTIONS /api/simple/{task}/* -> hello.world, timeout=12s (routing.js:507)
WARN trust_all_cert=true for http://127.0.0.1:8086 is not relevant - Do you meant https? (RestEntry.loadRestEntry:routing.js:476)
INFO GET /api/v1/* -> http://127.0.0.1:8086, timeout=20s, tracing=true (routing.js:513)
INFO PUT /api/v1/* -> http://127.0.0.1:8086, timeout=20s, tracing=true (routing.js:513)
INFO POST /api/v1/* -> http://127.0.0.1:8086, timeout=20s, tracing=true (routing.js:513)
INFO OPTIONS /api/v1/* -> http://127.0.0.1:8086, timeout=20s (routing.js:507)
INFO GET /api/hello/download -> hello.download, timeout=20s, tracing=false (routing.js:513)
INFO OPTIONS /api/hello/download -> hello.download, timeout=20s (routing.js:507)
INFO Exact API path [/api/event, /api/hello/download, /api/hello/list, /api/hello/upload, /api/hello/world] (RestEntry.load:routing.js:171)
INFO Wildcard API path [/api/simple/{task}/*, /api/v1/*] (RestEntry.load:routing.js:190)
INFO Static HTML folder: /Users/eric.law/sandbox/mercury-nodejs/examples/dist/resources/public (RestEngine.startHttpServer:rest-automation.js:154)
INFO Loaded 18 mime types (RestEngine.startHttpServer:rest-automation.js:172)
INFO To stop application, press Control-C (EventSystem.runForever:platform.js:517)
INFO Hello world application started (main:hello-world.js:13)
INFO REST automation service started on port 8086 (rest-automation.js:289)
```
Open your browser to visit "http://127.0.0.1:8086". You will see the example application home page like this:

```
Hello World

INFO endpoint
Health endpoint
Demo endpoint
```
When you click the INFO hyperlink, you will see a page like this:

```json
{
  "app": {
    "name": "example-app",
    "version": "4.1.1",
    "description": "Composable application example"
  },
  "memory": {
    "max": "34,093,076,480",
    "free": "17,068,216,320",
    "used": "12,988,104"
  },
  "node": {
    "version": "v22.12.0"
  },
  "origin": "2f2d6abd7b9c4d9d9694b3b900254f7a",
  "time": {
    "current": "2023-12-23 15:54:03.002",
    "start": "2023-12-23 15:49:35.102"
  },
  "uptime": "4 minutes 33 seconds"
}
```
+The health endpoint may look like this:
+{
+ "up": true,
+ "origin": "2f2d6abd7b9c4d9d9694b3b900254f7a",
+ "name": "example-app",
+ "dependency": [
+ {
+ "route": "demo.health",
+ "service": "demo.service",
+ "href": "http://127.0.0.1",
+ "status_code": 200,
+ "message": {
+ "status": "demo.service is running fine"
+ }
+ }
+ ]
+}
+
+When you enter "http://127.0.0.1:8086/api/hello/world" in the browser, you will see this page:
+{
+ "headers": {
+ "upgrade-insecure-requests": "1",
+ "dnt": "1",
+ "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)",
+ "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;",
+ "sec-fetch-site": "same-origin",
+ "sec-fetch-mode": "navigate",
+ "sec-fetch-user": "?1",
+ "sec-fetch-dest": "document",
+ "referer": "http://127.0.0.1:8086/",
+ "accept-language": "en-US,en;q=0.9",
+ "x-flow-id": "hello-world"
+ },
+ "method": "GET",
+ "ip": "127.0.0.1",
+ "url": "/api/hello/world",
+ "timeout": 10,
+ "https": false
+}
+
+When you start the hello world application, you will find this "GET /api/hello/world -> hello.world" in the log, +indicating that REST automation has rendered the endpoint.
+This instructs the REST automation system to route the URI "/api/hello/world" to the function with the route name +"hello.world".
+The function simply echoes back the incoming HTTP request object showing HTTP method, path and headers, etc.
+The "hello.world" function is available as "services/hello-world-service.ts" in the examples/src folder.
+The statement echoing the HTTP request is "return new EventEnvelope(evt)".
+A function can be defined in a class with this template:
+export class HelloWorldService implements Composable {
+
+ @preload('hello.world', 10)
+ initialize(): HelloWorldService {
+ return this;
+ }
+
+ async handleEvent(evt: EventEnvelope) {
+ // your business logic here
+ return someResult;
+ }
+}
+
+The "Composable" interface enforces two methods (initialize and handleEvent). +The "preload" annotation tells the system to load the function into memory so that it can be used +anywhere in your application without tight coupling.
+You can define the route name, instances, isPublic and isInterceptor in the preload annotation.
+The default values are instances=1, isPublic=false and isInterceptor=false. In the example,
+the number of instances is set to 10. You can set the number of instances from 1 to 500.
+Optionally, you can put additional setup code in the "initialize" method.
+If your function has a constructor, the constructor must not take any input arguments.
+When you browse the endpoint "http://127.0.0.1:8086/api/hello/world", you will see a log message like this:
+INFO {"trace":{ "origin":"2f2d6abd7b9c4d9d9694b3b900254f7a",
+ "id":"5bf3cc1aab7647878d7ba91565d4ef9b","path":"GET /api/hello/world",
+ "service":"hello.world","start":"2023-06-09T23:13:23.263Z","success":true,
+ "exec_time":0.538,"round_trip":1.016,"from":"http.request"}
+ }
+
+Mercury has built-in distributed tracing ability. A composable application is by definition event-driven. +Since a transaction may pass through multiple services, distributed tracing helps to visualize the event flows.
+This can pinpoint performance bottlenecks or design flaws early in the development cycle. This contributes to +higher product quality because developers can make adjustments sooner.
+The system comes with the standard "/info", "/health" and "/livenessprobe" admin endpoints.
+Please browse "health-check.ts" as an example of how to write your own health checks. You can have more than one +health check service.
+A composable application is usually deployed as a containerized microservice or a serverless application.
+The resources folder contains the following:
+application.yml
+application.name: 'example-app'
+info.app:
+ version: '4.1.1'
+ description: 'Composable application example'
+
+# server port for Event API REST endpoint
+server.port: 8086
+
+# log.format can be 'text' or 'json'
+log:
+ format: 'text'
+ level: INFO
+
+# You can add optional health checks that point to your custom health check functions
+# (the dependency list is a comma separated list)
+health.dependencies: 'demo.health'
+
+# if you have some composable functions from one or more libraries, web.component.scan
+# should contain a comma separated list of library package names.
+web.component.scan: 'mercury'
+
+Note that you can use "environment variables" in the configuration using the standard dollar-bracket format. +e.g.
+some.key=${MY_ENV_VAR:defaultValue}
+
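The dollar-bracket substitution above can be illustrated with a small standalone sketch. Note that resolveValue is a hypothetical helper written only for this example; it is not the framework's actual Config Reader:

```typescript
// Hypothetical sketch of ${ENV_VAR:defaultValue} resolution.
// Illustrative only - not the framework's actual implementation.
function resolveValue(raw: string, env: Record<string, string>): string {
    return raw.replace(/\$\{([^}:]+)(?::([^}]*))?\}/g, (_m, name, fallback) => {
        // use the environment variable when present, else the default after ':'
        return env[name] ?? fallback ?? '';
    });
}

// resolveValue('${MY_ENV_VAR:defaultValue}', {}) -> 'defaultValue'
// resolveValue('${MY_ENV_VAR:defaultValue}', { MY_ENV_VAR: 'prod' }) -> 'prod'
```

When the environment variable is not defined and no default is given, this sketch substitutes an empty string.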
+The minimal set of parameters required by the system is shown above. You can add application specific parameters.
+The application.name, info.app.version, info.app.description, server.port, log.format, log.level +and health.dependencies are required.
+You may use the example app as a template to write your own composable application.
+Before you write new user functions, please reset the example project with the "clean" command.
+To obtain the latest update, you can do "npm run pull".
cd examples
+npm run clean
+
+This will clean up the compiled code and reset the preload.ts file to an initial state. You may then create
+your main class from hello-world.ts and your own functions in the services folder. Remember to update
+the application.yml, rest.yaml and index.html page accordingly.
+Note: If you do not "clean" the example project, compilation would fail due to broken imports.
+Chapter-1 | Home | Chapter-3
---|---|---
Introduction | Table of Contents | REST automation
The foundation library contains a built-in non-blocking HTTP server that you can use to create REST +endpoints. Behind the scenes, it uses the Express server library, extended to support dynamic creation +of REST endpoints.
+The REST automation system is not a code generator. The REST endpoints in the rest.yaml file are handled by +the system directly - "Config is the code".
+We will use the "rest.yaml" sample configuration file in the "hello world" example app to elaborate the configuration +approach.
+The rest.yaml configuration has three sections:
+REST automation is optional. To turn on REST automation, add the REST automation start up script in your main app:
+import { Logger, Platform, RestAutomation } from 'mercury';
+import { ComposableLoader } from '../preload/preload.js';
+...
+async function main() {
+ ComposableLoader.initialize();
+ const server = new RestAutomation();
+ server.start();
+}
+main();
+
+Note that the file "preload.ts" is automatically generated when you do "npm run preload" or "npm run build". +The compiled file is located at "dist/preload/preload.js". Therefore, you use the import statement for +'../preload/preload.js'.
+Please review the "hello-world.ts" for more details.
+The yaml.rest.automation parameter in the application.yml file tells the system the location of the rest.yaml
+configuration file. The default value is "classpath:/rest.yaml". The classpath:/ prefix means that the config
+file is available under the "src/resources" folder in your project. If you want the rest.yaml configuration
+file to be externalized to the local file system, you can use the file:/ prefix. e.g. "file:/tmp/config/rest.yaml".
yaml.rest.automation: 'classpath:/rest.yaml'
+
+The "rest" section of the rest.yaml configuration file may contain one or more REST endpoints.
+A REST endpoint may look like this:
+ - service: ["hello.world"]
+ methods: ['GET', 'PUT', 'POST', 'HEAD', 'PATCH', 'DELETE']
+ url: "/api/hello/world"
+ timeout: 10s
+ cors: cors_1
+ headers: header_1
+ threshold: 30000
+ authentication: 'v1.api.auth'
+ tracing: true
+
+In this example, the URL for the REST endpoint is "/api/hello/world" and it accepts a list of HTTP methods. +When an HTTP request is sent to the URL, the HTTP event will be sent to the function declared with service +route name "hello.world". The input event "body" will be an "AsyncHttpRequest" object. You can retrieve HTTP +metadata such as method, url path, HTTP request headers from the object.
+The "timeout" value is the maximum time that the REST endpoint will wait for a response from your function. +If there is no response within the specified time interval, the user will receive an HTTP-408 timeout exception.
+The "authentication" tag is optional. If configured, the route name given in the authentication tag will be used. +The input event will be delivered to a function with the authentication route name. In this example, it is +"v1.api.auth".
+Your custom authentication function may look like this:
+export class DemoAuth implements Composable {
+
+ @preload('v1.api.auth')
+ initialize(): DemoAuth {
+ return this;
+ }
+
+ async handleEvent(evt: EventEnvelope) {
+ const req = new AsyncHttpRequest(evt.getBody() as object);
+ const method = req.getMethod();
+ const url = req.getUrl();
+ log.info(`${method} ${url} authenticated`);
+ // this is a demo so we approve all requests
+ return true;
+ }
+}
+
+Your authentication function can return a boolean value to indicate if the request should be accepted or rejected. +Optionally, you can also return an EventEnvelope containing a boolean body and a set of key-values in the headers.
+If true, the system will send the HTTP request to the service. In this example, it is the "hello.world" function. +If false, the user will receive an "HTTP-401 Unauthorized" exception.
+Optionally, you can use the authentication function to return some session information after authentication. +For example, your authentication can forward the "Authorization" header of the incoming HTTP request to your +organization's OAuth 2.0 Identity Provider for authentication.
+To return session information to the next function, the authentication function can return an EventEnvelope. +It can set the session information as key-values in the response event headers.
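The idea can be sketched with a plain object standing in for the response EventEnvelope. Everything below (the AuthDecision shape and the bearer-token rule) is illustrative, not part of the framework's API:

```typescript
// Illustrative sketch only: a plain object models the body/headers of the
// EventEnvelope that an authentication function would return.
interface AuthDecision {
    body: boolean;                    // true = accept, false = reject (HTTP-401)
    headers: Record<string, string>;  // session info for the downstream function
}

function authorize(authHeader: string | undefined): AuthDecision {
    // demo rule: accept any request carrying a bearer token
    if (authHeader && authHeader.startsWith('Bearer ')) {
        return { body: true, headers: { 'x-session-id': authHeader.slice(7) } };
    }
    return { body: false, headers: {} };
}
```

In a real authentication function, the headers map would carry session attributes resolved from your identity provider rather than the raw token.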
+You can test this by visiting http://127.0.0.1:8086/api/hello/world to invoke the "hello.world" function.
+The console will print:
+INFO {"trace":{"origin":"11efb0d8fcff4924b90aaf738deabed0",
+ "id":"4dd5db2e64b54eef8746ab5fbb4489a3","path":"GET /api/hello/world",
+ "service":"v1.api.auth","start":"2023-06-10T00:01:07.492Z","success":true,
+ "exec_time":0.525,"round_trip":0.8,"from":"http.request"}} (handleEvent:tracer.js:27)
+INFO HTTP-200 GET /api/hello/world (RestEngine.relayRequest:rest-automation.js:604)
+INFO {"trace":{"origin":"11efb0d8fcff4924b90aaf738deabed0",
+ "id":"4dd5db2e64b54eef8746ab5fbb4489a3","path":"GET /api/hello/world",
+ "service":"hello.world","start":"2023-06-10T00:01:07.495Z","success":true,
+ "exec_time":0.478,"round_trip":1.238,"from":"http.request"}} (handleEvent:tracer.js:27)
+
+This illustrates that the HTTP request has been processed by the "v1.api.auth" function.
+The tracing tag tells the system to turn on "distributed tracing". In the console log shown above, you see
+two lines of log from "distributed trace" showing that the HTTP request is processed by "v1.api.auth" and
+"hello.world" before returning the result to the browser.
+The optional cors and headers tags point to specific CORS and HEADERS sections respectively.
For ease of development, you can define CORS headers using the CORS section like this.
+This is a convenient feature for development. For cloud native production system, it is most likely that +CORS processing is done at the API gateway level.
+You can define different sets of CORS headers using different IDs.
+cors:
+ - id: cors_1
+ options:
+ - "Access-Control-Allow-Origin: ${api.origin:*}"
+ - "Access-Control-Allow-Methods: GET, DELETE, PUT, POST, PATCH, OPTIONS"
+ - "Access-Control-Allow-Headers: Origin, Authorization, X-Session-Id, X-Correlation-Id,
+ Accept, Content-Type, X-Requested-With"
+ - "Access-Control-Max-Age: 86400"
+ headers:
+ - "Access-Control-Allow-Origin: ${api.origin:*}"
+ - "Access-Control-Allow-Methods: GET, DELETE, PUT, POST, PATCH, OPTIONS"
+ - "Access-Control-Allow-Headers: Origin, Authorization, X-Session-Id, X-Correlation-Id,
+ Accept, Content-Type, X-Requested-With"
+ - "Access-Control-Allow-Credentials: true"
+
+The HEADERS section is used to do simple transformations of HTTP request and response headers.
+You can add, keep or drop headers for the HTTP request and response. A sample HEADERS section is shown below.
+headers:
+ - id: header_1
+ request:
+ #
+ # headers to be inserted
+ # add: ["hello-world: nice"]
+ #
+ # keep and drop are mutually exclusive where keep has precedence over drop
+ # i.e. when keep is not empty, it will drop all headers except those to be kept
+ # when keep is empty and drop is not, it will drop only the headers in the drop list
+ # e.g.
+ # keep: ['x-session-id', 'user-agent']
+ # drop: ['Upgrade-Insecure-Requests', 'cache-control', 'accept-encoding', 'host', 'connection']
+ #
+ drop: ['Upgrade-Insecure-Requests', 'cache-control', 'accept-encoding', 'host', 'connection']
+
+ response:
+ #
+ # the system can filter the response headers set by a target service,
+ # but it cannot remove any response headers set by the underlying servlet container.
+ # However, you may override non-essential headers using the "add" directive.
+ # i.e. don't touch essential headers such as content-length.
+ #
+ # keep: ['only_this_header_and_drop_all']
+ # drop: ['drop_only_these_headers', 'another_drop_header']
+ #
+ # add: ["server: mercury"]
+ #
+ # You may want to add cache-control to disable browser and CDN caching.
+ # add: ["Cache-Control: no-cache, no-store", "Pragma: no-cache",
+ # "Expires: Thu, 01 Jan 1970 00:00:00 GMT"]
+ #
+ add:
+ - "Strict-Transport-Security: max-age=31536000"
+ - "Cache-Control: no-cache, no-store"
+ - "Pragma: no-cache"
+ - "Expires: Thu, 01 Jan 1970 00:00:00 GMT"
+
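The keep/drop precedence described in the comments above can be sketched as a small filter. The filterHeaders helper is hypothetical, written only to illustrate the rule:

```typescript
// Hypothetical illustration of the keep/drop rule: a non-empty "keep" list
// wins and drops everything else; otherwise only the "drop" list is removed.
function filterHeaders(
    headers: Record<string, string>,
    keep: string[],
    drop: string[]
): Record<string, string> {
    const keepSet = new Set(keep.map(h => h.toLowerCase()));
    const dropSet = new Set(drop.map(h => h.toLowerCase()));
    const result: Record<string, string> = {};
    for (const [k, v] of Object.entries(headers)) {
        // header names are case-insensitive, so compare in lower case
        const key = k.toLowerCase();
        if (keepSet.size > 0 ? keepSet.has(key) : !dropSet.has(key)) {
            result[k] = v;
        }
    }
    return result;
}
```

For example, with keep = ['x-session-id'] every other header is dropped, matching the precedence described in the comments of the sample configuration.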
+The "threshold" parameter in the REST endpoint definition is not supported in the Node.js version.
+In the Java version, the underlying HTTP server is the Vert.x HTTP server. The HTTP request body is handled as a stream. +When content length is given, the REST automation engine will render the input as a byte array if the length +is less than the threshold value. Otherwise, it will render it as a stream for a user function to read.
+In the Node.js version, the underlying HTTP server is Express. We have configured the bodyParser to render +HTTP request body in this order:
+Chapter-2 | Home | Chapter-4
---|---|---
Hello World | Table of Contents | Event orchestration
In traditional programming, we can write modular software components and wire them together as a single application. +There are many ways to do that. You can rely on a "dependency injection" framework. In many cases, you would need +to write orchestration logic to coordinate how the various components talk to each other to process a transaction.
+In a composable application, you write modular functions using the first principle of "input-process-output".
+Functions communicate with each other using events and each function has a "handleEvent" method to process "input" +and return a result as "output". Writing software components according to this first principle makes Test Driven Development (TDD) +straightforward. You can write mock functions and unit tests before you put in actual business logic.
+Mocking an event-driven function in a composable application is as simple as overriding the function's route name +with a mock function.
+There are two ways to register a function:
+In the declarative approach, you use the preload annotation to register a class with an event handler like this:
export class HelloWorldService implements Composable {
+
+ @preload('hello.world', 10)
+ initialize(): HelloWorldService {
+ return this;
+ }
+
+ async handleEvent(evt: EventEnvelope) {
+ // your business logic here
+ return someResult;
+ }
+}
+
+You can define the route name, instances, isPublic and isInterceptor in the preload annotation. The default values are
+instances=1, isPublic=false and isInterceptor=false. In the example, the number of instances is set to 10.
+You can set the number of instances from 1 to 500.
Once a function is created using the declarative method, you can override it with a mock function by using the +programmatic approach in a unit test.
+In the programmatic approach, you can register a composable class like this:
+const platform = Platform.getInstance();
+platform.register('my.function', new HelloWorld(), 10);
+
+In the above example, you obtain a singleton instance of the Platform API class and use it to register
+the HelloWorld.ts class with the route name my.function and up to 10 concurrent worker instances.
+Note that the class must implement the Composable interface and you must not use the preload annotation
+in the initialize() method if you want to register the function programmatically.
In both declarative and programmatic approaches, the initialize method may contain additional setup +code for your function.
+A private function is visible to other functions in the same application memory space.
+A public function is accessible by functions in other application instances using the +"Event over HTTP" method. We will discuss inter-container communication in Chapter-5.
+The number of concurrent workers for a function is defined in the "instances" parameter.
+When you set "instances" to one, the function will be declared as a singleton.
+When you declare a function as an interceptor, the system will ignore the return value from the function.
+Usually, an interceptor function uses the PostOffice's send API to forward the incoming event to +the downstream function(s). In some use cases, you may use the interceptor to conditionally return a value +by sending the result set to the "reply to" address.
+To send an asynchronous event or an event RPC call from one function to another, you can use the PostOffice APIs.
+For example,
+async handleEvent(evt: EventEnvelope) {
+ const po = new PostOffice(evt.headers());
+ const req = new EventEnvelope().setTo(HELLO_WORLD_SERVICE).setBody(TEST_MESSAGE);
+ const result = await po.request(req, 3000);
+ ...
+
+Note that the input to the PostOffice is the incoming event's headers. The PostOffice API detects if tracing +is enabled in the incoming request. If yes, it will propagate tracing information to "downstream" functions.
+Pattern | Usage
---|---
RPC ("Request-response") | Best for interactivity
Asynchronous | e.g. Drop-n-forget
Callback | e.g. Progressive rendering
Pipeline | e.g. Workflow application
Streaming | e.g. File transfer
+In enterprise applications, RPC is the most common pattern for making a call from one function to another.
+The "calling" function makes a request and waits for the response from the "called" function. +There are two code patterns for RPC.
+To wait for a response, you can use the "await" keyword since your function has been declared as "async".
+const result = await po.request(req, 3000);
+
+Alternatively, you can use the Promise pattern:
+po.request(req, 3000)
+ .then(event => {
+ // handle the response
+ })
+ .catch(e => {
+ // handle exception
+ });
+
+You can declare another function as a "callback". When you send a request to another function, you can set the +"replyTo" address in the request event. When a response is received, your callback function will be invoked to +handle the response event.
+const request = new EventEnvelope().setTo('hello.world')
+ .setBody('test message').setReplyTo('my.callback');
+po.send(request);
+
+In the above example, you have a callback function with route name "my.callback". You send the request event +with a JSON object as payload to the "hello.world" function. When a response is received, the "my.callback" +function will get the response as input.
+Pipeline is a linked list of event calls. There are many ways to implement a pipeline. One way is to keep the pipeline plan +in an event's header and pass the event across multiple functions where you can set the "replyTo" address from the +pipeline plan. You should handle exception cases when a pipeline breaks in the middle of a transaction.
+An example of the pipeline header key-value may look like this:
+pipeline=service.1, service.2, service.3, service.4, service.5
+
+In the above example, when the pipeline event is received by a function, the function can check its position +in the pipeline by comparing its own route name with the pipeline plan.
+In a function, you can retrieve its own route name like this:
+const myRoute = evt.getHeader('my_route');
+
+The "my_route" header is a metadata inserted by the system.
+Suppose myRoute is "service.2", the function can send the response event to "service.3". +When "service.3" receives the event, it can send its response event to the next one. i.e. "service.4".
+When the event reaches the last service ("service.5"), the processing will complete.
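The position check can be sketched as a small standalone helper. nextInPipeline is a hypothetical function written for this illustration; it is not part of the framework:

```typescript
// Hypothetical helper: given a pipeline plan from the event header and this
// function's own route name, find the next service to forward the event to.
function nextInPipeline(plan: string, myRoute: string): string | null {
    const routes = plan.split(',').map(s => s.trim());
    const pos = routes.indexOf(myRoute);
    // null when this function is not in the plan or is the last service
    if (pos < 0 || pos === routes.length - 1) {
        return null;
    }
    return routes[pos + 1];
}

// nextInPipeline('service.1, service.2, service.3', 'service.2') -> 'service.3'
// nextInPipeline('service.1, service.2, service.3', 'service.3') -> null
```

A real pipeline function would set the returned route as the "replyTo" address of its response event, and treat null as the end of the pipeline.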
+If you set a function as a singleton (i.e. one worker instance), it will receive events in an orderly fashion. +This way you can "stream" events to the function, and it will process the events one by one.
+Another means to do streaming is to create an "ObjectStreamIO" event stream like this:
+const stream = new ObjectStreamIO(60);
+const out = new ObjectStreamWriter(stream.getOutputStreamId());
+out.write(messageOne);
+out.write(messageTwo);
+out.close();
+
+const streamId = stream.getInputStreamId();
+// pass the streamId to another function
+
+In the code segment above, your function creates an object event stream and writes 2 messages into the stream. +It obtains the streamId of the event stream and sends it to another function. The other function can read the +data blocks in order.
+You must declare "end of stream" by closing the output stream. If you do not close an output stream, +it remains open and idle. If a function is trying to read an input stream using the stream ID and the +next data block is not available, it will time out.
+A stream will be automatically closed when the idle inactivity timer is reached. In the above example, +ObjectStreamIO(60) means an idle inactivity timer of 60 seconds.
+In another function, it may read the input stream like this:
+const stream = new ObjectStreamReader(streamId, 5000);
+while (someCondition) {
+ const b = await stream.read();
+ if (b instanceof Buffer) {
+ // process the data block
+ }
+ if (b == null) {
+ // this means EOF - the stream will be closed automatically
+ break
+ }
+}
+
+You can browse the "hello-world-service.ts" for the file upload and download statements to examine the +streaming code patterns.
+Mercury streams use the temporary folder "/tmp/node/streams" to hold data blocks. +The temporary data blocks are cleaned once they are read by a function.
+In your functions, you can send/receive JSON objects, bytes (Buffer) and text (string) with the object stream system.
+For REST automation, it uses only Buffer and string.
+Once you have implemented modular functions in a self-contained manner, the best practice is to write one or more +functions to do "event orchestration".
+Think of the orchestration function as a music conductor who guides the whole team to perform.
+For event orchestration, your function can be the "conductor" that sends events to the individual functions so that +they operate together as a single application. To simplify design, the best practice is to apply event orchestration +for each transaction or use case. The event orchestration function also serves as a living documentation about how +your application works. It makes your code more readable.
+For more sophisticated application design, you may use the Event Script engine in the +Mercury-Composable project to do event choreography for your +composable functions in your Node.js application.
+Chapter-3 | Home | Chapter-5
---|---|---
REST automation | Table of Contents | Event over HTTP
The in-memory event system allows functions to communicate with each other in the same application memory space.
+In composable architecture, applications are modular components in a network. Some transactions may require +the services of more than one application. "Event over HTTP" extends the event system beyond a single application.
+The Event API service (event.api.service) is a built-in function in the system.
+To enable "Event over HTTP", you must first turn on the REST automation engine with the following parameter +in the application.yml file:
+server.port: 8086
+
+and then add the following entry to the "rest.yaml" endpoint definition file if it is not already there. +The "timeout" value is set to 60 seconds to fit common use cases.
+ - service: [ "event.api.service" ]
+ methods: [ 'POST' ]
+ url: "/api/event"
+ timeout: 60s
+ tracing: true
+
+This will expose the Event API endpoint at port 8086 and URL "/api/event".
+In Kubernetes, the Event API endpoint of each application is reachable through internal DNS and there is no need +to create an "ingress" for this purpose.
+You may now test drive the Event API service.
+First, build and run the hello world example application on port 8086.
+cd examples/dist
+node hello-world.js
+
+Second, build and run the rpc-to-service application.
+cd examples/dist/extra
+node rpc-to-service.js
+
+The rpc-to-service application will connect to the hello world application and make requests to the "hello.world" +service there.
+$ node rpc-to-service.js
+INFO Event system started - ed28f069afc34647b7afc5e762522e9f (platform.js:441)
+INFO PRIVATE distributed.tracing registered (platform.js:215)
+INFO PRIVATE async.http.request registered with 200 instances (platform.js:218)
+INFO Platform ed28f069afc34647b7afc5e762522e9f ready (main:rpc-to-service.js:10)
+INFO Payload match? true (main:rpc-to-service.js:20)
+INFO Received 1 (main:rpc-to-service.js:21)
+INFO Payload match? true (main:rpc-to-service.js:20)
+INFO Received 2 (main:rpc-to-service.js:21)
+INFO Payload match? true (main:rpc-to-service.js:20)
+INFO Received 3 (main:rpc-to-service.js:21)
+INFO Demo application completed (main:rpc-to-service.js:29)
+
+The rpc-to-service application makes its requests using the "await po.remoteRequest()" API.
+Since rpc-to-service is not a service itself, it runs as a standalone command line application. +It provides the "tracing" metadata in the PostOffice like this:
+const REMOTE_EVENT_ENDPOINT = 'http://127.0.0.1:8086/api/event';
+const po = new PostOffice({ 'my_route': 'rpc.demo', 'my_trace_id': '200', 'my_trace_path': '/api/remote/rpc' });
+...
+const result = await po.remoteRequest(req, REMOTE_EVENT_ENDPOINT);
+
+This illustrates that you can write both command line applications and service applications using the Mercury-nodejs +toolkit.
+The Event API exposes all public functions of an application instance to the network using a single REST endpoint.
+The advantages of the Event API include:
+The following configuration adds authentication service to the Event API endpoint:
+ - service: [ "event.api.service" ]
+ methods: [ 'POST' ]
+ url: "/api/event"
+ timeout: 60s
+ authentication: "v1.api.auth"
+ tracing: true
+
+This requires every incoming request to the Event API endpoint to be authenticated by the "v1.api.auth" service +before it is passed to the Event API service. You can plug in your own authentication service. For example, OAuth 2.0 +"bearer token" validation.
+Please refer to Chapter-3 - REST automation for details.
+
Chapter-4 | Home | Chapter-6
---|---|---
Event orchestration | Table of Contents | API overview
Each application has an entry point. You may implement the main entry point like this:
+import { Logger, Platform, RestAutomation } from 'mercury';
+import { ComposableLoader } from './preload/preload.js';
+
+const log = Logger.getInstance();
+
+async function main() {
+ // Load composable functions into memory and initialize configuration management
+ ComposableLoader.initialize();
+ // start REST automation engine
+ const server = new RestAutomation();
+ server.start();
+ // keep the server running
+ const platform = Platform.getInstance();
+ platform.runForever();
+ log.info('Hello world application started');
+}
+
+// run the application
+main();
+
+In this example, the ComposableLoader will initialize the configuration management system,
+then search for and register available user functions into the event system. The default location
+of the system files is the "src/resources" folder.
File / bundle | Purpose
---|---
application.yml | Base configuration file, assumed to be under the "src/resources" folder
rest.yaml | REST endpoint configuration file, assumed to be under the "src/resources" folder
HTML bundle | HTML/CSS/JS files, if any, can be placed under the "src/resources/public" folder
+To tell the system to use a different application.yml, you can use the following statement before
+running the ComposableLoader.initialize() command.
// resourcePath should be a fully qualified file path to the application's "resources" folder.
+const appConfig = AppConfig.getInstance(resourcePath);
+log.info(`Base configuration ${appConfig.getId()}`);
+
+You may override the file path for REST endpoint configuration and HTML bundle with the following:
+yaml.rest.automation: 'classpath:/rest.yaml'
+static.html.folder: 'classpath:/public'
+
+To enable the REST automation engine, you must use the server.start() command.
+To run the application as a service, use the platform.runForever() command. The application can be +stopped with Control-C in interactive mode or by a kill signal from a container +management system such as Kubernetes.
+A composable application is a collection of functions that communicate with each other through events. +Each event is transported by an event envelope. Let's examine the envelope.
+There are 3 elements in an event envelope:
Element | Type | Purpose
---|---|---
1 | metadata | Includes unique ID, target function name, reply address, correlation ID, status, exception, trace ID and path
2 | headers | User defined key-value pairs
3 | body | Event payload (primitive or JSON object)
Headers and body are optional, but you must provide at least one of them.
+To reject an incoming request, you can throw an AppException like this:
+throw new AppException(400, "My custom error message");
+
+As a best practice, we recommend using error codes that are compatible with HTTP status codes.
+You can write a function like this:
+import { preload, Composable, EventEnvelope, AsyncHttpRequest, Logger } from 'mercury';
+
+const log = Logger.getInstance();
+
+export class DemoAuth implements Composable {
+
+ @preload('v1.api.auth', 5)
+ initialize(): DemoAuth {
+ return this;
+ }
+
+ async handleEvent(evt: EventEnvelope) {
+ const req = new AsyncHttpRequest(evt.getBody() as object);
+ const method = req.getMethod();
+ const url = req.getUrl();
+ log.info(`${method} ${url} authenticated`);
+ // this is a demo so we approve all requests
+ return true;
+ }
+}
+
+You can define the route name, instances, isPublic and isInterceptor in the preload annotation.
+The default values are instances=1, isPublic=false and isInterceptor=false. In the example,
+the number of instances is set to 5. You can set the number of instances from 1 to 500.
The above example is a demo "API authentication" function. The event body is an AsyncHttpRequest object +from the user because the "rest.yaml" routes the HTTP request to the function via its unique "route name".
+There are some reserved metadata for route name ("my_route"), trace ID ("my_trace_id") and trace path ("my_trace_path") +in the event's headers. They do not exist in the incoming event envelope. The system automatically +inserts them as read-only metadata.
+You may inspect other event metadata such as the replyTo address and correlation ID.
+Note that the "replyTo" address is optional. It only exists when the caller is making an RPC request or callback to +your function. If the caller sends an asynchronous drop-n-forget request, the "replyTo" value is null.
+You can obtain a singleton instance of the Platform object to do the following:
+We recommend using the ComposableLoader to search and load your functions.
+In some use cases where you want to create and destroy functions on demand, you can register them programmatically.
+A public function is visible to any application instance in the same network. When a function is declared as "public", the function is reachable through the Event-over-HTTP API REST endpoint.
+A private function is invisible outside the memory space of the application instance in which it resides. This allows an application to encapsulate business logic according to domain boundaries. You can assemble closely related functions as a composable application that can be deployed independently.
+In some use cases, you want to release a function on-demand when it is no longer required.
+platform.release("another.function");
+
+The above API will unload the function from memory and release it from the "event loop".
+When an application instance starts, a unique ID is generated. We call this the "Origin ID".
+const originId = po.getOrigin();
+
+You can obtain an instance of the PostOffice using the "headers" argument of your function's handleEvent method.
+const po = new PostOffice(evt.getHeaders());
+
+The PostOffice is the event manager that you can use to send asynchronous events or to make RPC requests. The constructor uses the read-only metadata in the "headers" argument of the "handleEvent" method of your function.
+You can check if a function with the named route has been deployed.
+if (po.exists("another.function")) {
+ // do something
+}
+
+Since a composable function is executed as an anonymous function, the "this" reference is protected inside the functional scope and no longer refers to the class instance. To invoke other methods in the same class that holds the composable function, use the "getMyClass()" API.
+async handleEvent(evt: EventEnvelope) {
+    const request = new AsyncHttpRequest(evt.getBody() as object);
+    const po = new PostOffice(evt.getHeaders());
+    const self = po.getMyClass() as HelloWorldService;
+    // business logic here
+    const len = await self.downloadFile(request.getStreamRoute(), request.getFileName());
+}
+
+In the above example, HelloWorldService is the Composable class and downloadFile is a non-static method in the same class. Note that you must use the event headers to instantiate the PostOffice object.
The following code segment demonstrates that you can retrieve the function's route name, worker number, +optional traceId and tracePath.
+async handleEvent(evt: EventEnvelope) {
+ const po = new PostOffice(evt.getHeaders());
+ const route = po.getMyRoute();
+ const workerNumber = po.getMyInstance();
+ const traceId = po.getMyTraceId();
+ const tracePath = po.getMyTracePath();
+ // processing logic here
+}
+
+You can send an asynchronous event like this.
+// example-1
+const event = new EventEnvelope().setTo('hello.world').setBody('test message');
+po.send(event);
+
+// example-2
+po.sendLater(event, 5000);
+
+You can make an RPC call like this:
+// example-1
+const event = new EventEnvelope().setTo('hello.world').setBody('test message');
+const result = await po.request(event, 5000);
+
+// example-2
+const result = await po.remoteRequest(event, 'http://peer/api/event');
+
+// API signatures
+request(event: EventEnvelope, timeout = 60000): Promise<EventEnvelope>
+remoteRequest(event: EventEnvelope, endpoint: string,
+ securityHeaders: object = {}, rpc=true, timeout = 60000): Promise<EventEnvelope>
+
+"Event over HTTP" is an important topic. Please refer to Chapter 5 for more details.
+If you want to know the route name and optional trace ID and path, you can inspect the incoming event headers.
+const po = new PostOffice(evt.getHeaders());
+const myRoute = po.getMyRoute();
+const traceId = po.getMyTraceId();
+const tracePath = po.getMyTracePath();
+const myInstance = po.getMyInstance();
+
+Your function can access the main application configuration management system like this:
+const config = AppConfig.getInstance().getReader();
+// the value can be string or a primitive
+const value = config.get('my.parameter');
+// the return value will be converted to a string
+const text = config.getProperty('my.parameter');
+
+The system uses the standard dot-bracket format for a parameter name.
+++e.g. "hello.world", "some.key[2]"
+
You can also override the main application configuration using the set
method.
Additional configuration files can be added with the ConfigReader
API like this:
const myConfig = new ConfigReader(filePath);
+
+where filePath can use the classpath:/
or file:/
prefix.
+The configuration system supports environment variables and references to the main application configuration using the dollar-bracket syntax ${reference:default_value}.
++e.g. "some.key=${MY_ENV_VARIABLE}", "some.key=${my.key}"
+
You can override any configuration parameter from the command line when starting your application.
+node my-app.js -Dsome.key=some_value -Danother.key=another_value
+
+You can point your application to use a different base configuration file like this:
+node my-app.js -C/opt/config/application.yml
+
+The -C
command line argument tells the system to use the configuration file in "/opt/config/application.yml".
++Exercise: try this command "node hello-world.js -Dlog.format=json" to start the demo app
+
This will tell the Logger system to use JSON format instead of plain text output. The log output may look like this:
+{
+ "time": "2023-06-10 09:51:20.884",
+ "level": "INFO",
+ "message": "Event system started - 9f5c99c4d21a42cfb0115cfbaf533820",
+ "module": "platform.js:441"
+}
+{
+ "time": "2023-06-10 09:51:21.037",
+ "level": "INFO",
+ "message": "REST automation service started on port 8085",
+ "module": "rest-automation.js:226"
+}
+
+The system includes a built-in logger that can log in either text or JSON format.
+The default log format is "text". You can override the value in the "src/resources/application.yml" config file. +The following example sets the log format to "json".
+log.format: json
+
+Alternatively you can also override it at run-time using the "-D" parameter like this:
+node my-app.js -Dlog.format=json
+
+The logger supports line-numbering. When you run your executable javascript main program, the line number for each +log message is derived from the ".js" file.
+If you want to show the line number in the source ".ts" file for easy debug, you can run your application using +"nodemon". This is illustrated in the "npm start" command in the package.json file.
+For simplicity, the logger is implemented without any additional library dependencies.
+As a best practice, we advocate a minimalist approach in API integration. +To build powerful composable applications, the above set of APIs is sufficient to perform +"event orchestration" where you write code to coordinate how the various functions work together as a +single "executable". Please refer to Chapter-4 for more details about event orchestration.
+Since Mercury is used in production installations, we will exercise the best effort to keep the core API stable.
+Other APIs in the toolkits are used internally to build the engine itself, and they may change from time to time. +They are mostly convenient methods and utilities. The engine is fully encapsulated and any internal API changes +are not likely to impact your applications.
+To further reduce coding effort, you can perform "event orchestration" by configuration using "Event Script".
+Mercury libraries are designed to co-exist with your favorite frameworks and tools. Inside a class implementing +a composable function, you can use any coding style and frameworks as you like, including sequential, object-oriented +and reactive programming styles.
+Mercury has a built-in lightweight non-blocking HTTP server based on Express, but you can also use other application server frameworks with it.
+You can use the hello world
project as a template to start writing your own applications.
This project is licensed under the Apache 2.0 open-source license. We will update the public codebase after it passes regression tests and meets stability and performance benchmarks in our production systems.
+The source code is provided as is, meaning that breaking API changes may be introduced from time to time.
+For enterprise clients, technical support is available. Please contact your Accenture representative +for details.
+Chapter-5 | +Home | +Chapter-7 | +
---|---|---|
Event over HTTP | +Table of Contents | +Test Driven Development | +
The example project is pre-configured with "esLint" for TypeScript syntax validation and Jest testing framework.
+Composable application is designed to be Test Driven Development (TDD) friendly.
+There are two test suites under the "examples/test" folder: one for unit tests and one for end-to-end tests.
+Before running the tests, please build your application first. The E2E tests run against the build in the dist folder. Also make sure no other application is already running on the configured port.
+npm run build # if you have not built it yet
+npm test
+
+Since each user function is written in the first principle "input-process-output", you can write unit tests +to validate the interface contract of each function directly.
+For the unit tests, the setup and tear down steps are as follows:
+ beforeAll(async () => {
+ ComposableLoader.initialize();
+ platform = Platform.getInstance();
+ platform.runForever();
+ });
+
+ afterAll(async () => {
+ await platform.stop();
+ // give console.log a moment to finish
+ await util.sleep(1000);
+ log.info("Service tests completed");
+ });
+
+In the setup step, it tells the system to load the user functions into the event loop using
+ComposableLoader.initialize() and to set up configuration management.
In the tear down step, it instructs the system to stop gracefully.
+A typical unit test uses the RPC method to send a request to a route served by a specific user function.
+it('can do health check', async () => {
+ const po = new PostOffice();
+ const req = new EventEnvelope().setTo('demo.health').setHeader('type', 'health');
+ const result = await po.request(req, 2000);
+ expect(result).toBeTruthy();
+ expect(result.getBody()).toEqual({"status": "demo.service is running fine"});
+});
+
+For end-to-end test, you can import and start your main application in the unit test like this:
+import '../src/hello-world.js';
+
+The setup and tear down steps are shown below:
+beforeAll(async () => {
+ const platform = Platform.getInstance();
+ const config = platform.getConfig();
+ const port = config.get('server.port');
+ targetHost = `http://127.0.0.1:${port}`;
+ log.info('Begin end-to-end tests');
+});
+
+afterAll(async () => {
+ const platform = Platform.getInstance();
+ await platform.stop();
+ // Give console.log a moment to finish
+ await util.sleep(1000);
+ log.info("End-to-end tests completed");
+});
+
+Since your main application ("hello world") has been loaded into the same memory space, it is served by the +platform singleton object. You can obtain the parameter "server.port" from the base configuration so that +your tests can make HTTP calls to the REST endpoints of the hello world application.
+Let's examine the following test that makes an HTTP GET request to the "/api/hello/world" REST endpoint.
+it('can do HTTP-GET to /api/hello/world', async () => {
+ const po = new PostOffice();
+ const httpRequest = new AsyncHttpRequest().setMethod('GET');
+ httpRequest.setTargetHost(targetHost).setUrl('/api/hello/world');
+ httpRequest.setQueryParameter('x', 'y');
+ const req = new EventEnvelope().setTo('async.http.request').setBody(httpRequest.toMap());
+ const result = await po.request(req, 2000);
+ expect(result).toBeTruthy();
+ expect(result.getBody()).toBeInstanceOf(Object);
+ const map = new MultiLevelMap(result.getBody() as object);
+ expect(map.getElement('headers.user-agent')).toBe('async-http-client');
+ expect(map.getElement('method')).toBe('GET');
+ expect(map.getElement('ip')).toBe('127.0.0.1');
+ expect(map.getElement('url')).toBe('/api/hello/world');
+ expect(map.getElement('parameters.query.x')).toBe('y');
+});
+
+The system has a built-in AsyncHttpClient with the route name "async.http.request".
+The above example code creates an AsyncHttpRequest object and passes it to the AsyncHttpClient that +will in turn submit the HTTP GET request to the "/api/hello/world" endpoint.
+The MultiLevelMap is a convenient utility to retrieve key-values using the dot-bracket format.
+The "hello world" application is a user facing application. It exposes the user functions through REST endpoints +defined in the "rest.yaml" configuration file. When a function receives input from a REST endpoint, the payload +in the incoming "event envelope" is an AsyncHttpRequest object. The user function can examine HTTP headers, +cookies, method, URL and request body, if any.
+A user function can also be internal. For example, it may be an algorithm doing calculation for a sales order. +The function would receive its input from a user facing function like this:
+++REST endpoint -> user facing function -> internal functions -> database function
+
Please refer to Chapter 4 for some typical event patterns.
+Typical event patterns and example use cases:
+RPC: “Request-response”, best for interactivity
+Async: e.g. Drop-n-forget
+Callback: e.g. Progressive rendering
+Pipeline: e.g. Work-flow application
+Streaming: e.g. File transfer
In a composable application, user functions are written in a self-contained manner without dependencies to other +user functions.
+You can imagine that a transaction may pass through multiple functions (aka "services") because of event-driven design. You can mock any user function by re-registering its "route name" with a mock function that you provide in a unit test.
We advocate encapsulation of external dependencies. For example, database connection and query language +should be fully encapsulated within a data adapter function and other user functions should communicate with the +data adapter function using an agreed interface contract. This removes the tight coupling of user functions +with the underlying infrastructure, allowing us to upgrade infrastructure technology without heavy refactoring +at the application level.
+For a user function that encapsulates a database or an external system, you may mock the underlying dependencies +in the same fashion as you mock traditional code.
+You can apply the "Composable" methodology to write standalone command line applications. Please refer to the +"extra" folder for some simple examples.
+Example | +Name | +Purpose | +
---|---|---|
1 | +rpc.ts | +Demonstrate making RPC calls to a function | +
2 | +rpc-to-service.ts | +Demo program to make "event over HTTP" call to a service | +
3 | +async.ts | +Drop-n-forget async calls | +
4 | +callback.ts | +Make async call and ask the service to callback | +
5 | +nested-rpc.ts | +Making nested RPC calls chaining 2 functions | +
6 | +nested-rpc-with-trace.ts | +Same as (5) with distributed tracing turned on | +
The command line applications are test programs. They are not covered by unit tests in the example project.
+Chapter-6 | +Home | +Appendix-I | +
---|---|---|
API overview | +Table of Contents | +Application config | +
Mercury version 3 is a toolkit for writing composable applications.
+Chapter 2 - Hello World application
+Chapter 4 - Event orchestration
+Reference engine for building "Composable architecture and applications".
+The Mercury project is created with one primary objective -
+to make software easy to write, read, test, deploy, scale and manage.
Mercury for Node.js inherits core functionality from the original Mercury Java project. For example,
+To get started, please refer to the Developer Guide.
+Applications written using Mercury for Node.js can interoperate with composable applications using the Event-over-HTTP protocol, meaning that a composable Java application can invoke a Node.js application using events delivered over a regular HTTP connection.
+The Event Scripting feature for event choreography is not available in this Node.js version because +you can use the Java version as the event manager to orchestrate composable functions in a Node.js +application.
+For more information on event scripting, please visit the +Mercury-Composable project.
+You may explore Event Script to see +how to define event choreography for your composable application.
+December, 2024
+In cloud migration and IT modernization, we evaluate application portfolio and recommend different +disposition strategies based on the 7R migration methodology.
+7R: Retire, retain, re-host, re-platform, replace, re-architect and re-imagine.
+
+The most common observation during IT modernization discovery is that there are many complex monolithic applications +that are hard to modernize quickly.
+IT modernization is like moving into a new home. It would be the opportunity to clean up and to improve for +business agility and strategic competitiveness.
+Composable architecture is gaining momentum because it accelerates organization transformation towards +a cloud native future. We will discuss how we may reduce modernization risks with this approach.
+Composability applies to both platform and application levels.
+We can trace the root of composability to Service Oriented Architecture (SOA) in 2000 or a technical bulletin on +"Flow-Based Programming" by IBM in 1971. This is the idea that architecture and applications are built using +modular building blocks and each block is self-contained with predictable behavior.
+At the platform level, composable architecture refers to loosely coupled platform services, utilities, and business applications. With modular design, you can assemble platform components and applications to create new use cases or to adjust to an ever-changing business environment and requirements. Domain driven design (DDD), Command Query Responsibility Segregation (CQRS) and Microservices patterns are the popular tools that architects use to build composable architecture. You may deploy applications in containers, serverless functions or by other means.
+At the application level, a composable application means that an application is assembled from modular software +components or functions that are self-contained and pluggable. You can mix-n-match functions to form new applications. +You can retire outdated functions without adverse side effect to a production system. Multiple versions of a function +can exist, and you can decide how to route user requests to different versions of a function. Applications would be +easier to design, develop, maintain, deploy, and scale.
+Composable architecture and applications contribute to business agility.
+Since 2014, the microservices architectural pattern has helped to decompose a big application into smaller pieces of “self-contained” services. We also apply digital decoupling techniques to services and domains. Smaller is better. However, we are writing code in the same old fashion: one method calls other methods directly. Functional and reactive programming techniques are means to run code in a non-blocking manner, for example Reactive Streams, Akka, Vert.x, Quarkus Multi/Uni and Spring Reactive Flux/Mono. These are excellent tools, but they do not reduce the complexity of business applications.
+To make an application composable, the software components within a single application should be loosely coupled +where each component has zero or minimal dependencies.
+Unlike the traditional programming approach, a composable application is built from the top down. First, we describe a business transaction as an event flow. Second, from the event flow, we identify individual functions for business logic. Third, we write a user story for each function and write code in a self-contained manner. Finally, we write orchestration code to coordinate the event flow among the functions, so they work together as a single application.
+The individual functions become the building block for a composable application. We can mix-n-match different +sets of functions to address different business use cases.
+Cloud native applications are deployed as containers or serverless functions. Ideally, they communicate using events. +For example, the CQRS design pattern is well accepted for building high performance cloud native applications.
+As shown in Figure 1, applications can communicate with each other using an enterprise event system.
+For inter-domain communication, it is called "Level 1 events". For inter-container communication within a single +domain, it is called "Level 2 events".
+++ +Figure 1 - Cloud native applications use event streams to communicate
+
However, within a single application unit, an application is mostly built in a traditional way. +i.e. one function is calling other functions and libraries directly, thus making the modules and libraries +tightly coupled. As a result, microservices may become smaller monolithic applications.
+To overcome this limitation, we can employ “event-driven design” to make the microservices application unit composable.
+An application unit is a collection of composable functions in memory. Functions communicate with each other +over an “in-memory event bus” to form a single deployable application.
+++ +Figure 2 – Functions use in-memory event bus to communicate
+
For a composable application, each function is written using the first principle of “input-process-output” where +input and output payloads are delivered as events. All input and output are immutable to reduce unintended bugs +and side effects.
+Since the input and output of each function are well-defined, test-driven development (TDD) can be done naturally. It is also easier to define a user story for each function, and the developer does not need to integrate multiple levels of dependencies with code, resulting in a higher quality product.
+++ +Figure 3 - The first principle of a function
+
+What is a “function”? For example, reading a record from a database and performing some data transformation, or doing a calculation with a formula.
+++ +Figure 4 - Connecting output of one function to input of another
+
+As shown in Figure 4, if function-1 wants to send a request to function-2, we can write “event orchestration code” to route the output from function-1 to function-2 and send it over an in-memory event bus.
+In event-driven application design, a function is executed when an event arrives as an input. When a function finishes processing, your application can command the event system to route the result set (output) as an event to another function.
Each function is uniquely identified by a "route name". For example, when a REST endpoint receives a request, +the request object is sent as an event to a function with a route name defined in the REST automation configuration +file called "rest.yaml". The event system will execute the function with the incoming event as input. When the +function finishes execution, the event system will route its output to the next function or as an HTTP response +to the user.
+++ +Figure 5 - Executing function through event flow
+
As shown in Figure 5, functions can send/receive events using the underlying Node.js event loop.
+This event-driven architecture provides the foundation to design and implement composable applications. +Each function is self-contained and loosely coupled by event flow.
+Mercury for Node.js is written in TypeScript with type safety.
+Since a Node.js application is usually single-threaded, all functions must be executed cooperatively in the "event loop".
+However, a traditional Node.js or JavaScript application can run slower if it is not designed to run "cooperatively", i.e. each method must yield control to the event loop.
+Composable applications enjoy faster performance and throughput because each function is written in a self-contained fashion without dependencies on other functions. When one function requests the service of another function, control is released to the event loop, thus promoting higher performance and throughput than the traditional coding approach.
+Let's examine this in more detail.
+For higher throughput, the platform core engine allows you to configure "concurrent" workers for each function addressable by a unique route name. The engine is designed to be reactive. This means that when one worker is busy, it will not process the next event until it has finished processing the current event. This reactive design ensures orderly execution.
+To handle "concurrent" requests, we can configure more than one worker for a function. To ensure all functions +are executed in a non-blocking manner, your function should implement the "Composable" class interface that +enforces your function to use the "Promises" or "async/await" pattern. This means your function will release +control to the event loop while it is waiting for a response from another service, external REST endpoint or +a database.
+If your application is computational intensive, you can increase performance with the Node.js standard +"Worker Thread" library. While each function is running cooperatively in the event loop, a function can +spin up a worker thread to run CPU heavy operations in the background. This adds true "multi-threading" +ability to your application.
+There is one limitation. A worker thread and a function in the main event loop can only communicate using +a separate messaging tunnel like this:
+// in the main thread
+worker.postMessage(someRequest);
+
+// in the worker thread
+parentPort.postMessage(someResponse);
+
+Mercury reduces this complexity because you can write a function as a gateway to interface with the worker +thread.
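As a hedged, Mercury-agnostic sketch of such a gateway (it uses only the standard `node:worker_threads` module; the inline worker script and the `runInWorker` helper are made up for illustration), the messaging tunnel can be wrapped behind a Promise so a composable function can simply await it:

```typescript
import { Worker } from 'node:worker_threads';

// Offload a CPU-heavy task to a worker thread and await the result.
// A real application would load a separate worker script file instead
// of an inline "eval" script.
function runInWorker(n: number): Promise<number> {
    return new Promise((resolve, reject) => {
        const worker = new Worker(
            `const { parentPort, workerData } = require('node:worker_threads');
             // simulate a CPU-heavy computation
             let sum = 0;
             for (let i = 0; i < workerData; i++) sum += i;
             parentPort.postMessage(sum);`,
            { eval: true, workerData: n }
        );
        worker.once('message', resolve);
        worker.once('error', reject);
    });
}
```

A function acting as the gateway can `await runInWorker(...)`, yielding control to the event loop while the computation runs in the background.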
+IMPORTANT - Please be careful about the use of worker threads. Since each worker thread runs in a separate "v8" instance, it may overload the target machine and degrade application performance when you have many worker threads in your application. Therefore, please keep the number of worker threads to a bare minimum.
+
We can construct a composable application with self-contained functions that execute when events arrive. +There is a simple event API that we call the “Post Office” to support sequential non-blocking RPC, async, +drop and forget, callback, workflow, pipeline, streaming and interceptor patterns.
+The "async/await" pattern in Node.js reduces the effort in application modernization because we can directly +port sequential legacy code from a monolithic application to the new composable cloud native design.
+You can use this composable foundation library to write high performance Node.js applications in a composable +manner. The built-in REST automation feature allows you to create REST endpoints by configuration and link +each endpoint with a composable function. The ideal use case would be a Backend for FrontEnd (BFF) application.
+For more complex applications, we recommend using the Event Script system in the Mercury-Composable Java project as an engine to drive composable functions in a Node.js application.
+Event choreography using Event Script is the best way to create a truly composable application that is fully decoupled. Your functions are executed according to an event flow that can be configured and read by product owners and analysts, not just by developers.
+Composability applies to both platform and application levels. We can design and implement better cloud native +applications that are composable using event-driven design, leading to code that is readable, modular and reusable.
+We can deliver application that demonstrates both high performance and high throughput, an objective that has been +technically challenging with traditional means. With built-in observability, we can scientifically predict +application performance and throughput in design and development time, thus saving time and ensuring consistent +product quality.
+The composable approach also facilitates the migration of monolithic applications into cloud native by decomposing an application to the functional level and assembling the functions into microservices and/or serverless according to domain boundaries. It reduces coding effort and application complexity, meaning lower project risk.
+This opens a new frontier of cloud native applications that are composable, scalable, and easy to maintain, +thus contributing to business agility.
+ +' + escapeHtml(summary) +'
' + noResultsText + '
'); + } +} + +function doSearch () { + var query = document.getElementById('mkdocs-search-query').value; + if (query.length > min_search_length) { + if (!window.Worker) { + displayResults(search(query)); + } else { + searchWorker.postMessage({query: query}); + } + } else { + // Clear results for short queries + displayResults([]); + } +} + +function initSearch () { + var search_input = document.getElementById('mkdocs-search-query'); + if (search_input) { + search_input.addEventListener("keyup", doSearch); + } + var term = getSearchTermFromLocation(); + if (term) { + search_input.value = term; + doSearch(); + } +} + +function onWorkerMessage (e) { + if (e.data.allowSearch) { + initSearch(); + } else if (e.data.results) { + var results = e.data.results; + displayResults(results); + } else if (e.data.config) { + min_search_length = e.data.config.min_search_length-1; + } +} + +if (!window.Worker) { + console.log('Web Worker API not supported'); + // load index in main thread + $.getScript(joinUrl(base_url, "search/worker.js")).done(function () { + console.log('Loaded worker'); + init(); + window.postMessage = function (msg) { + onWorkerMessage({data: msg}); + }; + }).fail(function (jqxhr, settings, exception) { + console.error('Could not load worker.js'); + }); +} else { + // Wrap search in a web worker + var searchWorker = new Worker(joinUrl(base_url, "search/worker.js")); + searchWorker.postMessage({init: true}); + searchWorker.onmessage = onWorkerMessage; +} diff --git a/docs/search/search_index.json b/docs/search/search_index.json new file mode 100644 index 0000000..dacd1a3 --- /dev/null +++ b/docs/search/search_index.json @@ -0,0 +1 @@ +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Mercury version 4 for Node.js Reference engine for building \"Composable architecture and applications\". 
Welcome to the Mercury project The Mercury project is created with one primary objective - to make software easy to write, read, test, deploy, scale and manage. Mercury for Node.js inherits core functionality from the original Mercury Java project. For examples, REST automation - you can create REST endpoints by configuration instead of code In-memory event system - we extend the standard Node.js EventEmitter to support high concurrency and ease of use Event API endpoint - this facilitates inter-container communication using events over HTTP To get started, please refer to the Developer Guide . Applications written using Mercury for Node.js can interoperate with composable applications using the Event-over-HTTP protocol, meaning that a composable Java application can invoke a Node.js application using events that delivered over a regular HTTP connection. The Event Scripting feature for event choreography is not available in this Node.js version because you can use the Java version as the event manager to orchestrate composable functions in a Node.js application. For more information on event scripting, please visit the Mercury-Composable project. You may explore Event Script to see how to define event choreography for your composable application. December, 2024 Introduction to composable architecture In cloud migration and IT modernization, we evaluate application portfolio and recommend different disposition strategies based on the 7R migration methodology. 7R: Retire, retain, re-host, re-platform, replace, re-architect and re-imagine. The most common observation during IT modernization discovery is that there are many complex monolithic applications that are hard to modernize quickly. IT modernization is like moving into a new home. It would be the opportunity to clean up and to improve for business agility and strategic competitiveness. Composable architecture is gaining momentum because it accelerates organization transformation towards a cloud native future. 
We will discuss how we may reduce modernization risks with this approach. Composability Composability applies to both platform and application levels. We can trace the root of composability to Service Oriented Architecture (SOA) in 2000 or a technical bulletin on \"Flow-Based Programming\" by IBM in 1971. This is the idea that architecture and applications are built using modular building blocks and each block is self-contained with predictable behavior. At the platform level, composable architecture refers to loosely coupled platform services, utilities, and business applications. With modular design, you can assemble platform components and applications to create new use cases or to adjust for an ever-changing business environment and requirements. Domain driven design (DDD), Command Query Responsibility Segregation (CQRS) and Microservices patterns are the popular tools that architects use to build composable architecture. You may deploy applications in containers, as serverless functions or by other means. At the application level, a composable application means that an application is assembled from modular software components or functions that are self-contained and pluggable. You can mix-n-match functions to form new applications. You can retire outdated functions without adverse side effects to a production system. Multiple versions of a function can exist, and you can decide how to route user requests to different versions of a function. Applications would be easier to design, develop, maintain, deploy, and scale. Composable architecture and applications contribute to business agility. Building a composable application Microservices Since 2014, the microservices architectural pattern has helped to decompose a big application into smaller pieces of \u201cself-contained\u201d services. We also apply digital decoupling techniques to services and domains. Smaller is better. However, we are writing code in the same old fashion. One method is calling other methods directly. 
Functional and reactive programming techniques are means to run code in a non-blocking manner, for example Reactive Streams, Akka, Vertx, Quarkus Multi/Uni and Spring Reactive Flux/Mono. These are excellent tools, but they do not reduce the complexity of business applications. Composable application To make an application composable, the software components within a single application should be loosely coupled where each component has zero or minimal dependencies. Unlike the traditional programming approach, a composable application is built from the top down. First, we describe a business transaction as an event flow. Second, from the event flow, we identify individual functions for business logic. Third, we write a user story for each function and write code in a self-contained manner. Finally, we write orchestration code to coordinate event flow among the functions, so they work together as a single application. The individual functions become the building blocks for a composable application. We can mix-n-match different sets of functions to address different business use cases. Event is the communication conduit Cloud native applications are deployed as containers or serverless functions. Ideally, they communicate using events. For example, the CQRS design pattern is well accepted for building high performance cloud native applications. As shown in Figure 1, applications can communicate with each other using an enterprise event system. For inter-domain communication, it is called \"Level 1 events\". For inter-container communication within a single domain, it is called \"Level 2 events\". 
To overcome this limitation, we can employ \u201cevent-driven design\u201d to make the microservices application unit composable. An application unit is a collection of composable functions in memory. Functions communicate with each other over an \u201cin-memory event bus\u201d to form a single deployable application. Figure 2 \u2013 Functions use in-memory event bus to communicate In-memory event bus For a composable application, each function is written using the first principle of \u201cinput-process-output\u201d where input and output payloads are delivered as events. All input and output are immutable to reduce unintended bugs and side effects. Since input and output for each function are well-defined, test-driven development (TDD) can be done naturally. It is also easier to define a user story for each function and the developer does not need to integrate multiple levels of dependencies with code, resulting in a higher quality product. Figure 3 - The first principle of a function What is a \u201cfunction\u201d? For example, reading a record from a database and performing some data transformation, doing a calculation with a formula, etc. Figure 4 - Connecting output of one function to input of another As shown in Figure 4, if function-1 wants to send a request to function-2, we can write \u201cevent orchestration code\u201d to route the output from function-1 to function-2 and send it over an in-memory event bus. Function execution In event-driven application design, a function is executed when an event arrives as an input . When a function finishes processing, your application can command the event system to route the result set ( output ) as an event to another function. Each function is uniquely identified by a \"route name\". For example, when a REST endpoint receives a request, the request object is sent as an event to a function with a route name defined in the REST automation configuration file called \"rest.yaml\". 
The event system will execute the function with the incoming event as input. When the function finishes execution, the event system will route its output to the next function or as an HTTP response to the user. Figure 5 - Executing function through event flow As shown in Figure 5, functions can send/receive events using the underlying Node.js event loop. This event-driven architecture provides the foundation to design and implement composable applications. Each function is self-contained and loosely coupled by event flow. Performance and throughput Mercury for Node.js is written in TypeScript with type safety. Since a Node.js application is usually single threaded, all functions must be executed cooperatively in the \"event loop.\" However, a traditional Node.js or JavaScript application can run slower if it is not designed to run \"cooperatively\". i.e. each method must yield control to the event loop. Composable applications enjoy faster performance and throughput because each function is written in a self-contained fashion without dependencies of other functions. When one function requests the service of another function, control is released to the event loop, thus promoting higher performance and throughput than the traditional coding approach. Let's examine this in more detail. Throughput For higher throughput, the platform core engine allows you to configure \"concurrent\" workers for each function addressable by a unique route name. The engine is designed to be reactive. This means when one worker is busy, it will not process the next event until it has finished processing the current event. This reactive design ensures orderly execution. To handle \"concurrent\" requests, we can configure more than one worker for a function. To ensure all functions are executed in a non-blocking manner, your function should implement the \"Composable\" class interface that enforces your function to use the \"Promises\" or \"async/await\" pattern. 
This means your function will release control to the event loop while it is waiting for a response from another service, an external REST endpoint or a database. Performance If your application is computationally intensive, you can increase performance with the Node.js standard \"Worker Thread\" library. While each function is running cooperatively in the event loop, a function can spin up a worker thread to run CPU heavy operations in the background. This adds true \"multi-threading\" ability to your application. There is one limitation. A worker thread and a function in the main event loop can only communicate using a separate messaging tunnel like this: // in the main thread worker.postMessage(someRequest); // in the worker thread parentPort.postMessage(someResponse); Mercury reduces this complexity because you can write a function as a gateway to interface with the worker thread. IMPORTANT - Please be careful about the use of worker threads. Since each worker thread runs in a separate \"v8\" instance, it may overload the target machine and degrade application performance when you have many worker threads in your application. Therefore, please keep the number of worker threads to a bare minimum. Use cases We can construct a composable application with self-contained functions that execute when events arrive. There is a simple event API that we call the \u201cPost Office\u201d to support sequential non-blocking RPC, async, drop and forget, callback, workflow, pipeline, streaming and interceptor patterns. The \"async/await\" pattern in Node.js reduces the effort in application modernization because we can directly port sequential legacy code from a monolithic application to the new composable cloud native design. You can use this composable foundation library to write high performance Node.js applications in a composable manner. The built-in REST automation feature allows you to create REST endpoints by configuration and link each endpoint with a composable function. 
The ideal use case would be a Backend for FrontEnd (BFF) application. For more complex applications, we recommend using the Event Script system in the Mercury-Composable Java project as an engine to drive composable functions in a Node.js application. Event choreography using Event Script is the best way to create a composable application that is truly decoupled. Your functions are executed according to an event flow that can be configured and read by product owners and analysts, not just by developers. Conclusion Composability applies to both platform and application levels. We can design and implement better cloud native applications that are composable using event-driven design, leading to code that is readable, modular and reusable. We can deliver applications that demonstrate both high performance and high throughput, an objective that has been technically challenging with traditional means. With built-in observability, we can scientifically predict application performance and throughput at design and development time, thus saving time and ensuring consistent product quality. The composable approach also facilitates the migration of monolithic applications into cloud native by decomposing the application to the functional level and assembling the functions into microservices and/or serverless functions according to domain boundaries. It reduces coding effort and application complexity, meaning lower project risk. This opens a new frontier of cloud native applications that are composable, scalable, and easy to maintain, thus contributing to business agility.","title":"Home"},{"location":"#mercury-version-4-for-nodejs","text":"Reference engine for building \"Composable architecture and applications\".","title":"Mercury version 4 for Node.js"},{"location":"#welcome-to-the-mercury-project","text":"The Mercury project is created with one primary objective - to make software easy to write, read, test, deploy, scale and manage. 
Mercury for Node.js inherits core functionality from the original Mercury Java project. For example, REST automation - you can create REST endpoints by configuration instead of code In-memory event system - we extend the standard Node.js EventEmitter to support high concurrency and ease of use Event API endpoint - this facilitates inter-container communication using events over HTTP To get started, please refer to the Developer Guide . Applications written using Mercury for Node.js can interoperate with composable applications using the Event-over-HTTP protocol, meaning that a composable Java application can invoke a Node.js application using events delivered over a regular HTTP connection. The Event Scripting feature for event choreography is not available in this Node.js version because you can use the Java version as the event manager to orchestrate composable functions in a Node.js application. For more information on event scripting, please visit the Mercury-Composable project. You may explore Event Script to see how to define event choreography for your composable application. December, 2024","title":"Welcome to the Mercury project"},{"location":"#introduction-to-composable-architecture","text":"In cloud migration and IT modernization, we evaluate the application portfolio and recommend different disposition strategies based on the 7R migration methodology. 7R: Retire, retain, re-host, re-platform, replace, re-architect and re-imagine. The most common observation during IT modernization discovery is that there are many complex monolithic applications that are hard to modernize quickly. IT modernization is like moving into a new home. It would be the opportunity to clean up and to improve for business agility and strategic competitiveness. Composable architecture is gaining momentum because it accelerates organization transformation towards a cloud native future. 
We will discuss how we may reduce modernization risks with this approach.","title":"Introduction to composable architecture"},{"location":"#composability","text":"Composability applies to both platform and application levels. We can trace the root of composability to Service Oriented Architecture (SOA) in 2000 or a technical bulletin on \"Flow-Based Programming\" by IBM in 1971. This is the idea that architecture and applications are built using modular building blocks and each block is self-contained with predictable behavior. At the platform level, composable architecture refers to loosely coupled platform services, utilities, and business applications. With modular design, you can assemble platform components and applications to create new use cases or to adjust for an ever-changing business environment and requirements. Domain driven design (DDD), Command Query Responsibility Segregation (CQRS) and Microservices patterns are the popular tools that architects use to build composable architecture. You may deploy applications in containers, as serverless functions or by other means. At the application level, a composable application means that an application is assembled from modular software components or functions that are self-contained and pluggable. You can mix-n-match functions to form new applications. You can retire outdated functions without adverse side effects to a production system. Multiple versions of a function can exist, and you can decide how to route user requests to different versions of a function. Applications would be easier to design, develop, maintain, deploy, and scale. Composable architecture and applications contribute to business agility.","title":"Composability"},{"location":"#building-a-composable-application","text":"","title":"Building a composable application"},{"location":"#microservices","text":"Since 2014, the microservices architectural pattern has helped to decompose a big application into smaller pieces of \u201cself-contained\u201d services. 
We also apply digital decoupling techniques to services and domains. Smaller is better. However, we are writing code in the same old fashion. One method is calling other methods directly. Functional and reactive programming techniques are means to run code in a non-blocking manner, for example Reactive Streams, Akka, Vertx, Quarkus Multi/Uni and Spring Reactive Flux/Mono. These are excellent tools, but they do not reduce the complexity of business applications.","title":"Microservices"},{"location":"#composable-application","text":"To make an application composable, the software components within a single application should be loosely coupled where each component has zero or minimal dependencies. Unlike the traditional programming approach, a composable application is built from the top down. First, we describe a business transaction as an event flow. Second, from the event flow, we identify individual functions for business logic. Third, we write a user story for each function and write code in a self-contained manner. Finally, we write orchestration code to coordinate event flow among the functions, so they work together as a single application. The individual functions become the building blocks for a composable application. We can mix-n-match different sets of functions to address different business use cases.","title":"Composable application"},{"location":"#event-is-the-communication-conduit","text":"Cloud native applications are deployed as containers or serverless functions. Ideally, they communicate using events. For example, the CQRS design pattern is well accepted for building high performance cloud native applications. As shown in Figure 1, applications can communicate with each other using an enterprise event system. For inter-domain communication, it is called \"Level 1 events\". For inter-container communication within a single domain, it is called \"Level 2 events\". 
Figure 1 - Cloud native applications use event streams to communicate However, within a single application unit, an application is mostly built in a traditional way. i.e. one function is calling other functions and libraries directly, thus making the modules and libraries tightly coupled. As a result, microservices may become smaller monolithic applications. To overcome this limitation, we can employ \u201cevent-driven design\u201d to make the microservices application unit composable. An application unit is a collection of composable functions in memory. Functions communicate with each other over an \u201cin-memory event bus\u201d to form a single deployable application. Figure 2 \u2013 Functions use in-memory event bus to communicate","title":"Event is the communication conduit"},{"location":"#in-memory-event-bus","text":"For a composable application, each function is written using the first principle of \u201cinput-process-output\u201d where input and output payloads are delivered as events. All input and output are immutable to reduce unintended bugs and side effects. Since input and output for each function are well-defined, test-driven development (TDD) can be done naturally. It is also easier to define a user story for each function and the developer does not need to integrate multiple levels of dependencies with code, resulting in a higher quality product. Figure 3 - The first principle of a function What is a \u201cfunction\u201d? For example, reading a record from a database and performing some data transformation, doing a calculation with a formula, etc. 
Figure 4 - Connecting output of one function to input of another As shown in Figure 4, if function-1 wants to send a request to function-2, we can write \u201cevent orchestration code\u201d to route the output from function-1 to function-2 and send it over an in-memory event bus.","title":"In-memory event bus"},{"location":"#function-execution","text":"In event-driven application design, a function is executed when an event arrives as an input . When a function finishes processing, your application can command the event system to route the result set ( output ) as an event to another function. Each function is uniquely identified by a \"route name\". For example, when a REST endpoint receives a request, the request object is sent as an event to a function with a route name defined in the REST automation configuration file called \"rest.yaml\". The event system will execute the function with the incoming event as input. When the function finishes execution, the event system will route its output to the next function or as an HTTP response to the user. Figure 5 - Executing function through event flow As shown in Figure 5, functions can send/receive events using the underlying Node.js event loop. This event-driven architecture provides the foundation to design and implement composable applications. Each function is self-contained and loosely coupled by event flow.","title":"Function execution"},{"location":"#performance-and-throughput","text":"Mercury for Node.js is written in TypeScript with type safety. Since a Node.js application is usually single threaded, all functions must be executed cooperatively in the \"event loop.\" However, a traditional Node.js or JavaScript application can run slower if it is not designed to run \"cooperatively\". i.e. each method must yield control to the event loop. Composable applications enjoy faster performance and throughput because each function is written in a self-contained fashion without dependencies of other functions. 
When one function requests the service of another function, control is released to the event loop, thus promoting higher performance and throughput than the traditional coding approach. Let's examine this in more detail.","title":"Performance and throughput"},{"location":"#throughput","text":"For higher throughput, the platform core engine allows you to configure \"concurrent\" workers for each function addressable by a unique route name. The engine is designed to be reactive. This means when one worker is busy, it will not process the next event until it has finished processing the current event. This reactive design ensures orderly execution. To handle \"concurrent\" requests, we can configure more than one worker for a function. To ensure all functions are executed in a non-blocking manner, your function should implement the \"Composable\" class interface that enforces your function to use the \"Promises\" or \"async/await\" pattern. This means your function will release control to the event loop while it is waiting for a response from another service, an external REST endpoint or a database.","title":"Throughput"},{"location":"#performance","text":"If your application is computationally intensive, you can increase performance with the Node.js standard \"Worker Thread\" library. While each function is running cooperatively in the event loop, a function can spin up a worker thread to run CPU heavy operations in the background. This adds true \"multi-threading\" ability to your application. There is one limitation. A worker thread and a function in the main event loop can only communicate using a separate messaging tunnel like this: // in the main thread worker.postMessage(someRequest); // in the worker thread parentPort.postMessage(someResponse); Mercury reduces this complexity because you can write a function as a gateway to interface with the worker thread. IMPORTANT - Please be careful about the use of worker threads. 
Since each worker thread runs in a separate \"v8\" instance, it may overload the target machine and degrade application performance when you have many worker threads in your application. Therefore, please keep the number of worker threads to a bare minimum.","title":"Performance"},{"location":"#use-cases","text":"We can construct a composable application with self-contained functions that execute when events arrive. There is a simple event API that we call the \u201cPost Office\u201d to support sequential non-blocking RPC, async, drop and forget, callback, workflow, pipeline, streaming and interceptor patterns. The \"async/await\" pattern in Node.js reduces the effort in application modernization because we can directly port sequential legacy code from a monolithic application to the new composable cloud native design. You can use this composable foundation library to write high performance Node.js applications in a composable manner. The built-in REST automation feature allows you to create REST endpoints by configuration and link each endpoint with a composable function. The ideal use case would be a Backend for FrontEnd (BFF) application. For more complex applications, we recommend using the Event Script system in the Mercury-Composable Java project as an engine to drive composable functions in a Node.js application. Event choreography using Event Script is the best way to create a composable application that is truly decoupled. Your functions are executed according to an event flow that can be configured and read by product owners and analysts, not just by developers.","title":"Use cases"},{"location":"#conclusion","text":"Composability applies to both platform and application levels. We can design and implement better cloud native applications that are composable using event-driven design, leading to code that is readable, modular and reusable. 
We can deliver applications that demonstrate both high performance and high throughput, an objective that has been technically challenging with traditional means. With built-in observability, we can scientifically predict application performance and throughput at design and development time, thus saving time and ensuring consistent product quality. The composable approach also facilitates the migration of monolithic applications into cloud native by decomposing the application to the functional level and assembling the functions into microservices and/or serverless functions according to domain boundaries. It reduces coding effort and application complexity, meaning lower project risk. This opens a new frontier of cloud native applications that are composable, scalable, and easy to maintain, thus contributing to business agility.","title":"Conclusion"},{"location":"CHANGELOG/","text":"Changelog All notable changes to this project will be documented in this file. The format is based on Keep a Changelog , and this project adheres to Semantic Versioning . Version 4.1.1, 12/22/2024 Added Composable class scanner for the source folder Added \"web.component.scan\" parameter to support scanning of dependency libraries Removed N/A Changed N/A Version 4.1.0, 12/20/2024 Added AppConfig will resolve key-values from system properties and environment variables at startup Removed Eliminate preload.yaml configuration file Changed Streamlined configuration management Updated preload annotation for developer to define concurrency Version 4.0.1, 12/16/2024 Added Support parsing of multiple environment variables and base system properties for a single key-value in Config Reader. Removed N/A Changed Improved environment variable parsing logic and detection of config loops. Compatibility with Unix, Mac and Windows OS Version 4.0.0, 12/9/2024 Upgraded to sync with Mercury-Composable for the foundation event-driven and Event-over-HTTP design. Tested with Node.js version 22.12.0 (LTS). 
Backward compatible to version 20.18.1 (LTS). Event-over-HTTP compatibility tests conducted with Mercury-Composable version 4.0.32. Added N/A Removed N/A Changed Refactored Event-over-HTTP to use standardized HTTP headers X-Stream-Id and X-Ttl Updated OSS dependencies to latest version Configured for EsLint version 9.16.0 Version 3.0.0, 6/10/2023 Ported composable core features from Mercury 3.0 Java version Added Unit and end-to-end tests for Mercury 3.0 Node.js and for the example app project. For backward compatibility, added optional \"setupMiddleware\" method in the rest-automation module. Removed Threshold feature in REST automation Changed N/A Version 1.0.0, 5/30/2022 Added Minimal viable product Removed N/A Changed N/A","title":"Release notes"},{"location":"CHANGELOG/#changelog","text":"All notable changes to this project will be documented in this file. The format is based on Keep a Changelog , and this project adheres to Semantic Versioning .","title":"Changelog"},{"location":"CHANGELOG/#version-411-12222024","text":"","title":"Version 4.1.1, 12/22/2024"},{"location":"CHANGELOG/#added","text":"Composable class scanner for the source folder Added \"web.component.scan\" parameter to support scanning of dependency libraries","title":"Added"},{"location":"CHANGELOG/#removed","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed","text":"N/A","title":"Changed"},{"location":"CHANGELOG/#version-410-12202024","text":"","title":"Version 4.1.0, 12/20/2024"},{"location":"CHANGELOG/#added_1","text":"AppConfig will resolve key-values from system properties and environment variables at startup","title":"Added"},{"location":"CHANGELOG/#removed_1","text":"Eliminate preload.yaml configuration file","title":"Removed"},{"location":"CHANGELOG/#changed_1","text":"Streamlined configuration management Updated preload annotation for developer to define concurrency","title":"Changed"},{"location":"CHANGELOG/#version-401-12162024","text":"","title":"Version 4.0.1, 
12/16/2024"},{"location":"CHANGELOG/#added_2","text":"Support parsing of multiple environment variables and base system properties for a single key-value in Config Reader.","title":"Added"},{"location":"CHANGELOG/#removed_2","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_2","text":"Improved environment variable parsing logic and detection of config loops. Compatibility with Unix, Mac and Windows OS","title":"Changed"},{"location":"CHANGELOG/#version-400-1292024","text":"Upgraded to sync with Mercury-Composable for the foundation event-driven and Event-over-HTTP design. Tested with Node.js version 22.12.0 (LTS). Backward compatible to version 20.18.1 (LTS). Event-over-HTTP compatibility tests conducted with Mercury-Composable version 4.0.32.","title":"Version 4.0.0, 12/9/2024"},{"location":"CHANGELOG/#added_3","text":"N/A","title":"Added"},{"location":"CHANGELOG/#removed_3","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_3","text":"Refactored Event-over-HTTP to use standardized HTTP headers X-Stream-Id and X-Ttl Updated OSS dependencies to latest version Configured for EsLint version 9.16.0","title":"Changed"},{"location":"CHANGELOG/#version-300-6102023","text":"Ported composable core features from Mercury 3.0 Java version","title":"Version 3.0.0, 6/10/2023"},{"location":"CHANGELOG/#added_4","text":"Unit and end-to-end tests for Mercury 3.0 Node.js and for the example app project. 
For backward compatibility, added optional \"setupMiddleware\" method in the rest-automation module.","title":"Added"},{"location":"CHANGELOG/#removed_4","text":"Threshold feature in REST automation","title":"Removed"},{"location":"CHANGELOG/#changed_4","text":"N/A","title":"Changed"},{"location":"CHANGELOG/#version-100-5302022","text":"","title":"Version 1.0.0, 5/30/2022"},{"location":"CHANGELOG/#added_5","text":"Minimal viable product","title":"Added"},{"location":"CHANGELOG/#removed_5","text":"N/A","title":"Removed"},{"location":"CHANGELOG/#changed_5","text":"N/A","title":"Changed"},{"location":"CODE_OF_CONDUCT/","text":"Contributor Covenant Code of Conduct Our Pledge In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. 
Our Standards Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Our Responsibilities Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. Scope This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting Kevin Bader (the current project maintainer). 
All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. Attribution This Code of Conduct is adapted from the Contributor Covenant , version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html","title":"Code of Conduct"},{"location":"CODE_OF_CONDUCT/#contributor-covenant-code-of-conduct","text":"","title":"Contributor Covenant Code of Conduct"},{"location":"CODE_OF_CONDUCT/#our-pledge","text":"In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.","title":"Our Pledge"},{"location":"CODE_OF_CONDUCT/#our-standards","text":"Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic 
address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting","title":"Our Standards"},{"location":"CODE_OF_CONDUCT/#our-responsibilities","text":"Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.","title":"Our Responsibilities"},{"location":"CODE_OF_CONDUCT/#scope","text":"This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.","title":"Scope"},{"location":"CODE_OF_CONDUCT/#enforcement","text":"Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting Kevin Bader (the current project maintainer). All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. 
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.","title":"Enforcement"},{"location":"CODE_OF_CONDUCT/#attribution","text":"This Code of Conduct is adapted from the Contributor Covenant , version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html","title":"Attribution"},{"location":"CONTRIBUTING/","text":"Contributing to the Mercury framework Thanks for taking the time to contribute! The following is a set of guidelines for contributing to Mercury and its packages, which are hosted in the Accenture Organization on GitHub. These are mostly guidelines, not rules. Use your best judgment, and feel free to propose changes to this document in a pull request. Code of Conduct This project and everyone participating in it is governed by our Code of Conduct . By participating, you are expected to uphold this code. Please report unacceptable behavior to Kevin Bader, who is the current project maintainer. What should I know before I get started? We follow the standard GitHub workflow . Before submitting a Pull Request: Please write tests. Make sure you run all tests and check for warnings. Think about whether it makes sense to document the change in some way. For smaller, internal changes, inline documentation might be sufficient, while more visible ones might warrant a change to the developer's guide or the README . Update the CHANGELOG.md file with your change in the form of [Type of change, e.g. Config, Kafka, etc.] with a short description of what it is about and a link to the issue or pull request, and choose a suitable section (i.e., changed, added, fixed, removed, deprecated). Design Decisions When we make a significant decision in how to write code, or how to maintain the project and what we can or cannot support, we will document it using Architecture Decision Records (ADR) . 
Take a look at the design notes for existing ADRs. If you have a question around how we do things, check to see if it is documented there. If it is not documented there, please ask us - chances are you're not the only one wondering. Of course, also feel free to challenge the decisions by starting a discussion on the mailing list.","title":"Contribution"},{"location":"CONTRIBUTING/#contributing-to-the-mercury-framework","text":"Thanks for taking the time to contribute! The following is a set of guidelines for contributing to Mercury and its packages, which are hosted in the Accenture Organization on GitHub. These are mostly guidelines, not rules. Use your best judgment, and feel free to propose changes to this document in a pull request.","title":"Contributing to the Mercury framework"},{"location":"CONTRIBUTING/#code-of-conduct","text":"This project and everyone participating in it is governed by our Code of Conduct . By participating, you are expected to uphold this code. Please report unacceptable behavior to Kevin Bader, who is the current project maintainer.","title":"Code of Conduct"},{"location":"CONTRIBUTING/#what-should-i-know-before-i-get-started","text":"We follow the standard GitHub workflow . Before submitting a Pull Request: Please write tests. Make sure you run all tests and check for warnings. Think about whether it makes sense to document the change in some way. For smaller, internal changes, inline documentation might be sufficient, while more visible ones might warrant a change to the developer's guide or the README . Update the CHANGELOG.md file with your change in the form of [Type of change, e.g. 
Config, Kafka, etc.] with a short description of what it is about and a link to the issue or pull request, and choose a suitable section (i.e., changed, added, fixed, removed, deprecated).","title":"What should I know before I get started?"},{"location":"CONTRIBUTING/#design-decisions","text":"When we make a significant decision in how to write code, or how to maintain the project and what we can or cannot support, we will document it using Architecture Decision Records (ADR) . Take a look at the design notes for existing ADRs. If you have a question around how we do things, check to see if it is documented there. If it is not documented there, please ask us - chances are you're not the only one wondering. Of course, also feel free to challenge the decisions by starting a discussion on the mailing list.","title":"Design Decisions"},{"location":"INCLUSIVITY/","text":"TECHNOLOGY INCLUSIVE LANGUAGE GUIDEBOOK As an organization, Accenture believes in building an inclusive workplace and contributing to a world where equality thrives. Certain terms or expressions can unintentionally harm, perpetuate damaging stereotypes, and insult people. Inclusive language avoids bias, slang terms, and word choices which express derision of groups of people based on race, gender, sexuality, or socioeconomic status. The Accenture North America Technology team created this guidebook to provide Accenture employees with a view into inclusive language and guidance for working to avoid its use\u2014helping to ensure that we communicate with respect, dignity and fairness. How to use this guide? Accenture has over 514,000 employees from diverse backgrounds, who perform consulting and delivery work for an equally diverse set of clients and partners. When communicating with your colleagues and representing Accenture, consider the connotation, however unintended, of certain terms in your written and verbal communication. 
The guidelines are intended to help you recognize non-inclusive words and understand potential meanings that these words might convey. Our goal with these recommendations is not to require you to use specific words, but to ask you to take a moment to consider how your audience may be affected by the language you choose. Inclusive Categories Non-inclusive term Replacement Explanation Race, Ethnicity & National Origin master primary client source leader Using the terms \u201cmaster/slave\u201d in this context inappropriately normalizes and minimizes the very large magnitude that slavery and its effects have had in our history. slave secondary replica follower blacklist deny list block list The term \u201cblacklist\u201d was first used in the early 1600s to describe a list of those who were under suspicion and thus not to be trusted, whereas \u201cwhitelist\u201d referred to those considered acceptable. Accenture does not want to promote the association of \u201cblack\u201d and negative, nor the connotation of \u201cwhite\u201d being the inverse, or positive. whitelist allow list approved list native original core feature Referring to \u201cnative\u201d vs \u201cnon-native\u201d to describe technology platforms carries overtones of minimizing the impact of colonialism on native people, and thus minimizes the negative associations the terminology has in the latter context. non-native non-original non-core feature Gender & Sexuality man-hours work-hours business-hours When people read the words \u2018man\u2019 or \u2018he,\u2019 people often picture males only. Usage of the male terminology subtly suggests that only males can perform certain work or hold certain jobs. Gender-neutral terms include the whole audience, and thus using terms such as \u201cbusiness executive\u201d instead of \u201cbusinessman,\u201d or informally, \u201cfolks\u201d instead of \u201cguys\u201d is preferable because it is inclusive. 
man-days work-days business-days Ability Status & (Dis)abilities sanity check insanity check confidence check quality check rationality check Using the \u201cHuman Engagement, People First\u2019 approach, putting people - all people - at the center is important. Denoting ability status in the context of inferior or problematic work implies that people with mental illnesses are inferior, wrong, or incorrect. dummy variables indicator variables Violence STONITH, kill, hit conclude cease discontinue Using the \u201cHuman Engagement, People First\u2019 approach, putting people - all people - at the center is important. Denoting ability status in the context of inferior or problematic work implies that people with mental illnesses are inferior, wrong, or incorrect. one throat to choke single point of contact primary contact This guidebook is a living document and will be updated as terminology evolves. We encourage our users to provide feedback on the effectiveness of this document and we welcome additional suggestions. Contact us at Technology_ProjectElevate@accenture.com .","title":"Inclusivity"},{"location":"INCLUSIVITY/#technology-inclusive-language-guidebook","text":"As an organization, Accenture believes in building an inclusive workplace and contributing to a world where equality thrives. Certain terms or expressions can unintentionally harm, perpetuate damaging stereotypes, and insult people. Inclusive language avoids bias, slang terms, and word choices which express derision of groups of people based on race, gender, sexuality, or socioeconomic status. The Accenture North America Technology team created this guidebook to provide Accenture employees with a view into inclusive language and guidance for working to avoid its use\u2014helping to ensure that we communicate with respect, dignity and fairness. How to use this guide? 
Accenture has over 514,000 employees from diverse backgrounds, who perform consulting and delivery work for an equally diverse set of clients and partners. When communicating with your colleagues and representing Accenture, consider the connotation, however unintended, of certain terms in your written and verbal communication. The guidelines are intended to help you recognize non-inclusive words and understand potential meanings that these words might convey. Our goal with these recommendations is not to require you to use specific words, but to ask you to take a moment to consider how your audience may be affected by the language you choose. Inclusive Categories Non-inclusive term Replacement Explanation Race, Ethnicity & National Origin master primary client source leader Using the terms \u201cmaster/slave\u201d in this context inappropriately normalizes and minimizes the very large magnitude that slavery and its effects have had in our history. slave secondary replica follower blacklist deny list block list The term \u201cblacklist\u201d was first used in the early 1600s to describe a list of those who were under suspicion and thus not to be trusted, whereas \u201cwhitelist\u201d referred to those considered acceptable. Accenture does not want to promote the association of \u201cblack\u201d and negative, nor the connotation of \u201cwhite\u201d being the inverse, or positive. whitelist allow list approved list native original core feature Referring to \u201cnative\u201d vs \u201cnon-native\u201d to describe technology platforms carries overtones of minimizing the impact of colonialism on native people, and thus minimizes the negative associations the terminology has in the latter context. non-native non-original non-core feature Gender & Sexuality man-hours work-hours business-hours When people read the words \u2018man\u2019 or \u2018he,\u2019 people often picture males only. 
Usage of the male terminology subtly suggests that only males can perform certain work or hold certain jobs. Gender-neutral terms include the whole audience, and thus using terms such as \u201cbusiness executive\u201d instead of \u201cbusinessman,\u201d or informally, \u201cfolks\u201d instead of \u201cguys\u201d is preferable because it is inclusive. man-days work-days business-days Ability Status & (Dis)abilities sanity check insanity check confidence check quality check rationality check Using the \u201cHuman Engagement, People First\u2019 approach, putting people - all people - at the center is important. Denoting ability status in the context of inferior or problematic work implies that people with mental illnesses are inferior, wrong, or incorrect. dummy variables indicator variables Violence STONITH, kill, hit conclude cease discontinue Using the \u201cHuman Engagement, People First\u2019 approach, putting people - all people - at the center is important. Denoting ability status in the context of inferior or problematic work implies that people with mental illnesses are inferior, wrong, or incorrect. one throat to choke single point of contact primary contact This guidebook is a living document and will be updated as terminology evolves. We encourage our users to provide feedback on the effectiveness of this document and we welcome additional suggestions. Contact us at Technology_ProjectElevate@accenture.com .","title":"TECHNOLOGY INCLUSIVE LANGUAGE GUIDEBOOK"},{"location":"arch-decisions/DESIGN-NOTES/","text":"Design notes Composable application Modern applications are sophisticated. Navigating multiple layers of application logic, utilities and libraries makes code complex and difficult to read. To make code readable and modular, we advocate the composable application design pattern. Each function in a composable application is a building block of functionality. It is self-contained, stateless and independent of the rest of the application. 
You can write code using the first principle of \"input-process-output\". Fully event driven Mercury is both a development methodology and a toolkit. It articulates the use of events between functions instead of tight coupling using direct method calls. In Node.js, this is particularly important because it ensures that each function yields to the event loop without blocking the rest of the application, resulting in higher performance and throughput. Reactive design The system encapsulates the standard Node.js EventEmitter with a \"manager and worker\" pattern. Each worker of a function will process incoming events in an orderly manner. This allows the developer the flexibility to implement the singleton pattern and parallel processing easily. Native Node.js stream and ObjectStream I/O It integrates natively with the standard Node.js stream library. For higher digital decoupling, the system provides a set of ObjectStream I/O APIs so that a producer can write to a stream before a consumer is ready. To reduce memory footprint, the system uses the temporary local file system at \"/tmp/node/streams\" to hold data blocks of a stream. The temporary data blocks are cleared automatically when a stream is read or closed. Configuration management The system supports a base configuration (application.yml) and the developer can use additional configuration files with the \"ConfigReader\" API. It follows a structured configuration approach similar to Java's Spring Boot. Compatibility with browsers The core engine does not depend on the local file system. This provides a path to support Composable design in a browser application in future iterations.","title":"Design notes"},{"location":"arch-decisions/DESIGN-NOTES/#design-notes","text":"","title":"Design notes"},{"location":"arch-decisions/DESIGN-NOTES/#composable-application","text":"Modern applications are sophisticated. Navigating multiple layers of application logic, utilities and libraries makes code complex and difficult to read. 
To make code readable and modular, we advocate the composable application design pattern. Each function in a composable application is a building block of functionality. It is self-contained, stateless and independent of the rest of the application. You can write code using the first principle of \"input-process-output\".","title":"Composable application"},{"location":"arch-decisions/DESIGN-NOTES/#fully-event-driven","text":"Mercury is both a development methodology and a toolkit. It articulates the use of events between functions instead of tight coupling using direct method calls. In Node.js, this is particularly important because it ensures that each function yields to the event loop without blocking the rest of the application, resulting in higher performance and throughput.","title":"Fully event driven"},{"location":"arch-decisions/DESIGN-NOTES/#reactive-design","text":"The system encapsulates the standard Node.js EventEmitter with a \"manager and worker\" pattern. Each worker of a function will process incoming events in an orderly manner. This allows the developer the flexibility to implement the singleton pattern and parallel processing easily.","title":"Reactive design"},{"location":"arch-decisions/DESIGN-NOTES/#native-nodejs-stream-and-objectstream-io","text":"It integrates natively with the standard Node.js stream library. For higher digital decoupling, the system provides a set of ObjectStream I/O APIs so that a producer can write to a stream before a consumer is ready. To reduce memory footprint, the system uses the temporary local file system at \"/tmp/node/streams\" to hold data blocks of a stream. The temporary data blocks are cleared automatically when a stream is read or closed.","title":"Native Node.js stream and ObjectStream I/O"},{"location":"arch-decisions/DESIGN-NOTES/#configuration-management","text":"The system supports a base configuration (application.yml) and the developer can use additional configuration files with the \"ConfigReader\" API. 
It follows a structured configuration approach similar to Java's Spring Boot.","title":"Configuration management"},{"location":"arch-decisions/DESIGN-NOTES/#compatibility-with-browsers","text":"The core engine does not depend on the local file system. This provides a path to support Composable design in a browser application in future iterations.","title":"Compatibility with browsers"},{"location":"guides/APPENDIX-I/","text":"Application configuration The following parameters are reserved by the system. You can add your application parameters in the main application configuration file ( application.yml ) or apply additional configuration files using the ConfigReader API. Key Value (example) Required application.name Application name Yes info.app.version major.minor.build (e.g. 1.0.0) Yes info.app.description Something about your application Yes server.port e.g. 8083 Yes static.html.folder e.g. /tmp/html Yes yaml.rest.automation Default value is classpath:/rest.yaml Optional yaml.mime.types Optional config file Optional mime.types Map of file extensions to MIME types Optional log.format text or json Optional log.level default 'info' Optional health.dependencies e.g. 'database.health' Optional Static HTML contents You can place static HTML files (e.g. the HTML bundle for a UI program) in the \"resources/public\" folder or in the local file system using the \"static.html.folder\" parameter. The system supports a minimal set of file extension to MIME type mappings. If your use case requires additional MIME type mapping, you may define them in the application.yml configuration file under the mime.types section like this: mime.types: pdf: 'application/pdf' doc: 'application/msword' Alternatively, you can create a mime-types.yml file and point to it using the \"yaml.mime.types\" parameter. Transient data store The system uses a temp folder in \"/tmp/node/streams\" to hold temporary data blocks for streaming I/O. 
Reserved route names The following route names are reserved by the system. Route Purpose Modules distributed.tracing Distributed tracing logger core engine async.http.request HTTP response event handler core engine event.api.service Event API handler REST automation actuator.services admin endpoints (/info, /health, /livenessprobe) REST automation Reserved HTTP header names Header Purpose X-Stream-Id Temporary route name for streaming content X-TTL Time to live in milliseconds for streaming content X-Async This header, if set to true, indicates it is a drop-n-forget request X-Trace-Id This allows the system to propagate trace ID Chapter-7 Home Appendix-II Test Driven Development Table of Contents Async HTTP client","title":"Appendix-I"},{"location":"guides/APPENDIX-I/#application-configuration","text":"The following parameters are reserved by the system. You can add your application parameters in the main application configuration file ( application.yml ) or apply additional configuration files using the ConfigReader API. Key Value (example) Required application.name Application name Yes info.app.version major.minor.build (e.g. 1.0.0) Yes info.app.description Something about your application Yes server.port e.g. 8083 Yes static.html.folder e.g. /tmp/html Yes yaml.rest.automation Default value is classpath:/rest.yaml Optional yaml.mime.types Optional config file Optional mime.types Map of file extensions to MIME types Optional log.format text or json Optional log.level default 'info' Optional health.dependencies e.g. 'database.health' Optional","title":"Application configuration"},{"location":"guides/APPENDIX-I/#static-html-contents","text":"You can place static HTML files (e.g. the HTML bundle for a UI program) in the \"resources/public\" folder or in the local file system using the \"static.html.folder\" parameter. The system supports a minimal set of file extension to MIME type mappings. 
If your use case requires additional MIME type mapping, you may define them in the application.yml configuration file under the mime.types section like this: mime.types: pdf: 'application/pdf' doc: 'application/msword' Alternatively, you can create a mime-types.yml file and point to it using the \"yaml.mime.types\" parameter.","title":"Static HTML contents"},{"location":"guides/APPENDIX-I/#transient-data-store","text":"The system uses a temp folder in \"/tmp/node/streams\" to hold temporary data blocks for streaming I/O.","title":"Transient data store"},{"location":"guides/APPENDIX-I/#reserved-route-names","text":"The following route names are reserved by the system. Route Purpose Modules distributed.tracing Distributed tracing logger core engine async.http.request HTTP response event handler core engine event.api.service Event API handler REST automation actuator.services admin endpoints (/info, /health, /livenessprobe) REST automation","title":"Reserved route names"},{"location":"guides/APPENDIX-I/#reserved-http-header-names","text":"Header Purpose X-Stream-Id Temporary route name for streaming content X-TTL Time to live in milliseconds for streaming content X-Async This header, if set to true, indicates it is a drop-n-forget request X-Trace-Id This allows the system to propagate trace ID Chapter-7 Home Appendix-II Test Driven Development Table of Contents Async HTTP client","title":"Reserved HTTP header names"},{"location":"guides/APPENDIX-II/","text":"Actuators and HTTP client Actuator endpoints The following admin endpoints are available. GET /info GET /health GET /livenessprobe Endpoint Purpose /info Describe the application /health Application health check endpoint /livenessprobe Check if the application is running normally Custom health services You can extend the \"/health\" endpoint by implementing a composable function to be added to the \"health check\" dependencies. 
health.dependencies=database.health, cache.health Your custom health service must respond to the following requests: Info request (type=info) - it should return a map that includes the service name and href (protocol, hostname and port) Health check (type=health) - it should return a text string describing the health check, e.g. a read/write test result. It can throw an AppException with a status code and error message if the health check fails. A sample health service is available in the health-check.ts file of the hello world project as follows: import { preload, Composable, EventEnvelope, AppException } from 'mercury'; const TYPE = 'type'; const INFO = 'info'; const HEALTH = 'health'; export class DemoHealthCheck implements Composable { @preload('demo.health') initialize(): DemoHealthCheck { return this; } // Your service should be declared as an async function with input as EventEnvelope async handleEvent(evt: EventEnvelope) { const command = evt.getHeader(TYPE); if (command == INFO) { return {'service': 'demo.service', 'href': 'http://127.0.0.1'}; } if (command == HEALTH) { // this is a dummy health check return {'status': 'demo.service is running fine'}; } throw new AppException(400, 'Request type must be info or health'); } } AsyncHttpClient API The \"async.http.request\" function can be used as a non-blocking HTTP client. To make an HTTP request to an external REST endpoint, you can create an HTTP request object using the AsyncHttpRequest class and make an async RPC call to the \"async.http.request\" function like this: const po = new PostOffice(evt.getHeaders()); const req = new AsyncHttpRequest(); req.setMethod(\"GET\"); req.setHeader(\"accept\", \"application/json\"); req.setUrl(\"/api/hello/world?hello world=abc\"); req.setQueryParameter(\"x1\", \"y\"); const list = new Array