This document contains informal notes to help developers of the Elastic APM Node.js agent. Developers should feel free to aggressively weed out obsolete notes. For structured developer and contributing rules and guidelines, see CONTRIBUTING.md.
Work on the Elastic Node.js APM agent is coordinated with other Elastic APM work via the APM Agents project and with releases of the full Elastic Stack via milestones named after Elastic Stack version numbers.
Setting `ELASTIC_APM_LOG_LEVEL=trace` will turn on trace-level logging in the agent and the apm http-client. Agent logging is in ecs-logging format, which can be pretty-formatted via the `ecslog` tool:

```
ELASTIC_APM_LOG_LEVEL=trace node myapp.js | ecslog
```
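
The same trace-level logging can also be enabled from code via the agent's start options. A minimal sketch (the `serviceName` value here is just a placeholder):

```js
// Start the agent with trace-level logging, equivalent to
// ELASTIC_APM_LOG_LEVEL=trace. The agent must be required and started
// before any modules it should instrument.
require('elastic-apm-node').start({
  serviceName: 'myapp', // placeholder
  logLevel: 'trace'
})
```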
One of the important libs in the agent is `require-in-the-middle`, used for intercepting `require(...)` statements for monkey-patching. You can get debug output from it via:

```
DEBUG=require-in-the-middle
```
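
For context on what require-in-the-middle does, here is a minimal standalone sketch of hooking a module's exports (assuming the classic API where the module's default export is the Hook constructor; newer versions export `{ Hook }` instead). This is illustrative only, not the agent's actual instrumentation code:

```js
// Intercept require('http') and wrap createServer, roughly the pattern the
// agent's instrumentation modules follow.
const Hook = require('require-in-the-middle')

Hook(['http'], function (exports, name, basedir) {
  const origCreateServer = exports.createServer
  exports.createServer = function wrappedCreateServer (...args) {
    console.log('http.createServer called') // the agent would start a transaction here
    return origCreateServer.apply(this, args)
  }
  return exports // whatever is returned is what the requiring code receives
})

require('http').createServer() // triggers the hook and logs the message
```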
And don't forget the Node.js core `NODE_DEBUG` and `NODE_DEBUG_NATIVE` environment variables:

```
NODE_DEBUG=*
NODE_DEBUG_NATIVE=*
```
The following patch to the agent's async-hooks.js can be helpful to learn how its async hook tracks relationships between async operations:
```diff
diff --git a/lib/instrumentation/async-hooks.js b/lib/instrumentation/async-hooks.js
index 1dd168f..f35877d 100644
--- a/lib/instrumentation/async-hooks.js
+++ b/lib/instrumentation/async-hooks.js
@@ -71,6 +71,9 @@ module.exports = function (ins) {
     // type, which will init for each scheduled timer.
     if (type === 'TIMERWRAP') return
+    const indent = ' '.repeat(triggerAsyncId % 80)
+    process._rawDebug(`${indent}${type}(${asyncId}): triggerAsyncId=${triggerAsyncId} executionAsyncId=${asyncHooks.executionAsyncId()}`);
+
     const transaction = ins.currentTransaction
     if (!transaction) return
```
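
To play with the same idea outside the agent, the following standalone sketch (not agent code) uses the `async_hooks` API directly. `process._rawDebug` is handy here because it writes synchronously to stderr and will not itself trigger more async operations, so it is safe to call from async_hooks callbacks:

```js
// Print an indented line for every async resource init, showing which
// resource triggered it (the same information the patch above logs).
const asyncHooks = require('async_hooks')

asyncHooks.createHook({
  init (asyncId, type, triggerAsyncId) {
    const indent = ' '.repeat(triggerAsyncId % 80)
    process._rawDebug(
      `${indent}${type}(${asyncId}): triggerAsyncId=${triggerAsyncId} ` +
      `executionAsyncId=${asyncHooks.executionAsyncId()}`)
  }
}).enable()

// Some async work to generate output.
setTimeout(function () {
  setImmediate(function () {})
}, 10)
```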
If the "Integration Tests" check fails for your PR, here are some notes on debugging that. (The actual ".ci/Jenkinsfile" and apm-integration-testing.git are the authority. See also the APM integration test troubleshooting guide.)
The Node.js integration tests are "test_nodejs.py" in apm-integration-testing. Roughly speaking, the integration tests:

- use that repo's scripts to start ES, Kibana, apm-server and an express test app in Docker;
- run apm-integration-testing.git itself in a Docker container and call `make test-agent-nodejs`;
- which runs `pytest tests/agent/test_nodejs.py ...`.
Reproducing an integration test failure on your dev machine is mainly a matter of getting the correct settings for that "express test app", in particular using your PR's commit sha. The following boilerplate should get you going. Note that it may well get out of date:
- Create and activate a Python virtual env for the Python bits that are used:

  ```
  python3 -m venv ./venv
  source ./venv/bin/activate
  ```
- Set the apm-agent-nodejs.git commit you want to use:

  ```
  export MYCOMMIT=...  # e.g. 3554f05fad6798f229f75eebc07bb66cee918385
  ```
- Start the docker containers:

  ```
  export BUILD_OPTS="--nodejs-agent-package elastic/apm-agent-nodejs#$MYCOMMIT --opbeans-node-agent-branch $MYCOMMIT --build-parallel"
  export ELASTIC_STACK_VERSION=8.0.0
  export COMPOSE_ARGS="${ELASTIC_STACK_VERSION} ${BUILD_OPTS} \
    --with-agent-nodejs-express \
    --no-apm-server-dashboards \
    --no-apm-server-self-instrument \
    --force-build --no-xpack-secure \
    --apm-log-level=trace"
  make start-env
  # OR: replace all this with a suitable call to
  # 'python3 scripts/compose.py start ...'
  ```
  Note: I believe there is an easier way via `ELASTIC_STACK_VERSION=... APM_AGENT_NODEJS_VERSION=... make start-env`; see the README.
- Run the test suite:

  ```
  pytest tests/agent/test_nodejs.py -v
  ```
- (Optional) In a separate terminal, watch the log output from the Node.js agent:

  ```
  docker logs -f expressapp | ecslog
  ```
- When done, stop the docker containers via:

  ```
  make stop-env   # OR: python3 scripts/compose.py stop
  ```

  Also, optionally, turn off the Python virtual env:

  ```
  deactivate
  ```
- To run the benchmarks for a PR, go to the apm-ci list of apm-agent-nodejs PRs and click on your PR.
- Click "Build with Parameters" in the left sidebar. (If you don't have "Build with Parameters" then you aren't logged in.)
- Set these parameters so that (mostly) only the "Benchmarks" step runs:
  - Run_As_Master_Branch: checked
  - bench_ci: checked
  - tav_ci: unchecked
  - tests_ci: unchecked
  - test_edge_ci: unchecked
Limitation: The current dashboard for benchmark results only shows datapoints from the "master" branch. It would be useful to have a separate chart that showed PR values.
(Another way to start the "Benchmarks" step is via a GitHub comment "jenkins run the benchmark tests". However, that also triggers the "Test" step and, depending on other conditions, the "TAV Test" step -- both of which are long and will run before getting to the Benchmarks run.)