The platform-console-data-manager project contains a Node.js app that provides a number of services to the platform-console-frontend web app via a REST API. In many cases, the backend is simply a wrapper around REST APIs from other services.
By default, the data manager runs on port 3123.
For an interactive Swagger API console, see /docs.
For Frisby unit tests, see /tests.
To access the data manager API, you must first log in. The login API uses the Passport authentication middleware for Node.js, with asynchronous PAM authentication as a custom strategy. Logging in creates a Passport session, which is shared with the Express session.
POST /pam/login
Headers: 'Content-Type': 'application/x-www-form-urlencoded'
Response Codes:
200 - OK
401 - Unauthorized
500 - Server Error
Example body:
username=pnda&password=cG5kYQ==
The password must be Base64-encoded.
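A minimal Node.js sketch of assembling the login body (the helper name is illustrative, not part of the API): the password is Base64-encoded, so the default credentials pnda/pnda become the example body above.

```javascript
// Sketch: build the x-www-form-urlencoded body for POST /pam/login.
// The PAM password is sent Base64-encoded; 'pnda' encodes to 'cG5kYQ=='.
function encodePassword(plain) {
  return Buffer.from(plain, 'utf8').toString('base64');
}

const loginBody = 'username=pnda&password=' + encodePassword('pnda');
// loginBody === 'username=pnda&password=cG5kYQ=='
```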
GET /pam/logout
Response Codes:
200 - OK
500 - Server Error
Example body:
{"session":"logout"}
The packages API lets you see what software packages are available for deployment in the cluster. You can deploy packages, which turns them into applications.
GET /api/dm/packages
Response Codes:
200 - OK
500 - Server Error
Example response:
["spark-batch-example-app-1.0.23"]
GET /api/dm/packages/&lt;package&gt;/status
Response Codes:
200 - OK
500 - Server Error
Example response:
{"status": "DEPLOYED", "information": "human readable error message or other information about this status"}
Possible values for status:
NOTDEPLOYED
DEPLOYING
DEPLOYED
UNDEPLOYING
GET /api/dm/packages/<package>
Response Codes:
200 - OK
500 - Server Error
Example response:
{
"status": "DEPLOYED",
"version": "1.0.23",
"name": "spark-batch-example-app",
"user": "who-deployed-this",
"defaults": {
"oozie": {
"example": {
"end": "${deployment_end}",
"start": "${deployment_start}",
"driver_mem": "256M",
"input_data": "/user/pnda/PNDA_datasets/datasets/source=test-src/year=*",
"executors_num": "2",
"executors_mem": "256M",
"freq_in_mins": "180",
"job_name": "batch_example"
}
}
}
}
PUT /api/dm/packages/<package>
Response Codes:
202 - Accepted, poll /packages/<package>/status for status
404 - Package not found in repository
409 - Package already deployed
500 - Server Error
DELETE /api/dm/packages/<package>
Response Codes:
202 - Accepted, poll /packages/<package>/status for status
404 - Package not deployed
500 - Server Error
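Deploy and undeploy both return 202 and complete asynchronously, so clients poll the status resource until the transitional state clears. A hypothetical polling helper (fetchStatus stands in for a GET on the status endpoint; the sleep between polls is elided):

```javascript
// Sketch: poll until the package leaves the transitional DEPLOYING state.
// fetchStatus is a caller-supplied function returning {status, information}.
function waitForDeploy(fetchStatus, maxPolls = 60) {
  for (let i = 0; i < maxPolls; i += 1) {
    const { status } = fetchStatus();
    if (status !== 'DEPLOYING') {
      return status; // e.g. DEPLOYED on success, NOTDEPLOYED on failure
    }
    // a real client would sleep between polls here
  }
  throw new Error('timed out waiting for deployment');
}
```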
The applications API lets you see what applications are available in the cluster. You can start, stop, and delete applications.
GET /api/dm/applications
Response Codes:
200 - OK
500 - Server Error
Example response:
["spark-batch-example-app-instance"]
GET /api/dm/packages/<package>/applications
Response Codes:
200 - OK
500 - Server Error
Example response:
["spark-batch-example-app-instance"]
GET /api/dm/applications/<application>/status
Response Codes:
200 - OK
404 - Application not known
500 - Server Error
Example response:
{"status": "STARTED", "information": "human readible error message or other information about this status"}
Possible values for status:
NOTCREATED
CREATING
CREATED
STARTING
STARTED
STOPPING
DESTROYING
GET /api/dm/applications/<application>/detail
Response Codes:
200 - OK
404 - Application not known
500 - Server Error
Example response:
{
"status": "STARTED",
"name": "application-name",
"yarn-ids": [
{"component":"example", "type":"oozie", "yarn-id":"application_1455877292606_0404"}
]
}
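A client might pull the YARN application ids out of the detail response, for example to cross-reference jobs in the ResourceManager UI. A sketch against the example payload above:

```javascript
// Sketch: collect the YARN ids from an application detail response.
const detail = {
  status: 'STARTED',
  name: 'application-name',
  'yarn-ids': [
    { component: 'example', type: 'oozie', 'yarn-id': 'application_1455877292606_0404' }
  ]
};

const yarnIds = detail['yarn-ids'].map((entry) => entry['yarn-id']);
// yarnIds === ['application_1455877292606_0404']
```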
POST /api/dm/applications/<application>/start
Response Codes:
202 - Accepted, poll /applications/<application>/status for status
404 - Application not known
500 - Server Error
POST /api/dm/applications/<application>/stop
Response Codes:
202 - Accepted, poll /applications/<application>/status for status
404 - Application not known
500 - Server Error
GET /api/dm/applications/<application>
Response Codes:
200 - OK
404 - Application not known
500 - Server Error
Example response:
{
"status": "CREATED",
"overrides": {
"oozie": {
"example": {
"executors_num": "5"
}
}
},
"user": "somebody",
"package_name": "spark-batch-example-app-1.0.23",
"name": "spark-batch-example-app-instance",
"defaults": {
"oozie": {
"example": {
"end": "${deployment_end}",
"input_data": "/user/pnda/PNDA_datasets/datasets/source=test-src/year=*",
"driver_mem": "256M",
"start": "${deployment_start}",
"executors_num": "2",
"freq_in_mins": "180",
"executors_mem": "256M",
"job_name": "batch_example"
}
}
}
}
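Presumably the overrides block takes precedence over the package defaults for the same property, as the example above suggests (executors_num is overridden from "2" to "5"). A sketch of that per-component shallow merge (effectiveProps is illustrative, not part of the API):

```javascript
// Sketch: layer an application's overrides on top of its package defaults.
function effectiveProps(defaults, overrides) {
  return { ...defaults, ...overrides };
}

const merged = effectiveProps(
  { executors_num: '2', executors_mem: '256M', driver_mem: '256M' },
  { executors_num: '5' }
);
// merged.executors_num === '5'; the other properties keep their defaults
```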
PUT /api/dm/applications/<application>
{
"user": "<username>",
"package": "<package>",
"<componentType>": {
"<componentName>": {
"<property>": "<value>"
}
}
}
Response Codes:
202 - Accepted, poll /applications/<application>/status for status
400 - Request body failed validation
404 - Package not found
409 - Application already exists
500 - Server Error
Example body:
{
"user": "somebody",
"package": "<package>",
"oozie": {
"example": {
"executors_num": "5"
}
}
}
Package and user are mandatory; property settings are optional.
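A small sketch of assembling that body (buildCreateBody is a hypothetical helper): the mandatory user and package fields, plus optional per-component overrides.

```javascript
// Sketch: build a PUT /api/dm/applications/<application> body.
// user and package are mandatory; component overrides are optional.
function buildCreateBody(user, pkg, overrides = {}) {
  return JSON.stringify({ user, package: pkg, ...overrides });
}

const createBody = buildCreateBody('somebody', 'spark-batch-example-app-1.0.23', {
  oozie: { example: { executors_num: '5' } }
});
```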
DELETE /api/dm/applications/<application>
Response Codes:
200 - OK
404 - Application not known
500 - Server Error
The endpoints API lets you browse environment variables that are known to the deployment manager.
GET /api/dm/endpoints
Response Codes:
200 - OK
500 - Server Error
Example response:
{"zookeeper_port": "2181", "cluster_root_user": "cloud-user", ... }
The datasets API lets you browse and update data retention policies for datasets in the cluster.
Each dataset has a data retention policy, which can be set to age or size; this controls whether the dataset has a maximum age in days (max_age_days) or a maximum size in gigabytes (max_size_gigabytes).
Each dataset also has a data retention mode, which can be set to archive or delete; this controls what happens to data once it reaches the maximum age or size.
GET /api/dm/datasets
Response Codes:
200 - OK
500 - Server Error
Example response:
[{"policy":"age","path":"/user/pnda/PNDA_datasets/datasets/source=netflow","max_age_days":30,"id":"netflow","mode":"archive"},{"policy":"size","path":"/user/pnda/PNDA_datasets/datasets/source=telemetry","max_size_gigabytes":10,"id":"telemetry","mode":"delete"}]
GET /api/dm/datasets/<dataset>
Response Codes:
200 - OK
404 - Not found
500 - Server Error
Example response:
{"policy":"age","path":"/user/pnda/PNDA_datasets/datasets/source=netflow","max_age_days":30,"id":"netflow","mode":"archive"}
PUT /api/dm/datasets/<dataset>
Response Codes:
200 - OK
404 - Not found
500 - Server Error
Example body:
{"mode":"archive"}
{"mode":"delete"}
{"policy":"age","max_age_days":30}
{"policy":"size","max_size_gigabytes":10}
The metrics API lets you browse metrics available for the cluster.
GET /api/dm/
Response Codes:
200 - OK
500 - Server Error
Example response:
{
"data": [
"metric:zookeeper.nodes.ok",
"kafka.brokers.1.controllerstats.LeaderElection.75thPercentile",
"kafka.brokers.1.system.OpenFileDescriptorCount",
"metric:kafka.brokers.1.controllerstats.LeaderElection.RateUnit",
"kafka.brokers.1.UnderReplicatedPartitions",
"metric:platform.deployment-manager.packages_deployed_succeeded",
"kafka.brokers.1.controllerstats.UncleanLeaderElections.FifteenMinuteRate",
"metric:kafka.brokers.1.system.FreePhysicalMemorySize",
"hadoop.HDFS.total_dfs_capacity_across_datanodes",
"kafka.nodes.ok",
...
]
}
GET /api/dm/metrics
Response Codes:
200 - OK
500 - Server Error
Example response:
{
"metrics": [
"zookeeper.nodes.ok",
"kafka.brokers.1.controllerstats.LeaderElection.RateUnit",
"platform.deployment-manager.packages_deployed_succeeded",
"kafka.brokers.1.system.FreePhysicalMemorySize"
]
}
GET /api/dm/metrics/<metric>
Response Codes:
200 - OK
404 - Not found
500 - Server Error
Example response:
{
"metric": "kafka.health",
"currentData": {
"source": "kafka",
"value": "OK",
"timestamp": "1459438855068",
"causes": ""
}
}
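The timestamp field appears to be an epoch-milliseconds value carried as a string, so clients convert it before display. A sketch against the example above:

```javascript
// Sketch: turn the string timestamp from a metric response into a Date.
const currentData = {
  source: 'kafka',
  value: 'OK',
  timestamp: '1459438855068',
  causes: ''
};

const observedAt = new Date(Number(currentData.timestamp));
// observedAt falls in March 2016
```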