added gRPC support
Alex Mattson committed Oct 22, 2020
1 parent fd45446 commit fa040b5
Showing 22 changed files with 826 additions and 142 deletions.
4 changes: 3 additions & 1 deletion .gitignore
@@ -106,4 +106,6 @@ venv.bak/
lib/
include/
bin/
pyvenv.cfg
.DS_Store
.gitignore
98 changes: 93 additions & 5 deletions README.md
@@ -1,6 +1,8 @@
# whereami

`whereami` is a simple Kubernetes-oriented python app for describing the location of the pod serving a request via its attributes (cluster name, cluster region, pod name, namespace, service account, etc). This is useful for a variety of demos where you just need to understand how traffic is getting to and returning from your app.

`whereami`, by default, is a Flask-based python app. It also can operate as a [gRPC](https://grpc.io/) server. The instructions for using gRPC are at the bottom of this document.

### Simple deployment

@@ -239,7 +241,7 @@ Get the external Service endpoint again:
$ ENDPOINT=$(kubectl get svc whereami-frontend | grep -v EXTERNAL-IP | awk '{ print $4}')
```

Curl the endpoint to get the response. In this example we use [jq](https://stedolan.github.io/jq/) to provide a little more structure to the response:

```bash
$ curl $ENDPOINT -s | jq .
@@ -304,7 +306,7 @@ Get the external Service endpoint again:
$ ENDPOINT=$(kubectl get svc whereami-echo-headers | grep -v EXTERNAL-IP | awk '{ print $4}')
```

Curl the endpoint to get the response. Yet again, we use [jq](https://stedolan.github.io/jq/) to provide a little more structure to the response:

```bash
$ curl $ENDPOINT -s | jq .
@@ -329,11 +331,97 @@ $ curl $ENDPOINT -s | jq .
}
```

### gRPC support

By enabling a feature flag in the `whereami` configmap, `whereami` can be interacted with using [gRPC](https://grpc.io/), with support for the gRPC [health check protocol](https://github.com/grpc/grpc/blob/master/doc/health-checking.md). The examples below leverage [grpcurl](https://github.com/fullstorydev/grpcurl), and assume you've already deployed a GKE cluster.

> Note: because gRPC is used as the protocol, the output of `whereami-grpc` omits any `header` fields, *and* the service listens on port `9090`

#### Step 1 - Deploy the whereami-grpc backend

Deploy the `whereami-grpc` backend using the manifests from [k8s-grpc-backend-overlay-example](k8s-grpc-backend-overlay-example):

```bash
$ kubectl apply -k k8s-grpc-backend-overlay-example
serviceaccount/whereami-grpc-ksa-backend created
configmap/whereami-grpc-configmap-backend created
service/whereami-grpc-backend created
deployment.apps/whereami-grpc-backend created
```

#### Step 2 - Deploy the whereami-grpc frontend

Now we're going to deploy the `whereami-grpc` frontend from the [k8s-grpc-frontend-overlay-example](k8s-grpc-frontend-overlay-example):

```bash
$ kubectl apply -k k8s-grpc-frontend-overlay-example
serviceaccount/whereami-grpc-ksa-frontend created
configmap/whereami-grpc-configmap-frontend created
service/whereami-grpc-frontend created
deployment.apps/whereami-grpc-frontend created
```

#### Step 3 - Query whereami-grpc frontend

Get the external Service gRPC endpoint:

```bash
$ ENDPOINT=$(kubectl get svc whereami-grpc-frontend | grep -v EXTERNAL-IP | awk '{ print $4}')
```
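If you want to see what that `grep`/`awk` pipeline extracts without a live cluster, here is a small simulation (the service name matches the Service above, but the IP is made up):

```shell
# Simulated `kubectl get svc` output: drop the header row (it contains
# EXTERNAL-IP), then keep column 4, which holds the external address.
printf '%s\n' \
  'NAME                     TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)' \
  'whereami-grpc-frontend   LoadBalancer   10.8.0.10    35.202.10.20   9090/TCP' \
  | grep -v EXTERNAL-IP | awk '{ print $4 }'
# → 35.202.10.20
```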

Call the endpoint (using [grpcurl](https://github.com/fullstorydev/grpcurl)) to get the response. In this example we use [jq](https://stedolan.github.io/jq/) to provide a little more structure to the response:

```bash
$ grpcurl -plaintext $ENDPOINT:9090 whereami.Whereami.GetPayload | jq .
{
"backend_result": {
"clusterName": "asm-test-01",
"metadata": "grpc-backend",
"nodeName": "gke-asm-test-01-default-pool-02e1ecfb-1b9z.c.alexmattson-scratch.internal",
"podIp": "10.76.0.10",
"podName": "whereami-grpc-backend-f9b79888d-mcbxv",
"podNameEmoji": "🤒",
"podNamespace": "default",
"podServiceAccount": "whereami-grpc-ksa-backend",
"projectId": "alexmattson-scratch",
"timestamp": "2020-10-22T05:10:58",
"zone": "us-central1-c"
},
"cluster_name": "asm-test-01",
"metadata": "grpc-frontend",
"node_name": "gke-asm-test-01-default-pool-f9ad78c0-w3qr.c.alexmattson-scratch.internal",
"pod_ip": "10.76.1.10",
"pod_name": "whereami-grpc-frontend-88b54bc6-9w8cc",
"pod_name_emoji": "🇸🇦",
"pod_namespace": "default",
"pod_service_account": "whereami-grpc-ksa-frontend",
"project_id": "alexmattson-scratch",
"timestamp": "2020-10-22T05:10:58",
"zone": "us-central1-f"
}
```
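The reply above nests the backend's payload inside the frontend's, so it is easy to post-process; a minimal Python sketch using a trimmed copy of the sample reply (values taken from the output above, other fields omitted):

```python
import json

# Trimmed copy of the grpcurl reply above. Note the frontend fields are
# snake_case while the nested backend_result uses camelCase, as in the output.
reply = json.loads("""
{
  "backend_result": {"clusterName": "asm-test-01", "zone": "us-central1-c"},
  "cluster_name": "asm-test-01",
  "zone": "us-central1-f"
}
""")

# The frontend and backend pods landed in different zones of the same cluster.
print(reply["zone"], "->", reply["backend_result"]["zone"])
# → us-central1-f -> us-central1-c
```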

### Notes

If you'd like to build & publish via Google's [buildpacks](https://github.com/GoogleCloudPlatform/buildpacks), something like this should do the trick (leveraging the local `Procfile`) from this directory:

```bash
cat > run.Dockerfile << EOF
FROM gcr.io/buildpacks/gcp/run:v1
USER root
RUN apt-get update && apt-get install -y --no-install-recommends \
wget && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* && \
wget -O /bin/grpc_health_probe https://github.com/grpc-ecosystem/grpc-health-probe/releases/download/v0.3.3/grpc_health_probe-linux-amd64 && \
chmod +x /bin/grpc_health_probe
USER cnb
EOF
docker build -t gcr.io/${PROJECT_ID}/whereami-run-image -f run.Dockerfile .
docker push gcr.io/${PROJECT_ID}/whereami-run-image
```

```pack build --builder gcr.io/buildpacks/builder:v1 --publish gcr.io/${PROJECT_ID}/whereami --run-image gcr.io/${PROJECT_ID}/whereami-run-image```
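The `grpc_health_probe` binary baked into the run image above is typically wired into a Deployment as an exec-style readiness probe; a sketch of that wiring (the port matches the service's `9090`, but this fragment is illustrative and not copied from this repo's overlays):

```yaml
# Illustrative readiness probe for a container built from the run image above;
# grpc_health_probe speaks the gRPC health check protocol against localhost.
readinessProbe:
  exec:
    command: ["/bin/grpc_health_probe", "-addr=:9090"]
  initialDelaySeconds: 5
  periodSeconds: 10
```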


191 changes: 70 additions & 121 deletions app.py
@@ -1,159 +1,108 @@
import requests
from flask import Flask, request, Response, jsonify
import logging
import json
import sys
import socket
import os
from datetime import datetime
import emoji
import random
from flask_cors import CORS
import whereami_payload

from concurrent import futures
import multiprocessing

import grpc

from grpc_reflection.v1alpha import reflection
from grpc_health.v1 import health
from grpc_health.v1 import health_pb2
from grpc_health.v1 import health_pb2_grpc

import whereami_pb2
import whereami_pb2_grpc

app = Flask(__name__)
app.config['JSON_AS_ASCII'] = False  # otherwise our emojis get hosed
CORS(app)  # enable CORS

# define Whereami object
whereami_payload = whereami_payload.WhereamiPayload()


# create gRPC class
class WhereamigRPC(whereami_pb2_grpc.WhereamiServicer):

    def GetPayload(self, request, context):
        payload = whereami_payload.build_payload(None)
        return whereami_pb2.WhereamiReply(**payload)


# if selected will serve gRPC endpoint on port 9090
# see https://github.com/grpc/grpc/blob/master/examples/python/xds/server.py
# for reference on code below
def grpc_serve():
    server = grpc.server(
        futures.ThreadPoolExecutor(max_workers=multiprocessing.cpu_count()))

    # Add the application servicer to the server.
    whereami_pb2_grpc.add_WhereamiServicer_to_server(WhereamigRPC(), server)

    # Create a health check servicer. We use the non-blocking implementation
    # to avoid thread starvation.
    health_servicer = health.HealthServicer(
        experimental_non_blocking=True,
        experimental_thread_pool=futures.ThreadPoolExecutor(max_workers=1))
    health_pb2_grpc.add_HealthServicer_to_server(health_servicer, server)

    # Create a tuple of all of the services we want to export via reflection.
    services = tuple(
        service.full_name
        for service in whereami_pb2.DESCRIPTOR.services_by_name.values()) + (
            reflection.SERVICE_NAME, health.SERVICE_NAME)

    # Add the reflection service to the server.
    reflection.enable_server_reflection(services, server)
    server.add_insecure_port('[::]:9090')
    server.start()

    # Mark all services as healthy.
    overall_server_health = ""
    for service in services + (overall_server_health,):
        health_servicer.set(service, health_pb2.HealthCheckResponse.SERVING)

    # Park the main application thread.
    server.wait_for_termination()


# HTTP healthcheck
@app.route('/healthz')  # healthcheck endpoint
def i_am_healthy():
    return ('OK')


# default HTTP service
@app.route('/', defaults={'path': ''})
@app.route('/<path:path>')
def home(path):

    payload = whereami_payload.build_payload(request.headers)

    return jsonify(payload)


if __name__ == '__main__':
    out_hdlr = logging.StreamHandler(sys.stdout)
    fmt = logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(message)s")
    out_hdlr.setFormatter(fmt)
    out_hdlr.setLevel(logging.INFO)
    logging.getLogger().addHandler(out_hdlr)
    logging.getLogger().setLevel(logging.INFO)
    app.logger.handlers = []
    app.logger.propagate = True

    # decision point - HTTP or gRPC?
    if os.getenv('GRPC_ENABLED') == "True":
        logging.info("gRPC server listening on port 9090")
        grpc_serve()

    else:
        app.run(
            host='0.0.0.0', port=int(os.environ.get('PORT', 8080)),
            threaded=True)
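The health-marking loop in `grpc_serve` registers every exported service plus the empty-string name, which the gRPC health check protocol treats as the server's overall status. A stdlib-only sketch of that bookkeeping (the literal service names below are assumptions standing in for the proto descriptor values, and a plain dict stands in for the health servicer):

```python
# Illustrative stand-in for the service-name bookkeeping in grpc_serve():
# the real code pulls full_name values from whereami_pb2.DESCRIPTOR and calls
# health_servicer.set(); here literal names and a dict show the shape.
app_services = ("whereami.Whereami",)  # would come from the proto descriptor
infra_services = ("grpc.reflection.v1alpha.ServerReflection",
                  "grpc.health.v1.Health")
services = app_services + infra_services

health_status = {}
overall_server_health = ""  # empty string = the server's overall status
for service in services + (overall_server_health,):
    health_status[service] = "SERVING"

print(len(health_status))  # three named services plus the overall entry
# → 4
```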
7 changes: 0 additions & 7 deletions k8s-backend-overlay-example copy/cm-flag.yaml

This file was deleted.

9 changes: 9 additions & 0 deletions k8s-grpc-backend-overlay-example/cm-flag.yaml
@@ -0,0 +1,9 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: whereami-grpc-configmap
data:
BACKEND_ENABLED: "False"
BACKEND_SERVICE: "whereami-grpc-backend" # substitute with corresponding service name - this example assumes both services are in the same namespace
METADATA: "grpc-backend"
GRPC_ENABLED: "True"
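The configmap values above reach the container as plain environment strings, so the app compares them against the literal string `"True"` rather than a boolean; a stdlib-only sketch of that pattern (the `flag` helper is illustrative, not from the repo):

```python
import os

# Simulate the configmap-injected environment variables from above.
os.environ["GRPC_ENABLED"] = "True"
os.environ["BACKEND_ENABLED"] = "False"

def flag(name):
    # Feature flags arrive as the strings "True"/"False", so compare literally.
    return os.getenv(name) == "True"

print("serve gRPC on 9090" if flag("GRPC_ENABLED") else "serve HTTP on 8080")
# → serve gRPC on 9090
```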