Lab 4 - Go deeper with functions

Before starting this lab, create a new folder for your files. As this lab builds on an earlier lab, make a copy of lab3:

$ cp -r lab3 lab4 \
   && cd lab4

Inject configuration through environmental variables

It is useful to be able to control how a function behaves at runtime. We can do that in at least two ways:

At deployment time

  • Set environmental variables at deployment time

We did this with write_debug in Lab 3. You can also set any custom environmental variables you want here; for instance, if you wanted to configure a language for your hello world function you might introduce a spoken_language variable.

Use HTTP context - querystring / headers

  • Use querystring and HTTP headers

The other option, which is more dynamic and can be altered on a per-request basis, is the use of querystrings and HTTP headers. Both can be passed through the faas-cli or curl.

These headers are exposed through environmental variables so they are easy to consume within your function. Every header is prefixed with Http_ and all hyphens (-) are replaced with underscores (_).

Let's try it out with a querystring and a function that lists off all environmental variables.

  • Deploy a function that prints environmental variables using a built-in BusyBox command:
$ faas-cli deploy --name env --fprocess="env" --image="functions/alpine:latest" --network=func_functions
  • Invoke the function with a querystring:
$ echo "" | faas-cli invoke env --query workshop=1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=05e8db360c5a
fprocess=env
HOME=/root
Http_Connection=close
Http_Content_Type=text/plain
Http_X_Call_Id=cdbed396-a20a-43fe-9123-1d5a122c976d
Http_X_Forwarded_For=10.255.0.2
Http_X_Start_Time=1519729562486546741
Http_User_Agent=Go-http-client/1.1
Http_Accept_Encoding=gzip
Http_Method=POST
Http_ContentLength=-1
Http_Path=/function/env
...
Http_Query=workshop=1
...

In Python you would read this value with os.getenv("Http_Query").

  • Now invoke it with a header:
$ echo "" | curl http://127.0.0.1:8080/function/env --header "X-Output-Mode: json"
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=05e8db360c5a
fprocess=env
HOME=/root
Http_X_Call_Id=8e597bcf-614f-4ca5-8f2e-f345d660db5e
Http_X_Forwarded_For=10.255.0.2
Http_X_Start_Time=1519729577415481886
Http_Accept=*/*
Http_Accept_Encoding=gzip
Http_Connection=close
Http_User_Agent=curl/7.55.1
Http_Method=GET
Http_ContentLength=0
Http_Path=/function/env
...
Http_X_Output_Mode=json
...

In Python you would read this value with os.getenv("Http_X_Output_Mode").

You can see that all the other HTTP context is also provided, such as the Content-Length when the Http_Method is a POST, the User_Agent, Cookies and anything else you'd expect to see in an HTTP request.
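
Because the HTTP context arrives as environmental variables, a handler only needs os.getenv to make use of it. Below is a minimal sketch of a Python handler (not one of the lab's required files; the logic is only for illustration) that reads the Http_Query value and the X-Output-Mode header from the examples above:

import os
import json

def handle(req):
    # The querystring "workshop=1" is exposed verbatim in Http_Query
    query = os.getenv("Http_Query", "")

    # Headers are prefixed with Http_ and hyphens become underscores,
    # so X-Output-Mode arrives as Http_X_Output_Mode
    output_mode = os.getenv("Http_X_Output_Mode", "text")

    if output_mode == "json":
        return json.dumps({"query": query, "body": req})

    return "query: %s, body: %s" % (query, req)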

Making use of logging

The OpenFaaS watchdog operates by passing in the HTTP request and reading an HTTP response via the standard I/O streams stdin and stdout. This means that the process running as a function does not need to know anything about the web or HTTP.

An interesting case is when a function exits with a non-zero exit code and stderr is not empty. By default a function's stdout and stderr are combined, and stderr is not printed to the logs.

Let's check that with the hello-openfaas function from Lab 3.

Change the handler.py code to:

import sys
import json

def handle(req):

    sys.stderr.write("This should be an error message.\n")
    return json.dumps({"Hello": "OpenFaaS"})

Build and deploy

$ faas-cli build -f hello-openfaas.yml \
  && faas-cli push -f hello-openfaas.yml \
  && faas-cli deploy -f hello-openfaas.yml

Now invoke the function with

$ echo | faas-cli invoke hello-openfaas

You should see the combined output:

This should be an error message.
{"Hello": "OpenFaaS"}

Note: If you check the container logs with docker service logs hello-openfaas you should not see the stderr output.

In the example we need the function to return valid JSON that can be parsed. Unfortunately the log message makes the output invalid, so we need to redirect the messages from stderr to the container's logs. OpenFaaS provides a solution: you can print the error messages to the logs and keep the function response clean, returning only stdout. Use the combine_output flag for this purpose.

Let's try it. Open the hello-openfaas.yml file and add these lines to your function's section:

    environment:
      combine_output: false

Push, deploy and invoke the function.

The output should be:

{"Hello": "OpenFaaS"}

Check the container logs for stderr. You should see a message like:

hello-openfaas.1.2xtrr2ckkkth@linuxkit-025000000001    | 2018/04/03 08:35:24 stderr: This should be an error message.

Create Workflows

There will be situations where it will be useful to take the output of one function and use it as an input to another. This is achievable both client-side and via the API Gateway.

Chaining functions on the client-side

You can pipe the result of one function into another using curl, the faas-cli or some of your own code; a worked example follows the pros and cons below.

Pros:

  • requires no code - can be done with CLI programs
  • fast for development and testing
  • easy to model in code

Cons:

  • additional latency - each function goes back to the server
  • chatty (more messages)

Example:

  • Deploy the NodeInfo function from the Function Store

  • Then push the output from NodeInfo through the Markdown converter

$ echo -n "" | faas-cli invoke nodeinfo | faas-cli invoke func_markdown
<p>Hostname: 64767782518c</p>

<p>Platform: linux
Arch: x64
CPU count: 4
Uptime: 1121466</p>

You will now see the output of the NodeInfo function decorated with HTML tags such as: <p>.

Another example of client-side chaining of functions may be to invoke a function that generates an image, then send that image into another function which adds a watermark.
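
For instance, with curl the same pattern could look like this (generate-image and watermark are hypothetical function names used only to illustrate the idea):

$ curl -s http://127.0.0.1:8080/function/generate-image \
    | curl -s --data-binary @- http://127.0.0.1:8080/function/watermark > watermarked.png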

Call one function from another

The easiest way to call one function from another is to make a call over HTTP via the OpenFaaS API Gateway. This call does not need to know the external domain name or IP address; it can simply refer to the API Gateway as gateway through a DNS entry.

When accessing a service such as the API gateway from a function, it's best practice to use an environmental variable to configure the hostname. This is important for two reasons: the name may change, and in Kubernetes a suffix is sometimes needed.

Pros:

  • functions can make use of each other directly
  • low latency since the functions can access each other on the same network

Cons:

  • requires a code library for making the HTTP request

Example:

In Lab 3 we introduced the requests module and used it to call a remote API to get the name of an astronaut aboard the ISS. We can use the same technique to call another function deployed on OpenFaaS.

  • Go to the Function Store and deploy the Sentiment Analysis function.

The Sentiment Analysis function will tell you the subjectivity and polarity (positivity rating) of any sentence. The result of the function is formatted in JSON as per the example below:

$ echo -n "California is great, it's always sunny there." | faas-cli invoke sentimentanalysis
{"polarity": 0.8, "sentence_count": 1, "subjectivity": 0.75}

So the result shows us that our test sentence was both very subjective (75%) and very positive (80%). Polarity is always between -1.00 and 1.00, and subjectivity between 0.00 and 1.00.

The following code can be used to call the Sentiment Analysis function or any other function:

    test_sentence = "California is great, it's always sunny there."
    r = requests.get("http://gateway:8080/function/sentimentanalysis", data=test_sentence)

Or via an environmental variable:

    gateway_hostname = os.getenv("gateway_hostname", "gateway") # uses a default of "gateway" for when "gateway_hostname" is not set
    test_sentence = "California is great, it's always sunny there."
    r = requests.get("http://" + gateway_hostname + ":8080/function/sentimentanalysis", data=test_sentence)

Since the result is always in JSON format we can make use of the helper function .json() to convert the response:

    result = r.json()
    if result["polarity"] > 0.45:
       return "That was probably positive"
    else:
        return "That was neutral or negative"

Now create a new function in Python to bring it all together:

import os
import requests
import sys

def handle(req):
    """handle a request to the function
    Args:
        req (str): request body
    """

    gateway_hostname = os.getenv("gateway_hostname", "gateway") # uses a default of "gateway" for when "gateway_hostname" is not set

    test_sentence = req

    r = requests.get("http://" + gateway_hostname + ":8080/function/sentimentanalysis", data= test_sentence)

    if r.status_code != 200:
        sys.exit("Error with sentimentanalysis, expected: %d, got: %d\n" % (200, r.status_code))

    result = r.json()
    if result["polarity"] > 0.45:
        return "That was probably positive"
    else:
        return "That was neutral or negative"
  • Remember to add requests to your requirements.txt file

Note: you do not need to modify or alter the source for the SentimentAnalysis function, we have already deployed it and will access it via the API gateway.
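
Once you have built, pushed and deployed your new function, you can test the whole chain through the gateway. The name sentiment-checker below is only a placeholder for whatever name you chose in your YAML file:

$ echo -n "The weather is great and the food was fantastic." | faas-cli invoke sentiment-checker

Depending on the polarity score returned by sentimentanalysis, you should see one of the two messages from your handler.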

Now move on to Lab 5.