By the end of this chapter, you will be able to:
- Compare and effectively utilize different serverless frameworks
- Set up a cloud-agnostic and container-native serverless framework
- Create, deploy, and invoke a function using the Fn framework
- Deploy serverless functions to cloud providers using serverless frameworks
- Create a real-life serverless application that can run on multiple cloud platforms
In this chapter, we will explain serverless frameworks, create our first serverless functions using these frameworks, and deploy them to various cloud providers.
Let's imagine that you are developing a complex application with many functions in one cloud provider. It may not be feasible to move to another cloud provider, even if the new one is cheaper, faster, or more secure. This situation of vendor dependency is known as vendor lock-in in the industry, and it is a very critical decision factor in the long run. Fortunately, serverless frameworks are a simple and efficient solution to vendor lock-in.
In the previous chapter, all three major cloud providers and their serverless products were discussed. These products were compared based on their programming language support, trigger capabilities, and cost structure. However, there is still one critical difference between the three products that we have not yet covered: operations. Creating functions, deploying them to cloud providers, and managing them are all different for each cloud provider. In other words, you cannot use the same function in AWS Lambda, Google Cloud Functions, and Azure Functions; various changes are required to fulfil the requirements of each cloud provider and its runtime.
Serverless frameworks are open source, cloud-agnostic platforms for running serverless applications. The first difference between cloud providers' serverless products and serverless frameworks is that serverless frameworks are open source and public: they are free to install on the cloud or on on-premise systems and to operate on your own. The second characteristic is that serverless frameworks are cloud-agnostic, meaning that it is possible to run the same serverless functions on different cloud providers or on your own systems. In other words, the cloud provider where the functions will be executed is just a configuration parameter in serverless frameworks. All cloud providers are equalized behind a shared API so that cloud-agnostic functions can be developed and deployed by serverless frameworks.
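For example, in a typical serverless framework configuration, the target platform boils down to a provider block in the service definition. The following is a hedged sketch using the Serverless Framework's conventions; the field values are illustrative:

```yaml
# The deployment target is configuration, not code:
provider:
  name: aws            # switch to another supported provider by changing this value
  runtime: nodejs12.x  # runtime identifier used by the chosen provider
  region: us-east-1
```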
Cloud serverless platforms such as AWS Lambda fueled the hype around serverless architectures and accelerated their adoption in the industry. In the previous chapter, the evolution of cloud technology offerings over the years and significant cloud serverless platforms were discussed in depth. In this chapter, we will discuss open source serverless frameworks and talk about their featured characteristics and functionalities. There are many popular and upcoming serverless frameworks on the market. However, we will focus on two prominent frameworks that differ in terms of priorities and architecture. First, a container-native serverless framework, namely Fn, will be presented. Following that, a more comprehensive framework with multiple cloud provider support, namely the Serverless Framework, will be discussed in depth. Although both frameworks create a cloud-agnostic and open source environment for running serverless applications, their differences in terms of implementation and developer experience will be illustrated.
Fn was announced in 2017 by Oracle at the JavaOne 2017 conference as an event-driven and open source Function-as-a-Service (FaaS) platform. The key characteristics of the framework are as follows:
- Open source: All the source code of the Fn project is publicly available at https://github.com/fnproject/fn, and the project is hosted at https://fnproject.io. It has an active community on GitHub, with more than 3,300 commits and 1,100 releases, as shown in the following screenshot:
Figure 3.1: Fn at GitHub
- Container-native: Containers and microservices have changed the manner of software development and operations. Fn is container-native, meaning that each function is packaged and deployed as a Docker container. It is also possible to create your own Docker containers and run them as functions.
- Language support: The framework officially supports Go, Java, Node.js, Ruby, and Python. In addition, C# is supported by the community.
- Cloud-agnostic: Fn can run on every cloud provider or on-premise system, as long as Docker is installed and running. This is the most critical characteristic of Fn, since it avoids the vendor lock-in problem completely. If the functions do not depend on any cloud-specific service, it is possible to move between cloud providers and on-premise systems quickly.
As a cloud-agnostic and container-native platform, Fn is a developer-focused framework. It enhances developer experience and agility since you can develop, test, and debug locally and deploy to cloud with the same tooling. In the following exercise, we will install and configure Fn so that we can start using the framework.
Docker 17.10.0-ce or later should be installed and running on your computer before you start the next exercise, since this is a prerequisite for Fn.
In this exercise, you will install and configure a cloud-agnostic and container-native serverless framework on your local computer. The aim of this exercise is to illustrate how straightforward it is to configure and install the Fn Framework so that you can get started with serverless frameworks.
To complete this exercise successfully, we need to ensure that the following steps are executed:
- In your Terminal, type the following command:
curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install | sh
This command downloads and installs the Fn framework. Once this is complete, the version number is printed out, as shown in the following screenshot:
Figure 3.2: Installation of Fn
- Start the Fn server by using the following command in your Terminal:
fn start -d
This command downloads the Docker image of the Fn server and starts it inside a container, as shown in the following screenshot:
Figure 3.3: Starting the Fn server
- Check the client and server version by using the following command in your Terminal:
fn version
The output should be as follows:
Figure 3.4: Fn server and client version
This output shows that both the client and server side are running and interacting with each other.
- Update the current Fn context and set a local development registry:
fn use context default && fn update context registry serverless
The output is shown in the following screenshot:
Figure 3.5: Registry setup for the current context
As the output indicates, the default context is set, and the registry is updated to serverless.
- Start the Fn dashboard by using the following command in your Terminal:
docker run -d --link fnserver:api -p 4000:4000 -e "FN_API_URL=http://api:8080" fnproject/ui
This command downloads the fnproject/ui image and starts it in detached mode. In addition, it links fnserver:api to itself and publishes the 4000 port, as shown in the following screenshot:
Figure 3.6: Starting the Fn UI
- Check the running Docker containers with the following command:
docker ps
As expected, two containers are running for Fn with the image names fnproject/ui and fnproject/fnserver:latest, respectively, as shown in the following screenshot:
Figure 3.7: Docker containers
- Open http://localhost:4000 in your browser to check the Fn UI.
The Fn Dashboard lists the applications and function statistics as a web application, as shown in the following screenshot:
Figure 3.8: Fn Dashboard
With this exercise, we have installed the Fn framework, along with its client, server, and dashboard. Since Fn is a cloud-agnostic framework, it can be installed on any cloud or on-premise system using the illustrated steps. We will continue discussing the Fn framework in terms of how functions are configured and deployed.
The Fn framework is designed to work with applications, where each application is a group of functions with their own route mappings. For instance, let's assume you have grouped your functions into a folder, as follows:
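An illustrative layout follows; the directory and file names are representative, matching the files described next rather than a fixed convention:

```
serverless-app/
├── app.yaml        # application definition
├── func.go         # root function implementation
├── func.yaml       # root function definition
├── go.mod          # root function dependencies
├── products/
│   ├── func.js     # products function implementation
│   └── func.yaml
└── suppliers/
    ├── func.rb     # suppliers function implementation
    └── func.yaml
```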
In each folder, there is a func.yaml file that defines the function with the corresponding implementation in Ruby, Node.js, or any other supported language. In addition, there is an app.yaml file in the root folder to define the application.
Let's start by checking the content of app.yaml:
name: serverless-app
app.yaml is used to define the root of the serverless application and includes the name of the application. There are also three additional files for the function in the root folder:
- func.go: Go implementation code
- go.mod: Go dependency definitions
- func.yaml: Function definition and trigger information
For a function with an HTTP trigger and Go runtime, a func.yaml file is defined; its triggers section contains an entry named after the function:
- name: serverless-app
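A complete func.yaml generated by fn init for such a function typically looks like the following; the schema_version and version values are representative and may differ in your setup:

```yaml
schema_version: 20180708
name: serverless-app
version: 0.0.1
runtime: go
entrypoint: ./func
triggers:
- name: serverless-app
  type: http
  source: /serverless-app
```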
When you deploy all of these functions to Fn, they will be accessible via the following URLs:
http://serverless-kubernetes.io/ -> root function
http://serverless-kubernetes.io/products -> function in products/ directory
http://serverless-kubernetes.io/suppliers -> function in suppliers/ directory
In the following exercise, the content of the app.yaml and func.yaml files, as well as their function implementation, will be illustrated with a real-life example.
In this exercise, we aim to create, deploy, and invoke a function using the Fn framework.
To complete this exercise successfully, we need to ensure that the following steps are executed:
- In your Terminal, run the following commands to create an application:
mkdir serverless-app
cd serverless-app
echo "name: serverless-app" > app.yaml
The output should be as follows:
Figure 3.9: Creating the application
These commands create a folder called serverless-app and then change the directory so that it's in this folder. Finally, a file called app.yaml is created with the content name: serverless-app, which is used to define the root of the application.
- Run the following command in your Terminal to create a root function that's available at the "/" of the application URL:
fn init --runtime ruby --trigger http
This command will create a Ruby function with an HTTP trigger at the root of the application, as shown in the following screenshot:
Figure 3.10: Ruby function creation
- Create a subfunction by using the following commands in your Terminal:
fn init --runtime go --trigger http hello-world
This command initializes a Go function with an HTTP trigger in the hello-world folder of the application, as shown in the following screenshot:
Figure 3.11: Go function creation
- Check the directory of the application by using the following command in your Terminal:
ls -l ./*
This command lists the files in the root and child folders, as shown in the following screenshot:
Figure 3.12: Folder structure
As expected, there is a Ruby function in the root folder with three files: func.rb for the implementation, func.yaml for the function definition, and Gemfile to define Ruby function dependencies.
Similarly, there is a Go function in the hello-world folder with three files: func.go for the implementation, func.yaml for the function definition, and go.mod for Go dependencies.
- Deploy the entire application by using the following command in your Terminal:
fn deploy --create-app --all --local
This command deploys all the functions by creating the app and using a local development environment, as shown in the following screenshot:
Figure 3.13: Application deployment to Fn
Firstly, the function for serverless-app is built, and then the function and trigger are created. Similarly, the hello-world function is built and deployed with the corresponding function and trigger.
- List the triggers of the application with the following command and copy the Endpoints for serverless-app-trigger and hello-world-trigger:
fn list triggers serverless-app
This command lists the triggers of serverless-app, along with function, type, source, and endpoint information, as shown in the following screenshot:
Figure 3.14: Trigger list
- Trigger the endpoints by using the following commands in your Terminal:
For the curl commands, do not forget to use the endpoints that we copied in Step 5.
curl -d Ece http://localhost:8080/t/serverless-app/serverless-app
The output should be as follows:
Figure 3.15: Invocation of the serverless-app trigger
This command invokes the serverless-app trigger located at the root of the application. Since it was triggered with a name as its payload, it responds with a personalized message: Hello Ece!
curl http://localhost:8080/t/serverless-app/hello-world
This command will invoke the hello-world trigger without any payload and, as expected, it responds with Hello World, as shown in the following screenshot:
Figure 3.16: Invocation of the hello-world trigger
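The two responses come from small generated functions. The core greeting logic can be sketched in plain Ruby as follows; the FDK wiring is omitted, and this is an illustration rather than the exact generated code:

```ruby
# Greeting logic resembling the generated functions: an empty payload
# falls back to a default name, otherwise the payload is greeted directly.
def greeting(payload)
  name = payload.to_s.strip
  name = "World" if name.empty?
  "Hello #{name}!"
end

puts greeting("Ece")  # Hello Ece!
puts greeting("")     # Hello World!
```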
- Check the application and function statistics from the Fn Dashboard by opening http://localhost:4000 in your browser.
On the home screen, your applications and their overall statistics can be seen, along with auto-refreshed charts, as shown in the following screenshot:
Figure 3.17: Fn Dashboard – Home
Click on serverless-app from the applications list to view more information about the functions of the application, as shown in the following screenshot:
Figure 3.18: Fn Dashboard – Application
- Stop the Fn server by using the following command in your Terminal:
docker stop fnserver
This command will stop the Fn server, including all the function instances, as shown in the following screenshot:
Figure 3.19: Fn server stop
In this exercise, we created a two-function application in the Fn framework and deployed it. We have shown how functions are built as Docker containers using the fn client. In addition, the triggers of the functions were invoked via HTTP, and the statistics were checked from the Fn dashboard. As a container-native and cloud-agnostic framework, Fn packages functions as Docker containers, and they can run on any cloud provider or local system. In the next section, another serverless framework, namely, the Serverless Framework, which focuses more on cloud-provider integration, will be presented.
Serverless Framework is open source, and its source code is available at GitHub: https://github.com/serverless/serverless. It is a very popular repository with more than 31,000 stars, as shown in the following screenshot:
Figure 3.20: Serverless Framework GitHub repository
The official website of the framework is available at https://serverless.com and provides extensive documentation, use cases, and examples. The main features of the Serverless Framework can be grouped into four main topics:
- Cloud-agnostic: The Serverless Framework aims to create a cloud-agnostic serverless application development environment so that vendor lock-in is not a concern.
- Reusable components: Serverless functions and services developed with the Serverless Framework can be published as open source components and reused. These components help us to create complex applications quickly.
- Infrastructure-as-code: All the configuration and source code that's developed in the Serverless Framework is explicitly defined and can be deployed with a single command.
- Developer Experience: The Serverless Framework aims to enhance developer experience via its CLI, configuration parameters, and active community.
These four characteristics of the Serverless Framework make it the most well-known framework for creating serverless applications in the cloud. In addition, the framework focuses on the management of the complete life cycle of serverless applications:
- Develop: It is possible to develop apps locally and reuse open source plugins via the framework CLI.
- Deploy: The Serverless Framework can deploy to multiple cloud platforms and roll out and roll back versions from development to production.
- Test: The framework supports testing the functions out of the box by using the command-line client functions.
- Secure: The framework handles secrets for running the functions and cloud-specific authentication keys for deployments.
- Monitor: The metrics and logs of the serverless applications are available with the serverless runtime and client tools.
In the following exercise, a serverless application will be created, configured, and deployed to AWS using the Serverless Framework. The framework will be used inside a Docker container to show how easy it is to get started with serverless applications.
The Serverless Framework can be downloaded and installed to a local computer with npm. A Docker container, including the Serverless Framework installation, will be used in the following exercise so that we have a fast and reproducible setup.
In the following exercise, the hello-world function will be deployed to AWS Lambda using the Serverless Framework. In order to complete this exercise, you need to have an active Amazon Web Services account. You can create an account at https://aws.amazon.com/.
In this exercise, we aim to configure the Serverless Framework and deploy our very first function using it. With the Serverless Framework, it is possible to create cloud-agnostic serverless applications. In this exercise, we will deploy the functions to AWS Lambda. However, it is possible to deploy the same functions to different cloud providers.
To successfully complete this exercise, we need to ensure that the following steps are executed:
- In your Terminal, run the following command to start the Serverless Framework development environment:
docker run -it --entrypoint=bash onuryilmaz/serverless
This command will start a Docker container in interactive mode. In the following steps, actions will be taken inside this Docker container, as shown in the following screenshot:
Figure 3.21: Starting a Docker container for serverless
- Run the following command to check the framework version:
serverless --version
This command lists the Framework, Plugin, and SDK versions, and getting a complete output indicates that everything is set up correctly, as shown in the following screenshot:
Figure 3.22: Framework version
- Run the following command to use the framework interactively:
serverless
Press Y to create a new project and choose AWS Node.js from the dropdown, as shown in the following screenshot:
Figure 3.23: Creating a new project in the framework
- Set the name of the project to hello-world and press Enter. The output is as follows:
Figure 3.24: Successful creation of the project
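The generated project contains a serverless.yml service definition. A representative sketch is shown below; the exact runtime and region values pinned by the template may differ:

```yaml
service: hello-world

provider:
  name: aws
  runtime: nodejs12.x   # the template pins a concrete Node.js runtime
  region: us-east-1

functions:
  hello:
    handler: handler.hello   # the exported function in handler.js
```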
- Press Y for the AWS credential setup question, and then press Y again for the Do you have an AWS account? question. The output will be as follows:
Figure 3.25: AWS account setup
You now have a URL for creating a serverless user. Copy and save the URL; we'll need it later.
- Open the URL from Step 4 in your browser and start adding users to the AWS console. The URL will open the Add user screen with predefined selections. Click Next: Permissions at the end of the screen, as shown in the following screenshot:
Figure 3.26: AWS Add user
- The AdministratorAccess policy should be selected automatically. Click Next: Tags at the bottom of the screen, as shown in the following screenshot:
Figure 3.27: AWS Add user – Permissions
- If you want to tag your users, you can add optional tags in this view. Click Next: Review, as shown in the following screenshot:
Figure 3.28: AWS Add user – Tags
- This view shows the summary of the new user. Click Create User, as shown in the following screenshot:
Figure 3.29: AWS Add user – Review
You will be redirected to a success page with an Access Key ID and secret, as shown in the following screenshot:
Figure 3.30: AWS Add user – Success
- Copy the key ID and secret access key so that you can use them in the following steps of this exercise and in the activity for this chapter. You need to click Show to reveal the secret access key.
- Return to your Terminal and press Enter to enter the key ID and secret information, as shown in the following screenshot:
Figure 3.31: AWS Credentials in the framework
- Press Y for the Serverless account enable question and select register from the dropdown, as shown in the following screenshot:
Figure 3.32: Serverless account enabled
- Write your email and a password to create a Serverless Framework account, as shown in the following screenshot:
Figure 3.33: Serverless account register
- Run the following commands to change the directory and deploy the function:
cd hello-world
serverless deploy -v
These commands will make the Serverless Framework deploy the function into AWS, as shown in the following screenshot:
Figure 3.34: Serverless Framework deployment output
The output logs start by packaging the service and creating AWS resources for the source code, artifacts, and functions. After all the resources have been created, the Service Information section will provide a summary of the functions and URLs.
At the end of the screen, you will find the Serverless Dashboard URL for the deployed function, as shown in the following screenshot:
Figure 3.35: Stack Outputs
Copy the dashboard URL so that you can check the function metrics in the upcoming steps.
- Invoke the function by using the following command in your Terminal:
serverless invoke --function hello
This command invokes the deployed function and prints out the response, as shown in the following screenshot:
Figure 3.36: Function output
As the output shows, statusCode is 200, and the body of the response indicates that the function has responded successfully.
- Open the Serverless Dashboard URL that you copied at the end of Step 8 in your browser, as shown in the following screenshot:
Figure 3.37: Serverless Dashboard login
- Log in with the email and password you created in Step 5.
You will be redirected to the application list. Expand hello-world-app and click on the successful deployment line, as shown in the following screenshot:
Figure 3.38: Serverless Dashboard application list
In the function view, all the runtime information, including API endpoints, variables, alerts, and metrics, is available. Scroll down to see the number of invocations. The output should be as follows:
Figure 3.39: Serverless Dashboard function view
Since we have only invoked the function once, you will only see 1 in the charts.
- Return to your Terminal and delete the function with the following command:
serverless remove
This command will remove the deployed function and all its dependencies, as shown in the following screenshot:
Figure 3.40: Removing the function
Exit the Serverless Framework development environment container by writing exit in the Terminal, as shown in the following screenshot:
Figure 3.41: Exiting the container
In this exercise, we have created, configured, and deployed a serverless function using the Serverless Framework. Furthermore, the function was invoked via the CLI, and its metrics were checked from the Serverless Dashboard. The Serverless Framework creates a comprehensive abstraction over cloud providers, so the target platform is reduced to credentials and configuration. In other words, where to deploy is just a matter of configuration with the help of serverless frameworks.
In the following activity, a real-life serverless daily weather application will be developed. You will create a serverless framework application with an invocation schedule and deploy it to a cloud provider. In addition, the weather status messages will be sent to a cloud-based collaboration tool known as Slack.
In order to complete the following activity, you need to be able to access a Slack workspace. You can use your existing Slack workspace or create a new one for free at https://slack.com/create.
The aim of this activity is to create a real-life serverless application that sends weather status messages in specific Slack channels. The function will be developed with the Serverless Framework so that it can run on multiple cloud platforms in the future. The function will be designed to run at particular times for your team so that they're informed about the weather status, such as early in the morning before their morning commute. These messages will be published on Slack channels, which is the main communication tool within the team.
In order to get the weather status to share within the team, you can use wttr.in (https://github.com/chubin/wttr.in), which is a free-to-use weather data provider. Once completed, you will have deployed a function to a cloud provider, namely, AWS Lambda:
Figure 3.42: Daily weather function
Finally, when the scheduler invokes the function, or when you invoke it manually, you will get messages regarding the current weather status in your Slack channel:
Figure 3.43: Slack message with the current weather status
In order to complete this activity, you should configure Slack by following the Slack setup steps.
Execute the following steps to configure Slack:
- In your Slack workspace, click your username and select Customize Slack.
- Click Configure apps in the opened window.
- Click on Browse the App Directory to add a new application from the directory.
- Find Incoming WebHooks from the search box in App Directory.
- Click on Set Up for the Incoming WebHooks application.
- Fill in the configuration for incoming webhooks with your specific channel name and icon.
- Open your Slack workspace and the channel you configured in Step 6 to be able to check the integration message.
Detailed screenshots of the Slack setup steps can be found on page 387.
Execute the following steps to complete this activity.
- In your Terminal, create a Serverless Framework application structure in a folder called daily-weather.
- Create a package.json file to define the Node.js environment in the daily-weather folder.
- Create a handler.js file to implement the actual functionality in the daily-weather folder.
- Install the Node.js dependencies for the serverless application.
- Export the AWS credentials as environment variables.
- Deploy the serverless application to AWS using the Serverless Framework.
- Check AWS Lambda for the deployed functions in the AWS Console.
- Invoke the function with the Serverless Framework client tools.
- Check the Slack channel for the posted weather status.
- Return to your Terminal and delete the function with the Serverless Framework.
- Exit the Serverless Framework development environment container.
The solution to this activity can be found on page 387.
In this chapter, we provided an overview of serverless frameworks by discussing the differences between the serverless products of cloud providers. Following that, one container-native and one cloud-focused serverless framework were discussed in depth. Firstly, the Fn framework was discussed, which is an open source, container-native, and cloud-agnostic platform. Secondly, the Serverless Framework was presented, which is a more cloud-focused and comprehensive framework. Furthermore, both frameworks were installed and configured locally. Serverless applications were created, deployed, and run in both serverless frameworks. The functions were invoked with the capabilities of the serverless frameworks, and the necessary metrics were checked for further analysis. At the end of this chapter, a real-life, daily weather Slack bot was implemented as a cloud-agnostic, explicitly defined application using serverless frameworks. Serverless frameworks are essential to the serverless development world thanks to their cloud-agnostic and developer-friendly characteristics.