Cloud Functions in Go with Terraform
Get your Go functions deployed through automation with Terraform. A step-by-step guide to setting up your Terraform, with sample code and shortcuts for speeding up your deployments.
Introduction
Cloud Functions are the quickest and easiest way to get your lightweight, single-purpose code into the cloud, and there are a number of ways to deploy your functions on Google Cloud. One of my favourites is using the gcloud command-line tool:
gcloud functions deploy HelloWorld --runtime go116 --trigger-http --allow-unauthenticated
Running this does a lot of magical things for you behind the scenes, harnessing the power of the public cloud in a way that relieves the developer from having to think too much about the inner workings of getting their code out into the world. It packages up your code, deploys it, verifies it’s available (e.g. that it compiles) and then returns a load-balanced, scalable, HTTPS endpoint without any further effort.
Whilst this is very nice, sometimes more control is necessary. Maybe you want to set the memory available to functions which do more processing. Perhaps you don’t want to make the function public, so configuring identity and access management (IAM) is necessary. Or maybe the function needs to access secrets from Secret Manager. The list of configuration options is huge.
When there are other moving parts, such as IAM, secrets or other resources needed by the function, it gets messy to keep doing this with CLI tools, and managing the infrastructure as code with a tool such as Terraform becomes necessary.
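To get a feel for how quickly those options pile up on the command line, here is an illustrative variant of the earlier command. The service account is a placeholder, and the flags shown are only a sample of what real deployments accumulate:
gcloud functions deploy HelloWorld \
  --runtime go116 \
  --trigger-http \
  --no-allow-unauthenticated \
  --memory 256MB \
  --timeout 60s \
  --service-account function-sa@your-project-id.iam.gserviceaccount.com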
Before you begin
If you’re planning to use the code snippets in this post, you’ll need to make sure some prerequisites are met.
- You’ll need Terraform installed. I used version 1.1.2, but an earlier version may work since I haven’t relied on features specific to newer releases.
- You’ll need the gcloud CLI installed and to be authenticated with it (do this via gcloud auth login).
- Although this example and tutorial will simply demonstrate how it works, and will remain within the bounds of the free tier, you will need to set up billing for your GCP account if you have not already done so. The first 2 million invocations of a function are free; if you go beyond that you will need to pay. Also, Cloud Storage, where your function code will be stored, has a 5 GB free tier, but again excessive use will incur charges, which is why billing needs to be enabled.
- This post assumes working knowledge of Go and how to set up a go.mod and go.sum.
Project structure
I’ve set up my repository to look like this. You can structure yours differently if you like; some prefer to keep application code in a separate repo from the infrastructure. Just note that the Terraform later in this post expects the source code in the following location.
├── Terraform
│   ├── functions.tf
│   ├── main.tf
│   ├── outputs.tf
│   ├── prod.tfbackend
│   ├── prod.tfvars
│   └── variables.tf
└── src
    ├── go.mod
    ├── go.sum
    └── myfunction.go
Sample function code (in src/myfunction.go):
package myfunction

import (
	"fmt"
	"net/http"
)

// HelloWorld writes a greeting to the HTTP response.
func HelloWorld(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, World!")
}
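For reference, a minimal go.mod for this function might look like the following. The module path is a placeholder, and since the function above has no third-party dependencies, go.sum will be empty:
module example.com/myfunction

go 1.16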
Terraform files
The functions.tf file is the biggest of the Terraform files, so let’s start here.
functions.tf
# Set up the root directory where the source code is stored.
locals {
  root_dir = abspath("../src")
}

# Zip up our code so that we can store it for deployment.
data "archive_file" "source" {
  type        = "zip"
  source_dir  = local.root_dir
  output_path = "/tmp/function.zip"
}

# This bucket will host the zipped file.
resource "google_storage_bucket" "bucket" {
  name     = "${var.project_id}-${var.function_name}"
  location = var.region
}

# Add the zipped file to the bucket.
resource "google_storage_bucket_object" "zip" {
  # Use an MD5 of the archive as the object name. If there are no changes to
  # the source code, this name won't change either. We can avoid unnecessary
  # redeployments when the code is unchanged, and force one when it has changed!
  name   = "${data.archive_file.source.output_md5}.zip"
  bucket = google_storage_bucket.bucket.name
  source = data.archive_file.source.output_path
}

# The cloud function resource.
resource "google_cloudfunctions_function" "function" {
  available_memory_mb   = 128
  entry_point           = var.entry_point
  ingress_settings      = "ALLOW_ALL"
  name                  = var.function_name
  project               = var.project_id
  region                = var.region
  runtime               = "go116"
  service_account_email = google_service_account.function-sa.email
  timeout               = 20
  trigger_http          = true

  source_archive_bucket = google_storage_bucket.bucket.name
  source_archive_object = "${data.archive_file.source.output_md5}.zip"
}

# IAM configuration. This allows unauthenticated, public access to the function.
# Change this if you require more control here.
resource "google_cloudfunctions_function_iam_member" "invoker" {
  project        = google_cloudfunctions_function.function.project
  region         = google_cloudfunctions_function.function.region
  cloud_function = google_cloudfunctions_function.function.name

  role   = "roles/cloudfunctions.invoker"
  member = "allUsers"
}

# This is the service account that the function will run as.
resource "google_service_account" "function-sa" {
  account_id   = "function-sa"
  description  = "Controls the workflow for the cloud pipeline"
  display_name = "function-sa"
  project      = var.project_id
}
Discussion
Hopefully the comments in the functions.tf file are largely self-documenting. We’ve got our source code in the src directory, which sits at the same level as the Terraform directory. When deployment happens, behind the scenes Google Cloud takes our code and mod file, builds a binary, and puts it behind an HTTP server. This is why our function takes w http.ResponseWriter, r *http.Request as its arguments.
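Because HelloWorld is just an ordinary net/http handler, you can exercise it locally before any deployment happens. A minimal sketch using the standard library’s httptest package (the test file name is my own choice):
// src/myfunction_test.go
package myfunction

import (
	"net/http/httptest"
	"testing"
)

func TestHelloWorld(t *testing.T) {
	// Build a fake request and a recorder to capture the handler's output.
	req := httptest.NewRequest("GET", "/", nil)
	rec := httptest.NewRecorder()

	HelloWorld(rec, req)

	if got := rec.Body.String(); got != "Hello, World!" {
		t.Errorf("unexpected body: %q", got)
	}
}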
Shortcutting deployments
The above 70-odd lines contain our bucket, IAM binding, function resource definition and source code zip. Awesome. As a bonus, I would like to talk about the little nugget of gold in there: the MD5 hashing of the source code zip file.
Specifically, this line:
name = "${data.archive_file.source.output_md5}.zip"
Once we’ve zipped our code, Terraform computes an MD5 checksum of the zip file’s contents, which differs if the archive changes by even a single byte, and we use that checksum as our object name. This means we can make changes to the rest of our infrastructure without redeploying the function when no code has changed, saving roughly 1-2 minutes of redeployment time. Neato!
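You can convince yourself of this from the shell. After a plan or apply has produced the archive, hash it, make an infrastructure-only change, and hash it again; the checksum, and therefore the object name, stays the same. A rough sketch, assuming a Linux-style md5sum:
terraform plan -var-file=prod.tfvars   # writes /tmp/function.zip
md5sum /tmp/function.zip               # unchanged source => unchanged checksum => no redeploy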
Other necessary Terraform
The remaining Terraform files in our repo are pretty stock-standard. I’ll list them below for completeness.
main.tf
terraform {
  backend "gcs" {
    # Bucket is passed in via CLI arg, e.g.:
    # terraform init -reconfigure -backend-config=prod.tfbackend
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
}

provider "google-beta" {
  project = var.project_id
  region  = var.region
}
prod.tfbackend
bucket = "your-bucket-name"
This is the bucket where you will keep the Terraform state file.
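With that file in place, point Terraform at the backend when initialising, matching the comment in main.tf:
terraform init -reconfigure -backend-config=prod.tfbackend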
prod.tfvars
region        = "us-central1"
project_id    = "your-project-id"
function_name = "myfunction"
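These values are then passed in at plan and apply time, for example:
terraform apply -var-file=prod.tfvars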
variables.tf
variable "project_id" {
description = "The project ID in Google Cloud to use for these resources."
}
variable "region" {
description = "The region in Google Cloud where the resources will be deployed."
}
variable "function_name" {
description = "The name of the function to be deployed"
}
variable "entry_point" {
description = "The entrypoint where the function is called"
default = "HelloWorld"
}
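The project tree above also lists outputs.tf. This post doesn’t depend on any outputs, but a natural one to expose is the function’s endpoint; a minimal sketch using the resource’s https_trigger_url attribute:
outputs.tf
output "function_url" {
  description = "The HTTPS endpoint of the deployed function."
  value       = google_cloudfunctions_function.function.https_trigger_url
}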
Summary
In this post we deployed a Cloud Function written in Go using Terraform, an infrastructure-as-code tool. We set up the function resource and its identity and access management, along with a trick to reduce build and deploy time. I hope you got something out of this post.