Deploy EventCatalog to AWS CloudFront with Google SSO Access Control via Terraform


This is Informed’s first “technical blog post” in what will become a recurring feature. These posts are written by the technical practitioners on our Engineering Team for the general technical audience of our engineering peers. The posts will cover a variety of topics that touch on the range of technologies and techniques used at Informed spanning Machine Learning, Security, DevOps / Platform Engineering and Software Development.

This article shows how to deploy your own EventCatalog to AWS CloudFront via Terraform, with updates to the Catalog delivered via CI/CD (CircleCI in this case, but the approach can be easily applied to other CI systems). It also shows how to use Lambda@Edge to implement Google SSO / OpenID Connect via the Widen/cloudfront-auth project.

EventCatalog was created by David Boyne. It is a wonderful open source project that acts as a unifying documentation tool for Event-Driven Architectures. It helps you document, visualize, and keep on top of your Event-Driven Architectures' events, schemas, producers, consumers, and services. You can follow the links above to find out more about EventCatalog itself.

We are working on a GitHub repo with the full example, which we hope to publish soon.

Create EventCatalog Project

You can create a sample EventCatalog Project using the EventCatalog CLI. This can be the scaffolding for your own project. In this case we’re going to use the sample project it will install as our project for this article.


  • Node.js version >= 14 (which can be checked by running node -v). You can use nvm to manage multiple Node versions installed on a single machine.
  • We're going to be using Node.js version 16.x.x
  • Yarn version >= 1.5 (which can be checked by running yarn --version). Yarn is a performant package manager for JavaScript and replaces the npm client. It is not strictly necessary but highly encouraged.
  • We're using Yarn 1.22.x
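As a quick sanity check, the major-version requirement can be verified programmatically. This small helper is purely illustrative (not part of EventCatalog): it parses the string printed by `node -v` and checks the major version.

```python
def node_version_ok(node_v_output: str, minimum: int = 14) -> bool:
    """Return True if `node -v` output (e.g. "v16.15.0") meets the minimum major version."""
    # `node -v` prints a version prefixed with "v"; strip it and take the major part
    major = int(node_v_output.strip().lstrip("v").split(".")[0])
    return major >= minimum

print(node_version_ok("v16.15.0"))  # True
print(node_version_ok("v12.22.12"))  # False
```

The same pattern works for the Yarn version if you drop the `lstrip("v")`.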

Generate the Scaffold Project Website

Go to a directory on your computer’s filesystem where you want to save the project.

Generate the scaffolding for the project

  • We’re going to call the project my-catalog
  • If you had cloned the Informed/eventcatalog-sandbox GitHub repo, the top-level directory will be eventcatalog-sandbox, but the project name is still my-catalog
npx @eventcatalog/create-eventcatalog@latest my-catalog

This will generate a new directory structure as a git project:

├── services
│   ├── Basket Service
│   │     └──
│   ├── Data Lake
│   │     └──
│   ├── Payment Service
│   │     └──
│   ├── Shipping Service
│   │     └──
├── events
│   ├── AddedItemToCart
│   │     └──versioned
│   │     │  └──0.0.1
│   │     │     └──
│   │     │     └──schema.json
│   │     └──
│   │     └──schema.json
│   ├── OrderComplete
│   │     └──
│   │     └──schema.json
│   ├── OrderConfirmed
│   │     └──
│   │     └──schema.json
│   ├── OrderRequested
│   │     └──
│   ├── PaymentProcessed
│   │     └──
├── static
│   └── img
├── eventcatalog.config.js
├── .eventcatalog-core/
├── package.json
├── yarn.lock
├── Dockerfile
├── .dockerignore
├── .gitignore
└── .git/
  • Change directory into my-catalog
  • You can preview the EventCatalog with the command:
npm run dev
  • And then point your browser to http://localhost:3000
  • You will be able to view the sample Events, Services, and Domains there.
  • Once you are done checking it out, kill the npm process with CTRL-C

Create the Terraform to deploy to Cloudfront

  • Create a terraform directory in my-catalog and add an assets directory to it
  • You could make this directory outside of the catalog if you would prefer to manage it that way
mkdir -p terraform/assets
cd terraform
  • Create a .gitignore in the terraform directory
curl -o terraform/.gitignore

Create the file

This file has all the terraform code to:

  • Set up the terraform environment
  • Specify the AWS provider
  • alt_fqdns is a placeholder for now. You may want to make alt_fqdns a variable; it needs to be a list of strings. It is used to specify aliases for the certificate and DNS, but it is hard to support aliases with the SSO callback.
### Using locals to form variables by concatenating input variables
### Unfortunately can not do that in or <env>.tfvars
locals {
  fqdn        = "${var.app_name}-${var.project_name}.${var.environment}.${var.base_domain_name}"
  alt_fqdns   = []
  zone_name   = "${var.environment}.${var.base_domain_name}"
  lambda_name = "${var.environment}-${var.project_name}-${var.app_name}-${var.lambda_name_suffix}"
}

terraform {
  required_version = ">= 1.2.0"
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Need to use version < 4.0.0 to work with cloudposse/cloudfront-s3-cdn
      version = ">= 3.75.2"
    }
  }

  # You should use a different state management than local
  backend "local" {}
}

provider "aws" {
  region  = var.region
  profile = var.profile
}
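To make the string interpolation in the locals block concrete, here is an illustrative Python equivalent, using the sandbox.tfvars values shown later in this article. The base_domain_name is redacted in the post, so "example.com" stands in for it here.

```python
# Illustrative Python equivalent of the Terraform locals above.
# "example.com" is a stand-in for the redacted base_domain_name.
def expand_locals(app_name, project_name, environment, base_domain_name, lambda_name_suffix):
    return {
        "fqdn": f"{app_name}-{project_name}.{environment}.{base_domain_name}",
        "zone_name": f"{environment}.{base_domain_name}",
        "lambda_name": f"{environment}-{project_name}-{app_name}-{lambda_name_suffix}",
    }

vals = expand_locals("eventcatalog", "blogpost", "rob", "example.com", "000")
print(vals["fqdn"])         # eventcatalog-blogpost.rob.example.com
print(vals["lambda_name"])  # rob-blogpost-eventcatalog-000
```

The fqdn local becomes the CloudFront alias and the certificate's domain name; lambda_name becomes the Lambda@Edge function name.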

Create the file

  • Configure the AWS IAM role and policies for the lambda@edge
  • Create the lambda@edge service
### Set up IAM role and policies for the lambda
data "aws_iam_policy_document" "lambda_edge_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type = "Service"
      identifiers = [
        # The two service principals required for Lambda@Edge
        "lambda.amazonaws.com",
        "edgelambda.amazonaws.com"
      ]
    }
  }
}

# Define the IAM policy for logging from the Lambda function.
data "aws_iam_policy_document" "lambda_edge_logging_policy" {
  statement {
    effect = "Allow"
    actions = [
      # Standard CloudWatch Logs permissions for a Lambda function
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents"
    ]
    resources = ["arn:aws:logs:*:*:*"]
  }
}

# Add IAM policy for logging to the iam role
resource "aws_iam_role_policy" "lambda_edge_logging" {
  name = "${local.lambda_name}-lambda_edge_logging"
  role = aws_iam_role.lambda_edge.id

  policy = data.aws_iam_policy_document.lambda_edge_logging_policy.json
}

# Create the iam role for the lambda function
resource "aws_iam_role" "lambda_edge" {
  name               = "${var.app_name}_lambda_edge_cloudfront"
  assume_role_policy = data.aws_iam_policy_document.lambda_edge_assume_role.json
}

# Create the lambda@edge function
resource "aws_lambda_function" "edge" {
  filename      = var.lambda_file_name
  function_name = local.lambda_name
  role          = aws_iam_role.lambda_edge.arn
  handler       = "index.handler"
  timeout       = "5"
  publish       = true

  # The filebase64sha256() function is available in Terraform 0.11.12 and later
  # For Terraform 0.11.11 and earlier, use the base64sha256() function and the file() function:
  # source_code_hash = "${base64sha256(file(""))}"
  source_code_hash = filebase64sha256(var.lambda_file_name)

  runtime = "nodejs12.x"
}
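For reference, source_code_hash is the base64 encoding of the raw SHA-256 digest of the zip file. A Python equivalent of Terraform's filebase64sha256() looks like this (a sketch, useful when debugging why a new zip did or did not trigger a redeploy):

```python
import base64
import hashlib

def filebase64sha256(path: str) -> str:
    """Python equivalent of Terraform's filebase64sha256():
    base64 of the raw SHA-256 digest of the file's bytes."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return base64.b64encode(digest).decode("ascii")
```

If this value differs from the one in state, Terraform knows the lambda code changed and publishes a new version of the function.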


Create the file

  • Create the CloudFront CDN instance and S3 bucket with lambda@edge association
    • Uses the cloudposse/cloudfront-s3-cdn/aws terraform module to do all the hard work
    • This module currently will work only with the hashicorp/aws provider of versions < 4.0.0
      • This is why we are not using the latest version of the hashicorp/aws provider
    • Create the TLS Certificate using AWS ACM
module "cloudfront-s3-cdn" {
  source  = "cloudposse/cloudfront-s3-cdn/aws"
  version = "0.82.4"

  namespace               = var.bucket_namespace
  environment             = var.environment
  stage                   = var.project_name
  name                    = var.app_name
  encryption_enabled      = true
  allow_ssl_requests_only = false
  # This will allow a complete deletion of the bucket and all of its contents
  origin_force_destroy = true

  # DNS Settings
  parent_zone_id      = var.zone_id
  acm_certificate_arn = module.acm_request_certificate.arn
  aliases             = concat([local.fqdn], local.alt_fqdns)
  ipv6_enabled        = true
  dns_alias_enabled   = true

  # Caching Settings
  default_ttl = 300
  compress    = true

  # Website settings
  website_enabled = true
  index_document  = "index.html"
  error_document  = "404.html"

  depends_on = [module.acm_request_certificate]

  # Link Lambda@Edge to the CloudFront distribution
  lambda_function_association = [{
    event_type   = "viewer-request"
    include_body = false
    lambda_arn   = aws_lambda_function.edge.qualified_arn
  }]
}

### Request an SSL certificate
module "acm_request_certificate" {
  source                            = "cloudposse/acm-request-certificate/aws"
  version                           = "0.16.0"
  domain_name                       = local.fqdn
  subject_alternative_names         = local.alt_fqdns
  process_domain_validation_options = true
  ttl                               = "300"
  wait_for_certificate_issued       = true
  zone_name                         = local.zone_name
}
  • Variable Definitions for EventCatalog-Sandbox
variable "region" {
  description = "The region to use for the Terraform run"
  default     = ""
}

variable "profile" {
  description = "The local IAM profile to use for the Terraform run"
  default     = ""
}

variable "environment" {
  description = "The environment to use for the Terraform run"
  default     = ""
}

variable "project_name" {
  description = "The name of the project to use"
  default     = ""
}

variable "app_name" {
  description = "The name of this app"
  default     = "eventcatalog"
}

variable "base_domain_name" {
  description = "The base domain name for the environment"
  default     = ""
}

variable "bucket_namespace" {
  description = "The namespace prefix for s3 buckets"
  default     = ""
}

variable "zone_id" {
  description = "The route53 zone id for the domain zone of the FQDNs"
  default     = ""
}

variable "lambda_file_name" {
  description = "The name of the lambda function file that was generated by the Widen/cloudfront-auth project"
  default     = ""
}

variable "lambda_name_suffix" {
  description = "The suffix to append to the lambda function name to make it unique if you need to destroy and recreate the CloudFront distribution"
  default     = "000"
}

Create the sandbox.tfvars file

  • This file sets or overrides the default values for the terraform run
    • Set these as appropriate for your environment
    • Region needs to be us-east-1, since Lambda@Edge functions must be created in that region
region           = "us-east-1"
profile          = "sandbox"
environment      = "rob"
project_name     = "blogpost"
app_name         = "eventcatalog"
lambda_file_name = "assets/"

## These must be different for your environment
base_domain_name = ""
bucket_namespace = "informediq"
zone_id          = "Z10***********************K7U"

Create a placeholder lambda code zip file

We have a slight chicken-and-egg problem: we need the CloudFront distribution name to create the lambda@edge code zip file with the Widen/cloudfront-auth project, but the distribution does not exist yet.

So we’ll make a dummy temp zip file to start with.

  • Create a file assets/temp.js with the following content:
exports.handler = async (event) => {
    // TODO implement
    const response = {
        statusCode: 200,
        body: JSON.stringify('Hello from Lambda!'),
    };
    return response;
};

Then, zip that file

cd assets
zip -r temp.zip temp.js  # "temp.zip" is an illustrative archive name; use any name
cd ..
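If zip(1) is not available, the same archive can be produced with Python's standard library. This is a sketch; the paths and archive name are illustrative (use whatever name you will reference in lambda_file_name):

```python
import os
import zipfile

def make_placeholder_zip(js_path: str, zip_path: str) -> None:
    """Zip a single file at the archive root, as Lambda deployment packages expect."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        # arcname drops the directory prefix so the file sits at the zip root
        zf.write(js_path, arcname=os.path.basename(js_path))

# Example (names are illustrative):
# make_placeholder_zip("assets/temp.js", "assets/temp.zip")
```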

Initial deployment with temp lambda@edge code

Setup any credentials/login needed to run the AWS CLI / Terraform CLI from your shell window.

The first time you run things (or anytime you add terraform modules), run:

terraform init

Do the Terraform apply

  • You could do a plan, but we’re deploying for the first time anyway
  • We are specifying it to use the sandbox.tfvars file to supply the input of the variables needed
terraform apply  -var-file=sandbox.tfvars

The first run of this may take a long time to complete. I’ve seen it seem to be stuck at

module.cloudfront-s3-cdn.module.logs.aws_s3_bucket.default[0]: Still creating...
module.cloudfront-s3-cdn.aws_s3_bucket.origin[0]: Still creating...

for more than 30 minutes. I'm not sure why, but after the first run it's fast.

You may also get a warning:

Warning: Argument is deprecated

You can ignore that. It seems to be something deprecated that is used by the cloudposse/cloudfront-s3-cdn/aws terraform module.

  • At the end of the run it will print out the outputs with something like:
Apply complete! Resources: 14 added, 0 changed, 0 destroyed.


cf_aliases = tolist([
cf_domain_name = "d32pr*******"
s3_bucket = "informediq-rob-eventcatalog-origin"

Some of this info will be needed for the following steps to setup the Google SSO.

At this point, if you tried to access the site you would get an error, since the lambda@edge has the dummy code in it. This will be rectified in the following steps.

Build the Lambda@edge code with Widen/cloudfront-auth

Clone the Widen/cloudfront-auth repo in a directory outside of your my-catalog EventCatalog / terraform repo.

   git clone
   cd cloudfront-auth

Follow the instructions in the README for the Identity Provider of your choice. We are going to use the Google Hosted Domain mechanism:

Create the OAuth Credentials in the Google developers console

This assumes you don't already have a Project in the Google Developers Console, but that you do have an account there.

Create a new Project

  • Click on the Projects pulldown on the very top menubar to the right of the Google Cloud logo
  • Click on New project in the modal popup that shows after clicking the pulldown
  • Fill in the New Project Form and click on CREATE

Create OAuth Consent and Credentials

  • Select APIs & Services from the menu bar on the left to go to that page of the project
  • Select Credentials from the new menu bar on the left
  • Click on Configure Consent Screen to configure the OAuth consent info
  • Select Internal and then click on CREATE

Fill in at least

  • App Name (EventCatalog Sandbox)
  • User Support email (This will be a pulldown and should have the email associated with the Google Dev account)

Authorized domains

  • This should be the domain used for the email address of people logging in via Google SSO.
  • In my case this is

Developer contact information email address

  • Can be your email
  • First, Click SAVE AND CONTINUE
  • Second, Click SAVE AND CONTINUE on the next screen (Scopes Page)
  • Third, Click on BACK TO DASHBOARD on the next screen (Summary Page)
  • Fourth, Click on Credentials on the left hand nav bar to get back to the Credentials page
  • Finally, Click on + Create Credentials on the top menu bar and select OAuth client ID from the pulldown
  • Select Web application for the Application type
  • Under Authorized redirect URIs, enter your Cloudfront hostname with your preferred path value for the authorization callback. For our working example:
  • Click CREATE when done

Capture the resulting OAuth Client ID and Secret

  • A modal window will show the OAuth Client ID and secret.
  • You should store that somewhere, though you can also always view it on the Google Console later
  • You can also download the JSON with the info and save it that way

We’re now done with the Google Developer’s Console.

Generate the code for Lambda@edge

NOTE: Make sure you are in the Widen/cloudfront-auth directory for the following commands

Unfortunately, the Widen/cloudfront-auth project has not seen any updates in a while, but it is still widely used.

You can first run:

npm audit fix --force

to at least remove some blatant high-risk vulnerabilities. This does not seem to impact the actual use of the project.

Execute ./ and npm will run to download dependencies, and an RSA key will be generated.

  • There will be some messages about the npm install
  • There is a warning that seems to fill in the value of the first prompt (>: Enter distribution name:); you can ignore the warning and start filling in the values
  • Distribution Name – The value of cf_domain_name from the terraform run
  • Authentication methods – 1 Google
  • Client ID – The Client ID generated in the Google Console OAuth Credentials process
  • Client Secret – The Client Secret generated in the Google Console OAuth Credentials process
  • Redirect URI – The URL based on the domain name for the Cloudfront instance which was passed in the Google Console OAuth Credentials process
  • Hosted Domain – The email address domain name that will be used by people logging in via Google SSO
  • Session Duration – How many hours the session should last until the user needs to re-authenticate
  • Authorization methods – We are selecting 1 for Hosted Domain

NOTE: Redacting a few items for security

>: Enter distribution name: d32pr*******
>: Authentication methods:
    (1) Google
    (2) Microsoft
    (3) GitHub
    (4) OKTA
    (5) Auth0
    (6) Centrify
    (7) OKTA Native

    Select an authentication method: 1
Generating public/private rsa key pair.
Your identification has been saved in ./distributions/d32pr*******
Your public key has been saved in ./distributions/d32pr*******
The key fingerprint is:
SHA256:vJS0/*************************************************iE2ic rberger@tardis.local
The key's randomart image is:
+---[RSA 4096]----+
|       .o. =. .==|
|         oo.+.=.+|
|        ooo .o.B.|
|       o.+E...= .|
|        S .o   o.|
|       . +... + o|
|        o o+.o + |
|         . ==..  |
|          +=+o.  |
writing RSA key
>>: Client ID: 787***********************
>>: Client Secret: GOCSPX-****************untA
>>: Redirect URI:
>>: Hosted Domain:
>>: Session Duration (hours):  (0)  12
>>: Authorization methods:
   (1) Hosted Domain - verify email's domain matches that of the given hosted domain
   (2) HTTP Email Lookup - verify email exists in JSON array located at given HTTP endpoint
   (3) Google Groups Lookup - verify email exists in one of given Google Groups

   Select an authorization method: 1

Copy the resulting zip file found in the distributions folder in the Widen/cloudfront-auth directory to the assets directory under your terraform directory

  • The process will output the path where the zip file was saved.
  • In my setup the command to do the copy is:
cp distributions/d32pr************** ../my-catalog/terraform/assets

Deploy the EventCatalog Content to S3

You can deploy the content manually, but you really should use a CI/CD system to deploy the EventCatalog content.

Manual deployment

The key actions needed are to:

  • Change directory to be in the top of the EventCatalog repo
  • Build the static assets using the EventCatalog cli
  • Copy the static assets to the S3 bucket created by Terraform

First, we’ll show doing it manually

Build the static assets

  • Assumes you are in the top level of the EventCatalog Repo
  • You only need to do yarn install the first time you use any of the commands
yarn install
yarn build

Upload the static assets to S3

  • Assumes you have installed the AWS CLI
  • You have configured your local shell environment with the proper IAM Profile to run the AWS CLI
  • Use the actual s3 bucket you created in your terraform run
  • The example shows the bucket we’ve used in our working example
aws s3 sync .eventcatalog-core/out s3://informediq-rob-eventcatalog-origin
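The sync step can also be scripted. Below is a hedged sketch of the simplest equivalent using boto3; the bucket name is the article's working example, and the helper names are my own. Note that unlike `aws s3 sync`, this uploads everything unconditionally and does not delete removed files.

```python
import os

def iter_upload_pairs(root: str):
    """Yield (local_path, s3_key) for every file under root,
    mirroring the key layout `aws s3 sync` would produce."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            local = os.path.join(dirpath, name)
            # S3 keys always use forward slashes, relative to the sync root
            yield local, os.path.relpath(local, root).replace(os.sep, "/")

def upload_site(root: str, bucket: str) -> None:
    # boto3 is imported lazily so iter_upload_pairs works without AWS installed
    import boto3
    s3 = boto3.client("s3")
    for local, key in iter_upload_pairs(root):
        s3.upload_file(local, bucket, key)

# Example with this article's bucket name:
# upload_site(".eventcatalog-core/out", "informediq-rob-eventcatalog-origin")
```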

Deployment with CircleCi

  1. Assumes you have a CircleCI account and have it hooked up to your Github account.
    • It is beyond the scope of this article to show how to setup and use Github and CircleCI
  2. You will need to set CircleCi Project or Context environment variables:
    • AWS_REGION (needs to be us-east-1)
  3. Create the .circleci directory at the top of your EventCatalog repo directory
  4. Create a file .circleci/config.yml with the following content
    • You will need to substitute the s3 bucket name with the one you actually created with terraform
version: 2.1

# CircleCi Orbs (libraries) used by this config
orbs:
  node: circleci/node@5.0.2
  aws-s3: circleci/aws-s3@3.0.0

jobs:
  eventcatalog-contentbuild:
    docker:
      - image: cimg/node:16.15
    steps:
      - checkout

      - run:
          name: Install EventCatalog tooling
          working_directory: ~/project
          command: if [ ! -e "node_modules/@eventcatalog/core/bin/eventcatalog.js" ]; then yarn install; else echo "eventbridge seems to be cached"; fi;

      - run:
          name: Build the EventCatalog static content
          working_directory: ~/project
          command: |
            echo Running eventbridge build in `pwd`
            yarn build

      - aws-s3/sync:
          # Copy the static content to the S3 bucket
          # Replace the s3 bucket name with the one you actually created with terraform
          aws-region: AWS_REGION
          from: ~/project/.eventcatalog-core/out
          to: s3://informediq-rob-blogpost-eventcatalog-origin

workflows:
  # The workflow name is arbitrary
  build-and-deploy:
    jobs:
      - eventcatalog-contentbuild:
          # We're getting the AWS Credentials from our CircleCI Organization context
          # You could also just use Project level Environment Variables with
          context:
            - rberger-aws-user-creds

Once you have created this file and committed everything in your EventCatalog repo, push it to GitHub, which should trigger your CircleCI run.

  • You can confirm that the content was sent to S3 by using the AWS Console or CLI to view the contents of the S3 bucket.

Deploy the new lambda@edge code with terraform

Go back to your terraform directory.

  • Make sure the new zip file is in the assets directory

Update the tfvars input file (sandbox.tfvars in our working example) with the new filename

  • lambda_file_name = "assets/d32pr*******"
region           = "us-east-1"
profile          = "sandbox"
environment      = "rob"
project_name     = "blogpost"
app_name         = "eventcatalog"
lambda_file_name = "assets/d32pr*******"
## On first run, set lambda_file_name to `assets/`
# lambda_file_name = "assets/"

## These should be different for your environment
base_domain_name = ""
bucket_namespace = "informediq"
zone_id          = "Z10***********************K7U"

Run terraform apply

terraform apply -var-file=sandbox.tfvars

A successful run will display the output values

  • They should be something along the lines of the following:

cf_aliases = tolist([
cf_domain_name = "d32pr*******"
s3_bucket = "informediq-rob-eventcatalog-origin"

You should be able to go to either of your cf_aliases.

  • For instance:
  • If you aren’t already logged in, it should pass you to Google SSO authentication.
  • Once you are logged in you should see the Home Page of the EventCatalog

You can now start using the EventCatalog by updating the source files to fit your Domains, Services, and Events.

Improvements? Suggestions? Alternatives?

Please feel free to comment or contact me if you find any bugs, issues or have suggestions for improvements!

I am interested in hearing about alternatives to the Widen/cloudfront-auth for generating the lambda@edge code that does the OAuth Authenticator, as it has not been updated in a while. It would be nice to find a way to have it more integral to a single Terraform repo.

Rob Berger Chief Architect
As the Chief Architect, Rob guides the evolution of the InformedIQ Software and Infrastructure. His experience spans the rise and fall of many technology lifecycles from machine vision, digitization of professional video production equipment, Internet Infrastructure, Wireless, E-commerce, Big Data, IoT, DevOps and Machine Learning. He has been a founder or a technical leader in several startups in Silicon Valley.
