Nikolas Knickrehm

Deploying SwaggerUI with Terraform and S3 for Multiple Teams

Great documentation is a key factor for the success of an API, and the OpenAPI specification provides the tools for writing it. An OpenAPI specification can even be imported by specialized software to create nice and functional visualizations or to generate generic clients and servers.

In our team we were using SwaggerHub but noticed that we did not really need most of its premium features and that our documentation was often neglected when changes were made to the API. One of the reasons we identified was that the Swagger file containing the specification was hosted at SwaggerHub, far away from the actual code, and people forgot to update it when working on a ticket.

Since SwaggerUI is open source, we decided to host it ourselves and enable other teams to use our hosted solution as well. The easiest way of hosting SwaggerUI was to set up an S3 bucket on AWS that is configured to serve static websites. In Terraform this can be done with just a few lines of code:

resource "aws_s3_bucket" "swaggerui" {
  bucket = "my-swagger-ui"
  acl    = "public-read"

  website {
    index_document = "index.html"
  }
}
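To find out where the UI will be reachable, the bucket's website endpoint can be exposed as a Terraform output (a minimal sketch; the output name is our own choice):

output "swaggerui_endpoint" {
  # Region-specific S3 website endpoint, e.g.
  # my-swagger-ui.s3-website.eu-central-1.amazonaws.com
  value = aws_s3_bucket.swaggerui.website_endpoint
}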

We extracted the ./dist folder from the SwaggerUI repository into our Terraform repository and made sure that all of its files are synced to the bucket.

resource "aws_s3_bucket_object" "swaggerui-file" {
  for_each     = fileset("${path.module}/dist/", "*")
  acl          = "public-read"
  bucket       = aws_s3_bucket.swaggerui.id
  key          = each.value
  source       = "${path.module}/dist/${each.value}"
  etag         = filemd5("${path.module}/dist/${each.value}")
}
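Note that the * pattern only matches files at the top level of the dist folder, which is fine here because the folder is flat. If it ever gained sub-directories, a recursive glob would be needed (a hypothetical variant):

# "**" also matches files in nested sub-directories
for_each = fileset("${path.module}/dist/", "**")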

After deploying the resources with Terraform, we ran into an interesting bug when trying to access the bucket's public URL. Instead of being rendered in the browser, the index.html was downloaded as a file. When uploading a file to S3, the MIME type is automatically set to application/octet-stream if nothing else is specified. Since there is a known set of file types in the SwaggerUI dist folder (html, css, js, png & later json), we extended the aws_s3_bucket_object resources and added a small helper to set the correct MIME type for each file.

locals {
  mime_types = {
    html = "text/html"
    js   = "application/javascript"
    css  = "text/css"
    map  = "application/json"
    png  = "image/png"
    json = "application/json"
  }
}

resource "aws_s3_bucket_object" "swaggerui-file" {
  for_each     = fileset("${path.module}/dist/", "*")
  acl          = "public-read"
  bucket       = aws_s3_bucket.swaggerui.id
  key          = each.value
  source       = "${path.module}/dist/${each.value}"
  etag         = filemd5("${path.module}/dist/${each.value}")
  
  content_type = lookup(local.mime_types, regex("\\.([0-9a-z]+)$", each.value)[0], null)
}
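One caveat: regex raises an error if a file name has no extension at all. Should that ever become a concern, the expression can be guarded with try() (a hypothetical, more defensive variant):

# Fall back to a MIME type of null for files without an extension
content_type = lookup(local.mime_types, try(regex("\\.([0-9a-z]+)$", each.value)[0], ""), null)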

After accessing the bucket's URL again, SwaggerUI was rendered correctly with the default "Petstore" API specification. If that does not work for you, try clearing your browser cache before reloading the page.

Since we do not sell pets at our client, we needed a way to serve other API specifications. We wanted our setup to be used for our own API specifications, including all API versions and pre-release stages, while keeping it open for other teams and all of their specifications. Therefore, we came up with hosting the specification files in a sub-directory of the bucket and passing their paths via the URL query parameter configUrl. If you put in a simple path like /swaggerfiles/swagger.json, the matching file is served from the bucket. If a team does not want to use our bucket to host their Swagger files, they can also specify a full external URL instead.

To enable this setup we needed to make a small modification in the index.html:

<!-- Line 38 ff. -->
<script>
  window.onload = function() {
    const params = new URLSearchParams(window.location.search);
    const configUrl = params.get("configUrl");
    
    // Begin Swagger UI call region
    const ui = SwaggerUIBundle({
      url: configUrl ? configUrl : "https://petstore.swagger.io/v2/swagger.json",
      dom_id: '#swagger-ui',
      deepLinking: true,
      presets: [
        SwaggerUIBundle.presets.apis,
        SwaggerUIStandalonePreset
      ],
      plugins: [
        SwaggerUIBundle.plugins.DownloadUrl
      ],
      layout: "StandaloneLayout"
    });
    // End Swagger UI call region

    window.ui = ui;
  };
</script>

Now we can upload configuration files from different projects to the shared bucket and serve them via our S3 bucket's public URL.
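For example, another project could publish its specification with a resource like this (a sketch; the resource name, key and file name are made up):

resource "aws_s3_bucket_object" "petshop_spec" {
  bucket       = aws_s3_bucket.swaggerui.id
  key          = "swaggerfiles/petshop/swagger.json"
  source       = "${path.module}/petshop-swagger.json"
  etag         = filemd5("${path.module}/petshop-swagger.json")
  acl          = "public-read"
  content_type = "application/json"
}

The UI would then render that specification when opened with ?configUrl=/swaggerfiles/petshop/swagger.json appended to the bucket's website URL.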

The only thing still missing was to grant other AWS accounts (owned by our team and neighbor teams) permission to upload configuration files to the bucket. We simply created an S3 bucket policy in Terraform that gives certain accounts restricted write access to their own sub-directories, so that each team can manage its Swagger files in an isolated environment.

resource "aws_s3_bucket_policy" "swagger_bucket_policy" {
  bucket = aws_s3_bucket.swaggerui.id
  policy = jsonencode({
    Version = "2012-10-17"
    Id      = "swagger_bucket_policy"
    Statement = [
      {
        Sid = "swagger_bucket_policy_object_write_policy"
        Effect = "Allow"
        Principal = {
          AWS = "arn:aws:iam::<accountId>:root"
        }
        Action = [
          "s3:DeleteObject",
          "s3:PutObject",
          "s3:PutObjectAcl"
        ]
        # The trailing /* is needed so the policy matches the objects inside the sub-directory
        Resource = "${aws_s3_bucket.swaggerui.arn}/swaggerfiles/other-team/*"
      }
    ]
  })
}

Mischief managed! In my next post I will describe how we upload/update the Swagger files of our public API automatically whenever an API stage is deployed with the Serverless framework.
