
What does it mean to go serverless?

Serverless application development with Typescript — what are my options?

It’s time to look closely at serverless development. Come on a journey to investigate the serverless development model with Typescript. This article will take you through a potential framework, an ORM, a relational database provider, an approach to testing, and our conclusions.

What is serverless?

Serverless computing is a shift away from the traditional setup where a developer looks after the server, keeping it secure, available and up and running at all times. Serverless removes the management of server infrastructure and physical hardware from the developer.

Why do I like serverless?

It’s simpler. Serverless architecture makes sense. It’s a lower cost, more environmentally sustainable option where apps aren’t sitting idle on servers. As a developer I just want to focus on the code: building and maintaining applications. With serverless I don’t need to juggle infrastructure and hardware.

Cheaper, better, faster

Going serverless saves money. You only pay for what you use. It also saves energy. Our carbon footprint from having servers waiting for requests, burning CPU cycles, is reduced. Serverless also saves time. It reduces the need to spend time provisioning resources. In theory, developing using a serverless architecture (and an associated framework) should be simple and be able to be performed by anyone, not just someone with an infrastructure background.

In a serverless environment, servers don’t sit idle waiting to respond to requests. Instead, functions are spun up as required. Developers no longer need to worry about the hardware applications run on. This can also assist with the scalability of a service.

With client demand for serverless application development rising, we experimented with some options.

Serverless development with Typescript

Here at Abletech, we’re always on the lookout for new frameworks and methodologies. Andrew and I investigated Typescript, along with developing using a serverless development model, for future projects.

We needed to start with research into Typescript on a serverless architecture, so we looked into various technologies and design patterns. We have lots of experience building APIs in Ruby on Rails and in Elixir with the Phoenix framework, so we tried to apply our experience and similar patterns from those frameworks.

This blog article outlines our findings and what we liked/didn’t like about what we tried — note this isn’t your typical ‘getting started’ tutorial but more of a summary of our learnings.

Why Serverless and Typescript, you ask?

Why Typescript? Well, coming from Ruby and Elixir — both dynamically typed — we wanted to trial working with a typed language and see if this helped with our efficiency. With a lot of our developers using VS Code, we’d heard how well Typescript integrated and wanted to give it a try. Additionally, many of our developers are already competent with Javascript, making the transition super easy.

Time to pick a framework — Next.js

After looking into several frameworks, our first step was to investigate Next.js. Next.js is an open-source React web framework primarily used for generating static websites with server-side rendering, and it is highly influenced by the Jamstack architecture. Abletech has used Next.js previously for several projects, and Kalo has used it for his own personal blog, so we had some experienced team members that we could lean on for inspiration.

We liked that Next.js came with built-in support for API routes, which allowed for fast and easy development using a folder-based structure. Next.js being an opinionated framework meant we could focus on writing our application code rather than setting up the application itself. Out of the box, you could export a handler with a request and response object as arguments. Next.js provided helpers to return JSON, redirect and set the HTTP status, as well as to parse route and query parameters.

Relying on the folder-based structure meant we could set up a simple User CRUD API with two files and in a matter of minutes.

For example, within our pages/api/users directory, we can add the following which will handle listing all users:

import type { NextApiRequest, NextApiResponse } from 'next'

const handler = async (_req: NextApiRequest, res: NextApiResponse): Promise<void> => {
  const users = [{ id: '123', name: 'Bob', email: '' }]
  res.status(200).json(users)
}

export default handler

What about an ORM?

Having set up our routes, the next step was to integrate with a database. We wanted to maintain some familiarity with Rails and Phoenix, so decided to stick with a standard relational database and PostgreSQL. There were many ORMs that supported PostgreSQL; however, we settled on Prisma due to its ease of integration with Next.js.

Prisma is a relatively new ORM that focuses on developer productivity. The strength here is that there’s a single source of truth between the database schema and the ‘models’ defined in the application’s schema. This took a little getting used to, as with Rails and Phoenix we were used to defining models separately from the schema. Prisma’s client component also provides type-safe database queries, making it easy to catch errors at compile time.
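That single source of truth lives in the Prisma schema file. An illustrative sketch of what a User model looks like in Prisma’s schema language (the fields here mirror our example data, not a real project):

```prisma
// schema.prisma — both the database tables and the generated client types come from this
model User {
  id    String @id @default(uuid())
  name  String
  email String @unique
}
```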

We liked how seamlessly Prisma integrated with Next.js, and Prisma Studio allowed us to create data easily using a browser-based tool. Prisma’s prisma db push command was a new concept to us but allowed for rapid prototyping. Essentially, this command pushes the schema we’d defined locally in the Prisma schema file to the database, allowing setup without the need for migrations.

Whilst having local schemas created by introspecting the database was great for developer productivity, we found we were lacking some functionality that we were used to with ActiveRecord (Rails’ ORM) and Ecto (Elixir’s query/database wrapper). Validation seemed to be left to the database level, meaning we couldn’t cleanse or verify our data before inserting it. Arguably, this was by design; however, it’s nice having that functionality out of the box with ActiveRecord and Ecto. Whilst Prisma provides a validator API which takes the generated model’s type and ensures that input being passed in is type safe, we found this difficult to use, and it didn’t actually strip out invalid keys if we passed in the entire request params body.
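One workaround we could have reached for is a small whitelist helper in the spirit of Rails’ strong parameters, run before params ever reach the ORM. The `permit` function and its key list below are our own sketch, not part of Prisma’s API:

```typescript
// Only the fields we expect on a User
type UserParams = { name?: string; email?: string }

// Copy across only the allowed keys; anything else from the request body is dropped.
export const permit = (
  params: Record<string, unknown>,
  allowed: Array<keyof UserParams>
): UserParams => {
  const cleaned: Partial<Record<keyof UserParams, unknown>> = {}
  for (const key of allowed) {
    if (key in params) cleaned[key] = params[key]
  }
  return cleaned as UserParams
}

// An unexpected `admin` key is silently stripped:
// permit({ name: 'Bob', admin: true }, ['name', 'email']) → { name: 'Bob' }
```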

Additionally, Prisma seemed to lack support for transactions. Prisma Client provided functionality to insert records and associations in a transaction, as well as for batch updating/creating/deleting records; however, if you wanted to perform some complex business logic wrapped in a transaction, you’d run into trouble. Prisma does provide a transaction function where you can pass through raw queries, but this somewhat defeats the purpose of using an ORM if you’re writing all your complex queries in SQL. If you want to read more, we found this Reddit thread pretty interesting.

As we were only using Prisma for simple CRUD operations, it sufficed for our requirements and hence we decided to push on — in future, we would most likely investigate other ORMs for our use cases.

How will a relational database work with serverless applications?

Getting our database set up locally with Prisma was simple enough: creating a docker-compose file, adding a database service, exposing the port, constructing a connection string and exporting it as an environment variable to our project. However, we needed a solution that would work when we deployed our Lambda functions to AWS. Traditionally, we’d set up an RDS instance which our service would communicate with directly.
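For reference, the local setup we describe can be sketched in a docker-compose file along these lines — the image tag, credentials and database name are illustrative, not our real configuration:

```yaml
# docker-compose.yml — a throwaway local Postgres for development
version: '3.8'
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: app_dev
    ports:
      - '5432:5432'
```

The matching connection string would then be exported as something like DATABASE_URL=postgresql://postgres:postgres@localhost:5432/app_dev.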

With the use of Lambda, we needed to be able to support the potential for cold starts and a high number of connections continuously being opened and closed against the database. Without appropriate management or throttling, we could easily overwhelm our RDS instance. This is where AWS’s RDS Proxy comes in.

RDS Proxy will pool and share connections for us, allowing for scalability and efficiently managed connections. Rather than creating a new connection for every Lambda invocation, RDS Proxy will attempt to reuse existing connections from warm connection pools. RDS Proxy sits between our Lambda functions and the RDS instance, abstracting all the connection management away from the developer. In essence, we connect to the RDS Proxy and it handles everything else for us.

We liked how easy it was to set up an RDS Proxy; it was simple enough to follow the instructions in the AWS console and connect one to our existing RDS instance. RDS Proxy also meant we could interact with the database as we normally would, no different to connecting to a traditional RDS instance. The only downside we noticed was the need to have the RDS instance and the proxy within a security group. This meant the database was locked to one region, meaning availability could be somewhat reduced if we wanted to create a service that was to be used globally.

Potentially, we could’ve investigated a NoSQL option such as DynamoDB global tables, or gone down the route of a non-AWS provider such as Fauna — which was built for serverless applications.

How do I deploy my Next.js application?

We wanted to stick with AWS as we use their services across a lot of our projects and we’re already familiar with their products and how they operate. We explored the Serverless framework to see if it could abstract away some of the resource provisioning for us. So, what is the Serverless framework? It’s a free, open-source, Node-based framework allowing for the provisioning of event-driven resources. It’s provider agnostic, meaning it’ll work across AWS, Google Cloud Platform, Azure etc. All definitions and requirements are defined in a serverless.yml file in the root directory of the project.

Having settled on AWS as the provider, we needed to provision Lambdas and API Gateway endpoints. For this, we used the Serverless Next.js component. This component deploys our API routes to Lambda@Edge functions, allowing them to be run and served from CloudFront edge locations. The idea behind the component is zero configuration by default, with defaults that can be extended. Sounds good, but note this caused some problems for us… more to come on this later.

Deploying our API routes was incredibly easy: we added a serverless.yml file, specified that we wanted to use the Next.js Serverless component, and away we went. Our functions were deployed and APIs were created with a single serverless command. However, in order to allow our Lambdas access to our RDS Proxy, we needed to add the Lambdas into the same VPC as our proxy and RDS instance… Unfortunately, this isn’t possible when your Lambda functions are deployed to edge locations.

Our serverless.yml was as simple as a single component definition:

  component: "./node_modules/@sls-next/serverless-component"

In order to solve this, we could’ve exposed our proxy to the world (bad) or allowed various IPs (slightly less bad); however, this would’ve defeated the purpose of deploying our functions to edge locations. We also investigated whether we could use regular Lambda functions, deployed to our local ap-southeast-2 region. However, this was not possible using the Serverless Next.js component, although the developers are working towards genericising the component and adding other provider support.

In the end, we determined that whilst Next.js is great when we have lots of static content, it may not be the best solution for server-side development. Also, using the Next.js Serverless component with an RDS Proxy isn’t possible unless you roll your own stack and construct your own Serverless templates — defeating the purpose of the ease of use of Serverless.

Can we use Serverless without a framework?

Having decided not to go down the route of Next.js, we looked at alternatives. We enjoyed using the Serverless framework, so decided to investigate rolling our own stack and moving away from a framework, whilst still using Prisma as our ORM, and RDS and RDS Proxy for our database.

Our first step was to set up a serverless.yml file, with various APIs all pointing to functions — again, we wanted to create simple CRUD APIs for a User model. This is what we came up with:

service: typescript-serverless-boilerplate

provider:
  name: aws
  runtime: nodejs12.x
  region: ap-southeast-2
  environment:
    NODE_ENV: dev

plugins:
  - serverless-plugin-typescript
  - serverless-offline
  - serverless-dotenv-plugin

functions:
  findUsers:
    handler: app/handler.findUsers
    events:
      - http:
          path: users
          method: get
  findUser:
    handler: app/handler.findUser
    events:
      - http:
          path: users/{id}
          method: get
  createUser:
    handler: app/handler.createUser
    events:
      - http:
          path: users
          method: post
  updateUser:
    handler: app/handler.updateUser
    events:
      - http:
          path: users/{id}
          method: put
  deleteUser:
    handler: app/handler.deleteUser
    events:
      - http:
          path: users/{id}
          method: delete

After defining the functions, we created a handler, mapping the definitions to functions that will execute our requests:

import { APIGatewayProxyHandler } from 'aws-lambda'

import { createOne, deleteOne, find, findOne, updateOne } from './controller/users'

export const findUsers: APIGatewayProxyHandler = () => {
  return find()
}

export const findUser: APIGatewayProxyHandler = event => {
  return findOne(event)
}

export const createUser: APIGatewayProxyHandler = event => {
  return createOne(event)
}

export const updateUser: APIGatewayProxyHandler = event => {
  return updateOne(event)
}

export const deleteUser: APIGatewayProxyHandler = event => {
  return deleteOne(event)
}
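Each controller function behind these handlers just needs to return an API Gateway-shaped response. A minimal sketch of what `find` could look like — the response type is written inline so the snippet stands alone, and the literal users array is a stand-in for a real Prisma query:

```typescript
// Mirrors the { statusCode, body } shape of aws-lambda's APIGatewayProxyResult
type ProxyResult = { statusCode: number; body: string }

// controller/users.ts sketch — a real controller would call prisma.user.findMany()
export const find = async (): Promise<ProxyResult> => {
  const users = [{ id: '123', name: 'Bob', email: '' }]
  return {
    statusCode: 200,
    body: JSON.stringify(users),
  }
}
```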

We can also specify VPC configuration and security group IDs so that Lambdas are provisioned within our existing security group and can talk to other resources that may already exist or were created earlier:

provider:
  name: aws
  runtime: nodejs12.x
  region: ap-southeast-2
  vpc:
    securityGroupIds:
      - sg-xxxx
    subnetIds:
      - subnet-xxx
      - subnet-xxx
      - subnet-xxx

With our Lambdas being provisioned in our existing security group, we’re able to communicate with our RDS Proxy. This was done by specifying the database URL as an environment variable. We could also have provisioned our RDS Proxy and RDS instance using a CloudFormation template, which Serverless supports out of the box; this would be added under the “resources” section of the yml file.
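That environment variable can live in the provider block of the serverless.yml; the proxy endpoint, database name and credentials below are placeholders, not real values:

```yaml
provider:
  environment:
    DATABASE_URL: postgresql://app_user:${env:DB_PASSWORD}@my-proxy.proxy-xxxx.ap-southeast-2.rds.amazonaws.com:5432/app_db
```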

Being able to develop locally was also a major requirement. Utilising the serverless-offline plugin, we were able to run our service on our own machines, with Lambda functions and APIs replicated as if we were calling them directly on AWS. Again, this made development super easy, and we could test locally that what we were deploying would work.

Whilst we lost some of the sensible defaults and structure by rolling our own stack, we were impressed by the flexibility of Serverless components and how easy it was to set up our Lambdas and API Gateway. By defining all our required functions in the yml file, we were able to structure our code using a standard MVC pattern and develop no differently to if we were deploying to a more traditional platform. Serverless deploys were often quick as well, making for a fast feedback loop. The monitoring provided by AWS Lambda, along with offline testing via the serverless-offline plugin, meant debugging was simple and allowed us to pinpoint errors in our code.

Whilst this was super cool and fun to play around with, being new to Typescript and creating an Express-backed API meant that in future, I’d probably feel more comfortable following a framework, or doing some reading into established patterns for Typescript API apps. However, for a simple CRUD API, I enjoyed the ease with which you could define a handler and write to the database without much overhead. Additionally, we managed to provision Lambdas and API Gateway endpoints without even going near the AWS console.

But what about Typescript?

Not being too experienced in the world of Typescript, and not following any particular framework, we wanted to set up our boilerplate app with some sensible defaults for formatting and linting. For starters, we followed Chris Hager’s blog article, setting up our tsconfig using his instructions and linting using typescript-eslint. This gave us a modern toolset for Typescript development in 2021; with tools constantly changing and becoming deprecated, we found Chris’s article helpful in setting up a boilerplate Typescript project.

The last step was to set up our app with a Prettier config and linting/formatting on commit and push using Husky.

A note on testing

Another one of our requirements when doing this analysis was a decent testing framework. We settled on Jest, as we’d had experience using it with our React-based UIs and because it can run tests in parallel.

We started off by unit testing our users controller, following the examples in the Prisma documentation. Whilst this was great for asserting that our requests/responses returned expected results, we were mocking the database provider, meaning we couldn’t fully test a real-world scenario. Now, if you Google the debates on whether to test database integration in your unit tests, or leave it to your integration tests, you’ll come up with a myriad of differing opinions. Side-stepping this argument, having come from Rails and Phoenix land, we felt more comfortable testing this integration in our unit tests.

Usually in Rails or Phoenix, we would wrap our tests in transactions, execute the function being tested, assert what’s expected and rollback the database so it’s in a clean state ready for the next test. This is generally handled for us if we’re using a testing library such as RSpec or ExUnit. Unfortunately, finding a similar library and integrating it with our project was quite difficult. Additionally, if we were to follow this process, we wouldn’t be able to take advantage of Jest’s parallel test execution.

Enter IntegreSQL. Created by allaboutapps, IntegreSQL uses PostgreSQL templates to create a database per test, allowing for fast, parallel tests utilising a real database. When our test runner starts, we’re able to initialise a template, execute our migrations and finalise it. Before each test, we initialise a test database (created from our template), point Prisma at the newly created database, run our test and assert what’s expected. IntegreSQL’s interface is a RESTful JSON API, allowing us to simply make HTTP calls in beforeAll and beforeEach setup functions. Additionally, as IntegreSQL can be run using Docker, we can add it into our Docker network and have all our containers talking to each other fairly easily.

import { exec as childProcessExec } from 'child_process'
import util from 'util'
import axios from 'axios'
import prisma from '../../lib/prisma'

import { v4 as uuidv4 } from 'uuid'

const PRISMA_BINARY = './node_modules/.bin/prisma2'

const hash = uuidv4()
let databaseId: string

beforeAll(async () => {
  const exec = util.promisify(childProcessExec)

  // Create a template database
  const { data: createData } = await axios.post(`${process.env.INTEGRESQL_URL}/templates`, { hash })

  // Create the tables
  const databaseUrl = `${process.env.DATABASE_HOST}/${createData.database.config.database}`
  await exec(`DATABASE_URL=${databaseUrl} ${PRISMA_BINARY} db push --preview-feature --force-reset`)

  // Finalise the template
  await axios.put(`${process.env.INTEGRESQL_URL}/templates/${hash}`)
}, 40_000)

beforeEach(async () => {
  // Set up a new test database per test
  const { data: getData } = await axios.get(`${process.env.INTEGRESQL_URL}/templates/${hash}/tests`)

  // Export it as the DATABASE_URL so Prisma knows
  databaseId = getData.database.config.database
  process.env.DATABASE_URL = `${process.env.DATABASE_HOST}/${databaseId}`
})

afterEach(async () => {
  // Disconnect Prisma so the per-test database can be cleaned up
  await prisma.$disconnect()
})

Our conclusions

To summarise, we investigated several different options for potential Serverless Typescript API-only services. Whilst we enjoyed using Next.js’s API routes, this didn’t seem like a particularly extensible solution for a logic-heavy, API-only backend app; it seemed better suited to static-content sites. On top of that, without rolling our own Serverless configuration for Next.js, we couldn’t deploy to a regular Lambda function, meaning we weren’t able to provision our Lambdas in a VPC with access to our RDS Proxy.

Prisma as an ORM was good for a simple CRUD API app, and would probably suit applications without complex business logic. We enjoyed the developer experience and how easy it was to set up, although the singular schema did take a bit of getting used to. Prisma does, however, lack some of the functionality — such as validation, transaction management and some connection configuration — that we are used to with other ORMs, so in future we’d probably look at the viability of using another ORM.

RDS Proxy was great — we liked how we could interface with it exactly how we would with a regular RDS instance. Setting up was a breeze and we could add it into our VPC and security group in no time at all.

Whilst we couldn’t use Next.js, the next best option was to roll our own Serverless stack. This proved slightly more beneficial and allowed more flexibility in defining and provisioning our own resources. Again, we liked how easy it was to set up Lambdas and an API Gateway endpoint with everything defined in the yml file. Configuring VPCs, and the fact we could define CloudFormation templates within the yml file, made for a nice developer experience too. Being new to Typescript, we were a little worried about going it alone without a framework, so in future we’d probably either investigate other frameworks or dig further into common design patterns in Typescript.

Lastly, IntegreSQL was a great tool allowing for fast, parallel execution of tests with each test receiving its own database. It fulfilled our need for unit testing our app with proper database testing without the need to mock our database provider. We’re looking at rolling this tool out across some of our other backend services.
