Case Study: Coding a High-Performance Websocket App (2021 Tutorial)

Webworkers, Websockets & Server Side Rendering on Typescript NextJS

Kangzeroo
17 min read · Jul 3, 2021
View this web app live at https://crypto-orderbook-orcin.vercel.app/

This tutorial is part of an ongoing series called Keep the Dev Alive 🔥. It’s my lifelong plan to always keep learning & building, as I believe coding is an extension of humanity.

Today we are going to learn how to create an Order Book for cryptocurrency trades (View on Github). This is an advanced web tutorial intended for intermediate audiences.

Understanding the Orderbook

In simple terms, an order book is a live view of all trades of a security summarized in a chart. This order book will be trades of the cryptocurrency Ethereum ETH for US Dollars USD.

On the left hand side (red) are the Asks, which are people who want to sell ETH. On the right hand side (green) are the Bids, which are people who want to buy ETH in exchange for USD. We can easily see that there are a lot more people who want to buy ETH than sell it 🤣

Looking closer at the GIF above, we can see columns for price, size and total. The price is how much USD one ETH is selling for. The size represents how many coins of ETH are being bought or sold at that price. This is real-time data as of July 3rd 2021, so we can see that ~1003 coins of ETH are being sold at $2211.90 USD each. As we go down the table, we can see the size increases as price increases. This makes sense, as sellers want to sell at higher prices. There are ~300,000 coins of ETH being sold at $2214.35 each. The prices keep fluctuating, so you’ll have to stare at that GIF for a while to see the figures I am quoting 😉 Don’t blink.

Finally, the total column represents the cumulative number of coins being bought/sold — it is the sum of all the sizes up to and including that row. It is also what determines the width of the red/green bars. So for example, at $2214.35 USD per ETH, the total is 619,012. This means if you wanted to sell your ETH for more than $2214.35, you would need to wait for 619,012 coins to be sold at a lower price before someone would buy at your price. That makes sense, as buyers want the cheapest price. The green/red color behind the total is useful because we can visually compare the two sides to see where demand/supply is greater (although you can't compare it directly row by row since the prices differ; you can only treat it as a volumetric approximation).
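To make the total column concrete, here's a small sketch (an illustrative helper, not code from the project) that computes the running total from [price, size] pairs:

```typescript
// Each order book level is a [price, size] pair; the "total" column is the
// running sum of sizes up to and including that row.
type Level = [price: number, size: number];

function withTotals(levels: Level[]): { price: number; size: number; total: number }[] {
  let running = 0;
  return levels.map(([price, size]) => {
    running += size;
    return { price, size, total: running };
  });
}
```

So two levels with sizes 100 and 150 produce totals of 100 and 250 — the cumulative figures that drive the width of the colored bars.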

Ok, now that we’re on the same page about what it is we’re building, let’s hop into the actual project itself.

Project Requirements

This was a Senior Engineer challenge project I found online. You can see the code at my Github repo here. Here are the requirements:

  • Process real-time updates streamed from a Crypto Facilities websocket
  • Must run smoothly on low-end devices
  • No external libraries
  • Server side rendering with NextJS and Typescript
  • Strict type coverage with no use of any
  • Sufficient test coverage
  • Must apply best practices, assuming it lays the groundwork for the rest of the frontend team to use
  • Mobile responsive

Seems simple enough. But actually, there are many nuances for performance optimization that can go deep… very deep. This tutorial will cover the most important parts, but certainly more can be done to achieve even better performance. The most important concern from the above requirements is “must run smoothly on low-end devices”.

A taste test from the Firehose

Let’s take a look at the websocket and see just how much data is being streamed in. If you are not familiar with websockets, you can read the Mozilla Docs (or just continue reading this tutorial). Unlike REST, where each request gets a single response and the server has no idea if the client is still alive, WebSockets keep an active, persistent connection between the client & server, so either side can push data in real time. This is great for web games, livechat, and a real-time trading orderbook!

Let’s take a look at the data coming from the websocket. Open up the JavaScript console by pressing F12 in Chrome and paste in the code below:

const feed = new WebSocket("wss://www.cryptofacilities.com/ws/v1");
feed.onopen = () => {
  const subscription = {
    event: "subscribe",
    feed: "book_ui_1",
    product_ids: ["PI_ETHUSD"],
  };
  feed.send(JSON.stringify(subscription));
};
feed.onmessage = (event) => {
  console.log(JSON.parse(event.data));
};
I realized after that I was using PI_XBTUSD instead of PI_ETHUSD, woops! Same process, different ticker.

That’s a lot of events every second! We can see how a low-end device could quickly get overwhelmed, especially if it also has to handle painting the UI graphics too! Remember, JavaScript is single-threaded, so processing all this will choke up the main (and only) thread real quick.

Before we dive into how we handle this firehose of updates, let’s quickly review the Crypto Facility docs.

To retrieve the data feed necessary to build the orderbook, use the following public WebSocket: wss://www.cryptofacilities.com/ws/v1 and send the following message to this WebSocket: {"event":"subscribe","feed":"book_ui_1","product_ids":["PI_XBTUSD"]}. This data feed first returns a snapshot representing the existing state of the entire orderbook, followed by deltas representing singular updates to levels within the book. The orders returned by the feed are in the format of [price, size][]. If the size returned by a delta is 0 then that price level should be removed from the orderbook, otherwise you can safely overwrite the state of that price level with new data returned by that delta. Unsubscribe from the data feed by sending the following message: {"event":"unsubscribe","feed":"book_ui_1","product_ids":["PI_XBTUSD"]}

The key thing to remember is that the first piece of data we get is the entire orderbook as of the first second in time. Then updates rapidly come in, taking the shape of an array [price, size], which we can use to update the orderbook.
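Per the docs, applying a delta is simple: a size of 0 removes the price level, anything else overwrites it. A minimal sketch (hypothetical helper, with the book keyed by price):

```typescript
// The book maps price -> size. A delta [price, size] overwrites that level,
// and size 0 removes the level entirely (per the Crypto Facilities docs).
type Book = Record<number, number>;

function applyDelta(book: Book, [price, size]: [number, number]): Book {
  const next = { ...book };
  if (size === 0) {
    delete next[price]; // level no longer exists
  } else {
    next[price] = size; // safe to overwrite with the latest size
  }
  return next;
}
```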

For both of our sanity, here are screenshots of the initial orderbook and the subsequent updates. Let’s call them “deltas”, as this is the official terminology for such updates.

The initial snapshot of the orderbook
The incremental updates to the orderbook (the “deltas”)

So now that we know what kind of data we’re dealing with, we need a way to handle this performantly without choking up the main Javascript thread.

Web Workers to the Rescue!

The answer to our performance needs is Web Workers. In simple terms, web workers are JavaScript threads that run in the background, separate from your main JS thread responsible for painting the UI. While this means your web worker cannot access the DOM, it can do cool things like keep working in the background even when you switch from your mobile browser app to do something else on your phone. A related kind of worker, the service worker, is what powers a lot of the functionality of progressive web apps, such as push notifications on the web. But for our purposes, we are going to use a web worker to process that firehose of websocket data so that we don’t overwhelm the main JS thread — we’ll let the main thread exclusively handle painting the UI.

To be precise about terminology: what we are building here is a plain dedicated web worker, not a service worker. A service worker is a specialized type of worker that intercepts network requests (e.g. for offline caching in PWAs), which we don’t need here.

Diagram courtesy of bitsofco.de

I promise we will dive into the code soon. Before we do, I want to highlight the over-arching paradigm that web workers operate on, which is the Actor Model (let me nerd out plz 🤓).

The Actor Model basically splits work across independent units called actors, each of which can run on its own machine, to handle processing in a decentralized manner. It allows you to scale horizontally by distributing your work without caring about the exact setup of each actor, as long as the communication protocol between them is agreed upon (see diagram below). Each actor has its own state that the others cannot touch directly. Microservices are another example of the actor pattern.

Diagram courtesy of brianstorti.com
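In our case, the “agreed-upon communication protocol” between our two actors (the main thread and the worker) is just a small set of typed messages. Here’s a sketch of how that vocabulary could be typed with a discriminated union — the message names match the ones used later in this tutorial, but the shapes are illustrative:

```typescript
// A discriminated union on `type` lets TypeScript narrow each message,
// giving both actors a shared, strictly-typed vocabulary.
type WorkerInbound = { type: "KILL_FEED" };
type WorkerOutbound = { type: "WEBSOCKET_DATA_INCOMING"; data: unknown };
type FeedMessage = WorkerInbound | WorkerOutbound;

function describe(msg: FeedMessage): string {
  switch (msg.type) {
    case "KILL_FEED":
      return "main thread asks the worker to close the websocket";
    case "WEBSOCKET_DATA_INCOMING":
      return "worker hands the main thread a processed update";
  }
}
```

Because the union is exhaustive, TypeScript will flag any message type we forget to handle — handy once the protocol grows.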

So now that we know how web workers work at a high level, let’s see how they communicate with the main JS thread in code. Here’s how our web worker handles incoming messages from the main JS thread:

// feed.worker.ts

// FeedWebSocket is a custom class we create
const feed = new FeedWebSocket();

// onmessage is part of the default web worker API available in global scope.
// It's here where we receive messages from the main JS thread.
onmessage = (event: MessageEvent) => {
  switch (event.data.type) {
    case "KILL_FEED": {
      feed.closeFeed();
      break;
    }
    default: {
      console.log("Instructions not specific enough");
      console.log(event);
    }
  }
};

And here’s how our web worker sends outbound messages to the main JS thread:

// postMessage is part of the default web worker API and available in global
// scope. It's how we send messages to the main JS thread.
postMessage({
  type: "WEBSOCKET_DATA_INCOMING",
  data: JSON.parse(data),
});

And here’s where our web worker file feed.worker.ts lives inside our NextJS repo:

There are many ways to load a web worker into your project, but getting that configuration set up can take some time and experimentation. Fortunately I am going to save you a lot of time by showing you the config here. This is specifically for NextJS SSR (server-side rendering) with Typescript.

NextJS SSR with Typescript

Server side rendering (SSR) is most commonly used for 3 purposes:

  1. Speed performance. Server generated HTML already has all the dynamic data pre-populated when it gets sent to the client, unlike single page apps (SPA) which need an additional network request to get data from the server.
  2. Security. Since we are populating data on the server, we don’t need to expose extra network endpoints to the public internet, unlike SPAs.
  3. Search Engine Optimization (SEO). With SPAs, Google crawlers have an unreliable experience relating the url of your page with the HTML contents since it takes time for the client to load it — sometimes Google just sees a blank page! In comparison, server rendered HTML arrives pre-populated, so the Google crawlers can immediately see what’s on your page and use it to rank you on Google Search.

Hopefully that gives some clarity on why SSR is used. Now admittedly, we don’t actually need SSR for this web app since almost all of the data still needs to be loaded on the frontend via the websocket that lives on the client. This project could just as validly have been made as a single page app (SPA). But the assignment requires us to use NextJS SSR, so here we are!
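As a rough sketch of point 1 above (the server pre-populating data before the HTML ships), here is a simplified stand-in for a NextJS data-fetching function. In a real page you would `import type { GetServerSideProps } from "next"` and render the props in a React component; both are omitted here to keep the sketch self-contained:

```typescript
// Simplified sketch of the SSR data flow (illustrative, not from the repo).
type Props = { renderedAt: string };

async function getServerSideProps(): Promise<{ props: Props }> {
  // Runs on the server for every request, so the HTML arrives at the
  // client already populated with this data -- no extra fetch needed.
  return { props: { renderedAt: new Date().toISOString() } };
}
```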

First lets install NextJS for Typescript. You can find the full instructions here.

$ yarn create next-app --typescript
$ yarn add -D eslint eslint-config-next @typescript-eslint/eslint-plugin

I would highly recommend you use my VSCode settings & extensions, which will give you tooling superpowers with Prettier, nodeJS debugging on Chrome, and typescript linters. VSCode is (in my opinion) the best text-editor to use for Typescript.

In order to get web workers to play nicely with NextJS and integrate with our Typescript config, we need to set up a few things. For NextJS to be able to serve our web worker file, we need to install a special loader called worker-loader and add it to all our config files.

$ yarn add -D worker-loader

And then in tsconfig.json:

{
  "compilerOptions": {
    "lib": ["dom", "dom.iterable", "esnext", "webworker"]
  }
}
I also recommend setting up “compilerOptions.paths” for absolute imports, which improves repo searchability

And in next.config.js:

module.exports = {
  reactStrictMode: true,
  webpack(config, options) {
    config.module.rules.push({
      // match our feed.worker.ts source (and any compiled .worker.js)
      test: /\.worker\.(js|ts)$/,
      loader: "worker-loader",
      options: {
        name: "static/[hash].worker.js",
        publicPath: "/_next/",
      },
    });
    return config;
  },
};

With this, NextJS can now handle web worker files so that when we run npm run build, the feed.worker.ts file will get compiled to feed.worker.js and be available on the frontend. The web worker needs to be a distinctly separate file from the main JS, because it must run on its own thread (recall that the Actor Model needs to be able to live on its own machine with its own state).

Now that we’ve set up NextJS, let’s look at how our main frontend app loads in the external web worker.

Loading the external web worker into the main Frontend App

I made a custom React hook to handle communications with our web worker. The important line of code is below, where we instantiate the native Worker class and give it an absolute URL to our file feed.worker.ts using import.meta.url. This syntax only works with ECMAScript 2020 or later, so if you were bundling your code for UMD (such as embedded web widgets loaded via <script> tags), this method would not work (I have a private repo for that case which I should publish a tutorial on…).

// feed.hook.ts
const worker = useRef<Worker>();
...
worker.current = new Worker(
  new URL("@/workers/feed.worker", import.meta.url)
);

So when we run npm run build the web worker will be imported into our main frontend app with the location preserved correctly. Now in our main frontend app, we can consume the data that our web worker sent us via postMessage() :

// feed.hook.ts
worker.current.onmessage = (event) => {
  switch (event.data.type) {
    case "WEBSOCKET_DATA_INCOMING": {
      console.log("Web worker sent main JS thread data!");
      console.log(event.data.data);
      break;
    }
  }
};
...
const feed = worker.current;
export default { feed };

And from our frontend, we can send our web worker instructions, such as killing the feed because it’s so damn noisy! 😤

// orderbook.tsx
import { useFeedWorker } from "@/api/feed.hook";

const { feed } = useFeedWorker();
feed.postMessage({
  type: "KILL_FEED",
});

Fantastic! We now have 2-way communication between our main frontend app and the background web worker! There are 3 main components in our app so far…

  1. orderbook.tsx is our main frontend app (aka. the main JS thread). It uses postMessage() to talk to our background webworker and onmessage to listen.
  2. feed.worker.ts is our background webworker. It also uses postMessage() to talk, and onmessage to listen.
  3. new WebSocket("...url") lives inside our webworker and handles the continuous stream of crypto trading data coming in real-time

Teamwork makes the Dream Work

Now that we’ve laid the main infrastructure of our frontend, it’s time to populate the sections with their duties. Specifically…

  • feed.worker.ts is going to handle the number crunching of incoming websocket data. This avoids choking the main JS thread.
  • orderbook.tsx is the main JS thread, and it will solely be responsible for painting the UI.

This separation of duties is what allows us to meet the requested performance requirements. Without this separation… the UI would lag.

With that agreed upon, we can start pumping out the code, starting with the web worker handling the websocket data. Let’s take a look at the class FeedWebSocket that we skimmed over previously. Here’s what it looks like at a high level. For brevity’s sake, I have omitted some of the extra requirements of the assignment, as they are just extra work but nothing special, providing no extra value to you as a reader. 💁🏼‍♂️ On with the show…

// feed.worker.ts
class FeedWebSocket {
  private feed: WebSocket;
  private sourceOrderBook: ISourceOrderBook;
  private orderBookState: IOrderBookState;
  private lastAnnouncedTime: Date;
  private waitTimeMs = 2000;

  constructor() {
    const feed = new WebSocket("...url");
    feed.onopen = () => {...};
    feed.onmessage = () => {...};
    feed.onclose = () => {...};
    feed.onerror = () => {...};
    this.feed = feed;
  }

  closeFeed() {...}

  private mapDeltaArrayToHash() {...}
  private updateDelta() {...}
}

const feed = new FeedWebSocket();
onmessage = (event: MessageEvent) => {
  switch (event.data.type) {
    case "KILL_FEED": {
      feed.closeFeed();
      break;
    }
    default: {...}
  }
};

Let’s first take a look at the internal class variables:

  • sourceOrderBook: ISourceOrderBook → this will store the raw data from the websocket as is. I have created a custom type interface ISourceOrderBook for it so we can fulfill the requirement of strict type coverage
  • orderBookState: IOrderBookState → this will store the refined data format that we will massage our raw websocket data into before sending to the main frontend app to be displayed. IOrderBookState is also a custom type
  • lastAnnouncedTime & waitTimeMs → since the websocket data arrives at high velocity (~5x per second), we will want to throttle it and only send updates to the main frontend app every 2 seconds to avoid having to repaint the UI so often
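The lastAnnouncedTime / waitTimeMs pair is really just a throttle. Stripped of the class, the idea looks like this (an illustrative sketch, not the repo’s code):

```typescript
// Throttle sketch: only forward an update if waitTimeMs has elapsed since
// the last announcement. Timestamps are plain milliseconds for clarity.
function makeThrottle(waitTimeMs: number) {
  let lastAnnounced = -Infinity; // so the very first update passes through
  return (now: number): boolean => {
    if (now - lastAnnounced >= waitTimeMs) {
      lastAnnounced = now;
      return true; // enough time has passed: announce this update
    }
    return false; // too soon: skip, the next delta will carry the state
  };
}

const shouldAnnounce = makeThrottle(2000);
```

The worker would call shouldAnnounce(Date.now()) on every delta, but only cross the postMessage() boundary when it returns true.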

Now let’s look at the one public function:

  • closeFeed() → this lets us close the websocket connection any time.
  • Actually there are more public functions in the actual code, but I have omitted them from this tutorial because they don’t add extra value

And now let’s look at the private internal functions:

  • mapDeltaArrayToHash() → we call this function in order to convert our raw websocket data sourceOrderBook into its refined format orderBookState (in retrospect, I could have given them better names by using the words raw and refined in the variable names). This gets called once when we get the initial state of the orderbook, and again for every delta from the websocket
  • updateDelta() → we call this function every time we get new raw delta data from the websocket. updateDelta() calls mapDeltaArrayToHash().

mapDeltaArrayToHash() basically does the following:

// feed.worker.ts
const mapDeltaArrayToHash = (deltas: [number, number][]) => {
  const deltaHash = deltas
    .map((delta, index) => {
      const [price, size] = delta;
      const total = getPrevDeltaSize(...) + size;
      return { price, size, total };
    })
    .reduce((acc, curr) => {
      return { ...acc, [curr.price]: curr };
    }, {});
  return deltaHash;
};

Hopefully that pseudo-code wasn’t too hard to follow along (check out JS reduce, it’s super helpful!). Basically we turned an array of delta arrays into a giant object hash with the price as the key. That way, when new deltas come in, we can overwrite past data easily by overwriting its hash key.
So for example:

// mapDeltaArrayToHash.test.ts
test("assert that delta array is turned into delta hash", () => {
  const deltaArray = [[2111, 100], [2112, 150]];
  const deltaHash = {
    2111: { price: 2111, size: 100, total: 100 },
    2112: { price: 2112, size: 150, total: 250 },
  };
  expect(mapDeltaArrayToHash(deltaArray)).toStrictEqual(deltaHash);
});

Recall that in an orderbook, the total is the sum of the preceding prices’ sizes. From the seller’s perspective, as we go up in price, the total gets bigger and bigger. From the buyer’s perspective, it’s the opposite direction (the total gets bigger as the price gets lower).
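That direction difference can be sketched by simply sorting before accumulating (illustrative helper; the repo folds this logic into mapDeltaArrayToHash()):

```typescript
// Asks accumulate upward in price; bids accumulate downward.
type Row = { price: number; size: number; total: number };

function cumulative(levels: [number, number][], side: "asks" | "bids"): Row[] {
  const sorted = [...levels].sort(([a], [b]) =>
    side === "asks" ? a - b : b - a // ascending for asks, descending for bids
  );
  let running = 0;
  return sorted.map(([price, size]) => {
    running += size;
    return { price, size, total: running };
  });
}
```

Either way, the last row of each side ends up with the same grand total — it’s only the order of accumulation that flips.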

Finally, let’s open up the constructor and examine it — specifically the feed.onmessage handler — to see how we handle incoming websocket data:

// feed.worker.ts
feed.onmessage = (event: MessageEvent) => {
  const data: ICryptoFacilitiesWSSnapshot = JSON.parse(event.data);
  switch (data.feed) {
    case "book_ui_1_snapshot": {
      const dateStamp = new Date();
      this.lastAnnouncedTime = dateStamp;
      this.sourceOrderBook = data;
      this.orderBookState = this.mapDeltaArrayToHash(data);
      postMessage({
        type: "WEBSOCKET_DATA_INCOMING",
        data: this.orderBookState,
      });
      break;
    }
    case "book_ui_1": {
      this.updateDelta(data);
      break;
    }
  }
};
...
private updateDelta(data: ICryptoFacilitiesWSSnapshot) {
  const { x, y } = furtherProcessDelta(data);
  this.sourceOrderBook = x;
  this.orderBookState = y;
  const currentTimestamp = new Date();
  const nextAllowedTime = this.lastAnnouncedTime.getTime() + this.waitTimeMs;
  if (nextAllowedTime < currentTimestamp.getTime()) {
    postMessage({
      type: "WEBSOCKET_DATA_INCOMING",
      data: this.orderBookState,
    });
    this.lastAnnouncedTime = currentTimestamp;
  }
}

We see how, upon initial receipt of the orderbook data, we populate the necessary variables, process the data, and immediately send it over to our frontend app with postMessage().

But with subsequent deltas, we check whether enough time has passed since our last announcement to the frontend. Only if we’ve waited long enough do we send the postMessage() again and reset the timer. That way we’re not flooding the main JS thread with every websocket delta.

Yay! 🎉 All the number crunching and network throttling done on the background web worker. Now we can let the main frontend app handle just the painting of the UI.

Too Pretty for Sleep 💅

Our frontend never sleeps. As long as the trading data comes in every 2 seconds from our web worker, the main JS thread will keep painting our UI and showing those green & red bars.

There’s nothing too difficult about making this UI, it’s just an HTML table:

// ordertable.tsx
<table className={styles.table}>
  <thead>
    <tr className={styles.title}><th>{`${title}`}</th></tr>
    <tr className={styles.heading}>
      <th className={styles.head}>Price</th>
      <th className={styles.head}>Size</th>
      <th className={styles.head}>Total</th>
    </tr>
  </thead>
  <tbody>
    {rows.map(row => {
      const { price, size, total } = row;
      const spriteWidth = total / maxPriceSize;
      return (
        <tr key={price} className={styles.ghostRow}>
          <tr className={styles.colorSprite}>
            <td className={styles.uncolored(spriteWidth)}></td>
            <td className={styles.colored(spriteWidth)}></td>
          </tr>
          <tr className={styles.row}>
            <td className={styles.cell}>{price}</td>
            <td className={styles.cell}>{size}</td>
            <td className={styles.cell}>{total}</td>
          </tr>
        </tr>
      );
    })}
  </tbody>
</table>

But there are 2 things we need to pay special attention to, as they power our color grids.

  • const spriteWidth = total / maxPriceSize → this is each row’s total value as a fraction of the largest total value maxPriceSize in all the data. That’s how we get the width of the green/red bars.
  • styles.ghostRow, styles.colorSprite, styles.colored, styles.uncolored & styles.row → these 5 CSS styles are how we get the colored bars to render behind the rows of numbers

So let’s take a look at the <tr> styles. Note we use EmotionJS as our styling library, as it makes writing CSS syntax in React super easy. Writing real CSS is preferable to JS-object-style CSS because we can tweak CSS values directly in the browser and copy them into our React code without converting to JS camelCase. It’s way faster.

// ordertable.tsx
import { css } from "@emotion/css";

const styles = {
  ghostRow: css`
    position: relative;
    display: flex;
  `,
  colorSprite: css`
    width: 100%;
    min-height: 20px;
    height: 100%;
    display: flex;
  `,
  row: css`
    display: flex;
    flex-direction: row;
    justify-content: space-between;
    align-items: flex-start;
    top: 0;
    position: absolute;
    width: 100%;
  `
}

The key thing to point out is that our outermost wrapper ghostRow has position: relative, and of its two children, colorSprite keeps the default position while row is position: absolute. This is what allows our numbers row to appear in front of the colored bars.

Now let’s look at the colored bars css:

// ordertable.tsx
const styles = {
  ...
  colored: (spriteWidth: number) => css`
    flex: ${spriteWidth};
    background-color: red;
  `,
  uncolored: (spriteWidth: number) => css`
    flex: ${1 - spriteWidth};
  `
}

The flex value lets CSS auto-adjust the proportion of width that colored & uncolored each take inside their parent colorSprite. As new orderbook data comes in from the webworker/websocket, the spriteWidth value will change, and so will the green & red bars.
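Put together, the width logic is just a ratio in the 0..1 range (illustrative helper; the component computes this inline):

```typescript
// Each row's share of the largest cumulative total, clamped to a sane range.
function spriteWidth(total: number, maxPriceSize: number): number {
  return maxPriceSize === 0 ? 0 : total / maxPriceSize;
}
```

With flex: spriteWidth on the colored cell and flex: 1 - spriteWidth on the uncolored cell, the bar fills exactly that fraction of the row.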

And that is the final part of our project! By now things are looking good and ready to deploy.

Deployment with Vercel

Vercel is a hosting platform with a global content delivery network (CDN) which you can use to serve your websites. It can connect to your GitHub repo and automatically deploy the latest master branch. Visit their website to learn how to get set up; it’s easy and fast.

Before we merge our branch to master (or push directly to master like a barbarian 🦌), we’ll want to check that the build process works locally.

$ npm run build

If you have proper TypeScript tooling set up, then you should have caught almost all of your possible compilation bugs while coding. If you do see errors, you’re gonna have to fix them the good old-fashioned way… by googling. Or leave a comment on this article and I or another reader can assist you. Or open an issue on GitHub.

If all is well, then commit to Git and merge to master. Then open up your Vercel dashboard and watch as your web app runs npm run build and gets deployed to production.

Easy Deployment with Vercel

Here’s my latest deployment, click to view live demo.

click to view live demo (hosted on Vercel)

Like I mentioned earlier, there are a ton of further optimizations that can be done. For example, when we look inside the Performance tab of the Chrome console, we can record the main JS thread and see where it spends its time working. Fortunately, thanks to our 2 second throttling, the thread is actually idle most of the time and not being worked hard. Nice 🍃

If we wanted to take performance even further, we could start looking into memoization and optimizing our internal React state (I didn’t mention it in this article, but we do use Object.freeze() for immutable objects in React.useState). We can also look into tree-shaking so that unused code from NextJS rendering is not included in the bundle, resulting in smaller package sizes and faster page loads. The rabbit hole can go deep. 🎣 Here’s a whole list of things that can be done to improve performance.
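On the Object.freeze() point: freezing the state object makes it shallowly immutable, so accidental mutations can’t silently corrupt React state. A quick illustration:

```typescript
// Object.freeze makes an object shallowly immutable: writes to its
// properties are ignored in sloppy mode and throw a TypeError in strict mode.
const level = Object.freeze({ price: 2111, size: 100, total: 100 });

console.log(Object.isFrozen(level)); // true
// (level as any).price = 0; // would throw a TypeError in strict mode
```

This pairs nicely with React’s assumption that state is replaced rather than mutated.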

For now, this is where we will end things. I am heading to bed now because I need my beauty sleep 🥴

15.5 seconds recorded, only 0.5 seconds spent working 🛌

I hope you enjoyed this article and hopefully learned something new! If you liked this, please subscribe and leave a like/applause so I am encouraged to write more tutorials. Thanks for your time, and…

Keep the dev alive! 🔥
