
RESTful APIs: from zero to HERO

We are writing this five-part tutorial to serve as a complete beginner's guide to RESTful APIs using Node.js and Express. We tackle everything from networking to persistence, so that you have a complete go-to guide for developing RESTful APIs with Node.js and Express.


RESTful APIs are usually built over the internet. But what is the internet?

What is the internet?

From Wikipedia, we learn that: "The Internet is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP) to communicate between networks and devices."

The internet is basically a huge network of computers that run different types of protocols to communicate with one another. A protocol is a set of rules and standards that defines the language in which devices communicate. The conceptual model and communication protocols used are called the Internet Protocol Suite. It has four layers, and each layer treats the underlying one as a black box.

The application layer is the scope in which applications create and communicate data. Application layer protocols include HTTP/S, SSH, FTP, SMTP, etc.

The transport layer provides the channel for the communication needs: host-to-host connectivity and end-to-end messaging, regardless of the data format. Transport layer protocols include TCP, UDP, and others.

The internet layer provides a uniform networking interface that abstracts the actual layout of the network. Via routing, data is sent from the source network to the destination network. The primary protocol is IP, which defines IP addresses. Internet layer protocols include IP, ICMP, IGMP, and others.

The link layer operates in the local network and defines the mechanisms for communication in that scope. It moves packets between the internet layer and local network hosts.


A single machine can have multiple applications running on it, an HTTP server, an FTP server and/or an SSH server, etc. Ports allow a single machine to have multiple applications that communicate via the same physical interface.

Most web developers build internet applications on top of TCP/IP and HTTP/S, which are abstracted by the operating system, client, framework, and tools. You don't need to know the intricacies of the TCP/IP stack for small to medium-sized applications. As an application grows in complexity and size, issues may appear that require lower-layer protocol knowledge to solve. As a beginner, you don't need to know exactly how the internet works; focusing on HTTP is enough. As you grow as a web developer, networking knowledge will advance your skill set, and depending on the problems you work on, you might not be able to solve them without it.



On the internet, resources have a Uniform Resource Locator (URL) so that we can identify a specific resource and avoid confusion. Think of it as an address. The resource is not the URL, just as a house is not its address. The URL is just a way to find it.


We can see that the URL contains all the necessary information to find a unique resource: the protocol, the FQDN (the location of the machine), the port on the machine that the server is bound to, the path to the resource, the URL query information, and resource fragments.

The internet uses the IP protocol, so how is a domain mapped to the IP address of a machine? The DNS service does that. The DNS is responsible for mapping domains to machine IPs.

For every domain registered, there is an IP address that corresponds to the physical machine which is responsible for responding to the request made.

Once the IP address is known, the ISP has the responsibility of routing the request via the internet infrastructure.

Browsers are applications that run on the user's machine, hence the name client application. They use HTTP/S to communicate with a server that also understands the HTTP protocol and the lower-level protocols that HTTP/S is built on.

Let's recap and see how it all fits together in the same picture.

Let's say a friend wants to send you a URL to a cool article that he read. You receive it via email, for example. You click on the link and the browser starts with the URL in the address bar.

This way the browser understands that it must make a GET request for the article resource. It knows only the FQDN, so it uses that to query the DNS service, which returns the actual IP of the machine responsible for sending the response. The port is implied from the default HTTP/S port (80 or 443) and the request is sent.


HTTP is the protocol usually used for client-server applications. Web applications and websites are client-server applications. HTTP uses a single-direction channel: it works in request/response pairs. For every client request, a server must return a response, even if the resource does not exist and a 404 code is used. The request is initiated by the client (usually a browser), received by the server, and the server responds with the data. A server cannot initiate a request to its clients. A request can be kept alive and the server can push data as needed with SSE or, for bi-directional communication, WebSockets can be used.

A request is formed from one of the request verbs, a header, and, optionally, a body. The verb specifies the intended action. The header contains meta-information about the request. The body may be missing; when present, it contains the data needed by the server.

The request verbs can be one of GET, POST, PUT, PATCH, DELETE, OPTIONS, HEAD, TRACE and CONNECT.

A response is formed from a status code, a header, and the body. The status code returns the status of the request. Every response has a header with a status code in the range [1xx..5xx]. The body of the response contains the information that was requested; for example, it can be an image, an XML document, or a JSON document.
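To make the structure concrete, here is a sketch that assembles a raw HTTP request by hand (the host and body are made up); in practice, a client like the browser or Node's http module builds this text for you:

```javascript
// The three parts of a request: request line (verb, path, version),
// headers, and an optional body, separated from the headers by a blank line.
const request = [
  'POST /articles HTTP/1.1',           // the verb and the resource path
  'Host: example.com',                 // headers: meta-information
  'Content-Type: application/json',
  '',                                  // blank line ends the header section
  JSON.stringify({ title: 'Hello' }),  // the body: data for the server
].join('\r\n');

console.log(request);
```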



The GET verb means that the client wants to retrieve the resource specified by the URL. In general, GET requests do not have any effect on the data being served.

The POST verb means that the client sends information to the server. It usually implies the creation of a new resource.

The PUT verb means that the client wants to update the information regarding a specific resource with the new information sent.

The DELETE verb means that the client wants to remove a specific resource or collection of resources.

The CONNECT verb establishes a tunnel to the server.

The OPTIONS verb asks the server which communication options (for example, the allowed methods) are available for a resource.

The TRACE verb performs a message loop-back test: the server echoes the received request so the client can see what intermediaries changed along the way.

GET, HEAD, and OPTIONS requests are safe methods, no meaningful state should be changed on the server.

GET, PUT, and DELETE requests are idempotent: running one of these requests with the same information once or a hundred times has the same effect.

POST, on the other hand, is neither safe nor idempotent, because the same resource-creating request called a thousand times may create a thousand resources.

SSL (and its successor, TLS) is a standard technology for securing internet connections. HTTP over SSL/TLS is HTTPS; they are equivalent except that HTTPS is encrypted, thus more private and secure.

By default/convention, port 80 is used for applications that use the HTTP protocol and port 443 for applications that use the HTTPS protocol. This is not mandatory, so if you have multiple applications communicating with each other via HTTP/S, other ports can be used.

Status Codes

Happy path

Status codes are split into 5 classes: 1xx for information, 2xx for success, 3xx for redirection, 4xx for client errors, and 5xx for server errors.

So how should we use the status codes? 200 (OK) is a generic status code that signifies the request was successful. It is usually used for file requests like web pages (.html).

For creating a resource, a more specific status code is 201 (Created), used with POST, sending back the created entity in the response.

If we don't want to send anything back as a response, 204 (No Content) is the proper status. It is usually used with the DELETE verb.

Sad path

If the client does not use the API correctly then an error from the 4xx class should be used. If the server is at fault, then an error from the 5xx class should be used.

The 400(Bad Request) status code is as generic as it gets. One can use this status code to signal to the client that the request is malformed.

If you want to be more specific with authorization/authentication issues, the proper status code is 401 (Unauthorized) if the client is not authenticated (missing JWT, for example) and 403 (Forbidden) if the client is authenticated correctly but does not have permission to CRUD that resource.

PUT or DELETE work on specific resources, like a user. If they are applied to 'collection'-type resources, a 405 (Method Not Allowed) can be used.

If the request cannot find a resource that acts as a single entity, 404 (Not Found) is the correct status code. For resources that act as a collection, an empty 200 (OK) or 204 (No Content) can be used.


What is Node.js?

Node.js is an open-source runtime for JavaScript. It was created by Ryan Dahl in 2009. Dahl criticized the limited possibilities of Apache HTTP Server and the sequential programming style. Node.js is cross-platform: it can run on Unix, Windows, and macOS operating systems, and on 32/64-bit or ARM architectures.

The core proposition of Node.js is the language and the non-blocking environment. The non-blocking environment makes Node.js a good choice for I/O-intensive applications.

JavaScript developers can now, with the help of Node.js, create server-side applications.

With a team of JavaScript developers, a full-stack application can be built. The JSON format is widely used, especially in single-page applications.

Some NoSQL databases, like MongoDB, also use JSON and JavaScript. All in all, it is a great technology to use.

The non-blocking nature is also a strong feature. Asynchronous programming practices like callbacks, promises, async/await, are first-class citizens in Node.js.

The possibility of sharing code between server and client is also a plus.

Using JavaScript and its ecosystem, a broad spectrum of applications can be built.


Node.js has a cool community and a lot of modules that everybody can use. To manage them, the NPM project was created.

npm is an acronym for Node Package Manager and acts as a package manager for Node projects. npm is also a CLI tool that aids with installing packages.

All modules used in a project are defined in the package.json file and are installed in the node_modules folder. Because the dependencies are defined in package.json, committing the node_modules folder is bad practice; the folder should be added to .gitignore.

Choosing a good package for your problem implies checking how often the package is updated, how long bugs and issues remain unaddressed, and how many people are using it. We do this to make sure it supports our requirements now and in the future.

You can check this article for a more detailed approach and more tips: NPM guide.



Node.js is a JavaScript runtime. What does this mean? It means that it can interpret JavaScript. Cool, but that's what the browser does too. Node.js shares the V8 JavaScript engine with Chromium-based browsers. The V8 engine is single-threaded, but it is also asynchronous, using a concept called the Event Loop.

Event Loop

The Event Loop is the architecture that makes this possible. An event-driven architecture that promotes loosely-coupled systems, coupled with a queue system, makes concurrency on a single thread possible.

 while (queue.waitForNextMessage()) {
   queue.processNextMessage();
 }

How? By delegating the actual work to the operating system internals or network and processing the next message.

For example, let's say that we send a request to a server: there is no point in blocking the thread until the data is transferred by the operating system and the network, processed by the other server, and sent back.

Another example might be reading a file: by delegating the task to an operating system thread, the main thread can continue doing other things. When the operating system has data, an event is sent to the queue, the loop sends it to the main thread, and it handles the rest.

The work is done by other threads of the operating system, routers, and/or another computer we just don't block our thread. In the meanwhile, we can process other messages. This is why asynchronicity works on a single thread. The problems appear when a task is CPU intensive. In CPU intensive tasks there is no work that the main thread can delegate to the OS or other computers, it has to do that itself.


Having just the JavaScript engine is not enough. In browsers, we don't have direct access to operating system resources for security reasons. So an API for accessing low-level resources of the operating system was needed. Node.js provides such an API: accessing resources like files, threads, or the network is possible from JavaScript.

Programming style

Async Programming

The core of the Node.js programming style lies in its event-driven architecture, the event loop, and handlers.


What are handlers? Handlers are functions, often anonymous, that are passed as arguments to other functions. Usually, the other functions are event handling declarations.

We refer to a handler as a callback when there is an actual async operation involved. All callbacks are handlers, but only handlers involved in async operations are callbacks.

The anonymous function passed as the second argument to the readFile function is the callback. The callback will be called back when the file is read. The reading of the file is an async operation.

 const fs = require('fs');
 fs.readFile('example.txt', function (err, content) {
   if (err) return console.error(err);
   console.log(content.toString());
 });
We can see in the example above the common error handling pattern for callbacks. The pattern says that the callback's first parameter should be null if there is no error, or an error object if something went wrong. The rest of the parameters can be used freely.


What are promises? Built on top of callbacks, promises are instances of the Promise class.

Promises are JavaScript objects that represent internally the state of an asynchronous operation. When the operation is completed, its value is contained in the Promise instance. The Promise internal state can be one of: pending, fulfilled, and rejected.

Pending means that the asynchronous operation is in progress. Fulfilled means that the operation completed successfully and that we can access the value. Rejected means that the operation failed.

A promise that is pending can be completed with one of the two possible states, fulfilled or rejected.

Promises can be chained and can be passed as arguments.
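A minimal chaining sketch: each then receives the value produced by the previous step, and a single catch at the end handles a rejection from anywhere in the chain:

```javascript
Promise.resolve(2)
  .then((n) => n * 10)                      // receives 2, passes 20 on
  .then((n) => n + 1)                       // receives 20, passes 21 on
  .then((n) => { console.log(n); })         // prints 21
  .catch((err) => { console.error(err); }); // handles any rejection above
```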

The Promise constructor receives as an argument a callback (a function or arrow function) with two parameters, to be called when the operation succeeds or when it fails.

 const fs = require('fs');
 const readFilePromisified = new Promise(function (resolve, reject) {
   fs.readFile('example.txt', function (err, content) {
     if (err) return reject(err);
     resolve(content.toString());
   });
 });
 readFilePromisified
   .then(function (data) { console.log(data); })
   .catch(function (err) { console.error(err); });

Built on top of promises is the async/await feature. In order to use it, you need a special kind of function or arrow function marked with the keyword async. The marking states that the function will return a promise no matter what.

Every function that returns a promise is awaitable.

 async function getCar(name) {
   if (name !== 'Fiat') throw new Error('We do not have this car');
   return 'Fiat 500';
 }

 async function callGetCar() {
   try {
     const car = await getCar('Fiat');
     console.log(car);
   } catch (err) {
     console.error(err);
   }
 }
 callGetCar();

Fitting together

We mentioned that everything is based on the event loop architecture. Callbacks are made possible by events, being handlers for them. Promises are built on top of callbacks; they are just proxies for a value that will be available in the future. Another layer of abstraction is brought by the async/await feature, which is built on top of promises.

Every callback or handler is placed in a queue and waits for the event loop to call it when the necessary state is completed (file read, network request completed, etc.).

A handler queue is a data structure that Node.js uses internally to organize async operations. A callback is added to the call stack when it is about to be executed. The event loop continually checks whether the call stack is empty so it can pick a callback from the queue and add it to the call stack.

There are several types of callback queues that handle different types of operations. IO queue, Timer queue, Microtask queue, Check/Immediate queue, and Close queue. It's important to note that the event loop checks and executes the microtask queue before other queues. The queue order is, microtask, timer, IO, check, and, lastly, close. Please refer to this article for more details, Deep dive into queues @ logrocket.
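The microtask-first rule can be observed directly. In this sketch, the promise callback always runs before the zero-delay timer, even though the timer was scheduled first:

```javascript
const order = [];

setTimeout(() => {                       // goes to the timer queue
  order.push('timer');
  console.log(order.join(' -> '));       // prints: sync -> microtask -> timer
}, 0);

Promise.resolve().then(() => order.push('microtask')); // microtask queue

order.push('sync');                      // plain synchronous code runs first
```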


Promoting low coupling, many objects in Node.js, including every core API, emit events. Anybody can create custom events.

 const EventEmitter = require('events').EventEmitter;

 const myEventEmitter = new EventEmitter();
 const eventHandler = () => { console.log('Handled event'); };
 myEventEmitter.on('myCustomEvent', eventHandler);
 myEventEmitter.emit('myCustomEvent');

As we can see from the example above, there is no need for async functionality to use events: the emitter calls its handlers synchronously when the event is emitted.

A common error that causes memory leaks is not removing an event handler once it ceases to be useful. This is particularly bad if the handler is registered in a loop. We need to keep a reference to the event handler so that the function that removes the handler knows which one to remove.

 const eventHandler = () => { console.log('Handled event'); };
 myEventEmitter.on('myCustomEvent', eventHandler);
 myEventEmitter.removeListener('myCustomEvent', eventHandler);
Buffers & Streams
What are buffers?

Buffers are instances of the Buffer class and are used to manipulate raw data, octets in memory.

You can work with Unicode in JavaScript but, sometimes, you need to process binary data. The Buffer class is here to help in processing octet streams. The output of the readFile function, when no encoding is specified, is a buffer.

What is Unicode or UTF-8?

Unicode is a universal character set. This means that the standard defines, in one place, all the characters needed for writing the majority of living languages, punctuation, or other symbols like emojis 😇.

In the past, a common character set was ASCII, which was very limited because there were just 7 bits of information available per character, meaning only 128 characters could be used (extended variants used 8 bits for 256). Not enough for a World Wide Web.

A code/number associated with a character is called a code point. The actual image used for that character is called a glyph. 'A' has the same code point, 65, in both ASCII and Unicode, but its representation on the screen differs with respect to the font used.
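You can inspect code points directly from JavaScript:

```javascript
console.log('A'.charCodeAt(0));                 // 65 — same code point as in ASCII
console.log('😇'.codePointAt(0).toString(16));  // '1f607' — a code point outside the BMP
```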


The first 65,536 code point positions in the Unicode character set constitute the Basic Multilingual Plane (BMP). It contains a small part of the set, but the most commonly used part. The remaining roughly one million code point positions are referred to as supplementary characters.

Unicode has 3 different encoding forms: UTF-8, UTF-16, and UTF-32. UTF-8 uses one byte for the ASCII set, two bytes for several more alphabets, three bytes for the rest of the BMP, and four bytes for supplementary characters.

UTF-16 uses two bytes for the BMP and four bytes for supplementary characters.

UTF-32 uses four bytes for the BMP and four bytes for supplementary characters.
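Node's Buffer.byteLength makes these UTF-8 byte counts visible:

```javascript
// How many bytes UTF-8 spends per character, per the rules above.
console.log(Buffer.byteLength('A', 'utf8'));   // 1 — ASCII range
console.log(Buffer.byteLength('é', 'utf8'));   // 2 — another alphabet
console.log(Buffer.byteLength('€', 'utf8'));   // 3 — rest of the BMP
console.log(Buffer.byteLength('😇', 'utf8'));  // 4 — supplementary character
```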

Endianness refers to the ordering of bytes in a word. A big-endian system stores the most significant byte of the word at the smallest memory address; a little-endian system stores the least significant byte there.

When in doubt, use UTF-8. It is simpler this way.

In order to know how to interpret a file, a program must know the file's encoding. Character encoding detection is not a reliable process.

JavaScript source files can have any kind of encoding, but the code will be converted to UTF-16 internally. For data structures, UTF-8 is usually the default.

 const bt = Buffer.alloc(12);
 bt.write('abcdef');                   // defaults to 'utf8': 6 bytes
 bt.write('abcdef', 0, 6, 'ascii');    // one byte per character: 6 bytes
 bt.write('abcdef', 0, 12, 'utf16le'); // two bytes per character: 12 bytes

For more details you can read Nick Gammon's take on the subject.

What are streams?

Using streams is a faster way of processing data. If the data can be processed in parts, there is a very good chance of speeding up the operations via streams.

Think of it this way: read-then-write means that no work is done by the writing counterpart until the reading of the file is done. Writing and reading at the same time means at least doubling the speed of the operation and, in real life, you can gain an order of magnitude.

 const fs = require('fs');
 const readStream = fs.createReadStream('source.txt');
 const writeStream = fs.createWriteStream('destination.txt');
 readStream.pipe(writeStream); // copies the file, reading and writing concurrently