Testing JavaScript's Fetch with Jest - Unhappy Paths
In this video we continue on with testing the `asyncFetch` function that we began in the previous video, this time looking at the "unhappy paths" through our code.

In this instance, there are two unhappy paths to cover:

- That our `asyncFetch` function can handle situations whereby the response code isn't a happy one (>= 200, < 300), and;
- That we get a nice error message, if the server provides one.
You may not like the error messages that your API server gives back, and you may wish to customise them to meet your needs. Feel free to extend this code as appropriate in order to do so.
Let's start off with our basic test setup:
```javascript
// /__tests__/connectivity/async-fetch.js

const fetchMock = require('fetch-mock');

import asyncFetch from '../../src/connectivity/async-fetch';

describe('asyncFetch', () => {
  it('can fetch', async () => {
    fetchMock.get('http://fake.com', {hello: "world"});

    const response = await asyncFetch('http://fake.com');
    const result = await response.json();

    expect(result.hello).toEqual("world");
  });

  xit('handles errors', async () => {
  });

  xit('displays a nicer error message if one is provided', async () => {
  });
});
```
Remember, the `it` callback functions need to be `async`.
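Why does the `async` marker matter? It lets us `await` inside the test; without the `await`, assertions can run before the promise has settled. A plain-JS illustration of the difference, independent of Jest (function names here are invented for the example):

```javascript
// Why awaiting matters: without await, code after the call sees a
// pending Promise, not the resolved value.
async function slowDouble(n) {
  return new Promise((resolve) => setTimeout(() => resolve(n * 2), 10));
}

async function main() {
  const eager = slowDouble(21);         // a pending Promise, not a number
  const awaited = await slowDouble(21); // the resolved value

  console.log(eager instanceof Promise); // true
  console.log(awaited);                  // 42
}

main();
```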
Whilst the happy path was relatively straightforward, testing that our `async` function `throw`s as expected is a little trickier. The problem is that in order to determine whether our code ended up `throw`ing, we first need to resolve the underlying promise.
Promises, Promises
The idea I am working towards is that the `asyncFetch` function will `throw` a custom error (`HttpApiCallError`) if the response status code is anything outside of the success range (>= 200, < 300).
To begin with, I'm going to change up the `asyncFetch` function to take this new logic into account:
```javascript
// /src/connectivity/async-fetch.js

import HttpApiCallError from '../errors/HttpApiCallError';

export default async function asyncFetch(url, requestConfig = {}) {
  const response = await fetch(url, requestConfig);

  const isSuccess = response.status >= 200 && response.status < 300;

  if (isSuccess) {
    return response;
  }

  throw new HttpApiCallError(
    response.statusText,
    response.status
  );
}
```
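The `HttpApiCallError` class itself is covered in the video rather than shown here. For reference, a minimal sketch of such an error class might look like this (my approximation, not necessarily the video's exact code):

```javascript
// /src/errors/HttpApiCallError.js - a minimal sketch (approximation)
class HttpApiCallError extends Error {
  constructor(statusText, status) {
    super(statusText);
    this.name = 'HttpApiCallError';
    this.status = status;
  }
}

module.exports = HttpApiCallError;
```

Extending `Error` keeps the stack trace and lets `toThrow('Bad Request')` match on the message, while the extra `status` property preserves the numeric code for callers.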
There's no new code here, just re-arranging code we've already created. See the video for a more in-depth look into this change.
At this point, our test suite should still be passing. We can be confident we haven't broken the way `asyncFetch` was working before we moved code around.

Now, let's try to write our second test - `handles errors`:
```javascript
// /__tests__/connectivity/async-fetch.js

const fetchMock = require('fetch-mock');

import asyncFetch from '../../src/connectivity/async-fetch';

describe('asyncFetch', () => {
  it('handles errors', async () => {
    fetchMock.get('http://bad.url', {
      status: 400,
      body: JSON.stringify("bad data")
    });

    const response = await asyncFetch('http://bad.url');
    const result = await response.json();

    expect(result).toThrow();
  });
});
```
Hmmm.
Running this shows our test failed, and on the console we get:

Bad Request

This maps to the `400` status code we are passing into `fetchMock`, and we can double-check this by changing `400` to `401`, which in turn shows `Unauthorized` on the console.
Testing that your functions `throw` in JavaScript is a mind-bender, in my experience. The solution to this problem whenever I did this in Angular-land was to wrap the function call in an anonymous function, which when resolved would correctly trigger the `throw`, satisfying the `toThrow` assertion.

Unfortunately it's still not easy to do this in Jest. But there is a solution that gets out of your way. I found this via a GitHub ticket for the Jest project.
First, we will create a new file in the root of our project called `setupJest.js`.
Into this file we use the snippet from Louis Remi:
```javascript
// /setupJest.js

const syncify = async (fn) => {
  try {
    const result = await fn();
    return () => { return result; };
  } catch (e) {
    return () => { throw e; };
  }
};

// make the helper available in every test file as `helpers.syncify`
global.helpers = {
  syncify
};
```
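One thing worth noting: Jest won't pick up `setupJest.js` automatically - it needs registering via the `setupFiles` option in your Jest configuration. Assuming your configuration lives in `package.json` (adjust the path if yours differs):

```json
{
  "jest": {
    "setupFiles": [
      "./setupJest.js"
    ]
  }
}
```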
`syncify` expects to be given a function, which it will call for us, and it either returns a function which returns the outcome, or catches the exception and returns a function which re-throws it.

Note that `syncify` is an `async` function, so we will need to `await` its outcome in our tests.
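To make that behaviour concrete, here is a standalone sketch of `syncify` at work outside of Jest - plain Node, no test runner required:

```javascript
// Standalone demonstration of syncify's two outcomes.
const syncify = async (fn) => {
  try {
    const result = await fn();
    return () => { return result; };
  } catch (e) {
    return () => { throw e; };
  }
};

async function demo() {
  // A resolving function: the returned function yields the value.
  const ok = await syncify(async () => 'hello');
  console.log(ok()); // 'hello'

  // A rejecting function: the returned function re-throws synchronously,
  // which is exactly what an assertion like toThrow needs.
  const bad = await syncify(async () => { throw new Error('boom'); });
  try {
    bad();
  } catch (e) {
    console.log(e.message); // 'boom'
  }
}

demo();
```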
With this little snippet of helper code, we now need to pass an anonymous `async` function to `syncify` which, when resolved, will return the outcome of our calls to `asyncFetch`.
```javascript
// /__tests__/connectivity/async-fetch.js

const fetchMock = require('fetch-mock');

import asyncFetch from '../../src/connectivity/async-fetch';

describe('asyncFetch', () => {
  it('handles errors', async () => {
    fetchMock.get('http://bad.url', {
      status: 400,
      body: JSON.stringify("bad data")
    });

    const outcome = await helpers.syncify(async () => {
      return await asyncFetch('http://bad.url');
    });

    expect(outcome).toThrow();
  });
});
```
Hopefully this ticket will find a resolution soon, and people much smarter than I will offer the de facto solution to this problem. In the interim, however, this does work.
Ok, so now we have a passing test.
The second test is really just a starting point for customising any display output you may wish to show. As we are no longer calling `response.json()` inside our `asyncFetch` (in order to potentially work with responses other than JSON), we don't gain direct access to any helpful errors returned as part of the JSON body.

At this point all we are going to test is that the status text for a given HTTP response code matches our expectations, e.g. a `400` error says `Bad Request`, or a `401` says `Unauthorized`:
```javascript
// /__tests__/connectivity/async-fetch.js

const fetchMock = require('fetch-mock');

import asyncFetch from '../../src/connectivity/async-fetch';

describe('asyncFetch', () => {
  it('displays a nicer error message if one is provided', async () => {
    fetchMock.get('http://bad.url', {
      status: 400,
      body: JSON.stringify("bad data")
    });

    const outcome = await helpers.syncify(async () => {
      return await asyncFetch('http://bad.url');
    });

    expect(outcome).toThrow('Bad Request');
  });
});
```
Again, use this as a starting point for displaying nicer errors - maybe adding in a middleware to 'convert' the message from developer-speak, to user-speak.
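For instance, one approach to 'converting' the message is a small lookup from raw status text to user-facing copy. Everything here - the table, the function name, the wording - is invented for illustration:

```javascript
// Hypothetical message 'middleware': map developer-facing statusText
// to user-facing copy, falling back to a generic message.
const friendlyMessages = {
  'Bad Request': 'Something was wrong with the data we sent. Please try again.',
  'Unauthorized': 'Please log in to continue.',
};

function toUserMessage(error) {
  return friendlyMessages[error.message]
    || 'Something went wrong. Please try again later.';
}

// Example: wrapping an HttpApiCallError-style error
console.log(toUserMessage(new Error('Unauthorized'))); // 'Please log in to continue.'
```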
Cleaning Up
One last thing to cover is in tidying up after each test has run.
An issue I ran into is that `fetchMock` keeps track of each request throughout the test suite. That is to say that if you have multiple tests doing multiple calls, the outcomes of a previous test may impact the current test in unexpected ways.

This isn't an issue during this video, but to stop any potential recurrence of this problem, I now follow a simple practice of cleaning up `afterEach` test run:
```javascript
// /__tests__/connectivity/async-fetch.js

describe('asyncFetch', () => {
  afterEach(() => {
    fetchMock.restore();
  });

  // ... tests as before
});
```
Again, you don't need to do this for the purposes of this video, but it may save you a head-scratcher of a problem for the sake of a little copy / paste boilerplate in each test file that mocks calls to `fetch`.