There’s something super irritating to me about seeing a Jest test suite run and a bunch of noisy console logs get dumped out, mucking up the results.
Here’s an example:
PASS src/extract-property-data-script-property.test.ts
● Console
console.error
Failed to extract property data script property SyntaxError: Expected property name or '}' in JSON at position 2
at JSON.parse (<anonymous>)
at extractPropertyDataScriptProperty (/home/chris/Development/property-scraper/src/extract-property-data-script-property.ts:20:28)
at Object.<anonymous> (/home/chris/Development/property-scraper/src/extract-property-data-script-property.test.ts:60:53)
at Promise.then.completed (/home/chris/Development/property-scraper/node_modules/jest-circus/build/utils.js:298:28)
at new Promise (<anonymous>)
at callAsyncCircusFn (/home/chris/Development/property-scraper/node_modules/jest-circus/build/utils.js:231:10)
at _callCircusTest (/home/chris/Development/property-scraper/node_modules/jest-circus/build/run.js:316:40)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at _runTest (/home/chris/Development/property-scraper/node_modules/jest-circus/build/run.js:252:3)
at _runTestsForDescribeBlock (/home/chris/Development/property-scraper/node_modules/jest-circus/build/run.js:126:9)
at _runTestsForDescribeBlock (/home/chris/Development/property-scraper/node_modules/jest-circus/build/run.js:121:9)
at run (/home/chris/Development/property-scraper/node_modules/jest-circus/build/run.js:71:3)
at runAndTransformResultsToJestFormat (/home/chris/Development/property-scraper/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapterInit.js:122:21)
at jestAdapter (/home/chris/Development/property-scraper/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:79:19)
at runTestInternal (/home/chris/Development/property-scraper/node_modules/jest-runner/build/runTest.js:367:16)
at runTest (/home/chris/Development/property-scraper/node_modules/jest-runner/build/runTest.js:444:34)
at Object.worker (/home/chris/Development/property-scraper/node_modules/jest-runner/build/testWorker.js:106:12)
20 | return jsonData ? JSON.parse(jsonData) : {};
21 | } catch (e) {
> 22 | console.error("Failed to extract property data script property", e);
| ^
23 | return null;
24 | }
25 | };
at extractPropertyDataScriptProperty (src/extract-property-data-script-property.ts:22:13)
at Object.<anonymous> (src/extract-property-data-script-property.test.ts:60:53)
FAIL src/extract.test.ts
● extract › to let › should extract the expected result
TypeError: (0 , property_type_to_enum_1.default) is not a function
30 | return {
31 | listing: LISTING_TYPE.RESIDENTIAL,
> 32 | property_type: propertyTypeToEnum(property.type),
| ^
33 | bedrooms: property.bedrooms,
34 | bathrooms: property.bathrooms,
35 | receptionRooms: property.receptionRooms,
at extract (src/extract.ts:32:38)
at src/extract.test.ts:52:21
at step (src/extract.test.ts:33:23)
at Object.next (src/extract.test.ts:14:53)
at fulfilled (src/extract.test.ts:5:58)
Test Suites: 1 failed, 1 passed, 2 total
Tests: 1 failed, 1 skipped, 6 passed, 8 total
Snapshots: 0 total
Time: 1.647 s, estimated 2 s
Ran all test suites.
Watch Usage: Press w to show more.
At a glance, I find it’s pretty hard to know which of those tests is problematic. Even with the coloured console output, the big expected error gets in the way of the failing test. Multiply that by tens or hundreds of tests, and it’s enough to make a grown man cry.
Now, it doesn’t have to be this way.
Let’s quickly take a look at an example implementation:
import { JSDOM } from "jsdom";
import { PropertyListing } from "./types";

export const extractPropertyDataScriptProperty = (
  html: string,
): PropertyListing | null => {
  try {
    // some logic here that pulls the raw propertyData JSON string (jsonData) out of the html
    return jsonData ? JSON.parse(jsonData) : {};
  } catch (e) {
    console.error("Failed to extract property data script property", e);
    return null;
  }
};

export default extractPropertyDataScriptProperty;
The guts of the code here aren’t that important.
All that matters is that the code runs, it can go wrong, and if it does we can potentially throw an error.
For clarity, the error would be thrown by JSON.parse being given some invalid JSON. But this could be anything, including throw‘ing your own Error.
Inside the catch block, I want to know what the heck went wrong. That’s happening via console.error(...).
All is fine here. This code works well enough.
The problem shows up in the test suite output. When a test deliberately feeds in bad input, the code behaves exactly as it would when things really go wrong, so it logs the error out, as we can see above. Great for the real world, not so great for our tests.
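For reference, here’s a minimal sketch of the kind of test that triggers all that noise (the real test in the suite may differ slightly):

import extractPropertyDataScriptProperty from "./extract-property-data-script-property";

// A deliberately invalid propertyData payload: JSON.parse throws,
// and the catch block dutifully logs the error into the test output.
test("handles invalid JSON in propertyData", () => {
  const html = '<script>var propertyData = { invalid: "json" };</script>';

  expect(extractPropertyDataScriptProperty(html)).toBeNull();
});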
Possible Solutions
One possible solution is to simply not write tests.
God knows I have encountered enough of these sorts of people in my many years doing software dev. And I’m yet to understand their reasoning.
So that solution stinks. Let’s quickly move on.
Another possible solution is not to use console.error. Just return null and lose the extra information. Solves the problem, but you won’t thank yourself when things go wrong later down the line, and you have no idea where, or why.
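In code terms, that option would look something like this (the same function as above, sketched with the logging stripped out):

export const extractPropertyDataScriptProperty = (
  html: string,
): PropertyListing | null => {
  try {
    // some logic here that produces jsonData, as before
    return jsonData ? JSON.parse(jsonData) : {};
  } catch {
    // no logging at all: quiet tests, but no clue where or why it failed later on
    return null;
  }
};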
Running --silent is a possible approach.
I tend to run my tests in watch mode, so something like this:
npm run test -- --watchAll
# becomes
npm run test -- --watchAll --silent
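To my knowledge, the same behaviour can also be baked into your Jest config rather than passed as a flag, if you prefer it permanent (assuming a jest.config.ts; a jest.config.js works the same way):

import type { Config } from "jest";

const config: Config = {
  // prevent tests from printing messages through the console, equivalent to --silent
  silent: true,
};

export default config;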
That is the first solution I think is feasible.
However, it is a blanket solution that stops Jest from printing any messages to the terminal. Maybe that’s not ideal.
My Preferred Solution
Personally, I prefer to use a Spy to provide a void function to overwrite the console.whatever logging methods.
Here’s how we can do this:
test("handles invalid JSON in propertyData", () => {
jest.spyOn(console, "error").mockImplementation(() => {});
const html = '<script>var propertyData = { invalid: "json" };</script>';
const result = extractPropertyDataScriptProperty(html);
expect(result).toBeNull();
});
That’s enough to ‘solve’ this problem.
We can now control, on a per-test basis, exactly which tests we want console logs to be output for. Very granular. Very lovely.
But wait! There’s more!
Now that we have a spy, we can validate that the console.error function was called:
test("handles invalid JSON in propertyData", () => {
const consoleErrorSpy = jest
.spyOn(console, "error")
.mockImplementation(() => {});
const html = '<script>var propertyData = { invalid: "json" };</script>';
const result = extractPropertyDataScriptProperty(html);
expect(result).toBeNull();
expect(consoleErrorSpy).toHaveBeenCalled();
});
In order to check back in with the spy during the test, we need to save a reference to it in a variable. Then we can assert that the spy was called.
Pretty nice.
But again, we can go one step further:
test("handles invalid JSON in propertyData", () => {
// Spy on console.error before running the test
const consoleErrorSpy = jest
.spyOn(console, "error")
.mockImplementation(() => {});
const html = '<script>var propertyData = { invalid: "json" };</script>';
const result = extractPropertyDataScriptProperty(html);
expect(result).toBeNull();
expect(consoleErrorSpy).toHaveBeenCalledWith(
"Failed to extract property data script property",
new SyntaxError("Expected property name or '}' in JSON at position 2"),
);
});
To me, that is really appealing.
I care about the error output. It’s my code. I want to know.
If this stops working, I need to know. That’s why I test.
Again, I know people who write tests and think this is totally overkill. I think otherwise, but I respect the right to a different viewpoint.
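One caveat worth flagging: that exact SyntaxError message comes from the JavaScript engine, so it can change between Node versions. If that feels too brittle, a looser version of the same test (a sketch using Jest’s expect.any asymmetric matcher) still asserts the part that matters:

test("handles invalid JSON in propertyData", () => {
  const consoleErrorSpy = jest
    .spyOn(console, "error")
    .mockImplementation(() => {});

  const html = '<script>var propertyData = { invalid: "json" };</script>';
  const result = extractPropertyDataScriptProperty(html);

  expect(result).toBeNull();
  // match any SyntaxError instance, without pinning the engine-specific message text
  expect(consoleErrorSpy).toHaveBeenCalledWith(
    "Failed to extract property data script property",
    expect.any(SyntaxError),
  );
});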
Lastly, if you’re going to be checking in on console logging at various points, or in multiple tests, it’s potentially worth extracting this out to a beforeEach setup:
let consoleErrorSpy: jest.SpyInstance<void>;

beforeEach(() => {
  // Spy on console.error before running the test
  consoleErrorSpy = jest.spyOn(console, "error").mockImplementation(() => {});
});

afterEach(() => {
  // Restore the original console.error implementation after each test has run
  consoleErrorSpy.mockRestore();
});

test("handles invalid JSON in propertyData", () => {
  const html = '<script>var propertyData = { invalid: "json" };</script>';
  const result = extractPropertyDataScriptProperty(html);

  expect(result).toBeNull();
  expect(consoleErrorSpy).toHaveBeenCalledWith(
    "Failed to extract property data script property",
    new SyntaxError("Expected property name or '}' in JSON at position 2"),
  );
});
Basically the same idea, but we then get access to consoleErrorSpy in every / any test that cares about it.
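As an aside, if you find yourself writing that afterEach in lots of test files, Jest can also (to my knowledge) do the restore step for you globally via the restoreMocks config option:

import type { Config } from "jest";

const config: Config = {
  // automatically restore mock state and implementation before every test,
  // making the manual afterEach / mockRestore() unnecessary
  restoreMocks: true,
};

export default config;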
You can, of course, spy on other console functions – log / debug / warn etc. Or any other function in this way. It’s a very useful tool to have in your kit bag.
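For instance, silencing and checking console.warn follows exactly the same shape (a made-up example; the test name and warning text are hypothetical):

test("warns when the listing has no bedrooms", () => {
  // swap "error" for "warn" (or "log", "debug", ...) and the pattern is identical
  const consoleWarnSpy = jest.spyOn(console, "warn").mockImplementation(() => {});

  // ... exercise whatever code is expected to warn ...

  expect(consoleWarnSpy).toHaveBeenCalled();

  consoleWarnSpy.mockRestore();
});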