phantomjs is spooking our errors

MetaBroadcast loves data of all types, not just the meta variety. We like numbers and graphs that show our code is working and, more importantly, when it’s not. As part of this knowledge collection, we want to understand in more detail how changes we make to our JS widgets affect the client pages that embed them.

Originally we relied on our staging environment and internal client-specific HTML pages that embed the widgets. This is a reasonable way to make sure our widgets work as we expect in isolation, before we deploy to production. For a while this most basic of ‘eyeballing’, plus a few small sanity checks and cross-browser screenshot testing, was our primary way of ensuring things were ready to release. It gave us confidence that our stuff worked, but it didn’t flag up any JS conflicts that could occur, and it didn’t let us pre-test our staging code in the environment it would be living in.

a phantomjs suddenly appears

PhantomJS is a fantastic tool that we’re using to automate some simple high-level DOM checking and JS error checking, executed as the first step of our production release build process. We simply take a list of client URLs that we know have our widgets embedded, and check the current live version of each page, followed by the same page with a staging version of our code injected.
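
Stripped right down, the check looks something like the sketch below. It’s a simplification rather than our actual script: the hard-coded URL list and the three-second settle delay are stand-ins for illustration.

// a simplified sketch: open each URL in turn, collect any uncaught JS
// errors, and report them all once every page has been visited
var webpage = require('webpage');

// hypothetical hard-coded list; the real script reads its client URLs from a file
var urls = ['http://url1', 'http://url1?fromStage=true'];
var allErrors = [];

function check(index) {
  if (index >= urls.length) { return report(); }
  var url = urls[index];
  var errors = [];
  var page = webpage.create();

  // phantomjs fires this for every uncaught JS error on the page
  page.onError = function (msg, trace) {
    var where = trace && trace.length
      ? '\n    -> ' + trace[0].file + ': ' + trace[0].line
      : '';
    errors.push(url + ': ' + msg + where);
  };

  page.open(url, function () {
    // give asynchronous widget code a moment to run before judging the page
    setTimeout(function () {
      console.log('===========\nTesting: ' + url + '\n-----------');
      console.log(errors.length
        ? '  ✘ ' + errors.length + (errors.length > 1 ? ' errors' : ' error')
        : '  ✔ all good');
      allErrors = allErrors.concat(errors);
      page.close();
      check(index + 1);
    }, 3000);
  });
}

function report() {
  if (allErrors.length) {
    console.log('===========\nALL ERRORS:');
    allErrors.forEach(function (e) { console.log('  ' + e); });
  }
  phantom.exit(allErrors.length ? 1 : 0);
}

check(0);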

The script prints something like the following:

===========
Testing: http://url1
-----------
  ✔ all good

===========
Testing: http://url1?fromStage=true
-----------
  ✘ 1 error

===========
ALL ERRORS:
  http://url1?fromStage=true: TypeError: 'undefined' is not a function (evaluating 'jumpAround()')
    -> http://widgets.metabroadcast.com/JumpUpJumpUpAndGetDown.js: 5

The example above gives us visibility that, if we were to go ahead and release, there would be an error that may or may not affect the functionality of our clients’ live pages, and is therefore something we need to investigate.

better errors

One of the first things we realised is that, under the hood, a lot of the websites we visit on a regular basis have JS errors. These errors don’t stop the sites from working, but they do make a dumb tool like the one above stumble and report that http://url1 is already broken. While that’s important information to see, we want to focus on how our work affects a client, so the first improvement to make is simply finding the differences between the errors for http://url1 and http://url1?fromStage=true:

===========
Testing: http://url1 against http://url1?fromStage=true
-----------
ERRORS:
  http://url1?fromStage=true: TypeError: 'undefined' is not a function (evaluating 'jumpAround()')
    -> http://widgets.metabroadcast.com/JumpUpJumpUpAndGetDown.js: 5

-----------
✘ CHECK FAILED
-----------
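
The diff step itself can stay very simple. Below is a minimal sketch with hypothetical sample data, assuming each run’s errors have been collected as plain strings with the URL prefix stripped off, so the live and staged lists are directly comparable.

// a minimal sketch of the diff: an error only fails the check if the
// live page doesn't already produce it. the sample data is made up
var liveErrors = [
  "ReferenceError: Can't find variable: someAdVariable"
];
var stageErrors = [
  "ReferenceError: Can't find variable: someAdVariable",
  "TypeError: 'undefined' is not a function (evaluating 'jumpAround()')"
];

var introduced = stageErrors.filter(function (err) {
  return liveErrors.indexOf(err) === -1;
});

introduced.forEach(function (err) { console.log('  ' + err); });
console.log(introduced.length ? '✘ CHECK FAILED' : '✔ all good');
phantom.exit(introduced.length ? 1 : 0);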

This reduces the amount of irrelevant data and also doesn’t fail a build if our clients’ pages already have errors. It isn’t a complete solution, though: imagine a website whose ads come from different ad networks on each load, each contributing different errors that show up in the diff and break things. There are many obstacles like these that we need to overcome, but the end goal is always to understand how our code works, and how it affects the people that use it.

If you’ve had any experience with testing embeddable JS, I’d love to hear how you’ve tackled some of these issues, so please get in touch!
