Side Step For Benchmarking

Written by Hunter Jansen on July 24, 2016

Even though I’ve been actively working on the Angular 2 library that I mentioned in the last post, I decided to take a quick side step to satisfy my curiosity about the performance of a few of the main ways of iterating over an array of stuff and filtering it into a new array. So I decided to write some benchmarking tests. This post will detail how I did it, what I tested, and what my findings were.

This all came about largely because I’d been writing some Angular pipes in which I was chaining .filter and .reduce, but I kept switching back and forth between .filter().reduce() and just .reduce(). So I started to wonder which might perform better. I brought the two code snippets to a coworker and we talked a little bit about it, both of us having no idea which would perform better, or even which we preferred syntactically.

.filter().reduce()

$scope.users = data.users.filter(function (user) {
    return user.type === 'PRO_USER';
}).reduce(function (temp, user) {
    user.fullName = user.first_name + ' ' + user.last_name;
    temp.push(user);
    return temp;
}, []);

.reduce()

$scope.users = data.users.reduce(function (temp, user) {
    if (user.type === 'PRO_USER') {
        user.fullName = user.first_name + ' ' + user.last_name;
        temp.push(user);
    }
    return temp;
}, []);

Both perform the exact same action: the users array ends up with the same value, and the end results of both are identical. Now, before I continue, I should say that I know the more common pattern is .filter().map(); for some reason I tend to go with reduce, but the benchmarking covered in my tests will also include .filter().map().
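For completeness, here’s what that same pipe logic looks like as .filter().map() (the data object below is just a made-up stand-in for illustration):

```javascript
// Hypothetical sample data, for illustration only
var data = {
    users: [
        { first_name: 'Ada', last_name: 'Lovelace', type: 'PRO_USER' },
        { first_name: 'Alan', last_name: 'Turing', type: 'FREE_USER' }
    ]
};

// Same result as the two snippets above, via .filter().map()
var users = data.users.filter(function (user) {
    return user.type === 'PRO_USER';
}).map(function (user) {
    user.fullName = user.first_name + ' ' + user.last_name;
    return user;
});
```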

I found it really interesting that we have these two ways of looping over and filtering arrays on top of our typical for/forEach loops, so I figured I’d write up a simple test case and take a pretty naive stab at benchmarking the performance of each approach. The rest of this post will talk about how I went about that and what my general results were. It will likely be pretty long, so if you’d rather just get to the results, click here.

First things first

You can find the current version of the source code on my GitHub here. It’s likely to go through a couple of iterations and facelifts and whatnot before the end of it, but the functionality will remain the same. You can also run these same tests in your browser via hyperwidget. Right now it’s only been tested in Chrome and Firefox, and I’d advise you not to run it on your mobile, since it runs a ton of times.

The Constants

To start off each testing session, I create 10,000 objects that are based on the following template:

const TEMPLATEOBJECT = { id: 0, contents: [1, 2, 3, 4], included: false };

I then iterate 10k times, assigning an incremented id and randomly flipping the included boolean, and store the resulting objects in an array that gets used by all of the actions performed below.

for (let index = 0; index < OBJECTAMOUNT; index++) {
    objectsArray = [...objectsArray,
        Object.assign({}, TEMPLATEOBJECT, { id: index, included: Math.round(Math.random()) === 1 })
    ];
}
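As an aside: spreading into a brand-new array on every iteration copies the whole array each pass, which is quadratic overall. A plain push does the same job in linear time. A minimal sketch, assuming the same OBJECTAMOUNT and TEMPLATEOBJECT constants as above:

```javascript
const OBJECTAMOUNT = 10000;
const TEMPLATEOBJECT = { id: 0, contents: [1, 2, 3, 4], included: false };

let objectsArray = [];
for (let index = 0; index < OBJECTAMOUNT; index++) {
    // push mutates the array in place instead of copying it each pass
    objectsArray.push(
        Object.assign({}, TEMPLATEOBJECT, { id: index, included: Math.round(Math.random()) === 1 })
    );
}
```

For 10k one-time setup objects it doesn’t matter much either way, which is why the spread version stays in the actual tests.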

Getting the timing right

The next step was to accurately time how long each action was taking. I did this using performance.now(), taking a timestamp before my actions and comparing it to performance.now() after they’re done:

let start = performance.now();
// perform actions here
let totalTime = performance.now() - start;

Since I knew that I’d be running each of these actions tens of thousands of times, I needed the average run time across all of the tests, so I just stored each timing in an array; to get the average I simply summed them together and divided by the number of tests run.

for (let index = 0; index < TESTAMOUNT; index++) {
    let start = performance.now();
    // perform actions here
    times.push(getTimeDifference(start));
}
for (let index = 0; index < times.length; index++) {
    sum += times[index];
}

let result = " took an average of: " + sum / times.length + "ms";

I know that there are cleaner ways to do this, but I wanted to persist the data and keep the code as simple as possible so as not to mess anything up by overcomplicating it.
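For the record, the “cleaner way” I have in mind is just a reduce over the times array (shown here with a few made-up example timings):

```javascript
let times = [1.2, 0.8, 1.0]; // example timings in ms

// Sum with reduce, then divide by the actual number of samples collected
let average = times.reduce(function (sum, time) {
    return sum + time;
}, 0) / times.length;
```

Dividing by times.length rather than a separate constant also means the average stays correct even if the number of collected samples ever drifts from the intended test count.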

Performing actions

So, obviously, in order to compare these approaches I have to actually perform the actions: y’know, do the thing.

Essentially, each action takes the original array and ends up with a new array containing only the objects whose included value is true. There’s no fancy-pants coding here; just a bunch of standard ways of doing the same thing.

Filter Reduce

var filteredObjects = objectsArray.filter(function (item) {
    return item.included;
}).reduce(function (retArr, item) {
    item.randoVal = Math.random();
    retArr.push(item);
    return retArr;
}, []);

Filter Map

var filterMappedObjects = objectsArray.filter(function (item) {
    return item.included;
}).map(function (item) {
    return Object.assign(item, { randoVal: Math.random() });
});

Reducing

let reducedObjects = objectsArray.reduce(function (temp, item) {
    if (item.included) {
        item.randoVal = Math.random();
        temp.push(item);
    }
    return temp;
}, []);

ForEach

let forEachObjects = [];
objectsArray.forEach(function (item) {
    if (item.included) {
        item.randoVal = Math.random();
        forEachObjects.push(item);
    }
});

For

let forObjects = [];
for (var i = 0; i < objectsArray.length; i++) {
    if (objectsArray[i].included) {
        objectsArray[i].randoVal = Math.random();
        forObjects.push(objectsArray[i]);
    }
}

Results

On Chrome, first

Filter reducing took an average of: 3.4344950000000227ms 
Filter mapping took an average of: 6.663419999999976ms 
Reduce Filtering took an average of: 0.3388449999999757ms 
ForEach Filtering took an average of: 0.3241200000000572ms 
For Filtering took an average of: 0.3701200000000026ms 

Originally, I was actually very shocked at the results. I was sure that there would be all sorts of mumbo jumbo happening behind the scenes that would make filter/map/reduce the optimal way to go; besides, why would they introduce something that can easily be done with a standard for/forEach? But in the end, I guess it makes some sort of sense: behind the scenes everything probably gets broken down into some kind of for loop after all. So we may as well just continue using for/forEach loops, and if you want to take advantage of a fancier/cleaner syntax, just use reduce instead of filter().reduce()/filter().map().
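To illustrate that hunch, a naive reduce can be written as nothing more than a for loop. This is a deliberately simplified sketch, not what any real engine does (it ignores sparse arrays and the no-initial-value overload that the actual Array.prototype.reduce supports):

```javascript
// Naive reduce: just a for loop threading an accumulator through a callback
function naiveReduce(array, callback, initialValue) {
    let accumulator = initialValue;
    for (let i = 0; i < array.length; i++) {
        accumulator = callback(accumulator, array[i], i, array);
    }
    return accumulator;
}
```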

Now, Firefox

Filter reducing took an average of: 0.3353450000000339ms
Filter mapping took an average of: 1.6847650000000285ms
Reduce Filtering took an average of: 0.3352049999999872ms
ForEach Filtering took an average of: 0.24079500000004556ms
For Filtering took an average of: 0.25638500000000203ms 

Wait a second…

On Firefox, filter().reduce() and plain reduce() are pretty much on par with for/forEach, with filter().map() being slower, but not nearly as slow as on Chrome. I found this even MORE surprising. I’ve faced speed issues on Firefox before (primarily when dealing with a ton of watchers in Angular), and as such I was expecting that running these tests on FF would cause my computer to catch on fire. I was wrooooong.

Not only did FF perform all these tests way faster, but the difference between filter/map and for/forEach is pretty much negligible. Dannng.

My Takeaways

  • If I’m going to filter/iterate over things, I’ll be using either reduce or forEach, until further tests tell me otherwise
  • I had a lot of fun doing this
  • Firefox has one heck of an engine, and maybe I should use it a little more (instead of doing everything mainly in Chrome)
  • I really need to make some crazy smart dev friends to explain to me WHY these results are the way they are

Either way, for now I’ll take the results as they are and put what I’ve learned from this experiment to use in my daily life. I’ll likely update the demo app a little bit so that it’s not nearly so ugly, or to incorporate any feedback, but as it stands, I think I’m largely done with this experiment.

Got any feedback?

Hit me up on twitter (link in the footer), make an issue on the repo (further up in the post), or drop me a line at hunter@hyperwidget.com

WHOA, that was long. Until next time, when I’ll get back to this multiselect business! -Hunter