Comparison of several sorting algorithms: Integers

(Main test page)

The test was performed by generating arrays of 100, 5000, 100000 and 1 million random integers. The values of these integers were between 0 and 10 times the size of the array (so there's minimal repetition). The different input distributions are explained on the main page.
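In code, such an array might be generated like this (a sketch; the function name, seed and generator choice are my own assumptions, not the actual test code):

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Build an array of 'size' random integers in [0, 10*size], so that
// duplicates are rare. The fixed default seed makes runs repeatable,
// which matters when the same "random" array must be re-created for
// each algorithm being compared.
std::vector<int> makeRandomArray(std::size_t size, unsigned seed = 12345)
{
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> dist(0, static_cast<int>(10 * size));
    std::vector<int> a(size);
    for (int& v : a) v = dist(rng);
    return a;
}
```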

Integers are very fast to compare and to copy, so this testcase measures what could perhaps be called "raw sorting speed".

Each bar in the charts shows the average time, in milliseconds, that the sorting algorithm took to sort the array once. (The time used to generate the contents of the array was subtracted from the total time, as described on the main page.) Thus lower bars are better.
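As a sketch of that measurement (the helper name and structure are my own assumptions, not the actual test harness): time N rounds of (refill + sort), time N rounds of refilling alone, and report the difference.

```cpp
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <random>
#include <vector>

// Average milliseconds per sort: the time spent regenerating the array
// contents is measured separately and subtracted. (The result can be
// slightly negative for very small workloads due to timer noise.)
template <typename Sorter>
double averageSortMs(Sorter sort, std::size_t size, int rounds)
{
    using Clock = std::chrono::steady_clock;
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dist(0, static_cast<int>(10 * size));
    std::vector<int> array(size);
    auto refill = [&] { for (int& v : array) v = dist(rng); };

    auto t0 = Clock::now();
    for (int i = 0; i < rounds; ++i) { refill(); sort(array); }
    auto t1 = Clock::now();
    for (int i = 0; i < rounds; ++i) refill();
    auto t2 = Clock::now();

    std::chrono::duration<double, std::milli> withSort = t1 - t0;
    std::chrono::duration<double, std::milli> fillOnly = t2 - t1;
    return (withSort - fillOnly).count() / rounds;
}
```

For example, `averageSortMs([](std::vector<int>& v){ std::sort(v.begin(), v.end()); }, 100000, 10)` would estimate the per-sort time of std::sort on 100000 integers.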

100 items

100 integers

In this case the "random end" testcase was essentially identical to the totally random testcase: the last 256 items of the array are random, and since the array only has 100 items, the entire array was random. This is why both bars are (approximately) equal for all algorithms.

Insertion sort, an O(n²) algorithm, compares very well with the "faster" algorithms when the array is this small. As the chart shows, when the array is almost sorted, insertion sort is tremendously faster than anything else. It was not tested with the larger arrays, though, because it slows down very quickly as the array size grows.

         Random          Almost sorted   Almost rev.     Random end
Alg.     Comp.   Assig.  Comp.   Assig.  Comp.   Assig.  Comp.   Assig.
Insert.   2567    2670     418     517    4704    4827    2567    2670
Shell      767    1076     463     747     731    1092     767    1076
Heap      1027    1743    1069    1882     957    1558    1027    1743
Merge      558     800     470     800     424     800     558     800
Quick     1062     475    2809     241    2053     335    1062     475
Quick2     783     724     691     393     599     487     783     724

The effect of insertion sort being an O(n²) algorithm can be seen in the number of comparisons and assignments it performs even for such a small array. The reason it still manages to be faster than most of the others here is that its inner loop is extremely simple and thus has almost no overhead. Naturally it rapidly loses this advantage once the array grows any larger than this.
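For reference, a typical insertion sort looks like this (a standard textbook version, not necessarily the exact one tested); the inner loop below is the "almost no overhead" loop referred to above, and on an almost sorted input it exits almost immediately:

```cpp
#include <cstddef>
#include <vector>

// Straight insertion sort: each element is shifted left until it reaches
// its place. The inner loop is one comparison and one assignment per step,
// which is why the constant factor is so small despite O(n^2) growth.
void insertionSort(std::vector<int>& a)
{
    for (std::size_t i = 1; i < a.size(); ++i) {
        int value = a[i];
        std::size_t j = i;
        while (j > 0 && value < a[j - 1]) {  // one comparison...
            a[j] = a[j - 1];                 // ...one assignment (shift right)
            --j;
        }
        a[j] = value;
    }
}
```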

5000 items

5000 integers

The "random end" testcase starts to show its pathological nature for the vanilla quicksort already at 5000 integers.

        Random           Almost sorted    Almost rev.      Random end
Alg.    Comp.    Assig.  Comp.    Assig.  Comp.    Assig.  Comp.    Assig.
Shell   101319  136506    76299  109854    81419  118739    67994  101395
Heap    107688  171318   109588  177608   103758  161272   111986  181244
Merge    56823   70000    51726   70000    51449   70000    34611   70000
Quick    95321   42888   332909   22059   215474   31191  2453212    7143
Quick2   74318   56365    81282   32109    81216   38250   130988   33501

100000 items

100000 integers

The pathological case for the vanilla quicksort was skipped here because its time was several times larger than all the others (and would have rendered the chart almost useless because of the scale difference). The optimized quicksort starts showing problems with this case too; I can't find a rational explanation for it.

        Random             Almost sorted      Almost rev.        Random end
Alg.    Comp.     Assig.   Comp.     Assig.   Comp.     Assig.   Comp.     Assig.
Shell   3889355  4885915   2888522  3848833   2923329  3950839   1644842  2600599
Heap    3019602  4724840   3025466  4793870   2937598  4518492   3132335    70762
Merge   1566518  1800000   1464833  1800000   1463535  1800000    879336  1800000
Quick   2547886  1153959   9731476   574017   5967669   849518      --       --
Quick2  2023167  1432955   2153949   770245   2185309   910995   2707046   554394

1 million items

1 million integers

The (inexplicably) pathological case for the optimized quicksort was also skipped here, because its time was several times larger than any of the others.

        Random               Almost sorted        Almost rev.          Random end
Alg.    Comp.      Assig.    Comp.      Assig.    Comp.      Assig.    Comp.      Assig.
Shell   65103669   9993926   47132297  58777435   47029110  59435171   18462731   5661000
Heap    36793778  57144949   36739278  57652356   36018473  55146281   13463210  10746174
Merge   18715987  20000000   17695109  20000000   17694108  20000000   10097147  20000000
Quick   30487732  13811163   48733767   6710516   69814521  10289213      --         --
Quick2  24228590  16697372   25937307   8782867   27003925  10274957      --         --

The advantage of the optimized quicksort over the vanilla quicksort becomes clear when comparing the number of comparisons they perform. Merge sort seems to excel in this category too.


Shell sort and heap sort don't show much of a difference, even though the former may be slightly slower on average with very large arrays. Both seem to sort almost-sorted arrays faster.

Merge sort turned out to be very fast, not losing to its "faster" competitors. Part of this speed, however, may be explained by the slight optimization used in this test (i.e. the function does not need to allocate the secondary array each time it is called). It is definitely a strong option if the extra memory requirement is not a problem.
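That optimization can be sketched as follows (class and method names are my own, not the tested code): the merge buffer is allocated once per sort and shared by the whole recursion, instead of being allocated by every merge step.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Top-down merge sort with a single shared merge buffer.
class MergeSorter {
public:
    void sort(std::vector<int>& a)
    {
        buffer_.resize(a.size());   // one allocation per sort, reused below
        mergeSort(a, 0, a.size());
    }

private:
    std::vector<int> buffer_;

    void mergeSort(std::vector<int>& a, std::size_t lo, std::size_t hi)
    {
        if (hi - lo < 2) return;
        std::size_t mid = lo + (hi - lo) / 2;
        mergeSort(a, lo, mid);
        mergeSort(a, mid, hi);

        // Merge a[lo,mid) and a[mid,hi) through the shared buffer.
        std::merge(a.begin() + lo, a.begin() + mid,
                   a.begin() + mid, a.begin() + hi,
                   buffer_.begin() + lo);
        std::copy(buffer_.begin() + lo, buffer_.begin() + hi, a.begin() + lo);
    }
};
```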

Quicksort showed its untrustworthiness: the vanilla version expectedly, the optimized version quite unexpectedly (I can't figure out why). This demonstrates that making a good implementation of quicksort is far from easy. The std::sort function demonstrates that it is possible, though.
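What the "optimized" quicksort actually does is not shown in this article; as a hedged sketch, two common hardening techniques are a median-of-three pivot and finishing small partitions with insertion sort:

```cpp
#include <algorithm>
#include <vector>

namespace sketch {

using Iter = std::vector<int>::iterator;

// Hardened quicksort sketch (assumed techniques, not the tested code):
// median-of-three pivot, three-way partitioning via std::partition,
// recursion into the smaller half only, insertion sort below 16 elements.
void quickSort(Iter first, Iter last)
{
    while (last - first > 16) {
        int a = *first, b = *(first + (last - first) / 2), c = *(last - 1);
        int pivot = std::max(std::min(a, b), std::min(std::max(a, b), c));

        // [first,left) < pivot, [left,right) == pivot, [right,last) > pivot.
        Iter left  = std::partition(first, last, [pivot](int x) { return x < pivot; });
        Iter right = std::partition(left,  last, [pivot](int x) { return x <= pivot; });

        // Recurse into the smaller side, iterate on the larger one; this
        // bounds the stack depth to O(log n).
        if (left - first < last - right) { quickSort(first, left); first = right; }
        else                             { quickSort(right, last); last  = left;  }
    }

    // Finish small ranges with insertion sort.
    if (last - first < 2) return;
    for (Iter it = first + 1; it != last; ++it) {
        int v = *it;
        Iter j = it;
        while (j != first && v < *(j - 1)) { *j = *(j - 1); --j; }
        *j = v;
    }
}

void quickSort(std::vector<int>& a) { quickSort(a.begin(), a.end()); }

} // namespace sketch
```

Even with these tweaks adversarial inputs can remain, which matches the behavior observed above.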

It is interesting to note that the std::sort results mimic quite faithfully those of the optimized quicksort (which is to be expected, since it uses a similar algorithm). In the pathological case, either an alternative algorithm kicked in inside std::sort, or there may be some kind of bug in my implementation.

Another interesting detail is that quicksort performs considerably fewer assignments than comparisons, unlike all the other algorithms. Merge sort performs even fewer comparisons, but considerably more assignments (although, due to the nature of the algorithm, it always performs the same number of assignments for a given array size, which in some cases may be a good thing).
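Counts like those in the tables can be gathered by instrumenting the element type (an assumed method; the actual harness is not shown): wrap the integer so that every comparison and every assignment bumps a global counter.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Integer wrapper that counts comparisons and assignments. Sorting a
// std::vector<Counted> and reading the counters afterwards reveals how
// many of each operation the sort performed.
struct Counted {
    static std::size_t comparisons;
    static std::size_t assignments;

    int value = 0;

    Counted() = default;
    Counted(int v) : value(v) {}
    Counted(const Counted& other) : value(other.value) { ++assignments; }
    Counted& operator=(const Counted& other)
    {
        value = other.value;
        ++assignments;
        return *this;
    }
    bool operator<(const Counted& other) const
    {
        ++comparisons;
        return value < other.value;
    }
};

std::size_t Counted::comparisons = 0;
std::size_t Counted::assignments = 0;
```

Resetting both counters before each run, then calling e.g. `std::sort` on a `std::vector<Counted>`, yields one row of such a table.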

Main page - Next